
Intelligence explosion

The notion of an "intelligence explosion" was first described thus by Good (1965), who speculated on the effects of superhuman machines:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The speculated means of intelligence augmentation are numerous, and include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain-computer interfaces and mind uploading. The existence of multiple paths to an intelligence explosion makes a singularity more likely; for a singularity not to occur, they would all have to fail.[9]

Hanson (1998) is skeptical of human intelligence augmentation, writing that once one has exhausted the "low-hanging fruit" of easy methods for increasing human intelligence, further improvements will become increasingly difficult to find. Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option for organizations trying to advance the singularity.[citation needed]

Whether or not an intelligence explosion occurs depends on three factors.[55] The first, accelerating factor is the new intelligence enhancements made possible by each previous improvement. Conversely, as intelligences become more advanced, further advances will become increasingly difficult to achieve, possibly outweighing the advantage of increased intelligence. Each improvement must be able to beget at least one more improvement, on average, for the singularity to continue. Finally, the laws of physics will eventually prevent any further improvements.
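
As a toy illustration of this condition (not drawn from the cited sources), the argument can be cast as a simple multiplicative model: if each round of improvement makes, on average, r further improvements feasible, the process continues only while r stays at or above one. The short Python sketch below simulates this under an assumed form of diminishing returns; the parameter names and the decay rule are invented purely for illustration.

    # Toy model of the condition above: each improvement enables, on average,
    # r further improvements; rising difficulty eventually pushes r below 1.
    def simulate(initial_r=1.5, decay=0.97, max_rounds=1000):
        r = initial_r
        improvements = 0
        for _ in range(max_rounds):
            if r < 1.0:            # improvements no longer beget further ones
                break
            improvements += 1
            r *= decay             # each advance is harder than the last
        return improvements

    print(simulate())              # stalls after a finite number of rounds
    print(simulate(decay=1.0))     # with no diminishing returns, runs to max_rounds

Whether the real process looks more like the first call or the second is exactly the open question this paragraph describes.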

There are two logically independent, but mutually reinforcing, accelerating effects: increases in the speed of computation, and improvements to the algorithms used.[56] The former is predicted by Moore’s Law and the forecast improvements in hardware,[57] and is broadly comparable to previous technological advances. On the other hand, most AI researchers believe that software is more important than hardware.[citation needed]

Speed improvements
The first of these is improvement to the speed at which minds can be run. Whether human or AI, better hardware increases the rate of future hardware improvements. Oversimplified,[58] Moore's Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months, or 9 external months, the next roughly four external months, then two, and so on towards a speed singularity.[59] An upper limit on speed may eventually be reached, although it is unclear how high this would be. Hawkins (2008), responding to Good, argued that the upper limit is relatively low:

Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity.
If, on the other hand, the upper limit were far above current human levels of intelligence, the effects of the singularity would be great enough to be indistinguishable (to humans) from those of a singularity without any upper limit. For example, if the speed of thought could be increased a million-fold, a subjective year would pass in 30 physical seconds.[9]
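
To make the figures in the two preceding paragraphs concrete, here is a back-of-the-envelope check (my own arithmetic, not from the cited sources): the external time per doubling halves at each step, so the entire infinite sequence of doublings fits inside a finite wall-clock interval, and a million-fold speed-up compresses a subjective year into roughly thirty physical seconds.

    # Back-of-the-envelope checks for the figures quoted above (Python).
    # 1) If the first doubling takes 18 external months and each later doubling
    #    takes half as long, the total external time converges to 36 months:
    total_external_months = sum(18 / 2**k for k in range(60))
    print(total_external_months)            # ~36.0 (a geometric series)

    # 2) A million-fold speed-up: one subjective year in physical seconds.
    seconds_per_year = 365.25 * 24 * 3600   # ~3.16e7 seconds
    print(seconds_per_year / 1_000_000)     # ~31.6 seconds, about half a minute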

It is difficult to directly compare silicon-based hardware with neurons. But Berglas (2008) notes that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain.
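
One rough way to read Berglas's comparison (my own arithmetic, resting on the paragraph's own assumption that capability scales with brain volume): if present-day hardware roughly matches the output of 0.01% of the brain, matching the whole brain would take about ten thousand times more capacity, i.e. roughly four orders of magnitude.

    # Order-of-magnitude reading of the comparison above (Python).
    import math
    matched_fraction = 0.01 / 100                  # speech recognition ~ 0.01% of brain volume
    scale_to_whole_brain = 1 / matched_fraction
    print(round(scale_to_whole_brain))             # 10000
    print(round(math.log10(scale_to_whole_brain))) # 4 -> "a few orders of magnitude"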

Intelligence improvements
Some intelligence technologies, like seed AI, may also have the potential to make themselves more intelligent, not just faster, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on.

This mechanism for an intelligence explosion differs from an increase in speed in two ways. First, it does not require external action: machines designing faster hardware still require humans to create the improved hardware, or to program factories appropriately. An AI rewriting its own source code, however, could do so while contained in an AI box.

Second, as with Vernor Vinge’s conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual improvements in intelligence would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure from, and acceleration of, previous geological rates of change, and improved intelligence could cause change to be as different again.[60]

There are substantial dangers associated with an intelligence-explosion singularity. First, the goal structure of the AI may not be invariant under self-improvement, potentially causing the AI to optimise for something other than what was intended.[61][62] Secondly, AIs could compete for the scarce resources mankind uses to survive.[63]

Even if not actively malicious, there is no reason to think that AIs would actively promote human goals unless they were programmed to do so; if they are not, they might use the resources currently supporting mankind to promote their own goals, causing human extinction.[13][64][65]

Impact
Dramatic changes in the rate of economic growth have occurred in the past because of some technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world’s economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.[50]
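
The ratios in this paragraph can be checked directly; the short calculation below is a worked example of the claim, with the final extrapolation purely illustrative rather than taken from Hanson's paper.

    # Doubling-time comparison for the growth regimes described above (Python).
    doubling_time_years = {
        "hunter-gatherer": 250_000,
        "agricultural": 900,
        "industrial": 15,
    }
    print(doubling_time_years["agricultural"] / doubling_time_years["industrial"])
    # -> 60.0: the industrial economy doubles ~60x faster than the agricultural one

    # If superhuman intelligence produced a jump of comparable size, the doubling
    # time would shrink by a similar factor (illustrative extrapolation only):
    print(doubling_time_years["industrial"] / 60 * 12)   # ~3.0 months per doubling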

Existential risk
Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators. Nick Bostrom's whimsical example is an AI originally programmed with the goal of manufacturing paper clips which, on achieving superintelligence, decides to convert the entire planet into a paper clip manufacturing facility;[66][67][68] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[69] AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[63][70] and humans would be powerless to stop them.[71] Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.[65]

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[72]

Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that the first real AI would have a head start on self-improvement and, if friendly, could prevent unfriendly AIs from developing, as well as providing enormous benefits to mankind.[64]

Bill Hibbard also addresses issues of AI safety and morality in his book Super-Intelligent Machines. These ideas were refined in 2008[73] and revised in 2012.[74][75][76]

One hypothetical approach towards attempting to control an artificial intelligence is an AI box, where the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world. However, a sufficiently intelligent AI may simply be able to escape by outsmarting its less intelligent human captors.[28][77][78]

Implications for human society
In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.

Some machines have acquired various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and have achieved "cockroach intelligence." The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[79]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[80] A United States Navy report indicates that, as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[81][82]

The AAAI has commissioned a study to examine this issue,[83] pointing to programs like the Language Acquisition Device, which was claimed to emulate human interaction.

Some support the design of friendly artificial intelligence, meaning that the advances that are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[84]

Isaac Asimov's Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI. The laws are intended to prevent artificially intelligent robots from harming humans. In Asimov’s stories, any perceived problems with the laws tend to arise as a result of a misunderstanding on the part of some human operator; the robots themselves are merely acting according to their best interpretation of their rules. In the 2004 film I, Robot, loosely based on Asimov's Robot stories, an AI attempts to take complete control over humanity for the purpose of protecting humanity from itself due to an extrapolation of the Three Laws. In 2004, the Singularity Institute launched an Internet campaign called 3 Laws Unsafe to raise awareness of AI safety issues and the inadequacy of Asimov’s laws in particular.[85]
