MOLLY 2004: Okay, but the virus writers will be improving their craft as well.
RAY: It's going to be a nervous standoff, no question about it. But the benefit today clearly outweighs the damage.
MOLLY 2004: How clear is that?
RAY: Well, no one is seriously arguing we should do away with the Internet because software viruses are such a big problem.
MOLLY 2004: I'll give you that.
RAY: When nanotechnology is mature, it's going to solve the problems of biology by overcoming biological pathogens, removing toxins, correcting DNA errors, and reversing other sources of aging. We will then have to contend with new dangers that it introduces, just as the Internet introduced the danger of software viruses. These new pitfalls will include the potential for self-replicating nanotechnology getting out of control, as well as the integrity of the software controlling these powerful, distributed nanobots.
MOLLY 2004: Did you say reverse aging?
RAY: I see you're already picking up on a key benefit.
MOLLY 2004: So how are the nanobots going to do that?
RAY: We'll actually accomplish most of that with biotechnology, methods such as RNA interference for turning off destructive genes, gene therapy for changing your genetic code, therapeutic cloning for regenerating your cells and tissues, smart drugs to reprogram your metabolic pathways, and many other emerging techniques. But whatever biotechnology doesn't get around to accomplishing, we'll have the means to do with nanotechnology.
MOLLY 2004: Such as?
RAY: Nanobots will be able to travel through the bloodstream, then go in and around our cells and perform various services, such as removing toxins, sweeping out debris, correcting DNA errors, repairing and restoring cell membranes, reversing atherosclerosis, modifying the levels of hormones, neurotransmitters, and other metabolic chemicals, and a myriad of other tasks. For each aging process, we can describe a means for nanobots to reverse the process, down to the level of individual cells, cell components, and molecules.
MOLLY 2004: So I'll stay young indefinitely?
RAY: That's the idea.
MOLLY 2004: When did you say I could get these?
RAY: I thought you were worried about nanobot firewalls.
MOLLY 2004: Yeah, well, I've got time to worry about that. So what was that time frame again?
RAY: About twenty to twenty-five years.
MOLLY 2004: I'm twenty-five now, so I'll age to about forty-five and then stay there?
RAY: No, that's not exactly the idea. You can slow down aging to a crawl right now by adopting the knowledge we already have. Within ten to twenty years, the biotechnology revolution will provide far more powerful means to stop and in many cases reverse each disease and aging process. And it's not like nothing is going to happen in the meantime. Each year, we'll have more powerful techniques, and the process will accelerate. Then nanotechnology will finish the job.
MOLLY 2004: Yes, of course, it's hard for you to get out a sentence without using the word "accelerate." So what biological age am I going to get to?
RAY: I think you'll settle somewhere in your thirties and stay there for a while.
MOLLY 2004: Thirties sounds pretty good. I think a slightly more mature age than twenty-five is a good idea anyway. But what do you mean "for a while"?
RAY: Stopping and reversing aging is only the beginning. Using nanobots for health and longevity is just the early adoption phase of introducing nanotechnology and intelligent computation into our bodies and brains. The more profound implication is that we'll augment our thinking processes with nanobots that communicate with one another and with our biological neurons. Once nonbiological intelligence gets a foothold, so to speak, in our brains, it will be subject to the law of accelerating returns and expand exponentially. Our biological thinking, on the other hand, is basically stuck.
MOLLY 2004: There you go again with things accelerating, but when this really gets going, thinking with biological neurons will be pretty trivial in comparison.
RAY: That's a fair statement.
MOLLY 2004: So, Miss Molly of the future, when did I drop my biological body and brain?
MOLLY 2104: Well, you don't really want me to spell out your future, do you? And anyway it's actually not a straightforward question.
MOLLY 2004: How's that?
MOLLY 2104: In the 2040s we developed the means to instantly create new portions of ourselves, either biological or nonbiological. It became apparent that our true nature was a pattern of information, but we still needed to manifest ourselves in some physical form. However, we could quickly change that physical form.
MOLLY 2004: By?
MOLLY 2104: By applying new high-speed MNT manufacturing. So we could readily and rapidly redesign our physical instantiation. So I could have a biological body at one time and not at another, then have it again, then change it, and so on.
MOLLY 2004: I think I'm following this.
MOLLY 2104: The point is that I could have my biological brain and/or body or not have it. It's not a matter of dropping anything, because we can always get back something we drop.
MOLLY 2004: So you're still doing this?
MOLLY 2104: Some people still do this, but now in 2104 it's a bit anachronistic. I mean, the simulations of biology are totally indistinguishable from actual biology, so why bother with physical instantiations?
MOLLY 2004: Yeah, it's messy isn't it?
MOLLY 2104: I'll say.
MOLLY 2004: I do have to say that it seems strange to be able to change your physical embodiment. I mean, where's your-my-continuity?
MOLLY 2104: It's the same as your continuity in 2004. You're changing your particles all the time also. It's just your pattern of information that has continuity.
MOLLY 2004: But in 2104 you're able to change your pattern of information quickly also. I can't do that yet.
MOLLY 2104: It's really not that different. You change your pattern-your memory, skills, experiences, even personality over time-but there is a continuity, a core that changes only gradually.
MOLLY 2004: But I thought you could change your appearance and personality dramatically in an instant?
MOLLY 2104: Yes, but that's just a surface manifestation. My true core changes only gradually, just like when I was you in 2004.
MOLLY 2004: Well, there are lots of times when I'd be delighted to instantly change my surface appearance.
Robotics: Strong AI
Consider another argument put forth by Turing. So far we have constructed only fairly simple and predictable artifacts. When we increase the complexity of our machines, there may, perhaps, be surprises in store for us. He draws a parallel with a fission pile. Below a certain "critical" size, nothing much happens: but above the critical size, the sparks begin to fly. So too, perhaps, with brains and machines. Most brains and all machines are, at present, "sub-critical"-they react to incoming stimuli in a stodgy and uninteresting way, have no ideas of their own, can produce only stock responses-but a few brains at present, and possibly some machines in the future, are super-critical, and scintillate on their own account. Turing is suggesting that it is only a matter of complexity, and that above a certain level of complexity a qualitative difference appears, so that "super-critical" machines will be quite unlike the simple ones hitherto envisaged.
-J. R. LUCAS, OXFORD PHILOSOPHER, IN HIS 1961 ESSAY "MINDS, MACHINES, AND GÖDEL"157
Given that superintelligence will one day be technologically feasible, will people choose to develop it? This question can pretty confidently be answered in the affirmative. Associated with every step along the road to superintelligence are enormous economic payoffs. The computer industry invests huge sums in the next generation of hardware and software, and it will continue doing so as long as there is a competitive pressure and profits to be made. People want better computers and smarter software, and they want the benefits these machines can help produce. Better medical drugs; relief for humans from the need to perform boring or dangerous jobs; entertainment-there is no end to the list of consumer-benefits. There is also a strong military motive to develop artificial intelligence. And nowhere on the path is there any natural stopping point where technophobics could plausibly argue "hither but not further."
-NICK BOSTROM, "HOW LONG BEFORE SUPERINTELLIGENCE?" 1997
It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals.
-NICK BOSTROM, "ETHICAL ISSUES IN ADVANCED ARTIFICIAL INTELLIGENCE," 2003
Will robots inherit the earth? Yes, but they will be our children.
-MARVIN MINSKY, 1995
Of the three primary revolutions underlying the Singularity (G, N, and R), the most profound is R, which refers to the creation of nonbiological intelligence that exceeds that of unenhanced humans. A more intelligent process will inherently outcompete one that is less intelligent, making intelligence the most powerful force in the universe.
While the R in GNR stands for robotics, the real issue involved here is strong AI (artificial intelligence that exceeds human intelligence). The standard reason for emphasizing robotics in this formulation is that intelligence needs an embodiment, a physical presence, to affect the world. I disagree with the emphasis on physical presence, however, for I believe that the central concern is intelligence. Intelligence will inherently find a way to influence the world, including creating its own means for embodiment and physical manipulation. Furthermore, we can include physical skills as a fundamental part of intelligence; a large portion of the human brain (the cerebellum, comprising more than half our neurons), for example, is devoted to coordinating our skills and muscles.
Artificial intelligence at human levels will necessarily greatly exceed human intelligence for several reasons. As I pointed out earlier, machines can readily share their knowledge. As unenhanced humans we do not have the means of sharing the vast patterns of interneuronal connections and neurotransmitter-concentration levels that comprise our learning, knowledge, and skills, other than through slow, language-based communication. Of course, even this method of communication has been very beneficial, as it has distinguished us from other animals and has been an enabling factor in the creation of technology.
Human skills are able to develop only in ways that have been evolutionarily encouraged. Those skills, which are primarily based on massively parallel pattern recognition, provide proficiency for certain tasks, such as distinguishing faces, identifying objects, and recognizing language sounds. But they're not suited for many others, such as determining patterns in financial data. Once we fully master pattern-recognition paradigms, machine methods can apply these techniques to any type of pattern.158
Machines can pool their resources in ways that humans cannot. Although teams of humans can accomplish both physical and mental feats that individual humans cannot achieve, machines can more easily and readily aggregate their computational, memory, and communications resources. As discussed earlier, the Internet is evolving into a worldwide grid of computing resources that can instantly be brought together to form massive supercomputers.
Machines have exacting memories. Contemporary computers can master billions of facts accurately, a capability that is doubling every year.159 The underlying speed and price-performance of computing itself is doubling every year, and the rate of doubling is itself accelerating.
As human knowledge migrates to the Web, machines will be able to read, understand, and synthesize all human-machine information. The last time a biological human was able to grasp all human scientific knowledge was hundreds of years ago.
Another advantage of machine intelligence is that it can consistently perform at peak levels and can combine peak skills. Among humans one person may have mastered music composition, while another may have mastered transistor design, but given the fixed architecture of our brains we do not have the capacity (or the time) to develop and utilize the highest level of skill in every increasingly specialized area. Humans also vary a great deal in a particular skill, so that when we speak, say, of human levels of composing music, do we mean Beethoven, or do we mean the average person? Nonbiological intelligence will be able to match and exceed peak human skills in each area.
For these reasons, once a computer is able to match the subtlety and range of human intelligence, it will necessarily soar past it and then continue its double-exponential ascent.
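A minimal numerical sketch of that difference, with purely illustrative parameters of my own choosing rather than figures from the text: ordinary exponential growth doubles on a fixed schedule, while double-exponential growth is what you get when the doubling time itself shrinks with each cycle.

```python
# Illustrative comparison of exponential vs. double-exponential growth.
# The starting capability, doubling times, and shrink rate are arbitrary
# choices for this sketch, not estimates taken from the text.

def exponential(years, doubling_time=1.0):
    """Capability that doubles every `doubling_time` years."""
    return 2.0 ** (years / doubling_time)

def double_exponential(years, initial_doubling_time=1.0, shrink=0.9):
    """Capability whose doubling time shrinks by a fixed factor after each doubling.

    Because the doubling times form a geometric series, this model blows up in
    finite time (initial_doubling_time / (1 - shrink) = 10 years here), so the
    number of doublings is also capped to keep the sketch well behaved.
    """
    capability, elapsed, dt = 1.0, 0.0, initial_doubling_time
    for _ in range(200):                 # hard cap on doublings
        if elapsed + dt > years:
            break
        elapsed += dt
        capability *= 2.0
        dt *= shrink                     # each doubling arrives sooner than the last
    return capability

for y in (3, 6, 9):
    print(f"year {y}: exponential {exponential(y):>6.0f}   "
          f"double-exponential {double_exponential(y):>10.0f}")
```

By year nine of this toy model the double-exponential curve is already thousands of times ahead of the plain exponential one, which is the shape of ascent the paragraph above describes.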
A key question regarding the Singularity is whether the "chicken" (strong AI) or the "egg" (nanotechnology) will come first. In other words, will strong AI lead to full nanotechnology (molecular-manufacturing assemblers that can turn information into physical products), or will full nanotechnology lead to strong AI? The logic of the first premise is that strong AI would imply superhuman AI for the reasons just cited, and superhuman AI would be in a position to solve any remaining design problems required to implement full nanotechnology.
The second premise is based on the realization that the hardware requirements for strong AI will be met by nanotechnology-based computation. Likewise the software requirements will be facilitated by nanobots that could create highly detailed scans of human brain functioning and thereby achieve the completion of reverse engineering the human brain.
Both premises are logical; it's clear that either technology can assist the other. The reality is that progress in both areas will necessarily use our most advanced tools, so advances in each field will simultaneously facilitate the other. However, I do expect that full MNT will emerge prior to strong AI, but only by a few years (around 2025 for nanotechnology, around 2029 for strong AI).
As revolutionary as nanotechnology will be, strong AI will have far more profound consequences. Nanotechnology is powerful but not necessarily intelligent. We can devise ways of at least trying to manage the enormous powers of nanotechnology, but superintelligence innately cannot be controlled.
Runaway AI. Once strong AI is achieved, it can readily be advanced and its powers multiplied, as that is the fundamental nature of machine abilities. As one strong AI immediately begets many strong AIs, the latter access their own design, understand and improve it, and thereby very rapidly evolve into a yet more capable, more intelligent AI, with the cycle repeating itself indefinitely. Each cycle not only creates a more intelligent AI but takes less time than the cycle before it, as is the nature of technological evolution (or any evolutionary process). The premise is that once strong AI is achieved, it will immediately become a runaway phenomenon of rapidly escalating superintelligence.160
My own view is only slightly different. The logic of runaway AI is valid, but we still need to consider the timing. Achieving human levels in a machine will not immediately cause a runaway phenomenon. Consider that a human level of intelligence has limitations. We have examples of this today-about six billion of them. Consider a scenario in which you took one hundred humans from, say, a shopping mall. This group would constitute examples of reasonably well-educated humans. Yet if this group was presented with the task of improving human intelligence, it wouldn't get very far, even if provided with the templates of human intelligence. It would probably have a hard time creating a simple computer. Speeding up the thinking and expanding the memory capacities of these one hundred humans would not immediately solve this problem.
I pointed out above that machines will match (and quickly exceed) peak human skills in each area of skill. So instead, let's take one hundred scientists and engineers. A group of technically trained people with the right backgrounds would be capable of improving accessible designs. If a machine attained equivalence to one hundred (and eventually one thousand, then one million) technically trained humans, each operating much faster than a biological human, a rapid acceleration of intelligence would ultimately follow.
However, this acceleration won't happen immediately when a computer passes the Turing test. The Turing test is comparable to matching the capabilities of an average, educated human and thus is closer to the example of humans from a shopping mall. It will take time for computers to master all of the requisite skills and to marry these skills with all the necessary knowledge bases.
Once we've succeeded in creating a machine that can pass the Turing test (around 2029), the succeeding period will be an era of consolidation in which nonbiological intelligence will make rapid gains. However, the extraordinary expansion contemplated for the Singularity, in which human intelligence is multiplied by billions, won't take place until the mid-2040s (as discussed in chapter 3).
The AI Winter
There's this stupid myth out there that A.I. has failed, but A.I. is everywhere around you every second of the day. People just don't notice it. You've got A.I. systems in cars, tuning the parameters of the fuel injection systems. When you land in an airplane, your gate gets chosen by an A.I. scheduling system. Every time you use a piece of Microsoft software, you've got an A.I. system trying to figure out what you're doing, like writing a letter, and it does a pretty damned good job. Every time you see a movie with computer-generated characters, they're all little A.I. characters behaving as a group. Every time you play a video game, you're playing against an A.I. system.-RODNEY BROOKS, DIRECTOR OF THE MIT AI LAB161
I still run into people who claim that artificial intelligence withered in the 1980s, an argument that is comparable to insisting that the Internet died in the dot-com bust of the early 2000s.162 The bandwidth and price-performance of Internet technologies, the number of nodes (servers), and the dollar volume of e-commerce all accelerated smoothly through the boom as well as the bust and the period since. The same has been true for AI.
The technology hype cycle for a paradigm shift-railroads, AI, Internet, telecommunications, possibly now nanotechnology-typically starts with a period of unrealistic expectations based on a lack of understanding of all the enabling factors required. Although utilization of the new paradigm does increase exponentially, early growth is slow until the knee of the exponential-growth curve is realized. While the widespread expectations for revolutionary change are accurate, they are incorrectly timed. When the prospects do not quickly pan out, a period of disillusionment sets in. Nevertheless exponential growth continues unabated, and years later a more mature and more realistic transformation does occur.
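A toy calculation, with an assumed twenty-year horizon and yearly doubling chosen only for illustration, shows why an exponential trend disappoints early watchers: the quantity stays under one percent of its final value for roughly the first two-thirds of the run, then adds half of all its growth in the final year.

```python
# Toy model of the "knee" of an exponential curve: a quantity that doubles
# every year looks negligible for most of a 20-year run, then explodes.
# The yearly doubling and 20-year horizon are illustrative assumptions.

final = 2 ** 20
for year in range(0, 21, 2):
    value = 2 ** year
    print(f"year {year:2d}: {value:>9,}  ({100 * value / final:8.4f}% of the year-20 value)")
```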
We saw this in the railroad frenzy of the nineteenth century, which was followed by widespread bankruptcies. (I have some of these early unpaid railroad bonds in my collection of historical documents.) And we are still feeling the effects of the e-commerce and telecommunications busts of several years ago, which helped fuel a recession from which we are now recovering.
AI experienced a similar premature optimism in the wake of programs such as the 1957 General Problem Solver created by Allen Newell, J. C. Shaw, and Herbert Simon, which was able to find proofs for theorems that had stumped mathematicians such as Bertrand Russell, and early programs from the MIT Artificial Intelligence Laboratory, which could answer SAT questions (such as analogies and story problems) at the level of college students.163
A rash of AI companies occurred in the 1970s, but when profits did not materialize there was an AI "bust" in the 1980s, which has become known as the "AI winter." Many observers still think that the AI winter was the end of the story and that nothing has since come of the AI field.
Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry. Most of these applications were research projects ten to fifteen years ago. People who ask, "Whatever happened to AI?" remind me of travelers to the rain forest who wonder, "Where are all the many species that are supposed to live here?" when hundreds of species of flora and fauna are flourishing only a few dozen meters away, deeply integrated into the local ecology.
We are well into the era of "narrow AI," which refers to artificial intelligence that performs a useful and specific function that once required human intelligence to perform, and does so at human levels or better. Often narrow AI systems greatly exceed the speed of humans, as well as provide the ability to manage and consider thousands of variables simultaneously. I describe a broad variety of narrow AI examples below.
These time frames for AI's technology cycle (a couple of decades of growing enthusiasm, a decade of disillusionment, then a decade and a half of solid advance in adoption) may seem lengthy, compared to the relatively rapid phases of the Internet and telecommunications cycles (measured in years, not decades), but two factors must be considered. First, the Internet and telecommunications cycles were relatively recent, so they are more affected by the acceleration of paradigm shift (as discussed in chapter 1). So recent adoption cycles (boom, bust, and recovery) will be much faster than ones that started forty years ago. Second, the AI revolution is the most profound transformation that human civilization will experience, so it will take longer to mature than less complex technologies. It is characterized by the mastery of the most important and most powerful attribute of human civilization, indeed of the entire sweep of evolution on our planet: intelligence.
It's the nature of technology to understand a phenomenon and then engineer systems that concentrate and focus that phenomenon to greatly amplify it. For example, scientists discovered a subtle property of curved surfaces known as Bernoulli's principle: a gas (such as air) travels more quickly over a curved surface than over a flat surface. Thus, air pressure over a curved surface is lower than over a flat surface. By understanding, focusing, and amplifying the implications of this subtle observation, our engineering created all of aviation. Once we understand the principles of intelligence, we will have a similar opportunity to focus, concentrate, and amplify its powers.
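For readers who want the formula behind that observation: along a streamline, Bernoulli's principle states that p + ½ρv² is constant, so where the flow is faster the pressure is lower. The sketch below plugs in illustrative airspeeds and sea-level air density (my numbers, not the book's) to estimate the resulting pressure difference across a wing.

```python
# Bernoulli's principle along a streamline: p + 0.5 * rho * v**2 is constant,
# so faster flow means lower pressure. The airspeeds and air density are
# illustrative values chosen for this sketch, not data from the text.

rho = 1.225        # density of air at sea level, kg/m^3
v_upper = 70.0     # faster flow over the curved upper surface, m/s
v_lower = 60.0     # slower flow along the flatter lower surface, m/s

delta_p = 0.5 * rho * (v_upper**2 - v_lower**2)   # pressure difference, pascals
print(f"pressure difference: {delta_p:.0f} Pa "
      f"(about {delta_p:.0f} newtons of lift per square meter of wing)")
```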
As we reviewed in chapter 4, every aspect of understanding, modeling, and simulating the human brain is accelerating: the price-performance and temporal and spatial resolution of brain scanning, the amount of data and knowledge available about brain function, and the sophistication of the models and simulations of the brain's varied regions.
We already have a set of powerful tools that emerged from AI research and that have been refined and improved over several decades of development. The brain reverse-engineering project will greatly augment this toolkit by also providing a panoply of new, biologically inspired, self-organizing techniques. We will ultimately be able to apply engineering's ability to focus and amplify human intelligence vastly beyond the hundred trillion extremely slow interneuronal connections that each of us struggles with today. Intelligence will then be fully subject to the law of accelerating returns, which is currently doubling the power of information technologies every year.
An underlying problem with artificial intelligence that I have personally experienced in my forty years in this area is that as soon as an AI technique works, it's no longer considered AI and is spun off as its own field (for example, character recognition, speech recognition, machine vision, robotics, data mining, medical informatics, automated investing).
Computer scientist Elaine Rich defines AI as "the study of how to make computers do things at which, at the moment, people are better." Rodney Brooks, director of the MIT AI Lab, puts it a different way: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'" I am also reminded of Watson's remark to Sherlock Holmes, "I thought at first that you had done something clever, but I see that there was nothing in it after all."164 That has been our experience as AI scientists. The enchantment of intelligence seems to be reduced to "nothing" when we fully understand its methods. The mystery that is left is the intrigue inspired by the remaining, not as yet understood methods of intelligence.
AI's Toolkit
AI is the study of techniques for solving exponentially hard problems in polynomial time by exploiting knowledge about the problem domain.-ELAINE RICH
As I mentioned in chapter 4, it's only recently that we have been able to obtain sufficiently detailed models of how human brain regions function to influence AI design. Prior to that, in the absence of tools that could peer into the brain with sufficient resolution, AI scientists and engineers developed their own techniques. Just as aviation engineers did not model the ability to fly on the flight of birds, these early AI methods were not based on reverse engineering natural intelligence.
A small sample of these approaches is reviewed here. Since their adoption, they have grown in sophistication, which has enabled the creation of practical products that avoid the fragility and high error rates of earlier systems.
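To make Rich's epigraph concrete before turning to specific techniques, here is a toy example of my own, not one drawn from the book: checking every ordering of one hundred cities is hopelessly exponential, but a single piece of domain knowledge, the rule "always hop to the nearest unvisited city," produces a serviceable tour in polynomial time.

```python
# Toy illustration (mine, not the book's) of Elaine Rich's definition: brute
# force over all tours of 100 cities would mean examining 100! (about 9e157)
# orderings, but the domain knowledge "hop to the nearest unvisited city"
# yields a reasonable tour after only O(n^2) distance comparisons.

import math
import random

def tour_length(cities, order):
    """Total length of the closed tour that visits cities in the given order."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor_tour(cities):
    """Greedy heuristic: start at city 0 and always visit the closest unvisited city."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(100)]
tour = nearest_neighbor_tour(cities)
print(f"nearest-neighbor tour length: {tour_length(cities, tour):.2f}")
```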
Expert Systems. In the 1970s AI was often equated with one specific method: expert systems. This involves the development of specific logical rules to simulate the decision-making processes of human experts. A key part of the procedure entails knowledge engineers interviewing domain experts such as doctors and engineers to codify their decision-making rules.
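A minimal sketch of the idea, using invented rules and facts rather than anything from a real medical system: at its simplest, an expert system is a set of if-then rules fired repeatedly against known facts until nothing new can be concluded (forward chaining).

```python
# Minimal forward-chaining expert system: if-then rules elicited from a domain
# expert are applied to known facts until no new conclusion can be drawn.
# The rules and facts below are invented for illustration only.

RULES = [
    ({"fever", "stiff_neck"},          "suspect_meningitis"),
    ({"fever", "cough", "chest_pain"}, "suspect_pneumonia"),
    ({"suspect_pneumonia"},            "order_chest_xray"),
    ({"suspect_meningitis"},           "order_lumbar_puncture"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied by the facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "chest_pain"}, RULES))
# includes 'suspect_pneumonia' and 'order_chest_xray'
```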
There were early successes in this area, such as medical diagnostic systems that compared well to human physicians, at least in limited tests. For example, a system called MYCIN, which was designed to diagnose and recommend remedial treatment for infectious diseases, was developed through the 1970s. In 1979 a team of expert evaluators compared diagnosis and treatment recommendations by MYCIN to those of human doctors and found that MYCIN did as well as or better than any of the physicians.165
It became apparent from this research that human decision making typically is based not on definitive logic rules but rather on "softer" types of evidence. A dark spot on a medical imaging test may suggest cancer, but other factors such as its exact shape, location, and contrast are likely to influence a diagnosis. The hunches of human decision making are usually influenced by combining many pieces of evidence from prior experience, none definitive by itself. Often we are not even consciously aware of many of the rules that we use.
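MYCIN handled this kind of soft evidence with certainty factors, numerical degrees of belief attached to individual pieces of evidence. The sketch below shows the standard rule for combining two positive certainty factors; the evidence values themselves are invented for illustration.

```python
# MYCIN-style certainty factors: each piece of evidence contributes a degree of
# belief in (0, 1], and positive factors are combined so that additional
# evidence raises confidence without ever exceeding 1. Values are invented.

def combine_positive(cf1, cf2):
    """Standard combination rule for two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

evidence_for_cancer = [0.4, 0.3, 0.6]   # e.g., dark spot, its shape, its contrast

belief = 0.0
for cf in evidence_for_cancer:
    belief = combine_positive(belief, cf)
    print(f"after evidence {cf:.1f}: combined belief = {belief:.3f}")
```

No single factor is conclusive here, yet the combined belief climbs toward certainty as the pieces of evidence accumulate, which is the behavior the paragraph above attributes to human hunches.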