"No! Of course not. I just decided that Marco's arguments made sense, and that he was old enough to choose."
"We talked about that. You agreed that it was better to wait until he had more experience."
"I know. But then I - I decided I was being overly cautious."
"Overly cautious? You're not letting Marco risk sc.r.a.ping his knee; Binary Desire is going to perform brain surgery on him. How can you be too cautious about that?"
He pauses, and then says, "I realized it was time to let go."
"Let go?" As if the idea of protecting Marco and Polo were some childish fancy he'd outgrown. "I didn't know you thought of it that way."
"I didn't either, until recently."
"Does this mean you don't plan on incorporating Marco and Polo someday?"
"No, I still plan to do that. I just won't be as - " Again he hesitates. "Fixated."
"Not as fixated." Ana wonders how well she knew Derek at all. "Good for you, I guess."
He looks hurt by that, which is fine with her. "It's good for everyone," he says. "The digients get access to Real Space - "
"I know, I know."
"Really, I think it's for the best," he says, but he doesn't seem to believe it himself.
"How can it be for the best?" she asks. Derek doesn't say anything, and she just stares at him.
"I'll talk to you later," says Ana, and closes the phone window. Thinking about the ways Marco might be used - without ever realizing that he's being used - makes her heart break. You can't save them all, she reminds herself. But it never occurred to her that Marco might be one of those at risk. She a.s.sumed Derek felt the same way she does, that he understood the need to make sacrifices.
In her Data Earth window she can see Jax gleefully piloting his hovercar up and down slopes like a kid on a trackless rollercoaster. She doesn't want to tell him about the deal with Binary Desire right now; they would have to discuss what it means for Marco, and she doesn't have the energy for that conversation right now. For the moment, all she wants to do is watch him and, tentatively, try to get used to the idea that the Neuroblast port is actually underway. It's a peculiar sensation. She can't call it relief, because of the cost entailed, but it's undeniably a good thing that this enormous obstacle to Jax's future has been removed, and she didn't have to take the job with Polytope to do it. It'll be months before the port is finished, but the time will pass quickly now that the destination is known. Jax will be able to enter Real Space, see his friends again and rejoin the rest of the social universe.
Not that the future will be all smooth sailing. There is still an endless series of obstacles ahead, but at least she and Jax will have a chance to tackle them. Briefly, Ana indulges herself, fantasizing about what might happen if they succeed.
She imagines Jax maturing over the years, both in Real Space and in the real world. Imagines him incorporated, a legal person, employed and earning a living. Imagines him as a participant in the digient subculture, a community with enough money and skills to port itself to new platforms when the need arises. Imagines him accepted by a generation of humans who have grown up with digients and view them as potential relationship partners in a way that members of her generation will never be able to. Imagines him loving and being loved, arguing and compromising. Imagines him making sacrifices, some hard and some made easy because they're for a person he truly cares about.
A few minutes pass, and Ana tells herself to stop daydreaming. There's no guarantee that Jax is capable of any of those things. But if he's ever going to get the chance to try them, she has to get on with the job in front of her now: teaching him, as best she can, the business of living.
She initiates the game's shutdown procedure and calls Jax on the intercom. "Playtime's over, Jax," she says. "Time to do your homework."
Story Notes
Tower of Babylon This story was inspired by a conversation with a friend, when he mentioned the version of the Tower of Babel myth he'd been taught in Hebrew school. At that point I knew only the Old Testament account, and it hadn't made a big impression on me. But in the more elaborate version, the tower is so tall that it takes a year to climb, and when a man falls to his death, no one mourns, but when a brick is dropped, the bricklayers weep because it will take a year to replace.
The original legend is about the consequences of defying God. For me, however, the tale conjured up images of a fantastic city in the sky, reminiscent of Magritte's Castle in the Pyrenees. I was captivated by the audacity of such a vision and started wondering what life in such a city would be like.
Tom Disch called this story "Babylonian science fiction." I hadn't thought about it that way when I was writing it - the Babylonians certainly knew enough physics and astronomy to recognize this story as fanciful - but I understood what he meant. The characters may be religious, but they rely on engineering rather than prayer. No deity makes an appearance in the story; everything that happens can be understood in purely mechanistic terms. It's in that sense that - despite the obvious difference in cosmology - the universe in the story resembles our own.
Understand This is the oldest story in this volume and might never have been published if it weren't for Spider Robinson, one of my instructors at Clarion. This story had collected a bunch of rejection slips when I first sent it out, but Spider encouraged me to resubmit it after I had Clarion on my resume. I made some revisions and sent it out, and it got a much better response the second time around.
The initial germ for this story was an offhand remark made by a roommate of mine in college; he was reading Sartre's Nausea at the time, whose protagonist finds only meaninglessness in everything he sees. But what would it be like, my roommate wondered, to find meaning and order in everything you saw? To me that suggested a kind of heightened perception, which in turn suggested superintelligence. I started thinking about the point at which quantitative improvements - better memory, faster pattern recognition - turn into a qualitative difference, a fundamentally different mode of cognition.
Something else I wondered about was the possibility of truly understanding how our minds work. Some people are certain that it's impossible for us to understand our minds, offering analogies like "you can't see your face with your own eyes." I never found that persuasive. It may turn out that we can't, in fact, understand our minds (for certain values of "understand" and "mind"), but it'll take an argument much more persuasive than that to convince me.
Division by Zero There's a famous equation that looks like this: e^{iπ} + 1 = 0
When I first saw the derivation of this equation, my jaw dropped in amazement. Let me try to explain why.
One of the things we admire most in fiction is an ending that is surprising, yet inevitable. This is also what characterizes elegance in design: the invention that's clever yet seems totally natural. Of course we know that they aren't really inevitable; it's human ingenuity that makes them seem that way, temporarily.
Now consider the equation mentioned above. It's definitely surprising; you could work with the numbers e, π, and i for years, each in a dozen different contexts, without realizing they intersected in this particular way. Yet once you've seen the derivation, you feel that this equation really is inevitable, that this is the only way things could be. It's a feeling of awe, as if you've come into contact with absolute truth.
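Here, in brief, is the standard derivation. Comparing the Taylor series of e^x, cos x, and sin x gives Euler's formula:

    e^{iθ} = cos θ + i sin θ

Setting θ = π, where cos π = -1 and sin π = 0, yields e^{iπ} = -1, or equivalently e^{iπ} + 1 = 0.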
A proof that mathematics is inconsistent, and that all its wondrous beauty was just an illusion, would, it seemed to me, be one of the worst things you could ever learn.
Story of Your Life This story grew out of my interest in the variational principles of physics. I've found these principles fascinating ever since I first learned of them, but I didn't know how to use them in a story until I saw a performance of Time Flies When You're Alive, Paul Linke's one-man show about his wife's battle with breast cancer. It occurred to me then that I might be able to use variational principles to tell a story about a person's response to the inevitable. A few years later, that notion combined with a friend's remark about her newborn baby to form the nucleus of this story.
For those interested in physics, I should note that the story's discussion of Fermat's Principle of Least Time omits all mention of its quantum-mechanical underpinnings. The QM formulation is interesting in its own way, but I preferred the metaphoric possibilities of the classical version.
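For reference, the classical principle can be stated in a single line: between two points, light follows the path for which the travel time is stationary,

    T = (1/c) ∫ n(s) ds,    δT = 0,

where n(s) is the index of refraction along the path.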
As for this story's theme, probably the most concise summation of it that I've seen appears in Kurt Vonnegut's introduction to the 25th anniversary edition of Slaughterhouse-Five: "Stephen Hawking... found it tantalizing that we could not remember the future. But remembering the future is child's play for me now. I know what will become of my helpless, trusting babies because they are grown-ups now. I know how my closest friends will end up because so many of them are retired or dead now... To Stephen Hawking and all others younger than myself I say, 'Be patient. Your future will come to you and lie down at your feet like a dog who knows and loves you no matter what you are.' "
Seventy-Two Letters This story came about when I noticed a connection between two ideas I'd previously thought were unrelated. The first one was the golem.
In what's probably the best known story of the golem, Rabbi Loew of Prague brings a clay statue to life to act as a defender of the Jews, protecting them from persecution. It turns out this story is a modern invention, dating back only to 1909. Stories in which the golem is used as a servant to perform chores - with varying degrees of success - originated in the 1500s, but they still aren't the oldest references to the golem. In stories dating back to the second century, rabbis would animate golems not to accomplish anything practical, but rather to demonstrate mastery of the art of permutating letters; they sought to know God better by performing acts of creation.
The whole theme of the creative power of language has been discussed elsewhere, by people smarter than me. What I found particularly interesting about golems was the fact that they're traditionally unable to speak. Since the golem is created through language, this limitation is also a limitation on reproduction. If a golem were able to use language, it would be capable of self-replication, rather like a Von Neumann machine.
The other idea I'd been thinking about was preformation, the theory that organisms exist fully formed in the germ cells of their parents. It's easy for people now to dismiss it as ridiculous, but at the time, preformation made a lot of sense. It was an attempt to solve the problem of how living organisms are able to replicate themselves, which is the same problem that later inspired Von Neumann machines. When I recognized that, it seemed that I was interested in these two ideas for the same reason, and I knew I had to write about them.
The Evolution of Human Science This short-short was written for the British science journal Nature. Throughout the year 2000, Nature ran a feature called "Futures"; each week a different writer provided a short fictional treatment of a scientific development occurring in the next millennium. Nature happens to be a distant corporate cousin of Tor Books, so the editor in charge of "Futures," Dr. Henry Gee, asked Patrick Nielsen Hayden to suggest some possible contributors. Patrick was kind enough to mention me.
Since the piece would appear in a scientific journal, making it about a scientific journal seemed like a natural choice. I started wondering about what such a journal might look like after the advent of superhuman intelligence. William Gibson once said, "The future is already here; it's just not evenly distributed." Right now there are people in the world who, if they're aware of the computer revolution at all, know of it only as something happening to other people, somewhere else. I expect that will remain true no matter what technological revolutions await us.
(A note about the title: this short-short originally appeared under a title chosen by the editors of Nature; I've chosen to restore its original title for this reprint.)
Hell Is the Absence of God I first wanted to write a story about angels after seeing the movie The Prophecy, a supernatural thriller written and directed by Gregory Widen. For a long time I tried to think of a story in which angels were characters, but couldn't come up with a scenario I liked; it was only when I started thinking about angels as phenomena of terrifying power, whose visitations resembled natural disasters, that I was able to move forward. (Perhaps I was subconsciously thinking of Annie Dillard. Later on I remembered she once wrote that if people had more belief, they'd wear crash helmets when attending church and lash themselves to the pews.) Thinking about natural disasters led to thinking about the problem of innocent suffering. An enormous range of advice has been offered from a religious perspective to those who suffer, and it seems clear that no single response can satisfy everyone; what comforts one person inevitably strikes someone else as outrageous. Consider the Book of Job as an example.
For me, one of the unsatisfying things about the Book of Job is that, in the end, God rewards Job. Leave aside the question of whether new children can compensate for the loss of his original ones. Why does God restore Job's fortunes at all? Why the happy ending? One of the basic messages of the book is that virtue isn't always rewarded; bad things happen to good people. Job ultimately accepts this, demonstrating virtue, and is subsequently rewarded. Doesn't this undercut the message?
It seems to me that the Book of Job lacks the courage of its convictions: If the author were really committed to the idea that virtue isn't always rewarded, shouldn't the book have ended with Job still bereft of everything?
Liking What You See: A Documentary Psychologists once conducted an experiment where they repeatedly left a fake college application in an airport, supposedly forgotten by a traveler. The answers on the application were always the same, but sometimes they changed the photo of the fictitious applicant. It turned out people were more likely to mail in the application if the applicant was attractive. This is perhaps not surprising, but it illustrates just how thoroughly we're influenced by appearances; we favor attractive people even in a situation where we'll never meet them.
Yet any discussion of beauty's advantages is usually accompanied by a mention of the burden of beauty. I don't doubt that beauty has its drawbacks, but so does everything else. Why do people seem more sympathetic to the idea of burdensome beauty than to, say, the idea of burdensome wealth? It's because beauty is working its magic again: even in a discussion of its drawbacks, beauty is providing its possessors with an advantage.
I expect physical beauty will be around for as long as we have bodies and eyes. But if calliagnosia ever becomes available, I for one will give it a try.
The Lifecycle of Software Objects People routinely attempt to describe the human brain's capabilities in terms of instructions per second, and then use that as a guidepost for predicting when computers will be as smart as people. I think this makes about as much sense as judging the brain by the amount of heat it generates. Imagine if someone were to say, "when we have a computer that runs as hot as a human brain, we will have a computer as smart as a human brain." We'd laugh at such a claim, but people make similar claims about processing speed and for some reason they get taken seriously.
It's been over a decade since we built a computer that could defeat the best human chess players, yet we're nowhere near building a robot that can walk into your kitchen and cook you some scrambled eggs. It turns out that, unlike chess, navigating the real world is not a problem that can be solved by simply using faster processors and more memory. There's more and more evidence that if we want an AI to have common sense, it will have to develop it in the same ways that children do: by imitating others, by trying different things and seeing what works, and most of all by accruing experience. This means that creating a useful AI won't just be a matter of programming, although some amazing advances in software will definitely be required; it will also involve many years of training. And the more useful you want it to be, the longer the training will take.
But surely the training can be accelerated somehow, can't it? I don't believe so, or at least not easily. This seems related to the misconception that a faster computer is a smarter one, but with humans it's easier to see that speed is not the same thing as intelligence. Suppose you had a digital simulation of Paris Hilton's brain; no matter how fast a computer you run her on, she's never going to understand differential equations. By the same token, if you run a child at twice normal speed, all you'd get is a child whose attention span has been cut in half, and how useful is that?
But surely the AI will be able to learn faster because it won't be hampered by emotions, right? On the contrary, I think creating software that feels emotions will be a necessary step towards creating software that actually thinks, in much the same way that brains capable of emotion are an evolutionary predecessor to brains capable of thought. But even if it's possible to separate thinking from feeling, there may be other reasons to give AIs emotions. Human beings are social animals, and the success of virtual pets like Tamagotchis demonstrates that we respond to things that appear to need care and affection. And if an AI takes years to train, a good way to get a human to invest that kind of time is to create an emotional bond between the two.
And that's what I was really interested in writing about: the kind of emotional relationship that might develop between humans and AIs. I don't mean the affection that people feel for their iPhones or their scrupulously maintained classic cars, because those machines have no desires of their own. It's only when the other party in the relationship has independent desires that you can really gauge how deep a relationship is. Some pet owners ignore their pets whenever they become inconvenient; some parents do as little for their children as they can get away with; some lovers break up with each other the first time they have a big argument. In all of those cases, the people are unwilling to put effort into the relationship. Having a real relationship, whether with a pet or a child or a lover, requires that you be willing to balance someone else's wants and needs with your own.
I don't know if humans will ever have that kind of relationship with AIs, but I feel like this is an area that's been largely overlooked in science fiction. I've read a lot of stories in which people argue that AIs deserve legal rights, but in focusing on the big philosophical question, there's a mundane reality that these stories gloss over. It's a bit like how movies show separated lovers overcoming tremendous obstacles to be reunited: that's wonderfully romantic, but it's not the whole story when it comes to love; over the long term, love also means working through money problems and picking dirty laundry off the floor. So while achieving legal rights for AIs would clearly be a major milestone, another stage that I think is just as important - and, indeed, probably a prerequisite for initiating a legal battle - is for people to put real effort into their individual relationships with AIs.
And even if we don't care about them having legal rights, there's still good reason to treat AIs with respect. Think about the pets of neglectful owners, or the lovers who have never stayed with someone longer than a month; are they the pets or lovers you would choose? Think about the kind of people that bad parenting produces; are those the people you want to have as your friends or your employees? No matter what roles we a.s.sign AIs, I suspect they will do a better job if, at some point during their development, there were people who cared about them.
Acknowledgments.
Thanks to Michelle for being my sister, and thanks to my parents, Fu-Pen and Charlotte, for their sacrifices.
Thanks to the participants of Clarion, Acme Rhetoric, and Sycamore Hill for letting me work with them. Thanks to Tom Disch for the visit, Spider Robinson for the phone call, Damon Knight and Kate Wilhelm for the guidance, Karen Fowler for the anecdotes, and John Crowley for reopening my eyes. Thanks to Larret Galasyn-Wright for encouragement when I needed it and Danny Krashin for lending me his mind. Thanks to Alan Kaplan for all the conversations.
Thanks to Patrick Nielsen Hayden for taking a chance on this book. Thanks to everyone at the Virginia Kidd Agency - Virginia Kidd, Jim Allen, Linn Prentis, Nanci McCloskey, Christine Cohen, and Vaughne Hansen - for sticking with me.
Thanks to Juliet Albertson for love. And thanks to Marcia Glover, for love.