Kudlow was not pleased. He categorically denied having made any such prediction, he denounced Huffington's "ad hominem" attack, and he quickly and firmly directed the conversation elsewhere.
But Huffington wouldn't let go that easily. After a commercial break, she held up a copy of Kudlow's prediction and read it. Kudlow sat frozen. His cohost intervened-"Let's just move on for the sake of conversation"-and steered the discussion to politics and predictions about the elections of 2010.
Finally, at the end of the segment, Kudlow let loose. "Just let me say, Arianna, insofar as your personal attack on me, I am a great believer in American free-market capitalism for the long run. Unlike you, I have never changed my stripes. You were a conservative Newt Gingrich supporter, then you flip-flopped to a liberal. I stay the course. We may get to Dow fifty thousand or not. I don't recall ever making that forecast. But in the long run, in the long run, economic freedom and free-market capitalism will keep this country great. I say that on New Year's Eve. I will keep that point of view. And I have not changed my point of view for my entire adult life."
Huffington tried to respond but Kudlow cut her off. "We're going to move on," he said. "I just want to make a response. We gave you time, Arianna. I don't like ad hominem stuff. It's a cheap shot. You're trying to promote your Web site. That's your right as an American."
"It's not a cheap shot," Huffington squeezed in. "I'm just telling you what you said. You're denying what you said."
Kudlow waved her off. "Don't drag us through the mud."
Kudlow's cohost wrapped things up. "Happy New Year!" he gushed. And that was that.
YESTERDAY'S NEWS
In 1993, New York Times book critic Christopher Lehmann-Haupt was glum. He'd just read Paul Kennedy's new book Preparing for the Twenty-first Century, which warned that soaring populations would combine with diminishing resources and worsening environmental problems to produce disaster. "When you come to the end, you are so depressed you barely have strength to close the book," he wrote. His only criticism of the author was that Kennedy had suggested there were still uncertainties in play, so the grim future he foresaw would not necessarily come to pass. "Kennedy whistles past the graveyard," Lehmann-Haupt wrote.
What makes this review revealing is that in the early 1970s Christopher Lehmann-Haupt reviewed-and praised-several books whose themes were identical to Kennedy's. The only difference was that the earlier books said the decade that would decide everything would be the 1970s: Either there would be major change then or we were all doomed. One of the books Lehmann-Haupt reviewed was Paul Ehrlich's The End of Affluence, which predicted, as we have seen, that American prosperity was finished no matter what. It also made a long string of precise predictions about the end of oil, food shortages, mass famines, and the collapse of India. By 1993, it was clear that most of the forecasts in Ehrlich's book and the others reviewed by Lehmann-Haupt had completely failed. And yet in his review of Paul Kennedy's book, Lehmann-Haupt didn't mention that Kennedy's claims had been routinely made in the 1970s, or that those earlier predictions had fallen flat. Instead, Lehmann-Haupt faulted Kennedy for refusing to declare our doom inevitable.
So why didn't Lehmann-Haupt mention the many failed predictions of the 1970s? A cynic would say it was deliberate. He's pushing an agenda and he doesn't care if his readers are properly informed. I don't think that's right. I think he simply forgot. Why would he remember books he had read more than two decades earlier? It was 1993, after all. Oil was cheap, people were getting fat, and the economy had done reasonably well for most of the previous decade. Food shortages? Overpopulation? The "energy crisis"? That was so 1970s. People hadn't talked about any of it for years. You can't blame Lehmann-Haupt if The End of Affluence had vanished from his memory.
This is another huge reason we notice hits but ignore misses: If a prediction about a subject hits, that subject will probably be in the news and people will be talking about it, but if the prediction misses, the subject is likely yesterday's news and nobody will be talking about it. If Paul Ehrlich's predictions in The End of Affluence had been right, you can be sure that overpopulation and food shortages would have been very hot topics in 1993-and Christopher Lehmann-Haupt would remember Ehrlich's book. But they weren't right. By 1993, overpopulation and food shortages were as dated as bell bottoms and disco, and Lehmann-Haupt had forgotten the predictions he had once found so compelling.
"Three-quarters of news is 'new,'" an editor once told me. News happens today. If a famine happens now and an expert predicted it a decade ago, that old prediction is news. But if the famine doesn't happen, that old prediction is not news. It is merely old. In 2006, political scientist John Mueller contacted a Wall Street Journal reporter about a cover story on terrorism she had written for National Journal magazine two years earlier, shortly after the presidential election of 2004. In the story, the reporter quoted experts who predicted that the threat would rise in the months following the election. "Bin Laden, having uttered his warning, will be marshaling his resources to make good on his promise that Americans will not be able to avoid a new 9/11. It'll be a race against time," an expert said at the end of the article. Nothing remotely like that happened, so Mueller asked the reporter if she would write a follow-up piece noting that the prediction had failed. Probably not, she suggested politely. "It's hard to do stories that do not have a hard news component." Of course, if there had been terrorist attacks, you can be sure there would have been stories about the expert who predicted them. But with no terrorist attacks, there was no "new" news, and thus no reason to write a story about the expert who blew smoke.
Heads: I win. Tails: You forget we had a bet.
There's not much risk for experts who make predictions.
CAPRICORNS ARE HONEST, INTELLIGENT, HARDWORKING, GULLIBLE. . . .
Self-interest and media amnesia aside, the more profound reason we notice hits and ignore misses lies in human psychology.
In 1949, psychologist Bertram Forer asked his students to complete a personality test. Later, he gave them a personality profile based on the test's results and asked them to judge the test's accuracy. Everyone was impressed. They were sure the test had really nailed who they were, which was odd because everyone had been given the same profile. Forer had assembled it out of vague statements-"You have a tendency to be critical of yourself"-culled from a book on astrology.
"The root of all superst.i.tion is that men observe when a thing hits, but not when it misses," wrote Sir Francis Bacon. The "Forer Effect" is one demonstration of this universal tendency. It's what makes horoscopes so appealing. When we read a string of statements-"A business opportunity is promising," "New love beckons," "A figure from the past makes contact"-the hits and misses are not equally weighted. Those that seem to fit our circ.u.mstances grab our attention and are remembered, while those that don't are scarcely noticed and quickly forgotten.
That much is obvious. But bear in mind that hits and misses don't come with labels. It's a matter of perception whether something is a hit or a miss, which makes language important. The more ambiguous the wording is, the more a statement can be stretched, and since we want hits, that's the direction in which things will tend to stretch-a tendency astrologers, psychics, soothsayers, and prophets have understood since the dawn of time. When the notoriously vague Oracle of Delphi was asked by King Croesus of Lydia whether he should attack the Persian Empire, the oracle is said to have responded that if he did he would destroy a great empire. Encouraged, the king attacked and lost. Croesus hadn't considered that whether he won or lost, a great empire would be destroyed. Nostradamus also knew better than to be precise. All the sixteenth-century sage's predictions were written in such fuzzy, poetic images-"Serpents introduced into the iron cage where the seven children of the king are taken"-they could be taken to mean almost anything. And they have been. For centuries, people have insisted that a careful reading of the master's work reveals that Nostradamus predicted the present-even though it is only by constantly reinterpreting the same writing that his admirers keep Nostradamus on top of the day's events.
Mysticism invites this sort of gullibility but incense and spooky stories aren't necessary for the human mind to go to absurd lengths to discover hits, or even turn misses into hits. This truth is neatly illustrated in a book simply called Predictions. Published in 1956, it's a compilation of old cartoons and illustrations that made predictions about the future. One cartoon shows a map of Russia as a ravenous bear that has swallowed almost all of Europe and Asia and has its jaws open for its next meal: Japan. In a caption, the author describes the cartoon as "too accurate for comfort." He thought this because, in 1956, Soviet Russia was a superpower threatening to dominate all of Europe and Asia-and since the cartoon had been drawn in 1904, more than fifty years earlier, it seemed astonishingly prescient. To come to that conclusion, however, the author had to overlook a lot of history. In fact, the cartoon was a commentary on rising tensions between Japan and Russia that exploded into war in 1904. Russia was crushed. Russia then suffered a civil insurrection. Less than a decade later came the First World War, another defeat, the loss of vast territories, revolution, and civil war. A decade after that came mass starvation. Then Russia was nearly annihilated by Nazi Germany. Only forty years after the cartoon was published did Russia achieve the superpower status that made an observer in 1956 think this cartoon was an amazing hit instead of the spectacular miss it really was.
Simply reading a list of old predictions reveals our unfortunate bias. The misses may produce a chuckle or two, but they are soon forgotten. A hit, on the other hand, leaps off the page and is remembered. I often experienced this phenomenon in doing the research for this book. One day, for example, I was thunderstruck to read the following in a newspaper column Anthony Lewis wrote in 1969: "The increasing carbon dioxide in the air gradually warms the oceans and could, it is feared, eventually melt the polar ice caps at a rate fast enough to flood the coasts of our continents. Daniel Patrick Moynihan, counselor to President Nixon, warned here this week that the atmosphere's carbon dioxide content would grow 25 percent by the year 2000." That's more or less the current theory of man-made climate change, which didn't become scientific orthodoxy until the 1990s. Talk about a hit! But then I reminded myself that in the late 1960s and early 1970s, there were several hypotheses about climate change floating about. Some called for warming. Some for cooling. Some of those raising alarms of the day said either was possible. If someone in 1969 had made a short list of those theories, including a "no change" option, and then chosen one outcome entirely at random, she would have had a good chance of successfully "predicting" the future. But more to the point, I was so focused on this supposedly successful prediction that I paid little attention to others that appeared in the same column. They included "Competition for food and raw materials is going to become savage as populations grow." And "People and engines are using up oxygen at an alarming rate: one transatlantic jet burns 35 tons. . . . One day, suddenly, the world's billions of creatures may literally be struggling for a last breath." The hit may have been more apparent than real but it still had the power to overwhelm some spectacular misses.
PETER SCHIFF WAS RIGHT!
Which brings us back to the amazing Peter Schiff. As the title of that famous video says, he was right. When so many pundits were saying everything was fine with the American economy, he said it would crash. And it did. Peter Schiff was right.
That time. In a sense. Sort of.
"The dollar is going to start to fall. And as the dollar falls, you're going to have significant flows out of U.S. financial a.s.sets from all around the world. And that is going to send interest rates through the roof. And when that happens, this whole consumer-led, borrow-and-spend economy is going to come tumbling down. Then we're going to have a real recession." That was Peter Schiff in a television interview that was not included in the famous "Peter Schiff Was Right" video. The year was 2002. "The bear market began in 2000," Schiff said. "It's probably going to last another five or ten years. I think the bulk of the downside is going to happen in the next couple of years." And how much "downside" would there be? "My prediction for the NASDAQ is that it's going to fall to around five hundred. Right now, it's about seventeen hundred. It's got a long way to go down. Dow Jones is still above ten thousand. Probably going to fall to between two thousand and four thousand. But it might go below two thousand."
Even if we are so generous as to stretch the time frame of Schiff's prediction to 2009-and it's clear he was actually talking about the first half of the decade-this looks bad. The dollar did decline from 2002 to 2008, but it didn't send interest rates through the roof and it didn't cause a recession. In fact, when the crisis of 2008 hit, investors ran to the American dollar, pushing it up significantly. There also was no rampant inflation, another of Schiff's predictions. The stock markets did decline for about a year following his forecast, but they then rose steadily for four years, with the Dow hitting a peak of 14,000 in late 2007. And even in the darkest days of the 2008 crash, the markets never sank to anywhere near the depths Schiff forecast-the Dow hit bottom at 6,500 but surprised most observers, including Schiff, by bouncing back above 10,000 months later.
But Schiff didn't let any of this dent his confidence. "While the housing bubble was inflating, I was telling people to rent. I was telling people to get out of tech stocks in 1998 and 1999. They kept rising, but then they collapsed, and I turned out to be right," Schiff said in May 2008, when the American economy was sinking and his star was on the rise. "The reality is I don't think I've been wrong on anything," he said that same month. If Schiff were right about his dazzling predictive powers, investors who took his advice must have really cleaned up in the year of Schiff's alleged vindication. But they didn't. "The year that Schiff became a star prognosticator on TV was also one of the worst periods ever for his clients," Fortune magazine reported. "In most cases the foreign markets he likes got hit even harder than the U.S. in 2008 and even more surprising to Schiff, the U.S. dollar rallied strongly as investors rushed to the perceived safety of Treasuries." Schiff provided some other reasons to question his claims of perfection in May 2008. "I think the stock market is heading lower," he predicted. He was right about that. "Gold is going to be twelve hundred to fifteen hundred dollars by the end of the year." Off by several hundred dollars on that one. "Oil prices had a pretty big run and might not make more headway by the end of the year. But we could see a hundred fifty to two hundred dollars next year." Oil crashed; it was less than forty dollars at the end of the year and it spent most of 2009 around seventy dollars. "At a minimum, the dollar will lose another forty to fifty percent of its value. I'm confident that by next year we'll see more aggressive movements to abandon the dollar by the [Persian] Gulf region and by the Asian bloc. That's when the stuff really hits the fan." Stuff hit the fan in 2009, but not that stuff.
But it's not the misses that dazzle. In December 2008, New York Times business columnist Joe Nocera gushed about the "Peter Schiff Was Right" YouTube video. "One thing that makes it amazing is how unflinching Mr. Schiff is, how unyielding, how matter-of-fact, no matter how scornful and sneering the response from the other talking heads. Even when they laugh at him, he keeps coming back," Nocera wrote. "The other thing that makes it amazing, of course, is that Mr. Schiff absolutely nailed the current crisis-and did so many, many months before the rest of us could feel the first tremor." True enough. But it's somewhat less amazing if you bear in mind that Schiff has been making essentially the same prediction for the same reason for many years. And the amazement fades entirely when you learn that the man Schiff credits for his understanding of economics-his father, Irwin-has been doing the same at least since 1976. Now, even if we generously give Schiff unqualified credit for calling 2008, that means the combined record of Peter and Irwin Schiff is something on the order of one in thirty-two: one hit in roughly thirty-two years of doomsaying since 1976. As the old saying goes, even a stopped clock is right twice a day-which, checked every hour, produces a record of two in twenty-four, or one in twelve. It seems only reasonable that prognosticators should be required to do better than stopped clocks before we declare them gurus.
Nocera's judgment of Schiff's accuracy may be doubtful but he's absolutely right about Schiff's style. He is articulate, passionate, and authoritative. And he is absolutely, unswervingly, unconquerably sure of himself. Just like the other hedgehogs who dominate the talk shows, best-seller charts, and lecture halls.
They may be wrong far more often than they are right. They may do worse than flipped coins and stopped clocks. But they never fail to deliver the certainty that we crave.
And we never fail to ask for more.
POSTSCRIPT: HANG THE INNOCENT.
On the very short list of pundits who suffered for making bad predictions, one name must take top spot.
Norman Angell was British but he spent part of his youth knocking about the American West, working as a journalist, a cowboy, a laborer, and a homesteader. In the years prior to the First World War, he wrote an internationally renowned essay and turned it into a hugely influential best-selling book. He became a member of Parliament, a lecturer, and a statesman. In 1933, he won the Nobel Peace Prize.
But all that is forgotten. The sole reason that Norman Angell's name continues to appear in print today, the only thing for which this remarkable man is remembered, is a prediction he made in 1909.
The economies and financial systems of the major powers were now intertwined, Angell noted in a pamphlet entitled Europe's Optical Illusion. It followed that in a war between the major powers, "the victor would suffer equally with the vanquished," wrote historian Barbara Tuchman, summarizing Angell's views, "therefore war had become unprofitable, therefore no nation would be so foolish as to start one." And so the major powers would never again go to war with each other, Angell concluded with impeccable logic and terrible timing: Five years later, Europe exploded in war and the great powers spent much of the next half century doing what Angell said they would never do again.
Norman Angell has been mocked ever since. Even now, almost a century after Angell's prediction failed so spectacularly, his name routinely appears in print as a warning against foolish optimism or economic determinism or the perils of making predictions. No one has ever suffered more for a prediction that failed.
And that is deeply unfair, for the simple reason that Norman Angell never predicted there would be no war.
What Angell actually argued in Europe's Optical Illusion and in the many best-selling editions of the book that followed-under the title The Great Illusion-was that the interconnections of the economic and financial systems meant that a victorious nation would suffer more than it gained if it attempted to profit from war by looting national banks or otherwise plundering the defeated. This was a limited thesis. It left open the possibility that nations could reap political or strategic gains in war. It also did not deny the possibility that nations would go to war despite their self-interest, since Angell never thought that individuals and groups are always guided by strict rationality. So war was not impossible, in Angell's view; it was merely unprofitable, in a precise and limited sense.
Tuchman's summary of Angell's views, quoted above, is simply wrong. And Tuchman was far from the first to misrepresent what Angell wrote.
Almost from the moment Angell's book was published, his argument was simplified and sexed up: Victors always lose more than they gain; war is always contrary to self-interest; therefore no one would be so stupid as to launch a war in the modern world; therefore no one ever will. It wasn't only critics who made this mistake. So did many of Angell's ardent admirers. "What shall we say of the Great War of Europe, ever threatening, ever impending, and which never comes?" wrote David Starr Jordan, president of Stanford University, in 1913. "We shall say that it will never come." Even though Angell's work was discussed at the highest levels in London and other capitals, even though dozens of study groups were created to pore over The Great Illusion, the misunderstanding persisted. Angell later pinned some of the blame on his writing. There was a "fundamental defect of presentation in a book that was highly, at times extravagantly, praised for its clarity and lucidity," he wrote. Angell's biographer, Martin Ceadel, thinks the titles-both Europe's Optical Illusion and The Great Illusion-added to the confusion because they didn't make clear that the "illusion" in question wasn't the threat of war but the profitability of war. "Angell would have been spared much heartache had he called his book The Economic Contradictions of Aggression or some similarly substantive formulation that would have clarified that he was disputing neither the possibility of war nor the utility of defense," Ceadel wrote.
Angell struggled mightily to set the record straight. "War is, unhappily, quite possible, and, in the prevailing condition of ignorance of certain politico-economic facts, even likely," Angell wrote to the Daily Mail in 1911 after the newspaper claimed Angell had argued "war is impossible." Angell wrote many such letters. "You are good enough to say that I am 'one of the very few advocates of peace at any price who is not altogether an ass.' And yet you state that I have been on a mission 'to persuade the German people that war in the twentieth century is impossible.' If I had ever tried to teach anybody such sorry rubbish I should be altogether an unmitigated ass," Angell wrote in 1913. "Personally, not only do I regard war as possible, but extremely likely. What I have been preaching in Germany is that it is impossible for Germany to benefit by war, especially a war against us; and that, of course, is quite a different matter."
It did no good. In 1914, the First World War exploded. As the most famous of the many experts who had said-or were believed to have said-that there would be no war, Norman Angell was singled out. He was the scapegoat.
Angell fought back in letters and lectures and interviews, but it was no use. In 1933, when Angell was awarded the Nobel Peace Prize for his tireless antiwar activism, the citation prominently denounced the claim that he had said war was impossible. Not even that helped. Among those less informed than the Nobel committee, Angell's reputation was permanently stained. Angell even complained that he had to avoid getting involved with causes he supported lest he "might taint others with the derision which has grown from this falsehood or confusion."
In 1952, when Angell published his autobiography, a sympathetic reviewer in The New York Times noted how absurd it was that Angell had suffered so much for so little reason. Even the critics' description of Angell was cockeyed, the reviewer noted. They called him "starry-eyed," an ivory-tower academic, a theorist out of touch with reality, but he was, in fact, a much-traveled and experienced journalist. And, no, the reviewer stated emphatically, he had not claimed war was impossible. Still, five decades later, in the very same newspaper, a writer mocked "the starry-eyed British economist Norman Angell" who had claimed war was impossible.
One might think Angell would have been left in peace after he died in 1967, at the age of ninety-five, but the torment continued posthumously, thanks largely to Barbara Tuchman. Tuchman didn't merely repeat the myth of Angell's prediction. She repeated it in The Guns of August, a 1962 examination of the causes of the First World War that won the Pulitzer Prize, deeply impressed President John F. Kennedy, and became a massive best seller that shaped the popular understanding of the war that launched the twentieth century. Tuchman chiseled the myth into marble.
And so, decades after he died, Norman Angell continues to be mocked as the fool who said war was impossible, while the many fools who actually said war was impossible lie undisturbed in their graves. There really is no justice in the matter of predictions.
7.
When Prophets Fail.
When the facts change, I change my mind. What do you do, sir?
-JOHN MAYNARD KEYNES.
The cataclysm would be swift, terrible, and awesome. At sunrise on the morning of December 21, tectonic plates would lurch and buckle. The entire west coast of the Americas, from Seattle to Chile, would crumble into the Pacific Ocean. Floodwaters would surge across the heart of the continent, creating a vast inland sea stretching from the Arctic Circle to the Gulf of Mexico. Tens of millions would die and the United States would all but vanish.
Marian Keech, the American psychic who experienced this terrible premonition of the future, planned on being far away that fateful morning in 1954. Hours earlier, at the stroke of midnight, she and her small band of believers would be met by the aliens who had warned Keech of the coming doom. Together, they would board a flying saucer and zoom off into outer space.
History records that on December 21, 1954, the United States was not destroyed and Marian Keech did not leave the planet. Given these facts, it follows that Keech's prediction was wrong. How could it be anything else? The prediction was clear and the timing was precise. The predicted events did not happen, ergo, the prediction was wrong. No person of sound mind would dispute it. Or so one might think. But as psychologist Leon Festinger demonstrated in a legendary study, a mind deeply committed to the truth of a prediction will do almost anything to avoid seeing evidence of the prediction's failure for what it is.
In 1954, Festinger and his colleagues were working on a new psychological theory when they heard about Keech's prophecy and saw a chance to put their thinking to a real-world test. Posing as laypeople, the psychologists joined Keech's group, which they dubbed the "Seekers." (All the names used by Festinger were pseudonyms, as they didn't wish to subject Keech and her followers to more ridicule.) What they found were ordinary midwestern Americans. Marian Keech was a middle-aged housewife remarkable only for the strange beliefs she concocted from a variety of popular sources. "Almost all her conceptions of the universe, the spiritual world, interplanetary communication and travel, and the dread possibilities of atomic warfare can be found, in analogue or identity, in popular magazines, sensational books, and even columns of daily [news]papers," wrote Festinger and his colleagues. The Seekers may have let their imaginations run loose but they weren't in any sense mentally ill.
Among Keech's most ardent supporters were Thomas and Daisy Armstrong, a couple who had worked as Christian missionaries before a shared interest in the occult led them to UFOs and prophecies. When Keech had her vision of the disaster to come-she believed she could psychically "channel" the aliens, which allowed them to take control of her hand and write messages with a pen and paper-it was Daisy who claimed to find "corroborating evidence" in the literature on UFOs. And it was Thomas who brought others to Keech. As a physician, Thomas was a respected authority figure. "A tall man in his early forties, Dr. Armstrong had an air of ease and self-assurance that seemed to inspire confidence in his listeners."
Keech had little interest in spreading the word. The Seekers contacted the media, but only briefly. Mostly, they refused interviews. Even in the days leading up to December 21, when reporters from the major national newspapers and wire services were knocking on Keech's door, they were shunned. Not surprisingly, Keech's following grew slowly, to a total of thirty-three people who were affiliated to some degree. Of these, eight had committed to the prophecy by quitting a job or doing something else that showed they strongly believed the cataclysm would occur. These true believers were well aware of how much they had on the line. "I have to believe the flood is coming on the twenty-first," one said to a researcher on December 4, "because I've spent nearly all my money. I quit my job, I quit comptometer school, and my apartment costs me a hundred dollars a month. I have to believe."
The days and hours counted down. On what the Seekers believed would be the last night of the world as they knew it, they gathered in Keech's house. Midnight approached. "The last ten minutes were tense ones for the group in the living room. They had nothing to do but sit and wait, their coats in their laps," Festinger wrote. Midnight struck. Nothing happened. Keech's followers sat in silence. At five minutes past midnight, someone noticed that the time on a different clock was four minutes to midnight. Clearly, the first clock was running fast. All attention shifted to the second clock.
Three more minutes passed. "And not a plan has gone astray!" Keech squeaked nervously. A fourth minute slipped by and midnight struck again. Still, nothing happened. Where was the flying saucer? Channeling the aliens, Keech announced there had been a short delay. Again they sat in silence. "Occasionally someone shifted in his chair or coughed but no one made a comment or asked a question." The phone rang. A reporter wanted to know what was happening. No comment.
In the hours that followed, the tension dissolved into confusion. Maybe "time didn't mean anything," one man suggested. Maybe the event happened a thousand years ago. Maybe it will happen a thousand years from now. Others didn't accept that. Something had gone wrong. "Well, all right," Keech said finally. "Suppose they gave us a wrong date. Well, this only got into the newspapers on Thursday and people only had seventy-two hours to get ready to meet their maker. Now suppose it doesn't happen tonight. Let's suppose it happens next year or two years or three or four years from now. I'm not going to change one bit. I'm going to sit here and write and maybe people will say it was this little group spreading light here that prevented the flood. Or maybe if it's delayed for a couple of years there'll be time to get people together. I don't know. All I know is that the plan has never gone astray. We have never had a plan changed. And you'll see tomorrow the house will be full of them and we'll have an open house and I'll need every one of you to answer the phone and maybe they'll ask us to go on television. I'm not sorry a bit. I won't be sorry no matter what happens."
It was three A.M. The group looked back at the original prophecy. They had misread it, some decided. It was a mistake, for example, to read a reference to "parked cars" that would take them somewhere literally. Parked cars don't move. They're parked. So clearly that was symbolic language. Happily, at that very moment, the aliens sent a message via another channeler in the group. The message confirmed that "parked cars" was symbolic of the believers' physical bodies, and their physical bodies had indeed been present at midnight. The flying saucer in the prophecy was also symbolic. It stood for spiritual knowledge. Thus, the prophecy hadn't failed at all. It had been fulfilled.
Some eagerly agreed with the new interpretation, but Keech balked. The woman who channeled the new interpretation was offended. Do you have a better explanation? she huffed. No, Keech conceded. But "I don't think we have to interpret it, we don't have to understand everything. The plan has never gone astray. We don't know what the plan is but it has never gone astray." The group wasn't satisfied but Dr. Armstrong urged others to keep the faith. "I've given up just about everything. I've cut every tie. I've burned every bridge. I've turned my back on the world. I can't afford to doubt. I have to believe."
Keech started to cry. "They were all now visibly shaken," Festinger wrote, "and many were close to tears."
The Seekers milled about, bewildered and hurt. But at 4:45 A.M., Marian Keech excitedly asked everyone to gather in the living room. She had received a message from the aliens, she said. Or rather, she had a message from God-for He was the ultimate source of the words from above. "For this day it is established that there is but one God of Earth, and He is in thy midst, and from His hand thou has written these words," the message read. "And mighty is the word of God-and by His word have ye been saved-for from the mouth of death have ye been delivered and at no time has there been such a force loosed upon the Earth. Not since the beginning of time has there been such a force of Good and light as now floods this room and that which has been loosed in this room now floods the earth." The group had shown such faith that God had chosen to avert the catastrophe, Keech explained. So the prediction hadn't been wrong. In fact, it had been proved right! "This message was received with enthusiasm by the group," Festinger noted.
Keech channeled another message, which essentially repeated the first and directed the group to go out into the world and spread the word. At this, one man got up and left, but "the rest of the believers were jubilant." Of course, this explanation flatly contradicted the earlier message about the prophecy being "symbolic," but the group didn't see a contradiction because the earlier message was never mentioned again.
In the hours and days that followed, Keech and her slightly diminished band of followers were transformed. They issued press releases, published flyers, called reporters, and described in great detail to anyone who would listen how they had saved the world. In a channeled message, the aliens further directed Keech to lift the long-standing ban on photographs; the group should, in fact, "make special efforts to please photographers."
The Seekers also became keenly interested in the news itself. Keech excitedly noted a story in the newspaper about an earthquake that had happened five days earlier in Nevada. More evidence the prediction was right! And when they learned that earthquakes had struck Italy and California on December 21 itself, the Seekers were delighted. "It all ties in with what I believe," Keech proudly declared.
To an external observer, the prediction had clearly and conclusively failed. But to Marian Keech and the others committed to its truth, this simply could not be. The prediction was true. And nothing could prove otherwise.
MAKING EVERYTHING FIT.
The theory Leon Festinger was developing in 1954 is now a foundational concept of modern psychology. It is cognitive dissonance.
The human mind wants the world to make sense, Festinger noted. For that to happen, our cognitions-our thoughts, perceptions, and memories-must fit together. They must be consonant. If they aren't, if they clash, and we are aware of the contradiction, they are dissonant. And we can't be comfortable in the presence of cognitive dissonance. It has to be resolved. Distraction-"Think about something else!"-is the simplest solution. But sometimes it's impossible to ignore the thoughts crashing into each other and we have to deal with it. That may take the creation of new cognitions or the alteration of existing ones, or they may have to be forgotten altogether. However it's done, it must be done, because dissonance is a highly aversive emotional state. Like a bad headache, it must be made to go away.
Say you're a cop walking the beat. You see a man park a car and get out. He seems a little unsteady on his feet. Is he drunk? You walk over. He makes a flippant comment and you shove him backward. He stumbles and hits his head. He's bleeding. Now what? You're a decent person and a trained professional-and you hurt a man for no good reason. Those cognitions do not fit together. Worse, they go right to the heart of your self-definition. The cognitive dissonance is throbbing. How do you resolve it? You can't take the shove back, and you can't forget it, at least not instantly. But what you can do is change your perception of the man and the incident. Maybe the way he spoke to you was really offensive, aggressive, almost threatening. Maybe he deserved it. Like morphine easing a headache, the new cognition-he deserved it-dissolves the dissonance, and you feel better.
We engage in this sort of rationalization all the time, though it's seldom as explicit and conscious as I've made it out to be. It happens, for example, anytime we make a difficult decision. By definition, difficult decisions involve factors pointing in opposite directions. Should I buy this stock? Should I quit my job? Should I get married? In every case, there are reasons for and against, and so, no matter what choice we finally make, we will have done it despite reasons that suggest we should not. That creates dissonance and is the reason we rationalize these tough calls by playing up the factors that supported our decision while belittling those that didn't. This transforms the decision. Before, it was hard, but afterward we are sure that the decision was the right one. In the classic demonstration of this tendency, psychologists interviewed people lined up to place a bet at a racetrack. "How likely is it you will win?" they asked. On average, people gave themselves a "fair" shot. They also interviewed people after they placed their bets; they said they had a "good" chance of winning. The researchers were particularly amused when a man who had been interviewed on his way to placing a bet sought them out immediately after putting his money down. He wanted to let them know he'd changed his mind. He had rated his chance to be fair, he said, but now he was sure it was "good. No, by God, make that an excellent chance."
But things get really interesting after the race is run. At that point, there's hard evidence about the decision. What happens if the evidence says, "You were wrong"? Sometimes people don't fool themselves. They see the evidence for what it is and they say, "I was wrong." But that reaction is much less likely than one would expect, because people tend to be "cognitive conservatives"-meaning they stick with existing beliefs far more than is reasonable.
In one experiment, researchers showed people a pair of notes. One of these is a genuine suicide note, they said, but the other is a fake. See if you can tell which is which. After guessing, they were told whether they were right or wrong. This process was repeated twenty-five times. Or at least this is how it seemed to the test subjects. In reality, the feedback was rigged. Scores were randomly assigned: Some people were told they were excellent at the task, having correctly spotted twenty-four out of twenty-five of the genuine suicide notes; others were told they were average, getting seventeen right; and the rest were told they had done a lousy job, getting only ten right. But after giving people this bogus evidence, and letting them form a belief about how good they were at spotting genuine suicide notes, the researchers let them in on the secret. These results are meaningless, they said. They tell you nothing about how good you would be at this task. Then the researchers asked three questions: How many correct guesses do you think you actually made? How many correct guesses do you think the average person would make? If you did a similar test a second time, how many correct guesses would you make? The researchers found that people who had been told falsely that they were excellent at the task tended to rate their ability above average, while those who had been told they were lousy at it rated themselves to be below average-which clearly demonstrated that the initial results swayed perceptions even though everyone knew the initial results were meaningless. "It is clear that beliefs can survive potent logical or empirical challenges," wrote psychologists Lee Ross and Craig Anderson in a summary of the many studies on the subject. "They can survive and even be bolstered by evidence that most uncommitted observers would agree logically demands some weakening of such beliefs. They can even survive the total destruction of their original evidential bases."
The people in the suicide-note experiment obviously did not have a strong commitment to what they believed about their suicide-note identification skills. That makes the persistence of their beliefs all the more remarkable because, thanks to cognitive dissonance, how committed we are to a belief makes a big difference when we are faced with proof that the belief is wrong. A casual bettor at the racetrack who puts five dollars on a horse he thinks will win doesn't have a lot on the line. He never claimed to be an expert, and it's only five bucks. But someone who bets a thousand dollars, who lives for the racetrack, who prides himself on knowing horses, who brags to anyone who will listen that his pick is a sure thing-he's going to have a much harder time saying, "I was wrong." In psychological terms, the cognitive dissonance experienced by the first bettor will be mild. Confront him with the evidence-"You were wrong! Admit it!"-and he may just shrug and nod. Yes, he made a bad call. So what? But the second bettor will suffer the cognitive equivalent of a migraine. Instead of admitting he was wrong, he is far more likely to rifle the medicine cabinet of his mind for a soothing rationalization. Maybe the track was muddier than he'd thought, or the jockey made a stupid mistake. Whatever. There has to be an explanation. Anything to make the cognitive pain go away.
An excellent place to see how strong commitments can skew our thoughts and perceptions is in the political arena. To many people, politics is something they think about only during election campaigns or not at all. But for party stalwarts or ideological warriors, politics is a constant passion that matters deeply. It is a big part of their personal identity, and these committed citizens who follow politics closely tend to be very knowledgeable. So who's more susceptible to mistaken or distorted thinking about politics? I suspect most people would assume that the less interested, less informed, and less involved citizens are, but cognitive dissonance theory suggests the opposite: It's the more interested, more informed, and more involved citizens who are more committed to their beliefs. Greater commitment produces more cognitive dissonance when facts don't fit beliefs, and this prompts more effort to rationalize the problem away-to make the facts fit the beliefs. And this is just what researchers have found, many times. The deficit of the U.S. federal government decreased 90 percent between 1993 and 1996, so there was no question what the right answer was when Americans were asked in 1996 if the deficit had decreased, increased, or stayed the same. And yet only one-third of respondents said the deficit had declined. Forty percent said it had increased. Now, some of that result comes from simple ignorance. Polls consistently find large chunks of the public get basic factual questions wrong. More revealing was the partisan split: Slightly more than half of Republicans said the deficit had increased a lot or a little, compared to 37 percent of independents and 31 percent of Democrats. Why the difference? The president was a Democrat, and the thought of a Democrat successfully defeating the deficit did not sit easily in Republican minds. And, no, this isn't just a Republican thing. Surveys in the late 1980s, years after the decade-long plague of inflation had finally been stamped out, revealed a similar partisan split-except this time the president was a Republican and it was Democrats whose perceptions were unreasonably negative. In the face of such bias, mere facts don't stand a chance. "Political knowledge does not correct for partisan bias in perception of 'objective' conditions, nor does it mitigate the bias. Instead, and unfortunately, it enhances the bias; party identification colors the perceptions of the most politically informed citizens far more than the relatively less-informed citizens." And bear in mind that this research didn't separate less committed Democrats and Republicans from their fiercer comrades. It's a safe bet that if it had, it would show bias growing in lockstep with commitment: In the mind of the True Believer, belief determines reality, not the other way around.
After Japan attacked the United States at Pearl Harbor, General John DeWitt was sure American citizens of Japanese origin would unleash a wave of sabotage. They must be rounded up and imprisoned, he insisted. When time passed and there was no sabotage, DeWitt didn't reconsider. "The very fact that no sabotage has taken place is a disturbing and confirming indication that such action will be taken," he declared. As the reader may realize, DeWitt's reasoning is an extreme example of the "confirmation bias" discussed earlier. Confirmation bias is cognitive dissonance at work: Having settled on a belief, we naturally subject evidence that contradicts the belief to harsh critical scrutiny or ignore such evidence altogether. At the same time, we lower our standards when we examine supportive evidence so that even weak evidence is accepted as powerful proof. And if, like DeWitt, we're desperate to shore up a crumbling belief we are deeply committed to, we are likely to drop our standards altogether-and say something as silly as "the absence of evidence that I am right proves I am right."
When the clock struck midnight on December 21, 1954, Marian Keech was even more compelled to defend her belief than DeWitt had been in 1942. Her commitment to the prediction was massive. It was the centerpiece of her understanding of how the universe worked; her name and prophecy were in newspapers across the country; people had abandoned jobs and families to join her. When the flying saucer didn't appear and the sun rose on an ordinary day, the cognitive dissonance she experienced must have been knee-buckling. Keech's more ardent followers suffered similarly, because they, too, had completely committed themselves. Only the peripheral members of the group, whose belief and commitment were modest, escaped severe mental turmoil.
For Leon Festinger, that was the moment of truth. The reaction of Keech and her followers to that moment would put cognitive dissonance theory to the test. In the months leading up to December 21, 1954, Festinger made three of his own predictions: First, those Seekers who were only modestly committed would accept that the prediction had been proved wrong and would drift away; second, those who were heavily committed would find rationalizations that would allow them to maintain their belief in the prediction, even if they had to twist themselves into mental knots to do it; third, the remaining faithful would greatly boost their efforts to spread the word in order to have their beliefs affirmed by the interest and support of others.
That is precisely what happened. Leon Festinger was a better prophet than Marian Keech.
EXPERTS ON THE DEFENSIVE.
One might think the experts who took part in Philip Tetlock's landmark study suffered little cognitive dissonance when they were told just how bad their predictions had turned out to be. After all, Tetlock had carefully drafted his questions so there would be no doubt about whether a prediction was right or wrong after the fact, and the experts in his study had been guaranteed anonymity, so they wouldn't be publicly humiliated. And Tetlock is a quiet and gentle man who tried, as he says, "to put it as nonjudgmentally as I could." Surely, under these encouraging circumstances, the experts would look at the evidence, shrug, and say, "I guess I was wrong."
Some did. But Tetlock noticed that those who were frank about their failure tended to be those who didn't think prediction was even possible. "They would say, 'I didn't think I could do predictions but I was humoring you and I wanted to be helpful,'" Tetlock recalls with a laugh. These experts tended to be foxes, not surprisingly. Much less forthcoming were experts who thought prediction was possible, especially hedgehogs. When Tetlock reviewed their failed predictions with them, they dug in and fought back with an impressive arsenal of mental defenses. They weren't really wrong, they insisted. Things almost turned out as they had predicted. Or they still might. And anyway, the prediction got thrown off by an "exogenous shock" that no one could possibly have foreseen and that shouldn't be held against them. Reviewing the excuses made by Tetlock's experts, psychologists Carol Tavris and Elliot Aronson offered this succinct and withering summary: "Blah blah blah."
This defensiveness doesn't surprise psychologists because, while the experts in Tetlock's experiment may have had nothing on the line in public, they still had a significant commitment at stake. "Our convictions about who we are carry us through the day and we are constantly interpreting the things that happen to us through the filter of our core beliefs," Tavris and Aronson wrote. "When experts are wrong, the centerpiece of their professional identity is threatened," and that generates cognitive dissonance. It may not have been as severe as that suffered by General DeWitt or Marian Keech, but it was enough to inspire some vigorous rationalizing. All humans are talented rationalizers-as Michael Gazzaniga demonstrated with his research on the "Interpreter"-but experts are particularly good at it. Not only are their brains stuffed with information about the subject at hand, giving them more raw material to work with, but experts are experienced at generating a hypothesis, assembling arguments, and making a case. If they do not restrain their thoughts with self-scrutiny and reason, they can easily spin failure until it seems meaningless, or very close to a success, or even a triumphant vindication.
The forms rationalization can take are limited only by human creativity, and we are a very creative species. But two varieties of rationalizing deserve special mention because they are heard so often when predictions fail.
The first line of defense of the failed prophet-whether preacher or Ph.D.-is to insist that while it may appear that the time frame of the prediction has passed, a closer examination reveals the clock is still ticking. Marian Keech and the Seekers did this quite literally when they stopped watching the clock that showed midnight had come and gone and turned to one that showed there was still time left. This sort of evasion often happens in apocalyptic religious movements. In the early nineteenth century, for example, an American sect called "Millerites" identified 1843 as the year in which biblical prophecies of the end of the world would be fulfilled. When the sun rose on January 1, 1843, William Miller, the movement's founder, refined his forecast by declaring that the end would come sometime between March 21, 1843, and March 21, 1844, "according to the Jewish mode of computation." For various reasons, many believers became certain the big day was April 23, 1843. When that came and went, they chastised themselves for having made an obvious error. They then decided the real date was the last day of the year. When it, too, passed uneventfully they settled on March 21, 1844. Again, nothing happened. "There was strong and severe disappointment among the believers," Leon Festinger wrote in When Prophecy Fails, "but this was of brief duration and soon the energy and enthusiasm were back to where they had been before and even greater." Another date was determined-October 22, 1844-and belief was so strong that some farmers didn't bother to plant their fields that summer, knowing they would be in heaven before winter came. When the world still didn't go down in flames, the Millerites finally conceded that perhaps their core belief was not entirely correct. Amid acrimony and accusations, the sect finally dissolved.
A second major line of defense lies in memory, a fact that is not often recognized because far too many people misunderstand what memory is and how it works. We think memory is a collection of permanent records, like a shoe box full of photographs. A photo may be lost now and then. And some are a little fuzzy and hard to make out. But generally, we assume, memory is an accurate and unchanging reflection of past experiences and feelings. For better or worse, the reality is very different. Memory is an organic process, not a recording. While memories can remain sharp and fixed for decades, they can also evolve, sometimes subtly, sometimes dramatically. These changes aren't random. Memories serve the present: We misremember in ways that suit the needs of the moment. Have a falling-out with an adult sibling and you can be sure your childhood memories of that sibling will grow darker, with good memories fading and bad memories growing more vivid; repair the relationship and the memories will get sunnier too. Change your mind about an issue and your memories are likely to change as well, leading you to erroneously believe that your current opinion is the opinion you have always held-which is why people very commonly deny having changed their minds when they clearly have. As the years pass and we learn and grow and change, our old selves can even become strangers to our present selves. In 1962, researchers asked seventy-three fourteen-year-old boys questions about such emotional subjects as their families, sexuality, politics, and religion. More than three decades later, the same boys, now forty-eight years old, were asked to search their memories and recall how they'd felt when they were fourteen. "The men's ability to guess what they had said about themselves in adolescence was no better than chance." Memories can even be cobbled together out of nothing more than desire and scraps of real memories. Psychologist Carol Tavris describes a vivid memory she has of her father reading her a children's storybook called The Wonderful O. It is a cherished image, as Tavris's father died when she was very young and this was a direct link to him. It was also false. Tavris was stunned to discover that The Wonderful O was published after her father had died.
The obvious sincerity of these distorted memories underscores an absolutely essential point that is too often missed: There is nothing dishonest about any of this. We can't consciously change our memories. Nor can we distinguish between a changed memory and the original. When we make a statement based on what we recall, we assume that our memories are accurate. If that assumption is incorrect, the statement will be false, but it will not be a lie. This distinction was lost in 2008 when Hillary Clinton-who was campaigning for the Democratic presidential nomination-told audiences about the time her plane landed at a Bosnian airport in the 1990s and she had to run from sniper fire. It was a thrilling story. But when reporters looked up video of the event, they found nothing more exciting happened that day than a little girl giving flowers to a smiling Hillary Clinton. Clearly, the story was false, so Clinton was hammered for lying. But did that make sense? Clinton had visited Bosnia as first lady of the United States. Her every footstep had been recorded by TV cameras and stored in archives. She knew that. She also knew that her every word on the campaign trail was being scrutinized by the media and the opposition. Why would she tell an easily exposed lie for so little gain? A much more plausible explanation is that her memories evolved to serve the needs of the present-a present in which Clinton was constantly making speeches and giving interviews in which she described herself as an experienced, tough, "battle-tested" leader. She didn't lie. She really believed what she said. And no one, I suspect, was more shocked by that video than Clinton herself.
The ease with which memories change to suit the needs of the present makes them an ideal tool for resolving cognitive dissonance. If, for example, an esteemed expert thinks the sudden collapse of the Soviet Union is close to impossible, and then the Soviet Union collapses, that expert may pay a cognitive price-unless his memory of what he believed before the collapse undergoes a suitable evolution. Maybe he hadn't thought a collapse was close to impossible; maybe he had thought it was a significant possibility, or even that it was more likely than not that the Soviet Union would fall. If the expert's memory recalls it that way, he could say that he had seen the whole thing coming-and there wouldn't be a trace of cognitive dissonance.
This may seem extreme, even absurd. It's hard to believe that people can delude themselves so badly. But it is precisely what Philip Tetlock discovered when he asked his experts in the 1980s about the likelihood of a Soviet collapse and then went back to them, after the collapse, and asked them to recall what they had predicted. In every case, the experts remembered their earlier predictions incorrectly. And always the mistake they made leaned in the same direction: They remembered themselves rating the likelihood of a Soviet collapse much higher than they actually did. "It was a big distortion," Tetlock says. On average, the shift was about 30 or 40 percentage points. So an expert who had actually thought there was only a 20 percent chance of a Soviet collapse would remember having thought the odds were 50/50 or even that it was more likely than not. There were more extreme cases in which experts who had said there was only a 10 or 20 percent chance of a collapse remembered themselves rating it a 60 or 70 percent chance. There was wide variation in the extent to which Tetlock's experts suffered hindsight bias, of course, but the variation wasn't random-hedgehogs tended to be more afflicted than foxes.
As startling as these results may be to laypeople, they replicate what psychologists have known for decades. "Hindsight bias" is a well-documented phenomenon: Once we know the outcome of anything, we tend to think that outcome was more likely to happen than we would have judged it to be without knowing the outcome. It's a potent force that can even alter our memories of how we thought and felt in the past. Psychologist Baruch Fischhoff demonstrated the bias by giving students information about the early-nineteenth-century war between Britain and the Gurkhas of Nepal and asking them, based on that information, to estimate the likelihood of the war ending in a British victory, a Gurkha victory, a stalemate with no peace settlement, or a stalemate with a peace settlement. The information given to one group included only the military forces available to the two sides and other factors that may have had an effect on the result. A second group was given this information along with the actual outcome-a British victory-and asked to estimate the chances of each outcome without regard to what actually happened. Knowing how history played out made all the difference: Those who knew the British won rated that outcome as much more likely than those who did not. Fischhoff confirmed his findings by telling other groups that the actual outcome was a Gurkha victory or one of the two stalemate results. In each case, knowing the supposed outcome drove up the estimated likelihood of that outcome. In another version of the experiment, the researchers asked people not to let their awareness of the outcome influence their judgment. It still did. In a final experiment, Fischhoff had students estimate the likelihood that certain events in the news would take place; months later, well after it was clear that the events had or had not taken place, he went back to the students and asked them to recall how likely they had judged those events to be. The hindsight bias was obvious: If an event had actually occurred, the students recalled thinking it more likely than they actually had; if it hadn't occurred, they underestimated how strongly they had felt it would.
Hindsight bias is universal, but the degree to which we suffer it varies. After a football team wins a game, for example, all fans are likely to remember themselves giving the team better odds to win than they actually did. But researchers found they could amplify this bias simply by asking fans to construct explanations for why the team won. So merely knowing an outcome gets hindsight bias started, but having a satisfactory explanation for it really cranks the bias up. "Of course the team won. The defense is third in the league and the other team's star was injured a week ago. It was obvious they'd win. I knew it all along." This makes experts particularly susceptible because, by definition, they know a great deal about their subjects, and for most of them identifying causal connections is almost habitual. So it is only to be expected that they are better than laypeople at constructing after-the-fact explanations for why an event happened.
Tetlock also conducted hindsight studies using less spectacular events than the collapse of the Soviet Union-whether Quebec would separate from Canada, what would happen to apartheid South Africa, whether the European monetary union would come together, and so on-and he found that the hindsight bias wasn't as extreme as it had been in the case of the Soviet collapse. So it seemed cognitive dissonance was also in play. An expert who spends his life studying international politics and who believes his expertise gives him insight into the future won't be pleased when he fails to foresee an important event, but being surprised by one of the biggest events of the century is a huge challenge to his very identity, which sets off a five-alarm blaze of cognitive dissonance and a proportionate mental response. Memories aren't just tweaked, they are airbrushed top to bottom. Just as the Soviets themselves did, the expert's mind drastically changes the recorded past to suit the needs of the present, and he is left with the comforting belief that he knew it all along.
Unlike Philip Tetlock's experts, the experts on talk shows, best-seller lists, and the op-ed pages of major newspapers are far from anonymous. When they make predictions, their public reputations are at stake. Their connections, their meetings with movers and shakers, their invitations to Davos, are all on the line. So is cash. An expert who is perceived to provide genuine insight into the future owns a golden goose, but if that perception is lost, so is the goose. Everything that Tetlock's experts had on the line in his experiment, these experts also have, plus so much more.
"Suppose an individual believes something with his whole heart," wrote Leon Festinger. "Suppose further that he has a commitment to this belief, that he has taken irrevocable actions because of it; finally, suppose that he is presented with evidence, unequivocal and undeniable evidence, that his belief is wrong; what will happen? The individual will frequently emerge, not only unshaken, but even more convinced of the truth of his beliefs than ever before." In 1955, Marian Keech and her followers ill.u.s.trated this profound insight. Today, plenty of experts and their followers keep proving Leon Festinger right.
JAMES HOWARD KUNSTLER.