Of course, house prices couldn't have increased unless banks were willing to lend the necessary funds. The amount of money a bank is prepared to lend depends on its balance sheet. Since the loans are backed by home equity, rising house prices improve banks' balance sheets, so they put even more money to work by lending it out. This in turn drives house prices higher. In the same way, borrowers with large mortgages find that their own balance sheets - i.e. their personal net worth - have increased accordingly, so they can borrow even more money to buy a bigger house or improve their lifestyle.
Again, these feedbacks also work on the way down. Downward movements in house prices are quickly magnified by momentum sellers, media reports, and the tightening of credit as banks try to withdraw from the mortgage market. According to the comparison site Moneysupermarket.com, the number of mortgage deals available from UK banks fell from 27,962 in 2007 at the boom's peak, to only 2,282 in July 2009.10

Another source of positive feedback is risk management tools such as Value at Risk (VaR). Banks are required by regulators to use VaR to compute their maximum expected losses, based on the volatility of the relevant assets over the past year or so. The number determines how much capital they are required to hold in reserve. If markets are calm and operating smoothly, then the computed risk is low, and banks are free to leverage up; but if markets are stormy, then computed risk goes up and banks have to sell assets to bring their VaR back within an acceptable limit.
Because banks all use the same formula, with only minor adjustments, when volatility increases they are all required to sell assets at the same time. This creates further volatility, which again increases VaR, which means that more assets need to be sold to meet the regulatory requirement, and so on. The well-intentioned risk formula can then end up making the markets more risky. This type of synchronised deleveraging was one of the causes of market turbulence in 2007.11 As discussed in the next chapter, the VaR method has many other problems; but the point here is that any such numerical formula that computes risk based only on past price fluctuations will destabilise the markets, so long as it is uniformly adopted.
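The destabilising arithmetic is easy to sketch. The following toy simulation (a deliberately simplified model, not any bank's actual system; the position size, VaR limit, and price-impact factor are all invented for illustration) shows how a single volatility shock can trigger several rounds of forced selling once every sale feeds back into measured volatility:

```python
# Toy model of the VaR deleveraging feedback loop described above.
# Parametric 99% VaR is taken as position * volatility * 2.33; when a
# volatility shock pushes VaR over the limit, the bank sells down to the
# limit, but the selling itself is assumed to raise measured volatility,
# forcing another round of sales. All numbers are illustrative.

Z_99 = 2.33  # normal z-score for a 99 per cent confidence level

def parametric_var(position, volatility):
    # Maximum expected one-day loss at 99% confidence (parametric form)
    return position * volatility * Z_99

def forced_sale(position, volatility, var_limit):
    # Amount that must be sold to bring VaR back down to the limit
    target = var_limit / (volatility * Z_99)
    return max(0.0, position - target)

position, volatility, limit = 100.0, 0.01, 3.0
# Calm market: VaR = 2.33, comfortably inside the limit of 3.0.

volatility = 0.02  # volatility shock: VaR jumps to 4.66, over the limit
rounds = 0
while parametric_var(position, volatility) > limit + 0.01 and rounds < 20:
    sale = forced_sale(position, volatility, limit)
    position -= sale
    volatility *= 1 + 0.01 * sale  # assumed price impact of the fire sale
    rounds += 1
    print(f"round {rounds}: sold {sale:.1f}, position {position:.1f}, "
          f"volatility {volatility:.4f}")
```

Each round of selling raises measured volatility, which pushes VaR back over the limit and forces a further sale; the cascade only dies out as the sales shrink.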
Bank runs such as the one that hit Northern Rock are the ultimate example of destructive feedback. Banks can function only if they have the trust and confidence of their customers, and of other banks. If a rumour gets out that a bank is in trouble, for whatever reason, then two things will happen: its customers will start trying to get their money out, and other banks will suddenly stop answering their calls. Even if the bank wasn't in real trouble, it soon will be. Its shares don't decline in an orderly fashion - the market for them just suddenly disappears. One of the main roles of central banks is to step in and act as the lender of last resort in such situations.
Positive feedback is also apparent in the wild swings of the currency markets. A favourite occupation of currency traders is the so-called carry trade. This involves borrowing in a low-yield currency, such as the Japanese yen, or (at the time of writing) the American dollar, and investing the funds in a high-yield currency. The trader pockets the difference in interest rates. The only risk is that the high-yield currency will depreciate relative to the loan currency, thus making the loan harder to pay off. If, for example, one of the governments involved adjusts its interest rates, then what can happen is that traders rush to unwind their positions, the high-yield currency depreciates in response, the trade becomes even less attractive, and the currencies suddenly jump to a new level.
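A small worked example makes the carry trade's asymmetry concrete (all the rates, amounts, and exchange rates below are invented for illustration):

```python
# Illustrative carry-trade arithmetic: borrow 1,000,000 yen at 0.5%,
# convert it, and invest at 8% in a high-yield currency for one year.
borrowed_jpy = 1_000_000
jpy_rate, high_rate = 0.005, 0.08
fx_start = 100.0  # yen per unit of the high-yield currency at the outset

invested = borrowed_jpy / fx_start        # 10,000 units bought
proceeds = invested * (1 + high_rate)     # units held after a year
owed_jpy = borrowed_jpy * (1 + jpy_rate)  # 1,005,000 yen to repay

def profit_jpy(fx_end):
    # Profit depends entirely on the exchange rate at unwinding
    return proceeds * fx_end - owed_jpy

print(profit_jpy(100.0))  # rate unchanged: roughly 75,000 yen profit
print(profit_jpy(90.0))   # 10% depreciation turns it into a sizeable loss
```

The interest differential is pocketed only if the exchange rate holds; a modest depreciation of the high-yield currency wipes out the gain, which is why unwinding positions can snowball.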
Positive feedback is therefore an intrinsic and pervasive feature of the economy that appears in many different forms. It has been the driving force in asset price bubbles and crashes throughout financial history, with only the theme changing. The Dutch tulip mania of 1637 saw the newly introduced bulb become one of the hottest commodities of all time, as prices grew to insane heights before suddenly wilting. In 1720, the South Sea bubble was driven by speculation in shares of the South Sea Company, which held a monopoly on trade with Spain's South American colonies. Not much trading was going on, but rumours of access to unlimited supplies of South American gold meant that the stock rose from 175 to over 1,000 in a few months. It then collapsed to 135. In the 1840s, it was the railway mania; at the end of the 20th century, we had the dot-com bubble.
Such manias and their accompanying crashes are dramatic; however, the presence of positive and negative feedback loops means that even in more apparently stable times the market is constantly reacting to itself. In part this is because markets are composed of people who reflect on current conditions and respond to them, but it's interesting to note that this reflexivity, to use George Soros' term, is a property of complex organic systems in general. Biological systems like our own bodies, or ecosystems, or indeed the biosphere as a whole, are also constantly evolving and adapting. In fact, a characteristic of such systems is that they operate at a condition far from equilibrium, in the sense that their components are constantly being churned around rather than relaxing to a state of stasis. The only systems that achieve stability are inert objects, but the economy is very much alive.
Having a Minsky moment.
In comparing the achievements of Jevons et al. with those of Maxwell, it is easy to see why economics is often accused of suffering from "physics envy."12 It is a bit of a stretch to compare, as Schumpeter did, the "classic synthesis" of Walras with the synthesis of the laws of electromagnetism. At the same time, though, I have a good deal of admiration for the founders of neoclassical economics. They were smart, energetic, politically engaged, and interested in different disciplines. They wanted to use techniques from science and engineering to understand the economy and make a better world. The tragedy is that mainstream economics did not develop much beyond their initial vision of a stable economy governed by simple mechanical laws.
Perhaps the best-known proponent of the idea that markets are unstable was the American economist Hyman Minsky. Before he died in 1996, Minsky was one of the few economists to speak out against deregulation and the risk of expanding credit. According to his Financial Instability Hypothesis, there are three types of borrowers: hedge, speculative, and Ponzi. Hedge borrowers can make payments on both the interest and the capital. A person with a traditional mortgage is a hedge borrower. Speculative borrowers can only service the interest payments, like a homeowner with an interest-only mortgage. Ponzi borrowers (named for the famous swindler Charles Ponzi) can't service interest payments but rely on the asset increasing in value. They are like a subprime mortgage-holder with no income, no job, but plenty of hope.
According to Minsky, in prosperous times debt tends to accumulate, first among hedge and speculative borrowers, and finally with Ponzi borrowers, as a result of positive feedback: "Success breeds a disregard of the possibility of failure."13 However, debt becomes increasingly unsustainable, until finally the economy reaches a crisis point, now known as the Minsky Moment. The first to crash are the Ponzi borrowers, then the speculative borrowers, and finally even the hedge borrowers may be brought down. In the language of nonlinear dynamics, the Minsky Moment is the point where feedback changes direction to drive the markets down.
Minsky was considered an outsider and a maverick in the economics profession. In 1996, one reviewer wrote that his "work has not had a major influence in the macroeconomic discussions of the last thirty years."14 Until 2008, most economists probably thought that Minsky Moment was a synth pop group from the '80s.
Control theory and nonlinear dynamics are becoming better known in economics circles - there is now a Society for Nonlinear Dynamics and Econometrics, with its own journal - but they have perhaps had their greatest impact in the area of business forecasting.15 As mentioned in Chapter 1, one reason the economy is so hard to predict is its emergent properties, which defy reductionist analysis. An equally important problem for forecasters is the interplay between positive and negative feedback loops. For every trend, it seems, a counter-trend soon develops; for every driver of a new technology, there is a blocker.
Consider a business like Facebook. This type of social networking site is an excellent real-world example of an emergent property. When fast computers first appeared, people predicted that they would lead to all kinds of amazing things like shorter working hours, but few predicted that one of their main applications would be social networking. Facebook was founded in 2004, and its initial growth was extraordinary. Every new user made the network larger and more attractive to others, in a self-reinforcing positive feedback loop. In 2009 it had around a quarter of a billion users. However, its popularity is now probably peaking (I know this because I was recently talked into joining, which is a sure sign that it has become passé). Competitors appear and steal market share, or fashions change.
The difficulty for business forecasters is trying to guess how this balance between positive and negative feedback will play out - and what will be the next big thing. Precise forecasts are impossible; but studies and model simulations can still be useful to explore possible future scenarios and find ways to improve performance.16
Out of control.
It might seem strange that control theory has not had wider influence on neoclassical theory, given that they were both founded at the same time, and that market economies are famed for their creativity and dynamism, which often seem the opposite of stability. It makes sense only when you think of economics not as a true scientific theory, but as an encoding of a particular story or ideology about money and society. Viewed this way, equilibrium theory is attractive for three reasons. Firstly, it implies that the current economic arrangement is in some sense optimal (if the economy were in flux, then at some times it must be more optimal than at others). This is a nice thing for professors at, say, Harvard Business School to teach their students before they head out into the upper echelons of the business world.
Secondly, it keeps everyone else in the game. As discussed further in Chapter 7, the benefits of increased productivity in the last few decades have flowed not to workers, but to managers and investors. If academics and the government were to let slip that the economy is unstable and non-optimal, then the workers might start to question their role in keeping it going.
Thirdly, it allows economists to retain some of their oracular authority. If the market were highly dynamic and changeable, then the carefully constructed tools of orthodox economics would be of little practical use. Efficient market theory, for example, makes no sense unless equilibrium is assumed. "Tests of market efficiency are tests of some model of market equilibrium and vice versa," according to Eugene Fama. "The two are joined at the hip."17

The belief in stability leads to a kind of myopia about markets. During the 1990s and the early 2000s, the world economy appeared to be growing in a smooth, steady, and sustainable way. Markets, it seemed, could regulate themselves. In the UK, chancellor (and future prime minister) Gordon Brown announced "the end of boom and bust." In the United States, there was talk of the "Great Moderation." The triumvirate of Federal Reserve chairman Alan Greenspan and successive treasury secretaries Robert Rubin and Larry Summers successfully opposed the regulation of financial derivatives on the grounds that "this would cause chaos," according to one insider.18 But as shown by Maxwell's less famous set of equations, chaos is a feature of many dynamical systems. The economy was running fast and free, like a steam engine with the governor removed, careering out of control towards its date with the Minsky Moment.
As systems biologist Hiroaki Kitano notes: "Robustness can only be controlled with a good understanding and thorough analysis of system dynamics."19 He was speaking of cancer biology, but the statement applies to that larger biological system known as the economy. Regulatory feedback loops are necessary for control, and without them the result can be dangerous instability.20 One lesson from the recent crisis is that central banks need to pay as much attention to the destabilising effects of excess credit and asset price growth as they do to things like inflation. Available instruments include margin requirements and minimum capital requirements on banks, which can be dynamically controlled in response to market conditions and feedback. When markets are euphoric, both can be tightened; when markets are depressed, they can be loosened (note that this is the exact opposite of what happens with VaR).21 Use of the controls should be flexible, instead of rule-based, to avoid being gamed for commercial advantage. Of course financial institutions will resent having such regulatory constraints imposed upon them, which is a problem because they have an incredible amount of political influence.
The main message for investors is to remember that trust and risk are coupled together in an inherently unstable way. When trust is high, firms take on more leverage, and investors get drawn into the market. The economy appears strong, but risk is growing. After a disaster, trust evaporates, but risk may actually be at its lowest. It's impossible to time the markets, but one can avoid over-leveraging during the good times, or becoming overly cautious during the bad times.
A property of complex systems like the economy is that they can often appear relatively stable for long periods of time. However, the apparent stability is actually a truce between strong opposing forces - those positive and negative feedback loops. When change happens, it often happens suddenly - as in earthquakes, or financial crashes. As seen next, it is when considering risk that the assumption of equilibrium can be particularly misleading and dangerous.
CHAPTER 4.
THE EXTREME ECONOMY.
The same flaw found in risk models that helped cause the financial meltdown is present in economic models invoked by "experts." Anyone relying on these models for conclusions is deluded.
Na.s.sim Nicholas Taleb and Mark Spitznagel (2009).
There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely certain.
A.N. Whitehead (1911).
Economists are taught that risk in the economy can be managed using well-established scientific techniques, unless of course something really unusual happens. The problem is, such so-called extreme events aren't quite as unusual as theory would suggest: in the last quarter-century we've had Black Monday, the Asian financial crisis, the Russian financial crisis, the dot-com bust, and the recent credit crunch. This chapter looks under the hood of the risk models used by banks and other financial institutions, and finds that they rely on dangerous assumptions - stability, independent investors, and so on - that put our savings, pensions, and businesses in danger.
In October 2008, investors stared into the abyss. For a while it appeared that the entire financial system was on the verge of collapse. It was as if the whole world had gone to the cash machine, typed in the PIN, hit withdraw, and seen a blinking sign - INSUFFICIENT FUNDS.
Were we going to lose our jobs? Our houses? Our retirement nest-eggs? Would there be a complete breakdown, a return to the stone age? Would all social order come to an end? Would we end up scavenging for food in the forest and living off worms and grubs?
Of course, the situation soon improved, at least for most. Those who had scavenged for food prematurely had to return to their homes, looking sheepish. But the near-death episode was enough to shake anyone's faith in the financial system. And people soon started to ask how it was that the economy, which for years had been doing so well, could have been building up such unseen risks. Pensions and homes that had appeared to be safe and boring investments actually turned out to be quite exciting gambles. Who knew? Could it happen again? And didn't something like that happen before, come to think of it?
To answer those questions, and see what the future might hold, we'll again need to look back into the past, and in particular the history of risk. Most risk models are based on a 350-year-old mathematical object, first developed for gamblers. Unfortunately it gives the wrong answers, but we'll show how that can be fixed.
Games of chance.
Our desire to predict the future is mirrored by a desire to control it. The reason we want to foresee events is so that we can position ourselves correctly, and even influence the future.
Pythagoras is said to have taught predictive techniques based on divination through number. His student Empedocles earned the name Alexanamos, or "Averter of Winds," for his ability to predict and control the weather. However, in general the Greeks maintained a dichotomy between the abstract world of mathematics, which was governed by stability and symmetry, and the everyday world, which was governed by a bunch of squabbling gods. Mathematics was about beauty and precision and eternal forms, not messy, provisional reality. To get insights into the future you consulted the oracle, who had a hot-line to Apollo, the god of prediction.
This separation is strange, because in many respects mathematics and risk appear to be made for each other, at least when it comes to games of chance. Just as astronomy needed to await Renaissance figures like Copernicus to shake off the static hold of Greek philosophy, so mathematics had to await the arrival of those same free-thinkers to get a grip on risk. The first person to write a text on probability was an Italian mathematician, physician, astrologer, and dedicated gambler named Girolamo Cardano (1501-76).
Cardano sounds like the sort of person for whom "risk management" would have been a useful concept. He was born the illegitimate son of a mathematically talented lawyer, whom Leonardo da Vinci had consulted on mathematical problems. Trained in medicine, Cardano was refused entry into the College of Physicians in Milan. The official reason was his illegitimate birth, but more likely it was because of his argumentative nature. To pay his way he turned to full-time gambling. That didn't go well - he pawned his wife's belongings and ended up broke - but once out of the poorhouse he started treating patients privately and soon grew famous for some astonishing cures. Eventually the College accepted him, and around the same time he started publishing his books on mathematics.
Cardano is best known today for his books on algebra, and for inventing the universal joint, but he also wrote a text, Liber de Ludo Aleae (Book on Games of Chance), which showed how to calculate the chances of obtaining different combinations at dice, such as rolling two sixes. As an addicted gambler who always carried a knife and once slashed the face of a cheat during a game of cards, he knew a thing or two about calculating the odds. He was perhaps the first person to fully realise that mathematical laws, which until then had been reserved for pristine subjects like celestial mechanics, could also apply to something as down-to-earth as the toss of dice.
Of course, as Cardano surely realised, not all of life's risks can be quantified in equations. He grew famous for his medical and mathematical achievements, but in 1560 his eldest son was found guilty of poisoning his wife, and was tortured and executed. His other son was a gambler and was repeatedly jailed for robbery. These events ruined Cardano emotionally and nearly destroyed his career. In 1570 he was imprisoned by the Inquisition for six months for publishing a horoscope of Jesus Christ.
His book on chance was found among his manuscripts, and was not published until nearly a century after his death. Perhaps for that reason, probability theory did not take another major lurch forward until 1654, when the Chevalier de Méré posed an urgent question to the greatest mathematical minds of France: how do you divide up the pot when a game of balla is interrupted by lunch?
Pascal's wager.
Blaise Pascal is better known today for Pascal's wager - his statement that, even though God's existence cannot be rationally proved, the wisest course of action is to behave as if God does exist. The upside of this gamble is very good (salvation), the downside small. If instead you behave as if there is no God, then there's a potentially huge downside (damnation) and not much upside.
The stakes in the problem posed by de Mere were less critical, but people had been puzzling over it for a long time. The winner at balla was the first to six rounds (the rest of the rules have been lost to history). In 1494 the Franciscan monk Luca Paccioli had argued that, if one person was ahead 5 to 3 when the game was cut short, then the pot should be divided in the same proportion. In collaboration with the great French mathematician Pierre de Fermat, Pascal used (actually, invented) probability theory to show that this answer wasn't quite right.
To ill.u.s.trate this, Figure 7 lists the different possible outcomes, were the game to be continued starting from 5:3. The outcome of the next game, denoted G1 in the left column, can either be 6:3, in which case the first player wins, or 5:4, in which case the game continues for another round. The only way for the second player to win is by winning three games in a row (shown in bold). If the chances of winning a game are exactly even, then the chance of winning three games in a row is 1/2 1/2 1/2 = 1/8.
Figure 7. The different possible outcomes for a game of balla, starting from the case at the top where one player is ahead 5 to 3. The left column lists the games. The second player can win the match only by winning three consecutive games.
Since the probabilities must add up to 1, the chance of the first player winning is 1 minus 1/8, or 7/8, which is 7 times the probability that the second player can win. The stakes should therefore be divided in a ratio 7:1, which is far greater than the 5:3 proposed by Paccioli.
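Pascal's reasoning can be checked by enumerating the continuations directly. This short sketch (using exact fractions; the function name is my own) recursively works through the tree of outcomes in Figure 7:

```python
from fractions import Fraction

def win_prob(a, b, target=6, p=Fraction(1, 2)):
    # Probability that the first player (current score a) wins a
    # first-to-`target` match against the second player (score b),
    # assuming even odds in each round.
    if a == target:
        return Fraction(1)
    if b == target:
        return Fraction(0)
    return p * win_prob(a + 1, b, target, p) + (1 - p) * win_prob(a, b + 1, target, p)

p1 = win_prob(5, 3)
print(p1)            # 7/8, as Pascal and Fermat found
print(p1 / (1 - p1)) # 7, i.e. the stakes split in the ratio 7:1
```

The recursion is exactly Pascal's principle: value a position by the positions it can lead to, weighted by their probabilities.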
A novel feature of Pascal's method was that it was based not just on what has already happened, but on future events that have yet to happen. It therefore established the basic principles of risk management that are still in use today: consider all the possible different future outcomes, estimate the likelihood of each, then use the most likely outcomes as a basis for decision-making.
Pascal generalised his method to obtain what is now known as Pascal's triangle, shown in Figure 8 (the same figure was studied by the Chinese mathematician Yanghui some 500 years earlier, so there it is known as the Yanghui triangle). It is constructed in a very simple way: the numbers at the start and end of each row are 1, and the other numbers in the row are equal to the sum of the two nearest numbers in the row above. As we'll see, this figure is extremely instructive about financial risk.
The rows of the triangle correspond to separate coin tosses. In game G1, the possible results are one head (the 1 in the column labelled 1H) or one tail (the 1 in the column labelled 1T). An arrow points to each of these outcomes, which have equal probability.
Now suppose that we play another game, and keep track of the total score. If the result of game G1 was a tail, then game G2 will produce either another tail, so the score is two tails; or a head, so the score is a draw. If, on the other hand, game G1 gave a head, then after game G2 the total score will be either a draw or two heads. There are therefore two ways of producing a draw, which means that this result is twice as likely as that of two tails or two heads. This is indicated by the number 2 in the central column, with the two arrows pointing in to it from above. The total probabilities must sum to 1, so we need to divide each number in the row by the sum of the row. Here the sum is 4, so the probability of two tails is 1/4; the probability of a head and a tail is 2/4; and the probability of two heads is 1/4.
Figure 8. Pascal's triangle. The rows represent different games: G1, G2, etc. The columns represent a tally of heads or tails, so for example 2H corresponds to the case where heads are winning by 2. The central column represents a tied result. The scheme for determining the entries is shown for the first four games. In each row, the number is the sum of the two nearest numbers in the row above. The summing process is indicated graphically by the arrows, so each number is equal to the number of arrows flowing into it from above. This triangle can be used to calculate the probability of any outcome from a sequence of coin tosses (or any other game in which the odds are even).
After the third round, the possible scores are three tails (which has one arrow pointing to it), a head and two tails (three arrows), two heads and a tail (three arrows), or three heads (one arrow). The arrows can be thought of as counting the number of possible paths to a result. Since each path is equally likely, the total number of arrows reflects the likelihood of the result. We again divide by the row total to get the probabilities. The chances of getting a mixed result are now 3/8, which is three times greater than the 1/8 chance of all heads or all tails.
Continuing in this way, we can read off the relative likelihood of any combination of heads and tails after any number of games, and convert into probabilities by dividing by the total for that row. For example, after six games, the row total is 1+6+15+20+15+6+1 = 64. The chance of tossing six consecutive heads is therefore 1 (the number in column 6H) divided by 64, or about 0.016 (i.e. 1.6 per cent). This is much smaller than the chance that the result will be tied, which is 20 (the number in the central column) divided by 64, or about 0.31 (i.e. 31 per cent).
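The summing rule is simple enough to reproduce in a few lines. This sketch (the function name is my own) builds a row of the triangle and recovers the probabilities quoted above for six games:

```python
def pascal_row(n):
    # Row n of Pascal's triangle (n coin tosses), built by the summing
    # rule described above: each entry is the sum of the two nearest
    # entries in the row above.
    row = [1]
    for _ in range(n):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

row6 = pascal_row(6)
total = sum(row6)       # 64 equally likely paths after six tosses
print(row6)             # [1, 6, 15, 20, 15, 6, 1]
print(1 / total)        # 0.015625: the 1.6 per cent chance of six heads
print(row6[3] / total)  # 0.3125: the 31 per cent chance of a tied result
```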
If we calculate these probabilities for the result of a large number of games, we get a bar graph like the one shown in Figure 9, which shows the result after 40 games. The height of each bar gives the probability of that score, which ranges on the bottom scale from -40, indicating 40 tails, to +40, indicating 40 heads. The shape of the graph is symmetrical, because the probability of winning a certain number of heads is always the same as winning a certain number of tails (we're assuming that the coin is fairly balanced). The graph is also bell-shaped, which means that the chances of a moderate result (a draw or a win by a small margin) are far greater than the chances of a lopsided outcome in favour of heads or tails. Note that after 40 games, the chances of tossing either all heads or all tails are nearly zero.
Figure 9. Bar graph of the probabilities calculated by Pascal's triangle, for 40 games. Also shown is the corresponding normal distribution (solid line). The standard deviation is approximately 6.3.
For whom the bell tolls.
In 1733, the mathematician Abraham de Moivre showed that after an infinitely large number of games, the results would converge on the so-called bell curve (otherwise known as the normal or Gaussian distribution). This is shown by the solid line in Figure 9. The curve is specified by two numbers: the mean or average, which here is zero, and the standard deviation, which is a measure of the curve's width. In a normal distribution, about 68 per cent of the data fall within one standard deviation from the mean; about 95 per cent of the data are within two standard deviations; and about 99 per cent of the data are within three standard deviations. In Figure 9 the standard deviation is about 6.3, so after 40 games there is a 99 per cent probability that the score will be in the range of -19 to +19. The odds are therefore only about 1 in 100 that someone will win by twenty games or more.
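These figures can be checked directly from the triangle's exact probabilities. A short calculation (a verification sketch, not part of de Moivre's own derivation) confirms the standard deviation of about 6.3 for 40 games and the roughly 99 per cent chance of a score within three standard deviations:

```python
from math import comb, sqrt

# Exact probabilities for the score after 40 even-odds coin tosses.
# A score s corresponds to k heads via s = 2k - n.
n = 40
probs = {2 * k - n: comb(n, k) / 2 ** n for k in range(n + 1)}

std = sqrt(sum(p * s * s for s, p in probs.items()))  # mean score is zero
print(round(std, 2))  # 6.32, i.e. sqrt(40), matching the ~6.3 in Figure 9

within = sum(p for s, p in probs.items() if abs(s) <= 3 * std)
print(round(within, 3))  # consistent with the ~99 per cent quoted above
```

The standard deviation of the score after n fair tosses is exactly the square root of n, which is why forty games give roughly 6.3.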
De Moivre was a respected mathematician and a friend of Isaac Newton, but he didn't have an academic position and supported himself by tutoring mathematics, playing chess for money, and occasional mathematical consulting for gamblers and the insurance industry. One of the first applications of the normal distribution was to the problem of estimating the revenue from annuities, because it turned out that human life spans also followed the bell curve. De Moivre's method for estimating his own life span was a little different. In later life, when his health was failing, he noticed that he was sleeping fifteen minutes longer every night. Doing the math, he figured that by November 27, 1754, he would be sleeping for 24 hours. His prediction proved correct, and he died that day. (Girolamo Cardano also correctly predicted the day of his death, but it is believed that he cheated by committing suicide.)

The normal distribution also found many other applications in science and engineering (which is why it became known as normal). Scientists including Pierre-Simon Laplace, and particularly Carl Friedrich Gauss, used the method to analyse the errors in astronomical data (which is why its other name is the Gaussian distribution). They found that if many separate and independent measurements were taken of, say, a star's position in the sky, then the distribution of errors tended to follow the normal curve (yet another name at the time was the error law). The standard deviation of the errors therefore gave a measure of the accuracy.
The bell curve was adopted even more enthusiastically by social scientists, who found that it could be used to fit pretty much anything. For example, measurements of the height of English males were found to follow a near-perfect bell curve. One of the technique's greatest advertisers was the Victorian polymath Francis Galton, who wrote: "I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the 'law of error.' A savage, if he could understand it, would worship it as a god. It reigns with severity in complete self-effacement amidst the wildest confusion. The huger the mob and the greater the anarchy, the more perfect is its sway. It is the supreme law of Unreason."1

Mathematical justification for the ubiquity of the normal distribution came with the so-called Central Limit Theorem. This proved that the normal distribution could be used to model the sum of many random processes, provided that a number of conditions were met. In particular, the separate processes had to be independent and identically distributed.
In his 1900 thesis The Theory of Speculation, Louis Bachelier used the normal distribution to model the variation of prices in the Paris bourse (Chapter 2). But it was only in the 1960s, with Eugene Fama's efficient market hypothesis, that the bell curve would really take its place as the "supreme law of Unreason."
The supreme law of unreason.
According to the efficient market hypothesis, the market is always in a state of near-perfect balance between buyers and sellers. Any change in an asset's price is the result of small, independent, random perturbations as individuals buy or sell. The net result of many such changes will be like the final tally of a sequence of coin tosses, which de Moivre showed is governed by the bell curve. It follows that, even if the price change on a given day is unpredictable, one can still calculate the probability of a particular price change - just as insurance analysts can calculate the probability of a healthy male living to the age of 80, without knowing the exact day of death. It is therefore possible to derive a measurement of risk.
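De Moivre's coin-toss result can be checked directly. In this sketch (the choice of 100 tosses is arbitrary), the exact binomial probability of each head-count is compared with the bell-curve approximation:

```python
import math

# De Moivre's approximation: the tally of n fair coin tosses has mean
# n/2 and standard deviation sqrt(n)/2, and the normal density closely
# matches the exact binomial probabilities.
n = 100
mean = n / 2
sd = math.sqrt(n) / 2

def binom_exact(k):
    # Exact probability of k heads in n fair tosses.
    return math.comb(n, k) / 2**n

def normal_approx(k):
    # Bell-curve density evaluated at k.
    return math.exp(-((k - mean) ** 2) / (2 * sd**2)) / (sd * math.sqrt(2 * math.pi))

for k in (50, 55, 60):
    print(k, round(binom_exact(k), 4), round(normal_approx(k), 4))
```

The two columns agree to three decimal places, which is why the bell curve can stand in for the tally of many independent up-or-down moves.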
For example, suppose that we wish to calculate the risk in holding a particular asset such as stock in a particular company. According to the efficient market hypothesis, if we neglect longer-term effects such as growth and just concentrate on day-to-day price fluctuations, then the price changes are purely random. One day they will be up, the next they will be down, but there is no underlying pattern. We can therefore model them statistically with the normal distribution. A plot of the price changes should follow a bell curve, rather like Figure 9. The standard deviation is then one measure of the risk in holding that stock, because the larger it is, the higher the potential price swings. If another asset, such as a safer company, or a government bond, has a smaller standard deviation, then it appears to be less risky. (Of course, this assumes that volatility is a good measure of risk, a topic we will return to.) Since most people are willing to pay to avoid risk, it follows that the standard deviation should influence the asset's price. An optimal portfolio will maximise growth but minimise risk.
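Under this view, the risk calculation reduces to a standard deviation. A minimal sketch, using invented prices rather than real market data:

```python
import statistics

# Hypothetical price series (invented for illustration, not real data).
prices = [100.0, 101.2, 100.5, 102.0, 101.1, 103.4, 102.8, 104.0]

# Daily returns as fractional price changes.
returns = [(b - a) / a for a, b in zip(prices, prices[1:])]

# In this framework, the standard deviation of returns IS the risk.
volatility = statistics.stdev(returns)
print(f"daily volatility: {volatility:.4f}")

# Conventionally annualised assuming ~252 trading days and
# independent day-to-day moves (both assumptions of the theory).
annual = volatility * 252 ** 0.5
print(f"annualised volatility: {annual:.2%}")
```

A "safer" asset would simply show a smaller number at the end; no judgement about the company or the market enters the calculation.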
Economists of the 1960s and '70s began devising complex formulae to measure and control risk. William F. Sharpe's Capital Asset Pricing Model computed the value of any financial asset, taking into account its risk. It was based on Harry Markowitz's Modern Portfolio Theory, which presented a technique for minimising risk by choosing asset classes that are uncorrelated with one another. Fischer Black and Myron Scholes came up with a clever method for calculating the prices of options - financial derivatives that give one the right to buy or sell a security for a fixed price at some time in the future.2 The field of financial engineering was born. Several of its founders, including Sharpe, Markowitz, and Black, were later awarded the economics version of the Nobel Prize.
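Markowitz's central insight fits in a few lines. This is a sketch with invented volatilities, not a real portfolio: combining uncorrelated assets lowers the portfolio's volatility below that of either part.

```python
import math

def portfolio_vol(w1, s1, s2, rho):
    # Two-asset portfolio volatility:
    # sqrt(w1^2 s1^2 + w2^2 s2^2 + 2 w1 w2 rho s1 s2)
    w2 = 1 - w1
    var = w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * rho * s1 * s2
    return math.sqrt(var)

s1 = s2 = 0.20  # each asset assumed to have 20% volatility

# Perfectly correlated assets offer no diversification benefit: 0.20.
print(round(portfolio_vol(0.5, s1, s2, rho=1.0), 3))

# Uncorrelated assets cut volatility to 0.20/sqrt(2), about 0.141.
print(round(portfolio_vol(0.5, s1, s2, rho=0.0), 3))
```

The whole edifice rests on knowing the volatilities and correlations in advance - and, as the chapter goes on to argue, on those quantities behaving normally.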
These techniques all assumed the key economic myths: that investors are rational and independent; that markets are free and fair; that markets are stable and correctly reflect value and risk; and that, as a result of all this, price changes are random and follow a normal distribution. The techniques were therefore all based on formulae developed for 18th-century astronomy, which in turn were based on Pascal's triangle.
Even today, the normal distribution is the gold standard for risk calculation.3 It has been enshrined in the Basel II regulatory framework as a method for banks to calculate their risk. As we will see in Chapter 6, the assumption of normality also played an important role in valuing the complex financial instruments that brought about the credit crunch. The main attraction of the normal distribution is its convenience: it allows traders to estimate risk in a single parameter, the standard deviation. There is no need to make a complex judgement based on a detailed understanding of the asset or the market as a whole - just plug in a number and you're done.
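The "plug in a number" step really is that short. Here is a sketch of a normal-based Value at Risk calculation of the kind the text describes; the portfolio size and volatility are invented for illustration:

```python
from statistics import NormalDist

portfolio_value = 1_000_000   # hypothetical $1m portfolio
daily_vol = 0.02              # assumed 2% daily standard deviation

# 99% one-day VaR: the loss that should be exceeded on only 1 day in
# 100 - if the bell curve held. The 99th-percentile z-score is ~2.326.
z = NormalDist().inv_cdf(0.99)
var_99 = portfolio_value * daily_vol * z
print(f"99% one-day VaR: ${var_99:,.0f}")
```

One multiplication turns a standard deviation into a regulatory capital figure - which is precisely why the method is so seductive, and so fragile when the distribution isn't actually normal.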
The normal distribution has therefore played a key role in our financial system for the last half-century. That's enough time to compile a fair amount of data. So how's it doing so far?
The answer to this question would appear to be: not so well. The first big test came on October 19, 1987 (aka Black Monday), when the Dow Jones Industrial Index took everyone by surprise and dropped 22.6 per cent. According to theory, the chances of that happening were about 1 in 10 followed by 45 zeros - i.e. impossible.
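To get a sense of where numbers like that come from, here is a rough sketch (the sigma levels are assumed for illustration, not the book's exact inputs). The true tail probability underflows double-precision arithmetic for large deviations, so the standard asymptotic approximation is used in log form:

```python
import math

def log10_tail_prob(z):
    # log10 of P(Z < -z), via the asymptotic tail approximation
    # P(Z < -z) ~ phi(z) / z, accurate for large z.
    log_p = -z * z / 2 - math.log(z * math.sqrt(2 * math.pi))
    return log_p / math.log(10)

# A 5-sigma daily move is already less than a one-in-a-million event
# under the bell curve; the probabilities then collapse astonishingly
# fast as the move gets bigger.
for z in (5, 10, 14):
    print(f"{z}-sigma: about 10^{log10_tail_prob(z):.0f}")
```

A Black Monday-sized fall works out to a double-digit number of standard deviations under plausible volatility assumptions, putting its normal-theory probability in the same vanishing territory as the figure quoted above.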
In 1998, the firm Long-Term Capital Management used efficient market theory to make highly leveraged bets, by basically selling insurance against the possibility of extreme events. When such events materialised, courtesy of the Russian government defaulting on its bonds, the firm nearly self-destructed and had to be rescued in a $3.6 billion bail-out before it could bring down the rest of the economy. According to theory, the chances of that happening were again incredibly small. At least they should have been, since the fund had a number of economic superstars on its payroll, including Myron Scholes. A memo from Merrill Lynch concluded that the models used "may provide a greater sense of security than warranted; therefore reliance on these models should be limited."4 In fact, orthodox risk-assessment techniques have failed to realistically assess the risk of every financial crisis of the past few decades, including the 1997 Asian crisis, the 2000 dot-com crisis, and of course the 2007-08 credit crunch. As a description of reality, the theory appears to have no backing from observational data. The reason is that, despite its obvious attractiveness and ease of use, the theory suffers from one major, overarching problem, which is that price changes don't actually follow a normal distribution. They're not normal.
Shaky grounds.
While financial mathematics has its roots in games of chance, the truth is that real life does not follow the neat patterns of cards or dice. As author and former trader Nassim Taleb notes: "The casino is the only human venture I know where the probabilities are known, Gaussian (i.e., bell-curve), and almost computable."5 The financial markets are not a kind of giant coin-tossing experiment - they are something far more complex, intractable, and extreme.
This is true even for everyday price changes. Panel A of Figure 10 shows daily price fluctuations in the S&P 500 index (a value-weighted index of the top 500 public companies in the United States) from 1950. The record downward spike on Black Monday is clearly visible, as is the more recent turbulence from the credit crisis.
As a comparison, panel B shows what the price changes would have looked like if they had followed the normal distribution, with the standard deviation that would be calculated for the real price changes. You don't need to be an expert in statistics to see that these data have a very different appearance. The S&P 500 data has periods of calm followed by bursts of intense activity, while the normal data always fluctuates within a constant band. The real data also has much greater extremes than the normal data.
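The same contrast can be reproduced in simulation. This sketch uses invented data, not the actual S&P 500 series: bell-curve returns stay inside a steady band, while a fat-tailed alternative (a Student-t distribution with 3 degrees of freedom, a common stand-in for market returns) throws off the occasional violent outlier.

```python
import math
import random
import statistics

random.seed(7)
N = 10_000

def student_t(df):
    # A t-distributed draw: standard normal over sqrt(chi-square / df).
    num = random.gauss(0, 1)
    den = math.sqrt(sum(random.gauss(0, 1) ** 2 for _ in range(df)) / df)
    return num / den

normal_moves = [random.gauss(0, 1) for _ in range(N)]
fat_moves = [student_t(3) for _ in range(N)]

def extremes(moves, k=4):
    # Count moves beyond k sample standard deviations.
    sd = statistics.stdev(moves)
    return sum(abs(m) > k * sd for m in moves)

# The bell curve predicts well under one 4-sigma day per 10,000;
# the fat-tailed series produces them routinely.
print("normal >4-sigma days:", extremes(normal_moves))
print("fat-tailed >4-sigma days:", extremes(fat_moves))
```

Both series have the same kind of "typical" day; they differ only in how often the atypical ones arrive - which is exactly the difference between panels A and B.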