Debunking Economics Part 21

This is an incredibly simple system, but even at this point it can give us some insights into why Bernanke's QE1 was far less effective than he had hoped and why it would have been far more effective if the money had been given to the debtors rather than to the banks.

A credit crunch

The crisis of 2007 was not merely a credit crunch (where the problem is liquidity) but the end point of a process of Ponzi lending that made much of the US economy insolvent. However, the credit-crunch aspect of the crisis can be simulated in this model by halving the rate at which the bank lends from the vault, and doubling the speed at which firms try to repay their debts. The time constant for bank lending therefore drops from 1/2 to 1/4, so that the amount in the vault turns over every four years rather than every two, while that for repaying debts goes from 1/10 to 1/5, so that loans are repaid every five years rather than every ten.
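To make this concrete, here is a minimal sketch in Python of just the lending and repayment flows described above. It is not the full table of financial flows used in the text (interest payments, wages, consumption and bank income are omitted), and the starting balances are illustrative assumptions, so its balances won't exactly match the figures quoted below; but the qualitative effect of the credit crunch (loans and deposits fall, the vault balance rises) comes through.

```python
# Minimal sketch of the lending/repayment flows in a pure credit economy.
# NOT the full model in the text: interest, wages, consumption and bank income
# are omitted, and the starting balances are illustrative assumptions.

def simulate(years=40.0, dt=0.01, crunch_at=10.0):
    vault, deposits, loans = 20.0, 80.0, 80.0    # $ million, assumed starting stocks
    lend_rate, repay_rate = 1 / 2, 1 / 10        # pre-crunch: vault turns over every two years,
                                                 # loans are repaid over ten
    t = 0.0
    while t < years:
        if t >= crunch_at:                       # the credit crunch: lending rate halves,
            lend_rate, repay_rate = 1 / 4, 1 / 5 # repayment rate doubles
        lending = lend_rate * vault              # vault -> firm deposits, recorded as new loans
        repayment = repay_rate * loans           # firm deposits -> vault, loans written down
        vault += (repayment - lending) * dt
        deposits += (lending - repayment) * dt
        loans += (lending - repayment) * dt
        t += dt
    return vault, deposits, loans

print(simulate())   # after the crunch, loans and deposits fall while the vault balance rises
```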

The credit crunch has a drastic impact upon both bank account balances and incomes. The level of loans drops from over $83 million to under $56 million, while the amount sitting inactive in the vault rises from $16.9 million to $44.1 million.

14.5 A credit crunch causes a fall in deposits and a rise in reserves in the bank's vault

All incomes drop substantially as well: wages drop from $216 million to $145 million per year, profits drop from $72 million to $48.5 million, and bank income drops from $2.7 million to $1.8 million, a 32.8 percent drop.

Now let's consider what would happen if an injection of $10 million was made one year after the crunch began, into either the vault or the deposit accounts of the firms. The former approximates what Bernanke did in his attempt to exploit the mythical 'Money Multiplier'; the latter approximates what might have happened if the bailout had gone to debtors rather than to the banks, and is also very similar to what was in fact done in Australia, where the Rudd government effectively gave every Australian with a pulse $1,000 to spend.12 The results are intriguing, complex even though the model itself is simple, and the reverse of what Obama was told would happen by his neoclassical advisors.

Whose bailout works best?

The bank bailout injects $10 million into the vault over a one-year period; the firm and worker bailouts inject the same amount of money over the same period of time into the deposit accounts of the firms or workers.

If you believed that the most important thing was to get lending going again after a credit crunch, then the bank bailout wins hands down: neither the firm nor the worker bailout affects the level of loans at all, which remain on the depressed credit-crunch trajectory, while the bank bailout leads to loans falling less steeply, so that ten years after the crunch they are $5.5 million higher than they would have been without the bailout.

14.6 A bank bailout's impact on loans

However, if you believed that the most important thing was to restore economic activity, then the bank bailout is the least effective way to do this!

Profits and wages do rise because of the bank bailout, but the rise in income is far greater when the firms or workers receive the bailout than when the banks do.13 The increase in incomes is immediate and large in the case of the firms' bailout, versus gradual and modest for the bank bailout.

14.7 A bank bailout's impact on incomes

The only people that do better if the bailout goes to the bankers ... are the bankers. Not only do they do better under their bailout than if nothing is done, they do worse if the bailout goes to firms or workers than if there is no bailout at all! The reason is that the firm (or worker) bailout increases the deposit accounts of the banks while leaving their loans unaffected. Their payment of interest to the rest of the economy therefore increases, while their receipts of interest payments remain the same.

14.8 A bank bailout's impact on bank income

This is a very basic and incomplete model, and much more needs to be added to it before any definitive implications could be drawn about the impact of a government bailout during a credit crunch.14 But the differences between this simple dynamic model and the even simpler but false Money Multiplier model that lay behind Obama's decision to bail out the banks rather than the public tempt me to write what Obama could have said, if his advisers were not neoclassical economists: And although the banks have argued that government money would be more effective if it were given to them to lend, rather than going directly to families and businesses ('where's our bailout?' they ask), the truth is that an additional dollar of capital in a bank will dribble out slowly through the choked arteries of our sclerotic financial system, while that same dollar, if given to families and businesses, will enter circulation rapidly, a process that will cause a faster pace of economic growth.

But that's enough of fantasy. Let's bring this model up to date in terms of how money is created endogenously today, and extend it to include production, prices and growth.

A modern credit crunch

The model we've just considered has a fixed amount of money in it, and since it's a paper-money system, the banks would need to print more notes if they wanted to expand the money supply. However, the majority of banking transactions have always involved the buyer writing a check drawn on an account in a bank, rather than handing over paper notes in return for goods, and today's innovation of electronic transfer banking has taken this one step further. The fact that these promises by banks to pay are accepted as money in their own right is what makes it possible for banks to expand the money supply simply by creating a new loan. The new loan creates a debt between the borrower and the bank, and it also creates additional spending power.

It's this capacity to create money 'out of nothing' that state policies like Reserve Requirements and the Basel Rules attempted to control, but the empirical evidence in the last chapter shows that these control mechanisms have failed: the banks create as much new money as they can get away with, because, fundamentally, banks profit by creating debt.

TABLE 14.4 A growing pure credit economy with electronic money

We can model this endogenous creation of both debt and new money (in a check-account or electronic-money banking system) by adding two new rows to the table: one in which the firms' deposit accounts are credited with new money, the other in which the new debt the firms owe to the banks is recorded on the loan ledger (see Table 14.4).

This extension helps explain why banks are so willing to create debt, and discourage its repayment: the source of bank profits is interest on outstanding debt, and the more debt that is out there, the more they make. The amount of outstanding debt will rise if existing money is turned over more rapidly, if new money is created more rapidly, and if debts are repaid more slowly. Banks therefore have an innate desire to create as much debt as possible, which is why it is unwise to leave the level of debt creation up to the financial sector. As the Great Recession shows, they will be willing to create as much debt as they can, and if they can persuade borrowers to take it on (which is easy to do when banks finance a Ponzi scheme), then the economy will ultimately face a debt crisis where the banks' willingness to lend suddenly evaporates.

14.9 Bank income grows if debt grows more rapidly

The extension also provides the means to link this purely monetary model to the cyclical Minsky model I outlined in the previous chapter, in a manner that is consistent with the argument that aggregate demand is the sum of income plus the change in debt.

In the model above, we were in a 'Say's Law' world in which aggregate demand equaled aggregate supply, and there was no change in debt. However, we now consider firms that wish to invest, and which are willing to take on new debt to finance it, which also causes new money to be created. Aggregate demand is now income plus the change in debt, where incomes finance consumption and the change in debt finances investment. The new loans thus provide the money needed to finance the investment that was an integral part of the Minsky model.

For simplicity, I assume that new money is created at a constant rate relative to the current level of debt (a rate which halves when the credit crunch strikes); in the full Minsky model, this rate is a function of the rate of profit.
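In symbols (my notation, not necessarily the book's), the two relationships just described are, as a sketch:

```latex
% AD is aggregate demand, Y income, D the level of debt, and \nu the assumed
% constant rate of new money creation relative to debt (halved at the crunch).
\begin{align}
  AD(t) &= Y(t) + \frac{dD}{dt} \\
  \text{new money creation} &= \nu \, D(t), \qquad \nu \to \tfrac{\nu}{2}\ \text{when the credit crunch strikes}
\end{align}
```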

To link the two models, one more component is needed: a formula that describes how prices are set. For obvious reasons, this doesn't involve working out where 'marginal cost equals marginal revenue.' Instead, the equation I use is based on the proposition that prices will tend to converge to a level that equates the monetary value of demand and the monetary value of supply. At the same time, the equation conforms to the empirical research into how firms set prices (see Chapter 4): prices involve a markup on the wage cost per unit of output, which is the theory of price-setting used by post-Keynesian economists (Lee 1998; Downward 1999).15

14.10 Unemployment is better with a debtor bailout

We also need an explanation of how wages are set, and this raises the vexed issue of 'the Phillips Curve.' As explained earlier, a properly specified Phillips Curve should have three factors determining money wages: the employment rate, its rate of change, and a feedback from inflation. For simplicity, here I'll just use the first factor (all three are used later in my monetary Minsky model).
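As an illustration only (the symbols and functional forms are my own sketch, not necessarily the book's exact specification), a price-convergence equation with a markup on unit labor cost, and a single-factor Phillips curve, might look like this:

```latex
% P is the price level, W the money wage, a output per worker, m the markup,
% \tau_P the price-adjustment time constant, and \lambda the employment rate.
\begin{align}
  \frac{dP}{dt} &= -\frac{1}{\tau_P}\left(P - (1+m)\,\frac{W}{a}\right)
  && \text{prices converge to a markup on the wage cost per unit of output} \\
  \frac{1}{W}\frac{dW}{dt} &= \Phi(\lambda)
  && \text{money wages respond to the employment rate}
\end{align}
```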

The results of this model amplify the case made in the money-only, no-growth model. The firms' bailout works better on every metric except one (any guesses which one?).

Loans recover more rapidly when the firms are bailed out rather than the banks.

14.11 Loans grow more with a debtor bailout

The rate of unemployment is turned around almost instantly with the firm bailout, and never reaches the extreme levels that apply when the bailout goes to the banks (see Figure 14.10).

Both profits and wages are higher if the firms get the bailout money rather than the banks.

14.12 Profits do better with a debtor bailout

The only losers from the bailout going to the firms rather than to the banks are ... the banks (did you guess right?). Once again, not only do they do worse if the firms get the bailout rather than them, they do worse under the firms' bailout than they do from no policy intervention at all.

14.13 Bank income does better with a bank bailout

This is still a very simple model, and much more needs to be done to complete it, from replacing time constants with variables (which I do in the Minsky model to come) through to properly modeling government finances as well as those of private banks (which I haven't yet done). But again it reaches results that are the opposite of the neoclassical 'Money Multiplier' model that Obama, acting on the advice of his neoclassical advisors, actually followed. Given the poor response of the economy to the stimulus and QE1, I think it's reasonable to argue that it's time Obama and politicians in general looked elsewhere for their economic advice.

From tranquility to breakdown

To a neoclassical economist, the most striking aspect of the Great Recession was the speed with which apparent tranquility gave way to sudden breakdown. With notable, noble exceptions like Nouriel Roubini, Robert Shiller, Joe Stiglitz and Paul Krugman, economists paid little attention to the obvious Bonfire of the Vanities taking place in asset markets, so in a sense they didn't see the warning signs, which were obvious to many others, that this would all end in tears.

My model, in contrast, is one in which the Great Moderation and the Great Recession are merely different phases in the same process of debt-financed speculation, which causes a period of initial volatility to give way to damped oscillations as rising debt transfers income from workers to bankers, and then total breakdown occurs when debt reaches a level at which capitalists become insolvent.

The fixed parameters used in the previous models are replaced by functions: the rates of money creation, relending and debt repayment depend on the rate of profit, while the rate of change of wages depends on the level of employment, its rate of change, and the rate of inflation. The link between the monetary and physical models is the creation of new money, which finances investment.
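Schematically (again in my own notation, with the functional forms left unspecified), replacing the fixed parameters with behavioral functions amounts to something like:

```latex
% \pi_r is the rate of profit, \lambda the employment rate and i the inflation rate;
% \nu, \beta, \rho and \Phi are behavioral functions, \kappa and \gamma feedback coefficients.
\begin{align}
  \text{money creation rate} &= \nu(\pi_r), \qquad
  \text{relending rate} = \beta(\pi_r), \qquad
  \text{repayment rate} = \rho(\pi_r) \\
  \frac{1}{W}\frac{dW}{dt} &= \Phi(\lambda) \;+\; \kappa\,\frac{d\lambda}{dt} \;+\; \gamma\, i
\end{align}
```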

The model generates as sudden a turnaround in output as any neocla.s.sical model hit by 'exogenous shocks,' but unlike in those models there is continuity between the Great Moderation and the Great Recession.

14.14 Modeling the Great Moderation and the Great Recession: inflation, unemployment and debt

14.15 The Great Moderation and the Great Recession: actual inflation, unemployment and debt

14.16 Modeling the Great Moderation and the Great Recession: output

The model's numbers and the magnitude of its crash are hypothetical,16 and the main question is whether its qualitative behavior matches that of the US economy, which it clearly does. A period of extreme cycles in unemployment and inflation is followed by diminishing cycles which, if they were the only economic indicators one focused upon, would imply that a 'Great Moderation' was occurring. But the third factor ignored by neoclassical economics, the ratio of debt to GDP, rises in a series of cycles until it takes off exponentially (see Figure 14.14).

The qualitative similarity of this pattern to the actual US data (prior to the massive intervention by both the government and the Federal Reserve) is striking (see Figure 14.15). As in my 1995 model, though capitalists are the ones who actually take on debt, in practice the workers pay for it via a fall in their share of national income.

14.17 Income distribution: workers pay for the debt

14.18 Actual income distribution matches the model

This strictly monetary model generates one aspect of Minsky's hypothesis that my 1995 model could not: the 'deflation' part of the process of debt deflation. Debt rises in a series of booms and busts, as in my 1995 paper, but in addition the rate of inflation falls in a cyclical manner until it becomes accelerating deflation.

This generates the phenomenon observed in the early years of the Great Depression: the debt-to-GDP ratio continues to rise, even though nominal debt is falling (see Figure 14.19).

The model dynamic is more extreme than the data because the model doesn't yet include the impact of bankruptcy, which reduces debt during a depression. But again, the qualitative similarity between the model and the empirical data is striking (see Figure 14.20).

14.19 Debt and GDP in the model

14.20 Debt and GDP during the Great Depression

Making monetary modeling accessible: QED

I originally developed the models in this chapter using differential equations, and I found it very difficult to extend them, or to explain them to other economists who weren't familiar with this approach to mathematics. Then a chance challenge to the accuracy of my models (Scott Fullwiler asserted that there must be errors in them from the point of view of double-entry bookkeeping) inspired me to see whether I could in fact explain my models using double-entry bookkeeping.

Not only did that prove possible, it also transpired that a double-entry bookkeeping layout of financial flows could be used to generate the models in the first place.

This overcame a major problem that I had with using system dynamics programs like Vissim (www.vissim.com) and Simulink (www.mathworks.com/products/simulink/) to build models of the financial sector. While these technologies were brilliant for designing engineering products like cars, computers and airplanes, they were poorly suited to modeling financial flows.

These programs use 'wires' to link one variable to another, and this is fine for physical processes where, for example, a wire from the fuel injector module to the cylinder module indicates a flow of gas from one point to another, and only one such link exists per cylinder. However, in a model of financial flows, the same term could turn up as often as three times in one diagram: once for the source account for some monetary transfer, once for its destination, and once to record it on a ledger. This resulted in almost incomprehensible models, and made 'wiring up' such a model extremely tedious.
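A minimal sketch (my own illustration of the idea, not the QED program itself) of how a double-entry layout can generate a model: each financial flow is entered once, together with the accounts it debits and credits, and the differential equation for each account is simply the sum of its column.

```python
# Sketch: generating a dynamic model from a double-entry table of financial flows.
# My illustration of the approach described in the text, not the QED program itself;
# the two flows and their time constants are the illustrative ones used earlier.

accounts = {"vault": 20.0, "firm_deposits": 80.0, "loan_ledger": 80.0}

def flow_table(state):
    """Each flow appears once, listing every account it touches (+ credit, - debit)."""
    lending = state["vault"] / 2.0          # bank lends from the vault
    repayment = state["loan_ledger"] / 10.0 # firms repay their debts
    return [
        {"vault": -lending,   "firm_deposits": +lending,   "loan_ledger": +lending},
        {"vault": +repayment, "firm_deposits": -repayment, "loan_ledger": -repayment},
    ]

def step(state, dt=0.01):
    """Each account changes by the sum of its column in the flow table."""
    deltas = {name: 0.0 for name in state}
    for flow in flow_table(state):
        for name, amount in flow.items():
            deltas[name] += amount
    return {name: value + deltas[name] * dt for name, value in state.items()}

state = accounts
for _ in range(1000):   # ten years at dt = 0.01
    state = step(state)
print(state)
```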

I now use my double-entry bookkeeping methodology to develop models like the one in this chapter, and a simulation tool has also been developed for me to showcase this method. It's free, fairly easy to use, and you can both simulate the models I've shown in this chapter and build your own using it.

It's called QED which stands for Quesnay Economic Dynamics and can be downloaded from my blog at www.debtdeflation.com/blogs/qed/.

Conclusion

There are many aspects of this model of which I am critical. For example, it doesn't distinguish borrowing for investment from borrowing for speculation, the government sector isn't incorporated, and many factors that are variable in reality (such as interest rates and the markup that sets prices) are constants in the model. But these missing aspects can be easily introduced into later extensions of the model (a topic that I will take up in my next book, Finance and Economic Breakdown) without needing to make the absurd assumptions that neoclassical economics makes when it tries to combine more realism with the fantasy that everything happens in equilibrium.

It is also possible, indeed essential, to make this theory one not merely of macroeconomics, but of finance as well. In counterpoint to the false neoclassical dichotomy between macroeconomics and finance, based on the counterfactual proposition that debt has no macroeconomic effects, a valid economic theory has to explain the behavior of both the macroeconomy and the financial markets. Such a coherent theory has not yet been developed. However, there are several realistic models of the behavior of financial markets themselves, which we'll now consider.

15 | WHY STOCK MARKETS CRASH.

The Efficient Markets Hypothesis says that the stock market's volatility is due to the random arrival of new information that affects the equilibrium value of shares. Allegedly, if it were not for the arrival of new information from outside the market, the market itself would be quiescent.

However, there are alternative explanations that attribute most (though not all) of the market's volatility to its own internal dynamics. Remarkably, these two explanations can predict statistical outcomes for share market prices that are almost indistinguishable from each other.

The kernel

If financial markets aren't efficient, then what are they? According to Behavioral Finance, they are markets where agents make systematically irrational choices, thus resulting in both inefficiency and trading opportunities for the more rational. According to the Fractal Markets Hypothesis, they are highly unstable dynamic systems that generate stock prices which appear random, but behind which lie deterministic patterns. According to the Inefficient Markets Hypothesis, they are systems which overreact to good news and bad, leading to excessive asset price volatility which inhibits the performance of the real economy. According to the burgeoning field of Econophysics, they are akin to nuclear reactors or tectonic plates, where interdependent interactions between speculators can occasionally give rise to runaway processes like nuclear reactions or earthquakes.

All these non-neoclassical theories support the argument that unless finance markets are institutionally tamed, capitalism will remain subject to potentially catastrophic breakdown caused by the finance sector.

The roadmap

In this chapter I outline four different but consistent non-equilibrium theories of finance: 'Behavioral Finance,' the 'Fractal Markets Hypothesis,' the 'Inefficient Markets Hypothesis,' and 'Econophysics.' The chapter concludes with two proposals to institutionally limit the capacity of the finance sector to entice us into debt.

Behavioral finance

Given the failure of the Efficient Markets Hypothesis (EMH), which is predicated on the belief that investors are 'rational' as neoclassical economists define the word, it is little wonder that the most popular response has been to argue instead that investors are in fact irrational, or rather that their behavior deviates from pure rationality in systematic ways. This is then used as part of the explanation of why the stock market is not efficient as the Efficient Markets Hypothesis defined the word, so that asset prices deviate from their fundamental values in systematic ways.

As you can imagine, I have rather more sympathy for this approach, which is known as Behavioral Finance, than I do for the EMH. But there are several aspects of this approach that make me rather less enthusiastic than you might expect. I'll detail these before I move on to the legitimate contributions that Behavioral Finance has made to understanding the behavior of finance markets.

What is rational?

The development of Behavioral Finance was motivated by the results of experiments in which people were presented with gambles where their decisions consistently violated the accepted definition of rational behavior under conditions of risk, which is known as 'expected utility theory.' Under this theory, a rational person is expected to choose an option that maximizes their expected return, where expected return is simply the sum of the returns for each outcome, multiplied by the odds of that outcome actually happening.

For example, say you were asked whether you'd be willing to take the following 'heads or tails' bet:

Heads: You win $150
Tails: You lose $100

Most people say 'no thanks!' to that gamble, and according to expected utility theory, they're being irrational. Why? Because the 'expected value' of that gamble is greater than zero: a 50 percent chance of $150 is worth $75, while a 50 percent chance of minus $100 is worth minus $50. The sum is plus $25, so a person who turns the gamble down is walking away from a positive expected value.

Do you think it's irrational to turn that gamble down? I hope not! There's at least one good reason to quite sensibly decline it.1 This is that, if you take it, you don't get the 'expected value': you get either $150 or minus $100. Though you can know the odds of a particular random event like a coin toss, those odds are almost irrelevant to any given outcome.2 Whether the coin will come down heads or tails in any given throw is an uncertain event, not a risky one. The measurement of risk is meaningful only when the gamble is repeated multiple times.

This is easily illustrated by modifying the bet above so that, if you choose it, you have to play it 100 times. Think carefully now: would you still turn it down?

I hope not, because the odds are extremely good that out of 100 coin tosses you'll get more than 40 heads, and 40 is the breakeven point. There is less than a 2 percent chance that you'd get fewer than 40 heads and therefore lose money. If you get the most common outcome of 50 heads (which occurs 8 percent of the time), you'll make $2,500, while your odds of making between zero (from 40 heads) and $5,000 (from 60 heads) are better than 19 out of 20.

In other words, you get the expected value if, and only if, you repeat the gamble numerous times. But the expected value is irrelevant to the outcome of any individual coin toss.
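These figures can be checked with exact binomial probabilities using nothing but the standard library (the small discrepancies with the rounded percentages above are expected):

```python
from math import comb

n = 100                                  # number of coin tosses
total = 2 ** n                           # equally likely head/tail sequences

def prob(k):
    """Probability of exactly k heads in 100 fair tosses."""
    return comb(n, k) / total

def payoff(k):
    """Win $150 per head, lose $100 per tail."""
    return 150 * k - 100 * (n - k)

print(payoff(40))                                      # 0: break-even at 40 heads
print(payoff(50))                                      # 2500: profit at the most common outcome
print(sum(prob(k) for k in range(40)))                 # ~0.018: chance of losing money overall
print(prob(50))                                        # ~0.08: chance of exactly 50 heads
print(sum(prob(k) for k in range(40, 61)))             # ~0.96: chance of ending between $0 and $5,000
print(sum(prob(k) * payoff(k) for k in range(n + 1)))  # 2500.0: expected value of 100 plays, $25 each
```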

The concept of expected value is thus not a good arbiter for rational behavior in the way it is normally presented in Behavioral Economics and Finance experiments. Why, then, is it used?

If you've read this far into this book, you won't be surprised to learn that it's because economists have misread the foundational research on this topic, The Theory of Games and Economic Behavior, by the mathematician John von Neumann and his economist collaborator Oskar Morgenstern (Von Neumann and Morgenstern 1953).

Misunderstanding von Neumann

John von Neumann was one of the greatest intellects of all time, a child prodigy who went on to make numerous pivotal contributions to a vast range of fields in mathematics, physics, and computer science. He was a polymath at a time when it was far more difficult to make contributions across a range of fields than it had been in earlier centuries. One of the fields he dabbled in was economics.

His collaboration with Oskar Morgenstern resulted in whole fields of economic theory being developed by later researchers, including Game Theory, much of neoclassical finance theory, and ultimately Behavioral Economics. But one key thing he actually wanted to achieve never happened: he wanted to eliminate indifference curves and immeasurable utility from economics. He regarded these concepts as a sign of the immaturity of economic theory, primarily because it was so lacking in sound empirical data. His observations on this front are sadly even more relevant today:

In some branches of economics the most fruitful work may be that of careful, patient description; indeed, this may be by far the largest domain for the present and for some time to come [...] the empirical background of economic science is definitely inadequate. Our knowledge of the relevant facts of economics is incomparably smaller than that commanded in physics at the time when the mathematization of that subject was achieved. Indeed, the decisive break which came in physics in the seventeenth century, specifically in the field of mechanics, was only possible because of previous developments in astronomy. It was backed by several millennia of systematic, scientific, astronomical observation, culminating in an observer of unparalleled caliber, Tycho de Brahe. Nothing of this sort has occurred in economic science. It would have been absurd in physics to expect Kepler and Newton without Tycho, and there is no reason to hope for an easier development in economics. (Ibid.: 2, 4)

Von Neumann was particularly disparaging about the role that the concept of immeasurable utility took in economic theory. You'll remember from Chapter 1 that early economists imagined that there was a measurable unit of utility they called the 'util,' but that this idea of measurable or 'cardinal' utility gave way to the concept of 'ordinal' utility, in which the satisfaction gained from different bundles of commodities could be ranked but not measured, because measurement of individual subjective utility was deemed impossible.

Von Neumann disagreed, and proved that in situations in which it was possible to define indifference curves, it was also possible to calculate numerical values for utility by using gambles.

His idea was to set an arbitrary starting point for utility (for example, to define that, for a given individual, one banana was worth one 'util') and then present that individual with a choice between one banana for certain, or a gamble between zero bananas and two bananas with a variable probability. The probability at which the consumer is willing to accept the gamble then lets you derive a numerical estimate of the utility of two bananas. As von Neumann and Morgenstern put it:

The above technique permits a direct determination of the ratio q of the utility of possessing 1 unit of a certain good to the utility of possessing 2 units of the same good. The individual must be given the choice of obtaining 1 unit with certainty or of playing the chance to get two units with the probability a or nothing with the probability 1-a ...; if he cannot state a preference then a=q. (Ibid.: 18-19, n. 3)

For example, if you were willing to accept a gamble that gave you either 2 bananas or zero when the odds of getting 2 bananas were 6 out of 10, then the ratio of the utility of 1 banana to the utility of 2 bananas for this consumer was 0.6. A bit of algebraic manipulation shows that this consumer gets 1.67 utils of utility from consuming two bananas, compared to 1 util from one banana. A hypothetical example of using this procedure to provide a numerical measure of utility is shown in Table 15.1.
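To spell out the 'bit of algebraic manipulation' (normalizing the utility of zero bananas to zero, an assumption consistent with the procedure described):

```latex
% Indifference between one banana for certain and a gamble that pays two bananas
% with probability a (and nothing otherwise), with u(0)=0 and u(1)=1 util:
\begin{align}
  u(1) &= a\,u(2) + (1-a)\,u(0)
  \;\;\Rightarrow\;\; u(2) = \frac{u(1)}{a} = \frac{1}{0.6} \approx 1.67 \text{ utils},
  \qquad q = \frac{u(1)}{u(2)} = a = 0.6
\end{align}
```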

TABLE 15.1 Von Neumann's procedure for working out a numerical value for utility (Consumer: Joan Cheng)

An essential element of this procedure was that it had to be repeatable, for an obvious reason: if it were done just once, and the experimental subject was hungry, then he might be unwilling to take the risk of starving that the gamble implied, if the outcome were that he had to forgo the banana he already had.

Von Neumann was emphatic about this: to make sense, his procedure had to be applied to repeatable experiments only:

Probability has often been visualized as a subjective concept more or less in the nature of an estimation. Since we propose to use it in constructing an individual, numerical estimation of utility, the above view of probability would not serve our purpose. The simplest procedure is, therefore, to insist upon the alternative, perfectly well founded interpretation of probability as frequency in long runs. (Ibid.: 19; emphasis added)

Unfortunately, both neoclassical and behavioral economists ignored this caveat, and applied the axioms that von Neumann and Morgenstern developed to situations of one-off gambles, in which the objective risk that would apply in a repeated experiment was replaced by the subjective uncertainty of a single outcome. Neoclassical economists combined the concept of expected utility with their ordinal, 'indifference curve' theory of consumer choice to develop the Capital Assets Pricing Model, despite the fact that von Neumann was adamant that he wanted to replace the concept of indifference curves with his concept of cardinal utility:

we hope we have shown that the treatment by indifference curves implies either too much or too little: if the preferences of the individual are not at all comparable, then the indifference curves do not exist. If the individual's preferences are all comparable, then we can even obtain a (uniquely defined) numerical utility which renders the indifference curves superfluous. (Ibid.: 19-20)

Behavioral economists, on the other hand, developed all sorts of 'paradoxes of irrational behavior' from the ways in which people's behavior in experiments violated von Neumann's 'Axioms of Expected Utility,' but all of these paradoxes evaporate when the correct, objective, 'frequency in long runs' version of probability is used.

The four axioms were Completeness, Transitivity, Independence and Continuity:3

Completeness: A subject can always decide whether he prefers one combination to another, or is indifferent between them.

Transitivity: Choices are consistent so that if shopping trolley A is preferred to trolley B, and B to C, then A is preferred to C.

Independence: Adding two gambles together doesn't change the rankings that apply when the gambles are undertaken separately.

Continuity: If A is preferred to B and B to C, then there must be some combination of the best (A) and worst (C) option that is as desirable as the middle option (B).

One alleged instance of a violation of these axioms is the famous 'Allais Paradox,' named after the French economist Maurice Allais. The violations definitely occur when a single experiment is all that is conducted, but would disappear if the experiment were repeated multiple times, as von Neumann intended.

Allais compared two experiments, the first of which is shown in Table 15.2:

TABLE 15.2 The Allais 'Paradox': Experiment 1
Option 1A: $1 million with certainty
Option 1B: an 89 percent chance of $1 million, a 10 percent chance of $5 million, and a 1 percent chance of nothing

The expected value of Option 1B is higher than that of 1A: 1B is worth $1.39 million (0.89 times $1 million plus 0.1 times $5 million, or $890,000 plus $500,000), so according to expected utility theory, a rational person should choose Option 1B over Option 1A. But in practice, most people choose 1A, presumably because people prefer a sure thing of a million dollars to even the slightest chance of walking away with nothing.

Rather than calling this behavior irrational, behavioral economists say that this shows 'risk-averse' behavior.

The second experiment is shown in Table 15.3:

TABLE 15.3 The Allais 'Paradox' Part 2: Experiment 2
Option 2A: an 11 percent chance of $1 million, and an 89 percent chance of nothing
Option 2B: a 10 percent chance of $5 million, and a 90 percent chance of nothing

Here the expected value of Option 2B is higher than that of 2A: 2B is worth $500,000 whereas 2A is worth $110,000. And here, most people in fact choose Option 2B rather than Option 2A. So in this experiment, most people are consistent with expected utility theory, whereas in the first experiment, most people are inconsistent.

Much was then made of this alleged inconsistency. It was said that it displayed people switching from risk-averse to risk-seeking behavior, that it was provably inconsistent with the Independence Axiom, and so on (the Wikipedia entry on the Allais Paradox gives quite a reasonable summary).

However, these 'inconsistencies' disappear when one uses the 'frequency in long runs' approach that von Neumann insisted upon (see his words above). Imagine now that you are offered the chance of repeating Experiment 1 a thousand times. The person who picked Option 1A would certainly walk away a billionaire, but anyone who chose 1B would probably walk away about $400 million richer than that. Ditto with Experiment 2: Option 2A would probably see you end up with around $100 million, while your wealth via Option 2B would be of the order of half a billion. Only Option B makes any sense in both experiments now; it would clearly be a sign of poor reasoning to choose A instead.
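A sketch that checks these repeated-play figures, with the lotteries reconstructed from the description above:

```python
import random

# Lotteries reconstructed from the text as (probability, payoff) pairs.
OPTIONS = {
    "1A": [(1.00, 1_000_000)],
    "1B": [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)],
    "2A": [(0.11, 1_000_000), (0.89, 0)],
    "2B": [(0.10, 5_000_000), (0.90, 0)],
}

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def play(lottery, times=1000, seed=1):
    """Total winnings from repeating the gamble 'times' times."""
    rng = random.Random(seed)
    payoffs = [x for _, x in lottery]
    weights = [p for p, _ in lottery]
    return sum(rng.choices(payoffs, weights)[0] for _ in range(times))

for name, lottery in OPTIONS.items():
    print(name, expected_value(lottery), play(lottery))
# Over a thousand plays, 1B typically ends up a few hundred million dollars ahead of the
# certain $1 billion from 1A, and 2B roughly five times ahead of 2A, as argued in the text.
```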

The 'Allais Paradox' is thus not a paradox at all, but a typical case of economists misreading their own literature. I have a similar attitude to all other 'paradoxes' in the behavioral economics literature.

However, this doesn't mean that this entire literature is a waste of time, because the exercises do point out the difference between an uncertain outcome and a risky one, and it is clearly the uncertain outcome which is relevant to people's behavior in stock markets. Uncertainty introduces an asymmetry into people's reactions to losses and gains, and this results in a multitude of ways in which people's behavior deviates from the predictions of the Efficient Markets Hypothesis, predictions which, in their own peculiar way, are similar to those of this misreading of von Neumann.

Many of these behaviors are also clearly counterproductive in the context of stock market gambling, and in turn they make it highly likely that market prices will deviate substantially from 'innate value.' These effects also form part of the Inefficient Markets Hypothesis, so I'll delay discussion of them until then.

The inherent instability of stock markets

The Efficient Markets Hypothesis explains the price fluctuations that characterize financial markets as rational reactions by the markets to the random arrival of new information affecting the future prospects of companies. The three different approaches to finance outlined in this chapter all argue that these price fluctuations are due to the markets' own internal dynamics. These are two fundamentally different explanations for the same phenomenon: one based on exogenous shocks (the random arrival of external economic news), the other on internal dynamics (today's market prices being a reaction to yesterday's). How can two such different explanations account for the same data?

An analogy might help here. Some animal populations (for example, lemmings) are known to fluctuate wildly from year to year. There could be two explanations: the environment in which lemmings live could be so volatile that it causes extreme variations in population from one year to the next, and without this environmental volatility, lemming numbers could be constant. Or, the environment could be relatively stable, but the population dynamics of lemmings could be so volatile that they cause huge fluctuations in numbers from year to year.

15.1 Lemming population as a constant subject to exogenous shocks

15.2 Lemming population as a variable with unstable dynamics

It turns out that it's very difficult to know which process is generating a given set of numbers, just from the numbers themselves: an unstable dynamic process can generate numbers which are very difficult to distinguish from a set of random numbers unless you have a very large data set. The Efficient Markets Hypothesis claimed that the movements in stock prices would be random, and at least initially this contention did seem to be supported by the data from a small sample (between 1950 and 1966). But stock market data actually support a far different contention: that the stock market is inherently unstable.

The Efficient Markets Hypothesis was also developed before the scientific world became reacquainted with the concept of chaos,4 and it fitted neatly with the economic predilection to see everything in terms of equilibrium. It also meant that economists working in finance theory could avail themselves of all the mathematical and statistical tools devised by mathematicians and scientists to study random processes.

This was an intellectual bonanza, though it simultaneously meant that stock market speculators had to be told that, sadly, there was no bonanza for them hidden in the daily data of the stock exchange. Technical analysts, those looking for trends and waves within waves, were wasting their time.

However, as time went on, more and more data turned up which were not consistent with the EMH. As I detail in the next section, this led to something of a 'siege mentality' by supporters of the EMH, as they fought to defend their theory from attack. But it also inspired other researchers to develop alternative theories of stock market movements.

The Fractal Markets Hypothesis

The Fractal Markets Hypothesis is primarily a statistical interpretation of stock market prices, rather than a model of how the stock market, or investors in it, actually behave. Its main point is that stock market prices do not follow the random walk predicted by the EMH,5 but conform to a much more complex pattern called a fractal. As a result, the statistical tools used by the EMH, which were designed to model random processes, will give systematically misleading predictions about stock market prices.

The archetypal set of random numbers is known as the 'normal' distribution, and its mathematical properties are very well known. A normal distribution with an average value of zero and a standard deviation of 1 will throw up a number greater than 1 about 15 percent of the time, a number greater than 2 just over 2 percent of the time, a number greater than 3 only about once every 750 times, and so on. The chance of a 'far from average' event occurring diminishes rapidly and smoothly the farther the event is from the average.

The standard deviation of daily movements on the Dow Jones Industrial Average is roughly 1 percent. If stock market prices were generated by a normal process, then extreme movements (say, a fall of more than 5 percent in just one day) would be vanishingly rare. The odds of any such event having occurred even once during the twentieth century would be roughly 1 in a hundred.

In fact, there were over sixty such daily downward movements (and over fifty daily upward movements of 5 percent or more) during the twentieth century.

The fact that extreme movements occurred roughly 10,000 times more often than for a random process is fairly strong evidence that the process is not random at all (and there's lots more evidence besides this morsel).
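A rough check of these orders of magnitude using the normal tail probability; the count of roughly 25,000 trading days in the twentieth century is my assumption:

```python
from math import erfc, sqrt

def upper_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

print(upper_tail(1))   # ~0.16: beyond one standard deviation, roughly 15 percent of the time
print(upper_tail(2))   # ~0.023: beyond two standard deviations
print(upper_tail(3))   # ~0.0013: beyond three, roughly once every 750 draws

trading_days = 25_000                # assumed: ~100 years of ~250 trading days each
p_crash = upper_tail(5)              # a one-day fall of five or more standard deviations (about 5%)
expected_crashes = trading_days * p_crash
print(p_crash)                       # ~2.9e-7 per day
print(expected_crashes)              # ~0.007: of the order of one chance in a hundred for the century
print(60 / expected_crashes)         # ~8,000: observed crashes versus the normal prediction
```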

A fractal set of numbers, on the other hand, is a far more pernicious beast. Specifically, it is much more likely to generate extreme events than a normal distribution, and one large movement is likely to be followed by another large movement, another feature of stock markets which the EMH finds very difficult to explain.6 A fractal pattern also displays 'self-similarity': the data pattern looks the same regardless of whether you are looking at a short data period, such as one day or a week, or longer periods, such as a year or even a century.

The basic idea behind a fractal is that each number in the series is a simple but nonlinear function of previous numbers in the series. This differs from a true 'random number generator' such as dice, where the next number is independent of all previous numbers: rolling a 6 now doesn't change the odds of rolling a 6 on your next throw, which are still 1 in 6.

Applying this to the stock market, it is quite possible that each price movement is a complex function of previous price movements.

This might seem to imply that, if the Fractal Markets Hypothesis is correct, it should be easy to make money out of the stock market, in which case the hypothesis would be invalid, since it isn't easy to profit as a trader. However, there is another key aspect of fractal systems which comes into play here, known as 'sensitive dependence on initial conditions.'

Even if you knew precisely the 'system' which generated the Dow Jones Industrial Average, you could never know the precise value of the index because of rounding error. Let's say your initial measure of its value was out by just a tenth of a point: rather than being, say, 10,396.5, it was actually 10,396.6.

One day (or iteration) later, your model would be wrong by (say) 1 percent; one day later by 10 percent; and a day after that, it would be completely useless as a means of predicting the following day's value. This is because any measurement errors you make in specifying the initial conditions of a fractal model grow exponentially with time, whereas for a random model the errors normally grow linearly (and can even fall with time for a stable system). As Ott puts this dilemma: 'The exponential sensitivity of chaotic solutions means that, as time goes on, small errors in the solution can grow very rapidly (i.e., exponentially). Hence, after some time, effects such as noise and computer roundoff can totally change the solution from what it would be in the absence of these effects' (Ott 1993).

Ott gives the example of a chaotic function called the Hénon Map being simulated on a computer which is accurate to fifteen decimal places, so that the smallest difference it can record between two numbers is 0.000000000000001. He shows that if your initial measurement of the system was out by precisely this much, then after forty-five iterations of the model your estimate of where the system is would be completely wrong. Attempting to overcome this problem with more computing power is futile: 'Suppose that we wish to predict to a longer time, say, twice as long. Then we must improve our accuracy by a tremendous amount, namely 14 orders of magnitude! In any practical situation, this is likely to be impossible. Thus, the relatively modest goal of an improvement of prediction time by a factor of two is not feasible' (ibid.).
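A sketch of Ott's point using the Hénon map with its standard parameters; the exact number of iterations before predictability is lost depends on the map's rate of divergence, but the exponential growth of a tiny initial error is plain:

```python
# Sensitive dependence on initial conditions in the Henon map
# (standard parameters a = 1.4, b = 0.3). Two trajectories start 1e-15 apart.

def henon(x, y, a=1.4, b=0.3):
    return 1.0 - a * x * x + y, b * x

x1, y1 = 0.0, 0.0
x2, y2 = 1e-15, 0.0     # the same state, mis-measured in the fifteenth decimal place

for n in range(1, 101):
    x1, y1 = henon(x1, y1)
    x2, y2 = henon(x2, y2)
    if n % 10 == 0:
        # The separation grows exponentially on average, until the two
        # 'forecasts' bear no relation to one another.
        print(n, abs(x1 - x2))
```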
