The (Mis)behavior of Markets: A Fractal View of Risk, Ruin & Reward – Benoit Mandelbrot, Richard L. Hudson

Chapter VIII
The Mystery of Cotton
.
.
.
.
Clue No.2: Early Power Laws in Economics

Vilfredo Pareto was an Italian industrialist, economist, and sociologist with a turbulent career and a somewhat jaundiced view of the human enterprise.

He was born in 1848 in Paris, educated in Turin, and, after incurring huge losses speculating on the London metals market, was forced to resign his position as director of an Italian ironworks company. His first wife was a Russian countess; she left him for a young servant. Pareto did not begin serious work in economics until his mid-forties, but he swiftly made a mark and settled in Lausanne, Switzerland, as a professor and scholar. He started his career a fiery liberal, besting the most ardent British liberals with his attacks on any form of government intervention in the free market. He ended as, if not a believer, at least a student of socialism. He died in 1923 among a menagerie of cats that he and his French lover kept in their villa near Geneva; the local divorce laws - he was still officially yoked to his fickle countess - prevented him from re-marrying until just a few months before his death. His legacy as an economist was profound. Partly because of him, the field evolved from a branch of social philosophy as practiced by Adam Smith into a data-intensive field of scientific research and mathematical equations. His books look more like modern economics than almost any other text of that day: tables of statistics from across the world and the ages, rows of integral signs and equations, intricate charts and graphs.

One of Pareto’s equations achieved special prominence, and controversy. He was fascinated by the problems of power and wealth. How do people get it? How is it distributed around society? How do those who have it use it? The gulf between rich and poor has always been part of the human condition, but Pareto resolved to measure it. He gathered reams of data on wealth and income through different centuries, through different countries: the tax records of Basel, Switzerland, from 1454 and from Augsburg, Germany, in 1471, 1498, and 1512; contemporary rental income from Paris; personal income from Britain, Prussia, Saxony, Ireland, Italy, Peru. What he found - or thought he found - was striking. When he plotted the data on graph paper, with income level on one axis and number of people with that income on the other, he saw the same picture nearly everywhere in every era. Society was not a “social pyramid” with the proportion of rich to poor sloping gently from one class to the next. Instead, it was more of a “social arrow” - very fat at the bottom where the mass of men live, and very thin at the top where sit the wealthy elite. Nor was this effect by chance; the data did not remotely fit a bell curve, as one would expect if wealth were distributed randomly. It is a social law, he wrote: something “in the nature of man.”
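
Pareto's law can be stated compactly: the fraction of people with income above a level x falls off as a power of x. The minimal Python sketch below, with an illustrative exponent and income floor rather than Pareto's own estimates, simulates such incomes and checks that the tail obeys the power law; on log-log axes the survival probability traces a straight line of slope -alpha, the signature of scaling.

```python
import numpy as np

# A minimal sketch of Pareto's law: the fraction of people with income above
# a level x falls off as a power of x, P(X > x) = (x_min / x)**alpha.
# alpha and x_min below are illustrative values, not Pareto's own estimates.
rng = np.random.default_rng(0)
alpha, x_min = 1.5, 10_000.0

# Draw one million "incomes" from a Pareto distribution by inverse sampling:
# if U is uniform on (0, 1), then x_min * U**(-1/alpha) is Pareto-distributed.
incomes = x_min / rng.uniform(size=1_000_000) ** (1.0 / alpha)

# The empirical tail should match the theoretical power law at every scale.
for x in [1e4, 1e5, 1e6, 1e7]:
    empirical = (incomes > x).mean()
    theoretical = (x_min / x) ** alpha
    print(f"P(income > {x:>12,.0f}) empirical={empirical:.2e} theory={theoretical:.2e}")
```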

That something, though expressed in a neat equation, is harsh and Darwinian, in Pareto’s view. At the very bottom of the wealth curve, he wrote, men and women starve and children die young. In the broad middle of the curve all is turmoil and motion: people rising and falling, climbing by talent or luck and falling by alcoholism, tuberculosis, or other forms of unfitness. At the very narrow top sit the elite of the elite, who control wealth and power for a time - until they are unseated through revolution or upheaval by a new aristocratic class. There is no progress in human history. Democracy is a fraud. Human nature is primitive, emotional, unyielding. The smarter, abler, stronger, and shrewder take the lion’s share. The weak starve, lest society become degenerate: One can, Pareto wrote, “compare the social body to the human body, which will promptly perish if prevented from eliminating toxins.” Inflammatory stuff - and it burned Pareto’s reputation. At his death in 1923, Italian fascists were beatifying him, republicans demonizing him. British philosopher Karl Popper called him the “theoretician of totalitarianism.”
.
.
.
.
Chapter XII
Ten Heresies of Finance
.
.
.
.
10. In Financial Markets, the Idea of “Value” Has Limited Value

Value is a touchstone to most people. Financial analysts try to estimate it, as they study a company’s books. They calculate a break-up value, a discounted cash-flow value, a market value. Economists try to model it, as they forecast growth. In classical currency models, they input the difference between U.S. and euro-zone inflation rates, growth rates, interest rates, and other variables to estimate an ideal “mean” value to which, over time, they believe the exchange rate will revert.

All this implies that value is somehow a single number that is a rational, solvable function of information. Given a certain set of information about an asset - a stock, a bond, or a pair of woolen culottes - everybody, if equally well placed to act, will deduce that it has a certain value; they will all hang the same price tag on it. Prices can fluctuate around that value, and it can be hard to calculate. But value there is. It is a mean, an average, something certain in a chaos of conflicting information. People like the comfort of such thinking. There is something in the human condition that abhors uncertainty, unevenness, unpredictability. People like an average to hold onto, a target to aim at - even if it is a moving target.

But how useful is this concept, really? What is the value of a company? Well, you say, it is the price the market in its collective wisdom hangs on it. But how so? The most common index for market value is the price-earnings ratio, or P/E. Take Cisco Systems again, the supreme example of an Internet bubble stock. At its peak, the P/E reached a stratospheric 137. Put that into perspective. Any investor who actually believed that to be the company’s intrinsic value would have had to assume its earnings would keep up the same torrid pace for at least another decade - by which point Cisco’s market value would have exceeded the annual production of the entire U.S. economy. After the bubble burst, of course, the story changed. Cisco’s P/E at the market nadir of early 2003 had fallen to 26. Oddly enough, by then its earnings growth was actually faster than in the bubble days: 35 percent. Does any of this make sense? Ah, you say, it was not the company’s business fundamentals, but the market’s appetite for technology companies that changed - and that is as much a part of the measure of intrinsic value as balance sheet or cash flow. Really? If that is so, then surely the “real” value of Cisco changes every month, every week, every day - even tick-by-tick on the stock exchange. And if that value changes constantly, then of what practical use is it to any investor or financial analyst weighing whether to buy or sell? What use is a valuation model with new parameters for every calculation?
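
As a rough illustration of the arithmetic involved - under hypothetical assumptions, not the book's own calculation - one can ask how many years of torrid earnings growth it takes for a P/E of 137 to deflate to a "mature" multiple, supposing the price stands still while earnings catch up:

```python
import math

# A rough sanity check on what a P/E of 137 implies. Hypothetical assumptions:
# the price stands still while earnings grow at rate g, until the P/E falls
# to an assumed "mature" multiple of 15. These numbers are illustrative only.
peak_pe, mature_pe = 137.0, 15.0

# Years needed satisfy (1 + g)**years = peak_pe / mature_pe.
for g in (0.15, 0.25, 0.35):
    years = math.log(peak_pe / mature_pe) / math.log(1.0 + g)
    print(f"growth {g:.0%}: earnings need ~{years:.1f} years to justify the multiple")
```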

Point taken, you say. Then value is, perhaps, some function of cost - the cost of producing a steel ingot, the cost of replacing a factory, the cost of buying a company’s individual pieces, broken up. How so? What is the cost of Microsoft Office software? Easy, you say: Add up the latest development budget, overheads, finance charges, and operational expenses for the relevant Microsoft division. But how much should we include of the cost of earlier Office generations, products without which the latest Office would not exist? How about the cost of the Windows operating system, the basic software with which Office was designed to work? How about the cost of installing and maintaining Office on millions of customers’ computers, without which Office would not have the “network economies” that have been so crucial to its growth? Such questions, difficult enough in a manufacturing economy, become intractable in our modern information economy, in which so much money changes hands for the mere right to use somebody else’s intangible ideas. And even if we could agree on a cost, how could we ever derive a useful formula for translating it into a price? Things sell below cost all the time. The price of a dress can drop 90 percent, simply by moving it from the shop window at the start of the season to the basement clearance rack at the end of the season.

Point taken, you say. But intellectual property and financial assets are unusually insubstantial items. What about hard assets? Well, commodity prices are at least as wild as stock prices. Cotton prices flipped around so wildly you could not say that average or variance, the standard parameters of measurement, had much meaning. And what “real” value would you have assigned to silver in the winter of 1979-1980, when prices nearly trebled in the space of just six weeks? Property prices are no more substantial. As anyone buying or selling a house knows, “average” prices have no significance: The quoted survey figures are based on just a few sales scattered around a neighborhood and can apparently change by the day. And even those figures show bizarre patterns: In the late 1990s, London house prices more than doubled. So divorced from any idea of intrinsic value did property become that one developer rehabilitated a former public restroom, to sell as a small “cottage” for about £125,000 - more than six times the average London wage.

To be sure, I do not argue there is no such thing as intrinsic value. It remains a popular notion, and one that I myself have used in some of my economic models. But the turbulent markets of the past few decades should have taught us, at the least, that value is a slippery concept, and one whose usefulness is vastly over-rated.

So how, you ask, does one survive in such an existentialist world, a world without absolutes? People do it rather well all the time. The prime mover in a financial market is not value or price, but price differences; not averaging, but arbitraging. People arbitrage between places or times. Between places: I had a friend who made his life as a graduate student less tough by buying a convertible cheaply in his snowy home state, Minnesota, repairing it with his own hands, and then driving it to sunny California to sell dear. And between times: A scalper buys a block of tickets today, and hopes to profit next month by reselling them dearly once the show is sold out. These arbitrage tactics assume no “intrinsic” value in the item being sold; they simply observe and forecast a difference in price, and try to profit from it. Of course, I am by no means the first to suggest the importance of arbitrage in financial theory; one of the latter-day “fixes” of orthodox finance, called Arbitrage Pricing Theory, tries to make the most of this. But a full understanding of multifractal markets begins with the realization that the mean is not golden.

Chapter XIII
In the Lab

.
.
.
.
Problem 2: Building Portfolios

You can have your cake and eat it: Such is the underlying message of modern portfolio theory. It is an elaborate mathematical machinery for reducing risk without sacrificing too much profit. As elaborated by Sharpe in the Capital Asset Pricing Model described earlier, it starts with the premise that the expected profit from any security is the sum of two simple items. First is the return the stock earns simply by rising with the market overall, and second is whatever return it earns by marching to its own drummer. How much the stock rises or falls with the broad market index is measured by beta, and man-centuries of time have been squandered by financial analysts calculating and studying this parameter. Generally speaking, a stock with a beta of 1 moves in lockstep with the market overall. A stock with a higher beta is hypersensitive to market moves; it magnifies the market risk and so, to bother buying it, you have to believe it is such a powerful growth stock that it is worth the risk. A stock with a lower beta is insensitive to market moves; it damps risk, and so may be more attractive in your portfolio even though you do not expect its price to rise much. With these assumptions, you can select stocks that mix and match risk and return and calculate, quite precisely, an optimal portfolio. Such is the theory. But practice often differs: Many fund managers have their own, peculiar styles of picking investments, and use the cold-blooded math of modern portfolio theory as a guide, a back-check that their picks are not piling on more risk than they thought.
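
For concreteness, here is a minimal sketch of the standard beta estimate - the slope of the regression of a stock's returns on the market's - using simulated data rather than real prices:

```python
import numpy as np

# A minimal sketch of the standard beta estimate: regress a stock's daily
# returns on the market's. All data here are simulated, purely for illustration.
rng = np.random.default_rng(1)
n_days = 1_000
market = rng.normal(0.0005, 0.01, n_days)                   # hypothetical market returns
true_beta = 1.3
stock = true_beta * market + rng.normal(0, 0.015, n_days)   # plus idiosyncratic noise

# beta = Cov(stock, market) / Var(market): the slope of the regression line.
beta_hat = np.cov(stock, market)[0, 1] / np.var(market, ddof=1)
print(f"estimated beta: {beta_hat:.2f}  (simulated true beta: {true_beta})")
```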

But whether guide or master, modern portfolio theory bases everything on the conventional market assumptions that prices vary mildly, independently, and smoothly from one moment to the next. If those assumptions are wrong, everything falls apart: Rather than a carefully tuned profit engine, your portfolio may actually be a dangerous, careering rattletrap.

This was spelled out first by Fama. Conventional wisdom holds that, if you do the picks correctly, about thirty different stocks can provide an optimal portfolio. In fact, he found in a 1965 study, if you assume wild price variation you need many more stocks than that - perhaps three or four times as many. The wild swings of real markets mean you have to build in a wider margin of safety than conventional theory holds. In 2000, some researchers in France carried his calculations further. They found that, for nine stocks they studied on the Paris Bourse, the conventional methods consistently understated the basic market parameter, beta. For instance, the standard method estimated French hotelier ACCOR to have a beta of 0.91 - meaning it is a good defensive stock to add to a portfolio. But when they re-calculated the number using a more realistic model of price variation, they found ACCOR’s real beta was 8 percent higher, or 0.98 - meaning it is about as risky as the market overall. On average, the conventional methods underestimated beta by 6 percent, they found. The implication: When you pick a stock by the conventional method, you may actually be adding risk rather than reducing it.
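
The French study used its own, more realistic model of price variation. The toy simulation below makes only the narrower point that when returns have fat tails - here, Student-t noise standing in for wild variation - the conventional beta estimate becomes far less reliable than the Gaussian picture suggests:

```python
import numpy as np

# A toy illustration, not the cited study's method: compare the spread of
# conventional beta estimates under Gaussian noise and under fat-tailed noise.
rng = np.random.default_rng(2)

def beta_estimates(noise_sampler, n_trials=2000, n_days=250, true_beta=0.9):
    estimates = []
    for _ in range(n_trials):
        market = noise_sampler(n_days)
        stock = true_beta * market + noise_sampler(n_days)
        estimates.append(np.cov(stock, market)[0, 1] / np.var(market, ddof=1))
    return np.array(estimates)

gaussian = beta_estimates(lambda n: rng.normal(0, 0.01, n))
fat_tail = beta_estimates(lambda n: 0.01 * rng.standard_t(2.5, n))

for name, est in [("Gaussian noise", gaussian), ("Student-t(2.5)", fat_tail)]:
    lo, hi = np.percentile(est, 5), np.percentile(est, 95)
    print(f"{name}: beta estimates range (5th-95th pct) {lo:.2f} to {hi:.2f}")
```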

Can we build a new, correct portfolio theory? Not yet clear. Whether you use a conventional beta or some new estimate of “real” beta, the entire theory is founded on the belief that the market averages are important - that you can use the Dow or the CAC-40 as a good yardstick to measure the risk of every individual stock. But of what use is an average when the individual stocks diverge so widely and unpredictably from it? What is the “average” location of all the stars in the galaxy? A new approach is needed. Today, building a portfolio by the book is a game of statistics rather than intelligence: You start by assuming the market has correctly priced each stock, and so your task is simply to combine the particular stocks in your portfolio in such a way as to meet your investment goals. This is much like a painter taking the colors straight out of the tube, as mixed and labeled by the factory. But if the colors do not come pre-mixed, then the painter’s eye for hue, intensity, and balance becomes more important. Likewise, if the stocks do not come pre-priced, if whatever drives the price-setting process is more complicated than expected, then the investment manager’s skill at spotting good opportunities becomes more important. Indeed, in a non-Gaussian world, the investment manager might actually have to earn his high fees.

So what is to be done? For starters, portfolio managers can more frequently resort to what is called stress-testing. It means letting a computer simulate everything that could possibly go wrong, and seeing if any of the possible outcomes seem so unbearable that you want to rethink the whole strategy. The technology is called a Monte Carlo simulation. You tell a computer how you think prices vary - specifically, what kind of random-number generator it should use. You feed it all the initial data: the particular stocks, their price histories, your strategy for buying them. Then you press the start button. Using the rules of randomness you gave it, the computer starts generating a series of hypothetical prices for each stock - in essence, it simulates one investor’s possible experience with the portfolio. Then it does it again, and again, thousands of times, like someone flipping a coin over and over to see if the odds for getting heads really are fifty-fifty. At the end, it totes up all the scores from all the runs: That tells you which simulated outcomes happened most often, and hence, which might be most likely in real life. It also tells you which outcomes are unlikely but, if they occurred, devastating. Finally, you use your own intelligence to decide whether you like the scenario the computer paints. If not, you decide the portfolio is too risky and you start again.
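
A minimal version of such a stress test, with every parameter invented for illustration, might look like this - note that the choice of random-number generator is the key assumption, and here a fat-tailed Student-t stands in for wilder-than-Gaussian daily returns:

```python
import numpy as np

# A minimal Monte Carlo stress test of a buy-and-hold portfolio. The key
# assumption is the random-number generator: a Student-t with 3 degrees of
# freedom stands in for wild price variation. All parameters are illustrative.
rng = np.random.default_rng(3)
n_runs, n_days = 10_000, 252
daily_vol = 0.015

# Simulate n_runs one-year return paths; a one-day loss cannot exceed everything,
# so clip at -99 percent to keep the simulated returns physical.
returns = np.clip(daily_vol * rng.standard_t(3, size=(n_runs, n_days)), -0.99, None)
final_values = np.prod(1.0 + returns, axis=1)

# Tote up the scores: the typical outcomes, and the unlikely-but-devastating ones.
print(f"median outcome      : {np.median(final_values):.2f}x initial value")
print(f"worst 1 percent     : {np.percentile(final_values, 1):.2f}x or less")
print(f"chance of losing half: {(final_values < 0.5).mean():.2%}")
```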

It sounds like a computational nightmare. Indeed, when this technique first appeared some decades ago in physics, it was not for the mathematically faint of heart. But computers are faster and cheaper now; software to perform these calculations now comes shrink-wrapped. You can simulate the performance of an options contract, for instance, in less than a minute on a standard personal computer. And so the technique has already spread over the past decade into many corners of finance. I urge that it become a standard tool of portfolio construction.

Problem 3: Valuing Options

What is an option worth? It depends on how you measure it.

One 2003 study, for the U.S. Financial Executives Research Foundation, compared six common ways of valuing stock options. By one method, it figured, a particular stock option it studied was worth $8.76 a share to the executive who received it from his company. But by another method, the thirty-year-old Black-Scholes equation, the same stock option was worth $25.27 a share. Which was right? Probably neither of them. Other studies have found even wilder errors. In the foreign exchange market, where $15 trillion of options were traded in 2001, one study found some dollar-yen options underpriced by 84 percent, and some Swiss franc-dollar options undervalued by 40 percent.

Valuing options correctly is a high-roller game, but the rules are all messed up. As described earlier, the most widely known formula was published in 1973 by Fischer Black and Myron Scholes, and it has been known for years that it is simply wrong. It makes unrealistic assumptions. It asserts that prices vary by the bell curve; volatility does not change through the life of the option; prices do not jump; taxes and commissions do not exist; and so on. Of course, these are simplifications to make the math easier. And so easy was it that, for the first fifteen years after its discovery, it was used blindly throughout options markets; it was viewed as a kind of financial alchemy that turned everything to gold. It let corporations hang a price tag on the stock options they granted their executives. It let banks devise new and ever-fancier financial products. It even allowed “portfolio insurance,” a precisely calculated number of options designed to rise in value if your main stock portfolio falls. It seemed to be financial engineering of the highest form. It had abolished risk. Of course, the truth was re-discovered on Black Monday, October 19, 1987, when a sudden drop in stock prices was turned into a rout by a wall of insurance options crashing down on the market.

A fundamental problem is the Black-Scholes assumption of constant volatility - in essence, that the world does not change. Normally, to calculate an options price, you plug in a few numbers, including your estimate of how much the underlying stock price or currency rate fluctuated in the past; the suggested price falls out the back end of the formula. But if you run the equation in reverse, plugging real market prices into its back and pulling from its front the volatility that those prices would imply, you get nonsense: a range of different volatility forecasts for the same options. A graphic example is below.

It shows the implied volatility for several different flavors - different maturities and different strike prices - of the same kind of option. If Black-Scholes were right, this would be a very boring picture, one flat line for all the varieties. Instead, you see a whole range of errors, wandering across the chart. Indeed, the mistakes have a Rococo structure of their own, worthy of years of study. In the options industry, where mistakes can cost millions, that is exactly what they have received. Hundreds of scholarly papers, several textbooks, and scores of financial conferences have been devoted to studying the errors.
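
To make the reverse calculation concrete, here is a self-contained sketch: a Black-Scholes call-price formula and a bisection routine that recovers the volatility a market price implies. The option prices fed in are invented; the point is that different strikes can yield different implied volatilities, which a correct model would not allow:

```python
import math

# "Running Black-Scholes in reverse": given a market price, find the volatility
# the formula would need to reproduce it. All input prices below are invented.
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, rate, time, vol):
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * time) / (vol * math.sqrt(time))
    d2 = d1 - vol * math.sqrt(time)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * time) * norm_cdf(d2)

def implied_vol(market_price, spot, strike, rate, time):
    lo, hi = 1e-4, 5.0        # the call price rises with vol, so bisect on it
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(spot, strike, rate, time, mid) < market_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# If Black-Scholes were right, every strike would imply the same volatility.
for strike, market_price in [(90, 14.2), (100, 7.0), (110, 3.1)]:
    vol = implied_vol(market_price, spot=100, strike=strike, rate=0.02, time=0.5)
    print(f"strike {strike}: implied vol {vol:.1%}")
```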

Improving or replacing Black-Scholes is one of the liveliest subdisciplines in mathematical finance. The most common approach is to try merely fixing the old formula. Software to correct the “volatility smile,” the U-shaped pattern that Black-Scholes volatility errors often trace on graph paper, is now standard. Many adopt the GARCH methods mentioned earlier; while these produce better results than Black-Scholes alone, they are still not accurate. Some approaches mix ideas similar to mine with those of others. For instance, Morgan Stanley has used what is called a “variance gamma process” to value its own options books at the end of each trading day. This method, developed by Dilip B. Madan of the University of Maryland and two others, is a two-step formula. It starts with an equation to deform time, to make it jump ahead randomly before slowing again. It follows with a type of Brownian motion to generate a price. There are many others - and so far, no consensus in the industry about which work best. In the absence of clear answers, it has become a case of every man for himself. Even in the same firm, you can have one group using experimental new methods to price “exotic” options, a range of complicated, and highly profitable, products that banks devise for their corporate clients with special risk problems. You can have the compliance officers, responsible for making sure the bank does not lose too much money, using a modified Black-Scholes formula. And then you can have the traders themselves using all or none of the above, as their whim or experience dictates.
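
A toy simulation of that two-step idea - not any firm's production model - might look like the following: gamma-distributed increments deform the clock, and a Brownian motion runs on the deformed time. All parameter values are illustrative.

```python
import numpy as np

# A toy simulation of the two-step idea behind a variance gamma process:
# first deform time with gamma-distributed increments (the clock jumps ahead,
# then slows), then run a Brownian motion on the deformed clock.
rng = np.random.default_rng(4)
n_steps, dt = 252, 1.0 / 252
nu = 0.2                   # variance rate of the gamma clock (its jumpiness)
sigma, theta = 0.2, -0.1   # volatility and drift measured in deformed time

# Step 1: random trading time. Gamma increments with mean dt and variance nu*dt.
time_increments = rng.gamma(shape=dt / nu, scale=nu, size=n_steps)

# Step 2: Brownian motion evaluated on the gamma clock gives the log returns.
log_returns = theta * time_increments + sigma * np.sqrt(time_increments) * rng.normal(size=n_steps)
prices = 100.0 * np.exp(np.cumsum(log_returns))
print(f"simulated year-end price: {prices[-1]:.2f}")
```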

That is, I submit, no way to run the options business. Even if Wall Street is content, Main Street is not. In 2004, the main American accounting body, the Financial Accounting Standards Board, revised the rules by which corporations account for the stock options they grant their executives. After the bursting of the Internet bubble, the obscene spectacle of greedy CEOs cashing in their options ahead of other shareholders stirred a political backlash. Upshot: FASB, under prodding from Washington, is requiring many companies for the first time to record their options as an expense - in other words, an employment cost that will hit their reported profits. That position has enraged many corporate chieftains, especially in the high-tech sector. Of course, they fear expensing options in any form will make them unattractive. But they also complain that there are no good valuation formulae.

“Despite results that are inherently inaccurate and unreliable for this purpose,” groused Intel CEO Craig Barrett recently, “Black-Scholes is the only method available.” He continued:
If the standard-setters who support stock option expensing were required to certify their work, I wonder whether their tolerance for inaccuracy would be the same? … I know of no situation where it would be acceptable for a CEO to certify that a company’s results were ‘kind of right’ - the term used by FASB’s Mr. Herz to describe the results produced by the Black-Scholes model….

I support . . . corporate reform, but with all due respect, results that are ‘kind of right’ aren’t good enough.

Wall Street Journal, April 24, 2003

Problem 4: Managing Risk

By any measure, the late 1990s were a time of extraordinary growth and prosperity in much of the world - and yet, the global financial system still managed to lurch its way through six crises. The U.S. Treasury secretary for part of that time, Lawrence H. Summers, counted them: Mexico in 1995; Thailand, Indonesia, and South Korea in 1997-1998; Russia in 1998; and Brazil in 1998-1999. The Indonesian crisis was especially severe: The country’s quarterly real GDP plummeted 18.9 percent and its currency fell into a hole 526 percent deep. Each of these end-of-millennium upheavals spread from its origin to most parts of the globe, destabilizing currencies, knocking gaping holes in bank balance sheets, and, in many cases, causing a wave of bankruptcies. The fact that each country recovered and the global economy roared on again is a testament, not to good financial management, but to good luck.

So risk-management is now a hot topic among financiers and politicians. To safeguard against bankruptcy, most banks in the world are obliged by law to keep a certain amount of cash on hand - a capital reserve. It can be tapped in extremis, but its main purpose is to assure the rest of the world that all is safe, and the bank that has it is a safe partner with which to do business. That presupposes the reserve is large enough, and there lies the heart of the problem. In Basel, the Bank for International Settlements helps set the global standards for how much is enough, and since 2001 the world’s bankers and finance ministers have been arguing over new rules. The old methods are inadequate, they agree. So what should replace them?

One of the standard methods relies on - guess what? - Brownian motion. The same false assumptions that underestimate stock-market risk, mis-price options, build bad portfolios, and generally misconstrue the financial world are also built into the standard risk software used by many of the world’s banks. The method is called Value at Risk, or VAR, and it works like this. You start off by deciding how “safe” you need to be. Say you set a 95 percent confidence level. That means you want to structure your bank’s investments so there is, by your models, a 95 percent probability that the losses will stay below the danger point, and only a 5 percent chance they will break through it. To use an example suggested by some Citigroup analysts, suppose you want to check the risk of your euro-dollar positions. With a few strokes on your PC keyboard, you calculate the volatility of the euro-dollar market, assuming the price changes follow the bell curve. Let us say volatility is 10 percent. Then, with a few more strokes, you get your answer: There is only a 5 percent chance that your portfolio will fall by more than 12 percent. Forget about it.
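
In code, the bell-curve version of this calculation is just a quantile of the normal distribution. The sketch below assumes a 10 percent volatility, as in the text; the book's 12 percent figure rests on horizon and drift assumptions not spelled out here, so the number printed is illustrative only:

```python
from statistics import NormalDist

# A minimal Gaussian VAR calculation. Under the bell-curve assumption, the
# 95 percent VAR is just a quantile of the normal distribution. Volatility is
# assumed; the horizon and drift behind the book's 12 percent are not given.
confidence = 0.95
volatility = 0.10                       # assumed volatility of the position
z = NormalDist().inv_cdf(confidence)    # ~1.645 at the 95 percent level

var = z * volatility
print(f"{confidence:.0%} VAR: the model says losses exceed {var:.1%} "
      f"only {1 - confidence:.0%} of the time")
```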

The flaw should be obvious by now. The potential loss is actually far, far greater than 12 percent. The problem is not merely that the bell curve leads us to underestimate the volatility. That would be bad enough, as it would understate the odds of loss. The problem is worse than that. Assume the market cracks and you land in the unlucky 5 percent portion of the probability curve: How much do you lose? Well, 12 percent, you say. Wrong. Even the VAR model recognizes that the actual loss could be greater; the amount beyond the theoretical 12 percent is the “overhang.” With a bell-curve assumption, the overhang is negligible. But if price changes scale, the overhang can be catastrophic. As described before, once you are riding out on the far ends of a scaling probability curve, the journey gets very rough. There is no limit to how bad it could get for the bank. Its own bankruptcy is the least of the worries; it will default on its obligations to other banks - and so the final damage could be greater than its own capital. That was the lesson from each international crisis, as losses spread from one interlinked financial house to another. Only forceful action by the regulators put a firewall around the sickest firms, to stop the crisis spreading too far.
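
A toy comparison makes the overhang vivid. Below, losses are simulated twice - once from a bell curve, once from a fat-tailed Student-t, with only roughly matched scales - and the average loss beyond the 95 percent VAR threshold is toted up for each; the shapes, not the particular numbers, are the point:

```python
import numpy as np

# A toy comparison of the "overhang": the average loss suffered once you are
# past the VAR threshold, under thin tails versus fat tails. Scales are only
# roughly matched; the comparison of shapes is the point, not the numbers.
rng = np.random.default_rng(5)
n = 1_000_000

gaussian_losses = -rng.normal(0, 0.10, n)        # losses as positive numbers
fat_tail_losses = -0.10 * rng.standard_t(2.5, n)

for name, losses in [("Gaussian", gaussian_losses), ("Student-t(2.5)", fat_tail_losses)]:
    var95 = np.percentile(losses, 95)                 # the 95 percent VAR threshold
    overhang = losses[losses > var95].mean() - var95  # average excess beyond it
    print(f"{name:>14}: VAR95 {var95:6.1%}, average overhang beyond it {overhang:6.1%}")
```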

Fortunately, bankers and regulators now realize the system is flawed. So the world’s central banks have been pushing for more sophisticated risk models. One gaining popularity, based on something called Extreme Value Theory and borrowed from the insurance industry, is on the right track: It assumes prices vary wildly, with “fat tails” that scale. But it does not commonly take account of a further source of risk I have been describing: long-term dependence, or the tendency of bad news to come in flocks. A bank that weathers one crisis may not survive a second or a third. I thus urge the regulators, now drafting a New Basel Capital Accord to regulate global bank reserves, to encourage the study and adoption of yet more-realistic risk models. If they do not, Summers’s list of six crises will just keep growing.
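
For illustration, here is a sketch of the Extreme Value Theory approach in its simplest form, using scipy for the fit: model only the tail, by fitting a generalized Pareto distribution to simulated losses beyond a high threshold, and extrapolate outward. Note that, as said above, this still ignores long-term dependence; the data and threshold below are illustrative.

```python
import numpy as np
from scipy.stats import genpareto

# A sketch of the Extreme Value Theory approach: fit a generalized Pareto
# distribution to losses beyond a high threshold, then extrapolate into the
# tail. Losses here are simulated fat-tailed data, purely for illustration.
rng = np.random.default_rng(6)
losses = 0.01 * np.abs(rng.standard_t(3, size=50_000))

threshold = np.percentile(losses, 99)          # keep only the worst 1 percent
exceedances = losses[losses > threshold] - threshold

# A positive fitted shape parameter signals a fat, scaling tail.
shape, loc, scale = genpareto.fit(exceedances, floc=0.0)
print(f"fitted tail shape: {shape:.2f} (positive => fat tail)")

# Extrapolate: a once-in-10,000 loss, as the fitted tail sees it.
p_exceed = (losses > threshold).mean()
quantile = threshold + genpareto.ppf(1 - 1e-4 / p_exceed, shape, 0.0, scale)
print(f"estimated 1-in-10,000 loss: {quantile:.1%}")
```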

It is gratifying to find I am no longer alone on this point. After his trading house, LTCM, crashed in the 1998 Russian crisis, Myron Scholes wrote:

Now is the time to encourage the BIS and other regulatory bodies to support studies on stress-test and concentration methodologies. Planning for crises is more important than VAR analysis.

American Economic Review, May 2000

Aux Armes!

I am a persistent man. Once I decide something, I hold to it with extraordinary tenacity. I pushed and pushed to develop my ideas of scaling, power laws, fractality, and multifractality. I pushed and pushed to get out into the scholarly world with my message of wild chance, fat tails, long-term dependence, concentration, and discontinuity. Now I am pushing and pushing again, to get these ideas out into a broader marketplace where they may finally do some concrete good for the world.

Of course, I have my hypotheses about market dynamics; and I believe they are well founded. Others have opposing views. Even the most cursory trawl through the economics literature will find a perplexing cacophony of conflicting opinions - and, more invidious, contradictory “facts.” Consider one example. Proposition: Prices are dependent over a time-span that is (a) a day, (b) a quarter, (c) three years, (d) an infinite span, or (e) none of the above. Which is the right answer? All of them, apparently, if you are to believe the conflicting economics literature. All these views you will find asserted as unassailable fact in countless articles reviewed by countless worthy peers, and supported by countless computer runs, probability tables, and analytical charts. Wassily Leontief, a Harvard economist and 1973 Nobel winner, once observed: “In no field of empirical enquiry has so massive and sophisticated a statistical machinery been used with such indifferent results.”

It is time to change that. As a first step, I issue a challenge to Alan Greenspan, Eliot Spitzer, and William Donaldson - Federal Reserve chairman, New York attorney general, and SEC chairman, respectively. In the April 2003 settlement of post-bubble fraud charges, the biggest Wall Street firms agreed to cough up $432.5 million to fund “independent” research. Spitzer’s office amply documented that what passed for investment research before was not only wrong, but fraudulent. Since then, a long line of media and ratings firms have lined up to collect some of the loot to launch independent research businesses. But there has been precious little discussion of what, exactly, these researchers should research.

I suggest just a small fraction of that sum - say, 5 percent, in honor of the VAR analysis discussed above - be set aside for fundamental research in financial markets. Let the vast bulk of the money go where it usually does: ephemeral and contradictory opinions on which stocks to buy, which to sell, and whether to buy or sell at all. But let at least a widow’s mite go to understanding how stocks behave in the first place. Let the Wall Street settlement help to fund an international commission for systematic, rigorous, and replicable research into market dynamics. Of course, $20 million is not enough; even if computers and doctoral students are cheap, proprietary data sources are not. But with that starting sum and wise leadership, such a commission would quickly draw contributions and investments from others, magnifying its impact.

A well-managed corporation devotes some portion of its research and development budget to fundamental research in fields of science that underlie its main businesses. Is not understanding the market at least as important to the economy as understanding solid-state physics is to IBM? If we can map the human genome, why can we not map how a man loses his livelihood? If millions, on the Internet, can contribute a few cycles of their home PCs to searching for a signal from outer space, why can they not join a coordinated search for patterns in financial markets?

On the night of February 1, 1953, a very bad storm lashed the Dutch coast. It broke the famous sea dikes, the country’s ancient and proud bulwark against disaster. More than 1,800 died. Dutch hydrologists found the flooding had pushed the benchmark water-level indicators, in Amsterdam, to 3.85 meters over the average level. Seemingly impossible. The dikes had been thought to be safe enough from such a calamity; the conventional odds of so high a flood were thought to have been less than one in ten thousand. And yet, further research showed, an even greater inundation of four meters had been recorded only a few centuries earlier, in 1570. Naturally, the pragmatic Dutch did not waste time arguing about the math. They cleaned up the damage and rebuilt the dikes higher and stronger.

Such pragmatism is needed in financial theory. It is the Hippocratic Oath to “do no harm.” In finance, I believe the conventional models and their more recent “fixes” violate that oath. They are not merely wrong; they are dangerously wrong. They are like a shipbuilder who assumes that gales are rare and hurricanes myth; so he builds his vessel for speed, capacity, and comfort - giving little thought to stability and strength. To launch such a ship across the ocean in typhoon season is to do serious harm. Like the weather, markets are turbulent. We must learn to recognize that, and better cope.
