Losing My Religion

Dec 04, 2013
Samuel Lee, a strategist on the passive funds research team at Morningstar, writes a captivating piece on how he unlearned efficient-markets folly.
 

When I first began investing, I caught the passive investing bug bad. The efficient-markets hypothesis, or EMH, was like a divine revelation to me. It was elegant--almost beautiful--and blessed by an impressive-sounding body of authorities. The University of Chicago was my church, Eugene Fama my high priest, and Jack Bogle a saint. He still is, of course.

I’d like to think I wasn’t a blind convert. I had, after all, looked at the data and listened to the experts. As far as I could tell, there was a consensus: Beating the markets is close to impossible. And there was a corollary: Active management is a sin.

The biggest challenge to my belief system was discovering the existence of momentum--more precisely, it was discovering how certain people reacted to the evidence. Everyone acknowledged it existed. However, the diehard efficient-markets academics had baffling explanations: It was a mystery, perhaps tied to some kind of hidden “risk factor,” or it was a statistical illusion, or it was impossible to exploit after fees and taxes. Some just shrugged their shoulders.

Momentum is obviously behavioral in origin, made possible by irrational exuberance or pessimism. No other explanation makes sense. And it is exploitable because the effect is so powerful and prevalent in even the most liquid markets, such as large-cap stocks. The effect is a dagger in the heart of the most watered-down version of the EMH, “weak-form efficiency,” which holds that an asset’s future price cannot be forecast using past prices (that is, prices move in a random walk). Unbelievably, academics completely overlooked momentum’s existence until 1993. Their statistical tests were so weak, they didn’t detect what’s now acknowledged to be the most powerful and pervasive anomaly in the markets.
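To make the mechanics concrete, here is a minimal sketch of a cross-sectional momentum screen. Everything in it is illustrative: the data are randomly generated, and the 12-month-minus-1 lookback and top-quintile cutoff are common conventions in the literature, not a claim about how the academic studies were run.

```python
import numpy as np
import pandas as pd

# Illustrative sketch of a "12-1" momentum screen on synthetic data.
# With random returns there is no momentum to find; this only shows the mechanics.
rng = np.random.default_rng(0)
dates = pd.date_range("2000-01-31", periods=240, freq="M")   # 20 years of month-ends
tickers = [f"STOCK{i:02d}" for i in range(50)]
monthly_returns = pd.DataFrame(rng.normal(0.008, 0.05, (240, 50)),
                               index=dates, columns=tickers)
prices = (1 + monthly_returns).cumprod()

# Momentum signal: trailing return ending one month ago, skipping the most
# recent month (the usual convention to avoid short-term reversal).
signal = prices.shift(1) / prices.shift(12) - 1

# Each month, hold an equal-weighted long position in the strongest quintile
# and earn those stocks' returns over the following month.
top_quintile = signal.rank(axis=1, pct=True) >= 0.8
portfolio = monthly_returns.shift(-1)[top_quintile].mean(axis=1)

print(f"Annualized return of the momentum sleeve: {12 * portfolio.dropna().mean():.1%}")
```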

I began to rethink the assumptions built into the studies I’d taken as gospel when I saw finance’s most prestigious journals had published after-the-fact gee-whiz models that could justify the late-1990s tech bubble as a rational market response. It takes a lot of learning to be so wrong. When the scales fell from my eyes, I realized this wasn’t an isolated example. The EMH greased the way to publication, no matter how badly its lens distorted the picture. It reminded me of another form of collective delusion that still stalks parts of academia today: the idea of humans as blank slates and the diminishment of heredity as an explanation for, well, anything. (Steven Pinker’s The Blank Slate is a great treatment of this topic.)

My simplistic understanding of what all the smart guys believed evolved as I delved into the academic literature. In fact, there had been no academic consensus since at least the 1980s. While Eugene Fama and Kenneth French stumped for efficient markets, giants of the field like Andrei Shleifer, Larry Summers, Richard Thaler, and Robert Shiller were pushing back with behavioral explanations. One of the hottest points of contention was the “value effect,” the tendency for stocks with low valuation ratios to outperform stocks with high valuation ratios. Fama and French argued value stocks were riskier than growth stocks; equally credentialed academics, led by Shleifer, Robert Vishny, and Josef Lakonishok, argued investors irrationally extrapolated bad results for value stocks and good results for growth stocks, creating mispricings. 1

As I poked around, I discovered that many EMH-inspired findings looked an awful lot like zombies: They just wouldn’t stay dead. Ever heard of the capital asset pricing model, or CAPM? It’s a wildly unrealistic model that proves owning the market portfolio is “mean-variance efficient,” meaning it’s impossible to find a portfolio with a better volatility-adjusted return. CAPM also says an asset’s expected return is determined by one factor only: how closely its returns move in tandem with the market portfolio, its beta. Many stock analysts to this day use beta to estimate the expected return of a stock, or its cost of equity, despite the fact that it doesn’t work. In fact, high CAPM beta predicts low future returns and vice versa, a phenomenon called the “low-volatility anomaly.”
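For reference, the CAPM relationship described above boils down to a single formula. The sketch below uses made-up return series and made-up rate assumptions purely to show how beta and the CAPM expected return are computed; as the paragraph notes, the model’s predictions don’t hold up empirically.

```python
import numpy as np

# CAPM: E[R_i] = R_f + beta_i * (E[R_m] - R_f), where
#   beta_i = Cov(R_i, R_m) / Var(R_m).
# All numbers below are hypothetical, for illustration only.
rng = np.random.default_rng(1)
market = rng.normal(0.006, 0.04, 120)                    # 10 years of monthly market returns
stock = 0.001 + 1.3 * market + rng.normal(0, 0.03, 120)  # a stock that co-moves with the market

beta = np.cov(stock, market)[0, 1] / np.var(market, ddof=1)

risk_free = 0.03        # assumed annual risk-free rate
equity_premium = 0.05   # assumed annual equity risk premium
capm_expected_return = risk_free + beta * equity_premium

print(f"Estimated beta: {beta:.2f}")
print(f"CAPM cost of equity: {capm_expected_return:.1%}")
```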

Somehow all of this has gotten lost in the active versus passive debate. The science of investing is not all about efficient markets--far from it.

Bad Ideas and Bad Incentives in Science 

I’m a slow learner. It took me a while to realize that the sophistication of a study had little to do with its merit. I clued into this when I began reading the work of John Ioannidis, a doctor who also happens to be a math whiz.

Ioannidis published an influential 2005 paper that argued most published findings are bound to be false. 2 You’ve experienced this phenomenon first-hand if you’ve paid attention to the news: It seems researchers have discovered cures for cancer, AIDS, old age, and obesity several times over.

His argument is sensible. Publications tend to look for positive results (for example, “drug X cures cancer in mice”— wow!) and ignore negative results (“drug X does, um, nothing in mice”— boring!). Unfortunately, the most interesting findings are also the ones most likely to come about by dumb luck or error.

Researchers also have a lot of leeway in the way they collect and interpret data. Small tweaks can turn an unpublishable negative result into a statistically significant “discovery.” Hence all the exciting initial findings that come to naught.

Ioannidis’ provocative conclusion: “Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.”

The same year, Ioannidis published a paper looking at the most highly cited clinical research studies that were later followed up by larger or better-constructed studies. 3 Five of the six nonrandomized studies and nine of the 39 randomized controlled studies were contradicted or weakened. These weren’t bad studies. They were published by top researchers using state-of-the-art methodologies.

Subsequent research is consistent with Ioannidis’ argument. Pharmaceutical firm Bayer AG found it couldn’t replicate the results of about two thirds of 67 studies it looked at. 4  Amgen found that it couldn’t reproduce the results of more than 90% of 53 promising papers in cancer research. 5 And this is biomedical science, where the methodologies are rigorous (double-blind trials are common) and your study—if influential or interesting enough—is going to be replicated by deep-pocketed pharma giants or academics looking to make a name for themselves. If there’s one place where researchers have the incentive to get it right, it’s there. And yet the failure rate is astounding.

My intuition is economics and finance studies are even worse. There are two ways to validate an economic or financial theory: wait 100 years and collect new data, or look at a fresh new data set, such as another time period or different markets. It can take decades before someone’s held accountable for a bunk theory. On top of that, it’s easy to run many different “experiments” on the historical data—just change the programming code—and prove your point, and no one can tell how many experiments you’ve run. (This is a very bad thing, a sin in empirical science.)
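A toy simulation makes the point. In the sketch below, 1,000 “strategies” that are pure noise are each tested against the same 30-year history; with a conventional 5% significance bar, a couple dozen of them will look like statistically significant winners through luck alone. The numbers are made up; only the logic matters.

```python
import numpy as np

# 1,000 backtests of strategies with zero true edge, each on 360 months of data.
rng = np.random.default_rng(42)
n_strategies, n_months = 1000, 360
noise = rng.normal(0.0, 0.04, (n_strategies, n_months))   # pure noise, no real alpha

# Standard t-statistic for "mean monthly return is different from zero."
means = noise.mean(axis=1)
std_errors = noise.std(axis=1, ddof=1) / np.sqrt(n_months)
t_stats = means / std_errors

# t > 1.96 corresponds to the usual two-sided 5% threshold. Roughly 2%-3% of
# these worthless strategies will clear it with a positive "alpha" anyway.
spurious_winners = int(np.sum(t_stats > 1.96))
print(f"{spurious_winners} of {n_strategies} noise strategies look significant")
```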

I’ve learned to not be overly impressed with a single study or even a series of studies, no matter how credentialed the authors. The data can be tortured to confess to anything. You need to apply liberal doses of common sense—more when the claims are outlandish. A new theory has to be backed by many independent sources of data, ideally data the theory’s originators have never seen, and you need to really kick the tires of any assumptions it makes.

The best models or theories are the ones that best predict previously unseen data using the fewest and weakest assumptions possible. It’s the litmus test of whether you’ve struck truth: Can you rely on it to work in the future? If not, it’s useless; it’s a prettified story, nothing more. Risk manager Aaron Brown argues many finance academics would never bet money on their more arcane models—such models are optimized for publication, to show how clever you are, not optimized to say something true about the world. The arguments put forth in high finance can have an otherworldly quality. Consider the closed-end fund “puzzle,” the fact that some investors buy CEFs at big premiums in initial public offerings, despite the sad reality that the premium almost always collapses within a couple of months into a discount. Prominent researchers have published papers with models and supporting data showing why such behavior is rational; common sense says IPOs are foisted on naïve investors. Unless you’ve overdosed on math, it’s clear which is probably right.

Fama-French Versus Graham-Dodd-Buffett 

The efficient-markets debate is really a competition between two theories. The efficient marketeers, led by Eugene Fama and Kenneth French, see the market as rational and calculating. Value investors, exemplified by Benjamin Graham and Warren Buffett, see it as a bipolar creature, either ecstatic or depressed.

The EMH scores highly on elegance: It makes asset-pricing theory amenable to beautiful theorems (such as CAPM) and does a lot to connect “macro” and “micro” models. Behavioral explanations aren’t so accommodating to economists’ physics envy; it’s hard to produce a grand theory of everything once you throw irrational human beings into the mix.

Elegance can’t be for its own sake. A theory has to be predictive. This is where the EMH falls flat on its face and the behavioral model shines.

Imagine yourself back in New York City on May 17, 1984. Columbia University is hosting a debate of sorts in celebration of the 50th anniversary of the publication of Benjamin Graham and David Dodd’s classic text “Security Analysis.” On the offensive is Michael Jensen, an influential University of Rochester professor, who’s there to stump for efficient markets, a near-unanimous academic consensus. On the defensive is Warren Buffett, Graham’s most famous disciple and already recognized as one of the greatest investors alive.

Jensen starts. He reviews the academic literature, reciting a litany of studies showing no statistically significant evidence of skill. It sounds impressive. (I’m filling in the details here; it seems no copies of his speech survive on the Internet.) He ends by describing the fund industry as a coin-flipping game—enough coin-flippers and someone’s bound to enjoy a long streak that in isolation looks impossible.

Buffett responds. He asks you to imagine a national coin-flipping competition with all 225 million Americans. Each morning the participants call out heads or tails. If they’re wrong, they drop out. After 20 days, 215 coin-flippers will have correctly called 20 coin flips in a row—literally a one-in-a-million feat for each individual flipper, but an expected outcome given the number of participants. Then he asks, what if 40 of those coin-flippers came from one place, say, Omaha? That’s not chance. Something’s going on there. 6
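(The arithmetic holds up: calling 20 flips in a row has odds of 1 in 2^20, or about 1 in 1,050,000, so among 225 million flippers you would expect roughly 225,000,000 ÷ 1,048,576 ≈ 215 survivors by pure chance.)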

Buffett argues “Graham-and-Doddsville” is just that place. He presents nine different funds that have beaten the market averages over long periods, all sharing only two qualities: a value strategy and a personal connection to Buffett. He emphasizes that they weren’t cherry-picked with the benefit of hindsight.

In closing, he boldly predicts “those who read their Graham and Dodd will continue to prosper.” The crowd goes wild. Later on, at the cocktail reception, everyone’s talking about how Buffett crushed Jensen.

If you’re anything like me, you would’ve disagreed. Buffett claimed the funds weren’t cherry-picked, but how could you tell? And at least a few of them had the same ideas as Buffett. Sequoia was an early investor in Berkshire Hathaway stock, and by 2004 it had 34% of its portfolio in Berkshire. It’s not clear how independent of his success the funds really were. Buffett’s survey would have failed to gain publication in a respectable journal because it wasn’t reproducible.

But the ultimate test of a theory isn’t how credentialed its proponents are or whether it’s published in a prestigious journal, it’s this: Does it have predictive power? In 1984, an efficient marketeer would predict the following: Over the long run, it’s highly likely Warren Buffett will continue to earn excess returns only by taking on more risk. A value investor would predict “those who read their Graham and Dodd will continue to prosper.”

Which theory did a better job? From 1985 to 2012, Berkshire Hathaway’s book value would go on to grow 18% annualized, beating the S&P 500 by 7.4% annualized, with lower volatility. The value investors would avoid the worst of the tech bubble. The idea of value stocks outperforming would be accepted by academics a decade later and integrated into their models as the “value premium,” a compensatory return boost for bearing more “risk.” (Decades later, they’re still debating what this mysterious risk is!) In an ideal world, the efficient-markets theorist of 1984 would have moved closer to being a value investor by 2013.

Sadly, I can’t find many old-school efficient-markets academics who’ve marked their beliefs to market. I’m not surprised. Scientific history is a procession of the old guard clinging to old ideas, defending them to the bloody death. In the early 20th century, physicist Max Planck experienced this firsthand when he introduced quantum theory, which assumed some physical phenomena, such as light, occurred in discrete quantities, as if nature operated with dials that could only be rotated into set notches rather than smoothly spun. He is quoted as saying, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”7 Ironically, Planck rejected the Copenhagen interpretation of quantum mechanics, devised by a trio of younger physicists, Niels Bohr, Werner Heisenberg, and Wolfgang Pauli (Heisenberg and Pauli being 20-somethings at the time). It would go on to become the canonical interpretation.

I wouldn’t feel too smug pointing at the perceived failings of those smarter than us. At least once in a while a scientist will change his mind. Most people have never significantly altered beliefs that took root while they were young. They cling to comforting delusions, and for that everyone is worse off.

1 Josef Lakonishok, Andrei Shleifer, and Robert W. Vishny. “Contrarian Investment, Extrapolation, and Risk.” The Journal of Finance, 1994.

2 John P. A. Ioannidis. “Why Most Published Research Findings Are False.” PLoS Medicine, 2005.

3 John P. A. Ioannidis. “Contradicted and Initially Stronger Effects in Highly Cited Clinical Research.” The Journal of the American Medical Association, 2005.

4 Florian Prinz, Thomas Schlange, and Khusru Asadullah. “Believe It or Not: How Much Can We Rely on Published Data on Potential Drug Targets?” Nature Reviews Drug Discovery, September 2011.

5 C. Glenn Begley and Lee M. Ellis. “Drug Development: Raise Standards for Preclinical Cancer Research.” Nature, 2012.

6 Warren E. Buffett. “The Superinvestors of Graham-and-Doddsville.” Hermes, 1984.

7 Max Planck. Wikiquote, http://en.wikiquote.org/wiki/Max_Planck

Samuel Lee is a strategist on the passive funds research team for Morningstar.
