The Jellybean Trilogy (Part II) – Why you should show causation before you seek correlation
In The Jellybean Trilogy (Part I), we highlighted a number of reasons – particularly the increased chance of finding so-called ‘false positives’ – why a leading US professor of health research had concluded most new medical research does not hold water. Here we return closer to home by looking at an academic paper that argues “most claimed research findings in financial economics are likely false”.
Before going into the details, it is worth noting one big difference between scientific and financial research. In the former, any new discovery is likely to be tested and retested with different data sets in different labs and under different conditions yet, in the latter, this kind of ‘replication study’ is almost non-existent. A shiny new financial theory may well see you published; scrutinising somebody else’s work probably will not.
As a result, financial researchers are undertaking an ever-increasing number of studies designed to reveal new ways to beat the stockmarket. Those studies that unearth something new and interesting become academic papers; those that do not are binned and the researchers behind them go back to their drawing boards – or, more likely, to their high-powered computers.
We say that financial researchers rarely have their work scrutinised but, over the last year or two, there have been two notable exceptions – the first being Does academic research destroy stock return predictability?, in which David McLean and Jeffrey Pontiff argue that certain stockmarket anomalies become less anomalous once they have been published.
Whatever the explanation – a quirk of the data, the simple passage of time or perhaps, if we are being kind, the research itself helping the market grow a shade more efficient – it is reasonable to suggest few financial papers will have deserved quite the fanfare they received on publication. Indeed, of the 82 different factors tested in this study, 10 had to be ignored completely.
In other words, not only were these ‘anomalies’ no longer anomalous, McLean and Pontiff were unable to replicate the original findings within the data set. It was as if those researchers had simply stuck their finger in the air and written about the first ‘market-beating’ anomaly they found – high-volatility stocks or low-beta equities or whatever it might be – when the reality is it did not work. It never worked.
Then, last year, came the latest version of a paper by Campbell Harvey, Yan Liu and Heqing Zhu. The cross-section of expected returns subjected a wide range of empirical financial research done since 1967 to what is known in the medical world as a ‘multiple testing framework’ – the simultaneous testing of more than one hypothesis – and reached the damning conclusion we mentioned at the start: most findings are false.
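To see why a multiple testing framework matters, consider the following sketch. The numbers are purely illustrative – we take the 316 factors catalogued in the paper and the conventional 5% significance level, and apply a simple Bonferroni correction as one example of a multiple-testing adjustment (the paper itself considers several, more sophisticated, methods). We also use a normal approximation to the t-distribution for simplicity.

```python
from statistics import NormalDist

n_tests = 316          # factors catalogued by Harvey, Liu and Zhu
alpha = 0.05           # conventional single-test significance level

# If every factor were pure noise, a 5% test would still 'discover'
# roughly alpha * n_tests of them by chance alone
expected_false_positives = alpha * n_tests
print(f"Expected chance discoveries at 5%: {expected_false_positives:.0f}")

# A Bonferroni correction shares the 5% error budget across all tests,
# raising the hurdle each individual factor must clear
bonferroni_alpha = alpha / n_tests
z_single = NormalDist().inv_cdf(1 - alpha / 2)
z_bonferroni = NormalDist().inv_cdf(1 - bonferroni_alpha / 2)
print(f"Single-test hurdle:   |t| > {z_single:.2f}")
print(f"Bonferroni hurdle:    |t| > {z_bonferroni:.2f}")
```

In other words, once you account for the sheer number of factors being tested, the statistical bar a genuine discovery must clear rises well above the textbook level of roughly 2.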
One major consideration here, as we also touched upon in The Jellybean Trilogy (Part I), is the ever-greater advances being made in computer technology. As the following chart shows, up until the early 1980s, when computers were still pretty unsophisticated and the associated research required a great deal of human effort, supposedly market-beating factors were found at the rate of perhaps one a year.
[Chart: number of market-beating factors discovered per year. Source: Campbell Harvey, Yan Liu and Heqing Zhu, 2014]
As computers have become more and more powerful, however, so the number of factors has increased – to the extent that an average of 18 a year have been unearthed over the last decade. Simply put, the easier it has become to ‘mine’ data, the more data has been mined. It is thus unsurprising that more discoveries have been made – and that many of these have turned out to be nonsense.
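The mechanics of this are easy to demonstrate with a toy data-mining exercise of our own construction (not the paper's). Here we generate candidate ‘factors’ whose monthly returns are pure noise – no genuine edge at all – and then keep any whose t-statistic clears the conventional hurdle of 2. The factor count, sample length and volatility are all made-up numbers chosen only for illustration.

```python
import random
from statistics import mean, stdev

random.seed(42)

N_FACTORS = 300   # candidate factors to 'mine' (assumed)
N_MONTHS = 240    # 20 years of monthly returns (assumed)

def t_stat(returns):
    """t-statistic of the mean return against zero."""
    return mean(returns) / (stdev(returns) / len(returns) ** 0.5)

discoveries = 0
for _ in range(N_FACTORS):
    # Monthly returns drawn from noise with zero true mean: no real edge
    noise = [random.gauss(0.0, 0.05) for _ in range(N_MONTHS)]
    if abs(t_stat(noise)) > 2.0:
        discoveries += 1

print(f"'Market-beating' factors found in pure noise: {discoveries} of {N_FACTORS}")
```

Run this and a double-digit number of entirely spurious ‘anomalies’ will typically emerge – around 5% of whatever you test. The more factors mined, the more nonsense discovered, exactly as the chart above suggests.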
That of course raises the question as to how many factors investors might reasonably hope to rely upon as they look to bolster their own returns. Over the course of their research, Harvey and his colleagues catalogued 316 supposedly market-beating factors and concluded that, under a range of statistical measures, up to half of them “are likely false discoveries”.
Furthermore, most of the factors that are not actually disproved by the paper’s multiple testing framework have such negligible impacts that, when you adjust for transaction costs and other expenses, they are unlikely to be significant in any way. Sure, if you test enough data, you will find factors that appear to beat the market – whether these are both true and meaningful is an entirely different matter.
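The arithmetic behind the costs point is simple enough to sketch. Every number below is hypothetical – a factor that appears to add 1.5% a year before costs, a heavily traded portfolio and an assumed round-trip trading cost – but it shows how quickly a marginal factor's apparent edge can evaporate.

```python
gross_alpha = 0.015        # apparent annual excess return of the factor (assumed)
annual_turnover = 2.0      # units of portfolio traded per year (assumed)
cost_per_trade = 0.006     # 60bp round-trip cost per unit of turnover (assumed)

# Net edge after deducting the drag from trading frictions
net_alpha = gross_alpha - annual_turnover * cost_per_trade
print(f"Gross: {gross_alpha:.2%}, net of costs: {net_alpha:.2%}")
```

On these made-up but not implausible numbers, a seemingly attractive 1.50% gross edge shrinks to 0.30% a year – hardly significant in any meaningful sense.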
One way financial researchers might protect themselves against the perils of data-mining is to return to the kind of mind-set that existed when one factor was being discovered annually rather than 18 – that is to say, instead of testing a mass of data and then homing in on any factors that seem to ‘work’, to begin with some sort of rationale as to why a factor might actually work in the first place.
In short, the trick is to be able to show causation before seeking correlation. It is why, as we have said before in articles such as Tangled up with blue and Killer question, we prefer to pursue investment strategies that can not only demonstrate outperformance but also explain why they outperform.
Fund Manager, Equity Value
I joined Schroders in 2000 as an equity analyst with a focus on construction and building materials. In 2006, Nick Kirrage and I took over management of a fund that seeks to identify and exploit deeply out-of-favour investment opportunities. In 2010, Nick and I also took over management of the team's flagship UK value fund, seeking to offer income and capital growth.
The views and opinions displayed are those of Nick Kirrage, Andrew Lyddon, Kevin Murphy, Andrew Williams, Andrew Evans, Simon Adler, Juan Torres Rodriguez, Liam Nunn, Vera German and Roberta Barr, members of the Schroder Global Value Equity Team (the Value Perspective Team), and other independent commentators where stated.
They do not necessarily represent views expressed or reflected in other Schroders' communications, strategies or funds. The Team has expressed its own views and opinions on this website and these may change.
This article is intended to be for information purposes only and it is not intended as promotional material in any respect. Reliance should not be placed on the views and information on the website when taking individual investment and/or strategic decisions. Nothing in this article should be construed as advice. The sectors/securities shown above are for illustrative purposes only and are not to be considered a recommendation to buy/sell.
Past performance is not a guide to future performance and may not be repeated. The value of investments and the income from them may go down as well as up and investors may not get back the amounts originally invested.