Part of The Value Perspective’s holiday reading list has been the excellent The Signal and the Noise – The Art and Science of Prediction, in which US statistician and writer Nate Silver discusses the dangers inherent in trying to predict the future and how even supposed experts in their field generally prove to be very poor forecasters.
Silver, who as it happens accurately predicted the results of every single state in the 2012 US election, highlights a 1996 study, Inside Dopes? Pundits as Political Forecasters, in which three academics classified and then evaluated the accuracy of predictions made by panellists on The McLaughlin Group, a highly regarded public affairs discussion programme that still airs in the US.
Over an 18-month period to the end of 1994, the show’s panellists, who are drawn from the great and the good of US political punditry, made almost 1,200 predictions. Of those, the authors of the study excluded a little over 200 predictions on the grounds they were incapable of being tested and a similar number because they could not be resolved within the requisite timeframe.
This left 757 testable predictions – of which the study found 50.1% turned out to be correct and 49.9% incorrect. “Thus the panellists were wrong almost exactly as often as they were right,” the study’s authors observed. “The implication is clear – someone who tunes into The McLaughlin Group to get a better grip on the future would do just as well to flip a coin.”
Aside from the general lesson that even experts in their field can be very poor at forecasting – a point with which The Value Perspective regulars will be familiar – another interesting part of the study was its analysis of why, within that broad 50/50 average, some of the panellists on The McLaughlin Group were far more accurate in their predictions than others.
One area on which the study focused was the political persuasions of four of the regular panellists at the time, with host John McLaughlin and the Wall Street Journal’s Fred Barnes having unabashedly pro-Republican leanings and Newsweek’s Eleanor Clift and Jack Germond of the Baltimore Sun being the show’s resident liberals.
Overall, when it came to forecasting the results of elections, McLaughlin got 67% of his predictions right, Barnes 60%, Germond 54% and Clift 45%. Narrowing the field down to pro-Republican predictions, however, the study found the accuracy of the right-leaning McLaughlin and Barnes rose to 72% and 71% respectively, while the accuracy of Germond fell to 43% and Clift to just 24%.
Does this mean Republicans make better forecasters? Of course not – it just so happened the period under review saw the Republicans rout the Democrats in the 1994 elections. “This made McLaughlin and Barnes look like seers and Germond and Clift look like dopes when all that really happened was each of the four forecasters simply predicted what he or she hoped would occur,” noted the study.
Nor are pundits who tend to forecast what they want to happen confined to the world of politics – they exist in finance too. Take the following graph from BCA Research, which seeks to illustrate “analyst forecasting error” by plotting, from 1980 to 2013, the difference between actual S&P 500 earnings and what consensus estimates had been for them 12 months previously.
Only twice in the last 33 years have consensus estimates been more than 10% too low and yet four times they have been more than 40% too high. Most people are inclined to predict what they want to happen in the world and most analysts want strong earnings growth. As we have noted here before, rather than having our judgement clouded by the opinions of others, we have found independent thought much more profitable.