Academic papers set great store by being able to portray any new research they reveal to the world as ‘statistically significant’ – meaning, broadly speaking, that the result has less than a 1-in-20 probability of having arisen by chance alone. However, as we pointed out in Jellybean Trilogy IV (trust us, that title does hold up), statistical significance is by no means the same thing as ‘being a fact’ or ‘having a significant impact’.
Starting, logically enough, with Jellybean Trilogy I, that series aimed to encourage readers of The Value Perspective to be careful about taking any research at face value. After all, the huge proliferation of academic papers over the years – facilitated, among other things, by ever-greater levels of computer firepower – has hardly been accompanied by a commensurate upsurge in earth-shattering revelations.
What has come closer to matching the increase in papers published per year, however – at least in the field of economics – is the average number of equations being published per paper. Just look at the following chart, which is taken from The use of mathematics in economics and its effect on a scholar’s academic career, published in 2012 by Miguel Espinosa, Carlos Rondon and Mauricio Romero.
[Chart: average number of equations per economics article per year. Source: JSTOR. Calculations: authors]
The chart shows how the average number of equations per economics article per year has mushroomed over time, from an average of four equations per article for the decade 1895/1905 to an average of 70 equations per article for the decade 1996/2006. Obvious reasons for this include, as we said, computers – before they existed, you came up with the theory first and then you tested it as best you could.
After the Second World War though, computers increasingly made it much easier to put the research cart, as it were, before the horse. As we observed in Jellybean Trilogy I: “The ease with which scientists can screen many thousands of genes – or financial analysts many thousands of stocks – and then crunch their numbers means it is the experiments that are giving rise to new theories.”
The problem is, this has inevitably led to a fair degree of diminishing returns in terms of what more recent papers are saying. Either their authors are making claims about significant issues in which they have a limited amount of confidence or they are making claims they are very confident are true – but not discussing their significance quite as much as they might.
An analogy would be, while it can be said with a very high degree of confidence that someone standing at the top of a hill has more chance of being hit by lightning, the big question is – how much more? The danger with many academic papers today is that while they could crunch the numbers and come up with an answer, they may very well fail to go into a crucial factor such as what the weather was like.
Yes, of course your chances of being struck by lightning go up if you stand at the top of a hill – but if you happen to be doing so one afternoon in June, when the sun is out, the sky is blue and there is not a cloud to spoil the view, the odds against it may only shorten from, say, 10,000,000 to 1 to 9,999,990 to 1. That may or may not be statistically significant but it certainly is not that factually significant.
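To see just how small that shift is, here is a minimal sketch that converts the article's hypothetical odds into probabilities and compares them. The odds figures are the illustrative numbers from the text above, not measured lightning statistics, and the function name is our own.

```python
# Illustrative only: the odds are the article's hypothetical figures,
# not real lightning statistics.

def odds_to_prob(odds_against: int) -> float:
    """Convert 'N to 1 against' odds into a probability of 1 / (N + 1)."""
    return 1 / (odds_against + 1)

p_baseline = odds_to_prob(10_000_000)  # chance before climbing the hill
p_hilltop = odds_to_prob(9_999_990)    # chance on a cloudless hilltop

absolute_rise = p_hilltop - p_baseline
relative_rise = p_hilltop / p_baseline - 1

print(f"absolute rise in probability: {absolute_rise:.2e}")
print(f"relative rise in probability: {relative_rise:.6%}")
```

The absolute increase works out to roughly one in ten trillion, and the relative increase to about a ten-thousandth of one per cent – a difference a large enough dataset could flag as statistically significant while remaining factually trivial.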
Here on The Value Perspective, our suspicion is that, a century ago, economists were making claims that were broadly true and/or significant but with a low degree of confidence, because they had limited means of testing their theories. These days, many are making claims with a misplaced sense of confidence – and being overconfident about something without knowing its magnitude, or indeed whether it really exists, is not ideal.
We should quickly add, however, that those are not the conclusions of Espinosa and his co-authors. Among other things, they argue their results provide “concrete measures of mathematisation in economics” and that they have found “the training and use of mathematics has a positive correlation with the probability of winning a Nobel Prize in certain cases”.
That said, they also concluded that Nobel-winning economists use more equations before receiving their prize than they do afterwards. Might one infer from this that, once you have your Nobel Prize in the bag, you can be more willing to make arguments that are broadly true because you are secure in the knowledge you already have the confidence of your audience?
Certainly the great 20th-century economist JK Galbraith, who never won the Nobel Prize for economics but whose work is highly regarded by many who have, was no fan of equations – once affirming his preference for a more pragmatic approach to his chosen field with the line: “I react to what is necessary. I would like to eschew any formula.”