Society tells us unanimous agreement is a good thing. If all 12 people on a jury agree the defendant is guilty, for example, then a judge will feel pretty comfortable doling out the appropriate punishment. Yet, if 12 witnesses to a mugging unanimously agree the same person in a police line-up is the culprit, the one they identify is less likely to be guilty than if just two or three people had pointed the finger.
Welcome, then, to the ‘paradox of unanimity’, which holds that, in any realistic scenario, the probability of a large number of people all agreeing on something is so tiny that society’s confidence in unanimity is not justified. Yes, it is possible to come up with exceptions to the premise – faced with a line-up of five lions and a Siamese cat, for example, one might reasonably expect everyone, if asked, successfully to pick out the moggy.
That is because that situation is about as straightforward and certain as it is possible to get – it allows no room for doubt or bias. But real life is rarely straightforward or certain and, as soon as the slightest element of doubt or bias is introduced into a scenario, the chances that people will be not only unanimous but correctly so become very slim.
Imagine you were asked to toss a coin 10,000 times. Naturally you would expect it to come up heads roughly 50% of the time and so, if it actually came up heads 75% of the time, you could come to one of two conclusions – either the laws of probability have changed or the system is biased (essentially, there is something wrong with the coin). All things considered, the latter would appear more likely.
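The coin-toss reasoning can be made concrete with a quick likelihood comparison – a minimal sketch using the illustrative numbers from the paragraph above (the function name is my own):

```python
import math

def log_binom_pmf(k, n, p):
    """Log-probability of seeing k heads in n tosses of a coin whose
    heads-probability is p (log of the binomial pmf)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

n, k = 10_000, 7_500  # 10,000 tosses, 75% of them heads

# Log-likelihood ratio: 'the coin is biased' (p = 0.75)
# versus 'the coin is fair' (p = 0.5)
llr = log_binom_pmf(k, n, 0.75) - log_binom_pmf(k, n, 0.5)

print(f"log-likelihood ratio (biased vs fair): {llr:.0f} nats")
```

The ratio comes out at roughly 1,300 nats – that is, the biased-coin explanation is around e^1300 times more likely than the fair-coin one – which is why ‘there is something wrong with the coin’ is overwhelmingly the better conclusion.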
Let’s return, then, to the idea of the line-up – the police version, not the cats – which is the focus of an imminent academic paper discussed in some detail in Why too much evidence can be a bad thing. “In police line-ups, the systemic error may be any kind of bias, such as how the line-up is presented to the witnesses or a personal bias held by the witnesses themselves,” that article explains.
“Importantly, the researchers showed that even a tiny bit of bias can have a very large impact on the results overall. Specifically, they show that when only 1% of the line-ups exhibit a bias toward a particular suspect, the probability that the witnesses are correct begins to decrease after only three unanimous identifications.
“Counterintuitively, if one of the many witnesses were to identify a different suspect, then the probability that the other witnesses were correct would substantially increase.” Unanimous agreement by the witnesses, on the other hand, would – much like a coin that comes up heads three-quarters of the time – be an indication the system is biased or unreliable.
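A simplified Bayesian model makes the effect visible. Suppose each witness in a fair line-up independently identifies the culprit with probability 0.9, while 1% of line-ups are biased so that every witness picks the same (innocent) suspect regardless. Both the reliability figure and the model itself are illustrative assumptions of mine, not the paper’s exact formulation:

```python
def p_correct_given_unanimous(n, reliability=0.9, bias_rate=0.01):
    """Posterior probability the identified suspect is guilty, given that
    all n witnesses agreed.

    A fair line-up (probability 1 - bias_rate) produces unanimity on the
    culprit with probability reliability**n; a biased line-up (probability
    bias_rate) produces unanimity on its target no matter what.
    """
    fair_unanimous = (1 - bias_rate) * reliability ** n
    biased_unanimous = bias_rate
    return fair_unanimous / (fair_unanimous + biased_unanimous)

for n in (1, 3, 10, 30, 50):
    print(n, round(p_correct_given_unanimous(n), 3))
```

With these assumptions, the posterior falls from about 0.99 with one witness to roughly 0.34 with fifty – more unanimity, less confidence – because a long unbroken run of agreement is increasingly better explained by a biased line-up than by fifty independent correct identifications. (In this stripped-down model confidence declines from the outset; the paper’s richer model has it peak at around three identifications before falling.)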
If you have not yet spotted the investment angle in all this, just switch ‘unanimity’ to ‘consensus’ and it should become apparent. After all, if you have a group of investors, then – just as with any other group of people – it is highly unlikely you will end up with a wholly independent and unbiased selection of viewpoints.
So if, say, an investment committee unanimously agrees on something, it is actually a bad sign because it suggests the committee’s system is biased or broken. Rather than being indicative of a great decision, therefore, unanimity is more likely to imply some error within the decision-making process. In this context, any difference of opinion is a welcome and positive sign.
And of course the same argument holds for consensus broker forecasts. As value investors, we already know it does not bode well when everybody is unanimously bullish about a business because, statistically speaking, when there are substantially more buyers of a stock than sellers, the stock will first become overvalued and then it will underperform.
It also happens to be the case, however, that if everybody is unanimously bullish about a business and dissenting voices are conspicuous only by their absence, it becomes more likely that some bias has crept into the system, that some fact about the supposed investment case has perhaps not been fully appreciated and that, one way or another, the process has failed.