Hackers may now be able to make a computer look at a picture and see something completely different, but they would still have a hard time conjuring up a fake value investment
Can investing be hacked? Following the cyber-attacks on the NHS and other organisations last weekend, it seems a pertinent question to ask – and even more so when you consider the picture below is of a gibbon.
Well, yes, of course you see a panda – we see a panda, and any five-year-old you ask will tell you they see a panda. Apparently, though, it is not so hard to fool a computer into thinking it is looking at a gibbon.
Welcome to the concept of ‘adversarial examples’. According to this interesting blog from OpenAI – a not-for-profit organisation set up by Elon Musk and others to research making artificial intelligence safer – cyber-attackers could introduce seemingly random ‘noise’ into the data seen by computers with the intention of causing them to make a mistake. In effect, then, they are optical illusions for machines.
Without any interference, the blog suggests, artificial intelligence would be able to recognise the above picture as a panda with roughly 60% confidence. Overlay an adversarial example – essentially a very specific series of dots – however, and all of a sudden the artificial intelligence will switch to being 99% certain it is looking at a picture of a gibbon.
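For readers curious about the mechanics, here is a minimal Python sketch of the ‘gradient sign’ idea behind such attacks. It uses a toy linear classifier in place of a real image network, and every name and number in it is illustrative – this is not OpenAI’s actual setup, just the shape of the trick:

```python
import numpy as np

# Toy stand-in for an image classifier: score = w @ x, where a positive
# score means "panda" and a negative score means "gibbon". Real attacks
# target deep networks, but the gradient-sign trick is the same idea.
rng = np.random.default_rng(42)
n = 10_000                      # number of "pixels" in our flattened image
w = rng.standard_normal(n)      # hypothetical classifier weights
w -= w.mean()                   # centre the weights so a flat background scores zero

# Construct an input the classifier confidently labels a panda.
x = 0.5 + 0.001 * np.sign(w)

def classify(v):
    return "panda" if w @ v > 0 else "gibbon"

# Adversarial nudge: shift every pixel a tiny, uniform step against the
# gradient of the score (for a linear model, that gradient is just w).
eps = 0.01
x_adv = x - eps * np.sign(w)

# Each pixel moves by at most 1% of its range - invisible to a human -
# yet the classifier's label flips from "panda" to "gibbon".
```

The point of the sketch is that the perturbation is tiny per pixel but perfectly aligned with the model’s weaknesses, which is why a human sees no difference while the machine changes its mind completely.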
As you might imagine, adversarial examples have the potential to wreak havoc, with OpenAI raising the spectre of cyber-attackers targeting driverless cars. Instead of confusing a panda for a gibbon, then, artificial intelligence could be made to mistake a ‘No left turn’ sign for ‘No right turn’ and suddenly the future of driverless cars becomes even more exciting – though perhaps not in a way their owners would wish.
Could something along these lines happen in the world of investing?
Well, it is not a big leap from artificial intelligence to the so-called ‘smart beta’ strategies, which try to build on simple index-tracking products by focusing on a specific factor, such as momentum, low volatility or value. So could cyber-attackers, if they wanted to, manipulate the underlying investments?
Without wishing to give anyone any ideas, here on The Value Perspective, we do not believe it would take much to fool a computer algorithm designed to pick out stocks that display momentum or low-volatility characteristics. By buying a mid-sized UK stock on the days its share price was rising, for example, you could certainly reinforce the impression that it was a momentum investment.
Similarly, by selling the same stock on its ‘up’ days and buying it on its ‘down’ days, you might be able to massage its volatility and thereby convince a smart beta fund targeting low-volatility investments that this was an appropriate target. What we cannot work out, however, is how anyone could go about hacking a value strategy.
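To make the volatility-massaging idea concrete, here is a hedged sketch in Python. It assumes a simple low-volatility screen of the kind a smart beta fund might run – annualised standard deviation of daily returns – and makes the strong, purely illustrative assumption that counter-trading damps each day’s move by a fixed fraction through price impact:

```python
import numpy as np

# Illustrative only: hypothetical daily returns for a mid-sized stock
# over one trading year, with roughly 2% daily volatility.
rng = np.random.default_rng(7)
returns = rng.normal(0.0, 0.02, 250)

# A simple low-volatility screen: annualised standard deviation.
def annualised_vol(r):
    return r.std() * np.sqrt(250)

# Strong assumption for illustration: a counter-trader who sells into
# every up day and buys every down day damps each daily move by 30%
# through price impact.
damped = 0.7 * returns

# The screened statistic falls in exact proportion, so the stock now
# looks like a calmer, lower-volatility investment than it really is.
```

The design point is that the screen measures only the statistical footprint of the price series, so anything that smooths that footprint – however artificially – will satisfy it.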
If you were going to try – aside from tinkering with the accounts, which is a whole other ball game – you would have to take a stock trading at fair value, manipulate its price to be cheaper than it was so the value algorithm took an interest and … what would be the point? Or to put it another way, you cannot fake how cheaply priced a stock needs to be to be a genuine value stock – it really has to be that cheap.
We have argued before that smart beta is not smart enough to replicate value investing effectively but, that notwithstanding, we are strangely cheered by the idea value-oriented index-tracking funds could be one small corner of information technology that appears to be beyond the reach of cyber-attacks.