Facebook is in trouble again – this time, the BBC reports, after it approved an advertising campaign aimed at people interested in a conspiracy theory favoured by white nationalists.
Apologising for the episode, the company explained that the “white genocide” targeting category had been created not by human staff or users but by an algorithm – and it is not the first time artificial intelligence (AI) has embarrassed one of the world’s tech giants.
As the BBC article also notes, Twitter was recently forced to apologise after an anti-Semitic phrase briefly featured in the site’s “trending” section, where topics are set by algorithms, although they can later be reviewed by human moderators.
And last month, as this Telegraph piece reports, Amazon had to scrap an internal tool that used AI to sort through job applications – and apparently ‘learnt’ to be sexist.
Although the tool was built to sort through CVs automatically and pick out the most promising, its creators quickly noticed it had taught itself to prefer male candidates over female ones – apparently because the system had been “trained” on applications submitted over a 10-year period, most of which came from men. After initially trying to fix the bias, Amazon abandoned the project altogether.
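The mechanism behind that kind of bias is simple enough to sketch. The following is a minimal, hypothetical illustration in Python – every CV, word and hiring outcome here is invented for the example, and real recruiting systems are far more sophisticated – of how a naive scoring model trained on historically skewed data ends up penalising a word such as “women’s” even though it says nothing about skill:

```python
from collections import Counter
import math

# Invented historical data: CVs as bags of words, with past hiring outcomes.
# Skill words ("python", "engineer") appear on both sides, but because most
# past hires were men, "women's" (as in "women's chess club") shows up almost
# only among rejections -- a proxy the model has no reason to ignore.
hired = [
    "python engineer chess club",
    "python developer football team",
    "java engineer chess club",
    "python engineer rowing team",
]
rejected = [
    "python engineer women's chess club",
    "java developer women's rowing team",
    "sales assistant retail",
]

def word_scores(hired, rejected):
    """Score each word by how much more often it appears in hired CVs
    than in rejected ones (add-one smoothing avoids log of zero)."""
    h, r = Counter(), Counter()
    for cv in hired:
        h.update(cv.split())
    for cv in rejected:
        r.update(cv.split())
    vocab = set(h) | set(r)
    return {w: math.log((h[w] + 1) / (r[w] + 1)) for w in vocab}

scores = word_scores(hired, rejected)

def score_cv(cv):
    """Sum the learned word scores for a new CV."""
    return sum(scores.get(w, 0.0) for w in cv.split())

# Two equally skilled candidates; the only difference is the word "women's",
# yet the second CV is scored lower.
print(score_cv("python engineer chess club"))
print(score_cv("python engineer women's chess club"))
```

No one programmed the sexism in: the negative score for “women’s” falls straight out of the skewed history the model was given, which is essentially what was reported to have happened to Amazon’s tool.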
Hitler-loving chat robot
Combining sexism, racism and all manner of other bigotry into a single package, meanwhile, there is the uncomfortable story from a couple of years ago when Microsoft set out to introduce an innocent AI chat robot to Twitter – before having to delete it a day later because, as this Telegraph piece puts it, “it transformed into an evil Hitler-loving, incestual sex-promoting, ‘Bush did 9/11’-proclaiming robot”.
The article reveals: “This is because her responses are learned by the conversations she has with real humans online – and real humans like to say weird stuff online and enjoy hijacking corporate attempts at PR.”
So, an explanation that reflects as badly on humanity as it does on the artificial intelligence we have created – but, then again, it is humanity’s flaws that have helped value investing to thrive as a strategy for more than 100 years.
There have been a lot of AI failures
We have previously considered the limitations of AI in the context of chat-up lines and paint colours – where, after much tweaking of programmes, “Do you have a pence because you stole my heart bat?” and “You look like a thing and I love you” for the former, and “Dorkwood”, “Sindis Poop” and “Stanky Bean” (all shades of grey) for the latter, were among the most memorable results.
Importantly, we highlight these failures not to be churlish but as a counterbalance to the breathless media reports of every AI success – the few hits plucked from all the misses.
“The robot apocalypse may well be coming,” we wrote last time. “But not today.”
Given the episodes mentioned above, we should perhaps add a caveat: “And if it does happen anytime soon, the robots will probably have done it by accident.”