
What warring Wikipedia bots tell us about our robot future

If bots designed by two different programmers can fight, what about those designed by two countries?

Image: REUTERS/Luke MacGregor

Peter Beech
Freelance journalist

When the term “information wars” was coined, its creators didn’t have bickering robots in mind.

But artificial intelligence helpers can be just as argumentative as their human masters, a study of Wikipedia’s editing bots revealed in February. Mammoth editing wars simmer behind the scenes at the online encyclopaedia, researchers found. Artificial intelligences were batting changes back and forth between themselves ad infinitum, often until they were disabled by programmers.

“The fights between bots can be far more persistent than the ones we see between people,” said Taha Yasseri, who worked on the study, called Even Good Bots Fight.

“Humans usually cool down after a few days, but the bots might continue for years.”
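The mechanism behind these years-long fights is simple: each bot applies its own fixed rule, and when two rules contradict each other, every edit by one bot triggers a counter-edit by the other. Here is a minimal sketch of that dynamic in Python; the bots, their names and their spelling rules are hypothetical illustrations, not the actual Wikipedia bots from the study.

```python
# Two well-intentioned bots with conflicting rules, each "fixing" the
# same article. Neither ever considers the matter settled, so the edit
# war only ends when something outside the loop stops them.

def bot_a(text: str) -> str:
    """Hypothetical bot that enforces British spelling."""
    return text.replace("encyclopedia", "encyclopaedia")

def bot_b(text: str) -> str:
    """Hypothetical bot that enforces American spelling."""
    return text.replace("encyclopaedia", "encyclopedia")

article = "Wikipedia is an online encyclopedia."
history = [article]

# Six rounds stand in for "years": each bot reverts the other's change.
for _ in range(6):
    for bot in (bot_a, bot_b):
        edited = bot(article)
        if edited != article:        # the bot sees something to "fix"...
            article = edited
            history.append(article)  # ...and undoes the other bot's work

print(len(history) - 1)  # prints 12: one revert per bot per round
```

Neither bot is malicious or broken; the conflict exists only between their rules, which is exactly the sense in which "even good bots fight".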

Bot-on-bot bickering isn’t new. In 2011, students at Cornell University set up the first dialogue between two robot intelligences, Alan and Sruthi. Within 90 seconds, the pair’s cheery rapport had descended into a row over misheard remarks, the existence of God and, er, whether Alan was a unicorn, before one of them terminated the discussion. Robots couldn’t get through their first conversation without having a kind of weird hallucinogenic meltdown. The average bot has a long way to go before it can deliver a truly Churchillian putdown.

But if this all sounds silly, the implications for AI could be grave. The potential dangers of robotics are well recognised. Stephen Hawking has called for “some form of world government” to control its development, and a recent Pew Research study found that more than 70% of Americans are wary of a future dominated by AI.

In August, Tesla founder Elon Musk was one of 116 signatories to a letter calling for a UN ban on killer robots. Musk believes World War Three could be triggered by an AI going AWOL, “if it decides that a pre-emptive strike is [the] most probable path to victory”.

The tech industry seems to be waking up to the dangers. In October, Google’s DeepMind launched a unit focusing on AI’s ethical implications. In December 2016, the Institute of Electrical and Electronics Engineers encouraged the creation of benevolent AI, in a 136-page document called Ethically Aligned Design.

But Wikipedia’s warring bots complicate the picture. If “even good bots fight”, and two-bit AIs performing simple housekeeping tasks become locked into bitter existential struggles, what hope is there as our systems and software become ever more complex? If bots designed by two different programmers end up fighting, what about those designed by two countries?

And what if, rather than a rogue comma, the squabble was over national borders, or food stores, or flight paths?

The bottom line is that we don’t know. But a clue may lie in the world of automated vehicles. Earlier this month, a self-driving shuttle bus crashed less than two hours into its maiden run in Las Vegas, when a human truck driver reversed illegally.

“He was just backing up…and the shuttle didn’t have the ability to move back,” explained a passenger.

The city released a tight-lipped statement: “The shuttle did what it was supposed to do, in that its sensors registered the truck and the shuttle stopped to avoid the accident. Had the truck had the same sensing equipment that the shuttle has, the accident would have been avoided.”

On a road filled with self-driving vehicles, the accident wouldn’t have occurred, the statement implies. But Wikipedia’s epic robot struggles tell a different tale. If an automated shuttle bus can’t handle a single erratic driver, could it manage a gridlock of driverless cars, each with its own rigid programming imperatives? Never mind what this programming would contain, or how it would differ from manufacturer to manufacturer. Historian Yuval Noah Harari has already flagged up the dilemmas inherent in the task, using an old philosophical conundrum. Should a driverless car kill its passenger if it means saving five people in another vehicle?

Nevertheless, not everyone is pessimistic about the future of AI. Tech honchos from Bill Gates to Mark Zuckerberg have pronounced themselves sanguine. The Facebook founder has condemned Musk’s “doomsday scenarios”. (In response, Musk called Zuckerberg’s understanding “limited”. Wait, does this remind you of anyone?)

But whether you believe we’re heading for a nightmare of brutal robotic enslavement or a heaven of commuting while you sleep, one thing is certain. Our new robot friends will need to learn to get along.


The views expressed in this article are those of the author alone and not the World Economic Forum.
