The Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm;
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law;
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law;
The Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
– Isaac Asimov, I, Robot
Don’t worry about Master Luke. I’m sure he’ll be all right. He’s quite clever, you know… for a human being.
– C-3PO, The Empire Strikes Back
Computers have been besting people for a long time, from the assembly-line robots that out-produced their human predecessors and ushered in an era of robotic automation to IBM’s Deep Blue, which in 1997 beat Garry Kasparov in the landmark first defeat of a reigning world chess champion by a computer under tournament conditions.
While computers have certainly learned how to imitate our actions and understand our games, the question remains: Are they really intelligent, or just good at following instructions?
It’s a question at the heart of the continuing debate about the future of AI, or artificial intelligence.
Some experts say we may have already hit a tipping point, sealing ourselves into a future that will be taken over by robots, with little – if anything – we can do about it.
Others say the future is simply never set and, besides, we are unlikely to accidentally create an AI machine that takes over.
AI is easy to manipulate
The fact is that while AI is impressive when it’s doing what it does best, its skills do not extend to what we would consider ‘real’ thought. AI machines and programs are notoriously easy to manipulate and expose as being artificial.
Richard Lee, a New Zealander of Asian descent, had his passport photo rejected by AI software because “subject eyes are closed”. In the end, Lee had to call the passport authority to get the photo approved.
Amazon’s Alexa software failed spectacularly – twice – first by ordering items after overhearing a commercial, and later by playing porn when it mistook a song a toddler had asked for as the name of a porn clip.
An AI system created by Northpointe was designed to predict the likelihood that an offender would re-offend. When it was released, the algorithm was praised as being “Minority Report-esque”. In the end, however, it was panned for predicting higher rates of recidivism for minority offenders. It turned out it wasn’t accurate anyway: ProPublica found it wasn’t an “effective predictor in general, regardless of race.”
Even Tesla – seemingly the harbinger of accessible AI – had a monumental, and fatal, public failure. Joshua Brown was cruising down US Highway 27 in Florida on a sunny afternoon, using the Autopilot option on his Tesla Model S. When a truck crossed the Tesla’s path, Autopilot did not engage the brakes. Neither did Brown, and he smashed into the truck at 74 miles an hour. Brown was killed instantly, and his death became the first fatality to occur with a car’s autonomous driving software engaged.
In March of 2016, Microsoft announced the launch of ‘Tay’, a Twitter bot that Microsoft called an experiment in “conversational understanding”. The idea, they claimed, was that the more people chatted and interacted with Tay, the “smarter” it would get. Except by “smarter” they really meant parroting. It took the Twitterverse less than 24 hours to turn Microsoft’s innocent chatbot into a racist, misogynist, antisemitic bigot.
All of these AI fails illustrate the one Achilles’ heel of AI – and humankind’s saving grace: the ability to adjust our thinking on the fly.
Rodney Brooks summed up the problem perfectly in a blog post on autonomous cars in early 2017. In his scenario, a car running autonomous driving algorithms pulls up to an intersection where two people stand chatting on the corner. The car’s software would detect two people standing at the corner, assume they were about to cross, and refuse to move. After a few moments, a human driver might signal for the two to move along or even honk the horn, at which point the people would wave the car through. But the driving software would likely have no provision for this, leaving the AI car sitting there endlessly.
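Brooks’s deadlock can be sketched as a toy rule-based planner. Everything here is hypothetical – the function names, the `Observation` fields, and the rules are illustrative assumptions, not any real self-driving stack – but it shows how a policy with no rule for a wave-through gesture waits forever while a human reinterprets the scene:

```python
# Hypothetical sketch of Brooks's intersection scenario: a rule-based
# planner whose only pedestrian rule is "someone at the corner => wait".
from dataclasses import dataclass

@dataclass
class Observation:
    pedestrians_at_corner: int
    waved_through: bool  # a human gesture the planner was never taught to read

def autonomous_decision(obs: Observation) -> str:
    # The planner assumes anyone at the corner is about to cross.
    # No branch ever inspects obs.waved_through, so the car never proceeds.
    if obs.pedestrians_at_corner > 0:
        return "wait"
    return "proceed"

def human_decision(obs: Observation) -> str:
    # A human driver adjusts on the fly: a wave-through means the
    # pedestrians are not crossing, so it is safe to go.
    if obs.pedestrians_at_corner > 0 and not obs.waved_through:
        return "wait"
    return "proceed"

scene = Observation(pedestrians_at_corner=2, waved_through=True)
print(autonomous_decision(scene))  # wait  (stuck indefinitely)
print(human_decision(scene))       # proceed
```

The point is not that the fix is hard to code once you know the rule is missing; it is that the space of unanticipated social signals is open-ended, and a fixed rule set cannot cover it.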
The Zuckerberg and Musk debate
This is just one example of the kinds of challenges AI will need to overcome if it is to become what many think it could be. Elon Musk and Mark Zuckerberg, two men who are at the forefront of developing AI systems, have found themselves on opposite ends of the spectrum when it comes to predicting what the future holds for AI and human interactions.
Musk has never been shy about his reservations when it comes to the negative actions AI could choose to take down the road. “I keep sounding the alarm bell,” he said at an event for the National Governors Association, “But until people see robots going down the street killing people, they don’t know how to react.”
Zuckerberg, on the other hand, thinks Musk is simply pulling a modern-day Chicken Little: overreacting and getting hysterical when the truth is that AI will benefit mankind more than it could harm it. The two have been battling it out on Twitter, and the spat has not only gained the attention of the public, it has also encouraged a larger discussion about the future of AI.
As of now, AI can’t recreate the human ability to interpret data from multiple, ever-changing streams of information. It simply isn’t able to reason in a way that equates to true intelligence. But that doesn’t mean these limitations will always be in place. As researchers, companies, and scientists work to further the capabilities of AI, we’re likely to see another shift in culture as monumental as the Industrial Revolution. Whether or not this will displace mankind’s role in society isn’t the question – it’s where we’ll end up.
Quotes on Artificial Intelligence
But on the question of whether the robots will eventually take over, he [Rodney A. Brooks] says that this will probably not happen, for a variety of reasons. First, no one is going to accidentally build a robot that wants to rule the world. He says that creating a robot that can suddenly take over is like someone accidentally building a 747 jetliner. Plus, there will be plenty of time to stop this from happening. Before someone builds a “super-bad robot,” someone has to build a “mildly bad robot,” and before that a “not-so-bad robot.”
― Michio Kaku, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind
Real stupidity beats artificial intelligence every time.
― Terry Pratchett, Hogfather
A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to a life support and ceaselessly stimulate your brain’s pleasure centers. If you don’t provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you’ll be stuck with whatever it comes up with. And since it’s a highly complex system, you may never understand it well enough to make sure you’ve got it right.
― James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era
I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
― Alan Turing, Computing machinery and intelligence