The real danger of artificial intelligence

October 30, 2015

At the beginning of this year, several respected scientists issued an open letter warning about the dangers of artificial intelligence (AI). In particular, they were concerned that we would create an AI able to adapt and evolve on its own, and to do so at such an accelerated rate that it would move beyond human ability to understand or control. And that, they warned, could spell the end of mankind. But I think the real danger of AI is much closer to us than that undefined and likely distant future.

For one thing, I have serious doubts about the whole AI apocalypse scenario. We are an awfully long way from creating any kind of computing system with the complexity embodied in the human brain. In addition, we don't really know what intelligence is, what's necessary for it to exist, or how it arises in the first place. Complexity alone clearly isn't enough. We humans all have brains, but intelligence varies widely. I don't see how we can artificially create an intelligence when we don't really have a specification to follow.

What we do have is a hazy description of what intelligent behavior looks like, and so far all our AI efforts have concentrated on mimicking some elements of that behavior. Those efforts have produced some impressive results, but only in narrow application areas. We have chess programs that can beat grandmasters, interactive programs that are pushing the boundaries of the Turing Test, and a supercomputer that can beat human Jeopardy champions. But we have nothing that can do all of those things, plus the thousands of others a human can.

And even if we were able to create something truly intelligent, who's to say that such an entity would be malevolent?

I do think, however, that the dangers of AI are real and will manifest in the near future. But they won't arise because of how intelligent the machines are. They'll arise because the machines won't be intelligent enough, yet we will hand control over to them anyway and, in so doing, lose the ability to take control ourselves.

This handoff and the accompanying skill loss are already starting to happen in the airline industry, according to this New Yorker article. Autopilots are good enough to handle the vast majority of situations without human intervention, so the pilot's attention wanders, and when a situation arises that the autopilot cannot properly handle, there is an increased chance that the startled pilot's reaction will be the wrong one.

Then there is the GIGO (garbage in, garbage out) factor. If an AI system is getting incorrect information, it is highly likely to make an improper decision, with potentially disastrous consequences. Humans are able to take in information from a variety of sources, integrate it all, compare the result against experience, and use that comparison to identify faulty information sources. AI devices are a long way from being able to accomplish the same thing, yet we're predicting the advent of fully autonomous cars by 2020. I think that's giving the AI too much control too soon.
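To make the point concrete, here is a minimal sketch (not from the article) of the kind of cross-checking a system needs before trusting its inputs: comparing redundant sensor readings against each other to flag an outlier. The sensor names, values, and tolerance are all hypothetical.

```python
from statistics import median

def flag_faulty(readings, tolerance):
    """Return the names of sensors whose readings deviate from the
    median of all readings by more than `tolerance`."""
    m = median(readings.values())
    return [name for name, value in readings.items()
            if abs(value - m) > tolerance]

# Three hypothetical redundant airspeed sensors; one is reporting garbage.
readings = {"pitot_1": 251.0, "pitot_2": 249.5, "pitot_3": 88.0}
print(flag_faulty(readings, tolerance=10.0))  # -> ['pitot_3']
```

A human pilot does this kind of sanity check instinctively, weighing each instrument against the others and against experience; an automated system only does it if someone thought to design it in.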

So, no, I don't worry about a rogue AI exterminating mankind. I worry about an inadequate AI being given control over things that it's not ready for. If mankind is to be exterminated by an AI system, it won't be because of malicious intent on the part of the AI. It will be because the AI will be performing its functions exactly as designed, which is not the same as performing as intended.

What are your thoughts? Let me know here or, better yet, join me at ESC Minneapolis where Max Maxfield and I will host a moderated discussion on "The coming robot apocalypse." Hope to see you there.


Join over 2,000 technical professionals and embedded systems hardware, software, and firmware developers at ESC Minneapolis Nov 8-9, 2017 and learn about the latest techniques and tips for reducing time, cost, and complexity in the embedded development process.

