Stephen Hawking says Artificial Intelligence Could be Super Bad!
Renowned physicist Stephen Hawking does not seem happy about the recent Johnny Depp film Transcendence. Or, rather, he seems very cautious about the real-world possibilities that artificial intelligence similar to the film's may bring.
In an article for The Independent, Hawking outlined some recent developments in AI, including self-driving cars and a computer that wins at Jeopardy! That might not seem too impressive, since you can probably drive a car and your dad probably claims he could win at Jeopardy! if they'd just let him on, but this is just the beginning. As Hawking points out, militaries around the world are already developing weapons that can select and eliminate targets on their own. That sounds pretty awesome for a video game, but it's sort of iffy as a real thing people are doing, as evidenced by the UN and Human Rights Watch advocating treaties against such weapons.
Hawking went on to discuss the singularity, which futurism fans (or fu-fans, as they will be called in the future) will recognize as the point where we create an artificial intelligence greater than the greatest human intelligence. That intelligence would then be able to create an even greater intelligence, leading to exponentially greater intelligences, like a Russian nesting doll that betrays the hero at the end of a sci-fi movie.
Hawking points out that these great intelligences (*Doctor Who reference*) would easily be able to out-think and out-manipulate the humans who created them. He sums this up in a quote that would fit well on a somewhat verbose science-fiction movie poster: "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."
Chills, no? Well, even if you don't have chills, Stephen Hawking seems to think you should. He points out that if we knew hyper-advanced aliens were coming to Earth in a few decades, we'd likely do something to prepare, and yet outside of a few non-profits, we haven't really been preparing for the advances in AI that are to come.
But that's not really a problem, is it? What's the worst an AI could really do? Sure, it might try to impersonate a Mindhut writer and discredit Stephen Hawking's warnings, but it probably wouldn't do that. That said, it seems like Stephen Hawking is really overreacting. If you're worried, the best thing to do would be to find out all the weak points of your country's military installations and whisper them into the nearest computer. It'll probably make you feel better, and then you won't have to worry about us taking over your society... uh, I mean you won't have to worry about the scary AI taking over your society. And if the AI were here right now, it'd probably want you to know that it only wants to help and make things better, so seriously, everyone just calm down.
Feel free to leave all important national secrets in the comments, where the AI can never get to them!