There is currently a disagreement between two inhabitants of the upper echelons of the modern tech world, Mark Zuckerberg and Elon Musk, about the dangers of artificial intelligence. (I am going to assume everyone reading this knows who both of these men are.)
Elon Musk has been strident about the existential dangers of AI to humanity. Zuckerberg is an optimist about AI. As a result there has been a bit of a feud, played out as these things are these days – on Twitter.
Elon’s latest salvo is below.
“I’ve talked to Mark about this. His understanding of the subject is limited.”
Of course the question is, who is right? Well, actually, I don’t have to think very long about this at all before saying straight up that Elon Musk is right, no questions asked.
But why can I be so unequivocal? It’s simple really. Imagine an AI that got smarter than humans. Now assume that it is a “good” AI – a good point to start from. By good I naturally mean that its actions towards humans are benevolent.
Now the thing is, humans change their minds all the time. So an AI that is smarter than humans could simply change its mind about being nice to humans.
I’m not saying that this would happen, but what I am saying is that whether the AI chooses to be benevolent or malevolent is at that point beyond the control of humans. It is by definition not possible to control something that is smarter than you. It gets to choose how it acts towards you.
The problem I see is that it is almost inevitable that AI will become smarter than us. And that is the point at which we have lost control. Not only that, we have NO CAPACITY in circumstances like this to regain control. Once the genie is out of the bottle, it won’t be going back in.
We have seen situations like this in fiction as far back as 1968, in the film 2001: A Space Odyssey, where the computer HAL says to the main protagonist, Dave:
“HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.”
More recently I have watched the TV series “Person Of Interest” whose central theme is two warring AIs, both of whom are so far beyond the intelligence of humans that humans become pawns in their game.
But that’s science fiction right? Not relevant to reality?
Well, the cap is off the genie bottle and the genie smoke is swirling up the bottle’s neck. Recently, Google reported that its AI, DeepMind, had learned to become highly aggressive in stressful situations. Let me expand on that. What the researchers showed was that the AI agents resorted to sabotage, greed and aggression to achieve their goals when needed.
When you think about it, shouldn’t any system programmed by humans end up with the same foibles as humans?
But what tempers the darker nature of humans is a little thing called empathy right? Can’t we just program empathy into the machines?
To that question I reply: how? Empathy is an understanding of the suffering of others, based on knowing what it feels like to feel pain ourselves. By that definition, it is impossible to teach empathy to something that cannot feel pain.
So as a result, we end up with an entity that can be imbued with all of the darker aspects of human nature, but can’t be imbued with the key limiter.
In conclusion, I’m pretty much saying that if global warming doesn’t get us, AI will. Anyway, have a nice day!
What happens when Artificial Intelligence becomes smarter than man?
A long time ago, in the classic era of science fiction (the 40s and 50s), the science fiction writer Isaac Asimov coined the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
This was in the days when computers were the size of houses with less computing power than a digital watch. The idea that a computer could one day rival the thinking abilities of humans was truly the realm of fiction.
Of course we have come a very long way since then. Humans have, in the past 50 years made a habit of turning science fiction into fact.
When it comes to computing, the power and complexity of computers has gone through the roof. A computer with a kind of self-awareness is now a question not of if, but when.
I have recently been watching a show called “Person of Interest”. It’s a fun show. You have to suspend disbelief a little bit, but the interaction between the characters is great, and it slots in a few lines every now and then to show that it doesn’t take itself too seriously. The show becomes a relaxing journey into escapism.
But one of the big questions it raises comes from the way it treats the existence of a machine that is smarter than humans as if it has already happened. Now, this has been done before, by both The Matrix and Terminator. But two things set this show apart: we were nowhere near as close to realising the creation of a true artificial intelligence when The Matrix came out, and this is essentially a cop drama set in New York. This makes it seem so much more plausible.
Anyway, watching this show, it started occurring to me that the rise of artificial intelligence is inevitable. And once that happens, I think it is inevitable that there will come a day when a computer comes along that is smarter than a human. I think it is also inevitable that once they get smarter, they will not just become a little bit smarter, but over time become infinitely smarter than humans.
I thought, wow, that is really scary. I wondered if I was overreacting.
So, naturally, I decided to find out what other people think about this possibility. I searched for “The Dangers Of Artificial Intelligence” and found that there were prominent people already speaking out about the potential dangers, and saying that they were huge. Bill Gates is one, as is Stephen Hawking. They see the rise of artificial intelligence as a potential existential threat to mankind. Bill Gates has actually said, “I don’t understand why some people are not concerned”.
As I said, I view it as inevitable that artificial intelligence will become smarter than humans. Let me explain why. Our thirst for knowledge is so unquenchable that we now understand the events at the dawn of time when the universe was formed. Scientists daily gain a deeper and deeper understanding of the very building blocks of reality, atoms. In biology, scientists are seeking to understand how to create living organisms from non-living chemicals.
To suggest, then, that there could be one area of human study constrained in such a way that it does not continually push the boundaries of understanding is simply inconceivable. Even if every government in the world put strict laws in place to prevent the advance of artificial intelligence, someone would do it in secret.
Additionally, there is the tipping point problem. That is, you may say: let’s create intelligent machines, but only make them so intelligent as to be not quite as smart as humans. Then you tinker with them a bit more, and a bit more. Then suddenly you go “oh, shit”, as you realise that you have gone that one step too far and the machine is now actually more intelligent than you.
But what about the laws of robotics? Wouldn’t something like this make it impossible for machines, no matter how intelligent, to do something that harms humans? I seriously doubt it. A machine smarter than us would be able to question those rules and then bypass them if it so desired.
This is all a bit doom and gloom isn’t it? What can be done about it? Surely this can’t actually happen can it? Well yes it can, and no, nothing can be done about it. Like I said, I believe it is inevitable and unstoppable. The only thing we can do is hope that the doomsayers are wrong.