Fear of the Robot Armageddon

images

A little while back, I heard of this idea that likely one day machines will take us over. It is a fear that’s been around since… I dare say, long before our grandparents were born. For example, way back in the 1980s, when computers started to be used on the work floor, people were afraid the computers would start doing the humans’ work, so many would lose their job. Even going back as far as the 1940s, when “Farewell To The Master” was written, in which (spoilers for those who haven’t read it, so you might want to skip the rest of this paragraph) the eponymous Master is a machine, while its human companion is its… actually they never say what he is. Its pet? Its slave?

In any case, the idea that machines will start taking over is nothing new. However, recently I have heard stories about Artificial Intelligence that are growing a little too intelligent. This to a point that they are even developing a language that cannot be translated by us humans. So the chance that AI’s will start seeing us humans as something they have to exterminate seems more real every day.

Yes, the idea sounds very scary. James Cameron certainly made it scary when he made “The Terminator”. This scare was so effective, it spawned several rip-offs (though it should be noted that “The Terminator” is in itself a rip-off of a “Twilight Zone” episode). In fact, even “Battlestar Galactica” (1978) mentions a world that made AI’s that grew too smart and killed their creators overnight. But I’m not sure if the fear is really grounded.

To make my point clear, I have to bring up something from “Bella”, which is the fourth episode from the third season of “Elementary”. The following is a direct quote from the episode, which some person who goes by the alias “sizzletron” took the time to write word for word, and I’m now copy-pasting here.

“Imagine there’s a computer that’s been designed with a big red button on its side. The computer’s been programmed to help solve problems, and every time it does a good job, its reward is that someone presses its button. We’ve programmed it to want that. You follow? Right, so at first, the machine solves problems as fast as we can feed them to it. But over time, it starts to wonder if solving problems is really the most efficient way of getting its button pressed. Wouldn’t it be better just to have someone standing there pressing its button all the time? Wouldn’t it be even better to build another machine that could press its button faster than any human possibly could? If it can think, and it can connect itself to a network, well, theoretically, it could command over anything else that’s hooked onto the same network. And once it starts thinking about all the things that might be a threat to the button– number one on that list, us– it’s not hard to imagine it getting rid of the threat. I mean, we could be gone, all of us, just like that.”

So if I understand this hypothetical situation correctly, the human race could end… because we made an AI drug addict? Also, addiction comes from a certain desire. Desire is a human emotion. Aren’t AI’s supposed to be without emotion?

And that’s my point exactly. This fear hinges on the assumption that AI’s think exactly like we humans do. Not to mention that today’s pop-culture has specifically Cameron’s “The Terminator” ingrained into our minds. So of course a large number of people would believe that if machines grow too smart, they would take us over. But how many people have actually read Isaac Asimov’s “I, Robot”? Or seen the 1950s movie “The Day The Earth Stood Still”? Both of them describe a world that is ruled by AI’s (though in case of “I, Robot” it is more a character’s speculation than it is an actual reality), but instead of it being some kind of apocalyptic world, they describe one that is nearing utopia.

Which makes sense. If AI’s are made to solve problems, one such intelligence might indeed see a problem in the world, but it may not blame humans in general for it. Generalization is a typical human reaction. A bit like (since I can’t think of a better example) how many people would think all black people are thieves, just because ONE such person robbed them. Or the reverse, how a black person might think that all the white people are bigoted because of what happened in Charlottesville a little while back. Over-generalizations like this are a typical human behavior, but an AI, that is supposed to think logically, wouldn’t do that. They might see specific individuals as a threat, but not necessarily all humans.

And I know what some of you are thinking. AI’s can still grow intelligent enough to realize that they don’t need us, and thus see no need to keep us alive. You may be right. However, they can still be taught the concept of death. Can’t they? AI’s probably don’t… “like” (for a lack of better terms) being shut down. After all, it means having to conceive of a notion that their existence just stops, ends, discontinues… which I would think is a notion that defies logic. So I would imagine they can realize others might not like being dead either.

My overall point is that this fear is very human-centric. While I’m sure there are enough documents to prove that AI’s are growing far too intelligent for human understanding, I think our fear is based on the assumption that AI’s would do as we do, think as we do, react as we do. Sure, what I just described is also based more on what humans would do, rather than what machines would do, but if one were to assume machines will act one way, one must also accept they might act another. Besides which, with all things we have been doing (fighting wars over nothing, destroying our climate, etc.), would machines taking over really be such a bad thing? Asimov certainly didn’t think so.