Ever since we humans created our first tools, we’ve kept insisting that they’re not only as alive as we are, but that they feel the same things we feel: love, hate, envy, embarrassment, anger, happiness.
This wrong-headed behavior on our part has ramped up a thousandfold since the creation of the first computers. Fears of what truly intelligent machines could do have been simmering in our society for the better part of a century. Just look at the latest Avengers film, Age of Ultron. Or look at Elon Musk, a venture capitalist with his fingers in several pies, who likes to stoke this fear for middlebrow audiences that think they’re highbrow. Even Arthur C. Clarke, a fellow who definitely knew better, had to play to public fears of smart computers by creating the HAL 9000, the lead “villain” in 2001: A Space Odyssey. HAL was depicted as a fully self-aware, intelligent computer so determined not to admit error that it would kill people to cover up its mistakes. (Clarke later atoned somewhat for this by making HAL a much more sympathetic figure in his later book 2010.)
The truth is that a future where true artificial intelligence exists is much more likely to resemble the one Isaac Asimov wrote about in his robot novels, or that in Iain M. Banks’ “Culture” novels, than those shown in any of the dystopias used to stoke our fears for money. Speaking of the latter series, it’s telling that a primary criticism of the Culture novels as literature is that the Minds make for a future so pleasant that it is devoid of the nasty conflicts and “interesting times” that people hate to live through but love to read about:
In vesting all power in his individualistic, sometime eccentric, but always benign, AI Minds, Banks knew what he was doing; this is the only way a liberal anarchy could be achieved, by taking what is best in humans and placing it beyond corruption, which means out of human control. The danger involved in this imaginative step, though, is clear; one of the problems with the Culture novels as novels is that the central characters, the Minds, are too powerful and, to put it bluntly, too good.
Even if our society follows something closer to Banks’ vision than the dystopian ones, it’s doubtful that things such as personalities, or even the emotions that generate them, would spontaneously evolve in machines. These things, or rather simulations of them, would have to be carefully implanted by humans. The rule “garbage in, garbage out” still applies.
And that, as always, is the real thing to fear: not whether machines can ever think, but what we command them, via their base-level programming, to think about.