Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?
When the personal computer first became ubiquitous in the 1980s, as Adrienne LaFrance wrote in The Atlantic earlier this year, some people found it so terrifying that the term “computerphobia” was coined.
“In the early days of the telephone, people wondered if the machines might be used to communicate with the dead. Today, it is the smartphone that has people jittery,” she wrote. “Humans often converge around massive technological shifts—around any change, really—with a flurry of anxieties.”
To see those anxieties quantified, take a look at the top five scariest items in the Survey of American Fears, released earlier this week by researchers at Chapman University. Three of them—cyberterrorism, corporate tracking of personal information, and government tracking of personal information—were technology-related.
For the survey, a random sample of around 1,500 adults ranked their fears of 88 different items on a scale of one (not afraid) to four (very afraid). The fears were divided into 10 categories: crime, personal anxieties (like clowns or public speaking), judgment of others, environment, daily life (like romantic rejection or talking to strangers), technology, natural disasters, personal future, man-made disasters, and government. When the study authors averaged the fear scores within each category, technology came in second place, right behind natural disasters.
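The averaging the study authors describe is straightforward to sketch. The snippet below is purely illustrative, not the Chapman data: the items, category assignments, and ratings are hypothetical placeholders, and it assumes one plausible reading of "averaged out the fear scores" (each category's score is the mean of its items' mean ratings).

```python
from collections import defaultdict

# Each respondent rates every item on a 1 (not afraid) to 4 (very afraid) scale.
# These ratings and category assignments are made up for illustration.
ratings = {
    "cyberterrorism": [4, 3, 4],
    "earthquake": [4, 4, 4],
    "public speaking": [2, 3, 2],
}
category_of = {
    "cyberterrorism": "technology",
    "earthquake": "natural disasters",
    "public speaking": "personal anxieties",
}

# Average each item's ratings, then average the item means within a category.
item_mean = {item: sum(r) / len(r) for item, r in ratings.items()}
by_category = defaultdict(list)
for item, mean in item_mean.items():
    by_category[category_of[item]].append(mean)

category_score = {cat: sum(ms) / len(ms) for cat, ms in by_category.items()}

# Rank categories from most to least feared.
for cat, score in sorted(category_score.items(), key=lambda kv: -kv[1]):
    print(f"{cat}: {score:.2f}")
```

With these toy numbers, "natural disasters" edges out "technology," mirroring the ordering the survey reported.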
What makes us human? Why do we fear artificial intelligence and robots? ‘AI: More than Human’ curators Suzanne Livingston and Maholo Uchida unpack the big questions explored in this interactive exhibition.
A Waymo self-driving van was cruising through a Chandler neighborhood on Aug. 1 when test driver Michael Palos saw something startling from behind the wheel: a bearded man in shorts aiming a handgun at him as he passed the man’s driveway.
The incident is one of at least 21 interactions documented by Chandler police during the past two years where people have harassed the autonomous vehicles and their human test drivers.
People have thrown rocks at Waymos. One van’s tire was slashed while it was stopped in traffic. The vehicles have been yelled at and chased, and the driver of one Jeep forced the vans off the road six times.
Many of the people harassing the van drivers appear to hold a grudge against the company, a division of Mountain View, California-based Alphabet Inc., which has tested self-driving technology in the Chandler area since 2016.
One of my favorite technological myths is, like all the best stories, both ancient and urgent. It’s about usurpation and seduction. In Greek mythology, the sculptor Pygmalion falls in love with his own supremely beautiful creation, Galatea. In Ovid’s telling, there’s a happy ending. The goddess of beauty, Aphrodite, takes pity on him and breathes life into the marble. The statue’s lips grow warm under his kiss; they fall in love, marry.
The tale has an unhappier classical cousin: that of Talos, the artificial man. Created by the divine smith, Hephaestus, Talos is often depicted as a bronze giant striding through the seas around Crete. Immensely strong, almost invulnerable, Talos renders all human might redundant.
Skip forward two thousand years and we find Galatea and Talos dovetailing into one of the 1990s’ most iconic science fiction films: Terminator 2: Judgment Day, James Cameron’s masterpiece of action and exquisitely honed musculature. In the second half of the film, there’s a quiet moment where Arnold Schwarzenegger’s titular Terminator – an artificial killer reprogrammed to act as the perfect protector – is hanging out with his young protectee, John Connor.
John’s mother, Sarah, watches from a distance as the cyborg plays with the 10-year-old. Arnie has flipped from one polarity to the other: from perfect assassin to perfect playmate.
“It was suddenly so clear,” she says in voiceover. “The Terminator would never stop. It would never leave him, and it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. Of all the would-be fathers who came and went over the years, this thing, this machine was the only one that measured up.”
Tireless, infinitely patient, endlessly consistent – our creations measure up in ways we can only dream of. Who wouldn’t want an immaculate machine companion, employee, parent, lover?
Quite a few people, as it turns out. Or at least, we don’t want to want these things. Our myths warn us about the weakness of human desire and judgment. To become entirely human, as in Pygmalion’s tale, is one thing. But to supplant the human is quite another. Arnie is there to help humans do human things: save the world, blow stuff up, chase around in trucks and on motorbikes. Then, conveniently enough, he terminates himself.
Myths themselves are seductive. They structure time and the world in ways we understand. They resonate. They are about human vulnerability and greatness; our fragility and hope. They are all about us – and, unfortunately, they have little to say about our current crop of technologies that isn’t misleading in one way or another.
Amazon’s smart assistant Alexa can now be made to encourage children to say: “Please,” and: “Thank you,” when issuing it voice commands.
The new function addresses some parents’ concerns that use of the technology was teaching their offspring to sound officious or even rude.
In addition, parents can now set time limits on when requests are responded to, and can block some services.
The move has been welcomed by one of Alexa’s critics.
In January, the research company ChildWise published a report warning that youngsters who grow up accustomed to barking orders at Alexa, Google Assistant or some other virtual personality might become aggressive in later dealings with humans.
“This is a very positive development,” research director Simon Leggett told the BBC.
“We had noticed that practically none of the children that we had talked to said they ever used the words ‘please’ or ‘thank you’ when talking to their devices.
“Younger children will enjoy having the added interactivity, but older children may be less likely to use it as they will be more aware it’s a robot at the other end.”
It was quite common for a certain name to be associated with a certain job. The scullery maid was called Mary; if you hired Gwyneth, you called her Mary, because she was the scullery maid. You couldn’t even depend on keeping your own name for the purposes of your working life.
The whole point of having a digital assistant is to have it do stuff for you. You’re supposed to boss it around.
But it still sounds like a bit of a reprimand whenever I hear someone talking to an Amazon Echo. The monolithic voice-activated digital assistant will, when instructed, play music for you, read headlines, add items to your Amazon shopping cart, and complete any number of other tasks. And to activate the Echo, you first have to say: “Alexa.” As in, “Alexa, play rock music.” (Or, more pointedly, “Alexa, stop.”)
The command for Microsoft’s Cortana—“Hey Cortana”—is similar, though maybe a smidge gentler. Apple’s Siri can be activated with a “hey,” or with the push of a button. Not to get overly anthropomorphic here—Amazon’s the one who refers to Echo as part of the family, after all—but if we’re going to live in a world in which we’re ordering our machines around so casually, why do so many of them have to have women’s names?
The simplest explanation is that people are conditioned to expect women, not men, to be in administrative roles—and that the makers of digital assistants are influenced by these social expectations. But maybe there’s more to it.
“It’s much easier to find a female voice that everyone likes than a male voice that everyone likes,” the Stanford communications professor Clifford Nass told CNN in 2011. (Nass died in 2013.) “It’s a well-established phenomenon that the human brain is developed to like female voices.”
Slaves are normally defined to be people you own. In recent centuries, due to the African slave trade, slavery came to be associated with racism and also with endemic cruelty. In the past though (and in some places still today) slaves were often members of the same race or even nation that had simply lost private status. This happened generally as an outcome of war, but sometimes as an outcome of poverty. Excesses of cruelty are greatest when actors are able to dehumanise those in their power, and thus remove their own empathy for their subordinates. Such behaviour can be seen even within contemporary communities of citizens, when a person in power considers their very social standing as an indication of a specialness not shared with subordinates. Our culture has for good reason become extremely defensive against actions and beliefs associated with such dehumanisation.
But surely dehumanisation is only wrong when it’s applied to someone who really is human? Given the very obviously human beings that have been labelled inhuman in the global culture’s very recent past, many seem to have grown wary of applying the label at all. For example, Dennett (1987) argues that we should allocate the rights of agency to anything that appears to be best reasoned about as acting in an intentional manner. Because the costs of making a mistake and trivialising a sentient being are too great, Dennett says we are safer to err on the side of caution.
Dennett’s position is easy to sympathise with, and not only because such generosity is almost definitionally nice. As I discuss below, there are many reasons people want to be able to build robots to which they owe ethical obligations. But the position overlooks the fact that there are also costs to allocating agency this way. I describe these costs below as well.
But first, returning to the question of definition – when I say “Robots should be slaves”, I by no means mean “Robots should be people you own.” What I mean to say is “Robots should be servants you own.”
There are several fundamental claims of this paper:
- Having servants is good and useful, provided no one is dehumanised.
- A robot can be a servant without being a person.
- It is right and natural for people to own robots.
- It would be wrong to let people think that their robots are persons.