Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?
A Waymo self-driving van cruised through a Chandler neighborhood Aug. 1 when test driver Michael Palos saw something startling as he sat behind the wheel — a bearded man in shorts aiming a handgun at him as he passed the man’s driveway.
The incident is one of at least 21 interactions documented by Chandler police during the past two years in which people have harassed the autonomous vehicles and their human test drivers.
People have thrown rocks at Waymos. The tire on one was slashed while it was stopped in traffic. The vehicles have been yelled at and chased, and the driver of one Jeep forced the vans off the road six times.
Many of the people harassing the van drivers appear to hold a grudge against the company, a division of Mountain View, California-based Alphabet Inc., which has tested self-driving technology in the Chandler area since 2016.
Animism (from the Latin: animus or anima, meaning mind or soul) refers to a belief in numerous personalized, supernatural beings endowed with reason, intelligence and/or volition, that inhabit both objects and living beings and govern their existences. More simply, it is the belief that “everything is conscious” or that “everything has a soul.” The term has been further extended to refer to a belief that the natural world is a community of living personas, only some of whom are human. As a term, “animism” has also been used in academic circles to refer to the types of cultures in which these animists live.
While the term “animism” refers to a broad range of spiritual beliefs (many of which are still extant within human cultures today), it does not denote any particular religious creed or doctrine. The most common feature of animist religions is their attention to particulars, as evidenced by the number and variety of spirits they recognize. This can be strongly contrasted with the all-inclusive universalism of monotheistic, pantheistic and panentheistic traditions. Furthermore, animist spirituality is more focused on addressing practical exigencies (such as health, nourishment and safety needs) than on solving abstract metaphysical quandaries. Animism recognizes that the universe is alive with spirits and that humans are interrelated with them.
One of my favorite technological myths is, like all the best stories, both ancient and urgent. It’s about usurpation and seduction. In Greek mythology, the sculptor Pygmalion falls in love with his own supremely beautiful creation, Galatea. In Ovid’s telling, there’s a happy ending. The goddess of beauty, Aphrodite, takes pity on him and breathes life into the marble. The statue’s lips grow warm under his kiss; they fall in love, marry.
The tale has an unhappier classical cousin: that of Talos, the artificial man. Created by the divine smith, Hephaestus, Talos is often depicted as a bronze giant striding through the seas around Crete. Immensely strong, almost invulnerable, Talos renders all human might redundant.
Skip forward two thousand years and we find Galatea and Talos dovetailing into one of the 1990s’ most iconic science fiction films: Terminator 2: Judgment Day, James Cameron’s masterpiece of action and exquisitely honed musculature. In the second half of the film, there’s a quiet moment where Arnold Schwarzenegger’s titular Terminator – an artificial killer reprogrammed to act as the perfect protector – is hanging out with his young protectee, John Connor.
John’s mother, Sarah, watches from a distance as the cyborg plays with the 10-year-old. Arnie has flipped from one polarity to the other: from perfect assassin to perfect playmate.
“It was suddenly so clear,” she says in voiceover. “The Terminator would never stop. It would never leave him, and it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. Of all the would-be fathers who came and went over the years, this thing, this machine was the only one that measured up.”
Tireless, infinitely patient, endlessly consistent – our creations measure up in ways we can only dream of. Who wouldn’t want an immaculate machine companion, employee, parent, lover?
Quite a few people, as it turns out. Or at least, we don’t want to want these things. Our myths warn us about the weakness of human desire and judgment. To become entirely human, as in Pygmalion’s tale, is one thing. But to supplant the human is quite another. Arnie is there to help humans do human things: save the world, blow stuff up, chase around in trucks and on motorbikes. Then, conveniently enough, he terminates himself.
Myths themselves are seductive. They structure time and the world in ways we understand. They resonate. They are about human vulnerability and greatness; our fragility and hope. They are all about us – and, unfortunately, they have little to say about our current crop of technologies that isn’t misleading in one way or another.
AN ARTIFICIAL intelligence system can predict how a scene will unfold and dream up a vision of the immediate future.
Given a still image, the deep learning algorithm generates a mini video showing what could happen next. If it starts with a picture of a train station, it might imagine the train pulling away from the platform, for example. Or an image of a beach could inspire it to animate the motion of lapping waves.
Teaching AI to anticipate the future can help it comprehend the present. A human who sees someone preparing a meal effortlessly infers that they will eat it next, but that kind of inference is tricky for an AI to grasp. Such a system could also let an AI assistant recognise when someone is about to fall, or help a self-driving car foresee an accident.
“Any robot that operates in our world needs to have some basic ability to predict the future,” says Carl Vondrick at the Massachusetts Institute of Technology, part of the team that created the new system. “For example, if you’re about to sit down, you don’t want a robot to pull the chair out from underneath you.”
To teach the AI to make better videos, the team used an approach called adversarial networks. One network generates the videos, and the other judges whether they look real or fake. The two get locked in competition: the video generator tries to make videos that best fool the other network, while the other network hones its ability to distinguish the generated videos from real ones.
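That generator-versus-judge competition can be sketched in miniature. The real system pits deep networks against each other over video frames; the toy below (my illustration, not MIT's code) shrinks the idea to a 1-D problem in NumPy: a linear generator tries to turn random noise into samples that look like they came from a target distribution, while a logistic discriminator learns to tell real samples from fakes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data stands in for real videos: a 1-D Gaussian the
# generator has never seen directly.
def sample_real(n):
    return rng.normal(3.0, 0.5, n)

# Generator: turns noise z into a fake sample a*z + b.
a, b = 1.0, 0.0
# Discriminator: sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on its log-likelihood).
    xr = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w += lr * np.mean((1 - dr) * xr - df * xf)
    c += lr * np.mean((1 - dr) - df)

    # Generator update: nudge a, b so the discriminator scores
    # the fakes as real (ascent on log D(fake)).
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

# Mean of the generator's output a*z + b over z ~ N(0, 1) is just b,
# which should have drifted toward the real mean of 3.0.
fake_mean = b
print(round(fake_mean, 2))
```

Neither network is ever told what the real distribution looks like; the generator improves only because fooling the discriminator gets harder as the discriminator improves, which is the adversarial dynamic the paragraph above describes.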
Earlier this year, I met an entrepreneur who believed people could become better parents by texting with a software program she’d built. To me, a new mother, it sounded like magic. By corresponding with me, the program would learn so much about my son that it’d be able to predict his future happiness, earning potential, and even his life span. Based on those projections, it would assign me an activity each morning meant to improve those outcomes. In other words, the program, called Muse, would literally turn my son, Kavi, into a richer, happier, longer-living adult than he otherwise would’ve been.
Maybe this sounds ridiculous. But the entrepreneur had caught me at a vulnerable time. I’d believed myself to be an intelligent, capable person, but parenthood had me feeling stupid and kind of unhinged. Kavi was about to turn 11 months old. My husband and I had vowed at the outset not to become hyper-vigilant parents, but we’d lately wondered if we’d instead been too cavalier.
At our last visit to the doctor’s office, I’d been given a questionnaire asking, among other things, whether Kavi had learned at least three words and whether he could respond to at least one simple verbal command. My answer to both: Wait — that’s possible? Also, Kavi was small for his age, which a nurse suggested was because we weren’t feeding him right.
The point being that when I visited this entrepreneur’s website, and it had a picture on it of a fierce little girl in a cape, and the text said something about giving me a superpower, I thought that might do me some good. It might at least better prepare us for the next doctor’s visit. Bring on the questionnaires, I thought.
Black Swan is a data-mining startup which uses artificial intelligence to predict when Brits are planning to have a BBQ (regardless of weather), or when the next big winter cold will hit.
Steve King, co-founder and CEO of the data science startup, spoke to the audience at WIRED Retail about how the internet and algorithms can be used to make smarter retail decisions.
“We live our lives on the internet, fortunately or unfortunately,” said King. “What’s interesting is we can use network theory to understand the trends and flows of the internet.”
One of Black Swan’s first clients was Disney. King described the popularity of the 2013 film Frozen as a ‘Black Swan moment’: “It broke all kinds of supply chains – people got angry because they couldn’t buy the products.”
Black Swan’s analysts looked at film sites such as Rotten Tomatoes and IMDb, data about different films released before Frozen, and what people were watching on YouTube at the time, to help predict that the new animated film would be popular. “The algorithm can predict DVD sales, which goes down to the supply chain and marketing. We can predict this before the film comes out,” said King. Disney uses this kind of data to inform what goes in its stores and what features at Disneyland theme parks.
The London-based business now has a presence on four continents, with offices in Hungary, Canada, South Africa and Hong Kong. It recently worked with 7-Eleven in Japan to help “shift the needle of the supply chain”, so it could make savings and reduce waste.
“We looked at Japanese opinions: how people’s feelings have changed over seasons and holidays,” explained King. The work of King’s team helped 7-Eleven boost its profits by 12 per cent. “We had access to information important to run a business,” he said.