Category: BBC

People are using Siri as a therapist, so Apple is seeking engineers who understand psychology:

Perhaps you’ve stumped Siri before, asking Apple’s automated assistant things like, “What is the meaning of life?” or “How can I be healthier and happier?”

If so, you’re not alone in turning to your phone for existential guidance and serious, practical life advice. According to an Apple job posting, lots of people do it. That’s why the company is seeking software engineers with feeling—and a background in psychology and peer counseling—to help improve Siri’s responses to the toughest questions.

Digital Human: Series 15, Ep 5 – Subservience

AI painting to go under the hammer:

Digital Human: Series 8, Ep 10 – Imagine 

Alexa wants children to say please:

Amazon’s smart assistant Alexa can now be made to encourage children to say “please” and “thank you” when giving it voice commands.

The new function addresses some parents’ concerns that use of the technology was teaching their offspring to sound officious or even rude.

In addition, parents can now set time limits on when requests are responded to, and can block some services.

The move has been welcomed by one of Alexa’s critics.

In January, the research company ChildWise published a report warning that youngsters who grew up accustomed to barking orders at Alexa, Google Assistant or some other virtual personality might become aggressive in later dealings with humans.

“This is a very positive development,” research director Simon Leggett told the BBC.

“We had noticed that practically none of the children that we had talked to said they ever used the words ‘please’ or ‘thank you’ when talking to their devices.

"Younger children will enjoy having the added interactivity, but older children may be less likely to use it as they will be more aware it’s a robot at the other end.”

Digital Human: Series 15, Ep 5 – Subservience

The Life of Domestic Servants in Victorian England:

It was quite common to have a certain name associated with a certain job. The scullery maid was called Mary. If you hired Gwyneth, you called her Mary, because she was the scullery maid. You couldn’t even depend on keeping your own name for the purposes of your working life.

Digital Human: Series 15, Ep 5 – Subservience

Pretty Please, Alexa – Member Feature Stories – Medium:

Option A: Respond just as a parent would, perhaps a bit pithily.

“Alexa, set an alarm.”
“I didn’t hear the magic word…”

Imagine getting this response at 11 p.m. when you’re preparing for bed. How would you feel? Or, perhaps, when trying to set a time-out timer for your child in the heat of the moment. Would being corrected for lack of politeness when trying to discipline a child really help solve the problem?

Option B: Complete the action, but add some reinforcement.

“Alexa, set an alarm for 7 a.m. tomorrow.”
“Your alarm is set for 7 a.m. tomorrow. By the way, it makes me happy when you say ‘please’.”

Less immediately annoying, but preachy. Can you honestly say this wouldn’t irritate you, and perhaps elicit a negative response in front of the kids we’re supposed to be teaching?

Option C: Swing towards positive reinforcement.

“Alexa, please set an alarm for 7 a.m. tomorrow.”
“Your alarm is set for 7 a.m. tomorrow. Thanks for asking so politely!”

Today’s prompts would remain as they are, but successful use of the word “please” in appropriate scenarios could result in a more pleasing exchange. Naturally, we’d want to vary the “pleasing” responses so they don’t get too repetitive, since we’re trying to encourage more frequent use of “please”.

Option D: Go abstract, and mirror the brusqueness of impolite speech.

“Alexa, set an alarm for tomorrow at 7 a.m.”
“Fine, your alarm is set.”

While I wouldn’t necessarily recommend this approach, since it’s a bit of a user-experience regression, the lack of politeness means the system is less forthcoming with information. If you want the full confirmation (“Your alarm is set for 7 a.m. tomorrow”), you need to be polite about it.

So what’s the right approach? There’s no silver bullet; it probably depends not only on your assistant’s tone and demeanor, but also on your brand and the context of use. And of course, there are likely many other ways to attack this problem. But all four of these options run up against repetitiveness, especially if applied to all requests.
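
To make the trade-offs concrete, here is a minimal sketch of Option C in Python. It is purely illustrative, not Amazon’s actual implementation: the handle_request and set_alarm functions and the response strings are all invented, and a real assistant would work from parsed intents and slots rather than raw text. The key design points are that the action always completes first, and that the acknowledgements are drawn from a pool so the reinforcement doesn’t become repetitive.

    import random

    # Invented acknowledgements; varied so the positive reinforcement stays fresh.
    POLITE_ACKNOWLEDGEMENTS = [
        "Thanks for asking so politely!",
        "My pleasure.",
        "Happy to help, and thanks for the 'please'.",
    ]

    def set_alarm(time_text: str) -> str:
        """Stand-in for the real action; just returns a confirmation string."""
        return f"Your alarm is set for {time_text}."

    def handle_request(utterance: str) -> str:
        """Complete the action first, then optionally reinforce politeness."""
        confirmation = set_alarm("7 a.m. tomorrow")  # time hard-coded for the sketch
        if "please" in utterance.lower():
            return f"{confirmation} {random.choice(POLITE_ACKNOWLEDGEMENTS)}"
        return confirmation

    print(handle_request("Alexa, please set an alarm for 7 a.m. tomorrow"))
    print(handle_request("Alexa, set an alarm for 7 a.m. tomorrow"))

Unlike Option A, an impolite request is never blocked here; the only difference politeness makes is a warmer confirmation.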

Digital Human: Series 15, Ep 5 – Subservience

I Don’t Date Men Who Yell at Alexa:

One thing that is already clear: The way people speak to Alexa, Cortana, and Siri already changes the way I see them. It matters how you interact with your virtual assistant, not because it has feelings or will one day murder you in your sleep for disrespecting it, but because of how it reflects on you. Alexa is not human, but we engage with her like one. We judge people by how they interact with retail and hospitality workers—it supposedly says a lot about a person that they are rude to wait staff. Of course, waiters are more deserving of respect than robots—you could make or break a worker’s mood with your thoughtlessness, while Alexa doesn’t have moods (she only cares about yours). But the underlying revelation is the same: Who are you when in a position of power, and how do you treat those beneath you?

Digital Human: Series 15, Ep 5 – Subservience

Robots Should Be Slaves:

Slaves are normally defined to be people you own. In recent centuries, due to the African slave trade, slavery came to be associated with racism and also with endemic cruelty. In the past though (and in some places still today) slaves were often members of the same race or even nation that had simply lost private status. This happened generally as an outcome of war, but sometimes as an outcome of poverty. Excesses of cruelty are greatest when actors are able to dehumanise those in their power, and thus remove their own empathy for their subordinates. Such behaviour can be seen even within contemporary communities of citizens, when a person in power considers their very social standing as an indication of a specialness not shared with subordinates. Our culture has for good reason become extremely defensive against actions and beliefs associated with such dehumanisation.

But surely dehumanisation is only wrong when it’s applied to someone who really is human? Given the very obviously human beings that have been labelled inhuman in the global culture’s very recent past, many seem to have grown wary of applying the label at all. For example, Dennett (1987) argues that we should allocate the rights of agency to anything that appears to be best reasoned about as acting in an intentional manner. Because the costs of making a mistake and trivialising a sentient being are too great, Dennett says we are safer to err on the side of caution.

Dennett’s position is certainly easy to sympathise with, and not only because such generosity is almost definitionally nice. As I discuss below, there are many reasons people want to be able to build robots to which they owe ethical obligations. But the position overlooks the fact that there are also costs associated with allocating agency this way. I describe these costs below as well.

But first, returning to the question of definition – when I say “Robots should be slaves”, I by no means mean “Robots should be people you own.” What I mean to say is “Robots should be servants you own.”

This paper makes several fundamental claims:

1. Having servants is good and useful, provided no one is dehumanised.
2. A robot can be a servant without being a person.
3. It is right and natural for people to own robots.
4. It would be wrong to let people think that their robots are persons.

Digital Human: Series 15, Ep 5 – Subservience

Fairness, transparency, privacy:

Aims

Every day seems to bring news of another major breakthrough in the fields of data science and artificial intelligence, whether in the context of winning games, driving cars, or diagnosing disease. Yet many of these innovations also create novel risks by amplifying existing biases and discrimination in data, enhancing existing inequality, or increasing vulnerability to malfunction or manipulation.

There are also a growing number of examples where data collection and analysis risk oversharing personal information or delivering unwelcome decisions without explanation or recourse.

The Turing is committed to ensuring that the benefits of data science and AI are enjoyed by society as a whole, and that the risks are mitigated so as not to disproportionately burden certain people or groups. This interest group plays an important role in that mission by exploring technical solutions for protecting fairness, accountability, and privacy as increasingly sophisticated AI technologies are designed and deployed.

Once your smart devices can talk to you, who else are they talking to? Kashmir Hill and Surya Mattu wanted to find out – so they outfitted Hill’s apartment with 18 different internet-connected devices and built a special router to track how often they contacted their servers and see what they were reporting back. The results were surprising – and more than a little bit creepy. Learn more about what the data from your smart devices reveals about your sleep schedule, TV binges and even your tooth-brushing habits – and how tech companies could use it to target and profile you. (This talk contains mature language.)
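
Hill and Mattu’s custom router isn’t public, but the core of the technique, logging which hostnames each device looks up and how often, can be approximated with an ordinary packet sniffer. The sketch below is an assumption-laden stand-in: it uses the third-party scapy library, needs to run somewhere that can see the devices’ DNS traffic (the router itself, for example), and counts lookups without inspecting any payloads.

    from collections import Counter

    from scapy.all import DNS, DNSQR, sniff  # third-party library; sniffing needs root

    query_counts = Counter()

    def record_query(packet):
        """Tally each DNS query to see which servers the devices phone home to."""
        if packet.haslayer(DNSQR) and packet[DNS].qr == 0:  # qr == 0 marks a query
            name = packet[DNSQR].qname.decode().rstrip(".")
            query_counts[name] += 1

    # Watch DNS traffic for five minutes, then print the ten chattiest domains.
    sniff(filter="udp port 53", prn=record_query, timeout=300)

    for domain, count in query_counts.most_common(10):
        print(f"{count:5d}  {domain}")

Even a tally this crude is usually enough to show which devices are talking constantly and to whom, which is essentially what made Hill and Mattu’s results so striking.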

Digital Human: Series 15, Ep 1 – Jigsaw

Survival in the digital age – Stephanie Hankey offers a guide to safe online activism:

Digital Human: Series 15, Ep 1 – Jigsaw