
People are using Siri as a therapist, so Apple is seeking engineers who understand psychology

People are using Siri as a therapist, so Apple is seeking engineers who understand psychology:

Perhaps you’ve stumped Siri before, asking Apple’s automated assistant things like, “What is the meaning of life?” or “How can I be healthier and happier?”

If so, you’re not alone in turning to your phone for existential guidance and serious, practical life advice. According to an Apple job posting, lots of people do it. That’s why the company is seeking software engineers with feeling—and a background in psychology and peer counseling—to help improve Siri’s responses to the toughest questions.

Digital Human: Series 15, Ep 5 – Subservience

Amazon Echo Is Magical. It’s Also Turning My Kid Into an Asshole.

Amazon Echo Is Magical. It’s Also Turning My Kid Into an Asshole.:

We love our Amazon Echo. Among other things, my four-year-old finds the knock-knock jokes hilarious, the weather captivating, the ability to summon songs comparable to magic, and Echo to be the best speller in the house. But I fear it’s also turning our daughter into a raging asshole. Because Alexa tolerates poor manners.

Digital Human: Series 15, Ep 5 – Subservience

AI painting to go under the hammer

AI painting to go under the hammer:

Digital Human: Series 8, Ep 10 – Imagine 

She didn’t even say please :/

She didn’t even say please :/

Digital Human: Series 15, Ep 5 – Subservience

Pretty Please, Alexa – Member Feature Stories – Medium

Pretty Please, Alexa – Member Feature Stories – Medium:

Option A: Respond just as a parent would, perhaps a bit pithily.

“Alexa, set an alarm.”
“I didn’t hear the magic word…”

Imagine getting this response at 11 p.m. as you’re preparing for bed. How would you feel? Or imagine it when you’re trying to set a time-out timer for your child in the heat of the moment. Would being corrected for a lack of politeness while disciplining a child really help solve the problem?

Option B: Complete the action, but add some reinforcement.

“Alexa, set an alarm for 7 a.m. tomorrow.”
“Your alarm is set for 7 a.m. tomorrow. By the way, it makes me happy when you say ‘please’.”

Less immediately annoying, but preachy. Can you honestly say this wouldn’t irritate you, and perhaps elicit a negative response in front of the kids we’re supposed to be teaching?

Option C: Swing towards positive reinforcement.

“Alexa, please set an alarm for 7 a.m. tomorrow.”
“Your alarm is set for 7 a.m. tomorrow. Thanks for asking so politely!”

Today’s prompts would remain as they are, but successful use of the word “please” in appropriate scenarios could result in a more pleasing exchange. Naturally, we’d want to vary the “pleasing” responses so they don’t get too repetitive, since we’re trying to encourage more frequent use of “please”.

Option D: Go abstract, and mirror the brusqueness of impolite speech.

“Alexa, set an alarm for tomorrow at 7 a.m.”
“Fine, your alarm is set.”

While I wouldn’t necessarily recommend this approach, since it’s a bit of a user-experience regression, the idea is that a lack of politeness makes the system less forthcoming with information. If you want the full confirmation (“Your alarm is set for 7 a.m. tomorrow”), you need to be polite about it.

So what’s the right approach? There’s no silver bullet; it probably depends not only on your assistant’s tone and demeanor, but also on your brand and the context of use. And of course, there are likely many other ways to attack this problem. But all four of these options run up against repetitiveness, especially if applied to all requests.
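As a rough sketch of Option C’s detect-and-vary logic (the function name and canned responses below are invented for illustration; this is not Alexa’s actual behaviour or API):

```python
import random

# Invented illustration of Option C: acknowledge politeness, and vary
# the praise so it doesn't become repetitive.
POLITE_ACKS = [
    "Thanks for asking so politely!",
    "It's nice to be asked so nicely.",
    "My pleasure!",
]

def confirm_alarm(utterance: str, alarm_time: str) -> str:
    """Confirm the alarm, appending varied praise only if 'please' was said."""
    confirmation = f"Your alarm is set for {alarm_time}."
    if "please" in utterance.lower():
        confirmation += " " + random.choice(POLITE_ACKS)
    return confirmation

print(confirm_alarm("Alexa, please set an alarm for 7 a.m. tomorrow",
                    "7 a.m. tomorrow"))
# e.g. "Your alarm is set for 7 a.m. tomorrow. Thanks for asking so politely!"
print(confirm_alarm("Alexa, set an alarm for 7 a.m. tomorrow",
                    "7 a.m. tomorrow"))
# "Your alarm is set for 7 a.m. tomorrow."
```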

Digital Human: Series 15, Ep 5 – Subservience

I Don’t Date Men Who Yell at Alexa

I Don’t Date Men Who Yell at Alexa:

One thing that is already clear: The way people speak to Alexa, Cortana, and Siri already changes the way I see them. It matters how you interact with your virtual assistant, not because it has feelings or will one day murder you in your sleep for disrespecting it, but because of how it reflects on you. Alexa is not human, but we engage with her like one. We judge people by how they interact with retail and hospitality workers—it supposedly says a lot about a person that they are rude to wait staff. Of course, waiters are more deserving of respect than robots—you could make or break a worker’s mood with your thoughtlessness, while Alexa doesn’t have moods (she only cares about yours). But the underlying revelation is the same: Who are you when in a position of power, and how do you treat those beneath you?

Digital Human: Series 15, Ep 5 – Subservience

Robots Should Be Slaves

Robots Should Be Slaves:

Slaves are normally defined to be people you own. In recent centuries, due to the African slave trade, slavery came to be associated with racism and also with endemic cruelty. In the past though (and in some places still today) slaves were often members of the same race or even nation that had simply lost private status. This happened generally as an outcome of war, but sometimes as an outcome of poverty. Excesses of cruelty are greatest when actors are able to dehumanise those in their power, and thus remove their own empathy for their subordinates. Such behaviour can be seen even within contemporary communities of citizens, when a person in power considers their very social standing as an indication of a specialness not shared with subordinates. Our culture has for good reason become extremely defensive against actions and beliefs associated with such dehumanisation.

But surely dehumanisation is only wrong when it’s applied to someone who really is human? Given the very obviously human beings that have been labelled inhuman in the global culture’s very recent past, many seem to have grown wary of applying the label at all. For example, Dennett (1987) argues that we should allocate the rights of agency to anything that appears to be best reasoned about as acting in an intentional manner. Because the costs of making a mistake and trivialising a sentient being are too great, Dennett says we are safer to err on the side of caution.

Dennett’s position is certainly easy to be sympathetic with, and not only because such generosity is almost definitionally nice. As I discuss below, there are many reasons people want to be able to build robots that they owe ethical obligation to. But the position overlooks the fact that there are also costs associated with allocating agency this way. I describe these costs below as well.

But first, returning to the question of definition – when I say “Robots should be slaves”, I by no means mean “Robots should be people you own.” What I mean to say is “Robots should be servants you own.”

There are several fundamental claims of this paper:

1. Having servants is good and useful, provided no one is dehumanised.
2. A robot can be a servant without being a person.
3. It is right and natural for people to own robots.
4. It would be wrong to let people think that their robots are persons.

Digital Human: Series 15, Ep 5 – Subservience

When DNA Implicates the Innocent

When DNA Implicates the Innocent:

In December 2012 a homeless man named Lukis Anderson was charged with the murder of Raveesh Kumra, a Silicon Valley multimillionaire, based on DNA evidence. The charge carried a possible death sentence. But Anderson was not guilty. He had a rock-solid alibi: drunk and nearly comatose, Anderson had been hospitalized—and under constant medical supervision—the night of the murder in November. Later his legal team learned his DNA made its way to the crime scene by way of the paramedics who had arrived at Kumra’s residence. They had treated Anderson earlier on the same day—inadvertently “planting” the evidence at the crime scene more than three hours later. The case, presented in February at the annual American Academy of Forensic Sciences meeting in Las Vegas, provides one of the few definitive examples of a DNA transfer implicating an innocent person and illustrates a growing opinion that the criminal justice system’s reliance on DNA evidence, often treated as infallible, actually carries significant risks.

As virtually every field in forensics has come under increased scientific scrutiny in recent years, especially those relying on comparisons such as bite-mark and microscopic hair analysis, the power of DNA evidence has grown—and for good reason. DNA analysis is more definitive and less subjective than other forensic techniques because it is predicated on statistical models. By examining specific regions, or loci, on the human genome, analysts can determine the likelihood that a given piece of evidence does or does not match a known genetic profile, from a victim, suspect or alleged perpetrator; moreover, analysts can predict how powerful or probative the match is by checking a pattern’s frequency against population databases. Since the mid-1990s the Innocence Project, a nonprofit legal organization based in New York City, has analyzed or reanalyzed available DNA to examine convictions, winning nearly 200 exonerations and spurring calls for reform of the criminal justice system.
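To make those statistics concrete, here is a toy random-match-probability calculation using the product rule. The loci named are real CODIS STR markers, but the genotype frequencies are invented for illustration, not taken from any real case or database:

```python
# Illustrative only: toy genotype frequencies, not real forensic data.
# Under the "product rule", the random match probability (RMP) is the
# product of the genotype frequencies at each independent locus.
locus_frequencies = {
    "D8S1179": 0.067,  # hypothetical genotype frequency at this locus
    "D21S11":  0.045,
    "TH01":    0.081,
    "FGA":     0.036,
}

rmp = 1.0
for locus, freq in locus_frequencies.items():
    rmp *= freq

print(f"Random match probability across {len(locus_frequencies)} loci: "
      f"1 in {1 / rmp:,.0f}")
# With these toy numbers: roughly 1 in 114,000 -- and each additional
# locus tested makes a coincidental match far less likely.
```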

Like any piece of evidence, however, DNA is just one part of a larger picture. “We’re desperately hoping that DNA will come in to save the day, but it’s still fitting into a flawed system,” says Erin E. Murphy, a professor of law at New York University and author of the 2015 book Inside the Cell: The Dark Side of Forensic DNA. “If you don’t bring in the appropriate amount of skepticism and restraint in using the method, there are going to be miscarriages of justice.” For example, biological samples can degrade or be contaminated; judges and juries can misinterpret statistical probabilities. And as the Anderson case brought to light, skin cells can move.

Digital Human: Series 15, Ep 1 – Jigsaw

Fairness, transparency, privacy

Fairness, transparency, privacy:

Aims

Every day seems to bring news of another major breakthrough in the fields of data science and artificial intelligence, whether in the context of winning games, driving cars, or diagnosing disease. Yet many of these innovations also create novel risks by amplifying existing biases and discrimination in data, enhancing existing inequality, or increasing vulnerability to malfunction or manipulation.

There is also a growing number of examples where data collection and analysis risk oversharing personal information or producing unwelcome decisions without explanation or recourse.

The Turing is committed to ensuring that the benefits of data science and AI are enjoyed by society as a whole, and that the risks are mitigated so as not to disproportionately burden certain people or groups. This interest group plays an important role in this mission by exploring technical solutions to protecting fairness, accountability, and privacy, as increasingly sophisticated AI technologies are designed and deployed.

Once your smart devices can talk to you, who else are they talking to?

Once your smart devices can talk to you, who else are they talking to? Kashmir Hill and Surya Mattu wanted to find out – so they outfitted Hill’s apartment with 18 different internet-connected devices and built a special router to track how often they contacted their servers and see what they were reporting back. The results were surprising – and more than a little bit creepy. Learn more about what the data from your smart devices reveals about your sleep schedule, TV binges and even your tooth-brushing habits – and how tech companies could use it to target and profile you. (This talk contains mature language.)
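The idea behind their instrumented router can be approximated in a few lines. This sketch (using the scapy packet library; it is an assumption-laden illustration, not Hill and Mattu’s actual tooling) simply tallies which hostnames each device on the network asks DNS about:

```python
# Toy version of the idea: passively count which hostnames each device
# on the local network looks up, by watching DNS queries. Requires the
# scapy library and root privileges to sniff traffic.
from collections import Counter

from scapy.all import DNSQR, IP, sniff

lookups: Counter = Counter()

def record_query(pkt) -> None:
    """Tally (device IP, queried hostname) pairs from DNS questions."""
    if pkt.haslayer(DNSQR) and pkt.haslayer(IP):
        hostname = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        lookups[(pkt[IP].src, hostname)] += 1

# Sniff DNS traffic for five minutes, then print the chattiest devices.
sniff(filter="udp port 53", prn=record_query, store=False, timeout=300)
for (device, hostname), count in lookups.most_common(10):
    print(f"{device} asked for {hostname} {count} times")
```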

Digital Human: Series 15, Ep 1 – Jigsaw