The contributions to this book explore a phenomenon that seems a contradiction in itself: we, the users of computers, can be tracked in digital space for all eternity. On the one hand, we want to be noticed and noticeable; on the other, we do not necessarily want to be recognized at first glance, to fall prey to an unfathomable public, or – still less – to lose face.
While spending time with gang members in the South Side of Chicago to conduct fieldwork for his forthcoming book, sociologist Forrest Stuart would regularly check Twitter and Instagram. He’d be surprised to find that the young men he was hanging out with, often in perfectly mundane situations, were posting pre-prepared images and videos of themselves wielding guns.
“I discovered all this flexing on social media,” he tells me over Skype. “I’d be standing right next to these guys and realise they were posting things that were nothing to do with what we were actually doing.” Some of the young men didn’t own and had never used a gun. They simply borrowed them to stockpile photos and videos of themselves holding weapons, later curating an intimidating social media profile that they would drip feed onto the internet over the coming days and weeks.
Drill artist Digga D has found a young, engaged audience through social media, despite some of his videos being banned
“I’d be driving them across town in my car, and when we’d pass a rival block they’d start taking selfies out the window, pretending they were on their way to do a drive-by,” Stuart continues. “Another time, in a cold Chicago winter, I was sat with a young man who was babysitting his little sisters. We were in his living room watching music videos on the television. But when I checked Instagram, he was on there posting photos pretending to be stood in the blizzard outside protecting his block.”
It is no secret that social media platforms are shifting human behaviours, habits and interactions all over the world. People are increasingly able to use digital profiles of themselves to extend or invert their physical realities, and thus manipulate their social, professional and moral worlds for all sorts of benefits and incentives: the prospect of meeting a new lover, the lure of branded money from sponsors, the endorphin hit of likes and shares, and the chase for votes and political power.
By the end of his senior year in a Philadelphia high school in June 2017, Jamal had missed out on completing his certification in the culinary arts, playing on the basketball team, attending prom, and walking across the stage at his graduation. He was barred from working a job to help his mother pay the bills. He wasn’t even allowed to leave his home — all on the order of a judge. But Jamal hadn’t been convicted of a crime. Jamal lost a year of his life because — like many testosterone-filled young men — he acted tough on his social media accounts.
Jamal, a young black man — whose name has been changed at his request due to confidentiality concerns — was swept up in Philadelphia’s Focused Deterrence program, an initiative meant to crack down on gang violence but which has instead been used to criminalize entire social networks of young black and brown people. Philadelphia police arrested him in September 2016 on a gun charge after an officer in the department’s South Gang Task Force identified Jamal as a member of a gang. How had that officer made that determination? As officer Matthew York, a member of the task force, later testified in court, it was largely based on photos and tweets that appeared on Jamal’s social media and which York believed associated him with a gang, as well as Jamal’s appearance in a friend’s music video, a video that the officer believed was “gang-related.”
Philadelphia’s Focused Deterrence program, like similar programs in cities around the country, relies on internet surveillance. Police officers mine social media for possible gang affiliations of young people, then compile that “data” and feed it into gang databases. Police officers target young people in the databases — who may be included for as little as flashing a gang sign in a tweet or bragging about a crime in a music video on YouTube and Facebook — for on-the-ground policing. State and federal prosecutors also get their hands on the social-media “data,” using it to shore up criminal cases. Philadelphia modeled Focused Deterrence after criminologist David Kennedy’s “Ceasefire” policing model, which, as I previously reported in The Appeal and The Nation, focuses policing on small groups of individuals (often referred to by police departments as “gangs”) that purportedly drive community violence. The Kennedy model and its offshoot programs have been deployed by many cities, including Baltimore, Baton Rouge, and New Orleans.
But the “data” police feed into these databases, for the most part, has little bearing on reality. Indeed, in December the City of Chicago settled a lawsuit with a man who was falsely included in its sprawling gang database. Across the country, young people are swept into these databases and then targeted by police — just because they bragged about actions they had no part in or made threats against rival groups they have no intention of following up on…
When Elizabeth Warren took on Mark Zuckerberg and Facebook earlier this week, it was a low moment for what New Yorker writer Andrew Marantz calls “techno-utopianism.”
That the progressive, populist Massachusetts Senator and leading Democratic Presidential candidate wants to #BreakUpBigTech is not surprising. But Warren’s choice to spotlight regulating and trust-busting Facebook was nonetheless noteworthy, because of what it represents on a philosophical level. Warren, along with like-minded political leaders, social activists, and tech critics, has begun to offer the first massively popular alternative to the massively popular wave of aggressive optimism and “genius” ambition that characterized tech culture for the past decade or two.
“No,” Warren and others seem to say, “your vision is not necessarily making the world a better place.” This is a major buzzkill for tech leaders who have made (positive) world-changing their number one calling card — more than profits, popularity, skyscrapers like San Francisco’s striking Salesforce Tower, or any other measure.
Enter Marantz, a longtime New Yorker staff writer and Brooklyn, N.Y. resident who has recently trained his attention on tech culture, following around iconic figures on both sides of what he sees as the divide of our time — not between tech greats whose successes make us all better and those who would stop them, but between the alternative figures on the “new right” and the self-understood liberals of Silicon Valley who, according to Marantz, have both contributed to “hijacking the American conversation.”
When I started writing a column in the Guardian, I would engage with the commenters who made valid points and urge those whose response was getting lost in rage to re-read the piece and return. Comments were open for 72 hours. Coming up for air at the end of a thread felt like mooring a ship after a few days on choppy waters, like an achievement, something that I and the readers had gone through together. We had discussed sensitive, complicated ideas about politics, race, gender and sexuality and, at the end, via a rolling conversation, we had got somewhere.
In the decade since, the tenor of those comments became so personalised and abusive that the ship often sank before making it to shore – the moderators would simply shut the thread down. When it first started happening, I took it as a personal failure – perhaps I had not struck the right tone or not sufficiently hedged all my points, provoking readers into thinking I was being dishonest or incendiary. In time, it dawned on me that my writing was the same. It was the commenters who had changed. It was becoming harder to discuss almost anything without a virtual snarl in response. And it was becoming harder still to do so if one were not white or male.
As a result, the Guardian overhauled its policy and decided that it would not open comment threads on pieces that were certain to derail. The moderators had a duty of care to the writers, some of whom struggled with the abuse, and a duty of care to new writers who might succumb to a chilling effect if they knew that embarking on a journalism career nowadays inevitably came with no protection from online thuggery. Alongside these moral concerns there were also practical, commercial ones. There were simply not enough resources to manage all the open threads at the same time with the increased level of attention that was now required.
In the past 10 years, many platforms in the press and social media have had to grapple with the challenges of managing users with increasingly sharp and offensive tones, while maintaining enough space for expression, feedback and interaction. Speech has never been more free or less intermediated. Anyone with internet access can create a profile and write, tweet, blog or comment, with little vetting and no hurdle of technological skill. But the primary targets of the abuse enabled by this growth in the means of expression have been women, minorities and LGBTQ+ people.
A 2017 Pew Research Center survey revealed that a “wide cross-section” of Americans experience online abuse, but that the majority was directed towards minorities, with a quarter of black Americans saying they have been attacked online due to race or ethnicity. Ten per cent of Hispanics and 3% of whites reported the same. The picture is not much different in the UK. A 2017 Amnesty report analysed tweets sent to 177 female British MPs. The 20 of them who were from a black and ethnic minority background received almost half the total number of abusive tweets.
The vast majority of this abuse goes unpunished. And yet it is somehow conventional wisdom that free speech is under assault, that university campuses have succumbed to an epidemic of no-platforming, that social media mobs are ready to raise their pitchforks at the most innocent slip of the tongue or joke, and that Enlightenment values that protected the right to free expression and individual liberty are under threat. The cause of this, it is claimed, is a liberal totalitarianism that is attributable (somehow) simultaneously to intolerance and thin skin. The impulse is allegedly at once both fascist in its brutal inclinations to silence the individual, and protective of the weak, easily wounded and coddled.
This is the myth of the free speech crisis. It is an extension of the political-correctness myth, but is a recent mutation more specifically linked to efforts or impulses to normalise hate speech or shut down legitimate responses to it. The purpose of the myth is not to secure freedom of speech – that is, the right to express one’s opinions without censorship, restraint or legal penalty. The purpose is to secure the licence to speak with impunity; not freedom of expression, but rather freedom from the consequences of that expression.
Last fall at Oberlin College, a talk held as part of Latino Heritage Month was scheduled on the same evening that intramural soccer games were held. As a result, soccer players communicated by email about their respective plans. “Hey, that talk looks pretty great,” a white student wrote to a Hispanic student, “but on the off chance you aren’t going or would rather play futbol instead the club team wants to go!!”
Unbeknownst to the white student, the Hispanic student was offended by the email. And her response signals the rise of a new moral culture in America.
When conflicts occur, sociologists Bradley Campbell and Jason Manning observe in an insightful new scholarly paper, aggrieved parties can respond in any number of ways. In honor cultures like the Old West or the street gangs of West Side Story, they might engage in a duel or physical fight. In dignity cultures, like the ones that prevailed in Western countries during the 19th and 20th centuries, “insults might provoke offense, but they no longer have the same importance as a way of establishing or destroying a reputation for bravery,” they write. “When intolerable conflicts do arise, dignity cultures prescribe direct but non-violent actions.”
We’ve all engaged in these actions.
The aggrieved might “exercise covert avoidance, quietly cutting off relations with the offender without any confrontation” or “conceptualize the problem as a disruption to their relationship and seek only to restore harmony without passing judgment.” In the most serious cases, they might call police rather than initiating violence themselves. “For offenses like theft, assault, or breach of contract, people in a dignity culture will use law without shame,” the authors observe. “But in keeping with their ethic of restraint and toleration, it is not necessarily their first resort, and they might condemn many uses of the authorities as frivolous. People might even be expected to tolerate serious but accidental personal injuries.”
Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as “smart” as a human being. And then, says Nick Bostrom, it will overtake us: “Machine intelligence is the last invention that humanity will ever need to make.” A philosopher and technologist, Bostrom asks us to think hard about the world we’re building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?
East Asian technological innovations have long outpaced those in the West. Products that sound like recent or even future innovations to most Westerners have been available for decades in Asia, particularly in Japan. These include:
· A handheld device that enables customers to order food and drinks from their karaoke room.
· A button attached to the table that customers push to alert a waitress.
· A slew of vending machines that sell everything you can imagine: alcohol, ramen, underwear, umbrellas, rice, newspapers, cell phones.
· Love hotels where guests can check in discreetly without interacting with other human beings.
Tourists visiting Japan for the first time often feel compelled to take a photo of the ubiquitous high-tech washlet toilets. These fixtures are hardly new; they have been on the market since 1980 and have more than 80 percent market penetration. Years before the Internet of Things became a phenomenon in the West, Japanese people were using their mobile phones to run their baths remotely while in a cab. They were also using a single card on their phones to buy groceries from a store, get green tea from a vending machine, and pay the fare for trains and buses.
When the personal computer first became ubiquitous in the 1980s, as Adrienne LaFrance wrote in The Atlantic earlier this year, some people found it so terrifying that the term “computerphobia” was coined.
“In the early days of the telephone, people wondered if the machines might be used to communicate with the dead. Today, it is the smartphone that has people jittery,” she wrote. “Humans often converge around massive technological shifts—around any change, really—with a flurry of anxieties.”
To see those anxieties quantified, take a look at the top five scariest items in the Survey of American Fears, released earlier this week by researchers at Chapman University. Three of them—cyberterrorism, corporate tracking of personal information, and government tracking of personal information—were technology-related.
For the survey, a random sample of around 1,500 adults ranked their fears of 88 different items on a scale of one (not afraid) to four (very afraid). The fears were divided into 10 different categories: crime, personal anxieties (like clowns or public speaking), judgment of others, environment, daily life (like romantic rejection or talking to strangers), technology, natural disasters, personal future, man-made disasters, and government—and when the study authors averaged out the fear scores across all the different categories, technology came in second place, right behind natural disasters.
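The aggregation step described above — averaging item-level fear scores within each of the 10 categories and ranking the categories by their means — can be sketched in a few lines. This is a hypothetical illustration only: the item names and scores below are invented, not the Chapman survey's actual data.

```python
# Hypothetical sketch of the survey's aggregation: each item has a mean fear
# score (1 = not afraid, 4 = very afraid); items belong to categories, and
# each category's score is the average of its items. All numbers are invented.
from collections import defaultdict

item_scores = {
    ("natural disasters", "earthquake"): 2.6,
    ("natural disasters", "flood"): 2.4,
    ("technology", "cyberterrorism"): 2.5,
    ("technology", "corporate tracking"): 2.4,
    ("crime", "burglary"): 1.9,
}

# Group item scores by category.
by_category = defaultdict(list)
for (category, _item), score in item_scores.items():
    by_category[category].append(score)

# Average within each category, then rank categories from most to least feared.
category_means = {cat: sum(scores) / len(scores) for cat, scores in by_category.items()}
ranking = sorted(category_means, key=category_means.get, reverse=True)
print(ranking)
```

With these made-up numbers, the ranking comes out `['natural disasters', 'technology', 'crime']` — mirroring the article's finding that technology placed second, just behind natural disasters.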