
Hong Kong protesters devise face-covering hairstyle to get around mask ban:

Protesters in Hong Kong have devised a face-covering hairstyle to evade the territory's new ban on masks.

A tutorial video showing viewers how to plait their hair to achieve the anonymising hairstyle was shared on Twitter by journalist Cherie Chan.

It came as tens of thousands of masked protesters poured onto Hong Kong’s streets on Sunday, furiously yelling: “Wearing a mask is not a crime.”

Digital Human, Series 18, Episode 6: Faceless

The short documentary FACELESS was produced at the request of Jos de Putter for De Correspondent, as a follow-up to the exhibition on hidden faces in post-9/11 contemporary society that Bogomir Doringer curated in collaboration with Brigitte Felderer.

Digital Human, Series 18, Episode 6: Faceless

“Why Should We Hide Our Faces?” Hong Kong’s Voices on the Ground | Across the Strait | 2019-09-02 | web only:

Q: How often do you join the protests?

A: As a working professional, I try to go on weekends when I have time. At most legal protests, I try not to wear masks. It’s about exercising our legal right of assembly and showing our support for this movement. Why should we hide our faces?

You don’t really know if a demonstration will ultimately be legal or illegal. It might start out as a legal assembly in a designated area, but then it spills onto the street because so many people are joining. If that crowd walks down the street and deviates in any way from the original route, it can technically be interpreted as an “unlawful assembly.”

The police are supposed to inform protesters when an assembly has been declared “unlawful.” However, protesters are often barely aware when this happens; officers may simply put up a sign somewhere out of our line of sight.

If the police start shooting tear gas, you will know the demonstration is now considered illegal. This is when I put on a mask. The surgical mask I carry is not a chemical mask, so it’s not effective protection against the tear gas; but when police deem things an “unlawful assembly” and charge in, it’s better not to have your photo taken.

Many protesters are concerned about photos being taken, whether or not the assembly is legal, since China is renowned for using facial recognition technology to monitor its people. A sense of “White Terror” is increasingly felt in Hong Kong, and we worry that such images could be used against us later. Look at what is happening at Cathay Pacific and TVB: staff were laid off because they posted pro-protest messages on Facebook.

Digital Human, Series 18, Episode 6: Faceless

Cameras and other technological products make for a better and safer living environment than ever before. Massive databanks and high-resolution street cameras store hundreds of exabytes a year. But who has access to this data? It could well be put to commercial use: not only retail companies but also the advertising industry may take a keen interest in it, hoping to harvest as much personal data and information as they can.

In the future, advertisements could call your name as you walk down the street. Companies would know your interests and could tailor their retail strategies to you. That might be convenient for customers, but personal thoughts and opinions should be kept private. This product protects you from that privacy violation.

The concept, from Jing-cai Liu: a wearable face projector. A small beamer projects a different face onto your own, giving you a completely new appearance.

Digital Human, Series 18, Episode 6: Faceless

Face Cages | Zach Blas:

The success of today’s booming biometrics industry resides in its promise to rapidly measure an objective, truthful, and core identity from the surface of a human body, often for a mixture of commercial, state, and military interests. Yet, feminist communications scholar Shoshana Amielle Magnet has described this neoliberal enterprise as producing “a cage of information,” a form of policing, surveillance, and structural violence that is ableist, classist, homophobic, racist, sexist, and transphobic.

Biometric machines often fail to recognize non-normative, minoritarian persons, which makes such people vulnerable to discrimination, violence, and criminalization: Asian women’s hands fail to be legible to fingerprint devices; eyes with cataracts hinder iris scans; dark skin continues to be undetectable; and non-normative formations of age, gender, and race frequently fail successful detection. These examples illustrate that the abstract, surface calculations biometrics performs on the body are gross, harmful reductions.

A visual motif in biometric facial recognition is the minimal, colorful diagram overlaid on the face for authentication, verification, and tracking purposes. These diagrams are a kind of abstraction gone bad, a visualization of the reduction of the human to a standardized, ideological diagram. When these diagrams are extracted from the humans they cover over, they appear as harsh and sharp incongruous structures; they are, in fact, digital portraits of dehumanization.

Face Cages is a dramatization of the abstract violence of the biometric diagram. In this installation and performance work, four queer artists (micha cárdenas, Elle Mehrmand, Paul Mpagi Sepuya, and Zach Blas) generate biometric diagrams of their faces, which are then fabricated as three-dimensional metal objects, evoking a material resonance with handcuffs, prison bars, and torture devices used during the medieval period and during slavery in the United States. The metal face cages are then worn in endurance performances for video. Face Cages is presented as an installation that features the four performance videos and four metal face cages.

The computational biometric diagram, a supposedly perfect measuring and accounting of the face, once materialized as a physical object, transforms into a cage that does not easily fit the human head, that is extremely painful to wear. These cages exaggerate and perform the irreconcilability of the biometric diagram with the materiality of the human face itself, and the violence that occurs when the two are forced to coincide.

Digital Human, Series 18, Episode 6: Faceless

Hong Kong’s Face Mask Ban Is Just Pissing People Off

Digital Human, Series 18, Episode 6: Faceless

 A stolen life – A new perspective and everything in between | Neda Soltani | TEDxRWTHAachen

Digital Human, Series 18, Episode 6: Faceless

AI painting to go under the hammer:

Digital Human: Series 8, Ep 10 – Imagine 

Why Futurism Has a Cultural Blindspot – Issue 28: 2050 – Nautilus:

In early 1999, during the halftime of a University of Washington basketball game, a time capsule from 1927 was opened. Among the contents of this portal to the past were some yellowing newspapers, a Mercury dime, a student handbook, and a building permit. The crowd promptly erupted into boos. One student declared the items “dumb.”

Such disappointment in time capsules seems endemic, suggests William E. Jarvis in his book Time Capsules: A Cultural History. A headline from The Onion, he notes, sums it up: “Newly unearthed time capsule just full of useless old crap.” Time capsules, after all, exude a kind of pathos: They show us that the future was not quite as advanced as we thought it would be, nor did it come as quickly. The past, meanwhile, turns out to not be as radically distinct as we thought.

In his book Predicting the Future, Nicholas Rescher writes that “we incline to view the future through a telescope, as it were, thereby magnifying and bringing nearer what we can manage to see.” So too do we view the past through the other end of the telescope, making things look farther away than they actually were, or losing sight of some things altogether.

These observations apply neatly to technology. We don’t have the personal flying cars we predicted we would. Coal, notes the historian David Edgerton in his book The Shock of the Old, was a bigger source of power at the dawn of the 21st century than in sooty 1900; steam was more significant in 1900 than in 1800.

But when it comes to culture we tend to believe not that the future will be very different than the present day, but that it will be roughly the same. Try to imagine yourself at some future date. Where do you imagine you will be living? What will you be wearing? What music will you love?

Chances are, that person resembles you now. As the psychologist George Loewenstein and colleagues have argued, in a phenomenon they termed “projection bias,” people “tend to exaggerate the degree to which their future tastes will resemble their current tastes.”

In one experimental example, people were asked how much they would pay to see their current favorite band perform in 10 years; others were asked how much they would pay now to see the band that had been their favorite 10 years ago. “Participants,” the authors reported, “substantially overpaid for a future opportunity to indulge a current preference.” They called it the “end of history illusion”: people believed they had reached some “watershed moment” in which they had become their authentic self. Francis Fukuyama’s 1989 essay, “The End of History?” made a similar argument for Western liberal democracy as a kind of endpoint of societal evolution.

This over- and under-predicting is embedded into how we conceive of the future. “Futurology is almost always wrong,” the historian Judith Flanders suggested to me, “because it rarely takes into account behavioral changes.” And, she says, we look at the wrong things: “Transport to work, rather than the shape of work; technology itself, rather than how our behavior is changed by the very changes that technology brings.” It turns out that predicting who we will be is harder than predicting what we will be able to do.

The Digital Human, Series 13, Episode 6 – Oracle

We’re a little closer to texting from our brains, thanks to birds:

Entrepreneurs in Silicon Valley this year set themselves an audacious new goal: creating a brain-reading device that would allow people to effortlessly send texts with their thoughts.

In April, Elon Musk announced a secretive new brain-interface company called Neuralink. Days later, Facebook CEO Mark Zuckerberg declared that “direct brain interfaces [are] going to, eventually, let you communicate only with your mind.” Facebook says it has 60 engineers working on the problem.

It’s an ambitious quest—and there are reasons to think it won’t happen anytime soon. But for at least one small, orange-beaked bird, the zebra finch, the dream just became a lot closer to reality.

That’s thanks to some nifty work by Timothy Gentner and his students at the University of California, San Diego, who built a brain-to-tweet interface that figures out the song a finch is going to sing a fraction of a second before it does so.

“We decode realistic synthetic birdsong directly from neural activity,” the scientists announced in a new report published on the website bioRxiv. The team, which includes Argentinian birdsong expert Ezequiel Arneodo, calls the system the first prototype of “a decoder of complex, natural communication signals from neural activity.” A similar approach could fuel advances towards a human thought-to-text interface, the researchers say.
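To make the idea of “decoding song directly from neural activity” concrete, here is a minimal, purely illustrative sketch in Python using synthetic data. It is not the authors’ system, which reportedly drives a synthesizer modeled on the songbird’s vocal organ; this stand-in uses plain ridge regression from a window of spike counts to audio features, and every name and size in it is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 50 premotor neurons, 32 spectrogram bands.
n_bins, n_neurons, n_freqs = 2000, 50, 32
lag = 4      # neural activity leads the song by 4 bins (a fraction of a second)
window = 8   # bins of spiking history used for each prediction

# Synthetic spike counts standing in for recordings from the song system.
spikes = rng.poisson(1.0, size=(n_bins, n_neurons)).astype(float)

# Toy "song": spectrogram frames driven by premotor activity `lag` bins earlier.
mixing = rng.normal(size=(n_neurons, n_freqs))
spec = np.zeros((n_bins, n_freqs))
spec[lag:] = spikes[:-lag] @ mixing

# Design matrix: each row concatenates a window of spiking that ends `lag` bins
# before the frame it predicts, so the decoder reads out the song in advance.
ts = np.arange(window + lag, n_bins)
X = np.stack([spikes[t - lag - window + 1 : t - lag + 1].ravel() for t in ts])
Y = spec[ts]

# Ridge regression decoder: W = (X'X + aI)^{-1} X'Y
a = 100.0
W = np.linalg.solve(X.T @ X + a * np.eye(X.shape[1]), X.T @ Y)
pred = X @ W

r = np.corrcoef(pred.ravel(), Y.ravel())[0, 1]
print(f"correlation between decoded and actual song features: {r:.3f}")
```

The one design point this sketch does preserve is the lag: because the features come only from spiking that precedes each song frame, the decoder is genuinely predicting the song a moment before it is sung, which is what makes the approach interesting for thought-to-text interfaces.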

The Digital Human, Series 13, Episode 6 – Oracle