Not a lunchtime lecture, but a very cool listen. Dive deep into the history of the Delphic Oracle.
In 1972 the Club of Rome, an international think tank, commissioned four scientists to use computers to model the human future. The result was the infamous Limits to Growth, which crashed into world culture like an asteroid from space. Collapse, calamity and chaos were the media takeaways from the book, even though the authors tried hard to explain they weren’t making predictions but only exploring what would happen if population and economies continued their exponential growth. People, however, wanted predictions even if the book wasn’t really offering them. That gap between the authors’ intentions and the book’s reception tells us something critical about flaws in the way we think about the long-term future. Just as important, it points to new and different ways to think about the future at this strange moment in human history, when that future is so uncertain.
The real point to emerge from the crude (by today’s standards) simulations in The Limits to Growth was that…duh…growth had limits. Using the language of coupled non-linear differential equations, the authors modeled the interaction between population and resources on a finite planet. The stunning prediction of those models was that collapse, rather than steady state, was one very real “solution” to the system. The visual cue to this nasty future was the simple trajectory of a black line on a printout of population vs. time. That was really all folks needed. Follow the line. If it leveled off, things would be great. If it plummeted, we were, by definition, all doomed.
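To get a feel for what “coupled non-linear differential equations” means here, consider a toy model in the same spirit. This is emphatically not the actual World3 model the Club of Rome team used; it is a minimal sketch, with made-up parameter values, of the core dynamic: population grows exponentially while drawing down a finite resource stock, and once the resource runs low, growth turns into decline.

```python
# Toy "limits to growth" sketch: two coupled equations stepped forward
# with simple Euler integration. NOT the real World3 model -- every
# parameter value below is an illustrative assumption.

def simulate(steps=400, dt=1.0):
    pop = 1.0           # population, arbitrary units
    res = 1000.0        # finite, nonrenewable resource stock
    birth_rate = 0.03   # per-capita growth when resources are abundant
    death_rate = 0.02   # per-capita decline when resources are exhausted
    use_per_cap = 0.1   # resource consumed per unit population per step
    history = []
    for _ in range(steps):
        # Scarcity factor drops from ~1 toward 0 as the resource depletes,
        # so growth slows and eventually reverses (the "collapse" branch).
        scarcity = res / (res + 100.0)
        growth = birth_rate * pop * scarcity - death_rate * pop * (1 - scarcity)
        depletion = use_per_cap * pop * scarcity
        pop = max(pop + growth * dt, 0.0)
        res = max(res - depletion * dt, 0.0)
        history.append((pop, res))
    return history

traj = simulate()
peak = max(p for p, _ in traj)    # population overshoots...
final = traj[-1][0]               # ...then collapses well below its peak
```

Plot `pop` against time from `history` and you get exactly the kind of black line the text describes: a rise, an overshoot, and a plunge, with the fight then being over where on that curve we actually sit.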
What happened next was a battle over the details of that line and its trajectory. Would it really plummet, and if so, when, exactly, would it do that—i.e., how many years left till collapse? Economists, environmentalists, political scientists, and politicians began duking it out over these questions, and they haven’t stopped yet. But as climate change and resource depletion began spreading from scientific journals to headlines, it became clear these kinds of fights are missing the point. For the boots-on-the-ground folks—urban planners who must start planning and building now—something very different is needed.
That is a great irony of the challenge human culture must deal with now. On the one hand we can predict the future. Our science makes it pretty damn clear that a rapidly changing planet is in the cards over the next 30 to 50 to 100 years. On the other hand, there is no way to accurately predict all the ways a city like New York or Seattle will be affected by those changes. For folks charged with ensuring the specifics of human culture are resilient in the face of those changes, the Club of Rome-style computer models can’t deal with the way uncertainty spreads the further we look into the future. It’s hard enough to predict snowpack a year in advance; how is a city to understand and plan for changes given changing climate conditions and population levels 50 years in advance?
To make that leap we need to go beyond predicting the future and begin telling the future.
We need to begin thinking in terms of “scenarios.”
Some divination methods are more intimate than others :/
Robot is a relative newcomer to the English language. It was the brainchild of the Czech playwright, novelist and journalist Karel Čapek, who introduced it in his 1920 hit play, R.U.R., or Rossum’s Universal Robots. Science historian Howard Markel discusses how Čapek thought up the word.
Letters written by a young girl in Poland, similar age to Aleks’ badass Nazi-bamboozling granny.
Convicted criminals were thus clamped in a position of acute and pitiful vulnerability in full view of the jeering mob, to whom they proved irresistible targets and who were at liberty — encouraged, even — to chuck whatever they could get their hands on at the victim’s head: clumps of earth, rotten eggs, cucumbers, turnips, offal and, in their less charitable moments, dead cats, paving stones, shards of glass and bricks. The stakes were high; an unlucky few died in the pillory (around 10 in the course of the 18th century, according to the historian Robert Shoemaker). Aside from sodomites, who fared particularly badly, no creature was more despised than false accusers who, for reward money, swore robberies against innocent parties. John Valler stood the pillory for just that in June 1732 and a pamphlet gleefully described how ‘the mob began to pelt him with cabbage, cauliflowers and artichoke stalks …[then] they pulled down the pillory, by which the skull of this unhappy wretch was fractured’. Still not satisfied, ‘as he lay on the ground, they stamped so hard upon his body that they broke his ribs’. He was dead within the hour. This had not been the authorities’ intention; two members of the crowd were convicted of his murder three months later.
Yet the crowd could be forgiving; sympathetic even, especially when they felt the defendant had been treated unfairly.
Famously, when Daniel Defoe was pilloried in 1703 after publishing a faux-bigoted rant against religious dissenters (of which he was one) on the orders of the pro-dissenter, irony-deaf Whig government, the only thing anyone threw at him was flowers, while the very pamphlet itself was casually sold by his supporters. And in June 1763, the Post Boy reported how the crowd reacted to two elderly men in the New Palace Yard pillory for attempted buggery — ‘their tears, which flowed in great abundance, drew such compassion, that they treated them with the greatest lenity, and some money was collected for them’.
The pillory was the most interactive and democratic of all of old London’s shaming rituals: audience responses could damage, destroy or save someone’s reputation or, in rare cases, kill them. In 1509, three pilloried offenders are recorded as ‘dying of shame’ — could they have been driven to suicide, like victims of online trolling in extreme cases?
Psycho – How Alfred Hitchcock Manipulates An Audience
Pretty good breakdown of Hitchcock’s methods by The Discarded Image
Most people don’t try to parse cuteness. Like pornography, we know it when we see it. With a bit of examination, however, cuteness has easily quantifiable aesthetics. Take a moment to picture whatever you find cute—puppies, kittens, cartoon characters or your own children. Cuteness is the type of attractiveness associated with youth; your “cute” objects no doubt have many youthful traits.
Infants of most species have a small body with a disproportionately large head, big eyes, small nose, chubby limbs and clumsy coordination. Youthful behavior includes playfulness, affection, helplessness, and a need to be nurtured. A few characteristics such as dimples and baby-talk are unique to humans, but most are common across species.
Evolutionary biologists view “cuteness” as simply the mechanism by which infantile features trigger nurturing in adults—a crucial adaptation for survival. Scientific studies find that definitions of cuteness are similar across cultures. So are our responses.
Anyone disheartened by research demonstrating that attractive adults are better liked and better paid than their homelier peers will be further dismayed at studies on infant cuteness. Articles such as “The Infant’s Physical Attractiveness: Its Effect on Bonding and Attachment” document that stereotypically cute babies receive the most attention from both strangers and their own parents. They run less risk of abuse or neglect. Cute children proceed to get better treatment from teachers. Fortunately, most babies are cute enough to attract sufficient nurturing from parents and the world around them. The decline of cuteness normally coincides with the child’s diminished need for caretaking, which gradually shifts toward younger siblings.
Toy manufacturers are well aware of what’s cute. Dolls have grown progressively cuter: first they looked like people, then like children, then like supernormal exaggerations of children. In the 1990s, the Journal of Animal Behavior published a series of articles on a creature not of the wilderness but of the marketplace.
“The Evolution of the Teddy Bear” traced the origin to 1902, when President Theodore Roosevelt, on a hunt in Mississippi, famously refused to shoot a captive bear, an incident immortalized in a popular newspaper cartoon. The early teddies looked like bears — with a low forehead and a long snout. Over the years, the teddy “evolved” to become the cute, popular creature we know today, laden with infantile features, including a larger forehead and a shorter snout. “It is obvious that the morphological changes that have occurred in teddies in the short span of a little over 100 years have contributed greatly to their reproductive fitness,” observed the authors. “There seem to be teddies all over the place.”
With tongue in cheek, but metaphor firmly in mind, animal behaviorists continued publishing on the evolution of the teddy. They pointed out that the changes might be likened to mutation, but are actually closer to “intelligent design,” diverting human resources to enable teddies to reproduce at a phenomenal rate.
And that, my dear Digihuman listeners, is why kittens won the internet.