Entrepreneurs in Silicon Valley this year set themselves an audacious new goal: creating a brain-reading device that would allow people to effortlessly send texts with their thoughts.
In April, Elon Musk announced a secretive new brain-interface company called Neuralink. Days later, Facebook CEO Mark Zuckerberg declared that “direct brain interfaces [are] going to, eventually, let you communicate only with your mind.” Facebook says it has 60 engineers working on the problem.
It’s an ambitious quest—and there are reasons to think it won’t happen anytime soon. But for at least one small, orange-beaked bird, the zebra finch, the dream just became a lot closer to reality.
That’s thanks to some nifty work by Timothy Gentner and his students at the University of California, San Diego, who built a brain-to-tweet interface that figures out the song a finch is going to sing a fraction of a second before it does so.
“We decode realistic synthetic birdsong directly from neural activity,” the scientists announced in a new report posted on the preprint server bioRxiv. The team, which includes Argentinian birdsong expert Ezequiel Arneodo, calls the system the first prototype of “a decoder of complex, natural communication signals from neural activity.” A similar approach could fuel advances towards a human thought-to-text interface, the researchers say.
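To get a feel for what “decoding song from neural activity” means in practice, here is a deliberately tiny sketch of the problem’s shape. This is not the team’s code: their prototype renders its output as synthetic song (that’s the “realistic synthetic birdsong” above), while this toy just fits a linear read-out on fake data, and every number in it is an assumption.

```python
# Hypothetical sketch of a neural-activity-to-song decoder, NOT the paper's model.
# Pretend X holds firing rates recorded a fraction of a second before each song
# frame, and Y holds the acoustic features (e.g. synthesizer parameters) to predict.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames, n_neurons, n_features = 2000, 60, 16

X = rng.poisson(5.0, size=(n_frames, n_neurons)).astype(float)   # fake spike counts
W_true = rng.normal(size=(n_neurons, n_features))                # fake neural tuning
Y = X @ W_true + rng.normal(scale=2.0, size=(n_frames, n_features))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

decoder = Ridge(alpha=1.0).fit(X_tr, Y_tr)         # learn the neural-to-song mapping
print("held-out R^2:", decoder.score(X_te, Y_te))  # how well activity predicts song
```

The payoff in the real system is that the decoded features become audible song rather than numbers on a screen, which is what makes it a communication prototype rather than just a regression exercise.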
Police in the UK are starting to use futuristic technology that allows them to predict where and when crime will happen, and deploy officers to prevent it, research has revealed. “Predictive crime mapping” may sound like the plot of a far-fetched film, but it is already widely in use across the US and Kent Police is leading the technological charge in the UK.
A report on big data’s use in policing published by the Royal United Services Institute for Defence and Security Studies (RUSI) said British forces already have access to huge amounts of data but lack the capability to use it.
Alexander Babuta, who carried out the research, said predictive crime mapping tools had existed for more than a decade but are only being used by a fraction of British forces. “The software itself is actually quite simple – using crime type, crime location and date and time – and then based on past crime data it generates a hotspot map identifying areas where crime is most likely to happen,” he told The Independent…
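Since Babuta stresses how simple the software is (crime type, location, date and time in; hotspot map out), the basic recipe fits in a few lines. The sketch below is my guess at that recipe, not any vendor’s product: the grid size, the recency weighting, and the field names are all invented.

```python
# Toy hotspot mapping from past crime data; all parameters are assumptions.
from collections import Counter
from datetime import datetime

CELL = 0.005           # grid cell size in degrees (roughly 500 m); invented
HALF_LIFE_DAYS = 30.0  # how fast old incidents stop mattering; invented

def hotspot_scores(incidents, now):
    """incidents: dicts with 'lat', 'lon', 'when'. Returns recency-weighted
    crime counts per grid cell; high-scoring cells are the 'hotspots'."""
    scores = Counter()
    for inc in incidents:
        age_days = (now - inc["when"]).days
        weight = 0.5 ** (age_days / HALF_LIFE_DAYS)   # recent crimes count more
        cell = (round(inc["lat"] / CELL), round(inc["lon"] / CELL))
        scores[cell] += weight
    return scores

incidents = [
    {"lat": 51.280, "lon": 1.080, "when": datetime(2017, 9, 1)},
    {"lat": 51.280, "lon": 1.081, "when": datetime(2017, 9, 20)},
    {"lat": 51.270, "lon": 1.070, "when": datetime(2017, 6, 1)},
]
print(hotspot_scores(incidents, datetime(2017, 10, 1)).most_common(1))
```

Patrols then get sent to the top-scoring cells, which is all “predictive crime mapping” amounts to in this simple form: past crime, weighted by recency, binned on a map.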
In 1972 the Club of Rome, an international think tank, commissioned four scientists to use computers to model the human future. The result was the infamous Limits to Growth that crashed into world culture like an asteroid from space. Collapse, calamity and chaos were the media take-aways from the book, even though the authors tried hard to explain they weren’t making predictions but only exploring what would happen if population and economies continued their exponential growth. People, however, wanted predictions even if the book wasn’t really offering them. That gap between the authors’ intentions and the book’s reception tells us something critical about flaws in the way we think about the long-term future. Just as important, it points to new and different ways to think about the future at this strange moment in human history, when that future is so uncertain.
The real point to emerge from the crude (by today’s standards) simulations in The Limits to Growth was that…duh… growth had limits. Using the language of coupled non-linear differential equations, the authors modeled the interaction between population and resources on a finite planet. The stunning prediction of those models was that collapse, rather than steady state, was one very real “solution” to the system. The visual cue to this nasty future was the simple trajectory of a black line on a printout of population vs. time. That was really all folks needed. Follow the line. If it leveled off, things would be great. If it plummeted, we were, by definition, all doomed.
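The “coupled non-linear differential equations” point is easy to see in miniature. The toy below is nothing like the actual World3 model behind the book, and its parameters are arbitrary, but it shows the same qualitative behavior: a population line that climbs, overshoots its resource base, and plummets.

```python
# Toy population/resource model with made-up parameters, NOT World3.
def run(years=300, dt=0.1):
    P, R = 1.0, 100.0                        # population, non-renewable resource
    t, traj = 0.0, []
    while t < years:
        growth = 0.05 * P * R / (R + 20.0)   # growth requires available resources
        deaths = 0.02 * P
        P += dt * (growth - deaths)
        R = max(R - dt * 0.04 * P, 0.0)      # consumption scales with population
        traj.append((t, P, R))
        t += dt
    return traj

traj = run()
t_peak, p_peak, _ = max(traj, key=lambda s: s[1])
print(f"population peaks at {p_peak:.1f} around year {t_peak:.0f}, then falls")
```

Follow the printed line, as the book’s readers did: the coupling is what does the work, since every unit of population growth quietly eats the resource base that growth depends on.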
What happened next was a battle over the details of that line and its trajectory. Would it really plummet, and if so, when, exactly, would it do that—i.e., how many years left till collapse? Economists, environmentalists, political scientists, and politicians began duking it out over these questions, and they haven’t stopped yet. But as climate change and resource depletion began spreading from scientific journals to headlines, it became clear these kinds of fights are missing the point. For the boots-on-the-ground folks—urban planners who must start planning and building now—something very different is needed.
That is a great irony of the challenge human culture must deal with now. On the one hand, we can predict the future. Our science makes it pretty damn clear that a rapidly changing planet is in the cards over the next 30 to 50 to 100 years. On the other hand, there is no way to accurately predict all the ways a city like New York or Seattle will be affected by those changes. For folks charged with ensuring the specifics of human culture are resilient in the face of those changes, the Club of Rome-style computer models can’t deal with the way uncertainty spreads the further we look into the future. It’s hard enough to predict snowpack a year in advance; how is a city to understand and plan for changes given changing climate conditions and population levels 50 years in advance?
To make that leap we need to go beyond predicting the future and begin telling the future.
We need to begin thinking in terms of “scenarios.”
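One concrete way to read “scenarios”: stop asking a model for the one true line and instead run it many times under different plausible assumptions, then plan against the spread. The sketch below is purely illustrative; the quantity being projected and every parameter range are invented.

```python
# Scenario-ensemble sketch: many runs under sampled assumptions, no single forecast.
import random

random.seed(42)

def project_demand(years=50):
    growth = random.uniform(0.00, 0.03)   # annual growth rate: we don't know it
    climate = random.uniform(0.9, 1.4)    # climate stress multiplier: ditto
    demand = 100.0                        # today's level, arbitrary units
    for _ in range(years):
        demand *= 1 + growth
    return demand * climate

runs = sorted(project_demand() for _ in range(10_000))
low, mid, high = runs[500], runs[5_000], runs[9_500]
print(f"5th/50th/95th percentile after 50 years: {low:.0f} / {mid:.0f} / {high:.0f}")
```

The planner doesn’t get a single number back; they get a range of futures wide enough to design against, which is the whole point of scenario thinking.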
I can look into your eyes and see straight to your heart.
It may sound like a sappy sentiment from a Hallmark card. Essentially though, that’s what researchers at Google did in applying artificial intelligence to predict something deadly serious: the likelihood that a patient will suffer a heart attack or stroke. The researchers made these determinations by examining images of the patient’s retina.
Google, which is presenting its findings Monday in Nature Biomedical Engineering, an online medical journal, says that such a method is as accurate as predicting cardiovascular disease through more invasive measures that involve sticking a needle in a patient’s arm.
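The article doesn’t come with code, but the general recipe it describes (a deep network reading risk signals out of retinal images) looks something like the sketch below. To be clear, this is a hedged guess at the shape of such a model, with an off-the-shelf backbone and invented output heads, not Google’s actual system.

```python
# Hedged sketch of a CNN regressing cardiovascular risk factors from a retina
# photo; the backbone, the three outputs, and the input size are all assumptions.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)               # generic image backbone
backbone.fc = nn.Linear(backbone.fc.in_features, 3)    # e.g. age, systolic BP, smoker logit

fundus = torch.randn(1, 3, 224, 224)                   # stand-in for a fundus photograph
age, systolic_bp, smoker_logit = backbone(fundus)[0]   # untrained, so outputs are noise
print(float(age), float(systolic_bp), float(smoker_logit))
```

Trained on enough labeled retina images, a model of this shape is what would let “look into your eyes” become a cardiovascular risk estimate, with no needle involved.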
As usual, The Twilight Zone appears to predict the future. Beware of prophetic machines… *cue theme music*
Some divination methods are more intimate than others :/
By Victoria Turk
AN ARTIFICIAL intelligence system can predict how a scene will unfold and dream up a vision of the immediate future.
Given a still image, the deep learning algorithm generates a mini video showing what could happen next. If it starts with a picture of a train station, it might imagine the train pulling away from the platform, for example. Or an image of a beach could inspire it to animate the motion of lapping waves.
Teaching AI to anticipate the future can help it comprehend the present. To understand what someone is doing when they’re preparing a meal, we might imagine that they will next eat it, something which is tricky for an AI to grasp. Such a system could also let an AI assistant recognise when someone is about to fall, or help a self-driving car foresee an accident.
“Any robot that operates in our world needs to have some basic ability to predict the future,” says Carl Vondrick at the Massachusetts Institute of Technology, part of the team that created the new system. “For example, if you’re about to sit down, you don’t want a robot to pull the chair out from underneath you.”
To teach the AI to make better videos, the team used an approach called adversarial networks. One network generates the videos, and the other judges whether they look real or fake. The two get locked in competition: the video generator tries to make videos that best fool the other network, while the other network hones its ability to distinguish the generated videos from real ones.
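That generator-versus-judge competition fits in a surprisingly small sketch. The toy below swaps video frames for random vectors, so it is only the skeleton of the adversarial idea, not the MIT video model; the network sizes, learning rates, and data are all invented.

```python
# Bare-bones adversarial training loop on toy vectors standing in for video clips.
import torch
import torch.nn as nn

DIM, NOISE, BATCH = 32, 8, 256
G = nn.Sequential(nn.Linear(NOISE, 64), nn.ReLU(), nn.Linear(64, DIM))  # generator
D = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))      # judge
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(BATCH, DIM) + 2.0   # stand-in for samples of "real video"

for step in range(200):
    # 1) The judge learns to call real data real and generated data fake.
    fake = G(torch.randn(BATCH, NOISE)).detach()
    loss_d = bce(D(real), torch.ones(BATCH, 1)) + bce(D(fake), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) The generator learns to make the judge say "real".
    fake = G(torch.randn(BATCH, NOISE))
    loss_g = bce(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("final generator loss:", float(loss_g))
```

The lock-step competition is the design trick: each network’s improvement becomes the other’s harder training signal, which is why the generated videos keep getting more plausible.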