A professor in China builds a brain-machine interface that keeps humans fully in charge.
How do you teach robots to dance? Changle Zhou thinks he can do it by getting the machines to read our minds.
In an experiment that began last year, Zhou, a computer scientist at Xiamen University in China, invited six people to sit at a screen and watch videos of a humanoid robot moving its arms. Each participant wore an electrode cap that recorded signals from their brain. That let the researchers analyze changes in the activity of "mirror neurons," which fire both when you're performing certain actions and when you're observing others perform them.
For the next few years, Zhou and his graduate students will gather more and more of this data from human brains. Eventually, he says, just by thinking about what they want a robot to do, people will be able to instruct machines to make the movements necessary to boogie.
But Zhou’s real objective isn’t just to get robots to fit in on the dance floor. The long-term goal of this research is to figure out the optimal way for humans and machines to communicate at the speed of thought.
Zhou is one of many people trying to develop brain-machine interfaces. But while some projects aim only to upgrade the way we interact with computers (letting people type and move a cursor on a screen through mind control, for example), Zhou is also trying to upgrade the computer in the process. It's a way to get machines to think more like we do and become much more useful.
However, he’s careful to note that his vision for human-computer connections stops short of the one held by people known as transhumanists, who think an upward trajectory of machine intelligence should bring about computers that are adept and perceptive enough to serve as the substrate of human minds. Transhumanists hope to fully merge with these machines, whether it’s to enhance their own biological brains or, better yet, to become immortal. “Why should we be restricted minds?” says Ben Goertzel, a Hong Kong-based AI researcher and chair of Humanity+, an organization that envisions liberating humanity from the constraints of biology using technology.
Zhou doesn’t buy it. Unless advanced quantum computers somehow change everything, he expects machines will remain inferior to human brains — even if they can plug in more directly to us.
Zhou has spent the past 30 years trying to build artificial intelligence that resembles human intelligence as closely as possible. He wanted machines that could understand metaphors, compose music and poems, and perhaps even express emotions. But even though he and others have designed algorithms that can make computers write, draw, and carry out other individual aspects of human creativity, he couldn’t get past the realization that machines can’t have what he considers the highest form of intelligence, something akin to Zen enlightenment. The algorithms that write songs and poems are not able to actually experience the beauty of these works.
Zhou believes that what distinguishes the human mind from other intelligent systems is consciousness of internal experiences, or mental self-reflection. He points to our own subjective experiences of sensory inputs, which are known as qualia. “Humans can feel a sense of happiness because of qualia, not because of high IQ,” says Zhou. “Machines do not have this ability.”
That’s why he started thinking about a hybrid system: a brain-machine interface that could draw on “the intelligence of both people and machines,” says Zhou. “This is the future path.”
In this approach, qualia would still be left to the humans. In his dancing robot project, a biological brain experiences the charm of the dance, and the robot "brain" is responsible only for carrying out the task. He envisions a slew of applications in which people get enjoyment from seamlessly making a machine do things that they don't have the physical capacity to perform. But it's not a two-way street. The robot that reads minds is merely a surrogate for a person's imaginary actions, and only the human brain gets to have internal experiences.
During a transhumanism conference held by Humanity+ in Beijing recently, Goertzel described ideas that might sound roughly similar. He envisions a machine whose intelligence would be as agile and nimble as a human's. He is starting with a decentralized network of AI algorithms called SingularityNET, to which independent developers from around the world can contribute. Eventually, Goertzel imagines, it will evolve into a "self-growing and self-pruning" human-level AI. If we merge our minds with it, we would create a "superhuman AI mind," says Goertzel.
In that vision, superhuman AI will be the culmination of some ingenious computer engineering, and then humans will want to link their physical bodies with it. But to Zhou, a human-machine collaboration comes first and stays primary. He sees it as the means of creating the best possible AI.
You complete me
There are a few different tools for measuring brain activity, and the invasive ones aren't likely to attract many volunteer research subjects. So Zhou's team uses the electroencephalogram, or EEG. The people controlling the robots attach electrodes to their scalps by wearing a cap, and these electrodes measure the strength of the electric fields coming from the brain. To make these signals sharp enough for a computer to pick up and analyze, a white gooey conductive gel is injected into the space between scalp and electrode.
The EEG doesn't actually provide neuron-level resolution. Instead, Zhou's group uses it to detect very brief oscillations in brain waves as someone imagines and observes certain actions. These signals are good enough to tell a robot whether to move its left or right arm and how high. But they're not clear enough for the more complex movements that make up the rich repertoire of dance styles, so those still need to be programmed in advance. Nonetheless, training a robot this way can convey aesthetics that are very difficult to write into code, says Zhou.
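The article doesn't describe Zhou's decoding pipeline, but a standard motor-imagery decoder rests on the principle sketched here: imagining a movement of one arm suppresses the mu rhythm (roughly 8–12 Hz) over the opposite motor cortex. The sketch below is a hypothetical minimal version, assuming two synthetic channels standing in for electrodes C3 and C4 and a 250 Hz sampling rate; it is an illustration of the idea, not Zhou's actual method.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Mean spectral power in the mu band (8-12 Hz), via FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

def classify_imagery(c3, c4, fs=FS):
    """Guess the imagined arm from mu-band power asymmetry.

    Imagining a right-arm movement suppresses mu power over the left
    motor cortex (electrode C3), and vice versa, so the hemisphere
    with LESS mu power points to the imagined arm.
    """
    return "right" if band_power(c3, fs) < band_power(c4, fs) else "left"

# Synthetic two-second demo: mu rhythm suppressed on C3, strong on C4
t = np.arange(2 * FS) / FS
rng = np.random.default_rng(0)
c3 = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
c4 = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
print(classify_imagery(c3, c4))  # suppressed mu on C3 -> "right"
```

Real systems add bandpass filtering, spatial filtering, and a trained classifier on top, but the left-versus-right decision the article mentions reduces to exactly this kind of asymmetry comparison.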
There are many challenges along the way. Tianjian Luo, one of Zhou's PhD students, is training an algorithm to extract high-quality signals from noisy EEG readings. That could make it possible for a person controlling the robot to use lightweight equipment rather than being tethered to bulky wires and computers.
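Luo's algorithm isn't detailed in the article, but the problem it attacks can be illustrated with the simplest classical remedy: averaging repeated trials. Because the noise in each recording is largely independent while the underlying signal repeats, averaging N trials cuts the noise power roughly N-fold. The sketch below uses synthetic data and an assumed 250 Hz sampling rate; it demonstrates the signal-to-noise problem, not Luo's specific technique.

```python
import numpy as np

FS = 250
t = np.arange(FS) / FS
clean = np.sin(2 * np.pi * 10 * t)  # the "true" 10 Hz rhythm we want

def snr_db(reference, noisy):
    """Signal-to-noise ratio, in decibels, of a noisy recording."""
    noise = noisy - reference
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

# Fifty noisy recordings of the same imagined movement
rng = np.random.default_rng(1)
trials = clean + 1.0 * rng.standard_normal((50, t.size))

single = trials[0]
averaged = trials.mean(axis=0)  # independent noise partly cancels out

print(f"single trial: {snr_db(clean, single):.1f} dB")
print(f"50-trial average: {snr_db(clean, averaged):.1f} dB")
```

Averaging only works when you can afford many repetitions; a robot controlled in real time cannot wait for fifty trials, which is why learned denoising of single trials, the kind of problem Luo is working on, matters for lightweight, untethered equipment.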
Sitting in his cubicle in the lab at Xiamen University, Luo rattles off a number of ways these interfaces could improve people’s lives. Rehabilitation for stroke patients. Controlling prosthetic limbs for amputees. Safer long-distance control of equipment stationed in dangerous areas.
Transhumanists see something even bigger from work like this, though: a new type of consciousness. Not only will people be linked to computers, they’ll be linked to the minds of other people. “We would share experience,” says Natasha Vita-More, the executive director of Humanity+. “It would be very empathetic. Once we are connected more electronically, we will get each other.”
If the transhumanist ideal, or even something simpler, comes to pass, it's not clear how much it would expand human possibility in China or any other country that extensively uses computers to conduct mass surveillance and maintain social control. A Chinese project dubbed "Dazzling Snow" aims to build an omnipresent video surveillance network by 2020, covering all corners of China's public spaces, a reflection of how consistently the state has placed security above individual liberty. To combat toilet paper theft in public restrooms, the Beijing city council has deployed dispensers equipped with face-recognition technology that supply a limited amount to each person. Given that the Internet has been tamed by the Chinese government into a tool for meeting its social and economic goals, could human-machine mind melds strengthen that power?
Zhou says it’s counterproductive to brood over the potentially dystopian uses of brain-machine interfaces. For one thing, he says, no one is close to extracting a person’s complex, private thoughts by analyzing brain signals. But more broadly, he shrugs off such fears by using a knife as an analogy. Nobody says we shouldn’t make knives because they could be used to kill people. “You’ve got to educate people to use these technologies in the right way,” he says in his office overlooking his idyllic campus, dotted by phoenix palms. “And soothe the human heart.”