What Do Neurons and Snowflakes Have to Teach Us About AI?
The fact that no two are alike could be the key to designing AIs that can adapt to change.
The tens of billions of neurons that make up the human brain are like snowflakes in many ways: They’re small, they clump together into white masses, and no two are alike. Identical neurons, like identical snowflakes, are simply not found in nature.
Yet, according to computational neuroscientist Daniel Goodman of the Intelligent Systems and Networks group at Imperial College London, most AI systems that are inspired by and designed to mimic the brain are built upon indistinguishable manmade neurons—a very abstract, homogeneous, and, he would say, limiting approach. “People always set up these artificial networks so that all the neurons are identical, and the brain is not like that. Every neuron is different,” he says.
Goodman claims that just as the diversification of cells within our brains is critical for humans to learn, so will diversifying the “cells” within brain-inspired AIs improve their ability to learn. To back this up, his team recently showed that introducing variation by slightly tweaking every artificial neuron in an AI system improves a simulated neural network’s learning accuracy by up to 20 percent. Published in Nature Communications, the study demonstrates that mimicking the natural heterogeneity observed in animal brains, which plays an active and essential role in allowing animals to learn and adapt to changing environments, can improve memory and information capture, enriching an AI’s set of functions.
Getting better at playing Pong
Evolution has sculpted our brains over millions of years to learn, adapt, and do everything they do, and one of the ways it shows is that nature-made brains are much better at learning than artificial neural networks.
To illustrate, Goodman invokes the revolutionary early-’70s arcade game Pong. “You’ve got the two bats moving up and down, and the ball bounces between them. An artificial neural network [can be trained] to play this game perfectly, better than a human,” he says. “But if you move the bats one pixel closer to each other, it suddenly can’t play it because it’s trained on an exact version of the game and can’t handle any slight deviation from that.”
No human would ever have that problem, Goodman says, and we can attribute that to our better ability to learn and adapt—and ultimately to our neurons themselves. The diversity and richness of those cells, he believes, is what enables us to learn more robustly than AI when something changes.
Attempting to replicate one of the most striking features of the biological brain in machines, Goodman has been studying how highly connected networks of neurons in the mammalian brain communicate via precisely timed, discrete electrical impulses, called “spikes,” which are radically different from conventional digital and analog computation.
By injecting heterogeneity into AI systems based on artificial neurons, making them more closely resemble the spiking neural networks in our own brains, Goodman’s team managed to improve an AI system’s ability to learn tasks of real-world difficulty, like voice recognition—the ability of a machine or program to receive and interpret dictation or spoken commands.
Specifically, by adding variation in how quickly simulated neurons respond to and forget their inputs (a timescale technically known as a “time constant”), the artificial neural networks became better able to learn tasks with an essential time component—such as discerning numbers spoken in succession—but did not improve on tasks dependent on spatial recognition.
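To make that mechanism concrete, here is a minimal sketch in Python of the kind of setup involved: a single layer of leaky integrate-and-fire neurons in which every cell either shares one time constant or draws its own from a skewed distribution. This is not the team’s published code; the model, the gamma-distribution parameters, and names like lif_layer are illustrative assumptions.

```python
import numpy as np

def lif_layer(inputs, tau, dt=1.0, v_th=1.0):
    """Simulate one layer of leaky integrate-and-fire neurons.

    inputs: (timesteps, n_neurons) array of input current per step
    tau:    (n_neurons,) array of per-neuron membrane time constants
    Returns a (timesteps, n_neurons) array of 0/1 spikes.
    """
    v = np.zeros(inputs.shape[1])   # membrane potentials
    spikes = np.zeros_like(inputs)
    decay = np.exp(-dt / tau)       # per-neuron leak factor
    for t in range(inputs.shape[0]):
        v = decay * v + inputs[t]   # leaky integration of input
        fired = v >= v_th           # crossing the threshold emits a spike
        spikes[t] = fired
        v[fired] = 0.0              # reset fired neurons
    return spikes

rng = np.random.default_rng(0)
n, steps = 100, 200
drive = rng.normal(0.05, 0.1, size=(steps, n))

# Homogeneous network: every neuron shares the same time constant.
tau_homog = np.full(n, 20.0)

# Heterogeneous network: time constants drawn from a skewed (gamma)
# distribution, loosely echoing the spread seen in biological
# recordings; the exact parameters here are assumptions.
tau_hetero = rng.gamma(shape=3.0, scale=7.0, size=n)

for name, tau in [("homogeneous", tau_homog), ("heterogeneous", tau_hetero)]:
    s = lif_layer(drive, tau)
    print(f"{name}: {int(s.sum())} total spikes, "
          f"tau range {tau.min():.1f}-{tau.max():.1f} ms")
```

In a full experiment, the spiking layer’s output would feed a trainable readout, and the time constants themselves can be treated as learnable parameters, which is closer in spirit to what the study explored.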
Curiously, the distribution of time constants that improved the AI system matched those observed in massive databases of human and animal neuron recordings. Why is the optimal heterogeneity similar to what is seen in nature? Rather than being a mere byproduct of noisy biological processes, there could be a universal distribution of time constants, ingrained into the fabric of the cosmos, that plays an active and important role in allowing intelligence to learn in changing environments.
Manmade brains
Yet neuroscientist Partha Mitra, who studies both animal brains and AI at Cold Spring Harbor Laboratory, is skeptical that the key to recreating brains is simply the variability in time responses of the individual cells themselves. Instead, he thinks the path to improving AI functionality will be based on better mimicking how mammalian brain cells are arranged.
Since all electronic circuits are made up of the same components, Mitra says, their arrangement is what matters most. “It’s really the way the system is organized.”
“You can’t take a radio circuit and put it into a washer-dryer and expect it to work,” Mitra says. “The difference between the radio and your washer-dryer is not so much what the components are but how they are wired. It’s a mistake, I think, to say I have a better computer because we have more types of transistors.”
But Goodman thinks that different kinds of variation can give an AI system advantages in learning how to solve particular tasks. Specific tweaks can help an AI resolve events in time, whereas others, like the organizational differences Mitra is interested in, may help an AI pick out different features presented simultaneously in space.
Human brains are really messy and complicated, he says, and although we may not need to mimic all of that complexity to make a useful and efficient AI, some details, such as neuron heterogeneity, probably shouldn’t be ignored.
There is a third element of biological brains that may improve AIs in the future, according to both Mitra and Goodman. They predict we’re going to see lots of manmade brains in the coming years that aren’t solely based on a general-purpose computer, but are instead built from physical artificial neurons (made from silicon); such machines are called neuromorphic systems. These systems would emulate another major feature of natural neural networks: plasticity—the brain’s ability to change its own wiring through the growth and reorganization of the physical connections between cells.
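To give a loose sense of what plasticity means computationally, below is a toy Hebbian update rule in Python, a classic textbook model in which connections between co-active neurons are strengthened while unused ones decay. The learning rate, decay term, and network sizes are illustrative assumptions, not a description of what any neuromorphic chip actually implements.

```python
import numpy as np

def hebbian_step(w, pre, post, lr=0.01, decay=0.001):
    """One Hebbian plasticity step: strengthen connections between
    neurons that fire together, and let all weights slowly decay."""
    w = w + lr * np.outer(post, pre)   # "fire together, wire together"
    w = w - decay * w                  # slow forgetting keeps weights bounded
    return w

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.1, size=(4, 3))           # 3 inputs -> 4 outputs

for _ in range(100):
    pre = (rng.random(3) < 0.5).astype(float)   # random input spikes
    post = (w @ pre > 0.5).astype(float)        # simple threshold response
    w = hebbian_step(w, pre, post)

print(np.round(w, 2))   # wiring has drifted toward co-active pairs
```

The point of the toy is structural: the network’s “wiring” changes as a function of its own activity, with no external retraining step, which is the property that could let a neuromorphic system keep adapting after deployment.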
Plasticity may be an essential ingredient for building autonomous AI systems that can cope with the real world, because it’s how the human brain adapts to new and unfolding circumstances, allowing people to recognize and avoid danger, adjust motor activity through sensory feedback, optimize their reward-seeking behaviors, and learn the hard lessons of life. In the same way, the plasticity of artificial neuronal interconnects could help AIs adapt to changes in a virtual world, like the Pong paddles being moved a pixel.
Or a different snowflake falling.