I think, therefore I Pong
By navigating neural pathways, cognitive scientist Nicholas Hatsopoulos helps paralyzed patients turn thought into action.
Nicholas Hatsopoulos’s electrode-array device allows disabled patients to control a computer cursor—and perhaps someday robotic limbs—entirely by thought.
Imagine your computer could read your mind. Not in a nefarious, Big Brother way, but as an active helper. You think of a word, it appears on screen. You think “click,” and up pop Google results. For average users, this computer would redefine multitasking. For people with motor disabilities, it could translate thoughts normally associated with muscle movement to control a robotic prosthesis or a power wheelchair.
A brain-computer link that translates thoughts isn’t farfetched, says Nicholas Hatsopoulos, assistant professor of organismal biology & anatomy and a member of the Center for Integrative Neuroscience & Neuroengineering Research, a collaboration between Chicago and the Illinois Institute of Technology. Already Hatsopoulos has demonstrated that laboratory monkeys can move a computer’s cursor by thought alone.
Here’s how it works: researchers implant a silicon chip in the motor cortex of a macaque monkey trained to move a cursor. The chip detects and records neural activity related to the arm and paw movement, and the data is used to program a computer to translate neural signals directly to cursor movement—to, in effect, bypass the arm. The chip then transmits neural signals to the computer, so the monkey can move the cursor without moving a muscle.
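The article doesn’t spell out the decoding math, but the core idea fits in a few lines. Below is a minimal sketch, assuming the simplest possible translator: a linear map, fit by ordinary least squares, from each electrode’s spike counts to cursor velocity. All names and numbers are illustrative, the synthetic data stands in for real recordings, and the decoders used in the actual trials are more sophisticated.

```python
# A toy linear decoder: map binned spike counts from N electrodes to
# 2-D cursor velocity. Ordinary least squares is an illustrative
# stand-in for the real (more sophisticated) decoding algorithms.
import numpy as np

rng = np.random.default_rng(0)

n_channels = 100   # one signal per electrode in the 10x10 array
n_samples = 5000   # training bins recorded while the arm moves

# Synthetic stand-in data: in the real experiment these come from the
# implanted chip (spike counts) and motion tracking (arm velocity).
true_weights = rng.normal(size=(n_channels, 2))
spikes = rng.poisson(lam=3.0, size=(n_samples, n_channels))
velocity = spikes @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# "Program the computer": fit weights that translate neural activity
# directly into cursor movement, in effect bypassing the arm.
weights, *_ = np.linalg.lstsq(spikes, velocity, rcond=None)

# Later, a fresh burst of neural activity moves the cursor on its own.
new_spikes = rng.poisson(lam=3.0, size=(1, n_channels))
cursor_velocity = new_spikes @ weights
print(cursor_velocity)
```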
“His arms are at rest, by his sides,” says Hatsopoulos. The subject can still use his arm but doesn’t need to.
Begun two years ago, human trials stemming from work Hatsopoulos did as an assistant professor at Brown University (he came to Chicago in 2002) are producing similar results. The subject, a quadriplegic adult, moved a computer cursor entirely by thought. He even played Pong. By simply thinking about moving his arm, he triggered neurons associated with arm movement, which the chip detected and transmitted. Cyberkinetics Neurotechnology Systems, a Massachusetts company Hatsopoulos cofounded—and on whose board he still serves—is conducting the trials and has commercialized the silicon chip and related technology under the brand name BrainGate.
This brain-computer interface is a by-product of a more fundamental quest: “to understand,” he says, “how large ensembles of neurons in the brain work together to create behavior.” While early investigations into neural activity looked at single neurons (many researchers still do), Hatsopoulos notes that human brains have oceans of neurons, around 10¹⁰, with millions focused on any given function—like moving the arm, for example. “My perspective,” Hatsopoulos explains, “is that you really need to look at more than just one cell at a time.”
Measuring four millimeters square and packed with electrodes, the silicon chip translates neural signals into movement.
The Cyberkinetics chip makes such multicell research possible. Only four millimeters square and weightless on a fingertip, it’s shaped to form a ten-by-ten matrix of hair-thin electrode pins. For Hatsopoulos’s research and Cyberkinetics’ human trials, it’s surgically implanted in the area of the motor cortex that controls arm movement. As neurons fire off electrical charges to communicate with one another, the electrodes detect when and where they fire. Tiny wires connect the array to a device that extends into the scalp, which in turn connects to a computer that records the signals.
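How do the electrodes detect “when and where” neurons fire? The article doesn’t give the signal chain, but a common approach in systems like this is threshold crossing: flag a spike whenever a channel’s voltage swings well past its noise floor. The sketch below illustrates that idea under assumed numbers (sampling rate, threshold rule, synthetic noise standing in for recorded voltage); it is not the device’s actual specification.

```python
# A sketch of threshold-crossing spike detection, one common way such
# arrays turn raw voltage traces into firing events. The sampling rate
# and threshold rule are illustrative assumptions, not the device spec.
import numpy as np

rng = np.random.default_rng(1)
fs = 30_000                                   # samples per second (assumed)
raw = rng.normal(scale=5.0, size=(100, fs))   # 10x10 array -> 100 channels, 1 s
                                              # (synthetic noise, no real spikes)

# Flag a "spike" wherever the voltage dips below -4x the channel's
# estimated noise level (a widely used rule of thumb).
noise = np.median(np.abs(raw), axis=1, keepdims=True) / 0.6745
spike_mask = raw < -4.0 * noise

# When and where each event occurred: channel index and sample time.
channels, times = np.nonzero(spike_mask)
print(f"{len(times)} threshold crossings across {raw.shape[0]} electrodes")
```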
By itself, the chip makes for a relatively crude mind-reader. “It’s a little bit depressing,” says Hatsopoulos, “because you think, well, we have 100 electrodes, it’s two orders of magnitude more than what people have done.” Even though those electrodes pick up 100 signals from individual neurons, there are 1.6 million neurons under each chip. Ideally the array would detect all of them. “So we’re highly under-sampling” the neural activity.
Complex algorithms (Hatsopoulos has patented two of them) compensate with predictive extrapolations. “The good news is, despite under-sampling, we can do a remarkably good job of predicting motion in the arm from a small collection of samples. I think the reason it works is that, to a certain degree, the cells are redundant.”
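That redundancy argument can be made concrete. In the toy model below, a thousand simulated cells all carry noisy copies of the same few underlying movement signals, and a decoder fit on just a small subset of them predicts motion nearly as well as one fit on all of them. This is a synthetic illustration of the under-sampling point, not Hatsopoulos’s patented algorithms.

```python
# Why under-sampling can still work: if many cells carry overlapping
# (redundant) information, a decoder fit on a small subset of channels
# predicts motion almost as well as one fit on all of them. Everything
# below is a synthetic illustration.
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_samples = 1000, 4000

# Redundancy: every cell's rate is a noisy view of the same few latent
# movement signals.
latent = rng.normal(size=(n_samples, 3))           # underlying arm command
mixing = rng.normal(size=(3, n_cells))
rates = latent @ mixing + rng.normal(scale=2.0, size=(n_samples, n_cells))
arm_velocity = latent @ rng.normal(size=(3, 2))    # 2-D motion to predict

for n_recorded in (10, 100, 1000):                 # crude -> full sampling
    subset = rates[:, :n_recorded]
    w, *_ = np.linalg.lstsq(subset, arm_velocity, rcond=None)
    pred = subset @ w
    resid = ((arm_velocity - pred) ** 2).sum()
    total = ((arm_velocity - arm_velocity.mean(0)) ** 2).sum()
    print(f"{n_recorded:4d} cells sampled -> R^2 = {1 - resid / total:.3f}")
```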
Despite its limitations, the electrode array has provided evidence that Hatsopoulos’s central proposition—that neurons are best studied collectively—is spot on. From readings taken over a 20-millisecond time frame, he’s produced images that suggest the firing of neurons propagates over space and time, like a wave; they work together. In his images, firing neurons show up red. To a layperson, they look like a radar animation of a passing thunderstorm. In the first image, a small red point appears at the left border; in succeeding images the red area expands and moves rightward, gradually disappearing. “What it means in terms of behavior, we’re not sure,” says Hatsopoulos. “But the implications are important to future study, and a single-cell approach would never reveal such a pattern.”
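For intuition, here is a toy rendering of that wave: on a ten-by-ten grid of electrodes, activity starts at the left edge and sweeps rightward over a 20-millisecond window, much as the red region does in the images. The geometry and time course are invented for illustration.

```python
# A toy rendering of the propagating-wave picture: on a row of a 10x10
# electrode grid, activity starts at the left edge and sweeps rightward
# over a 20 ms window (one frame every 5 ms). Purely illustrative.
import numpy as np

grid = 10
xs = np.arange(grid)
for t_ms in (0, 5, 10, 15, 20):
    wavefront = t_ms / 20 * (grid - 1)             # wave position, in columns
    firing = np.exp(-((xs - wavefront) ** 2) / 4)  # activity per column
    row = "".join("#" if f > 0.5 else "." for f in firing)
    print(f"t={t_ms:2d} ms  {row}")                # '#' ~ the red regions
```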
Hatsopoulos’s current applied research concerns proprioceptive feedback. To understand it, close your eyes and move your hand. You can’t see it, but you know where it is. Even if someone else moved your hand, you’d know its location in space. Interpreting that feedback isn’t critical to controlling two-dimensional movement, like moving a cursor, but it is vital to controlling a robotic prosthesis through three dimensions. It could also help patients with Lou Gehrig’s disease, who lose the ability to actuate their muscles but retain feedback capacity.
To study feedback, Hatsopoulos is essentially reversing his original process. “In this case it’s the outside world that has to come into the brain,” he explains. The electrode array, implanted in a different cerebral location, detects signals returned to the brain from a functioning arm.
Once that research is hashed out, the next step is only a matter of connecting a robotic device that generates feedback. “Robots already have sensors that report joint angles, velocity—all that stuff is there.” And unlike reading minds, Hatsopoulos says, “that’s easy.”