Mind over Matter

Todd P. Coleman

Todd Coleman is one of the few guys who can teach you a lesson in both Bernoulli distributions and basketball–he’s a theory master but also a hands-on hacker. When I took his probability and statistics course sophomore year, it was clear that he wasn’t your average professor. His research group was publishing deep work in information theory, then dissecting cat brains and hacking code to apply it on the weekend. It seemed so much more interesting and multidisciplinary than what the other professors were doing. So when he invited me to one of his group meetings, I knew I had to see how they played ball.

The big new project in his group was Brain-Machine Interfacing (BMI): taking EEG recordings from a person’s scalp to achieve mind control of external devices like an unmanned aerial vehicle or a wheelchair. It relies on certain parts of the brain becoming more electrically active, with neurons firing more synchronously, when a person focuses on a specific part of their body. By putting EEG sensors on the scalp, a computer can figure out which part of the brain has greater activity and determine which body part the person is focusing on. The computer can then move a separate physical system like a prosthetic leg or an airplane in the sky. It’s as close as you can get to real-life telekinesis!

Somatosensory Map
A motor cortex map showing a projection of the skin’s surface onto the brain according to how motion intent is processed.

Most work in this area was rather crude, with motors or actuators driven in proportion to the brain activity. But Todd was steeped in the field of Cybernetics, where giants like John von Neumann, Claude Shannon, and Norbert Wiener had studied the intersection of information theory, computers, and neuroscience. With this inspiration it was more natural to formalize BMI as an information channel in which noisy bits of information are transmitted from user to computer. By imagining specific body parts in a structured way, a person is crudely communicating a set of discrete choices; this is conceptually the same as the digital signals sent between computers over wires and radio, so the same techniques of compression and error correction might be applied. However, the standard communications channel model omits the approximately noise-free channel back to the user–that is, the user interface display. Incorporating it was interesting from a theoretical perspective and invoked a field called feedback information theory. Applying a technique called posterior matching would lead to significant improvements in the rate of information transfer.
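
To make the posterior matching idea concrete, here is a toy sketch (not the code we actually ran) of Horstein’s scheme, which is posterior matching specialized to a binary symmetric channel with noiseless feedback. The crossover probability, grid size, and number of channel uses below are arbitrary assumptions for the demo.

```python
# Toy sketch of posterior matching (Horstein's scheme) over a binary
# symmetric channel with noiseless feedback. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

p = 0.2          # assumed channel crossover probability
n_uses = 40      # number of channel uses
grid = np.linspace(0, 1, 4096, endpoint=False)   # discretized message interval [0,1)

theta = rng.random()                              # the "message" is a point in [0,1)
posterior = np.full(grid.size, 1.0 / grid.size)   # receiver's belief, known to the sender via feedback

for _ in range(n_uses):
    # Sender: compare the message point to the posterior median and send one bit.
    median = grid[np.searchsorted(np.cumsum(posterior), 0.5)]
    x = int(theta >= median)

    # Channel: flip the bit with probability p.
    y = x ^ int(rng.random() < p)

    # Receiver: Bayes update of the posterior over the message point.
    upper = grid >= median
    if y == 1:
        lik = np.where(upper, 1 - p, p)
    else:
        lik = np.where(upper, p, 1 - p)
    posterior *= lik
    posterior /= posterior.sum()

theta_hat = grid[np.argmax(posterior)]
print(f"true message {theta:.4f}, estimate {theta_hat:.4f}")
```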

Todd had been exploring a slew of user interfaces where discrete choices could communicate any kind of intent, depending on the program. For example, with a wheelchair interface, focusing on the left side of the tongue might turn it 90 degrees, while imagining the right foot might cause it to stop. In another program, these choices could be used to spell out entire sentences by selecting letters and words through an interface similar to Dasher.

Rui Ma flying an unmanned aerial vehicle on the south quad of the University of Illinois with his mind.

From the first meeting, I knew I had to get involved with the BMI project–this wasn’t improving some bound on an abstract function–this was enhancing the basic capability of mankind! Fortunately, Todd had a whole set of challenges just to get the project set up. I was pretty convinced I was invincible with a soldering iron and a compiler, so I jumped in blindly. It began at the beginning: developing the EEG sensors and amplifiers, customizing a data acquisition driver, and designing signal conditioning filters. It all seemed fine until we started looking at the EEG recordings themselves. They were pretty noisy! Meaningful recordings were often dwarfed, by an order of magnitude, by other signals unintentionally added in: the fan in the other room, the fluorescent lights, or even a person blinking. It took a lot of long nights working out the filtering, baseline-drift correction, and tracking code to clean these things out just to get an estimate of what the EEG really was on the scalp.

Noisy EEG Recordings
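
As a rough illustration of that cleanup (a minimal sketch with an assumed sampling rate and frequency band, not the original pipeline), slow drift and out-of-band interference can be knocked down with a simple band-pass filter:

```python
# Illustrative EEG cleanup sketch: remove slow baseline drift with a
# band-pass filter that keeps the 8-30 Hz range carrying most
# motor-imagery information. Sampling rate and cutoffs are assumptions.
import numpy as np
from scipy import signal

fs = 256.0                                                # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.random.default_rng(1).standard_normal(t.size)    # stand-in for one raw channel
eeg += 50 * np.sin(2 * np.pi * 0.2 * t)                   # exaggerated baseline drift

# 4th-order Butterworth band-pass, applied forward-backward (zero phase,
# offline only; a real-time system needs a causal, low-group-delay filter).
b, a = signal.butter(4, [8.0, 30.0], btype="bandpass", fs=fs)
cleaned = signal.filtfilt(b, a, eeg)
```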

From there we needed to model the brain-scalp-sensor channel and use it to turn the recordings into estimates of the underlying activity in the brain. I started by reproducing the DSP and classification methods that performed best in the Berlin Brain-Computer Interface Competition. In particular, Common Spatial Patterns, Linear Discriminant Analysis, and simple nearest-neighbor methods proved to be quite effective. From these, a probability could be derived for each of the possible body parts the subject might be imagining. Once these were estimated, the information could be passed on to a user interface program that drove the wheelchair or redirected the plane.
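
For the curious, here is a minimal two-class sketch of that kind of pipeline, with CSP spatial filters feeding an LDA classifier. The trial shapes, filter counts, and use of scikit-learn are assumptions for illustration, not the original code.

```python
# Minimal two-class CSP + LDA sketch in numpy/scikit-learn, illustrating
# the competition-style pipeline described above.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(trials_a, trials_b, n_pairs=3):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples), band-passed EEG."""
    cov = lambda trials: np.mean(
        [x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    ca, cb = cov(trials_a), cov(trials_b)
    # Generalized eigenproblem: directions that maximize variance for one
    # class while minimizing it for the other.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T                       # (2*n_pairs, n_channels)

def csp_features(W, trials):
    proj = np.einsum("fc,ncs->nfs", W, trials)    # spatially filtered trials
    var = proj.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Hypothetical usage with pre-cut trials for two imagery classes:
# W = csp_filters(left_trials, right_trials)
# clf = LinearDiscriminantAnalysis().fit(
#     np.vstack([csp_features(W, left_trials), csp_features(W, right_trials)]),
#     np.r_[np.zeros(len(left_trials)), np.ones(len(right_trials))])
# probs = clf.predict_proba(csp_features(W, new_trials))   # per-class probabilities
```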

The initial prototype revealed some shortcomings. The literature seemed to under-emphasize the real-time nature of the whole task. Anyone who’s tried a computer mouse with a half-second lag knows it’s nearly impossible to control. Navigating a complicated GUI with a second of delay–by mind power alone–is a heck of a lot harder still! People would play with it if we paid them $20/hr as research subjects, but we would need to improve the experience to get it used beyond grad students and paralyzed patients. Furthermore, the feedback information channel with the user is known to be fundamentally limited by latency. By lowering delay throughout our system we might achieve an increase in the information transfer rate (ITR) communicated to the computer. For these reasons, I spent days wringing every last millisecond out of the signal acquisition front-end, minimizing the group delay of filters, and writing all the code in C. In the end, latency was in the tens of milliseconds rather than seconds.

Tip: to remove 60 Hz noise caused by AC power lines, one might use a digital notch filter. However, this kind of filtering has high group delay and introduces significant latency between the input and output. Instead, an adaptive filter and PLL can be used to track the 60 Hz signal and subtract it out without any latency at all! 
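
A minimal sketch of that approach, using a two-weight LMS canceller with a synthesized 60 Hz reference (one common way to do it, not necessarily the exact filter we used):

```python
# Latency-free 60 Hz removal sketch: a two-weight LMS adaptive canceller
# locks onto the amplitude and phase of a synthesized 60 Hz reference and
# subtracts it sample by sample, with no group delay.
import numpy as np

def cancel_60hz(x, fs, mu=0.01):
    n = np.arange(x.size)
    ref_c = np.cos(2 * np.pi * 60.0 * n / fs)    # in-phase reference
    ref_s = np.sin(2 * np.pi * 60.0 * n / fs)    # quadrature reference
    w = np.zeros(2)                              # adaptive weights (track amplitude/phase)
    out = np.empty_like(x)
    for i in range(x.size):
        est = w[0] * ref_c[i] + w[1] * ref_s[i]  # current estimate of the hum
        e = x[i] - est                           # cleaned sample, available immediately
        w += 2 * mu * e * np.array([ref_c[i], ref_s[i]])  # LMS weight update
        out[i] = e
    return out

# Hypothetical usage: cleaned = cancel_60hz(raw_channel, fs=256.0)
```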

The initial results were surprising, clearly outperforming the literature we were trying to reproduce. Artificially adding delay dropped performance back to typical levels, so the latency work really did appear to be what was helping us. It was a nice result, but we didn’t want to stop there. Could we push the ITR further? There seemed to be a number of other areas to improve, and the heuristic measures people were using for brain activity looked like a ripe one. Here, Todd suggested “getting back to first principles”: modeling things more physically and posing it all as a Bayesian inference problem.

We built a generative model of the EEG signals from a basic template of the neurons and the scalp. This could then be used to calculate the likelihood of a given EEG signal conditioned on (i.e., assuming) each of the possible types of body-part imagery. The best guess for a body part was the one that made the actual EEG measurements most likely.
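
In simplified form (a Gaussian stand-in for the physical generative model, purely for illustration), that computation looks like this:

```python
# Toy sketch of the "likelihood of the EEG given each imagery class" step.
# Each class is modeled as a Gaussian over a feature vector, which is a
# simplification of the physical generative model described above.
import numpy as np
from scipy.stats import multivariate_normal

def class_posteriors(features, class_means, class_covs, prior):
    """features: (d,) feature vector; class_means/covs: per-class Gaussian params."""
    lik = np.array([multivariate_normal.pdf(features, m, c)
                    for m, c in zip(class_means, class_covs)])
    post = prior * lik
    return post / post.sum()      # P(imagined body part | EEG features)
```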

We also wanted to include our prior knowledge about the temporal evolution of a user’s cognitive state. The idea was that the person kept imagining the same body part until the feedback stimulus changed, at which point they would switch to another body part with various probabilities. We could impose this structure with a special kind of Hidden Markov Model, or HMM. It was a little different from a traditional HMM because the transitions happened in response to the user interface itself. Working with post-doc Rui Ma, we designed a graphical model representation that captured this. Writing a fast belief-propagation solver for this model was my first introduction to the wonderful world of factor graphs.
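
The flavor of it, in a stripped-down sketch with illustrative names and shapes (not our actual solver): a forward filtering recursion in which the transition matrix at each step is chosen by the user-interface event shown at that step.

```python
# Sketch of the stimulus-dependent HMM idea: forward (filtering) recursion
# where the transition matrix at each step depends on the UI event.
import numpy as np

def forward_filter(obs_lik, transitions, stimuli, prior):
    """
    obs_lik:     (T, K) likelihood of each EEG window under each of K intents
    transitions: dict mapping a stimulus label to a (K, K) transition matrix
    stimuli:     length-T sequence of UI events (what the display showed)
    prior:       (K,) initial belief over intents
    """
    belief = prior.copy()
    beliefs = []
    for t in range(obs_lik.shape[0]):
        belief = transitions[stimuli[t]].T @ belief   # UI-dependent transition
        belief = belief * obs_lik[t]                  # fold in the EEG evidence
        belief /= belief.sum()
        beliefs.append(belief.copy())
    return np.array(beliefs)                          # P(intent_t | data up to time t)
```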

A factor graph modeling the dynamic transitions of user intent.

With factor graphs, we could model every part of the system: from the timing at which choices are mentally generated, to how those choices affect neurons and the electrical signals they send to the sensors, to the noise that gets added along the way. Once all of this is captured in a factor graph, the precise calculation of the likelihoods of different choices is simply a matter of running the sum-product algorithm–a message-passing algorithm that computes the desired probabilities in an elegant and computationally efficient way.
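
On a chain-structured graph, sum-product reduces to the familiar forward-backward recursion; a tiny illustrative version (again, not our solver) is:

```python
# Sum-product on a chain-structured factor graph (forward-backward):
# messages flow in both directions and their product gives the marginal
# of each hidden variable. Purely illustrative.
import numpy as np

def chain_sum_product(obs_lik, trans, prior):
    """obs_lik: (T, K) evidence factors; trans: (K, K) pairwise factor; prior: (K,)."""
    T, K = obs_lik.shape
    fwd = np.zeros((T, K))
    bwd = np.ones((T, K))
    fwd[0] = prior * obs_lik[0]
    fwd[0] /= fwd[0].sum()
    for t in range(1, T):                        # left-to-right messages
        fwd[t] = (trans.T @ fwd[t - 1]) * obs_lik[t]
        fwd[t] /= fwd[t].sum()
    for t in range(T - 2, -1, -1):               # right-to-left messages
        bwd[t] = trans @ (obs_lik[t + 1] * bwd[t + 1])
        bwd[t] /= bwd[t].sum()
    marg = fwd * bwd
    return marg / marg.sum(axis=1, keepdims=True)   # per-step marginals
```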

When we plugged it all in and turned it on, the performance really floored us. The system worked significantly better than before. In fact, after formally experimenting with subjects, we found we were nearly doubling the highest BMI information transfer rate ever reported!

From there we decided to work on making the entire system more compact and useful. We began collaborating with John Rogers and his company MC10 to develop thin, tattoo-like EEG sensors. These sensors do not require gel the way traditional electrodes do and are entirely self-contained. The sensors amplify the signal and send it wirelessly to a computer (eventually a phone in a person’s pocket) that can process the signal using our algorithms and then act as the gateway to the rest of the world.

Currently this technology is being commercialized by Todd Coleman and MC10.  If they are able to navigate the challenges of reliability and power and bring the product to market, it will open up a whole new way for us to interact with the world. I hope that they succeed. It’s a project that is sure to remain on my mind.

 

Papers

C. Omar, A. Akce, M. Johnson, T. Bretl, R. Ma, E. Maclin, M. McCormick, and T. P. Coleman, “A Feedback Information-Theoretic Approach to the Design of Brain-Computer Interfaces”, International Journal of Human-Computer Interaction, special issue on “Current Trends in Brain-Computer Interface (BCI) Research and Development”, Volume 27, Issue 1, pages 5-23, January 2011. [PDF]

Martin McCormick, Rui Ma, and Todd Coleman, “An Analytic Spatial Filter and Hidden Markov Model for Enhanced Information Transfer Rate in EEG-Based Brain Computer Interfaces”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 14-19, 2010, Dallas, TX. [PDF]

D.H. Kim, N. Lu, R. Ma, Y.S. Kim, R.H. Kim, M. McCormick, J. Wu, et al., "Epidermal Electronics," Science, vol. 333, no. 6044, pp. 838, 2011. [PDF]

R. Ma, D. Kim, M. McCormick, T. P. Coleman, and J. Rogers, “A Stretchable Electrode Array for Non-invasive, Skin-Mounted Measurement of Electrocardiography (ECG), Electromyography (EMG) and Electroencephalography (EEG)”, IEEE International Conference of the Engineering in Medicine and Biology Society, September 2010. [PDF]

 
