
AI Deep Learning Decodes Hand Gestures from Brain Images

Research may lead to noninvasive brain-computer interfaces for the paralyzed.

Source: Geralt/Pixabay

Brain-computer interfaces (BCIs), also known as brain-machine interfaces (BMIs), offer hope to those who have lost the ability to move or communicate, and the pattern-recognition capabilities of artificial intelligence (AI) are accelerating innovation in the field. A new University of California San Diego (UC San Diego) study published in Cerebral Cortex, an Oxford Academic journal, shows how AI machine learning can decode hand gestures from brain images captured with magnetoencephalography (MEG), a noninvasive imaging method.

“Our MEG-RPSnet model outperformed two state-of-the-art neural network architectures for electroencephalogram-based BCI as well as a traditional machine learning method, and demonstrated equivalent and/or better performance than machine learning methods that have employed invasive, electrocorticography-based BCI using the same task,” wrote senior author Mingxiong Huang, co-director of the UC San Diego MEG Center at the Qualcomm Institute, along with researchers Yifeng Bu, Deborah L. Harrington, Roland R. Lee, Qian Shen, Annemarie Angeles-Quinto, Zhengwei Ji, Hayden Hansen, Jaqueline Hernandez-Lucas, Jared Baumgartner, Tao Song, Sharon Nichols, Dewleen Baker, Ramesh Rao, Imanuel Lerman, Tuo Lin, and Xin Ming Tu.

Magnetoencephalography is a noninvasive neuroimaging method for mapping brain activity by measuring the magnetic fields produced by the brain’s electrical currents. MEG enables real-time tracking of brain activation sequences with millisecond time resolution.

Here’s how MEG works: The brain’s electrical activity arises from charged ions flowing within and between neurons. When thousands of neurons are excited together, their combined currents generate a magnetic field outside the head that can be measured. Because these neuromagnetic signals are extremely weak, highly sensitive detectors are required; MEG scanners use superconducting quantum interference device (SQUID) sensors.

For this study, the UC San Diego team used a helmet with an embedded 306-sensor array to detect the magnetic fields produced by the brain’s electrical currents flowing between neurons. Twelve participants wore the MEG helmet while they were cued, in random order, to make a rock, paper, or scissors hand gesture, as in the game Rock, Paper, Scissors. The helmet recorded each participant’s brain activity during the gestures.
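For readers curious about what such recordings look like as data, here is a minimal sketch of how single-trial MEG recordings are commonly loaded and segmented using the open-source MNE-Python library. The file name, trigger channel, event codes, and epoch window below are hypothetical placeholders for illustration, not details from the UC San Diego study.

```python
import mne

# Hypothetical raw MEG recording (file name and event codes are placeholders).
raw = mne.io.read_raw_fif("subject01_rps_raw.fif", preload=True)
raw.filter(l_freq=1.0, h_freq=100.0)           # band-pass filter the continuous data

# Events marking the onset of each cued gesture (codes are assumed, not from the paper).
events = mne.find_events(raw, stim_channel="STI 101")
event_id = {"rock": 1, "paper": 2, "scissors": 3}

# Cut the continuous recording into single-trial epochs around each cue.
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=1.0, picks="meg",
                    baseline=(None, 0), preload=True)

X = epochs.get_data()                          # shape: (n_trials, n_sensors, n_times)
y = epochs.events[:, 2] - 1                    # gesture labels 0, 1, 2
print(X.shape, y.shape)
```

The result is an array of single trials (trials × sensors × time points) of the kind a classifier can learn from.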

The researchers trained a convolutional neural network (CNN), a deep learning algorithm, to classify the gestures from the MEG recordings.
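The study’s MEG-RPSnet architecture is not reproduced here, but as a rough illustration of the approach, the sketch below defines a small convolutional classifier in PyTorch that maps a single trial of 306 sensor time series to three gesture classes. The layer sizes, kernel widths, and 601-sample trial length are assumptions chosen for the example, not the paper’s settings.

```python
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    """Toy 1-D CNN over MEG sensor time series: (batch, sensors, time) -> 3 classes."""
    def __init__(self, n_sensors: int = 306, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_sensors, 64, kernel_size=7, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # collapse the time dimension
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).squeeze(-1))

model = GestureCNN()
trials = torch.randn(8, 306, 601)               # 8 fake single trials, 601 time points each
logits = model(trials)                          # shape: (8, 3) -> rock / paper / scissors scores
print(logits.shape)
```

In practice, such a network would be trained on labeled trials with a cross-entropy loss and evaluated with cross-validation, which is how single-trial accuracies like those reported below are typically estimated.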

“On a single-trial basis, we found an average of 85.56% classification accuracy in 12 subjects,” the UC San Diego researchers reported.

The researchers also identified two specific sensor regions where the AI deep learning model classified gestures about as well as the whole-brain model. “Remarkably, we also found that when using only central-parietal-occipital regional sensors or occipitotemporal regional sensors, the deep learning model achieved classification performances that were similar to the whole-brain sensor model,” the scientists noted.
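To make the regional comparison concrete, restricting a model to a sensor region simply means feeding it only the rows of the trial matrix that correspond to those sensors. The indices below are arbitrary placeholders (real region membership depends on the 306-channel sensor layout), and GestureCNN is the illustrative sketch defined above, not the study’s model.

```python
import torch

# Hypothetical indices of sensors covering one scalp region
# (placeholders; real membership depends on the MEG system's channel layout).
region_idx = list(range(120, 180))

trials = torch.randn(8, 306, 601)               # fake whole-helmet trials: (trials, sensors, time)
regional_trials = trials[:, region_idx, :]      # keep only the regional sensors

# Reuse the illustrative GestureCNN sketch from above with fewer input channels.
regional_model = GestureCNN(n_sensors=len(region_idx))
print(regional_model(regional_trials).shape)    # still (8, 3) gesture scores
```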

By harnessing the pattern-recognition capabilities of AI deep learning trained on noninvasive brain imaging data, the UC San Diego researchers have demonstrated a proof of concept that may one day lead to noninvasive brain-computer interfaces to help the paralyzed and those who have lost the ability to speak.

“Altogether, these results show that noninvasive MEG-based BCI applications hold promise for future BCI developments in hand-gesture decoding,” the scientists concluded.

Copyright © 2023 Cami Rosso All rights reserved.
