Recently, my colleagues and I published a study on decoding language from brain recordings made using functional MRI. Brain decoders are being developed to help restore communication to people who have lost the ability to speak or write. Currently, most brain decoders use recordings from implanted electrodes, and are primarily intended for people with motor system disorders. We hope that eventually our brain decoder can provide a non-invasive option for people with a wide range of communication disorders.

Our study has also generated a lot of discussion around what brain decoding technology could mean for mental privacy. While there is still much work to be done before our brain decoder can be used in practice, it is never too early to think about the ethical issues that can arise from any new technology. We want anyone to be able to participate in the discussion around maximizing the benefits and minimizing the risks of brain decoding technology. To this end, I would like to provide a simple explanation of how brain decoding technology works, what it can and cannot currently do, where my colleagues and I think it could go in the future, and what steps could be taken to help ensure that it is used ethically and appropriately.

In our study, we recorded a participant’s brain activity using functional MRI while they listened to 16 hours of narrative stories over the course of several months. This dataset provides correspondences between phrases in language and the participant’s brain activity patterns. We used this dataset to build a model that takes in any sequence of words and predicts how the participant’s brain would respond when processing those words. To decode new brain recordings, we generated words that make the predictions from our model look as similar as possible to the real brain recordings. We found that the generated words could often describe what the participant heard, saw, or imagined at the time of the recording. For instance, when a participant heard the words “I don’t have my driver’s license yet,” the decoder generated the words “she has not even started to learn to drive yet” — not exact, but pretty close.
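For readers who want a more concrete picture, the loop described above can be sketched in a few lines of Python. This is a toy illustration, not our actual pipeline: the "encoding model" here is just an average of random word vectors, and decoding is shown as ranking a handful of candidate phrases rather than generating words with a language model. The vocabulary, vector size, and noise level are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoding model": each word contributes a fixed random vector, and
# the predicted brain response to a phrase is the average of its word
# vectors. (A real encoding model uses learned language features.)
VOCAB = ["i", "don't", "have", "my", "driver's", "license", "yet",
         "she", "has", "not", "even", "started", "to", "learn", "drive"]
WORD_VECS = {w: rng.normal(size=50) for w in VOCAB}

def predict_brain_response(words):
    """Predict a brain activity pattern for a word sequence (toy model)."""
    return np.mean([WORD_VECS[w] for w in words], axis=0)

def decode(recording, candidates):
    """Return the candidate phrase whose predicted brain response best
    matches the actual recording, scored by correlation."""
    def score(words):
        return np.corrcoef(predict_brain_response(words), recording)[0, 1]
    return max(candidates, key=score)

# Simulate a noisy recording of a participant hearing a phrase, then
# decode it by comparing candidate phrases against the recording.
heard = ["i", "don't", "have", "my", "driver's", "license", "yet"]
recording = predict_brain_response(heard) + rng.normal(scale=0.05, size=50)

candidates = [
    heard,
    ["she", "has", "not", "even", "started", "to", "learn", "to", "drive"],
    ["i", "learn", "to", "drive"],
]
```

Here `decode(recording, candidates)` recovers the phrase that was heard, because its predicted response matches the recording most closely. The real system searches over word sequences proposed by a language model rather than a fixed candidate list, but the matching step is the same idea.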

People often describe these sorts of brain decoders as “mind reading” devices, but this is a vague term that overstates their capabilities. While our brains give rise to our mental processes, we have a limited understanding of how most mental processes are actually encoded in brain activity.

As a result, brain decoders cannot simply read out the contents of a person’s mind. Instead, they learn to make predictions about mental content. A brain decoder is like a dictionary between patterns of brain activity and descriptions of mental content. The dictionary is built by measuring how a person’s brain responds to stimuli like words or images.

However, brain activity is influenced by factors beyond the immediate stimulus, such as emotional states and idle thoughts. So the dictionary can only provide predictions of how a person’s brain would respond to a stimulus. Moreover, it is infeasible to measure how a person’s brain would respond to every possible stimulus, so we predict missing entries in the dictionary based on entries that we have. Prediction processes are inherently imperfect, so a decoder’s prediction of what a person is thinking can be very different from what the person is actually thinking.
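The idea of filling in missing dictionary entries can be illustrated with a small regression sketch. Everything here is hypothetical: random stimulus features, a made-up linear "brain," and ridge regression as one common modeling choice. The point is the basic move: fit a model on measured stimulus-response pairs, then use it to predict the response to a stimulus that was never measured.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each stimulus has a feature vector, and the brain
# response is (unknown to the experimenter) a linear function of those
# features plus noise from mood, idle thoughts, and so on.
n_features, n_voxels = 20, 100
true_map = rng.normal(size=(n_features, n_voxels))

def brain_response(features):
    """Simulated noisy brain response to a stimulus."""
    return features @ true_map + rng.normal(scale=0.5, size=n_voxels)

# "Dictionary entries" we actually measured: 300 training stimuli.
X_train = rng.normal(size=(300, n_features))
Y_train = np.array([brain_response(x) for x in X_train])

# Fill in missing entries by fitting a ridge regression model.
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                    X_train.T @ Y_train)

# Predict the response to a stimulus we never measured, and compare it
# to what the simulated brain actually produces.
x_new = rng.normal(size=n_features)
predicted = x_new @ W
actual = brain_response(x_new)
corr = np.corrcoef(predicted, actual)[0, 1]
```

In this simulation the prediction correlates well with the actual response, but never perfectly: the noise term stands in for all the influences on brain activity that the model cannot see, which is exactly why decoder predictions can diverge from what a person is really thinking.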

This means that brain decoders — whether ours or those that use implanted electrodes — should never be used to make consequential decisions, even with a person’s consent. A cautionary example is the polygraph, which uses physiological signals like pulse to predict whether a person is lying. Polygraphs are inaccurate and easily manipulated, but the popular perception that they are objective lie detectors has led to horrific outcomes. While brain decoders have a much stronger scientific basis, they are still imperfect by nature of being prediction processes. Decoder predictions should be used only in situations where they can be verified by the user and there are no consequences to making errors. For instance, we would like to see brain decoders used as communication prostheses for people who have lost the ability to speak or write. In these cases, the user knows what the intended output should be, so the decoder could privately show the user what is being predicted and allow them to confirm whether the prediction is correct before making it public.

Furthermore, brain decoders can recover only active mental content. All brain recording methods measure signals that correspond to what a person is actively processing. By contrast, inactive information like long-term memory is encoded in the connections between neurons, and we are very far from being able to measure and decode this information.

This means that people have some degree of conscious control over what a decoder recovers. To test for conscious control in our language decoding study, we played two stories at the same time, and found that the decoder recovered only the story that the participants actively paid attention to. We also found that participants could prevent us from decoding a story that they were hearing by performing simple tasks like imagining animals. These results suggest that a person should be able to hide sensitive information from a decoder by drawing their attention away from that information.

Of course, brain decoders could become less reliant on a person’s cooperation as technology improves. There are also more subtle ways to gather information from brain data than decoding specific mental content. For instance, brain responses that indicate familiarity could reveal information in a way that’s harder to consciously resist. For these reasons, it is important to continually assess the privacy risks of decoding technology and enact policies to ensure that nobody’s brain is decoded without their consent.

Finally, it’s worth noting that brain decoders require a large amount of training data from the person being decoded. Decoders work by learning correspondences between stimuli and patterns of brain activity. Since different people have different life experiences, many of these correspondences may be unique for each individual. In our language decoding study, we found that you cannot train a decoder on brain data from one person and use it to decode brain data from a different person. While this could also change as technology improves, it might never be possible to decode personal details (such as the names of family members) without training data from the person being decoded.

This means that it is important to regulate the collection and distribution of brain data. One setting where regulation is necessary is the consumer application space. While brain decoders could allow consumers to interface with technology in new and exciting ways, like typing by just thinking of words, consumer applications also come with substantial privacy risks that must be addressed. For instance, it may be difficult for consumers to understand the types of inferences that can be made from their brain data, as the information that could be decoded from a brain recording a year from now may be very different from what can be decoded today.

Another setting where regulation is necessary is the workplace. Even if people have rights over their brain data, they may feel compelled to waive these rights due to economic pressures. Companies already track employees using video and biometric recordings, and brain decoders may eventually provide another tool unless appropriate safeguards are put in place.

It is important to understand the potential capabilities of brain decoding technology in order to enact proactive policies. It is equally important to understand the limitations of brain decoding technology to ensure that current decoders are not used inappropriately. Ultimately, my colleagues and I believe that many of the same solutions — regulating the recording and use of brain data in both public and private settings, studying the privacy implications of brain decoders, and helping the public understand the science behind brain decoding — will help prevent the misuse of current and future brain decoding technology.

Jerry Tang is a Ph.D. student at the University of Texas at Austin.
