Two years ago, this blog wrote about the privacy implications of the exciting new field of brain-computer interfaces (BCIs). The approach discussed there involved implanting tiny threads directly into the brain. Such an invasive and medically risky procedure is unlikely to be adopted by the general public. But a new paper underlines the major advances in non-invasive BCIs, for example using electroencephalography BCIs (eBCIs) that measure electrical activity on the external surface of the scalp.
Most of the paper is devoted to a detailed review of over a dozen such eBCI headsets and their potential applications as mass market devices. But the paper rightly concludes by considering the main ethical issues raised by these eBCI systems, which concern privacy and agency. It points out that neural information acquired using eBCIs could give important insights into how their users think, feel and behave. Neural data could be used to “infer different aspects related to user intention, emotional response, and decision making, as well as conscious and unconscious interest”. That, in turn, means that eBCIs will be appealing tools for companies seeking to collect consumer biometric data on a large scale.
It’s easy to imagine incentives being offered to users of eBCIs to allow their neural information to be shared with third parties, just as much of their personal data is already shared as they move around the Internet. Indeed, one of the deeply troubling aspects of the routine and large-scale harvesting of highly personal data from people when they visit sites or use online services is that it normalizes this invasion of privacy. Once that happens, many people might think that since they have already given up so much about themselves, they may as well agree to upload their neural data too.
The other major ethical issue raised by the new generation of two-way eBCIs concerns “agency”. This refers to “the subjective awareness of control over our own actions and, by extension, over events in the external world”. Here, there is a subtle but serious problem thanks to the brain’s lack of “proprioception”, sometimes referred to as the “sixth sense”:
the human brain is unable to acknowledge the influence of an external device on itself, which could potentially compromise autonomy and self-agency. Because of this, users may be liable to mistakenly perceive ownership over behavioral outputs that are generated by the BCI, as well as incorrectly attribute causation to it. For instance, brain stimulation techniques have been shown to trigger changes in demeanor and character traits, which often leads to changes in personal identity.
In other words, the brain’s paradoxical lack of awareness about itself as a physical organ, and its inability to distinguish different kinds of external inputs, could make it easier to implant thoughts and desires using BCIs without the subject being aware of that fact.
The evident seriousness of these challenges has prompted ethicists and researchers to explore ways to combat the use of BCIs to undermine privacy and agency. An increasingly popular suggestion is to update the Universal Declaration of Human Rights for the age of neurotechnology. Recently, one group of scientists and lawyers suggested enshrining and protecting five basic neuro-rights:
(1) the right to identity, or the ability to control both one’s physical and mental integrity; (2) the right to agency, or the freedom of thought and free will to choose one’s own actions; (3) the right to mental privacy, or the ability to keep thoughts protected against disclosure; (4) the right to fair access to mental augmentation, or the ability to ensure that the benefits of improvements to sensory and mental capacity through neurotechnology are distributed justly in the population; and (5) the right to protection from algorithmic bias, or the ability to ensure that technologies do not insert prejudices.
Back in 2019, when Privacy News Online first looked at this area, such proposals were purely theoretical, and seemed unlikely to form part of a legal framework with any real-world impact. But things have moved on since then. For example, Spain aims to bring in a Charter of Digital Rights, whose aim is to preserve offline rights in the online world. Section XXIV concerns “digital rights in the use of neurotechnologies”. Remarkably, new “mental privacy” laws in Chile are even further along. As an article on the Rest of the World site explains, a constitutional bill awaiting approval by Chile’s Chamber of Deputies, and a bill explicitly about neuro-protection, will establish rights to “free will, mental privacy, equal access to cognitive enhancement technologies, and protection against algorithmic bias” – essentially those listed above. The Rest of the World article points out the radically new approach taken by the Chilean neuro-protection bill:
Rather than describing it in broad-brush terms, it proposes to treat neural data as a special kind of information that is intimately related to who we are and that partly defines our identity. The bill therefore states that neural data must be legally considered as organic tissue.
By treating neuro-data as an organ, the law prohibits Chileans from being compelled to give up brain data and, crucially, requires explicit “opt-in” authorization for its collection. Another implication of this legal analogy is that brain data cannot be bought or sold, regardless of consent; it can only be donated for altruistic purposes.
It is noteworthy how fast the world of neuro-rights is progressing. It is also striking that the country in the vanguard of addressing the novel and profound ethical questions that BCIs raise is Chile. This is a clear sign that protecting mental privacy will not just be a problem for a few rich Western nations, but for the entire world. Other countries need to learn from the Chilean moves, and start working on their own legislation enshrining key neuro-rights.
Featured image by Baburov.