It’s 2050; you’re sitting at home, controlling your devices with your brain. You turn on the lights, play your favourite song, and even order groceries without lifting a finger. You think. It happens.
This is the promise of neurotechnology — a world where your brain’s signals are seamlessly connected to the digital world. The electric pulses in your brain can be converted to 1s and 0s by an embedded chip. But this convenience comes at an unsettling price.
A corporation now has unfiltered access to your brain. It interprets your actions, like changing the channel, but also knows when you’re happy, stressed, or tired.
Your brain is always on. It constantly generates data, and you cannot filter what companies access.
As you fall asleep, your thoughts are interrupted by a personalised ad for a product your subconscious signalled you might be interested in. Marketers plant their campaigns while we are fast asleep. It’s a dystopian future, and it’s entirely possible.
Neurotechnology, the field that uses technology to understand the brain, visualise its processes, and even control, repair, or improve its functions, has made extraordinary strides recently.
Like many things in the startup world, it’s been driven by rapid advancements in artificial intelligence and data science.
Once the domain of science fiction, neurotechnology is a reality with enormous implications for medicine and how we interact with the digital world.
Dr Kobi Leins steered the multifaceted conversation during a recent panel discussion at SXSW Sydney I attended titled “Where Is My Mind? The Coming Wave of Neurotechnology and What It Means for Humans”.
The talk spanned topics from human rights to consumer rights while touching on the actual science behind neurotechnology.
Understanding neurotechnology: Invasive and non-invasive approaches
At its core, neurotechnology involves the application of devices and systems capable of recording or stimulating brain and nervous system activity.
Neurotechnology falls into two categories: invasive and non-invasive.
Invasive neurotechnology includes implanting devices like electrodes directly into the brain to record neural signals or stimulate neural activity.
A well-known example is deep brain stimulation, which has been used for over 30 years to treat Parkinson’s disease.
By placing electrodes in specific brain regions, this technology can detect abnormal neural signals and correct them through electrical stimulation.
More recently, brain-computer interfaces (BCIs) like Elon Musk’s Neuralink have started to gain attention. Noland Arbaugh became the first person to receive a Neuralink implant after a swimming accident caused a severe spinal cord injury, leaving him quadriplegic and paralysed from the shoulders down.
Arbaugh can now command a computer via his thoughts to play chess, browse the web and use social media.
Such interfaces hold enormous promise for individuals with paralysis, neurodegenerative diseases, or traumatic brain injuries, allowing them to communicate and interact with the world more fully. Brilliant minds once trapped inside unresponsive bodies can now participate in the world like never before.
On the other hand, non-invasive neurotechnology involves devices that do not penetrate the body. Electroencephalography (EEG) headsets, for example, detect electrical signals from the brain by placing sensors on the scalp.
Although the signals are weaker and noisier than those captured by invasive methods, advancements in AI have greatly improved their utility.
AI helps to decode these signals and interpret complex neural patterns, allowing these devices to measure mental states like stress, focus, and emotional responses.
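To give a sense of what “decoding” brain signals means in practice, here is a deliberately simplified sketch of one classical, pre-AI approach: comparing how much power a recording carries in the brain’s well-known frequency bands. Everything in it — the sampling rate, the band edges, and the “focus score” formula — is an illustrative assumption for this article, not how any commercial headset actually works.

```python
import cmath
import math

FS = 256  # assumed sampling rate in Hz (one second of data below)

def band_power(signal, low_hz, high_hz, fs=FS):
    """Average spectral power of `signal` within [low_hz, high_hz) Hz,
    computed with a naive discrete Fourier transform (fine for a short window)."""
    n = len(signal)
    powers = []
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if low_hz <= freq < high_hz:
            coeff = sum(signal[i] * cmath.exp(-2j * math.pi * k * i / n)
                        for i in range(n))
            powers.append(abs(coeff) ** 2)
    return sum(powers) / len(powers)

def focus_score(signal):
    """Crude focus proxy: beta-band power relative to alpha-band power."""
    alpha = band_power(signal, 8, 13)   # alpha waves: relaxed wakefulness
    beta = band_power(signal, 13, 30)   # beta waves: active concentration
    return beta / (alpha + beta)

# Synthetic one-second "recording": a strong 20 Hz (beta) component plus
# a weaker 10 Hz (alpha) component, standing in for real scalp EEG.
eeg = [math.sin(2 * math.pi * 20 * i / FS) +
       0.3 * math.sin(2 * math.pi * 10 * i / FS)
       for i in range(FS)]
print(focus_score(eeg))  # closer to 1 than to 0, since beta dominates
```

Modern consumer devices replace the hand-written threshold logic with machine-learned models, which is precisely what lets them infer richer states like stress or emotional responses from the same noisy scalp signals.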
Such technology is already commercially available. Flow, for example, produces a headset used by the UK’s NHS to treat depression, while the Dreem headset measures brain activity, heart rate, and movements and provides daily reports and sleep tips to improve sleep.
The risks of neurotechnology
While the opportunities for the future of neurotechnology feel like they’re straight out of Star Trek, this panel focused on the risks accompanying this rapid progress, particularly concerning privacy, ethics, and inequality.
Kiley Seymour, an associate professor of neuroscience and behaviour at the University of Technology Sydney, noted early on that “our thoughts are the last frontier of privacy”.
Unlike other forms of biometric data, such as fingerprints or facial recognition, brain data is profoundly personal and almost impossible to anonymise and filter.
Human Rights Commissioner Lorraine Finlay said neurotechnology will allow your “innermost thoughts to be seen by other people”. The possibility that you could unconsciously “give away your deepest thoughts” is, she said, “profoundly concerning”.
It is a dystopian thought indeed.
23andMe, the genetic testing company that identifies your ancestry, recently experienced a substantial data breach that affected approximately 6.9 million users, nearly half of its customer base.
In this leak, the hacker sold subsets of data on the dark web with listings compiled based on genetic groups like those of Chinese or Jewish descent.
Moreover, this leak caused significant damage to the reputation and valuation of the 23andMe business. Although its future is uncertain, it will likely be sold to the highest bidder.
Unfortunately for users, the data they gave 23andMe to learn what percentage Irish they were, so they could tell people at pubs, may end up in the hands of another corporation, along with their entire genotype.
A similar future may await those who allow neurotechnology near their brainwaves.
Another recent example Dr Kobi Leins gave was the Australian startup Harrison.ai, which landed itself in hot water after it was revealed it used chest scans from I-MED to train its algorithms without consent.
Leins said people’s chest scans were collected “without consent and used to train AI. Again, those images are all individually identifiable. They’re biometric data and might also show other conditions precluding people in some countries from insurance or medical care”.
If Target was able to detect that a teen girl was pregnant before her father did, based on her shopping patterns in the early 2010s, then I fear to think what corporations will be able to learn about us from reading our brainwaves.
Dr Kate Bower, a digital rights advocate, questioned the business model some neurotechnology businesses may adopt to remain viable.
Bower stated: “One of the ways that we traditionally have done that is by a freemium model, right? So you get the technology for free, but if you’re getting the technology for free, then you are the product.”
High costs could limit access to advanced treatments for those with disabilities, while the affluent might use these technologies to gain cognitive or physical advantages, further widening societal inequalities.
The opportunities of neurotechnology
In medicine, neurotechnology is already transforming the lives of people with severe disabilities.
Professor Kiley Seymour explained that “brain-computer interfaces are using the commands of those muscles to control avatars that can speak in the same voice as the person that is suffering from locked-in syndrome”.
Such innovations represent a breakthrough in improving quality of life and expanding the boundaries of human interaction.
Many, like Noland Arbaugh, the quadriplegic who can now use a computer with his mind, will improve their lives through access to socialisation, entertainment, and work.
The future of neurotechnology is undoubtedly full of potential.
However, as the panel concluded, it is crucial to ensure that these technologies are developed responsibly, with appropriate safeguards to protect human rights and prevent abuse.
My biggest takeaway was that our governments need to start thinking about their citizens’ neurorights before it’s too late. Chile, for example, amended its constitution in 2021 to protect its citizens’ neurorights.
Australia should start having more of these conversations. The Australian government passed the Privacy Act in 1988, and it came into effect the following year. Since then, there have been a few amendments but no references to our neurorights.
I’ve had a brainwave: let’s champion change before Elon & Co start surfing them.
Update: Post-publication statement from Harrison.ai
“As a clinician-led company, harrison.ai takes patient safety and data privacy very seriously. All data that we use for research and development arrives de-identified, cannot be re-identified, and is encrypted.
“Our products, which have met the highest levels of regulatory requirements for clinical use in over 40 countries, including with the TGA in Australia, are designed to improve quality of care, greatly increase the speed of diagnostics and save lives.”