Neurotech could change the health industry – but is it for the better?

Strides are being made in the field of neurotech, and these technologies can now do immense good for those who need them. We are able to harness neural communications to allow a paralysed person to control prosthetic limbs.

Perhaps the most recent development is the ability to enable a paralysed person to type on a screen simply by imagining writing the characters they wish to use, provided they are connected to the required equipment. This development, which has come out of cutting-edge academic research, allows writing at a far faster rate than previous technologies, and so could revolutionise communication for the people who need it.

The major issue for many seems to be privacy. The idea of having sensors in our brains which can pick up and relay neural signals is feared as a step along the path to a dystopian future of mind reading and thought crime. Our mind and our thoughts are perhaps our most private and protected sanctuary of internal opinion and deliberation. What we are thinking is something we do not want others to know, and surely something others have no right to know. These technologies, people worry, may have the potential to change that.

At present, the tech is not quite this developed; pre-motor neural signals can be identified and harnessed, but other ‘thoughts’ are still out of reach. However, brain scanners are already being considered which could provide a deeper, broader ‘scan’ of one’s brain activity than the neuron sensors we have now. We are not yet at the stage of complete or non-consensual mind reading, which is the primary cause for concern. But these technologies will only continue to advance.

On top of the potential of brain scanners, Elon Musk, through his company Neuralink, has plans to make these implants which read neural signals more mainstream, to aid the completion of mundane tasks. The company is making progress with small, wireless chips, having tested them on monkeys.

We can argue that there is no issue with the current limited use of these implants, but we must consider the implications of their widespread distribution and use, both the possibility of accessing people’s thoughts and that of implanting artificial impulses. Risks such as the placing of artificial cravings in our brains could threaten our autonomy and our capacity for rational decision-making.

Both possibilities pose serious ethical issues. They are not directly a risk as of now, but may be in the near future, and so any proposal for a more widespread use of these neural sensors must be deliberated with these possibilities in mind, before the capabilities of these technologies reach levels that pose a risk to our privacy or autonomy.

At present, these machines lack the capacity to be dangerous; one has to be a willing participant in order to use the neuron-sensing technologies, for example. They can be greatly useful for the communication of paralysed persons, and it would be a mistake to halt their use when they have the potential to do so much good in this form, just because of a potential future risk. The conditions under which this future risk could arise must be monitored, however, so as to mitigate it.
