Electronic brain implants are becoming increasingly common in both research and medicine, but little attention has been paid to the digital security of these grey matter gateways. A new article in Neurosurgical Focus discusses their potential back doors and security weaknesses.
While there’s a small literature on hardware problems in implantable deep brain stimulators, little consideration has been given to data privacy, access control and crash protection for neural implants.
Many of these devices are designed to be surgically implanted and then controlled, tuned or reprogrammed from outside the body over a wireless link, but very few (if any) have a built-in authentication system that grants access only to people who are authorised to make changes.
Currently, they work more like TV remote controls. Anyone with the correct remote control can change the settings on your TV, but it’s just assumed that no one except the owner would want to.
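The missing piece, in other words, is any cryptographic handshake between the external programmer and the implant. A minimal sketch of the kind of challenge-response check that would close this gap, assuming a shared secret provisioned when the implant is fitted (the key handling, function names and command format here are all hypothetical):

```python
import hashlib
import hmac
import os

# Hypothetical shared secret provisioned at implant time.
SHARED_KEY = os.urandom(32)

def make_challenge():
    # Implant sends a fresh random nonce to the external programmer.
    return os.urandom(16)

def sign_command(key, challenge, command):
    # Programmer proves knowledge of the key by MACing the nonce + command.
    return hmac.new(key, challenge + command, hashlib.sha256).digest()

def implant_accepts(key, challenge, command, tag):
    # Implant recomputes the MAC and compares in constant time.
    expected = hmac.new(key, challenge + command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

challenge = make_challenge()
tag = sign_command(SHARED_KEY, challenge, b"set_amplitude=2mA")
assert implant_accepts(SHARED_KEY, challenge, b"set_amplitude=2mA", tag)
# A tampered command fails verification.
assert not implant_accepts(SHARED_KEY, challenge, b"set_amplitude=9mA", tag)
```

Because the challenge is fresh each time, a recorded exchange can’t simply be replayed later — the equivalent of a TV remote that only the owner’s hand can press.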
As these devices become more widespread, however, the possibility opens up that malicious attackers could alter the function of the brain by taking control of the device.
In fact, the research group that wrote this article managed exactly this sort of remote pwnage on a commercial implantable heart defibrillator that entered the US market in 2003:
In our past research, we experimentally demonstrated that a hacker could wirelessly compromise the security and privacy of a representative implantable medical device: an implantable cardiac defibrillator introduced into the US market in 2003.
Specifically, our prior research [pdf] found that a third party, using his or her own homemade and low-cost equipment, could wirelessly change a patient’s therapies, disable therapies altogether, and induce ventricular fibrillation (a potentially fatal heart rhythm).
Although we only conducted our experiments using short-range, 10-cm wireless communications, and although we believe that the risk of an attack on a patient today is very low, the implications are clear: unless appropriate safeguards are in place, a hacker could compromise the security and privacy of a medical implant and cause serious physical harm to a patient.
We believe that some future hackers, if given the opportunity, will have no qualms in targeting neural devices.
It also seems that there is little concern for data privacy on these devices, so everything is broadcast ‘in the clear’. This means even if you didn’t own a legitimate controller, you could potentially intercept the data, learn its structure and create your own.
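To illustrate why cleartext broadcast matters, here is a sketch of how an eavesdropper could decode and then forge packets once the structure is known. The packet layout, field names and values are entirely invented for the example:

```python
import struct

# Entirely hypothetical cleartext packet layout: device id, command code,
# stimulation parameter (big-endian uint32, uint16, float32).
PACKET_FMT = ">IHf"

def build_packet(device_id, command, value):
    return struct.pack(PACKET_FMT, device_id, command, value)

# A legitimate controller transmits a settings update in the clear...
packet = build_packet(0x00C0FFEE, 0x0002, 2.0)

# ...so an eavesdropper who has worked out the layout can decode it...
device_id, command, value = struct.unpack(PACKET_FMT, packet)

# ...and forge a packet with the same structure but a different parameter.
forged = build_packet(device_id, command, 9.0)
```

With no encryption or message authentication, the implant has no way to tell the forged packet from a genuine one.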
While information about an individual’s neural firing patterns is probably of little interest at the current time (we just don’t know enough about them to ‘reveal’ anything personal about the patient), their frequency and pattern could conceivably leave both the device and the patient open to side channel attacks – where the external behaviour of a system gives clues to its internal workings.
For example, take a patient who has an implantable chip that detects when epileptic seizures are about to start and cools the disturbed part of the brain, a technology that is already in development.
It would be possible to know when the system kicks in by monitoring radio transmissions, giving the outside observer a reliable guide to what external conditions trigger seizures in the patient.
If transmitted, it might also be possible to read the exact frequency at which neural oscillations lead to seizures, giving clues as to how to trigger them with lights or sounds.
Another problem is the integrity of the devices. For example, the devices need to be resistant to interference from other radio signals, magnetic fields or even deliberate attempts to crash them.
This new article serves as both a warning and a plea to consider security when designing and deploying these increasingly common medical technologies.
By the way, the whole issue of Neurosurgical Focus is dedicated to brain-machine interfaces and is freely available online.
Link to ‘Neurosecurity: security and privacy for neural devices’.