Comment: Lots of things are classified as medical devices (Score: 1)
Medical devices don't just include implantable equipment (such as implantable defibrillators, pacemakers, pumps, etc.) but also analysis equipment and, more recently, computer software running on ordinary PCs: electronic patient records, order management systems, digital X-ray/picture archiving and communication systems (PACS), and so on.
Implantable devices have been in the public eye recently because they don't use very secure protocols. Typically, the wireless controller transmits a command prefixed by the serial number of the implanted device, and the device ignores any command not prefixed by the appropriate serial number. This is adequate for preventing accidental programming of the wrong device in a clinic, but a hacker could easily perform a replay attack to make the device administer an inappropriate treatment or dose. One reason manufacturers have given for this is an extremely limited power budget: strong cryptography simply burns too much energy in a device which cannot be recharged.
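To illustrate why a serial-number prefix is not authentication, here is a minimal sketch of that kind of protocol. The class, serial number, and command format are all made up for illustration - this is not any real device's wire format:

```python
# Sketch of a serial-prefix-only protocol (illustrative, not a real device).
# Commands are accepted purely on a matching serial-number prefix, so a
# recorded packet can be replayed verbatim by anyone in radio range.

class Implant:
    def __init__(self, serial: str):
        self.serial = serial
        self.delivered = []  # commands the device has acted on

    def receive(self, packet: bytes) -> bool:
        serial, _, command = packet.partition(b"|")
        if serial.decode() != self.serial:
            return False              # wrong device: silently ignore
        self.delivered.append(command)  # no freshness check, no MAC
        return True

device = Implant("SN-12345")

# A legitimate clinic programmer sends one command...
packet = b"SN-12345|DELIVER_DOSE:2"
device.receive(packet)

# ...but an eavesdropper who captured that packet can replay it at will.
for _ in range(3):
    device.receive(packet)

print(len(device.delivered))  # 4 doses delivered from 1 legitimate command
```

A nonce or counter inside an authenticated message would defeat the replay, but that is exactly the cryptography the power budget supposedly rules out.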
One problem that has concerned me as a user of medical software is just how poor the security is on a surprising number of products. One product I use at the moment is part of an electronic patient record system. This system doesn't quite store user passwords as cleartext in the database; instead, it encrypts them with a Vigenère cipher, using the username as the key. However, because of excess load on the database server, the software very conscientiously caches the entire "Users" table as a CSV file on the client computer. When I discovered the file, it didn't take long for the Mk I eyeball and my recollection of my password history (which was also documented in great detail, in encrypted form) to determine the cipher and what was being used as the key. This was subsequently confirmed by running the binary through a decompiler, which revealed a number of other wonders, such as potential SQL injection vulnerabilities. Of course, none of that really mattered - there was an interesting file called "C:\epr.ini" which contained lines such as:
[ClientDatabaseConnectionString]
Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=EPRORA)(PORT=1521)))(CONNECT_DATA=(SERVER=DEDICATED)));User Id=SYSTEM;Password=pyramid1;
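For anyone unfamiliar with why username-keyed Vigenère is no better than cleartext: it's a simple per-letter shift, trivially reversed by anyone who can see both a username and its "encrypted" password. A minimal sketch of that kind of scheme (the username and password here are made up, and the real product's exact variant may differ):

```python
# Vigenere cipher keyed with the username - a sketch of the scheme
# described above, not the actual product's code.

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Shift each letter of `text` by the corresponding letter of `key`."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            shift = ord(key[i % len(key)].lower()) - ord('a')
            out.append(chr((ord(ch) - base + sign * shift) % 26 + base))
        else:
            out.append(ch)  # digits/punctuation pass through unchanged
    return "".join(out)

# Illustrative entry from a cached "Users" CSV: username "jdoe",
# stored password "qxbxnu2".
stored = vigenere("hunter2", "jdoe")
print(stored)                                  # -> qxbxnu2
print(vigenere(stored, "jdoe", decrypt=True))  # -> hunter2
```

Since the CSV hands an attacker every username alongside its ciphertext, every key is known, and recovering every password is a one-liner.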
However, even leaving aside such extraordinarily bad software from small IT contractors, even the big boys in the healthcare arena seem to have problems with basic testing, and anything even vaguely corner-case will often result in strange behavior. And that's just routine use - I can imagine all sorts of vulnerabilities appearing if these software packages were subjected to serious attack.
In fact, even in healthcare systems which are supposed to be paradigms of good design, implementation is often very poor. Professor Ross Anderson, in his book "Security Engineering", mentions a national system used in the UK for securing health records, where each user's smartcard contains an individual certificate and permitted user roles, which interact with the software to release the appropriate records. On the face of it, an excellent system - and one that Anderson cites as an example. For a user, however, the implementation is a disaster area: it's unreliable (it depends on a national authentication server, and local caching was broken in the first 11 six-monthly releases) and vulnerable to DoS attacks. Authentication against the national server was hopelessly slow (taking up to 5 minutes), so it was useless for doctors in a busy environment such as the ER. Roles are administered at a national level, with no way to override errors in role allocation before the next six-monthly release (e.g. the first few releases did not permit doctors to change the brightness/contrast of an X-ray they were examining - this function was restricted to sysadmins only). The role administrators acknowledged that this was a serious problem, but refused to push out a hotfix; it had to wait for the next role release.

In reality, the nurse in Anderson's example would not simply be restricted to her patients. What actually happened is that the first doctor on shift in the morning would use her smartcard to log in, then leave the card in the terminal for the rest of the day, with every other member of staff who needed patient record access piggy-backing on her login. This procedure was sanctioned by senior hospital management in recognition of the fact that the authentication system was unusable - and the sharing of logins led to further problems, with annotations being attributed to the wrong person.
In the end, the workaround was for each person annotating a case record to sign their note by hand - "This note was made by Nurse Doe at 11:41 on 1/2/2010" - so that it was clear who had actually written it.