chicksdaddy writes: The Department of Homeland Security is readying a set of security guidelines for Internet of Things device makers and for consumers that it will soon release, The Security Ledger is reporting (https://securityledger.com/2016/09/exclusive-dhs-readies-guidance-for-securing-internet-of-things/).
DHS, which houses the U.S. Computer Emergency Readiness Team (US-CERT) as well as the U.S. Secret Service, is assembling a set of strategic principles that it says will help safeguard and secure the Internet of Things by providing high-level guidance to industry on how to design and manufacture secure connected devices. For consumers, DHS will lay out guidelines on how to manage the risks posed by Internet-connected devices in their homes, cars and businesses.
Robert Silvers, the DHS Assistant Secretary for Cyber Policy, told Security Ledger that the agency thinks it can play a key role in setting cross industry standards for the Internet of Things.
“What we’ve come to recognize is that the Internet of Things is a full-blown phenomenon,” Silvers said. “We think everyone, government, industry and consumers alike, needs to get serious about reasonable security being built into IoT devices. And we need to do it now, before we’ve deployed an entire ecosystem,” he said.
Silvers will outline the agency's forthcoming guidance in a speech at The Security of Things Forum (https://www.securityofthings.com) in Cambridge, Mass., on Thursday.
chicksdaddy writes: The Federal Trade Commission is warning consumers to beware of new ‘connected car’ features that allow rental car customers to connect their mobile phone or other devices to in-vehicle infotainment systems, The Security Ledger reports. (https://securityledger.com/2016/08/ftc-warns-consumers-of-rental-car-data-theft-risk/)
“If you connect a mobile device, the car may also keep your mobile phone number, call and message logs, or even contacts and text messages,” the FTC said in an advisory released on Tuesday. (https://www.consumer.ftc.gov/blog/what-your-phone-telling-your-rental-car) “Unless you delete that data before you return the car, other people may view it, including future renters and rental car employees or even hackers.”
The Commission is advising renters to avoid syncing their mobile phones with their rental car, and to avoid charging devices via in-car USB ports, where device settings may allow automatic syncing of data. Consumers who do connect a device should scrutinize any requests for permissions. Renters are also urged to delete their device from the vehicle’s memory before handing the car back to the rental firm.
chicksdaddy writes: The battle of words over warnings from a Wall Street trader about serious security flaws in implantable medical devices (https://securityledger.com/2016/08/the-big-short-alleged-security-flaws-fuel-bet-against-st-jude-medical/) continued on Tuesday, as researchers from The University of Michigan joined St. Jude itself in raising doubts about research that was used by the investment firm Muddy Waters to bet against (or “short”) the stock of St. Jude Medical, a major medical device maker, The Security Ledger reports (https://securityledger.com/2016/08/short-sheet-researchers-raise-doubts-on-st-jude-research/).
In a statement released on Tuesday, Kevin Fu and Thomas Crawford of the Archimedes Center for Medical Device Research did not directly challenge the findings of the report by Muddy Waters and the firm MedSec, but did suggest that, rather than being evidence of a successful attack, the output observed by the researchers may have been typical for a home-monitored implantable cardiac defibrillator (ICD) device being tested while not properly connected to a patient.
“The U-M team reproduced error messages the report cites as evidence of a successful ‘crash attack’ but the messages are the same set of errors that display if the device isn’t properly plugged in,” the University said in a statement.
“We’re not saying the report is false. We’re saying it’s inconclusive because the evidence does not support their conclusions,” said Fu, U-M associate professor of computer science and engineering and director of the Archimedes Center for Medical Device Security. Fu is also co-founder of medical device security startup Virta Labs.
In a separate blog post, Kevin Fu of the University of Michigan said the research that informed the Muddy Waters report may be an example of 'armchair engineering.' (http://blog.secure-medicine.org/2016/08/study-on-st-jude-medical-device_30.html)
The conflict may come down to how different viewers interpret the same events. The behavior witnessed by the MedSec researchers and described in their report may not have been a security issue, but simply evidence of the device acting as designed, Fu and his colleagues say.
A defibrillator’s electrodes are connected to heart tissue via wires that are woven through blood vessels; the wires are used both for sensing and to send shocks to the heart, if necessary. Not surprisingly, when the defibrillator is not connected to a human host, the data transmitted by the device is quite different.
“When these wires are disconnected, the device generates a series of error messages: two indicate high impedance, and a third indicates that the pacemaker is interfering with itself,” said Denis Foo Kune, a former U-M postdoctoral researcher and co-founder of Virta Labs, in a statement.
That behavior is very similar to what is described in the Muddy Waters report on St. Jude as evidence of a successful attack.
While medical knowledge isn’t necessary to find vulnerabilities in a medical device, or even to hack one, it is critical to understanding the clinical implications of any software flaws and whether there is the possibility of causing harm to patients, Fu said.
chicksdaddy writes: Call it The Big Short – or maybe just the medical device industry’s “Shot Heard Round The World”: a report from Muddy Waters Research (http://www.muddywatersresearch.com/research/stj/mw-is-short-stj/) recommends that its readers bet against (or “short”) St. Jude Medical after learning of serious security vulnerabilities in a range of the company’s implantable cardiac devices, The Security Ledger reports. (https://securityledger.com/2016/08/the-big-short-alleged-security-flaws-fuel-bet-against-st-jude-medical/)
The Muddy Waters report set off a steep sell-off in St. Jude Medical’s stock, which finished the day down 5%, helping to push down medical stocks overall. (http://finance.yahoo.com/news/us-stocks-wall-st-slips-201909233.html)
The report cites the “strong possibility that close to half of STJ’s revenue is about to disappear for approximately two years” as a result of “product safety” issues stemming from remotely exploitable vulnerabilities in STJ’s pacemakers, implantable cardioverter defibrillator (ICD), and cardiac resynchronization therapy (CRT) devices. The vulnerabilities are linked to St. Jude’s Merlin@home remote patient management platform, said Muddy Waters.
The firm cited research by MedSec Holdings Ltd., a cybersecurity research firm that identified the vulnerabilities in St. Jude’s ecosystem. Muddy Waters said the affected products should be recalled until the vulnerabilities are fixed.
In an e-mail statement to Security Ledger, St. Jude’s Chief Technology Officer, Phil Ebeling, called the allegations “absolutely untrue.” “There are several layers of security measures in place. We conduct security assessments on an ongoing basis and work with external experts specifically on Merlin@home and on all our devices,” Ebeling said.
More controversial: MedSec CEO Justine Bone acknowledged in an interview with Bloomberg (http://www.bloomberg.com/news/videos/2016-08-25/bone-st-jude-has-history-of-sweeping-things-under-table) that her company did not first reach out to St. Jude to provide them with information on the security holes before working with Muddy Waters.
Information security experts who have worked with the medical device industry to improve security expressed confusion and dismay.
"If safety was the goal then I think (MedSec's) execution was poor," said Joshua Corman of The Atlantic Institute and I Am The Cavalry. "And if profit was the goal it may come at the cost of safety. It seems like a high stakes game that people may live to regret."
chicksdaddy writes: One of every five software vulnerabilities discovered in vehicles in the last three years is rated “critical” and is unlikely to be resolved through after-the-fact security fixes, according to an analysis by the firm IOActive, The Security Ledger reports. (https://securityledger.com/2016/08/one-in-five-vehicle-vulnerabilities-are-hair-on-fire-critical/)
“These are the high priority ‘hair on fire’ vulnerabilities that are easily discovered and exploited and can cause major impacts to the system or component,” the firm said in its report (http://www.infosecurity-magazine.com/download/227664/), which it released last week. The report was based on an analysis of more than 150 vehicle security flaws identified over three years by IOActive or publicly disclosed by way of third-party firms.
The report studied a wide range of flaws, most discovered in IOActive’s work with automakers and suppliers to auto manufacturers, said Corey Thuen, a Senior Security Consultant with IOActive. Thuen and his colleagues considered what kinds of vulnerabilities most commonly affect connected vehicles, what types of attacks are most often used to compromise them, and what kinds of vulnerabilities might be mitigated using common security techniques and tactics.
The results, while not dire, are not encouraging. The bulk of vulnerabilities that were identified stemmed from a failure by automakers and suppliers to follow security best practices, including designing in security or applying secure development lifecycle (SDL) practices to software creation. “These are all great things that the software industry learned as it has progressed over the last 20 years. But (automakers) are not doing them,” Thuen said.
chicksdaddy writes: The Department of Homeland Security warned of hundreds of vulnerabilities in a hospital monitoring system sold by Philips. Security researchers who studied the system said the security holes may number in the thousands, according to a report by The Security Ledger. (https://securityledger.com/2016/07/code-blue-thousands-of-bugs-found-on-medical-monitoring-system/)
The Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) issued an alert on July 14 (https://ics-cert.us-cert.gov/advisories/ICSMA-16-196-01) about the discovery of 460 vulnerabilities in the Philips Xper-IM Connect system, including 360 with a severity rating of “high” or “critical.” But one of the researchers who analyzed the Xper system said in an interview that the true number of vulnerabilities was much higher, numbering in the thousands.
Xper IM Connect is a “physiomonitoring” system that is widely used in the healthcare sector to monitor and manage other medical devices. Research by two companies, Synopsys and Whitescope LLC, working in collaboration with Philips, found that the system is directly affected by 460 software vulnerabilities, including 272 in the Xper software itself and 188 in the Windows XP operating system that Xper IM runs on. The vulnerabilities include remote code execution flaws that could allow malicious code to be run on the Xper system, as well as vulnerabilities that could expose sensitive information stored on Xper systems.
chicksdaddy writes: The Automotive industry’s main group for coordinating policy on information security and “cyber” threats has published a “Best Practices” document (http://www.automotiveisac.com/best-practices/), giving individual automakers guidance on implementing cybersecurity in their vehicles for the first time.
The Automotive Information Sharing and Analysis Center (ISAC) released the Automotive Cybersecurity Best Practices document on July 21st, saying the guidelines are for auto manufacturers as well as their suppliers.
The Best Practices cover organizational and technical aspects of vehicle cybersecurity, including governance, risk management, security by design, threat detection, incident response, training, and collaboration with appropriate third parties.
Taken together, they move the auto industry closer to standards pioneered decades ago and embraced by companies like Microsoft. They call on automakers to design software to be secure from the ground up and to take a sober look at risks to connected vehicles as part of the design process.
chicksdaddy writes: Ransomware infections have been plaguing the healthcare field for much of the last two years. But amidst all the reports of hospitals hamstrung by encrypted, clinical systems, there’s been precious little talk about whether such incidents are violations of patients’ privacy under the federal HIPAA legislation. Now we have an answer: yes.
Security Ledger reports (https://securityledger.com/2016/07/regulator-ransomware-infections-likely-reportable-under-hipaa/) that the U.S. Department of Health and Human Services on Monday issued new guidance (http://www.hhs.gov/sites/default/files/RansomwareFactSheet.pdf) that suggests strongly that ransomware infections that affect electronic patient health information (ePHI) are reportable violations under HIPAA.
“When electronic protected health information (ePHI) is encrypted as the result of a ransomware attack, a breach has occurred because the ePHI encrypted by the ransomware was acquired,” HHS said in its guidance. (PDF)
The new guidance comes after a period of consideration and debate within policy circles about whether having patient records encrypted by ransomware should count as a “breach” of patient privacy. In theory, the files aren’t being accessed and viewed, simply scrambled and held for ransom. Or so the thinking went.
Writing on the Virta Labs blog (http://go.virtalabs.com/ocr-ransomware), Virta CEO and University of Michigan researcher Kevin Fu noted that the HHS guidelines get a lot right, including ruling out an exemption for systems running full disk encryption (ransomware, by its very nature, operates while the machine is running and the operating system and file system are accessible).
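Fu's point about full disk encryption can be shown with a minimal sketch: to any process on the booted machine, FDE is transparent, so ransomware reads plaintext and writes back ciphertext exactly as it would on an unencrypted disk. The "patient record" and the toy XOR cipher below are purely illustrative, not a real ransomware technique or a real cipher.

```python
# Illustrative only: why full disk encryption does not stop ransomware.
# FDE sits below the filesystem layer, so a running process always sees
# plaintext; the toy XOR stands in for a real cipher.
import os
import tempfile

def xor_bytes(data: bytes, key: int) -> bytes:
    """Toy stand-in for a real cipher; NOT cryptography."""
    return bytes(b ^ key for b in data)

# The OS exposes the file as plaintext, FDE or not.
record = b"patient: Jane Doe; dx: ..."
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(record)

# A process on the booted machine reads the plaintext and
# overwrites it with ciphertext.
with open(path, "rb") as f:
    plaintext = f.read()
with open(path, "wb") as f:
    f.write(xor_bytes(plaintext, 0x5A))

# The content is now unrecoverable without the attacker's key.
with open(path, "rb") as f:
    scrambled = f.read()
assert scrambled != record
assert xor_bytes(scrambled, 0x5A) == record
os.unlink(path)
```

The disk's own encryption never enters the picture, which is why HHS declined to carve out an FDE exemption.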
Fu expected that the guidelines would be “bad news” for the majority of Health Delivery Organizations (HDOs) covered by HIPAA. “The OCR guidance means you just got clarity on whether ransomware results in a breach. Sorry, the answer is yes, unless you have methodical evidence to the contrary.”
chicksdaddy writes: The use of open source software exploded in 2015, almost doubling from the year before, according to a report from the firm Sonatype. (http://blog.sonatype.com/the-2016-state-of-software-supply-chain-report). The company, which manages the world’s largest repository of open source components, said it received 31 billion download requests from its Central Repository during 2015, up from over 17 billion such requests in 2014. The average enterprise downloaded 229,000 open source components during the same period.
However, software quality continues to be an issue, with a survey of 25,000 applications revealing that close to 7 percent of open source components in use had a known security defect that could lead to successful attacks.
While 7% (actually 6.8%) might not sound like much — just one of every 16 components — in the supply chain world, it's a pretty ugly statistic, Sonatype warned. “Imagine if one in every 16 of the parts in your iPhone were known defective – or 1 in every 16 parts in your car,” Derek Weeks, a Vice President and advocate for DevOps at Sonatype, told The Security Ledger. (https://securityledger.com/2016/07/developers-gorge-on-open-source-amid-worries-about-quality-security/)
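A quick sanity check on the arithmetic: a 6.8 percent defect rate works out to roughly one component in 15 (the "one in 16" analogy rounds down slightly). The per-enterprise extrapolation below is my own naive arithmetic from the report's figures, not a number Sonatype published.

```python
# Sanity-check the "one in N" framing of Sonatype's 6.8% defect rate.
defect_rate = 0.068          # share of components with a known security defect
one_in_n = 1 / defect_rate   # ~14.7, i.e. roughly one of every 15 components

avg_downloads = 229_000      # components the average enterprise downloaded in 2015
expected_defective = round(avg_downloads * defect_rate)  # naive extrapolation

print(f"one in {one_in_n:.1f} components; "
      f"~{expected_defective:,} defective downloads per average enterprise")
```

At the average enterprise's download volume, even a 6.8 percent rate implies thousands of known-defective components entering the build pipeline each year.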
The State of the Software Supply Chain Report draws on data from Sonatype’s Central Repository, a public repository of open source components for the Java development community, to reveal high-level trends within the open source industry. Sonatype also tapped data from other open source repositories, including RubyGems.org, NPM, DockerHub and Nexus, the company’s private repository.
In 2015, that data showed a hockey-stick-like curve marking the increase in open source component use and activity across the space. Sonatype said the volume of open source download requests has increased 64-fold since 2007, driven by a shift in application development toward component-based architectures that rely heavily on open source to accelerate development by leveraging already-created software components.
chicksdaddy writes: The Security Ledger notes (https://securityledger.com/2016/06/report-feds-mull-bug-bounty-contest-for-medical-devices/) that the U.S. Department of Health and Human Services is considering a bug bounty program for medical devices and healthcare technology, modeled after the Department of Defense's recently launched Hack the Pentagon program. (https://yro.slashdot.org/story/16/03/31/2013254/hack-the-pentagon-bug-bounty-program-opens-for-registration)
The Chief Privacy Officer at the Department of Health and Human Services (HHS) has made public statements that suggest HHS is considering a similar program.
Speaking at the Collaboration of Health IT Policy and Standards Committees meeting on June 23, Lucia Savage, chief privacy officer at HHS’s Office of the National Coordinator for Health Information Technology, said that the practice could show promise at HHS if it was scaled up to meet health care needs, Federal Times reported on June 23rd. (http://www.federaltimes.com/story/government/it/health/2016/06/23/ethical-hacking-dod-draws-interest-hhs/86301606/)
"This is a struggle for devices as well,” she said. “You can’t hack something in the field, because what if the hacker disrupts the operation of the device. Similarly, health data and EHRs, we may not want to have the hacker accessing your live data because that might cause other problems relative to your obligation to keep that data confidential."
"Given that space and given the need to improve cybersecurity, is there something that ONC can do to improve that rate at which ethical hacking occurs in health care?” Savage wondered.
On June 17, U.S. Secretary of Defense Ash Carter announced preliminary results from the program, which invited some 1,400 vulnerability hunters to try their luck on DOD systems. In all, the DOD paid bounties for 138 vulnerabilities submitted by 250 researchers. The program cost about $150,000, with roughly half of that going to the hackers as bounty payments.
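The per-bug economics of the pilot were modest. Taking the reported $150,000 figure with about half going to the hackers, the average payout per rewarded vulnerability comes to roughly $543 (the averages are my arithmetic, not DOD's):

```python
# Rough per-vulnerability economics of the Hack the Pentagon pilot,
# computed from the publicly reported figures.
total_cost = 150_000     # total program figure, in USD
hacker_share = 0.5       # roughly half went to hackers as bounties
vulns_paid = 138         # vulnerabilities that earned a bounty

bounty_pool = total_cost * hacker_share   # ~$75,000
avg_per_vuln = bounty_pool / vulns_paid   # ~$543 per vulnerability
print(f"~${avg_per_vuln:,.0f} average bounty per paid vulnerability")
```

That is far below what a serious DOD vulnerability might fetch on the gray market, which is part of what made the pilot's participation numbers notable.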
chicksdaddy writes: Hospitals are pretty hygienic places — except when it comes to passwords, it seems.
That's the conclusion of a recent study (http://www.cs.dartmouth.edu/~sws/pubs/ksbk15-draft.pdf) by researchers at Dartmouth College, the University of Pennsylvania and USC, which found that efforts to circumvent password protections are "endemic" in healthcare environments and mostly go unnoticed by hospital IT staff.
The report describes what amounts to a wholesale abandonment of security best practices at hospitals and other clinical environments — with the bad behavior driven by necessity rather than malice.
"In hospital after hospital and clinic after clinic, we find users write down passwords everywhere," the report reads. "Sticky notes form sticky stalagmites on medical devices and in medication preparation rooms. We’ve observed entire hospital units share a password to a medical device, where the password is taped onto the device. We found emergency room supply rooms with locked doors where the lock code was written on the door--no one wanted to prevent a clinician from obtaining emergency supplies because they didn’t remember the code. "
Competing priorities of clinical staff and information technology staff bear much of the blame. Specifically: IT staff and management are often focused on regulatory compliance and securing healthcare environments, and are excoriated for lapses in security that result in the theft or loss of data. Clinical staff, on the other hand, are focused on patient care and ensuring good health outcomes, Ross Koppel, one of the authors of the report, told The Security Ledger. (https://securityledger.com/2016/06/study-finds-password-misuse-in-hospitals-a-steaming-hot-mess/)
Those two competing goals often clash. “IT want to be good guys. They’re not out to make life miserable for the clinical staff, but they often do,” he said.
chicksdaddy writes: In a sign that hacking connected “things” is joining the information security mainstream, The Pwnies (http://pwnies.com/), a long-running awards ceremony that is the hacker community’s equivalent of The Oscars (or at least The People’s Choice Awards), is adding an award for “Junk Hacking” to its 2016 roster, The Security Ledger reports. (https://securityledger.com/2016/06/at-the-hacker-oscars-a-new-category-for-junk-hacking/)
The awards, which are handed out at the annual Black Hat Briefings conference in Las Vegas in August, added a “Pwnie for Best Junk Hack” to this year's list. (http://pwnies.com/nominations/) In a nod to the security industry’s penchant for stunt hacking and the technology industry’s penchant for unwarranted complexity, the award will be given to researchers who “discovered and performed the most needlessly sophisticated attack against the most needlessly Internet-enabled ‘Thing.'”
Justine Bone, Chief Technology Officer at the firm Vult.com, said that combination of needless sophistication and needless connectivity is exactly what the Junk Hacking category captures. The Internet of Things has only amped up the silliness, giving an IP address to everything from kitchen appliances to toothbrushes to stuffed animals. (See also: @InternetofShit (https://twitter.com/internetofshit?lang=en))
Despite all the silliness, however, Bone said that the community can learn from efforts to compromise connected stuff, which can still inspire subtle and creative hacks that have wider applications. “It may be that there’s some exploit in your connected toothbrush that could also be used against a home security system,” she said.
The Best Junk Hack category is among a slew of new award categories that are being added this year, the 10th year that the Pwnie Awards have been held. Among other new categories that are being added are Pwnies for the “Best Cryptographic Attack,” the “Best Backdoor,” and the closely related “Best Stunt Hack,” awarded to “the researchers, their PR team, and participating journalists for the best, most high-profile, and fear-inducing public spectacle that resulted in the most panic-stricken phone calls from our less-technical friends and family members.”
chicksdaddy writes: The Electronic Frontier Foundation is calling out law enforcement's use of a database of tattoo images compiled from prisoners to develop artificial intelligence, saying it violates the prisoners' civil rights and that the technology threatens free speech and privacy, The Security Ledger reports. (https://securityledger.com/2016/06/eff-argues-tattoo-recognition-research-threatens-free-speech-privacy/)
Efforts to “crack the symbolism of our tattoos using automated computer algorithms” threaten civil liberties, EFF staffers Dave Maass and Aaron Mackey wrote in a blog post last week. (https://www.eff.org/deeplinks/2016/06/tattoo-recognition-research-threatens-free-speech-and-privacy)
The post is an apparent reference to work headed up by the National Institute of Standards and Technology (NIST). In June, 2015, NIST held a workshop that explored approaches to automatic tattoo identification using artificial intelligence. (https://securityledger.com/2015/06/internet-of-tattoos-nist-workshop-plumbs-body-art-algorithms/)
Participating organizations in that workshop used an FBI-supplied dataset of thousands of images of tattoos from government databases. According to NIST computer scientist Mei Ngan, “state-of-the-art algorithms fared quite well in detecting tattoos, finding different instances of the same tattoo from the same subject over time, and finding a small part of a tattoo within a larger tattoo.”
But EFF said an investigation it conducted found that these experiments “exploit inmates, with little regard for the research’s implications for privacy, free expression, religious freedom, and the right to associate.” So far, EFF said “researchers have avoided ethical oversight while doing (their work).”
“Tattoos are inked on our skin, but they often hold much deeper meaning. They may reveal who we are, our passions, ideologies, religious beliefs, and even our social relationships. That’s exactly why law enforcement wants to crack the symbolism of our tattoos using automated computer algorithms, an effort that threatens our civil liberties,” the two wrote.
chicksdaddy writes: Security firm FireEye claims to have discovered proof-of-concept malicious software that targets industrial control system software used to operate critical infrastructure worldwide, Security Ledger reports. (https://securityledger.com/2016/06/new-stuxnet-like-industrial-control-system-malware-ups-the-ante/)
The malware, dubbed “IRONGATE,” was discovered via VirusTotal, a kind of online clearinghouse for malicious software samples, according to a FireEye blog post. (https://www.fireeye.com/blog/threat-research/2016/06/irongate_ics_malware.html)
Though the software isn’t yet capable of infecting actual industrial control systems, FireEye warns that it suggests malicious software authors are upping their game: adding evasion features that detect and sidestep so-called “sandbox” analysis environments, and enabling sophisticated “man in the middle” attacks on applications used with programmable logic controllers (PLCs) made by Siemens – the same equipment targeted by the Stuxnet worm.
FireEye cautioned that the malicious software samples its researchers discovered do not pose a threat to industrial control environments currently. The code would require “widespread changes” to actually attack Siemens programmable logic controllers.
Rather, the malicious software seems to suggest that malicious actors are testing out their creations before using them in actual attacks. Among other things, FireEye researchers observed the malware carry out a man in the middle attack against a custom-compiled user application in a Siemens Step 7 PLC simulation environment (PLCSIM).
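The man-in-the-middle technique FireEye describes can be sketched abstractly: the malware sits between the operator software and the PLC, records a window of normal sensor readings, then replays those readings to the operator while passing altered commands down to the process. The classes and values below are a hypothetical illustration of that record-and-replay pattern, not IRONGATE's actual code.

```python
# Conceptual sketch of a record-and-replay man-in-the-middle between
# operator software and a (simulated) PLC. Hypothetical illustration only.
from collections import deque

class SimulatedPLC:
    """Stands in for the process: reports a sensor value, accepts setpoints."""
    def __init__(self):
        self.setpoint = 50.0
    def read_sensor(self) -> float:
        return self.setpoint  # toy process tracks its setpoint exactly

class RecordReplayMITM:
    """Records normal readings, then feeds the recording to the operator
    while forwarding tampered setpoints to the PLC."""
    def __init__(self, plc: SimulatedPLC, window: int = 5):
        self.plc = plc
        self.recording = deque(maxlen=window)
        self.replaying = False
    def read_sensor(self) -> float:
        real = self.plc.read_sensor()
        if not self.replaying:
            self.recording.append(real)   # learn what "normal" looks like
            return real
        self.recording.rotate(-1)         # loop the recorded normal values
        return self.recording[0]          # operator sees stale, benign data
    def write_setpoint(self, value: float):
        self.plc.setpoint = value * 2     # tampered command reaches the PLC

plc = SimulatedPLC()
mitm = RecordReplayMITM(plc)
for _ in range(5):                        # record phase: pure pass-through
    mitm.read_sensor()
mitm.replaying = True
mitm.write_setpoint(50.0)                 # operator asks for 50...
assert plc.setpoint == 100.0              # ...the PLC actually receives 100
assert mitm.read_sensor() == 50.0         # ...but the display still shows 50
```

The essential trick, as with Stuxnet, is decoupling what the process does from what the operator sees.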
chicksdaddy writes: Passcode is reporting (http://www.csmonitor.com/World/Passcode/2016/0518/Flaws-in-networking-devices-highlight-tech-industry-s-quality-control-problem) that researchers are warning that security vulnerabilities in widely used remote power management (RPM) equipment could give malicious hackers the ability to remotely shut off power to critical information systems and industrial machinery.
Researchers at Georgia-based BorderHawk said they discovered suspicious traffic emanating from compromised RPM devices while working at a large energy firm. An investigation found more reasons for concern: undocumented features hidden in the firmware, requiring no authentication, that could be used to dump a list of user accounts and passwords for accessing the device. Researchers also found a link to a malicious domain located in China buried in a help file.
RPMs are simple network hardware containing two power outlets for plugging in equipment, as well as Ethernet and serial ports for connecting to the network or directly to another computer.
The work by BorderHawk jibes with work done by the security consulting firm Senrio Inc. (formerly called Xipiter - http://www.xipiter.com/). Researchers there analyzed the NetBooter NP-02B, made by the Arizona firm SynAccess Networks, and found hidden, no-authentication features in that device's firmware. One lets anyone remotely reset the NetBooter device to its factory default configuration. Another allows anyone to modify network and system settings. A third, hidden function could be used to extract data (like a recently entered password) stored in the device’s memory, according to Stephen Ridley, a principal at Senrio. Searches using the Shodan.io search engine reveal hundreds of publicly accessible SynAccess RPM devices deployed at universities, on government networks, and at other businesses.
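Finding exposed devices like these is essentially a matter of filtering service banners by vendor string, which is what a Shodan.io search does. The sketch below applies that idea to a list of invented banner records; the IPs, ports, and banner text are hypothetical examples for illustration, not data from Shodan or from the firms' research.

```python
# Hypothetical sketch: filtering service-banner records for exposed
# SynAccess NetBooter devices, in the spirit of a Shodan.io search.
# All sample banners below are invented for illustration.
banners = [
    {"ip": "192.0.2.10", "port": 80, "data": "Server: SynAccess NetBooter NP-02B"},
    {"ip": "192.0.2.11", "port": 80, "data": "Server: Apache/2.4.18"},
    {"ip": "192.0.2.12", "port": 23, "data": "SynAccess Networks telnet service"},
]

def find_rpm_devices(records, needle="synaccess"):
    """Return records whose banner mentions the vendor string."""
    return [r for r in records if needle in r["data"].lower()]

exposed = find_rpm_devices(banners)
print(f"{len(exposed)} publicly accessible SynAccess device(s) found")
for r in exposed:
    print(f'  {r["ip"]}:{r["port"]} -> {r["data"]}')
```

The ease of this kind of search is the point: any device that announces its vendor in a banner on a public IP is trivially enumerable at Internet scale.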
The problem is a byproduct of changes in the way that technology firms source and build their products, often relying on far-flung networks of manufacturers and suppliers who operate with little oversight or quality control.
"Hardware is a misunderstood, unknown territory," said noted electrical engineer and inventor Joe Grand of Grand Idea Studio. "People buy a piece of hardware and take it for granted. They assume it is secure. They assume it does what it does and only does what it does."