chicksdaddy writes: One of every five software vulnerabilities discovered in vehicles in the last three years is rated “critical” and is unlikely to be resolved through after-the-fact security fixes, according to an analysis by the firm IOActive, The Security Ledger reports. (https://securityledger.com/2016/08/one-in-five-vehicle-vulnerabilities-are-hair-on-fire-critical/)
“These are the high priority ‘hair on fire’ vulnerabilities that are easily discovered and exploited and can cause major impacts to the system or component,” the firm said in its report (http://www.infosecurity-magazine.com/download/227664/), which it released last week. The report was based on an analysis of more than 150 vehicle security flaws identified over three years by IOActive or publicly disclosed by third parties.
The report studied a wide range of flaws, most discovered in IOActive’s work with automakers and suppliers to auto manufacturers, said Corey Thuen, a Senior Security Consultant with IOActive. Thuen and his colleagues considered what kinds of vulnerabilities most commonly affect connected vehicles, what types of attacks are most often used to compromise vehicles, and what kinds of vulnerabilities might be mitigated using common security techniques and tactics.
The results, while not dire, are not encouraging. The bulk of the vulnerabilities identified stemmed from a failure by automakers and suppliers to follow security best practices, including designing in security and applying secure development lifecycle (SDL) practices to software creation. “These are all great things that the software industry learned as it has progressed in the last 20 years. But (automakers) are not doing them,” Thuen said.
chicksdaddy writes: The Department of Homeland Security warned of hundreds of vulnerabilities in a hospital monitoring system sold by Philips. Security researchers who studied the system said the security holes may number in the thousands, according to a report by The Security Ledger. (https://securityledger.com/2016/07/code-blue-thousands-of-bugs-found-on-medical-monitoring-system/)
The Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) issued an alert on July 14 (https://ics-cert.us-cert.gov/advisories/ICSMA-16-196-01) about the discovery of 460 vulnerabilities in the Philips Xper-IM Connect system, including 360 with a severity rating of “high” or “critical.” But one of the researchers who analyzed the Xper system said in an interview that the true number of vulnerabilities was much higher, numbering in the thousands.
Xper IM Connect is a “physiomonitoring” system that is widely used in the healthcare sector to monitor and manage other medical devices. Research by two companies, Synopsys and Whitescope LLC, working in collaboration with Philips, found that the system is directly afflicted by 460 software vulnerabilities, including 272 in the Xper software itself and 188 in the Windows XP operating system that Xper IM runs on. The vulnerabilities include remote code execution flaws that could allow malicious code to be run on the Xper system as well as vulnerabilities that could expose sensitive information stored on Xper systems.
chicksdaddy writes: The Automotive industry’s main group for coordinating policy on information security and “cyber” threats has published a “Best Practices” document (http://www.automotiveisac.com/best-practices/), giving individual automakers guidance on implementing cybersecurity in their vehicles for the first time.
The Automotive Information Sharing and Analysis Center (ISAC) released the Automotive Cybersecurity Best Practices document on July 21st, saying the guidelines are for auto manufacturers as well as their suppliers.
The Best Practices cover organizational and technical aspects of vehicle cybersecurity, including governance, risk management, security by design, threat detection, incident response, training, and collaboration with appropriate third parties.
Taken together, they move the auto industry closer to standards pioneered decades ago and embraced by companies like Microsoft. They call on automakers to design software to be secure from the ground up and to take a sober look at risks to connected vehicles as part of the design process.
chicksdaddy writes: Ransomware infections have been plaguing the healthcare field for much of the last two years. But amidst all the reports of hospitals hamstrung by encrypted, clinical systems, there’s been precious little talk about whether such incidents are violations of patients’ privacy under the federal HIPAA legislation. Now we have an answer: yes.
Security Ledger reports (https://securityledger.com/2016/07/regulator-ransomware-infections-likely-reportable-under-hipaa/) that the U.S. Department of Health and Human Services on Monday issued new guidance (http://www.hhs.gov/sites/default/files/RansomwareFactSheet.pdf) that suggests strongly that ransomware infections that affect electronic patient health information (ePHI) are reportable violations under HIPAA.
“When electronic protected health information (ePHI) is encrypted as the result of a ransomware attack, a breach has occurred because the ePHI encrypted by the ransomware was acquired,” HHS said in its guidance. (PDF)
The new guidance comes after a period of consideration and debate within policy circles about whether having patient records encrypted by ransomware should count as a “breach” of patient privacy. In theory, the files aren’t being accessed and viewed, simply scrambled and held for ransom. Or so the thinking went.
Writing on the Virta Labs blog (http://go.virtalabs.com/ocr-ransomware), Virta CEO and University of Michigan researcher Kevin Fu noted that the HHS guidelines get a lot right, such as ruling out an exemption for systems with Full Disk Encryption running (ransomware, by its very nature, operates when the machine is running and the operating system and file system are accessible).
Fu expected that the guidelines would be “bad news” for the majority of Health Delivery Organizations (HDOs) covered by HIPAA. “The OCR guidance means you just got clarity on whether ransomware results in a breach. Sorry, the answer is yes, unless you have methodical evidence to the contrary.”
chicksdaddy writes: The use of open source software exploded in 2015, almost doubling from the year before, according to a report from the firm Sonatype. (http://blog.sonatype.com/the-2016-state-of-software-supply-chain-report). The company, which manages the world’s largest repository of open source components, said it received 31 billion download requests from its Central Repository during 2015, up from over 17 billion such requests in 2014. The average enterprise downloaded 229,000 open source components during the same period.
However, software quality continues to be an issue, with a survey of 25,000 applications revealing that close to 7% of open source components in use had a known security defect that could lead to successful attacks.
While 7% (actually 6.8%) might not sound like much — just one of every 16 components — in the supply chain world, it's a pretty ugly statistic, Sonatype warned. “Imagine if one in every 16 of the parts in your iPhone were known defective – or 1 in every 16 parts in your car,” Derek Weeks, a Vice President and advocate for DevOps at Sonatype told The Security Ledger. (https://securityledger.com/2016/07/developers-gorge-on-open-source-amid-worries-about-quality-security/)
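The report's own numbers make the scale concrete. A quick back-of-the-envelope calculation (a sketch only, combining the 6.8% defect rate with the 229,000-component enterprise average cited above; the variable names are ours) shows what that rate implies for a typical enterprise:

```python
# Scale check using figures from the Sonatype report: apply the 6.8%
# known-defect rate to the 229,000 components the average enterprise
# downloaded in 2015.
defect_rate = 0.068               # share of components with a known flaw
components_downloaded = 229_000   # average enterprise downloads, 2015

flawed = round(components_downloaded * defect_rate)
print(f"~{flawed:,} downloaded components with known security defects")
```

On those figures, the average enterprise pulled in well over fifteen thousand components with known security defects in a single year.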
The State of the Software Supply Chain Report analyzes data from Sonatype’s Central Repository, a public repository of open source components for the Java development community, to reveal high-level trends within the open source industry. Sonatype also tapped data from other open source repositories, including RubyGems.org, NPM, DockerHub and Nexus, the company’s private repository.
In 2015, that data showed a hockey-stick curve marking the increase in open source component use and activity across the space. Sonatype said that the volume of open source download requests has increased 64-fold since 2007, driven by a shift in application development toward component-based architectures that rely heavily on open source to accelerate development by reusing already-created software components.
chicksdaddy writes: The Security Ledger notes (https://securityledger.com/2016/06/report-feds-mull-bug-bounty-contest-for-medical-devices/) that the U.S. Department of Health and Human Services is considering a bug bounty program for medical devices and healthcare technology, modeled after the Department of Defense's recently launched Hack the Pentagon program. (https://yro.slashdot.org/story/16/03/31/2013254/hack-the-pentagon-bug-bounty-program-opens-for-registration)
The Chief Privacy Officer at the Department of Health and Human Services (HHS) has made public statements that suggest HHS is considering a similar program.
Speaking at the Collaboration of Health IT Policy and Standards Committees meeting on June 23, Lucia Savage, chief privacy officer at HHS’s Office of the National Coordinator for Health Information Technology, said that the practice could show promise at HHS if it was scaled up to meet health care needs, Federal Times reported. (http://www.federaltimes.com/story/government/it/health/2016/06/23/ethical-hacking-dod-draws-interest-hhs/86301606/)
"This is a struggle for devices as well,” she said. “You can’t hack something in the field, because what if the hacker disrupts the operation of the device. Similarly, health data and EHRs, we may not want to have the hacker accessing your live data because that might cause other problems relative to your obligation to keep that data confidential."
"Given that space and given the need to improve cybersecurity, is there something that ONC can do to improve that rate at which ethical hacking occurs in health care?” Savage wondered.
On June 17, U.S. Secretary of Defense Ash Carter announced preliminary results from the program, which invited some 1,400 vulnerability hunters to try their luck on DOD systems. In all, the DOD paid bounties for 138 vulnerabilities submitted by 250 researchers, spending $150,000 on the program, with about half of that going to the hackers.
chicksdaddy writes: Hospitals are pretty hygienic places — except when it comes to passwords, it seems.
That's the conclusion of a recent study (http://www.cs.dartmouth.edu/~sws/pubs/ksbk15-draft.pdf) by researchers at Dartmouth College, the University of Pennsylvania and USC, which found that efforts to circumvent password protections are "endemic" in healthcare environments and mostly go unnoticed by hospital IT staff.
The report documents what can only be described as a wholesale abandonment of security best practices at hospitals and other clinical environments, with the bad behavior driven by necessity rather than malice.
"In hospital after hospital and clinic after clinic, we find users write down passwords everywhere," the report reads. "Sticky notes form sticky stalagmites on medical devices and in medication preparation rooms. We’ve observed entire hospital units share a password to a medical device, where the password is taped onto the device. We found emergency room supply rooms with locked doors where the lock code was written on the door--no one wanted to prevent a clinician from obtaining emergency supplies because they didn’t remember the code. "
Competing priorities of clinical staff and information technology staff bear much of the blame. Specifically: IT staff and management are often focused on regulatory compliance and securing healthcare environments, and are excoriated for lapses in security that result in the theft or loss of data. Clinical staff, on the other hand, are focused on patient care and ensuring good health outcomes, Ross Koppel, one of the authors of the report, told The Security Ledger. (https://securityledger.com/2016/06/study-finds-password-misuse-in-hospitals-a-steaming-hot-mess/)
Those two competing goals often clash. “IT want to be good guys. They’re not out to make life miserable for the clinical staff, but they often do,” he said.
chicksdaddy writes: In a sign that hacking connected “things” is joining the information security mainstream, The Pwnies (http://pwnies.com/), a long-running awards ceremony that is the hacker community’s equivalent of The Oscars (or at least The People’s Choice Awards), is adding an award for “Junk Hacking” to its 2016 roster, The Security Ledger reports. (https://securityledger.com/2016/06/at-the-hacker-oscars-a-new-category-for-junk-hacking/)
The awards, which are handed out at the annual Black Hat Briefings conference in Las Vegas in August, added a “Pwnie for Best Junk Hack” to its list of new awards.(http://pwnies.com/nominations/) But in a nod to the security industry’s penchant for stunt hacking and the technology industry’s penchant for unwarranted complexity, the award will be given to researchers who “discovered and performed the most needlessly sophisticated attack against the most needlessly Internet-enabled ‘Thing.'”
Justine Bone, Chief Technology Officer at the firm Vult.com, said that combination of needless sophistication and needless connectivity is what defines the Junk Hacking category. The Internet of Things has only amped up the silliness, giving an IP address to everything from kitchen appliances to toothbrushes to stuffed animals. (See also: @InternetofShit (https://twitter.com/internetofshit?lang=en))
Despite all the silliness, however, Bone said that the community can learn from efforts to compromise connected stuff, which can still inspire subtle and creative hacks that have wider applications. “It may be that there’s some exploit in your connected toothbrush that could also be used against a home security system,” she said.
The Best Junk Hack category is among a slew of new award categories that are being added this year, the 10th year that the Pwnie Awards have been held. Among other new categories that are being added are Pwnies for the “Best Cryptographic Attack,” the “Best Backdoor,” and the closely related “Best Stunt Hack,” awarded to “the researchers, their PR team, and participating journalists for the best, most high-profile, and fear-inducing public spectacle that resulted in the most panic-stricken phone calls from our less-technical friends and family members.”
chicksdaddy writes: The Electronic Frontier Foundation is calling out law enforcement's use of a database of tattoo images compiled from prisoners to develop artificial intelligence, saying it violates the prisoners' civil rights and that the technology threatens free speech and privacy, The Security Ledger reports. (https://securityledger.com/2016/06/eff-argues-tattoo-recognition-research-threatens-free-speech-privacy/)
Efforts to “crack the symbolism of our tattoos using automated computer algorithms” threaten civil liberties, EFF staffers Dave Maass and Aaron Mackey wrote in a blog post last week. (https://www.eff.org/deeplinks/2016/06/tattoo-recognition-research-threatens-free-speech-and-privacy)
The post is an apparent reference to work headed up by the National Institute of Standards and Technology (NIST). In June 2015, NIST held a workshop that explored approaches to automatic tattoo identification using artificial intelligence. (https://securityledger.com/2015/06/internet-of-tattoos-nist-workshop-plumbs-body-art-algorithms/)
Participating organizations in that workshop used an FBI-supplied dataset of thousands of images of tattoos from government databases. According to NIST computer scientist Mei Ngan, “state-of-the-art algorithms fared quite well in detecting tattoos, finding different instances of the same tattoo from the same subject over time, and finding a small part of a tattoo within a larger tattoo.”
But EFF said an investigation it conducted found that these experiments “exploit inmates, with little regard for the research’s implications for privacy, free expression, religious freedom, and the right to associate.” So far, EFF said “researchers have avoided ethical oversight while doing (their work).”
“Tattoos are inked on our skin, but they often hold much deeper meaning. They may reveal who we are, our passions, ideologies, religious beliefs, and even our social relationships. That’s exactly why law enforcement wants to crack the symbolism of our tattoos using automated computer algorithms, an effort that threatens our civil liberties,” the two wrote.
chicksdaddy writes: Security firm FireEye claims to have discovered proof-of-concept malicious software that targets industrial control systems software used to operate critical infrastructure worldwide, Security Ledger reports. (https://securityledger.com/2016/06/new-stuxnet-like-industrial-control-system-malware-ups-the-ante/)
The malware, dubbed “IRONGATE” was discovered via VirusTotal, a kind of online clearinghouse for malicious software samples, according to a FireEye blog post.(https://www.fireeye.com/blog/threat-research/2016/06/irongate_ics_malware.html)
While the software isn’t yet capable of infecting actual industrial control systems, FireEye warns that it suggests malicious software authors are upping their game: adding evasion features that prevent the malware from being fooled by so-called “sandbox” environments and enabling sophisticated “man in the middle” attacks on applications used with programmable logic controllers (PLCs) made by Siemens – the same equipment targeted by the Stuxnet worm.
FireEye cautioned that the malicious software samples its researchers discovered do not pose a threat to industrial control environments currently. The code would require “widespread changes” to actually attack Siemens programmable logic controllers.
Rather, the malicious software seems to suggest that malicious actors are testing out their creations before using them in actual attacks. Among other things, FireEye researchers observed the malware carry out a man in the middle attack against a custom-compiled user application in a Siemens Step 7 PLC simulation environment (PLCSIM).
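The attack FireEye observed follows the classic record-and-replay pattern: capture a window of legitimate process readings, then loop the recording back to the operator's display while the process itself is being manipulated. A minimal, purely illustrative Python sketch of that pattern (not IRONGATE's actual code; the class and its interface are invented for this example):

```python
# Illustrative sketch only (not IRONGATE code): the record-and-replay
# man-in-the-middle pattern FireEye describes. The proxy passes live
# process values through while it records them, then replays the
# captured "normal" window to the operator display indefinitely.

class RecordReplayMitm:
    """Sits between a controller's process values and the HMI display."""

    def __init__(self, window=5):
        self.window = window    # number of "normal" samples to capture
        self.recording = []     # captured legitimate readings
        self.idx = 0            # replay cursor

    def display_value(self, plc_value):
        """Return the value the operator sees for a given live reading."""
        if len(self.recording) < self.window:
            # Record phase: forward live values while capturing them.
            self.recording.append(plc_value)
            return plc_value
        # Replay phase: loop the captured window, masking whatever
        # the process is actually doing now.
        value = self.recording[self.idx]
        self.idx = (self.idx + 1) % self.window
        return value
```

Once the buffer is full, the display only ever sees the captured "normal" window, no matter what the controller is actually reporting; it is the same masking trick Stuxnet made famous.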
chicksdaddy writes: Passcode is reporting (http://www.csmonitor.com/World/Passcode/2016/0518/Flaws-in-networking-devices-highlight-tech-industry-s-quality-control-problem) that researchers are warning that security vulnerabilities in widely used remote power management (RPM) equipment could give malicious hackers the ability to remotely shut off power to critical information systems and industrial machinery.
Researchers at Georgia-based BorderHawk said they discovered suspicious traffic emanating from compromised RPM devices while working at a large energy firm. An investigation found more reasons for concern: undocumented, no-authentication-required features hidden in the firmware that could be used to dump a list of user accounts and passwords for accessing the device. Researchers also found a link to a malicious domain located in China buried in a help file.
RPMs are simple network hardware containing two power outlets to plug in equipment, as well as Ethernet and serial ports for connecting to the network or directly to another computer.
The work by BorderHawk jibes with work done by the security consulting firm Senrio Inc. (formerly called Xipiter – http://www.xipiter.com/). Researchers there analyzed the NetBooter NP-02B, made by the Arizona firm SynAccess Networks, and found hidden, no-authentication features in that device's firmware. One lets anyone remotely reset the NetBooter device to its factory default configuration. Another allows anyone to modify network and system settings. A third, hidden function could be used to extract data (like a recently entered password) stored in the device’s memory, according to Stephen Ridley, a principal at Senrio. Searches using the Shodan.io search engine reveal hundreds of publicly accessible SynAccess RPM devices deployed at universities, on government networks, and at other businesses.
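The kind of exposure Senrio describes, a management interface that honors sensitive requests without any credentials, can be checked for with nothing more than an unauthenticated HTTP request. A hedged sketch of that check (the endpoint paths below are hypothetical placeholders, not SynAccess's actual firmware URLs):

```python
# Hypothetical sketch of the audit technique: probe a device's web
# interface for undocumented endpoints that answer without any
# authentication. The candidate paths are illustrative placeholders.

import http.client

CANDIDATE_PATHS = ["/sysSettings.cgi", "/factoryDefault.cgi", "/memDump.cgi"]

def find_unauthenticated_endpoints(host, port=80, timeout=5):
    """Return the candidate paths that answer 200 OK with no credentials."""
    exposed = []
    for path in CANDIDATE_PATHS:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("GET", path)  # deliberately no Authorization header
            if conn.getresponse().status == 200:
                exposed.append(path)
        except OSError:
            pass  # host unreachable, refused, or timed out
        finally:
            conn.close()
    return exposed
```

This is a sketch only; real assessments like Senrio's rest on firmware analysis rather than blind probing, and probing hardware you don't own is both rude and illegal.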
The problem is a byproduct of changes in the way that technology firms source and build their products, often relying on far-flung networks of manufacturers and suppliers who operate with little oversight or quality control.
"Hardware is a misunderstood, unknown territory," said noted electrical engineer and inventor Joe Grand of Grand Idea Studio. "People buy a piece of hardware and take it for granted. They assume it is secure. They assume it does what it does and only does what it does."
chicksdaddy writes: Just in from the "21st Century Jobs for Nobody" Desk: IBM said on Tuesday that it's adapting its Watson artificial intelligence (AI) to help detect cyber attacks and cyber crimes, The Security Ledger reported. (https://securityledger.com/2016/05/ibm-tweaking-watson-ai-for-cyber-security-analysis/)
A new, cloud-based version of its Watson cognitive technology is being trained to understand information security and interpret masses of security event data, a move IBM called a “critical step in the advancement of cognitive security.” (http://www-03.ibm.com/security/cognitive/)
“Security analysts are already fighting fires. Wouldn’t it be nice if they could be a little proactive,” said Charles Palmer, a Distinguished IBM Research staffer in a video released by the company. “How do you get to be proactive? You read. You learn. What are bad people doing?” Watson, Palmer said, “is reading the same stuff.”
“What Watson brings to the table is the distilled human understanding that is most relevant to making those decisions about (a) boiled down list of (security incidents),” said Jeb Linton, the Chief Security Architect on IBM’s Watson team.
As part of the project, IBM will work with academics at well-known universities including MIT, Penn State, NYU, University of Maryland, Pomona and Cal State Polytechnic, as well as the Universities of New Brunswick, Waterloo and Ottawa in Canada.
Researchers there will be training the Watson AI to understand information security like an expert – starting with the basic vocabulary of the trade: things like “exploit,” “dropper,” “incident” and (ahem) “Adobe.”
chicksdaddy writes: Antivirus software running on a medical diagnostic computer caused the device to fail in the middle of a cardiac procedure, denying physicians access to data from a critical monitoring tool and potentially endangering patient safety, the U.S. Food and Drug Administration said.
The FDA issued an Adverse Event Report (https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/detail.cfm?mdrfoi__id=5487204), dated February 8, regarding the device: the Merge Hemo Programmable Diagnostic Computer (http://www.merge.com/Solutions/Cardiology/Merge-Hemo.aspx), which is made by Merge Healthcare. The adverse event occurred during a heart catheterization procedure and was caused by improper configuration of the antivirus software, the FDA concluded.
According to the Adverse Event report, a Merge Hemo customer reported to the company that, “in the middle of a heart catheterization procedure, the Hemo Monitor PC lost communication with the Hemo client and the Hemo monitor went black.” According to information provided by the customer, “there was a delay of about 5 minutes while the patient was sedated so that the application could be rebooted," The Security Ledger reported. (https://securityledger.com/2016/05/fda-antivirus-crashed-diagnostic-tool-during-heart-procedure/)
The incident is a rare, documented instance of a software-based failure interfering with a medical procedure, though nobody knows for sure how common equipment failures in clinical settings are. The FDA received around 1.2 million adverse incident reports in 2014, the last full year for which data is available. This is the first known incident linked to anti-malware software.
chicksdaddy writes: Security improvements for connected cars may be years away, as both the government and industry struggle to catch up on the cyber security issue, according to a report from the Government Accountability Office (GAO), the Security Ledger is reporting. (https://securityledger.com/2016/04/gao-help-securing-connected-cars-is-years-away/)
In a report published in March (http://www.gao.gov/assets/680/676064.pdf), GAO paints a worrying picture of vehicle cyber security, telling Congress that modern vehicles feature many communications interfaces that are vulnerable to attack, and noting that remote, software-based attacks affecting critical vehicle functions have already been demonstrated by researchers. Unfortunately, measures to address those threats are likely years away, as automakers work to design more secure in-vehicle systems and regulators, like the National Highway Traffic Safety Administration (NHTSA), struggle to determine their role and the scope of possible regulations.
In either case, help is likely years away, the GAO concluded, citing information gleaned from automotive industry “stakeholders.”
Despite independent research dating back more than five years showing that remote, software-based attacks on vehicles were technically possible, GAO notes that both the government and industry have been slow to respond.
“Despite awareness of risks related to vehicle cybersecurity since at least 2011, the auto industry and NHTSA have only recently sharpened their focus on this issue,” GAO said.
NHTSA, the government’s lead body on vehicle safety, has taken “several important steps” on vehicle cybersecurity since 2012, GAO said, noting that the agency has established a vehicle-cybersecurity research program and is “soliciting industry input on the need for government and voluntary industry standards.” (https://securityledger.com/2016/04/nhtsa-drafting-cyber-security-guidelines-for-light-vehicles/) However, “NHTSA does not anticipate making a final determination on the need for government standards until 2018 when additional cybersecurity research is expected to be completed,” GAO noted.
The same goes for industry efforts to address vehicle cybersecurity: the development of an Automotive ISAC and of a voluntary design and engineering process standard for cybersecurity are still in their early stages, GAO notes.
“As such, some of these government and industry efforts to address vehicle cybersecurity are unlikely to provide many benefits for vehicles already operating on the roads today or those currently in the design and production stages,” the report notes.
chicksdaddy writes: Farmers who are looking to make better use of technology need to start paying attention to security, or suffer the same fate as industries such as healthcare, the FBI warned in an industry note, The Security Ledger reports. (https://securityledger.com/2016/04/fbi-warns-of-smart-farm-risk/)
In an FBI Private Industry Note dated March 31 (https://info.publicintelligence.net/FBI-SmartFarmHacking.pdf), the Bureau said that increased adoption of “precision farming” technology threatens to expose the nation’s agriculture sector to the risk of hacking and data theft.
“Historically, the farming industry has lacked awareness of how their data should be protected from cyber exploitation,” the FBI said. That’s a dangerous gap as farmers invest in connected and data-intensive farming equipment and related services.
Possible risks include hacktivists who destroy data to protest the use of genetically-modified organisms (GMOs) or pesticides. Farm-level data may also be vulnerable to ransomware and data destruction, potentially impacting the food supply, the FBI said.
Though lower profile than industries like automotive and manufacturing, agriculture has been an aggressive adopter of new technology, allowing fewer people to manage large, industrial farms far more effectively. Farm equipment from John Deere and others now frequently comes equipped with sensors and paired with sophisticated, hosted services. That's not always a good thing: Wired wrote last year about farmers who struggle with software-induced shutdowns of expensive equipment that they are unable to resolve themselves, because anti-tamper features built into the hardware and software by manufacturers like John Deere prevent them from doing so. (http://www.wired.com/2015/02/new-high-tech-farm-equipment-nightmare-farmers/)