Submission + - One Petabyte of Data Exposed via Insecure Big Data Systems->

chicksdaddy writes: We hear a lot about the incredible value of data analysis to modern businesses — from Uber to Target Stores. And behind every big data deployment is a range of supporting technologies like databases and memory caching systems that are used to store and analyze massive data sets at lightning speed. But, as we know, with great power comes great responsibility. And a new report from the security research firm Binaryedge (http://blog.binaryedge.io/2015/08/10/data-technologies-and-security-part-1/) suggests that many of the organizations using these powerful data storage and analysis tools are not taking adequate steps to secure them. The result is that more than a petabyte (a thousand terabytes) of stored data is accessible to anyone online who knows where and how to look for it.

In a blog post on Thursday, the firm reported the results of research that found close to 200,000 such systems that were publicly addressable. Vulnerable systems were found on the networks of organizations ranging from small start-ups to Fortune 500 firms. Many were running vulnerable, out-of-date software and lacked even basic security protections such as user authentication, the company said.

In a scan of the public Internet, Binaryedge said it found 39,000 MongoDB servers that were publicly addressable and that “didn’t have any type of authentication." In all, the exposed MongoDB systems contained more than 600 terabytes of data, stored in databases with names like “local,” “admin,” and “db.” Other platforms found to be publicly addressable and unsecured included the open source Redis key-value cache and store (35,000 instances reachable without any authentication and holding about 13 terabytes of data in memory) and 9,000 instances of ElasticSearch, a commonly used search engine based on Lucene, which exposed another 531 terabytes of data.

As Digital Guardian notes (https://digitalguardian.com/blog/big-data-means-big-risks), we don’t know what kind of data is stored on these systems or how useful it might be to malicious actors. But given that there’s more than a petabyte of data out there, it is reasonable to assume that some of it is sensitive in nature. And, in the case of technologies like Memcached, the data they contain is constantly changing. That means an attacker who accessed them could benefit from a continuous stream of new information, possibly including authentication session data.

Link to Original Source

Submission + - Facebook Doubles Internet Defense Prize: Awards $100k for work on Type Casting->

chicksdaddy writes: Two days after the software giant Oracle found itself in hot water for questioning the value of independent security researchers (http://developers.slashdot.org/story/15/08/11/1613225/oracle-exec-stop-sending-vulnerability-reports), social media giant Facebook and USENIX, the Advanced Computing Systems Association, sent a drastically different message: doubling an annual prize that rewards novel security research. (https://securityledger.com/2015/08/facebook-awards-100k-for-fix-to-common-c-flaw/)

The company said on Wednesday that it was awarding its Internet Defense Prize, and a purse of US $100,000, to a team of Ph.D. students from Georgia Tech for a paper describing a new method for identifying a class of vulnerabilities in C++ programs centered on the use of so-called “static” type casting.

The paper, “Type Casting Verification: Stopping an Emerging Attack Vector,” (https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/lee) by students Byoungyoung Lee and Chengyu Song, with Professors Taesoo Kim and Wenke Lee, describes a new method and a tool for detecting incorrect or bad type casting in C++ applications, including popular web browsers like Chrome and Firefox.

If not correctly used, static type casting may return "unsafe and incorrectly casted values, leading to so-called bad-casting or type-confusion vulnerabilities," the researchers note. Those can enable an attacker to corrupt and rewrite memory, potentially allowing malicious code to be inserted and run, as with other memory corruption vulnerabilities.

Ioannis Papagiannis, a Security Engineering Manager at Facebook, said that the security trade-offs of static type casting have been well understood for a while but that application developers choose static type casting because the alternative, dynamic type casting, incurs a significant performance hit. “A lot of companies go for the ‘fast but insecure’ approach,” he said.
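
To make the trade-off concrete, here is a minimal C++ sketch (not taken from the paper, with made-up class names) of the kind of bad downcast CAVER is built to catch: static_cast performs no runtime check and silently yields a type-confused pointer, while dynamic_cast rejects the invalid cast at the cost of a runtime type lookup.

    // Illustrative only: the class names are hypothetical, not from the paper.
    #include <iostream>

    struct Base        { virtual ~Base() = default; };
    struct SafeWidget  : Base { int counter = 0; };
    struct RiskyWidget : Base { char *buffer = nullptr; };  // unrelated sibling class

    int main() {
        Base *obj = new SafeWidget();  // the object really is a SafeWidget

        // Bad cast: static_cast does no runtime check, so 'r' now treats a
        // SafeWidget as a RiskyWidget. Dereferencing r->buffer would
        // reinterpret SafeWidget's fields as a pointer, the "type confusion"
        // that can lead to memory corruption.
        RiskyWidget *r = static_cast<RiskyWidget *>(obj);
        (void)r;  // not dereferenced here; doing so would be undefined behavior

        // Checked cast: dynamic_cast consults runtime type information and
        // returns nullptr for the invalid downcast, at a performance cost.
        RiskyWidget *checked = dynamic_cast<RiskyWidget *>(obj);
        std::cout << (checked ? "cast ok" : "invalid downcast rejected") << "\n";

        delete obj;
        return 0;
    }

Per the paper, CAVER's contribution is making this kind of runtime check cheap enough to apply broadly.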

The Georgia Tech researchers developed a tool, dubbed CAVER, that is described as a “runtime bad-casting detection tool” that “performs program instrumentation at compile time and uses a new runtime type tracing mechanism—the type hierarchy table—to overcome the limitation of existing approaches and efficiently verify type casting dynamically.”

Papagiannis said Facebook decided to double the award in recognition of the value of the research and to give the Georgia Tech researchers the resources they need to develop CAVER into a widely available tool.

Link to Original Source

Submission + - Tech Firms, Retailers Propose Security, Privacy Rules for Internet of Things->

chicksdaddy writes: As the Obama Administration and the rest of the federal bureaucracy hem and haw about whether and how to regulate the fast-growing Internet of Things, a group representing private sector firms has come out with a framework for ensuring privacy and security protections in IoT products that is light years ahead of anything under consideration inside the Beltway.

The Online Trust Alliance (https://otalliance.org/) — a group made up of such staunch civil liberties and privacy advocates as Target Stores (?), Microsoft and home security firm ADT — on Tuesday released a draft of its IoT Trust Framework (PDF here: https://otalliance.org/system/...), which offers voluntary best practices in security, privacy and what OTA calls "sustainability" (read "lifecycle management") for home automation and wearable health & fitness technologies.

So how is it? Pretty damned good, according to this post at The Security Ledger.(https://securityledger.com/2015/08/tech-retail-firms-propose-privacy-standards-for-internet-of-things/)

"The OTA guidelines set a high bar for IoT device makers. On the security front, the framework calls on manufacturers to employ end-to-end encryption, including device connections to mobile devices and applications and wireless communications to the cloud or other devices. Device makers should include features that force the retirement of default passwords after their first use and to configure multiple user roles with separate passwords for administrative and end-user access.

"Privacy policies must be made available to potential buyers prior to product purchase and disclose the consequences of declining to opt in or out of policies, such as data collection. And, in a nod to consumer advocates' complaints about long and legalistic end-user license agreements (EULA) and privacy policies that are the prevalent today, device makers would be required to 'maximize readability.'

"Beyond that, manufacturers must conspicuously disclose all personally identifiable data types and attributes collected. A health or fitness band would need to inform potential buyers that it harvests data such as their physical location and biometric data like heart rate, pulse, blood pressure and so on."

The standards also address issues such as lifecycle management for IoT devices. Craig Spiezle, Executive Director and President of OTA, notes that many home appliances have life spans that are measured in decades, not months or years. Under the framework, device makers should have a plan for supporting and updating them during that time, or risk creating a population of insecure, off-warranty endpoints that are subject to tampering and attack.

Spiezle said that such questions and issues are currently "uncharted waters" in the consumer space. And, in fact, issues related to data collection and disclosure in connection with smart appliances have already come to the fore. In 2014, device maker LG issued a firmware update for its smart TVs that disabled the "connected" features of the device if users would not agree to lengthy new Terms of Service and Privacy Agreements. The revised documents granted LG permission to monitor and record users' viewing habits and their interactions with the device, including voice commands. (https://securityledger.com/2014/05/bad-actor-with-update-lg-says-no-monitoring-no-smart-tv/)

Link to Original Source

Submission + - One In Four Indiana Residents Lost Data in Electronic Records Firm Hack->

chicksdaddy writes: Four million patients of more than 230 hospitals, doctors' offices and clinics had patient data exposed in a May hack of the Fort Wayne, Indiana firm Medical Informatics Engineering (MIE), which makes the NoMoreClipboard electronic health records system, according to the Indiana Attorney General. (http://goo.gl/KdCbRX) The breach affected 3.9 million people. But it hit MIE's home state of Indiana especially hard. In all, 1.5 million Hoosiers had data exposed in the hack, almost a quarter of the state's population, the Security Ledger reports. (https://securityledger.com/2015/07/doctors-still-in-the-dark-after-electronics-records-hack-exposes-data-on-4-million/)

But the breach affects healthcare organizations from across the country: healthcare providers ranging from prominent hospitals to individual physicians’ offices and clinics are among the 195 customers of the NoMoreClipboard product that had patient information exposed in the breach. And, more than a month after the breach was discovered, some healthcare organizations whose patients were affected are still waiting for data from MIE on how many and which patients had information exposed.

“We have received no information from MIE regarding that,” said a spokeswoman for Fort Wayne Radiology Association (http://www.fwradiology.com/), one of hundreds of healthcare organizations whose information was compromised in the attack on MIE.

According to MIE’s statement, released on July 24, individuals who received services from Fort Wayne Radiology Association and a variety of other imaging and MRI centers were also compromised when a database relating to those healthcare providers was breached in the incident. That database contained data going back more than 17 years and involved another 44 healthcare organizations in three states: Indiana, Ohio and Michigan.

Link to Original Source

Submission + - White House Lures Mudge From Google to Launch a UL for Cyber->

chicksdaddy writes: The Obama White House has tapped famed hacker Peiter Zatko (aka “Mudge”) to head up a new project aimed at developing an “underwriters’ lab” for cyber security, The Security Ledger reports. (https://securityledger.com/2015/06/whitehouse-taps-google-advanced-projects-lead-for-software-safety-lab/)

Zatko announced the new initiative on Monday via Twitter (https://twitter.com/dotmudge). “The White House asked if I would kindly create a #CyberUL, so here goes,” he wrote.

The new organization would function as an independent, non-profit entity designed to assess the security strengths and weaknesses of products and to publish the results of its tests.

Zatko is a famed hacker and security luminary who cut his teeth with the Boston-based hacker collective The L0pht in the 1990s before moving on to work in private industry and, later, to become a program manager at DARPA in 2010. Though known for keeping a low profile, his scruffy visage (circa 1998) graced the pages of the Washington Post in a recent piece that revisited testimony that Mudge and other L0pht members gave to Congress about the dangers posed by insecure software. (http://www.washingtonpost.com/sf/business/2015/06/22/net-of-insecurity-part-3/)

Since leaving DARPA, Zatko has served as Deputy Director of Google's Advanced Technology and Projects division. He did not respond to requests for comment prior to publication.

Underwriters Laboratories — or "UL" — was founded in 1894 as a private firm dedicated to developing testing and safety standards for everything from fire extinguishers to lithium batteries to heating and cooling equipment and trash cans. UL has developed safety and performance standards for evaluating the quality of information technology equipment as well, but does not make a practice of testing software security or quality.

Link to Original Source

Submission + - Internet of Tatts? NIST Workshop Explores Automated Tattoo Identification->

chicksdaddy writes: Security Ledger reports on a recent NIST workshop dedicated to improving the art of automated tattoo identification. (https://securityledger.com/2015/06/internet-of-tattoos-nist-workshop-plumbs-body-art-algorithms/)

It used to be that the only place you’d see tattoos was at your local VA hospital (http://thebrigade.com/2014/07/08/lookin-back-on-naval-tats-47-photos/world-war-ii-tattoos-550-11/). No more. In the last 30 years, body art has gone mainstream. One in five adults in the U.S. has one. There are reality shows centered on tattoo parlors and even full-sleeve stick-on tattoos (http://inkwear.co.uk/extra-large-luxury-inkwear/) so that even kids and the faint of heart sport that David Beckham look.

For law enforcement and forensics experts, this is a good thing; tattoos are a great way to identify both perpetrators and their victims. Given the number and variety of tattoos, though, how to describe and catalog them? Clearly this is an area where technology can help, but it’s also one of those “fuzzy” problems that challenges the limits of artificial intelligence.

The National Institute of Standards and Technology (NIST) Tattoo Recognition Technology Challenge Workshop (http://www.nist.gov/itl/iad/201506_tattoo_workshop.cfm) challenged industry and academia to work towards developing automated, image-based tattoo matching technology. Participating organizations used an FBI-supplied dataset of thousands of images of tattoos from government databases. They were challenged to develop methods for identifying a tattoo in an image; identifying visually similar or related tattoos from different subjects; identifying the same tattoo image from the same subject over time; identifying a small region of interest that is contained in a larger image; and identifying a tattoo from a visually similar image like a sketch or scanned print.

According to NIST computer scientist Mei Ngan, “state-of-the-art algorithms fared quite well in detecting tattoos, finding different instances of the same tattoo from the same subject over time, and finding a small part of a tattoo within a larger tattoo.” However, they struggled to detect visually similar tattoos on different people and to match a tattoo image against a sketch or other non-photographic sources.

Link to Original Source

Submission + - The Internet Of Things Is The Password Killer We've Been Waiting For->

jfruh writes: You can't enter a password into an Apple Watch; the software doesn't allow it, and the UI would make doing so difficult even if it did. As we enter the brave new world of wearable and embeddable devices and omnipresent 'headless' computers, we may be seeing the end of the password as we know it. What will replace it? Well, as anyone who's ever unlocked a car door just by reaching for its handle with a key in their pocket knows, the answer may be the embeddable devices themselves.
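
To make the idea concrete, here is a toy C++ sketch (not drawn from the article, and using a stand-in hash where a real system would use a cryptographic MAC such as AES-CMAC) of the challenge-response pattern behind keyless car entry: the lock issues a random challenge and the token in your pocket answers with a keyed response, with no password typed anywhere.

    // Toy challenge-response between a "lock" and a proximity token.
    // Illustrative only: std::hash is a placeholder for a real MAC, and the
    // shared secret is a made-up constant provisioned into both devices.
    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <random>
    #include <string>

    static uint64_t keyed_response(uint64_t secret, uint64_t challenge) {
        // Combine the secret and the challenge into a response value.
        return std::hash<std::string>{}(std::to_string(secret) + ":" +
                                        std::to_string(challenge));
    }

    int main() {
        const uint64_t shared_secret = 0xC0FFEE42;

        // Lock side: generate a fresh random challenge for this attempt.
        std::mt19937_64 rng{std::random_device{}()};
        const uint64_t challenge = rng();

        // Token side: answers automatically over radio when in range;
        // the user never types anything.
        const uint64_t token_answer = keyed_response(shared_secret, challenge);

        // Lock side: recompute the expected answer and compare.
        const bool unlocked = (token_answer == keyed_response(shared_secret, challenge));
        std::cout << (unlocked ? "door unlocked" : "access denied") << "\n";
        return 0;
    }
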
Link to Original Source

Submission + - Home Depot using 2013 SCOTUS FISA Ruling to Challenge Data Breach Damages Suit->

chicksdaddy writes: As Citizens United and Bush v. Gore have shown us, there's no end to the trouble (https://en.wikipedia.org/wiki/Iraq_War) that can be caused by bad Supreme Court rulings. The latest example of that may be unfolding in an Atlanta courtroom, where home improvement giant Home Depot is attempting to use a 2013 Supreme Court ruling concerning the U.S. government’s FISA court to block efforts by its customers to sue the company over damages (https://digitalguardian.com/blog/are-data-breaches-victimless-crime) resulting from a 2014 incident in which more than 50 million credit card numbers (http://it.slashdot.org/story/14/09/19/1251234/home-depot-says-breach-affected-56-million-cards) were stolen from the company’s network.

Huh? Exactly. Home Depot in late May filed a motion (http://media.bizj.us/view/img/6039561/home-depot-dismiss.pdf) asking the U.S. District Court for the Northern District of Georgia to dismiss the case, citing Clapper v. Amnesty International, a 2013 case in which the Supreme Court ruled, in a 5-4 decision, that the plaintiffs lacked standing to sue the federal government because they couldn’t prove harm as a result of the actions of the secretive court. (http://www.scotusblog.com/case-files/cases/clapper-v-amnesty-international-usa/)

Home Depot’s argument rests on a couple of points that were also raised in Clapper. First: that there is no real harm, because “the few plaintiffs who allege some economic harm fail to explain why the losses they allege were not reimbursed.” That’s an apparent reference to the U.S. law requiring that consumers not be held liable for fraudulent charges on their credit cards. Such injuries, Home Depot argues, fail the Supreme Court’s test in Clapper that alleged injuries be “concrete, particularized, and actual or imminent.”

The second point made by Home Depot is that the individuals who claim they were injured base their claims on “the hypothetical future acts of third parties, which the Supreme Court held in Clapper is insufficient to establish Article III standing because such conduct is not ‘fairly traceable’ to the defendant.”

In other words: even though it is clear that cyber criminals 1) compromised Home Depot’s network, 2) stole credit card numbers belonging to millions of its customers and 3) dumped those numbers on cyber criminal exchanges, after which they were used for fraudulent purposes (http://krebsonsecurity.com/2014/09/banks-credit-card-breach-at-home-depot/), the plaintiffs in the case can’t prove that Home Depot’s failure to secure its network was the direct cause of the fraud. The plaintiffs’ “statutory claims fail because they have not identified any deceptive act by Home Depot and do not allege any actual damage flowing from Home Depot’s purported delay in providing notice.”

Link to Original Source

Submission + - Report: Evidence of Healthcare Breaches Lurks on Infected Medical Devices->

chicksdaddy writes: Evidence of serious and widespread breaches of hospital and healthcare networks is likely hiding on compromised and infected medical devices in clinical settings, including medical imaging machines, blood gas analyzers and more, according to a report by the firm TrapX. (https://securityledger.com/2015/06/x-rays-behaving-badly-devices-give-malware-foothold-on-hospital-networks/)

In the report, which will be released this week, the company details incidents of medical devices and management stations infected with malicious software at three separate customer engagements. According to the report, medical devices – in particular so-called picture archive and communication system (PACS) radiologic imaging systems – are all but invisible to security monitoring systems and provide a ready platform for malware infections to lurk on hospital networks, and for malicious actors to launch attacks on other, high-value IT assets.
Among the revelations contained in the report: malware at a TrapX customer site spread from an unmonitored PACS system to a key nurse’s workstation. The result: confidential hospital data was secreted off the network to a server hosted in Guiyang, China. Communications went out encrypted over port 443 (SSL), resulting in the leak of an unknown number of patient records. In another incident documented by the company, a healthcare institution at which TrapX installed its technology was found to have the Zeus and Citadel malware operating from infected blood gas analyzers in the hospital’s laboratory; the compromised analyzers provided a “backdoor” into the hospital’s network and were being used to harvest credentials from other systems on the network.

“The medical devices themselves create far broader exposure to the healthcare institutions than standard information technology assets,” the report concludes.

Radiologic and medical imaging systems such as PACS are particularly useful to attackers because they are heavily used and critical to the operation of almost every department. Of the three systems that TrapX found infected at customer sites, one was a PACS, the second was a medical X-ray scanner and the third was a collection of blood gas analyzers in a healthcare institution’s laboratory department used by critical care and emergency services.

To help validate its findings, TrapX acquired and tested a NOVA CCX blood gas analyzer of the type it encountered in the customer environments. As with the deployed devices, TrapX chose the version of the CCX that runs on Windows 2000, the model used in customer settings; Windows 2000 is, in fact, the operating system of choice for “many medical devices.” The version that TrapX obtained “did not seem to have been updated or patched in a long time,” the company writes.

“Based upon our experience and understanding of MEDJACK, our scientists believe that a large majority of hospitals are currently infected with malware that has remained undetected for months and in many cases years. We expect additional data to support these assertions over time," the report says.

Link to Original Source

Submission + - Is the OPM breach really a success story? Maybe. ->

chicksdaddy writes: How dire is the state of information security within the federal government? How utterly inept are federal agencies when it comes to protecting the sensitive data they collect on hundreds of millions of Americans? So dire and so inept that it is at least plausible to look at the recent revelation that the Office of Personnel Management let sensitive information on 4 million current and former federal employees fall into the hands of hackers and see it as a success story.

As The Security Ledger notes (https://securityledger.com/2015/06/success-story-opm-security-chief-trumpeted-new-approach-to-cyber/), the discovery of the breach by OPM comes on the heels of that 6,000-person agency's very public embrace of new tools and a new approach to cyber security. In a series of media interviews, position papers and public appearances, OPM's head of information security, Jeff Wagner (https://www.linkedin.com/pub/jeff-wagner/29/3b1/105), hailed OPM’s new approach to cyber security, which he described as “security through visibility." The new approach emphasized holistic analysis of security information and detection of anomalous behavior within OPM’s network, rather than monitoring for attacks from outside. At the heart of OPM’s new approach is technology from CSG Invotas (http://invotas.csgi.com/), which appears to be a kind of security automation platform that correlates security information and metrics from disparate products and provides features for automated responses to security “triggers” raised by the product.

It is unclear when OPM began using the technology, or exactly what role it played in the discovery of the most recent breach. But Wagner was evangelizing what he called the agency’s “security through visibility” approach in March and April, including an appearance at the RSA Conference in April, around the time that OPM said the breach was first discovered.

“We try to simplify our processes as much as possible and then you can look through your flow chart and see where you can leverage orchestration and where can I stop having humans do simple things?” Wagner told Federal News Radio in March (http://www.federalnewsradio.com/520/3817837/OPM-orchestrates-cyber-protections-through-automation)

That's good advice, as far as it goes. And that approach may have allowed OPM to detect a long-running breach of its internal network. Though in this case, OPM's thought leadership on security may be an instance of "do as I say, not as I do."

Link to Original Source

Submission + - Insurer denies healthcare breach claim citing lack of minimum required practices->

chicksdaddy writes: In what may become a trend, an insurance company is denying a claim from a California healthcare provider following the leak of data on more than 32,000 patients. The insurer, Columbia Casualty, charges that Cottage Health System did an inadequate job of protecting patient data.

In a complaint filed in U.S. District Court in California, Columbia alleges that the breach occurred because Cottage and a third party vendor, INSYNC Computer Solution, Inc. failed to follow “minimum required practices,” as spelled out in the policy. Among other things, Cottage “stored medical records on a system that was fully accessible to the internet but failed to install encryption or take other security measures to protect patient information from becoming available to anyone who ‘surfed’ the Internet,” the complaint alleges.

Disputes like this may become more common, as insurers anxious to get into a cyber insurance market that's growing by about 40% annually use liberally written exclusions to hedge against 'known unknowns' like lax IT practices, pre-existing conditions (like compromises) and so on. (http://www.itworld.com/article/2839393/cyber-insurance-only-fools-rush-in.html)

Link to Original Source

Submission + - Chris Roberts is the least important part of the airplane hacking story-> 1

chicksdaddy writes: Now that the news media is in full freak-out mode (http://www.cnn.com/2015/05/17/us/fbi-hacker-flight-computer-systems/index.html) about whether or not security researcher Chris Roberts hacked into the engine of a plane in flight and caused it to "fly sideways," security experts say it's time to take a step back from the crazy and ask what the real import of the plane hacking story is. The answer: definitely not Chris Roberts.

The real story that media outlets should be chasing isn't what Roberts did or didn't do on board a United flight in April, but whether there is any truth to longtime assurances from airplane makers like Boeing and Airbus that critical avionics systems aboard their aircraft are unreachable from systems accessible to passengers, the Christian Science Monitor writes. (http://www.csmonitor.com/World/Passcode/2015/0518/Did-a-hacker-really-make-a-plane-go-sideways)

And, on that issue, Roberts' statements and the FBI's actions raise as many questions as they answer. For one: why is the FBI suddenly focused on years-old research that has long been part of the public record?

“This has been a known issue for four or five years, where a bunch of us have been stood up and pounding our chest and saying, 'This has to be fixed,' " Roberts noted. “Is there a credible threat? Is something happening? If so, they’re not going to tell us,” he said.

Roberts isn’t the only one confused by the series of events surrounding his detention in April and the revelations about his interviews with federal agents.

“I would like to see a transcript (of the interviews),” said one former federal computer crimes prosecutor, speaking on condition of anonymity. “If he did what he said he did, why is he not in jail? And if he didn’t do it, why is the FBI saying he did?”

Josh Corman, the chief technology officer at the firm Sonatype, said the media and security industry's focus on Roberts' actions is a distraction. Mr. Corman, who is the founder of IAmTheCavalry.org, (https://www.iamthecavalry.org/) a grassroots group focused on issues where computer security intersects public safety and human life, said that the real question was about the safety and reliability of airplane avionics systems.

"The message has been that nothing the customer can do in the passenger cabin can affect the avionics," said Corman. However, the FBI affidavit (http://aptn.ca/news/wp-content/uploads/sites/4/2015/05/warrant-for-Roberts-electronics.pdf) suggests otherwise, citing interviews with Roberts going back to Februrary.

"So we're getting a mixed message about what can and can't be done," Corman said. "Either planes are not hackable, or they might be...irrespective or regardless of the veracity of [Roberts] claim."

Link to Original Source

Submission + - In a First: FDA issues Safety Advisory for Cyber Risk of Drug Pumps->

chicksdaddy writes: In what may be a first, the Food and Drug Administration (FDA) has issued a Safety Communication regarding vulnerabilities in a drug infusion pump by the firm Hospira that could make it easy prey for hackers, The Security Ledger reports.

The FDA Safety Communication regarding the Hospira LifeCare PCA3 and PCA5 Infusion Pump Systems (http://www.fda.gov/medicaldevices/safety/alertsandnotices/ucm446809.htm) was published on Wednesday. The notice advises hospitals that are using the pump to isolate it from the Internet and “untrusted systems.” It follows disclosures by two independent security researchers in recent months of a raft of software security vulnerabilities in the pumps, including Telnet and FTP services that were accessible without authentication.

The FDA said it and Hospira “have become aware of security vulnerabilities in Hospira’s LifeCare PCA3 and PCA5 Infusion Pump Systems” as well as the publication of “software codes, which, if exploited, could allow an unauthorized user to interfere with the pump’s functioning.”

An unauthorized user with malicious intent could “access the pump remotely and modify the dosage it delivers, which could lead to over- or under-infusion of critical therapies,” the safety advisory warned.

The advisory follows a warning by the Department of Homeland Security in April, in which DHS’s Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) warned that drug infusion pump management software sold by Hospira contains serious and exploitable vulnerabilities that could be used to remotely take control of the devices. (https://securityledger.com/2015/04/drug-pumps-vulnerable-to-trivial-hacks-dhs-warns/)

The issuance of a “Safety Communication” for software vulnerabilities is novel. The communications are typically used to issue specific and actionable guidance concerning safety-related issues with medical devices or products used by health professionals in the field.
This is believed to be the first such communication issued for a software vulnerability in a specific product. In June 2013, the FDA issued a safety communication regarding the cybersecurity of hospital networks and medical devices. (http://www.fda.gov/medicaldevices/safety/alertsandnotices/ucm356423.htm)

Link to Original Source

Submission + - Add GitHub dorking to list of enterprise security concerns->

chicksdaddy writes: IT World has a story today suggesting that GitHub may be a victim of its own success. Exhibit 1: "GitHub dorking," the use of GitHub's powerful internal search engine to uncover security holes and sensitive data in published code repositories. (http://www.itworld.com/article/2921135/security/add-github-dorking-to-list-of-security-concerns.html)
In a nutshell: GitHub's runaway popularity among developers is putting employers and development shops in a tough spot. As the recent story about Uber accidentally publishing database administrator credentials in a public GitHub repository suggests (http://arstechnica.com/security/2015/03/in-major-goof-uber-stored-sensitive-database-key-on-public-github-page/), it can be difficult even for sophisticated development organizations to grasp the nuances of how interactions with GitHub's public code repositories might work to undermine corporate security.

The ease with which developers can share and re-use code on GitHub is part of the problem, said Bill Ledingham, chief technology officer at Black Duck Software, which monitors some 300,000 open source software projects that use GitHub. Ledingham said leaked user credentials are inadvertent errors caused by developers too accustomed to the ease with which code can be borrowed, modified and resubmitted to GitHub.

"Developers in some cases are just taking the easiest path forward," he said. "They're checking in code or re-using it and not looking at some of these issues related to security."

Among the issues to watch out for are information leaks by way of vulnerabilities in GitHub.com or the GitHub API, leaks of intellectual property in published repositories and the leak of credentials and other shared secrets that could be used to compromise production applications.

Tools like the GitRob command line application developed by Michael Henriksen (http://michenriksen.com/blog/gitrob-putting-the-open-source-in-osint/) make it a simple matter to analyze all the public GitHub repositories associated with a particular organization. GitRob works by compiling the public repositories belonging to known employees of that firm, then flagging filenames in each repository that match patterns of known sensitive files.
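
As a rough illustration of that flagging step, here is a short C++ sketch; the patterns below are illustrative examples only, not GitRob's actual signature list, and real tools match against far more file types.

    // Sketch of the filename-flagging idea behind tools like GitRob.
    #include <iostream>
    #include <regex>
    #include <string>
    #include <vector>

    // Example patterns for files that commonly contain secrets.
    static const std::vector<std::regex> kSensitivePatterns = {
        std::regex(R"(\.pem$)"),                        // certificates / private keys
        std::regex(R"((^|/)id_rsa$)"),                  // SSH private keys
        std::regex(R"((^|/)\.env$)"),                   // environment files with secrets
        std::regex(R"(credential)", std::regex::icase), // anything named "credentials"
        std::regex(R"(\.sql$)")                         // database dumps
    };

    static bool looks_sensitive(const std::string &path) {
        for (const auto &pattern : kSensitivePatterns) {
            if (std::regex_search(path, pattern)) return true;
        }
        return false;
    }

    int main() {
        // File paths as they might appear in an organization's public repos.
        const std::vector<std::string> paths = {
            "README.md", "deploy/id_rsa", "config/.env",
            "certs/server.pem", "docs/overview.txt"
        };
        for (const auto &p : paths) {
            if (looks_sensitive(p)) std::cout << "FLAG: " << p << "\n";
        }
        return 0;
    }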

Companies that are doing software development need to take an active interest in GitHub, determining which employees and contractors are using it and verifying that no proprietary code or sensitive information is leaking into the public domain.

Internally, data leak prevention products can identify and block the movement of proprietary code. Concerted education for developers about best practices and proper security hygiene when downloading and uploading code to shared, searchable source repositories can help prevent head-slapping mistakes like the leak of database administrator credentials and private keys.

Link to Original Source

Submission + - No Justice for Victims of Identity Theft->

chicksdaddy writes: The Christian Science Monitor's Passcode features a harrowing account of one individual's experience of identity theft. (http://passcode.csmonitor.com/identity-stolen) CSM reporter Sara Sorcher recounts the story of "Jonathan Franklin" (not his real name), a New Jersey business executive who woke up to find thieves had stolen his identity and racked up $30,000 in a shopping spree at luxury stores including Versace and the Apple Store. The thieves even went so far as to use personal info stolen from Franklin to have the phone company redirect calls to his home number, which meant that calls from the credit card company about the unusual spending went unanswered.

Despite the heinousness of the crime and the financial cost, Sorcher notes that credit card companies and merchants both look on this kind of theft as a "victimless crime" and are more interested in getting reimbursed for their losses than in trying to pursue the thieves. Police departments are also unable to investigate these crimes, lacking both the technical expertise and the resources to do so. Franklin notes that he wasn't even required to file a police report to get reimbursed for the crime.
“As long as their loss is covered they move on to [handling] tomorrow’s fraud,” Franklin observes. And that makes it harder for victims like Franklin to move on. “In some way, I’m seeking some sense of justice,” Franklin said. “But it’s likely not going to happen.”

Link to Original Source

"Don't try to outweird me, three-eyes. I get stranger things than you free with my breakfast cereal." - Zaphod Beeblebrox in "Hithiker's Guide to the Galaxy"

Working...