chicksdaddy writes: In the "where have I read this before?" category: research by the security firm Rapid7 has uncovered security flaws in new, interactive “smart toys” by Fisher Price and other toy makers. The flaws could divulge personal information related to children and their families, The Security Ledger reports. (https://securityledger.com/2016/02/smart-toys-leak-info-on-kids-families/)
The company published information Tuesday (https://community.rapid7.com/community/infosec/blog/2016/02/02/security-vulnerabilities-within-fisher-price-smart-toy-hereo-gps-platform) that reveals flaws in the Smart Toy® line of connected playthings sold by the toy giant Mattel. The flaws could divulge information on the child owner of a toy, including the child's name, birthdate, and email address. Research into a line of GPS watches for kids uncovered a way that a remote attacker could gain access to a trusted social network used by the watches.
The research, by Rapid7 researcher Mark Stanislav, is just the latest to raise privacy and security concerns about interactive, Internet-connected “smart toys.” In September, similar research exposed glaring security weaknesses in a range of Internet-connected baby monitors. (https://community.rapid7.com/community/infosec/blog/2015/09/02/iotsec-disclosure-10-new-vulns-for-several-video-baby-monitors) Also, in December, the security firm Bluebox Security said that it discovered security flaws in the mobile application that comes with Mattel’s Hello Barbie. (https://securityledger.com/2015/12/hello-barbie-fails-another-security-test/) Among other things, the researchers warned that the application was plagued by a myriad of authentication woes that could leak owner passwords, or allow an attacker to re-use stolen credentials to access other, linked web properties.
Stanislav found similar concerns in Mattel’s Smart Toys line of products, which are sold under its Fisher-Price brand. By analyzing the toy’s hardware, software and network communications, Rapid7 determined that many of the platform’s web service API (application program interface) calls did not appropriately verify the “sender” of messages. That could allow an attacker to send requests to the toy that otherwise wouldn’t be authorized.
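A minimal sketch of the kind of sender verification Rapid7 found missing: requiring each API call to carry a signature keyed to a per-device secret, so forged requests are rejected. The secret, function names, and payload here are all hypothetical illustrations, not details of Fisher-Price's actual API.

```python
import hashlib
import hmac

# Hypothetical per-toy secret provisioned at pairing time.
SHARED_SECRET = b"per-device-secret"

def verify_sender(body: bytes, signature_hex: str) -> bool:
    """Accept an API call only if its HMAC signature matches our key."""
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(expected, signature_hex)

# A request signed with the right key passes; a forged one does not.
good = hmac.new(SHARED_SECRET, b'{"child":"status"}', hashlib.sha256).hexdigest()
print(verify_sender(b'{"child":"status"}', good))       # True
print(verify_sender(b'{"child":"status"}', "00" * 32))  # False
```

Without a check like this, any party that can reach the web service can issue requests "as" the toy or its owner.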
chicksdaddy writes: A report from a leading legal think tank argues that the spread of the Internet of Things will provide ample opportunities for law enforcement and intelligence agencies to spy on citizens, despite more widespread use of encryption, The Security Ledger reports. (https://securityledger.com/2016/02/with-internet-of-things-fbi-in-no-danger-of-going-dark/)
In a report released on Monday (https://cyber.law.harvard.edu/pubrelease/dont-panic/), scholars from The Berkman Center For Internet & Society at Harvard University make the case that arguments against the use of strong encryption by public figures like FBI chief James Comey are unfounded. Rather, technology adoption and current technology business models based on the monetization of data and metadata will create ample opportunities for online surveillance — even as they stymie widespread adoption of end-to-end encryption.
“Communications in the future will neither be eclipsed into darkness nor illuminated without shadow,” wrote the authors of the report in a blog post.
The report is a rebuke to arguments by Mr. Comey (https://www.fbi.gov/about-us/otd/going-dark-issue) and others that the adoption of strong encryption technology by Google, Facebook, Apple and others threatens to blind law enforcement to the doings of criminals, terrorists and others.
In response, The Berkman Center’s Berklett Cybersecurity Project concluded that, with so many businesses built on the practice of harvesting and monetizing data collected by mobile devices, encryption and data privacy will remain a low priority for the private sector.
Also, law enforcement and intelligence organizations stand to benefit tremendously from the expansion of devices connected to the Internet of Things, the report concludes.
“Networked sensors and the Internet of Things are projected to grow substantially, and this has the potential to drastically change surveillance,” the authors note. “Still images, video, and audio captured by these devices may enable real-time intercept and recording with after-the-fact access. Thus an inability to monitor an encrypted channel could be mitigated by the ability to monitor from afar a person through a different channel.”
In an interview with Security Ledger in January, one of the Berkman report’s authors, cryptography expert Bruce Schneier of the firm Resilient Systems, said that encryption back doors are entirely unnecessary, given the lax data security practiced by most technology firms.
“Back doors are not needed because the front doors are opened so wide,” he said. In practice, government agencies that want to conduct surveillance can simply piggyback on corporate surveillance.
chicksdaddy writes: Reports of a crippling cyber attack on the power grid in Israel (http://www.timesofisrael.com/steinitz-israels-electric-authority-hit-by-severe-cyber-attack/) appear to have been greatly exaggerated, as subsequent reports point to a simple ransomware outbreak on the office network of an industry regulator.
The reports of an attack on the Israeli grid follow a story in the Times of Israel (http://www.timesofisrael.com/steinitz-israels-electric-authority-hit-by-severe-cyber-attack/) quoting Israeli Energy Minister Yuval Steinitz at a Tel Aviv cyber security conference. It comes amidst a cold snap in the country that is causing power demands to spike, and just weeks after an apparent cyber attack on power substations in Ukraine darkened some 80,000 households.
“This is a fresh example of the sensitivity of infrastructure to cyberattacks, and the importance of preparing ourselves in order to defend ourselves against such attacks,” Steinitz is quoted saying in the Times of Israel report.
But the events in Israel may be far more quotidian than Steinitz's comments or the sensational headlines that followed would suggest. Rather than a crippling cyber attack on the country’s grid, the incident Steinitz referred to appears to be a ransomware outbreak on PCs and notebook computers used by staff at a government agency.
A report on Wednesday by the Israeli web site YNet News (http://www.yediot.co.il/articles/0,7340,L-4758366,00.html) describes what appears to be a typical ransomware malware infection within the offices of the Electricity Authority.
In a post on the web site of The SANS Institute (https://ics.sans.org/blog/2016/01/27/context-for-the-claim-of-a-cyber-attack-on-the-israeli-electric-grid), Robert M. Lee said the incident underscores the inherent danger in reporting on cyber attacks, which take many different forms and have many different motivations.
“This once again stresses the importance around individuals and media carefully evaluating statements regarding cyber attacks and infrastructure as they can carry significant weight,” Lee wrote.
chicksdaddy writes: The world’s governments are on notice that their critical infrastructure is vulnerable after an apparent cyberattack darkened 80,000 households in three regions of Ukraine last month. (http://hardware.slashdot.org/story/16/01/11/150241/ukraine-power-station-outage----enabled-by-malware-but-not-caused-by-malware) But on the question of safeguarding utilities, operators of power plants, water treatment facilities, and other industrial operations might do well to worry more about Instagram than hackers, according to a report by Christian Science Monitor Passcode.
Speaking at a gathering of industrial control systems experts last week, Sean McBride of the firm iSight Partners said that social media oversharing is a wellspring of information that could be useful to attackers interested in compromising critical infrastructure. Among the valuable information he's found online: workplace selfies on Instagram and Facebook that reveal details of supervisory control and data acquisition, or SCADA, systems. (http://www.csmonitor.com/World/Passcode/2016/0115/Worried-about-cyberattacks-on-US-power-grid-Stop-taking-selfies-at-work)
"No SCADA selfies!" said Mr. McBride at the S4 Conference in Miami Thursday. "Don’t make an adversary’s job easier."
iSight has found examples of SCADA selfies at sensitive facilities and warns that such photos may unwittingly reveal critical information that operators would prefer to keep secret. The firm's researchers have also discovered panoramic pictures of control rooms and video walk-throughs of facilities. Corporate websites can be just as revealing, divulging organization charts and lists of employees that serve as valuable sources of information for would-be attackers, says McBride.
That kind of slip-up has aided critical infrastructure attacks in the past. Photographs published in 2008 by former Iranian President Mahmoud Ahmadinejad's press office provided western nuclear analysts with detailed views of the insides of the Natanz facility and Iran’s uranium enrichment operation – what an expert once described as "intel to die for." (http://www.nytimes.com/2008/04/29/science/29nuke.html?_r=0)
chicksdaddy writes: File this under "It's about time." The U.S. Internal Revenue Service has announced that it will treat identity theft protection as a non-taxable, non-reportable benefit that companies can offer — even when the company in question hasn't experienced a data breach, and regardless of whether it is offered by an employer to employees, or by other businesses (such as online retailers) to its customers, the blog E for ERISA reports. (https://eforerisa.wordpress.com/2016/01/10/irs-extends-tax-free-status-to-proactive-identity-theft-protection/) In short: companies can now deduct the cost of offering identity theft protection as a benefit for employees or extending it to customers, even if their data hasn't been exposed to hackers.
The announcement comes only four months after an earlier announcement (https://eforerisa.files.wordpress.com/2016/01/earlier-announcement.pdf) by the IRS that it would treat identity theft protection offered to employees or customers in the wake of a data breach as a non-taxable event. Comments to the IRS following the earlier decision suggested that many businesses view a data breach as “inevitable” rather than as a remote risk.
The truth of that statement was made clear to the IRS itself, which had to provide identity theft protection earlier this year in response to a hack of its online database of past-filed returns and other filed documents which ultimately affected over 300,000 taxpayers. (http://www.nytimes.com/2015/08/18/us/politics/hacking-of-tax-returns-more-extensive-than-first-reported-irs-says.html) The new IRS guidance could be a boon to providers of identity protection services such as Experian and Lifelock, though maybe not as much as one would expect. Data from Experian suggests that consumer adoption rates for identity theft protection services are low. Fewer than 10% of those potentially affected by a breach opt for free identity protection services when they are offered. For very large breaches that number is even lower — in the single-digit percentages. (https://securityledger.com/2015/05/amid-rampant-data-theft-consumers-left-breached-and-burned-out/)
chicksdaddy writes: General Motors (GM) has become the latest "old economy" firm to launch a program to entice white hat hackers and other experts to delve into the inner workings of its products in search of security flaws, The Security Ledger reports. (https://securityledger.com/2016/01/gm-launches-bug-bounty-program-minus-the-bounty/)
The company launched a bug bounty on January 5th on the web site of Hackerone (https://hackerone.com/gm), a firm that manages bounty programs on behalf of other firms, promising “eternal glory” to security experts who relay information on “security vulnerabilities of General Motors products and services.”
Despite a $47 billion market capitalization, however, GM is not offering monetary rewards – at least not yet. A page on Hackerone detailing how vulnerability reporters will be thanked reads “Be the first to receive eternal glory,” but does not spell out exactly what rewards are proffered. Judging from the description of the program, the "prize" for reporting a vulnerability to GM appears to be a promise by GM not to sue you for finding it.
The company earned immediate praise from security researchers Chris Valasek and Charlie Miller, whose research exposing security holes in vehicles manufactured by Fiat Chrysler attracted worldwide attention. “Great step in the right direction to Massimilla and the whole GM team,” wrote Chris Valasek of Uber (https://www.twitter.com/nudehaberdasher) in a Twitter post, an apparent reference to Jeff Massimilla, GM’s Chief of Cybersecurity. Valasek said offering security researchers a contact and a way to disclose vulnerabilities was important, even in the absence of a monetary reward.
Still, some researchers are skeptical that firms are willing to “walk the walk” when it comes to addressing and fixing reported vulnerabilities. “If we waited for Chrysler before disclosing the jeep hack, I bet it still wouldn’t be fixed,” wrote Valasek’s research partner Charlie Miller (https://www.twitter.com/0xCharlie) on Twitter.
chicksdaddy writes: Data breaches have become so common that they’ve taken on a kind of formality. One of the phrases that often accompany such incidents goes something like this: “[Company X] has no evidence that any of the stolen information has been used inappropriately.” Or you might read that “there is no evidence of fraud linked to the stolen data.”
Such assurances are generally interpreted as wishful thinking. But when courts are asked to weigh in on the question of damages resulting from cyber incidents in civil suits, the question of what harm resulted from the incident is very different – and very real. To put it simply: if nobody can prove harm resulting from a cyber incident, a company can’t be held liable for those damages, as this blog post notes (https://digitalguardian.com/blog/missing-michaels-data-breach-harm-consumers) over at Digital Guardian.
That fact was underscored again late last month, when a federal judge in U.S. District Court for the Eastern District of New York dismissed a class action suit against arts and crafts giant Michaels Stores (http://www.bloomberglaw.com/public/desktop/document/Whalen_v_Michael_Stores_Inc_Docket_No_214cv07006_EDNY_Dec_02_2014?1452175387) that was filed in the wake of that company’s widely reported data breach. As part of her ruling, the judge, Joanna Seybert, cited a legal precedent set by the recent Supreme Court ruling in “Clapper v. Amnesty International,” (https://securityledger.com/2015/06/scotus-fisa-ruling-a-tool-to-disenfranchise-data-theft-victims/) concluding that the plaintiffs hadn’t proven that any harm resulted from the Michaels breach.
“Simply put, Whalen has not asserted any injuries that are ‘certainly impending’ or based on a 'substantial risk that the harm will occur,'” Seybert wrote in her decision, referring to Mary Jane Whalen, the Michaels customer in whose name the class action suit was filed. “Thus, Whalen’s claims are DISMISSED WITHOUT PREJUDICE for lack of subject matter jurisdiction,” Seybert concluded.
This isn’t to say that Whalen or other Michaels stores customers were not the target of fraudsters. In fact, Whalen’s attorneys presented evidence that her stolen credit card (or a clone of it) was presented for payment fraudulently in Ecuador: at a local gym and at a venue that sold concert tickets. But regulations in the U.S. exempt consumers from paying the cost of credit card fraud, and Whalen wasn’t asked to pay any unreimbursed charges as a result of the fraudulent use, the court noted.
Whalen’s other attempts to establish “costs” associated with the breach were also disregarded. They included the cost of credit monitoring services and the cost (in time and effort) to obtain replacement cards, the intrinsic value of her credit card information and the risk of future fraud tied to the theft of her credit card data.
chicksdaddy writes: Just a month after an FBI official admitted that his agency sometimes advised companies stricken with ransomware to pay the ransom (https://securityledger.com/2015/10/fbis-advice-on-cryptolocker-just-pay-the-ransom/), two U.S. Senators are requesting information about federal agencies’ encounters with ransomware, and whether Uncle Sam might have paid ransoms, too. (https://securityledger.com/2015/12/senators-probe-governments-history-with-ransomware/)
chicksdaddy writes: How do you know when the Nest Cam monitoring your house is “on” or “off”? It’s simple: just look at the little power indicator light on the front of the device — and totally disregard what it is telling you.
The truth is: the Nest Cam is never “off” despite an effort by Nest and its parent Google to make it appear otherwise, The Security Ledger reports (https://securityledger.com/2015/11/green-light-or-no-nest-cam-never-stops-watching/). That's according to an analysis of the Nest Cam by the firm ABI Research, which found that turning the Nest Cam “off” using the associated mobile application only turns off the LED power indicator light on the front of the device. (https://www.abiresearch.com/press/nest-cam-works-around-clock/) Under the hood, the camera continues to operate and, according to ABI researcher Jim Mielke, to monitor its surroundings: noting movement, sound and other activity when users are led to believe it has powered down.
“Basically, you have an LED that says ‘on’ and ‘off ‘ that shuts off – and that’s about it,” Mielke said when asked to describe what happens when a user turns the Nest Cam off. Mielke is the Vice President of Teardowns at ABI Research and the author of a report: “Teardown Phone/Device: Nest Cam Works Around the Clock.”
Mielke reached that conclusion after analyzing Nest Cam's power consumption. Typically a shutdown or standby mode would reduce current by as much as 10 to 100 times, Mielke told Security Ledger. But the Google Nest Cam’s power consumption was almost identical in “shutdown” mode and when fully operational, dropping from 370 milliamps (mA) to around 340mA. The slight reduction in power consumption for the Nest Cam when it was turned “off” correlates with the disabling of the LED power light, given that LEDs typically draw 10-20mA.
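Mielke's reasoning is a simple back-of-the-envelope calculation, sketched below using the figures quoted in the report (370mA on, 340mA "off", LEDs drawing 10-20mA); the variable names are illustrative.

```python
# Figures from the ABI Research teardown as quoted in the article.
on_ma = 370        # current draw while fully operational (mA)
off_ma = 340       # current draw after being turned "off" in the app (mA)
led_ma = (10, 20)  # typical current draw of an indicator LED (mA)

# The drop when "off" is roughly what the LED alone would account for.
delta = on_ma - off_ma  # 30 mA

# A genuine standby mode cuts draw by 10-100x, per Mielke.
real_standby_ceiling = on_ma / 10  # 37 mA at the *least* aggressive end

print(f"drop when 'off': {delta} mA; LED alone explains {led_ma[0]}-{led_ma[1]} mA")
print(f"a real standby mode would draw no more than ~{real_standby_ceiling:.0f} mA")
print(f"observed 'off' draw: {off_ma} mA")
```

Since 340mA is nearly ten times what even a modest standby mode would draw, the camera plainly isn't powering down.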
In a statement to The Security Ledger, Nest Labs spokesperson Zoz Cuccias acknowledged that the Nest Cam does not fully power down when the camera is turned off from the user interface (UI).
“When Nest Cam is turned off from the user interface (UI), it does not fully power down, as we expect the camera to be turned on again at any point in time,” Cuccias wrote in an e-mail. “With that said, when Nest Cam is turned off, it completely stops transmitting video to the cloud, meaning it no longer observes its surroundings.”
The privacy and security implications are serious. “This means that even when a consumer thinks that he or she is successfully turning off this camera, the device is still running, which could potentially unleash a tidal wave of privacy concerns,” Mielke wrote.
chicksdaddy writes: RSA researchers issued a report today (https://blogs.rsa.com/wp-content/uploads/2015/11/GlassRAT-final.pdf) about a remote access trojan (or RAT) program dubbed “GlassRAT” that they are linking to sophisticated and targeted attacks on “Chinese nationals associated with large multinational corporations," The Security Ledger reports. (https://securityledger.com/2015/11/report-newly-discovered-glassrat-lurked-for-years-undetected/)
Discovered by RSA in February of this year, GlassRAT was first created in 2012 and “appears to have operated, stealthily, for nearly 3 years in some environments,” in part with the help of a legitimate certificate from a prominent Chinese software publisher and signed by Symantec and Verisign, RSA reports.
The software is described as a “simple but capable RAT” that packs reverse shell features that allow attackers to remotely control infected computers as well as transfer files and list active processes. The dropper program associated with the malware poses as the Adobe Flash player, and was named “Flash.exe” when it was first detected.
RSA discovered it on the PC of a Chinese national working for a large, U.S. multi-national corporation. RSA had been investigating suspicious network traffic on the enterprise network. RSA says telemetry data and anecdotal reports suggest that GlassRAT may principally be targeting Chinese nationals or other Chinese speakers, in China and elsewhere, since at least early 2013.
RSA said it has discovered links between GlassRAT and earlier malware families including Mirage, Magicfire and PlugX. Those applications have been linked to targeted campaigns against the Philippine military and the Mongolian government. (https://securityledger.com/2015/10/security-firm-chinese-govt-hackers-still-active-despite-truce/)
chicksdaddy writes: The Christian Science Monitor has a story on Facebook's increasingly precarious position as the world's largest social network in an age of global terror, as last week's coordinated attacks in Paris underscored. From the article (http://www.csmonitor.com/World/Passcode/2015/1120/Facebook-s-balancing-act-between-trust-and-security#):
"The network became a powerful tool for relaying first-hand accounts of the violence, a means for those affected by violence to "check in" with friends and loved ones, and served as a central rallying point to voice support for terrorized Parisians. In fact, the Paris attacks marked the first time that Facebook’s Safety Check feature was made available for a terrorist attack. But as The New York Times reported (http://www.nytimes.com/interactive/2015/11/15/world/europe/manhunt-for-paris-attackers.html?smid=tw-share&_r=1), Facebook was also a conduit for the Paris terrorists to communicate and coordinate with each other. For that reason, the company faces growing pressure from law enforcement and politicians to disclose information about – and tamp down on – the darker corners of the social network inhabited by militant groups and their supporters."
Caught in the difficult position of balancing the privacy and civil rights of its users with government demands for data, the company increasingly appears to see itself as an advocate for and defender of the rights of users in the face of unwarranted government intrusion.
Speaking in Baltimore last month, Alex Stamos, Facebook's chief security officer, said that "trust" will become the defining commodity of the 21st century, just as oil had been in the 20th century. Facebook’s future and that of similar companies hinges on its ability to foster trust within its massive user base. That trust, he said, would be the product of Facebook convincing users that it "makes choices in their best interests." And, more importantly, that the company "backs up those choices even in the face of adversity."
Stamos’s words come after Facebook has taken steps in the past year to shore up its reputation as a champion of user privacy. In October, the company announced that it would begin warning users who were the target of state-sponsored hackers, following in the footsteps of companies like Google. Behind the scenes, the company also migrated more than 700 million users of its massively popular WhatsApp chat system to an open source peer-to-peer encryption scheme known as TextSecure by Open Whisper Systems, earning it the ire of the law enforcement and intelligence communities. (http://yro.slashdot.org/story/14/11/20/1421216/whatsapp-to-offer-end-to-end-encryption)
Speaking of the controversy over the growing use of strong encryption to secure communications, however, Stamos flatly rejected the thinking of senior officials such as CIA Director Brennan, who argue that “secure” backdoors can be created in technology so that intelligence agencies can surveil communications. "There is no such thing as 'partial strong encryption,' " Stamos said.
chicksdaddy writes: In the absence of strong action from federal or international regulators, there's little to compel companies to invest more in information security. That means that even companies that are caught doing a terrible job protecting customer data (Target, Home Depot, Anthem, TJX) find that the price of screwing up is low. In fact, the biggest cost to breached firms is often for lawyers, while seemingly expensive items, like credit monitoring services to protect customers, are rarely needed (https://securityledger.com/2015/05/amid-rampant-data-theft-consumers-left-breached-and-burned-out/). In such an environment, the math comparing "cost to prevent" with "cost of doing nothing" can seem unconvincing.
One thing that could change this distressing state of affairs is for the market itself to impose high costs on companies that take a pass on their cyber security. Insurance is one way to do that, and there is evidence that insurers are taking a tougher stand with companies they back (http://it.slashdot.org/story/15/05/27/0344205/insurer-wont-pay-out-for-security-breach-because-of-lax-security). The other big stick is held by credit ratings agencies, whose evaluations of private sector and public sector organizations determine how easily and cheaply they can finance their continued operation. Needless to say: a bad credit rating from a major ratings agency can significantly raise a firm’s borrowing cost, hampering plans for growth and expansion and even imperiling business operations.
And there’s growing evidence that ratings agencies are prepared to adjust credit ratings downward based on knowledge of damaging cyber incidents, including the loss or theft of data. Digital Guardian's blog notes a post this week from S&P analyst Laurence Hazell (https://digitalguardian.com/blog/sp-cyber-joins-climate-change-risk-corporate-credit-ratings) stating that Standard & Poor’s considers cyber risk a component of so-called “ESG” (environmental, social and governance) risks that affect overall credit risk and ratings, putting cyber alongside the risks of man-made climate change as a potential cause of sudden shifts in an organization’s credit rating. (https://goo.gl/Cjmxqd)
“While different in so many important respects from the issues regarding the natural environment, we are beginning to make inroads to the assessment of credit impacts from cyber-crime and cyber-breaches,” he wrote.
S&P published two reports in June that laid out the case for taking cyber risk into account when analyzing organizations’ creditworthiness. The company makes clear that cyber incidents haven’t yet resulted in a ratings downgrade, even at financial and retail organizations that have been the victims of major cyber attacks. But the credit ratings agency suggested it was more a matter of “when” than “if” such a downgrade would happen.
“It’s not difficult to envision scenarios in which criminal or state-sponsored cyber-attacks would result in significant economic impacts, business interruption, theft, or damage to reputation,” the company wrote.
Still unclear is what might contribute to a cyber incident resulting in a credit downgrade, given that retailers, banks, financial services firms and other high profile organizations are the target of almost daily attacks – some of them successful. According to this report in Insurance Journal, “the most likely adverse ratings impact would stem from an attack weakening a target company’s business profile, most likely in terms of future revenue and profitability, and by causing deterioration in credit metrics.”
chicksdaddy writes: There's such a fine line between clever and...criminal. That's the unmistakable subtext of the latest FireEye report on a new "APT" style campaign that's using methods and tools that are pretty much indistinguishable from those used by media websites and online advertisers. The difference? This time the information gathered from individuals is being used to soften up specific individuals with links to international diplomacy, the Russian government and the energy sector, The Security Ledger reports. (https://securityledger.com/2015/11/super-cookies-web-analytics-behind-malicious-profiling/)
The company released a report this week (https://www2.fireeye.com/rs/848-DID-242/images/rpt-witchcoven.pdf) that presented evidence of a widespread campaign that combines so-called “watering hole” web sites with a tracking script dubbed “WITCHCOVEN” and Samy Kamkar's Evercookie, the super persistent web tracking cookie (http://yro.slashdot.org/story/10/09/22/1215236/introducing-the-invulnerable-evercookie). The tools are used to assemble detailed profiles on specific users including the kind of computer they use, the applications and web browsers they have installed and what web sites they visit.
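The profiling FireEye describes can be sketched, in spirit, as collapsing a handful of machine attributes into a stable visitor identifier. Every field and value below is invented for illustration; none of it comes from the WITCHCOVEN report itself.

```python
import hashlib
import json

# Hypothetical attributes a tracking script might harvest from a visitor's
# browser (user agent, screen size, installed plugins, and so on).
profile = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) ...",
    "screen": "1920x1080",
    "timezone": "UTC+3",
    "plugins": ["Flash", "Java", "Silverlight"],
}

# Serializing with sorted keys makes the hash deterministic, so the same
# machine produces the same ID on every visit to a compromised site.
fingerprint = hashlib.sha256(
    json.dumps(profile, sort_keys=True).encode()
).hexdigest()[:16]

print("visitor id:", fingerprint)
```

Pair an identifier like this with a persistence mechanism such as Evercookie and an operator can recognize a specific target across sites and sessions, which is exactly the capability FireEye flags as going beyond normal analytics.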
While the aims of those behind the campaign aren’t known, FireEye said the use of compromised web sites and surreptitious tracking scripts doesn’t bode well.
“While many sites engage in profiling and tracking for legitimate purposes, those activities are typically conducted using normal third-party browser-based cookies and commercial ad services and analytics tools,” FireEye wrote in its report. “In this case, while the individuals behind the activity used publicly available tools, those tools had very specific purposes....This goes beyond ‘normal’ web analytics,” the company said.
In other words, TV viewing patterns will be used to serve ads to any device user who happens to be connected to the same network as the Vizio Smart TV — an obvious problem for households with a mix of say... adults and children?!
Vizio does provide instructions for disabling the Smart Interactivity features and says that “connected” features of the device aren’t contingent on monitoring. That's better than some other vendors. In 2014, for example, LG used a firmware update for its smart televisions to link the "smart" features of the device to viewer tracking and monitoring. Viewers who applied the update but refused to consent to monitoring were unable to use services like Netflix and YouTube. (https://securityledger.com/2014/05/bad-actor-with-update-lg-says-no-monitoring-no-smart-tv/)
chicksdaddy writes: One of the big challenges facing companies and individuals that wish to secure the Internet of Things is the sheer complexity of connected devices – and the (many) often subtle dependencies they create. Exhibit #1: Tesla Motors' sleek Model S electric sedans, which retail for over $60,000, and the AR Drone (http://ardrone2.parrot.com/) from the French firm Parrot SA – a consumer quadcopter that starts at a reasonable $405 on Amazon.com. What do the two products have in common? Lots, according to security expert Rob Graham of the firm Errata Security.
Writing on the Errata blog earlier this week (http://blog.erratasec.com/2015/10/omg-machines-are-breeding-mankind-is.html), Graham noted that the Tesla and the AR Drone appear nearly identical to a wireless sniffer when connected to the same wireless network, with similar MAC (media access control) addresses beginning with the code “90:03:B7” for Parrot SA. A coincidence? Hardly: Tesla’s sedans appear to use Parrot’s wireless access software to manage connections to wireless hubs at its servicing centers, where cars upload data to a Tesla Service access point upon arrival, Graham wrote. It's the same software that allows Parrot’s AR Drone to serve as an access point for mobile phones, like the iPhone, to connect to and navigate the device.
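Graham's observation can be reproduced with very little code: a sniffer only needs the first three octets of a MAC address (the IEEE-assigned vendor prefix, or OUI) to guess the manufacturer. The "90:03:B7" prefix is from the article; the device list below is invented for illustration.

```python
# Parrot SA's vendor prefix, as noted in Graham's blog post.
PARROT_OUI = "90:03:B7"

def oui(mac: str) -> str:
    """Return the first three octets (the vendor prefix) of a MAC address."""
    return ":".join(mac.upper().split(":")[:3])

# Hypothetical devices seen on a wireless network.
devices = {
    "90:03:B7:12:34:56": "Tesla Model S (via Parrot wireless module)",
    "90:03:B7:AB:CD:EF": "Parrot AR Drone",
    "F0:99:BF:00:11:22": "some other device",
}

for mac, label in devices.items():
    vendor = "Parrot SA" if oui(mac) == PARROT_OUI else "other"
    print(f"{mac} -> {vendor} ({label})")
```

This is why the $60,000 sedan and the $405 drone look nearly identical to a passive observer: both inherit the same vendor prefix from the same embedded wireless software stack.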
As Security Ledger notes (https://securityledger.com/2015/11/under-the-hood-wireless-software-links-teslas-drones/): the problem, for automakers, is that long and complex software supply chains may introduce what are sometimes referred to as “common mode” vulnerabilities into vehicles and other life-sustaining equipment.
“‘Supply chain’ risks are the next battle ground,” Graham wrote in an e-mail. As yet undiscovered security holes in third-party vendors like Parrot could pose safety and privacy risks to consumers through a wide range of products. That’s especially true if the vendor who manages the customer relationship chooses not to pass security fixes through to their customers, or rewrite code to take responsibility for risks.