Submission + - Trivial Bypass of PayPal Two-Factor Authentication On Mobile Devices (securityledger.com)

chicksdaddy writes: The Security Ledger reports on research from DUO Labs that exposes a serious gap in protection with PayPal Security Key, the company's two-factor authentication service.

According to DUO (https://duosecurity.com/blog/duo-security-researchers-uncover-bypass-of-paypal-s-two-factor-authentication), PayPal's mobile app doesn't yet support Security Key and displays an error message to users with the feature enabled when they try to log in to their PayPal account from a mobile device, terminating their session automatically.

However, researchers at DUO noticed that the PayPal iOS application would briefly display a user’s account information and transaction history prior to displaying that error message and logging them out. The behavior suggested that mobile users were, in fact, being signed in to their account prior to being logged off. The DUO researchers investigated: intercepting and analyzing the Web transaction between the PayPal mobile application and PayPal’s back end servers and scrutinizing how sessions for two-factor-enabled accounts versus non-two-factor-enabled accounts were handled.

They discovered that the API uses the OAuth technology for user authentication and authorization, but that PayPal only enforces the two-factor requirement on the client – not on the server.

An attacker with knowledge of the flaw and a PayPal user's login and password could easily evade the requirement to enter a second factor before accessing the account and transferring money.
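The fix DUO describes is conceptually simple: the second-factor check has to live on the server, not in the client. A minimal sketch in Python, with entirely hypothetical names and an in-memory user table — this is not PayPal's actual API:

```python
# Hypothetical sketch: two-factor enforcement belongs on the server.
# Names, fields and the user table are invented for illustration;
# this is not PayPal's actual API.

USERS = {
    "alice": {"password": "hunter2", "2fa_enabled": True, "expected_otp": "123456"},
}

def issue_oauth_token(username, password, otp=None):
    """Server-side login handler: the second factor is verified *here*."""
    user = USERS.get(username)
    if user is None or user["password"] != password:
        raise PermissionError("bad credentials")
    # The vulnerable pattern skips this block and trusts the mobile app
    # to demand the one-time password after the token is issued.
    if user["2fa_enabled"] and otp != user["expected_otp"]:
        raise PermissionError("second factor required")
    return {"access_token": "token-for-" + username}
```

In the vulnerable pattern, the server hands back a valid OAuth token regardless and relies on the app to enforce the one-time password; an attacker who speaks the API directly simply skips that step.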

Submission + - FTC Lobbies To Be Top Cop For Geolocation (securityledger.com)

chicksdaddy writes: As the U.S. Senate considers draft legislation governing the commercial use of location data, The Federal Trade Commission (FTC) is asking Congress to make it — not the Department of Justice — the chief rule maker and enforcer of policies for the collection and sharing of geolocation information, the Security Ledger reports. (https://securityledger.com/2014/06/ftc-wants-to-be-top-cop-on-geolocation/)

Jessica Rich, Director of the FTC Bureau of Consumer Protection, told the Senate Judiciary Committee’s Subcommittee on Privacy, Technology and the Law that the Commission would like to see changes to the wording of the Location Privacy Protection Act of 2014 (LPPA) (http://www.ftc.gov/news-events/press-releases/2014/06/ftc-testifies-geolocation-privacy). The LPPA is draft legislation introduced by Sen. Al Franken that carves out new consumer protections for location data sent and received by mobile phones, tablets and other portable computing devices. Rich said that the FTC, as the U.S. Government’s leading privacy enforcement agency, should be given rule-making and enforcement authority for the civil provisions of the LPPA. The current draft of the law instead gives that authority to the Department of Justice (DOJ).

The LPPA updates the Electronic Communications Privacy Act to take into account the widespread availability and commercial use of geolocation information. The LPPA requires that companies get individuals’ permission before collecting location data from smartphones, tablets, or in-car navigation devices, and before sharing it with others.

It would prevent what Franken refers to as “GPS stalking” by barring companies from collecting location data in secret. The LPPA also requires companies to reveal the kinds of data they collect and how they share and use it, bans the development, operation, and sale of GPS stalking apps, and requires the federal government to collect data on GPS stalking and facilitate public reporting of it. (http://www.franken.senate.gov/files/documents/140327Locationprivacy.pdf)

Submission + - With Firmware Update, LG Links Smart TV Features To Viewer Monitoring (securityledger.com)

chicksdaddy writes: Can electronics giant LG force owners of its Smart TVs to agree to have their viewing habits monitored or lose access to the smart features they've already paid for?

That's the question being raised by LG customers and privacy advocates after firmware updates to some LG Smart TVs removed a check-box opt-in that allowed TV owners to consent to having their viewing behavior monitored by LG. In its place, LG is asking users to consent to a slew of intrusive monitoring activities as part of a lengthy new Terms of Service Agreement and Privacy Statement, or see many of the 'smart' features on their sets disabled.

Among other things, LG is asking for access to customers’ “viewing information” – interactions with program content, including live TV, movies and video on demand. That might include the programs you watch, the terms you use to search for content, and actions taken while viewing.

Some LG Smart TV owners are crying foul (http://www.techdirt.com/articles/20140511/17430627199/lg-will-take-smart-out-your-smart-tv-if-you-dont-agree-to-share-your-viewing-search-data-with-third-parties.shtml). They include Jason Huntley (aka @DoctorBeet), a UK-based IT specialist who, in November, blew the whistle on LG's practice of collecting viewing data without users' consent. (http://doctorbeet.blogspot.com/2013/11/lg-smart-tvs-logging-usb-filenames-and.html) Huntley said he views the new privacy policy as a way for LG to get legal cover for the same kind of omnibus customer monitoring it attempted earlier without notice or consent. “If you read the documents, they’ve covered themselves for all the activity that was going on before. But now they’re allowed to do it legally.”

It is unclear whether the firmware updates affect LG customers in the U.S. or just the EU. If they do, privacy experts say they may run afoul of US consumer protection laws. “My initial reaction is that this is an appalling practice,” Corynne McSherry, the Intellectual Property Director at the Electronic Frontier Foundation (EFF), told The Security Ledger. (https://securityledger.com/2014/05/bad-actor-with-update-lg-says-no-monitoring-no-smart-tv/) “Customers want and deserve to be able to retain a modicum of privacy in their media choices, and they shouldn’t have to waive that right in order for their TV (or any other device) to keep working as expected.”

Submission + - Heartbleed Exposes Critical Infrastructure's Patch Problem (veracode.com)

chicksdaddy writes: The good news about the Heartbleed vulnerability in OpenSSL is that most of the major sites that were found to be vulnerable to the flaw have been patched. (http://www.computerworld.com/s/article/9247787/Most_but_not_all_sites_have_fixed_Heartbleed_flaw)

The bad news: the vulnerable high-profile web sites are just the tip of the iceberg – or, more accurately, the head in front of a very long tail of vulnerable web sites and applications. Many of those applications and sites are among the systems that support critical infrastructure. For evidence of that, look no further than the alert issued Thursday by the Department of Homeland Security’s Industrial Control System (ICS) Computer Emergency Readiness Team (CERT). The alert – an update to one issued last month – includes a list of 43 ICS applications that are known to be vulnerable to Heartbleed. (http://ics-cert.us-cert.gov/advisories/ICSA-14-135-05) Just over half have patches available for the Heartbleed flaw, according to ICS-CERT data. But that leaves some twenty applications vulnerable, including industrial control products from major vendors like Siemens, Honeywell and Schneider Electric.

Even when patches are available, many affected organizations — including operators of critical infrastructure — may have a difficult time applying them. ICS environments are notoriously difficult to audit because ICS devices often respond poorly to any form of scanning. ICS-CERT notes that both active and passive vulnerability scans are “dangerous when used in an ICS environment due to the sensitive nature of these devices.” Specifically: “when it is possible to scan the device, it is possible that device could be put into invalid state causing unexpected results and possible failure of safety safeguards,” ICS-CERT warned.

Submission + - Blade Runner Redux: Do Embedded Systems Need A Time To Die? (securityledger.com) 1

chicksdaddy writes: In a not-so-strange case of life imitating Blade Runner, Dan Geer, the CISO of In-Q-Tel, has proposed making embedded devices such as industrial control and SCADA systems more 'human' (http://geer.tinho.net/geer.secot.7v14.txt) in order to manage a future in which hundreds of billions of them will populate every corner of our personal, professional and lived environments. (http://www.gartner.com/newsroom/id/2636073)

Geer was speaking last Wednesday at The Security of Things Forum (http://www.securityofthings.com), a conference focused on securing the Internet of Things. He struck a wary tone, saying that "we are at the knee of the curve for deployment of a different model of computation," as the world shifts from an Internet of 'computers' to one of embedded systems that is many times larger.

Individually, these devices may not be particularly valuable. But, together, IoT systems are tremendously powerful and capable of causing tremendous social disruption. Geer noted the way that embedded systems, many outfitted with remote sensors, now help manage everything from transportation to food production in the U.S. and other developed nations.

“Is all the technologic dependency, and the data that fuels it, making us more resilient or more fragile?" he wondered. Geer noted the appearance of malware like TheMoon (https://isc.sans.edu/forums/diary/Linksys+Worm+TheMoon+Summary+What+we+know+so+far/17633), which spreads between vulnerable home routers, as one example of how a population of vulnerable, unpatchable embedded devices might be cobbled into a force of mass disruption.

Taking a page out of Philip K. Dick's book (http://www.goodreads.com/book/show/7082.Do_Androids_Dream_of_Electric_Sheep_), or at least Ridley Scott's movie (http://www.imdb.com/name/nm0000631/), Geer proposes a novel solution: “Perhaps what is needed is for embedded systems to be more like humans.”

By "human," Geer means that embedded systems that do not have a means of being (securely) managed and updated remotely should be configured with some kind of "end of life" past which they will cease to operate. Allowing embedded systems to 'die' will remove a population of remote and insecure devices from the Internet ecosystem and prevent those devices from falling into the hands of cyber criminals or other malicious actors, Geer argued.
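As a thought experiment, Geer's proposal amounts to a hard expiry baked into firmware. A toy sketch in Python — the build date and support window are illustrative assumptions, not anything Geer specified:

```python
from datetime import date

# Toy sketch of Geer's "end of life" idea: a device that cannot be securely
# updated refuses to keep operating past a built-in expiry date.
# BUILD_DATE and SUPPORTED_YEARS are illustrative assumptions.
BUILD_DATE = date(2014, 7, 1)   # burned in at manufacture
SUPPORTED_YEARS = 5             # assumed support window

def is_past_end_of_life(today):
    expiry = BUILD_DATE.replace(year=BUILD_DATE.year + SUPPORTED_YEARS)
    return today >= expiry

def handle_network_request(today, request):
    # Past end of life, the device "dies": it drops network services
    # rather than lingering on as an unpatchable target.
    if is_past_end_of_life(today):
        return None
    return "handled:" + request
```

A real design would also have to decide what "dying" means for safety-critical functions, which is exactly the kind of question Geer's proposal raises.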

The idea has many parallels with Scott's 1982 classic, Blade Runner, in which a group of rebellious, human-like androids – or “replicants” – return to a ruined Earth to seek out their maker. Their objective: find a way to disable a programmed ‘end of life’ in each of them. In essence: the replicants want to become immortal.

Submission + - Want More Secure Passwords? Ask Pavlov. (securityledger.com)

chicksdaddy writes: With each data breach at a major online service we are reminded, all over again, how pitiful most people are at picking and sticking to secure passwords. (http://securitynirvana.blogspot.com/2012/06/final-word-on-linkedin-leak.html). But all the noise security folks have made about the dangers of insecure passwords hasn't done much to change human behavior. (http://www.washingtonpost.com/blogs/the-switch/wp/2014/01/21/123456-replaces-password-as-most-common-password-found-in-data-breaches/)

Maybe the problem is that explaining isn't enough. Perhaps online firms need to actually change the behavior of users — to 'train' them to use secure passwords. And when you're talking about training someone to do something, who better to turn to than Ivan Pavlov (http://en.wikipedia.org/wiki/Ivan_Pavlov), the Russian Nobel Prize-winning physiologist whose pioneering work in classical conditioning forever linked his name to the image of drooling dogs?

Writing on Security Ledger (https://securityledger.com/2014/05/is-pavlovian-password-management-the-answer/), Lance James, the head of Cyber Intelligence at consulting firm Deloitte & Touche, suggests that a Pavlovian approach to password security might be the best way to go.

Rather than enforcing strict password requirements (which often result in weaker passwords – http://blog.zorinaq.com/?e=54), James advocates allowing weak passwords but attaching short TTL (time-to-live) values to them, based on data about how quickly each chosen password could be cracked.
"Let the user know the cost and value of the password, including its time for success," James proposes.

Users who select a weak password would get a message thanking them for resetting their password — and informing them that it will expire in three days, requiring another (punishing) password reset. Longer, more secure passwords would reward the user with a longer reprieve – from days to months. Thoughts?
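James's scheme could be sketched roughly as follows — the guess rate, TTL thresholds, and naive brute-force model here are illustrative assumptions, not figures from his post:

```python
import math

# Rough sketch of James's proposal: estimate how long a password would take
# to crack and set its time-to-live accordingly. The guess rate, thresholds
# and naive brute-force model are illustrative assumptions.
GUESSES_PER_SECOND = 1e10  # assumed offline cracking rate

def charset_size(password):
    size = 0
    if any(c.islower() for c in password): size += 26
    if any(c.isupper() for c in password): size += 26
    if any(c.isdigit() for c in password): size += 10
    if any(not c.isalnum() for c in password): size += 33
    return max(size, 1)

def ttl_days(password):
    seconds_to_crack = charset_size(password) ** len(password) / GUESSES_PER_SECOND
    if seconds_to_crack < 86400:  # crackable within a day: punishing expiry
        return 3
    # Stronger passwords earn a longer reprieve, capped at a year.
    return min(365, 30 + 30 * int(math.log10(seconds_to_crack)))
```

Under this model an eight-letter lowercase password falls in the three-day bucket, while a long mixed-class one earns months before the next forced reset.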

Submission + - OpenSSL: The New Face Of Technology Monoculture (securityledger.com)

chicksdaddy writes: In a now-famous 2003 essay, “Cyberinsecurity: The Cost of Monopoly” (http://cryptome.org/cyberinsecurity.htm) Dr. Dan Geer (http://en.wikipedia.org/wiki/Dan_Geer) argued, persuasively, that Microsoft’s operating system monopoly constituted a grave risk to the security of the United States and international security, as well. It was in the interest of the U.S. government and others to break Redmond’s monopoly, or at least to lessen Microsoft’s ability to ‘lock in’ customers and limit choice. “The prevalence of security flaw (sp) in Microsoft’s products is an effect of monopoly power; it must not be allowed to become a reinforcer,” Geer wrote.

The essay cost Geer his job at the security consulting firm AtStake, which then counted Microsoft as a major customer.(http://cryptome.org/cyberinsecurity.htm#Fired) (AtStake was later acquired by Symantec.)

These days Geer is the Chief Information Security Officer at In-Q-Tel, the CIA’s venture capital arm. But he’s no less vigilant about the dangers of software monocultures. Security Ledger notes that, in a post today for the blog Lawfare (http://www.lawfareblog.com/2014/04/heartbleed-as-metaphor/), Geer is again warning about the dangers that come from over-reliance on common platforms and code. His concern this time isn’t proprietary software managed by Redmond, however; it’s common, oft-reused hardware and software packages like the OpenSSL software at the heart (pun intended) of Heartbleed. (https://securityledger.com/2014/04/the-heartbleed-openssl-flaw-what-you-need-to-know/)

“The critical infrastructure’s monoculture question was once centered on Microsoft Windows,” he writes. “No more. The critical infrastructure’s monoculture problem, and hence its exposure to common mode risk, is now small devices and the chips which run them.”

What happens when a critical and vulnerable component becomes ubiquitous — far more ubiquitous than OpenSSL? Geer wonders if the stability of the Internet itself is at stake.

“The Internet, per se, was designed for resistance to random faults; it was not designed for resistance to targeted faults,” Geer warns. “As the monocultures build, they do so in ever more pervasive, ever smaller packages, in ever less noticeable roles. The avenues to common mode failure proliferate.”

Submission + - Crowd Funding Bug Bounties To Fix Open Source Insecurity? Don't Count On It. (veracode.com)

chicksdaddy writes: The discovery of the Heartbleed vulnerability put the lie to the notion that ‘thousands of eyes’ keep watch over critical open source software packages like OpenSSL. In fact, some of the earliest reporting on Heartbleed noted that the team supporting the software consisted of just four developers – only one of them full time. (http://online.wsj.com/news/articles/SB10001424052702304819004579489813056799076)

To be sure, there are still plenty of examples of tightly monitored open source projects and real accountability. (The ever-mercurial Linus Torvalds recently made news by openly castigating key Linux kernel developer Kay Sievers for submitting buggy code, suspending him from further contributions.) (http://lkml.iu.edu//hypermail/linux/kernel/1404.0/01331.html)

But how do poorer, volunteer-led open source projects improve accountability and oversight — especially in areas like security? Casey Ellis over at the firm BugCrowd has proposed a crowd-funded project to fund bug bounties (https://www.crowdtilt.com/campaigns/lets-make-sure-heartbleed-doesnt-happen-again/description) for a security audit of OpenSSL ($7,162 raised thus far, with a target of $100,000).

But a post on Veracode's blog doubts that offering fat purses for information on open source bugs will make much difference.

"A paid bounty program would mirror efforts by companies like Google, Adobe and Microsoft to attract the attention of the best and brightest security researchers to their platform. No doubt: bounties will beget bug discoveries, some of them important," the post reads. "But a bounty program isn’t a substitute for a full security audit and, beyond that, a program for managing OpenSSL (or similar projects) over the long term. And, after all, the Heartbleed vulnerability doesn’t just point out a security failing, it raises questions about the growth and complexity of the OpenSSL code base. Bounties won’t make it any easier to address those bigger and important problems."

In other words: finding bugs doesn't equate with making the underlying code more secure. That's a lesson that Adobe and Microsoft learned years ago (see Adobe's take on it from back in 2010 here: http://blogs.adobe.com/securit...).

What's needed is a more holistic approach to security that results in something like Microsoft's SDL (Security Development Lifecycle) or Adobe's SPLC (Secure Product Lifecycle). That will staunch the flow of new vulnerabilities. Then investments need to be made in robust incident response and in post-deployment updating and patching. That's a lot to fit into a crowd-funding proposal — so it will need to fall to companies that rely on packages like OpenSSL to foot the bill (and provide the talent). Some companies, like Akamai, are already talking about that.

Submission + - How Can We Create A Culture Of Secure Behavior?

An anonymous reader writes: Most employees think they are immune to security threats. Despite the high news coverage that large breaches receive, and despite tales told by their friends about losing their laptops for a few days while a malware infection is cleared up, employees generally believe they are immune to security risks. Those types of things happen to other, less careful people. Training users how to properly create and store strong passwords, and putting measures in place that tell individuals the password they've created is "weak" can help change behavior.
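A "weak password" warning of the kind described above can be as simple as a blacklist plus a few composition rules. A minimal illustrative sketch — real services use far larger breach-derived wordlists and better entropy estimators:

```python
# Minimal sketch of a creation-time "weak password" warning: a small
# blacklist plus composition rules. The wordlist and thresholds are
# illustrative; real services use large breach-derived corpora.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "111111"}

def rate_password(candidate):
    if candidate.lower() in COMMON_PASSWORDS or len(candidate) < 8:
        return "weak"
    classes = sum([any(c.islower() for c in candidate),
                   any(c.isupper() for c in candidate),
                   any(c.isdigit() for c in candidate),
                   any(not c.isalnum() for c in candidate)])
    return "strong" if len(candidate) >= 12 and classes >= 3 else "ok"
```

The point isn't the exact rules — it's surfacing the rating at the moment the user picks the password, when behavior can still change.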

Submission + - Apple's Spotty Record Of Giving Back To The Tech Industry (itworld.com)

chicksdaddy writes: One of the meta-stories to come out of the Heartbleed (http://heartbleed.com/) debacle is the degree to which large and wealthy companies have come to rely on third party code (http://blog.veracode.com/2014/04/heartbleed-and-the-curse-of-third-party-code/) — specifically, open source software maintained by volunteers on a shoestring budget. Adding insult to injury is the phenomenon of large, incredibly wealthy companies that gladly pick the fruit of open source software but refuse to peel off a tiny fraction of their profits to financially support those same groups.

Exhibit 1: Apple. On Friday, IT World ran a story that looks at Apple's long history of not giving back to the technology and open source community. The article cites three glaring examples: Apple's non-support of the Apache Software Foundation (despite bundling Apache with OS X), its non-support of OASIS, and its refusal to participate in the Trusted Computing Group (despite leveraging TCG-inspired concepts, like the Secure Enclave in the iPhone 5s).

Given Apple's status as the world's most valuable company and its enormous cash hoard, the refusal to offer even meager support to open source and industry groups is puzzling. From the article:

"Apple bundles software from the Apache Software Foundation with its OS X operating system, but does not financially support the Apache Software Foundation (ASF) in any way. That is in contrast to Google and Microsoft, Apple's two chief competitors, which are both Platinum sponsors of ASF — signifying a contribution of $100,000 annually to the Foundation. Sponsorships range as low as $5,000 a year (Bronze), said Sally Khudairi, ASF's Director of Marketing and Public Relations. The ASF is vendor-neutral and all code contributions to the Foundation are done on an individual basis. Apple employees are frequent, individual contributors to Apache. However, their employer is not, Khudairi noted.

The company has been a sponsor of ApacheCon, a for-profit conference that runs separately from the Foundation — but not in the last 10 years. "We were told they didn't have the budget," she said of efforts to get Apple's support for ApacheCon in 2004, a year in which the company reported net income of $276 million on revenue of $8.28 billion."

Carol Geyer at OASIS is quoted saying her organization has done "lots of outreach" to Apple and other firms over the years, and regularly contacts Apple about becoming a member. "Whenever we're spinning up a new working group where we think they could contribute we will reach out and encourage them to join," she said. But those communications always go in one direction, Geyer said, with Apple declining the entreaties.

Today, the company has no presence on any of the Organization's 100-odd active committees, which are developing cross-industry technology standards such as The Key Management Interoperability Protocol (KMIP) and the Public-Key Cryptography Standard (PKCS).

Submission + - Is Apple A Bad Citizen Of The Tech Community? (itworld.com)

jfruh writes: While much criticism and praise for Apple comes with its engagement with the larger world — politics, charity, labor practices, and so on — there hasn't been much discussion of how Apple contributes to the open source and standards communities of the tech world. It turns out the world's most valuable company doesn't give back much. Despite its widespread reliance on open source software, Apple isn't a major corporate sponsor of any open source project — for instance, Microsoft gives more to the Apache Software Foundation, despite selling a Web server that competes against Apache's free flagship product. Considering that open source and open standards were all that kept Apple from extinction during the dark days of Microsoft dominance, you'd think the company would be more grateful.

Submission + - TCP/IP Might Have Been Secure From The Start, But...NSA! (veracode.com)

chicksdaddy writes: The pervasiveness of the NSA's spying operation has turned it into a kind of bugaboo — the monster lurking behind every locked networking closet (http://en.wikipedia.org/wiki/Room_641A) and the invisible hand behind every flawed crypto implementation (http://www.reuters.com/article/2014/03/31/us-usa-security-nsa-rsa-idUSBREA2U0TY20140331).

Those inclined to don the tinfoil cap won't be reassured by Vint Cerf's offhand observation in a Google Hangout on Wednesday that, back in the mid 1970s, the world's favorite intelligence agency may have also stood in the way of stronger network layer security being a part of the original specification for TCP/IP — the Internet's lingua franca.

As noted on Veracode's blog (http://blog.veracode.com/2014/04/cerf-classified-nsa-work-mucked-up-security-for-early-tcpip/), Cerf said that, given the chance to do it over again, he would have designed earlier versions of TCP/IP to look and work like IPv6, the latest version of the IP protocol, with its integrated network-layer security and massive 128-bit address space. IPv6 is only now beginning to replace the exhausted IPv4 protocol globally.

“If I had in my hands the kinds of cryptographic technology we have today, I would absolutely have used it,” Cerf said. (Check it out here: http://www.youtube.com/watch?v...)

Researchers at the time were working on just such a lightweight cryptosystem. Cerf noted that, on Stanford’s campus, Whitfield Diffie and Martin Hellman had researched and published a paper that described the functioning of a public key cryptography system. But they didn’t yet have the algorithms to make it practical. (Ron Rivest, Adi Shamir and Leonard Adleman published the RSA algorithm in 1977.)
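Diffie and Hellman's construction lets two parties derive a shared secret over an open channel. A toy Python version — the 32-bit prime is wildly insecure and for illustration only; real deployments use moduli of 2048 bits or more:

```python
import secrets

# Toy Diffie-Hellman exchange of the kind described in Diffie and
# Hellman's 1976 paper. The tiny prime is for illustration only.
P = 4294967291  # largest prime below 2**32 (insecure in practice)
G = 5           # generator

def keypair():
    private = secrets.randbelow(P - 2) + 1   # secret exponent
    public = pow(G, private, P)              # safe to send in the clear
    return private, public

a_priv, a_pub = keypair()  # Alice
b_priv, b_pub = keypair()  # Bob

# Each side combines its own secret with the other's public value
# and arrives at the same shared key: (g^a)^b = (g^b)^a mod p.
assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)
```

What Diffie and Hellman lacked in the mid-1970s, as Cerf notes, was a practical trapdoor algorithm to build a full public key system around — the gap RSA later filled.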

As it turns out, however, Cerf revealed that he _did_ have access to some really bleeding edge cryptographic technology back then that might have been used to implement strong, protocol-level security into the earliest specifications of TCP/IP. Why weren’t they used? The culprit is one that’s well known now: the National Security Agency.

Cerf told host Leo Laporte that the crypto tools were part of a classified NSA project he was working on at Stanford in the mid 1970s to build a secure, classified Internet.

“During the mid 1970s while I was still at Stanford and working on this, I also worked with the NSA on a secure version of the Internet, but one that used classified cryptographic technology. At the time I couldn’t share that with my friends,” Cerf said. “So I was leading this kind of schizoid existence for a while.”

Hindsight is 20/20, as the saying goes. Neither Cerf, the NSA, nor anyone else could have predicted how much of our economy and that of the globe would come to depend on what was then a government-backed experiment in computer networking. Besides, Cerf didn't elaborate on the cryptographic tools he was working with as part of his secure Internet research, or on how suitable (and scalable) they would have been.

But it’s hard to listen to Cerf lament the absence of strong authentication and encryption in the foundational protocols of the Internet, or to think about the myriad online ills of the past two decades that might have been preempted by a stronger, more secure protocol, and not wonder what might have been.

Submission + - Vint Cerf: CS Programs Must Change To Adapt To Internet of Things (securityledger.com)

chicksdaddy writes: The Internet of Things has tremendous potential but also poses tremendous risk if the underlying security of Internet of Things devices is not taken into account, according to Vint Cerf, Google’s Chief Internet Evangelist.

Cerf, speaking in a public Google Hangout on Wednesday, said that he’s tremendously excited about the possibilities of an Internet of billions of connected objects (http://www.youtube.com/watch?v=17GtmwyvmWE&feature=share&t=21m8s). But Cerf warned that the IoT necessitates big changes in the way software is written. Securing the data stored on those devices and exchanged between them represents a challenge to the field of computer science – one that the nation’s universities need to start addressing.

Internet of Things products need to do a better job of managing access control and of using strong authentication to secure communications between devices.
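One building block for that kind of device-to-device authentication is a keyed MAC on every message, so a receiver can reject spoofed or tampered traffic. A minimal sketch — the shared key and its provisioning are assumptions here, and a real design would also need per-device keys and replay protection:

```python
import hashlib
import hmac

# Minimal sketch of message authentication between two IoT devices: every
# message carries an HMAC under a shared key, so the receiver can reject
# spoofed or tampered traffic. The key and its provisioning are assumed.
SHARED_KEY = b"per-device-provisioned-key"  # illustrative placeholder

def sign(payload: bytes) -> bytes:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(sign(payload), tag)
```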

Submission + - Hell Is Other Contexts: How Wearables Will Transform Application Development (veracode.com)

chicksdaddy writes: Veracode's blog has an interesting post on how wearable technology will change the job of designing applications. Long and short: context is everything. From the article:

"It’s the notion – unique to wearable technology – that applications will need to be authored to be aware of and respond to the changing context of the wearer in near real-time. Just received a new email message? Great. But do you want to splash an alert to your user if she’s hurtling down a crowded city street on her bicycle? New text message? OK – but you probably shouldn't send a vibrate alert to your user's smartwatch if the heart rate monitor suggests that he’s asleep, right?

This isn't entirely a new problem, but it will be a challenge for developers used to a world where ‘endpoints’ were presumed to be objects that are physically distinct from their owner and, often, stationary.

Google has already called attention to this in its developer previews of Android Wear – that company’s attempt to extend its Android mobile phone OS to wearables. Google has encouraged wearable developers to be “good citizens.” “With great power comes great responsibility,” Google’s Justin Koh reminds would-be developers in a Google video.(https://www.youtube.com/watch?v=1dQf0sANoDw&feature=youtu.be&t=2m26s)

“It's extremely important that you be considerate of when and how you notify a user.” Developers are strongly encouraged to make notifications and other interactions between the wearable device and its wearer as ‘contextually relevant’ as possible. Google has provided APIs (application program interfaces) to help with this. For example, Koh recommends that developers use APIs in Google Play Services to set up a geo-fence that will make sure the wearer is in a specific location (i.e. “home”) before displaying certain information. Motion-detection APIs for Wear can be used to surface (or hide) notifications when the wearer is performing certain actions, like bicycling or driving.
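Stripped of the Android Wear specifics, that kind of context gating reduces to a policy function over wearer state. A hypothetical sketch — the field names and rules are invented for illustration and are not Android Wear API calls:

```python
# Hypothetical sketch of context-aware notification gating. The context
# fields and rules are invented for illustration; they are not Android
# Wear API calls.

def should_show(notification, context):
    """Decide whether to surface a notification given wearer context."""
    if context.get("asleep"):                          # heart-rate inference
        return False
    if context.get("activity") in ("bicycling", "driving"):
        return False                                   # defer risky moments
    if notification.get("home_only") and not context.get("at_home"):
        return False                                   # geo-fence gate
    return True
```

A gate like this sits between the notification source and the device's alert hardware, deferring rather than discarding anything suppressed.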
