Facebook

Mark Zuckerberg's Mentor 'Shocked and Disappointed' -- But He Has a Plan (time.com) 140

Early Facebook investor Roger McNamee published a scathing 3,000-word article adapted from his new book Zucked: Waking Up to the Facebook Catastrophe. Here's just one example of what's left him "shocked and disappointed": Facebook (along with Google and Twitter) has undercut the free press from two directions: it has eroded the economics of journalism and then overwhelmed it with disinformation. On Facebook, information and disinformation look the same; the only difference is that disinformation generates more revenue, so it gets better treatment.... At Facebook's scale -- or Google's -- there is no way to avoid influencing the lives of users and the future of nations. Recent history suggests that the threat to democracy is real. The efforts to date by Facebook, Google and Twitter to protect future elections may be sincere, but there is no reason to think they will do anything more than start a game of whack-a-mole with those who choose to interfere. Only fundamental changes to business models can reduce the risk to democracy.
Google and Facebook "are artificially profitable because they do not pay for the damage they cause," McNamee argues, adding that some medical researchers "have raised alarms noting that we have allowed unsupervised psychological experiments on millions of people."

But what's unique is that he's offering specific suggestions to fix it.
  • "I want to set limits on the markets in which monopoly-class players like Facebook, Google and Amazon can operate. The economy would benefit from breaking them up. A first step would be to prevent acquisitions, as well as cross subsidies and data sharing among products within each platform."
  • "Another important regulatory opportunity is data portability, such that users can move everything of value from one platform to another. This would help enable startups to overcome an otherwise insurmountable barrier to adoption."
  • "Given that social media is practically a public utility, I think it is worth considering more aggressive strategies, including government subsidies."
  • "There need to be versions of Facebook News Feed and all search results that are free of manipulation."
  • "I would like to address privacy with a new model of authentication for website access that permits websites to gather only the minimum amount of data required for each transaction.... it would store private data on the device, not in the cloud. Apple has embraced this model, offering its customers valuable privacy and security advantages over Android."
  • "No one should be able to use a user's data in any way without explicit, prior consent. Third-party audits of algorithms, comparable to what exists now for financial statements, would create the transparency necessary to limit undesirable consequences."
  • "There should be limits on what kind of data can be collected, such that users can limit data collection or choose privacy. This needs to be done immediately, before new products like Alexa and Google Home reach mass adoption."

AI

Elon Musk Wants To Put An AI Hardware Chip In Your Skull (itmunch.com) 362

"iTMunch reports that Elon Musk apparently believes that the human race can only be 'saved' by implanting chips into our skulls that make us half human, half artificial intelligence," writes Slashdot reader dryriver. From the report: Elon Musk's main goal, he explains, is to wire a chip into your skull. This chip would give you the digital intelligence needed to progress beyond the limits of our biological intelligence. This would mean a full incorporation of artificial intelligence into our bodies and minds. He argues that without taking this drastic measure, humanity is doomed. The idea raises a lot of ethical questions about what exactly humanity is, according to Elon Musk, but he seems undeterred. "My faith in humanity has been a little shaken this year," Musk continues, "but I'm still pro-humanity."

The seamless conjunction of humans and computers gives us humans a shot at becoming completely "symbiotic" with artificial intelligence, according to Elon Musk. He argues that humans as a species are all already practically attached to our phones. In a way, this makes us almost cyborg-like. The only difference is that we haven't managed to expand our intelligence to that level. This means that we are not as smart as we could be. The data link that currently exists between us and the information we get from our phones or computers is not as fast as it could be. "It will enable anyone who wants to have superhuman cognition," Musk said. "Anyone who wants."
As for how much smarter humans will become with these AI chips, Musk said: "How much smarter are you with a phone or computer or without? You're vastly smarter, actually. You can answer any question pretty much instantly. You can remember flawlessly. Your phone can remember videos (and) pictures perfectly. Your phone is already an extension of you. You're already a cyborg. Most people don't realize you're already a cyborg. It's just that the data rate [...] it's slow, very slow. It's like a tiny straw of information flow between your biological self and your digital self. We need to make that tiny straw like a giant river, a huge, high-bandwidth interface."
Medicine

$1.4 Million Raised on GoFundMe For 'Garbage' Homeopathy Cancer Treatment Scams (gizmodo.com) 180

"Medical crowdfunding has become a billion-dollar industry practically overnight, led by sites like GoFundMe," reports Gizmodo, citing new research on its dark side: over a million dollars in donations "funneled to ludicrous, unscientific treatments for life-threatening diseases like cancer." The authors of the study, published Thursday in The Lancet, searched for a particular kind of medical crowdfunding campaign on GoFundMe: campaigns for cancer treatments that involved the use of homeopathy. Homeopathy might easily be considered the lowest-hanging fruit of medical quackery. The theory behind how it works is nonsensical (in short, its proponents claim water can be programmed with the "memory" of toxic substances that will then treat the symptoms they normally cause); there are no good studies that show it works; and its practitioners are some of the most brazen cranks this side of P.T. Barnum still kicking. "These treatments are the bunkiest of the bunk, just complete garbage," lead author Jeremy Snyder, a bioethicist at Simon Fraser University in Canada, told Gizmodo.

Snyder and his co-author found that over 200 GoFundMe campaigns, as of June 2018, had been created to help fund homeopathic cancer treatments...and were shared on Facebook more than 100,000 times in total. They collectively asked for more than $5 million in funding, and raised $1.4 million from over 13,000 donors.... Snyder and his co-author also tried to find out what ultimately happened to the people behind all these campaigns. Sometimes, the campaigns would have final updates reporting the person had died; other times, they were able to track down obituaries. In total, they found that 28 percent of the people had died by the time of their search. But even that might be an underestimate...

A third of campaigns even explicitly stated that all contributions went to people who'd chosen to avoid doctors. "I have a huge amount of sympathy for these people. They're very sick and desperate," Snyder says. "But it's concerning to see them be taken in by these claims." Gizmodo adds, "That's to say nothing of the kind people who are being roped into donating their money to medical charlatans."

"[W]e believe it is not our place to tell them what decision to make," GoFundMe said in a statement. They added that "ultimately it is up to the GoFundMe community to decide which campaigns to donate to."
The Almighty Buck

Science Journals Are Laughing All the Way To the Bank, Locking the Results of Publicly Funded Research Behind Exorbitant Paywalls. This Must Be Stopped. (newscientist.com) 140

Here is a trivia question for you: what is the most profitable business in the world? You might think oil, or maybe banking. You would be wrong. The answer is academic publishing. Its profit margins are vast, reportedly in the region of 40 per cent. New Scientist: The reason it is so lucrative is that most of the costs of its content are picked up by taxpayers. Publicly funded researchers do the work, write it up and judge its merits. And yet the resulting intellectual property ends up in the hands of the publishers. To rub salt into the wound they then sell it via exorbitant subscriptions and paywalls, often paid for by taxpayers too.

The academic publishing business model is indefensible. Practically everybody -- even the companies that profit from it -- acknowledges that it has to change. And yet the status quo has proven extremely resilient. The latest attempt to break the mould is called Plan S, created by umbrella group cOAlition S. It demands that all publicly funded research be made freely available. When Plan S was unveiled in September, its backers expected support to snowball. But only a minority of Europe's 43 research funding bodies have signed up, and hoped-for participation from the US has failed to materialise. Meanwhile, a grass-roots campaign against it is gathering momentum. Plan S deserves a chance.

AI

Will Compression Be Machine Learning's Killer App? (petewarden.com) 59

Pete Warden, an engineer and CTO of Jetpac, writes: When I talk to people about machine learning on phones and devices I often get asked, "What's the killer application?" I have a lot of different answers, everything from voice interfaces to entirely new ways of using sensor data, but the one I'm most excited about in the near-term is compression. Despite being fairly well-known in the research community, this seems to surprise a lot of people, so I wanted to share some of my personal thoughts on why I see compression as so promising.

I was reminded of this whole area when I came across an OSDI paper on "Neural Adaptive Content-aware Internet Video Delivery". The summary is that by using neural networks they're able to improve a quality-of-experience metric by 43% if they keep the bandwidth the same, or alternatively reduce the bandwidth by 17% while preserving the perceived quality. There have also been other papers in a similar vein, such as this one on generative compression [PDF], or adaptive image compression. They all show impressive results, so why don't we hear more about compression as a machine learning application?

All of these approaches require comparatively large neural networks, and the amount of arithmetic needed scales with the number of pixels. This means large images or video with high frames-per-second can require more computing power than current phones and similar devices have available. Most CPUs can only practically handle tens of billions of arithmetic operations per second, and running ML compression on HD video could easily require ten times that. The good news is that there are hardware solutions, like the Edge TPU amongst others, that offer the promise of much more compute being available in the future. I'm hopeful that we'll be able to apply these resources to all sorts of compression problems, from video and image, to audio, and even more imaginative approaches.
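That scaling claim is easy to sanity-check with back-of-envelope arithmetic. In the sketch below, the ops-per-pixel cost is a hypothetical assumption (not a figure from the cited papers), chosen only to show how quickly HD video outruns a phone CPU:

```python
# Rough compute budget for neural-network compression of HD video.
# The ops-per-pixel cost is an illustrative assumption, not measured data.
width, height, fps = 1920, 1080, 30       # HD video stream
ops_per_pixel = 2_000                     # hypothetical cost of a small conv net

pixels_per_second = width * height * fps             # ~62 million pixels/s
ops_per_second = pixels_per_second * ops_per_pixel   # ~124 billion ops/s
print(f"{ops_per_second / 1e9:.0f} billion ops/s")   # prints "124 billion ops/s"
```

At roughly 124 billion operations per second, even this modest assumed network lands an order of magnitude above the "tens of billions" a typical CPU can sustain, which is the gap dedicated accelerators like the Edge TPU are meant to close.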

Power

Some Electric Car Drivers Might Spew More CO2 Than Diesel Cars, New Research Shows (bloomberg.com) 469

bricko shares a report from Bloomberg with the caption, "Making batteries is a mess": Beneath the hoods of millions of the clean electric cars rolling onto the world's roads in the next few years will be a dirty battery. Every major carmaker has plans for electric vehicles to cut greenhouse gas emissions, yet their manufacturers are, by and large, making lithium-ion batteries in places with some of the most polluting grids in the world. By 2021, capacity will exist to build batteries for more than 10 million cars running on 60 kilowatt-hour packs, according to data from BloombergNEF. Most supply will come from places like China, Thailand, Germany and Poland that rely on non-renewable sources like coal for electricity.

An electric vehicle in Germany would take more than 10 years to break even with an efficient combustion engine's emissions. "We're facing a bow wave of additional CO2 emissions," said Andreas Radics, a managing partner at Munich-based automotive consultancy Berylls Strategy Advisors, which argues that for now, drivers in Germany or Poland may still be better off with an efficient diesel engine. The findings, among the more bearish ones around, show that while electric cars are emission-free on the road, they still discharge a lot of the carbon dioxide that conventional cars do. Just to build each car battery -- weighing upwards of 500 kilograms (1,100 pounds) for sport-utility vehicles -- would emit up to 74 percent more CO2 than producing an efficient conventional car if it's made in a factory powered by fossil fuels in a place like Germany, according to Berylls' findings. Yet regulators haven't set out clear guidelines on acceptable carbon emissions over the life cycle of electric cars, even as the likes of China, France and the U.K. move toward outright bans of combustion engines.
It all has to do with manufacturing. According to estimates from Mercedes-Benz's electric-drive system integration department, manufacturing an electric car pumps out "significantly" more climate-warming gases than a conventional car, which releases only 20 percent of its lifetime CO2 at this stage. "Just switching to renewable energy for manufacturing would slash emissions by 65 percent, according to Transport & Environment," reports Bloomberg. "In Norway, where hydro-electric energy powers practically the entire grid, the Berylls study showed electric cars generate nearly 60 percent less CO2 over their lifetime, compared with even the most efficient fuel-powered vehicles."
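A "more than 10 years" break-even figure can be reproduced with a simple life-cycle calculation. Every number below is an assumption chosen for illustration; the article does not give Berylls' actual inputs:

```python
# Illustrative EV-vs-diesel CO2 break-even. All figures are assumptions
# picked for this sketch, not Berylls' actual inputs.
extra_manufacturing_co2_g = 7_000_000   # extra CO2 for the battery: ~7 tonnes
diesel_g_per_km = 140                   # efficient diesel, tailpipe CO2
grid_g_per_kwh = 550                    # coal-heavy grid carbon intensity
ev_kwh_per_km = 0.2                     # EV energy use

ev_g_per_km = grid_g_per_kwh * ev_kwh_per_km         # ~110 g/km
saving_g_per_km = diesel_g_per_km - ev_g_per_km      # ~30 g/km EV advantage
break_even_km = extra_manufacturing_co2_g / saving_g_per_km   # ~233,000 km
break_even_years = break_even_km / 15_000            # at 15,000 km/yr: ~15.6 years
```

The same arithmetic shows why the grid dominates the result: lowering `grid_g_per_kwh` to a hydro-heavy, Norway-like level makes the per-kilometre saving several times larger and shrinks the break-even distance accordingly.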
Windows

Surface Go Reviews Are All Over the Place (arstechnica.com) 98

The reviews for Microsoft's Surface Go tablet are in, and they're all over the place. While the press generally agrees that the processor is slow and can only handle light tasks, such as browsing and mail, there are mixed conclusions as to whether or not the 10-inch, $399 tablet is worth buying. Ars Technica's Peter Bright summarizes: So, should you buy one? That's hard to say. Mashable was a fairly unequivocal "no": for light productivity, a Chromebook or iPad does the job for less money, and the performance is too problematic for anything much beyond that. On the other side of the coin, Windows Central reckoned that "as a mini-PC [Surface Go] is about as good as you can get," and Ed Bott said, "It's the best cheap PC I've ever used." Gizmodo called it the "perfect representation of what laptops at this price should be." For everyone else, it depends. TechCrunch says that it's worth a look, but there's no shortage of competition around this price point. Acer and Lenovo, among others, offer decent systems that are a bit cheaper. PCWorld concludes that, if you want a tablet, get an honest-to-god tablet (which is to say, an iPad) rather than a system with Windows 10. But if you want something small and light and might just need the full flexibility of a PC, Go is the system to go for. Engadget acknowledged that the Go is "full of compromises" but that, as a "secondary device," the keyboard and software compatibility give it the edge over other tablets. The Verge concludes similarly: it's "probably not the right thing to be your only computer," but it could have a "real place" as a secondary machine. And VentureBeat took a similar line: if you really want the flexibility of a two-in-one, "you're unlikely to find anything better," but if you want either a laptop or a tablet, "you'll find better options for less."
As a refresher, the Surface Go features a 10-inch touchscreen display with a 1800x1200 (217 PPI) resolution and 3:2 aspect ratio, an Intel Pentium Gold 4415Y Kaby Lake processor with up to 8GB of RAM and 128GB of storage via an SSD (the 64GB eMMC variant features 4GB of RAM), integrated Intel HD Graphics 615, and "up to 9 hours" of battery life. The base model is just $399, compared to the $549 model with 128GB of storage and 8GB of RAM.
Censorship

Researchers Find That Filters Don't Prevent Porn (techcrunch.com) 126

According to a new paper from Oxford Internet Institute researchers Victoria Nash and Andrew Przybylski, internet filters rarely work to keep adolescents away from online porn. Basically, the filters are expensive and they don't work. "Internet filtering tools are expensive to develop and maintain, and can easily 'underblock' due to the constant development of new ways of sharing content. Additionally, there are concerns about human rights violations -- filtering can lead to 'overblocking', where young people are not able to access legitimate health and relationship information." TechCrunch reports: The researchers "found that Internet filtering tools are ineffective [and] in most cases were an insignificant factor in whether young people had seen explicit sexual content." The study's most interesting finding was that between 17 and 77 households "would need to use Internet filtering tools in order to prevent a single young person from accessing sexual content" and even then a filter "showed no statistically or practically significant protective effects." The study looked at 9,352 male and 9,357 female subjects from the EU and the UK and found that almost 50 percent of the subjects had some sort of Internet filter at home. Regardless of the filters installed, subjects still saw approximately the same amount of porn.
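The "17 to 77 households" range is a number-needed-to-treat style statistic: the reciprocal of the absolute risk reduction a filter provides. With hypothetical exposure rates (chosen only to land inside the study's reported range), the calculation looks like this:

```python
# Number-needed-to-treat style estimate for home internet filters.
# The two exposure rates below are hypothetical, not the study's figures.
risk_without_filter = 0.50   # share of young people who see explicit content
risk_with_filter = 0.47      # share who see it despite a home filter

absolute_risk_reduction = risk_without_filter - risk_with_filter  # ~0.03
households_per_prevention = 1 / absolute_risk_reduction           # ~33 households
```

The weaker the filter's real effect, the larger this number grows; the study's wide 17-77 interval reflects how small and uncertain the measured effect was.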
Microsoft

How Microsoft's Windows Red Team Keeps PCs Safe (wired.com) 83

Wired has a story on Windows' red team, which consists of a group of hackers (one of whom jailbroke Nintendo handhelds in a former life, another has more than one zero-day exploit to his name, and a third signed on just prior to the devastating Shadow Brokers leak), who are tasked with finding holes in the world's most used desktop operating system. From the story: The Windows red team didn't exist four years ago. That's around the time that David Weston, who currently leads the crew as principal security group manager for Windows, made his pitch for Microsoft to rethink how it handled the security of its marquee product. "Most of our hardening of the Windows operating system in previous generations was: Wait for a big attack to happen, or wait for someone to tell us about a new technique, and then spend some time trying to fix that," Weston says. "Obviously that's not ideal when the stakes are very high."

[...] Together, the red teamers spend their days attacking Windows. Every year, they develop a zero-day exploit to test their defensive blue-team counterparts. And when emergencies like Spectre or EternalBlue happen, they're among the first to get the call. Again, red teams aren't novel; companies that can afford them -- and that are aware they could be targeted -- tend to use them. If anything, it may come as a surprise that Microsoft hadn't sicced one on Windows until so recently. Microsoft as a company already had several other red teams in place by the time Weston built one for Windows, though those focused more on operational issues like unpatched machines. "Windows is still the central repository of malware and exploits. Practically, there's so much business done around the world on Windows. The attacker mentality is to get the biggest return on investment in what you develop in terms of code and exploits," says Aaron Lint, who regularly works with red teams in his role as chief scientist at application protection provider Arxan. "Windows is the obvious target."

Android

With Steam Link App, Your Smartphone Can Be An Imperfect Gaming Monitor (arstechnica.com) 47

Ars Technica's Kyle Orland shares his experience with Valve's recently announced Steam Link app, which lets users play games running on a PC via a tablet, mobile phone, or Apple TV on the same network. The app launches today for Android 5.0+ devices; iOS support is "pending further review from Apple." From the report: Valve isn't kidding when it says a Wi-Fi router in the 5GHz band is required for wireless streaming. I first tested iPad streaming on the low-end 2.4GHz router provided with my Verizon FiOS subscription (an Actiontec MI424WR), with a wired Ethernet connection to my Windows gaming rig on the other end. The Steam Link network test warned me that "your network may not work well with Steam Link," thanks to 1- to 2-percent frame loss and about 15ms of "network variance," depending on when I tested. Even graphically simple games like The Binding of Isaac ran at an unplayably slowed-down rate on this connection, with frequent dropped inputs to boot.

Switching over to a 5GHz tri-band router (the Netgear Nighthawk X6, to be precise), the same network test reported a "fantastic" connection that "look[s] like it will work well with Steam." On this router, remotely played games ran incredibly smoothly at the iPad's full 1080p resolution, with total round-trip display latency ranging anywhere from 50 to 150ms, according to Steam Link's reports (and one-way "input lag" of less than 1ms). At that level of delay, playing felt practically indistinguishable from playing directly on the computer, with no noticeable gameplay impact even on quick-response titles like Cuphead.

Cloud

Edge Computing: Explained (theverge.com) 159

An anonymous reader shares a report from The Verge, written by Paul Miller: In the beginning, there was One Big Computer. Then, in the Unix era, we learned how to connect to that computer using dumb (not a pejorative) terminals. Next we had personal computers, which was the first time regular people really owned the hardware that did the work. Right now, in 2018, we're firmly in the cloud computing era. Many of us still own personal computers, but we mostly use them to access centralized services like Dropbox, Gmail, Office 365, and Slack. Additionally, devices like Amazon Echo, Google Chromecast, and the Apple TV are powered by content and intelligence that's in the cloud -- as opposed to the DVD box set of Little House on the Prairie or CD-ROM copy of Encarta you might've enjoyed in the personal computing era. As centralized as this all sounds, the truly amazing thing about cloud computing is that a seriously large percentage of all companies in the world now rely on the infrastructure, hosting, machine learning, and compute power of a very select few cloud providers: Amazon, Microsoft, Google, and IBM.

The advent of edge computing as a buzzword you should perhaps pay attention to is the realization by these companies that there isn't much growth left in the cloud space. Almost everything that can be centralized has been centralized. Most of the new opportunities for the "cloud" lie at the "edge." The word edge in this context means literal geographic distribution. Edge computing is computing that's done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. It doesn't mean the cloud will disappear. It means the cloud is coming to you.
Miller goes on to "examine what people mean practically when they extoll edge computing," focusing on latency, privacy and security, and bandwidth.
Displays

Are Widescreen Laptops Dumb? (theverge.com) 411

"After years of phones, laptops, tablets, and TV screens converging on 16:9 as the 'right' display shape -- allowing video playback without distracting black bars -- smartphones have disturbed the universality recently by moving to even more elongated formats like 18:9, 19:9, or even 19.5:9 in the iPhone X's case," writes Vlad Savov via The Verge. "That's prompted me to consider where else the default widescreen proportions might be a poor fit, and I've realized that laptops are the worst offenders." Savov makes the case for why a 16:9 screen of 13 to 15 inches in size is a poor fit: Practically every interface in Apple's macOS, Microsoft's Windows, and on the web is designed by stacking user controls in a vertical hierarchy. At the top of every MacBook, there's a menu bar. At the bottom, by default, is the Dock for launching your most-used apps. On Windows, you have the taskbar serving a similar purpose -- and though it may be moved around the screen like Apple's Dock, it's most commonly kept as a sliver traversing the bottom of the display. Every window in these operating systems has chrome -- the extra buttons and indicator bars that allow you to close, reshape, or move a window around -- and the components of that chrome are usually attached at the top and bottom. Look at your favorite website (hopefully this one) on the internet, and you'll again see a vertical structure.

As if all that wasn't enough, there's also the matter of tabs. Tabs are a couple of decades old now, and, like much of the rest of the desktop and web environment, they were initially thought up in an age where the predominant computer displays were close to square with a 4:3 aspect ratio. That's to say, most computer screens were the shape of an iPad when many of today's most common interface and design elements were being developed. As much of a chrome minimalist as I try to be, I still can't extricate myself from needing a menu bar in my OS and tab and address bars inside my browser. I'm still learning to live without a bookmarks bar. With all of these horizontal bars invading our vertical space, a 16:9 screen quickly starts to feel cramped, especially at the typical laptop size. You wind up spending more time scrolling through content than engaging with it.
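The vertical-space argument is easy to quantify: at the same diagonal, a squarer panel is simply taller. A quick sketch (13.5 inches is just an example size, not a specific laptop):

```python
# Vertical screen height for the same diagonal at different aspect ratios.
import math

def screen_height(diagonal: float, ratio_w: float, ratio_h: float) -> float:
    """Height of a ratio_w:ratio_h screen with the given diagonal."""
    return diagonal * ratio_h / math.hypot(ratio_w, ratio_h)

d = 13.5                              # example laptop diagonal, in inches
h_16_9 = screen_height(d, 16, 9)      # ~6.6 inches tall
h_3_2 = screen_height(d, 3, 2)        # ~7.5 inches tall
extra = (h_3_2 / h_16_9 - 1) * 100    # ~13% more vertical space
```

Every menu bar, dock, taskbar, and tab strip is paid for out of that height, which is why identical chrome feels more cramped on a 16:9 panel than on a 3:2 one.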
What is your preferred aspect ratio for a laptop? Do you prefer Microsoft and Google's machines that have a squarer 3:2 aspect ratio, or Apple's MacBook Pro that has a 16:10 display?
Google

AMP For Email Is a Terrible Idea (techcrunch.com) 177

An anonymous reader shares an excerpt from a report via TechCrunch, written by Devin Coldewey: Google just announced a plan to "modernize" email with its Accelerated Mobile Pages platform, allowing "engaging, interactive, and actionable email experiences." Does that sound like a terrible idea to anyone else? It sure sounds like a terrible idea to me, and not only that, but an idea borne out of competitive pressure and existing leverage rather than user needs. Not good, Google. Send to trash. See, email belongs to a special class. Nobody really likes it, but it's the way nobody really likes sidewalks, or electrical outlets, or forks. It's not that there's something wrong with them. It's that they're mature, useful items that do exactly what they need to do. They've transcended the world of likes and dislikes. Email too is simple. It's a known quantity in practically every company, household, and device. The implementation has changed over the decades, but the basic idea has remained the same since the very first email systems in the '60s and '70s, certainly since its widespread standardization in the '90s and shift to web platforms in the '00s. The parallels to snail mail are deliberate (it's a payload with an address on it) and simplicity has always been part of its design (interoperability and privacy came later). No company owns it. It works reliably and as intended on every platform, every operating system, every device. That's a rarity today and a hell of a valuable one.

More important are two things: the moat and the motive. The moat is the one between communications and applications. Communications say things, and applications interact with things. There are crossover areas, but something like email is designed and overwhelmingly used to say things, while websites and apps are overwhelmingly designed and used to interact with things. The moat between communication and action is important because it makes it very clear what certain tools are capable of, which in turn lets them be trusted and used properly. We know that all an email can ever do is say something to you (tracking pixels and read receipts notwithstanding). It doesn't download anything on its own, it doesn't run any apps or scripts, attachments are discrete items, unless they're images in the HTML, which is itself optional. Ultimately the whole package is always just going to be a big, static chunk of text sent to you, with the occasional file riding shotgun. Open it a year or ten from now and it's the same email. And that proscription goes both ways. No matter what you try to do with email, you can only ever say something with it -- with another email. If you want to do something, you leave the email behind and do it on the other side of the moat.

Businesses

Apple Music Was Always Going To Win (gizmodo.com) 161

Apple Music is about to overtake Spotify as the most popular streaming music service in the United States, the Wall Street Journal reported over the weekend. Gizmodo: [...] Here's where the inevitability comes into play. Because all Apple devices come preloaded with Apple Music, countless consumers start using Apple Music without knowing any better. It's effectively become the streaming music analogue of Microsoft pushing people to surf the web with Internet Explorer. The big difference is that people eventually have to pay for Apple Music, which is the same price as Spotify. As many suspected when it launched three years ago, Apple Music was bound to succeed simply because Apple is big enough and rich enough to will it so. Think about it this way: Spotify gained traction quickly after its 2011 launch, largely because music enthusiasts had seen its streaming model succeed globally and wanted to try this neat new thing. After all, there wasn't anything quite like it at the time, and Americans love to feel innovative.

But eventually, Spotify would cease to feel special and new. As the years passed, practically every major tech company launched its own music streaming service. And then, in 2015, Apple unveiled Apple Music -- which was really just a rebranded version of Beats Music. Because Apple could preload the service on iPhones, Watches, and Macs, the company could effectively tap into a new revenue stream without actually inventing anything.

Businesses

Uber Used Another Secret Software To Evade Police, Report Says (bloomberg.com) 226

schwit1 shares a Bloomberg report: In May 2015 about 10 investigators for the Quebec tax authority burst into Uber Technologies' office in Montreal. The authorities believed Uber had violated tax laws and had a warrant to collect evidence. Managers on-site knew what to do, say people with knowledge of the event. Like managers at Uber's hundreds of offices abroad, they'd been trained to page a number that alerted specially trained staff at company headquarters in San Francisco. When the call came in, staffers quickly remotely logged off every computer in the Montreal office, making it practically impossible for the authorities to retrieve the company records they'd obtained a warrant to collect. The investigators left without any evidence.

Most tech companies don't expect police to regularly raid their offices, but Uber isn't most companies. The ride-hailing startup's reputation for flouting local labor laws and taxi rules has made it a favorite target for law enforcement agencies around the world. That's where this remote system, called Ripley, comes in. From spring 2015 until late 2016, Uber routinely used Ripley to thwart police raids in foreign countries, say three people with knowledge of the system. Allusions to its nature can be found in a smattering of court filings, but its details, scope, and origin haven't been previously reported. The Uber HQ team overseeing Ripley could remotely change passwords and otherwise lock up data on company-owned smartphones, laptops, and desktops as well as shut down the devices. This routine was initially called the unexpected visitor protocol. Employees aware of its existence eventually took to calling it Ripley, after Sigourney Weaver's flamethrower-wielding hero in the Alien movies. The nickname was inspired by a Ripley line in Aliens, after the acid-blooded extraterrestrials easily best a squad of ground troops. 'Nuke the entire site from orbit. It's the only way to be sure.'

Intel

Can Intel's 'Management Engine' Be Repurposed? 139

Long-time Slashdot reader iamacat writes: Not a day goes by without a story about another Intel Management Engine vulnerability. What I get is that a lot of consumer PCs can access the network and run x86 code on top of a UNIX-like OS such as MINIX, even when powered off.

This sounds pretty useful for tasks such as running an occasional-use Plex server: I could have a box that draws very little power when idle but, when an incoming connection is detected, powers itself and the media drive on and serves the requested content.

The original submission ends with an interesting question: "If Intel ME is so insecure, how do I exploit it for practically useful purposes?"
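Leaving the ME aside, the conventional (and much better-understood) way to get the wake-on-demand behavior the submitter describes is Wake-on-LAN, which most NICs support independently of the Management Engine. A minimal sketch of building and broadcasting the standard WoL "magic packet" (the MAC address here is a placeholder):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A Wake-on-LAN magic packet: 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet as UDP on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(build_magic_packet(mac), (broadcast, port))

# Example: send_wol("00:11:22:33:44:55")  # requires WoL enabled in the firmware
```

This wakes a sleeping box on demand; a small always-on device (a router or a Raspberry Pi) can send the packet when it sees an incoming connection for the server.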
Security

Ask Slashdot: How Are So Many Security Vulnerabilities Possible? 354

dryriver writes: It seems like not a day goes by on Slashdot and elsewhere on the intertubes that you don't read a story headline reading "Company_Name Product_Name Has Critical Vulnerability That Allows Hackers To Description_Of_Bad_Things_Vulnerability_Allows_To_Happen." A lot of it involves big-brand products as well. How, in the 21st century, is this possible, and with such frequency? Is software running on electronic hardware invariably open to hacking if someone just tries long and hard enough? Or are the product manufacturers simply careless or cutting corners in their product designs? If you create something that communicates with other things electronically, is there no way at all to ensure that the device is practically unhackable?
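Part of the answer is that many vulnerability classes come from the same recurring mistake: untrusted input crossing a trust boundary without being treated as pure data. A self-contained illustration using SQL injection (the table and values here are invented for the example) shows how a one-line shortcut becomes a critical bug:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the untrusted string is pasted straight into the SQL text,
# so the attacker's quote characters rewrite the query itself.
unsafe = conn.execute(
    f"SELECT secret FROM users WHERE name = '{attacker_input}'"
).fetchall()  # leaks alice's secret despite the wrong name

# Safe: a parameterized query treats the input strictly as data.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()  # matches nothing

print(unsafe)  # [('hunter2',)]
print(safe)    # []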
The Media

Net Neutrality is Essentially Unassailable, Argues Billionaire Barry Diller (broadcastingcable.com) 82

An anonymous reader quotes Yahoo Finance: The billionaire media mogul behind such popular sites as Expedia, Match.com and HomeAdvisor has a one-word forecast for traditional media conglomerates concerned about being replaced by tech giants: serfdom. "They, like everyone else, are kind of going to be serfs on the land of the large tech companies," IAC chairman Barry Diller said... That's because Google and Facebook not only have such massive user bases but also dominate online advertising. "Google and Facebook are consolidating," Diller said. "They are the only mass advertising mediums we have..." He expects Facebook, Google and maybe Amazon to face government regulation, simply because of their immense size. "At a certain point in size, you must," he said. "It's inevitable."

He did, however, outline one positive for Big Tech getting so gargantuan. Big Telecom no longer has the economic leverage to roll back today's net-neutrality norms, in which internet providers don't try to charge sites extra for access to their subscribers. "I think it's hard to overturn practically," he said. "It is the accepted system."

Even if the U.S. government takes moves to fight net neutrality, Diller told CNBC that "I think it is over... It is [the] practice of the world... You're still going to be able to push a button and publish to the world, without anybody in between asking you for tribute. I think that is now just the way things are done. I don't think it can be violated no matter what laws are [passed]."
Security

Hilton Paid a $700K Fine For 2015 Breach; Under GDPR, It Would Be $420 Million (digitalguardian.com) 110

chicksdaddy writes from a report via Digital Guardian: If you want to understand the ground-shaking change that the EU's General Data Protection Regulation (GDPR) will have when it comes into force in May 2018, look no further than hotel giant Hilton Domestic Operating Company, Inc., formerly known as Hilton Worldwide, Inc. (a.k.a. "Hilton"). On Tuesday, New York Attorney General Eric T. Schneiderman slapped a $700,000 fine on the hotel giant for two 2015 incidents in which the company was hacked, spilling credit card and other information for 350,000 customers. Schneiderman also punished Hilton for its response to the incident. The company first learned in February 2015 that its customer data had been exposed through a UK-based system belonging to the company, which a contractor observed communicating with "a suspicious computer outside Hilton's computer network." Still, it took Hilton until November 24, 2015 -- over nine months after the first intrusion was discovered -- to notify the public. That kind of lackluster response has become pretty typical among Fortune 500 companies (see also: Equifax). And why not? The $700,000 fine from the NY AG is a palatable $2 per lost record -- and a mere rounding error for Hilton, which reported revenues of $11.2 billion in 2015, the year of the breach. That means the $700,000 fine was roughly 0.006% of Hilton's annual revenue in the year of the breach. Schneiderman's fine was less "bringing down the hammer" than a butterfly kiss for Hilton's C-suite, board and shareholders.

But things are going to be different for Hilton and other companies like it come May 2018, when provisions of the EU's General Data Protection Regulation (GDPR) go into effect, as Digital Guardian points out on its blog. Under that new law, data "controllers" like Hilton (in other words: organizations that collect data on customers or employees) can be fined up to 4% of annual turnover in the year preceding the incident for failing to meet the law's charge to protect that data. What does that mean practically for a company like Hilton? Well, the company's FY 2014 revenue (or "turnover") was $10.5 billion. Four percent of that is a cool $420 million -- or $1,200, rather than $2, for every customer record lost. Needless to say, that's a number that will get the attention of the company's Board of Directors and shareholders.
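The arithmetic above is easy to check. Using only the figures stated in the article ($700K fine, 350K records, $11.2B FY2015 revenue, $10.5B FY2014 revenue):

```python
# Back-of-the-envelope comparison: actual NY AG fine vs. a hypothetical
# GDPR-cap fine for the 2015 Hilton breach, using the article's figures.

RECORDS = 350_000
NY_FINE = 700_000
REVENUE_2015 = 11.2e9   # year of the breach
REVENUE_2014 = 10.5e9   # year preceding the incident (the GDPR basis)

gdpr_fine = 0.04 * REVENUE_2014  # GDPR cap: 4% of prior-year turnover

print(f"NY AG fine per record: ${NY_FINE / RECORDS:,.2f}")    # $2.00
print(f"GDPR cap per record:   ${gdpr_fine / RECORDS:,.2f}")  # $1,200.00
print(f"NY fine as share of 2015 revenue: {NY_FINE / REVENUE_2015:.5%}")
```

The last line works out to about 0.006% of revenue, which is where the "rounding error" characterization comes from.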

Science

We're Not Living in a Computer Simulation, New Research Shows (cosmosmagazine.com) 403

A reader shares a report: A team of theoretical physicists from Oxford University in the UK has shown that life and reality cannot be merely simulations generated by a massive extraterrestrial computer. The finding -- an unexpectedly definite one -- arose from the discovery of a novel link between gravitational anomalies and computational complexity. In a paper published in the journal Science Advances, Zohar Ringel and Dmitry Kovrizhi show that constructing a computer simulation of a particular quantum phenomenon that occurs in metals is impossible -- not just practically, but in principle. The pair initially set out to see whether it was possible to use a technique known as quantum Monte Carlo to study the quantum Hall effect -- a phenomenon in physical systems that exhibit strong magnetic fields and very low temperatures, and manifests as an energy current that runs across the temperature gradient. The phenomenon indicates an anomaly in the underlying space-time geometry. [...] They discovered that the complexity of the simulation increased exponentially with the number of particles being simulated. If the complexity grew linearly with the number of particles, then doubling the number of particles would merely double the computing power required. If, however, the complexity grows exponentially -- with the required computing power doubling every time a single particle is added -- the task quickly becomes impossible.
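The gap between those two growth regimes is easy to make concrete. A toy cost model (not the paper's actual complexity analysis, just an illustration of the scaling argument):

```python
# Toy illustration of linear vs. exponential simulation cost as a
# function of the particle count n, in arbitrary "units of compute".

def linear_cost(n: int) -> int:
    return n           # doubling n doubles the cost

def exponential_cost(n: int) -> int:
    return 2 ** n      # each added particle doubles the cost

for n in (10, 20, 40):
    print(n, linear_cost(n), exponential_cost(n))
```

At n = 40, the linear model needs 40 units while the exponential model needs 2**40 (about a trillion) -- and a couple hundred particles already exceeds any conceivable computer, which is the sense in which the simulation becomes impossible "in principle."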
