
In Major Cloud Expansion, Google To Open 12 More Data Centers

Mickeycaskill writes: Google is to open 12 new data centers in the latest stage of a bitter war with rivals Amazon Web Services (AWS) and Microsoft Azure. The first two facilities, in Oregon and Tokyo, will open next year; the rest will follow in 2017. Google says the new locations will let customers run applications closer to home, reducing latency and, of course, benefiting from any local data protection laws. At present Google has just four cloud regions, meaning this expansion will quadruple its sphere of influence. "With these new regions, even more applications become candidates to run on Cloud Platform, and get the benefits of Google-level scale and industry leading price/performance," said Varun Sakalkar, a Google Cloud product manager. Two bits says those were not his exact words.
Data Storage

Facebook Preps Its Infrastructure For a Virtual Reality Future

1sockchuck writes: Facebook is building a new generation of open hardware, part of its vision for powerful data centers that will use artificial intelligence and virtual reality to deliver experiences over the Internet. At the Open Compute Summit, Facebook shared details of its work to integrate more SSDs, GPUs, NVM and a "Just a Bunch of Flash" storage sled to accelerate its infrastructure. The company's infrastructure ambitions are also powered by CEO Mark Zuckerberg's embrace of virtual reality, reflected in the $2 billion acquisition of VR pioneer Oculus. "Over the next decade, we're going to build experiences that rely more on technology like artificial intelligence and virtual reality," said Zuckerberg. "These will require a lot more computing power."
Data Storage

There's No End In Sight For Data Storage Capacity

Lucas123 writes: Several key technologies are coming to market in the next three years that will ensure data storage not only keeps up with demand but exceeds it. Heat-assisted magnetic recording and bit-patterned media promise to increase hard drive capacity initially by 40% and later by 10-fold, or as Seagate's marketing proclaims: 20TB hard drives by 2020. At the same time, resistive RAM technologies, such as Intel/Micron's 3D XPoint, promise storage-class memory that's 1,000 times faster and more resilient than today's NAND flash, but it will be expensive — at first. Meanwhile, NAND flash makers have created roadmaps for 3D NAND technology that will grow to more than 100 layers in the next two to three generations, increasing performance and capacity while ultimately lowering costs to those of hard drives. "Very soon flash will be cheaper than rotating media," said Siva Sivaram, executive vice president of memory at SanDisk.
Data Storage

Seagate Debuts World's Fastest NVMe SSD With 10GBps Throughput

MojoKid writes: Seagate has just unveiled what it is calling "the world's fastest SSD," and if its claims are true, the performance gap between it and the next closest competing offering is significant. The SSD, which Seagate today announced is in "production-ready" form, employs the NVMe protocol to achieve its breakneck speeds. So just how fast is it? Seagate says the new SSD is capable of 10GB/sec of throughput when used in 16-lane PCIe slots, which it notes is 4GB/sec faster than the next-fastest competing SSD. The company is also working on a second, lower-performing variant that works in 8-lane PCIe slots and has a throughput of 6.7GB/sec. Seagate sees the second model as a more cost-effective SSD for businesses that want a high-performing SSD but need to keep costs and power consumption under control. Seagate isn't ready to discuss pricing for its blazing-fast SSDs, and oddly hasn't disclosed a model name either, but it says general availability for its customers will open up during the summer.
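For context on those numbers, here's a quick back-of-the-envelope check against PCIe 3.0 link bandwidth (roughly 8 GT/s per lane with 128b/130b encoding, or about 0.985 GB/s of usable bandwidth per lane). Seagate hasn't stated the PCIe generation, so treat this as an independent sanity check rather than anything from the company's spec sheet; both claimed figures sit plausibly below the respective link ceilings.

```python
# Back-of-the-envelope PCIe bandwidth check (assumes PCIe 3.0; Seagate has not
# stated the PCIe generation, so this is an illustration only).
GT_PER_SEC = 8e9                 # PCIe 3.0 raw transfer rate per lane (transfers/s)
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line encoding
BYTES_PER_TRANSFER = 1 / 8       # one bit per transfer, converted to bytes

def link_bandwidth_gb_s(lanes):
    """Approximate usable one-way bandwidth of a PCIe 3.0 link, in GB/s."""
    return GT_PER_SEC * ENCODING_EFFICIENCY * BYTES_PER_TRANSFER * lanes / 1e9

for lanes, claimed in [(16, 10.0), (8, 6.7)]:
    ceiling = link_bandwidth_gb_s(lanes)
    print(f"x{lanes}: link ceiling ~{ceiling:.1f} GB/s, claimed drive throughput {claimed} GB/s")
```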

Crossword Database Analysis Spots What Looks Like Plagiarism

Seattle software developer Saul Pwanson has a hobby of constructing crossword puzzles, and a related one: analyzing how existing puzzles have been built. He created a database that aggregates puzzles that have appeared in various publications, including, crucially, the New York Times and USA Today, and ranks them by similarity: puzzles that share a greater percentage of the same black squares, or the same letters in identical positions, score as more similar. Crosswords often re-use answers, and puzzle-solvers are used to encountering the usual glue words that connect parts of the grid. As FiveThirtyEight reports, though, Pwanson noticed something odd in the data: many of the puzzles that appeared in USA Today and affiliated publications, listed under various creators' names but all published with Timothy Parker as editor, were highly similar to each other, differing in as few as four answer words. These Pwanson classifies as "shoddy" -- they are about as different from one another as test responses based on a passed-around answer sheet. Such puzzles seem to shortchange readers expecting original works, but may pose no real copyright problem, since Universal Uclick holds the copyright to them all. Perhaps puzzle enthusiasts aren't surprised that a publishing syndicate economizes on crosswords with slight variations, or that horoscopes are sometimes recycled.

Another tranche of puzzles, however, Pwanson calls "shady": these bear such strong resemblance in their central clues and answers to puzzles that previously appeared in the New York Times that it is very hard to accept Parker's claim that the overlap is coincidental. In one example, the answers "Drive Up the Wall," "Get On One's Nerves," and "Rub the Wrong Way" appeared in the same order and the same positions in a Parker-edited puzzle published in USA Today in June 2010 as they had in a Will Shortz-edited puzzle published nine years earlier in the New York Times.
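Pwanson's database and scoring are more sophisticated than this, but the basic grid-similarity idea described above can be illustrated with a toy comparison: two grids of equal size are scored by the fraction of cells with matching black squares and the fraction of fill cells holding the same letter. The function below is a hypothetical sketch for illustration, not Pwanson's code.

```python
def grid_similarity(grid_a, grid_b, black="#"):
    """Compare two crossword grids of identical dimensions.

    Grids are lists of equal-length strings, with `black` marking blocked squares
    and letters marking fill. Returns (black-square overlap, shared-letter fraction).
    This is a toy metric, not the scoring used in Pwanson's database.
    """
    cells_a = [c for row in grid_a for c in row]
    cells_b = [c for row in grid_b for c in row]
    assert len(cells_a) == len(cells_b), "grids must be the same size"

    same_blocks = sum((a == black) == (b == black) for a, b in zip(cells_a, cells_b))
    fill_pairs = [(a, b) for a, b in zip(cells_a, cells_b) if a != black and b != black]
    same_letters = sum(a == b for a, b in fill_pairs)

    return same_blocks / len(cells_a), same_letters / max(len(fill_pairs), 1)

# Example: two tiny 3x3 grids that differ in a single letter.
print(grid_similarity(["CAT", "A#R", "TRE"], ["CAT", "A#R", "TRY"]))
```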

Hundreds of Hackers Celebrate Open Data Day

An anonymous reader writes: Hundreds of data-hacking events are being held around the globe this weekend to celebrate International Open Data Day. It's the fifth installment of an annual event promoting government data-sharing with a series of loosely joined hackathons, "to show support for and encourage the adoption of open data policies by the world's local, regional and national governments," according to the event's web site. "Data science is a team sport," says Megan Smith, the former Google executive turned U.S. CTO, who points out that more than 200,000 new federal data sets have been opened to the public since 2009. Each hackathon will culminate with a demo or brainstorm proposal that can be shared with the other participating groups around the world.
Data Storage

Samsung Ships 15.38TB SSD With Up To 1,200MBps Performance

Lucas123 writes: Samsung announced it is now shipping the world's highest-capacity 2.5-in. SSD, the 15.38TB PM1633a. The new SSD uses a 12Gbps SAS interface and is being marketed for enterprise-class storage systems, where IT managers can fit twice as many of the drives in a standard 19-inch, 2U rack as they could equivalent 3.5-in. drives. The PM1633a sports random read/write speeds of up to 200,000 and 32,000 IOPS, respectively, and delivers sequential read/write speeds of up to 1,200MBps, the company said. The SSD can sustain one full drive write (15.38TB) per day, every day, over its life, which Samsung claims is two to ten times more than typical SATA SSDs based on planar MLC and TLC NAND flash technologies. The SSD is based on Samsung's 48-layer V-NAND (3D NAND) technology, which uses 3-bit MLC flash. The story is also covered at Hot Hardware.
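To put the endurance claim in perspective, one full drive write per day adds up quickly; the snippet below works out total bytes written over a hypothetical five-year service life. The five-year figure is an assumption for illustration, not a Samsung specification.

```python
# Rough endurance arithmetic for a 15.38 TB drive rated at one drive write per day (DWPD).
capacity_tb = 15.38
dwpd = 1       # full drive writes per day, per the summary
years = 5      # assumed service life -- illustrative only, not a Samsung spec

total_writes_tb = capacity_tb * dwpd * 365 * years
print(f"~{total_writes_tb:,.0f} TB written over {years} years "
      f"(~{total_writes_tb / 1000:.1f} PB)")
```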
Data Storage

Google-Backed SSD Endurance Research Shows MLC Flash As Reliable As SLC

MojoKid writes: Even for mainstream users, it's easy to feel the difference between a PC with its OS installed on a solid state drive and one running from a mechanical hard drive, and with SSD pricing where it is right now, it's easy to justify including one in a new configuration for the speed boost. There are obvious benefits in the enterprise and the data center too, for both performance and durability. As you might expect, Google has chewed through a healthy pile of SSDs in its data centers over the years, and the company appears to have been one of the first to deploy SSDs in production at scale. New results Google is sharing via a joint research project encompass SSD use over a six-year span at one of Google's data centers, and they contain both expected and unexpected findings. One of the biggest discoveries is that SLC-based SSDs are not necessarily more reliable than MLC-based drives. This is surprising, as SLC SSDs carry a price premium with the promise of higher durability (specifically in write operations) as one of their selling points. It will come as no surprise that there are trade-offs between SSDs and mechanical drives, but ultimately, the benefits SSDs offer often far outweigh those of mechanical HDDs.
Data Storage

Google Proposes New Hard Drive Format For Data Centers

An anonymous reader writes: In a new research paper, Google's VP of Infrastructure argues that hard drive manufacturers and data center provisioners should consider revising the current 3.5" form factor in favour of taller, multi-platter form factors — possibly combined with HDDs of smaller circumference, which hold less data but have better seek times. Eric Brewer, also a professor at UC Berkeley, writes: "The current 3.5" HDD geometry was adopted for historic reasons – its size inherited from the PC floppy disk. An alternative form factor should yield a better TCO overall. Changing the form factor is a long term process that requires a broad discussion, but we believe it should be considered."
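The capacity-versus-seek trade-off Brewer alludes to follows from simple geometry: recordable area (and hence, roughly, capacity at a fixed areal density) scales with the difference of the squared outer and inner platter radii, while full-stroke seek distance scales with the difference of the radii themselves. The sketch below compares a 3.5"-class platter against a hypothetical smaller one; the platter and hub dimensions are made-up, illustrative figures, not anything from the paper.

```python
import math

def platter_stats(outer_d_in, inner_d_in):
    """Return (recordable annulus area in sq. in., full-stroke seek distance in inches).

    Crude model: capacity ~ areal density * annulus area; seek ~ radius difference.
    """
    r_out, r_in = outer_d_in / 2, inner_d_in / 2
    area = math.pi * (r_out ** 2 - r_in ** 2)
    return area, r_out - r_in

# Hypothetical comparison: a ~95 mm (3.74") platter typical of 3.5" drives vs. a
# smaller-diameter platter, both with an assumed 1.0" unusable hub region.
big = platter_stats(3.74, 1.0)
small = platter_stats(2.5, 1.0)

print(f"area ratio (small/big):  {small[0] / big[0]:.2f}")
print(f"seek-distance ratio (small/big): {small[1] / big[1]:.2f}")
```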

UCL Scientists Push 1.125Tbps Through a Single Coherent Optical Receiver

Mark.JUK writes: A team of researchers working in the Optical Networks Group at University College London claims to have achieved the "greatest information rate ever recorded using a single [coherent optical] receiver," which handled a record data rate of 1.125 Terabits per second (Tbps). The result, which required a 15 sub-carrier 8GBd DP-256QAM super-channel (15 channels of data) and a total bandwidth of 121.5GHz, represents an increase of 12.5% over the previous record of 1Tbps. The next step is to test it over long runs of fibre optic cable, since optical signals tend to become distorted as they travel over thousands of kilometres.
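The headline figure can be roughly sanity-checked from the stated modulation parameters: 15 sub-carriers at 8 GBd with 256-QAM (8 bits per symbol) on two polarizations gives a raw line rate of 1.92 Tbps, and the reported 1.125 Tbps is the net information rate, with the difference presumably absorbed by coding overhead and the noise-limited capacity of the real channel. A minimal sketch of that arithmetic:

```python
import math

sub_carriers = 15
symbol_rate = 8e9      # 8 GBd per sub-carrier
qam_order = 256        # 256-QAM -> 8 bits per symbol
polarizations = 2      # dual polarization (the "DP" in DP-256QAM)

bits_per_symbol = math.log2(qam_order)
raw_rate = sub_carriers * symbol_rate * bits_per_symbol * polarizations
print(f"raw line rate: {raw_rate / 1e12:.2f} Tbps")          # ~1.92 Tbps
print(f"reported net information rate: 1.125 Tbps "
      f"(~{1.125e12 / raw_rate:.0%} of raw)")
```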

Startup Uses Sensor Networks To Debug Science Experiments

gthuang88 writes: Environmental factors like temperature, humidity, or lighting often derail life science experiments. Now Elemental Machines, a startup from the founders of Misfit Wearables, is trying to help scientists debug experiments using distributed sensors and machine-learning software to detect anomalies. The product is in beta testing with academic labs and biotech companies. The goal is to help speed up things like biology research and drug development. Wiring up experiments is part of a broader effort to create "smart labs" that automate some of the scientific process.
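Elemental Machines hasn't published its algorithms, but the general idea of flagging environmental anomalies from sensor streams can be illustrated with something as simple as a rolling z-score over temperature readings. The detector below is a generic sketch under that assumption, not the company's product.

```python
from collections import deque
from statistics import mean, stdev

def rolling_zscore_alerts(readings, window=48, threshold=3.0):
    """Yield (index, value, z) for readings that deviate sharply from the recent window.

    `readings` is an iterable of numeric sensor samples (e.g. lab temperature in C).
    A sample whose z-score relative to the trailing window exceeds `threshold` is flagged.
    """
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value, (value - mu) / sigma
        history.append(value)

# Example: a stable incubator trace with one spike (say, a door left open).
trace = [37.0 + 0.05 * (i % 3) for i in range(100)]
trace[60] = 39.5
print(list(rolling_zscore_alerts(trace)))
```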

How Uber Profits Even When Its Drivers Aren't Earning Money

tedlistens writes: Jay Cassano spoke to Uber drivers about "dead miles" and what work means when your boss is an algorithm, and considers a new frontier of labor concerns and big data. "Uber is the closest thing to an employer we've ever seen in this industry," Bhairavi Desai, founder of the New York Taxi Workers Alliance, told him. "They not only direct every aspect of a driver's workday, they also profit off the entire day through data collection, not just the 'sale of a product.'"
The Media

How To Build a TimesMachine

necro81 writes: The NY Times has an archive, the TimesMachine, that allows users to find any article from any issue from 1851 to the present day. Most of it is shown in the original typeset context of where an article appeared on a given page — like sifting through a microfiche archive. But when the original newspaper scans are 100-MB TIFF files, how can this information be conveyed efficiently to the end user? These and other computational challenges are described in this blog post on how the TimesMachine was realized.
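The blog post covers the Times' specific pipeline; as a general illustration of how a 100-MB scan can be served without shipping the whole file, a common technique is to pre-cut each page into a pyramid of small tiles at several zoom levels, the way slippy web maps do, so the browser fetches only the tiles currently in view. The sketch below shows that generic approach using the Pillow imaging library; it is not the Times' actual code, and the filename is hypothetical.

```python
from PIL import Image  # Pillow
import os

Image.MAX_IMAGE_PIXELS = None  # scanned broadsheets can exceed Pillow's default limit

def tile_page(path, out_dir, tile=256, levels=4):
    """Cut a large page scan into a pyramid of fixed-size tiles, one folder per zoom level."""
    page = Image.open(path)
    for level in range(levels):
        scale = 2 ** (levels - 1 - level)  # coarsest level first
        scaled = page.resize((page.width // scale, page.height // scale))
        for ty in range(0, scaled.height, tile):
            for tx in range(0, scaled.width, tile):
                box = (tx, ty, min(tx + tile, scaled.width), min(ty + tile, scaled.height))
                out = os.path.join(out_dir, str(level), f"{tx // tile}_{ty // tile}.jpg")
                os.makedirs(os.path.dirname(out), exist_ok=True)
                scaled.crop(box).convert("RGB").save(out, quality=85)

# tile_page("1912-04-16-page1.tif", "tiles/")   # hypothetical filename
```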

Hunting Malware With GPUs and FPGAs

szczys writes: Rick Wesson has been working on a way to identify copies of the same piece of malware that have been altered through polymorphism (a common method of escaping detection). While the bits are scrambled from one sample to the next, he has found that using a space-filling curve makes it easy to cluster polymorphically similar malware samples together. Forming the fingerprint with these curves is computationally expensive, and this is an Internet-scale problem: he currently needs to inspect 300,000 new samples a day. Switching the calculation to a GPU yielded roughly a four-order-of-magnitude efficiency gain over CPUs, reaching about 200,000 samples a day. Rick has begun testing FPGA processing, aiming for a goal of processing 10 million samples in four hours on a machine drawing 4,000 watts.
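Wesson's pipeline isn't spelled out in detail here, but the core property of a space-filling curve is that nearby offsets in a file land near each other in 2D, so structurally similar binaries yield numerically similar fingerprints. Below is a minimal sketch of that idea using the standard Hilbert-curve index-to-coordinate mapping; the fingerprinting step is an illustrative assumption, not Wesson's actual method.

```python
def hilbert_d2xy(n, d):
    """Map index d along a Hilbert curve to (x, y) on an n x n grid (n a power of two)."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                  # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def fingerprint(data: bytes, n=64):
    """Paint sampled byte values onto an n x n grid along the Hilbert curve (toy fingerprint)."""
    grid = [[0] * n for _ in range(n)]
    cells = n * n
    for d in range(cells):
        x, y = hilbert_d2xy(n, d)
        # sample the file evenly so any file size maps onto the same-size grid
        grid[y][x] = data[d * len(data) // cells] if data else 0
    return grid

fp = fingerprint(open("/bin/ls", "rb").read())   # any binary will do; path is illustrative
print(len(fp), len(fp[0]))                       # 64 x 64 grid
```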

Uber Scaling Up Its Data Center Infrastructure

1sockchuck writes: Connected cars generate a lot of data. That's translating into big business for data center providers, as evidenced by a major data center expansion by Uber, which needs more storage and compute power to support its global data platform. Uber drivers' mobile phones send location updates every 4 seconds, which is why the design goal for Uber's geospatial index is to handle a million writes per second. It's a reminder that as our cars become mini data centers, the data isn't staying onboard, but will also be offloaded to the data centers of automakers and software companies.
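The two numbers in the summary relate directly: if every active driver's phone posts a location every 4 seconds, the sustained write load is simply the number of active drivers divided by four. A quick sketch of that arithmetic (the driver counts are made-up illustrations, not Uber figures):

```python
UPDATE_INTERVAL_S = 4            # location update period per driver, per the summary
DESIGN_GOAL_WRITES_PER_S = 1_000_000

def writes_per_second(active_drivers):
    """Sustained location-write rate if each driver reports every UPDATE_INTERVAL_S seconds."""
    return active_drivers / UPDATE_INTERVAL_S

for drivers in (100_000, 1_000_000, 4_000_000):  # illustrative driver counts only
    rate = writes_per_second(drivers)
    print(f"{drivers:>9,} active drivers -> {rate:>9,.0f} writes/s "
          f"({rate / DESIGN_GOAL_WRITES_PER_S:.0%} of the 1M/s design goal)")
```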
