Facebook

Facebook Engineers Develop New Open Source Time Keeping Appliance (techcrunch.com) 99

Ron Miller, writing for TechCrunch: Most people probably don't realize just how much our devices are time driven, whether it's your phone, your laptop or a network server. For the most part, time keeping has been an esoteric chore, taken care of by a limited number of hardware manufacturers. While these devices served their purpose, a couple of Facebook engineers decided there had to be a better way. So they built a new, more accurate time keeping device that fits on a PCI Express (PCIe) card, and contributed it to the Open Compute Project as an open source project. At a basic level, says Oleg Obleukhov, a production engineer at Facebook, it's simply pinging this time-keeping server to make sure each device is reporting the same time.

"Almost every single electronic device today uses NTP -- the Network Time Protocol -- which you have on your phone, on your watch, on your laptop, everywhere, and they all connect to these NTP servers where they just go and say, 'what time is it?' and the NTP server provides the time," he explained. Before Facebook developed a new way of doing this, there were basically two ways to check the time. If you were a developer, you probably used something like Facebook.com as a time checking mechanism, but a company like Facebook, working at massive scale, needed something that worked even when there wasn't an internet connection.

Companies running data centers have a hardware device called Stratum One, which is a big box that sits in the data center, and has no other job than acting as the time keeper. Because these time-keeping boxes were built by a handful of companies over years, they were solid and worked, but it was hard to get new features. What's more, companies like Facebook couldn't control the boxes because of their proprietary nature. Obleukhov and his colleague, research scientist Ahmad Byagowi, began to attack the problem by looking for a way to recreate these devices, building a PCIe card with off-the-shelf parts that you could stick into any PC with an open slot.
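The client/server exchange Obleukhov describes -- "what time is it?" / the server answers -- can be sketched as a minimal SNTP query. The packet layout below (mode-3 client request, transmit timestamp at byte offset 40, 1900-based epoch) follows RFC 4330; this is an illustrative sketch of the protocol, not Facebook's appliance code, and the parsing is demonstrated against a synthetic response so it runs without network access.

```python
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

def build_client_packet():
    """48-byte NTP request: LI=0, version=3, mode=3 (client)."""
    packet = bytearray(48)
    packet[0] = 0x1B  # 0b00_011_011
    return bytes(packet)

def parse_transmit_time(packet):
    """Extract the server's transmit timestamp (bytes 40-47) as Unix time."""
    seconds, fraction = struct.unpack("!II", packet[40:48])
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

# Demonstrate the parsing against a synthetic server response rather than a
# live query: encode a known Unix time into NTP format, then decode it back.
unix_time = 1_600_000_000
response = bytearray(48)
response[40:48] = struct.pack("!II", unix_time + NTP_EPOCH_OFFSET, 0)
print(parse_transmit_time(bytes(response)))  # 1600000000.0
```

A real client would send `build_client_packet()` over UDP to port 123 of an NTP server (e.g. via `socket.sendto`) and feed the 48-byte reply to `parse_transmit_time`.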

Linux

SiFive Unveils Plan For Linux PCs With RISC-V Processors (venturebeat.com) 42

SiFive today announced it is creating a platform for Linux-based personal computers based on RISC-V processors. VentureBeat reports: Assuming customers adopt the processors and use them in PCs, the move might be part of a plan to create Linux-based PCs that use royalty-free processors. This could be seen as a challenge to computers based on designs from Intel, Advanced Micro Devices, Apple, or Arm, but giants of the industry don't have to cower just yet. The San Mateo, California-based company unveiled HiFive Unmatched, a development design for a Linux-based PC that uses its RISC-V processors. At the moment, these development PCs are early alternatives, most likely targeted at hobbyists and engineers who may snap them up when they become available in the fourth quarter for $665.

The SiFive HiFive Unmatched board will have a SiFive processor, dubbed the SiFive FU740 SoC, a 5-core processor with four SiFive U74 cores and one SiFive S7 core. The U-series cores are Linux-capable 64-bit RISC-V application processor cores. These cores can be mixed and matched with other SiFive cores, as the FU740 itself demonstrates. These components are all leveraging SiFive's existing intellectual property portfolio. The HiFive Unmatched board comes in the mini-ITX standard form factor to make it easy to build a RISC-V PC. SiFive also added some standard industry connectors -- ATX power supplies, PCI-Express expansion, Gigabit Ethernet, and USB ports are present on a single-board RISC-V development system.

The HiFive Unmatched board includes 8GB of DDR4 memory, 32MB of QSPI flash memory, and a microSD card slot on the motherboard. For debugging and monitoring, developers can access the console output of the board through the built-in microUSB type-B connector. Developers can expand it using PCI-Express slots, including both a PCIe general-purpose slot (PCIe Gen 3 x8) for graphics, FPGAs, or other accelerators and M.2 slots for NVME storage (PCIe Gen 3 x4) and Wi-Fi/Bluetooth modules (PCIe Gen 3 x1). There are four USB 3.2 Gen 1 type-A ports on the rear, next to the Gigabit Ethernet port, making it easy to connect peripherals. The system will ship with a bootable SD card that includes Linux and popular system developer packages, with updates available for download from SiFive.com. It will be available for preorders soon.
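As a rough sanity check on what those expansion slots can move, per-lane PCIe Gen 3 throughput can be estimated from the signaling rate (8 GT/s) and the 128b/130b line encoding. The figures below are theoretical one-direction maxima, before protocol overhead; the slot labels just mirror the board description above.

```python
# Theoretical PCIe Gen 3 bandwidth: 8 GT/s per lane with 128b/130b encoding.
GEN3_GT_PER_SEC = 8e9
ENCODING = 128 / 130

def gen3_bandwidth_gbytes(lanes):
    """Peak one-direction bandwidth in GB/s for a Gen 3 link of `lanes` lanes."""
    return GEN3_GT_PER_SEC * ENCODING * lanes / 8 / 1e9

for slot, lanes in [("x8 GPU/FPGA/accelerator slot", 8),
                    ("x4 NVMe M.2 slot", 4),
                    ("x1 Wi-Fi/Bluetooth M.2 slot", 1)]:
    print(f"{slot}: ~{gen3_bandwidth_gbytes(lanes):.2f} GB/s")
```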

For some more context: Could RISC-V processors compete with Intel, ARM, and AMD?
AMD

GPU and Motherboard OEMs Readying Components Optimized For Cryptocurrency Mining (hothardware.com) 77

MojoKid writes: With the popularity of upstart cryptocurrencies like Ethereum on the rise and the value of well-established currencies like Bitcoin steadily increasing, there is new-found interest in cryptocurrency mining. As such, there is another run on AMD and NVIDIA GPUs, which is driving up prices. In an effort to prevent the same kind of GPU shortages that happened in the past, reports have surfaced claiming that AMD and NVIDIA are both readying stripped-down graphics cards, specifically targeting cryptocurrency miners. At Computex, ASRock also announced a new motherboard targeted at cryptocurrency miners, the ASRock H110 Pro BTC+. The ASRock H110 Pro BTC+ is packing 13 PCI Express slots -- twelve x1 slots and one x16 slot -- to accommodate up to 13 graphics cards. ASRock didn't specify pricing or when the H110 Pro BTC+ will be available, however. And the reports that AMD and NVIDIA graphics cards for mining will be made available sometime at the end of June are as yet unconfirmed.
Intel

Intel's New Mini PCs Have New Chips, an Updated Design, and Thunderbolt 3 (arstechnica.com) 92

An anonymous reader quotes a report from Ars Technica: In the last four or five years, Intel's "Next Unit of Computing" (NUC) hardware has evolved from interesting experiments to pace cars for the rest of the mini desktop business. Mini PCs represent one of the few segments of the desktop computing business that actually has growth left in it, and every year the NUC has added new features that make it work for a wider audience. This year's models, introduced alongside the rest of Intel's new "Kaby Lake" processor lineup at CES, include new processors with new integrated GPUs, but that's probably the least interesting thing about them. Thanks to the demise of Intel's "tick-tock" strategy, the processor updates are minor. Kaby Lake chips include smaller performance and architectural improvements than past generations, and year-over-year gains have been mild for a few years now. The big news is in all the ways you can get bytes into and out of these machines. There are two Core i3 models (NUC7i3BNK and NUC7i3BNH), two Core i5 models (NUC7i5BNK and NUC7i5BNH), and one Core i7 model (NUC7i7BNH) -- that last one is intended to replace the older dual-core Broadwell i7 NUC and not the recent quad-core "Skull Canyon" model. The Core i3 and i5 versions come in both "short" and "tall" cases, the latter of which offers space for a 2.5-inch laptop-sized SATA hard drive or SSD. The i7 version only comes in a "tall" version. Like past NUCs, all five models offer two laptop-sized DDR4 RAM slots and an M.2 slot for SATA and PCI Express SSDs (up to four lanes of PCIe 3.0 bandwidth are available). Bluetooth and 802.11ac Wi-Fi are built-in. As for the rest of the NUCs' features, Intel has drawn a line between the Core i3 model and the i5/i7 models.
All of the boxes include four USB 3.0 ports (two on the front, two on the back), a headphone jack, an IR receiver, an HDMI 2.0 port, a gigabit Ethernet port, a microSD card slot, a dedicated power jack, and a new USB-C port that can be used for data or DisplayPort output (the dedicated DisplayPort is gone, and this port can't be used to power the NUCs). In the i5 and i7 models, the USB-C port is also a full-fledged Thunderbolt 3 port, the first time any of the smaller dual-core NUCs have included Thunderbolt since the old Ivy Bridge model back in 2012.
AMD

AMD Unveils Radeon Pro WX and Pro SSG Professional Graphics Cards (hothardware.com) 53

MojoKid writes: AMD took the wraps off its latest pro graphics solutions at SIGGRAPH today, and announced three new professional graphics cards in the new Polaris-based Radeon Pro WX Series. The Radeon Pro WX 4100 is the entry-level model with a half-height design for use in small form-factor workstations. The Radeon Pro WX 5100 is the middle child, while the Radeon Pro WX 7100 is AMD's current top-end WX model. The Radeon Pro WX 7100 has 32 compute units, offers 5 TFLOPs of compute performance, and is backed by 8GB of GDDR5 memory over a 256-bit memory interface. The Radeon Pro WX 5100 offers 28 compute units and 4 TFLOPs of performance along with 8GB memory over the same 256-bit interface, and the Radeon Pro WX 4100 comprises 16 compute units at 2 TFLOPs of perf with 4GB memory over a 128-bit memory link. The Radeon Pro WX 4100 has four mini DisplayPort outputs, while the Radeon Pro WX 5100 and 7100 each have four full-size DisplayPort connectors. None of these cards will be giving the new NVIDIA Quadro P6000 a run for its money in terms of performance, but they don't have to. The Quadro card will no doubt cost thousands of dollars, while the Radeon Pro WX 7100 will come in at just under $1,000. The Radeon Pro WX 5100 and 4100 will slot in somewhat below that mark. AMD also announced the Radeon Solid State Storage Architecture and the Radeon Pro SSG card today. Details are scant, but AMD is essentially outfitting Radeon Pro SSG cards with large amounts of Solid State Flash Memory, which can allow much larger data sets to reside close to the GPU in an extended frame buffer. Whereas the highest-end professional graphics cards today may have up to 24GB of memory, the Radeon Pro SSG will start with 1TB, linked to the GPU via a custom PCI Express interface.
Giving the GPU access to a large, local data repository should offer significantly increased performance for demanding workloads like real-time post-production of 8K video, high-resolution rendering, VR content creation and others.
AMD

AMD Details Driver Fix For Radeon RX 480's Controversial, Spec-Exceeding Power Draw (pcworld.com) 157

AMD's 150-watt Radeon RX 480 apparently draws more power than it is supposed to. According to Tom's Hardware blog, AMD's new graphics card used an average of 168W under load. Furthermore, the publication found that the card pulled up to a whopping 90W over the motherboard's PCI-E slot, far exceeding the 75W maximum the slot is rated for. PC Perspective's findings were similar, with The Witcher 3 consuming over 190W of sustained power draw when the RX 480 was overclocked. Worse, the blog discovered that AMD's card drew 7 amps over the PCI-E slot's +12v rail, which is rated for 5.5 amps maximum. These issues could theoretically (though not likely) damage lower-end motherboards in extreme circumstances, writes PCWorld. The chip company last week addressed the concerns, noting that it will soon release a software fix. In a new statement to PCWorld, the company adds: "We promised an update today (July 5, 2016) following concerns around the Radeon RX 480 drawing excess current from the PCIe bus. Although we are confident that the levels of reported power draws by the Radeon RX 480 do not pose a risk of damage to motherboards or other PC components based on expected usage, we are serious about addressing this topic and allaying outstanding concerns. Towards that end, we assembled a worldwide team this past weekend to investigate and develop a driver update to improve the power draw. We're pleased to report that this driver -- Radeon Software 16.7.1 -- is now undergoing final testing and will be released to the public in the next 48 hours. In this driver we've implemented a change to address power distribution on the Radeon RX 480 -- this change will lower current drawn from the PCIe bus. Separately, we've also included an option to reduce total power with minimal performance impact. Users will find this as the "compatibility" UI toggle in the Global Settings menu of Radeon Settings. This toggle is "off" by default.
Finally, we've implemented a collection of performance improvements for the Polaris architecture that yield performance uplifts in popular game titles of up to 3%. These optimizations are designed to improve the performance of the Radeon RX 480, and should substantially offset the performance impact for users who choose to activate the "compatibility" toggle."
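The reported numbers can be checked with simple P = V × I arithmetic: the 5.5 A limit on the slot's +12 V rail works out to 66 W (the rest of the 75 W slot budget comes from the 3.3 V rail), so the 7 A measurement is well past spec. A quick sketch:

```python
PCIE_12V_MAX_AMPS = 5.5   # PCIe spec limit for the slot's +12 V rail
RAIL_VOLTS = 12.0

def rail_watts(amps, volts=RAIL_VOLTS):
    """Power drawn from a rail: P = V * I."""
    return amps * volts

spec_watts = rail_watts(PCIE_12V_MAX_AMPS)   # 66.0 W allowed via the 12 V pins
measured_watts = rail_watts(7.0)             # 84.0 W at the 7 A PC Perspective measured
print(f"over spec by {measured_watts - spec_watts:.1f} W "
      f"({measured_watts / spec_watts - 1:.0%})")
```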
Hardware

Alienware's X51 R3, Revamped With Skylake and Maxwell, Tested and Torn Down 18

MojoKid writes: Alienware has been relatively quiet for the past 18 months or so with respect to their X51 small form factor gaming systems. However, Intel's recent Skylake processor launch and NVIDIA's further optimizations in their Maxwell GPU architecture have given the company a fresh suite of technology to work with to enhance performance and reduce power consumption. As such, the Alienware X51 was given a complete overhaul with the latest technologies, all of which play very well with the tighter power budgets and thermal constraints of this class of machine. Alienware calls their new machine simply the X51 R3, as it's the third revision of the product. One of the more unique design changes that Alienware made was to the graphics riser card, which plugs into an x20 PCI Express slot on the motherboard. This is a clever approach to design efficiency, which allows the Samsung NVMe M.2 gumstick Solid State Drive in this machine to ride shotgun with the NVIDIA GeForce GTX 960 on the side of a custom riser card. Performance-wise, the machine is capable of strong standard compute performance on the desktop, and in the latest game titles it's able to offer up playable frame rates up through 1440p resolution with high image quality settings. Not bad for a console-sized small form factor PC, actually.
Data Storage

Kingston HyperX Predator SSD Takes Gumstick M.2 PCIe Drives To 1.4GB/sec 51

MojoKid writes: Kingston recently launched their HyperX Predator PCIe SSD that is targeted at performance-minded PC enthusiasts but is much less expensive than enterprise-class PCIe offerings that are currently in market. Kits are available in a couple of capacities and form factors at 240GB and 480GB. All of the drives adhere to the 80mm M.2 2280 "gumstick" form factor and have PCIe 2.0 x4 connections, but are sold both with and without a half-height, half-length adapter card, if you'd like to drop it into a standard PCI Express slot. At the heart of the Kingston HyperX Predator is Marvell's latest 88SS9293 controller. The Marvell 88SS9293 is paired with a gigabyte of DDR3 memory and Toshiba A19 Toggle NAND. The drives are rated for read speeds up to 1.4GB/s, writes of 1GB/s, and 130K to 160K random 4K IOPS. In the benchmarks, the 480GB model put up strong numbers. At roughly $1 per GiB, the HyperX Predator is about on par with Intel's faster SSD 750, but unlike Intel's new NVMe solution, the Kingston drive will work in all legacy platforms as well, not just Z97 and X99 boards with a compatible UEFI BIOS.
Graphics

NVIDIA Unveils Dual-GPU Powered GeForce GTX 690 93

MojoKid writes "Today at the GeForce LAN taking place in Shanghai, NVIDIA's CEO Jen-Hsun Huang unveiled the company's upcoming dual-GPU powered, flagship graphics card, the GeForce GTX 690. The GeForce GTX 690 will feature a pair of fully-functional GK104 "Kepler" GPUs. If you recall, the GK104 is the chip powering the GeForce GTX 680, which debuted just last month. On the upcoming GeForce GTX 690, each of the GK104 GPUs will also be paired with its own 2GB of memory (4GB total) via a 256-bit interface, resulting in what is essentially GeForce GTX 680 SLI on a single card. The GPUs on the GTX 690 will be linked to each other via a PCI Express 3.0 switch from PLX, with a full 16 lanes of electrical connectivity between each GPU and the PEG slot. Previous dual-GPU powered cards from NVIDIA relied on the company's own NF200, but that chip lacks support for PCI Express 3.0, so NVIDIA opted for a third party solution this time around."
Networking

Ask Slashdot: Best Use For a New Supercomputing Cluster? 387

Supp0rtLinux writes "In about two weeks' time I will be receiving everything necessary to build the largest x86_64-based supercomputer on the east coast of the U.S. (at least until someone takes the title away from us). It's spec'ed to start with 1200 dual-socket six-core servers. We primarily do life-science/health/biology related tasks on our existing (fairly small) HPC. We intend to continue this usage, but to also open it up for new uses (energy comes to mind). Additionally, we'd like to lease access to recoup some of our costs. So, what's the best Linux distro for something of this size and scale? Any that include a chargeback option/module? Additionally, due to cost contracts, we have to choose either InfiniBand or 10Gb Ethernet for the backend: which would Slashdot readers go with if they had to choose? Either way, all nodes will have four 1Gbps Ethernet ports. Finally, all nodes include only a basic onboard GPU. We intend to put powerful GPUs into the PCI-e slot and open up the new HPC for GPU related crunching. Any suggestions on the most powerful Linux-friendly PCI-e GPU available?"

Hornet Pro PC Reviewed 33

A while back I had the chance to use Monarch Computer's Hornet Pro. The Hornet is a small form factor, game-cube-style machine. You may also recall my earlier review of Monarch's Nemesis system. Read more to find out how well this machine stacked up, and compare notes.

Other Uses for an AGP Slot? 160

SleepyHappyDoc asks: "AGP seems to be going the way of the dinosaur, but there's still a lot of slots on legacy motherboards out there. If you don't have need for the graphical advantages of AGP (say, on a headless server), what else could you use the AGP slot for? Could the advantages of AGP over PCI be leveraged in a use other than graphics cards?"
Data Storage

Gigabyte Solid-State Storage Reviewed 71

EconolineCrush writes "The Tech Report has a review of Gigabyte's i-RAM, a relatively affordable solid-state storage device that uses plain old DDR memory modules and plugs into a standard motherboard PCI slot and Serial ATA port. Performance is generally excellent and occasionally jaw-dropping, but the i-RAM's appeal is ultimately curbed by its slower Serial ATA interface and limited capacity. Still, it's an interesting solution for anyone looking for faster I/O, and since it behaves like a normal hard drive without the need for drivers or software, it should work with just about any operating system."
Upgrades

Value (Price/Quality) for Computer Upgrades? 60

Sierpinski asks: "I am currently researching a new video card, and seeing that PCI-Express has pretty much taken the industry by storm, I have not been able to find a relatively recent (late-model, so to speak) AGP card. If I get a PCI-Express card, I'll need to upgrade my board. If I upgrade my board, I doubt my CPU (Socket 462) will still be usable. As much as price is a factor, compatibility is as well. I've run into problems in the past where X memory wouldn't work in Y boards, etc. Does anyone have a spec list of the main components (board, CPU, memory, video card) that are recent (i.e. a 6800GT PCI-Express), and work well together?"
Hardware Hacking

Making Your PC Dust Free? 89

Kranfer asks: "Recently, I cleaned out my PC to find not only dust... but also feathers from my rather large parrots. I have struggled with keeping my PC dust free for years, but I have yet to find a workable solution that will keep the dust from stacking up every few months, inside my PCs at home. I was hoping that my peers on Slashdot might have thought up some innovative solutions to this common problem with any PC. How does one cut down on the dust entering a PC and sticking around? I run an Antec File Server Case with every fan slot filled and blowing out, and even one of those hard drive coolers and PCI slot coolers. What have you done to rid yourself of the dust and pet dander inside your PCs?"
AMD

How Many Desktop PCs Can One Server Replace? 107

NZheretic asks: "HP has just announced that they have upgraded a four-processor server with Advanced Micro Devices' new dual-core Opteron. The amount of processing power a multi-processor multi-core system can deliver seems like a waste of processing power for most traditional servers, which are more likely to suffer from disk access bottlenecks before lack of processing power becomes a problem. But what if that power could be delivered direct to the desktop users? The HP ProLiant DL585 supports eight 64-bit PCI-X I/O slots (six 100MHz, two 133MHz). The ATI FireMV(TM) 2400 supports Quad DVI/VGA displays on PCI Express. Assuming that you leave one PCI-X slot for a multiport USB card, that's up to twenty-eight displays with USB keyboards, mice, and headsets that could theoretically replace twenty-eight networked desktop PCs. Using DVI and USB extenders, not all of the user stations would have to be within the 7.5 meter cable distance imposed by the DVI cable limit. The only OS currently capable of supporting this many displays is Linux. What limits would be imposed by the hardware and PCI-X bottlenecks? Taking into account the added cost of the HP and ATI hardware, could it deliver a great reduction in the total cost of ownership over both traditional PCs and thin client systems? How many desktops is it practical for a high end server to directly replace?"
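The display count in the question follows from simple slot arithmetic using the figures quoted (eight PCI-X slots, one reserved for the USB card, four displays per FireMV 2400) -- a sketch of that tally:

```python
TOTAL_PCIX_SLOTS = 8      # HP ProLiant DL585 PCI-X I/O slots
SLOTS_FOR_USB = 1         # one slot reserved for a multiport USB card
DISPLAYS_PER_CARD = 4     # each ATI FireMV 2400 drives quad DVI/VGA displays

graphics_slots = TOTAL_PCIX_SLOTS - SLOTS_FOR_USB
max_displays = graphics_slots * DISPLAYS_PER_CARD
print(max_displays)  # 28
```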
Hardware

Advanced System Building Guide 523

Alan writes "FiringSquad has up an Advanced System Building Guide, detailing how to construct your own rig. The first half deals with hardware selection and even esoteric concepts such as PCI slot placement. The second half is focused on Windows XP, and makes recommendations such as moving the swap file and scratch disk to a separate partition." From the article: "You laugh at the so-called expertise of Best Buy's GeekSquad, and are the one doing the teaching when calling technical support. If this sounds like you, you've come to the right place if you're looking to take your system building skills to the next level."
Portables

External PCI Box for Laptops? 82

cagem0nkey asks: "I am in need of some type of external PCI card box for use with a laptop. I was able to find several different solutions, but these were all WAY too expensive for my wallet (at around $1,000 each for one PCI slot!). Does anyone know of a cheaper way to add PCI card capability to a laptop? Possibly a USB or Firewire external enclosure?"
Hardware

Abused, But Working Hardware Stories? 1352

RPI Geek writes "Everyone's heard the stories about people who, knowingly or unknowingly, abuse their computers. Personally, I've had a faulty power supply literally burn a hole through the motherboard, with the only ill effects being a dead PCI slot and USB ports. I'm curious as to what kind of abuse fellow /.ers have done or seen done to electronics while the hardware still worked afterwards. Soldered a broken keyboard PCB back together so that it worked fine? Taken sticks of RAM out of a running computer to see when it would notice? Overclocked a 386... to 386MHz? I'm interested in hearing any stories about abused-but-working hardware."
Programming

Sensors for Automobile Computers? 40

Bombcar asks: "I'm going to be installing an EPIA mainboard in my car, using a DC-DC power supply. It is mainly for playing music, but it has the potential for so much more. I know I can get LCD displays, and I know that various sensors are made for automobiles, but I want to combine both these with the computer. Most car sensors are analog, so does anyone know of an easy way to interface with analog (and perhaps some digital) sensors? Anything used would have to be able to stand up to the rigors of automobile use. The EPIA board has 4 serial ports, a parallel port, and some USB ports, along with a PCI slot. I plan to use this for display purposes only (not control any important vehicle functions), but am also leaning towards some 'fun' improvements, such as playing certain songs when the pedal is floored."
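One common approach to the analog-interfacing part of the question is a microcontroller or serial-attached ADC that streams readings as text lines, which the in-car computer then parses over one of the board's serial ports. The line format and sensor names below are hypothetical, purely for illustration; a real setup would match whatever the ADC firmware actually emits.

```python
def parse_sensor_line(line):
    """Parse a hypothetical 'NAME:VALUE' reading, e.g. 'RPM:3200.0'."""
    name, _, value = line.strip().partition(":")
    return name, float(value)

# In a real install these lines would arrive over a serial port
# (e.g. /dev/ttyS0); here we just parse a few captured samples.
samples = ["RPM:3200.0", "COOLANT_TEMP:91.5", "THROTTLE:100.0"]
readings = dict(parse_sensor_line(s) for s in samples)

# e.g. trigger a playlist when the pedal is floored
if readings["THROTTLE"] >= 99.0:
    print("pedal floored: queue up the fast songs")
```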
