Comment Re:Slashvertisement Alert!! (not) (Score 2) 107

The product was released at the end of NOVEMBER and is just now getting out to retail. No need to shout that. And just because an article here speaks to a product's salient features (both good and not so good - lest you forget the lower res display was mentioned too) doesn't make it an advertisement.

Submission + - NVIDIA Tegra Note 7 Tested, Fastest Android 4.3 Slate Under $200 (hothardware.com)

MojoKid writes: NVIDIA officially took the wraps off of its Tegra Note mobile platform a few weeks back. If you're unfamiliar with the Tegra Note, it's a 7", Android-based tablet powered by NVIDIA's Tegra 4 SoC. The Tegra Note 7 also marks NVIDIA's second foray into the consumer electronics market with an in-house designed product; NVIDIA's SHIELD Android gaming device was the first out of the gate earlier this year. Though on the surface the Tegra Note 7 may appear to be just another 7-inch slate sporting a 1280x720 display, it has NVIDIA's proprietary passive stylus technology on board, very good-sounding speakers, and an always-on HDR camera. It's also currently one of the fastest Android tablets on the market in the benchmarks. Unlike in NVIDIA's SHIELD device, the Tegra 4 SoC in the Tegra Note 7 is passively cooled and crammed into a thin and light 7" tablet form factor. As a result, the SoC can't hit peak frequencies quite as high as the SHIELD's (1.8GHz vs. 1.9GHz), but that didn't hold the Tegra Note 7 back very much. In a few of the CPU-centric and system-level tests, the Tegra Note 7 finished at or near the head of the pack, and in the graphics benchmarks its 72-core GeForce GPU competed very well, often allowing the $199 Tegra Note 7 to outpace much more expensive devices.

Submission + - Rise Of The Super High Res Notebook Display (hothardware.com)

MojoKid writes: Mobile device displays continue to evolve and, along with the advancements in technology, resolution continues to scale higher, from Apple's Retina Display line to the high-resolution IPS and OLED displays in various Android and Windows phone products. Notebooks are now starting to follow the trend, driving very high resolution panels approaching 4K UltraHD even in 13-inch ultrabook form factors. Lenovo's Yoga 2 Pro, for example, is a three-pound, 0.61-inch-thick 13.3-inch ultrabook that sports a full QHD+ IPS display with a 3200x1800 native resolution. Samsung's ATIV Book 9 Plus also boasts the same 3200x1800 13-inch panel, while other recent releases from ASUS and Toshiba pack 2560x1440 displays. There's no question machines like Lenovo's Yoga 2 Pro are really nice and offer a ton of screen real estate for the money, but just how useful is a 3K or 4K display in a 13 to 15-inch design? Things can get pretty tight at these high resolutions, and you'll often end up turning screen magnification up so fonts are clear and things are legible. Granted, you can fit a lot more on your desktop, but it raises the question: isn't 1080p enough?

Submission + - Start-Up MagnaCom Aims To Revolutionize Wireless Comms With A 10dB Signal Boost (hothardware.com) 1

MojoKid writes: Technology development company MagnaCom thinks it has a new approach that could revolutionize wireless communication. With a sheaf of freshly minted patents and an impressive pitch, MagnaCom claims that its use of Wave Modulation (WAM) instead of the current QAM (Quadrature Amplitude Modulation) will provide the bandwidth next-generation content networks desperately need. All existing cellular technology is based on QAM. According to MagnaCom, its new technology can offer a 10dB signal strength increase (which, the company says, works out to 400% more range than competing QAM solutions). MagnaCom claims WAM is more efficient, makes better use of available spectrum, and can drive farther distances thanks to higher efficiencies. A WAM circuit supposedly can integrate right alongside QAM in a typical radio and requires only about 1mm² of silicon area. Because MagnaCom is an IP licensing firm, the next step is showing off its technology via FPGA at CES. It's hoping to attract attention from the likes of Qualcomm and Intel while working with the ITU or IEEE on upcoming wireless standards.
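How a 10dB link-budget gain translates into range depends on the propagation model; a quick sketch using the standard log-distance path-loss relation (the path-loss exponent is my illustrative assumption, and note that the company's 400% figure implies assumptions beyond free space):

```python
def range_multiplier(gain_db, path_loss_exp=2.0):
    """Range gained from a link-budget improvement, log-distance model.

    Received power falls off as distance**(-n) for path-loss exponent n,
    so a gain of G dB stretches usable range by 10**(G / (10 * n)).
    """
    return 10 ** (gain_db / (10 * path_loss_exp))

# In free space (n = 2), a 10dB gain buys about 3.16x the range;
# reaching 5x (400% more) requires a different propagation assumption.
print(round(range_multiplier(10.0), 2))  # 3.16
```

Indoor and urban environments typically have exponents above 2, which shrinks the multiplier further, so real-world gains vary widely.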

Submission + - AMD Announces Radeon R7 260, Affordable DX11 GPU For $109 (hothardware.com)

MojoKid writes: AMD is closing out the year with yet another new GPU announcement, though this one isn't quite like the last few. AMD wants to bring its GCN architecture, Mantle support, and TrueAudio engine down to ever lower price points with a new member of the Radeon R7 family, dubbed the Radeon R7 260. The Radeon R7 260 offers peak compute performance of 1.54 TFLOPS and memory bandwidth of 96GB/s, with 768 stream processors, a 1GHz engine clock, and 1GB of GDDR5 at 6Gbps. Performance-wise, the card performs at about the same level as or somewhat lower than a Radeon HD 7790, and markedly lower than the higher-end Radeon R7 260X and GeForce GTX 650 Ti Boost. The Radeon R7 260's power consumption, however, is the lowest of the bunch, which will probably appeal to some. AMD has noted that all of its board partners will be offering custom Radeon R7 260 cards when they hit store shelves in a few weeks.
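The quoted peak figures can be sanity-checked from the specs above; a quick sketch (the 128-bit memory bus width is inferred from the bandwidth number, not stated in the summary):

```python
def peak_tflops(stream_processors, clock_ghz):
    # Each GCN stream processor can retire one FMA (2 FLOPs) per clock
    return stream_processors * clock_ghz * 2 / 1000.0

def bandwidth_gbs(data_rate_gbps, bus_width_bits):
    # Per-pin data rate times bus width, converted from bits to bytes
    return data_rate_gbps * bus_width_bits / 8

print(peak_tflops(768, 1.0))   # 1.536 -> the quoted ~1.54 TFLOPS
print(bandwidth_gbs(6, 128))   # 96.0 GB/s, matching the quoted figure
```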

Submission + - NVIDIA G-SYNC Display Technology Explored (hothardware.com)

MojoKid writes: Back in September at a press event in Montreal, NVIDIA CEO Jen-Hsun Huang announced what he called "one of the most important works NVIDIA has done for computer graphics." The technology, called G-SYNC, is an end-to-end graphics and display architecture that starts with a Kepler-based GPU and ends with a G-SYNC module inside a monitor. The G-SYNC module is a piece of hardware that replaces the scaler inside a monitor and essentially does away with the fixed vertical refresh rates of current displays. Put simply, G-SYNC keeps a display and the output from a Kepler-based GPU in sync, regardless of frame rates or whether V-Sync is enabled. Instead of the monitor controlling the timing and refreshing at, say, every 60Hz, with G-SYNC the timing control is transferred to the GPU. NVIDIA achieved this by developing the G-SYNC module, which will be featured in a number of new monitors starting next year; it replaces the scaler and controller boards in current displays and allows for the dynamic refresh rates mentioned earlier. The module comprises an FPGA programmed by NVIDIA, a bit of DRAM, and a DisplayPort input. At this time, G-SYNC requires a Kepler-based GPU with a DP output and, obviously, a G-SYNC-enabled display. To fully appreciate the technology, a high-DPI gaming mouse is also recommended.
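The timing difference is easiest to see as a toy model (the frame completion times below are invented for illustration): with a fixed refresh and V-Sync, a frame that misses a refresh boundary waits for the next one, while a variable-refresh display scans out the moment the GPU finishes.

```python
import math

def scanout_times(frame_done_ms, refresh_ms=None):
    """When each finished frame actually appears on screen.

    With a fixed refresh interval (V-Sync on), a frame waits for the
    next refresh boundary; with refresh_ms=None (G-SYNC-style variable
    refresh), scan-out starts as soon as the frame is done.
    """
    if refresh_ms is None:
        return list(frame_done_ms)
    return [math.ceil(t / refresh_ms) * refresh_ms for t in frame_done_ms]

done = [15.0, 17.0, 40.0, 55.0]          # hypothetical GPU finish times (ms)
fixed = scanout_times(done, 1000 / 60)   # 60Hz: the 17 ms frame slips to ~33.3 ms
variable = scanout_times(done)           # each frame shown the moment it's ready
print(fixed, variable)
```

The 17 ms frame just missing the 16.7 ms boundary and stalling a full extra interval is exactly the stutter G-SYNC is designed to eliminate.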

Submission + - AMD A10 Kaveri Rumored To Ship With BF4, Faster Than Haswell Core i5 In Gaming (hothardware.com)

MojoKid writes: Rumors continue to spill out ahead of AMD's next-generation Kaveri launch regarding the CPU's overall performance, capabilities, and architecture. It appears AMD is upping the ante with this CPU in several ways, including a gaming bundle that will include Battlefield 4 with the highest-end A10 APUs. AMD is comparing its new APUs against Intel's Core i5-4670K. The Core i5-4670K is a quad-core chip without Hyper-Threading, and its integrated GPU packs 20 Execution Units (EUs) with a maximum clock speed of 1.2GHz. The forthcoming quad-core AMD A10-7850K, in contrast, packs 512 GCN cores and a 720MHz clock speed. According to the leaks, the A10-7850K beats the 4670K by up to 40% in 3DMark Fire Strike and by 8% in PCMark 8. This implies that the 7850K's efficiency has improved a fair degree, at least in certain tests. AMD is apparently betting that Kaveri's GPU is strong enough to be worth equivalent pricing against more CPU cores. If Steamroller can hang with the slower Core i5-4430 on the CPU side of things, then AMD will come out ahead in the combined test. Much will depend on just how good the CPU core's improvements are — with a top frequency of 4GHz, AMD can make up some ground against a 3.2GHz Intel chip.

Submission + - Intel SSD Roadmap Points To 2TB Drives Arriving In 2014 And HET MLC NAND (hothardware.com) 2

MojoKid writes: A leaked Intel roadmap for solid state storage technology suggests the company is pushing ahead with its plans to introduce new high-end drives based on cutting-edge NAND flash. It's significant for Intel to be adopting 20nm NAND in its highest-end data center products because of the challenges smaller NAND nodes present in terms of data retention and reliability. Intel introduced 20nm NAND lower in the product stack over a year ago, but apparently waited until now to bring it to the highest end. Reportedly, next year Intel will debut three new drive families: the SSD Pro 2500 Series (codenamed Temple Star), the DC P3500 Series (Pleasantdale), and the DC P3700 Series (Fultondale). The Temple Star family uses the M.2 and M.25 form factors, which are meant to replace the older mSATA form factor for ultrabooks and tablets. The M.2 standard allows more space on PCBs for actual NAND storage and can interface with PCIe, SATA, and USB 3.0-attached storage in the same design. The new high-end enterprise drives, meanwhile, will hit 2TB (up from 800GB), ship in 2.5" and add-in card form factors, and offer vastly improved performance. The current DC S3700 series offers 500MBps reads and 460MBps writes; the DC P3700 will increase this to 2800MBps reads and 1700MBps writes. The primary difference between the DC P3500 and DC P3700 families appears to be that the P3700 family will use Intel's High Endurance Technology (HET) MLC, while the DC P3500 family sticks with traditional MLC.
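To put those sequential numbers in perspective, some illustrative arithmetic on how long one full sequential pass over each drive would take (capacities and speeds from the summary, decimal units assumed):

```python
def full_read_minutes(capacity_gb, seq_mbps):
    # One sequential pass over the whole drive, in minutes (1 GB = 1000 MB)
    return capacity_gb * 1000 / seq_mbps / 60

print(round(full_read_minutes(800, 500), 1))    # DC S3700 class: ~26.7 min
print(round(full_read_minutes(2000, 2800), 1))  # DC P3700 class: ~11.9 min
```

Despite 2.5x the capacity, the new drive sweeps its entire contents in less than half the time.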

Submission + - AMD A10 Kaveri APU Details Emerge, Combining Steamroller and Graphics Core Next (hothardware.com)

MojoKid writes: There's a great deal riding on the launch of AMD's next-generation Kaveri APU. The new chip will be the first processor from AMD to incorporate significant architectural changes to the Bulldozer core AMD launched two years ago, and the first chip to use a graphics core derived from AMD's GCN (Graphics Core Next) architecture. A strong Kaveri launch could give AMD back some momentum in the enthusiast business. Details are emerging that point to a Kaveri APU that's coming in hot — possibly a little hotter than some of us anticipated. Kaveri's Steamroller CPU core separates some of the core functions that Bulldozer unified and should substantially improve the chip's front-end execution. Unlike Piledriver, which could only decode four instructions per module per cycle (and thus topped out at eight instructions for a quad-core APU), Steamroller can decode four instructions per core, or 16 instructions for a quad-core APU. The A10-7850K will offer a 512-core GPU, while the A10-7700K will be a 384-core part. GPU clock speeds have come down, from 844MHz on the A10-6800K to 720MHz on the new A10-7850K, but the drop should be offset by the gains from moving to AMD's GCN architecture.

Submission + - Researchers Make Malware Carried By Sound Waves (techweekeurope.co.uk)

judgecorp writes: Researchers have created malware that delivers stolen data without an Internet connection, using inaudible sonic waves generated by a device's speakers. The multi-hop acoustical keylogger is an experiment by the Fraunhofer Institute in Germany, rather than an exploit seen in the wild, but it is one more thing to be concerned about.
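The Fraunhofer implementation isn't shown here, but the underlying idea, mapping bits onto near-inaudible tones, can be sketched with a minimal binary-FSK modulator (the frequencies, sample rate, and symbol length below are my illustrative choices, not the researchers'):

```python
import math

SAMPLE_RATE = 44_100      # common audio hardware rate (Hz)
F0, F1 = 17_500, 18_500   # near-ultrasonic tones for bits 0 and 1 (Hz)
SYMBOL_S = 0.05           # 50 ms per bit -> a leisurely 20 bits/s

def fsk_encode(bits):
    """Return PCM samples encoding a bit string, one tone per bit.

    A real acoustic covert channel would add synchronization, error
    correction, and the multi-hop relaying described above; this only
    shows how data maps onto sound.
    """
    samples = []
    n = int(SAMPLE_RATE * SYMBOL_S)
    for bit in bits:
        freq = F1 if bit == "1" else F0
        samples.extend(math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                       for i in range(n))
    return samples

pcm = fsk_encode("1011")
print(len(pcm))  # 4 bits x 2205 samples per bit = 8820
```

Tones in the 17-20kHz band sit at the edge of adult hearing yet are still reproducible by ordinary laptop speakers and microphones, which is what makes the channel plausible.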

Submission + - Futuremark Delists Android Devices For Cheating 3DMark, Samsung and HTC Ousted (hothardware.com)

MojoKid writes: Benchmarks are serious business. Buying decisions are often made based on how well a product scores, which is why the press and analysts spend so much time putting new gadgets through their paces. However, benchmarks are only meaningful when there's a level playing field, and when companies try to "game" the business of benchmarking, it's not only a form of cheating, it also bamboozles potential buyers who (rightfully) assume the numbers are supposed to mean something. 3D graphics benchmark software developer Futuremark just "delisted" a number of devices from its 3DMark benchmark results database because it suspects foul play. Of the devices listed, Samsung and HTC in particular appear to be indirectly accused of cheating in 3DMark for mobile devices. Delisted devices are stripped of their rank and scores. Futuremark didn't elaborate on which specific rule(s) these devices broke, but a look at the company's benchmarking policies reveals that hardware makers aren't allowed to make optimizations specific to 3DMark, nor are platforms allowed to detect the launch of the benchmark executable unless it's needed to enable multi-GPU support and/or there's a known conflict that would prevent the benchmark from running.

Submission + - 3D Systems And Motorola Team Up To Deliver Customizable 3D Printed Smartphones (hothardware.com)

MojoKid writes: Motorola is forging ahead with the concept of modular, customizable smartphones first put forth by designer Dave Hakkens with his Phonebloks concept. The company said recently that it was officially pursuing such an idea with Project Ara, and Motorola is already putting together important partnerships to make it happen. 3D Systems, a maker of 3D printers and other related products, has signed on to create a “continuous high-speed 3D printing production platform and fulfillment system” for it. In other words, 3D Systems is going to print parts for the project, and what’s more, the company has what appears to be an exclusive agreement to make all the enclosures and modules for Project Ara.

Submission + - Intel's 128MB L4 Cache May Be Coming To Broadwell And Other Future CPUs (hothardware.com)

MojoKid writes: When Intel debuted Haswell this year, it launched its first mobile processor with a massive 128MB L4 cache. Dubbed "Crystal Well," this on-package (not on-die) pool of memory isn't just a graphics frame buffer, but a giant pool of RAM for the entire core to utilize. The performance impact is significant, though the Haswell processors that utilize the L4 cache don't appear to account for much of Intel's total CPU volume. Right now, the L4 cache pool is only available on mobile parts, but that could change next year: apparently Broadwell-K will bring it to the desktop. The 14nm desktop chips aren't due until the tail end of next year, but we should see a desktop refresh in the spring with a second-generation Haswell part. Still, it's a sign that Intel intends to make the large L4 standard on a wider range of parts. Using eDRAM instead of SRAM allows Intel's architecture to dedicate just one transistor per cell instead of the 6T configuration commonly used for L1 or L2 cache. That means the memory isn't quite as fast, but it saves an enormous amount of die space. At 1.6GHz, L4 latency is 50-60ns, which is significantly higher than the L3's but only about half the latency of main memory.
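The payoff of a big-but-slower L4 shows up in average memory access time; a back-of-envelope model (the hit rates and the ~110ns DRAM latency are illustrative assumptions, the ~55ns L4 figure comes from the summary):

```python
def amat_ns(levels):
    """Average access time for a hierarchy of (latency_ns, hit_rate) levels.

    Levels are ordered fastest to slowest; the last level must have a
    hit rate of 1.0. Latencies are treated as totals, not increments.
    """
    total, missing = 0.0, 1.0
    for latency_ns, hit_rate in levels:
        total += missing * hit_rate * latency_ns
        missing *= 1.0 - hit_rate
    return total

# Hypothetical workload whose accesses miss the L3 10% of the time:
no_l4 = amat_ns([(10, 0.90), (110, 1.00)])                # ~20 ns average
with_l4 = amat_ns([(10, 0.90), (55, 0.70), (110, 1.00)])  # ~16 ns average
print(no_l4, with_l4)
```

Even with a modest 70% hit rate, a 128MB L4 absorbs most L3 misses at half of DRAM's latency, which is where the measured gains come from.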
