Comment Self-driving folks (including Tesla) always knew.. (Score 1) 236

They are behaving like this is a surprise. Anyone who knows anything about self-driving cars knows that:

1. You need about $100K in sensors (LIDAR, radar, cameras, etc.) to build a true L4 car.
2. The software, the test cases, the situational training, and so on are not there yet and won't be for 5+ years (if not 10).
3. Waymo is furthest ahead, but even they can only achieve true L4 driving in geo-fenced situations (geo-fence = a known area, with known routes, in good weather).

Comment Data consumption habits have changed (Score 1) 97

How we consume data has changed significantly. Reading books has huge merits, but we now get so much information and knowledge from online news, scientific articles, and so on that I feel like I am reading all day, much more than I did 5-10 years ago. Books synthesize several things together into a cohesive story, but perhaps a lot of folks are growing up with shorter attention spans and thus prefer bite-size information.

Comment Educating the older generation (Score 1) 31

This is a great initiative to use AI to improve reading and writing. I am sure the app can be used by adults too; they are in fact a bigger potential target for something like this. Educating primary school kids will have a long-term impact, while educating adults today will have a very quick, short-term impact. The challenge is how to motivate adults to learn to read and write (and to deal with the shame of it).

Comment Also, Fastest & Largest AI Supercomputer (Score 1) 85

So far, the story has focused on how the US got the #1 crown back, but the real story is that we can now run the fastest and largest AI jobs. Because this IBM supercomputer has 27K+ GPUs, it can run massive deep learning jobs. IBM has been very focused on the deep learning space with its TensorFlow-based, open-source PowerAI software offering.

Submission + - NVIDIA SHIELD Specs Finalized, Pre-Orders To Begin May 20 (hothardware.com)

bigwophh writes: NVIDIA’s Android-based, portable gaming system and media streaming device, originally known as Project SHIELD, was a big hit at CES. NVIDIA has since dropped "Project" from the name, and it appears the device is about ready to ship. If you’re unfamiliar with SHIELD, it is essentially a game controller with a built-in, flip-up 5” multi-touch screen. It is powered by NVIDIA’s own Tegra 4 quad-core SoC (System-on-Chip) with ARM A15 CPU cores, 72 GPU cores, 2GB of RAM, 16GB of internal storage, 802.11n 2x2 MIMO Wi-Fi, Bluetooth 3.0, and GPS support, among a number of other features. In addition to offering an array of Tegra-optimized games, part of SHIELD’s allure is the ability to wirelessly stream games and other media from a GeForce GTX-powered PC to any TV connected to SHIELD. Pricing for the device is set at $349 and pre-sales begin on May 20.

Comment Re:Can someone tell me NVidia's business model? (Score 1) 89

Discrete graphics is going away; they seem to be leaning increasingly towards the HPC market, but that is tiny compared to the consumer graphics market that their company was built on. I just don't see it. Anyone?

The discrete GPU market is growing. See JPR's analyst report: http://jonpeddie.com/press-releases/details/embedded-graphics-processors-killing-off-igps-no-threat-to-discrete-gpus/

Here is the full report: http://jonpeddie.com/download/media/slides/An_Analysis_of_the_GPU_Market.pdf

Comment Easy way to program GPUs (Score 1) 205

There is now an easier way to program GPUs, using directive-based compilers.

The idea is that you add some high-level pragmas to your C or Fortran code, which a parallelizing compiler uses to map the code onto the GPU accelerator. Of course, you have to expose parallelism in the code for the compiler to do a decent job; for example, use more data-parallel data structures. But this is a nice, incremental way to take advantage of the GPU.
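
A minimal sketch of the idea (my own illustration, not vendor code; the function name vec_scale and the clauses are hypothetical), built with an OpenACC-capable compiler such as pgcc -acc:

/* Scale a vector on the GPU. The pragma asks the compiler to generate
   an accelerator kernel for the loop and to handle moving x and y
   between host and device memory. */
void vec_scale(const float *x, float *y, int n, float a)
{
    #pragma acc parallel loop copyin(x[0:n]) copyout(y[0:n])
    for (int i = 0; i < n; i++) {
        y[i] = a * x[i];
    }
}

Without the accelerator flag (or on a machine with no GPU toolchain), a standard compiler just ignores the pragma and the loop runs as ordinary CPU code, which is what makes this approach incremental.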

Check it out at:
http://www.nvidia.com/object/tesla-2x-4weeks-guaranteed.html?cid=dev

Sumit
NVIDIA - Tesla Group

Comment Re:Well the other thing I'd say it shows (Score 1) 77

I think this is a big misconception about GPUs. They are good at many applications - not just Linpack.

Take a look at the list of applications ranging from video transcoding to weather forecasting to computational chemistry to physics at:
http://www.nvidia.com/cuda

In fact, the researchers at the Chinese Academy of Sciences just ran one of the fastest scientific simulations using their GPU supercomputer (#2 on the Top500 list):
http://blogs.nvidia.com/2011/06/chinas-investment-in-gpu-supercomputing-begins-to-pay-off-big-time/

There are tons of papers at the Supercomputing conference on real, "full" applications, spanning a very diverse range of fields, that are accelerated using GPUs.

Comment Highlights as per Top500 site (Score 4, Interesting) 77

The Top500 site has its own take on the highlights:
http://www.top500.org/lists/2011/06/press-release

- The two Chinese systems at No. 2 and No. 4 and the Japanese Tsubame 2.0 system at No. 5 are all using NVIDIA GPUs to accelerate computation, and a total of 19 systems on the list are using GPU technology.
- China keeps increasing its number of systems and is now up to 62, making it clearly the No. 2 country as a user of HPC, ahead of Germany, UK, Japan and France.
- Intel continues to provide the processors for the largest share (77.4 percent) of TOP500 systems. Intel’s Westmere processors increased their presence in the list strongly with 169 systems, compared with 56 in the last list.
- Quad-core processors are used in 46.2 percent of the systems, while already 42.4 percent of the systems use processors with six or more cores.
- Cray defended the No. 2 spot in market share by total against Fujitsu, but IBM stays well ahead of either. Cray’s XT system series remains very popular for big research customers, with three systems in the TOP 10 (one new and two previously listed).

In my opinion, the newest & most important trend in high performance computing is the advent of accelerators like GPUs.

Comment CUDA C++ and Thrust (Score 2) 187

This is an awesome development - Microsoft adding support for GPU computing in their mainstream tools and C++.

Today, CUDA C++ already provides a full C++ implementation on NVIDIA's GPUs:
http://developer.nvidia.com/cuda-downloads

And the Thrust template library provides a set of data structures and functions for GPUs (similar in spirit to STL):
http://code.google.com/p/thrust/
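
As a rough illustration (my own sketch, not taken from the Thrust documentation), the STL-like flavor looks something like this; it would be compiled with nvcc:

#include <thrust/device_vector.h>
#include <thrust/sequence.h>
#include <thrust/reduce.h>
#include <iostream>

int main()
{
    thrust::device_vector<int> d(1000);    // storage lives in GPU memory
    thrust::sequence(d.begin(), d.end());  // fill with 0, 1, 2, ...
    // Sum the elements on the device, much like std::accumulate on the host.
    int sum = thrust::reduce(d.begin(), d.end(), 0);
    std::cout << "sum = " << sum << std::endl;
    return 0;
}

Because the containers and algorithms mirror the STL, the code reads like ordinary C++ even though the work happens on the GPU.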

- biased NVIDIA employee
