
Submission + - K-computer: What made it the fastest in the world? ( 3

AustinAlert writes: The Japanese K computer claimed the title of fastest supercomputer in the world after the latest TOP500 list was announced Monday morning at the International Supercomputing Conference in Hamburg, Germany. In this article, the author sets out to understand why this computer is both fast and energy-efficient. Apparently, the microprocessor chips used are very energy-efficient, so the author, being a microprocessor architect himself, explores the chip architecture in great depth.

There are five unique features of the SPARC64 VIIIfx processor: ISA extensions, software-controlled caches, software predication, a deep pipeline, and a lack of threading. I particularly liked his analysis of why this chip did not do threading. A must-read for anyone interested in this kind of stuff!

Comment Not really, AMPs are a superset (Score 1) 3

To the AMP guy: no, the AMD thingie is a superset of both GPU and CPU. They are talking about integrating them, hence the distinction still remains. The workloads for CPU and GPGPU will become closer due to AMD's idea, but not exactly the same, to save power.

Submission + - CPU vs. GPGPU ( 3

AustinAlert writes: The article describes the similarities and differences between using CPUs and GPUs for general-purpose computing. It discusses how GPU architecture affects what code can run well on a GPU, and vice versa. An interesting read in the context of AMD's and Microsoft's recent announcements.

Submission + - Computer Science Self-assessment Quiz (

An anonymous reader writes: Test your skills as a developer by taking this 10-question self-assessment quiz. Learn if you really know how computers work! (answers provided)

Submission + - Heterogeneous Computing: Past, Present, and Future (

An anonymous reader writes: Intel architect Aater Suleman gives a brief overview of what heterogeneous computing really means to industry and academia. While hetero is not a new concept, the latest developments in technology have introduced brand new challenges and opportunities. The article discusses them in depth. Apparently, there are two types of hetero systems that work very differently, and companies are exploring both going into the future.

Comment C/C++? (Score 1) 1

Nice article. I knew some of it already but had not thought of the function calls across multiple files. My question, however, is why people use C/C++ at all. I understand that C/C++ programmers need to jump through a lot of hoops to get the compiler to understand the big picture. This is why I prefer Java: there is no ambiguous code, and the compiler knows everything and every dependency.

Submission + - How to make C/C++ compilers generate terrible code ( 1

An anonymous reader writes: Compilers are designed to make pessimistic assumptions when code is ambiguous, and these pessimistic assumptions significantly limit the compilers' optimizations. Programmers can help compilers by avoiding code patterns that are known to confuse them. This article presents three examples of compiler-unfriendly code and explains them in great depth. I found this post enlightening; I haven't seen this topic covered elsewhere. It's a good read if you want to understand what a compiler does to your code.

Submission + - Is it necessary for programmers to learn hardware? (

AustinAlert writes: "Intel's Aater Suleman explains how new programming platforms like multi-cores, iPhone, Android, and GPGPUs have changed mainstream programming. Extracting high performance from these new platforms requires more efficient programming. Understanding a handful of basic hardware concepts can be a very handy tool for today's programmers. This article explains how hardware knowledge has become essential for writing good code and lists some items every programmer must know. ("

Submission + - Cybersecurity, the next frontier for NASA engineer (

mask.of.sanity writes: After 28 June, some 8,000 NASA engineers could be out of a job, but one cybersecurity training centre, linked to the space agency and situated nearby, hopes to recruit them.

These guys are some of the best and brightest engineers in the world, and the operators of the training organisation say they will do wonders for the state of information security.


Submission + - Should programmers today learn hardware? (

AustinAlert writes: "This article by an Intel hardware engineer makes a great point. Programming has indeed changed in recent years. Instead of writing code for infinite-resource desktops and laptops, we now write code for iPhones and Androids. These new platforms are more constrained in terms of processor performance, physical memory, and even virtual memory. To write competitive, powerful apps that do not drain the battery, programmers are forced to write efficient code. Actually, this is like the days when I used to write code for the Commodore and every bit and byte literally mattered. I know that companies preferred ninja coders who knew hardware to get an edge over the competition. This post is probably right: all of that is in fact coming back in the form of phones, multi-cores, and GPGPUs.

I agree with the author that knowing just a handful of concepts can come in very handy. For example, I have personally noticed that L1-cache-resident programs run 3x faster on the iPhone, which means a better user experience and correspondingly less battery drain.

I like this article because it provides real-life code examples where knowing hardware helps. The author even provides a very comprehensive list of important concepts that every programmer must learn. I don't know if I agree with his claim that the listed items can be learned in hours, but it definitely doesn't take weeks either. By the way, his tutorials on hardware topics are nice!"


Submission + - Google releases video chat source code (

angry tapir writes: "Google has released the source code for a technology that it hopes developers will use to embed real-time video and voice chat functionality in their Web applications. Google acquired the technology, called WebRTC (Web Real Time Communication), when it purchased VoIP (Voice over IP) software developer Global IP Solutions in 2010, for approximately US$68.2 million."

Comment Re:Nostalgic. Interesting. But single thread? (Score 1) 3

Nope, dude. Wrong argument! Speeding up a single thread means speeding up each individual thread, on top of your Amdahl's law point (dunno what you really meant there). Make the single thread faster and all threads become faster, which makes the program faster. Multi-threading is not magic; it just means multiple single threads running alongside each other, so single-thread performance is always relevant. LOL @ nostalgic, btw.

Comment Doesn't this mean iPhone needs serious work! (Score 1) 3

I see his point that in-order cores are becoming common. I mean, the iPhone has an in-order core and iPhones are common, hence in-order cores are common ... no debate. What I am thinking about is the importance of the optimizations. Shouldn't all iPhone app developers know about these optimizations? This seems like it can be a big win for all cell phone developers to follow some of these techniques. I do notice some very slow apps on my iPhone and wonder if his arguments play a role.

Submission + - Tiny cores are here, and they change programming ( 3

An anonymous reader writes: Intel is returning to in-order cores after two decades with Atom and Knights. ARM is already building in-order cores for iPhones, iPads, and Androids. IBM has switched to in-order cores after building generations of out-of-order cores. This indicates a clear trend that in-order cores are back in the mainstream. Highlighting the performance characteristics of in-order and out-of-order cores, Dr. Aater Suleman's article explains why programming for in-order cores is very different from programming for the now-traditional out-of-order cores. Thus, this new trend requires a change in compilers, tools, and programming techniques. Compilers need to get better at removing useless code and instruction scheduling. Programmers need to weigh new trade-offs and perform classic optimizations that have been forgotten. I liked this article particularly for the very simple code examples and a simple explanation of in-order and out-of-order differences. The message is clear: programmers and compilers need to understand in-order cores and target their code better.
