digitaldc writes with this excerpt from Gamasutra: "The Air Force Research Laboratory (AFRL) has connected 1,760 PlayStation 3 systems together to create what the organization is calling the fastest interactive computer in the entire Defense Department. The Condor Cluster, as the group of systems is known, also includes 168 separate graphics processing units and 84 coordinating servers in a parallel array capable of performing 500 trillion floating point operations per second (500 TFLOPS), according to AFRL Director of High Power Computing Mark Barnell."
Daetrin writes "Last weekend Google received the next statue in the sweets-themed series that commemorates the major updates of the Android OS. In the past this has meant that the release of the next SDK was right around the corner. However this time there's some doubt as to what the version number will actually be. Many sites (including Slashdot) have assumed that 'Gingerbread' was synonymous with '3.0,' but now there's some evidence that everyone may have jumped the gun and the next version will actually be 2.3."
dkd903 was one of several folks to note that a number of details about Google's Android 3.0 are beginning to leak out. The platform is codenamed Gingerbread; it includes video chat to compete with the iPhone, and a graphical overhaul intended to make it look a bit better next to its rivals.
eldavojohn writes: "Multicore (think tens or hundreds of cores) will come at a price for current operating systems. A team at MIT found that as they approached 48 cores, their operating system slowed down. As they activated more and more cores in their simulation, a sort of memory leak occurred, whereby data had to remain in memory as long as a core might need it in its calculations. But the good news is that in their paper (PDF), they showed that for at least several years Linux should be able to keep up with chip enhancements in the multicore realm. To handle multiple cores, Linux keeps a counter of which cores are working on a piece of data. As a core starts to work on that data, Linux increments the counter; when the core is done, Linux decrements it. As the core count approached 48, the amount of actual work decreased and Linux spent more time managing counters. But the team found that 'Slightly rewriting the Linux code so that each core kept a local count, which was only occasionally synchronized with those of the other cores, greatly improved the system's overall performance.' The researchers caution that as the number of cores skyrockets, operating systems will have to be completely redesigned to manage these cores and SMP. After reviewing the paper, one researcher is confident Linux will remain viable for five to eight years without need for a major redesign."
Here in Denmark, we have some guys working on a manned space flight: http://www.copenhagensuborbitals.com/ "Our mission is very simple. We are working towards launching a human being into space. This is a non-profit suborbital space endeavor led by Kristian von Bengtson and Peter Madsen, based entirely on sponsors and volunteers." Their progress is impressive!
Newer turbines don't lock the rotor if there is no emergency - they just pitch the blades so that they will not turn the rotor significantly. Therefore, the rotor may actually still rotate (slowly) even when the turbine is shut down.
Thanks. If it were a smaller company, where the impact on global pollution would have been negligible, the decision would have been fine. Instead the OSes ended up on hundreds of millions of computers, which MS probably had expected. Some sort of workaround could not have been that difficult to implement compared to the consequences. At the very least, they could have made it easy for consumers to enable the HLT instruction.