
Submission + - Windows coming to ARM

quo_vadis writes: At CES today, Microsoft announced full-blown Windows coming to ARM. This is a very Apple-like move for Microsoft, but without the whole "oh, we had this running for five years before releasing it". Sounds like we are in for driver incompatibilities a million times worse than the Vista transition. Even worse, given that Windows' biggest selling point is legacy application compatibility, requiring all third-party applications to be recompiled negates the advantages of a legacy-compatible version of Windows. Finally, the lack of a strong infrastructure for supporting the transition (universal binaries, system library management) points to the transition being a painful one.

Comment Interesting (Score 2, Insightful) 229

It is interesting that there are 6 new entrants in the top 10. Even more interesting is the fact that GPGPU-accelerated supercomputers are clearly outclassing classical supercomputers such as Cray's. I suspect we might be seeing a paradigm shift, like when people moved from custom interconnects to GbE and InfiniBand, or when custom processors began to be replaced by commercial off-the-shelf processors.

Comment I might be way off here.... (Score 1) 830

Let me preface my statements with the following disclaimer: IANAB (I am not a biologist/biochemist).

That said, the problem of reconstructing a brain from DNA is something like trying to understand a self-modifying genetic algorithm containing multiple parallel automata. To explain, I am going to conflate a couple of concepts. Self-modifying code is reasonably well known. Consider a system where the hardware is an FPGA (i.e., it can be reconfigured on the fly) and the program running on it is a mix of a boot loader, independent hardware-accelerated automata/agent programs, and some kind of feedback. The program contains an initial boot loader to load some data onto the FPGA, set up some accelerators, and provide the capability to reprogram the FPGA. Then it loads up some small agents and some feedback controls. These agents run in parallel for a while, reconfiguring the hardware and/or the software of other agents or groups of agents, while the feedback control allows minor selective mutation (through, say, bitstream corruption) of the programming. Some of the interactions of well-defined automata are clear, but mutated automata interact in new, and therefore unmodeled, ways. The end result is the brain.

To sum it up, the DNA is just a small piece of the self-modifying base code for the first initialization of the FPGA. The way the final FPGA is mapped depends on environmental factors (e.g., which agent fired first, how selection happened, small biases arising from the physical nature of the FPGA being propagated into wild changes in the end result). Thus, modeling just the base pairs is not sufficient; the interactions of the automata arising from the base pairs must be modeled as well.
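
The picture above can be caricatured in a few lines of code. This is purely a toy sketch of the idea (agents as mutable bit-string "rules" that rewrite each other, plus a crude selection feedback); none of it corresponds to real FPGA tooling or biology:

```python
import random

# Toy sketch: a pool of "agents", each a tiny rule (a bit list).
# Agents run in rounds; each round every agent flips one bit in a
# randomly chosen peer (self-modification), and a crude feedback
# step copies the "fittest" rule over the weakest (selection).

def step(agents, rng):
    for _ in range(len(agents)):
        j = rng.randrange(len(agents))       # pick a peer to rewrite
        k = rng.randrange(len(agents[j]))    # pick a bit to flip
        agents[j][k] ^= 1
    # Feedback: duplicate the fittest agent over the weakest one.
    fittest = max(agents, key=sum)
    weakest = min(range(len(agents)), key=lambda i: sum(agents[i]))
    agents[weakest] = list(fittest)
    return agents

rng = random.Random(42)
agents = [[0] * 8 for _ in range(4)]
for _ in range(100):
    step(agents, rng)
print(max(sum(a) for a in agents))
```

Even in this toy, the final state depends on the seed and on which mutation fired first — which is the point: the initial "program" alone does not determine the outcome.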

Comment Re:Why do they need to? (Score 4, Informative) 362

Um, actually Intel has done a lot of work on the architecture and microarchitecture of its processors. The CPUs Intel makes today are almost RISC-like internally, with a small translation engine that, thanks to the shrinking size of transistors, takes a trivial amount of die space. The cost of adding a translation unit is tiny compared to the penalty of not being compatible with the vast majority of the software out there.

Itanium was their clean-room redesign, and look what happened to it. Outside HPC and very niche applications, no one was willing to rewrite all their apps and, more importantly, wait for the compiler to mature on an architecture that was heavily dependent on the compiler to extract instruction-level parallelism.

All said, the current instruction-set innovation is happening in the SSE and VT extensions, where some really cool stuff is possible. There is something to be said for Intel's choice of a CISC architecture. In RISC designs, once you run out of opcodes you are in pretty deep trouble. In CISC, you can keep adding them, making it possible to have binaries that run unmodified on older-generation chips but take advantage of newer-generation features when running on newer chips.
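
The "one binary, many generations" trick usually works via runtime feature detection (CPUID on x86). A minimal Python sketch of the dispatch idea — feature names and routines here are invented stand-ins, not real CPUID plumbing:

```python
# Probe the CPU's feature flags once at startup and dispatch to the
# fastest routine the hardware supports; older chips silently fall
# back to the scalar path, so the same binary runs everywhere.

def add_vectors_scalar(a, b):
    # Baseline path, assumed to work on any chip.
    return [x + y for x, y in zip(a, b)]

def add_vectors_simd(a, b):
    # Stand-in for an SSE/AVX-accelerated code path.
    return [x + y for x, y in zip(a, b)]

def pick_impl(cpu_features):
    return add_vectors_simd if "sse2" in cpu_features else add_vectors_scalar

impl = pick_impl({"fpu", "sse", "sse2"})   # pretend CPUID reported these
print(impl([1, 2], [3, 4]))  # [4, 6]
```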

Comment Re:Don't make them smaller (Score 5, Informative) 362

You are incorrect about the reason for the lack of 3D stacking. It's not that we can't stack them; there has been a lot of work on it. In fact, the reason flash chips are increasing in capacity is that they are stacked, usually eight layers high. The problem, quite simply, is heat dissipation. A modern CPU has a TDP of 130 W, most of which is removed from the top of the chip, through the casing, to the heatsink. Put a second die on top of it, and the bottom layer develops hotspots that cannot be handled. There are currently some approaches based on microfluidic channels interspersed between the stacked dies, but those have their own drawbacks.
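
Some rough, illustrative numbers (the die area is an assumption, not from the parent) show why this bites: the heat still has to leave through one face, so stacking roughly multiplies the heat flux per unit of top-surface area:

```python
# Back-of-envelope heat-flux estimate for a stacked CPU.
tdp_w = 130.0        # TDP of one die, from the parent comment
die_area_cm2 = 2.0   # assumed die area (~200 mm^2), for illustration
layers = 2

single_layer_density = tdp_w / die_area_cm2            # W/cm^2
stacked_density = layers * tdp_w / die_area_cm2        # heat exits one face
print(single_layer_density, stacked_density)  # 65.0 130.0
```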

Comment Re:Are you serious? (Score 1) 342

Not quite. MSFT did a 2:1 split in 2002, so the share count doubled and the per-share price halved. If you invested in 100 shares of MSFT in 2000, your valuation is almost the same now as it was then (well, up by ~2%). The problem is that, accounting for inflation and given the performance of the NASDAQ over the decade, your money would have been better invested elsewhere.
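
A quick worked example of the split arithmetic, with an illustrative (not historical) share price:

```python
# A 2:1 split doubles the share count and halves the per-share
# price, so the value of the position is unchanged.
shares, price = 100, 50.0          # illustrative pre-split numbers
value_before = shares * price

shares_after = shares * 2          # 2:1 split
price_after = price / 2
value_after = shares_after * price_after

print(value_before == value_after)  # True
```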

Comment Re:Artificial limits R US (tm) (Score 1) 401

Are you sure about this? My computer architecture is a little rusty, but let me see: what you are saying is that you would need 2^30 RAM sticks of 16 GB capacity to fill the full 2^64 space. This contention is fine. The problem is with the second part. The only way you would incur the complete delay is if you had sequential accesses. If you had a partitioned hierarchy, it is fine. Additionally, the TLB might be large, but addressing the RAM can happen orders of magnitude faster (assuming partitioning happens on the basis of 5 bits, worst case you only look into a bank of 2^6 sticks), which is not that bad. Of course, access time is not uniform, but this problem has been addressed previously (see NUMA systems). As such, a full multi-exabyte system is hard to design, but with increasing memory densities it may become feasible, using techniques that are currently applied to supercomputer memory hierarchies.
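
The first part of the arithmetic checks out:

```python
# How many 16 GB sticks does it take to fill a 64-bit address space?
address_space = 2 ** 64        # bytes addressable with 64 bits
stick = 16 * 2 ** 30           # 16 GB = 2^34 bytes per stick
sticks_needed = address_space // stick
print(sticks_needed == 2 ** 30)  # True: about a billion sticks
```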

Comment A litho primer (Score 1, Informative) 80

For those unfamiliar with the field of semiconductor design, here's what the sizes mean. The Toshiba press release is about flash. In flash, the actual physical silicon consists of rectangular areas of silicon that have impurities added (a.k.a. doped regions or wells). On top of these doped regions are thinner parallel "wires" (narrower rectangles) made of polysilicon. The distance between the leading edge of one wire and the next is called the pitch; the half-pitch is half that distance. The reason this is important is that the half-pitch is usually the width of the polysilicon wire, and it effectively becomes the primary physical characteristic from the point of view of power consumption (leakage), speed, and density.

The official roadmap for processes and feature sizes (called process nodes) is published yearly by the International Technology Roadmap for Semiconductors (ITRS), a consortium of all the fabs. According to the 2009 lithography report, 25 nm flash is supposed to hit full production in 2012, so initial deployments happen a couple of years before that. Effectively, Toshiba seems to be hitting the roadmap.

The takeaway being: there's nothing to see here, it's progress as usual. The big problem is what happens below 16 nm. That's the point at which current optical lithography becomes impossible, even using half- or quarter-wavelength techniques and EUV with immersion litho.

Comment Really won't change the price (Score 5, Informative) 158

According to iSuppli's teardown of the Kindle, the E Ink display is $60. The main processor (made by Freescale) is ~$8. The EPD chip, which is the part that becomes redundant, adds only $4.31 to the BOM. The main point is that you cannot expect E Ink-based readers to get much cheaper any time soon. Any price cuts will only come about due to increased competition from different technologies like Pixel Qi's, or by sacrificing things like onboard wireless (which adds ~$40 to the cost of the Kindle).
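
Tallying just the figures quoted above (these are only the parts mentioned here, not the full Kindle BOM) shows how small the EPD chip's share is:

```python
# The iSuppli figures quoted in the comment, as a quick tally.
parts = {
    "E Ink display": 60.00,
    "Freescale CPU": 8.00,
    "EPD controller": 4.31,   # the part made redundant
    "wireless module": 40.00,
}
total_quoted = sum(parts.values())
epd_share_pct = parts["EPD controller"] / total_quoted * 100
print(round(total_quoted, 2), round(epd_share_pct, 1))  # 112.31 3.8
```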

Comment Price / Performance Question (Score 3, Informative) 163

Here is a link to the review of the drive over at AnandTech. Interestingly, it seems this drive will not be using one of the higher-performance SSD controllers (SandForce / Indilinx), so the performance should be worse than its competitors'. If the price is as predicted (128 GB @ $529), then this drive won't make much sense compared to faster drives from OCZ, etc.

Comment Much Ado about nothing (Score 4, Informative) 292

TFA talks about the war between the Digital Entertainment Content Ecosystem (DECE), from six of the big movie studios, and Keychest from Disney. But the important thing is that Keychest is not DRM. As the name implies, it's a key-management service proposed by Disney. It needs DRM such as DECE or Apple's protected AAC to work. TFA's author doesn't seem to grasp the basic difference.

Comment Wow (Score 4, Insightful) 242

Very interesting loophole. For those too lazy to read TFA: basically, this attack allows someone running as root (or in some cases as a local user) to run code at a level that even hypervisors can't deal with. To put this into perspective, suppose you are running some big-iron hardware with a dozen virtualized servers. With a local privilege-escalation exploit on one VM, an attacker could use this attack to take over the whole system, even the secured VMs. The worst problem is that it would be undetectable: no VM and no hypervisor would be able to see it, and any AV call can be intercepted, as SMM has the highest privilege in the system.

The solution, on the other hand, seems pretty simple. Make the chipset block writes to the TSEG for the SMRAM in hardware (by disabling those lines), use some extra hardware to prevent those lines from being loaded into cache, and finally make every BIOS SMRAM update contain a parity and create tools that allow an SMRAM parity check.
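
The parity idea in that last sentence can be sketched in a few lines. This is an illustration of the concept only, not how any real BIOS does it (a real design would use a cryptographic hash rather than XOR parity, which a deliberate attacker can trivially preserve):

```python
# Compute a parity byte over an SMRAM image so later tampering can
# be detected by recomputing and comparing.

def parity(image: bytes) -> int:
    p = 0
    for b in image:
        p ^= b
    return p

smram = bytes(range(16))            # stand-in for an SMRAM dump
stored = parity(smram)              # recorded at BIOS update time

tampered = bytes([smram[0] ^ 0xFF]) + smram[1:]
print(parity(smram) == stored, parity(tampered) == stored)  # True False
```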
