
Comment Interesting (Score 2, Insightful) 229

It is interesting that there are six new entrants in the top 10. Even more interesting is the fact that GPGPU-accelerated supercomputers are clearly outclassing classical supercomputers such as Cray's. I suspect we might be seeing something like a paradigm shift, such as when people moved from custom interconnects to GbE and InfiniBand, or when custom processors began to be replaced by commercial off-the-shelf processors.

Comment I might be way off here.... (Score 1) 830

Let me preface my statements with the following disclaimer: IANAB (I am not a biologist/biochemist).

That said, the problem of reconstructing a brain from DNA is something like trying to understand a self-modifying genetic algorithm containing multiple parallel automata. To explain, I am going to conflate a couple of concepts. Self-modifying code is reasonably well known. Consider a system where the hardware is an FPGA (i.e., it can be reconfigured on the fly) and the program running on it is a mix of a boot loader, independent hardware-accelerated automata/agent programs, and some kind of feedback. The boot loader loads some initial data onto the FPGA, sets up some accelerators, and provides the capability to reprogram the FPGA. Then it loads up some small agents and some feedback controls. These agents run in parallel for a while, reconfiguring the hardware and/or the software of other agents or groups of agents, while the feedback control allows minor selective mutation (through, say, bit-stream corruption) of the programming. Some of the interactions of well-defined automata are clear, but mutated automata interact in new and therefore unmodeled ways. The end result is the brain.

To sum it up, the DNA is just a small piece of the self-modifying base code for the first initialization of the FPGA. The way the final FPGA is mapped depends on environmental factors (e.g., which agent fired first, how selection happened, small biases arising from the physical nature of the FPGA propagating into wild changes in the end result). Thus, modeling just the base pairs is not sufficient; the interactions of the automata arising from the base pairs must be modeled as well.
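To make the analogy a bit more concrete, here is a toy sketch in C (every name and number here is made up purely for illustration, and it is nothing like real biology): a few agents rewrite a shared "fabric" while a feedback loop occasionally mutates their own programs, so the final fabric state depends on firing order and random mutation, not just the initial boot code.

    /* Toy model of the analogy: agents reconfigure a shared "fabric"
     * while a feedback loop occasionally mutates the agents themselves.
     * Everything here is made up for illustration. */
    #include <stdio.h>
    #include <stdlib.h>

    #define FABRIC_SIZE 64
    #define NUM_AGENTS  4
    #define STEPS       1000

    typedef struct {
        int pos;      /* where the agent currently acts on the fabric */
        int stride;   /* its "program": how far it jumps each step    */
        int value;    /* what it writes into the fabric               */
    } Agent;

    int main(void) {
        unsigned char fabric[FABRIC_SIZE] = {0};  /* the reconfigurable "hardware" */
        Agent agents[NUM_AGENTS];
        srand(42);

        /* "Boot loader": set up the initial agents. */
        for (int i = 0; i < NUM_AGENTS; i++) {
            agents[i].pos = rand() % FABRIC_SIZE;
            agents[i].stride = 1 + rand() % 7;
            agents[i].value = i % 2;
        }

        for (int step = 0; step < STEPS; step++) {
            for (int i = 0; i < NUM_AGENTS; i++) {
                /* The agent reconfigures the fabric... */
                fabric[agents[i].pos] = (unsigned char)agents[i].value;
                agents[i].pos = (agents[i].pos + agents[i].stride) % FABRIC_SIZE;

                /* ...and the "feedback" occasionally mutates another agent's
                 * program (the bit-stream corruption in the analogy). */
                if (rand() % 100 < 2) {
                    int j = rand() % NUM_AGENTS;
                    agents[j].stride = 1 + rand() % 7;
                }
            }
        }

        /* The final state depends on initial conditions, firing order, and
         * which mutations happened, not just on the boot code above. */
        for (int i = 0; i < FABRIC_SIZE; i++)
            putchar(fabric[i] ? '#' : '.');
        putchar('\n');
        return 0;
    }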

Comment Re:Why do they need to? (Score 4, Informative) 362

Um, actually Intel has done a lot of work on the architecture and microarchitecture of its processors. The CPUs Intel makes today are almost RISC-like internally, with a tiny translation engine that, thanks to the shrinking size of transistors, takes a trivial amount of die space. The cost of adding a translation unit is tiny compared to the penalty of not being compatible with the vast majority of the software out there.

Itanium was their clean-room redesign, and look what happened to it. Outside of HPC and very niche applications, no one was willing to rewrite all their apps and, more importantly, wait for the compiler to mature on an architecture that was heavily dependent on the compiler to extract instruction-level parallelism.

All said, the current instruction-set innovation is happening with the SSE and VT instructions, where some really cool stuff is possible. There is something to be said for Intel's choice of a CISC architecture. In a RISC ISA, once you run out of opcodes, you are in pretty deep trouble. In CISC, you can keep adding them, making it possible to have binaries that run unmodified on older-generation chips but can take advantage of newer-generation features when running on newer chips.
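As a rough sketch of the "same binary, newer features" point: a program can probe the CPU at run time and pick a code path accordingly. This assumes a reasonably recent GCC or Clang on x86, and the SSE4.2 check is just an example.

    /* Runtime feature detection sketch: the same binary runs everywhere,
     * but takes a faster path when the CPU advertises newer instructions.
     * Assumes GCC/Clang on x86 (for __builtin_cpu_supports). */
    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init();
        if (__builtin_cpu_supports("sse4.2"))
            printf("SSE4.2 present: dispatch to the vectorised code path\n");
        else
            printf("SSE4.2 absent: fall back to the generic code path\n");
        return 0;
    }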

Comment Re:Don't make them smaller (Score 5, Informative) 362

You are incorrect about the reason for the lack of 3D stacking. It's not that we can't stack them; there has been a lot of work on it. In fact, the reason flash chips are increasing in capacity is that they are stacked, usually 8 layers high. The problem quite simply is heat dissipation. A modern CPU has a TDP of 130W, most of which is removed from the top of the chip, through the casing, to the heatsink. Put a second die on top of it, and the bottom layer develops hotspots that cannot be handled. There are currently some approaches based on microfluidic channels interspersed between the stacked dies, but those have their own drawbacks.
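A back-of-the-envelope sketch of why stacking hurts: every extra layer adds power, but the heat still has to leave through roughly the same top surface. The 130W TDP is from above; the ~2 cm^2 die area is just an assumed round number for illustration.

    /* Rough power-density estimate for stacked dies. The numbers are
     * assumptions for illustration, not measurements. */
    #include <stdio.h>

    int main(void) {
        double tdp_w = 130.0;       /* assumed TDP of one die  */
        double die_area_cm2 = 2.0;  /* assumed die area        */
        for (int layers = 1; layers <= 4; layers++) {
            /* All layers still dump their heat through the same top surface. */
            printf("%d layer(s): ~%.0f W through %.1f cm^2 -> ~%.0f W/cm^2\n",
                   layers, layers * tdp_w, die_area_cm2,
                   layers * tdp_w / die_area_cm2);
        }
        return 0;
    }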

Comment Re:Are you serious? (Score 1) 342

Not quite. MSFT did a 2:1 split in 2003, so the per-share price was effectively halved (and you ended up with twice as many shares). If you had invested in 100 shares of MSFT in 2000, your position would be worth almost the same now as it was then (up by ~2%). The problem is that, accounting for inflation and given the performance of the NASDAQ over the decade, your money would have been better invested elsewhere.

Comment Re:Artificial limits R US (tm) (Score 1) 401

Are you sure about this? My computer architecture is a little rusty, but let me see: what you are saying is that you would need 2^30 RAM sticks of 16GB capacity to fill up the full 2^64 space. That contention is fine. The problem is with the second part. The only way you would see the complete delay is if you had sequential accesses; with a partitioned hierarchy, it is fine. Additionally, the TLB might be large, but addressing the RAM can happen orders of magnitude faster (assuming partitioning happens on the basis of 5 bits, in the worst case you only look into a bank of 2^6 sticks), which is not that bad. Of course, access time is not uniform, but this problem has been addressed previously (see NUMA systems). As such, a full multi-exabyte system is hard to design, but with increasing memory densities it may become feasible, using techniques that are currently applied to supercomputer memory hierarchies.
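For reference, the arithmetic behind the 2^30 figure (a trivial sketch, nothing more): 16GB is 2^34 bytes, so a 2^64 space divided by 2^34 bytes per stick gives 2^30 sticks.

    /* How many 16GB sticks to populate a full 64-bit address space? */
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 16GB = 2^34 bytes, so 2^64 / 2^34 = 2^(64-34) = 2^30 sticks. */
        uint64_t sticks = 1ULL << (64 - 34);
        printf("sticks needed: %llu (about a billion)\n",
               (unsigned long long)sticks);
        return 0;
    }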

Comment A litho primer (Score 1, Informative) 80

For those unfamiliar with the field of semiconductor design, here's what the sizes mean. The Toshiba press release is about flash. In flash, the actual physical silicon consists of rectangular areas of silicon that have impurities added (a.k.a. doped regions or wells). On top of these doped regions are thinner parallel "wires" (narrower rectangles) made of polysilicon. The distance between the leading edge of one wire and the leading edge of the next is called the pitch. Thus, the half pitch is half that distance. The reason this is important is that the half pitch is usually the width of the polysilicon wire, and it effectively becomes the primary physical characteristic from the point of view of power consumption (leakage), speed, and density.
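As a rough illustration of how the half pitch maps to density (assuming a cell footprint of roughly (2 x half pitch)^2, which is only a crude approximation): shrinking from a 32nm to a 25nm half pitch buys about a 1.6x density gain.

    /* Crude density comparison between two half-pitch values, assuming the
     * cell footprint scales as (2 * half pitch)^2. Illustration only. */
    #include <stdio.h>

    int main(void) {
        double old_hp = 32.0, new_hp = 25.0;            /* nm */
        double old_cell = (2 * old_hp) * (2 * old_hp);  /* nm^2 */
        double new_cell = (2 * new_hp) * (2 * new_hp);
        printf("relative density gain: ~%.2fx\n", old_cell / new_cell);
        return 0;
    }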

The official roadmap for processes and feature sizes (called process nodes) is published yearly by the International Technology Roadmap for Semiconductors, a consortium of all the fabs. According to the 2009 lithography report, 25nm flash is supposed to hit full production in 2012, so initial deployments happen a couple of years before that. Effectively, Toshiba seems to be hitting the roadmap.

The takeaway: there's nothing to see here; it's progress as usual. The big problem is what happens below 16nm. That's the point at which current optical lithography becomes impossible, even using half or quarter wavelength and EUV with immersion litho.

Comment Really won't change the price (Score 5, Informative) 158

According to iSuppli's teardown of the Kindle, the E Ink display is $60. The main processor (made by Freescale) is ~$8. The EPD chip, which is what becomes redundant, adds only $4.31 to the BOM. The main point is that you cannot expect E Ink-based readers to get much cheaper any time soon. Any price cuts will only come about through increased competition from different technologies like Pixel Qi's, or by sacrificing things like onboard wireless (which adds ~$40 to the cost of the Kindle).

Comment Price / Performance Question (Score 3, Informative) 163

Here is a link to the review of the disk over at AnandTech. Interestingly, it seems this drive will not be using one of the higher-performance SSD controllers (SandForce/Indilinx), so its performance should be worse than that of its competitors. If the price is as predicted (128 GB @ $529), then this drive won't make much sense compared to faster drives from OCZ, etc.

Comment Much Ado about nothing (Score 4, Informative) 292

TFA talks about the war between the Digital Entertainment Content Ecosystem (DECE), from six of the big movie studios, and Keychest from Disney. But the important thing is that Keychest is not DRM. As the name implies, it's a key-management service proposed by Disney. It needs DRM such as DECE or Apple's protected AAC to work. TFA's author doesn't seem to grasp the basic difference.

Comment Wow (Score 4, Insightful) 242

Very interesting loophole. For those too lazy to read TFA: basically, this attack allows someone running as root (or in some cases as a local user) to run code at a level that even hypervisors can't deal with. To put this into perspective, suppose you are running some big-iron hardware with a dozen virtualized servers. With a local privilege-escalation exploit on one VM, an attacker could use this attack to take over the whole system, even the secured VMs. The worst problem is that it would be undetectable: no VM and no hypervisor would be able to see it, and any AV call can be intercepted, since SMM has the highest priority in the system.

The solution, on the other hand, seems pretty simple. Make the chipset block writes to the TSEG for the SMRAM in hardware (by disabling those lines) and use some extra hardware to prevent those lines from being loaded into the cache. Finally, make every BIOS SMRAM update carry a parity value and create tools that allow an SMRAM parity check.
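As a minimal sketch of the parity-check idea (purely illustrative: a real tool would use a cryptographic hash and hardware enforcement rather than XOR parity, and the stand-in "image" below is obviously made up):

    /* Toy SMRAM integrity check: store a parity value for an image, then
     * detect that a later copy has been modified. Illustration only. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    static uint8_t xor_parity(const uint8_t *image, size_t len) {
        uint8_t p = 0;
        for (size_t i = 0; i < len; i++)
            p ^= image[i];
        return p;
    }

    int main(void) {
        uint8_t smram_image[16] = {0xDE, 0xAD, 0xBE, 0xEF}; /* stand-in blob */
        uint8_t stored = xor_parity(smram_image, sizeof smram_image);

        smram_image[3] ^= 0x01; /* simulate a tampered update */

        if (xor_parity(smram_image, sizeof smram_image) != stored)
            printf("parity mismatch: SMRAM image was modified\n");
        else
            printf("parity matches\n");
        return 0;
    }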
Biotech

Designer Babies 902

Singularity Hub writes "The Fertility Institutes recently stunned the fertility community by being the first company to boldly offer couples the opportunity to screen their embryos not only for diseases and gender, but also for completely benign characteristics such as eye color, hair color, and complexion. The Fertility Institutes proudly claims this is just the tip of the iceberg, and plans to offer almost any conceivable customization as science makes it available. Even as couples from across the globe are flocking in droves to pay the company their life's savings for a custom baby, opponents are vilifying the company for shattering moral and ethical boundaries. Like it or not, the era of designer babies is officially here and there is no going back."

Comment Get a netbook (Score 1) 465

The best thing to do would be to ensure your entire system is self-sufficient to some degree (i.e., the display, OS, and input devices are fixed). A netbook would be the perfect low-cost solution. Just get an Eee PC with a 4/8 GB flash disk, set it up with some slideshow to start on boot, and store that. To ensure you don't wind up with the problem of bad flash disks, either make a few copies on SD cards or get a ROM-based drive burned with a system image. That way, when people open it up, there won't be issues of how to connect it to a working monitor/keyboard, etc. Just plug in the battery and press the power button.

Comment I agree. Kde4 has issues (Score 4, Insightful) 869

I think Linus is right on this one. I have been using KDE-based Linux desktops on my primary computer for ~7 years now, and KDE 4 is a huge step back. The even bigger problem is that Linux distros (Kubuntu and openSUSE) are happily pushing KDE 4.1 as the default KDE desktop. In fact, with Kubuntu 8.10 there is no option; for KDE 3.5 you have to use 8.04. KDE 4 takes the GNOME approach to desktops (i.e., the user's IQ is equivalent to that of a mostly dead rodent of unusually small size, any options would confuse the poor aforementioned user, and therefore options are bad). Before the GNOME-loving flames begin: yes, I know there are external tools for fiddling with options, but the amount of flexibility is not the same as in KDE 3.5.10.

KDE 4 unfortunately takes the GNOME approach and removes flexibility. Worse still, all the developer time for KDE 4 is now going into polishing the interface (which, while shiny, is no better or more intuitive than KDE 3.5) rather than fixing the apps people actually use. For example, on KDE 4.2, if you add a WebDAV calendar from an HTTPS source that has a self-signed cert, you will be prompted every time it reloads to accept the cert. Yes, that's right: even if you click "accept cert permanently," the DE is incapable of remembering it. This has been outstanding for a while, but all recent activity seems to be geared towards fixing desktop effects or making the kicker work. It's ridiculous.

/rant

Comment Re:The Money Quote (Score 5, Insightful) 228

I have never been a "Windows fanboi" (in fact, this is being posted from a Linux computer), and I am no defender of Microsoft's business practices. However, without doing code analysis, it is impossible to say that this slowdown is because of DRM. Nowhere does the article suggest that they were able to profile the kernel code and compare which modules on the path were causing the delays. So while it is theoretically possible (and likely) that the source of the delay is DRM-related, one cannot be sure. If you possess knowledge otherwise, please feel free to cite it and correct me.
