Comment Re:sure you want to go with 'undead' ? (Score 2) 283
[...] code [...] in sigs and comments. [...] Now it's, apparently, the worst thing [...]
They tried understanding their own sigs after some time.
It's not that specialized. It's just plenty of DSPs strapped together on a torus.
Actually, Anton uses ASICs; their cores are specially geared toward MD codes. This goes way beyond just "strapping together DSPs". They have IIRC ~70 hardware engineers on site. (Source: I visited D. E. Shaw Research last year.)
Contrary to what Wikipedia claims, you could probably achieve comparable performance using a more classical, general-purpose supercomputer setup with GPU or Xeon Phi accelerators, provided the network topology is well tuned to this sort of communication scheme
No, you can't, and here is why: Anton is built for strong scaling of smallish, long-running simulations. If you ran the same simulations on an "x86 + accelerator" system (think ORNL's Titan), you'd observe two effects:
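The strong-scaling argument can be sketched with a toy timing model (a hypothetical Python sketch; all constants are invented for illustration, not Anton's or Titan's real numbers):

```python
# Toy strong-scaling model: with a fixed problem size, the per-rank
# compute share shrinks as you add ranks, but the per-step network
# latency does not, so speedup flattens well below the rank count.
def time_per_step(ranks: int,
                  compute_total: float = 1e-3,  # seconds of work per step (fixed)
                  latency: float = 5e-6) -> float:  # latency per exchange (made up)
    return compute_total / ranks + latency

for p in (1, 64, 4096):
    t = time_per_step(p)
    print(f"{p:5d} ranks: {t * 1e6:8.2f} us/step, "
          f"speedup {time_per_step(1) / t:6.1f}x")
```

Under this (admittedly crude) model, 4096 ranks buy you nowhere near a 4096x speedup, which is exactly the regime Anton's custom network attacks.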
(most recent supercomputers don't use tori)
Let's take a look at the current Top 500:
So, torus networks are the predominant topology for current supercomputers.
Computational drug design and bitcoin mining have in common that both run best on custom hardware. The crux is that they require very different types of hardware. As an example, see Anton, designed by D. E. Shaw Research specifically for molecular dynamics (MD) codes.
Bitcoin mining is a so-called embarrassingly parallel algorithm, while MD is a tightly coupled problem. Hence MD codes are much harder to parallelize efficiently: communication gets in the way, and communication is ultimately bound by the speed of light.
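The distinction can be illustrated with a toy Python sketch (not real mining or MD code): each nonce test below is fully independent of every other, while every MD timestep needs the positions of all other particles.

```python
import hashlib

# Embarrassingly parallel (bitcoin-style): each nonce can be tested
# independently; workers never need to exchange data with each other.
def try_nonce(block_data: bytes, nonce: int, difficulty: int) -> bool:
    h = hashlib.sha256(block_data + nonce.to_bytes(8, "little")).hexdigest()
    return h.startswith("0" * difficulty)

# Tightly coupled (MD-style, toy 1-D O(N^2) force loop): each particle's
# next position depends on *all* the others, so a parallel version must
# exchange positions between workers on every single timestep.
def md_step(positions, velocities, dt=0.01):
    n = len(positions)
    forces = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j:
                r = positions[j] - positions[i]
                forces[i] += r / (abs(r) ** 3 + 1e-9)  # toy 1/r^2 attraction
    new_pos, new_vel = [], []
    for i in range(n):
        v = velocities[i] + forces[i] * dt
        new_vel.append(v)
        new_pos.append(positions[i] + v * dt)
    return new_pos, new_vel
```

The mining loop can be sharded across a million chips with zero cross-talk; the MD loop forces an all-to-all data dependency every step, which is why network latency, not FLOPS, becomes the bottleneck.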
PS: fun fact: both bitcoin mining and MD can be carried out (at least somewhat) efficiently on GPUs.
I would assume the FPGA part of the CPU would be programmed in VHDL.
Yes, that's the obvious reasoning. And that's certainly interesting enough on its own. But the summary said:
[...]for critical functions without translating the majority of their code[...]
Somebody has to do the translation, agree?
I agree it is a good thing. IIRC, Altera even made a tool for synthesis from OpenCL (great for me, as I don't know VHDL and Verilog).
I'm particularly interested in the Parallella board (http://www.parallella.org/), but they're out of stock, and I've been in the queue for months without a response.
By using FPGAs to accelerate certain specific types of workloads, Intel Xeon customers can reap higher performance for critical functions without translating the majority of their code to OpenCL or bothering to update it for GPGPU.
What? This doesn't make sense. Unless Intel has invented a way to automatically generate parallel code (in which case it could also be used for GPUs), somebody would have to rewrite the relevant parts of the program in VHDL, Verilog, OpenCL, or whatever.
According to Wikipedia and vocabulary.com:
Simulation
Simulation is the imitation of the operation of a real-world process or system over time.
Emulation
In computing, emulation is the technique used so one machine gets the same results as another.
So I stand by everything I've said, including that you're wrong.
You have it exactly back asswards.
A simulator simulates how, an emulator emulates what.
If I develop an exact description of the hardware down to the individual registers and control paths, that is called a simulator.
Ah, I see: when I control a virtual airplane, the program is behind the scenes calculating all the mechanical, electrical, and {aero,hydro}dynamic forces, from the engine, from the control cables, from the landing gear, from everything, all the time, so we can call it a flight simulator. Oh wait, it doesn't! It just makes a rough estimate of the aerodynamic forces, good enough that it behaves the way you would expect. Then, according to your (wrong) definition, we should call it a flight emulator.
Did you read the article? He defines simulator as a layer between the application and the OS.
I didn't RTFA, but let me point out that his definition is one way to implement a simulator. Let me summarize it for you:
Simulator: functionality, what it does.
Emulator: function, how it does it.
A simulator mimics the real thing but isn't.
Both do it, only the objectives are different.
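On this poster's definitions, the contrast can be made concrete with a toy Python sketch (my illustration, not anything from the article): both functions produce the same 4-bit sums, but one only models *what* an adder does while the other replicates *how* a ripple-carry adder does it.

```python
def simulate_adder(a: int, b: int) -> int:
    """Simulator: model only the observable functionality (what it does)."""
    return (a + b) & 0xF  # 4-bit result, carry-out dropped

def emulate_adder(a: int, b: int) -> int:
    """Emulator: replicate the internal mechanism (how it does it)."""
    carry, result = 0, 0
    for bit in range(4):  # ripple the carry through each full adder stage
        x = (a >> bit) & 1
        y = (b >> bit) & 1
        s = x ^ y ^ carry                    # sum bit of a full adder
        carry = (x & y) | (carry & (x ^ y))  # carry-out of a full adder
        result |= s << bit
    return result
```

From the outside the two are indistinguishable; only the emulator would reproduce, say, gate-level timing or fault behavior, which is exactly the "objectives are different" point.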
It is one more piece of the 1984 puzzle. It actually reminded me of the movie What About Bob?: baby steps to a total-information-awareness / citizen-extortion state, baby steps to a police state, baby steps to a fucking irrecoverable totalitarian oligarchy... hey, is that Winston Smith?
Nanananananananana ROBIN!!!! Sorry, I've been bribed.
"Chinese technology startup ANTVR [...]"
From: http://games.slashdot.org/stor...
Like... hmm... C?
"One Architecture, One OS" also translates as "One Egg, One Basket".