Comment Re:Free or Open (Score 1) 152
A casual perusal of any open forum on the Internet will readily show that zealotry just as intense is plenty rampant. See: Politics, sports, cars.
I am one such programmer. Yet I also coded for an Nvidia Tesla C1060 board and found it much more straightforward to handle several thousand threads at once.
Not all types of threads are created equal. I usually explain CUDA to people as the "Zerg Rush" model of computing - instead of a couple of well-behaved, intelligent threads that try to be polite to each other and clean up their own messes, you throw a horde of a thousand vicious, stupid little threads at the problem all at once, and rely on some overlord to keep them in line.
Most of the guides explained it as, "Flops are free, bandwidth is expensive." The board had a 512-bit wide memory bus with very high latency, and the reason you throw that many threads at it is to let the hardware cover up the latency - it can coalesce a huge number of memory reads/writes into one transaction, and as soon as a thread stalls waiting on memory I/O, the hardware swaps another thread onto that same SP and lets it compute. If memory serves me, the board was divided into multiprocessors of 8 scalar processors each (each multiprocessor had some scratchpad memory that could be accessed almost as fast as a register), and threads ran in lock-step groups of 32 called warps (no recursion was allowed, and if one thread branched, the others would just wait around until execution reconverged), issued over four rounds on the 8 SPs.
Sure, that's a bit complex to optimize for, but it beats the hell out of conventional threading combined with hand-tuned x86 SIMD. And if you manage to write code that runs well on CUDA, it generally scales effortlessly to whatever card you throw it at.
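To make the "horde of threads" model concrete, here's a minimal sketch of what a CUDA kernel and launch look like (my own toy example, not from any guide - and note it uses unified memory via cudaMallocManaged for brevity, which didn't exist back in the C1060 days):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// One thread per array element; the hardware keeps thousands of these
// in flight, so while some warps stall on memory, others compute.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    // Global index: block offset plus this thread's offset in the block.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                    // guard against the last partial block
        y[i] = a * x[i] + y[i];   // adjacent threads touch adjacent
                                  // addresses, so the accesses coalesce
}

int main(void)
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch vastly more threads than there are SPs - the Zerg rush.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    saxpy<<<blocks, threadsPerBlock>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // 3*1 + 2 = 5
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The scaling point falls out of the index arithmetic: the kernel never mentions how many SPs exist, so the same binary saturates a small card or a big one.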
It's looking like OpenCL won't be much different, but I have yet to try it. I'm kind of eager, since apparently AMD/ATI's current cards, for the money, have a bit more raw power than Nvidia's.
Whoa, that's weird, I just read it 10-15 minutes ago, but it's pay-walled now for me too.
Both groups "got off their ass" and "went outside". The comparison was between walking in a city area, and walking in a forest.
Did you even open the article?
No. I even re-read the summary about 10 times in a row, trying to figure out what exactly was harmful about forest bathing.
Stunts is one of my favorite games too. I remember first seeing my brother play it on our 386, and then something like a decade later I finally found it again on an abandonware site. The track editor makes for a lot of replay value. Sure, it's still grid-based and sometimes it's picky, but it is still remarkably versatile.
For being able to run on something that slow, the engine was quite respectable - it was true 3D, wasn't it? Even if everything was very low polygon count...
The physics in Stunts also has some amusing issues. If you hit a building just right, your car flies straight up in the air to a ridiculous height before falling down again. I think it's also possible to make your car spontaneously explode if you enter a long tube and turn suddenly so your car moves in a circle.
So according to you,
V dt = dphi = L di?
Or is this wrong? And if yes, why?
Well...
V dt = L di
(V dt)/dt = (L di)/dt
V = L di/dt
Or did I totally miss your point?
i = current
q = charge
V = voltage
phi = magnetic flux
dq = i dt (current)
dphi = V dt (voltage)
dV = R di (resistance)
dq = C dV (capacitance)
dphi = L di (inductance)
(see http://www.spectrum.ieee.org/may08/6207)
It was hypothesized that some device should exist that connects charge and flux, and follows the relationship: dphi = M dq. This is "memristance." It was predicted in 1971 as the "fourth basic circuit element"; see: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1083337
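Spelling that out a bit (my own paraphrase of the standard argument): the interesting case is when M is not a constant but depends on the charge that has already flowed through the device. Then, using the definitions above:

```latex
% Memristance: flux related to charge, with M a function of charge
d\phi = M(q)\,dq
% Divide both sides by dt, then substitute d\phi = V\,dt and dq = i\,dt:
V(t) = M\bigl(q(t)\bigr)\,i(t)
% So a memristor behaves like a resistor whose resistance depends on
% the history of charge through it -- a "memory resistor". If M were
% constant, it would be indistinguishable from an ordinary resistor.
```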
They were theoretically new back then, as a fundamental element. They just had not been physically realized and connected with that theory until recently.
Please don't dismiss them as "pure marketing hype" without some research.
Machines have less problems. I'd like to be a machine. -- Andy Warhol