Comment Re: Looking more and more likely all the time... (Score 5, Insightful) 511

Eh... The physics mechanisms proposed ARE very controversial! The classical physics mechanism simply shouldn't work, and the quantum physics proposals are far-off speculations that aren't likely to be true.

But the amount of experimental verification from separate sources indicates that either there is some factor they all forgot, or that there is new physics at play. I hope for the latter :)

Comment Re:Under what authority? (Score 1) 298

Perhaps you free-speech "pundits" should first understand what it means? The idiot in question (in the story, not you) hasn't been hindered from speaking; the venue didn't want him to perform there, and the organizers agreed. Then the organizers fucked up.

But again, this rapper has not been stopped from speaking. This isn't about free speech at all.

Comment Re: Title contradicts summary (Score 1) 143

Let's look at the actual setup used in this benchmark: AMD A10 7800B

4 Steamroller CPU cores (2 modules):
2x128-bit FMAC per module = 2x4 single-precision FMAC = 8 FMAC per module
= 16 FMAC/clock total

8 GCN compute units:
4x16 single-precision FMAC per compute unit = 64 FMAC per CU
= 512 FMAC/clock total

Compute throughput:
CPU: 3500 MHz x 16 = 56 GFLOPS
GPU: 750 MHz x 512 = 384 GFLOPS

So we get roughly 7x the (single-precision) throughput using the GPU.
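The arithmetic above can be checked in a few lines (a sketch using the clock speeds and per-clock FMAC counts from this post, with an FMA counted as one operation throughout):

```python
# Peak single-precision FMAC throughput of the A10-7800B, built from the
# per-clock counts derived above.
cpu_fmac_per_clock = 2 * (2 * 4)     # 2 modules x 2x128-bit FMAC x 4 SP lanes = 16
gpu_fmac_per_clock = 8 * (4 * 16)    # 8 CUs x 4 SIMDs x 16 lanes = 512

cpu_gflops = 3.5 * cpu_fmac_per_clock    # 3.5 GHz CPU clock
gpu_gflops = 0.75 * gpu_fmac_per_clock   # 750 MHz GPU clock

print(cpu_gflops)               # 56.0
print(gpu_gflops)               # 384.0
print(gpu_gflops / cpu_gflops)  # ~6.9
```

Counting an FMA as two flops doubles both figures but leaves the ratio unchanged.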

But that ignores the fact that GPUs are designed to tolerate long average memory access times while CPUs aren't. If the access pattern of the data isn't optimal (easily cacheable), the CPU will be stalled most of the time while the GPU will not. The GPU also has other resources (texture samplers etc.) that can be used to increase performance IF the code can use them.


But (as I pointed out in the earlier post) it isn't likely that there would be such a huge difference unless the CPU were running crappy code. Most likely the CPU uses double-precision floats while the GPU uses single precision. IIRC the GPU in question runs double-precision floats at 1/16 the throughput of single precision - which would make the CPU superior in raw number crunching.
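Taking that 1/16 double-precision rate at face value, the comparison flips (a sketch; the half-rate DP figure for the CPU's FMA units is my assumption, not from the post):

```python
gpu_sp_gflops = 384.0
cpu_sp_gflops = 56.0

gpu_dp_gflops = gpu_sp_gflops / 16  # 1/16 SP rate quoted above
cpu_dp_gflops = cpu_sp_gflops / 2   # assumed: CPU FMA runs DP at half the SP rate

print(gpu_dp_gflops)  # 24.0
print(cpu_dp_gflops)  # 28.0 -> the CPU comes out ahead in double precision
```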

Comment Re: Title contradicts summary (Score 1) 143

Yes, but I didn't claim otherwise. The fact is that a GPU running code that fits it can get over 500x the performance of a CPU. However, most real-world code isn't as parallelizable as e.g. 3D rendering, so overheads will strongly reduce the GPU's advantage.
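How hard the non-parallelizable part bites can be sketched with Amdahl's law (the fractions below are illustrative numbers, not from the benchmark):

```python
def amdahl_speedup(parallel_fraction, parallel_speedup):
    """Overall speedup when only part of the work benefits from acceleration."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / parallel_speedup)

# Even a 500x faster parallel section is capped hard by the serial remainder:
print(round(amdahl_speedup(0.90, 500.0), 1))  # 9.8  (10% serial work)
print(round(amdahl_speedup(0.99, 500.0), 1))  # 83.5 (1% serial work)
```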

As I wrote: "With a few exceptions, reports of huge speedups for GPU computing are because the CPU is fed severely suboptimal code". A CPU running really shitty code is a "good" comparison point if one wants to promote GPGPU, though not a realistic one.

Comment Re: Title contradicts summary (Score 2) 143

GPUs are optimized for massive parallelism and, for tasks that fit them, can be much faster than CPUs, which aren't. They can also be much slower when the task doesn't fit. 500x faster than a normal processor is well within a reasonable speedup range.

But I also partially agree with you - the reason we see such a huge speedup in this case is probably that the CPU comparison point is running badly optimized code. With a few exceptions, reports of huge speedups for GPU computing are because the CPU is fed severely suboptimal code.

Comment Re:Heuristic Optimization? No, identifying executa (Score 1) 114

So what do your post title and your quote have in common?
The quote is correct: the standard mechanism for optimizing the extremely complex graphics driver is heuristic, but there is a coarse-grained mechanism that allows bypassing it, triggered by the executable name in most cases.
IFF a game that isn't individually optimized in that manner has rendering patterns similar to a game that is, renaming can help.

Comment Re:Speed v.s. reliability (Score 2) 114

As usual you are talking out of your ass. There is a lot of evidence of the reverse (Nvidia technologies and optimization support that artificially decrease AMD performance), but none that using AMD tech does the same to Nvidia. I'm not talking about simply missing optimizations here, BTW.

And we aren't talking about a whitelist, we are talking about drivers adjusting themselves using a coarse-grained mechanism. The difference is obvious.

Comment Re:There is no cure for absolute fucking stupidity (Score 1) 232

How about nutters who think guns solve every problem? There's a staggering amount of magical thinking in the gun nut groups.
How about people who think owning a gun somehow makes them safe, while statistics show the opposite?
Or those who think an armed society is a safe society, while every society that can be considered armed invariably descends into a violent shithole...

Gun nuts aren't usually just nuts about guns; they are nuts on most levels. Gun nerds, on the other hand, can be reasonable.

Comment Re:Hogwash! Poppycock! Rubbish! (Score 1) 93

you can still use your shell scripts within systemd

That's not the issue, but I am unsurprised to see you get this wrong because your reading comprehension is for shit and you are willfully disingenuous when it comes to systemd. The issue is not being able to use shell scripts, the issue is the additional unnecessary complexity of systemd. See, all you need is init and some very small shell scripts, of about the same complexity it would take to replace you.

Cute insult - if it had come from someone significant, that is. As it stands it's just as pathetic as you are.

And while you could be sort of right for an embedded system, most people complaining aren't using systemd in that context. In a dynamic context like a desktop machine/notebook/workstation, your idea of "small shell scripts" really goes out the window.
