Comment Re:makes no sense to me (Score 1) 536

No, it isn't an interesting point - it is a fundamental misunderstanding of how sexual preference works. A misunderstanding I could understand a straight US citizen having - since the media always presents sexual preference as a choice rather than something one is born with - but a homosexual person? Nah.

But for those that "have a choice" (technically, bisexuals), sure, they should be able to "choose" to have sex with someone of the same gender. Which is true now legally, and (_drum_roll_) would be true socially/"morally" if the level of acceptance increased.

Comment Re:From TFA: bit-exact or not? (Score 4, Informative) 174

Interpolation isn't about adding noise.

6 bit (per component) LCDs have for at least 10 years, and probably much longer, used dithering techniques to produce an effective 16.2M colors (compared to a true 8 bit panel's 16.7M colors). This works very well for almost all use cases and produces smooth gradients, but has the disadvantage that some image patterns can produce flashing due to interference with the dithering algorithm.

Dithering isn't about adding noise either BTW.
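To give a feel for the technique, here is a minimal sketch (names and the 4x4 Bayer matrix choice are illustrative, not any actual panel controller's algorithm) of ordered dithering from 8 bits down to 6: each pixel is rounded up or down depending on its screen position, so the spatial average preserves the original 8-bit level.

```python
# Ordered (Bayer) dithering sketch: quantize an 8-bit channel to 6 bits.
# Illustrative only - real LCD controllers may also alternate the pattern
# per frame (temporal dithering), which is what can cause visible flashing.

BAYER_4x4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_8_to_6(value, x, y):
    """Quantize value (0-255) to 6 bits (0-63) using position-dependent rounding."""
    threshold = BAYER_4x4[y % 4][x % 4]  # 0..15, varies over a 4x4 tile
    low = value >> 2                     # truncate to 6 bits
    frac = value & 0x3                   # the two discarded bits (0..3)
    # Round up on frac/4 of the pixels in each 4x4 tile.
    if frac * 4 > threshold and low < 63:
        low += 1
    return low
```

Averaged over a 4x4 tile, the output reproduces the 8-bit level exactly: for input 129, one quarter of the pixels become 33 and the rest 32, averaging 32.25 = 129/4.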

Comment Re:No it hasn't (Score 1) 157

The IBM Z mainframe is a direct descendant of the IBM System/360 from the 1960s.

Using a modified PPC to run the legacy CISCy code would be bad for both performance and reliability. This holds even assuming you meant IBM POWER rather than PowerPC.

IBM shares process technology and experience (e.g. in optimizing decimal floating point execution) between the POWER and Z series - but they are completely separate designs.

Comment Re: Looking more and more likely all the time... (Score 5, Insightful) 518

Eh... The physics mechanisms proposed ARE very controversial! The classical physics mechanism simply shouldn't work, and the quantum physics proposals are far-off speculations that aren't likely to be true.

But the amount of experimental verification from separate sources indicates that either there is some factor they all forgot, or there is new physics at play. I hope for the latter :)

Comment Re:Under what authority? (Score 1) 298

Perhaps you free-speech "pundits" should first understand what it means? The idiot in question (in the story, not you) hasn't been hindered from speaking - he wasn't wanted as a performer at a venue, and the organizers agreed. Then the organizers fucked up.

But again, this rapper has not been stopped from speaking. This isn't about free speech at all.

Comment Re: Title condradicts summary (Score 1) 144

Let's look at the actual setup used in this benchmark: AMD A10 7800B
4 Steamroller CPU cores (2 modules):
2x128 bit FMAC per module = 2x4 Single precision FMAC = 8 FMAC per module
16 FMAC/clock

8 GCN compute units:
4x16 single precision FMAC per compute unit = 64 FMAC per CU
512 FMAC/clock

Compute throughput:
CPU: 3500 MHz x 16 = 56 GFLOPS
GPU: 750 MHz x 512 = 384 GFLOPS

So the GPU gets roughly 7x the (single precision) throughput of the CPU.
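Redoing the peak arithmetic from the unit counts above (counting each FMAC as one operation, as this comparison does - counting an FMA as two flops would double both figures but leave the ratio unchanged):

```python
# Peak single-precision FMAC throughput for the AMD A10 7800B setup above.

cpu_fmac_per_clock = 2 * (2 * (128 // 32))  # 2 modules x 2x128-bit FMAC x 4 SP lanes
gpu_fmac_per_clock = 8 * (4 * 16)           # 8 CUs x 4 SIMD units x 16 lanes

cpu_gflops = 3.5e9 * cpu_fmac_per_clock / 1e9   # 3.5 GHz Steamroller cores
gpu_gflops = 0.75e9 * gpu_fmac_per_clock / 1e9  # 750 MHz GCN shader clock

print(cpu_gflops)               # 56.0
print(gpu_gflops)               # 384.0
print(gpu_gflops / cpu_gflops)  # ~6.9
```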

But that ignores the fact that GPUs are designed to tolerate long average memory access times while CPUs aren't. If the data access pattern isn't optimal (easily cacheable), the CPU will be stalled most of the time; the GPU will not. The GPU also has other resources (texture samplers, etc.) that can be used to increase performance IF the code can use them.


But (as I pointed out in the earlier post) it isn't likely that there would be such a huge difference if the CPU weren't running crappy code. Most likely the CPU uses double precision floats while the GPU uses single precision. IIRC the GPU in question runs double precision floats at 1/16 the throughput of single precision - which would make the CPU superior in raw number crunching.
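To put numbers on that claim: take the GPU's 1/16 double-precision rate as given, and assume (my assumption, not from the post) that each 128-bit Steamroller FMAC unit handles two double-precision lanes, i.e. half the single-precision rate:

```python
# Rough peak double-precision comparison, derived from the same unit counts.
# The CPU's half-rate DP (2 DP lanes per 128-bit FMAC unit) is an assumption.

gpu_dp_gflops = (0.75e9 * 512 / 1e9) / 16  # 750 MHz x 512 SP FMAC, at 1/16 rate
cpu_dp_gflops = 3.5e9 * 2 * 2 * 2 / 1e9    # 3.5 GHz x 2 modules x 2 FMACs x 2 DP lanes

print(gpu_dp_gflops)  # 24.0
print(cpu_dp_gflops)  # 28.0
```

Under these assumptions the CPU edges out the GPU in peak double precision, which is consistent with the point above.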

Comment Re: Title condradicts summary (Score 1) 144

Yes, but I didn't claim otherwise. The fact is that a GPU running code that suits it can get over 500x the performance of a CPU. However, most real-world code isn't as parallelizable as e.g. 3D rendering, so overheads will strongly reduce the GPU's advantage.

As I wrote: "With a few exceptions, reports of huge speedups for GPU computing are because the CPU is fed severely suboptimal code". A CPU running really shitty code is a "good" comparison point if one wants to promote GPGPU, though it's not realistic.
