Are you talking about the Voight-Kampff test?
How about those who received benefits for a period before actually finding a job and are willing to give back part of their payments out of empathy?
Oh, right - empathy is un-Amerikkkan. Sure glad a lot of US citizens are.
You have to be quite stupid to equate a phone book in "the good old days" with doxxing on the Internet.
What next: claiming a DDoS is like someone knocking on your door?
No, it isn't an interesting point - it is a fundamental misunderstanding of how sexual preference works. A misunderstanding I could understand a straight US citizen having - since the media always presents sexual preference as a choice rather than something one is born with - but a homosexual person? Nah.
But for those who "have a choice" (technically, bisexuals), sure, they should be able to "choose" to have sex with someone of the same gender. Which is true now legally and (_drum_roll_) would be true socially/"morally" if the level of acceptance increased.
Men and women alike want sex, and men and women alike want relationships. There are those of either gender who don't want one or the other (and a very small group who want neither), but that isn't really relevant.
And I hope* you have a serious stroke followed by a long (otherwise) healthy life - having locked-in syndrome for 50 years may help you realize that judging another person without proof may not be so smart...
(* not really, even if your death wish almost got me there)
Interpolation isn't about adding noise.
6-bit (per component) LCDs have, for at least 10 years and probably much longer, used dithering techniques to produce an effective 16.2M colors (compared to a true 8-bit panel's 16.7M colors). This works very well for almost all use cases and provides smooth gradients, but has the disadvantage that some image patterns can produce flashing due to interference with the dithering algorithm.
Dithering isn't about adding noise either BTW.
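To make that concrete, here's a minimal sketch of ordered (Bayer-matrix) dithering - purely illustrative, not any panel vendor's actual algorithm. Note the thresholds are a fixed, deterministic pattern, not random noise: neighboring pixels round the same 8-bit value up or down in a regular arrangement so the spatial average lands close to the true value.

```python
# Ordered dithering sketch: quantize 8-bit values (0-255) to 6-bit panel
# levels (0-63) using a 4x4 Bayer threshold matrix. Deterministic - no noise.

BAYER_4X4 = [
    [0, 8, 2, 10],
    [12, 4, 14, 6],
    [3, 11, 1, 9],
    [15, 7, 13, 5],
]

def dither_8bit_to_6bit(value, x, y):
    """Quantize one 8-bit channel value to a 6-bit level, position-dependently."""
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0  # in (0, 1)
    level = value / 255.0 * 63.0                         # ideal fractional 6-bit level
    return min(63, int(level + threshold))               # round up or down by position

# An 8-bit gray with no exact 6-bit representation:
value = 130
tile = [dither_8bit_to_6bit(value, x, y) for y in range(4) for x in range(4)]

# The mean of the displayed 4x4 tile, mapped back to the 8-bit scale,
# approximates the original value even though no single pixel can show it.
mean_8bit = sum(t * 255 / 63 for t in tile) / len(tile)
```

The tile ends up as a mix of two adjacent 6-bit levels (32 and 33 here) whose average reproduces the in-between shade; temporal dithering (FRC) does the same trick across frames instead of across pixels.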
The IBM Z mainframe is a direct descendant of the IBM System/360 from the 1960s.
Using a modified PPC to run the legacy CISCy code would be bad for both performance and reliability. This holds even assuming you meant IBM POWER rather than PowerPC.
IBM shares process technology and experience (e.g. in optimizing decimal floating-point execution) between the POWER and Z series - but they are completely separate designs.
Eh... The physics mechanisms proposed ARE very controversial! The classical physics mechanism simply shouldn't work, and the quantum physics proposals are far-off speculations that aren't likely to be true.
But the amount of experimental verification from separate sources indicates that either there is some factor they all forgot, or there is new physics at play. I hope for the latter alternative.
Exactly. He is also able to travel to Chicago and express what he wants* using his free speech rights.
(* with some limitations, like yelling "fire", etc.)
Perhaps you free-speech "pundits" should first understand what it means? The idiot in question (in the story, not you) hasn't been hindered from speaking; he was wanted as a performer at a venue and the organizers agreed. Then the organizers fucked up.
But again, this rapper has not been stopped from speaking. This isn't about free speech at all.
Let's look at the actual setup used in this benchmark: AMD A10-7800B
4 Steamroller CPU cores (2 modules):
2x128-bit FMAC per module = 2x4 single-precision FMAC = 8 FMAC per module
8 GCN compute units:
4x16 single-precision FMAC per compute unit = 64 FMAC per CU
CPU: 3500MHz x 16 = 56 GFLOPS
GPU: 750MHz x 512 = 384 GFLOPS
So we get roughly 7x the (single precision) throughput using the GPU.
But that ignores the fact that GPUs are designed to tolerate long average memory access times while CPUs aren't. If the access pattern of the data isn't optimal (easily cacheable), the CPU will be stalled most of the time; the GPU will not. The GPU also has other resources (texture samplers, etc.) that can be used to increase performance IF the code can use them.
But (as I pointed out in the earlier post) it isn't likely that there would be such a huge difference if the CPU didn't run crappy code. Most likely the CPU code uses double-precision floats while the GPU uses single precision. IIRC the GPU in question runs double-precision floats at 1/16 the throughput of single precision - which would make the CPU superior in raw number crunching.
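As a quick sanity check on the arithmetic, here's the peak-throughput math in a few lines of Python. One FMAC is counted as one operation on both sides (counting it as two flops scales both numbers equally, so the ratio is unchanged). The double-precision rates are assumptions: half-rate DP for the CPU's 128-bit FMACs, and the 1/16-rate DP noted above for this GPU.

```python
# Peak-throughput estimate for the AMD A10-7800B setup discussed above.

CPU_CLOCK_GHZ = 3.5
CPU_SP_FMAC_PER_CYCLE = 2 * 8      # 2 Steamroller modules x 8 SP FMAC each

GPU_CLOCK_GHZ = 0.75
GPU_SP_FMAC_PER_CYCLE = 8 * 64     # 8 GCN compute units x 64 SP FMAC each

# Single precision: GPU has roughly a 7x edge (384 vs 56 GFMAC/s).
cpu_sp = CPU_CLOCK_GHZ * CPU_SP_FMAC_PER_CYCLE
gpu_sp = GPU_CLOCK_GHZ * GPU_SP_FMAC_PER_CYCLE

# Double precision (assumed rates: CPU at 1/2 of SP, this GPU at 1/16 of SP):
# now the CPU comes out slightly ahead (28 vs 24 GFMAC/s).
cpu_dp = cpu_sp / 2
gpu_dp = gpu_sp / 16
```

Under those assumptions the whole "GPU is 10x faster" result flips to a small CPU win the moment both sides run double precision, which is the point being made.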
Yes, but I didn't claim otherwise. The fact is that a GPU running code that suits it can get over 500x the performance of a CPU. However, most real-world code isn't as parallelizable as e.g. 3D rendering, so overheads will strongly reduce the GPU's performance.
As I wrote: "With a few exceptions, reports of huge speedups for GPU computing are because the CPU is fed severely suboptimal code". A CPU running really shitty code is a "good" comparison point if one wants to promote GPGPU, though not a realistic one.
Really? While some spreadsheet workloads could be considered that, most should run faster on a CPU - if the CPU uses optimized code. Parallelizing the calculations has inherent overheads.
In space, no one can hear you fart.