Comment Re:That's it? (Score 5, Informative) 67

Yeah, but with this kind of application the real bottleneck is that the discrete GPU has to access data through the high-latency, low-bandwidth PCIe bus. For this kind of workload an IGP, even with its lower core count, is often a much better solution, unless you manage to fully cover the host-device-host transfers with computation.
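
(For what it's worth, the usual trick for covering the transfers is double buffering. Here's a rough sketch in plain OpenCL host code, purely illustrative: the queue, kernel and the two device buffers are assumed to already exist, the kernel is assumed to take the chunk buffer as its first argument, and you need an out-of-order queue, or two queues, to actually get the overlap.)

#include <CL/cl.h>
#include <stddef.h>

/* Double buffering: the upload of chunk i+1 can run while the kernel is
 * still chewing on chunk i, since each command only waits on the events
 * it actually depends on. */
static void process_chunks(cl_command_queue queue, cl_kernel kernel,
                           cl_mem buf[2], const char *host_data,
                           size_t nchunks, size_t chunk_bytes,
                           size_t global_ws)
{
    cl_event kernel_done[2] = { NULL, NULL };

    for (size_t i = 0; i < nchunks; ++i) {
        int slot = (int)(i % 2);
        cl_event write_done;

        /* Non-blocking upload; it only waits for the kernel that last
         * used this slot (two iterations ago), not for everything. */
        clEnqueueWriteBuffer(queue, buf[slot], CL_FALSE, 0, chunk_bytes,
                             host_data + i * chunk_bytes,
                             kernel_done[slot] ? 1 : 0,
                             kernel_done[slot] ? &kernel_done[slot] : NULL,
                             &write_done);
        if (kernel_done[slot])
            clReleaseEvent(kernel_done[slot]);

        /* The kernel waits only for its own upload, so the next chunk's
         * upload can overlap with its execution. */
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf[slot]);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_ws, NULL,
                               1, &write_done, &kernel_done[slot]);
        clReleaseEvent(write_done);
    }

    clFinish(queue);
    if (kernel_done[0]) clReleaseEvent(kernel_done[0]);
    if (kernel_done[1]) clReleaseEvent(kernel_done[1]);
}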

I'd be really curious to see this thing done in OpenCL on a recent AMD APU, exploiting all the CPU cores and the IGP cores concurrently.
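
(On an APU the CPU and the IGP typically show up as two devices of the same OpenCL platform, so you can put them in a single context, give each its own command queue and split the NDRange between the two. A minimal, hypothetical sketch, error checking mostly omitted:)

#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id devs[2];   /* devs[0] = CPU, devs[1] = IGP */
    cl_int err;

    clGetPlatformIDs(1, &platform, NULL);
    err  = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &devs[0], NULL);
    err |= clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &devs[1], NULL);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "need a platform exposing both a CPU and a GPU\n");
        return 1;
    }

    /* One context spanning both devices, one queue per device. */
    cl_context ctx = clCreateContext(NULL, 2, devs, NULL, NULL, &err);
    cl_command_queue cpu_q = clCreateCommandQueue(ctx, devs[0], 0, &err);
    cl_command_queue igp_q = clCreateCommandQueue(ctx, devs[1], 0, &err);

    /* ... build the program once, create a kernel per device, enqueue
     * e.g. 30% of the global work-size on cpu_q and 70% on igp_q,
     * then clFinish() both queues ... */

    clReleaseCommandQueue(igp_q);
    clReleaseCommandQueue(cpu_q);
    clReleaseContext(ctx);
    return 0;
}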

Comment Re:Cards from duopoly are artificially crippled (Score 4, Informative) 157

Either you're trolling or you have no frigging idea what you're talking about.

It is true that often the low-end cards are just crippled versions of the high-end cards, something which —as despicable as it might be— is nothing new to the world of technology. But going from this to saying that there is no competition and no (or slow) progress is a step into ignorance (or trolling).

I've been dealing with GPUs (for the purpose of computing, not gaming) for over five years, that is to say almost since the beginning of proper hardware support for computing on GPU. And there has been a lot of progress, even with the very little competition there has been so far.

NVIDIA alone has produced three major architectures, with very significant differences between them. Compare the capabilities of a Tesla (1st gen) with those of a Fermi (2nd gen) or a Kepler (3rd gen), for example: Fermi introduced an L2 and an L1 cache, which were not present in the Tesla arch, lifting some of the very strict algorithmic restrictions imposed on memory-bound kernels; it also introduced hardware-level support for double precision. Kepler is not as big a change, but it has introduced things such as the ability for stream processors to swizzle private variables among themselves, which is a rather revolutionary idea in the GPGPU paradigm. And six times more stream processors per compute unit than the previous generation is not exactly something I'd call "not that much different".

AMD has only had one major overhaul (the introduction of GCN) instead of two, but I'm not going to spend more words on how much of a change it was compared to the previous VLIW architectures they had. It's a completely different beast, and the most important benefit is that its huge computing power can be harnessed much more straightforwardly. And if you ever had to hand-vectorize your code chasing the pre-GCN sweet spot of workload per wavefront, you'd know what a PITN that was.

I would actually hope they stopped coming up with new archs and spent some more time refining the software side. AMD has some of the worst drivers ever shipped by a major hardware manufacturer (in fact, considering they've consistently had better, cheaper hardware, there isn't really any other explanation for their failure to gain dominance in the GPU market), but NVIDIA isn't exactly problem-free: their OpenCL support, for example, is ancient and crappy (obviously, since they'd rather have people use CUDA to do compute on their GPUs).

And hardware-wise, Intel is finally stepping up their game. With the HD 4000 they've finally managed to produce an IGP with decent performance (it even supports compute), although AMD's APUs are still top dog. On the HPC side, their Xeon Phi offerings are very interesting competitors to the NVIDIA Tesla cards (Tesla the brand name for the HPC-dedicated devices, not the arch).

Comment Re:None use intel or amd for graphics? (Score 3, Informative) 187

Nvidia hardware isn't really clearly superior to AMD.. they rotate on who has the best hardware at various price points.

Actually, if you just look at the specifications, ATI/AMD has almost always had the (theoretically) most competitive hardware (GPU-wise), both in terms of performance/price ratio and often even in terms of raw computing power and memory bandwidth. AMD was even the first to come out with hardware support for compute on GPU (the first CTM/CAL betas came out before CUDA was ever mentioned anywhere), even if it required assembly programming of the shaders (which you could often avoid by using a layer such as BrookGPU).

However, their GPUs have been crippled by the most horrible software ecosystem possible. By and large the main culprit is ATI/AMD itself, which has constantly failed at producing high-quality, stable drivers and capable compilers for their shaders. A secondary culprit (which has finally been removed from the equation) is the architecture itself: up until the introduction of GCN, AMD shaders had a VLIW architecture (VLIW5 first, VLIW4 in the last releases before GCN) which was often not easy to exploit without heavy-duty restructuring and vectorization of your shader code, so you often found yourself with huge horsepower available while only being able to exploit some 30-60% of it at best.
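
(To give an idea of what that hand-vectorization meant in practice, here's a toy example, not taken from any real codebase: the same axpy-style operation written scalar and in the float4 form the VLIW shader compilers wanted to see. On GCN, and on NVIDIA, the scalar version is generally fine, which is a big part of why GCN is so much easier to get performance out of.)

/* OpenCL C, illustrative only. On VLIW4/VLIW5 the scalar version tends to
 * leave most of the slots in each VLIW bundle idle, while the float4
 * version hands the compiler four independent lanes to pack per item. */

__kernel void axpy_scalar(__global const float *x,
                          __global float *y,
                          const float a)
{
    size_t i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}

__kernel void axpy_vec4(__global const float4 *x,
                        __global float4 *y,
                        const float a)
{
    size_t i = get_global_id(0);   /* each work-item handles 4 elements */
    y[i] = a * x[i] + y[i];
}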

Comment Re:Refactor? APU? (Score 1) 211

While OpenCL might not be necessary, there's no reason not to use it, since it will mean easy, cross-platform support for multicore programming and for vector functions, which would be useful on any modern system, even just on CPUs. (And of course, if the system also has an APU whose GPU, with the upcoming hUMA architecture, can access the same memory space as the CPU, why not make use of that too?)
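
(Just to make the spreadsheet case concrete: the kind of thing that maps trivially to OpenCL is a column-wise formula applied to thousands of rows. A toy kernel, with hypothetical names and nothing to do with how any real spreadsheet implements it:)

#pragma OPENCL EXTENSION cl_khr_fp64 : enable

/* Recalculate a whole column at once: D[i] = A[i] * B[i] + C[i] for every
 * row, instead of interpreting the formula row by row on a single core. */
__kernel void recalc_column(__global const double *A,
                            __global const double *B,
                            __global const double *C,
                            __global double *D)
{
    size_t row = get_global_id(0);
    D[row] = A[row] * B[row] + C[row];
}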

Of course the benefits will only be visible to people with huge spreadsheets. As for 1-2-3 being speedy on a 486, well, either you only had small spreadsheets or the mists of the past are obscuring your memory.

Comment Re:Lesson learned (Score 3, Interesting) 53

OK, serious question here: are there _technical_ reasons for hating GRASS? It does have a butt-ugly UI, but it's extremely flexible and extensible, and it's designed with a Unix-like philosophy in mind: a collection of tools that each do one thing but are well integrated with each other. I'm not saying it's perfect, but then again neither is ArcGIS.

Comment Re:Poor Opera (Score 3, Interesting) 135

I'm an Opera user myself, and while I agree that (one of) the main reason(s) for this preference was the functionality of the whole thing, I did like the Opera rendering engine and often found it to be more standards-compliant than other engines, even where its coverage was smaller. I'm a little afraid that the switch to Blink will break some of the functionality I've been relying on (such as the ‘presentation mode’ in full-screen).

On the other hand, with the Blink/WebKit fork we are probably going to have three main engines again, and this is a good thing.

Comment Re:Might be important, but probably not... (Score 1) 176

OpenCL is suboptimal on NVIDIA only because NVIDIA refuses to keep their support up to date, as doing so would chip away at their attempt at vendor lock-in with CUDA.

I honestly think everybody doing serious manycore computing should use OpenCL. NVIDIA underperforms with that? Their problem. Ditch them.

Comment Re:Console margins can't be good (Score 1) 255

I absolutely agree that the software support AMD has for their cards is inferior to NVIDIA's. And this definitely pisses me off, considering their hardware is _consistently_ better than the competitor's, in terms of raw performance _and_ in terms of performance/price. OTOH, I get the impression that their software support is slowly getting better. At the very least, I haven't had any significant issues recently (at least using Debian unstable with their packaged drivers).
