
Comment Re:Canonical hires only morons. (Score 2) 118

Mark is a good guy. Too bad he sucks at hiring. All of the Ubuntu bullshit you hear about is because he has fucking morons working for him who cannot tell their arse from a hole in the ground.

Mark. For the love of god. Fire EVERYONE at Canonical and hire people that have a goddamned clue.

As a friend of mine once taught me, “first class people hire first class people; second class people hire third class people”.

Comment Re:No Market Impact Expected, but Short it anyway (Score 3, Insightful) 181

There are only two rendering engines for Linux, and they are Gecko and Webkit, both of which have horrible support for a lot of advanced web standards such as SVG and MathML, because the focus today is on who makes the fanciest sliding div effect rather than on actually properly implementing existing stuff. The loss of Presto and the reduction of alternatives is a very sad day for the web.

Comment Re:That's it? (Score 5, Informative) 67

Yeah, but with this kind of application the real bottleneck is the fact that the discrete GPU needs to access data through the high-latency, low-bandwidth PCIe bus. For this kind of workload an IGP, even with its lower core count, is often a much better solution, unless you manage to fully cover the host-device-host transfers with computation.

I'd be really curious to see this thing done in OpenCL on a recent AMD APU, exploiting all the CPU cores and the IGP cores concurrently.
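A back-of-the-envelope model makes the point. The numbers below are illustrative assumptions, not measurements: roughly 12 GB/s of effective PCIe 3.0 x16 bandwidth, a 3 TFLOPS discrete card, and a 700 GFLOPS IGP that reads host memory directly (so it moves no bytes over the bus):

```python
def effective_gflops(flops, bytes_moved, peak_gflops, bus_gbs, overlap=False):
    """Sustained throughput once host<->device transfers are accounted for."""
    t_compute = flops / (peak_gflops * 1e9)
    t_transfer = bytes_moved / (bus_gbs * 1e9) if bytes_moved else 0.0
    # overlapping transfers with compute hides the shorter of the two phases
    t_total = max(t_compute, t_transfer) if overlap else t_compute + t_transfer
    return flops / t_total / 1e9

# 1 GB of data at 10 flops/byte: the 3 TFLOPS card is throttled to ~115 GFLOPS
# by the bus, while the 700 GFLOPS IGP touching host memory directly is not.
discrete = effective_gflops(1e10, 1e9, 3000, 12)
igp = effective_gflops(1e10, 0, 700, 12)

# Only at much higher arithmetic intensity (here 300 flops/byte), and with the
# transfers fully overlapped, does the discrete card pull ahead again.
discrete_hi = effective_gflops(3e11, 1e9, 3000, 12, overlap=True)
```

Which is exactly the "unless you fully cover the transfers" caveat: below some arithmetic intensity the bus, not the core count, decides the winner.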

Comment Re:Cards from duopoly are artificially crippled (Score 4, Informative) 157

Either you're trolling or you have no frigging idea what you're talking about.

It is true that often the low-end cards are just crippled versions of the high-end cards, something which —as despicable as it might be— is nothing new to the world of technology. But going from this to saying that there is no competition and no (or slow) progress is a step into ignorance (or trolling).

I've been dealing with GPUs (for the purpose of computing, not gaming) for over five years, that is to say almost since the beginning of proper hardware support for computing on GPU. And there has been a lot of progress, even with the very little competition there has been so far.

NVIDIA alone has produced three major architectures, with very significant differences between them. Compare the capabilities of Tesla (1st gen) with those of Fermi (2nd gen) or Kepler (3rd gen), for example: Fermi introduced an L2 and an L1 cache, which were not present in the Tesla arch, lifting some of the very strict algorithmic restrictions imposed on memory-bound kernels; it also introduced hardware-level support for DP. Kepler is not as big a change over Fermi, but it introduced things such as the ability for stream processors to swizzle private variables among themselves, which is a rather revolutionary idea in the GPGPU paradigm. And six times more stream processors per compute unit over the previous generation is not exactly something I'd call "not that much different".
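For what the swizzling buys you: on Kepler the CUDA `__shfl_down` intrinsic lets lanes in a warp read each other's registers directly, so a warp-wide reduction needs no shared-memory round trip. A pure-Python model of just the data movement (the function names are mine; 32 is Kepler's warp width):

```python
def shfl_down(lane_vals, delta):
    """Model of __shfl_down: lane i reads lane i+delta's register;
    lanes whose source would fall off the warp keep their own value."""
    n = len(lane_vals)
    return [lane_vals[i + delta] if i + delta < n else lane_vals[i]
            for i in range(n)]

def warp_reduce_sum(lane_vals):
    """Butterfly sum reduction across one warp: halve the stride each step;
    lane 0 ends up holding the total."""
    vals = list(lane_vals)
    offset = len(vals) // 2
    while offset:
        shifted = shfl_down(vals, offset)
        vals = [a + b for a, b in zip(vals, shifted)]
        offset //= 2
    return vals[0]

total = warp_reduce_sum(range(32))  # one value per lane
```

Five shuffle steps instead of log2(32) shared-memory store/sync/load rounds is the whole appeal.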

AMD has only had one major overhaul (the introduction of GCN), instead of two, but I'm not really spending more words on how much of a change it was compared to the previous VLIW architectures they had. It's a completely different beast, with the most important benefit being that its huge computing power can be harnessed much more straightforwardly. And if you ever had to hand-vectorize your code hunting for the pre-GCN sweet spot of workload per wavefront, you'd know what a PITA that was.

I would actually hope they stopped coming up with new archs, and spent some more time refining their software side. AMD has some of the worst drivers ever seen by a major hardware manufacturer (in fact, considering they've consistently had better, cheaper hardware, there isn't really any other explanation for their inability to gain dominance in the GPU market), but NVIDIA isn't exactly problem free: their support for OpenCL, for example, is ancient and crappy (obviously, since they'd rather have people use CUDA to do compute on their GPUs).

And hardware-wise, Intel is finally stepping up its game. With the HD 4000 they've finally managed to produce an IGP with decent performance (it even supports compute), although AMD's APUs are still top dog. On the HPC side, their Xeon Phi offerings are very interesting competitors to the NVIDIA Tesla cards (not the old arch, the brand name for the HPC-dedicated devices).

Comment Re:None use intel or amd for graphics? (Score 3, Informative) 187

Nvidia hardware isn't really clearly superior to AMD.. they rotate on who has the best hardware at various price points.

Actually, if you just look at the specifications, ATI/AMD has almost always had the (theoretically) most competitive hardware (GPU-wise), both in terms of performance/price ratio and often even in terms of raw computing power/memory bandwidth. AMD was even the first to come out with hardware support for compute on GPU (the first CTM/CAL betas came out before CUDA was ever mentioned anywhere), even if it required assembly programming of the shaders (which you could often do without by using a layer such as BrookGPU).

However, their GPUs have been crippled by the most horrible software ecosystem possible. By and large the main culprit is ATI/AMD itself, which has constantly failed at producing high-quality, stable drivers and capable compilers for their shaders. A secondary culprit (which has finally been removed from the equation) is the architecture itself: up until the introduction of GCN, AMD shaders had a VLIW architecture (VLIW5 first, VLIW4 in the last releases before GCN) which was often not easily exploitable without heavy-duty restructuring and vectorization of your shader code: so you often found yourself with huge horsepower available while only being able to exploit some 30-60% of it at best.
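The VLIW pain is easy to model: each cycle the compiler must fill a 5-slot bundle (VLIW5) with *independent* operations, so a dependent scalar chain leaves four of the five slots idle, while hand-vectorized code keeps them full. A toy greedy list scheduler, entirely my own sketch and nothing like AMD's actual shader compiler:

```python
def schedule_vliw(ops, width=5):
    """ops: list of (op_id, set_of_dep_ids) forming a DAG. Each cycle, pack
    up to `width` ops whose dependencies have all retired into one bundle."""
    done, bundles, remaining = set(), [], list(ops)
    while remaining:
        bundle = [op for op, deps in remaining if deps <= done][:width]
        if not bundle:
            raise ValueError("dependency cycle")
        remaining = [(op, deps) for op, deps in remaining if op not in bundle]
        done.update(bundle)
        bundles.append(bundle)
    return bundles

# A 10-op dependent chain issues one op per cycle: 10 bundles, 1/5 utilization.
chain = [(i, {i - 1} if i else set()) for i in range(10)]
# The same 10 ops, made independent by vectorization, pack into 2 full bundles.
indep = [(i, set()) for i in range(10)]
```

That 1/5-versus-5/5 slot occupancy is precisely the "30-60% of the horsepower at best" effect, and why GCN's scalar-friendly SIMD design was such a relief.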

Comment Re:Refactor? APU? (Score 1) 211

While OpenCL might not be necessary, there's no reason not to use it, since it means easy, cross-platform support for multicore programming and vector instructions, which would be useful on any modern system, even just on CPUs. (Of course, if the system also has an APU with the upcoming hUMA architecture, which can access the same memory space as the CPU, why not make use of that too?)

Of course the benefits will only be visible to people with huge spreadsheets. As for 1-2-3 being speedy on a 486: well, either you only had small spreadsheets or the mists of the past are obscuring your memory.
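The reason spreadsheets map so well onto this is that cells computed from the same row formula are independent of each other, i.e. embarrassingly parallel: one work-item per row. A rough CPU-side sketch of that shape (the names and the 4-worker split are my own; in real OpenCL the inner list comprehension would be the kernel body):

```python
from concurrent.futures import ThreadPoolExecutor

def recalc_column(rows, formula, workers=4):
    """Apply one row-formula to every row, split across parallel workers.
    Safe because no output cell depends on another output cell."""
    size = max(1, (len(rows) + workers - 1) // workers)
    chunks = [rows[i:i + size] for i in range(0, len(rows), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda chunk: [formula(r) for r in chunk], chunks)
    return [cell for part in parts for cell in part]

# e.g. a "price * quantity" column
rows = [(1.5, 10), (2.0, 3), (0.99, 7), (4.0, 2), (1.0, 100)]
subtotal = recalc_column(rows, lambda r: r[0] * r[1])
```

Which also explains the "huge spreadsheets only" caveat: for a few hundred cells the dispatch overhead swamps any parallel gain.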

Comment Re:Lesson learned (Score 3, Interesting) 53

Ok, serious question here: are there _technical_ reasons for hating GRASS? It does have a butt-ugly UI, but it's extremely flexible and extensible, and it's designed with a Unix-like philosophy in mind: a collection of tools that each do one thing but are well integrated with each other. I'm not saying it's perfect, but then again neither is ArcGIS.
