Comment Re:Poor Opera (Score 3, Interesting) 135

I'm an Opera user myself, and while I agree that one of the main reasons for this preference is the functionality of the whole package, I also liked the Opera rendering engine, and often found it to be more standards-compliant than other engines, even when it had less coverage. I'm a little afraid that the Blink switch will break some of the functionality I've been relying on (such as the ‘presentation mode’ in full-screen).

On the other hand, with the Blink/WebKit fork we are probably going to have three main engines again, and this is a good thing.

Comment Re:Might be important, but probably not... (Score 1) 176

OpenCL is suboptimal on NVIDIA only because NVIDIA refuses to keep their support up to date, since doing so would undercut their vendor lock-in attempt with CUDA.

I honestly think everybody doing serious manycore computing should use OpenCL. NVIDIA underperforms with that? Their problem. Ditch them.

Comment Re:Console margins can't be good (Score 1) 255

I absolutely agree that the software support AMD has for their cards is inferior to that of NVIDIA. And this definitely pisses me off, considering their hardware is _consistently_ better than the competitor's, in terms of raw performance _and_ in terms of performance/price. OTOH, I get the impression that their software support is slowly getting better. At the very least, I haven't had any significant issues recently (at least using Debian unstable with their packaged drivers).

Comment Re:Time to form the MWaSP (Score 4, Insightful) 64

Problem is, no browser follows the standards exactly, and as you point out with Office, every browser has bugs in it. So if you mark up your page following the standards alone, it won't render properly anywhere. You end up going back and rewriting some of the styling and scripting, either to avoid features that expose bugs or to use browser-specific kludges to work around them.

If all browsers use the same engine, at least we don't have to spend days testing pages with umpteen different browsers and working around umpteen bugs. And if one engine is used, wouldn't that become the de facto standard? The trick is that the engine must be open source (unlike MS Office), so that it's not controlled by a single commercial company and bugs can be fixed by anyone at the RC stage.

The problem is that, with that kind of attitude, rendering issues in browsers will never be fixed. Even if the rendering engine is crap, and the standard specifies a different (more sensible, more functional, whatever) behavior, with a single rendering engine used as the de facto standard, the engine would never get fixed. Unsurprisingly, whenever one reports a rendering bug, the first question that gets asked is: does it work in other engines? Luckily, we still have at least three major engines (the fourth, Presto, has only recently been abandoned), so we can still compare and see which engines are wrong in implementing that specific part of the standard, and which are not. Without this multitude of implementations, one of the primary motivations for fixing bugs disappears.

Monocultures are bad. Regardless of whether they're open-source or not.

Comment Re:Good idea. (Score 1) 314

One of the reasons I didn't use Opera was actually because Web developers never tended to create content with Opera's rendering engine in mind.

And that's actually the problem with Opera moving to WebKit. Developers shouldn't have any specific rendering engine in mind. They should have the W3C standards in mind. Having one less rendering engine (even if it's just a minority one) reduces the pressure on web developers to code according to standards. It also makes it much harder to spot bugs in rendering engines: how do you know whether a particular CSS+HTML combination works as the standard says it should? You check it against multiple engines. If one engine does things differently, then either it is non-compliant, or the other engines are. Having one less engine means having one less external check, and less motivation for web engine developers to write standards-compliant engines. We're falling back into a web monoculture, and the fact that it's not IE this time doesn't make it better.

Comment Re:It's a very sad thing to admit, but (Score 3, Interesting) 260

OpenCL is supported by all major vendors, and it can be used both on CPUs and on GPUs. However, Intel's support for OpenCL on GPUs is only available on Windows. Until the Gallium Compute framework is ready, we won't be seeing any open source OpenCL support anywhere. (Also, Intel GPUs support OpenCL only from the HD 4000 series onwards.)

Comment Re:If AMD Dies... (Score 1) 331

I think he's referring to the hyperthreading technology itself, for which you probably can't set the HT bit unless you actually support it. Still, even though Phenoms don't have HT, they _will_ perform closer to peak performance if you overcommit (in terms of threads). I've done some testing, and you need about 18 threads to truly saturate an X6, which is about the same number of threads that you need to saturate a dual Xeon (8 physical cores, 16 with HT).

I think you're a bit confused. This subthread is about Bulldozer and how its unusual design (where each pair of "cores" is not two truly independent cores, because they share a common floating-point unit, instruction cache, decoder, and a couple of other blocks) interacts with the Windows scheduler. Due to the superficial similarity to hyperthreading, some people maintain that if AMD had only been smart enough to make pairs of Bulldozer cores declare themselves to be one hyperthreaded core, it would have magically made Bulldozer much faster in Windows. This isn't really true, but fans looking for a reason to believe never notice they're simultaneously claiming AMD was smart enough to design a great CPU and dumb enough to accidentally sabotage it in a really trivial, easy-to-fix way.

You are indeed right, I totally missed that part. And I don't know anything about the Windows scheduler, so I have no idea whether the core pairs would perform better if they advertised themselves as single cores with HT.

Also, your test was probably somewhat bogus. You can easily saturate a Phenom II X6 with six threads. I'd guess you ran a program where individual threads cannot individually saturate one CPU core; that is, they frequently go to sleep or wait on each other a lot. That's the only way you can continue to get significant scaling beyond N threads (where N equals the number of hardware threads available). Not all programs behave that way, so it's not really useful to report that one particular program happens to "scale" all the way up to three threads per core. (And if it does behave that way, there's no reason to believe it would behave any differently on Intel CPUs.)

The thing is, it does behave differently on Intel CPUs. I tested on both a dual Xeon (2x4 cores, doubled by HT) and an i7 (4 cores, doubled by HT), and in both cases peak performance was achieved with a number of CPU threads matching (or very close to) the HT-advertised thread count (more specifically, 18 threads in the Xeon case and 10 threads in the i7 case).

Of course this is just one very specific application, and I'm sure that the effects I'm seeing are influenced by bottlenecks in other subsystems (memory throughput, most likely); still, I find the difference between Intel and AMD CPUs quite peculiar.
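For what it's worth, the kind of saturation test being argued about in this thread can be sketched in a few lines. This is only an illustration, not the original benchmark: the workload (SHA-256 over a 1 MiB buffer, chosen because CPython releases the GIL while hashing large inputs), the thread counts, and the task count are all made up for the example.

```python
# Time a fixed batch of GIL-releasing work at increasing thread counts;
# the thread count where the wall-clock time stops shrinking is roughly
# where the CPU saturates.
import hashlib
import time
from concurrent.futures import ThreadPoolExecutor

DATA = b"x" * (1 << 20)  # 1 MiB buffer; hashlib releases the GIL on it

def hash_task(_):
    # One unit of CPU-bound work.
    return hashlib.sha256(DATA).hexdigest()

def elapsed(n_threads, n_tasks=64):
    """Wall-clock seconds to run n_tasks hash jobs on n_threads threads."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = list(pool.map(hash_task, range(n_tasks)))
    assert len(results) == n_tasks
    return time.perf_counter() - start

# Sweep a few thread counts and record the timings.
timings = {n: elapsed(n) for n in (1, 2, 4, 8, 16)}
```

On the claim above, a Phenom II X6 would keep improving past 6 threads, while conventional wisdom says the curve should flatten at the hardware thread count; whether that holds depends heavily on the workload's blocking behavior, as the reply points out.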

Comment Re:They had an alternative - MeeGo (Score 1) 409

This is a quote from a January 26th, 2012 post by Tomi Ahonen:

“Luckily I didn’t have to do the math for this, the nice people at All About Symbian had tracked the numbers (read through the comments) and calculated the limits, finding N9 sales to be between the level of 1.5 million and 2.0 million units in Q4. Wow! Nokia specifically excluded all of its richest and biggest traditional markets where it tried to sell the Lumia, and these countries achieved – lets call it the average, 1.75 million unit sales of the N9 in Q4. So the one N9 outsold both Lumia handsets by almost exactly 3 to 1.” [1]

And the amazing thing is that the N9 sold so incredibly well despite not being marketed nearly as much as the Lumia. I still come across people looking to buy an N9, and having to get it from Switzerland because it's not sold in Italy.

Comment Re:AMD needs some high profile support (Score 4, Insightful) 252

Unfortunately nVidia cards are a bit better (they support PhysX, which AMD doesn't)

Unless you really need PhysX (which is a niche feature), my opinion is that AMD video cards are better. The 7770 and 7870 have excellent price/performance ratios and no major weaknesses. In particular, thermals and power consumption are better than on corresponding nVidia cards.

You're right about AMD's uncompetitiveness against Intel in the CPU market, though.

AMD video cards are significantly better than NVIDIA's when it comes to raw computation power, performance/watt, and performance/price, especially now that the 7xxx series has overcome the one weakness of the older series, the VLIW instruction set and architecture. Where AMD sucks big time is software support. NVIDIA has pushed CUDA immensely, to the point that people now think GPGPU = CUDA, and it has pushed just as hard in creating a software environment around CUDA, including tons of external libraries that depend on it. AMD lost a lot of ground with their CTM -> CAL -> OpenCL transitions, which effectively prevented their technology from gaining any significant traction, and they are only now starting to regain some visibility. Their APU offering is probably the last chance they have at a significant breakthrough. Let's hope they don't blow it.

Comment Re:Wow (Score 1, Informative) 223

Or use Windows or possibly Gnome...or do OpenCl or OpenGl programming...or-

The list goes on. The fact that people are still selling craptacular integrated video chipsets in this day and age saddens me greatly. Guys, it's 2012...pony up for a dedicated video card with dedicated video ram. Quit trying to save a buck or two on a component you really don't want to be cheap on.

Well, I think you can do OpenCL on Intel HD3xxx/4xxx chips these days.

AFAIK, Intel HD3xxx is not OpenCL capable, and Intel HD4xxx is officially supported by Intel on Windows only (no Linux drivers). This is in sharp contrast with AMD, which has much better OpenCL support for everything it ships (CPUs, GPUs and APUs).
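An easy way to check what a given driver stack actually exposes is to enumerate the OpenCL platforms and devices. A minimal sketch, assuming pyopencl is installed; without a working OpenCL ICD it simply returns an empty list rather than failing:

```python
def list_opencl_devices():
    """Return (platform, device, type) triples for every visible OpenCL
    device, or an empty list if no OpenCL runtime is installed."""
    try:
        import pyopencl as cl
        return [(p.name, d.name, cl.device_type.to_string(d.type))
                for p in cl.get_platforms()
                for d in p.get_devices()]
    except Exception:
        return []  # no ICD / no pyopencl: nothing to enumerate

for platform, device, dev_type in list_opencl_devices():
    print(f"{platform}: {device} ({dev_type})")
```

On a machine with only Windows-supported GPU drivers, the GPU device simply won't appear in this list under Linux, which is exactly the support gap described above.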

Comment Re:AMD's in deep trouble with Steamroller (Score 1, Insightful) 161

*Looks around* AAAAAAAnd, how does this AVX-256 compare to OpenCl transcoding of video?

That's a stupid question. OpenCL by itself does nothing whatsoever to improve video transcoding. OpenCL is an API, so the performance of an OpenCL kernel for video transcoding depends heavily on which hardware you're running it on. On Intel CPUs supporting AVX-256, OpenCL kernels will be compiled to use those instructions (if Intel keeps updating its OpenCL SDK); on GPUs and APUs they will use whatever the respective platforms provide. What OpenCL does is make it easier to exploit AVX-256, just as it makes it easier to exploit SSE and multiple cores.
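To illustrate the "OpenCL is an API" point: the same kernel source is handed to whatever runtime is installed, which compiles it for that device, to AVX/SSE on a CPU or to the GPU's own ISA. A minimal sketch assuming pyopencl; the kernel and harness are hypothetical, and the function quietly returns None when no OpenCL stack is present:

```python
# One kernel source string; each vendor's runtime JIT-compiles it for
# whatever device ends up in the context (CPU, GPU, or APU).
KERNEL_SRC = """
__kernel void scale(__global float *buf, const float factor) {
    int i = get_global_id(0);
    buf[i] = buf[i] * factor;
}
"""

def run_on_first_device():
    """Build and run the kernel on whichever OpenCL device is found."""
    try:
        import numpy as np
        import pyopencl as cl
        ctx = cl.create_some_context(interactive=False)
        queue = cl.CommandQueue(ctx)
        host = np.arange(8, dtype=np.float32)
        buf = cl.Buffer(ctx,
                        cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                        hostbuf=host)
        # build() is where the device-specific compilation happens.
        prog = cl.Program(ctx, KERNEL_SRC).build()
        prog.scale(queue, host.shape, None, buf, np.float32(2.0))
        out = np.empty_like(host)
        cl.enqueue_copy(queue, out, buf)
        return out
    except Exception:
        return None  # no OpenCL runtime/device available
```

The host code never mentions AVX or a GPU ISA; that choice is entirely the runtime's, which is why the original question about "AVX-256 vs OpenCL" compares an instruction set with an API.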
