Comment Has been Postponed (Score 2) 303

Well, the original meta post got heavily down-modded and has since been updated with:

Update: January 15, 2016 Thank you for your candidness, patience and feedback. We're going to delay the implementation for now - we'll be back soon to open some more discussions.

So it's not been taken off the table, but it probably won't happen anytime soon.

Comment Re:Why are they on Social Media??? (Score 2) 256

If someone else sets up a fake account in your name, it still shows up when your name is searched. You can disavow it all you want, but it's still going to put off potential employers. If you have a generic name then it might not matter, but if your name is unique, you will have a much more difficult time.

Comment Re:FPS per watt (Score 1) 110

As for CUDA - it is almost directly inferior to OpenCL. CUDA's prevalence is largely due to NVIDIA's attempts to jam it down every available throat.

Not even close. CUDA came out well before OpenCL (CUDA in June 2007, OpenCL 1.0 in August 2009), and it has stayed ahead in features, tooling, and stability ever since (yes, I have used both). I would really like AMD + OpenCL to be better than NVIDIA + CUDA, but I've been wishing for that for the last 6 years and it has yet to happen.

Comment Re:Dramatic speed increase? (Score 1) 37

I've never used it, but TBB flow graph does have a graphical tool as well (https://software.intel.com/en-us/articles/flow-graph-designer). It looks like a two-part profiler and designer; in their words, the design component "provides the ability to visually create Intel TBB flow graph diagrams and then generate C++ stubs as a starting point for further development."
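For a sense of scale, the graphs it generates stubs for are just nodes plus edges. Here's a minimal hand-written sketch against the documented tbb::flow API (untested; the node names and the trivial squaring work are my own invention, purely for illustration):

    #include <tbb/flow_graph.h>
    #include <iostream>

    int main() {
        tbb::flow::graph g;

        // Transform node; "unlimited" lets TBB run any number of
        // these bodies concurrently.
        tbb::flow::function_node<int, int> square(
            g, tbb::flow::unlimited, [](int v) { return v * v; });

        // Serial sink node that consumes results one at a time.
        tbb::flow::function_node<int, tbb::flow::continue_msg> print(
            g, tbb::flow::serial, [](int v) {
                std::cout << v << "\n";
                return tbb::flow::continue_msg();
            });

        tbb::flow::make_edge(square, print);

        for (int i = 1; i <= 5; ++i)
            square.try_put(i);

        g.wait_for_all(); // block until all in-flight messages have drained
        return 0;
    }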

That TensorFlow graph tool does look pretty nice, though.

Comment Re:Very sad - but let's get legislation in place N (Score 1) 706

Oh, sure, they might get a little bad PR, and the stock might slip a little. But that asshole executive who decided security was too costly? It's not his data being stolen, and it's not him who has to deal with it.

While I agree with the overall sentiment, in this specific case the hackers look to have grabbed the full source of all the parent company's websites, and the CEO's emails... which they recently released.

Comment Re:Incrementing (Score 1) 285

x = x++; It looks okay at first glance,

TBH, it screams "@fixme, sequence points" even at first glance.

Whenever I see 'sequence points', I want to get all pedantic and point out that the term itself is deprecated as of C++11, in favor of more precise terminology concerning sequencing and memory ordering (and we actually can be that precise now, because of the C++11 memory model). But then I refrain.
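To make the pedantry concrete, here's a minimal sketch (my own toy example, not from the parent):

    int main() {
        int x = 0;
        // x = x++;  // Undefined behavior through C++14: the increment and
                     // the assignment both modify x without being sequenced
                     // relative to each other (a "sequence point" violation
                     // in C/C++03 terms, an unsequenced conflict in C++11's
                     // vocabulary). C++17 finally sequences the right-hand
                     // side before the store, so there x would stay 0.
        x = x + 1;   // the unambiguous spelling of what was probably intended
        return x;    // exits with 1
    }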

Comment Re:"NVidia Hopes to Sell"... CUDA (Score 1) 35

That's true in theory; the problem is that OpenCL still feels a few years (or more...) behind CUDA. I have used both, and while OpenCL is undoubtedly the future, CUDA is still by far the better choice for GPGPU today.

The worrying thing is that I've been saying that for the past 5 years, and it hasn't shown any signs of changing. AMD's OpenCL implementation (everything from the drivers to the compiler) is a total crapshoot. With each release they fix one bug, but introduce a new one plus a regression. Completely innocuous changes in kernel code can make for dramatic swings in the compiled output. All too often AMD's OpenCL still feels like it's in its proof-of-concept phase, and NVIDIA (those bastards) haven't released anything OpenCL-related since OpenCL 1.1 (which was something like 5 years ago).

OpenCL implementations are still too uneven as well; I've been working with an embedded system lately that has coprocessors and advertises itself as OpenCL compatible. The problem is that the implementation is at best incomplete and at worst completely wrong, to the point of not being usable. Maybe in another 5 years OpenCL will be the better choice.
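The first thing worth doing on any board that claims OpenCL support is to dump what the runtime itself reports before trusting the marketing. A rough sketch using only core OpenCL C API calls (error checking omitted; the caps of 8 platforms/devices are arbitrary):

    #include <CL/cl.h>
    #include <cstdio>

    int main() {
        cl_uint nplat = 0;
        clGetPlatformIDs(0, nullptr, &nplat);   // query platform count
        cl_platform_id plats[8];
        if (nplat > 8) nplat = 8;
        clGetPlatformIDs(nplat, plats, nullptr);

        for (cl_uint p = 0; p < nplat; ++p) {
            char ver[256];
            clGetPlatformInfo(plats[p], CL_PLATFORM_VERSION,
                              sizeof ver, ver, nullptr);
            std::printf("platform %u: %s\n", p, ver);

            cl_uint ndev = 0;
            clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, 0, nullptr, &ndev);
            cl_device_id devs[8];
            if (ndev > 8) ndev = 8;
            clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_ALL, ndev, devs, nullptr);

            for (cl_uint d = 0; d < ndev; ++d) {
                char dver[256];
                clGetDeviceInfo(devs[d], CL_DEVICE_VERSION,
                                sizeof dver, dver, nullptr);
                std::printf("  device %u: %s\n", d, dver);
            }
        }
        return 0;
    }

The version strings only tell you what the implementation claims to support; whether the compiler actually honors it is, as above, another question entirely.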
