As for CUDA - it is almost directly inferior to OpenCL. CUDA's prevalence is largely due to NVIDIA's attempts to jam it down every available throat.
Not even close. CUDA came out well before OpenCL (CUDA in June 2007, OpenCL 1.0 in December 2008), and has remained ahead in features, tooling, and stability ever since (yes, I have used both). I would really like for AMD + OpenCL to be better than NVIDIA + CUDA, but I've been wishing for that for the last 6 years and it has yet to happen.
I've never used it, but TBB flow graph does have a graphical tool as well (https://software.intel.com/en-us/articles/flow-graph-designer). It looks like a two-part profiler and designer -- in their words, the design component "provides the ability to visually create Intel TBB flow graph diagrams and then generate C++ stubs as a starting point for further development."
That tensorflow graph tool does look pretty nice though.
There have been tools to do this in the past, but they have frequently been clumsy internal tools geared towards a specific set of algorithms.
Intel TBB's flow graph does a pretty good job of this.
Oh, sure, they might get a little bad PR, and the stock might slip a little. But that asshole executive who decided security was too costly? It's not his data being stolen, and it's not him who has to deal with it.
While I agree with the overall sentiment, in this specific case the hackers look to have grabbed the full source of all the parent companies' websites, and the CEO's emails... which they recently released.
x = x++; It looks okay at first glance, but the variable is modified twice with no intervening sequence point, so the behavior is undefined.
TBH, it screams "@fixme, sequence points" even at a first glance.
Whenever I see 'sequence points', I want to get all pedantic and point out that the term itself is deprecated as of C++11, in favor of using more precise terminology concerning memory ordering (and we actually can now, because of the C++11 memory model). But then I refrain.
That's true in theory; the problem is that OpenCL still feels a few years (or more...) behind CUDA. I have used both, and while OpenCL is undoubtedly the future, CUDA is still by far the better choice for GPGPU today.
The worrying thing is that I've been saying that for the past 5 years, and it hasn't shown any signs of changing. AMD's OpenCL implementation (everything from the drivers to the compiler) is a total crapshoot. With each release they fix one bug, but introduce one new one and one regression. Completely innocuous changes in kernel code can make for dramatic swings in the compiled output. All too often AMD's OpenCL still feels like it's in its proof-of-concept phase, and Nvidia (those bastards) haven't released anything OpenCL-related since OpenCL 1.1 (which was something like 5 years ago).
OpenCL implementations are still too uneven as well; I have been working with an embedded system lately that has coprocessors and advertises itself as OpenCL-compatible. The problem is that the implementation is at best incomplete and at worst completely wrong, to the point of not being usable. Maybe in another 5 years OpenCL will be the better choice.
These neat little theories are always so so convenient to explain why everyone else is inferior. Yet Pakistan elected a woman as prime minister: http://en.wikipedia.org/wiki/B.... Perhaps the world is more complicated than these little theories suggest?
... the fine print being that she, too, was murdered (in 2007), with Al-Qaeda claiming responsibility. Arguing that Pakistan doesn't have a problem with militant Islamist groups murdering women is a pretty tough sell.
Why would anyone want to put a function definition in a class declaration? As I recall, defining a function in the class declaration automatically makes it inline, but that can also be achieved by declaring the function inline.
I also recall that inline functions can considerably increase the size of the resulting executable, so having large inline functions is a bad idea. If you define all your functions in the class declaration you'd end up with a very large program.
Declaring a function inline does not guarantee the function will actually be inlined -- the compiler decides whether it'll be inlined or not, and generally only small functions will be inlined, so if you're using a compiler made within the last decade, large inline functions are not a problem. What inline DOES do is relax the one-definition rule, so the same definition may appear in multiple translation units -- but that's a different matter.
C++ might not end up being faster, but it certainly has no reason to be slower*. Well-written C++ and C should run at about the same speed. However, C++ has the advantage of allowing you to use high-level constructs when performance isn't as much of an issue.
* this is contingent on your compiler -- if you're using some compiler from a decade ago, some constructs (e.g. templates) may emit shockingly poor code
Off topic: But I really don't know why so many people use C++ for non-embedded. It's perfectly valid for many - maybe most - applications to trade efficiency for safety, so use a different language. Why pick one that accommodates all the power of C then constantly beat on the developers with a giant list of coding guidelines? When the greatest attribute you seek in a developer is pedantry then something's wrong.
C++ is great anywhere you need performance. Numerical computing, scientific computing, image processing, computer vision, machine learning, etc -- all of these benefit greatly from C++, as you can use it as a high-level language in the non-performance critical parts, but at the same time, be able to optimize effectively in the places where it matters.
Nothing will ever be attempted if all possible objections must be first overcome. -- Dr. Johnson