Comment Re:Why do we need Auto? (Score 1) 193

You are indeed correct. Polymorphic lambdas as defined in C++ apply only template polymorphism. That's a subset of the possible forms of polymorphism, and I shouldn't really have used the term given that it now has a definition in the standard. Java lambdas (or a C++ lambda wrapped in std::function) are a different situation: they are statically typed only to the point of the interface, so any use of the lambda has to rely on a more dynamic typing mechanism (virtual function calls, perhaps JIT inference), which is the situation I was alluding to.

Comment Re:Why do we need Auto? (Score 4, Informative) 193

Lambdas are a primary place where auto is used precisely because C++ stays strongly, statically typed as far as possible. The alternative would be polymorphic lambdas, which would require dynamic typing. With auto, the type you get (and can propagate through templates) is the type of that specific lambda. With polymorphism, the type you'd get is the type of a lambda, from which you'd need to infer which lambda. Auto ensures that, although a lambda's type is not easily known to the programmer, it can still be statically defined in the code and propagated accordingly.

Comment Re:The answer is called LLVM (Score 1) 69

Google supports LLVM in the NDK. RenderScript is more like OpenCL, in that the input language is restricted to make portability easier. Google also has the Portable Native Client definition, which aims at something more general along the lines you suggest, though admittedly that's for the desktop, not Android. The thing is that LLVM IR is not actually portable between 32-bit and 64-bit targets anyway, because C loses too much of that information in the early stages of compilation.

If you look at the SPIR spec (https://www.khronos.org/spir), an attempt to write a standardised version of an LLVM subset much as you suggest (but for the OpenCL C subset, which avoids some of the complexities), you'll see that there are separate 32-bit and 64-bit versions, and that it relies heavily on OpenCL defining the sizes and layout of types more strictly than pure C does. LLVM is not a panacea here, and a browse of past LLVM mailing lists will show that many of the developers are not keen on using it for portability, because that isn't really what the IR was designed for.

Comment Re:Cynicism (Score 1) 148

Even roaming charges in countries not covered by that scheme are better. I keep a phone on Three with a UK number even though I live in the US, partly because it lets me keep the number I've had for 15 years, and partly because it's simply cheaper to use in every country other than the US. At the moment it's even cheaper to use IN the US when calling the UK, as you point out.

Comment Re:Proper vectorization (Score 1) 109

Hopefully this will fall out naturally from the work they're doing on Sumatra/Graal. If they can generate independent streams of ALU work suited to GPU vector units, they should be able to generate AVX/SSE code too. There's no need to vectorise the entire application, which can be difficult given other aspects of the Java language; they can instead concentrate on the stream APIs and related features that guarantee iteration independence.

Comment Re:Huh? (Score 1) 128

When you do it that way you have no control over which computations are inaccurate. There's a lot more you can do if you have some input information from higher levels of the system.

You may be happy if your pixels occasionally come out wrong, but you certainly don't want the memory allocator that manages the data to do the same. The point of this kind of technology (which is becoming common in research at the moment; the MIT link here is a good use of the marketing department) is to control this in a more fine-grained fashion.

For example, you could mark the code in the memory allocator as accurate: it must not have errors, and so must enable any hardware error correction, perhaps run on a core that operates at a higher voltage, or add extra software error correction as necessary. At the same time you might allow the visualization code to degrade to reduce overall power consumption, because the visualization code is not mutating any important data structures. Anything it generates is transient and the errors will barely be noticed.

Comment Re:Still slower than AMD (Score 1) 160

GPU manufacturers have a tendency to use the word "core" to mean "one ALU in the middle of a vector unit". In principle that's not much different from calling an AVX unit 8 cores, so you have to be careful with comparisons.

If you look at the AMD architecture, each compute unit is not so different from a core on the CPU side, so it's much fairer to call the 7970 a 32-core chip. The way a work item in OpenCL, say, or a shader instance in OpenGL maps down to one of those lanes is as much an artifact of the toolchain as of the architecture.

Comment Re:It's a pointless question. (Score 1) 395

I don't know. On the same money, my taxes rose slightly when I moved from London to California, by the calculations I did at the time. VAT versus California sales tax might have made a significant difference on top of that, but probably not enough to worry about, given that the UK taxes had health insurance built into the number.

Comment Re:But actually living in London is a challenge (Score 1) 395

Make it 40 minutes and you can comfortably get a reasonable 2- or 3-bed for that. I much preferred my 40 minutes on the train doing that to my 20 minutes driving in California: the quiet reading time was wonderful. Both are preferable by miles to where I am now, even given property prices.
