
Comment: Re:stupidly weak (Score 2) 267

by Xrikcus (#49350049) Attached to: Generate Memorizable Passphrases That Even the NSA Can't Guess

Your first passphrase is 7 words and your second is 3, so clearly one is stronger than the other. "nom" is not in the diceware list, which helps a little, but it isn't so uncommon that it would be missing from a search dictionary. The numbers are in the diceware list.

You're comparing 7700^3 against 7700^7. Your more secure password isn't any better than chickensandwichwafflesworkcraigcrossafrica, and is probably quite a bit weaker, because chicken, delicious and nom clearly correlate heavily and nomnomnom is almost one word really. 7700^7 is 1604852326685300000000000000 according to my calculator. If I assume 72 characters (52 letters, 10 numbers, 10 special characters), then I need a 15-character random password to beat it in terms of search space. Maybe this: }&X$0ueUo~ravx&.

Further, if you put numbers between your words you are turning a search space of 7700 into 7710 or thereabouts. If you replace l with 1 and so on, you are surely turning 7700 into 7700 times the number of replacement options and combinations thereof. So mathematically, I would think that replacing e with 3 and a with @ would actually be a stronger encoding than what you suggest.

Comment: Re:Why do we need Auto? (Score 1) 193

by Xrikcus (#47712829) Attached to: C++14 Is Set In Stone

You are indeed correct. Polymorphic lambdas as defined in C++ only apply template polymorphism. That's a subset of the possible forms of polymorphism, but I shouldn't really have used the term given that it now has a definition in the standard. Java lambdas (or a C++ std::function wrapping a lambda) are a different situation: those are only statically typed up to the interface, so any use of the lambda has to rely on a more dynamic typing mechanism (virtual function calls, maybe JIT inference), which is the situation I was alluding to.

Comment: Re:Why do we need Auto? (Score 4, Informative) 193

by Xrikcus (#47704779) Attached to: C++14 Is Set In Stone

Lambdas are a primary place where auto matters, precisely because C++ is a strongly, statically typed language as far as possible. The alternative might be polymorphic lambdas, which would require dynamic typing. With auto, the type you get, and can propagate through templates, is the type of that specific lambda. With polymorphism, the type you'd get is the type of a lambda in general, from which you'd need to infer which lambda. Auto ensures that, although the type of a lambda is not easily known to the programmer, it can be statically defined in the code and propagated accordingly.

Comment: Re:The answer is called LLVM (Score 1) 69

by Xrikcus (#47376967) Attached to: ARM Launches Juno Reference Platform For 64-bit Android Developers

Google supports LLVM in the NDK. Renderscript is more like OpenCL in that it restricts the input to make portability easier. Google also has the Portable Native Client definition, which aims at something more general, as you are suggesting, though that's for the desktop, not Android, admittedly. The thing is that LLVM IR is not actually portable between 32-bit and 64-bit anyway, because C loses too much of that information in the early stages of compilation.

If you look at the SPIR spec (https://www.khronos.org/spir), which is an attempt to standardise an LLVM subset as you suggest, though only for the OpenCL C subset, which avoids some of the complexities, you'll see that there are separate 32-bit and 64-bit versions, and it relies heavily on the fact that OpenCL defines the sizes and layout of types more strictly than pure C does. LLVM is not a panacea here, and a browse of past LLVM mailing lists will tell you that many of the devs are not keen on using it for portability, because that isn't really what the IR was designed for.

Comment: Re:Cynicism (Score 1) 148

Even roaming charges in countries not covered by that scheme are better. I maintain a phone on the UK carrier Three even though I live in the US, partly because it's a way to keep the number I've had for 15 years, and partly because it is just cheaper to use in every country other than the US. At the moment it's even cheaper to use IN the US if calling the UK, as you point out.

Comment: Re:Proper vectorization (Score 1) 109

by Xrikcus (#45966961) Attached to: Oracle Seeking Community Feedback on Java 8 EE Plans

Hopefully this will fall out nicely from the work they're doing on Sumatra/Graal. If they can generate independent streams of ALU work that suit GPU vector units, they should be able to generate AVX/SSE code too. There's no need to vectorise the entire application, which can be difficult given other aspects of the Java language; they can instead concentrate on the stream APIs and related features that guarantee iteration independence.

Comment: Re:Huh? (Score 1) 128

by Xrikcus (#45327401) Attached to: New Framework For Programming Unreliable Chips

When you do it that way you have no control over which computations are inaccurate. There's a lot more you can do if you have some input information from higher levels of the system.

You may be happy for your pixels to come out wrong occasionally, but you certainly don't want the memory allocator that controls the data to do the same. The point of this kind of technology (which is becoming common in research at the moment; the MIT link here is a good use of the marketing department) is to be able to control this in a more fine-grained fashion.

For example, you could mark the code in the memory allocator as accurate: it must not have errors, and so must enable any hardware error correction, might use a core on the platform that operates at a higher voltage, or would add extra software error correction as necessary. At the same time you might allow the visualization code to degrade to reduce overall power consumption, because the visualization code is not mutating any important data structures; anything it generates is transient and the errors will barely be noticed.

Comment: Re:Still slower than AMD (Score 1) 160

by Xrikcus (#43804583) Attached to: NVIDIA GeForce GTX 780 Offers 2,304 Cores For $650

GPU manufacturers have a tendency to use the word "core" to mean "one ALU lane in the middle of a vector unit". In principle that's not very different from saying an AVX unit is 8 cores, though, so you have to be careful with comparisons.

If you look at the AMD architecture, each compute unit is not so different from a core on the CPU side, so it's much fairer to call the 7970 a 32-core chip. The way a work-item in OpenCL, say, or a shader instance in OpenGL maps down to one of those lanes is as much an artifact of the toolchain as of the architecture.
