And you think this is a bigger problem than letting someone walk off with your credit card to somewhere you can't see what they do with it?
Average traffic is there in the old version too. The traffic view information seems less clear in the new version and it still doesn't allow you to estimate the time for a given route based on the average traffic.
Although it does seem to have lost the little arrow that hinted which direction the camera would point in.
Hopefully this will fall out nicely from the work they're doing on Sumatra/Graal. If they can generate independent streams of ALU work that suit GPU vector units, they should be able to generate AVX/SSE code too. There's no need to vectorise the entire application, which can be difficult given other aspects of the Java language; just concentrate on the stream APIs and related features that guarantee iteration independence.
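As a rough sketch (this is just ordinary Java 8 streams, nothing Sumatra/Graal-specific), the kind of pipeline that hands the compiler that guarantee looks like this:

    import java.util.Arrays;
    import java.util.stream.IntStream;

    public class Saxpy {
        /* out[i] = a * x[i] + y[i]; each index is computed independently
           of every other index, which is exactly the property a
           vectorising JIT (or a GPU offload layer) needs in order to
           pack iterations into SIMD lanes or work items. */
        static double[] saxpy(double a, double[] x, double[] y) {
            return IntStream.range(0, x.length)
                            .mapToDouble(i -> a * x[i] + y[i])
                            .toArray();
        }

        public static void main(String[] args) {
            double[] x = {1, 2, 3, 4};
            double[] y = {4, 3, 2, 1};
            System.out.println(Arrays.toString(saxpy(2, x, y)));
        }
    }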
That is a quirk of American trademarks that you don't see so much in other countries. It has always seemed strange to see "x brand y" used everywhere in the US. Understandable for the reason you point out, but not natural to non-Americans.
When you do it that way you have no control over which computations are inaccurate. There's a lot more you can do if you have some information from higher levels of the system.
You may be happy for your pixels to come out wrong occasionally, but you certainly don't want the memory allocator that controls the data to do the same. The point of this kind of technology (which is becoming common in research at the moment; the MIT link here is a good use of the marketing department) is to control this in a more fine-grained fashion. For example, you could mark the code in the memory allocator as accurate: it must not have errors, so it must enable any hardware error correction, might run on a core that operates at a higher voltage, or might add extra software error correction as necessary. At the same time you might allow the visualization code to degrade to reduce overall power consumption, because it isn't mutating any important data structures: anything it generates is transient and the errors will barely be noticed.
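As a sketch of what that fine-grained control might look like (the ACCURATE/APPROX markers below are purely hypothetical, stand-ins for whatever annotation or type-qualifier mechanism a real system provides; here they compile to nothing):

    #include <stdlib.h>

    #define ACCURATE  /* must run exactly: ECC on, full voltage, etc. */
    #define APPROX    /* may degrade: lower voltage, relaxed refresh, ... */

    /* Allocator metadata must never be corrupted, so this path is exact. */
    ACCURATE void *pool_alloc(size_t n)
    {
        return malloc(n);   /* stands in for the real allocator */
    }

    /* Pixel shading only produces transient output: an occasional wrong
       value is barely visible, so it could be allowed to run on
       "relaxed" hardware to save power. */
    APPROX unsigned char shade_pixel(unsigned char in)
    {
        return (unsigned char)(in * 3 / 4);
    }

    int main(void)
    {
        unsigned char *buf = pool_alloc(16);
        for (int i = 0; i < 16; i++)
            buf[i] = shade_pixel((unsigned char)i);
        free(buf);
        return 0;
    }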
GPU manufacturers have a tendency to use the word "core" to mean "one ALU in the middle of a vector unit". It's not really very different in principle from saying that an AVX unit is 8 cores, though, so you have to be careful with comparisons.
If you look at the AMD architecture for each compute unit, it's not so different from the cores you see on the CPU side, so it's much more fair to call the 7970 a 32 core chip. The way that a work item in OpenCL, say, or a shader instance in OpenGL maps down to one of those lanes is as much an artifact of the toolchain as of the architecture.
Hmm. After two years in California and then a year in Texas I ended up spending quite a bit to move my family back to California. Absolutely no comparison, and well worth paying for.
Quite common, really. I got so used to drinking Sierra Nevada in London that when I drink it here in the US it makes me think of home.
I don't know. On the same money, my taxes went up slightly moving from London to California, by the calculations I did at the time. VAT versus California sales tax might have made a significant difference on top of that, but probably not enough to worry about given that the UK taxes had health insurance built into the number.
Make it 40 minutes and you can get a reasonable 2 or 3 bed for that comfortably. I much preferred my 40 minutes on the train doing that to my 20 minutes driving in California: the quiet reading time was wonderful. Both are preferable by miles to where I am now, even given property prices.
Even the addition of vector units basically breaks that model. We now need intrinsics or help from vectorising compilers to map to the way the hardware works: C isn't a great match at all.
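For example (just a sketch, assuming an AVX-capable x86 and a compiler that provides <immintrin.h>): the scalar loop may or may not get vectorised for you, while the intrinsics version spells the 8-wide operation out by hand.

    #include <immintrin.h>

    /* Plain C: whether this becomes SIMD code is up to the compiler. */
    void add_scalar(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = a[i] + b[i];
    }

    /* With AVX intrinsics we write the 8-wide operation explicitly
       (assumes n is a multiple of 8 to keep the sketch short). */
    void add_avx(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
        }
    }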
I tried to drop Comcast TV last year and they told me it was cheaper to have internet + basic cable than internet alone. Of course it was... until six months later, when the specially discounted package ran out and my bill jumped. You have to keep an eye on it.
Oh I don't know about that. I gave a talk only last week in which I described AMD's latest GPU (HD7970) as having 32 cores, and I ran that content past some of the chip's architects first. If you look at the design of the chip it quite clearly has 32 cores: 32 scalar cores, each with four 16-wide SIMD units hanging off the side. Multiply that out (32 x 4 x 16) and you get the 2048 "stream processors" of the marketing material.
Of course, how the marketing department at Apple defines cores is open to question, but 4 strikes me as a reasonable number for an embedded GPU. The ARM Mali designs are around that kind of core count.
Auto in K&R was a storage class, not a request for compiler type inference.
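A minimal illustration, in C89-flavoured C:

    #include <stdio.h>

    int main(void)
    {
        auto int i = 42;   /* "auto" is a storage-class specifier here:  */
        int j = 42;        /* identical to plain "int j", since automatic */
                           /* storage is already the default at block scope */
        printf("%d %d\n", i, j);
        return 0;
    }

C++11 later reused the keyword for type inference, which is the sense most people mean today.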