
Comment Re: Stupid (Score 1) 153

If it is just a bug, then we should expect a quick fix and firmware release from VW. If, however, it was a conspiracy, and there is no way that VW EGR technology can ever be made to pass the NOx requirements (without additional hardware - AdBlue tanks), then VW is screwed.

My point is that a "too good to be true" bug could easily have quite devastating consequences if it's just fixed. If they remove the 'false' in the putative "if (isInTest() || (isNOxReductionNeeded() && false)) enableEGR();" line and this increases fuel consumption or reduces maximum torque a lot, they cannot simply release that fix.
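To make the point concrete, here is a purely hypothetical sketch of the gating logic above. All function names are invented for illustration; this has nothing to do with VW's actual firmware.

```python
# Purely hypothetical sketch of a "too good to be true" EGR gate.
# All names are invented for illustration only.

def should_enable_egr(in_test_mode, nox_reduction_needed, buggy=True):
    if buggy:
        # The '&& false' short-circuits the real-world branch entirely:
        # EGR only ever engages under test conditions.
        return in_test_mode or (nox_reduction_needed and False)
    # The "fix" re-enables EGR on the road -- and with it the fuel
    # consumption and torque penalties that made the bug look so good.
    return in_test_mode or nox_reduction_needed
```

The trivial one-token fix changes real-world behavior completely, which is exactly why it cannot "simply" be released.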

Embedded automotive control systems and scientific research are quite different domains, but in science I've repeatedly been close to thinking I had solved a problem, just to realize that my benchmark was off and the code was not really working at all. I have not published any of those results (AFAIK) yet, but I've reviewed and seen publications with blatant errors. When you have reached the kind of result you hoped for and believed likely, you are not on guard anymore. Fixing the blatant error might very well mean that the whole work is pointless. The error is trivial, the consequences are not.

Comment Re:Correct Conclusion, Wrong Rationale (Score 1) 153

The sensors required to detect "test mode" and software-driven EGR control hardware are already part of any modern car, so there was no decision to "add" them to accomplish this cheat. But there had to be a strategic decision not to add SNCR, and that is a decision that could only be made at a very high level.

Yes, but that was not a conspiracy. It was very clear in the specs and even highlighted as an advantage. The question is "who knew and approved, at what point, that this design would not work out in practice". Even the idea that smart design of the control regime would make it possible to achieve low emissions without SNCR is not, in itself, equivalent to fraud. It's only when this design becomes all about "detecting test conditions" that things get really, really bad.

Comment Stupid (Score 3, Informative) 153

The linked article makes the point that the sensors and hardware would not be necessary. I think the writer seriously underestimates to what extent a modern car with protection systems has to juggle different constraints. Things like non-driven wheel rotation (defeated by being on a lab stand) are needed for braking systems and possibly, to some extent, to moderate throttle control for stability. Wheel movement patterns are also needed and useful, even if you don't actually have electric power steering.

Regulating the exhaust gas recirculation somehow also makes sense. You could just toggle it fully on and off, but you would certainly want to keep it at a sensible level. You want good acceleration and full combustion of fuel while still not emitting too many nitrogen oxides. It makes total sense to me that you might want to design your control system to try to judge not only the current emission levels, but also the overall driving pattern (steady straight ahead, repeated stop and go, etc) with some kind of state machine to try to find the best EGR regulation regime. This requires sensors and ways to regulate the feature.

My most innocent guess about how something such as this might have happened was an intent to find a good regime that would give nice bursty performance, while keeping nitrogen oxides low overall. Progressively, the control regime was pushed until it ended up in the corner where the case of EGR being properly activated under real-world conditions basically does not happen. Some parts of it might even in the end be a bug between the intended state transitions and the actual ones. Like all bugs that give performance that seems too good to be true on the metrics you really care about (fuel consumption and enjoyable driving), no one investigated it.
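A toy sketch of what such a driving-pattern state machine could look like. The states, thresholds, and regime table are entirely invented; the point is only that the "tuning" lives in a table that can quietly drift toward a corner where EGR is rarely fully active on the road.

```python
# Toy driving-pattern classifier choosing an EGR regime.
# States, thresholds, and regime levels are invented for illustration.

def classify_pattern(speed_samples):
    """Crudely classify a recent speed history (km/h) into a driving pattern."""
    if not speed_samples:
        return "idle"
    avg = sum(speed_samples) / len(speed_samples)
    spread = max(speed_samples) - min(speed_samples)
    if avg < 5:
        return "idle"
    if spread < 10:
        return "steady"          # cruising more or less straight ahead
    return "stop_and_go"

# Progressive "tuning" of this table is where a regime can end up
# effectively disabling EGR under real-world conditions.
EGR_REGIME = {
    "idle": 0.2,                 # fraction of maximum recirculation
    "steady": 0.6,
    "stop_and_go": 0.4,
}

def egr_level(speed_samples):
    return EGR_REGIME[classify_pattern(speed_samples)]
```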

Do I think it happened this way? It's hard to say. Probably not. But, in one way, it's even more frightening than an evil conspiracy. It's easy to say "I wouldn't take part in a conspiracy by my employer". It's harder to say "I would never be pressured into writing code with goals that could not be fulfilled, eventually find a hack that seemed to work, and maybe never investigate why it worked so well"...

Comment Re:If a high IQ were better for the individual (Score 1) 385

Unless it, say, causes higher energy usage or makes you slightly more prone to perish from an infection. The selection pressure for most of our evolutionary history might just be a tad different than it is today. It works the other way, too, of course. Other threads note the increased risk of getting depressed from all the "bad news you can't fix". A high intelligence might make it harder to just shrug that off, while you could more easily filter it out with lower intelligence. (Just like kids can hear some conversations and really don't take note of the full depth of what's being said.) This phenomenon might be worse today than it used to be.

Comment Re: Propheteering (Score 1) 131

What is this fusion "ore" you are talking about? Even if we restrict ourselves to deuterium or even tritium, the ocean reserves are plentiful even in the "multiple orders of magnitude" energy consumption case. Long-term, exponential growth will require space exploration, and I am all for it in the short term, but let's keep to the facts.

Comment Re:A victim of applications and history (Score 1) 129

This seems to come out of the peculiar Microsoft feature of being an administrator user but running without administrator privileges most of the time, elevating only when needed, plus a lot of work to make this escalation happen in a non-intrusive fashion or be faked depending on context. It's a really complicated beast that no other platform tries to do.

MS up to and including XP (excluding the DOS-based family) basically had the same model as everyone else: you either were an administrator or you weren't, with facilities like 'runas' to elevate to another user as needed. The problem was that tons of software from the DOS-based days failed to use the right sections of the registry and filesystem, forcing people to go through pains to run a lot of applications as administrator. This meant that most XP users just logged in as administrator.

To mitigate it, they embarked upon crafting this fairly complex thing to make running as an administrator user safer most of the time. It's funny because at the same time they started doing more and more to allow even poorly designed DOS-era software to run without administrator. They created union mounts to make an application think it can write to its application directory even when it cannot (and do sillier things like make 'system32' a different directory depending on whether a 32- or 64-bit application is looking). I do the atypical thing of running as a non-administrator user full time, with UAC prompts nagging me about passwords when needed, and nowadays it doesn't nag any more than sudo does on a modern Linux desktop. If I understand this behavior correctly, this usage model might be immune to this risk factor.

While impersonation and other techniques are used a lot more now, and cover larger portions of the API, impersonation itself has been around since NT 3.1. Are you a file server process serving a request from a client? Just create an impersonation context for the user who sent the request and pass that along to the file system. You only need to make sure that you create the right context and tell other services on whose behalf you are doing this. This is not identical to setuid and similar, most importantly because a single thread can keep many impersonation contexts.
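Conceptually (this is a language-neutral sketch, not the actual Win32 API), the key property is that impersonation is a per-thread, stackable context rather than a process-wide identity switch as with setuid:

```python
# Conceptual model of per-thread impersonation contexts (NOT the Win32 API).
# A single thread can enter and leave several client identities in turn,
# unlike a process-wide setuid-style switch.
import threading

_thread_identity = threading.local()

class Impersonate:
    """Context manager: run a block of work as a given client identity."""
    def __init__(self, client):
        self.client = client
    def __enter__(self):
        # Save whoever we were impersonating before (may be the server itself).
        self.previous = getattr(_thread_identity, "user", "SERVER")
        _thread_identity.user = self.client
        return self
    def __exit__(self, *exc):
        # Equivalent in spirit to reverting to self: restore the prior identity.
        _thread_identity.user = self.previous

def current_user():
    return getattr(_thread_identity, "user", "SERVER")
```

A file-server thread would wrap each request in `with Impersonate(request_user): ...`, so anything it touches is checked against the client's identity, not the server's.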

That this is part of the application compatibility cache service is almost coincidental, the real problem is in the fact that impersonation services are used, but used incorrectly. Impersonation was part of the original NT design, and for relatively good reason.

Comment Re:What about long-term data integrity? (Score 4, Informative) 438

you first need to copy that data into another block, erase the original one, write all data back and erase your "tmp" block. The churn on blocks happens a lot faster than what you'd think.

If that's the case, then why are they not copying the data to RAM contained on the drive itself? Seems like an awful waste of cycles with a relatively simple fix. Is it just a cost issue?

Any wear levelling worth its salt will not do what the grandparent wrote. You simply do not change one page in a block. If you write a single page, that is handled by mapping that page to another (free) block and maintaining a mapping table for which LBAs are currently stored in what blocks. However, if you are doing single-sector writes, or in turn repeated I/O flushes of the same sector, you still see a lot of write amplification. To keep data integrity, the mapping tables also need to be kept updated in a correct way (or at least uniquely recoverable by scanning through all blocks after a hard power off).
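A minimal sketch of page-level mapping and why repeated single-sector rewrites still amplify writes. The structure is invented for illustration; real FTLs also handle garbage collection, bad blocks, and crash-safe persistence of the mapping table itself.

```python
# Minimal flash translation layer sketch: page-level LBA mapping.
# Invented for illustration only.

PAGES_PER_BLOCK = 4

class TinyFTL:
    def __init__(self, num_blocks):
        self.free_pages = [(b, p) for b in range(num_blocks)
                           for p in range(PAGES_PER_BLOCK)]
        self.mapping = {}            # LBA -> (block, page)
        self.physical_writes = 0
        self.logical_writes = 0

    def write(self, lba, data):
        # Never rewrite a flash page in place: map the LBA to a fresh page.
        self.logical_writes += 1
        self.physical_writes += 1    # the data page itself
        self.mapping[lba] = self.free_pages.pop(0)
        # The mapping-table update must also reach flash eventually,
        # which is one source of write amplification.
        self.physical_writes += 1    # (simplified) mapping-table write

ftl = TinyFTL(num_blocks=8)
for _ in range(5):
    ftl.write(lba=0, data=b"x")      # repeated rewrite of one sector
amplification = ftl.physical_writes / ftl.logical_writes
```

Even in this oversimplified model, five logical writes of one sector cost ten physical page writes and burn through five fresh pages.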

Comment Re:Book Analogy (Score 1) 260

But, well, the difference is that Oracle has actively asked everyone else to quote references to their book. Google has produced a product that only respects those references that Oracle has encouraged everyone to use. If Oracle starts pursuing anyone *writing* Java code for copyright infringement ("hey, you called all methods of ArrayList, in the order they are declared"), that's a different thing.

Has Google copied Javadocs? Those texts are not necessary for technical interoperability. Thus, it would be a very different thing. Public symbols should be just that, public.

Comment Re:Colour me Suspicious (Score 1) 124

I find it a bit suspicious that Africa has handled many Ebola outbreaks before this just fine. Sure there were deaths, it's Ebola, but they handled it. Now we have 20+ medical companies with untested human trials of Ebola vaccines/cures all rushing in to "save" the day by testing the drug on humans without a controlled environment and with no legal liability. If a survivor's next fetus grows a third eye and has the IQ of a sweet potato in a drought, are they going to take responsibility? I doubt it. How about getting the right gear and help to the health workers instead of pumping them full of crap you could not test legally in your own country. Stop using Africa as a petri dish. Makes me wonder if they didn't start and help spread the epidemic in the first place. But then again maybe I have played/watched too much Resident Evil.

Proper help to the health workers would include vaccine. In an early phase, there won't be enough vaccine doses for the general population anyway, but currently health workers are at high risk and the number of doctors is dwindling. Not because they leave their posts, but because they die. If you have protective gear and practices that reduce the risk of infection by 99%, that's still a high-risk scenario if you are at a treatment center with multiple patient encounters a day. For any similar infection where there is some kind of vaccine or prophylaxis, health workers get it. The side effects could be quite dire and still be worth it, assuming they are going to do their job at all.

Comment Re:Why is it necessary to reverse engineer this? (Score 3, Insightful) 167

I think all first year computer science / programming / engineering students should be introduced to this and learn how to write programs for this environment first before moving on to modern systems. True power is being able to write useful stuff with only 64 KB of RAM and a 1 MHz processor, have it run in an acceptable time frame, and then take those skills and scale up to today's multi-core / multi-gigahertz / multi-gigabyte address spaces.

On such a machine, memory accesses are cheap compared to instruction latency, across your whole memory space. Memory locality basically doesn't matter. Branching is also almost free, since you are not pipelined at all. If you extrapolated a Z80 machine to multi-gigahertz and multi-gigabyte, you would get a much simpler low-level structure than you actually have. Some of the lessons learned regarding being frugal make sense, but you will also learn a lot of tricks that are either directly counter-productive, or at the very least steal your attention from what you should be concerned with, even in those very cases where you are really focusing on performance and power efficiency (in addition to the myriad of valid cases where developer productivity is simply more important than optimality).

It used to be that you couldn't pre-calculate everything since you didn't have enough storage. These days, you shouldn't pre-calculate everything since it will actually be cheaper to do it again rather than risk hitting main memory with a tremendous number of random accesses, or even worse, swap. Up to quite seriously sized data sets, a flat data structure with fully sequential accesses can turn out to be preferable to something like a binary tree with multi-level indirections. (Now, the insane number of references even for what should be a flat array in anything from Java to Python is one reason why even very good JITs fall short of well-written C, Fortran, or C++.)
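As a rough sketch of the flat-versus-indirected point, here is the same ordered lookup done two ways (no timings, since those depend entirely on the machine; this only shows the access patterns):

```python
# The flat sorted list is bisected over contiguous storage; the node tree
# chases a reference (pointer) at every level. On real hardware the flat
# version often wins for surprisingly large data sets, despite the
# identical O(log n) bound.
import bisect

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def tree_insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = tree_insert(root.left, key)
    else:
        root.right = tree_insert(root.right, key)
    return root

def tree_contains(root, key):
    while root is not None:          # one indirection per level
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

keys = [7, 3, 11, 5, 2, 13]
flat = sorted(keys)                  # contiguous, cache-friendly
root = None
for k in keys:
    root = tree_insert(root, k)

def flat_contains(key):
    i = bisect.bisect_left(flat, key)
    return i < len(flat) and flat[i] == key
```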

Comment Re:Throwing out all compatibility hooks makes it e (Score 1) 164

If you clear out the various multi-platform work for OpenSSL, _of course_ it can progress more quickly and more securely. The multi-platform work is where so much of the work has been done.

As a person making their living writing software for MacOS X and iOS, do I care about this code running on MacOS 9? I don't care one bit. They explain it very well: you don't need to be "multi-platform" if you are standard. Instead of "we have thirteen implementations of SSL_memcpy that run on a dozen completely outdated platforms that nobody cares about", they use memcpy and say "if your platform doesn't support a standard C function correctly, fuck you and your platform". Which is the correct approach.

A slightly more pragmatic approach is to keep those implementations, at least the most crucial ones, but then please make sure that you use memcpy etc. directly on any sane modern platform.

Comment Re:Throwing out all compatibility hooks makes it e (Score 1) 164

What is so difficult to understand, and why is everyone getting their knickers up in a bunch over it?

The OpenBSD project used to be pretty rude to a number of people (mostly you could understand why, but that doesn't justify it). While some of this is just ignorance, much of it is likely people wanting to get back at them, and people from the various security services trying to sow dissent.

Any other bitching just shows what an idiot you are (not saying you're bitching, just pointing that out to the general peanut gallery).

He's bitching, whether he realises it or not. He didn't point to a single instance of Alpha support slowing down other platforms. His analogy doesn't apply just because they provide portability. It applies if providing portability to old platforms such as the Alpha slows down the development of OpenBSD, which it probably occasionally does; on the other hand, it probably keeps people who are interested in developing on their old Alpha machines contributing, and so has a positive effect overall.

I haven't watched the full talk (yet), but previous outlines of what OpenSSL did reeked of a Javaesque approach: provide such a thick runtime library on top of the OS that you don't really need to care about what the platform gives you. That gives you a great portability story, once the layer is in place. The bad thing is that you do not benefit from the specifics of the platform. If you let your portability/base layer rot, you are also behind everyone else's game. What's happened during the last 5-10 years is a lot of work to make the C standard library (or slight variations of it), as well as base system calls, much more hardened, to some extent providing defense in depth. The LibreSSL critique has been based on the fact that OpenSSL went with their home-rolled, slightly inferior, slightly unpredictable (not handling NULL values in places, at least not in the same way any sane platform did, etc) layer for far too many things, even on modern platforms. As a provider of a platform with security in mind, I can understand the frustration of having a crucial library saying "hey, we don't care about that stuff, we can implement everything we need".
