Comment: Re:TDD FDD (Score 1) 203

by tlambert (#47929413) Attached to: Ask Slashdot: Have You Experienced Fear Driven Development?

Tests need to be fast and repeatable (among other characteristics). Tests must be of as high a quality as your production code. If you would fix "timing related" issues in your production code, there is no reason your tests should suffer from "timing related" issues either.

There's no reason they *should*, but they do unless you correct the test. The problem is in the test code, or in the wrapper that runs the test code. But consider an automated login test on an isolated network, with a credentials server that races the browser attempting the login to come up. If the login happens to start before the login server is up and stable, then your login fails, and so does your test case, even though there's no problem with the browser you are nominally testing.
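
One standard fix lives in the harness, not the test: gate the test on a readiness probe with a bounded deadline, so a slow credentials server shows up as an infrastructure failure rather than a browser bug. A minimal sketch; server_ready() is a hypothetical stand-in for a real probe such as a TCP connect:

    /* Sketch: wait for the service before starting the test, with a
     * bounded deadline, instead of letting the login race the server's
     * startup. server_ready() is a stand-in for a real probe. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <unistd.h>

    static bool server_ready(void) {
        static int polls;              /* pretend it's up on probe 3 */
        return ++polls >= 3;
    }

    static bool wait_for_server(int timeout_sec) {
        for (int waited = 0; waited < timeout_sec; waited++) {
            if (server_ready()) return true;
            sleep(1);                  /* back off and re-probe */
        }
        return false;                  /* deadline hit: harness failure */
    }

    int main(void) {
        if (!wait_for_server(30)) {
            fprintf(stderr, "infra failure: credentials server never came up\n");
            return 2;                  /* distinct from a test failure */
        }
        printf("server up; run the login test now\n");
        return 0;
    }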

This is/was a pretty common failure case with the ChromeOS build waterfall, because Chrome was considered an "upstream" product, and therefore changes in Chrome, when they occurred, could throw off the timing. There wasn't a specific, separate effort to ensure that the test environment was free from timing issues. And since you can't let any test run forever, if you intend to get a result you can act upon in an automated way, you get transient failures.

Transient test failures can (sort of) be addressed by repeating failed tests; by the time you attempt to reproduce, the cache is likely warmed up anyway, and the transient failure goes away. Problem solved. Sort of. But what if everyone starts taking that tack? Then you end up with 5 or 6 transient failures, and any one of them is enough to shoot you in the foot on any given retry.
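
A quick sketch of why "just retry" stops scaling once everyone takes that tack; the 98% per-test pass rate is an assumed number for illustration, not a measured one:

    /* With k independently flaky tests, each passing 98% of the time
     * (an assumed rate), the odds of an all-green run decay with k. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double p = 0.98;                 /* assumed per-test pass rate */
        for (int k = 1; k <= 6; k++) {   /* count of flaky tests */
            printf("%d flaky test(s): P(green run) = %.1f%%\n",
                   k, pow(p, k) * 100.0);
        }
        return 0;
    }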

Now add that these are reactive tests: they're intended to avoid the recurrence of a bug which has occurred previously, but is probabilistically unlikely to occur again; when do you retire one of these tests? Do you retire one of these tests?

Consider that you remove a feature, a login methodology, a special URL, or some other facility that used to be there; what do you do with the tests which used to test that code? If you remove them, then your data values are no longer directly comparable with historical data; if you don't remove them, then your test fails. What about the opposite case: what are the historical values, necessarily synthetic, for a new feature? What about for a new feature where the test is not quite correct, or where the test is correct, but the feature is not yet fully stable, or not yet implemented, but instead merely stubbed out?

You see, I think, the problem.

Meanwhile, your build sheriff or other person who's under fire to reopen the tree, rather than root-causing the problem, doesn't have time to actually determine a root cause. At that point, you're back to fear driven development, because for every half hour you keep the tree closed, you have 120 engineers unable to commit new code that's not related to fixing the build failure. Conservatively estimate their salary at $120K/year; their TCO for computers and everything else is probably $240K/year, which works out to roughly $14K of lost productivity for every hour the tree is closed. And once you open it back up, there's another half hour before the next build is ready, so even if you react immediately, you're costing the company something like $14K every time one of those bugs pops on you and you don't just say "screw it" and open the tree back up. Have that happen 3X a day on average, and that's over $40K of lost money per day; call it $11M a year in lost productivity.
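
Spelled out as a back-of-envelope calculation (every input here is the rough assumption from the paragraph above, not a measured figure):

    /* Rough cost model for a closed build tree; all inputs are the
     * back-of-envelope assumptions from the text. */
    #include <stdio.h>

    int main(void) {
        double engineers   = 120;
        double tco_year    = 240000.0;     /* $/engineer/year, ~2x salary */
        double hours_year  = 2080.0;       /* 52 weeks x 40 hours */
        double team_hourly = engineers * tco_year / hours_year;

        double closure_hrs = 0.5 + 0.5;    /* time closed + next build */
        double per_pop     = team_hourly * closure_hrs;
        double per_day     = per_pop * 3;  /* ~3 pops a day */
        double per_year    = per_day * 260; /* working days */

        printf("team rate : $%.0f/hour\n", team_hourly); /* ~$13,846 */
        printf("per pop   : $%.0f\n", per_pop);          /* ~$13,846 */
        printf("per day   : $%.0f\n", per_day);          /* ~$41,538 */
        printf("per year  : $%.1fM\n", per_year / 1e6);  /* ~$10.8M  */
        return 0;
    }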

This very quickly leads to a "We Fear Change" mentality for anyone making commits. At the very least, it leads to a "We Fear Large Change" mentality which won't stop forward progress, but will ensure that all forward progress is incremental and evolutionary. The problem then becomes that you never make anything revolutionary, because sometimes there's no drunkard's walk from where you are to the new, innovative place you want to get to (eventually). So you don't go there.

The whole "We Fear Large Change" mentality - the anti-innovation mentality - tends to creep into any place you have the Agile/SCRUM coding pattern, where you're trying to do large things in small steps, and it's just not possible to, for example, swap an API out from under everyone without committing changes to everyone else's code at the same time.

You can avoid the problem (somewhat) by adding the new API before taking the old API away. So you end up with things like "stat64" that returns a different structure from "stat", and then when you go and try to kill "stat" after you've changed everywhere to call "stat64" instead, with the new structure, you have to change the "stat" API to be the same as the "stat64" API, and then convert all the call sites back, one by one, until you can then get rid of the "stat64".
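
A compressed sketch of that dance in C; the my_* names are invented stand-ins, and the real stat/stat64 transition also involved ABI versioning and compatibility symbols that aren't shown here:

    /* Sketch of the add-then-migrate-then-remove API dance, with
     * invented my_* names; toy implementation only. */
    #include <stdio.h>

    /* Step 1: stat64 is added alongside stat, with the widened
     * structure, and every call site is migrated to the new name. */
    struct my_stat64 { unsigned long long st_ino; /* 64-bit inodes */ };

    int my_stat64_call(const char *path, struct my_stat64 *out) {
        (void)path;                          /* toy implementation */
        out->st_ino = 1234567890123ULL;      /* pretend 64-bit inode */
        return 0;
    }

    /* Step 2: the old name is re-pointed at the new structure, and
     * call sites are converted back, one by one, until the 64-suffixed
     * name has no callers left and can finally be deleted. */
    int my_stat_call(const char *path, struct my_stat64 *out) {
        return my_stat64_call(path, out);
    }

    int main(void) {
        struct my_stat64 sb;
        my_stat_call("/tmp/example", &sb);
        printf("inode: %llu\n", sb.st_ino);
        return 0;
    }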

That leads to things like Solaris, where the way you ensure binary compatibility is "give the hell up; you're never going to kill off the old stat, just live with carrying around two APIs, and pray people use the new one so you can kill off the old one in a decade or so". So you're back to another drunkard's walk of very slow progress, but at least you have the new API out of it.

And maybe someday the formal process around the "We Fear Change" mentality, otherwise known as "The Architectural Board" or "The Change Control Committee" or "Senior VP Bob", will let you finally kill off the old API, but by that point, frankly, you don't care, and the task of getting rid of it is just a bug in a bug database somewhere that someone has helpfully marked "NTBF", because you can close "Not To Be Fixed" bugs immediately, and hey, it gets the total number of P2 or P3 bugs down, and that looks good on the team stats.

Comment: Re:TDD FDD (Score 0) 203

by tlambert (#47925269) Attached to: Ask Slashdot: Have You Experienced Fear Driven Development?

Having some experience with both FDD and TDD, I can attest that a test driven culture, where automated testing is fully integrated into the dev process, pretty much addresses all three of your conditions.

The wrong kind of TDD leads to FDD of the type where you're afraid to break the build.

The problem with TDD that leads to this is that TDD is almost totally reactive: you find a bug, you write a test for the bug so you can tell when it's gone, you get rid of the bug, and now you have this test which is going to be run on each build, as if you were not already hyperaware, having both experienced and fixed the bug, of the conditions leading up to it. The annoying part, of course, is that each test you add makes it take longer and longer to get through the build to an acceptance of the build. Then, to make things even worse, add the occasional false failure because the test is flaky, but it's someone's baby and it "usually works" and the failure is "timing related", and now you're testing the test, and rejecting a perfectly good build, because you're unwilling either to rip out the test completely, or to make it non-fatal and assign the bug it raises back to the person who wrote the original test.

TDD with test cases written up front, and not added to without an associated specification change: Good.

TDD with test cases written to cover historical bugs identified through ad hoc testing: Project Cancer.

The second worst thing you can possibly do is write tests for no good reason because you're able to write tests, but unable to contribute to the core code, and you still want to contribute somehow. The worst thing is being the code reviewer and letting that type of mess into your source tree because you want the person submitting the tests to not feel bad about them not getting accepted.

Comment: Re:well (Score 2) 194

by tlambert (#47918183) Attached to: WSJ Reports Boeing To Beat SpaceX For Manned Taxi To ISS

Or just the better alternative. It is hard to seriously argue that Boeing is so far behind Elon Musk that anything space related should be given to the latter.

Given that Boeing will already be 3 years late to the party when SpaceX has manned capability up and running this coming January? We're supposed to wait another couple of years for manned launch capability, when the Russians have already said they won't be hauling our asses into orbit any more? I don't think "Time To Market" is a difficult argument.

Comment: One thing Swift will address... (Score 2) 178

by tlambert (#47916599) Attached to: Why Apple Should Open-Source Swift -- But Won't

One thing Swift will address... There are currently 3 memory management models in use in Objective-C, and under some of those models an object's retain count isn't automatically incremented for you (for example, this is the case for a number of collection objects when doing an insertion).

Swift has the opportunity to rationalize this, which is not something you could do with the Objective-C libraries themselves, since doing so would change historical APIs and thus break old code.
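
A toy illustration of the hazard in C, with invented names: one container retains on insertion, another just stores the pointer, and the caller has to know which model applies - exactly the kind of inconsistency a single ownership model can erase:

    /* Toy refcount model: some containers retain on insertion, some
     * don't, and the caller must know which. All names are invented. */
    #include <stdio.h>
    #include <stdlib.h>

    struct obj { int refs; };

    static void retain(struct obj *o)  { o->refs++; }
    static void release(struct obj *o) {
        if (--o->refs == 0) { printf("freed\n"); free(o); }
    }

    /* Model A: the container retains what you insert. */
    static void insert_retaining(struct obj **slot, struct obj *o) {
        retain(o); *slot = o;
    }

    /* Model B: the container just stores the pointer; caller keeps owning. */
    static void insert_weak(struct obj **slot, struct obj *o) {
        *slot = o;
    }

    int main(void) {
        struct obj *o = malloc(sizeof *o);
        o->refs = 1;
        struct obj *slot_a = NULL, *slot_b = NULL;

        insert_retaining(&slot_a, o);  /* refs = 2 */
        insert_weak(&slot_b, o);       /* refs still 2: slot_b is a trap */

        release(o);                    /* caller done: refs = 1 */
        release(slot_a);               /* refs = 0 -> freed; slot_b now dangles */
        return 0;
    }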

It wasn't really until Metrowerks became a casualty of the Intel switchover, and the 64 bit work had to drop certain types of support from Finder due to 64 bit inode numbers, that anything changed. The UNIX Conformance work Ed Moy and I did had basically broken Metrowerks' local private copies of their header files; I happily would have made them new header files so that they would have continued to work, but since Motorola sold off the Intel version of the Metrowerks C compiler the same week Apple announced the Intel transition, it was pretty much DOA at that point.

So it basically took an Act Of God to get some people to get the hell off some of the old APIs we had been dooming and glooming about for half a decade.

Swift is another opportunity for that type of intentional non-exposure obsolescence to clean up the crappy parts of the APIs and language bindings that haven't been cleaned up previously due to people hanging onto them with their cold, dead hands. Hopefully, they will avail themselves of this opportunity.

Comment: Re:Google should win this if they went to court... (Score 1) 290

by tlambert (#47892481) Attached to: German Court: Google Must Stop Ignoring Customer E-mails

Translation:

2. information for quick access

Section 5, paragraph 1, no. 2 TMG says literally:

"Information to enable a fast electronic contact and direct communication with them, including electronic mail address."

You can hardly be more clear than that. And if Google answers:

Google will not respond to or even read your message

it definitely breaks the law, since that does not even allow one-sided communication.

The problem here is that the law *requires* an email address. It was never really thought out for large companies with billions of customers, and the law is effectively a bad law as a result, but it is still in fact the law.

I can imagine that the response is going to be something like an IVR system, where you are emailed back something which requires you to provide more context ("or you can click here"), and which repeats the process, narrowing down the context each time ("or you can go here"), until it drills down to the automated system bucketing you into the appropriate web form you should have used in the first place instead of sending them an email, or your problem is answered, or you give up and go away.
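
A hypothetical sketch of that drill-down as a decision tree; every category and URL here is invented for illustration:

    /* Hypothetical IVR-style email auto-responder: each round trip
     * narrows the context until the sender lands on the web form they
     * should have used. Categories and URLs are invented. */
    #include <stdio.h>

    struct node {
        const char *prompt;    /* question mailed back, NULL at a leaf */
        const char *form_url;  /* leaf: the form to redirect them to */
        struct node *a, *b;    /* the two narrowing choices */
    };

    static struct node search = { NULL, "https://example.invalid/search-form", NULL, NULL };
    static struct node legal  = { NULL, "https://example.invalid/legal-form",  NULL, NULL };
    static struct node root   = {
        "Reply (a) for search results, (b) for legal requests.",
        NULL, &search, &legal
    };

    int main(void) {
        struct node *n = &root;
        printf("auto-reply: %s\n", n->prompt);
        n = n->a;                        /* sender answered "a" */
        printf("auto-reply: please use %s\n", n->form_url);
        return 0;
    }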

Unless there's also a law against IVR in Germany?

Guaranteed that most of the emails to that address are SPAM and/or people bitching about seeing things in the search results they don't want to see, or not seeing things in the search results that they expected to; a human would be telling them, very politely, that nothing will be done about their complaint and/or that nobody is interested in pretending to be the heir to the fortune on deposit in the Bank of Lagos by the wife of the late oil minister ("now deceased, God Bless").

Comment: The fiction of net metering... (Score 5, Insightful) 444

by tlambert (#47887215) Attached to: If Tesla Can Run Its Gigafactory On 100% Renewables, Why Can't Others?

The fiction of net metering is the idea that you will be paid the same amount for the electricity you generate as for the electricity you consume. You won't be.

One of the purposes of "Smart Meters" is to permit differential pricing on electricity produced vs. consumed; it's not just to provide a temporal demand market. There are already tariffs in place in California where PG&E only has to buy as much electricity as you consume for a net 0 energy usage, rather than being required to purchase everything you generate over what you consume.

The idea of a large grid only works if someone pays to maintain that grid, and that pricing comes in as a differential.
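
A toy model of how such a tariff plus differential pricing plays out; the rates and the net-zero cap rule here are illustrative assumptions, not PG&E's actual tariff numbers:

    /* Toy model of a net-zero-capped tariff with differential pricing.
     * All rates and the cap rule are illustrative assumptions. */
    #include <stdio.h>

    int main(void) {
        double consumed_kwh  = 500.0;   /* what the house drew from the grid */
        double generated_kwh = 650.0;   /* what the panels pushed back */
        double retail_rate   = 0.30;    /* $/kWh you pay to consume */
        double credit_rate   = 0.10;    /* $/kWh you're credited to generate */

        /* Utility only has to credit generation up to what you consumed. */
        double credited_kwh = generated_kwh < consumed_kwh
                            ? generated_kwh : consumed_kwh;

        double bill = consumed_kwh * retail_rate - credited_kwh * credit_rate;
        printf("bill: $%.2f (150 kWh of surplus earned nothing,\n"
               "and the rate differential pays for the grid)\n", bill);
        return 0;
    }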

Everyone can't do what Tesla is doing, because not everyone is going to have the storage capacity to make it economical; Tesla can just rotate the batteries it manufactures through service to the manufacturing plant itself, as part of "burn in testing", so that it gets local off-grid storage as a side effect of the manufacturing process itself.

I suppose that "every rechargeable battery manufacturer can do what Tesla does" would be a fair statement, but that's a tiny subset of "everyone".

Comment: Re:Great, they've invented "MedBook"... (Score 1) 198

Almost everything everyone complains about regarding Facebook is related to its choice of NoSQL as an underlying implementation technology:

- You don't get to see all of your friends' posts
- Not everyone who follows you is guaranteed to see all of your posts
- Avoiding the computational overhead of ACID guarantees means those guarantees are only available ... if you pay for the extra work (i.e. step back to ACID)
- Posts show up out of order
- A comment on an old post by someone brings the whole thing back as if it's a new post

It follows that the other things people complain about Facebook over are sure to follow into the NHS implementation, if they take that lead to its logical conclusion - meaning advertising replacing desirable content in the medical record.
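
A toy model of the underlying mechanism: with asynchronous replication and no global ordering guarantee, a reader on a lagging replica sees posts out of order and misses the newest one entirely:

    /* Toy eventually-consistent feed: writes replicate asynchronously,
     * so a reader on a lagging replica misses the newest post and sees
     * the others out of timestamp order. */
    #include <stdio.h>

    struct post { int timestamp; const char *text; };

    int main(void) {
        /* Replica received posts in arrival order, and replication of
         * the newest post ({3, "third"}) hasn't happened yet. */
        struct post replica[] = {
            {2, "second"}, {1, "first"}
        };

        printf("reader on replica sees:\n");
        for (int i = 0; i < 2; i++)
            printf("  t=%d %s\n", replica[i].timestamp, replica[i].text);
        /* => "second" before "first", and "third" missing entirely. */
        return 0;
    }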

Comment: Great, they've invented "MedBook"... (Score 1) 198

by tlambert (#47869175) Attached to: UK's National Health Service Moves To NoSQL Running On an Open-Source Stack

Great, they've invented "MedBook"... where what you see when you look at it is a fraction of the available data at any one time, because some of it hasn't "arrived" at the node you are viewing it from yet.

What do I have to do so that my drug allergies and blood type are "sponsored postings" so that when my doctor looks at them, he doesn't kill me due to all of the auto-play video advertisements for Cialis being there instead of the information I want to be there?

Comment: Re:I'm not understanding "missing DNA"... (Score 1) 108

by tlambert (#47805131) Attached to: The Passenger Pigeon: A Century of Extinction

Museum specimens were commonly preserved with formaldehyde, which damages DNA.

The technique in question would use the DNA from a *lot* of cells. Even if all of them were damaged, they would not be damaged in precisely the same way, which is why the technique works: it's a statistical technique. Given 1500 full specimens, each with a different set of damage, they should, on average, be able to recover the full genome for the species, since that's a viable number of individuals to propagate the species.

So again, unless something knocked out a specific chromosome in *all* the cells of *all* the specimens, I'm not seeing the problem that's being solved by inserting non-species DNA into the genome: a complete species genome is going to be present in the majority of the samples anyway, and weeding out the damage is a computational bioinformatics task, not a "Well, it's not in this one cell; we're screwed" task.
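
A toy sketch of the statistical idea: call each position by majority vote across samples, so independent damage gets outvoted (toy data, not a real bioinformatics pipeline):

    /* Consensus calling over damaged reads: per position, take the
     * majority base across samples; independent damage is outvoted.
     * Toy data with '.' marking damage. */
    #include <stdio.h>
    #include <string.h>

    #define READS 5
    #define LEN   8

    int main(void) {
        /* Five damaged copies of the same ACGTACGT sequence. */
        const char *reads[READS] = {
            "ACGTAC.T", "AC.TACGT", "ACGT.CGT", ".CGTACG.", "ACG.ACGT"
        };
        char consensus[LEN + 1];

        for (int pos = 0; pos < LEN; pos++) {
            const char *bases = "ACGT";
            int count[4] = {0};
            for (int r = 0; r < READS; r++) {
                const char *hit = strchr(bases, reads[r][pos]);
                if (hit) count[hit - bases]++;   /* skip damaged '.' */
            }
            int best = 0;
            for (int b = 1; b < 4; b++)
                if (count[b] > count[best]) best = b;
            consensus[pos] = bases[best];
        }
        consensus[LEN] = '\0';
        printf("consensus: %s\n", consensus);    /* => ACGTACGT */
        return 0;
    }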

Comment: I think most are missing the politics. (Score 3, Interesting) 127

by tlambert (#47795999) Attached to: Microsoft Shutting Down MSN Messenger After 15 Years of Service

I think most are missing the politics.

This is surprising, coming as it does on the heels of Microsoft's refusal to comply with the U.S. Federal court order to hand over overseas held emails.

So I will spell out some of the political consequences here.

The service closure forces a switch on the remaining people who were using non-Microsoft MSN clients, and who were thus avoiding Guangming, which operates the Chinese version of Skype - a version which has been modified "to support Internet regulations", which is to say The Great Firewall of China. If these users want comparable service, the only comparable option now available to them is Tencent's QQ messaging software, which was designed from the start "to support Internet regulations". So there are no longer any "too big to shoot in the head" options which do NOT "support Internet regulations".

So really the only people who care about this will be Chinese dissidents who want to communicate with each other using an encrypted channel through a server inaccessible to the Chinese government, and any journalists seeking an encrypted channel whereby they can move information out of China without having to have a government approved satellite uplink handy, or a willingness to smuggle out data storage some other way.

Comment: Hardkernel wasn't using Broadcom SoC anyway? (Score 1) 165

by tlambert (#47795915) Attached to: Update: Raspberry Pi-Compatible Development Board Cancelled

Hardkernel wasn't using Broadcom SoC anyway?

The linked article makes it pretty clear they were basing it on Samsung Exynos SoCs - who *cares* whether or not Broadcom would source them parts, if they weren't even using Broadcom in their design?!? This is like using a MOS Technology 6502 in a design, and then claiming that Intel wouldn't sell you 8008s ... what the hell?
