
Comment Re:More importantly (Score 1) 393

Sure, the regenerative braking probably reduces the wear on the brakes.

Point being, brake pads and rotors are normal replacement items. You should expect to replace them more than once in 12 years on a normal vehicle. I can wear down a set of pads in a weekend at the track. It depends a lot on how you drive.

I will agree that on the Tesla I test drove, I barely touched the brake pedal. The regen was turned up to maximum and that does a good job of slowing the car down if you are paying attention.

BMWs also tend to have static negative rear camber, and are RWD like the Tesla. But the wheels are smaller in diameter, which means the tires are more affordable.

I think that over 12 years you will spend a similar amount or more on Tesla Model S brake and tire components as compared to an average BMW. I look forward to hearing from Model S owners 11 years from now...

Comment Re:You mean... (Score 1) 243

That'd be a much better rant if Netflix actually, y'know, HAD a dedicated Tivo-like box to store things in. They don't.

I can't figure out if you totally missed my point, or totally got it.

Me: "This dog smells awful!" You: "Your rant would be better, if the dog weren't covered in shit."

Comment Re:More importantly (Score 3, Informative) 393

Heck. At 12 years on a BMW, there are any number of wearable parts whose replacement may exceed the car's value (tires, brakes (you have to replace the rotors with the pads on a BMW), etc).

Not unless the car has been damaged.

BMWs have very high resale value. 12 year old BMWs are currently 2002 models. Very few model year 2002 BMWs can be found for under $5000 in _any_ condition.

In fact, if you do a quick search on autotrader.com for model year 2002 BMWs, you'll see that there are 1200 listings with an average asking price of $9700.

I happen to be quite familiar with the running costs of old BMWs. The drive train of a BMW will easily last 12 years without substantial work. The exceptions would be the plastic cooling system components, and, on some models, premature VANOS failure. Sadly, on the newer N54 engines the HPFP is a disaster, but that is not the majority of used BMWs, and certainly not MY2002 cars.

Even paying dealer prices, replacing the brakes, suspension rubber, tires, cooling system, etc., will not cost you $9000.

The brake rotors and pads are a few hundred dollars per corner, and you could replace them yourself in your own garage with a jack and hand tools.

FWIW, I really like Tesla. I look forward to a time when buying one of their cars makes sense for me.

However, your estimate of the repair costs of a 12 year old BMW is way off. Thus, my response.

Also, brakes and tires are functionally identical between a BMW and a Tesla, and on the Model S the Tesla replacement parts are probably more expensive (I haven't priced them to be certain), because the Tesla has very large low-profile tires and very large brakes, especially compared to the "average" BMW (as opposed to their X5 trucks with big wheels, or their high-performance M models with larger brakes).

So comparing a 12 year old BMW and a 12 year old Tesla, the wear and maintenance parts differences are the Tesla's battery vs. the BMW's conventional drivetrain. The latter requires coolant flushes, oil changes, transmission fluid changes, air filters, etc.

The one maintenance surprise that I learned about when chatting with a Tesla service technician was that on the model S, the A/C refrigerant is serviced regularly, because it is an integral component of the battery cooling system.

Comment What is really happening here? (Score 1) 981

We are in a War on Faith, because Faith justifies anything and ISIS takes it to extremes. But in the end they are just a bigger version of Christian-dominated school boards that mess with the teaching of Evolution, or Mormon sponsors of anti-gay-marriage measures, or my Hebrew school teacher, an adult who slapped me as a 12-year-old for some unremembered offense against his faith.

Comment Re:Anti-math and anti-science ... (Score 1) 981

Hm. The covenant of Noah is about two paragraphs before this part (King James Version) which is used for various justifications of slavery and discrimination against all sorts of people because they are said to bear the Curse of Ham. If folks wanted to use the Bible to justify anything ISIS says is justified by God's words in the Koran, they could easily do so.

18 And the sons of Noah, that went forth of the ark, were Shem, and Ham, and Japheth: and Ham is the father of Canaan.
19 These are the three sons of Noah: and of them was the whole earth overspread.
20 And Noah began to be an husbandman, and he planted a vineyard:
21 And he drank of the wine, and was drunken; and he was uncovered within his tent.
22 And Ham, the father of Canaan, saw the nakedness of his father, and told his two brethren without.
23 And Shem and Japheth took a garment, and laid it upon both their shoulders, and went backward, and covered the nakedness of their father; and their faces were backward, and they saw not their father's nakedness.
24 And Noah awoke from his wine, and knew what his younger son had done unto him.
25 And he said, Cursed be Canaan; a servant of servants shall he be unto his brethren.
26 And he said, Blessed be the Lord God of Shem; and Canaan shall be his servant.
27 God shall enlarge Japheth, and he shall dwell in the tents of Shem; and Canaan shall be his servant.

Comment Re:TDD FDD (Score 1) 232

Tests need to be fast and repeatable (among other characteristics). Tests must be of as high quality as your production code. If you would fix "timing related" issues in your production code, there is no reason your tests should suffer from "timing related" issues either.

There's no reason they *should*, but they do unless you correct the test. The problem is in the test code, or in the wrapper that runs the test code. But consider an automated login test on an isolated network, with a credentials server that races to come up against the browser that's attempting the login in the test case. If the login happens to start before the credentials server comes up and stabilizes, then your login fails, and so does your test case, even though it's not a problem with the browser you are nominally testing.
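To make that concrete, here's a minimal sketch of the usual band-aid: poll the dependency until it actually answers before running the real test. The endpoint, timeout, and names are all hypothetical; this is nothing to do with the actual test harness.

```python
# Sketch of guarding against the race described above: don't start the login
# test until the credentials server actually responds. Everything here is
# illustrative, not any real infrastructure.
import time
import urllib.error
import urllib.request

def wait_until_up(url: str, timeout: float = 30.0, interval: float = 0.5) -> bool:
    """Poll `url` until it responds, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=2)
            return True
        except (urllib.error.URLError, OSError):
            time.sleep(interval)
    return False

def test_login():
    # Without this guard, the test races the credentials server and fails for
    # reasons that have nothing to do with the browser under test.
    assert wait_until_up("http://creds.test.local/healthz"), \
        "credentials server never came up"
    # ... drive the browser through the login flow here ...
```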

This is/was a pretty common failure case with the ChromeOS build waterfall, because Chrome was considered an "upstream" product, and therefore changes in Chrome, when they occurred, could throw off the timing. There wasn't a specific, separate effort to ensure that the test environment was free from timing issues. And since you can't let any test run forever, if you intend to get a result that you can act upon in an automated way, you get transient failures.

Transient test failures can (sort of) be addressed by repeating failed tests; by the time you attempt to reproduce, the cache is likely warmed up anyway, and the transient failure goes away. Problem solved. Sort of. But what if everyone starts taking that tack? Then you end up with 5 or 6 transient failures, and any one of them is enough to shoot you in the foot on any given retry.
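In its simplest form, the retry band-aid is just a wrapper like this (a sketch only; real test runners have their own flake-retry plugins):

```python
# Naive "retry the flaky test" wrapper. Each retried test hides its own
# transient failure, which is exactly how you end up with five or six of
# them conspiring against any given run.
import functools

def retry_flaky(attempts: int = 3):
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(attempts):
                try:
                    return test_fn(*args, **kwargs)
                except Exception as exc:
                    last_error = exc  # transient? a warmed cache may save the retry
            raise last_error
        return wrapper
    return decorator

@retry_flaky(attempts=3)
def test_sometimes_slow_login():
    ...  # the test body that occasionally loses the timing race
```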

Now add that these are reactive tests: they're intended to avoid the recurrence of a bug which has occurred previously, but is probabilistically unlikely to occur again; when do you retire one of these tests? Do you retire one of these tests?

Consider that you remove a feature, a login methodology, a special URL, or some other facility that used to be there; what do you do with the tests which used to test that code? If you remove them, then your data values are no longer directly comparable with historical data; if you don't remove them, then your test fails. What about the opposite case: what are the historical values, necessarily synthetic, for a new feature? What about for a new feature where the test is not quite correct, or where the test is correct, but the feature is not yet fully stable, or not yet implemented, but instead merely stubbed out?

You see, I think, the problem.

And in practice, your build sheriff or whoever else is under fire to reopen the tree doesn't have time to actually determine a root cause; the pressure is to reopen, not to root-cause the problem. At that point, you're back to fear driven development, because for every half hour you keep the tree closed, you have 120 engineers unable to commit new code that's not related to fixing the build failure. Conservatively estimate their salary at $120K/year; their TCO for computers and everything else is probably $240K/year, and for every half hour you don't open the tree back up, that's ~$14K of lost productivity. Then once you open it up, there's another half hour for the next build to be ready, so even if you react immediately, you're costing the company at least $25K every time one of those bugs pops on you and you don't just say "screw it" and open the tree back up. Have that happen 3X a day on average, and that's $75K of lost money per day, so let's call it $19.5M a year in lost productivity.
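As a rough back-of-the-envelope (purely illustrative; the productive-hours figure is my assumption, the rest are the numbers above):

```python
# Back-of-the-envelope reproduction of the cost arithmetic above. The
# productive-hours-per-year figure is an assumption; only the TCO, the
# per-incident estimate, and the 3-closures-a-day rate come from the comment.
ENGINEERS = 120
TCO_PER_ENGINEER = 240_000           # $/year, salary plus overhead
PRODUCTIVE_HOURS_PER_YEAR = 1_000    # assumed; well under the 2,080 paid hours

per_half_hour = ENGINEERS * TCO_PER_ENGINEER / PRODUCTIVE_HOURS_PER_YEAR / 2
per_incident = 25_000                # conservative figure: the closed half hour
                                     # plus the half hour waiting for the next build
per_day = 3 * per_incident           # three closures per day
per_year = per_day * 260             # ~260 work days

print(f"~${per_half_hour:,.0f} per half hour of closed tree")
print(f"${per_day:,.0f} per day, ${per_year:,.0f} per year")
```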

This very quickly leads to a "We Fear Change" mentality for anyone making commits. At the very least, it leads to a "We Fear Large Change" mentality, which won't stop forward progress, but will ensure that all forward progress is incremental and evolutionary. The problem then becomes that you never make anything revolutionary, because sometimes there's no drunkard's walk from where you are to the new, innovative place you want to get to (eventually). So you don't go there.

The whole "We Fear Large Change" mentality - the anti-innovation mentality - tends to creep in any place you have the Agile/SCRUM coding pattern, where you're trying to do large things in small steps, and it's just not possible to, for example, change an API out from everyone, without committing changes to everyone else at the same time.

You can avoid the problem (somewhat) by adding the new API before taking the old API away. So you end up with things like "stat64" that returns a different structure from "stat", and then when you go and try to kill "stat" after you've changed everywhere to call "stat64" instead, with the new structure, you have to change the "stat" API to be the same as the "stat64" API, and then convert all the call sites back, one by one, until you can then get rid of the "stat64".
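The same dance works in any language; here's a minimal sketch of the pattern in Python with made-up names (the real-world case is the C stat/stat64 pair mentioned above):

```python
# Hypothetical sketch of "add the new API alongside the old one, migrate the
# callers, then fold it back". All names are invented for illustration.
import os
import warnings
from dataclasses import dataclass

@dataclass
class FileInfo:            # the old, narrow structure
    size: int

@dataclass
class FileInfo64:          # the new structure with the extra field callers need
    size: int
    inode: int

def file_info(path: str) -> FileInfo:
    """Old API: kept around, deprecated, while call sites migrate."""
    warnings.warn("file_info() is deprecated; use file_info64()", DeprecationWarning)
    st = os.stat(path)
    return FileInfo(size=st.st_size)

def file_info64(path: str) -> FileInfo64:
    """New API: once everyone calls this, the old one can be changed to match
    it and the call sites converted back, one by one."""
    st = os.stat(path)
    return FileInfo64(size=st.st_size, inode=st.st_ino)
```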

That leads to things like Solaris, where the way you ensure binary compatibility is "give the hell up; you're never going to kill off the old stat, just live with carrying around two APIs, and pray people use the new one so you can kill off the old one in a decade or so". So you're back to another drunkard's walk of very slow progress, but at least you have the new API out of it.

And maybe someday the formal process around the "We Fear Change" mentality, otherwise known as "The Architectural Board" or "The Change Control Committee" or "Senior VP Bob" will let you finally kill off the old API, but you know, at that point, frankly you don't care, and the threat to get rid of it is just a bug in a bug database somewhere that someone has helpfully marked "NTBF" because you can close "Not To Be Fixed" bugs immediately, and hey, it gets the total number of P2 or P3 bugs down, and that looks good on the team stats.

Comment Re:TDD FDD (Score 0) 232

Having some experience with both FDD and TDD, I can attest that a test-driven culture, where automated testing is fully integrated into the dev process, pretty much addresses all three of your conditions.

The wrong kind of TDD leads to FDD of the type where you're afraid to break the build.

The problem with TDD that leads to this is that TDD is almost totally reactive; that is, you find a bug, you write a test for the bug so you can tell when it's gone, you get rid of the bug, and now you have this test which is going to be run on each build, as if you were not already hyperaware, having both experienced and fixed the bug, of the conditions leading up to the bug. The annoying part, of course, is that it takes longer and longer to get from a build to an accepted build for each test you add. Then, to make things even worse, add to that the occasional false failure because the test is flaky, but it's someone's baby and it "usually works" and the failure is "timing related", and now you're testing the test, and rejecting a perfectly good build because you're unwilling to either rip out the test completely, or make it non-fatal and assign the bug on it back to the person who wrote the original test.
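For concreteness, the reactive pattern boils down to something like this (the function and the bug are hypothetical):

```python
# Reactive TDD in miniature: a regression test pinned to one historical bug.
# Both the function and the bug are made up; the point is that this test now
# runs on every build forever, long after everyone has forgotten the bug.
def parse_port(value: str) -> int:
    """Parse a TCP port, rejecting out-of-range values (the old bug accepted 0)."""
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def test_port_zero_rejected():
    # Regression test for the (hypothetical) bug where "0" was accepted.
    try:
        parse_port("0")
    except ValueError:
        return
    raise AssertionError("parse_port('0') should have raised ValueError")
```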

TDD with test cases written up front, and not added to without an associated specification change: Good.

TDD with test cases written to cover historical bugs identified through ad hoc testing: Project Cancer.

The second worst thing you can possibly do is write tests for no good reason because you're able to write tests, but unable to contribute to the core code, and you still want to contribute somehow. The worst thing is being the code reviewer and letting that type of mess into your source tree because you want the person submitting the tests to not feel bad about them not getting accepted.

Comment Hmmm. (Score 0) 72

If Kip Thorne can win a year's worth of Playboys for his bet that Cygnus X1 was a Black Hole, when current theory from Professor Hawking says Black Holes don't really exist, then can Professor Thorne please give me a year's subscription to the porno of my choice due to the non-existent bet that this wasn't such a star?

Comment Re:You mean... (Score 5, Insightful) 243

I think the idea is that you pay the ISP for a "Netflix booster", and then your Netflix traffic gets un-humped into the fast lane.

Is it just me, or does anyone else see the foolishness in one of the highest-volume uses of the Internet also being one of the highest priority? It is totally ridiculous that people think of the huge transfers of pre-produced video as anything other than the dead-last, lowest-priority, cheapest-per-byte traffic there is.

The only things that should be "fast laned" (low latency) are VoIP, videoconferencing, interactive terminals, etc: most of which is either low-bandwidth or else niche. If "high priority" is what many people's connections are doing several hours per day, then our very sense of "priorities" is fucked up.

I can't say I'm a fan of the ISPs that Netflix is fighting with, but at the same time: Fuck Netflix. Netflix is a case study in how to do video technologically wrong, and it seems like they're just totally ignoring common sense. Why shouldn't doing things like a luddite be relatively expensive? (Really, having storage in your box is still considered prohibitively expensive? It sure wasn't expensive in 2000 with the Tivo Series 1. Have things gotten worse since then?!?) If the pampered princess insists that her cake be delivered from the kitchen a bite at a time, and the commoner just puts a whole slice on his plate and takes a bite at the table whenever he wants it, we expect the princess' servants to be rolling their eyes when she's not looking, embezzling, etc.

When we have broken up the monopolies and our streets have conduits under them containing a dozen competing fibers, we can re-evaluate the tech from our position of abundance. Maybe video streaming won't be on-the-face-of-it-stupid, then. But that's the future, not today.

Comment Re:well (Score 2) 200

Or just the better alternative. It is hard to seriously argue that Boeing is so far behind Elon Musk that anything space-related should be given to the latter.

Given that Boeing will already be 3 years late to the party when SpaceX has manned capability up and running this coming January? We're supposed to wait another couple of years for manned launch capability, when the Russians have already said they won't be hauling our asses into orbit any more? I don't think "Time To Market" is a difficult argument.

Comment One thing Swift will address... (Score 3, Informative) 183

One thing Swift will address... There are currently 3 memory management models in use in Objective-C, and for some of those models, you don't get a retain count automatically (for example, this is the case for a number of collection objects when doing an insertion).

Swift has the opportunity to rationalize this, which is not something you could do with the Objective-C libraries themselves, since doing so would change historical APIs and thus break old code.

It wasn't really until Metrowerks basically became incompatible with the Intel switchover, and the 64-bit support had to drop certain types of support from Finder due to 64-bit inode numbers, that anything moved. The UNIX conformance work Ed Moy and I did basically broke their local private copies of their header files; I happily would have made them new header files so that they would have continued to work, but since Motorola sold off the Intel version of the Metrowerks C compiler the week Apple announced Intel, it was pretty much DOA at that point.

So it basically took an Act Of God to get some people to get the hell off some of the old APIs we had been dooming and glooming about for half a decade.

Swift is another opportunity for that type of intentional non-exposure obsolescence, to clean up the crappy parts of the APIs and language bindings that haven't been cleaned up previously, due to people hanging onto them with their cold, dead hands. Hopefully, they will take advantage of this opportunity.
