
Comment: Re: user error (Score 1) 708

by greg1104 (#47461865) Attached to: People Who Claim To Worry About Climate Change Don't Cut Energy Use

The important part is not the physics; fundamentally this is a statistics problem across some population. "heavier cars are safer than lighter cars in equal-mass collisions"...right, but that also means the heavier your car, the fewer cars you'll encounter on the road that are heavier than you are. The person in a 90th-percentile-weight vehicle drives in a world where they are on the better side of a head-on collision 90% of the time. And because of that, you can't transplant cars from a population with a vastly different weight distribution and expect the same safety results for them.
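That percentile point is almost tautological once you write it down. Here's a minimal sketch in Python; the fleet weight distribution is entirely made up (real fleets aren't normal), it's just there to show that your weight percentile is exactly how often you're the heavier car in a random encounter:

```python
import random

# Hypothetical fleet: 100k cars, weights in pounds drawn from an
# invented normal distribution (illustrative numbers only).
random.seed(42)
fleet = sorted(random.gauss(3500, 600) for _ in range(100_000))

def heavier_share(my_weight, fleet):
    """Fraction of random two-car encounters where my car outweighs the other."""
    return sum(1 for w in fleet if w < my_weight) / len(fleet)

p90 = fleet[int(0.9 * len(fleet))]  # a 90th-percentile-weight vehicle
print(round(heavier_share(p90, fleet), 2))  # -> 0.9
```

Transplant that same car into a fleet centered 500 pounds heavier and `heavier_share` drops sharply, which is the whole point about moving a car between populations.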

The Autobahn does put speed limits on larger vehicles like buses and trucks, to try and limit the worst of the high mass + high velocity combinations possible. That's far easier to do than something like parallel roadways.

It's also worth noting that the specific chunk of US highway I referenced (I-95) runs mostly through states with roughly the same car fatality rate as Germany. There's a handy chart comparing Autobahn safety that breaks the US numbers down per state. The best US entries on that list overlap heavily with the busy parts of I-95: Delaware, Maryland, Maine, Massachusetts, New York, Virginia, New Jersey, New Hampshire--all states where I-95 is the primary north/south motorway. Those also happen to be some of the richest states in the country, meaning people there buy higher quality cars too--which may also be the case for typical Autobahn traffic. A lot of things correlate with highway safety in some way.

Comment: Re: user error (Score 4, Insightful) 708

by greg1104 (#47456467) Attached to: People Who Claim To Worry About Climate Change Don't Cut Energy Use

Lighter cars are typically safer than heavier cars (as is indicated by your link).

I screwed up with that source and deserved the moderation down, but this isn't true either. Heavier cars are safer for the person driving them. The direction US cars have gone is based on things like this 1997 weight study, where the conclusion was that passenger cars would be better with an extra 100 pounds.

However, having a fleet of heavy cars around is more dangerous for the average person, which is what the EU statistics show, and that study points it out too. At the same time as concluding cars would be better heavier, it also shows that making light trucks lighter would be good. The important point in their words, and I'll bold it because it's the most important thing here: "When trucks are reduced in weight and size, they become less crashworthy for their own occupants, but they become less capable of damaging other vehicles."

If everyone has a light car, the average accident isn't as bad as two heavy cars colliding; that's Europe right now. On American roads the average car is heavier, but you're also in a heavy car. Worse overall, but it's not as bad if you are in one of the heavy cars! The really bad case is driving a light car and hitting a heavy one, which is what I was describing with the EU car on I-95 example. The end result is a sort of arms race in American car design: everyone has a personal incentive to drive something heavier for their own safety, but everyone would be safer if, collectively, we didn't do that.
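The arms-race structure can be shown with a toy payoff model. The harm formula here is invented purely to capture the incentive shape (harm to you grows with the other car's mass, shrinks with your own), not real crash physics:

```python
# Invented injury score: worse when the other car is heavy, better when
# yours is. Any formula with that shape produces the same dilemma.
def harm_to_me(my_mass, other_mass):
    return other_mass ** 2 / my_mass

light, heavy = 1.0, 2.0

everyone_light = harm_to_me(light, light)  # EU-style fleet: 1.0
everyone_heavy = harm_to_me(heavy, heavy)  # US-style fleet: 2.0
defector       = harm_to_me(heavy, light)  # first to go heavy: 0.5
victim         = harm_to_me(light, heavy)  # light car, heavy fleet: 4.0

assert everyone_light < everyone_heavy  # all-light roads are safer overall...
assert defector < everyone_light        # ...yet going heavy always helps *you*
assert victim > everyone_heavy          # worst spot: the light car on I-95
```

That's a textbook prisoner's dilemma: the individually rational move (go heavy) makes the collective outcome worse.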

Another reason the busy American highways are dangerous is all of the trucking used to move things around. My personal distaste for being in a light car here in the US comes from watching a few car -> tractor-trailer accidents back when I used to drive quite a lot here. Whenever I'm in something like a London taxi, worrying about a collision with a truck in that tiny vehicle makes me crazy. I have to remind myself that the road isn't filled with those big trucks though, and overall that's an improvement.

Comment: Re:If anyone actually cared... (Score 1) 708

by greg1104 (#47455823) Attached to: People Who Claim To Worry About Climate Change Don't Cut Energy Use

There is only one solution to the problem of how to get devices that last longer: you make them have longer warranties, so manufacturers have an incentive to make cost/longevity trade-offs on the lifetime side. That will drive up prices on everything. People would need to think of cost in terms of $/year assuming the lifetime is at least the warranty, to get a price metric that drops when quality improves.
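The $/year metric is just price amortized over the guaranteed lifetime. A minimal sketch, with hypothetical prices:

```python
def cost_per_year(price, warranty_years):
    """Price metric that drops when quality (guaranteed lifetime) improves."""
    return price / warranty_years

cheap   = cost_per_year(400, 2)   # $400 appliance, 2-year warranty
durable = cost_per_year(900, 10)  # $900 appliance, 10-year warranty
print(cheap, durable)  # 200.0 90.0 -- the pricier unit is cheaper per year
```

The sticker price more than doubles, but the per-year cost is less than half; that's the reframing buyers would need to make long warranties pay off for manufacturers.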

Your run at finding easier answers has two major issues. First, you're assuming that manufacturers know, in advance, which parts will wear out fast and which won't. The way things fail in the field is unpredictable. The last thing I bothered to repair was a TV that failed due to the Capacitor plague. Quoth Wikipedia: "these capacitors should have a life expectancy of about 18 years of continuous operation; a failure after 1.5 to 2 years is very premature".

The idea that this could have been prevented by buying higher quality parts is not well founded. They already bought capacitors that were overbuilt by at least a 6X factor over their warranty period. But shit happens. You cannot overbuild to where shit doesn't happen. That's the road to the crazy town that's given us things like super-expensive "mil-spec" parts. And assemblies of things made from that quality level of part still fail early anyway; see "shit happens", again. Also, device failures are dictated by the first failing component. There's no sense overbuilding plastic parts into metal if the lifetime is normally dictated by a motor.
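The "first failing component dictates lifetime" point is easy to see in a simulation. This sketch models each part's life as an exponential with a made-up mean (the part list and numbers are hypothetical):

```python
import random

random.seed(1)

def device_lifetime(mean_lives):
    """A device dies at its first component failure: the minimum lifetime."""
    return min(random.expovariate(1 / m) for m in mean_lives)

# Hypothetical appliance: frame (30 yr), motor (12 yr), capacitors (18 yr).
trials = [device_lifetime([30, 12, 18]) for _ in range(100_000)]
avg = sum(trials) / len(trials)

# Theory: mean of the min is 1 / (1/30 + 1/12 + 1/18) ~= 5.8 years --
# well below even the weakest part's 12-year mean.
print(round(avg, 1))
```

Upgrading the frame from 30 to 300 years barely moves that average, which is exactly the "no sense overbuilding plastic into metal" argument.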

Second major flaw: designing for maintenance and repair is way more expensive than you give it credit for, and it's not clear it's even productive. Splitting a design into usefully modular components makes things more expensive, and while repairs are easier the failure rate goes up in the process. The way you've connected the modules becomes a whole new failure mode. Take a washing machine that was reliable as a single mechanism, split it into easy to repair modules, and the new type of failure you'll see in the field are modules that vibrate out of their module interface over time. There's a reason we've moved toward giant monolithic designs: they're simply more reliable than modular ones, on top of being cheaper to build and design too. People don't really like less reliable but easier to repair, and in a high labor cost world that's a correct preference.

Comment: Re: user error (Score 1, Troll) 708

by greg1104 (#47455675) Attached to: People Who Claim To Worry About Climate Change Don't Cut Energy Use

He suggested 32 MPG is good for a 10-year-old car that's built to the safety standards in America. US cars from 2009 are a lot better too.

And the main reason European cars get better mileage is that they're smaller and lighter. We drive serious distances here in the US, and if our cars were as light as European ones, our fatal crash statistics would suffer enormously. I would not want to drive the style of car that gets better mileage in the EU--smaller and lighter--into an accident on a big American road like I-95.

Visit List of countries by traffic-related death rate and sort by "Road fatalities per 100 000 motor vehicles" if you want some hard numbers on it. The top entries on that sort are Malta, Norway, Iceland, Sweden, Denmark, Chile, Spain, Switzerland, UK, Finland, Ireland, Germany, and the Netherlands. Notice a pattern? That's the trade-off when everyone drives around in tiny cars. The EU econobox is a deathtrap by American standards.

Comment: Re:Sargon II on Commodore 64 (Score 2) 128

by greg1104 (#47449705) Attached to: How To Fix The Shortage of K-5 Scholastic Chess Facilitators

Sounds about right. I played enough tournament games to estimate I was about a 1450 player at my best, and playing Sargon II on the Apple was a pretty evenly matched game. The key to beating early chess programs like that, and this is still useful against any small-memory chess opponent, is to play something weird. You need to get the computer out of its opening book as soon as possible, without making an overtly bad move. Moving a pawn a single square forward where most players would take advantage of being able to move it two can be enough to break you out of a small book. You could easily tell when Sargon went "off book" because the time it spent thinking about moves went up dramatically, especially on its highest difficulty setting.

I learned some ideas like this from David Levy's excellent 1983 book Computer Gamesmanship. With Sargon, I recall I would play somewhere around 5 moves from the standard opening library before inserting one aimed at going off-book. The first few moves in a chess game tend to be very similar because they work. You don't want to yield control of the middle of the board in favor of breaking out of the book on your first move; that's counterproductive.
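The off-book "tell" comes from a simple structural fact: book positions are table lookups, everything else is a search. A toy sketch (the book entries and stand-in engine are invented, not Sargon's internals):

```python
import time

# Invented miniature opening book, keyed by the move sequence so far.
OPENING_BOOK = {
    "": "e2e4",
    "e2e4 e7e5": "g1f3",
    # no entry for odd one-square pawn pushes -> forces a search
}

def search(moves_so_far):
    time.sleep(0.01)  # stand-in for minutes of 6502-era calculation
    return "d2d4"

def pick_move(moves_so_far):
    if moves_so_far in OPENING_BOOK:
        return OPENING_BOOK[moves_so_far], "book"    # instant reply
    return search(moves_so_far), "search"            # noticeably slower

print(pick_move("e2e4 e7e5"))  # ('g1f3', 'book')
print(pick_move("e2e4 c7c6"))  # ('d2d4', 'search')
```

On 1980s hardware the timing difference between those two branches was minutes, not milliseconds, which is why going off-book was so easy to detect.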

Comment: Re:Not to detract from our roots... (Score 1) 128

by greg1104 (#47449407) Attached to: How To Fix The Shortage of K-5 Scholastic Chess Facilitators

There are two main types of chess games. In one, someone manages to checkmate while there are still a lot of pieces on the board. You seem to only be familiar with this type of game. It's possible to prioritize for that over holding onto pieces, with strategies like "gambits" taking that idea back to the opening move.

But when both players are good enough that this doesn't happen, you get a drawn-out type of game where very subtle positional advantages allow picking off pawns, or winning a better piece in exchange for a worse one. Eventually those swaps knock out most of the pieces on the board, and then the person with an advantage in "material"--the pieces they still have--will normally win. One of the things you need to learn as a competitive chess player is how to checkmate when you only have a small advantage like that. Can you win a game where you have a king and a bishop left vs. just a king? (You can't--that one is a dead draw.) There's a whole body of research on pawnless chess endings that to this day hasn't considered every possibility yet.
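"Advantage in material" is usually scored with the standard point values. A tiny sketch; the single-letter piece strings are my own toy notation, not a real chess library:

```python
# Standard chess point values (the king has no exchange value).
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material(pieces):
    """Sum the point values of one side's remaining pieces."""
    return sum(PIECE_VALUES[p] for p in pieces)

# After heavy exchanges: White keeps king, rook, two pawns;
# Black keeps king, bishop, two pawns.
white, black = material("krpp"), material("kbpp")
print(white - black)  # +2: White is "up the exchange" and should usually win
```

Converting that +2 into an actual checkmate is the endgame skill the paragraph above is talking about.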

So how do you tell which type of game you're playing? That's the trick--you can't until it's over. If you goof on a risky push to checkmate and it fails, you can easily end up down in material and then playing the other type of game at a disadvantage. That's where people who are good at tactics instead of memorization can really shine--no one memorizes optimal play when you're already down a piece or two. The entire risk-reward evaluation changes when you're in a position where you must do something risky to win, because being conservative will eventually result in you losing to the person with more pieces.

And if you think there are so few combinations here that the person who memorizes more can always win, you really need to revisit just who has the "small mind" here, because you don't understand chess at all. Go is actually the simpler game in this respect, because it only has long-term strategy to worry about. Chess players have to play a long-term game of position and material trade-offs while simultaneously guarding against short-term win attempts. Your long-term game is worthless if you get nailed by a Fool's Mate.

Comment: Re:Happy to let someone else test it (Score 2) 101

by greg1104 (#47437289) Attached to: First Release of LibreSSL Portable Is Available

Most of FIPS is a certification process oriented around testing. However, there is a checklist of things you need to support, and one of them used to be the easy-to-backdoor Dual_EC_DRBG.

Now that the requirement for Dual_EC_DRBG has been dropped from NIST's checklist, it would be possible to have LibreSSL meet FIPS requirements without having the troublesome component. Most of FIPS certification is about throwing money at testing vendors, as described by OpenSSL themselves. Doing that would really be incompatible with the crusade LibreSSL is on though, because the result is believed by some to be less secure than using a library that isn't bound to the FIPS process. I don't see those developers ever accepting a process that prioritizes code stability over security.

Comment: Re:Style over substance (Score 1) 188

by greg1104 (#47125641) Attached to: Apple Confirms Purchase of Beats For $3 Billion

Oh goodie, a lesson on ABX testing I didn't need. Carbonation is more obvious than the taste differences people often fail to confirm in blind tests. Slate even covered carbonation differences between containers. According to that, I didn't necessarily describe the cause and effect correctly in my quick comment--it may be from gas escaping rather than a bottling difference--but the effect I was describing is real.

Have you ever noticed the difference between flat soda and fresh? If so, why do you believe carbonation level and bottle-specific characteristics are never distinguishable? There's a mouthfeel component to it. A major reason flat soda tastes different is that the bubbles set up an expectation, whether or not there even is a taste difference beyond that. Your perception of carbonation turns into a taste even though it's really not a taste, exactly. The same way that knowing the brand alters how you taste--the bit that screws up non-blind taste tests--sensing the carbonation in your mouth changes how you taste too.

Fine, you say that's still me claiming something, not a test result. I looked around for five minutes for a blind test showing some difference between two different Coke product packages that included observations on the "fizziness" of the product impacting preference. Here's a recent blind comparison with untrained testers doing exactly that. I don't think it's studied more because it is too obvious to bother.
