Comment Re:If anyone actually cared... (Score 1) 710

There is only one solution to the problem of how to get devices that last longer: make them carry longer warranties, so manufacturers have an incentive to make cost/longevity trade-offs on the lifetime side. That will drive up prices on everything. People would need to think of cost in terms of $/year, assuming the lifetime is at least the warranty period, to get a price metric that actually drops when quality improves.
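To make that metric concrete, here's a tiny back-of-the-envelope sketch in Python, with made-up prices and warranty lengths:

    # Hypothetical numbers: two appliances compared by cost per warranty-year,
    # assuming each one lasts at least as long as its warranty.
    appliances = {
        "cheap model":  {"price": 400.0, "warranty_years": 2},
        "sturdy model": {"price": 700.0, "warranty_years": 7},
    }

    for name, item in appliances.items():
        per_year = item["price"] / item["warranty_years"]
        print(f"{name}: ${per_year:.0f}/year")
    # cheap model: $200/year
    # sturdy model: $100/year -- pricier up front, cheaper by this metric

The sturdy model costs almost twice as much at the register, but it's the cheaper machine once you divide by the years it's guaranteed to last.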

Your run at finding easier answers has two major issues. First, you're assuming that manufacturers know, in advance, which parts will wear out fast and which won't. The way things fail in the field is unpredictable. The last thing I bothered to repair was a TV that failed due to the Capacitor plague. Quoth Wikipedia: "these capacitors should have a life expectancy of about 18 years of continuous operation; a failure after 1.5 to 2 years is very premature".

The idea that this could have been prevented by buying higher quality parts is not well founded. They already bought capacitors that were overbuilt by at least a 6X factor over their warranty period. But shit happens. You cannot overbuild to where shit doesn't happen. That's the road to the crazy town that's given us things like super-expensive "mil-spec" parts. And assemblies of things made from that quality level of part still fail early anyway; see "shit happens", again. Also, device failures are dictated by the first failing component. There's no sense overbuilding plastic parts into metal if the lifetime is normally dictated by a motor.
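A toy illustration of that last point, with completely made-up failure rates: upgrading one part barely moves the device's lifetime when another part is the one that dominates.

    import random
    random.seed(1)

    # Made-up mean lifetimes (years) for each part, modeled as exponential failures.
    MEAN_LIFE = {"motor": 8.0, "plastic gear": 15.0, "metal gear": 60.0}

    def device_life(parts):
        # The device dies when its first component fails.
        return min(random.expovariate(1.0 / MEAN_LIFE[p]) for p in parts)

    def average_life(parts, trials=100_000):
        return sum(device_life(parts) for _ in range(trials)) / trials

    print(average_life(["motor", "plastic gear"]))  # ~5.2 years
    print(average_life(["motor", "metal gear"]))    # ~7.1 years: the motor still dominates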

Second major flaw: designing for maintenance and repair is way more expensive than you give it credit for, and it's not clear it's even productive. Splitting a design into usefully modular components makes things more expensive, and while repairs get easier, the failure rate goes up in the process. The way you've connected the modules becomes a whole new failure mode. Take a washing machine that was reliable as a single mechanism, split it into easy-to-repair modules, and the new type of failure you'll see in the field is modules that vibrate out of their module interface over time. There's a reason we've moved toward giant monolithic designs: they're simply more reliable than modular ones, on top of being cheaper to design and build. People don't really want "less reliable but easier to repair," and in a high-labor-cost world that's a correct preference.

Comment Re: user error (Score 1, Troll) 710

He suggested 32 MPG is good for a 10-year-old car that's built to the safety standards in America. US cars from 2009 are a lot better too.

And the main reason European cars get better mileage is that they're smaller and lighter. We drive serious distances here in the US, and if our cars were as light as European ones, our fatal crash statistics would suffer enormously. I would not want to be driving the style of car that gets better mileage in the EU, because they're smaller and lighter, into an accident on a big American road like I-95.

Visit List of countries by traffic-related death rate and sort by "Road fatalities per 100 000 motor vehicles" if you want some hard numbers on it. The highest entries are Malta, Norway, Iceland, Sweden, Denmark, Chile, Spain, Switzerland, UK, Finland, Ireland, Germany, and the Netherlands. Notice a pattern? That's the trade-off when everyone drives around tiny cars. The EU Econobox is a deathtrap by American standards.

Comment Re:Sargon II on Commodore 64 (Score 2) 128

Sounds about right. I played enough tournament games to estimate I was about a 1450 player at my best, and playing Sargon II on the Apple was a pretty evenly matched game. The key to beating early chess games like that, and this is still useful against any small-memory chess opponent, is to play something weird. You need to get the computer out of its opening book as soon as possible, without making an overtly bad move. Moving a pawn a single space forward where most players would take advantage of being able to move forward two can be enough to break you out of a small book. You could easily tell when Sargon went "off book" because the time it spent thinking about moves went up dramatically, especially on its highest difficulty setting.

I learned some ideas like this from David Levy's excellent 1983 book Computer Gamesmanship. With Sargon, I recall playing somewhere around five moves from the standard opening library before inserting one aimed at going off-book. The first few moves in a chess game tend to be very similar because they work. You don't want to yield control of the middle of the board just to break out of the book on your first move; that's counterproductive.
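As a toy sketch of why this works, an opening book is basically a lookup table of known move sequences; the contents below are illustrative, not Sargon's actual tables:

    # A tiny opening "book": sequences the program can answer instantly from memory.
    # One quiet off-book move (like a one-square pawn push) drops the game out of
    # the book and into slow full-width search, which is when thinking time jumps.
    BOOK = {
        ("e4",), ("e4", "e5"), ("e4", "e5", "Nf3"), ("e4", "e5", "Nf3", "Nc6"),
        ("d4",), ("d4", "d5"), ("d4", "d5", "c4"),
    }

    def in_book(moves):
        return tuple(moves) in BOOK

    print(in_book(["e4", "e5"]))   # True  -> instant book reply
    print(in_book(["e4", "e6"]))   # False for this tiny book -> the engine has to search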

Comment Re:Not to detract from our roots... (Score 1) 128

There are two main types of chess games. In one, someone manages to checkmate while there are still a lot of pieces on the board. You seem to be familiar only with this type of game. It's possible to prioritize that over holding onto pieces, with strategies like "gambits" applying the idea as early as the opening moves.

But when both players are good enough that this doesn't happen, you get a drawn-out type of game where very subtle positional advantages allow picking off pawns, or exchanging a better piece for a worse one. Eventually those swaps knock out most of the pieces on the board, and then the person with an advantage in "material"--the pieces they still have--will normally win. One of the things you need to learn as a competitive chess player is how to checkmate when you only have a small advantage like that. Can you win a game where you have a king and a bishop left vs. just a king? There's a whole body of research on pawnless chess endings that to this day hasn't considered every possibility.

So how do you tell which type of game you're playing? That's the trick--you can't until it's over. If you goof on a risky push to checkmate and it fails, you can easily end up down in material and then playing the other type of game at a disadvantage. That's where people who are good at tactics instead of memorization can really shine--no one memorizes optimal play when you're already down a piece or two. The entire risk-reward evaluation changes when you're in a position where you must do something risky to win, because being conservative will eventually result in you losing to the person with more pieces.

And if you think there are so few combinations here that it's possible for the person who memorizes more to always win, you really need to revisit just who has the "small mind" here, because you don't understand chess at all. Go is really the simpler game here because it only has the long-term strategy to worry about. Chess players have to worry about a long-term game of position and material trade-offs while at the same time guarding against short-term win attempts. Your long-term game is worthless if you get nailed by a Fool's Mate.

Comment Re:Happy to let someone else test it (Score 2) 101

Most of FIPS is a certification process oriented around testing. However, there is a checklist of things you need to support, and one of them used to be the easy-to-backdoor Dual_EC_DRBG.

Now that the requirement for Dual_EC_DRBG has been dropped from NIST's checklist, it would be possible to have LibreSSL meet FIPS requirements without having the troublesome component. Most of FIPS certification is about throwing money at testing vendors, as described by OpenSSL themselves. Doing that would really be incompatible with the crusade LibreSSL is on though, because the result is believed by some to be less secure than using a library that isn't bound to the FIPS process. I don't see those developers ever accepting a process that prioritizes code stability over security.

Comment Re:Style over substance (Score 1) 188

Oh goodie, a lesson on ABX testing I didn't need. Carbonation is more obvious than the taste differences people often fail to confirm in blind tests. Slate even ran a piece on carbonation differences between containers. According to that, I didn't necessarily describe the cause and effect correctly in my quick comment--it may be from gas escaping rather than a bottling difference--but the effect I was describing is real.

Have you ever noticed the difference between flat soda and fresh? If so, why do you believe carbonation level and bottle-specific characteristics are never distinguishable? There's a motion component to it. A major reason flat soda tastes different is that you expect a different taste from the bubbles, whether or not there even is a taste difference beyond that. Your perception of carbonation turns into a taste even though it's not really a taste, exactly. The same way that knowing the brand alters how you taste--the bit that screws up non-blind taste tests--sensing the carbonation in your mouth changes how you taste too.

Fine, you say that's still me claiming something, not a test result. I looked around for five minutes for a blind test showing some difference between two different Coke product packages that included observations on the "fizziness" of the product impacting preference. Here's a recent blind comparison with untrained testers doing exactly that. I don't think it's studied more because it is too obvious to bother.
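For what it's worth, scoring a blind comparison like that is simple; here's a minimal sketch of checking whether an ABX-style result beats guessing, with made-up numbers:

    from math import comb

    def abx_p_value(correct, trials, p_guess=0.5):
        # One-sided chance of getting at least `correct` answers right by pure guessing.
        return sum(comb(trials, k) * p_guess**k * (1 - p_guess)**(trials - k)
                   for k in range(correct, trials + 1))

    # Hypothetical result: 13 of 16 trials identified correctly.
    print(abx_p_value(13, 16))  # ~0.011 -- unlikely to be blind guessing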

Comment Re:Never used this keystroke (Score 1) 521

What? No. Mouse vs. Keyboard shows that the mouse is better for moving around, compared to one of the UNIX-style editors where moving the cursor takes many keys. That's it. If you are doing a job other than moving the cursor and/or text around, keyboard beats mouse. Navigation is the thing the mouse is good at.

The context for TFA is writing new content, and there a save keyboard shortcut is far more efficient than anything else. It's only when you change your focus from there to editing that the mouse becomes a viable alternate navigation method.

Comment Re:next for NoSQL (Score 5, Interesting) 162

All "NoSQL" means is that the database doesn't use SQL as its interface, nor the massive infrastructure needed to implement the SQL standard. This lets you build some things that lighter than SQL-based things, like schemaless data stores. There several consistency models that let you have a fair comparison. It's not the case that NoSQL must trade consistency for availability in a way that makes it impossible to move toward SQL spec behavior.

Differences include:

  • Less durability for writes. Originally PostgreSQL offered only high durability and NoSQL only low; now both have many options, ranging from "committed to memory is good enough to move on" up to requiring that multiple nodes have the data first.
  • No heavy b-tree indexes on the data. Key-value indexes are small and efficient to navigate.
  • No complicated MVCC model for handling complicated read/write mixes.

    Today NoSQL solutions like MongoDB still have a better story for sharding data across multiple servers. NoSQL also gives you flexible schemaless design, scaling by adding nodes, and simpler, lighter queries and indexes.

    PostgreSQL is still working on a built-in answer for multi-node sharding. A lot of the smaller NoSQL features have been incorporated, with JSON and the hstore key-value type being how Postgres does that part; there's a sketch of what that looks like below. Both systems have converged so much that either one is good enough for many different types of applications.
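As a rough illustration of that convergence, here is a minimal Python sketch of the Postgres side; it assumes a reachable local PostgreSQL database and the psycopg2 driver, and the table, index, and document contents are made-up examples:

    import json
    import psycopg2

    # Assumes a local PostgreSQL instance with a database named "test" (made up).
    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    # Schemaless-style storage: one jsonb document column plus a GIN index,
    # so containment queries don't need a separate b-tree index per field.
    cur.execute("CREATE TABLE events (id serial PRIMARY KEY, doc jsonb)")
    cur.execute("CREATE INDEX events_doc_idx ON events USING gin (doc)")
    conn.commit()

    # Relaxed durability for this one transaction: don't wait for the WAL flush
    # on commit, the "commit to memory is good enough" end of the range.
    cur.execute("SET LOCAL synchronous_commit TO off")
    cur.execute("INSERT INTO events (doc) VALUES (%s::jsonb)",
                (json.dumps({"user": "alice", "action": "login"}),))
    conn.commit()

    # MongoDB-style containment lookup, answered from the GIN index.
    cur.execute("SELECT doc FROM events WHERE doc @> %s::jsonb",
                (json.dumps({"user": "alice"}),))
    print(cur.fetchall())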
