
Comment Re:Sargon II on Commodore 64 (Score 2) 128

Sounds about right. I played enough tournament games to estimate I was about a 1450 player at my best, and playing Sargon II on the Apple was a pretty evenly matched game. The key to beating early chess games like that, and this is still useful against any small-memory chess opponent, is to play something weird. You need to get the computer out of its opening book library as soon as possible, without making an overtly bad move. Moving a pawn a single space forward, where most players would take advantage of being able to move it forward two, can be enough to break you out of a small book. You could easily tell when Sargon went "off book" because the time it spent thinking about moves went up dramatically, especially on its highest difficulty setting.

I learned some ideas like this from David Levy's excellent 1983 book Computer Gamesmanship. With Sargon, I recall I would play somewhere around 5 moves from the standard opening library before inserting one aimed at going off-book. The first few moves in a chess game tend to be very similar because they work. You don't want to yield control of the middle of the board in favor of breaking out of the book on your first move; that's counterproductive.
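The book-lookup mechanism is simple to sketch. Here's a toy model in Python showing why a book engine replies instantly on known lines but suddenly has to pay for a full search once you leave them; the tiny book and all names here are my own invention, not Sargon's actual data.

```python
# A small opening book mapping move sequences to canned replies.
# Positions in the book cost nothing; anything else forces a "search".
BOOK = {
    (): "e4",
    ("e4",): "e5",
    ("e4", "e5"): "Nf3",
    ("e4", "e5", "Nf3"): "Nc6",
}

def engine_reply(moves_so_far):
    """Return (move, nodes_searched). Book hits are free; off-book
    positions pay for a simulated tree search."""
    key = tuple(moves_so_far)
    if key in BOOK:
        return BOOK[key], 0          # instant book reply
    return "search-move", 50_000     # stand-in for real search effort

# The mainline stays in book...
print(engine_reply(["e4", "e5"]))    # ('Nf3', 0)

# ...but a quiet one-square pawn push falls out of it immediately.
print(engine_reply(["e4", "e6"]))    # ('search-move', 50000)
```

That jump in "nodes searched" is exactly the jump in thinking time that gave Sargon away.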

Comment Re:Not to detract from our roots... (Score 1) 128

There are two main types of chess games. In one, someone manages to checkmate while there are still a lot of pieces on the board. You seem to only be familiar with this type of game. It's possible to prioritize that over holding onto pieces, with strategies like "gambits" taking the idea all the way back to the opening moves.

But when both players are good enough that this doesn't happen, you get a drawn-out type of game where very subtle positional advantages allow picking off pawns, or exchanging a better piece for a worse one. Eventually those swaps knock out most of the pieces on the board, and then the person with an advantage in "material"--the pieces they still have--will normally win. One of the things you need to learn as a competitive chess player is how to checkmate when you only have a small advantage like that. Can you win a game where you have a king and a bishop left vs. just a king? There's a whole body of research on pawnless chess endings that to this day hasn't considered every possibility.

So how do you tell which type of game you're playing? That's the trick--you can't until it's over. If you goof on a risky push to checkmate and it fails, you can easily end up down in material and then playing the other type of game at a disadvantage. That's where people who are good at tactics instead of memorization can really shine--no one memorizes optimal play when you're already down a piece or two. The entire risk-reward evaluation changes when you're in a position where you must do something risky to win, because being conservative will eventually result in you losing to the person with more pieces.

And if you think there are so few combinations here that the person who memorizes more can always win, you really need to revisit just who has the "small mind" here, because you don't understand chess at all. Go is actually the simpler game in this respect, because it only has long-term strategy to worry about. Chess players have to play a long-term game of position and material trade-offs while at the same time guarding against short-term win attempts. Your long-term game is worthless if you get nailed by a Fool's Mate.

Comment Re:Happy to let someone else test it (Score 2) 101

Most of FIPS is a certification process oriented around testing. However, there is a checklist of things you need to support, and one of them used to be the easy-to-backdoor Dual_EC_DRBG.

Now that the requirement for Dual_EC_DRBG has been dropped from NIST's checklist, it would be possible for LibreSSL to meet FIPS requirements without the troublesome component. Most of FIPS certification is about throwing money at testing vendors, as OpenSSL themselves have described. Doing that would really be incompatible with the crusade LibreSSL is on, though, because the result is believed by some to be less secure than using a library that isn't bound to the FIPS process. I don't see those developers ever accepting a process that prioritizes code stability over security.

Comment Re:Style over substance (Score 1) 188

Oh goodie, a lesson on ABX testing I didn't need. Carbonation is more obvious than the taste differences people often fail to confirm in blind tests. Slate even covered container carbonation differences. According to that piece, I didn't necessarily describe the cause and effect correctly in my quick comment--it may be from gas escaping rather than a bottling difference--but the effect I was describing is real.

Have you ever noticed the difference between flat soda and fresh? If so, why do you believe carbonation level and bottle-specific characteristics are never distinguishable? There's a motion component to it. A major reason flat soda tastes different is that you expect a different taste from the bubbles, whether or not there even is a taste difference outside of that. Your perception of carbonation turns into a taste even though it's not really a taste, exactly. The same way that knowing the brand alters how you taste--the bit that screws up non-blind taste tests--sensing the carbonation in your mouth changes how you taste too.

Fine, you say that's still me claiming something, not a test result. I looked around for five minutes for a blind test showing some difference between two different Coke product packages that included observations on the "fizziness" of the product impacting preference. Here's a recent blind comparison with untrained testers doing exactly that. I don't think it's studied more because it is too obvious to bother.

Comment Re:Never used this keystroke (Score 1) 521

What? No. Mouse vs. keyboard studies show that the mouse is better for moving around, compared to one of the UNIX-style editors where moving the cursor takes many keystrokes. That's it. If you are doing a job other than moving the cursor and/or text around, the keyboard beats the mouse. Navigation is the one thing the mouse is good at.

The context for TFA is writing new content, and there a save keyboard shortcut is far more efficient than anything else. It's only when you change your focus from there to editing that the mouse becomes a viable alternate navigation method.

Comment Re:next for NoSQL (Score 5, Interesting) 162

All "NoSQL" means is that the database doesn't use SQL as its interface, nor the massive infrastructure needed to implement the SQL standard. This lets you build things that are lighter than their SQL-based equivalents, like schemaless data stores. There are several consistency models available, which makes a fair comparison possible. It's not the case that NoSQL must trade consistency for availability in a way that makes it impossible to move toward SQL-spec behavior.

Differences include:

  • Less durability for writes. Originally PostgreSQL only offered high durability and NoSQL low; now both have many options, ranging from a commit to memory being good enough to move on, up to requiring that multiple nodes get the data first.
  • No heavy b-tree indexes on the data. Key-value indexes are small and efficient to navigate.
  • No complicated MVCC model for complicated read/write mixes.

Today NoSQL solutions like MongoDB still have a better story for sharding data across multiple servers. NoSQL also gives you flexible schemaless design, scaling by adding nodes, and simpler/lighter queries and indexes.

PostgreSQL is still working on a built-in answer for multi-node sharding. A lot of the small NoSQL features have been incorporated, with JSON and the hstore key-value type being how Postgres does that part. The two systems have converged so much that either one is good enough for many different types of applications.
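The schemaless key-value idea is easy to illustrate. Here's a minimal Python sketch of the concept--documents are just dicts, and the only "index" is the key lookup itself. The names are hypothetical; this shows the idea, not any real NoSQL engine's API.

```python
# A tiny in-memory document store: no schema is enforced, and
# lookup is a single hash probe rather than a b-tree walk.
store = {}

def put(key, doc):
    store[key] = doc          # any shape of dict goes in, no schema check

def get(key):
    return store.get(key)     # one key-value lookup, nothing heavier

# Two documents with completely different "schemas" live side by side:
put("user:1", {"name": "alice", "tags": ["admin"]})
put("user:2", {"name": "bob", "last_login": "2014-01-01"})

print(get("user:1")["name"])   # alice
print(get("user:3"))           # None
```

Postgres's JSON type and hstore cover the same ground from the SQL side, which is a big part of the convergence I mentioned.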

Comment Re:Ignore these naysayers (Score 1) 113

No, it's OCZ. ext4 is the most popular filesystem that expects good behavior from drive write caches, so of course it also has the most problem reports. The way write barriers work in ext4, the filesystem struggles when hardware lies about data being flushed to disk. See ext4 and data loss for an introduction.

As outlined there, ext3 gets lucky in some situations that ext4 just doesn't tolerate, so some people see that as a bug in ext4. But the reason for the change is improved performance. You just can't get a fast filesystem and rugged behavior in the face of lying drives at the same time; you have to pick a side. In the classic "good-fast-cheap--pick two" trio of trade-offs, OCZ always picks cheap and fast.
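You can model the failure in a few lines. This toy Python sketch (entirely my own construction, not real kernel or firmware code) shows why a cache that acknowledges a flush without persisting breaks the barrier assumption: the filesystem believes the data is safe the moment flush() returns.

```python
# A drive that honors flush vs. one that lies about it.
class HonestDrive:
    def __init__(self):
        self.cache, self.platter = [], []
    def write(self, block):
        self.cache.append(block)          # lands in volatile cache first
    def flush(self):
        self.platter.extend(self.cache)   # barrier: really persists
        self.cache.clear()
    def power_loss(self):
        self.cache.clear()                # volatile cache contents die

class LyingDrive(HonestDrive):
    def flush(self):
        pass                              # says "done", persists nothing

for drive in (HonestDrive(), LyingDrive()):
    drive.write("journal-commit")
    drive.flush()          # filesystem now assumes the commit is durable
    drive.power_loss()
    print(type(drive).__name__, drive.platter)
# HonestDrive ['journal-commit']
# LyingDrive []
```

ext4 orders its journal writes around exactly that flush acknowledgment, which is why a lying cache turns into lost commits instead of a graceful replay.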

Bad drives aren't tolerated by ZFS or btrfs either. It's just that ext4 is deployed on far more servers than they are.

Comment Re:Can't Tell Them Apart (Score 1) 466

When I give someone a fizzbuzz-style program to do, I point out that part of my grading is how it handles errors. The example programs people swipe online don't help very much, because they usually don't worry about things like boundary checking. If I can break someone's fizzbuzz by giving it a negative number, that's a failing grade. That is much, much more important to me than language mastery. For C programming, as one example, I'll trade you a dozen people who know the correct order of arguments for calloc for one who knows how that library call might fail.
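For the curious, here's the kind of defensive fizzbuzz I mean, sketched in Python. The core logic is trivial; the point is that bad input gets rejected loudly instead of silently misbehaving. A hypothetical sketch, not a template answer.

```python
def fizzbuzz(n):
    """Return the fizzbuzz strings for 1..n, raising on bad input."""
    if not isinstance(n, int) or isinstance(n, bool):
        raise TypeError("n must be an integer")
    if n < 1:
        raise ValueError("n must be positive")
    out = []
    for i in range(1, n + 1):
        # String repetition by a boolean gives "" or the word.
        word = ("Fizz" * (i % 3 == 0)) + ("Buzz" * (i % 5 == 0))
        out.append(word or str(i))
    return out

print(fizzbuzz(15)[-1])   # FizzBuzz
try:
    fizzbuzz(-5)          # the negative-number probe from the interview
except ValueError as e:
    print("rejected:", e)
```

The two `raise` lines are what most swiped answers are missing, and they're the part I'm actually grading.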
