If you're feeling *particularly* devious, you can use a little-known Python feature called "for-else" (also "while-else") which allows you to tag "else" clauses onto for loops. Such a clause executes only if the loop runs to completion 'naturally' (i.e. if no break or exception happens within the loop block).
As a relatively obscure language feature, it can make your code harder to read. On the other hand, it enables a multilevel break (via chained "else: continue" / "break" snippets at the end of nested loops) and cuts down on the flag variables and sentinel tests you would otherwise need (e.g. in a linear search, the "else" clause simply holds the if-not-found logic).
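For illustration, here's the linear-search case sketched out; the not-found logic lives in the "else" clause, with no flag variable in sight:

```python
def find_index(items, target):
    # The "else" clause runs only if the loop finishes without a break.
    for i, item in enumerate(items):
        if item == target:
            break
    else:
        return -1  # not-found logic lives here; no sentinel flag needed
    return i

print(find_index(["spam", "eggs", "ham"], "eggs"))   # 1
print(find_index(["spam", "eggs", "ham"], "bacon"))  # -1
```

Note that the "else" also fires for an empty sequence, since a loop over nothing trivially "runs to completion".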
There are other alternatives to multi-level break: exceptions can break out of any number of loops until they reach an appropriate handler block, and function returns can always break out of any loop up to the top of the function.
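A quick sketch of both alternatives on a nested-loop search (the function and exception names here are just for illustration):

```python
def find_in_grid(grid, target):
    # A "return" escapes both loops at once: a multi-level break.
    for r, row in enumerate(grid):
        for c, value in enumerate(row):
            if value == target:
                return (r, c)
    return None

class Found(Exception):
    """Sentinel exception used only to unwind the loops."""

def find_in_grid_exc(grid, target):
    # The same search, using an exception to break out of both loops.
    try:
        for r, row in enumerate(grid):
            for c, value in enumerate(row):
                if value == target:
                    raise Found((r, c))
    except Found as hit:
        return hit.args[0]
    return None
```

The function-return form is usually the cleaner of the two; the exception form scales to breaking out of arbitrarily deep call stacks.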
As for switch/case: in C, it was basically a thin wrapper around jump tables for most switches (e.g. enums, small integers, Duff's device, etc.). Python's preferred alternatives are key-value dictionaries (hashtables) or if/elif cascades. The former is very easy to set up and manipulate in Python (e.g. {1: 'spam', 2: 'eggs', 3: 'ham'}), unlike in C or C++. The latter is far more flexible than switch/case (e.g. being able to test ranges of values with "A <= x <= B", test for multiple values with "x in (A, B, C)", or do arbitrary tests like "x.isspace()"), while avoiding the complexity of an entirely new language construct.
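A minimal sketch combining both alternatives (the mapping and the test values are made up for illustration):

```python
def describe(x):
    # dict-as-switch covers the enum-like constant cases...
    names = {1: 'spam', 2: 'eggs', 3: 'ham'}
    if x in names:
        return names[x]
    # ...while the if/elif cascade handles tests a C switch can't express
    elif 10 <= x <= 20:        # range test
        return 'teen'
    elif x in (42, 1337):      # multiple-value test
        return 'special'
    else:
        return 'other'

print(describe(2))     # eggs
print(describe(15))    # teen
```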
Finally, goto exists in Python as a third-party module. Go ahead, try it: it really works!
This attack is not infeasible even when the players run in separate processes. I can observe your rand() state through the moves you make, and provided the rest of your setup is as described (seeded with system time at the start of the match, simple processing of the RNG output to derive a move), I can simply play randomly for the first 30 rounds (3^30 > 2^32) and recover your state. If I can come up with a set of RNG parameters (time-based seed, rand algorithm, sequence offset, processing strategy) that reproduces your move sequence, then I can still beat your bot, just with a slightly worse win record. It takes 30 times more computational power, so about 180,000 possibilities, but it's still tractable.
Suck is relative. I was implying they sucked according to GP's definition, which was that a "good" algorithm would require lots of data & computation time to beat. I would agree that they don't suck for their intended purpose (simple, fast, reasonably random, repeatable numbers). However, rand() does get (mis)used in a lot of contexts (especially security-conscious ones, or applications vulnerable to algorithmic complexity attacks) where it should not be used.
I should mention, though, that even in cases where it "shouldn't" matter, a weak rand() enables attacks on systems that should otherwise behave randomly. For example, many games (as you mention) use very simple implementations of rand(); such simple implementations also allow particularly hard-core gamers to abuse the system and earn much higher rewards (stat boosts, attack rolls, random encounters, etc.) than they ought to. Heck, there's an entire subcommunity of Pokémon game players dedicated to beating the game (and/or outperforming their peers) through methodical, systematic RNG abuse.
0) I have source code for my bot, the tourney announcement, and the tourney results. If you are really curious, ping me at my email address.
1) I never said you were *playing* a random opponent. Against an *arbitrary* opponent your optimal strategy is to play randomly. Any other strategy that you play can be exploited to your loss. In this way, random really is the game-theoretic optimal strategy. It's not just a buzzword.
2) Of course. I'm exploiting an implementation detail. This is a classic side-channel attack on an otherwise secure system.
3) Yes. I looked up the source code for libc implementations online. It's easy: google 'rand.c'. Examples: http://fxr.watson.org/fxr/source/stdlib/rand.c?v=FREEBSD-LIBC, http://fossies.org/dox/glibc-2.15/random_8c_source.html. I also got Microsoft's rand() implementation for their MSVC runtime, because the source code for that comes with Visual Studio. Yes, I was using Windows. Contest submissions had to be in C or C++, and most programmers would rather use rand() (portable, simple, easy) than implement their own random() function or use a less-portable alternative.
4) Contest programs ran in the same process, so your rand() state was shared. srand was forbidden, but we were given the tournament engine source code and so we knew when srand() was called and with what arguments (time(NULL)). To be specific: I pulled a single rand() value, then ran all my implementations of rand() with different seeds in the neighbourhood of time(NULL), and ran them for a variable number of iterations (up to 20, I think) to guess the sequence offset. The processing strategy was simply observing what values they were adding to the rand() value before taking it mod 3. The search space is about 20 seconds * 5 libcs * 20 sequence offsets * 3 "processing strategies" = 6,000 possibilities. Four dimensions, but they can all be constrained to small values.
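For the curious, here's a rough re-sketch of the idea in Python (the original bot was C/C++). The LCG constants are the classic K&R reference rand(); the seed window, offset range, and "(rand() + k) % 3" strategies are simplified stand-ins for the real search dimensions:

```python
def make_lcg(seed):
    # Classic K&R-style libc rand(); real libcs differ, so the actual
    # breaker carried one such model per candidate libc.
    state = seed & 0xFFFFFFFF
    def rand():
        nonlocal state
        state = (state * 1103515245 + 12345) & 0xFFFFFFFF
        return (state >> 16) & 0x7FFF
    return rand

def crack(observed_moves, now, seed_window=20, max_offset=20):
    # Brute-force (seed, offset, strategy) until one model replays the
    # opponent's observed move sequence exactly.
    for seed in range(now - seed_window, now + 1):
        for offset in range(max_offset):
            for strategy in range(3):  # move = (rand() + strategy) % 3
                rand = make_lcg(seed)
                for _ in range(offset):
                    rand()
                guess = [(rand() + strategy) % 3 for _ in observed_moves]
                if guess == observed_moves:
                    return seed, offset, strategy
    return None
```

Once crack() returns a match, you replay the cracked model one step ahead of the opponent and throw the counter-move every round.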
I took first place in the full round-robin. The second part of the contest was a knockout tournament, and I wasn't using the kind of super-sophisticated modelling that my opponents used since the RNG breaker was already a fair amount of work. Consequently, at best I tied strong opponents, and lost on time.
Yes indeed, neonbot was my entry. Quite a remarkable small world. The contest was in 2007 (my first year of undergrad), and it was probably my favorite event of the year. Thanks for judging!
I'd think that unless you were playing a very large number of rounds such that you could infer the opponent's PRNG function and seed, or unless the opponent PRNG was REALLY bad, this would not work.
Actually, supposing the seed state is 32 bits in size, it really doesn't take that many rounds. Each round, one of three outcomes is produced. This yields roughly 1.6 bits of information about the generator's internal state. After 20 rounds, you have, in theory, enough information to infer the PRNG seed. If you can predict the opponent's move for 30 straight rounds, you can be pretty confident that you've determined both the seed and the function precisely. This works for arbitrary seeds.
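The arithmetic, for reference:

```python
import math

bits_per_round = math.log2(3)        # each observed move leaks ~1.585 bits
rounds_needed = 32 / bits_per_round  # ~20.2 rounds to pin down a 32-bit seed
print(bits_per_round, rounds_needed)
```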
Maybe if the seed were the time to the nearest hour you might be OK. However, if it used time to a millisecond then you'd have almost no chance of success. Any decent PRNG will show what would appear to be completely different behavior with even a slightly different seed.
With regards to the timestamp: srand(time(NULL)) isn't a random seed. If you can guess when the process called this function (hint: it's at least after the process was started!), you can then brute-force over a very small number of possibilities. If the process was started, say, at 7:00pm for the contest start, and you run at 8:30pm, your search space is only 90 minutes wide, and since time(NULL) has one-second resolution, that's just a few thousand candidate seeds: trivial. My implementation worked for this contest (as evidenced by its extremely high win rate against rand()-using bots).
Now, if the PRNG were really lousy maybe you'd have a shot. It just seems unlikely that such a function would exist in any well-used library since the 60s. Sure, lots of functions are inadequate for cryptography, but even defeating these poor algorithms usually requires lots of data collection and a huge search. A PRNG that takes 5 years of supercomputer time would be considered broken, since a brute force keysearch might require the age of the universe. That doesn't mean that you're going to defeat it with a few rounds of RPS to figure out what it is doing.
This PRNG-breaking approach would work equally well for a crappy linear-congruential generator as for a modern Mersenne Twister, since it relies only on being able to guess the seed (not the entire internal state).
Most libc implementations of rand() really suck. They were originally implemented for memory economy and speed, and there was no concern about security. Now, such implementations are largely retained for compatibility, so that older programs can still produce the same results using fixed seeds. The various libcs are, arguably, the most-used libraries in existence. To beat many of these generators requires just one (or two) observations of the full 32-bit output, plus a trivial amount of computation.
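To make that concrete, here's a toy model: assume (for illustration) a generator that leaks its full state in each output, using the classic LCG constants. One observed value then determines the entire future stream:

```python
# Toy LCG whose output *is* its full internal state (historical rand()
# implementations truncated far less aggressively than modern ones).
A, C, M = 1103515245, 12345, 2**31

def lcg_next(x):
    return (A * x + C) % M

observed = lcg_next(123456789)   # one value leaked by the victim...
predicted = lcg_next(observed)   # ...and the attacker predicts the next
```

Generators that truncate their output (e.g. returning only the high 15 bits) need a second observation to disambiguate the hidden low bits, hence "one (or two)".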
Maybe about 10% or so. The contest was five years ago, so some of the details are a bit fuzzy. I don't believe we ever got the source code for other competitors, either, so I wouldn't know if they were using an RNG strategy or just a simple predictable one.
Evolution doesn't need to "progress" quickly. It has no defined end-goal, only the continued adaptation of species. Life adapts to its environment, whether the environment is an asteroid crater or an industrial park.
Evolution doesn't have to be "induced" by any great event. Catastrophic events are just one way in which evolution can operate, since they produce ecological voids which are rapidly filled by new (adapted) flora and fauna. Regular adaptation to ever-changing natural (and artificial) environments also drives evolution.
I once participated in a Rock-Paper-Scissors tournament put on by Epson (see, for example, http://www.campuslogix.com/rps_challenge/rps_challenge.html). They basically said "write a bot that will play RPS". Of course, the game-theoretic optimal strategy in such a contest is to just play randomly. You can beat the (Epson-supplied) rockbots and rotatebots easily, so with a bit of work you can do slightly above average.
Seeking a greater advantage, though, I coded my bot to also include a set of predictors for the random number generators of several popular libcs (as I did not know which OS or distro the tournament machine would use). During a round, I would guess the random seed (current system time +/- a few seconds), the sequence offset, the RNG processing strategy, and the algorithm used, and simply run a parallel copy of the libc RNG used by my opponent.
I was therefore able to beat most RNG-using opponents 9998/10000 times easily, a result which rather surprised the judges.
Goes to show: sometimes a bit of "cheating" works well.
They've (either purposefully or inadvertently) created for themselves a bit of an AI challenge. If the hackers take this seriously enough, we could see the development of some pretty advanced game-specific AIs.
And, just like Australia, they might all eventually come to be accepted as mostly-normal members of society.
...two months is all you need to cross from one year to the next, which, under the %m-%d-%Y scheme, would still fail to sort in the expected chronological order.
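A two-line demonstration of the failure (and the %Y-%m-%d ordering that avoids it):

```python
dates = ["12-15-2011", "01-05-2012"]   # %m-%d-%Y: Dec 2011, then Jan 2012
print(sorted(dates))  # ['01-05-2012', '12-15-2011'] -- 2012 sorts before 2011

iso = ["2011-12-15", "2012-01-05"]     # %Y-%m-%d sorts lexicographically
print(sorted(iso))    # ['2011-12-15', '2012-01-05'] -- chronological
```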
8 Catfish = 1 Octo-puss