I had to go recheck the Stanford course numbering system - it looks like 200-level is "advanced undergraduate/beginning graduate". So we're both right.
I agree that people might interpret the grades the way you describe, and that it could be a PR issue for Stanford, but I don't think it would actually indicate deep flaws in Stanford admissions (and here I mean undergrad admissions - my understanding is that admission to the professional master's program mostly consists of "can you breathe, and does your employer have an enormous wad of cash for us?"). The objective function that Stanford is optimizing contains a lot of terms that are not reflected in someone's CS221 performance, so you'd only expect the two to be somewhat correlated. Could all of the outside high-scorers also do well in a Stanford English class? Do they have anything to contribute to the college community artistically, athletically, or in terms of their background (whether a poor black kid from the South or a non-native speaker from Japan, most elite schools put a lot of admissions resources into making their campus a mini melting pot, on the theory that it's good for society and good for their students)? Were they taking a bunch of other tough classes at the same time, while playing in the orchestra and being involved in a bunch of student groups? (Of course, many outsiders have real-world jobs; whether they or the Stanford students actually had the tougher time would depend on the job and the student.) And some of the successful outsiders could be 40-year-old engineers who went to MIT or wherever back in the day, so it's not as if they're "excluded" from the elite education system.
Basically, I agree that it's possible this could expose that the Stanford admissions filter does not perfectly select for raw CS talent. But I don't think they ever claimed to do that, and I don't think there are many good arguments that they should.
You're right that most game AI doesn't use very sophisticated techniques, but just for the record it's not true that the Berkeley Overmind team found building a "true AI" (whatever that means) for StarCraft to be "beyond infeasible". They focused on mutas because they're easier to micro, but the higher-level strategic code is AFAIK pretty much race-agnostic. There's a build-order planner that can work with any set of building constraints you feed it; there's strategy selection that leverages scouting results and just needs to know the basic details of units and buildings (e.g. if you observe the enemy building a Stargate, then as long as the AI has been told that Stargates produce air units and that Goliaths (say) can attack air units, it will shift production towards Goliaths); there's a fair amount of prediction of when/where enemies will expand or attack that has to work against enemies of all races; and so on. The Overmind is certainly not a perfect rational agent (or even a perfect bounded-rational agent, which would be a more realistic goal), but it's much more sophisticated than a bunch of hacks around a mutalisk-micro script.
Source: I'm a Berkeley CS grad student, and I know a bunch of the Overmind authors and have been to a few of their meetings, though I didn't personally contribute code.
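To make the strategy-selection point above concrete, here's a minimal sketch in Python - mine, not Overmind code, with made-up unit/building tables - of how production choices can be driven purely by declared facts about units and buildings plus scouting observations:

    # Hypothetical data tables (not the Overmind's): what each enemy building
    # can produce, and which domains each of our unit types can attack.
    PRODUCES = {
        "Stargate": {"air"},       # Stargates produce air units
        "Barracks": {"ground"},
    }

    CAN_ATTACK = {
        "Goliath": {"ground", "air"},
        "Marine": {"ground", "air"},
        "Vulture": {"ground"},
    }

    def counter_units(scouted_buildings):
        """Return our unit types that can attack everything the enemy can now produce."""
        threat_domains = set()
        for building in scouted_buildings:
            threat_domains |= PRODUCES.get(building, set())
        return [u for u, domains in CAN_ATTACK.items() if threat_domains <= domains]

    # Scouting reports an enemy Stargate -> shift production toward anti-air units.
    print(counter_units(["Stargate"]))   # ['Goliath', 'Marine']

Nothing here is race-specific: swap in different tables and the same logic applies, which is the point the Overmind folks were making with their strategy layer.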
Stanford undergrad tuition is essentially free if your family makes less than $100k/yr. Need-based financial aid policies mean that the $55k number is an upper bound, typically paid in full only by families making $200k and above (with various exceptions, of course, but that's the general pattern). In any case, this is a grad course, so the price of undergrad tuition is not really relevant to the discussion.
Stanford CS PhD students generally have their tuition, as well as an additional stipend for living expenses, fully funded by research grant money, so they don't pay a cent. The only students taking this class who would actually be charged full tuition are likely those in the professional master's degree program, which is basically Stanford's way of siphoning money from Silicon Valley tech companies: the companies send their employees for training and pay Stanford to do it.
This is all to say that I don't think Stanford is trying to rip anyone off here (quite the contrary, since they're providing the course for free). But it's also a rare course that can be taught this way. It's easy to write an autograder that runs submitted programs and checks whether they produce the correct output; it's much harder to automatically provide feedback on an English paper or a mathematical proof. Similarly, it's easy to record your lectures and put them up on YouTube; it's much harder to replicate a classroom discussion facilitated by a true expert. So, a few large-lecture CS classes aside, the vast majority of classroom experiences (at Stanford or anywhere else) are going to be very difficult to replicate at web scale, now and for the foreseeable future.
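For the curious, that "run it and diff the output" style of autograder really is just a few lines. Here's a minimal sketch (hypothetical file names and test cases, obviously not the actual CS221 grader):

    import subprocess

    def grade(submission_path, test_cases):
        """Run a submitted program on each test case and count exact output matches."""
        passed = 0
        for stdin_text, expected_stdout in test_cases:
            try:
                result = subprocess.run(
                    ["python", submission_path],
                    input=stdin_text,
                    capture_output=True,
                    text=True,
                    timeout=10,          # kill runaway submissions
                )
            except subprocess.TimeoutExpired:
                continue
            if result.stdout.strip() == expected_stdout.strip():
                passed += 1
        return passed, len(test_cases)

    # Example: two toy test cases for a program that prints the sum of two numbers.
    tests = [("1 2\n", "3"), ("10 -4\n", "6")]
    print(grade("submission.py", tests))

There's no equivalent trick for grading an essay or a proof, which is exactly why courses like this one are the easy case for putting online.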
Basically, the argument that schools like MIT make to their alumni is that when you donate to MIT, you're not giving your money to a large faceless entity with an $8 billion endowment, staffed with administrators who light their cigars with $100 bills. You're giving it to a poor kid from the slums of Bangalore who is able to come to MIT and fulfill his/her potential because of generous alumni like you, who have allowed MIT to provide a $300k education for free to anyone who can qualify. Obviously you can believe this sort of thing to varying degrees, but apparently Bose's decades of working at MIT convinced him that it is, as an institution, overall a force for good in the world.
They are relatively good but absolutely terrible. -- Alan Kay, commenting on Apollos