Every time some name becomes a noun (or a noun becomes a verb), a meme becomes mainstream, or anything slightly changes its meaning to settle in a niche, there will always be a lot of ranting from people who find this annoying or lame, making it something of a scandal.
I'll just call that "GATEGATEGATE".
This is what bothers me the most. Of course Sony doesn't want someone to play a game and then resell it - they don't get a cut. This and piracy are the biggest reasons for the change (battery life? sure, they care sooo much about how long you can play on their console). But what about the people who simply don't have the money?
As a kid, the amount of money I had pretty much forced me to choose: buy the hardware to play games, or buy games. I couldn't afford both. So I chose the hardware. With consoles, I bought used games or new ones (to sell them later). There was also rental. Unless you did those things, games were way too expensive (at least for a kid). On the PC I just pirated whatever I wanted. It didn't feel right, but I kept telling myself that I only downloaded/copied what I couldn't have afforded anyway.
But now, as an adult, I do buy games. I don't play as much as I used to, so I can afford it (although I do usually wait until the price drops). But thanks to my reselling/renting/pirating childhood, I'm pretty much a gamer for life. If we'd had eBay back then, I'd also be much more used to buying games.
When I do buy a new, full-priced game, I still like to think it's not that expensive, because I can still sell it on eBay. I usually don't, but it does make it easier to buy something without thinking much about it. With an app store, I know that I don't really own that game. What I bought is the privilege to play it almost immediately (with no fear of scratched DVDs), for as long as the company that sold it to me exists. That's it. App stores that sell games at retail price are complete insanity. Some even use BitTorrent, so there's not even much bandwidth to pay for.
But you know, the market will decide. According to Sony, that means "new PSP Go owners", not "people with old PSPs who bitch about being ignored" - and they're probably right.
This is the world we live in: buy a product, be dead to the company that made it - unless you have some sort of support contract.
Or use OpenCL and choose any GPU vendor that supports it (ATI and nVidia already do - in beta).
I must admit that CUDA is pretty easy: you'll understand the basics and have a simple application running in less than a day. I have some experience with programmable shaders and I know how shader-based GPGPU works, but that's a lot more complicated than using CUDA. I'm not sure what kinds of features the Xbox provides, but I doubt it's easier than that. And OpenCL is almost the same concept; it just uses a compilation model closer to what shader programmers are used to.
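For a sense of scale: a complete CUDA program that actually uses the GPU fits on one screen. Here's a minimal sketch of my own (the classic vector add - my naming, not from any SDK sample), and it already contains everything the basics require: a kernel, memory transfers and a launch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* The kernel: each thread adds one pair of elements. */
    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

        /* Copy inputs to the GPU, run one thread per element, copy back. */
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);
        add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", hc[0]);  /* expect 3.000000 */
        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Compare that with shader-based GPGPU, where adding two arrays means setting up a graphics context, textures, a framebuffer and a render pass first.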
Let me guess: you took one course (or read one book) on CS theory or something, and now you're throwing half-understood concepts at us. (I'll try that too!)
The problem is that it is not that easy to understand the implications of things like the halting problem. It is very important for the general concept of computation, but it does not mean that there is no significant subclass of programs for which the halting problem is decidable. int main(){} halts, int main(){while(true);} does not, and it is easy for us (or a computer) to see why that's the case. A similar thing happened with languages learnable in the limit from text, until pattern languages came along. Just because you found out that you can't do something in general doesn't mean there isn't a huge number of useful instances where you still can.
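To make that concrete, here's a toy sketch of my own (the mini-language and the string checks are completely made up for illustration): for straight-line programs - no loops, no recursion, no gotos - halting is trivially decidable, and a checker can simply refuse to answer outside that subclass:

    #include <stdio.h>
    #include <string.h>

    /* Toy halting checker for a made-up mini-language without recursion:
       straight-line code always halts, one obvious pattern never does,
       and everything else is honestly answered with "don't know". */
    const char *does_it_halt(const char *src) {
        if (strstr(src, "while(true);")) return "does not halt";
        if (!strstr(src, "while") && !strstr(src, "for") && !strstr(src, "goto"))
            return "halts";          /* no loops => straight-line => halts */
        return "don't know";         /* outside the decidable subclass */
    }

    int main(void) {
        printf("%s\n", does_it_halt("int main(){}"));             /* halts */
        printf("%s\n", does_it_halt("int main(){while(true);}")); /* does not halt */
        printf("%s\n", does_it_halt("int main(){while(f(x));}")); /* don't know */
        return 0;
    }

Real termination checkers are of course far more sophisticated, but the principle is the same: give up on the general case, win on a useful subclass.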
That's why you shouldn't limit cleverness like that. Also, why does HAL-2 have to solve all the mathematical problems that HAL solves instead of just "more"? What if HAL uses an evolutionary algorithm and makes HAL-2 by "informed accident"? And how many mathematical problems can bacteria solve? By your theory, all ancestors of humans (bacteria included) should be able to solve at least as many as we do.
I'm guessing that the truth about how intelligence developed from non-intelligence is hidden somewhere in the concepts of evolution and emergence. If you don't fully understand these things (I know I don't), don't make predictions about what intelligence can or cannot do, and that includes the creation of cleverer AI.
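And "informed accident" is easy to demonstrate in miniature. This is purely my own toy (a (1+1) evolution strategy on a made-up fitness function), but it shows the mechanism: the loop keeps producing better "offspring" without ever understanding why they are better:

    #include <stdio.h>
    #include <stdlib.h>

    /* Fitness is a stand-in for "number of problems solved"; the optimum
       is at x = 3.14, but nothing in the loop knows that analytically. */
    double fitness(double x) { return -(x - 3.14) * (x - 3.14); }

    int main(void) {
        srand(42);
        double parent = 0.0;
        for (int gen = 0; gen < 10000; gen++) {
            /* random mutation: the "accident"... */
            double child = parent + ((double)rand() / RAND_MAX - 0.5);
            /* ...kept only if it scores better: the "informed" part */
            if (fitness(child) > fitness(parent)) parent = child;
        }
        printf("best x = %f (optimum 3.14)\n", parent);
        return 0;
    }

Random mutation supplies the accident; selection supplies the information.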
More people will volunteer if they think they might still come back somehow:
"OK, for the trip back you get one empty tube of toothpaste, two cigarettes, one paperclip and one Richard Dean Anderson"
I too think that the author used the wrong yardstick - you could make a similar argument by counting the number of "fields of science". Just because not that many completely-unheard-of things get invented doesn't mean the overall progress, or complexity as you've said, doesn't continue at the same or a higher rate. The biggest change has already been introduced with computing, and since most future changes will somehow be related to that, they will look smaller in comparison. Strong AI could be considered a "game-changing" invention or something - if it came unexpectedly instead of through small increments.
It's hard to keep track of the state of science today. I read a lot of science news and try to stay informed... but Kurzweil's books (he tends to write a lot about advances in many different fields) and TED talks still surprise me and make me wonder how I could have missed all these new things. So if the author wants that "big advances each decade" feeling of the 20th century, he should probably go live in a cave for ten years and then check back. The little increments seem to ruin the perception.
You're right, there's nothing about performance in that law. But with GPUs, things are a bit different: if you can squeeze twice as many shader units onto the die, you'll probably get almost twice the performance, as long as you stick to the special class of "GPU-compatible" programs (those that need massive parallelism with little synchronization, etc.).
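A toy example of what I mean by that class (my own sketch, not a real benchmark): every thread touches exactly one element and nothing else, so there is no synchronization to get in the way of scaling:

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Hypothetical "GPU-compatible" workload: independent per-element
       work, no communication between threads. Double the shader units
       and you roughly double the throughput. */
    __global__ void brighten(float *pixels, int n, float gain) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) pixels[i] *= gain;
    }

    int main(void) {
        int n = 1 << 22;  /* ~4M "pixels" */
        float *p;
        cudaMalloc(&p, n * sizeof(float));
        cudaMemset(p, 0, n * sizeof(float));
        brighten<<<(n + 255) / 256, 256>>>(p, n, 1.1f);
        cudaDeviceSynchronize();
        cudaFree(p);
        printf("done\n");
        return 0;
    }

The moment threads have to share results or take wildly different branches (reductions, raytracing), the scaling stops being that clean.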
Though I would have expected the GPU people to use some of those extra transistors to implement double precision and generally make the GPU cores look more like CPU cores (more pipelining, branch prediction) to make them better suited for more complex problems like raytracing.
Only through hard work and perseverance can one truly suffer.