
Comment Reselling (Score 1) 258

This is what bothers me the most. Of course Sony doesn't want someone to play a game and then resell it - they don't get a cut. This and piracy are the biggest reasons for the change (battery life? sure, they care sooo much how long you can play with their console). But what about the people who simply don't have the money?

As a kid, the amount of money I had pretty much forced me to choose: buy the hardware to play games, or buy the games themselves. I couldn't afford both, so I chose the hardware. With consoles, I bought used games or new ones (to sell them later). There was also rental. Unless you do these things, games are way too expensive (at least to a kid). On the PC I just pirated whatever I wanted. It didn't feel right, but I kept telling myself that I only downloaded/copied what I couldn't have afforded anyway.

But now, as an adult, I do buy games. I don't play as much as I used to, so I can afford it (although I usually wait until the price drops). But thanks to my reselling/renting/pirating childhood, I'm pretty much a gamer for life. If eBay had existed back then, I'd also be much more used to buying games.

When I do buy a new, full-priced game, I still like to think it's not that expensive, because I can always sell it on eBay. I usually don't, but it makes it easier to buy something without thinking too much about it. With an app store, I know I don't really own the game. What I bought is the privilege to play it almost immediately (with no fear of scratched DVDs), for as long as the company that sold it to me exists. That's it. App stores that sell games at retail price are complete insanity. Some even use BitTorrent, so there isn't even much bandwidth to pay for.

But you know, the market will decide. According to Sony, that means "new PSP Go owners", not "people with old PSPs who bitch about being ignored", and they're probably right.

This is the world we live in: buy a product and you're dead to the company that made it - unless you have some sort of support contract.

Comment Re:Why not the PS3? (Score 1) 101

Or use OpenCL and choose any GPU vendor that supports it (ATI and nVidia already do - in beta).

I must admit that CUDA is pretty easy: you'll understand the basics and have a simple application running in less than a day. I have some experience with programmable shaders and I know how classic GPGPU works, but that's a lot more complicated than using CUDA. I'm not sure what kinds of features the Xbox provides, but I doubt it's easier than that. And OpenCL is almost the same concept; it just uses a compilation model that's more similar to what shader programmers are used to.
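To give an idea of what "pretty easy" means, here's a minimal CUDA sketch of the classic element-wise vector add (my own toy example, nothing from the article; the kernel name, sizes and block dimensions are made up):

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Each thread handles exactly one element - the "massively parallel,
    // no synchronization" kind of problem that GPUs like.
    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // 256 threads per block is an arbitrary but typical choice.
        const int threads = 256;
        add<<<(n + threads - 1) / threads, threads>>>(da, db, dc, n);
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

        printf("c[123] = %f\n", hc[123]);   // expect 369.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

The kernel itself is ordinary C plus an index calculation; the rest is just allocating and copying memory. That's the entire learning curve for a first program.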

Comment Re:There is no way an AI can build a cleverer AI. (Score 1) 482

Let me guess, you had one course / book about CS theory or something and now you just throw half-understood concepts at us. (I'll try that too!)

The problem is that it is not that easy to understand the implications of things like the halting problem. It is very important for the general theory of computation, but it does not mean that there isn't a significant subclass of programs for which the halting problem is decidable. void main(){} halts, void main(){while(true);} does not, and it is easy to see - for us or for a computer - why that's the case. A similar thing happened with languages learnable in the limit, until pattern languages came along. Just because you found out that you can't do something in general doesn't mean there isn't a huge number of useful instances where you still can.
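To make the "decidable subclass" point concrete, here's a toy sketch of my own (not anything formal): take a mini-language whose only loop is a counted repeat with a constant bound - essentially the classic LOOP programs - and halting becomes trivially decidable, because every such program terminates. Add an unrestricted while and you're back in the general, undecidable case.

    #include <stdio.h>

    /* A toy language: a program is a list of statements, each either a plain
       assignment or a loop that runs a fixed, constant number of times.
       There is deliberately no unrestricted while/goto/recursion. */
    enum Kind { ASSIGN, REPEAT_N };

    struct Stmt {
        enum Kind kind;
        int repeat_count;   /* only meaningful for REPEAT_N */
    };

    /* For this restricted class, "does it halt?" is decidable - trivially:
       assignments terminate, and a loop with a fixed bound terminates too. */
    int halts(const struct Stmt *prog, int len) {
        (void)prog;
        (void)len;
        return 1;   /* every program in this class halts */
    }

    int main(void) {
        struct Stmt prog[] = { {ASSIGN, 0}, {REPEAT_N, 1000}, {ASSIGN, 0} };
        printf("halts: %s\n", halts(prog, 3) ? "yes" : "no");
        return 0;
    }

The interesting question is how far you can push the boundary of such a subclass, not whether the fully general problem is solvable.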

That's why you shouldn't limit cleverness like that. Also, why does HAL-2 have to solve all the mathematical problems that HAL solves, instead of just "more"? What if HAL uses an evolutionary algorithm and creates HAL-2 by "informed accident"? And how many mathematical problems can bacteria solve? By your theory, every ancestor of humans (bacteria included) should be able to solve at least as many as we do.

I'm guessing that the truth about how intelligence developed from non-intelligence is hidden somewhere in the concepts of evolution and emergence. If you don't fully understand those things (I know I don't), don't make predictions about what intelligence can or cannot do - and that includes the creation of cleverer AI.

Comment Re:Flying Car (Score 1) 712

I think the author made pretty much the same mistake. If you measure progress in "number of unrelated things that blew my mind per decade", then that rate might be (temporarily) going down. If you mean overall progress in all fields that support small incremental changes, then that rate is going up. Way up.

Comment Re:The article's author is confused (Score 1) 712

I too think the author used the wrong yardstick - you could make a similar argument by counting the number of "fields of science". Just because not that many completely-unheard-of things get invented anymore doesn't mean that overall progress, or complexity as you put it, isn't continuing at the same or a higher rate. The biggest change has already been introduced with computing, and since most future changes will somehow be related to it, they will look smaller in comparison. Strong AI could be considered a "game-changing" invention - if it arrived unexpectedly instead of through small increments.

It's hard to keep track of the state of science today. I read a lot of science news and try to stay informed... but Kurzweil's books (he tends to write about advances in many different fields) and TED talks still surprise me and make me wonder how I could have missed all these new things. So if the author wants that "big advances every decade" feeling of the 20th century, he should probably go live in a cave for ten years and then check back. The little increments seem to ruin the perception.

Comment Re:New grey goo milestone (Score 5, Insightful) 119

the "grey goo threat" might be something to be considered, but it shouldn't stop us from further exploring micro/nanobots. I'm tired of hearing someone shout "grey goo!" or "skynet!" every time there is some advance in nanotechnology or AI (and I mean the ones who are actually being serious about it). You can't stop the progress in these fields (and you shouldn't, considering all the positive aspects), or just repeat fear-mongering from luddites/attention-whores/sci-fi-writers. Instead, try to understand current research and help to find ways to make these things safe!

Comment Re:The math (Score 1) 295

You're right, there's nothing about performance in that law. But with GPUs things are a bit different: if you can squeeze twice as many shader units onto the die, you'll probably get almost twice the performance, as long as you stick to the special class of "GPU-compatible" programs (those that need massive parallelism with little synchronization, etc.).
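A rough sketch of what that special class looks like (toy kernels, names made up by me): the first one scales almost linearly with the number of shader units because every thread is independent; the second needs all threads to coordinate on a single result, and that coordination is exactly the part that extra units don't speed up.

    // Scales with the number of shader units: each thread touches its own element.
    __global__ void scale(float *x, float k, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= k;
    }

    // Needs synchronization: every thread adds into one shared result
    // (assumed to be zeroed before launch), so the atomic updates serialize
    // and extra units help much less.
    __global__ void sum_all(const int *x, int *result, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) atomicAdd(result, x[i]);
    }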

Though I would have expected the GPU people to use some of those extra transistors to implement double precision and generally make the GPU cores look more like CPU cores (more pipelining, branch prediction) to make them better suited for more complex problems like raytracing.

Comment Re:Think back 17 years (Score 1) 633

Unfortunately there is a difference between CD-Rs and pressed CDs. The best idea would probably be to keep the data redundantly on multiple CD-Rs (from different manufacturers) and on other media like USB sticks (as others have already mentioned). If you can, check the data for integrity every 5 years or so, even if that kind of destroys the "time capsule" idea.
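For the integrity check, the simplest scheme is to store checksums next to the data and recompute them every few years. In practice I'd just use md5sum or sha1sum and diff the output; the sketch below hand-rolls a 64-bit FNV-1a hash only to show the idea, and the file name is made up.

    #include <stdio.h>
    #include <stdint.h>

    /* 64-bit FNV-1a: tiny, well-known, good enough to detect bit rot.
       It is not cryptographic - use a real checksum tool for anything serious. */
    uint64_t fnv1a_file(const char *path) {
        uint64_t h = 0xcbf29ce484222325ULL;   /* FNV offset basis */
        FILE *f = fopen(path, "rb");
        if (!f) return 0;
        int c;
        while ((c = fgetc(f)) != EOF) {
            h ^= (uint64_t)(unsigned char)c;
            h *= 0x100000001b3ULL;            /* FNV prime */
        }
        fclose(f);
        return h;
    }

    int main(void) {
        /* Store this line in a manifest on every copy; recompute and compare later. */
        printf("%016llx  photos_1992.tar\n",
               (unsigned long long)fnv1a_file("photos_1992.tar"));
        return 0;
    }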

Comment Re:Free will (Score 2, Interesting) 438

Try to review what you think you know about "free will" and "punishment"; I had some misconceptions about both. There is no reason to believe that something like "free will" exists - it really is more of a religious idea, but somehow many non-religious people cling to it. Of course I have the impression that all of my actions originate in me, but that's just how consciousness works. Anyone who believes in free will is implying that his brain doesn't obey causality, and that's not a claim that can just stand there unproven... (And don't bring up quantum mechanics at this point unless you're a physicist.)

What's important to understand is that something being deterministic doesn't mean it doesn't look random to someone who doesn't have all the data. So your thoughts will continue to look like free will, even if you know it doesn't exist. And something like "so now we have to let all the criminals go because they had no choice?" is a completely false implication.

But that's the other misconception: law/justice should have nothing to do with punishment or revenge. It is supposed to be a solution to a problem. If you have a violent criminal, you want to put him away so he can't attack normal people. If someone stole something, you want to make him give it back and maybe add some incentive not to do it again (what you might call punishment - but it only works on rational-minded people). So what about psychopaths, child abusers and so on? You can 1) put them away (doesn't really solve the problem, just the symptoms), 2) kill them (barbaric, innocent people will die as well, and also not really a solution), or, once it's available, 3) correct what's physically wrong with them.

So if your justice system is based on revenge rather than problem-solving, then I hope these advances will affect it. As for society, I guess the effect will be a lot of misunderstandings, fear and knee-jerk reactions. As usual.

Comment Re:Is fiction driving science? (Score 1) 652

I guess that could happen, but by the time there are (working, independent) androids humans will have altered themselves so much that the differences become harder and harder to see. That president might just be a former biological human who replaced his body (including the brain) with an artificial one.

...this does sound a bit like Futurama, doesn't it?

Comment Re:Scientists watch too many movies. (Score 1) 652

Absolutely... If you look at the source of these concerns, it usually boils down to fiction. Fiction that was written to be entertaining, which usually means there has to be some villain who threatens the good guys. Most of these stories carry the same message: "Technology is good up to some point (usually the point where we are right now); everything beyond that is extremely dangerous and morally wrong. Embrace what you have instead." Sounds nice, doesn't it? Yet there is absolutely no rational reason behind it.

There is so much wrong with AIs as presented in movies and books that I can't even begin to describe it. (Actually, I can: 1. If an AI develops emotions, they were probably programmed in, not just magically "there". 2. It's unlikely that the programmer loses control over an AI, or doesn't understand how it works - even if it is grown by some overly complex evolutionary algorithm, you still know what it can and cannot do, unless it runs Windows or something. ...) Mostly it's just uninformed garbage dreamed up by people with a very shallow grasp of science who think their story needs a "realistic" doom scenario and some kind of moral message.

Artificial intelligence has become a punching bag for bad science fiction authors. You really need to differentiate between what's a real danger and what comes entirely from fiction. And since there has never been a human-level AI, ALL of these concerns come from fiction, and most of the people who actually have the knowledge to make accurate predictions have better things to do.

But maybe this will escalate, with all the Luddites going to anti-AI conventions, selling robot repellent and passing stupid laws. At least they'll get their very own "Bullshit!" episode.
