I can't tell if this comment is meant to be ironic or not.
I completely agree with this sentiment. I'm in favor of anything that will make fewer people use Java. Oracle has proved time and again that they can't clear even the lowest of bars for security, and the language itself has simply fallen behind similar languages that developers should be using instead.
I agree the 'processing power' part sounds a bit silly - I'll be waiting to see if that's anything other than like...streaming?
Most consumer internet connections don't compete with those at datacenters, in either speed or quality. Having one end of the connection at a datacenter makes a huge difference. I played shooters for years, and when you're playing on a server you get a consistent experience that is better than all but the very best matchmade games going over p2p. The host can drop, their connectivity can get worse - your gameplay gets interrupted a lot by technical bullshit you just don't get on a well-run datacenter-based system.
It's possible they will mess it up, and not handle the geographical component of their strategy well. Like, if I'm playing in Australia, a peer-to-peer solution will work a lot better if there are no servers in Australia. I don't think they'll let that happen, but who knows.
SNES is not a good comparison, but it's possible your point about 10 years is valid - we'll just have to wait and see. In 10 years, you can likely buy the PC port of any 10-year old game for a few bucks.
The PC is a great option, but PC developers want to protect their investments (which can be huge) more than console developers want to. At least with a console, it's a real pain to get everything set up so you can pirate games - on the PC, developers have a larger incentive to make their games online-only. If you only play F2P games anyway, you're already not buying an Xbox...
Ironically, this approach will likely produce the opposite effect. For example, you can't really play Call of Duty: Modern Warfare 2 multiplayer at all anymore. Why? Because the only way to play is to run a peer-to-peer game with whoever else happens to be playing. Chances are, they are all far away and their internet connections suck, so the game just sucks as a result, and you have to buy the newest version to actually get good connectivity.
If you're building your game to leverage server resources, players just connect to a datacenter, and get matchmade with other players there, likely pairing players with similar latency. Even if there are relatively few people playing, you'll probably get a pretty good experience, as at least one end of the connection for all players is pretty solid.
It seems like the whole point of the system is to address this very problem. Game publishers don't need to invest so much in hardware, and server resources are made available to games on an as-needed basis. If your game has 50 players, it'll probably do just fine with a server running on a virtual machine somewhere, alongside 20 other games on the same hardware. Microsoft could still screw up on the total-capacity side when they're hit with a big release, but smaller games will likely benefit.
I completely agree with the parent.
To me, the main difference is the way Google makes business decisions with their consumer products. With Gmail, for example, Google really goes out of their way to make it incredibly easy to migrate your entire mailbox to another service. Unlike Hotmail, which for years didn't allow blanket forwarding rules, checking an essentially arbitrary number of POP3 accounts, or sending mail from any domain, Google bent over backwards to just do what would make their service the best, even if that meant making it easier to lose users to the competition. Google, from what I can tell, considers it a priority to set up their services so the incentives drive them to make the product better, not worse. With Google Drive, they don't even really need to do this, since there's very little stopping users from dragging their files over from Google Drive into Dropbox if they like it better. There are even multiple ways of integrating Dropbox with Gmail, many of which are free - some even provide drag-and-drop support.
Microsoft, on the other hand, continually goes out of its way to do the 'dick move' to their consumers. With the Xbox 360, they went out of their way to make most USB storage devices work with the console, BUT intentionally placed a limit so you could only access the first 16GB of any device, forcing consumers to buy the Xbox 360-specific hard drives if they wanted more than that amount of space. Microsoft doesn't apologize - they just say "yes, we went out of our way to intentionally inconvenience you because we think it will make more money in this case, and that's what we'll do every time."
Another great example is PDF support in Office. Historically, Office for Mac had the option to save or print to PDF. On Windows, they just left this out for more than a decade, on purpose, until finally in 2006 they caved, probably under competitive pressure and their corporate customers whining about it so much. As much as I think PDF is junk, you can't argue it wasn't a widely used format that they could have easily supported, and it wasn't Adobe stopping them. They intentionally did it just to be dicks - they had a reputation to uphold, after all.
Microsoft's version of Java is another move that just seemed to be made intentionally out of spite towards Java developers. They released a modified version of Java that wasn't compatible, only to then abandon it once a bunch of Java developers had migrated to it. It's hard not to think the whole thing was just a plan to fuck with people.
Except that decentralized digital cash is inherently flawed, since the tokens will always grow linearly in the number of transactions they are used for. In other digital cash systems, this problem is solved by having an issuing authority (bank, government, etc.) that accepts old tokens and issues fresh tokens. In the case of Bitcoin, no such authority exists, so the tokens are just going to keep getting bigger, and eventually they will be too large to be useful.
This is total bullshit, presumably based on a lack of understanding of how Bitcoin works. The phrase "accepts old tokens and issues fresh tokens" is completely meaningless in the context of Bitcoin, which has a universal log of all transactions maintained by miners, not any actual tokens at all.
I was about to release my new game Elder, which involves an old guy trying to climb up mountains...
I also work at M$ (contractor!) but not on Kinect, and those demos were definitely legit. My office happens to be near where it's worked on, and I've playtested it briefly on several occasions. I think today's demo and the hype don't nearly do the platform justice - I've already gone to GameStop to (try to) pre-order...it's frikkin' amazing.
If you watch the video carefully, you'll notice there are essentially two types of use of the platform:
1. Most games seem to have a delay between when you move and when that movement shows up on screen. These games are either ones where you notice something you have to react to, you react, and then you see something happen after a delay, or ones where you sorta 'pre-act' moves you know are coming. If you watch the part of the video where they are avoiding things on the track, you can see them move their bodies early, anticipating that the game won't register the move in time if they jump in sync with what they see.
2. The dance game seemed to do a kind of post-analysis to see if what you did was correct - I think this is very similar to existing singing games out there: you calibrate it so you can sing along with the music as you hear it, but the scoring mechanism doesn't tell you how well you're doing as fast as you're doing it. I'm pretty sure they must be doing the same thing here - you dance to what you see, and the scoring chimes in a moment later with "yup, that last move was great" or whatever. If you look on the right side you can see the upcoming moves - that's how you know what to do next. You can also see yourself moving in a small box on the right - I think if you look there you'll see yourself delayed.
I play FPS games - 30 fps is fine as far as visual quality goes - sure, 60 fps is better, but I don't care - it's not the visual quality that concerns me.
What does concern me is the delay in getting the information I need when I'm playing.
Ideally, I'd like infinite FPS - then, when an opponent appears, I'd see it as soon as possible after the data makes it from the computer to the monitor. At 30 FPS, there is an additional delay of up to ~33 ms, averaging roughly 17 ms. At 60 FPS, that additional delay is cut in half, and at 120 FPS, it's cut in half again. In short, I get relevant information sooner, and that makes me play better.
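The arithmetic above can be sketched as a quick back-of-the-envelope calculation (a simplified model that assumes the event lands uniformly at random within a frame interval, ignoring input, network, and display lag; the function name is just illustrative):

```python
# Extra display latency added by a finite frame rate, assuming the
# in-game event occurs uniformly at random between frame updates.
def added_latency_ms(fps):
    frame_time = 1000.0 / fps       # ms between consecutive frames
    return {
        "worst": frame_time,        # event happens right after a frame is drawn
        "average": frame_time / 2,  # expected wait under the uniform assumption
    }

for fps in (30, 60, 120):
    d = added_latency_ms(fps)
    print(f"{fps:>3} FPS: worst ~{d['worst']:.1f} ms, average ~{d['average']:.1f} ms")
```

Each doubling of the frame rate halves both the worst-case and average added delay, which matches the 30 → 60 → 120 FPS progression described above.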
Often, battles in FPS games are literally two people who both shoot each other in the head for a one-shot kill as soon as they see each other. Players want to minimize any delay so the game will decide they shot first, and win the encounter - every little millisecond matters, as any skilled gamer knows all too well.
Any program which runs right is obsolete.