You can make a Linux executable quite easily using a trick similar to the Windows version: concatenate a shell script that re-executes itself as a JAR with an actual JAR file. It works because ZIP readers (and a JAR is a ZIP) locate the archive's index from the end of the file, so the prepended script doesn't break the archive.
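Something like this sketch (file names are illustrative, and it assumes `java` is on the PATH when the result is actually run):

```shell
# Build a self-executing JAR: a tiny shell stub prepended to the archive.
# If no real JAR is handy, drop in a placeholder so the steps still run.
[ -f app.jar ] || printf 'PK\003\004' > app.jar

cat > stub.sh <<'EOF'
#!/bin/sh
exec java -jar "$0" "$@"
EOF

# The stub re-executes the combined file as a JAR; ZIP readers find the
# central directory at the end of the file, so the prefix is harmless.
cat stub.sh app.jar > app.run
chmod +x app.run
```

After that, `./app.run` launches the JAR directly.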
Reading a forum gives you an idea of what its posters like, but it's not necessarily the best guide overall. The people who post on gaming forums are a self-selected subset amounting to a couple of percent of the total player base, tops, so their opinions may not reflect everyone who plays the game. Most notably, they're going to be more hardcore than average.
There is no war game, simulation, or RPG mechanic so utterly baroque that someone won't decry streamlining it as 'dumbing down' the game, and inevitably that someone posts on the developer's forum. People got unbelievably pissed off when Dungeons and Dragons got rid of THAC0 and made higher armor classes better, even though all THAC0 did was complicate the rule set and give newcomers one more reason not to play past their first game. D&D 4e, among many other changes, eliminated enemies that drain levels on touch, since permanently weakening a PC is miserable, disproportionately hits melee classes, and brings the game to a halt while everything gets recalculated every time someone is hit.
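The THAC0 change is a good example of streamlining that changed nothing mechanically: the old descending-AC math and the new ascending-AC math are the same test, just stated differently. A quick sketch (the specific numbers are made up for illustration):

```python
# Descending AC + THAC0 vs. ascending AC: same math, different presentation.

def hits_thac0(roll, thac0, desc_ac):
    # Old style: you hit if the d20 roll meets THAC0 minus the target's
    # (lower-is-better) armor class.
    return roll >= thac0 - desc_ac

def hits_ascending(roll, attack_bonus, asc_ac):
    # New style: roll plus attack bonus meets or beats a higher-is-better AC.
    return roll + attack_bonus >= asc_ac

# Conversion between the systems: bonus = 20 - THAC0, new AC = 20 - old AC.
thac0, desc_ac = 17, 4
attack_bonus, asc_ac = 20 - thac0, 20 - desc_ac

# Every possible d20 roll produces the identical hit/miss result.
for roll in range(1, 21):
    assert hits_thac0(roll, thac0, desc_ac) == hits_ascending(roll, attack_bonus, asc_ac)
```

The streamlining removed a subtraction players had to do in their heads, not any actual gameplay depth.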
Ultimately, designing a game is a different skill set from playing the same game. Players can give an idea of what they personally liked and disliked, but as a rule have a pretty terrible idea of what's possible and what's balanced. Designers who forget that are begging for trouble.
Rooting, in the strictest sense, means being able to run with administrative user privileges. It's just that administrative privileges aren't enough to actually change system files and still boot.
Because cell phones generally have support circuitry built into the CPU that prevents you from changing the OS. The public key is loaded into the PROM at manufacturing time, and absent the private key you're not going to put a new OS on the phone. The Droid X 'killswitch' most likely works that way: when someone replaces a critical signed file, the bootloader just screeches to a halt. This sounds like someone added a recovery partition with the original signed files, so the phone grabs the files from there and tries to boot again. If the recovery partition's files aren't correctly signed, the phone's a brick. I'd give it maybe a week or two before someone gets the brilliant idea to overwrite the recovery partition with unsigned files and we get a story about how the G2 has its own 'killswitch'.
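The file-check half of that scheme is easy to model. A real bootloader verifies an RSA/ECDSA signature over a manifest using the burned-in public key; this sketch (all names hypothetical) just models the per-file digest comparison that follows once the manifest itself is trusted:

```python
# Toy model of signed-boot file checks. Assumption: the manifest of expected
# hashes has already been signature-verified against the key in PROM.
import hashlib

def verify_files(manifest, read_file):
    """manifest: {path: expected SHA-256 hex digest}; read_file: path -> bytes."""
    for path, expected in manifest.items():
        actual = hashlib.sha256(read_file(path)).hexdigest()
        if actual != expected:
            return False  # tampered file: halt, or restore from recovery and retry
    return True

# Untouched system files pass; a replaced file fails the check.
files = {"/system/boot.img": b"original image"}
manifest = {p: hashlib.sha256(c).hexdigest() for p, c in files.items()}
assert verify_files(manifest, files.__getitem__)

files["/system/boot.img"] = b"rooted image"
assert not verify_files(manifest, files.__getitem__)
```

Overwriting the recovery partition attacks exactly the fallback step this check relies on, which is why that's the obvious next target.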
That's the problem with any software that's not running on bare iron. A C program running on Linux is limited in much the same way managed code is; it's just that the OS imposes those limits by delivering SIGSEGV, while a managed runtime imposes them by never handing you an invalid reference in the first place (it won't deallocate memory you still hold a reference to). If either really let you do whatever the hardware allows, that would be a tremendous security hole.
I wouldn't put it in terms of a specific poster given that I haven't seen their code to judge it in the first place and one of the points of GoF is that the patterns are really just techniques that get reinvented over and over and over by people to solve common broad classes of problems. For a lot of programmers, learning big-O notation was really just formalizing an intuition we've had about the speed of nested loops. Education gave us a firmer grasp of what it really means as well as a vocabulary to express it to others.
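That nested-loop intuition is easy to make concrete by just counting the inner-loop iterations, which is exactly what big-O formalizes:

```python
# One loop grows linearly with n; a nested pair grows quadratically.

def count_single(n):
    ops = 0
    for _ in range(n):
        ops += 1
    return ops          # O(n): n iterations

def count_nested(n):
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops          # O(n^2): n * n iterations

# Doubling n doubles the first count but quadruples the second --
# the growth rate, not the raw count, is what the notation names.
assert count_single(200) == 2 * count_single(100)
assert count_nested(200) == 4 * count_nested(100)
```

Most working programmers had already internalized "the inner loop is where the time goes"; the notation just lets you say it precisely to someone else.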
However, I've noticed that there's a contingent of self-taught programmers who learn some Turing-complete subset of a procedural language, plus some amusing anecdotes about systems programming from over 30 years ago, and have therefore attained the status of Programming God. They're the only ones able to see through the perfumed lies of all those college-educated frauds with their "event handlers" and "callbacks", and the only ones who understand the timeless elegance of a 50,000-line while(1) loop. In short, they're so incompetent it hurts, and they can't even tell, because they're unable to comprehend just how bad they are.
This is pretty ironic considering the circumstances. Their DRM code is pretty much the standard process and kernel isolation, plus hardware support for detecting whether anyone has tampered with the critical system files in order to bypass it.
I'm going to savor the day when there's an article about this awesome new feature in the Linux kernel that uses hardware encryption to verify the integrity of loaded kernel modules and prevent rootkits.
You can't anycast TCP, so this is a big boon for companies with lots of servers all over the world. The downsides involve bizarre cases: repressive governments that rule over their DNS servers with an iron fist but leave everything else alone, because that would be wrong. China already blocks websites and monitors everyone in the country, so this just offers them a less effective way to do what they already do. Companies wanting to use this to undermine their users' privacy can just look at the actual connections instead. Making sky-is-falling predictions about it only convinces people that these sorts of concerns are always misplaced, when the truth is that this proposal is innocuous even though other proposals out there do have great potential for abuse.
Ideally an OS should be able to mediate access to resources and provide sufficient isolation on its lonesome rather than needing to add more layers in the form of virtual machines to do its job. In the same vein, an OS should be able to provide a uniform interface for accessing the resources available even if they're physically not on the same box. Distributed single level storage would be the logical conclusion, and in a couple of decades a large server farm might start getting uncomfortably close to the 64-bit limit if everything on it shared a single physical address space.
That's what PAE is. To the process, the address space is just one huge flat expanse from 00000000 to 7FFFFFFF (or BFFFFFFF if the OS is configured that way and the software understands it). To the OS, processes are allocated RAM in 4 kB pages, which are mapped to their corresponding hardware frames via the page table. When the process accesses an address, the low 12 bits give the offset within the page, while the high 20 bits select the entry in the page table; that entry holds the hardware frame address that actually gets accessed. PAE allows that frame address to be wider than 20 bits, so the OS can address more than 2^32 bytes (4 GB) of physical memory transparently to the individual processes.
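A toy model of that translation, with the specific page and frame numbers made up for illustration:

```python
# Toy single-level model of the translation described above: 32-bit virtual
# address, 4 kB pages (12-bit offset, 20-bit page number). With PAE the
# frame address stored in the page table can exceed 32 bits; the process
# never sees anything but its flat 32-bit space.

PAGE_SHIFT = 12
OFFSET_MASK = (1 << PAGE_SHIFT) - 1     # low 12 bits: 0xFFF

def translate(vaddr, page_table):
    page = vaddr >> PAGE_SHIFT          # high 20 bits: index into the page table
    offset = vaddr & OFFSET_MASK        # low 12 bits: byte within the page
    frame = page_table[page]            # hardware frame base (may be above 4 GB)
    return frame + offset

# Hypothetical mapping: virtual page 0x12345 backed by a frame above 4 GB.
page_table = {0x12345: 0x1_2345_6000}
assert translate(0x12345ABC, page_table) == 0x1_2345_6ABC
```

Real PAE splits the 20-bit lookup across multiple table levels, but the offset/index arithmetic the process never sees is exactly this.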
The term 'privilege escalation' is utterly meaningless in this context. The code is running at kernel level. It defines what the privileges are and can do whatever it wants because it IS the OS.
The flip of a switch that subtly corrupts terabytes of data vital to a $300 billion government project, in a manner the engineers can't detect until planes start falling out of the sky.