It does seem like this may be the RTM build, although the timing is a little early yet.
My first reaction was that the build number 7600 is suspiciously similar to XP's build 2600 (yeah, I'm grasping at straws here). It would be in MS's favor to strongly relate this release to XP and distance themselves from referencing Vista, which the correlation I just noted might help back up in people's minds.
However, the timing is just a little too early. The general retail release date stated at June's Computex is October 22. Historically, an MS OS RTM lands 30-45 days before the general retail date, which would place the RTM at the beginning of September at the earliest. Even a generous 60-day window would only push the date back to late August, over a month from now. Pressing and stamping aside (and who's to say an RTM image can't be downloaded over the net from a registration server, the way volume and open license customers already can), that's a little early yet.
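The date math above is easy to sanity-check. A throwaway snippet, assuming the year is 2009 and the 30-60 day RTM-to-retail gaps mentioned:

```python
from datetime import date, timedelta

retail = date(2009, 10, 22)  # general retail date stated at Computex (year assumed)

# Historical 30-45 day RTM-to-retail gap, plus a generous 60-day case
rtm_dates = {gap: retail - timedelta(days=gap) for gap in (30, 45, 60)}
for gap, d in rtm_dates.items():
    print(f"{gap}-day gap -> RTM by {d}")
# 30-day gap -> RTM by 2009-09-22
# 45-day gap -> RTM by 2009-09-07
# 60-day gap -> RTM by 2009-08-23
```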
And can anyone draw any significance from 16384 being 2^14? Or would that just indicate something like the 14th build of the master OS?
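For the curious, checking the power-of-two claim is a one-liner; just a throwaway snippet, nothing tied to Microsoft's actual build numbering:

```python
n = 16384
# A positive power of two has exactly one bit set, so n & (n - 1) == 0
is_pow2 = n > 0 and (n & (n - 1)) == 0
print(n == 2 ** 14, is_pow2)  # -> True True
```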
It's a simple … that was borked; if you look at the source, it shows as … instead of …
They'll fix it soon enough... mods, feel free to mark this down as overrated or offtopic.
Sorry, I meant vSphere. I was trying to think off the top of my head what came after VMware Infrastructure. I should have gone and looked rather than depend on memory.
I believe that rather than moving to 1U servers, many companies will be (and already are) looking at virtualization via VMware and Hyper-V. Yes, a 1U server attached to about 2TB of space allocated on a backend CLARiiON and running a low-end Oracle or SQL database is better power-wise than a 4-6U server with 2TB (plus redundancy) of local space running the exact same workload.
However, IF that workload is capable of being run as a VM guest, you could put it and 20 other virtual servers on a maxed-out (full hardware complement) HP DL580 G5 running ESX hooked to a backend SAN, and save even more power than breaking those out individually onto separate 1U servers connected to the SAN. Multiple ESX hosts clustered for failover redundancy, at that, in case a host server goes down.
A better decision, though, if you really have the capacity need for it, is to set up a database farm (with multi-host redundancy) of those 4U servers and run multiple databases off of it, not just one. Do the same for your Apache farm, your SQL farm, etc.; let the VM hosts handle the servers that can't be placed in a farm.
In order of preference, then: farms broken out by function > VMs > individual servers in the lowest form factor possible > individual servers on big hardware, leaving tons of wasted resources.
Note... That's Don Davis, not to be confused with Don S. Davis, aka General Hammond from SG-1.
Although, I do believe that somewhere in the SG-1 mythos it was suggested that Tunguska was either a failed Asgard or Goa'uld experiment, or a weapons blast from an orbiting Ha'tak mothership.
Not that that has anything to do with this article or anything....
The only warp capability you might possibly get with your laptop would be of the OS/2 variety.
Read the article. The plant is being constructed a day's walk from Rome.
This page is intentionally blank. Yes, that's a challenge.
Actually, looking at the battery, it looks like the exact same type of battery you'd find in a small APC (450-800VA) UPS. We also used the same batteries for emergency power in our door access systems, to keep the controllers running, back when I was managing those at a small college. That type of battery is widely used to ride out short-term power outages.
I presume, given the amount of hardware shown (2 drives, 2 processors, motherboard, RAM), that the battery would probably last that system about 7-10 minutes... plenty of time for the electrical system to fail over to the generator farm (you know they have more than 2 for redundancy).
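A back-of-envelope check of that runtime guess, assuming the common 12V/7Ah SLA size and a guessed ~250W load (every figure here is an assumption, not a measurement):

```python
voltage_v = 12.0    # typical small SLA battery voltage (APC 450-800VA class)
capacity_ah = 7.0   # common 12V/7Ah form factor (assumed)
derate = 0.5        # SLA capacity drops sharply at high discharge rates (Peukert effect)
load_w = 250.0      # rough guess for 2 drives, 2 CPUs, board, and RAM

usable_wh = voltage_v * capacity_ah * derate
runtime_min = usable_wh / load_w * 60
print(f"~{runtime_min:.0f} minutes")  # -> ~10 minutes, right at the top of the 7-10 minute guess
```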
As to the lifetime on those batteries... I was replacing them every 3-3.5 years, maybe 4 if I was lucky. It's a standard generic battery, and the failure rate on them is quite low.
I'd echo another user... If Google wanted to be smart, they wouldn't bother repairing a server when a component fails. Server obsolescence at a company that can afford it runs about 3-4 years... pretty close to the lifetime of these batteries. They'd probably just pull the main power on it, and once a threshold of servers in the container is "dead," pull the container offline for renovation: either to repair the bad servers, or just retire everything.
Think someone will have to then report you for using the Hot Coffee mod.
Our OS who art in CPU, UNIX be thy name. Thy programs run, thy syscalls done, In kernel as it is in user!