We use a WSUS server and Local Update Publisher at work. It has been a bit of a pain at times: Adobe isn't fond of sticking to MSI standards and has published updates with bad MSI applicability-rule content (Windows Installer would still install them, but you had to edit the XML so WSUS could validate them). They also only publish MSI files for the ActiveX version of Flash Player, so we have to deploy the EXE version of the Mozilla plugin (WSUS can deploy EXE, MSI, and MSP files, but MSP files are the easiest).
It takes us about an hour to write and test the deployment rules for each update. We test against both WinXP 32-bit and Win7 64-bit targets, as they will sometimes need different applicability rules. Then we let the clients check in to see whether they mark the update as applicable. Once we are satisfied that all of the clients that need the update, and only those clients, will try to install it (we have had issues with this in the past), we mark the update as ready to install and the clients install it in the next cycle.
This usually means an update is out for 1-2 days before our clients have it installed, so if an exploit is being used broadly we will sometimes force clients to update via our inventory tool, which can have it done within an hour. We have had systems where the user was hit by this type of scareware drive-by install before the patch was even out.
In my experience, almost any IT admin who is actually qualified to make that choice (i.e., expert in both) would prefer Ubuntu, because it's easier, cheaper, and takes less time.
I agree; I've done both. By far the worst, however, are the systems where the user insists on dual booting.
Real disadvantages: You risk data loss with any application that stores critical data using either (1) a truncate/write method or (2) a write/rename method without asking the OS to sync its data. I think that far fewer than 95% of applications fall under (2), and every filesystem will have issues with (1).
For (1) there is nothing the OS can do for the application; just about any filesystem would lose data in this case, depending on how long it caches the writes in memory and whether the application has a chance to finish writing all of the data. (1) is clearly bad application code at fault. Ext4 does increase the write delay for the data, but any use of (1) is asking for problems if the system crashes, the disk fills up, etc.
For (2) the filesystem could implement atomic rename semantics, but that would come at a slight performance cost when the application didn't need the atomicity. This is more of a do-what-I-mean-not-what-I-say workaround, as I don't see many situations where (2) would be used without expecting atomic behavior. If the application didn't care about possible data loss in the file, (1) works well. The real fix, however, is to call fsync() on the file in the application code before the rename; it also makes the code more portable across POSIX filesystems.
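A minimal sketch of the write/rename pattern done properly, assuming a POSIX filesystem (the function name `atomic_write` is my own; the key points are the fsync() before the rename, and keeping the temporary file on the same filesystem so the rename stays atomic):

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace `path` with `data` so that a crash leaves either the old
    or the new contents on disk, never a truncated file."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # The temp file must live in the same directory: rename(2) is only
    # atomic within a single filesystem.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the data to disk before the rename
        os.replace(tmp_path, path)  # atomic rename on POSIX
    except BaseException:
        os.unlink(tmp_path)
        raise
```

Without the fsync(), a filesystem with delayed allocation (like ext4) may commit the rename before the file data, which is exactly the zero-length-file failure people complained about.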
If reliability was the only concern they would likely use ATM.
Speed is a major concern due to SAR (segmentation and reassembly) bottlenecks. Also, ATM networks are expensive, difficult to implement, and inefficient at bulk data transfers.
There was a patch (not by S3) for the XFree86 driver that disabled VESA mode probing. It would make S3 chipsets "work" with VESA drivers if you got your modelines just right in the config file. Even then there were other bugs: you had to start the kernel in framebuffer mode, and you could not go back to a console once X was started.
The stated reason from S3 for not supporting VESA probing was that probing had been used in the past to obtain firmware from the chipset, and that Windows (9x) did not probe for VESA modes (even when using VESA drivers).
I like to think that Linux support from manufacturers has come a long way since then, but it looks like some still don't care.
All those things (and many more) are just tales?
Quite frankly, yes. AFAIK most waste in the US is sold to Areva for reprocessing into usable material. Most uranium mines have been non-operational for several years due to the low price of uranium on the market, though this might change in the near future. In the US the NRC is quite strict about tracking where waste is shipped.
Full disclosure: I work in Richland, WA where Hanford is located. I do not work at Hanford myself but I know people who do.
If the thermometer were in the radiator, then you would see this. The thermostat shuts off most of the water cycling through the radiator and cycles it internally through the engine only. This keeps the engine running at optimal temperature and gets it warm faster when it is cold out. The dash temperature gauge in older injection cars is not coupled to the thermal sensor that controls the fuel mixture, but rather to the water temperature in either the radiator or the engine (I found this out myself when my thermal sensor went bad).
If it were extremely cold out, you might be able to cool off the engine significantly while it was running downhill at idle. The result would be that the fuel mixture would run a bit richer than normal for a short while as the engine warmed back up. A normal engine has a fairly high heat capacity and doesn't have that much surface area; it would take quite a bit of cold moving air to cool it down.
Force needed to accelerate 2.2 lbs of cookies at 1 meter per second squared = 1 Fig-newton
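For anyone checking the joke's math, a quick sketch of F = ma (2.2 lb is roughly 1 kg; the conversion factor 2.2046 lb/kg is standard):

```python
# 2.2 lb of cookies is approximately 1 kg
mass_kg = 2.2 / 2.2046
acceleration = 1.0  # m/s^2

# F = m * a, in newtons (Fig-newtons, here)
force = mass_kg * acceleration
print(round(force, 2))  # approximately 1.0
```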