
Google Calls For Power Supply Design Changes

Raindance writes "The New York Times reports that Google is calling 'for a shift from multivoltage power supplies to a single 12-volt standard. Although voltage conversion would still take place on the PC motherboard, the simpler design of the new power supply would make it easier to achieve higher overall efficiencies ... The Google white paper argues that the opportunity for power savings is immense — by deploying the new power supplies in 100 million desktop PC's running eight hours a day, it will be possible to save 40 billion kilowatt-hours over three years, or more than $5 billion at California's energy rates.' This may have something to do with the electricity bill for Google's estimated 450,000 servers."
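A rough sanity check of the quoted figures (a sketch only; the per-PC saving and the electricity rate are implied by the numbers above, not stated directly):

    # Back-of-envelope check of the savings claim in the summary.
    pcs = 100e6            # desktop PCs
    hours = 8 * 365 * 3    # 8 hours/day over three years
    saved_kwh = 40e9       # claimed total savings, kWh
    dollars = 5e9          # claimed dollar savings

    watts_per_pc = saved_kwh * 1000 / (pcs * hours)   # ~46 W saved per PC
    cents_per_kwh = dollars / saved_kwh * 100         # ~12.5 cents/kWh implied rate
    print(round(watts_per_pc, 1), round(cents_per_kwh, 1))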
  • No... (Score:5, Insightful)

    by rsilvergun ( 571051 ) on Tuesday September 26, 2006 @05:55PM (#16206641)
    Google hires experts in electrical engineering to figure out how to reduce the power bill on those 450,000 servers. Hell, I'm all for it. Less power means less heat means quieter fans (without spending an arm and a leg on an Antec Sonata or what have you).
  • In the old days, disk drive motors and fans. But many of these now run on 5V, hence the cheap USB-powered drive cases out there. CMOS-level chips run at 3.3V, TTL at 5V, but hardly anything runs at 12V anymore. It seems to me that if they'd just pick their hardware carefully, they could run their entire server rack off of ±5V rails.
  • by MarcoAtWork ( 28889 ) on Tuesday September 26, 2006 @05:56PM (#16206657)
    given that the article says

    Although Google does not plan to enter the personal computer market, the company is a large purchaser of microprocessors and has evolved a highly energy-efficient power supply system for its data centers.

    I assume Google is employing some smart electrical engineers, who are more than qualified to make this kind of recommendation, I would think...
  • good idea but... (Score:3, Insightful)

    by grapeape ( 137008 ) <mpope7 AT kc DOT rr DOT com> on Tuesday September 26, 2006 @05:58PM (#16206691) Homepage
    It's a nice idea and one that is probably a long time coming, but phasing something like that into place will take an incredibly long time. Look at the struggles of PCI Express: it's still not in 50% of newer motherboards and systems, even though its benefits are more than apparent. It's only been in the past couple of years that we have seen a shift to full USB, and most machines still come with PS/2, serial, and parallel ports anyway. Dramatic changes to PC standards are very difficult; there are millions of existing machines that still need support. Perhaps if it were tied to a new socket standard in the future it could slowly be phased in through upgrades, but I see the chances as very, very slim.
  • by purpledinoz ( 573045 ) on Tuesday September 26, 2006 @05:59PM (#16206717)
    I would bet a lot of the employees at Google have electrical engineering degrees. Don't underestimate the brainpower Google has in its employee base. But the power supply issue they're trying to address isn't a technical challenge but a political one.
  • by Anonymous Coward on Tuesday September 26, 2006 @06:03PM (#16206781)
    Video cards use a ton of 12V power, enough that high-end cards get a dedicated connector carrying two 12V wires.
  • by dgatwood ( 11270 ) on Tuesday September 26, 2006 @06:07PM (#16206859) Homepage Journal

    The ability to have all my machines powered by a heavy cable carrying 12VDC would be pretty useful for several reasons.

    • The UPS could be integrated into the power supply, avoiding lots of energy lost in converting it up to 110VAC and right back down again.
    • The power supply would then be external, where it could be a fanless brick instead of being inside the case where it adds heat that must be dissipated.
    • A switching power supply is theoretically more efficient than a wall wart. If everything were 12V, all those stupid little outboard devices could draw power off the same supply, resulting in better overall efficiency. More importantly, I would never again let out the magic smoke by accidentally plugging a wall wart into the wrong device. :-)
    • A 12V system can more easily be integrated with solar panels to reduce load on the power grid.

    *sigh*

  • by chroot_james ( 833654 ) on Tuesday September 26, 2006 @06:07PM (#16206863) Homepage
    There is no reason to be annoyed by people trying to do good things!
  • by dgatwood ( 11270 ) on Tuesday September 26, 2006 @06:10PM (#16206897) Homepage Journal

    Why? I can turn 12VDC into 5VDC (what USB uses) with nothing more than a voltage regulator (or if you want to waste a ton of power, a relatively trivial voltage divider).
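    For reference, the conversion method matters: here is a quick sketch of what each approach can manage when dropping 12VDC to 5VDC (idealized numbers; the 1 A load and the buck-converter figure are assumptions for illustration, not details from the post):

        # Idealized comparison of 12 V -> 5 V conversion at a 1 A load.
        v_in, v_out, i_load = 12.0, 5.0, 1.0

        # Linear regulator: drops the difference as heat, so efficiency = Vout/Vin.
        eff_linear = v_out / v_in        # ~42%

        # Buck (switching) regulator: real parts typically manage ~85-95%.
        eff_buck = 0.90                  # assumed figure

        for name, eff in (("linear", eff_linear), ("buck", eff_buck)):
            p_out = v_out * i_load
            p_in = p_out / eff
            print(f"{name}: draws {p_in:.1f} W to deliver {p_out:.1f} W")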

  • by ve3id ( 601924 ) <nw@johnson.ieee@org> on Tuesday September 26, 2006 @06:15PM (#16206987)
    I have been saying this for years. We lose 10-20% of the energy charging the UPS battery from 117V, another 20-30% in the inverter getting it back up to 117V, and then another 10% converting the 117V back down to the voltages the PC actually uses.

    It does not take an expert in electrical engineering, just common sense.

    Can I sue Google for stealing my idea?
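    A quick sketch of what those cascaded losses add up to, using the midpoint of each range quoted above:

        # Cascaded efficiency of the AC -> battery -> AC -> DC chain described above.
        charge_eff   = 1 - 0.15   # 10-20% lost charging the battery
        inverter_eff = 1 - 0.25   # 20-30% lost inverting back to 117 V
        psu_eff      = 1 - 0.10   # ~10% lost converting 117 V down for the PC

        overall = charge_eff * inverter_eff * psu_eff
        print(f"overall efficiency through the UPS path: {overall:.0%}")   # ~57%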
  • by Bruce Perens ( 3872 ) * <bruce@perens.com> on Tuesday September 26, 2006 @06:20PM (#16207075) Homepage Journal
    Low-voltage power supplies in racks might make sense. Not in desktops, because low-voltage power requires more copper to distribute: for the same wattage there's more current. Copper has been very expensive of late.

    Bruce
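    A small sketch of the copper point: cable loss goes as I²R, so delivering the same wattage at a lower voltage means more current and either fatter wire or more heat. The load and cable resistance below are illustrative assumptions, not figures from the post:

        # Same 300 W delivered over the same cable at two distribution voltages.
        power = 300.0      # watts delivered to the PC (assumed)
        r_cable = 0.05     # ohms of round-trip cable resistance (assumed)

        for volts in (120.0, 12.0):
            current = power / volts
            loss = current ** 2 * r_cable    # I^2 * R loss in the cable
            print(f"{volts:5.0f} V: {current:5.1f} A, {loss:5.1f} W lost in the cable")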

  • by fm6 ( 162816 ) on Tuesday September 26, 2006 @06:21PM (#16207105) Homepage Journal
    You make a good point about wall warts, except you don't go far enough. If all portable devices accepted 12V power, somebody would come out with a single brick with multiple 12V plugs, which would be a godsend to travellers who currently schlep one wall wart for each device.

    **big sigh**

  • by genericacct ( 692294 ) on Tuesday September 26, 2006 @06:28PM (#16207207)
    I'm all about the solar angle! Someday I'll wire my house with an off-grid 12-volt solar system, with 12-volt "car lighter" sockets and DC lighting (both LED and mini halogen). The laptop and WiFi router plug straight into it.

    And everything can plug into the car with the same cord. That's another awesome advantage, being able to put these same computers in cars and RVs.
  • by poot_rootbeer ( 188613 ) on Tuesday September 26, 2006 @06:30PM (#16207231)
    Google is not throwing 7950s into their servers. These systems run with on-board video at best. Google has no need for a video card that can do anything more than text, as with all non-Windows-based servers. For that matter, after the first boot, there is no need for a video card at all.

    Seems to me Google doesn't want to fracture the commodity hardware market into server-class hardware using 5VDC power and desktop-class hardware using 12VDC. One standard, applied equally across the entire range of products.

  • Re:No... (Score:5, Insightful)

    by x2A ( 858210 ) on Tuesday September 26, 2006 @06:31PM (#16207245)
    Better than that guy they spent $50,000 on, who said moving the plant away from the window and installing a water feature would let the energy flow much better...

    If Google comes out with a "can save energy this way..." and gets the world to follow, the marketing value speaks for itself. That kind of reputation doesn't come easily.

  • MOSFETs use 12V (Score:2, Insightful)

    by wtarreau ( 324106 ) on Tuesday September 26, 2006 @06:44PM (#16207411) Homepage
    Many recent motherboards use 12V to drive the gates of the MOSFETs in their voltage regulators, because the higher the gate voltage, the lower the on-resistance and the higher the efficiency. 5V is generally too low to achieve good efficiency, but 12V is fine.
    From 12V, the motherboard can produce 3.3V and the 1.x volts the CPU needs. It's easy to also provide 5V on the motherboard.
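    As a sketch of how that on-board conversion works in the ideal case: a buck converter's output is roughly the input voltage times its switching duty cycle, so one 12V rail can feed every lower rail (idealized; real VRMs adjust the duty cycle under feedback):

        # Ideal buck-converter duty cycles for deriving the usual rails from 12 V.
        v_in = 12.0
        for v_out in (5.0, 3.3, 1.2):
            duty = v_out / v_in    # ideal continuous-conduction duty cycle
            print(f"{v_out:>4} V rail: duty cycle ~ {duty:.0%}")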
  • by Anonymous Coward on Tuesday September 26, 2006 @06:53PM (#16207549)
    Not quite on topic but..

    It amazes me that so few people realize that a "nominal" 24V is the norm for all manufacturing. Just about EVERY manufacturing plant has 24V throughout the facility; they may (or may not) also have 120/240, but they WILL have 24V - amps and amps of the stuff.

    This means there's a full range of 24V equipment, millions of devices: 24V PCs, 24V hubs/switches and all the other infrastructure, as well as specialized industrial controllers, etc.

    There's some logic behind this that isn't related to power saving, mainly that you really have to work at doing yourself serious injury with 24V, yet you can still pull enough power to run things (like PCs).

    Having this low-voltage standard is very useful, but before we consider adding another, how about just using the one that already exists?
  • Err... (Score:1, Insightful)

    by Anonymous Coward on Tuesday September 26, 2006 @07:11PM (#16207781)
    Isn't -48V still 48 volts of potential difference, so why not just +48V? I'm no EE, so I'm obviously missing something here.
  • Mod Parent up please. That would definitely be the non-dick move, which is what we'd all like to expect from Google.
  • Re:No... (Score:2, Insightful)

    by Jake73 ( 306340 ) on Tuesday September 26, 2006 @07:30PM (#16208039) Homepage
    Economies of scale, really. If the estimate of 450,000 is correct, it means that Google isn't going to go out and buy 100k - 1MM servers tomorrow. They're buying in large quantities, but not enough to justify building their own. I'd guess they buy 1k to 10k at a time.

    As a buyer, Google still wants choice in the marketplace. If they design their own boards, they don't get much choice over time. They pay for every decision with risk. Get everyone to jump on-board with this and they have hundreds of choices in the marketplace.

  • by uorden ( 1006217 ) on Tuesday September 26, 2006 @07:47PM (#16208241)
    That works fine, of course, if your computer is able to talk to the network. What happens if/when the system gets borked and you need to have access to a serial console to effect repairs? VNC and rdesktop can't help you there last I checked.
  • by Junta ( 36770 ) on Tuesday September 26, 2006 @08:01PM (#16208417)
    Serial remains one of the most manageable approaches to console management. Video, by contrast, is not loggable, not automatically monitorable, not greppable, and not amenable to low-throughput, high-latency remote access. Serial devices, and consequently their drivers, are so simple and straightforward, and their behavior so deterministic, that serial is far preferable to something more complex (Ethernet or USB) for a console.

    Ethernet, in questionable circumstances, may call for a driver unload/reload as a step in problem resolution, which is safer when the Ethernet isn't also your console (though many times I have used ssh and chained the commands with semicolons). Even then, if your path contains an NFS mount and you forget about it as you yank the network out, your chained command will hang as the shell tries to stat the NFS mount. Part of the problem with relying solely on Ethernet for the console is that Ethernet has more than one job to do, so it takes a fair amount more competent engineering to get it to work right. Many newer systems offer to redirect textual serial traffic over IPMI, and that is admittedly decent *if* the vendor architects it robustly, which is difficult to ensure beyond hands-on experience with a brand and trust in their consistency. For example, I wouldn't trust the net console on IBM's e326 servers, but I would be more confident in an IBM x3455.

    USB, again, has similar complexity issues (it's multiplexed for keyboard/mouse/mass storage/printing/scanning/etc.). If you theoretically had a bidirectional text console over some USB device, it would be much harder for a low-level, simple piece of software to set up the USB controller and all the requisite machinery, traverse the bus, identify the console device, and then use it. Just as you may have cause to unload and reload an Ethernet driver, a USB controller that's out to lunch with respect to a mass storage device would cause a similar issue. Enterprise distribution kernels tend to compile in the serial console and leave the USB controller modular, specifically with serial consoles in mind.

    Serial console servers, in answer to your question, provide a scalable way to reach serial consoles over the network. Being dedicated, moderately simple systems with 40+ serial ports, they can provide access (generally via telnet) to a rack's worth of 1U servers, automatically log the output, or at the very least give an administrator remote console access at will to any given system.

    Serial console is not obsolete in the least. Just because it can't run your '31337' Aero interface, or whatever shiny interface makes poser administrators and PHBs drool, doesn't mean serious systems administrators don't consider the technology a vital part of a robust management strategy.
  • by Da Web Guru ( 215458 ) on Tuesday September 26, 2006 @08:03PM (#16208429)
    I have- but what part of choose your hardware carefully do you people not understand? RS232 is a rather outdated protocol at this point. My two latest computer purchases do not speak RS232 natively- but they DO have multiple USB ports.

    A lot of network hardware manufacturers choose to support RS-232 natively because of the relative simplicity of the protocol compared to TCP/IP. Often an alternative non-serial product does not exist, so "choose your hardware carefully" is not always an option. Because of this, most servers come with at least one serial port. (Some setups exclusively use console-over-serial for managing servers.) There is no possibility of network issues, routing problems, congestion, management networks, etc. Usually the most configuration you have to do is 9600-8N1. Serial ports work even before networking is configured. Most networking hardware *requires* initial setup (and sometimes routine maintenance) through a serial port; you often have no choice. And when your switch/router is having routing issues, the last thing you need to worry about is whether or not your equipment will even accept your TCP/IP packets...
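    For illustration, a minimal way to attach to such a console from a management box, assuming the third-party pyserial package and a /dev/ttyS0 port (both assumptions, not details from the post):

        # Minimal serial console session: 9600 baud, 8 data bits, no parity, 1 stop bit.
        # Requires the third-party 'pyserial' package.
        import serial

        port = serial.Serial("/dev/ttyS0", baudrate=9600,
                             bytesize=serial.EIGHTBITS,
                             parity=serial.PARITY_NONE,
                             stopbits=serial.STOPBITS_ONE,
                             timeout=1)
        port.write(b"\r\n")                             # nudge the far end for a prompt
        print(port.read(256).decode(errors="replace"))  # dump whatever comes back
        port.close()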
  • Re:No... (Score:4, Insightful)

    by JahToasted ( 517101 ) <toastafari AT yahoo DOT com> on Tuesday September 26, 2006 @08:26PM (#16208645) Homepage

    Why do that when they can just rent out space in one of their supermassive server farms? Think about it... you get good bandwidth, your data is mirrored on geographically and topologically separate systems, you don't have to worry about hardware failure or anything like that, and you'll be able to get all the bandwidth you could ever want. You don't have to worry about database replication or syncing up data; it's all taken care of for you. Depending on your needs, you can have Gmail, Google Maps, Google office, AdSense all integrated with whatever it is you're setting up... web app, file server, database system, whatever it is, you'll be able to get it from Google along with some nice cross-platform tools to make it as easy as possible.

    And because of economies of scale, the price will be very reasonable, i.e. cheaper than rolling your own solution. Hell, I'd consider it, wouldn't you?

  • by drsmithy ( 35869 ) <drsmithy@nOSPAm.gmail.com> on Tuesday September 26, 2006 @09:31PM (#16209275)
    Actually I would bet that Google servers DON'T have a video card, and that all of them have RJ-45 SOL support (or something like it). The reason being that Google has admitted that they fully embrace the commodity distributed server system. Google will periodically host talks at my university where they explain all this in [too much] detail.

    Who sells commodity servers without motherboard-integrated video cards?

  • by dragonman97 ( 185927 ) on Tuesday September 26, 2006 @09:53PM (#16209491)
    I have- but what part of choose your hardware carefully do you people not understand? RS232 is a rather outdated protocol at this point. My two latest computer purchases do not speak RS232 natively- but they DO have multiple USB ports.

    I believe it's the part where you expect me not to buy Cisco hardware because it uses that 'rather outdated protocol.' Any router that has USB on it is probably a toy! I'd just as soon not have to connect via a USB -> DB9 dongle, but at some point it's going to be harder to buy computers that way. I already know a group that uses those gizmos on client visits, because their company bought a fleet of Compaq laptops that are super-slim and only have USB.

    I get so very tired of people who only think of home computer applications and can't see the big picture. I once made quite an argument about the stupidity of going DVD-only with Knoppix. ("But everyone has DVD drives!") Kids just don't understand that a large majority of computers still do not have DVD drives - it's just not essential for the average business PC. That's starting to change (this was probably two years ago), but the places that spend the most money don't see the need for toys, and understand the value of technology that *just works.*

  • by Ungrounded Lightning ( 62228 ) on Tuesday September 26, 2006 @11:37PM (#16210201) Journal
    If you have to re-convert anyway, 5V is not the optimal intermediate voltage. When converting to 5V, the voltage drop across the power diodes and in the wires to the mainboard eats a much higher proportion of the power than it does with 12V as the intermediate voltage. 24V or even 48V would be better still.

    Telephony has been running on redundant -48V DC supplies to the racks (typically from rooms full of floating storage batteries) since the early relay days. Much modern networking equipment also conforms to this standard, so it can be used in such racks with no local power supply (except the per-card isolation diodes and downconverters).

    Power conversion modules running from 48V are in volume production.

    Why does Google want to reinvent this wheel?

    (However, if they do insist on using 12V, I hope they make it tolerate anything from 11.75V to about 15V, glitches included, and shut off cleanly at stable levels below 11.75V. That way such boards could be used directly with 12V renewable energy systems, plugged straight into an automobile "cigarette lighter" power outlet, or easily wired into a vehicle or travel trailer as an appliance.)
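    A rough sketch of the point about intermediate voltages: a roughly fixed series drop (diodes plus wiring) costs proportionally less at higher rail voltages. The 0.5 V figure is an illustrative assumption:

        # Fraction of power lost to a fixed ~0.5 V series drop at various rail voltages.
        drop = 0.5
        for v_rail in (5.0, 12.0, 24.0, 48.0):
            print(f"{v_rail:4.0f} V rail: {drop / v_rail:.1%} of the power lost in the drop")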
  • by SEE ( 7681 ) on Tuesday September 26, 2006 @11:59PM (#16210389) Homepage
    For efficiency, CPUs are heading below 3.3V, and RAM too.

    That's actually why single-voltage PSUs make sense. Your CPU, GPU, and RAM don't care if the PSU is providing 12V, -12V, 5V, or 3.3V, or any combination of them, as long as its VRM steps it down to the 1.7V or whatever it needs. So why have the power supply provide so many different types of power, instead of just one of them? It's all going to be converted by a local VRM anyway.

    And a single-voltage power supply is about 85% efficient at converting AC power, compared with about 65% for four-voltage (12, -12, 5, and 3.3) supplies, due to various redundancies. Switching to all-12V makes the power supply less complicated, more efficient, and less expensive, at the cost of a few extra VRMs for any 5V and 3.3V components that go in the machine.
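    To put those efficiency figures in concrete terms (the 250 W load is an assumption for illustration):

        # Wall power needed to deliver a 250 W DC load at the two quoted efficiencies.
        load = 250.0    # watts of DC actually delivered (assumed)
        for label, eff in (("four-voltage supply", 0.65), ("single 12 V supply", 0.85)):
            wall = load / eff
            print(f"{label}: {wall:.0f} W from the wall, {wall - load:.0f} W lost as heat")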
  • by inKubus ( 199753 ) on Wednesday September 27, 2006 @02:00AM (#16211079) Homepage Journal
    There are over 6,500 Walmart stores with an average size of 120,000 square feet. Every 500 square feet they have a 4-tube fluorescent light fixture, drawing 4x40 or 160 watts. Multiplying out, the total square footage is ~6500*120000 = 780,000,000 square feet. Divide by 500 square feet to get the total number of fixtures, 1,560,000, and multiply that by 160 watts to get the total wattage, 249,600,000. Probably 75% of those Walmart stores are 24-hour, while the rest are 12-hour: (.75*24)+(.25*12) = 21 average hours.

    Total Watts x Avg hours x 365 days per year = Wh per year
    249,600,000W x 21h x 365d = 1,913,184,000,000 Wh per year

    Wh/1000 (kWh) x the going rate (approximately 6.8 cents nationwide)

    1,913,184,000 x .068 = $130,096,512 per year in electricity.

    If they took out one tube per fixture, they would save $32,524,128 per year.

    *This doesn't include the parking lots, which have a similar consumption.

    So, what's the point? There are other, easier ways to save a lot of power. I'm glad Google wants to change the computer world, but what about replacing 10% of incandescent bulbs with fluorescents and saving 50W x 10,000,000,000? Or just TURN OFF your computer when you aren't using it! Retooling the entire industry would cost MORE than it would save in power. That's not to say I don't agree that we need to start making a lot of little changes, and this is as good a place as any. But the benefits are very far in the future, when we run out of oil. Not now.
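    The arithmetic above, collected in one place so the assumptions are easy to tweak:

        # Recomputing the Walmart lighting estimate from the figures given above.
        stores, sqft_per_store = 6500, 120_000
        fixtures_per_sqft = 1 / 500          # one 4-tube fixture per 500 sq ft
        watts_per_fixture = 4 * 40           # four 40 W tubes
        avg_hours = 0.75 * 24 + 0.25 * 12    # 21 hours/day average
        rate = 0.068                         # $/kWh

        fixtures = stores * sqft_per_store * fixtures_per_sqft
        kwh_per_year = fixtures * watts_per_fixture * avg_hours * 365 / 1000
        cost = kwh_per_year * rate
        print(f"{fixtures:,.0f} fixtures, ${cost:,.0f}/yr; dropping one tube saves ${cost / 4:,.0f}/yr")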

  • by dcam ( 615646 ) <david AT uberconcept DOT com> on Wednesday September 27, 2006 @03:16AM (#16211479) Homepage
    Someone who wants to sell to Google? They must go through a *lot* of machines; this would give them some buying power.
  • Re:No... (Score:4, Insightful)

    by alienw ( 585907 ) <alienw.slashdotNO@SPAMgmail.com> on Wednesday September 27, 2006 @08:31AM (#16212975)
    If you can cut power consumption by 10 watts per machine (quite realistic) and you have 100,000 machines (Google has more), you've just saved 1 megawatt of power, or about a million dollars per year in electricity (without even taking into account the electricity required for cooling). That's quite a chunk of cash.
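    A quick check of that figure (the electricity rate is an assumed round number):

        # 10 W saved per machine across 100,000 machines, running around the clock.
        machines, watts_saved = 100_000, 10
        rate = 0.10                                 # $/kWh, assumed

        megawatts = machines * watts_saved / 1e6    # 1 MW
        dollars_per_year = machines * watts_saved / 1000 * 24 * 365 * rate
        print(f"{megawatts:.0f} MW saved, about ${dollars_per_year:,.0f} per year")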
