
Was Thomas Edison Right about DC Power? 545

Declan McCullagh writes "Everyone knows the alternating vs. direct current wars ended with Thomas Edison and Nikola Tesla. But now DC power is being seriously considered for data centers. DC advocates say that plugging servers into AC power is inefficient, and switching to DC cuts down on waste heat and component failure. The University of Florida has even bought 200 DC servers."
  • by Anonymous Coward on Thursday March 02, 2006 @10:20PM (#14839954)
    In Washington State, Verizon (formerly GTE) runs almost all DC-powered servers and telco equipment in its data centers. Many of the IBM servers my company buys support DC power.
  • by hpa ( 7948 ) on Thursday March 02, 2006 @10:23PM (#14839971) Homepage
    It's true that DC-DC power converters are more efficient than AC-DC converters, but only if you ignore that the typical DC-DC converter has a much lower voltage ratio than the typical AC-DC converter. DC power distribution is usually done in the 12-48 V range, depending on application, whereas AC is 100-240 V. It's also only a win if you don't end up losing that power in the wiring.

    How come there is no real difference? Because both modern AC and modern DC supplies start out by converting the power to high frequency AC (on the order of several kHz), and operate on that. That's what you actually want as input, if anything.

    The article states:

    By distributing redundant direct current power to each server--and replacing the standard AC power supply with a far more reliable and efficient DC power supply...server reliability is increased by as much as 27 percent, and monthly power costs are reduced by up to 30 percent.

    In other words, the DC supplies they use are more efficient than standard AC supplies, which are cheap crap and notoriously inefficient.

  • by Phanatic1a ( 413374 ) on Thursday March 02, 2006 @10:35PM (#14840035)
    For moving power over long distances, AC is king.

    Nope. For the longest-distance transmission lines, you see DC being used. There comes a point when the capacitive losses you get from using AC encourage you to switch to DC, and for lines of several hundred miles, you start seeing DC transmission lines.
  • Re:Uhh... (Score:2, Informative)

    by jdaomteys ( 825374 ) on Thursday March 02, 2006 @10:37PM (#14840039)
    Nope, sorry. Please play again. Tesla and Westinghouse patented all of the AC equipment. Edison wanted to sell his stuff. He even went as far as designing electric chairs with AC to prove it was "more deadly."
  • this is news? (Score:5, Informative)

    by iggymanz ( 596061 ) on Thursday March 02, 2006 @10:37PM (#14840048)
    for crap's sake, DC-powered servers are nothing new; many have a config option for a "-48VDC standard telco" supply.
  • No (Score:3, Informative)

    by dbIII ( 701233 ) on Thursday March 02, 2006 @10:41PM (#14840070)
    Short answer: no. Long answer: sometimes. DC is sometimes useful right in front of you, but it's hard to get it there.

    I've seen houses wired with 12V DC from mini hydro and solar - but in those cases it was a long way to the nearest transmission wire and would cost a fortune to get mains power onto the site.

  • To Westinghouse (Score:2, Informative)

    by jheath314 ( 916607 ) on Thursday March 02, 2006 @10:42PM (#14840075)
    Perhaps we'll see the AC group hitting back with demonstrations of how dangerous these DC power supplies are to the hamsters and other wildlife native to big server rooms.

    Incidentally, that's how the electric chair came about:

    [Edison]AC is dangerous! Just watch what happens to these various animals when I close this circuit!
    Edison electrocutes some horses
    [US_Gov]Ooooo... I'll bet that works on people too!
    US_Gov introduces new grisly method of executions, while disregarding the main point of Edison's demonstrations.

    The story has a good postscript too... some reporters came to Edison to get his take on the new, modern form of execution. When asked what name he would give to the method, Edison, in an attempt to forever link his competitor's name with electricity's most gruesome application, offered "to Westinghouse someone."
  • by hpa ( 7948 ) on Thursday March 02, 2006 @10:42PM (#14840080) Homepage
    In addition to capacitive losses, there is also the fact that you have to dimension your transmission lines to handle up to Vp (peak), not just Vrms which is what controls the amount of power that actually travels through your system. In effect, by going to DC, you can run the whole system at 1.4 times the voltage, and run more power through the same wires with no additional losses (other than conversion.)
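The peak-versus-RMS arithmetic above can be put in rough numbers. A quick sketch (the voltage and current values are illustrative, not real line ratings):

```python
import math

# A line whose insulation already withstands the AC peak voltage
# can be run at that peak as DC, at the same current.
v_rms = 500e3                    # AC transmission voltage (RMS), illustrative
v_peak = v_rms * math.sqrt(2)    # peak voltage the insulation must handle anyway
current = 1000.0                 # amps, same conductor either way

p_ac = v_rms * current           # power delivered by the AC line
p_dc = v_peak * current          # same wires run as DC at the peak voltage

print(p_dc / p_ac)               # sqrt(2) ~ 1.414: ~41% more power, same wires
```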
  • by atrus ( 73476 ) <atrus&atrustrivalie,org> on Thursday March 02, 2006 @10:43PM (#14840088) Homepage
    What you just described is a voltage divider circuit. And it's terribly inefficient for transferring power, since you're burning all of the extra energy up in the resistors.

    DC->DC converters are basically AC power supplies. They pulse the DC current at up to several hundred kHz, using an inductor, and convert it down/up on the other side. They're very efficient, although somewhat costly.
  • by gvc ( 167165 ) on Thursday March 02, 2006 @10:45PM (#14840102)
    For physics reasons, it's easier to transmit AC over long distances; DC requires thick copper cables or bars, instead of comparatively lightweight wires. But DC becomes a more serious possibility for power once AC reaches a building.
    What a load of crap. Low voltage (high current) requires thick wires - it has nothing to do with AC/DC. AC is horrible for long-distance transmission; up north megavolt DC is popular. AC is useful because it is easy to transform - you can step the voltage up or down with turn-of-the-previous-century technology and hence transmit at a higher voltage than you'd like to use.

    That said, if space and cooling are an issue it might well make engineering sense to get the transformers, capacitors, and rectifiers out of the computer boxes. Big 5V/12V power buses wouldn't even need to be insulated. So while the reporter badly mangled the story, the engineering sounds reasonable to me.

  • by isdnip ( 49656 ) on Thursday March 02, 2006 @10:49PM (#14840118)
    Actually, that's the norm across the phone industry. Everything, and I mean everything, runs on -48V DC. Okay, not the fluorescent lights....

    This goes back to the telephone talk battery, which is -48 V DC. That powered the phones via old cord switchboards, and was the voltage of electromechanical (stepper, and later crossbar) switches, which basically used relays. Electronic gear was then designed to run on the same power plant. A telephone building has a big bank of batteries, powered by multiple "rectifiers" (DC supplies) which, btw, are normally engineered to not run over 40% of load. (That way they can still run the systems and recharge the batteries when one of them is kaput.)

    If you then put anything else into one of their buildings, the Network Equipment Building Standards (NEBS), which are Telcordia documents that practically carry the force of law, dictate that equipment be DC powered. Among other things -- NEBS gear has to meet the brick schytthaus test. (Sun Netras and many Cisco routers meet NEBS. Your basic rack server doesn't. And aluminum racks are STRICTLY forbidden; it has to be steel.)

    So because of the talk voltage on analog phones, lots of computing equipment is engineered for -48 V DC power. Sort of like the legend (I know, that one is not really true) about the railroad track gauge being based on Roman chariots. But in this case it's surprisingly effective.
  • by sflory ( 2747 ) on Thursday March 02, 2006 @10:50PM (#14840120)
    The reason you want to use DC is that a computer's power supply converts AC into DC. The power supply of most computers isn't that efficient at it. This basically converts some of your electricity into heat. (Heat in a 1U server in a big rack of 1Us is really bad.) In theory the data center's big AC to DC converter is more efficient and better cooled. Thus you save money on power bills, air conditioning, and rack space (less heat and power draw means more servers per rack). Plus in theory your servers should last longer, as the power supply is one of the more likely points of failure.
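A back-of-the-envelope illustration of that argument (all numbers below are assumed for the sketch, not taken from the article):

```python
servers = 200
load_per_server = 300.0    # watts of DC each server actually consumes (assumed)

eff_cheap_psu = 0.70       # typical commodity per-server AC supply (assumed)
eff_dc_plant = 0.92        # centralized rectifier plant, end to end (assumed)

# Waste heat = input power minus delivered power.
waste_ac = servers * load_per_server * (1 / eff_cheap_psu - 1)
waste_dc = servers * load_per_server * (1 / eff_dc_plant - 1)

print(round(waste_ac), round(waste_dc))   # watts of heat the A/C must remove
```

Every watt of that difference is paid for twice: once at the meter, and again in air conditioning to remove it.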
  • by oddbudman ( 599695 ) on Thursday March 02, 2006 @11:03PM (#14840188) Journal
    The concept can be taken as far as to cutting down to a single power supply per rack.

    The article mentions that distribution will be done at 48V - you will still need DC:DC conversion at the boxes. These DC:DC converters will need to run at higher current than an AC:DC converter. Higher current can cause more series loss in the system, as well as leakage losses in the switching supply.

    AC:DC converters, as mentioned in the article, aren't really that inefficient (the article itself quotes 90%). AC:DC converters are in fact really DC:DC converters; they just have a rectifier circuit to convert the AC to high-voltage DC for DC:DC conversion.
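The series-loss point is easy to quantify: at fixed power, conduction loss in the distribution wiring scales with the square of the current. A sketch with assumed values:

```python
power = 6000.0    # watts delivered to one rack (assumed)
r_wire = 0.01     # ohms of round-trip bus resistance (assumed)

for volts in (48.0, 230.0):
    amps = power / volts
    loss = amps ** 2 * r_wire    # I^2 * R conduction loss in the wiring
    print(f"{volts:.0f} V -> {amps:.0f} A, {loss:.1f} W lost in the bus")
```

At 48 V the same wire dissipates roughly (230/48)^2, about 23 times the heat it would at 230 V, which is why 48 V buses are kept short and fat.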
  • by waferhead ( 557795 ) <waferhead AT yahoo DOT com> on Thursday March 02, 2006 @11:16PM (#14840232)
    Actually the article gets it quite wrong, with a bogus explanation when it says "For Physics reasons"...

    HVDC is actually FAR better for long-line power distribution due to AC's inductive line losses... IIRC DC _is_ used some places. (California)

    The downside is that AC requires only a series of transformers to step it down to various levels for local power distribution--- Makes for a relatively straightforward infrastructure.

    DC for all practical purposes MUST be converted to AC for this purpose, via big honking inverters, unless you happen to NEED 250kV @ 1000A.
  • by Myself ( 57572 ) on Thursday March 02, 2006 @11:30PM (#14840299) Journal
    The origin of the 48 volt number is that it was convenient, and now it just sneaks under the 50-volt "low voltage" cutoff in the NEC, which I think was written with telcos in mind. The glorious thing about this is that you don't need licensed electricians to do power wiring in a central office.

    And the reason it's negative with respect to ground goes all the way back to the telegraph system: Western Union initially ran bipolar lines and noticed that the positive ones corroded much faster. Sodium ions (from dissolved salt) are negative, and thus repelled from lines that're also negative. The whole phone system was built with positive ground because of this, and it's saved incalculable maintenance costs. It does tend to mess with people's heads the first time, if they're used to negative ground systems, but you get over it quickly. (A number of traditions use blue for "hot" and black for ground/return, to help escape your "red equals positive" association.)

    DC power as used by telcos is also always redundant. There's an A-side and a B-side for everything, and the cables are sized so that the entire load can run from just one side. This leads to some very fat copper, which is cheap compared to downtime. You don't achieve five-nines reliability with a system that contains single points of failure!

    Now, about rack-mounting: This was also invented by the telcos, originally in a very wide (40-inch?) format, for the panelboards and Strowger switches. Some of the old crossbar equipment is still in those huge racks, but the 23-inch width is infinitely more common now. All telco equipment is mid-mounted, with the ears approximately in the center of gravity on the shelf, so the force on the screws is shear. There's no torsion on the mounting flange unless you step on the front or back of the shelf. Cooling is always convective bottom-to-top, or occasionally front-to-back with fans. This leads to a "cool" front aisle and a "warm" back aisle between alternating rows of equipment.

    Now, the pro audio industry borrowed the rackmount idea fairly early on, but they were mostly mounting control panels and mixers, which are very shallow, so flush-mounting made sense. They also changed the every-inch Western Electric mounting holes to an alternating-spaces "EIA" standard, and narrowed the rack from 23 to 19 inches.

    Somewhere along the line, an absolute idiot decided that computers should be rackmounted, but they should be 19 inches wide, flush-mounted, and use EIA hole patterns. I'm sure this has something to do with mainframe legacy getting perverted by peecee people. The current mishmosh of mounting standards (19" vs 23", two-post versus four-post, flush versus mid, inch versus RU, front-cable versus rear-cable) is what every datacenter tech deals with on a daily basis. Throw overhead racks versus raised-floor cabling into the mix, and you've got a recipe for frustration!

    If you're familiar with the concept of "blade servers", where common components are separate from processor resources in the shelf, congratulations. Telco hardware has been built like this since the invention of the circuit board. Actually, the concept of replaceable plug-in units goes back before that, but it got vastly easier with printed wiring boards and card-edge connectors in the sixties. Most of the "good ideas" in serious computing circles are actually century-old ideas in the telco industry. Spend a week shadowing a central office tech before you design a datacenter, please!

    Also consider: If your datacenter is already built for DC, throw some solar photovoltaic panels on the roof. Inverters are a large part of most PV systems' expense, and you can skip that part. Why not start offsetting your grid demand now?

    Also also: Edison was flat-out wrong about DC. The modern switching power supplies that make DC transmission lines practical didn't exist in his day. Besides, long-distance power transmission is an entirely other discussion.
  • by cellojoe ( 920354 ) on Thursday March 02, 2006 @11:37PM (#14840328)
    Tesla believed that electricity should be free, so he created a tower that transmitted electricity over a distance.
  • by Ungrounded Lightning ( 62228 ) on Thursday March 02, 2006 @11:39PM (#14840342) Journal
    The article conflates several things.

    First off: Digital electronics generally requires several voltages. And they're all low, requiring high currents, massive conductors, and local filtering and regulation. So even if you're providing DC power from outside the room, you'll have a switching power supply (or several) in each piece of equipment to convert whatever the rough DC power is to whatever you need, smooth it, and regulate it.

    But while some electronic devices use a common switcher to generate all the voltages with one conversion step, others use a "roughing" supply and a bunch of local supplies. Part of that is to get better regulation - part is because the roughing supply must run from 60 (or 50 or whatever) Hz and thus requires big caps to tide you over the low part of the cycles - caps you don't want taking up space near the components.

    If you're going to do it in two stages anyhow, you can put your roughing supply OUTSIDE the room and only have the final supplies inside. The roughing supply has a lot of heat dissipation so you save a bunch on your cooling.

    Second: There are two standards for power distribution in electronics rooms:
      - Your local power line stuff. (120/240/480/208-3-phase in the US)
      - The telco standard: 2x-redundant 48V DC.
    A lot of equipment - especially networking equipment - is manufactured for sale to telcos and other operations that use the standard. They might have initially used it because some of their equipment was co-located in telco sites, where only 2x48VDC is available - and they got a quantity discount for buying a bunch of the same stuff and went to 48V for their own sites. Or they might use it because it's MUCH simpler to do backup power with floating batteries and century-old technology than with a building-sized UPS. (Note that a UPS CAUSES at least one outage when first installed and on average at least one more within the first year of operation from some malfunction. And a UPS dissipates more power than a roughing power supply or a battery charger.)

    But the standard for 48VDC is REDUNDANT 48VDC supplies, with the equipment only requiring one (and typically doing "cutover" with diodes B-) ). With the equipment already set up for redundant supplies it's not a lot of cost or work to wire both sides and put in two 48V feeds to the equipment room. (Four diodes are a LOT cheaper than a pair of 120V roughing power supplies at each box, too.) So of course the users of such equipment normally give it dual supplies. (Even if it's a single rack and so they just put two roughing supplies in the rack fed from two different 120V feeds.)

    The result is that all the equipment has redundant power supply, and keeps operating glitch-free through a number of kinds of partial outages - AND power supply repair and replacement. This is what's responsible for much of the claimed increase in reliability.

    The whole Edison/Tesla DC/AC war had to do with the economics of CROSS-COUNTRY power transmission. AC beat DC there because a century or more ago it was virtually impossible to jack DC voltages up to levels suitable for long-distance transmission and back down to levels safe for distribution within houses, while AC could do that easily and efficiently. So Westinghouse/Tesla could ship cheap power from Niagara Falls to New York City while Edison had to build fuel-burning power plants IN the city. It has essentially nothing to do with shipping the power around within a single building.
  • right and wrong (Score:5, Informative)

    by IGnatius T Foobar ( 4328 ) on Thursday March 02, 2006 @11:45PM (#14840365) Homepage Journal
    Edison and Tesla were both right. Remember, the DC vs. AC wars were fought back when the load was mostly made up of lights, motors, very utilitarian things. AC is fantastic for transmission over long distances (and for running three phase motors, but that's another story). DC happens to be better at running precision equipment like computers -- heck, they all run on DC already. All we're really talking about here is taking advantage of an economy of scale by doing one big power supply (or a few, for redundancy) instead of one for each machine.

    Ever seen a telco rack? Everything runs on -48VDC. Everything. A telco rack always includes a couple of DC power supplies, and all the equipment just ties in to a common DC bus. The best part of all: the UPS simply consists of four "car batteries" (not exactly, but you get the idea) wired in series and tied directly into the bus! No pesky inverters to deal with.

    The telecom industry has been doing it this way for decades. It's about time the computer industry got on board.
  • by Anonymous Coward on Thursday March 02, 2006 @11:57PM (#14840403)
    It seems to me that the only disadvantage with DC has to do with interconnecting it with the existing AC grid. From the Wikipedia entry and from reading the book Infrastructure by Brian Hayes, which states "some of the longest, highest-capacity power transmission lines carry direct current", I get that it is very efficient. It even makes our AC power more stable by linking electricity-producing AC grids that aren't in sync. Other advantages are: resistive losses are lower for a given conductor size, and only two wires are needed instead of three, thus reducing the materials needed (such as less wire, fewer insulators, and smaller towers) on long runs.

    So why do you say that it isn't any good for long distance power distribution?
  • Re:laws? (Score:2, Informative)

    by Kizeh ( 71312 ) on Friday March 03, 2006 @12:04AM (#14840437)
    Bzzzt. Because 48 Volts is the standard used for all the DC equipment telephone companies have been using for years. Cisco, for example, makes a large portion of their product line with 48 V DC power supplies as well.

    Mind you, the 48 V DC systems are not simple or easy to wire. You're talking very significant amperages, which means very beefy conductors, and with batteries in the picture there's a risk of nasty stuff if you drop your screwdriver in the wrong place.
  • by klaun ( 236494 ) on Friday March 03, 2006 @12:26AM (#14840512)
    [snip]Sodium ions (from dissolved salt) are negative[snip]

    Sodium ions from a salt are definitely not negative. Sodium, like many other alkali metals, tends to lose its outermost electron and form a positive ion. I think you'd have a hard time getting sodium to pick up an extra electron.

    It makes the rest of the explanation a bit hard to swallow.

  • by slvi ( 628811 ) on Friday March 03, 2006 @12:32AM (#14840539)
    You'll find the actual movie here; it's rather vile.

    Really though, it's not that good; it only gets 4.2 stars on IMDb.


  • by Anonymous Coward on Friday March 03, 2006 @12:35AM (#14840550)
    Meanwhile back in the real world ...

    In a standard PC power supply the incoming AC is rectified and stored in a capacitor. Energy only flows into the capacitor when the voltage after the rectifier exceeds that stored in the capacitor. This results in a waveform which departs considerably from a sine wave - no current flows for most of the time while much higher currents than expected flow at the peaks of the half cycles. Electricians interpret this as a bad "power factor" from their experience driving inductive loads where the current lags the voltage by as much as 90 degrees.

    Standard PC power supplies are nothing like 90% efficient largely because of this crude rectification of the mains. Compare the rating of your supply in watts with the input voltage multiplied by the input current. These values should all be marked on the case.

    Power Factor Corrected (PFC) supplies are available. The better ones use a switch mode circuit to charge the reservoir capacitor through most of the main power cycle, while the less good ones incorporate a capacitor across the mains to buffer the large peaks of current when the input voltage exceeds that stored in the reservoir capacitor.

    One advantage of AC is the ease of transforming it to other voltages using transformers and the ease of using it to drive motors especially with multiple phases. In the modern age where switch mode power supplies are cheaper than those using transformers operating at mains frequency this advantage no longer exists. One disadvantage of using DC is the difficulty in switching the stuff off - inductance in the load drives the current straight through an opening switch or fuse creating a nice sustaining arc which is not quenched by the current dropping to zero twice each cycle.
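The "compare the rating in watts with volts times amps" check suggested above can be written out. For a supply with no power factor correction, rated watts divided by nameplate volt-amps bundles together the power factor and the efficiency (all values assumed for illustration):

```python
v_in = 230.0     # nameplate input voltage (assumed)
i_in = 2.0       # nameplate input current (assumed)
p_rated = 300.0  # rated output power in watts (assumed)

apparent = v_in * i_in             # volt-amps the building wiring must carry
pf_times_eff = p_rated / apparent  # power factor x efficiency, combined

print(round(pf_times_eff, 2))      # well below 1 for a non-PFC supply
```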
  • by Myself ( 57572 ) on Friday March 03, 2006 @12:51AM (#14840612) Journal
    D'oh! Three minutes of googling while I composed the post, and nothing. As soon as I hit submit, I came across the telecom digest intro FAQ that explains it.
  • by Doc Ruby ( 173196 ) on Friday March 03, 2006 @02:31AM (#14840985) Homepage Journal
    Edison "advocated" for all power systems, including long-distance transmission, to be DC - because that's what he was selling. His battle with Tesla for the first big contract, electrifying NYC, is the stuff of legend. Tesla won. And died penniless in the 1940s, while Edison died fat and rich from thousands of patents, most on inventions invented by people working for him. Some of whom no doubt died penniless.
  • AC versus DC (Score:2, Informative)

    by brazilofmux ( 905505 ) on Friday March 03, 2006 @03:58AM (#14841231)
    With both AC and DC distribution, there are losses due to the resistance of the wire (I-squared-R losses). The way to minimize these losses is to increase the voltage (V) and decrease the current (I) while transmitting the same power, but there is a limit to how high the voltage can be increased. Air breaks down at about 3x10^6 V/m. To avoid this dielectric breakdown, you continue to raise the height of the power line as you increase the voltage.

    With AC distribution lines, there are also losses related to the capacitance between the power line and the ground. Increasing the height of the power line also minimizes the capacitive losses.

    With both AC and DC there are reflections between the source and load, which make further trips from one end to the other. Each reflection is smaller than the previous one, but remember how many people are using electricity, and that everyone is constantly adding and removing load from the system. So, even in a DC system, the line voltage will be constantly changing.

    Then, we have the conversions. Converting one AC voltage to another is accomplished with a step-up or step-down transformer. This conversion isn't free, and it doesn't work for DC. It is very efficient and economical, however, to convert from a higher DC voltage down to a lower one -- even for moderately high currents. It is very painful, however, to step a lower DC voltage up to a higher one. There are circuits to do it, but typically (or at least through 1990), it has been easier to convert to AC, go through a step-up transformer, and then convert back to DC. Also, the circuits for up-converting DC to DC are usually fixed at multiples of 2x, 3x, 4x, etc. using diodes.

    So, let's put it all together. I can believe there are long-distance DC transmission lines where the savings in capacitive losses are worth the significant capital investment required at both ends of the line for the conversions, conditioning, and matching the source to the line and the line to the load. But in general, in a DC distribution scheme, the DC voltage drops continuously along the line and must be periodically stepped up by some hard-to-determine amount -- it depends on the age of the wire, the distance from the last step-up, and the demands of the load at that moment -- while the circuits for doing it are inflexible (they can only do multiples).

    With AC, you get the flexibility that each sub-station is monitoring its own load and it can control the variable-step-down transformers to achieve the desired neighborhood voltages. Ready to increase the height of the lines? Step-up. Ready to drop the height of the lines? Step-down.

    In a data center, as several people have said, everything is in one place, so the problem is different. You want to pick a high enough DC voltage so that it is always higher (even at maximum load across the entire room) than any voltage you might need. Then, you use the cheap and economical DC-to-DC conversions _at the point of use_ to take that down to the +5V and +12V that your equipment needs. You may pay marginally extra for larger cabling to handle higher currents, but you save money by not needing step-down transformers in each power supply. Less weight, more compact, and more efficient.
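The breakdown figure quoted above translates directly into required air clearance. A sketch using that 3x10^6 V/m number (the 10x derating factor is an assumption for illustration):

```python
breakdown = 3e6    # V/m, approximate dielectric strength of air (from the post)
derating = 10.0    # stay well below breakdown; factor assumed for illustration

for volts in (138e3, 500e3, 765e3):             # common transmission classes
    clearance = volts * derating / breakdown    # metres of air needed
    print(f"{volts / 1e3:.0f} kV -> {clearance:.2f} m of clearance")
```

Real clearances are set by standards that also account for surges, humidity, and conductor sag, so treat these as order-of-magnitude figures only.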
  • by Anonymous Coward on Friday March 03, 2006 @03:58AM (#14841233)
    As I recall, in the AC-DC conversion, the process is actually more like this: AC-AC transformation to low voltage, high current, then AC-DC conversion using a bridge rectifier. Maybe I'm wrong, but isn't that the way most power supplies work?

    It's the way most cheap "wall-wart" power supplies work.

    For more than a few amps of current, the transformer needs to be rather large. To make things smaller, most computer power supplies use a smaller inductor driven by switching the current on and off at a much higher frequency (>60Hz). This is called a switching power supply.
  • by PatrickThomson ( 712694 ) on Friday March 03, 2006 @04:49AM (#14841339)
    Power supplies that only draw current for a tiny part of the mains wave, to top up the capacitor, are banned in the EU because too many of them can and did affect upstream power equipment.
  • by SnowZero ( 92219 ) on Friday March 03, 2006 @04:53AM (#14841346)
    It turns out Edison was not completely wrong: HVDC.

    In particular, "Increased stability of power systems" is certainly something that individuals in the Northeastern US and London may be interested in.

    Of course, AC still has its uses, but the chart is now thought to be:
    really long distance -> HVDC
    long distance -> AC
    short distance -> DC
  • Re:AC versus DC (Score:3, Informative)

    by fishnuts ( 414425 ) <> on Friday March 03, 2006 @05:12AM (#14841385) Homepage
    About converting DC to DC... There are DC-DC converters now that use highly-optimized digital controllers and high-efficiency inductors and/or transformers to buck (lower) or boost (raise) available DC power, with efficiencies for some small systems (below 50W or so) in the 90-95% range. Their efficiency is due to high-frequency pulse-width-modulated switching transistors feeding the source current into a high-frequency toroidal-core transformer or inductor. Running higher frequencies (as opposed to the low 60Hz line frequency from an AC source) allows MUCH smaller inductors and transformers for the voltage conversion, and less power loss due to transformer core saturation (which happens more at lower frequencies, which is why AC line transformers are so huge).

    In most new configurations of these types of switching power supplies (switchers, which is what almost every car audio amplifier uses, as well as most computer power supplies) the efficiency is about the same whether you step-up or step-down the voltage with the power supply. In fact, many PWM switching power supply designs can step-up and step-down without any change in circuitry, just by changing the pulse-width of the current being fed to the transformer.

    To say stepping up DC power is inefficient while saying stepping it down is highly efficient makes it sound like you need to brush up on modern power supply design. Also, the only application in which diodes are used to step up voltage in integer multiples is the diode-capacitor multiplier, which only works with PULSED DC or AC, and that design is, in fact, very inefficient. Nobody would use that except in certain high-voltage, low-current supplies, like in air ionizers, older televisions, and stun guns.
  • Re:right and wrong (Score:3, Informative)

    by fishnuts ( 414425 ) <> on Friday March 03, 2006 @05:28AM (#14841423) Homepage
    Depending on the method, 5 to 40 percent gets lost as heat.
    Switching power supplies that are optimized for a small range of load conditions can achieve 95% or better efficiency. Most computer power supplies (built for a wide range of load conditions and voltages) are about 85-90% efficient now. Simple rectification and regulation through a linear regulator loses various amounts of power as heat depending on the load, but varies from 50-70% efficiency in a good design. This is what most wall-wart transformers do.

    The primary loss of power, dissipated as heat, happens in low-frequency power transformers and in linear regulators and transistors. Slightly less is lost in rectifier diodes, switching transistors and high-frequency transformers used in switching (PWM) power supplies.
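The efficiency figures in this comment map directly to waste heat: input power is load divided by efficiency, and everything above the delivered load comes out as heat. A quick sketch:

```python
def heat_watts(load, efficiency):
    """Heat dissipated by a supply delivering `load` watts at `efficiency`."""
    return load / efficiency - load

# A 300 W load through a wall-wart-class supply, a typical ATX supply,
# and an optimized switcher (efficiency tiers taken from the comment above).
for eff in (0.60, 0.85, 0.95):
    print(f"{eff:.0%} efficient -> {heat_watts(300.0, eff):.1f} W of heat")
```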
  • by olman ( 127310 ) on Friday March 03, 2006 @09:04AM (#14841869)
    The only limit on using higher frequencies comes when you start to get magnetic losses in transformers and chokes, so in practice a few hundred kHz is the useful limit in switched-mode PSUs.

    Thus, if you were starting again with an electric grid system, 500 or 1000Hz would be a much better solution.

    IAAPD (I am a PSU designer)

    1MHz SMPS is nothing fancy these days. In fact they're available even as integrated chips which combine FET switches and the controller into one IC. If you count in point-of-load charge pumps and such, you see up to 3MHz.

    Generally speaking, the worst offender is high-current FET gate charge, which eats up more power than all the other losses combined in a synchronous buck converter (higher-DC-to-lower-DC topology, probably the most common type in use). Small (1-10uH) inductors are much better behaved in comparison. One reason to use such high frequencies is indeed that you can get smaller inductors and less ripple current, but you're limited by the FET gate charge. Of course, if you're driving some 200W load, you can just say "h*ll with it" and build a high-power driver circuit for your switches; 5% waste heat in switching losses doesn't bring down the house when you can use dirt cheap inductors.
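    The gate-charge point can be sketched with a rough loss budget: gate-drive loss is roughly Qg·Vgs·fsw, so it scales linearly with switching frequency, while conduction loss (I²·Rds(on)) does not. All component values below are hypothetical datasheet-style numbers:

```python
# Rough loss budget for a synchronous buck converter. Gate-drive loss
# scales linearly with switching frequency; conduction loss does not.
# All component values are hypothetical, for illustration only.

def gate_drive_loss(q_g: float, v_gs: float, f_sw: float) -> float:
    """Power spent charging and discharging a FET gate: Qg * Vgs * fsw."""
    return q_g * v_gs * f_sw

def conduction_loss(i_out: float, r_ds_on: float) -> float:
    """I^2 * R loss while the FET conducts (duty cycle ~1 for simplicity)."""
    return i_out ** 2 * r_ds_on

q_g, v_gs = 30e-9, 5.0        # 30 nC gate charge, 5 V gate drive
i_out, r_ds_on = 10.0, 5e-3   # 10 A load, 5 milliohm on-resistance

for f_sw in (300e3, 1e6, 3e6):
    print(f"{f_sw / 1e6:g} MHz: gate {gate_drive_loss(q_g, v_gs, f_sw) * 1e3:.0f} mW, "
          f"conduction {conduction_loss(i_out, r_ds_on) * 1e3:.0f} mW")
```

    Tripling the frequency triples the gate-drive loss while the conduction loss stays put, which is the tradeoff behind the "limited by the FET gate charge" remark.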
  • by Waffle Iron ( 339739 ) on Friday March 03, 2006 @09:53AM (#14842064)
    You could use a spinning disk.

    Actually, high-speed flywheels are a viable energy storage system. IIRC, the most advanced ones currently use a Kevlar ring suspended on magnetic bearings in a vacuum container.

    The container has to be heavily armored, because if the flywheel fails and flies apart, all of the energy gets released at once. I saw a picture of the results of that in an article somewhere; it was a pretty big mess.

  • by araemo ( 603185 ) on Friday March 03, 2006 @09:56AM (#14842082)
    Cheap PSUs support it because they are banned in the EU now if they don't support it.

    Most cheap ones use the inferior method he referred to (often called 'passive' vs. 'active' PFC in PSU literature).
  • by The Cisco Kid ( 31490 ) * on Friday March 03, 2006 @10:23AM (#14842240)
    The Edison/Tesla one was about long distance transmission of power, and AC is still the winner there.

    TTL logic has to run on DC, so you have to convert the supplied AC to DC. This is just recognizing that instead of converting it individually in each of dozens or hundreds (or more) machines, that it is more efficient and reliable to have one (and perhaps a redundant standby) converter providing DC to the same machines.
  • by Anonymous Coward on Friday March 03, 2006 @10:24AM (#14842250)
    What's new about this? Just about every manufacturing plant in the world runs its electronics on (nominal) 24V: computers, controllers, hubs, switches, etc. The cabinets may have AC, often two- or three-phase, but that's not always easily accessible, for safety reasons.

    If you want to install a computer on a manufacturing plant floor, it had better run on 24V.

  • by jeffmeden ( 135043 ) on Friday March 03, 2006 @10:28AM (#14842269) Homepage Journal
    Bingo. I work with one of the premier flywheel technology companies; we incorporate their technology into industrial power supply systems. The disk is carbon fiber, suspended by magnets in a vacuum, and is spun up to something like 60,000 rpm. It's about the size of a full-height rack, and manages to hold a whopping 2/3 of a kilowatt-hour. Yep, that's it. The advantage of this technology comes from the very efficient charge and discharge, not from the net capacity itself. A flywheel that could actually run a datacenter would have to be outright monstrous.
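    Those figures can be sanity-checked from E = ½Iω², backing out the moment of inertia that stores 2/3 kWh at 60,000 rpm (a sketch, assuming only the quoted numbers):

```python
import math

# Sanity check: what moment of inertia stores 2/3 kWh at 60,000 rpm?
# E = 0.5 * I * omega^2, so I = 2 * E / omega^2.

E = (2.0 / 3.0) * 3.6e6           # 2/3 kWh in joules (1 kWh = 3.6 MJ)
omega = 60000 * 2 * math.pi / 60  # 60,000 rpm in rad/s

I = 2 * E / omega ** 2
print(f"required moment of inertia: {I:.3f} kg*m^2")  # ~0.122 kg*m^2
```

    A surprisingly small rotor, which is consistent with the point above: the win is in charge/discharge efficiency, not in capacity.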
  • by mwood ( 25379 ) on Friday March 03, 2006 @10:32AM (#14842292)
    Because there's oodles of 48VDC power supply gear out there now, since telcos buy it in trainload lots to run their equipment. Battery for telephony has been 48V pretty much forever. Gear that can give you reliable 5V@1000A is probably rather scarce (pronounced "expensive").

    It's a good idea but it won't fly until DC becomes common in datacenters. And then it won't fly because the datacenters will have all been rigged for 48V. :-(
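    The 5V@1000A point is the crux: for a given power, cable loss grows as I²R, which is why 48V distribution is so much more practical than 5V. A quick sketch, with an illustrative guess for the cable resistance:

```python
# Why distribute 48 V rather than 5 V: for the same delivered power,
# a lower voltage means higher current, and cable loss grows as I^2 * R.
# The rack power and cable resistance here are illustrative guesses.

def wiring_loss(power_w: float, volts: float, cable_ohms: float) -> float:
    """I^2 * R loss in the feed cable for a load drawing power_w at volts."""
    current = power_w / volts
    return current ** 2 * cable_ohms

P = 5000.0   # a 5 kW rack
R = 0.01     # 10 milliohm feed cable

print(f"48 V feed: {wiring_loss(P, 48.0, R):.0f} W lost")   # ~109 W
print(f" 5 V feed: {wiring_loss(P, 5.0, R):.0f} W lost")    # 10000 W lost
```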
  • Hero Dies Penniless (Score:4, Informative)

    by rdmiller3 ( 29465 ) on Friday March 03, 2006 @10:56AM (#14842414) Journal
    I wonder how long the list would be, if we filled in all the names which could be described by such a headline? How many of the greatest positive influences in human history have died under pitiful circumstances?
    Remember the death of Archimedes!

    Anyway, the respondent claimed:

    Edison "advocated" for all power systems, including long-distance transmission, to be DC - because that's what he was selling. His battle with Tesla for the first big contract, electrifying NYC, is the stuff of legend. Tesla won. And died penniless in the 1940s, while Edison died fat and rich from thousands of patents, most on inventions invented by people working for him. Some of whom no doubt died penniless.

    Check your history!
    Edison died nearly penniless too.

    The account which I read described how he ran across some iron-rich sand on a beach, and it gave him the idea to try a new mining technique where the ore would be extracted from non-ore material by dropping the sand past magnets. The idea was a good one (and is still used) but the site he chose to build his iron mine turned out to be almost completely lacking in iron ore. The iron ore in the sand on the beach had apparently washed up from some other source.

    Maybe he wouldn't have been desperate enough to try such a risky thing if he had been ALLOWED to sell AC power. I'm sure he could see the advantages of AC for power transmission but Edison didn't have the patents for that, and you can bet that Westinghouse wasn't going to license the technology at a reasonable price to their chief competitor.

    So Tesla got ripped off by Westinghouse because he wasn't business savvy and they got ownership of the patents. Then Edison, even though he was somewhat business savvy, got shut out by Westinghouse because they owned the patents. In both cases, patent law helped business-people who didn't invent anything get rich while the real inventors lost out. Shouldn't we remember that the patent system was set up in order to encourage invention?

  • by operagost ( 62405 ) on Friday March 03, 2006 @11:51AM (#14842772) Homepage Journal
    Condenser microphones use 48V phantom power, and I can assure you that when I touched the housing of a mic that had its hot lead shorted to ground in the patch panel, I felt it. And that was probably only a few milliamps.
  • by confused one ( 671304 ) on Friday March 03, 2006 @12:26PM (#14843014)
    (Just for kicks and giggles) ^2... A DC-DC converter actually chops the incoming voltage into pulsed DC as the parent is implying. Depending on your point of reference, you could call it AC... It either feeds into a charge pump or a regulator circuit that results in the desired voltage, then passes through some filtering to smooth it out.
  • by stupidfoo ( 836212 ) on Friday March 03, 2006 @12:58PM (#14843234)
    Both vi and emacs suck

    nano is where it's at ;P
  • by ecloud ( 3022 ) on Friday March 03, 2006 @02:22PM (#14843955) Homepage Journal
    I believe the article makes an oversimplification by stating that AC is better for long-distance power transmission. Rather, it's easier to generate AC power (no rectifiers are needed), easier to switch (because the arc when the switch opens is much easier to extinguish - current flow actually stops for a short period of time, and the arc goes out), easier to run a motor from AC (no commutator), and easier to do voltage conversions (you only need a transformer). For really high-power long-distance transmission lines (like between states) they use very high-voltage DC because it is in fact more efficient. But I'm not sure how they do the conversion from DC back to AC in that case (would guess it's just a rotary converter - a motor running a generator). The losses from doing the conversion on both ends are acceptable only when they are less than the losses that would occur in such a long transmission line.

    Losses are especially bad in AC transmission lines when the power factor is not correct, because while currents which are out-of-phase with the generated voltage waveform are expressed using imaginary numbers, in fact they are very real currents, and they cause increased heating losses in the transmission line. So the power companies switch large capacitors in and out of the circuit to try to keep the current and voltage in phase. (And they would appreciate if every device on the grid was power-factor-corrected, but this doesn't happen, mostly because motors are inherently inductive, and motors are the largest consumer of electricity. Sometimes they at least manage to persuade large industrial customers to manage their own capacitor bank, to correct for the inductance of their own motors, and give them a discount in exchange.)
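    The heating effect of a poor power factor is easy to quantify: for a given real power P, the line current is I = P/(V·pf), so the I²R loss grows as 1/pf². A sketch with illustrative feeder numbers:

```python
# Line heating vs. power factor: for real power P delivered at voltage V,
# line current is I = P / (V * pf), so I^2 * R loss grows as 1/pf^2.
# The load, feeder voltage and line resistance are illustrative values.

def line_loss(p_watts: float, volts: float, pf: float, r_ohms: float) -> float:
    """Resistive loss in a line of r_ohms carrying real power p_watts."""
    current = p_watts / (volts * pf)
    return current ** 2 * r_ohms

P, V, R = 1e6, 13800.0, 2.0   # 1 MW load on a 13.8 kV feeder, 2-ohm line

print(f"pf=1.0: {line_loss(P, V, 1.0, R):.0f} W")   # ~10502 W
print(f"pf=0.8: {line_loss(P, V, 0.8, R):.0f} W")   # ~16409 W
```

    Going from a power factor of 1.0 down to 0.8 raises the line loss by 1/0.64, about 56%, for the same delivered power, which is why utilities switch in those capacitor banks.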
