'Case-less' Rackmounts and Multi-Machine Power Supplies?

phungus asks: "I'm looking for manufacturers of rack-mounted 'case-less' system trays for a project I'm researching. I'd like to rack-mount all of our dedicated servers without using cases; ideally, you could put all of a machine's components on one tray. I'd also like to find out if anyone knows of manufacturers who make big multi-machine (ATX) power supplies so that I can eliminate the individual supplies from the picture. I seem to recall an advertisement in a Linux magazine for exactly this but can't seem to find it anymore. It would be nice if it supported standard relay racks, but full enclosures would be okay too."
  • Anyone who's done some serious breadboarding knows what a pain in the ass cases are. I've got some friends who've done hardware for DOD projects (US Military for the acronym-impaired), and it was just a bunch of boards hung in a rack. Kinda like old telco equipment, or mainframes. Looked hokey, but that's how the military wanted it. Ventilation's only a problem if you're not passing air over the thing, and one whole side of the cases tended to be an array of fans, 5-1/4" and 9" fans both.

    Perhaps the poster (phungus) could answer this for me: is it really a case-less setup you want, or just easier access to the boards inside? If it's easier access, "tool-less" cases might be the answer.
  • There is no good reason to have a single, central power supply for multiple machines, but there are several good reasons not to:

    Power supplies now are cheap and small, thanks to high demand, and they are no less efficient than a single big supply would be (i.e., a 70% efficient single supply turns as much energy into heat as twenty 70% efficient supplies carrying the same total load).

    Were one machine to fail, unless you have some sort of hot-swap arrangement in place, you have to shut off every machine on that supply to service it.

    A significant portion of your power at low voltages (3.3V, 5V, etc.) is lost in the wiring from the power supply to the motherboard. Say the three-foot run to each computer works out to about .01 ohm. 5V at 80A (eight motherboards) introduces a .8V drop across the copper, so 64W is lost as heat and the boards only see 4.2V. (This is an exaggeration to make the point; were you to actually do this you'd use copper busses, say 1/4" by 1/4" copper bar for each voltage, which have much lower resistance.) This is the main reason high AC voltages are used for distribution: higher voltage means lower losses over less copper and longer distances. Don't make your power supply reach more than a few inches to your mobo unless you know what you're doing. (There's a quick worked sketch of this arithmetic at the end of this comment.)

    There are numerous other reasons, but I'd say efficiency and fault tolerance are the two biggest reasons to avoid this idea. I'm assuming that since you're considering this idea you're rolling in money, as a custom power supply (which is what you're after) of this size is not trivial.

    -Adam

    Some minds are like cement: thoroughly mixed and permanently set.
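
    A minimal sketch of the voltage-drop arithmetic above, using the same assumed figures (a .01 ohm run carrying 80A at 5V); the numbers are illustrative, not measurements from a real build:

        # Voltage drop and resistive loss in a low-voltage DC run.
        # All figures are assumptions taken from the comment above.
        supply_voltage = 5.0      # volts at the supply
        current = 80.0            # amps drawn by eight motherboards (assumed)
        run_resistance = 0.01     # ohms for the whole run (assumed)

        v_drop = current * run_resistance           # V = I * R   -> 0.8 V
        p_loss = current ** 2 * run_resistance      # P = I^2 * R -> 64 W of heat
        v_at_board = supply_voltage - v_drop        # 4.2 V left at the boards

        print(f"drop {v_drop:.2f} V, loss {p_loss:.0f} W, at board {v_at_board:.2f} V")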

  • Go with someone like BGW Systems [bgw.com] for the rack-mount shelves and such.
    Look under "Racks" for details.
    You're still going to need decent cooling in an open-case environment. I don't know of a multi-computer ATX power supply.
    -----
  • Please, people, don't chew the guy out unless you know what he is talking about. Many of you clearly don't; these are commercial products used in high-end computing applications. Remember that there are many products that really exist even though you can't buy them at CompUSA.

    This is a very good way of building a server. The ElCheapo power supplies that come in PC cases are not only inefficient, they fail frequently and take out everything downstream.

    The single power supplies used in these racks are overengineered to the point that they don't fail in this manner. The racks usually have front and back doors and a blower...one we have where I work has a side-mounted McLean air conditioner! They are better cooled than any little PC box.

    As far as manufacturers go, I don't know. Some used to advertise in Linux Journal (which I have not taken for a couple of years), but many of the designs used one supply per machine, which I really don't care for. I like a single, decent-quality supply more than 24 crummy ones. I recall that 24 per rack seemed to be the maximum you could fit, which would make each one about two rack units high--the smallest I can imagine. They were using Intel all-in-one motherboards with integrated 10/100 and VGA, with Black Box KVM switches. I would use the Belkin switches myself due to $$$.

    Most seem to have rolling ventilated shelves with an IDE drive on each shelf.

    Ours here at work has no identifying marks except "McLean" on the air conditioner, but as McLean is a well-known rack-blower manufacturer I would think they just made the air conditioner, not the rack. The rack was shipped in from a company called "NetVantage" that we bought, so I can't tell you anything else.

    I will check when I get home, I seem to recall I ordered brochures from some of those companies.

  • Okay, I know what you are saying, but I still like larger supplies. The cheap supplies fail a lot, percentage-wise, compared to "real" power supplies--in particular, components like the fan.

    To be fair, I don't think you really need a fan in most of these racks; the PS would cool well enough through convection. But most of the supplies use cheap components that fail at a pretty alarming rate. The only large PC network I set up lost a lot of power supplies, and I bought the better ones.

    Though a larger supply may be no more efficient, the sort of supplies used in large computers tend to fail pretty infrequently, and uptime may be more important than power use. Plus, if 20% of the cheap supplies fail in five years and you have 24 of them, chances are you will lose about five computers over that time. A single hot-swap box (see below) probably has a failure rate in the low single digits... even if it is as high as 5%, 2 x 0.05 is a lot lower than 24 x 0.2, if you see what I mean. (There's a quick sketch of this comparison at the end of this comment.)

    Plus, with a single supply you can mount it in the unenclosed portion of the rack and blow its heat into the room rather than mixing it into the air circulating in the computer portion of the case. It also lets you keep the AC wiring even farther from the computers--which is nice if electrical noise is a problem. I use one computer that not only has the PS in a shielded portion of the case, it is a linear supply. Very heavy and a lot less efficient, but the +5V line is rock-stable on my scope.

    As far as fault tolerance goes, while a single supply does provide a single point of failure, most high-end systems use multiple hot-swap supplies. The network gear I help design uses four supplies but can run on two (or one, with a minimal set of blades). With more than one supply you get load-sharing, and you can hot-swap a failed unit without a problem.

    Now, the voltage drop is a good point. I've run +5V quite a ways, and frankly, with multiple 16-gauge wires the drop is not even worth talking about, even in a double-wide rack (one machine of mine runs 5V across a four-wide rack with no significant drop). Mini- and mainframe computers do this all the time. While newer machines use a PS per cage, look in the back of a PDP-11/60 or a big HP 9000/800 sometime: there is a 5V supply you could spot-weld with! If the design powers a single cage, you can use sense lines to regulate the voltage at the backplane--although not in this case. Remember that 5V can wander +/- 250mV with no problem for many designs.

    Now, with 3.3 volts the voltage drop could be a concern. I don't know how the rack companies handle it; one way would be to put an "industrial brick" type DC/DC converter from 5V to 3.3V on each shelf. You could heatsink it to the shelf and not generate too much heat. I don't know how they deal with that issue in the commercial products.
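
    A rough sketch of the failure-count comparison above; the rates are the assumed figures from this thread (20% over roughly five years for cheap supplies, 5% worst case for a redundant pair), not data from any real deployment:

        # Expected failures = number of units * per-unit failure probability.
        # All rates below are assumptions quoted from the comment above.
        cheap_supplies = 24
        cheap_fail_rate = 0.20    # assumed: 20% fail over ~5 years

        big_supplies = 2          # assumed redundant hot-swap pair
        big_fail_rate = 0.05      # assumed worst case

        expected_cheap = cheap_supplies * cheap_fail_rate   # about 4.8 failures
        expected_big = big_supplies * big_fail_rate         # about 0.1 failures

        print(f"cheap: ~{expected_cheap:.1f} failures, redundant pair: ~{expected_big:.1f}")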

  • Rackable (http://www.rackable.com [rackable.com]) has some really nice high-density rackmount systems with interesting cooling and wire-management features. If you want, you can get the systems without top or bottom covers, and they can be installed tray-like into the rack.

    They have half-width 1U units that can take dual CPUs, which means you can get 88 dual-processor systems into a single rack :)
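
    Back-of-the-envelope math for that density claim, assuming a 44U usable rack (the post doesn't state the rack height, so that figure is an assumption):

        # Two half-width 1U systems fit side by side in each rack unit.
        rack_units = 44           # usable U per rack (assumption)
        systems_per_unit = 2      # half-width chassis, two per U
        cpus_per_system = 2       # dual-CPU boards

        systems = rack_units * systems_per_unit    # 88 systems
        cpus = systems * cpus_per_system           # 176 CPUs

        print(f"{systems} systems, {cpus} CPUs per rack")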

  • Actually, you need the case if you plan on using the standard fans--without the case there is nothing to channel the air evenly over the motherboard. It's a common mistake, but leaving the case off a server can actually make it more prone to overheating.
  • Firstly, if your power supplies do not have a fan output, then you did not buy 'good' power supplies. Also, if you are experiencing a 20% failure rate over a period of 4 years, well, I think you may need to ask yourself why in the world you got such cheap junk to power your cluster. Don't expect to buy a good power supply for under $100[US], and even that much will only get you a mediocre one.

    So, what you are saying is that you want a 'rack' with two or more large power supplies working in a tandem failover mode. This rack would contain supports for bare motherboards and other computer components. Each motherboard would have a power disconnect such that one could safely turn off and remove one motherboard and other components from the system without affecting other components of the system.

    As far as your comments about not needing fans, I hope you are kidding. Say your single system uses about 200W. Due to (let's be generous here) 80% efficiency, it will draw about 250W from its source, converting a full 50W to heat in the supply alone. That may not seem like much, but it's enough to raise the local temperature past the components' ratings. Convection will work if the ambient temperature is 25C, but that means you have to take 50W of heat out of the room to hold it at that temperature--and that's before the 200W the motherboard and drives themselves turn into heat.

    Say you want your rack to hold 20 motherboards, each eating 150W. The power supply (or supplies) must deliver 3kW, turning roughly 750W into heat at 80% efficiency. The motherboards turn maybe 100W into heat each, so your rack is a space heater capable of delivering 2750W to its environment (depending on system load, CPU throttling, etc.). Sorry, but convection cooling isn't going to move nearly 3kW of heat out of a rack-mount case no matter how well it's designed; you will fry nearly everything inside. At that point you not only need high-volume blowers to vent the case, but you must air-condition the room, ideally to below 25C. (There's a quick heat-budget sketch at the end of this comment.)

    I suspect your first effort had so many power supply failures because of improper venting and cooling. This is not a trivial design, and I would suggest you either learn a lot more about thermal system design, or use a product which is from a company with a lot of experience in this area.

    So, in short, yes, you can do it, and it would work very well. But you will end up paying through the nose for the design; it would cost more to do it this way than to get Dell rackmount servers with dual power supplies in each. If this is a one-time project, I don't think the cost is worth the effort you are going to put in. I can see that you believe you won't be adequately served by individual power supplies, but if that's the case you've got to buy better ones. That would still be less expensive than designing (or having designed) a full rack case of this type.

    -Adam

    Good Idea: Doing your own yard work.
    Bad Idea: Doing your own dental work.
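
    A rough heat-budget sketch for the 20-board rack described above, using the same assumed numbers (150W boards, 80% efficient supplies, about 100W dissipated per board); it is illustrative only:

        # Heat that a hypothetical 20-board rack dumps into the room.
        # Every figure here is an assumption taken from the comment above.
        boards = 20
        watts_per_board = 150.0        # DC load per motherboard (assumed)
        supply_efficiency = 0.80       # assumed supply efficiency

        dc_power = boards * watts_per_board            # 3000 W delivered
        input_power = dc_power / supply_efficiency     # 3750 W from the wall
        supply_heat = input_power - dc_power           # 750 W lost in the supplies
        board_heat = boards * 100.0                    # ~2000 W from the boards (assumed)

        total_heat = supply_heat + board_heat          # ~2750 W into the rack
        print(f"roughly {total_heat:.0f} W of heat to remove")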
  • Okay, first some confusion to clear up.

    I am not the poster of the topic. The poster was not asking for some home-made kludge, as many people here have assumed; there are commercial companies that make enclosed racks (front, back and side doors) with forced air blowing through the entire case. Some even have a dedicated air conditioner. Throughout the rack are mounted pull-out shelves that each hold a motherboard. This is what I am talking about. Some have one standard PC-type power supply mounted on each shelf; others have a single large supply. The ones I have seen advertised in the Linux mags seemed to include the motherboards and drives. You'll note he mentions that he has "seen them advertised" in his post.

    "...experiencing a 20% failure rate over a period of 4 years, well, I think you may need to ask yourself why in the world you got such cheap junk to power your cluster. Don't expect to buy a good power supply for under $100[US], and even that much will only get you a mediocre one."

    Oh, I agree 100%. But most users here will buy a whole case and power supply for $50; I purchased a case/supply for $100 at my old job and got chewed out for wasting money. This is where the problem lies. And I worry that the supplies in the racks may be substandard; the only decent PC-type supplies I have found are the ones from California PC (calpc) and the linear supplies from Integrand Research. A single high-end supply is better than 24 cheap Chinese ones. It is hard to find a good PC power supply... like most parts.

    "So, what you are saying is that you want a 'rack' with..."

    The ones I have seen are in standard relay racks.

    "As far as your comments about not needing fans, I hope you are kidding. Say your single system uses about 200W..."

    The rack has huge blowers that cool the whole rack. All I said (or meant to say) is that in the designs with one PC supply per shelf (and therefore per motherboard), that supply--which only needs to cool itself, not anything else--probably does not need its own fan, since the rack itself is kept so cool. I mentioned this during my rant on cheap PC supplies with Magical Frying Fans. Like I mentioned in another message, one rack here has a built-in McLean air conditioner. The supply itself would be kept cool by convection; chilled air would be blowing around it, and it would be mounted to a metal shelf in a metal rack in a computer room. I was trying to give credit to your many-supplies viewpoint....

    "...but you must air-condition the room, ideally to below 25C."

    I think these systems are almost always in proper computer rooms.

    "I suspect your first effort had so many power supply failures because of improper venting and cooling."

    I was just discussing failures of cheap power supplies on users' desks. I've never designed one of these things; I would buy one off the shelf. In my own designs I like standard 5U rackmount cases--from Integrand (a.k.a. Tri-Mag) if I can afford it.

    "This is not a trivial design, and I would suggest you either learn a lot more about thermal system design, or use a product which is from a company with a lot of experience in this area."

    The companies that sell the Linux systems based on this design do--or at least claim to. The rack we have here does not contain any computers; they have been stripped out. But it was made in a professional fab shop and they did a nice job. I would photograph it if I had a digital camera.

  • Yes, but a rack with a massive blower keeps things cooler than a crummy PC case fan ever will. That is what he is talking about.
  • We have lots of mini-towers in our datacenter right now that I want to convert to a case-less design. I basically want to make room for more servers without having to purchase 1U machines. We buy our servers (albeit the cheap ones) for only several hundred dollars apiece; I want to save money on the case and free up more room.

    We have several big air conditioners blowing into this room, so cooling is not a problem.

    It's more of a space+economic problem.
  • The question wasn't asking for your opinion on using multi-machine power supplies. This is one thing I hate about typical geeks: the inability to answer a question directly without volunteering their most educated opinion. "How can I do x?" "Why would you want to do something so stupid as x?" Holy shit, people, are we so bloody brilliant that we can imagine every possible thing this guy could want to do, and we know for sure that multi-machine power supplies are a bad idea? Check your ego at the door and stop trying to win clue-points with your conjecture. If he wanted our opinions on whether to use one, he would have asked, and he would have given enough background information on the project to put us in a POSITION to even have opinions.

    Regards
  • "Flamebait"?! What the heck was that for?
