The Soaring Costs for New Data Center Projects

miller60 writes "The cost of building a quality data center is rising fast. Equinix will spend $165 million to convert a Chicago warehouse into a data center, while Microsoft is said to be shopping Texas sites for a massive server farm that could cost as much as $600 million. Just three years ago, data centers were dirt cheap due to a glut of facilities built by failed dot-coms and telcos like Exodus, AboveNet and WorldCom. Those sites have been bought up amid surging demand for data storage, so companies needing data center space must either build from scratch or convert existing industrial sites. Microsoft and Yahoo are each building centers in central Washington, where cheap hydroelectric power from nearby dams helps them save on energy costs, which can be enormous for high-density server installations."

  • Detroit? (Score:5, Informative)

    by haydenth ( 588730 ) <haydenth AT msu DOT edu> on Wednesday June 07, 2006 @10:42PM (#15491963)
    Some of these firms should really start looking at warehouses in Detroit. If you can secure the facility properly, you can get TONS of old warehouses and factory floors for very little. Look at the conversion that Wayne State did with techtown [techtownwsu.org] - they converted an old abandoned warehouse into usable high-tech space (and the real estate was virtually free).
    • There's probably a good reason why people have abandoned the city.

      I hear real estate in the Katrina zone is also cheap; somebody should
      build a data center there. Like, close to the water.
    • What about taxes and power concerns? You pay for real estate once, but you pay taxes every year, and if you loose power, you will loose paying customers.

      On both counts, Texas wins hands down. We have low taxes, and our state power grid can be disconnected from all others. There was a problem a few years back where a power issue in the Midwest took out huge parts of the Northeast.

      Then you have to think about other things like being able to fill and staff the facility.

      Again, Texas wins out. We're the nex
      • Ok, you moved from Mountain View to Texas. I agree it's an improvement (I moved from Mountain View to Melbourne and I spent a year in Houston one week, too) but ... it's winter, right? You haven't actually gone outside in the summer yet? I recommend you install redundant air conditioning systems in your car, with battery backup.
        • All the datacenter growth in Texas is happening in central and north Texas, nowhere near Houston. You've gotta remember, Texas is bigger than all of New England, so there are a lot of differences in climate across the state. Central Texas is actually about 5-10 degrees cooler year round than Houston, there's no state income tax, and land is some of the cheapest in the country. Power is good as well; Texas has an independent electric grid from the rest of the US. There is a lot of wind power generated in the
      • Re:Detroit? (Score:1, Funny)

        by Anonymous Coward
        if you loose power, you will loose paying customers

        loose - the opposite of tight
        lose - to not win

        Example: I guess Texas lose in the spelling stakes eh Bubba?
      • Re:Detroit? (Score:4, Insightful)

        by Bishop ( 4500 ) on Thursday June 08, 2006 @01:33AM (#15492674)
        Power grid reliability is not a big concern. Data centres of this size will have backup generators. Taxes aren't going to be an issue either. These data centres will be given sweetheart tax deals, no-interest loans, and other incentives. The states and counties will give out these incentives because the data centres will bring so-called "high tech jobs."
        • Power grid reliability is not a big concern. Data centres of this size will have backup generators.

          Actually, power grid reliability is a huge concern. Most data centers of this size will have connections to two different power grids, preferably from two different electric providers. I don't care how much generator capacity you have, it's most likely not enough to last longer than a couple of days without power. This definitely influences data center projects of this size, where architects need to conside
          • Most data centers of this size will have connections to two different power grids

            No they won't. It doesn't make sense. You don't know what you are talking about. The North American power grid is all interconnected. There is no second grid to connect to. Even if there were a second grid, running power lines is insanely expensive. Generators are cheaper.

            I don't care how much generator capacity you have, it's most likely not enough to last longer than a couple of days without power.

            What does this mean? The gen
        • The states and counties will give out these incentives because the data centres will bring so called "high tech jobs."

          But how do you get skilled high-tech people to work in a place where they risk their lives on their daily commute?
    • Re:Detroit? (Score:5, Funny)

      by vertinox ( 846076 ) on Wednesday June 07, 2006 @11:12PM (#15492082)
      Some of these firms should really start looking at warehouses in Detroit.

      Do bullet proof vests come included?
    • I hadn't heard of techtown, but as soon as you mentioned Wayne State, I knew whose brainchild it was...

      WSU is very, very lucky to have Dr. Reid as their President. That's a guy with vision. And boy, does he love his technology!

      We were sorry to see him leave in '97, but all of the good things that have happened at Montclair State [montclair.edu] in the last 10 years were from his vision.

      Good luck Dr. Reid, glad to see you're still pushing the envelope!
    • I've been saying this about Cleveland for a couple years now. Not only is the real estate cheap, but I figured that about 6 months out of the year, your A/C costs would be about nil. Just pump in air from the outside.

      In fact, if you could figure out a way to sell the heat generated by the computers to nearby buildings, you might make a tidy side-sum to help defray the power costs :)
  • by patio11 ( 857072 ) on Wednesday June 07, 2006 @10:54PM (#15492018)
    In the finest of Slashdot traditions I'm speaking from barely informed ignorance here:

    It seems to me you can control your costs by buying existing space, like a mothballed factory, in an economically depressed area. Like, say, anywhere in the rust belt. You've got a bit of flexibility in siting as long as you can get Internet pipes, and you don't necessarily *have* to set up in an area known for a workforce with a high degree of tech skill (and absurd prevailing wages, along with an almost certainly higher cost of everything because it's metropolitan).

    Our technology incubator in Japan is in a park with a few major data centers and is located 40 miles from the middle of nowhere. The US analog would be siting the datacenter in a cornfield in central Illinois. We have (comparatively) cheap power rates, a cost of living (and prevailing salaries) a fraction of that in Nagoya, and the rent (heavily subsidized by local government, which may not be an option for the folks discussed in these articles) is a song.
    • Why not build your datacenter in Alaska, where it's colder year round? I'd have thought building the thing in Texas would just help pump up your A/C costs.
      • Believe it or not, the speed of light is too slow. Latency would be an issue if the data center was in Alaska.
        • Believe it or not, the speed of light is too slow. Latency would be an issue if the data center was in Alaska.

          Horse hockey. The operative measure is the speed of the dark. Dark fibre, fewer hops, acceptable response.

        • That really depends on the use of the data centre. If you are serving internet content, then it's probably fast enough. Call it 6,000 km of road distance from Alaska to Texas and a travel time of roughly 0.02 seconds. It's not great, but not terrible. Good enough for most people. Plus you could probably cut that distance nearly in half by putting cable under the water.
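
          Back-of-the-envelope (light in fibre only manages about two-thirds of c, so the 0.02 s figure is a touch optimistic, and the path length is a guess):

            # Rough one-way latency over long-haul fibre (all figures are assumptions)
            distance_km = 6000.0          # assumed Alaska-to-Texas path length
            speed_km_per_s = 200000.0     # propagation speed in fibre, not vacuum
            print(distance_km / speed_km_per_s)   # ~0.03 s one way, ~0.06 s round trip before router hops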
      • Why not build your datacenter in alaska where it's colder year round.

        Our datacenter is just up the Hudson from NYC and was built back in the 1960s, When IBM Ruled The Datacenter, and disk farms generated a lot of heat and the ambient temperature needed to be roughly 70F.

        So, the DC is in the 2nd basement, and (had) vents to the outside, so cold winter air could be shunted into the room.

        Became obsolete, though, in the mid 1990s when the huge 3390 farm was replaced by a couple of EMC cabinets and the bipolar m
    • by 0racle ( 667029 ) on Wednesday June 07, 2006 @11:21PM (#15492122)
      The cost of turning that into a safe datacenter environment would be enormous. When was the last time you heard of an abandoned factory being built to hold a temperature-controlled environment? The costs that go into making a real datacenter are significant, and building the place from scratch for that purpose can be cheaper. Building a datacenter right downtown is a stupid idea, but that doesn't make building it out in the boonies a good one.
      • I wonder if anyone has run feasibility studies on building datacenters in abandoned underground facilities? They're naturally temperature-controlled: anything more than a few feet down is going to hover around 40-50F; really the only problem you'd have is the possible humidity. But last time I saw specs on servers, they're fine to about 80% RH [hp.com]. You'd obviously have to be very careful about possible flooding issues if it was in an area prone to that, but overall I think you could make use of a lot of old ind
      • When was the last time you heard of an abandoned factory being built to hold a temperature-controlled environment?
        I don't know about you, but I've never heard of an abandoned factory being built...
      • Actually, most factories are already temperature controlled environments. Industrial processes like steel smelting, injection molding, etc... generate quite a bit more heat than even the largest datacenters.
      • by Lumpy ( 12016 ) on Thursday June 08, 2006 @08:01AM (#15493517) Homepage
        The cost of turning that into a safe datacenter environment would be enormous. When was the last time you heard of an abandoned factory being built to hold a temperature-controlled environment?

        Oh, for crying out loud. It amazes me how little thought outside the box people have.

        Options...

        1 - Spend very little and build separate enclosures inside the warehouse that hold the Liebert units for environmental control and the servers in data-center pods.

        2 - Go uber cheap. Buy a bunch of gutted camper trailers and put the servers inside those, parked in the warehouse. Works great, and I have seen several startups that did exactly that. This also works very well for rental property, as you can pull up stakes and move your datacenter within minutes of getting your data pipes into another cheap warehouse.

        The best option, and the one usually done in these types of datacenters, is the first. You can hire simple general contractors to build interior walls with roofs that are only 10 feet high and insulate the crap out of them to make the perfect datacenter within 5-30 days.

        It's the mentally retarded CEOs and venture capitalists that think you need to spend 80 million dollars on a flashy facility with lots of glass and artwork and special "touches" that only impress clients who will never go there or see it.
    • Ignorance indeed. The point is simple. Above all else, you want to minimize the hazards to your datacenter. Detroit has been known to have icestorms and blizzards. Heartland corn fields have tornados. Add hurricanes on the southeast coast and a whole range of natural disasters on the west coast and it is apparent why you make your choices. The last thing any major internet company wants is for the roof of the data center to collapse under the weight of an ice storm or be torn off by a twister because in ex
      • Yeah, but if you know what the risks are, you can plan for them. It may be worth reinforcing the building to handle a direct tornado strike (reinforced concrete, anyone?) if the power and personnel costs are low enough. California has earthquakes, but there are still quite a few data centers (and high tech companies) located there. They simply take the risks into account in the financial equation and either accept the cost of that risk or take precautions to protect against it.
      • by Anonymous Coward
        Wow... Blizzards and icestorms... I wonder where Russians and Canadians put their datacenters.
      • by patio11 ( 857072 ) on Thursday June 08, 2006 @12:19AM (#15492405)
        Like I said: I live in Japan. We're the earthquake capital of the world, and yet somehow we manage to have buildings stay standing. Many of them also contain computers or millions of dollars of capital, strange as this may be. I trust that the folks living in Iowa and Detroit have figured out some combination of construction techniques, building codes, and insurance schemes which enables their cities to be something other than windswept wastelands. I mean, how long has the auto industry put billion-dollar factories in Detroit? And how many times have you seen GM say "Aww shootskie, we forgot about the ice storms and now three production lines are buried under 400 tons of collapsed roof and snow?"
    • Actually, you're not that far off the mark. Building a data center basically comes down to five key components:

      1. Getting lots of cheap power. Being next to a power plant with tons of extra capacity doesn't hurt. The farther you are, the more loss, and that means more $$$ per MW.
      2. Internet pipes. Having X thousand servers up and running with nowhere to push the bits is pretty useless. I'm not sure most people understand how hard it is to, say, get 40-60 Gig of bandwidth to the middle of nowhere. It takes months, if not years, to put in the right infrastructure. If you think I'm lying, call up, say, Sprint and ask them for a 10GE pipe to the middle of Iowa.
      • Why the comments as if Iowa was some backwards unwired wasteland? Working in eastern Iowa, we have a number of excellent datacenters that have just as much capacity as elsewhere. Inquire about a rack in Cedar Falls, Iowa and compare it to Equinix in Chicago. Not even in the same ballpark. But in Cedar Falls, I'm still on Internap as well as connections to MAE West. With a much lower cost of business than some metro areas, I'm surprised more people haven't located here.
      • I'm not sure most people understand how hard it is to, say, get 40-60 Gig of bandwidth to the middle of nowhere. It takes months, if not years, to put in the right infrastructure. If you think I'm lying, call up, say, Sprint and ask them for a 10GE pipe to the middle of Iowa

        Right-of-way is more important than existing infrastructure.

        Just to use a company that's familiar to you as an example: ask Sprint where its right-of-way came from -- the Southern Pacific Railroad.

    • by dj245 ( 732906 ) on Thursday June 08, 2006 @12:54AM (#15492545) Homepage
      Other posts have dealt with what you might need to get a datacenter running in one of these places. A good analogy, I think, is if you were to go and build a power station. A power station needs the following things-

      1. Access to a large body of water cuts costs immensely when dumping the heat from the beast. Fresh is preferred but not required.
      2. Access to high voltage lines, or a short distance to one that can be tied into. 34.5 and 105kV lines are expensive to build and maintain on a long-term basis.
      3. Access to fuel. Ideally rail, ship, or pipeline, because power plants burn massive quantities of fuel. Trucks do not cut it unless the distance is extremely short.

      I recently worked at a power station that was originally built with none of these things. The only people to ever make any money from this white elephant were the contractors that built it.

      Build your datacenter near a large body of water (or maybe in Juneau?). Build it near a power station (or build your own steam plant?). Build near some big strands of fiber. Being in the middle of nowhere for the sake of being in the middle of nowhere only profits the contractors.

      • Many of the heavy manufacturing plants use something like 440V three-phase to power huge motors continuously. Is a 34.5kV feed really necessary if a manufacturing plant didn't have the need?
          Many of the heavy manufacturing plants use something like 440V three-phase to power huge motors continuously. Is a 34.5kV feed really necessary if a manufacturing plant didn't have the need?

          He's talking about the primaries that come in. Not the stepped-down voltages that things actually run at. The transmission voltage on the primaries has a lot more to do with how far away from the substation/power generating plant your building is than how much power you need.

          I don't know how much power a couple of elec
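
          To put rough numbers on the distance/voltage point (the 2 MW load and power factor below are my assumptions, not the parent's):

            # Current needed to deliver 2 MW of three-phase power at different voltages
            from math import sqrt
            def line_current(power_w, volts, power_factor=0.9):
                return power_w / (sqrt(3) * volts * power_factor)
            print(round(line_current(2e6, 440)))      # ~2900 A at 440 V -- enormous conductors
            print(round(line_current(2e6, 34500)))    # ~37 A at 34.5 kV -- a modest feeder
            # Resistive loss scales with current squared, so long runs are done at high voltage.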
    • From a building perspective, that can possibly make sense. However, you would also want power from multiple substations for reliability. That could also be possible, but likely very expensive out in the middle of nowhere. But the hard thing would likely be network connectivity. Finding a high-speed uplink out in the middle of a corn field is going to be difficult. Finding multiple ones would be near impossible. What you save by reusing an old building or through cheap "rent" would be offset by the huge c
  • esp banks... (Score:5, Informative)

    by eggoeater ( 704775 ) on Wednesday June 07, 2006 @10:54PM (#15492019) Journal
    I work for a large financial institution.
    We have a LOT of data...and not just account data.
    Back in the 80's, the standard was two mainframes in the same room, back-up
    tapes kept on and off site, and a contract with a company to supply a DR computer
    if it was ever needed.

    Cut to 2006...
    We have dual fully redundant data centers, each with many mainframes, and pipes
    big enough to drive a dump truck full of bits between the two.
    A third one is about to open and a fourth is under construction.

    Most of this is for SOX.


  • In QUINCY? (Score:3, Interesting)

    by DurendalMac ( 736637 ) on Wednesday June 07, 2006 @11:06PM (#15492059)
    I grew up in Moses Lake, WA, which is about 30 miles SE of Quincy. This should be going into Moses Lake, but it isn't. We have goddamn fiber optics laid all over that town (and the county, exempting Quincy due to some sort of contract the PUD had with Verizon, I believe) going right up to people's houses. I enjoyed a 100Mbps symmetric connection for a while...then my bandwidth got capped. In fact, the PUD is charging the service providers so damn much for bandwidth, some have to cap it at 1Mbps down/512Kbps up. That's slower than fucking DSL and Cable! The local PUD is sitting on a fucking GOLDMINE and they're not doing a goddamned thing about it! They could have easily wooed MS and Yahoo into Moses Lake to build their datafarms there using the PUD's fiber network through the local providers (the PUD can't sell service, so they sell the use of the network to ISPs) and made things better in the town. But they're not doing shit. They haven't been doing much to promote, pitch, or package it for big guys to come in and build a major server farm out here. It pisses me off to no end to see those fuckers doing so little to help that town, and it needs all the help it can get. AAARRRGGGHHH!!!
  • I don't buy it (Score:3, Interesting)

    by appleLaserWriter ( 91994 ) on Wednesday June 07, 2006 @11:08PM (#15492066)
    The Westin Building [officespace.com] (no not THAT office space) still has plenty of space, including the entire 5th floor!
    • Re:I don't buy it (Score:3, Interesting)

      by 1sockchuck ( 826398 )
      Most enterprise customers don't have any interest in sharing a facility with 50 other telecom providers and hosting companies in a carrier hotel like The Westin Building. These companies want big, stand-alone data centers where they can have complete control over access and security. The other issue is that space is limited in telecom hotels like Westin. The Equinix project mentioned in TFA is 225,000 square feet, and the Microsoft requirement is for more than 400,000 square feet. Westin is a large facility
    • Haha, and let's not rule out the old Team F wiring closet on the 4th floor, either.
    • The problem with the Westin and any other datacenter in the downtown Seattle area is power. We have servers in Internap Fischer Plaza and they have 30 AMP caps on each rack. We can't get more than 15 1Us in a cabinet even though there's space for 30. You can't pay them for more because they can't get more. I have heard straight from them that they are pretty worried about power because power use is soaring and it's next to impossible to get more.

      If you go just outside the Seattle area (Kent, Tukwila), they'l
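
      For what it's worth, the arithmetic behind a 30 A cap looks something like this (the circuit voltage and per-server draw are guesses on my part):

        # Rough power budget for a 30 A rack cap (voltage and per-box draw are assumptions)
        breaker_amps = 30
        volts = 120                # assume a single-phase 120 V circuit
        derate = 0.8               # continuous loads are usually held to 80% of the breaker
        usable_watts = breaker_amps * volts * derate        # 2880 W
        watts_per_1u = 200         # assumed draw for a 2006-era 1U server
        print(int(usable_watts // watts_per_1u))            # ~14 boxes, no matter how many rack units are free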
  • is to wait until this new tech bubble bursts and get Super-Amazing Data Roxors TM for a fraction of the price. Seriously. The future is going to have so much storage and computing power for so damn cheap, it makes me feel a little something funny inside. Is this what they call "Love"?
    • and get Super-Amazing Data Roxors...

      I have this recurring vision of people tripping over this huge data cable and dislodging the little nub at the end that was the data centre.

      Data storage densities may continue to improve for a bit. Until we're reading the RFCs for a new RS-nnnn spec for DTE communication via quantum entanglement and metal telepathy* though, we're going to be building data centres for bandwidth and reliable power as much as for the cubic volume required to house binary digits.

      Which brings u

      • Which brings up another point -- when HDDs are approaching the terabyte range, does it still make sense to use single large disks when they're inherently throttled to IDE or SATA IO rates?

        Those huge-density 7200RPM drives are best for near-line and "online archival" storage. Perfect for SOX data retention.

        For speed, you still want 10K 147GB SCSI drives.
        • I prefer the 15K RPM 147GB SCSI drives personally =)
          Of course looking forward the high RPM SAS drives with physically smaller platters seem nice since I can get a large number of spindles into even a 2U enclosure.
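
          The spindle-speed difference is mostly rotational latency; a quick sketch (the seek times below are ballpark assumptions):

            # Average rotational latency is half a revolution: 0.5 * 60 / rpm (seconds)
            def rotational_latency_ms(rpm):
                return 0.5 * 60.0 / rpm * 1000.0
            for rpm, seek_ms in [(7200, 8.5), (10000, 4.7), (15000, 3.8)]:
                service_ms = rotational_latency_ms(rpm) + seek_ms
                print(rpm, round(service_ms, 1), "ms ->", round(1000.0 / service_ms), "random IOPS per spindle")
            # 7200 rpm lands around 80 IOPS; 15000 rpm closer to 170 -- hence more, faster spindles.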
  • by pavon ( 30274 ) * on Wednesday June 07, 2006 @11:59PM (#15492322)
    As we know, worker morale is important, and considering the traditional living arrangements of your standard computer geek, it stands to reason that they should build their data center in the most awesome basement [triggur.org] ever built! Hey, one can dream, can't he? To the batcave!
  • my plan on building a data center as a business.
  • by aaarrrgggh ( 9205 ) on Thursday June 08, 2006 @01:28AM (#15492659)
    It's incredibly uninformed to talk of costs in terms of total dollars!

    The old metric was in $/sq. ft., and today it is better to talk in terms of $/kW given higher densities.

    For a wide range of data centers, the building shell cost is around $100-250/sq. ft. An enterprise (EIA 692 "Tier 4") data center costs about $22k/kW, plus the high end of the building shell cost. A "Tier 3" data center is closer to $20k/kW and $200/sq. ft. When you drop to Tier 2, you cut the cost in about half, at $12k/kW.

    The only costs that have risen dramatically recently are generators and copper: the big engines typically used (1.5-2+ MW) now have a one-year lead time, and copper is about triple its cost of three years ago -- maybe a 15% premium maximum for a large data center overall.

    Costs get much more complicated when you talk about provisions for future expansion and site constraints.

    As for energy costs, yes, cheaper electricity is good for a data center. A 2MW data center will save about $350k/year if they can drop their electricity cost by $0.01 per kWh!
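
    Worked out, with the caveat that the overhead factor below is my assumption rather than a quoted figure:

      # Sanity check on the $350k/year saving for a "2 MW" data center
      it_load_kw = 2000
      overhead = 2.0             # assume cooling/UPS losses roughly double the utility draw
      hours_per_year = 8760
      saving_per_kwh = 0.01      # one cent per kWh
      print(it_load_kw * overhead * hours_per_year * saving_per_kwh)   # ~$350,000 per year
      # And the build cost under the $/kW metric: a 2 MW Tier 3 fit-out at $20k/kW
      print(it_load_kw * 20000)  # $40M, before the building shell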
    • You must be building somewhere very near an expensive megalopolis if you're putting up a shell for over $100/SF. Even with services, large industrial buildings can easily be built for under $80 including the land and utilities, and I suspect you could bring a facility in under $50 a square foot in the right places (and that includes the "right places" with big internet pipes). Makes me want to go build a datacenter in Christiansburg, Virginia. Lots of land, Virginia Tech right next door (tech-savvy bodies
    • by WilsonSD ( 159419 ) on Thursday June 08, 2006 @09:58AM (#15494113) Homepage
      The other really key metric is server utilization. It turns out that IT's dirty little secret is that the way they deploy applications (in static silos of servers that can't be shared between applications) requires that each app be dramatically over-provisioned with hardware to handle various load changes. A typical data center is only using 10% of its compute capacity at any given time. This has gotten dramatically worse as people moved from Mainframe->SMP->Cheap Pizza Boxes.

      -Steve

      http://www.cassatt.com/ [cassatt.com]
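
      A crude way to see what that 10% figure implies for consolidation (the target utilization is an assumed comfort level, not a Cassatt number):

        # What 10% average utilization implies for consolidation
        avg_utilization = 0.10
        target_utilization = 0.60     # assumed target, leaving headroom for load spikes
        print(round(target_utilization / avg_utilization))   # ~6 static-silo servers per shared host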
  • by bazily ( 838434 ) <slashdot@b[ ]ly.com ['azi' in gap]> on Thursday June 08, 2006 @03:43AM (#15492947) Homepage
    I love this cycle, where internet business heats up and companies start building datacenters to keep up with perceived demand. It happened the last time around, with companies like Exodus, Cable & Wireless, and all the others who had overbuilt when demand didn't materialize.

    Anything over 50k sf of datacenter is more than enough, assuming you've got cheap and available power and are close to a couple of fiber loops. The big reason that these new datacenters are so large (200k-400k sf; compare that to one floor of a high-rise office at 30k sf) is that they aren't allowed to have the power density (the electric company can only supply so much at a reasonable price). With servers more power hungry, yet smaller, there's a need for more power and cooling, but less space.
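
    To put illustrative numbers on that (both figures below are guesses of mine, not quotes from TFA):

      # Footprint forced by a power cap at limited design density
      site_power_w = 10e6           # assume the utility will only deliver ~10 MW to the site
      density_w_per_sqft = 50       # assumed allowable design density
      print(site_power_w / density_w_per_sqft)      # 200,000 sq ft -- the scale these projects are at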

    Building new isn't all that different in cost from retrofitting an old warehouse. I'd just buy one of the small operators out there and be up and running for a fraction of the cost. The problem there is that there's a company called Digital Realty Trust buying up a lot of the datacenters in the market, and they've got a ton of cash.

    So maybe the rust belt should be fighting for these developments, but they can't overcome one issue - companies want to be close to their datacenter. It goes against the security mission, the cost justification, and just about everything else, but these always get built right next to corporate HQ or in some metropolitan area. Doh!
  • Equinix is insanely expensive. I considered moving my company's colocation into Equinix's Ashburn VA facility but I ended up choosing more space at two others for half the price. Equinix has a beautiful place but I can't for the life of me figure out who actually needs biometric locks on the cages. That stuff isn't cheap.
  • by Exter-C ( 310390 ) on Thursday June 08, 2006 @04:05AM (#15493006) Homepage
    The cost of building out datacentres has been soaring for several reasons. The first real issue is being able to provide enough power for today's power-hungry servers to run at any sort of density required to actually churn the data. In the UK we see datacentres only able to offer a very low amount of power per square metre, which can often only support up to 4 quad-processor XEON servers per rack. When you can only have that density, the cost is much greater. The other aspect is how you cool it: the traditional air-conditioning raised-floor method really does not work, as it's almost impossible to cool exactly where you need to, and there will always be hot spots even if you're doing warm-row/cold-row designs. It's important to seal the cool air in and funnel it to where it's needed. APC have recently been working heavily in this area and claim to be able to cool MASSIVE amounts of density. The other issue is the management aspects of these datacentres. In the past you could design a datacentre to be good for 5-10 years; now it's hard to design something which will be good for the same length of time, because the power requirements, the cooling, and the power grid are often over-utilised (as in the UK datacentre market).
  • The price of AJAX (Score:3, Interesting)

    by Animats ( 122034 ) on Thursday June 08, 2006 @04:13AM (#15493021) Homepage
    This is the price of AJAX. If users are constantly going back to the server in the middle of a page, you need more server capacity. Really, the AJAX approach is a hideously inefficient way to update a form. We're now seeing the price of that.
    • Umm, AJAX is *more* efficient than a static page. It needs less server capacity because it doesn't require the entire form to be reloaded constantly.
      • It needs less server capacity because it doesn't require the entire form to be reloaded constantly.

        Depends on what you mean by "capacity". If you're talking about bandwidth capacity, then yes, AJAX can potentially reduce bandwidth. If you're talking about server processing capacity, then the answer is no, AJAX will not reduce server processing loads. AJAX requires more server software, processing, memory and time than simply having the server regurgitate a static, or quasi-static, webpage over and over.
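
        A toy comparison of the trade-off (every number below is a made-up assumption):

          # Partial (AJAX) updates vs. full page reloads
          full_page_bytes = 60000       # assumed size of a whole form, markup and all
          fragment_bytes = 2000         # assumed size of a small JSON/XML fragment
          ajax_requests_per_min = 10    # a chatty page polling the server
          reloads_per_min = 2           # the same page refreshed wholesale
          print(ajax_requests_per_min * fragment_bytes)   # 20 KB/min, but 10 dynamic hits
          print(reloads_per_min * full_page_bytes)        # 120 KB/min, but only 2 hits
          # Less bandwidth, more (often uncacheable) requests landing on the application tier.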
        • But the reason that the script goes back to the server is because the data has changed, meaning that it is not static.
          • The content being requested via AJAX could certainly be static. The point the other two are missing is they think AJAX requires server-side logic to execute to process what the client is sending, probably because the examples they've seen do just that. But the point behind AJAX isn't only to enable a web page to call into server-side logic; you could do this with a page refresh, too.

            The point of AJAX is to either send info to the server without refreshing the client page, or to update only a portion of the

    French Iliad SAS (the parent company of the ISP Free) just started a cute server-rental business in a former Exodus datacenter.
    The point of interest is that servers are fanless, built on low-consumption VIA processors, and consume about 20W/server.
    That should make the cost of operation much lower than traditional hosting...
    See details on http://www.dedibox.fr/index.php?rub=offre [dedibox.fr] (in French)
    Pictures of the datacenter: http://www.dedibox.fr/index.php?rub=datacenter [dedibox.fr]
  • Everything is cheaper in India. Then build out sufficient bandwidth to connect everything here to there. See it's not just about pesky American wages - it's also about pesky American real estate prices, utility costs and whatnot.
  • by peter303 ( 12292 ) on Thursday June 08, 2006 @09:47AM (#15494058)
    I've seen a number of conflicting estimates on how much power computers and digital devices use.
    One source decries widescreen TVs as the "SUV" of the 21st century. The average plasma TV draws more power than the average refrigerator, the previous household energy hog.
  • Costs (Score:3, Insightful)

    by plopez ( 54068 ) on Thursday June 08, 2006 @10:59AM (#15494579) Journal
    This comment cuts across several threads on costs.

    Costs alone are not enough. What is needed is a unit cost. For example, is unit cost per user rising or falling? If it is falling but the user base is growing rapidly, you are getting a good deal even though costs may be increasing.
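
    For instance (illustrative figures only):

      # Total cost up, unit cost down: still a good deal
      year1_cost, year1_users = 10e6, 1.0e6
      year2_cost, year2_users = 16e6, 2.5e6
      print(year1_cost / year1_users)   # $10.00 per user
      print(year2_cost / year2_users)   # $6.40 per user, despite 60% higher total spend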

    Also, things such as redundant servers, backups, power backups, etc. should probably be counted as an insurance cost and measured against the cost of downtime. If the cost of downtime increases much faster than the cost of this 'insurance', then you are probably getting a good deal.

    To say 'costs are rising' without a benefit analysis is meaningless.

    Also, I wonder how much of this is due to bloated apps and poor design (XML anyone?). Is this explosion in servers due to crappy code and bad data models? I suspect some of it is, though it has to be looked at on an application-by-application basis.

    And while I am on the topic, multi-tier does *not* mean multi-server. I have no idea how this myth got started (hardware vendors maybe?). You can, if you like, run all tiers on one server if your code is not leaky. For security reasons you probably should put your web server on its own box, but if you have 5 tiers and a DB engine, there is no reason why a good server can't run all of them in most cases. Unless, of course, the code is crap.

    My semi-informed opinion....

  • Perhaps Microsoft (Score:3, Interesting)

    by Sergeant Beavis ( 558225 ) on Thursday June 08, 2006 @11:04AM (#15494620) Homepage
    Should be using VMware Infrastructure 3 :)

    My company is building a new DC in Texas too. We are doing it on our existing campus by gutting and renovating an older building but the costs are still going to be huge.

    In the meantime, I've been building one of the first VMware ESX environments our company has ever used. It started out as a simple 6-host environment but has grown to over 20 DL580s and DL585s hosting hundreds of virtual machines. The initial investment is high, but the operating costs are lower, the cabling costs are lower, the HVAC costs are lower, and of course, a VMware host server takes up less real estate.

    If my company had focused on VMware, or virtualization in general, early on, they wouldn't need three datacenters and they wouldn't be building a fourth.
