The Soaring Costs for New Data Center Projects 164

miller60 writes "The cost of building a quality data center is rising fast. Equinix will spend $165 million to convert a Chicago warehouse into a data center, while Microsoft is said to be shopping Texas sites for a massive server farm that could cost as much as $600 million. Just three years ago, data centers were dirt cheap due to a glut of facilities built by failed dot-coms and telcos like Exodus, AboveNet and WorldCom. Those sites have been bought up amid surging demand for data storage, so companies needing data center space must either build from scratch or convert existing industrial sites. Microsoft and Yahoo are each building centers in central Washington, where cheap hydroelectric power from nearby dams helps them save on energy costs, which can be enormous for high-density server installations."
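A back-of-envelope sketch of why cheap hydroelectric power matters at this scale; the server count, per-server draw, and electricity rates below are assumptions for illustration, not figures from the submission:

    # Rough annual electricity cost for a large server farm.
    # All numbers are illustrative assumptions.
    servers = 50_000              # assumed server count
    watts_per_server = 300        # assumed average draw, including cooling overhead
    rate_hydro = 0.03             # $/kWh, assumed cheap hydro rate
    rate_typical = 0.09           # $/kWh, assumed typical commercial rate

    kwh_per_year = servers * watts_per_server / 1000 * 24 * 365

    for label, rate in [("hydro", rate_hydro), ("typical", rate_typical)]:
        print(f"{label:8s}: ${kwh_per_year * rate:,.0f} per year")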
  • In QUINCY? (Score:3, Interesting)

    by DurendalMac ( 736637 ) on Wednesday June 07, 2006 @11:06PM (#15492059)
    I grew up in Moses Lake, WA, which is about 30 miles SE of Quincy. This should be going into Moses Lake, but it isn't. We have goddamn fiber optics laid all over that town (and the county, exempting Quincy due to some sort of contract the PUD had with Verizon, I believe) going right up to people's houses. I enjoyed a 100Mbps symmetric connection for a while...then my bandwidth got capped. In fact, the PUD is charging the service providers so damn much for bandwidth, some have to cap it at 1Mbps down/512Kbps up. That's slower than fucking DSL and Cable! The local PUD is sitting on a fucking GOLDMINE and they're not doing a goddamned thing about it! They could have easily wooed MS and Yahoo into Moses Lake to build their datafarms there using the PUD's fiber network through the local providers (the PUD can't sell service, so they sell the use of the network to ISPs) and made things better in the town. But they're not doing shit. They haven't been doing much to promote, pitch, or package it for big guys to come in and build a major server farm out here. It pisses me off to no end to see those fuckers doing so little to help that town, and it needs all the help it can get. AAARRRGGGHHH!!!
  • I don't buy it (Score:3, Interesting)

    by appleLaserWriter ( 91994 ) on Wednesday June 07, 2006 @11:08PM (#15492066)
    The Westin Building [officespace.com] (no, not THAT office space) still has plenty of space, including the entire 5th floor!
  • Re:I don't buy it (Score:3, Interesting)

    by 1sockchuck ( 826398 ) on Wednesday June 07, 2006 @11:19PM (#15492109) Homepage
    Most enterprise customers don't have any interest in sharing a facility with 50 other telecom providers and hosting companies in a carrier hotel like The Westin Building. These companies want big, stand-alone data centers where they can have complete control over access and security. The other issue is that space is limited in telecom hotels like Westin. The Equinix project mentioned in TFA is 225,000 square feet, and the Microsoft requirement is for more than 400,000 square feet. Westin is a large facility, but the fifth floor isn't 200,000 square feet.
  • by Exter-C ( 310390 ) on Thursday June 08, 2006 @04:05AM (#15493006) Homepage
    The cost of building out datacentres has been soaring for several reasons. The first real issue is providing enough power for today's power-hungry servers to run at any sort of density required to actually churn the data. In the UK we see datacentres that can only offer a very low amount of power per square metre, which can often limit you to 4 quad-processor Xeon servers per rack. When that's all the density you can have, the cost is much greater. The other aspect is cooling: the traditional raised-floor air-conditioning method really does not work, as it's almost impossible to deliver the cold air where you actually need it, and there will always be hot spots even if you're doing warm-row/cold-row designs. It's important to seal the cool air in and funnel it to where it's needed. APC has recently been working heavily in this area and claims to be able to cool massive densities. The other issue is the management side of these datacentres. In the past you could design a datacentre to be good for 5-10 years; now it's hard to design something that will stay good for the same length of time, because power requirements keep rising and the cooling plant and local power grid are often over-utilised (as in the UK datacentre market).
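    A minimal sketch of the rack-density arithmetic described above; the per-rack power budget and per-server draw are assumed figures, not numbers from the post:

        # Servers per rack when power, not floor space, is the limit.
        # The power figures are assumptions chosen to illustrate the point.
        rack_power_budget_w = 4_000   # assumed usable power per rack (W)
        quad_xeon_draw_w = 900        # assumed draw of one quad-CPU Xeon box (W)
        rack_units = 42               # standard full-height rack

        by_power = rack_power_budget_w // quad_xeon_draw_w
        by_space = rack_units // 4    # assume each server occupies 4U

        print(f"limited by power: {by_power} servers/rack")   # 4
        print(f"limited by space: {by_space} servers/rack")   # 10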
  • The price of AJAX (Score:3, Interesting)

    by Animats ( 122034 ) on Thursday June 08, 2006 @04:13AM (#15493021) Homepage
    This is the price of AJAX. If users are constantly going back to the server in the middle of a page, you need more server capacity. Really, the AJAX approach is a hideously inefficient way to update a form. We're now seeing the price of that.
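    A rough sketch of the load argument, with made-up request rates, comparing a classic submit-only form with a page that polls the server in the background:

        # Back-of-envelope comparison of server request rates.
        # All rates are assumptions for illustration.
        users = 10_000

        classic_req_per_sec = users / 120   # one POST per user every ~2 minutes
        ajax_req_per_sec = users / 3        # a background request every ~3 seconds

        print(f"classic: {classic_req_per_sec:,.0f} req/s")              # ~83
        print(f"ajax:    {ajax_req_per_sec:,.0f} req/s")                 # ~3,333
        print(f"ratio:   {ajax_req_per_sec / classic_req_per_sec:.0f}x") # 40x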
  • by Nutria ( 679911 ) on Thursday June 08, 2006 @05:02AM (#15493147)
    "Why not build your datacenter in Alaska, where it's colder year-round?"

    Our datacenter is just up the Hudson from NYC and was built back in the 1960s, When IBM Ruled The Datacenter, and disk farms generated a lot of heat and the ambient temperature needed to be roughly 70F.

    So the DC is in the second basement and had vents to the outside, so cold winter air could be shunted into the room.

    It became obsolete, though, in the mid-1990s, when the huge 3390 farm was replaced by a couple of EMC cabinets and the bipolar mainframe was replaced with a CMOS S/390.

    "I'd have thought building the thing in Texas would just help pump up your A/C costs."

    Depends on how well it's insulated. When the building is gutted, that's the perfect time to spray on insulation.
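    A toy sketch of the "shunt in winter air" idea as a simple economizer rule; the setpoint and margin are assumptions, not details from the post:

        # Decide whether outside air alone can cool the room (air-side economizer).
        # Thresholds are illustrative assumptions.
        def use_outside_air(outdoor_temp_f: float, setpoint_f: float = 70.0) -> bool:
            """Open the vents when outdoor air is cool enough to hold the setpoint."""
            margin_f = 5.0  # assumed margin so fans alone keep the room at setpoint
            return outdoor_temp_f <= setpoint_f - margin_f

        for t in (20, 50, 68, 85):
            mode = "outside air" if use_outside_air(t) else "chillers"
            print(f"{t:3d}F outdoors -> {mode}")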
  • by ersatx ( 742762 ) on Thursday June 08, 2006 @05:16AM (#15493171)
    French Iliad SAS (the parent company of the ISP Free) has just started a cute server-rental business in a former Exodus datacenter.
    The point of interest is that the servers are fanless, built on low-consumption VIA processors, and draw about 20 W each.
    That should make the cost of operation much lower than traditional hosting...
    See details at http://www.dedibox.fr/index.php?rub=offre [dedibox.fr] (in French)
    Pictures of the datacenter: http://www.dedibox.fr/index.php?rub=datacenter [dedibox.fr]
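    The 20 W/server figure above makes a quick comparison easy; a sketch with an assumed electricity rate and an assumed 200 W for a conventional 1U server:

        # Annual electricity per server: low-power VIA box vs. a conventional 1U box.
        # The 200 W figure and the rate are assumptions.
        rate_eur_per_kwh = 0.10
        hours_per_year = 24 * 365

        for label, watts in [("VIA fanless", 20), ("typical 1U", 200)]:
            kwh = watts / 1000 * hours_per_year
            print(f"{label:12s}: {kwh:6.0f} kWh/yr, ~{kwh * rate_eur_per_kwh:.0f} EUR/yr")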
  • by peter303 ( 12292 ) on Thursday June 08, 2006 @09:47AM (#15494058)
    I've seen a number of conflicting estimates of how much power computers and digital devices use.
    One source decries widescreen TVs as the "SUV" of the 21st century: the average plasma TV draws more power than the average refrigerator, previously the biggest household energy hog.
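    A sketch of that comparison in annual terms; the wattages and viewing hours are assumed, typical-for-the-era values, not the source's figures:

        # Annual energy: plasma TV vs. refrigerator (illustrative assumptions).
        plasma_watts, plasma_hours_per_day = 350, 6   # assumed draw and daily use
        fridge_avg_watts = 60                         # assumed average over the day
                                                      # (compressor cycles on and off)

        plasma_kwh = plasma_watts / 1000 * plasma_hours_per_day * 365
        fridge_kwh = fridge_avg_watts / 1000 * 24 * 365

        print(f"plasma TV:    {plasma_kwh:4.0f} kWh/yr")   # ~767
        print(f"refrigerator: {fridge_kwh:4.0f} kWh/yr")   # ~526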
  • Perhaps Microsoft (Score:3, Interesting)

    by Sergeant Beavis ( 558225 ) on Thursday June 08, 2006 @11:04AM (#15494620) Homepage
    Should be using VMware Infrastructure 3 :)

    My company is building a new DC in Texas too. We are doing it on our existing campus by gutting and renovating an older building, but the costs are still going to be huge.

    In the meantime, I've been building one of the first VMware ESX environments our company has ever used. It started out as a simple six-host environment but has grown to over 20 DL580s and DL585s hosting hundreds of virtual machines. The initial investment is high, but the operating costs are lower, the cabling costs are lower, the HVAC costs are lower, and of course a VMware host server takes up less real estate.

    If my company had focused on VMware, or virtualization in general, early on, they wouldn't need three datacenters and they wouldn't be building a fourth.
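    A rough consolidation sketch under assumed numbers; the workload count, consolidation ratio, and power draws are illustrative, not the poster's figures:

        # Back-of-envelope savings from virtualization (all figures assumed).
        vms_needed = 300            # workloads to run
        vms_per_host = 15           # assumed consolidation ratio on a DL585-class box
        physical_watts = 500        # assumed draw of one standalone server
        esx_host_watts = 800        # assumed draw of one (larger) ESX host

        hosts = -(-vms_needed // vms_per_host)   # ceiling division -> 20 hosts

        unvirtualized_kw = vms_needed * physical_watts / 1000
        virtualized_kw = hosts * esx_host_watts / 1000

        print(f"{hosts} ESX hosts instead of {vms_needed} physical servers")
        print(f"power: {unvirtualized_kw:.0f} kW -> {virtualized_kw:.0f} kW")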
