Slashdot's Setup, Part 1 - Hardware
CT: Most of the following was written by Uriah Welcome, famed sysadmin extraordinaire, responsible for our corporate intertubes. He writes...
Many of you have asked about the infrastructure that supports your favorite time sink... err news site. The question even reached the top ten questions to ask CmdrTaco. So I've been asked to share our secrets on how we keep the site up and running, as well as a look towards the future of Slashdot's infrastructure. Please keep in mind that this infrastructure not only runs Slashdot, but also all the other sites owned by SourceForge, Inc.: SourceForge.net, Thinkgeek.com, Freshmeat.net, Linux.com, Newsforge.com, et al.
Well, let's begin with the most boring and basic details. We're hosted at a Savvis data center in the Bay Area. Our data center is pretty much like every other one. Raised floors, UPSs, giant diesel generators, 24x7 security, man traps, the works. Really, once you've seen one class A data center, you've seen them all. (CT: I've still never seen one. And they won't let us take pictures. Boo savvis.)
Next, our bandwidth and network. We currently have two active-active Gigabit uplinks; again, nothing unique here, no crazy routing, just symmetric, equal-cost uplinks. The uplinks terminate in our cage at a pair of Cisco 7301s that we use as our gateway/border routers. We do some basic filtering here, but nothing too outrageous; we tier our filtering to try to spread the load. From the border routers, the bits hit our core switches/routers, a pair of Foundry BigIron 8000s. They have been our workhorses throughout the years. The BigIron 8000s have been in production since we built this data center in 2002 and actually, having just looked at it, haven't been rebooted since. These guys used to be our border routers, but alas... their CPUs just weren't up to the task after all these years and growth. Many machines plug directly into these core switches; however, for certain self-contained racks we branch off to Foundry FastIron 9604s. These are basically just switches and do nothing but save us ports on the cores.
Now onto the meat: the actual systems. We've gone through many vendors over the years. Some good, some... not so much. We've had our share of problems with everyone. Currently in production we have the following: HP, Dell, IBM, Rackable, and, I kid you not, VA Linux Systems. Since this article is about Slashdot, I'll stick to its hardware. The first hop on the way to Slashdot is the load-balancing firewalls, a pair of Rackable Systems 1Us: P4 Xeon 2.66GHz, 2GB RAM, 2x80GB IDE, running CentOS and LVS. These guys distribute the traffic to the next hop: the web servers.
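What the LVS pair does can be pictured in a few lines. This is a much-simplified sketch of the weighted round-robin idea in Python; server names and weights are invented, and real LVS does the equivalent (with proper interleaving) inside the kernel, configured via ipvsadm.

```python
from itertools import cycle

def weighted_pool(servers, weights):
    """Naive weighted round-robin: repeat each server by its weight,
    then hand out connections in a fixed rotation."""
    expanded = []
    for server, weight in zip(servers, weights):
        expanded.extend([server] * weight)
    return cycle(expanded)

# Hypothetical pool: web1 gets twice the traffic of the others.
pool = weighted_pool(["web1", "web2", "web3", "web4"], [2, 1, 1, 1])
assignments = [next(pool) for _ in range(5)]
# → ['web1', 'web1', 'web2', 'web3', 'web4']
```

Each incoming connection just takes the next server from the rotation; unequal weights let beefier boxes absorb more of the load.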
Slashdot currently has 16 web servers, all of which are running Red Hat 9. Two serve static content: JavaScript, images, and the front page for non-logged-in users. Four serve the front page to logged-in users. And the remaining ten handle comment pages. All web servers are Rackable 1U servers with 2 Xeon 2.66GHz processors, 2GB of RAM, and 2x80GB IDE hard drives. The web servers all NFS-mount the NFS server, which is a Rackable 2U with 2 Xeon 2.4GHz processors, 2GB of RAM, and 4x36GB 15K RPM SCSI drives. (CT: Just as a note, we frequently shuffle these 16 servers from one task to another to handle changes in load or performance. Next week's software story will explain in much more detail exactly what we do with those machines. Also as a note: the NFS mount is read-only, which was really the only safe way to use NFS around 1999 when we started doing it this way.)
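The three-way split above amounts to routing by request type. A toy Python classifier shows the idea; the pool names, host names, and URL paths here are all made up for illustration, not Slashdot's actual layout.

```python
# Hypothetical mapping of request types to server pools, mirroring the
# static / logged-in-front-page / comments split described above.
POOLS = {
    "static":   ["web01", "web02"],
    "index":    ["web03", "web04", "web05", "web06"],
    "comments": [f"web{n:02d}" for n in range(7, 17)],  # the remaining ten
}

def classify(path, logged_in):
    """Pick a pool: static assets and the anonymous front page are
    cacheable; everything else has to be rendered per-request."""
    if path.startswith(("/images/", "/js/")):
        return "static"
    if path == "/":
        return "index" if logged_in else "static"
    return "comments"
```

The payoff is that the cheap, cacheable traffic never touches the boxes doing expensive per-user page generation.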
Besides the 16 web servers, we have 7 databases, all currently running CentOS 4. They break down as follows: 2 Dual Opteron 270s with 16GB RAM and 4x36GB 15K RPM SCSI drives. These are doing multiple-master replication, with one acting as Slashdot's single write-only DB and the other acting as a reader. We have the ability to swap their functions dynamically at any time, providing an acceptable level of failover.
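The dynamic role swap is simple to picture. A minimal sketch, with invented hostnames and the actual replication and failover mechanics omitted:

```python
class MasterPair:
    """Two multi-master-replicated DBs: one takes writes, one serves reads.
    Because each already replicates the other, swapping roles is just a
    bookkeeping change at the application layer."""

    def __init__(self, writer, reader):
        self.writer = writer
        self.reader = reader

    def swap(self):
        # Failover: promote the reader to writer and demote the writer.
        self.writer, self.reader = self.reader, self.writer

pair = MasterPair("db-master-a", "db-master-b")
pair.swap()  # now db-master-b takes the writes
```

The real work, of course, is keeping replication healthy in both directions so a swap is always safe.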
2 Dual Opteron 270s with 8GB RAM and 4x36GB 15K RPM SCSI drives. These are Slashdot's reader DBs. Each derives its data from a specific master database (listed above). The idea is that we can add more reader databases as we need to scale. These boxes are barely a year old now, and still plenty fast for our needs.
Lastly, we have 3 Quad P3 Xeon 700MHz boxes with 4GB RAM and 8x36GB 10K RPM SCSI drives, which are sort of our miscellaneous 'other' boxes. They host our accesslog writer, an accesslog reader, and Slashdot's search database. We need this much hardware for accesslogs because moderation and stats require a lot of CPU time for computation.
And that is basically it, in a nutshell. There isn't anything too terribly crazy about the infrastructure. We like to keep things as simple as possible. This design is also very similar to what all the other SourceForge, Inc. sites use, and has proved to scale quite well.
CT: Thanks to Uriah and Chris Brown for the report. Now if only we remember to update the FAQ entry...
Re:Write-only database? (Score:3, Informative)
If you have a farm of replicated MySQL servers (which are read-only, since replication is one-way here), you need a DB to write to... and not reading from it reduces the load on that server.
So, assuming that you're read-mostly, it's actually a nice way to balance the load across multiple systems.
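Under that read-mostly assumption, the split can be sketched as: writes go to the single master, reads rotate across the replicas. The hostnames and the crude statement check below are illustrative only.

```python
import itertools

class SplitPool:
    """Route writes to one master, spread reads over read-only replicas."""

    def __init__(self, writer, readers):
        self.writer = writer
        self._readers = itertools.cycle(readers)

    def host_for(self, sql):
        # Crude check: mutating statements go to the master,
        # everything else to the next replica in rotation.
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE", "REPLACE"):
            return self.writer
        return next(self._readers)

pool = SplitPool("master", ["replica1", "replica2"])
```

Adding read capacity is then just appending another replica to the rotation, which is exactly the scaling story described above.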
Re:bandwidth usage and cost? (Score:4, Informative)
The savings pays for the gear in less than two years, plus we have 10X the bandwidth as well as full control over the connection.
Re:Interesting (Score:5, Informative)
Yeah, I wasn't sure what he meant either. We have 2 webheads serving static pages (like the non-logged-in homepage), and 4 serving specifically the dynamically-generated homepage for all logged-in users. Plus 1 that serves all SSL traffic, which subscribers can use.
People often say "subscriber" when they mean "logged-in Slashdot user," not specifically a paying subscriber [slashdot.org].
Re:backup? (Score:4, Informative)
Second rule of offsite backups: Never talk about where you keep your offsite backups.
You thought I was going somewhere else with that didn't you?
In all seriousness, that sounds like it would be in the software article instead.
Re:Redhat 9 (Score:3, Informative)
You people keep using the word "brick" to refer to "broken software that can easily be reinstalled."
Yep, you're dead on the money about Level4 support (Score:3, Informative)
Anyway, it did get to a point where I was instantly escalated to their tier 2 or 3, because if I couldn't fix it, or couldn't find the answer within a Unix forum online, they would have a hard time offering a solution. This was supporting about 300 Sun Netra systems running Solaris 9.
Re:Savvis (Score:3, Informative)
Depending on who you talk to, you'll get different responses about Savvis. This is mainly due to the heritage of various customers. i.e. Savvis/Bridge/Intel vs Exodus reputation.
Savvis is actually the conglomeration of _many_ companies.
Exodus == (Exodus, AIS, Arca, Cohesive, Network-1, Global Center)
C&W US == (MCI (IP backbone), Exodus, Digital Island)
Savvis == (C&W US, Intel Hosting, Bridge Networks)
Re:Windows? (Score:3, Informative)
http://news.netcraft.com/archives/2003/08/17/wwwmicrosoftcom_runs_linux_up_to_a_point_.html [netcraft.com]
Re:bandwidth usage and cost? (Score:3, Informative)
However, I had a strange split in the quotes I received. Some were in the range I expected, from about US$10/Mbps for a 100 Mbps commit (minimum bill about US$1000) up to around US$20-$25/Mbps. Then there was a huge jump up to the $500/Mbps range you speak of: companies that were obviously not one of the tier-1 or tier-2 players, just resellers of tier-2 bandwidth, but who didn't seem capable of competing.
Quite a few places seemed to think they could obfuscate the quote by refusing to deal in Mbps/month, and instead would offer traffic totals of 100 Gigabytes for inbound+outbound together. There were others who offered peak+offpeak or other ways to hide the usual Mbps/month quote.
One place was offering GigE ports, but I discovered later their internet transit was just a pair of 100M copper links. They sold their traffic as a package but when you calculate out 50 Gigabytes in one month into a traffic figure, you come up with something like 1-2 Mbps, for the low price of US$500. This may be where you are getting your quotes from.
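The conversion those package quotes obscure is simple arithmetic. A small Python helper (30-day month assumed, decimal gigabytes) makes quotes comparable. Note this gives the *average* rate implied by a transfer total; billed 95th-percentile rates on bursty traffic run well above the average.

```python
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

def avg_mbps(gb_per_month):
    """Average rate implied by a monthly transfer total (decimal GB)."""
    return gb_per_month * 8e9 / SECONDS_PER_MONTH / 1e6

def monthly_bill(commit_mbps, usd_per_mbps):
    """Bill for a committed rate at a flat per-Mbps price."""
    return commit_mbps * usd_per_mbps

# 100 GB/month of transfer averages out to roughly 0.3 Mbps;
# a 100 Mbps commit at US$10/Mbps bills at US$1000/month.
```

Running the traffic-total quotes through a converter like this is the quickest way to see whether a "package" is actually competitive per Mbps.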
As a very general rule of thumb, the tier-1s don't want to deal with a monthly bill of less than US$10,000, the tier-2s don't want anything less than US$1,000, and the tiny resellers will try to sell you everything they can (rackspace, metered electricity, port costs, traffic) to try to keep the bill upwards of $300-$500/month.
Just for comparison, even with the US dollar in free fall this summer, US prices were well over twice what we pay in Europe for internet transit.
the AC