It's only the core of IOS-XR, and in fact will be gone in the next release (which will share a Linux core with NX-OS and IOS-XE)
More like 16Tb/s per fibre pair in each direction; most of the major commercial vendors (ALU, Infinera, Ciena, etc.) have 4+ Tb/s systems out.
And long-haul cables are usually getting towards 372 fibres; even metro rarely goes below 72 fibres.
Only submarine systems run much lower than that, with a usual limit of 8 or 16 fibres due to the requirements imposed by inline amplification.
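To put rough numbers on that (back-of-the-envelope only; the wavelength count and per-channel rate below are illustrative assumptions, not any particular vendor's spec):

    # Rough DWDM capacity arithmetic -- illustrative assumptions, not vendor specs.
    channels_per_pair = 160      # assumed wavelengths lit on one fibre pair
    gbps_per_channel = 100       # assumed 100G coherent per wavelength
    fibres_in_cable = 372        # the long-haul cable count mentioned above
    pairs_in_cable = fibres_in_cable // 2

    tbps_per_pair = channels_per_pair * gbps_per_channel / 1000
    tbps_per_cable = tbps_per_pair * pairs_in_cable
    print(f"{tbps_per_pair:.0f} Tb/s per fibre pair")        # -> 16 Tb/s
    print(f"{tbps_per_cable:.0f} Tb/s per fully lit cable")  # -> ~3000 Tb/s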
IOS-XR is migrating to Linux in the next major release, NX-OS (the OS for their Nexus DC kit) is built on Linux, and IOS-XE, which powers most of the smaller new Cisco kit, is also Linux-based.
As for Juniper, they also have many products running on Linux.
The "big boys" are the IP networks, and have been for years, in practice there's two major vendors (Cisco & Juniper) and a bunch of also-rans that can play somewhat (ALU and Brocade [Foundry]).
ALU kit can run core & transmission, but it's not the top tier kit.
The "big boys" are also migrating wholesale to 100g links as their multi-terabit backbones get painful to manage with trunked 10g links.
It's existed for longer than this one in emacs and is called Vundle.
MySQL still has nothing built in for that purpose
MySQL has had multi-master clustering built in (MySQL Cluster/NDB) since before PostgreSQL even had built-in master/slave replication.
That's utterly false.
It's the stateful firewall that's doing that, which is a prerequisite for some common forms of NAT.
Most, if not all, IPv6-supporting consumer routers have, by default, a firewall configured on IPv6 with essentially identical semantics to the v4 one: allow all out, allow nothing in.
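To make the "stateful firewall, not NAT" point concrete, here's a toy model of what such a filter does; the flow table and addresses are purely illustrative, not any real firewall's internals:

    # Toy "allow all out, allow nothing in" stateful filter.
    # The protection comes from the state table, not from rewriting addresses.
    established = set()                 # flows initiated from the inside

    def outbound(src, dst):
        established.add((src, dst))     # outbound traffic creates state
        return "ACCEPT"                 # and is always allowed out

    def inbound(src, dst):
        if (dst, src) in established:   # return traffic for a flow we opened
            return "ACCEPT"
        return "DROP"                   # unsolicited inbound is dropped, NAT or no NAT

    outbound("2001:db8::10", "2001:db8:ffff::80")          # LAN host connects out
    print(inbound("2001:db8:ffff::80", "2001:db8::10"))    # ACCEPT (reply traffic)
    print(inbound("2001:db8:bad::1", "2001:db8::10"))      # DROP (unsolicited)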
Odd, pretty sure that's happened to me and they simply pulled it out when stapling in the new one
The directions to my house in Melbourne (capital city of Victoria) actually include the step "left turn at the Giraffe".
That was fixed in rev 1.1 of the B. The recent 512MB version is rev 2.1 IIRC
I work in what you'd consider to be Google's NOC.
It's just a standard office, nothing special.
Why does that matter? The only justification for bonding with 10g these days is "redundancy" and I've seen many more outages (at a variety of sites) from people failing at bonding than I have from switch failure.
If a machine is that critical the service it runs shouldn't live on a single machine.
Even at my last job, where we had a design with multiple SPOFs, we lost machines to PSU or drive/RAID failure several times, but never to the network, except for the one site that did "redundant" NICs.
How is it a bad thing?
You firewall it just the same, so the only change in traffic flow is the lack of NAT, and NAT is not security despite what some people will try and claim.
Pretty much every software device I've seen has been either a rebadged Dell or Supermicro, with the top end running custom cases and the low end doing whitebox.
In terms of "real" networking kit though, there are a bunch of switches that run Linux:
Extreme (everything running XOS, which is all current models)
Cisco (everything running IOS XE, the only switch being the 4500-X)
All Juniper devices that run JunOS are FreeBSD, this includes both the EX and QFX switch lines, as well as their SRX firewalls.
Also most of the openflow-aimed switches run Linux, eg http://www.pica8.com/
RAID-DP is NetApp's name for what's essentially a RAID6 (usually).
RAID5/6 work fine for databases; just like RAID10, you have to size them correctly, accounting for your read/write load.
Really, any form of RAID (if you need the size), with as much RAM and SSD caching as you can throw at it, is the way to go these days.
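As a sketch of what "size them correctly" means in practice (the write penalties are the textbook values; the spindle count and workload mix are made-up numbers for illustration):

    # Effective IOPS for a disk group under different RAID levels.
    # Per-write backend cost: RAID10 = 2, RAID5 = 4, RAID6 = 6.
    def effective_iops(disks, iops_per_disk, write_fraction, write_penalty):
        raw = disks * iops_per_disk
        return raw / ((1 - write_fraction) + write_fraction * write_penalty)

    disks, per_disk, writes = 12, 180, 0.30   # hypothetical 12 spindles, 30% writes
    for name, penalty in (("RAID10", 2), ("RAID5", 4), ("RAID6", 6)):
        print(name, round(effective_iops(disks, per_disk, writes, penalty)), "IOPS")

Same spindles, fewer effective IOPS as the write penalty goes up; if you account for that (and let RAM/SSD caching absorb the hot set), RAID5/6 is perfectly workable under a database.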