Comment Re:Darwin awards (Score 3, Funny) 452
The directions to my house in Melbourne (capital city of Victoria) actually include the step "left turn at the Giraffe"
That was fixed in rev 1.1 of the B. The recent 512MB version is rev 2.1 IIRC
I work in what you'd consider to be Google's NOC.
It's just a standard office, nothing special.
"bonded/trunked NICs"
Why does that matter? The only justification for bonding with 10g these days is "redundancy" and I've seen many more outages (at a variety of sites) from people failing at bonding than I have from switch failure.
If a machine is that critical the service it runs shouldn't live on a single machine.
Even at my last job, where we had a design based on multiple SPOFs, we lost machines to PSU or drive/RAID failure several times, but never to network failure, except at the one site that did "redundant" NICs.
How is it a bad thing?
You firewall it just the same, so the only change in traffic flow is the lack of NAT, and NAT is not security despite what some people will try and claim.
Pretty much all software devices I've seen have been either a rebadged Dell or Supermicro, with the top end running custom cases, and the low end doing whitebox.
In terms of "real" networking kit though, there is a bunch of switches that run linux:
Arista (everything)
Extreme (everything running XOS, which is all current models)
Cisco (everything running IOS XE, the only switch being the 4500-X)
All Juniper devices that run JunOS are FreeBSD, this includes both the EX and QFX switch lines, as well as their SRX firewalls.
Also most of the openflow-aimed switches run Linux, eg http://www.pica8.com/
RAID-DP (usually) is NetApp's name for what's essentially a RAID6.
RAID5/6 work fine for databases, just like RAID10 you have to size it correctly accounting for your read/write load.
Really any form of RAID (if you need the size) with as much RAM and SSD caching as you can get is the way to go these days.
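To make that sizing point concrete, here's a rough back-of-envelope sketch. The per-disk IOPS figure and spindle count are made-up illustrative numbers; the write penalties (2 for RAID10, 4 for RAID5, 6 for RAID6) are the classic small-random-write figures, ignoring any cache:

```python
# Back-of-envelope RAID write sizing (illustrative numbers only).
# Classic small-write penalties: RAID10 = 2, RAID5 = 4, RAID6 = 6.

def effective_write_iops(spindles, iops_per_disk, write_penalty):
    """Random small-write IOPS the array can sustain, ignoring caches."""
    return spindles * iops_per_disk // write_penalty

disks = 12
disk_iops = 150  # assumed figure for a 10k SAS drive

for name, penalty in [("RAID10", 2), ("RAID5", 4), ("RAID6", 6)]:
    print(name, effective_write_iops(disks, disk_iops, penalty))
```

Same spindles, very different write ceilings, which is exactly why "size it correctly for your read/write load" matters more than which RAID level you pick.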
Or just use colored optics and a passive mux.
Unless you're going hundreds of miles (at which point FC has probably broken anyway due to latency expectations) you don't need active DWDM kit, passive muxes are a much better solution, and far cheaper.
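A quick sketch of why distance breaks synchronous storage expectations, using the standard approximation that light in fibre covers roughly 200,000 km/s, i.e. about 5 µs per kilometre one way (the distance is a made-up example):

```python
# Propagation delay over fibre: ~5 microseconds per km, one way.

def rtt_ms(distance_km, us_per_km=5):
    """Round-trip propagation delay in milliseconds (fibre only)."""
    return 2 * distance_km * us_per_km / 1000

# "Hundreds of miles": e.g. 500 miles is roughly 800 km of fibre.
print(rtt_ms(800))  # 8.0 ms RTT before any switching or storage latency
```

8 ms of unavoidable round-trip delay per I/O is already well past what most synchronous FC replication designs tolerate, and no amount of active DWDM kit changes the speed of light.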
Ethernet most certainly is a broadcast technology, and it and IP have supported multicast for many years (IP multicast across several networks is very common on research networks).
As for bandwidth, assuming 20Mbit streams (fairly standard BluRay, broadcast in some parts of the world approaches it as well) you can fit 500 channels on 10G. In practice, since you only have to send out what at least one client has requested, you can offer more channels than can be streamed simultaneously; cable companies do this already with Switched Digital Video.
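The arithmetic behind that channel count, spelled out (stream and link rates as stated above):

```python
# 20 Mbit/s per stream on a 10 Gbit/s (= 10,000 Mbit/s) link.
link_mbit = 10_000
stream_mbit = 20

channels = link_mbit // stream_mbit
print(channels)  # 500 simultaneous streams
```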
That's almost entirely false. (ISP network engineer in Australia here)
The major cable leaving Australia for the last decade has been Southern Cross (there are more now), and the Australian government has no significant interest in it (the NZ government, on the other hand, does, by way of cable system part-owner Telecom NZ).
iTunes downloads (at least some of them) are cached by Akamai, and traditionally most medium to large ISPs hosted Akamai caches inside their network (at $JOB[-1] Akamai was ~30% and Google was ~15% of all bandwidth used for a regional education network).
It really isn't any different for any other CDN: some host inside Australia and peer with local networks (IIRC Limelight do this), some only host in Asia (eg Amazon), and some (eg Steam) install machines inside ISPs for their customers.
Well the Microsoft campus nearby does:
http://youtu.be/3W9JziTvsgA
I actually *am* a network engineer, working daily on Google's backbone.
Actually the knowledge tested for in interviews isn't that great (certainly not enough to be productive on day one), but as with other employers it's the proven ability to learn that counts.
No you don't. (Network engineer working for a major global carrier in Australia)
*US* carriers will sell you T1/T3 lines, but they're not the local standard. Australian standard lines are E1/E3 (Euro), or more commonly E1 PRIs (AKA Telstra Onramp).
Of course none of that matters for data as almost everyone just uses Gig-E (or 10ge) over single mode fibre these days, although some carriers still hold a torch for SDH/SONET.
Yes, it's a great skill to be able to setup a Cisco from the CLI, but it takes 1/10th the time from the GUI.
Er, no. For anything larger than a trivial config the CLI is much faster, and that's if you type it. Use templates (as most larger configs will) and it's not even close.
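A minimal sketch of what "use templates" means in practice. The interface names, VLAN, and stanza contents here are hypothetical examples, not any particular site's config:

```python
# Generate repetitive switch config from a template instead of typing
# each stanza by hand in a GUI. All names and numbers are made up.

TEMPLATE = """interface GigabitEthernet1/0/{port}
 description {desc}
 switchport access vlan {vlan}
 spanning-tree portfast
"""

def access_port_config(ports, vlan):
    """Render one config stanza per port, ready to paste into the CLI."""
    return "\n".join(
        TEMPLATE.format(port=p, desc=f"access port {p}", vlan=vlan)
        for p in ports
    )

print(access_port_config(range(1, 5), vlan=100))
```

Forty-eight ports take exactly as long as four, which is the point: a GUI scales linearly with port count, a template doesn't.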
BLISS is ignorance.