It's already September 27 in Australia. Problem solved.
SSDs are slow in the sense that they rely on old-school disk protocols like SATA. Sure, you'll get better performance than spinning disk, but if you want screaming-fast performance, you should look at flash devices connected through the PCIe bus.
Products from Fusion-io would be one example. Apple's Mac Pro would be another: "Up to 2.5 times faster than the fastest SATA-based solid-state drive".
I wired my house with Cat5e cable, thinking it would future-proof the house. In hindsight, I would have chosen Cat6.
10G may not work even if you've chosen the right type of cable, as 10G is much pickier about terminations. So you can always try, and if it doesn't work well, go for prefabricated cables for the 10G connections.
If you want to play with fast (10G+) networking at home, the smart way is to buy InfiniBand gear on eBay. There's quite a supply from compute clusters being torn down. Older SDR (10G) cards run $30-50, DDR (20G) a bit more, and QDR (40G) a few hundred per card. Buy a cheap copper cable for a cross-connect and you're done, or pre-terminated fiber cables if you need distance; the cards usually handle that too. Some cards also do 10G and 40G Ethernet. Need a switch? 36-port QDR switches typically go for $1000. That's 1.44 Tbps worth of bandwidth.
I bought a couple of Mellanox cards that do both 40G Ethernet and FDR (56G) InfiniBand. Between my two Linux servers, I get about 37 Gbps when using two or more TCP connections. While bandwidth is about the same, InfiniBand latency is about half that of Ethernet, so I run IP over InfiniBand.
Apart from being fun (this is Slashdot, after all), why would you want this? Because it removes the network as a bottleneck and changes the way I think about resources. File transfers are limited by disk performance, there's never network congestion, etc. The only thing that could saturate the link would be memory-to-memory copying (think VM migrations). Either way, it will be a long time before I worry about network performance again...
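The multi-connection point can be sketched with a toy benchmark. This is a minimal, hypothetical example (all names and sizes are mine, not from the setup above) that pushes data over several parallel TCP streams; run against localhost it only measures loopback, but pointed at a remote sink it would show why 2+ streams saturate a fat pipe better than one:

```python
import socket
import threading
import time

CHUNK = 1 << 20          # 1 MiB per send call
CHUNKS_PER_CONN = 16     # 16 MiB per connection
NUM_CONNS = 4            # parallel TCP streams

def sink(server_sock, totals, idx):
    """Accept one connection and count the bytes it delivers."""
    conn, _ = server_sock.accept()
    received = 0
    while True:
        data = conn.recv(1 << 16)
        if not data:
            break
        received += len(data)
    conn.close()
    totals[idx] = received

def blast(port):
    """Push CHUNKS_PER_CONN chunks down one TCP connection."""
    s = socket.create_connection(("127.0.0.1", port))
    payload = b"\x00" * CHUNK
    for _ in range(CHUNKS_PER_CONN):
        s.sendall(payload)
    s.close()

def measure():
    totals = [0] * NUM_CONNS
    servers, sinks = [], []
    # One listening socket + sink thread per stream, on ephemeral ports.
    for i in range(NUM_CONNS):
        srv = socket.socket()
        srv.bind(("127.0.0.1", 0))
        srv.listen(1)
        servers.append(srv)
        t = threading.Thread(target=sink, args=(srv, totals, i))
        t.start()
        sinks.append(t)
    start = time.perf_counter()
    senders = [threading.Thread(target=blast, args=(srv.getsockname()[1],))
               for srv in servers]
    for t in senders:
        t.start()
    for t in senders + sinks:
        t.join()
    elapsed = time.perf_counter() - start
    for srv in servers:
        srv.close()
    total = sum(totals)
    return total, total * 8 / elapsed / 1e9  # bytes moved, Gbit/s

if __name__ == "__main__":
    total, gbps = measure()
    print(f"moved {total} bytes at {gbps:.2f} Gbit/s over {NUM_CONNS} streams")
```

In practice you'd just use iperf with multiple streams, but the sketch shows the idea: several independent TCP windows filling the link at once.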
It's hardly surprising that Skype isn't mentioned. It's widely believed that there are already backdoors in Skype. Skype has "declined to confirm" that there are no backdoors.
From the Wikipedia Skype security article:
Security researchers Biondi and Desclaux have speculated that Skype may have a back door, since Skype sends traffic even when it is turned off and because Skype has taken extreme measures to obfuscate their traffic and functioning of their program. Several media sources have reported that at a meeting about the "Lawful interception of IP based services" held on 25 June 2008, high-ranking but not named officials at the Austrian interior ministry said that they could listen in on Skype conversations without problems. Austrian public broadcasting service ORF, citing minutes from the meeting, have reported that "the Austrian police are able to listen in on Skype connections". Skype declined to comment on the reports.
Effective deployment of DNSSEC requires action from both DNS resolvers and authoritative name servers. Resolvers, especially those of ISPs and other public resolvers, need to start validating DNS responses. Meanwhile, domain owners have to sign their domains. Today, about a third of top-level domains have been signed, but most second-level domains remain unsigned. Of the 130 billion DNS queries the service receives daily, only 7% of queries from the client side are DNSSEC-enabled (about 3% requesting validation and 4% requesting DNSSEC data but no validation), and about 1% of DNS responses from the name server side are signed.
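You can check whether your own resolver is one of the validating ones with dig: set the DNSSEC OK bit and look for the `ad` (authenticated data) flag in the reply. `example.com` here just stands in for any signed zone:

```
# request DNSSEC records (DO bit set) for a signed zone
dig +dnssec example.com A

# a validating resolver sets "ad" in the header flags, e.g.
# ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, ...
```

If the `ad` flag is missing for a zone you know is signed, your resolver is passing the records through without validating them.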
Keeping track of that many webservers must be a very time-consuming task.
This book teaches you to think like an experienced programmer.
It's a great way of refreshing your algorithm skills and an easy read compared to other (heavier) algorithm books.
This is exactly what I have and it's a great solution.
In a small 4U network cabinet, you can fit a patch panel and a 24-port switch. That leaves you an extra 2U for other things. I also have a PoE-enabled switch and a network server: the SuperMicro 1U Atom servers are small, cheap, energy-efficient and quiet. For the switch(es), go for quiet (fanless if possible) and energy-efficient. Most switches are made in the same factory in China and from the same components, so it doesn't really matter which brand you choose.
I recently downsized from a 42U to a 21U rack. The 42U rack was inconvenient and too heavy to handle. Having a smaller rack on wheels is more convenient, and 21U is probably more space than you'll need in a home environment if your main purpose is "just for fun". I've got a separate switch in the rack and an uplink connecting the two switches.
I recently set up my own VPN network and wanted a generic solution with access to a number of countries, mainly the US, Canada and the UK. I wanted something that would work naturally with all the devices on my home network, including the Wii, Playstation, etc. The problem with regular VPN services is that they only give you one country at a time, plus you will probably tunnel more traffic than you want. Your ISP is usually the best route for traffic that doesn't have to originate inside a specific country.
So I've got a number of VPS instances in different countries, all running OpenBSD. These routers are connected with IPsec tunnels. That's not strictly necessary (plain IP encapsulation would work just fine) but it gets me around national packet sniffing (Australia, I'm looking at you). Then I use OpenBGPD to dynamically announce routes between the routers. Finding the routes for a provider is easy: just look up the whois information for an IP address and you get the corresponding CIDR. Add that route to BGP and it's visible across the network in seconds. You also need to forward the appropriate DNS traffic, to get around the load balancing based on originating IP that some CDNs use.
This solution may seem too complicated and overkill, but it works incredibly well. You could of course achieve the same thing by having multiple VPN connections from a single router and add a bunch of static routes. But where's the fun in that?
As an added bonus, it's trivial to set up redundant gateways to the US and load balance traffic between them. This is a natural feature of BGP: if a router goes down, the BGP connection dies and traffic is routed through another path. Since OpenBSD is very light, I only pay for the smallest VPS instances, usually 128MB of RAM and a tiny bit of CPU, for a few $/month per instance.
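The per-router setup described above can be sketched as an OpenBGPD config. This is only a minimal illustration; the AS numbers, addresses and the announced CIDR are all made up, and filter rules are omitted:

```
# /etc/bgpd.conf on one hypothetical VPS router
AS 65001
router-id 192.0.2.1

# session to the US router, reached over the IPsec tunnel
neighbor 198.51.100.1 {
        remote-as 65002
        descr "us-gateway"
}

# announce a provider's CIDR (found via whois) so the other
# routers send matching traffic through this instance
network 203.0.113.0/24
```

Once the session is up, adding another `network` line and reloading bgpd is all it takes for the new route to propagate to the other routers.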
This was discussed on Slashdot in 2007.
And it's not a very good idea:
"The V2G potential of Honda's full hybrid vehicles is unexplored, but the company is doubtful of using them to power homes. 'We would not like to see stresses on the battery pack caused by putting it through cycles it wasn't designed for,' said Chris Naughton, a Honda spokesman. 'Instead, they should buy a Honda generator that was made for that purpose.'"