Comment References, references, references (Score 1) 211

The quality of a data center has less to do with the equipment (although that's important) and more to do with who designed it and who is running it.

Most of the datacenter outages I have been a part of in one way or another (Customer, or Provider) have been caused by:

Poor planning
Human Error
Poor design

As a normal customer, there is no way to know whether any of these problems exist. The solution? Ask for references from customers in that specific data center. Make sure they don't give you a customer in a different data center run by the same provider; data center design varies greatly, even within one provider. Ask that reference how long they have been there, how many problems they have had, and how the company responded to those issues. Look for a customer with a long history in that data center (3+ years; 5 would be better).

Don't rule out a data center just because they had an outage. Outages will happen, no matter how redundant the systems are; what matters is the response. If you find out about a previous outage, ask to see the root cause analysis they provided to their customers. If they can't or won't produce it, even under NDA, then walk away.

Comment TCP Algorithms are "Funny" (Score 3, Interesting) 515

I've spent a lot of time looking at this type of problem. I had a customer that wanted to transfer data at greater than 10 Mbps across the Internet, across the country. Let's just say that with Windows this was effectively impossible.
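That ceiling falls out of the bandwidth-delay product: a single TCP connection can never move more than one window of data per round trip. A quick back-of-the-envelope sketch, using a 64 KiB unscaled receive window (typical of older Windows defaults) and a rough 70 ms coast-to-coast RTT, both illustrative numbers:

```python
# Sketch: why a default TCP window caps cross-country throughput.
# Max TCP throughput <= window_size / round_trip_time.
window_bytes = 64 * 1024  # 64 KiB: typical pre-window-scaling receive window
rtt_seconds = 0.070       # rough coast-to-coast round-trip time

max_throughput_mbps = (window_bytes * 8) / rtt_seconds / 1e6
print(f"ceiling: {max_throughput_mbps:.1f} Mbit/s")  # ~7.5 Mbit/s
```

So even with zero packet loss, that configuration tops out below 10 Mbps; the congestion-control behavior described next only makes things worse.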

The problem has to do with TCP congestion-control algorithms. The one in Windows is optimized for common cases. Linux lets you choose from multiple congestion-control algorithms, and most are significantly better than the one Windows uses.

The "problem" with TCP is that it has to assume packet loss equals network congestion. This is a good thing for an overloaded network link: as the link fills up, it starts dropping packets, and as the computers on each end of a TCP connection see this loss, they start backing off. They slow their transmission rates until the packet loss is gone. In most cases they back way off, then slowly increase the speed until they start seeing a little packet loss again. How an algorithm decides what counts as congestion, how much it slows down, and how it recovers greatly affects total usable bandwidth.
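The "back way off, then slowly increase" behavior above is the classic additive-increase/multiplicative-decrease (AIMD) pattern. A minimal sketch of the sawtooth, with illustrative parameters rather than any specific kernel's values:

```python
# Sketch of AIMD congestion avoidance: grow the window by one segment
# per round trip, and halve it whenever loss is detected.
def aimd(rounds, loss_rounds, start=1, increase=1, decrease=0.5):
    window, history = start, []
    for r in range(rounds):
        if r in loss_rounds:
            window = max(1, window * decrease)  # loss seen: back way off
        else:
            window += increase                  # no loss: probe for more bandwidth
        history.append(window)
    return history

history = aimd(rounds=10, loss_rounds={5})
print(history)  # window climbs, halves at the loss event, climbs again
```

On a long, lossy path, those halvings happen often, and the slow linear recovery is why the connection never gets near line rate.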

The bottom line: TCP congestion-control algorithms greatly affect transfer speed, and no algorithm is good for every situation. Linux gives you flexibility in this area (and uses a better default); Windows gives you none.
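On Linux the algorithm can even be chosen per socket, via the `TCP_CONGESTION` socket option (`tcp(7)`). A sketch, Linux-only; "cubic" is the usual default, and other names ("bbr", "htcp", ...) need the matching kernel module loaded:

```python
import socket

# Sketch, Linux-only: select a per-socket congestion-control algorithm
# with the TCP_CONGESTION socket option, then read it back.
algo = b""
if hasattr(socket, "TCP_CONGESTION"):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
        raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
        algo = raw.split(b"\x00", 1)[0]  # kernel pads the name with NULs
    except OSError:
        pass  # algorithm not available in this kernel
    finally:
        sock.close()
print("congestion control:", algo.decode() or "unavailable")
```

The system-wide default lives in `/proc/sys/net/ipv4/tcp_congestion_control`; on Windows there is no equivalent per-socket knob.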

To test raw bandwidth, you have to saturate a link with UDP data and count how much is received. This is fairly pointless, as it's not the usable bandwidth, but it does tell you the "raw" potential. The problem is that this raw potential can be subverted by even a small amount of packet loss.
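The measurement itself is simple, which is exactly why it's a blunt instrument: UDP gives the sender no feedback, so all you can do is blast datagrams and count what arrives. A minimal sketch over loopback (the host, port, and packet counts are illustrative):

```python
import socket
import threading
import time

# Sketch: crude raw-bandwidth probe. Send UDP datagrams as fast as
# possible and count the bytes that actually arrive; the shortfall is
# loss the sender never hears about. Loopback is used for illustration.
HOST, PORT = "127.0.0.1", 15001  # hypothetical test endpoint
PAYLOAD = b"x" * 1400            # stay under a typical 1500-byte MTU
COUNT = 2000

received = 0

def receiver(sock):
    global received
    sock.settimeout(0.5)  # stop once the sender has gone quiet
    try:
        while True:
            data, _ = sock.recvfrom(2048)
            received += len(data)
    except socket.timeout:
        pass

rsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rsock.bind((HOST, PORT))
t = threading.Thread(target=receiver, args=(rsock,))
t.start()

ssock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
start = time.time()
for _ in range(COUNT):
    ssock.sendto(PAYLOAD, (HOST, PORT))
elapsed = time.time() - start
t.join()
rsock.close()
ssock.close()

sent = COUNT * len(PAYLOAD)
print(f"sent {sent} bytes, received {received} "
      f"({100.0 * received / sent:.1f}%) in {elapsed:.3f}s")
```

Even on loopback the receive buffer can overflow and silently drop datagrams, which is a small taste of why a raw UDP number overstates what TCP will actually deliver.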
