You wouldn't believe how slow it is to get even tiny patches to the source code deployed.
Domain Registry of America has made a business out of sending deceptive letters to domain owners using other registrars, asking them to "renew" their domain registration with DRA. The letters are cleverly written to obscure what is actually happening, so that people who think they are just renewing their domain actually have their registration transferred from their current registrar to DRA.
Those papers have been out all of four days. I would wait a few days before proclaiming them rock-solid disproof of firewalls. Even Motl admits that they are making a few assumptions that are themselves subject to debate.
Why would I trust them with anything else?
1) 'Climate change' has always been used more often than 'global warming' in the actual literature - a fact easily confirmed with the Google Ngram Viewer.
2) You can thank Republican Party strategist Frank Luntz for popularizing 'climate change' over 'global warming' in the mass media. The Republicans got behind 'climate change' specifically to convince the public that it wasn't a serious issue. So your 'honesty' argument backfires: it was the Republican Party that wanted 'climate change' to be the popular term so people wouldn't take it seriously.
Really - did anyone in Washington bother to consider that by repeatedly demonstrating to terrorists how easy it is to use 'remote-controlled model planes packed with explosives' ("drone strikes") to cheaply kill people you otherwise couldn't easily reach, they were practically guaranteeing the terrorists would eventually try variations on the same idea themselves?
Remember: Don't Blink
I think MS may have revised the tech note after ZDNet wrote their story. It was offline for a little while after the story came out, and then came back again.
Yep. This is Google explaining to Apple that they aren't the only ones with patents. The monster patent portfolios of all the big players exist in part to deter other large players from launching patent wars. It is a form of 'Mutually Assured Destruction'. Apple went nuclear a couple of years ago. Google (and the other large players) are now launching their counter-strikes to demonstrate to Apple why that was a bad idea.
If Apple has any sense (more likely now that Steve Jobs is gone) they will begin quietly trying to wind down the patent wars.
A "small" DDOS attack is more than enough to take down an unprotected machine. I experimented with less intensive approaches *first*. If I limited the number of Apache connections, they would run up the number of open connections until the server quit responding. If I let the number of processes grow, they would keep adding connections until the machine ran out of memory to support additional connections. With a pool of more than 30K potentially attacking machines, it takes an *incredible* amount of resources to just 'ride it out'.
You run into multiple limits: How many simultaneous TCP connections can your system handle? How much memory does it take per connection? How much CPU does it take to context switch between thousands of connections?
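To see how fast the memory limit bites, here is a back-of-the-envelope sketch for a prefork-style server that dedicates one process per connection. All of the figures are my assumptions for illustration, not measurements from the actual attack:

```python
# Rough estimate of how many worker processes fit in memory before an
# unprotected server starts swapping. All numbers are illustrative guesses.
mem_total_mb = 2048      # assumed RAM on a modest older server
other_usage_mb = 512     # assumed OS + database + everything else
mem_per_proc_mb = 10     # assumed footprint of one Apache prefork child

max_procs = (mem_total_mb - other_usage_mb) // mem_per_proc_mb
print(f"Roughly {max_procs} simultaneous workers before memory runs out")
```

A botnet of tens of thousands of machines can hold that many connections open without breaking a sweat, which is why simply raising the process limit only changes *which* resource runs out first.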
It was a simple yet very effective attack. If you didn't have a good sysadmin who *could* mount an intensive defense, your choices were:
1) Let your site go down.
2) Pay a DDOS defense service to defend you.
This assumes they are just trying to flood the httpd with requests, because doing so requires fewer resources on their part and generally only harms the target box, not the ISP hosting it.
If you block an attack like this, you run the risk that the attacker will switch tactics and start simply flooding your line.
True, they *could* have escalated to a packet flood (and oddly enough, naively dropping the TCP packets actually converted the HTTP flood into a SYN flood at first - which didn't pose much of a problem for me at the rates they were running).
But it is much more resource intensive for the attacker and they are optimising return on investment. They can waste time dedicating their botnet to packet flooding a minor site with no financial payoff even if they succeed in bringing it down, or they can move on to easier targets where they can continue to 'time share' the botnet traffic among multiple targets.
It really is the 'why have locks on your doors and windows when the thief could kick them in' argument. Sure - he *could*. Or he could move down the street to the house that left their bathroom window open when they went to work.
Yes - but they don't change web browsers with every sequential request.
Rate limiting IP addresses doesn't work when each specific source IP address is only hitting you a few dozen times per hour. They bury you by having tens of thousands of different machines all hitting you independently. You can be getting hundreds of requests per second and never trigger the rate limiting.
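The arithmetic behind this is simple enough to sketch; the specific numbers below are assumptions chosen to match "tens of thousands of machines" and "a few dozen hits per hour", not figures from the actual attack:

```python
# Why per-IP rate limiting fails against a distributed attack:
# each bot stays far below any sane per-IP threshold, yet the
# aggregate load is crushing. Numbers are illustrative assumptions.
bots = 30_000                    # size of the attacking pool
requests_per_bot_per_hour = 36   # "a few dozen" hits per source IP

aggregate_per_second = bots * requests_per_bot_per_hour / 3600
print(f"Aggregate load: {aggregate_per_second:.0f} requests/second")
```

Each individual address looks like a casual visitor; only the aggregate reveals the attack, which is why detection has to look at the traffic as a whole.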
The essence comes down to two things. Neither is particularly complicated in principle, although getting it right can be a bit fiddly.
1) Detect attacking IPs.
So I 'tailed' the web server log and analyzed it in one to ten minute chunks to detect abnormal accesses. All detected addresses were added to a persistent database of blacklisted addresses.
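A minimal sketch of that kind of detector, assuming a common access-log format where the source IP is the first whitespace-separated field, and using a simple requests-per-chunk threshold (the threshold value and field position are my assumptions, not necessarily what the original scripts used):

```python
import collections

# Sketch: analyze one chunk (e.g. 1-10 minutes) of access-log lines and
# flag sources whose request count is abnormal for that window.
THRESHOLD = 50  # assumed cutoff; tune against normal traffic levels

def detect_attackers(log_lines, threshold=THRESHOLD):
    """Return the set of source IPs exceeding the per-chunk threshold."""
    hits = collections.Counter()
    for line in log_lines:
        fields = line.split()
        if fields:
            hits[fields[0]] += 1  # assumes IP is the first field
    return {ip for ip, count in hits.items() if count >= threshold}
```

In practice the flagged addresses would be appended to the persistent blacklist database and pushed into the firewall.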
2) Add the detected attacking addresses to an efficient firewall.
A naive firewall blacklist might try to just put each address in one big long list. This doesn't scale well beyond a couple of hundred attacking addresses. On the older machine I had, I used a 'divide and conquer' approach: I created a few hundred filter chains keyed on a prefix of the address, so any one packet only had to be checked against a short chain instead of the whole list.
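One way to sketch the divide-and-conquer idea is a script that emits iptables commands bucketing blacklisted IPs into per-first-octet chains. The chain names and the first-octet split are my assumptions for illustration; the original setup may have keyed the chains differently:

```python
# Sketch: generate iptables rules that route each packet into one short
# per-octet chain instead of scanning one huge linear blacklist.
# Chain names (BLK_*) are made up for this example.

def build_rules(blacklist):
    """Return iptables commands for a prefix-bucketed blacklist."""
    rules = []
    octets = sorted({ip.split(".")[0] for ip in blacklist})
    for o in octets:
        rules.append(f"iptables -N BLK_{o}")
        # Route traffic from o.0.0.0/8 into its dedicated chain.
        rules.append(f"iptables -A INPUT -s {o}.0.0.0/8 -j BLK_{o}")
    for ip in sorted(blacklist):
        o = ip.split(".")[0]
        rules.append(f"iptables -A BLK_{o} -s {ip} -j DROP")
    return rules

for rule in build_rules({"10.1.2.3", "10.9.9.9", "192.0.2.7"}):
    print(rule)
```

With a few hundred chains, a packet from a non-attacking prefix traverses only the short INPUT dispatch list, and a packet from an attacking prefix is checked against only the handful of blacklisted addresses sharing its prefix.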
After that it became a slow game of cat and mouse. The attacker would alter his attack to try to slip past the detection; if he managed to bypass the filters, I would update the detection software to key on something else he wasn't getting quite right. After about two weeks they quit attacking the web server.
The largest issue, really, was that I was starting my defense from a 'standing start': I had to write all the needed scripts from scratch while the attack was still ongoing.