First off, I'd like to thank Slashdot for giving me an opportunity to spew my opinions. Since Slashdot readers look for "stuff that matters," I'll #include [std/disclaimer.h] and say that this is me talking, not my employer, and I'll be honest about what I think the issues are (and hopefully not ramble too much).
Is a network proof against DDoS possible?
by Paul Crowley
Is vulnerability to DDoS-type attacks due to a flaw in the design of TCP or IP, or is the design of a network that's inherently resistant to such attacks an unsolved problem? Is it possible to imagine a fix that would address this, or a protocol that wouldn't be vulnerable even when many machines are compromised?
There are flaws in anything created by humans. Sure, the TCP/IP protocols we are using today have some weaknesses, but they also work amazingly well, don't you think? And all things created by humans are improved over time as new ideas are developed and known problems are identified and solved. It's just a matter of how quickly these improvements can be implemented, and as we've seen with GOSIP, IPv6, etc., change can come very slowly. (A good book on this topic if you are interested is "Diffusion of Innovations", by Everett M. Rogers, The Free Press, ISBN 0-02-926650-5.)
Denial of service attacks are one of the easiest forms of attacking systems and networks; resources can easily be exhausted, programming flaws in network stacks and devices can be exploited to cause failures, and covert channels can be created allowing hidden and practically unstoppable communication and control. In other words, there is no single "this problem" to be solved here, but rather a whole bunch of little "these problems".
In fact, the current DDoS tools implement UDP large packet floods, TCP SYN (session setup resource exhaustion) floods, ICMP "echo" floods and "Smurf" (directed broadcast ICMP "echo reply") floods, and other DoS techniques could easily be added (and many other exploits exist).
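To make the resource-exhaustion mechanism behind SYN floods concrete, here is a toy simulation (the class name and the backlog size are my own illustrative choices, not taken from any real TCP stack). No real packets are involved; it just models a fixed-size table of half-open connections:

```python
# Toy model of TCP SYN backlog exhaustion (pure simulation, no networking).

class SynBacklog:
    """Fixed-size table of half-open (SYN_RCVD) connections."""

    def __init__(self, size):
        self.size = size
        self.half_open = set()

    def syn(self, src):
        """A SYN arrives; return True if we can hold a half-open entry."""
        if len(self.half_open) >= self.size:
            return False          # table full: the SYN is dropped
        self.half_open.add(src)
        return True

    def ack(self, src):
        """Handshake completes; the entry leaves the backlog."""
        self.half_open.discard(src)

backlog = SynBacklog(128)

# An attacker sends SYNs from 200 spoofed sources and never completes
# the handshake, so the entries sit in the table until they time out.
accepted = sum(backlog.syn("10.0.0.%d" % i) for i in range(200))

# A legitimate client now finds the table full.
legit_ok = backlog.syn("192.0.2.1")
print(accepted, legit_ok)   # 128 False
```

Real stacks mitigate this with timeouts and techniques like SYN cookies, but the basic arithmetic -- a small fixed table versus an effectively unlimited supply of spoofed SYNs -- is the whole attack.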
Yes, there are some proposed fixes that can address some of these problems. I'll get to them in a minute.
by Dr Caleb
There seem to be several solutions floating around, mostly smart routers that track valid traffic and MAC addresses.
Would changing to IPv6 help eliminate these types of attacks? From what I read of the specs on IPv6, all the data needed to track a packet from destination right down to the MAC address is included in the packet.
I don't claim to be enough of an expert on IPv6 yet to say which of the current set of DoS attacks are eliminated by its features (better to ask the real experts like Steve Bellovin). Perhaps IPv6's Quality of Service features, IPSec authentication features, etc., *may* provide a means of defeating some packet flood attacks by rate limiting flows, allowing quicker discarding of "invalid" packets, etc., but I'm not sure if it will entirely eliminate DoS attacks.
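To illustrate what "rate limiting flows" means in practice, here is a minimal token-bucket sketch (the names and numbers are mine, not code from any IPv6 or router implementation). It takes an explicit clock argument so the behavior is deterministic:

```python
class TokenBucket:
    """Allow `rate` packets per second on average, with bursts up to `burst`."""

    def __init__(self, rate, burst):
        self.rate = float(rate)     # tokens added per second
        self.burst = float(burst)   # maximum tokens the bucket holds
        self.tokens = float(burst)
        self.last = 0.0             # timestamp of the previous check

    def allow(self, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                # over the rate limit: drop the packet

bucket = TokenBucket(rate=10, burst=5)

# A burst of 8 packets at t=0: the first 5 pass, the rest are dropped.
burst_results = [bucket.allow(0.0) for _ in range(8)]

# Half a second later, 5 more tokens (0.5 s * 10/s) have accumulated.
later = bucket.allow(0.5)
print(burst_results.count(True), later)   # 5 True
```

A flood still consumes upstream bandwidth before it hits the limiter, which is one reason rate limiting mitigates but does not eliminate these attacks.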
There are other new proposals that have recently been put forward for consideration, such as Bob Moskowitz' "Host Identity Protocol" (which addresses some problems in TCP session establishment and identification of "valid" packets) and a proposed method of tracing packet flows (independent of ISP involvement) that uses a probabilistic packet marking technique developed (coincidentally) by researchers at the University of Washington. Documents describing both of these are available at:
Since IPv6 is *still* not widely implemented, and these other proposals are likely years from implementation as well, it is probably best to focus now on the fundamental issue in large scale DDoS attacks: we need to put a MAJOR emphasis on minimizing the population of systems that can be trivially root compromised. (I didn't say it would be an *easy* solution, but it is one thing that can be started immediately.)
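The probabilistic packet marking idea mentioned above can be sketched in a few lines. This toy simulation (the router names and marking probability are hypothetical, and it shows only the simplest "node sampling" variant) demonstrates how a flood victim could recover the order of routers on an attack path from mark frequencies alone:

```python
import collections
import random

def forward(path, p, rng):
    """Forward one packet along `path` (attacker -> victim); each router
    overwrites the packet's single mark field with probability p."""
    mark = None
    for router in path:
        if rng.random() < p:
            mark = router   # routers nearer the victim overwrite earlier marks
    return mark

rng = random.Random(42)
path = ["R1", "R2", "R3"]   # R3 is the router nearest the victim
p = 0.25

marks = collections.Counter(forward(path, p, rng) for _ in range(20000))
marks.pop(None, None)       # discard packets no router marked

# Marks from routers close to the victim survive overwriting most often,
# so sorting by frequency recovers the path order, nearest router first.
by_frequency = [router for router, _ in marks.most_common()]
print(by_frequency)         # ['R3', 'R2', 'R1']
```

The appeal of the scheme is that the victim needs nothing from the ISPs along the path except that their routers mark packets; given enough flood traffic, the path falls out of simple counting.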
Stop Spoofing At The Backbone?
How viable would spoof protection at the backbone level be? In other words, after a certain date, all downstream links are categorized as either able to peer for other network blocks, or simply not. Admins who can't be bothered to spoof-protect their networks would get IP source ranges outside their IANA assigned IP block dropped at their first upstream provider; sites which need to maintain peering relationships thus have their direct motivation (their backup networks will cease to function) to specifically lock down their peer forwarding to only those IP ranges they're actually peered with.
Yes, you obviously get problems as peering scenarios get traveling-salesman levels of complexity, but most sites (to my knowledge) don't exceed more than a few levels of peering -- shouldn't we take advantage of this fact to enforce a top-down elimination of infinite source spoofability? And, if so, would the precedent that this creates help or hinder the growth and freedom of the Internet?
Eliminating IP source address spoofing would eliminate attacks such as forged DNS query attacks and the MacOS 9 TCP/IP stack bug (both packet size/number amplification flood attacks), and would definitely make it easier to trace packets back to their source networks, greatly simplifying and speeding up the (still major) task of stopping the agents from sending out packets and doing forensic investigation. Add to that the elimination of incoming directed broadcast packets and you also get rid of "Smurf" amplification attacks.
Not only do I think this should be done at a site's border routers, but also on all routers within the network. Programs like stacheldraht attempt to determine if they can successfully send packets with forged source addresses, and both it and TFN2K have code to randomize packets on a per octet basis (not exactly CIDR compatible, but still pretty clever and effective.) This means that if you have a /16 network with several hundred subnets, the agents could forge the final two octets, looking like they are coming from hosts on all of a site's subnets at once. Depending on your network infrastructure and political organization and authority, this can either force you to have to sniff on *each* subnet, or do your own router-by-router debugging of packet flows to locate the actual host(s) sending the packets (links to various documents on packet tracing are found on the page I referenced above.) If no host can forge source addresses beyond its own subnet, the task is greatly simplified (and you only need to put filters on one router to stop the flow from one agent host.)
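The filter itself is conceptually tiny: accept a packet only if its source address falls inside the prefix assigned to the link it arrived on. A sketch using Python's standard ipaddress module (all of the prefixes and addresses here are made-up examples):

```python
import ipaddress

def ingress_ok(src, assigned_prefix):
    """RFC 2267-style source address validation: accept a packet only if
    its source address lies within the prefix assigned to the link it
    arrived on; anything else is presumed spoofed and dropped."""
    return ipaddress.ip_address(src) in ipaddress.ip_network(assigned_prefix)

# At the border router of a site holding 192.168.0.0/16 (example prefix):
print(ingress_ok("192.168.7.9", "192.168.0.0/16"))    # True  - plausible source
print(ingress_ok("10.1.2.3",   "192.168.0.0/16"))     # False - spoofed, drop

# The same check applied per subnet on an internal router stops an agent
# on 192.168.7.0/24 from forging the final two octets of the site's /16:
print(ingress_ok("192.168.9.200", "192.168.7.0/24"))  # False - forged octets
```

The border-only check catches addresses forged from outside the site's block; only the per-subnet check catches the stacheldraht/TFN2K trick of randomizing the low octets within it.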
Practices like those described in RFC 2267 should, in my opinion, be a standard requirement under any network peering agreement and AUP, but what motivation do ISPs and NAPs have to enforce them? It's not common for a company to say to a customer, "Hey, I don't want to take your money!" It tends to be a matter of waiting for an attack to happen for upper management to start asking the techies how to react, but by then the damage is already done. More emphasis on prevention and preparation for possible attacks seems to me to be more prudent (some ways of mitigating DDoS attacks are also listed on the page referenced above). If it costs more, so be it. That is the price of betting our economy on the Internet as its fundamental infrastructure.
The bad news here is that this tactic of filtering does nothing to deal with bandwidth (or other resource, like half-open socket) consumption attacks -- e.g., large packet UDP or ICMP floods, or SYN floods to random ports -- so again we've only solved a subset of the problems (forged addresses and directed broadcast), not to mention this solution doesn't help at all with the initial compromises and installation of agents.
Firewalls for Dummies?
With the increasing popularity of broadband, always-on connections and the increasing distribution of networking software, it seems like "Joe DSL" faces a greater risk of having his system compromised than before. How much can the average user be expected to learn about securing their system? Do you foresee developments, either in software, education or in other services that might help private computer users or small time administrators protect themselves better?
I expect the average Joe DSL to probably learn the hard way, just like he learned not to step off the curb in rush hour against the traffic light, and to take everything valuable out of his car when he parks it on a dark city street: by suffering an incident, and the resulting cleanup cost.
This is an education problem of *huge* proportion, and just like the filtering question, there isn't much motivation for ISPs to hand-hold their growing customer base, and their marketing department -- just take a look at their ads -- tells you how fast they are, but doesn't say a word about the new risks you will face (some will mail you a warning a few months later, which may be a bit too late.)
Not only that, but most broadband ISPs have user bases far larger than they have staffing to support, so even trying to contact them, find your way through the tier1/tier2/first line manager/Nth line manager hierarchy, and actually identify the owner of a compromised cable modem or DSL served system can take days. These customers' systems make excellent bases for scanning/attacking other sites, running eggdrop bots, or "bouncing" connections to make it harder to trace attack activity, and the intruders know this.
As for education, I am not quite sure what can be done there. Mandatory "driver's license" style tests before getting a DSL account? Forget it. "Tickets" handed out by the Net Police for allowing your system to be compromised and used to attack other sites? Not likely, and I don't think anyone wants that. Lawsuits by victims of attacks against the owners of compromised systems? That is already starting to happen, but do we really want people to learn as a result of lawsuits, to throw lawyers at the problem, diverting badly needed system admin funds to pay $200 an hour to the suits?
There probably will need to be some monetary incentive to securing systems (because people pay attention to money.) The federal government is passing laws about privacy of personal, medical, and credit information (and these can't be private if the systems that house them are not secure), and insurance companies will likely start charging higher rates for systems that are not managed well and become involved in security incidents with high dollar damages (but PLEASE, first raise the rates on anyone who drives a car while talking on a cell phone!)
Some ISPs now offer security services, and will provide "firewall" services for their customers, but this comes at a high price. Most users want $19.95 a month service, which is basically just buying a raw, wide open pipe.
"Personal firewall" software is also becoming popular, but I've wasted many an hour explaining to someone reporting an "attack" that what their program was reporting was either a false positive, or was mis-categorized, and not at all what they thought. Feature-filled packet filtering programs allow users to shoot themselves in the foot and break TCP/IP applications, while overly simplified programs leave gaping holes, or users turn off too much and only think they have security. A lot more work and education is needed in this area.
A fruitless exercise?
Isn't the intersection of the sets:
- Clueless enough to allow massive DoS out of their network.
- Yet likely to install this detector.
essentially empty?
Yes. Next question?
Seriously, the detection of agents/handlers on a system, and on the network, let alone doing forensic data gathering to assist in stopping a distributed attack and identifying the attacker, is not easy. There are too many ways for an intruder to disable logging and accounting, conceal programs and files using "root kits" and loadable kernel modules, and change the defaults for commands and packet contents that will defeat the file system and network scanners that have been developed to deal with these DDoS programs, and the learning curve is steep to counter the intruders' anti-detection measures.
This is one of my pet peeves; THERE ARE NOT ENOUGH GOOD SYSTEM ADMINISTRATORS. There needs to be WAY MORE of them, they need to be PAID AND TRAINED BETTER, and (to put it bluntly) they need to be considered a critical resource REQUIRED for powerful computers on the Internet today, not as overhead expense to be minimized.
The fundamental requirement for securing (or breaking into) any system is knowing how the system works, how to take it apart and how to put it back together again. These DDoS attacks over the past six months have been made more costly to respond to because of things like "root kits", which exceed the average admin's ability to get around. For more on why, see:
I think a big reason that Universities, K-12s, small businesses and non-profits, and home users with cable modem and DSL lines have their systems regularly compromised is because these systems are often deemed a necessity for research or business, but the only money that goes into them is the money it took to buy the hardware in the first place. They very often do not have tape drives, software upgrade licenses and regular patch application, sufficient manuals or books on system administration, and the person administering the system is usually the first person who can spell "U-N-I-X" and has a "real" job doing research, programming, or Web page design.
People need to start thinking about today's top of the line computers on gigabit networks as the equivalent of a BMW, a Range Rover, or an Audi A series. You would be an idiot to only put gas into it and never take it in for regular maintenance, instead trying to do the work yourself in the garage, and to leave a spare set of keys in plain view on the dashboard. No, you take your car in regularly to a trusted and trained mechanic (and pay $50 an hour for their skills), change the oil and rotate the tires regularly, and do your best to keep it from being stolen (including buying those annoying car alarms that nobody pays attention to when they go off.) But that is basically what too many people do with computers; don't take care of them, don't hire skilled people to regularly maintain them, don't adequately monitor them, and don't really care if someone else hijacks them.
I regularly hear people say, "I don't care about securing my system. I don't have anything important on it. What could they steal?" Well, there are gigabytes of disc space these days, REALLY fast CPUs that can spit out lots of packets, and high-speed network connections. It is when tens or hundreds of thousands of people think and act this same way that someone else suffers. This attitude HAS to change and people HAVE to learn about the risks and ways to address them.
Should security research be done in obscurity?
It is nearly a mantra among us that there is no security through obscurity. It would seem that with a sufficient number of us too lazy or too ignorant to secure our own machines that there is possibly no security through openness either. Do you think that the open research model that Mixter, Farmer and others have always advanced as a reason for releasing their tools is still justified?
Yes, I think the open research model is justified. There is a passage in the Bible (John 8:32, and on a plaque in CIA HQ), "And ye shall know the truth, and the truth shall set ye free." But that only works when everyone knows the truth (and uses that information wisely in their design and purchasing decisions). Until that balance is reached, those who wish to abuse this knowledge win out over those who have not yet attained it. It's that simple (and that hard - ooh, how Zen-like.)
As you point out, there is a large percentage of system admins that don't have the same knowledge as those who break into their systems. If that percentage (of a very large, and growing number of powerful computers on fast networks) isn't reduced, the total number of systems that can be compromised and controlled by an attacker grows to the point where it's now possible to build attack networks of two or three THOUSAND computers.
What I think needs to happen is to follow the advice of someone (I forget the source) who said, "There should be a hacker on every board of directors," and I would add on every development team. I don't think it helps to ignore weaknesses, or keep them quiet, because they will eventually cause problems. And it is not enough to identify the weaknesses if nobody learns from the mistakes of the past and actively tries to avoid them in the future. One reason that these changes occur so slowly, in my opinion, is that the people who really know the technical details of security are too far removed from the real decision makers, and the layers of managerial filtering in between often filter out the security voices in favor of the "let's please the masses" voices.
Engineers are not taught simply how to construct buildings, they are taught how to know when they will fail so they don't come crashing down and kill everyone. There are standards and codes that say how buildings should be constructed, and we (for the most part) don't have much trouble with buildings killing us in the U.S. But software and operating systems are designed for ease of installation and use by the largest number of (untrained) people possible and practically nobody complains. Where is this same sense of "we must build it so it doesn't break?"
I hear an ad for a new online bank and think, "Great. Now my bank account is at risk." I hear about a new Web based business communication site and think, "Great. Now the people discussing business plans will have their discussions at risk." I hear a news story about a company that facilitates designing buildings online and think, "Great. Now the plans to unknown office buildings are at risk." I can just picture the CEOs and the stock analysts drooling at how cool and efficient these new web tools are, and how much money the company that produced them is going to make, but unless they are designed with security in mind from the start, they should all be considered very risky to use.
Somebody in a position of decision making authority for any e-business needs to understand these weaknesses that are discovered and publicized, and to make sure these weaknesses are acknowledged and addressed in ALL computer based system and application designs.
I think one of the biggest issues will be identifying Denial of Service as an attack. I have a legitimate load testing utility that simulates actual browser traffic. Say I run it against someone else's site. They'll see that a lot of traffic's coming from me, and eventually figure out it's bogus and take appropriate measures. But distribute this and it'll look like actual traffic. Get enough friends doing it, and we take 'em down with what appears to be perfectly normal browsing.
The analogy to the "real" world is roads and bridges. During normal hours, they run well. During rush hour, they clog up and perform poorly. And during a demonstration (like recent examples in Seattle and Miami), they clog up and perform poorly. You can consider the recent anti-WTO situation up in Seattle to have been a DoS attack on downtown. But you wouldn't consider gridlock at 5:30 p.m. in Los Angeles to be a DoS attack.
To solve these problems, you have to know what's causing them. If it's just normal traffic and the infrastructure is insufficient, it gets ignored until people get fed up enough to vote more tax money into building wider roads or better public transportation (again, analogous to buying more servers or a fatter pipe). If it's demonstrators, you either address their concerns or you send in the National Guard to beat the crap out of them (depending on the political climate).
In this world, it's easier to differentiate the two situations. If a bunch of cars are jammed together at rush hour, you know it's a traffic problem. If it's crowds of people singing songs and holding signs, you know it's a demonstration. And if it's a possible sick-out at Northwest Airlines, you're not sure if it's a DoS or not, so you get a warrant to read their home e-mail and find out.
With computer protocols, though, usage and abuse can look identical. Even wild surges in activity can be from legitimate usage. How do you foresee systems being put in place that can differentiate between actual usage and DoS? Doesn't this almost inevitably lead to some non-forge-able, traceable, unique identifier? And doesn't this translate to the demise of privacy on the Web?
Not necessarily. Sure, normal usage may exceed capacity. But a protest by thousands of people is not "normal usage"; that is a mass exercise of individual rights in a democracy to gather and express their opinions. [I live about four blocks from where the tear gas and concussive grenades were being lobbed at protesters, and I personally think the response was excessive and the protesters had a right to protest. They weren't damaging property in my neighborhood, they were chased into it, and traffic could simply go around it. Seattleites are used to backups, like you point out.]
I *don't* think it is OK for a single individual (or small group) to take control of the resources of 3,000 *unknowing* individuals and anonymously force them into that individual's service. That is not an exercise of democratic speech, that's theft of private resources. That's what DDoS attacks are.
If there is a problem with truly normal usage exceeding capacity, you could argue that capacity simply needs to be increased, and there is a cost associated with that increase. I start to question things when this increase in capacity is made on an insufficient budget, so there is nothing left for people and tools to protect the new "required" infrastructure. If the infrastructure is so vital, should its proper monitoring and administration be neglected? Is it wise to use this as the infrastructure for our record-setting-growth economy? If we build a fragile infrastructure for our economy just for the sake of growth and short term revenue (and pandering to customers demanding more and more services at lower and lower prices), the result is that an individual can wage an anonymous protest and take parts of it down. I'd rather that the growth was a bit slower and infrastructure was more secure.
Antionline: True help?
I saw this evening on CNN that the FBI has enlisted the help of none other than Antionline, in its search for the perpetrators of the DoS attacks. What is your opinion, regarding this decision? How does this reflect upon the FBI's ability to investigate cybercrimes?
I have not seen any news reports that Antionline was enlisted to assist the FBI (and don't see anything searching CNN online.) I also read a Reuters article that claimed Mixter wrote stacheldraht (he did not), and that stacheldraht is used to break into systems (it has nothing to do with breaking in, just sending packets -- the break-ins are done using other tools, which usually implement buffer overflows in services like rpc.cmsd, rpc.ttdbserverd, amd, named, etc.)
Just because the media says something (or worse, one reporter quotes another reporter) that doesn't mean it is true. They make plenty of mistakes, especially when reporting on tight deadlines (I have published corrections to some articles in the DDoS page I referenced above.)
If you've had much contact with security specialists working for the government, how much confidence do you have in them that they're smart enough to:
- Understand the problem well enough
- Spot good solutions if they come along
DDoS attacks ARE a problem. I could imagine that they could serve as terrorist/psychological attacks in time of war. Because the computers that are doing the actual DoS attacks could be within the country being attacked, the attacks would be nearly impossible to stop at the borders.
"The government" is a pretty big population, which includes federal law enforcement (as you point out), as well as a huge slew of departments and agencies and their state/local equivalents (including public schools and universities). In such a large population, you will find both skilled and unskilled members of that population (fitting a bell-shaped curve like most populations).
If we don't like it that all attacks are attributed to "hackers", we should likewise have some respect and not just jerk our knees and say anyone who works for the government is automatically clueless.
The General Accounting Office (GAO) has been auditing and analyzing many of the federal departments and agencies for years, and some of its reports (I have a number linked on my home page) are pretty critical, while others highlight agencies that have done a lot to secure their systems and provide "best practices" advice to improve the situation.
As for law enforcement, the FBI has been doing a lot recently to create a skilled central core of computer crime analysis and investigation resources, and in establishing training facilities and developing working relationships with their peer agencies in other countries (since the Internet is global, response must be global). Since they haven't been at this very long, of course this will be a bumpy and sometimes inconsistent process, and it will take time to build depth and breadth of computer forensics skills (and there is usually a LOT of forensic data to process and understand), but they are working very hard.
I would also say that I think the Clinton administration has done a much better job than its predecessors in trying to address these issues (e.g., the President's Commission on Critical Infrastructure Protection, the formation of NIPC to coordinate incident response and information dissemination to the public and private business sectors, and the National Plan for Information Systems Protection.)
If you've read the National Plan -- subtitled "An Invitation to a Dialogue" -- you will see that a great deal of thought has gone into dealing with infrastructure protection, and that they are asking for cooperation and input from the private sector security experts, which means us. (Now is the time to make your opinions known, and that doesn't just mean ranting on the dc-stuff list, where you are preaching to the choir. Of course, people there will agree with you, but does that change anything? You need to write your Congressional representatives, the President's Council, and vote.)
I, too, question the amount of emphasis in the current budget being placed on surveillance, but I'm really happy to see money being allocated to programs like better forensic analysis capabilities and identifying talented high-school students and helping them to study computer security in college, rather than ignoring their talent (a form of disrespect or a result of fear) and risking losing them to a life of attacking systems instead of securing them.
For example, I know at least one admin (who was 15 at the time I met him) who knows more about securing Unix systems than many admins I encounter on a daily basis. Sure, he was 15 and had some issues with judgment that 15-year-olds have that caused friction with his employers, but he was just 15! Give him a break, and respect his talents! If he was managed more closely, his obvious skills would *still* be an asset to his former employers. I don't want to see someone like this get frustrated at not finding a place to get paid for what he loves to do, and land in jail for following his curiosity and passion in his own way (which usually involves making an eventual mistake in judgment that draws the attention of law enforcement). I already pointed out there is a lack of skilled system administrators, and I'd rather see young talent be put to use to solve these problems, and the National Plan addresses this.
by Ex Machina
What do you have to say to the idea that this could be a DoS attack launched by computers infected with a Robert T. Morris style worm? Would it be possible to launch something like this and have it and its probes remain undetected until a date where it will launch a synchronized DoS?
Given what I've seen as far as these particular tools go (including the scanner used by one group), I have no reason to believe the current attacks are automated and worm-like.
That said, I think it won't be long before someone *tries* to take that next step and further automate the process of scanning & intrusion to construct DDoS networks.
Think about it, though, for a moment. Using the current DDoS tools, the intruders need to create a large network, without losing agents due to attrition as system/network admins notice the initial "setup" intrusions, and they would have to control the growth of this network so that the handlers are not crushed under the weight of an overly large network (or exposed because the agent "Hi mom!" traffic gets too noisy), hope that clocks are synchronized well enough to not expose the attack too early, and to control the resulting network during an attack, all without being detected. There are some tricky issues of coordination and communication that must be dealt with to prevent such a worm from running wild and disclosing itself. Whoever wants to try this should probably ask rtm about what it feels like to make that kind of mistake.
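The growth-versus-attrition trade-off described above can be caricatured with a tiny difference model (every number here is invented purely for illustration; real propagation is obviously messier):

```python
def grow(start, recruit_rate, attrition, steps, cap=None):
    """Toy model of DDoS-network growth: each step, the surviving agents
    recruit `recruit_rate` new hosts per agent, then admins clean up an
    `attrition` fraction of the network. All parameters are illustrative."""
    agents = float(start)
    history = []
    for _ in range(steps):
        agents = agents * (1.0 + recruit_rate) * (1.0 - attrition)
        if cap is not None:
            agents = min(agents, cap)   # handler capacity limits growth
        history.append(int(agents))
    return history

# Recruiting faster than admins clean up: the network explodes, and with
# it the "Hi mom!" control traffic that risks exposing the handlers.
print(grow(10, recruit_rate=0.5, attrition=0.1, steps=10)[-1])

# Attrition outpacing recruitment: the network quietly dies off.
print(grow(10, recruit_rate=0.1, attrition=0.3, steps=10)[-1])
```

The attacker's dilemma is that both failure modes are fatal: uncontrolled growth gets the network noticed, while cautious growth lets cleanup by alert admins erode it to nothing.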
The alternative is to not use a coordinated/distributed model, but instead use the more standard model of propagating uncontrolled attack agents using a combination of social engineering and trojan horse programs. This has already happened.
In early February, 1999, a message faked to look like it came from Microsoft, claiming to be an upgrade to Internet Explorer (with an attached program named "ie0199.exe") was sent to many thousands of users on the Internet. Those who ran this program got what appeared to be an innocuous error message about a missing DLL, and most just gave up and deleted the message. What they didn't realize was that they had just unwittingly installed a program on their system that set itself up to run on boot the *next* time the system came back up. At next system startup, the program then started sending packets (as a self-described act of revenge) to random hosts on the Bulgarian Telecommunications Company network, causing them significant problems for who knows how long.
Worms also seem to work best against a single, self-similar operating system/architecture/service combination, which means the attackers would have to do the same recon scanning they do now to get a list of these hosts, so why not just stick with what they know works and infect systems on the list in parallel, instead of by some non-deterministic spreading behavior?