More Info on the October 2002 DNS Attacks
MondoMor writes "One of the guys who invented DNS, Paul Mockapetris, has written an article at ZDNet about the October '02 DNS attacks. Quoting the article: "Unlike most DDoS attacks, which fade away gradually, the October strike on the root servers stopped abruptly after about an hour, probably to make it harder for law enforcement to trace." Interesting stuff."
Damn terrorists... (Score:4, Funny)
Re:Damn terrorists... (Score:4, Interesting)
A better description would be anarchists. Anarchy is lawlessness and disorder resulting from governmental failure (in this case, the failure to set up a system where the root servers are genuinely safe, rather than just somewhat protected).
But then, we can't say that, can we? Anarchy is popular here on Slashdot.
Re:Damn terrorists... (Score:2, Insightful)
There is such a thing as good hackers and even good crackers but a stupid DOS against the root dns servers? How can you defend that?
Re:Wow, you're oblivious (Score:2)
Solution? (Score:4, Funny)
Re:Solution? (Score:2)
Re:Solution? (Score:5, Insightful)
No one has a legitimate need to stream several hundred or several thousand pings per second...
Or at least put a lid on it when someone keeps sending lots of pings for more than a couple of seconds...
Re:Solution? (Score:5, Insightful)
Doing so would require remembering who pinged, and when, for the last few seconds. Under normal conditions that sounds trivial, but pings don't cause any problems under "normal" conditions. In a DDoS, you might have a million machines all pinging. How do you propose to store, look up, and update the last ping time for 100 million pings per second? A quick off-the-cuff calculation shows that *just the storage* for 10 seconds of such recording would take around 8GB: a billion entries at 8 bytes each (32-bit IP plus 32-bit timestamp). That doesn't include the CPU time to find matches (not that bad, since you can use the IP as an array index, but you can almost guarantee a continually invalid CPU cache) or to update the list. And that assumes you *always* dedicate that 8GB to each server running on the machine, since otherwise the search you propose requires adding new pings to a dynamic list, making the lookup time very, very non-trivial.
More importantly, even if you *do* manage such a feat (or even get rid of ping altogether), attackers can still use other services (like, for example, DNS lookups, which I'd like to see a DNS server try to stop supporting).
Actually, it surprises me that no DDoS clients use SSH yet... Although not every machine (i.e., Windows) runs an attackable server, a well-planned attack could suck up significant bandwidth, memory, *and* CPU power, all in one tidy packet.
Re:Solution? (Score:3, Insightful)
You don't need to keep track of every ping. Keep track of each IP and the number of pings received. Flush the data periodically to expire old entries.
Length of attack becomes irrelevant, as does the exact ping rate. (as far as storage goes anyway)
So 1 million * 12-byte record (4-IP, 4-last ping time, 4-count) = 12MB.
The CPU time required to check would probably still make this infeasible.
Re:Solution? (Score:2)
It certainly is infeasible. There is a simpler way to make this work: at the edges (and probably adjacent routers too), set a rate limit on ICMP. No tracking of IPs, just counting traffic and dropping the excess. As a bonus, the software to do this is already deployed.
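For what it's worth, the stock netfilter "limit" match on a Linux edge box can already express this kind of blanket cap; a minimal sketch, with the 10/s budget and burst size being made-up numbers:

# Cap forwarded echo requests at 10/s (burst 20) and drop the excess.
iptables -A FORWARD -p icmp --icmp-type echo-request -m limit --limit 10/s --limit-burst 20 -j ACCEPT
iptables -A FORWARD -p icmp --icmp-type echo-request -j DROP

Legitimate pings sail through untouched; a flood just gets clipped at the configured rate, and the box never has to remember anything per-source.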
brilliant! is this what the article suggests? (Score:2)
Re:Solution? (Score:2)
Yes, you *would* need to look into the packet to see if it's a ping, and keep a table of pings per second per IP.
But you could use a simple counter. You wouldn't need to actually store every packet.
Look at the packet, "oh, it's a ping", check if the IP is in the list, increase the counter, and if the counter shows an abnormal number of pings in a short period, don't forward it towards its destination.
Then you'd go through the table at regular intervals and remove IPs whose counters haven't increased since the last check...
The crucial part here is CPU power.
You need to be able to look into packets without slowing down the routing.
But many of today's more powerful routers are already capable of looking into packets at line speed.
Especially if you do it close to the source.
You could settle for doing it at the ISP access routers, the ones closest to the subscriber.
Those routers rarely need to deal with a multitude of 1 or 10Gbit lines, or at least most are capped at the bandwidth the customer subscribes to.
That way you wouldn't need to burden the more central routers with this.
Other DoS attacks might be harder to fight though...
A really busy web server configured to reverse-DNS-lookup every connection might actually produce a frightening number of DNS lookups per second. =/
But it should be possible to recognise abnormal traffic for DNS, SSH and other protocols too...
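As it happens, netfilter's "hashlimit" match implements pretty much exactly this counter-table-with-expiry, in kernel space; a sketch, assuming a kernel built with the match, and with every number invented for illustration:

# Per-source-IP echo-request budget: 5/s with a burst of 10. The kernel
# keeps the per-IP table itself and expires idle entries after 10000 ms,
# so no separate pass over the table is needed.
iptables -A FORWARD -p icmp --icmp-type echo-request \
    -m hashlimit --hashlimit 5/s --hashlimit-burst 10 \
    --hashlimit-mode srcip --hashlimit-name ping-per-ip \
    --hashlimit-htable-expire 10000 -j ACCEPT
iptables -A FORWARD -p icmp --icmp-type echo-request -j DROP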
Re:Solution? (Score:2, Insightful)
Secondly, assuming all DDoS attacks are just simple pings is very short-sighted. A much more effective DDoS is to spoof packets from IP addresses that aren't being routed on the Internet; when these reach the routers that connect to the name servers, then depending on their configs they can end up flooding their IP routing caches with useless entries, leading to the routers going down, leading to the name servers being down.
Re:Solution? (Score:2)
You probably send your 10,000 pings at a more moderate speed.
Otherwise your test would end in a few milliseconds.
What I'm talking about is, of course, to stop routing pings when they exceed a certain pings/sec rate for a period of time.
No one *needs* to send 100Mbit/sec of pings for several hours.
And I don't assume all DDoSes are pings. But some are. And stopping some is better than stopping none.
What is really needed, though, is a new set of protocols designed for a world where idiots and bad guys are present.
Almost every aspect of the Internet's structure is designed from the bottom up to be used only by nice people.
Re:Solution? (Score:5, Informative)
Windows has only the most vague concept of a "root" user, and rooting a Windows box takes about 40 lines of code. (Basically, the problem comes from the GUI: any program running with administrator privilege, such as a virus scanner, can spawn additional processes also running as the administrator. Making them do so requires nothing more than getting a handle to a text edit control, pasting in the desired malicious code, and using the address of the edit's buffer as a start-of-execution point. All of which *any* user can do.)
Re:Solution? (Score:2)
Go ahead - root my box.
Oh dear, you can't. What a shame.
BTW: The Shatter attack is easily preventable. Start the antivirus UI process as part of an isolated job with limited UI privs. It'll be in a separate windowing namespace, and the shatter attack will no longer work.
Simon
Re:Solution? (Score:5, Informative)
Tell me, do you run *all* your programs in a private UI context? The antivirus program just makes the "classic" example. How about your usually-hidden-but-always-instantiated NVidia setup panel? Any services you run that have a control panel for configuring them (Tardis, for example)? A local web server? One of those annoying (but often necessary for proper functioning of the related device) printer or scanner control panels?
Aside from not trusting the so-called "privacy" of running something on a private desktop, you don't even need to bother breaking that layer of security. Just look for something else running as administrator... or backup... or power user... or replicator... or even "guest", which by default has an obscenely high level of privilege (relative to a Unix box, which doesn't even usually *have* an account as conceptually insecure as Windows' guest account). If you've managed to configure a Windows box to have *everything* run as a specific, separate user, in its own UI context, I tip my hat to you. I also do not envy the hell of making even trivial config changes to such systems, nor do I envy the frustration your users must feel at trying to use such a system productively. Put simply, Windows lacks the *design-level* security to make it generally usable yet reasonably safe against its own users.
Finally, even if you change the default permissions on "ping" as the parent suggested, under Windows that doesn't do a damned thing to stop a trojan that *includes* its own ping program from working just fine. Remember that, in dealing with a DDoS problem, it doesn't matter if a security expert *can* lock down a given box - It only matters that 99% of the people out there won't bother to fix (or even *know about*) a given exploit allowing raw network access.
Re:Solution? (Score:3, Funny)
Don't say stuff like that on slashdot.
Re:Solution? (Score:2)
If you get your trojan onto 8 million boxes, you'll have 8 million boxes flooding their closest routers with pings to the specified destination address.
If that router can see that "oh, this user is sending me 1Mbit/second of pings" and after half a second decides to stop forwarding those pings, you'll stop those packets from even *starting* their journey.
The attacked box will be DoS'ed for a second instead of for several hours...
This has actually happened to a company I used to work for.
We got complaints that the Internet was slow.
I got a sniffer and checked the traffic on our Internet link, and found out that one of our servers was sending 100Mbit/second of pings at the router.
It turned out that someone had hacked the server and installed a program that, at a given signal, would start DoS'ing a server...
All those packets, or at least the 4Mbit/second that our ISP's router was capped at, got onto the Internet and probably did some damage at the receiving end.
If our ISP's router had done what I suggested, that wouldn't have been the case.
And there is no ID system in sight. Just a small extra check to see whether the packet the router just checked the receiving IP of is a ping or not.
Re:Solution? (Score:3, Funny)
Not only would this directly contradict pr0n's charter of advancing telecommunications technology, but it would also inevitably lead to the banning of pr0n... and nobody wants that.
For the sake of our pr0n, let the terrorists have their ping!
Re:Solution? (Score:2)
Get serious. The ping command is definitely not a tool made to damage other people's computers. And though the article is a little unclear on that issue, it sounds like this attack could not, in fact, have been done using the ping command.
The ping command is used to send legitimate ICMP ECHO REQUEST packets, which according to the standards a computer MUST reply to with an ICMP ECHO REPLY packet.
What the attack did was to produce ICMP ECHO REQUEST packets with forged source addresses, so that all the replies would be sent to the root DNS servers. This could not have been done with the ping command.
You just shouldn't remove useful tools and install firewalls that break the standards just to "improve" your security. Your efforts will be useless either because you are protecting against something that is not a problem, or because you fail to defend against the actual problem, since the attack could have been done in other ways as well.
In fact, flooding is impossible to defend against completely, but a correctly configured system is going to be responsive again just a few seconds after the flooding has stopped.
Re:Solution? (Score:2)
Re:Solution? (Score:3, Interesting)
This is just as should be expected... (Score:5, Interesting)
Re:This is just as should be expected... (Score:5, Interesting)
I'm really curious how "The October attacks showed a greater level of sophistication" than past attacks? As far as I can tell the attacker just had a bunch of cracked boxes with decent pipes to the internet and started a ping -f on all of them.
In other news.... (Score:5, Funny)
Unfortunately, the thieves didn't wait for law enforcement officials to show up, making it much harder to identify them.
Re:In other news.... (Score:5, Funny)
How is it a crime to kill cereal [reference.com]?
Yeah, I guess it's a bit aggressive, but hardly a crime. They come up with all sorts of weight-watching schemes these days, and I suppose cereal killing is just one of the crowd. And just like many other such schemes, this proves the method doesn't work very well, since he suddenly stopped.
Killing Cereal... (Score:2)
Dalnet DDOS Attacks (Score:5, Interesting)
Why don't Dalnet and the FBI (or whoever) get together to solve a mutual problem ?
Dalnet could get some much-needed help, and the FBI could get some much-needed experience into investigating this sort of attack. They would also be dealing with someone (or some people) who could move on to attacking bigger things.
Also if they caught the attackers, they would get some useful publicity, some justification for an increased spend on cyber-deterrence, and the deterrent effect of having the perpetrators suitably punished - as well as putting a genuine menace behind bars.
Re:Dalnet DDOS Attacks (Score:5, Insightful)
M$ is just as much a part of the problem as well. With more and more cable, DSL and other "always on" connectivity available, more and more of these machines are vulnerable.
Scanners out there can easily identify and infect 1000 home users' machines, and these attacks come from them. The actual perpetrator is long gone. All they do is momentarily log in and "fire it off", then they immediately log out, and they are gone.
Tracing IPs back to the attacker is just going to identify the innocent machines or owners who are totally unaware of their activity until they either power down their machines or somehow discover it.
Re:Dalnet DDOS Attacks (Score:2)
But an ISP (or some body such as the FBI) may be able to identify all the packets travelling to an infected machine on its network, and perhaps trace which machine is connecting to it to co-ordinate the attacks - or at least the first machine in a chain.
Or perhaps other means of dealing with the problem could be investigated (routing protocols, or whatever). Also, the ISPs which allow outgoing source IP addresses to be spoofed could be identified. If spoofed source IP addresses become a huge problem to significant parts of the internet, those ISPs could be asked, pressurised (or legislated against) in order to stop this - if technically feasible (sorry, but I'm no networking expert).
OK, people may not think it worth doing just to save a single IRC network, but it's not a problem that can be ignored for ever while it gets worse and worse (due to the reasons you give in your post) and becomes a threat to more and more of the internet.
Re:Dalnet DDOS Attacks (Score:2, Interesting)
It is beyond me why the ISPs would even want one crap packet to come out of their network. It's costing them money. Their upstream connection costs money...
For some interesting numbers, go take a look at MyNetWatchman [mynetwatchman.com]. These dudes even TELL the ISPs that there is something wrong. But most reports just get ignored.
Truth is, most people couldn't care less that their computer is doing something wrong. They just want a bit of email and to surf a bit. Hell, most just want it to stay up long enough, and be a bit faster, considering the 300 programs they are running out of the box.
The only way I have ever been able to explain to a person what it's about is the apartment analogy. A thief goes into an apartment building and rattles every doorknob. He finds one that opens. He then uses that apartment as a base to sneak around and rattle other doorknobs. Most people get very upset when I tell them someone is basically trying to break into their house. The next words out of their mouths are usually "who can I report this to?" All I can tell them is: no one.
Re:Dalnet DDOS Attacks (Score:4, Informative)
He basically got his hands on one of the "zombie" trojans the DDoS'ers use, reverse engineered it to find out how it works (and which IRC servers it talks to to receive its commands), wrote his own to connect to said server and waited until the attackers personally logged in. It really is a good read.
CRIKEY! Script Kiddie Hunter! (Score:3, Funny)
I hadn't read that guy's site in a while because it's too alarmist. But I read the linked GRC article and found roughly 5-15% useful text among all of it. The IRC log was priceless; ^^boss^^ was stupid if he was surprised someone could figure out how to locate and connect to his IRC server. (I'm not necessarily dissing Gibson with that statement, though; he's alarmist but fairly knowledgeable, although he can sound fairly stupid at points too.)
What struck me is how much his articles read like Crocodile Hunter:
CRIKEY!! I've been DDoS'ed by SCRIPT KIDDIES' WIN9x ZOMBIES!! Lucky for me they weren't Win2k or WinXP zombies or I'd be DEAD!!
[Imagine the following text centered, large, bold and in a different color]
etc., etc..
I actually enjoy Crocodile Hunter, though.
Re:Dalnet DDOS Attacks (Score:4, Interesting)
At any time, each server MUST be able to handle a load of
requests for root data which is three times the measured peak of
such requests on the most loaded server in then current normal
conditions. This is usually expressed in requests per second.
This is intended to ensure continued operation of root services
should two thirds of the servers be taken out of operation,
whether by intent, accident, or malice.
With 13 current servers, this means that 8-9 servers can be taken out at one time and have negligible impact on the world's DNS queries, assuming that the outage is at a peak time and the servers are being hit very hard. Practically speaking, the existing root servers are probably built even more toughly, so the remaining 4-5 servers can probably handle shorter outages (such as that mentioned in the article) without significant effort, and even if brought down to 2-3 could probably handle things with some difficulty.
According to root-servers.org, the existing servers are fairly concentrated, with only those in Stockholm, London, and Tokyo not in the United States. Perhaps three more, with one maybe in South Korea, one in Australia, and one in North Africa or the Middle East (Cairo would be ideal to cover both) would be a viable option? I realize that the last is probably going to be questionable for some, given the censorship agendas often in place in the area, but it would help to make further attacks a little more difficult, as well as adding a little prestige and maybe tech investment to the area. Just an idea.
As for Dalnet, why isn't the FBI involved? (I'm not aware of current happenings on the network, as I don't use it.)
Re:Dalnet DDOS Attacks (Score:2)
"How are you going to find a knowledgeable operator for the one in America?
That country consists entirely of spammers who can only write English, and service providers who cannot read your complaint written in Korean.
I doubt that there exists any American provider or organisation where an employee has the required level of understanding of computing and Korean language to support such a system."
Re:Dalnet DDOS Attacks (Score:2)
Well they're both motor vehicles which take you from A to B, powered by an internal combustion engine, travelling on the non-internet super-highway.
"Here's why no one in the FBI cares about DalNET or their DDoS attacks: No one outside of DalNET gives a shit."
Please read the post. The third and fourth paragraphs give a few reasons why it might be useful if they did.
"It's pretty damn offtopic. Yes, it deal with DDoS attacks but in no way is it remotely relevant to DNS root servers."
The title of the article is "More Info on the October 2002 DNS Attacks". Personally I think a comment about another large-scale internet attack, carried out in the same way, is pretty on-topic.
Re:Dalnet DDOS Attacks (Score:2)
No, Seymore's 1977 Accord will take you approximately 3/4(B-A); then it breaks down and you get laughed at.
The title of the article is "More Info on the October 2002 DNS Attacks". Personally I think a comment about another large-scale internet attack, carried out in the same way, is pretty on-topic.
Well, except one is an attack on mission-critical DNS and the other is an attack on a bunch of fat guys sitting in their mothers' basements jacking off to kitty porn.
What outage? (Score:2, Informative)
It would take about a week (Score:2, Informative)
Re:It would take about a week (Score:2)
Doesn't that assume that you're only visiting sites that are already cached on your DNS server?
Re:It would take about a week (Score:3, Informative)
If you run your own DNS, you should cache it.
Re:It would take about a week (Score:2)
Re:It would take about a week (Score:2)
However, I do know that the Win2k and later series OSs from Microsoft do contain what is called "DNS Client". This client has the job of doing DNS caching. (And a bunch of other stuff I think.)
Restarting the thing can be a quick way to do what would otherwise require a reboot.
The Win98/ME/95 series stuff had a client too, but it couldn't be cleared without rebooting. Though I think its timeout was not as long.
So yes there is caching going on, one of the main reasons why my first question to my clients is "when did you last reboot?"
Re:It would take about a week (Score:4, Informative)
Re:It would take about a week (Score:2)
Almost all web browsers have caches. Usually, they work correctly. Sometimes they don't.
Tools->Internet Options->Settings->Check for newer versions of stored pages: Every visit to the page
Re:It would take about a week (Score:2)
Your upstream provider almost certainly has a cache.
His upstream providers likely have caches.
Their upstream providers likely have caches.
Depending on the exact path taken, a name request might be erratic as to whether (and to what) it resolved.
It would probably take a week for killing all the root servers to take down the internet, although some breakage would be noticeable after about 24-36 hours.
Things working off of fixed ip addresses would continue to work.
If intermediate caching DNS servers kept using stale addresses until a fresher valid address was known, a lot of the internet would keep on going indefinitely.
Re:What outage? (Score:2, Interesting)
Long outages would change the whole thing. Imagine if we couldn't read Slashdot for a whole week!
Responsibility of the ISP (Score:5, Insightful)
Well then, isn't it logical to try and rate limit/filter as close to the source as possible then? Of course this shifts responsibility...
If all ISPs were proactive in dealing with customers' machines being used as zombies to launch attacks, then Internet users as a whole would have fewer problems with being the target of an attack.
A few logical steps:
Some ISPs may do this, I don't know, but from the articles I read about DDoS attacks it appears that most don't.
Re:Responsibility of the ISP (Score:3, Insightful)
I know it's possible... I'm sure they wouldn't waste time if someone was uncapping their modem.
Re:Responsibility of the ISP (Score:2)
"To: John Doe
From: ABC Networks
Subject: Your computer has a virus
Dear John Doe, according to our records, at 01/10/2002 modem XYZ was--"
[DELETE]
John Doe: Damned spammers.
You really do have to make the call to make sure it gets fixed. It used to be that most people just couldn't read well enough to understand a virus warning (well, once the Internet wasn't a snobby elitist club anymore, at least). Now there are the spam goggles everyone wears, which filter it out before they have a chance to not understand it.
If you call them, you can do one of two things: Get someone who goes "Oh, OK. I will fix it tonight." (Then you check up on them.) Or, you get someone who goes "Oh my God oh my God what do I do, did I hurt anything this is horrible!" You have to send that person to a shop, but which is worse karma- sending a person to a shop where they're gonna get whacked 150 bucks, or not doing anything about it at all?
Re:Responsibility of the ISP (Score:2)
how about this one:
Re:Responsibility of the ISP (Score:2)
[...] then email the poor sap...
That reminds me of some Nimda hunting I did at work. My intranet web server kept getting hit from within the intranet in a different English speaking country. I reported it to the proper company groups, but it kept on happening. Finally I tried to hack into it using remote MMC management. I don't know why, but it let me in. I was able to copy a text file to the c$ share, start the scheduling service and use the at command to run notepad and display the text file on the desktop. The text file, of course, said something along the lines of "this pc is infected with the Nimda virus; please notify your network administrator or pc tech and unplug it from the network." I did that several times over 3 days. I think it took about 5 days before I finally quit getting hits from it.
(I resisted the urge to try to remotely disinfect it since I didn't know what business function the PC served.)
I can believe people ignoring emails, but people are so paranoid about viruses that if Notepad kept popping messages on their screens I would think they'd go running screaming to their administrator begging him to save their data. Maybe I should've made the note sound sinister instead of helpful and then they'd get help?
That reminds me, I intended to check out why the hell I could administer a PC in a different country and find out if my PCs were as vulnerable. I'll put that on tomorrow's to do list.
Re:Responsibility of the ISP (Score:3, Interesting)
Get in touch with MS about rate-limiting the amount of pings that can be sent. Get them to code into their OS some sort of rate limit for ICMP echo packets, like you described. Also, make ISPs far, FAR more aggressive when dealing with this. Is a computer sending out Code Red/Nimda attacks? Disconnect it, write a letter to the owner, and disconnect them permanently after a few repeats. Same thing for ping flooding. If it happens often (testing network strain over the Internet shouldn't happen often), engage the same procedure as with Code Red/Nimda-infected computers.
Re:Responsibility of the ISP (Score:2)
And it would take about 2 hours before someone compiled and distributed a "raw" ping client for windows.
Egress Filtering (Score:5, Insightful)
Implementation of simple egress filtering rules at border routers or at firewalls (regardless of who owns them) would dramatically decrease the efficacy of DDoS attacks.
If my organization owns the A.B.C network, there is no reason why any packets bearing a source address of anything other than A.B.C.* should be permitted to leave my network.
NAT environments can implement this by dropping packets with source addresses that do not belong to the internal network.
Of course, for this to be effective it would have be used on a broad scale, i.e. around the world...
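On a Linux border box this is two netfilter rules; a minimal sketch, with 192.0.2.0/24 standing in for the hypothetical A.B.C.* block and eth1 for the upstream interface:

# Only packets sourced from our own block may leave; spoofed sources stop here.
iptables -A FORWARD -o eth1 -s 192.0.2.0/24 -j ACCEPT
iptables -A FORWARD -o eth1 -j DROP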
Re:Egress Filtering (Score:3, Informative)
Re:Egress Filtering (Score:2)
The idea is that for each host on the Internet, there is at least one independently administered router in front of it which performs source address validation before forwarding packets further upstream to a transit network (where address validation becomes complicated).
However, it would take quite a long time until you saw any effect, like any other DoS mitigation tactic which does not support incremental deployment.
ICMP Traceback is promising, though. I really hope that it's as useful as it looks.
Re:Egress Filtering (Score:3, Informative)
Actually, there is at least one very good reason. If company A has two Internet connections, through provider A and provider B, and wishes to do load balancing but for one reason or another cannot announce a single subnet through both providers, it can at least do outbound load balancing and change the source address on a per-packet basis, so that traffic for connections initiated locally is evenly distributed across both connections. Obviously any connection that originates from the outside world (i.e. someone on the Internet trying to view this company's website) has to be answered with the same IP that the request originally went to as the source address (or stuff will break(tm)), so this won't work in that situation. But any request that originated on the company's network and goes out to the Internet can have its outbound traffic load-balanced on a per-packet basis over the multiple Internet connections, even if the same block can't be announced through both providers. This, however, requires that some packets have a source address in, for instance, provider A's subnet when they go out through the circuit with provider B.
The other option, which does not require sending packets with one provider's source address through the other provider's circuit, is to balance on a per-connection basis rather than per packet; depending on your traffic, though, this may not work nearly as well.
While the number of people implementing something like this is small, and the benefits of implementing anti-spoof measures are many, for the few people doing something like the above, it sucks. However, there is an answer that will satisfy both causes.
For the few people who do load-balance in the manner mentioned above, a simple ACL allowing only packets with either subnet as the source (line A's block or line B's block), and denying all other sources, will both allow them to load-balance outbound traffic and protect your network (and others), since they can't spoof any address other than their block with the other provider; the ACL will drop it.
For everyone else, you can use the following command on a Cisco with CEF enabled, which drops all traffic that does not have a source address that is routed through the interface the packet was received on:
"ip verify unicast reverse-path"
Re:Egress Filtering (Score:2)
For everyone else, you can use the following command on a Cisco with CEF enabled, which drops all traffic that does not have a source address that is routed through the interface the packet was received on:
"ip verify unicast reverse-path"
The way to turn on reverse-path filtering on a Linux firewall is:
for i in /proc/sys/net/ipv4/conf/*/rp_filter; do
    echo 2 > $i
done
Re:Egress Filtering -- needs more work (Score:3, Insightful)
Of course, unless the zombies were smart enough to know the IP range within the border router, you'd still get a metric buttload of invalid packets at the border router. Some kind of threshold alarm might be a good idea -- but then there's the problem of locating which machine within the border is generating the packets...
In a perfect world, the best solution would be that people didn't let their machines get 0wn3d in the first place, [Insert maniacal laughter]!
Egress filtering is a good thing, but it's not a complete solution. (And it's a good thing that I turned back from the Insufficient-light Side of the Hack many years ago.) Here's an explanation of a reflection attack. [grc.com] (Yes, that "end of the Internet" grc. :^)
Disclaimer! (Score:2, Funny)
I guess that I shouldn't worry, unlike script-kiddie h4x0rs, Slashdot users are intelligent, wise .. , never do stupid things .. , never abuse the system .. oh shit
Re:Responsibility of the ISP (Score:2, Insightful)
It's almost certainly an easier thing for the ISP to do: your implicit assumption that everyone's a BSD user with 30 years of security experience is not that appropriate when describing people who got a PC for Christmas and had to get a friend to show them how to plug the monitor in... and these people need the net just as much as we do, before we get the elitists flaming back in reply to this.
The ISP will typically be spending more time than is healthy measuring people's bandwidth anyway, even if for nothing better than to check they've not got an uncapped modem. So when someone who typically browses a few web pages a minute suddenly starts requesting files at 300 per second, it's pretty easy to see they're either testing a spider, or they got infected.
The credit-card companies seem to manage such pattern-matching, although admittedly that's not real-time.
Conversely, the ISPs will need to be smart enough to realise that if someone's playing RavenShield then there's a good reason for them to be pinging the same computer twice a second, and sending unnatural amounts of data. But then, that's not such a hard problem to solve. Neural networks and all that... (says someone who's never had to program a neural network!)
And arguably, it's more useful than the tecchies spending all their waking hours trying to detect connection-sharing, or rogue linux machines on their network.
How to Protect the DNS (Score:3, Interesting)
Apparently icannwatch [icannwatch.org]'s new year's resolution was to migrate [icannwatch.org] from nuke to slash.
TLD Question (Score:5, Interesting)
I'm not an expert, but as I understand it, DNS attacks are relatively benign, since DNS info is cached all over the place and doesn't change much anyway (this is essentially what the article says). Now, the author seems much more worried about attacks against Top Level Domains, because of reasons related to the nature of the information that TLD servers have, and he suggests a few techniques that they could use. What he doesn't say is what techniques the TLDs are using currently, and how secure they are.
Does anyone out there on /. know?
Re:TLD Question (Score:2)
http://cr.yp.to/djbdns/forgery.html [cr.yp.to]
Hrrrmmm (Score:5, Funny)
Hrrrrmmm. That makes it look deliberate. Hrrrrmmm.
Chief Wiggum's on the case! (Score:2)
Re:Hrrrmmm (Score:2, Insightful)
That would be funny.
IDEA for DNS Survivability (Score:4, Informative)
Why not allow the admin to specify the maximum disk space that the cache can use up, and then only prune records when that (possibly huge) database grows too large? In addition, DNS records should not just arbitrarily expire...
If a record has not reached its "expire" date, the cache is just fine. If a record HAS reached its "expire", it should still remain valid UNTIL the DNS server has been able to get a valid update. That would allow large DNS servers to maintain quite a bit of functionality even if all other DNS servers go down, and would do so while requiring that only the most popular queries are saved on the server (so not everyone has to become a full root DNS server).
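The first half of this already exists in at least one implementation: djbdns' dnscache keeps its cache bounded by an admin-set byte budget and discards old entries only when that budget fills up. A sketch, assuming the usual daemontools service layout and a made-up 100MB budget:

# Give the cache a 100MB budget; dnscache prunes only when it fills up.
echo 100000000 > /service/dnscache/env/CACHESIZE
# The process's memory softlimit has to be raised to cover the cache.
echo 104857600 > /service/dnscache/env/DATALIMIT
svc -t /service/dnscache   # restart to pick up the new limits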
Re:IDEA for DNS Survivability (Score:2, Informative)
Generally there are two ways to keep caches relatively fresh: expire records based on some precondition (such as time), or have the master source send out notifications when the data changes. And DNS can do BOTH.
First, there are three kinds of expiration in DNS, all time-based, with the periods selected by the owner of the domain. The first applies when you attempt to look up a name which doesn't exist; that's called negative caching, and it is typically set to just an hour or two. The next is the refresh time, which indicates when an entry in a cache should be checked to see if it is still current; it is typically about half a day. And finally, the time-to-live is the time after which the cache entry is forcibly thrown away; it is usually set to a couple of weeks or more.
Finally, DNS servers can coordinate notification messages, whereby the primary name server for a domain sends a message to any secondaries whenever the data has changed. This allows dirty cache entries to be flushed out almost immediately. But DNS notifications are usually used only between coordinated DNS servers, and not all the way to your home PC.
It should be noted though that most end users' operating systems do not really perform DNS caching very well if at all...usually it is your ISP that is doing the caching. Windows users are mostly out of luck unless you are running in a server or enterprise configuration. Linux can very easily run a caching nameserver if you install the package. I don't know what the Macs do by default.
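For reference, the periods described above are declared in the zone's SOA record; a hypothetical snippet, with every name and number made up for illustration:

example.com. 86400 IN SOA ns1.example.com. hostmaster.example.com. (
        2002103001 ; serial, bumped on every zone change
        43200      ; refresh: secondaries re-check twice a day
        3600       ; retry: re-attempt a failed refresh after an hour
        1209600    ; expire: secondaries drop the zone after 2 weeks out of contact
        7200 )     ; minimum: the negative-caching TTL (RFC 2308)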
Re:IDEA for DNS Survivability (Score:2)
Finally DNS servers can coordinate notification messages, whereby the primary name server for a domain will send a message to any secondaries whenever the data has changed.
This is only for DNS servers such as BIND that use AXFR to update slaves. Modern DNS servers use better methods, such as rsync over SSH or database replication, which provide real security, instant updates and more efficient network usage.
Re:IDEA for DNS Survivability (Score:2)
Re:IDEA for DNS Survivability (Score:2)
Because I like to actually be able to change my DNS records after they are published.
In addition, DNS records should not just arbitrarily expire...
They don't arbitrarily expire. They expire when the TTL for the record has been reached.
If a record HAS reached its "expire", it should still remain valid UNTIL the DNS server has been able to get a valid update.
That would allow an attacker to blind your DNS resolver to DNS changes by keeping it from contacting a remote DNS server. And if the same attacker can poison your cache, the cache will keep the poisoned records forever.
Re:IDEA for DNS Survivability (Score:2)
There are so many flaws with this logic that I'm not sure where to begin.
First of all, if an attacker has poisoned your cache, that almost always requires admin intervention anyhow.
Second, if an attacker could blind your DNS server to updates, then under the current scheme your DNS would completely fail, instead of one record being invalid. So this is not a capability attackers have; and even if they did, you would be much better off with my modifications than with the current scheme.
Re:IDEA for DNS Survivability (Score:2)
If, by "new", you mean I've been at it for "less than 5 years", you're absolutely right.
Umm, well, yes... I knew that. Maybe I missed something, but I don't believe I said that it was very difficult to poison a DNS cache, so I'm not sure what you are trying to say.
BTW, I've already read several of djb's DNS documents, including the one you referenced.
Re:IDEA for DNS Survivability (Score:2)
First, I was referring to expiring in the current, standard sense.
The owner of the DNS record picking an expiration IS essentially arbitrary... It's certainly arbitrary as far as a caching DNS server is concerned.
Now, if you'd like to post what you think is wrong with the solution, that might be useful.
Re:IDEA for DNS Survivability (Score:2)
Re:IDEA for DNS Survivability (Score:2)
I do not suggest ignoring the expirations, nor simply caching records forever. What I am suggesting is that a record should not be automatically removed when its expiration comes... Instead, if an expired record is requested, the DNS server should TRY to fetch the update from its parent DNS servers; HOWEVER, if it is UNABLE to get that update, it should (instead of returning an error) return the expired record.
Doing that with a host record would be fine ONLY IF you had software that would update each of the records in
Re:IDEA for DNS Survivability (Score:2)
Well Mr. Troll or Idiot, whichever is the case, I know exactly what I am talking about... Those questions were purely rhetorical.
Well, if the question was rhetorical, why even bother asking? Were you just talking to hear yourself speak?
Re:IDEA for DNS Survivability (Score:2)
Go look-up the definition of rhetorical... Then you will know.
Re:IDEA for DNS Survivability (Score:2)
The timing of this is incredibly coincidental. Less than a week ago I was setting up MaraDNS as a new caching DNS server, all the while wondering how difficult it might be to implement this in MaraDNS. In addition, after first posting this message, I was considering sending off a message to the MaraDNS developers' mailing list proposing the idea... I guess I don't need to worry about that now.
I'm aware of it, and I very much like that feature.
Heh... You know, I just realized that my
Anyhow, I was quite glad to get this message, and I certainly hope I'll see this feature in future versions of MaraDNS.
For those who can't be bothered to RTFA... (Score:4, Interesting)
What we can do (Score:3, Insightful)
It's difficult for any reasonable person to know where to begin solving these issues. Traditionally, nailing down machines and networks so they are more secure has been seen as the best approach, but there's little anyone can do about having bandwidth used up by unaccountable "hacked" machines, which is more and more the modus operandi.
Attempts to trace crackers are frequently wastes of time, and stiffer penalties for hackers are compromised by the fact that it's hard to actually catch the hackers in the first place. The situation is made worse by the fact that many of the most destructive hackers do not, themselves, set up anything beyond sets of scripts distributed to and run by suckers - so-called "script kiddies".
Given that hackers usually work by taking over other machines and co-opting them into damaging clusters that can cause all manner of problems, less focus than you'd expect is put on making machines secure in the first place. The responsibility for putting a computer on the Internet is the system administrator's, but frequently system administrators are incompetent, and will happily leave computers hooked up to the Internet without ensuring that they're "good Internet citizens". Bugs are left unpatched, if the system administrators have even taken the trouble to discover whether there are any problems in the first place. This is, in some ways, the equivalent of leaving a loaded gun in the middle of a street - even the most pro-gun advocates would argue that such an act would be dangerously incompetent. But putting a farm of servers on the Internet, and ignoring security issues completely, has become a widespread disease.
There is a solution, and that's to make system administrators responsible for their own computers. An administrator should be assumed, by default, to be responsible for any damage caused by hardware under his or her control unless it can be shown that there's little the admin could reasonably have done to prevent the machine from being hijacked. Clearly, a server unpatched a few days after a bug report, or a compromise that has never been publicly documented, is not the fault of an admin; but leaving a server unpatched years after a compromise has been documented and patches have been available certainly is. Unlike hackers, it is easy to discover who is responsible for a compromised computer system. So issues of accountability are not a problem here.
Couple this with suitably harsh punishments, and not only will system administrators think twice before, say, leaving IIS 4 out in the wild vulnerable to NIMDA, but hackers too - for the same reasons they avoid attacking hospital systems, etc. - will think twice about compromising someone else's system. Fines for first offenses and very minor breaches can be followed by bigger deterrents. If you were going to release a DoS attack into the wild, but knew that the result would be that many, many system administrators would be physically castrated because of your actions, would you still do it?
Of course not. But even if you were, the fact that someone has been willing to allow their system to be used to close the DNS system, or take Yahoo offline, ought to be reason enough to be willing to consider such drastic remedies. Castration may sound harsh, but compared to modern American prison conditions it's a relatively minor penalty for the system administrator to pay, and will merely result in discomfort combined with removal from the gene pool. At the same time, such an experience will ensure that they take better care of their systems in future, without taking someone who might have skills critical to their employer's well-being out of the job market.
The assumption has always been made that incompetent system administrators deserve no blame when their systems are hijacked and used for evil. This assumption has to change, and we must be willing to force this epidemic of bad administration to be resolved. Only by securing the systems of the Internet can we achieve a secure Internet. Only by making the consequences of hacking real and brutal can we create an adequate response to the notion that hacking, per se, is not wrong and causes no damage.
This quagmire of considering system administrators the innocents of computer security, when they are themselves the most responsible for its problems and holes, will not disappear by itself. Unless people are prepared to actually act, not just talk about it on Slashdot, nothing will ever get done. Apathy is not an option.
You can help by getting off your rear and writing to your congressman [house.gov] or senator [senate.gov]. Write also to Jack Valenti [mpaa.org], the CEO and chair of the MPAA, whose address and telephone number can be found at the About the MPAA page [mpaa.org]. Write too to Bill Gates [mailto], Chief of Technologies and thus in overall charge of security systems built into operating systems like Windows NT, at Microsoft. Tell them security is an important issue, and is being compromised by a failure to make those responsible for security accountable for their failures. Tell them that only by real, brutal justice meted out to those who are irresponsible on the Internet will hacking be dealt with. Tell them that you believe it is a reasonable response to hacking to ensure that administrators who fail time and time again are castrated, and that castration is a reasonable punishment that will ensure a minimal impact on an administrator's employer while serving as a huge deterrent against hackers and against incompetence. Tell them that you appreciate the work being done to patch servers by competent administrators, but that if incompetent admins are not held accountable, you will be forced to use less secure and less intelligently designed alternatives. Let them know that SMP may make or break whether you can efficiently deploy OpenBSD on your workstations and servers. Explain the concerns you have about freedom, openness, and choice, and how poor security harms all three. Let your legislators know that this is an issue that affects YOU directly, that YOU vote, and that your vote will be influenced, indeed dependent, on their policies concerning maladministration of computer systems connected to the public Internet.
You CAN make a difference. Don't treat voting as a right, treat it as a duty. Keep informed, keep your political representatives informed on how you feel. And, most importantly of all, vote.
Need more secure desktops (Score:3, Interesting)
It seems to me that this is another call for more secure computers. If the "zombies" were not so easy to create, then such attacks would not be so easy to mount. I think security has gotten better, but there is still great room for improvements. I have some random thoughts that might help.
First, broadband providers should not sell bandwidth without a standard firewall. I do not see such a proposition as expensive: a standalone unit is quite cheap, and the cost to integrate such circuitry into a DSL or cable box should be even less. Broadband providers should stop their resistance to home networking and use bandwidth caps or other mechanisms, if necessary.
Second, the default settings in web browsers must be more strict. Web browsers should not automatically accept third-party cookies or images. Web browsers should not automatically pop up new windows or redirect to third-party sites. Advertising should not be an issue; I know of no legitimate web site that requires third-party domains. For instance, /. uses "images.slashdot.org" and the New York Times uses "graphics7.nytimes.com". Of course, these default settings should be adjustable, with an appropriate message stating that web sites using such techniques are likely to be illegitimate. I know of a few sites that require all images and cookies to be accepted, but I consider those to be fraudulent.
Third, email programs should by default render email as plain text. There should be a button to allow the mail to render HTML and images. There should be a method to remember domains that will always render or never render. Again, third-party domains should not render automatically. In addition, companies need to stop promoting HTML- and image-based email. Apple is particularly guilty of this: the emails they send tend to be illegible without images.
Fourth, root must be the responsibility of the user, or any third party with that access must have full liability for a hack. This should be basic common sense, but apparently it is not. MS wants access to the root of all Windows machines, but I do not see MS saying they will accept all responsibility for damage. Likewise, the RIAA wants access to everyone's root, but again, are they going to pay for the time it takes to reinstall an OS? I think not. With privilege comes responsibility. Without responsibility, all you have are children playing with matches.
Re:Need more secure desktops (Score:3, Insightful)
Nice idea, but what about the ad-supported sites that use agencies to get advertising, rather than selling ad space directly to the advertiser? Then it makes perfect sense for www.smallsite.com to have an image on it from images.adagency.com.
I agree entirely that HTML email should be banished from the face of the net, and third-party cookies serve little or no purpose.
Question: (Score:4, Interesting)
Whose laws are being enforced, and upon whom?
DDoS attacks and IPv6 (Score:3, Insightful)
I was wondering - does IPv6 solve this problem (using some sort of digital signatures or another ingenious method), or will sites still be vulnerable to script kiddies?
Re:DDoS attacks and IPv6 (Score:3, Insightful)
IPv6 can, though, provide a very secure layer (IPsec), but it comes at an expense. It is not something that you would want to use for DNS queries, where the name of the game is speed and the number of hosts involved can be in the thousands or even millions.
But for the less voluminous DNS messages, such as the zone transfers which occur between mirrors, authenticity is much more of a concern. IPsec could be very useful there, but it is probably unnecessary, as DNS already has its own security protocol built into it (DNSSEC).
In general though IPv6 does provide many benefits over IPv4 and in some ways does provide many new tools to address the DDoS and script kiddies; but like any single technology it is not a super pill that makes all the ills go away.
Re:DDoS attacks and IPv6 (Score:2)
This will unfortunately remain a problem for the same reason it'll remain a problem with email - unless all possible nodes that traffic can be routed through are known and trusted, you have to take much of your routing information on faith.
End users don't need root or TLD servers (Score:4, Insightful)
End users don't need root or TLD servers; they just need to have DNS queries answered. That's why they are normally configured to query the ISP or corporate DNS servers, which in turn do the recursive query to root, TLD, and remote DNS servers. Given that, consider the possibility of the ISP or corporate data center intercepting any queries done directly (as if the end user were running a recursive DNS server instead of a basic resolver) and handling them through a local cache (within the ISP or corporate data center). It won't break normal use. It won't break even if someone is running their own DNS (although they will get a cached response instead of an authoritative one). And it will prevent a coordinated attack load from the network that does this.
They talk about root and TLD servers located at major points where lots of ISPs meet, which poses a potential risk of a lot of bandwidth that can hit a DNS server. So my first thought was: why not have multiple separate servers with the same IP address, each serving part of the bandwidth, much like load balancing? And then you don't even have to have them at the exchange point; they can be in the ISP data center. They could be run as mimic authoritative servers if getting the zone data is possible, or just intercept and cache.
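On a Linux-based access router, the interception half of that could be a single NAT rule pair; a sketch, where eth0 is the hypothetical customer-facing interface and 192.0.2.53 a hypothetical local cache:

# Transparently steer all customer port-53 traffic into the local cache.
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j DNAT --to-destination 192.0.2.53
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 53 -j DNAT --to-destination 192.0.2.53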
Re:End users don't need root or TLD servers (Score:3, Interesting)
Wrong. I run my own local DNS resolver, dnscache [cr.yp.to]. I don't trust my ISP to manage a DNS resolver properly. What if they are running a version of BIND vulnerable to poison [cr.yp.to] or other issues [cr.yp.to]? What if I am testing DNS resolution and need to flush the cache? (I do this routinely.) They also don't need to see every DNS query I make. If they want to sniff and parse packets, fine, but no need to make it any easier on them.
It won't break even if someone is running their own DNS (although they will get a cached response instead of an authoritative one).
That would be possible only if they were in fact intercepting every single DNS packet and rewriting it. It would make it impossible for me to perform diagnostic queries to DNS servers. And unless they were doing some very complex packet rewriting, it would break if an authoritative server was providing different information depending on the IP address that sent the query.
If you can't even get ISPs to perform egress filtering, why would they do something as stupid and broken as this? Egress filtering would do much more to stop these types of attacks.
Besides, how does this stop me if I am the ISP? There are plenty of vulnerable machines out there on much better connections than dialup or broadband.
Re:End users don't need root or TLD servers (Score:2)
What egress filtering? The kind that blocks DNS queries sent to the root or TLD servers with a source address of the actual machine doing the querying, while under control of a virus or trojan that has infected a million machines? Sure egress filtering will stop a few bad actors who are forging source addresses, such as bouncing attacks off of broadcast responders. And egress filtering is not easy to do on large high traffic routers where there are a few hundred prefixes involved, belonging to the ISP and multitudes of their customers. You think an access list that big isn't going to bring a router to its knees?
Rate limiting is worthless... (Score:3, Insightful)
..if the flood is randomly generated queries from thousands of compromised hosts. There would be no way to separate flood traffic from legit traffic. A worm could do this, or a teenager with a lot of time on their hands.
It's easier for peons to get together a smurf list to attack the roots, but a nice set of compromised hosts issuing bogus spoofed queries would be just devastating.
The solution is not more root servers. Attackers get compromised hosts for free; root servers must be paid for. The solution is to make some kind of massively distributed root server system.
Was that a weapons test? (Score:2)
Now, that could be an actual government, military operation [including our own], as part of a general preparedness effort for war: when you strike, you use a combination of surprise attacks to make your main attack more effective.
Or it could be terrorists, running a weapons test in the same way.
Or it could be some grad student, testing out a theory of his. It just doesn't sound like a normal cracker.
Oh really? (Score:2, Insightful)
Ok, let's pretend such a magical replacement actually exists, and you have it up and running. Then, the skr1pt k1dd1es show up and start a 'trinoo' or 'tribal flood' type DoS that floods your network and slows all your servers down to a crawl. Tell me again how your magical new DNS replacement is going to deal with this situation better than the old one?
Re:Why we need to abandon DNS (Score:2, Informative)
If things were as bad as you seem to think they are, the whole Internet system would have crumbled to rubble long ago. In reality, it has scaled amazingly well, and has been unbelievably robust.
Perhaps you should go purchase a clue, you obviously don't have one of your own.
Re:Why we need to abandon DNS (Score:2)
I have single-handedly written a working recursive DNS server without getting paid for my work. There is a reason why there are only three [cr.yp.to] of [t-online.de] us [maradns.org] in the entire world; DNS is that bad. Actually, it is a good deal worse than you can imagine.
Let me put it this way. Writing a DNS client (or a non-recursive DNS server) is sort of like Highlander I [imdb.com]. Entertaining, really. You think to yourself, "Hey! That was easy! A recursive server can't be too bad!"
Well, writing a working recursive DNS server is like watching Highlander II [imdb.com]. Suddenly, just as Highlander II changes your outlook on the entire Highlander franchise, writing a recursive DNS server changes your outlook on the entire DNS protocol.
But, hey, don't take my word for it. Dan, one of the other three of us, feels the same way. Thomas, the last of us, has made no statements either for or against DNS. If we were to review recursive DNS the same way Rotten Tomatoes [rottentomatoes.com] reviews movies, DNS would get a 0%; possibly a 33% if Thomas secretly loves DNS and hasn't told anyone. By any standard, that makes for a bomb that should have tanked at the box office.
Alas, it didn't. And so we are stuck with a horrible mess of a protocol today.
- Sam
Re:Why we need to abandon DNS (Score:3)
The question is: who is going to develop such a protocol? I have heard a lot of mumbling about a DNS replacement; I have seen little actual action toward making such a replacement. If such a protocol gets developed, I most assuredly will be one of the first to implement it.
What real solutions do people have to the fragile root servers issue (these days, the fragile .com servers issue)?
- Sam
Re:Why we need to abandon DNS (Score:2)
You should try assembler once!
Oh, and while you're at it, please write a replacement for it.