
Comment: Re:BCP38 (Score 1) 312

Even if the malware were BCP38-aware, there would still be a problem. By the time the malware is in a position to decide whether or not it CAN participate optimally, the machine has already been owned. There is no disadvantage to the malware in participating sub-optimally, because the rooted machine cost its maker nothing to gain and nothing to operate anyway. Why would you opt not to use a free contributor to your cause just because it's not an optimal contributor? You wouldn't.

Comment: Re:BCP38 (Score 1) 312

Absolutely. With managed CPE, it should happen first at the CPE, and in all circumstances it should happen at the ingress stage of the first customer-facing edge router under the ISP's control. Unfortunately, there are plenty of circumstances in which an ISP (even a small one) legitimately needs to inject traffic into its upstream from source IPs that aren't directly its own. (A BGP multi-homed end-customer of multiple smaller ISPs, or a customer of both a smaller ISP and a large ISP.) There are also cases of other ISPs transiting yours, etc., that may add new IPs occasionally. My issue is that a policy under which a backbone provider will only peer with an ISP that certifies it implements BCP38 will be useless, as there's no practical, automated way to determine whether that downstream ISP lied about it (in the face of so many legit cases of legit unexpected-origin-IP traffic).
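The ingress check described above is conceptually simple; the hard part is the exceptions. A minimal sketch, with entirely made-up prefix assignments and port names (no vendor's real API), of what a customer-edge BCP38 filter decides per packet:

```python
# Hypothetical sketch of BCP38-style ingress filtering at the
# customer-facing edge: forward a packet only if its source address
# falls within a prefix assigned to that customer port. The port names
# and prefixes here are illustrative, not any real configuration.
import ipaddress

# Prefixes assigned to each customer-facing port (toy data).
CUSTOMER_PREFIXES = {
    "port-1": [ipaddress.ip_network("198.51.100.0/24")],
    "port-2": [ipaddress.ip_network("203.0.113.0/25"),
               ipaddress.ip_network("203.0.113.128/25")],
}

def permit_ingress(port: str, src_ip: str) -> bool:
    """Return True if src_ip is a valid source for this customer port."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in CUSTOMER_PREFIXES.get(port, []))
```

A spoofed packet (say, sourced from 8.8.8.8 on port-1) fails the check and is dropped. The multi-homing problem is exactly that this static table is wrong for a customer who legitimately sources addresses assigned by some other provider.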

Comment: Re:Seem it would be easy to identify on the ISP si (Score 1) 312

So when you slow it down, you slow down the good packets along with the bad packets. You (the generic upstream ISP) can't tell the difference, only that target X.X.X.X says we're sending too much to him and we need to throttle it down. So we throttle down all that we are sending to target X.X.X.X. And all of our customers and downstreams, and those transiting to destination X.X.X.X through us, suddenly lose the ability to effectively use the target service. Not helpful.

Comment: Re:Two things (Score 1) 312

Honestly, I agree. The penalties exacted for actually being the party behind a massive DDoS (when it can be proven objectively and conclusively) are not currently nearly severe enough.

Whether or not it's any part of the aggravating party's intent, DDoSes, on a broad scale, almost certainly do KILL PEOPLE.

Were there no middle-aged Xbox Live or PSN users, with already-too-high unmanaged blood pressure, who experienced a massive (fatal) stroke amid the frustrations they experienced with their technology products over the Christmas holidays?

Of course, it IS those people's fault as well, for not managing their blood pressure. I'm betting, however, that more than once in DDoS history an impacted party's life has been cut short (in the straw-that-broke-the-camel's-back sense) by frustrations arising from a DDoS.

Still, how many depressed kids whose Twitter contacts are their social safety net had a bad day and, in the midst of a Twitter outage, committed a suicide that might have been avoided if the service had worked? We can't really know.

Comment: Re:Here's One Idea: (Score 2) 312

It's actually a pretty good idea.

Some proprietary (ISP specific) implementations of similar mechanisms actually exist.

There are numerous ways that you (as the ISP) can expose, to a downstream client with above-average network-engineering capabilities, a mechanism by which that client can edit an egress filter rule-set applied to IP traffic headed toward that same client.

I have had such an arrangement with ISPs, whereby I could insert a groomed config snippet into my providers' edge routers facing the links to my network via a simple authenticated HTTP call.
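A rough sketch of that kind of arrangement, assuming a hypothetical provider endpoint, bearer token, and ACL syntax (none of this is any real provider's API), is: render a small deny rule and POST it to the provider's edge-router update service.

```python
# Hypothetical sketch of a customer-driven filter update: render a
# one-line deny rule and push it to the provider's edge router via an
# authenticated HTTP call. Endpoint, token, and rule syntax are all
# illustrative assumptions.
import urllib.request

def render_drop_rule(attacker_ip: str, target_ip: str) -> str:
    """Render a one-line deny rule for traffic toward the customer."""
    return f"deny ip host {attacker_ip} host {target_ip}"

def push_filter(endpoint: str, token: str, snippet: str) -> urllib.request.Request:
    """Build (and, in real use, send) the authenticated update request."""
    req = urllib.request.Request(
        endpoint,
        data=snippet.encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "text/plain"},
        method="POST",
    )
    # urllib.request.urlopen(req)  # not sent here; no real endpoint exists
    return req
```

The interesting design choice is where the rule is installed: on the provider's side of the link, so the unwanted traffic dies before it ever congests the last mile to you.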

This is actually a useful technique as long as you're not the target of a really massive attack.

There have to be limits, or you run out of router/switch filtering resources or CPU resources (depending on the implementation). If we ignore or resolve those limitations, this works as long as the solution sees significant but not massive adoption. Where it fails is the point where the traffic trying to reach you is no longer just congesting the links that carry traffic from your ISP to you, but is now so massive that it congests your ISP's upstreams' links into your ISP. It's impractical to have the core backbone maintain these filters, as their size would create a new kind of scale limit in the network. Thus, you're now denied service by way of congestion on your ISP's upstream links rather than on your ISP's links to you.

Comment: Re:Much like MTU handling (Score 2) 312

Indeed, something along that line is what I think the Internet protocol needs. While IP is freely packet-switched and may appear stateless when you glance at the specs, TCP/IP routers and hosts are actually session-based internally, and the number of concurrent sessions is limited.

I feel like this is a trap.

You have a creepily low user ID. So much so that you were probably around for the beginnings of the IP network as a mass-market communications mechanism.

However, I would suggest that your contention that TCP/IP routers (generically speaking) are session-based is incorrect. In particular, it is incorrect with respect to the vast majority of the core Internet routing and Layer 3 switching infrastructure as employed by ISPs and carriers. In order to achieve the massive traffic scale these devices handle, they are mostly stateless forwarders, unconcerned with the higher-level protocols above IP and unconcerned with maintaining session/state information on the traffic flows through the router/switch. This allows the hardware's specialized ASICs to forward packets without having to retain any history of "sessions" or spend precious CPU time matching each packet to a session.
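The stateless model above boils down to one lookup per packet. A toy sketch (routing table and interface names invented for illustration) of longest-prefix-match forwarding, which consults no session table at all:

```python
# Illustrative sketch of stateless forwarding: each packet is handled
# by a longest-prefix-match lookup against the routing table, with no
# per-flow memory retained between packets. Routes are toy data.
import ipaddress

ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth2"),  # default route
]

def next_hop(dst_ip: str) -> str:
    """Pick the most specific matching prefix; no session state involved."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(net, iface) for net, iface in ROUTES if dst in net]
    # Longest prefix wins. Nothing about this packet is remembered.
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

A stateful middlebox (a NAT or firewall) would additionally have to create and look up a flow entry per connection, which is exactly the per-session cost the core avoids.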

Comment: Re:BCP38 (Score 1) 312

It's actually really difficult to tell what is and isn't a proper source IP at the top-level backbone layer. Remember that most of their customers are BGP multi-homed. And while routing registry databases and RPKI are great, those solutions still have limitations. The key problem is that it is virtually impossible to determine, on an automated basis, whether a downstream ISP of any significant size is or is not reasonably implementing BCP38. Thus, when you make it a "must certify that you do this in order to connect to us" requirement, everyone just checks the box.

Comment: Re:BCP38 (Score 5, Insightful) 312

BCP38 is a fantastic idea. Being in a position in which I frequently serve as a consultant to many indie ISPs' network administrators, I strongly encourage sane enforcement of source-IP data at ingress toward the ISP on customer-facing links. Many of my clients implement this. The trouble is, it doesn't help with many modern DDoSes. It certainly helps with the common traffic-amplification attack types, but many distributed botnet-based attacks now hit the target service directly by impersonating legitimate client implementations. BCP38 will do nothing for those. The server side will see the many thousands or more of IPs that are attacking it, and see them correctly, but the trouble is, there are way too many to manage and they look like legit clients. Complicating things, it's likely that many of the infected machines ARE also LEGIT customers/clients. Implementing BCP38 is and will remain a good thing. But as DDoS strategies evolve and upload throughput on consumer links increases, this strategy will not be a long-term solution to many categories of DDoS.

Comment: Re:Much like MTU handling (Score 1) 312

Send some sort of ICMP message upstream that indicates your maximum capacity for handling traffic. It's a DOS vector in itself, but you could minimize it.

Umm... no. Any such form of congestion notification, if respected by upstream parties, would certainly reduce traffic to you. The obvious problem, however, is that it will reduce NASTY/BOT traffic as well as LEGITIMATE traffic. So you send this ICMP message, and the upstreams that hear it kindly shape what's exiting their networks toward you; how do they choose, from the available packets they have heading toward you, what to let through and what to delay/drop? If some giant number N of senders wants to swamp you, it matters little that their ISPs, or your ISP, or any transit networks between them know that they must reduce the traffic toward you. You still have a DDoS, but now it's a self-throttled DDoS, and the upstreams are still dropping or delaying legitimate traffic that you want; it now simply happens at artificial limits instead of the natural ones. The end result is that less traffic hits you, and you still go out of service for most of the world (from the end-user-experience perspective), because the senders who are politely throttling can't tell which packets are evil and which packets come from the people you want to hear from.
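The core of that objection can be shown in a few lines. A deliberately naive sketch (pure illustration, not any real shaper) of an upstream honoring a rate limit: it admits packets up to a budget with no idea which senders are bots, so legitimate traffic is dropped in proportion to how badly it is outnumbered.

```python
# Illustrative sketch of blind upstream throttling: forward at most
# `budget` packets and drop the rest, with no way to distinguish attack
# traffic from legitimate traffic. Entirely toy code.
def shape(packets, budget):
    """Forward packets until the budget is spent; drop the remainder."""
    forwarded, dropped = [], []
    for pkt in packets:
        if budget > 0:
            forwarded.append(pkt)
            budget -= 1
        else:
            dropped.append(pkt)
    return forwarded, dropped

# Nine bot packets arrive ahead of one legitimate packet; the shaper
# admits the first five it sees, regardless of who sent them.
traffic = ["bot"] * 9 + ["legit"]
fwd, drop = shape(traffic, budget=5)
```

In this run the legitimate packet is among the dropped, which is the commenter's point: the throttle reduces volume but cannot protect the traffic you actually want.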

Comment: Re:This is not a novel idea. (Score 1) 143

Agree -- and I totally meant to mention that as well. In fact, Opera Mini is a more on-point example than the Blackberry infrastructure, as with Opera Mini (at least some builds thereof) you similarly had no choice in keeping another server out of your web-browsing experience.

Comment: This is not a novel idea. (Score 1) 143

It's worth taking note that this is not a completely novel idea. The BlackBerry web browser, when running over the BlackBerry Internet Service, has also used server-side resources in RIM's infrastructure to slice, dice, and optimize web content. The same is true of email attachments: the RIM infrastructure intercepts and re-optimizes them, which is especially apparent when viewing PDF attachments to email. In the BlackBerry Enterprise Server infrastructure, this functionality actually moves to one's own BES server instance, with end-to-end encryption between the BES server and the handheld. That, at least, gives a corporation the ability to avoid the security exposure of having RIM decipher its pages and content. Perhaps the objection is that for the Kindle Fire we don't have an independently implementable server-side browsing-optimization node?
The Courts

Usenet Group Sues Dutch RIAA 90

eldavojohn writes "With the Pirate Bay trial, it's been easy to overlook similar struggles in other nations. A Dutch Usenet community named FTD is going on the offensive and suing BREIN (Bescherming Rechten Entertainment Industrie Nederland). You may remember BREIN (along with the IFPI & BPI) as the people who raided and cut out the heart of eDonkey. This is turning into a pretty familiar scenario; the FTD group makes software that allows its 450k members to easily find copyrighted content for free on Usenet. The shocking part is that FTD isn't waiting for BREIN to sue them. FTD is refusing to take down their file location reports, and is actually suing BREIN. Why the preemptive attack? FTD wants the courts to show that the act of downloading is not illegal in the Netherlands. (Both articles have the five points in English that FTD wants the courts to settle.) OSNews has a few more details on the story."
