Comment Re:BCP38 (Score 1) 312

Even if the malware were BCP38-aware, there would still be a problem. By the time the malware is in a position to decide whether or not it CAN participate optimally, the machine has already been owned. There is no disadvantage to the malware in deciding to participate sub-optimally, because the rooted machine cost the malware's maker nothing to gain and nothing to operate. Why would you opt not to use this free contributor to your cause just because it's not an optimal contributor? You wouldn't.

Comment Re:BCP38 (Score 1) 312

Absolutely. With managed CPE, filtering should happen first at the CPE, and in all circumstances it should happen at the ingress stage of the first customer-facing edge router under the ISP's control. Unfortunately, there are plenty of circumstances in which an ISP (even a small one) legitimately needs to inject traffic into its upstream from source IPs that aren't directly the end-user ISP's (a BGP multi-homed end-customer of multiple smaller ISPs, or a customer of both a smaller ISP and a large ISP). There are also cases of other ISPs transiting yours, etc., that may add new source IPs occasionally. My issue is that a policy under which a backbone provider will only peer with an ISP that certifies it implements BCP38 will be useless, as there's no practical, automated way to determine whether that downstream ISP lied about it (in the face of so many legitimate cases of unexpected origin-IP traffic).
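As a loose illustration, the per-port ingress check amounts to nothing more than "is this source address inside a prefix I expect from this customer-facing link?" A minimal sketch, with the prefix list entirely made up for illustration:

```python
import ipaddress

# Prefixes legitimately expected from this customer-facing port, including
# a second block for a multi-homed downstream. Values are illustrative only.
CUSTOMER_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def permit_ingress(src_ip: str) -> bool:
    """BCP38-style check: accept a packet only if its source address
    falls inside a prefix assigned to (or routed for) this customer."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(permit_ingress("203.0.113.10"))  # True: expected customer source
print(permit_ingress("192.0.2.99"))    # False: spoofed / unexpected source
```

The hard part, as noted above, isn't this check; it's knowing what belongs in `CUSTOMER_PREFIXES` when the customer is multi-homed or transiting other networks.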

Comment Re:Seem it would be easy to identify on the ISP si (Score 1) 312

So when you slow it down, you slow down the good packets along with the bad packets. You (the generic upstream ISP) can't tell the difference; you only know that target X.X.X.X says we're sending too much to it and we need to throttle it down. So we throttle down all that we are sending to target X.X.X.X, and all of our customers, downstreams, and those transiting through us to destination X.X.X.X suddenly lose the ability to use the target service effectively. Not helpful.
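A toy model of why blind throttling fails: if the upstream drops a fixed fraction of everything headed to the target, legitimate packets are dropped at exactly the same rate as attack packets. A sketch, with the traffic mix and numbers invented for illustration (the "kind" label exists only in the simulation; a real router never sees it):

```python
import random

random.seed(1)

# Mixed stream of packets heading toward the victim. Roughly 90% attack
# traffic, 10% legitimate; the upstream cannot distinguish them.
packets = [
    {"dst": "192.0.2.1", "kind": "attack" if random.random() < 0.9 else "legit"}
    for _ in range(1000)
]

def throttle(stream, dst, keep_ratio):
    """Drop a fixed fraction of ALL packets toward dst -- the router has
    no way to prefer legitimate packets over attack packets."""
    return [p for p in stream if p["dst"] != dst or random.random() < keep_ratio]

survivors = throttle(packets, "192.0.2.1", keep_ratio=0.05)
legit_in = sum(p["kind"] == "legit" for p in packets)
legit_out = sum(p["kind"] == "legit" for p in survivors)
print(f"legit packets before: {legit_in}, after throttling: {legit_out}")
```

Whatever fraction survives, it is the same fraction for good and bad traffic alike, which is the poster's point.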

Comment Re:Two things (Score 1) 312

Honestly, I agree. The penalties exacted for actually being the party behind a massive DDoS (when it can be proven objectively and conclusively) are not currently nearly severe enough.

Whether or not it's any part of the attacking party's intent, DDoSes on a broad scale almost certainly do KILL PEOPLE.

Were there really no middle-aged Xbox Live or PSN users, with already-too-high unmanaged blood pressure, who suffered a massive (fatal) stroke over the frustrations they experienced with their technology products during the Christmas holidays?

Of course, it IS those people's fault as well, for not managing their blood pressure. I'm betting, however, that more than once in DDoS history an impacted party's life has been cut short (in the straw-that-broke-the-camel's-back sense) by frustrations arising from a DDoS.

Still, how many depressed kids, whose Twitter contacts are their social safety net, had a bad day in the midst of a Twitter outage and committed a suicide that might have been avoided if the service had worked? We can't really know.

Comment Re:Here's One Idea: (Score 2) 312

It's actually a pretty good idea.

Some proprietary (ISP specific) implementations of similar mechanisms actually exist.

There are numerous ways that you (as the ISP) can expose, to a downstream client with above-average network-engineering capabilities, a mechanism by which that client can edit an egress filter rule-set applied to IP traffic headed toward that same client.

I have had such an arrangement with ISPs whereby I could insert a groomed config snippet into my providers' edge routers, on the links facing my network, via a simple authenticated HTTP call.
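Something in the same spirit could look like the following sketch. The endpoint URL, auth scheme, and rule syntax are all assumptions for illustration, since the real mechanisms were ISP-specific:

```python
import urllib.request

# Hypothetical endpoint and token -- the real API was ISP-specific.
FILTER_API = "https://edge.example-isp.net/filters"
TOKEN = "secret-token"

def build_drop_rule(src_prefix: str, dst_prefix: str) -> str:
    """Render a groomed config snippet dropping traffic from an attacking
    source prefix before it crosses the link toward our network."""
    return f"deny ip {src_prefix} {dst_prefix}\n"

def push_rule(rule: str) -> None:
    """POST the snippet to the provider's edge router via a simple
    authenticated HTTP call (endpoint and auth scheme are assumptions)."""
    req = urllib.request.Request(
        FILTER_API,
        data=rule.encode(),
        headers={"Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on failure

# Build a rule dropping an attacking /24 before it reaches our block.
snippet = build_drop_rule("198.51.100.0/24", "203.0.113.0/24")
print(snippet)
```

`push_rule` is shown but not invoked here; in practice you would also want the provider side to validate that the rule only touches traffic destined for your own prefixes.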

This is actually a useful technique as long as you're not the target of a really massive attack.

There have to be limits, or you run out of router/switch filtering resources or CPU resources (depending on the implementation). If we ignore or resolve those limitations, this works against attacks that are significant but not truly massive. Where it fails is the point where the traffic trying to reach you is no longer just congesting the links that carry traffic from your ISP to you, but is so massive that it congests your ISP's upstreams' links into your ISP. It's impractical to have the core backbone maintain these filters, as their size would create a new kind of scale limit in the network. Thus, you're now denied service by way of congestion on your ISP's upstream links rather than on your ISP's links to you.

Comment Re:Much like MTU handling (Score 2) 312

Indeed something along that line is what I think the Internet protocol needs. While IP is freely packet-switched and may appear stateless when you glance in the specs, TCP/IP routers and hosts are actually session-based internally and the number of concurrent sessions is limited.

I feel like this is a trap.

You have a creepily low user ID. So much so that you were probably around for the beginnings of the IP network as a mass-market communications mechanism.

However, I would suggest that your contention that TCP/IP routers (generically speaking) are session-based is incorrect. In particular, it is incorrect with respect to the vast majority of the core Internet routing and Layer 3 switching infrastructure employed by ISPs and carriers. To achieve the massive traffic scale these devices handle, they are mostly stateless forwarders, unconcerned with the higher-level protocols above IP and unconcerned with maintaining session/state information on the traffic flows through the router/switch. This allows the hardware's specialized ASICs to forward packets without retaining any history of "sessions" or spending precious CPU time matching each packet to a session.
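A toy model of what such a stateless forwarder does: a longest-prefix match on each packet's destination address, looked up independently, with no flow table anywhere. Table contents are invented for illustration:

```python
import ipaddress

# A toy forwarding table: prefix -> outgoing interface. A hardware router
# holds the equivalent in TCAM/ASIC memory; crucially, there is no per-flow
# or per-session state, only this destination-keyed table.
FIB = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "eth0",  # default route
}

def forward(dst_ip: str) -> str:
    """Longest-prefix match on the destination alone. Every packet is
    looked up independently -- no TCP session tracking whatsoever."""
    addr = ipaddress.ip_address(dst_ip)
    matches = [net for net in FIB if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return FIB[best]

print(forward("10.1.2.3"))   # eth2 (the more specific /16 wins over the /8)
print(forward("10.9.9.9"))   # eth1
print(forward("192.0.2.5"))  # eth0 (default route)
```

Stateful session tracking is what firewalls and NAT boxes do at the edge; the core gets its speed precisely by not doing it.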

Comment Re:BCP38 (Score 1) 312

It's actually really difficult to tell what is and isn't a proper source IP at the top-level backbone layer. Remember that most of their customers are BGP multi-homed. And while routing-registry databases and RPKI are great, those solutions still have limitations. The key problem is that it is virtually impossible to determine, on an automated basis, whether a downstream ISP of any significant size is reasonably implementing BCP38. Thus, when you make it a "you must certify that you do this in order to connect to us" requirement, everyone just checks the box.
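For what it's worth, RPKI origin validation answers only "may this AS originate this prefix?", not "is this packet's source address legitimate?", which is one of the limitations alluded to above. A rough sketch of the route-origin validation logic, with the ROA data invented for illustration:

```python
import ipaddress

# Toy ROA set: (covered prefix, max length, authorized origin AS).
# Values are illustrative only.
ROAS = [
    (ipaddress.ip_network("203.0.113.0/24"), 24, 64500),
]

def rpki_validate(prefix: str, origin_as: int) -> str:
    """Classify a BGP route announcement against the ROA set:
    'valid', 'invalid' (covered but wrong origin or too specific),
    or 'not-found' (no ROA covers the prefix at all)."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, asn in ROAS:
        if net.subnet_of(roa_net):
            covered = True
            if asn == origin_as and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(rpki_validate("203.0.113.0/24", 64500))   # valid
print(rpki_validate("203.0.113.0/24", 64501))   # invalid: wrong origin AS
print(rpki_validate("198.51.100.0/24", 64500))  # not-found: no ROA coverage
```

Note that this checks route announcements, not individual packets; a spoofed source address inside a validly announced prefix sails right through, which is why it can't substitute for BCP38 at the edge.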

Comment Re:BCP38 (Score 5, Insightful) 312

BCP38 is a fantastic idea. Being in a position in which I frequently serve as a consultant to many indie ISPs' network administrators, I strongly encourage sane enforcement of source-IP data at ingress toward the ISP on customer-facing links, and many of my clients implement this. The trouble is, it doesn't help with many modern DDoSes. It certainly helps with the common traffic-amplification attack types, but many distributed botnet-based attacks now hit the target service directly by impersonating legitimate client implementations, and BCP38 will do nothing for those. The server side will see the many thousands (or more) of IPs that are attacking it, and see them correctly, but there are far too many to manage and they look like legitimate clients. Complicating things, it's likely that many of the infected machines ARE also LEGITIMATE customers/clients. Implementing BCP38 is and will remain a good thing. But as DDoS strategies evolve, and as upload speeds on consumer links increase, this strategy will not be a long-term solution to many categories of DDoS.

Comment Re:Much like MTU handling (Score 1) 312

Send some sort of ICMP message upstream that indicates your maximum capacity for handling traffic. It's a DOS vector in itself, but you could minimize it.

Umm... no. Any such form of congestion notification, if respected by upstream parties, would certainly reduce traffic to you. The obvious problem, however, is that it will reduce NASTY/BOT traffic and LEGITIMATE traffic alike. So you send this ICMP message, and the upstreams that hear it kindly shape what exits their networks toward you. How do they choose, from the available packets they have heading toward you, what to let through and what to delay or drop? If some giant number N of senders wants to swamp you, it matters little that their ISPs, your ISP, or any transit networks between them know they must reduce the traffic toward you. You still have a DDoS, but now it's a self-throttled DDoS, and the upstreams are still dropping or delaying the legitimate traffic you want; the losses simply occur at artificial limits instead of the natural ones. The end result is that less traffic hits you, yet you still go out of service for most of the world (from the end-user experience perspective), because the senders who are politely throttling can't tell which packets are evil and which are sent by the people you want to hear from.
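Back-of-the-envelope arithmetic makes the point: if the throttling is blind to intent, delivery is simply proportional to offered load, so the legitimate share collapses whenever the bots outnumber real clients. The numbers below are invented for illustration:

```python
# Suppose 50,000 bot senders and 500 legitimate clients each offer 1 unit/s,
# and upstreams politely throttle total delivery to your 1,000 unit/s capacity.
bots, legit, capacity = 50_000, 500, 1_000

total = bots + legit
# Throttling cannot see intent, so each sender class gets through in
# proportion to its share of the offered load.
legit_delivered = capacity * legit / total
print(f"legit traffic surviving the polite throttle: "
      f"{legit_delivered:.1f} of {legit} units/s")
```

Roughly 2% of legitimate traffic gets through: the link never saturates, yet the service is still effectively down for almost everyone who matters.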

The Courts

Plaintiff In Tech Hiring Suit Asks Judge To Reject Settlement 215

An anonymous reader writes with news that Michael Devine, one of the plaintiffs in a lawsuit accusing tech firms including Apple and Google of conspiring to keep salaries low, has asked the court to reject a $324 million settlement. "Apple has more than $150 billion in the bank, eclipsing the combined cash reserves of Israel and Britain. Google, Intel and Adobe have a total of about $80 billion stored up for a rainy day. Against such tremendous cash hoards, $324 million is chump change. But that is what the four technology companies have agreed to pay to settle a class action brought by their own employees. The suit, which was on track to go to trial in San Jose, Calif., at the end of May, promised weeks if not months of damaging revelations about how Silicon Valley executives conspired to suppress wages and limit competition. Details of the settlement are still under wraps. 'The class wants a chance at real justice,' he wrote. 'We want our day in court.' He noted that the settlement amount was about one-tenth of the estimated $3 billion lost in compensation by the 64,000 class members. In a successful trial, antitrust laws would triple that sum. 'As an analogy,' Mr. Devine wrote, 'if a shoplifter is caught on video stealing a $400 iPad from the Apple Store, would a fair and just resolution be for the shoplifter to pay Apple $40, keep the iPad, and walk away with no record or admission of wrongdoing? Of course not.' 'If the other class members join me in opposition, I believe we will be successful in convincing the court to give us our due process,' Mr. Devine said in an interview on Sunday. He has set up a website, Tech Worker Justice, and is looking for legal representation. Any challenge will take many months. The other three class representatives could not be reached for comment over the weekend."

Ex-NASA Employees Accuse Agency of 'Extreme Position' On Climate Change 616

grumpyman writes "A coalition of 49 ex-NASA employees, including seven Apollo astronauts, have accused the U.S. space agency of sullying its reputation by taking the 'extreme position' of concluding that carbon dioxide is a major cause of climate change. Is the claim in this letter opinion or fact?"

Comment Re:This is not a novel idea. (Score 1) 143

Agree -- and I totally meant to mention that as well. In fact, Opera Mini is a more on-point example than the Blackberry infrastructure, as with Opera Mini (at least some builds thereof) you similarly had no choice in keeping another server out of your web-browsing experience.
