Comment Re:What a waste of brainpower (Score 2) 81

Here, I must disagree. I'm a software developer and network engineer; my particular software specialty involves interacting intimately with the network layer. (I'm in the VoIP world.) These people are doing good work in relating characteristics of latency to distance and geolocation, and along the way they're learning a great deal about the various factors that influence latency and jitter across real, working networks. While you may not enjoy the particular aims they're pursuing as a commercialization strategy, they have to get paid somehow... Meanwhile, the things they learn about the causes of latency, jitter, and other aspects of service quality in packet networking can be USEFULLY applied by everyone else in improving the network. Just a thought.
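
Just to make the latency-to-distance point concrete, here's a back-of-the-envelope sketch (my own illustration, not a claim about their actual method): because signals in fiber propagate at roughly two-thirds the speed of light, a measured round-trip time puts a hard ceiling on how far away the other host can physically be.

```python
# Rough illustration: a round-trip time bounds the geographic distance to a
# peer, since light in fiber travels at roughly two-thirds of c.

SPEED_OF_LIGHT_KM_PER_MS = 299.792   # km per millisecond in vacuum
FIBER_PROPAGATION_FACTOR = 0.67      # typical velocity factor in fiber

def max_distance_km(rtt_ms: float, processing_ms: float = 0.0) -> float:
    """Upper bound on distance implied by a round-trip time.

    rtt_ms: measured round-trip time in milliseconds.
    processing_ms: rough estimate of non-propagation delay (queuing,
                   serialization, stack overhead) to subtract first.
    """
    one_way_ms = max(rtt_ms - processing_ms, 0.0) / 2.0
    return one_way_ms * SPEED_OF_LIGHT_KM_PER_MS * FIBER_PROPAGATION_FACTOR

if __name__ == "__main__":
    # A 40 ms RTT with ~5 ms of assumed processing delay bounds the peer to
    # within roughly 3500 km, whatever the claimed geolocation says.
    print(f"{max_distance_km(40.0, processing_ms=5.0):.0f} km")
```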

Comment Re:Kick backs? (Score 1) 181

But even more so, it sounds like they signal via some mechanism (a RESTful call, or something more esoteric) that the stream should be sent as the low-bandwidth version even though more bandwidth is likely available. Which makes sense, and would explain why only partners who are set up in their system and have done interop with them are covered under the program.

It sounds like legitimate traffic engineering to me.

Comment Re:Web-scale breach (Score 2) 96

It's an issue as old as this industry.

NoSQL needs a boost in adopters to gain momentum and beat out a lot of entrenched relational databases?

Easy. Just make it easy. Make it mind-numbingly easy to make the database perform useful work: simple package installation, no need to change anything from the defaults, an example project that can connect and do CRUD-y things in minutes. Explain away security and good practice in the documentation no one will read (because the manual, as they see it, is the blog post on "5 minutes to your first working NoSQL web project"). And... win. It's "easy", adoption skyrockets, and it's hard to knock those results. Well, at least if you believe you have no moral obligation to help ensure your software doesn't do harm.

Meanwhile, back in the land of responsibility, an out-of-the-box PostgreSQL instance can't even be connected to from another system. You're forced to confront the security configuration and access considerations. This takes more time and you actually have to read, but more importantly, you have to stop and define what should and shouldn't be allowed. You might have to understand the actual operating environment from a network access and authentication perspective.
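
For a concrete sense of the difference, here's a minimal sketch (the hostname is a placeholder, not anything from the article): a plain TCP probe from another machine tells you whether the database port is even reachable. A stock PostgreSQL install listens only on localhost (listen_addresses, plus pg_hba.conf gating everything else), so the remote probe fails; the zero-config NoSQL installs in these breach stories bind to all interfaces with no auth, so the same probe succeeds.

```python
# Minimal exposure check: can a TCP connection to the database port even be
# established from another machine? For default PostgreSQL the answer is no
# (localhost-only); for the "works in 5 minutes" installs it's often yes.

import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 5432 = PostgreSQL default port, 27017 = MongoDB default port.
    for host, port in [("db.example.internal", 5432), ("db.example.internal", 27017)]:
        state = "reachable" if port_reachable(host, port) else "not reachable"
        print(f"{host}:{port} is {state} from this machine")
```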

This sort of press coverage will help to set things back to right. Sadly, it will do so at the cost of so many innocent(-ish) people's privacy.

Comment Re:Kick backs? (Score 2) 181

So, T-Mobile makes the offer: "Hey providers, if you work with us and send your video in a way that means we can intercept and compress it further, we'll let you be part of this scheme." It's reasonable. It doesn't violate net neutrality (it's available to Amazon and YouTube; they just choose not to use it, be it for political, financial, or technical reasons), and it's probably a good idea.

Do we actually know that they require intercepting and modifying the stream? Or do they simply have a way to signal to their partner: hey, for the moment, assume video requests coming from IP xyz are bandwidth-capped and just go ahead and stream the 400 kbps or less version, as your system already would after probing the need to downgrade; and if the user signals that they want HD despite the cost, post a message to this API in our billing/provisioning system and stream the HD content?
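
Purely as illustration of the kind of signaling I mean (the endpoint, parameters, and flow here are invented; this is not T-Mobile's actual partner interface), it could be as simple as:

```python
# Hypothetical sketch of the speculated signaling: serve the capped rendition
# for zero-rated carrier IPs by default, and tell the carrier's
# billing/provisioning API when a user explicitly opts into metered HD.

import requests

CARRIER_PROVISIONING_API = "https://partner-api.carrier.example/v1/subscriber-policy"

def stream_profile_for(client_ip: str, user_wants_hd: bool, api_token: str) -> str:
    """Pick a rendition for a request arriving from a zero-rated carrier IP."""
    if user_wants_hd:
        # User accepts the data cost: notify the carrier so the session is
        # metered normally, then serve the HD stream.
        requests.post(
            CARRIER_PROVISIONING_API,
            json={"subscriber_ip": client_ip, "zero_rated": False},
            headers={"Authorization": f"Bearer {api_token}"},
            timeout=5,
        )
        return "hd"
    # Default: behave as if the client probed as bandwidth-constrained and
    # serve the 400 kbps-or-less rendition the system already has.
    return "capped_400kbps"
```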

Comment Think End User Simplicity, Not Malice (Score 1) 181

You know, it's probably way simpler than any of the nefarious reasons. I do believe they've automated the process of making the video websites use the low-bandwidth option EVEN when the data bearer, the configuration, and the bandwidth management in the network would allow much faster data at that point in time. Which means: an unsophisticated user can use these popular video sites in an unlimited fashion while still enjoying full bandwidth for their other activities on the device, and, most importantly, the user need not do anything to switch between the free, unlimited, slower internet and the metered, faster internet.

Honestly, normal end users -- the people mobile services are tailored for -- are broadly a pretty simple group who, at a minimum, would be annoyed at having to do anything to switch modes. More realistically, under the mechanism you propose, where there's some "limited speed" mode with unlimited usage, what would really happen is that end users would "forget" or "fail to understand how" to switch to the limited mode before bingeing, resulting in constant calls into customer service trying to get usage fees waived. This sort of thing has to be automatic or it will fail, as far as the masses are concerned.

Comment Re:BCP38 (Score 1) 312

Even if the malware were BCP38-aware, there would still be a problem. By the time the malware is in a position to decide whether or not it CAN participate optimally, the machine has already been owned. There is no disadvantage to the malware in deciding to participate sub-optimally, because the rooted machine cost the malware's maker nothing to gain and nothing to operate anyway. Why would you opt not to use this free contributor to your cause just because it's not an optimal contributor? You wouldn't.

Comment Re:BCP38 (Score 1) 312

Absolutely. In managed CPE, it should happen first at the CPE, and in all circumstances it should happen at the ingress stage of the first customer-facing edge router under the ISP's control. Unfortunately, plenty of circumstances arise in which an ISP (even a small one) legitimately needs to inject traffic into its upstream from source IPs that aren't directly its own. (A BGP multi-homed end customer of multiple smaller ISPs, or a customer of a smaller ISP and a large ISP.) There are also cases of other ISPs transiting yours, etc., that may add new IPs occasionally. My issue is that a policy under which a backbone provider will only peer with an ISP that certifies it implements BCP38 will be useless, as there's no practical, automated way to determine whether that downstream ISP lied about it (in the face of so many legitimate cases of unexpected-origin-IP traffic).
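
For what it's worth, the ingress-filtering decision itself is trivial; here's a minimal sketch assuming the ISP knows which prefixes legitimately live behind each customer port (real deployments do this with router ACLs or uRPF, not Python, and the prefixes below are documentation examples):

```python
# BCP38-style ingress check at the customer-facing edge: accept a packet only
# if its source address falls inside a prefix assigned to (or legitimately
# reachable behind) that customer port.

import ipaddress

CUSTOMER_ALLOWED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),  # space from their other upstream (multi-homed)
]

def accept_at_ingress(source_ip: str) -> bool:
    """Accept the packet only if its source address belongs to this customer."""
    src = ipaddress.ip_address(source_ip)
    return any(src in prefix for prefix in CUSTOMER_ALLOWED_PREFIXES)

print(accept_at_ingress("203.0.113.57"))  # True  - legitimate customer source
print(accept_at_ingress("192.0.2.10"))    # False - spoofed / unexpected source
```

The hard part isn't this check; it's keeping the allowed-prefix list correct for every multi-homed customer, which is exactly why certification alone proves nothing.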

Comment Re:Seem it would be easy to identify on the ISP si (Score 1) 312

So when you slow it down, you slow down the good packets along with the bad packets. You (the generic upstream ISP) can't tell the difference; you only know that target X.X.X.X says we're sending too much to him and we need to throttle it down. So, we throttle down everything we are sending to target X.X.X.X. And all of our customers and downstreams, and those transiting to destination X.X.X.X through us, suddenly lose the ability to effectively use the target service. Not helpful.

Comment Re:Two things (Score 1) 312

Honestly, I agree. The penalties exacted for actually being the party behind a massive DDoS (when it can be proven objectively and conclusively) are not currently nearly severe enough.

Whether or not it's any part of the aggravating party's intent, DDoSes, on a broad scale, almost certainly do KILL PEOPLE.

Were there no middle-aged Xbox Live or PSN users with already too-high, unmanaged blood pressure who suffered a massive (fatal) stroke over the frustrations they experienced with their technology products over the Christmas holidays?

Of course, it IS those people's fault as well, for not managing their blood pressure. I'm betting, however, that more than once in DDoS history an impacted party's life has been cut short (in the straw-that-broke-the-camel's-back sense) by frustrations arising from the DDoS.

Still, how many depressed kids whose Twitter contacts are their social safety net had a bad day and, in the midst of a Twitter outage, committed a suicide that might have been avoided if the service had worked? We can't really know.

Comment Re:Here's One Idea: (Score 2) 312

It's actually a pretty good idea.

Some proprietary (ISP specific) implementations of similar mechanisms actually exist.

There are numerous ways that you (as the ISP) can expose, to your above-average-network-engineering-capabilities-wielding downstream client, a mechanism by which that client can edit an egress filter rule-set on IP traffic headed toward said client.

I have had such an arrangement with ISPs whereby I can insert a groomed config snippet into my providers' edge routers facing the links to my network via a simple authenticated HTTP call.
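
Something along these lines, though the URL, payload, and auth scheme below are invented for illustration; the real interfaces I've used were ISP-specific and proprietary:

```python
# Hypothetical sketch: the downstream customer asks the provider to drop
# traffic from an attacking prefix toward one of the customer's own prefixes,
# before it ever crosses the access links.

import requests

PROVIDER_FILTER_API = "https://edge-filter.provider.example/api/customer-acl"

def block_at_provider_edge(source_prefix: str, dest_prefix: str, token: str) -> None:
    """Ask the provider to deny traffic from source_prefix toward our prefix."""
    resp = requests.post(
        PROVIDER_FILTER_API,
        json={
            "action": "deny",
            "source": source_prefix,      # e.g. "192.0.2.0/24"
            "destination": dest_prefix,   # must be one of our own prefixes
        },
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()

# Usage: block_at_provider_edge("192.0.2.0/24", "203.0.113.0/24", token="...")
```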

This is actually a useful technique as long as you're not the target of a really massive attack.

There have to be limits, or you run out of router/switch filtering resources or CPU resources (depending on the implementation). Setting those limitations aside, this works against a significant but not truly massive attack.

Where it fails is the point where the traffic trying to reach you is no longer just congesting the links that carry traffic from your ISP to you; rather, that traffic is now so massive that it congests your ISP's upstreams' links into your ISP. It's impractical to have the core backbone maintain these filters, as their size would create a new kind of scale limit in the network. Thus, you're now denied service by way of your ISP's upstream links being congested rather than your ISP's links to you.

Comment Re:Much like MTU handling (Score 2) 312

Indeed something along that line is what I think the Internet protocol needs. While IP is freely packet-switched and may appear stateless when you glance in the specs, TCP/IP routers and hosts are actually session-based internally and the number of concurrent sessions is limited.

I feel like this is a trap.

You have a creepily low user id. So much so that you probably were around for the beginnings of IP networking as a mass-market communications mechanism.

However, I would suggest that your contention that TCP/IP routers (generically speaking) are session-based is incorrect. In particular, this is incorrect with respect to the vast majority of the core internet routing and Layer 3 switching infrastructure as employed by ISPs and carriers. In order to achieve the massive traffic scale that these devices handle, they are mostly stateless forwarders, unconcerned with the higher-level protocols above IP and with maintaining session/state information on the traffic flows through the router/switch. This allows the hardware's specialized ASICs to forward packets without having to retain any history of "sessions" or spend precious CPU time matching each packet to a session.
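
To illustrate what "stateless forwarder" means in practice, here's a toy sketch (my own, obviously not how an ASIC pipeline is written): the forwarding decision needs nothing but the destination address and a longest-prefix match, with no memory of flows or sessions.

```python
# Stateless forwarding model: per-packet longest-prefix-match lookup against a
# routing table, keeping no per-flow or per-session state between packets.

import ipaddress

ROUTING_TABLE = {
    ipaddress.ip_network("0.0.0.0/0"):       "upstream-transit",
    ipaddress.ip_network("203.0.113.0/24"):  "customer-port-7",
    ipaddress.ip_network("203.0.113.64/26"): "customer-port-9",
}

def next_hop(dest_ip: str) -> str:
    """Longest-prefix-match lookup; nothing beyond the destination is consulted."""
    dst = ipaddress.ip_address(dest_ip)
    best = max(
        (net for net in ROUTING_TABLE if dst in net),
        key=lambda net: net.prefixlen,
    )
    return ROUTING_TABLE[best]

print(next_hop("203.0.113.70"))  # customer-port-9 (the /26 wins over the /24)
print(next_hop("198.51.100.5"))  # upstream-transit (default route)
```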

Comment Re:BCP38 (Score 1) 312

It's actually really difficult to tell what is and isn't a proper source IP at the top-level backbone layer. Remember that most of their customers are BGP multi-homed. And while routing registry databases and RPKI are great, those solutions still have limitations. The key problem is that it is virtually impossible to determine, on an automated basis, whether a downstream ISP of any significant size is reasonably implementing BCP38. Thus, when you make it a "you must certify that you do this in order to connect to us" requirement, everyone just checks the box.
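
To be clear about what RPKI does and doesn't buy you, here's a rough sketch of origin validation (my own illustration, with made-up ROA data): it checks whether an announced route is covered by a ROA for that origin AS. It says nothing about the source addresses inside individual packets, which is part of why it doesn't substitute for BCP38.

```python
# RPKI-style origin validation: a ROA authorizes an origin AS to announce a
# prefix up to a maximum length. This validates route announcements, not the
# source addresses of packets flowing over those routes.

import ipaddress
from dataclasses import dataclass

@dataclass
class ROA:
    prefix: ipaddress.IPv4Network
    max_length: int
    origin_asn: int

ROAS = [ROA(ipaddress.ip_network("203.0.113.0/24"), max_length=25, origin_asn=64500)]

def origin_valid(announced_prefix: str, origin_asn: int) -> bool:
    """True if some ROA covers this announcement with a matching origin AS."""
    net = ipaddress.ip_network(announced_prefix)
    return any(
        net.subnet_of(roa.prefix)
        and net.prefixlen <= roa.max_length
        and origin_asn == roa.origin_asn
        for roa in ROAS
    )

print(origin_valid("203.0.113.0/25", 64500))  # True  - covered, correct origin
print(origin_valid("203.0.113.0/25", 64999))  # False - wrong origin AS
```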
