Next they will say Chocolate isn't healthy for you either...so I'll have to stop drinking Chocolate beer.
If this works the way uRPF is implemented in the default-free zone, it will only help against spoofed traffic, which with today's botnets means very little since the attack traffic comes from legitimate hosts and addresses. To prevent something like this you would need a system that could collect data from all of the major backbone providers, recognize the attack pattern based on destination addresses (and likely entire subnets), and then distribute filtering back to the ingress nodes while the attack persists. On large networks we often implemented remotely triggered blackhole routing to do something similar, but not exactly the same.
I can't see this problem being solved with anything less than analytics plus a multi-provider way to block the ingress traffic before it reaches the destination. Even then, hosts that have been "botted" will still effectively be denied use of the service, and you have to hope you don't get a lot of false positives. Today's routers, while very powerful, are basically fast cut-through switching devices and are not meant to do deep inspection. Scaling protection at the destination is expensive, and it is more of a blunt weapon than a scalpel; even moving it out to the "cloud" means a lot of expense. For better or worse, that pushes us toward an outcome like the post-9/11 creation of the TSA: added expense and potentially less privacy because of the inspection required.
I think this is more generally known as Unicast Reverse Path Forwarding (uRPF).
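To make the uRPF idea above concrete, here is a minimal Python sketch of a strict reverse-path check. The routing table, interface names, and addresses are all hypothetical; real routers do this in hardware against the FIB.

```python
import ipaddress

# Hypothetical routing table: prefix -> egress interface.
# In strict uRPF, a packet passes only if the best route back to its
# claimed source address points out the interface it arrived on.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "ge-0/0/1",
    ipaddress.ip_network("192.168.1.0/24"): "ge-0/0/2",
    ipaddress.ip_network("0.0.0.0/0"): "ge-0/0/0",  # default route
}

def best_route_interface(src_ip: str) -> str:
    """Longest-prefix-match lookup for the source address."""
    addr = ipaddress.ip_address(src_ip)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

def strict_urpf_pass(src_ip: str, ingress_iface: str) -> bool:
    """True if the packet's source is reachable via its ingress interface."""
    return best_route_interface(src_ip) == ingress_iface

# A packet claiming source 10.1.2.3 that arrives on ge-0/0/1 passes;
# the same source arriving on ge-0/0/2 is dropped as likely spoofed.
```

This also shows why uRPF is useless against unspoofed botnet traffic: a bot using its real address always passes the reverse-path check.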
The fact that ISPs are taking the holier-than-thou stance that they will stop building out and "creating jobs" sickens me. The telecom industry didn't create the prosperous, interconnected world we live in today; innovators and content creators did. Without content and the interconnected devices we have today, there is no need for the infrastructure.
We have a government that continues to protect old business models because it has been bought, to the detriment of us, the consumers. Today's ISPs are "passively throttling" competing content providers by refusing to participate in the network model that got us to where we are today, because they want to milk additional revenue that they frankly are not entitled to. If the ISPs require additional revenue to build out their networks so they can deliver what their customers request from the Internet at large, then they need to pass that cost on to the consumers. The idea that the requesting or initiating party pays is well established in telecommunications, but now ISPs want to disregard the fact that without their customers requesting the data, it would not be sent. The idea of the Internet is that anyone can connect and offer up content without having to become an ISP themselves just to reach every network. Many content providers partner with or build CDNs to make their services better and reduce the impact on ISPs.
There may be hundreds of thousands of jobs at ISPs, but many times more have been created by Internet-enabled innovators and content creators. Those are the jobs that are "too big to fail". We should not be protecting an oligopolistic broadband market and the relatively small number of jobs it represents when 100x as many jobs are possible if we keep the Internet free and open.
Correct; once the packets have been transmitted to you, it's too late to apply QoS. The only thing you can control is your outbound traffic, which as it happens has a direct (although not linear) relationship to the amount of traffic sent back to you. This article outlines it brilliantly and is a must-read for anyone using QoS on most consumer-grade equipment:
That said, classification of traffic is a much harder problem than QoS itself, and it is what really needs to be addressed. This comes from a "network guy" on a 4/1 Mbps DSL connection who works from home and has to compete with his kids playing Xbox and streaming Netflix, so I play with this a lot. At this point, Palo Alto seems to have the best classification engine out there, and combined with their QoS policies it may be the best solution around, but I haven't had a chance to play with it.
(FWIW I too run Tomato Shibby on an Asus N66U)
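The outbound-only shaping described above boils down to a token bucket: cap your upload slightly below the line rate so the modem's buffer never fills. A minimal generic sketch (not the Tomato/Shibby implementation; the rates are hypothetical):

```python
class TokenBucket:
    """Token-bucket shaper: tokens accrue at a fixed rate up to a burst
    cap; a packet may be sent only if enough tokens are available."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0      # refill rate in bytes/second
        self.capacity = burst_bytes     # maximum burst size
        self.tokens = burst_bytes       # start with a full bucket
        self.last = 0.0                 # timestamp of the last check

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # over the shaped rate: queue or drop the packet

# Shape upstream to ~900 kbps, just under a nominal 1 Mbps uplink,
# so queuing happens here (where we can prioritize) and not in the modem.
bucket = TokenBucket(rate_bps=900_000, burst_bytes=15_000)
```

The key design point is that the configured rate must sit a few percent below the true uplink speed; otherwise the bottleneck (and the latency) moves back into the DSL modem's buffer.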
Agreed. I read GOP and immediately thought the worst but what I found was a well thought out article that actually acknowledges the problems and lays out some very interesting reforms that could actually make the system better.
No company in business today wants you to own anything. They want to own it and give you a limited license to use it. Boxee is the latest to jump on the "I need a monthly income stream beyond one-time hardware sales" bandwagon, so they do it by storing your content not locally but in their cloud, where they can charge for it. I was very excited to read about this new box, as I was looking for a DVR solution for regular OTA content that I occasionally want to watch, without a monthly fee or a computer-based solution. I just moved to the country, and I got annoyed reading about how I need to sign up for 2 years to get satellite service and at the end I STILL don't own the equipment; they are leasing it to me. This is for a combination of two reasons: 1) theft of service (having it in multiple locations at once), and 2) to stop the secondary market where people could have contract-free service.
Additionally, as others have mentioned, not everyone has these huge pipes to the Internet... for $70 a month I get a 2 Mbps down / 512 kbps up DSL connection, where I had a $40 15 Mbps down / 5 Mbps up connection in the city...
I believe that even though this is not necessarily out-of-state travel, we have been granted freedom of movement through the Privileges and Immunities Clause of the Constitution via Supreme Court rulings. Beyond that, I have a hard time believing due process wouldn't be required, since the article mentions several times that warrants were requested.
Agreed. Even with SLAAC to get an IP, you wouldn't be able to tell where the device was. Additionally, the waste of putting the electronics in every bulb would be ridiculous.
I'll second Juniper, if not for "commit confirmed" then for "rollback 1"... they have some really nice switches these days with the EX series. This comes from someone who supports both Cisco and Juniper, but the adage that "nobody ever got fired for picking Cisco" holds true as well. I don't think you would go wrong with either.
This is not about the content of the network; this is about capacity and symmetry. Barely anything flows from Comcast to Level 3: everything comes from Level 3 into Comcast. Therefore, just as Akamai did, Level 3 needs to pay for the data carried over Comcast's pipes.
While Comcast is a large provider, what they do is different from what the large backbone providers with peering arrangements do. Because Comcast (like all broadband providers) has a MUCH larger number of endpoints than your typical WAN/backbone provider, it will always have more data pushed to it than it sends. That will never change; it is their business model. Yet they now want to be treated like a transit provider when really they are just a data sink. Comcast claims it's simply about the vast traffic discrepancy, but content delivery is always going to use a lot of bandwidth, and getting around "net neutrality" by claiming it's not about the content but the amount of traffic is just a lousy excuse to disguise the real reason.
Yes, Carrier Grade NAT (CGN) will be used for the time being. You will primarily see it in mobile wireless networks for handsets that don't require a full Internet connection, but other ISPs will eventually be forced to do the same. That said, CGN is required so that we can do dual stack (where you have both an IPv4 and an IPv6 address). This is the most commonly accepted transition technique and really the best available. It works by using the DNS system to determine whether the name you are trying to resolve has an A or an AAAA (referred to as a "quad A") record. Today's IP stacks are set to prefer AAAA records over A records, so if a site has an IPv6 address (an AAAA record), you will reach it over your IPv6 connection. CGN is an IPv4 technology, not an IPv4-to-IPv6 gateway; it just allows us to do NAT44 at a scale most of our current NAT devices can't handle.
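The AAAA-over-A preference above can be sketched without touching the network. The address values below are documentation-range placeholders, and this is a simplified stand-in for the real preference rules (RFC 6724), which the OS applies when sorting getaddrinfo results:

```python
import socket

# Dual-stack resolution returns both A (IPv4) and AAAA (IPv6) answers;
# the stack sorts IPv6 candidates first, so the application connects
# over IPv6 when it is available and falls back to IPv4 otherwise.

def prefer_ipv6(addresses):
    """Order (family, ip) candidate pairs IPv6-first, mimicking the
    default dual-stack address-selection preference."""
    return sorted(addresses, key=lambda a: a[0] != socket.AF_INET6)

candidates = [
    (socket.AF_INET, "192.0.2.10"),     # from the A record
    (socket.AF_INET6, "2001:db8::10"),  # from the AAAA record
]
ordered = prefer_ipv6(candidates)
# ordered[0] is the IPv6 candidate; the IPv4 one remains as fallback.
```

In practice applications just call `socket.getaddrinfo()` and try the results in order, which gives this behavior for free on a dual-stack host.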
Really, there is nothing to see here that hasn't been said over and over again on every "world-ending IPv4 shortage" article on Slashdot. Yes, the threat is real. Does it matter to many people outside of service providers? Not really, because almost everyone else is already doing NAT44 today in one form or another. As usual, the takeaway is that if you are a network engineer responsible for managing a network, you should be taking inventory of your IPv4 space and making plans to implement dual stack in the near future.
I don't know enough about your environment, but hopefully you know that that isn't possible across Layer 3 devices (and when I say VLANs, I assume you are talking about an IP segment and not just a VLAN number). That said, DHCP relay (the "ip helper-address" command on Cisco gear) is, I think, what you are looking for. That way you can have one DHCP server serving numerous VLANs or L3 IP segments. If you have more specific questions, feel free to reach out to me.
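For reference, a minimal Cisco IOS sketch of the relay setup, assuming a hypothetical central DHCP server at 10.0.0.5 and VLAN 20 as one of the client segments:

```
! SVI for VLAN 20; clients broadcast DHCPDISCOVER on this segment
interface Vlan20
 ip address 10.20.0.1 255.255.255.0
 ! Forward DHCP broadcasts as unicast to the central server
 ip helper-address 10.0.0.5
```

Repeat the helper-address line on each SVI; the relay stamps the request with the interface's subnet (giaddr), which is how the single server knows which scope to hand out.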
Router Lab: www.onlinerouterlab.com