Exactly. It's a problem of over-buffering, not a lack of layering-violation voodoo.
'You can't kid a kidder. Having been a lobbyist, he knows all their tricks,' says Blair Levin.
So this is what we've been reduced to? The disconsolate wish, having turned the regulatory body over to one of the kleptarchs, that he will discover not only his duty to society but also unbiased objectivity, and turn on his own? A ray of hope that thin strains my credulity.
I don't know; if done right it can go really well. See Joseph Kennedy and the initial SEC. He may actually be on the up-and-up; only time will tell.
"Perhaps it could feature democratically elected managers."
Because a popularity contest is the best way to choose people for technical positions.
As if it worked any different in private industry...
The problem is you have to trust that peer to police their network.
It leads to a situation where a single bad-actor network with content can keep it from ever succeeding.
Let me put this as simply as I can. Just because you run BGP with your provider does not make you a peer or a transit network.
You just said default route. That is a leaf node. You're at the end of the world. You are not peering. uRPF is useful when you're a leaf. It is *completely useless as a real peer* in its current form.
Let me illustrate this for you with a completely made up scenario: You are Telia, you peer with Abovenet in 3 places, how do you configure uRPF on those links so that it keeps spoofed packets out and doesn't break all your downstreams?
As one who has maintained an ISP's peering, it is nowhere near as complicated as you make it sound. Enterprise-class hardware (from Cisco, Juniper, etc.) has built-in support for unicast Reverse Path Forwarding (uRPF), which is effectively processing-free -- it checks against the routing table (the FIB, or forwarding information base) -- and very effectively prevents traffic from entering (or leaving) your network that doesn't belong there.
(As an end user, uRPF presents a small problem as the ISP DHCP server is a 10-net host and I null route 10/8.)
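To make the mechanism concrete, here is a toy sketch of the strict-mode uRPF check described above: accept a packet only if the best route back to its source points out the interface the packet arrived on. The FIB contents and interface names are made up for illustration; real routers do this in hardware, not in a loop like this.

```python
import ipaddress

# Hypothetical FIB: prefix -> interface the best route points out of.
FIB = {
    ipaddress.ip_network("203.0.113.0/24"): "eth0",
    ipaddress.ip_network("198.51.100.0/24"): "eth1",
}

def strict_urpf_pass(src_ip: str, ingress_iface: str) -> bool:
    """Accept only if the best route back to the source address
    points out the interface the packet arrived on."""
    src = ipaddress.ip_address(src_ip)
    matches = [n for n in FIB if src in n]
    if not matches:
        return False  # no route back to the source: drop
    best = max(matches, key=lambda n: n.prefixlen)  # longest-prefix match
    return FIB[best] == ingress_iface

print(strict_urpf_pass("203.0.113.7", "eth0"))  # True: route agrees with ingress
print(strict_urpf_pass("203.0.113.7", "eth1"))  # False: spoofed or asymmetric path
```

The asymmetric case in that second call is exactly why strict mode breaks down between real peers, as the thread goes on to argue.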
Yes, obviously, and it can be implemented in two modes: strict, which is useless toward an upstream peer because you don't necessarily have the best path down to them for everything you're hearing, or loose, which is again useless toward an upstream peer because you might as well turn it off entirely.
Dude, clearly you have no idea what you just read.
The problem with this is what happens if you're a transit provider yourself. The logistics of managing that kind of filtering suck. It's why most peers don't.
There needs to be a middle ground between loose and strict, something like "feasible". I don't want to accept packets for any route I have, nor do I want to drop every packet that doesn't head back the same direction. For reasonable filtering at that level, it needs to be "allow any packets that should reasonably come from this peer, per their advertisement, that I can filter on". Sure, you could base it off IRR or something, but it would be much more effective if this were signaled rather than configured.
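The "feasible" middle ground being proposed might look something like the sketch below: build an allow-list from every prefix the peer has advertised (whether or not it is currently the best path), and check source addresses against that, rather than against the single best route. The prefixes and function names are invented for illustration; where the advertisement data comes from (BGP feed, IRR) is exactly the signaling question the comment raises.

```python
import ipaddress

# Prefixes this peer has advertised to us -- feasible paths,
# not necessarily the current best path (hypothetical data).
PEER_ADVERTISED = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/22"),
]

def feasible_path_pass(src_ip: str) -> bool:
    """Accept if the source falls inside anything the peer has
    advertised, regardless of which path is currently best."""
    src = ipaddress.ip_address(src_ip)
    return any(src in prefix for prefix in PEER_ADVERTISED)

print(feasible_path_pass("198.51.100.9"))  # True: inside an advertised prefix
print(feasible_path_pass("203.0.113.5"))   # False: peer never advertised this space
```

Unlike strict mode, this doesn't drop legitimate traffic that took a non-best path; unlike loose mode, it still rejects source addresses the peer has no business originating.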
If you don't mind, can you also pass along how utterly offensive it is that articles critical of the new site are being deleted? It's worse than the beta itself. This is a user-content site, and it has always been an open forum. The very fact that criticism is being squelched makes me want to walk away in disgust.
Your users aren't idiots; stop treating us like we are, or there won't be any left in a month.
Here's your feedback: the site is AWFUL.
The reason I have thus far not taken your survey is that it is HOPELESSLY biased in your favor, and useless.
Scrap the new site, or don't expect me to be here when it's implemented. Social media is fickle, and this site will be a MySpace memory if you continue to ignore the userbase. We can always go tolerate reddit for a while until something else takes its place. I've been coming here for 10 years, but this may end it for me.
Personally, I prefer an air supply *not* at risk of detonation...
Hardly. If it's shown he did, then let's throw him out, but this doesn't indicate anything of the sort. If anything, it shows the NSA is probably more like the J. Edgar of old, and needs to be reined in significantly. I wouldn't be surprised to hear they're spying on the president as well.
"...require companies with more than 250 employees to submit..." Solution: Fire all but 250 of your employees.
If necessary, outsource any remaining work to 1 or more subcontractors, each of which has 250 employees or less.
Or we could just end the gamesmanship that lets companies claim employees aren't employees even when they walk and quack like ducks, just because they've been farmed out solely to avoid having to call them employees.
The slides refer to a feature called "pacing" where it doesn't send packets as fast as it can, but spaces them out. Can someone explain why this would help? If the sliding window is full, and an acknowledgement for N packets comes in, why would it help to send the next N packets with a delay, rather than send them as fast as possible?
I wonder if this is really "buffer bloat compensation" where some router along the line is accepting packets even though it will never send them. By spacing the packets out, you avoid getting into that router's bloated buffer.
From the linked slides:
Does Packet Pacing really reduce Packet Loss?
* Yes!!! Pacing seems to help a lot
* Experiments show notable loss when rapidly sending (unpaced) packets
* Example: Look at 21st rapidly sent packet
- 8-13% lost when unpaced
- 1% lost with pacing
Well, if you're sending UDP, your server is connected to a gig link, the next link between you and the server is 1 Mbit, and the buffer on that device is 25 ms...
Send a burst over 25k and you might as well set packets 18+ on fire, because they sure aren't making it to the destination unless you delay them accordingly.
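A back-of-the-envelope simulation of that made-up scenario (all the constants below are illustrative, not from the slides): a sender bursting into a small bottleneck buffer loses the tail of the burst, while the same packets spaced out at the bottleneck's drain rate all get through.

```python
PKT = 1500            # bytes per packet (typical MTU-sized)
BUFFER = 25_000       # bottleneck buffer, bytes (~25 KB, assumed)
BOTTLENECK = 125_000  # bottleneck drain rate, bytes/sec (1 Mbit/s)

def send(n_packets: int, gap: float) -> int:
    """Simulate n packets arriving `gap` seconds apart into the
    bottleneck buffer, which drains between arrivals. Returns drops."""
    queued = 0.0
    dropped = 0
    for _ in range(n_packets):
        if queued + PKT <= BUFFER:
            queued += PKT
        else:
            dropped += 1  # buffer full: packet is lost
        # Drain the buffer until the next packet arrives.
        queued = max(0.0, queued - BOTTLENECK * gap)
    return dropped

print(send(30, gap=0.0))                # burst: the tail of the burst is dropped
print(send(30, gap=PKT / BOTTLENECK))   # paced at bottleneck rate: 0 drops
```

In the burst case the buffer fills after roughly BUFFER / PKT packets and everything after that is lost, which is the mechanism the "buffer bloat compensation" comment above is gesturing at: pacing keeps the sender out of the bloated queue entirely.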
As I understand it, QUIC is largely about multiplexing - downloading all 35 files needed for a page concurrently. The test was the opposite of what QUIC is designed for
TCP handles one file at a time* -- first download the html, then the logo, then the background, then the first navigation button.
QUIC gets all of those page elements at the same time, over a single connection. The problem with TCP and the strength of QUIC is exactly what TFA chose NOT to test. By using a single 10 MB file, their test is the opposite of web browsing and doesn't test the innovations in QUIC.
* browsers can negotiate multiple TCP connections, which is a slow way to retrieve many small files.
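As a crude illustration of the difference being claimed (RTT, bandwidth, and object sizes are all assumed numbers, and the model ignores handshakes, slow start, and parallel TCP connections): sequential fetches pay a round trip per object, while one multiplexed connection pays a single round trip for everything.

```python
RTT = 0.05             # round-trip time in seconds (assumed)
BANDWIDTH = 1_250_000  # bytes/sec, i.e. 10 Mbit/s (assumed)

def sequential_time(sizes):
    """One object at a time: each pays a full RTT plus its transfer time."""
    return sum(RTT + s / BANDWIDTH for s in sizes)

def multiplexed_time(sizes):
    """All objects share one connection: one RTT, then the link is shared."""
    return RTT + sum(sizes) / BANDWIDTH

page = [4_000] * 35  # 35 small page assets of ~4 KB each (assumed)
print(round(sequential_time(page), 3))   # RTT dominates: ~1.86 s
print(round(multiplexed_time(page), 3))  # ~0.16 s
```

Under this toy model the many-small-objects workload is dominated by round trips, not bandwidth, which is why a single large 10 MB transfer (the article's test) shows almost none of the benefit.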
What the hell are you talking about? You're conflating HTTP with TCP. TCP has no such limitation. TCP doesn't deal in files at all.
Give H-1B holders who blow the whistle on employers violating the law (overworking them, or classifying and paying them for a much lower-skilled job than the one they actually do because the employer just wanted to scare off US workers, etc.) a fast path to a green card, double the pay they would have earned (paid out of fines), and/or the freedom to move to a different employer for the rest of their stay.
I.e., change the incentives so H-1B visa holders rat out misbehaving employers, rather than staying scared to say anything because they lose if they do.
That's like saying cops as a whole won't abuse power as long as we listen to the rare whistleblower every now and then. When the system itself is abusable by design, 4 or 5 honest actors aren't going to fix it; you have to fix the system.