Thanks for those interesting details on how big routers work. Looks like they're optimized for throughput, not latency. Though if, as you say, normal link loadings mean that many packets need to be buffered, pipelining packets isn't going to improve latency.
But I don't think there's any reason x86 servers, which make up many of the nodes at both ends of a path, couldn't be set up to reliably pipeline packets. A 1500-byte packet on a 1 Gbps link is received in 12 µs, a 20-byte header in 160 ns. To reduce latency the processor needs to work out the packet's destination within this interval, which is roughly 30k instructions. This shouldn't have to touch RAM, because I'm sure the routing tables of all but the most connected hubs can fit in the 25-45 MB L3 caches of current Intel server CPUs.
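A quick back-of-envelope sketch of those numbers. The 1 Gbps link speed comes from above; the 2.5 GHz core clock is my own assumption, chosen so the cycle count roughly matches the 30k figure:

```python
LINK_BPS = 1e9     # 1 Gbps link (from the figures above)
CLOCK_HZ = 2.5e9   # assumed core clock, not from the thread

def receive_time_s(nbytes, link_bps=LINK_BPS):
    """Time to clock nbytes in off the wire, in seconds."""
    return nbytes * 8 / link_bps

packet_us = receive_time_s(1500) * 1e6          # full packet: 12 us
header_ns = receive_time_s(20) * 1e9            # header alone: 160 ns
cycle_budget = receive_time_s(1500) * CLOCK_HZ  # cycles to pick a route

print(f"{packet_us:g} us, {header_ns:g} ns, ~{cycle_budget:g} cycles")
# 12 us, 160 ns, ~30000 cycles
```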
The sequence would be: get interrupted when a header arrives; work out the destination link; initiate a DMA to that link containing the new header followed by whatever part of the packet has been received so far; set up an interrupt for when the Tx buffer is nearly exhausted; and on each such interrupt, DMA whatever else has been received by then.
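That sequence could be sketched roughly as follows. It models the interrupts as chunk-by-chunk DMA bursts rather than real handlers, and the 256-byte Tx-refill chunk size is a made-up parameter:

```python
def cut_through_bursts(packet_len, header_len, chunk_len, link_bps):
    """Yield (time_s, bytes_received) at each point a DMA would be kicked
    off: once when the header has arrived, then each time the Tx buffer
    would otherwise run dry."""
    byte_s = 8.0 / link_bps
    received = header_len                 # header-arrival interrupt
    yield received * byte_s, received
    while received < packet_len:          # Tx-buffer-nearly-empty interrupts
        received = min(received + chunk_len, packet_len)
        yield received * byte_s, received

# 1500-byte packet, 20-byte header, 256-byte chunks, 1 Gbps link:
bursts = list(cut_through_bursts(1500, 20, 256, 1e9))
print(len(bursts), bursts[0][1], bursts[-1][1])  # 7 bursts, 20 .. 1500 bytes
```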
Not really. The x86 architecture can't handle interrupts at that rate. That's why modern cards don't interrupt on every packet received.
What, even with so many cores available now that one could be reserved as a network co-processor? If the bottleneck is instead another aspect of the architecture, I'm sure such a trillion-dollar industry can work out how to fix it.
As well as handling interrupts on header receipt, you'd also have to handle interrupts on near exhaustion of transmit buffers, so packets can be pipelined in chunks between ports.
There are problems with this approach. First of all, the higher the bandwidth of the interface, the less you get out of it. The serialization delay of a 1500-byte MTU packet on a 10 Gbps or 100 Gbps PHY is not going to be detected by the most 1337 gamer with her 8,000,000 dpi x-ray laser mouse and super mega ultra polling keyboard.
Transmitting a 1500-byte packet as soon as its 20-byte header has been received cuts the per-hop wait by a factor of 75. Given enough hops, this can be significant.
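The arithmetic behind that factor, plus the cumulative effect along a path; the 10-hop count and 1 Gbps links are illustrative assumptions:

```python
HEADER, PACKET = 20, 1500   # bytes
LINK_BPS = 1e9              # assumed 1 Gbps links
HOPS = 10                   # assumed path length

factor = PACKET / HEADER                            # 75x shorter wait per hop
saved_per_hop_s = (PACKET - HEADER) * 8 / LINK_BPS  # 11.84 us per hop
print(factor, saved_per_hop_s * HOPS * 1e6)         # 75.0 118.4 (us over path)
```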
The bigger issue is that this only works reliably when switching between instantaneously free, equal-capacity links; otherwise, if you don't buffer at all, you're going to suffer some serious packet loss.
As I said in my other reply, if the incoming and outgoing link speeds don't match, can't you buffer only until the rest of the packet can be transmitted continuously? For example, if the outgoing link is twice the speed, starting to transmit when half the packet has been buffered.
Thanks for that info. Looks like network chips for PC/server architectures need to start supporting interrupts on header receipt.
Even if an outgoing link is faster than the incoming one, time can still be saved by starting to transmit the outgoing header as soon as the whole packet can be transmitted continuously. Even sooner if the physical protocol supports gaps, or when wasted bandwidth is less important than reduced latency.
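A sketch of that rule: with a faster outgoing link and no gaps allowed, the earliest you can start transmitting is when the remaining receive time equals the outgoing transmit time. The link speeds here are illustrative assumptions:

```python
def earliest_tx_start_s(packet_bytes, in_bps, out_bps):
    """Earliest time after the first bit arrives at which transmission can
    start without underrunning, assuming no gaps on the outgoing link."""
    t_in = packet_bytes * 8 / in_bps    # time to finish receiving
    t_out = packet_bytes * 8 / out_bps  # time the outgoing burst takes
    return max(t_in - t_out, 0.0)       # slower/equal out-link: start at once

# 1500-byte packet, 1 Gbps in, 2 Gbps out: start once half the packet
# (6 us worth) has been buffered, matching the twice-the-speed example.
print(earliest_tx_start_s(1500, 1e9, 2e9) * 1e6)  # 6.0
```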
Buffering and switching latency is the main source of delay, not signal latency in the copper and fiber. Microwaves would do exactly nothing to improve the switching and buffering latency. If anything they'd make it worse: light in fiber travels much further than line-of-sight microwave before it has to be regenerated with another delay.
There's not much delay for a simple repeater compared to a node that must read the packet so it can re-route it.
My question would be: To what extent do Internet nodes (with empty buffers) start forwarding a packet after reading its IP header, rather than waiting for the whole packet to be received? For example, does Linux support this? Not doing this ubiquitously would really increase end-to-end packet latency, which the cited paper argues is more important than speeding up protocols for getting a lower-latency Internet.
As well as being on the side, there are also several search engine ads placed at the top of the main result list, forcing you to scan past them to read the first organic result. They're also coloured, which draws your eye to them.
But my main thrust was that even if they're out of the way, they're still ads with agendas to relieve you of your cash.
Search engines should provide a "view all relevant ads" link that even those using ad blockers can see. This would allow such people to see what's on offer when they're in a buying mode, without having to disable their blockers, refresh, page through to see a good selection of ads, then re-enable their blockers.
So my question to all of those infuriated by those content producers who would "dare" to try to protect their ads is this: what viable alternative do you suggest?
Here's the alternative to advertising on which I've been working. Advantages: no cost to users, publishers earn money from their content rather than the stuff around it, and it allows monetization of unbiased content (which, unlike ads, tries to tell the whole truth).
Yes, this is an ad. But because it's on-topic and in some way solicited, I think it's acceptable, and shouldn't be equated to something intrusive.
This is why I actually don't have a problem with Google's text ads. You do a search on some terms, and alongside your search results you also get some ads based on those terms. This can be really helpful if you're looking for a product to solve a problem you have, and the ad shows you something which is exactly what you're looking for.
While such search engine ads are less of a problem because they're often relevant to both your current motivation (finding things) and your topic of interest, they're still presenting you with information that's biased both in presentation (pay for prominence) and content (spin), as well as being distracting when you're not in a buying or curious mode. Sure, organic results aren't perfect, with SEO manipulating rankings, and company websites that spin natively. But at least you've got a chance of seeing some independent sources of advice. So I can understand those who block search engine ads.
Thanks for posting those stats. An interesting dip in the 60s (and more no-opinions). Was this the result of the general cultural liberation, before increased crime hardened people's attitudes, then before decreased crime and a spate of wrong convictions started bringing it down again?
Massachusetts is more liberal than the US average, meaning that only one in three support death sentences (one in four in Boston). Even fewer support it in this case.
The only people interested in making a stand against the jury's decision in this particular case would be those opposed to the death penalty in all cases, basically those who do not believe that the State should kill people.
And those who are opposed to the death penalty aren't allowed on capital crime juries. One reason to oppose the death penalty is that it's only decided by the dwindling number of people willing to impose it, who will increasingly fail to collectively represent the average citizen.
That's surprising. You'd think that market forces would lead insurance companies to discriminate on every relevant factor that they're legally allowed to take into account, as long as the burden of obtaining that information didn't outweigh the advantages.
In Australia, motor property insurers base their premiums on just about anything, including the factors you mentioned. However, health and motor third-party injury insurers are forced to "community rate" on many factors, to avoid otherwise nasty discrimination.
Here in Australia, private health insurance premiums are allowed to increase according to the age at which the person joined the fund, penalising people who only join (to supplement the universal back-up) in old age when 90% of health spending occurs. It seems fair.
The difference from the US is that health insurance isn't tied to employers: you buy your own. This is why you get the age discrimination you mention.