
Comment Re:GPL and copyright (Score 1) 189

The trouble with that is, when given the option of paying or not, the vast majority of users will choose "not", at least given a wide enough target audience (though some niches may be filled with more generous people, of course).

I agree that it's hard to sell application software to individuals. But it's much easier to sell application software to businesses (including software that helps individuals make money), and to sell system software (components and tools). Businesses have both the money and a desire to preserve their reputation, so most will go legit even if the software's open, especially if support comes with the purchase.

So there's no reason why this sort of software shouldn't be open, particularly as such customers are the ones who are most interested in tinkering. I doubt RMS would have started GNU/FSF if that printer driver had been open source but not free of charge.

Comment Re:GPL and copyright (Score 2) 189

It's the best part of twenty years since I wrote any software where we cared about copyright. Everything I've written since then has been useless without our hardware, and that's where we make the money.

You're lucky that you've got closed hardware to act as a dongle for your software. But does this mean people who want to earn a living writing software for open hardware are SOL? I think such people should be able to put a (fixed, non-donation) price on the use of their work, while at the same time keeping the software open so that users can tinker.

Comment Re:Idiots (Score 1) 221

Thanks for those interesting details on how big routers work. It looks like they're optimized for throughput, not latency. Though if, as you say, normal link loadings mean that many packets need to be buffered anyway, pipelining packets isn't going to improve latencies.

But I don't think there's any reason x86 servers, which make up many of the nodes at both ends of a path, couldn't be set up to reliably pipeline packets. A 1500-byte packet on a 1 Gbps link is received in 12 µs, a 20-byte header in 160 ns. To reduce latency, the processor needs to work out the packet's destination within that 12 µs interval, which at a few GHz is a budget of about 30k instructions. This shouldn't have to touch RAM, because I'm sure the routing tables of all but the most connected hubs can fit in the 25-45 MB L3 caches of current Intel server CPUs.
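Those figures check out with some quick arithmetic (a sketch; the ~2.5 GHz clock and ~1 instruction per cycle used for the budget are my assumptions, not stated above):

```python
# Wire times on a 1 Gbps link.
LINK_BPS = 1e9
packet_time = 1500 * 8 / LINK_BPS   # full 1500-byte packet
header_time = 20 * 8 / LINK_BPS     # 20-byte IP header
print(f"packet: {packet_time * 1e6:.0f} us")   # 12 us
print(f"header: {header_time * 1e9:.0f} ns")   # 160 ns

# Instruction budget to choose an output port before the packet has
# fully arrived, assuming a ~2.5 GHz core retiring ~1 instruction
# per cycle (my assumption).
CLOCK_HZ = 2.5e9
budget = packet_time * CLOCK_HZ
print(f"budget: ~{budget / 1e3:.0f}k instructions")   # ~30k
```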

The sequence would be: take an interrupt when a header arrives; work out the destination link; initiate a DMA to that link containing the new header followed by whatever part of the packet has been received so far; set up an interrupt for when the Tx buffer is nearly exhausted; and on each such interrupt, DMA across whatever else has arrived in the meantime.
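As a toy model of that sequence (my sketch, not driver code; chunking and interrupt overheads are ignored), the per-hop saving of starting the egress DMA right after the header, with equal-rate links, comes out like this:

```python
def egress_finish(pkt_bytes, hdr_bytes, rate_bps, cut_through):
    """Time (s) at which the last byte leaves this hop, assuming
    equal-rate links and an empty Tx buffer. With cut_through,
    transmission starts as soon as the header is in; since egress
    then tracks ingress byte-for-byte, the Tx buffer never underruns."""
    byte_t = 8 / rate_bps
    if cut_through:
        return (hdr_bytes + pkt_bytes) * byte_t
    # store-and-forward: receive everything, then transmit everything
    return 2 * pkt_bytes * byte_t

sf = egress_finish(1500, 20, 1e9, cut_through=False)  # 24 us
ct = egress_finish(1500, 20, 1e9, cut_through=True)   # 12.16 us
print(f"saved per hop: {(sf - ct) * 1e6:.2f} us")     # 11.84 us
```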

Comment Re:Idiots (Score 1) 221

Each core has its own registers, L1 cache, and L2 cache. So if one core were to be devoted to the OS (or just to interrupt processing, which the x86 can do), there'd be either no context switches, or context switches that fit in the caches and so hardly have to touch the RAM. Dedicated cores would especially make sense on servers whose job is to route, filter, and cache traffic, and even on application servers.

Comment Re:Idiots (Score 1) 221

Not really. The x86 architecture can't handle the interrupts. That's why modern cards don't interrupt on every packet received.

What, even with so many cores available now that one could be reserved as a network co-processor? If the bottleneck is instead some other aspect of the architecture, I'm sure a trillion-dollar industry can work out how to fix it.

As well as handling interrupts on header receipt, you'd also have to handle interrupts on the near-exhaustion of transmit buffers, so packets can be pipelined in chunks between ports.

Comment Re:Idiots (Score 1) 221

There are problems with this approach. First of all, the higher the bandwidth of the interface, the less you get out of it. A 1500-byte MTU packet on a 10 Gb or 100 Gb PHY is not going to be detected by the most 1337 gamer with her 8000000 dpi x-ray laser mouse and super mega ultra polling keyboard.

Starting to transmit a 1500-byte packet as soon as its 20-byte header has been received reduces the per-hop forwarding latency by a factor of 75. Given enough hops, this can be significant.

The bigger issue is that this only works reliably when switching between instantaneously free, equal-capacity links; otherwise, if you don't buffer at all, you're going to suffer some serious packet loss.

As I said in my other reply, if the incoming and outgoing link speeds don't match, can't you buffer only until the point where the outgoing packet can be transmitted continuously? For example, if the outgoing link is twice the speed, start transmitting when half the packet has been buffered.
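There's a general rule behind that example (my derivation, not from the thread): with ingress rate r_in and a faster egress rate r_out, transmission can start once a fraction 1 - r_in/r_out of the packet has been buffered; the egress then sends its last byte exactly as the last ingress byte arrives, with no underrun.

```python
def cut_through_start_fraction(r_in_bps, r_out_bps):
    """Fraction of a packet that must be buffered before a faster
    egress link can start transmitting continuously without
    underrunning its Tx buffer."""
    if r_out_bps <= r_in_bps:
        return 0.0   # egress no faster than ingress: no extra buffering
    return 1.0 - r_in_bps / r_out_bps

print(cut_through_start_fraction(1e9, 2e9))   # 0.5 (the example above)
print(cut_through_start_fraction(1e9, 10e9))  # 0.9
```

For the 2x case this reproduces the half-packet figure; for a 1 Gbps-to-10 Gbps hop you'd wait for 90% of the packet, so the saving shrinks as the speed mismatch grows.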

Comment Re:Idiots (Score 1) 221

Thanks for that info. Looks like network chips for PC/server architectures need to start supporting interrupts on header receipt.

Even if an outgoing link is faster than the incoming one, time can still be saved by starting to transmit the outgoing header as soon as the whole packet can be transmitted continuously. Even sooner if the physical protocol supports gaps, or when wasted bandwidth is less important than reduced latency.

Comment Re:Idiots (Score 1) 221

Buffering and switching latency is the main source of delay, not signal latency in the copper and fiber. Microwaves would do exactly nothing to improve the switching and buffering latency. If anything they'd make it worse: light in fiber travels much further than line-of-sight microwave before it has to be regenerated with another delay.

There's not much delay for a simple repeater compared to a node that must read the packet so it can re-route it.

My question would be: To what extent do Internet nodes (with empty buffers) start forwarding a packet after reading its IP header, rather than waiting for the whole packet to be received? For example, does Linux support this? Not doing this ubiquitously would really increase end-to-end packet latency, which the cited paper argues is more important than speeding up protocols for getting a lower-latency Internet.

Comment Re:Fuck you. (Score 1) 618

As well as being on the side, there are also several search engine ads placed at the top of the main result list, forcing you to scan past them to read the first organic result. They're also coloured, which draws your eye to them.

But my main thrust was that even if they're out of the way, they're still ads with agendas to relieve you of your cash.

Search engines should provide a "view all relevant ads" link that even those using ad blockers can see. This would allow such people to see what's on offer when they're in buying mode, without having to disable their blockers, refresh and page through to see a good selection of ads, then re-enable their blockers.

Comment Re:A poorly made point, but still a point (Score 1) 618

So my question to all of those infuriated by those content producers who would "dare" to try to protect their ads is this: what viable alternative do you suggest?

Here's the alternative to advertising on which I've been working. Advantages: no cost to users, publishers earn money from their content rather than the stuff around it, and it allows monetization of unbiased content (which, unlike ads, tries to tell the whole truth).

Yes, this is an ad. But because it's on-topic and in some way solicited, I think it's acceptable, and shouldn't be equated to something intrusive.

Comment Re:Fuck you. (Score 1) 618

This is why I actually don't have a problem with Google's text ads. You do a search on some terms, and alongside your search results you also get some ads based on those terms. This can be really helpful if you're looking for a product to solve a problem you have, and the ad shows you something which is exactly what you're looking for.

While such search engine ads are less of a problem because they're often relevant to both your current motivation (finding things) and your topic of interest, they're still presenting you with information that's biased both in presentation (pay for prominence) and content (spin), as well as being distracting when you're not in a buying or curious mode. Sure, organic results aren't perfect, with SEO manipulating rankings, and company websites that spin natively. But at least you've got a chance of seeing some independent sources of advice. So I can understand those who block search engine ads.

Comment Re:not surprised (Score 1) 649

Thanks for posting those stats. An interesting dip in the 60s (and more no-opinions). Was this the result of the general cultural liberation, before increased crime hardened people's attitudes, then before decreased crime and a spate of wrong convictions started bringing it down again?

Massachusetts is more liberal than the US average, meaning that only one in three support death sentences (one in four in Boston). Even fewer support it in this case.

Comment Re:not surprised (Score 2) 649

The only people that are interested in making a stand against the jury's decision in this particular case would be those opposed to the death penalty in all cases, basically those that do not believe that the State should kill people.

And those who are opposed to the death penalty aren't allowed on capital crime juries. One reason to oppose the death penalty is that it's only decided by the dwindling number of people willing to impose it, who will increasingly fail to collectively represent the average citizen.
