Comment Re:complex application example (Score 1) 161

The point is that each and every component involved, from hardware through firmware to software, is designed under the premiss that it is okay to drop a packet at any time for any reason, or to duplicate or reorder packets.

That entire sentence is damn near a lie. Those issues can happen, but they shouldn't. You almost have to go out of your way to make them happen. Dropping a packet should NEVER happen except when going past line rate. Packets should NEVER be duplicated or reordered except in the case of a misconfigured network. Networks are FIFO and they don't just duplicate packets for the fun of it.

As for error rates, many high-end network devices can achieve bit error rates of 10E-18 or better, which works out to one error every ~111 pebibytes. I assume you'd have to multiply that error rate by the number of hops.
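The arithmetic checks out; here's a quick sketch (the 10E-18 rate is the figure above, and the five-hop path is a made-up example):

```python
# A bit error rate of 1e-18 means, on average, one flipped bit per 1e18 bits.
ber = 1e-18
bits_per_error = 1 / ber                 # 1e18 bits between errors
bytes_per_error = bits_per_error / 8     # 1.25e17 bytes
pib_per_error = bytes_per_error / 2**50  # convert to pebibytes
print(round(pib_per_error))              # 111 PiB between errors

# With H independent hops at the same BER, errors accumulate,
# so the interval between errors shrinks by a factor of H.
hops = 5  # hypothetical path length
print(round(pib_per_error / hops))       # 22 PiB
```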

I've seen enough system designs where they send data as UDP packets and require incredibly low packet-loss rates, bordering on never. It can be done, but you're not going to be using D-Link switches. You can purchase L4 switches now with multi-gigabyte buffers. They're meant to handle potentially massive throughput spikes without dropping packets.

I assume this is all intra-datacenter traffic or at least an entirely reserved network.

Comment Re:Where is the market demand? (Score 1) 161

The whole point of QoS is not to add more hardware, but to make better use of your current hardware without introducing large amounts of jitter. Mainframes don't need to worry about interactive processes, but many modern-day workloads do. What they want is a good average throughput with a bounded maximum latency.

Comment Re:complex application example (Score 1) 161

* the UDP traffic contains multiple data packets (call them "jobs") each of which requires minimal decoding and processing

anything _above_ that rate and the UDP buffers overflow and there is no way to know if the data has been dropped. the data is *not* repeated, and there is no back-communication channel.

How are you planning on handling UDP checksum errors without a backchannel or EC? The physical ethernet layer is lossy, so you're screwed even before the packet hits the NIC.

Lossy?

I just logged into my switch at home and it has 146 days of uptime with 20,154,030,043 frames processed and 0 frame errors. I can even run a bidirectional iperf at 1Gb/s each way, for a total of just under 2Gb/s at once, and have 0 packets dropped.

Let the network group worry about QoS. Yes, errors will eventually happen; they're just very rare. And when they do happen, it's probably pathological and you'll get a lot of them at once. But I wouldn't go so far as to say "the physical ethernet layer is lossy" as a general statement.

Comment Re:complex application example (Score 1) 161

When you're handling lots of little messages/jobs/tasks that are coming in quickly, passing data between processes is a horrible idea. Between context switching and system calls, you're destroying your performance.

You need to make larger batches.

1) A UDP packet/job comes in; write it to a single-writer/many-reader queue (large circular queues work well for this) along with an order number, maybe a 64-bit incrementing integer. If the run time per job is fairly constant, you could instead use several single-reader/single-writer queues and just round-robin them. That reduces potential lock contention, but at the cost that variable workloads could bias load toward a single worker.

1.a) You're not receiving packets fast enough to worry about threading the reads from the NIC. If you had to make this part faster, like millions of packets per second, the first thing I would find out is whether the packets come from multiple data sources, and whether jobs need to be processed in order relative to all sources or only to their own source. If only their own source, you could have a load balancer round-robin across workers and stick by source IP.

2) A worker sees jobs in the queue (since this is a speed-sensitive dedicated machine, polling could work, but you may want event-based), grabs N jobs, where N jobs can be reliably completed in a timely fashion; this may be 1 or may be 100, who knows until you test. Note the order numbers of your jobs. You don't really need to grab N jobs at a time if using a single-reader/single-writer queue, since there is no real contention, but reading in batches is good for high-contention queues like multi-reader ones.

3) Your worker then loops through the jobs, running each script, hopefully all on the same worker/thread.

4) Write out the completed jobs to a single-reader/single-writer queue. If you instead use a multi-writer queue, you may want to commit finished jobs in batches to reduce contention.

5) Have another worker poll (or get events from) each worker's output queue. This worker makes sure the jobs are put back in order. I assume this process is relatively light, so a single worker can probably handle all of the output queues, but it could also be threaded. You just need to manage the ordering somehow.
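The steps above can be sketched roughly like this. This is a minimal single-process sketch using plain Python queues; the job payload (an integer we double), the batch size, and the two-worker setup are all made up for illustration:

```python
import heapq
from queue import SimpleQueue

def dispatch(jobs, worker_queues):
    """Step 1: tag each job with an order number and round-robin it."""
    for seq, job in enumerate(jobs):
        worker_queues[seq % len(worker_queues)].put((seq, job))

def work(in_q, out_q, batch_size=4):
    """Steps 2-4: grab up to N jobs, run them, commit results in a batch."""
    while not in_q.empty():
        batch = []
        while len(batch) < batch_size and not in_q.empty():
            seq, job = in_q.get()
            batch.append((seq, job * 2))  # stand-in for "running the script"
        for item in batch:                # commit the whole batch at once
            out_q.put(item)

def collect(out_qs, total):
    """Step 5: merge worker outputs back into original order with a heap."""
    heap, results, next_seq = [], [], 0
    while len(results) < total:
        for q in out_qs:
            while not q.empty():
                heapq.heappush(heap, q.get())
        # emit jobs only once the next sequence number has arrived
        while heap and heap[0][0] == next_seq:
            results.append(heapq.heappop(heap)[1])
            next_seq += 1
    return results

in_qs = [SimpleQueue(), SimpleQueue()]
out_qs = [SimpleQueue(), SimpleQueue()]
dispatch(range(8), in_qs)
for i, o in zip(in_qs, out_qs):
    work(i, o)
print(collect(out_qs, 8))  # [0, 2, 4, 6, 8, 10, 12, 14]
```

In a real deployment each `work` call would run on its own thread or core, and the queues would be the single-reader/single-writer circular buffers described above rather than `SimpleQueue`.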

You should have no more than N workers per core, where N is probably a small number, like 2. Lots of threads is bad.

I love single-reader/single-writer queues; they can be lock-free.
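The core idea is a fixed-size circular buffer with separate head and tail indices: each index is written by exactly one side, which is what removes the need for a lock. A structural sketch (real lock-freedom needs a language with proper atomics and memory ordering; this Python version only shows the shape):

```python
class SpscRing:
    """Single-producer/single-consumer circular queue.

    The producer only ever writes `tail`, the consumer only ever
    writes `head`, so neither side races the other on its own index.
    One slot is kept empty to distinguish "full" from "empty".
    """
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0  # next slot to read  (consumer-owned)
        self.tail = 0  # next slot to write (producer-owned)

    def push(self, item):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:        # full
            return False
        self.buf[self.tail] = item
        self.tail = nxt             # publish after the write
        return True

    def pop(self):
        if self.head == self.tail:  # empty
            return None
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item

q = SpscRing(4)                     # 3 usable slots
for i in range(5):
    q.push(i)                       # pushes of 3 and 4 fail: ring is full
print([q.pop() for _ in range(4)])  # [0, 1, 2, None]
```

Note that a full ring is exactly the "UDP buffers overflow" situation from the original post: the producer's `push` fails and you must decide whether to drop, block, or grow.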

Your problem sounds close to what the LMAX Disruptor handles (Google: disruptor ring buffer) (fun read: http://mechanitis.blogspot.com...). You may want to look into that kind of design too. It's an interesting project that runs on Java and .NET, and I think there's a C port or something, but I can't remember. Still a good read.

Comment Re:10.10 per hour (Score 1) 778

I understand where you're going with that, but I don't think minimum wage is meant to support a family. It should be just enough for a single person to fully support themselves, including basic healthcare, eating healthy, and a bit extra to have fun with friends so they don't get stuck in depression. Essentially enough to be a physically and mentally healthy social individual.

Comment Re:This will only buy a little time (Score 2) 778

If everyone's equal, then nobody has incentives to do anything

Research shows that paying people too much is worse than paying too little. You get the best performance out of people when they get paid just enough to be content with their life: where they can have a decent roof over their head, be healthy, afford the bare necessities, and a bit extra to have fun with the family a few times a week. That's it. They need to be secure and they need to be happy, but no more.

On average, if someone is getting paid enough to own a flashy car, their productivity will go down.

This only applies to jobs with any amount of creativity. Jobs that are repetitive manual labor show an almost linear increase in productivity with pay, so increased pay works just fine there. But we're in the process of automating manual labor and will at some point in the future have replaced all of it with automation.

Comment Re:Free market economy (Score 1) 529

And regulations are what stopped factories from taking orphans off the streets and putting them to work in very dangerous settings that many times resulted in the deaths of the children. Have a skinny chimney? Starve your orphan slave until they can fit, then get them in there and light the fire under them so they quickly push through all that soot that will clog their lungs and lead to a painful early death.

There's also the chance the child may get stuck and starve or burn to death. But whatever. Zomg! Regulations are bad!

In a way, regulations are a government enforced set of ethics and morals. Not entirely, but very similar.
