If it left you crippled instead of dead, yes. However, sane governments simply levy enough taxes to deal with the consequences of natural disasters instead of having "asteroid insurance", "crazy guy with a gun insurance", "tiger insurance", etc.
Of course, they do! They just pay everything to their executives.
In general, the idea of "profit" as something objective and measurable is completely idiotic for most modern companies, because a sizeable chunk of what is an expense for the company is income for the people who are in control of it.
Then enlighten us about your preferred consumption method of modern history.
Anything attributed to unknown or un-verifiable sources is likely to be a lie.
Anything that is accompanied with ideological editorializing is likely to be a lie.
Anything said by a "historian" about anything happening across some major ideological conflict is likely to be a lie.
Anything said by a "historian" about his own government is likely to be a lie.
Anything said by an American "historian" is likely to be a lie.
Anything said by a pissed off writer is likely to be a lie.
Whatever is left... Yeah, that's really not much, but you can usually trust your own memory, and published documents when those documents were part of some process where validity mattered.
So you will just accuse me of bias because you happen to believe the propaganda version?
The article was written by the guy that did the driver, I think we can assume he knows his stuff.
Most of the driver is just a copy of the Intel driver, with additional functionality bolted on top. Whatever the author's abilities are, the goal was not to produce a working protocol stack, and benchmarks of this hack can't be used to predict anything but the behavior of this hack.
No, it appears that if you want to switch more than 10-18 Gbit/s, the computer would have a memory bandwidth problem. Trying to use multiple cores and NUMA might improve on that, but I do not think you would manage to build a 24-port switch that switches at line speed this way.
But if you could somehow get an external switch to do 99% of the work, this might work...
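To make the memory bandwidth concern concrete, here is a back-of-envelope sketch. The port count and link rate come from the thread above; the assumption that each forwarded packet crosses the memory bus twice (RX DMA in, TX DMA out, ignoring any extra CPU copies) is mine.

```python
# Aggregate traffic of a hypothetical 24-port 10 Gbit/s software switch.
PORTS = 24
LINK_GBPS = 10        # per-port line rate, Gbit/s
MEM_CROSSINGS = 2     # NIC -> RAM on receive, RAM -> NIC on transmit

aggregate_gbps = PORTS * LINK_GBPS
bus_gb_per_s = aggregate_gbps * MEM_CROSSINGS / 8   # Gbit/s -> GB/s

print(f"aggregate line rate: {aggregate_gbps} Gbit/s")
print(f"memory-bus traffic:  {bus_gb_per_s:.0f} GB/s")
```

60 GB/s of pure DMA traffic is several times the memory bandwidth of a desktop machine of that era, before the CPU has touched a single byte.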
And then they would inevitably slow down this hack, too, which makes me doubt the validity of the measurements.
I am not sure how much more we can get out of this discussion. From my side I believe you are going too far in trying to make a problem out of something that actually works quite well for some very large companies (Google and HP!).
Those companies merely announced that they intend to use this "technology" somewhere. They are not throwing out the routers they have. They will likely replace some layer 2 and layer 3 switches ("almost routers") and treat the whole thing as a fancier management protocol for the simple, mostly flat or statically configured networks that they have in abundance. For all we know, Google may already have no routers at all except for links between their data centers, as they are famous for customizing their hardware/network infrastructure for their own unique software infrastructure, and would probably gain more from multi-port servers connected by very primitive switches into clusters, with VLAN or even physical topology following the topology of their applications' interfaces.
Packets need to be delayed when the controller needs to be queried and that is true for both OpenFlow and traditional switches.
Except traditional switches never have high-latency, unreliable links between their components, and the data formats follow the optimized design of ASICs and not someone's half-baked academic paper.
We are just fighting over some nano- or possibly microseconds here, with no one showing that it actually matters.
Then why don't people just place Ethernet between a CPU and RAM? It's "nano or possibly microseconds", right?
Google has a use for it, or they wouldn't be doing it.
At my company we are using it too and it works very well for us. We are an ISP by the way.
If it works, then the way you use it did not require anything complex to begin with, and you use it as yet another management protocol. You could have bought cheap layer 3 switches before and configured them to do exactly the same thing from the command line, except with fewer buzzwords.
Ownership is a lot more than the right to deny use (and not always the right to deny use), and the "extensions of our body" argument is also flawed. The basis of "ownership" is our territorial instinct. If you move into my land (or speak to my woman), I will knock you in the head with my club. If I didn't do that, I would starve and have no offspring, so all people today descend from more or less territorial forefathers.
At no point in history, starting with the apes that humans eventually evolved from, was this the case -- they were all social animals and controlled territory, food, etc. only as a group, with a complex hierarchy within the group that had absolutely nothing to do with ownership. Those loners in caves never existed, and could not possibly exist, because humans never had the physical traits necessary to survive and defend themselves as individuals without a group. A hunter living alone in the woods, as "close to nature" as it seems, is something much more recent, brought about by the development of technology. Personal property is also a recent cultural development, and even now it usually acts as a proxy for social status and power.
INB4 wikipedia is full of propaganda. Then correct them. Controversial articles are easy to spot.
If it's the 19th to 21st century, it's someone regurgitating modern propaganda.
Dig deeper, make up your own mind.
You can't "dig deeper" when all you have is a collection of propaganda workers and their parrots, all trying to out-shout each other while trying to keep the impression of legitimacy.
One thing that Wikipedia really sucks at is the history of anything after the 18th century.
Of course, this is one of the three microscopic ex-USSR Baltic countries. Their politicians' idea of history is worse than Wikipedia.
That is bullshit. Here is a guy who benchmarked the Intel X520 10G NIC and wrote a small piece titled "Packet I/O Performance on a low-end desktop": http://shader.kaist.edu/packetshader/io_engine/benchmark/i3.html [kaist.edu]
His echo service manages to do between 10 and 18 Gbit/s of traffic even at a packet size of 60 bytes. And there are plenty of optimizations he could do to improve on that. The NIC supports CPU core affinity, so he could have the load spread across multiple cores. The memory bandwidth issue could have been solved with NUMA. But even without really trying, we are hitting the required response time on desktop hardware.
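For scale, here is a rough conversion of those figures into packet rates. This is my arithmetic, not the benchmark author's, and it ignores Ethernet framing overhead (preamble, inter-frame gap), so the real per-packet rates would be somewhat lower.

```python
# Packets per second implied by the quoted throughput at 60-byte packets.
FRAME_BYTES = 60

def pps(gbps: float) -> float:
    """Packets/s needed to sustain `gbps` with FRAME_BYTES-sized packets."""
    return gbps * 1e9 / (FRAME_BYTES * 8)

for rate in (10, 18):
    print(f"{rate} Gbit/s -> {pps(rate) / 1e6:.1f} Mpps")
```

Roughly 21 to 37 million packets per second -- keep those numbers in mind when reading the objections below about what a per-flow controller query would have to keep up with.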
That's without the data ever being accessed from userspace, with no protocol stack, and with an average packet size half of the maximum; and there is a good possibility that the measurements are wrong, because then it would be easier to implement the whole switch by just stuffing multiple CPU cores into the device, and the whole problem would not exist.
The simple fact is that after the packet has been transferred over the 10G link it will go through a PCI Express (x8) bus and be processed by the Linux OS - the same OS that you earlier claimed to be running on the control plane of the switches designed by your company. The only difference here is that I would probably get a faster system CPU than would be in your hardware.
Actually, in that driver it was not processed by anything -- copying is performed in the kernel. Data is mapped to userspace-accessible memory, but userspace does not even touch it; everything happens in the kernel, with the network stack being completely bypassed. The only thing less relevant (but perfectly suitable for a switch, if it was indeed that fast) would be forwarding done entirely in interrupt handlers.
Another important question is the latency of that thing -- it's still unknown. With enough buffers it can get very high, and we did not even take into account all the delays in hardware, or the time spent sitting in the queues of the switches that forward packets over the management network.
As to the blocking issue, only packets from the same (new) flow would be queued.
Except there is no new flow, because it will only be created after the controller produces a rule to identify it, so any packet may belong to it. And there is the matter of somehow expiring old rules based on a lack of traffic that matches them. This works very well when the whole NAT or load balancing can be reduced to a predefined static mapping with a hash -- which is done already with managed switches, even cheap ones. There is no benefit in making fancy controllers do that.
Also given that protocols like TCP do not just suddenly burst out 10G of packets
But protocols like UDP (and convoluted monstrosities on top of them with multiple dependent mappings, like SIP), do.
, the next packet following the initial SYN packet is not likely to arrive before the SYN has been processed by both switch and controller and forwarded long ago.
Except there may be a DoS in progress that fakes those packets. On a host, you can have an algorithm that prevents them from eating your memory, so only real mappings (with both sides completing the handshake) end up being persistent. On a switch you have no way to implement such an algorithm, because it does not fit into the rules you can define with your protocol, which means either your simple switch has to become a router, or it will not be able to provide such functionality without falling victim to a simple SYN flood.
And again packets to other destinations will not be blocked while we wait for the controller and somehow I get the impression that you think they would.
There is also the matter of the number of rules, the size of the internal RAM in the ASICs that contains them (and it's a different kind of RAM, not cheap and not large -- think CPU cache), and the time it takes to update it. All for nothing, because the CPU in the switch can do the same, better, without any "globally standardized" formats for a rule-passing protocol, without data ever showing up on management interfaces, without any of those hare-brained schemes. As I said, there is a place for standardization of switch ASIC control, and it would be a great way to make open source firmware for high-end switches and routers feasible. But then the standard should not be crippled or shoehorned into a management protocol that is supposed to work over the network.
So you are saying my estimate of 200 ns delay is wrong? Give me your own calculations.
This would work if data was transmitted instantly and there were no packets. A 200 ns round-trip latency can only be achieved if the network interface on the controller can handle ten million packets per second. Network cards can only do 1-2 million, and this does not count the latency caused by the network stack of a general-purpose OS on the controller, or any delays in the switches along the way.
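To spell out where the "ten million" comes from: assuming one request and one response packet per flow-setup query (my assumption about the message pattern), the arithmetic is straightforward.

```python
# Packet rate a controller NIC would need to sustain a 200 ns round trip
# per query, counting one request plus one response packet.
ROUND_TRIP_NS = 200
PACKETS_PER_QUERY = 2    # packet in to the controller, rule back out

queries_per_s = 1e9 / ROUND_TRIP_NS           # queries per second
nic_pps = queries_per_s * PACKETS_PER_QUERY   # packets per second at the NIC

print(f"{nic_pps / 1e6:.0f} Mpps required at the controller NIC")
```

Against the 1-2 Mpps that NICs of the time could actually push, that is roughly an order of magnitude short, before OS stack latency even enters the picture.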
Yes the incoming packet is in a queue while the switch waits for response from the controller. That response can be there within 200 ns. In the meantime the switch is not blocked from processing further packets.
It is blocked because it does not yet have a rule that will determine the processing of any packet that follows, so any packet may happen to match it, and has to sit in the queue until the new rule arrives from the controller.
A 200 ns delay on the first packet in a flow of packets is so little that it is barely measurable. You will be dealing with delays much larger than that simply because you want to send out a packet on a port that is already busy transmitting.
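For comparison, the "port already busy transmitting" delay is easy to put a number on. My assumption: one full-size 1500-byte frame ahead of you on a 10 Gbit/s port.

```python
# Serialization delay added by one full-size frame ahead of you in a
# transmit queue on a 10 Gbit/s port.
FRAME_BYTES = 1500
LINK_BPS = 10e9

serialization_ns = FRAME_BYTES * 8 / LINK_BPS * 1e9

print(f"one queued frame adds {serialization_ns:.0f} ns")
```

So a single queued frame already costs about 1200 ns, six times the claimed 200 ns controller round trip.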
Switches always delay packets -- this is what queues are for. In this case, however, any packet that has to be processed by a controller means instant blocking of the input queue, something that network switch designers avoid like the plague. And just imagine what will happen if the response is lost and everyone waits for a timeout and a re-transmit. The loss of traffic will make those large routers look cheap in comparison.
amateur terrorist attack
Design cannot be tested. Only an implementation can be tested, and testing does not reduce the stupidity of the people who work on the project; it just catches the most glaring mistakes and worst brain farts. If the programmer writes buggy code, it will still be buggy after passing tests -- except that if he is stupid or dishonest enough, he will be able to cover up bugs by writing workarounds for the tested scenarios.
My apologies, the very opposite applies here -- the comment about shit programmers/managers was meant for http://news.slashdot.org/comments.pl?sid=3764705&cid=43768603 and http://news.slashdot.org/comments.pl?sid=3764705&cid=43768647
Agreed. For more than one reason, and from personal experience. I've had both: a crew of code monkeys, and a small but incredibly efficient team of well-paid but also very good programmers. To say that the latter vastly outperformed the former (for less money in total, too) is an understatement.
That means they work on a project that has already been reduced to a worthless state, likely due to your inept management.
LOOK EVERYONE, shit programmers and shit managers are coming out of their closets!
I agree with you, in theory. But, in order for that 10x-100x to be achieved, conditions have to be conducive. It's like starting up that 600mph car to make a record run on the salt flats. If things are right, you can achieve phenomenal results. But, things usually aren't that optimal, and in a suboptimal environment, your gifted driver will be much less able to distinguish themselves from a merely good driver.
This is completely wrong.
A bad environment at work hurts everyone; however, a good programmer remains a good programmer until the point when the project is so bad that it is worthless no matter who works on it or how.
Having watched things play out many times over a (eek) long career, I've observed that it's fairly rare that skill and opportunity coincide. If you find yourself in such a position, revel in it, as it probably will not last.
That's because you are a shit programmer who drags everyone down.