If it left you crippled instead of dead, yes. However, sane governments just levy enough taxes to deal with the consequences of natural disasters instead of having "asteroid insurance", "crazy guy with a gun insurance", "tiger insurance", etc.
Of course, they do! They just pay everything to their executives.
In general, the idea of "profit" as something objective and measurable is completely idiotic for most modern companies, because a sizeable chunk of what is an expense for the company is income for the people who control it.
Reading the above, I am *so* glad I live in a country with free healthcare for all.
Go ahead, rub it in.
I honestly can't see how anyone could make a sane argument against that.
If you're simultaneously the majority shareholder of an HMO that owns hundreds of hospitals and a US senator, you may still not be able to make a sane argument against it, but you're going to try like hell.
Then enlighten us about your preferred consumption method of modern history.
Anything attributed to unknown or un-verifiable sources is likely to be a lie.
Anything that is accompanied with ideological editorializing is likely to be a lie.
Anything said by a "historian" about anything happening across some major ideological conflict is likely to be a lie.
Anything said by a "historian" about his own government is likely to be a lie.
Anything said by an American "historian" is likely to be a lie.
Anything said by a pissed off writer is likely to be a lie.
Whatever is left... Yeah, that's really not much, but you can usually trust your own memory, and published documents when those documents were part of some process where validity mattered.
So you will just accuse me of bias because you happen to believe the propaganda version?
Nobody has shown that what Apple has done shouldn't be morally acceptable.
I know I'll regret responding to such an obvious troll, but...
1) When a company like Apple avoids/evades paying taxes, it hurts the free market by taking for itself an advantage that other companies cannot or do not take. Primarily, a company the size of Apple uses this tax advantage to press anti-competitive advantages by buying up other companies. If you or I wanted to buy a company that Apple also wanted, and the company cost $1 billion, Apple would effectively be able to buy it for $700 million while we would have to pay the full $1 billion. By using this advantage to destroy competition, there is greater consolidation and greater loss of competition. Pretty soon, it's not really a market at all, much less a free one.
If you believe a "free market" is a force for good, then what Apple is doing is bad.
2) By not contributing their share of taxes (the same share that other companies have to pay), Apple uses public assets without paying for them, forcing the shortfall onto the rest of us and their competitors. Bad for us, and bad for the free market.
3) Stealing is immoral. Even you would probably agree that taking something you have not paid for is immoral. Apple uses a lot of common resources, from infrastructure to the legal system, at a much higher rate than most people (or companies). By not paying their share of the costs, those costs are shifted onto us. In the language of the American Right, Apple is "stealing from future generations".
4) Lying is immoral. Here's one of Apple's tax "avoidance" scams: they register a patent in the United States. This obliges the United States government to use resources to protect Apple's patent rights. Then, Apple transfers ownership of that patent to a shell company in Ireland, which pays its fees to another shell company in, say, Holland (hence the famous "Double Irish with a Dutch Sandwich"). Because the patent is owned in another country, and removed further by license fees paid in a third country, Apple avoids any taxes at all. Yet if an Apple patent is threatened, they sue in US court and the US government is called upon to protect it. So for the purposes of taxes the patent is not American, but for the purposes of enforcement it is. I'm pretty sure you can see how this is immoral.
Further, I'm betting that when Apple claims 2/3 of their profits come from outside the US, and indeed outside the jurisdiction of any sovereign nation, Apple's lying. This is why they're going to settle this ASAP: if the forensic accountants go to work on Apple's books, the penalties could be astronomical, and Apple's already wounded share price would halve again.
5, 6 & 7: Corporations were given special status to protect investors and owners from direct liability, not to protect them from having to act in a moral way. You seem willing to absolve Apple from any moral responsibility for anything, yet you want them to be treated as a person for the purposes of political activities. So now the moral questions are directed at you, khallow.
Finally, if you believe that taxes are immoral on their face, I would remind you that the purpose of the American Revolution was not to achieve freedom from taxation, but rather from taxation without representation. You cannot make a persuasive argument that you are not represented. You may not like your representation, but that's the way our system was designed. If you don't like the American system, then we have a different discussion altogether.
As a matter of fact I never directly used Bitcoin.
Because you're not goofy.
Personally, I do all of my transactions in Darknet Credits, which is the new monetary system based on reputation and righteous deeds. I can't actually buy anything, but I'm in on the ground floor.
You've obviously not used Bitcoin a lot.
You could accurately say that everyone has obviously not used Bitcoin a lot.
The article was written by the guy that did the driver; I think we can assume he knows his stuff.
Most of the driver is just a copy of the Intel driver, with additional functionality bolted on top. Whatever the author's abilities are, the goal was not to produce a working protocol stack, and benchmarks of this hack can't be used to predict anything but the behavior of this hack.
No, it appears that if you want to switch more than 10-18 Gbit/s, the computer would have a memory bandwidth problem. Using multiple cores and NUMA might improve on that, but I do not think you could build a 24-port switch that switches at line speed this way.
But if you could somehow get an external switch to do 99% of the work, this might work...
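For scale, the memory-bandwidth objection can be sanity-checked with rough arithmetic. This is a sketch under stated assumptions, not a measurement: each forwarded packet is assumed to cross the memory bus at least twice (DMA in from the receiving NIC, DMA out to the transmitting NIC), and the port count and rate are the ones under discussion.

```python
# Back-of-envelope check: aggregate memory traffic for a hypothetical
# 24-port 10GbE switch built from a commodity PC.
GBIT = 1e9

ports = 24
line_rate = 10 * GBIT               # bits/s per port
aggregate = ports * line_rate       # bits/s entering the box at line rate

bytes_per_sec = aggregate / 8
bus_crossings = 2                   # assumption: RX DMA + TX DMA, no CPU touches
memory_traffic_gb = bytes_per_sec * bus_crossings / 1e9

print(f"aggregate: {aggregate / GBIT:.0f} Gbit/s")
print(f"memory traffic: {memory_traffic_gb:.0f} GB/s")
```

60 GB/s of sustained memory traffic is well beyond what desktop platforms of that era could deliver, which is the point being made: a handful of 10G ports is feasible, a full 24-port line-rate switch is not.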
And then they would inevitably slow down this hack too, which makes me doubt the validity of the measurements.
I am not sure how much more we can get out of this discussion. From my side I believe you are going too far in trying to make a problem out of something that actually works quite well for some very large companies (Google and HP!).
Those companies merely announced that they intend to use this "technology" somewhere. They are not throwing out the routers they have. They will likely replace some layer 2 and layer 3 switches ("almost routers") and treat the whole thing as a fancier management protocol for the simple, mostly flat or statically configured networks they have in abundance. For all we know, Google may already have no routers at all except for links between their data centers: they are famous for customizing their hardware and network infrastructure around their own unique software infrastructure, and would probably gain more from multi-port servers connected by very primitive switches into clusters, with VLAN or even physical topology following the topology of their applications' interfaces.
Packets need to be delayed when the controller needs to be queried and that is true for both OpenFlow and traditional switches.
Except traditional switches never have high-latency, unreliable links between their components, and the data formats follow the optimized design of ASICs and not someone's half-baked academic paper.
We are just fighting over some nano or possible microseconds here with no one showing that it actually matters.
Then why don't people just place Ethernet between a CPU and RAM? It's "nano or possibly microseconds", right?
Google uses it for a reason, or they wouldn't be doing it.
At my company we are using it too and it works very well for us. We are an ISP by the way.
If it works, then the way you use it did not require anything complex to begin with, and you are using it as yet another management protocol. You could have bought cheap layer 3 switches before and configured them to do exactly the same thing from the command line, just with fewer buzzwords.
Ownership is a lot more than the right to deny use (and not always the right to deny use), and the "extensions of our body" argument is also flawed. The basis of "ownership" is our territorial instinct. If you move into my land (or speak to my woman), I will knock you in the head with my club. If I didn't do that, I would starve and have no offspring, so all people today descend from more or less territorial forefathers.
At no point in history, starting before the apes that humans eventually evolved from, was this the case -- they were all social animals and controlled territory, food, etc. only as a group, with a complex hierarchy within the group that had absolutely nothing to do with ownership. Those loners in caves never existed, and could not possibly have existed, because humans never had the physical traits necessary to survive and defend themselves outside a group. A hunter living alone in the woods, as "close to nature" as it seems, is something much more recent, brought about by the development of technology. Personal property is also a recent cultural development, and even now it usually acts as a proxy for social status and power.
INB4 wikipedia is full of propaganda. Then correct them. Controversial articles are easy to spot.
If it's the 19th to 21st century, it's someone regurgitating modern propaganda.
Dig deeper, make your own mind.
You can't "dig deeper" when all you have is a collection of propaganda workers and their parrots, all trying to out-shout each other while trying to keep the impression of legitimacy.
One thing that Wikipedia really sucks at is the history of anything after the 18th century.
Of course, this is one of the three microscopic ex-USSR Baltic countries. Their politicians' idea of history is worse than Wikipedia.
That is bullshit. Here is a guy who benchmarked the Intel X520 10G NIC and wrote a small piece titled "Packet I/O Performance on a low-end desktop": http://shader.kaist.edu/packetshader/io_engine/benchmark/i3.html [kaist.edu]
His echo service manages between 10 and 18 Gbit/s of traffic even at a packet size of 60 bytes. And there are plenty of optimizations he could do to improve on that. The NIC supports CPU core affinity, so he could have spread the load across multiple cores. The memory bandwidth issue could have been addressed with NUMA. But even without really trying, we are hitting the required response time on desktop hardware.
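To put those figures in packets-per-second terms, here is the arithmetic, assuming the quoted 10 Gbit/s counts only the 60-byte frames themselves (on the wire, each frame additionally costs an 8-byte preamble and a 12-byte inter-frame gap, which is where the classic 14.88 Mpps figure for 10GbE at minimum frame size comes from):

```python
# Packet rate implied by the benchmark's low-end number.
frame = 60            # bytes per packet in the benchmark
throughput = 10e9     # bits/s, low end of the quoted 10-18 Gbit/s range

pps = throughput / (frame * 8)
print(f"{pps / 1e6:.1f} Mpps")   # roughly 20.8 million packets per second
```

At 20+ million packets per second, per-packet CPU budget is under 50 ns, which is why the follow-up argument about what the benchmark does *not* do (userspace access, protocol stack) matters.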
That's without the data ever being accessed from userspace, with no protocol stack, with the average packet size being half the maximum, and there is a good possibility that the measurements are wrong, because otherwise it would be easier to implement the whole switch by just stuffing multiple CPU cores into the device, and the whole problem would not exist.
The simple fact is that after the packet has been transferred over the 10G link it will go through a PCI Express (x8) bus and be processed by the Linux OS - the same OS that you earlier claimed to be running on the control plane of the switches designed by your company. The only difference here is that I would probably get a faster system CPU than would be in your hardware.
Actually, in that driver it was not processed by anything -- copying is performed in the kernel. Data is mapped to userspace-accessible memory, but userspace does not even touch it; everything happens in the kernel, with the network stack completely bypassed. The only thing less relevant (but perfectly suitable for a switch, if it was indeed that fast) would be forwarding done entirely in interrupt handlers.
Another important question: the latency of that thing is still unknown -- with enough buffering it can get very high, and we have not even taken into account all the delays in hardware, or the time spent sitting in the queues of the switches that forward packets over the management network.
As to the blocking issue, only packets from the same (new) flow would be queued.
Except there is no new flow, because it will only be created after the controller produces a rule to identify it, so any packet may belong to it. And there is the matter of somehow expiring old rules based on the lack of traffic that matches them. This works very well when the whole NAT or load-balancing task can be reduced to a predefined static mapping with a hash -- which is what managed switches, even cheap ones, already do. There is no benefit in making fancy controllers do that.
Also given that protocols like TCP do not just suddenly burst out 10G of packets
But protocols like UDP (and convoluted monstrosities on top of them with multiple dependent mappings, like SIP), do.
, the next packet following the initial SYN packet is not likely to arrive before the SYN has been processed by both switch and controller and forwarded long ago.
Except there may be a DoS in progress that fakes those packets. On a host, you can have an algorithm that prevents them from eating your memory, so only real mappings (with both sides completing the handshake) end up being persistent. On a switch you have no way to implement such an algorithm, because it does not fit into the rules you can define with your protocol -- which means either your simple switch has to become a router, or it will not be able to provide such functionality without falling victim to a simple SYN flood.
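The host-side algorithm alluded to here is essentially SYN cookies: instead of storing state for every half-open connection, the server encodes the connection's identity into the value it sends back, and only allocates state when a valid reply returns. A minimal sketch of the idea, with illustrative names, a made-up secret, and a coarse time window (not the Linux implementation, which packs the cookie into TCP sequence-number bits):

```python
import hashlib
import time

SECRET = b"server-secret"   # illustrative; a real server would use a random key

def make_cookie(src, dst, sport, dport, now=None):
    """Encode connection identity plus a coarse timestamp into a 32-bit value."""
    t = int((now or time.time()) // 64)  # 64-second validity window
    msg = f"{src}:{sport}-{dst}:{dport}-{t}".encode()
    digest = hashlib.sha256(SECRET + msg).digest()
    return int.from_bytes(digest[:4], "big")

def check_cookie(cookie, src, dst, sport, dport, now=None):
    """Accept cookies minted in the current or previous time window."""
    t = now or time.time()
    return any(make_cookie(src, dst, sport, dport, t - 64 * k) == cookie
               for k in range(2))
```

The point of the argument stands: this kind of computation (a keyed hash over packet fields, checked statefully on the reply) is exactly what a simple match-action rule table cannot express.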
And again packets to other destinations will not be blocked while we wait for the controller and somehow I get the impression that you think they would.
There is also the matter of the number of rules, the size of the internal RAM in the ASICs that holds them (and it's a different kind of RAM, not cheap and not large -- think CPU cache), and the time it takes to update it. All for nothing, because the CPU in the switch can do the same thing, better, without any "globally standardized" formats for a rule-passing protocol, without data ever showing up on management interfaces, without any of those hare-brained schemes. As I said, there is a place for standardization of switch ASIC control, and it would be a great way to make open source firmware for high-end switches and routers feasible. But then the standard should not be crippled or shoehorned into a management protocol that is supposed to work over the network.
So you are saying my estimate of 200 ns delay is wrong? Give me your own calculations.
This would work if data was transmitted instantly and there were no packets. A 200 ns round-trip latency can only be achieved if the network interface on the controller can handle ten million packets per second. Network cards can only do 1-2 million, and this does not count the latency added by the network stack of a general-purpose OS on the controller, or any delays in the switches along the way.
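The arithmetic behind that rate claim can be sketched as follows, under the assumption that every controller query costs one packet in and one packet out at the controller's NIC:

```python
# What a sustained 200 ns round trip would demand of the controller.
round_trip_ns = 200
queries_per_sec = 1e9 / round_trip_ns     # back-to-back queries at that pace
packets_per_sec = queries_per_sec * 2     # assumption: request + response

nic_pps = 2e6                             # optimistic commodity-NIC packet rate
min_latency_ns = 1e9 / nic_pps            # per-packet cost at that rate

print(f"needed: {packets_per_sec / 1e6:.0f} Mpps")
print(f"NIC alone implies at least {min_latency_ns:.0f} ns per packet")
```

Even before counting the OS network stack or the management-network switches, the NIC's own packet rate already puts the floor well above 200 ns.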
Yes the incoming packet is in a queue while the switch waits for response from the controller. That response can be there within 200 ns. In the meantime the switch is not blocked from processing further packets.
It is blocked, because it does not yet have the rule that will determine the processing of any packet that follows; any packet may happen to match it, and has to sit in the queue until the new rule arrives from the controller.
A 200 ns delay on the first packet in a flow of packets is so little that is barely measurable. You will be dealing with delays much larger than that simply because you want to send out a packet on a port that is already busy transmitting.
Switches always delay packets -- that is what queues are for. In this case, however, any packet that has to be processed by a controller means instantly blocking the input queue, something that network switch designers avoid like the plague. And just imagine what happens if the response is lost and everyone waits for a timeout and retransmission. The loss of traffic will make those large routers look cheap in comparison.
amateur terrorist attack