
Comment Re:not so simple... Re:I should hope so (Score 0) 279

He is not capable of any major harm of course,

then:

I just worry one day I'm gonna hear about another nutcase shooting spree, and hear his name as the culprit...

-------

he is convinced he has Asperger's (he doesn't display any symptoms except "being smart",

and immediately then:

which he isn't at all)

Huh?

Comment Re:price tag is irrelavant (Score 0) 289

I would think that studying how fast a 1-year-old learns to say "mine" compared to "ours" would refute such a claim...

One-year-old babies neither talk nor have a concept of other people being their peers.

Anyway, ownership behaviours seem to me to be quite related: whether something is claimed by an individual, a family, a company, or a state, "theft" from such an "owner" seems to be met with similar emotional responses across all those owner classes.

No, that's a much more fundamental concept: causing harm. Taking away something a person needs harms him regardless of any "ownership" involved.

Comment Re:Good (Score 0) 1145

and given time we can slowly move to using metric all of the time if we want. The most effective change happens so slowly that you can't pinpoint when exactly it happened. Since there's no urgency here, it will be fine if it takes another generation or so to fully transition.

You don't understand. It takes a generation after the point when using the wrong system will get you fined, laughed at, or punched in the face. Otherwise the old system, no matter how idiotic, continues to perpetuate itself indefinitely.

Comment Re:rather have money (Score 0) 524

Of course they do! They just pay everything to their executives.

In general, the idea of "profit" as something objective and measurable is completely idiotic for most modern companies, because a sizeable chunk of what is an expense for the company is income for the people who control it.

Comment Re:Don't copy that floppy! (Score 0) 289

Then enlighten us about your preferred method of consuming modern history.

Anything attributed to unknown or unverifiable sources is likely to be a lie.
Anything that is accompanied by ideological editorializing is likely to be a lie.
Anything said by a "historian" about anything happening across some major ideological conflict is likely to be a lie.
Anything said by a "historian" about his own government is likely to be a lie.
Anything said by an American "historian" is likely to be a lie.
Anything said by a pissed-off writer is likely to be a lie.

Whatever is left... Yeah, that's really not much, but you can usually trust your own memory, and published documents when those documents were part of some process where validity mattered.

Comment Re:How can you have a software defined network? (Score 0) 75

The article was written by the guy that did the driver; I think we can assume he knows his stuff.

Most of the driver is just a copy of the Intel driver, with additional functionality bolted on top. Whatever the author's abilities, the goal was not to produce a working protocol stack, and benchmarks of this hack can't be used to predict anything but the behavior of this hack.

No, it appears that if you want to switch more than 10-18 Gbit/s the computer will have a memory bandwidth problem. Trying to use multiple cores and NUMA might improve on that, but I do not think you would manage to build a 24-port switch that switches at line speed this way :-).

But if you could somehow get an external switch to do 99% of the work, this might work...

And then they would inevitably slow down this hack too, which makes me doubt the validity of the measurements.

I am not sure how much more we can get out of this discussion. From my side I believe you are going too far in trying to make a problem out of something that actually works quite well for some very large companies (Google and HP!).

Those companies merely announced that they intend to use this "technology" somewhere. They are not throwing out the routers they have. They will likely replace some layer 2 and layer 3 switches ("almost routers") and treat the whole thing as a fancier management protocol for the simple, mostly flat or statically configured networks that they have in abundance. For all we know, Google may already have no routers at all except for the links between their data centers: they are famous for customizing their hardware and network infrastructure for their own unique software infrastructure, and would probably gain more from multi-port servers connected by very primitive switches into clusters, with VLANs or even the physical topology following the topology of their applications' interfaces.

Packets need to be delayed when the controller needs to be queried and that is true for both OpenFlow and traditional switches.

Except traditional switches never have high-latency, unreliable links between their components, and the data formats follow the optimized design of ASICs and not someone's half-baked academic paper.

We are just fighting over some nano- or possibly microseconds here, with no one showing that it actually matters.

Then why don't people just place Ethernet between a CPU and RAM? It's "nano or possibly microseconds", right?

Google has uses for it, or they wouldn't be doing it.

See above.

At my company we are using it too and it works very well for us. We are an ISP by the way.

If it works, then the way you use it did not require anything complex to begin with, and you are using it as yet another management protocol. You could have bought cheap layer 3 switches before and configured them to do exactly the same thing from the command line, except with fewer buzzwords.

Comment Re:price tag is irrelavant (Score 0) 289

Ownership is a lot more than the right to deny use (and not always the right to deny use), and the "extensions of our body" argument is also flawed. The basis of "ownership" is our territorial instinct. If you move into my land (or speak to my woman), I will knock you in the head with my club. If I didn't do that, I would starve and have no offspring, so all people today descend from more or less territorial forefathers.

At no point in history, starting before the apes that humans eventually evolved from, was this the case -- they were all social animals and controlled territory, food, etc. only as a group, with a complex hierarchy within the group that had absolutely nothing to do with ownership. Those loners in caves never existed, and could not possibly have existed, because humans never had the physical traits necessary for surviving and defending themselves without a group. A hunter living alone in the woods, as "close to nature" as it seems, is something much more recent, brought about by the development of technology. Personal property is also a recent cultural development, and even now it usually acts as a proxy for social status and power.

Comment Re:Don't copy that floppy! (Score 2, Informative) 289

INB4 wikipedia is full of propaganda. Then correct them. Controversial articles are easy to spot.

If it's the 19th to 21st century, it's someone regurgitating modern propaganda.

Dig deeper, make your own mind.

You can't "dig deeper" when all you have is a collection of propaganda workers and their parrots, all trying to out-shout each other while trying to keep the impression of legitimacy.

Comment Re:How can you have a software defined network? (Score 0) 75

That is bullshit. Here is a guy who benchmarked the Intel X520 10G NIC and wrote a small piece titled "Packet I/O Performance on a low-end desktop": http://shader.kaist.edu/packetshader/io_engine/benchmark/i3.html

His echo service manages to do between 10 and 18 Gbit/s of traffic even at a packet size of 60 bytes. And there are plenty of optimizations he could do to improve on that. The NIC supports CPU core affinity, so he could have the load spread across multiple cores. The memory bandwidth issue could have been solved with NUMA. But even without really trying, we are hitting the required response time on desktop hardware.

That's with data never being accessed from userspace, no protocol stack, and an average packet size half of the maximum. And there is a good possibility that the measurements are wrong, because if they were right, it would be easier to implement the whole switch by just stuffing multiple CPU cores into the device, and the whole problem would not exist.
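Back-of-the-envelope arithmetic on what those quoted numbers imply (my own figures, not from the benchmark page; assumes the Gbit/s count packet bytes only, with no Ethernet framing overhead):

    # Implied packet rate for 10-18 Gbit/s at 60-byte packets.
    for gbps in (10, 18):
        pps = gbps * 1e9 / (60 * 8)   # bits per second / bits per packet
        print("%2d Gbit/s ~= %.1f million packets/s" % (gbps, pps / 1e6))
    # Prints ~20.8 and ~37.5 million packets/s -- every one of them crossing
    # PCIe and main memory, which is why memory bandwidth becomes the ceiling.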

The simple fact is that after the packet has been transferred over the 10G link it will go through a PCI Express (x8) bus and be processed by the Linux OS - the same OS that you earlier claimed to be running on the control plane of the switches designed by your company. The only difference here is that I would probably get a faster system CPU than would be in your hardware.

Actually, in that driver it was not processed by anything -- the copying is performed in the kernel. Data is mapped to userspace-accessible memory, but userspace does not even touch it; everything happens in the kernel, with the network stack completely bypassed. The only thing less relevant (but perfectly suitable for a switch, if it were indeed that fast) would be forwarding done entirely in interrupt handlers.

Another important question: it's still unknown what the latency of that thing is -- with enough buffering it can get very high, and we did not even take into account all the delays in hardware, or the time spent sitting in the queues of the switches that forward packets over the management network.

As to the blocking issue, only packets from the same (new) flow would be queued.

Except there is no new flow yet, because the flow will only be created after the controller produces a rule to identify it, so any arriving packet may belong to it. And there is the matter of somehow expiring old rules based on the lack of traffic that matches them. This works very well when the whole NAT or load-balancing task can be reduced to a predefined static mapping with a hash -- which is done already with managed switches, even cheap ones. There is no benefit in making fancy controllers do that.
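For what it's worth, a minimal sketch of such a "predefined static mapping with a hash" -- the backend pool and addresses are invented for illustration:

    import hashlib

    BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical server pool

    def pick_backend(src_ip, src_port, dst_ip, dst_port, proto):
        # Hash the 5-tuple: the same flow always maps to the same backend,
        # so no per-flow state and no controller round trip are needed.
        key = ("%s:%d>%s:%d/%d" % (src_ip, src_port, dst_ip, dst_port, proto)).encode()
        digest = hashlib.sha1(key).digest()
        return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

    print(pick_backend("192.0.2.7", 51334, "198.51.100.5", 80, 6))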

Also given that protocols like TCP do not just suddenly burst out 10G of packets

But protocols like UDP (and convoluted monstrosities on top of them with multiple dependent mappings, like SIP) do.

, the next packet following the initial SYN packet is not likely to arrive before the SYN has been processed by both switch and controller and forwarded long ago.

Except there may be a DoS in progress that fakes those packets. On a host, you can have an algorithm that prevents them from eating your memory, so that only real mappings (with both sides completing the handshake) end up being persistent. On a switch you have no way to implement such an algorithm, because it does not fit into the rules you can define with your protocol -- which means either your simple switch has to become a router, or it will not be able to provide such functionality without falling victim to a simple SYN flood.
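The host-side algorithm hinted at here is essentially SYN cookies: the half-open connection is encoded into the sequence number sent back, so fake SYNs cost no memory. A simplified sketch (a real TCP stack also packs a timestamp and MSS into the cookie; this only shows the statelessness):

    import hmac, hashlib, time

    SECRET = b"per-boot random secret"   # hypothetical; a real stack rotates this

    def syn_cookie(src, sport, dst, dport, t=None):
        # Derive the initial sequence number from the 4-tuple and a coarse clock.
        t = int(time.time() // 64) if t is None else t
        msg = ("%s,%d,%s,%d,%d" % (src, sport, dst, dport, t)).encode()
        return int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")

    def ack_is_valid(src, sport, dst, dport, ack):
        # Recompute for the current and previous time window; nothing is stored.
        t = int(time.time() // 64)
        return (ack - 1) & 0xFFFFFFFF in (syn_cookie(src, sport, dst, dport, t),
                                          syn_cookie(src, sport, dst, dport, t - 1))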

And again, packets to other destinations will not be blocked while we wait for the controller, and somehow I get the impression that you think they would.

There is also the matter of the number of rules, the size of the internal RAM in the ASICs that holds them (and it's a different kind of RAM, not cheap and not large; think CPU cache), and the time it takes to update it. All for nothing, because the CPU in the switch can do the same, better, without any "globally standardized" formats for a rule-passing protocol, without data ever showing up on management interfaces, without any of those hare-brained schemes. As I said, there is a place for standardization of switch ASIC control, and it would be a great way to make open-source firmware for high-end switches and routers feasible. But that standard should not be crippled or shoehorned into a management protocol that is supposed to work over the network.
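To make the rule-count point concrete, here is a toy model of a bounded rule table behaving like a cache; the capacity and flow pattern are invented, but once active flows exceed the table, every eviction means another controller round trip:

    from collections import OrderedDict

    CAPACITY = 4            # invented; real flow-table TCAMs are also small
    table = OrderedDict()   # flow id -> action, kept in LRU order
    misses = 0

    for flow in ["a", "b", "c", "d", "e", "a"]:   # 5 active flows, table of 4
        if flow in table:
            table.move_to_end(flow)
        else:
            misses += 1                       # controller round trip + TCAM update
            if len(table) >= CAPACITY:
                table.popitem(last=False)     # evict the least recently used rule
            table[flow] = "forward"

    print("controller hits:", misses)         # 6 arrivals, 6 misses: thrashing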

Comment Re:How can you have a software defined network? (Score 0) 75

So you are saying my estimate of 200 ns delay is wrong? Give me your own calculations.

This would work only if data were transmitted instantly and there were no packets. A 200 ns round-trip latency could only be achieved if the network interface on the controller could handle ten million packets per second. Network cards can only do 1-2 million, and this does not count the latency added by the network stack of a general-purpose OS on the controller, or any delays in the switches along the way.
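The arithmetic behind that figure, for the record (my own back-of-the-envelope, assuming back-to-back lookups with one query and one response packet each):

    round_trip = 200e-9                     # the claimed 200 ns controller round trip
    lookups_per_sec = 1 / round_trip        # 5,000,000 back-to-back lookups/s
    packets_per_sec = 2 * lookups_per_sec   # each lookup = one query + one response
    print(int(packets_per_sec))             # 10000000 -- vs ~1-2M pps for real NICs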

Yes, the incoming packet is in a queue while the switch waits for a response from the controller. That response can be there within 200 ns. In the meantime the switch is not blocked from processing further packets.

It is blocked, because it does not yet have the rule that will determine the processing of any packet that follows; any packet may happen to match that rule, and so has to sit in the queue until the new rule arrives from the controller.
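A toy model of that head-of-line blocking (the timings are arbitrary assumptions, just to show the shape of the problem):

    # One unmatched packet stalls the whole FIFO until the controller's rule arrives.
    CONTROLLER_RTT = 200e-6    # assumed controller round trip: 200 us
    FORWARD_TIME = 50e-9       # assumed per-packet forwarding time: 50 ns

    def drain(queue, rules):
        clock = 0.0
        for flow in queue:                  # flow ids in arrival order
            if flow not in rules:
                clock += CONTROLLER_RTT     # everything behind this packet waits
                rules.add(flow)
            clock += FORWARD_TIME
            print("flow %s forwarded at t = %.1f us" % (flow, clock * 1e6))

    drain(["a", "b", "b", "c"], rules={"b"})   # known flow "b" still waits behind "a"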

A 200 ns delay on the first packet in a flow of packets is so little that it is barely measurable. You will be dealing with delays much larger than that simply because you want to send out a packet on a port that is already busy transmitting.

Switches always delay packets -- that is what queues are for. In this case, however, any packet that has to be processed by a controller means instant blocking of the input queue, something that network switch designers avoid like the plague. And just imagine what will happen if the response is lost and everyone has to wait for a timeout and a retransmit. The resulting loss of traffic will make those large routers look cheap in comparison.
