
Comment Re:I don't see the problem. (Score 1) 667

It seems that the launch site has been rather precisely determined. Perhaps you missed that memo.

And no matter how much evidence the US or Ukrainian government produces, no matter how detailed and annotated, Russia will dismiss it with a wave of a hand as fabricated, slanted, biased...whatever they want. They'll never admit responsibility.

Comment Re:I don't see the problem. (Score 1) 667

> What they need to do is to organize a UN peacekeeper mission there, not wage a proxy war with the US.

Yes, because UN peacekeepers have such a long, sterling reputation for stopping this kind of thing from happening.

But regardless, the UN will never do anything in this conflict. Russia holds a veto in the Security Council, and they will stop any such measures from ever happening.

Comment Re:Free market economy (Score 1) 529

> Markets in poor neighborhoods carry what 'poor' people buy

They buy what gives them the most calories per dollar, while also focusing on foods that require the least preparation time (since their work typically leaves them with little time to spare). End result: saturated fat, refined sugar and sodium, with very little in the way of necessary vitamins and minerals.

> Poverty is now owning... a car out of warranty!

In most of the United States, owning a car is a necessity both for getting to work and for buying food.

Comment Re:cause and/or those responsible (Score 0) 667

Nothing is objectively known about the airliner. Everything, from Ukrainian air traffic control ordering the plane to descend to a dangerous altitude to who detected what, is supposition and hearsay at this point.

It is my personal suspicion that the Ukrainian authorities were hoping for an accident of this sort and were intent on placing a civilian airliner in as dangerous a position as possible. Whether that was the case for this specific airliner on this specific flight is unclear.

And I'd argue that Korean Air Lines Flight 007 is a better example for this reason. The US had been using civilian airliners to spy on the Soviet Union for some time, and doctored the evidence to remove the Soviet pilots radioing warnings to the aircraft, in order to make the incident look more incriminating than it was. Whether that flight was used for spying, was shadowed by such an aircraft, or merely happened to be in the wrong place at the wrong time is incidental. An accident was inevitable, and the US government of the time was guilty of ensuring that civilians would someday die for the benefit of military intelligence. It was merely a matter of which plane would be blown out of the sky, and when.

In this case, the Ukrainian authorities deliberately downplayed the risk of missile attacks on overflying aircraft and deliberately worked to place aircraft in the most dangerous air corridors the airlines would permit. That is indisputable. Their opponents were known to be firing on aircraft and had already shot several down. To paraphrase what Americans often say about cops: when your time to respond is measured in milliseconds, the nearest aircraft identification guide is mere hours away.

An accident was inevitable. The separatists weren't interested in avoiding one, and the Ukrainian authorities certainly weren't either. It was merely a question of who would die for someone else's ideals. Whether or not this particular aircraft was deliberately placed in the path of a SAM battery is unimportant.

Both sides are therefore guilty. Both sides deserve blame.

Comment Re:complex application example (Score 4, Informative) 161

>> the first ones used threads and semaphores, communicating through python's multiprocessing.Pipe implementation.

> I stopped reading when I came across this.

> Honestly - why are people trying to do things that need guarantees with python?

because we have an extremely limited amount of time as an additional requirement. we can always rewrite critical portions (or later the entire application) in c once we have delivered a working system, which means the client can get some money in and can therefore stay in business.

also i worked with david, and we benchmarked python-lmdb after adding in support for looped sequential "append" mode: we got a staggering write performance of 900,000 100-byte key/value pairs per second, and a sequential read performance of 2.5 MILLION records per second. the equivalent c benchmark is only around double those numbers. we don't *need* the dramatic performance increase that c would bring if, right now, at this exact phase of the project, we are targeting something that is 1/10th to 1/5th the performance of c.
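
for the curious, the shape of that benchmark is roughly as follows. this is a minimal sketch against the py-lmdb API rather than our actual harness; the path, map size and record count are illustrative, and real numbers will obviously vary with hardware and key layout.

```python
import time
import lmdb  # py-lmdb bindings

N = 900_000          # illustrative record count, per the figures above
VALUE = b"x" * 100   # 100-byte values

env = lmdb.open("/tmp/bench.lmdb", map_size=2 * 1024**3)  # 2 GiB map

start = time.time()
with env.begin(write=True) as txn:
    cur = txn.cursor()
    for i in range(N):
        # append=True tells LMDB the keys arrive in sorted order,
        # letting it skip the b-tree search on every insert
        cur.put(b"%016d" % i, VALUE, append=True)
print("write: %.0f pairs/sec" % (N / (time.time() - start)))

start = time.time()
with env.begin() as txn:
    count = sum(1 for _ in txn.cursor())  # sequential scan
print("read: %.0f records/sec" % (count / (time.time() - start)))
```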

so if we want to provide the client with a product *at all*, we go with python.

but one thing that i haven't pointed out is that i am an experienced linux python and c programmer, having been the lead developer of samba tng from 1997 to 2000. i simply transferred all of the tricks that i know involving while-loops around non-blocking sockets and so on over to python... and none of them helped. if you get 0.5% of the required performance in python, it's so far off the mark that you know something is drastically wrong, and converting the exact same program to c is not going to help.

> The fact you have strict timing guarantees means you should be using a realtime kernel and realtime threads with a dedicated network card and dedicated processes on IRQs for that card.

we don't have anything like that [strict timing guarantees] - not for the data itself. the data comes in on a 15-second delay (from an external source that we do not have control over), so a few extra seconds of delay is not going to hurt.

so although we need the real-time response to handle the incoming data, we _don't_ need the real-time capability beyond that point.

> Taking the incoming messages from UDP and posting them on a message bus should be step one, so that you don't lose them.

.... you know, i think this is extremely sensible advice (which i have heard from other sources too), so it is good to have that confirmed. my questions are as follows:

* how do you then ensure that the process receiving the incoming UDP messages runs at a high enough priority that the packets are definitely, definitely received?

* what support from the linux kernel is there to ensure that this happens?

* is there a system call which guarantees that, when data is received on a UDP socket, the process receiving it is woken up as an absolute priority, over and above all else?

* the message queue destination has to have locking, otherwise it will be corrupted. what happens if the message queue that you wish to send the UDP packet to is locked by a *lower* priority process?

* what support in the linux kernel is there to get the lower priority process to have its priority temporarily increased until it lets go of the message queue on which the higher-priority task is critically dependent?

this is exactly the kind of thing where the linux kernel's support is weakest. temporary automatic re-prioritisation (priority inheritance) was added to solaris by sun microsystems quite some time ago.

as far as i know, the linux kernel only provides priority inheritance for PI futexes (pthread mutexes created with the PTHREAD_PRIO_INHERIT protocol); there is no equivalent for the system-v or posix message queue primitives, which is exactly where it is needed here.
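
for the receive side, the standard (partial) answers to the first two questions are a real-time scheduling class plus a large kernel-side socket buffer. a rough sketch follows; the port and buffer size are purely illustrative, it needs root or CAP_SYS_NICE, and it does nothing whatsoever about the priority-inversion questions on the queue itself:

```python
import os
import socket

# port and buffer size are illustrative, not from the real system
PORT = 9999
RCVBUF = 8 * 1024 * 1024  # 8 MiB kernel buffer (capped by net.core.rmem_max)

# put this process in the real-time FIFO class so the scheduler wakes
# it ahead of every normal SCHED_OTHER process. linux-only; requires
# root or CAP_SYS_NICE.
prio = os.sched_get_priority_max(os.SCHED_FIFO)
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(prio))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RCVBUF)
sock.bind(("0.0.0.0", PORT))

while True:
    data, _ = sock.recvfrom(2048)
    # hand the datagram straight off to the queue/bus here; do *no*
    # decoding in this loop, so the socket is drained as fast as possible
```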

Comment Re:String theory is not science (Score 5, Insightful) 147

It's testable, it's measurable, it's repeatable, it's capable of prediction. It's either the simplest model that meets these requirements AND produces correct predictions, OR it is not.

Therefore it is science.

Maths is a science, for the reasons given in the first line. Science is a mathematical system, because ultimately there is nothing there, just numbers. (See: Spinons and other quasiparticles.)

Comment Multiverse theory (Score 4, Informative) 147

There are many multiverse theories and they can all be tested.

Many Worlds: The theory that there are no real "probability waves" in QM, merely overlapping realities that diverge at the time the "wavefunction" collapses.

This is an easy one. Entangled particles operate using the same physics as wormholes. If one of the entangled pair is accelerated to relativistic velocities, say in a particle accelerator, the two will no longer exist in the same relative timeframe. It would seem to follow that if Many Worlds is correct, one of the particles will be entangled with multiple instances of the other particle, which would imply that every state would be seen at the same time. If the options are left spin and right spin, you'd see an aggregate state of no spin, even though no spin isn't a physical possibility. And seeing something that doesn't exist either means you're in a Phineas and Ferb cartoon or Many Worlds is correct.

Foam Universe: This is the sort described in the article.

Yes, impact studies are possible, but they're only meaningful if you have enough data, and you can't possibly know whether you do. You're better off trying to make a universe, preferably a very small one with a quantum black hole at the throat of the bridge linking this universe to that one. What you will observe is energy apparently vanishing (not existing in any form, mass included), then reappearing as the bridge completely collapses.

Orange Slice Universe: This conjectures that multiple, semi-independent universes formed out of the same big bang and will eventually converge in a big crunch.

It doesn't matter that this universe would expand forever, left to its own devices, because the total mass is the total mass of all the slices. Although they are semi-independent, they interact at the universe-to-universe level. In this scheme, because there's a single entity (albeit partitioned), leptons cannot have just any of the theoretical states. The state space must also be partitioned. Ergo, if you can't create a state for an electron (for example) that it should be able to take, this type of multiverse must exist.

Membrane-based Universe: This postulates that universes are at an interface between a membrane and something else, such as another membrane.

However, membranes intersecting with the universe are supposed to be how leptons are formed, in this theory. The intersection will be governed by the topology of the membranes involved (including the one the universe resides on), which means that lepton behaviour must vary from locality to locality, since the nature of the intersections cannot vary such as to perfectly mirror variations in the shape of the membrane the universe is on. Therefore, all you need to do is demonstrate a result that is perfectly repeatable anywhere on Earth but not, say, at the edge of the solar system.

Comment complex application example (Score 4, Insightful) 161

i am running into exactly this problem on my current contract. here is the scenario:

* UDP traffic (an external requirement that cannot be influenced) comes in
* the UDP traffic contains multiple data packets (call them "jobs") each of which requires minimal decoding and processing
* each "job" must be farmed out to *multiple* scripts (for example, 15 is not unreasonable)
* the responses from each job running on each script must be collated then post-processed.

so there is a huge fan-out where jobs (approximately 60 bytes each) are coming in at a rate of 1,000 to 2,000 per second; those are being multiplied up by a factor of 15 (to 15,000 to 30,000 per second, each taking very little time in and of itself), and the responses - all 15 to 30 thousand of them - must be collated in order before being post-processed.
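
as an aside on that in-order requirement: the conventional trick is to stamp each job with a sequence number and buffer early responses in a min-heap until the next expected number turns up. a toy sketch (the class and names are hypothetical, not our actual code):

```python
import heapq

class InOrderCollator:
    """buffer out-of-order (seq, response) pairs, releasing them
    strictly in sequence order."""

    def __init__(self):
        self.next_seq = 0
        self.pending = []  # min-heap keyed on sequence number

    def add(self, seq, response):
        heapq.heappush(self.pending, (seq, response))
        ready = []
        # drain everything that is now contiguous with next_seq
        while self.pending and self.pending[0][0] == self.next_seq:
            ready.append(heapq.heappop(self.pending)[1])
            self.next_seq += 1
        return ready

# responses arrive out of order from the worker scripts:
collator = InOrderCollator()
for seq, resp in [(1, "b"), (0, "a"), (2, "c")]:
    for item in collator.add(seq, resp):
        print(item)  # prints a, b, c - in order
```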

so, the first implementation is in a single process, and we just about achieve the target of 1,000 jobs per second, but with only about 10 scripts per job.

at anything _above_ that rate, the UDP buffers overflow and there is no way to know whether data has been dropped. the data is *not* repeated, and there is no back-communication channel.

the second implementation uses a parallel dispatcher. i went through half a dozen different implementations.

the first ones used threads and semaphores, communicating through python's multiprocessing.Pipe implementation. the performance was beyond dreadful - deeply alarming, in fact. after a few seconds, performance would drop to zero. strace investigations showed that under heavy load the futex system call was maxed out near 100%.

next came the replacement of multiprocessing.Pipe with unix socket pairs, and of threads with processes, so as to regain proper control over signals, sending of data and so on. early variants of that would run absolutely fine up to some arbitrary limit, then performance would plummet to around 1% or less, sometimes remaining there and sometimes recovering.
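
the basic shape of that socketpair-plus-fork variant, for anyone following along. this is a single-worker toy (the real dispatcher held one pair per worker and multiplexed them); SOCK_DGRAM is used so message boundaries are preserved without any framing:

```python
import os
import socket

# one unix datagram socket pair per worker: the parent keeps one end,
# the forked child keeps the other.
parent_end, child_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

pid = os.fork()
if pid == 0:
    # child: the worker loop. being a real process (not a thread) it
    # has its own GIL and its own signal handling.
    parent_end.close()
    while True:
        job = child_end.recv(4096)
        if job == b"quit":
            os._exit(0)
        child_end.send(b"done:" + job)  # stand-in for real work
else:
    # parent: the dispatcher side. in the real design this fd would be
    # registered with epoll alongside the other 14+ workers.
    child_end.close()
    parent_end.send(b"job-0001")
    print(parent_end.recv(4096))  # b'done:job-0001'
    parent_end.send(b"quit")
    os.waitpid(pid, 0)
```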

next came the replacement of select with epoll, and the addition of edge-triggered events. after considerable bug-fixing, a reliable implementation was created. testing began, and the CPU load slowly cranked up towards the maximum possible across all 4 cores.
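
for reference, the registration shape for the edge-triggered variant looks roughly like the following. this is a cut-down illustration, not the real dispatcher: the classic trap with EPOLLET is that you are only woken on state *changes*, so every wakeup must drain the socket to EAGAIN or events are silently lost.

```python
import select
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9999))  # port is illustrative
sock.setblocking(False)

ep = select.epoll()
# EPOLLET = edge-triggered: notify on transitions, not while readable
ep.register(sock.fileno(), select.EPOLLIN | select.EPOLLET)

while True:
    for fd, events in ep.poll():
        if events & select.EPOLLIN:
            while True:  # drain completely: mandatory with EPOLLET
                try:
                    data, addr = sock.recvfrom(2048)
                except BlockingIOError:  # EAGAIN: buffer is empty
                    break
                # dispatch(data)  # hypothetical handoff to a worker
```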

the performance metrics came out *WORSE* than the single-process variant. investigations began and showed a number of things:

1) even though each job is only 60 bytes, the pre-processing required to decide which process to send it to was so great that the dispatcher process was becoming severely overloaded

2) each process was spending approximately 5 to 10% of its time doing actual work and NINETY PERCENT of its time waiting in epoll for incoming work.

this is unlike any other "normal" client-server architecture i've ever seen before. it is much more like the mainframe "job processing" that the article describes, and the linux OS simply cannot cope.

i would have used POSIX message queues, but the discoverability sucks: there is no system call to enumerate the queues after they have been created so that they may be deleted. i checked the linux kernel source: there is no "directory listing" function supplied; as far as i can tell, the only way to list what's been created is to mount the mqueue pseudo-filesystem and read it as a directory.
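
for completeness, here is what that approach looks like with the third-party posix_ipc module, plus the mqueue pseudo-filesystem trick for listing what exists. the queue name and sizes are illustrative:

```python
import os
import posix_ipc  # third-party module: pip install posix-ipc

# create (or attach to) a named POSIX message queue; limits are
# constrained by /proc/sys/fs/mqueue/*.
mq = posix_ipc.MessageQueue("/jobq", posix_ipc.O_CREAT,
                            max_messages=10, max_message_size=1024)
mq.send(b"job-0001")
msg, priority = mq.receive()
print(msg)  # b'job-0001'

# there is no enumeration syscall, but the kernel exposes the queues
# as a pseudo-filesystem, mounted (once, as root) with:
#   mount -t mqueue none /dev/mqueue
if os.path.isdir("/dev/mqueue"):
    print(os.listdir("/dev/mqueue"))  # e.g. ['jobq']

mq.close()
mq.unlink()  # actually removes the queue by name
```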

i gave serious consideration to using the python LMDB bindings, because they provide an easy API on top of memory-mapped shared memory with copy-on-write semantics. early attempts at that gave dreadful performance. i have not fully investigated why, because it _should_ work extremely well, given the copy-on-write semantics.

we also gave serious consideration to just taking a file, memory-mapping it, appending job data to it, and then using the mmap'd file for spin-locking to indicate when each job is being processed.
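
a toy, single-writer/single-reader sketch of that mmap idea, with a one-byte flag polled as a crude spin-lock. the file layout is hypothetical, and a production version would need per-slot flags and real memory barriers, which pure python cannot express:

```python
import mmap
import os
import struct

PATH = "/tmp/jobs.mmap"   # illustrative path
SIZE = 1024 * 1024

with open(PATH, "wb") as f:
    f.truncate(SIZE)      # zero-filled file, so the flag starts at 0

fd = os.open(PATH, os.O_RDWR)
buf = mmap.mmap(fd, SIZE)

# writer side: byte 0 is the "ready" flag, the record starts at byte 1
job = b"job-0001"
buf[1:5] = struct.pack("<I", len(job))   # 4-byte little-endian length
buf[5:5 + len(job)] = job
buf[0] = 1  # flip the flag *last* so a reader never sees a torn record

# reader side (normally another process): spin until the flag is set
while buf[0] != 1:
    pass  # busy-wait; a real version would back off or sched_yield()
(length,) = struct.unpack("<I", buf[1:5])
print(buf[5:5 + length])  # b'job-0001'
```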

after all of these crazy implementations, i basically have absolutely no confidence in the linux kernel, nor in the GNU/Linux POSIX-compliant implementation layered on top of it: i have no confidence that it can handle this kind of load.

so i would be very interested to hear from anyone who has had to design similar architectures, and how they dealt with it.
