Open Source

Ask Slashdot: Where Do You Get (or Share) News About Open Source Projects? 85

Posted by timothy
from the just-start-typing-random-ips dept.
An anonymous reader writes "Now that freshmeat.net / freecode.com no longer accepts updates, I wonder how the Slashdot crowd gets news about new projects, and even about new versions of existing projects. For project managers: where can you announce a new version of your project so that it reaches more than just the people who already know it? Freshmeat / Freecode had all the tools to explore and discover projects, see screenshots (a mandatory feature for any software project, even one with only a console interface or no interface at all), and go to a project's homepage. I subscribed to the RSS feed years ago and sometimes found interesting projects that way. You could replace these tools by subscribing to newsletters or feeds from the projects you follow, but that doesn't cover the discovery part." And do any of the major development / hosting platforms for Free / Open Source projects (GitHub, Launchpad, or Slashdot sister-site SourceForge) have tools you find especially useful for skimming projects of interest?
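
For the release-tracking half of the question, one partial answer worth sketching: most hosting platforms already expose per-project feeds (GitHub, for one, serves Atom for each repository's releases at /releases.atom), so a small poller covers the "new versions of projects you follow" part, if not discovery. A sketch using the feedparser package; the repository list here is purely illustrative:

    import feedparser  # pip install feedparser

    # Illustrative watch list -- substitute the projects you follow.
    # GitHub publishes an Atom feed of releases for every repository at
    # https://github.com/<owner>/<repo>/releases.atom
    REPOS = ["curl/curl", "git/git"]

    for repo in REPOS:
        feed = feedparser.parse("https://github.com/%s/releases.atom" % repo)
        for entry in feed.entries[:3]:        # the three most recent releases
            print("%s: %s (%s)" % (repo, entry.title, entry.link))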

Comment: Re:FUD filled.... (Score 1) 211

It sounds like this transformer had its center tap grounded and was the path to ground on one side of a ground loop: as the geomagnetic field moved under pressure from a CME, a common-mode current was induced in the long-distance power line. A gas pipeline in an area of poor ground conductivity in Russia is also said to have been destroyed this way, reportedly resulting in 500 deaths.

One can protect against this phenomenon with common-mode breakers and perhaps even overheat breakers. The system will not stay up, but neither will it be destroyed. This is a high-current rather than a high-voltage phenomenon, so the various methods used to dissipate lightning currents might not be effective.

Comment: Re:But what IS the point they're making? (Score 1) 301

This is why I regard the basic principle of natural selection as almost a logical tautology. It is so self-evidently true that it can hardly even be called a scientific theory; it is closer to a logical law, like 1 + 1 = 2 (strictly speaking that is a definition, but I'm sure you understand).

Networking

Comcast Carrying 1Tbit/s of IPv6 Internet Traffic 143

Posted by Unknown Lamer
from the hurd-1.0-released dept.
New submitter Tim the Gecko (745081) writes "Comcast has announced 1 Tb/s of Internet-facing, native IPv6 traffic, with more than 30% deployment to customers. With Facebook, Google/YouTube, and Wikipedia up to speed, it looks like we are past the 'chicken and egg' stage. IPv6 adoption by other carriers is looking better too, with AT&T at 20% of their network IPv6-enabled, Time Warner at 10%, and Verizon Wireless at 50%. The World IPv6 Launch site has measurements of global IPv6 adoption."

Comment: This is a surprise? (Score 2) 137

by BCW2 (#47522227) Attached to: Internet Explorer Vulnerabilities Increase 100%
History shows that more than 80% of Windows vulnerabilities are IE-based. Only the gullible and foolish would use such an insecure and worthless piece of crapware. IE has never been good: M$ couldn't even give it away back when Netscape cost money - nobody would use IE even though it was free. M$ had to incorporate it into the OS before it got any real market share.

+ - Letter to Congress: Ending U.S. Dependency on Russia for Access to Space 1

Submitted by Bruce Perens
Bruce Perens (3872) writes "I've sent a letter to my senators and my district's member of Congress this evening, regarding how we can achieve a swifter end to U.S. dependency on the Russians for access to space. Please read my letter, below. If you like it, please join me and send something similar to your own representatives. Find them here and here. — Bruce

Dear Congressperson Lee,

The U.S. is dependent on the Russians for present and future access to space. Only Soyuz can bring astronauts to and from the Space Station. The space vehicles being built by United Launch Alliance are designed around a Russian engine. NASA's own design for a crewed rocket is in its infancy and will not be useful for a decade, if it ever flies.

Mr. Putin has become much too bold because of other nations' dependence on Russia. The recent loss of Malaysia Air MH17 and all aboard is one consequence.

Ending our dependency on Russia for access to space, sooner than we previously planned, has become critical. SpaceX has announced the crewed version of their Dragon spaceship. They have had multiple successful flights and returns to Earth of the un-crewed Dragon and their Falcon 9 rocket, neither of which carries such unfortunate foreign dependencies. SpaceX is pursuing this development using private funds. The U.S. should now support and accelerate that development.

SpaceX has, after only a decade of development, demonstrated many advances over existing and planned paths to space. Recently they have twice successfully brought the first stage of their Falcon 9 rocket back to the ocean surface at a speed that would have allowed a safe landing on ground. They have demonstrated many times the safe takeoff, flight to significant altitude, ground landing, and re-flight of two similar test rockets. In October they plan the touchdown of their rocket's first stage on a barge at sea, and its recovery and re-use after a full flight to space. Should their plan for a reusable first stage, second stage, and crew vehicle be achieved, it could reduce the cost of access to space to perhaps 1/100 of the current "astronomical" price. This would open a new frontier of economical access in a way not witnessed by our nation since the transcontinental railroad. The U.S. should now support this effort and reap its tremendous economic rewards.

This plan is not without risk, and like all space research there will be failures, delays, and eventually lost life. However, the many successes of SpaceX argue for our increased support now, and the potential of tremendous benefit to our nation and the world.

Please write back to me.

Many thanks,

Bruce Perens"

Comment: Re:complex application example (Score 4, Informative) 161

by lkcl (#47493359) Attached to: Linux Needs Resource Management For Complex Workloads

> the first ones used threads and semaphores through python's multiprocessing.Pipe implementation.

> I stopped reading when I came across this.

> Honestly - why are people trying to do things that need guarantees with python?

because we have an extremely limited amount of time as an additional requirement. we can always rewrite critical portions - or, later, the entire application - in c once we have delivered a working system, which means the client can get some money in and can therefore stay in business.

also, i worked with david and we benchmarked python-lmdb after adding support for a looped sequential "append" mode, and got a staggering performance metric of 900,000 100-byte key/value pairs written per second, and a sequential read performance of 2.5 MILLION records per second. the equivalent c benchmark is only around double those numbers. we don't *need* the dramatic performance increase that c would bring if, right now, at this exact phase of the project, we are targeting something that is 1/10th to 1/5th the performance of c.
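
for reference, the rough shape of that benchmark (a sketch only, assuming the py-lmdb package; append=True is only valid because the keys are generated in ascending order):

    import time
    import lmdb  # the py-lmdb bindings: pip install lmdb

    N = 1000000
    VALUE = b"x" * 84  # ~100 bytes per record including the 16-byte key

    env = lmdb.open("/tmp/bench-lmdb", map_size=1 << 30)

    # sequential "append" writes: ascending keys let the cursor use
    # MDB_APPEND and skip the per-put b-tree search
    t0 = time.perf_counter()
    with env.begin(write=True) as txn:
        cur = txn.cursor()
        for i in range(N):
            cur.put(b"%016d" % i, VALUE, append=True)
    print("writes/s:", int(N / (time.perf_counter() - t0)))

    # sequential read-back of the same records
    t0 = time.perf_counter()
    with env.begin() as txn:
        count = sum(1 for _ in txn.cursor())
    print("reads/s:", int(count / (time.perf_counter() - t0)))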

so if we want to provide the client with a product *at all*, we go with python.

but one thing that i haven't pointed out is that i am an experienced linux python and c programmer, having been the lead developer of samba tng from 1997 to 2000. i simply transferred all of the tricks that i know involving while-loops around non-blocking sockets and so on over to python... and none of them helped. if you get 0.5% of the required performance in python, it's so far off the mark that you know something is drastically wrong. converting the exact same program to c is not going to help.

> The fact you have strict timing guarantees means you should be using a realtime kernel and realtime threads with a dedicated network card and dedicated processes on IRQs for that card.

we don't have anything like that [strict timing guarantees] - not for the data itself. the data comes in on a 15-second delay (from an external source that we do not control), so a few extra seconds of delay is not going to hurt.

so although we need the real-time response to handle the incoming data, we _don't_ need the real-time capability beyond that point.

> Taking the incoming messages from UDP and posting them on a message bus should be step one, so that you don't lose them.

.... you know, i think this is extremely sensible advice (which i have heard from other sources too), so it is good to have it confirmed. my concerns, as questions:

* how do you then ensure that the process receiving the incoming UDP messages runs at a high enough priority that the packets are definitely, definitely received? (see the sketch after these questions)

* what support is there in the linux kernel to ensure that this happens?

* is there a system call which *guarantees* that the process receiving data on a UDP socket is woken up as an absolute priority, over and above everything else?

* the destination message queue has to have locking, otherwise it will be corrupted. what happens if the message queue you wish to send the UDP packet to is locked by a *lower*-priority process?

* what support is there in the linux kernel for the lower-priority process to have its priority temporarily boosted until it lets go of the message queue on which the higher-priority task critically depends?
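
for reference on the first three questions, this is about as far as i know how to get with stock linux: put the receiving process in the SCHED_FIFO real-time class and give the socket a large receive buffer. a sketch only (assuming root or CAP_SYS_NICE, python 3.3+ for os.sched_setscheduler, and a hypothetical port):

    import os
    import socket

    # run this process in the SCHED_FIFO real-time class so it pre-empts
    # ordinary SCHED_OTHER processes whenever a datagram wakes it.
    # requires root or CAP_SYS_NICE; valid priorities are 1..99.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # a large kernel-side receive buffer rides out scheduling latency
    # (silently capped at net.core.rmem_max unless that is raised too)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
    sock.bind(("0.0.0.0", 9999))  # hypothetical port

    while True:
        data, addr = sock.recvfrom(65535)
        # hand off to the message bus / dispatcher here

that covers the wake-up side of things; it does nothing for the lock-priority-inversion questions.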

this kind of temporary automatic re-prioritisation (priority inheritance) is exactly what seems to be missing from the linux kernel; it was added to solaris by sun microsystems quite some time ago.

to the best of my knowledge the linux kernel has almost no support for these kinds of very important re-prioritisation requirements: PI futexes do give pthreads a PTHREAD_PRIO_INHERIT mutex protocol, but nothing like that is exposed for message queues, or at the python level.

Comment: complex application example (Score 4, Insightful) 161

by lkcl (#47492919) Attached to: Linux Needs Resource Management For Complex Workloads

i am running into exactly this problem on my current contract. here is the scenario:

* UDP traffic (an external requirement that cannot be influenced) comes in
* the UDP traffic contains multiple data packets (call them "jobs") each of which requires minimal decoding and processing
* each "job" must be farmed out to *multiple* scripts (for example, 15 is not unreasonable)
* the responses from each job running on each script must be collated then post-processed.

so there is a huge fan-out where jobs (approximately 60 bytes each) are coming in at a rate of 1,000 to 2,000 per second; those are multiplied up by a factor of 15 (to between 15,000 and 30,000 per second, each taking very little time in and of itself), and the responses - all 15 to 30 thousand of them per second - must be in order before being post-processed.

so, the first implementation was a single process, and we just about achieved the target of 1,000 jobs per second - but with only about 10 scripts per job.

anything _above_ that rate and the UDP buffers overflow, with no way to know whether data has been dropped. the data is *not* repeated, and there is no back-communication channel.

the second implementation uses a parallel dispatcher. i went through half a dozen different implementations.

the first ones used threads and semaphores through python's multiprocessing.Pipe implementation. the performance was beyond dreadful - it was deeply alarming. after a few seconds performance would drop to zero. strace investigations showed that under heavy load the futex system call was maxed out near 100%.

next came the replacement of multiprocessing.Pipe with unix socket pairs, and of threads with processes, so as to regain proper control over signals, the sending of data, and so on. early variants of that would run absolutely fine up to some arbitrary limit, then performance would plummet to around 1% or less, sometimes staying there and sometimes recovering.
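
the shape of that replacement, roughly (a sketch only; SOCK_DGRAM socketpairs preserve message boundaries, which is what multiprocessing.Pipe gave us, and fork gives us real processes with sane signal handling):

    import os
    import socket

    # one unix datagram socketpair per worker: each send() is one job,
    # each recv() is one whole result
    parent_end, child_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

    pid = os.fork()
    if pid == 0:                          # worker process
        parent_end.close()
        job = child_end.recv(4096)        # one datagram == one job
        child_end.send(b"result:" + job)  # hypothetical processing
        os._exit(0)

    child_end.close()
    parent_end.send(b"job-0001")
    print(parent_end.recv(4096))          # b'result:job-0001'
    os.waitpid(pid, 0)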

next came the replacement of select with epoll, plus the addition of edge-triggered events. after considerable bug-fixing a reliable implementation was created. testing began, and the CPU load slowly cranked up towards the maximum possible across all 4 cores.
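
the receive loop from that round looked roughly like this (a sketch; the crucial edge-triggered detail is that you must drain the socket to EAGAIN after each wake-up, or you lose events):

    import select
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))  # hypothetical port
    sock.setblocking(False)

    ep = select.epoll()
    ep.register(sock.fileno(), select.EPOLLIN | select.EPOLLET)

    def dispatch(job):
        pass  # stub: hand the job to a worker process

    while True:
        for fd, events in ep.poll():       # blocks until an edge fires
            # edge-triggered means one wake-up per burst: drain it all
            while True:
                try:
                    data, addr = sock.recvfrom(65535)
                except BlockingIOError:
                    break
                dispatch(data)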

the performance metrics came out *WORSE* than those of the single-process variant. investigations began and showed a number of things:

1) even though each job is only 60 bytes, the pre-processing required to decide which process to send it to was so great that the dispatcher process was becoming severely overloaded

2) each process was spending approximately 5 to 10% of its time doing actual work and NINETY PERCENT of its time waiting in epoll for incoming work.

this is unlike any other "normal" client-server architecture i've ever seen before. it is much more like the mainframe "job processing" that the article describes, and the linux OS simply cannot cope.

i would have used POSIX shared-memory queues, but the implementation sucks: it is not possible to identify the shared memory blocks after they have been created so that they may be deleted. i checked the linux kernel source: there is no "directory listing" function supplied, and i have no idea how you would even mount the IPC subsystem in order to list what has been created anyway.
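
(for what it's worth, mq_overview(7) does document a way to list them: mount the mqueue filesystem with `mount -t mqueue none /dev/mqueue`, and the queues appear as files; POSIX shared memory objects likewise appear under /dev/shm on glibc. a sketch, assuming that mount is in place:)

    import os

    # assumes `mount -t mqueue none /dev/mqueue` has been run
    # (documented in mq_overview(7)); each POSIX message queue is then
    # a pseudo-file that can be listed, inspected and unlinked
    for name in os.listdir("/dev/mqueue"):
        print("mqueue:", name)

    # POSIX shared memory objects (shm_open) live in /dev/shm on glibc
    for name in os.listdir("/dev/shm"):
        print("shm object:", name)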

i gave serious consideration to using the python LMDB bindings, because they provide an easy API on top of memory-mapped shared memory with copy-on-write semantics. early attempts at that gave dreadful performance; i have not fully investigated why, as it _should_ work extremely well precisely because of the copy-on-write semantics.

we also gave serious consideration to just taking a file, memory-mapping it, appending job data to it, and then using the mmap'd file for spin-locking to indicate when a job is being processed.

with all of these crazy implementations, i basically have absolutely no confidence in the linux kernel, nor in the GNU/Linux POSIX-compliant implementation of the OS on top of it - i have no confidence that it can handle the load.

so i would be very interested to hear from anyone who has had to design similar architectures, and how they dealt with it.

