
Comment Re:Timer wheels (Score 2, Informative) 298

The main reason they are using the buckets is to push sorting costs as far into the future as possible, so that most timers never pay them (most timers are apparently deleted well before expiring). I'd suggest that the major performance gain is due to this lazy sorting, and not because their data structure avoids page faults. (Well, it does avoid page faults that the old linked-list algorithm would have caused, but those page faults come from the sorting done in the original code, which the new code avoids. If timers did not expire, the two approaches would be quite similar: both would generate page faults when sorting the linked lists, which likely have bad locality, and neither would be as good as an I/O-efficient algorithm.)

Using timer wheels as a heap structure wouldn't be appropriate unless many of the objects placed in the heap are removed before reaching the top. If that is not the case, sorting the items from one bucket into the next (sorting a linked list) would cause many page faults whenever the list didn't fit in internal memory. Timer wheels do nothing to address data locality, which is the main problem faced by extremely large heaps. Your mention of in-order access is only true if the lists at each bucket are indeed stored sequentially in memory. That is hard to guarantee unless the space is reserved ahead of time or some dynamic reallocation scheme is used. I read the linked article as implying that simple linked lists were used, which generally have very bad data locality. Even if a linear layout were guaranteed, however, the sorting step when pushing things from one bucket down to the next would incur page faults (assuming the list was too big to fit in memory) unless an I/O-efficient or cache-oblivious sort were used. (Which could easily be done, making an I/O-efficient timer wheel structure.)

The algorithm discussed in the article is for a general purpose heap. In most heaps the main cost is in removing elements from the root of the heap as they bubble up through it, rather than deleting them prematurely (as is the case with timers). Different approaches for fundamentally different problems.
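For concreteness, here is a minimal single-level hashed timer wheel sketched in Python. This is an illustrative toy, not the kernel's actual implementation, and the class and method names are my own. It shows the property discussed above: insertion and cancellation are O(1), and a timer deleted before expiry never participates in any sorting.

```python
class TimerWheel:
    """Toy hashed timer wheel: a ring of slots, each holding
    [rounds_remaining, callback] entries."""

    def __init__(self, num_slots=8):
        self.slots = [[] for _ in range(num_slots)]
        self.now = 0  # ticks elapsed

    def add(self, delay, callback):
        """Schedule callback to fire `delay` ticks from now (delay >= 1)."""
        n = len(self.slots)
        entry = [(delay - 1) // n, callback]  # full wheel revolutions left
        self.slots[(self.now + delay) % n].append(entry)
        return entry  # handle for O(1) cancellation

    def cancel(self, entry):
        entry[1] = None  # lazily discarded on the next pass over its slot

    def tick(self):
        self.now += 1
        idx = self.now % len(self.slots)
        remaining = []
        for entry in self.slots[idx]:
            if entry[1] is None:
                continue           # cancelled early: dropped, never sorted
            if entry[0] > 0:
                entry[0] -= 1      # not due this revolution yet
                remaining.append(entry)
            else:
                entry[1]()         # expired: fire the callback
        self.slots[idx] = remaining
```

A timer cancelled before its slot comes around costs a single attribute write, which is exactly why the structure wins when most timers are deleted before expiring.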

Comment Re:English Please (Score 3, Informative) 117

Not quite true... a 3-sphere is actually the *surface* of a 4-dimensional ball, so not exactly something that lives in our world. In topology the dimension refers to the dimensionality of the surface, not of the space the surface lives in (i.e., a circle drawn on a piece of paper is a 1-sphere, but the surface it was drawn on is 2-dimensional).

Measuring the Speed of Light With Valentine's Day Chocolate 126

Cytotoxic writes "What to do with all of those leftover Valentine's Day chocolates? — a common problem for the Slashdot crowd. The folks over at Wired magazine have an answer for you in a nice article showing how to measure the speed of light with a microwave and some chocolate. A simple yet surprisingly accurate method that can be used to introduce the scientific method to children and others in need of a scientific education."
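The arithmetic behind the trick is tiny; here it is sketched in Python with assumed example numbers: the magnetron frequency (typically 2.45 GHz, printed on the oven's label) and a measured distance between adjacent melted spots, which sit half a wavelength apart.

```python
frequency_hz = 2.45e9   # typical magnetron frequency, from the oven's label
spot_spacing_m = 0.061  # example measured gap between adjacent melted spots

wavelength_m = 2 * spot_spacing_m        # hot spots are half a wavelength apart
c_estimate = wavelength_m * frequency_hz
print(f"c = {c_estimate:.3g} m/s")       # within a few percent of 2.998e8 m/s
```

With careful measurement of the spacing, the estimate typically lands within a few percent of the accepted 299,792,458 m/s.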

Breaking the Squid Barrier 126

An anonymous reader writes "Dr. Steve O'Shea of Auckland, New Zealand is attempting to break the record for keeping deep sea squid alive in captivity, with the goal of being able to raise a giant squid one day. Right now, he's raising the broad squid, Sepioteuthis australis, from egg masses found in seaweed. This is a lot harder than it sounds: the squid he's studying grow rapidly and eat only live prey, which makes it hard to keep the squid themselves from ending up as prey. If his research works out, you might one day be able to visit an aquarium and see giant squid."

Comment Re:Consistent Histories? (Score 1) 365

I've been curious about this... is there any way to generate two giant vats of stored entangled photons, assuming we could pluck them out of each 'vat' in the same order? One stays on Earth, the other travels with the ship. Photons are measured according to some predetermined low-bandwidth schedule, which allows sending little messages; these little messages could then trigger large high-bandwidth messages. The photons have been entangled for as long as the ship has been traveling, but the message is still 'instantaneous' as far as the Earth and the ship are concerned. Is bending the rules of faster-than-light information travel like this possible?

Dying Man Shares Unseen Challenger Video 266

longacre writes "An amateur video of the 1986 Space Shuttle Challenger explosion has been made public for the first time. The Florida man who filmed it from his front yard on his new Betamax camcorder turned the tape over to an educational organization a week before he died this past December. The Space Exploration Archive has since released the video into the public domain in time for the 24th anniversary of the catastrophe. Despite being shot about 70 miles from Cape Canaveral, the shuttle and the explosion can be seen quite clearly. It is unclear why he never shared the footage with NASA or the media. NASA officials say they were not aware of the video, but are interested in examining it now that it has been made available."

Dad Delivers Baby Using Wiki 249

sonamchauhan writes "A Londoner helped his wife deliver their baby by Googling 'how to deliver a baby' on his mobile phone. From the article: 'Today proud Mr Smith said: "The midwife had checked Emma earlier in the day but contractions started up again at about 8pm so we called the midwife to come back. But then everything happened so quickly I realized Emma was going to give birth. I wasn't sure what I was going to do so I just looked up the instructions on the internet using my BlackBerry."'"

Hand Written Clock 86

a3buster writes "This clock does not actually have a man inside, but a flatscreen playing a 24-hour video loop of the artist watching a clock of his own and painstakingly erasing and re-writing each minute. This video was taken at Design Miami during Art Basel Miami Beach 2009."

Comment Re:79% accuracy ... (Score 1) 132

Why do things at a level of granularity of 100,000 lines? Why not get the quantum computer to do the 'repeat x times and pick the most common answer' at each instruction? It'll introduce exactly the same slow down factor, and vastly reduce the chance of error propagation.

Conventional computers already have a certain amount of error that creeps in. Suppose that during a single clock tick there is some chance P of an error occurring. Find the 'repetition factor' n such that the quantum computer guarantees no more than that same error P, and you've got a quantum computer that gives you exact answers (over arbitrarily long programs) with the same chance of error as a conventional computer running the same number of operations. It becomes an issue of speed rather than of errors.

Having to repeat each fundamental step some large number of times slows down the computer. (To get 1/10^40 chance of error, repeat the calculation ~400 times given any individual calculation has a 21% chance of failure.) However, this will disappear as the underlying hardware is improved, which it invariably will be (given more years of development time, I'm sure errors will be reduced from 21% to far below 1%).

Many real world problems are already built around this notion, whereby we only know the answer to be correct with a high degree of confidence (see 'Monte Carlo algorithms'). The underlying process only needs to be correct more often than it's wrong (it could be 50% + epsilon versus 50% - epsilon) and we can just keep repeating the calculation until we get any arbitrary accuracy we wish.
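To make "repeat and take the most common answer" concrete, here is a sketch (function names are mine; the 79%/21% split is the figure from the article) that computes the exact error of a majority vote via the binomial tail, and the number of repetitions needed to hit a target error:

```python
from math import comb

def majority_error(p, trials):
    """Probability that the majority of `trials` independent runs is wrong,
    when each run is independently correct with probability p."""
    q = 1 - p
    return sum(comb(trials, k) * p**k * q**(trials - k)
               for k in range(trials // 2 + 1))

def repetitions_needed(p, target_error, max_trials=10_001):
    """Smallest odd number of runs whose majority vote errs at most
    target_error (error shrinks geometrically once p > 1/2)."""
    for trials in range(1, max_trials, 2):
        if majority_error(p, trials) <= target_error:
            return trials
    raise ValueError("target not reached within max_trials")

print(repetitions_needed(0.79, 0.01))   # 13 runs suffice for <1% error
```

The geometric decay is the key point: each extra batch of repetitions multiplies the error down, so any accuracy is reachable so long as the per-run success rate exceeds 50%.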

Comment Re:79% accuracy ... (Score 4, Informative) 132

No, it's actually a perfectly reasonable idea. Consider running the device (n+m) times. The probability of it being right n times and wrong m times is given by:

P(n,m) = (n+m)! / (n! m!) * 0.79^n * 0.21^m

Now consider the probability of it being right (the majority has the right answer) out of 2n+1 trials. This is given by:

S(n) = sum( P(n+1+i,n-i), i=0..n )

This can be simplified to a closed form using Legendre and gamma functions, but that's kind of messy and it's far easier to just plug in values and do the summation. As it turns out, doing the experiment 15 times and taking the majority (plugging 7 into S(n)) will give you the correct answer 99.4% of the time. Doing it 35 times pushes the accuracy past 99.99%... completely reasonable in my books.
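The summation above is easy to check numerically; a sketch (using the article's 79% figure, function name mine):

```python
from math import comb

def S(n, p=0.79):
    """Probability that at least n+1 of 2n+1 independent runs are correct,
    i.e. that the majority vote over 2n+1 trials gives the right answer."""
    trials = 2 * n + 1
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(n + 1, trials + 1))

print(round(S(7), 4))    # 15 trials -> 0.9942, the 99.4% quoted above
print(round(S(17), 5))   # 35 trials -> ~0.99993
```

Summing the tail directly sidesteps the closed form entirely, and `math.comb` keeps the binomial coefficients exact.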

Comment Re:confiuration (Score 1) 306

Assuming you've got a reasonably modern version of X and a somewhat capable video card, xrandr does exactly what you're looking for. Mind you, it's a command-line application, but it's definitely not hard to use. A frequent Ubuntu contributor made a nice little GTK GUI front-end to it called urandr, which does exactly what you want (configure per-output rotation, resolution, etc.). The only caveat is that you need to have configured X with a big enough virtual screen size (Screen section, Display subsection, Virtual keyword) to support any anticipated output resolutions (the combined size of the entire desktop across all outputs).
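As a sketch, the relevant xorg.conf fragment might look like the following (the identifiers and sizes are example values; the Virtual line reserves a framebuffer large enough for the combined desktop):

```
Section "Screen"
    Identifier "Screen0"
    Device     "Card0"
    SubSection "Display"
        # room for two 1920x1080 outputs side by side
        Virtual 3840 1080
    EndSubSection
EndSection
```

After restarting X with the larger virtual size, a command such as `xrandr --output VGA-1 --rotate left` (output names vary; list yours with `xrandr -q`) applies per-output rotation.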
