

Comment: Re:White board is and will always be the best way (Score 1) 164

by Whiternoise (#49158785) Attached to: Ask Slashdot: Whiteboard Substitutes For Distributed Teams?

I'm pretty sure any distributed solution is going to need to be connected to a computer. The computer will probably cost much less than the board itself; those things are pricey.

Seems like this is exactly what the OP needs, although it's not clear if they all work from home, which would make it a lot more expensive.

Comment: Re:I'm not too impressed with the depth camera (Score 1) 120

Well, for a stereo system you can't claim the same 98% accuracy at two different distances! I found a presentation where the baseline is given as 75mm:

We still don't know what the cameras are, or the focal length, but I'm sure we'll find out eventually. For now we can use: relative error = Z/(75e-3 * 900), where 900 is the focal length in pixels divided by the minimum measurable disparity. This turns out to be almost exactly right with respect to Dell's numbers.

So at 3 feet = 0.91 m, we expect around 98.5% accuracy. At 15 feet we get around 93%, at 20 feet 90%, and so on from there. At 30 m we're down to around 55% - not good enough for mapping, but maybe good enough for background segmentation.
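As a sanity check, here's that estimate as a quick sketch. The 75mm baseline comes from the presentation; treating 900 as the effective focal-length-over-minimum-disparity factor is my assumption, so the numbers are ballpark only:

```python
def relative_accuracy(z_m, baseline_m=75e-3, f_over_min_disp=900.0):
    """Stereo accuracy as a fraction (1.0 = perfect) at range z_m.

    relative error = Z / (b * f/d_min); accuracy = 1 - relative error.
    """
    return 1.0 - z_m / (baseline_m * f_over_min_disp)

for feet in (3, 15, 20):
    z = feet * 0.3048  # feet to metres
    print(f"{feet} ft ({z:.2f} m): {100 * relative_accuracy(z):.1f}% accuracy")
```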

I think it was poorly advertised. Stereo imaging is great for high-density 3D measurement, but it sucks at long distances unless you have huge baselines. In case you're wondering, satellites use different points along their orbits to get wide enough distances between the shots (kilometre-scale baselines). RealSense works well for things like background detection - you look for any pixel that has near-zero parallax - or for close-up work, e.g. face scanning or augmented reality on a tabletop.

Unfortunately what happens when this sort of thing gets released is everyone, rightly, assumes that they can do stuff like measure buildings. In reality, the technology simply doesn't work like that.

The problem is compounded when people complain that it only works in good lighting. Well sure, but how do you think this system works? Intel recently bought TYZX, a 3D imaging company. What was their main product? An ASIC that performs stereo correlation in real time without any drain on the host processor. So we can be 90% sure that this is what's inside RealSense. It's not like the Kinect or the other RealSense camera that projects an IR pattern into the scene. The point here is that stereo matchers require strong signals in order to get good matching accuracy (which pixel in the other image does this pixel correspond to?). If you take a picture with your crappy tablet cameras, it's going to have shot noise, JPEG artifacts (maybe), dark noise and probably the gain is through the roof. All this means it's almost impossible to accurately match pixels between the images so you can't measure distances accurately either.

There's a reason why all the promo shots are taken on bright sunny days!

Comment: I'm not too impressed with the depth camera (Score 4, Informative) 120

The reviewer should be embarrassed, and so should you for not reading up on RealSense, but it's probably unintentional.

The error arises because stereo depth error is quadratic: it degrades as the square of the distance from the sensors. The distance (baseline) between the cameras in a RealSense unit is so small that any distance measured beyond a few metres is inaccurate. It was a stupid thing to demonstrate, but it shows that many reviewers (and users, it seems) don't understand the limitations of 3D measurement systems. For this reason, Intel clearly states that RealSense is only good up to 10m (and even then I would be sceptical that it works well beyond 5).

This is easily verifiable with your own eyes. As an object gets further away, it becomes harder and harder to judge its distance because the optical parallax of the object tends to zero (i.e. it appears in the same x-position on each of your 'sensors'). Try it next time you're in a car or on a train: nearby objects appear to whizz past while background features like mountains and hills remain stationary.

Specifically, the error equation is dZ = Z^2/(bf). (The distance measurement itself is Z = bf/d, where d is the disparity (parallax) in pixels.)

Where dZ is the distance error, Z is the target distance, b is the baseline and f is the focal length in pixels. I've assumed that you can detect correspondences to within one pixel, realistically it'll be better than that for a competent stereo matching algorithm. Now in this case Z is several hundred metres, b is order 100mm and f order 1000px.

Do the maths: 100^2/(100e-3 * 1000) = around 100m of error at 100m. At 5m? It's around 25cm, and at 1m it's 1cm. The actual numbers will be different because I don't know the exact baseline or the focal length, but I can tell you for sure that the cameras aren't high enough resolution for that to make a significant difference to the accuracy.
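Checking that arithmetic with a short sketch (the 100mm baseline and 1000px focal length are the order-of-magnitude guesses above, not Intel's actual specs):

```python
def depth_error(z_m, baseline_m=100e-3, focal_px=1000.0, disp_res_px=1.0):
    """Depth uncertainty in metres: dZ = Z^2 * d_res / (b * f)."""
    return z_m ** 2 * disp_res_px / (baseline_m * focal_px)

print(depth_error(100.0))  # ~100 m of error at 100 m
print(depth_error(5.0))    # ~25 cm at 5 m
print(depth_error(1.0))    # ~1 cm at 1 m
```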

Comment: Re:DVD (Score 1) 251

by Whiternoise (#48919645) Attached to: Ask Slashdot: Best Medium For Personal Archive?
How does that compare to commercial DVDs that you've bought? I have movie DVDs, PS1 games and PS2 games that still play perfectly. My Dad's CD collection is older than me and it's still fine. It seems that it's the quality of the disc and the way it's burned that makes a difference, rather than the medium itself. That may not help much for home backups, but there is plenty of evidence (my house is full of it) that disc-based media lasts for decades. On the other hand, I too have tried to read discs that I burned maybe 10 years ago, and all of them are corrupt.

Comment: Re:Not much aperture (Score 1) 19

by Whiternoise (#48825357) Attached to: Exoplanet Hunting NGTS Telescope Array Achieves First Light

In the case of NGTS and SuperWASP, most of the time the telescopes aren't looking at the same target. The purpose of this array is to observe large swathes of the sky simultaneously, so each camera has a distinct field of view of around 8x8 degrees, which can be mosaicked together. In principle they could also observe a target simultaneously in different filter bands, but I think normally they would pass that duty over to the VLT to gather much more light.

Also, there are plenty of telescopes in the 1-2m class that do not have adaptive optics; if your location is good enough you can get close to the atmospheric seeing limit (about half an arcsecond), which is still nice. Adaptive optics lets you get down to the diffraction limit of your optics (which is usually many times finer). Most people observe near sea level, where the atmosphere is nice and thick, so the seeing is awful. If you go up a mountain things get a lot better!

Comment: Re:Not much aperture (Score 2) 19

by Whiternoise (#48825257) Attached to: Exoplanet Hunting NGTS Telescope Array Achieves First Light
Exposure times on SuperWASP are around 30 seconds according to them. The sensor quantum efficiency is 90%, so it's close to counting photons (don't quote me! - I think in practice it's a bit more complex). They're multi-stage Peltier-cooled, backthinned, e2v, blah blah blah. Plus other amazing things like 1% linearity over the whole dynamic range, around 20 electrons readout noise and so on.

Comment: Re:Not much aperture (Score 2) 19

by Whiternoise (#48823303) Attached to: Exoplanet Hunting NGTS Telescope Array Achieves First Light
Also remember that these are typically aperture photometry measurements, so the peak pixel could be 20,000 counts and you have an 8-16 pixel neighbourhood that also contributes, so you could easily get 100,000 counts within your aperture for a single exposure. The dark noise on the SuperWASP CCDs is extremely low: 72 electrons per pixel per hour.
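For illustration, here's a toy aperture sum over a synthetic Gaussian star with a 20,000-count peak - the PSF width and aperture radius are invented, but it shows how the aperture total easily exceeds the peak pixel:

```python
import numpy as np

def aperture_sum(image, cx, cy, radius):
    """Sum all pixel counts within a circular aperture centred on (cx, cy)."""
    yy, xx = np.indices(image.shape)
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    return image[mask].sum()

# Synthetic star: Gaussian PSF, sigma = 1.5 px, peaking at 20,000 counts.
yy, xx = np.indices((21, 21))
star = 20000.0 * np.exp(-((xx - 10) ** 2 + (yy - 10) ** 2) / (2 * 1.5 ** 2))

total = aperture_sum(star, cx=10, cy=10, radius=4)
print(f"counts inside aperture: {total:.0f}")  # far more than the 20,000 peak
```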

Comment: Re:Not much aperture (Score 4, Informative) 19

by Whiternoise (#48821479) Attached to: Exoplanet Hunting NGTS Telescope Array Achieves First Light

See my other post for more info - particularly the bit about why we'd use this over a satellite.

A major pro for a dedicated array is that it doesn't have anything else it should be observing. Normally these things are very wide-field by telescope standards. SuperWASP used off-the-shelf SLR lenses (good ones, mind: Canon 200mm f/1.8s) to create a mosaicked wide view of the sky. They also used a lot of very expensive (Andor) CCDs. A smaller amateur telescope, e.g. a 3" refractor, might have a focal length of 400mm or so. The field of view of SuperWASP is around 22 x 22 degrees - that is ridiculously wide. The CCDs were 2048px square, so we're not talking about high magnifications on deep objects here. These systems are not fast point-to-point scanners. They're huge eyes watching large chunks of the sky continually, pumping out gigabytes of data every night.

NGTS has similar specs to SuperWASP: 200mm focal length covering a field of around 10 x 10 degrees. Note that the mounts are also off the shelf, but super expensive for amateurs.

As I mentioned, 1/1000 isn't that amazing. If you expose so your target gives you 15,000 counts and you take a measurement per second, then you can easily get a nice high signal-to-noise ratio over a time scale of minutes. The star, once you correct it against some stable reference target and allow for atmospheric extinction, should have an essentially flat brightness, so any dip is noticeable.
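A back-of-envelope sketch of that signal-to-noise argument, assuming pure photon (shot) noise and the 15,000 counts per one-second exposure figure above - real data also carry readout noise, scintillation and so on:

```python
import math

def dip_snr(counts_per_exp, n_exposures, dip_fraction=1e-3):
    """SNR for detecting a fractional dip in binned, photon-limited data."""
    total = counts_per_exp * n_exposures   # binned counts
    noise = math.sqrt(total)               # photon (shot) noise
    signal = dip_fraction * total          # counts lost during the dip
    return signal / noise

for minutes in (1, 5, 10):
    snr = dip_snr(15000, minutes * 60)
    print(f"{minutes} min of binned 1s exposures: SNR ~ {snr:.1f}")
```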

After this it's down to PhDs and postdocs to sift through all the data, write automatic routines to generate light curves for all the stars and so on. Google sextractor - don't worry, it's SFW ;)

Comment: Not much aperture (Score 4, Interesting) 19

by Whiternoise (#48821261) Attached to: Exoplanet Hunting NGTS Telescope Array Achieves First Light

I would say it's observation time on thousands of potential targets. Who's going to do it?

You don't need adaptive optics or anything fancy; exoplanet hunting is (mostly) measuring quantities of light. Whether that light's been bent a little through the atmosphere and lands on a nearby pixel makes little difference - all you end up doing is using a larger photometric aperture (a circle of pixels that you consider to be the star). Adaptive optics is useful for other things, but for transit detection, meh. Observatories regularly defocus stars (into donut shapes) if they're getting too much light from a star in the field - this is a surprisingly common problem with huge mirrors.

You can observe exoplanet transits with a DSLR and a small telescope if you have the patience. It's a matter of finding bright stars. Again, you're not going for high resolution or magnification, you're just measuring light. By taking repeated observations, binning your data, and phase-folding (plotting the data as a function of orbital phase), you can increase your signal to noise. The signal is maybe 0.1% of the light, but if you measure 1,000,000 counts then that 1,000-count dip is probably above the noise.
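A sketch of that phase-folding idea on synthetic data - the period, transit depth and noise level here are invented for illustration, but it shows how binning the folded light curve pulls a 0.1% dip out of much larger per-point noise:

```python
import numpy as np

rng = np.random.default_rng(0)
period, depth = 3.2, 0.001                   # days, fractional transit depth
t = np.arange(0, 320, 0.01)                  # ~100 orbits of observations
phase = (t % period) / period                # fold onto orbital phase [0, 1)

flux = 1.0 + rng.normal(0, 0.005, t.size)    # flat star + 0.5% noise
flux[(phase >= 0.50) & (phase < 0.52)] -= depth  # inject the transit

# Bin the folded curve: stacking ~100 orbits beats down the noise.
bins = np.linspace(0, 1, 51)
idx = np.digitize(phase, bins) - 1
binned = np.array([flux[idx == i].mean() for i in range(50)])

print(f"out of transit: {binned[0]:.4f}, in transit: {binned[25]:.4f}")
```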

Big observatories cost a lot of money to run and are highly competitive. If you have an extremely strong case for a follow-up observation (e.g. Kepler spotted something and you want to observe it further) then you can get time, but really we'd like surveys that will stare at hundreds of thousands of stars for months on end. Amateur networks like the AAVSO (variable stars) are very valuable because they provide free, virtually continuous data for hundreds of stars. It's simple, boring work that isn't feasible with big-shot observatories; it would be a waste of instrument capabilities.

Satellites can do this, but they can't store the data, they normally only provide flags that say "this star looks like a good candidate". So the benefit of something like this telescope array is that it can generate vast amounts of data (continuously) and we can actually store it for processing later.

Comment: Re: Perfect? Really? (Score 2) 340

by Whiternoise (#48771407) Attached to: Researchers "Solve" Texas Hold'Em, Create Perfect Robotic Player
Might be a good way to detect cheaters, though: if the poker house has a copy of Cepheus running, it would be able to detect whether a player was betting perfectly every single time. Then it gets philosophical - should you ban someone for playing perfectly? Is it illegal? After all, you don't know anything about the hidden cards, nor do you have any control over them. It'll probably end up like card counting.

Comment: Re:I guess that means ... (Score 2) 340

by Whiternoise (#48771367) Attached to: Researchers "Solve" Texas Hold'Em, Create Perfect Robotic Player
In a casino, luck still plays a significant role because you don't have the luxury of "as many hands as necessary" (or unlimited money). If you (a human) get a royal flush and the computer gets a pair ten times in a row, as fantastically unlikely as that is, you're going to walk away with all the money every time. The point is that it will always play optimally, and eventually statistics will win out and you'll lose to it. Also note that although it plays perfectly, it's not necessarily as profitable as a human player, since it won't attempt to capitalise when you make an error.

Comment: Re:Stars or noise (Score 4, Insightful) 97

by Whiternoise (#48744419) Attached to: Hubble Takes Amazing New Images of Andromeda, Pillars of Creation
That's the amazing thing about this image - from a low zoom level it looks like CCD shot noise. Then you realise that the zoom slider is fully out and you can go in.. and in.. and in.. until you see that the noise isn't noise, it's actually all stars. You can verify this by panning to the edge of the frame where the galaxy is far less dense and you can see stars with the (low noise) darkness of the universe behind them.

Comment: Re:Understanding the Indian retailers. (Score 1) 53

by Whiternoise (#48440335) Attached to: Indian Brick-and-Mortar Retailers Snub Android One Phones

Read the rest of the summary:

When sales did not meet their expectations, they decided to release their products via the brick-and-mortar store channel. However, smaller retailers and mom-n-pop shops have decided to show their displeasure at having been left out of the launch by deciding not to stock Android One.

Comment: Re:Dubious claims (Score 1) 169

by Whiternoise (#48415973) Attached to: City of Toronto Files Court Injunction Against Uber
Las Vegas has around 600k residents; it's only just off the list. Wiki puts it at 603,488 (2013) and Vancouver, at the bottom of the list, at 603,502 (2011). They're essentially the same size; 14-odd people is a statistical blip, even though it's silly trying to compare census data from different years.

Promising costs nothing, it's the delivering that kills you.