
Comment Re:FUD. FUD everywhere. (Score 1) 141

Technical mumbo jumbo (you are reading Slashdot, by the way) is exactly why fingerprint scanning used for usernames on a specific system isn't a privacy concern: the data are useless when taken out of context. Unless you take a full ink/digital copy of the fingerprint, the data collected by the system are worthless because you can't use them anywhere else. The other point is that your fingerprints should not be considered secret. They are trivial to steal simply by following you to a café and swiping your glass once you're finished - unless you insist on wearing latex gloves everywhere.

In terms of tracking, the issue is not so much "why are they tracking students", but whether biometric tracking offers a significant improvement over standard RFID cards without added risk of private data being leaked everywhere. The problem people seem to have here is that the food data is being linked to people (via census data) and then shared with the authorities. In this case they actually seem to be interested in tracking what kids eat in order to improve school meals.

Your argument boils down to: "I'm too lazy to consider how the system actually works, but it must be bad, right? Oh noes, the gubmint has the data too!"

Comment FUD. FUD everywhere. (Score 1) 141

Let's play devil's advocate here. I've given up my fingerprints to Japan upon entry as a tourist. I did the same for the USA. Oh well. Fingerprinting is so routine nowadays that anyone who travels internationally will fall foul of it eventually. Like it or not, sooner or later it'll happen to you. Does it have to be bad?

This sort of scheme has been done in the UK too, for secondary schools. The biometric systems replace ID cards, which get lost, stolen and so on. There is another argument that biometrics hide who gets free school meals, which prevents bullying. The key point here is that these systems do not record your fingerprint in the same way that law enforcement do. They take a temporary image, create something like a hash (it's not a hash, but it's a similar concept) from some characteristic features and then compare that to whatever is in the database. While that certainly identifies you and you're now explicitly linked to the food you bought, it's not something that could then be used to forge a national ID card. Is the 'hash' from this system interoperable with a competing system? Who knows, probably not. At most you could forge an input to that particular biometric system.
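
To make the 'hash-like' idea concrete, here's a toy sketch (entirely hypothetical - not any vendor's actual template format) of what storing and comparing characteristic features might look like. Note it's a fuzzy similarity score over a handful of feature points, not an exact hash, and the stored template can't be turned back into a fingerprint image:

```python
# Hypothetical sketch: a 'template' of characteristic feature points, matched by
# proximity rather than exact equality - hash-like in spirit, but not a real hash.
from math import hypot

def match_score(template, probe, tol=6.0):
    """Fraction of stored feature points with a nearby point in the fresh scan."""
    hits = 0
    for (x, y) in template:
        if any(hypot(x - px, y - py) < tol for (px, py) in probe):
            hits += 1
    return hits / len(template)

enrolled = [(12, 40), (55, 18), (30, 77), (64, 64)]   # stored at enrolment
scan     = [(13, 41), (54, 20), (29, 75), (90, 10)]   # fresh scan, slightly shifted
print(match_score(enrolled, scan) > 0.7)              # accept if most points line up
```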

So they feed back this data to the government. What is the data? Is it a scan of the finger that would hold up in court? Or is it just some hash-like identifier, linked to the student's name and the food they bought? If it's the latter, the privacy risks are questionable, the scheme is opt-in for now, and the same issues would be there if a standard RFID card was used instead.

Comment Re:camera shake? (Score 2) 21

No, the building stays the same colour. Very simply, consider a particular feature on the building. The location of that feature will shift between adjacent pixels in the image if the building moves relative to the camera. When this happens the pixels change colour (e.g. a 'sky' pixel might now be a 'building' pixel).

The technique can be exploited for other things like blood flow, but in general things don't change colour as they move - unless they're travelling really fast.
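
If you want to play with the idea yourself, a bare-bones version of the trick (my own rough sketch, not the MIT code linked below, which also uses spatial pyramids) is just a temporal band-pass filter applied to each pixel's time series, with the filtered signal amplified and added back:

```python
# Rough sketch of the Eulerian idea: band-pass each pixel over time, amplify, add back.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify(frames, fps, lo=0.5, hi=3.0, alpha=20.0):
    """frames: (T, H, W) float array of greyscale video; fps: frame rate in Hz."""
    b, a = butter(2, [lo / (fps / 2), hi / (fps / 2)], btype="bandpass")
    subtle = filtfilt(b, a, frames, axis=0)   # tiny periodic changes at each pixel
    return frames + alpha * subtle            # exaggerate them so the motion becomes visible
```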

And I've noticed this a lot in recent submissions: tons of second- or third-hand sources that aren't terribly useful.

http://people.csail.mit.edu/mrub/vidmag/: Original source for Eulerian video magnification

Comment Re:White board is and will always be the best way (Score 1) 164

I'm pretty sure any distributed solution is going to need to be connected to a computer. The computer is probably going to cost much less than the board itself; those things are pricey.

http://smarttech.com/Home+Page/Solutions/~/link.aspx?_id=BCF4121A410B48A79C89A8700775DC8B&_z=z

Seems like this is exactly what the OP needs, although it's not clear if they all work from home, which would make it a lot more expensive.

Comment Re:I'm not too impressed with the depth camera (Score 1) 120

Well for a stereo system you can't claim 98% accuracy between two distances! I found a presentation where the baseline is given as 75mm: https://intel.lanyonevents.com....

We still don't know what the cameras are, or the focal length, but I'm sure we'll find out eventually. For now we can use: relative error = Z/(75e-3 * 900), i.e. relative accuracy = 1 - Z/(b*f). Note that 900 is the focal length in pixels divided by the minimum measurable disparity (assumed to be one pixel here). This turns out to be almost exactly right with respect to Dell's numbers.

So at 3 feet = 0.91m, we expect around 98.5%. At 15 feet we get around 93%, at 20 feet 90% and so on from there. At 30m we're at around 50% precision, not good enough for mapping, but maybe good enough for background segmentation.
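
A quick sanity check of those figures, assuming the 75mm baseline, a 900px effective focal length and a one-pixel disparity error as above:

```python
# Back-of-the-envelope stereo accuracy, assuming b = 75 mm, f = 900 px, 1 px disparity error.
b, f = 0.075, 900.0

def accuracy_pct(z_m):
    return 100.0 * (1.0 - z_m / (b * f))     # relative error is Z / (b*f)

for z in (0.91, 4.57, 6.1, 30.0):            # 3 ft, 15 ft, 20 ft, 30 m
    print(f"{z:5.2f} m -> {accuracy_pct(z):5.1f}% accuracy")
```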

I think it was poorly advertised. Stereo imaging is great for high density 3D measurement, but it sucks at long distances unless you have huge baselines. In case you're wondering, satellites take their shots from different points along the orbit to get wide enough distances between them (kilometre scale baselines). RealSense works well for close-up work, e.g. face scanning or augmented reality on a tabletop, and for things like background detection - you look for any pixel which has essentially zero parallax.

Unfortunately what happens when this sort of thing gets released is everyone, understandably, assumes that they can do stuff like measure buildings. In reality, the technology simply doesn't work like that.

The problem is compounded when people complain that it only works in good lighting. Well sure, but how do you think this system works? Intel recently bought TYZX, a 3D imaging company. What was their main product? An ASIC that performs stereo correlation in real time without any drain on the host processor. So we can be 90% sure that this is what's inside RealSense. It's not like the Kinect or the other RealSense camera that projects an IR pattern into the scene. The point here is that stereo matchers require strong signals in order to get good matching accuracy (which pixel in the other image does this pixel correspond to?). If you take a picture with your crappy tablet cameras, it's going to have shot noise, JPEG artifacts (maybe), dark noise and probably the gain is through the roof. All this means it's almost impossible to accurately match pixels between the images so you can't measure distances accurately either.
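
For the curious, naive stereo correlation is just "slide a patch along the same row in the other image and keep the best match". A toy version (nothing like a real-time ASIC, obviously) looks something like this, and it makes it clear why noisy, high-gain images wreck the matching:

```python
# Toy stereo matcher: for a patch in the left image, find the horizontal offset
# (disparity) in the right image that minimises the sum of squared differences.
import numpy as np

def disparity_at(left, right, y, x, patch=5, max_d=64):
    half = patch // 2
    ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(0, min(max_d, x - half) + 1):
        cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(float)
        cost = np.sum((ref - cand) ** 2)     # sensor noise inflates this even for the true match
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d                            # depth then follows from Z = b*f / disparity
```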

There's a reason why all the promo shots are taken on bright sunny days!

Comment I'm not too impressed with the depth camera (Score 4, Informative) 120

The reviewer should be embarrassed, and so should you for not reading up on RealSense, but it's probably unintentional.

The error arises because stereo depth accuracy is quadratic: it degrades with the square of the distance from the sensor. The distance (baseline) between the cameras in a RealSense unit is so small that any distance measured beyond a few metres is inaccurate. It was a stupid thing to demonstrate, but it shows that many reviewers (and users, it seems) don't understand the limitations of 3D measurement systems. For this reason, Intel clearly states that RealSense is only good up to 10m (and even then I would be sceptical that it works well beyond 5).

This is easily verifiable with your eyes. As an object gets further away, it becomes harder and harder to determine its distance because the optical parallax of the object tends to zero (i.e. it appears in the same x-position on each of your 'sensors'). Try it next time you're in a car or on a train: we all know that nearby objects appear to whizz past while background features like mountains and hills remain stationary.

Specifically, the error equation is dZ = Z^2/(b*f) (the distance measurement itself is Z = b*f/d, where d is the disparity (parallax) in pixels)

Where dZ is the distance error, Z is the target distance, b is the baseline and f is the focal length in pixels. I've assumed that you can detect correspondences to within one pixel; realistically it'll be better than that for a competent stereo matching algorithm. Now in this case Z is on the order of 100 metres or more, b is of order 100mm and f of order 1000px.

Do the maths: 100^2/(100e-3 * 1000) = around 100m of error at 100m range. At 5m? It's around 25cm, and at 1m it's about 1cm. The actual numbers will be different because I don't know the exact baseline or the focal length, but I can tell you for sure that the cameras aren't high enough resolution for that to make a significant difference to the accuracy.
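
If you want to plug in your own numbers (remember the baseline and focal length above are guesses on my part):

```python
# Depth error dZ = Z^2 / (b * f), assuming a guessed 100 mm baseline and 1000 px focal length.
b, f = 0.100, 1000.0

def depth_error(z_m):
    return z_m ** 2 / (b * f)

for z in (1.0, 5.0, 100.0):
    print(f"Z = {z:6.1f} m -> error ~ {depth_error(z):.3f} m")
```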

Comment Re:DVD (Score 1) 251

How does that compare to commercial DVDs that you've bought? I have movie DVDs, PS1 games and PS2 games that still play perfectly. My Dad's CD collection is older than me and it's still fine. It seems that it's the quality of the disc and the way it's burned that makes the difference, rather than the medium itself. That may not help much for home backups, but there is plenty of evidence (my house is full of it) that disc-based media lasts for decades. On the other hand, I too have tried to read discs that I burned maybe 10 years ago and they're all corrupt.

Comment Re:Not much aperture (Score 1) 19

In the case of NGTS and SuperWASP most of the time the telescopes aren't looking at the same target. The purpose of this array is to observe large swathes of the sky simultaneously so each camera has a distinct field of view of around 8x8 degrees which can be mosaiced together. In principle they could also observe a target simultaneously in different filter bands, but I think normally they would pass that duty over to the VLT to gather much more light.

Also there are plenty of telescopes in the 1-2m class that do not have adaptive optics; if your location is good enough then you can get close to the atmospheric seeing limit (about half an arcsecond), which is still nice. Adaptive optics lets you get down to the diffraction limit of your optics (which is usually much finer than the seeing). Most people observe near sea level where the atmosphere is nice and thick, so the seeing is awful. If you go up a mountain things get a lot better!

Comment Re:Not much aperture (Score 2) 19

Exposure times on SuperWASP are around 30 seconds according to them. The sensor quantum efficiency is 90%, so it's close to counting photons (don't quote me!); I think in practice it's a bit more complex. They're multi-stage-Peltier cooled, backthinned, e2v, blah blah blah. Plus other amazing things like 1% linearity over the whole dynamic range, around 20 electrons readout noise and so on. http://arxiv.org/abs/astro-ph/...

Comment Re:Not much aperture (Score 2) 19

Also remember that these are typically aperture photometry measurements, so the peak pixel could be 20,000 counts and there's an 8-16 pixel neighbourhood that also contributes, so you could easily get 100,000 counts within your aperture in a single exposure. The dark noise on the SuperWASP CCDs is extremely low: 72 electrons per pixel per hour.
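
Aperture photometry itself is nothing magic; a stripped-down version (ignoring background annuli, flat-fielding and all the other calibration you'd really do) is just "sum every pixel within some radius of the star":

```python
# Bare-bones aperture photometry: sum the counts within a radius of the star's centre.
import numpy as np

def aperture_counts(image, cx, cy, radius=3.0):
    yy, xx = np.indices(image.shape)
    in_aperture = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    sky = np.median(image)                        # crude per-pixel background estimate
    return np.sum(image[in_aperture] - sky)       # net counts inside the aperture
```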

Comment Re:Not much aperture (Score 4, Informative) 19

See my other post for more info - particularly the bit about why we'd use this over a satellite.

A major pro for a dedicated array is that it doesn't have anything else it should be observing. Normally these things are very wide-field by telescope standards. SuperWASP used off the shelf SLR lenses (good ones, mind, Canon 200mm f/1.8s) to create a mosaiced wide view of the sky. They also used a lot of very expensive (Andor) CCDs. Smaller amateur telescopes, e.g. a 3" refractor, might have a focal length of 400mm or so. The field of view of SuperWASP is around 22 x 22 degrees - that is ridiculously wide. The CCDs were 2048px square, so we're not talking about high magnifications on deep objects here. These systems are not fast point-to-point scanners; they're huge eyes watching large chunks of the sky continually, pumping out gigabytes of data every night.

NGTS has similar specs to SuperWASP, 200mm focal length covering a field of around 10 x 10 degrees. http://www.ngtransits.org/tele.... Note that the mounts are also off the shelf, but super expensive for amateurs http://www.astrosysteme.at/.

As I mentioned, 1/1000 isn't that amazing. If you expose so your target gives you 15,000 counts and you take repeated exposures, then you can easily get a nice high signal to noise over a time scale of minutes. The star, once you correct it against some stable reference target and allow for atmospheric extinction, should have essentially flat brightness, so any dip is noticeable.

After this it's down to PhDs and postdocs to sift through all the data, write automatic routines to generate light curves for all the stars and so on. Google 'sextractor' - don't worry, it's SFW ;)

Comment Not much aperture (Score 4, Interesting) 19

I would say it's observation time on thousands of potential targets. Who's going to do it?

You don't need adaptive optics or anything fancy, exoplanet hunting is (mostly) measuring quantities of light. Whether that light's been bent a little through the atmosphere and lands on a nearby pixel makes little difference. All you end up doing is using a larger photometric aperture (a circle of pixels that you consider to be the star). Adaptive optics is useful for other things, but for transit detection, meh. Observatories regularly defocus stars (into donut shapes) if they're getting too much light from a star in the field - this is a surprisingly common problem with huge mirrors.

You can observe exoplanet transits with a DSLR and a small telescope if you have the patience. It's a matter of finding bright stars. Again, you're not going for high resolution or magnification, you're just measuring light. By taking repeated observations, binning your data and phase-folding (plotting the data as a function of orbital phase) you can increase your signal to noise. The signal is maybe 0.1% of the light, but if you measure 1,000,000 counts then that 1000 count dip is probably above the noise.
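
Rough numbers, assuming pure photon noise: a 0.1% dip on 1,000,000 counts is 1000 counts against roughly 1000 counts of shot noise per exposure, so a single frame barely sees it, but fold and bin a couple of hundred in-transit points and the detection becomes comfortable:

```python
# Photon-noise-only estimate of transit detectability, assuming 1e6 counts per exposure.
from math import sqrt

counts = 1_000_000      # counts per exposure inside the photometric aperture
depth = 0.001           # fractional transit depth (1 part in 1000)
n_in_transit = 200      # number of in-transit exposures after phase-folding (assumed)

snr_single = depth * counts / sqrt(counts)    # ~1: buried in the noise of one frame
snr_binned = snr_single * sqrt(n_in_transit)  # ~14: easily detectable after binning
print(f"single exposure SNR ~ {snr_single:.1f}, binned SNR ~ {snr_binned:.1f}")
```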

Big observatories cost a lot of money to run and are highly competitive. If you have an extremely strong case for a follow-up observation (e.g. Kepler spotted something and you want to observe it further) then you can get time, but really we'd like surveys that will stare at hundreds of thousands of stars for months on end. Amateur networks like the AAVSO (variable stars) are very valuable because they provide free, virtually continuous data for hundreds of stars. It's simple, boring work that isn't feasible with big-shot observatories; it would be a waste of instrument capabilities.

Satellites can do this, but they can't store the data; they normally only provide flags that say "this star looks like a good candidate". So the benefit of something like this telescope array is that it can generate vast amounts of data (continuously) and we can actually store it for processing later.

Comment Re: Perfect? Really? (Score 2) 340

Might be a good way to detect cheaters though: if the poker house has a copy of Cepheus running, it would be able to detect a player who bets 'perfectly' every single time. Then it gets philosophical - should you ban someone for playing perfectly? Is it illegal? After all, you don't know anything about the hidden cards, nor do you have any control over them. It'll probably end up like card counting.
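
A hypothetical detection scheme could be as crude as comparing each player's decisions against the solver and flagging anyone who agrees with it implausibly often (ignoring, for simplicity, that a game-theory-optimal strategy is randomised, so in practice you'd compare action frequencies rather than individual decisions):

```python
# Hypothetical cheat flagging: measure how often a player's actions match a reference
# solver's choices and flag suspiciously high agreement over a large sample of hands.
def agreement_rate(player_actions, solver_actions):
    matches = sum(p == s for p, s in zip(player_actions, solver_actions))
    return matches / len(player_actions)

def looks_like_a_bot(player_actions, solver_actions, threshold=0.95, min_decisions=500):
    return (len(player_actions) >= min_decisions and
            agreement_rate(player_actions, solver_actions) >= threshold)
```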

Comment Re:I guess that means ... (Score 2) 340

In a casino, luck still plays a significant role because you don't have the luxury of "as many hands as necessary" (or unlimited money). If you (the human) get a royal flush and the computer gets a pair ten times in a row, as fantastically unlikely as that is, you're going to walk away with all the money. The point is that it will always play optimally, so eventually the statistics will win out and you'll lose to it. Also note that although it plays 'perfectly', it's not necessarily as profitable as a good human player, since it won't attempt to capitalise when you make an error.
