Comment Re:A pity... (Score 1) 220

Yes, right now it is limited by the technology (a full-frame sensor with 2-micron pixels would be really sweet for this, though I suppose the process would be really expensive), but eventually it will be limited by physics itself. For example, if you could somehow make a sensor array whose pixel pitch dipped well below half the wavelength of the light you are capturing, with microlenses on the scale of that wavelength, you wouldn't really be able to capture any more three-dimensional/refocusing information.
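As a rough back-of-the-envelope sketch (the green-light wavelength and the 2-micron pitch are my own illustrative numbers, and half a wavelength is a hand-wavy stand-in for the physical floor):

```python
# Rough sketch: how far a small-pixel sensor sits above the
# half-wavelength pitch floor for light-field capture.
wavelength_um = 0.55          # green light, roughly mid-visible
limit_um = wavelength_um / 2  # ~half-wavelength pitch floor
current_pitch_um = 2.0        # the hypothetical 2-micron pixel above

print(current_pitch_um / limit_um)  # ~7x above the floor
```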

Comment Re:A pity... (Score 1) 220

Yes, the ability to spit out that paltry image at all sorts of focuses, after the fact, is damn cool; but for $500 you could get a high-end P&S that could iterate through a series of 10MP shots at different focus points at the time of shooting, in a few seconds, netting much of the benefit along with resolutions that wouldn't be ashamed to show up on a $20 webcam.

Do remember that the Lytro captures its image in a single instant (okay, technically integrated over a short, contiguous exposure), so while your approach would work for static scenes, it wouldn't work all that well for dynamic ones. Personally, I'd like to see more artistic photos, such as, say, a black balloon covered in starry speckles bursting with a figurine of the baby from the end of 2001 inside.

Comment Re:The article writer is a deaf idiot (Score 1) 841

Well, technically speaking, finite-length signals can't be band-limited due to the uncertainty principle, and a band-limited signal that has been windowed in time will have some spill-over, causing small amounts of aliasing. Of course, in theory this effect is really minuscule if you have a long enough signal, a good windowing function, and/or a sampling rate not set at exactly twice the bandwidth of the original unwindowed signal. The engineering rule of thumb pz came up with for oversampling would only be useful for ADCs and DACs, due to the difficulty of designing good analog filters. The digital intermediate storage format for the signal would not really benefit much from such a high sampling rate.
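A minimal numpy sketch of that spill-over (all the parameters here are arbitrary illustrations, not from the thread): a pure tone that doesn't land exactly on an FFT bin leaks energy far from its frequency under a rectangular window, while a good window such as Hann suppresses most of it.

```python
import numpy as np

# A windowed pure tone is no longer band-limited: its spectrum
# leaks into distant bins. Compare rectangular vs Hann windows.
fs = 1000.0   # sampling rate, Hz (arbitrary)
f0 = 100.25   # tone frequency deliberately off a bin centre
n = 1024
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)

spec_rect = np.abs(np.fft.rfft(x))                 # rectangular window
spec_hann = np.abs(np.fft.rfft(x * np.hanning(n))) # Hann window

# Energy far from the tone (here, above 300 Hz) is pure leakage.
far = np.fft.rfftfreq(n, 1 / fs) > 300
leak_rect = spec_rect[far].sum() / spec_rect.sum()
leak_hann = spec_hann[far].sum() / spec_hann.sum()
print(leak_rect > leak_hann)  # Hann suppresses the spill-over
```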

Comment Like those SAT prep books (Score 1) 446

Years back, I remember working through some of those SAT prep books for the math section. It seemed like every one of them had at least one error in its solutions, with Barron's being the best and the likes of Kaplan having many mistakes. Obviously I was bored, so when my answer didn't agree with theirs, I wrote proofs showing their answer was wrong.

Comment Re:Short on details (Score 3, Informative) 129

From their diagram, it looks like each contact lens is composed of two lenses. Imagine making a tiny lens that focuses a very close micro-display onto the retina, and a normal-sized contact lens for everyday use. Cut out the middle of the normal contact lens and insert the tiny lens. You'll essentially have two "scenes" superimposed on your eye -- one focused on the micro-display and one focused on the surrounding environment. I imagine getting rid of aberrations in the tiny lens is going to be very tricky, so the resolution/image quality of the entire display system might be quite limited. A less serious issue is that your defocus bokeh would look kind of strange...

Comment Uh... summary? (Score 5, Insightful) 172

The operator of the crippled Fukushima Daiichi nuclear plant said it is studying whether the facility's reactors were damaged in the March 11 earthquake even before the massive tsunami that followed cut off power and sent the reactors into crisis.
Kyodo news agency quoted an unnamed source at the utility on Sunday as saying that the No. 1 reactor might have suffered structural damage in the earthquake that caused a release of radiation separate from the tsunami.
Apparently, the earthquake had caused a crack in the containment vessel.

I'm not sure how the summary writer came to that conclusion... Shouldn't we wait for an actual report/finding before stating that?

Comment Port cities? (Score 3, Interesting) 125

The companies used for this fraud include the name of a Chinese port city in their official name. These cities
include: Raohe, Fuyuan, Jixi City, Xunke, Tongjiang, and Dongning.

Odd that they'd use the term "port city", as these don't sound like major transportation hubs. What's interesting is that all these places they've named are actually places on or near the border of Russia and China, in Heilongjiang Province.

Comment Re:please, enough horseshit (Score 2) 411

I don't know where New Scientist got their 1490 number, but the IAEA report on Chernobyl cited by Wikipedia gives an upper tier of 1480+ kBq/m^2 of cesium-137, and 3100 km^2 of land in Russia, Ukraine, and Belarus had at least that level of contamination. If you look at the total area at 555+ kBq/m^2 of cesium-137, you get 10300 km^2 of land with at least that level of contamination. You'd need a circle with a radius of ~57 km to contain that much land.
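Sanity-checking that radius figure is just the circle-area formula:

```python
import math

# Land contaminated at >= 555 kBq/m^2 of Cs-137 around Chernobyl,
# per the IAEA report figure quoted above.
area_km2 = 10300.0

# Radius of a circle with that area: A = pi * r^2  =>  r = sqrt(A / pi)
radius_km = math.sqrt(area_km2 / math.pi)
print(round(radius_km, 1))  # ~57 km
```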

Since 18 March, MEXT has repeatedly found caesium levels above 550 kBq/m2 in an area some 45 kilometres wide lying 30 to 50 kilometres north-west of the plant.

The wording here is kind of weird. Why are they using a single length to describe an area? Furthermore, does "repeatedly found caesium levels above 550 kBq/m^2" mean that the entire area has that level of radioactivity, or simply that they measured it in a few places and got high readings? The New Scientist article then gives high peak rates like 6400 kBq/m^2 of cesium, but fails to provide a comparison with Chernobyl. And to finish it off, it moves on to a totally different isotope (iodine-131). Becquerels measure the number of decay events per second, so comparing becquerel readings between two different isotopes is kind of pointless -- it doesn't compare the amount of energy being released.
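To make the isotope point concrete, here's a sketch converting activity to released power. The per-decay energies are approximate values as I recall them from nuclear data tables, so treat them as illustrative rather than authoritative:

```python
# 1 Bq = 1 decay/s, but the energy per decay differs by isotope,
# so equal becquerel readings do not mean equal energy release.
MEV_PER_DECAY = {
    "Cs-137": 1.176,  # approx. total beta-decay energy to Ba-137
    "I-131": 0.971,   # approx. total beta-decay energy to Xe-131
}

def power_watts(activity_bq: float, isotope: str) -> float:
    """Energy release rate for a given activity, in watts."""
    mev_to_joule = 1.602e-13
    return activity_bq * MEV_PER_DECAY[isotope] * mev_to_joule

# The same 1 MBq of activity releases different power per isotope:
print(power_watts(1e6, "Cs-137"))
print(power_watts(1e6, "I-131"))
```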

Comment Color changing is the exception! (Score 4, Interesting) 31

There's nothing that requires a hologram to change colors as you change the viewing angle; it's just that there are many different techniques for generating holograms, and the rainbow hologram happens to have been adopted widely in the commercial realm. Classic holograms were monochrome and required coherent illumination to view. The rainbow hologram is nice in that you can see it under white light, but it suffers from color issues (obviously) and also presents a three-dimensional view along only one axis (try tilting your VISA card 90 degrees next time and the eagle should appear flat). I don't know if the exhibit is still there, but the MIT Museum in Cambridge had a really nice hologram exhibit with lots of different holograms. A bunch of them were full color and didn't have that rainbow effect.

This article does make me more curious about surface plasmons, however, since I hear that mentioned a lot nowadays and don't have a very good understanding of them.

Comment Re:I don't buy it (Score 2) 176

Just a nitpick, but...

  • Increasing the number of megapixels while keeping everything else the same does not change the amount of light the sensor collects, although each individual pixel gets less light.
  • Increasing or decreasing the fill-factor or changing the total sensor size does.
  • In low-light situations, the statistics of the light intensity should be Poisson, which means that four pixels at 1/4 the area, when averaged together, should give the same SNR, assuming the read noise is small relative to the Poisson shot noise (which should dominate in this situation).
  • Thus, the only downside of simply having more pixels on a sensor, everything else being equal (note that the fill factor would probably differ between sensors), is that there's now a lot more bandwidth coming out of the sensor, which could be an issue for power efficiency. This could possibly be mitigated by reducing the bit depth of each individual pixel, if you assume there's going to be more noise per pixel.
  • One upside is that if the pixel size is smaller than the diffraction spot size, then the relative size of your Bayer mosaic should make demosaicing easier.
  • To summarize: if you had more pixels in the same-sized sensor and can handle the extra bandwidth, then in the worst case you should be able to downsample to achieve similar or better performance than before.
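The shot-noise claim above can be sketched with a quick Monte Carlo (the photon counts are made-up illustrative numbers): a sum of four Poisson variables with mean 25 is itself Poisson with mean 100, so the binned small pixels match the big pixel's SNR.

```python
import numpy as np

# Shot-noise-limited imaging: one full-area pixel vs four
# quarter-area pixels summed together.
rng = np.random.default_rng(0)
mean_photons = 100.0  # photons hitting the full-area pixel (illustrative)
trials = 200_000

big = rng.poisson(mean_photons, trials)
small = rng.poisson(mean_photons / 4, (trials, 4)).sum(axis=1)

def snr(samples):
    return samples.mean() / samples.std()

# Both should be ~sqrt(mean_photons) = 10 for a Poisson process.
print(snr(big), snr(small))
```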
