Comment Re:I don't see much point in this (Score 1) 64

I think Twitter is a bit faster than that. Twitter users in Japan seem to respond really fast when they feel any moderate level of shaking; at times, if you follow enough Japanese people on Twitter, your entire timeline gets filled with people saying "oh hey, something's shaking" or "it's rocking" or "boobs!". So yes, you will get advance warning if there are people closer to the epicentre than you posting on Twitter (and as long as they are not using a certain phone provider which got overloaded during the big earthquake/tsunami last year while all the other providers were fine). And yes, this obviously doesn't work if the earthquake knocks out all the cell phone infrastructure in the areas between you and the epicentre.

Amusingly enough, I was watching a UStream broadcast run by some Japanese guy, and people from a different area of Japan told him that they had just had an earthquake; he replied that he hadn't gotten any info on his end. Then, several seconds later, the earthquake alarm went off in the broadcast. So I think Twitter isn't going to be very far off in terms of speed, and it definitely should be able to inform you about an ongoing earthquake as long as it's not a super short one with you right next to the epicentre.

Comment Re:For those who still don't get it (Score 1) 98

The analogy kind of works and kind of doesn't. A parallax barrier has an image layer and a fixed mask layer. What these guys did was allow for multiple layers with time-varying patterns and optimize the pattern on each layer so as to create a better image. So it's more like "this is to integral imaging what a parallax barrier on crack would be to lenticular."

Comment Re:Holografika.com (Score 2) 98

The company website is scant on details of their technology, but it's obviously a different implementation; my guess from what they do say is that it's a lenticular device that only generates horizontal parallax. In that case, tilt your head 90 degrees to the side and you'll lose depth perception, whereas that wouldn't be the case for the tensor display mentioned in the article. It might not seem like that important an issue, until you want to lie down on a couch and watch a 3D program on TV...

Comment Re:It's a tensor display. (Score 4, Informative) 98

Oh interesting, so they finally gave it a name. I remember coming across the 2-layer version of the display some time ago. Looks like they also have an interesting theoretical foundation to go with it; the abstract of the first paper on Gordon Wetzstein's page gives a nice overview.

What's essentially going on is that you can model light (at least when talking about things much larger than the wavelength of light) as a four-dimensional function (i.e. the intensity of light along all the possible rays that fill space), which is referred to in this research area as a "light field." Putting a mask somewhere in space will mask out a 2D extrusion of the mask shape in 4D space. Putting multiple masks at different planes will mask out the product of these 2D extrusions (and the extrusion angle varies as a function of depth). Hence, what they are doing is attempting to reconstruct the original 4D function by combining the unmasked portions from each time frame.
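
For concreteness, here's a toy "flatland" version in Python/numpy (entirely my own sketch, not from the paper: the mask patterns, depths, and resolutions are made up) showing how a mask at a given depth becomes a sheared extrusion in light-field coordinates, and how stacked layers multiply together:

```python
import numpy as np

# Flatland sketch: a 2D light field L(x, v), where x is position on a reference
# plane and v is ray slope. A mask at depth d attenuates each ray at the point
# where it crosses that depth, s = x + d * v, so in light-field coordinates the
# mask is its 1D pattern "extruded" along a shear that depends on d. Stacking
# layers multiplies their extruded patterns together.

nx, nv = 256, 64                      # samples in position and angle
x = np.linspace(-1.0, 1.0, nx)        # positions on the reference plane
v = np.linspace(-0.2, 0.2, nv)        # ray slopes

def extrude(pattern, depth):
    """Evaluate a 1D mask placed at `depth` over all rays (x, v): s = x + depth * v."""
    X, V = np.meshgrid(x, v, indexing="ij")
    return pattern(X + depth * V)     # shape (nx, nv): the sheared 2D extrusion

# Two arbitrary transmittance patterns in [0, 1] at different depths; the emitted
# light field is their product (times the backlight). Time multiplexing would
# average several such products over the display's frames.
layer_near = extrude(lambda s: 0.5 + 0.5 * np.cos(40.0 * s), depth=0.0)
layer_far  = extrude(lambda s: 0.5 + 0.5 * np.sin(25.0 * s), depth=0.1)
emitted = layer_near * layer_far      # product of the two extrusions

print(emitted.shape)                  # (256, 64) samples of the 2D light field
```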

For a more simplified view, you can think of this as trying to build up a 2D picture from a sequence of special single-color 2D frames, each created by placing stripe patterns oriented at a fixed set of angles on top of a light panel.

If you've taken linear algebra, it is somewhat like decomposing a matrix into a sum of rank-one matrices, except here each component needs to be positive (masks cannot create "negative" light).
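
To make that analogy concrete, here's a minimal sketch (my own illustration, assuming numpy; the target matrix and rank are arbitrary, and the actual tensor-display optimization is more sophisticated than this) of approximating a nonnegative matrix as a sum of nonnegative rank-one terms using the standard multiplicative NMF updates:

```python
import numpy as np

# Approximate a nonnegative matrix A as W @ H, i.e. a sum of rank-one terms
# W[:, k] * H[k, :], with every entry kept nonnegative (masks can only
# attenuate light, never add "negative" light).
rng = np.random.default_rng(0)
A = rng.random((32, 32))          # made-up nonnegative target
rank = 4                          # number of rank-one terms (loosely, time frames)

W = rng.random((A.shape[0], rank))
H = rng.random((rank, A.shape[1]))
eps = 1e-9

for _ in range(500):              # Lee-Seung multiplicative updates (Frobenius norm)
    H *= (W.T @ A) / (W.T @ W @ H + eps)
    W *= (A @ H.T) / (W @ H @ H.T + eps)

print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```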

Comment What's novel in the patent? (Score 4, Informative) 161

I only briefly looked at the patent, and it looks like it's simply the application of OFDM to wireless communication between computers.

OFDM, for those who aren't very familiar with it, is a way to deal with linear time-invariant systems that can corrupt your data; the path from one antenna to another, for example, can be modelled as such a system. Since these systems only modify the amplitude and phase of each frequency band separately, instead of mixing everything together as happens in the time domain, you encode the information you want to send as specific frequencies. For example, if you send out a wireless signal and it echoes all over the place, the time-domain signal gets all mixed up and "slushy". However, if you perform a Fourier transform on the input signal and the output signal, you'll notice that the echoing only caused each frequency band to be individually attenuated/magnified and/or shifted in phase; none of the frequency bands have mixed together. OFDM exploits this property to provide robust communication (well, it's a bit more complicated than that, but that's the general gist of it).

However, it sounds like this patent is simply saying "hey, OFDM is good for wireless communication", which feels kind of obvious to me considering that's the whole point of OFDM.
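
To illustrate the core property (a toy numpy sketch of my own, not taken from the patent or any particular standard: the subcarrier count, cyclic prefix length, and channel taps are all made up), here is how convolution with a multipath channel turns into a single complex gain per subcarrier, which the receiver can simply divide out:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub = 64                                     # number of subcarriers
cp = 16                                        # cyclic prefix length (>= channel length - 1)

# Random QPSK symbols, one per subcarrier (the frequency-domain data).
bits = rng.integers(0, 2, size=(n_sub, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: IFFT to the time domain, then prepend the cyclic prefix.
tx = np.fft.ifft(symbols)
tx_cp = np.concatenate([tx[-cp:], tx])

# A made-up multipath channel: the direct path plus a couple of echoes.
h = np.array([1.0, 0.0, 0.5, 0.0, 0.25])
rx_cp = np.convolve(tx_cp, h)[: len(tx_cp)]    # the channel acts by convolution

# Receiver: drop the prefix, FFT back, and undo the per-subcarrier complex gain.
rx = rx_cp[cp : cp + n_sub]
H = np.fft.fft(h, n_sub)                       # channel frequency response
recovered = np.fft.fft(rx) / H                 # one division per frequency bin

print(np.allclose(recovered, symbols, atol=1e-10))   # True: symbols recovered exactly
```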

Comment Re:A pity... (Score 1) 220

Yes, right now it is limited by the technology (a full-frame sized sensor with 2 micron pixels would be really sweet for this, but I suppose the process would be really expensive), but eventually it will be limited by physics itself. For example, if you were somehow able to make a sensor array whose pixel pitch dipped way below half the wavelength of the light you are capturing, with microlenses on the order of the wavelength of light, you wouldn't really be able to capture any additional three-dimensional/refocusing information.

Comment Re:A pity... (Score 1) 220

Yes, the ability to spit out that paltry image at all sorts of focuses, after the fact, is damn cool; but for $500, you could get a high end P&S that could iterate through a series of 10MP shots at different focus points, at time of shooting in a few seconds, netting much of the benefit along with resolutions that wouldn't be ashamed to show up on a $20 webcam.

Do remember that the Lytro captures its image in one instant (okay, technically integrated over a short period of contiguous time), so while your approach would work for static scenes, it wouldn't work all that well with dynamic scenes. Personally, I'd like to see more artistic photos, such as, say, a black balloon covered in starry speckles bursting with a figurine of the baby from the end of 2001 inside.

Comment Re:The article writer is a deaf idiot (Score 1) 841

Well, technically speaking, finite-length signals can't be band-limited due to the uncertainty principle, and a band-limited signal which has been windowed in time will have some spill-over, causing small amounts of aliasing. Of course, in theory, this effect is really minuscule if you have a long enough signal, use a good windowing function, and/or don't set your sampling rate at exactly twice the bandwidth of the original unwindowed signal. The engineering rule of thumb pz came up with for oversampling would only be useful for ADCs and DACs, due to the limitations and difficulty of designing good analog filters. The intermediate digital storage format for the signal would not really benefit much from such a high sampling rate.
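
As a toy illustration of that spill-over (my own numpy sketch; the sampling rate, tone frequency, and band edge are arbitrary), you can measure how much energy a finitely-windowed tone leaks outside its nominal band, and how a longer observation and a gentler window shrink it:

```python
import numpy as np

# A pure tone is perfectly band-limited, but observed over a finite window its
# spectrum leaks outside the nominal band. A longer window and/or a gentler
# window (Hann vs. rectangular) reduce the out-of-band spill-over.
fs = 48_000.0            # sampling rate in Hz (arbitrary)
f0 = 19_000.0            # tone frequency, close to the top of a 20 kHz "band"
band_edge = 20_000.0

def out_of_band_fraction(n_samples, window):
    t = np.arange(n_samples) / fs
    x = np.sin(2 * np.pi * f0 * t) * window(n_samples)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    return spectrum[freqs > band_edge].sum() / spectrum.sum()

for n in (1024, 16384):
    rect = out_of_band_fraction(n, np.ones)      # rectangular window
    hann = out_of_band_fraction(n, np.hanning)   # Hann window
    print(f"N={n:6d}  rectangular: {rect:.2e}   Hann: {hann:.2e}")
```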

Comment Like those SAT prep books (Score 1) 446

Years back, I remember working through some of those SAT prep books for the math section. It seemed like every one of them had at least one error in the solutions, with Barron's being the best and stuff like Kaplan's having many mistakes. Well, obviously I was bored, so when my answer didn't agree with theirs, I wrote proofs showing that their answer was wrong.
