Comment Re:Expectations (Score 1) 293

The iPad has a lot going for it, especially that you can get one for about 1/3 the price of that thing (if you convert the 1998 dollars, see eldavojon's post) and that you have wireless networking (a major plus).

The thing is, you can't do the math that way. Tech gets cheaper as it ages, and new tech gets more powerful while staying at roughly the same price, whereas inflation only ever pushes prices up. If you look on Amazon, someone is selling it for $375, used.
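To put rough numbers on it (US CPI rose roughly a third between 1998 and 2010): a hypothetical $999 gadget in 1998 dollars works out to about $1,330 today, yet new hardware of far greater capability sells for a fraction of that. Inflation-adjusting an old tech price tells you what the money was worth, not what the tech is worth.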

Never mind any number of other UMPC devices...

Comment Re:Crazy people (Score 1) 515

It's not that crazy. I was on a bus heading to Heathrow airport back around 1999. The speakers on the bus were vibrating in time with the sweeps of the radar.

My head was also throbbing in time with the sweeps of the radar.

Of course, that's a MUCH more powerful signal.

Comment Or... (Score 1) 369

Or, like another scientist did recently, you can just take all of the component bits and pieces, freeze them in ice, and leave them for 15 years.

At the end of it, you will have RNA and a bunch of amino acids.

This is why I read New Scientist magazine religiously.

Comment Re:No (Score 1) 425

Raw Power

Sorry, but I prefer my power cooked, thanks.

As in, I prefer it to actually be usable. And not to have to use half of it to make up for a lousy GPU.

On the PS3, the Cell (SPUs) are mostly used for:

  • Audio decoding, output & mixing
  • Vertex culling, to reduce the load on the woefully underpowered and bandwidth-limited GPU
  • Shader patching, again to reduce load on the woefully underpowered and bandwidth-limited GPU

Sure, it occasionally does other things, but all of the above are things that the 360 has dedicated hardware to do. Your Cell processor? It's not doing what you think it should be.
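For a flavor of what that culling work looks like, here's a toy back-face cull in plain C; this is my own illustrative sketch, not SPU code or anything from Sony's SDK:

    #include <stdio.h>

    /* Toy sketch of the kind of vertex culling games offload to the SPUs:
       drop back-facing triangles before the GPU ever sees them.  Plain C,
       no SPU intrinsics; purely illustrative. */
    typedef struct { float x, y; } Vec2;   /* post-projection screen coords */

    /* Twice the signed area of the triangle; <= 0 means it faces away
       (or is degenerate) under a counter-clockwise front-face convention. */
    static float signed_area2(Vec2 a, Vec2 b, Vec2 c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    int main(void) {
        Vec2 tris[2][3] = {
            { {0,0}, {1,0}, {0,1} },   /* counter-clockwise: front-facing */
            { {0,0}, {0,1}, {1,0} },   /* clockwise: back-facing, culled  */
        };
        int kept = 0;
        for (int i = 0; i < 2; i++)
            if (signed_area2(tris[i][0], tris[i][1], tris[i][2]) > 0.0f)
                kept++;                /* only these go on to the GPU */
        printf("kept %d of 2 triangles\n", kept);
        return 0;
    }

The win on the PS3 is that the SPUs can chew through big batches of this in parallel, so the GPU only ever sees triangles that could actually end up on screen.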

Comment Re:Actually... (Score 1) 96

I think the main reason it's not obvious is that the structure of the retina is quite a bit more complex than you make it out to be. First, receptor density falls off essentially exponentially as you move away from the fovea. Second, there are several horizontal channels in the lamina of the retina that aggregate receptor inputs in a center-surround manner (e.g. on-center/off-surround, off-center/on-surround), and these horizontal channels are of differing lengths.

So it's not such an easy question which, if any, pieces of the circuit are privileged, or which, if any, spatial areas of the retina are privileged, since there are multiple spatial scales in the former and spatial-frequency gradients in the latter.
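For concreteness, the standard first-order model of that center-surround aggregation is a difference of Gaussians. Here's a toy 1D sketch in C, with made-up sigmas and no pretense of matching real cell data:

    #include <stdio.h>
    #include <math.h>

    /* Toy 1D difference-of-Gaussians: a narrow excitatory center minus a
       wider inhibitory surround, applied to a step edge in light intensity.
       All parameters are invented for illustration. */
    static double gauss(double x, double sigma) {
        const double PI = 3.14159265358979323846;
        return exp(-x * x / (2.0 * sigma * sigma)) / (sigma * sqrt(2.0 * PI));
    }

    int main(void) {
        double input[64], output[64];
        for (int i = 0; i < 64; i++)
            input[i] = (i < 32) ? 0.0 : 1.0;   /* a single step edge */

        for (int i = 0; i < 64; i++) {         /* convolve with the DoG */
            output[i] = 0.0;
            for (int j = 0; j < 64; j++) {
                double d = (double)(i - j);
                output[i] += input[j] * (gauss(d, 1.5) - gauss(d, 4.0));
            }
        }
        /* The on-center response is ~0 on uniform regions and biphasic
           around the edge: suppressed just before it, peaked just after. */
        printf("flat: %+.3f  dark side: %+.3f  bright side: %+.3f\n",
               output[10], output[29], output[34]);
        return 0;
    }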

There are also some complications around time averaging. The retina has both on and off channels: on channels respond quickly to increased light and then decay back to baseline, while off channels have the opposite transient response. So you have asymmetric temporal responses between the channels (which is one of the reasons you have center-surround processing). There's also the detail that most neurons in the retina don't spike; they communicate via graded membrane potentials rather than action potentials, and the temporal resolution of many of these channels is still not fully understood.

I think the reason your insight isn't obvious is that it's very difficult to translate it into a form understood by those expert in the anatomy and physiology, so that they can tell you whether your assumptions are consistent with the data.

Actually, if the on and off channels of the center-surround system do spike, then during a microsaccade any edge crossing a boundary should cause a fast spike. If the motion is known, then the difference in time between the activation of the "on" channel and the activation of the "off" channel (as the edge crosses the boundary) should give a finer-resolution location for the edge.
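Here's that timing argument reduced to arithmetic; a back-of-the-envelope sketch of my own, not a model from the literature, with made-up numbers for spacing, drift speed, and jitter:

    #include <stdio.h>

    /* Toy decode of edge position from spike timing.  Receptors sit at
       spacing d; during a microsaccade the edge drifts across them at a
       (known) speed v.  The moment a receptor's "on" response fires tells
       you where the edge was, with precision set by timing jitter rather
       than by the receptor spacing. */
    int main(void) {
        const double d      = 1.0;    /* receptor spacing (arbitrary units)    */
        const double v      = 10.0;   /* drift speed during the microsaccade   */
        const double edge_x = 3.37;   /* true edge position, between receptors */
        const double jitter = 0.005;  /* spike-timing error (time units)       */

        const double receptor_x = 4.0 * d;  /* first receptor the edge hits */
        const double t_spike = (receptor_x - edge_x) / v + jitter;

        /* Decode: known receptor position + known speed + measured time. */
        const double estimate = receptor_x - v * t_spike;
        printf("true %.3f, estimate %.3f, error %.3f (spacing %.1f)\n",
               edge_x, estimate, estimate - edge_x, d);
        return 0;
    }

With those numbers the positional error is v times the timing jitter: 0.05 units against a receptor spacing of 1.0, which is the sense in which timing buys you sub-receptor resolution.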

Membrane potentials make sense in terms of color processing; we already know that color is a lower-bandwidth channel. It's also fuzzy and separate from edge recognition (I know this from... er... some experiments I did on myself involving... oh heck, pretty damn pure MDMA). So when the edge-recognition triggers fire, they respond strongly to the extant color data (which is diffuse) and assume the gross color of the area pretty strongly. It seems that even in the fovea, color fills in pretty wildly, which is consistent with membrane-potential action. That kind of slow response should also be related to exposure control.

[My specific test case here: reflected light of varying colors on a blank white surface. The edge-recognition triggers were misfiring, producing something akin to a rolling segmented LCD text display. The phantom text took the very light, diffuse, even color of the white surface and rendered it as a strong, vivid primary color. I'm pretty certain that color data is not spatially tightly encoded; edges are used to trigger the association of the two.

Other things I noticed during that experience were eigenfaces: apparently when the facial-recognition system breaks down and you look at someone, all you get is eigenfaces, or at least the low- and high-frequency recognition systems go out of sync, leading to something that looks a lot like them.

Another thing I noticed is that feature detection is rather interesting. Gross-feature detection is separate from texture determination. It's kind of like the way GPUs paint a scene: you have the Z-buffer providing depth, then gross features (triangles, in the case of a GPU), and then the actual texture of the surfaces themselves. When the texture system misfires you get interesting effects, ranging from lots of little white bubbles mapped onto a surface to a rolling set of five-pointed stars and linear ridges moving across it.

Anyway... that's a total aside. Anyone involved in that kind of research should at least try a little of that stuff... it's relatively safe, and to an even mildly trained eye it will provide a lot of insight into how the processing systems work]

Comment Re:Actually... (Score 1) 96

Michele Rucci's lab figured out a while back that microsaccades improve our perception of high spatial frequency stimuli.

Thanks for the link :)

The thing that gets me is, surely this is obvious? The receptors on the retina are arranged in a Poisson-disc distribution (random, but with no two receptors closer than a certain minimum distance). As long as the microsaccades are roughly half that minimum distance in any direction, this should, averaged over time, at least double the resolution of the achieved signal. If the brain keeps track of the distances moved, you get even higher resolutions (though my guess is that, for bandwidth reasons, the microsaccades are half the minimum distance or less; that way no bookkeeping is necessary).
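The doubling claim in one picture; a toy 1D sketch of mine, pretending the receptors sit on a regular grid so the interleaving is easy to see:

    #include <stdio.h>

    /* Toy 1D version of the argument above.  Receptors at spacing d sample
       the scene once, then a microsaccade of d/2 shifts the whole array and
       they sample again.  Interleaved over time, the scene has been sampled
       every d/2, i.e. at double the resolution of either exposure alone. */
    int main(void) {
        const double d = 1.0;   /* minimum receptor spacing */
        for (int i = 0; i < 4; i++)
            printf("exposure 1: x = %.1f    exposure 2: x = %.1f\n",
                   i * d, i * d + d / 2.0);
        printf("effective sample spacing: %.1f (single exposure: %.1f)\n",
               d / 2.0, d);
        return 0;
    }

The real mosaic is Poisson-disc rather than a grid, so the interleaving isn't this clean, but the averaged-over-time effect is the same flavor.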

Comment Re:Heh. (Score 1) 781

Grammar nazis aside, this is not really serious benchmarking. It doesn't even take into account WHAT Windows 7 installs versus WHAT Ubuntu installs.

What's more, Windows betas are always checked builds. They have a huge chunk of debugging code still in there, and they don't optimize across function boundaries, so that stack traces stay solid when things fall over.

In other words, it's not optimized, and this is not a fair test. Run the comparison against the release build, and it'll be on par with Ubuntu.
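To make the "debugging code still in there" point concrete, here's the generic pattern (my own illustration, obviously not actual Windows source): in a checked/debug build the validation is compiled in and paid for on every call; in the free/release build it vanishes.

    #include <assert.h>
    #include <stdio.h>

    /* Generic checked-vs-free build sketch, not actual Windows source.
       Without NDEBUG defined (the "checked" flavor), every call runs the
       asserts; compile with -DNDEBUG -O2 (the "free" flavor) and the
       checks disappear, along with their cost. */
    static int sum(const int *data, int n) {
        assert(data != NULL);   /* compiled out when NDEBUG is defined */
        assert(n >= 0);
        int total = 0;
        for (int i = 0; i < n; i++)
            total += data[i];
        return total;
    }

    int main(void) {
        int xs[] = {1, 2, 3};
        printf("%d\n", sum(xs, 3));   /* prints 6 either way */
        return 0;
    }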

Way to go, guys.

Comment Re:but... (Score 1) 803

Because the only thing you need for an iPhone to work right is iTunes, and therefore iTunes needs to come with all of the supporting infrastructure to make the iPhone work smoothly, seamlessly, and automagically.

That's why the world's most pleasant-to-use cell phone is the way it is. Love it or hate it, at least now you should be able to understand it.

I don't own an iPhone... so all it is is bloat. And it occasionally spams my CPU.

Comment Re:but... (Score 5, Insightful) 803

Except in Apple's case, it's somewhat worse... after all, why the fuck would they install MobileMe or Bonjour on my system when I install iTunes?

Why the FUCK do they think I want their networking system along with their player?

Bonjour

Grrrrrrrrrrrrrrrrrrrrrr. Weak. At least the .NET extension is within the realms of making sense.

Comment Re:Surprise to Anyone? (Score 1) 369

And if I was asking for ONLY using UTC time for the BIOS, then that MSDN article would be pertinent. I'm simply asking for the option, in case a person dual-boots with Linux or Solaris and Windows. Thanks, though.

The thing is, the BIOS doesn't use time zones at all. If you want that fixed, talk to the BIOS manufacturers.

Comment Re:Surprise to Anyone? (Score 1) 369

On the down side, how hard is it for Microsoft to add some code to accommodate people who have their hardware clock set to UTC? I mean just put a damn check box there!

Raymond Chen's Old New Thing article on why this is the case:

One reason: "What's more, some BIOSes have alarm clocks built in, where you can program them to have the computer turn itself on at a particular time. Do you want to have to convert all those times to UTC each time you want to set a wake-up call?"
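For a taste of the bookkeeping Chen is talking about, here's the conversion in plain C; my own sketch, not anything a BIOS actually runs:

    #include <stdio.h>
    #include <time.h>

    /* Convert a local wall-clock wake-up time to UTC, the extra step a
       UTC-based RTC alarm would force on the user (or on firmware with
       no time-zone database at all).  Plain C, purely illustrative. */
    int main(void) {
        struct tm wake = {0};
        wake.tm_year  = 2009 - 1900;   /* struct tm counts years from 1900 */
        wake.tm_mon   = 5 - 1;         /* and months from 0                */
        wake.tm_mday  = 14;
        wake.tm_hour  = 7;             /* 07:00 local time                 */
        wake.tm_isdst = -1;            /* let libc figure out DST          */

        time_t t = mktime(&wake);      /* interpreted as LOCAL time */
        if (t == (time_t)-1) return 1;

        struct tm *utc = gmtime(&t);   /* same instant, on the UTC clock */
        printf("local 07:00 wake-up = %02d:%02d UTC\n",
               utc->tm_hour, utc->tm_min);
        return 0;
    }

Note the DST flag: the "right" UTC value for the same local 07:00 changes twice a year, which is exactly the trap the article is pointing at.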
