Visualizing Ethernet Speed 140
anthemaniac writes "In the blink of an eye, you can transfer files from one computer to another using Ethernet. And in the same amount of time, your eye sends signals to the brain. A study finds that images transferred to the brain and files across an Ethernet network take about the same amount of time." From the article: "The researchers calculate that the 100,000 ganglion cells in a guinea pig retina transmit roughly 875,000 bits of information per second. The human retina contains about 10 times more ganglion cells than that of guinea pigs, so it would transmit data at roughly 10 million bits per second, the researchers estimate. This is comparable to an Ethernet connection, which transmits information between computers at speeds of 10 million to 100 million bits per second."
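For anyone who wants to sanity-check the summary's arithmetic, here is a quick back-of-the-envelope sketch in Python. The only inputs are the figures quoted above (100,000 ganglion cells, roughly 875,000 bits/s, and a ~10x scaling for the human retina); everything else is plain multiplication and division, not new data.

# Rough sanity check of the summary's numbers (all inputs are the article's
# figures; nothing here is independently measured data).
guinea_pig_cells = 100_000
guinea_pig_bps = 875_000                     # bits per second, per the study

per_cell_bps = guinea_pig_bps / guinea_pig_cells
print(f"~{per_cell_bps:.2f} bits/s per ganglion cell")              # ~8.75

human_scale = 10                             # human retina: ~10x more ganglion cells
human_retina_bps = guinea_pig_bps * human_scale
print(f"~{human_retina_bps / 1e6:.2f} Mbit/s for a human retina")   # ~8.75 Mbit/s

# Compare against classic 10 and 100 Mbit/s Ethernet
for ethernet_bps in (10_000_000, 100_000_000):
    ratio = human_retina_bps / ethernet_bps
    print(f"retina vs {ethernet_bps // 1_000_000} Mbit Ethernet: {ratio:.3f}x")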
Re:So if I plug enough CAT5 cables into it... (Score:2, Insightful)
720 * 576 = 414,720 luma
414,720 / 4 = 103,680 chroma
414,720 + 103,680 = 518,400 elements
Let us assume a poor human eye that can see only 25 frames/second in bright light conditions.
518,400 elements * 25 FPS = 12,960,000 elements/second
Add to this that the human eye probably has at least 8 "bits" of color resolution. How is an eye nerve that, by the article's own figures (875,000 bits/s spread over 100,000 ganglion cells), only transmits about 8 "bits" a second going to convey even this relatively poor-quality image?
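For what it's worth, here is the arithmetic above as a short Python sketch, with the 8-bit assumption bolted on and the article's per-cell figure worked out for comparison. The 720x576 frame, the chroma-at-one-quarter-of-luma ratio, 25 frames/second, and 8 bits per sample are all this post's assumptions, not measured properties of the eye.

# Sketch of the parent's numbers: a 720x576 frame, chroma at one quarter of
# the luma samples (the parent's ratio), 25 frames/second, 8 bits per sample.
# All of these are assumptions for illustration, not retina measurements.
width, height = 720, 576
luma = width * height                 # 414,720 luma samples
chroma = luma // 4                    # 103,680 chroma samples
elements = luma + chroma              # 518,400 elements per frame

fps = 25
elements_per_second = elements * fps  # 12,960,000 elements/s

bits_per_element = 8                  # assumed color resolution
bits_per_second = elements_per_second * bits_per_element
print(f"~{bits_per_second / 1e6:.1f} Mbit/s of 'raw' image data")   # ~103.7

# Against the article's 875,000 bits/s over 100,000 ganglion cells:
per_cell_bps = 875_000 / 100_000
print(f"~{per_cell_bps:.2f} bits/s per ganglion cell")              # ~8.75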
But the data isn't "pixels" or anything.... (Score:5, Insightful)
At what point are we measuring the data? If the data that's actually being measured is something like "My Mom standing next to a table with a vase full of flowers on it" - then having 10 Mbits/sec is a heck of a lot of data. If it's raw video - then it's pathetically little.
We can estimate the bandwidth your eyes could theoretically produce if they were transmitting "raw video". We know that the retina has a resolution of around 5k x 5k "pixels", that we can see motion at around 60Hz, and that we have more dynamic range than we can display with 12 bits each for Red, Green and Blue. So at the 'most raw', two eyes would require 5k x 5k x 60Hz x 2 x 12 x 3 bits per second. That's 108 Gbits/sec - which is vastly more than the 10 Mbit to 100 Mbit figure this article suggests. You can argue about the details of the numbers I used here - but we're looking at four orders of magnitude - so I'd have to be a LOT wrong for the conclusion to change!
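If you want to check my arithmetic, here is the same estimate as a few lines of Python. The 5k x 5k resolution, 60Hz, 12 bits per channel, and 3 channels are my rough guesses, as stated above, so treat the output as an order-of-magnitude figure only.

# The 'raw video' upper bound from above: two 5k x 5k retinas at 60 Hz with
# 12 bits for each of three color channels. All inputs are rough guesses.
pixels_per_eye = 5_000 * 5_000
fps = 60
eyes = 2
bits_per_channel = 12
channels = 3                              # Red, Green, Blue

raw_bps = pixels_per_eye * fps * eyes * bits_per_channel * channels
print(f"~{raw_bps / 1e9:.0f} Gbit/s raw")                       # 108 Gbit/s

article_bps = 10_000_000                  # the article's ~10 Mbit/s estimate
print(f"~{raw_bps / article_bps:,.0f}x the article's figure")   # ~10,800x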
So it's pretty certain that what they are measuring in TFA is some kind of condensed or summarized version of the visual data.
That being the case, it's pretty silly to be comparing "My Mom standing next to a table with a vase full of flowers on it" to a 640x480 JPEG file. It's simply not an 'apples and apples' comparison.
Re:So if I plug enough CAT5 cables into it... (Score:5, Insightful)
Remember that humans don't see a pixel-per-pixel representation of the world. We see a tight spot of color and detail in the center of our retina (the fovea? Bio-types please correct me) surrounded by blurry shapes and lines. Around the edges, in our peripheral vision, we don't even see color, just luminance.
Proof? Take a bright LED lamp and move it into your peripheral vision. What color is it, not from memory, but just from looking at it?
The foveal area of the retina is more densely packed with cells and blood vessels. It has more cones -- the chroma-type cells -- than rods.
This indistinct image is inverted and assembled into a whole by the brain, which carefully processes different shapes, lines, movement, flickering, and what-not to produce what you THINK you see. The brain fills in any pieces of the image that don't have enough detail, frequently from memory.
This is why optical illusions work. You deceive the biological mechanisms that process the image into producing bad data by giving them a skewed sample of the image.
Also, neural mechanisms are asynchronous and really can't be measured as a kbit/s rate. You perceive a flicker of motion one second and then a spot of color the next. This is assembled into a ball that you turn to face -- to get a better image -- and then catch. Your brain has a lot of built-in firmware to do image manipulation, but you have to 'learn' the software necessary to do pattern matching and response over your lifetime.
You only get a few bits' worth of information in the first few milliseconds while you're recognizing the ball, and then many megabytes' worth over the next few seconds.
Another thing... as sensitive and immersive as vision is, your ears probably have much, much more data input. They have vastly more dynamic range. Most people don't even notice themselves filling in visual information with audio information, but it does happen.
For example, you hear a person's voice, and you *think* you see their face.
Close your eyes when talking to someone, especially when in a group. Note how easy it is to visualize faces just from hearing voices.
I'm not denying that the brain has massive throughput from the senses, but you really shouldn't try to measure it in digital terms. It's all analogue.