Technology

Blind Get Wired - for Sight

Graz writes "MSNBC has an article about a blind guy who can navigate around obstacles using camera input fed into his brain." It's not much, but it's way better than nothing. Looks a little bit painful, though. Update: See this ABC story with a slightly different take on the same subject.
  • As soon as I saw the article, I was reminded of the cat story... I see interesting applications ahead for both projects independently, but how about using them together, cooperatively (output from cat -> input to blind human)! Just think of the applications for seeing-eye animals! Soon they will actually be able to let their owners see exactly what they are seeing (of course, unless the humans are wearing the cat as a hat, they will be getting a much different perspective...but talk about *night-vision*!). Severian -- "I am the meaning of this sentence."
  • I personally would not volunteer a child of mine for such experiments, even if I'd designed the device myself.
    HA! Especially if I designed it myself!
    -Chuck
  • Strange to think that several centuries ago, doctors drilled holes in the heads of their patients to "let the evil spirits out", and now modern medical science is doing something very similar as a valid medical treatment.


    Rev. Dr. Xenophon Fenderson, the Carbon(d)ated, KSC, DEATH, SubGenius, mhm21x16
  • He was part of the inspiration for my handle. It is worth noting that when they hooked up the ship's computer to his visor, they couldn't make sense of what he saw. When they asked how he could, he responded "Practice".

    A lot of my customers are blind, so this would probably be nice. Except for the cost.
  • This guy's been on the NY subway system ... you gotta hope he doesn't meet a bag-snatcher ... that'd really smart. (Wouldn't like to be nearby either - when the wrenched cable bundle pulls out a sizeable lump of brain along with it ... yuck.)

    Regards, Ralph.
  • by Genom ( 3868 )
    I can see it now...

    In the first non-experimental implantation, blind guy Joe Smith gets a successful cybernetic eye implant.

    The test of his eye was a copy of Playboy, opened to the centerfold, and placed directly in front of his face.

    Upon opening his "eye" for the first time, Joe suffered a major coronary...the cause, say doctors, was "sensory overload".
  • Remember that cat vision article a while back? Looks like the technology to talk both ways is there ... well kind of. This is still very promising.

    My question is: how useful would it be to someone blind from birth? How would you calibrate it, etc.? This guy was at least able to see till age 36, so he had some idea of what to expect.

    dv
  • Let's just hope he doesn't sue us when he finds out we've been trying to reverse-engineer ourselves for centuries! ;)

  • I think interpreting brain signals would be the much harder part - much harder than the actual simulation of the senses.

    Frankly, I'm surprised as hell they got this far. If they can interpret and send brain signals as images, that's more than I knew was possible before. I didn't know we were anywhere near this type of technology.

    However, was this blind person previously not blind? If he wasn't, how does he know that what he's seeing is anything like reality? For all we know, the signals could be interpreted wrong and give him some distorted view.
  • Or, more accurately, Lily C.A.T.
  • I don't know - something looks a little fishy about that cranial interface - and installed in 1978?

    "A blind man known only as 'Jerry'"
    "Dobelle is chairman of the Dobelle Institute, a medical device company in New York."
    "Dobelle said an improved version of the device should go on sale overseas, in limited numbers, later this year."

    I'm calling "hoax" -

  • Remember that cat vision article a while back?

    Yep, it looks like we're one step closer to curing feline blindness! CYBERCAT!


    ---
  • Actually, I would doubt he paid one red cent. In fact, the odds are, he might even have been paid for the experiment. From what I know about medical research, volunteers are hard to come by, and the only way they tend to get them is to pay, or to offer the chance for something unbelievable (e.g., sight to the blind). And I think I would have a pretty hard time with being deprived of my vision. An overwhelming percentage of sensory input is visual. I'd probably give it a go. At any rate, it's a super altruistic endeavor. He'll probably never have 20/20 vision, but his participation in the experiment paves the way for the future.
  • Take a look at http://www.dobelle.com/vision/video.html - they even have clips of the cables jacking into the connector on the patient's skull! Eerie stuff. That is pure Borg technology. There are really a lot of AVIs, and I'm currently downloading them all :-) And by the way, it seems not to be a typo. They again write that he got the implants 22 years ago.
  • I bought a pair of those and started watching Baywatch. All I could see was a glass tube.
  • I'd undergo surgery myself if I could have heat beams implanted in my eyes! I'd have sunglasses on and be able to set someone's hair on fire and nobody would be the wiser. Or how about being able to see through clothing? Endless hours of fun!!!
  • Actually there have been experiments with feedback, too - There was a case where a paralyzed guy was able to control a computer cursor with his brain waves - only in "up/down" directions, but also a very impressive start. Sorry, I don't have any links handy right now.

    I say give this technology five to ten years, and see what happens.


    The only scary thing is, what if they really succeed in building a "thought reader"? It's not that unrealistic, you know....
  • I can just see it now... (pun intended) It's the press conference of the millennium (have to fit this overused word in there somehow). All of the world is watching as the first blind human to use computer-aided sight gets ready to demo his newfound sense. He's walking up to the podium, and !POP!, he trips over the cable, yanking all the diodes out of his skull...

    But then again, maybe they'll have thought ahead and installed a tension release connector... :-)

    Jeremy
  • I guess Jerry does 'need it like a hole in his head'.
  • How about combining this with the virtual newscaster of the previous posts? - a live feed to CNN. Would he learn to filter out all the trivial background stuff the way we learn to ignore so many details? Think about it, you could build up the capacity to ignore newscasts, advertisements, political broadcasts.

    Anyone want to donate their kid to research?

    Let's not restrict ourselves! How about several kids? They could have IR communication between them from chips embedded in their amygdalae - distributed smelling! Sorry, time for sleep I think.

  • Well, it looks like we're well on our way to Borg implants. Hmm, don't let MS find out.
  • The interesting thing here is that the visual/distance data is processed not by the retina (which does a fair amount of processing before the signals get to the rest of the brain), but by the computer system. This is a great test of various theories of what type of lower/upper level processing is done by the brain.

    I have to admit, the direct neural link is pretty interesting, too.

    Bryan Baskin
  • That was HILARIOUS! Why was this moderated down? It wasn't off topic! It was a joke about a "blind man"!!! Geez...
  • Hrm...

    Basically, what you were saying was "The brain has never been able to use any more than 5 senses." That's true, but until we hook more stuff up to it, we'll never know. There's also the possibility of either "growing" new lobes for new things or "emulating" the lobes in hardware. Or we could just plug new senses into the visual cortex or something (would it be another type of sight then?)

    Anyway, the human brain can do *a lot* more than is evolutionarily needed. I'm sure that it could be augmented somewhat by technology.

    "Suble Mind control? why do html buttons say submit?",
  • Right, but that's like saying "My PC which has no expansion slots has never been able to use anything more than the default hardware, but until I crack the case and start soldering stuff onto the traces on the motherboard, I'll never know." It's not that simple.

    Hrm... The first time I read that statement, I thought you were saying that you couldn't just do that with a computer. The second time I read it, I thought you were saying that you could do it with a computer, but not with the brain. In any event, you probably could solder more stuff onto a computer motherboard if you really needed to (the memory interface would be a good place to start...)

    Evolution is not kind to superfluous stuff.

    If that were true, we would all still be single celled organisms, I'd bet. I don't know why our brains would need to be so advanced as to be able to figure out things like relativity, integral calc or the structure of the atom.

    Of course you know more about the brain than I do; obviously we wouldn't just be able to solder on new inputs for new senses. But what about "sidebanding" the data? I mean, when someone plays Quake, they don't have to think about their fingers or the monitor in front of them - they're just "there". In a sci-fi novel I'm writing [iastate.edu], people interface with computers by 'co-opting' unnecessary nerves in the spinal cord. Would something like this work?

    "Suble Mind control? why do html buttons say submit?",
  • Just out of curiosity, how do you keep implants like these in place and functioning for such a long time (since 1978)? Does the scalp form a protective barrier or seal that prevents infection? Do the internal wires accumulate proteins or other contaminants ("brain cheese" :-) and become less effective?

    The picture in the article made it appear that the wires went through the head, not to a connector on the head. Maybe that was just an illusion.

    In any case, how do you ensure that a cranial implant survives 22 (!) years? Was that a typo?

  • by taer ( 31134 )
    He could see before the age of 36. At least that's what the article said.
  • Sure, I can't see any reason why an artificial eye wouldn't work. Artificial retinas seem to be coming along nicely (better than Jerry's rig, anyway), and they operate just as you say, by signalling to the LGN via the optic nerve. OTOH, we're still at least a decade away from even something as simple as a good 1024x1024 greyscale, let alone the retina's 10000000x10000000x24 or so.

    However, my point is that this has little to do with how vision, let alone any other sense, works. What we have is a method for encoding bitmaps with just enough similarity to the retina that the subject can see something; we're at the leading edge of understanding a trivially complex, if important, aspect of visual processing. blakestah was correct: it is ludicrous to think we can do what you're proposing with our current technology and understanding of the brain. We're still in the 'if I press here, do you see something?' stage of neuroprosthesis, and decades, at the very least, from actually being able to restore sight.
  • No, that was more a matter of choosing a (relatively) easy target. The catcam pulls data from the LGN (IIRC), which basically maintains a 1:1 mapping with the retina. The data being pulled from the LGN's output (it was output, wasn't it?) to the primary visual cortex will be mostly identical to the output from the retina to the LGN. So, except for the geography involved, you basically have a very complicated artificial retina for your computer.

    Actual vision--what people experience as sight--is far trickier than this, involving far more processing. (Complexity analogy: the catcam level is a $200 camera plugged into a $2 million supercomputer. Understanding the camera will not necessarily help you understand how the computer recognizes your face and says hi, though it is certainly a start.)
  • No no, don't misunderstand me - I'm not saying it will never happen. I'm just pointing out that *right now* it only works one way. It seems to me that the logical conclusion is to work this into some sort of VR system (when it's capable of that). Of course, to make it a really great VR system, you'd have to simulate the other senses too... I wonder how difficult that would be for a device similar to this?

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • by Foogle ( 35117 )
    The obvious responses to this story will be along the lines of "jacking" into computer systems. It should be noted though, that this technology (while obviously a long way from that point) only works one-way. That is to say, the man cannot communicate with his sensor.

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • Has anybody taken a look at the picture of this guy? It reminds me a lot of the Slashdot icon for the MS Borg, only this guy is for real.
    Check him out [msnbc.com]

  • Did you look at any of the videos (alright, I know they were AVI's, so you might not have - anyhow, there is a link elsewhere in this article you can follow)?

    One of the videos shows an overlay of what he is seeing over the actual camera image. While it is far from "ideal", you could kinda tell how the system worked - by using image processing to do edge detection, then presenting that to a (probable) array of wires implanted in the visual cortex, which, when stimulated, causes "phosphenes" to appear (if you close your eyes in a dark room, you can see random ones). I could see that the overlay (of what the blind person saw) matched the edges of the mannequin pretty closely - as he panned the camera over the mannequin, you could see the array change shape to hug the shoulders and head of the mannequin (edge detection working).

    I think if he had a better array, things might be different - but it is pretty cool what they are doing with the 20-year-old implant tech they have to work with.

    I remember reading somewhere that 'rogue' researchers out there have experimented with stimulating the visual cortex via strong magnetic and microwave fields - anyone have any info on this?
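
    For the curious, the edge-detection half of that pipeline is easy to caricature. Here's a toy Python/NumPy sketch - the 10x10 grid and the simple gradient filter are my guesses, not what Dobelle's rig actually does:

        import numpy as np

        def frame_to_phosphenes(frame, grid=(10, 10), threshold=0.25):
            """Edge-detect a grayscale frame (2D floats in [0,1]) and pool
            it down to a coarse on/off electrode map, one cell per phosphene."""
            gy, gx = np.gradient(frame)   # cheap gradient-magnitude edges
            edges = np.hypot(gx, gy)
            h, w = frame.shape
            gh, gw = grid
            crop = edges[:h - h % gh, :w - w % gw]
            pooled = crop.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
            # an electrode "fires" (evokes a phosphene) where edge energy is high
            return pooled > threshold * pooled.max()

    Deciding how to actually drive each electrode safely from that boolean map is, of course, the 99% of the problem the sketch leaves out.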
  • I'd imagine that those wires terminate in some sort of plug or connector, and that the hole in his skull is neatly sealed off by the receiving end that the plug connects to. I don't think anyone's about to leave this man with a gaping hole in his head.
  • by MrEd ( 60684 )
    Brings new meaning to the line from 'Basic Instinct', that movie that all us 19 year olds rented from the video store with our older brothers...

    One cop: What're you going to do tonight?

    Other cop: Jack off the computer, I guess.

  • I personally would not volunteer a child of mine for such experiments, even if I'd designed the device myself.

    Damn straight. My kid's not going to have holes punched in his head either. And that's a big hurdle for this sort of research to clear. Experiments on the human brain are done exclusively on volunteers, usually those who have no real alternative to this sort of surgery. And let's face it, how many people want to have their brain poked at?

    There's no way to get around this. Dr. Mengele is about the only other option. *shudder*

  • So you mean I can watch the glowing blue hockey puck streak for real now? Whoo!

  • Cover an eye. You'll find out that you can still perceive depth pretty well in many environments; you'll probably suffer a worse reduction in field of view than in depth perception.

    What you do retain is the ability to perceive textures and other cues, either innate or gained by experience, that let you rapidly decide what's further away. For instance, if you look at a tiled floor, the tiles further away look smaller -- and the lines all converge at infinity. Normally it's safe to assume that lines closer together mean further away, unless you're dealing with a twisted architect or interior designer...

    That all depends on having enough resolution to see these patterns. If all you can provide is just a few signals, the regular methods may not work too well.
  • I can't wait until Sony© unveils their model at CES 2004, with features like SteadyShot®, NightShot® and LaserLink®. A steady x-ray shot that you can broadcast to your TV. Wow - just think what you could do with this thing! I would beat a hasty path (and anyone who got in my way) to the doctor to have one of these implanted ;)
  • The problem here is not that the hardware was the restriction - the surgical implanting of the electrodes was the hard part... Want more pixels? You'll need more electrodes - more surgery, and altogether harder to pull off. Perhaps nanotech could manage that - but in my opinion, we're better off looking for biological fixes. There's already such a good conduit in place... it's just got to be fixed where it's broken, eh?

    mmmm.... nerve regeneration...
  • I think you could do the same thing to grab speech without speaking that you could do to grab arm movements without moving your arm.. just watch little abortive movements of the vocal cords.

    Subvocalization, yes. You are correct that this may be a way to grab speech signals (I dare not say more because I'm thinking of researching in this area), but you still wouldn't get the vocal tones. Also, if you're grabbing directly from small muscle movements, the technology will not be useful for helping persons (like Hawking) with motor disorders. Since the brainjack is most likely to arise out of research for the disabled, I think it unlikely that we'll end up going this route.


    You might not care about speech that much, though, once you got the arm/finger movements working, since a 2D symbol buffer is a much better means of communication (and your language can adapt to it the same way it adapts to writing instead of speaking).

    In which case, you once again don't really need direct neural interface; this sort of thing is the principle behind the various chorded keyboards and "sign languages" that are being experimented with to enable wearables.

    Actually, your face and mouth might be the best way to get info to the computer. There are LOTS of separately controlled muscles which you could be trained to move individually.

    That's certainly true. On the other hand, you'd look pretty weird, especially trying to compute while talking.

    Connecting close to the muscle level solves all these problems since you can always choose not to talk.. unless you talk to yourself a lot.. :)

    Many people can be seen to move their lips when they read. I know that I have a tendency to subvocalize when I'm typing messages. (Of course, I also talk to myself.)

    Example: I suppose there might be a way to stimulate a section of knowledge to make your recollection of it better, but we could do something like this by having a series of "note cards" which recap the important theorems or something.

    -shrug- We've got that now to one degree or another; my text editor lets me have a "clip book" of common language motifs for all the markup and programming languages I know. I could probably write similar reminders for any new language, or any extensions I make to my existing languages. If you're talking about something that provides better memory by giving people an external store... well, in theory it's possible, but it's a long way off, because we haven't a clue what format the brain uses to encode memories, let alone the signals to access those memories.

    It just seems that we don't know that much about our thought processes except for the parts that work via language.

    We don't know much about our thought processes, period. The 1990s were supposedly the "Decade of the Brain". We made some progress. The advent of fMRI helped a great deal, as did advances in microbiology and biochemistry. The Genome Project may also help. However, there's a huge amount of work remaining to be done, just as the HGP has sequences, but no clue what those sequences do.

    At the intro levels of cognitive science, AI, and robotics, everything looks neat. Then you get inside and discover how little we actually understand. It can get a bit discouraging at times, but it also gives you some job security.
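
    To make the chorded-keyboard point concrete, here's a toy decoder. The three-channel chord table is invented; the channels could be keys on a Twiddler-style device or, hypothetically, EMG taps on individual muscles:

        # toy chorded-input decoder: a set of simultaneously active
        # channels maps to one symbol, as on a chorded keyboard
        CHORDS = {
            frozenset({0}): "a",
            frozenset({1}): "e",
            frozenset({2}): "i",
            frozenset({0, 1}): "t",
            frozenset({0, 2}): "n",
            frozenset({1, 2}): "s",
            frozenset({0, 1, 2}): " ",
        }

        def decode(active):
            """Return the symbol for a chord, or None if unassigned."""
            return CHORDS.get(frozenset(active))

        assert decode({0, 1}) == "t"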

    Alik


  • Damn straight. My kid's not going to have holes punched in his head either. And that's a big hurdle for this sort of research to clear. Experiments on the human brain are done exclusively on volunteers, usually those who have no real alternative to this sort of surgery. And let's face it, how many people want to have their brain poked at?

    Actually, I should amend my original comment. I would not volunteer a child of mine for such an experiment given the current state of the tech. However, if I knew that my child would be blind from birth, and if such implants were highly successful in adult humans, and if neonatal implants had already been tested and proven in other species, then I'd be willing to go ahead with it.

    As you say, volunteers are a hurdle. However, there are plenty of people with various kinds of sensory and motor disabilities out there. Implantation of electrodes isn't pleasant, but it's usually not life-threatening. Given the tradeoff, I think there'll be sufficient volunteers to establish clinical usefulness once this field takes off (which seems likely to happen Real Soon Now).

    Alik
  • How much do they really know about the phonological loop? How different is the voice in the back of my head from the voice that is about to come out?

    The answer in the cobwebs of my mind is that there is some linkage, but it's not total. That is, when you're hearing your thoughts in your head or articulating via typing, the hearing/speech zones do become activated, but to a lesser degree than when you actually perform those tasks.

    Actually, when one thinks about it, the processes of typing and writing are pretty neat; it's conversion to a completely different output form, and yet my brain knows that speech and text have very similar informational content.

    If the process of thinking about moving an arm or saying something really does trigger activity like the actual movement, then maybe we could interact with a computer via these abortive movements. This would be the killer app for brain implants, since people could use images and audio in everyday communication.

    There is the possibility of doing what you suggest, though we need to improve our tech a bit first. Using it to move a mouse or select symbols is a bit more realistic than having the computer snag entire images and sounds out of your head, though. (And remember, when the sound is in your head, it doesn't have the tones of your voice attached to it yet; those come when it's actually articulated. Thus, if you coded it to audio, the computer would have to do voice synthesis for you. (Stephen Hawking, obviously, would love such a thing. I doubt we'll be able to give it to him before he dies.))

    Also, realize the potential problems of doing this. If there's an implant recording your thoughts and sending the data stream to an Internet-connected computer... you think you've got Big Brother looking over you now? Just wait.

    Idea: the phase space of the ``positions of the human body'' is MUCH larger than 3-dimensional. It might be possible to take a muscle group and program a computer to respond to the movements of those muscles as if they were moving an object in a higher-dimensional space; then the user might gain some intuition about things which they could apply to solving some open problem.

    This is an interesting idea. I don't think it needs direct neural interface, though. Why not just use a control glove or other existing haptic interface? If you need a really large number of DFs, use one of the motion-capture systems they have for animation tasks, like the Flock of Birds.
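
    In fact, the mapping itself is trivial to sketch: treat each of six control channels (glove sensors, tracked muscle groups, whatever) as one rotation plane of R^4 and let the user steer a hypercube. All the numbers below are arbitrary - Python/NumPy:

        import numpy as np
        from itertools import combinations, product

        def rotation_4d(angles):
            """Compose a 4D rotation from one angle per coordinate plane.
            R^4 has six planes, so six control channels steer the object."""
            R = np.eye(4)
            for (i, j), a in zip(combinations(range(4), 2), angles):
                G = np.eye(4)  # Givens rotation in the (i, j) plane
                G[i, i] = G[j, j] = np.cos(a)
                G[i, j], G[j, i] = -np.sin(a), np.sin(a)
                R = G @ R
            return R

        # hypercube vertices steered by six made-up 'body' readings,
        # orthographically projected to 2D for display
        verts = np.array(list(product([-1.0, 1.0], repeat=4)))
        channels = np.deg2rad([10, 20, 30, 40, 50, 60])
        xy = (verts @ rotation_4d(channels).T)[:, :2]

    Whether steering that projection long enough buys you any genuine 4D intuition is exactly the open question.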

    Alik
    Hrm... The first time I read that statement, I thought you were saying that you couldn't just do that with a computer. The second time I read it, I thought you were saying that you could do it with a computer, but not with the brain.

    I believe that doing it to the brain is potentially impossible, or at least very very hard. Doing such a thing to a computer is simpler, since computers are simpler devices, but I still don't think it'd be easy.

    In any event, you probably could solder more stuff onto a computer motherboard if you really needed to (the memory interface would be a good place to start...)

    There are two problems with that analogy. First, the memory interface is just that: an interface. If you've got an open slot, then you've got something more than the brain has. (Yes, the brain has memory, and it's got to be stored somewhere. Maybe someday we'll be able to read/write it directly, but that's even further off than doing image capture from the visual system.)

    The second problem is the one you'd have to overcome to try and do the computer task. Let's say you do solder something in between one of the SIMMs/DIMMs and the microprocessor. Every time you send a signal to your device, the memory also sees it. Every time your device writes something to the lines, memory also sees it. Every time you fetch from or write to that memory, your device sees the signal. In all cases, something will be trying to react to a signal it wasn't meant to see. It seems unlikely that you could get this to work even with very clever software, unless the device's purpose was simply to modify the actions of the memory chip.

    If that were true, we would all still be single celled organisms, I'd bet. I don't know why our brains would need to be so advanced as to be able to figure out things like relativity, integral calc or the structure of the atom.

    I said "superfluous". The ability of our brain to handle abstract thought is clearly not superfluous; it's allowed us to survive and proliferate very nicely. Advanced physics will eventually get us the hell off this planet before it becomes uninhabitable, and therefore the ability to understand it is directly useful for the continuation of the species. However, if there were a population of neurons that wasn't doing anything (like the fallacy that people are only using 10% of their brain), it would be very rapidly selected against.

    But what about "sidebanding" the data? I mean, when someone plays Quake, they don't have to think about their fingers or the monitor in front of them - they're just "there". In a sci-fi novel I'm writing, people interface with computers by 'co-opting' unnecessary nerves in the spinal cord. Would something like this work?

    I'm not making the connection between the two. I went and read the parts of your story you've posted. You made reference to "unconnected nerves" --- AFAIK, there ain't no such animal. Part of the biology of neurons is that if they don't receive regular stimulation (both electrical and chemical), they die.

    You raise an interesting point, though. A lot of this die-off occurs in childhood. If you did one of the neonatal implants and plugged it into some sensory cortex and used it to stimulate and secrete growth factors, you could create an extra population of sense neurons specifically devoted to that implant. The problem is, it's still sharing a bus with the existing sense data. We have no idea whether or not the brain will be able to handle the extra data. (If you used it for something related to that sense, like seeing UV or smelling new compounds, it could possibly be integrated properly.) To test it, though, we'd need to be doing potentially harmful experiments on perfectly normal kids. That's unlikely. It's also a rather limited use of implants anyway. You don't need an implant to see UV or IR; all you need are the right kind of lenses. I think brain implants are very cool, but they shouldn't be used if not necessary; the body never responds perfectly to being messed with.

    Alik
  • Ouch, ouch, and ouch. The fact that we are putting devices into ourselves to modify the way we perceive our environment is an amazing thing. I can't wait until devices are made that will enhance the eyes (no, I am not talking about those little plastic gadgets that you stick into your eyes and that make your eyes hurt after 15 minutes of computer use). Perhaps this concept will develop into a tool to cure blindness! Bravo! Now the blind will be able to see the blue flaming hair of the "Virtual Newscaster". ;)
    Although the benefits of this could be great, I think I will wait until they have a pocket-sized version...

    Byzandula
  • Did anyone notice the size of the hole that the wires run through? I think I could stick my pinky finger in it. That's relatively large considering where it leads. And what happens if someone pulls on those wires? If I were him, I'd have the wires laid in parallel under the skin of the skull and run down to his hip. Kind of like that flat coax you use when running cable under carpet.
  • Not quite a $200 camera plugged into a $2 million supercomputer. More like somebody's EYEBALL plugged into a supercomputer, and working. If they can decode an image at the LGN, then encoding an image at the LGN is a logical extension of the technology. So an artificial eye could be built to view the image with a camera, then encode the image and feed it into the LGN. Nobody's done it yet, but between the Cat Cam and Jerry, I think it's obvious that somebody's going to do it quite soon. In any event, by no means is communicating images with a brain "ludicrous".
  • I think our understanding of visual signals in the brain may be better than you think. There's a lot of refinement left, but the basic protocol is already known. I refer you to the Slashdot article at http://slashdot.org/articles/99/10/07/1313256.shtml

    Several researchers at Harvard have run an implant into a cat's visual cortex, and can transmit images from the cat's mind to a computer. This is the reverse direction of sending images INTO the brain, but it displays a solid enough knowledge of visual signals in the brain that I'd hesitate to use the word "ludicrous". They'll get there, and soon.
  • will i have to install windows on my brain for it to run, or have they already released the linux drivers?


    ---------------
  • It is worth noting that when they hooked up the ship's computer to his visor they couldn't make sense of what he saw.

    Where ARE the screenshots, anyway? I would expect they've probably hooked up his computer to some sort of terminal... it probably *wouldn't* make too much sense if the brain has to do a lot of interpretation on what it gets for input, but it would be interesting to see nonetheless. I remember the cat article had some screenshots; anybody know if those were "real", or simulated?


    mcrandello@my-deja.com
    rschaar{at}pegasus.cc.ucf.edu if it's important.
  • They already sell glasses that do that. Unfortunately they only come in one style, with big red spirals painted on them, so everyone knows you're staring at their bum.


    mcrandello@my-deja.com
    rschaar{at}pegasus.cc.ucf.edu if it's important.
  • Several researchers at Harvard have run an implant into a cat's visual cortex, and can transmit images from the cat's mind to a computer. This is the reverse direction of sending images INTO the brain, but it displays a solid enough knowledge of visual signals in the brain that I'd hesitate to use the word "ludicrous". They'll get there, and soon.

    I think I will stick with my words for now. The idea is that you can take an image display and figure out what patterns of activity it causes in the brain, and then cause them through an implant. To think that this is possible currently through stimulation at the level of the cerebral cortex is ludicrous. Besides, the effects of sustained long term stimulation at that level are well known, and if the experiment works initially it will not work for ANY extended period.

    LOWER levels of the visual system offer substantially more hope for this process. It is much easier to describe the coding of a visual image at the level of the retina, where simple things like local contrast and brightness of a pixel in space are (almost) all that matter. In fact, there is a professor at UC Berkeley, Frank Werblin, who has a nice display of how far along this has come. It would be VERY possible to take an image and stimulate the retina to create a perception of the image.

    Clay Reid at Harvard works on a model of the lateral geniculate nucleus in which he looks at image coding one level higher in the nervous system. There are well known effects of gaze angle and attention that he ignores to simplify his work. Others, like Joe Malpeli, work very hard on those other effects, and work in awake preparations. Even to say we have a STRONG understanding of the coding at the LGN does not mean we know what is happening one level higher in the cerebral cortex.

    And that was the level at which we were talking about simulating an image.

  • According to slashdot, this has already happened.

    From http://slashdot.org/features/99/10/01/1215235.shtml

    "Clark's #7, sensory input. I just talked to a professor of neurophysiology here and he told me a few interesting things. He said that we would definitely be able to do this within 100 years.

    There's lots of research into this area, especially the eyes. Today we have a pad you can wear on your back that has thousands of pins in it. These pins put light pressure on the skin of your back to form a "braille" image of the b/w image from a camera. With practice, people are able to see with their skin. Fully jacking the brain should be do-able by 2100 he says definitely. I think he was being conservative."


    Hmmm, this is interesting. I am a neurophysiologist who studies the sense of touch and I haven't heard of these sensory pads - not that they'd be that hard to construct. But again spatial resolution will be a problem. There is plenty of skin back there, but the spatial resolving capability is poor due to receptor density. The fingertips and lips are the high spatial resolution centers.

    In any case, I'd appreciate a reference if you have one more detailed than the above.


  • Two comments. First, the small nerve node where the right and left eye images are swapped (basically, the patch panel for the optic nerves) is called the optic chiasma. I believe (but am less sure) that the right and left eyes are completely reversed as to brain half. I don't think it's half of each eye to each half of the brain. Someone check me on that, as I'm a lowly EMT and not a doctor. :-) Second, as a person who has lived his entire life without depth perception (I have monocular vision), I am not sure why they are so emphasizing depth perception as a priority. Get the resolution up to where other visual cues, such as the relative size of objects, become usable to the patient.

    I can do just about anything an unimpaired person can do, including parallel parking, backing a car with an attached trailer, and shooting archery at ranges up to 100 yards. The only things that really give me trouble are stairs in dim light (I sometimes don't realize where the first step is) and catching thrown objects (team sports involving a ball are Right Out, to use the British expression).

    If I were in Jerry's position, I would want the priority to be given to visual acuity. Depth perception is vastly overrated for daily activity, as our brains have excellent compensatory means.

  • Seeing the way things are going, how about this:

    You lose your sight. There is perfect electronic vision available, but the operation is too expensive for you to be able to afford. A company comes to you, willing to foot the majority of the bill. You agree, and after the operation you have perfect vision... and a banner ad permanently floating at the top of your field of vision.
  • I do not know all the specifics, but I would say that protein and other materials would adhere to the implant (for information, see the Vroman effect). These adhesions probably will not cause degradation of the device itself, but may cause other complications, including inflammation, calcification, and scar tissue. As far as lasting 22 years goes, some vascular implants last 15 years with 99% success. It is quite possible to create a complex implant with a life expectancy of 30 years or more.
  • If I were blind, I would lend myself to this sort of thing.
    First: it gives you hope. Second: you stand a chance of getting it working well enough within your lifespan (~20 years from now). Third: even a 10x10 bitmap is better than nothing. Fourth: you are HELPING OTHER PEOPLE. Would you sit on your ass and do nothing? Javier
  • Depth perception is vastly overrated for daily activity, as our brains have excellent compensatory means

    The thing is, it seems that Jerry's brain is getting the image 'already processed', because he has an "ultrasonic range finder" to measure the distance of objects. Maybe the image arrives at the last stage of processing in the brain, and the 'compensatory functions' of his brain are not even used (maybe that's the part that's screwed up, and that's why he can't see).

    If I cover one eye, I can still get some depth perception by the size of things, the perspective, and how I have to focus the eye to get a clean image of an object. Sure, it's not _crucial_.. I was just wondering why they use that "ultrasonic range finder" instead of a second camera. Also, a better 3d effect is given by providing 2 images of an object from 2 different angles.

    Now, I also wonder how an image is formatted for a brain, including the information about every particular object, and the depth stuff.

    I want more information!! :)
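
    For what it's worth, the two-camera idea is classic stereo matching: depth falls out of the horizontal shift (disparity) between matching patches. A naive Python/NumPy sketch, with patch size and disparity range picked out of thin air:

        import numpy as np

        def disparity_row(left, right, row, patch=5, max_disp=16):
            """Naive block-matching stereo for one scanline: for each pixel
            in the left image, find the horizontal shift whose patch best
            matches the right image. Bigger disparity = closer object."""
            half = patch // 2  # 'row' must be at least 'half' from the border
            w = left.shape[1]
            disp = np.zeros(w, dtype=int)
            band_l = left[row - half:row + half + 1]
            band_r = right[row - half:row + half + 1]
            for x in range(half + max_disp, w - half):
                ref = band_l[:, x - half:x + half + 1]
                costs = [np.abs(ref - band_r[:, x - d - half:x - d + half + 1]).sum()
                         for d in range(max_disp)]
                disp[x] = int(np.argmin(costs))
            return disp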

    --

  • One question is how best to provide depth perception.

    Isn't this provided by the 2 images from the 2 eyes? I wonder if they are sending the image to both 'eyes', or maybe to a part of the brain that gets the "merged" image of the 2 eyes. I wonder if they figured out how to send images to each eye..

    And, I wonder if the guy gets dizzy when he moves his pupils around and the image doesn't change.. Or when he moves the glasses around

    --

  • Did anybody notice that the 2 articles on MSNBC and ABCNews were exactly identical?

    Different perspective? No.. Maybe different HTML formatting, but identical articles.

    Both were done by the Associated Press, (c) 2000 (see the bottom of the articles).
  • I wonder how much actual formatting of the data was required between the camera and the implant. Does it actually get injected into an nxn array of brain cells, or do they give him the data 'raw' and let his brain figure it out? The latter seems more elegant, and the brain is capable of adapting. Getting it to recognize that input is there seems trickier. I remember hearing a long time ago of some scientist who wore glasses w/ lenses that flipped everything upside down. Eventually his mind was able to adapt and everything seemed normal to him. So as long as the brain treats the electrical signals as input, interpretation should only be a matter of time, for both children and adults. My guess would be that this is much easier in young children, though.
  • A while ago, an article was posted here on Slashdot that talked about a guy who was paralyzed. Some doctor inserted something into his brain, and now the guy can transmit his thoughts to the computer by concentrating. That is jacked into the computer.
  • I dare say that you'd have a different opinion if you were blind yourself, oh noble AC. Brain implant surgery has extended lives (real lives, not wheelchair- or nursing-home-bound ones) by at least five years, running shunts from the brain to the stomach to help drain fluid pressure buildups. Opening the cranium to help people should never be looked down upon, as long as it is done with the strictest of care. A man who was able to see for a great deal of his life can finally, once again, regain some of the sight that he obviously cherished (or he would not have signed up for the surgery). I commend the scientists and doctors who have made this, and all other medical practices, possible. As more advanced and sophisticated medical procedures become more routine, the general public is able to afford these procedures more and more. I personally hope to live to see a world where many disabilities can be countered by the intellect and ingenuity of man. The real horror story is trying to stifle innovation.
  • no, it's definitely right halves --> right hemisphere, left halves --> left hemisphere.
  • The left half of both retinas is transmitted to the back of the left hemisphere of the brain, in the occipital (correct me) lobe. The right half of both retinas is transmitted to the right occipital. That way, two composite images are achieved in the brain. I'd bet that if they did stereoscopic vision, they only used one half of the brain.
  • This guy was blind since the age of 36. Therefore, he "knew" how to see when he had this device implanted. He also "knew" how to map out new things visually, so although it was probably jarring psychologically, being able to see (albeit in a limited way) wasn't a huge surprise to his underlying brain.

    How about someone who was blind since birth? Has anyone here read The Man Who Mistook His Wife for a Hat by Oliver Sacks? (Sacks is a neurologist and a superb writer--if you get a chance, read some of his stuff.) I don't remember all the details exactly, but detailed in this book were his experiences with a patient, a man named Virgil. He'd been blind for decades, having lost his sight as a very young child. He had a successful operation to restore his sight, and at first it all seemed quite hopeful.

    Virgil was best when he visited his childhood home and saw all the things that he'd already "learned" to map out. In the cities he just got confused and felt better keeping his eyes closed. Eventually this psychological state and other complications overwhelmed him.

    Back on topic, now.
    In Virgil's case, there seems to have been a period in very early childhood in which the senses, and how the brain learns to use them, become "hard coded". Virgil went blind while his sight was in the middle of developing.
    Jerry, the man in the article, already KNOWS how to see. There is some talk in this thread about whether or not a device like this could be "integrated" in with the rest of the senses--but perhaps, if it were implanted in a very, very, very young blind child who hasn't had the senses "hard-coded" yet... Ethics questions of all kinds arise from this, but it'd be interesting to see if the brain would indeed hard-code this into itself.

    I dunno.

  • To only see a few specks of light surely doesn't seem to be worth it. Far be it from me to pass judgment on the guy and the situation. More power to him, but if it were me... well... who knows... Would you do it? I'm sure he had to pay a hell of a lot for so little...
  • Yeah, one way now, but what about later? That's a pretty pessimistic view to take: "It only works one way. What a piece of junk." The cool thing here is that they've managed to feed input directly to the brain without killing the subject (anyone have any info on the general well-being of 'Jerry'?).

    Somebody wants to start experimenting with 'jacking' into computer systems, I'm there, man.
  • Why should the brain implant be necessary? Why couldn't you deliver the "visual" information using a device close to a Braille converter? I imagine a little device that you can strap onto the back of your hand, or wherever the skin has enough nerve endings to discriminate separate tactile impulses.

    This has been done. I forget all the details, but Daniel Dennett writes about it in Consciousness Explained. Essentially the system involved cameras mounted on glasses, and a grid of "tinglers" strapped to the subjects' abdomen or back. With some practice, they could learn to interpret the impulses from the grid as 'sight'. A few interesting notes:

    • The response time is fairly slow - on the order of several seconds to identify/recognize anything.
    • Subjects were able to distinguish between tingler input and normal tactile sensations, so that they didn't (for example) interpret an itch in the tingler area as 'visual' input.
    • Supposedly, one subject (a man blind from birth) was given cameras with a zoom lens. When the researcher hit the zoom button without the subject's knowledge (causing the image to 'loom'), the subject jumped back and shielded himself with his arms. This one sounds almost too neat to be true.

    Long term, this approach is unlikely to produce anything useful, partly because there's no place on the skin where nerves are packed densely enough that you can use a tingler grid with sufficient resolution to be useful, and partly because of the slow response time (OTOH, babies must 'learn to see' as well, and this doesn't happen overnight - maybe with time they'd adapt well enough that the response time was reasonable).
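
    The tingler mapping itself is almost embarrassingly simple to mock up. A toy Python/NumPy version - the 20x20 grid and four buzz levels are my guesses at the ballpark Dennett describes:

        import numpy as np

        def frame_to_tactile(frame, grid=(20, 20), levels=4):
            """Pool a grayscale frame (2D floats in [0,1]) down to a coarse
            tactor grid and quantize brightness into a few buzz intensities."""
            h, w = frame.shape
            gh, gw = grid
            crop = frame[:h - h % gh, :w - w % gw]
            coarse = crop.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
            return np.clip((coarse * levels).astype(int), 0, levels - 1)

    Everything interesting - learning to read that buzz pattern as 'sight' - happens in the wearer, not the code.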

  • >(On the other hand, there is the example of the cat. However, that cat was never expected to have long-term survival.)

    What cat?

    >>I just looked back and realized that they said he had this implanted in 1978 -- Does that seem like an awful long time ago (for this sort of thing) to anyone else?

    >Not really.

    Well, it must seem that way for 'Jerry'. Can you imagine spending over two decades being a guinea pig for this kind of device? It makes my skin crawl. And every time there seemed to be something new that might work, that might just give him his sight back, it was unsuccessful? I'd imagine the main consolation for Jerry is not the extremely limited visual experience he's being offered now, but the possibility that greater and better things might come of this for other blind people in the future. If this is the case, the man has my sincerest admiration. However, seeing as he must have been hoping for more than this for himself, he also has my pity. Poor noble guy.



  • I'd say that we are on track for a lot more 'linked' technology in the future. For starters, better cybernetic replacements for damaged parts, but it won't end there. It must, in due course, come to pass that the technology will enable people to surpass their prior limitations. On the subject of extended vision, you could have prostheses allowing people to see further into the electromagnetic spectrum. You could also, if you wanted to, implant weaponry, communications gear, databases, processors which dictate irrefutable commands, and many other devices inside people, as many a science fiction author has explored. (Personally, I especially like Stephen Donaldson's 'Gap' series, but that's mainly to do with the characters, plot and writing style.) It's gonna be a bumpy ride, people.



  • Has anyone noticed the staggering amount of *good* stuff that is coming out of the scientific community these days? It's truly staggering. They've come out with a possible cure for the common cold, a method with a lot of potential for allowing blind people to see, controlling machinery with thought... using Mir as a hotel in space (something I wasn't sure I'd see in my lifetime)... Also the discoveries about the nature of the universe and the way life works...

    This may be slightly offtopic, but not completely... I say to the scientific community, "keep up the good work!"


    If you can't figure out how to mail me, don't.
  • The problem with this is the sensitivity of areas of skin. Braille is readable because it consists of relatively simple combinations of a few dots in recognisable patterns. With more complex patterns, especially ones that must be interpreted, this becomes more difficult, and anything beyond very large blocks of contrast will be impossible to interpret. On the other hand, the electrical stimulation of the visual centers offers greater potential for detail and resolution, maybe with possibilities for shading via different levels of electrical signal. I remember seeing some stuff about this research a couple of years ago, when they were talking about how far they could push the image on the implants; the greatest limit was the technology needed to create the more complex arrays (100*100+). AFAIR there was also a query about how much detail the centres could interpret, but in many ways this would be the only way of giving sight to those whose retina has no communication with the visual centres. For other forms of blindness and focal problems there is the possibility of projecting onto the retina, which is getting closer to a solution by the day....
  • I'm with JustShootMe! Kudos to the geeks of the world! You are making yesterday's sci-fi into today's sci-fact. Though I must say I might run screaming from the highly Borgish look of the hole in that man's head... it just reminds me too much of bad Star Trek episodes. I may end up with a nightmare about this one... but I doubt the man with the electric eye gives a damn... he can kinda see.
  • Ye gads! It took this long to find a reference to LaForge. Ah well. So how long before this guy can detect infrared light, increases in tachyon emissions, and microfractures in a freighter's hull? It's impressive work, but I agree, the 1978 hardware sure could stand an upgrade. I'm sure they'll work that out for their next subject. A human brain isn't quite a good ol' ZIF socket, is it?
  • My training is nowhere near complete, but my initial guess would be no. The brain does have dynamic connection capacity, but that capacity has always been set to deal with the five senses, all the way down the evolutionary tree.

    It may be possible by mapping the new capability onto a nuance of an existing sense, particularly the somatic sense. The higher areas of the brain are quite capable of such mapping already. We may not gain new senses, but quirks of old senses can be used. For example, it is known that if tactile stimulators are set up right on extended arms, the subject will eventually perceive the stimulation to be occurring in empty space between the arms. Perhaps if the world is mapped onto a person's body (GPS), they will come to 'feel' where they are based on the existing somatic sense. That wouldn't even be all that odd to the person. We already overlay a great deal onto the somatic senses ('gut feelings' and such).

    A ballistics visual overlay that ties into the cerebellum could be interesting. I suspect that a lot of things will start as visual and audio overlays for prosthetic vision and hearing, and move up from there. Prosthetic somatic senses will also be needed for artificial limbs, and will add another input to overlay.

    ...really sensitive to disruption. A little bit of extra electrical stimulation at the wrong time can seriously fuck things up.

    Agreed, we're nowhere near that sort of thing now, and won't be for a while. Even when the tech improves, there will be SERIOUS ethical issues involved. The first steps will have to be on adult volunteers, as always.
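
    The world-mapped-to-the-body idea is easy to mock up, by the way; compass belts along these lines have actually been built. A toy version - tactor count and layout invented for the sketch - picks which tactor on a waist belt should buzz to mark north:

        def active_tactor(heading_deg, n_tactors=8):
            """Which belt tactor buzzes to mark north, given the wearer's
            compass heading. Tactor 0 sits at the navel, spaced clockwise."""
            north_relative = (-heading_deg) % 360  # north, clockwise from front
            return round(north_relative / (360 / n_tactors)) % n_tactors

        assert active_tactor(0) == 0   # facing north: front tactor buzzes
        assert active_tactor(90) == 6  # facing east: north is on your left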

  • You have to start somewhere. This guy probably didn't pay for the implants at all, seeing as he was a volunteer.

    And hey, remember, this implant is from 1978. That means the hardware is, for the most part, ancient in our current technological terms. If he were to get another implant using something more modern, it's quite possible he'd be able to see significantly more.

    But, this is a start. There's a saying out there, "the journey of a thousand miles begins with a single step." Well, consider this to be that single step.
  • Well, I wouldn't say that it's ludicrous. There are lots of ludicrous ideas, and that's not one of them. It may be speculative and far-fetched, but not ludicrous.

    Anyway, either I am misinterpreting the article, or this is far advanced over what I knew we could do previously.
  • Wired Magazine, in an attempt to boost their dwindling number of reader subscriptions, has decided to release a new version of Wired ... entirely in Braille.

    Unfortunately, sales of the new Braille edition have plummeted to exactly one sale, as only one blind man has been determined to be capable of actually surfing the web... More on this later.


    Sorry guys, guess i just read the article title wrong the first time around...

    --Cycon
  • Looks a little bit painful, though. There are no pain receptors in the brain itself. For the record, most brain surgery is done with only local anaesthetic. Thus, the surgeon can control exactly what he/she is doing ;-)

  • Wow, think of the options...

    This particular AC is gone, so let me suggest a Beowulf cluster of them... :)

    Actually, a cluster could use senses like smell in a much more advanced way, building a map of the smells in an area, not just as one specific sampling point.

    Ditto with noise, you'd have a lot more samples.

    A high bandwidth link and a sound-chip dedicated to finding better 3d cues with the info, and modifying your perceptions so that you 'hear' the sound as coming from where all units agree it is...

    And, then there's simply RC5 or CSC cracking. :)
  • by Foogle ( 35117 )
    I just looked back and realized that they said he had this implanted in 1978 -- Does that seem like an awful long time ago (for this sort of thing) to anyone else?

    If they could do a 10x10 pixel image back then, what are they capable of now? Forget Sony's goggles; I want my next monitor to work like this ;)

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • Well, there's something called the Auditory Brain Implant. It converts auditory signals into electrical signals, and sends them directly into the brainstem, bypassing the auditory nerve. This is useful for people who've had their auditory nerves damaged, possibly from an acoustic neuroma (tumor on the auditory nerve). Last I read about it a few years ago, it takes a bit of training to familiarize oneself with the new "sounds," but otherwise helps to get some form of hearing back to folks who've lost it. A simple Yahoo search doesn't show anything informative about it, though.
  • Using it to move a mouse or select symbols is a bit more realistic than having the computer snag entire images and sounds out of your head, though. (And remember, when the sound is in your head, it doesn't have the tones of your voice attached to it yet; those come when it's actually articulated.)

    I think you could do the same thing to grab speech without speaking that you could do to grab arm movements without moving your arm.. just watch little abortive movements of the vocal cords. Example: It feels like I can "speak" without blowing air over the vocal cords, i.e. the vocal cords move but no sound comes out. This would probably make it easier to get the information from your voice than from your arms, because you would not need to train the person in these "almost movements" which do not really move anything.. You could just train them not to exhale when they speak to the machine.

    You might not care about speech that much, though, once you got the arm/finger movements working, since a 2D symbol buffer is a much better means of communication (and your language can adapt to it the same way it adapts to writing instead of speaking).

    Actually, your face and mouth might be the best way to get info to the computer. There are LOTS of separately controlled muscles which you could be trained to move individually.

    Also, realize the potential problems of doing this. If there's an implant recording your thoughts and sending the data stream to an Internet-connected computer... you think you've got Big Brother looking over you now? Just wait.

    Connecting close to the muscle level solves all these problems since you can always choose not to talk.. unless you talk to yourself a lot.. :) Seriously, I doubt that we really understand the advantage of connecting to the brain at the level of thoughts. Examples: It would be nice to have a machine which would automatically answer the easy parts of a problem I am working on, but this will still require formulating a problem statement, so I think needing to speak the statement to the computer is a minor thing. Example: I suppose there might be a way to stimulate a section of knowledge to make your recollection of it better, but we could do something like this by having a series of "note cards" which recap the important theorems or something. It just seems that we don't know that much about our thought processes except for the parts that work via language.

    Jeff
  • Niven and Barnes' book "Achilles' Choice" brings up some interesting possibilities and dangers of wiring our brains to computers. In their scenario, the "linked" enjoy a level of power, both technological and political. In an earlier book, "Oath of Fealty" (written with Pournelle rather than Barnes), Niven touches on some interesting advantages of having instant data and communications available from a cerebral implant communicating with a central computer.

    So... This story looks like another small victory towards the future where we can get our information faster and easier. Right now, it is a tool for helping the handicapped, but perhaps it will be a status symbol once it reaches technological maturity. Be honest! Who would not be tempted to become one of the "linked"?
  • So... This story looks like another small victory towards the future where we can get our information faster and easier. Right now, it is a tool for helping the handicapped, but perhaps it will be a status symbol once it reaches technological maturity. Be honest! Who would not be tempted to become one of the "linked"?

    I'd be tempted, but I know plenty of people who would absolutely have to have it if it were available. People like to do some weird stuff to their bodies (and minds)... some even claim to be 'addicted' to piercings and tattoos. Tatts and piercings are kinda lo-tech, but the point is that there are tons of people who love enhancing their bodies--some are even fanatical about it. If high-tech enhancements become available to them, they'll get them in a heartbeat.

    numb
  • "The brain has all the input ports connected to senses already, and as far as we know there's no place God left for us to install new peripherals."

    Hmm. Just like God to have a lock on the hardware and software. If God would open things up, maybe the human brain could be ported to other architectures, and then we'd /really/ see some computing power.

    I just hope God doesn't become the next Apple - Innovation is great, but let us in on it as well!

  • Makes you wonder if, as a baby, you had some strange thing (IR port, GPS, radio) wired into your brain just after birth, would you learn how to use it, just as you learn how to stand up, talk, and focus your eyes?

    My training is nowhere near complete, but my initial guess would be no. The brain does have dynamic connection capacity, but that capacity has always been set to deal with the five senses, all the way down the evolutionary tree. The brain has all the input ports connected to senses already, and as far as we know there's no place God left for us to install new peripherals. Thus, any kind of neural-interface tech is likely to work using the existing sense inputs.

    Now, that doesn't mean there can't be some kind of extra output added, as was the case with that guy down in Georgia. It is likely that people will eventually be able to control computers through thought. However, chances are that the output from the computer will still come back through the same sense lines. (It's possible, I suppose, that someone might figure out exactly where the "phonological loop" of short-term memory is (that's the part you're using when you hear your own voice in your head), decode the representations of all known phonemes, and start injecting thoughts in via electrodes. However, that sort of capability is in the very far future.)

    As for putting something into a kid so they'd be naturally adapted to it... might not work as well as you'd think. Yes, kids' brains are more adaptive. However, kids' brains are also still developing, and thus really sensitive to disruption. A little bit of extra electrical stimulation at the wrong time can seriously fuck things up. I personally would not volunteer a child of mine for such experiments, even if I'd designed the device myself.

    Alik

  • They absolutely cannot interpret and send brain signals as images. That is ludicrous to think about with our current understanding of visual signals in the brain.

    What they can do is map 100 inputs onto 100 surface electrodes on the brain. Naw, scratch that. Given most implants' viability, they probably have a 100-electrode implant with about 20 good signals. The person then learns to interpret whatever perceptual form those 20 inputs take in his brain.
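
    To make that mapping concrete, a toy sketch (the channel list, grid size, and current scaling are all invented): downsample a frame to a 10x10 grid, then drive only the channels that still give usable percepts:

        import numpy as np

        rng = np.random.default_rng(0)
        frame = rng.random((240, 320))          # stand-in for a camera image

        # Downsample to a 10x10 grid by block averaging.
        blocks = frame.reshape(10, 24, 10, 32).mean(axis=(1, 3))

        good = sorted(rng.choice(100, size=20, replace=False))  # ~20 usable channels
        levels = blocks.flatten()[good]         # one brightness value per channel
        commands = {int(ch): round(float(v) * 100) for ch, v in zip(good, levels)}
        print(commands)                         # channel -> stimulation level, 0-100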

    This is also already working in the other direction. They have human motor cortex implants that allow patients to light up patterns of activity in LED grids, and researcher John Chapin of Hahnemann has similar implants in rats. His trained rats operate motorized lever arms with their brain signals.

    The potential for fine levels of control is still not so hot though, and miniaturizing biocompatible implant grids takes a certain degree of skill. It is not as simple as throwing it on a wafer board and off we go.

    The retina implants have a lot more promise than Dobelle's at this point. I should point out that cochlear implants with a dozen electrodes are becoming commonplace. The US version was developed at UCSF by close friends.
  • The device you describe has been created, and is little improvement over classical Braille. It is called the Optacon. There are various technical reasons for the limitations of the skin. The largest is resolution. Braille uses 0.5 mm high dots in a 2x3 grid with 2 mm spacing. And, BTW, with about 2 days of training you would recognize Braille characters as well as a blind person does.

    Even reasonable (if not so hot) visual information requires about 100 pixels. The fingertips also do not like to work together. Some blind people use 3 fingers locked together to read Braille; these fingers have an extraordinary tendency to stay locked together the rest of the time too.

    Basically, you have a nice idea, and one that was explored and largely dropped in the late 1960s.

  • Well, I think there's even more to it than that. Suppose everyone has "sense transmitters" plunked into their brains and is on a big wireless network. Want to know what it smells like in San Francisco one morning? No problem. Want to know what the weather is like outside? Don't turn on the weather report, just feel what someone outside is feeling. Always wanted to know what it would be like to screw your neighbor's wife... err... never mind.
  • Disclaimer: this post is a joke, in real life I understand that this research is a large step ahead for the disabled and intend no disrespect to those who work on it or who volunteer themselves as test subjects.

    Now sir, we thank you for volunteering for this research. After the operation you will have vision just like the rest of us...

    later

    What? All I see is a bunch of letters and stuff. What happened?

    That's just the license agreement. Say "I agree" and you can continue using the implants; otherwise we will have to shut them down and you can go back to how you were.

    But, but this license agreement says I cannot use the apparatus to look at any system running software not made by Microsoft!

    Well, isn't that a reasonable price to pay for sight?

    I suppose so. I accept.

    Oh, and by the way, that license agreement, if you read enough of it, also said you agree to hand over all your assets to Bill Gates. You read that part, didn't you?

    I don't even want to know what new meanings this brings to "blue screen of death"...

  • According to slashdot, this has already happened.

    From http://slashdot.org/features/99/10/01/1215235.shtml

    "Clark's #7, sensory input. I just talked to a professor of neurophysiology here and he told me a few interesting things. He said that we would definitely be able to do this within 100 years. There's lots of research into this area, especially the eyes. Today we have a pad you can wear on your back that has thousands of pins in it. These pins put light pressure on the skin of your back to form a "braille" image of the b/w image from a camera. With practice, people are able to see with their skin. Fully jacking the brain should be do-able by 2100 he says definitely. I think he was being conservative."

    Unless I've misunderstood this bit entirely, it is separate from Clarke's predictions.

    Also of interest are braille monitors and reading machines- essentially they're monitors and OCR devices with vibrating pins instead of pixels. Nifty, eh?

    zorba
  • by Platinum Dragon ( 34829 ) on Sunday January 16, 2000 @09:23PM (#1366242) Journal
    Allow me to spend a moment considering what this kind of technology and experimentation represents.

    A hundred and fifty, perhaps even just one hundred, years ago, the completely blind, and even the visually impaired (hi!), were considered next to useless by "normal" society. The disabled in general were embarrassments to be hidden, rather than fellows to be assisted and given a chance to grow. A century or two ago, I probably would have been sent to a "special" school, if my family were rich and looking to dump me. If my parents were poor, forget it. No way to make up for crap eyes, or deformed legs, or a fried brain.

    Fast-forward to 2000. Legs don't work? Get prosthetics! Muscles don't respond properly? Treatment, baby! Eyes not up to snuff? Get a brain implant! This is a glorious time, at least if you live in a region with access to medical help. Whereas someone like me or worse would be stuck in some "good with his hands" job long ago, now I can participate in a radio/TV arts program. So can the completely blind guy one year ahead of me. Advances like these may allow him, me, and other blind/visually screwed people to one day experience sight approaching that of a person blessed with a working pair of optic receptors. Perhaps bulky visors, headsets, even glasses will be unnecessary.

    If there were a project in progress to fix my <20/200, color-blind, light-sensitive eyes, or at least get around the problem, I'd sign up in a second. I wouldn't wish this state on my worst enemy. However, I've become used to it. I still express disbelief at the ranges from which most people can discern text, when I'm still trying to figure out just what in hell they're looking at.

    We can always expect the worst, hope for the best, and work toward a better future in any way we can. Otherwise, why the fsck are we here?

    (actually...leave that question for another thread.)

    What /.'ers want to know is: does it run Linux, and can you make a Beowulf cluster out of it?
    plat
  • by Weezul ( 52464 ) on Sunday January 16, 2000 @10:00PM (#1366243)
    It's possible, I suppose, that someone might figure out exactly where the "phonological loop" of short-term memory is (that's the part you're using when you hear your own voice in your head), decode the representations of all known phonemes, and start injecting thoughts in via electrodes.

    How much do they really know about the phonological loop? How different is the voice in the back of my head from the voice that is about to come out?

    I don't really think adding other senses would be as important as just having a really ergonomic computer interface at the ends of the ones we have now. I am a first-year graduate student in mathematics, and the most useful thing I can think of is the ability to quickly write symbols to my field of vision and move them about without having to say anything or move my arms. If the process of thinking about moving an arm or saying something really does trigger activity like the actual movement, then maybe we could interact with a computer via these abortive movements. This would be the killer app for brain implants, since people could use images and audio in everyday communication.

    There are also problems in math which would be easier to solve if you could develop an intuition about higher-dimensional spaces. Idea: the phase space of the ``positions of the human body'' is MUCH larger than 3-dimensional. It might be possible to take a muscle group and program a computer to respond to the movements of those muscles as if they were moving an object in a higher-dimensional space; the user might then gain some intuition they could apply to solving an open problem. It's kinda funny to think we may have applications of biology to mathematics someday... :)
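
    A toy version of the idea, with everything below invented for illustration: treat n independently controlled muscle channels as coordinates of a point in n-dimensional space, and project that point down to 2D so the user can watch where they are:

        import numpy as np

        n = 6                                   # six muscle channels -> a point in R^6
        rng = np.random.default_rng(1)
        proj = rng.standard_normal((2, n)) / np.sqrt(n)   # fixed random 2D projection

        def to_screen(channels):
            # channels: length-n activations in [0, 1]; returns an (x, y) point.
            return proj @ np.asarray(channels)

        print(to_screen([0.2, 0.9, 0.1, 0.5, 0.0, 0.7]))
        # Wiggling one muscle always moves you along the same fixed direction,
        # which is the kind of regularity an intuition could latch onto:
        print(to_screen([1, 0, 0, 0, 0, 0]) - to_screen([0, 0, 0, 0, 0, 0]))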

    Jeff
  • by MrEd ( 60684 ) <`ten.liamliah' `ta' `godenot'> on Sunday January 16, 2000 @07:59PM (#1366244)
    Makes you wonder if, as a baby, you had some strange thing (IR port, GPS, radio) wired into your brain just after birth, would you learn how to use it, just as you learn how to stand up, talk, and focus your eyes? The possibilities for this sort of thing would be very interesting if the problems could be worked out.

    Anyone want to donate their kid to research? If he survives, he'll be able to do 23-digit factoring in his head... specifically in the math coprocessor under his skin.

  • by Markee ( 72201 ) on Monday January 17, 2000 @01:01AM (#1366245)
    What I don't understand is this:
    Blind people are able to read Braille, which, I understand, is comparable to a punchcard character code. Little bumps on a surface stand for the characters of the alphabet. To a non-visually-impaired person like me, it is a miracle of skill and training to read characters where all I feel is a surface with bumps. This shows what the brain can do given enough training (and will).
    So I think: why should the brain implant be necessary? Why couldn't you deliver the "visual" information using a device close to a Braille converter? I imagine a little device that you can strap onto the back of your hand, or wherever the skin has enough nerve endings to discriminate separate tactile impulses. The device would have little bumps ordered in a 10x10 array, raised and lowered by electromagnetic switches, much like a Braille converter. So the blind person could have the visual input from the computer, translated into a tactile image delivered to the back of his hand, and he could feel the raw image the way he is able to feel Braille writing.
    No brain implant would be needed, and it would make the device much cheaper and more usable for general purposes.
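
    A rough sketch of what the conversion might look like, assuming a grayscale camera frame and a simple brighter-than-average rule for raising a pin (both choices are only illustrative):

        import numpy as np

        def to_pins(frame, grid=10):
            # Block-average the frame down to grid x grid, then raise a pin
            # wherever the block is brighter than the overall average.
            h, w = frame.shape
            bh, bw = h // grid, w // grid
            blocks = (frame[:bh * grid, :bw * grid]
                      .reshape(grid, bh, grid, bw).mean(axis=(1, 3)))
            return blocks > blocks.mean()       # True = pin raised

        frame = np.random.default_rng(2).random((120, 160))  # stand-in frame
        for row in to_pins(frame):
            print("".join("#" if p else "." for p in row))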

  • by Alik ( 81811 ) on Sunday January 16, 2000 @08:41PM (#1366246)
    Basically, what you were saying was "The brain has never been able to use any more than 5 senses." That's true, but until we hook more stuff up to it, we'll never know.

    Right, but that's like saying "My PC which has no expansion slots has never been able to use anything more than the default hardware, but until I crack the case and start soldering stuff onto the traces on the motherboard, I'll never know." It's not that simple. For one thing, where would you plug in new inputs? I'm in neuroscience (admittedly only at the beginning grad level), and I can't think of a place.

    There's also the possibility of either "growing" new lobes for new things or "emulating" the lobes in hardware. Or we could just plug new senses into the visual cortex or something (would it be another type of sight then?)

    Well, first off, you can't really say that a given function exists in a given lobe, because they're shipping signals all over the place. Secondly, even if you grow a new set of tissue to plug your new port into, you still need to teach the brain to integrate that signal, and that's a process which is not really understood at all. To some degree, we'd have to solve the problem of where consciousness is.

    Your comment about plugging in new things to the visual pathway (or another sense pathway) makes the most sense, and I think that this is how most "new senses" will end up. If we could actually get massive arrays of microelectrodes (and the software to configure and drive them), then it'd be possible to overlay stuff on the retinotopic maps in realtime, just like CBS does to their video feed. (Yes, we will cover your significant other's body with ads for our new porno site. Ain't technology wonderful?)
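
    (In spirit, the overlay is nothing more exotic than alpha-blending a patch into a region of the frame; the coordinates and alpha below are arbitrary:)

        import numpy as np

        def overlay(frame, patch, top, left, alpha=0.6):
            # Blend the patch into the chosen region of the frame, in place.
            h, w = patch.shape[:2]
            region = frame[top:top + h, left:left + w]
            frame[top:top + h, left:left + w] = alpha * patch + (1 - alpha) * region
            return frame

        frame = np.zeros((240, 320, 3))                  # stand-in video frame
        banner = np.ones((40, 80, 3)) * [1.0, 0.0, 0.0]  # a solid red "ad"
        overlay(frame, banner, top=100, left=120)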

    Anyway, the human brain can do *a lot* more than is evolutionarily needed. I'm sure that it could be augmented somewhat by technology.

    It depends on how one parses your statement. Yes, the brain can think of a whole lot of stuff. On the other hand, there's evidence that suggests that it basically handles all that stuff using a few limited pathways. It all mostly comes down to efficient pattern-matching anyway. However, that does not imply that the brain has a lot of extra circuitry lying around which one can just co-opt at will. Evolution is not kind to superfluous stuff.

    Alik
  • by Alik ( 81811 ) on Sunday January 16, 2000 @08:21PM (#1366247)
    I just looked back and realized that they said he had this implanted in 1978 -- Does that seem like an awfully long time ago (for this sort of thing) to anyone else?

    Not really. Ever since we realized the brain was electric, people have been stimulating it like mad and seeing what they can put in and take out. The fact that it's taken us this long to get this far should tell you something about how hard a problem it is. (Consider how much trouble we still have with the problem of computer vision. It's getting better, but it's nowhere near a solved problem, IMHO.)

    If they could do a 10x10 pixel image back then, what are they capable of now?

    Well, speaking as someone who's sort of part of they... a 10x10 pixel image. That's why this is news --- after two decades of trying, computer tech has finally gotten to the point where we can give Jerry useful input. However, the science of brain electrodes hasn't advanced that much. They're more durable now and often thinner, but in practice, we still probably wouldn't be able to sink nearly enough into Jerry's visual cortex to convey a complete visual image. (On the other hand, there is the example of the cat. However, that cat was never expected to have long-term survival.) We may be jacking in within our lifetimes, but it's not going to be as soon as you hope.

    Alik Widge
    MD/PhD Program
    University of Pittsburgh/Carnegie Mellon
