Intel

Intel Releases Open-Source Stereoscopic Software 129

Eslyjah writes "Intel has released a software library that allows computers to "see" in 3D. The library is available for Windows and Linux, under a BSDish license. Possible early applications include lipreading input. Check out the CNN Story, Intel Press Release, and project home page."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by bucky0 ( 229117 ) on Tuesday December 18, 2001 @07:59PM (#2723665)
    "While AMD processors may be slightly cheaper and run legacy x86 programs more quickly, SSE-optimised code compiled with Intel's compiler completely creams even the new Athlon XP "1900". Intel doesn't need to make up marketing numbers in order to make their processors look faster than they actually are"

    I have a few issues with what you said. 1) The fact that Intel had to release a compiler specially designed for its processor to get all the performance out of it is goofy. I can bet that if AMD wanted to, they could release an AMD-optimised compiler and do the same thing. Also, to get that performance you would have to recompile everything with that compiler (or wait for your closed-source software makers to provide you with a build). 2) The marketing numbers are made by benchmarking Intel processors and AMD processors: the AMD marketing number is the approximate clock speed of a comparable Intel processor. If anything, you should flame Intel for making their instruction pipeline hideously long to get their processors to ramp up to high clock speeds (sacrificing per-clock performance at the same time).

    "Our web server used to run on a 1.4GHz Thunderbird, which was cheap but notoriously unstable" The processor probably wasnt unstable, I have a similar system without problems, it's probably another part of your system. Processors either work or they dont you can't have a half-broken processor

    "And when the fan blew out a month ago, the whole computer was taken with it!"
    Why would you use a cheap fan in a webserver for a business? Why didn't you use monitoring software to automagically shut the system down when the fan went out? (And I know about the video; that's a different situation. In the Tom's Hardware video they took the heatsink off. With the heatsink on, there should be enough time to shut it down.)
  • Sounds like CMU (Score:2, Insightful)

    by quinto2000 ( 211211 ) on Tuesday December 18, 2001 @08:04PM (#2723693) Homepage Journal
    Sounds like a project being worked on by Carnegie Mellon University researchers. I know CMU has a close relationship with Intel. Anyone know any more about the connection to this research?
  • by orn ( 34773 ) on Tuesday December 18, 2001 @08:21PM (#2723809)

    This is pretty neat, but reminds me of something from a few years ago.

    I interviewed with Intel, and during the interview they said in no uncertain terms that they were actively trying to keep people upgrading their systems, and hence keep the dollars rolling in. At the time, the interviewer said this was done largely by helping Micro$oft keep new OSs coming that required more and more horsepower to run properly.

    This is very cool in its own right (or could be, I haven't looked at it completely), but strikes me as another way they can push that curve...
  • by Mike_L ( 4266 ) on Tuesday December 18, 2001 @09:34PM (#2724126) Homepage
    Voice recognition is the next major advancement in computer user interfaces. Lip reading will increase the accuracy of voice recognition software. It is exciting that Intel is furthering the field of cybernetics.

    I look forward to the day when I can dictate to my PC by just mouthing the words. Voice recognition and touchscreens will save the office worker from Repetitive Stress Injury and Carpal Tunnel Syndrome. Lipreading will make voice recognition practical for large offices and many other areas.

    -Mike_L
  • Good, but not new. (Score:4, Insightful)

    by cosyne ( 324176 ) on Tuesday December 18, 2001 @10:38PM (#2724326) Homepage
    While I have to say that Intel's OpenCV library rocks (for a number of reasons), stereoscopic vision is nothing new. The CNN article is more or less crap ("Until today, computer vision applications has been restricted to two dimensions"? Nice try...). It's a mishmash of reporter hype and stock text describing computer vision in general ("Over the next 5 to 10 years, Intel Corp. expects computer vision to play a significant role in simplifying the interaction between users and computers"). The Sussex Computer Vision Teach Files [susx.ac.uk] page has a reasonable description of stereoscopic vision [susx.ac.uk] from 1994. Lip reading is not really a 3D problem, so stereoscopic capabilities aren't going to help much. Many of the other uses (3D environment modeling, object modeling and recognition, etc.) are being worked on (again, the algorithms aren't new; this is just a new open-source implementation), but they're not easy.

    I don't mean to sound pessimistic, though. OpenCV is really cool, both as a corporate contribution to open source, and as a programming library even if you never look at the code. And the Matlab interface means fewer MSVC++ sessions which end with me feeling homicidal ;-) The inclusion of stereo vision will be cool for people trying to write vision applications, but it's not advancing the state of the art.
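    The stereo matching the parent describes comes down to finding, for each pixel in the left image, the horizontal shift (disparity) that best lines it up with the right image. Here's a toy pure-Python sketch of that idea on a single scanline, using sum-of-absolute-differences matching; the data and parameters are made up for illustration, and real libraries like OpenCV do this over 2-D blocks with heavy optimisation:

    ```python
    # Toy 1-D block-matching stereo: for each pixel in the left scanline,
    # find the shift (disparity) into the right scanline that minimises
    # the sum of absolute differences (SAD) over a small window.
    def disparity_scanline(left, right, max_disp=4, window=1):
        """Return a per-pixel disparity estimate between two scanlines."""
        n = len(left)
        disps = []
        for x in range(n):
            best_d, best_cost = 0, float("inf")
            for d in range(min(max_disp, x) + 1):
                # SAD over a window centred at x (left) and x - d (right)
                cost = 0
                for w in range(-window, window + 1):
                    lx, rx = x + w, x - d + w
                    if 0 <= lx < n and 0 <= rx < n:
                        cost += abs(left[lx] - right[rx])
                if cost < best_cost:
                    best_d, best_cost = d, cost
            disps.append(best_d)
        return disps

    # A bright feature at index 5 in the left scanline appears at index 3
    # in the right scanline, i.e. a disparity of 2 (larger disparity means
    # the point is closer to the cameras).
    left  = [0, 0, 0, 0, 0, 9, 0, 0, 0, 0]
    right = [0, 0, 0, 9, 0, 0, 0, 0, 0, 0]
    print(disparity_scanline(left, right)[5])  # 2
    ```

    The hard part in practice is everywhere this toy ignores: textureless regions, occlusions, and repeated patterns all make the best-cost match ambiguous.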
  • by robbo ( 4388 ) <slashdot@NosPaM.simra.net> on Tuesday December 18, 2001 @11:45PM (#2724551)
    Stereo vision algorithms have been around for years, and I suspect that OpenCV implements some of the more common published methods. We understand the image formation process pretty well now and working with a calibrated stereo head is easy. Taken one step further, improvements in automatic camera calibration and cpu speed have led to nearly real-time 3d reconstruction from a monocular video stream (where the camera is moving through the scene).

    Actually, IMHO, pure monocular vision is a more interesting (and challenging) problem. It's pretty clear that human stereo vision is an exercise in redundancy, since we can do pretty well with one eye closed, not to mention the fact that we perceive all kinds of 3D structure in 2D contexts (like your favourite pr0n- umm, Quake screenshot ;-). The fundamental question is: how do we interpret 2D images into 3D models (or whatever representation we use in our heads)? This is a distinctly different (and more difficult) problem from building a 3D model from a motion sequence or stereo pair.
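    The "calibrated stereo head is easy" claim above boils down to simple triangulation: once you know the disparity d of a point, the focal length f (in pixels), and the baseline B between the cameras, depth is just Z = f·B/d. A minimal sketch, with camera parameters invented for illustration:

    ```python
    # Depth from disparity for a calibrated stereo rig: Z = f * B / d,
    # where f is the focal length in pixels, B the baseline between the
    # two cameras in metres, and d the disparity in pixels.
    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        if disparity_px <= 0:
            return float("inf")  # zero disparity: point at infinity
        return focal_px * baseline_m / disparity_px

    # Hypothetical rig: f = 700 px, 12 cm baseline, observed disparity 35 px.
    z = depth_from_disparity(35, 700.0, 0.12)
    print(round(z, 2))  # 2.4 (metres)
    ```

    This also shows why calibration matters: errors in f or B scale directly into errors in every recovered depth.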

  • by grmoc ( 57943 ) on Wednesday December 19, 2001 @12:23AM (#2724682)
    Stereoscopic vision is a very VERY useful thing for all things which perceive their environment through a visual apparatus.

    Huh? What do you mean? Well, close one eye (or put an eyepatch on) and look at your flat world. How far away is that streetlight? Hmm. How tall is that man? Hmm..

    Of course, we as people are much MUCH MUCH better at perceiving (interpreting) our visual environment than computers are. Humans generally have little trouble correctly perceiving things even through partial occlusions, changes in scale, orientation, distortion (glasses might let you see, but straight lines become anything but..), and changes in intensity and color.

    Being able to get 3d information about objects aids greatly in interpreting what they are.
    An image (2D) of a hand is (almost always) full of occlusions. These occlusions are difficult to interpret in 2D because the edges in 2D carry less differentiation than a depth map would. (The distance metric is less ambiguous!)

    Wouldn't you like your computer to interact with you as if it were (a very obedient) human? This helps.
