Hardware for Homebrew Motion Capture? 82

goruka asks: "We are a small garage game-development and 3D-animation group, and as such we try to keep development costs as low as we can. Recently it occurred to us that we could set up a home-brew motion capture system using three consumer USB web-cams to track bright objects attached to the body. However, we don't know which web-cam models: capture at a decent frame rate (25fps) and resolution; are supported and easily programmed under GNU/Linux (we'd like to later release our software as open source); and, lastly, won't cost us a fortune. What are your experiences with such devices?"
This discussion has been archived. No new comments can be posted.

  • I have never tried to create a setup like the one that you are attempting, but I have had good experiences with Logitech [logitech.com] webcams. Is there an existing program that combines the 3-camera stereo image, or are you starting from scratch? What computer system are you using? Make sure that you are not running so many things off of USB (or IEEE1394) that you cannot keep up the throughput from your webcam. This could happen, for example, if you are streaming at maximum quality and saving all of the data to an external drive.
  • I would definitely be wary of anything too cheap or with a high megapixel count.
    Preferably, try not to push the pixel count too high.

    Typically, webcams are fairly lousy in terms of noise and focus. Getting a
    cam which is pushing the megapixels is really just asking for trouble.

    As far as Linux goes, I believe that PTP cameras are pretty much all going to
    work:

    http://ptp.sourceforge.net/ [sourceforge.net]
  • Different solution (Score:5, Interesting)

    by JanneM ( 7445 ) on Wednesday August 16, 2006 @10:48PM (#15924390) Homepage
    Since you probably don't need to do anything real-time with the capture data, I'd suggest that you use whatever inexpensive cameras you can - and record the streams onto video. Ideally, you'd borrow three camcorders and use them. Then you can transfer the streams to a machine via firewire at your leisure and calculate 3d data to your heart's content.

    The benefit of this setup is that you can get away with very cheap hardware (you can probably borrow needed camcorders from friends and family if it's just a temporary deal), and the image quality - resolution, dynamic range, low-light performance, noise - will be a lot better than with a heavily compressed usb-cam stream.

    As for syncing the streams, you have that problem with three usb cams as well (you can't capture three usb streams on the same computer), and with camcorders at least one step up from the bargain bin, you should be able to use sync cabling if you're really concerned about capturing frames at the same instant. I doubt that would be necessary, though, for the kind of precision you're looking at getting (a linear interpolation between captured points for an approximate soft sync should be fine for any movement you can hope to capture at 25/50 frames/s anyway - see the sketch below).
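    For illustration, here's a minimal sketch of that soft sync, assuming each camera produces a track of (timestamp, x, y) marker samples; all names and numbers are made up:

    ```python
    # Soft sync: resample one camera's 2D marker track at another camera's
    # capture instants by linear interpolation between neighbouring samples.

    def resample_track(track, times):
        """track: [(t, x, y), ...] sorted by t; times: instants to sample at."""
        out, i = [], 0
        for t in times:
            # advance to the segment containing t
            while i + 1 < len(track) - 1 and track[i + 1][0] < t:
                i += 1
            (t0, x0, y0), (t1, x1, y1) = track[i], track[i + 1]
            a = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
            out.append((t, x0 + a * (x1 - x0), y0 + a * (y1 - y0)))
        return out

    # a 25 fps track, resampled at the instants a second camera captured
    cam1 = [(0.00, 10, 5), (0.04, 12, 6), (0.08, 15, 8)]
    print(resample_track(cam1, [0.013, 0.053]))
    ```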

    • by MBCook ( 132727 )
      I agree, I think that you shouldn't need too much sync. As for how to sync them up, it would be trivial to just flash an LED in the middle of the scene for 1/30th of a second (1/25th in PAL) and use that flash for the sync. That way you don't have to do it by hand, since it should be on for only one frame or so.
      • by JanneM ( 7445 )
        Syncing frames is quite easy. I was however thinking about capture sync - making sure all three cameras grab an image at the same time. At 25fps (PAL, full-frame), a fast-moving object - a hand and elbow doing a punch, say - will be in noticeably different positions in the three cameras if they're not capturing in phase.

        The solutions are: ignore it, and don't rely on the motion capture alone for fast movement; or figure out afterwards approximately what phase the cameras are in and interpolate.
    • Syncing is easy... just walk up there and do a big arm "clap" in view of all three cameras. Then, delete anything that comes before the frame where your hands come together. This way, each video will start at the same moment in time.

      (of course, this doesn't sync refresh, but your animation package will be interpolating anyway, so it doesn't matter much.)
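      A crude way to automate that trim, as a sketch: find the brightest frame (the flash or clap frame) in each clip and cut everything before it. Assumes OpenCV's Python bindings; the file names are made up.

      ```python
      # Find the frame index where overall brightness spikes (the LED flash).
      import cv2

      def flash_frame(path):
          cap = cv2.VideoCapture(path)
          best_idx, best_mean, idx = -1, -1.0, 0
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              m = frame.mean()              # average brightness of the frame
              if m > best_mean:
                  best_mean, best_idx = m, idx
              idx += 1
          cap.release()
          return best_idx

      # drop everything before each clip's flash frame
      print({p: flash_frame(p) for p in ("cam1.avi", "cam2.avi", "cam3.avi")})
      ```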
    • Re: (Score:3, Insightful)

      by monopole ( 44023 )
      Firewire cameras such as the UniBrain Fire-I allow for synced capture along a firewire daisychain. You have to adjust the framerate and dynamic range to allow for bandwidth issues.
      • Or he could go old school and buy a clapperboard [wikipedia.org] like they use for filming movies.

        The only reason that board needed to clap was so that the audio and video streams could be synched.

        You can use that visual/audible cue to sync the video & audio from three different cameras with zero trouble if it's all going to be digitized anyways.
  • Axis (Score:3, Informative)

    by Southpaw018 ( 793465 ) * on Wednesday August 16, 2006 @10:55PM (#15924427) Journal
    We run an Axis 207 [axis.com] at work. Pair it up with Zoneminder [zoneminder.com] and you've got yourself a motion capture system, albeit in the form of home security system software.
    • Motion-activated video capture is not even close to the same thing as motion capture. It's not in the same ballpark, it's not even the same motherfuckin' sport.
  • by Ayanami Rei ( 621112 ) * <rayanami AT gmail DOT com> on Wednesday August 16, 2006 @10:56PM (#15924429) Journal
    They run for about $100 and they are available at most CompUSA stores (and nowhere else, it seems).

    Features:
        * 640x480@30fps w/high compression enabled (15 or 10 without)
        * 35mm camera screw mount
        * Manual adjustments on camera (sensor angle and focus ring)
        * Lots of software settings to play with (AGC, white balance, shutter speed, aperture)
        * Compatible with the PWC 10.0.12 drivers from http://saillard.org/linux/pwc/ [saillard.org]
        * Above all: stable.
  • One suggestion: Do this in a poorly lit area, and have lights mounted on your cameras. (Ideally, they would be almost the only light sources.) Use reflective tape for the dots.
    You will then have very bright spots on an almost black background. That will make recognising the spots easy, even for the most brain-dead of algorithms (one is sketched below).

    Professional 'body mechanics' use reflective spheres.
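    As a sketch of how brain-dead that detection can get, assuming OpenCV and an illustrative threshold of 200:

    ```python
    # Bright dots on a black background: threshold the frame, then take
    # connected-component centroids as marker positions.
    import cv2

    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
    _, mask = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

    for i in range(1, n):                    # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= 4:  # ignore single-pixel noise
            print("marker at", centroids[i])
    ```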
  • I'm not sure what constitutes a 'decent' resolution, but the Unibrain Fire-i [unibrain.com] caps at 640x480 at 30 fps and is pretty cheap. A few months ago I would have recommended Apple's iSight, but I understand it's been discontinued. They both connect to the box with 1394 instead of USB, which I suspect will be a plus with something as timing-sensitive as this. They use this lib [sourceforge.net], not any of the camcorder stuff.
  • Here is what I can tell you. My guess is that FireWire would be easier because all FW video uses the same driver. Find yourself (or just borrow or rent) two video cameras with firewire and use that. You only need two because one can capture one pair of axes (X and Z) and the other the other pair (Y and Z). If you put the cameras at right angles to each other (pointing down the axes) you should have all you need. The programming half shouldn't be too bad, in relative terms. All you have to do is extract the marker positions from each frame, as sketched below.
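    A toy version of that two-orthogonal-cameras idea, assuming ideal, perfectly aligned orthographic views (real lenses would need calibration):

    ```python
    # Front camera reads (X, Z), side camera reads (Y, Z); average the two
    # Z estimates to get a crude 3D point.

    def combine(front, side):
        x, z1 = front    # front view: image x -> world X, image y -> world Z
        y, z2 = side     # side view:  image x -> world Y, image y -> world Z
        return (x, y, (z1 + z2) / 2.0)

    print(combine((120, 310), (88, 305)))   # -> (120, 88, 307.5)
    ```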

    • by d3matt ( 864260 )
      The more cameras you have, the better for motion capture. It allows you to track the shiny parts from multiple angles. I would dare say that three is a bare minimum. At the UTD motion capture lab, they have 30 cameras (or did when I saw it). When capturing in real time, it still lost track of the balls on occasion.
  • by Hufo ( 684441 ) on Wednesday August 16, 2006 @11:11PM (#15924492)

    This is a lot of work, but also a lot of fun! I did it for a real-time demo project with a few friends. We used Christmas fairy lights and 5 mini-VHS camcorders. You can see the result at the very end of our Childbone [pouet.net] demo.

    Nowadays, using webcams will save you a lot of trouble, and you can find lots of very useful code on the Internet (such as Intel's OpenCV [intel.com]). However, major issues that you still have to solve are calibrating the camera positions and reliably tracking crossing markers in the images (the naive approach is sketched below). In my system I had to write an editor to manually reassign markers when they were incorrectly detected or labeled, which can be a very tedious task...

    I would recommend Logitech Quickcam Pro 5000 webcams, as they are USB 2.0, can do 640x480 at 30 fps, and most importantly use the somewhat recent generic USB Video Class spec, for which a driver for linux [berlios.de] is available. I have a few of those and the image quality is quite good :)

    Good luck!
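    To show why crossing markers are the hard part, here's the naive frame-to-frame tracker (greedy nearest neighbour) that fails exactly when two markers cross; positions are illustrative (x, y) tuples:

    ```python
    # Greedily assign each marker from the previous frame to its nearest
    # unclaimed marker in the current frame.

    def match_markers(prev, curr):
        assignment, taken = {}, set()
        for i, (px, py) in enumerate(prev):
            best_j, best_d = None, float("inf")
            for j, (cx, cy) in enumerate(curr):
                d = (px - cx) ** 2 + (py - cy) ** 2
                if j not in taken and d < best_d:
                    best_j, best_d = j, d
            assignment[i] = best_j
            taken.add(best_j)
        return assignment

    print(match_markers([(0, 0), (10, 0)], [(1, 1), (9, -1)]))  # {0: 0, 1: 1}
    ```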

    • Re: (Score:3, Interesting)

      by munpfazy ( 694689 )
      Interesting.

      I wonder how the big studios deal with marker crossings? (Then again, perhaps they just pay humans to do tedious work.)

      Seems like there must be a cheap hardware solution, given enough time and energy.

      For example, one could put colored filters on the reflectors. By replacing each dot with a cluster of colored dots and then selectively blackening them, you could code each point uniquely. It would take some experimentation to figure out how to get the results you need with consumer gear.
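      As a sketch of that colour-coding step, assuming OpenCV, a BGR frame, and made-up (uncalibrated) hue bins:

      ```python
      # Classify a detected marker by the hue at its centroid.
      import cv2

      HUE_BINS = {"red": (0, 10), "green": (50, 70), "blue": (110, 130)}

      def marker_colour(frame_bgr, x, y):
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          h = int(hsv[y, x, 0])             # OpenCV hue range is 0..179
          for name, (lo, hi) in HUE_BINS.items():
              if lo <= h <= hi:
                  return name
          return "unknown"
      ```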
    • Re: (Score:3, Informative)

      by Niet3sche ( 534663 )

      I can second the OpenCV nomination.

      However, I think I may be able to add something to the puzzle: I was informed (but have not yet tested) that IEEE1394 (Firewire) cams will sync across the bus. This means that you no longer have to worry about adjusting for frame drops or timing or whatnot. Rather, the two cameras "see" their fields in lock-step with respect to time. I know that some folks here locally have had great success with Uni-Brain Fire-i cameras, but earlier in the thread someone reported...

  • by mnmn ( 145599 )
    I was looking for a USB cam that would do 640x480 at 30fps at least. I was trying to put it on an RC airplane with a PC104 computer, drive and battery. I never did find a camera that did this native res at this speed. But I didn't look hard enough, either.

    I should have posted on slashdot.
  • I can't imagine how little you must value your programmer time if you're considering a homebrew solution! If programmer time is cheap, then ok, but my preference would be to outsource it. There are plenty of contract mocap studios around, take advantage of that fact . . .
    • Many consider a homebrew solution regardless, if for no other reason than "Open Source". Reinventing the wheel can be a good thing if it's something you enjoy doing, and if it means it will cost you (and everyone else) so much less in the future.

      I, for one, will be pretty interested in how this turns out. I'd love to be able to do motion capture for a little one or two-man game.
      • by cjp ( 624694 )
        I would go so far as to say that _most_ consider a homebrew solution. I don't think it's a great idea personally, as they're unlikely to realise beforehand just how large a task it can be. If your schedule relies on accurate estimates and fixed costs it's a dangerous thing to attempt. In the case of an indie game it may not be an issue, but yeah, in any commercial context . . .
        • There is no such thing as an accurate estimate in software development, so you're right there -- outsource absolutely everything to people who have already solved the problem, because then you know how much it costs.

          It also means you'll never innovate a thing.

          Or, try Doug's Law (correct me if this is from anywhere but my friend Doug): Make an estimate. Double it, then change the units. That's the estimate you give to the business end, and to the clients.

          So, for instance, if it'll take you 10 seconds, say 20 minutes.
          • by cjp ( 624694 )
            Damned good rule of thumb, yeah :D It's tempting to apply that, but I'm thinking it wouldn't go down too well. Heh. In my experience the cause of the discrepancies is that the estimate only covers getting the feature to minimal functionality, whereas the final product requires something more polished and complete.
            • Well, the good news is, if you try to take all of that into account, and make a very conservative estimate to begin with, and you follow Doug's Law, then you end up with something that is pretty damn accurate, or at least close enough that your employers/beancounters/customers won't doubt it. Then, if you really can get it done in a fraction of the time, it's that much better for you.

              Oh, I'm going to add my own amendment to Doug's Law: Doug's Law only covers unknowns. Anything you know -- time cost by mi
          • by Osty ( 16825 )

            It also means you'll never innovate a thing.

            You just need to know what to outsource and what to do in-house. If you're an up-and-coming motion capture and 3D animation studio, then it might be worthwhile to try to innovate in the motion capture space. If instead you're an indie game developer that just happens to need motion capture for your next game, you're better off outsourcing the motion capturing and focusing on innovation in your game design and gameplay.

            Just because you can outsource somethin

          • I believe Bill Burkett told me that estimation method in late 1983.
      • Also, by starting with a homebrew system, a lot of knowledge is gathered in research, which really can help you avoid the gotchas if you eventually go to a commercial system instead.
  • You could try "OptiTrack" by NaturalPoint. It seems really useful, and at $249 it's worth taking a chance on, too.
    Here's their link:
    http://www.naturalpoint.com/optitrack/ [naturalpoint.com]

    If you have Poser (and free time), you can also try the Rotoscoper plugin by PhilC as well.
    Huge link follows:
    http://istore.mikrotec.com/philc/index1.html?page=catalog&trackerid=1661406456&category=a&vid=2080245373&pid=924839477&oldvid=2143420604 [mikrotec.com]
  • Wouldn't it be easier to rig up an armature using old joystick potentiometers at each joint than to use three cameras and try to extrapolate position data from them? You could easily have a single machine run a perl script that collects the position data from all the sensors at once into a file (sketched below), and then write it out as bvh/whatever or convert it later.

    Something like Gypsy [animazoo.com], only lots cheaper.
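    A sketch of that polling loop (Python rather than perl; read_adc() is a stand-in for whatever interface your sensor hardware exposes, and the 10-bit ADC to 270-degree mapping is an assumption):

    ```python
    # Poll one ADC channel per joint, map readings to angles, and log
    # timestamped rows for later conversion to BVH or similar.
    import time

    JOINTS = ["shoulder", "elbow", "wrist"]

    def read_adc(channel):
        # stand-in: replace with your real sensor interface (returns mid-scale)
        return 512

    def capture(seconds, hz=25):
        rows = []
        for _ in range(int(seconds * hz)):
            # 10-bit ADC reading mapped to a 0..270 degree pot sweep (assumed)
            angles = [read_adc(c) * 270.0 / 1023 for c in range(len(JOINTS))]
            rows.append((time.time(), angles))
            time.sleep(1.0 / hz)
        return rows

    print(capture(0.2)[:2])
    ```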
    • No cameras. Use accelerometers. Lots of them (20-40?). I guess this is a bit of a research project, but a small prototype (e.g. one arm) would not cost much. Accelerometers are surprisingly accurate. However, this would require some reasonably clever physics algorithms to recover body position (e.g. gravity provides a constant 1g downward acceleration; see the sketch below).

      Simon.
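      A naive sketch of the core arithmetic (subtract gravity, integrate twice); a real system would also need orientation tracking and drift correction:

      ```python
      # Position from vertical accelerometer samples by double integration.
      G = 9.81

      def integrate(samples, dt):
          """samples: vertical acceleration in m/s^2, gravity included."""
          v = p = 0.0
          path = []
          for a in samples:
              v += (a - G) * dt   # remove the constant 1g, integrate to velocity
              p += v * dt         # integrate again to position
              path.append(p)
          return path

      # one second of free fall (the sensor reads 0 while falling):
      print(integrate([0.0] * 25, 1 / 25.0)[-1])  # ~ -5.1 m (Euler); g*t^2/2 = 4.9
      ```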
  • Hitlab (Score:2, Interesting)

    by kramulous ( 977841 )
    Howdy,

    Hitlab (NZ [hitlabnz.org], but there is also an American office [somewhere]) has come out with some pretty funky motion tracking. Albeit for other purposes, but the source is available (via SourceForge: ARToolKit).

    It may not be exactly what you want, but with a little modification it should be, and, importantly, it is CHEAP.

    Good luck. Hope to see some break-through gaming experiences. Hooroo
  • by Anonymous Coward
    I don't do MoCap, but I have worked in face tracking, and I have been in major film industry motion capture studios and seen their setups. It is very complex.

    First, they have many, many cameras, because you have to have 3 unobscured camera views to triangulate a point. I assume you want to mocap people doing game moves, so multiple cameras are required. Also, they capture with infra-red, not visible light. They put infra-red reflective spheres on the people at key locations, so what the camera picks up are bright points of reflected IR.

  • by Robbat2 ( 148889 ) on Thursday August 17, 2006 @01:19AM (#15924926) Homepage Journal
    A couple of things here: from researching the field with a university research lab (to see about buying commercial gear), I have a lot of suggestions.

    - For your cameras, look for cheap used DV cameras on ebay. Not super high res, but get lots of them: 3 ain't going to cut it; consider at least 9 (high/low from each of the cardinal directions, and one on top [you might want a few more for different sectors]) - occlusion is an absolute bitch of a problem.
    - This will provide reliable time-synced data, and NOT max out your USB bus.
    - USB cannot provide you with images from 3 cameras with the same timesync, it's just not capable of such behavior.
    - Firewire has a longer length limit on the cables, which is a big help for your work.

    - Cheap PCI firewire cards - two should be enough; this will give you 6 separate firewire buses and put you at the limit of your PCI bus.
    - Find filters that fit said cameras, and are opaque to visible light, but transparent to infrared.
    - Rig up really bright infra-red lighting, ideally with a low quantity of visible light output.
    - Go to a burglar alarm supply place, and buy infra-red reflective tape - I learnt this tip from the EA guys a couple of years back; the 'official' reflective tape from 3M costs too much, and is a pain to order, but alarm places stock stuff that works even better, and is cheaper to boot.
    - Buy really small polystyrene balls, and cover them with infra-red tape. On one small part of the ball, put the hook side of a velcro dot. These are reusable now, avoiding problems with tape waste. You can also clean them easily to keep them very reflective.
    - For your subjects, get them to wear any clothing that velcro will hook reliably onto (pretty easy choice)
    - Place the reflective balls on either side of every joint, spaced not more than 90 degrees apart - e.g. your elbow should have 8 balls.

    Using infra-red cuts the data-set size way down, and also lets you capture with the cameras in monochrome, reducing it even further.

    From working with several commercial mocap rigs, I'll say that the calibration routines are extremely important (see the sketch below). You need to accurately map the entire volume that you wish to capture in. Depending on the space available to you, consider building a simple frame or using a lighting rig to attach the cameras to.

    I will repeat again: occlusion is an absolute killer problem. When I visited the EA facilities in Burnaby, BC specifically to research their systems (I was working with a university research lab at the time), they estimated that they lost 2 hours of production a day to occlusion problems during mocap shoots.
    Your system must be capable of tracking all the balls, all of the time. If it loses one, it's almost impossible to pick it up again properly during a run - you'd need to recode the relative location of that ball before it gave you useful data again.
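    On the calibration point: the payoff is that a calibrated camera turns a pixel into a ray, and intersecting rays from several cameras gives the 3D ball position. A pinhole-model sketch (the intrinsics fx, fy, cx, cy would come from a calibration routine such as OpenCV's calibrateCamera; the numbers here are made up):

    ```python
    # Back-project a pixel (u, v) to a ray direction in camera coordinates.

    def pixel_to_ray(u, v, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
        return ((u - cx) / fx, (v - cy) / fy, 1.0)

    print(pixel_to_ray(400, 300))   # -> (0.1, 0.075, 1.0)
    ```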
    • For your subjects, get them to wear any clothing that velcro will hook reliably onto (pretty easy choice)

      They're making a sequel to Leisure Suit Larry. Their subjects won't wear clothes.
    • I would think that the number of cameras needed can be greatly reduced by using better research and algorithms. There is quite a bit of research, largely ignored by most mocap software, which I expect can greatly reduce the markers and cameras needed for good-quality motion capture.

      1) there is the research on animation from doing a series of figure drawing poses - the algorithm provides a set of poses that the user picks as being correct (a single figure drawing is ambiguous, since there are multiple poses consistent with it).
      • by Robbat2 ( 148889 )
        The primary reason for lots of cameras is still the occlusion problem. I agree that if you are only capturing one person at a time that the problem is reasonably reduced, but it still exists (consider the pose of bending over to touch your toes, which greatly occludes your front surface).

        The figure pose stuff I have dealt with as well - using early and recent versions of Credo Interactive's "LifeForms" and the mocap input form they take. It's good for helping to solve occlusion by predicting which location
      • by zeke2.0 ( 921786 )
        Thanks for the link, I just joined.
    • Re: (Score:3, Interesting)

      by Emil Brink ( 69213 )
      Sounds interesting - the tip about IR-reflective tape especially. It got me thinking: if reflective tape is expensive enough to warrant hunting for cheaper sources, and you also need to get IR light sources anyway, wouldn't it make sense to invert the lighting and put IR-emissive dots directly on the mocap actor? Something like LED throwies [makezine.com], but with IR LED(s) rather than visible light. Perhaps it's still too expensive and/or impractical, what with batteries and so on, though. I do wonder how it would compare.
      • by Robbat2 ( 148889 )
        That's actually very interesting - especially that you could make the LEDs flash (actually a dim-bright transition, possibly using Reed coding, but never turning off totally) with an encoded number, thus allowing them to be identified quickly by software after an occlusion event.
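        A sketch of that re-identification step, assuming each LED repeats a short unique bright/dim pattern (the patterns and threshold are illustrative):

        ```python
        # Decode a per-frame brightness sequence back to a marker name.
        PATTERNS = {(1, 0, 1, 1): "left_elbow", (1, 1, 0, 1): "right_elbow"}

        def decode(brightness, threshold=128):
            bits = tuple(1 if b > threshold else 0 for b in brightness)
            return PATTERNS.get(bits, "unknown")

        print(decode([200, 90, 210, 190]))   # -> left_elbow
        ```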
      • by Grab ( 126025 )
        Problem with this is getting a 360-degree output in all three axes. LEDs don't cut it for that - too narrow a viewing angle. Old-school incandescent bulbs are much better. Try digging out some of those little 4.5V bulbs they used to use for school science experiments - they're actually round too, which is good. Downside is that you need some way of attaching them to your actor, they get hot to the touch, and the screw fitting means that they're going to be a couple of cm away from the actor.
    • Point Grey [ptgrey.com] make firewire cameras + an SDK for computer vision research, which I believe automatically sync if put on the same firewire bus.
  • I have a set of three Canon ZR500's that I'm planning to use in a rig that will record video from the top, right and front view of a moving target simultaneously. However, there are going to be a number of limitations to overcome with such a setup.

    Luckily, it seems RealViz will soon be releasing a new software package [realviz.com], called Movimento, that claims to be able to obtain motion capture data from video of events taken from multiple camera angles. While this new software should definitely make things easier for
  • You're going to have problems with webcams. The biggest problem is synchronization: getting them all to take their frames at the same time. Pinnacle Imaging is a scientific imaging device vendor that publishes res and framerate specs for all their cameras, and some of their cameras support frame synchronization... but at quite a price.

    When I was looking into it for a local doctor's office that wanted to do gait analysis research (something near and dear to my heart since i have a game leg and still no real
  • Would RFID chips work? (Score:3, Interesting)

    by mikael ( 484 ) on Thursday August 17, 2006 @08:36AM (#15925892)
    Could you have each marker represented by an RFID chip? Then detecting the position of each marker would only require four RFID transmitters. The time delay would give you the distance to each marker, and you could use trilateration to determine the current position. And each RFID tag would be easy to label.
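    The geometry of that step, as a sketch (plain least squares via numpy; transmitter positions and distances are made-up example values):

    ```python
    # Trilateration: solve for a tag position from known transmitter
    # positions and measured distances, by linearising the sphere equations.
    import numpy as np

    def trilaterate(anchors, dists):
        p, d = np.asarray(anchors, float), np.asarray(dists, float)
        A = 2 * (p[1:] - p[0])          # subtract the first sphere equation
        b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2 - p[0] ** 2, axis=1)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    anchors = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 4)]
    tag = np.array([1.0, 2.0, 3.0])
    dists = [np.linalg.norm(tag - np.array(a, float)) for a in anchors]
    print(trilaterate(anchors, dists))   # ~ [1. 2. 3.]
    ```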
    • by cr0sh ( 43134 )
      From my research, doing this with a single RFID chip "marker" is possible, if you can deal with the low resolution of the device (+/- 1 foot or so - may be better with more transmitters) and the lag time (second or so delay). For realtime mocap, this isn't usable, but for "realtime" inventory tracking, perfectly acceptable. With more markers, things just get worse - and for mocap, you will need more markers, typically one per major joint, depending on what you are modelling.

      Unless things have come a long way...

  • Friends were doing a thesis on 3D image capture using 2 USB webcams, in parallel with my thesis, so we often exchanged ideas and progress details. They researched the topic of the cameras VERY deeply, and one conclusion is ALL WEBCAMS SUCK. The most expensive and advanced models on the market (about 2000% the price of the cheapest) have about 40% better parameters than the cheapest ones. They just come with fancy software, and an SDK for them exists at all. But they have just the same distorting, sucky optics.
    • Samsung SBC-331A.

      These run between $150 and $400 depending on where you get them and options.
      It's a standard B/W NTSC camera with a clock sync so you can chain them together. CS lens mount, with electronic iris/zoom lens control.

      Then you get yourself a few Osprey 1xxs in a mid-end server, and you can support 4 cameras pretty reliably. If you want to do more than 4, you might look at a Matrox Morphis; those are PCIe x4 or PCI-X, so they're a bit pickier. You could probably get 12 full frame streams from 3 of
  • Clean solution (Score:3, Interesting)

    by John Sokol ( 109591 ) on Thursday August 17, 2006 @10:18AM (#15926429) Homepage Journal
    I use the Kodicom 4400r board http://dvr.videotechnology.com/ [videotechnology.com]. This uses 4 Conexant 878 chips (formerly called Brooktree BT878).
      The default bt878 driver in FreeBSD works, but I had to write a small driver to init the video switcher on the board.
      Using very simple code you can capture and process 4 full-motion video channels in FreeBSD.

      There is also the BTTV Linux driver for this board.

      CCTV cameras can be had for $35 each and the board is $200, for a total cost of about $300 for the board and 3 cameras to do motion capture.

      I have used blinking dual-color LEDs on the target very successfully.
      Retroreflective balls and LED lighting also work well. The $35 black-and-white versions of these cameras come with IR LEDs for so-called "Night Vision", and they work great with the 3M reflective tape.
      See "Retroreflective Materials" at http://www.videotechnology.com/old1104.html [videotechnology.com] for more info on that.

  • This is what I would do: get an inexpensive TV capture card and attach a video camera to it, configured so you can watch the live video on the computer. Instead of reflective points, use an infrared LED + resistor + battery (have you ever pointed a remote control at a camera and clicked?). Some already mentioned Zoneminder; you can use it as a starting point, and as it is open source, you can hack it to your heart's content. Good luck!
  • Hi. A friend and I have been writing some computer vision software for precisely this scenario. We intend to market it to consumers like you for a low cost (under $200), but it's not available yet. That said, a couple of things come to mind:
    First of all, the library of choice for all things computer vision is Intel's OpenCV library. They have a Linux version. I know nothing about the Linux library, but the Windows version only supports a few webcams. The cheapest and best for the money is the USB 2.0 Logitech
  • by cr0sh ( 43134 ) on Thursday August 17, 2006 @01:26PM (#15927885) Homepage
    ...is the most important thing. First off, I want to give kudos to everyone who has responded to these - just about everyone here has given great ideas and suggestions to the problem. These ideas should be listened to and evaluated. I myself have been researching the idea of sourceless and sourced motion capture and position tracking for a long time now, as it relates to virtual reality applications, simply because there is nothing commercial for the task that comes down to homebrew pricing. Personally, I only want to track two things - the position and orientation of my hand, and the orientation of my head. The second I (and anyone else) can easily do today with a cheap 3-axis accelerometer/compass system. The first, though, is not easy at all. Position is one thing, but the orientation is a completely different beast.

    With that said, your idea of using webcams is spot on, but you are going to need more than three, mainly for occlusion handling. For the rig I was contemplating (using webcams much the same as you), I was thinking of at least four cameras. The main problem I ran into (just in thinking about it, no actual implementation), and as others have described, was timing issues. For best results, you need all the frames captured from the cameras to happen at the exact same time. Since with USB webcams this isn't possible, you either need to come up with another solution (people here have mentioned some "high end" cameras that have syncing systems), or deal with it in software (very difficult to do, in addition to dealing with everything else, and still getting a high frame rate).

    Another problem you are going to run into (and it has been mentioned by others, but not much on the reason) is webcam resolution. Most webcams that capture at decent framerates do so at QVGA (320x240). Even those that capture at a real 640x480 typically do so at only around 15fps, instead of 24 or 30. Rare (and more expensive) is the webcam that will capture at 24-30fps with VGA resolution. Even at VGA resolution, though, you are going to have to deal with the angular vs pixel resolution of the camera. What I mean by this is that as an object moves through the FOV of the camera, it is only imaged by certain pixels of the CCD imaging device. Depending on the distance from the camera, the object may move, say, a foot, and only move (on camera) a pixel or so. The further away the moving object, the fewer pixels it covers, due to perspective. This translates into a lower resolution of pixels (on camera) to inches/cm (in real motion). In fact, this is almost the inverse of the problem with HMDs, where you can have high resolution and low FOV, or vice-versa. In order to have both (in either cameras or HMDs), you have to pay a lot of money. In optical camera-based mocap, this means HDTV or better resolution cameras. I hope you understand what I mean here, because it is important for motion capture where you may be capturing large amounts of motion over a lot of area. For close-ups (like facial capture) it is less important - but remember, the higher the resolution of the camera, the finer the motion you can capture at all distances from the person/object to the camera. Higher resolution cameras translate into higher prices for the system, because you have to deal with more data, all in realtime. Not easy, not cheap.
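    That angular-resolution point, as arithmetic: with a pinhole camera, one pixel at distance D covers roughly D * 2*tan(FOV/2) / horizontal_pixels. The numbers below (60-degree FOV, VGA) are illustrative:

    ```python
    import math

    def cm_per_pixel(distance_cm, fov_deg=60.0, h_pixels=640):
        return distance_cm * 2 * math.tan(math.radians(fov_deg) / 2) / h_pixels

    for d in (100, 300, 500):
        print(d, "cm away ->", round(cm_per_pixel(d), 2), "cm per pixel")
    ```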

    You might best be able to deal with this by going the custom camera route. What you would want to do is build a custom frame capturing system, using 640x480 (or better) b&w CCD cameras (you don't need color, you just need IR sensitivity - even with b&w cameras, you are going to filter the final image down so far that it is mostly a true black-and-white 1-bit image, so the closer you can get to that in hardware, the less you have to do in software). This won't be easy, but many people have done similar systems for homebrew robotic vision systems, so look there. Realize that this kind of a project will likely dwarf your game development project in both hardware and software needs, and you might end up with a system

  • We developed a motion capture system for a bit of a different application (6-DOF joint movement tracking for biomed research). Getting the cameras working is trivial compared to the processing required to actually get 3d motion capture working reliably. Of course, we were going for something that probably has to be much more accurate than what you need, but it's not a trivial matter to write the software for something like this. Of course, there may be stuff out there you can use. Anyway, here's a brief
  • are supported and easily programmed under GNU/Linux, since we'd like to later release our software as open source

    I'm wondering: if you're intending to release it later as open source, then why does that necessarily mean that it must be supported under GNU/Linux? Seriously, open source can be for any operating system. Just pick the OS or vision API that makes camera interfacing the easiest, and write your code for that, while keeping the main parts as portable as possible. Open source shouldn't automatically mean GNU/Linux.

  • by ripnet ( 541583 )
    Strap a Nintendo Wiimote to each joint. These things are supposed to be basically Bluetooth position and motion detectors.
  • Hello, I am making a motion capture system with NaturalPoint's OptiTrack cameras: http://geocities.com/mocap_is_fun/ [geocities.com] Here is a sample video: http://www.geocities.com/mocap_is_fun/mocap34.wmv [geocities.com] It can output BVH which can be loaded by other apps like Poser: http://www.geocities.com/mocap_is_fun/Poser.wmv [geocities.com]

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...