Voice Recognition for a Techie? 102

kaybee asks: "I am a long-time developer, sysadmin, and general computer junkie (for fun and for work) who needs to seriously curb the usage of his hands. I'm curious as to the current voice recognition options, preferably usable on Linux and Windows. I prefer the command-line to a GUI, I prefer Vim to anything else, and I still read my email with Pine. I'd like to hear options for sending email via voice, which I hope is easy, and I'd love to hear of any solutions that allow effective coding via voice, which seems much more difficult."
This discussion has been archived. No new comments can be posted.

  • Oh, I'm sorry. I thought you said voice recognition for a Trekkie....

  • Write it yourself (Score:3, Informative)

    by Kawahee ( 901497 ) on Tuesday April 18, 2006 @07:59PM (#15153718) Homepage Journal
    Write it yourself. Grab the Microsoft Speech SDK [microsoft.com] and WINE or some suitable interoperability layer and you should be good for Windows and Linux. The Microsoft Speech SDK doesn't require oodles of code to make it work, so you should be able to get a working sample under Windows in about half an hour. It comes with some rudimentary samples as well, and since it's not released under any particularly binding license you can just build your code around it.

    'Course you could go the other way with some Open Source speech recognition and Cygwin or similar.
  • by Anonymous Coward on Tuesday April 18, 2006 @08:01PM (#15153732)
    No, computer, I said, "awk single quote left curly print dollar one right curly single quote file dot txt pipe sort pipe uniq dash see greater than a dot out"

  • Hand use (Score:5, Funny)

    by Doomstalk ( 629173 ) on Tuesday April 18, 2006 @08:01PM (#15153737)
    [...]who needs to seriously curb the usage of his hands.

    Lest they... *ahem* wander.
    • A proposition from a serious mind-uploader could explain the need to relinquish hand use, by stating it as a life's goal. I write such stuff for my own mind-uploading quest, under the guise of my other nickname of Jimekus. The Ingrid software is always ready and being tested, but my VB6 GUI Ingrid Command And Control Edition will only again be useful to me when I can get a wireless microphone that will accept my voice commands without interfering with the output from the Ingrid On Winamp Frontend. A recentl
    • No way, think about it you could cup while rubbing with a pure voice controlled pr0n library.
  • wouldn't bother yet (Score:3, Informative)

    by joe 155 ( 937621 ) on Tuesday April 18, 2006 @08:02PM (#15153743) Journal
    I've actually used some voice recognition over the years and it's got a lot better than it used to be. The last time I used voice recognition software on my computer, even though I did loads of the training it just didn't seem to get it; eventually I had to give up... it wasn't cheap either. Whilst I think it has the potential to do a lot in the future, I'm just not sure that it's really at the stage where it can be considered a full-time replacement, especially for technical jobs.
  • by FlameboyC11 ( 711446 ) on Tuesday April 18, 2006 @08:14PM (#15153815)
    The main issue I see with coding by voice is that each character needs to be spoken as a word. We only have 26 single sounds we can make (at least us English speakers), so pretty much everything besides the basic sounds has to be the result of multiple letters strung together. Here's some math:

    Let's say you type at about 40 wpm, or about 160 characters per minute (this is a low estimate of 4 chars per word), or about 2.5 characters per second.
    To be as productive speaking, you'd probably have to speak about the same number of words per second as you type characters, or 2.5 words. That's really fast.
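    Spelling out that arithmetic, using the post's own assumed numbers (40 wpm and 4 chars per word are the post's estimates, not measurements):

```python
# Back-of-the-envelope check of the typing-vs-speaking estimate above.
wpm = 40                                # assumed typing speed
chars_per_word = 4                      # low estimate used in the post
chars_per_min = wpm * chars_per_word    # 160 characters per minute
chars_per_sec = chars_per_min / 60      # ~2.7, which the post rounds to 2.5
```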

    Sorry bub, doesn't look like speech is a very good alternative. Hell, Brain Implants on the other hand...
    • The best thing to do is give your hands a rest, and get professional help. Voice recognition for coding sucks... believe me. You're better off doing something else altogether if it comes to that.

      Coding is very precise work, and voice recognition just isn't good at that. If you try coding with your voice, you'll soon find that your voice hurts, and you've been immensely frustrated at the whole experience.

      Have you had medical attention to your hands?

    • We only have 26 single sounds we can make

      No. Not true, even in English. For example, "c" does not make a sound distinctly different from all other characters. Some letters, such as "x" make sounds that can easily be made from a combination of other letters. Including pairings and such, linguists say that the English language includes something closer to 45 single sounds.

      I used to teach Special Ed and saw software that could recognize entire words and use them in writing in a word processor. I have not
      • And then add in letter combinations like "th". And then add in multiple different sounds from the same letters or combinations like the "th" in "think" versus the "th" in "this". The first is unvoiced, the second is voiced; like the difference between "f" and "v".
      • Hooray, I just got out of linguistics class and happen to have my book on me. According to my "Contemporary Linguistics: An Introduction 5th Edition" by O'Grady, et al, there are 49 phonemes in American English. Keep in mind that variants and dialects of English can vary quite a bit, and the book itself says some speakers may be missing a few of the phonemes.
        • Keep in mind that variants and dialects of English can vary quite a bit, and the book itself says some speakers may be missing a few of the phonemes.
          With, of course, the classic case of cot, caught, and bother, which are defined with three different phonemes, but where the average person in the US uses only two of them based upon region.
      • if it could be done by some of the pioneering programs 10 years ago, why would many programs now recognize only individual letters and not words?

        I think what GP means is that you can speak whole words into a word processor and it will match them against words in its dictionary. Any it doesn't recognize would need to be spelled out.

        When coding, we use a lot of words that aren't in the dictionary. if, else and switch would be ok, but Degrees2Radians isn't going to be in any dictionary, so you're going to end
        • Maybe I'm wrong, since it's been 10 years, but I think you can train the systems for new words. It'll be a pain at first, but in the long run, it'll help. It might be a mess coding in something like Java, though, where so many methods are made up of 3-4 distinct words.
      • No. Not true, even in English. For example, "c" does not make a sound distinctly different from all other characters.

        Yes it does, but only when followed by an h. The "ch" sound is distinctly different from sounds produced by any other letters. If it weren't for "ch", yes, 'c' would be a redundant letter.

    • I think you may be confused. First of all, there are way more than 26 sounds in the English language. It's more like 49 individual consonants, vowels, and diphthongs, and many monosyllabic words can be constructed from those.

      As far as I can tell, you're saying that words would need to be spelled out character by character so you'd have to talk really fast to be productive. Custom dictionaries would go a long way towards fixing that. The main issue would be whether a particular speech recognition solution int
      • I think the biggest problem with speech as an input for techies is that the software itself has not yet been written. While there may be recognition software that can comprehend speech at normal speed and append its dictionary as it runs, there's none that I know of that has been set up to function in a technical environment. It may be as simple as putting the pieces together, but it would probably require a lot of hacking on your own. The second biggest problem would be wearing out your voice, although th

    • I think his concern was carpal tunnel syndrome. Hence his comment on curbing the use of his hands. Your analysis is accurate otherwise; one should use hands if one can. At least for regular things.

      Hmm, as for coding it does make one think a bit. I think you might see this sort of thing eventually for coding, but you'd need a special compiler (and perhaps language) that had a bit of AI in it to avoid silly mistakes with commenting, commands, variables, that sort of thing. It could work potentially thoug
    • Obviously, what we need is a voice recognition algorithm with Phonetic Punctuation [wikipedia.org] support built in.

      Of course, we will have to extend it - Victor Borge didn't have sounds for #, < or > - but I'm sure we can come up with something.

      Of course, some programming languages will be better than others - Ada will sound almost normal (other than having to bark out all the words in your best Drill Instructor parade voice), while Perl.... you'll need a good sock on the mike to keep the spit out, and people will t
  • by Anonymous Coward on Tuesday April 18, 2006 @08:28PM (#15153919)
    Voice recognition is good for letter-writing but bad for overall computer usage, especially in UNIX shell (incl vi and especially Emacs). Picking programs that don't require jumping all over the keyboard for basic tasks can reduce the strain. Same goes for programming syntax: Python is a lot more RSI-friendly than Perl, for example. (IMHO) Write scripts that automate routine tasks, even if it's just one line with lots of regex.
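    As one example of that last suggestion, the awk-pipe-sort-pipe-uniq one-liner joked about earlier in the thread can be folded into a single short script you trigger with one alias. An illustrative sketch (not an existing tool), in Python since that's what the parent recommends for RSI-friendliness:

```python
import re
from collections import Counter

def first_field_counts(lines):
    # Pure-Python stand-in for: awk '{print $1}' file | sort | uniq -c
    # Counts occurrences of the first whitespace-separated field per line.
    counts = Counter(re.split(r"\s+", line.strip())[0]
                     for line in lines if line.strip())
    return sorted(counts.items())
```

Bound to a one-word shell alias, it replaces a whole pipeline's worth of shifted characters with a single keystroke burst.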
  • by Anonymous Coward
    Eye won stride to yews voice wreck iginition soft wear tomb ache a slash dot post. Eye was knot imp pressed. It was sofa king we todd did
  • by Lars512 ( 957723 ) on Tuesday April 18, 2006 @08:36PM (#15153973)

    Seriously, if you're suffering hand or arm pain, you should think about the way you're doing things now. Speech recognition is unlikely to replace your current coding practices, although it might help with writing reports.

    Instead, try using the keyboard break feature in gnome. To start with, have it kick you off your computer every 30 mins for a 3 min break, and don't allow yourself to postpone breaks. Get some equivalent software for windows too. Use your 3 min breaks to walk around and stretch. Within a week, you won't be a lot less productive, but your arms will feel a lot better. Then you can maybe up it to 40 mins. In the short term, a course of anti-inflams might help (ask your doctor).

    Also, don't come home in the evening and play games on your computer, or do more work. Your arms probably can't take it. Equivalently, inform your employer of your condition and subsequent inability to work reckless overtime hours.

    These two things should get you started for long-term sustainable maintenance of your arms.

    • While your suggestions are just fine, they're worthless to someone who has a real structural problem. I wrote the following in an email to my Cranial Osteopath [osteohome.com] when requesting an appointment sooner than 3 weeks out.

      During one of our early sessions (about a year ago, for me it seems like quite a long time), you said something about the work being something like "peeling an onion" [in that the "trauma"/"lesions" comes off in layers]. I like the analogy, but I've settled on something slightly different

      • My advice only really applies to someone who's still typing and working on a normal keyboard. If you're unable to do even that for short periods (as in the case of my fiance), then you need serious medical treatment, and I agree voice recognition is the way to go. She uses a tablet, which lets her do a little computer work, 1 year later from the initial flare-up. In your case, spending 6 or 8 hours is clearly destructive, when the problem is already so bad. I still think the keyboard monitor to kick you of
        • ah, but the thing is, the discomfort's there from the moment I sit down. It started 7.5 years ago, in my first semester at college. I learned to ignore it, and while it does get worse if I don't take a break, the difference between taking a break and not is nearly imperceptible. Your suggestions might have done something for me then, but based on what I've learned since I started with the Osteopathic treatment program, what I've been through these last few years was inevitable.

          But I'm getting better now.
    • Yep, stop playing games, and get right with your keyboard. That means GET A GOOD CHAIR. A good steno chair, which will set you back $200. No arms. Back support.

      Mouse as little as possible.

      Anti-inflammatories are your friend. Much of what's happening to your hands is your own body's doing.

      You need to be able to heal over night as much as you do damage during the day. Do .01% more damage each day than you heal, and someday you simply won't have hands. So heal as much or more each night, and slowly you'
  • Linux Adaptability (Score:4, Informative)

    by skwirlmaster ( 555307 ) on Tuesday April 18, 2006 @08:45PM (#15154015)
    It's been a while since I've had to look into speech recognition for linux, but this link should help you get started: Linux Accessibility Resource Site [utoronto.ca]

    Read down to the section about speech recognition. I hope that helps.
  • Seriously. Try not using CLI for everything and see if that helps your problem.

    Voice recognition is still hit-or-miss.

    • Voice recognition is still hit-or-miss
      It seems to work with Nintendogs on low end hardware (Nintendo DS with 4MB memory). I suspect the secret is having a limited number of things to match - for example a voice menu with limited options in each context that sound very different.
      • It seems to work with Nintendogs on low end hardware (Nintendo DS with 4MB memory). I suspect the secret is having a limited number of things to match - for example a voice menu with limited options in each context that sound very different.

        But in Nintendogs, you're talking to a dog. It's okay if it doesn't quite understand the first time. In fact, that's expected and part of the "charm" of the dog. I don't mind telling my dog to "sit" a couple times, but if I had to tell my computer to "save" three

  • Shoot! (Score:5, Informative)

    by Bios_Hakr ( 68586 ) <xptical@@@gmail...com> on Tuesday April 18, 2006 @08:47PM (#15154030)
    For gaming on WinXP, I use an app called Shoot! [gameclubcentral.com]. While playing Falcon, I use it for fairly simple (press T, wait 5 seconds, press 1) macros. I was dicking around and decided to set up a profile for some simple stuff in Cygwin. If I say "list", the program returns "ls". "List all" will return "ls -a". "List all long" will return "ls -la".

    You can, with some tweaking, even get it to understand complicated stuff. If I say "manual g r u b", I can get "man grub". "Vi save quit" could be mapped to ":wq" without too much trouble.

    Anything you can type, it can do.

    I don't think it works under Linux. I don't know of anything like it under linux. It does, however, work quite well inside PuTTY.
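    For anyone wanting to mimic that setup, the heart of it is just a phrase-to-keystrokes table. A hypothetical sketch of the profile described above (this is not Shoot!'s actual profile format):

```python
# Illustrative phrase-to-keystroke table in the spirit of the Shoot!
# profile described above; the tool itself handles the listening.
MACROS = {
    "list": "ls",
    "list all": "ls -a",
    "list all long": "ls -la",
    "manual g r u b": "man grub",
    "vi save quit": ":wq",
}

def expand(phrase):
    # Return the keystrokes for a recognized phrase, or None if unmapped.
    return MACROS.get(phrase.lower().strip())
```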
    • This sounds very similar to the built-in voice recognition that Macintoshes have had for some time now (not to knock it). Anything you want typed or done can be triggered by a voice command, though it has to be scripted individually.

      In terms of Windows, the best that I know of is still Dragon Naturally Speaking [nuance.com], though I strongly recommend pirating it first to decide if it serves your needs. Unfortunately, even with regular training it still gets things wrong with alarming frequency. You have to retouch e
  • mmmmmmmaudio (Score:4, Interesting)

    by MobileTatsu-NJG ( 946591 ) on Tuesday April 18, 2006 @09:03PM (#15154110)
    "I'd like to hear options for sending email via voice, which I hope is easy, and I'd love to hear of any solutions that allow effective coding via voice, which seems much more difficult."

    I've wondered about this myself. I tend to use my computer with the headphones on. Often, I'm listening to music or.. well just plain silence, just the standard dings of Windows. I do pay attention, though, to the sounds coming from the computer. (i.e. the traditional hoo-hoo of receiving an email.) I've always wondered what more could be done with sound to make the user more aware of the goings-on with their computer, especially when a number of apps are actively working. I think I was inspired by an episode of Futurama I caught. One character's personality was in the Pilot's body. The Pilot, whose personality was in yet another body, was trying to describe how to interact with the ship. I remember him saying "Can you hear that faint little tone? That's the status of..".. or something or other.

    In any event, it's fun to imagine. I wouldn't mind if a soft low-volume voice were to say "You have received an email from: John Smith." I had a job a few years ago where that would have been a nice little feature since messages would come in that required urgent attention. My solution to the problem at the time was to use a custom filter that would specifically notify me of important messages by bringing a little window up to the surface. That was fairly annoying, though, when the computer was busy and it was slow as molasses to get the window to go away.

    • I can't believe I misread that summary. He's talking about emailing via voice, not having the voice play back his emails. Yes, I'm an idiot, that's what I get when I post before my first coffee.
    • Heh, I already have this feature in my new desktop machine. Apparently, the CPU fan is PWMed by the motherboard, and it changes its speed based on the powernow clock speed. Therefore, you hear the fan revving up whenever the CPU is doing something. To be honest, it's getting to be really annoying, simply because you hear it rev up every time you, say, drag something with the mouse.
  • Yeah, have you ever talked dirty to your computer and felt as if the feelings are not mutual?
    I hope better voice recognition and TTS will resolve that.
  • OSSRI, VoiceCoder (Score:3, Interesting)

    by robocyberdroidbot ( 898422 ) on Tuesday April 18, 2006 @09:19PM (#15154189) Homepage
    I ran into this problem while working (coding) & trying to do grad school (in Comp Sci). The first point I'd make is, take a rest break (no computer use) for a while if you can. ASR isn't really there yet, & it won't help you with other things you might want to be pain-free for... seriously. That said, there is a group called "Open Source Speech Recognition Initiative" whose mailing list I'm on, but they don't have any product yet. Might get a better answer posting there, though. Or not. There's also a group on Yahoo (I think) called VoiceCoder. That's your best bet right now, although it's all about Dragon Naturally Speaking & various hacks & kludges to be able to do coding, use Dragon for Linux, etc. Dragon has been reported to run under WINE, but of course YMMV depending on your hardware, versions, etc., etc. Finally, whatever approach you try, expect it to take a good long while before you begin to approach your hand-using productivity. The technology isn't there yet, and even though I know how to improve it, I have no Ph.D. so no one would give me the $$ to do the research that could back up my claim.
  • perlbox using sphinx (Score:5, Informative)

    by Danny Rathjens ( 8471 ) <<gro.snejhtar> <ta> <2todhsals>> on Tuesday April 18, 2006 @09:23PM (#15154208)
    The perlbox voice control app is kind of a stalled project, but it is a nifty front end for the open sphinx voice recognition engine.
    http://perlbox.sourceforge.net/ [sourceforge.net]
    http://cmusphinx.sourceforge.net/ [sourceforge.net]

    Command and control is a lot easier to do with voice recognition since the dictionary the engine has to choose from is so much smaller. Having voice recognition engines understand arbitrary words well is still a bit difficult.
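    A toy illustration of that point (an assumed example, not part of perlbox or Sphinx): with a small vocabulary of acoustically distinct commands, even crude fuzzy matching on the engine's text output usually lands on the right one.

```python
# Why a small command vocabulary is easier: few, distinct choices mean
# even a naive string-similarity scorer can recover the intended command
# from a garbled recognition result.
import difflib

VOCAB = ["play", "stop", "next", "previous", "volume up", "volume down"]

def best_match(heard, cutoff=0.6):
    # Return the closest vocabulary entry, or None if nothing is close.
    matches = difflib.get_close_matches(heard, VOCAB, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

With a dictation-sized vocabulary of tens of thousands of words, this kind of trick falls apart, which is the gap the parent is pointing at.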

  • I was looking to make a headless system that I could bark commands at, and I was quite successful at developing my own actually. I used IBM's ViaVoice SDK and modified a few of the sample programs they had that were written in C. It took a little work getting the system running, it being a tad old and all, but eventually I got it down to where it was completely usable and I could make requests like "new mail" and "talk to me dirty". Oh and yes, it was a Linux system it was running on (Slackware 8).

    Google it.
  • A new and innovative input device has had some positive reviews floating around the net lately. It's called AlphaGrip and is basically a keyboard mapped onto a large game controller (with a trackball to boot). I ordered one a few days ago so I don't have first-hand experience with it yet, but the reviews come from some reputable sites (linked below). It claims to allow 50 wpm with only 30 hours of training. I'm not so sure about that but I'm willing to find that out for myself. Sorry for the short post but
    • that is pretty spiffy! I bookmarked that link to their product page, and 99 bucks doesn't seem all that unreasonable if it is as comfortable as they say (or it looks). Keyboard and mouse takes three hands! nuts! It's always seemed 'tarded to me that way... And I have tried trackpad keyboards, don't like them, something like this, though, where you can sit back in a comfy chair and surf and type looks pretty neat.
  • First, find a solution that makes it easy to enter text into a GUI (gnome accessibility, WINE w/dragon natural speaking, whatever).

    Find a subset of words that are short, easy to remember, easy to say, and above all -- accurately translated by the chosen voice recognition software.

    Then create a small perl script that can take this coded input and convert it into a nicely formatted chunk of code.

    You can have different translators for different target languages... for example

    In shell programming, you might have the following:

    hash -> #
    bang -> !
    pipe -> |
    test -> [
    end test -> ]
    mark -> '
    quote -> "
    end mark/quote (keeps them balanced for shell scripts)

    for identifiers... don't name them. For example, let's say you wanted to do this:

    #!/bin/bash
    function hello_lcase {
        HELLO=$1
        if [ -z "$HELLO" ] ; then
            echo "Hello world"
            echo -n "Hello from "
            echo "$HELLO" | sed -e 's/.*/\L\0/'
        fi
    }

    you would say:

    hash bang slash bin slash bash
    new function 1
    set local 1 ref in 1
    if test empty ref local 1 end test
    echo string 1
    echo option n string 2
    echo ref local 1 pipe program s e d option e space
    mark s slash dot star slash back upper l back 0 slash end mark
    end if
    end function 1

    you'd run the perl script and it'd ask you:

    what do you want to call function 1: foo
    what do you want to call local variable 1 in function 1: HELLO
    what do you want to use for string resource 1: Hello World
    what do you want to use for string resource 2: Hello from

    and it'd output the script (maybe after running through indent)

    You could substitute "1" for any easily recalled mnemonic or symbol the speech->text translator is unlikely to mistranslate (in this case "foo" and "hello" would probably be fine as is).
    Then you'd get a chance to globally "refactor" your symbols and give them nice-looking names, only having to type them once.
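    A rough sketch of how such a translator could start out, written in Python here rather than perl; the token table and the greedy two-word matching rule are assumptions drawn from the list above, not an existing tool:

```python
# Spoken-token to shell-character translator, sketched from the mapping
# table above. Two-word tokens ("end test") are tried before one-word
# tokens; anything unrecognized passes through as a literal word.
TOKENS = {
    "hash": "#", "bang": "!", "pipe": "|", "slash": "/",
    "test": "[", "end test": "]", "mark": "'", "quote": '"',
}

def translate(words):
    out, i = [], 0
    while i < len(words):
        two = " ".join(words[i:i + 2])
        if two in TOKENS:
            out.append(TOKENS[two])
            i += 2
        elif words[i] in TOKENS:
            out.append(TOKENS[words[i]])
            i += 1
        else:
            out.append(words[i])
            i += 1
    return " ".join(out)
```

A real version would add the placeholder pass described above ("function 1", "local 1", "string 1") and prompt for the real names at the end.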
  • I just did a presentation on speech recognition software for the Office of Disabilities Services at my school, and since I see that you have a lot of response on the technical aspect of it, I'd like to bring up something else: how speaking to the computer affects *you*. One of the things that most surprised me about using speech recognition is how speaking comes from a different part of the brain than typing. Composition through speech is *very* difficult to start; don't think you're going to just dive in
  • VoiceCode (Score:2, Informative)

    Don't get too discouraged by the large number of commenters who haven't used speech recognition or who don't understand why someone might need to lay off the keyboard for a while. I wrote 100k lines of C++ code hands-free for my astronomy thesis over the course of two years, using speech recognition software that is now about 10 years out-of-date. There have been significant improvements in both the speech recognition technology and tools for coding by voice since then. For coding, take a look at th
    • Re:VoiceCode (Score:4, Informative)

      by lpq ( 583377 ) on Wednesday April 19, 2006 @02:01AM (#15155150) Homepage Journal
      I tried both Dragon Naturally Speaking (costing ~$500 or $600 at the time), and gave up on it over the training problems and low recognition rates. I tried IBM's ViaVoice Professional, USB-Pro -- with digital signal processing in an included microphone and a digital connection to my computer. With a 1-paragraph training session, it was already over 95% and improving over Draggin'. It was easier to train, and you could train it on the text you were typing -- i.e. it was able to learn from corrections and merge them back into your voice profile.

      Unfortunately, IBM released it in 2001-2002, then forgot about it. They've since gone on to their non-training voice recognition solutions for sale to businesses. They seem to have advanced, but not in any retail product.

      Dragon has come out with updates, but from people who have used and trained on *both*, ViaVoice has higher accuracy (~1% difference). The ViaVoice product price has fallen, and Dragon has, of course, gone up....

      Whatever product you get, get a fast 2+ CPU machine with lots of RAM -- 2GB or more. The ViaVoice algorithm adapts to your talking speed -- it will perform more lookups and comparisons and have greater accuracy as the processor speed goes up. ViaVoice stops comparing when it runs out of time (your speaking has gotten too far ahead). But it listens to the words, in context, to determine spelling. The more memory it has, the more vocabulary it can pull into memory. Note -- I am saying get a dual-CPU (or dual-core) machine, the faster the better.

      ViaVoice was also released on Linux, but without as much application support.

      For coding support in voice products -- there just hasn't been enough demand.

      But for "wrist support" -- try a multi-faceted approach. Maybe voice recognition, maybe a tablet for input? Ergo keyboards, trackballs? It's not a comfy field. There isn't a great financial incentive to develop voice input for coding when you can hire foreigners for peanuts, and keep having eager generations of new hackers to come and be sacrificial lambs on the keyboards of progress...;-)

  • by belmolis ( 702863 ) <billposer AT alum DOT mit DOT edu> on Wednesday April 19, 2006 @04:30AM (#15155498) Homepage

    For something that runs on Linux directly, you might have a look at the Accessible Speech Recognition Technology software [slashdot.org]. It's a research project, not a polished system, but you might be able to hack it to do what you need.

  • In the UK the terms are different. Over here the process to which you are referring (having a computer hear, understand and interpret words being spoken to it) is called "Speech Recognition". This process is very tough because the machine needs to be trained by the individual doing the speaking - there are differences in dialect, accent, timbre, pitch, all kinds of voice attributes which can throw the machine off course.

    OTOH, "Voice Recognition" is used to describe the process of taking a voice sample and c
    • I'm looking for something like this actually. Dead simple to set up, requires "training". Prerecorded samples which, if detected, run certain shell scripts. Gimmicky, like "computer, pause music".
      • If you're on linux, try out cvoicecontrol. I just ran across it yesterday (thanks to this thread), and it seems to work pretty well. I've only just tested it, but setting it up is very simple and it seems accurate enough.
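        For the dispatch side of that kind of gimmick, all you need is a table from recognized phrases to commands. An illustrative Python stand-in (cvoicecontrol's real configuration format differs, and the player commands here are assumptions):

```python
# Map recognized phrases to shell commands and run the match, if any.
# The phrases and the "xmms" flags are illustrative assumptions.
import subprocess

COMMANDS = {
    "pause music": ["xmms", "--play-pause"],
    "next track": ["xmms", "--fwd"],
}

def lookup(phrase):
    # Normalize the recognized phrase and find its command, if any.
    return COMMANDS.get(phrase.lower().strip())

def dispatch(phrase):
    cmd = lookup(phrase)
    if cmd is not None:
        subprocess.run(cmd)
    return cmd is not None
```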
  • xvoice (Score:3, Interesting)

    by TheRealDamion ( 209415 ) on Wednesday April 19, 2006 @05:30AM (#15155641) Homepage
    xvoice is a gtk1 X application which uses IBM's ViaVoice engine to provide voice control and dictation support to arbitrary X applications. xvoice.sf.net is the url. The mailing list mainly covers issues of getting the ViaVoice libs working on modern distributions. The last release of VV was around the glibc2.0/2.1 era and most new ld.so's will struggle to execute the libraries and java dependencies. It's also fairly hard to buy a copy of VV 2nd hand anywhere and IBM appear to ignore any request to release it.

    However once you get past all of these issues (actually even running the old gtk1 xvoice becomes hard on modern dists), it works like a charm. As it's pure X, you can display to any X server, be it one run under OSX or Windows, or a Sun SPARC box. You just need the mic connected to the x86 Linux box the client runs on.

    This meets your requirement for editing in vim etc. The accuracy, I found, was fantastic.
  • I'll Ike's peach recognition all hot. Hits deaf finite Lisa free king she tune awe wad aim in. Soon Hampshire wheel used at took converse hate tuna Delhi basis. Him a gin bee ink ape able loft haul King Kong chats norm ally a Zeus aim thai mass Jah king of. the hill Harry you sand dumb harass Sing Sing whool after behaving the peep hole gay thing York raze him ownings trains crypted, dead Bill Ike wad deaf hock.
  • It's possible to recognize speech pretty well (and no, the ridiculous examples of "I'll Ike's peach recognition all hot" don't really happen for any reasonable engine that uses language models, and most of them do these days).

    The main problem is that no one actually speaks or writes as eloquently as speech recognition demos make it appear.

    Try this experiment: map backspace, delete and arrow keys to @ and try to write a letter or some code. You'll quickly give up. When you see demos of speech recognition, you neve
  • A few years ago (ahem... 1996-1997) when VR became the big business buzz I was tasked with implementing a pilot project for a large government organization. The goal was to get rid of stenographers and have extremely-highly-paid analysts do their own bottom-of-the-pay-scale transcription. Keep in mind it was government -- which meant that most of the people in these positions were 50+ and many had never learned how to type, and only started using computers because my prior big project forced them to.

  • Some folks might remember that OS/2 Warp 4 (September 1996) was released with both IBM voice navigation and IBM voice dictation technology as part of the standard package.

    The initial product package even included a headset microphone in the box.

    Not many people used it, and at that point in time it required some initial training to use in an effective manner (it had to learn each person's pronunciation habits), but there were still a few folks I knew that got a lot of mileage out of the technology at the tim
  • SpeechLion (Score:2, Interesting)

    by rbrewer123 ( 884758 )
    I have a small project based on sphinx4 that allows command and control of Linux. It is really not ready for primetime yet, but help and feedback is appreciated. I have looked into dictation (for email) with sphinx4 but have not implemented it yet.

    http://freshmeat.net/projects/speechlion [freshmeat.net]

  • Dragon [nuance.com] version 8 made major improvements in recognition. The preferred version will read out loud. My wife has neck and shoulder problems; Dragon allows her to use a computer reasonably well. They have ratings for different microphones, and I sprung for the 5 Dragon USB mic. Doesn't make sense to cheap out after the software's installed. We got an upgrade offer after giving up on Dragon many versions ago. The version 8 release is actually worthwhile.
