Someone I know is looking for a programmer to design and code an audio application. This is for a university, so the resulting program will be used in labs, possibly around the country. The source code will be published under some open-source license.
The resulting program would have to run on Macintosh and Windows, and preferably on Linux as well. The project would have to be completed within n months -- the exact number is undetermined and depends somewhat on the final feature set, but it's definitely less than a year. So it would be important that the programmer already be familiar with some application framework that allows for development on those three platforms, because there isn't a budget for spending three months learning how to do that. I don't think anyone cares what language/environment you use as long as it's compatible with an open-source license and is portable to the three desired platforms.
The core of the program is an algorithm that will take input from an audio source (microphone) of human singing and convert it, in real time, into a frequency in Hz. This means not only doing an FFT, on a scale of tens of milliseconds, in real time -- which is fairly well understood -- but also interpreting the result. When a pitch is sung, the frequencies produced are not limited to the fundamental, which our ears hear as "the pitch." The note produced includes a variety of overtones and formants, depending on the vowel being sung.
Pulling out "the frequency" from a constantly-changing vector of amplitudes is not rocket science, but it's not trivial either. If you're not an expert on digital audio, don't worry too much about it. Explaining this to you is the professor's job. But if the above description scares you, well, this project is probably not for you.
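To give a flavor of the problem, here's a minimal sketch of one common approach -- this is my illustration, not the professor's algorithm. A naive FFT peak-pick often lands on an overtone rather than the fundamental; the harmonic product spectrum is one well-known remedy, downsampling the spectrum so the fundamental's harmonics reinforce each other. Function and variable names here are made up for the example.

```python
import numpy as np

def estimate_pitch(frame, sample_rate, max_harmonics=3):
    """Estimate the fundamental frequency (Hz) of one audio frame
    via a harmonic product spectrum (one of several possible methods)."""
    windowed = frame * np.hanning(len(frame))   # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    hps = spectrum.copy()
    for h in range(2, max_harmonics + 1):
        decimated = spectrum[::h]               # spectrum compressed by factor h
        hps[:len(decimated)] *= decimated       # harmonics line up at the fundamental
    peak_bin = int(np.argmax(hps[1:])) + 1      # skip the DC bin
    return peak_bin * sample_rate / len(frame)

# A 30 ms frame at 44.1 kHz of a 220 Hz "voice": fundamental plus two overtones.
sr = 44100
t = np.arange(int(0.03 * sr)) / sr
frame = (np.sin(2 * np.pi * 220 * t)
         + 0.5 * np.sin(2 * np.pi * 440 * t)
         + 0.3 * np.sin(2 * np.pi * 660 * t))
print(estimate_pitch(frame, sr))
```

Note that a 30 ms frame gives you bins roughly 33 Hz apart, so the estimate lands near 220 Hz but not exactly on it -- real implementations interpolate between bins or track across frames, which is exactly the "not trivial" part mentioned above.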
If it helps you visualize, the core of this is quite similar to the core of the program Music MasterWorks. The key difference is that this application must be geared toward feedback and evaluation in a classroom setting -- and, of course, that it will be open-source, a chief advantage of which is that other academic institutions will be able to build on the work over time and maintain it.
There is of course money to pay for this. This is a university, so you're not getting stock that will make you rich. But it'll pay the bills for those n months.
The remaining necessary features are pretty boring: the program has to be able to take audio input from whatever standard microphone setup there is for the platforms needed. It has to play audio (to establish the key for the student and play for them the notes they are to sing back). It has to allow students to log in, and has to store data about their activities, over a network to an SQL database (real simple DB stuff).
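To show just how boring that last part is, here's a rough sketch of the activity log, using SQLite in memory as a stand-in for whatever networked SQL server the university actually runs. The table and column names are my own illustrative assumptions, not a spec.

```python
import sqlite3

# In-memory SQLite as a placeholder; the real app would connect to a
# networked SQL server. Schema below is a guess at the kind of record kept.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE attempts (
        student_id   TEXT NOT NULL,
        target_hz    REAL NOT NULL,   -- pitch the student was asked to sing
        sung_hz      REAL NOT NULL,   -- pitch the analyzer detected
        recorded_at  TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO attempts (student_id, target_hz, sung_hz) VALUES (?, ?, ?)",
    ("jsmith", 220.0, 224.3))
row = conn.execute(
    "SELECT student_id, sung_hz - target_hz FROM attempts").fetchone()
print(row)  # student and how far off they were, in Hz
```

Really, "real simple DB stuff" -- a login table, an attempts table, and a few queries for the professor's reports.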
The final goal for this project is to allow professors to flexibly input a series of MIDI notes, and to have students sight-sing (since displaying a staff visually is probably harder than playing the necessary notes out the speaker!). That means features like authorizing a professor, reading MIDI input live from a MIDI interface, storing that data, and displaying notes on a staff in a (relatively simple) graphical format. But I think we're assuming that's going into version 2.0.
The programming work can almost certainly be done over the internet, with meetings by regular ol' phone, after an initial face-to-face meeting.
None of this is starting soon. There is grant paperwork, and approvals, which have yet to happen. My best guess is that the ball will get rolling "sometime this calendar year" -- not a very accurate guess, I know, sorry about that.
If you're a programmer who'd be interested in this project, drop me a line at firstname.lastname@example.org.