Wingfield writes: I am a researcher with the Bio-music program at the University of North Carolina at Greensboro. Currently we are analyzing recordings of bonobo apes for evidence of conversational rhythm. This research has many applications, from discovering the evolutionary beginnings of our language to improving something as mundane as public speaking. Every conversation has an underlying rhythm to it; if you have ever watched bad actors, you know what a conversation without that rhythm sounds like: awkward, unnatural, and stilted. The bonobos we are studying, whose vocalizations are comparable to barks in duration and intensity, have shown very promising signs of conversational rhythm, both with each other and with the human researchers with whom they interact. However, we are looking for a way to represent this rhythm unambiguously, through a computer analysis that would not only be more accurate than human ears but would also save us the tedium of spending hours analyzing ten seconds of audio. Enter the Slashdotters: are any of you familiar with a program that could be adapted to fit our needs, such as one that can detect the stresses placed on syllables in speech?
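For readers wondering what the core of such an analysis looks like, here is a minimal sketch of the usual first step: track a short-time intensity envelope, find where calls begin, and measure the inter-call intervals that make up the "rhythm." This is not any particular product; the frame sizes, the 16 kHz sample rate, the threshold, and the synthetic noise-burst "barks" standing in for a real recording are all illustrative assumptions.

```python
import numpy as np

def rms_envelope(signal, frame_len=512, hop=256):
    """Short-time RMS energy: a crude proxy for vocal intensity."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        frames.append(np.sqrt(np.mean(frame ** 2)))
    return np.array(frames)

def detect_onsets(envelope, threshold):
    """Frame indices where the envelope first rises above the threshold."""
    above = envelope > threshold
    # An onset is a frame above threshold whose predecessor was not.
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Synthetic stand-in for a recording: three 100 ms noise bursts ("barks")
# in two seconds of silence, at an assumed 16 kHz sample rate.
rng = np.random.default_rng(0)
sr = 16000
audio = np.zeros(sr * 2)
for t in (0.2, 0.9, 1.5):                        # burst start times (s)
    i = int(t * sr)
    audio[i:i + 1600] = rng.normal(0.0, 0.5, 1600)

env = rms_envelope(audio)
onsets = detect_onsets(env, threshold=0.1)
hop_seconds = 256 / sr
intervals = np.diff(onsets) * hop_seconds        # inter-call intervals (s)
print(len(onsets), np.round(intervals, 2))
```

Real speech-analysis packages do the same thing with far more sophistication (pitch tracking, spectral features, syllable segmentation), but statistics over those inter-call intervals are one unambiguous, machine-computable representation of conversational timing.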