Submission + - Developers just open-sourced a framework for AI avatars that move and gesture while speaking (nerds.xyz)
BrianFagioli writes: A newly open-sourced framework called SentiAvatar aims to improve how AI-generated "digital humans" move and speak during live conversations. The project, developed by SentiPulse and researchers from Renmin University of China, focuses on synchronizing speech, facial expressions, and body gestures to reduce the uncanny valley effect that often plagues avatar systems. The framework generates six-second motion sequences in roughly 0.3 seconds, allowing digital characters to maintain continuous movement while responding to spoken dialogue.
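Generating six seconds of motion in 0.3 seconds means the model runs roughly 20x faster than real time, so a simple producer/consumer buffer can always have the next chunk ready before the current one finishes playing. Here is a minimal Python sketch of that buffering idea only; generate_motion_chunk and the queue logic are illustrative assumptions, not SentiAvatar's actual API:

    import time
    from collections import deque

    CHUNK_SECONDS = 6.0   # length of each motion sequence, per the article
    GEN_SECONDS = 0.3     # reported generation time for one sequence

    def generate_motion_chunk(speech_text):
        # Hypothetical stand-in for the model call; a real system would
        # return per-frame pose and expression data for the next 6 seconds.
        time.sleep(GEN_SECONDS)  # simulate inference latency
        return [f"frame driven by {speech_text!r}"]

    # Because a 6 s chunk takes only ~0.3 s to generate, the next chunk is
    # ready long before the current one finishes playing, so the avatar
    # never freezes between responses.
    playback_queue = deque([generate_motion_chunk("hello")])
    for utterance in ["how are you?", "thanks for asking"]:
        deadline = time.monotonic() + CHUNK_SECONDS  # playback deadline
        playback_queue.append(generate_motion_chunk(utterance))
        current = playback_queue.popleft()  # hand frames to the renderer
        time.sleep(max(0.0, deadline - time.monotonic()))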
The release also includes a conversational motion dataset containing 21,000 clips and about 37 hours of synchronized speech, facial-expression, and full-body animation data. Developers say the system uses a planning architecture that decides which gestures or expressions should occur before filling in detailed animation frame by frame. The goal is to produce more natural body language during real-time conversations, though the real test will be whether the open source community can turn the technology into believable digital characters outside of controlled demos.
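To make that plan-then-fill idea concrete, here is a small Python sketch of a two-stage generator; plan_gestures, fill_frames, and the gesture labels are hypothetical placeholders under that assumed design, not code from the release:

    FPS = 30

    def plan_gestures(segments):
        # Hypothetical stage 1: assign a coarse gesture label to each
        # speech segment before any animation frames exist.
        return ["shrug" if "?" in text else "beat" for text, _ in segments]

    def fill_frames(gesture, duration_s):
        # Hypothetical stage 2: expand one planned gesture into dense
        # frame-by-frame animation (a real model would emit
        # pose and expression vectors instead of strings).
        return [f"{gesture}:frame{i}" for i in range(int(duration_s * FPS))]

    segments = [("Hi there!", 1.2), ("Want to hear more?", 2.0)]
    plan = plan_gestures(segments)
    animation = [frame
                 for gesture, (_, dur) in zip(plan, segments)
                 for frame in fill_frames(gesture, dur)]
    print(plan)            # ['beat', 'shrug']
    print(len(animation))  # 96 frames for 3.2 seconds at 30 fps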