AI-generated facial animation and lip-sync. Pic courtesy: Nvidia
Nvidia is open-sourcing Audio2Face, its AI-powered tool that generates realistic facial animation for 3D avatars from audio input. Developers can now use the tool and its framework to build lifelike 3D characters.
Audio2Face works by analysing the “acoustic features” of a voice and generating animation data, which is then mapped onto the 3D avatar’s facial expressions and lip movements. It can be used to create 3D characters for pre-scripted content as well as for livestreams.
Some developers have already used it in their games. Nvidia is also making the tool’s training framework available, allowing users to tweak its models.
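For readers curious how an audio-to-animation pipeline of this kind is typically structured, the sketch below is a minimal, hypothetical illustration and not Nvidia’s actual Audio2Face code or API. It extracts per-frame acoustic features from an audio clip with librosa and passes them through a placeholder model that outputs blendshape weights of the sort a 3D facial rig consumes. The feature choice (MFCCs), the `predict_blendshapes` function, and the blendshape names are all assumptions made for illustration.

```python
# Hypothetical sketch of an audio-to-facial-animation pipeline.
# This is NOT Nvidia's Audio2Face implementation or API; the model,
# feature choice, and blendshape names are illustrative assumptions.
import numpy as np
import librosa

FPS = 30  # target animation frame rate
BLENDSHAPES = ["jawOpen", "mouthSmile", "mouthPucker"]  # assumed rig targets


def extract_acoustic_features(path: str, sr: int = 16000) -> np.ndarray:
    """Load audio and compute one acoustic feature vector per animation frame."""
    y, sr = librosa.load(path, sr=sr)
    hop = sr // FPS  # hop length chosen so frames align with the animation rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop)
    return mfcc.T  # shape: (num_frames, 13)


def predict_blendshapes(features: np.ndarray) -> np.ndarray:
    """Placeholder for a learned mapping from acoustic features to blendshape
    weights in [0, 1]; a real system would use a trained neural network here."""
    rng = np.random.default_rng(0)
    w = rng.random((features.shape[1], len(BLENDSHAPES)))  # stand-in weights
    logits = features @ w
    return 1.0 / (1.0 + np.exp(-logits))  # squash to the [0, 1] weight range


if __name__ == "__main__":
    feats = extract_acoustic_features("speech.wav")  # hypothetical input file
    frames = predict_blendshapes(feats)
    for name, value in zip(BLENDSHAPES, frames[0]):
        print(f"frame 0  {name}: {value:.2f}")
```

In a production system, the per-frame weights would drive the avatar’s rig in an engine or DCC tool rather than being printed; this sketch only shows the audio-features-to-animation-data flow the article describes.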



