
EMOTE

Emotional Speech-Driven Animation with Content-Emotion Disentanglement

SIGGRAPH Asia 2023 (Conference)

Radek Daněček1, Kiran Chhatre2, Shashank Tripathi1, Yandong Wen1, Michael J. Black1, Timo Bolkart1
1Max Planck Institute for Intelligent Systems, Germany; 2KTH Royal Institute of Technology, Sweden

EMOTE = Expressive Model Optimized for Talking with Emotion

Paper Video

Code

Given audio input and an emotion label, EMOTE generates an animated 3D head with state-of-the-art lip synchronization that also expresses the specified emotion. The method is trained from 2D video sequences using a novel video emotion loss and a mechanism that disentangles emotion from speech content.

Code and model are available through the GitHub repo.
Training code and data coming soon.

@inproceedings{EMOTE,
  title = {Emotional Speech-Driven Animation with Content-Emotion Disentanglement},
  author = {Daněček, Radek and Chhatre, Kiran and Tripathi, Shashank and Wen, Yandong and Black, Michael J. and Bolkart, Timo},
  booktitle = {SIGGRAPH Asia 2023 Conference Papers},
  publisher = {ACM},
  month = dec,
  year = {2023},
  doi = {10.1145/3610548.3618183},
  url = {https://emote.is.tue.mpg.de/index.html}
}

© 2023 Max-Planck-Gesellschaft · Imprint · Privacy Policy · License