¹Carnegie Mellon University  ²Aix Marseille Univ, CNRS, LPL, LIS  ³NTT Corporation  ⁴University of Kansas
We present GeSTICS, a novel multimodal corpus designed to facilitate the study of gesture synthesis in two-party interactions with contextualized speech. GeSTICS comprises audiovisual recordings of post-game sports interviews, capturing both the verbal and nonverbal aspects of communication.
The GeSTICS dataset will be available for download soon. Please check back later or contact us for more information.
The GeSTICS dataset is designed to enhance the generation of realistic nonverbal behaviors and can help to:
- Improve the naturalness of virtual agent interactions in various applications.
- Enhance the realism of animated characters in games and interactive media.
- Develop more natural and intuitive interactions between humans and robots.
@inproceedings{kebe2024gestics,
  title     = {GeSTICS: A Multimodal Corpus for Studying Gesture Synthesis in Two-party Interactions with Contextualized Speech},
  author    = {Kebe, Gaoussou Youssouf and Birlikci, Mehmet Deniz and Boudin, Auriane and Ishii, Ryo and Girard, Jeffrey M. and Morency, Louis-Philippe},
  booktitle = {ACM International Conference on Intelligent Virtual Agents (IVA '24)},
  year      = {2024},
  month     = {September},
  address   = {Glasgow, United Kingdom},
  publisher = {ACM},
  doi       = {10.1145/3652988.3673917},
  isbn      = {979-8-4007-0625-7}
}
For full license details, please contact the authors.