
Cross-conditioned recurrent networks for long-term synthesis of inter-person human motion interactions

Kundu, JN and Buckchash, H and Mandikal, P and Rahul, MV and Jamkhandi, A and Babu, RV (2020) Cross-conditioned recurrent networks for long-term synthesis of inter-person human motion interactions. In: 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 1-5 March 2020, Snowmass Village, CO, USA, pp. 2713-2722.

PDF: IEEE_WIN_CON_APP_COM_VIS_2713-2722_2020.pdf - Published Version (Restricted to Registered users only)
Official URL: https://dx.doi.org/10.1109/WACV45572.2020.9093627

Abstract

Modeling the dynamics of human motion is one of the most challenging sequence modeling problems, with diverse applications in the animation industry, human-robot interaction, motion-based surveillance, etc. Available attempts to use auto-regressive techniques for long-term single-person motion generation usually fail, resulting in stagnated motion or divergence to unrealistic pose patterns. In this paper, we propose a novel cross-conditioned recurrent framework targeting long-term synthesis of inter-person interactions beyond several minutes. We carefully integrate the positive implications of both auto-regressive and encoder-decoder recurrent architectures by interchangeably utilizing two separate fixed-length cross-person motion prediction models for long-term generation in a novel hierarchical fashion. As opposed to prior approaches, we guarantee structural plausibility of the 3D pose by training the recurrent model to regress the latent representation of a separately trained generative pose embedding network. Different variants of the proposed framework are evaluated through extensive experiments on SBU-Interaction, CMU-MoCap and an in-house collection of a duet-dance dataset. Qualitative and quantitative evaluation on several tasks, such as short-term motion prediction, long-term motion synthesis and interaction-based motion retrieval, against prior state-of-the-art approaches clearly highlights the superiority of the proposed framework. © 2020 IEEE.
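The abstract describes alternating two fixed-length cross-person predictors and decoding their outputs through a pretrained pose embedding. The following is a minimal, hypothetical PyTorch sketch of that generation loop; all module names, shapes, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# A hedged sketch of the cross-conditioned generation loop described in the
# abstract: two recurrent predictors, each conditioned on the other person's
# motion chunk, are applied alternately, and each step regresses a latent code
# that a separately trained pose-embedding decoder maps back to a 3D pose.
# All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class CrossConditionedPredictor(nn.Module):
    """Encodes the partner's motion chunk and auto-regressively decodes a
    fixed-length chunk of latent pose codes for the target person."""
    def __init__(self, latent_dim=32, hidden_dim=256, chunk_len=25):
        super().__init__()
        self.chunk_len = chunk_len
        self.encoder = nn.GRU(latent_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRUCell(latent_dim, hidden_dim)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)

    def forward(self, partner_chunk, last_own_code):
        # partner_chunk: (B, chunk_len, latent_dim) codes of the other person
        _, h = self.encoder(partner_chunk)        # summarize partner's motion
        h = h.squeeze(0)
        code, out = last_own_code, []
        for _ in range(self.chunk_len):           # auto-regressive decoding
            h = self.decoder(code, h)
            code = self.to_latent(h)
            out.append(code)
        return torch.stack(out, dim=1)            # (B, chunk_len, latent_dim)

def synthesize(pred_a, pred_b, pose_decoder, seed_a, seed_b, n_chunks=20):
    """Alternate the two cross-person predictors for long-term rollout, then
    decode latent codes to 3D poses with the pretrained pose embedding."""
    chunk_a, chunk_b = seed_a, seed_b
    poses_a, poses_b = [], []
    for _ in range(n_chunks):
        chunk_a = pred_a(chunk_b, chunk_a[:, -1])  # person A conditioned on B
        chunk_b = pred_b(chunk_a, chunk_b[:, -1])  # person B conditioned on A
        poses_a.append(pose_decoder(chunk_a))      # map codes to plausible poses
        poses_b.append(pose_decoder(chunk_b))
    return torch.cat(poses_a, dim=1), torch.cat(poses_b, dim=1)
```

Regressing latent codes of a separately trained pose embedding, rather than raw joint coordinates, is what the abstract credits for keeping long rollouts structurally plausible; the alternation between the two predictors supplies the cross-person conditioning.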

Item Type: Conference Paper
Publication: Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Additional Information: cited By 0; Conference of 2020 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2020; Conference Date: 1 March 2020 through 5 March 2020; Conference Code: 159803
Keywords: Automotive industry; Computer vision; Human robot interaction; Recurrent neural networks, Diverse applications; Embedding network; Motion generation; Quantitative evaluation; Recurrent networks; Sequence modeling; Short term motion predictions; State-of-the-art approach, Motion estimation
Department/Centre: Division of Interdisciplinary Sciences > Computational and Data Sciences
Date Deposited: 30 Sep 2020 07:41
Last Modified: 30 Sep 2020 07:41
URI: http://eprints.iisc.ac.in/id/eprint/65623
