Increasing pose comprehension through augmented reality reenactment

Abstract

Standard video does not capture the 3D aspect of human motion, which is important for comprehending motion that may be ambiguous. In this paper, we apply augmented reality (AR) techniques to give viewers insight into 3D motion by allowing them to manipulate the viewpoint of a motion sequence of a human actor using a handheld mobile device. The motion sequence is captured using a single RGB-D sensor, which is easier for a general user to operate but presents the unique challenge of synthesizing novel views from images captured at a single viewpoint. To address this challenge, our proposed system reconstructs a 3D model of the actor and then uses a combination of the actor's pose similarity and viewpoint similarity to find appropriate images with which to texture it. The system then renders the 3D model on the mobile device, using visual SLAM to build a map of the environment and to estimate the device's camera pose relative to the original capture environment. We call this novel view of a moving human actor a reenactment, and we evaluate its usefulness and quality with an experiment and a survey.
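Note: the abstract names pose similarity and viewpoint similarity as the criteria for selecting texture images, but does not give the actual formulation. The Python sketch below is only an illustration of that idea under stated assumptions: each captured frame is scored by a weighted blend of the two similarities, and the best-scoring frame is used for texturing. The function name, the joint-centering step, the cosine viewpoint measure, and the weight alpha are all assumptions for illustration, not the authors' method.

    import numpy as np

    def select_texture_frame(query_pose, query_view_dir, frames, alpha=0.5):
        """Pick the captured frame whose actor pose and camera viewpoint
        best match the requested novel view (illustrative scoring only).

        query_pose:     (J, 3) array of 3D joint positions for the novel view
        query_view_dir: (3,) unit vector from the virtual camera to the actor
        frames:         list of dicts with 'pose' (J, 3) and 'view_dir' (3,)
        alpha:          assumed blend weight between the two similarities
        """
        best_idx, best_score = -1, -np.inf
        for i, f in enumerate(frames):
            # Pose similarity: negative mean joint distance after centering
            # both skeletons, so global position does not dominate the score.
            d = (f["pose"] - f["pose"].mean(0)) - (query_pose - query_pose.mean(0))
            pose_sim = -np.linalg.norm(d, axis=1).mean()
            # Viewpoint similarity: cosine between the captured camera's
            # viewing direction and the requested virtual viewing direction.
            view_sim = float(np.dot(f["view_dir"], query_view_dir))
            score = alpha * pose_sim + (1.0 - alpha) * view_sim
            if score > best_score:
                best_idx, best_score = i, score
        return best_idx

In practice, a system like the one described would likely balance these terms more carefully (e.g., normalizing the pose distance), but the sketch captures the core trade-off: a frame showing a similar pose from a similar viewpoint makes the best texture source.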

Publication
Multimedia Tools and Applications
Yuta Nakashima
Associate Professor

Yuta Nakashima is an associate professor with the Institute for Datability Science, Osaka University. His research interests include computer vision, pattern recognition, natural language processing, and their applications.
