


Multi-Touch Interface for Character Motion Control



Example of the multi-touch interface for character motion control.

Demo video (mp4, 5,600 KB)
We propose a multi-touch interface for character motion control that lets a user freely control a character's pose and make the character perform various actions simply by dragging the character's body parts on a multi-touch input device.

We use style-based inverse kinematics to synthesize a natural-looking pose that satisfies the given constraints, based on a model learned from sample postures. However, style-based inverse kinematics is suited to posture editing rather than motion control: it cannot handle various types of actions, and it cannot generate continuous, physically valid motion. To overcome these limitations, we prepare a separate learned model for each action and choose the appropriate model according to the user's input. Specifically, we use a single learned model for posing and a different learned model for each type of action, together with different motion generation methods for posing control and action control. Furthermore, we introduce tracking control as a post-process after the posing and action controls to generate continuous and physically plausible motion.

We implemented the proposed method using the Windows 7 Touch API on a multi-touch-enabled personal computer and demonstrated the effectiveness of our interface.
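To make the posing step concrete: style-based inverse kinematics searches a learned low-dimensional pose space for a pose that reaches the dragged constraint while staying close to the example postures. The sketch below is a minimal illustration of that idea, not the system described above: it substitutes PCA for the probabilistic model (a GPLVM in the original style-based IK work), uses a toy three-joint planar chain in place of a full character skeleton, and the sample data, latent dimension, and optimization parameters are all placeholder assumptions.

```python
import numpy as np

# Toy stand-in for the character: a 3-joint planar chain whose pose is a
# vector of 3 joint angles; the "dragged body part" is the chain tip.
LINK_LENGTHS = np.array([1.0, 0.8, 0.6])

def forward_kinematics(pose):
    """Tip position of the planar chain for a vector of relative joint angles."""
    angles = np.cumsum(pose)
    return np.array([np.sum(LINK_LENGTHS * np.cos(angles)),
                     np.sum(LINK_LENGTHS * np.sin(angles))])

# "Learned model" of sample postures: PCA over a synthetic pose database
# (placeholder data; the real system learns a probabilistic model per action).
rng = np.random.default_rng(0)
samples = rng.normal([0.4, 0.6, 0.3], 0.2, size=(200, 3))
mean = samples.mean(axis=0)
_, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
basis = vt[:2]                       # 2-D latent pose space

def decode(z):
    """Map latent coordinates back to a full pose vector."""
    return mean + z @ basis

def objective(z, target, prior_weight):
    """Constraint error at the dragged point plus a pose-likelihood prior."""
    err = forward_kinematics(decode(z)) - target
    return err @ err + prior_weight * (z @ z)

def style_based_ik(target, prior_weight=0.1, steps=200, lr=0.05):
    """Gradient descent in latent space: reach the touch point while
    staying close to the space of example postures."""
    z = np.zeros(2)
    for _ in range(steps):
        grad = np.zeros(2)
        for i in range(2):           # numerical gradient of the objective
            dz = np.zeros(2); dz[i] = 1e-4
            grad[i] = (objective(z + dz, target, prior_weight)
                       - objective(z - dz, target, prior_weight)) / 2e-4
        z -= lr * grad
    return decode(z)

# Dragging the tip to a touch point yields a pose that both approaches the
# point and remains a plausible sample under the learned pose model.
pose = style_based_ik(target=np.array([1.5, 1.0]))
print(pose, forward_kinematics(pose))
```

Because the search runs in the latent space rather than over raw joint angles, every candidate pose stays near the training examples, which is what makes the resulting poses look natural even for underconstrained drags.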

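The tracking-control post-process drives a simulated character toward the kinematically generated pose so that the final motion is continuous and physically plausible. A minimal sketch of one common realization, per-joint PD (proportional-derivative) tracking with unit-inertia joints, follows; the gains, time step, and target pose are illustrative assumptions, and the actual controller and dynamics model in the system above may differ.

```python
import numpy as np

def pd_track(q, qdot, q_target, kp=120.0, kd=2.0 * np.sqrt(120.0)):
    """Per-joint PD torque pulling the simulated pose toward the target pose
    (gains chosen for critical damping under unit inertia)."""
    return kp * (q_target - q) - kd * qdot

# Minimal simulation loop with unit-inertia joints; a full character would
# use rigid-body dynamics inside a physics engine instead.
dt = 1.0 / 60.0
q = np.zeros(3)                        # simulated joint angles
qdot = np.zeros(3)                     # joint velocities
q_target = np.array([0.4, 0.6, 0.3])   # pose from the posing/action stage

for _ in range(120):                   # two seconds of tracking
    tau = pd_track(q, qdot, q_target)
    qdot += tau * dt                   # unit inertia: acceleration == torque
    q += qdot * dt
print(q)                               # converges smoothly toward q_target
```

Feeding each newly synthesized pose in as `q_target` lets the simulated character follow the user's drags while the dynamics smooth over discontinuities between the posing and action stages.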


