


Motion-Capture-Based Avatar Control Framework


Figure 1. Photos of a prototype of the motion-capture-based control framework.
This paper presents a motion-capture-based control framework for third-person view virtual reality applications (Figure 1). Using motion capture devices, a user can directly control the full-body motion of an avatar in a virtual environment. In addition, using a third-person view, in which the user watches his own avatar on the screen, the user can visually sense his own movements and his interactions with other characters and objects.
However, two fundamental problems remain. First, it is difficult to realize physical interactions from the environment to the avatar. Second, it is difficult for the user to walk around the virtual environment, because the motion capture area is very small compared to the virtual environment.
This paper proposes a novel framework to solve these problems. We propose a tracking control framework in which the avatar is controlled so as to track both the input motion from a motion capture device and system-generated motions (Figure 2). When an impact is applied to the avatar, the system finds an appropriate reactive motion and controls the weights of the two tracking controllers in order to realize realistic yet controllable reactions. In addition, when the user walks in place, the system generates a walking motion for the controller to track (Figure 3). The walking speed and turn angle are also controlled through the user's walking gestures.
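The page does not give implementation details, so the following Python sketch only illustrates the general idea of blending two tracking controllers. All names, gains, and the blending law (per-joint PD tracking with a scalar weight that ramps up on impact and fades back) are assumptions for illustration, not taken from the paper.

    import numpy as np

    class TrackingController:
        """PD controller producing joint torques that track a reference pose."""
        def __init__(self, kp=300.0, kd=30.0):
            self.kp = kp
            self.kd = kd

        def torques(self, q, dq, q_ref, dq_ref):
            # Standard PD tracking law: tau = kp*(q_ref - q) + kd*(dq_ref - dq)
            return self.kp * (q_ref - q) + self.kd * (dq_ref - dq)

    def blended_torques(ctrl_capture, ctrl_reaction, q, dq,
                        ref_capture, ref_reaction, w):
        """Blend two tracking controllers with weight w in [0, 1].
        w = 0 -> track the captured (user) motion only;
        w = 1 -> track the system-generated reactive motion only."""
        tau_cap = ctrl_capture.torques(q, dq, *ref_capture)
        tau_rea = ctrl_reaction.torques(q, dq, *ref_reaction)
        return (1.0 - w) * tau_cap + w * tau_rea

    def reaction_weight(t_since_impact, attack=0.1, release=1.0):
        """Hypothetical weight schedule: ramp up quickly after an impact,
        then decay to zero so control returns to the captured motion."""
        if t_since_impact < attack:
            return t_since_impact / attack
        return max(0.0, 1.0 - (t_since_impact - attack) / release)

Under this kind of scheme, the weight returning to zero is what would make the transition back to the user-controlled motion seamless rather than an abrupt switch.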
Using our framework, the system generates seamless transitions between user-controlled motions and system-generated motions. In this paper, we also introduce a prototype application, including a simplified optical motion capture system.
Figure 2. Tracking control approach. Under normal conditions, the avatar tracks the captured motion. When multiple system-generated motions are required, the avatar can track each of them seamlessly.

Figure 3. Walking interface. The user can move around freely within the mocap area. If the user performs a walking gesture without horizontal movement, the system generates a walking motion for the avatar to track.
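As a rough illustration of the walking interface described above, the sketch below detects a walking-in-place gesture and maps it to walking parameters. The thresholds, the feature choices (foot height, root velocity, step frequency, body yaw rate), and the mappings are all invented for illustration; the paper's actual gesture recognition may differ.

    import numpy as np

    STEP_HEIGHT = 0.05   # metres: foot lift that counts as a step (assumed)
    ROOT_DRIFT = 0.10    # metres/sec: max horizontal root speed for "in place"

    def detect_walk_gesture(foot_heights, root_velocity_xy):
        """True if the user is stepping in place: a foot is lifted while the
        body stays put. This would trigger a system-generated walking motion."""
        lifting = max(foot_heights) > STEP_HEIGHT
        stationary = np.linalg.norm(root_velocity_xy) < ROOT_DRIFT
        return lifting and stationary

    def walking_parameters(step_frequency_hz, body_yaw_rate):
        """Map gesture features to the generated walk (assumed mapping):
        faster stepping -> faster walking; turning the body -> turning walk."""
        speed = np.clip(0.6 * step_frequency_hz, 0.0, 2.0)  # m/s
        turn_rate = body_yaw_rate                           # rad/s
        return speed, turn_rate

The key property this is meant to capture is that ordinary movement inside the mocap area maps one-to-one onto the avatar, while the in-place gesture alone switches the avatar to a generated walk whose speed and heading the user steers with his body.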


