Towards a model-based 3D marker-less human motion capture.
Quah, Chee Kwang.
Date of Issue: 2008
School of Computer Engineering
This research proposes a novel framework for capturing 3D human motion from video images using a model-based approach. Existing commercial motion capture methods that attach markers to the performer hinder natural movement; our approach requires no markers. Our contributions consist of two main phases: (1) constructing a 3D human puppet model that closely resembles the subject, and (2) tracking the subject's motion using this 3D model. Both the human model and the recovered movements must be accurate so that we can obtain quantitative data for applications such as biomechanical analysis. A substantial part of the work was devoted to building the 3D human model, as the accuracy and reliability of the motion tracking depend heavily on it. The reconstruction of the 3D human model is facilitated by a generic geometrical human model consisting of an external skin and an internal skeleton; the output is an accurate external skin of the subject together with its estimated internal skeleton. This approach uses several cameras and does not require prior camera calibration. First, camera calibration and 3D reconstruction take place simultaneously to produce an intermediate 3D model once the characteristic points between the generic model and the real subject are registered. Then, we automatically match the silhouette curves of the intermediate model and the real subject to yield a better 3D human model. Our setup requires no prior calibration and only moderate human interaction; its operation is simple, inexpensive and efficient compared to existing 3D laser body scanners and computer imaging methods. Our human motion tracking algorithm starts by automatically learning the colour/texture of the puppet model from its initial pre-positioned posture. Then, our computation synthesizes the 3D puppet movements so as to minimize the image differences between the synthesized movements and the real athlete's motion.
This is realized by using a simulated annealing algorithm to search iteratively for the optimal posture, represented by the joint kinematics with their various degrees of freedom. The joint kinematics then drive the skin of the puppet model to produce a synthesized image, which is compared with the real image. Image rendering for the motion synthesis is the most computationally intensive module, and it is sped up using a graphics processing unit (GPU). Our results demonstrate that we are able to track the motion of the arms, which are highly articulated and occupy relatively small regions in the images. The advantages of our method are: (1) it does not require image segmentation, (2) it copes with occlusion, and (3) it operates in highly cluttered environments.
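The simulated-annealing posture search described above can be sketched generically as follows. This is a minimal illustration, not the thesis's implementation: `cost(pose)` stands in for the image difference between the rendered puppet and the observed frame, and `neighbour(pose)` stands in for a random perturbation of the joint parameters; both names and all schedule parameters are hypothetical.

```python
import math
import random

def simulated_annealing(initial_pose, cost, neighbour,
                        t_start=1.0, t_end=1e-3, alpha=0.95, iters_per_t=50):
    """Generic simulated-annealing search over a pose parameter vector.

    `cost(pose)` is assumed to measure the image difference between the
    synthesized and observed views; `neighbour(pose)` perturbs one or
    more degrees of freedom. Both are placeholders for illustration.
    """
    pose = list(initial_pose)
    current_cost = cost(pose)
    best_pose, best_cost = list(pose), current_cost
    t = t_start
    while t > t_end:
        for _ in range(iters_per_t):
            candidate = neighbour(pose)
            candidate_cost = cost(candidate)
            delta = candidate_cost - current_cost
            # Always accept improvements; accept worse moves with
            # Boltzmann probability exp(-delta / t) to escape local minima.
            if delta < 0 or random.random() < math.exp(-delta / t):
                pose, current_cost = candidate, candidate_cost
                if current_cost < best_cost:
                    best_pose, best_cost = list(pose), current_cost
        t *= alpha  # geometric cooling schedule
    return best_pose, best_cost
```

In the tracking context, the acceptance test drives the rendered puppet toward the observed image frame by frame; the cooling schedule trades exploration of the high-dimensional joint space against convergence speed.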
DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision