Automatic Rigging and Animation of 3D Characters

Ilya Baran & Jovan Popović

Figure 1: Our method takes a static character mesh and an input skeleton and rigs the character so that it can be animated by controlling the skeleton.

Paper: PDF (SIGGRAPH 2007)
Video: AVI (DivX 6, 76MB, with sound)
Supplement: PDF
Code: Our C++ source and a Windows binary are available for download.


Animating an articulated 3D character currently requires manual rigging to specify its internal skeletal structure and to define how the input motion deforms its surface. This is a tedious process and we describe a method to automate it. Our motivation is to be able to build a system for children and other non-professional animators that, when presented with, for example, an unfamiliar quadruped character, can immediately execute commands such as "make it walk like a dog." To achieve this, we need to be able to embed an existing skeleton into the volume inside a character mesh, and then to attach the mesh vertices to the bones of the embedded skeleton. To our knowledge, this has not been previously attempted.


Our method consists of two main steps: skeleton embedding and skin attachment. Skeleton embedding computes the joint positions of the skeleton inside the character by minimizing a penalty function. To make the optimization problem computationally feasible, we first embed the skeleton into a discretization of the character's interior, and then refine this embedding using continuous optimization. The skin attachment is computed by assigning bone weights based on the proximity of the embedded bones, smoothed by a diffusion equilibrium equation over the character's surface.
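To give a feel for the skin-attachment step, the toy sketch below (not our actual solver, which works on the full surface mesh with bone-visibility terms) computes diffusion-equilibrium weights on a 1D chain of vertices. Each vertex starts with a hard nearest-bone assignment h, and the smoothed weight w solves the linear system (c·L + D)w = D·h, where L is the graph Laplacian, by Gauss-Seidel iteration; the parameters c and d are illustrative.

```cpp
#include <cassert>
#include <vector>

// Toy sketch of diffusion-equilibrium skinning weights (illustrative,
// not the paper's actual solver). Vertices form a chain; h[v] is 1 if
// this bone is vertex v's nearest bone, else 0. The smoothed weight w
// solves (c*L + D) w = D h, where L is the chain's graph Laplacian and
// D = d*I ties each vertex to its hard assignment. Solved by
// Gauss-Seidel iteration; the system is diagonally dominant, so the
// iteration converges.
std::vector<double> diffuseWeights(const std::vector<double>& h,
                                   double c, double d, int iters) {
    int n = static_cast<int>(h.size());
    std::vector<double> w(h);  // start from the hard assignment
    for (int it = 0; it < iters; ++it) {
        for (int v = 0; v < n; ++v) {
            double deg = 0.0, nbrSum = 0.0;
            if (v > 0)     { deg += 1.0; nbrSum += w[v - 1]; }
            if (v + 1 < n) { deg += 1.0; nbrSum += w[v + 1]; }
            // Row v of (c*L + D) w = D h:
            //   (c*deg + d) * w[v] - c*nbrSum = d * h[v]
            w[v] = (c * nbrSum + d * h[v]) / (c * deg + d);
        }
    }
    return w;
}
```

At equilibrium the hard 1/0 boundary becomes a smooth falloff, and because the system is linear, the weights of complementary bones still sum to one at every vertex.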

Designing a good penalty function is quite difficult because it must penalize many different embedding deficiencies: short bones, improperly oriented bones, asymmetry, etc. Our approach is to manually design a basis set of penalty functions, each penalizing a particular type of embedding deficiency, and then find the linear combination of these basis penalty functions that performs best on a set of manually labeled example embeddings. Although this process is labor-intensive, it only needs to be done once, not per character.
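Once the weights of the linear combination are fixed, scoring and selecting an embedding is straightforward. The sketch below (with hypothetical names; the basis functions themselves are the hard part and are not shown) combines per-deficiency basis penalties with learned weights gamma and picks the candidate embedding with the lowest combined penalty.

```cpp
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Illustrative sketch with hypothetical names. Each candidate embedding
// is summarized by a vector of basis penalties (e.g. short-bone,
// bone-orientation, and asymmetry penalties); gamma holds the weights
// learned from manually labeled example embeddings.
double combinedPenalty(const std::vector<double>& gamma,
                       const std::vector<double>& basis) {
    // Linear combination: sum_i gamma[i] * basis[i]
    return std::inner_product(gamma.begin(), gamma.end(),
                              basis.begin(), 0.0);
}

// Return the index of the candidate with the lowest combined penalty.
int bestEmbedding(const std::vector<double>& gamma,
                  const std::vector<std::vector<double>>& candidates) {
    int best = 0;
    for (int i = 1; i < static_cast<int>(candidates.size()); ++i) {
        if (combinedPenalty(gamma, candidates[i]) <
            combinedPenalty(gamma, candidates[best])) {
            best = i;
        }
    }
    return best;
}
```

In practice the discrete optimization does not enumerate all candidates explicitly; it searches over embeddings of the skeleton into the discretized interior, pruning with the same combined penalty.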


Our prototype system, called Pinocchio, can typically rig input characters in under a minute. It then applies motion capture data to the character, using online motion retargetting [1] to eliminate footskate. We have successfully used Pinocchio to rig many bipeds, quadrupeds, and a centaur. When Pinocchio fails, it can accept a joint placement hint to constrain the search for an embedding.

To keep our evaluation objective, we tested Pinocchio on 16 biped characters (built by an artist using Cosmic Blobs) that we did not see or use during development. Many of these characters were challenging due to their cartoony proportions and features that may be mistaken for limbs. Pinocchio correctly rigged 13 of these characters automatically, and the remaining 3 were correctly rigged with a single joint placement hint.


This research is supported by Solidworks Corporation. Ilya Baran is supported by an NSF Graduate Research Fellowship.


[1] Kwang-Jin Choi and Hyeong-Seok Ko. Online Motion Retargetting. In Journal of Visualization and Computer Animation, Vol. 11, No. 5, pp. 223--235, December 2000.
