Automatic Image Alignment to Plastic Avatar Head
Motivation
In the previous projected virtual avatar system, the face image captured by a head-worn camera is projected onto a plastic avatar head, and the alignment between the projected image and the plastic avatar is adjusted manually. This takes 3-5 minutes each time, even for users who have operated the system before. As a preliminary step toward automatically mapping textures captured by multiple cameras (including an HD CCD camera and a Kinect depth camera) onto the plastic surface, we align the image captured by the head-worn camera to the plastic projection surface.
Pipeline
Approach |
# Off-line Initialization

3D CAD Model

Figure 2. CAD model of plastic avatar head.

Projector-Surface Calibration

We simplify the CAD model (Fig. 2) to a face-mesh representation, as shown in Fig. 3. To find the 3D-2D correspondences between points on the plastic avatar surface and pixels on the projector's projection plane, we annotate several points on the plastic surface, as illustrated in Fig. 4. Since the 3-by-4 projection matrix has 11 degrees of freedom, at least 6 correspondences are needed.
Figure 3. Mapping from the plastic avatar surface to the projection plane. Figure 4. Manual annotation.

2D Mesh Generation

Then, based on the output of the 2D feature detector, we map the corresponding points on the plastic surface to generate the 2D mesh for warping, as shown in Fig. 5.
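To illustrate how surface points become 2D mesh vertices, the sketch below pushes 3D vertices through a 3-by-4 projection matrix to projector-plane pixels. The matrix and vertices here are hypothetical placeholders, not the calibrated values used in the system:

```python
import numpy as np

# Hypothetical calibrated projector matrix and 3D vertices on the
# plastic surface (placeholder values for illustration only).
P = np.array([[800., 0., 320., 10.],
              [0., 800., 240., 20.],
              [0., 0., 1., 2.]])
vertices_3d = np.array([[0., 0., 1.],
                        [0.1, 0., 1.2],
                        [0., 0.1, 1.1]])

# Homogeneous projection, then perspective division.
homo = np.hstack([vertices_3d, np.ones((len(vertices_3d), 1))])
proj = homo @ P.T
mesh_2d = proj[:, :2] / proj[:, 2:3]  # 2D warping-mesh vertex positions
```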
Figure 5. 2D warping mesh.

# Real-time Operation

Feature Detection

We employ an open-source face tracker [1] to locate facial feature points, as shown in Fig. 6 (left). To accelerate detection, we simplify the model to a 34-point model (9 points on the contour, 6 on the eyebrows, 8 on the eyes, 7 on the nose, and 4 on the mouth), as shown in Fig. 6 (middle). To obtain a piecewise warping, we divide the face into triangular patches through Delaunay triangulation, as demonstrated in Fig. 6 (right).
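The triangulation step can be reproduced with SciPy's `Delaunay` (assumed available here; the points below are arbitrary illustrative coordinates, not the 34-point face model):

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical 2D feature points; the system triangulates its
# 34 tracked facial points the same way.
points = np.array([[0., 0.], [2., 0.], [2., 1.8],
                   [0., 2.], [1., 0.3], [1., 1.]])
tri = Delaunay(points)

# tri.simplices is an (n_triangles, 3) array of vertex indices;
# every pixel inside the hull falls into exactly one triangle,
# which is what makes piecewise warping well defined.
```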
Figure 6. (From left to right) Original detected facial feature points (66 points), simplified facial feature points (34 points), Delaunay triangulation (53 triangles).

Image Warping

The triangles are warped one by one to their corresponding triangles in the 2D warping mesh of the plastic avatar model. A warping demonstration is shown in the video below.
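Each per-triangle warp is an affine map fully determined by its three vertex correspondences. A minimal numpy sketch of solving for that map (the triangles here are hypothetical):

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Solve for the 2x3 affine matrix M mapping the source triangle
    onto the destination triangle: dst_i = M @ [x_i, y_i, 1]^T."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((3, 1))])  # 3x3: [x, y, 1] per vertex
    # One 3-unknown linear system per output coordinate.
    return np.linalg.solve(A, dst).T       # 2x3 affine matrix

def warp_point(M, p):
    """Apply the affine map to a single 2D point."""
    return M @ np.array([p[0], p[1], 1.0])
```

In the real-time system each detected triangle's pixels are resampled through its own affine map, so the face image deforms piecewise onto the avatar's warping mesh.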
Projection

Static Image
Video from Web Camera
Video from Head-worn Camera
Reference
[1] https://github.com/kylemcdonald/FaceTracker. This approach is detailed in the paper: J. Saragih, S. Lucey, and J. Cohn, "Face Alignment through Subspace Constrained Mean-Shifts", ICCV 2009.