IEEE Computer Graphics and Applications


Keywords

Rendering (Computer Graphics), Three-Dimensional Displays, Solid Modeling, Mathematical Models, Two-Dimensional Displays, Computational Modeling, Linear Regression, Applications, Facial Reshaping, Portrait Retouching, Image Retargeting, Computer Graphics, Picture/Image Generation

Abstract

We present an easy-to-use parametric image retouching method for thinning or fattening a face in a single portrait image while maintaining a close similarity to the source image. First, our method reconstructs a 3D face from the input face image using a morphable model. Second, according to a linear regression equation derived from depth statistics of the facial soft tissue and the user-set weight-change degree, we calculate the new positions of the feature points. The Laplacian deformation method is then used for the non-feature points of the 3D face model. Our model-based reshaping process achieves globally consistent editing effects without noticeable artifacts. We seamlessly blend the reshaped face region with the background using an image retargeting method based on mesh parametrization. The effectiveness of our algorithm is demonstrated by experiments and a user study.

Faces are essential to making a first impression, consciously or unconsciously. Facial appearance is also vital for communication, and beautiful faces are pleasurable to look upon.1 Since facial shape is an important determinant of beauty, it can be desirable to modify a face to be fatter or thinner in order to make it more attractive. To accomplish this, a facial weight-change simulator is needed to model facial growth and shape modification. Potential applications of such a simulator are not limited to the beauty and medical industries; it also plays an important role in digital entertainment and in film and television production.

Compared to other images, the processing of facial images is particularly delicate, because people are remarkably good at detecting even the smallest differences in the appearance of a face. Photo retouching can produce convincingly adjusted faces while maintaining a natural appearance (see Figure 1). However, this time-consuming work must generally be performed by a skilled retouching artist.


Figure 1. Our parametric facial reshaping method automatically simulates the weight-change of a 2D portrait image and generates a fatter or thinner face as intended. (Middle) is the original input image of Albert Einstein; (left) is the result of weight-change degree -2, which indicates losing weight by 2 degrees; (right) is the result of weight-change degree +2, which implies gaining weight by 2 degrees.

Since retouching is experience-based, the result relies heavily on the user's preference and effort. The process is also not parametric, which makes it especially difficult to control the degree of weight change.

The work most closely related to ours is that of Danino and colleagues, who proposed a parametric 2D facial weight-change simulator based on empirical 2D knowledge.2


Figure 2. Comparison results. (a) is the input image. (b) and (d) are the results of Danino and colleagues’ method,2 while (c) and (e) are our results.

This method can generate realistic results when the input face is frontal with a neutral facial expression. However, it does not use the semantic information of the underlying face model, and the background is simply warped without considering the contents of the image. As a result, it may introduce obvious artifacts when the weight change is large (see Figure 2). Another related work was introduced by Zhou and colleagues,3 who proposed an image retouching technique for realistically reshaping human bodies in a single image. This model-based approach can create the desired reshaping effects by changing a reshaping degree that characterizes a small set of semantic attributes. However, it cannot handle facial reshaping directly. Moreover, it relies on a large 3D whole-body morphable model, which may limit its application.

Inspired by the work of Zhou and colleagues,3 we present an image-based facial reshaping method using a linear regression equation. The weight-change deformation of a face is parameterized by adjusting BMI (body mass index) values.4 We first reconstruct a 3D face model from the input 2D image using a morphable model,5 and label the feature points on the 3D face model. Then, we calculate the deformed positions of the feature points according to the weight-change degree, which is related to BMI. After that, we generate the deformed 3D face model using the Laplacian deformation method and project it into the 2D image as the deformed face region. Finally, with the help of the content-aware image retargeting approach of Guo and colleagues,6 we blend the deformed face region and the background to obtain the reshaped 2D image.

The contributions of this work are: (i) a novel geometric weight-change simulator that is automatic, fast, and robust; (ii) a parametric deformation of the face driven by varying BMI and based on a reliable facial tissue depth database, which yields reshaped faces consistent with everyday experience and makes the reshaping process repeatable.

Related Work

Weight-change Simulator

Few approaches to weight-change simulation have been proposed during the past decades. The idea first appeared in the innovative work of Blanz and Vetter.5 Their morphable 3D face model was built on hundreds of 3D face scans. Certain features, including weight, were manually labeled and mapped to the parameter space, so weight-change simulation could be achieved by adjusting the weight parameter. However, the simulated result is strongly constrained by the database: if the reshaping parameter goes beyond its scope, the reshaped face is likely to be unsatisfactory. Moreover, the hair region of the image is particularly problematic. Danino and colleagues presented a facial weight-change simulator for 2D images.2 The face is divided into regions characterized by different weight-change patterns. The overall process is fast and robust, and the results are clear, sharp, and realistic. Nevertheless, the transformation between the original and modified face parts is empirically defined without considering the semantic information of the underlying face model. In addition, the input images are limited to frontal faces with neutral expressions, and the warping method involved is not content-aware.

3D Face Reconstruction

Many facial reconstruction methods based on a single image exist. In the seminal work of Blanz and Vetter,5 a morphable face model was matched to a given 2D image by optimizing its parameters for similarity between the 2D rendering of the morphable model and the original 2D image. Similar to the morphable head model, Chai and colleagues computed around 100 principal components for a collected head-model database and fitted a 3D head model to the input image.7 On top of that, a plausible high-resolution strand-based 3D hair model was developed for portrait manipulations such as portrait pop-ups. Compared to previous 3D facial databases, FaceWarehouse by Cao and colleagues provides a much richer collection of expressions, which can depict most human facial actions.8 Different from these approaches, we are interested in facial reshaping based on a face image.

In 3D craniofacial reconstruction, De Greef and colleagues conducted a large-scale study on how facial soft tissue thickness changes with sex, age, and weight.4 They studied 967 Caucasian subjects of both sexes and of varying ages and BMIs, and measured facial soft tissue thickness at 52 facial feature points. For each feature point, and for both sexes separately, a multiple linear regression of thickness versus age and BMI was calculated. Our weight-change simulation is inspired by their regression equations.

Image Resizing and Retargeting

Many content-aware image retargeting techniques have been proposed recently. Following the insightful survey by Shamir and Sorkine,9 these approaches fall into two categories: discrete and continuous. Discrete methods adopt seam carving and cropping to resize the input image. Continuous approaches optimize a mapping under constraints, leading to content-aware resizing. Similar to the body-aware image warping of Zhou and colleagues,3 we embed the input image into a 2D triangular mesh, which drives the image warping to guarantee coherent resizing effects across the background. Guo and colleagues proposed an image retargeting approach employing mesh parametrization,6 which emphasizes the important content while retaining the surrounding context with minimal visual distortion. Salient objects and image structure are preserved by optimizing a constrained energy.

Algorithm

We divide a portrait image into two regions: the face region and the remaining region. For simplicity, we refer to the remaining region as the background region in this paper. Reshaping a portrait image requires several steps, outlined in Figure 3. A 3D face model is first reconstructed using the method developed by Blanz and Vetter.5 Based on forensic research results and the weight-change degree assigned by the user, the deformed feature point positions are computed (see the 3D Face Reshaping section). Laplacian deformation is conducted afterwards (see the 3D Face Deformation section). Since changing only the face region is likely to introduce noticeable distortion to the background, a retargeting method is adopted (see the Image Retargeting section).
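
The overall flow can be summarized in a short sketch. The following Python-style outline is purely illustrative: every function it calls is a hypothetical placeholder for the corresponding step described above, not an API of our system.

```python
# Illustrative outline only; each called function is a hypothetical
# placeholder for the step named in the comment.

def reshape_portrait(image, age, weight_change_degree):
    # 1. Fit a morphable model to the portrait (Blanz & Vetter).
    face3d = reconstruct_3d_face(image)
    # 2. Move the 54 feature points along their normals according to the
    #    soft-tissue regression (3D Face Reshaping section).
    handles = reshape_feature_points(face3d, age, weight_change_degree)
    # 3. Propagate the handle displacements to all remaining vertices via
    #    Laplacian deformation (3D Face Deformation section).
    deformed3d = laplacian_deform(face3d, handles)
    # 4. Project the deformed face into the image and warp the background
    #    with content-aware retargeting (Image Retargeting section).
    return retarget(image, face3d, deformed3d)
```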


Figure 3. Algorithm overview. Derived from input image (a), a 3D face model is calculated (b). Based on forensic research results and user-specified weight-change degree +3, Laplacian deformation is performed (c). Afterwards, a feature-relevant control mesh (d) is built on the original image. The target mesh (e) is produced by solving a mesh parametrization problem which preserves deformed face features with minimal visual distortion to the background. Standard texture mapping is finally used to render the target image, as shown in (f).

3D Face Reshaping

Our face reshaping algorithm is inspired by the forensic research results of De Greef and colleagues.4 Their study focused on how sex, BMI, and age influence the depth of facial soft tissue. The population in their research consisted of 457 males and 510 females of varying ages and BMIs. They selected 52 feature points, with 10 points located on the midline and 21 points located bilaterally. These feature points were selected for the ability to reliably locate them on the face. A multiple linear regression of soft tissue thickness versus BMI and age was calculated for males and females separately, as tabulated in Table 1.

Table 1. Linear regression equations: regression coefficients, root mean square (RMS) errors, and significance levels. * p < 0.05; ** p < 0.01.

Our work differs from that of De Greef and colleagues4 in that we add two extra control points (see Figure 4 and points 53 and 54 in Table 1) and set limits on the weight-change degrees to make the equations more suitable for our framework. Without feature points 53 and 54, the deformed 3D faces are likely to exhibit artifacts around the pterion. The linear regression equation can be expressed as follows:

$$Y = b_0 + b_1 \times \text{age} + b_2 \times \text{BMI}, \quad b_0 = \left(b_0^1, b_0^2, \ldots, b_0^{54}\right), \quad b_1 = \left(b_1^1, b_1^2, \ldots, b_1^{54}\right), \quad b_2 = \left(b_2^1, b_2^2, \ldots, b_2^{54}\right), \tag{1}$$

where the vector $Y$ contains the tissue depths at the 54 feature points, BMI is the body mass index, and $b_0$, $b_1$, and $b_2$ are vectors of regression coefficients.

Moreover, there should be a limit on losing weight: even if someone is emaciated, the depths of the soft tissue remain above 0. Therefore, we define the limit of the weight-change degree for the $i$th point, $T_i$, as:

$$T_i = -\frac{1}{b_2^i}\left(b_0^i + b_1^i \times \text{age} + b_2^i \times \text{BMI}\right), \quad i = 1, 2, \ldots, 54. \tag{2}$$

For a particular input image, the age of the person remains unchanged. Therefore, with varying BMI, the updated feature point positions are influenced only by $b_2$. We assume that the variation of facial tissue depth is along the feature point normal direction:

$$S'_i = S_i + \frac{d \times b_2^i}{100}\, N_i, \quad i = 1, 2, \ldots, 54, \tag{3}$$

where $S_i$ is the $i$th feature point position before deformation, $S'_i$ is the deformed $i$th feature point position, $d$ is the weight-change degree, and $N_i$ is the corresponding normal at the $i$th point.
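
As a concrete illustration, the feature-point update of Equations (1)-(3) amounts to a few lines of array arithmetic. The minimal numpy sketch below assumes the regression coefficient vectors from Table 1 are available as arrays; clamping $d$ against the most restrictive per-point limit is our reading of how Equation (2) is applied.

```python
import numpy as np

def reshape_feature_points(S, N, b0, b1, b2, age, bmi, d):
    """Sketch of Equations (1)-(3).

    S : (54, 3) feature point positions; N : (54, 3) unit normals
    b0, b1, b2 : (54,) regression coefficient vectors from Table 1
    age, bmi : subject attributes; d : user-set weight-change degree
    """
    # Equation (1): soft-tissue depths predicted by the linear regression.
    Y = b0 + b1 * age + b2 * bmi

    # Equation (2): per-point lower limits on the weight-change degree so
    # that no tissue depth drops below zero; clamping d against the most
    # restrictive limit is one plausible reading of how the limits apply.
    T = -Y / b2
    d = max(d, T.max())

    # Equation (3): displace each feature point along its normal, scaled
    # by its BMI regression coefficient.
    return S + (d / 100.0) * b2[:, None] * N
```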


Figure 4. Illustration of feature points. The total number of feature points is 54, with 10 located on the midline and 22 located bilaterally.

3D Face Deformation

The 3D face model for the input portrait image is reconstructed using the method proposed by Blanz and Vetter.5 They collected 200 head scans using a laser scanner and exploited the statistics of the dataset to derive a morphable model and a parametric description of faces. A fitting algorithm then matches the morphable model to the input 2D face image under shape and texture constraints, yielding a reconstructed 3D face model that conforms to the 2D face image. As a preparation step for our algorithm, one of the generated models must be labeled with the feature points manually. Since the topology of the morphable model mesh remains the same, we can use the pointwise correspondence to locate the feature points on other face models automatically.
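
Because all reconstructed meshes share one topology, this label transfer is trivial: the feature points are simply fixed vertex indices, as the minimal sketch below shows (the index values are placeholders).

```python
import numpy as np

# The 54 feature points, stored once as vertex indices into the shared
# morphable-model topology (the values here are placeholders).
FEATURE_VERTEX_IDS = np.array([312, 1048, 2977])  # ... 54 indices in practice

def feature_points(vertices):
    """vertices: (V, 3) vertex array of any reconstructed face mesh."""
    return vertices[FEATURE_VERTEX_IDS]
```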

After obtaining the deformed feature point positions in the 3D Face Reshaping section, various methods are capable of calculating the displacements of the non-feature points. Noh and colleagues proposed using radial basis functions to solve this problem.10 A human face is full of geometric detail, and human perception is extremely sensitive to facial distortion. We therefore employ a Laplacian deformation method similar to the one employed by Liao and colleagues,1 which is based on the differential surface representation proposed by Sorkine and colleagues.11 By using Laplacian deformation, geometric details are preserved as well as possible.

The 54 feature points are assigned as handles, which are moved to their new positions $S'_i$ computed in the 3D Face Reshaping section. A better result is obtained if the handle constraints are satisfied in a least-squares sense. With $c_i^x$, $c_i^y$, and $c_i^z$ representing the $x$, $y$, and $z$ coordinates of the new position of the $i$th feature point, respectively, the 54 handle constraints for the $x$-coordinates are:

$$x_i = c_i^x, \quad i \in \{1, 2, \ldots, 54\}. \tag{4}$$

Thus, all deformed face point positions $\tilde{x}$ are obtained by solving the following quadratic minimization problem:

$$\tilde{x} = \arg\min_{x} \left( \left\| Lx - \delta_x \right\|^2 + \sum_{i=1}^{54} \left| x_i - c_i^x \right|^2 \right), \tag{5}$$

where the matrix $L$ is the topological Laplacian of the face mesh, $x$ is the vector of the $x$-coordinates of all vertices, and $\delta_x$ is the vector of the $x$-components of the Laplacian coordinates. The $y$ and $z$ coordinates are calculated in the same way.
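
Equation (5) is a sparse linear least-squares problem and can be solved with off-the-shelf tools. Below is a minimal SciPy sketch, assuming $L$ and $\delta_x$ have already been assembled from the mesh.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def laplacian_deform_axis(L, delta, handle_ids, handle_targets):
    """Solve Equation (5) for one coordinate axis.

    L : (V, V) sparse topological Laplacian of the face mesh
    delta : (V,) Laplacian coordinates of this axis before deformation
    handle_ids : indices of the 54 feature points
    handle_targets : their deformed coordinates c_i for this axis
    """
    V = L.shape[0]
    k = len(handle_ids)
    # One soft constraint row per handle: x_i = c_i (least squares).
    C = sp.csr_matrix((np.ones(k), (np.arange(k), handle_ids)), shape=(k, V))
    A = sp.vstack([L.tocsr(), C])
    b = np.concatenate([delta, handle_targets])
    # Stacked system: minimizing ||Ax - b||^2 reproduces Equation (5).
    return lsqr(A, b)[0]

# Run once each for the x, y, and z coordinates.
```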

The 3D face deformation results are shown in Figure 5. A negative weight-change degree indicates a decrease in BMI, that is, losing weight; conversely, a positive weight-change degree represents an increase in BMI.


Figure 5. 3D face deformation results. The image on the left is the original image. The following images are the 3D face deformation results of weight-change degree -4, -2, 0, +2, and +4, respectively.

Image Retargeting

Directly projecting the reshaped 3D face model into the 2D image would introduce visual artifacts, so a content-aware image warping method is employed. Our method is based on the work of Guo and colleagues,6 which avoids distorting the salient object and retains the surrounding background with only slight distortion. In their approach, a feature-consistent mesh is generated using a constrained Delaunay triangulation algorithm according to the feature points extracted from the 2D input image. Several constraints, including boundary, saliency, and structure constraints, are defined to avoid distorting salient objects during the retargeting optimization. After a stretch-based mesh parametrization process, the homomorphous target mesh is calculated, and the resulting image is rendered using texture mapping.

Background Region

The control mesh should be consistent with the image structure and retain a uniform point density. The boundary of the input image is discretized first, and all of the resulting points are set as control points. For the background part, the Canny operator is employed to detect further control points, and some additional points are added to keep the points well distributed. As shown in Figure 6(a), the blue points represent the control points in the background.
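
A minimal OpenCV sketch of this control-point selection follows; the sampling steps and Canny thresholds are arbitrary choices for illustration, and the density-equalization pass that adds extra points is omitted.

```python
import cv2
import numpy as np

def background_control_points(image, boundary_step=20, edge_step=15):
    """image: BGR input portrait. Returns (k, 2) control point coordinates."""
    h, w = image.shape[:2]

    # Discretize the image boundary; all boundary samples become control points.
    xs = np.arange(0, w, boundary_step)
    ys = np.arange(0, h, boundary_step)
    boundary = [(x, 0) for x in xs] + [(x, h - 1) for x in xs] + \
               [(0, y) for y in ys] + [(w - 1, y) for y in ys]

    # Detect further control points in the background with the Canny operator.
    edges = cv2.Canny(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), 50, 150)
    ey, ex = np.nonzero(edges)
    edge_pts = list(zip(ex[::edge_step], ey[::edge_step]))  # crude subsampling

    pts = np.array(boundary + edge_pts, dtype=np.float64)
    return np.unique(pts, axis=0)  # drop duplicates (e.g., at the corners)
```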


Figure 6. Control mesh comparison before and after 3D face deformation. The red dots in (a) are the control points on the contour profile as hard constraints, the green dots on the face are regarded as hard constraints, and the blue points on the background are set as soft constraints. The four points on the corners of the picture are set as fixed points. (b) is the hard constrained mesh superimposed on the source image. (c) is the comparison of deformed mesh superimposed on the source image. Since the weight-change degree is +3, the face contour expands, and the background needs to be compressed.

Face Region

The control points in the face region are selected based on the result of the Canny operator as well. Once we obtain the 3D face model and the deformed model in the 3D Face Deformation section, a pointwise correspondence is established, so the deformed face region can be obtained easily. The $i$th point of the original morphable model is projected into the image space and marked as $P_i$; $P'_i$ stands for the projected position of the $i$th point of the deformed model. The control mesh on the face region expands or shrinks with varying weight-change degrees. In Figure 6(c), the deformed constraint mesh is drawn on the source image: with a weight-change degree of +3, the constrained mesh over the face expands.

Face Contour

After 3D face deformation, the locations of the vertices of the 3D face change, and so do their projections in the 2D image. As a result, the control points originally located on the contour profile are likely to shift away from the deformed contour profile, which would lead to noticeable artifacts after the retargeting process. Therefore, the control points along the contour profile of the face must be selected carefully. Let $M_c$ be the set of contour points along the source image, let $P_i^c$ be the $i$th point in $M_c$, and let $P_0^c, P_1^c, \ldots, P_n^c$ be in clockwise order along the contour. With a predefined threshold $l$, the control points are selected by minimizing the following energy function:

$$\min \; E_t + \lambda E_d, \tag{6}$$

where $E_t$ is employed to distribute the control points uniformly along the contour of the face region, and $E_d$ is employed to constrain the shifting of the control points from the deformed contour. They are defined as follows:

$$E_t = \begin{cases} \infty, & n = 0, \\ (d_A)^2, & n = 1, \\ \displaystyle\sum_{P_i^c \in M_c} \left( d_{arc}\!\left(P_i^c, P_{(i+1)\%n}^c\right) - l \right)^2, & n > 1, \end{cases} \tag{7}$$

$$E_d = \sum_{P_i^c \in M_c} \left( d\!\left(P_i^c; B\right)^2 + d\!\left(P_i^c; B'\right)^2 \right), \tag{8}$$

where $B$ stands for the background of the source image, $B'$ is the deformed background, $n$ is the number of points in the set $M_c$, $d_A$ represents the length of the contour along the face region in the source image, and $\lambda$ is the weight factor balancing the distance-threshold constraint against the location energy, which is set to 10 for our results. $d(x; S)$ is the least distance of point $x$ from the set $S$; that is,

$$d(x; S) = \inf\{\, d(x, s) \mid s \in S \,\}. \tag{9}$$

$d_{arc}(P_i^c, P_{i+1}^c)$ stands for the length of the face contour from $P_i^c$ to $P_{i+1}^c$ in clockwise order.

Equation (6) is minimized greedily: in each iteration, we add to $M_c$ the control point that yields the largest decrease in energy. The selected point in each iteration lies between the adjacent points with the longest distance along the contour, so the process can be implemented efficiently. The final solution is reached when adding a point no longer reduces the energy.
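
The greedy loop can be written down directly. The sketch below searches all remaining candidates for the largest energy drop, trading the longest-gap efficiency heuristic for brevity; the distance fields of Equation (9) are assumed precomputed.

```python
import numpy as np

def select_contour_points(arc_pos, dist_B, dist_Bp, total_len, l=30.0, lam=10.0):
    """Greedily minimize Equation (6) over candidate contour points.

    arc_pos : (m,) arc-length position of each candidate, clockwise
    dist_B, dist_Bp : (m,) distance of each candidate to the source and
        deformed backgrounds (Equation (9)), assumed precomputed
    total_len : total contour length d_A
    l : spacing threshold; lam : the weight lambda (10 in our results)
    """
    def energy(sel):
        n = len(sel)
        if n == 0:
            e_t = np.inf
        elif n == 1:
            e_t = total_len ** 2
        else:
            p = np.sort(arc_pos[sel])
            gaps = np.diff(np.append(p, p[0] + total_len))  # cyclic gaps
            e_t = np.sum((gaps - l) ** 2)
        e_d = np.sum(dist_B[sel] ** 2 + dist_Bp[sel] ** 2)
        return e_t + lam * e_d

    selected, e = [], np.inf
    while True:
        best, best_e = None, e
        for c in range(len(arc_pos)):   # exhaustive search for clarity
            if c in selected:
                continue
            e_new = energy(selected + [c])
            if e_new < best_e:
                best, best_e = c, e_new
        if best is None:                # no candidate reduces the energy
            return sorted(selected)
        selected.append(best)
        e = best_e
```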

Constrained Mesh Parametrization

Based on the control points selected from the background, the face region, and the face contour, the constrained Delaunay triangulation algorithm is utilized to generate a feature-consistent mesh, as shown in Figure 3(d). Using the method proposed by Guo and colleagues,6 the homomorphous target mesh is obtained, as shown in Figure 3(e). The background part is rendered using texture mapping, while the face region is rendered based on the deformed 3D model. Finally, the reshaped image is obtained, as shown in Figure 3(f).
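
One way to realize such a triangulation in Python is through bindings of Shewchuk's Triangle; the sketch below assumes the `triangle` package and enforces the face-contour point chain as constraint segments.

```python
import numpy as np
import triangle  # Python bindings for Shewchuk's Triangle library

def build_control_mesh(points, contour_ids):
    """points: (n, 2) control points; contour_ids: indices of the face
    contour points in clockwise order (consecutive points are joined by
    constraint segments, closing the loop)."""
    contour_ids = np.asarray(contour_ids)
    segs = np.column_stack([contour_ids, np.roll(contour_ids, -1)])
    # 'p' triangulates a planar straight-line graph, keeping the segments
    # as constrained edges of the resulting mesh.
    mesh = triangle.triangulate({'vertices': points, 'segments': segs}, 'p')
    return mesh['vertices'], mesh['triangles']
```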

Figure 6 compares the control mesh before and after 3D face deformation. The control mesh of the face region (see Figure 6(b)) is fattened after 3D face deformation (see Figure 6(c)). In this example, the weight-change degree is +3.


Figure 7. Reshaping results. The left column is the weight-change degree of -2, the middle column is the original input image, and the right column is the weight-change degree of +2.

Results and Discussion

We have implemented our algorithm on a desktop PC with an Intel i7 4.0 GHz CPU and 32 GB of memory. The average computation time is about 0.6 seconds for images of 640×480 pixels, 1.2 seconds for 800×600, and 1.6 seconds for 1024×768. We tested our method on a variety of facial images with various backgrounds and poses. Figure 7 shows some examples. For each example, the image in the middle is the input portrait; the left and right images are the reshaping results for -2 and +2 degrees, respectively.

Comparisons

One available facial reshaping method is the facial weight-change simulator proposed by Danino and colleagues.2 This approach consists of the following steps. First, the user marks thirteen landmarks on the portrait along the cheek and two landmarks around the neck. Given the user-specified weight-change degree, the new locations of the landmarks are calculated based on empirically determined coefficients. After that, thin-plate spline warping is employed to obtain the deformed facial image. To remedy artifacts in the deformed background, a synthetic background with a similar color is used to replace the actual background.

For frontal-view face images with neutral facial expressions and simple backgrounds, this method can produce realistic results. However, it may produce artifacts for non-frontal-view face images because some landmarks are hidden. Moreover, as the landmarks lie mostly along the cheek, the nonlinear thin-plate spline warping causes obvious distortions in other face regions (see the marked regions in Figure 8(b) and (e)). When the weight-change degrees are large, Danino and colleagues' method2 produces obvious artifacts (see the distortions in Figure 2(b) and (d)). For facial images with complex backgrounds, it generates unnatural distortions because its image warping is not content-aware. Since our approach recovers the 3D face model to simulate the weight change of the face and employs a content-aware image retargeting method, we can produce natural results for various expressions and poses.


Figure 8. Comparison results. (a) and (d) are input images. (b) and (e) are results of Danino and colleagues’ method,2 while (c) and (f) are our results. Please note the differences in the regions marked by red boxes.

We also compare our reshaping results with unprocessed camera images. We collected pictures of celebrities who have experienced weight changes from underweight to overweight or vice versa. Figures 10(b-d) and (g-i) are camera images, and (a), (e), (f), and (j) are our reshaping results. These reshaping results share a close similarity with the camera images.

User Study

We devised a user study to objectively verify the effectiveness of our facial reshaping method by measuring whether subjects can differentiate between our reshaped images and unprocessed camera images across various individuals of both sexes and varying BMIs.

Examples. We generate several reshaped images using the method described in the Algorithm section; we call this set ours. We also collect various unprocessed images containing human faces via the Internet; we call this set real. The individuals shown in real have experienced significant weight changes.

Study details. We recruit 25 subjects for this task. Each subject views 16 pairs of images of the same individual and is asked to choose the more realistic image in each pair. Two unprocessed reference images are provided to give subjects a more comprehensive impression of the person shown in the image pair. The first part of the user study is called RT: ten of the pairs contain one real image and one reshaped image of the same person taken on different occasions. One example is shown in Figure 9(a-d): (a) is our reshaping result and (b) is an unprocessed image, while (c) and (d) are both unprocessed images provided as references. The second part of the user study is called ST: the remaining six pairs contain one real image and the reshaped image generated from it. One example is shown in Figure 9(e-h): (f) is the original image, (e) is the reshaping result generated from (f), and (g) and (h) are the reference images for this pair.


Figure 9: User study examples. (a-d) are the images used in user study RT, while (e-h) are used in user study ST. (a, e) are our reshaping results, and (b, f) are unprocessed images. (c, d, g, h) are unprocessed camera images as well, which are provided as references. In RT, ours (a) are compared with other camera images taken under different circumstance (b). In ST, ours (e) are compared with the source images (f).

Table 2. User study results: statistics of the one-sample, two-tailed t-tests for RT and ST. The test value is 0.5 (50%). CI stands for the confidence interval of the difference.


Figure 10: Comparison results. (b-d) and (g-i) are camera images. (a), (e), (f), (j) are our reshaping results.

These image pairs are presented in a randomly permuted order, and the placement (left or right side) of real and ours is randomized, as well.

Results. We analyze the user study data for the two cases (RT and ST) separately. When asked to pick the image that appeared more realistic in the RT test, 49.6% of the subjects chose ours. Performing a one-sample, two-tailed t-test over these 10 examples, we find that subjects cannot detect significant differences between our results and the real images (p > 0.05). Therefore, the results of ours are, to some extent, as realistic as real. Regarding the ST part, fewer subjects chose ours (45.33%): compared with the source image itself, subjects can distinguish the reshaped image somewhat better. However, the t-test result for ST shows that this difference is also not statistically significant. From this user study, we conclude that our method is able to create natural reshaping results.
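
For reference, the test itself is a standard one-sample, two-tailed t-test against the chance level of 0.5. The sketch below shows the computation with SciPy; the per-pair fractions are placeholders, not the actual study responses.

```python
import numpy as np
from scipy import stats

# Fraction of subjects choosing "ours" per image pair (placeholder data
# for the 10 RT pairs, not the actual study responses).
fraction_ours = np.array([0.52, 0.48, 0.44, 0.56, 0.48,
                          0.52, 0.44, 0.52, 0.48, 0.52])

# One-sample, two-tailed t-test against the 0.5 (50%) chance level.
t_stat, p_value = stats.ttest_1samp(fraction_ours, popmean=0.5)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05: no significant difference
```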

Limitations

For very large weight-change degrees, our approach may generate artifacts around the cheek region and introduce noticeable distortion into the background. In our current implementation, the neck region of the input image is treated as background. As a result, artifacts near the neck may become obvious when the weight-change degrees are large (as shown in Figure 2).

Gaining or losing weight influences the appearance of the face. When gaining weight, a person's facial contours tend to expand, wrinkles appear reduced, and, to some extent, a double chin emerges. When losing weight, the contours tend to shrink, wrinkles appear more pronounced, and a double chin recedes. Our current approach cannot simulate such wrinkle and double-chin changes. As shown in the bottom-right image of Figure 7, wrinkle artifacts around the eye regions may arise when the weight-change degree is large.

Conclusions and Future Work

We have proposed an effective image reshaping system to thin or fatten a face based on a user-input weight-change degree. After obtaining a 3D morphable face model, forensic data are used to parameterize the reshaping process of the 3D model, and we rely on the deformed 3D model to reshape the source image. We also introduce a novel approach for choosing control points along the profile of the face. The effectiveness of our parametric weight-change reshaping method is demonstrated by examples and a user study.

Our system provides an interactive solution to reshaping a camera image by simply setting the weight-change degree.

We are currently working on several enhancements to our reshaping system. Although the current system reshapes the face region, the neck region should be added to generate more visually pleasing results. In addition, the reconstructed morphable face model enables further extensions for rendering the face region, such as relighting. We are also interested in extending our approach to the mobile phone platform.

Acknowledgements

We would like to thank the anonymous reviewers for their valuable suggestions and comments, which helped improve the readability of this paper. Xiaogang Jin is supported by the National Key R&D Program of China (No. 2017YFB1002600), the NSF of China (Nos. 61732015, 61472351), and the Key Research and Development Program of Zhejiang Province (No. 2017C03SA160073). Kun Zhou is supported by the NSF of China (No. 61272305), the National Program for Special Support of Eminent Professionals of China, and Lenovo's Program for Young Scientists.

References


  • 1. Q. Liao, X. Jin, and W. Zeng, "Enhancing the symmetry and proportion of 3D face geometry," IEEE Transactions on Visualization and Computer Graphics, vol. 18, no. 10, 2012, pp. 1704-1716.
  • 2. U. Danino, N. Kiryati, and M. Furst, "Algorithm for facial weight-change," Proceedings of the 11th IEEE International Conference on Electronics, Circuits and Systems (ICECS 04), IEEE, 2004, pp. 318-321.
  • 3. S. Zhou et al., "Parametric reshaping of human bodies in images," ACM Transactions on Graphics, vol. 29, no. 4, 2010, p. 126:1.
  • 4. S. De Greef et al., "Large-scale in-vivo Caucasian facial soft tissue thickness database for craniofacial reconstruction," Forensic Science International, vol. 159, 2006, p. S126.
  • 5. V. Blanz and T. Vetter, "A morphable model for the synthesis of 3D faces," Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 99), 1999, pp. 187-194.
  • 6. Y. Guo et al., "Image retargeting using mesh parametrization," IEEE Transactions on Multimedia, vol. 11, no. 5, 2009, pp. 856-867.
  • 7. M. Chai et al., "Single-view hair modeling for portrait manipulation," ACM Transactions on Graphics, vol. 31, no. 4, 2012, p. 116:1.
  • 8. C. Cao et al., "FaceWarehouse: a 3D facial expression database for visual computing," IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 3, 2014, pp. 413-425.
  • 9. A. Shamir and O. Sorkine, "Visual media retargeting," ACM SIGGRAPH ASIA 2009 Courses, 2009, p. 11:1.
  • 10. J.-Y. Noh and U. Neumann, "Expression cloning," Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 01), 2001, pp. 277-288.
  • 11. O. Sorkine et al., "Laplacian surface editing," Proceedings of the 2004 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing (SGP 04), 2004, pp. 175-184.

Haiming Zhao is a Ph.D. candidate at State Key Lab of CAD & CG, Zhejiang University. Zhao received his B.Sc. degree in statistics from Zhejiang University in 2012. His research interests include mesh editing, image processing and 3D printing. Contact him at haiming2zhao@gmail.com.
Xiaogang Jin (corresponding author) is a professor at the State Key Lab of CAD & CG, Zhejiang University. His research interests include implicit surface computing, cloth animation, crowd and group animation, texture synthesis, and digital geometry processing. Jin received his Ph.D. in applied mathematics from Zhejiang University. He received an ACM Recognition of Service Award in 2015. Contact him at jin@cad.zju.edu.cn.
Xiaojian Huang is a Software Engineer in Hangzhou Tuhua Technology Co. Ltd. Huang received his B.Sc. degree in computer science from Zhejiang University in 2014. His research interests include facial animation and cloth simulation. Contact him at xjhuang0401@foxmail.com.
Menglei Chai is working toward the Ph.D. degree at the State Key Lab of CAD & CG, Zhejiang University. Chai received his B.Sc. in computer science from Zhejiang University in 2011. His research interests include image-based modeling and interactive image manipulation. Contact him at cmlatsim@gmail.com.
Kun Zhou is a Cheung Kong Professor in the Computer Science Department of Zhejiang University, and the Director of the State Key Lab of CAD & CG. Prior to joining Zhejiang University in 2008, Dr. Zhou was a Leader Researcher of the Internet Graphics Group at Microsoft Research Asia. He received his B.S. degree and Ph.D. degree in computer science from Zhejiang University in 1997 and 2002, respectively. His research interests are in visual computing, parallel computing, human computer interaction, and virtual reality. He currently serves on the editorial/advisory boards of ACM Transactions on Graphics and IEEE Spectrum. He is a Fellow of IEEE. Contact him at kunzhou@acm.org.