Example-based Automatic Portraiture
Hong Chen, Lin Liang, Ying-Qing Xu, Heung-Yeung Shum, Nan-Ning Zheng.
The 5th Asian Conference on Computer Vision (ACCV2002), Melbourne, Australia, 23–25 January,
2002. [BibTeX]
Example-Based Caricature Generation with Exaggeration
Lin Liang, Hong Chen, Ying-Qing Xu, Heung-Yeung Shum.
10th Pacific Conference on Computer Graphics and Applications (PG'02), p. 386, October 9–11,
2002. [BibTeX]
Example-Based Composite Sketching of Human Portraits
Hong Chen, Ziqiang Liu, Chuck Rose, Ying-Qing Xu, Heung-Yeung Shum, David H. Salesin.
3rd International Symposium on Non-Photorealistic Animation and Rendering (NPAR'04),
2004. [BibTeX]
Example-based Facial Sketch Generation with Non-parametric Sampling
Hong Chen, Ying-Qing Xu, Heung-Yeung Shum, Song-Chun Zhu, Nan-Ning Zheng.
International Conference on Computer Vision 2001 (ICCV'2001), pp. 433–438,
2001. [BibTeX]
PicToon: A Personalized Image-based Cartoon System
Hong Chen, Lin Liang, Yan Li, Ying-Qing Xu, Heung-Yeung Shum.
Proceedings of the 10th ACM International Conference on Multimedia (MULTIMEDIA '02), pp. 171–178, December,
2002. [BibTeX]
Abstract:
In this paper, we present PicToon, a cartoon system that generates a personalized cartoon face from an input image. PicToon is easy to use and requires little user interaction. Our system consists of three major components: an image-based automatic Cartoon Generator, an interactive Cartoon Editor for exaggeration, and a speech-driven Cartoon Animator. First, to capture an artistic style, cartoon generation is decoupled into two processes: sketch generation and stroke rendering. An example-based approach automatically generates the sketch lines that depict the facial structure; inhomogeneous non-parametric sampling combined with a flexible facial template extracts a vector-based facial sketch, to which various stroke styles can then be applied. Second, with the pre-designed templates in the Cartoon Editor, the user can easily exaggerate the cartoon or make it more expressive. Third, a real-time lip-syncing algorithm recovers a statistical audio-visual mapping between the character's voice and the corresponding lip configuration. Experimental results demonstrate the effectiveness of our system.
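The example-based, non-parametric sampling idea from the abstract can be illustrated with a toy sketch: for a query image patch, find the training image patches most similar to it and return the sketch patch paired with one of them. This is only a minimal illustration of the general technique under assumed data shapes; the function and variable names are invented here and do not come from the paper.

```python
import numpy as np

def sample_sketch_patch(patch, examples, k=3, rng=None):
    """Toy non-parametric sampling: return the sketch patch paired with
    one of the k training image patches closest to `patch` (L2 distance)."""
    rng = rng or np.random.default_rng(0)
    img_side = np.stack([img for img, _ in examples])      # (N, h, w) image patches
    dists = np.linalg.norm(img_side - patch, axis=(1, 2))  # distance to each example
    nearest = np.argsort(dists)[:k]                        # indices of k best matches
    return examples[rng.choice(nearest)][1]                # sample among the k sketches

# Usage: pair each (hypothetical) grayscale image patch with a sketch patch.
rng = np.random.default_rng(1)
examples = [(rng.random((8, 8)), rng.random((8, 8))) for _ in range(20)]
query = examples[0][0] + 0.01 * rng.random((8, 8))
sketch = sample_sketch_patch(query, examples)              # an 8x8 sketch patch
```

Sampling among the k nearest neighbors, rather than always taking the single best match, is what makes the scheme non-parametric sampling rather than plain nearest-neighbor lookup.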