Non-Photorealistic Computer Graphics Library



Found 5 item(s) authored by "Jingyi Yu".

Proceedings A Framework for Multiperspective Rendering
Jingyi Yu, Leonard McMillan.
Eurographics Symposium on Rendering, Norrköping, Sweden, 2004. [BibTeX]

Proceedings A Non-Photorealistic Camera: Detecting Silhouettes with Multi-flash
Ramesh Raskar, Jingyi Yu, Adrian Ilie.
SIGGRAPH 2003 Technical Sketch, Conference Abstracts and Applications, 2003. [BibTeX]

Technical Report Harnessing Real-World Depth Edges with Multiflash Imaging
Kar-han Tan, Rogerio Feris, Matthew Turk, J. Kobler, Jingyi Yu, Ramesh Raskar.
MERL (Mitsubishi Electric Research Laboratories), No. TR2005-067, December 2005. [BibTeX]

Proceedings Image Fusion for Context Enhancement and Video Surrealism
Ramesh Raskar, Adrian Ilie, Jingyi Yu.
3rd International Symposium on Non-Photorealistic Animation and Rendering (NPAR'04), 2004. [BibTeX]

Article Non-photorealistic Camera: Depth Edge Detection and Stylized Rendering using Multi-Flash Imaging

Author(s): Ramesh Raskar, Kar-han Tan, Rogerio Feris, Jingyi Yu, Matthew Turk.
Article: ACM Transactions on Graphics, Vol. 23, No. 3, pp. 679--688, 2004.
[BibTeX] [DOI] Find this paper on Google

Abstract:
We present a non-photorealistic rendering approach to capture and convey shape features of real-world scenes. We use a camera with multiple flashes that are strategically positioned to cast shadows along depth discontinuities in the scene. The projective-geometric relationship of the camera-flash setup is then exploited to detect depth discontinuities and distinguish them from intensity edges due to material discontinuities. We introduce depiction methods that utilize the detected edge features to generate stylized static and animated images. We can highlight the detected features, suppress unnecessary details or combine features from multiple images. The resulting images more clearly convey the 3D structure of the imaged scenes. We take a very different approach to capturing geometric features of a scene than traditional approaches that require reconstructing a 3D model. This results in a method that is both surprisingly simple and computationally efficient. The entire hardware/software setup can conceivably be packaged into a self-contained device no larger than existing digital cameras.
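The core idea of the abstract, i.e. shadows from differently positioned flashes fall on the far side of depth discontinuities, so a sharp lit-to-shadow drop in each flash image's ratio against the shadow-free max composite marks a depth edge, can be sketched as follows. This is a hypothetical, simplified illustration (the function name, the per-flash step directions, and the `drop` threshold are assumptions), not the authors' implementation:

```python
import numpy as np

def depth_edges(flash_images, directions, drop=0.3):
    """Sketch of multi-flash depth-edge detection (hypothetical helper).

    flash_images: grayscale float images, one per flash position.
    directions: per-image (dy, dx) unit steps pointing away from
    that flash, i.e. toward where its shadows fall.
    """
    stack = np.stack([img.astype(float) for img in flash_images])
    i_max = stack.max(axis=0)            # max composite: largely shadow-free
    i_max[i_max == 0] = 1e-6             # avoid division by zero
    edges = np.zeros(i_max.shape, dtype=bool)
    for img, (dy, dx) in zip(flash_images, directions):
        ratio = img / i_max              # ~1 where lit, <1 inside a shadow
        # Step one pixel away from the flash; a sharp lit-to-shadow drop
        # along that direction marks a depth edge. (np.roll wraps at the
        # image borders, so a one-pixel frame should be ignored in practice.)
        ahead = np.roll(ratio, shift=(-dy, -dx), axis=(0, 1))
        edges |= (ratio - ahead) > drop
    return edges
```

A material (intensity) edge darkens all flash images alike, so its ratio stays near 1 in every image and is not marked; only shadows that move with the flash position produce the detected drops, which is how the paper distinguishes depth discontinuities from reflectance edges.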
