Non-Photorealistic Computer Graphics Library

Found 5 items authored by "D. Rowntree".

Proceedings Cartoon-style Rendering of Motion from Video
John P. Collomosse, D. Rowntree, Peter M. Hall.
International Conference on Vision, Video and Graphics (VVG), pp. 117--124, July, 2003. [BibTeX]

Article Rendering cartoon-style motion cues in post-production video
John P. Collomosse, D. Rowntree, Peter M. Hall.
Graphical Models, Vol. 67, No. 6, pp. 549--564, November, 2005. [BibTeX]

Technical Report Stroke Surfaces: A Spatio-temporal Framework for Temporally Coherent Non-photorealistic Animations
John P. Collomosse, D. Rowntree, Peter M. Hall.
University of Bath, No. CSBU 2003-01, June, 2003. [BibTeX]

Article Stroke Surfaces: Temporally Coherent Artistic Animations from Video
John P. Collomosse, D. Rowntree, Peter M. Hall.
IEEE Transactions on Visualization and Computer Graphics, Vol. 11, No. 5, pp. 540--549, September/October, 2005. [BibTeX]

Proceedings Video Analysis for Cartoon-like Special Effects
John P. Collomosse, D. Rowntree, Peter M. Hall.
14th British Machine Vision Conference (BMVC), Vol. 2, pp. 749--758, Norwich, U.K., September, 2003. [BibTeX]

Abstract:
In recent years the vision community has shown interest in processing images and video for use by the entertainment industries. Typical applications include 3D reconstruction of models and rendering graphics models into video. This paper broadly aligns with that trend, but differs in that we process video to emphasise motion in cartoon-like styles, in which moving objects deform in defiance of physical laws and leave trailing marks of one kind or another in their wake. We provide an introduction to the effects real animators use, and show how a judicious choice of standard processing techniques, supplemented by novel methods, can achieve convincing results. We illustrate the robustness of our method on several video sequences, ranging in content from simple oscillatory motion to articulated motion, under both static and moving camera conditions.
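
As a rough illustration of the kind of "trailing marks" effect the abstract describes, the sketch below composites faded copies of recently moving pixels back into each frame, using simple frame differencing in OpenCV. It is a minimal stand-in, not the authors' method: their pipeline segments and tracks objects before rendering motion cues, whereas this relies only on per-pixel differencing, and the function name ghost_trail and all parameter values are invented for illustration.

import cv2
import numpy as np

def ghost_trail(frames, trail_len=5, fade=0.6, motion_thresh=25):
    """Blend faded copies of recently moving regions into each frame.

    Hypothetical illustration only; not the method of Collomosse et al.
    """
    out, history, prev_gray = [], [], None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Crude motion mask: pixels that changed since the previous frame.
            diff = cv2.absdiff(gray, prev_gray)
            mask = (diff > motion_thresh).astype(np.float32)[..., None]
            history.append((frame.astype(np.float32), mask))
            history = history[-trail_len:]
        prev_gray = gray

        composite = frame.astype(np.float32)
        weight = fade
        # Skip the current frame (history[-1]); older frames contribute
        # progressively fainter ghosts where they were moving.
        for past, past_mask in reversed(history[:-1]):
            composite = composite * (1.0 - weight * past_mask) \
                        + past * (weight * past_mask)
            weight *= fade
        out.append(np.clip(composite, 0, 255).astype(np.uint8))
    return out

Frames can be read with cv2.VideoCapture and written back out frame by frame; the point is only to show how a simple ghosting trail can be composited, not to reproduce the temporally coherent, object-level cues the paper synthesizes.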

