Hardware-Determined Feature Edges
Author(s): Morgan McGuire, John F. Hughes.
Proceedings: 3rd International Symposium on Non-Photorealistic Animation and Rendering (NPAR'04), pp. 35-47,
2004.
Abstract:
Algorithms that detect silhouettes, creases, and other edge-based
features often perform per-edge and per-face mesh computations
using global adjacency information. These are unsuitable for
hardware-pipeline implementation, where programmability is at
the vertex and pixel level and only local information is available.
Card and Mitchell, and Gooch, have suggested that adjacency
information could be packed into a vertex data structure; we
describe the details of converting global/per-edge computations
into local/per-vertex computations on a related ‘edge mesh.’
Using this trick, we describe a feature-edge detection algorithm
that runs entirely in hardware, and show how to use it to create
thick screen-space contours with end-caps that join adjacent thick
line segments. The end-cap technique favors speed over quality
and produces artifacts for some meshes.
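The per-edge feature tests the abstract refers to can be sketched as follows. This is a minimal CPU illustration, not the paper's vertex-program implementation: it assumes each edge record carries the normals of its two adjacent faces (`n_a`, `n_b`), and the crease threshold angle is an arbitrary example value.

```python
import numpy as np

def is_silhouette(n_a, n_b, edge_point, eye):
    """An edge is a silhouette when its two adjacent faces point
    in opposite directions relative to the viewer, i.e. the signed
    facing tests disagree."""
    v = edge_point - eye  # view vector from the eye toward the edge
    return float(np.dot(n_a, v)) * float(np.dot(n_b, v)) < 0.0

def is_crease(n_a, n_b, dihedral_threshold_deg=60.0):
    """An edge is a crease when the adjacent face normals diverge
    by more than a chosen threshold angle (hypothetical default)."""
    return float(np.dot(n_a, n_b)) < np.cos(np.radians(dihedral_threshold_deg))
```

Because both tests use only data stored with the edge itself, they need no global adjacency lookups at run time, which is what makes a per-vertex GPU formulation possible.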
We present two parameterizations for mapping stroke textures
onto these thick lines—a tessellation-independent screen space
method that is better suited to still images, and an object space
method better suited to animation. As additional applications, we
show how to create fins for fur rendering and how to extrude
contours in world-space to create the sides of a shadow volume
directly on the GPU.
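The shadow-volume application amounts to extruding each silhouette edge away from the light to form one side quad of the volume. The paper performs this in a vertex program; the sketch below only mirrors the underlying math on the CPU, with a hypothetical finite extrusion distance in place of extrusion to infinity.

```python
import numpy as np

def extrude_edge(p0, p1, light_pos, distance=1000.0):
    """Extrude a silhouette edge (p0, p1) away from a point light,
    producing the four corners of one shadow-volume side quad."""
    d0 = p0 - light_pos
    d1 = p1 - light_pos
    q0 = p0 + distance * d0 / np.linalg.norm(d0)
    q1 = p1 + distance * d1 / np.linalg.norm(d1)
    return [p0, p1, q1, q0]  # quad in winding order
```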
The edge mesh is about nine times larger than the original
mesh when stored at 16-bit precision and is constructed through a
linear time pre-processing step. As long as topology remains
fixed, the edge mesh can be animated as if it were a vertex mesh.
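The linear-time pre-processing step can be sketched as a single pass over the faces that gathers, for every undirected edge, the data of its (up to two) adjacent faces. This is an illustrative reconstruction, not the paper's exact record layout: here each edge keeps only its adjacent face normals, whereas duplicating full per-edge vertex data is what produces the roughly nine-fold size increase the abstract reports.

```python
import numpy as np

def build_edge_mesh(vertices, faces):
    """One linear pass over the triangle list: for each undirected
    edge, collect the unit normals of its adjacent faces.  Interior
    edges end up with two normals, boundary edges with one."""
    vertices = np.asarray(vertices, dtype=float)
    edges = {}
    for tri in faces:
        a, b, c = (vertices[i] for i in tri)
        n = np.cross(b - a, c - a)
        n = n / np.linalg.norm(n)
        for i, j in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            key = (min(i, j), max(i, j))  # undirected edge key
            edges.setdefault(key, []).append(n)
    return edges
```

Because the adjacency is baked into each edge record up front, the records can then be re-skinned every frame like ordinary vertices, which is why the edge mesh animates correctly as long as the topology stays fixed.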