Introduction

One of the two major shadowing algorithms is the shadow volume algorithm, first described by Crow and Williams. An excellent review of the current state of the art, which also shows how to implement the algorithm robustly, was written by Eric Lengyel. Implementing shadow volumes requires geometry computations that have been successfully accelerated using programmable graphics hardware. This article shows a combined CPU/GPU approach in which the graphics card accelerates shadow volume extrusion even when a programmable vertex unit is not available.

The geometry computations necessary for shadow volumes are silhouette determination and silhouette extrusion. Silhouette determination finds the contour between front- and backfacing regions with respect to the lightsource. Silhouette extrusion then projects every vertex that is part of that contour away from the lightsource, preferably to infinity.

It is possible to perform all of these computations on the GPU, but this requires a specially augmented geometry that increases the vertex load by a factor of three. An alternative method moves only the silhouette extrusion to the GPU while the CPU does the silhouette determination (Kilgard and Everitt). It is this combined CPU/GPU approach that this article focuses on.

The Combined CPU/GPU Approach to Shadow Volumes

In the combined CPU/GPU approach, the CPU does silhouette determination while the GPU does silhouette extrusion. The geometry is prepared so that each vertex used for shadow casting exists in two versions, once as (x, y, z, 0) and once as (x, y, z, 1). One of these gets projected away from the lightsource while the other remains unaltered. Exactly which version of the vertices gets projected, the ones with w = 0 or the ones with w = 1, is an arbitrary implementation choice. The task of the CPU is to write an index buffer that references the proper vertices to make up a shadow volume, a front cap, or a back cap.
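The doubled vertex layout can be sketched on the CPU as follows. The struct name, the function name, and the pairing order of the two versions are illustrative assumptions, not part of the article; the layout (three position floats followed by w, 16 bytes per vertex) matches the vertex pointers set up later.

```c
/* One shadow-casting vertex: position plus the w flag that decides
 * whether the GPU leaves it in place (w = 1) or extrudes it (w = 0). */
typedef struct { float x, y, z, w; } ShadowVertex;

/* Writes 2 * count vertices into dst: dst[2*i] is the unextruded copy
 * (w = 1), dst[2*i+1] is the copy to be projected away (w = 0).
 * The pairing order is an arbitrary choice for this sketch. */
void BuildShadowVertices( const float *xyz, int count, ShadowVertex *dst )
{
    int i;
    for( i = 0; i < count; ++i )
    {
        dst[2*i].x = dst[2*i+1].x = xyz[3*i+0];
        dst[2*i].y = dst[2*i+1].y = xyz[3*i+1];
        dst[2*i].z = dst[2*i+1].z = xyz[3*i+2];
        dst[2*i].w   = 1.0f;   /* stays at its original position */
        dst[2*i+1].w = 0.0f;   /* gets projected away from the light */
    }
}
```

The CPU-written index buffer then simply picks index 2i or 2i+1 depending on whether a face, cap, or extruded quad needs the original or the projected version of vertex i.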

To project a vertex at (x,y,z) away from a lightsource at (lx, ly, lz) towards infinity, the GPU has to perform the following calculation:

```
x' = x - lx
y' = y - ly
z' = z - lz
w' = 0
```

where (x', y', z', w') is the projected vertex. For directional lightsources, all vertices are projected to (-lx, -ly, -lz, 0), so no per-vertex computation is necessary. Usually a vertex program is employed that either performs the projection or leaves the vertex untouched, depending on the w coordinate of the incoming vertex.
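On the CPU, the same calculation can be written as a small reference function (a sketch; the function name is an illustrative assumption). Setting w' = 0 makes the result a homogeneous point at infinity in the direction from the light through the vertex.

```c
/* CPU reference of the extrusion the GPU performs: project vertex v
 * away from a point light at l, to infinity. */
void ProjectFromLight( const float v[3], const float l[3], float out[4] )
{
    out[0] = v[0] - l[0];
    out[1] = v[1] - l[1];
    out[2] = v[2] - l[2];
    out[3] = 0.0f;   /* homogeneous point at infinity */
}
```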

Matrix Reformulation

The projection described above can be expressed in matrix form as

```
| 1 0 0 -lx |
| 0 1 0 -ly |
| 0 0 1 -lz |
| 0 0 0   0 |
```

When multiplied with the model-view matrix (the matrix that transforms from model coordinates into eye coordinates), the resultant matrix will transform vertices from model coordinates directly to their projected position in eye space. Selective projection of vertices can thus be reformulated as a selective vertex transform, where the w coordinate of the incoming vertex dictates which matrix to apply.

Fortunately, selective transform has been available since the first generation of hardware T&L cards via vertex blending. We set up two transformation matrices and configure the render state so that the w coordinate of the incoming vertex is interpreted as the blend weight for a blended transform. Given transformation matrices M0 and M1, blend weight w and original vertex v, the operation becomes

 `v' = v * w * M0 + v * ( 1 - w ) * M1 `

When M0 is the matrix that transforms vertices from model space into eye space, and M1 is a matrix that transforms vertices from model space to their projected position away from the lightsource, as discussed above, then all vertices with w = 0 are projected away from the lightsource while the vertices with w = 1 retain their original position.
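A minimal CPU-side check of this behaviour can be sketched as follows, using an identity matrix as a stand-in for the model-view matrix and row-major matrices with the translation in the last column, matching the notation in the text. The helper names and test values are my own illustration.

```c
/* Multiply a row-major 4x4 matrix with a column vector. */
void TransformRowMajor( const float m[16], const float v[4], float out[4] )
{
    int r;
    for( r = 0; r < 4; ++r )
        out[r] = m[4*r+0]*v[0] + m[4*r+1]*v[1]
               + m[4*r+2]*v[2] + m[4*r+3]*v[3];
}

/* The blended transform v' = w * (M0 * v) + (1 - w) * (M1 * v). */
void BlendTransform( const float m0[16], const float m1[16],
                     const float v[4], float w, float out[4] )
{
    float a[4], b[4];
    int i;
    TransformRowMajor( m0, v, a );
    TransformRowMajor( m1, v, b );
    for( i = 0; i < 4; ++i )
        out[i] = w * a[i] + ( 1.0f - w ) * b[i];
}
```

With M0 = identity and M1 = the projection matrix for a light at (1, 0, 0), a position (2, 3, 4, 1) with weight 0 lands at (1, 3, 4, 0), i.e. at infinity in the direction away from the light, while weight 1 leaves it unchanged.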

Example Code

I have provided example code showing how to set up vertex pointers and matrices in OpenGL, using GL_ARB_vertex_blend. Note that in DirectX, the same functionality is available with the vertex format D3DFVF_XYZB1.

```
void SetupPointersARB( const float *verts )
{
    /* Each vertex is four floats: x, y, z, then the blend weight w. */
    glVertexPointer( 3, GL_FLOAT, 16, verts );
    glWeightPointerARB( 1, GL_FLOAT, 16, verts + 3 );
    glEnableClientState( GL_VERTEX_ARRAY );
    glEnableClientState( GL_WEIGHT_ARRAY_ARB );
    glVertexBlendARB( 2 );               /* blend between two matrices */
    glEnable( GL_VERTEX_BLEND_ARB );
    glEnable( GL_WEIGHT_SUM_UNITY_ARB );
}
```

In this function, the verts parameter is expected to point into an array of (x, y, z, w) values. GL_WEIGHT_SUM_UNITY_ARB is enabled so that the weight for the second matrix is implicitly the complement of the weight for the first. The following code then sets up the appropriate modelview matrices.

```
void SetupMatrices( const float *modelview, const float *lightpos )
{
    /* Projection matrix in OpenGL's column-major layout. Since lightpos
       is given in eye coordinates, this matrix is applied after the
       model-view transform. */
    const float pmatrix[16] = {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        -lightpos[0], -lightpos[1], -lightpos[2], 0,
    };

    glMatrixMode( GL_MODELVIEW0_ARB );
    glLoadMatrixf( modelview );
    glMatrixMode( GL_MODELVIEW1_ARB );
    glLoadMatrixf( pmatrix );
    glMultMatrixf( modelview );
}
```

In this function, the modelview parameter is the matrix that transforms from model coordinates into eye coordinates, while the lightpos parameter is a 3-vector specifying the position of the lightsource in eye coordinates.

Non-projective Vertex Blending

The catch is that some implementations do not support projective matrices for vertex blending, since vertex blending was originally designed for tweening and soft-skinning animation. When using vertex blending for shadow volume extrusion, however, one of our matrices is projective. Since we are projecting to infinity, the projection merely sets w = 0 in the output vertex. If the hardware doesn't allow projective vertex blending, we can instead make x, y and z huge while keeping w = 1. The exact substitution would make x, y and z infinitely large, which is not possible, but for practical purposes a sufficiently large factor works well in a given situation.

The alternative projection operation then is

```
x' = ( x - lx ) * large_factor
y' = ( y - ly ) * large_factor
z' = ( z - lz ) * large_factor
w' = 1
```

and the corresponding matrix is

```
| large_factor 0            0            ( -lx * large_factor ) |
| 0            large_factor 0            ( -ly * large_factor ) |
| 0            0            large_factor ( -lz * large_factor ) |
| 0            0            0            1                      |
```

Now that both matrices have (0, 0, 0, 1) as their last row, the blended transform works even with non-projective vertex blending. The altered setup code is shown below.

```
void SetupMatrices( const float *modelview, const float *lightpos )
{
    const float LARGE = 16777216.0f;
    /* Non-projective variant in OpenGL's column-major layout. Since
       lightpos is given in eye coordinates, this matrix is applied
       after the model-view transform. */
    const float pmatrix[16] = {
        LARGE, 0, 0, 0,
        0, LARGE, 0, 0,
        0, 0, LARGE, 0,
        -lightpos[0] * LARGE, -lightpos[1] * LARGE, -lightpos[2] * LARGE, 1,
    };

    glMatrixMode( GL_MODELVIEW0_ARB );
    glLoadMatrixf( modelview );
    glMatrixMode( GL_MODELVIEW1_ARB );
    glLoadMatrixf( pmatrix );
    glMultMatrixf( modelview );
}
```
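To see that the fallback behaves like the true projection, the alternative operation can be sketched on the CPU: the result is exactly the extrusion direction (v - l) scaled by large_factor, kept as a finite point. The function name and the factor value in the check below are illustrative assumptions.

```c
/* CPU sketch of the non-projective fallback: push vertex v away from
 * the light at l by a large but finite amount, leaving w = 1. */
void ProjectLarge( const float v[3], const float l[3],
                   float large_factor, float out[4] )
{
    out[0] = ( v[0] - l[0] ) * large_factor;
    out[1] = ( v[1] - l[1] ) * large_factor;
    out[2] = ( v[2] - l[2] ) * large_factor;
    out[3] = 1.0f;   /* a finite point, just very far from the light */
}
```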

Conclusion

An alternative way of accelerating shadow volume extrusion has been shown that can be hardware accelerated on any T&L card.

References

Crow F (1977) Shadow Algorithms for Computer Graphics. Computer Graphics 11: 242-247

Kilgard M J, Everitt C (2003) Optimized Stencil Shadow Volumes. GDC Presentation: http://developer.nvidia.com/docs/IO/8230/GDC2003_ShadowVolumes.pdf

Lengyel E (2002) The Mechanics of Robust Stencil Shadows. Gamasutra Feature: http://www.gamasutra.com/features/20021011/lengyel_01.htm

Williams L (1978) Casting Curved Shadows on Curved Surfaces. Computer Graphics 12: 270-274