Dynamic Lighting In 3D Scenes
by (13 January 1999)
This document is a short description of the illumination model used in Focus, my small 3D engine that I use for trying out techniques related to real-time 3D computer graphics. The model is based on light volumes, and it offers an alternative to the commonly used models.
The most commonly used methods are listed here, with a brief
description and some (dis)advantages:
Flat shading
The most basic shading technique. A single shading color is determined per polygon, and the entire polygon is either rendered with this color, or the color is added to the texture on the polygon. This technique tends to emphasize the flat nature of the polygons in the scene. In some cases it looks very good, particularly when you WANT your scene to look 'computerized'.
Gouraud shading
A more advanced technique. A single shading color is determined per vertex. The color is then interpolated along the edges, and ultimately along the scanlines, effectively blending the vertex colors smoothly over the entire polygon. If used correctly, this technique makes objects look round. It cannot be used in a convincing way if there are multiple light sources in the scene. It is also impossible to have a highlight in the middle of a large polygon. Another disadvantage is that when the polygon moves or rotates, a static light source does not cause static lighting on the polygon: because shading is only computed at the vertices, the interpolated intensity appears to 'swim' over the surface as the vertices move relative to the light.
(Fake) Phong shading
One of the most realistic techniques for dynamic lighting. A texture is attached to every light source. This texture is then projected on every polygon, using the normal at each polygon vertex as an index into the lightmap. This way, highlights can occur in the middle of a polygon. Also, the lightmap is fully configurable: it can be dithered, smooth, or very sharp in the centre. However, it is very hard to have directed spotlights with this technique, or to have multiple lights on a single polygon.
Lightmaps
Quake's lighting model. For some odd reason, this fact alone makes this model the 'most wanted'. The same happens with things like span buffers and BSP trees: if Carmack does it, it must be good, so nobody tries anything else. Believe me: Carmack would never have built Quake if he wasn't trying new things, so please STOP following him! Or me. :) Lightmaps are low resolution textures that are precalculated for every polygon in a scene. Because they are precalculated, the pixel values can be computed using very expensive techniques, like radiosity. This way, you can have very good static lighting, complete with shadows. It is static, however, and sometimes the low resolution makes the shadows extremely 'soft' at the edges. And even at this low precision, the shadow maps take a considerable amount of memory.
So this is where 'my' model kicks in. I'm sure it has been used before, and maybe it is even documented, but what the heck. :)
Basically, I project a texture onto a polygon. That's nothing new: it looks like Phong shading. But with fake Phong shading the lightmap is used as an environment map, and thus every pixel of every polygon is covered by a pixel from the Phong lightmap. This is obviously not always what you want: if you have a spotlight with a narrow beam in a room, you don't want every polygon in the room to catch pixels from the lightmap. And you CERTAINLY don't want that if you have multiple narrow beams.
So, I construct a nice volume for the beam. Four planes do the job just fine. Then, every polygon in the scene is clipped against this volume. (Obviously, some optimizations are needed here, but they are not needed to get the idea across.) The resulting polygons and polygon parts are precisely the lit polygons: if you drew these parts in white, you would have a nice beam in your scene. It does shine THROUGH your objects, though; I'll address that in a minute.
But of course we want to get a bit further than a large square white beam. So, the clipped parts are processed a bit further: first, they get the texture of the light source. This can be anything: a nice single spot, or lots of spots, or even a full color photograph. Then, each vertex of the (clipped) polygon needs new U/V coordinates. Use the following formulas for this:
U = lightmap_width * dist1 / (dist1 + dist3)
V = lightmap_height * dist2 / (dist2 + dist4)
U is the horizontal index into the lightmap texture;
V is the vertical index into the lightmap texture;
dist1 is the distance from the vertex to the first boundary plane of the light volume;
dist3 is the distance from the vertex to the plane opposite the first;
dist2 is the distance from the vertex to the second boundary plane of the light volume;
dist4 is the distance from the vertex to the plane opposite the second.
Thus, dist1/(dist1+dist3) is always a value between zero and one, regardless of the distance of the vertex to the light source. The same goes for dist2/(dist2+dist4).
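As a concrete sketch of these formulas (the struct layouts and names are my own, not taken from Focus), the U/V computation from the four boundary-plane distances could look like this:

```c
#include <stdio.h>

/* A plane in the form n.p + d = 0; its signed distance to p is n.p + d.
   These types and names are illustrative, not from the original engine. */
typedef struct { float nx, ny, nz, d; } Plane;
typedef struct { float x, y, z; }      Vec3;

static float plane_dist(const Plane *pl, Vec3 p)
{
    return pl->nx * p.x + pl->ny * p.y + pl->nz * p.z + pl->d;
}

/* Map a vertex inside the light volume to lightmap U/V. plane[0]/plane[2]
   are one opposite pair of boundary planes, plane[1]/plane[3] the other. */
static void light_uv(const Plane plane[4], Vec3 vert,
                     float lm_w, float lm_h, float *u, float *v)
{
    float d1 = plane_dist(&plane[0], vert);
    float d3 = plane_dist(&plane[2], vert);
    float d2 = plane_dist(&plane[1], vert);
    float d4 = plane_dist(&plane[3], vert);
    *u = lm_w * d1 / (d1 + d3);   /* U = width  * dist1 / (dist1 + dist3) */
    *v = lm_h * d2 / (d2 + d4);   /* V = height * dist2 / (dist2 + dist4) */
}
```

Because each coordinate is a ratio of distances to two opposite planes, it stays in [0, 1] anywhere inside the volume, exactly as the formulas above require.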
Now the texture can simply be drawn transparent over the existing polygons.
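The clipping step itself can be a standard Sutherland-Hodgman pass against each of the four beam planes in turn. A minimal sketch (my own illustrative code, assuming planes store a normal plus d, and a distance >= 0 means 'inside the beam'):

```c
#define MAX_VERTS 32

typedef struct { float x, y, z; }      Vec3;
typedef struct { float nx, ny, nz, d; } Plane;

static float pdist(const Plane *p, Vec3 v)
{
    return p->nx * v.x + p->ny * v.y + p->nz * v.z + p->d;
}

/* Clip polygon `in` (n verts) against one plane, keeping the dist >= 0 side.
   Sutherland-Hodgman style; returns the new vertex count. */
static int clip_plane(const Plane *pl, const Vec3 *in, int n, Vec3 *out)
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        Vec3 a = in[i], b = in[(i + 1) % n];
        float da = pdist(pl, a), db = pdist(pl, b);
        if (da >= 0) out[m++] = a;
        if ((da >= 0) != (db >= 0)) {              /* edge crosses the plane */
            float t = da / (da - db);
            Vec3 c = { a.x + t * (b.x - a.x),
                       a.y + t * (b.y - a.y),
                       a.z + t * (b.z - a.z) };
            out[m++] = c;
        }
    }
    return m;
}

/* Clip against all four beam planes; what survives is the lit part. */
static int clip_to_volume(const Plane planes[4], Vec3 *poly, int n)
{
    Vec3 tmp[MAX_VERTS];
    for (int i = 0; i < 4 && n > 0; i++) {
        n = clip_plane(&planes[i], poly, n, tmp);
        for (int j = 0; j < n; j++) poly[j] = tmp[j];
    }
    return n;
}
```

A polygon that ends up with zero vertices is entirely outside the beam and is simply lit by nothing.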
I promised to address the problem of the 'penetrating volume'. That's simple: only polygons that face the light source are potentially lit. One obvious optimization: only polygons that face the camera need to be lit; otherwise they won't be visible anyway. :) More optimizations: if the light source doesn't move, the clipped polygons can simply be drawn again in the next frame, so the lighting is not so expensive anymore. It is rather easy to implement a system that recalculates the clipped parts ONLY when the light source or the scenery changes. This way you can have fully dynamic lights that are as fast as regular methods, except when the lights move. If you keep the number of simultaneously moving lights in your scenes low, this is some awesome lighting!
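The facing test is just a dot product against the polygon's plane: the light must lie on the front side. A minimal sketch (field names are my own):

```c
typedef struct { float x, y, z; } Vec3;

/* A polygon faces the light if the light position lies on the front side
   of the polygon's plane: dot(normal, light_pos - any_vertex) > 0. */
static int faces_light(Vec3 normal, Vec3 any_vertex, Vec3 light_pos)
{
    Vec3 to_light = { light_pos.x - any_vertex.x,
                      light_pos.y - any_vertex.y,
                      light_pos.z - any_vertex.z };
    float dot = normal.x * to_light.x
              + normal.y * to_light.y
              + normal.z * to_light.z;
    return dot > 0.0f;
}
```

Running this test before clipping discards roughly half the polygons for free, and it is exactly what stops the beam from lighting the back sides of your objects.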
You can easily extend this technique to cast real-time, fully dynamic shadows. I will not go into details here, because I have not yet implemented it myself. This is how you do it: every polygon that is drawn with a lightmap is, right after this step, drawn ONTO the lightmap itself in pure black. This way, all polygons behind this polygon receive the adjusted lightmap, with a shadow on it.
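Since I haven't implemented this, here is only a toy sketch of the 'draw in black onto the lightmap' step, stamping an occluder's projected bounding rectangle into a small grayscale lightmap. A real implementation would rasterize the projected polygon itself, and the sizes and names here are made up for illustration:

```c
#include <string.h>

#define LM_W 16
#define LM_H 16

/* Toy 16x16 grayscale lightmap; a real engine would start from a copy of
   the light source's actual texture. */
static unsigned char lightmap[LM_H][LM_W];

static void lightmap_fill(unsigned char value)
{
    memset(lightmap, value, sizeof lightmap);
}

/* Stamp an occluder's projected bounding rectangle into the lightmap in
   pure black, so polygons farther from the light sample a shadowed map. */
static void stamp_shadow(int u0, int v0, int u1, int v1)
{
    for (int v = v0; v <= v1 && v < LM_H; v++)
        for (int u = u0; u <= u1 && u < LM_W; u++)
            if (u >= 0 && v >= 0)
                lightmap[v][u] = 0;
}
```

Texels inside the stamped region go black while the rest of the map is untouched, which is the whole trick: anything lit through those texels afterwards is in shadow.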
This technique has several disadvantages:
1. Every light source must be recalculated completely every frame, if you don't want to keep LOTS of lightmaps.
2. Polygons that cast no shadows because there ARE no polygons behind them still need to be drawn onto the lightmap, because it is virtually impossible to detect these cases at a decent speed.
3. Polygons need to be transformed twice, because they also need to be rendered onto the lightmap as if they were seen from the light source.
Some quick ideas:
The biggest problem when trying to keep dynamic lights speedy is the number of polygons that are potentially visible to the light source: each of these polygons needs to be clipped against the light volume, and this is an expensive operation. The best way to solve this problem is to limit the number of potentially lit polygons, and precalculate a list of polygons that fall into this category. You could, for example, have a really cheap light that just shines into the corner of a room. It will always shine upon the same polygons, even when it moves. This can be exploited much further, if you're creative. :) Another idea: the shadow casting algorithm should also work on 3D cards, I guess. If you don't use a 3D accelerator and its bilinear filtering, the shadows might look better if you filter the lightmap with the shadow after each polygon. This is an expensive operation, though.
That's all for now, I hope you have some good ideas of your
own now. Don't imitate, INNOVATE!
My engine can be found at: http://www.flipcode.com/focus/