Building a 3D Portal Engine - Issue 08 - Polygon Filling
by (19 February 1999)



Introduction


This is the last article about the basics of 3D coding for a while. After this one, we'll dive straight into the essentials of Portal rendering, starting with the most simplistic form and advancing to the better ones, exploring along the way the opportunities that they bring.

But before we get to that, there's one last issue that has to be addressed. You can draw wireframe polygons; that's easy. But how are those incredibly cool perspective correct bilinear filtered bumpmapped tileable true-color polygons done in real-time? Well, it's actually not too hard. There are a few things to explain, so let's get this article structured this time:
  • Generic polygon rendering: Tracing outlines
  • Interpolate - interpolate - interpolate
  • The buzzwords, and how to do it - Phong, bumps, bilerp
    So, to start with the first one: tracing outlines. No matter how your final polygon should look, this is how to start; every polygon filler I have ever seen uses this approach.

    To fill a polygon, you draw horizontal (or in some cases, vertical) lines, from left to right, top to bottom, until you have drawn the entire polygon. So, you have to start with finding out what the start points and end points of these lines are. That's what I call 'outline tracing' or 'edge tracing'.

    It's done like this: first, define two arrays with as many elements as there are lines on your screen. The first array is called 'x_start', the second 'x_end'. Next, find out which lines your polygon covers. This can be done by taking the minimum and maximum y coordinates over all the vertices of your polygon. Then, for this range, initialize the arrays with impossible values; for example, x_start could be set to '10000', x_end to '-10000'. Finally, process the edges. Interpolate from the start of each edge to its end, stepping one screen line per iteration. Compare the x value on each line the edge covers with the minimum and maximum values that you keep in the arrays, and replace those values if necessary.

    Here's some pseudo-code:

    
         // Declare the arrays
         int x_start[SCREEN_HEIGHT], x_end[SCREEN_HEIGHT];
         // Determine first and last line that the polygon covers
         int y_min = SCREEN_HEIGHT + 1;
         int y_max = -1;
         for (int i = 0; i < polygon.vertices(); i++)
         {   if (polygon.vertex(i).y < y_min) y_min = (int)polygon.vertex(i).y;
             if (polygon.vertex(i).y > y_max) y_max = (int)polygon.vertex(i).y;
         }
         // Degenerate polygon (zero height): nothing to draw
         if (y_min >= y_max) return;
         // Initialize arrays for this range with impossible values
         for (int i = y_min; i <= y_max; i++)
         {   x_start[i] = SCREEN_WIDTH + 1;
             x_end[i] = -1;
         }
         // Trace edges
         for (int i = 0; i < polygon.vertices(); i++)
         {   // Determine edge coordinates: vertex i to vertex i + 1, wrapping
             float x1 = polygon.vertex(i).x;
             float y1 = polygon.vertex(i).y;
             float x2 = polygon.vertex((i + 1) % polygon.vertices()).x;
             float y2 = polygon.vertex((i + 1) % polygon.vertices()).y;
             // We want to trace from top to bottom
             if (y2 < y1)
             {   swap(y1, y2);
                 swap(x1, x2);
             }
             // Skip horizontal edges; they would cause a division by zero below
             if ((int)y1 == (int)y2) continue;
             // Determine slope of edge (x step per screen line)
             float deltax = (x2 - x1) / (y2 - y1);
             for (int p = (int)y1; p < (int)y2; p++)
             {   int xpos = (int)x1;
                 if (xpos < x_start[p]) x_start[p] = xpos;
                 if (xpos > x_end[p]) x_end[p] = xpos;
                 // Advance one screen line
                 x1 += deltax;
             }
         }


    At the end of this process, x_start and x_end contain exactly what you need: the contours of the polygon. Note that there is another way to trace the outlines of triangles; in that case it's more efficient to skip the arrays and trace the left and right side of the polygon simultaneously. I will not explain this technique, since I hardly ever use it; you could add it as a special case to your 3D engine, for polygons that are (still) triangles after clipping.
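
    To see how the arrays are consumed afterwards, here is the simplest possible filler, a flat fill with a single color. This is a minimal sketch; 'screen' and 'color' are assumed names, not part of the original pseudo-code:

         // Fill the polygon using the traced contours; lines where no edge was
         // traced still hold their impossible values, so the inner loop skips them
         for (int line = y_min; line <= y_max; line++)
             for (int x = x_start[line]; x <= x_end[line]; x++)
                 screen[line * SCREEN_WIDTH + x] = color;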

    OK, the next important thing for polygon filling is INTERPOLATION. You have already seen some interpolation in the pseudo-code, where the x position of the edge is interpolated from the minimum y value to the maximum y value. Filling the lines from left to right with a single color requires no further interpolation of course, but as soon as you want to do even a little bit more, you'll need a big dose of interpolation.

    Imagine that you assigned a texture to your polygon. Now each vertex has not only an 'x' and 'y' position, but also a 'U' and a 'V' value. In this case, you need four extra arrays: u_start, u_end, v_start and v_end. During the edge tracing, you should now remember not only the x position per screen line, but also the U and V values at the start and end of each line. The U and V deltas can be calculated the same way as the x delta in the pseudo-code above.
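
    A minimal sketch of the extended edge tracer's inner part, assuming each vertex also carries 'u' and 'v' members (those names are mine):

         // After the top-to-bottom swap (which must now also swap u1/u2 and
         // v1/v2), compute one slope per interpolated value
         float deltax = (x2 - x1) / (y2 - y1);
         float deltau = (u2 - u1) / (y2 - y1);
         float deltav = (v2 - v1) / (y2 - y1);
         for (int p = (int)y1; p < (int)y2; p++)
         {   int xpos = (int)x1;
             if (xpos < x_start[p]) { x_start[p] = xpos; u_start[p] = u1; v_start[p] = v1; }
             if (xpos > x_end[p])   { x_end[p] = xpos;   u_end[p] = u1;   v_end[p] = v1; }
             x1 += deltax; u1 += deltau; v1 += deltav;
         }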

    Once you have your arrays filled with x, U and V values, you want to draw the spans. And here's the interpolation again: just like a loop that draws a given line from x_start to x_end interpolates the x position, you should interpolate from u_start to u_end, and from v_start to v_end. So you need a delta-U and a delta-V PER LINE, calculated as follows:

    
        delta_u = (u_end[line]-u_start[line])/(x_end[line]-x_start[line]);
        delta_v = (v_end[line]-v_start[line])/(x_end[line]-x_start[line]);
     


    So, when drawing pixel 10 of a span, you should draw the texel at the U/V location that is determined as follows:

    
        u_position = u_start[line] + delta_u * 10;
        v_position = v_start[line] + delta_v * 10;
     
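
    In practice you don't multiply per pixel; you add the deltas once per pixel instead. A sketch of the resulting span loop, where 'texture' and 'screen' are assumed pixel arrays and TEXTURE_SIZE is an assumed power-of-two texture width (the masking also makes the texture tile):

         // Incremental version of the formula above
         float u = u_start[line], v = v_start[line];
         for (int x = x_start[line]; x <= x_end[line]; x++)
         {   screen[line * SCREEN_WIDTH + x] =
                 texture[((int)v & (TEXTURE_SIZE - 1)) * TEXTURE_SIZE +
                         ((int)u & (TEXTURE_SIZE - 1))];
             u += delta_u;
             v += delta_v;
         }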


    Of course, just like in the pseudo-code for the edge tracer, you need to decide whether you want to use floating point or fixed point for your deltas, since integer values are clearly not good for storing deltas that are smaller than 1.
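
    A common choice is 16.16 fixed point: the top 16 bits hold the integer part, the low 16 bits the fraction. A sketch, using names from the span loop above:

         // Convert the floating point start value and delta once per span
         int u_fix       = (int)(u_start[line] * 65536.0f);
         int delta_u_fix = (int)(delta_u * 65536.0f);
         for (int x = x_start[line]; x <= x_end[line]; x++)
         {   int u_texel = u_fix >> 16;   // back to an integer texel coordinate
             // ... fetch the texel at u_texel, as in the span loop above ...
             u_fix += delta_u_fix;        // a pure integer add per pixel
         }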

    Every polygon filler is based on these principles. With this knowledge in mind, let's explore some techniques that you'll see pretty often in the feature-lists of 3D engines:

    Gouraud Shading:
    This is the name for interpolating color values over a polygon. Each vertex is assigned an intensity or RGB color, which is then first interpolated over the edges, just like the texture in the above example, and then over the lines, again just like the texture. You can either interpolate a single value ('intensity'), to get a single light color, or interpolate red, green and blue separately.
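
    The per-span recipe is identical to the U/V case. A sketch for a single interpolated intensity, where i_start/i_end are intensity edge arrays analogous to u_start/u_end, and 'shade' is a hypothetical helper that scales a color by an intensity:

         // Interpolate one intensity value over a span, just like U and V
         float delta_i   = (i_end[line] - i_start[line]) / (x_end[line] - x_start[line]);
         float intensity = i_start[line];
         for (int x = x_start[line]; x <= x_end[line]; x++)
         {   screen[line * SCREEN_WIDTH + x] = shade(color, intensity);
             intensity += delta_i;
         }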

    (Fake) Phong Shading:
    For this technique a texture containing a picture of a light spot is projected over the polygon. Each vertex of the polygon now has not only a U/V for its own texture, but also a U/V for the phong map. The U's and V's for the phong map are calculated using the normal vectors of the vertices. This is actually quite a trick, since vertices don't have normal vectors: for this technique, each vertex is assigned a normal vector by averaging the normal vectors of the polygons that it connects. Since normal vectors are of the form (x, y, z), where x, y and z are values between -1 and 1, fake phong shading uses any two of these values as U and V in the phong map. This results in a smooth flow of 'light' over the polygon when the normals of the polygons in an object change, for example when the object rotates. The big advantage of this technique over Gouraud shading is that you can have a light spot in the middle of a polygon, which is impossible when you interpolate color values over the polygon. Besides this, you can draw the light spot yourself, so it can be as sharp or as soft as you want, or you can even use a normal picture, for example of the environment of the object. That last variant is also called 'environment mapping'.
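
    One way to turn a normal into phong map coordinates (a sketch; PHONG_SIZE is the assumed width and height of the square phong map, and the names are mine):

         // Map two components of a vertex normal, each in [-1, 1], to [0, PHONG_SIZE - 1]
         float phong_u = (vertex_normal.x + 1.0f) * 0.5f * (PHONG_SIZE - 1);
         float phong_v = (vertex_normal.y + 1.0f) * 0.5f * (PHONG_SIZE - 1);
         // These are then interpolated over the edges and spans exactly like
         // any other U/V pair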

    Bump Mapping:
    This is a bit too complex to go into in much detail, but I'll try anyway: for bump mapping you need an extra texture, called the bumpmap. This texture represents the relief on the polygon. The goal of any bump mapping algorithm is to somehow establish a relation between the light intensity of individual pixels and the relief on the polygon. You also want to take the direction of the light into account, of course, so the polygon doesn't look the same from all directions. In my Focus engine, I implemented a really simple bump mapper, based on the fake phong shading I just explained. The bumpmap is first processed to get it into a special format: the first four bits of each pixel in the bumpmap represent the 'slope' of that pixel in the vertical direction, the last four bits the slope in the horizontal direction. When fetching pixels from the phong map, I add these values to the U and V that I would normally use. There are many other algorithms for this kind of thing, but this simple technique already gives a convincing result.
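
    The per-pixel work then looks roughly like this (a sketch; I'm assuming the 4-bit slopes are stored biased by 8 so they can be negative, and the names are mine):

         // Unpack the two slopes from the preprocessed bumpmap texel and use
         // them to offset the phong map fetch
         unsigned char slopes = bumpmap[bump_v * BUMP_SIZE + bump_u];
         int dv = (slopes >> 4) - 8;    // vertical slope, first four bits
         int du = (slopes & 15) - 8;    // horizontal slope, last four bits
         int light = phongmap[(phong_v + dv) * PHONG_SIZE + (phong_u + du)];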

    Perspective Correct Texture Mapping:
    Since polygons are drawn on the screen with perspective, you can't really interpolate U and V values linearly, contrary to what I did above. A really simple example to visualize this: imagine a checkerboard with one edge close to you and the opposite edge much further away. It's clear that the squares close to you appear larger than the squares further away. If you interpolate linearly, you get a weird bend in the board. This is where perspective correct texture mapping kicks in. Here's how to do it: U and V aren't linear in screen space, as we just saw. But since we use something like 1/Z for our perspective, U/Z and V/Z ARE linear in screen space. So all you have to do is interpolate U/Z and V/Z (and 1/Z) over the edges of your polygon. The values that you store in the edge arrays can again be interpolated linearly over the lines that you draw. At the very last moment, you convert back to U and V by doing the opposite operation: (U/Z)*Z and (V/Z)*Z. This is of course an expensive operation, and you don't want to do it for every pixel that you draw on the screen. The solution is to do it once every 8 or 16 pixels, and interpolate linearly between those values. This interpolation introduces errors again, of course, but usually they are hard to see, unless you watch closely or force 'worst cases' (like standing close to a wall in Quake).
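
    A sketch of a perspective correct span loop with one division every 16 pixels (the variable names are mine; handling of the final, shorter segment is omitted):

         // u/z, v/z and 1/z are linear in screen space, so step those and
         // recover the real U and V only at segment boundaries
         float uz = uz_start[line], vz = vz_start[line], iz = iz_start[line];
         for (int x = x_start[line]; x <= x_end[line] - 16; x += 16)
         {   float z0 = 1.0f / iz;                     // real z at segment start
             float z1 = 1.0f / (iz + 16 * delta_iz);  // real z at segment end
             float u0 = uz * z0, u1 = (uz + 16 * delta_uz) * z1;
             float v0 = vz * z0, v1 = (vz + 16 * delta_vz) * z1;
             // ... draw 16 pixels, stepping U from u0 to u1 and V from v0 to v1
             //     in steps of (u1 - u0) / 16, like an ordinary affine span ...
             uz += 16 * delta_uz; vz += 16 * delta_vz; iz += 16 * delta_iz;
         }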

    If I need to spend more text on anything discussed in this document, let me know! You should have enough material again to have a good time experimenting with the techniques.


    Article Series:
  • Building a 3D Portal Engine - Issue 01 - Introduction
  • Building a 3D Portal Engine - Issue 02 - Graphics Output Under Windows
  • Building a 3D Portal Engine - Issue 03 - 3D Matrix Math
  • Building a 3D Portal Engine - Issue 04 - Data Structures For 3D Graphics
  • Building a 3D Portal Engine - Issue 05 - Coding A Wireframe Cube
  • Building a 3D Portal Engine - Issue 06 - Hidden Surface Removal
  • Building a 3D Portal Engine - Issue 07 - 2D & 3D Clipping: Sutherland-Hodgeman
  • Building a 3D Portal Engine - Issue 08 - Polygon Filling
  • Building a 3D Portal Engine - Issue 09 - 2D Portal Rendering
  • Building a 3D Portal Engine - Issue 10 - Intermezzo - 8/15/16/32 Bit Color Mixing
  • Building a 3D Portal Engine - Issue 11 - 3D Portal Rendering
  • Building a 3D Portal Engine - Issue 12 - Collision Detection (Guest Writer)
  • Building a 3D Portal Engine - Issue 13 - More Portal Features
  • Building a 3D Portal Engine - Issue 14 - 3D Engine Architecture
  • Building a 3D Portal Engine - Issue 15 - Space Partitioning, Octrees, And BSPs
  • Building a 3D Portal Engine - Issue 16 - More On Portals
  • Building a 3D Portal Engine - Issue 17 - End Of Transmission

    Copyright 1999-2008 (C) FLIPCODE.COM and/or the original content author(s). All rights reserved.