U-V Overflow in Perspective Texture Mapping
Question submitted by (30 June 1999)

  I am using a common method of perspective texture mapping: defining a point and two direction vectors in world space (P, M, N) that give the texture's position and orientation. These are then transformed into view space. From them I compute the vectors A, B, C, and from these I use the projected 2D polygon screen coordinates to compute a, b, c (really u/z, v/z, and 1/z). To compute individual coordinates in texture space I take the reciprocal of c and multiply it by a and b to get u and v.
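For readers unfamiliar with this setup, here is a minimal sketch of it in C. It assumes a focal length of 1 and an eye ray of (x, y, 1) through screen point (x, y); the variable names are mine, not necessarily the questioner's:

```c
#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}

static double dot(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* From the view-space texture basis (P, M, N), build three vectors whose
   dot products with the eye ray r = (x, y, 1) give the screen-linear
   quantities a, b, c (u/z, v/z, 1/z up to a common scale). */
static void make_abc(Vec3 P, Vec3 M, Vec3 N, Vec3 *A, Vec3 *B, Vec3 *C) {
    *A = cross(N, P);   /* a = r . A */
    *B = cross(P, M);   /* b = r . B */
    *C = cross(M, N);   /* c = r . C */
}

/* Recover (u, v) at screen position (x, y): two dots and one divide. */
static void uv_at(Vec3 A, Vec3 B, Vec3 C, double x, double y,
                  double *u, double *v) {
    Vec3 r = { x, y, 1.0 };
    double ooc = 1.0 / dot(r, C);   /* reciprocal of c */
    *u = dot(r, A) * ooc;
    *v = dot(r, B) * ooc;
}
```

The cross products follow from solving t·r = P + uM + vN with triple products; a, b, and c are exactly linear across the screen, which is what makes per-span interpolation possible in the first place.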

This works great except at the edges of polygons where I often get u, v values that fall outside the texture boundaries. Apparently this is a common problem, since every example of this approach I have seen has been accompanied by some sort of hack to get around this. I have tried all of the common "fixes" but none have really been satisfactory. I have tried:
  • Letting my textures wrap around (not an option for my particular implementation.)
  • Clamping the u,v values at the ends of each scanline (time-consuming, plus it makes the texture swim when the polygon is rotated.)
  • Stretching the P, M, N vectors to cover an area in 3-space larger than the polygon (don't like this because of the storage and processing costs for extra vertices. I also tried stretching P, M, N at runtime but have found no sure formula for how much to stretch.)
  • Storing u, v coordinates at each polygon vertex (don't like the extra storage and processing during clipping and projection)
  • Using sub-pixel precision in edge stepping when rasterizing polygons (I've read that taking care to rasterize polygon edges to always step on the center of pixels will fix the problem. I've found that this helps somewhat, but polygons at certain angles still exhibit the problem.)
  My question, then: what is the _RIGHT_ way to fix this?

      EXCELLENT question... I always love it when people ask for the RIGHT way to fix something. In this case, there are a few aspects to consider...

    The first is to make sure you're properly rasterizing your polygon (i.e. sub-pixel and sub-texel rasterization.) If you're not already doing both, you really should consider it. I promise that once you start, you won't want to go back. The overhead is negligible for sub-pixel and minor for sub-texel.

    I've written a couple docs on the subject:
  • sub-pixel accuracy
  • sub-texel accuracy
    But this won't solve your problem completely... The primary problem I find when I get overflows has to do with an n-pixel sub-affine perspective-correct texture mapper (say that 10 times as fast as you can, and I'll email you a lollipop.) Say we're using 16-pixel spans...

    Picture a scanline split into three spans: the first two are full 16-pixel sub-spans, the last is a partial sub-span (sub-sub-span?.) It's usually more straightforward to compute the delta across the final span as if it were a full 16-pixel sub-span. But unfortunately, the perspective curve through texture space doesn't intersect the final pixel at the right place. This means that you can underflow or overflow your texture coordinates in the last sub-span very easily. To solve this, you must make sure that you calculate the delta across that span using the proper end-points of the sub-span.

    You never said that you were actually using sub-affine texture mapping, though. So if you're not, don't lose hope; there's even more to come...

    Of course, you should be using floating point, at least until you get it working the way you want. But even if you ARE using floating-point values (single or double precision) you can still end up with overflow. This is because the deltas you calculate to step across your span will have an inherent error in them (other than in some very rare cases.)

    In this case, each time you step your values to the next pixel (or sub-span) you end up accumulating error. This means that it's actually harder to maintain accuracy when you're doing a divide-per-pixel as opposed to a sub-affine method.

    You can reduce this by performing a multiply rather than an add per pixel (simply multiply your delta by the number of pixels you've already processed.) On common PCs, the overhead of the multiply is much less than most people would think. And if your texture mapper is fast enough (read: "memory bound") it is completely free.
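A small demonstration of the difference, with my own numbers (single precision, the classic 0.1 increment, 1000 steps). "Exact" here means the same arithmetic carried out in double precision:

```c
#include <assert.h>
#include <math.h>

/* Compare stepping a single-precision interpolant by repeated addition
   (error accumulates every step) against multiplying the delta by the
   pixel index (one rounding per pixel). Writes both absolute errors,
   measured against double-precision arithmetic on the same delta. */
static void step_errors(float du, int n, double *err_add, double *err_mul) {
    float u_add = 0.0f;
    for (int i = 0; i < n; i++)
        u_add += du;                 /* rounding error accumulates */

    float u_mul = (float)n * du;     /* single multiply, single rounding */

    double exact = (double)n * (double)du;
    *err_add = fabs((double)u_add - exact);
    *err_mul = fabs((double)u_mul - exact);
}
```

With du = 0.1f and n = 1000, the accumulated sum drifts by several orders of magnitude more than the multiplied value, which is exactly the drift that pushes u and v past the texture edge on long spans.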

    There are other issues to consider, but these are some pretty low-level ones. Some time ago, I wrote some tutorial source that uses each technique for texture mapping (affine, sub-affine and a fully accurate divide-per-pixel.) Throughout this source, I explain (in comments) the precision problems I encountered in each, along with the solutions I chose. Since it's older code it compiles for DOS, but it's still perfectly valid (and should be reasonably easy to port.) I would have included a compiled version of each, but I no longer keep a DOS compiler on my machine.

    I will enter this into the public domain for the first time here (tmap.zip).

    Response provided by Paul Nettle

    This article was originally an entry in flipCode's Fountain of Knowledge, an open Question and Answer column that no longer exists.


    Copyright 1999-2008 (C) FLIPCODE.COM and/or the original content author(s). All rights reserved.