Building a 3D Portal Engine - Issue 14 - 3D Engine Architecture
by (02 April 1999)



Introduction


This week in Phantom's Guide To Writing A Portal Based 3D Engine: Engine architecture. We will have a look at the general layout of a 3D engine, and discuss the importance of 'early out' thinking. This article was a special request, so... Enjoy, man.

What does an engine look like? Where do you start? Tough to answer when you are just starting, and also tough to answer when you have just completed your fifth engine or so. :) Obviously, there are things that you need every time, and I guess I use them in the same order every time too, so it must be possible to say something about it. Well, here we go...

A 3D engine needs a rasterizer. Of course. Problem is, most people (including me) usually start with this, even when the rasterizing is done by an accelerator. That's probably because for software rendering it can make or break your graphics output, either by being too slow or too inaccurate, or both. For hardware rendering, you have to dive into things like Direct3D, or Glide, or OpenGL. Whatever you choose (and PLEASE, don't choose Direct3D) you have to learn how to use the library, and that's usually something completely different from what you had in mind when you started programming. I mean, you start coding a 3D engine because you have this terrific idea or algorithm, not because you were so incredibly curious about what Microsoft did to waste the performance of a perfectly good 3D card. Obviously, the rasterizer (or rasteRAZOR, as I tend to call my highly optimized software fillers :) is the last part of your 3D engine pipeline.

Let's see what comes before that. Well, we need to rotate some coordinates, obviously. And we need some shading. And polygons need to be clipped to the screen. And of course, we need some user input, some collision detection, and a whole lot more.

And that shows us the basis of 3D engine design: "HOW CAN I PREVENT AS MANY ROTATIONS/CLIPS/DRAWS AS POSSIBLE?" - or: "The Important Early Out Question". The answers to this question should determine the order in which you do things.

In my opinion, this means that any rendering pipeline should start with culling. Processing what is not seen is a waste of time, time that could have been used for cool effects. When we designed the Lost Shadows engine for Lost Boys, we wanted to be able to display worlds that are only limited by disk space. If you have a billion polygons, your program can't process them all. You can't rotate a billion vertices within a second, and even if you could, it would be a waste of time when you could only see a thousand of them. So, rotations happen after culling. The same goes for a lot of things. Light sources don't need to cast beams when those beams are not visible. Actors don't need to be animated when the user can't see them. Collision detection doesn't have to be done for a polygon on the other side of the desert. And so on.
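To make that concrete, here's a minimal sketch of what such an 'early out' loop could look like. All names here (Sector, Camera, PolyVisible and so on) are made up for the example, not taken from any particular engine; the point is purely the ordering: reject as much as possible before a single vertex gets rotated.

```cpp
struct Polygon { /* vertices, normal, texture info... */ };
struct Sector  { Polygon* poly; int polycount; };

struct Camera
{
    // Hypothetical visibility tests, e.g. portal or frustum based.
    bool SectorVisible( const Sector& s ) const;
    bool PolyVisible( const Polygon& p ) const;
};

void TransformAndDraw( const Polygon& p, const Camera& cam );

void RenderWorld( Sector* sector, int count, const Camera& cam )
{
    for ( int i = 0; i < count; i++ )
    {
        // Early out #1: skip entire sectors the camera can't see.
        if ( !cam.SectorVisible( sector[i] ) ) continue;
        for ( int j = 0; j < sector[i].polycount; j++ )
        {
            // Early out #2: reject individual polygons before doing
            // any matrix work on their vertices.
            if ( !cam.PolyVisible( sector[i].poly[j] ) ) continue;
            TransformAndDraw( sector[i].poly[j], cam );
        }
    }
}
```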

The last example (collision detection) is a potential problem. I mean, a typical game loop looks like this:
  • Get User Input
  • Respond
  • Show New State
That means the user first pushes his joystick forward, so he expects his virtual representation to move forward. Cool, but there's a wall there. So, the character moves slightly forward, bounces back, and arrives at a different location. Since the user was moving rather fast, all this happens in a single frame. That means that collision detection has to be performed BEFORE rendering, as it is part of the 'response' of the program to the user. I call this a 'problem', since collision detection can often use intermediate results from the renderer (e.g., what is visible or nearby).
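A bare-bones version of that loop might look like this. Everything here (Input, Vector, CollideAndSlide and friends) is made up for the example; what matters is that the collision response runs before RenderFrame, so the user never gets to see the 'inside the wall' state.

```cpp
struct Vector { float x, y, z; };
struct Input  { Vector move; bool quit; };

Input  GetUserInput();
Vector CollideAndSlide( const Vector& from, const Vector& to );
void   RenderFrame( const Vector& viewpos );

void GameLoop()
{
    Vector playerpos = { 0, 0, 0 };
    for (;;)
    {
        Input in = GetUserInput();                    // 1. get user input
        if ( in.quit ) break;
        Vector wanted = { playerpos.x + in.move.x,    // 2. respond: where
                          playerpos.y + in.move.y,    //    the user WANTS
                          playerpos.z + in.move.z };  //    to go...
        playerpos = CollideAndSlide( playerpos, wanted ); // ...and where
                                                          // he ends up
        RenderFrame( playerpos );                     // 3. show new state
    }
}
```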

An important thing that you may have noticed by now is that culling should happen before applying rotation/translation matrices. That means that culling should happen in untransformed world space, and that's harder than culling in 2D screen space. Well, how much harder depends on how you look at it, but it's usually a bit harder to imagine operations in 3D space than in 2D screen space. It's essential though, as you really don't want to rotate tons of invisible stuff.
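A typical way to do it (a sketch, with made-up types) is to keep the six frustum planes in world space and test each object's bounding sphere against them; if the sphere lies entirely behind one of the planes, none of the vertices inside it need to be rotated at all.

```cpp
struct Vector { float x, y, z; };

// A plane stores all points p with n.p + d = 0, with n normalized
// and pointing into the frustum.
struct Plane  { Vector n; float d; };

static float Dot( const Vector& a, const Vector& b )
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// World space culling: if the bounding sphere is completely on the
// outside of any of the six planes, the object is invisible. Note
// that this may keep some invisible objects near the frustum corners;
// that's fine, those get removed by the clipper later on.
bool SphereVisible( const Vector& center, float radius,
                    const Plane frustum[6] )
{
    for ( int i = 0; i < 6; i++ )
        if ( Dot( frustum[i].n, center ) + frustum[i].d < -radius )
            return false;   // early out
    return true;
}
```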

But there's another thing that you should do before you start rotating vertices all over the place. Clipping is often done in 2D. Some people think it's faster; most people just do it like that because it's easier. Well, it isn't faster, and it's barely easier. And it introduces some nasty problems. When you clip in 2D, for example, you get problems with your texture coordinates. You see, these don't vary linearly along the polygon edges in 2D, unless the polygon is parallel to the screen. In 3D, you don't have this problem. But more importantly, you might clip away a polygon that you just rotated. And that's a shame. :) So, be a man (or a woman, just don't get confused, ask a doctor if you're unsure :), and learn how to clip in 3D. You'll need it anyway for the more advanced 3D stuff.
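For reference, here's what clipping against a single plane looks like in 3D (Sutherland-Hodgman style, as in issue 07; the Vertex layout is made up for the example). Note how u and v are interpolated with the same factor t as the position: in 3D that's correct, while in 2D screen space it would only be correct for polygons parallel to the screen.

```cpp
struct Vertex { float x, y, z, u, v; };

// Clip a polygon against one plane (n.p + d >= 0 counts as 'inside').
// 'out' must have room for count + 1 vertices.
int ClipToPlane( const Vertex* in, int count, Vertex* out,
                 float nx, float ny, float nz, float d )
{
    int outcount = 0;
    for ( int i = 0; i < count; i++ )
    {
        const Vertex& a = in[i];
        const Vertex& b = in[(i + 1) % count];
        float da = nx * a.x + ny * a.y + nz * a.z + d;
        float db = nx * b.x + ny * b.y + nz * b.z + d;
        if ( da >= 0 ) out[outcount++] = a;      // a is inside: keep it
        if ( (da >= 0) != (db >= 0) )            // edge crosses the plane
        {
            float t = da / (da - db);            // intersection factor
            Vertex& c = out[outcount++];
            c.x = a.x + t * (b.x - a.x);
            c.y = a.y + t * (b.y - a.y);
            c.z = a.z + t * (b.z - a.z);
            c.u = a.u + t * (b.u - a.u);         // linear in 3D...
            c.v = a.v + t * (b.v - a.v);         // ...unlike in 2D
        }
    }
    return outcount;
}
```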

Finally, you have determined a good set of polygons to draw. You have done the best you could to minimize the set, you have clipped it to the view frustum, so now...

Yes, now you can rotate. And translate. :) And, of course, you'll want to do some lights. Lights can be done at this stage, or before rotating; it doesn't really matter, and depends very much on how you want to implement your illumination system. Using lightmaps and texture caching is of course completely different from just applying a single color to each polygon, depending on its orientation. I will not go into details here. One thing that I would like to mention is that I think lightmaps are going to be history real quickly now. Everyone seems to love them, because they allow polygons to be rendered in a single pass and so on, but I really don't understand why everyone insists on using this old Quake technology... I mean, sending a texture to a modern accelerator is one of the worst things you can do. Drawing a polygon twice to get it shaded barely takes time, and gets faster all the time. Then WHY is everybody so anxious to save a pass by uploading several kilobytes to that card? Man, that worked for software rendering, but it caused many man-months of STUPID work for lots of commercial games. I could go on ranting about this, and about dynamic lights and bumpmapping, which are all a lot harder 'thanks' to those bloody lightmaps...

Anyway. Back to rotations. :)
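Here's what this stage boils down to (a sketch with made-up types; the actual matrix math was covered back in issue 03): each surviving vertex is rotated and translated to camera space, and with the simplest possible lighting each polygon gets a single intensity from the angle between its normal and the light direction.

```cpp
struct Vector { float x, y, z; };
struct Matrix { float cell[12]; };  // 3x4: rotation plus translation

// Rotate and translate a world space point to camera space.
Vector Transform( const Matrix& m, const Vector& p )
{
    Vector r;
    r.x = m.cell[0] * p.x + m.cell[1] * p.y + m.cell[2]  * p.z + m.cell[3];
    r.y = m.cell[4] * p.x + m.cell[5] * p.y + m.cell[6]  * p.z + m.cell[7];
    r.z = m.cell[8] * p.x + m.cell[9] * p.y + m.cell[10] * p.z + m.cell[11];
    return r;
}

// Simplest possible lighting: one intensity per polygon, taken from
// the cosine of the angle between its normal and the light direction
// (both assumed normalized); faces pointing away get zero.
float FlatShade( const Vector& normal, const Vector& lightdir )
{
    float i = normal.x * lightdir.x + normal.y * lightdir.y
            + normal.z * lightdir.z;
    return ( i > 0 ) ? i : 0;
}
```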

And once that is done, you can rasterize the resulting polygons in 2D screen space. With the current accelerators, rasterizing needs special attention. There are still many people without good accelerators, so you may want to provide a good software rasterizer. If you choose to do a hardware rasterizer (too), you still need to be careful: some APIs prefer polygons sorted on texture, others rather have them depth sorted. Paying attention to things like that can make a huge difference. For example, when the Unreal programmers added OpenGL support, they had to change the graphics pipeline to get acceptable performance. Recently, the TRIBES team had major trouble getting the TNT fast. In both cases, the problems were caused by an engine that was initially written for Glide, and later changed to support OpenGL. Difficulties like these are perhaps hard to prevent, but you have to pay special attention to this part of your renderer if you want to be able to support multiple rasterizers at all.
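One way to keep that part swappable (a sketch, not how Unreal or TRIBES actually did it) is to hide each backend behind a small interface and let the backend itself tell the pipeline how it wants its polygons ordered:

```cpp
struct Polygon2D { /* screen space vertices, texture id, depth... */ };

enum SortMode { SORT_BY_TEXTURE, SORT_BY_DEPTH, SORT_NONE };

class Rasterizer
{
public:
    virtual ~Rasterizer() {}
    // Each backend states its preference, instead of the pipeline
    // hardcoding the habits of one particular API.
    virtual SortMode PreferredSort() const = 0;
    virtual void Draw( const Polygon2D* poly, int count ) = 0;
};

// Hypothetical helper that orders the list as requested.
void SortPolygons( Polygon2D* poly, int count, SortMode mode );

void FinishFrame( Rasterizer& rast, Polygon2D* poly, int count )
{
    SortPolygons( poly, count, rast.PreferredSort() );
    rast.Draw( poly, count );
}
```

A Glide, OpenGL or software backend can then each return a different SortMode, and the rest of the pipeline never needs to know which one it is talking to.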

In the near future, this may become a larger problem. Current accelerators barely support geometry processing, but future hardware might very well be capable of doing 3D transformations for you, meaning that larger parts of your pipeline need to be customizable. While this certainly offers great potential, it also makes developing an engine a nightmare, especially when some vendors decide to implement a completely different interface for this than others.

So, this is the general 3D renderer layout. Let's summarize:
  • Culling, in world space
  • Clipping, in world space
  • Rotate and translate to screen space
  • Rasterize
There may be occasions where you might want to do things differently, but always keep in mind that your motivation should be 'EARLY OUT': if you introduce more work for the processor just to make things easier for yourself, you're gonna regret it.
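Or, squeezed into one hypothetical per-frame function (all helpers made up, each mapping to one bullet above):

```cpp
struct World    { /* sectors, portals, actors... */ };
struct Camera   { /* position, frustum, matrix... */ };
struct PolyList { /* the polygons that survive each stage */ };

PolyList CullWorldSpace( const World& world, const Camera& cam );
void     ClipWorldSpace( PolyList& polys, const Camera& cam );
void     TransformToScreen( PolyList& polys, const Camera& cam );
void     Rasterize( const PolyList& polys );

void RenderFrame( const World& world, const Camera& cam )
{
    PolyList visible = CullWorldSpace( world, cam );  // 1. culling
    ClipWorldSpace( visible, cam );                   // 2. clipping, still 3D
    TransformToScreen( visible, cam );                // 3. rotate & translate
    Rasterize( visible );                             // 4. rasterize
}
```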


Article Series:
  • Building a 3D Portal Engine - Issue 01 - Introduction
  • Building a 3D Portal Engine - Issue 02 - Graphics Output Under Windows
  • Building a 3D Portal Engine - Issue 03 - 3D Matrix Math
  • Building a 3D Portal Engine - Issue 04 - Data Structures For 3D Graphics
  • Building a 3D Portal Engine - Issue 05 - Coding A Wireframe Cube
  • Building a 3D Portal Engine - Issue 06 - Hidden Surface Removal
  • Building a 3D Portal Engine - Issue 07 - 2D & 3D Clipping: Sutherland-Hodgeman
  • Building a 3D Portal Engine - Issue 08 - Polygon Filling
  • Building a 3D Portal Engine - Issue 09 - 2D Portal Rendering
  • Building a 3D Portal Engine - Issue 10 - Intermezzo - 8/15/16/32 Bit Color Mixing
  • Building a 3D Portal Engine - Issue 11 - 3D Portal Rendering
  • Building a 3D Portal Engine - Issue 12 - Collision Detection (Guest Writer)
  • Building a 3D Portal Engine - Issue 13 - More Portal Features
  • Building a 3D Portal Engine - Issue 14 - 3D Engine Architecture
  • Building a 3D Portal Engine - Issue 15 - Space Partitioning, Octrees, And BSPs
  • Building a 3D Portal Engine - Issue 16 - More On Portals
  • Building a 3D Portal Engine - Issue 17 - End Of Transmission
