Real-time Object Morphing
by (28 March 1999)



Introduction


In this document, I will explain a technique to morph 3D objects in realtime as seen in quite a few demos. Although the actual geometric morphing is not hard at all, keeping the shading and everything speedy requires a special trick. I will first explain what morphing really is, and how it is done. After that, I will tell you how to make it fast.


What is Morphing?


Morphing means that you start out with some object, which can be anything, and over the course of a number of frames change this object into something different. It’s kinda like what Odo from Deep Space 9 could do, albeit on a slightly smaller scale :).

Morphing really is a bit of a fuzzy term, since every operation that changes the geometry of an object can be considered morphing, even the squashing of a rubber ball when it hits the ground. In general though, simple deformations like that are not considered morphing. I use the term for all operations that change the angles between the faces of an object (this includes the squashing I mentioned).

The actual deformation of an object's geometry can be done in a number of ways. The easiest is simply to linearly interpolate the X, Y and Z coordinates of all verts in the object towards those of the second object. Another method converts all coordinates to spherical coordinates and interpolates those. The latter can give really cool effects, but I think it's rather hard to implement in an existing 3D-engine.
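For the curious, here is a rough sketch of the spherical variant: convert both endpoints to spherical coordinates, interpolate those, and convert back. The types and function names (Vec3, Spherical, SphericalLerp) are just placeholders I made up for this example; a real implementation would also want to interpolate the angles along the shortest path to avoid sudden jumps.


 // A rough sketch of the spherical variant. Vec3, Spherical and
 // SphericalLerp are names made up for this example.
 #include <cmath>

 struct Vec3      { float x, y, z; };
 struct Spherical { float r, theta, phi; };   // radius, inclination, azimuth

 static Spherical ToSpherical(const Vec3 &p)
 {
   float r = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
   return { r,
            (r > 0.0f) ? std::acos(p.z / r) : 0.0f,
            std::atan2(p.y, p.x) };
 }

 static Vec3 FromSpherical(const Spherical &s)
 {
   return { s.r * std::sin(s.theta) * std::cos(s.phi),
            s.r * std::sin(s.theta) * std::sin(s.phi),
            s.r * std::cos(s.theta) };
 }

 // t runs from 0 to 1 over the N frames of the morph. A real
 // implementation should interpolate phi along the shortest path.
 Vec3 SphericalLerp(const Vec3 &start, const Vec3 &end, float t)
 {
   Spherical a = ToSpherical(start), b = ToSpherical(end);
   Spherical s = { a.r     + t * (b.r     - a.r),
                   a.theta + t * (b.theta - a.theta),
                   a.phi   + t * (b.phi   - a.phi) };
   return FromSpherical(s);
 }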

However the morphing is done, you have to make sure that the original object and the object it is morphed into have the same number of vertices, the same number of faces, and faces that are constructed from the same vertices. It is possible to simulate, for example, a cube (12 triangles) morphing into an icosahedron (20 triangles) by splitting up some of the cube's triangles into multiple smaller ones, so that the cube is also constructed from 20 triangles.


How is it usually done?


Assuming the coordinates are interpolated linearly in N frames, the actual morphing process, complete with shading, goes like this:

First, calculate the deltas for the interpolation of the coordinates. This is easy: for each vertex the start- and end-values of the X, Y and Z coordinates are known. Therefore, the coordinates of any vertex at a given framenumber are:


    X = Xstart + framenumber * (Xend-Xstart)/N;
    Y = Ystart + framenumber * (Yend-Ystart)/N;
    Z = Zstart + framenumber * (Zend-Zstart)/N;
 


In the actual morphing loop, this will look like:


 X = Xstart;
 Xinc = (Xend - Xstart) / N;
 for (i = 1; i<=N; i++)
 {
   ......
   X += Xinc;
   ......
 }
 


And the same for Y and Z. In practice, you’ll probably have a temporary object with coordinates you continually update using the above formulas. After each update, recalculate the normals and process your object as usual to display it on the screen. In pseudo-code:


 for (all verts in the object)
 {
   Calculate increase values: (end - start) / N;
 }

 TempObject = OriginalObject;

 for (i = 1; i <= N; i++)
 {
   [Process TempObject as usual]
   [Blast TempObject to screen-buffer]

   for (all verts in TempObject)
   {
     X += Xinc; Y += Yinc; Z += Zinc;
   }

   for (all faces in TempObject)
   {
     [Recalculate face-normal]
   }
 }


So you start with a certain object (OriginalObject) and after N frames, this object has become something different. That’s all, morphing is easy :).
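To make the pseudo-code above a bit more concrete, here is a minimal C++ sketch of the setup and the per-frame update, including the face-normal recalculation needed for flat-shading. The structures and function names (Vec3, Tri, Object, SetupMorph, MorphStep) are made up for this example; your own engine will look different.


 // Minimal sketch of the setup and per-frame morph update described above.
 #include <cmath>
 #include <vector>

 struct Vec3 { float x, y, z; };
 struct Tri  { int v1, v2, v3; Vec3 normal; };

 struct Object
 {
   std::vector<Vec3> verts;
   std::vector<Tri>  faces;
 };

 // Called once: precompute the per-vertex increments.
 void SetupMorph(const Object &start, const Object &end, int N, std::vector<Vec3> &inc)
 {
   inc.resize(start.verts.size());
   for (size_t i = 0; i < start.verts.size(); i++)
     inc[i] = { (end.verts[i].x - start.verts[i].x) / N,
                (end.verts[i].y - start.verts[i].y) / N,
                (end.verts[i].z - start.verts[i].z) / N };
 }

 // Called every frame: move the vertices and rebuild the face-normals.
 void MorphStep(Object &temp, const std::vector<Vec3> &inc)
 {
   for (size_t i = 0; i < temp.verts.size(); i++)
   {
     temp.verts[i].x += inc[i].x;
     temp.verts[i].y += inc[i].y;
     temp.verts[i].z += inc[i].z;
   }

   for (Tri &f : temp.faces)
   {
     const Vec3 &a = temp.verts[f.v1], &b = temp.verts[f.v2], &c = temp.verts[f.v3];
     Vec3 e1 = { b.x - a.x, b.y - a.y, b.z - a.z };
     Vec3 e2 = { c.x - a.x, c.y - a.y, c.z - a.z };
     Vec3 n  = { e1.y * e2.z - e1.z * e2.y,     // cross product of two edges
                 e1.z * e2.x - e1.x * e2.z,
                 e1.x * e2.y - e1.y * e2.x };
     float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
     if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
     f.normal = n;
   }
 }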

Everything I said so far is nothing special, and even rather trivial. But although all of this works just fine for flat-shading, you'll hit problems as soon as you want to do something more advanced like gouraud- or fong-shading (fong = fake phong). This is because flat-shading only uses the face-normals, while gouraud and fong need the vertex-normals, and recalculating those every frame is extremely slow.


Why is it slow to re-calculate Vertex-normals?


Calculating face-normals is not very expensive. Considering that the average object consists of a few hundred faces at most, it is perfectly acceptable to do this in realtime, especially on today's fast systems. For the vertex-normals, things are different. Let's look at a standard loop for creating vertex normals. Here's some general-case pseudo-code that Kurt described in his tutorial on vertex normals.


 // Calculating vertex normals using a vertex list and polygon faces (vertex indices)

 loop through your vertex list
 {
   clear this_vertex_normal;    // initialized to [0,0,0]

   loop through your face lists
   {
     for each face vertex, check if it is the same
     as the vertex in our current outer vertex loop.

     If it is,
     {
       this_vertex_normal += this_face.normal;
     }
   }

   this_vertex_normal.Normalize();
 }


So for every vertex the complete face-list is scanned. Since the number of vertices and the number of faces depend closely on each other, this turns out to become a computational nightmare: the cost of the algorithm grows roughly quadratically with the number of elements.
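To make the nested cost explicit, here is roughly what that loop looks like in C++. Again, the type and function names are placeholders; the point is the face-loop inside the vertex-loop.


 // The loop above in C++ form, to make the nested cost explicit.
 // vertexNormals must already be sized to the number of vertices.
 #include <cmath>
 #include <vector>

 struct Vec3 { float x, y, z; };
 struct Tri  { int v1, v2, v3; Vec3 normal; };

 void RecalcVertexNormalsNaive(const std::vector<Tri> &faces,
                               std::vector<Vec3> &vertexNormals)
 {
   for (size_t v = 0; v < vertexNormals.size(); v++)   // every vertex...
   {
     Vec3 n = { 0.0f, 0.0f, 0.0f };

     for (const Tri &f : faces)                        // ...scans every face
     {
       if (f.v1 == (int)v || f.v2 == (int)v || f.v3 == (int)v)
       {
         n.x += f.normal.x;
         n.y += f.normal.y;
         n.z += f.normal.z;
       }
     }

     float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
     if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
     vertexNormals[v] = n;
   }
 }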

Just for clarity: I’m still talking about morphing objects here. In a normal situation, the vertex-normals are calculated once at initialization, and rotated the same way as the rest of the object. No problemo.

And remember, this loop is executed on top of the calculation of the face-normals. All in all, it would be nice if the inner loop could be eliminated. Happily, there is a way to do this. Let’s move on to the next part.


So how do I make it fast?


To eliminate that annoying and expensive inner loop I showed you in the previous chapter, let’s turn the world upside down. Normally, you have a list of vertices, and a list of polygons. These are roughly of the structure:


 class Vertex
 {
   float X, Y, Z;   // Coordinates in 3D-space for this vertex
   ....
 };

 class Polygon
 {
   int V1, V2, V3;  // Indices of the vertices which make up this polygon
   ....
 };


Now the trick to make realtime morphing fast (insert Tadaa.wav here :) is to store, for each vertex, a list of the polygons in which that vertex is used. The previous sentence is essential for the algorithm, so reread it until you understand it perfectly.

This polylist can be pre-determined, since it is a good bet that polygons in an object don’t switch vertices. Your Vertex class now becomes:


 class Vertex
 {
   float X,Y,Z ;  //Coordinates in 3D-space for this vertex
   ....
   int Incident;	//Number of polygons that use this vertex
   int *PolyIndex;	//Linked list of polygon-indices
 }
 


Now somewhere in the initialization part of your 3D-object, the following piece of code, or something similar, should be added:


 loop through your vertex list
 {
   this_vertex.incident = 0;

   loop through your face lists
   {
     for each face vertex, check if it is the same
     as the vertex in our current outer vertex loop.

     If it is,
     {
       add this face to the linked list of this vertex;
       this_vertex.incident++;
     }
   }
 }
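In C++ this precomputation could look roughly like the following. I turned the loops around (one pass over the faces instead of scanning the face-list per vertex), which builds exactly the same lists but is cheaper, and a plain array of face-indices replaces the linked list; the idea is identical. Tri, VertexFaces and BuildIncidenceLists are placeholder names.


 // One possible way to do the precomputation: one pass over the faces.
 #include <vector>

 struct Tri { int v1, v2, v3; };

 struct VertexFaces
 {
   int              incident = 0;   // number of faces using this vertex
   std::vector<int> faceIndex;      // indices of those faces
 };

 void BuildIncidenceLists(const std::vector<Tri> &faces, size_t vertexCount,
                          std::vector<VertexFaces> &perVertex)
 {
   perVertex.assign(vertexCount, VertexFaces());

   for (size_t f = 0; f < faces.size(); f++)
   {
     const int v[3] = { faces[f].v1, faces[f].v2, faces[f].v3 };
     for (int k = 0; k < 3; k++)
     {
       perVertex[v[k]].faceIndex.push_back((int)f);
       perVertex[v[k]].incident++;
     }
   }
 }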


This has to be done only once. Now during morphing, after the normal procedure for flat-shaded objects that I described above, insert the following:


 loop through your vertex list
 {
   this_vertex.normal = (0,0,0);

   repeat this_vertex.incident times
   {
     add the normal of the current face in the linked list
     to the normal of this vertex and proceed to the next face;
   }

   this_vertex.normal /= this_vertex.incident;
 }
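Here is a C++ sketch of this per-frame pass, assuming the per-vertex lists built at initialization and freshly recalculated face-normals. Note that the summed vector is normalized at the end instead of only divided by the incident count, so the result is guaranteed to be a unit vector (which gouraud and fong shading usually expect). Names are placeholders, as before.


 // Fast per-frame vertex-normal pass using the precomputed lists.
 #include <cmath>
 #include <vector>

 struct Vec3 { float x, y, z; };
 struct Tri  { int v1, v2, v3; Vec3 normal; };

 struct VertexFaces
 {
   int              incident = 0;
   std::vector<int> faceIndex;
 };

 void RecalcVertexNormalsFast(const std::vector<Tri>         &faces,
                              const std::vector<VertexFaces> &perVertex,
                              std::vector<Vec3>              &vertexNormals)
 {
   vertexNormals.resize(perVertex.size());

   for (size_t v = 0; v < perVertex.size(); v++)
   {
     Vec3 n = { 0.0f, 0.0f, 0.0f };

     for (int f : perVertex[v].faceIndex)   // only the faces that use this vertex
     {
       n.x += faces[f].normal.x;
       n.y += faces[f].normal.y;
       n.z += faces[f].normal.z;
     }

     float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
     if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
     vertexNormals[v] = n;
   }
 }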


You now have correct vertex-normals for the morphing object in every frame. This allows you to perform gouraud-shading, fong-shading, environment-mapping and whatever else you do using vertex-normals.

There are of course other ways to do this, but as far as I know this is the fastest method which yields correct vertex-normals. I know of at least one demo (Fashion by Logic Design) which doesn’t recalculate the vertex normals at all. Because they used an almost spherically symmetrical object, this had the effect of a moving light-source! It’s in the hornet archive, so go check it out. It’s pretty cool.


Conclusion


The algorithm I presented here is by no means revolutionary. However, it does the job quite well in a lot of situations. There are a lot of details left to fill in, but this depends on the actual implementation. The nice thing about this algorithm is that it can be implemented in any 3D-system with little or no modification to the system itself, because all the extra steps can be carried out outside the actual rendering-pipeline.

The performance I got was a 320-face, 160-vertex torus morphing in any way desired at full frame-rate on a 486/120 (that's a few years ago :). Admittedly, my implementation at the time was not a full-fledged 3D-engine, but it indicates that on a modern computer and for not-too-complex objects, real-time morphing is feasible, even within complex environments. This opens up some nice possibilities. What if your friendly colleagues in Half-Life suddenly morph into hideous monsters?

If you want to contact me about this tutorial for questions or anything, mail to: j.bouwens@student.tue.nl

 
