ATI Mojo Day 2002: San Francisco
by Max McGuire (01 October 2002)
ATI held its first North American ATI Mojo Day at the Grand Hyatt in San Francisco, CA on September 19, 2002. The San Francisco Mojo Day, which brought together over 200 game developers and other graphics professionals, was a follow-up to a similar event ATI sponsored a month earlier at the European Developer's Conference. Attendees ranged from Blizzard to BioWare, Adobe, SGI and others who came to see presentations by ATI, Microsoft and Intel geared towards educating developers.
The name "Mojo Day" is ATI's answer to the question of who has the grooviest graphics hardware in the industry and lent an Austin Powers theme to the event (complete with costumed presenters). ATI is currently the number one graphics chip maker with 42% of the retail market share and is the first company to deliver a fully DirectX 9 compliant card, the Radeon 9700. This is four months before the expected winter release of DirectX 9 and NVIDIA's competitive architecture, NV30.
But succeeding in the graphics card market requires more than just creating great cards; without software that can take advantage of the latest features, consumers have no incentive to upgrade. For the latest generations of hardware, this means developers need to write sophisticated shaders and carefully tune graphics code to achieve maximum performance. Mojo Day educated game and other software developers.
The morning began with Joe Chien, Software Engineering Director at ATI, talking about the exciting new technology behind the Radeon 9700.
The Radeon 9700 was introduced earlier this summer, and so far ATI has shipped about a million units. Even with a high retail price tag of $350, good sales numbers indicate that hardcore gamers are readily adopting the Radeon 9700. They haven't made an official announcement yet, but ATI suggested that in nine months they may have a mass market DirectX 9 card with a much lower price tag.
A few unique aspects differentiate the 9700 from earlier cards in the Radeon series. Firstly, it has no silicon devoted to the fixed function pipeline; instead the driver implements the same transform and lighting calculations using vertex shaders. This changes the rules a bit when it comes to maximizing performance in your application. In the past, conventional wisdom preferred using fixed function over the programmable pipeline whenever possible, since the dedicated hardware was faster. On the 9700 the roles are reversed, since it's likely you can write a shader that's more efficient for your application than the driver's fixed function shaders. There's also significant overhead involved in switching between your own vertex shaders and the fixed function pipeline, since the switch requires loading up a large shader – one of the most expensive state change operations on the programmable hardware.
The second differentiating feature of the latest generation Radeon is support for floating point throughout the entire pixel engine. This means you can create 1, 2 and 4 channel floating point surfaces (including the frame buffer) with either 16 or 32-bits per channel. Pixel shader computations are also all carried out using 96-bit floating point numbers, replacing fixed point calculations used in the previous generations. Some of the impressive effects made possible by this increased range and precision are visible in the Radeon 9700 natural light demo.
As you'd expect, the Radeon 9700 is a lot faster than its predecessors. According to ATI it's also the fastest card available right now based on standard benchmarks like 3DMark 2001. Under optimal conditions, the Radeon can transform and light about 300 million vertices per second thanks to four parallel vertex shader engines. The Radeon 9700 supports AGP 8x (with a throughput of roughly 2.1 GB/s). It also uses the third generation of ATI's HYPER Z technology – a collection of Z-buffer features including hierarchical Z testing, Z compression, and early Z rejection during fragment processing.
All of the conference attendees received a Radeon 9700 graphics card along with a CD-ROM of code samples, tools, and white papers that make up the Radeon SDK.
Dave Bartolomeo of Microsoft's 3rd Party Windows Gaming Group started off the DirectX 9 presentations with an overview of the new features of the API. Unlike past revisions of DirectX, which introduced radical changes in the design (like vertex buffers and the programmable pipeline), DirectX 9 simply polishes the existing interfaces, fills in some gaps, and makes logical extensions to the vertex and pixel shaders. Developers should find the transition to DirectX 9 very gentle.
The one area where Microsoft has made a big change to the API is by re-introducing support for 2D rendering. Microsoft has brought back DirectX 7-style 2D rendering in DirectX 9, although now it's totally integrated into the 3D pipeline. This means there are no performance penalties when switching between 2D and 3D rendering, 2D operations can act on stencil and depth values as well as color, and everything is fully hardware accelerated.
As mentioned earlier, pixel and vertex shaders have undergone some incremental improvement bringing both up to version 2.0. This means longer programs and more constant registers for both kinds of shaders. Vertex shaders now support loops and conditionals based on constant values. This kind of control flow allows shaders to be parameterized on values like the number of lights that are enabled or how many bones affect each vertex in a soft-skinned mesh. Pixel shaders have a nice new feature called multiple render targets (MRTs) which allow you to write to multiple surfaces from inside of a pixel shader.
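The control-flow idea described above can be sketched in plain Python standing in for vertex shader code: a single shader parameterized by a constant light count loops over the active lights, rather than requiring one shader variant per configuration. The lighting model here is deliberately minimal and purely illustrative.

```python
# Sketch of constant-based looping in a vs_2_0-style shader: the loop
# bound comes from a "constant register" (num_active_lights), so one
# shader handles any number of enabled lights.

def shade_vertex(normal, lights, num_active_lights):
    """Accumulate simple diffuse lighting over the first N lights.
    lights is a list of (direction, brightness) pairs."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    intensity = 0.0
    for i in range(num_active_lights):  # loop bound set by a constant
        direction, brightness = lights[i]
        intensity += max(0.0, dot(normal, direction)) * brightness
    return intensity
```

Changing the constant changes how much work the shader does, with no shader switch in between.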
Jason Mitchell and Guennadi Riguer of ATI demonstrated how MRTs can be used to implement a number of image space effects, including depth of field and object outlining. For both of these effects, the MRTs are used to simulate a G-buffer – a structure that stores information about the material and geometric properties of the pixels in the frame buffer. In the outlining example, the pixel shader stores the world space surface normal in one render target, and the eye space depth in another. The application then uses a pixel shader to perform edge detection on each of these buffers, writing the union of the results as a black outline into the frame buffer.
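The outlining pass above can be mimicked on the CPU: detect edges independently in the normal buffer and the depth buffer, then take the union of the two edge masks as the outline. The buffer contents, the forward-difference edge detector, and the thresholds below are all invented for illustration; the real demo does this per pixel in a shader.

```python
# CPU sketch of the two-buffer outlining technique: edges found in a
# normals buffer and a depth buffer are unioned into one outline mask.

def edge_mask(buffer, threshold):
    """Mark a pixel as an edge if it differs enough from its right or
    lower neighbor (a minimal forward-difference edge detector)."""
    h, w = len(buffer), len(buffer[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y, x + 1), (y + 1, x)):
                if ny < h and nx < w and abs(buffer[y][x] - buffer[ny][nx]) > threshold:
                    mask[y][x] = True
    return mask

def outline(depth, normal_z, depth_thresh=0.1, normal_thresh=0.5):
    """Union of depth edges and normal edges, as in the MRT example."""
    d = edge_mask(depth, depth_thresh)
    n = edge_mask(normal_z, normal_thresh)
    return [[d[y][x] or n[y][x] for x in range(len(d[0]))]
            for y in range(len(d))]
```

Depth edges catch silhouettes between objects at different distances, while normal edges catch creases within a single surface; the union catches both.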
Another new feature in the DirectX 9 shaders is support for displacement mapping. Displacement mapping works by offsetting the vertices of a highly tessellated mesh based on values looked up in a texture map, and is facilitated by a new vertex shader operation that samples a texture. The highly tessellated geometry itself is generated on the hardware from a coarse mesh using N-patches. Displacement mapping acts as a form of mesh compression since each vertex is represented with a single scalar displacement.
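The core of the technique is simple enough to sketch: each vertex of the (pre-tessellated) mesh is pushed along its normal by a scalar fetched from the displacement map. The nearest-texel sampling below is a stand-in for the hardware's texture fetch; mesh and map contents are hypothetical.

```python
# Minimal sketch of displacement mapping: offset a vertex along its
# normal by a scalar sampled from a 2D displacement map.

def sample(displacement_map, u, v):
    """Nearest-texel lookup into a 2D list of scalar displacements."""
    h, w = len(displacement_map), len(displacement_map[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return displacement_map[y][x]

def displace(position, normal, uv, displacement_map, scale=1.0):
    """Offset one vertex along its normal by the sampled displacement."""
    d = sample(displacement_map, uv[0], uv[1]) * scale
    return tuple(p + n * d for p, n in zip(position, normal))
```

Because only one scalar per vertex is stored, a detailed surface compresses to a coarse mesh plus a single-channel texture.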
Microsoft also introduced a few new functions in DirectX 9 to include all of the functionality which was available under OpenGL but missing in DirectX. These include a scissor rectangle, depth bias, and anti-aliased line rendering. There's also a new texture coordinate generation mode for sphere mapping, although Dave mentioned that this will probably be the last addition Microsoft makes to the fixed-function pipeline.
A totally new feature in DirectX 9 is an asynchronous notification interface. This interface provides a mechanism to communicate information from the hardware to the application without blocking execution on the CPU or the graphics card. One example of using the interface is an occlusion query, whereby the hardware returns the number of fragments that passed the depth test during a rendering operation. Typically a low polygon bounding volume is rendered with color and depth buffer writes disabled and the application queries how many fragments passed the depth test. If no fragments from the bounding volume are visible, then the complex model doesn't need to be rendered.
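The occlusion-query idiom just described can be sketched in software. A real application issues the query through the API and renders the bounding volume with color and depth writes disabled; here we simply count how many of the volume's fragments would pass the depth test, using a made-up depth buffer.

```python
# Software sketch of an occlusion query: count bounding-volume
# fragments that pass the depth test, and skip the expensive model
# entirely if none are visible.

def occlusion_query(depth_buffer, volume_fragments):
    """Return the number of fragments passing a 'less than' depth test.
    volume_fragments is a list of (x, y, z) samples of the bounding volume."""
    passed = 0
    for x, y, z in volume_fragments:
        if z < depth_buffer[y][x]:
            passed += 1
    return passed

def should_draw_model(depth_buffer, volume_fragments):
    """Draw the complex model only if its bounding volume is visible."""
    return occlusion_query(depth_buffer, volume_fragments) > 0
```

The asynchronous part is the point: on real hardware the application keeps submitting work and checks the query result later, instead of stalling while the count comes back.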
Jason Mitchell rounded out the discussion of DirectX 9 pixel shaders with a demonstration of a procedural wood effect he adapted from a RenderMan shader. This shader served as an impressive example of the sophistication of the programmable hardware as well as the efficiency of Microsoft's High Level Shading Language (HLSL). Jason originally wrote and optimized the shader in assembly language, and later created an HLSL version. Using a variety of different versions of his shader and the beta version of the HLSL compiler, he found that the compiler's assembly output was at most one or two instructions (cycles) longer than his hand-written assembly. Jason predicts that by the time DirectX 9 is released, the compiler should be good enough that there will be no performance penalty for using the HLSL.
The HLSL compiler won't be the only Microsoft tool making shader development easier in DirectX 9. Dave Bartolomeo took the stage once again and demonstrated the shader debugger that he's been developing at Microsoft. The debugger seamlessly integrates into Visual Studio .NET and allows programmers to debug shaders (both in assembly form and as source level HLSL code) the same way they debug C++ code. The user can set breakpoints, examine values in the Watch window, and step through code using the familiar Visual Studio interface. The debugger supports a number of features beyond just the basics too; a pseudo register allows all of the render states to be examined in the Watch window, and the user can also pull up windows which show the current contents of the frame buffer and all the texture units. Pretty much the only thing the shader debugger can't do is let you modify any of the values during the execution of the shader (Dave also conceded during the question and answer session that it can't make coffee either). Unfortunately the debugger won't be available for Visual Studio 6.0 because of the lack of a plug-in mechanism.
Microsoft's DirectX 9 is currently in beta testing and is scheduled to ship by the end of the year.
Although the presentations focused on DirectX, ATI announced that they are very committed to supporting OpenGL. The advanced features of the Radeon 9700 will be accessible through extensions, and when OpenGL 2.0 is standardized, ATI will be releasing compatible drivers.
RenderMonkey Tool Suite
The final Mojo Day seminars focused on the RenderMonkey Tool Suite. This is a collection of freely available tools designed to make developing next generation effects easier. Alex Vlachos, the lead programmer for ATI's demo applications group, gave a presentation on two of the RenderMonkey tools.
The first is an application designed to generate fur textures for use with real-time characters called FurGen. The basic fur rendering technique was introduced by Jed Lengyel and others in the paper “Real-Time Fur over Arbitrary Surfaces,” and involves creating concentric shells around the furry object. These shells are partially transparent and are drawn using special texture maps showing cross sections of the fur perpendicular to the direction of growth. Fin geometry is also added near the silhouette of the object by extruding the mesh's edges in the direction of the surface normal. The fins are drawn using a different fur texture which shows cross sections of the fur parallel to the direction of growth. This combination of shells and fins gives the impression of fully volumetric fur.
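The shell construction behind this technique is easy to sketch: each shell is a copy of the base mesh pushed outward along the vertex normals, with shell i of n placed at fraction i/n of the full fur length. The mesh data below is hypothetical.

```python
# Sketch of fur shell generation: offset copies of the base mesh along
# the vertex normals, from the skin surface out to the fur tips.

def build_shells(vertices, normals, fur_length, num_shells):
    """Return a list of vertex lists, one per shell, from innermost
    (the base surface) to outermost (the fur tips)."""
    shells = []
    for i in range(num_shells + 1):
        offset = fur_length * i / num_shells
        shell = [tuple(p + n * offset for p, n in zip(v, nrm))
                 for v, nrm in zip(vertices, normals)]
        shells.append(shell)
    return shells
```

Each shell is then drawn with its own cross-section texture, innermost to outermost, with alpha blending providing the volumetric look.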
For the best looking effect, the fur is rendered using anisotropic shading, and the length, color and density are varied over the surface of the model. To accomplish these effects, a lot of information about the fur needs to be stored in the shell and fin textures. This is where FurGen comes in. Given a bunch of parameters – like curliness, density, length and direction – FurGen creates shell and fin textures. In addition to storing the opacity of the fur at each point, the textures also encode information about the direction, the coordinates for looking up the color, and the order in which fur pixels are removed to thin or shorten it. The complete details of the encoding are described in John Isidoro's SIGGRAPH 2002 presentation.
The second tool Alex demonstrated is their normal map tool, which they used to create the smooth appearance of the car in the Radeon 9700 car demo. The purpose of this tool is to recreate the look of a high detail, high polygon mesh using a much lower resolution model and a normal map. This technique has become popular recently due to the impressive results demonstrated by id Software in DOOM 3. To use the tool, an artist creates both a high polygon and a low polygon version of the same object, and runs the program to automatically generate the normal map. The tool does this by intersecting rays with the high polygon mesh; these rays originate on the surface of the low polygon mesh and are aligned with the surface normal. The surface normal of the high polygon mesh at the intersection point is stored in the normal map texture. The tool has a number of parameters which allow tweaking how the ray intersections are computed, such as taking the first intersection along the ray, or shooting rays in both the positive and negative directions and taking the closest hit. Unlike FurGen, which hasn't been released yet, the normal map tool is already available from ATI's developer website and comes with full source code.
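The ray-casting step at the heart of such a tool can be sketched as follows: from a point on the low-polygon surface, shoot a ray along the surface normal (optionally in both directions, mirroring the tool's options), find the nearest high-polygon triangle it hits, and record that triangle's normal. All geometry here is hypothetical, and a real tool does this once per normal-map texel.

```python
# Sketch of normal map capture via ray casting, using the standard
# Möller–Trumbore ray/triangle intersection test.

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def ray_triangle(origin, direction, tri, eps=1e-9):
    """Möller–Trumbore ray/triangle test; returns hit distance t or None."""
    v0, v1, v2 = tri
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    pvec = _cross(direction, e2)
    det = _dot(e1, pvec)
    if abs(det) < eps:
        return None                      # ray parallel to triangle
    inv = 1.0 / det
    tvec = _sub(origin, v0)
    u = _dot(tvec, pvec) * inv
    if u < 0.0 or u > 1.0:
        return None
    qvec = _cross(tvec, e1)
    v = _dot(direction, qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = _dot(e2, qvec) * inv
    return t if t > eps else None

def capture_normal(point, normal, hi_tris, hi_normals, both_directions=True):
    """Normal of the nearest high-poly triangle hit by rays from 'point'."""
    directions = [normal]
    if both_directions:
        directions.append(tuple(-c for c in normal))
    best_t, best_n = None, None
    for d in directions:
        for tri, n in zip(hi_tris, hi_normals):
            t = ray_triangle(point, d, tri)
            if t is not None and (best_t is None or t < best_t):
                best_t, best_n = t, n
    return best_n
```

The captured normal is typically transformed into the low-poly surface's tangent space before being packed into the texture, a detail omitted here.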
The last part of the RenderMonkey Suite presented at Mojo Day is the RenderMonkey IDE. Drew Card and Natasha Tatarchuk (two members of the three person development team) hosted a special interactive session where everyone got the chance to sit down at a computer and walk through some examples with the tool. The basic idea behind the IDE is to provide a graphical interface for managing all of the properties of a shader effect. An effect is made up of a combination of passes, each of which can contain a stream mapping (for assigning vertex components to input registers in a vertex shader), variables (which can be bound to shader constants), render state settings, texture maps, vertex shaders and pixel shaders. In “programmer mode” the user can create all of these things, edit shaders using syntax coloring, and set the values of variables. Certain variables, which the shader developer marks as “artist variables,” show up when the IDE is used in “artist mode”. Everything else, including the shader source, is hidden in artist mode, and the result is a very simple interface by which artists can tweak the parameters of a shader. For example, a simple Phong illumination shader would provide diffuse color, specular color and a shininess scalar as artist parameters. Clicking on any of these in the IDE brings up an editor appropriate for the type of variable – a color picker for selecting the diffuse and specular colors and a slider to adjust the scalar. This interface is a really nice step towards allowing artists to be an active part of shader development; hopefully the next step will be directly integrating the RenderMonkey IDE into 3D Studio MAX or Maya.
One of the nicest things about the IDE is that it has a very open architecture which allows developers to extend or replace its functionality through plug-ins. For example, developers can add new plug-ins to support their own model and texture formats and can replace the shader preview window with their own graphics engine. The RenderMonkey IDE project files are stored in the standard XML format, so it's also easy to load them directly into your application, or convert them into a different format during your resource build process.
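Since the project files are plain XML, pulling data out of one in your own build pipeline is straightforward with any XML parser. The element and attribute names below are invented for illustration; consult the actual RenderMonkey file format for the real schema.

```python
# Sketch of mining an XML project file for artist-editable variables,
# using Python's standard library XML parser. Element/attribute names
# are hypothetical, not RenderMonkey's actual schema.
import xml.etree.ElementTree as ET

def load_artist_variables(xml_text):
    """Collect name -> value for variables flagged as artist-editable."""
    root = ET.fromstring(xml_text)
    return {var.get("name"): var.get("value")
            for var in root.iter("Variable")
            if var.get("artist") == "true"}
```

The same approach works for converting effects into a game's native resource format at build time.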
ATI plans on releasing the first official version of the RenderMonkey IDE when DirectX 9 becomes available, although a beta version of the RenderMonkey IDE is available now from ATI's developer website. The current version doesn't yet support HLSL or some amenities (like undo), but these will be added by the time version 1.0 is released.
To close Mojo Day, ATI hosted a private party at the Velvet Lounge in downtown San Francisco, complete with psychedelic sixties decor, go-go dancers, and complimentary hors d'oeuvres and drinks all night.
With great hardware, open tools for developers and go-go girls, ATI is positioning themselves to remain a leader in the graphics hardware industry. Nothing has been announced yet, but ATI has hopes of making Mojo Day a biannual event with one session being held at the GDC. Groovy, baby.
The slides for all of the ATI presentations are available online at ATI's developer web site. You can also find presentations from SIGGRAPH 2002 covering similar material in greater depth.
If you're interested in reading more about Mojo Day, Tom's Hardware and Gamers Depot both have additional coverage from different perspectives.
Max McGuire is the lead graphics programmer at Iron Lore Entertainment in Sudbury, MA.