Archived Questions & Answers (June 03, 1999):
"On Form Factor Calculation" by Aggravated Nosebleed (03 June 1999)
"More Radiosity Questions" by Aggravated Nosebleed (31 May 1999)
"Age Old Programming Debate" by Christopher Dudley (29 May 1999)
Thank you for all the help you've given so far, it's really cleared things up. I've got a system set up for patch creation, subdivision and conversion back to lightmaps. In an effort to avoid the problem of splitting long narrow patches into even narrower subpatches, I split each polygon by cutting at the middle of each edge of the original patch. My problem now is calculating the patch-to-patch form factor. I've tried to wrap my brain around the equations presented in Computer Graphics: Principles & Practice (2nd ed., p. 796, eqn. 16.65) but I'm having real trouble. The equation looks like:
d = dest.center - src.center
cos( thetaS ) = fabs( dotProduct( d, src.normal ) ) / d.length
cos( thetaD ) = fabs( dotProduct( d, dest.normal ) ) / d.length
H(ij) = line of sight between dest.center and src.center?

meaning that I should have been able to do
Question by: Aggravated Nosebleed
I noticed in your calculations, you perform your dot products followed by a divide by the length of the vector. I assume this is because they are not unit vectors... And no, you should not have a value in [0...1]; you should have a value between 0 and infinity. This number represents the amount of light that one patch shoots to another patch. There are two things that can cause the numbers to go beyond 1.0:

1. The distance between your patches is less than 1.0. You'll want to try to prevent this, since it causes aliasing and other artifacts in the results.

2. The receiving patch is larger than the transmitting patch. In this case, the differential area calculation will divide the larger by the smaller, and your values will go beyond 1.0. This is perfectly acceptable (and necessary).

Your question has prompted me to write another document titled "Radiosity In English II". In that document, I discuss this calculation in detail and how it relates to the physical world. At the end of the document, I also discuss the numbers that will be flowing through this calculation. You might find it helpful.
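The quantities in the question can be assembled into a sketch of the disc approximation of the point-to-patch form factor. This is an illustrative implementation, not code from the column; the names (`formFactor`, `Vec3`, the `visible` parameter standing in for H(ij)) are my own. Note that nothing clamps the result to 1.0, matching the answer above.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float length(const Vec3 &v)             { return std::sqrt(dot(v, v)); }

// Disc-approximation form factor from a source point to a destination patch:
//   F = cos(thetaS) * cos(thetaD) * H(ij) * destArea / (pi * r^2)
// `visible` is the H(ij) term: 1 if the two centers see each other, else 0.
// Normals are assumed to be unit length.
float formFactor(const Vec3 &srcCenter,  const Vec3 &srcNormal,
                 const Vec3 &destCenter, const Vec3 &destNormal,
                 float destArea, float visible)
{
    Vec3  d = sub(destCenter, srcCenter);
    float r = length(d);
    if (r <= 0.0f) return 0.0f;              // coincident centers: no transfer

    float cosThetaS = std::fabs(dot(d, srcNormal))  / r;
    float cosThetaD = std::fabs(dot(d, destNormal)) / r;

    const float pi = 3.14159265358979f;
    return visible * cosThetaS * cosThetaD * destArea / (pi * r * r);
}
```

For two directly facing unit-area patches one unit apart, this yields 1/pi, and as the answer warns, shrinking the distance below 1.0 drives the value past 1.0.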
1. I understand how to calculate the ambient term, but when do I add it to the patch radiosity values? After the calculation? If this is the case, do I add the amount of ambient light originally calculated or the amount that remains at the end? Won't this eliminate any true blacks?

2. When I build the patch list from the polygons, don't I have to ensure that the patches are inside the polygon? What about patches that are only partially inside the polygon; will I have to recalculate their surface area?

Question by: Aggravated Nosebleed
1. You should never add the ambient term to the patch's radiosity values. The ambient term is simply used when rendering the scene during each step of progressive refinement.

2. This was something that threw me for a loop when I wrote my first radiosity processor. When you think about it, the answer is obvious. Since the radiosity equations require the area of each patch (the sender and the receiver), you need to know the actual area of each patch. This means that if a patch extends beyond the polygon it is meant to represent, that patch must be "clipped" to the polygon. You can avoid this by subdividing your polygons in such a way as to avoid the problem altogether. However, this can sometimes cause other problems to arise: subdividing the patch in this way can result in some fairly uneven subdivisions, since the smaller patches are still quite elongated. A more extreme example (a polygon 1,000 units long and 0.001 units tall) can make it extremely difficult to subdivide a patch where the subdivision is needed most. It also makes subdivision much more difficult to manage when accuracy issues start to crop up.
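Point 1 above can be sketched in code: the ambient estimate is folded in only when a patch is displayed, never accumulated into its stored radiosity. This is an illustrative sketch; the `Patch`, `Color`, and `displayColor` names are my own, not from any particular radiosity processor.

```cpp
// A patch stores radiosity accumulated by progressive refinement.
// The ambient term is NOT added here; it is applied only at display time.
struct Color { float r, g, b; };

struct Patch {
    Color radiosity;    // energy gathered so far (ambient never added in)
    Color reflectance;  // surface reflectivity, per channel
};

// Displayed color = stored radiosity plus the reflectance-scaled ambient
// estimate. As refinement converges, the ambient estimate shrinks toward
// zero and the displayed color approaches the stored radiosity.
Color displayColor(const Patch &p, const Color &ambient)
{
    return { p.radiosity.r + p.reflectance.r * ambient.r,
             p.radiosity.g + p.reflectance.g * ambient.g,
             p.radiosity.b + p.reflectance.b * ambient.b };
}
```

Because the ambient contribution is scaled by reflectance, a patch with zero reflectance still displays as true black, addressing the questioner's worry.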
Hi, this is not SPECIFICALLY about 3D graphics, but since it is relevant to my current project (I've been tasked with making a great 3D engine... joy) I will ask anyway. I've been told by some that C is the only way to code anything you want fast. Then I turn around and have the performance of my C program completely blown away by a C++ proggy. I've been the kind that goes down as close to the hardware as possible, but recently my low-level approach has been overshadowed by the C++ abstractions. Basically, my question is this: is there really any reason NOT to just write things in C++? C++ makes the code loads easier to understand, and I do enjoy the OO approach more, but there are always those who say I'm foolhardy to try it. So Mr. Midnight, could you shed some light on this programming debate? (Religious war, more accurately.)

Question by: Christopher Dudley
This is very simple. In most cases, the best language for games is C++. From experience, I can say that KAGE was developed entirely in 100% C++. When I say 100%, I mean a full 100%. Even the software rasterizers were written in C++. Is it fast enough? Considering the graphics of KAGE and frame rates that surpassed even those of Quake (the original), the answer is yes. I will say that I would have HATED to see how KAGE would have turned out had I not used C++. And the FLY! project proves this point even more strongly, with nearly a million lines of C++ code. People may argue with my opinion, which is fine; everybody has that right. Just be careful whose opinions you choose to heed.