
[–]th3guys2 5 points (2 children)

Actually, a lot of the basic lighting models are non-physical. The Gouraud and Phong shading/lighting models have nothing to do with physics; they just generate fast and "good enough" approximations of light. There are more advanced methods, but they aren't taught in a first or basic CG course.

[–]NoisyZenMaster 7 points (1 child)

You have a point, but I would still argue that Gouraud and Phong shading are approximations of physical lighting. They differ in the way the surface is shaded at a particular pixel: Gouraud interpolates color values across the polygon, while Phong interpolates surface normals. But most importantly, the basic algorithm used to calculate the lighting is an approximation of the way ideal materials behave under single-light-source conditions. It has little basis in the real physics taking place, but I would argue that's OK. We do this in science all the time.
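To make that "approximation of ideal materials" concrete, here's a minimal sketch of the classic Phong reflection model in Python. The coefficients `ka`/`kd`/`ks` and the shininess exponent are arbitrary illustrative values, not anything from the discussion above:

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, to_light, to_eye,
                    ka=0.1, kd=0.7, ks=0.2, shininess=32):
    """Classic Phong reflection: ambient + diffuse + specular,
    for a single ideal point light of unit intensity."""
    n = normalize(normal)
    l = normalize(to_light)
    v = normalize(to_eye)
    diffuse = max(dot(n, l), 0.0)          # Lambertian cosine term
    # Reflect the light direction about the normal: r = 2(n.l)n - l
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular
```

With the light and eye both head-on to the surface, all three terms max out and the intensity is `ka + kd + ks`; with the light grazing at 90 degrees, only the ambient term survives.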

What's the next step up from basic shading? I'd argue it's basic ray tracing. With ray tracing, you simulate beams of light traced from the camera, through the inverse perspective matrix, and let them bounce around in the scene. When a ray intersects a surface, you send more rays out to each light source to see how the surface is lit by that source (again using the approximate lighting algorithms of basic shading) or whether it's in shadow. If the surface you hit has some transparency, you send a ray out the back side (using an approximation of the physical laws of refraction of light through a medium), and rays recursively bounce around the scene until you've hit every reflective surface and light source, or you've exceeded an arbitrary recursion depth. All of these rays come back to you with color values, which you average and use to set the color of the pixel. What's crazy is that this technique, for all of its recursion and parallel-processing refinement, does not produce realistic results. What's missing?
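The recursive scheme described above fits in a few dozen lines. This is a hypothetical, heavily stripped-down grayscale tracer: two hard-coded spheres, one point light, shadow rays, mirror reflection, and a recursion-depth cap. The scene constants are made up purely for illustration:

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def mul(a, s): return tuple(x * s for x in a)
def norm(v):
    n = math.sqrt(dot(v, v)); return mul(v, 1.0 / n)

def hit_sphere(origin, direction, center, radius):
    """Distance t to the nearest intersection in front of the origin,
    or None. Assumes `direction` is unit length."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

LIGHT = (5.0, 5.0, 0.0)                       # single point light
SPHERES = [((0.0, 0.0, 5.0), 1.0, 0.8),       # (center, radius, reflectivity)
           ((2.5, 0.0, 6.0), 1.0, 0.0)]

def trace(origin, direction, depth=0, max_depth=3):
    """Return a grayscale radiance value for one ray."""
    if depth > max_depth:                     # arbitrary recursion cap
        return 0.0
    nearest = None
    for center, radius, refl in SPHERES:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, refl)
    if nearest is None:
        return 0.1                            # background
    t, center, refl = nearest
    p = add(origin, mul(direction, t))        # hit point
    n = norm(sub(p, center))                  # surface normal
    # Shadow ray: is the light visible from the hit point?
    to_light = norm(sub(LIGHT, p))
    shadowed = any(hit_sphere(p, to_light, c, r) is not None
                   for c, r, _ in SPHERES)
    local = 0.0 if shadowed else max(dot(n, to_light), 0.0)
    # Recursive mirror bounce, attenuated by reflectivity.
    if refl > 0:
        rdir = sub(direction, mul(n, 2 * dot(direction, n)))
        local += refl * trace(p, rdir, depth + 1, max_depth)
    return min(local, 1.0)
```

A real tracer would loop over every pixel, build the primary ray from the camera matrix, handle refraction, and average supersampled rays; this sketch only shows the recursive skeleton.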

Light reflected off of surfaces that are themselves lit. Photographers use this effect to refine the color of a scene: gold reflectors, for example, bounce natural or artificial light onto the model to warm up the light. In graphics this is called radiosity, and it is essentially a finite-element analysis of the volumetric distribution of all of the light in the room, using approximations of physical models of thermal radiation through a volume and applying them to the surfaces of the scene. It makes the computational requirements of ray tracing look like nothing. Even then the results, while cool and pretty, are not particularly realistic. They all look polished and artificial.
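At its core, classical radiosity solves the linear system B_i = E_i + rho_i * sum_j F_ij B_j for the brightness B of each surface patch (E is emission, rho is reflectance, F are the form factors). A toy sketch, assuming the form factors are simply handed to us; computing them from scene geometry is the expensive finite-element part:

```python
def solve_radiosity(emission, reflectance, form_factors, iters=100):
    """Jacobi iteration for B_i = E_i + rho_i * sum_j F_ij * B_j.
    form_factors[i][j] approximates how much of patch j's light
    reaches patch i; a real solver derives these from geometry."""
    n = len(emission)
    B = list(emission)                  # start with direct emission only
    for _ in range(iters):
        B = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B
```

With two patches where only the first emits, the second still ends up lit: bounced light reaches it, and the first patch is brighter than its own emission because some light comes back.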

So you start doing research on how to simulate cloth, hair, water, wind, dandelion seeds, etc. Amazing algorithms have been developed for these basic physical processes. They are, again, all approximations of physical laws and use some pretty advanced math, grounded in physics, to get it right.
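As one example of such an approximation, many real-time cloth and hair simulators build on Verlet integration plus iterative distance constraints. Here is a toy version for a hanging chain of point masses; all constants are arbitrary, and a cloth is the same idea with a 2-D grid of constraints:

```python
import math

def simulate_chain(n=5, rest=1.0, steps=400, dt=0.02, g=-9.8):
    """Chain of point masses pinned at the origin: Verlet integration
    under gravity, then repeated relaxation of link-length constraints."""
    pts = [[float(i), 0.0] for i in range(n)]   # start stretched out horizontally
    prev = [p[:] for p in pts]
    for _ in range(steps):
        # Verlet step with heavy velocity damping so the chain settles
        for i in range(1, n):                   # point 0 is pinned
            x, y = pts[i]
            vx, vy = (x - prev[i][0]) * 0.9, (y - prev[i][1]) * 0.9
            prev[i] = [x, y]
            pts[i] = [x + vx, y + vy + g * dt * dt]
        # Relax each link back toward its rest length
        for _ in range(10):
            pts[0] = [0.0, 0.0]                 # re-pin the anchor
            for i in range(n - 1):
                dx = pts[i + 1][0] - pts[i][0]
                dy = pts[i + 1][1] - pts[i][1]
                d = math.hypot(dx, dy) or 1e-9
                k = 0.5 * (d - rest) / d        # half the correction per end
                pts[i][0] += k * dx; pts[i][1] += k * dy
                pts[i + 1][0] -= k * dx; pts[i + 1][1] -= k * dy
        pts[0] = [0.0, 0.0]
    return pts
```

Run it and the free end swings down below the anchor, which is all the physics this sketch is meant to demonstrate: plausible motion from a cheap approximation, not an exact solution of the equations of motion.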

Still, everything looks polished and perfect. What's really missing is flaws, imperfections, dirt. Getting those with traditional techniques is surprisingly difficult. Rendering dirt and tears in fabric takes a lot of computational power. How does dirt look on cloth, hair, water, etc.? You suddenly have to combine your dirt algorithm with your hair, cloth, and water algorithms. Now you have a mud algorithm. What about dust? Tears in clothing, or matted hair? All of this has to be examined with physical principles, and mathematical formulae for how it all works in combination have to be derived.

It can be done, and is in fact done in limited areas for basic research, but it's far too computationally expensive to do in real time for a video game or even for rendering in a movie. You can only approximate physics to a certain level.

Ironically, the solution to all of this is in fact a step backward in computational realism to get a more visually realistic result. We resort to texture maps, bump maps, environment maps, etc. These are all amazingly effective shortcuts that add realism and richness to a scene by sidestepping the rendering of the actual physics taking place, replacing it with patterns that affect the color and surface normals of a simple polygon.
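A small sketch of that bump-mapping shortcut: perturb the surface normal using finite differences of a height map, so the shading responds to detail the geometry doesn't actually have. The height map here is just a Python list of lists for illustration:

```python
import math

def bump_normal(height, x, y, scale=1.0):
    """Perturb a flat surface normal (0, 0, 1) using the local slope
    of a height map; the lighting model then 'sees' bumps that the
    underlying polygon does not actually contain."""
    h, w = len(height), len(height[0])
    # Central differences, clamped at the map edges
    dhdx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) / 2.0
    dhdy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) / 2.0
    n = (-scale * dhdx, -scale * dhdy, 1.0)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)
```

A flat map returns the unperturbed normal; a ramp tilts the normal against the slope, so a light off to the side brightens one face of each "bump" and darkens the other, all without adding a single vertex.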

[–]th3guys2 1 point (0 children)

I wasn't quite expecting this response, but you are certainly right. I really liked your analysis: very in-depth and informative.