World of reflections 1
This is a pretty old text. Nevertheless a lot of people still seem to find it quite useful, so it’s online again on my brand-new blog. Maybe I will even continue this series :)
CGI is a very young but fast-growing field. Many different techniques have been developed in a short amount of time to copy nature, most of them addressing a specific problem. And it’s often very hard to choose the right tools for a specific task.
Because of the way humans perceive the interaction of light with matter, a few main techniques have become popular. The most popular one is to work with surfaces, not volumes. Another is that the CGI world comes to life by sending rays into it to gather the required information.
The main problem in reconstructing natural phenomena is the amount of information needed at every moment. Let’s assume the world we live in also exists when we do not look at it (better: when our senses don’t register anything). An incredible number of photons make their way through space, carrying energy. They interact somehow with the smallest known parts of our world, the atoms. The energy comes in packets, quanta; some of these energy packages lift an atom to a higher energy state, some are absorbed and the energy is converted. The excited atoms are unstable and spontaneously return to their ground state, which liberates some of the absorbed energy again as light, as photons. These photons have characteristic wavelengths depending on which atom emitted them. Sunlight, for example, contains many “different” photons, a broad range of wavelengths; only a small portion of them can be registered by the built-in light-recognition tools of humans (eyes). Other well-known light emitters tend to have a characteristic color because of a particular element the emitter consists of. A typical tungsten emitter (light bulb) has its main frequencies in a range we would call “orange”. Pure neon-gas emitters actually glow a reddish orange.
This procedure of emitting and absorbing energy happens all the time, with enormous numbers of “particles” at a very low level. Our eyes act as cameras in this world. They gather light, register a small range of wavelengths (as colors) and the amount of photons (in a logarithmic way, by the way). And it happens in all directions. What we see is just a statistical state of the amount of different photons at a specific point in space. This is only a very rough overview of the physics behind it. But it’s enough for now, because I think the detailed theory behind it is not correct in a certain way. It serves well for our experiments and it describes our world as we perceive it, but as I said before, we assumed that the world exists without someone who measures it (e.g. we and our senses), and that’s the point where science serves humans, not knowledge.
Imagine particles that carry basic information about their absorbing and reflecting behaviour. Particles that like to bond with other particles to form new structures with different behaviours. A modeling system where someone can build those elements or choose one from a library, spray with them, model with bigger amounts of those particles like with clay, let those elements interact with others to form new ones (oxidation, explosion, pressure, temperature). Perhaps somewhere in the future computers will be able to deal with such an amount of information. Perhaps this mass of particles will start to show a behaviour where energy is converted in controlled packages which can be seen as information packages; perhaps these particles will tend to hold their state and start to build what we would call a lifeform. Perhaps something like this already happened, harhar.
If somebody asked me what the next important step in CGI would be, I would answer: better integration and a “modeling” tool for procedural maps, raymarching them in the viewport to use them as “models”, to address the volume of many objects and to enhance the details of sub-surface-scattered objects. A 3d-bitmap format would be interesting as well, to render out procedural maps or other raymarched information (surfaces, particles, etc.), which could be edited in a “Voxel-Photoshop” and re-used as real, animated 3d-textures.

As said before, every atom is a light absorber and emitter. And the color of an object depends on the wavelengths of the photons and the characteristics of the atoms hit by these photons. The light can be emitted in all directions from an atom, but it does depend on the direction the photon came from (otherwise reflections wouldn’t be possible, but don’t think of them as bouncing photons; it has to do with the wave characteristics of photons why they reflect or refract, or at least that’s the way this behaviour can be calculated…). Many things in our world don’t reflect light directly at their surface (like gas, hehe). Light travels through most matter and gets absorbed and reflected and absorbed and reflected inside of things. Like in this picture of a wax sphere. This type of wax tends to absorb many wavelengths except some green and yellow ones (they are dominant at least) and acts as a light emitter for green, like every other object acts as a light emitter (except a black hole). You can see a dominant greenish color on my hand while holding the wax into the light.
As most of you know, this effect became popular in recent years and many raytracers are capable of simulating light which scatters “beneath the computer-generated surface”, so-called Sub Surface Scattering. Most raytracers use a method similar to the sampling method for Global Illumination, where a certain number of rays are shot from a sample point on the surface, gathering light and color information (amount and wavelength) to give the sample point a specific color: the Monte Carlo method. Many raytracers also have their own ways to reduce rendering time while increasing quality, because a high number of rays has to be used to reduce noise artifacts. There are simple interpolation filters over the sample points, more intelligent interpolations which recognize sharp corners of objects, adaptive placement of sample points (where needed) and “glued” sample points which help in animations. All in all those algorithms sound complicated, but in fact they aren’t; only the render time increases very much, and the art at the moment is to find algorithms and methods to reduce this render time… In fact, when I think back to 1995 or -6, when I first touched a 3d program, one of my first tries was to illuminate a sphere by a light source shining on a reflecting surface (a mirror), and I wondered why this didn’t work. Then I noticed that my whole bedroom gets illuminated a little bit as soon as I turn on a small torch, while if I did this in my 3d program, everything was black except the object in the direct light cone. That was the moment when I realized that everything reflects light, and more: what we see are just reflections. You can imagine how excited I got when the first GI renderer fell into my hands. It was like the feeling when I read the making-of of The Matrix, having thought a few years earlier: why not automate the image-based modeling process for capturing animated objects…
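The gathering step described above can be sketched as a tiny Monte Carlo estimator. This is a minimal illustration, not any particular renderer’s code; `trace_radiance` is a hypothetical callback standing in for the raytracer, and the cosine-weighted sampling is one common choice among many:

```python
import math
import random

def sample_hemisphere():
    """Cosine-weighted random direction on the hemisphere around the
    z axis; a common importance-sampling choice for diffuse gathering."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def estimate_irradiance(trace_radiance, n_rays=256):
    """Monte Carlo estimate: average the incoming (r, g, b) radiance
    over n_rays sample directions.  `trace_radiance` is a hypothetical
    callback returning the light arriving from a given direction."""
    total = [0.0, 0.0, 0.0]
    for _ in range(n_rays):
        d = sample_hemisphere()
        incoming = trace_radiance(d)
        for i in range(3):
            total[i] += incoming[i]
    # with cosine-weighted sampling the cosine and pdf terms cancel,
    # leaving a simple average (times pi for irradiance)
    return tuple(math.pi * t / n_rays for t in total)
```

The noise the paragraph mentions is exactly the variance of this average: doubling `n_rays` only reduces the noise by a factor of about 1.4, which is why all those interpolation and adaptive-placement tricks exist.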
But back to reflections. I wonder if all of the readers here know how simple surface shaders like Blinn or Phong work, what a diffuse color is, and what the hell those highlights are. At least I wondered for a very long time; I mean, I learned how to use them and how to achieve certain effects, but I think I never really grasped what they actually represent.
As mentioned, every object is a light emitter and absorber. A fully reflective material, which reflects all photons (and other rays) like many metals do (especially metals vaporized onto a very clean surface like glass), shows the whole environment at every point of its surface. As soon as some colors get absorbed, the reflections get weaker and the reflected rays get “colored”. So the diffuse color is just the reflection of the light source, which gets emitted after the object has absorbed parts of the light. And the reflections are “diffuse” because most surfaces in nature have a certain amount of roughness and tend to scatter the reflected light. I do not mean roughness which can be seen with the eye, I mean really, really small bumps. Different shading models simulate different roughness structures of the surface in a fast and efficient way. Highlights are parts of the surface where the reflection of the light source is strongest, or in other words, where more light from the source is reflected. The diffuse part of the surface also has small highlights in it, but they get smaller and smaller as the surface turns away from the light source and the reflection gets less pronounced. Those highlights don’t exist in the real world in that form. That’s because the real world doesn’t have the kind of light emitters found in 3d programs (point or area lights). In the real world every light emitter has a shape, and the “highlights” are more defined: you can see the reflection of the light source better. So a highlight is just a stronger part of the diffuse reflection.
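To make the diffuse term and the highlight concrete, here is a minimal Blinn-Phong sketch in Python. The vector math is written out by hand, and the color, specular strength and exponent are arbitrary example values, not anything a specific renderer prescribes:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong(n, l, v, diffuse=(0.2, 0.6, 0.2), specular=0.5, shininess=32.0):
    """Classic Blinn-Phong: a cosine-falloff diffuse term plus a
    specular 'highlight' that is really just a cheap stand-in for
    the blurred reflection of the light source."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    diff = max(dot(n, l), 0.0)                      # Lambert cosine term
    h = normalize(tuple(a + b for a, b in zip(l, v)))  # half vector
    spec = specular * max(dot(n, h), 0.0) ** shininess
    return tuple(min(diff * c + spec, 1.0) for c in diffuse)
```

A higher `shininess` exponent makes the highlight smaller and tighter, which is exactly the “smoother micro-roughness” the paragraph describes.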
The picture of the leather shows the complexity a surface has in the real world. The “highlight” comes from a window where sunlight shines in. It would be impossible to create those highlights with a simple shading model; you have to use real reflections to simulate the reflection of the window. And you have to use a blurry reflection to simulate the diffuse light scattering, and perhaps a filter color to simulate some absorption of different wavelengths. This is also a good example of why HDRI lighting became so popular in recent years. You cannot only simulate complex lighting situations with all those ambient parts, you can also let the raytracer use the HDRI to produce “real” reflections and “real” hot highlights with more definition.
The picture on the right shows an interesting effect on a reflective tablespoon. The main light comes from above; you can see a very bright reflection of the light in the spoon. The interesting part is the scratches: they describe circles in the spoon, and every scratch can be seen as a very small cylindrical valley which reflects light along its edge but shows other reflections between them. This way the reflection gets diffused in a certain direction, known in CGI as an anisotropic reflection. In this picture we see how the reflection of the main light forms a cross. Such anisotropic reflections are very common on brushed metal or hair, and there are different shader models out there which address this kind of “highlight”. I just wanted to show you that; most of you probably know it. And I want to point out that the shader models which simulate this effect should be used to produce fast anisotropic highlights. If you want to be correct you’d better use real geometry (it makes no sense to put an anisotropic shader on CGI hair, because the effect comes from the mass of small, cylindrical hair primitives) or at least a bump map to simulate the scratches. I only found the shader useful for situations like hair done with the opacity-map technique.
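For the curious, the stretched highlight can be modelled with an anisotropic specular lobe in the spirit of the Ward model. This is only an illustrative sketch; the half-vector components and the two roughness values are example inputs, not any renderer’s API:

```python
import math

def aniso_highlight(hn, ht, hb, rough_t=0.05, rough_b=0.4):
    """Ward-style anisotropic specular lobe, expressed directly in
    the local frame: hn, ht, hb are the half-vector components along
    the surface normal, the brushing direction (tangent) and the
    perpendicular direction.  A small roughness along the tangent and
    a large one across it stretches the highlight perpendicular to
    the scratches, like on the spoon."""
    if hn <= 0.0:
        return 0.0
    exponent = -((ht / rough_t) ** 2 + (hb / rough_b) ** 2) / (hn * hn)
    return math.exp(exponent)
```

Tilting the half vector along the tight (tangent) axis kills the highlight much faster than tilting it along the rough axis, which is exactly the cross-shaped smear in the photo.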
Another behaviour of photons is that they tend to reflect when hitting a surface at a shallow, grazing angle, but tend to pass into the matter when they hit the surface nearly head-on. This is an important effect, often exploited in fibre optics, where the light shouldn’t leave the cable. It is known as the Fresnel effect, and most raytracers deal with it. The picture of the laptop shows it very well: the reflection is very bright at a shallow viewing angle, but many of the rays pass through the plastic and get absorbed by the black LCD at a steep viewing angle. Nearly every material shows this effect more or less, especially transparent objects (every material is “transparent”, by the way; it just depends on how many atoms the light can shine through).
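The angle dependence is usually approximated in renderers with Schlick’s formula. A one-line sketch, where `f0` is the reflectance looking straight at the surface (roughly 0.04 is a commonly quoted value for plastic-like materials):

```python
def schlick_fresnel(cos_theta, f0=0.04):
    """Schlick's approximation of the Fresnel effect.  cos_theta is
    the cosine of the angle between the view direction and the surface
    normal; reflectance climbs from f0 (head-on) toward 1.0 at
    grazing angles, just like on the laptop lid."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```

So a dark plastic reflects only about 4% of the light when you look straight at it, but almost everything when you look along it; that is why the laptop lid turns into a mirror at a shallow angle.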
Ok, I think that’s it for now: a small introduction into the world of reflecting light. I hope you learned something useful from this text and my English wasn’t too bad; feel free to write me opinions or suggestions. I included some pictures of a glass object to show you some more effects like caustics and dispersion (= different refraction angles for the different frequencies of the light, so the light gets split into its wavelengths). There’s also an interesting photo of this object and its shadow. I just wanted to show that the shadow of the transparent object is just as dense as the shadow of my hand, because the light is refracted away into spots on the desk: a common beginner’s mistake when raytracing shadows of transparent objects… If you have questions, just ask. And go through your world with open eyes; there are so many small things which are so wonderful. It will improve your skills as an artist and your understanding of the technical aspects.
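As a footnote, the dispersion mentioned above is just Snell’s law applied with a wavelength-dependent index of refraction. A tiny sketch; the three indices are approximate, illustrative values for a crown-like glass, not measured data:

```python
import math

def refract_angle(theta_in_deg, n_lambda):
    """Snell's law (air to glass): refraction angle in degrees for an
    incidence angle and a wavelength-dependent index of refraction."""
    s = math.sin(math.radians(theta_in_deg)) / n_lambda
    return math.degrees(math.asin(s))

# blue has a slightly higher index than red, so it bends more,
# which splits white light into a small spectrum
for name, n in [("red", 1.513), ("green", 1.519), ("blue", 1.528)]:
    print(name, round(refract_angle(45.0, n), 3))
```

The differences are well under a degree, but over the path length of a thick glass object that is enough to fan the wavelengths apart into the colored fringes you see in the photos.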