Fogging

If Earth's atmosphere did not contain any fog or haze, they would probably have been invented for real-time rendering, because they are just that useful!

Rendering is faster the fewer vertices a mesh contains and the less work the rasterizer hence has to do to produce a pixel. That can be achieved by using less detailed meshes, or by not drawing meshes out to an arbitrarily large distance. But if we simply drop meshes in the distance, holes and sudden gaps appear - fog hides them from view, so whatever is done at large distance is never apparent.

This eminently useful property of fog has sometimes led to a rather utilitarian view of it in rendering - fog is often seen only as a device to speed up rendering and save memory, not as something worth rendering in its own right. Which is a pity, because there is a lot of detail to fog and haze, and in the context of a flightsim, for instance, fog, haze and clouds completely dominate what we see during a flight.

Fog from a rendering perspective

Technically, in order to render fog, you need to specify its color, the fogging coordinate and the fog function (which returns the amount of fogging as a function of the coordinate). Practically, fog is then implemented as

fragColor.rgb = mix(fogColor.rgb, fragColor.rgb, fogFunction(fogCoordinate));

where it is implicitly assumed that the fog function returns 1 if there's no fog and 0 for a completely fogged pixel.

Let's start with the fog coordinate. Usually a pixel is more fogged the more distant it is from the eye, so this is some kind of distance. The easiest choice is simply the z-coordinate in eye space (remember that the z-coordinate is the 'depth' of the pixel relative to the eye). This however fails for large fields of view, because the true distance is length(eyeCoord), or in other words sqrt(eyeCoord.x * eyeCoord.x + eyeCoord.y * eyeCoord.y + eyeCoord.z * eyeCoord.z), and to the degree that x and y get large at the edge of the visual field, this choice leads to oddities.

The next possible choice is to simply use the actual distance to the vertex, i.e. length(eyeCoord). That works for large fields of view, but has a problem when looking up. On a hazy day, the visibility in the atmosphere is perhaps 10 km - yet we see the Sun, which is some hundred and fifty million kilometers away! The quoted visibility only holds for a horizontal ray, not for a vertical ray, because haze becomes much thinner higher up in the atmosphere. So even fogging with the actual distance to a point is not necessarily correct (we'll discuss the correct procedure below).

Note that if you decide to use the distance to the vertex as fog coordinate, you have to pass the eye-space position of the vertex from vertex to fragment shader as a varying vec3 and compute the distance in the fragment shader. If you compute and pass the distance itself, you will get bad results: each coordinate component is a linear function across the triangle, but the vector length is not, so it does not interpolate properly.
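
To make this concrete, here is a minimal sketch of such a shader pair (old-style GLSL matching the rest of this tutorial; baseColor is just a placeholder for whatever lighting and texturing would otherwise produce):

// vertex shader
varying vec3 eyeSpacePos;

void main()
{
// position of the vertex relative to the eye, before projection
eyeSpacePos = (gl_ModelViewMatrix * gl_Vertex).xyz;
gl_Position = ftransform();
}

// fragment shader
varying vec3 eyeSpacePos;
uniform vec4 fogColor;
uniform vec4 baseColor; // placeholder for the lit and textured fragment color
uniform float visibility;

void main()
{
// the components of eyeSpacePos interpolate linearly across the triangle,
// so the (non-linear) length is taken here, per fragment
float fogCoord = length(eyeSpacePos);
vec4 fragColor = baseColor;
// a simple exponential fog function, discussed below
fragColor.rgb = mix(fogColor.rgb, fragColor.rgb, exp(-fogCoord/visibility));
gl_FragColor = fragColor;
}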

Say we picked a fog coordinate. We know there's hardly any fog close by, we have some idea of the visibility, so beyond some distance objects should be fully fogged. What functions to choose?

Usually (again motivated by real world optics) exponentials are chosen, either

float fogFunction (in float fogCoord, in float visibility)
{
// quadratic exponential - stays close to 1 well below the visibility, drops off quickly beyond it
return exp(- pow(fogCoord/visibility, 2.0));
}

or the variant

float fogFunction (in float fogCoord, in float visibility)
{
// plain exponential - this is how real fog of constant density attenuates light
return exp(- fogCoord/visibility);
}

The first function has relatively less fog for pixels closer than the visibility and more fog for pixels farther than the visibility. From a rendering perspective that's superior, because it gives a good view of the scene we put effort into rendering and smoothly hides precisely what we want to hide, i.e. the distance. The main justification for the second function, however, is that this is how real fog actually works.

Note that neither fog function is precisely zero when the fogCoord reaches the visibility defined this way - both still return exp(-1), about 0.37, at that point - so we need to render a fair bit further out than that nominal value to hide the mesh cutoff.

On a technical note - see from the example how GLSL allows functions to be declared outside the body of the main function. The function needs to have its return type declared just like in C++, but unlike in C++ the parameters carry qualifiers such as in for values passed into the function (which is also the default if nothing is written) or out for values written back to the caller.

Real fog and haze

If you aim for something like the picture below, the simplified picture of fog sketched above is hopelessly inadequate.

Some effort for rendering fog - Rayleigh, Mie and diffuse scattering channels and volumetric distribution.

First, in reality the density of haze is not constant but given by a 4-dim distribution rho(x,y,z,t). The 4th dimension is time - fog can form and dissolve over time!

How fogged we see something then depends on the interaction cross section sigma of light with the fog. In general, that may depend on wavelength - for instance dry hazes like smoke particles predominantly scatter blue light, whereas wet hazes scatter all wavelengths equally well. Because the distribution of dry vs. wet haze may vary in space, we need to sum over various fog components.

The formal expression for how fogged something appears as a function of wavelength requires us to do an integral, i.e. the result is

fogging (lambda) = exp( - integral dl sum_n rho_n(x,y,z,t) * sigma_n (lambda))

where the integral dl runs along the ray from the point to the eye. In the case that there's only one type of fog and the density is constant, the integral dl rho sigma(lambda) just becomes the fog coordinate times the product of rho and sigma, and that product is the inverse visibility for a given wavelength. That matches the simple expressions given above.

Usually, the distance along the ray at which the argument of the exponential becomes unity, i.e. integral dl sum_n rho_n(x,y,z,t) * sigma_n (lambda) = 1 is called the optical depth at this wavelength. In the simple case above, the optical depth is just the visibility parameter.
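
To see how this looks in shader terms, here is a small sketch (not code from any particular renderer - the cross sections and densities are made-up placeholder values) of wavelength-dependent fogging with two haze components of constant density, where the transmission is simply computed per color channel:

// per-channel scattering cross sections (placeholder values):
// a dry haze scattering blue more strongly, and a neutral wet haze
const vec3 sigma_dry = vec3(0.2, 0.4, 1.0);
const vec3 sigma_wet = vec3(1.0, 1.0, 1.0);

vec3 transmission (in float dist, in float rho_dry, in float rho_wet)
{
// argument of the exponential: path length times density times cross section, summed over components
vec3 expArg = dist * (rho_dry * sigma_dry + rho_wet * sigma_wet);
return exp(-expArg);
}

// usage: fragColor.rgb = mix(fogColor.rgb, fragColor.rgb, transmission(dist, rho_dry, rho_wet));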

Unfortunately, in general the fog distribution is not constant in space. That means there is no meaning to something like a single 'visibility' for the scene - what you see depends on where you are and where you look. You may for instance be five kilometers away from an airfield which happens to be right underneath a thunderstorm. You can't see the airfield because the visibility in heavy rain is poor, but you might be able to see tens of kilometers horizontally in other directions and billions of kilometers looking up into the sky!

The optical depth then has to be computed for every ray in the scene. That's in general not doable - numerical integration is an expensive procedure even outside real-time rendering, and on a per-frame, per-pixel basis it is usually much too slow. Thus, approximations have to be used in which the integrals can be computed analytically. For instance, the above expression can be evaluated as an (albeit lengthy) analytic expression if one assumes that the haze density is a function of altitude only and falls off with a characteristic scale H like rho(z) = exp(-z/H) (doing this computation requires a Taylor expansion in Earth's curvature, so you need a good grasp of university-level calculus to get there, and it leads way beyond the scope of this tutorial).
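
To give a flavour of what such an analytic result looks like in the simplest setting, here is a sketch that ignores Earth's curvature entirely (so it is not the full expression referred to above, and the parameter names are placeholders): for a density rho(z) = rho0 * exp(-z/H), the integral along a straight ray starting at eye altitude z0 with vertical direction cosine ct over the distance dist can be done in closed form.

float opticalDepth (in float dist, in float z0, in float ct, in float rho0, in float sigma, in float H)
{
// nearly horizontal ray - the density along the path is effectively constant
if (abs(ct) < 0.001) {return rho0 * sigma * exp(-z0/H) * dist;}
// otherwise integrate rho0 * exp(-(z0 + l * ct)/H) * sigma over l from 0 to dist
return rho0 * sigma * exp(-z0/H) * H/ct * (1.0 - exp(-dist * ct/H));
}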

We'll cover doing integrals inside a shader for some simpler cases later.

As an example of a more complicated determination of the fog coordinate, consider the following geometry, assuming a two-layer model with a lower layer of denser fog (characterized by visibility) and an upper layer of sparse fog (characterized by avisibility). The vertex is displaced from the eye by the vector relPos and is the distance dist away (with the z-coordinate pointing upward), and the eye is the vertical distance delta_z away from the upper edge of the denser lower fog layer (delta_z is positive while the eye is inside the layer).

The shader then has to distinguish four cases: the eye is inside the lower layer looking at a vertex outside of it; the eye is in the layer looking at a vertex that is also in the layer; the eye is outside the layer looking at a vertex inside it; or both the eye and the observed vertex are outside the lower layer. For each of these cases, the path lengths inside the dense and the sparse region are different and need to be obtained separately.

float vAltitude;
float delta_zv;
float H;
float distance_in_layer;
float transmission_arg;
// angle with horizon, are we looking upward or downward?
float ct = dot(vec3(0.0, 0.0, 1.0), relPos)/dist;

if (delta_z > 0.0) // we're inside the layer
      {
      if (ct < 0.0) // we look down
            {
            distance_in_layer = dist;
            vAltitude = min(distance_in_layer,visibility) * ct;
            delta_zv = delta_z - vAltitude;
            }
      else // we may look through upper layer edge
           {
           H = dist * ct;
           if (H > delta_z) {distance_in_layer = dist/H * delta_z;}
           else {distance_in_layer = dist;}
           vAltitude = min(distance_in_layer,visibility) * ct;
           delta_zv = delta_z - vAltitude;
           }
      }
else // we see the layer from above, delta_z < 0.0
      {
      H = dist * -ct;
      if (H < (-delta_z)) // we don't see into the layer at all, aloft visibility is the only fading
           {
           distance_in_layer = 0.0;
           delta_zv = 0.0;
           }
      else
           {
           vAltitude = H + delta_z;
           distance_in_layer = vAltitude/H * dist;
           vAltitude = min(distance_in_layer,visibility) * (-ct);
           delta_zv = vAltitude;
           }
      }

// ground haze cannot be thinner than aloft visibility in the model
// so we need to use aloft visibility otherwise

transmission_arg = (dist-distance_in_layer)/avisibility;

if (visibility < avisibility)
      {
      transmission_arg = transmission_arg + (distance_in_layer/visibility);
      }
else
      {
      transmission_arg = transmission_arg + (distance_in_layer/avisibility);
      }

The output of this computation is the optical depth which acts as a fog coordinate to be used in a fogging function further down in the shader.
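
In terms of the simple scheme from the beginning of this page, applying the result could then look like this (a sketch, assuming fragColor and fogColor are available as before):

// transmission_arg already has the visibilities folded in,
// so it goes into the exponential directly
float transmission = exp(-transmission_arg);
fragColor.rgb = mix(fogColor.rgb, fragColor.rgb, transmission);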

Fog lighting

What's often swept under the rug is that fog and haze really require their own lighting. Consider a thick fog layer with the camera halfway immersed in it. The Sun illuminates the upper edge of the fog brightly with direct light, but as you descend, the light becomes weaker and weaker and loses its direction. In fact, the distribution of light as a function of depth into the fog layer can be described by a diffusion equation.

Being inside the layer, what do you see? You see a change in fog hue, brighter for pixels above you, darker for pixels below you. If in addition the Sun is low in the sky, the situation gets even more subtle:

The changing hue of fog requires a dedicated lighting computation for the fog.

If we computed the lighting of the fog from the terrain or sky pixel behind it, the result would be wrong. Imagine you're at the bottom of a fog-filled valley looking up - if the fog were not there, you could see a mountain peak in bright light. But the fog blocking your view is not in bright light, it is shadowed by all the fog above it. Thus, you need to do a separate light computation for the fog pixel.

The problem with that is that the fog pixel doesn't have a single location. Fog is everywhere on the ray between the eye and the fogged object - and the amount of light reaching the fog is potentially different everywhere along this ray as well. So what to pick?

The correct solution would be to compute the light for multiple positions along the ray and do the integral, but that's too expensive. However, the result is going to be dominated by what happens at one optical depth (where the argument of the exponential is about unity) - any farther away and the fog is obscured by other fog in front of it, any closer and there's too little fog to matter. So the fog lighting computation should be done at one optical depth along the ray, or at the terrain pixel behind, whichever comes first.
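
A possible sketch of that procedure is shown below; eyePos (the eye position in world coordinates), layerTop (the altitude of the upper edge of the fog layer) and the simple depth-based brightness falloff are placeholders standing in for whatever light model the scene actually uses:

// eyePos and layerTop are assumed to be supplied by the application
// pick the fog sample point one optical depth along the view ray,
// or the fragment itself if it is closer
vec3 viewDir = relPos/dist;
float fogSampleDist = min(visibility, dist); // one optical depth in the constant-density model
vec3 fogSamplePos = eyePos + fogSampleDist * viewDir;

// placeholder lighting: the fog gets darker with depth below the layer top,
// standing in for the solution of the diffusion problem mentioned above
float depthIntoLayer = max(layerTop - fogSamplePos.z, 0.0);
float fogBrightness = exp(-depthIntoLayer/visibility);
vec3 litFogColor = fogColor.rgb * fogBrightness;

fragColor.rgb = mix(litFogColor, fragColor.rgb, fogFunction(dist, visibility));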

In fact, real hazes can be almost arbitrarily complex since they scatter light in a way that strongly depends on angle. In particular, ice crystal hazes are known to produce a variety of optical phenomena if crystals of the right shape are aligned just right, for instance sundogs or a parhelic circle:

Complicated scattering phenomena on haze - sundogs, the 22 deg ring and a parhelic circle.

Some technical remarks

If you have multiple objects in the scene, it is mandatory that the same fogging scheme is applied to all of them. The eye is very good at perceiving even subtle color variations along mesh seams, and if one object is fogged with a 5% different color because it runs a different shader, it will simply stick out. Equally important is that the fog function treats the seam between the terrain mesh and the sky correctly. Depending on precisely how the sky is implemented, that may require some thought, because, as in the case of fog, sky vertex coordinates are not real coordinates, they just represent a projection surface.

Continue with Vertex shader transformations.



Created by Thorsten Renk 2016 - see the disclaimer and contact information.