# Light intensity

In discussing the standard Blinn-Phong scheme, we have considered only a single light source; however, we have hinted in a few places that having several light sources in the scene is not exactly simple. We will now take the time to consider the problem in some detail.

In the following, we will loosely call the length of an (RGB) vector the light intensity (largely because it measures how bright a color will appear on screen). Now, intensity also has a precise meaning in physics - it is related to the energy transported by a beam of light, or to the number of photons. This picture also helps us understand how and why light gets weaker with distance - as the beam spreads, the available energy or number of photons just has to be distributed across a larger area.

For the physical light intensity definition, the intensity of two light sources falling onto a given surface is just the sum of the individual intensities (because energy or photon numbers are additive quantities).

Unfortunately, that's not true for the perceived light intensity which is used in rendering to define colors on the screen. For that reason, adding two color vectors will not give the correct result for the summed intensity.

According to the Weber-Fechner law of intensity perception, the perceived change in intensity of a stimulus is approximately proportional to the logarithm of the physical stimulus intensity. In other words, one would render a bright noon with an intensity of 1.7 (length of a white color vector) and a sunset scene with perhaps 0.8-0.9, but that doesn't mean the physical intensity of the light is about half - in reality the illuminance of the first is a good 100,000 lux whereas the second is typically 500 lux - i.e. every reduction of the rendered intensity by ~0.3 corresponds to a reduction of the actual intensity by about a factor of 10.
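These numbers can be checked with a small sketch. Assuming a concrete logarithmic perception law of the form I_R = K * ln(I_P) with K = 0.14 (the example constant used further below; any such law is only a rough model of perception):

```python
import math

K = 0.14  # assumed constant of the example perception law I_R = K * ln(I_P)

noon = K * math.log(100000.0)    # rendered intensity for ~100,000 lux
sunset = K * math.log(500.0)     # rendered intensity for ~500 lux
per_decade = K * math.log(10.0)  # rendered-intensity step per factor of 10

print(round(noon, 2), round(sunset, 2), round(per_decade, 2))
# 1.61 0.87 0.32
```

This reproduces the figures quoted above reasonably well: noon comes out near 1.6-1.7, the sunset near 0.87, and each factor of 10 in physical intensity costs about 0.3 in rendered intensity.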

Thus, if you have a light bulb with a rendered intensity of 0.6 (that's about 50 lux) and add that to the sunset illumination of 0.87 (or 500 lux), you effectively tell the renderer that the physical intensity of the sunset became a factor of 100 brighter due to the light bulb - clearly absurd, as in reality the light bulb won't make a dent against the still-bright sun over most of the scene.

What you have to do instead when adding light sources is to convert the rendered intensities to physical intensities, add these, and then convert back to a rendered intensity. Say our lighting model is I_R = 0.14 * log(I_P) for the rendered intensity I_R and the physical intensity I_P; then we find that I_P = exp(I_R/0.14) and a combined physical intensity of about 550 lux - which increases the rendered intensity from 0.87 to 0.88 - rather different from simply adding the rendered intensities.
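This combination rule is easy to sketch in code. The following is a minimal Python illustration of the example law I_R = 0.14 * ln(I_P); small rounding differences aside (the exact law gives ~73 lux rather than the rounded 50 lux for the bulb), it reproduces a combined value near 0.88-0.89 rather than 1.47:

```python
import math

K = 0.14  # constant of the example perception law I_R = K * ln(I_P)

def to_physical(i_rendered):
    """Invert the perception law: I_P = exp(I_R / K)."""
    return math.exp(i_rendered / K)

def to_rendered(i_physical):
    """Apply the perception law: I_R = K * ln(I_P)."""
    return K * math.log(i_physical)

def combine(i_r_a, i_r_b):
    """Combine two rendered intensities by adding in physical space."""
    return to_rendered(to_physical(i_r_a) + to_physical(i_r_b))

sunset = 0.87  # ~500 lux under this law
bulb = 0.6     # ~73 lux under the exact law (the text rounds to ~50)

print(round(combine(sunset, bulb), 2))  # 0.89, not 0.87 + 0.6 = 1.47
```

Note that `combine` is just a smooth maximum in disguise: the brighter light dominates the physical sum, which is why the result barely moves away from 0.87.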

The same relations also need to be taken into account when light attenuation is computed. For instance, the physical light intensity is reduced exponentially when light crosses fog, i.e. over every characteristic attenuation length the intensity drops by a constant factor, say 3. The perceived light intensity, however, drops only by a constant amount, say 0.1.
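Under the example logarithmic law this equivalence can be made explicit: Beer-Lambert attenuation in physical space, I_P(d) = I_P(0) * exp(-d/L), becomes a linear drop in rendered space, I_R(d) = I_R(0) - K*d/L. A sketch (the attenuation length L = 100 m is an arbitrary assumption):

```python
import math

K = 0.14   # constant of the example perception law I_R = K * ln(I_P)
L = 100.0  # assumed characteristic attenuation length of the fog (m)

def physical_after_fog(i_p0, distance):
    """Beer-Lambert attenuation of the physical intensity."""
    return i_p0 * math.exp(-distance / L)

def rendered_after_fog(i_r0, distance):
    """Equivalent in rendered space: a linear drop, not an exponential one."""
    return i_r0 - K * distance / L

# Route 1: attenuate physically, then map back to perception ...
i_p = physical_after_fog(math.exp(0.87 / K), 250.0)
via_physical = K * math.log(i_p)
# ... route 2: subtract a constant amount per attenuation length directly.
via_rendered = rendered_after_fog(0.87, 250.0)

print(round(via_physical, 4), round(via_rendered, 4))  # 0.52 0.52
```

Both routes agree exactly, which is the point: exponential decay of the physical intensity and linear decay of the perceived intensity are the same statement seen through the perception law.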

## Gamma correction

The discussion above is relevant for a procedure called gamma correction, which has to do with re-weighting color values based on intensity. In the old days, when cathode ray tube (CRT) monitors were common, the electrical current applied to a pixel was taken to be proportional to the color value, and the resulting yield of photons (i.e. the physical light intensity) was given by a power law (i.e. the current raised to some power).

The exponential/logarithmic Weber-Fechner law formulated above is, on closer inspection, rather a power law (over a large intensity range, the two functions are fairly similar). By lucky accident, it so happened that the CRT response and the perceptual law almost canceled each other, i.e. doubling the color value also roughly doubled the perceived intensity. The result is a perceptually uniform color distribution - a change of 0.1 in color value is perceived about the same way whether the color value is 0.1 or 0.9.

The residual mismatch between the two power laws could then be mostly accounted for by transforming a color value x as x -> x^γ, which for γ = 1 leaves the color value unmodified, for γ > 1 compresses the range of the darker colors and widens the mapping of the brighter color values, and vice versa. Take a look at the following example to see how this looks in practice, giving more nuance either to the (bright) clouds or to the (dark) terrain:

[Figure: A scene before gamma correction; the same scene with gamma correction > 1, compressing the dark color range and expanding the bright color range; and with gamma correction < 1, compressing the bright and expanding the dark color range.]
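The transformation itself is a one-liner on normalized color values; this sketch shows how γ > 1 squeezes the dark end and γ < 1 expands it (γ = 2.2 is just a conventional example value):

```python
def gamma_correct(x, gamma):
    """Map a normalized color value x in [0, 1] as x -> x**gamma."""
    return x ** gamma

dark, bright = 0.1, 0.9

# gamma > 1: dark values are pushed toward 0 (range compressed),
# leaving more of the output range for the bright values
print(gamma_correct(dark, 2.2), gamma_correct(bright, 2.2))
# gamma < 1: dark values are lifted (range expanded),
# at the expense of resolution among the bright values
print(gamma_correct(dark, 1 / 2.2), gamma_correct(bright, 1 / 2.2))
```

The endpoints 0 and 1 are fixed points of the map for any γ, so only the distribution of in-between values changes - exactly the "more nuance for clouds or for terrain" trade-off shown in the figure.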

Modern LCD monitors have a different intensity response to the applied signal but are calibrated to give a power-law response so as to achieve the same perceptual uniformity.

To display color values fully correctly, gamma correction has to be applied twice: once when a texture is extracted from a recording device (say a camera) with the known transfer function of that camera, and a second time when the color is displayed on a screen, with a transfer function characteristic both of the screen itself and of the viewing conditions.

Here, part of the trouble is the surround effect: how bright the screen content appears depends on the ambient light surrounding the screen, which the renderer cannot know.

## Perception in low light

It's a well-known fact (especially to hobby astronomers) that the eyes take some time to adapt to darkness (often 20-30 minutes), but once completely adapted, intensity perception is improved by orders of magnitude. While you can see only the brightest stars right after stepping out, after 20 minutes in complete darkness you will be able to see even faint galaxies.

This makes things complicated in a simulated environment such as 3d rendering, because there is the simulated adaptation state of your simulated eyes, which is possibly distinct from the real adaptation state of your real eyes. If you render a night scene in fullscreen and your computer is located in a dark room, it might look quite okay. If you render the same scene in a window with part of your OS menu bars visible in the background, it might look too dark. If you are sitting outside in bright sunshine, it might appear so dark that you can't recognize anything. Nothing has changed in the simulated environment; all that is different is the adaptation state of your real eyes to the surrounding real light.

For that reason, it is impossible to devise a rendering scheme that delivers good visuals for low-light scenes without user input - the renderer can't know what surround effect to compensate for. Professional simulation environments often require fully dark-adapted (real) eyes in these situations. Otherwise, the user has to apply a gamma correction to achieve colors perceptually similar to the intended effect.

## High dynamic range (HDR) lighting

For practical purposes, you need to commit to what a light value in the rendering code means as soon as there's more than one light in the scene. The color value of a light may represent physical intensity - that has the advantage that you can directly use the physics equations for attenuation, scattered intensity, Fresnel reflection intensity etc. The light intensity then needs to be mapped to perception only in the final step. However, if you have the sun with 100,000 lux and a candle with 10 lux in the scene, floating-point precision may not be enough to account for the differences accurately.
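Where exactly precision runs out depends on the storage format. As an illustration (using NumPy's half-precision type, since 16-bit float buffers are a common choice for HDR rendering; the specific values are arbitrary), a faint light added to a bright one can vanish entirely:

```python
import numpy as np

sun = np.float16(60000.0)  # a bright light near the top of the half-float range
candle = np.float16(10.0)  # a faint secondary light

total = sun + candle
print(total == sun)  # True: the candle's contribution is rounded away
# single precision still resolves the sum:
print(np.float32(60000.0) + np.float32(10.0))  # 60010.0
```

In half precision the spacing between representable numbers near 60000 is 32, so a contribution of 10 is simply lost; single or double precision pushes the problem further out but does not remove it in principle.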

It is then possible to use encoding schemes for the intensity that supply double precision, which usually is enough to preserve the subtle effect of adding candlelight to the sun.

However, if the light values internally are supposed to represent perceived intensity, floating-point precision is quite enough to deal with the vast intensity differences (in the example above, the difference between 0.87 and 0.88 is well within what floating-point precision can handle). The difference is that then intensities cannot simply be added and need to be 'unwrapped' before being plugged into any equations.

Mathematically both schemes are equivalent - HDR can be achieved either by brute force or by applied math.

## Practical consequences

Probably the most important thing in devising a rendering model is to have a consistent model relating physical and perceived intensities (a perception model), to stick with it for all light sources in the scene, and to account for the local monitor and surround effect only by a final brightness/gamma transformation.

One common mistake is to double-count the adaptation of the eyes. Many people are instinctively tempted to render a flame visually brighter at night by increasing its intensity. However, if the flame is put into the scene with its correct physical intensity mapped to the correct perceived intensity, and if the rest of the scene dims accordingly, then - as in nature - eye adaptation and contrast with the rest of the scene will already make the flame appear brighter, without the need to ever change its intensity.

In many situations, simplified prescriptions give numerically almost the same result as going through the full perception model. In particular:

• a prescription to render whichever of lights A or B has the higher color value is usually good everywhere except in the transition region where the light sources are almost equal. If you're dealing with artificial lights illuminating a scene as the sun goes down, that is usually acceptable. If you want to render the effect of two equally intense light cones intersecting, it is however not.
• if one of the lights is faint and has a perceived intensity of 0.1 or so, it can usually simply be added without creating a pronounced visual mismatch. There is a slight overbrightening, but translated to physical intensity this corresponds to perhaps a factor of two, which is usually acceptable. A brighter secondary light, however, usually cannot be added without effectively instructing the renderer to show the primary light several orders of magnitude more intense than it actually is.

Especially when computing multiple intersecting secondary light sources, it may be necessary to go through the full perception model.
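The quality of the take-the-brighter-light prescription can be quantified against the full model. A sketch, again under the example law I_R = 0.14 * ln(I_P) (the specific intensities are arbitrary test values):

```python
import math

K = 0.14  # constant of the example perception law I_R = K * ln(I_P)

def exact(i_r_a, i_r_b):
    """Full perception model: add in physical space, map back."""
    return K * math.log(math.exp(i_r_a / K) + math.exp(i_r_b / K))

def max_approx(i_r_a, i_r_b):
    """Simplified prescription: just keep the brighter light."""
    return max(i_r_a, i_r_b)

# Very unequal lights: the approximation is nearly exact.
print(round(exact(0.9, 0.3), 3), max_approx(0.9, 0.3))  # 0.902 0.9
# Two equal lights: the exact result exceeds the max by K*ln(2) ~ 0.097.
print(round(exact(0.6, 0.6), 3), max_approx(0.6, 0.6))  # 0.697 0.6
```

The worst case of the approximation is exactly the transition region mentioned above: for equal lights the error saturates at K*ln(2), roughly a tenth of rendered intensity in this model, and shrinks rapidly as the lights become unequal.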


Created by Thorsten Renk 2016 - see the disclaimer, privacy statement and contact information.