Texturing

So far, we've covered how to assign colors to an object by attaching material channel color vectors to the vertices. If we want to display details of an object that way, we depend on how fine the mesh is - we can't vary colors across a distance smaller than the vertex spacing.

However, the details of how objects look matter quite a bit, so there's another way - we can tell the fragment shader how the color is distributed across the triangle surface between vertices. The fragment shader receives coordinates interpolated across the triangle, and it can use these coordinates to look up a color value in a large table - in rendering, that large table is usually called a texture. But what it really is is a large table that takes 1, 2 or 3 coordinates and returns a value.

Texture mapping

Now - in what coordinate system do we reference that table? Well, to make life interesting, we do it in a system we haven't covered so far, namely in texture coordinates. The reason is that when we want to texture objects, we often map flat 2d texture sheets onto the curved surface of a 3d object (we don't have to, it's perfectly fine to use 3d textures, except they use quite a bit more memory and are slower to look up). For some objects (cubes, cylinders, ...) that works without distortion, but for many others (spheres, ...) the texture sheet needs to be stretched and dragged to cover the object.

When you model a mesh in a 3d modeling application, the process of dragging a texture sheet properly across the surface is called uv-mapping. It's this map which defines the relation from model coordinates to texture coordinates. This map is in fact not an overall constant - coordinate systems on curved surfaces are not related by linear transformations, they're objects of differential geometry and their transformation laws are involved. Suffice it to say, the information of the mapping needs to be attached as a vertex attribute.

If you have a model loader on the application side, it should attach the map automatically, so the vertex shader just needs to have a line

gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;

to get the texture coordinates to the fragment shader. If you're not defining any transformation on the texture application-side, you can also leave gl_TextureMatrix[0] out, because it is the identity then.

Actually, there are eight slots for textures available in OpenGL, called texture units, which can be assigned to an object. If you like (though it's expensive), you can assign a different texture map to each of them, in which case you need to transform all eight sets of coordinates (in GLSL versions higher than 1.30, there are no pre-defined slots for this any more, you need to define your own on the C++ side - the math, however, is the same).
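For illustration, a minimal vertex shader forwarding two sets of texture coordinates might look like the following sketch (this assumes the application binds textures to units 0 and 1):

// minimal vertex shader - forwards position and two texture coordinate sets

void main()
{
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
    gl_TexCoord[1] = gl_TextureMatrix[1] * gl_MultiTexCoord1;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}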

If you then pass your texture coordinates to the fragment shader, you can fetch the coordinate set back and use it to look up the table of the texture, which has to be declared as a data type uniform sampler2D (for a 2d texture, that is; there are also 1d and 3d textures possible - you can guess how the data structures and commands are called...).

uniform sampler2D texture;

vec4 texel = texture2D(texture, gl_TexCoord[0].st);

(now, the .st here is just another swizzling convention, we might equally well have written gl_TexCoord[0].xy or gl_TexCoord[0].rg and gotten the same result - never forget that a vector is just a vector internally, and there's nothing holy about it.)

After this procedure, we have a color value which is properly mapped across the triangle, so we can insert it into Blinn-Phong to color the final screen pixel. At the point where we add the specular highlights, we just multiply the texel in and do

fragColor = color * texel + specular;

and we get a properly textured object on-screen.
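Putting the pieces together, the relevant part of the fragment shader might look like this (a sketch only - color and specular are declared here as placeholder uniforms standing in for the Blinn-Phong terms computed earlier):

uniform sampler2D texture;
uniform vec4 color;     // stand-in for the Blinn-Phong diffuse/ambient result
uniform vec4 specular;  // stand-in for the specular highlight term

void main()
{
    vec4 texel = texture2D(texture, gl_TexCoord[0].st);
    gl_FragColor = color * texel + specular;
}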

Alternative texture mappings

A texture in rendering is technically just a table which you can query for a value in coordinates normalized such that 0 and 1 are the edges of the texture sheet. Dependent on how the texture is declared application-side, it might actually be repetitive in one or both coordinates. In that case, if you query it at any number, only the fractional part will be used, i.e. if you ask for the texture value at 2.5, you'll get the value at 0.5, or halfway into the texture sheet. If the texture is not declared as repetitive, the lookup is clamped instead - dependent on the wrap mode you get either the edge value or the border color, which defaults to a zero vector.

But you can run any transformations you like on the texture coordinates. Say you want to read an overlay texture at a different, higher resolution than the base texture, but with the same uv-map (if you want a different uv-map, you have to pass the map coordinates explicitly, see above). It's easy:

uniform sampler2D overlay_texture;

vec4 overlay_texel = texture2D(overlay_texture, gl_TexCoord[0].st * 10.0);

Say you want to stretch a texture in one direction:

float stretch = 2.0;
vec4 texel = texture2D(texture, vec2(stretch * gl_TexCoord[0].s, gl_TexCoord[0].t));

If the texture isn't declared repetitive, you can make it so inside the shader while loading a stretched overlay sheet:

vec4 overlay_texel = texture2D(overlay_texture, vec2(fract(10.0 * gl_TexCoord[0].s), fract(5.0 * gl_TexCoord[0].t)));

In fact, you don't necessarily need to use the uv-map at all if you know something about your model. Say we want to render a landing pad. We know the model is oriented such that the z-coordinate is up in model space. The pad surface will thus be in the (xy)-plane. We want to render rust and dirt as discoloration, but we don't have a good uv-map for the model. Easy - we put the model (xy) coordinates into a varying vec2 modelCoord and pass that to the fragment shader, then do

vec4 overlay_texel = texture2D(overlay_texture, modelCoord);

and that's a distortion-free mapping of a surface in model coordinates.
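The vertex shader side of this is straightforward - a minimal sketch (the varying name modelCoord is the one chosen above):

varying vec2 modelCoord;

void main()
{
    // gl_Vertex is in model space, so its (xy) part can serve directly
    // as planar texture coordinates for the pad surface
    modelCoord = gl_Vertex.xy;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}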

Procedural texturing

If you now think of a texture less as an array than as a means to paint a mesh, the strange thing is that you don't need a texture for that. All the texture lookup call does is take input coordinates and output a color value. Any function that does the same will do. If you're not so firm in math and think of something simple like a sine function, that will appear as a massive restriction. However, using combinations of noise functions, quite natural-looking results can be obtained.

An example of a completely procedural rock texture.
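To give the flavor of how this works, here is a minimal sketch of a hash-based value noise and its use as a texture replacement (the hash constants are the usual folklore values, and the colors are just made up for illustration - this is not the function used for the rock texture above):

float hash (in vec2 p)
{
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

float valueNoise (in vec2 p)
{
    vec2 i = floor(p);
    vec2 f = fract(p);
    f = f * f * (3.0 - 2.0 * f);  // smooth interpolation weights
    return mix(mix(hash(i), hash(i + vec2(1.0, 0.0)), f.x),
               mix(hash(i + vec2(0.0, 1.0)), hash(i + vec2(1.0, 1.0)), f.x), f.y);
}

// in main(), use the noise value instead of a texture lookup
float n = valueNoise(gl_TexCoord[0].st * 20.0);
vec3 rockColor = mix(vec3(0.35, 0.3, 0.25), vec3(0.6, 0.55, 0.5), n);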

The point of this is to stress that there's nothing necessarily image-like about passing a texture to the renderer - neither do you need a texture to color an object in great detail, nor is coloring objects all a texture is good for. Textures may just as well be seen as tabulated functions of coordinates, as compared with explicitly coded functions of coordinates.

Why's procedural texturing useful? One nice use case are 3d textures, i.e. functions which take (x,y,z) triplets of coordinates as input. One of the nicest examples is wood grain. The wood grain is a 3d structure in the wood, it's not painted on, and dependent on how you cut the wood, the grain appears different. So, if you have a function that gives the 3d distribution of wood grain, you can call it with the actual vertex coordinates of every wooden object, and you'll automatically get how the grain appears on the surface of the object without any uv-mapping, regardless of how complicated the object is. Neat.
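A rough sketch of the idea (reusing the valueNoise function from above, and assuming the model-space position is passed in a varying vec3 modelPos; the ring spacing and colors are invented for illustration):

varying vec3 modelPos;

// concentric rings around the z-axis, perturbed by noise
float rings = length(modelPos.xy) * 8.0;
float grain = fract(rings + 0.5 * valueNoise(modelPos.xz * 4.0));
vec3 woodColor = mix(vec3(0.55, 0.35, 0.2), vec3(0.35, 0.2, 0.1), grain);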

Now, if you want to tabulate 3d textures, they tend to be large. If you want the resolution of a 1024x1024 texture for your wooden object, the 3d texture is basically a thousand of those. And looking up a value is a bit more than twice as expensive as from a 2d texture. If you do it with a function on the other hand, the memory consumption is negligible for both the 2d and 3d case, and you just end up with a bit more than twice the performance footprint.

Texture compositing and blending

There are many reasons why you would want to texture objects with more than one layer. Think of dynamic discolorations at runtime - as your warbird gets into a fight, you might want to render soot from a burning engine and a few bullet impact marks which were not there when you took off. Think of high-resolution effects - you have a pattern like rust discolorations of a landing pad which you want to overlay over the way the pad is painted. If you read the overlay at 10 times higher resolution, you get ten times the visual impression of detail for two times the texture memory rather than a hundred times! Or think of texturing large terrain surfaces with repetitive textures and overlay compositing as a tool to avoid visual repetition, to give yet another example.

Meet two of my closest GLSL friends - mix() and smoothstep().

The first one pretty much does what the name says - it takes three arguments: the first two are objects (they can be vectors or floats, they just have to be of the same type) and the last argument is the mixing fraction. If you use mix(x, y, f), the return value is x*(1-f) + y*f, i.e. for the fraction set to 0 you get the first value, for the fraction set to 1 you get the second. That's very handy if you want to blend two colors, because you can simply do

vec3 color1 = vec3 (1.0, 0.5, 0.0);
vec3 color2 = vec3 (0.0, 0.5, 1.0);

float mix_fraction = 0.3;

vec3 newColor = mix(color1, color2, mix_fraction);

Now, if you deal with alpha channels, usually just mixing the (rgba) values is not what you want. Rather, if you want to overlay a texture with an alpha channel over a normal texture to render e.g. dirt or discoloration, you want to use the alpha channel as the argument for the mix, like

vec4 texel = texture2D(texture, gl_TexCoord[0].st);
vec4 overlay_texel = texture2D(overlay_texture, gl_TexCoord[0].st);

texel.rgb = mix(texel.rgb, overlay_texel.rgb, overlay_texel.a);

If you don't have an alpha channel to mix, you need to make sure the mixing fraction is between 0 and 1. A fraction = clamp(fraction, 0.0, 1.0) accomplishes this, but there's another neat way. The smoothstep function gives a gradual interpolation between 0 and 1: called as smoothstep(a, b, x), it returns 0 whenever x is smaller than a, 1 whenever x is larger than b, and it smoothly changes in between without any kinks.
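Under the hood, smoothstep is just a clamped cubic Hermite polynomial - the following two lines are equivalent to smoothstep(a, b, x):

float t = clamp((x - a) / (b - a), 0.0, 1.0);
float result = t * t * (3.0 - 2.0 * t);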

For instance, to get a transition in the texturing as a function of an altitude parameter (say a snowline), such that the base texture is used below 500 meters and the overlay is used above 1000 meters, you can do

vec4 texel = texture2D(texture, gl_TexCoord[0].st);
vec4 overlay_texel = texture2D(overlay_texture, gl_TexCoord[0].st);

texel.rgb = mix(texel.rgb, overlay_texel.rgb, smoothstep(500.0, 1000.0, altitude));

You can use the smoothstep function for many purposes - to gradually fade out an effect beyond some cutoff, to change a linear variation of a parameter between 0 and 1 into a sharper, non-linear variation like x = smoothstep(0.5, 0.6, x), or even to get wave-like patterns different from what a sine function does.
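As an example of the last point, here's a sketch of a periodic pulse wave with smooth flanks and flat tops - something a plain sine won't give you (the flank positions 0.1/0.4 and 0.6/0.9 are arbitrary choices):

float saw = fract(x);  // periodic sawtooth as the driving parameter
float pulse = smoothstep(0.1, 0.4, saw) * (1.0 - smoothstep(0.6, 0.9, saw));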

Another very powerful technique is to use a noise function as the mixing argument - this generates irregular patches of texture drawn on top of each other which blend into an organic look, and it is one of the most useful techniques to combat texture tiling - more on that later.
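In terms of the pieces introduced above, the idea looks roughly like this (reusing the valueNoise sketch from the procedural texturing section; the thresholds and the scale factor are made up):

float f = smoothstep(0.4, 0.6, valueNoise(gl_TexCoord[0].st * 3.0));
texel.rgb = mix(texel.rgb, overlay_texel.rgb, f);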

Continue with Lighting beyond Blinn-Phong.



Created by Thorsten Renk 2016 - see the disclaimer and contact information.