This article is part of the Experiments with OpenGL series
Texturing sounds like the art of pasting images on our geometry, but there is more to it than that.
First of all, we need to understand that a texture is just data; I like to think of it as an array of data. And since there is an array involved, there must be indices to fetch the data. That's where things start getting complicated.
The first scenario is where our geometry has a one-to-one mapping to the data in the texture. Say we are using the texture to get the color information for our geometry; if both have the same dimensions, everything works fine. But the real world is harsher than that. Often there are cases where our geometry is larger or smaller than our texture. That's where magnification and minification filtering come into play.
Let's understand filtering first. From a mathematical point of view, a continuous signal is sampled at some points in time, and then a filtering (reconstruction) equation is used to build the original signal back. The Nyquist theorem states that the sampling frequency must be greater than twice the highest frequency in the signal to be able to recreate the original signal.
Now I'd like to explain how I understand all that as a graphics programmer. As I said above, I like to imagine the texture as a 2D array. When applying a texture to our geometry, what we are actually trying to do is calculate the indices into that array.
The way it works is that we have texture coordinates between 0.0 and 1.0, which we multiply by the texture size. Say we have a texture of size 256×256 and our texture coordinates are (0.5, 0.5); we get the texture indices (128.0, 128.0). This is the easy case. The more practical scenario is when our texture coords are something like (0.7, 0.3); then we get the texture indices (179.2, 76.8).
Most often we get two scenarios: either our geometry is smaller than the texture, or our geometry is larger than the texture. In other words, we have to either minify or magnify the texture frequency.
What is texture frequency, you might ask? Here's how I like to think of it. Imagine an image texture as a tightly packed graph made of points, where a point is the minimum possible unit. When we minify the texture, imagine the entire image shrunk so that most of the points start overlapping each other; for each texture index we get more than one piece of data. Similarly, a magnified texture means the image is blown up, so that for some of the indices we don't have any data.
As you might have already guessed, a magnified texture is not that big of a problem. We can easily either pick the nearest available data or interpolate to calculate the missing data. Minification is more problematic, as we have more data for the same indices. We'll look at all of these cases in detail later on.
Coming back to texture indices: how do we interpret floating-point indices? One simple way is to just ignore the fractional part and fetch the color data at (179, 76). This is almost what GL_NEAREST filtering does; the actual filtering picks the texel nearest (in Manhattan distance) to the texture coordinates.
Here's a sample of texturing with both the min and mag filters as GL_NEAREST.
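Setting this up looks something like the following (assuming a texture object is already bound to GL_TEXTURE_2D):

```c
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
```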
Another possible way is to average the color data from the four neighbouring indices (179, 76), (179, 77), (180, 76) and (180, 77) using bilinear interpolation.
Here's a sample with both min and mag filters as GL_LINEAR.
As you can see, resolving pixelation near the screen is easier, but at greater depth we still have aliasing problems. That's why I said above that magnification is easier to fix than minification. The real problem is that we have more data available for the same indices, because of all the overlapping texture dots, or texels.
One of the ways OpenGL helps us fix that is with mipmapping. Mipmapping is a technique of creating many smaller textures from the original texture, and as our geometry gets smaller we start referring to the smaller textures, hence fewer collisions.
With mipmaps enabled, we have more control over minification filtering. One choice is how to pick the mipmap level: it can be picked with a nearest or a linear algorithm, and the texture data within that level can again be filtered with a nearest or a linear algorithm.
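That gives four minification modes, named in the form GL_&lt;texel filter&gt;_MIPMAP_&lt;level pick&gt;. Assuming a texture bound to GL_TEXTURE_2D, you would pick one of:

```c
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
```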
Here is a sample picking the mipmap level with nearest and filtering the texture linearly (GL_LINEAR_MIPMAP_NEAREST).
And here is one picking the mipmap level linearly and filtering the texture linearly as well (GL_LINEAR_MIPMAP_LINEAR). This is also called trilinear filtering.
This is the most you can do with OpenGL filtering techniques. The only extra thing we can control is how the mipmaps are created. We can let OpenGL create the mipmaps for us, or we can explicitly create them ourselves.
When OpenGL creates the mipmaps, we can provide a hint on how they should be generated.
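As a sketch, in OpenGL ES 2.0 the mipmap generation hint and the generation itself look like this (GL_NICEST is just one example value; GL_FASTEST and GL_DONT_CARE are the others):

```c
glHint(GL_GENERATE_MIPMAP_HINT, GL_NICEST); /* prefer quality over speed */
glGenerateMipmap(GL_TEXTURE_2D);            /* build the chain for the bound texture */
```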
In a future post, we will experiment with anisotropic filtering, which is a filtering method that resolves the unnecessary blurring of textures viewed from oblique angles. To work with anisotropic filtering we need the GL_EXT_texture_filter_anisotropic extension and mipmaps.
For the time being, here is some sample code that you can play with:
GLfloat max_anisotropy;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_anisotropy);
printf("max anisotropy: %f\n", max_anisotropy);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, max_anisotropy);
And as always, don't forget to check out the code from the repository: https://github.com/chunkyguy/EWOGL
Goodbye, and have a nice day!