How do mipmaps work with the fragment shader in OpenGL?


Mipmaps seem to be handled automatically by OpenGL. The fragment shader's job seems to be just to return the color of the sample corresponding to the pixel. So how does OpenGL handle mipmaps automatically?



Answer by Yakov Galka:

When you use the texture(tex, uv) function, it uses the derivatives of uv with respect to the window coordinates to compute the footprint of the fragment in the texture space.
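
To make that concrete, here is a minimal fragment-shader sketch (the names tex and uv are just placeholders): passing the derivatives explicitly with dFdx/dFdy and textureGrad should select roughly the same mipmap level as the plain texture() call does implicitly.

```glsl
#version 330 core
uniform sampler2D tex;
in vec2 uv;
out vec4 fragColor;

void main() {
    // The driver derives the footprint from the implicit derivatives of uv:
    vec4 implicitLod = texture(tex, uv);
    // The same information supplied by hand; it should pick (about) the same level:
    vec4 explicitLod = textureGrad(tex, uv, dFdx(uv), dFdy(uv));

    fragColor = implicitLod; // swapping in explicitLod should look the same
}
```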

For a 2d texture with an isotropic filter the size of the footprint can be calculated as:

ρ = max{ √((du/dx)² + (dv/dx)²), √((du/dy)² + (dv/dy)²) }

This calculates the change of uv horizontally and vertically, then takes the bigger of the two.

The base-2 logarithm of ρ, in combination with other parameters (like LOD bias, clamping, and filter type), determines which level(s) of the mipmap pyramid the texel will be sampled from.
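
As a rough sketch, assuming a plain isotropic minification filter with no bias or clamping, you can redo that computation yourself and compare it against textureQueryLod (core since GLSL 4.00), which reports the LOD the driver computed:

```glsl
#version 400 core
uniform sampler2D tex;
in vec2 uv;
out vec4 fragColor;

void main() {
    // The spec's u and v are in texel units, so scale the normalized uv first:
    vec2 texelUV = uv * vec2(textureSize(tex, 0));

    vec2 dx = dFdx(texelUV);                    // (du/dx, dv/dx)
    vec2 dy = dFdy(texelUV);                    // (du/dy, dv/dy)
    float rho    = max(length(dx), length(dy)); // footprint size from the formula above
    float lambda = log2(rho);                   // level-of-detail, before bias and clamping

    // textureQueryLod(...).y is the LOD the driver itself computed:
    float driverLambda = textureQueryLod(tex, uv).y;
    fragColor = vec4(abs(lambda - driverLambda)); // near black wherever the two agree
}
```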

However, in practice the implementation isn't going to do calculus to determine the derivatives. Instead a numeric approximation is used, typically by shading fragments in groups of four (aka 'quads') and computing the derivatives by subtracting the uvs of neighboring fragments within the group. This in turn may require 'helper invocations', where the shader is executed for a fragment that isn't covered by the primitive but is still needed for the derivatives. This is also why, historically, automatic mipmap level selection didn't work outside of a fragment shader.
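
If the driver exposes the GL_KHR_shader_subgroup extension (an assumption; it is optional), its quad operations let you read a neighboring fragment of the same 2×2 group and reproduce that subtraction by hand, as a sketch of what the hardware does for dFdx:

```glsl
#version 450
#extension GL_KHR_shader_subgroup_quad : require
in vec2 uv;
out vec4 fragColor;

void main() {
    // uv of the fragment in the other column of this 2x2 quad
    // (helper invocations also run so this value exists at the edges):
    vec2 uvAcross = subgroupQuadSwapHorizontal(uv);

    // The by-hand horizontal derivative: difference of uv across the quad.
    vec2 manualDdx  = abs(uvAcross - uv);
    // The built-in should report (the absolute value of) the same difference:
    vec2 builtinDdx = abs(dFdx(uv));

    fragColor = vec4(manualDdx - builtinDdx, 0.0, 1.0); // ~0 where they match
}
```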

The implementation is not required to use the above formula for ρ either. It can approximate it within some reasonable constraints. Anisotropic filtering complicates the formulas further, but the idea remains the same -- the implicit derivatives are used to determine where to sample the mipmap.

If the automatic derivatives mechanism isn't available (e.g. in a vertex or a compute shader), it's your responsibility to calculate them and use the textureGrad function instead.
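
For example, a compute-shader sketch (the binding points and the 1:1 output mapping are assumptions for illustration): the derivatives are worked out from the shader's own uv mapping and fed to textureGrad.

```glsl
#version 430
layout(local_size_x = 8, local_size_y = 8) in;
layout(binding = 0) uniform sampler2D tex;
layout(binding = 0, rgba8) uniform writeonly image2D outImage;

void main() {
    ivec2 pixel   = ivec2(gl_GlobalInvocationID.xy);
    ivec2 outSize = imageSize(outImage);
    if (any(greaterThanEqual(pixel, outSize))) return;

    // One invocation per output pixel, so uv advances by 1/outSize per pixel:
    vec2 uv    = (vec2(pixel) + 0.5) / vec2(outSize);
    vec2 dUVdx = vec2(1.0 / float(outSize.x), 0.0);
    vec2 dUVdy = vec2(0.0, 1.0 / float(outSize.y));

    // No implicit derivatives here, so supply them explicitly:
    imageStore(outImage, pixel, textureGrad(tex, uv, dUVdx, dUVdy));
}
```

With a less trivial mapping (e.g. a distortion or projection), you would differentiate that mapping yourself and pass the resulting per-pixel gradients in the same way.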