Monday 20 April 2015

So I'm working hard on making OpenGL work like DirectX 11, and I've come to the problem of depth.

GL expects you to use a projection matrix that transforms near z to -1.0 and far z to 1.0, which is a horrible use of numerical precision. Behind the scenes, GL will then map from [-1.0,1.0] to [0.0,1.0] to give you a non-negative depth in your buffer.

DirectX expects you to use a projection matrix that transforms near to 0 and far to 1. It doesn't need to remap to [0.0,1.0], because you've already given it the range it's expecting. You can also have far as 0 and near as 1 (reversed z), which is actually much better for numerical precision reasons.
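
To make the difference concrete, here's a sketch (names and conventions are my own, not from any particular engine) of just the z row of a perspective projection under each convention, assuming a right-handed view space looking down -z and column vectors, so clip.z = A * view.z + B * view.w and clip.w = -view.z:

    /* Coefficients for the z row of a perspective projection:
     *   clip.z = A * view.z + B * view.w,   clip.w = -view.z,
     * so NDC z = clip.z / clip.w.
     * Assumes a right-handed view space looking down -z, with
     * 0 < n < f as the near/far plane distances.
     */
    typedef struct { double A, B; } ZRow;

    /* OpenGL convention: near maps to -1, far maps to +1 in NDC. */
    ZRow z_row_gl(double n, double f)
    {
        ZRow r = { -(f + n) / (f - n), -(2.0 * f * n) / (f - n) };
        return r;
    }

    /* Direct3D convention: near maps to 0, far maps to 1 in NDC. */
    ZRow z_row_d3d(double n, double f)
    {
        ZRow r = { -f / (f - n), -(f * n) / (f - n) };
        return r;
    }

    /* Reversed-z variant: near maps to 1, far maps to 0, which spreads
     * floating-point precision much more evenly across the view volume. */
    ZRow z_row_d3d_reversed(double n, double f)
    {
        ZRow r = { n / (f - n), (f * n) / (f - n) };
        return r;
    }

Every other entry of the matrix is identical between the three; the whole problem below is about what happens to that one row's output.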

So if you use the same projection matrix, in the "same" vertex shader, in GL and DX, you'll get different depths. What comes out as z=0.0 in DirectX will come out as z=0.5 in OpenGL.
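
That 0.5 is just the remap above at work; as a one-line sketch (the helper name is mine):

    /* What GL does to the post-divide (NDC) z by default:
     * remap [-1, 1] to [0, 1] before writing the depth buffer. */
    double default_window_depth(double z_ndc)
    {
        return z_ndc * 0.5 + 0.5;
    }

    /* default_window_depth(-1.0) == 0.0   GL-style near plane
     * default_window_depth( 0.0) == 0.5   a DX-style near plane, squashed
     * default_window_depth( 1.0) == 1.0   far plane
     */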

glDepthRange to the rescue! This function controls the mapping from the values you actually send to GL to the values it puts in the depth buffer. If you leave it as the default, or pass glDepthRange(0.0,1.0), it will map [-1,1] to [0,1]. So if we want to leave the numbers untouched, we just pass glDepthRange(-1.0,1.0), right?

Wrong! Because, as the GL spec states:
"depth values are treated as though they range from 0 through 1 (like color components). Thus, the values accepted by glDepthRange are both clamped to this range before they are accepted."
Yes, another boneheaded OpenGL spec snafu means that however much you may want to map [0,1] to [0,1], OpenGL won't let you. If you pass glDepthRange(-1.0,1.0), GL will clamp the -1.0 back up to zero, and the mapping will still go [-1,1] -> [0,1].
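
You can watch the clamp happen by reading the state straight back; a quick sanity-check sketch, assuming a current GL context and your platform's GL headers:

    #include <stdio.h>
    #include <GL/gl.h>   /* or your platform's equivalent */

    void show_depth_range_clamp(void)
    {
        GLdouble range[2];

        glDepthRange(-1.0, 1.0);               /* ask for the identity mapping */
        glGetDoublev(GL_DEPTH_RANGE, range);   /* read back what GL actually stored */

        /* Prints "0.0 1.0": the -1.0 was clamped to 0.0 on the way in. */
        printf("%.1f %.1f\n", range[0], range[1]);
    }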

The solution? For NVIDIA hardware, you can use their extension glDepthRangedNV, which is not nerfed like the standard function. For AMD...?
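
For reference, a sketch of the NVIDIA path (glDepthRangedNV comes from the NV_depth_buffer_float extension; the typedef name and the Windows loader call here are illustrative, so swap in whatever extension loader your project already uses):

    #include <windows.h>
    #include <GL/gl.h>

    /* Unclamped depth range from NV_depth_buffer_float. */
    typedef void (APIENTRY *DepthRangedNVFn)(GLdouble zNear, GLdouble zFar);
    static DepthRangedNVFn pglDepthRangedNV;

    void set_passthrough_depth_range(void)
    {
        if (!pglDepthRangedNV)
            pglDepthRangedNV = (DepthRangedNVFn)
                wglGetProcAddress("glDepthRangedNV");

        if (pglDepthRangedNV)
            pglDepthRangedNV(-1.0, 1.0);   /* identity: NDC z passes straight through */
        else
            glDepthRange(0.0, 1.0);        /* no extension: clamped default mapping */
    }

The fallback just re-applies the clamped default, which is exactly the problem described above.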
