The reason is that the depth value is encoded into three channels. We use 24 bits for the depth value, but each color channel holds only 8 bits, so we have to split the depth value across three separate color channels. For this reason, if we linearly interpolate each channel of the encoded value independently, the result will be broken.
My idea of using a linear filter for the depth buffer requires two preconditions.
- We store the depth values in a color buffer, not in the depth buffer, so that we can control the encoding.
- We want the average of four texels. In other words, this does not work if we want to linearly interpolate at an arbitrary ratio; the sample must be taken exactly at the middle of four texels.
For example, suppose we have two values: 257 and 254. Of course, the actual depth value ranges from zero to one, but a floating-point representation is hard to read in an example.
The encoding formula is like this:
- Blue channel = value % 256
- Green channel = ( value / 256 ) % 256
- Red channel = ( value / ( 256 * 256 ) ) % 256
For the value 257, the encoded value is [ R, G, B ] = [ 0, 1, 1 ]. For the value 254, it is [ R, G, B ] = [ 0, 0, 254 ].
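A minimal Python sketch of the encoding formula above (the function name `encode` is mine):

```python
def encode(value):
    # Split a 24-bit integer depth value into three 8-bit channels.
    blue = value % 256
    green = (value // 256) % 256
    red = (value // (256 * 256)) % 256
    return [red, green, blue]

print(encode(257))  # [0, 1, 1]
print(encode(254))  # [0, 0, 254]
```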
When we sample exactly at the middle of the texels, the hardware linear filter will give us the interpolated value [ R, G, B ] = [ 0, 0, 127 ], because it first computes the average for each channel separately and then truncates the fractional part, since each channel can only store whole 8-bit values.
The decoded value will be 127; decoding is the reverse of the encoding: 0 * (256*256) + 0 * 256 + 127 = 127. The result we expect is 255, but we got 127.
The problem is the underflow. The averaged value of the Green channel is supposed to be 0.5. If we could keep that fractional value below one, we could correctly recover the original value:
0.5 * 256 + 127 = 255.
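The truncating average and the broken decode can be reproduced in Python (the `decode` helper is mine, simply the inverse of the encoding formula above):

```python
def decode(rgb):
    # Reverse of the encoding: recombine the three channels.
    red, green, blue = rgb
    return red * (256 * 256) + green * 256 + blue

# Per-channel average with truncation to whole values,
# as an 8-bit linear filter effectively does:
a = [0, 1, 1]    # encoded 257
b = [0, 0, 254]  # encoded 254
averaged = [(x + y) // 2 for x, y in zip(a, b)]
print(averaged)          # [0, 0, 127] -- Green's 0.5 is truncated away
print(decode(averaged))  # 127, instead of the expected 255
```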
Thus we need some buffer bits to survive the truncation. The encoding formula must be changed to:
- Blue channel = value % 256
- Green channel = ( ( value / 256 ) % 128 ) * 2
- Red channel = ( ( value / ( 256 * 128 ) ) % 128 ) * 2
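The changed formulas can be sketched in Python; averaging two encoded texels and decoding now recovers the value we expected (the function names are mine):

```python
def encode_with_buffer(value):
    # Leave one low bit per high channel as headroom, so a
    # two-texel average's 0.5 stays representable as an integer.
    blue = value % 256
    green = ((value // 256) % 128) * 2
    red = ((value // (256 * 128)) % 128) * 2
    return [red, green, blue]

def decode_with_buffer(rgb):
    # Reverse: undo the *2 shift, then recombine the channels.
    red, green, blue = rgb
    return (red / 2) * (256 * 128) + (green / 2) * 256 + blue

a = encode_with_buffer(257)  # [0, 2, 1]
b = encode_with_buffer(254)  # [0, 0, 254]
averaged = [(x + y) // 2 for x, y in zip(a, b)]
print(decode_with_buffer(averaged))  # 255.0
```

The Green average is now (2 + 0) / 2 = 1, an exact integer, so nothing is truncated; dividing by 2 at decode time restores the 0.5.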
Since we need to use a bilinear filter for a 2D image, which averages four texels and can therefore produce fractions in quarters, we need to reserve 2 bits in each channel. Fortunately, overflow never happens, because an average never exceeds its inputs. Now we can use 6 bits for the Red channel, 6 bits for the Green channel, and 8 bits for the Blue channel; 20 bits in total. I assume that each channel has only 8 bits available.
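A sketch of this 20-bit bilinear variant in Python (the function names and sample depth values are mine):

```python
def encode_bilinear(value):
    # Reserve two low bits in Red and Green: a four-texel average
    # produces fractions in quarters, which these bits absorb.
    # Payload: Red 6 bits, Green 6 bits, Blue 8 bits (20 bits total).
    blue = value % 256
    green = ((value // 256) % 64) * 4
    red = ((value // (256 * 64)) % 64) * 4
    return [red, green, blue]

def decode_bilinear(rgb):
    # Reverse: undo the *4 shift, then recombine the channels.
    red, green, blue = rgb
    return (red / 4) * (256 * 64) + (green / 4) * 256 + blue

# Bilinear sampling at the exact center of four texels is a
# channel-wise average of the four encoded values.
texels = [encode_bilinear(v) for v in (257, 254, 300, 301)]
averaged = [sum(channel) // 4 for channel in zip(*texels)]
print(decode_bilinear(averaged))  # 278.0 == (257+254+300+301)/4
```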
In case 20 bits are not enough, we can also use the fourth channel, Alpha. Then we can use 6 bits for Red, 6 bits for Green, 6 bits for Blue, and 8 bits for Alpha; 26 bits in total, which is big enough for most cases.
This should work for blurring depth values with the trick I explained before: http://wrice.blogspot.com/2010/03/gaussian-filter-for-anti-alias.html
Please note that this trick does not work when we expect arbitrary linear interpolation, where the texel coordinate is not exactly at the middle: the fractional parts then require much higher precision, and the same underflow problem returns.
1 comment:
First I thought this could be used for depth down-sampling, but we need the "min" value rather than the "average" value for depth down-sampling.