Monday, December 25, 2017

Gamma Correction


I wrote this post last year when I was teaching Brown’s graduate computer graphics course, and wanted to share it on my blog because I find myself referring to it from time to time.

Our human eyes are more sensitive to changes in dim light than to changes in bright light. Why? It's more useful to be able to distinguish things in the dark (background vegetation vs. a saber-toothed tiger) than to spot the same absolute difference in bright contexts (the sun vs. the sun + 1 lumen). If twice the number of photons hit our photoreceptors, we perceive the scene as only a fraction brighter, not twice as bright. Our eyes measure radiance in a nonlinear fashion.

Definition: Radiance is the power emitted per unit surface area, per unit solid angle (you can think of a solid angle as the 3D analogue of a 2D planar angle). For the colloquial purposes of this blog post, you can think of it as roughly "the number of photons".

On the other hand, camera sensors measure radiance in a linear fashion: if twice the number of photons hit the camera sensor, it merely counts twice as many photons. Let's say this count is V_in, and our camera can capture anywhere between 0 and 255e6 photons in a third of a second (roughly a dim lamp).

Suppose the camera saves V_in into an image (e.g. a PNG file), in which we get 256 possible integer "bins" used to discretize V_in linearly. If the value 128 already corresponds to "bright", our eyes will not perceive much of a difference among the values 129, 130, ..., 255. That's a waste; we could have used those bits to encode darker tones, where we are more sensitive to changes.
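To see roughly how wasteful a purely linear encoding is, here is a quick back-of-the-envelope check (the numbers and the use of V^(1/2.2) as a stand-in for perceived brightness are my own simplification, not part of the original argument):

codes = 0:255;                        % the 256 linear photon-count bins
perceived = (codes/255).^(1/2.2);     % crude proxy for perceived brightness
n_dark = sum(perceived < 0.5);        % codes landing in the darker half: 56
n_bright = sum(perceived >= 0.5);     % codes landing in the brighter half: 200
fprintf('dark half: %d codes, bright half: %d codes\n', n_dark, n_bright);

Under this proxy, only about 56 of the 256 linear codes cover the darker half of the perceptual range, while roughly 200 are spent on the brighter half.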

Therefore, in order to make efficient use of the bits allocated to the PNG file, the camera "gamma encodes" the true radiance measurement (normalized to [0, 1]) by raising it to the (1/2.2) power:

V_enc = V_in ^ (1/2.2)
V_enc is then stored linearly into the 256 integer bins. This squashes the brighter values into a smaller range of bins; equivalently, darker tones end up occupying more of the 256 bins.
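As a concrete sketch (the variable names are mine; V_in here is the linear measurement normalized to [0, 1]):

V_in = 0.05;                  % a fairly dark, normalized radiance measurement
V_enc = V_in^(1/2.2);         % gamma encode: about 0.26
code = round(255 * V_enc);    % store linearly into one of the 256 bins: 65
% Under a purely linear encoding, round(255 * V_in) would be only 13, so the
% darkest 5% of the linear range now spans codes 0..65 instead of just 0..13.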



Image from http://www.cambridgeincolour.com/tutorials/gamma-correction.htm

Next, we want to display the camera image V_enc on a computer monitor such that it looks perceptually the same as if V_in photons (anywhere from 0 to 255e6) had entered our eye, i.e., as if we were looking at the scene itself instead of at the camera's image. Therefore, the display simply does the following to recover the original photon count:

V_in = V_enc ^ 2.2

This is known as "gamma decoding", or "gamma correction". The exponent (2.2) is referred to as the "display gamma". Note that the encoder may have used a different gamma (e.g. 1/2.4), which can lead to a mismatch between what the recording device (camera) "saw" and what the screen (decoder) displays. Usually this is not a big deal (and there is the philosophical question of which image is "truest"). At a professional film studio like Pixar, extraordinary care is taken to ensure that every employee's display is calibrated to the exact same gamma (otherwise colors and lighting tuned by different artists will not appear the same!)
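A small round-trip sketch, assuming the simple power-law model above (real sRGB uses a slightly different, piecewise transfer function):

V_in = 0.18;                     % a linear scene measurement, normalized to [0, 1]
V_enc = V_in^(1/2.2);            % what the camera stores
V_disp = V_enc^2.2;              % what a gamma-2.2 display emits: back to 0.18
V_mismatch = (V_in^(1/2.4))^2.2; % encoder gamma 1/2.4, display gamma 2.2: ~0.21
% The last line shows the mismatch mentioned above: the displayed value comes
% out slightly brighter than the original scene measurement.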

Is My Display Gamma-Correcting?

To test whether your display is performing gamma correction/decoding, consider the following image, whose bottom portion consists of thin vertical bars alternating between the values 0 (minimum brightness) and 1 (maximum brightness). This experiment was originally suggested to me by John Hughes.



It can be generated via the following MATLAB program:
img = zeros(200, 400);             % start with an all-black 200x400 image
img(51:end, 1:2:end) = 1;          % bottom 150 rows: alternating 0/1 vertical bars
img(1:50, 1:200) = 0.5;            % top left patch: value 0.5
img(1:50, 201:400) = 0.5^(1/2.2);  % top right patch: 0.5 gamma-encoded, ~0.73
imshow(img);                       % imshow treats doubles as values in [0, 1]
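If you would rather view the pattern outside MATLAB's figure window, you can also write it to a file (the filename here is arbitrary; imwrite quantizes the doubles to 8 bits):

imwrite(img, 'gamma_test.png');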

If you sit back in your chair, the vertical bars should meld together, and appear to have uniform brightness 0.5.

If your display is not doing gamma correction, the displayed brightness of the 1 bars stays at 1 and that of the 0 bars stays at 0; your eye averages the fine stripes spatially, so the region appears as a uniform patch of brightness 0.5. If your display is doing gamma correction, the 1 bars are decoded to 1^2.2 = 1 and the 0 bars to 0^2.2 = 0, so the averaged brightness is still 0.5. So regardless of the display gamma, the striped region should appear the same.

On the top left is a rectangular region with value 0.5. On the top right is a rectangular region with value 0.5^(1/2.2) ≈ 0.73. If your display is doing gamma correction, the top right should appear to be the same tone as the striped bars (since it gets decoded back to 0.5), and if your display is NOT gamma correcting, the top left should be the same tone as the striped bars.
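If you want to sanity-check the matching logic numerically rather than by eye, here is a tiny calculation (again assuming a display gamma of exactly 2.2):

stripes_avg = mean([0 1]);      % luminance your eye averages over the stripes: 0.5
left_no_gc = 0.5;               % top left shown directly, no decoding: 0.5
left_gc = 0.5^2.2;              % top left decoded by a gamma-2.2 display: ~0.22
right_gc = (0.5^(1/2.2))^2.2;   % top right decoded by a gamma-2.2 display: 0.5
% With gamma correction, right_gc matches stripes_avg; without it, left_no_gc does.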



Hope this clarification helps. Thinking about color, light, and gamma correction can make you start to question reality, so feel free to leave questions in the comments.


Discussion and Further Reading:

Although gamma correction usually only comes up in the context of rendering and photography, the idea of spending more bits of storage on the information we are most perceptually sensitive to is fundamental to compression algorithms like JPEG and to perceptual-distance research in computer vision. Gamma encoding/decoding is perhaps the simplest possible "compression algorithm" for natural images; it's a neat trick and a good reminder that compression is not only about bits, but also about how humans interpret those bits.

In optics and radiometry, people care about images that do not compress luminance at all. Image formats such as OpenEXR are designed to store linear (non-gamma-encoded), high-dynamic-range radiance values losslessly, and OpenEXR is the standard rendering output format at Pixar.

http://www.cambridgeincolour.com/tutorials/gamma-correction.htm
http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html
