Why do pure colors (red/green/blue) become a mixture of colors when converting raw?

Photography Asked by Atnas on January 15, 2021

In trying to understand how raw is converted, I created a synthetic raw image that has a red, green, and blue gradient strip, with a gamma of 2.2 (DNG). I made the synthetic raw by converting an image from a Nikon D200 to uncompressed DNG, then overwriting the image data using Python.

When I convert it to JPEG, I would expect it to retain the pure colors, as in the image below:

Expected result

I then used Lightroom and LightZone to convert this image to JPEG with default settings and white balance set to daylight. These are the results; the red and blue strips in particular contain colors other than their own.

Lightroom:

Lightroom image

Lightzone:

Lightzone image

My understanding of white balance was that it was just a number to multiply each color by, but not mixing them. That appears to be wrong. Can anyone explain why the colors do not remain “pure”?
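That per-channel model of white balance can be sketched as follows (the gain values are hypothetical daylight multipliers, not any real camera's). Under this model a "pure" color would indeed stay pure, which is why the mixed output is surprising:

```python
# Sketch (hypothetical numbers): white balance alone scales each channel
# independently, so zeros stay zero and a "pure" color stays pure.
wb_gains = (2.0, 1.0, 1.4)  # illustrative daylight multipliers for R, G, B

def apply_wb(rgb, gains):
    # Per-channel multiply: no mixing between channels.
    return tuple(c * g for c, g in zip(rgb, gains))

pure_red = (0.5, 0.0, 0.0)
print(apply_wb(pure_red, wb_gains))  # → (1.0, 0.0, 0.0)
```

The mixing must therefore come from a different stage of the pipeline.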

2 Answers

Here are some causes for non-zero values that you expect to be zero. The most relevant to your problem are listed first.

  • Your synthetic raw does not account for the camera's input color profile, which is based on how the specific color filters in the Bayer matrix interact with light sources when photographing calibration targets (through a lens, which may have its own color shift). Here are the histogram and output with the D200 profile (using RawTherapee):

    Histogram: DCP

    Result: DCP

    Compare with when no camera profile is used.

    Histogram: sRGB → sRGB

    Result: sRGB → sRGB

    See RawPedia: What Are DCP Profiles and Why Do I Need Them?

  • Manufacturers design their cameras to render colors that deviate from "reality" in a pleasing manner; some people refer to this as a manufacturer's "color science".

These are configured in cameras via settings often labeled "color profiles" or "film simulations". They often have names such as Standard, Neutral, Vivid, Portrait, Landscape, and Flat. Some raw processors may attempt to replicate these profiles in a layer of color modification separate from color-correction profiles, camera input color profiles, and color spaces.

  • The working color space may not match the output color space. Make sure the working and output color spaces match. If that is not possible, selecting a different conversion method may produce "better" results because out-of-gamut colors are converted differently by different algorithms. Here are histograms to illustrate:

    • Adobe RGB → sRGB:

      Histogram: Adobe RGB → sRGB

    • sRGB → sRGB:

      Histogram: sRGB → sRGB
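The first and third bullets above come down to matrix arithmetic: a camera input profile and a color-space conversion are each, to first order, a 3×3 matrix applied to the channel triplet, and any nonzero off-diagonal term mixes a "pure" channel into the others. Here is a minimal sketch using a made-up matrix (real DCP profiles carry calibrated matrices plus tone and hue tables, so this is only the linear part of the story):

```python
# Hypothetical camera-to-sRGB matrix; the off-diagonal terms are what turn a
# "pure" raw channel into a mixture in the output space.
CAM_TO_SRGB = [
    [ 1.80, -0.70, -0.10],
    [-0.20,  1.50, -0.30],
    [ 0.05, -0.45,  1.40],
]

def cam_to_srgb(raw_rgb):
    # Standard matrix-vector multiply, one output channel per row.
    return tuple(sum(m * c for m, c in zip(row, raw_rgb)) for row in CAM_TO_SRGB)

pure_raw_red = (1.0, 0.0, 0.0)
print(cam_to_srgb(pure_raw_red))  # → (1.8, -0.2, 0.05): G and B are now nonzero
```

Negative and greater-than-one values like these are exactly the out-of-gamut colors that different conversion algorithms then clip or compress differently, which is why the choice of method changes the result.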

Less relevant possibilities:

  • Highlight Recovery using CIELab Blending can affect the colors at the bright end of the gradients. Other methods (Blend, Color Propagation, and Luminance Recovery) had no apparent effect on the gradients.

    Result: CIELab Blending

  • The demosaicing algorithm can affect interpretation of colors.

    See RawPedia: Demosaicing

  • Some programs add noise or dithering when converting color from higher bit depths to 8-bit/channel.
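The last bullet can be sketched as follows; the helper names and the halfway sample value are illustrative, not any particular program's method. Plain truncation maps every 16-bit value in a 256-wide run to the same 8-bit value (banding), while randomized dithering spreads the rounding error so the average preserves the extra precision:

```python
import random

def truncate_8bit(v16):
    # Drop the low 8 bits: deterministic, but bands smooth gradients.
    return v16 >> 8

def dither_8bit(v16, rng=random):
    # Add sub-LSB noise before truncating, clamped to the 8-bit range.
    return min(255, (v16 + rng.randrange(256)) >> 8)

v = 32896  # exactly halfway between 8-bit levels 128 and 129
samples = [dither_8bit(v) for _ in range(10000)]
print(truncate_8bit(v))             # → 128, always
print(sum(samples) / len(samples))  # near 128.5 on average
```

In an image, that noise shows up as small nonzero values in channels you expected to be exactly zero.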

... the red shift is towards the actual color of the "red" filter in a typical Bayer mask - somewhere between yellow and orange. – Michael C

Ideally, "pure" input colors would result in output colors that match the Bayer filter of the selected camera. However, in practice, that is not what happens.

The color shift appears to be caused by camera profiles used by the raw processors, along with other contributing factors. They are created by processing photographs of calibration targets, taken with real lenses, under different lighting sources, as described at RawPedia. So the color filter does contribute, to the extent that it is involved in profile creation. However, each program produces different output, so it cannot be said which, if any, of them shifted toward, or away from, the actual colors of the Bayer filter used in any particular camera.

Regardless of the actual color in the Bayer array, select a different profile, and the output colors change. Use a different program, which uses different profiles, and the colors change. Use a synthetic profile, and the colors may be entirely unrelated to any real Bayer array.

Yet another layer of color modification that dissociates the actual colors of the Bayer array from the output image is "color profiles", such as Standard, Neutral, Vivid, Portrait, Landscape, Flat, which some raw processors may attempt to replicate.

Lightroom image Lightzone image RawTherapee image

Correct answer by xiota on January 15, 2021

The short answer is that the "red", "green", and "blue" filters in your camera's Bayer mask are not the same colors as the Red, Green, and Blue colors used by emissive RGB displays. Nor are they the same as the "red", "green", and "blue" colors to which the three types of cones in our retinas are most sensitive.

Here are the sensitivities of the Short-wavelength, Medium-wavelength, and Long-wavelength cones in our retinas, with each curve drawn in the color we perceive at that cone type's peak-sensitivity wavelength.

[Image: spectral sensitivity curves of the S, M, and L cones]

Typical Bayer-masked sensors are similar, though the "red" filter is a little more "yellow-orange" than the L cone's "yellow-green".

[Image: spectral sensitivity curves of a typical Bayer-masked sensor]

Here's the same graph with vertical lines drawn at the wavelengths where a typical RGB display (or RYGB display, which adds a yellow channel) emits its light. Notice how much distance there is between the peak of each "red", "green", and "blue" channel on a camera's sensor and the Red, Green, and Blue channels emitted by an RGB display. In particular, notice how much closer the "red" filters in our Bayer masks are to yellow than to red.

[Image: sensor sensitivity curves with vertical lines marking typical RGB display primaries]

The lack of true Red in the Bayer mask also helps to explain why sensors are green.

All cute little drawings on the internet notwithstanding, the "red" filters in most Bayer masks are centered at around 590nm, which we perceive as an orangish shade of yellow, and not red at around 640nm. There are also more subtle differences between the "blue" and "green" filters and the colors used by RGB displays.

So the "pure" Red color emitted by an RGB display at about 640nm creates a response in more than just the photosites masked by a filter most sensitive to 590nm but with significant sensitivity all the way from 560nm to 790nm or so. The "green" filtered photosites also respond to 640nm Red light. Everything past 790-800nm is filtered by the IR cut filter in the filter stack in front of the sensor (which isn't placed in front of the sensor when sensitivity is measured).

Likewise, the "pure" Green color emitted by an RGB display at about 530nm creates a response in more than just the photosites masked by a "green" filter. The "blue" filtered photosites also register a response. Ditto for the 480nm light emitted by the display's Blue channel. Both the "blue" and "green" filtered photosites on the camera sensor register a response to that light.

We make our cameras this way to emulate the way our eye/brain system creates the perception of color from certain wavelengths of electromagnetic radiation. The only reason we call a portion of the electromagnetic spectrum visible light is that this portion of the EM spectrum creates a biological response when it falls on the cones in our eyes' retinas. There are no specific colors implicit in certain wavelengths of light; there is only the perception of color created by the eyes and brain that perceive it. Animals with cones that have differing responses to the same wavelengths of light do not see the same colors for the same wavelengths and combinations of wavelengths.

For a camera to create "pure" colors when pointed at an RGB display, one would need a sensor with a "red" channel that does not respond at all to Green or Blue light emitted by the display, a "green" channel that does not respond at all to Red or Blue light, and a "blue" channel that does not respond at all to Green or Red light. But such a camera would not be able to construct any colors other than pure Red, pure Green, and pure Blue. There would be no way to synthesize other colors using overlapping sensitivities of "red", "green", and "blue" filtered photosites that mimic the way our retinal cones and our brains combine to synthesize colors based on the overlapping sensitivities of our S, M, and L cones.

Answered by Michael C on January 15, 2021
