
Why are the color spaces we have access to incomplete?

Photography Asked by Wombat Pete on December 8, 2020

The question, then:

If all colors are combinations of red, green and blue, and my monitor’s pixels use all three, why is its color space limited to so small a portion of the actual complete color space? What colors are we NOT seeing and why?

Similarly, if a camera captures all three, why can it not capture the entire visible color space?

It’s that last bit that may differentiate this question from the one referenced. It’s one thing to know that there are a few practically available spaces smaller than and contained by the visible space. But it’s perfectly possible to know that and have no idea how to explain what colors are in the technologically accessible spaces and which aren’t. And since those spaces are bounded, there has to be a logic to what’s in them and what isn’t. I’d love to be able to answer that – what colors do I see in the world that I can’t ever see on a screen or a printed image (using one of the color spaces smaller than the visible color space)?

5 Answers

why is its color space limited to so small a portion of the actual complete color space?

Because the "red", "green" and "blue" which your monitor uses are pale, probably not noticeable but still pale. You would probably not be surprised if your monitor used distinguishably pale colours and was said to have small colour space.

No matter how pale the "red", "green" and "blue" (or ANY other set of three different colours) are, it is always possible to reproduce any colour with them if you are allowed a negative amount of each. But this is not possible physically.

No matter how saturated the "X", "Y" and "Z" are, you cannot practically reproduce an arbitrary visible colour with them, even if they are monochromatic (fully saturated); see the reasoning below.

Similarly, if a camera captures all three, why can it not capture the entire visible color space?

Because of the Luther-Ives condition (also called the Maxwell-Ives criterion in other places).

It is not entirely correct to say that a digital camera does not capture the entire visible colour space until you define what "capturing the entire visible colour space" means. It's not that the camera fails to capture some colours (all digital cameras are likely to produce a different positive response to every possible wavelength between 400 and 700 nm); the problem is that cameras break the rules of human metamerism: a camera maps different input SPDs to the same response. This means that every camera ever produced will respond identically to some pairs of SPDs (many of them, in fact) that are not observed as equal, and vice versa: it will respond differently to some pairs of SPDs that are observed as equal.

Here's an example of trying to deduce true colour from Nikon D70 data, taken from http://theory.uchicago.edu/; it shows an optimal camera response transformed to XYZ space:

Nikon D70 CIE best fit

This graph shows how well colours can be reproduced. Knowing that CIE XYZ is a space of imaginary super-saturated colours, you can see that the colour reproduction accuracy is a trainwreck. To top it off, D70 image data gets clipped from the bottom (negative values) when transformed to XYZ space, which is in a sense a gamut limitation, because XYZ is usually the widest colour space used after RAW processing. The negative values are lost forever (if they were ever useful).
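If you want to get a feel for why no single transform can be exact, here is a minimal numerical sketch in Python. The Gaussian curves stand in for both the observer's colour matching functions and the camera's sensitivities, so the numbers are purely illustrative: fit the best 3x3 matrix from camera responses to XYZ over many test spectra and look at the residual that no matrix can remove.

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.arange(400, 701, 5.0)                    # wavelength grid, nm

def gauss(mu, sigma):
    return np.exp(-((wl - mu) / sigma) ** 2)

# Invented stand-ins: rows are spectral sensitivities sampled on wl.
cmf = np.vstack([gauss(600, 55), gauss(550, 50), gauss(450, 35)])   # "observer" (XYZ-like)
cam = np.vstack([gauss(610, 40), gauss(540, 45), gauss(465, 30)])   # "camera" (RGB-like)

# Many random positive test spectra (columns).
spectra = np.abs(rng.normal(size=(wl.size, 500)))

xyz = cmf @ spectra          # what the observer sees
rgb = cam @ spectra          # what the camera records

# Best-fit 3x3 matrix M minimising ||M @ rgb - xyz|| in the least-squares sense.
M, *_ = np.linalg.lstsq(rgb.T, xyz.T, rcond=None)
M = M.T

residual = xyz - M @ rgb
print(np.abs(residual).max())   # clearly nonzero: no matrix maps this camera exactly to XYZ
```

The residual only vanishes when the camera curves are a linear combination of the observer's curves, which is exactly the Luther-Ives condition.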

I'd love to be able to answer that - what colors do I see in the world that I can't ever see on a screen or a printed image (using one of the color spaces smaller than the visible color space)?

Look at any CD or DVD under bright light and you will see colours which won't be printed or displayed using consumer technology in the near future.

Regarding prediction: if you mark the x and y chromaticities of the primaries (which is the exact term for "red", "green" and "blue") of some device or colour space onto the chromaticity diagram, you will see which parts of the visible colour space that space does not favour. An example is doing this with sRGB, the common colour space of modern LCDs: its primary chromaticities are marked on the diagram. The colours which an output device may reproduce lie within the smallest convex polygon containing all of the marked primaries.
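If you want to automate that check, the "inside the polygon" test is simple to code. Here is a minimal sketch in Python, using the sRGB primary and D65 white point chromaticities from the sRGB specification and an approximate chromaticity for 500 nm spectral cyan:

```python
# sRGB primaries and white point (chromaticities from the sRGB specification).
R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)
D65     = (0.3127, 0.3290)
cyan500 = (0.0082, 0.5384)    # ~500 nm on the spectral locus (approximate)

def edge_sign(p, a, b):
    # The sign tells which side of the edge a->b the point p lies on.
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_gamut(p):
    d1, d2, d3 = edge_sign(p, R, G), edge_sign(p, G, B), edge_sign(p, B, R)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)   # inside (or on an edge) if all signs agree

print(in_gamut(D65))       # True: white is reproducible
print(in_gamut(cyan500))   # False: spectral cyan lies outside the sRGB triangle
```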

This is why you can't reproduce the whole visible colour space with three colours: the visible colour space cannot be covered by any triangle lying inside the curved convex figure. To display all visible colours you would need the whole spectrum.

Another demonstration: there are sensitivity graphs in the article about the LMS space (they are approximations of the human eye's cone responses). If you take three wavelengths x, y and z (with x1, x2, x3, ..., z3 being the LMS responses to x, y and z), then take any fourth wavelength w = (w1, w2, w3) and try to solve the equation system w = a*x + b*y + c*z, the solution (a, b, c) (the amount of each colour needed to reproduce w) will contain at least one negative number no matter which w you pick. The curved outline of the visible colour space is just an illustration of that. You may use the colour matching functions of XYZ, CIE 1931 or any other space as well; this will yield the same result. Here is an Excel spreadsheet for quick experiments.
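Here is the same experiment as a few lines of Python. The cone curves are crude invented Gaussians rather than the real LMS fundamentals, but the conclusion does not change:

```python
import numpy as np

# Crude Gaussian stand-ins for the L, M and S cone sensitivities (invented for
# illustration; use the real LMS fundamentals for serious experiments).
def lms(wavelength_nm):
    return np.array([
        np.exp(-((wavelength_nm - 565) / 55) ** 2),   # "L" cone
        np.exp(-((wavelength_nm - 535) / 50) ** 2),   # "M" cone
        np.exp(-((wavelength_nm - 445) / 35) ** 2),   # "S" cone
    ])

# Three monochromatic primaries (columns) and a fourth target wavelength.
primaries = np.column_stack([lms(630), lms(532), lms(465)])   # "red", "green", "blue"
target = lms(500)                                              # a spectral cyan

# Solve target = a*red + b*green + c*blue for (a, b, c).
a, b, c = np.linalg.solve(primaries, target)
print(a, b, c)    # a (the "red" amount) comes out negative: physically impossible
```

Move the target wavelength around and at least one coefficient stays negative, except when it coincides with one of the primaries themselves.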

SPD - spectral power distribution.

P.S. It is also worth mentioning that artificial reproduction limits not only saturation but brightness and darkness too, but that is another story entirely, and I have yet to see anything more than incremental technological progress that might solve this problem.

Correct answer by Euri Pinhollow on December 8, 2020

Color space has 2 words... Color and space.

Color

If all colors are combinations of red, green, and blue...

Incorrect. That is a simplification that works on humans. Our eyes have receptors that use this kind of combination. That is the physiological component.

Roughly speaking, one type of processing works with blue versus yellow and the other with red versus green; those are the two chromatic coordinates of the Lab space. Daltonism (color blindness) is when one of these mechanisms (red-green) is somehow "broken".

In real life, for example, a yellow object actually is yellow; it is not green and red at the same time. Our brain, to simplify, perceives that yellow as more or less green plus more or less red.

Some TV manufacturers use that to produce 4-color displays: http://www.sharp-world.com/aquos/en/product/4_color_innovation.html

Space

It is easier to understand the limitations of a color space in a low-quality print, let's say a newspaper.

It does not matter how bright the inks are or how clean the paper looks: the colors are duller than the real thing. It is a physical limitation of the medium, the ink-absorbing newsprint.

It is not that you cannot have green; it is that the green is not as bright as a summer leaf in the neighbor's yard. That is the meaning of space: you have colors, but you are limited in how much of them you have.

Let us use another word... Range

I need to update this page: http://www.otake.com.mx/Apuntes/Imagen/EnviromentMaps/

There is a cool animation of how a camera chops up the range it can photograph in the section "What are High Dynamic Range Images?"

Our devices are limited in the range of light intensities they can record, and the common file formats are limited in storing and displaying that much data.

Make a test

Take your brand-new iPad (or whatever) to the beach on a sunny day and try to see the details in a photo on the display: you will see only a dark picture compared to the bright sunny day. In your office you think you are seeing all the colors; in reality, the screen is not that bright.

For a real-life case like the hologram in Mission: Impossible Ghost Protocol to happen, the display would have to match real-life colors: https://www.youtube.com/watch?v=ydIPKkjBlMw

So, color space

is how the limitations of our current technologies relate to the statistically wider space of colors that average people can see. There are some people who see more colors than average: https://en.wikipedia.org/wiki/Tetrachromacy#Human_tetrachromats

Yup. There are limitations to our current technology.

Answered by Rafael on December 8, 2020

First Question

If all colors are combinations of red, green and blue, and my monitor's pixels use all three, why is its color space limited to so small a portion of the actual complete color space? What colors are we not seeing and why?

The answer to this question is (relatively) simple. I'm going to reference the sRGB color space (depicted below) since it's the most common color space for monitors, but this applies to all physically-realizable color spaces.

sRGB color gamut

Imagine that all the visible colors are contained within the thick black horseshoe in the above diagram. The pure red, green, and blue colors that are displayed by your monitor are depicted by the respectively-colored dots (and white is depicted by the gray dot in the center).

Every color that your monitor can display must be a mixture of these three primaries (red, green, and blue), and any mixture of two or more colors appears in between those colors in the diagram. Therefore, all colors that are mixtures of red, green, and blue must fall within the shaded triangle, the "sRGB gamut." Importantly, this means that:

Not all colors are mixtures of red, green, and blue!

All of the colors inside the black horseshoe curve but outside the sRGB gamut cannot be displayed on an sRGB monitor. This includes pretty much all colors of laser light, the colors in a prism or a rainbow, and many highly-saturated blue-green colors (like the 2013 color of the year).

Note that because the sides of the horseshoe are curved, no matter what three colors you choose, the triangle that those colors form will never include the entire horseshoe (as long as you choose real colors, but we'll come back to that later).


In order to understand why this is, let's talk about the CIE color spaces, the most basic of which is the XYZ color space.

Basically, we can find a way to assign a set of three numbers to any color such that two colors appear the same if and only if they get assigned the same three numbers. The way that these numbers are assigned is called a color space.

The XYZ color space assigns these three numbers (X, Y, and Z, unsurprisingly) by weighting the spectrum of the color with three functions of the wavelength. These functions (x-bar, y-bar, and z-bar) are shown below.

CIE 1931 2 degree standard observer color matching functions

So far this is a little bit abstract, so I'll give an example. Here is the spectrum of "standard daylight," more specifically the CIE Standard Illuminant D65:

CIE illuminant D65 spectral power distribution

(Note that the y-axes of these diagrams are in arbitrary units. Since we're dealing with the color of light and not brightness, the scale doesn't matter as long as we scale all components the same way.)

The name D65 comes from the fact that this spectrum is close to that of an ideal blackbody radiator at a temperature of 6500 kelvin. This is a little hotter than the surface of the Sun (5780 kelvin) due to atmospheric absorption and scattering.
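If you are curious what that blackbody spectrum looks like, Planck's law is straightforward to evaluate. The short Python sketch below computes the relative spectrum at 6500 K; keep in mind that real D65 is a measured daylight spectrum, not a pure blackbody, so this is only the curve it approximates.

```python
import numpy as np

h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
k = 1.381e-23    # Boltzmann constant, J/K

def planck(wavelength_nm, T=6500.0):
    # Spectral radiance of an ideal blackbody at temperature T (arbitrary overall scale).
    lam = wavelength_nm * 1e-9
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

wl = np.arange(380, 781, 5)
spectrum = planck(wl)
spectrum /= spectrum.max()          # arbitrary units, like the diagrams here
print(wl[spectrum.argmax()])        # peaks around 445 nm (Wien's displacement law)
```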

We compute the X, Y, and Z values of this color by multiplying its spectrum with the three color matching functions (x-bar, y-bar, and z-bar) and taking the area under the resulting three curves:

CIE XYZ tristimulus values calculation example: D65

Typically the XYZ values are scaled so that white has a Y of 1, giving us:

X(D65) = 0.9505
Y(D65) = 1.0000
Z(D65) = 1.0888
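In code, the whole recipe is just three weighted sums. Here is a minimal sketch in Python; the Gaussian curves are crude stand-ins for x-bar, y-bar, and z-bar (not the real CIE tables) and the spectrum is a flat toy spectrum, so it shows the procedure without reproducing the D65 numbers. Swap in the published tables and the D65 spectrum to get them.

```python
import numpy as np

wl = np.arange(380, 781, 5.0)          # wavelength grid, nm
dlam = 5.0                             # grid spacing, nm

def gauss(mu, sigma):
    return np.exp(-((wl - mu) / sigma) ** 2)

# Crude Gaussian stand-ins for x-bar, y-bar, z-bar (the real x-bar even has a
# second lobe in the blue); replace with the published CIE 1931 tables for real work.
xbar, ybar, zbar = gauss(600, 55), gauss(555, 50), gauss(450, 35)

spd = np.ones_like(wl)                 # toy spectrum: flat, "equal energy" light

# Multiply the spectrum by each curve and take the area underneath
# (a plain Riemann sum stands in for the integral).
X = np.sum(spd * xbar) * dlam
Y = np.sum(spd * ybar) * dlam
Z = np.sum(spd * zbar) * dlam

# Scale so that white has Y = 1, as in the text.
X, Y, Z = X / Y, 1.0, Z / Y
print(X, Y, Z)
```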

We often transform this to the xyY color space for convenience, where:

x = X / (X + Y + Z)
y = Y / (X + Y + Z)

x(D65) = 0.3127
y(D65) = 0.3290

The two values x and y depend only on the color of the light, and not on the brightness, and they fully describe the color. I said before that three numbers are necessary to describe the color of light, but that's only true when the brightness is included in "color." Without brightness (one number) you only need two. The XYZ color space was designed so that Y represents the brightness of a color, which is why it is included in the xyY color space.
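As a quick check, plugging the D65 numbers from above into these formulas (Python):

```python
X, Y, Z = 0.9505, 1.0000, 1.0888    # D65 values from above

x = X / (X + Y + Z)
y = Y / (X + Y + Z)
print(round(x, 4), round(y, 4))     # 0.3127 0.329 -- the chromaticity quoted above
```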

We can compute the x and y values of different wavelengths of monochromatic light and plot them on a diagram:

CIE chromaticity diagram, Planckian locus

That is where the horseshoe diagram comes from! The ticks mark the wavelengths of light along the edge. Note that the bottom edge has no ticks: colors like magenta can't be made from a single wavelength of light (there can't be a magenta laser).

Pretty much all other color spaces, sRGB included, are defined in terms of the CIE color spaces. Usually they pick a red, green, and blue primary and a white point (described in the XYZ or xyY color space), which is enough to completely specify a color space.

Note that there are plenty of values of x and y that are outside the horseshoe. These don't represent real colors. However, these "imaginary" colors can sometimes be useful. For example, the ProPhoto RGB color space uses "imaginary" green and blue primaries. This way it can represent more colors than a color space that uses three real colors for primaries. The downside is that you now have to be careful about "imaginary" colors that could be present in your files. The reason that larger color spaces like ProPhoto RGB and Adobe RGB aren't often used outside of professional environments is that it isn't worth being able to record colors that you can't display!

In a similar vein, we can imagine negative amounts of color. Mathematically, you can solve for three RGB values that will represent any color, but one or more of the RGB values will be negative when you try to represent a color outside the gamut of your color space. It's perfectly valid to use a negative R, G, or B value to represent a color, but most files only hold positive values, and physical displays can only show positive values (since you can't emit "negative light").
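To tie the last two points together, here is a small Python sketch that builds the linear RGB-to-XYZ matrix for sRGB from nothing but the primary chromaticities and the white point, inverts it, and then converts a spectral cyan that lies outside the triangle: one of the resulting RGB components comes out negative. The primary and white-point chromaticities are from the sRGB specification; the 500 nm locus chromaticity is approximate, and the sketch works with linear values only (it ignores gamma).

```python
import numpy as np

# Chromaticities from the sRGB specification.
primaries = {"R": (0.64, 0.33), "G": (0.30, 0.60), "B": (0.15, 0.06)}
white = (0.3127, 0.3290)                        # D65

def xy_to_XYZ(x, y, Y=1.0):
    # Chromaticity (x, y) plus luminance Y -> XYZ.
    return np.array([x / y * Y, Y, (1.0 - x - y) / y * Y])

# Columns: XYZ of each primary at unit luminance.
P = np.column_stack([xy_to_XYZ(*primaries[c]) for c in "RGB"])

# Scale the columns so that R = G = B = 1 adds up to the white point.
scale = np.linalg.solve(P, xy_to_XYZ(*white))
rgb_to_xyz = P * scale                          # scales each column
xyz_to_rgb = np.linalg.inv(rgb_to_xyz)

# A colour on the spectral locus near 500 nm (approximate chromaticity),
# well outside the sRGB triangle.
cyan = xy_to_XYZ(0.0082, 0.5384)
print(xyz_to_rgb @ cyan)    # the red component is strongly negative: not displayable
```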

Second Question

Similarly, if a camera captures all three [red, green, and blue light], why can it not capture the entire visible color space?

There are actually two different issues going on here. The first is related to the issue of limited gamuts above. For example, I have my camera set to record in the sRGB color space. The camera may be physically capable of detecting colors outside the sRGB gamut, but it isn't able to record them!

Again, cameras typically limit themselves to the "small" sRGB color space because they will most likely be edited and viewed on sRGB displays, and recording colors that you can't display is not worth the hassle for the average user.


The second issue is a little trickier, and deals with a phenomenon called metamerism.

This is the same phenomenon that causes some colors to look different under different lighting conditions like daylight, incandescent light, and fluorescent light. (For example, my camera bag usually looks black indoors, but has a slight brownish tint outdoors.)

This is caused by the fact that we reduced a continuous spectrum into only three numbers. Now it is still true that three numbers are sufficient to perceptually identify a color. However, getting those three numbers right is difficult. To see why this is, let's look at an example. I'll show the same D65 spectrum as before, but let's also look at a metamer of it.

metamer spectral distribution

The two spectra look quite different, don't they? Let's repeat our steps from the first section to calculate the X, Y, and Z values of the metamer:

CIE XYZ tristimulus value calculation: metamer

They happen to be exactly the same! This means that a light with the "metamer" spectrum will look identical to a light with a D65 spectrum. Since the spectrum of a light is continuous, there are an infinite number of metamers for every color.

Now let's look at how a camera sees this pair of perceptually identical colors. Here are the same color matching functions from before, along with three new functions (Rcam, Gcam, and Bcam) that represent the sensitivity of an imaginary camera to different wavelengths of light.

Example camera RGB sensitivity

To compute what raw RGB values the camera assigns to these two colors, we use the same procedure as for calculating the XYZ values: multiply the spectrum with the sensitivity curves, and take the area under each curve. (Note that I also scaled the areas so the maximum value would be less than 255.)

Camera RGB value: D65 illuminant

Camera RGB value: metamer

The two RGB values are different! Even after transformation to sRGB values, Adobe RGB values, or even xyY values, the two will remain different. Therefore these two colors will be recorded and subsequently displayed differently even though they appeared identical.
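If you want to reproduce this effect numerically, here is a compact Python sketch. The "observer" and "camera" curves are invented Gaussians (not the real CIE functions or any real camera's data); the metamer is constructed by adding a "metameric black", a spectral perturbation built to be invisible to the observer curves:

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.arange(400, 701, 5.0)

def gauss(mu, sigma):
    return np.exp(-((wl - mu) / sigma) ** 2)

# Invented Gaussian stand-ins: rows are spectral sensitivities sampled on wl.
observer = np.vstack([gauss(600, 55), gauss(550, 50), gauss(450, 35)])   # "CMF-like"
camera   = np.vstack([gauss(610, 40), gauss(540, 45), gauss(465, 30)])   # "RGB-like"

# Base spectrum: flat white light.
s0 = np.ones_like(wl)

# Build a perturbation the observer cannot see: take a random wiggle and remove
# the part that the observer curves respond to (project onto their null space).
d = rng.normal(size=wl.size)
d_black = d - observer.T @ np.linalg.solve(observer @ observer.T, observer @ d)
s1 = s0 + 0.3 * d_black / np.abs(d_black).max()    # small enough to stay positive

print(observer @ (s0 - s1))   # ~zero: the two spectra are metamers for the observer
print(camera @ (s0 - s1))     # nonzero: the camera records them as different colours
```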

This wouldn't be a problem if we could make filters that exactly mimicked the CIE color matching functions (or an invertible linear combination of them); and although in practice we can get close, it's nearly impossible to match them exactly.

Furthermore, two people may have different color matching functions! Although the variation is not huge, it can be enough to cause some colors to appear differently to different people. This means that even if we do everything right according to the CIE spec, the colors still won't look exactly right to some people.


In summary, color reproduction is simple in theory, but practical limitations mean that imperfect colors are the norm. However, "imperfect" is usually "good enough," and you probably don't have to worry about it.

Answered by 2012rcampion on December 8, 2020

Your basic assumption: "If all colors are combinations of red, green and blue" is just wrong. Rafael says it works on humans, but this is also wrong. Let me answer this: "What colors are we NOT seeing and why?"

Take the light coming from a low-pressure sodium lamp ("SOX"). It is made of two wavelengths at 589 nm and 589.6 nm, both of which have the same "amber" colour when projected onto a white screen. There is no red, no green, no blue here. This "pure amber" light is, typically, a colour that you cannot capture with a digital camera, reproduce on a screen, or print. This is the light that Olafur Eliasson uses here:


see also: https://www.youtube.com/watch?v=hd077pa-5CI

Of course, it seems you can approximate this colour pretty well on your computer. But the colour you actually get is not the same: it is more "pale", more "grey" or "white" than the original. SOX amber-yellow light looks like gold-yellow, incredibly saturated. In pictures it looks like a whitish, dullish, greyish orange; it does not in real life, believe me!

Other colours/objects that are common in real life but not well captured by digital cameras or displayed by computer screens are:

  • laser beams
  • ember
  • sunset/sunrise colours
  • primary CYAN printing ink
  • Neon orange paints
  • Red and orange selenium glass

...and lots of others

Answered by adrienlucca.wordpress.com on December 8, 2020

I would like to elaborate on the question of what we can do within a 3-color space.

As some people have answered, you can't exactly render all colors with only 3 primaries, because the horseshoe diagram is not... a triangle.

Concretely, the problem in even the biggest spaces possible is rendering correctly some colors at the fringe of the horseshoe, those outside the triangle; in real life, mainly some greens and cyans.

More complicated: not all people are equal in terms of the colors they see. You surely know about daltonism, but do you know about tetrachromacy? There actually are humans (very rare, admittedly), but also fish and birds, which have 4 different types of cones and can see much more complex colors than what can be approximated by 3-color pixels.

Actually, vision is a complex process that combines physical sensors (the cones) with neurological computation, so what you "see" is very far from a simple monochromatic color, and a given rendition may even exist only in your own brain and be different in someone else's! (Remember the famous dress that some saw as blue and black and others as white and gold.) The perception of a mix of 3 pure colors, RGB, computed to match a given real-life color, may differ slightly between people, both because the actual cone responses vary and because of the way the neural network immediately behind them interprets the signal.

As a consequence, the RGB pixel colors chosen will always be a compromise over what is good on average for humans. And then you have the physical constraints of actually manufacturing them, which may move things further away from the ideal rendition.

On top of that, when you talk about accessing a color space, you should keep in mind that the image was previously captured by a camera, which also has its own space. Some high-end cameras even use different profiles for different shooting situations (indoor, low light, etc.), meaning they do not render a given physical color in exactly the same way, because they need to prioritize different parts of the color space to make it look more natural to the eye.

The image you see on paper is usually the result of a process involving four "color spaces": the camera, the computer screen, your brain making any adjustments, and the printer. Most of the differences come from physical limitations in building the perfect monochromatic color pixel, but as explained above, part of them are theoretical limitations due to the human perception process.

Answered by Hugues on December 8, 2020
