Photography Asked by TopCat on March 14, 2021
It is commonly stated that the spectral response of RGB colour imaging sensors (CIS) is chosen to match that of the human eye. While this is roughly true, on closer examination the match holds only to a limited degree.
The human eye's spectral response has significant overlap between the red and green cones:
A typical colour imaging sensor's spectral response looks as follows:
There are enormous differences between the CIS and eye spectral responses. The red 50% point is ~515nm for the eye and 580nm for the CIS. The green 50% point is ~505nm for the eye and ~475nm for the CIS. The blue 50% point is 475nm for the eye and 510nm for the CIS.
Why is there such a large discrepancy between the two? Surely these sensors will require a large amount of colour crosstalk correction in the sensor electronics?
As you know, the human eye has light-sensitive, rod-shaped cells that register light and shadow. These allow vision in dim light; however, that vision is monochromatic. The center area of the retina contains pigmented cone cells, which selectively provide color vision. Most people have cone cells sensitive to red, green, and blue, the primary colors for mixing light sources.
Our color vision is not fixed in its sensitivity. Our eyes have an amazing ability to adapt: an involuntary shift in perceived hues based on viewing conditions.
You can prove this for yourself. Procure some colored filters; cellophane gift-wrap will do. Place a red filter over one eye, stare about the room for a few minutes, and then remove the filter. The filtered eye will have changed its sensitivity (quite a lot); the unfiltered eye will not have changed. This involuntary shift in color sensitivity happens independently in each eye. Try different colored filters, and blink right eye, left eye; you will be amazed.
The camera, be it film or digital, records light with a light-sensitive material. The color sensitivity of film is adjusted using colored dyes called sensitizing dyes. These alter the sensitivity of the film, which in its natural state is sensitive only to violet and blue light.
Digital cameras use colored filters over the light-sensitive sites on the imaging chip. The colors of the filters, and the ratio of red, green, and blue filters to each other, are chosen to make the chip register colors in a similar way to our eye/brain combination. Look up Bayer Matrix or Bayer Pattern on the web. The red, green, and blue filters over the light-sensitive sites are arranged in a special pattern that modulates (modifies) the light reaching the photoreceptors so that they mimic the human eye. Bryce Bayer was the Kodak engineer who devised this.
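As a rough illustration of the Bayer arrangement described above, here is a sketch (hypothetical code, not any camera's actual pipeline) of sampling a full-color image through an RGGB mask. Note that a real sensor filters the incoming light before it ever becomes a number; this only mimics the geometry of the pattern:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-color image through an RGGB Bayer mask.

    Each photosite keeps only one channel; the pattern per 2x2 tile is
        R G
        G B
    so green is sampled twice as often as red or blue, roughly
    matching the eye's greater sensitivity to green.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites (even rows)
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites (odd rows)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic
```

Demosaicing (interpolating the two missing channels at every site) is the reverse step that reconstructs a full-color image from this mosaic.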
Answered by Alan Marcus on March 14, 2021
This is a remarkably complex and interesting subject and no answer will be comprehensive.
The TL;DR result is that if the final picture looks acceptably like what you perceive, then it works.
Your eye spectra diagram seems a little compressed. You don't say where it came from. Here's one from Eye and Colors:
The color filters used for sensors will never match human spectral sensitivity. Not even humans match other humans. The question becomes whether you can manufacture color filters whose output can be acceptably processed to produce a result that works. On top of the spectral mismatch of the filters, sensors also use twice as many green sites as red or blue ones, to bump up green sensitivity and more closely match typical human color perception.
Could the color filters more closely match human spectral sensitivity? Probably, but at what cost, and to what benefit, when a little software already handles the corrections?
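That "little software" is typically a 3x3 color correction matrix (CCM) applied to the linear RGB values after demosaicing. A minimal sketch, using a made-up matrix (real CCMs are calibrated per sensor model, commonly by photographing a color chart under a known illuminant):

```python
import numpy as np

# Hypothetical 3x3 color correction matrix; the values here are
# illustrative, not taken from any real camera.
CCM = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.4,  1.4],
])

def correct_colors(raw_rgb):
    """Map sensor RGB to output RGB with a linear 3x3 matrix.

    The negative off-diagonal terms subtract the spectral crosstalk
    between channels; each row sums to 1 so neutral grays stay neutral.
    raw_rgb: array of shape (..., 3) with values in [0, 1].
    """
    out = raw_rgb @ CCM.T           # apply the matrix per pixel
    return np.clip(out, 0.0, 1.0)   # keep values in display range
```

The stronger the spectral overlap between the filters, the larger the off-diagonal terms must be, which amplifies noise; that trade-off is one reason filter spectra are not pushed to match the eye exactly.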
On top of camera processing you also have the question of presentation.
Various display devices have their own spectra that also don't match the human eye. For example, compare the Samsung Quantum Dot TV with a conventional TV:
This too requires its own processing to produce results that are desired or acceptable.
For even more esoterica, you may want to delve into All the Colors We Cannot See, and Red-Green & Blue-Yellow: The Stunning Colors You Can't See.
Answered by user10216038 on March 14, 2021
Surely these sensors will require a large amount of color crosstalk correction in the sensor electronics?
So do our eye/brain systems that correct for a wide variety of light sources.
Even though we often call them "red", "green", and "blue" cones, here are the actual colors to which our short wavelength (S-cones), medium wavelength (M-cones), and long wavelength cones (L-cones) are most sensitive:
Notice that our "red" L-cones are actually most sensitive to a lime green color between yellow and green and are less than 50% efficient at 640nm true "red" light. Our S-cones and M-cones aren't most sensitive to exactly "blue" and "green", either.
The Bayer masks over most of our color cameras' sensors aren't pure "red", "green", and "blue", either. The "blue" and "green" filters are pretty close, but the "red" filters are usually a shade of yellow-orange centered at around 590nm, rather than "red" light at 640nm.
Our emissive color reproduction systems (monitors, TVs, and other tri-color screens) do tend to emit colors that are closer to actual Red, Green, and Blue, with the total area of their emissions centered at about 460, 525, and 640 nanometers. The specific colors can vary widely from one display to the next. Some four-color emissive displays also have a yellow channel at about 580nm, which is actually closer to the 564nm peak sensitivity of our "red" cones than the 590-600nm peak of the "red" filters on our Bayer masks or the 640nm light emitted by the 'Red' channel of our RGB color reproduction systems.
For more, please see these existing questions and answers here at Photography SE:
Why are Red, Green, and Blue the primary colors of light?
Why don't mainstream sensors use CYM filters instead of RGB?
This answer to 6500k calibrated monittor - what are the RGB values to represent a given monochromatic radiation of known wavelength? contains a lot of background information applicable to the question above, including a link to this excellent resource.
Answered by Michael C on March 14, 2021
Somewhat technical, but here's a link describing something in imaging science called the Luther criterion, which explains why camera sensors cannot exactly match typical cone responses.
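The Luther (Luther-Ives) condition can be stated concretely: a sensor is colorimetric only if its spectral sensitivities are a nonsingular linear transform of the CIE color matching functions. A sketch of testing this numerically, using made-up Gaussian curves in place of real measured data:

```python
import numpy as np

def luther_residual(sensor, cmf):
    """Relative residual of the best linear fit  sensor ~= M @ cmf.

    sensor, cmf: arrays of shape (3, n_wavelengths).
    Returns ~0 when the Luther condition holds (the sensor responses
    are an exact linear combination of the matching functions).
    """
    # Least-squares solve for the 3x3 matrix: cmf.T @ M.T ~= sensor.T
    M_T, *_ = np.linalg.lstsq(cmf.T, sensor.T, rcond=None)
    fit = (cmf.T @ M_T).T
    return np.linalg.norm(sensor - fit) / np.linalg.norm(sensor)

# Synthetic illustration: Gaussians standing in for matching functions.
wl = np.linspace(400, 700, 61)
def gauss(mu, sig):
    return np.exp(-0.5 * ((wl - mu) / sig) ** 2)

cmf = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 30)])

# A sensor whose curves ARE a linear combination: passes the criterion.
exact = np.array([[0.8, 0.2, 0.0],
                  [0.1, 0.9, 0.0],
                  [0.0, 0.1, 1.0]]) @ cmf

# A sensor with shifted, narrowed curves: fails, residual is nonzero.
shifted = np.stack([gauss(620, 30), gauss(540, 35), gauss(460, 25)])
```

Real camera filters land in the "shifted" case: no 3x3 matrix can map their responses exactly onto the standard observer, so some color error always remains after correction.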
Answered by R Hall on March 14, 2021