Photography Asked on March 3, 2021
From what I understand, most digital cameras have a sensor where each pixel-sensor has three sub-sensors, each one with an R, G, or B filter. RGB is obviously the more fundamental colour model, since it directly corresponds to the receptors (cones) in the human eye.
However, RGB filters necessarily cut out two thirds of white light to get their component. Surely cameras would benefit from shorter exposure times if the filters were instead CYM, where each element cuts out only one third of the light? The camera's processor can still save the image in whatever format the consumer wants, since a CYM datapoint can be converted easily to an RGB one.
I know this is sometimes done in astrophotography where three separate B&W photos are taken with CYM filters.
Am I just wrong and this is, in fact, what’s already done – or is there a good reason for an RGB sensor?
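(For reference: in the idealized model the question assumes, the conversion really is just a complement. A minimal sketch of that relationship, ignoring the spectral overlap the answers below discuss:)

```python
# Idealized CMY <-> RGB conversion on normalized [0, 1] values.
# This is the textbook complement relationship the question assumes;
# it ignores the spectral overlap of real filter dyes, which is why
# real cameras need a full 3x3 colour matrix instead.
def cmy_to_rgb(c, m, y):
    return 1.0 - c, 1.0 - m, 1.0 - y

def rgb_to_cmy(r, g, b):
    return 1.0 - r, 1.0 - g, 1.0 - b

print(cmy_to_rgb(1.0, 0.0, 0.0))  # full cyan -> (0.0, 1.0, 1.0), i.e. green + blue
```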
First, a little background to clear up a slight misunderstanding on your part.
The vast majority of color digital cameras have a Bayer filter that masks each pixel with a color filter: Red, Green, or Blue.¹ The RAW data does not include any color information, but only a luminance value for each pixel.
"However, RGB filters necessarily cut out two thirds of white light to get their component."
Not really. There's a lot of 'green' light that makes it past the 'red' and 'blue' filters, a lot of 'red' light and a good bit of 'blue' light that makes it past the 'green' filter, and some 'blue' light that makes it past the 'red' filter and vice versa. The wavelengths on which the 'green' and 'red' filters are centered are very close to one another, and the 'red' filter usually peaks somewhere between 580 nm and 600 nm, which is more 'yellow-orange' territory than 'red'. The peaks of the filters in a typical Bayer array aren't aligned with the wavelengths we describe as "red", "green", and "blue."
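To make that overlap concrete, here is a toy model. The Gaussian shapes, centre wavelengths, and bandwidths are assumptions chosen to match the description above, not measurements of any real filter set:

```python
import numpy as np

wavelengths = np.arange(380, 701)  # visible range, nm

def filter_response(centre_nm, width_nm=45.0):
    # Assumed Gaussian transmission curve; real dye responses differ.
    return np.exp(-0.5 * ((wavelengths - centre_nm) / width_nm) ** 2)

red   = filter_response(590)  # 'red' filter peaking in yellow-orange
green = filter_response(535)
blue  = filter_response(460)

# Fraction of the 'red' filter's total response that falls in a
# nominally 'green' band (520-560 nm):
band = (wavelengths >= 520) & (wavelengths <= 560)
print(f"{red[band].sum() / red.sum():.0%}")  # ~19% with these made-up curves
```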
So in a sense, our cameras are really YGV (Yellow-Green-Violet) as much as they are RGB. It is our color reproduction systems (monitors, printers, web presses, etc.) that are RGB, CMYK, or some other combination of colors.
Typical sensor sensitivity charts show that the green-filtered pixels on a Bayer sensor array are the most sensitive to light. Additionally, half the color filters on a Bayer array are green, with only one quarter being blue and the remaining quarter being "red". This makes our sensors more sensitive to the middle of the visible spectrum than to either end of it. A typical sensor of this kind is most efficient with light just above 500 nm.
Once we consider that sunlight, as filtered by the Earth's atmosphere, is also brighter in the middle visible wavelengths than at the extremes, it should become clear why our eyes evolved to be more sensitive to those middle parts of the visible spectrum and why we design our camera sensors to be most efficient at those same wavelengths.
Solar spectrum charts show that sunlight is strongest at these middle wavelengths, and our eyes and cameras are most efficient and sensitive at those same wavelengths. So even though there are three different color filters on a Bayer mask, we don't lose fully two thirds of the light with a reasonably efficient sensor.
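That peak is easy to check: treating the Sun as a black body at about 5800 K, Wien's displacement law lands almost exactly on the ~500 nm sensitivity peak mentioned above.

```python
# Wien's displacement law: lambda_max = b / T
b = 2.898e-3    # Wien's displacement constant, m*K
T_sun = 5778    # effective solar surface temperature, K

peak_nm = b / T_sun * 1e9
print(f"peak solar emission: {peak_nm:.0f} nm")  # ~502 nm, blue-green
```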
The colors used in Bayer filter arrays mimic the human eye, where our 'red' cones are centered around 565 nm, which is a greenish yellow, as opposed to our 'green' cones that are centered around 540 nm, which is green with just a tint of yellow mixed in. Our 'blue' cones are centered at about 420 nm. For more about how both the human vision system and our cameras create "color" out of the portion of the electromagnetic radiation spectrum we call "light", please see: Why are Red, Green, and Blue the primary colors of light?
There's no hard cutoff between the filter colors, as there is with a filter used on a scientific instrument that only lets a very narrow band of wavelengths through. It's more like the color filters we use with B&W film. If we use a red filter with B&W film, green objects don't disappear or look totally black, as they would with a hard cutoff. Rather, the green objects look a darker shade of grey than red objects that are similarly bright in the actual scene.
Just as with the human eye, almost all Bayer filters include twice as many "green" pixels as "red" or "blue" ones. In other words, every other pixel is masked with green and the remaining half are split between red and blue. A 20MP sensor would thus have roughly 10M green, 5M red, and 5M blue pixels. When the luminance values from each pixel are interpreted by the camera's processing unit, the differences between adjacent pixels masked with different colors are used to interpolate a Red, Green, and Blue value (actually centered somewhere around 640, 530, and 480 nanometers, respectively) for each pixel. Each color is additionally weighted to roughly the sensitivity of the human eye, so the "red" pixels carry a little more weight than the "blue" ones do.
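A quick sketch of that ratio, using a repeated RGGB tile (one common Bayer layout):

```python
import numpy as np

# Tile a 2x2 RGGB pattern to sensor size and count each colour;
# the counts confirm the 2:1:1 green:red:blue split described above.
tile = np.array([["R", "G"],
                 ["G", "B"]])
sensor = np.tile(tile, (2000, 2500))  # 4000 x 5000 photosites, i.e. 20MP
for colour in "RGB":
    print(colour, (sensor == colour).sum())  # R 5,000,000  G 10,000,000  B 5,000,000
```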
The process of converting monochrome luminance values from each pixel into an interpolated RGB value for each pixel is known as demosaicing. Since most camera manufacturers use proprietary algorithms to do this, third-party RAW converters such as Adobe Camera Raw or DxO Optics will yield slightly different results than the manufacturer's own RAW converter.
There are some sensor types, such as the Foveon, that do have three color-sensitive layers stacked on top of one another. But the manufacturer claims such a sensor with three 15MP layers stacked on each other is a 45MP sensor. In reality, such an arrangement yields about the same amount of detail as a conventional Bayer-masked sensor of roughly 30MP. The problem with Foveon-type sensors, at least thus far, has been poorer noise performance in low-light environments.
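Returning to the demosaicing step: here is a minimal bilinear interpolation over an RGGB mosaic. It is only a toy sketch and bears no relation to any manufacturer's proprietary algorithm:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    # raw: 2-D float array of luminance values under an RGGB mosaic.
    h, w = raw.shape
    r_mask = np.zeros((h, w))
    r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w))
    b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Averaging kernels: at a sampled site they return the sample itself;
    # elsewhere they average the nearest samples of that colour.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

    r = convolve(raw * r_mask, k_rb)
    g = convolve(raw * g_mask, k_g)
    b = convolve(raw * b_mask, k_rb)
    return np.dstack([r, g, b])

raw = np.random.rand(8, 8)            # stand-in for real sensor data
print(demosaic_bilinear(raw).shape)   # (8, 8, 3): an RGB value per pixel
```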
So why don't most digital cameras use CYM filters instead of RGB¹ filters? The primary reason is color accuracy, as defined by human perception of the different wavelengths of light. It is much more difficult to interpolate color values accurately from adjacent pixels with a CYM mask than with an "RGB" mask.¹ So you give up a little light sensitivity to gain color accuracy. After all, most commercial photography at the highest levels is done either with controlled lighting (such as a portrait studio, where it is easy enough to add light) or from a tripod (which allows longer exposure times to collect more light). And the demands of professional photographers are what drive the technology that then finds its way down to consumer-grade products.
¹ Except the three color filters for most Bayer-masked "RGB" cameras are really 'blue with a touch of violet', 'green with a touch of yellow', and somewhere between 'yellow with a touch of green' (which mimics the human eye most closely) and 'yellow with a lot of orange' (which seems to be easier to implement in a CMOS sensor).
Correct answer by Michael C on March 3, 2021
Cyan-magenta-yellow sensors have been made, along with red-green-cyan and a few other variations.
The main problem is that even with RGB sensors there is significant overlap between the spectral responses of the dyes; the "green" pixels, for example, are sensitive to red and blue light to a certain extent. This means the results require complex calculations to obtain accurate colours: the relative responses of adjacent red and blue pixels are used to judge how much of the green response was really the result of red and blue light.
With CMY the problem is much worse: you're essentially trading light efficiency for colour accuracy. That may be fine for astronomical photography, where you don't always have crisp colour boundaries and can therefore reduce colour noise by blurring, but it's not good for landscape or fashion photography.
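One way to quantify that trade-off: recovering RGB from overlapping CMY responses means inverting a mixing matrix, and the magnitude of the inverse's coefficients tells you how much per-channel sensor noise gets amplified. The matrices below are toy assumptions, not measured data:

```python
import numpy as np

# Rows are sensor channels, columns the R, G, B light they integrate.
M_cmy = np.array([[0.1, 0.9, 0.9],   # cyan    passes mostly G + B
                  [0.9, 0.1, 0.9],   # magenta passes mostly R + B
                  [0.9, 0.9, 0.1]])  # yellow  passes mostly R + G

M_rgb = np.array([[0.90, 0.08, 0.02],  # mild cross-talk, for comparison
                  [0.05, 0.90, 0.05],
                  [0.02, 0.08, 0.90]])

for name, M in [("CMY", M_cmy), ("RGB", M_rgb)]:
    gain = np.abs(np.linalg.inv(M)).sum(axis=1)  # per-channel noise gain
    print(name, gain.round(2))  # CMY: ~1.84 each; RGB: ~1.23 each
```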
Amongst RGB chips, the exact choice of filters varies by manufacturer. Canon, for instance, uses weak dyes with a broad response in order to chase low-light performance, but the specific dyes are also tuned toward discerning colours under fluorescent lighting, for the benefit of the army of sports and news photographers who use Canon cameras.
Sony, on the other hand, tried to break into the professional fashion market with the A900 by providing very high colour accuracy. The colour filter arrays used in medium format digital backs are tuned to provide pleasing (though not necessarily accurate) skin tones.
Answered by Matt Grum on March 3, 2021
The reasons camera makers settled on the RGBG Bayer array likely have more to do with patents, availability, and cost than with color "accuracy". In principle, any set of three appropriately "orthogonal" (so to speak) colors should be fine for color reproduction. With more advanced sensors and processors, it should be even easier.
I doubt the RGB-vs-CMY color accuracy claim because conversions between RGB and CMYK are done all the time for print. Also, prior to white balancing, the demosaicked colors in raw files are nothing close to the actual desired colors. If the colors were really "accurate", photographers wouldn't have to spend so much time color-correcting photos.
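For example, a neutral grey patch typically comes out of demosaicing with unequal channel values until white-balance gains are applied. A schematic sketch with invented numbers:

```python
import numpy as np

# Hypothetical raw response of a grey patch under daylight (R, G, B).
raw_grey = np.array([0.30, 0.55, 0.42])
wb_gains = raw_grey.max() / raw_grey   # scale every channel up to match green
print((raw_grey * wb_gains).round(2))  # [0.55 0.55 0.55] -> neutral grey
```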
Fujifilm's various sensor experiments (Super CCD, EXR CMOS, X-Trans) demonstrate that just because everyone else does something a particular way doesn't mean it's necessarily the best way to do it. Kodak also experimented with different color arrays, but they didn't do a very good job of marketing their technology and patents.
The Nikon Coolpix 5700, a 5MP camera from around 2002, appears to be among the last cameras to use a CYGM color array. Digital Photography Review says:
Image quality is excellent, with that great matrix metering, good tonal balance and colour (accurate and vivid without blowing out colours) plus above average resolution. Purple fringing is down but the overall look of the image is still very 'Coolpix'. Noise levels are good, especially when compared to other five megapixel digital cameras (as indicated by our comparison to the Minolta DiMAGE 7i).
The few image quality details we picked up on (barrel distortion, highlight clipping and Bayer artifacts) aren't the kinds of problems which affect everyday shooting and won't spoil your overall enjoyment of the 5700's image quality.
Answered by xiota on March 3, 2021