
Why isn't the xvYCC color space seeing uptake for still photography?

Photography Asked on March 15, 2021

For the last fifteen years, sRGB has been the primary standard for computer monitors (and for consumer-level printing). That’s changing now, as wider-gamut LED-backlit monitors become common. Usually, photographers use these with a color space like aRGB, which is semi-standard — my camera can save JPEGs in that space natively, for example.

But there’s a new standard widely pushed in the AV industry to replace sRGB. This is IEC 61966-2-4 — xvYCC (or “x.v.Color”, for marketing purposes). This color space has a gamut 1.8× larger than sRGB, covering 90% of the color range of human vision (instead of the uninspiring 50% covered by our current common denominator). Read much more at Sony’s web site on xvYCC.

The important point, though, is that this isn’t theoretical. It’s part of the HDMI 1.3 standard, along with a specification for color depth of 10 to 16 bits per color (called “Deep Color”). Unlike aRGB, which is basically a professional niche thing, there’s broad support in consumer-level gear.

That’s the background. The question is: given that this is widely catching on, and that we’re all likely to have computer (and TV!) hardware capable of supporting it in the next few years, why is this being sold as basically only a video thing? It seems like the camera industry would be happy to get on board.

Sony is big into the idea, and launched video cameras supporting it four years ago now. The Playstation 3 supports it, for goodness’ sake! Why not put it in the Sony Alpha dSLRs as well? And Sony’s not alone — Canon has video cameras supporting it too.

Of course, if you’re shooting RAW, in-camera support is unimportant. It’s the converter software people who would have to get on board — why isn’t there a push for this? As I understand it, xvYCC is an extension of YCbCr, which is already used in JPEG files. But as I read the literature, I find lots of mentions of updated MPEG standards, but nothing about still photographic images.

Why can’t we have nice things?

7 Answers

xvYCC is a particularly clever way of encoding color data: it abuses the YCC representation by using previously-forbidden combinations of values to represent colors outside the gamut of the RGB space used in the YCC scheme. That is, some YCC tuples decode to colors with negative R, G, or B values. Previously these were simply illegal; in xvYCC they are permitted, and displays with bigger gamuts than the RGB system are welcome to render them as best they can. So really it's a clever, mostly-compatible hack to get some extra gamut without much changing the format.
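
To make the trick concrete, here's a minimal sketch (my own illustration, not reference code from the standard) that decodes Y'CbCr with the BT.709 matrix; xvYCC also covers BT.601, but the idea is identical:

    # Decode Y'CbCr to R'G'B' with the standard BT.709 matrix. Classic
    # YCC forbids chroma combinations that leave the [0, 1] RGB cube;
    # xvYCC allows them, and a negative component means a color outside
    # the BT.709 gamut.

    def ycbcr_to_rgb(y, cb, cr):
        """y in [0, 1]; cb, cr in [-0.5, 0.5]."""
        r = y + 1.5748 * cr
        g = y - 0.1873 * cb - 0.4681 * cr
        b = y + 1.8556 * cb
        return r, g, b

    # A legal combination: all components stay inside [0, 1].
    print(ycbcr_to_rgb(0.5, 0.1, 0.1))    # ~(0.66, 0.43, 0.69)

    # A classically illegal combination: decodes to a negative blue
    # component, i.e. a color no BT.709 display can show, but one a
    # wider-gamut display can approximate.
    print(ycbcr_to_rgb(0.3, -0.4, 0.3))   # ~(0.77, 0.23, -0.44)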

Does it make sense to use it in still photography? I don't really think so. There's not really the need to be compatible with YCC, so why not use a wide-gamut space like ProPhoto RGB? Or better yet, since using extra bit depth is not expensive for stills, why not go with something like CIELAB that can cover the whole human perceptible gamut? You have enough bits that the ability to encode all those imaginary colors doesn't cost you any appreciable amount of color resolution.

Of course, the question of camera support is a little bit irrelevant: if you really care about color, you should pull raw detector values from the camera and start from those. Even if you do this, you're still going to be stuck in the camera's gamut. The accuracy of your color representation will also depend on how well your camera's filters approximate the spectral response of human cones; get it wrong, and colors that look identical to the eye will look different to your camera. No encoding will fix that. In fact, this happened with one cheap digital camera I had: its IR sensitivity made embers look purple. Even if you screen out IR, things with spiky spectra (rainbows, fluorescent lights, minerals, and maybe some dyes) will show this effect even when continuum spectra look okay.

Correct answer by Anne on March 15, 2021

To start simply, the answer is: "It is used for still photography!" I'll explain a little more in a bit, though its use is fairly niche at the moment.

The roots of xvYCC

The xvYCC encoding is, as far as I can tell, a modern enhancement to YCC encoding, or in its long form, Y'CbCr (or YCbCr, which is slightly different). The YCC encoding is part of a family of luminance/chrominance color spaces, all rooted in the same opponent-color ideas behind the CIE L*a*b* ('Lab' for short) color space, which builds on CIE's colorimetry work of the 1930s (Lab itself was formalized in 1976). The Lab color space is also a luminance/chrominance color space, wherein the luminance of a color is encoded in the L* value, while two chrominance axes of a color are encoded in the a* and b* values. The a* value encodes one half of the chrominance along the green/magenta axis, while the b* value encodes the other half of the chrominance along the blue/yellow axis.

These two color axes were chosen to mimic and represent the four primary color sensitivities of the human eye, which also lie along a red/green and blue/yellow pair of axes (although true human eyesight involves a double-peaked red curve, with the smaller peak occurring in the middle of the blue curve, which actually means the human eye is directly sensitive to magenta, not red... hence the green/magenta axis in Lab).
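
Since the opponent-axis idea is easier to see in numbers, here is a small sketch of the standard XYZ-to-Lab transform (D65 white point; the function names are mine):

    # CIE XYZ -> L*a*b*: L* carries lightness, a* the green(-)/
    # magenta(+) axis, b* the blue(-)/yellow(+) axis.

    def _f(t):
        d = 6 / 29
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d ** 2) + 4 / 29

    def xyz_to_lab(x, y, z, white=(95.047, 100.0, 108.883)):  # D65
        xn, yn, zn = white
        fx, fy, fz = _f(x / xn), _f(y / yn), _f(z / zn)
        L = 116 * fy - 16        # lightness
        a = 500 * (fx - fy)      # green/magenta opponent axis
        b = 200 * (fy - fz)      # blue/yellow opponent axis
        return L, a, b

    print(xyz_to_lab(95.047, 100.0, 108.883))  # white -> (100.0, 0.0, 0.0)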

The YUV Encoding

Y'CbCr is probably most prominently recognized in the form of YUV video encoding. The YUV encoding was specifically designed to reduce the amount of space necessary to encode color for video transmission, back in the days when bandwidth was a rather scarce commodity. Transmitting color information as RGB triplets is wasteful, since R,G,B triplets encode color with a fair amount of redundancy: all three components include luminance information as well as chrominance information, with luminance weighted across all three. YUV is a low-bandwidth form of Y'CbCr luminance/chrominance color encoding that does not have the wasteful redundancy of RGB encoding. Depending on the subsampling format, YUV can consume anywhere from 2/3 down to 1/2 of the bandwidth of a full RGB signal (and, additionally, it stores the full-detail image in the distinct luminance channel Y, which conveniently supported both B&W and color TV signals with a single encoding format).

It should be clearly noted that YCC is not really a color space; rather, it is a way of encoding RGB color information. A more accurate term would be 'color model' than 'color space', and the term color model can be applied to both RGB and YUV.
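
The bandwidth arithmetic is easy to verify. In J:a:b subsampling notation, a 4x2 block of pixels carries 8 luma samples plus (a + b) samples each of Cb and Cr; the sketch below (my own back-of-the-envelope math) compares that against 3 samples per pixel for full RGB:

    # Samples per pixel for common chroma subsampling schemes,
    # relative to full R'G'B' (3 samples per pixel).

    def samples_per_pixel(a, b):
        luma = 8                  # one Y' per pixel in a 4x2 block
        chroma = 2 * (a + b)      # Cb and Cr samples in the block
        return (luma + chroma) / 8

    schemes = {"4:4:4": (4, 4), "4:2:2": (2, 2),
               "4:2:0": (2, 0), "4:1:1": (1, 1)}
    for name, (a, b) in schemes.items():
        spp = samples_per_pixel(a, b)
        print(f"{name}: {spp} samples/pixel = {spp / 3:.2f} of RGB")
    # 4:2:2 -> 0.67 (two thirds); 4:2:0 and 4:1:1 -> 0.50 (one half)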

From the reference linked in the original question, it appears that xvYCC is an enhanced form of Y'CbCr encoding that stores encoded luminance/chrominance color information with more bits than legacy YUV: instead of encoding luminance and chrominance in small interleaved sets of bits, xvYCC encodes color in modern 10-bit values.

Use in Still Photography

Intriguingly enough, there is one DSLR camera brand that does use something very similar. Canon added a new RAW format to their cameras in recent years, called sRAW. While a normal RAW image contains a direct dump of the full Bayer sensor data, sRAW is not a true RAW image format. The sRAW format does not contain Bayer data; it contains processed Y'CbCr content interpolated from the underlying Bayer RGBG pixel data. Similar to the TV days, sRAW aims to use the original signal information to encode luminance and chrominance data in a high-precision (14-bpc) but space-saving image format. An sRAW image can be anywhere from 40-60% the size of a RAW image, and the gains are realized by a similar interleaving and sharing of luminance information amongst multiple chrominance pairs (similar to how RGBG Bayer pixels are shared to generate actual RGB pixels).

The benefit of sRAW is that you maintain high human-perceptual color accuracy in a compact file format, and make better use of the RGBG pixels on the Bayer sensor (rather than the overlapped sampling that produces nasty color moiré, sRAW performs non-overlapped chrominance sampling and overlapped/distributed luminance sampling). The drawback is that it is not a true RAW format: color information is interpolated and downsampled from the full Bayer sensor. If you do not need the full RAW resolution of the camera (i.e. you only intend to print at 8x10 or 11x16), then sRAW can be a real benefit. It can save a lot of space (as much as 60% savings over RAW), it saves faster than RAW, allowing a higher frame rate, and it makes better use of the color information captured by the sensor than full-resolution RAW.
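
As a rough, purely illustrative sketch of where savings of that order could come from (these are assumed numbers and an assumed layout, not Canon's actual sRAW internals):

    # Hypothetical size comparison: a full-resolution 14-bit Bayer dump
    # versus a half-resolution Y'CbCr image with 4:2:2-style chroma
    # sharing at 14 bits per sample. Illustrative assumptions only.

    sensor_px = 21_000_000        # assumed 21 MP sensor
    bits = 14

    raw_bits = sensor_px * bits          # one Bayer sample per photosite
    sraw_px = sensor_px // 2             # assumed half-resolution sRAW
    sraw_bits = sraw_px * 2 * bits       # ~2 samples/pixel at 4:2:2

    print(f"RAW : {raw_bits / 8 / 2**20:.1f} MiB before lossless compression")
    print(f"sRAW: {sraw_bits / 8 / 2**20:.1f} MiB = {sraw_bits / raw_bits:.0%} of RAW")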

Answered by jrista on March 15, 2021

I'll add a couple of notes around Jon's...

  1. The color space is meaningful in a camera context only when talking about JPEGs because, for Raw images, the color space is a choice made in the "development" phase. Some cameras (Pentax semi-pros for certain) allow the choice of sRGB or aRGB for the JPEG development, so perhaps they may add a third (or a fourth, for ProPhoto). Then again, most professionals will pull the image into the desired color space for their intended output medium.

  2. The viewer (and/or device) must also be aware of the color space and be able to handle it. While wide-gamut monitors are becoming more common, they're very likely still a massive minority, and it will take a while for them to catch up. Heck, I know quite a few people who still have old CRT monitors hooked up to otherwise decent computers.

Answered by John Cavan on March 15, 2021

You have things almost completely backwards. This is not a case where still photography could/should "catch up" with video -- quite the contrary, this is a matter of video having finally caught up to (roughly) the capabilities that TIFF (for one example) provided a couple of decades ago (or so).

While you certainly didn't see very many 16-bits/channel TIFFs 20 years ago, the capability was already there, and 16 bits/channel (in TIFF and various other formats) is now fairly common. At the same time, I feel obliged to point out that most people seem to find 8 bits/channel entirely adequate. Just for one obvious example, JPEG 2000 supports 16 bits/channel and better compression than the original JPEG -- but sees nowhere near the use of the original JPEG spec.

Around the same time (actually, a bit before) xvYCC was catching up with (roughly) the capabilities of TIFF, the OpenEXR file format was being developed. It supports up to 32 bits/channel. While it's not yet in very wide use, I'd expect it'll be a bit like TIFF, and will come into wider use eventually.

As far as color space goes, it's true that with its larger number of bits per pixel, xvYCC supports a larger gamut than sRGB. Again, however, ProPhoto RGB (for one example) provides a much wider gamut -- and (in all honesty) it's open to some question whether there's much need for a larger color space than ProPhoto RGB already provides (roughly 13% of the colors you can represent in ProPhoto RGB are basically imaginary -- they go beyond what most people can perceive).

The advantage of xvYCC is in reducing the amount of data needed/used to represent a given level of quality. For HD video (in particular), minimizing bandwidth is extremely important. For digital still cameras, however, bandwidth is a much smaller concern -- while it would certainly be nice if (for example) I could fit twice as many pictures on a particular size of CF card, it's not a particularly serious problem. Relatively few people use the largest capacity of CF cards available, nor is the cost of CF cards a substantial part of a typical photographer's budget.

Bottom line: in terms of technical capabilities, xvYCC provides little that isn't already available.

Edit: I should probably add one more point. LCDs started to replace CRTs for most monitors about the time digital cameras came into wide use -- but consumer-grade LCD monitors are only now starting to exceed (or really even approach) 8 bits/channel color resolution. It was hard to worry much about having 10 or 12 bits/channel when a typical monitor could only display around 6.

There's also the minor detail that a lot of people just plain don't care. For them, photographic quality falls under a pass/fail criterion. All most people really ask for is that a picture be reasonably recognizable. I suspect people are slowly starting to expect better, but after years of Walgreens (or whomever) turning their red-headed daughter into a blonde (etc.) it takes a while to get used to the idea that color can be accurate at all.

Edit: There is actually another step beyond JPEG 2000: JPEG XR. This supports up to 32 bits/channel (floating point) HDR. It also specifies a file format that can include all the usual EXIF/IPTC-type data, embedded color profile, etc. Relevant to the question here, that includes a value to specify that a file should use the xvYCC color space (a value of 11 in the TRANSFER_CHARACTERISTICS syntax element, table A.9, in case anybody cares). This doesn't seem to be in wide use (at least yet) but does directly support xvYCC color space for still images.

Answered by Jerry Coffin on March 15, 2021

So, to answer my own question a bit after some research:

While it isn't xvYCC, for reasons that really still elude me (since JPEG encoding uses a similar older scheme), there do appear to be some encouraging moves on the "we can have nice things!" front: it appears that at least Microsoft cares about wider color gamut and better bit depth in still photography — at least a little bit.

They have been, slowly but surely, pushing for a new file format standard called JPEG XR (formerly called Windows Media Photo, and then HD Photo). It's an interesting move forward from the "traditional" JPEG, offering better compression at the same image quality, and (to the point of this discussion) higher bit-depth support.

JPEG 2000 does this too, but it's been largely a flop, possibly because of concerns with patents covering the wavelet compression it uses, or maybe something else. The important point is: Microsoft is promoting JPEG XR now, featuring it in a lot of their software, including Internet Explorer 9. As of 2009, it's an official real international standard, and is covered by Microsoft's "Community Promise" to not enforce their patents in a hostile way against implementations. So that's pretty good for future uptake.

And, along with that, they're pushing the idea of more bits per channel as "high color" (which is funny to me, since in my mind that's still the old 16-bit-for-all-channels video card mode). As part of this, they've got a possibly-ridiculously-large "intermediate" color space called scRGB — read a nice detailed account of it here — which is supported by JPEG XR, if you want. It might not be particularly useful as a final colorspace, since most of its colors are in the "imaginary" area outside of human perception. But anyway, the point is, Microsoft is integrating higher-bit-depth standards into the Windows operating system, and still photography is part of that. From a slightly-old-now CNET interview: "I absolutely expect scRGB support in cameras to accompany JPEG XR."
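
For what it's worth, as I understand the 16-bit integer flavor of scRGB (IEC 61966-2-2), it stores linear-light values with sRGB primaries as code = 8192 × value + 4096, which is what makes the huge range (roughly -0.5 to +7.5, far beyond display white) representable. A tiny sketch, assuming I have the constants right:

    # scRGB 16-bit integer encoding (as I understand it): linear light,
    # sRGB primaries, code = 8192 * value + 4096, clamped to 16 bits.

    def scrgb_encode(value):
        return max(0, min(65535, round(8192 * value + 4096)))

    def scrgb_decode(code):
        return (code - 4096) / 8192

    print(scrgb_encode(0.0), scrgb_encode(1.0))  # 4096 12288 (black, white)
    print(scrgb_decode(0), scrgb_decode(65535))  # -0.5 7.4998...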

But that was in 2007. Four and a half years later, we're still not seeing cameras supporting JPEG XR, let alone fancy wide-gamut high-depth color spaces. But, maybe I'm just being impatient. As the other answers here note, display hardware that supports wide gamut is just becoming available, support in the world's most popular OS is pretty recent, and the first web browser to support it was released this month. As that catches on, and is hopefully eventually picked up by Chrome and Firefox, image processing programs (including RAW converters) will gain support, and actual direct output from cameras will follow.

Or the whole thing will flop. Time will tell. :)

Answered by mattdm on March 15, 2021

The xvYCC color space probably isn't seeing uptake for still photography because newer standards have been developed that improve on the older ones, and no manufacturer wants to invest in a standard that might be deprecated before it's replaced by the 'next greatest thing'. They learned from VHS vs. Beta.

The High Efficiency Image File Format (HEIF), MPEG-H Part 12, is a file format that specifies a structural format from which codec-specific image formats can be derived.

HEIF also includes the specification for encapsulating images and image sequences conforming to High Efficiency Video Coding (HEVC, ISO/IEC 23008-2 | ITU-T Rec. H.265, or MPEG-H Part 2).

It is mentioned in Apple's WWDC 2017 Keynote Video: https://youtu.be/oaqHdULqet0?t=58m49s .

Apple's iPhone 7 and newer save what is photographed in either JPEG or HEIF format. Using HEIF can provide a pristine camera-to-storage-to-display solution: a complete infrastructure without loss or conversion from input to output (when using HEIF uncompressed).

It's not that they fully support every feature (much as MPEG is rarely "fully supported"), or that it wouldn't be easy enough for anyone else to do; it's just that they seem to be first out with a complete solution for images (for video we have had H.264, HEVC/H.265, and recently Hikvision's H.265+ for years).

If you know of other cameras supporting HEIF, please comment or edit; thanks.

Cameras that record images and videos at a particularly high dynamic range (where the sensor exceeds 16 bits per color) often don't process the data (make a compressed file) but instead output the raw data directly. For example, http://www.jai.com/en/products/at-200ge outputs 24 to 30 bits per pixel, and http://www.jai.com/en/products/at-140cl outputs 24 to 36 bits per pixel.

It is possible to obtain a CIE 1931 color space camera (and probably cameras for other color spaces) if you search endlessly, or if you're willing to pay a specialty camera supplier to make exactly what you want; you'll probably be on your own writing the software to convert from your color space to one used by other programs.

Here is a link to Quest Innovations' Condor3 CIE 1931 camera: http://www.quest-innovations.com/cameras/C3-CIE-285-USB

Cameras with 3, 4, 5, or 6 sensors can split the spectrum into smaller pieces and provide more bits per channel, resolving the exact color and intensity more precisely: http://www.optec.eu/en/telecamere_multicanale/telecamere_multicanale.asp


[Images: beam-splitting prisms for 3-channel (3CCD/3MOS), 4-channel (4CCD/4MOS), and 5-channel (5CCD/5MOS) cameras]


References:

https://developer.apple.com/videos/play/wwdc2017/503/

https://nokiatech.github.io/heif/technical.html

https://en.wikipedia.org/wiki/High_Efficiency_Image_File_Format

Answered by Rob on March 15, 2021

That is precisely what was used in Kodak Photo CD (the colorspace is called PhotoYCC); moreover, it is a predecessor of xvYCC_601. The reason it was not adopted is that it is ONLY limited range (the Cb and Cr components are extended for the wider gamut, while Y' stays the same), whereas full range is preferred for photo imagery. The other problem is that there is no defined transfer function when Y' is superwhite, i.e. more than (219 + 16) × 2^(n-8) (that is, 235 for 8 bits). From IEC 61966-2-4:

However, if the specular components that are brighter than white exist in a captured image, there will be pixels with Y′ signals greater than “1” [235 in decimal]. These components should be compressed (or clipped) into the given quantization range. An example for the specular compression method is provided in Figure A.1. NOTE Different proprietary compression methods in either Y’ components or R’G’B’ components are used in practice.

Moreover, HDMI reserves 255 and 0 (for 8-bit) in YCbCr mode (but not for RGB) for synchronization, and even more codes for 10 and 12 bits (bit levels "from 0 to 2^(N-8) - 1" and "from 255 × 2^(N-8) to 2^N - 1"). That is also a problem.
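
Those ranges are mechanical enough to compute. A small sketch (mine) of the nominal Y' range and the HDMI-reserved codes at different bit depths:

    # Limited-range Y' anchors (16 and 235 at 8 bits) scale by 2^(n-8);
    # HDMI additionally reserves the extreme codes in YCbCr mode.

    def ranges(n):
        scale = 2 ** (n - 8)
        y_lo, y_hi = 16 * scale, 235 * scale      # nominal Y' range
        res_lo = (0, scale - 1)                   # 0 .. 2^(n-8) - 1
        res_hi = (255 * scale, 2 ** n - 1)        # 255*2^(n-8) .. 2^n - 1
        return y_lo, y_hi, res_lo, res_hi

    for n in (8, 10, 12):
        y_lo, y_hi, lo, hi = ranges(n)
        print(f"{n}-bit: Y' nominal {y_lo}..{y_hi}, "
              f"reserved {lo[0]}..{lo[1]} and {hi[0]}..{hi[1]}")
    # 8-bit:  Y' 16..235, reserved 0..0 and 255..255
    # 10-bit: Y' 64..940, reserved 0..3 and 1020..1023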

In reality, nobody prohibits you from using, say, an LG C9 TV and writing your own software to present SDR colours outside BT.601 or BT.709 (xvYCC supports both).

Nevertheless, the sRGB equivalent, sYCC, is defined and is used in JPEG 2000.

Answered by Валерий Заподовников on March 15, 2021
