
luminance range for camera

Photography Asked by jllangston on October 31, 2020

According to this article, luminance is proportional to pixel measurements via the following:
L = \frac{N_d \, f_s^2}{K_c \, t \, S} \quad (1)

Where:
N_d is the pixel value
f_s is the f-stop
t is the exposure time
S is the ISO speed
K_c is a camera constant
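As a quick sketch, equation (1) can be applied directly to a raw pixel value. The calibration constant K_c below is a made-up placeholder, not a real camera calibration:

```python
def luminance_from_pixel(n_d, f_stop, t, iso, k_c=12.4):
    """Estimate scene luminance from a raw pixel value via
    L = N_d * f_s^2 / (K_c * t * S)  -- equation (1).

    k_c is a per-camera calibration constant; 12.4 here is a
    hypothetical placeholder, not a measured value.
    """
    return n_d * f_stop**2 / (k_c * t * iso)

# Example: pixel value 2048 at f/8, 1/100 s, ISO 100
L = luminance_from_pixel(2048, 8.0, 1 / 100, 100)
```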

If we shoot the same scene with the same ISO and f-stop but vary the exposure time, luminance is constant, and we can use a slope form:
L = \frac{\Delta N_d \, f_s^2}{\Delta t \, K_c \, S} \quad (2)

From Wikipedia, EV is related to luminance like so:
EV = \log_2\left(\frac{L \, S}{K_1}\right) \quad (3)

Where K_1 is another constant.

Combining (2) and (3) yields the following:
EV = \log_2\left(\frac{\Delta N_d}{\Delta t} \, \frac{f_s^2}{K_1 \, K_c}\right)

If we were to take the difference of EV values, we would get the following formula:
EV_2 - EV_1 = \log_2\left(\frac{\Delta N_{d_2}}{\Delta t} \, \frac{f_s^2}{K_1 \, K_c}\right) - \log_2\left(\frac{\Delta N_{d_1}}{\Delta t} \, \frac{f_s^2}{K_1 \, K_c}\right) \quad (4)
We can use this property of logs:
\log_a x - \log_a y = \log_a\left(\frac{x}{y}\right) \quad (5)
From (4) and (5) we get the following:
\Delta EV = \log_2\left(\frac{\Delta N_{d_2}}{\Delta N_{d_1}}\right)
Equivalently, the following:
2^{\Delta EV} = \frac{\Delta N_{d_2}}{\Delta N_{d_1}}
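Numerically, the result says an EV difference is just the base-2 log of the ratio of the two pixel-value slopes. A minimal check:

```python
import math

def delta_ev(delta_nd_2, delta_nd_1):
    """Delta EV = log2(dN_d2 / dN_d1): the EV difference between two
    scene regions shot at identical f-stop, ISO, and exposure steps."""
    return math.log2(delta_nd_2 / delta_nd_1)

# A region whose pixel values grow 255x faster with exposure time is
# log2(255) ~ 7.99 EV brighter: just under 8 EV, as for 8-bit data.
print(delta_ev(255, 1))
```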

For a pixel with 256 possible values, the max value of the right hand side approaches 256. My question is this: Since 2^8=256, for a given image, it seems like the maximum range we could theoretically see across it is 8 EV.

Is this correct? I realize that in equations 1 and 3, L is technically the average scene luminance, but if our scene were reduced to a single pixel, the math should be correct. Or am I applying something horribly wrong?

I forgot to add that this applies to raw images, as opposed to images processed through manufacturer tone curves.

Thanks much!

2 Answers

Raw files with linear encoding require 1 bit per EV, so an 8-bit recording (256 values) has a maximum range of 8 EV.

However, most cameras only record in 8-bit when producing JPEGs, which have a 2.2 gamma curve applied; 8-bit with a 2.2 curve can represent roughly 12 EV/stops.

And most cameras record raw files at either 12 or 14 bits, so raw files can typically record up to 12 or 14 EV. But that is only the EV range, i.e., the difference between the minimum and maximum recordable values. It is not the number of steps/stops discernible within that range; that is what DxO calls "tonality."
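The 1 bit/EV rule for linear encoding can be checked directly: with n bits, the ratio between the largest code and the smallest nonzero code is 2^n - 1, which is just under n EV:

```python
import math

def linear_ev_range(bits):
    """EV range of a linear encoding: log2 of the ratio between the
    maximum code value and the minimum nonzero code value."""
    return math.log2(2**bits - 1)

for bits in (8, 12, 14):
    print(bits, round(linear_ev_range(bits), 2))
# 8-bit linear tops out just under 8 EV; 12-bit and 14-bit raw
# files land just under 12 and 14 EV respectively.
```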


Answered by Steven Kersting on October 31, 2020

For a pixel with 256 possible values, the max value of the right hand side approaches 256. My question is this: Since 2^8=256, for a given image, it seems like the maximum range we could theoretically see across it is 8 EV.

Is this correct? I realize that in equations 1 and 3, L is technically the average scene luminance, but if our scene were reduced to a single pixel, the math should be correct. Or am I applying something horribly wrong?

Well, it's more or less correct if you never adjusted the digital linear values derived from the analog charges read off the sensor during analog-to-digital conversion (ADC), and simply converted the linear levels to 8 bits with the white point and black point set exactly eight stops apart. But an image displayed with its linear values intact isn't very useful to look at. It's mostly a blob of dark nothingness.

That all goes out the window when we process the monochrome luminance values collected from each photosite in 12 or 14 bits and apply non-linear gamma curves to that data before reducing it to 8 bits.

In such a case, one can make the distance between each discrete level as close or as far apart in brightness as one wishes. Adjusting the distance between discrete values is exactly what we do when we move the 'Contrast', 'Highlights', 'Shadows', 'White Point', and 'Black Point' sliders, or alternately use a 'Curves' tool, in a raw converter application before exporting to an 8-bit-per-channel raster image format.

We can choose to make everything darker than a specific 14-bit value equal zero in our 8-bit output image. That's what we call the 'Black Point'. We can also choose to make everything brighter than a specific 14-bit value equal 255 in our 8-bit output image. That's what we call the 'White Point'. We can then choose how much of the distance between our black point and white point is spanned by each discrete step between 0 and 255. We're not required to make 8,191 in our 14-bit raw file equal exactly 127 in our 8-bit image. In fact, we rarely do so, since we apply different channel multipliers to the demosaiced monochromatic luminance values from photosites filtered with the blue-violet, green, or yellow-orange filters of our Bayer masks to get red, green, and blue values for each pixel.

It's not that much different from what Ansel Adams did with his Zone System almost a century ago to be able to depict details from both highlights and shadows in scenes with a wider dynamic range than the photo paper he was printing on was capable of reproducing.

Answered by Michael C on October 31, 2020
