What's the difference between "Fake HDR" and real, bracketed exposure HDR?

Photography Asked by rubikscube09 on March 3, 2021

As I began to brush up on my landscape photography skills I came across the polarizing (pun intended) issue of HDR photography. More specifically, I came across a well-written blog post titled "Is a Lightroom HDR "Effect", Really HDR?". To quote the post:

I saw this post the other day on 500px (link: http://500px.com/photo/8023755) and it got me wondering. The photographer, Jose Barbosa (whose work I think is fabulous by the way), wrote "No HDR" next to his photo. But the photo (to me at least) looks like an HDR photo. (…) I did a little digging in the metadata of his photo and saw lots of adjustment brush work done with Clarity (basically the HDR effect slider in Lightroom). And in the comments on the 500px post, the photographer himself wrote "processing in Lightroom and Viveza 2". (…)

My point (and question to you) is whether HDR (that's not really HDR) is still HDR? Hasn't HDR simply become an effect? Kinda like Black & White or the cross-processing effect. Do we still need 3 or 5 or 7 bracketed photos that were processed in a program like Photomatix to classify an image as an official HDR photo? Or is HDR simply the effect of bringing out more detail in the shadows and highlights (and maybe a little gritty/surreal look to it)?

It seems I have the same question as the author: what really is the difference between these "fake HDR" effects added through, say, Lightroom's Clarity adjustment along with shadow/highlight recovery, as opposed to "real" HDR involving bracketed exposures at +/- n EV? Is there extra noise in the "fake" method? Is there any (noticeable) difference at all? On a similar note, is there any reason to take an HDR image if we can just use shadow/highlight recovery to evenly expose the entire scene?

5 Answers

What's the difference between “Fake HDR” and real, bracketed exposure HDR?

The only difference is how broadly or narrowly you decide to define the term High Dynamic Range Imaging (HDR). Do you use the broader term as it has been used historically for over 150 years, to refer to techniques for displaying a scene whose dynamic range exceeds that of the display medium? Or do you insist on a very narrow definition, built on techniques that have only been around for a couple of decades, and argue that the only legitimate HDR is an 8-bit tone-mapped version of a 32-bit floating point light map created by combining multiple bracketed exposures? That's pretty much it.

HDR, as the term is commonly used today, is only one form of High Dynamic Range Imaging (HDRI), a practice that has been going on since at least the 1850s.

Gustave Le Gray took multiple exposures at different exposure values to create seascapes that used the bright sky from one glass plate negative and the darker sea and shore from another.

The Zone System, with its control of exposure, development, and tone mapping performed in the darkroom, was raised to an art form in the mid-20th century by Ansel Adams and others, who used development times and the dodging and burning of prints to compress the total dynamic range of a scene into what their photographic papers were capable of displaying.

In the realm of digital photography there are multiple techniques used to depict a scene with a High Dynamic Range using a medium, such as a computer monitor or print, that is not capable of as great a contrast between the brightest and darkest parts of a scene as the scene itself contains. What many people mean when they say HDR is only one such technique among many.

Though far from the only legitimate one, the most common understanding today of the term HDR is what evolved from ideas first introduced in 1993 that resulted in a mathematical theory of differently exposed pictures of the same subject matter, published in 1995 by Steve Mann and Rosalind Picard. It makes a high-dynamic-range light map from multiple digital images exposed at different values using only global image operations (across the entire image). The result is often a 32-bit floating point 'image' that no monitor or printer is capable of rendering. It must then be tone mapped, by reducing overall contrast while preserving local contrast, to fit into the dynamic range of the display medium. This often leads to artifacts in the transitions between areas of high luminance values and areas of low luminance values. (Even when you open a 12-bit or 14-bit 'raw' file in your photo application on the computer, what you see on the screen is an 8-bit rendering of the demosaiced raw file, not the actual monochromatic Bayer-filtered raw data. As you change the settings and sliders, the 'raw' data is remapped and rendered again in 8 bits per color channel.)
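
For anyone who wants to see that pipeline in code, here is a minimal sketch using OpenCV's HDR module (cv2.createMergeDebevec plus a Reinhard tone mapper); the file names and exposure times are placeholders, not a reference to any particular set of shots:

    import cv2
    import numpy as np

    # Bracketed exposures of the same scene (8-bit files) and their shutter times in seconds.
    files = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]   # hypothetical file names
    images = [cv2.imread(f) for f in files]
    times = np.array([1/400, 1/100, 1/25], dtype=np.float32)

    # Combine the exposures into a 32-bit floating point radiance (light) map.
    hdr = cv2.createMergeDebevec().process(images, times)     # float32, proportional to scene luminance

    # The light map cannot be displayed directly; tone map it down to 8 bits per channel.
    ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)   # float32, roughly in [0, 1]
    cv2.imwrite("hdr_tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))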

When the techniques outlined by Mann and Picard were first applied in mainstream consumer-level imaging applications, those applications usually required the images used to be in JPEG format. A little later on, if you wanted to get really exotic, you might find a program that let you use TIFFs. Often users would take a single raw file, create a series of JPEGs from it with something like -2, 0, +2 exposure/brightness differences, and then combine them using the HDR program. Even a 12-bit raw file can contain as much dynamic range as a -2, 0, +2 series of JPEGs. A 14-bit raw file can contain the equivalent information of a -3, 0, +3 series of JPEGs. Only fairly recently have most HDR applications based on creating floating point light maps allowed the use of raw file data as their starting point.
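
As a rough sketch of that single-raw "pseudo-bracket" workflow (assuming the raw file has already been exported as a linear 16-bit TIFF; the file name is hypothetical), one could generate the three differently exposed renderings like this and then feed them to an HDR merge such as the one above:

    import cv2
    import numpy as np

    # Load a 16-bit linear export of the single raw file and normalize it to [0, 1].
    linear = cv2.imread("single_shot_linear.tif", cv2.IMREAD_UNCHANGED).astype(np.float32) / 65535.0

    for ev in (-2, 0, 2):
        exposed = np.clip(linear * (2.0 ** ev), 0.0, 1.0)    # simulate the exposure shift
        srgb = exposed ** (1.0 / 2.2)                        # simple display gamma
        cv2.imwrite("pseudo_{:+d}ev.jpg".format(ev), (srgb * 255).astype(np.uint8))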

In the broadest use of the terms HDR (or HDRI), other processes that do not involve 32-bit luminance maps and the necessity of tone mapping are also included. Combining different areas of different exposures of the same scene, whether via a physical 'cut & paste' as Le Gray did over 150 years ago or via modern digital imaging applications that use layers, is one way. Other techniques, such as Exposure Fusion or Digital Blending, perform the adjustments digitally in a way that doesn't require the same type of tone mapping that a 32-bit floating point light map does. As mentioned earlier, many of the techniques used in the darkroom to produce prints from exposed film in the 20th century were a means of displaying scenes with a very wide dynamic range on photographic paper that was capable of a lower dynamic range than the negative film used to capture the scene. The same is true of these varied digital techniques.
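
Exposure Fusion, for instance, is available out of the box in OpenCV as the Mertens et al. algorithm; note that it blends the bracketed frames directly and never builds a 32-bit light map, so no tone mapping step is needed (the file names are placeholders):

    import cv2

    images = [cv2.imread(f) for f in ("bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg")]
    fusion = cv2.createMergeMertens().process(images)        # float32 result, roughly in [0, 1]
    cv2.imwrite("exposure_fusion.jpg", (fusion.clip(0, 1) * 255).astype("uint8"))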

Even the ordinary conversion of a 14-bit raw file can be considered HDRI, especially when irregularly shaped tone curves are applied to the resulting RGB values: in the raw file the data for each pixel has only a luminance value and no real color, and demosaicing algorithms interpolate an 8-bit-per-channel red, green, and blue value for each pixel from the differing luminance values of adjacent pixels filtered through the Bayer mask's alternating pattern of red, green, and blue.

Correct answer by Michael C on March 3, 2021

if we can just use shadow/highlight recovery to evenly expose the entire scene

This depends on the dynamic range of the scene you are trying to capture and the dynamic range the sensor is able to capture.

If you barely get any details in the shadows when you expose in order not to blow the highlights, you need multiple exposures.

If you can get enough details in the shadows (with minimal or acceptable noise levels) while also preserving highlights, you might be satisfied with capturing and adjusting a single photo.
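
As a toy illustration of that trade-off (all the numbers below are illustrative assumptions, not measurements), the decision can be reasoned about in stops:

    import math

    scene_stops = 17.0     # assumed dynamic range of the scene (bright sky plus deep shadow)
    sensor_stops = 14.0    # assumed usable dynamic range of the sensor at base ISO
    bracket_step = 2.0     # EV spacing between bracketed shots

    if scene_stops <= sensor_stops:
        print("A single exposure plus shadow/highlight recovery can cover the scene.")
    else:
        extra = scene_stops - sensor_stops
        # Each +/- pair of frames extends coverage on both ends of the range.
        shots = 1 + 2 * math.ceil(extra / (2 * bracket_step))
        print("Bracket roughly {} shots at +/-{} EV to cover the scene.".format(shots, bracket_step))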

Answered by D. Jurcau on March 3, 2021

In my opinion, it's as simple as this: an HDR photo is a photo in which you try to bring up the details in every part of a scene with a high dynamic range. After all, that's what the name "HDR" itself says.

Now, what is a high dynamic range? It's when the shadow parts of the picture are a lot darker than the bright parts of the picture. Historically, one would take multiple photos at different exposures to capture the detail in every part of the scene, because cameras didn't have the ability to capture a high dynamic range in one shot. Nowadays, cameras can easily capture 14 stops of dynamic range (which means the brightest tone the camera can record is 2^14 times brighter than the darkest tone in which it can still capture detail); for example, the Nikon D750 has 14.5 stops of dynamic range. That is a lot, and in more situations than before it is enough to achieve the same effect as an "HDR photo" made from multiple exposures. So in short: cameras have become better at capturing large ranges, and therefore the need for multiple photos at different exposures has dropped, but that doesn't make the result not HDR. To conclude, the photo on 500px you linked is definitely an HDR photo, since you can see lots of detail in every part of a picture of a scene that originally had a lot of dynamic range.

Answered by Martijn Courteaux on March 3, 2021

There is a misconception about "HDRI"

A high dynamic range image has more dynamic range than a normal image. That may sound like a pretty lame explanation, but that is what it is.

Here are some animations I made explaining what an HDRI is: http://www.otake.com.mx/Apuntes/Imagen/EnviromentMaps/Index.html#HighDynamicRangeImages

The misconception is that one part of the process of manipulating these images, called tone mapping, is the same thing as the HDRI itself. It is not. An HDRI really contains more information: sunny exteriors and dark interiors together.

To be able to see that information again on a normal monitor, the HDR image has to be "remapped". A tone-mapped image has a specific type of contrast: not overall contrast, but contrast in adjacent zones. Now you can see contrasted clouds and contrasted dark interiors, for example.

The point is that you could take a normal photo (no bracketing involved) and tone map it, so you get the "HDRI" look.
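
As a rough illustration of that point, a single, non-bracketed 8-bit photo can be pushed through a local tone-mapping operator to get that look; this sketch uses OpenCV's Reinhard operator with arbitrary, illustrative parameters and a placeholder file name:

    import cv2
    import numpy as np

    # Treat the ordinary 8-bit photo as a float image so the tone mapper will accept it.
    single = cv2.imread("single_shot.jpg").astype(np.float32) / 255.0
    tonemap = cv2.createTonemapReinhard(gamma=1.0, intensity=0.0, light_adapt=0.8, color_adapt=0.0)
    mapped = tonemap.process(single)                         # local-contrast "HDR look"
    cv2.imwrite("single_shot_tonemapped.jpg", np.clip(mapped * 255, 0, 255).astype(np.uint8))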

Here are 3 shots. The first is tone mapped from a set of 3 bracketed photos (a real HDRI, but now tone mapped into an 8-bit image).

The second is tone mapped from a single shot.

The third one is just the normal single shot, but with the levels moved a lot to reveal some additional info in the sky that we did not see before.

The point is that a normal photo can be manipulated, dodged and burned, to achieve a similar contrasted look.

Here are the bracketed original photos. The one used for (2) and (3) is the EV 0 one:

Some info is hidden there. But when processed it could become banded, because you have limited information in the pixels. To avoid banding, it is better to have more information: more levels in the light tones and more levels in the shadows. That is why you took the bracketed images.

The point is similar to this post: What's the point of capturing 14 bit images and editing on 8 bit monitors? More levels of information are better when editing images.
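
A tiny numeric illustration of why the banding appears (the bit depths and the 3-stop shadow push are just assumptions for the example): pushing the shadows of an 8-bit file leaves far fewer distinct output levels than pushing the same shadows in a 14-bit file.

    import numpy as np

    def levels_after_push(bits, push_stops=3):
        # Take every representable value in the darkest eighth of an n-bit file...
        shadow = np.arange(2 ** bits // 8)
        # ...push it up by 3 stops (x8) and quantize to the 8-bit output scale.
        pushed = np.round(shadow * (2 ** push_stops) * 255 / (2 ** bits - 1))
        return len(np.unique(pushed))

    print(levels_after_push(8))    # 32 distinct output levels -> visible banding
    print(levels_after_push(14))   # 256 distinct output levels -> smooth gradations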

Answered by Rafael on March 3, 2021

My answer is a kind of practical, experimental way of getting to an understanding.

  1. Go to a place like a city, with buildings and streets where you can look at the sun or the sky directly on a sunny day, or to a place such as a forest where you can also see the sun or the sky.

  2. Look (using your own eyes) at the buildings or the trees.

  3. Look at the sky or close to the sun (but rather do not look directly at the sun).

You can observe how your eyes adapt. You can see both the trees and the sky.

  1. Take your camera; if it has a live view option, switch it on.

  2. Focus your camera on the same points you looked at.

You will see that the camera cannot, with one and only one exposure, effectively capture both the sky and the darker objects. Even if you shoot the scene with a kind of average exposure setting, there will be places which are black and some which are white (underexposed and overexposed). Even if you try to locally reduce or increase the exposure in specialized software (there is plenty of it), there will be no way to recover shape and colours from the pure whites and blacks.

That is the reality: current sensors are not as advanced as our eyes. By the way, for the sake of art this is also a desired property, which is used in low-key and high-key photography.

Let's continue the experiment.

  1. If you can shoot a couple of photos to get more information and then merge them in HDR software later, you will combine all the information from all the photos.

  2. The best approach is to shoot more images with a smaller EV step when the scene is static. If there are moving objects, there has to be a kind of trade-off between the number of shots and the EV step.

Depending on the situation you can shoot anything from 3 images at +/-1 EV or 3 at +/-2 EV, up to 9 or more shots at +/-0.5 EV or even +/-2 EV.

It is also important how you make the EV change. The most popular way is to increase or decrease the shutter time. Within some limits one can use ISO changes to get the same result, but at high ISO values images are noisier. Changing the aperture makes images tricky to merge, and the effects are interesting (I would say artistic or conceptual). Of course one can also use ND filters to extend the available EV range; try some extreme like an ND 3.0 filter.
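
As a small sketch of what a shutter-time bracket looks like in numbers (the base shutter speed, EV step, and frame count below are arbitrary examples):

    base_shutter = 1 / 100     # metered "0 EV" exposure, in seconds (illustrative)
    ev_step = 2.0              # EV spacing between frames
    shots = 5                  # total number of frames in the bracket

    for i in range(shots):
        ev = ev_step * (i - shots // 2)            # e.g. -4, -2, 0, +2, +4 EV
        shutter = base_shutter * (2.0 ** ev)       # each +1 EV doubles the shutter time
        label = "1/{} s".format(round(1 / shutter)) if shutter < 1 else "{:.1f} s".format(shutter)
        print("{:+.1f} EV -> {}".format(ev, label))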

Finally, when the set of images is well prepared and covers a really wide range of EVs, the HDR result will be amazing, and there is no way to fake it from a single image, since a single image will definitely contain less information.

Final comments.

You can capture this kind of scene using a graduated (gradient) filter, but HDR is more general, especially when the border between the light and dark areas is not a straight line.

I recommend using a tripod for any experiments with HDR :-).

Answered by Seweryn Habdank-Wojewódzki on March 3, 2021
