Photography Asked by Jonathan Winters on February 17, 2021
HDR and multiple-exposure blending seem very popular these days. I wonder: what did film photographers do to solve dynamic-range issues?
For example, I know GND filters were common (and still are), but what about cases where a mountain rose into the middle of the frame and a GND would have darkened it too much? Was the photo simply not taken? Were certain shots just not possible?
The first generally known case of taking two different exposures of the same high dynamic range scene and combining the results was around 1850. Gustave Le Gray did it to render seascapes showing both the sky and the sea. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive. Since then, combining multiple exposures to deal with wide dynamic ranges in a scene has been going on.
In the mid-twentieth century dodging and burning - selectively increasing or decreasing the exposure of regions of the photograph - became popular. This was done in the darkroom using masks to alter the exposure time of different elements in the scene when the image from a negative was projected onto the photosensitive paper using an enlarger. Ansel Adams raised the technique to an art form. All one has to do is study the prints he made of Moonrise, Hernandez, New Mexico to see this development. The negative was exposed and developed in 1941. Adams made over 1,000 prints as they were ordered by customers over the next 4 decades. He chemically altered the original negative at least once in the late 1950s to darken the sky. The prints considered most definitive weren't produced until the mid-1960s.
Adams also developed the Zone System for setting exposure. Cameras didn't have built-in meters until the middle of the 20th century; before then, a handheld light meter was used, or the photographer simply worked from knowledge of what luminance a particular object would have under different lighting conditions. Adams divided the luminance range his negatives could record into zones and would meter the brightest and darkest objects in which he wished to retain detail. He would then set exposure, and plan development, so that those brightest and darkest areas fell in zones that the papers he printed on could still reproduce. Even then, a negative could record more dynamic range than a print could reproduce. Thus the need for controlling contrast with exposure/development times and for dodging and burning.
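A rough sketch of the metering arithmetic the Zone System formalizes. The one-stop-per-zone spacing and the convention of placing a textured shadow on Zone III are standard; the meter readings and helper names here are hypothetical, purely for illustration:

```python
import math

def stops_between(lum_shadow, lum_highlight):
    """Scene brightness range in stops (each stop doubles luminance)."""
    return math.log2(lum_highlight / lum_shadow)

# Spot-meter readings in relative luminance units (hypothetical values).
shadow, highlight = 2.0, 256.0
scene_range = stops_between(shadow, highlight)   # 7 stops

# Each zone is one stop wide; placing the metered shadow on Zone III
# puts the metered highlight on Zone III + the scene range.
shadow_zone = 3
highlight_zone = shadow_zone + round(scene_range)
print(scene_range, highlight_zone)  # 7.0 10
```

A highlight landing above the zone a paper can hold is exactly the situation where Adams would alter development time rather than exposure.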
Different films also had different characteristics with regard to dynamic range and contrast. The amount of time used to develop exposed film also affected the highlights and shadows differently, as did the amount of time used to make the print.
Answered by Michael C on February 17, 2021
Myself, I just didn't worry. GNDs work, you can rotate and stack them as needed. You can create them for specific shapes if you want something special using semi-transparent gels or films, etc. etc.
But overall, people were more interested in the result. Nobody was pixel-peeping or number-crunching to judge an image's "quality" by numerical means; you simply looked at whether the result was pleasing to the eye.
Answered by jwenting on February 17, 2021
There was actually a chemical you could add to the processing to increase the stop range, but I cannot for the life of me remember what it was called.
Answered by Theo on February 17, 2021
The limitations one worked with and the techniques for getting around those limitations (when that was possible at all) differed considerably depending on the scene you were trying to capture as well as on the film you were using.
Graduated and split neutral density filters were part of the game, certainly, but they were only the beginning. But understand that that "beginning" could extend, in the case of a landscape photographer using a large-format (8x10 or larger) camera and colour transparency film, all the way to creating a custom cut-out ND gel (created by tracing the image on the camera's ground glass), which would be stuck to a plate and used in a compendium lens shade (sometimes called a "matte box") in front of the lens. Mind you, those were the obsessive types who created images for the calendar, jigsaw puzzle and coffee table book markets — they tend to make a dozen or two spectacular pictures each year by showing up at the same location for weeks on end, waiting for everything to be exactly right, and going home without taking a picture more often than not.
In the very beginning, there was image compositing — something we of the Photoshop generation tend to take for granted, and tend to assume is a new thing. (If we understand that it was done in the past, we tend to think of it as special-effects trickery, or in terms of surreal images like those created by Jerry Uelsmann.) The fact of the matter is that it was almost necessary in the early days, since the plates or paper negatives were sensitive only to blue light. Because exposure for things other than the sky depended on capturing whatever minuscule amounts of blue light were reflected by non-blue things, one couldn't actually create an image that had both sky and ground detail. If the plate had a sensitivity that was equivalent, say, to ISO 1 for things terrestrial, it also had a sensitivity equivalent to ISO 64 for the daylight sky, which would put the sky somewhere around 8 stops brighter than the midtones of the landscape at the best of times. So you had a choice: white skies, or separate exposures. (And if you're going to do separate exposures, why not just have a stock library of pretty and dramatic skies on hand, eh?)
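The stop arithmetic above can be sketched out. The 6-stop sensitivity gap follows directly from the ISO figures in the answer; splitting the remaining ~2 stops out as the sky's intrinsic brightness over landscape midtones is my own assumption about how the ~8-stop total is reached:

```python
import math

# Effective-sensitivity gap of a blue-only plate, using the answer's
# illustrative figures: ISO 1 for the land, ISO 64 for the daylight sky.
sensitivity_gap = math.log2(64 / 1)          # 6 stops

# The sky is also intrinsically brighter than landscape midtones;
# assume roughly 2 stops here, giving the ~8-stop total in the text.
sky_vs_midtone = 2
total_gap = sensitivity_gap + sky_vs_midtone
print(total_gap)  # 8.0
```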
Orthochromatic film (sensitive to violet, blue, green and into the yellows) lessened, but did not solve, the sky problem. When panchromatic B&W film (sensitive across the visual spectrum) came along, the nature of the problem changed completely. One no longer had to accept losing the sky altogether as "just the way things are" since it became possible to capture the sky and the ground in the same exposure. And that, more than anything else, put the notion of image compositing on the back-burner in most minds.
With B&W pan film, your choices are almost unlimited. If the too-bright areas of your image can be isolated by colour (like a blue sky), then you can use one of the band-block filters to selectively tone down that portion of the spectrum. That's why people used filters like the K2, #25 and #29 filters — they all block short-wavelength (blue & violet) light to varying degrees, which could provide anything from a detailed-but-plausible sky to something really dramatic. (The K2 could almost be considered a contrast-correction filter.) You could also use a split or graduated neutral density filter (or a split colour filter, depending on the effect you wanted to achieve).
The biggest tool in the arsenal, though, was contrast manipulation in development. Developing for a shorter period of time results in a thin, low-contrast negative, and developing longer results in a dense, high-contrast negative. With a "proper" exposure, either will ruin the picture. But if you deliberately overexpose the negative and then under-develop, you end up with a negative that has normal density and lower-than-normal contrast. (Similarly, if you under-expose and over-develop, you get a normal-density negative with very high contrast.) With this knowledge in hand, you can quickly get to a rule of thumb that says: expose for the shadows and develop for the highlights. That requires metering, usually with a spot meter (not incident metering) to assess the absolute values of the shadows and the relative values of the highlights.
The Zone System is a way to systematize that process. Through testing, one can develop sets of exposure/development combinations for various films that will accommodate various contrast ranges such that they can be printed on a standard paper. The "real" Zone System is an image-by-image thing, and is really only suitable for sheet film (or rolls that are shot entirely under one set of conditions); the "standard operating procedure" for roll-film shooters was to find a good N-1 recipe and rely on paper contrast grade changes to make up the difference between negatives.
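The "expose for the shadows, develop for the highlights" bookkeeping can be sketched as follows. I'm assuming the common convention that Normal (N) development fits about a 5-stop textural range and that each N-step expands or contracts that range by roughly one stop; the function name and EV figures are hypothetical:

```python
def development_for(shadow_ev, highlight_ev, normal_range=5):
    """Pick an N-number: 0 means Normal development, negative means
    contract contrast (N-1, N-2...), positive means expand (N+1...)."""
    scene_range = highlight_ev - shadow_ev   # stops between metered extremes
    return normal_range - scene_range

# Spot-meter readings in EV (hypothetical), shadow placed on Zone III.
n = development_for(shadow_ev=7, highlight_ev=14)   # 7-stop scene
print(f"N{n:+d}" if n else "N")  # N-2
```

This is exactly why the recipe is per-image: a roll of film gets one development time, so roll shooters settled on one N-number and leaned on paper grades instead, as described above.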
As with HDR, though, that results in a job half done. It's really nice that you managed to get everything into one image, but the result is flat and uninteresting until it is tone-mapped. (And it really is; see Chip Forelli's B&H Event Space presentation Straight Print to Finished Print: the Untold Story for a lot more info.) That's where dodging and burning come in — one needs to put back all of the interesting details that creating a low-contrast image minimized. That, of course, is a lot of work — work you can lessen quite a bit by combining the Zone System with selective colour filtration and graduated/split ND filters at capture time.
Colour film changed the game again. Not only does it have (mostly) inherently lower latitude than panchromatic B&W, it is also much more prone to reciprocity failure. This isn't the simple reciprocity failure of B&W film, where you need to make long exposures even longer. Anyone can cope with that, and long exposures with colour film are not much more complicated than with B&W film. No, there's a complex interplay between the silver and colour developers, along with the differing depths and thicknesses of the colour emulsions, that means there is only a small range of development times where the relationships between the various colours are even close to workable. For slide films, a one-stop push or pull is drastic and already starting to show colour shifts. For colour negatives, you start running into unfixable colour shifts at about 1 1/3 stops over and 1 2/3 stops under (depending on the film; "consumer" Kodaks were better-behaved but had strikes against them in other areas). And even when you could get the contrast right, pushing and pulling would do things to saturation that might not fit the image.
So how did we handle contrast with colour? Well, we started by choosing the right film for the job. A wedding photographer, for instance, would gravitate towards something like Kodak's Vericolor III Professional (VPS), which had a wide latitude and low saturation, and could (if used carefully) capture both the bride's dress and the groom's tux comfortably and render good skin tones. But VPS would result in a pretty bland landscape. So we used films with a bit more saturation and "punch", often picking different films for different circumstances (you couldn't top Kodachrome for fall foliage, but Fuji's Velvia had it beat six ways from Sunday for lush spring vegetation). We used split/graduated ND filters where they made sense. Since printing upped the contrast again (especially for Cibachromes/Ilfochromes made from transparencies) we'd use contrast masking to knock it back down again when it was appropriate. And — perhaps most importantly — we learned to make strategic sacrifices when they made sense. If blacks had to block up or whites had to blow out to make the best picture, that's what we did. It wasn't a tragedy in those days. (I'm not convinced it's a tragedy today either.)
Answered by user2719 on February 17, 2021