Is there a known practice of post-processing to make a finished photo while viewing a subject?

Photography Asked by bdsl on May 11, 2021

Photographers often aim to create a work that accurately depicts a subject, and/or is informed or inspired by the experience they had looking at it.

A significant part of the work of making a photo is often in post-processing on a computer, rather than in setting up and taking the photo with a camera.

So I wonder whether there’s any known practice of taking a computer to the subject (or vice versa) and creating the finished photograph while viewing both together. I’m sure that people have done this but my question is whether it’s a practice that has a name and perhaps prominent photographers have talked about doing, or prominent photography commentators have discussed.

Of course for small product photography this is likely to happen incidentally, and for street photography it’s usually impossible, so I’m thinking more about things like landscape, cityscape, and portraiture.

In the comments Kaz asked what the purpose of this would be. I'm looking for answers about people doing it for any purpose (or the negative answer that no one really does it), but a few purposes I can think of might be: using a subjective impression of colours, brightness, etc. as a reference; preserving the option to take further photographs in case it turns out during post-processing that relevant details visible in the subject weren't sufficiently captured; capturing a mood in a more abstract way; working in collaboration with a portrait subject; or just enjoying the environment while making a landscape image.

3 Answers

It's not exactly the term you are looking for, but searching for information about tethering will get you on the right track. I'm not aware of any generally used term for tethering plus applying further post-processing during a shooting session, but some folks definitely do it.

Tethering is when the camera is connected to a computer or other host device that displays each image as soon as it is transferred from the camera, immediately after it has been shot. Depending upon what application is running on the host device, post-processing steps not available in-camera can be applied to each image as it is imported and then shown on the host device's screen.

Tethering also allows controlling many functions of the camera from the host device. Things such as ISO, Tv, Av, etc. can be set from the application running on the host device. Of course, if a specific lens requires manually turning an aperture ring that is not controllable by the camera, then one can't adjust aperture via the tethering application. Ditto for zooming: if the only way to zoom a lens is to turn the ring on the lens, then one can't change the focal length from the tethering application (unless one also has an electromechanical device attached to the lens that can move the zoom ring and be controlled from the host device).

Answered by Michael C on May 11, 2021

For analog: instant film such as Polaroid. An example is William Wegman's work with the 24″×24″ camera.

For digital: straight to jpg using a camera's built-in features such as black and white, sepia tone, warm, etc. (the options tend to increase with each new generation of camera).

Personally, I find direct to jpg liberating because there is no more work to be done later. RAW always means there's more work to do, and that work is sitting at a computer, not behind a lens. Mirrorless means I see what the jpg will look like.

For portraits and other static subjects, lighting techniques can produce exact results straight out of the camera. And again, it's not time at a computer.

It’s just photography.

And that’s what you can call it.

Anyone who tells you different is wrong. Just make pictures the way you want to make them.

Edit: a person can use one or more color filters in front of the lens to make fine adjustments to color balance. Some cameras, such as some by Sony, allow in-camera adjustments on the blue-amber and green-magenta axes. https://support.d-imaging.sony.co.jp/support/tutorial/ilc/ilce-6400/en/06.php
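The strength of such color-correction filters is conventionally quantified in mireds (micro reciprocal degrees), where mired = 1,000,000 / color temperature in kelvin. As a minimal sketch (the function names here are my own, not from any library), this is how one would compute the shift a filter needs to provide:

```python
def mired(kelvin):
    """Convert a color temperature in kelvin to mireds (micro reciprocal degrees)."""
    return 1_000_000 / kelvin

def filter_shift(source_k, target_k):
    """Mired shift a filter must provide to move light from source_k to target_k.
    By convention, negative shifts are bluish (cooling) filters,
    positive shifts are amber (warming) filters."""
    return mired(target_k) - mired(source_k)

# Example: rendering 3200 K tungsten light as 5500 K daylight
shift = filter_shift(3200, 5500)
print(round(shift))  # -131, close to a Wratten 80A conversion filter's rating
```

The mired scale is used because equal mired steps look like roughly equal color changes to the eye, which a raw kelvin difference does not.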

Answered by Bob Macaroni McStevens on May 11, 2021

Photographers are very concerned with accurate capture, and the art of photography deals with inherent limitations in 1) the differences in light perception between eye and camera, and 2) the nature of light in the subject vs. a digital screen vs. a print. Strictly speaking, Polaroid-like photography is probably the only way you can do live calibration, comparing the real vs. captured, reflective light quality. Beyond this, the photographer is largely "blind" without additional tools to increase the reliability of their shoot. Below are some descriptions of how this is done.

The closest to what you're hinting at, to me, is various practices of monitoring and feedback. Many properties can be monitored and calibrated: light metering, color matching, white balance. For extra accuracy, all of these are sampled multiple times across different parts of the subject and evaluated for 1) absolute values and 2) relative values (one heuristic for ideal contrast is an 8:1 ratio of light exposure across your subject).
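That 8:1 heuristic can be restated in photographic stops: each stop is a doubling of light, so a ratio converts to stops via a base-2 logarithm. A small sketch (the function name is my own):

```python
import math

def ratio_to_stops(ratio):
    """Express a lighting ratio (bright side : shadow side) as a difference
    in photographic stops. One stop = a doubling of light, so stops = log2(ratio)."""
    return math.log2(ratio)

print(ratio_to_stops(8))  # 3.0 -> an 8:1 ratio spans three stops
print(ratio_to_stops(2))  # 1.0 -> a flatter 2:1 ratio is only one stop
```

This is handy when metering: reading the bright and shadow sides of a face with a spot meter and comparing the stop difference is equivalent to checking the ratio.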

Then it depends on what the final medium is. Traditionally, this is a print. If so, and what you're after is to improve the accuracy of the photo, then both the camera LCD and the computer monitor may be too unreliable. The lighting and exposure of your subject is reflected light, unlike digital screens, which are usually backlit. Therefore, screens are in turn calibrated to a specific printer and its ink (professional printers will help with this).

To broaden the idea, lenses are also chosen as a function of their distortion and magnification relative to the naked eye. Wide-angle lenses "stretch" width, and telephoto lenses can "compress" depth. A roughly natural field of view corresponds to about a 50mm lens on a full-frame sensor, with the equivalent focal length depending on your sensor size. Finally, sensors and lenses contribute to more or less naturalistic contrast, and lighting and shadows will change the sense of a more 2D vs. 3D subject.
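The sensor-size dependence is usually expressed with a crop factor: a lens's full-frame-equivalent focal length is its actual focal length times the crop factor. A minimal sketch (function names are my own; crop factors of roughly 1.5 for APS-C and 2.0 for Micro Four Thirds are common values):

```python
def equivalent_focal_length(focal_mm, crop_factor):
    """Full-frame-equivalent focal length for a lens on a cropped sensor."""
    return focal_mm * crop_factor

def natural_focal_length(crop_factor, normal_ff=50.0):
    """Focal length on this sensor that gives roughly the 'normal'
    50mm full-frame field of view."""
    return normal_ff / crop_factor

print(round(natural_focal_length(1.5), 1))  # 33.3 -> ~33mm on APS-C looks 'normal'
print(round(natural_focal_length(2.0), 1))  # 25.0 -> 25mm on Micro Four Thirds
```

So the "somewhere close to 50mm" figure in the answer holds for full frame, while a smaller sensor needs a proportionally shorter lens for the same view.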

Answered by Mark K on May 11, 2021
