Asked on November 2, 2021
This question is not quite in the same vein as artistic photography, but I feel this StackExchange is best suited to answering it. The closest questions I could find for further reading include this and this, but I feel they are not quite the same as what I need.
For a research project, I am looking to take photographs of tubes carrying varying concentrations of blood in a hospital, and a readout from a prototype spectrophotometer. I am trying to assess the agreement between the spectrophotometer/colorimeter’s readout, and the perception of the color/concentration of blood in the tube. Currently, nurses/residents qualitatively assess the blood density and make decisions on patient treatment that way, possibly using a palette of blood colors, e.g. some kind of hematuria scale.
To get data of a more objective variety than just trusting a nurse's or doctor's readout, I believe I need to take photos of different patients' tubing as they carry different concentrations of blood, and record the spectrophotometer result at the same time. I can then compare RGB readouts of the tube in Photoshop to the hematuria grading scale (which has CMYK values associated with each blood concentration grade) linked above, and determine statistical agreement between my device and the "actual color" of the tube… notwithstanding a phenomenological definition of color. My spectrophotometer records absorbances (in watts/area) at a few wavelengths in the visible range, but I've made a little lookup table in code that translates that to hematuria grades from 0 to 10.
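For illustration, a minimal sketch of what such a lookup might look like in Python; the wavelength choice and all threshold values here are invented placeholders, not real calibration data:

```python
# Hypothetical absorbance-to-grade lookup. The 415 nm reading (near
# hemoglobin's Soret absorption peak) and all thresholds are placeholders.
GRADE_THRESHOLDS = [0.02, 0.05, 0.09, 0.14, 0.20, 0.28, 0.38, 0.50, 0.65, 0.85]

def absorbance_to_grade(absorbance_415nm: float) -> int:
    """Translate one absorbance reading into a 0-10 hematuria grade."""
    for grade, threshold in enumerate(GRADE_THRESHOLDS):
        if absorbance_415nm < threshold:
            return grade
    return 10
```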
While I know very little about manipulating color, my understanding is that as I go around the hospital taking photographs, the ambient light, the angle, and a LOT of other factors might change the result captured and processed by my smartphone/digital camera sensor. My smartphone outputs RAW, or I can buy a better camera, but either way, to "correct" for ambient conditions, I want to take all these photos with a standard palette in the background, and use that palette to match all the images up with each other.
My question then is this: how should I best go about using an in-picture color palette to "match" colors between photographs? Any workflows I should look into or avoid? What other factors might I not be aware of that I should take into consideration for "scientific" color analysis?
I think you need to define what your null hypothesis is, to help guide you in designing a robust test (and to think about what may mess with the results).
I would suggest taking a bunch of images with a printed scale beside them.
As a pilot, you could just use your own urine mixed with blood from a finger prick, or the juice from the bottom of a meat tray. (Be aware this would be biohazard waste, so have a plan for disposal.)
Find a part of the image which is white* (ideally an 18% grey card, but you don't have that luxury) and use that to create a lighting correction filter (see the sketch below).

* Most urine collection bags have a part that is made of white plastic.
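A minimal sketch of that kind of white-patch correction, assuming NumPy and an image already decoded to a linear-RGB float array (the function name and structure are illustrative, not a standard recipe):

```python
import numpy as np

def white_patch_correct(image: np.ndarray, white_rgb: np.ndarray) -> np.ndarray:
    """Scale each channel so a sampled 'white' region becomes neutral.

    image:     H x W x 3 float array in linear RGB, values in [0, 1]
    white_rgb: length-3 average RGB sampled from the known-white patch
    """
    gains = white_rgb.max() / white_rgb  # per-channel gains toward neutral
    return np.clip(image * gains, 0.0, 1.0)
```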
Then you will want to collect a bunch of images of real samples next to a printed copy of that hematuria scale (so same lighting etc.). Then compare the color spaces. I would suggest comparing in the HSL (Hue, Saturation, Lightness) colorspace, but check whether another may be better.
I am guessing hue will have a strong correlation. Lightness will need to be corrected against the ambient lighting (which you can work out by comparing the lightness of a white area of the image), but will also probably have a strong correlation. Saturation may or may not be useful. (It may also be worth looking at the exposure metadata in the image to help normalize your values.)
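A sketch of the conversion step using Python's standard colorsys module (note that colorsys calls this space HLS and expects RGB in [0, 1]; the sampled pixel values below are made up):

```python
import colorsys

def rgb_to_hsl(r: int, g: int, b: int) -> tuple:
    """Convert 8-bit RGB to (hue, saturation, lightness), each in [0, 1]."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return h, s, l

# Example: a pixel sampled from the tube vs. a printed scale patch.
tube = rgb_to_hsl(142, 38, 40)
patch = rgb_to_hsl(150, 45, 42)
print(abs(tube[0] - patch[0]))  # hue difference, the axis likely to correlate
```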
You will want to collect the 1-10 grade at measurement time, to compare what an eyeball thinks versus what it looks like in a photo.
Something else you will need to factor in is the urine darkness scale (clear to yellow to dark brown).
Someone who is dehydrated or has jaundice is going to have a very different hematuria scale.
Answered by DarcyThomas on November 2, 2021
Since you have a scientific background (I hope?), I'll use mathematics to describe why what you're proposing is impossible.
A camera only has 3 color filters: red, green, and blue. Obviously those three words are qualitative and not quantitative at all, but what is important is that all incoming light is multiplied by the spectral transmission function of those RGB filters before being registered on the photodiode. This means that each pixel in a color image (after demosaicing) is basically a 3-dimensional vector.
White balance, color grading, etc. are merely linear or affine transforms on this 3-dimensional vector space.
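As a toy illustration of that point (the gain values here are arbitrary examples, not real camera coefficients):

```python
import numpy as np

# White balance as a linear map on the 3-vector: a diagonal gain matrix.
wb = np.diag([1.8, 1.0, 1.4])         # boost red and blue relative to green
pixel = np.array([0.20, 0.35, 0.15])  # one demosaiced RGB pixel
balanced = wb @ pixel                 # still just a 3-vector afterwards
```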
However, the emission spectrum of an object is obviously continuous, yet in the RGB color space it ends up as a finite-dimensional vector. This is the idea behind metamerism, which happens both in human eyes (because there are three types of cone cells) and also in most cameras.
AFAIK, spectrophotometers and colorimeters are basically a pixel array with a filter bank on it, with enough variation among the individual color filters in front of the photodiodes to reliably capture a high-dimensional color vector of the scene. And through some very easy matrix math, you can obtain a color vector that corresponds to how much light is in each "bin", in, say, 5 nm increments or whatever.
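A toy numerical sketch of that projection, with made-up Gaussian filter responses rather than real camera or instrument data:

```python
import numpy as np

wavelengths = np.arange(400, 701, 5)  # 5 nm bins across the visible range

def toy_filter(center: float, width: float = 40.0) -> np.ndarray:
    """Made-up spectral transmission curve for one color filter."""
    return np.exp(-((wavelengths - center) / width) ** 2)

# Toy RGB filter bank: each row is one filter's transmission spectrum.
rgb_filters = np.stack([toy_filter(c) for c in (600.0, 540.0, 460.0)])

def camera_response(spectrum: np.ndarray) -> np.ndarray:
    """Project a continuous spectrum (sampled per bin) onto just 3 numbers."""
    return rgb_filters @ spectrum

# Two different spectra can map to the same 3-vector (metamerism), whereas a
# spectrometer with narrow filters keeps all len(wavelengths) bins.
```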
I think this is enough information to tell you there is no way of obtaining reliable color information from a camera, at least for scientific purposes. You can sorta kinda do it but it's not going to give you any scientifically sound results, just qualitative results at best.
Edit: to at least make it better, if you want to go the qualitative route, you should use a standard, reputable camera model and a specific, known light source. This will make things at least usable. Be sure to use RAW images from the camera, and use a high-CRI light source.
Or just use a spectrophotometer and be done with it...
Answered by hatsunearu on November 2, 2021
My question then is this: how should I best go about using an in-picture color palette to "match" colors between photographs? Any workflows I should look into or avoid? What other factors might I not be aware of that I should take into consideration for "scientific" color analysis?
You'll probably never get the precision you need for scientific analysis using color palettes to (mostly) compensate for varying lighting conditions, particularly in mixed lighting environments such as rooms with artificial lighting that also have windows providing natural light. Not to mention that many types of artificial lighting flicker at the frequency of the alternating current powering them, so that the brightness, color temperature, white balance, and CRI vary depending on exactly where in the cycle the camera exposes the photo. Since almost all digital cameras expose the frame from one edge to the other sequentially, the spectral properties of the lighting can even differ from one side of the frame to the other within a single photo if the total exposure time is less than half a cycle of the mains power. The shorter the exposure time, the more the light varies from one side to the other, as the image is taken sequentially through a narrow slit between shutter curtains, either mechanical or electronic, transiting the sensor.
To get the precision you desire, you're almost certainly going to need to record all samples under identical lighting conditions provided by appropriate, scientific grade lighting. Even high end studio flash units used in creative photography, while more consistent than cheaper monolights or speedlights, do not provide a consistent enough output from shot to shot to be used for precise scientific analysis based on color.
For most of the history of photography, "close enough" has been good enough for creative photography and even non-critical scientific documentation.
This applies to exposure, where one batch of emulsion might be more or less sensitive than another batch made using the same "recipe" (formula) from the same jugs of chemicals that age over time due to exposure to air. This then is extended further by variations from one shot to the next in the exact aperture size and exposure time used at the same settings by the same camera. Even well into the 21st century, cameras that use mechanical linkages between body and lens to set the aperture are much less consistent from one shot to the next than cameras that use all-electronic connections for the camera to digitally communicate to the lens how far the micro-servo actuating the aperture diaphragm should move. It can be pretty obvious when watching time lapse movies created using still images taken over time whether the camera/lens used had mechanical or electronic aperture control.
Whereas "close enough" meant anywhere within one to two stops either way of "ideal" exposure for Matthew Brady working to document the immediate aftermath on Civil War battlefields, today we have cameras that are considered "good enough" if they can expose within one-sixth stop of "ideal" from one frame to the next when identical settings are used.
This also applies to color management and color reproduction, where the goal has always been to get color reproduction "close enough" for human eyes viewing the results to perceive more or less the same thing they would perceive if they looked at the original thing being imaged. Trichromatic color reproduction works because human eyes use trichromatism to measure certain wavelengths of electromagnetic radiation and human brains use the results of the differences between our three types of cones to create a perception of color. If another species were to view the images created using RGB and CMYK color processes developed to work with human eye/brain systems, the varying wavelengths to which their eyes are most sensitive would not allow them to perceive the same colors in our images as they would perceive when viewing the things that were imaged. (Please see this answer to Why are Red, Green, and Blue the primary colors of light? for a lengthy discussion regarding this.)
There is a part of me that wants to see how much we can "get away" with using a more approximate method. After all, nurses' eyes don't do "exact" color matching either when using a grading scale; an important aspect for assessing my device is its external utility: whether it's useful in clinic, not whether it's exact (although that would be nice).
By far the easiest way to get close to "accurate" color would be to use a flash that can be manually set to the same output each time. Though there would be some variation from one pop to the next, it would be much closer than using "color checkers" to match disparate lighting from one sample to the next. If you expose dark enough to reduce the influence of ambient light, and (this would be moderately critical) always have the flash the same distance from the samples, then you might get "close enough" to fool human eyes, which is what most commercial color workflows in a creative photography setting are all about. If the tubing or containers holding your samples are highly reflective, then illuminate the test articles from behind, masking off any spill light so that the only significant light source in the captured photo is the light shining through your samples.
It should go without saying that you will also need to use manual exposure with your camera, with the same values (Tv, Av, ISO) each time. Also manually set the white balance settings (specific color temperature and WB correction) to the same thing each time. Shoot from the same distances between the flash and test article as well as from the test article to the camera. If the white balance settings in camera more or less match the color of the light from the flash, you should get fairly consistent (though not scientifically quantitative) results useful for producing a grading scale for "eye matching" by your nurses.
It could be something as simple as a camera with manual exposure and white balance controls, a bracket attached to hold a small flash at a set distance from the camera, and a black, non-reflective card mounted between the camera and the flash with a hole in the middle the right size for placing a sample between the flash and camera (sort of like a microscope slide illuminated from behind). Of course you would need some method to ensure the thickness of your samples is uniform, as their density could affect the resulting color. This assumes the tubing is consistent in terms of color, thickness, diameter, etc. from one sample to the next. If you always use tubing with the same part number from the same supplier, this should get you close enough. Even better if all tubing is from the same box or lot/batch number.
If reflectivity of the tubing is not an issue when using flash, then attaching the flash to the camera and ensuring a standard distance between camera, tubing, and something like a non-glossy light gray card placed behind the tubing at a consistent distance might also work. The diameter and translucence of the tubing would determine which method works better.
Ideally you would want a flash that can fire at relatively low power levels, so get a flash with a relatively low guide number that allows manual power to be reduced to at least 1/64, if not 1/128, of full power. The guide number is an indication of how bright the flash is at full power. Each power-of-two reduction (1/2, 1/4, 1/8, etc.) is half as much total light energy as the previous power. Guide numbers, however, are a logarithmic scale, with each halving of total light energy reducing the GN by a factor of the square root of two (√2). If a flash has a GN of 32 meters, then at 1/2 power the effective GN would be about 22 (32 ÷ √2 ≈ 22.6). At 1/64 power, the same flash would have an effective GN of 4 (since 1/64 is 1/2⁶, the GN is 32 ÷ (√2)⁶ = 4).
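A quick sketch of that arithmetic (the function name is just illustrative):

```python
import math

def effective_gn(full_power_gn: float, power_fraction: float) -> float:
    """Effective guide number at a fractional manual power setting.

    Halving the light energy divides the GN by sqrt(2), which is
    equivalent to GN_eff = GN_full * sqrt(power_fraction).
    """
    return full_power_gn * math.sqrt(power_fraction)

print(effective_gn(32, 1 / 2))   # ~22.6
print(effective_gn(32, 1 / 64))  # 4.0
```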
How you produce and print your grading scale materials should also be carefully controlled so that you have no obvious metameric failure when the scales are viewed by your nurses' eyes under various types of lighting. A methodology there is well beyond my expertise to allow me to confidently tell you how to do that, though.
Answered by Michael C on November 2, 2021