Photography Asked by entropyfever on March 25, 2021
I have implemented a filter which makes use of the RGB color space plus some transformation functions. This filter can be seen in action below. The natural image is obvious to me.
[Image: natural photograph]
[Image: filtered photograph]
But the reason it seems obvious to me doesn’t have to do with my knowledge that a dog cannot be yellow or purple. It’s the perception of certain colors that exist in the original image and do not exist in the filtered one.
So my question is: what kinds of colors do natural images contain? Are there certain rules? Is there a color space (not RGB) that has to do with human perception rather than with computation (adding and subtracting numbers like RGB)? Also, are there any filters that change the image but keep the natural feeling (like changing the hue, but maybe something more advanced)?
I can't really speak to the math and color spaces, but as far as what we expect from images taken in the natural world:
Looking at your example image specifically, I think the "rule" that your filter has broken is that we expect dark or shadowed areas to be dark and for specular highlights to be light. Your example looks like you inverted the colors and then made the new "light" areas yellow.
I'd also say that images consisting almost entirely of heavily saturated colors seem to be a little rare in nature.
Answered by David Rouse on March 25, 2021
Light exists as electromagnetic waves. Radio waves, ultraviolet, infrared, microwaves, X-rays, and gamma rays are all cousins; they differ by their frequency of vibration, and the higher frequencies (shorter wavelengths) are the most energetic. Our eyes are sensitive to only a small segment of the electromagnetic spectrum. We cannot see below the visible band, which borders infrared (heat waves), nor above it, beyond the ultraviolet that causes sunburn.
The rays we see are captured by light-sensitive cells on the retina of our eyes. Rod-shaped cells respond only to light intensity and transmit no color to our brain. There are three variations of cone-shaped cells, and these do send color data to our brain. Most people have only three types of cone cells and are trichromats. A few women are tetrachromats, with a fourth cone cell type; they see subtle shades of yellow and green unseen by the general population. Anyway, this is a broad subject beyond the scope of this posting.
The first color image was produced in 1861 by James Clerk Maxwell (Scotland, 1831–1879). His method was to take three pictures on black & white photo material: one with a red filter mounted over the camera lens, a second with a blue filter, and a third with a green filter. These three black & white pictures were then projected using three separate projectors, each fitted with the same filter used when its picture was taken. The result was a full-color image projected on the screen.
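Maxwell's reconstruction step is easy to sketch in code. Here is a minimal example, assuming three aligned grayscale exposures saved under placeholder filenames; stacking them as channels is the digital equivalent of his three projectors:

```python
# A sketch of Maxwell's additive method: stack three grayscale exposures
# (taken through red, green, and blue filters) into one color image.
# The filenames are placeholders for any three aligned 8-bit captures.
import numpy as np
from PIL import Image

red   = np.asarray(Image.open("red_filtered.png").convert("L"))
green = np.asarray(Image.open("green_filtered.png").convert("L"))
blue  = np.asarray(Image.open("blue_filtered.png").convert("L"))

# Each exposure becomes one channel of the reconstructed image, just as
# each of Maxwell's projectors contributed one primary.
color = np.dstack([red, green, blue])
Image.fromarray(color).save("maxwell_color.png")
```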
Modern color films use this same scheme. Three black & white film emulsions are coated on the same transparent base, each layer independently sensitive to just one of the three light primaries: red, green, and blue. Electronic (digital) photography uses the same scheme. The camera's light-sensitive chip is covered with tiny receptors called pixels (picture elements), and these sites are individually covered with red, green, and blue filters. Thus the camera records an image fractured into a paint-by-number scheme; again, the red, green, and blue primaries are used.
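To make the paint-by-number idea concrete, here is a rough sketch (not any camera's actual pipeline) that simulates an RGGB-style filter mosaic by keeping only one primary per photosite; a real camera then interpolates the missing values back:

```python
# Simulate a Bayer-style mosaic: each photosite keeps only one primary.
# "scene.png" is a placeholder input; the demosaicing step is omitted.
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("scene.png").convert("RGB"))
mosaic = np.zeros_like(rgb)

mosaic[0::2, 0::2, 0] = rgb[0::2, 0::2, 0]  # red sites
mosaic[0::2, 1::2, 1] = rgb[0::2, 1::2, 1]  # green sites
mosaic[1::2, 0::2, 1] = rgb[1::2, 0::2, 1]  # green sites (green appears twice)
mosaic[1::2, 1::2, 2] = rgb[1::2, 1::2, 2]  # blue sites

Image.fromarray(mosaic).save("bayer_mosaic.png")
```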
When we view these electronic pictures on a computer or TV screen, these displays present the picture using millions of sub-pixels. The sub-pixels are red, green, and blue. These are super tiny and they blend together to form the color picture you see on your screen.
The pictures we make are far from perfect. If we could truly replicate a sunlit vista, you would need sunglasses to view it in comfort. All the pictures we make only approximately replicate the original scene; however, they are quite good. We use filters to alter their appearance, and filters can both improve and degrade an image. The use of such filters is an art form, and there are no real rules for art.
On the other hand, color imaging is a science that started more than 100 years ago. Filters mix color and hue by rules observed over the years; we are talking about color theory and color space. This all started with the Munsell System of classifying colors, and the study of colorimetry continues. It is challenging, and it has evolved into what is called color space. Let me add that the modern color camera and the modern color print work because they use filters, dyes, and pigments that correspond to the way our eyes detect color. We are talking about the cone cells in our eyes; they have pigments that make them sensitive to specific colors.
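Since the question asks for a color space tied to human perception rather than raw RGB arithmetic, here is a sketch of one textbook answer from colorimetry (not anything specific to the filter above): CIELAB, in which equal numeric steps correspond roughly to equal perceived color differences. The conversion below assumes sRGB input and the standard D65 white point:

```python
# Convert an sRGB pixel (R, G, B in 0-255) to CIELAB, a perceptually
# motivated color space built on the CIE standard-observer data.
import numpy as np

def srgb_to_lab(rgb):
    c = np.asarray(rgb, dtype=np.float64) / 255.0
    # Undo the sRGB transfer curve to get linear light.
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> CIE XYZ (sRGB primaries, D65 white).
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ lin
    # Normalize by the D65 white point, then apply the CIELAB nonlinearity.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    d = 6 / 29
    f = np.where(xyz > d**3, np.cbrt(xyz), xyz / (3 * d**2) + 4 / 29)
    L = 116 * f[1] - 16          # lightness
    a = 500 * (f[0] - f[1])      # green-red axis
    b = 200 * (f[1] - f[2])      # blue-yellow axis
    return L, a, b

print(srgb_to_lab((255, 128, 0)))  # an orange: roughly L 67, a 43, b 74
```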
Answered by Alan Marcus on March 25, 2021
This question is fun to explore.
Are there certain rules?
Yes, the rules (put simplistically) are:
1. Physics
Some wavelengths are too short and too energetic; they will destroy any biomolecule, so no living form could survive receiving them from the star of its solar system.
Some are so long that they pass through the living tissue of any organ meant to perceive electromagnetic waves; radio waves, for example, pass even through walls.
Some are absorbed, scattered, or reflected by our atmosphere and the gases within it, so only certain wavelengths are suitable for use by the organisms on this planet.
2. Evolution
There is a chance some of our ancestors perceived other wavelengths, or did not. But depending on survival rates, we evolved as we are; some other species did better with bigger eyes, more sensitivity to low light, or sharper images... evolution.
The wavelengths we can see are what we call visible light.
3. Physiology
Most individuals of our species have cells that react to light: some respond to its overall intensity, and some to fairly narrow bands of wavelengths.
4. Statistics
Statistical studies made through modern times have determined which colors a human can see. These are translated into standards that we can use to define things like color spaces.
Some individuals perceive the colors of these spaces more or less well, and some have a form of daltonism (color blindness). For them, that is the natural way to see; some cannot see any color at all.
5. Perception of technology
I am pretty sure that the people who watched some of the first color images were so impressed that even a poor representation of the colors mattered less than the milestone itself.
So the tinted photo was more "natural" than previous black and white images.
Imagine the first movies in color, imagine a family with their first color TV back in those days.
6. Economy
One set of rules that drives these technologies is economics. More accurate color reproduction costs more than simpler reproduction, so normal color reproduction has a smaller range than a specialized one.
But let's go back to the dog...
7. Our experience
Yes, there are probably no rules that state a dog cannot be yellow... except perhaps some evolutionary ones (like the survivability of the yellowest dogs) or physiological ones (like proteins that could turn dog cells yellow). But our everyday experience tells us that there are no yellow dogs.
My first dog was a German Shepherd, a white one, and before him I didn't know they could be white. But overall we know some furry animals can be white, like polar bears or wolves. So even if you have never seen a white German Shepherd before, it is not unnatural.
We have all seen art experiments, let's say "Andy Warhol type" images. In our experience, those kinds of saturated images are artificial.
But probably not to a toddler. If you ask a toddler to color a picture of a dog, he will probably paint it yellow, and it will look natural to him.
Answered by Rafael on March 25, 2021
I think you need to try more examples of filters; your present one is clearly a negative. As you try different things, you will map out 'this still looks natural' and 'this is clearly unnatural', with a middle category of 'something is odd here'.
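As an illustration of that mapping exercise, here is a sketch of two filters to compare (the path is a placeholder): the RGB negative, which lands firmly in 'clearly unnatural', and a mild hue rotation, which often stays in 'still looks natural' because shading and highlights are untouched:

```python
import numpy as np
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")

# Negative: invert every channel (roughly what the example filter does).
negative = Image.fromarray(255 - np.asarray(img))

# Hue rotation: shift H and leave S and V alone, so shadowed areas stay
# dark and specular highlights stay light.
hsv = np.asarray(img.convert("HSV")).copy()
hsv[..., 0] = (hsv[..., 0].astype(int) + 20) % 256  # about 28 degrees of hue
rotated = Image.fromarray(hsv, mode="HSV").convert("RGB")

negative.save("negative.png")
rotated.save("hue_rotated.png")
```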
Example: I shot some slides at 8,000 feet in winter, in a canyon lit only by the sky. A fully natural shot, but everything had a strong blue cast, to the point that the scarlet sweater one of the subjects was wearing came out a greyish burgundy. The sky there was essentially a 12,000 K light source.
That same location gave the oddest-looking sun shots: everything in the sun had that slightly golden look you get an hour before sunset, but the shadows were blue. Same thing: shadows illuminated at 12,000 K, and at 6,000 feet there was less diffusion than normal. The snow-covered ground made the colour shifts obvious.
When I shot B&W film I liked to use a red or yellow filter because it made my skies dramatically dark.
Look at those pix from San Francisco during the California fires. Positively Martian.
Another high-elevation (say 12,000-15,000 feet) mountain effect: take pictures of nearby subjects on a cloudless day, and you will have a problem getting both shadow detail and highlight detail. The sky doesn't bounce nearly as much fill light into the shadowed areas as it does at sea level, so the image looks far too contrasty to be natural. You also get this extreme contrast on beaches and snow, coupled with most of the scene being near white instead of grey.
Try this: take a pic on a day with flat light and desaturate the colours. It's natural colour, just not as much of it. If you both desaturate and increase brightness, it looks less natural. Why?
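A sketch of that experiment, using Pillow's stock enhancers and a placeholder filename; the factors are just starting points to tune by eye:

```python
# Desaturate alone, then desaturate and brighten, and compare which
# result still reads as natural.
from PIL import Image, ImageEnhance

img = Image.open("flat_light.jpg").convert("RGB")

desat = ImageEnhance.Color(img).enhance(0.4)  # less colour, same light
desat.save("desaturated.png")

# Now also raise brightness: the tonal distribution no longer matches
# the flat light, and the result tends to look less natural.
bright = ImageEnhance.Brightness(desat).enhance(1.5)
bright.save("desaturated_bright.png")
```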
I made a mistake once and shot half a roll of Ektachrome 2.5 stops under. The shots were abnormal, but most people I showed them to said, "Nice night shot." Indeed, when we are out in moonlight, there is no detail in the shadows and colours are substantially desaturated. Experiment with this to get your own night shots. (A video guy said you can do this two ways: silver moonlight, meaning underexpose, lower contrast, and desaturate; and blue moonlight, meaning do less of the above and shift the colour balance toward blue.)
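And a sketch of that blue-moonlight recipe, again with guessed values and a placeholder filename:

```python
# Blue moonlight look: underexpose, lower contrast, desaturate,
# then shift the balance toward blue. Tune the numbers by eye.
import numpy as np
from PIL import Image, ImageEnhance

img = Image.open("day_shot.jpg").convert("RGB")
img = ImageEnhance.Brightness(img).enhance(0.35)  # roughly 1.5 stops under
img = ImageEnhance.Contrast(img).enhance(0.8)     # flatten the contrast
img = ImageEnhance.Color(img).enhance(0.4)        # moonlight desaturation

arr = np.asarray(img, dtype=np.float32)
arr[..., 2] *= 1.15                               # lift blue
arr[..., 0] *= 0.90                               # pull red
Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8)).save("moonlight.png")
```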
Back to your question. I think it's that:
Overall we expect certain things to have certain colours. Plants are green; sky is blue, grey, or white; dirt is brown, grey, or black. (Yes, there are red soils in places, but to people who aren't residents, they look out of place.)
Overall we expect a certain distribution of luminosity. This is why a light meter can assume the world is an 18% grey card and get it right a lot of the time. Yes, there are high-key scenes (snowscapes on an overcast day) and low-key scenes (landscapes during the twilight hour) that violate this expectation, and the first time we see them, they are both odd and captivating.
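You can check that assumption numerically. A quick sketch (placeholder path; gamma 2.2 stands in for the exact sRGB curve) that compares a scene's average linear luminance with a grey card's 0.18 reflectance:

```python
# Compare a scene's mean linear luminance with an 18% grey card.
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float32) / 255
linear = rgb ** 2.2  # undo display gamma (approximation of the sRGB curve)

# Rec. 709 weights approximate the eye's sensitivity to each primary.
luma = 0.2126 * linear[..., 0] + 0.7152 * linear[..., 1] + 0.0722 * linear[..., 2]
print(f"mean linear luminance: {luma.mean():.3f} (18% grey card: 0.180)")
# High-key scenes land well above 0.18, low-key scenes well below.
```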
These expectations are one reason that good photographers push the edges: abnormal = interesting. By changing some aspect of what we consider normal, we re-engage people to look again. Black and white does this by throwing away the colour information, but even there you play games. A LOT of Ansel Adams's pictures are much darker than 18% grey; one of his recurring themes is large expanses of textured black. Book printers hate him: it is really hard to capture that in an ink-and-paper print.
Answered by Sherwood Botsford on March 25, 2021