Photography: Asked on April 12, 2021
I know people use fancy software like Lightroom or Darktable to post-process their RAW files. But what if I don’t? What does the file look like, just, y’know, RAW?
There is a tool called dcraw which reads various RAW file types and extracts pixel data from them — it's actually the original code at the very bottom of a lot of open source and even commercial RAW conversion software.
I have a RAW file from my camera, and I've used dcraw in a mode which tells it to create an image using literal, unscaled 16-bit values from the file. I converted that to an 8-bit JPEG for sharing, using perceptual gamma (and scaled down for upload). That looks like this:
Obviously the result is very dark, although if you click to expand, and if your monitor is decent, you can see some hint of something.
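If you want to reproduce that yourself, the whole pipeline can be sketched roughly like this. This is a hedged example, not the exact commands used for the image above: it assumes dcraw is installed along with Python, numpy, and imageio, and "photo.raf" is a hypothetical filename.

```python
# Rough sketch only. "dcraw -D -4 -T" writes the literal, unscaled 16-bit
# sensor values as a grayscale TIFF (dcraw writes its output next to the
# input; assumed here to be photo.tiff). The gamma curve below is what makes
# the dark linear data even faintly visible in an 8-bit JPEG.
import subprocess
import numpy as np
import imageio.v2 as imageio

subprocess.run(["dcraw", "-D", "-4", "-T", "photo.raf"], check=True)

raw = imageio.imread("photo.tiff").astype(np.float64)   # literal 16-bit values
viewable = (raw / 65535.0) ** (1 / 2.2)                 # rough perceptual gamma
imageio.imwrite("raw_literal.jpg", (viewable * 255).astype(np.uint8))
```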
Here is the out-of-camera color JPEG rendered from that same RAW file:
(Photo credit: my daughter using my camera, by the way.)
Not totally dark after all. The details of where exactly all the data is hiding are best covered by an in-depth question, but in short, we need a curve which expands the data over the range of darks and lights available in an 8-bit JPEG on a typical screen.
Fortunately, the dcraw program has another mode which converts to a more "useful" but still barely-processed image. This adjusts the level of the darkest black and brightest white and rescales the data appropriately. It can also set white balance automatically or from the camera setting recorded in the RAW file, but in this case I've told it not to, since we want to examine the least processing possible.
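The rescaling itself is simple enough to sketch in a few lines. Again, this is a rough approximation rather than what dcraw literally does, and the black and white levels below are made-up numbers; real converters read the camera-specific values from the file's metadata.

```python
# Minimal sketch of black/white-level rescaling applied to the document-mode
# TIFF produced by the previous snippet.
import numpy as np
import imageio.v2 as imageio

raw = imageio.imread("photo.tiff").astype(np.float64)

black_level, white_level = 1024.0, 16383.0              # assumed, camera-specific
scaled = np.clip((raw - black_level) / (white_level - black_level), 0.0, 1.0)

imageio.imwrite("rescaled.jpg", ((scaled ** (1 / 2.2)) * 255).astype(np.uint8))
```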
There's still a one-to-one correspondence between photosites on the sensor and pixels in the output (although again I've scaled this down for upload). That looks like this:
Now, this is obviously more recognizable as an image — but if we zoom in on this (here, so each pixel is actually magnified 10×), we see that it's all... dotty:
That's because the sensor is covered by a color filter array — tiny little colored filters the size of each photosite. Because my camera is a Fujifilm camera, this uses a pattern Fujifilm calls "X-Trans", which looks like this:
There are some details about the particular pattern that are kind of interesting, but overall they're not super-important. Most cameras today use something called a Bayer pattern (which repeats every 2×2 rather than every 6×6). Both patterns have more green-filter sites than red or blue ones. The human eye is more sensitive to light in that range, and so using more of the pixels for that allows more detail with less noise.
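For concreteness, here is one way to write the two repeating units as small arrays. Treat the X-Trans tile as illustrative: the exact phase depends on where on the sensor you start counting.

```python
# Illustrative only: repeating units of the two colour filter layouts.
import numpy as np

bayer = np.array([["R", "G"],
                  ["G", "B"]])                        # repeats every 2x2

xtrans = np.array([["G", "G", "R", "G", "G", "B"],
                   ["G", "G", "B", "G", "G", "R"],
                   ["B", "R", "G", "R", "B", "G"],
                   ["G", "G", "B", "G", "G", "R"],
                   ["G", "G", "R", "G", "G", "B"],
                   ["R", "B", "G", "B", "R", "G"]])   # repeats every 6x6

# Sanity check: each 3x3 block of the X-Trans tile has 2 R, 5 G and 2 B sites.
for by in (0, 3):
    for bx in (0, 3):
        block = xtrans[by:by + 3, bx:bx + 3]
        print((block == "R").sum(), (block == "G").sum(), (block == "B").sum())
```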
In the example above, the center section is a patch of sky, which is a shade of cyan — in RGB, that's lots of blue and green without much red. So the dark dots are the red-filter sites — they're dark because that area doesn't have as much light in the wavelengths that get through that filter. The diagonal strip across the top right corner is a dark green leaf, so while everything is a little dark there, you can see that the green sites — the bigger 2×2 blocks with this sensor pattern — are relatively the brightest in that area.
So, anyway, here's a 1:1 (when you click to get the full version, one pixel in the image will be one pixel on the screen) section of the out-of-camera JPEG:
... and here's the same area from the quick-grayscale conversion above. You can see the stippling from the X-Trans pattern:
We can actually take that and colorize the pixels so those corresponding to green in the array are mapped to levels of green instead of gray, red to red, and blue to blue. That gives us:
... or, for the full image:
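Under the hood, that colorizing step just puts each sensor value into the channel of its filter color and leaves the other two channels at zero. A rough sketch, assuming the rescaled mosaic from the earlier snippets and the X-Trans tile written numerically (0 = red, 1 = green, 2 = blue):

```python
# Sketch of the mosaic-colourizing step. "scaled" is assumed to be the 2-D
# float array from the rescaling snippet; XTRANS is the 6x6 tile from above
# with the colours coded numerically (0=R, 1=G, 2=B).
import numpy as np

XTRANS = np.array([[1, 1, 0, 1, 1, 2],
                   [1, 1, 2, 1, 1, 0],
                   [2, 0, 1, 0, 2, 1],
                   [1, 1, 2, 1, 1, 0],
                   [1, 1, 0, 1, 1, 2],
                   [0, 2, 1, 2, 0, 1]])

def colorize(mosaic, tile=XTRANS):
    h, w = mosaic.shape
    cfa = np.tile(tile, (h // 6 + 1, w // 6 + 1))[:h, :w]   # filter colour per photosite
    rgb = np.zeros((h, w, 3), dtype=np.float64)
    for channel in range(3):
        rgb[..., channel] = np.where(cfa == channel, mosaic, 0.0)
    return rgb
```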
The green cast is very apparent, which is no surprise because there are 2½ times as many green pixels as red or blue ones. Each 3×3 block has two red pixels, two blue pixels, and five green pixels. To counteract this, I made a very simple scaling program which turns each of those 3×3 blocks into a single pixel. In that pixel, the green channel is the average of the five green pixels, and the red and blue channels are the averages of the corresponding two red and two blue pixels. That gives us:
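(In code, that block-averaging idea might look roughly like the sketch below. This is not the actual program, just the same idea expressed with the arrays from the previous snippets.)

```python
# Block-averaging sketch. "scaled" is the 2-D mosaic and "cfa" the
# per-photosite colour codes (0=R, 1=G, 2=B) built as in the previous
# snippet. Each 3x3 X-Trans block (2 red, 2 blue, 5 green sites) becomes
# one RGB pixel whose channels are the averages of the sites of each colour.
import numpy as np

def average_blocks(mosaic, cfa, size=3):
    h = mosaic.shape[0] // size * size
    w = mosaic.shape[1] // size * size
    out = np.zeros((h // size, w // size, 3), dtype=np.float64)
    for by in range(0, h, size):
        for bx in range(0, w, size):
            values = mosaic[by:by + size, bx:bx + size]
            colors = cfa[by:by + size, bx:bx + size]
            for channel in range(3):                  # 0=R, 1=G, 2=B
                out[by // size, bx // size, channel] = values[colors == channel].mean()
    return out
```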
... which actually isn't half bad. The white balance is off, but since I intentionally decided to not adjust for that, this is no surprise. Hitting "auto white-balance" in an imaging program compensates for that (as would have letting dcraw set that in the first place):
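A crude "auto white-balance" can be sketched in the grey-world style: scale the red and blue channels so every channel ends up with the same mean as green. This is only a stand-in for whatever a given imaging program actually does, which is usually more sophisticated.

```python
# Grey-world white balance sketch. "rgb" is assumed to be a float HxWx3
# array in [0, 1], such as the output of the block-averaging sketch above.
import numpy as np

def gray_world(rgb):
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gains = means[1] / means                  # green gain becomes 1.0
    return np.clip(rgb * gains, 0.0, 1.0)
```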
Detail isn't great compared to the more sophisticated algorithms used in cameras and RAW processing programs, but clearly the basics are there. Better approaches create full-color images by weighting the different values around each pixel rather than going by big blocks. Since color usually changes gradually in photographs, this works pretty well and produces full-color images without reducing the pixel dimensions. There are also clever tricks to reduce edge artifacts, noise, and other problems. This process is called "demosaicing", because the pattern of colored filters looks like a tile mosaic.
I suppose this view (where I didn't really make any decisions, and the program didn't do anything automatically smart) could have been defined as the "standard default appearance" of a RAW file, thus ending many internet arguments. But there is no such standard — no rule says that this particular "naïve" interpretation is special.
And, this isn't the only possible starting point. All real-world RAW processing programs have their own ideas of a basic default state to apply to a fresh RAW file on load. They've got to do something (otherwise we'd have that dark, useless thing at the top of this post), and usually they do something smarter than my simple manual conversion, which makes sense, because that gets you better results anyway.
Correct answer by mattdm on April 12, 2021
It's a really really big grid of numbers. Everything else is processing.
Answered by WolfgangGroiss on April 12, 2021
I know it's already been answered quite well by mattdm, but I just thought you might find this article interesting.
In case the link goes down, here is a summary:
The human eye is most sensitive to colors in the green wavelength region (which coincides with the fact that our sun emits most intensely in that region).
The camera eye (a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor) is sensitive only to light intensity, not to color.
Optical filters are used to attenuate different wavelengths of light. For example, a green-pass filter will let more green light through than red or blue light, though a bit of each will still make it through, just as the medium-wavelength cones in our retinas react a little to red and blue light while responding much more strongly to green.
Optical filters used in digital cameras are the size of the individual pixel sensors, and are arranged in a grid to match the sensor array. Red, green and blue (sort of like our cone cells) filters are used. However, because our eyes are more sensitive to green, the Bayer array filter has 2 green pixel filters for each red and blue pixel. The Bayer array has green filters forming a checkerboard like pattern, while red and blue filters occupy alternating rows.
Getting back to your original question: what does an unprocessed RAW file look like?
It looks like a black-and-white checkered lattice of the original image.
The fancy software for post-processing the RAW files first assigns each pixel its color according to the Bayer filter pattern. The result looks more like the actual image, with color at the correct intensities and locations. However, there are still artifacts of the RGB grid from the Bayer filter, because each pixel still carries only one color.
There are a variety of methods for smoothing out the color-coded RAW file. Smoothing out the pixels is similar to blurring, though, so too much smoothing can be a bad thing.
Some of the demosaicing methods are briefly described here:
Nearest Neighbor: the value of a pixel (a single color) is applied to its differently colored neighbors, and the colors are combined. No "new" colors are created in this process, only colors that were originally recorded by the camera sensor.
Linear Interpolation: averages, for example, the two adjacent blue values and applies that average blue value to the green pixel in between them. This can blur sharp edges. (A rough code sketch of this approach appears after this list.)
Quadratic and Cubic Interpolation: similar to linear interpolation, but higher-order approximations of the in-between color. They use more data points to generate better fits: linear looks at only two, quadratic at three, and cubic at four to generate an in-between color.
Catmull-Rom Splines: similar to cubic, but takes into consideration the gradient of each point to generate the in-between color.
Half Cosine: used as an example of an interpolation method, it creates half cosines between each pair of like-colors and has a smooth inflected curve between them. However, as noted in the article, it does not offer any advantage for Bayer arrays due to the arrangement of the colors. It is equivalent to linear interpolation but at higher computational cost.
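Of those, linear interpolation is the easiest to sketch. For an RGGB Bayer mosaic it can be written as three small convolutions, one per sparse color plane. This is a hedged example assuming numpy and scipy; real converters add the edge-aware refinements described below.

```python
# Bilinear (linear interpolation) demosaic of an assumed RGGB Bayer mosaic.
# Each missing colour at a pixel becomes the average of that colour's nearest
# neighbours, computed by convolving the sparse colour planes.
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    h, w = mosaic.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)      # assumed RGGB layout
    g_mask = (y % 2) != (x % 2)
    b_mask = (y % 2 == 1) & (x % 2 == 1)

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue kernel
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # green kernel

    rgb = np.zeros((h, w, 3), dtype=np.float64)
    for channel, mask, kernel in [(0, r_mask, k_rb), (1, g_mask, k_g), (2, b_mask, k_rb)]:
        rgb[..., channel] = convolve(np.where(mask, mosaic, 0.0), kernel, mode="mirror")
    return rgb
```

The edge blurring mentioned above comes straight out of this averaging: an edge that falls between two sample sites gets split across the interpolated pixels on either side.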
Higher end post-processing software has better demosaicing methods and clever algorithms. For example, they can identify sharp edges or high contrast changes and preserve their sharpness when combining the color channels.
Answered by jreese on April 12, 2021
I think a lot of people imagine that raw files are simply an array of pixel values straight out of the camera sensor. There are cases where this is true, and then you have to supply some information about the sensor in order to let the software interpret the image. But consumer cameras usually produce "raw files" that actually more or less conform to the TIFF file specification (in some cases, the colours may be off). You can try this by simply changing the file extension to ".tif" and seeing what happens when you open the file. Some of you will see a good picture, but not everyone, because camera brands differ in how they handle this.
A TIFF file instead of a "real raw file" is a good solution: a TIFF file can hold 16 bits per colour channel, which is enough for every camera I know of.
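One quick way to test that claim: TIFF files start with a two-byte byte-order mark ("II" or "MM") followed by the magic number 42, and many raw formats (NEF, CR2, ARW, DNG) are built on that container. A small check, with "photo.nef" as a hypothetical filename:

```python
# Check whether a raw file uses a TIFF-style container by reading its header.
import struct

with open("photo.nef", "rb") as f:           # hypothetical filename
    header = f.read(4)

byte_order = {b"II": "<", b"MM": ">"}.get(header[:2])
if byte_order and struct.unpack(byte_order + "H", header[2:4])[0] == 42:
    print("Looks like a TIFF-style container")
else:
    print("Not a TIFF-style file")
```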
Edit: I wonder why this answer got downvoted. It is essentially correct (with the reservation that camera manufacturers don't have to use TIFF structures, but many of them do).
As for the idea of an array of pixel values straight out of the sensor, it is not unreasonable to expect something like that, because that is how a lot of sensors outside the consumer camera market work. In those cases, you have to provide a separate file that describes the sensor.
By the way, the word "RAW" is used because it is supposed to mean that we get the unprocessed sensor data. But it's reasonable for camera manufacturers to use a structured format instead of truly raw dumps; that way the photographer doesn't need to know the exact sensor layout.
Answered by Ulf Tennfors on April 12, 2021