
Why don't smartphone cameras correct pincushion distortion automatically?

Asked in Photography on April 30, 2021

I’ve previously asked why the GoPro doesn’t automatically correct for the fisheye distortion, and this question is related, but different:

Given that smartphones have so much processing power nowadays (Nvidia GPUs, for instance), and that their pincushion distortion is easier to correct than the extremely wide-angle fish-eye lens in the GoPro, why don’t smartphones automatically correct this distortion and the skew it produces in photos?

Below is a Sony Xperia Z5 photo illustrating the extreme distortion. The exercise ball is quite spherical in reality.

Sony Xperia Z5 pincushion distortion

Other smartphones don’t fare much better.

Samsung Galaxy S5, iPhone 5S, Sony Xperia Z5, Nexus 5X pincushion distortion

Automatic distortion correction of the sort done in post-processing should be relatively fast because the camera parameters are fixed, and the code could be highly optimized in the camera firmware.
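For concreteness, here is a minimal sketch (Python with OpenCV) of the kind of fixed-parameter correction I have in mind; the camera matrix, distortion coefficients and file names below are invented placeholders, not any phone's real calibration:

    import cv2
    import numpy as np

    img = cv2.imread("phone_photo.jpg")  # hypothetical input file
    h, w = img.shape[:2]

    # Assumed intrinsics: focal length in pixels, principal point at the image centre.
    K = np.array([[2900.0, 0.0, w / 2],
                  [0.0, 2900.0, h / 2],
                  [0.0, 0.0, 1.0]])

    # Assumed distortion coefficients (k1, k2, p1, p2, k3); a real phone would
    # ship factory-calibrated values for its fixed lens.
    dist = np.array([0.12, -0.05, 0.0, 0.0, 0.0])

    # Because the lens never changes, the remap tables can be computed once and
    # reused for every photo, which is why this could run cheaply in firmware.
    map1, map2 = cv2.initUndistortRectifyMap(K, dist, np.eye(3), K, (w, h), cv2.CV_16SC2)
    corrected = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
    cv2.imwrite("phone_photo_corrected.jpg", corrected)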

But even if correction were slow, it could be queued up for later, yet still automatic. Why isn't this done? Pictures are starting to look rather ridiculous in the last crop of wide-angle lens smartphones, and the problem applies most pertinently to one of the most common types of photos: the group selfie.

Group photo with stretched faces near the corners

5 Answers

One reason is that when you correct for lens distortion you end up with a non-rectangular image. Usually it will be cropped to roughly the largest rectangular area within the non-rectangular image. This means that your image no longer includes everything you saw on the screen when you took the picture. Users generally don't like that. In your picture above, the people on the left and right would get partially cropped out, for example.

Also, any sort of resampling, like what would occur in lens correction, will introduce artifacts, such as either blurring or ringing. Better to let the user compose the shot they want and actually give them that shot.
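To make the crop trade-off concrete, here is a small sketch (again with invented calibration values and hypothetical file names): OpenCV's alpha parameter chooses between keeping every original pixel, which leaves curved black borders, and cropping to the largest valid rectangle, which is where people at the edges get cut off.

    import cv2
    import numpy as np

    img = cv2.imread("group_photo.jpg")  # hypothetical input file
    h, w = img.shape[:2]
    K = np.array([[2900.0, 0.0, w / 2],
                  [0.0, 2900.0, h / 2],
                  [0.0, 0.0, 1.0]])
    dist = np.array([0.12, -0.05, 0.0, 0.0, 0.0])  # placeholder coefficients

    for alpha, name in [(1.0, "keep_everything.jpg"), (0.0, "cropped.jpg")]:
        # alpha=1: keep all source pixels (non-rectangular frame, black borders)
        # alpha=0: crop to the largest rectangle containing only valid pixels
        new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha)
        out = cv2.undistort(img, K, dist, None, new_K)
        if alpha == 0.0:
            x, y, rw, rh = roi
            out = out[y:y + rh, x:x + rw]
        cv2.imwrite(name, out)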

And, as @Chris points out in the comments: "Moreover, the distortion may not be a problem for e.g. most landscape shots. Therefore, in combination with the technical disadvantages, lens correction at that degree is a creative choice and should be decided by the photographer."

Answered by user1118321 on April 30, 2021

This type of distortion in the corners is one of the trade-offs of a rectilinear lens that renders a wide field of view, covering a large angular area, onto a flat rectangular image. If you want things near the edges of such a wide field of view to appear undistorted, then you need a fisheye lens. But a fisheye lens does not produce a rectilinear image; it produces a curvilinear (spherical) projection, and all of the straight lines in the field of view will appear curved unless they pass directly through the center.
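As a rough illustration of that trade-off, the sketch below compares where a point at angle theta off the optical axis lands on the sensor under a rectilinear mapping (r = f·tan θ) versus an equidistant fisheye mapping (r = f·θ); the focal length is an arbitrary value chosen only for the comparison:

    import math

    f = 20.0  # focal length in mm, arbitrary value for illustration
    for deg in (0, 15, 30, 45, 60):
        theta = math.radians(deg)
        rectilinear = f * math.tan(theta)  # rectilinear: r = f * tan(theta)
        fisheye = f * theta                # equidistant fisheye: r = f * theta
        print(f"{deg:>2} deg   rectilinear: {rectilinear:6.1f} mm   fisheye: {fisheye:6.1f} mm")
    # Toward the edge of a wide field of view the rectilinear radius grows much
    # faster, which is the corner stretching visible in the example photos.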

Answered by Michael C on April 30, 2021

The fact that the outline of the ball is not a circle is not a sign of pincushion or barrel distortion.

Indeed, let us assume an idealised perfect lens. There is one focal point (the distance of that point from the sensor is the focal length). For any point of an object that you photograph, consider the line through that point and the focal point. This line intersects (the plane containing) the sensor in one point, and that point is the corresponding point in the image. Now imagine a ball somewhere. A line from the visual boundary of the ball to the focal point is a line through the focal point that is tangential to the ball. All of these lines together form a cone. The intersection of that cone with the sensor is an ellipse. It is a circle only if the ball is exactly in the center of the image; the further the ball is from the center, the less its image resembles a circle. Just imagine that you are looking at the cone from the position of the ball: the sensor is at an angle to the line from you through the focal point. Have a look at pictures of conic sections.
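If you want to convince yourself numerically, here is a small sketch of the same argument: sample points on a sphere, project them through an ideal pinhole onto the image plane, and compare the extent of the outline along the two image axes (the sphere position and radius are arbitrary test values):

    import numpy as np

    rng = np.random.default_rng(0)
    center = np.array([3.0, 0.0, 5.0])  # ball well off the optical axis
    radius = 1.0

    # Sample points uniformly on the sphere's surface.
    d = rng.normal(size=(200_000, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    points = center + radius * d

    # Ideal pinhole at the origin, image plane at z = 1: u = x/z, v = y/z.
    u = points[:, 0] / points[:, 2]
    v = points[:, 1] / points[:, 2]

    print("extent along u (towards the image edge):", u.max() - u.min())   # about 0.48
    print("extent along v (perpendicular direction):", v.max() - v.min())  # about 0.41
    # The outline is wider along the direction pointing away from the image
    # centre, i.e. the off-axis ball images as an ellipse, not a circle.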

Answered by Carsten S on April 30, 2021

What you are describing are two fundamental phenomena.


The first phenomenon, illustrated by the photograph of the ball and partially by the group photograph, results from viewing the image incorrectly.

The optical behaviour of your lens is called rectilinear mapping, and it is considered the benchmark of correct optical behaviour: all lines preserve their straightness in the final photograph:

In geometric optics, distortion is a deviation from rectilinear projection, a projection in which straight lines in a scene remain straight in an image. It is a form of optical aberration. https://en.wikipedia.org/wiki/Distortion_(optics)

If you placed a number of straight lines around the ball, you would find that all of them are straight on the photograph. This is not geometrically possible if you also want the ball to be round in the image while it is anywhere other than the center of the image.

However, there is a way to fix it: view the image from such a distance that it occupies the same angle of view as the scene did in real life.

For example, an uncropped image recorded with a 21.5 mm (135-equivalent) rectilinear lens covers about 90 degrees across the diagonal (the focal length is roughly one half of the 135-frame diagonal). If you view that image, with one eye closed, from a distance equal to one half of the image diagonal, you will see no irregularities compared with what you would see in person with one eye closed (monitor limitations aside). If you know the equivalent focal length of your lens you can use this trick to see a circular ball (or simply move closer to the LCD until the ball looks circular). It becomes almost impossible for images taken with very wide lenses, because the monitor will not show a good image at such steep viewing angles (the corners, which are viewed at the largest angle, will look too dark or worse on an older display).
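Here is the same rule of thumb as a small calculation; the 135-format diagonal is the standard 43.3 mm, and the focal lengths and monitor size are example values only:

    import math

    FULL_FRAME_DIAGONAL_MM = 43.27  # diagonal of the 135 ("full frame") format

    def diagonal_fov_deg(equivalent_focal_mm):
        # Angle of view across the frame diagonal for a rectilinear lens.
        return 2 * math.degrees(math.atan(FULL_FRAME_DIAGONAL_MM / 2 / equivalent_focal_mm))

    def viewing_distance(display_diagonal_cm, equivalent_focal_mm):
        # Distance at which the displayed image subtends the same angle the lens captured.
        return display_diagonal_cm * equivalent_focal_mm / FULL_FRAME_DIAGONAL_MM

    for focal in (21.5, 28.0):
        print(f"{focal} mm equivalent: {diagonal_fov_deg(focal):.0f} deg diagonal FoV, "
              f"view a 60 cm monitor from {viewing_distance(60, focal):.0f} cm")
    # 21.5 mm equivalent gives ~90 degrees and a viewing distance of ~30 cm,
    # i.e. half the monitor diagonal, as described above.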

This is the only way to view images without any geometric discrepancies, and a fisheye lens (i.e. a lens with strong barrel distortion) will not even allow you to use this trick without a correspondingly curved monitor.

You may notice that no photo looks right when viewed from the wrong position. The ball is not circular for the same reason: you are not viewing it from the correct position. You cannot fix a given image so that it looks natural at every viewing angle and distance.

Other geometric forms are no different from the ball: you cannot represent them in a flat photograph without specifying how to look at it. The same applies to the faces in the third photograph (though faces are trickier, because their appearance is also affected by the second phenomenon).

Here is a simple proof: I photographed the ball displayed on my monitor from the correct angle.

Skewed photograph of ball photographed at the "correct" angle

The second phenomenon, illustrated by the second image, is a very fundamental property of all compact imaging systems, and it is called perspective.

Cameras were created to simulate human vision; if we were bees, our cameras would simulate bee vision instead, but that is not the case. Most cameras, and the human eye, follow the pinhole concept: for each image there is an imaginary point in space (the centre of the entrance pupil) and a direction attached to it (the camera direction).

Diagram

Every object which spans a given number of the blue lines will have the same size on the image plane. Every flat object placed in the object plane will occupy the same space even if it is moved around within that plane. Cameras do not sense size; they sense the angular sectors that objects occupy and the objects' angular positions. See the following image: the closer of two objects of the same size occupies more field-of-view sectors.

The number of angular sectors an object occupies is inversely proportional to its distance from the entrance pupil plane. Once you have chosen your subject and your viewpoint, you have fixed the distances from all objects to that viewpoint, and every object will occupy the space on the image plane that corresponds to the number of angular sectors it subtends.
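A tiny sketch of that relation, with an arbitrary object height and example distances:

    import math

    object_height_m = 1.8  # e.g. a standing person; arbitrary example value
    for distance_m in (1.0, 2.0, 4.0, 8.0):
        angle = 2 * math.degrees(math.atan(object_height_m / 2 / distance_m))
        print(f"at {distance_m:>3} m the object subtends {angle:5.1f} deg")
    # Doubling the distance roughly halves the subtended angle (exactly so in the
    # small-angle limit), which is why near shelves, arms or faces look
    # disproportionately large in a close-up wide-angle shot.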


Briefly, I will describe how these two phenomena act in each of your example images.

  • First photo: you are viewing the image from an incorrect distance, and that is what makes the ball look elliptical.

  • Second photo: you usually look at shelves while standing in front of them, and your camera cannot see anything different from what you see if you hold it near your eye. The lower shelves are farther from the (tilted) pupil plane, so they and whatever stands on them appear smaller; in addition, items placed deep inside a shelf are farther still, so they appear even smaller than what lies on the shelf's edge.

    diagram

    The vertical lines (vertical only in reality) are drawn with equal spacing, and that spacing, like any other Euclidean object, occupies fewer field-of-view sectors as it moves farther away. The spacing between the vertical lines is smaller at the bottom of the image because the bottom of the scene is farther from the viewpoint; as a result, the near ends of the lines are projected farther apart at the top of the image than at the bottom.

    shelf diagram

  • Third image: the face in the top left corner is affected by both phenomena at once. First, the face is tilted so that its top-left part is closer to the pupil plane and therefore appears bigger (you can reproduce this by looking into a mirror and tilting your face: whichever part of the face is closer to the mirror looks bigger). Viewing the photograph from the wrong distance makes it even worse.

  • The image of the wall: as with the shelves, the lower part of the wall is farther from the pupil plane, and so on.

To make things even more complex, the eye is itself a rectilinear imaging system (you will never see lines straighten or curve depending on the angle of view), so shapes in the periphery of your eyesight look different from shapes in the centre of your eyesight.

Conclusion

  • there are no optical distortions in the photos you posted
  • you should look at the image from the exact corresponding position to neutralise these geometrical phenomena
  • pay attention to what you see in person: the camera does not see much differently from you with one eye closed. None of the images recorded anything you could not have seen in person

Answered by Euri Pinhollow on April 30, 2021

Look at the straight lines in the ceiling tiles or the wall and window frames. Those images are corrected to be perfectly rectilinear. If you don't want a face to be distorted like that, the person needs to point their head straight ahead instead of turning it toward the camera. Keeping the face parallel to the sensor plane saves it from being flattened into a pancake.

In contrast, with a fisheye, people (and balls, which cannot avert their "face" since it is the same everywhere) in the corners look just as fine staring straight at the camera as they do facing straight ahead. But then the architectural lines curve around like anything.

The images you show are heavily corrected, and a building front would look perfectly straight since the camera broadens sideways features just as much as perspective compresses them. Walls have this trick down cold: they look straight ahead instead of at the camera.

Answered by user95069 on April 30, 2021
