Asked by KRA2008 on November 2, 2021
Imagine I take a photo of the same scene using two cameras, one right after the other, from the exact same spot. The cameras may differ in every way possible: very different lenses, image sensor sizes, and image sensor densities, among other properties. I then want to crop the picture taken by the camera with the “larger” view so that what it contains matches the picture taken by the camera with the “smaller” view: not in terms of how things look, but in terms of what each photo contains and how much of each thing, so that if you ignored depth of field, exposure, focus, and so on and looked only at the contents, you might say they were the same photo.
I keep reading articles about crop sensors and lenses, which give me a good feel for the causes and effects here, but I can't find a guide that describes the precise math of the situation. It seems like the two big inputs are focal length and sensor size, that those two together determine the field of view, and that field of view is the key value for making the crop. Is that right?
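(For what it's worth, the formula I keep seeing for a rectilinear lens is FOV = 2 · atan(d / (2 · f)), where d is the sensor dimension along one axis and f is the focal length. I'm assuming that's the relationship at play here.)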
In my specific case I'm working with mobile phone cameras, and I can interrogate their specs programmatically. With iPhones I am given a field-of-view value directly. Is that all I need for the math? Do I simply crop down the larger-field-of-view image in proportion to the smaller one? For example, if one camera has a 60-degree FOV and the other a 70-degree FOV, do I just crop the larger-FOV image to 6/7 of its original height and width? This seems correct to me, but it isn't working, and I can't tell whether I'm doing the cropping wrong, going in the wrong direction entirely, or failing to factor something else in.
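Concretely, the naive scaling I've been trying looks like this (a minimal sketch; the FOV numbers are just the example above):

```python
# The naive linear scaling I've been trying (60 and 70 degrees are the
# example FOVs; in practice they come from the camera APIs).
fov_small, fov_large = 60.0, 70.0
scale = fov_small / fov_large  # = 6/7, about 0.857
# Crop the larger-FOV image to (scale * width) x (scale * height), centered,
# and expect its contents to match the smaller-FOV image. It doesn't.
```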
After some more digging and thinking, I've found there are two reasons the approach I outlined in my question won't work:

1. The math is missing trigonometry: field of view is an angle, so scaling the crop linearly by the ratio of the angles is not correct.
2. The angular field of view the devices report doesn't match what they actually capture, or isn't reported at all.
#2 is the more serious blocker, and it makes #1 somewhat irrelevant. The angular field of view provided by the code simply does not match what I observe experimentally. It's also only provided by iOS; Android has stopped providing a value in its newest versions.
For completeness, I'll describe the math a bit more, even though it's moot. The problem with the math in my question is the lack of trigonometry. Picture the usual diagram of the situation: the camera at the apex of a triangle whose apex angle is the field of view and whose base is the slice of scene captured at some distance.
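Assuming a rectilinear, distortion-free lens (which phone lenses only approximate), a camera at distance d from a flat subject captures a width w = 2 · d · tan(θ/2), where θ is the horizontal angular field of view. To match the camera with the smaller FOV θs from the camera with the larger FOV θl, the crop fraction along that axis is therefore tan(θs/2) / tan(θl/2), not θs/θl. With the 60°/70° example from my question, that's tan(30°)/tan(35°) ≈ 0.577/0.700 ≈ 0.825 rather than 6/7 ≈ 0.857.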
Because my answer is ultimately that this problem cannot be solved with these uncooperative devices, I won't work through the equations. From that geometry you can solve for everything you need to do the cropping, but again, it will fail because the angular field of view reported by the phones is either wrong or simply not provided.
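For what it's worth, here is a minimal sketch of that cropping in Python, assuming the reported FOV values could be trusted; the function names and the example numbers are mine:

```python
import math

def crop_fraction(fov_small_deg, fov_large_deg):
    """Fraction of the larger-FOV image to keep along one axis,
    assuming rectilinear (distortion-free) lenses."""
    return (math.tan(math.radians(fov_small_deg) / 2)
            / math.tan(math.radians(fov_large_deg) / 2))

def crop_box(width, height, h_fovs, v_fovs):
    """Centered crop box (left, top, right, bottom) in pixels that makes
    the larger-FOV image cover the same scene as the smaller-FOV one.
    h_fovs and v_fovs are (smaller, larger) FOV pairs in degrees."""
    fx = crop_fraction(*h_fovs)
    fy = crop_fraction(*v_fovs)
    new_w, new_h = width * fx, height * fy
    left, top = (width - new_w) / 2, (height - new_h) / 2
    return (round(left), round(top), round(left + new_w), round(top + new_h))

# Example with the 60/70-degree horizontal FOVs from my question; the
# 48/56-degree vertical FOVs and the 4032x3024 size are made-up values.
print(crop_box(4032, 3024, (60, 70), (48, 56)))
```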
Answered by KRA2008 on November 2, 2021
While the theoretical approach is interesting, I think you will get better results with a more empirical approach.
Set up your cellphones/cameras on a tripod, clearly marking its position so it does not move between camera changes.
At a fixed distance, set up a board with clearly printed markings. You don't need many of them, just enough, especially toward the borders.
Take pictures of this board with the different cameras.
Check the results on your computer and write down your findings: how much do you have to crop to go from camera A's photo to camera B's?
Make a matrix of these crop factors for all of your camera combinations (which I presume are not too many).
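A minimal sketch of how that matrix could be computed, assuming you measure the pixel distance between the same pair of markers in each photo (the names and numbers are placeholders):

```python
# Pixel span between the same pair of board markers, measured in each photo,
# together with each photo's width. A wider-FOV camera sees the board as a
# smaller fraction of its frame.
measurements = {
    "camera_a": {"marker_span_px": 1850, "image_width_px": 4032},
    "camera_b": {"marker_span_px": 2410, "image_width_px": 4000},
}

def relative_span(m):
    # Fraction of the frame width that the marker pair occupies.
    return m["marker_span_px"] / m["image_width_px"]

# crop_factor[a][b]: fraction of photo a's width to keep (centered) so that
# its contents match photo b. Values > 1 mean a already shows less than b.
names = list(measurements)
crop_factor = {
    a: {b: relative_span(measurements[a]) / relative_span(measurements[b])
        for b in names}
    for a in names
}
print(crop_factor)
```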
Answered by Duncan Drake on November 2, 2021