Does higher MP magnify the subject?

Photography Asked on April 20, 2021

Sorry for a simple (or even wrong) question, as I am new to this area. Given the same settings and the same distance, does a higher-MP camera magnify the subject, or is it the same? If it magnifies the subject, how do you work out the magnification ratio from the MP?

6 Answers

No, more megapixels does not magnify the subject in the image. Focal length (zoom) magnifies the subject size in the image. "Magnify" refers to the subject's size within the image frame, and magnification is increased by a longer focal length and/or a closer subject distance.

More megapixels makes a larger digital image (measured in pixels), but that is not magnification, because it does not change the lens's view of the subject. More pixels can improve the digital sampling of the detail the lens delivers, and more enlargement may show that detail larger and small details more clearly, but neither can add detail the lens did not capture. Magnification (a longer lens or a closer distance) does increase subject size in the sensor frame, so the same pixel resolution now resolves finer detail.

More megapixels does not increase the resolution of the lens, but it usually does increase the reproduced resolution of the detail that is present, visible if you enlarge the image enough to see it, and more megapixels allows greater viewing enlargement of that detail. The lens resolution is still the absolute limit on image detail: only the lens creates image detail; digital sampling can only try to reproduce what is there.

Cropping an image smaller and then necessarily enlarging it more (to reach the same viewing size again) superficially "looks" exactly like zooming. This is just an illusion, and my site at https://www.scantips.com/lights/cropfactor.html elaborates on exactly that illusion (in regard to crop factor).

The pixels lost by cropping any finished image greatly lower the resolution of any subsequent enlargement.

Regarding crop factor on the sensor, the cropped image is simply a smaller image (in terms of sensor size in mm). It may have the same megapixels as a full-frame image, but the greater enlargement factor required by any smaller cropped image lowers the viewed resolution (and viewing is the only goal): it "spreads" the pixels further apart, showing fewer pixels per inch.

Some of us ignore that and imagine that a cropped sensor with a shorter lens magnifies, but it simply is not magnification. It may be very adequate in most cases, but in the most critical case it is just detrimental greater enlargement of a smaller image from a shorter lens. There is no other way to say it.

Magnification routinely refers to actual lens magnification.

Answered by WayneF on April 20, 2021

The standard meaning of optical magnification/magnification ratio has to do with the physical size of the object and the physical area of the image plane (film/sensor) that it occupies when recorded.

E.g. if 1cm on a ruler is recorded occupying 1cm on the sensor, then it is at a 1:1 magnification ratio. That means only about 3.6cm of the ruler can be recorded across a full-frame (36mm wide) sensor at that magnification... MPs do not affect that. Nor does sensor size change the magnification (e.g. only about 2.4cm can be recorded across an APS-C (~24mm wide) sensor at 1:1).
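
A minimal sketch of that arithmetic, assuming the sensor widths mentioned above (the variable names are illustrative only, not a real API):

```python
# Sketch: how much of a subject fits across a sensor at a given magnification.
# Sensor widths follow the example above; names are illustrative.

object_width_mm = 10.0        # 1 cm segment of the ruler
image_width_mm = 10.0         # it occupies 1 cm on the sensor

magnification = image_width_mm / object_width_mm   # 1.0, i.e. a 1:1 ratio

full_frame_width_mm = 36.0    # full-frame sensor width
aps_c_width_mm = 24.0         # approximate APS-C sensor width

print(full_frame_width_mm / magnification)   # ~36 mm (3.6 cm) of ruler recorded
print(aps_c_width_mm / magnification)        # ~24 mm (2.4 cm) of ruler recorded
```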

Edit to answer comment: "The magnification I am thinking is "more detail" as in taking a picture of a newspaper from a distance, does a higher MP camera see the characters clearer?"

That is the resulting recorded resolution and not magnification/magnification ratio per se. The resulting recorded resolution depends on which aspect of the imaging chain is the limiting factor.

For instance, a lens set to f/11 can only project ~16MP of detail onto a full-frame sensor (at median/green wavelengths). That lens resolution could be reduced to 14MP if the sensor has an anti-aliasing filter over it, and the recorded resolution could be further reduced to 12MP if that is the resolution of the sensor.

And the final resolution could then be reduced by cropping (the MP remaining). Similarly, the lens resolution would also be reduced by using extension tubes or teleconverters to enlarge the image circle, causing the sensor to crop/record less of it (which is why the effective aperture/exposure/DOF/diffraction changes).

But in this example the recorded resolution cannot be greater than 16MP regardless of the sensor resolution, because the lens would still be the limiting factor. Conversely, a lens set to f/2.8 could project ~240MP (median/green wavelengths) onto a FF sensor, in which case the sensor/AA filter will be the limiting factor (up to 240MP). Any time the sensor limits the recorded resolution, greater sensor resolution (MPs) will help. In fact, there is almost always some nominal increase in recorded resolution with higher MPs due to increased oversampling.
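
As a back-of-the-envelope check on where those lens-limit figures come from, here is a sketch assuming the common Airy-disk plus Nyquist approximation (the exact constants and criteria vary between sources, so treat the outputs as ballpark numbers, not the answer's exact method):

```python
# Sketch: rough diffraction-limited resolution of a lens on a 36 x 24 mm sensor.
# Assumes Airy-disk diameter 2.44 * wavelength * f-number and ~2 pixels per
# Airy disk (Nyquist); real limits depend on the criterion chosen.

def diffraction_limited_mp(f_number, wavelength_um=0.55,
                           sensor_w_mm=36.0, sensor_h_mm=24.0):
    airy_diameter_um = 2.44 * wavelength_um * f_number
    pixel_pitch_um = airy_diameter_um / 2.0          # ~Nyquist sampling
    px_w = sensor_w_mm * 1000.0 / pixel_pitch_um
    px_h = sensor_h_mm * 1000.0 / pixel_pitch_um
    return px_w * px_h / 1e6

print(round(diffraction_limited_mp(11)))    # ~16 MP, close to the f/11 figure
print(round(diffraction_limited_mp(2.8)))   # ~245 MP, near the ~240 MP f/2.8 figure
```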

Edit to answer comment: "So if we take the lens limit out of the equation, a higher MP camera can record a higher maximum resolution raw image compare to a lower MP camera, and this higher resolution raw image can be printed larger in size at the same 300ppi scale compare to a lower resolution raw image, which essentially magnifies the subject on the print and I can see more details there, right?"

Technically, magnification is anything that causes an increase in the apparent/relative size, and enlargement is an increase in the physical size. Magnification depends on the size of the object being viewed, the distance at which it is viewed, and any optics that may or may not be in use. E.g. a projector and an enlarger both use optical magnification to create an apparently/relatively larger view, but an enlarger is used to create a new version with larger physical dimensions (a print). And greater recorded resolution (MPs) allows for more of both/either.

A digital image file has no defined physical size, but it is essentially a digital negative, and its size as it relates to magnification/enlargement is the physical size of the sensor... just as a negative or slide is for film. If you have two images of differing MPs from sensors of the same size, and you view them both at the same zoom level, the higher-resolution image will appear larger... it is, at that moment, being viewed at a greater magnification. So, depending on how that resolution is used, more MPs means more magnification. Whether or not that will reveal more detail depends on the actual recorded resolution (without some other limitation imposed, it will). And what physical sizes that relates to will depend upon the resolution (PPI setting) of the monitor/device in use. The same is true of how much larger you can print the higher-MP file while revealing additional detail/maintaining quality.

Similarly, just moving closer to, or farther away from the display/print changes the magnification at which it is being viewed (and that changes the CoC requirement).

https://www2.uned.es/personal/rosuna/resources/photography/Diffraction/Do%20sensors%20outresolve.pdf

Answered by Steven Kersting on April 20, 2021

The job of the lens is to project an image of the outside world onto the surface of the camera's imaging chip. The imaging chip is divided into millions of photosites; a count of these photosites gives the MP figure.

As to magnification: we tally this value by dividing the actual dimension of the subject by its size as projected by the lens. As an example, suppose an object is 1 meter (1000mm) in size and it images as 5mm on the sensor. The reduction is 1000 ÷ 5 = 200. Because this image is tiny compared to the object, we assign a negative sign; thus, for this example, the magnification is -200X.

Keep in mind, this is the magnification realized at the image plane of the camera. We typically view this image on a computer monitor or TV screen, or by making a print on paper. Since the camera's imaging chip is tiny, viewing an image that small is unproductive; in other words, when we display the image for viewing, we typically apply magnification. Suppose your viewing method applies 10X magnification, making the displayed image large. Now you are viewing the image of the target object at a net reduction of 200 ÷ 10 = 20X. That translates to the image of the 1000mm object appearing on your computer screen about 50mm (about 2 inches) across.
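
A minimal sketch of that chain of numbers, using the example figures above (the variable names are illustrative only):

```python
# Sketch: reduction at the sensor, then magnification at the display.

object_size_mm = 1000.0      # 1 m object
image_on_sensor_mm = 5.0     # its image measures 5 mm on the sensor

reduction_at_sensor = object_size_mm / image_on_sensor_mm   # 200, i.e. "-200X"

display_magnification = 10.0                 # the display enlarges the capture 10X
net_reduction = reduction_at_sensor / display_magnification  # 200 / 10 = 20

print(object_size_mm / net_reduction)        # 50 mm (about 2 inches) on screen
```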

When we take pictures, the size of the displayed image is influenced by:

a. Camera to subject distance

b. Focal length of the taking lens

c. Magnification applied to create the displayed image

In other words, moving the camera closer to or farther from the object regulates magnification. Zooming the lens, or mounting a longer or shorter fixed-focal-length lens, regulates magnification. The degree of magnification applied to generate the displayed image regulates magnification. Also, the apparent size of the displayed image changes with the distance from observer to displayed image, and with whether a magnifying glass is used when we view it.

Answered by Alan Marcus on April 20, 2021

@user1589188 -- Your supposition would be correct if the pixel count of the camera and the pixel count of the display were an exact match. This is not the case. Generally the camera has a far greater pixel count, so the camera is likely to capture far more detail than the computer display can handle. The logic of the computer's graphics board will cast out some of the pixels and display the rest.

Further, the camera samples the image into red, green, and blue pixels based on a predetermined scheme. The computer display also has a predetermined scheme: it considers one pixel to be comprised of a red subpixel, a green subpixel, and a blue subpixel.

What I am trying to explain is that the mechanism of capture and display is far more complicated than you might think. The display logic attempts to display the image with correct proportions, and the pixel count of camera vs. display is far from a 1:1 match. The same is true when the camera image is printed; the printer's resolution is likely far below the camera's.

Answered by Alan Marcus on April 20, 2021

More pixels do not magnify the subject more.

Quite the contrary: more pixels divide the scene into smaller pieces.

The magnification I am thinking is "more detail" as in taking a picture of a newspaper from a distance, does a higher MP camera see the characters clearer?

Smaller pieces can mean greater detail if the lens is up to the challenge.

The focal length of the lens determines magnification and the physical size of the sensor (width and height are measured in millimeters, not in megapixels) determines how wide of an angle of view will be captured at that magnification.

Having said that, when one views any image taken with a camera using a sensor the size of a postage stamp (some postage stamps are very small, others a bit larger, as are camera sensors, but none are as large as the screens or paper on which we typically view photos), what one looks at is usually an enlargement of the image as it was captured by the sensor.

When we look at two images from cameras with different pixel densities at "100% magnification" they will not be enlarged by the same enlargement factor. The one with more pixels per unit area will be enlarged more than the other so that each pixel in each image is displayed at the same size as a single pixel group on the monitor.

Let's suppose we have two sensors the same size: 36mm x 24mm, or what we call "full frame" or 35mm (because it is the same size as a frame of 135 format film that was commonly called "35mm film"). One is 3000 x 2000 pixels, or 6 megapixels. The other is 6000 x 4000 pixels, or 24 megapixels. The pixel pitch of the first sensor is 12 microns (µm). The center of each photosite is 0.012mm from the pixels immediately to its left, right, top, and bottom. The pixel pitch of the second sensor is 6 µm. There are twice as many pixels per millimeter both in width and height as the first sensor. This gives the second sensor four times as many pixels occupying the same area when compared to the first sensor.
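
The pixel-pitch arithmetic can be sketched as follows, using the numbers above:

```python
# Sketch: pixel pitch of two same-size (36 mm wide) sensors.

sensor_width_mm = 36.0

pitch_6mp_um = sensor_width_mm * 1000 / 3000    # 12 µm for the 3000 x 2000 sensor
pitch_24mp_um = sensor_width_mm * 1000 / 6000   # 6 µm for the 6000 x 4000 sensor

print(pitch_6mp_um, pitch_24mp_um)
# Twice the pixels per millimetre in each direction means four times as many
# pixels in the same sensor area.
```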

If we view both images scaled to fit our monitor so that the height of the image fills the height of our monitor, they will both be enlarged by the same amount. Let's say our monitor is an FHD 1920 x 1080 pixel 24" monitor with a pixel density of about 96 pixels per inch.

The first image is 2000 pixels tall being scaled onto 1080 pixels on the monitor.
The second image is 4000 pixels tall being scaled to fit 1080 pixels on the monitor.

In the first case, each pixel on the monitor represents 3.43 pixels from the camera's sensor. (1.85 pixels in width by 1.85 pixels in height).

In the second case, each pixel on the monitor represents 13.72 pixels from the camera's sensor (3.7 x 3.7 pixels).
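
Those scale-to-fit ratios fall out of a couple of divisions, sketched here with the same figures:

```python
# Sketch: sensor pixels represented by each monitor pixel when the image is
# scaled to fill the monitor's 1080-pixel height.

monitor_height_px = 1080

for image_height_px in (2000, 4000):           # the 6 MP and 24 MP images
    linear = image_height_px / monitor_height_px
    print(round(linear, 2), round(linear ** 2, 2))
# -> 1.85 linear, ~3.43 sensor pixels per monitor pixel (6 MP image)
# -> 3.7 linear, ~13.7 sensor pixels per monitor pixel (24 MP image)
```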

But when we view the image at "100% magnification" one camera pixel is represented by one monitor pixel group. A pixel group consists of one red, one green, and one blue sub-pixel.

We will only see a portion of each image on the screen because the screen does not have enough pixels to show every pixel in either image. If our monitor is an FHD 24" monitor, it displays about 96 pixels per inch.

When we view the 6 MP image at 100% (one image pixel per one monitor pixel), it would take 3000 pixels to display the full width of the image, and 2000 pixels to display the full height. At 96 ppi that would require a monitor that is 31.25" wide and 20.83 inches tall. Such a monitor would have a 37.6" diagonal and roughly "2.5K" resolution.

When we view the 24 MP image at 100% (one image pixel per one monitor pixel), it would take 6000 pixels to display the full width of the image and 4000 pixels to display the full height of the image. At 96 ppi that would require a monitor that is 62.5" wide and 41.66 inches tall! Such a screen would have a 75.2" diagonal and roughly "11.5K" resolution!

But since we are stuck with our mundane 24" FHD monitor, we only see a 1920 x 1080 pixel piece of each image. That's roughly 35% of the 6 MP image's pixels, but it's only 8.64% of the 24 MP image's pixels!
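
Here is a sketch of the monitor-size and visible-fraction arithmetic above (96 ppi is the approximation used in this answer):

```python
# Sketch: monitor needed to show each image at 100%, and the fraction of each
# image visible on a 1920 x 1080 monitor at ~96 ppi.

ppi = 96
monitor_px = 1920 * 1080

for width_px, height_px in ((3000, 2000), (6000, 4000)):   # 6 MP and 24 MP
    width_in = width_px / ppi
    height_in = height_px / ppi
    diagonal_in = (width_in ** 2 + height_in ** 2) ** 0.5
    visible_fraction = monitor_px / (width_px * height_px)
    print(round(width_in, 2), round(height_in, 2),
          round(diagonal_in, 1), f"{visible_fraction:.1%}")
# -> 31.25" x 20.83", ~37.6" diagonal, ~34.6% of the 6 MP image visible
# -> 62.5" x 41.67", ~75" diagonal, ~8.6% of the 24 MP image visible
```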

In other words, to view at "100% magnification", the image from the sensor with pixels half as wide and half as tall has to be enlarged to four times the area, so that each of its pixels occupies the same number of pixels on the monitor as a pixel from the sensor with pixels twice as wide and twice as tall.

If it magnifies the subject, how do you work out the magnification ratio from the MP?

Technically speaking, it enlarges the image more when we view a higher megapixel image at "100% magnification" on the same monitor.

It takes four times as many megapixels to give two times the linear enlargement when viewing an image at "100% magnification" on the same monitor.

It also takes four times as many megapixels to give a print size with twice the linear dimensions when printed at the same number of pixels per inch.

If you can print an 8x10 at 300 ppi with an 8.6 MP camera (taking the aspect ratio into account: a 3:2 camera must crop its long side down to 1.25 times the short side, so it needs 3600 x 2400 pixels to yield the 3000 x 2400 pixels of a 300 ppi 8x10), then you would need a 34.6 MP camera to print a 16x20 at 300 ppi.
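
A sketch of that print-size arithmetic, assuming a 3:2 sensor whose long side is cropped to the print's aspect ratio (the function name is just illustrative):

```python
# Sketch: megapixels a 3:2 camera needs so a crop prints at 300 ppi.

def mp_needed_for_print(short_in, long_in, ppi=300, sensor_aspect=3 / 2):
    short_px = short_in * ppi                        # e.g. 8" x 300 ppi = 2400 px
    # The 3:2 frame is longer than an 8x10 or 16x20, so the short side sets the
    # requirement and the long side of the frame gets cropped down.
    long_px = max(long_in * ppi, short_px * sensor_aspect)
    return short_px * long_px / 1e6

print(round(mp_needed_for_print(8, 10), 1))     # ~8.6 MP for a 300 ppi 8x10
print(round(mp_needed_for_print(16, 20), 1))    # ~34.6 MP for a 300 ppi 16x20
```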

Basically, assuming both sensors are the same physical size, the square root of the ratio of the two cameras' megapixel counts gives the increase in print size allowed at the same ppi. Likewise, that same square root tells you how much more the higher-megapixel image is being linearly enlarged when viewed on the same monitor at "100% magnification."

If you're comparing a 26 MP camera to a 20 MP camera, then the ratio of their resolution is 1.3:1. The square root of that ratio is 1.14:1. This means you could print 14% larger in terms of linear measurements with the 26 MP sensor compared to the 20 MP sensor.

It also means that, when viewing images at "100% magnification" on the same monitor, the 26 MP images would be linearly enlarged 14% more than the 20 MP images.
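
The same ratio rule in a couple of lines, using the 26 MP vs. 20 MP example:

```python
# Sketch: linear enlargement gain from a megapixel ratio (same sensor size).

mp_a, mp_b = 26.0, 20.0
linear_gain = (mp_a / mp_b) ** 0.5
print(round(linear_gain, 2))   # ~1.14, i.e. about 14% larger linear print size,
                               # or 14% more enlargement at "100%" viewing
```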

Answered by Michael C on April 20, 2021

Megapixels don't equal magnification. Pixels simply fill a container. Think of water in a glass: you need 2 oz of water to fill a 2 oz glass, and pouring in 8 oz doesn't do anything more for you. Likewise, you need 8 oz of water for an 8 oz glass; if you put only 2 oz in it, you will never fill it. Your sensor is the same, which is why more megapixels fill a larger photo size. Now, if you use a lot of megapixels for a small image, your software lets you crop the image much closer. That's the same as discarding the extra water; the magnification doesn't change. It's the same as cutting a 5x7 piece out of an 11x14: the extra gets thrown away, just like the extra water overflowing the glass.

Answered by user85781 on April 20, 2021
