It is widely known how sensor size influences angle of view (the value describing this is called the focal length conversion factor, the field of view conversion factor, or simply the crop factor). But what about depth of field?

You won’t find much literature on depth of field equivalence across different formats. This is possibly because the majority of DoF calculators are inherently flawed, so you can’t arrive at the correct result using them. More on this later – for now, let me ask you a question:

I photograph a scene with a full-frame 35mm camera using a 50mm lens. The lens is focused at a distance of 10m, and the aperture used is f/8. I will print the image at 30x45cm size. What lens and aperture should I use on a 1.6x crop factor APS-C sensor camera if I want the resulting print to look the same? By “the same” I mean identical framing and identical depth of field. Of course, both prints are viewed from the same distance.

Please spend a minute thinking about it before reading further.


Ok, now we can discuss the results!

The focal length part is easy: just divide the full-frame focal length by the crop factor.

I’ll tell you the correct answer to the aperture part before delving into the details. You should do the same: divide the full-frame aperture by the crop factor.

That is, you have to use a wider, 31.25mm lens and open up the aperture to f/5.

So the depth of field conversion factor is the same as the crop factor, which makes it easy to calculate quickly in the field.
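The rule above can be sketched in a few lines of Python (the function name is my own):

```python
# Convert a full-frame focal length / F-number pair to its equivalent on a
# smaller sensor: divide both by the crop factor.

def full_frame_equivalent(focal_length_mm, f_number, crop_factor):
    """Return the (focal length, F-number) pair that gives the same
    framing and depth of field on the smaller sensor."""
    return focal_length_mm / crop_factor, f_number / crop_factor

focal, aperture = full_frame_equivalent(50.0, 8.0, 1.6)
print(f"{focal:.2f} mm at f/{aperture:g}")  # the article's example values
```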

**The Math**

I’ll let you do the actual calculations as an exercise (optionally you can read my solution here), but I definitely want to talk about the correct way of calculating depth of field. We usually start by determining the hyperfocal distance H:

H = f² / (N · c) + f

where f is the lens’s focal length and N is the F-number. As the focal length is negligible compared to the hyperfocal distance, in practice we can safely use:

H ≈ f² / (N · c)
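Here is the hyperfocal distance for the article’s full-frame example (50 mm at f/8), using an assumed circle of confusion of 0.030 mm purely for illustration:

```python
# Hyperfocal distance, exact and approximate forms. All lengths in mm.

def hyperfocal_mm(f, N, c):
    return f * f / (N * c) + f           # exact form

f, N, c = 50.0, 8.0, 0.030               # c = 0.030 mm is an assumption
exact = hyperfocal_mm(f, N, c)
approx = f * f / (N * c)                 # the practical approximation
print(f"exact: {exact / 1000:.2f} m, approx: {approx / 1000:.2f} m")
```

The two forms differ by just the focal length (50 mm out of roughly 10.5 m), which is why the approximation is safe in practice.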

The problem child is c, which denotes the circle of confusion. No, it’s not a group of photographers arguing about depth of field; this number represents the amount of blur on the sensor plane that is still perceived as sharp detail on the final print:

c = v · tan(1/e) / m

where e is the resolution of the viewer’s eye expressed in cycles per degree (so the 1/e angle is measured in degrees), v is the viewing distance in millimeters, and m is the print’s magnification (calculated as the print’s linear dimension divided by the sensor’s linear dimension).

As you can see, the circle of confusion depends on the print’s magnification, the viewing distance, and the viewer’s eyesight. Any depth of field calculator that doesn’t let you input these values is just a waste of time. Actually, those unusable calculators just take a fixed c for some smallish print size and less than 20/20 vision. But to arrive at the correct depth of field equivalence factor you have to begin with a correct c.

Note that sensor resolution plays no role in the circle of confusion, and thus none in depth of field. It does, however, limit the maximum magnification that still looks good.

From here the near and far depth of field limits are calculated with the following equations (or their approximations):

D_near = s · (H − f) / (H + s − 2f)

D_far = s · (H − f) / (H − s)

where s is the subject distance.
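Putting the pieces together for the article’s full-frame example (50 mm, f/8, focused at 10 m, again with an assumed c of 0.030 mm):

```python
# Near and far depth of field limits. All lengths in mm.

def dof_limits_mm(f, N, c, s):
    H = f * f / (N * c) + f                       # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    # beyond the hyperfocal distance the far limit runs to infinity
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

near, far = dof_limits_mm(50.0, 8.0, 0.030, 10_000.0)
print(f"near: {near / 1000:.2f} m, far: {far / 1000:.2f} m")
```

With these inputs the near limit lands around 5 m and the far limit over 200 m away – focusing close to the hyperfocal distance stretches the far limit dramatically.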

**Interesting Consequences**

*Diffraction limited depth of field* is the same for any two sensors with the same number of megapixels, even if they have different *diffraction limited apertures*. That is, the diffraction limited aperture is a 1.6x smaller F-number on a 1.6x crop factor camera than on an equal megapixel full frame camera.
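A quick numeric check of that scaling. The wavelength, pixel counts, and the “Airy disk spans two pixels” criterion below are my assumptions, chosen only to show the ratio:

```python
# With equal pixel counts, the crop sensor's pixel pitch is smaller by
# the crop factor, so the F-number at which the Airy disk grows to a
# fixed number of pixels shrinks by the same factor.

WAVELENGTH_UM = 0.55                     # green light, assumed

def diffraction_limited_f_number(sensor_width_mm, pixels_across):
    pitch_um = sensor_width_mm * 1000 / pixels_across
    # Airy disk diameter ~= 2.44 * wavelength * N; solve for N where the
    # disk spans two pixel pitches (an arbitrary but common criterion)
    return 2 * pitch_um / (2.44 * WAVELENGTH_UM)

full_frame = diffraction_limited_f_number(36.0, 6000)
aps_c = diffraction_limited_f_number(36.0 / 1.6, 6000)
print(f"full frame: f/{full_frame:.1f}, APS-C: f/{aps_c:.1f}")
```

Whatever sharpness criterion you pick, the ratio between the two F-numbers is exactly the crop factor.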

f/5.6 maximum aperture zoom lenses on APS-C cameras are a joke. Who would want to shoot with an f/9 lens on a full frame camera?!

You need wider maximum aperture lenses on APS-C cameras than you would on full frame. The new Sigma 18-35 f/1.8 lens is a good step in this direction.

You can capture the exact same looking image on an APS-C crop sensor camera that you could on a full frame one. You’ll just need a wider, faster (and higher resolution and more expensive) lens.
