## Wednesday, October 28, 2020

### DOFIS: Resolution FoM tweak

Just a quick post to say I've tweaked the algorithm in DOFIS that provides a figure of merit to show how resolution changes as you adjust focus and aperture.

In the latest version, the overall resolution is estimated by continuing to assume a perfect lens, ie the glass resolution is not considered the governing element of the overall system resolution.

The overall system resolution (R), in lp/mm, is estimated by taking the sensor and diffraction resolutions in quadrature:

1/R^2 = 1/S^2 + 1/D^2

Where S is the theoretical maximum resolution of the sensor in lp/mm and D is the MTF50 diffraction-based resolution at aperture N and focus x, ie changing magnification. This estimate may be viewed in the DOFIS menu under Info.

The resolution FoM, in %, that is displayed on the LV, is 100*(R at N,x)/(R at minimum N, ie wide open, focused at infinity): as shown in the last post.
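
For those who like to see the arithmetic, here is a minimal Python sketch of the quadrature-based FoM; the function names and the sample lp/mm values are mine, purely for illustration:

```python
import math

def system_resolution(S, D):
    """Combine sensor (S) and diffraction (D) resolutions, both in lp/mm,
    in quadrature: 1/R^2 = 1/S^2 + 1/D^2."""
    return 1.0 / math.sqrt(1.0 / S**2 + 1.0 / D**2)

def resolution_fom(S, D, D_wide_open_inf):
    """FoM in %, relative to the wide-open, infinity-focus resolution."""
    return 100.0 * system_resolution(S, D) / system_resolution(S, D_wide_open_inf)

# Illustrative numbers: 5D3-like sensor limit of 53 lp/mm, diffraction
# currently at 80 lp/mm, wide-open/infinity diffraction of 172 lp/mm.
print(round(resolution_fom(53, 80, 172), 1))  # → 87.2
```

Note how the quadrature combination means the FoM degrades smoothly, rather than snapping between sensor-limited and diffraction-limited regimes.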

## Tuesday, October 27, 2020

### DOFIS: Resolution Figure of Merit

There are lots of ways resolution gets spoken about in photography: the resolution of the sensor; the resolution of the lens; the resolution of the screen on which we are viewing the image; the resolution of the projector that is being used.

When you look at the literature, it becomes rather confusing: moving from a simple 1-D view of resolution, eg the number of sensor photo-sites on the camera sensor; through to resolution at a systems level, eg the convolution of sensor+lens+aperture+projection+other-stuff.

Of course, the one thing many leave out is the weakest link: us! That is, the human eye-brain element in resolution, what we might call 'photographic acuity'.

Like many, I'm aware of the limitations of my eyes, especially after a couple of macular and cataract related operations on one eye; resulting in my vision being blurred in that eye and with a different 'white balance' relative to the other eye.

I guess what I'm saying is, there is little point trying to come up with an absolute measure of resolution, when we each will perceive the final image in different ways: eg sharpness through to artistic interpretation.

Nevertheless, as I continue to develop DOFIS as THE Magic Lantern focusing tool, I felt there was a need for some form of relative appreciation of resolution: a relative figure of merit that provides feedback as we change aperture and focus.

I believe there are two main camera-centric components for us to pragmatically think about: the sensor and the lens.

The sensor's maximum resolution may, at first sight, look easy to address, after all we know the pitch or size of a sensor photo-site and can, as a reasonable approach, use two of these to give us a line pair appreciation of resolution. For instance, my 5D3 has a pitch of 6.22 microns, giving us a maximum line pair resolution of 500/6.22 = 80 lp/mm.

Although in this post I'm not discussing MTFs, here is a reminder of how lp/mm and Modulation Transfer Function (MTF) go together:

Before we take a more realworld look at things, let's discuss why an appreciation of image resolution is important, especially for those wishing to print their images.

Let's say we wish to print our image on 8x10 in, good quality, photo paper. As we are capturing our image on a full frame 5D3, let's, for convenience, say the sensor's short side is 1in, ie let's ignore the aspect ratio. That is, a 10x magnification when we print.

Our printer says it can print at, say, 600 dots per inch as a minimum, ie 300 line pairs per inch, which we will recognise as 300/25.4 ≈ 12 lp/mm.

BTW our eyes are usually tested by using a test chart, eg a Snellen Chart, a Landolt Chart, or a Tumbling E chart for those who cannot read letters.

As an aside, I came across this useful insight:

"A score of 20/20 or 6/6 on a Snellen chart, or 0 logMAR score, means you can resolve details down to 1 minute of arc, which is 1/60 degree. The tangent of 1/60 degree is 0.000291, or 2.91e-4. If you are viewing a photograph from 1 m away, then the size of an object that is 1 minute of arc in size is 1000 * 0.000291 = 0.291 mm. That is the size of one line, either white or black, in a resolution, while a “line pair” requires one of each. So a line pair is twice as large, or 0.582 mm per line pair. To convert mm per line pair into line pairs per mm, just take the reciprocal: 0.582 mm per lp is 1.72 lp/mm.

An example in the other direction: Suppose you have a photograph which you know can resolve 5 lp/mm. Thus each line pair occupies 0.2 mm, and a single line is 0.1 mm. Suppose you are viewing the print from a distance of 250 mm. Then the tangent of the angle subtended at your eye is 0.1/250 = 0.0004. The arctan of that value is 0.0229 degrees, or 1.37 arc minutes. This is larger than 1 arc minute, so it corresponds to a line on the Snellen chart where the letters are 1.37 times as large as they are on the 20/20 line - approximately 20/27.5.

So if your print only resolves 5 lp/mm, and you have 20/20 vision, the print (at 250 mm) is not as sharp as your eye, and the print may not appear completely sharp. On the other hand, if the print resolves 8 lp/mm, then the minimum feature size on the print is 0.86 arc minutes, which is equivalent to a Snellen chart line of 20/17. If you have 20/20 vision, the print has more detail than your eye can see.

The logMAR score is a little more complicated, since it is the base-10 logarithm of the minimum angular resolution in arc minutes. That adds one step to the calculation."
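
The conversions in the quoted passage are easy to script. Here is a small Python sketch (the helper names are mine) that reproduces the quoted 1.72 lp/mm at 1m, and the roughly 1.37 arc-minute figure for a 5 lp/mm print viewed at 250mm:

```python
import math

def lp_per_mm_for_snellen_2020(distance_mm):
    """lp/mm a 20/20 eye (1 arc-minute per line) can resolve at a distance."""
    line_mm = distance_mm * math.tan(math.radians(1.0 / 60.0))  # one line
    return 1.0 / (2.0 * line_mm)  # a line pair is two lines

def arc_minutes_per_line(lp_per_mm, distance_mm):
    """Angular size, in arc-minutes, of a single line of a given lp/mm print."""
    line_mm = 1.0 / (2.0 * lp_per_mm)
    return math.degrees(math.atan(line_mm / distance_mm)) * 60.0

print(round(lp_per_mm_for_snellen_2020(1000), 2))  # → 1.72, as quoted
print(round(arc_minutes_per_line(5, 250), 3))      # → 1.375, ie the quoted ~1.37
```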

Cutting to the chase, the minimum lp/mm resolution, on a printed image, is considered acceptable at between 5-8 lp/mm, viewing an 8x10 print at a normal viewing distance of 10in.

However, tests have been carried out that indicate some (albeit a very small number) can see a difference between an image with 15 lp/mm compared to 30 lp/mm, but not between 30 and 60 lp/mm.

Only you, the viewer, will know what is acceptable, ie to you and hopefully others enjoying your art :-)

For now let's stick with 5 lp/mm, which, ignoring all other things, is equivalent to 50 lp/mm on the sensor; which is OK on my 5D3, ie an 80 lp/mm sensor.

However, if we wish to print a 16x20 in image, ie a sensor to print magnification of 20, then the sensor resolution needs to be 5 lp/mm x 20, ie 100 lp/mm, but only if we continue to view the print at 10in. Most, however, would view a larger print, further back than 10in, eg at 20in. Thus if we view prints at their 'comfortable for the eye' distance, a print resolution of 5-8 lp/mm is a reasonable lower limit.
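
The print arithmetic above can be captured in a few lines of Python; this is just a sketch of the reasoning (helper name mine), assuming the acceptable print lp/mm relaxes in proportion to viewing distance:

```python
def required_sensor_lp_mm(print_lp_mm, magnification,
                          viewing_distance_in=10, reference_distance_in=10):
    """Sensor lp/mm needed for a target print resolution, scaling the
    acceptable print lp/mm with viewing distance."""
    effective_print_lp = print_lp_mm * reference_distance_in / viewing_distance_in
    return effective_print_lp * magnification

# 8x10 print (10x) viewed at 10in: 5 lp/mm on paper needs 50 lp/mm on sensor
print(required_sensor_lp_mm(5, 10))  # → 50.0
# 16x20 print (20x) viewed further back, at 20in: back to 50 lp/mm on sensor
print(required_sensor_lp_mm(5, 20, viewing_distance_in=20))  # → 50.0
```

This is why viewing larger prints from their 'comfortable' distance keeps the 5-8 lp/mm lower limit reasonable.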

For more exacting prints, for example close scrutiny by a judge, maybe, aiming for 10-15 lp/mm seems a sensible maximum print resolution.

Of course, all the above is theoretical, as, when we take/make images in a real world, things can only get worse than the simple view presented above.

Because others (including past posts of mine) have presented the theory of diffraction, I will not be repeating that here: just using the results/conclusions.

I'm also going to ignore the 'non-diffraction optical quality of the lens', as it is near impossible to model this. I'm going to assume the lens, at all aperture values is OK, ie it's not the weakest link.

Having said that, I am going to model a lens component that degrades resolution, ie the lens aperture.

Thus, the only two things I'm going to model are the sensor and diffraction.

For the sensor, the Bayer arrangement complicates things, eg reducing the resolution of diagonal linear features:

Plus, the various filters that get used, eg low-pass; anti-aliasing; infrared, ultraviolet; etc, all contribute to 'degrading' the resolution/quality of the captured image.

However, as these are a constant, I'm going to ignore them.

As for how many sensor photosites contribute to the absolute (sensor-based) resolution, I'm going to use a simple (maximum) metric from here: https://downloads.leica-microsystems.com/Leica%20FS%20C/Newsletters/LeadingInvestigator_6.pdf

That is, you need a minimum of three photosites to address aliasing; ignoring the impact of the colour array (eg Bayer) arrangement and the various sensor filters. Thus on my 5D3, ignoring all other things, the theoretical maximum lp/mm resolution is 1000/(3*6.22) = 53 lp/mm.

Thus, if printing an 8x10 print, ie a 10x magnification, or a larger print viewed further away, the theoretical best I could achieve is 53/10, or about 5 lp/mm, which should be OK when standing at the 'normal' viewing distance of 10in ;-)

Note: the above ignores fancy post processing that attempts to 'sharpen things up'.

As for diffraction, I'm using the focus corrected model that is already in DOFIS, namely an MTF50 based number, based on an adjusted Dawes limit:

D = 380/(lambda*N*(1 + m/p)) lp/mm

Where: lambda is the average wavelength in microns, ie 0.55 for a visible band sensor; N the aperture number; m the magnification at the point of focus; and p the pupil magnification.

For example, on a lens with a minimum aperture of f/4, focused at infinity, the diffraction based MTF50 lp/mm value is 172 lp/mm, ie way more than the sensor physically can resolve. In this situation the sensor limit of 53 lp/mm becomes the dominant resolution.

But at a magnification of 1, when using a macro lens, at, say, f/22, the diffraction based lp/mm value is 15 lp/mm. Clearly this now becomes the dominant resolution.
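
As a sketch of this adjusted Dawes, MTF50-style estimate, the following Python reproduces the two worked examples above; note the constant 380 (with lambda in microns) is my back-fit of the DOFIS number from those examples, not the verbatim DOFIS code:

```python
def diffraction_mtf50_lp_mm(N, m=0.0, p=1.0, lam_um=0.55):
    """MTF50 diffraction resolution in lp/mm, from an adjusted Dawes limit.
    N: aperture number (at infinity), m: magnification at focus,
    p: pupil magnification, lam_um: wavelength in microns.
    The (1 + m/p) term is the 'bellows factor'."""
    return 380.0 / (lam_um * N * (1.0 + m / p))

print(round(diffraction_mtf50_lp_mm(4), 1))        # f/4 at infinity: ~172 lp/mm
print(round(diffraction_mtf50_lp_mm(22, m=1), 1))  # f/22 at 1:1: ~15-16 lp/mm
```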

Bringing the above together, we have a pragmatically simple way of providing user feedback on resolution impacted by the sensor and diffraction:

R = min(S, D)

Here we see that, until the diffraction reaches a critical value, ie the maximum sensor resolution, we will assume the sensor is the limiting element. Once diffraction gets larger than the critical value, the diffraction will drive the resolution.

As we could be misled by the numbers, we will base the user feedback on showing the % degradation in resolution, relative to the sensor maximum, ie 1000/(3*sensor-pitch), ie with the aperture wide open and thus with negligible/low diffraction.

A refinement on the above model, and a more realistic one, is to take the sensor and diffraction in quadrature, to arrive at a ‘system’ resolution.

When running you will typically see something like this:

Here we see we are using a 100mm lens, in fact a Canon macro lens, and that we are focused at 3.65m. The DOFIS relative and diffraction corrected DoF is 28cm in front and 33cm behind the point of focus. DOFIS is also telling us that we will need to take about 7 images if we wish to focus stack to the hyperfocal. Note the RoT hyperfocal at 25 microns is 100/10/0.25 = 40m.

We also see a green box, indicating that, at f/8, diffraction is not yet dominating things. The 100% tells us that the resolution is maxed out and dominated by the sensor.

Finally, as we don't see any macro info in the top right DOFIS area, we must be at a magnification less than 0.5 (which is what I set as my trigger point to change depth of field models).

Lets close the aperture down to f/16 and see what happens:

As we haven't changed focus, we are at the same magnification as before. However, we now see a red box, indicating that diffraction is now driving things. The near and far (relative) depths of field have increased, although, because we are in diffraction aware mode (the + sign), not by much. Nevertheless we now only need to take 5 brackets to reach the hyperfocal.

Finally we also see that, because of diffraction, our resolution has reduced to 78% of the wide open, sensor maximum.

Let's now leave things at f/16 but focus to achieve the maximum magnification:

Now we see we are focused at the minimum focus distance of 30cm and we are at a magnification of 1.0. Because we are over a magnification of 0.5, the depth of field model has switched to one that accounts for diffraction at the macro level, ie an M is shown. We therefore see the depth of field, either side of the point of focus, being estimated at 1.3mm.

Finally, we see the resolution has reduced to 37% from its maximum value, ie because of diffraction.

As this has been another long post, I'll end it here.

As usual I welcome feedback on this post and, of course, DOFIS.

## Monday, October 26, 2020

### The Hyperfocal Distance: Understanding the traps

Warning: this post may be heavy going for some, so jump to the bottom line if you wish.

In previous posts I hope I have convinced readers that the hyperfocal distance is a very powerful tool for photographers to know.

For the landscape photographer, it can help you achieve the best balance between infinity focus, to a defined defocus blur, and knowing the near depth of field, ie at half the hyperfocal.

For those wishing to carry out focus bracketing it can help you estimate where to focus in the scene, eg at, say, 3H, H, H/3, H/5 etc

For portrait photographers seeking to blur out their backgrounds, it can help you find the best settings to ensure the model is in focus and you achieve the desired bokeh.

Simply put, it is a very powerful thing to know the hyperfocal, and the usual equation is presented as:

H = (f^2)/(Nc) + f

Where f is the focal length of the lens, N the aperture number, eg f/10, and c the so-called circle of confusion, ie a measure of blur that we deem acceptable, before we judge the image to be unacceptably out of focus.

As many know, an even simpler form of the hyperfocal is to drop the f term and just use the (f^2)/(Nc) term. This can be simplified further by using the Rule of Ten (RoT): http://photography.grayheron.net/2018/11/infinity-focusing-in-your-head-rule-of.html

However, this simple equation has a few traps and it pays all photographers to appreciate what's going on behind the maths. Once you do, things can get simpler...with confidence.

All photographers will be comfortable with f and N above, as these are the bread and butter settings we use all the time: but the circle of confusion (c) is where many get confused!

The first trap some may fall into is choosing a 'non-optimised' value for c, resulting in an 'incorrect' infinity focus or a depth of field assessment.

Pragmatically, most photographers accept that c, in mm, can be set between 0.03/crop and, say, (2-3)*(sensor photo-site size). That is, on my 5D3, with a crop of 1, between 0.03mm (30 microns) and, say, 2*6.3, or about 13 microns. The higher figure will be OK for PC display, but I may consider using the smaller value for c if I was trying to create a tack sharp print for close scrutiny.

There is no hard and fast rule to use when setting c. The best advice is get to know what works for you, noting that accounting for diffraction will become more important as you stop your aperture down, ie total blur may be estimated by taking the defocus blur and diffraction blur in quadrature. But, for now, in this post, let's ignore diffraction.

The second trap is assuming that the hyperfocal is measured from the camera, eg the sensor plane, or the front of the lens: after all the nice camera manufacturers put a mark on the camera to show us where the sensor plane is, and we can see the front of the lens.

In fact the hyperfocal equation above is positioned from the front principal plane of a thin lens model. Of course, in the thin lens model the front and rear principal planes, and the entrance and exit pupil are all coincident:

In the thin lens model, the lens, relative to the sensor, is positioned at b=f*(1+m), that is at f+f*m, where the f*m term is the lens extension required to achieve focus away from infinity. That is, at infinity, where m, the thin lens magnification, approaches zero, the lens is positioned at one focal length from the sensor.

For example, if a 50mm lens is focused at its minimum focus, a 'standard' lens, eg not a cinema lens, will need to move the focusing lens group by 50*m. Thus at a maximum magnification of 0.2, the lens-to-sensor distance moves from 50mm to 50+10 = 60mm.

This movement will manifest itself in the field of view changing as one focuses from infinity to the closest focus.

Without proof, the magnification at the hyperfocal is simply (Nc)/f; thus, at the hyperfocal, b is f+Nc. However, with, say, N set to f/10, and c at 0.03, we can see that the lens extension at the hyperfocal is really small, and independent of the focal length, ie 0.3mm in this example. That is, we can ignore it and say that the thin lens hyperfocal distance, from the sensor, is:

H = (f^2)/(Nc) + 2f
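
As a quick Python sanity check on just how small that Nc extension term is (the helper name is mine, the numbers are the f/10, c=0.03 example):

```python
def thin_lens_hyperfocal_mm(f, N, c):
    """Thin lens hyperfocal from the sensor: f^2/(Nc) plus the lens
    position b = f + Nc at the hyperfocal."""
    return f * f / (N * c) + 2 * f + N * c

# 50mm at f/10, c = 0.03mm: the Nc extension is only 0.3mm of ~8.4m
print(round(thin_lens_hyperfocal_mm(50, 10, 0.03)))  # → 8434
```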

The third trap is of course using a thin lens model. Although we are unable to model the actual lens design, we can, as has been presented in previous posts on DOFIS developments, create a pretty good model of a lens, by pulling a thin lens apart and even accounting for asymmetry, ie pupil magnification:

In the above, illustrating p >1, the main unknown is how much to pull the lens apart, so that the model matches the lens specification, eg maximum magnification and minimum focus distance.

From previous posts we know we can estimate t at any focal length (F) by only knowing the lens minimum focus distance (X) from the sensor and the magnification at that distance (M):

t = X - F*(1+M)^2/M

To make things as simple as possible, we will accept the values that the manufacturer quotes in their specification. We could also measure X and M ourselves.

This then gives us the most complete equation for the hyperfocal, relative to the sensor plane, you can get:

H = (f^2)/(Nc) + 2f + Nc + t

Or in the fully expanded form:

H = (f^2)/(Nc) + 2f + Nc + X - f*(1+M)^2/M

As before, we could sensibly drop the Nc term, as it is less than a mm.

As an example, let's take a Canon 14mm f/2.8L lens. The maximum magnification (M), ie at the minimum focus distance (X) of 200mm from the sensor, is quoted at 0.15. Plugging these figures into the equation for t, we arrive at a figure of 76mm. That is we need to split the thin lens model by 76mm.

Or put another way, the thin lens model is underestimating the hyperfocal distance, of this 14mm focal length lens, by 76mm.

As another example, let's take the Canon 100mm f/2 with a quoted maximum magnification of 0.14 at the minimum focus distance of 900mm. Once again using our equation for t we can estimate the amount we need to split the thin lens model apart; and that is: -28mm !!!!

That is a negative amount!

Do we still use the equations?

Absolutely, as all we have done is move from a retrofocus designed lens (14mm), eg the physical lens is longer than the focal length, to a telephoto lens (100mm), eg the lens is shorter than the focal length.

We have now reached a critical understanding. If we use the simple form of the hyperfocal, eg the RoT + 2f or (f^2)/(Nc) + 2f, with a retrofocus lens we will always be focusing short of the 'true' hyperfocal, ie t is positive. Whereas, with a telephoto lens we will be focusing beyond the hyperfocal, ie t is negative.

Thus, for lenses less than, say, 50mm, where we are moving into the retrofocus designed lens zone, we are more sensitive to knowing, or accounting, for t. Plus, of course, as we move to very wide lenses, the hyperfocal is getting less all the time.

As a final example, let's look at a rather extreme example of a wide angle lens: the (non-fisheye) IRIX 11mm. The X and M are quoted as 280mm and 0.07, giving a t of 100mm.
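
The three worked examples can be checked with a small Python sketch of the t estimate, ie t = X - F(1+M)^2/M (my reconstruction of the split-lens formula from the numbers above):

```python
def split_t_mm(F, X, M):
    """Thin lens split distance t, from the minimum focus distance X
    (sensor to subject, mm) and the magnification M at that distance."""
    return X - F * (1 + M) ** 2 / M

print(round(split_t_mm(14, 200, 0.15), 1))   # Canon 14mm: ~76mm (retrofocus)
print(round(split_t_mm(100, 900, 0.14), 1))  # Canon 100mm: ~-28mm (telephoto)
print(round(split_t_mm(11, 280, 0.07), 1))   # IRIX 11mm: ~100mm
```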

Using the RoT estimate for the hyperfocal, ie at f/10, we arrive at 11/10, which means that the hyperfocal is at 1.1m but at a CoC of 11 microns. So let's adjust this to a more sensible CoC of 22 microns, ie half the RoT hyperfocal distance to 550mm. Adding in our thin lens 2*f factor, gives a thin lens hyperfocal, from the sensor, of 550+22 = 572mm.

But we now know that t is 100mm, ie about 20% of the hyperfocal. That is, we should be focusing at 672mm, 572+100, from the sensor; and not at 572mm.

Bottom line: assuming you are still with me, we can pragmatically collapse all the above, and previous posts on the hyperfocal, into a few rules of thumb, to be used when hyperfocal focusing:
• Knowing the hyperfocal is of most value/importance when using a lens of about 50mm or less;
• For ‘short’ telephoto lenses, one can safely estimate the hyperfocal in our head by using the adjusted-RoT, with the 2*f term, as the thin lens correction term will only shorten/reduce this number, ie t is negative;
• Using the hyperfocal approach for long telephoto lenses is likely not worth it, say, beyond 100mm, ie a 25 micron RoT hyperfocal of 40m;
• For wide angle lenses, a safe rule of thumb is to focus at, say, 20% beyond the RoT value;
• Infinity blur at the hyperfocal will, by definition, be the CoC;
• Focusing at n times the hyperfocal will result in an infinity blur of the CoC/n;
• CoCs much less than two sensor pixels are a sensible lower limit, with 30/crop (in microns) as the largest value;
• When focus bracketing from infinity to near, focus at 3*H, H, H/3, H/5..., and safely use the simple RoT number.
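
For the last rule of thumb, here is a tiny Python sketch (my own helper) that generates the 3*H, H, H/3, H/5... bracketing sequence:

```python
def bracket_points(H, n_extra=3):
    """Focus points, far to near, for bracketing to the hyperfocal:
    3H, H, then H over the odd divisors 3, 5, 7..."""
    return [3 * H, H] + [H / k for k in range(3, 3 + 2 * n_extra, 2)]

print(bracket_points(4.0))  # H = 4m: 12m, 4m, then ~1.33m, 0.8m, ~0.57m
```
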
As usual I welcome any feedback or comments on this post.

## Monday, October 19, 2020

### DOFIS: Now for the macro shooters

If you have been following my DOFIS developments, you will be aware that DOFIS provides Magic Lantern photographers and videographers with the best available, in-camera, information, eg:

• A lens model that matches the manufacturer's or measured information, eg magnification at focus;
• Feedback on the effective focal length, eg lens breathing;
• Field of view as the lens changes focus;
• Diffraction aware depth of field, either relative to the sensor or the point of focus;
• Focus informed Infinity blur information, so you can infinity focus to a blur criterion;
• Focus bracketing feedback for manual focus bracketing;
• Auto focus bracketing, with or without exposure bracketing;
• The entrance pupil location, ie the pano no-parallax relative to the sensor.

In this post I'm pleased to publish the latest version of DOFIS, which now incorporates information for the macro shooter.

As far as DOFIS is concerned, the macro model gets switched on if the magnification is greater than 0.5, but you can adjust this threshold in the DOFIS menu if you wish, eg to 0.7. Note this is an arbitrary choice, ie there is no hard line between using one DoF model or another.

Once the focus is at a point where the magnification is greater than the threshold, then DOFIS switches from using a depth of field model based on geometric optics, to one that assumes you are diffraction limited. The model I've chosen to use, without proof here, is one based on a quarter lambda wavefront error, namely: the near and far depths of field, either side of the point of focus, are calculated from:

DoF (either side of focus) = 2*lambda*(N*(1 + m/p))^2/m^2

Where: lambda is the wavelength of interest, ie 0.55 microns here, but would change if you were shooting in the IR bands; N is the aperture at infinity, where m is 0, ie what Canon and ML report as the aperture; m is the magnification at the point of focus; and p is the pupil magnification.
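
To illustrate the quarter-wave model, here is a Python sketch; the formula form, 2*lambda*(N*(1+m/p))^2/m^2 per side, is my reconstruction, and with p=1 the numbers will differ slightly from the DOFIS screenshots, which use the lens's actual pupil magnification:

```python
def macro_dof_mm(N, m, p=1.0, lam_mm=0.00055):
    """Quarter-wave, diffraction-limited DoF on each side of focus, in mm.
    N is the infinity aperture; N*(1 + m/p) is the effective aperture."""
    Ne = N * (1.0 + m / p)
    return 2.0 * lam_mm * Ne * Ne / (m * m)

print(round(macro_dof_mm(11, 1.0), 2))  # f/11 at 1:1, p=1: ~0.5mm each side
print(round(macro_dof_mm(22, 1.0), 2))  # f/22 at 1:1, p=1: ~2.1mm each side
```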

Of course, just using the above does not help that much, as we also need to account for diffraction. In the non-macro zone we 'conveniently' combine the optical and diffraction blurs in quadrature; to estimate the CoC as impacted by diffraction.

The way I've handled depth of field and diffraction in the DOFIS macro model is to provide the user feedback on both the DoF and diffraction, so they can make informed decisions, eg regarding macro focus stacking.

The same diffraction model is used whether DOFIS is in the macro or non-macro zone. Namely, the diffraction blur spot is estimated to be:

blur = 2.44*lambda*N*(1 + m/p)

Where all the variables are as above.

Note that the (1+m/p) term is the so-called 'bellows factor' that we use to arrive at the effective aperture number, and in estimating the focus dependent FoV. This tells us why magnification is important in macro photography and not, say, in landscape photography.

A useful thing to note is the magnification at the hyperfocal is N.C/f. Where C is the circle of confusion and f the focal length. Thus at, say, a focal length of 30mm and an aperture of f/10, with a CoC of 0.03, the magnification will be 10*0.03/30 = 0.01; which we can ignore, and thus the diffraction, if focused from near the hyperfocal to infinity, will simply be: 2.44*0.55*N.

On the other hand, if I was using my Canon 100mm macro at the closest focus, ie a magnification of 1, then the diffraction will be twice that at infinity.
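
Both of these observations, the negligible magnification at the hyperfocal and the doubling of diffraction at 1:1, can be sketched in Python (the helper names are mine):

```python
def diffraction_blur_um(N, m=0.0, p=1.0, lam_um=0.55):
    """Diffraction blur spot in microns: 2.44 * lambda * N * (1 + m/p)."""
    return 2.44 * lam_um * N * (1.0 + m / p)

def hyperfocal_magnification(N, c_mm, f_mm):
    """Magnification at the hyperfocal: N*c/f."""
    return N * c_mm / f_mm

print(round(hyperfocal_magnification(10, 0.03, 30), 2))  # → 0.01
print(round(diffraction_blur_um(10), 1))    # near the hyperfocal: 2.44*0.55*N
print(round(diffraction_blur_um(10, m=1)))  # at 1:1 (p=1): twice the above
```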

The impact of the diffraction is, of course, to soften the image, ie degrade the resolution that we can achieve.

Once again, without proof, I've decided to adopt a model, for the optical resolution, based on the so-called Dawes limit, and adjust that to match the 50% MTF value: see clarkvision.com for more info, eg https://clarkvision.com/articles/scandetail/#diffraction. This then gives us a way to estimate the MTF50 lp/mm resolution, ie:

MTF50 (lp/mm) = 380/(lambda*N*(1 + m/p))

[As an aside, for those that wish to understand the MTF, have a look at https://www.youtube.com/watch?v=iBKDjLeNlsQ ]

Let's now look at how the above runs in DOFIS.

In the above screen capture we see a new menu item called "Diff Feedback". All this does is toggle the diffraction feedback on or off, ie some users may not wish to see this additional information.

Assuming you are a macro shooter and you have switched the diffraction feedback on, you will now be looking at something like this:

Here, in the top right, we see the new diffraction information, a green box, telling us we are not diffraction limited, based on our current camera settings, ie a 100mm (macro) lens, set to an aperture of f/6.4 and focused at 1.09m. Additionally, DOFIS is telling us that the relative near DoF is 23mm, the far DoF is 23mm, and the diffraction aware DoF is on (+). The S tells us that DOFIS is using a registered split lens model, ie it's as accurate as you can be.

Finally, in the green box is the lp/mm that DOFIS is estimating at the MTF50 point, based on an adjusted Dawes criterion. Here it is 98 lp/mm on the sensor.

[Note we are assuming a perfect lens, which is impossible to achieve. In real life the lens 'quality' will play an important part in the overall camera-lens quality, eg resolution achievable, in the final image. Having a great camera and poor lens, or a high quality lens on a 'poor' camera, is not a photographic match made in heaven.]

As we are not yet in the macro zone, DOFIS is telling us that we will need about 25 focus brackets to get to the hyperfocal, ie #25 is shown.

The criterion used to define being diffraction limited is based on the camera's sensor, and is set in the script by the user, ie between 2-3 sensor pixels. On my 5D3 I have this set to 15 microns.

Let's now stop down the aperture until we see the diffraction feedback turn red:

In this case we see the diffraction feedback change to red at f/11, focused at 1.09m, thus we know that we are in the sensor defined, diffraction limited zone.

At the moment we are still not in the macro zone, ie magnification greater than 0.5, so let's reduce focus until we are just in the macro zone:

We can see we are in the macro zone as DOFIS has changed from showing a relative DoF display (R) to showing an M. We see the magnification being reported at 0.53, at a focus distance of 34cm from the sensor, and the near and far DoFs, either side of the point of focus, are 1.3mm, using the quarter wave model. The lp/mm MTF50 value is estimated at 38.

By now I'm sure some are saying so what? How is all this information helping me capture macro shots?

The key thing is to not worry about the absolute numbers too much: it's a model of reality and, as we know, all models are wrong...but some are useful.

In the end, only the user will be able to 'calibrate' the worth of the DOFIS information, ie by noting the DOFIS numbers and confirming the captures are OK.

For example, in the above we see that the lp/mm (on the sensor) estimated at 38. Of course, what is acceptable will depend on how you process and present the final image, eg Facebook vs a print in a competition.

To gauge what is an acceptable MTF50 lp/mm, it is useful to remember that people with average/good eyesight can resolve 5 lp/mm on a print at a 'normal' viewing distance. But of course this is on the final image; we thus need to define this on the sensor. Let's assume we are using a full frame sensor, ie 36x24mm, and that our final print is 360x240mm. This gives us a sensor-print enlargement of 10. Thus we should be seeking a sensor lp/mm value of no less than 50, which we had at a focus of 1.09m, ie 98 lp/mm; but when we went into the macro zone, this dropped to 38 lp/mm on the sensor.

We could, of course, accept this, take our image and see if all is OK. If it is, we know that a DOFIS reported MTF50 lp/mm value of 38 is OK.

The only way we can change this value, ie increase the MTF50 resolution, is to open up the aperture, but, of course, at the cost of the depth of field, which is already small.

This is why in the macro zone, where we will likely be diffraction limited, DOFIS provides the user information to inform their capture decisions.

As a final illustration, let's show what things look like at the maximum magnification, ie 1.0.

Here we see we are at a magnification of 1.0, ie we are at the minimum focus distance of the lens, and that, at f/11, we know we have a lot of diffraction, eg about twice that at infinity, and that our MTF50 lp/mm has dropped to 28, ie on the sensor. We also see the depth of field is still small, ie 0.6mm either side of the point of focus.

Let's say we wished to increase the depth of field by stopping the aperture down to, say, f/22:

We now have a depth of field of 2.6mm either side of the point of focus, but at the cost of the diffraction limited resolution falling to 14 lp/mm.

Whether the above is acceptable is totally down to the user and the final image presentation format.

As this has been a rather long post, and rather technical, let me now draw things to a close by saying: DOFIS now supports the macro shooter and provides information that, hopefully, will be useful in capturing macro images, especially those that require focus stacks, ie where you need to ensure you don't have 'focus gaps'.

As usual I welcome feedback on this post and DOFIS.

## Monday, October 12, 2020

### Why pano shooters should stop down

In the last post I alluded to the complication that, in many lenses, the pupil 'plane' is in fact a surface, ie it varies as you move away from the optical centre.

For most photography, eg portraiture etc, this nuance is an irrelevance. That is, over simplifying things, focus on the eye and go from there.

Where the non planar attribute becomes of interest is when one is capturing panoramic brackets.

Let's first remind ourselves what the 'problem' is and, as an example, let's take a Canon 24mm F1.4L, focused at infinity. Thanks to https://www.photonstophotos.net we can gain some insight into this, and other, lenses:

On the above screenshot we see the lens positioned relative to the sensor, at a flange offset of 44mm. We also see the two principal points, the two pupil locations and the two focal points. All these 'cardinal points' being on the optical axis.

Let's now switch on the entrance (red) and exit (blue) pupil surfaces and set the lens to its widest aperture:

As implied in the last post: not what you might expect. That is, both surfaces, are highly non-planar.

As far as the entrance pupil (red) is concerned, this means that the entrance pupil location will vary along the optical axis, according to the angle of the ray entering the lens.

So, the obvious question is: can we reduce the effect, so that the spread of the entrance pupil along the optical axis is minimised?

Most landscape photographers, who are taking panoramic brackets, and have a near point of interest, will be interested to know that, by stopping down the aperture, one can reduce the variability. For instance, taking the same lens as above at f/10, results in the following pupil surfaces:

Thus, by stopping down the aperture, say, to a sensible f/10, the entrance and exit pupil surfaces have 'collapsed' towards the optical axis and become 'near-planar'. That is, the pupil location, that we can measure or calculate, and which is on the optical axis, is a good measure of the pupil's location; albeit with some lenses, eg especially very wide or fisheye lenses, the entrance pupil will not be a single point.

Bottom line: unless you know your lens, don't shoot panos wide open ;-)

## Sunday, October 11, 2020

### A poor man's optical bench

As we have seen in previous posts, on a photographic lens, knowing the pupil magnification becomes important if you wish to know macro depths of field and how the field of view changes with focus, eg because of focus breathing.

One way to estimate the pupil magnification is to simply eyeball it. That is, by looking at the lens from the back and the front, you can guess the ratio of the exit pupil diameter to the entrance pupil diameter. Crude, but better than assuming it is unity, ie ignoring it.

Another way is to take an image of the in-focus exit and entrance pupils, with a fiducial, eg a ruler, also in focus next to the lens, ie the ruler in approximately the same plane of focus as the pupil. After scaling the images in Photoshop so that the rulers are overlaid, one can measure the pupil diameters in Photoshop pixels and, once again, estimate the pupil magnification from the ratio of the exit to entrance diameters. This approach is much more accurate than eyeballing, but may be sensitive to your aperture's design, eg the number of blades. So make sure you measure in the 'same' place:
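As a sketch of the arithmetic only (the pixel counts below are made up for illustration; yours would come from measuring in Photoshop), the estimate is just the ratio of the two measured diameters:

```python
# Pupil magnification from two scaled photographs: p = exit diameter / entrance diameter.
# Pixel values here are hypothetical, purely to show the calculation.

def pupil_magnification(exit_px, entrance_px):
    """Ratio of exit to entrance pupil diameter, measured in (scaled) pixels."""
    return exit_px / entrance_px

entrance_diameter_px = 820.0   # measured on the front-of-lens image
exit_diameter_px = 615.0       # measured on the rear-of-lens image, after scaling
print(pupil_magnification(exit_diameter_px, entrance_diameter_px))  # 0.75
```

A ratio below unity, as here, would indicate a typical telephoto-style design; wide angles tend to give ratios above unity.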

The ultimate measurement approach is to use an optical bench or table, to locate the exit pupil along the lens axis. Of course, most photographers don't have an optical bench lying around: https://en.wikipedia.org/wiki/Optical_table

So I got thinking and decided to create my own 'poor man's' optical bench; and use it to see if I could estimate the pupil magnification.

After some experiments, my final arrangement looked like this:

The main part of my 'optical bench' is a cheap macro rail system, eg this one from Neewer, which I had lying around, unused ;-)

Of course, the rail's limited travel means we will be restricted in which lenses we can measure, but it should be sufficient for the lenses we are mainly interested in.

The set up is pretty simple. Align the lens under test with the axis of the camera you are using to make the measurement, with the exit (mount) end of the lens under test towards the camera. In the above, I'm using a 100mm macro lens on my 5D3.

As we are interested in the pupil magnification (p), we need to find the location of the exit pupil, which is located on the lens axis at f(m+p) from the sensor.

Of course, with the lens being tested set to infinity, this collapses to f*p, ie the magnification (m) is zero at infinity.

Therefore, if we can measure the distance from the exit pupil to the sensor, we have a quick way of estimating the pupil magnification (p).
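To make the relationship above concrete, here is a minimal sketch (with purely illustrative numbers, not measurements of any real lens) of the exit pupil location at f(m+p) from the sensor, and its inversion at infinity focus:

```python
# Forward model: exit pupil distance from the sensor = f * (m + p),
# where f is focal length (mm), m is magnification, p is pupil magnification.
# Numbers below are illustrative only.

def exit_pupil_from_sensor(f_mm, m, p):
    """Exit pupil location on the optical axis, in mm from the sensor."""
    return f_mm * (m + p)

def pupil_mag_at_infinity(exit_pupil_mm, f_mm):
    """Invert the relation at infinity focus, where m = 0: p = distance / f."""
    return exit_pupil_mm / f_mm

f = 24.0   # hypothetical focal length in mm
p = 2.0    # assumed pupil magnification
print(exit_pupil_from_sensor(f, 0.0, p))   # at infinity: f * p = 48.0 mm
print(exit_pupil_from_sensor(f, 0.5, p))   # at m = 0.5: f * (0.5 + 2.0) = 60.0 mm
print(pupil_mag_at_infinity(48.0, f))      # recovers p = 2.0
```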

The optical bench will allow us to measure the exit pupil position relative to a known fiducial, in this case the end of the lens that mates with the camera.

The caveat being that we are assuming a planar, ie flat, exit pupil plane: which, according to the lens design, it may not be. If you wish to understand this, a great page is: https://www.photonstophotos.net/GeneralTopics/Lenses/OpticalBench/OpticalBenchHub.htm

For example, here is the exit pupil surface for a Canon 24mm prime, wide open (thanks to photonstophotos):

Whilst here is the exit pupil surface, once again wide open, for a Canon 24-105mm RF lens:

In other words: it's complex and not predictable, unless you know the lens design in detail! The best advice is to stop down, which forces the pupil surface towards the lens centre and thus towards a more planar surface.

We also know, from the manufacturer, what the flange distance is, ie the distance from the sensor to the surface that mates with the lens. For example the Canon EoS Full Frame cameras have a flange distance of 44mm:

Therefore, in the case of the Canon EoS, the pupil magnification (p) may be estimated from (d+44)/f, where d is the distance from the pupil to the end of the lens, which we will find using our 'optical bench':

• Set the lens you are measuring to the focal length of interest, if you have a zoom, and the focus to infinity;
• If necessary, stop down the aperture of the lens under test, so the diaphragm blades, and thus the pupil, are clearly visible;
• Compose things so you can see both the pupil and the end of the lens;
• Using your camera, wide open, focus on the pupil as best you can, ie zoom in with LV and rock focus back and forth;
• Note the position on the rail;
• Without touching the camera's focus, or moving the camera or lens, rack the macro rail until the surface that mates with the camera comes into focus, ie zoomed in and once again rocking focus;
• Note the position;
• If you wish, repeat the above and take an average.

You now have the distance (d).
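The final arithmetic is then a one-liner. A minimal sketch, assuming the Canon EF flange distance of 44mm (substitute your own mount's figure), using the measurement from this post:

```python
# Pupil magnification from the 'optical bench' measurement: p = (d + flange) / f,
# with the lens under test focused at infinity.

FLANGE_MM = 44.0   # Canon EoS (EF mount, full frame); change for other mounts

def pupil_mag_from_bench(d_mm, f_mm, flange_mm=FLANGE_MM):
    """d is the exit pupil's distance from the lens mount surface, in mm."""
    return (d_mm + flange_mm) / f_mm

# The worked example from this post: 24-105mm at 24mm, d measured at 49mm.
print(pupil_mag_from_bench(49.0, 24.0))  # 3.875, ie (49 + 44) / 24
```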

As an example, let's take my 24-105 F/4L at 24mm.

First, let's look at the 'in-focus' pupil:

As implied above, if you zoom in don't expect a razor sharp image, as the exit pupil surface is a complex thing:

However, by rocking focus back and forth you will find the best focus: accepting that we are measuring at the edge of the exit pupil, and not on the optical axis; although stopping down the aperture of the lens under test will help.

As for the image of the rear of the lens: that is easier to confirm:

Using my 'poor man's' optical bench, I measured d at 49mm, which gave me a pupil magnification for the 24-105mm, at 24mm, of 3.87, ie (49+44)/24.

So there you have it: three different ways you can estimate/measure the pupil magnification of a lens.

As usual I welcome feedback on this post: good and bad ;-)

## Saturday, October 10, 2020

### DOFIS: Exposure, Focus and Pano bracketing

Another quick post on one of my 'tabletop' experiments, this one showing how, with the help of Magic Lantern and DOFIS, you can easily create complex bracket sets: a three degrees of freedom challenge of exposure + focus + composition.

To illustrate the process, I used my 24-105mm F/4L, at 35mm in portrait mode, mounted on a nodal rail and a pano rotator:

DOFIS told me that the no-parallax point, ie the entrance pupil, was at 83mm from the sensor, which I duly set and checked, ie by quickly rotating the camera on the pano head and confirming all was OK: which it was.

I then positioned the camera for 'composition', and confirmed I would be taking four pano stations.

I then decided to exposure bracket using ML's Dual-ISO, ie at 100/1600.

I then set the base exposure using ML's Auto ETTR, having set an aperture at f/11: giving a shutter of 0.6s.

Finally, I set the nearest focus and noted this figure, so I could reset it for each pano station.

I set full auto focus bracketing in DOFIS and noted it said it would take around 4 brackets: in fact it took 5.

I then repeated the following process:

• Position for the first pano;
• Press my DOFIS auto focus bracketing button (the RATE button on my 5D3);
• Let DOFIS capture the Dual-ISO focus brackets;
• Rotate for the next pano station;
• Reset focus;
• Press Rate and repeat the above until the final pano station.

After ingesting the 20 images into Lightroom, ie 5 focus brackets at 4 pano stations, I first 'developed' the Dual-ISOs.

I then did a round trip to Helicon Focus, of the Dual-ISOs from each pano station: thus giving me four focus merged images for pano blending using Lightroom's photo merge to pano feature.

I used the perspective blend, for a realistic look, and the resultant image looked like this:

Resulting, after cropping, in a 9409x5648 pixel final image, created from 20 images.

Bottom line: once again, Magic Lantern and DOFIS make taking complex, eg 3 degrees of freedom, bracket sets a breeze.