Saturday, April 27, 2019

Continuing the Deep Focus story


In this post I'm publishing the latest version of my Magic Lantern Lua script to achieve deep focus.

As we know, lenses are very complex things. For example, here is a cross section through the Canon 24-105 F/4L lens, showing the various individual lens elements.



As can be imagined, light rays will follow a complex path, as exemplified by the following illustration of an arbitrary lens:


As photographers, or, more correctly, as photographers who wish to play around with a few equations, we usually make use of a few simplifying assumptions. For example, the so-called Thin Lens Model:
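The model boils down to a single equation. Written in the notation used below, and assuming (my labelling) that v is the subject-to-lens distance and b the lens-to-sensor distance, it is:

1/f = 1/v + 1/b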


The simplicity of the lens equation hides some real power. For example, by introducing the idea of reproduction magnification, ie (image size)/(object size) = M, we can show, using similar triangles, that the v and b distances can be stated in terms of the magnification, the focal length and the object-to-image distance (x), ie:
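As a reconstruction of those results (the original equation images are not reproduced here), taking M = b/v, which follows from similar triangles, and x = v + b, a little algebra gives:

v = x/(1 + M) = f(1 + M)/M
b = M*x/(1 + M) = f(1 + M)
f = M*x/(1 + M)^2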



Photographers also make use of other useful equations, like the hyperfocal equation (H), which tells us where to focus to achieve a depth of field between H/2 and infinity, and where the focus quality, in terms of (microns of) blur, will never be worse than the so-called circle of confusion (CoC).
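For reference, the usual form of the hyperfocal distance, with N the aperture number and C the CoC, is:

H = f^2/(N*C) + f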

But as we know, the Thin Lens model is hardly representative of a real lens. A better model would be to use a Thick Lens model, which may be seen as a Thin Lens model pulled apart.


However, although lens manufacturers tell us some lens specifications, eg maximum magnification, they don't report where the lens principal planes sit; and although there are procedures to estimate these positions, they are rather fiddly. Hence, in this script we will continue to assume a thin lens.

Typically, for viewing a 10in print, taken on a full frame camera, at 10in away, the CoC is stated at around 30 microns. It is less than this on a crop camera, ie CoC = (full frame CoC)/(crop factor), eg around 19 microns on a 1.6x crop body.

Of course, a CoC of 30 microns is sometimes not tight enough, eg because of closer scrutiny by competition judges; and sometimes it can be larger, eg for a web image on Facebook!

As for a minimum blur, pragmatically two sensor pixels is a reasonable limit, ie around 12 microns on my 5D3.

Knowing the CoC, the focal length and the aperture then allows us to estimate the near and far depths of field.

But as we are coding up our solution, we will seek out the ‘correct’ forms for the depths of field.

Thus the near (n) and far (F) depths of field may be written as:
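The original equation images are not reproduced here, so the following are the standard 'exact' thin lens forms, written with s as the subject distance measured from the (thin) lens, ie s = v in the earlier notation:

n = s(H - f)/(H + s - 2f)
F = s(H - f)/(H - s), with F becoming infinite once s reaches H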




For focus stacking we are seeking solutions when the far depth of field of the current image is the same as the near depth of field of the next image, as shown below:




We thus need to solve for d (note: not the same d as in the Thick Lens model above), the amount to move from the current focus position, namely d in this expression:
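In the notation above (again, my reconstruction rather than the original image), the requirement that the far depth of field at the current focus position s equals the near depth of field at the next position s + d is:

s(H - f)/(H - s) = (s + d)(H - f)/(H + (s + d) - 2f)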


Some would give up at this point, as it looks difficult to solve for d. However, help is at hand in the form of Wolfram Alpha, which means we can say that d is:
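Working through the algebra with the forms above (or letting Wolfram Alpha do it) gives, as one way of writing the result:

d = 2s(s - f)/(H - 2s)

This is positive as long as the current focus position s is short of half the hyperfocal distance; once s passes H/2, a final image focused at (or beyond) H covers everything out to infinity.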


We now seem to have everything we need: we know our focus position (x), thanks to Canon and Magic Lantern; we know the aperture and the focal length; and we know what overlap blur we want.

So there is no confusion: we have not really solved our problem. That is, we know the focus position relative to the sensor, and we know the hyperfocal, near depth of field and far depth of field relative to the lens's front principal plane. What we don't know is the position of the front principal plane relative to the sensor.

So the best we can do is use our thin lens estimate, ie assume the single principal plane sits at a distance b from the sensor.
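To make the maths concrete, here is a minimal Lua sketch of the core calculations. This is not the published script: the function names are mine, s is the subject distance measured from the assumed single principal plane, and all distances are in mm.

-- Thin lens focus stacking maths (sketch only, not the published script).
-- f = focal length (mm), N = aperture number, C = CoC (mm),
-- s = subject distance from the assumed principal plane (mm).

local function hyperfocal(f, N, C)
  return f * f / (N * C) + f
end

local function near_dof(s, f, H)
  return s * (H - f) / (H + s - 2 * f)
end

local function far_dof(s, f, H)
  if s >= H then return math.huge end
  return s * (H - f) / (H - s)
end

-- focus step d such that this frame's far DoF meets the next frame's near DoF
local function focus_step(s, f, H)
  return 2 * s * (s - f) / (H - 2 * s)
end

-- example: 24mm at f/8 with a 30 micron (0.030mm) CoC, focused at 500mm
local H = hyperfocal(24, 8, 0.030)
print(H, near_dof(500, 24, H), far_dof(500, 24, H), focus_step(500, 24, H))

For these numbers the hyperfocal comes out at about 2.4m and the focus step at about 334mm, and it is easy to check that far_dof at 500mm matches near_dof at 834mm, which is exactly the overlap condition we want.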

The final step is to code the above up as an ML Lua script, which I have done and which you can download from the right, in the script list area: ML Get Focus Bracketing Script.

All you need to do is register your lens in the script, ie its name, minimum focus distance and maximum focal length (if it is a zoom).
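By way of illustration only (the actual table layout and field names in the published script will likely differ), the registration data might look something like this:

-- hypothetical example: the field names here are illustrative, not the script's
local my_lens = {
  name = "EF24-105mm f/4L IS USM",   -- as reported by the camera
  min_focus_mm = 450,                -- minimum focus distance in mm
  max_focal_length = 105,            -- maximum focal length (for a zoom)
}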

The script is a so-called Lua simple script, in that it doesn’t have a menu. To run it, simply go to the ML Scripts menu and select run.

The default script assumes a Thin Lens model; however, by uncommenting one line you can experiment with running the script in a Thick Lens mode.

I have also written a helper script that allows you to press one key to jump to the ML Scripts menu. I use the RATE key on my 5D3.

The script uses the ML CoC as the required overlap (defocus) blur and to explicitly calculate H. The script does not use the ML depth of field estimates or worry about diffraction.

The script will carry out a calibration step, ie checking which direction the lens moves in: some go left and some go right when commanded to move towards infinity.
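The idea behind the calibration is simply to make a small test move and see which way the reported focus distance changes. The sketch below assumes the ML Lua lens API (lens.focus_distance and lens.focus, which needs LiveView); it is not the published script's actual code:

-- nudge the lens by one focus step and see which way the distance moves
local before = lens.focus_distance          -- reported focus distance (mm)
lens.focus(1, 3, true)                      -- one step of size 3, wait for it
local after = lens.focus_distance
local positive_moves_to_infinity = (after > before)
lens.focus(-1, 3, true)                     -- undo the test move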

Once the calibration is complete, the script will pause and tell you how many brackets it estimates (+/- 1 or so) it will take. At this point you can adjust the focus, aperture and focal length. You can also ETTR, assuming you have selected ETTR's SET option (rather than a long, 2+ second, half shutter press): the text will change colour to green to indicate you are ready to stack.

The UI shows you the estimated number of brackets, the defocus, diffraction and total blurs at infinity (in microns).

You can ETTR (via SET) when the UI is being shown.

Doing a half shutter press will initiate the image capture phase, while pressing any other key will exit the script; note that that key will also function as normal. On my 5D3 I exit with the RATE key.

At each focus the script can also capture exposure brackets, using the ML Advanced Bracketing. Note you can also turn on Dual ISO mode. The script will switch both of these off when it takes the bookend images (see below).

[Note: at the moment this will fail if the Advanced Bracketing chooses (sic) to use BULB mode: sorry, this is a problem in ML that I can’t fix. So, if this does occur, ensure the slowest bracket is fast enough not to trigger the BULB mode.]

The script will also take two (dark) bookend images, to help differentiate the bracketing sequence in post.
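Pulling the pieces together, the capture phase boils down to something like the sketch below. This is illustrative only: f, H and focus_step are as in the earlier sketch, camera.shoot() is the ML Lua call, and move_focus_by/move_focus_to and the bookend handling are hypothetical stand-ins for what the published script actually does (along with ETTR and Advanced Bracketing, which are omitted here):

-- take_dark_bookend()                      -- opening (dark) bookend frame
local s = start_focus_mm                    -- assumed: subject distance from the principal plane
while true do
  camera.shoot()                            -- capture the frame at the current focus
  if s >= H / 2 then
    -- this frame's far DoF already reaches past H/2, so one final frame
    -- focused at the hyperfocal (near DoF = H/2, far DoF = infinity) finishes
    move_focus_to(H)                        -- hypothetical helper
    camera.shoot()
    break
  end
  local d = focus_step(s, f, H)             -- step so the DoFs just overlap
  move_focus_by(d)                          -- hypothetical helper wrapping lens.focus()
  s = s + d
end
-- take_dark_bookend()                      -- closing (dark) bookend frame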

So there you have it: perfect focus and exposure bracketing, thanks to Magic Lantern Lua and making a few simplifying assumptions.

In future posts I’ll provide further insight into the script’s use and limitations, and show how to process the images in post.
