People often ask me about my Magic Lantern enhanced approach to photography, including landscape focus stacking. So I thought I would be bold and put down what I believe is the optimum approach to landscape stacking, care of Magic Lantern – sorry Nikon/Sony world!
Unlike macro stacking, which is a simple, linear process, landscape focus stacking is non-linear: you cannot stack by simply moving the lens or camera relative to the scene in fixed increments.
In landscape focus stacking we are usually trying to get everything from the nearest focus of the lens to ‘infinity’ in focus; and our first port of call is usually the trusted hyperfocal approach.
Note that, when using the hyperfocal approach, we can only bring the focus towards us by a maximum of half the hyperfocal distance. Thus if the hyperfocal distance is, say, 4m away, then my nearest point of acceptable ‘sharpness’ is about 2m away. Objects closer than 2m will be in the ‘unacceptably out of focus’ or ‘unsharp’ zone.
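As an aside, the hyperfocal relationship is easy to sketch in code. The snippet below uses the standard thin-lens approximation H = f²/(N·c) + f; the 24mm lens, f/11 aperture and 0.029mm blur value are purely illustrative assumptions:

```python
# Illustrative sketch of the hyperfocal relationship (thin-lens approximation).
# The lens and aperture values are assumptions for illustration only.

def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance: H = f^2 / (N * c) + f, all in mm."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

H = hyperfocal_mm(24, 11, 0.029)   # eg a 24mm lens at f/11, 0.029mm blur spot
near_limit = H / 2                 # focused at H, sharpness runs from H/2 to infinity
print(f"Hyperfocal: {H / 1000:.2f} m, nearest 'sharp' point: {near_limit / 1000:.2f} m")
```

So, focused at the hyperfocal, this (hypothetical) 24mm lens at f/11 holds acceptable sharpness from about 0.9m out to infinity.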
With Magic Lantern (the latest nightly version) we now have an inbuilt depth of field calculator that dynamically provides Live View feedback on the near and far depths of field. This feedback even accounts for diffraction, and you can select the total blur (née circle of confusion) according to your personal needs.
Rather than speak of circle of confusion on the sensor, I prefer to use the term sensor blur spot, as I’m attempting to account for lens ‘out of focusness’ and diffraction. Thus my blur spot is the combination of these two things in quadrature, ie SQRT( Out-of-Focus^2 + Diffraction^2).
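If you want to experiment with this quadrature idea, here is a minimal sketch. The diffraction term (approximated here by the Airy disc diameter for 550nm visible light) and the example numbers are my assumptions for illustration, not fixed values:

```python
import math

# Sketch of combining defocus and diffraction blur in quadrature, as described
# above. Diffraction is approximated by the Airy disc diameter 2.44 * lambda * N
# (an assumption for illustration, for visible light of ~550nm).

def total_blur_um(defocus_um, f_number, wavelength_nm=550):
    diffraction_um = 2.44 * (wavelength_nm / 1000.0) * f_number
    return math.sqrt(defocus_um ** 2 + diffraction_um ** 2)

# eg 20 microns of defocus blur at f/16:
print(round(total_blur_um(20, 16), 1))
```

Note how, at small apertures, the diffraction term starts to dominate: even with zero defocus blur, f/16 alone contributes over 20 microns.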
If you look on line at the various depth of field calculators, you will normally be asked to specify a circle of confusion, and most of these calculators will ignore diffraction. Exceptions are the DOF apps I have previously written about.
For wide open apertures and ‘snaps’, we shouldn’t get that hung up over all this blur spot and circle of confusion stuff. However, if you are seeking out the ‘perfect’ landscape or shooting with a closed-down aperture, and your desire is to sell the image or hang it for others to see, then you will likely be trying to capture the ‘best’ image you can, eg in exposure and focus.
So, let’s start by assuming we are creating a print: which is a more exacting need than, say, a JPEG for a blog page.
Informed wisdom, after trawling the web, may lead you to a ‘circle of confusion’ criterion of around 0.030mm (some will say 29 microns) for a full frame DSLR. By the way, if you are shooting a different camera format, the CoC will be different: for a cropped DSLR it will be smaller, by the crop factor.
The first ‘problem’ we need to address is that the ‘normal’ CoC advice ignores our viewing conditions, or more correctly assumes a specific viewing arrangement, and ignores post capture cropping etc. It also assumes an average viewer, with average eyesight, ie not some eagle-eyed judge!
Although I'm not an optician, I have read that an average adult with healthy vision can resolve between 5 and 8 lp/mm (line pairs per millimeter) at a ‘normal’ viewing distance of 10 inches. If we take the more exacting value (8 lp/mm) and assume a viewing distance of 20 inches, for example, we wouldn't need more than 4 lp/mm on the print.
Hence, the CoC, or blur spot, you select needs to be treated as a variable according to your presentational needs. Once again, I’m only suggesting this exacting approach for those ‘special images’, not for everyday snapping, where you can use a (generic) non-variable blur spot.
Rather than boring you with more theory and words, let’s jump to a simple formula that you can use to estimate the required blur circle in microns:
Blur Spot (microns) = (C x M) / (P x L)
Where C is the size of the (cropped) image on the sensor; M is the distance ratio compared to a ‘normal’ 10in viewing distance, ie if you intend to view at, say, 24 inches away, M would be 2.4; P is the print size; and L is the number of line pairs per mm that you are using as your ‘eyesight’ criterion (typically between 5 and 8). Note you should keep C and P in the same units, eg mm or in or whatever.
As an example, let’s assume we are looking to create a 16 x 20 in (ie about 500mm wide) print, that will be seen in a gallery (so we will use 8 lp/mm) at the closest viewing distance of, say, 24in, and that our image is taken from a slightly cropped area of our full frame image, with a cropped (sensor) dimension of about 30mm (out of the 36mm total sensor width).
The suggested blur spot may be estimated as: 30 x 2.4 / (500 x 8) = 0.018mm (18 microns).
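For those who prefer code to calculators, the formula and worked example translate directly into a few lines of Python (the function name and unit choices are just illustrative):

```python
# The blur spot formula above: Blur Spot = (C x M) / (P x L).
# Keep C and P in the same units (mm here); the result is converted to microns.

def blur_spot_microns(C_mm, M, P_mm, L_lp_per_mm):
    return (C_mm * M * 1000) / (P_mm * L_lp_per_mm)

# The worked example: 30mm cropped sensor width, viewed at 24in (M = 2.4),
# a 500mm-wide print, judged at 8 lp/mm:
print(round(blur_spot_microns(30, 2.4, 500, 8), 1))  # 18.0 microns
```

You can see at a glance how each term pulls the answer: a bigger print or a fussier lp/mm criterion shrinks the allowable blur spot, while a greater viewing distance relaxes it.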
Obviously this is much smaller than the generic web number of some 29 microns. By the way, that 29 micron (generic) number may now be seen as indicative of 5 lp/mm viewed at 10in; in other words, the web-based advice sits at the lower end of acceptable sharpness and eyesight: OK for snaps and blogs, but maybe not judges!
So what should we take from the above?
For ‘normal’ photography needs, eg images presented on a PC or casual prints, you should use the non-lp/mm approach: simply use a 29 micron blur spot for a full frame camera (scaled down according to your camera’s crop factor), accounting for out-of-focusness and diffraction. If you need to manage your print presentation to a more exacting standard, eg because you wish to account for cropping, to cover a judge’s scrutiny, or to ensure a gallery hanging is optimised, then start with the simple formula above to estimate your combined/total blur spot.
Let’s now return to the real point of this post: what is the optimum Magic Lantern based focus stacking workflow for (exacting) landscapes? BTW there is a key point here: we are not shooting fast-moving or changing scenes. We are enjoying life and capturing images at a leisurely pace: taking in nature’s beauty as we capture some, hopefully, award-winning images :-)
As I have discussed ML in some detail in previous posts, I won’t bore you by repeating all that ‘stuff’ again. I’ll also assume you are a Magic Lantern user and familiar with the power of ML. So here, based on my experience and experimentation, is the optimised focus stacking workflow for landscapes (based on my 5D3 set up); that is, capturing the maximum sharpness your camera-lens system can achieve over its entire focus range:
- Using the simple formula above, estimate your case-specific blur spot. Or simply use the ‘default’ full frame blur spot of 29 microns;
- Go to the ML DoF sub-menu, under the focus menu, and enter the blur spot (unfortunately called circle of confusion in ML). Also ensure diffraction aware is selected. Note the ML nightly assumes a visible band camera; for an IR-converted camera you will need to change the code (I have suggested a number in the ML source code);
- Also in the focus menu select follow focus and (under focus settings) select the follow focus controls that work for you, ie +/- or -/+. BTW this workflow requires a lens that reports focal length and distance, ie not a manual lens;
- Ensure you have ETTR switched on. I personally prefer not to use the mid or shadow S/N options, ie I put these to zero and, usually, put % highlight to a low number, eg 0.1% or so. If you leave the mid and shadow S/Ns at non-zero values you will risk overexposing highlights. The reason I adopt this strategy is that I can also decide, after a test image, to use dual-ISO or auto-bracketing if I need to, ie if I have a very high dynamic range scene;
- Switch to LV and you should see two things: depth of field and focus distance being reported, towards the lower right of the LV screen; and the follow-focus controls should be in the middle of the screen;
- Take an ETTR reading and decide if this is acceptable. If you feel you are unable to capture the full dynamic content of the scene, then augment with dual-ISO or Auto-bracketing (see previous posts on how to optimise auto-bracketing or dual-ISO captures);
- Compose your scene: this one is down to you!
- Now the magic starts. Using the (on my 5D3) ‘joy stick’ toggle, move the focus to the macro end until the soft stop is reached. Toggling up and down moves the focus faster than left to right;
- The LV display will now report the distance to your closest focus point, as well as the distances to the near and far depth of field limits. Note the largest DoF number, ie the distance from you to the farthest DoF point;
- Capture the first image;
- Using the follow focus controls (note you could use the lens ring, but the follow focus controls are more exacting) move the plane of focus such that the nearest DoF distance is now slightly less than the farthest one you just noted;
- Capture another image and repeat the above until ML reports that your farthest DoF is at infinity. Graphically, you have achieved this:
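The loop above can be simulated with the simplified thin-lens DoF approximations near = H·s/(H+s) and far = H·s/(H−s); under these, the reciprocal of the focus distance simply drops by 2/H per frame. The lens values below are illustrative assumptions, and ML’s diffraction-aware numbers will differ slightly:

```python
# A sketch of the stacking loop described above, using the simplified thin-lens
# approximations near = H*s/(H+s) and far = H*s/(H-s). These imply that 1/focus
# falls by 2/H per frame. The default lens values (24mm at f/8, 18 micron blur
# spot, 0.3m closest focus) are illustrative assumptions only.

def stack_plan(focal_mm=24, f_number=8, coc_mm=0.018, closest_m=0.3):
    """Return the list of focus distances (m) for a near-to-infinity stack."""
    H = (focal_mm ** 2) / (f_number * coc_mm) / 1000.0  # hyperfocal, in metres
    plan = []
    inv = 1.0 / closest_m
    # Each shot's far DoF becomes the next shot's near DoF, so each step
    # subtracts 2/H from the reciprocal of the focus distance.
    while inv > 1.0 / H:
        plan.append(round(1.0 / inv, 2))
        inv -= 2.0 / H
    plan.append(round(H, 2))  # final shot at the hyperfocal: far DoF = infinity
    return plan

print(stack_plan())
```

With these (assumed) numbers the hyperfocal is about 4m, and the plan comes out at eight frames: the focus steps crowd together at the macro end and stretch out rapidly towards the hyperfocal, which is exactly the non-linear behaviour mentioned at the top of this post.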
All that is required now is for you to use your favourite post processing software to blend the images together.
As this has become one of my longest posts, I’ll simply stop writing here! As usual, I welcome feedback, including any questions you may have. Please feel free to comment below.