In previous posts I have described how I’ve been ‘playing around’ with focus stacking. Not macro focus stacking, which is a virtual necessity, but focus stacking for landscapes, which, I suggest, is more challenging.
In macro photography we approach focus stacking in a relatively linear fashion. For example, we first calculate the width of the ‘depth of focus’, ie the zone either side of THE focus point that remains in acceptable focus, based on some criteria, eg lens optics, size of sensor pixels, required printing quality and viewing distance. We then use this to work out the number of ‘focus steps’, or images, we need to take to cover the subject we are capturing. Thirdly, as the images overlap, we can throw them at focus stacking software and, magically, we end up with our subject ‘tack sharp’ from front to back, despite the narrowness of the depth of (acceptable) focus. The downside of macro-based focus stacking is that the acceptable depth of focus is usually sub-mm, so for a reasonably sized subject, ie cms, we need to take lots of images, ie 10s to 100s!
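To make that arithmetic concrete, here is a rough Python sketch of the macro step-count calculation. The depth of field approximation and the example numbers (f/8, a 0.02mm blur criterion, 1:1 magnification, a 30mm deep subject, 20% overlap) are my own illustrative assumptions, not values from any particular App:

```python
import math

# Common macro DoF approximation: 2*N*c*(m+1)/m^2,
# where N = f-number, c = circle of confusion, m = magnification.

def macro_dof_mm(f_number, coc_mm, magnification):
    """Approximate total depth of field (mm) at macro magnifications."""
    m = magnification
    return 2 * f_number * coc_mm * (m + 1) / (m ** 2)

def steps_needed(subject_depth_mm, dof_mm, overlap=0.2):
    """Number of exposures to cover the subject with overlapping slices."""
    usable = dof_mm * (1 - overlap)   # each new slice advances by this much
    return math.ceil(subject_depth_mm / usable)

dof = macro_dof_mm(f_number=8, coc_mm=0.02, magnification=1.0)  # ~0.64 mm
print(steps_needed(subject_depth_mm=30, dof_mm=dof))            # prints 59
```

With a sub-mm slice, even a 3cm subject needs tens of frames, which is exactly why macro stackers reach for focus rails.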
Landscape focus stacking is different, and we thus need to approach our focus stacking workflow in a different way. There is no need for focus rails or for capturing 10s of images for landscapes.
As I have written before, I have settled on three (iOS) Apps from georgedouvos.com. I believe George’s three Apps represent an essential set of tools to help photographers achieve tack sharp (landscape or architectural) image capture.
The ‘complication’ we face in landscape/architectural focus stacking is that our image capture focus distances are non-linear and make use of hyperfocal calculations. For example, using the FocusStacker App and deciding to capture for an image blur ‘quality’ of 20 microns, at the 24mm end of my lens I know I need to take three images at 3, 5 and 15ft. These three images, once merged in a focus stacking program, will provide a ‘tack sharp’ image (20 micron blur spot) from about 2.5ft to infinity.
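For the curious, a 3, 5 and 15 ft sequence is consistent with the classic hyperfocal series H, H/3, H/5, where each frame’s depth of field abuts the next. Here is a hedged Python sketch, assuming a simple thin-lens hyperfocal formula and roughly f/6.3 (my assumption, to make the numbers line up) at 24mm with the 20 micron criterion:

```python
# Focusing at H covers H/2..infinity; at H/3 covers H/4..H/2;
# at H/5 covers H/6..H/4 — hence the 2.5ft-to-infinity result.

def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance in mm (simple thin-lens approximation)."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def stack_distances_ft(focal_mm, f_number, coc_mm, shots=3):
    """Focus distances H, H/3, H/5, ... converted to feet."""
    h = hyperfocal_mm(focal_mm, f_number, coc_mm)
    return [round(h / (2 * k - 1) / 304.8, 1) for k in range(1, shots + 1)]

# 24mm at roughly f/6.3 with a 20 micron (0.02 mm) blur criterion:
print(stack_distances_ft(24, 6.3, 0.02))  # prints [15.1, 5.0, 3.0]
```

The point is that the focus distances bunch up towards the camera, which is why landscape stacking needs only a handful of frames rather than the tens that macro work demands.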
My first, manual, attempts at landscape/architectural focus stacking were based on pre-calibrating my lens in the house, ie by focusing at 3, 5 and 15 ft and marking the lens rotation from one of the two lens end stops, ie the infinity end or the ‘macro’ end.
This approach is relatively accurate and yielded good results. The limitation, however, is clear. For my 24-105mm lens, the three marks are only good for one configuration of focal length (my choice was 24mm) and depth of field (I chose 2.5ft to infinity). Clearly I could not mark my lens with every configuration of FL or depth of focus.
So I turned my thoughts to automation. Could my Promote Remote, CamRanger or Magic Lantern technology help me? The answer was: partially.
At the moment the ML developers are rather focused (pun intended) on other things, so my module-based feature request, related to focus stacking, is falling on deaf ears. My alternative thinking is that an ML script may get me close to an automatic solution: I will give this some more thought.
The Promote Remote needs further conversations with the boffins at Promote.
So what about the CamRanger? My experiments here were fruitful, in that I used the CamRanger, at my 3, 5 and 15 ft pre-focused points, to record the CamRanger steps needed to move the lens to the required distance from a known starting point, eg the lens rotation stop at the macro end. I found this a repeatable process, eg to move from the zero point to the 3, 5 and 15 ft focus points, I needed to step the lens, using CamRanger, by x, y and z ‘clicks’.
At the moment CamRanger cannot ‘record’ such a focus stack set, thus, in the field, I would need to zero the lens and then click the required number of times to the required focus points, eg 3, 5 and 15 ft. I could imagine carrying a ‘look up table’ of various sequences to cover differing FLs and distance capture needs. Remember you can use CamRanger to take shots (bracketed or non-bracketed) all without touching the camera. Thus CamRanger represents a semi-automatic approach that I will definitely be experimenting with.
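Such a ‘look up table’ might be sketched like this in Python. The click counts below are purely made-up placeholders, since the real x, y and z values would come from your own calibration:

```python
# Hypothetical sketch of the 'look up table' idea: the click counts
# are illustrative placeholders, not measured values.

CLICK_TABLE = {
    # (focal_length_mm, near_limit_ft): clicks from the macro end stop
    # to each focus point, keyed by focus distance in feet
    (24, 2.5): {3: 4, 5: 7, 15: 11},   # made-up click counts
}

def click_plan(focal_mm, near_ft):
    """Return (distance_ft, cumulative clicks from the zero point) pairs."""
    table = CLICK_TABLE[(focal_mm, near_ft)]
    return sorted(table.items())

for dist_ft, clicks in click_plan(24, 2.5):
    print(f"focus at {dist_ft} ft: step {clicks} clicks from the macro stop")
```

In the field the table would simply grow a new entry for each FL/near-limit combination you calibrate at home.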
But what about my goal of a fully automatic approach? Well, until ML, Promote Remote or CamRanger come up with such a capability, I believe I have the next best thing. That is using the Canon EOS (tethered) Utility (but I’m sure this would work for Nikon users as well) and the AutoHotKey scripting engine (AHK is at www.autohotkey.com) running on Windows (once again, I’m sure the Mac world has a similar scripting environment).
So far I have demonstrated the approach on my desktop PC and have ordered a Dell 8” Windows 8.1 Tablet to take the approach into the field. That is, the tablet will allow me to tether my 5DIII to the EOS Utility in the field (this alone will be useful) and to run focus stacking sequences via AHK scripts.
So what have I achieved so far?
I first got the EOS Utility and AHK running on my desktop PC. I then plugged my 5DIII into my PC, which automatically opened the EOS Utility. I could now operate my camera without touching it, including refocusing (assuming you have an auto-focus lens on the camera), directly from the EOS Utility.
I next carried out a calibration phase for the 24mm end of my 24-105mm F/4L: I used the EOS Utility to rotate the lens to its macro end stop and then step the lens to a known focus distance, ie to a focus target placed at 3, 5 and 15 ft in this proof of principle experiment. I now had the required number of EOS Utility steps/clicks to bring the lens to the three focus points from its (macro) end stop.
I then created an AutoHotKey script, which clicked the right things in the EOS Utility and Live View windows on the PC, such that all I had to do was open the AHK .exe file and the EOS Utility would be automatically operated to capture three images at 3, 5 and 15 ft, without any intervention from me. The ‘only’ downside is that it takes a few seconds between shots to reposition the lens.
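The logic of that script is simple enough to sketch in a few lines of Python. The EOSUtility class below is just a stand-in for the real tethered window that AHK clicks, not an actual API:

```python
# A language-neutral sketch of what the AHK script does: from the macro
# end stop, step the lens to each calibrated point and fire the shutter.

class EOSUtility:
    """Stand-in for the tethered EOS Utility window the AHK script drives."""
    def __init__(self):
        self.position = 0          # clicks from the macro end stop
        self.shots = []            # focus positions at which we fired

    def step_focus(self, clicks):
        self.position += clicks    # in AHK this would click the focus arrows

    def shutter(self):
        self.shots.append(self.position)

def run_stack(eos, click_counts):
    """Capture one frame at each calibrated focus position."""
    previous = 0
    for clicks in click_counts:
        eos.step_focus(clicks - previous)  # move only the incremental amount
        previous = clicks
        eos.shutter()

eos = EOSUtility()
run_stack(eos, [4, 7, 11])        # placeholder clicks for 3, 5 and 15 ft
print(eos.shots)                  # prints [4, 7, 11]
```

Note the incremental stepping: having zeroed the lens once at the end stop, each subsequent move only covers the difference from the previous focus point, which is also where those seconds between shots go.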
So where next?
Well, my Christmas experiment is to move the EOS/AHK capability to my new Dell Windows 8.1 8” tablet. I will also write a few more AHK scripts to cover other focusing needs, eg the 24-105 will focus down to about 1.5ft, hence I can imagine a script that operates over this focus range.
Bottom line: new technology is constantly emerging and we need to keep on experimenting!