In previous posts I have spoken about the need for focus stacking: not for macro work, where it is almost inevitable because of the shallow depth of field, ie of the order of a millimetre on my 100mm macro lens at a magnification of 1, but for landscapes.
When using a very wide angle lens, we can usually ‘get away’ with a single image. For example, on my 5DIII, if I use my 12mm lens at F/16 and focus at about 2ft, ie just past the hyperfocal distance (based on a non-stressing blur circle), everything from about 1ft to infinity will be in acceptable focus.
But what if I’m using my 24-105mm F/4L at, say, 24mm? The hyperfocal distance is now about 6ft, and my range of acceptable focus has ‘shrunk’ to 3ft to infinity, ie I’m losing 2ft in the near field.
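For those who like to see the arithmetic, the following is a rough Python sketch of the standard thin-lens hyperfocal and depth of field formulas; it is not ML's internal code, and the blur circle (CoC) values are simply assumptions chosen to land roughly on the numbers above.

# Rough thin-lens DOF helper -- a sketch, not ML's exact calculation.
# All distances are in mm; the CoC is the big unknown (see the note at the end of this post).

def hyperfocal(f, N, coc):
    """Hyperfocal distance for focal length f, aperture N and blur circle coc."""
    return f * f / (N * coc) + f

def dof_limits(s, f, N, coc):
    """Near and far limits of acceptable focus when focused at distance s."""
    H = hyperfocal(f, N, coc)
    near = s * (H - f) / (H + s - 2 * f)
    far = float('inf') if s >= H else s * (H - f) / (H - s)
    return near, far

mm_per_ft = 304.8

# 12mm at F/16 with a fairly strict CoC of ~0.015mm: the hyperfocal is about 2ft,
# and focusing there gives acceptable focus from about 1ft to infinity.
H12 = hyperfocal(12, 16, 0.015)
print(round(H12 / mm_per_ft, 1), [d / mm_per_ft for d in dof_limits(H12, 12, 16, 0.015)])

# 24mm at F/16: the hyperfocal jumps to roughly 6-8ft (depending on the CoC),
# and the near limit when focused there is about half that.
H24 = hyperfocal(24, 16, 0.015)
print(round(H24 / mm_per_ft, 1), round(dof_limits(H24, 24, 16, 0.015)[0] / mm_per_ft, 1))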
Of course this only becomes an issue if you wish to ensure focus over such a large range. But let’s say you do. How should you focus bracket?
There are apps out there to help you, eg the one I recommend is Focus Stacker from George Douvos, but these still leave the problem of how to set the focus points on the camera.
Well, Magic Lantern, once again, is able to help, as long as you have a lens that reports focus information to the camera, ie most Canon lenses or, from my experience, Sigma lenses.
To set up ML-enhanced focus stacking for landscapes, ensure the DOF display is on, under the ML Focus menu, and that you have the right units: I use cm for finer reporting. Then, in LV, you will see three pieces of critical information: the distance you are focusing at, ie the third piece of info from the right on the ML info bar, and, above this, the near and far limits of the depth of field.
As an example of how to carry out an ML-enhanced landscape focus stack, I set up a quick and dirty example in my kitchen tonight.
The daffodils were about 18in from the camera’s sensor plane, and at that focus, and at F/16, the depth of acceptable focus was about 6in behind and 3in in front, ie one image was not going to hack it.
So in LV, I simply noted what ML was saying, took my first image, and moved the focus ring so that the second image’s zone of focus overlapped the first’s. I then took another image, having noted the ML focus readings, then moved to my second overlap point, where I noted that ML said the far focus limit was at infinity, and took my third and final image.
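In effect, what I was doing by hand with the ML readout was walking the depth of field forward: focus each frame so that its near limit just reaches the previous frame’s far limit, and finish with a frame at the hyperfocal. The sketch below, which reuses the hyperfocal(), dof_limits() and mm_per_ft helpers from earlier, captures that logic; the 24mm focal length and 0.015mm CoC in the example are assumptions, not what ML actually reported.

def focus_bracket(start_mm, f, N, coc):
    """Focus distances (in mm, nearest first) covering start_mm out to infinity."""
    H = hyperfocal(f, N, coc)
    last_near, _ = dof_limits(H, f, N, coc)   # what the final (hyperfocal) frame covers
    distances = []
    near = start_mm
    while near < last_near:
        # the focus distance whose *near* DOF limit equals 'near',
        # ie inverting near = s(H - f)/(H + s - 2f)
        s = near * (H - 2 * f) / (H - f - near)
        distances.append(s)
        _, far = dof_limits(s, f, N, coc)
        near = far                            # the next frame starts where this one ends
    distances.append(H)                       # final frame at the hyperfocal, reaching infinity
    return distances

# eg starting at 18in with the zoom at 24mm and F/16 (assumed), three frames result
for d in focus_bracket(18 * 25.4, 24, 16, 0.015):
    print(round(d / mm_per_ft, 1), "ft")

In practice you would set each focus point slightly nearer than the computed one, so the zones overlap a little, which is exactly what eyeballing the ML readout gives you.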
At no time did I look at the focus ring on the lens. I simply used the ML-supplied information in LV.
I then ingested the images into LR, made a few exposure adjustments on the first image, eg highlights, and synced these to the other two. I then exported to Photoshop as layers from LR, auto-aligned the three images and auto-blended them.
After making the return trip to LR, I finished off the post-processing, eg sharpening and soft proofing for sRGB.
The resultant (usually boring) image shows that the scene is sharp from 18in to 'infinity'.
For me, once again, this shows the power of ML. Of course the ML side could be better, eg currently the user cannot select the CoC or the pupil ratio used in the depth of field calculation.
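As an illustration of why that would matter, here is how much the 24mm F/16 hyperfocal distance moves with a few plausible CoC choices, again using the hyperfocal() helper from the sketch above; the values are just examples, not anything ML uses.

# the hyperfocal at 24mm and F/16 for a strict, an intermediate and the commonly quoted
# full-frame CoC of about 0.03mm
for coc in (0.015, 0.019, 0.030):
    print(coc, "mm ->", round(hyperfocal(24, 16, coc) / mm_per_ft, 1), "ft")

The answer swings from about 8ft down to 4ft, so the CoC you are prepared to accept makes a real difference to where you should place your focus points.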