Thursday, June 6, 2019

What do you think?

As we create and store our images we derive personal pleasure; but what about sharing?

I know photographers who smother their images with watermarks, so that others may not 'steal' their work.

I must say, I feel a little more relaxed than most, and so I've decided to publish my images in a digitally presentable form.

Here are a few images from our Cornwall trip this week and I would welcome feedback on what you think about the presentation style.





Wednesday, May 29, 2019

Script Update

Just a quick post to say I've updated the Get Focus Bracketing script to better handle zoom lenses.

Simply take an image at the two focal length ends of your zoom lens, calculate the magnifications and register the lens in the script.
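By way of illustration only, ie not the script's actual registration format, a zoom lens entry might look something like this (the 12mm-end magnification is an illustrative number):

```lua
-- Hypothetical registration entry: magnification measured at the minimum
-- focus distance, at each end of the zoom range (names and numbers illustrative)
lenses["Sigma 12-24mm"] = { f_min = 12, f_max = 24, m_min = 0.05, m_max = 0.17 }
```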

Monday, May 27, 2019

Splitting Things Apart

As those that have been following my recent posts know, I’ve been trying to ‘solve’ the problem of focusing with Magic Lantern (ML), be it in an automatic focus bracketing script or in a script that provides manual focus information.

The issue can be ignored if one is focusing away from the 'macro' end of a lens, ie towards the hyperfocal and beyond; but as you focus towards the near end of a lens, there is a problem between ML and the real world.

The problem is that ML (and Canon) provide us with focus information relative to the sensor plane of the camera, whereas all the depth of field and hyperfocal equations that we use are referenced from the front principal plane of the lens, which we don't know and which can vary greatly according to the lens design and focal length: even being positioned outside the lens in some cases.

There are techniques to estimate the principal planes, but these are complex and hardly worth doing for 'normal' (non-scientific) photography. However, if we limit our attention to landscape-style photography, where we are seeking large depths of field, near to far, with wide angle lenses, then we can make a few pragmatic assumptions and try to 'resolve' the ML/Canon focus distance problem.

What we are seeking is a way to convert from the ML sensor-2-object distance to the lens-2-object distance.

To understand the problem, let's first remind ourselves of the thin lens model, on which virtually all depth of field equations are based. The model looks simple compared, say, to a modern complex lens, but it is nevertheless powerful.




The thin lens model, which assumes a symmetric lens with unity pupillary magnification, has a single principal plane, and all depth of field equations can be derived from the standard, so-called, conjugate equation (after correcting for image inversions, ie using the convention that real is positive), namely:

1/u + 1/v = 1/f

where u is the lens-to-object distance, v is the lens-to-image distance and f is the focal length.
With a little geometry we can easily show that the distances u and v may be derived as follows:

u = f(1 + m)/m and v = f(1 + m)
That is, u and v are derived just from the focal length (f) and the magnification (m).

In the thin lens model, the sensor to object distance x = u + v and, recognising that the magnification is simply the image size divided by the object size, with a little bit more math (ie solving the resulting quadratic in m), we can show that the magnification (m) is:

m = (x - 2f - √(x(x - 4f)))/(2f)
It certainly looks like we are on to a winner with the thin lens model, so let's test it against a real-world lens: in this instance, my Sigma 12-24mm DG HSM lens.

Like all manufacturers, Sigma does not provide information on the location of the principal planes, but they do provide the maximum magnification and the minimum focus distance. So let’s use this information to test the thin lens model.

As the Sigma 12-24mm is a zoom, the maximum magnification occurs at the longest focal length of 24mm. Sigma quotes a minimum focus distance of 280mm and the maximum magnification is stated as 0.17.

BTW it is very easy to test the above numbers, or derive your own reference numbers, by focusing at a suitable distance and taking an image of an object of known size, eg a ruler. Then in post you can estimate the magnification.
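For example, with illustrative numbers: if 100mm of the ruler spans 17mm of the sensor in the captured image, ie 17mm of a 36mm wide full frame sensor, then the magnification is simply 17/100 = 0.17.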

From the thin lens model we can easily show that x is:

x = u + v = f(1 + m)^2/m
Thus, using the Sigma lens's quoted maximum magnification (0.17) at its maximum focal length (24mm), we can calculate what the thin lens model thinks the minimum focus distance is, which should be 280mm.

In fact the thin lens model says we should be focused at x = 24 × (1 + 0.17)^2/0.17 ≈ 193mm, which clearly is not 280mm.

This flags up the first problem we have, namely the thin lens model is not universally useful over the entire focus range and becomes ‘challenged’ towards the macro end. So we need a better model, which is where the split thin lens model comes in.

In the split thin lens model we create two principal planes by splitting apart the standard thin lens model, ie by inserting a 'thickness' t between the front and rear principal planes, so that the sensor to object distance becomes x = t + u + v.
We can then solve for t, the split lens thickness, giving us:

t = X - F(1 + M)^2/M
where M is the magnification at focal length (F) and minimum focus distance (X): either taken from the manufacturer's specifications or measured yourself.
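As a quick sanity check, using the Sigma numbers above: t = 280 - 24 × (1 + 0.17)^2/0.17 ≈ 280 - 193 = 87mm at the 24mm end.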

For a zoom lens we simply measure the magnification at the two focal length extremes and scale in between. For WA zoom lenses a reasonable assumption is to linearly scale.
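For example, with an illustrative magnification of, say, 0.05 measured at the 12mm end, and the 0.17 above at the 24mm end, linear scaling would give an estimated magnification of 0.11 at 18mm.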


The final part of the model is to carry out a focus distance adjustment for m, by simply using the thin lens equation, but now with the focus distance corrected by the split lens thickness, ie using x - t. This magnification is 'only' used on the image side when bracketing, ie to estimate the lens to object distance.

We now have everything we need. 


In the case of a zoom lens, knowing the magnification at the minimum focus distance at the two focal length extremes allows us to estimate the split thin lens thickness at any focal length in between.

Once we know this, we can convert from the ML sensor-2-object distance (x) to the lens-2-object distance (u), via a simple mapping:

u = (x - t) - f(1 + m)

where t is our lens thickness at focal length f, and m is the magnification at that focal length and at focus distance x, ie calculated from the thin lens magnification equation above, with x replaced by the corrected distance x - t.
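Pulling all this together, here is a minimal Lua sketch of the above mapping. To be clear, this is my illustration of the math, not the actual GFBS code, and the 12mm-end magnification is an illustrative number:

```lua
-- Illustrative sketch of the split thin lens mapping (not the GFBS code).
-- Hypothetical zoom registration: magnification at the minimum focus
-- distance, measured at each focal length extreme (m_min is illustrative).
local lens = { f_min = 12, f_max = 24, m_min = 0.05, m_max = 0.17, mfd = 280 }

-- Linearly scale the minimum focus magnification at focal length fl
local function mag_at(fl)
  local k = (fl - lens.f_min) / (lens.f_max - lens.f_min)
  return lens.m_min + k * (lens.m_max - lens.m_min)
end

-- Split lens thickness at focal length fl: t = X - F(1 + M)^2/M
local function thickness(fl)
  local M = mag_at(fl)
  return lens.mfd - fl * (1 + M)^2 / M
end

-- Thin lens magnification at a corrected (lens-referenced) distance xc
local function magnification(xc, fl)
  return (xc - 2*fl - math.sqrt(xc * (xc - 4*fl))) / (2*fl)
end

-- Map an ML/Canon sensor-2-object distance x to a lens-2-object distance
local function lens2object(x, fl)
  local xc = x - thickness(fl)      -- correct for the split lens thickness
  local m  = magnification(xc, fl)  -- thin lens m at the corrected distance
  return xc - fl * (1 + m)          -- subtract the image distance v = f(1 + m)
end

print(lens2object(280, 24))         -- sanity check at the minimum focus distance
```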

Final reflections: clearly the above approach is only one way of bridging the gap in our lens understanding. Without knowing the details of a lens design, we could guesstimate the relationship between the ML/Canon distances and the lens to object distances; or we could make an assumption that the ML/Canon distance is the same as the lens to object distance.


Assuming that the lens to object distance is the same as the sensor to object distance is OK when you are focusing around and beyond the hyperfocal; however, as your focus moves more towards the macro end, you shouldn't make that assumption: at least when focus bracketing.

The split thin lens model is clearly not perfect, eg we ignore pupillary magnification; it also likely misses out on many nuances associated with optics, but it is most likely a better approach than ignoring the problem: at least when scripting.

Sunday, May 26, 2019

Seeing the Depth of Field

For those that are perhaps skeptical regarding the challenge of deep focus photography, eg for landscapes or cityscapes, I thought I'd post a few reflections.

Trying to do deep focus photography with a long lens is not recommended, as the number of focus brackets will be high.

Trying to offset the number of brackets by stopping down the aperture will not help that much, because of the increase in diffraction.

Thus deep focus photography is well suited to wide angle lenses that are not stopped down too much, ie 2-3 stops down from their widest aperture is a sweet spot.

When auto bracketing on my 5D3 I tend to use my 12-24mm Sigma.

To illustrate why 24mm is the longest focal length I would go to, consider the use case: the minimum focus distance of the Sigma is 280mm and the maximum magnification at the 24mm end is 0.17. If I use an aperture of, say, f/7.1, this will result in around 16 brackets from the 280mm focus to my infinity focus at 3 x hyperfocal, using my current bracketing algorithm with the bracket to bracket overlap set at 30 microns.


To further illustrate why you need this large number of brackets I've created a lo-fi video of how Helicon Focus stacks these 16 images. In this video you get a real impression of the limited depth of focus for each bracket.

I'll keep refining the script, because it's fun for me to do. For instance, it's now at release 1.65 ;)




Sunday, May 19, 2019

Focus on Flax

As I mainly post about technical stuff, I thought I’d upload an image of a field of flax near where we live. It was taken with my 5D3 and a Sigma 12-24mm at 12mm. 

The exposure was set via Magic Lantern ETTR at 1/200s. I had Dual-ISO switched on at 100/1600 and the aperture was set at f/8. 

I set the lens to the minimum focus distance of 280mm and let my auto focus bracketing script do the rest. 

After processing the Dual-ISO in Lightroom, I sent three images over to Photoshop and used layer masks to blend the images, after first auto aligning. 

After returning to Lightroom, I finished off the image, including selectively using the new texture slider both positively, for the field, and negatively, for the sky. 




Tuesday, May 14, 2019

Multi-image, Deep Focus Photography: Magic Lantern Helper Script


For those that seek to achieve 'deep focus' in their photography, ie from very near to infinity, you know that wide angle lenses are your friend. For example, an IRIX 11mm lens on a Full Frame camera, with an aperture set to f/10, will have a hyperfocal distance (H) of around 1.1m, at an infinity defocus blur of 11 microns. This follows from the Rule of Ten, where the hyperfocal in metres is simply FL/10, when the circle of confusion, in microns, is set to the focal length (FL) in mm and the aperture is f/10.
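As a quick check against the standard hyperfocal formula:

H ≈ FL^2/(N × c) = 11^2/(10 × 0.011) = 1100mm ≈ 1.1m

where N is the aperture (10) and c the circle of confusion (11 microns, ie 0.011mm).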
 
Such a single image set-up will provide very high-quality focus over a deep field, ie less than 11 microns of defocus blur, between 550mm (H/2) and infinity.

But what if there was a feature you needed to include in the focus at, say, 150mm? The simple answer is you couldn’t achieve this in a single image and maintain focus quality through the scene.

This is where multi-image, deep focus techniques come into play, ie focus bracketing.

One of the challenges with multi-image focus bracketing, apart from the wind, is knowing where to focus from image to image.

At the simplest level, for perfect focus bracketing, one would focus at H, H/3, H/5, H/7 etc. Thus, with four images, with the H/7 image being the shortest, having near and far depths of field at H/8 and H/6, the merged, focus stacked, depth of field will cover from H/8 to infinity.

Thus, in the example above, with an H of 1.1m, focusing at 1100/7, ie at about 157mm, will achieve a four image depth of field from 1100/8 to infinity, ie from 137mm to infinity.

But notice how the near depth of field collapses with each image. For example, when focused at H, the near depth of field for that image was some 550mm short of the point of focus, ie at H/2. In the fourth image, taken at H/7, the near depth of field, ie relative to the point of focus, is H/56, some 20mm! In other words, you must really want that extra 20mm to do multi-image deep focus photography.
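To see how these numbers fall out, here is a minimal Lua sketch, ie my illustration of the 'perfect' bracketing sequence above, not the bracketing script itself:

```lua
-- 'Perfect' focus bracketing at H, H/3, H/5, ...: the k-th bracket is
-- focused at H/(2k - 1), with a depth of field from H/(2k) to H/(2k - 2),
-- the far limit of the first bracket being infinity.
local H = 1100 -- hyperfocal in mm, eg the 11mm f/10 example above

for k = 1, 4 do
  local focus = H / (2*k - 1)
  local near  = H / (2*k)
  local far   = (k == 1) and math.huge or H / (2*k - 2)
  print(string.format("bracket %d: focus %.0fmm, DoF from %.0fmm to %.0fmm",
                      k, focus, near, far))
end
```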

Of course, knowing distances in theory is fine on paper, but not much help in the field. So, as usual, Magic Lantern to the rescue; all assuming, of course, your lens reports focus distance etc. If it doesn’t, you are out of luck!

Over the years I’ve experimented with various ways of getting focus feedback using ML. I’ve tried automatically moving the lens, which has got better as Lua has matured (thanks to a1ex at ML); plus I’ve tried various Focus Bar arrangements that provide visual feedback to the user.

Although I like my latest focus bar (DoFBar), the downside is its (LV) readability in the field, especially without a hood/shade in bright sunlight. Plus, I (over)loaded it with features: that is, it's just too complicated.

Last month I released my latest auto focus bracketing script (GFBS), which, IMHO, runs well: at least on my 5D3. Today, I’m releasing the latest version of my focus bar, that I’ve simplified, and targeted 100% towards deep focus photography and in-field (LV) viewing.

I'm calling this script the Bracketeer; and you can download it from the link on the right.

I believe the Bracketeer does three things rather well:

  • First, it continuously shows you the defocus blur at infinity, the diffraction blur and the total blur at infinity, ie the defocus and diffraction blurs taken in quadrature. In addition to this blur information in microns, the script displays a simple traffic light system as an aid to focusing (see the sketch after this list):
    • Red: focus is less than H (H being based on the blur as set in the ML CoC, which is used as the overlap defocus blur)
    • Yellow: focus is between H and 2*H, ie infinity defocus blur between the overlap blur and half of the overlap blur
    • Green: focus is between 2*H and 4*H, ie overlap_blur/2 to overlap_blur/4
    • Orange: focus is greater than 4*H and less than 'infinity', ie you are now over focusing, but still based on camera distance information
    • White: focus is at 'infinity', ie there is no distance information to be gained from the camera
  • Second, it provides a continuous estimate of the number of focus brackets from the current point of focus to the current hyperfocal;

  • Third, the killer feature: the visualisation of the image to image focus overlap, ie between the current point of focus and the last image captured. The visualisation is prioritised to show the amount of overlap or the overlap gap.
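As an illustration of the blur arithmetic behind the traffic lights, here is a sketch in Lua; to be clear, this is my own rendering of the logic described above, not the Bracketeer's actual code:

```lua
-- Illustrative blur arithmetic (not the Bracketeer's actual code).
-- Blurs in microns; distances in mm; wavelength (lambda) in microns.
local coc    = 29     -- ML-set circle of confusion, ie the overlap blur
local lambda = 0.550  -- visible band; ~0.850 for an IR-converted body

-- Hyperfocal distance: H = f^2/(N*c), with coc converted from microns to mm
local function hyperfocal(f, N)
  return f * f / (N * coc / 1000)
end

-- Defocus blur at infinity when focused at distance u: roughly coc * H/u
local function infinity_blur(u, f, N)
  return coc * hyperfocal(f, N) / u
end

-- Diffraction blur, taken here as the Airy disk diameter: 2.44 * lambda * N
local function diffraction_blur(N)
  return 2.44 * lambda * N
end

-- Total blur at infinity: defocus and diffraction taken in quadrature
local function total_blur(u, f, N)
  local d = infinity_blur(u, f, N)
  local q = diffraction_blur(N)
  return math.sqrt(d*d + q*q)
end

-- Traffic light classification, as described in the list above
local function traffic_light(u, f, N)
  local H = hyperfocal(f, N)
  if u < H    then return "red"    end
  if u < 2*H  then return "yellow" end
  if u <= 4*H then return "green"  end
  return "orange" -- over focusing; 'white' when the lens reports infinity
end
```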

The script uses the ML-set circle of confusion as the overlap (defocus) blur criterion. For a full frame camera I recommend this be set to around 20-30 microns, and crop scaled on an APS-C camera. The script can be tweaked for IR, ie use a wavelength in the script of 0.850 microns, say, compared to 0.550 for a visible band camera.

The script has a simple menu, ie it's either on or off; and it can be used in any direction, ie near to far or far to near.

Once running, the script continuously displays the three pieces of focus information. If the script is loaded alongside the auto bracketing script, the auto bracketing script will deconflict itself, ie you can't have both scripts running at the same time. However, I recommend these two scripts be loaded as a pair; but note the auto bracketing script requires an AF lens, whereas the Bracketeer doesn't.

The following screen captures illustrate the UI and show the traffic lights in action.






In the above we see the point of focus moving through the red, yellow and green states of focus. We also see the blur information changing. The final orange traffic light warns us that we are now focusing past 4*H. If we were at infinity, this traffic light would show white.

The full focus bracketing (bar) feedback only kicks in once an image has been taken. Before an image has been captured, the top and bottom bars show the current focus info relative to the focus state at the script's start-up, ie time zero.

After an image is captured, the top bar will show the last image’s depth of field, whereas the lower bar will always show the current point of focus’s depth of field. 

The following screen captures illustrate the two bars in action, with the red 'zone' showing a focus gap. The left hand side of the bar display is positioned at the minimum of the two near depths of field. The right hand side is positioned either at H, if one of the bars' far depths of field is greater than H, or at the maximum of the two bars' far depths of field if both are less than H.






Finally, here is a test image I took using the Bracketeer script, running on my IR-converted EOSM. The focal length was 11mm, the ISO 100, the aperture was set to f/6.3 and the shutter was at 10s. Using the ExifTool GUI, we can see the Canon-recorded (upper) focus information for the images taken: 0.19m, 0.25m, 0.40m, 1.54m and 3.84m.



Rather than drone on about the script, I will leave those with a curious mind to try it out (removing any of my old/legacy scripts first). As usual, I welcome feedback of any kind, especially on how to make the script better.

Sunday, May 12, 2019

Late Spring Cleaning

As Magic Lantern Lua has evolved, alongside my Lua scripting skills, some of the scripts I wrote a while ago are now out of date. 

For this reason I’ve tidied up the list of scripts on the right, both ML and CHDK, and put a single link to the legacy ones, ie the ones I don’t support or recommend.