Sunday, March 8, 2020

Near-Far, Zero-Noise Bracketing

It’s a rather windy and wet Sunday, so an ideal opportunity to carry out some indoors photography experiments: this time ‘perfecting’ what I call the Near-Far, Zero-Noise Bracketing technique.

I’ve evolved this technique mainly for use with my manual lenses, to ensure I capture the ‘optimum’ 4-image, focus and exposure bracket set.

In this post I’m using my OM-D M5II with my manual Laowa 15mm f/4 Wide Angle Macro, which means I can focus ‘anywhere’, ie from (really) near to (far) infinity.

The technique, on my OM-D M5II, using an infinity blur criterion of 15 microns, is rather simple, and I consistently find it works. It goes like this: 

  • In manual mode, put the lens at the widest aperture, ie F/4 in this case, and focus on the nearest object you wish to see in focus; for this technique to work, this object will typically be greater than, say, 0.5m away. If it's closer than this, you may need to inject intermediate focus-exposure brackets;
  • Set the lens aperture to F/7.1 to F/8, but on my MFT camera, no more, as diffraction will begin to get the better of you;  
  • Using the OM-D LV, with exposure blinkies on and the LV histogram to guide you, set the exposure for the highlights and take your first image;  
  • Take note of the exposure setting, increase the exposure by 4 stops, and take your second image;  
  • Reset focus for the background. For example, I have set my lens infinity at the lens hard stop, using the Fotodiox DLX Stretch adapter, so all I need to do is set the focus to the hard stop point on the lens, where I know I will have optimum infinity focus;  
  • Take the third image at the current exposure;  
  • Finally, change the exposure by -4Ev and take the fourth and last image.
Ingest the four images into Lightroom, and use LR’s HDR capability to process the two exposure brackets. Then, in my case, I do a round trip with these two processed images to Helicon Focus. Once I’m back in LR, I process for a look.
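For completeness, the ±4 Ev adjustments are pure powers of two; a trivial sketch (the 1/60s base is just an assumed example, not from the kitchen shots above):

```python
# A +4 Ev exposure bracket, with aperture and ISO fixed, multiplies the
# shutter time by 2**4 = 16; stepping back by -4 Ev divides it by 16 again.
base_shutter_s = 1 / 60                       # assumed highlight exposure
shadow_shutter_s = base_shutter_s * 2 ** 4    # the +4 Ev shadow frame
print(f"{base_shutter_s:.4f}s -> {shadow_shutter_s:.4f}s")
```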

As an example, here are the four captured zero-noise images, just taken in my kitchen. Note the focus difference between the tap and the chairs:

Here are the two LR HDR merged images, ie covering near and far, merged exposure brackets:

Here is the final ‘Near-Far, Zero-Noise’ image. BTW it was a windy day outside, so some tree movement can be seen:

I hope some found this post of value/interest; and, as usual, I welcome feedback on this or any previous post.

Saturday, March 7, 2020

In-field Manual Lens Calibration

I have several manual lenses that I’ve acquired for my EOS cameras, for example: a rather unusual Venus Optics wide-angle lens, the Laowa 15mm f/4 Macro Lens; and a Rokinon 14mm f/2.8.

Like others, I try and get the maximum value out of these lenses by using them with adapters on mirrorless cameras, eg the Canon EOSM.

Recently I made the decision to downsize my travel gear and introduce a Micro Four Thirds (MFT) format camera: choosing to buy a second-hand Olympus OM-D M5 Mk 2.

I was drawn to the OM-D M5II because it was small and lightweight, and it had so many features, eg non-macro focus stacking.

My first adapter was a relatively cheap K&F Concept adapter. However, either this was a badly made one, or the manufacturer had the relative flange distances wrong; as infinity focus was way out. For example, the 14mm Rokinon’s infinity focus was at about 0.3m on the lens.

Some lenses can be manually calibrated by partially disassembling them, but not all. Plus, I had an idea that I wished to try out, namely, realizing an in-field calibration approach, whereby I would decide the infinity I needed for that shoot.

For example, the first useful 'infinity' would be at the ‘classical’ hyperfocal (H), eg giving an infinity blur of 30/crop microns. The second infinity would be where the infinity blur is around two sensor pixels, eg focusing at around 2H-3H, according to your camera. The third infinity is at the 'visible' infinity. As you move from H to the visible infinity, your blur at infinity will, of course, reduce, but so will your near depth of field, its limit moving from H/2 to H.
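To make the three 'infinities' concrete, here is a minimal thin-lens sketch; the 15mm/F8/15um values are illustrative assumptions, not a calibration of any particular lens:

```python
# Thin-lens sketch of the three 'infinities' above. The defocus blur at
# infinity, when focused at distance s, is approximately f^2/(N*s); the
# lens and aperture values below are illustrative assumptions only.

def hyperfocal_mm(f_mm, N, coc_um):
    """Classical hyperfocal distance in mm: f^2/(N*C) + f."""
    return (f_mm * f_mm) / (N * (coc_um / 1000.0)) + f_mm

def infinity_blur_um(f_mm, N, s_mm):
    """Approximate defocus blur at infinity (microns) when focused at s_mm."""
    return (f_mm * f_mm) / (N * s_mm) * 1000.0

f, N, coc = 15.0, 8.0, 15.0              # eg a 15mm lens at F/8, 15um CoC
H = hyperfocal_mm(f, N, coc)
for k in (1, 2, 3):                      # focus at H, 2H and 3H
    print(f"focus at {k}*H = {k * H / 1000:.2f}m -> "
          f"infinity blur {infinity_blur_um(f, N, k * H):.1f}um")
```

As expected, the infinity blur at H is (near enough) the CoC, halving at 2H and falling to a third of the CoC at 3H.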

So I purchased a Fotodiox DLX Stretch Lens Mount Adapter, obviously the Canon EOS EF/EF-S Lens to Micro Four Thirds (MFT) version, which comes with a Macro Focusing Helicoid and Magnetic Drop-In Filters, ie rear NDs. 

It was a bit of a gamble, as I didn’t know how the adapter, which is designed to work as a macro bellows, would handle the infinity correction.

Having now tried it on both manual lenses, I am pleased to report that the Stretch Adapter works fantastically well.

Once fitted to the OM-D M5II, all I do is: decide where I wish to calibrate, eg H, 3H or ‘visible infinity’; set my aperture to the widest it will go; set my lens focus ring to the infinity mark or even the infinity hard stop; and adjust the adapter until things look tack sharp on the zoomed-in LV.

For landscape photography this is great. But what about portrait photography? Once again, the adapter shows its utility. In this case I set the lens to a sensible distance that is marked on the lens, eg 1m, and go through the same process as above.

Bottom line: if you have EOS lenses and a mirrorless camera, but, sadly, not an EOSM, then you may be interested in acquiring the Fotodiox DLX Stretch Lens Mount Adapter, which will give you in-field, micro calibration of your manual lenses.

Augmented Reality Depth of Field

The current Depth of Field Info script (DOFI) was written to be as unobtrusive as possible, whilst displaying the maximum (blur) info associated with focus, for example when undertaking manual focus bracketing or setting the infinity focus.

The latest version of ML DOFI, as usual downloadable from the link on the right, now includes an augmented reality (AR) option, that shows the focus and various key distances (3*hyperfocal (3*H), H, H/3, H/5, H/7) on the ground plane, ie at the optimum focus positions for focus stacking.
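The reason those particular distances are the optimum stacking positions can be checked with the thin-lens DoF approximations; a quick sketch, with H normalised to 1:

```python
# Why the markers sit at H, H/3, H/5, H/7: with the thin-lens
# approximations, focusing at H/(2k-1) yields a DoF from H/(2k) to H/(2k-2),
# so consecutive focus brackets touch with no gaps. H is normalised to 1.
H = 1.0

def dof_limits(x):
    """Near and far DoF limits for focus distance x, hyperfocal H."""
    near = H * x / (H + x)
    far = H * x / (H - x) if x < H else float("inf")
    return near, far

for odd in (1, 3, 5, 7):
    near, far = dof_limits(H / odd)
    far_txt = "infinity" if far == float("inf") else f"H/{H / far:.0f}"
    print(f"focus at H/{odd}: DoF H/{H / near:.0f} to {far_txt}")
```

So each focus position's far limit lands exactly on the previous position's near limit: a seamless stack.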

Of course, because the script doesn’t know the camera’s full orientation, this version assumes that the ground plane’s height is specified by the user and that the ground plane is parallel to the axis of the lens. In most cases the AR mode will be used with the camera on a tripod.

The AR mode can be switched on and off in DOFI’s menu.

To use the AR mode, first ensure you have the correct info set in the menu, ie tripod height and camera format.

After that, all you need to do is centre the display on the infinity horizon, ie level the camera. The AR mode shows infinity as a black dot.

To illustrate DOFI’s AR mode, this screen capture shows what the AR display looks like if you are focused short of the ground plane’s intersection with the lens field of view, ie the point of focus is hidden. 

Here we see the hyperfocal shown by the yellow dot with a green bar, and the 3*H distance. For infinity focusing, the sweet spot is between H and 3H, that is, infinity blurs between the ML-set Circle of Confusion and a third of this. Going less than a third of the CoC is not likely to gain you anything, as your defocus blur will be less than two sensor pixels.

In this next screen capture we have refocused towards infinity and the point of focus, as projected on the ground plane, is shown as a red dot, indicating it is less than the hyperfocal.

Finally, this screen capture shows the focus dot has turned green, indicating we are focused at greater than the hyperfocal. We also see we are focused less than 3*H.

As is the case with all DoF info, the AR mode is there to aid/inform focus setting. DOFI’s DoF and FoV calculations are based on a thin lens model and are a reasonable approximation away from the macro end, say, at H/7 or greater.

Finally, I welcome feedback, especially ideas that could make DOFI better.

Monday, February 17, 2020

First thoughts on ‘switching’ to MFT

For British readers, saying that Micro Four Thirds cameras are like Marmite will be well understood, ie they divide opinion.

As a Canon-guy, and a Canon-guy that only uses Magic Lantern or CHDK augmented cameras (5D3, EOSM(Vis), EOSM(IR), EOSM3, G1X, G5X, G7X, S95), I have been reluctant to explore other camera manufacturers; although I did ‘play around’ with the Sony a6000 a couple of years ago, but sold it on.

However, a recent travel experience has led me to explore the Micro Four Thirds sensor option, with a crop of 2; and, specifically, the Olympus brand.

My recent experience pivoted around having to carry all my 5D3-based infrastructure with me on a tour of Scotland, ie in and out of hotels and out shooting day and night. A lot of heavy ‘stuff’, when you factor in the camera, the lenses, the (large) tripod, the gear head, the laptop etc etc etc.

I was drawn to the Olympus MFT cameras for five reasons: their size, relative to the 5D3; their mirrorless advantage that allows me to use ‘specialist’ lens adapters, such as the ND Throttle; their ‘button re-programmability’, but I won’t say much about their menus; their focus bracketing; and their Live Time feature, that allows one to see the exposure (and histogram) evolving in real time.

As this is an experiment for me, I decided to ‘go cheap’ and buy into a ‘last gen’ Olympus. Thus, I bought a second-hand OM-D M5 Mk II, rather than get a MkIII. I also ‘went cheap’ on dedicated lenses and decided to buy at the ‘kit end’ rather than the ‘pro end’: Olympus 14-42mm, 40-150mm, plus a Samyang 7.5mm Fish Eye.

In addition to the (dumb) ND Throttle, which is there for Long Exposure capture, I purchased a basic ‘dumb’ adapter for my Canon EOS lenses; as the OM-D M5II has great focus peaking, I think this will be OK for my type of photography. I don’t do sports or bird photography, which would require fast-acting AF tracking; ‘all’ I do is put a camera on a tripod and try and slow down.

To complete my ‘new’, lightweight travel set up, I will likely throw in my CHDK G7X or G5X, and, of course, my newly acquired Peak Design Travel Tripod: an incredible piece of engineering design.

Finally, to complete the downsizing, no more laptops for me when I’m on the road. In future I’ll simply use my existing iPad Pro 10.5, loaded with Lightroom and Affinity Photo (£16, one-off purchase), as my Photoshop substitute, as Adobe need to do a lot more with the iPad-based Photoshop before I consider it useable.

I’m looking forward to trying out my new, old stuff, and will write about my ‘on the road’ MFT experiences in future posts.

Saturday, February 1, 2020

Welcome to the DOFI family

In the last month I've completely overhauled my Depth of Field scripts.

I have done this for two reasons. First, I've refined my use of output-based focusing, eg knowing the infinity blurs; and second, I've now ported these ideas from my EOS Magic Lantern cameras (5D3 and various EOSMs) over to my CHDK Powershot cameras (G1X, G7X, G5X and S95).

My approach is to use my Depth of Field Information (DOFI) scripts to constantly tell me:
  • The optical, defocus blur at infinity in microns;
  • The diffraction blur through the scene in microns;
  • The total blur at infinity in microns;
  • An estimate of how many focus brackets will be required to seamlessly focus stack from the current position to the hyperfocal;
  • Where I am relative to the hyperfocal;
  • Whether the current position has a positive or negative overlap with the last image captured.
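I won't reproduce DOFI's internal algorithm here, but the bracket-count estimate in the list above can be sketched from the thin-lens 'touching DoF' focus positions at H/(2k-1); this is a hypothetical illustration, with assumed distances:

```python
# Hypothetical sketch of a bracket-count estimate (not DOFI's actual code):
# with touching-DoF focus positions at H/(2k-1), the number of frames needed
# from the current focus x out to the hyperfocal H follows from 2k-1 >= H/x.
import math

def brackets_to_hyperfocal(H_m, x_m):
    """Estimated frames needed to stack seamlessly from focus x_m to H_m."""
    return math.ceil((H_m / x_m + 1) / 2)

print(brackets_to_hyperfocal(2.4, 0.3))   # eg from 0.3m to a 2.4m hyperfocal
print(brackets_to_hyperfocal(2.4, 2.4))   # already at H: a single frame
```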
More information on the Magic Lantern version of DOFI, simply called DOFI, can be found here:

More information on the CHDK version of DOFI, called DOFIC, may be found here:

The third member of the DOFI family is called DOFIZ, which adds exposure bracketing control to DOFIC. More information on DOFIZ may be found here:

The latest version of the scripts will always be available from the script download list on the right.

Bottom line: for those that want optimum focus, you likely will not do better than using either DOFI or DOFIC; and for the lucky Powershot photographers, DOFIZ gives you full control over your exposure bracketing, including as you are focus stacking. 

As usual, I welcome feedback on the DOFI scripts.

Thursday, January 23, 2020

2020 Reflections on Focus: Part 1

As many know, I’ve enjoyed myself for some time trying to get the best out of focusing; especially in the following areas: 
  • Always knowing the impact of diffraction blur on my captures 
  • Always knowing the near and far depth of fields (DoFs) 
  • Understanding the impact of the defocus blur criterion, ie the so-called Circle of Confusion (CoC)
  • Understanding the impact, especially for non-macro, but close-up or near-field photography, of not knowing the ‘lens zero’ 
  • Knowing how many focus brackets I will need to take to ensure I cover the scene from the current focus to infinity 
  • Knowing where to position for the next focus, when focus bracketing, so there are no focus gaps 
  • Knowing the best/optimum infinity focus position for a required image quality
Hidden in the above are some real challenges.

For example, as (non-macro) photographers, we are comfortable with the concept of the hyperfocal distance (H). Where H is classically, and simply, written as ((f*f)/(N*C) + f); which, practically we can reduce to (f*f)/(N*C).

Where f is the focal length, N the aperture number and the C the CoC, ie the acceptable ‘out of focusness’, or defocus blur, that we can tolerate. The CoC varies according to the camera being used, the display medium (screen vs print), the size that the image is to be displayed, the distance the viewer is at, and the ‘scrutiny’, ie competition or not.
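The two forms of the formula can be compared directly; a quick sketch, using an assumed full frame setup (24mm, F/10, 30um CoC):

```python
# The classical hyperfocal, H = f^2/(N*C) + f, versus the practical
# simplification f^2/(N*C). The +f term is negligible at normal focal
# lengths, which is why the simplified form is safe to use in the field.

def H_full_mm(f_mm, N, coc_mm):
    return f_mm ** 2 / (N * coc_mm) + f_mm

def H_simple_mm(f_mm, N, coc_mm):
    return f_mm ** 2 / (N * coc_mm)

f, N, C = 24.0, 10.0, 0.030   # assumed example: 24mm, F/10, 30um CoC
print(f"full: {H_full_mm(f, N, C):.0f}mm, simple: {H_simple_mm(f, N, C):.0f}mm")
```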

But where is H measured from?

What is often not said, is that H, and the near and far DoFs that follow from H, are derived from the Thin Lens model and there is hardly a camera in the world that uses such a lens!

Your DSLR or P&S camera certainly doesn’t.

For non-macro photography, modern/real lenses may be modelled as thick versions of the thin lens, with a front principal plane and a rear principal plane, ie a lens thickness. For example, see this post for more information:

A classical (symmetrical) thin lens, of course, has a single lens element with the front and rear principal planes located at the centre of the, single, thin lens. But, real lenses are also, usually, not symmetrical, so one should (ideally) also account for the so-called pupillary magnification when estimating DoFs.

‘Luckily’, lens manufacturers help us out a lot, by telling us nothing about the above!!

Thus, we are forced to make use of the thin lens model or modification to it; whether we like it or not.

Fortunately, for most non-macro photographers the above lens nuances are irrelevant, as the uncertainty over where to measure H from (the lens zero or the sensor) is small relative to H itself. Plus, most photographers know that H is only there to guide them, and they know to add some ‘focus insurance’, eg focus beyond H, never in front of H.

Some, for example portrait or nature photographers, or those wishing to make artistic use of defocused areas in their images, will find the above more than enough for their needs. But what if you want to create a sharp, high quality print, covering from ‘near to far’? That is, you are a landscape photographer :-)

If you wished to capture your scene with a single image, you could, of course, seek to increase the DoF by closing down the aperture. But we know this is not a good way to go, as all we are doing is trading defocus blur for diffraction blur; plus, artistically, these two blurs are different.

High diffraction blur everywhere can hardly be called an artistic element of an image! As said above, defocus blur has the potential to help with your artistry, eg helping draw the viewer to certain areas of the image; whereas diffraction ‘softens’ the image everywhere.

The rest of this post is aimed at the reader who knows they want more (practically beneficial use) out of hyperfocal focusing, especially to achieve, so-called, ‘deep focus’: but are unsure how to achieve this.

I have come to call my approach to focusing: ‘output-based focusing’, as opposed to ‘input-based focusing’.

In input-based focusing we select/predetermine the focus, either through calculation, manually via the Live View or through AF, then lock everything down.

With an output-based approach, we additionally seek to dynamically adjust focus, knowing some additional information, to meet the image (output) presentational needs and, as we will see in future post, where necessary, augment focusing, eg through informed focus bracketing.

In this post we will restrict ourselves to a base use case: namely where we wish to maximise the focus quality in the image sharpness, from near to far: to infinity, but not beyond!

Let’s first discuss ‘infinity’. Simply put, we can practically define this as the point where the photographer focuses way into the far field and, when the captured image is reviewed, there is no focus-based difference between that image and one that was taken by focusing farther away.

Ignoring diffraction for now, another way of describing the above is to say that the lens defocus blur has reached a size, such that the viewer can’t discern a difference. From a theoretical perspective we can also sensibly say that, when the defocus blur is less than two sensor pixels, then we have reached that point. This is our (practical/sensible) definition of focus infinity.

[To complete the picture, we should mention, in passing, Rayleigh criterion, Airy disks, diffraction patterns, Bessel Functions and Bayer layers etc! But, ignoring all the science and maths, simply put, all we need to know is that we can’t resolve things that are less than the Airy radius, and pragmatically, for (digital) photographers, this translates to saying, it is pointless seeking (defocus) blurs much less than, say, two of your camera’s sensor pixels].

Of course, at the point of focus, the defocus blur is always zero. As we move away from the focus point, our defocus blur increases, but not symmetrically. This starts to hint at a good place to be, ie between the hyperfocal, where the defocus blur at infinity is the CoC, ie the ‘just acceptable’ defocus point, and the focus position where the infinity blur is around two sensor pixels.

The thing to note here is that this is camera specific. But then again, so is the hyperfocal (H), as it is based on the crop-sensitive ‘circle of confusion’: which is simply the defocus blur at infinity, when focused at the hyperfocal distance. Thus, knowing H means we really know one of the key pieces of information to allow us to go to an output-based approach to focusing. That is, focusing in a more informed way, using output-based microns of blur, rather than only worrying about input distances to ‘focus’ the lens.

To illustrate what this means, in a full frame camera, like my Canon 5D3, this equates to infinity blurs falling between the hyperfocal 30 microns (um) and 2 sensor pixels, ie 12um on my 5D3, and certainly not lower than 6 microns, ie a single pixel. On a crop sensor, you would adjust these numbers according to the camera, eg 30/crop and (1-2)*sensor-pixel-size. Once again, for now, ignoring diffraction.
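The 5D3 figures quoted above follow directly from the sensor geometry; a quick check, using the 5D3's roughly 36mm-wide, 5760-pixel sensor:

```python
# The 5D3's pixel pitch from its ~36mm sensor width and 5760-pixel rows,
# giving the one- and two-pixel defocus blur limits quoted in the text.
sensor_width_mm, h_pixels = 36.0, 5760
pitch_um = sensor_width_mm / h_pixels * 1000.0
print(f"pixel pitch ~{pitch_um:.2f}um, "
      f"1px limit ~{pitch_um:.1f}um, 2px limit ~{2 * pitch_um:.1f}um")
```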

We also know that if you focus at H, the near DoF will be at H/2 and the far DoF will be at infinity. We also know that if you focus at infinity, the far DoF is, of course, also at infinity; but that the near DoF has now moved to the hyperfocal. 

Key point: you cannot obtain acceptable focus at less than your chosen hyperfocal distance, without changing something, ie your acceptable CoC criterion or aperture etc. Or, put another way, always know the hyperfocal distance when doing landscapes, which, as we will see below, is easy.

Thus, adopting an ‘I always focus at infinity’ approach means that you are throwing away H/2’s worth of depth of field in the near field. Which in some situations may be OK: but not in all.

Assuming you are a cautious photographer, you will likely seek out a little ‘focus insurance’ and, rather than try and focus ‘exactly’ at the ‘hypothetical’ H, you will focus slightly beyond this. But where?

As we will see, up to twice H, ie short of the 2*pixel-pitch point, is a good place to settle. For example, focusing at 2*H means the ‘standard’ 30um infinity defocus blur falls to 15um, ie half of that at H.

The ‘convenience’ being that the only maths you need with this approach to focusing is knowing your hyperfocal and how to double it.

So, let’s look at output-based focusing, making use of a previous post, where I introduced my ‘Rule of Ten’ approach. The original post on RoT that I wrote may be found here:

But first, let’s remind ourselves of the impact of focusing beyond the hyperfocal.

Ignoring second order effects, the (non-macro) near and far DoFs may be approximated as: NDoF = H*x/(H+x) and FDoF = H*x/(H-x)
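Those two approximations take only a few lines of Python; here they are, with a 1.2m hyperfocal as an assumed example:

```python
# The thin-lens near/far DoF approximations, NDoF = H*x/(H+x) and
# FDoF = H*x/(H-x), with H the hyperfocal and x the focus distance
# (any units, as long as they match).

def near_dof(H, x):
    return H * x / (H + x)

def far_dof(H, x):
    # At or beyond H the far limit is at infinity.
    return H * x / (H - x) if x < H else float("inf")

H = 1.2                        # eg a 1.2m hyperfocal
print(near_dof(H, H))          # focus at H: near limit is H/2 = 0.6m
print(near_dof(H, 2 * H))      # focus at 2H: near limit is 2H/3 = 0.8m
```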

Let’s ignore the far DoF, as this is at infinity if we are focusing beyond H, and only look at the near DoF, and ask the question: what have we lost by focusing at 2*H, rather than at H?

The NDoF approximation tells us that when x is 2*H, the NDoF will be at (H*2H)/(H+2H) = 2H/3. That is, we have ‘lost’ H/6 worth of focus, ie (2H/3 – H/2). So, if your hyperfocal is, say, at 1.2m, and you instead focus at 2.4m, all you have lost in the near field is 200mm of depth of field. But, of course, at infinity your focus quality has doubled, ie from the, just acceptable, CoC-based defocus blur to half of that.

Further, we are now seeing where, for the landscape photographer, the (camera-specific) focus sweet spot is; namely, and assuming you are using a sensible aperture, eg F/8-10, between the hyperfocal you are using and where the defocus blur is, say, 2 pixels. So, on my 5D3 this means between H and around 3*H.

Thus, we now have a reasonable infinity focus (starting) strategy, covering us when we focus beyond H and towards infinity. First, know your H, double it and focus there, then check that you are content with the focus cover, which now starts at 2H/3. Job done!

At this point, many will be saying, OK, but this isn’t much help to me, as: I don’t know where H is; and it’s too complicated for me to calculate it in my head; and I’m not going to muck about with an App on my phone or a calculator: I just want to take pictures!

WARNING: Ignore those that say, focusing at one third into the scene is good enough. This is based on a myth that depth of field is split 2/3 in the far field and 1/3 in the near field. This is only true when focusing at H/3, ie uncomfortably less than H, and we don’t want to be there! Having read this post, you know you can do much better than this!

So, let’s use the ‘Rule of 10’ (RoT) focus distance to progress our ideas.

The RoT states that, at an aperture of F/10, the hyperfocal distance in meters is the focal length, in mm, divided by 10, at a CoC (in microns) equal to your focal length (in mm).

As an example, assume I’m shooting with a 24-105mm lens at 24mm, at F/10: a reasonable place for a landscape photographer to be on a full frame camera. My hyperfocal distance is then at 2.4m, ie the focal length in mm divided by 10; and, at this focus, the CoC will be 24 microns, ie slightly better than the usually used 30um on a full frame (or 30/crop on a crop sensor camera; say, for convenience, 20 on a typical DSLR crop sensor).
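The RoT shortcut can be checked against the simplified hyperfocal formula; a quick sketch for the 24mm example:

```python
# Rule of Ten check for 24mm at F/10: with the CoC (microns) set equal to
# the focal length (mm), the simplified hyperfocal f^2/(N*C) lands at
# exactly (focal length)/10 metres.
f_mm = 24.0
rot_H_m = f_mm / 10.0                            # RoT: 2.4m
coc_mm = f_mm / 1000.0                           # RoT CoC: 24um
exact_H_m = (f_mm ** 2 / (10.0 * coc_mm)) / 1000.0
print(rot_H_m, exact_H_m)                        # both ~2.4m
```

The agreement is no accident: substituting N = 10 and C = f/1000 into f²/(N·C) gives f/10 metres for any focal length, which is the whole trick.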

Although you can use RoT with any length lens, it really comes into its own for those shooting with wide angle lenses, say wider than 30mm on a full frame, who wish to achieve a high-quality focus at infinity and maximise the depth of field in the near field.

As an example, let’s now assume I’ve switched to my 12mm prime lens and it is set to the RoT aperture, ie F/10.

The RoT distance, in meters, is simply the (focal length in mm)/10 = 12/10, ie the hyperfocal is at 1.2m. At this RoT distance, the RoT CoC is 12 microns, ie the focal length. For high quality work this is about as good as I’m going to get.

But, if I knew I was ‘only’ shooting for on-line/projector display, ie not print scrutiny in a competition, I might think 12 microns is a bit of an overkill, thus I could comfortably ‘back off’ the CoC to, say, 24 microns, ie double the RoT number. So, rather than focus at 1.2m, I move my hyperfocal to 0.6m, ie half of 1.2m. At this adjusted RoT-based H, I know that my near DoF is always half of H (near DoF = H/2), thus giving me a near DoF of 0.6/2 = 0.3m. All done in my head, with no calculators or look-up tables, and all I needed to do was know my focal length and do some doubling or halving of low digit numbers.
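The 12mm ‘back off’ arithmetic above, as a sketch:

```python
# Relaxing the 12mm RoT example: doubling the CoC from 12um to 24um halves
# the hyperfocal, and the near DoF when focused at H is always H/2.
f_mm = 12.0
H_rot_m = f_mm / 10.0            # 1.2m at the RoT CoC of 12um
H_relaxed_m = H_rot_m / 2.0      # 0.6m once the CoC is doubled to 24um
near_dof_m = H_relaxed_m / 2.0   # 0.3m near depth of field limit
print(H_relaxed_m, near_dof_m)
```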

I’ll leave the reader to experiment with the output-based RoT approach, as you can use it in many ways to help with your specific focusing needs, including using it to inform artistic-based focusing: but that’s another story, for another time.

As this is the first of several posts I intend to publish on focusing, I’m going to stop at this point, as I think we have achieved a sound, single image, output-based, starting point. In future posts I will discuss using RoT to inform focus bracketing and then we will progress to how the Canon photographer, who uses Magic Lantern or CHDK technology, can make use of my in-camera focusing ‘apps’, ie scripts. For now, I suggest, irrespective of what camera you use, you focus on honing your RoT skills, to always know your hyperfocal ;-)

BTW if you have any questions on the RoT approach, or anything else I’ve said in this post, please feel free to add a comment at the bottom. I will always post a reply ;-)

Tuesday, January 7, 2020

Extreme ETTR processing

 *** This is an update on the original post ***

Ignoring the artistic dimension of an image, which includes focus etc, there are two technical areas that we tend to worry about: dynamic range and, what I call, tonal quality.

Dynamic range addresses clipping, ie lost data, eg as long as we have not clipped highlights or shadows, we have fully captured the dynamic range.

But DR doesn't address the quality of the tonal resolution we captured, which talks to the post processing dimension, ie moving the data around for artistic reasons and ensuring tonal 'smoothness'. This tonal dimension accounts for the way a digital file is captured, eg:

The above shows the 'problem': the image data on the left, ie in the shadows, has lower tonal bandwidth (or resolution, if you like) per stop than the image data in the highlights. The fact is, half of our tonal resolution is in the rightmost stop.
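That 'half in the rightmost stop' claim follows directly from how a linear raw file quantises light; a sketch, assuming a 14-bit file like the 5D3's:

```python
# In a linear 14-bit raw file, each stop down from saturation spans half
# the raw levels of the stop above it, so the brightest stop holds half of
# all 2**14 levels, the next a quarter, and so on down into the shadows.
total_levels = 2 ** 14
for stop in range(1, 6):
    levels = total_levels // (2 ** stop)
    print(f"stop {stop} below saturation: {levels} levels")
```

Hence the appeal of ETTR: the further right you place the data, the more levels each tone gets to work with in post.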

So I thought I would do an experiment and try some extreme ETTR, ie use ML's ETTR plus activating Dual-ISO.

During my trip to Scotland last year, I had the opportunity to shoot some extreme ETTR shots. That is shots where I wanted to maximise the tonal data. 

My 'mistake' was not creating a reference image, ie ETTR alone. But we're all human ;-)

I used Magic Lantern's Dual-ISO, to ISO bracket at 100 and 1600 in the same image.

As an example of what such extreme image capture looks like, here is the base/RAW capture that I took with my 5D3:

If you zoom in you will see the interlaced Dual-ISO capture, that is, the lines are switched between using ISO 100 and ISO 1600.

The histogram in Lightroom looks like this:

As I say, an extreme ETTR capture.

The first job is to create a base image from the Dual-ISO, which is simply accomplished with the Dual-ISO Lightroom plugin: resulting in this base image ready for post processing:

The LR histogram now looking like this:

The final stage is to first use LR to create an image suitable for Photoshop, where I used various TKActions to help me arrive at the final 4:3 ratio image:

Clearly, such extreme image capture is not to everyone's taste. But each to their own I say :-)