Sunday, January 21, 2018

What would we do without Live View?

[Note I've updated this post to be clearer about the Magic Lantern advantage]

As part of the process of (re)acquainting myself with my 24mm TSE (tilt/shift) lens I decided to remind myself how to use the tilt function: a function we owe to Austrian army Captain Theodor Scheimpflug, who devised a systematic method for correcting perspective distortion in aerial photographs. Plus it was an indoor day, as it was cold, wet and sleety outside.


As we know, a lens that tilts allows us to control the plane of focus and move it away from being orthogonal to the lens axis. Although there are ways of setting the tilt, based on calculations, a far easier approach is to use the Live View screen.

The test scene was a row of our cat’s toys, which she then decided to take an interest in: so I had a live model as well.

The objective was to get everything tack sharp from the near to the far: something that would be near impossible with a normal 24mm lens.

After I had angled the lens axis down by about 30 degrees, thanks to my geared head, I set about the (Magic Lantern enhanced) focusing process, namely:
  • Set the aperture to as wide as possible, ie F/3.5 for me;
  • Zoom in (x10) on the LV screen and set focus on the nearest object you wish to see in focus;
  • Still zoomed in, move to farthest object and use tilt to bring that into focus;
  • Go back and forth until you are satisfied that focus is not changing;
  • Set the aperture to about two stops down from its widest, ie F/7.1 in my case;
  • Use ML ETTR to set the shutter speed, ie 2.5s at ISO 100 for this indoor shot. Note this is the ML advantage, ie metering directly off the sensor's Live View after you have tilted. Without ML you should meter before you tilt, as the camera's metering will likely be thrown off.
In the end I needed about 3 degrees of tilt, but because I was using the LV process to set focus, the angle became irrelevant. As for focus, I was at about 1.3m according to my lens reporting, thanks to Magic Lantern.
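For those curious about the calculation route mentioned above, Merklinger's simplified Scheimpflug relation puts the hinge line of the focus plane a distance J below the lens, with J = f/sin(tilt). A minimal sketch in Python; the 0.5m hinge depth below is purely illustrative, not measured from my setup:

```python
import math

def hinge_distance(focal_mm, tilt_deg):
    """Merklinger's relation: the plane of focus pivots about a line
    a distance J below the lens, where J = f / sin(tilt)."""
    return focal_mm / math.sin(math.radians(tilt_deg))

def tilt_for_hinge(focal_mm, hinge_mm):
    """Inverse: tilt angle needed to put the hinge at a given depth."""
    return math.degrees(math.asin(focal_mm / hinge_mm))

# 24mm lens with ~3 degrees of tilt, as in this post's test shot
j = hinge_distance(24, 3)
print(f"Hinge line ~{j/1000:.2f} m below the lens")  # ~0.46 m

# If, say, the lens sits ~0.5m above the tabletop, the tilt needed is:
print(f"Tilt needed: {tilt_for_hinge(24, 500):.1f} degrees")
```

Which is exactly why the Live View method wins: you get the same answer by eye, without measuring anything.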

Here’s the final proof, ie that the TSE lens is an incredible tool: thanks Polly :-)



Friday, January 19, 2018

Shifting into a new tool


I’m about to buy a new tool, or is that a new toy, for my 24mm TSE Mk II L tilt/shift lens (https://goo.gl/9TB5gr); so I thought I would reacquaint myself with this incredible lens.

BTW I’m not disclosing what the new tool is yet: let’s keep the suspense going ;-)

Because the TSE-24mm reports focus, I’m able to use my Focus Bar script to achieve the optimum focus, which in this case was about 5m; I was at F/11 and wanted to achieve an infinity blur of around 10 microns.
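The arithmetic behind that focus distance is just the thin-lens blur estimate used throughout this blog, ie infinity blur = f^2/(N*S). A quick sketch of the sum my Focus Bar script does on camera:

```python
def focus_for_infinity_blur(focal_mm, f_number, blur_mm):
    """Focus distance S such that the defocus blur at infinity is
    approximately blur_mm, from rearranging blur = f^2 / (N * S)."""
    return focal_mm ** 2 / (f_number * blur_mm)

# 24mm lens at F/11, targeting a 10 micron (0.010mm) infinity blur
s = focus_for_infinity_blur(24, 11, 0.010)
print(f"Focus at about {s/1000:.1f} m")  # ~5.2m, ie the ~5m in the post
```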

The indoor test image (of our cat Polly enjoying herself) was taken with the help of Magic Lantern, ie I used the RAW spotmeter to ensure the shadow areas where I wished to see details were correctly exposed, then I selected the ML Auto Bracketing to ensure I captured the highlights.

Before taking the images, because I was using my gear head, I put the 5D3 into its electronic level mode, and ensured the horizontal and verticals were nulled out.

I chose to shoot in landscape mode, to maximise the field of view of a two-image pano. I then set the composition by first shifting the full 12mm to the right, and let ML take an exposure bracket set. Then I simply shifted fully to the left, as in landscape mode there is sufficient overlap to ‘get away’ with two images, and let ML take another bracket set.

After ingesting into Lightroom, I made a virtual copy of the highlight image from each bracket set and reduced these two images by 2Ev in LR, ie to (virtually) extend the bracket set by one.

I then merged each exposure bracket set in LR, carried out a few exposure corrections on one of the merged images, and synced these to the other. Finally, I used the LR pano merge, with boundary warp to correct a very minor edge offset, to achieve the final image.

Here is a screen view from Lightroom and a JPEG of the final 9719x3828 pano, ie a 32in print at 300 dpi.


Bottom line: I’ve been reminded that the 24mm TSE is, most probably, my best quality lens. I can’t wait for my new tool to arrive so I can take this incredible lens to new heights ;-)



Thursday, January 18, 2018

Further experiments with F/22 Bracketing

Had a chance tonight to play around with the F/22 Bracketing idea and came across the first problem: sensor dust etc.

That is, at F/22, of course, any dust on the sensor will show up clearly.

Here is tonight's test image: this time taken with my Irix 11mm at F/22.

I took seven manual images between about 0.7m and 1m.

The sensor dust is clear on the wall.


Bottom line: this could be the end of the F/22 Bracketing idea :-(

Wednesday, January 17, 2018

An alternative/new approach to Focus Stacking



In previous posts I have spent a lot of time addressing landscape (sic) focus stacking: that is, non-macro focus stacking, where the focus point between frames needs to be moved a considerable distance; and a distance that is, relative to macro focus stacking, difficult to calculate.

Although Magic Lantern has a focus stacking feature, this is better suited to macro-biased focus stacking, ie it can’t cope with landscapes. Hence my efforts to develop landscape focus stacking tools, such as my auto landscape bracketing script and my focus bar script.

In this post I offer an alternative approach and one that, at first, may appear a wasted effort, because we are going to use the smallest aperture of F/22.

Reading that last sentence may have forced some to look away and scoff. After all, we all know the diffraction effect, which, simply put, convolves an additional blur on top of the lens defocus blur. In practice we combine the two dominant blurs in so-called quadrature, ie total_blur = SQRT[defocus_blur^2 + diffraction_blur^2].

However, stay with me and read the ‘bottom line’: but first a few reminders.

We all know that diffraction is related only to the aperture. That is, the diffraction blur is simply equal to k*N, where k is a constant and N is the aperture number. Thus the diffraction blur at F/16 is twice that at F/8, and at F/22 we have twice the diffraction blur that we have at F/11.

The other characteristic of diffraction blur is that it is relatively constant through the scene, unlike defocus blur that varies throughout the scene and that is zero only at the point of focus.
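To put numbers on this, a common estimate for the diffraction blur (Airy disk) diameter is 2.44*lambda*N, which for green light (lambda of 0.55 microns, an assumption on my part) gives k of about 1.34 microns per unit of aperture number. A sketch of the quadrature sum above:

```python
import math

WAVELENGTH_UM = 0.55  # green light; an assumed representative wavelength

def diffraction_blur_um(n):
    """Airy disk diameter estimate: 2.44 * lambda * N, in microns."""
    return 2.44 * WAVELENGTH_UM * n

def total_blur_um(defocus_um, n):
    """Combine defocus and diffraction blurs in quadrature."""
    return math.hypot(defocus_um, diffraction_blur_um(n))

for n in (8, 11, 16, 22):
    print(f"F/{n}: diffraction ~{diffraction_blur_um(n):.1f} microns, "
          f"total with a 15 micron defocus ~{total_blur_um(15, n):.1f}")
```

Note how, at F/22, the diffraction term (about 30 microns) dominates a 15 micron defocus blur: hence the scoffing.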

If we focus at a distance S, the defocus blur at infinity can be estimated from (focal_length^2)/(N*S) and the so-called hyperfocal distance can be estimated (ignoring diffraction) from (focal_length^2)/(N*CoC), where the ‘circle of confusion’ is the defocus quality term, eg, say 0.03mm (30 microns) for an on-screen image, for a 35mm sensor.

Finally, for completeness, an estimate (sic) of the near and far depths of field can be obtained from the following:

Near DoF = H*S/(H+S) for all S focus distances

Far DoF = H*S/(H-S) for S less than H, ie at S greater or equal to H the Far DoF will be infinity

From the above we can see the basics of the hyperfocal approach, ie if we focus at the hyperfocal (S = H), the near and far DoFs are H/2 and infinity, respectively. We can also use H to estimate the ‘loss’ of near DoF if we focus past H, ie towards infinity: if we focus at a distance of 2*H then the near DoF limit moves out from H/2 to 2H/3; and if we focus at 3*H, it moves to 3H/4. In general, for x greater than or equal to 1, the near DoF is at x*H/(x+1).

The upside being that we gain quality in the far field, ie at a focus of 2*H the infinity defocus blur is half that at H, and at 3*H it is a third. In general, in the far field, the blur is never more than CoC/x, for x greater than or equal to 1; and it reaches this at infinity.
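The hyperfocal and near/far DoF equations above, and the x*H trade-off, can be sketched as follows (distances in mm; a 0.03mm CoC and my 24mm lens at F/11 are assumed for the illustration):

```python
def hyperfocal_mm(focal_mm, n, coc_mm=0.03):
    """Simplified hyperfocal distance: f^2 / (N * CoC)."""
    return focal_mm ** 2 / (n * coc_mm)

def near_dof_mm(h, s):
    """Near limit of depth of field: H*S / (H + S)."""
    return h * s / (h + s)

def far_dof_mm(h, s):
    """Far limit: H*S / (H - S); infinity once S >= H."""
    return float("inf") if s >= h else h * s / (h - s)

h = hyperfocal_mm(24, 11)   # ~1745mm for a 24mm lens at F/11
for x in (1, 2, 3):          # focus at H, 2*H and 3*H
    s = x * h
    print(f"x={x}: near DoF from {near_dof_mm(h, s)/1000:.2f}m, "
          f"far DoF {far_dof_mm(h, s)}, infinity blur CoC/{x}")
```

The printout shows the trade plainly: each step of x pushes the near limit out a little, but halves or thirds the infinity blur.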

Note, focusing between H and 3*H is a sweet spot. There is little point seeking infinity blurs less than twice the sensor’s pitch. As an example my 5D3 has a sensor pitch of about 6.3 microns, so blurs less than about 12 microns are rather worthless, as you need two pixels to create a line pair.

Finally, if we do decide to focus at infinity, as Harold Merklinger advocates, the near DoF will ‘collapse’ to H; the infinity blur will be zero (but note above) and the smallest feature we will be able to resolve in the scene will simply be the size of the aperture, ie focal_length/N.

Thus it appears we have three options if, in a single image, we wish to maximise the focus from infinity to a near point:
  • Focus at the hyperfocal and realise a DoF from H/2 to infinity, but only achieve an infinity quality (CoC criterion) of ‘just acceptable'
  • Focus at infinity and realise a DoF from H to infinity and achieve an infinity quality beyond (sic) that which is resolvable by the sensor
  • Focus between H and x*H (x greater than 1 and less than or equal to about 3) and realise a DoF from x*H/(x+1) to infinity and achieve an infinity quality (CoC criterion) of CoC/x
But let’s say none of the above works. That is, the near DoF is still too far away and/or the far field quality is not acceptable: then we need to resort to focus stacking.

In past posts I’ve covered the classical approach to (landscape) focus stacking: that is take several images, at different focus positions (eg using my auto bracketing script or my Focus Bar script to position the brackets), and combine these in a focus stacking programme, like Helicon Focus, Zerene Stacker or Photoshop. But in this post we will ignore this approach.

As we know, one alternative is to ‘recover’ the near DoF by simply decreasing the aperture. For example, if we first select F/11 and calculate H, then by stopping down to F/22, ie doubling the aperture number, we will halve H; and, of course, the near DoF limit will reduce to H/4, ie half of H/2.

However, in using F/22 we have also doubled the diffraction blur and the ‘wisdom’ out there is that we should not use (35mm) apertures much beyond F/11 or F/16. But let’s ignore what others say and experiment.

The ‘new’ technique, which I’m calling F/22 Bracketing, is relatively simple, and made simpler if you are using Magic Lantern: but the technique is usable without ML. It goes like this; we will assume a 24mm lens for illustration and use an infinity blur objective of 15 microns, ie half the standard CoC of 0.03mm:
  • Set the camera to F/22 to maximise the defocus DoF;
  • Estimate the (normal, ie 0.03 CoC without diffraction) hyperfocal distance, ie 24*24/(22*0.03), which is about 0.87m
  • As we are seeking a high quality image, ie an infinity defocus blur of 0.015mm, x is 2 (ie 0.03/0.015), so we need to focus at x*H, ie at about 1.7m;
  • Estimate the near DoF, from x*H/(x+1), which is 2H/3 or about 0.58m (at a blur of 0.03).
So at F/22, and maximising the far field (infinity) quality, we will capture an image with a DoF that runs from 0.58m, with a CoC criterion of 30 microns, to infinity, where the CoC criterion is 15 microns. Of course, in the far field the defocus blur will never exceed 0.015mm, but in the near field, objects closer than 0.58m will suffer defocus blurring beyond 30 microns.
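The worked numbers above fall straight out of the earlier formulas; as a check:

```python
def hyperfocal_mm(focal_mm, n, coc_mm=0.03):
    """Simplified hyperfocal distance: f^2 / (N * CoC)."""
    return focal_mm ** 2 / (n * coc_mm)

h = hyperfocal_mm(24, 22)     # 24mm lens at F/22
x = 2                          # CoC 0.03 / target infinity blur 0.015
focus = x * h                  # focus at x*H
near = x * h / (x + 1)         # near DoF limit at x*H

print(f"H = {h/1000:.2f}m, focus at {focus/1000:.2f}m, "
      f"near DoF from {near/1000:.2f}m")
# H = 0.87m, focus at 1.75m, near DoF from 0.58m
```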

BTW if we were using a wide angle 12mm lens the above would come out at:
  • H = 0.22m (@ CoC of 0.03)
  • Infinity defocus blur = CoC/2 = 0.015
  • Focus at 2*H = 0.44m to achieve the 0.015 infinity blur, ie high quality focus
  • Near DoF = 2*H/3 = 0.14m (@ a defocus blur of 0.03)

But we still haven’t dealt with that diffraction softening.

The key aspect of the technique is to ‘recover’ the diffraction blur by using a super-resolution approach, augmented by the power of Photoshop. That is simply take additional images around the point of focus, where each image is ‘pixel displaced’ relative to the others.

Although we are seeing some cameras with built-in sensor shifting, most cameras do not have this feature. A poor man’s version is either to physically move the camera slightly between frames, if you are hand-holding, or, as we are likely to be on a tripod with long shutter times, to slightly shift the focus between images.

If you have Magic Lantern you can simply go to the Focus menu and Focus Stacking. Select the number of images in front and behind the focus point, say, 3 for a total number of images of 7, and the step size to the smallest, ie 1. That’s it: ML does the rest.

The final stage is to post process in Photoshop by bringing all the images into a layered file, auto aligning the layers, creating a smart object out of the layers and using the median stacking mode. Finally, use the Smart Sharpen in Lens Blur mode to fine tune the data.
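For those without Photoshop, the median stacking step itself can be sketched with numpy; this assumes the frames have already been aligned (the alignment is the hard part, which Photoshop's Auto-Align Layers handles), and the noisy synthetic frames below are purely illustrative:

```python
import numpy as np

def median_stack(frames):
    """Median-combine a list of aligned frames (H x W x 3 arrays).
    The per-pixel median suppresses the uncorrelated part of the
    noise/blur, which is what this super-resolution trick relies on."""
    return np.median(np.stack(frames, axis=0), axis=0)

# Illustrative: seven 'frames' of the same scene with random noise
rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, size=(4, 4, 3))
frames = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(7)]
result = median_stack(frames)

# Residual error of the stack, vs ~0.05 noise in any single frame
print(np.abs(result - scene).mean())
```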

So is it all worth it?

Maybe, maybe not: but I had fun experimenting with the idea and trying it out.

By the way, here is a test shot I just did, with a zoomed in (2:1) screen shot of some detail. On the left is the image at the point of focus and on the right the ‘F/22 focus stacked’ one. I used three images either side of the point of focus, as taken by the ML focus stacking feature, and processed the resultant 7 images as above. There is certainly a quality difference, ie the F/22 approach is clearer/sharper.




Bottom line: I think there may be something in this F/22 Bracketing technique, and I can see it may have value for, say, indoor architectural shooting, ie not many trees blowing in the wind. I’ll carry on experimenting and report my findings in future posts.