Tuesday, January 7, 2020

Extreme ETTR processing

 *** This is an update on the original post ***

Ignoring the artistic dimension of an image, which includes focus and so on, there are two technical areas we tend to worry about: dynamic range and what I call tonal quality.

Dynamic range addresses clipping, ie lost data: as long as we have not clipped the highlights or shadows, we have fully captured the scene's dynamic range.

But DR doesn't address the quality of the tonal resolution we captured, which speaks to the post-processing dimension, ie moving the data around for artistic reasons while ensuring tonal 'smoothness'. This tonal dimension follows from the way a digital file is captured, eg:

The above shows the 'problem': the image data on the left, ie in the shadows, has lower tonal bandwidth (or resolution, if you like) per stop than the image data in the highlights. The fact is, half of our tonal resolution is in the right-most stop.
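
To see why, consider that a linear RAW file allocates half of its discrete levels to the brightest stop, a quarter to the next, and so on. A quick sketch (assuming a 14-bit sensor; the numbers are illustrative, not measurements):

```python
# Illustrative: levels per stop in a linear 14-bit RAW file.
bit_depth = 14
total_levels = 2 ** bit_depth  # 16384 discrete levels

# Walking down from the brightest stop, each stop holds half the levels
# of the one above it.
levels_per_stop = []
remaining = total_levels
for stop in range(5):  # top five stops, as an example
    top_half = remaining // 2
    levels_per_stop.append(top_half)
    remaining -= top_half

print(levels_per_stop)                    # [8192, 4096, 2048, 1024, 512]
print(levels_per_stop[0] / total_levels)  # 0.5, ie half the levels in the top stop
```

Hence the logic of ETTR: pushing the exposure to the right places the scene's tones where the sensor records the most levels.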

So I thought I would do an experiment and try some extreme ETTR, ie use ML's ETTR plus activating Dual-ISO.

During my trip to Scotland last year, I had the opportunity to shoot some extreme ETTR shots. That is shots where I wanted to maximise the tonal data. 

My 'mistake' was not creating a reference image, ie ETTR alone. But we're all human ;-)

I used Magic Lantern's Dual-ISO, to ISO bracket at 100 and 1600 in the same image.

As an example of what such extreme image capture looks like, here is the base/RAW capture that I took with my 5D3:

If you zoom in you will see the interlaced Dual-ISO capture, that is, the lines are switched between using ISO 100 and ISO 1600.

The histogram in Lightroom looks like this:

As I say, an extreme ETTR capture.

The first job is to create a base image from the Dual-ISO capture, which is simply accomplished with the Dual-ISO Lightroom plugin, resulting in this base image, ready for post processing:

The LR histogram now looks like this:

The final stage is to use LR to create an image suitable for Photoshop, where I used various TKActions to help me arrive at the final 4:3-ratio image:

Clearly, such extreme image capture is not to everyone's taste. But each to their own I say :-)

Tuesday, December 31, 2019

There’s Wide Angle photography, then there’s the Venus LAOWA 15mm F4 Wide Angle Macro photography

Finding myself with a few days over Christmas to ‘play around’ with my photography, I decided to remind myself of a few ‘specialist’ lenses I had acquired, but not really used.

As we know, macro photography requires either a macro lens, with a magnification of 1:1, or trying to get close with a normal lens, with or without extension tubes; although typically the closest you will get to a focused object is about 200mm from the sensor.

However, if you have the Venus LAOWA 15mm F4 Wide Angle Macro, you can focus at about 5mm from the front of the lens! 

In addition, this lens has a shift feature that allows shifts of +/- 6mm. As it’s a manual lens, you can also set the aperture to F/32.

So overall, it’s a very unusual lens, as you can see from my 5D3 set-up.

Of course, being a manual lens, Magic Lantern interaction is rather restricted; however, ML can still help out: for example, Auto ETTR works a treat, as does Dual-ISO.

As you can imagine, if you are focusing 5-10mm away from the front of the lens, where you can achieve the 1:1 magnification, light will be rather compromised.

However, as an example of what the lens can achieve, I set up an indoor test and focused on the front-most flower, used ML to get an ETTR exposure, switched on Dual-ISO, and then, using the DoF scale on the lens, 'simply' focus stacked manually out to infinity (a rather crude process where you have to guess the rotation, which is the same each time).

The resultant 11 images were ingested into Lightroom, Dual-ISO processed, and then round-tripped to Helicon Focus, before being finished off in Lightroom again. Here are the first and last images:

And here is the final processed image.

Bottom line: The Venus LAOWA 15mm F4 Wide Angle Macro is a unique lens, but one that requires a bit of effort to use. Luckily Magic Lantern, as usual, does some of the heavy lifting.

Sunday, December 1, 2019

Field testing the LE Simulator Script

The latest version of my Long Exposure (LE) Simulator script, which allows you to capture LE exposures without NDs, may be downloaded from the right-hand link.

This version tidies up a few things, to better link to my workflow, which is:

  • Compose and set the base exposure using your preferred method, eg ML ETTR, and remember it
  • Consider taking an Auto Bracketing exposure set (either now or after the MLV is captured)
  • Run the LE Sim script from the ML script menu, having set the time of the LE in the ML Bulb menu (note: don’t switch ML Bulb on).
  • If your base exposure is not within the script’s limits, ie between 0.15s and 5s, it will warn you
  • If the camera is not in LV mode, the script will ask that you put it in LV mode
  • The script will then set things up and ask that you put it in Canon MOVIE mode
  • The script will now take you to the ML Movie menu, where Full Resolution LV (FLV) will have been selected, along with RAW video
  • The script will ask you to switch on FPS override. You can adjust if required, ie to better match the desired base exposure
  • You should also check that the RAW Video capture is adequate, ie green and continuous. If not, adjust the FPS and, if required, exposure
  • Press the Trash button to exit the ML menus
  • You should now see the scene: if not press MENU twice, ie on and off
  • Do a final check of the composition and exposure, and go back to the ML menu to adjust if you need to
  • Do a 2+ second half-shutter press to capture the MLV LE simulation, or a less-than-2-second press to exit the script without taking an MLV
  • After exiting, the script will switch off the MLV video settings, but note that you will need to switch out of the Canon Movie mode to return to photo mode.
As for RAW processing, simply import into MLV App and export the MLV as an averaged TIFF.

As for post processing, well, that's down to your artistic desire. This is why I take an auto bracket set as well as the MLV video, as I can blend any of the auto brackets, or any of the individual MLV FLV frames, with the averaged MLV.

As an example, this image was taken at our weekend break at Port Quin, in Cornwall. I first captured an Auto Bracket set and then 133 frames worth of MLV FLV capture at 2 fps and a shutter of 1/3s. That is a simulated exposure of about 45s, ie 133 * 1/3s.
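
The simulated-exposure arithmetic can be sketched like this (the helper function is my own, purely illustrative):

```python
def simulated_exposure_s(frames, base_shutter_s):
    """Accumulated shutter time of an averaged frame stack (illustrative helper)."""
    return frames * base_shutter_s

# The Port Quin capture: 133 frames at a 1/3s shutter.
print(simulated_exposure_s(133, 1/3))   # ~44.3s, ie the 'about 45s' quoted
```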

I processed the MLV and blended the averaged result, for the land and sea, with one of the Auto Bracket captures that covered the sky, as I didn’t want to smooth out the sky.

The script appears pretty robust, at least on my 5D3, and thus now I can carry out 'emergency LE' captures, ie without NDs.

As usual I welcome comments and feedback on this post.

Sunday, November 24, 2019

No more ND filters?

So let's jump to the bottom line: you need ND filters. 

But what if you left them at home and see/imagine that killer Long Exposure (LE) shot? 

What if the ND you are carrying in the field is not enough, ie it will give you a 2-second exposure, but not a 20-second one?

This is where photographers turn to 'hacks' to help them out. For instance, if you need a 20s exposure and all you can get is a 2 second one, then simply take 10 images and use, for example, Photoshop to simulate a 20 second image by stacking the images and using either mean or median statistics.
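
Outside Photoshop, the same mean/median stacking can be sketched in a few lines of Python with NumPy (illustrative only: real frames would be loaded from your converted RAWs, not generated randomly):

```python
import numpy as np

# Illustrative: ten fake 'frames' standing in for ten 2s exposures.
# In practice these would be demosaiced images loaded from disk.
rng = np.random.default_rng(42)
frames = rng.normal(loc=0.5, scale=0.1, size=(10, 4, 4))  # 10 frames of 4x4 pixels

mean_stack = frames.mean(axis=0)          # simulates the single long exposure
median_stack = np.median(frames, axis=0)  # more robust to transient objects

print(mean_stack.shape)  # (4, 4) - one image, same size as each frame
```

The median variant is what removes moving people: a passer-by only appears in a minority of frames at any given pixel, so the median ignores them.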

Ok, it requires more work, but such is life. If creating that image was important for you, then spending the time to make it will be worth it.

An advantage of the multi-image approach, compared to the single image ND version, is that you have the higher shutter speed images as well. Thus, in post, you can blend individual 2s images with the processed '20s' one.

LE photography is not to everyone's taste. However, when you see moving water smoothed out by an LE, there is no doubt it transforms the image's look, eg removing the high frequency content.

Another good reason to use LE photography is when you wish to remove people (or noise) from the scene. As long as the people are moving, they can magically be made to disappear in post.

So far, and rather unusually for me, I've made no mention of Magic Lantern: because, bluntly, you don't need ML to capture n images for LE post processing.

Without ML, one would, say, use an intervalometer to capture the n images that are needed to simulate the single ND version. But, of course, the shutter still needs to be actuated for each image, even if you are in Live View or have the mirror locked up.

But with ML we can capture the sensor without any shutter action. For example, ML has had a (shutter time limited) full resolution silent picture option for a while, which we can use to create the images that we will later process in post.

But we still end up with n individual images on the card. No big deal I hear you say, but there is a 'better' way.

Thanks to the hard work of the ML developers, we also have, so-called, Full-resolution Live View video capture. The maximum frame rate is low, about 7 fps, but this isn't an issue for our LE capture needs.

The first thing you need to do is load the 4K raw video recording from the Experimental Build page (https://builds.magiclantern.fm/experiments.html). There are other builds of the 4K raw video, but the one on the experimental page should get you going.

As for using it in this LE mode: let's just say up front that it's fiddly and can be a bit flaky. 

Make sure you enable the required modules, ie crop-rec; mlv-lite; mlv-play. Plus any other modules you need, eg ETTR and Dual-ISO etc.

The in-field workflow I use is as follows:
  • Switch on LV
  • Set your exposure for the scene (I personally use ML's ETTR); ideally aim for this to be between, say, 1/5s and 1s
  • Go into the ML Bulb timer in the Shoot menu and set the ND time you wish to simulate
  • Go into the ML Movie menu and switch on the following in the following order: Crop mode to full-res LV; Raw video on; FPS override to, say, 1
  • Exit the ML menu
  • Switch to Canon video mode, where you will likely see a pink mess on the screen
  • Toggle the MENU button on and off, which hopefully will give you a clear image of the scene
  • Go into the ML video menu and confirm the resolution is ok
  • Check the exposure etc. The ML exposure should show something close (though maybe not identical) to the exposure you set, whereas the Canon exposure will say something else, eg 1/30s maybe
  • Go back into the ML menu and the Scripts menu and run the little LE helper script that I created, which can be downloaded on the right. All this script does is switch the video recording on and off according to the time you set in the Bulb timer
  • Once the mlv video has been created, switch out of the Canon video mode (the script should have switched off the ML video stuff)
The post processing workflow goes something like this:
  • Download the MLV-App (https://www.magiclantern.fm/forum/index.php?topic=20025.0)
  • Open the MLV App and load in the MLV video you created
  • Check it visually to see that all the frames look the same - warning: some may be corrupted
  • Export the video with the average MLV preset
  • Re-import the averaged MLV and export it as a TIFF
  • Process it as you desire
Here is a real world example I took this afternoon. The base exposure was f/16 at 1/4s, but I wanted an LE exposure of 30s. I played around with the fps and 3 fps looked about right. I ran my script and ended up with a 35s MLV of 106 frames. A single, full resolution, frame looks like this:

As we can see, there is lots of distracting high frequency 'stuff' in the water and, of course, there are people moving around on the bridge, as they were throughout the video capture.

Having processed the above in MLV App I ended up with the following 30s LE simulated image:

Of course the sky is horrible, as it really was. So a quick trip to Luminar 4 and we have a new sky. OK, I know it needs more work ;-)

So there you have it. Thanks to the hard work of a whole community of videographers and developers over on ML, a simple photographer like me now has an additional tool to use.

As usual I welcome feedback on this post, especially any corrections and/or suggestions to improve the workflow.

Tuesday, November 12, 2019

More on tilted focus stacking with Tilter

In the previous post I mentioned tilted focus stacking, and I can hear some say: but why?

After all, is not one of the advantages of a TS-E lens that you achieve an ‘optimum’ focus state/plane that you can’t achieve in a single non-tilted image?

Whilst this may be true sometimes, especially if you are seeking out a single zone/wedge of focus within an image that will be ‘out of focus’ elsewhere, it is not generally true. For example, say you are in a cathedral, shooting close to the ground, and you wish to have the beautiful flagstones on the floor captured, from near to far.

With a TS-E lens you might seek to position your hinge point on or very near the ground plane and focus towards/at infinity. Doing this will achieve a focus wedge, with the blur at the edges of the wedge equal to the, so-called, circle of (least) confusion setting: typically anything from 10 to 30 microns on a full frame, with anything less than two sensor pixel widths being ‘pointless’. This scenario looks like this:

Here we see the (yellow) field of view of the TS-E lens, positioned at the appropriate J height, and the tilted depth of field of the TS-E, both relative to the cathedral floor.

But what if there was an upright tombstone that we also wished to get into focus, within the FoV of the lens, but positioned at, say, H/3, as in this scenario:

Here we see a focus failure. An alternative approach could be to focus normally, ie non-tilted, on the tombstone, but, of course, the far depth of field would then drop off too much (assuming we don’t wish to push the aperture down too much because of diffraction), as the tombstone is positioned in front of the hyperfocal/2. This would lead us to consider (planar) focus stacking, which, of course, with Magic Lantern and my in-camera focus stacking script(s), is easy to achieve :-)

But if we reverted to a focus-stacked, non-tilted approach, the artistic dimension will be changed, eg no differentiation in focus, as focus stacking with a non-tilted lens will ensure everything from the near focus to, say, infinity, will be ‘acceptably’ sharp.

This is where tilted focus stacking could be a useful (artistic) tool, ie allowing you to keep a lowish (HQ) aperture setting, eg two stops down from the maximum aperture. Illustratively things look like this:

Here we see a second tilted image (red), relative to the first one, shown in blue. We have now achieved our goal: the cathedral floor is tack sharp, the near tombstone is in perfect (acceptable) focus, and the rest of the image, above the second image’s top/near depth of field boundary, is ‘out of focus’.

Before progressing, let’s remind ourselves of the (general) hinge model we are using, thanks to the work of others such as
Emmanuel Bigler and Harold Merklinger:

Here we see an important feature of the model, namely, at the hyperfocal (H), the depth of field parallel to the sensor is simply J, ie the hinge height, which sits vertically under the front principal plane of the lens. Also, if we are focused at x (x is also projected on the ground plane above for clarity, as it is in Tilter) in front of H, as above, we will be tilting away from the ground plane, and if we are focused beyond H, we will be tilting towards the ground plane.

All this tells us that the angle of the plane of maximum sharpness, relative to the ground plane, may be estimated (note we should not use the above for macro work) as simply atan(J/x). Thus at x = H, the far/lower tilted DoF just touches the ground plane, with the plane of sharp focus at an angle of atan(J/H); and focused at infinity, atan(J/infinity) = zero, so the plane of sharp focus lies along the ground plane, with the near/higher and far/lower DoFs equally positioned either side of it.
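
As a quick numerical sketch of the atan(J/x) estimate (the x value here is illustrative, not from a real setup):

```python
import math

def sharp_plane_angle_deg(j_height_m, x_m):
    """Angle of the plane of maximum sharpness relative to the ground plane,
    estimated as atan(J/x); not valid for macro work."""
    return math.degrees(math.atan2(j_height_m, x_m))

# Illustrative values: J = 1.96m (as in the Tilter examples below) and a
# focus projection of x = 3.6m in front of the hinge.
print(round(sharp_plane_angle_deg(1.96, 3.6), 1))   # ~28.6 degrees

# Focused at infinity, the plane of sharp focus lies along the ground plane:
print(sharp_plane_angle_deg(1.96, math.inf))        # 0.0
```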

From the above, it is also relatively simple to estimate the number of brackets from the current focused position, ie upwards to cover the top FoV, behind H, and downwards to cover to the ground plane.

I hope in a later version to account for camera tilt, but, for now, Tilter assumes a zero camera tilt. However, the Tilter screen does now show the estimated number of brackets needed to cover from the current position to the ground plane. For example, in the screen image below we see that the plane of sharp focus is positioned at an angle of 52 degrees and the lower DoF is at 27 degrees. We thus need to bracket down from here if we wish to also have the plane of sharp focus along the ground. Tilter tells us that we need one more image, ie a total of two tilted focus brackets.

In this example, Tilter also tells us that the (U = Upper) half FoV is 26 degrees. This FoV feedback provides FoV estimates for landscape, portrait, fully shifted landscape and fully shifted portrait.

As usual I welcome feedback, including corrections [:-)], and any ideas to make Tilter ‘better’.

Saturday, November 9, 2019

Using Tilter

In the last post I introduced my latest ML Lua script: Tilter, which I created to support tilt-shift lens use, eg a 24mm TS-E L2 in my case. Note the script will need (minor) tweaking for a non-24mm TS-E L2.
As a minimum, the script is useful as an in-camera education tool, which you can use without a TS-E lens, as long as the lens reports focus, ie it dynamically shows how tilt and focus can change a camera’s depth of field.

However, the real power of the script is in how it supports/informs your TS-E use, mainly in two ways:

  • As an aid to getting the best focus engagement, which with a TS-E lens is slightly more complex than with a non-TS-E lens;
  • As a tilted focus-stacking aid, ie to optimally extend focus beyond that achievable in a single frame.

It is important to understand that Tilter is based on the hyperfocal distance (H), and that Tilter shows the side-on camera view from the near field, that is from the lens’s (estimated) front principal plane (ie assuming a split thin lens model), out to 3*H.

Thus, as you change the aperture, Tilter will always show the tilted focus wedge relative to the current hyperfocal. It may look as if the tilt angle is changing as you adjust the aperture, but, of course, it is not. Only the hyperfocal is changing. The tilt can only be changed by adjusting the lens or refocusing.

Since the first post I’ve added a couple of features that I find useful.

First, Tilter can now show an estimation of the vertical field of view of the camera – once again based on assuming a thin lens model. This can be switched off in the script’s menu and, because we are dealing with a lens that can be shifted, the FoV can be based on four sensor ‘sizes’ – in the case of my 5DIII, two real and two virtual:

  • Normal Landscape = 36x24mm (wxh)
  • Normal Portrait = 24x36mm
  • Vertically Shifted Landscape = 36x48mm
  • Vertically Shifted Portrait = 24x60mm

You need to explicitly set the sensor configuration in the Tilter menu. Note that the FoV shows the full +/- shift coverage, which of course requires post-processing pano-stitching to achieve: which, by the way, is easy in LR.

Also note that this version of Tilter assumes the lens’s non-tilted axis is parallel to the ground plane. You can still use Tilter with a camera that is rotated relative to the ground plane, and I'll discuss this in a future post.

Second, Tilter now shows an estimate of whether the focus on the ground plane is within the FoV of the above-set sensor configuration. This is shown by four dots, whose display, once again, can be switched off. If a dot is red, the focus is behind that particular FoV marker; if green, it is in front of that FoV. The left-hand dot represents the widest TS-E setting, ie the 24x60mm configuration above, and the right-hand dot the narrowest (36x24mm) configuration. Note this feature is off in this post.

Note that you may be disappointed with this feature, as the number of differentiating focus steps that Canon reports in the ‘far field’ is not large. That is, you may see all red or all green dots.

With the FoV switched on, the screen now looks like this:

Note that the centre dot, used for focus stacking, is red; as, in this case, no image has yet been taken. Plus we see that all the dots are green in this case, ie focus is set in front of the narrowest FoV. We also see the tilted DoF relative to the FoV.

There are two main use cases for the Tilter, ie beyond the educational one. In this post I’ll cover the non-shifted use case where you wish to ensure the ground plane (near to far) is in focus, but that the vertical field of view, in the near or far field, may not be enough, ie you need to do tilted focus stacking.

Of course, with a tilting lens we have two definitions of ‘the ground plane is in focus’. First we can elect that the lower/far DoF (just) touches the ground plane, as shown here:

Secondly, we could elect that the plane of maximum sharpness (‘zero’ optical blur) aligns with the ground plane, as shown here:

One use case is that I’m seeking to ensure the ground plane is tack sharp, ie the tilted plane of maximum sharpness is coincident with the ground plane. In addition, the lens is set up to be parallel to the ground plane, ie no additional angles to worry about.

Step 1: focus the TS-E lens using whatever method you like; eg some like setting a closed-down aperture, ie F/3.5, and iteratively tilting for the background and focusing for the foreground – followed by opening the aperture to the required shooting configuration and setting the exposure. With Tilter you can also estimate the height of the centre of the lens above the ground plane and use the Tilter menu to help you approximate the initial tilt angle. In this case, my 5DIII was about x mm above the table top, so Tilter helped me estimate that the initially set tilt should be y degrees. I then used the iteration method to arrive at the final tilt-focus settings and this first image, which could have been an exposure bracket set:

Step 2: using the on-screen Tilter image, assess whether the DoF is enough for your needs. That is, all the things you want to be ‘in focus’, ie not outside of your defined CoC focus criterion, are in focus. If they are, you’re done! Also note that the centre dot of the Tilter feedback is now green, indicating that the current focus overlaps the last taken image: which of course it will, if you haven’t yet refocused.

Step 3: assuming not everything is covered by your single image, you now need to use tilted focus stacking to achieve greater DoF coverage. Without Tilter this would be difficult to impossible, as you would not know how much to refocus to achieve an image-to-image focus overlap. As we are focus stacking from the ground plane, all we need to do is refocus towards the camera, ie away from infinity, and watch the focus stacking feedback in Tilter. That is, refocus away from infinity until the centre dot turns red, then (nudge) refocus back towards infinity until it just turns green again. On the RHS of the Tilter screen you also have some data to help you out, ie angles and the optical blur (set in the ML menu), as well as the diffraction blur and the convolved total blur, ie at the depth of field (angular) planes. Take your next image and re-evaluate the situation in Tilter, ie have you covered your needs yet? If not, repeat Step 3.

To further help with tilted focus stacking, it is often useful to know the angle of the highest object you wish to see in focus. You could guess it, but a better approach is to use some technology. For instance, I use the Theodolite app from http://hrtapps.com/theodolite/

Once you have taken your image data, all you need to do is post process your images, eg:
  • Ingest into LR, undertake any RAW-based processing, eg WB etc, and sync across all your images
  • Merge any exposure brackets you took
  • Focus stack your (merged) images in the application of your choice, eg Helicon Focus
  • Return to LR
  • Make your art by further post processing

In future posts I will look at other use cases and, hopefully, provide some real world examples.

As usual I welcome feedback, advice and, in this case, any Tilter development suggestions.

Monday, November 4, 2019

In-camera TS-E Lens Simulation

Anyone with a tilt shift lens knows that they take a bit of effort to use. Shifting is relatively easy and a great way to achieve panoramas, for example, this pano, of the Fairy Pools in Skye, was taken with my 24mm TS-E, using positive and negative vertical shifting.


When it comes to tilting, things get a little bit more complicated, especially if you go back to basics and try and understand the Scheimpflug principle. 

Luckily, others, such as Emmanuel Bigler and Harold Merklinger, have created an accessible way of modelling a tilted lens:

Here the J height is simply the focal length divided by the sine of the tilt angle.
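
This J = f/sin(tilt) relationship is easy to check numerically (a quick sketch using the 0.7 degree example that appears later in this post):

```python
import math

def j_height_mm(focal_length_mm, tilt_deg):
    """Hinge (J) height below the lens: J = f / sin(tilt)."""
    return focal_length_mm / math.sin(math.radians(tilt_deg))

# The 24mm TS-E at a 0.7 degree tilt, as in the Tilter menu example:
print(round(j_height_mm(24, 0.7) / 10))  # ~196 cm, matching the screen shot
```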

To complete the model, as the hyperfocal distance is measured from the front principal plane, but Canon/ML gives us the focus distance from the sensor plane, a split thin lens model is used to calculate the TS-E 'thickness'. See http://photography.grayheron.net/2019/05/splitting-things-apart.html

As can be seen, a tilted lens has a very simple geometry, which results in the depth of field, measured parallel to the sensor plane at the hyperfocal, being the J height.

Using the above model, it is a simple process to write a Magic Lantern Lua script to provide in-camera feedback and functionality.

The script is called Tilter and, as usual, can be downloaded from the right.

The script is positioned in the ML Focus menu and has two menu items. The first one simply switches the script on and off. The second allows the user to enter the estimated hinge angle. The menu offers 1/10 degree increments, but setting a TS-E lens to better than a third to half a degree is about the best you will achieve.

The menu looks like this:

The Hinge or Tilt Angle, which is adjusted in 1/10 degree increments, dynamically shows the J height in cm. That is, the position of the 'ground' plane, vertically below the (down-tilted) lens and parallel to the non-tilted lens axis. In the above screenshot, the Hinge Angle has been entered as 7, ie 7/10 of a degree, and at a focal length of 24mm this equates to a J height of 196cm.

The script, obviously, runs in LV, and if turned on the user interface looks like this (note the camera settings are purely illustrative and note I'm handholding):

On the left we see a representation of : 
  • the ground plane (white horizontal line) that contains the hinge point;
  • the point of focus, as projected on to the ground plane;
  • the lens axis (orange) that is parallel to the ground plane;
  • the depth of field wedges (upper and lower) and the plane of sharp focus, all pivot around the hinge. The plane of sharpest focus passes through the point of focus of the lens, on the (non tilted) lens axis;
  • the hyperfocal distance, ie the white vertical line;
  • the ground plane is projected out to three hyperfocal distances, with 2H also shown as a tick mark;
  • the red dot at the (far left) centre of the display is used to provide focus stacking feedback; turning green if the current focus overlaps the last image;
  • finally, 15 degree tick marks are shown for convenience.

On the right hand side of the screen, various numerical data are shown:

  • The upper depth of field (U);
  • The plane of maximum sharpness (F);
  • The lower depth of field (L);
  • The J height in cm;
  • The optical, diffraction and RMS total blurs, in microns. Note the optical blur is that set on the ML DoF menu for the 'circle of confusion'.
The script uses a 'corrected' ML-calculated hyperfocal, and ML should be set to simple mode, ie not diffraction aware. The correction is a single focal length, as the ML hyperfocal distance assumes a thin lens focused at infinity.
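
As a sketch of that correction, using the standard thin-lens formula H = f²/(N·c), with the added focal length being the correction mentioned above (the aperture and CoC values here are illustrative):

```python
def hyperfocal_mm(f_mm, aperture_n, coc_mm):
    """Thin-lens hyperfocal distance: H = f^2 / (N * c)."""
    return (f_mm ** 2) / (aperture_n * coc_mm)

f, N, coc = 24, 8, 0.020                 # 24mm lens, f/8, 20 micron CoC
H_infinity = hyperfocal_mm(f, N, coc)    # thin lens at infinity, as ML assumes
H_corrected = H_infinity + f             # plus one focal length, per the post

print(H_infinity)    # 3600.0 mm
print(H_corrected)   # 3624.0 mm
```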

In the above example the focus is set at less than the hyperfocal and thus shows as a yellow dot, through which the plane of sharpest focus passes, ie as if the dot was positioned on the lens axis. In this case the angle is estimated as 48 degrees.

In this case the upper depth of field, ie where the optical blur equals the ML-set CoC, is at an angle of 57 degrees, and the lower is at 34 degrees. The J height being 1.96m, eg the camera is on a tripod at about 1.98m above the ground.

The blurs show that the ML CoC was set to 20 microns and the diffraction blur is estimated at 8 microns, giving an RMS total blur of around 21 microns.
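
The RMS combination is just the square root of the sum of the squares, which is easy to verify with the post's numbers:

```python
import math

def total_blur_um(optical_um, diffraction_um):
    """RMS combination of the optical and diffraction blurs, in microns."""
    return math.hypot(optical_um, diffraction_um)

# The worked example: a 20 micron CoC and 8 microns of diffraction blur.
print(round(total_blur_um(20, 8), 1))  # ~21.5, ie the 'around 21 microns' reported
```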

To consolidate what is happening, let's look at another screen shot:

In this example I'm using a very high aperture (f/18, giving a diffraction blur of 24 microns). We now see that the focus point has turned green, meaning that I'm focusing between H and 3H. Beyond 3H, the focus point turns red. Focusing beyond 3H will be difficult with a 24mm TS-E, as Canon focus feedback collapses in the far field, ie relative to the near field.

The above shows the optimum configuration for tilting, where the lower depth of field covers the ground.

An additional feature of the script is that it supports tilted focus stacking. To use this, simply take an image. The next image will now be referenced to the previous and if the (slanted) depths of field positively overlap, the red dot at the centre of the circle will turn green.

In subsequent posts I'll talk about work flow and how to achieve tilted focus stacking.

As usual I welcome feedback on this post and the script.