Monday, February 17, 2020

First thoughts on ‘switching’ to MFT

For British readers, saying that Micro Four Thirds cameras are like Marmite will be well understood, ie they divide opinion.

As a Canon-guy, and a Canon-guy who only uses Magic Lantern or CHDK augmented cameras (5D3, EOSM(Vis), EOSM(IR), EOSM3, G1X, G5X, G7X, S95), I have been reluctant to explore other camera manufacturers; although I did ‘play around’ with the Sony S6000 a couple of years ago, I sold it on.

However, a recent travel experience has led me to explore the Micro Four Thirds sensor option, with its crop factor of 2; and, specifically, the Olympus brand.

My recent experience pivoted around having to carry all my 5D3-based infrastructure with me on a tour of Scotland, ie in and out of hotels and out shooting day and night. A lot of heavy ‘stuff’, when you factor in the camera, the lenses, the (large) tripod, the gear head, the laptop etc etc etc.

I was drawn to the Olympus MFT cameras for five reasons: their size, relative to the 5D3; their mirrorless advantage that allows me to use ‘specialist’ lens adapters, such as the ND Throttle; their ‘button re-programmability’, but I won’t say much about their menus; their focus bracketing; and their Live Time feature, that allows one to see the exposure (and histogram) evolving in real time.

As this is an experiment for me, I decided to ‘go cheap’ and buy into a ‘last gen’ Olympus. Thus, I bought a second-hand OM-D E-M5 Mk II, rather than a Mk III. I also ‘went cheap’ on dedicated lenses and decided to buy at the ‘kit end’ rather than the ‘pro end’: Olympus 14-42mm, 40-150mm, plus a Samyang 7.5mm Fish Eye.

In addition to the (dumb) ND Throttle, which is there for Long Exposure capture, I purchased a basic ‘dumb’ adapter for my Canon EOS lenses; as the OM-D E-M5 II has great focus peaking, I think this will be OK for my type of photography. I don’t do sports or bird photography, which would require fast-acting AF tracking; ‘all’ I do is put a camera on a tripod and try to slow down.

To complete my ‘new’, lightweight travel set up, I will likely throw in my CHDK G7X or G5X, and, of course, my newly acquired Peak Design Travel Tripod: an incredible piece of engineering design.

Finally, to complete the downsizing, no more laptops for me when I’m on the road. In future I’ll simply use my existing iPad Pro 10.5, loaded with Lightroom and Affinity Photo (£16, one-off purchase), as my Photoshop substitute, as Adobe need to do a lot more with the iPad-based Photoshop before I consider it usable.

I’m looking forward to trying out my new, old stuff, and will write about my ‘on the road’ MFT experiences in future posts.

Saturday, February 1, 2020

Welcome to the DOFI family

In the last month I've completely overhauled my Depth of Field scripts.

I have done this for two reasons. First, I've refined my use of output-based focusing, eg knowing the infinity blurs; and second, I've now ported these ideas from my EOS Magic Lantern cameras (5D3 and various EOSMs), over to my CHDK Powershot cameras (G1X, G7X, G5X and S95).

My approach is to use my Depth of Field Information (DOFI) scripts to constantly tell me the following (see the sketch after this list for a feel of the maths involved):
  • The optical (defocus) blur at infinity, in microns;
  • The diffraction blur through the scene in microns;
  • The total blur at infinity in microns;
  • An estimate of how many focus brackets will be required to seamlessly focus stack from the current position to the hyperfocal;
  • Where I am relative to the hyperfocal;
  • Whether the current position has a positive or negative overlap with the last image captured.
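For those curious about the sums behind these readouts, here is a minimal Python sketch. It is not the actual DOFI/DOFIC code (those run in-camera as Magic Lantern/CHDK scripts); it just illustrates the kind of arithmetic involved, assuming the usual thin-lens approximations, a quadrature combination of the defocus and diffraction blurs, and a simple reciprocal-distance estimate of the bracket count:

```python
import math

def dofi_readouts(f_mm, N, coc_um, focus_m, wavelength_nm=550):
    """Rough, illustrative estimates of the quantities a DOFI-style script reports."""
    f = f_mm / 1000.0                           # focal length in metres
    coc = coc_um * 1e-6                         # CoC in metres
    H = (f * f) / (N * coc) + f                 # hyperfocal distance (m)

    # Defocus blur at infinity when focused at focus_m (thin-lens approximation)
    defocus_inf_um = (f * f) / (N * (focus_m - f)) * 1e6

    # Diffraction blur (Airy disk diameter), constant through the scene
    diffraction_um = 2.44 * wavelength_nm * 1e-9 * N * 1e6

    # Total blur at infinity: one common convention is to add the two in quadrature
    total_inf_um = math.hypot(defocus_inf_um, diffraction_um)

    # Brackets needed to stack from the current focus to the hyperfocal, assuming
    # each image covers a band of width 2/H in reciprocal-distance space
    brackets = math.ceil((H / focus_m - 1) / 2) + 1 if focus_m < H else 1

    return H, defocus_inf_um, diffraction_um, total_inf_um, brackets

# Example: 24mm at F/10, 30um CoC (full frame), currently focused at 1m
print(dofi_readouts(24, 10, 30, 1.0))
```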
More information on the Magic Lantern version of DOFI, simply called DOFI, can be found here: https://www.magiclantern.fm/forum/index.php?topic=24762.msg224057#msg224057

More information on the CHDK version of DOFI, called DOFIC, may be found here: https://chdk.fandom.com/wiki/Depth_Of_Field_Info_-_Chdk_version

The third member of the DOFI family is called DOFIZ, which adds exposure bracketing control to DOFIC. More information on DOFIZ may be found here: https://chdk.fandom.com/wiki/DOFIZ:_DOFIC_with_Exposure_Bracketing

The latest version of the scripts will always be available from the script download list on the right.

Bottom line: for those who want optimum focus, you will likely not do better than using either DOFI or DOFIC; and for the lucky Powershot photographers, DOFIZ gives you full control over your exposure bracketing, including while you are focus stacking.

As usual, I welcome feedback on the DOFI scripts.

Thursday, January 23, 2020

2020 Reflections on Focus: Part 1

As many know, I’ve enjoyed myself for some time trying to get the best out of focusing; especially in the following areas: 
  • Always knowing the impact of diffraction blur on my captures 
  • Always knowing the near and far depth of fields (DoFs) 
  • Understanding the impact of the defocus blur criterion, ie the so-called Circle of Confusion (CoC)
  • Understanding the impact, especially for non-macro, but close-up or near-field photography, of not knowing the ‘lens zero’ 
  • Knowing how many focus brackets I will need to take to ensure I cover the scene from the current focus to infinity 
  • Knowing where to position for the next focus, when focus bracketing, so there are no focus gaps 
  • Knowing the best/optimum infinity focus position for a required image quality
Hidden in the above are some real challenges.

For example, as (non-macro) photographers, we are comfortable with the concept of the hyperfocal distance (H); where H is classically, and simply, written as (f*f)/(N*C) + f, which, practically, we can reduce to (f*f)/(N*C).

Where f is the focal length, N the aperture number and C the CoC, ie the acceptable ‘out of focusness’, or defocus blur, that we can tolerate. The CoC varies according to the camera being used, the display medium (screen vs print), the size at which the image is to be displayed, the distance the viewer is at, and the ‘scrutiny’, ie competition or not.
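As a worked example: a 24mm lens at F/10, with a CoC of 30 microns (0.03mm), gives H = (24*24)/(10*0.03) = 1920mm, ie just under 2m.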

But where is H measured from?

What is often not said, is that H, and the near and far DoFs that follow from H, are derived from the Thin Lens model and there is hardly a camera in the world that uses such a lens!

Your DSLR or P&S camera certainly doesn’t.

For non-macro photography, modern/real lenses may be modelled as thick versions of the thin lens, with a front principal plane and a rear principal plane, ie a lens thickness. For example, see this post for more information: http://photography.grayheron.net/2019/05/splitting-things-apart.html

A classical (symmetrical) thin lens, of course, has a single lens element with the front and rear principal planes located at the centre of the, single, thin lens. But, real lenses are also, usually, not symmetrical, so one should (ideally) also account for the so-called pupillary magnification when estimating DoFs.

‘Luckily’, lens manufacturers help us out a lot, by telling us nothing about the above!!

Thus, we are forced to make use of the thin lens model, or modifications to it, whether we like it or not.

Fortunately, for most non-macro photographers the above lens nuances are irrelevant, as the difference between measuring H from the lens zero and measuring it from the sensor (where the focused image is formed) is small relative to H. Plus, most photographers know that H is only there to guide them, and they know to add some ‘focus insurance’, eg focus beyond H, never in front of H.

Some, for example portrait or nature photographers, or those wishing to make artistic use of defocused areas in their images, will find the above more than enough for their needs. But what if you want to create a sharp, high quality print, covering from ‘near to far’? That is, you are a landscape photographer :-)

If you wished to capture your scene with a single image, you could, of course, seek to increase the DoF by closing down the aperture. But we know this is not a good way to go, as all we are doing is trading defocus blur for diffraction blur; plus, artistically, these two blurs are different.

High diffraction blur everywhere can hardly be called an artistic element of an image! As said above, defocus blur has the potential to help with your artistry, eg helping draw the viewer to certain areas of the image; whereas diffraction ‘softens’ the image everywhere.

The rest of this post is aimed at the reader who knows they want to get more practical benefit out of hyperfocal focusing, especially to achieve so-called ‘deep focus’, but is unsure how to achieve this.

I have come to call my approach to focusing: ‘output-based focusing’, as opposed to ‘input-based focusing’.

In input-based focusing we select/predetermine the focus, either through calculation, manually via the Live View or through AF, then lock everything down.

With an output-based approach, we additionally seek to dynamically adjust focus, knowing some additional information, to meet the image (output) presentational needs and, as we will see in a future post, where necessary, augment focusing, eg through informed focus bracketing.

In this post we will restrict ourselves to a base use case: namely where we wish to maximise the focus quality in the image sharpness, from near to far: to infinity, but not beyond!

Let’s first discuss ‘infinity’. Simply put, we can practically define this as when the photographer focuses way into the far field and, when the captured image is reviewed, there is no focus-based difference between that image and one that was taken by focusing farther away.

Ignoring diffraction for now, another way of describing the above is to say that the lens defocus blur has reached a size, such that the viewer can’t discern a difference. From a theoretical perspective we can also sensibly say that, when the defocus blur is less than two sensor pixels, then we have reached that point. This is our (practical/sensible) definition of focus infinity.

[To complete the picture, we should mention, in passing, Rayleigh criterion, Airy disks, diffraction patterns, Bessel Functions and Bayer layers etc! But, ignoring all the science and maths, simply put, all we need to know is that we can’t resolve things that are less than the Airy radius, and pragmatically, for (digital) photographers, this translates to saying, it is pointless seeking (defocus) blurs much less than, say, two of your camera’s sensor pixels].

Of course, at the point of focus, the defocus blur is always zero. As we move away from the focus point, our defocus blur increases, but not symmetrically. This starts to hint at a good place to be, ie between the hyperfocal, where the defocus blur at infinity is the CoC, ie the ‘just acceptable’ defocus point, and the focus position where the infinity blur is around two sensor pixels.

The thing to note here is that this is camera specific. But then again, so is the hyperfocal (H), as it is based on the crop-sensitive ‘circle of confusion’: which is simply the defocus blur at infinity when focused at the hyperfocal distance. Thus, knowing H means we know one of the key pieces of information that allows us to move to an output-based approach to focusing. That is, focusing in a more informed way, using output-based microns of blur, rather than only worrying about input distances to ‘focus’ the lens.

To illustrate what this means, in a full frame camera, like my Canon 5D3, this equates to infinity blurs falling between the hyperfocal 30 microns (um) and 2 sensor pixels, ie 12um on my 5D3, and certainly not lower than 6 microns, ie a single pixel. On a crop sensor, you would adjust these numbers according to the camera, eg 30/crop and (1-2)*sensor-pixel-size. Once again, for now, ignoring diffraction.
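To see how the infinity blur scales as you focus beyond H, here is a trivial sketch; it assumes the thin-lens approximation, under which the defocus blur at infinity, when focused at a distance x, is roughly C*(H/x):

```python
coc_um = 30                      # CoC at the hyperfocal on full frame, in microns
for n in (1, 2, 3, 4, 5):
    # thin-lens approximation: infinity blur ~ CoC * (H / focus distance)
    print(f"focus at {n}*H -> infinity blur of about {coc_um / n:.0f} um")
```

Which, for the 5D3 numbers above, puts the 2-pixel (12um) point at roughly 2.5*H.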

We also know that if you focus at H, the near DoF will be at H/2 and the far DoF will be at infinity. We also know that if you focus at infinity, the far DoF is, of course, also at infinity; but that the near DoF has now moved to the hyperfocal. 


Key point: if you focus at less than your chosen hyperfocal distance, infinity will no longer be acceptably sharp, unless you change something, ie your acceptable CoC criterion or aperture etc. Or, put another way, always know the hyperfocal distance when doing landscapes, which, as we will see below, is easy.

Thus, adopting an ‘I always focus at infinity’ approach means that you are throwing away H/2’s worth of depth of field in the near field. Which in some situations may be OK: but not in all.

Assuming you are a cautious photographer, you will likely seek out a little ‘focus insurance’ and, rather than try and focus ‘exactly’ at the ‘hypothetical’ H, you will focus slightly beyond this. But where?


As we will see, up to twice H, ie short of the 2*pixel-pitch point, is a good place to settle. For example, focusing at 2*H means the ‘standard’ 30um infinity defocus blur falls to 15um, ie half of that at H.

The ‘convenience’ being that the only maths you need to do, with this approach to focusing, is knowing your hyperfocal and how to double it.

So, let’s look at output-based focusing, making use of a previous post, where I introduced my ‘Rule of Ten’ approach. The original post on RoT that I wrote may be found here: http://photography.grayheron.net/2018/11/infinity-focusing-in-your-head-rule-of.html

But first, let’s remind ourselves of the impact of focusing beyond the hyperfocal.

Ignoring second order effects, the (non-macro) near and far DoF distances, when focused at a distance x, may be approximated as: NDoF = H*x/(H+x) and FDoF = H*x/(H-x)

Let’s ignore the far DoF, as this is at infinity if we are focusing beyond H, and only look at the near DoF, and ask the question: what have we lost by focusing at 2*H, rather than at H?

The NDoF approximation tells us that when x is 2*H, the NDoF will be at (H*2H)/(H+2H) = 2H/3. That is, we have ‘lost’ H/6 worth of focus, ie (2H/3 – H/2). So, if your hyperfocal is, say, at 1.2m, and you instead focus at 2.4m, all you have lost in the near field is 200mm of depth of field. But, of course, at infinity your focus quality has doubled, ie from the, just acceptable, CoC-based defocus blur to half of that.
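If you want to check that arithmetic, here is a tiny sketch of the near DoF approximation, using the example hyperfocal of 1.2m:

```python
def near_dof(H, x):
    # near depth of field limit when focused at x (thin-lens approximation)
    return H * x / (H + x)

H = 1.2                                                  # example hyperfocal in metres
print(f"{near_dof(H, H):.2f} m")                         # 0.60 m: focus at H, near limit at H/2
print(f"{near_dof(H, 2 * H):.2f} m")                     # 0.80 m: focus at 2H, near limit at 2H/3
print(f"{near_dof(H, 2 * H) - near_dof(H, H):.2f} m")    # 0.20 m: the 200mm 'lost' in the near field
```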

Further, we are now seeing where, for the landscape photographer, the (camera-specific) focus sweet spot is; namely, and assuming you are using a sensible aperture, eg F/8-10, between the hyperfocal you are using and where the defocus blur is, say, 2 pixels. So, on my 5D3 this means between H and around 3*H.

Thus, we now have a reasonable infinity focus (starting) strategy, covering us when we focus beyond H and towards infinity. First, know your H, double it and focus there, and check that you are content with the focus cover from 2/3rds of H onwards, ie the near DoF. Job done!

At this point, many will be saying, OK, but this isn’t much help to me, as: I don’t know where H is; and it’s too complicated for me to calculate it in my head; and I’m not going to muck about with an App on my phone or a calculator: I just want to take pictures!

WARNING: Ignore those that say, focusing at one third into the scene is good enough. This is based on a myth that depth of field is split 2/3 in the far field and 1/3 in the near field. This is only true when focusing at H/3, ie uncomfortably less than H, and we don’t want to be there! Having read this post, you know you can do much better than this!
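(If you want to check that with the near/far DoF approximations above: focusing at H/3 gives a DoF running from H/4 to H/2; the focus point then splits it as H/12 in front and H/6 behind, ie exactly the 1/3 : 2/3 split of the myth, and nowhere near infinity.)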

So, let’s use the ‘Rule of 10’ (RoT) focus distance to progress our ideas.

The RoT states that, at an aperture of F/10, the hyperfocal distance in metres is the focal length, in mm, divided by 10, with a CoC, in microns, numerically equal to the focal length in mm.

As an example, assume I’m shooting with a 24-105mm lens at 24mm, at F/10, a reasonable place for a landscape photographer to be on a full frame camera. My hyperfocal distance is then 2.4m, ie the focal length in mm divided by 10, and, at this focus, the CoC will be 24 microns, ie slightly better than the usually used 30um on a full frame (or 30/crop on a crop sensor camera; say, for convenience, 20 on a typical DSLR crop sensor).

Although you can use RoT with any length lens, it really comes into its own for those shooting with wide angle lenses, say wider than 30mm on a full frame, who wish to achieve a high-quality focus at infinity and maximise the depth of field in the near field.

As an example, let’s now assume I’ve switched to my 12mm prime lens and it is set to the RoT aperture, ie F/10.

The RoT distance, in meters, is simply the (focal length in mm)/10 = 12/10, ie the hyperfocal is at 1.2m. At this RoT distance, the RoT CoC is 12 microns, ie the focal length. For high quality work this is about as good as I’m going to get.

But, if I knew I was ‘only’ shooting for on-line/projector display, ie not print scrutiny in a competition, I might think 12 microns is a bit of an overkill, thus I could comfortably ‘back off’ the CoC to, say, 24 microns, ie double the RoT number. So, rather than focus at 1.2m, I move my hyperfocal to 0.6m, ie half of 1.2m. At this adjusted RoT-based H, I know that my near DoF is always half of H (near DoF = H/2), thus giving me a near DoF of 0.6/2 = 0.3m. All done in my head, with no calculators or look-up tables, and all I needed to do was know my focal length and do some doubling or halving of low digit numbers.
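For those who prefer to check such things on a keyboard rather than in their head, here is a minimal sketch of the RoT arithmetic; it simply restates H = f*f/(N*C) with N = 10:

```python
def rot(focal_mm, coc_um=None):
    # Rule of Ten at F/10: hyperfocal (m) = focal length (mm) / 10, with a CoC (um)
    # numerically equal to the focal length (mm). Relaxing the CoC by a factor of k
    # divides the hyperfocal by k.
    coc_um = coc_um if coc_um else focal_mm
    H = (focal_mm * focal_mm) / (10 * coc_um)   # hyperfocal in metres
    return H, H / 2                             # H and its near DoF limit

print(rot(12))        # (1.2, 0.6)  -> 12mm lens at the RoT CoC of 12um
print(rot(12, 24))    # (0.6, 0.3)  -> same lens with the CoC relaxed to 24um
```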

I’ll leave the reader to experiment with the output-based RoT approach, as you can use it in many ways to help with your specific focusing needs, including using it to inform artistic-based focusing: but that’s another story, for another time.

As this is the first of several posts I intend to publish on focusing, I’m going to stop at this point, as I think we have achieved a sound, single image, output-based, starting point. In future posts I will discuss using RoT to inform focus bracketing and then we will progress to how the Canon photographer, who uses Magic Lantern or CHDK technology, can make use of my in-camera focusing ‘apps’, ie scripts. For now, I suggest, irrespective of what camera you use, you focus on honing your RoT skills, to always know your hyperfocal ;-)

BTW if you have any questions on the RoT approach, or anything else I’ve said in this post, please feel free to add a comment at the bottom. I will always post a reply ;-)

Tuesday, January 7, 2020

Extreme ETTR processing

 *** This is an update on the original post ***

Ignoring the artistic dimension of an image, which includes focus etc, there are two technical areas that we tend to worry about: dynamic range and, what I call, tonal quality.

Dynamic range addresses clipping, ie lost data, eg as long as we have not clipped highlights or shadows, we have fully captured the dynamic range.

But DR doesn't address the quality of the tonal resolution we captured, which talks to the post processing dimension, ie moving the data around for artistic reasons and ensuring tonal 'smoothness'. This tonal dimension accounts for the way a digital file is captured, eg:


The above shows the 'problem': the image data on the left, ie in the shadows, has lower tonal bandwidth (or resolution, if you like) per stop than the image data in the highlights. The fact is, half of our tonal resolution is in the rightmost stop.
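To put some illustrative numbers on that, here's a quick sketch assuming a linear 14-bit raw file (as on the 5D3):

```python
bits = 14
levels = 2 ** bits                        # 16384 discrete levels in a linear 14-bit raw file

for stop in range(1, 7):
    in_this_stop = levels // (2 ** stop)
    print(f"stop {stop} down from saturation: ~{in_this_stop} raw levels")
# the brightest stop gets 8192 levels (half of everything); six stops down gets only 256
```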

So I thought I would do an experiment and try some extreme ETTR, ie use ML's ETTR plus activating Dual-ISO.

During my trip to Scotland last year, I had the opportunity to shoot some extreme ETTR shots. That is, shots where I wanted to maximise the tonal data.

My 'mistake' was not creating a reference image, ie ETTR alone. But we're all human ;-)

I used Magic Lantern's Dual-ISO, to ISO bracket at 100 and 1600 in the same image.

As an example of what such extreme image capture looks like, here is the base/RAW capture that I took with my 5D3:


If you zoom in you will see the interlaced Dual-ISO capture, that is, the lines are switched between using ISO 100 and ISO 1600.

The histogram in Lightroom looks like this:


As I say, an extreme ETTR capture.

The first job is to create a base image from the Dual-ISO, which is simply accomplished with the Dual-ISO Lightroom plugin: resulting in this base image ready for post processing:


The LR histogram now looking like this:


The final stage is to first use LR to create an image suitable for Photoshop, where I used various TKActions to help me arrive at the final 4:3 ratio image:
 

Clearly, such extreme image capture is not to everyone's taste. But each to their own I say :-)

Tuesday, December 31, 2019

There’s Wide Angle photography, then there’s the Venus LAOWA 15mm F4 Wide Angle Macro photography

Finding myself with a few days over Christmas to ‘play around’ with my photography, I decided to remind myself of a few ‘specialist’ lenses I had acquired, but not really used.

As we know, macro photography requires either having macro lenses, which have magnifications of 1:1, or trying to get close with a normal lens, with or without extension tubes; although typically the closest you will get to a focused object is about 200mm from the sensor.

However, if you have the Venus LAOWA 15mm F4 Wide Angle Macro, you can focus at about 5mm from the front of the lens! 


In addition, this lens has a shift feature that allows shifts of +/- 6mm. As it’s a manual lens, you can also set the aperture to F/32.

So overall, it’s a very unusual lens, as you can see from my 5D3 set-up.



Of course, being a manual lens, Magic Lantern interaction is rather restricted; however, ML can still help out, for example Auto ETTR works a treat, as does Dual-ISO.

As you can imagine, if you are focusing 5-10mm away from the front of lens, where you can achieve the 1:1 magnification, light will be rather compromised.

However, as an example of what the lens can achieve, I set up an indoor test and focused on the front-most flower, used ML to get an ETTR exposure, switched on Dual-ISO, and then, using the DoF scale on the lens, ‘simply’ manually focus stacked to infinity (this is a rather crude process where you have to guess the rotation, which is the same each time).

The resultant 11 images were ingested into Lightroom and Dual-ISO processed, then sent on a round trip to Helicon Focus, before finally finishing off in Lightroom again. Here are the first and last images:





And here is the final processed image.


Bottom line: The Venus LAOWA 15mm F4 Wide Angle Macro is a unique lens, but one that requires a bit of effort to use. Luckily Magic Lantern, as usual, does some of the heavy lifting.

Sunday, December 1, 2019

Field testing the LE Simulator Script

The latest version of my Long Exposure (LE) Simulator script, that allows you to capture LE exposures without NDs, may be downloaded from the right-hand link.

This version tidies up a few things, to better link to my workflow, which is:

  • Compose and set the base exposure using your preferred method, eg ML ETTR, and remember it
  • Consider taking an Auto Bracketing exposure set (either now or after the MLV is captured)
  • Run the LE Sim script from the ML script menu, having set the time of the LE in the ML Bulb menu (note: don’t switch ML Bulb on).
  • If your base exposure is not within the script’s limits it will warn you, ie between 0.15s and 5s
  • If the camera is not in LV mode, the script will ask that you put it in LV mode
  • The script will then set things up and ask that you put it in Canon MOVIE mode
  • The script will now take you to the ML Movie menu, where Full Resolution LV (FLV) will have been selected, along with RAW video
  • The script will ask you to switch on FPS override. You can adjust if required, ie to better match the desired base exposure
  • You should also check that the RAW Video capture is adequate, ie green and continuous. If not, adjust the FPS and, if required, exposure
  • Press the Trash button to exit the ML menus
  • You should now see the scene: if not press MENU twice, ie on and off
  • Do a final check of the composition and exposure, and go back to the ML menu to adjust if you need to
  • Do a 2+ second long half shutter press to capture the MLV LE simulation or a less than 2 second press to exit the script without taking an MLV
  • After exiting, the script will switch off the MLV video settings, but note that you will need to switch out of the Canon Movie mode to return to photo mode.
As for RAW processing, simply import into MLV App and export the MLV as an averaged TIFF.

As for post processing, well that's down to your artistic desire. This is why I take an auto bracket set as well as the MLV video, as I can blend any of the auto brackets, or any of the individual MLV FLV frames, with the averaged MLV.

As an example, this image was taken at our weekend break at Port Quin, in Cornwall. I first captured an Auto Bracket set and then 133 frames worth of MLV FLV capture at 2 fps and a shutter of 1/3s. That is a simulated exposure of about 45s, ie 133 * 1/3s.
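The sums behind that are trivial, but for completeness, here's a throwaway sketch using this capture's figures:

```python
shutter_s = 1 / 3                 # base exposure per frame
frames = 133                      # frames in the captured MLV

print(f"simulated LE of about {frames * shutter_s:.0f}s")     # ~44s

# or, the other way round: frames needed for a target simulated LE
target_le_s = 45
print(f"frames needed: {round(target_le_s / shutter_s)}")     # 135
```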

I processed the MLV and blended the averaged MLV, for the land and sea, with one of the Auto Bracket captures that covered the sky, as I didn’t want to smooth out the sky.
 


The script appears pretty robust, at least on my 5D3, and thus now I can carry out 'emergency LE' captures, ie without NDs.

As usual I welcome comments and feedback on this post.

Sunday, November 24, 2019

No more ND filters?

So let's jump to the bottom line: you need ND filters. 

But what if you left them at home and see/imagine that killer Long Exposure (LE) shot? 

What if the ND you are carrying in the field is not enough: it will give you a 2 second exposure, but not a 20 second one?

This is where photographers turn to 'hacks' to help them out. For instance, if you need a 20s exposure and all you can get is a 2 second one, then simply take 10 images and use, for example, Photoshop to simulate a 20 second image by stacking the images and either using mean or median statistics.
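If you'd rather script this than do it in Photoshop, here is a minimal sketch with numpy; the file names are hypothetical, and it assumes you have already developed the ten 2s frames to 16-bit TIFFs:

```python
import glob
import numpy as np
import imageio.v3 as iio        # any TIFF reader/writer would do

# hypothetical file names: ten developed 2s exposures of the same scene
paths = sorted(glob.glob("frame_*.tif"))
stack = np.stack([iio.imread(p).astype(np.float64) for p in paths])

mean_img = stack.mean(axis=0)           # mean stack: smooths water; moving people become ghosts
median_img = np.median(stack, axis=0)   # median stack: better at removing moving people outright

iio.imwrite("le_20s_mean.tif", mean_img.astype(np.uint16))
iio.imwrite("le_20s_median.tif", median_img.astype(np.uint16))
```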

Ok, it requires more work, but such is life. If creating that image was important for you, then spending the time to make it will be worth it.

An advantage of the multi-image approach, compared to the single image ND version, is that you have the higher shutter speed images as well. Thus, in post, you can blend individual 2s images with the processed '20s' one.

LE photography is not to everyone's taste. However, when you see moving water smoothed out by an LE, there is no doubt it transforms the image's look, eg removing the high frequency content.

Another good reason to use LE photography is when you wish to remove people (or noise) from the scene. As long as the people are moving, they can magically be made to disappear in post.

So far, and rather unusually for me, I've made no mention of Magic Lantern: because, bluntly, you don't need ML to capture n images for LE post processing.

Without ML, one would, say, use an intervalometer to capture the n images that are needed to simulate the single ND version. But, of course, the shutter still needs to be actuated for each image, assuming you are in Live View or have the mirror locked up.

But with ML we can capture the sensor without any shutter action. For example, ML has had a (shutter time limited) full resolution silent picture option for a while, which we can use to create the images that we will later process in post.

But we still end up with n individual images on the card. No big deal I hear you say, but there is a 'better' way.

Thanks to the hard work of the ML developers, we also have, so-called, Full-resolution Live View video capture. The maximum frame rate is low, about 7 fps, but this isn't an issue for our LE capture needs.

The first thing you need to do is load the 4K raw video recording from the Experimental Build page (https://builds.magiclantern.fm/experiments.html). There are other builds of the 4K raw video, but the one on the experimental page should get you going.

As for using it in this LE mode: let's just say up front that it's fiddly and can be a bit flaky. 

Make sure you enable the required modules, ie crop-rec; mlv-lite; mlv-play. Plus any other modules you need, eg ETTR and Dual-ISO etc.

The in-field workflow I use is as follows:
  • Switch on LV
  • Set your exposure for the scene; I personally use ML's ETTR. You should ideally aim for this to be, say, between 1/5s and 1s;
  • Go into the ML Bulb timer in the Shoot menu and set the ND time you wish to simulate
  • Go into the ML Movie menu and switch on the following in the following order: Crop mode to full-res LV; Raw video on; FPS override to, say, 1
  • Exit the ML menu
  • Switch to Canon video mode, where you will likely see a pink mess on the screen
  • Toggle the MENU button on and off, which hopefully will give you a clear image of the scene
  • Go into the ML video menu and confirm the resolution is ok
  • Check the exposure etc. The ML exposure should show something close (it may not be identical) to the exposure you set, whereas the Canon exposure will say something else, eg 1/30s maybe
  • Go back into the ML menu and the Scripts menu and run the little LE helper script that I created, which can be downloaded on the right. All this script does is switch the video recording on and off according to the time you set in the Bulb timer
  • Once the mlv video has been created, switch out of the Canon video mode (the script should have switched off the ML video stuff)
The post processing workflow goes something like this:
  • Download the MLV-App (https://www.magiclantern.fm/forum/index.php?topic=20025.0)
  • Open the MLV App and load in the MLV video you created
  • Check it visually to see that all the frames look the same - warning: some may be corrupted
  • Export the video with the average MLV preset
  • Re-import the averaged MLV and export it as a TIFF
  • Process it as you desire
Here is a real world example I took this afternoon. The base exposure was f/16 at 1/4s, but I wanted an LE exposure at 30s. I played around with the fps and 3 fps looked about right. I ran my script and ended up with a 35s mlv with 106 frames. A single, full resolution, frame looks like this:


As we can see, there is lots of distracting high frequency 'stuff' in the water and, of course, there are people moving around on the bridge, as they were throughout the video capture.

Having processed the above in MLV App I ended up with the following 30s LE simulated image:



Of course the sky is horrible, as it really was. So a quick trip to Luminar 4 and we have a new sky. OK, I know it needs more work ;-)



So there you have it. Thanks to the hard work of a whole community of videographers and developers over on ML, a simple photographer like me, now has an additional tool to use.

As usual I welcome feedback on this post, especially any corrections and/or suggestions to improve the workflow.