Monday, January 18, 2021

MUSIC: now for those with sensitive hearing

In this post I'm pleased to announce that MUSIC is now 'fully' compatible with ML's Full Resolution Silent Picture (FRSP) DNG photo capability, ie no mechanical shutter action. Note: I personally have decided not to 'play' with the 'video stuff' ;-)

As for ML versions: I've only tested MUSIC on a 5D3 with the latest Lua Fix build (2020-12-28 18:15).

The usual FRSP limitations apply, ie shutter speeds need to be between about 0.3 and 15 seconds. MUSIC doesn't check FRSP shutter compatibility, so if you see things not working as expected, check that your shutter speed is in the 'sweet zone'.
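If you'd rather script the missing check yourself, the logic is just a range test; the following is a minimal sketch (plain Python rather than ML Lua, and the limits are the approximate values quoted above, not values read from the camera):

```python
# Approximate FRSP 'sweet zone' limits in seconds; adjust for your camera/card.
FRSP_MIN_S = 0.3
FRSP_MAX_S = 15.0

def frsp_shutter_ok(shutter_s):
    """Return True if the shutter speed falls inside the FRSP sweet zone."""
    return FRSP_MIN_S <= shutter_s <= FRSP_MAX_S

print(frsp_shutter_ok(5.5))   # True
print(frsp_shutter_ok(0.1))   # False: too fast for FRSP
```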

To use FRSP with MUSIC, you must have the Silent module switched on; MUSIC will handle the rest of the FRSP set-up for you, so all you need to do is enable the module.

Note that you must also have certain other 'features' switched off, eg advanced bracketing, and others switched on, eg expo override and expo sim.

Also, I've limited FRSP to only work on the 5D3, although you can remove this check in the script if you wish; I did this as I found I was balancing too many things in trying to get FRSP running robustly in MUSIC. In addition, AF needs to be off if using FRSP.

Another limitation is that FRSP cannot achieve seamless (near zero gaps in time) image-to-image capture. If you need this, then use the normal shutter approach.

To make FRSP 'robust' in MUSIC, I've added a card-specific feature, ie an FRSP write delay. On my 5D3, with my card, I find a delay of 1000ms is about right. If you see 'dropped frames', or the script prematurely finishing, then you may need to adjust this delay.


In the above screen capture from my 5D3, we see the new FRSP feature, which is either enabled (yes) or not (no). We also see that, after experimenting, I've set an FRSP write delay of 1000ms. This is on top of the script-managed delay, created by waiting for LV to return: during FRSP capture, LV is 'blanked' by ML, and I use this to manage the FRSP capture timing.

Also note that FRSP requires focus shift mode to be set to none. MUSIC has some error checking built in, but it isn't perfect ;-)

We also see in the above one of the use cases, albeit taken indoors, where I wanted to create a 90s simulated LE with 16 images: that is, the base shutter was about 5.6s. Note I requested a zero delay at the start and a zero additional delay between images.

In this next screen capture we see a typical people elimination use case, where I'm requesting 4 FRSP captures with an image-to-image delay of 10 seconds (hoping the people move between images):


Of course, as these are FRSP DNGs, they are 'hidden' in the normal Canon in-camera review, but, once ingested into my PC, I can open Bridge, say, where I see the following:

As I requested bookends, each image sequence is clearly delineated. The only downside is that the current FRSP capture has no metadata, eg shutter time. So, if this is important to know, you will need to make a note of it in the field. Plus, remember the above images required no shutter action, ie the capture was silent.

Once I'm in Bridge, or Lightroom, I can then review the images and decide what I want to do. BTW, each of the above was first ETTRed via ML, at an ISO of 100 and at f/10.

As an example, let's take the 16-image, 90s simulated exposure, and walk through the post processing.

After first adjusting one image in ACR, eg bringing down the highlights and pushing up the shadows and blacks, I synced this image to the other 15, and opened all 16 images in Photoshop.

I ran a stack script, to bring all the images into a single file.

As I was on a tripod, indoors, with no shutter vibration, in this case I didn't bother aligning the images.

I then ran my merge script, which broke the images up into blocks of four. This first pass automatically adjusted the opacity of each layer (to 1/1, 1/2, 1/3 and 1/4) and set the name of each layer to its block name, eg Block 1, Block 2, etc.

I then manually selected each block of four and merged them, leaving me with four new layers. From here I simply ran the merge script again and merged the resultant opacity-adjusted blocks, arriving at this image (JPEG scaled for this post):
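The opacity trick works because compositing layer k at opacity 1/k over the stack below it yields a running mean of the layers. A minimal sketch of the arithmetic (plain Python with made-up single pixel values, not the actual Photoshop script):

```python
def opacity_merge(pixels):
    """Composite layers bottom-to-top, layer k at opacity 1/k.

    Each step blends the new layer in at 1/k, which keeps the result
    equal to the running arithmetic mean of the layers so far.
    """
    result = pixels[0]                      # bottom layer at opacity 1/1
    for k, p in enumerate(pixels[1:], start=2):
        alpha = 1.0 / k                     # opacity 1/2, 1/3, 1/4, ...
        result = alpha * p + (1 - alpha) * result
    return result

# Four layers at opacities 1/1, 1/2, 1/3, 1/4 average to the arithmetic mean:
print(round(opacity_merge([10, 20, 30, 40]), 6))  # 25.0
```

This is why the second pass of the merge over the four block layers, again at 1/1 to 1/4 opacity, ends up averaging all 16 frames.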


From here I applied Smart Sharpening and adjusted the contrast:

Although this test was a 16-image (FRSP) simulated LE, it is clearly wasted on such a static scene. However, this is also a noise reduction shot, and the final image has a noise reduction factor of 4 (ie sqrt(16)) over a single image, eg very clean dark areas and near zero colour noise.

Bottom line: now that MUSIC can exploit ML's FRSP, I have the ability to create shutterless captures, albeit limited to shutter speeds between 0.3 and 15s.

As usual I welcome any feedback on this post and/or MUSIC.




A post for the non-Canon users

As regular readers will know, this blog is biased towards Canon Magic Lantern users, but built on a technical foundation that is agnostic to the type of camera you shoot with.

Both DOFIS, the ML script aimed at focus and exposure bracketing, and MUSIC, the ML script aimed at bracketing other than exposure and focus, eg for super resolution, noise reduction, long exposures and people elimination, are scripts that simply automate processes that every photographer can do.

Thus, for focus, every photographer should be aware of the Rule of Ten, to estimate the hyperfocal in their head, and the odd/even rule, which guides you on where to refocus when landscape focus bracketing.
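I won't restate the Rule of Ten itself here, but the quantity it estimates is the standard hyperfocal distance, H = f²/(N·c) + f. A quick sketch of that underlying formula (plain Python, assuming a full-frame circle of confusion of 0.03mm; the 24mm/f10 values are just illustrative):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Standard hyperfocal distance: H = f^2/(N*c) + f, everything in mm."""
    return focal_mm**2 / (f_number * coc_mm) + focal_mm

# A 24mm lens at f/10 on full frame, reported in metres:
print(round(hyperfocal_mm(24, 10) / 1000, 2))  # 1.94
```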

As for multishot image bracketing for Super Resolution (SR), Noise Reduction (NR), Long Exposures (LE) and People Elimination (PE), then my merge Photoshop script will come in useful, unless you love processing smart objects with statistics ;-)

Bottom line: although I write my blog for my personal enjoyment, ie as a cathartic record of my photographic journey, and thus it is biased towards Canon Magic Lantern (and CHDK) users, it is, nevertheless, built on a technical foundation that should be of interest to any photographer who shoots with a digital camera, or even a film camera, eg the RoT is a general rule.

Thursday, January 14, 2021

MUSIC: additional notes

As mentioned in the last post, I'm revisiting my 'legacy' scripts. My aim is to have only two Magic Lantern scripts for all my photography needs.

The first part of the exercise was DOFIS (the Depth Of Field Info Script), which runs in the background and integrates into the ML upper bar. This script covers all my focus (tilted or not) and exposure bracketing needs.

The MUSIC (MUlti Shot Image Capture) script is thus going to become my 'other' script: biased towards bracketing use cases, other than focus and exposure.

In the first release of MUSIC, I covered super resolution (SR) and noise reduction (NR). 

In this post I'm releasing the next iteration of MUSIC, covering two more use cases:

  • Simulated Neutral Density (ND), ie a simulated Long Exposure (LE) capture
  • People Elimination (PE)

I covered the simulated LE capture use case in a previous post, so I won't regurgitate all the details here, as the idea is simple: capture n contiguous x-second images and merge them in post into a single image, thus simulating an (n·x)-second LE shot.
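As a sanity check in the field, the arithmetic is just n times the base shutter; a trivial sketch (plain Python, not part of the MUSIC script):

```python
def simulated_le_seconds(n_images, base_shutter_s):
    """Total simulated long-exposure time for n contiguous captures."""
    return n_images * base_shutter_s

# 20 images at a 0.5s base shutter give a 10s simulated LE:
print(simulated_le_seconds(20, 0.5))  # 10.0
```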

To capture a simulated LE bracket set, all one needs to do is set the exposure for the scene then use MUSIC to capture your brackets:

In the above screen capture we see that MUSIC has been used to set a 20 image bracket set. MUSIC tells us that the simulated LE time is 10s, ie the base shutter is at 0.5s.

We have requested a 5s delay at the start of the bracket capture, plus the image to image (I2I) delay is set to zero.

Finally, dark frame bookends are requested, so that we may clearly identify the LE bracket set in post.

All we need to do now is run the script and process in post, eg just like an SR image, using either Smart Objects and statistics, or by adjusting the opacity of each layer, either manually or by using the Merge Layers JavaScript in Photoshop (downloadable from the right).

To eliminate people, we need to use the I2I delay, to inject a delay between successive images. The choice of delay will depend on the dynamics of your scene, ie the mobility of the people.

The following shows a typical PE bracket capture set up:

Here we see a 10 second I2I delay has been set.

Post processing is, once again, via either the Smart Object route, ie with the median statistic, or by adjusting the opacity of the layers. However, the best approach for people removal is the Smart Object route with the median statistic, as the frame-averaging approach will generate more ghosting.
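The median's advantage is easy to see numerically: a passing person turns one frame's pixel into an outlier, which the median ignores but the mean drags towards, ie ghosting. A sketch with made-up pixel values (plain Python, not the Photoshop workflow itself):

```python
import statistics

# Hypothetical values of one pixel across 5 frames; a person passes
# through the scene in frame 3, brightening that frame's pixel.
pixel_over_frames = [100, 102, 240, 101, 99]

print(statistics.median(pixel_over_frames))  # 101: the person is eliminated
print(statistics.mean(pixel_over_frames))    # 128.4: the person ghosts the average
```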

As usual I welcome any feedback on this post, and if you have any ideas for new DOFIS or MUSIC features, please don't hesitate to contact me.


Monday, January 11, 2021

Welcome to 2021

Last year I decided to 'withdraw' all my ML-Lua legacy scripts and start afresh, as many of the scripts had evolved over a period and changed considerably, as my ideas matured.

The first script I looked at was the ML script for providing focus information, ie for taking focus bracket sets, exposure bracket sets, pano bracket sets, etc.

The resultant script was called the Depth Of Field Information Script (DOFIS), which, IMHO, provides the ML shooter with the best tool for setting focus, ie:

  • Knowing the lens principal planes and the exit and entrance pupil positions;
  • Knowing the infinity blur at any focus position;
  • Setting infinity by knowing the blur;
  • Estimating the position of the pano pivot point, relative to the sensor;
  • Providing feedback when undertaking landscape focus bracketing;
  • Carrying out exposure bracketing, with or without focus bracketing;
  • Providing additional information to support the user, eg focus breathing impact on FoV.

Today I'm releasing the next refreshed script, this one aimed at supporting the shooter who is looking to create bracket sets to reduce noise and/or create super resolution images.

I'm calling the script the MUlti Shot Image Capture (MUSIC) script, and it is downloadable from the right hand 'My Scripts' area.

I'm not going to waste time discussing noise reduction and super resolution bracketing, as others have covered this.

MUSIC is a very simple script and arguably isn't required; however, I decided to write it as it has a few features that support the user and that are not found elsewhere, eg:

  • Dark frame bookends
  • Tripod-based focus jiggling
  • Focus step measuring

Dark frame bookends are something I find essential if I'm capturing many bracket sets, as the bookends help differentiate the various bracket sets in post.

Focus jiggling is a non-proven way to do super resolution, based on the 'assumption' that moving the lens focus between images may (sic) create small image-to-image movements at the sensor pixel level, eg a few microns, because of lens element tolerances and movements. As I say, this remains unproven. However, if you can't carry out 'normal', handheld super resolution bracketing, eg because of shutter speed, then focus jiggling may introduce enough image-to-image jitter to allow you to carry out your super resolution processing. If it doesn't, at least you will be able to reduce the noise in your image, ie by sqrt(n), where n is the number of images taken.
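The sqrt(n) figure comes from averaging n frames of independent noise: the signal adds coherently while the noise adds in quadrature, so the residual noise standard deviation falls by sqrt(n). A quick numerical check (plain Python simulating hypothetical Gaussian noise; nothing here comes from the MUSIC script itself):

```python
import random
import statistics

random.seed(42)  # repeatable run

def noise_sd_after_averaging(n_frames, n_pixels=20000, sd=10.0):
    """Average n_frames of pure Gaussian noise per pixel; return the residual SD."""
    averaged = [statistics.fmean(random.gauss(0, sd) for _ in range(n_frames))
                for _ in range(n_pixels)]
    return statistics.stdev(averaged)

# With 16 frames the noise SD should fall from 10 to roughly 10/sqrt(16) = 2.5
print(round(noise_sd_after_averaging(16), 1))
```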

The focus measurement feature in MUSIC allows you to measure the number of small focus steps between your current focus position and infinity. This is simply a bonus feature, not related to NR or SR image capture.

MUSIC's menu looks like this:


I believe the menu is self explanatory, but note:

  • The number of images is limited to between 4 and 64 (but you can change this in the script)
  • The delay is limited to between 0 and 5 seconds (once again change in the script if you wish)
  • Focus mode has two options: At this focus and, if AF enabled, move to the hyperfocal before capturing the NR/SR brackets
  • Focus shift mode allows you to focus jiggle in a specific direction, eg towards or away from infinity. This is useful if you are focused at the lens extremes

To run the script simply press SET with 'Run Script' selected. If 'Measure?' is selected the script will provide on screen feedback of the number of (small) steps between the current focus and infinity.

As for post processing: as others have written about this, I won't go into detail here, other than a few words on how I do super resolution processing:

  • Having ingested the images into LR or PS, I tweak the basic RAW settings, eg highlights and shadows;
  • I stack all the images into one document;
  • If undertaking super resolution processing, I upscale the document by, say, 200%, using nearest neighbour;
  • I auto align the images;
  • I create a smart object and use the median statistic to create a single image stack;
  • I flatten the image and either leave as is or reduce in size by 50%.
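The upscale and median-stack steps above can be sketched numerically; this toy version (plain Python, operating on 1-D rows of made-up pixel values rather than real images, and not the actual Photoshop workflow) shows the 200% nearest-neighbour upscale followed by a per-pixel median across the stack:

```python
import statistics

def upscale_2x_nearest(row):
    """200% nearest-neighbour upscale of a 1-D pixel row: duplicate each pixel."""
    return [p for p in row for _ in range(2)]

def median_stack(rows):
    """Per-pixel median across a stack of equal-length rows."""
    return [statistics.median(col) for col in zip(*rows)]

# Three hypothetical frames of the same scene with slight pixel jitter:
frames = [[10, 20, 30], [12, 18, 31], [11, 22, 29]]
upscaled = [upscale_2x_nearest(f) for f in frames]
print(median_stack(upscaled))  # [11, 11, 20, 20, 30, 30]
```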

Bottom line: I intend to slowly refresh/republish my legacy scripts. Hopefully MUSIC will help those looking to capture NR or SR bracket sets. As usual I welcome any feedback on this post.

Sunday, December 13, 2020

LensSim 1.2

As I have discussed in previous posts, LensSim is not there to model the actual insides of your lens, as it is impossible to know what the lens designer has optically laid out internally. We also have no idea which lens elements are moving, and thus how, for example, focus is managed.

What we can say is that representing a modern lens by a highly simplified 'thin lens' model is the worst we can do.

The DOFIS model, on which LensSim is based, is about the best we can do, short of knowing the lens design or carrying out lens modelling in ray-tracing CAD packages.

Although not perfect, I've now added a representation of the entrance and exit pupils into LensSim, using the following estimate for the entrance pupil radius:

Where f is the focal length, m the magnification, p the pupil magnification, and N the f-number at infinity, ie printed on the lens.

The exit pupil radius is assumed to be p times the entrance pupil radius.

In LensSim things now look like this:

Here we see a representation of the entrance and exit pupils, positioned (rotated 90 degrees out of their optical plane) at their location on the lens axis.

Bottom line: LensSim remains a tool to help understand your lens and identify some key properties that are useful to know, ie the front principal location, the entrance pupil location, and the FoV changes due to lens breathing. But remember, LensSim is not modelling the actual design. All we can say is that it is a better representation than a classic thin lens model.

As usual I welcome any feedback on this post.

Monday, December 7, 2020

LensSim: Data Sheet

Another minor tweak to LensSim, this time to tidy up the 'Lens Data Sheet'.

The functionality is:

  • Add your lens name on sheet 1;
  • Enter your lens data on sheet 2;
  • Set the focus to the minimum, in order to get info on lens extension;
  • Adjust the graph's dimensions and the lens position as required;
  • Take a screen grab, eg Shift-Win-S on a Windows 10 PC.

As an example, here is a screen grab I just took of a hypothetical 24mm lens:

This data sheet provides all the key info, eg:

  • The lens name
  • The lens input info
  • The position of the front principal, which is the zero of the hyperfocal and DoF distances; plus this is where the zero of the hinge height sits in a tilt/shift lens;
  • The position of the entrance pupil, important to know if wishing to calculate depths of field 'correctly', plus this is the zero of the field of view and the pano pivot location;
  • The FoV at the focus position, hence set focus to the MFD to get the bellows impact on the FoV; plus the FoV at infinity, ie the maximum FoV;
  • The width of the hiatus is simply focal length divided by the aperture number at infinity. 

Finally, LensSim is also useful if you wish to explore how a lens changes its output characteristics, as you change various input data, eg MFD, MaxMag, Pupil Mag, Focal Length, etc. Plus remember you can set any LensSim slider to play an animation.

Bottom line: Using LensSim you can create a data sheet for each of your lenses, eg adding additional info as required.


Friday, December 4, 2020

LensSim UI Tweak

I wasn't happy with the output UI in LensSim, so I've simplified it to show just the cardinal points, the hiatus distance, and the entrance and exit light cones, ie to/from the pupils.

The entrance light cone shows the field of view and the exit light cone ends up on the sensor plane.

The following shows the latest UI, but note this screen grab doesn't represent a real lens.

Hopefully this makes LensSim easier to read.

As usual I welcome any feedback on this post.