Sunday, November 24, 2019

No more ND filters?

So let's jump to the bottom line: you need ND filters. 

But what if you've left them at home and you see, or imagine, that killer Long Exposure (LE) shot?

What if the ND you are carrying in the field is not strong enough: it will give you a 2 second exposure, but not the 20 second one you want?
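For scale, going from 2 seconds to 20 seconds is a 10x increase in exposure time, ie log2(10), or about 3.3 stops, of extra ND that you don't have with you.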

This is where photographers turn to 'hacks' to help them out. For instance, if you need a 20s exposure and all you can get is 2s, then simply take ten 2s images and stack them, for example in Photoshop, using mean or median statistics to simulate the 20s exposure.
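If you prefer to script the stacking rather than use Photoshop, here is a minimal sketch of the idea in Python. It assumes you have already converted the frames to same-sized, aligned TIFFs in a folder called frames/ (my naming), and that numpy and imageio are installed:

# Minimal mean/median stacking sketch: folder and file names are just examples.
import glob
import numpy as np
import imageio.v2 as imageio

files = sorted(glob.glob("frames/*.tif"))
imgs = [imageio.imread(f) for f in files]
src_dtype = imgs[0].dtype                      # remember the source bit depth
stack = np.stack(imgs).astype(np.float64)      # shape: (n_frames, height, width, channels)

mean_img = np.mean(stack, axis=0)              # simulates the single long exposure
median_img = np.median(stack, axis=0)          # better at removing moving objects

# Cast back to the source dtype so the output matches the input bit depth.
imageio.imwrite("le_mean.tif", mean_img.astype(src_dtype))
imageio.imwrite("le_median.tif", median_img.astype(src_dtype))

The mean output simulates the single long exposure; the median output is the one to reach for when you want moving people to vanish.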

Ok, it requires more work, but such is life. If creating that image is important to you, then spending the time to make it will be worth it.

An advantage of the multi-image approach, compared to the single-image ND version, is that you also have the higher shutter speed images. Thus, in post, you can blend individual 2s images with the processed '20s' one.

LE photography is not to everyone's taste. However, when you see moving water smoothed out by an LE, there is no doubt it transforms the image's look, eg by removing the high frequency content.

Another good reason to use LE photography is when you wish to remove people (or reduce noise) in the scene. As long as the people are moving, they can magically be made to disappear in post.
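In the stacking sketch above, the median output is doing exactly this: for any given pixel, as long as a person covers it in fewer than half of the frames, the median 'votes' that pixel back to the static background.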

So far, and rather unusually for me, I've made no mention of Magic Lantern (ML). That's because, bluntly, you don't need ML to capture the n images for LE post processing.

Without ML, one would typically use an intervalometer to capture the n images needed to simulate the single ND version. But, of course, the shutter still needs to be actuated for each image, even if you are in Live View or have the mirror locked up.

But with ML we can capture from the sensor without any shutter actuation. For example, ML has had a (shutter-time-limited) full-resolution silent picture option for a while, which we could use to create the images we later process in post.

But we still end up with n individual images on the card. No big deal I hear you say, but there is a 'better' way.

Thanks to the hard work of the ML developers, we also have so-called full-resolution Live View video capture. The maximum frame rate is low, around 7 fps, but this isn't an issue for our LE capture needs.
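To get a feel for the numbers before you shoot, here is a trivial bit of back-of-the-envelope Python (nothing ML-specific; the variable names and values are just examples) showing how the FPS override, your per-frame shutter speed and the LE time you want to simulate relate:

# Back-of-the-envelope planning for a simulated LE capture (example numbers only).
target_le_s = 30      # the ND exposure you want to simulate, in seconds
fps_override = 3      # the FPS override you will set in ML
shutter_s = 0.25      # your per-frame shutter speed, in seconds

frames = round(target_le_s * fps_override)   # frames the MLV will contain (90 here)
integrated_s = frames * shutter_s            # total light actually gathered (22.5s here)
coverage = shutter_s * fps_override          # fraction of the LE window covered (75% here)

print(frames, integrated_s, coverage)

If the coverage is low, fast-moving subjects can look slightly 'steppy' rather than smoothly blurred, which is one reason to keep the per-frame shutter speed as long as the light allows.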

The first thing you need to do is install the 4K raw video recording build from the Experimental Builds page (https://builds.magiclantern.fm/experiments.html). There are other 4K raw video builds around, but the one on the experimental page should get you going.

As for using it in this LE mode: let's just say up front that it's fiddly and can be a bit flaky. 

Make sure you enable the required modules, ie crop_rec, mlv_lite and mlv_play, plus any other modules you need, eg ETTR and Dual-ISO.

The in-field workflow I use is as follows:
  • Switch to Live View (LV)
  • Set your exposure for the scene; I personally use ML's ETTR. Ideally aim for a shutter speed of between, say, 1/5s and 1s;
  • Go into the ML Bulb timer in the Shoot menu and set the ND time you wish to simulate
  • Go into the ML Movie menu and switch on the following, in this order: Crop mode set to full-res LV; RAW video on; FPS override to, say, 1
  • Exit the ML menu
  • Switch to Canon video mode, where you will likely see a pink mess on the screen
  • Toggle the MENU button on and off, which hopefully will give you a clear image of the scene
  • Go into the ML video menu and confirm the resolution is ok
  • Check the exposure etc. The ML exposure should show something close to (though maybe not identical to) the exposure you set, whereas the Canon exposure will say something else, eg 1/30s
  • Go back into the ML menu, open the Scripts menu, and run the little LE helper script that I created, which can be downloaded on the right. All the script does is switch video recording on and off according to the time you set in the Bulb timer
  • Once the MLV video has been created, switch out of the Canon video mode (the script should have switched off the ML video settings)
The post processing workflow goes something like this:
  • Download the MLV-App (https://www.magiclantern.fm/forum/index.php?topic=20025.0)
  • Open the MLV App and load in the MLV video you created
  • Check it visually to see that all the frames look the same (warning: some may be corrupted)
  • Export the video with the average MLV preset
  • Re-import the averaged MLV and export it as a TIFF
  • Process it as you desire
Here is a real-world example I took this afternoon. The base exposure was f/16 at 1/4s, but I wanted an LE exposure of 30s. I played around with the fps, and 3 fps looked about right. I ran my script and ended up with a 35s MLV containing 106 frames. A single, full-resolution frame looks like this:


As we can see, there is a lot of distracting high frequency 'stuff' in the water and, of course, there are people moving around on the bridge, as they were throughout the video capture.
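As a quick sanity check on the numbers: 35s at 3 fps is about 3 x 35 = 105 frames, which tallies with the 106 I got; and each frame was exposed for 1/4s, so the stack integrates roughly 26s of actual light while spanning the full 35s of motion across the scene.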

Having processed the above in MLV App, I ended up with the following simulated 30s LE image:



Of course the sky is horrible, as it really was. So a quick trip to Luminar 4 and we have a new sky. OK, I know it needs more work ;-)



So there you have it. Thanks to the hard work of a whole community of videographers and developers over on ML, a simple photographer like me now has an additional tool to use.

As usual I welcome feedback on this post, especially any corrections and/or suggestions to improve the workflow.
