Monday, May 9, 2022

Canon M3: ETTR Meter

 NOTE: This is an update of the original post

If, like me, you use an ETTR (expose to the right) exposure approach, you will know that the challenge is knowing when you have reached the 'optimum' ETTR setting.

If you have a camera that gives you a liveview histogram, then you have a fighting chance. If your camera also has so-called live view 'blinkies' (a highlight warning), then you have an additional edge.

Having said that a LV histogram is your ETTR friend, it isn't a perfect one. Most, if not all, LV histograms are 8-bit; they are certainly not 14-bit RAW.

Take the Canon M3 running CHDK. The CHDK histogram is an 8-bit extraction of the viewport data, that is, a histogram with 256 tonal levels, from darks to lights.

The CHDK histogram is certainly better than nothing, and, relative to the Canon histogram, it at least has the advantage that it can be displayed in log mode.

One downside of the histogram is 'hidden' from most users: the way the data is mapped to each stop.


In the above we see a representation of how linear data is laid out per stop, based on a 16, 14, 12, 10 or 8-bit 'RAW' histogram. Each stop shows the amount of tonal data that is captured. For instance, in a 14-bit histogram, representative of a typical RAW image, there are 8192 tonal levels in the top stop and 4096 in the next stop down; whereas in an 8-bit image we only have 128 in the top stop and 64 in the next stop down.
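
To make these numbers concrete, here is a minimal Python sketch (my illustration, not part of the script) that counts the linear levels in each of the top stops for various bit depths:

# Illustrative sketch: linear levels available in each of the top stops
# for various bit depths. The top stop holds half of all levels, the next
# stop half of the remainder, and so on.

def levels_per_stop(bits, stops=5):
    """Return the number of linear levels in each of the top `stops` stops."""
    total = 2 ** bits
    counts = []
    for s in range(stops):
        upper = total // (2 ** s)        # top of this stop (exclusive)
        lower = total // (2 ** (s + 1))  # bottom of this stop
        counts.append(upper - lower)
    return counts

for bits in (16, 14, 12, 10, 8):
    print(bits, "bit:", levels_per_stop(bits))

# 14 bit: [8192, 4096, 2048, 1024, 512]
#  8 bit: [128, 64, 32, 16, 8]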

However, things are more complicated, as the above assumes the data remains linear throughout, whereas the viewport, from which the histogram is created, is gamma encoded: I assume at 2.2. Plus the viewport will be impacted by the camera's picture style settings and WB, eg if UniWB is being used.

Thus the vertical bars on the histograms are not unit stops apart; they just divide the histogram into 5 'zones'. On an 8-bit gamma-based histogram, the top three 1/3 stops are placed at about: 229-255; 206-228; 185-205, compared to a linear encoded one at: 202-255; 160-201; 127-159.
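
As an illustration, here is a small Python sketch, assuming a simple 2.2 gamma and an 8-bit (0-255) scale, that estimates where the top three 1/3 stops land for linear and gamma encoded data; the numbers come out close to those quoted above:

# Illustrative sketch: where the top three 1/3 stops land on an 8-bit scale,
# for linear data versus data encoded with a simple 2.2 gamma (an assumption).

GAMMA = 2.2

def third_stop_levels(gamma=None, thirds=3, max_level=255):
    """8-bit level at each 1/3 stop boundary below saturation."""
    levels = []
    for k in range(1, thirds + 1):
        frac = 2 ** (-k / 3)            # linear fraction of full scale
        if gamma:
            frac = frac ** (1 / gamma)  # apply the display gamma
        levels.append(round(max_level * frac))
    return levels

print("linear:", third_stop_levels())        # ~[202, 161, 128]
print("gamma :", third_stop_levels(GAMMA))   # ~[230, 207, 186]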

Thus, for the gamma encoded CHDK and Canon histograms, the 1/5 line down from the highlights is, in fact, at about 2/3 of a stop down from saturation.

Finally, another problem in using a CHDK or Canon histogram is that they are not always easy to read: they are rather small on the LCD; and, anyway, when ETTRing we are only interested in the 'top stop'.

In order to help nail an ETTR setting, I'm pleased to introduce an 'ETTR Meter' option into my M3 Landscape Bracketing Script (downloadable from the right).

The ETTR Meter gives visual feedback on the top three 1/3 stops down from 100% saturation.

The user can set two trigger values to control the meter, which are set as a percentage of the total histogram count:

#ettrpl = 1 "low ETTR 1/10 %" [1 10]
#ettrpu = 1 "high ETTR %" [1 10]

Thus we can set the first trigger (yellow) between 0.1% and 1%, and the second (red) between 1% and 10%. In the following example I used 0.5% and 5%.
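
For those who like to see the logic spelt out, here is a hypothetical Python sketch of how I think of the meter's trigger logic; it is not the script's actual Lua code, and the function name and example numbers are mine:

# Hypothetical sketch of the ETTR Meter trigger logic (not the script's Lua code).
# Each of the top three 1/3 stops is classified from its share of the histogram count.

def classify_third_stop(count_in_stop, total_count, low_pct=0.5, high_pct=5.0):
    """Return 'green', 'yellow' or 'red' for one 1/3 stop."""
    pct = 100.0 * count_in_stop / total_count
    if pct < low_pct:
        return "green"   # negligible data in this 1/3 stop
    if pct < high_pct:
        return "yellow"  # some data: getting close to the ETTR point
    return "red"         # significant data: highlights at/near saturation

# Example: counts in three 1/3 stops of a 1,000,000 sample histogram
total = 1_000_000
for count in (2_000, 8_000, 60_000):
    print(count, "->", classify_third_stop(count, total))
# 2000 -> green (0.2%), 8000 -> yellow (0.8%), 60000 -> red (6%)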

The ETTR Meter can be switched on via the M3 script menu under exposure help.

When running, the ETTR Meter sits in the middle of the M3 info bar and shows the status of the top three 1/3 stops, with the highlights on the right.


In the above (UniWB) screen grab we see the CHDK histogram (in log mode, compared to the Canon linear one) and the M3 DoF bar at the top, which show: we are focused at 710mm; we need two focus brackets to reach the hyperfocal; the near depth of field is at 496mm; and the infinity blur is 32 microns. Note the overlap circle of confusion was set at 15 microns.

The ETTR Meter's three green bars show that the histogram count in all three third stops is less than 0.5%. So let's change exposure.


Here we see that we are getting closer to the ETTR point, as the top 1/3 stop remains green, whereas the next two 1/3 stops down are yellow, showing the count in each is over 0.5% and less than 5% of the total histogram count. So let's increase exposure again.

Here we see the final ETTR position, with the exposure just sending the top 1/3 stop into the red. We could back off, but experience says this setting will likely be OK.

Bottom line: The ETTR Meter is there to support your ETTR setting on the M3, which doesn't have a highlight warning feature in liveview. In the end, your knowledge and experience as a photographer, together with the CHDK and/or Canon histogram, and the ETTR Meter, should mean you capture the 'best' ETTR data for RAW post processing.

As usual I welcome any feedback on this post or any of my posts.






Sunday, April 24, 2022

I still believe the Canon M3, with CHDK, is one of the best travel cameras ;-)

As many know, I have a lot of Canon cameras (R, M3, M(Vis), M(Full Spectrum), M(IR), G1X, G5X, G7X and an S95). But my current favourite is my caged M3.

The M3, with its EF-M lenses, has a small and lightweight footprint, and, with its ability to run my CHDK M3 Landscape Bracketing Script, creates an incredible capture device.

The M3 script covers all focus and exposure bracketing use cases, including handheld exposure bracketing.

The latest version of the script, downloadable on the right, now offers two handheld bracketing routes: one based on a three bracket logic, and one on a two bracket logic.

The three bracket logic is enabled by selecting the HandH option under exposure bracketing and triggering the capture via the script's 'second shutter button', ie the M-Fn button.

The two bracket option, which can easily be augmented with an additional ISO bracket, is selected by using the Canon shutter, having selected sky bracketing with a set Ev shift at the first image's ISO value, or an auto-ETTR capture at ISO 100. In this case the in-field workflow goes like this:

  • Select the sky bracketing logic you desire, eg a fixed Ev shift or Auto-ETTR
  • Set an appropriate shutter delay in the script
  • Set aperture for the scene
  • Focus, using the script's feedback as required
  • Set the shutter speed to the handheld limit, eg 1/30s in the images I took today
  •  Adjust ISO, including using Auto-ISO, so that the shadows are appropriately captured according to the histogram
  • (If needed, switch on ISO bracketing as well, which will give you three brackets)
  • Finally, compose and press the Canon shutter, holding the camera as stationary as possible 
  • After the camera has captured the two images, adjust the shutter, and ISO if required, noting the shutter speed and ISO are reset to those you set for the first image, ready for the next bracket set adjustment

After ingesting into Lightroom, my post processing workflow goes like this:

  • Pre-process the images with PureRAW 2
  • Set a linear profile on the two images
  • Process through LR's HDR Merge
  • Use LR luminosity masks and toning to create the final image

To illustrate what the script can do, ie handholding in a high dynamic range environment, here are a couple of handheld images I snapped today at my local National Trust property: The Vyne, near Basingstoke.



As usual I welcome any feedback on this post, or any of my posts.

Monday, April 18, 2022

An overdue update

As I haven't posted here for many months, I thought I would give an update on my photography.

The biggest news, and one of the reasons I haven't been posting for a while, is that I've transitioned my main camera from a 5D3 DSLR to an EOS R mirrorless. The decision wasn't easy, as the R (at the moment) doesn't run Magic Lantern; although there are a few 'code jockeys' trying to get ML up and running on the R. 

So, goodbye to: RAW spotmeter; RAW LV histograms; RAW auto ETTR setting; Dual-ISO capture; complex exposure bracketing control; and, sadly, Lua scripting. 

The decision was taken when I knew I could get suitable EF-2-RF lens adapters, eg with VNDs etc. At the moment I have no intention to buy any RF lenses, as I have suitable adapters to use all my EF manual and auto lenses, as well as my Mamiya 645 lenses.

So, I'm now up and running and nearly have my 'new' (ie secondhand) EOS R fully fitted out with: a Smallrig cage and handle; the CamRanger Mini controller; and a battery grip. I'll be writing about the R-based photography and kit in future posts.

Another transition, thanks to Adobe Photoshop making my desktop PC GPU 'out of spec', was to move to a new PC, from Chillblast, comprising: a 12th Gen Intel(R) Core(TM) i9-12900K, running at 3.19 GHz; 64GB of RAM; an Nvidia GeForce RTX 3060 with 12GB of RAM; a Thunderbolt 4 card; and an ASUS 4K monitor.

The final transition, that kept me from posting, was to move service providers, thus forcing me to get into DNS settings, to ensure the historic links to my blog all work. I failed on my own, but managed to get help from a colleague: once again, many thanks Nigel :-)

Despite being off the blogging grid, I have managed to update a few things on the CHDK side. In particular my M3 Landscape Bracketing Script, now over 1100 lines of code, which can be downloaded on the right. The latest version now includes the 'optimum' exposure bracketing logic when handholding, which goes like this (a rough sketch of the logic follows the list):

  • Bracket 1: ETTR and take the first bracket at ISO 100 (the script checks to see if this first bracket is faster than the handholding base shutter that is set)
  • Bracket 2: Shutter speed is adjusted to the (user set) handholding value, whilst keeping the ISO at the base value. 
  • Bracket 3: The final bracket is taken at the handholding shutter value but at an ISO value where the camera exhibits 'ISO invariant-like' shooting, eg ISO 800 for the M3.
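
Here is a rough Python sketch of the above logic, ie my illustration rather than the script's actual Lua code; the ETTR shutter, handheld limit and ISO values are example assumptions:

# Rough sketch of the three bracket handheld logic (not the script's Lua code).
# Assumed example values: ETTR shutter 1s, handheld limit 1/30s, ISO invariant point 800.

import math

def handheld_brackets(ettr_shutter_s, base_iso=100, handheld_limit_s=1/30,
                      invariant_iso=800):
    """Return (shutter, ISO) for the three brackets and the Ev that bracket 2 gives up."""
    b1 = (ettr_shutter_s, base_iso)                          # ETTR at base ISO
    b2 = (min(ettr_shutter_s, handheld_limit_s), base_iso)   # handheld limit, base ISO
    b3 = (b2[0], invariant_iso)                              # handheld limit, invariant ISO
    ev_given_up = max(0.0, math.log2(ettr_shutter_s / b2[0]))
    return b1, b2, b3, ev_given_up

b1, b2, b3, ev = handheld_brackets(ettr_shutter_s=1.0)
print("bracket 1:", b1)                       # (1.0, 100)
print("bracket 2:", b2)                       # (~1/30, 100)
print("bracket 3:", b3)                       # (~1/30, 800)
print("Ev given up by bracket 2: %.1f" % ev)  # ~4.9 Ev, to be recovered via ISO/post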

BTW for those interested, here are the links to two MIT papers I used to inform the above.

https://people.csail.mit.edu/hasinoff/pubs/hasinoff-hdrnoise-2010.pdf

https://people.csail.mit.edu/hasinoff/hdrnoise/hasinoff-hdrnoise-2010-supp.pdf 

On the M3, pragmatically, things look like this (once again, many thanks to Bill at PhotonsToPhotos: https://www.photonstophotos.net/):

Thus, if I need to push the base ISO (100), ie for shutter speed control, I would not go above ISO 800, unless, once again for shutter speed reasons, eg controlling motion blur, I needed to.

As for the ETTR setting, ie for the highlights, the script has an auto ETTR feature that you can exploit, ie press the RIGHT button and fine tune the resultant shutter value until the histogram looks OK. I personally use the CHDK histogram rather than the Canon one, as I can put it into log mode. I also have a UniWB image that I can bring into play, ie to set a custom WB that better matches the RAW.

So what does the new bracketing look like?

Here is a three bracket set, captured by the M3 script, of the, rather dark, belfry at one of our local churches:



After ingesting into Lightroom, I pre-processed the images through PureRAW 2. The resultant DNGs were then processed through the Lr HDR Merge feature, using a linear profile, which resulted in the following final handheld snap, after a little bit of tone and colour correction:

So, to conclude, I'm back blogging, with a new main camera and new desktop PC. 

As usual, I welcome any comments on this post or any of my previous posts.


Monday, November 1, 2021

More thoughts on Multi Image Capture: PART 2

In the last post I started discussing multi image capture for post processing frame averaging. In this post I’ll look at using frame averaging as an alternative to exposure bracketing, ie to extend dynamic range (DR).

[BTW the first post was amended to tidy up my language, thanks to some comments I received on the DPReview forum: many thanks to @Entrophy512].

In standard exposure bracketing we would set our camera to a base ISO, say ISO 100 on my Canons, and capture as many images, separated by, say, 2Ev, to ensure our exposures capture enough information down into the shadows, and contain at least one image with no blown out highlights, for post processing through one of the following typical workflows:

  • Auto tone mapping based blend
  • Auto fusion based blend
  • Manually blend

Although auto tone mapping had a bad reputation in the early days, these days most tone mapping software can achieve a reasonably natural look, with fusion based auto blending, eg LR/Enfuse, potentially creating an even more natural blend.

However, both these approaches, despite auto correction tools, can introduce unsightly artifacts associated with movement, say, of trees, between images.

Manually blending, with or without luminosity masks, potentially offers the highest quality result, but does require more skill/effort in post processing.

All the above have a similar capture workflow:

  • Capture one image for the non blown out highlights
  • Capture enough images, at varying exposures, to ensure the shadow details you are interested in are captured

Or, put another way:

  • Capture one image for the non blown out highlights
  • Ensure the noise in the shadows detail areas is acceptable for post processing

Without going down a rabbit hole of detail, many photographers will recognise the following sources of noise:

  • Shot noise, related to the statistical nature of light. Shot noise follows a Poisson distribution, but may be approximated to a normal distribution well away from the shadows, ie in 'good' exposure. Shot noise can be reduced in post processing as the noise fluctuates around the mean exposure.
  • Dark current or thermal noise is mainly a problem for astro and/or extremely long exposures, which leads to the use of additional technologies that keep the sensor cool
  • Readout noise is, as it implies, generated after the sensor has gathered its signal. The main impact here is the 'extra' gain the photographer introduces, ie the ISO
  • Finally there are other noise sources that can impact the image, eg:
    • So-called 'reset noise', where the sensor pixel does not perfectly zero itself after capturing an image
    • Fixed pattern (column) noise, which is normally seen as vertical lines in the shadow areas of an image. Fixed pattern noise will tend to be additive between two identical images, taken next to each other, without pixel shifting, say
    • Quantisation noise that is introduced in the analogue to digital conversion process

Not all noise sources are the same, and some we can ignore, ie as photographers we have no/little control over them; although in the end we observe an integrated impact of all the various noise sources. 

For most photographers I believe it is worth understanding two sources of noise in particular, as 'optimum' camera settings and post processing can be used to reduce these noise sources, if needed, ie extending the DR of the post processed image.

Ignoring image to image motion for now, and keeping things simple, shot and ISO noise reduction can be achieved by simply averaging multiple images: either in the camera, in the case of the Phase One IQ4; or in post for the rest of us ;-)

As noise reduction goes as the square root of the number of images processed, we can thus reduce the noise by, say, a factor of 4, by taking 16 images and averaging them, either through opacity or smart object statistics.

This approach shows the most benefits in the shadow areas of an image, where the signal (number of photons captured/converted), and the tonal content, are low. However, it can also help to clean up a 'perfectly' exposed scene, as shot noise is still visible at, say, just below maximum well capacity, and can be clearly seen if you zoom in on a flat, featureless, well lit surface.
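
If you want to convince yourself of the square root rule, here is a minimal numpy sketch (my illustration) that averages N synthetic noisy frames and measures the resulting noise:

# Illustrative sketch: averaging N noisy frames reduces the noise by about sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0      # 'true' flat field level (arbitrary units)
noise_sigma = 8.0   # per-frame noise (arbitrary units)

for n in (1, 4, 16):
    frames = signal + rng.normal(0.0, noise_sigma, size=(n, 500, 500))
    stacked = frames.mean(axis=0)  # simple mean stack (opacity / smart object equivalent)
    print(f"{n:2d} frames: noise = {stacked.std():.2f}")

# 1 frame ~8.0, 4 frames ~4.0, 16 frames ~2.0, ie a factor of 4 reduction at 16 frames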

At this point many will be saying: why bother?

After all, 'normal' exposure bracketing is fine and it's simple. But as stated above, it does come at a price:

  • Integrated capture time, eg a base (ETTR) exposure of a high dynamic range scene, say needing a 4Ev lift for the shadows, will typically be achieved with three exposures, for tonal overlap, at 1s, 4s and 16s, ie an integrated capture time of at least 21s
  • During the capture wind movement may become a problem
  • Post processing may be OK, ie if there was no movement or movement is an artistic element of the image, but it might require some post processing effort to eliminate unsightly artifacts, ie to try and make the image look more natural or organic

If, on the other hand, we took a burst of 16 images at an exposure of 1s, the noise in the shadows and highlights would be reduced by a factor of 4. Thus, if shooting at ISO 1600, the post processed image would have the noise characteristics of an ISO 100 image; or, if shot at ISO 100, the photon noise will look like that of an image shot at around ISO 6.

Shooting at high ISOs may be a useful tool to reduce fixed pattern noise, however, the ISO that you need to shoot at will be camera dependent. From my experience, a useful rule of thumb is to seek out an ISO just above where your camera exhibits ISO invariant like characteristics and above where the fixed pattern noise becomes less noticeable. Thus on my Canon M I might use an ISO of 800 - 1000, but on my 5D3 I would use an ISO of 1600. 

Exploiting the dynamic range of modern cameras, and using the above insight, allows us to consider alternative high dynamic range capture strategies, ie where we would normally bracket, based on using one or two fixed exposure capture sets: one set for the scene; or one set for the highlights and another for the shadows.

If you also wish to exploit ISO, ie for a camera that is not fully ISO invariant through its ISO range and where the scene’s DR is not too 'extreme', the capture workflow could go like this (composition and focus ignored):

  • Set an ETTR exposure for the highlights at the base ISO
  • Capture 4 images
  • Increase the ISO to, say, ISO 800 (ie set at your ISO invariant point)
  • Capture the appropriate number of images, eg 8 in this case
  • Post process

Note that for cameras that are fully ISO invariant, ie from their base ISO, you only need to capture sufficient images to address the noise in the shadows. That is, shoot at a single ISO value.

To further emphasise the potential practicality of the above, consider where cameras are at the moment. The new kids on the block can capture images at 20-30 images per second, with no buffering.

As for the above example, ie a two set, 4+8 image capture, for the highlights and shadows respectively: if the base exposure was 1s, as in the exposure bracketing above, the integrated capture time would now be 12s, as opposed to 21s.
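
As a quick worked check of the arithmetic (my illustration, using the example numbers above):

# Illustrative arithmetic: integrated capture time for the two strategies.
bracketed        = [1, 4, 16]   # classic three bracket set, 2Ev apart (seconds)
burst_highlights = [1] * 4      # 4 frames at the ETTR exposure, base ISO
burst_shadows    = [1] * 8      # 8 frames at the same exposure, ISO pushed

print("exposure bracketing:", sum(bracketed), "s")                         # 21 s
print("frame averaging    :", sum(burst_highlights + burst_shadows), "s")  # 12 s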

As for a post processing workflow it could go like this:

  • Ingest into Lightroom and adjust the RAW exposures in each of the two image sets. For the ETTR image set, use curves, ie keep the highlights fixed. Keep RAW sharpening set to zero as we don't wish to add additional 'noise'
  • Export both data sets to Photoshop
  • Merge each of the two image sets into their own Smart Object and use Mean statistics, or adjust the individual layer opacities: manually, or use the merge script (on the right). A rough sketch of this mean-stack step is shown after the list
  • Stack the two flattened image sets and use manual blending to bring the 'best bits' of each forward
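
For those who prefer to stay outside Photoshop, here is a minimal Python/Pillow sketch of the mean-stack step mentioned above; the file names are placeholders and I assume 8-bit exports for simplicity:

# Minimal sketch of the mean-stack step outside Photoshop.
# File names are placeholders; 8-bit exports are assumed for simplicity.
import numpy as np
from PIL import Image

def mean_stack(paths):
    """Average a list of identically framed images into one float array."""
    acc = None
    for p in paths:
        frame = np.asarray(Image.open(p), dtype=np.float64)
        acc = frame if acc is None else acc + frame
    return acc / len(paths)

highlights = mean_stack([f"ettr_{i}.tif" for i in range(4)])    # 4 image ETTR set
shadows    = mean_stack([f"shadow_{i}.tif" for i in range(8)])  # 8 image ISO-pushed set

# The two averaged frames would then be blended manually, eg with luminosity masks.
Image.fromarray(np.clip(highlights, 0, 255).astype(np.uint8)).save("highlights_stack.tif")
Image.fromarray(np.clip(shadows, 0, 255).astype(np.uint8)).save("shadows_stack.tif")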

The attached three images are a simple test of the above I just took with my EOS M, at a focal length of 11mm:



The top image is one of four ETTR exposures taken at ISO 100, f/8 and 1/15s. The middle image is one of eight taken at the base exposure, but ISO shifted to ISO 1600: note I 'only' took 8 images in this case, ie rather than 16. Thus the shadow noise reduction will result in the images taken for the shadows looking as if they had the noise of around an ISO 200 capture.

The last image is a quick development of the other two, where I processed the two stacks in Photoshop, manually blended in the appropriate details from each image, and finished toning and colour grading back in Lightroom.

As usual for me, this has been a 'bit of fun'. Will I use it as an alternative to 'normal' exposure bracketing?

Maybe: as I have all my Canon cameras running in-camera Lua scripts that greatly speed up capturing image sets like the above. Having said that, I believe most modern cameras are able to capture the above in a very lean manner. Plus the frame averaging approach to extending DR has the potential advantage of creating more natural or organic looking images where there is movement in the scene, eg wind.

As usual I welcome any comments on this post or any of my posts.


Friday, October 29, 2021

More thoughts on Multi Image Capture: PART 1 (updated)

Anyone who has read my blog knows that I love playing with technology and especially creating in-camera scripts. In this post, although there is a link to CHDK and Magic Lantern scripting, the ideas I'm discussing can be adopted by anyone with a camera.

Multi Image Capture is used to gather data beyond what is possible in a single image, eg: achieving greater depth of field; covering a larger dynamic range than the sensor can capture in a single image; realising an artistic vision, for example a long exposure; for reducing noise; to eliminate people from your image...etc etc

In other words, multi image capture is an essential tool for all serious photographers.

The two most common multi image capture use cases are, of course, increasing the depth of field and/or covering a large dynamic range. Focus and exposure bracketing, where we change a camera or lens property between captures, are well known techniques and I'm not going to repeat the basics here. A search of this blog will bring up previous posts on both subjects.

Before I develop my multi image capture ideas further, it is useful to remind ourselves what most are brainwashed to believe as they start their photography: The so-called Exposure Triangle.

Yes, it's a triangle, but it doesn't really represent the 'photon-based' exposure, which is formed from the aperture size, controlling the 'flow' of light onto the sensor, and the shutter time, allowing that flow to fall on the sensor for a fixed amount of time.

ISO is simply a magnification that gets used to increase the brightness of the captured scene, ie both the signal and some components of the noise.

Many modern cameras are ISO invariant, and applying an ISO setting above the base ISO in the camera, is broadly no different to applying it in post. Other cameras, like my Canon M or my Canon 5D3, only exhibit ISO invariant like characteristics above a certain (camera specific) ISO.

Another thing that follows from the above is that shutter time is coupled to capture time. For example, if I wish to create a long exposure artistic look, eg smooth out some flowing water, then I need to match my shutter speed to the capture time that I need for my artistic vision.

This could be achieved by closing down the aperture, but as we know, the downside of this is that diffraction blur will increase, to the potential detriment of the image quality we are seeking. 

To overcome such problems we usually introduce neutral density filters to reduce the amount of light we can capture, allowing us to run with longer shutter times at the ‘optimum’ aperture. But NDs come at a cost.

First, in our pockets, as one ND is not going to cover all our needs; and second, in terms of image quality, as that extra glass (or even plastic), especially with stacked NDs, will only degrade the image. Thus, the thought of buying an expensive camera and lens, and then putting a cheap ND in front of it, is something to think about.

It would be far better if we could decouple the aperture, which is our primary depth of field tool, from the shutter time, and our capture time needs.

This is where multi image photography comes in and, in particular, image or frame averaging, where we don't change camera or lens properties between shots, which gives us an alternative 'exposure/capture triangle' - note this is just a visual allusion to the original exposure triangle:

Here we see, on the left, our driving vision for the image which, on the right, we wish to create.

We will set the aperture (N) to achieve the depth of field we wish to realise, and set focus appropriately, eg hyperfocally, non-hyperfocally or multiple times if focus bracketing.

We choose a total/integrated capture time, eg for: long exposure impact; or reducing noise; or extending the captured dynamic range. Plus set the camera to the base ISO, to ensure maximum dynamic range coverage.

If the shutter time, to achieve an acceptable exposure, is similar to the capture time we are seeking, including, if required, a small ISO lift from the base, then we are there: all in a single exposure.

We could also iterate a bit with aperture, needing to play off depth of field with shutter-induced motion blur: but that's detail.

If, on the other hand we don't want to change our aperture, and our shutter time falls short of the capture time we wish, then we may decide to try a low strength ND, ie trying to stay away from high strength NDs, with their colour casts and image degrading glass/plastic. 

Once again, this may do it. However, if it doesn't, then we take multiple images at a single shutter time to ensure the integrated shutter time matches the desired capture time. For example, with a 1/10s shutter and a 2 second total capture goal, we will need 20 images; which, in passing, will help us reduce the noise in post capture processing.
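
The arithmetic is simple enough to sketch (my illustration):

# Illustrative arithmetic: frames needed to match a target integrated capture time.
import math

def frames_needed(shutter_s, target_capture_s):
    """Frames required so the summed shutter time matches the capture goal."""
    return math.ceil(target_capture_s / shutter_s)

print(frames_needed(1/10, 2.0))   # 20 frames for a 2 s 'long exposure'
print(frames_needed(1/10, 30.0))  # 300 frames for a 30 s 'long exposure'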

For now we will ignore motion-strobe effects etc and assume the images are captured in a seamless/gapless fashion.

Although ISO is still there to be used, ie as a moderator, for the kind of photography we are talking about in this post, eg tripod based city/landscapes etc, we will usually wish to remain at the base ISO.

With the above we thus only need to carry one or two (maybe) low strength NDs and/or a circular polariser, which we will use to reduce the number of images we take and help take 'near' seamless/gapless images. That is, we need to match/respect the camera's image buffer limitations.

Let's now finish this post and look at the best way to achieve the above. Simply go out and buy yourself a Phase One XT IQ4 150MP camera, which does all the above in-camera and delivers you a single (gapless) RAW image of any capture time.

OK, you don't have over 50,000 pounds to spend: then we will need to create an in field and post processing multi image capture work flow, to achieve a very similar result: for free!

But that's a story for future parts of this post.

As usual I welcome any comments on the above or any of my posts.



Tuesday, October 19, 2021

M3 LBS minor update

In the last post I introduced the LE/NR and Super-Resolution features in the M3 Landscape Bracketing Script. As a result of some testing, I've refined/simplified the UI.

The top half of the menu now looks like this:


Here we see that a Super Resolution bracket set has been requested under the Focus Bracket? menu item. The number of super-res images to be taken is now defined via the negative ND feature. A -3 value, as above, tells the script to take 8 images; a -5 value would result in 32 images.
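
In other words, my reading of the menu is that the image count is 2 raised to the magnitude of the negative ND value; a trivial Python sketch of that assumption:

# Assumed mapping (my reading of the menu): image count = 2 ** |negative ND value|
def superres_count(nd_value):
    return 2 ** abs(nd_value)

for nd in (-3, -4, -5):
    print(nd, "->", superres_count(nd), "images")
# -3 -> 8, -4 -> 16, -5 -> 32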

There are now two focus move options that attempt to introduce image to image pixel dither. In the above, Off is shown, meaning that no focus dithering will be attempted, ie the super-res bracket set 'just' captures the required number of images for either LE or NR post processing.

The two dither schemes are -+ and ++.


In the above we see that the -+ scheme has been selected, which will adjust focus either side of the point of focus, ie half the images on each side, each time adjusting focus towards infinity, having first re-positioned away from infinity.

In the ++ scheme, shown below, focus dithering is attempted from the point of focus, towards infinity. As stated in the last post, pixel dithering via refocusing is not a guaranteed approach; however, if the angle of view doesn't change, at least you will have an LE/NR bracket set.

As a reminder, the UI looks like this, ie giving you constant focus feedback until you run a focus bracket set:


In the above we see that the console option is switched on, thus giving feedback from the script. This option can be disabled if required.

In the above, the top bar shows:

  • We are focused at 259cm from the sensor
  • That 2 focus brackets are required to get to the hyperfocal
  • That we are at 13mm focal length
  • That the shutter is set to 1.3s
  • That the near depth of field is at 130cm
  • That the defocus infinity blur is at 16 microns, in other words we are focused just short of the hyperfocal, which is based on a 15um overlap setting, as shown above in the menu

To capture the super-res bracket set, all we need to do is push the M-Fn button, which acts as the script's auxiliary shutter button. In this case, because we are not requesting a focus bracket set, the script will keep running, ie you can repeat the super-res bracketing as many times as you wish, adjusting focus and exposure in between captures.

As implied above, the noise reduction or long exposure bracket set operates in a similar way to that of the super-res one: but in the LE/NR case focus is not changed and the LE/NR bracket set is captured at the 'infinity' focus position, as is the sky bracket, ie after focus bracketing.