Thursday, June 26, 2014

Consolidation Series: Part 1

As many are aware, I use this blog as a way to discuss Canon and, in particular, Magic Lantern (ML) enhanced photography. I do this knowing there are many photography blogs out there, and I see no need to duplicate what they are saying…unless, of course, I disagree with them :-o)
What seems to be missing are people writing about their real-world ML-based photography experiences. There are quite a few ML blogs aimed at videographers and, of course, there is the forum at the home of ML.

I hope my ramblings are useful to some, as an ML-based workflow is different to a ‘normal’, non-ML, workflow; and, because ML is still maturing, there is a risk of conflicting information circulating, especially as not all ML ‘features’ work on all Canon EOSs.

This week I gave a class at my Camera Club on ML (and CHDK). Other than two brave Nikon shooters who turned up, the room was full of Canon shooters; obviously! The surprise to me, however, was that only two, out of about 25, were using ML. The purpose of the class was to give the attendees the basic information required to get ML loaded on their cameras and, hopefully after listening to me, the confidence to take the ‘ML plunge’.

The class was based around various ‘case studies’ where, IMHO, an ML-based workflow gives you a greater chance of capturing the ‘best data’. As I said in the class, ML will not make you a better photographer! It will, however, help you get better quality data for your post-processing; and I am one who believes (digital) photography happens in a PC! The camera is ‘just’ a piece of hardware to gather data. There is really no such thing as ‘in-camera (digital) photography’: RAWs need processing in a PC, and in-camera JPEGs are post-processed in the camera, but with hardly any user control.

For those that didn’t attend my class, in this post, and the following posts, I will present some case studies where, IMHO, an ML-based workflow really helps.

Case Study #1: Nailing your exposure

  • Without ML we would either use the in-camera metering and/or some form of external metering. In either case we are ‘constrained’, unless we carry out exposure adjustments, to the meter’s view of the world, ie an 18% grey world. Thus, according to where we meter and what metering scheme we use, the camera will underexpose or overexpose the scene, ie snow is grey and coal is grey to a camera’s meter. Is this important? You bet. 
  • If we now turn to a histogram view of our scene, which may be seen as a simple ‘map’ of the spread of tonal data we have captured, ie black to white, or ‘no’ photons to a sensor’s pixel having a full well of photons, then the right-hand (highlight) end of the histogram holds most of the tonal data. Because of the way digital data is captured, the right-hand stop is allocated half of the tonal levels, the next stop down half of what remains, and so on all the way to the black end, where, at the lowest (recordable) stop, we have hardly any tonal variability within a stop. 
  • To overcome these ‘digital quirks’ we learn to adopt an Expose To The Right (ETTR) approach. That is, we look at the camera’s histogram, usually in Live View (LV) or in review mode having taken a test image, and ‘guess’ how many stops to adjust our exposure by. The problem with this approach is three-fold. First, the camera’s histogram is generated from a scaled-down JPEG. Second, the histogram is usually difficult to physically see/resolve on our camera LCDs, especially in bright sunshine (which is why I use a Hoodman Loupe). Third, when we used our camera’s metering we only metered a part of the scene, and our (JPEG) histogram, which covers the entire scene, is not easily correlated to the metered value.
  • Like others, I prefer to use a modified ETTR terminology to encompass what we are really trying to do: ensure the histogram is pushed to the right as much as possible, without overexposing the image. Thus we have the alternative term: Histogram and Meter Settings To The Right (HAMSTTR). So how does ML help us with our HAMSTTR-based exposure workflow?
  • With ML we can get a ‘manual’ ETTR hint by simply looking at the RAW (yes RAW) histogram. I personally prefer to use the RAW histogram display as a bar in LV, with the hint in Ev units. Thus, having composed my scene and used the Canon metering to get me to first base, I simply switch to LV (unless I’m already in LV) and look at the RAW histogram bar. If the hint says 2.5Ev, I know I need to adjust my shutter (usually it’s the shutter) by just over 2 stops, ie 7-8 clicks, as I’ve set my camera to 1/3-stop increments. The ML RAW histogram also provides visual feedback once you ‘go over’ the ETTR boundary, ie the histogram bar turns red and the ETTR hint goes negative.
  • If I’m feeling lazy, and I am now most of the time because of ML, I will switch on ‘Auto-ETTR’ in the exposure menu, which gives me various options. First, I need to decide how I wish to use A-ETTR. For single shooting, or getting a base bracket when bracketing, I personally prefer to invoke A-ETTR through the SET button on my 5DIII. Whilst I’m in the ETTR menu I will also decide on a few other parameters to help control my exposure, eg: % of blown-out highlights to accept (useful for managing speculars; otherwise you may end up with a very underexposed image); how to protect the mid-tones and shadow areas, ie at the expense of potentially blowing out highlights; and whether to link Dual-ISO (see another case study for more info on Dual-ISO) with ETTR. 
  • With A-ETTR enabled my workflow is simple: compose and focus; push SET to nail the ETTR exposure; and press the shutter. This workflow works whether I am hand holding or on a tripod. If hand holding I will usually also set the minimum shutter speed in the ETTR menu to, say, the 1/focal-length guideline, which, in turn, may mean A-ETTR increases the ISO; however, on my 5DIII I’m fairly tolerant of high ISOs, eg up to 1600. The only time there may be a clash here is if I’m also using Dual-ISO, ie hand holding forces the base ISO from 100 to, say, 800, and my Dual-ISO setting could also be 800. On a tripod, I ensure the lowest shutter setting in the ETTR menu is adjusted accordingly, ie tens of seconds (unless my vision requires a faster (minimum) speed). 
  • One extra advantage I have found with an ML ETTR-based workflow is that it slows me down! This is because LV needs to be invoked to capture the RAW histogram on which ETTR makes its adjustments (but note that no test image is captured on the CF card; only LV is invoked).
  • So our first case study’s message is: an ML-based HAMSTTR (née ETTR) workflow will virtually guarantee you have the best quality (photonic) data in your image…no guessing!
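For those who like to see the numbers, the two bits of arithmetic in the bullets above can be sketched in a few lines of Python. This is only an illustration: the 14-bit depth is an assumption (Canon raw bit depth varies by model), and `levels_per_stop` and `ev_hint_to_clicks` are hypothetical helpers, not part of ML itself.

```python
# Sketch of the ETTR arithmetic discussed above, assuming a 14-bit
# sensor, ie 2**14 = 16384 raw levels (an assumption; varies by camera).

def levels_per_stop(bit_depth=14):
    """Return how many raw levels fall in each stop, ordered from the
    brightest (right-hand) stop down to the darkest recordable stop."""
    remaining = 2 ** bit_depth
    levels = []
    while remaining > 1:
        half = remaining // 2
        levels.append(remaining - half)  # this stop holds half of what's left
        remaining = half
    return levels

def ev_hint_to_clicks(ev_hint, step=1/3):
    """Convert an ML ETTR hint (in Ev) into dial clicks, given the
    camera's exposure-increment setting (1/3 stop here)."""
    return round(ev_hint / step)

print(levels_per_stop()[:3])   # [8192, 4096, 2048] -- half the levels sit in the top stop
print(ev_hint_to_clicks(2.5))  # 8 -- a 2.5Ev hint at 1/3-stop increments
```

This makes the point of the case study numerically: the brightest stop carries 8192 of the 16384 levels, while the dimmest carries almost none, which is why pushing the histogram right preserves tonal quality.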
In future posts in this ‘Consolidation series’ I will take a look at other ML-based case studies. Hopefully, the above, and future, posts in this series will help those Canon users who have not yet taken the ML plunge, take it!
