Thursday, June 26, 2014

Consolidation Series: Part 1



As many are aware, I use this blog as a way to discuss Canon and, in particular, Magic Lantern-enhanced photography. I do this knowing there are many photography blogs out there, and I see no need to duplicate what they are saying…unless, of course, I disagree with them :-o)
 
What seems to be missing is people writing about their real-world, ML-based photography experiences. There are quite a few ML blogs aimed at videographers and, of course, there is the forum at the home of ML (www.magiclantern.fm).

I hope my ramblings are useful to some, as an ML-based workflow is different to a 'normal', non-ML workflow; and, because ML is still maturing, there is a risk of conflicting information circulating, especially as not all ML 'features' work on all Canon EOSs.

This week I gave a class at my Camera Club (www.enchantedlens.org) on ML (and CHDK). Other than two brave Nikon shooters who turned up, the room was full of Canon shooters; obviously! The surprise to me, however, was that only two out of about 25 were using ML. The purpose of the class was to give the attendees the basic information required to get ML loaded on their cameras and, hopefully, having listened to me, the confidence to take the 'ML plunge'.

The class was based around various 'case studies' where, IMHO, an ML-based workflow gives you a greater chance of getting the 'best data'. As I said in the class, ML will not make you a better photographer! It will, however, help you get better-quality data for your post-processing; and I am one who believes (digital) photography happens in a PC! The camera is 'just' a piece of hardware to gather data. There is really no such thing as 'in-camera (digital) photography': RAWs need processing in a PC, and in-camera JPEGs are post-processed in the camera, but with hardly any user control.

For those who didn't attend my class, in this post and the following posts I will present some case studies where, IMHO, an ML-based workflow really helps.

Case Study #1: Nailing your exposure

  • Without ML we would use the in-camera metering and/or some form of external metering. In either case we are 'constrained', unless we carry out exposure adjustments, to the meter's view of the world, ie an 18% grey world. Thus, according to where we meter and what metering scheme we use, the camera will underexpose or overexpose the scene, ie snow is grey and coal is grey to a camera's meter. Is this important? You bet.
  • If we now turn to a histogram view of our scene, which may be seen as a simple 'map' of the spread of tonal data we have captured, ie from black to white, or from 'no' photons to a sensor pixel with a full well of photons, then the right-hand (highlight) end of the histogram holds most of the tonal data. Because of the way digital data is captured, the right-hand stop is allocated half of the tonal resolution, the next stop down half of what remains, and so on all the way to the black end, where, at the lowest (recordable) stop, we have hardly any tonal variability within a stop (see the sketch after this list).
  • To overcome these 'digital quirks' we learn to adopt an Expose To The Right (ETTR) approach. That is, we look at the camera's histogram, usually in Live View (LV) or in review mode having taken a test image, and 'guess' how many stops to adjust our exposure by. The problem with this approach is three-fold. First, the camera's histogram is generated from a scaled-down JPEG. Second, the histogram is usually difficult to physically see/resolve on our camera LCDs, especially in bright sunshine (which is why I use a Hoodman Loupe). Third, when we used our camera's metering we only metered part of the scene, and our (JPEG) histogram, which covers the entire scene, is not easily correlated to the metered value.
  • Like others, I prefer to use a modified ETTR terminology to encompass what we are really trying to do: ensure the histogram is pushed to the right as much as possible, without overexposing the image. Thus we have the alternative term: Histogram and Meter Settings To The Right (HAMSTTR). So how does ML help us with our HAMSTTR-based exposure workflow?
  • With ML we can get a 'manual' ETTR hint by simply looking at the RAW (yes, RAW) histogram. I personally prefer to use the RAW histogram display as a bar in LV, with the hint in Ev units. Thus, having composed my scene and used the Canon metering to get me to first base, I simply switch to LV (unless I'm already in LV) and look at the RAW histogram bar. If the hint says 2.5 Ev, I know I need to adjust my shutter (usually it's the shutter) by just over 2 stops, ie 7-8 clicks, as I've set my camera for 1/3 stops. The ML RAW histogram also provides visual feedback once you 'go over' the ETTR boundary, ie the histogram bar turns red and the ETTR hint goes negative.
  • If I'm feeling lazy, and these days, because of ML, I usually am, I will switch on 'Auto-ETTR' in the exposure menu, which gives me various options. First, I need to decide how I wish to use A-ETTR. For single shooting, or getting a base bracket when bracketing, I personally prefer to invoke A-ETTR through the SET button on my 5DIII. Whilst I'm in the ETTR menu I will also decide on a few other parameters to help control my exposure, eg: the % of blown-out highlights to accept (useful for managing speculars; otherwise you may end up with a very underexposed image); how to protect the mid-tones and shadow areas, ie at the expense of potentially blowing out highlights; and whether to link Dual-ISO (see another case study for more info on Dual-ISO) with ETTR.
  • With A-ETTR enabled my workflow is simple: compose and focus; press SET to nail the ETTR exposure; and press the shutter. This workflow works whether I am hand-holding or on a tripod. If hand-holding I will usually also set the minimum shutter speed in the ETTR menu, say, to the 1/FL guide level, which, in turn, may mean A-ETTR increases the ISO; however, on my 5DIII I'm fairly tolerant of high ISOs, eg up to 1600. The only time there may be a clash here is if I'm also using Dual-ISO, ie hand-holding forces the base ISO from 100 to, say, 800, and my Dual-ISO setting could also be 800. On a tripod, I ensure the lowest shutter setting in the ETTR menu is adjusted accordingly, ie tens of seconds (unless my vision requires a faster (minimum) speed).
  • One extra advantage I have found with an ML ETTR-based workflow is that it slows me down! This is because you need to wait for LV to be used to capture the RAW histogram, on which ETTR will make its adjustments (but note that no test image is captured on the CF card; only LV is invoked).
  • So our first case study's message is: an ML-based HAMSTTR (née ETTR) workflow will virtually guarantee you have the best-quality (photonic) data in your image…no guessing!
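
To make the numbers in the bullets above concrete, here is a minimal Python sketch; it is purely illustrative and has nothing to do with Magic Lantern's own code. It shows two things: how a 14-bit RAW file's tonal levels are halved stop by stop as you move down from saturation, and how a 2.5 Ev ETTR hint turns into 7-8 clicks on a 1/3-stop dial. The 14-bit figure is an assumption (it matches my 5DIII RAWs), but the arithmetic is the same for any bit depth.

# Illustrative only; not Magic Lantern code. Assumes a 14-bit RAW file
# and 1/3-stop exposure clicks.

def levels_per_stop(bit_depth=14, stops=10):
    """The top stop gets half of all recordable levels, the next stop
    half of what remains, and so on down into the shadows."""
    total = 2 ** bit_depth
    table = []
    upper = total
    for stop in range(1, stops + 1):
        lower = upper // 2
        table.append((stop, upper - lower))  # levels available in this stop
        upper = lower
    return table

def ettr_hint_to_clicks(hint_ev, click_size=1.0 / 3.0):
    """Convert an ETTR hint in Ev into shutter-dial clicks,
    eg a +2.5 Ev hint at 1/3-stop clicks is 7.5, ie 7-8 clicks."""
    return hint_ev / click_size

def new_shutter(current_seconds, hint_ev):
    """A positive hint means 'give it more light': lengthen the shutter
    by a factor of 2**hint_ev (aperture and ISO held constant)."""
    return current_seconds * (2 ** hint_ev)

if __name__ == "__main__":
    for stop, levels in levels_per_stop():
        print("stop %2d below saturation: %5d levels" % (stop, levels))
    print("clicks for a +2.5 Ev hint:", ettr_hint_to_clicks(2.5))       # 7.5
    print("1/200 s pushed by +2.5 Ev:", new_shutter(1.0 / 200.0, 2.5))  # ~1/35 s

Running it prints 8192 levels in the top stop, 4096 in the next, and so on down to a handful of levels in the deepest shadows; which is exactly why pushing the histogram to the right pays off.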
In future posts in this 'Consolidation series' I will take a look at other ML-based case studies. Hopefully this post, and the future ones in the series, will help those Canon users who have not yet taken the ML plunge to take it!

Friday, June 20, 2014

Overcoming the Shock Factor



In this post I offer one illustration of how to embrace new post-processing tools: all from the comfort of your own home!
 
When adopting any new tool or post-processing workflow, there is the obvious initial ‘shock factor’, and unless you get through this phase, there is a risk that you won’t adopt the technique, ie “…too much effort to learn…”.

In my experience one can break through the ‘shock factor’ barrier quickly these days by doing several ‘obvious’ things, albeit at increasing cost: 

  • Read web-based articles; 
  • Download an e-book on the subject; 
  • Download some video training; 
  • Attend a seminar/class.

Reading what others have written on their web sites will get you going, but there is a risk here that you may get conflicting information, as Photoshop (the tool I would argue you should be using) doesn't always have a single way to achieve an outcome.

I have found downloading an e-book (assuming there is one on your subject) to be slightly better than ‘randomly’ searching the internet; as you get one author’s ‘integrated’ view.

Better still, and these days relatively easy and cost-effective, is to download a video training session.

Finally, attending a class, by a guru, may be the best, but will certainly be the most expensive, eg class cost plus (unless you are lucky) hotel, flight & food.

These days I tend to seek out downloadable video training courses, as I have personally found these to offer the greatest value in my PS post-processing development. After all, I can watch them on my PC and do parallel work in PS; or I can download them to my iPad and watch them at any time: there is nothing better than being 'trained' at 30,000 ft!

As an example, let’s take luminosity masking (LM). 

I believe LMs are one of the most powerful tools in any post-processing workflow. Like many, I first explored LM by 'handraulically' creating the masks myself, which means you need to understand alpha channels etc, plus remember several multi-key keyboard shortcuts. My experience is this: you can use LM by doing everything yourself, and those who wish to carry out a penance should consider this approach (a conceptual sketch of what you are actually building follows below).
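
For anyone curious what those hand-built alpha channels amount to, here is a conceptual sketch in Python. The names, file path and weights are my own choices, and this is not how Photoshop or any commercial panel implements it: the point is simply that a luminosity mask is the image's own brightness used as a selection, and the narrower Lights/Darks masks come from repeatedly intersecting (ie multiplying) a mask with itself.

# Conceptual only; not Photoshop's internals or anyone's panel.
# Assumes an 8-bit sRGB image readable by Pillow; the file name is hypothetical.
import numpy as np
from PIL import Image

def luminosity_masks(path, levels=3):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    # Approximate perceived luminance (Rec. 709 weights).
    lum = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]
    masks = {}
    lights, darks = lum.copy(), 1.0 - lum
    for i in range(1, levels + 1):
        masks["lights_%d" % i] = lights
        masks["darks_%d" % i] = darks
        # Intersecting a mask with itself (a multiply) restricts it to
        # progressively brighter (or darker) tones.
        lights, darks = lights * lights, darks * darks
    return masks

# Example: write the masks out as greyscale images for inspection.
# for name, m in luminosity_masks("scene.jpg").items():
#     Image.fromarray((m * 255).astype("uint8")).save(name + ".png")

The squaring on each pass is the same 'intersect the selection with itself' step you would otherwise build with those multi-key clicks in the Channels panel.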

However, it is much better to step on the shoulders of others!

The first step I took was to download one of the free LM action panels, many of which are also associated with free on-line training, eg http://christopherodonnellphotography.com/exposure-blending-luminosity-masks or http://www.hdrone.com/2013/04/an-introduction-to-luminanceluminosity-masks-in-digital-blending/ . Better still is to seek out the LM gurus who have distilled their knowledge into PS actions and have related video training, but who, rightly, ask for a few dollars so that you can become a better image maker.

IMHO the 'best' of the LM gurus is the partnership of Tony Kuyper (http://www.goodlight.us/writing/tutorials.html) and Sean Bagshaw (http://www.outdoorexposurephoto.com/video-tutorials/the-complete-guide-to-luminosity-masks ).

Tony's LM PS Action Panel and Sean's video training will, together, turn you into an LM convert and 'expert'. Also, Tony has just updated his LM panel, which is now a 'command-centre powerhouse' for making quick, but complex, changes to your image. Tony has an offer on at the moment (http://www.goodlight.us/specialoffers.html ) that means, for about 20 cups of Starbucks, you get what I believe is the best armchair training on luminosity masking around, ie all the PS Actions/Panel and videos for $79.

Bottom line: as photographers we continually seek to gain ever more skill and knowledge. The dilemma we face is that the camera side of photography is relatively easy to learn and doesn't really help that much in creating an image with that je ne sais quoi. The 'only' way in a digital world to make an impactful image is to spend time in, say, Photoshop (or some other post-processing software). In my experience you can 'shortcut' your post-processing development by downloading video training and buying into other people's (PS Action) developments. As an example, if you have not made the transition to luminosity masking, or undertaken any web-based training, here is a great opportunity to see if spending a few dollars (with Tony Kuyper and Sean Bagshaw) can help you move towards being a better maker of photographic art.


New Mexico Early Summer Storm


Monday, June 2, 2014

And yet another step in the right direction


Although I am still limited to non-Holy-Grail shutterless captures at the moment, ie no embedded EXIF, I have, I believe, taken another step in the right direction.
 
The silent DNG captures are about 1930x1087 pixels, which for a static HD timelapse is OK. But what if you wish to do some panning and/or zooming? Well, I could buy an expensive motorized track or head to move the camera as I capture the timelapse images. I could also use LRTimelapse, for example, to pan around in a virtual sense, but this doesn't correct for perspective, so the results can look unnatural.

This is where Panolapse (http://www.panolapse360.com/ ) comes in, as it applies perspective correction to create natural panning. It adds rotational motion to a sequence, essentially acting as a (virtual) motorized head. Its features include:
  • Panning. Simulate rotational panning with perspective correction.
  • Zooming. Animate a lens zoom in or out of your scene.
  • Blend frames. Interpolate RAW metadata like exposure, contrast, white balance, saturation, and more.
  • Deflicker. Smooth out changes in brightness (see the sketch after this list).
  • Autoexposure. Get perfect exposure no matter what camera settings you're at, analyzing changes in aperture, shutter speed, and ISO.
  • Combine JPG images into a video. Export to high-quality images or video (jpg, mp4, mov).
  • Fisheye Lens support. Works with both normal lenses and fisheyes.
  • Animate stitched panoramas. Supports 360° equirectangular panoramic images.
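Of those features, deflickering is the one that most obviously rescues a long silent-capture sequence. The snippet below is emphatically not Panolapse's code; it is just a minimal Python sketch of the general idea: measure each frame's average brightness, smooth that series, and scale each frame towards the smoothed target so small exposure jumps stop flickering.

# Not Panolapse's code; a toy illustration of deflickering in general.
# Assumes frames are float arrays scaled to the 0..1 range.
import numpy as np

def deflicker(frames, window=9):
    """Return copies of the frames with their brightness curve smoothed."""
    means = np.array([f.mean() for f in frames])
    half = window // 2
    # Moving-average target brightness (the edges just use a shorter window).
    target = np.array([means[max(0, i - half): i + half + 1].mean()
                       for i in range(len(means))])
    # Scale each frame towards its target so neighbouring frames match.
    return [np.clip(f * (t / m), 0.0, 1.0)
            for f, m, t in zip(frames, means, target)]

Real tools do this more carefully (working on linear data, honouring keyframes, and so on), but the principle is the same.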
So, after trying out the Panolapse demo, I bought a full license; after all, it was less than 20 Starbucks!

It is early days; however, so far I have adopted the following workflow:
  • Set up the ML-enhanced timelapse sequence as before;
  • Ingest DNGs into the PC;
  • Open up PS-CC and under scripts select Image Processor…
  • Carry out a batch upscaling, for example to 3800 pixels wide, which gives you more pixel real-estate for virtual panning and zooming (a scripted alternative is sketched after this list);
  • Finalize your ‘masterpiece’ in Panolapse, including deflickering etc.
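For those who prefer scripting to the Image Processor dialog, the upscaling step can also be done outside Photoshop. Below is a minimal Python/Pillow sketch with two assumptions I should flag: the DNGs have already been rendered out to JPEGs (Pillow cannot read DNGs), and the folder names are placeholders of my own; the 3800-pixel width is the one mentioned in the list above.

# A scripted alternative to the batch-upscale step; assumptions as above.
from pathlib import Path
from PIL import Image

SRC, DST, TARGET_WIDTH = Path("frames_in"), Path("frames_out"), 3800

DST.mkdir(exist_ok=True)
for jpg in sorted(SRC.glob("*.jpg")):
    with Image.open(jpg) as im:
        scale = TARGET_WIDTH / im.width
        upscaled = im.resize((TARGET_WIDTH, round(im.height * scale)),
                             Image.LANCZOS)
        upscaled.save(DST / jpg.name, quality=95)

Lanczos resampling keeps the upscale reasonably crisp; either route gives Panolapse the extra pixel real-estate it needs for panning and zooming.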
Here is the result of tonight's experiment: an HD-quality timelapse, with panning and zooming, captured without a single 5DIII shutter actuation; the real time was just over 65 minutes.