Saturday, December 21, 2013

In-field Tethering

In previous posts I have described how I’ve been ‘playing around’ with focus stacking. Not macro focus stacking, where stacking is a virtual necessity, but focus stacking for landscapes, which, I suggest, is more challenging.


In macro photography we approach focus stacking in a relatively linear fashion. First, we calculate the width of the ‘depth of focus’, ie the zone either side of THE focus point that remains in acceptable focus, based on some criteria, eg lens optics, size of the sensor pixels, required print quality and viewing distance. We then use this to work out the number of ‘focus steps’, or images, needed to cover the subject we are capturing. Thirdly, because the images overlap, we can throw them at focus stacking software and, magically, end up with our subject ‘tack sharp’ from front to back, despite the narrowness of the depth of (acceptable) focus. The downside of macro-based focus stacking is that the acceptable depth of focus is usually sub-mm, so for a reasonably sized subject, ie a few cm, we need to take lots of images, ie 10s to 100!
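To put some illustrative numbers on this, the usual close-up approximation for the per-frame depth of focus is (the numbers below are purely illustrative, not from any particular lens):

    DoF per frame ≈ 2·N·c·(m + 1) / m²

where N is the aperture, c the acceptable blur spot and m the magnification. Taking, say, N = 11, c = 0.02mm and m = 1 (life size) gives 2 × 11 × 0.02 × 2 / 1 ≈ 0.9mm per frame, so covering a 30mm-deep subject needs 30-plus overlapping frames: hence the 10s to 100 figure.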


Landscape focus stacking is different, and we therefore need to approach the workflow in a different way: there is no need for focus rails, or for capturing 10s of images, for landscapes.


As I have written before, I have settled on three (iOS) Apps from georgedouvos.com. I believe George’s three Apps represent an essential set of tools to help photographers achieve tack sharp (landscape or architectural) image capture.


The ‘complication’ we face in landscape/architectural focus stacking is that our image capture focus distances are non-linear and make use of hyperfocal calculations. For example, using the FocusStacker App and deciding to capture for an image blur ‘quality’ of 20 microns, at the 24mm end of my lens I know I need to take three images at 3, 5 and 15ft. These three images, once merged in a focus stacking program, will provide a ‘tack sharp’ image (20 micron blur spot) from about 2.5ft to infinity.
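As a sanity check on the App’s numbers, the standard hyperfocal approximation is H ≈ f²/(N·c). Working backwards (the aperture below is my inference; FocusStacker will have its own internal numbers), an aperture of about F6.3 at 24mm with c = 20 microns gives:

    H ≈ 24² / (6.3 × 0.02) ≈ 4,570mm ≈ 15ft

Focusing at H covers H/2 ≈ 7.5ft to infinity, the 5ft frame (≈ H/3) covers roughly 3.75 to 7.5ft, and the 3ft frame (≈ H/5) covers roughly 2.5 to 3.75ft: stitch the three together and you get the quoted 2.5ft to infinity.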


My first, manual, attempts at landscape/architectural focus stacking were based on pre-calibrating my lens in the house, ie by focusing at 3, 5 and 15ft and marking the lens rotation from one of the two lens end stops, ie the infinity end or the ‘macro’ end.


This approach is relatively accurate and yielded good results. The limitation, however, is clear: for my 24-105mm lens, the three marks are only good for one configuration of focal length (my choice was 24mm) and depth of focus (I chose 2.5ft to infinity). Clearly I could not mark my lens with every configuration of FL or depth of focus.


So I turned my thoughts to automation. Could my Promote Remote, CamRanger or Magic Lantern technology help me? The answer was: partially.


At the moment the ML developers are rather focused (pun intended) on other things, so my module-based feature request, related to focus stacking, is falling on deaf ears. My alternative thinking is that an ML script may get me close to an automatic solution: I will give this some more thought.


The Promote Remote needs further conversations with the boffins at Promote.


So what about the CamRanger? My experiments here were fruitful, in that I used the CamRanger, at my 3, 5 and 15ft pre-focused points, to record the CamRanger steps needed to move the lens to the required distance from a known starting point, eg the lens rotation stop at the macro end. I found this a repeatable process, eg to move from the zero point to the 3, 5 and 15ft focus points, I needed to step the lens, using CamRanger, by x, y and z ‘clicks’.


At the moment CamRanger cannot ‘record’ such a focus stack set, thus, in the field, I would need to zero the lens and then click the required number of times to reach the required focus points, eg 3, 5 and 15ft. I could imagine carrying a ‘look up table’ of various sequences to cover differing FLs and distance capture needs. Remember you can use CamRanger to take shots (bracketed or non-bracketed) without touching the camera. Thus CamRanger represents a semi-automatic approach that I will definitely be experimenting with.
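Something like the table below is what I have in mind. The click counts are deliberately left as placeholders, since the x, y and z values have to be calibrated per lens, per focal length and per CamRanger step size:

    FL      Target DoF           Focus points     Clicks from macro stop
    24mm    2.5ft to infinity    3, 5, 15ft       x, y, z
    ...     ...                  ...              ...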


But what about my goal of a fully automatic approach? Well, until ML, Promote Remote or CamRanger come up with such a capability, I believe I have the next best thing. That is using the Canon EOS (tethered) Utility (I’m sure this would work for Nikon users as well) and the AutoHotKey (AHK, at www.autohotkey.com) scripting engine running on Windows (once again, I’m sure the Mac world has a similar scripting environment).


So far I have demonstrated the approach on my desktop PC and have ordered a Dell 8” Windows 8.1 tablet to take it into the field. That is, the tablet will allow me to tether my 5DIII to the EOS Utility in the field (this alone will be useful) and to run focus stacking sequences via AHK scripts.


So what have I achieved so far?


I first got the EOS Utility and AHK running on my desktop PC. I then plugged my 5DIII into my PC, which automatically opened the EOS Utility. I could now operate my camera without touching it, including refocusing (assuming an autofocus lens is on the camera), directly from the EOS Utility.


I next carried out a calibration phase for the 24mm end of my 24-105mm F/4L: I used the EOS Utility to rotate the lens to its macro end stop, and then to step the lens out to known focus distances, ie focus targets at 3, 5 and 15ft in this proof-of-principle experiment. I now had the required number of EOS Utility steps/clicks to bring the lens to the three focus points from its (macro) end stop.


I then created an AutoHotKey script, which clicked the right things in the EOS Utility and Live View windows on the PC, such that all I had to do was open the AHK .exe file and the EOS Utility would automatically be driven to capture three images at 3, 5 and 15ft, without any intervention from me. The ‘only’ downside is that it takes a few seconds between shots to reposition the lens.
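For those wanting to try something similar, the skeleton of the script looks like the sketch below. I must stress this is a minimal illustration (AutoHotKey v1): every window title, screen coordinate and click count is a placeholder that must be calibrated against your own EOS Utility layout and lens.

    ; Sketch: drive EOS Utility's Live View focus buttons to capture a 3-point stack.
    ; All titles, coordinates and click counts are placeholders - calibrate your own.
    SetTitleMatchMode, 2
    CoordMode, Mouse, Window

    steps := [14, 9, 22]            ; placeholder clicks to reach 3, 5 and 15ft

    WinActivate, Remote Live View   ; placeholder window title

    Loop, 60                        ; drive the lens hard against the macro end stop
    {
        Click, 452, 128             ; '<<<' large-step-to-macro button (placeholder x,y)
        Sleep, 150
    }

    For index, n in steps           ; step out to each focus point and fire
    {
        Loop, %n%
        {
            Click, 500, 128         ; '>' small-step-to-infinity button (placeholder x,y)
            Sleep, 300              ; let the lens settle
        }
        WinActivate, EOS Utility    ; placeholder window title
        Click, 300, 200             ; shutter release button (placeholder x,y)
        Sleep, 2000                 ; wait for capture/transfer before refocusing
        WinActivate, Remote Live View
    }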


So where next?


Well, my Christmas experiment is to move the EOS/AHK capability to my new Dell Windows 8.1 8” tablet. I will also write a few more AHK scripts to cover other focusing needs, ie the 24-105 will focus down to about 1.5ft, hence I can imagine a script that operates over this full focus range.


Bottom line: new technology is constantly emerging and we need to keep on experimenting!

Sunday, December 8, 2013

Focus Stacking



I believe our development as photographers goes in cycles of continuously increasing (bootstrapping) our skills, on both the technical and artistic sides of our craft. Today, thanks to a winter storm where we live, I spent the day at home concentrating on increasing my technical skills; in particular, how to achieve tack sharp images over the depth of focus that I require for my capture.

All photographers are conscious of the limitations that our lenses and cameras bring to our craft, namely: depth of focus and aberrations inherently linked to the ‘physics’ of our equipment. When we first develop as photographers we soon become aware that we can only be in focus in one plane that is orthogonal to the lens axis (assuming we are not using a tilt-shift lens). All other image slices are, by definition, out of focus. However, we accept this ‘out of focusness’ as long as it looks acceptable when we print or project the image (on paper or our computer screen).

Typically we define a print to be tack sharp by line pairs per mm, eg 10 lp/mm for a 250mm print size is usually considered ‘good’, and we talk of circles of confusion for our sensors, eg for my Canon 5DIII a CoC of about 30 microns is typically quoted as the number to use for depth of focus calculations. 
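For those wondering where a number like 30 microns comes from, one common (though not the only) rule of thumb is the sensor diagonal divided by 1500:

    CoC ≈ 43.3mm / 1500 ≈ 0.029mm ≈ 30 microns

for a full frame 36 × 24mm sensor (43.3mm diagonal) such as the 5DIII’s.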

There are many depth of focus calculators online, and you can download Apps to your iPhone or iPad as well. The majority of these calculators, however, suffer in one key area, namely they don’t account for diffraction.

After comparing several Apps and looking into some math, I have now settled on three complementary Apps from the same author: http://goo.gl/qU792m. If nothing else I encourage you to read the articles that are on the home page.

I have all three photography Apps (TrueDOF-Pro, OptimumCS-Pro and FocusStacker), and can recommend all three as money well spent. Having all three, and reading the author’s articles, will greatly increase your depth of focus understanding and your tack sharp image capture.

Rather than repeat what you can read there, here is an example of what can be achieved. The attached is a test shot I took using FocusStacker with my 14mm lens (on the 5DIII). To get the best tack sharp image, rather than use a blur spot of 30 microns (blur spot = the root-mean-square combination of the CoC and the diffraction spot), I used one of 15 microns.
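In other words, and in my notation (assuming the diffraction term is the usual Airy-disc diameter of about 2.44·λ·N):

    total blur = √(CoC² + diffraction²)

At, say, F7.1 (the aperture I ended up using, see below) and λ ≈ 0.55 microns for green light, the diffraction spot is already about 2.44 × 0.55 × 7.1 ≈ 9.5 microns, so a 15 micron total budget leaves only about √(15² − 9.5²) ≈ 11.6 microns for defocus blur. That is why so many more focus slices are needed than at the usual 30 micron standard.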

Using TrueDOF-Pro I knew that I would need to shoot at just under F8 and focus at about 9ft. These numbers would give me an acceptable focus from about 4.5ft (ie half the hyperfocal-style focus distance) to infinity. But I knew I could do better by focus stacking.

Turning to FocusStacker, I knew I needed to set my aperture to F7.1 and take 5 images at 1.1ft, 1.4ft, 2.0ft, 3.3ft and 10ft. Taking these images would extend my (tack sharp, 15 micron class) depth of focus to run from 1ft to infinity. About as good as it gets!
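It is worth noticing the pattern in those five distances: to within rounding they are the farthest point divided by successive odd numbers, ie the classic focus stacking sequence (my reading of the numbers, not something the App states):

    10ft, 10/3 ≈ 3.3ft, 10/5 = 2.0ft, 10/7 ≈ 1.4ft, 10/9 ≈ 1.1ft

Each frame’s near limit then meets the previous frame’s far limit, with no gaps in sharpness.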

To make things interesting I also decided to do this in a high contrast environment (internal lights and an outside scene) that needed bracketing, so at each focus point I took five brackets.

Having captured the data, it was ‘simply’ a matter of following this workflow:

  • Ingest the images into LR
  • Carry out basic corrections, eg white balance
  • Export from LR each set of five brackets to Photomatix 5
  • Use Fusion in Photomatix rather than Tone Mapping (more photo-realistic)
  • Auto import back into LR
  • Export the five fused brackets into Photoshop-CC as layers
  • Align the layers
  • Auto Blend the layers
  • Auto import back into LR
  • Finish off in LR

Bottom line: as we develop our photography skills, we become ever more critical of our efforts. Composition, colour balance and other artistic skill sets are all well and good; however, if the data captured is too soft, or just out of focus, then we will not achieve the desired result: personal satisfaction and, hopefully, praise. Single focus image capture may not always be sufficient. If you wish to capture large depths of focus, eg 1ft to infinity, then focus stacking is a must, and so are some calculations. I thoroughly recommend the three Apps I have mentioned above at http://goo.gl/qU792m.

 

Saturday, November 16, 2013

I’ve got the Pano bug again!



Although I’ve experimented with pano capture in the past, I’ve decided to spend a little more time on getting the best out of the technique. Pano capture can really help with in-field workflow, as it can reduce/eliminate the need to keep changing lenses: essential for dusty environments.

For instance, if I’m shooting with my 70-200mm F4L or 24-105mm, then, rather than switch to a wide angle lens, eg a 16mm, I would capture a pano covering the equivalent FOV.
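To put a rough number on the frame count (the 30% overlap below is my working assumption, not a recommendation from any particular software), the horizontal FOV is 2·atan(sensor width / 2f):

    2·atan(36 / (2 × 16)) ≈ 97° at 16mm;  2·atan(36 / (2 × 70)) ≈ 29° at 70mm

With 30% overlap each extra 70mm frame adds roughly 20° of fresh coverage, so a single row of about five landscape-orientation frames at 70mm matches the horizontal FOV of the 16mm.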

At the moment I’m waiting for a tripod-mounted 360 VR head to arrive from China, ie I can’t justify a US- or UK-made one at GBP400+. Until then I’ve experimented with hand held panos. The one below is a 24-image pano I took of our home in New Mexico, to prove out my workflow, which was:

  • Put the camera in manual and be careful about filters, especially circular polarizers, ie best take them off;
  • Select an average exposure for the FOV I was interested in (I will talk about HDR panos in a future post); 
  • Lock in the exposure so it doesn’t vary across the sequence; 
  • Take the pano sequence using the ‘best’ directional strategy, ie with clouds moving left to right, shoot in vertical columns (up and down), working to the right, rather than in left-to-right rows; 
  • Take the important areas first, eg people, and build up the pano sequence around this pivot image. Also if the subject is moving, eg a duck on water, take the duck in the first image and try and place the duck in the overlapping areas of subsequent images (the software will eliminate the duplicate ducks);
  • Ingest into Lightroom; 
  • Select a base image, eg the house area, and adjust image; 
  • Sync all other images to this image;
  • Export images as 16-bit TIFFs to a folder, or export directly to your pano software if this is linked to LR;
  • Import into your favorite pano software and let it do its magic. In the case below this resulted in a 735MB TIFF file (see the sanity check after this list); 
  • Make any tweaks to the image in PS CS-CC or LR; 
  • Convert to a JPEG; in the case below that resulted in a 30MB file, as I didn’t bother reducing the quality, ie I went for near-lossless.
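As a sanity check on that 735MB figure, assuming an uncompressed 16-bit, 3-channel TIFF (6 bytes per pixel):

    735MB / 6 bytes per pixel ≈ 128 megapixels

which is consistent with 24 overlapping 22-megapixel 5DIII frames where, hand held, roughly half of each frame overlaps its neighbours in both directions.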

Although I have PS CS-CC, I processed the image below in Microsoft ICE, which is a free and powerful tool (http://research.microsoft.com/en-us/um/redmond/groups/ivm/ICE/).

Bottom line: the next time you go out shooting try going with one lens but don’t limit yourself in terms of FOV, ie think panos!


Thursday, November 7, 2013

More practice? Or more equipment?



Being a photographer and living in New Mexico has many benefits. The biggest as far as I’m concerned is being a short distance from Bosque del Apache, which was established in 1939 to provide a critical stopover for migrating waterfowl.

Most know the refuge for the thousands of Sandhill Cranes, Snow Geese and other waterfowl that winter here each year: http://www.fws.gov/refuge/Bosque_del_Apache/

According to the Bosque del Apache web site: “Petroglyphs tell the story of an ancient people that lived and hunted in the area. The river and its diversity of wildlife have drawn humans to this area for at least 11,000 years when humans migrated along this corridor, sometimes settling to hunt, fish and farm. Artifacts and stone tools found nearby tell us that nomadic paleoindian hunters pursued herds of mammoth and bison in the valley.”

For photographers a visit to Bosque del Apache is an opportunity to try out your long lens and BIF (Birds In Flight) techniques. Although I have a Sigma 150-500mm, on today’s trip I wanted to experiment with my Canon 70-200 F4L and Canon 2x extender, on my 5DMkIII. The 5DMkIII has the latest firmware, so it is able to auto focus at F8, which is as wide as you can go with the extender attached to the F4L glass. Also, I decided to restrict myself to hand-held shooting on this trip.

So what did I find out?

First, the 70-200’s IS is slow and rather clunky; in fact I switched it off in the end, as I was losing BIF shots. I found the best strategy was to put the camera in Tv mode with a floating ISO, ie auto-ISO. This way I could set the shutter speed to suit the distance I was shooting at and the BIF needs. Typically, I had the shutter speed at 1/1000s or above.

Second, shooting BIF is hard! I hardly had any truly tack sharp shots: the best is below. I tried to perfect my panning skills and could see some increase in sharpness when I panned ‘correctly’.

Third, I need longer lenses!!! It is clear to me that, if you are shooting in a natural environment, rather than a captive one, your subjects will usually be too far away! Hence you end up throwing away most of your sensor data by cropping closely on your subject. At 400mm I was struggling, and I don’t think my Sigma would have helped that much, as at 500mm it tends to produce a softer image.

Clearly I need a 600mm plus Canon L lens! Well I can only hope!


Sunday, October 27, 2013

My ML-enhanced High Dynamic Range (DR) Workflow

This post follows from the previous one. In this post I share with you my Magic Lantern (ML) enhanced workflow.

I assume the reader is familiar with ML, Advanced Bracketing, Auto-ETTR and Dual-ISO. If you are not then start here: http://www.magiclantern.fm/
 
So here is the workflow:
  • Enable the appropriate modules, eg Auto-ETTR and Dual-ISO;
  • Compose and focus the scene and assess the DR of the scene, using ‘guess work’, in-camera metering (ML or Canon) or an external meter (I use a Sekonic L-750DR);
  • Based on your metering decide on one of the following basic capture strategies:
    • If the DR allows it, ie it is low and containable in a single image capture, use a single exposure and set the metering handraulically using your photographic skills (in whatever mode you decide to use, ie Tv, Av or M). This is the non-ML-enhanced base approach; 
    • As above, but get some help by using ML Auto-ETTR (double-half press or SET, ie not ‘Always-On’ or Auto-Snap) to obtain a single image capture and maximize the quality/quantity of the image file, ie maximize the number of useful photons captured and converted, without blowing out highlights. A further refinement here is to switch on Dual-ISO as well, but I prefer not to use this as part of my photographic workflow; 
    • Use Auto-Snap or Always-On AETTR and first meter for the highlights you wish to ‘protect’ (recompose as required), using this as the starting image for the AETTR capture. Using this approach you will get two images, one with good highlight capture and the other with likely blown-out highlights but good shadow/mid-tone exposure (according to your AETTR settings), ie based on the AETTR algorithmics. This is a good strategy for capturing a two-image bracket set, as long as the scene’s DR is not too large for your camera. This two-image bracketing is fast and virtually guarantees you will never blow out the highlights that are important to you; 
    • Switch off AETTR (and Dual-ISO), switch on Advanced Bracketing, and select the number of brackets to cover your metering, or use the auto setting, which will mean more image captures but will result in a full-DR bracket set (see the worked example after this list).
  • Ingest into Lightroom; 
  • For the single image captures I will then carry out basic LR processing as normal; 
  • For the two-bracket (auto-snap) capture I will adjust the images, eg to ensure good highlights in one and good shadows/mid-tones in the other. I will throw these two images down two post-processing paths: first I will use LR/Enfuse, and then I will use ‘Merge to 32-bit HDR’. I then have two image files to ‘play around’ with, a 16-bit one and a 32-bit one; 
  • For the advanced bracketing set I will once again try several post-processing routes, eg Photomatix, HDR Efex Pro 2 or ‘Merge to 32-bit HDR’;
  • In all cases I will then usually go into Ps-CC and finish off the image with a variety of post-processing tools.
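As a rough worked example of the bracketing arithmetic behind that last option (purely illustrative numbers, not ML’s internal algorithm):

    brackets ≈ ceil((scene DR − per-frame camera DR) / bracket step) + 1

so a 14-stop scene, a camera trusted to about 10 stops per frame, and 2EV steps gives ceil(4/2) + 1 = 3 frames.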
The attached image is one I took this morning to illustrate the ML-enhanced, AETTR, 2-bracket approach. Although not an award-winning image, it does show that all the highlights and all the low and mid-tones have been adequately captured.

So, in conclusion, I’m not saying the above is THE way to go but, for me, it works, and I thank ML for that! For those with a Canon camera, I once again encourage you to explore ML, especially the nightly builds that include the AETTR module.