Wednesday, May 27, 2015

ND Bracketing for LE/HDR scenes

As all photographers know, the weak link in our photography, in addition to ourselves {:-)}, is our equipment and, in particular, our lens-sensor system’s inability to capture every scene in a single image. Thus we resort to various bracketing strategies to recover the shortfall, eg:
  • We do exposure bracketing to capture very high dynamic ranges;
  • We do focus bracketing to extend our depth of field.
We also use Neutral Density (ND) filters to capture long exposures, giving an impression of time passing in a single image; for example, moving water and clouds can be ‘smeared out’.

But what happens if we have an ND scene with a high dynamic range?

This is where we can turn to what I call ND bracketing. BTW, I haven’t come across this idea elsewhere, but I don’t claim to be the first to use the technique.

To understand the ‘problem’ associated with using NDs, it is worth reminding ourselves of their ‘normal’ use. For instance, if I wish to capture an image at an exposure of, say, 2 minutes (120s), I could place the camera in bulb mode and work out the required ND number. However, as I don’t have an infinite set of ND filters, the ‘usual’ way to go is, knowing my ND number, to set the appropriate exposure without the ND and then substitute the ND. Thus if I have a 10-stop ND and wish to take a 120s exposure, I would set the camera at about 1/8s and use the ND.
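
To make the stop arithmetic explicit, here is a minimal Python sketch (my own illustration; the 120s exposure and 10-stop ND are just the example above):

    # Sketch: work out the shutter speed to meter at, without the ND fitted,
    # for a target long-exposure time behind an ND of a given strength.
    def base_shutter(target_seconds, nd_stops):
        return target_seconds / (2 ** nd_stops)

    print(base_shutter(120, 10))   # ~0.117 s, ie about 1/8 s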

In theory it should work; however, experience shows that you will not have a perfect ND, and thus you may have to take a couple of exposures to nail the perfect one. Of course you could use a variable ND set up, but these bring their own (calibration) complications, especially if front mounted on a wide angle lens.

In a previous post I talked about my preferred LE solution of using a rear-mounted variable ND, like the Vizelex ND Throttle. Although this only works with a mirrorless camera, it allows you to use any focal length lenses, without any ‘wide angle’ artefacts.

But what if I also need to do exposure bracketing?

This is where the problems set in. If my single-image ND exposure is, say, 2min/120s, and that adequately captures the highlights, then to cover the shadows I will need to take longer exposures, eg at, say, 2 and 4 stops up. But now I’m taking images at 8mins and 32mins! Not only is this impracticable because of time, but the image will also exhibit different characteristics at 2mins compared to 8mins and 32mins. Blending these images together could result in ‘motion artefacts’.
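
To see why conventional time bracketing gets out of hand, here is a small illustrative calculation (my own sketch; the 2-minute base is just the example above):

    # Sketch: exposure bracketing multiplies the shutter time,
    # whereas ND bracketing keeps it constant and varies the filter density.
    base = 120  # seconds (2 minutes)
    for stops_up in (0, 2, 4):
        print(f"+{stops_up} stops: {base * 2 ** stops_up / 60:.0f} min")   # 2, 8 and 32 minutes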

This is why ND bracketing is worth exploring.

In ND bracketing one exploits the variable ND filter to capture multiple exposure brackets, all with the same exposure time. Thus all brackets exhibit similar temporal artefacts, eg the clouds or water will have similar ‘structural’ characteristics from bracket to bracket.

You could also use different ND filters, eg a 10 stop and, say, an 8 stop, but the variable ND approach is more flexible.

The Sony A6000 + ND Throttle workflow I use goes like this:
  • Decide on the exposure time – let’s use 30s;
  • Compose the scene and set the focus and aperture;
  • Temporarily set the ISO to, say, 1600 – but I will capture at ISO 100;
  • Set the exposure to 4 stops down from 30s (ISO 100 vs ISO 1600), ie about 2s – see the sketch after this list;
  • Adjust the exposure, using the Sony’s blinkies and histogram, by adjusting the ND Throttle, ie the amount of ‘NDness’. That is, adjust the ND density until the histogram and blinkies look right for the highlights, ie using an ETTR bias, albeit based on the JPEG histogram, unlike the Canon Magic Lantern RAW histogram;
  • Reset the ISO to 100, which means the exposure time needs to be reset to 30s;
  • Take the exposure;
  • Without changing the ISO, exposure time or aperture, adjust the ND Throttle by a couple of stops, ie moving the histogram to the right by about two stops;
  • Capture a second image;
  • Repeat the above step until you feel you have captured all the shadow details;
  • Process with your favourite post-processing software. I use LR/Enfuse, as I find tone-mapping software doesn’t handle the ND brackets very well.
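
The ISO trick above is just stop arithmetic; this sketch (my own illustration, using the example numbers from the list) shows how the ~2s metering exposure at ISO 1600 maps back to the 30s capture at ISO 100:

    import math

    # Sketch: meter at a boosted ISO so the test exposure is short,
    # then drop back to the base ISO for the real capture.
    target_time = 30        # seconds, the exposure I actually want
    base_iso, meter_iso = 100, 1600

    iso_stops = math.log2(meter_iso / base_iso)     # 4 stops
    meter_time = target_time / (2 ** iso_stops)     # ~1.9 s, call it 2 s

    print(f"Meter at ISO {meter_iso} for about {meter_time:.1f} s,")
    print(f"then capture at ISO {base_iso} for {target_time} s.")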

As usual with these posts, here is a test image I took inside our home. The three brackets were taken at 30s with a fisheye lens on my A6000.



Bottom line: although ND bracketing is not an automatic process, it appears it might address the problem of taking LE images of high dynamic range scenes, especially if you don’t wish to see temporally different features between the brackets. I would love to hear if others have experimented with this idea.

Sunday, May 24, 2015

Quick postscript on the Varavon Multi Finder

I’ve now had a chance to ‘play around’ with the Varavon, which on my 5D3, with a 24-105mm F/4L, looks like this:

My initial impressions have not really changed: the Varavon is a great tool if you rely on Live View, as I do as a Magic Lantern shooter.

So far I have only come across one ‘downside’, and that is that the Varavon, although robust, is manufactured from plastic as well as metal parts. And, as with all plastic parts, you do get an impression that things could break if you are heavy handed. Thus, I just need to take care when switching between view modes, as this is when you need to move some of the plastic parts.

As for the ‘good’: the clarity one gets, irrespective of the sunshine, is fantastic, with or without the eyepiece.

Another killer benefit is the ability to realise critical focus. Even without ML functionality, ie using ‘just’ Canon LV focusing, the ability to clearly see your focus at 10x magnification is a godsend.

If you then add in ML’s depth of field reporting, with diffraction correction, you have, IMHO, one of the best, in-camera, focus control set ups around.

Finally, as promised, I just found out that my still life image, Fallen, was well placed in this month’s photo competition at our Camera Club. The scoring is up to five points each against composition, impact and technique, for a maximum score of 15. My submission received a score of 15 and was placed 2nd, ie the judge ‘preferred’ another image that also scored 15. The judge correctly commented that I should be ‘careful with masking’: when I submitted the image I was pushed for time and I was aware one of my masks was far from perfect – so a good catch by the judge there.

Saturday, May 23, 2015

Gadgets & Gizmos

Like many I’m attracted to technology and gadgets, although, also like others, I try not to buy everything. However, the time comes when my bank account gives way and I use St Amazon to get some ‘bright shiny thing’.

This post is about one of my recent purchases.

As those who read my posts know, I’m a dedicated Magic Lantern based photographer. That is, I use ML’s additional functionality to enhance the quality of my digital captures, eg focus and exposure setting. However, the ‘downside’ of ML is that you need to rely on the Live View screen, as this is where the ML enhancements are displayed – and nowhere else; and, as any LV user knows, the LV screen is nearly useless in bright sunshine, even if it articulates. Compound this with ‘old eyes’ and you’re in trouble.

Up until now I have made good use of a Hoodman Collapsible Hoodloupe.


This cheap technology allows me to clearly read the ML-enhanced LV in the brightest of conditions. However, things get more complicated when I’m shooting at some ‘difficult’ angle or position. For example, if I’ve placed my camera a foot off the ground on a small tripod (BTW the one I use is the FotoPro FPH-53P), there is no way I can easily get down to use my Hoodman: say, to adjust my composition and ML or camera settings.

My immediate ‘techy’ thought was to buy an HDMI monitor, as the ML-enhanced LV screen is fed out of the Canon HDMI port (great for reviewing ML on your TV). However, looking at options told me two things: cheap (under $200) meant questionable quality, and good quality meant (very) expensive. Also, the thought of ‘mucking’ about with HDMI cables and a 5in (say) HDMI monitor wasn’t that attractive. And, of course, if I went this route I would still have sun glare to manage.

What I really wanted was something like an old box camera’s flip-up viewer, which allowed you to look down on the image you were capturing.
So, after much reviewing, I opted for the Varavon Multi Finder.

Although a little bit more than I was expecting to pay, the quality is just about there, as is the functionality. I can use the ‘viewing loupe’ in three different ways: straight through in a simple sun shield mode; with the additional eye piece; and looking down through mirrors to see a normal view of the LV screen. In addition, the loupe is solidly attached and easily removed from the camera.

Would I use this all the time? Of course not, however, I consider it a great addition to my ML-based workflow. I can now see the ML information on the brightest of days and even when the camera is ‘on the ground’. In addition, if I ever do get into videography, the Varavon will be a great asset.

Thursday, May 14, 2015

Time to Slow Down


Stuck at SFO waiting for a flight to the UK, so a chance to post a few thoughts on my passion: photography.

Today, with all our technology, we are attracted to images that just seem ‘amazing’ because, say, they stopped some incredible action: for example, on the sports field or when an explosion goes off. In other words, images our eyes and brain could not hope to see without our technology.

At the other extreme we also use technology, neutral density filters, to ‘smear’ out time, thus capturing the emotion of time passing in a single image; for example water that turns into a milky, ghost-like feature.

Then there is one of the oldest forms of photography: still life. A kind of middle ground between the two extremes above, where the subject simply doesn’t move. In other words, exposure time becomes an ‘irrelevance’. Of course originally, with photographers only being able to capture exposures over minutes, still life was the way to go. Now it is a choice.

So I was pleased to see that this month’s competition at my local camera club was “Still Life” and thus I had an ideal opportunity to try out this ‘old’ genre.

Wiki tells us that “still life photography is a demanding art, one in which the photographers are expected to be able to form their work with a refined sense of lighting, coupled with compositional skills. The still life photographer makes pictures rather than takes them. Knowing where to look for propping and surfaces also is a required skill.”

Having collected various unusual ‘bits and bobs’ over many years, I decided to incorporate these into my image. We also had a rather organic looking pot that I promised my wife I would break. As for a backdrop I decided to use our mantelpiece. I used a 24mm focal length on my 5D3, and the F/18 exposure, at ISO 100, was 13 seconds. The image was captured in available light.

After playing around in PS-CC I arrived at this final view. I like it, but will the competition judge? Well, as an honest man, I will make a full disclosure in a couple of weeks’ time :-)

Fallen



Monday, May 4, 2015

Magic Tweak

Just a quick post to alert ML users that the latest nightlies incorporate my diffraction corrected depth of field reporting: go to original post
 
I was pleased to see the latest nightly, as it was based on my coding which I proved by compiling my own version of ML.

The only 'downside' is that the current implementation may appear slightly confusing to some as someone else decided to tweak my version.

The important thing to note is that if the on-screen feedback is in white text, then the depth of field is being correctly reported to account for both defocus blur and diffraction blur. For a full frame camera this is up to about F/16, ie beyond that diffraction gets the better of you and you are advised not to go there. The code automatically accounts for cropped sensors.

If the text goes yellow/orange this is a WARNING that the depth of field reporting can no longer be relied on and the near and far information has little real value. So back off, back to white text.

By switching to the ML menu screen when this occurs, you can ascertain which criterion is being breached: either the Airy limit (the defocus blur is too small) or the diffraction limit (the diffraction blur is greater than the total blur budget, assumed to be 29 microns for a full frame sensor and scaled for an APS-C sensor). This can be changed in the code, for example for a more exacting standard.
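
As a rough illustration of the diffraction side of that check, here is a Python sketch (my own approximation of the idea, not the actual ML source; the Airy-disk formula and the 29 micron budget are the assumptions stated above):

    # Sketch of the warning logic: diffraction (Airy disk) blur vs the total blur budget.
    def airy_diameter_um(f_number, wavelength_nm=550.0):
        return 2.44 * (wavelength_nm / 1000.0) * f_number

    def dof_reporting_ok(f_number, crop_factor=1.0, total_blur_um=29.0):
        budget = total_blur_um / crop_factor      # scale the budget for APS-C
        return airy_diameter_um(f_number) < budget

    print(dof_reporting_ok(16))   # full frame at f/16: True (white text)
    print(dof_reporting_ok(22))   # f/22: False (yellow warning)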

By noting the near and far values (when white) you can undertake landscape focus stacking, ie ensure each refocused image overlaps the last one, until the far value shows infinity.



Finally, the diffraction assumes a visible band camera. If you have an IR converted camera you will need to change one number in the code. For those who are interested in what a bit of ML code looks like, pop along to view the source.

The ML reporting includes the hyperfocal, which is the point of focus where everything from half that distance to infinity falls within the acceptable depth of field, or acceptable 'out of focusness', as only one point/plane is truly in focus. Focusing at the hyperfocal gives the largest depth of field possible for a given f-number. The hyperfocal point is easily spotted when the far reporting just shows infinity. For insurance it is best to focus just beyond that point.

The near DoF is the nearest distance, relative to the sensor, where objects appear in focus according to a combination of defocus and diffraction blur, taken in quadrature.

The far DoF is the farthest distance, relative to the sensor, where objects appear in focus according to a combination of defocus and diffraction blur, taken in quadrature.
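
To show how the quadrature idea feeds into those numbers, here is a textbook-style Python sketch (my own illustration, not the actual ML implementation; the 29 micron total blur is from above, while the focal length, aperture and focus distance are example values):

    import math

    # Defocus blur allowed once diffraction blur is removed in quadrature:
    # total^2 = defocus^2 + diffraction^2
    def defocus_budget_um(f_number, total_blur_um=29.0, wavelength_nm=550.0):
        diffraction = 2.44 * (wavelength_nm / 1000.0) * f_number
        if diffraction >= total_blur_um:
            return None                  # 'yellow text': DoF can't be trusted
        return math.sqrt(total_blur_um**2 - diffraction**2)

    # Standard thin-lens hyperfocal and near/far DoF, using that budget as the CoC.
    # Assumes the f-number is below the diffraction limit (budget is not None).
    def dof_mm(focal_mm, f_number, focus_mm):
        c_mm = defocus_budget_um(f_number) / 1000.0
        H = focal_mm**2 / (f_number * c_mm) + focal_mm          # hyperfocal
        near = focus_mm * (H - focal_mm) / (H + focus_mm - 2 * focal_mm)
        far = focus_mm * (H - focal_mm) / (H - focus_mm) if focus_mm < H else math.inf
        return H, near, far

    H, near, far = dof_mm(focal_mm=24, f_number=11, focus_mm=1500)
    print(f"hyperfocal ~{H/1000:.1f} m, DoF from {near/1000:.2f} m to {far/1000:.2f} m")

The ‘each refocused image overlaps the last one’ rule for landscape focus stacking then amounts to choosing the next focus distance so that its near value falls inside the previous frame’s far value.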