As digital photographers know, as soon as you push the shutter button, you have lost control. What do I mean by that? Well, as soon as you press the shutter, the camera manufacturer is in charge until the next time you regain control: when you post-process the image (data) on the PC.
Assuming you are capturing RAW images (after all, why would you want to let the camera give you only 8-bit JPEGs?), there is a lot going on, in the background and the foreground, to get you the image you want: the one you would wish to mount on your wall at home, or enter into a competition. For example, in most cameras:
- The linear data from the raw file needs to be baselined by subtracting a Black Level
- The White Balance is adjusted according to the camera setting
- As part of the colour correction process the captured image data is channel clipped
- For most cameras the Bayer colour filter array, eg laid out as RGGB quartets, is Demosaiced
- Colour Transforms and Corrections are applied
- Finally, a non-linear step (all the above steps are linear): a suitable Gamma correction is applied so that things look OK on your monitor
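To make those steps a little more concrete, here is a minimal, illustrative sketch of the pipeline in Python/NumPy. The black level, white-balance gains and colour matrix are placeholder values I have made up; real cameras carry their own in the raw metadata, and real converters use far more sophisticated demosaicing than the crude binning shown here.

```python
import numpy as np

def develop_raw(raw, black_level=2048, white_level=15000,
                wb_gains=(2.0, 1.0, 1.5),     # placeholder R, G, B multipliers
                cam_to_srgb=np.eye(3),        # placeholder colour matrix
                gamma=2.2):
    """Illustrative linear RAW -> display pipeline; all numbers are made up.

    raw: 2-D array of sensor counts in an RGGB Bayer layout.
    """
    # 1. Baseline the linear data by subtracting the black level
    img = np.clip(raw.astype(np.float64) - black_level, 0, None)
    img /= (white_level - black_level)          # normalise to 0..1

    # 2. White balance: apply per-channel gains across the RGGB quartets
    r_gain, g_gain, b_gain = wb_gains
    img[0::2, 0::2] *= r_gain                   # R sites
    img[0::2, 1::2] *= g_gain                   # G sites (even rows)
    img[1::2, 0::2] *= g_gain                   # G sites (odd rows)
    img[1::2, 1::2] *= b_gain                   # B sites

    # 3. Channel clip as part of the colour-correction process
    img = np.clip(img, 0.0, 1.0)

    # 4. 'Demosaic' - crude half-resolution binning of each RGGB quartet;
    #    real converters interpolate to full resolution instead
    r = img[0::2, 0::2]
    g = (img[0::2, 1::2] + img[1::2, 0::2]) / 2.0
    b = img[1::2, 1::2]
    rgb = np.dstack([r, g, b])

    # 5. Colour transform from camera space to the output colour space
    rgb = np.clip(rgb @ cam_to_srgb.T, 0.0, 1.0)

    # 6. Finally, the non-linear step: gamma-encode for the monitor
    return rgb ** (1.0 / gamma)
```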
As I say: do you feel in control?
Because of the complexities involved in modern digital photography, the camera and software makers have tried to simplify our lives; thus, thank goodness, the mathematical complexities alluded to above are hidden from us by, say, Lightroom or ACR.
Having said that, things can sometimes still appear complicated, rather than ‘just’ being complex. Take infrared (IR) digital photography.
I’m fortunate enough to have an IR-converted 50D (720nm filter). This allows me to use my 50D essentially as a ‘normal’ camera, eg able to handhold it. Without the conversion, and say using an IR filter on the front of the lens instead, I would be limited to long exposures, because the camera’s internal IR-blocking filter would let only a trickle of infrared light through to the sensor.
Up until now, I have only used my IR-50D for single-image capture. So today, being stuck at home with a case of benign paroxysmal positional vertigo, I decided to explore IR bracketing, and in particular getting a high-key look to my IR images.
Of course, for me, photography wouldn’t be as much fun without making use of Magic Lantern. So my ML workflow looked like this:
- Use the ML RAW Spot meter to set the Ev value of the darkest area where I wanted to see detail;
- Use ML’s Auto Bracketing to take as many images as required to ensure the highlights were captured.
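For the curious, the arithmetic behind this sort of bracketing is straightforward. The sketch below is my own illustration of the idea, not Magic Lantern’s actual code (the 2 Ev step and the function name are assumptions): start from the exposure that protects the darkest detail you metered, then keep stepping the shutter down until the highlight headroom is covered.

```python
import math

def bracket_exposures(base_shutter_s, highlight_headroom_ev, step_ev=2.0):
    """Illustrative bracketing arithmetic (not Magic Lantern's actual code).

    base_shutter_s        -- shutter time that exposes the darkest wanted detail
    highlight_headroom_ev -- stops the brightest highlights sit above what
                             that base exposure can hold
    step_ev               -- Ev spacing between successive brackets
    """
    # Number of extra, darker frames needed to pull the highlights into range
    n_extra = math.ceil(highlight_headroom_ev / step_ev)

    # Each -step_ev step halves the shutter time step_ev times
    return [base_shutter_s / (2 ** (i * step_ev)) for i in range(n_extra + 1)]

# Example: base exposure of 1/8 s with ~4 stops of blown highlights
print(bracket_exposures(1/8, 4.0))   # -> [0.125, 0.03125, 0.0078125]
```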
Because I’m stuck at home, I took a test image inside the house. Here are the three brackets that the scene required (according to my spotmeter and the ML auto-bracketing algorithm):
Rather than ‘lose control’ of the data by tone-mapping the brackets, I decided to fuse them using LR/Enfuse, a free/donation-ware plugin for Lightroom. Pre-processing for Enfuse was simply correcting for the lens and removing the ‘red cast’ with a custom calibration profile I have in LR, which I created in the Adobe DNG Profile Editor. I also made a virtual copy of the darkest image (the one capturing the highlights) and reduced its exposure by a stop. This gave me three ‘real’ images and one ‘virtual’ image to enfuse as a 4-bracket set. This ‘trick’, ie creating a virtual bracket beyond the ones you have captured for real, is always worth trying.
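Enfuse is built around the Mertens et al. exposure-fusion idea, with multi-resolution blending on top. The sketch below is a deliberately simplified, single-scale version of that idea, just to show what ‘fusing’ (as opposed to tone-mapping) means: every output pixel is a weighted average of the brackets, with well-exposed pixels weighted most heavily. The Gaussian weighting, the sigma value and the virtual-bracket helper are my own simplifications, not Enfuse’s actual code.

```python
import numpy as np

def fuse_brackets(images, sigma=0.2):
    """Simplified single-scale exposure fusion (Enfuse adds multi-resolution
    blending on top of the same basic idea).

    images -- list of float arrays in 0..1, shape (H, W, 3), one per bracket
    """
    weights = []
    for img in images:
        # Well-exposedness: pixels near mid-grey get high weight, while blown
        # or blocked pixels get low weight (Gaussian centred on 0.5)
        w = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).mean(axis=2)
        weights.append(w + 1e-12)                  # avoid divide-by-zero

    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)  # normalise per pixel

    return sum(w[..., None] * img for w, img in zip(weights, images))

def add_virtual_bracket(images):
    """The 'virtual bracket' trick: append a synthetic frame one stop darker
    than the darkest real capture (list assumed ordered brightest to darkest;
    the one-stop reduction is done here as a simple linear halving)."""
    return images + [np.clip(images[-1] * 0.5, 0.0, 1.0)]
```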
The Enfused image, without further processing, looked like this:
I then decided to crop in to a square format and, just using LR’s B&W processing, create the final, highly non-artistic image!
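As an aside, a B&W conversion like LR’s boils down to a weighted sum of the colour channels; the weights in the sketch below are arbitrary placeholders, whereas LR’s B&W panel exposes them as individual colour sliders.

```python
import numpy as np

def to_monochrome(rgb, weights=(0.4, 0.4, 0.2)):
    """Trivial B&W conversion: a weighted sum of the R, G and B channels.

    The weights are arbitrary placeholders, standing in for the per-colour
    sliders a tool such as LR gives you.
    """
    w = np.asarray(weights, dtype=np.float64)
    w /= w.sum()                        # keep overall brightness roughly constant
    return np.clip(rgb @ w, 0.0, 1.0)   # (H, W, 3) @ (3,) -> (H, W)
```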
Bottom line: digital photography can be as complex and complicated as you wish. If science, engineering and maths turn you on, you can really enjoy yourself. If you want to cut straight to your artistic side, the post-processing tools usually get you there pretty quickly. It’s your choice: but, having said this, not understanding how your camera-lens-PC system works means you are unlikely to get the best out of your captured ‘data’.