Sunday, January 29, 2017

Plan Ahead

In previous posts I have spoken of the value of Apps, for example TPE and PlanIt!, in preparing for a photo trip.

Some time ago another great App slipped in under the radar and, I believe, all landscape photographers should consider getting it. It is called The Photographer's Transit, or TPT: a digital shot planning tool.

TPT is published by the same team that created the great TPE App.


You can make your own mind up by visiting the TPT website or by looking at some of their training videos.

While we're on the subject of TPE, did you know you can now use TPE to help you predict sunset or sunrise 'hot spots' using Skyfire? Once again, rather than reading my words, pop over to the TPE site.

Skyfire uses multiple weather models and analyses numerous factors that affect sunrise and sunset color including:
  • Cloud type determination
  • Cloud height predictions
  • Gap light
  • Complex system behavior
  • Satellite weather information
  • Topography
The Skyfire algorithm is run against the latest weather data multiple times per day. Forecasts for both sunrise and sunset are generated for the next four days. Each forecast is refined using the latest input data when the algorithm is re-run.


TPE displays Skyfire as a colourful map overlay, alongside the critical time and light angle information. A spot-check API allows TPE to display the latest forecasts for your favorite locations all in one place.

Coverage is CONUS, 'bits' of Canada and most of Europe, including Iceland :-)


The developers say that their goal is to maintain an accuracy of 80% or better. And they say they're committed to keep improving that over time. In recent testing, they have been exceeding the 80% goal based on qualitative assessment of field reports and web-cam feeds.

Bottom line: if you have not tried a photography planning App, try one. From my perspective I can recommend PlanIt! as a standalone tool. Or, if you want a larger toolbox, then TPE with Skyfire and TPT (you can buy these as a deal from the Apple App Store) are not only great Apps, but great entertainment, as you sit on your sofa planning a future photography trip.

Wednesday, January 25, 2017

Script Update

Just a short post to say that I've updated the Auto Landscape Bracketing script: simplifying it; making it a little bit more robust; and adding some error checking and user feedback.

Download it from the link on the right or from here.

Saturday, January 21, 2017

A killer Combination

In preparation for my Icelandic trip, I’ve been honing my timelapse knowledge and skills.

From a camera perspective I’ve decided to use my EOSMs (I have two) as my ‘dedicated’ timelapse platforms, leaving my 5D3 and my IR-50D for stills photography.

This week I took delivery of a new timelapse gadget: the Radian 2.


The guys at Alpine Labs (https://alpinelaboratories.com/products/radian-2) have been a pleasure to buy from and have answered all my follow-up questions.

I bought the Radian to increase the drama in my timelapses by adding motion panning. The Radian controls the camera (in my case a 5D3) via a USB cable, and allows you to control the shutter, aperture etc. The Radian 2 'speaks' to a control/settings App that runs on my iPad.

I had also worked out that I should be able to trigger my EOSM as well, despite the Alpine Labs site saying that the EOSM can't be used with the Radian. The 'secret' to getting the EOSM to work with the Radian is, of course, Magic Lantern.

Magic Lantern has an audio trigger mode and the Radian has a 2.5mm audio plug shutter output. Thus all I needed to buy was a 2.5mm (Radian shutter trigger) to 3.5mm (EOSM mic socket) audio cable.

My first attempts, however, didn’t work, as there appeared (sic) to be noise on the line that was randomly triggering the camera.

Thanks to some advice from one of the ML community (thank you Dan), I was introduced to some camera settings I had never looked at before, in the Movie menu; namely the audio gain, which was defaulting to auto. Once I had selected manual gain and adjusted a few (Canon) settings, the Radian triggered the EOSM at the correct time.

I'll be posting more about the EOSM-Radian combination in future posts. For now, it's worth saying that a Magic Lantern enabled EOSM and the Radian 2 are a killer combination for those looking to do motion and Holy Grail timelapses, eg via the ML Auto ETTR feature.

Wednesday, January 18, 2017

The Start of an Adventure

As I'm just under a month away from what I hope will become a 'trip to remember', over a week in Iceland with a group of photographers, I thought it was timely to start posting.

From my perspective this is an exciting event in my life, and, at the same time, a little bit daunting; as there is so much to think about.

So, in the spirit of ‘learning from others’, I’ve decided to try and write regular posts about my preparation for the trip, as well as about the trip itself. Hopefully others will pick up on my ‘mistakes and experiences’.

The trip, which is in February, will obviously be challenging simply from a personal ‘survival’ perspective, ie it will likely be cold and windy beyond the normal UK weather. Another difference, which brings many positives, is the latitudinal difference between the South of England and Iceland.

In this post, I'll be restricting myself to saying a few words about clothing, location and weather. In subsequent posts I'll talk about some of the photography equipment I will take with me and, just as importantly, not take with me!

In addition to topping up my wardrobe, ie more layers, hand warmers, waterproof over trousers etc etc, I was alerted to the need to prepare for walking on ice: where standard boot soles will not suffice.

There seem to be two basic styles, one with spikes and one with helical wire. As I don't intend to go mountain climbing, I have taken a gamble on the Yaktrax Pro helical wire technology, hoping they won't 'clog up':



As a photographer, the weather means drama in the clouds, which impacts the quality of light; or, because it is Iceland, the quality of the darkness, eg for photographing the Aurora…I hope!

So I reached for a book I have had for a while and will be (re)educating myself on clouds! The book is called The Cloudspotter's Guide and is written by Gavin Pretor-Pinney of the Cloud Appreciation Society: https://cloudappreciationsociety.org/
 


As for location, I have various (iPad/iPhone) Apps that, in the past, have helped me prepare for photography trips; some I have mentioned in previous posts, for example PlanIt! for Photographers and TPE.

The one I will talk about in this post is PhotoPills: http://www.photopills.com/

PhotoPills is one of those Apps that, at first sight, can look overwhelming. However, it is an App I can recommend to all photographers. Like other Apps, it allows you to plan/envision your trip from the comfort of your armchair, as well as do many other things. As an example, take sunrise and sunset in the South of England versus Iceland, while I am away.

As photographers know, as the sun sets and rises, the quality of light changes. Not only because of your location, but also because of the local weather, which will affect the passage of light through the atmosphere.

As photographers we know why the sky is blue, eg Rayleigh scattering, and know that the quality of light cannot be predicted with absolute certainty. Thus the weather is usually something you have to assess closer to your shooting date: so I’ll say no more about Icelandic weather for now.

The one thing you can guarantee will not change, and that you can look years ahead and be sure of, is the timing of the local sunset and sunrise. For convenience, the sun's elevation is talked about in zones, eg Civil Twilight, Nautical Twilight and Astronomical Twilight.



All these zones, including the Golden 'Hour' (0 to +6 degrees) and Blue 'Hour' (-6 to -4 degrees) that attract photographers, are, of course, part of the sunset (or sunrise) continuum, ie the 'zones' merge into each other and don't have 'hard' interfaces.
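
To make the zone boundaries a little more concrete, here is a minimal sketch (plain Lua, in keeping with the ML scripts elsewhere on this blog) that maps a sun elevation to the names used above. The Golden and Blue 'Hour' limits are the ones quoted in this post; the twilight bands are the standard definitions.

-- A rough mapping from sun elevation (degrees relative to the horizon) to the zones discussed above.
local function sun_zone(elevation)
    if elevation > 6 then
        return "Daylight"
    elseif elevation >= 0 then
        return "Golden Hour"           -- 0 to +6 degrees, as quoted above
    elseif elevation >= -4 then
        return "Civil Twilight"        -- civil twilight runs from 0 to -6; this is the part above the Blue Hour
    elseif elevation >= -6 then
        return "Blue Hour"             -- -6 to -4 degrees, the lower part of civil twilight
    elseif elevation >= -12 then
        return "Nautical Twilight"
    elseif elevation >= -18 then
        return "Astronomical Twilight"
    else
        return "Night"
    end
end

print(sun_zone(3))   -- Golden Hour
print(sun_zone(-5))  -- Blue Hour
print(sun_zone(-20)) -- Night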

So what would these 'photographic magical times' look like if I stayed at home, rather than travelled to Iceland? Well, PhotoPills allows me to see this in a very graphical way; eg let's take sunset in the UK on the 10th of February 2017:


So how will this change when I’m in Iceland?

Well dramatically I would say, as can be seen from this PhotoPills map of Iceland’s sunset on the 10th February 2017:
 


At home I would usually be limited in my Golden Hour or Blue Hour shooting time. When I’m in Iceland, at first sight, it looks like a relative photography paradise, ie ‘lots’ of time to capture images.

But, as usual, I’m sure I will run out of time. It also looks like I should move up to Greenland for a full day's Golden Hour photography :-)

As this is the first post I intend to publish on my ‘Icelandic Adventure’, I will draw a line under this one at this point; reflecting that, even before a photographic trip, you have the opportunity to prepare and envision things, especially if you are going to a location you have never been to before.

One final thought: if anyone reading this has any insight/advice for shooting in Iceland, I would welcome you sharing in a comment below ;-)

Monday, January 2, 2017

Further experiments in Super Resolution Photography

In a previous post I wrote about how to use a Magic Lantern Lua script (on the right) to create auto super resolution brackets. In this post, I offer some insight into why SR photography may be of interest to you, and a Photoshop action to automate PS-CC post processing: well at least for a four layer SR stack.

As usual my test image is a view in our house.

I first used the SR Lua script to create a four bracket SR set, using the jiggle and move option.

After ingesting into LR, I simply exported as layers into Photoshop.

I had previously created a script that took the four layers, scaled them up by 200%, aligned the layers, reduced the opacities according to the usual recipe for SR processing (100%, 50%, 33%, 25%), flattened the layers, and reduced the image scale back down, ie by 50%. For those interested, here are the script commands:

Action: 4 Layer SR
  • Select back layer (Without Make Visible; 3)
  • Select front layer (Modification: Add Continuous; Without Make Visible; 1, 3, 4, 5)
  • Image Size (Width: 200%; With Scale Styles; With Constrain Proportions; Interpolation: nearest neighbor)
  • Align current layer (Using: content; Apply: automatic; Without Vignetting Removal; Without Geometric Distortion Correction)
  • Select back layer (Without Make Visible; 6)
  • Select forward layer (Without Make Visible; 3)
  • Set current layer (To: layer; Opacity: 50%)
  • Select forward layer (Without Make Visible; 4)
  • Set current layer (To: layer; Opacity: 33%)
  • Select forward layer (Without Make Visible; 5)
  • Set current layer (To: layer; Opacity: 25%)
  • Flatten Image
  • Image Size (Width: 50%; With Scale Styles; With Constrain Proportions; Interpolation: nearest neighbor)
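
As an aside, the opacity recipe generalises beyond four layers: the n-th layer up the stack gets an opacity of 100/n percent, so that each frame ends up contributing equally to the blend. Here is a minimal Lua sketch of that arithmetic (illustrative only; it is not part of the action above):

-- Opacities for an N-layer super resolution stack: the n-th layer from the bottom gets 100/n percent.
local function sr_opacities(n_layers)
    local opacities = {}
    for n = 1, n_layers do
        opacities[n] = math.floor(100 / n + 0.5) -- rounded to the nearest percent
    end
    return opacities
end

-- For a four layer stack this gives 100, 50, 33, 25: the values used in the action above.
print(table.concat(sr_opacities(4), ", "))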


For comparison, I made good use of the new LR-CC compare mode (in the Library module).

First, here is the base (RAW) image:


Here is the LR comparison after processing both a single RAW .CR2 and the PS-CC processed SR image with the same settings, ie lens correction and detail sharpening.


I hope you agree that, if you are interested in getting the maximum image quality and detail out of a scene, a simple 4-layer SR approach appears to have real benefits (accepting the subject shouldn't be moving).

The SR image (on the left) clearly exhibits more detail; the single RAW is softer and lacks the detail of the SR version.

As usual I welcome any feedback on this approach to SR photography, ie using an ML script and a PS-CC script (at least for a four image bracket set).




Friday, December 30, 2016

How far can you push things?



As I prepare for a photography trip in Feb (more about this soon), I've used the Christmas break to reset my Magic Lantern configuration. Up until now I've been using the Canon 5D3-113 firmware as, within ML, this is considered the 'best' - but only if you are a videographer, which I'm not.

The downside of using the 113 Canon firmware is that it doesn’t support F/8 AF.

Why is this important? Because on my 70-200 F/4L with a x2 extender (which costs two stops of light) the widest available aperture is F/8. So I took the decision to move to the 123 ML version.

Moving back and forth between 113 and 123 is a simple matter of loading the appropriate Canon firmware, doing an in-camera update, and then following this with the appropriate in-camera ML update.

Having assured myself that all my ‘go to’ scripts work, ie: Auto Bracketing, Auto HFD, LE Simulation and Super Resolution bracketing (see on the right), I’ve spent a few hours testing the F/8 AF capability.

As I’ve been on a Christmas (garden) safari for a week [:-)], I had lots of willing subjects: as long as I fed them!

As many will be aware, unless you have a camera with an ISO invariant sensor (and I don't), you need to appreciate the 'limitations' of the sensor as you set exposure. For example: recognise that you may need to prioritise something, eg shutter speed; use ETTR where the camera electronics limit things, eg below about ISO 1600 on my 5DIII; and simply push whatever image comes out of the camera once you are in the (near) ISO invariant zone, eg above about ISO 1600 on my 5DIII. I've used this chart for a 5DII from here before: http://www.clarkvision.com/articles/iso/
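
To illustrate the thinking, here is a small sketch (plain Lua; the roughly ISO 1600 crossover is just my working figure for the 5DIII, taken from the chart above) of how a required gain could be split between in-camera ISO and a push in post:

-- Split a required gain (stops above ISO 100) between in-camera ISO and a push in post,
-- assuming the sensor behaves as (near) ISO invariant above a crossover ISO (about 1600 on my 5DIII).
local BASE_ISO = 100
local CROSSOVER_ISO = 1600

local function split_gain(stops_needed)
    local crossover_stops = math.log(CROSSOVER_ISO / BASE_ISO) / math.log(2) -- 4 stops
    local in_camera_stops = math.min(stops_needed, crossover_stops)
    local push_in_post = stops_needed - in_camera_stops
    return BASE_ISO * 2 ^ in_camera_stops, push_in_post
end

-- A shot needing 7 stops over base (ISO 12800 if done wholly in camera) would,
-- by this rule of thumb, be taken at ISO 1600 and pushed the remaining 3 stops in post.
print(split_gain(7))
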
So my 'test case' from my garden safari, on a foggy, dull, sunless day, was a real extreme shot. I was aiming to handhold the 70-200 at F/8 with a x2 Extender, and to achieve a very fast shutter speed of 1/1600s. This resulted in an ISO of 12800, ie 7 stops up from ISO 100.

I processed the image in LR and used Piccure+ as well. The resulting image would not win me any prizes, but it does show what an ISO 12800 image looks like on the 5DIII.


Next step: more testing :-)

Tuesday, December 20, 2016

A new technique for Super Resolution Photography...maybe?

As readers of my posts know, I try and get as much as I can out of the (Canon) camera technology; helped, of course, by Magic Lantern.

So far, accounting for my bias towards landscapes, cityscapes and architecture photography, through various scripts (see the right hand side), we can automatically:

  • Take exposure brackets, ie according to the dynamic range of the scene. The camera works out the number of brackets to take;
  • Take focus brackets, from a focus point to the macro end of a lens, or the infinity end, or to the hyperfocal distance (HFD) or from one focus point to another. The camera handles the diffraction and the bracket to bracket overlap;
  • Take both exposure and focus brackets at the same time;
  • Take simulated long exposure brackets, in Full Resolution Silent Picture mode, ie no shutter actuations, or with the 'normal' mechanical shutter;
  • Take any number of brackets, ie to use when you are trying to remove people from the scene;
  • Move the lens to the HFD at the push of a button.

In this post, we are going to take the next step in automatic bracketing, namely: automatic bracketing for super resolution photography.

At the heart of super resolution photography (SRP) is trying to use the equipment you have, like a Canon 5D3 in my case, to emulate, say, a Canon EOS 5DS R. In this post I will show how to enhance the resolution of your camera without spending a penny on a new one: in my case taking the 5D3's roughly 22 MP to the equivalent of a 90 megapixel (MP) camera, ie doubling the pixel count in each dimension and so quadrupling the total.

SRP is not new. For example, Hasselblad had a 200 MP camera system, but that 200 MP image was made by a 50 MP sensor, using a special sensor-shift mechanism inside the camera. Typically it would take six separate images, each at a slightly different sensor position, with only a pixel of difference between shots. The camera would then automatically re-align those images and combine them together to produce a photo with four times the resolution.

Last year the Olympus OM-D E-M5 II was announced as the first consumer-level camera to feature this sensor-shift technology. Similar to the Hasselblad, the OM-D E-M5 II takes 8 consecutive photographs with its 16 MP sensor. After shooting these 8 photos, once again each with a different sensor position, it then combines the data from all 8 images into a 40 MP plus image.

But what if you don’t have a camera that can shift the sensor? Are there other ways to achieve a similar result?

Bluntly: yes!

First, you can hand hold and shoot the, say, 8 images and post process those images into a SR image in Photoshop. The technique requires hand holding, as taking the images on a tripod would result in perfect image-to-image alignment; and we need the images to be different. Having said this, the multi-shot, no-sensor-movement stacking approach has some advantage in reducing noise, ie 8 images (albeit perfectly aligned) will give you a three stop reduction in noise, eg an ISO 800 shot will look like an ISO 100 one.

But such perfect alignment will not help us achieve the SR ‘uplift’.

The problem with hand holding is that you are restricted in your shutter speed, eg to 1/FL, or for an average photographer, say, faster than 1/40s.

To overcome the handholding limitations, I offer the following as an alternative technique, well at least for the Magic Lantern Canon shooter. I’m not aware of anyone else coming up with this idea: but I’m prepared to eat humble pie :-)

The key to my technique is to exploit a known 'weakness' in still photography lenses, ie the technique may not work with cine lenses (although I can't test this as I don't have one).

The feature we are going to exploit is a normal lens's focus breathing, or focal length shortening: the name given to the change of focal length (and hence angle of view and magnification) when the focus distance of a lens is changed. It is easy to see the impact of focus breathing by simply looking at an image on the Live View screen at, say, the closest focus point, and then moving to infinity. The image changes on the sensor: which is exactly what we want!

Knowing that all lenses exhibit focus breathing led me to speculate whether I could exploit this as a form of sensor shifting, ie take multiple images at different focus points.

The immediate thought may be: no way, as the images will be differently out of focus between images. But this is where a pragmatic limitation comes in: use wide angle lenses and ‘limit’ things to the HFD.

As a reminder, if one focuses at the HFD, then everything from 1/2 the HFD to infinity will be in focus, according to the out-of-focus criterion you have chosen, eg the circle of confusion and, if you are being critical, the diffraction blur added in quadrature. Luckily for the Magic Lantern shooter, all this Depth of Field maths is built into ML and accessible via Lua scripting.
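
As a small illustration of what ML exposes, the following sketch uses the same lens properties as the script at the end of this post (distances are whatever ML reports; I believe they are in mm) to pop up the hyperfocal distance and the current depth of field limits:

-- Minimal ML Lua sketch: show the hyperfocal distance and current DoF limits for the mounted lens.
function show_dof()
    if lens.af and lens.focus_distance ~= 0 then
        display.notify_box("HFD " .. lens.hyperfocal ..
                           "  near " .. lens.dof_near ..
                           "  far " .. lens.dof_far, 5000)
    else
        display.notify_box("Check lens settings", 3000)
    end
end

show_dof()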

The other advantage of focusing at the HFD is that moving focus to infinity means that only half of the distance to the HFD is 'lost'. Thus, say, if the HFD was at 2m, ie the DoF went from 1m to infinity, then moving the focus to infinity would result in an acceptable DoF from 2m to infinity, ie all we lost was 1m in the near field.

This illustration shows the depth of field visually, with diffraction accounted for. It is taken from an excellent article here: http://www.georgedouvos.com/douvos/Image_Sharpness_vs_Aperture.html

Conceptually the 'new' (I think ;-)) technique I'm offering goes like this:
  1. Compose your scene on a Tripod;
  2. Set the exposure for your scene, ie any exposure you like;
  3. Set the lens to the HFD and take an image;
  4. Move the lens towards infinity and take another image, and repeat until you are at infinity and have taken the required number of images, each slightly different because of the focus breathing; OR simply jiggle the lens via the IS;
  5. Ingest the images into your PC and stack them in Photoshop;
  6. Increase the image size by 200%;
  7. Auto align the images;
  8. Turn the image stack into a smart object;
  9. Run the stack mode as median or mean (and wait a long time!);
  10. Collapse the stack and finish post processing.
The only real problem with the above is step 4, ie repeatedly touching the lens; as manual control of focus position is not good/repeatable as the lens gets towards infinity. The IS jiggle trick works, ie moving the images relative to each other 'randomly' by a pixel, but you have to keep manually intervening. 

But help is on hand in the form of my latest Auto Super Resolution Stacking Script (see below).

My Lua script runs under ML and will automatically control the lens (which must have AF). The script will find the HFD, and, in the updated version below, jiggle the lens.

The script only requires four inputs:

  • The user selected number of super resolution images to take, ie 4 to 20;
  • A user selected delay, 0 to 5 seconds;
  • What sensor shift strategy to use;
  • And whether to add 'bookend' images at the start and end, ie dark frames to differentiate the SR stack.
So what are the results?

Well, you will have to wait to see as, so far in my experiments, all I have done is get the auto stacking script (see below) running and take a few SR image stacks in our back garden, using my 24mm F/4L at 24mm. The stack comprised 9 images, covering the HFD (about 1.8m) to the 'infinity' end of the lens. Each image was the usual 30MB CR2 file, ie 5760 x 3840 pixels. The resultant SR image was 516MB and had 11645 x 7756 pixels. In future posts I will explore some test images.

So why would you bother?

Apart from the curiosity factor, which drives me, there are occasions when you need all that pixel real-estate, eg a super large print on a billboard. Another reason photographers enjoy having a lot of megapixels to play with is that it gives us a lot more room to crop and manipulate our images in post production.

Of course, the downside is the sheer workload in simulating a super large 'mega pixel' sensor, ie a 90MP equivalent on my 5D3. Thus the technique is one to use with care!

For now, without evidence, I offer this ‘new’ approach to super resolution photography, ie shooting on a tripod at any shutter speed, with any (Canon EOS) camera, and look forward to hearing from others regarding their thoughts on this post, especially the Canon Magic Lantern users.


Auto Super Resolution Bracketing Script

--[[
This script simulates a super resolution stack for post processing...
by simulating sensor shifting by changing focus. Because of this, the script is 'limited' to taking...
sharp images between the HFD and infinity.
This script assumes you are on a tripod, ie not handholding as 'normal' in super resolution stacking (without a special sensor)
This version does NOT work with FRSP...at the moment :-)
Should be in LV
Canon review should be set to OFF
Usual caveat: script was written for my workflow and enjoyment, and 'only' tested on a 5D3

Version 0.2

*******************************************************************************************************
*                                                                                                      *
* If you reference this script, please consider acknowledging me: http://photography.grayheron.net/   *
*                                                                                                      *
*******************************************************************************************************
--]]

function my_shoot() -- to grab an image
    camera.shoot(false)
end

function jiggle(around)
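    -- each half-shutter press nudges the IS, shifting the image 'randomly' by a pixel or so between frames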
    for i = 1, around do
    key.press(KEY.HALFSHUTTER)
    msleep(100)
    key.press(KEY.UNPRESS_HALFSHUTTER)
    end
end

function move() -- to HFD position irrespective of lens starting position
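    -- strategy: coarse steps towards the HFD, then single fine steps back and forth to land as close to it as possible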
    if lens.focus_distance ~= 0 and lens.af then -- lens is OK
        if lens.dof_far ~= lens.dof_near then -- ok to move
            if lens.focus_distance < lens.hyperfocal then
                repeat
                    lens.focus(-1,2,false)
                until lens.focus_distance >= lens.hyperfocal
                repeat
                    lens.focus(1,1,false)
                until lens.focus_distance <= lens.hyperfocal
                repeat
                    lens.focus(-1,1,false)
                until lens.focus_distance >= lens.hyperfocal
            else
                repeat
                    lens.focus(1,2,false)
                until lens.focus_distance <= lens.hyperfocal
                repeat
                    lens.focus(-1,1,false)
                until lens.focus_distance >= lens.hyperfocal
            end
        else
            beep (3, 200 , 500)  -- warning message
            display.notify_box("Check aperture", 3000)
        end
    else
        beep (3, 200 , 500)  -- warning message
        display.notify_box("Check lens settings", 3000)
    end
end  

function check_lens_ready() -- just in case
        if lens.focus_distance ~= 0 and lens.af then -- lens is OK
            repeat
            msleep(100)
            until lens.focus(0,1,false)
        end
end

function check_bookend() -- adds an under exposed (dark) bookend frame
    if supermenu.submenu["Bookends?"].value == "yes"
    then
        local tv = camera.shutter.ms
        local av = camera.aperture.value
        camera.shutter.ms = 1
        camera.aperture.apex = 9
        my_shoot()
        camera.shutter.ms = tv
        camera.aperture.value = av
    end
end

function go() -- and run the script
    menu.close()  -- just in case
    if lens.af and lv.enabled then -- alright to use script
        msleep(supermenu.submenu["Delay"].value * 1000)
        check_bookend() -- requested or not
        move() -- back to HFD to start super resolution bracketing
        for count = 1, supermenu.submenu["Number of images?"].value do
            my_shoot()
            if count ~= supermenu.submenu["Number of images?"].value then
                if supermenu.submenu["Sensor shift mode"].value == "IS Jiggle" then
                    jiggle(2)
                elseif supermenu.submenu["Sensor shift mode"].value == "Lens Move" then
                    check_lens_ready() -- just in case
                    lens.focus(-1,1,false) -- move towards infinity
                else
                    check_lens_ready() -- just in case
                    lens.focus(-1,1,false) -- move towards infinity
                    jiggle(2)
                end
            end
            msleep(200)
        end
        check_bookend() -- requested or not
        beep (3, 200 , 1000)  -- and display message
        display.notify_box("Script Finished Running", 5000)
    else -- something is wrong
        beep (3, 200 , 500)  -- and display message
        display.notify_box("Sorry, can't use the script", 5000)
    end
end

supermenu = menu.new
{
    parent = "Shoot",
    name = "Super Resolution Bracketing",
    help = "Works best with wide lens: be warned!",
    depends_on = DEPENDS_ON.LIVEVIEW,
    submenu =
    {
        {
            name = "Run Script",
            help = "Does what it says after pressing SET",
            depends_on = DEPENDS_ON.LIVEVIEW,
            select = function(this) if lv.enabled then task.create(go) end
            end,
        },
        {
                    name = "Number of images?",
                    help = "Number of super res images to take",
                    min = 4,
                    max = 20,
                    value = 4,
                    icon_type = ICON_TYPE.BOOL,
                    select = function(this,delta)
                        this.value = this.value + delta
                        if this.value > this.max then
                            this.value = this.min
                        elseif this.value < this.min then
                            this.value = this.max
                        end
                    end,
               },
        {
            name = "Delay",
            help = "Delays script start by stated number of secs",
                    min = 0,
                    max = 5,
                    value = 0,
                    icon_type = ICON_TYPE.BOOL,
                    select = function(this,delta)
                        this.value = this.value + delta
                        if this.value > this.max then
                            this.value = this.min
                        elseif this.value < this.min then
                            this.value = this.max
                        end
                    end,
        },{
            name = "Sensor shift mode",
            help = "Choose one of three strategies",
            choices = {"IS Jiggle","Lens Move","Both"},
        },
        {
            name = "Bookends?",
            help = "Places an underexposed frame at start and end of bracket set",
            choices = {"no","yes"},
        }
    }
}