As a bit of personal entertainment, I decided to see what exploiting multiple technologies together can achieve, and whether there are any 'risks', eg one technology fighting another.
My experiment was to take my second-hand Canon M3 and load it up with technology.
Because of Covid (I'm still awaiting my second jab) and the British weather (rain), the subject matter of the test was rather limited: it took place in our library.
My 'vision' was to create a wide angle, deep focus, floor-level image. That is the kind of image I might wish to capture in a cathedral once I start getting out.
As stated above, I was using a second-hand Canon M3: an APS-C camera with pretty good features that you can pick up for around 200 GBP. To this I added a lens I already own: the Canon EF-M 11-22mm.
Onto this base arrangement I then added a SmallRig cage, a handle, and a Canon EVF I already had.
Although the cage adds weight, it lets me attach 'stuff' to the camera and makes the M3 feel right in my large hands. Finally, it also adds structural protection to the already robust M3.
As for achieving a low-level shot, on this occasion I used my Move Shoot Move Z platform: a great piece of solid technology that, unlike similar (friction-based) products, locks into place.
When all the above is put together, my test setup looked like this (note I could have bolted the MSM platform to one of my Platypod plates for extra stability, but didn't bother for the test):
Now some may say this looks like a Frankenstein-like affair, and they may be right, but it works for me.
Note the handle placement, which allows me to pick the camera up and use the handle in portrait mode. Also note the additional tripod plate bolted to the cage, effectively turning the cage into an L bracket.
So much for the hardware-based technology; let's now add in some essential software-based technology in the form of the Canon Hack Development Kit (CHDK). Additionally, thanks to one of the CHDK gurus (Philmoz), let's add in the CHDK build with a 64-bit floating-point maths library, as the base CHDK 'only' has a 32-bit integer maths library.
The final piece of technology is to add on my reworked M3 Bracketing script (that exploits the 64-bit float capability) and all is well in the world :-)
As this was a test, all I did was place the camera on the ground next to where I was sitting: there was no composition in this test ;-)
In the script settings I had selected the option to focus bracket from the minimum focus distance out to three times the hyperfocal (H). H was based on an overlap blur criterion of 20 microns. Thus the final shot, focused at 3H, had an infinity defocus blur of 20/3 microns.
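To make those numbers concrete, here is a minimal Python sketch of the focus bracketing maths (the actual script is CHDK Lua, and the 11mm focal length and 0.15m minimum focus distance are my assumed illustrative values, not taken from the script):

```python
# Illustrative sketch of hyperfocal-based focus bracketing.
# Assumptions: 11mm (wide end of the EF-M 11-22mm), f/8,
# 20 micron overlap blur, 0.15 m minimum focus distance.

F_MM = 11.0     # assumed focal length (mm)
N = 8.0         # aperture, f/8
C_MM = 0.020    # overlap blur criterion: 20 microns, in mm
MFD_MM = 150.0  # assumed minimum focus distance, in mm

# Hyperfocal distance (thin-lens approximation): H = f^2 / (N * c)
H_MM = F_MM ** 2 / (N * C_MM)

# One common 'perfect bracketing' scheme places focus at H, H/3, H/5, ...
# down to the minimum focus distance, so adjacent depths of field
# overlap at the blur criterion.
brackets = []
k = 1
while H_MM / (2 * k - 1) >= MFD_MM:
    brackets.append(H_MM / (2 * k - 1))
    k += 1

# The final shot is focused at 3H, which reduces the defocus blur
# at infinity to C/3, ie 20/3 microns.
infinity_blur_um = (C_MM * 1000) / 3

print(f"H = {H_MM:.0f} mm")                     # 756 mm
print(f"brackets (mm): {[round(b) for b in brackets]}")
print(f"infinity blur at 3H: {infinity_blur_um:.2f} microns")
```

The real script works from the focus distances the lens reports, so its bracket count (seven, in my test) will differ from this simplified sketch; the point is only to show how the 20-micron criterion drives both H and the 20/3 micron infinity blur.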
The camera was set to f/8 and I used an ISO of 400, at a shutter speed of 4s.
Once set up, all I needed to do was get the script running and press SET. The script then handled everything else.
To give an impression of what each image's exposure looked like, here is a Lightroom grab:
Here we see the infinity shot, taken at 3H, before any LR adjustments. The histogram looks OK, ie I didn't need to ask the script to exposure bracket as well.
The script created a seven-image focus bracket set, although on inspection in LR the first focus bracket appeared to be redundant, ie it contained no in-focus elements in the scene.
The first step in post-processing was to use a new piece of software technology, DxO PureRAW. This produced seven new 'RAWs' (PureRAW-processed DNGs) in Lightroom, whose RAW sharpening and RAW noise reduction made the Canon (ISO 400) CR2 images look really good.
The second step was to use the base Lightroom technology to reduce the highlights and increase the shadows, then select the seven DNGs and, from Lightroom, do a round trip to the Helicon Focus technology.
Once Helicon Focus had returned the processed focus stack to Lightroom, I used one more piece of software 'technology': a preset I created to extract the most out of a RAW-based image.
The image above is the focus-stacked result as returned from Helicon Focus, whereas the image below is the final version after applying my LR preset, a bit of deconvolution sharpening in LR, and the LR transform module to straighten things up:
So, there you have it: another boring image that shows what can be done if you are mad enough to want to do things differently and augment your camera and post-processing with technology.
As usual I welcome any feedback on this post or any of my posts.