Computational Photography and landscape focal stacking

August 17, 2018

2018-06-05 15-38-25 (B, Radius 6, Smoothing 3)

When: 4th June 2018

Where: Richmond West Dyke Trail, BC, Canada

How: Canon EOS 60D with EF 70-200 mm f/4L, at f/4.0 1/1250 sec and f/5.6 1/500 sec

Focal stacking is normally associated with macro photography. Because of the extremely shallow depth of field you get when photographing very close to small objects such as jewellery, producing a sharp image of the whole object requires many exposures taken at small increments of focus distance, usually with the camera mounted on a finely adjustable macro rail. In post-processing, the multiple images are merged, keeping only the pixels considered to be in focus. This is called focal stacking. The same technique can be used in landscape photography: take images of foreground objects (e.g. flowers) with a large aperture, then shoot the background at a distant focus point with an aperture suited to the available light. By merging the two or three images with software like Helicon Focus, one can produce an image with both the foreground objects and the background in equally good focus (see the example above).
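For anyone curious what the merge step actually involves, below is a rough sketch in Python with OpenCV of one simple way a focus-stack merge can be done. This is not the algorithm Helicon Focus uses; it assumes the frames are already aligned, and the file names are placeholders.

```python
# A minimal focus-stacking sketch (illustrative, not Helicon Focus's method):
# for each pixel, keep the value from whichever frame is locally sharpest,
# using the absolute Laplacian response as a simple sharpness measure.
# Assumes the input frames are already aligned.
import cv2
import numpy as np

def focus_stack(paths):
    frames = [cv2.imread(p) for p in paths]
    sharpness = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
        # Smooth the sharpness map so the per-pixel choice is less noisy.
        sharpness.append(cv2.GaussianBlur(lap, (0, 0), 3))
    # Index of the sharpest frame at each pixel location.
    best = np.argmax(np.stack(sharpness), axis=0)
    stacked = np.zeros_like(frames[0])
    for i, frame in enumerate(frames):
        stacked[best == i] = frame[best == i]
    return stacked

# Placeholder file names: one frame focused on the foreground, one on the background.
result = focus_stack(["foreground.jpg", "background.jpg"])
cv2.imwrite("stacked.jpg", result)
```

Dedicated stacking software adds frame alignment and more careful blending around the boundaries between in-focus regions, which is why it still does a noticeably better job than this naive per-pixel pick.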

The computational power required for focal stacking can now be found in recent smartphones and could be included in compact cameras. A smartphone or compact camera can therefore incorporate multiple fixed focal length lenses and produce near-simultaneous images with varying depths of field. A well-known consumer example of this technique is Portrait mode on the Apple iPhone X. But the potential exists to use even more lenses with different focal lengths and perspectives to create an image file with a user-selectable depth of field. This would eliminate the need for a real-time decision on what focal length and aperture to select at the moment of taking an image. The computational power in the device could potentially replace optics to provide creative images that exploit depth of field in the same way a traditional DSLR does with its large, expensive interchangeable lenses. Both Google and Apple are investing heavily in computational photography, so perhaps we will see more interesting mainstream innovation for photography on Android and Apple devices. Who knows, maybe Canon and Nikon might finally decide to be more technically innovative.
