Long-Exposure Composite Photography
An advanced guide to interval shooting
By K David in Articles and Tips on Jan 29, 2015
This article presents a method for simulating long-exposure images using your DSLR's interval shooting mode. It is geared toward advanced-level DSLRs such as the K-3 and 645Z, though other bodies (such as the K-7 and K-5) offer a more limited version of this functionality in their menus. If you don't have a Pentax with the proper interval shooting modes, the technique can be reproduced in Photoshop, too. This article presents the methods to perform long-exposure simulation in-camera as well as in post.
Long-exposure simulation offers an alternative to stacking filters on your camera when you want to take a long-exposure shot in bright light. It uses your high-end Pentax's Interval Composite mode, which is described in greater detail in the pending article on the K-3's creative photographic modes. In short, the camera takes repeated images (up to 2,000) at a set interval and composites them in-camera. The camera can either keep or discard all the process photos.
Process: Pentax K-3 and 645Z
In your K-3 or 645Z's drive mode menu, select Interval Composite shooting, decide if you want the process photos, and select your blending mode. The camera will do the rest. For blending, averaging mode is the ideal mode to simulate long-exposure photos.
Tip: invest in a larger SD card -- at least 64 gigabytes -- and always save your images. In some cases, the best long-exposure blend is not the final composite in a sequence but an earlier intermediate. If you allow the camera to discard all the images, you may lose the best version of the shot. In Raw+, the K-3 can fit around 900 images on a 64 gigabyte memory card, so that size should be ample for all but the most ludicrous projects. On the 645Z, the image volume will be significantly less, and a 128 gigabyte memory card may be a better option.
Process: Pentax K-7, K-5, and Others
Prior to the current generation of flagship DSLRs, the Pentax lineup included multiple-exposure shooting in some cameras as a menu option. In the K-7, it is in the second menu window under the Multi-exposure setting. The K-7 and other such cameras offer only up to nine photos in this mode, and the only blending is averaging (it is not a selectable option).
Process: Entry-level Pentax DSLRs
For DSLRs without an interval composite mode or a suitable multiple-exposure mode, you'll need to collect the images yourself. Ideally, use a remote control so that you don't introduce camera shake by pressing the shutter release. Counting in your head or using a watch or a smartphone stopwatch app, take all the photos manually at a set rate. Afterward, you'll need to create the simulated long exposure in Photoshop (or another post-processing program; this article only discusses Photoshop).
To use Photoshop, you will need the Extended version, which includes the Statistics script. To access it, follow this menu path: File > Scripts > Statistics.
This is the Statistics dialog box, showing the various statistical image blending modes.
The animated gif below shows a series of images I took to demonstrate various compositing modes in CS6. Typically, Mean will be the best option for simulating long exposures. This animation shows the image progression.
Blending Mode Results
Here are images of the various blending mode results. Mean, the first, is the best for simulating long-exposure images. Median can also produce long-exposure simulations, though with less reliability and different results than Mean. The other modes have different uses, some of which are primarily forensic.
Mean and Median
Mean and Median vary in the manner in which they average. Here's how they function.
Mean looks at each pixel position and computes the average value: it adds that pixel's value from every layer, divides the sum by the number of layers, and assigns the result to the pixel. This means a pixel can end up with a value that appears in none of the contributing images, but sits directly in the middle of all of them. Mean is also the script you would use if you take 10 photos of a scenic area and want to blend them to remove tourists.
Median works by looking at each pixel's values across the stack and sorting them low to high. It then selects the value that splits the list in the middle -- the value at which half of the pixels are of greater value and half are of lesser value. With Median, each pixel's value will be the midpoint of the stack and can only be a value actually present in one of the contributing images.
The result is that Mean provides a smoother simulated long-exposure. Median provides a simulated long-exposure that appears more like the contributing images.
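The per-pixel arithmetic behind Mean and Median can be sketched in a few lines of Python. This is an illustration, not Photoshop's actual implementation: each "frame" here is a hypothetical flat list of 8-bit grayscale values standing in for an aligned image.

```python
# Per-pixel Mean and Median blending across a stack of frames.
# Each frame is a flat list of grayscale values (0-255).
from statistics import mean, median

frames = [
    [10, 200, 30],  # frame 1
    [20, 210, 90],  # frame 2
    [30, 220, 30],  # frame 3
]

# zip(*frames) groups the same pixel position from every frame together.
mean_blend = [mean(pixels) for pixels in zip(*frames)]
median_blend = [median(pixels) for pixels in zip(*frames)]

# mean_blend   == [20, 210, 50] -- the 50 appears in no single frame
# median_blend == [20, 210, 30] -- always a value some frame actually had
```

Note how the third pixel illustrates the difference described above: Mean produces 50, a value no contributing frame contains, while Median picks 30, which does appear in the stack.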
Maximum and Minimum
Maximum and Minimum work on the images in the same way but keep opposite results. Here's how they function.
Both maximum and minimum modes stack all the images and then compare pixels. Maximum keeps the pixel with the greatest (lightest) value. Minimum keeps the pixel with the lowest (darkest) value.
Neither Maximum nor Minimum will create a simulated long exposure. However, they have creative uses with long-duration simulation as they can create blending masks that will help reduce digital noise, eliminate distracting elements, or improve image tonality when blended into the simulated long exposure image.
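Maximum and Minimum reduce to simple per-pixel comparisons. A minimal sketch, using the same hypothetical grayscale frames as before:

```python
# Per-pixel Maximum and Minimum stacking: keep the lightest or the
# darkest value seen at each pixel position across the stack.
frames = [
    [10, 200, 30],
    [20, 210, 90],
    [30, 220, 30],
]

max_blend = [max(pixels) for pixels in zip(*frames)]
min_blend = [min(pixels) for pixels in zip(*frames)]

# max_blend == [30, 220, 90]
# min_blend == [10, 200, 30]
```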
Range, Variance, and Standard Deviation
I won't lie. This is the point where I had to start researching how these modes work. Adobe's help center has an article on this here, but it reads like Space Shuttle schematics.
Range performs Maximum and Minimum blending and then subtracts the Minimum data from the Maximum.
Variance tells you how far each pixel strays from the Mean. The process creates a Mean blend internally (but not as a delivered image product), then, for each pixel, subtracts that mean from each contributing image's value, squares the differences, and averages the squares. Variance can help uncover hidden image flaws or be used as a blending-mode mask to create interesting lighting effects in your image.
Standard Deviation is the square root of the Variance result: it runs a Mean blend internally (but does not provide the blend as a delivered image product), measures each pixel's average squared deviation from that mean, and then takes the square root. The output shows how great the deviation of each pixel is from its average value across the stack.
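These three modes are ordinary per-pixel statistics over the stack. A minimal sketch, reusing the hypothetical grayscale frames from the earlier examples (population variance, since the stack is the whole population of frames):

```python
# Per-pixel Range, Variance, and Standard Deviation over a stack.
from statistics import pstdev, pvariance

frames = [
    [10, 200, 30],
    [20, 210, 90],
    [30, 220, 30],
]

range_blend = [max(p) - min(p) for p in zip(*frames)]        # Maximum minus Minimum
variance_blend = [pvariance(p) for p in zip(*frames)]        # mean squared deviation
stdev_blend = [pstdev(p) for p in zip(*frames)]              # sqrt of the variance

# range_blend == [20, 20, 60]
# First pixel: values (10, 20, 30), mean 20, so variance is
# ((-10)**2 + 0**2 + 10**2) / 3, and the standard deviation is its square root.
```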
Summation, Entropy, Kurtosis, and Skewness
I'll talk only briefly about these as they are not likely to have direct benefit to long-exposure simulation. They could, under certain circumstances, provide a layer mask that could help retrieve detail, remove noise, or polish other elements of the image.
Summation adds the data from each pixel until it reaches white. At some point, all pixels would eventually be white with enough layers. This mode would be useful to recover lost shadow detail in a portion of an image by using this result as a semi-transparent and blended mask.
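The clipping behavior described above can be sketched the same way as the other modes: per-pixel sums are capped at white (255 in 8-bit terms), which is why enough layers eventually push every pixel to white.

```python
# Per-pixel Summation with clipping at white (255).
frames = [
    [10, 200, 30],
    [20, 210, 90],
    [30, 220, 30],
]

sum_blend = [min(sum(p), 255) for p in zip(*frames)]

# sum_blend == [60, 255, 150] -- the middle pixel has already clipped
```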
Entropy has uses for helping determine image encoding data requirements. This is another mode that could have some limited use as a layer to help recover some image data.
Kurtosis is an amplified standard deviation. In this image it's basically white because there's not much tonal variation in the Mean image. This mode may have greater use with a more dynamic image.
Skewness, according to Adobe, "is a measure of symmetry or asymmetry around the statistical mean." Skewness first derives the image Mean and then looks at each contributing image to determine how many pixels vary from the mean and how large that range of variance is. Skewness could add some unrealistic lighting effects to an image, but only if used responsibly and in small doses.
The biggest challenge is gauging your timing. When in doubt, aim for a shorter interval. Here's a photo I was particularly happy with last year when I began experimenting with this technique.
It's basically flawless, right? Well, let's zoom in on the clouds on the left.
Saw-tooth clouds are a terrible and distracting element in cloud-trail photos. This photo came from combining multiple images taken approximately every 10 seconds. The clouds were moving far too fast for that long an interval. Had I timed the images with an interval of no more than three seconds, the clouds would have blended smoothly.
Here's a more evolved sample:
This resulted from taking the photos at a two-second interval for about three minutes. I'll spare you the pixel peeping: there are no saw-tooth clouds.
Star trails are even more susceptible to timing issues than cloud trails are.
This is a fine image, right? I mean, I suppose if an image gets small enough, pretty soon all its flaws will be hidden. For this image I took 30-second exposures spaced five seconds apart to let the sensor cool down between shots. Here was the result, at 100% (click to enlarge):
Less a star-trails shot than a star-dots shot. So be mindful of your between-exposure timing whenever you simulate a long exposure. Nothing spoils a long-exposure effect like an image that is clearly a combination of short exposures.
This is a great technique with a lot of creative possibilities. Here are some samples to spur your creative muscle.
Sunset at Point Arena
Thirty Minutes of Sunset
Lime Ridge Oak Tree
So the simulated long exposure effect can lead to some dramatic results. But it's not an effect limited to static images. Here are two animations (part of a larger project I've been working on for a bit more than a year) that use interval shooting and interval composite modes.
Cloud Movements Animation. I made this piece by combining about 200 constituent images (saved via the option to record the process during interval composite shooting) and encoding them into a video at 10 frames per second.
Big Dipper. Similar to the above, I made this animation with constituent images for a star trail.
The ideas outlined above aren't the only ways to use long exposure simulation. Please share some of your ideas and samples in the comments.