04-07-2020, 06:17 AM   #1
Pentaxian




Join Date: Feb 2015
Photos: Gallery
Posts: 12,173
How is multishot super-resolution truly possible?

There is pixel shift, and half-pixel pixel shift, featured in some camera models: I'll call that method 1.
It was also claimed that it is possible to produce super-resolution using multiple exposures stacked externally with image processing software: I'll call that method 2.

Articles about method 2:
Do You Want More Resolution? Use Super Resolution | Fstoppers
http://photoncollective.com/enhance-practical-superresolution-in-adobe-photoshop


This post is about method 2.
Image noise is reduced via image stacking, that is for me easy to understand.
The smallest feature a single exposure can resolve is one pixel wide. I imagine that two images are aligned on top of each other by comparing image features, pixel to pixel.
Now, how can superior resolution be obtained from randomly taken exposures if the precision of frame-to-frame alignment can't exceed the pixel pitch?
Is super-resolution from randomly taken frames a hoax, or have I missed something?


Last edited by biz-engineer; 04-07-2020 at 06:36 AM.
04-07-2020, 07:01 AM - 2 Likes   #2
Otis Memorial Pentaxian
stevebrot's Avatar

Join Date: Mar 2007
Location: Vancouver (USA)
Photos: Gallery | Albums
Posts: 42,007
I've done it and it works. That said, the novelty was not worth the hassle or the huge file sizes for the method I tried.


Steve
04-07-2020, 07:17 AM - 1 Like   #3
Pentaxian
Wasp's Avatar

Join Date: Mar 2017
Location: Pretoria
Photos: Gallery
Posts: 4,650
The thing with noise is that it is random. So if you have a series of identical pictures the noise will be spread in different places (or pixels) from image to image. Stack the images and the noise goes away. Full disclosure: I haven't tried this myself.
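The square-root-of-N behavior of stacking can be sketched in a few lines. This is a hypothetical numpy simulation of a flat gray patch, not anything camera-specific:

```python
import numpy as np

# Simulate 16 "frames" of a flat gray patch with random sensor noise.
rng = np.random.default_rng(0)
true_value = 100.0
frames = true_value + rng.normal(0.0, 10.0, size=(16, 1000))

stacked = frames.mean(axis=0)  # stack by averaging

# Averaging N frames cuts random noise by about sqrt(N) = 4 here.
print(frames[0].std() / stacked.std())  # roughly 4
```

The signal is identical in every frame while the noise is independent, which is exactly why the average converges on the clean value.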
04-07-2020, 07:58 AM - 3 Likes   #4
Site Supporter
Site Supporter




Join Date: Jun 2008
Location: Idaho
Photos: Gallery
Posts: 2,360
It doesn't have so much to do with noise. When the camera or the sensor is moved by less than one pixel (consider it a sub-pixel shift), the values of all the pixels change depending on details in the image smaller than the pixels themselves. That change represents finer detail than a single image is able to resolve, and with the proper software and known shifts (the shift value must be known), higher-resolution images can be obtained.

It happens all the time in your eyes: small amounts of jitter in the eye muscles allow you to see better than if your eyes were perfectly still (assuming you have good vision to begin with). It has also been done in astronomy and military photography to boost resolution, but as stevebrot mentioned, it's time-consuming and hard to do compared to snapping a single image. It also doesn't work with moving subjects, since motion negates comparing sensor-shifted pixels to get more information.

The technical end of it has to do with sampling theory and gets pretty complicated, but it does work. Though the sample size (the pixel) may be large, information about features smaller than the sample size can be obtained if enough shifted samples are taken.
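The point that a sub-pixel shift changes every pixel value can be demonstrated with a toy one-dimensional simulation (hypothetical numbers, purely to illustrate the principle):

```python
import numpy as np

# A "scene" with detail finer than one pixel: each coarse pixel
# averages 4 fine samples, and the sine period is ~3.2 fine samples.
scene = np.sin(np.linspace(0.0, 40.0 * np.pi, 64))

def capture(offset_fine):
    """Image the scene with coarse pixels, starting at a fine-sample offset."""
    window = scene[offset_fine:offset_fine + 60]
    return window.reshape(-1, 4).mean(axis=1)  # 15 coarse pixels

frame_a = capture(0)  # unshifted
frame_b = capture(1)  # shifted by 1/4 of a coarse pixel

# Same scene, different pixel values: the sub-pixel shift carries
# information about structure smaller than one pixel.
print(np.abs(frame_a - frame_b).max() > 0.0)  # True
```

Neither frame resolves the sine wave on its own, yet the two sets of averages differ, and that difference is the extra information super-resolution processing exploits.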


Last edited by Bob 256; 04-07-2020 at 08:06 AM.
04-07-2020, 08:29 AM - 1 Like   #5
Pentaxian
photoptimist's Avatar

Join Date: Jul 2016
Photos: Albums
Posts: 5,113
Method 2 relies on incidental micro-motions of the camera between frames to create the kinds of subpixel offsets between the frames that pixelshift creates.

For example, suppose frame 2 is displaced 5.4 pixels to the left of frame 1. That means if you shift frame 2 by 5 pixels to nearest-pixel alignment for stacking, there's still a 0.4 pixel offset that can be used to resolve subpixel features in the image. It's not as good as the exact 0.5 pixel offset created by pixel shift, but it's still usable. Frame 3 might be 5.2 pixels further left of frame 2, or 10.6 pixels left of frame 1. So now the data contains image pairs with 0.2, 0.4, and 0.6 subpixel offsets that can be used to estimate subpixel features. Collect enough frames and, by chance, the set of relative offsets between all pairs of frames will include many useful fraction-of-a-pixel offsets. (The number of image pairs increases with the square of the number of frames.)

So it does work, but the quality of the result depends on the number of frames and random chance. Whereas pixel shift always gets a perfect 1/2-pixel offset, method 2 may take more frames to get similar-quality results.

(Note: this is the simplified version that assumes linear motion and monochrome sensors. Rotation of the sensor between frames creates periodic patterns of subpixel offsets across the frame. A Bayer filter sensor means that we want offsets of 1/2 the 2x2 CFA pattern or one pixel offsets between frames to get best results. But the overall logic and functionality remains the same -- random offsets between frames provide pair-wise data for super-resolution processing that goes beyond the noise reduction of simple stacking.)
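The counting argument above, that the number of frame pairs (and hence candidate sub-pixel offsets) grows with the square of the frame count, can be sketched like so. The drift values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 10

# Hypothetical hand-held drift: each frame lands a few pixels from the last.
positions = np.cumsum(rng.uniform(2.0, 8.0, size=n_frames))

# Every pair of frames yields a relative offset; keep its fractional part,
# which is what matters for sub-pixel sampling.
pairs = [(i, j) for i in range(n_frames) for j in range(i + 1, n_frames)]
fractions = [(positions[j] - positions[i]) % 1.0 for i, j in pairs]

print(len(pairs))  # 45 pairs from only 10 frames (n*(n-1)/2)
```

With random drifts, those 45 fractional offsets scatter across the interval [0, 1), so many of them land at usable fractions even though none was deliberately chosen.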
04-07-2020, 08:30 AM   #6
Pentaxian




Join Date: Feb 2015
Photos: Gallery
Posts: 12,173
Original Poster
Originally posted by Bob 256:
moved less than one pixel (consider it a sub pixel-shift movement), the values of all the pixels change depending on details in the image smaller than the pixels themselves.
In principle, yes, I agree. But how can images be aligned with sub-pixel precision without knowing in advance how much shift there was between frames?

---------- Post added 07-04-20 at 17:35 ----------

Originally posted by photoptimist:
For example, suppose frame 2 is displaced 5.4 pixels to the left of frame 1. That means if you shift frame 2 by 5 pixels to nearest-pixel alignment for stacking, there's still a 0.4 pixel offset that can be used to resolve subpixel features in the image.
I get that. What I don't get is how frames can be aligned when they don't contain the same information and no information other than the pixels is provided. If the camera moves each frame by 0.5 pixel, you know it and can use it for alignment; but if the movement between frames is random, you would seem to need the underlying high-res image to realign the frames. What I mean is: if you sample a sine wave (amplitude vs. time) with two clocks shifted by a known delay, you can combine the samples as if the sampling frequency were doubled. You can also sample the sine wave with two clocks and a random delay between them and still rebuild the oversampled sine wave, because you already know it is a sine wave. But in the case of an image, you don't know what the image is; you've only got the samples.

Last edited by biz-engineer; 04-07-2020 at 08:38 AM.
04-07-2020, 08:39 AM   #7
Loyal Site Supporter
Loyal Site Supporter




Join Date: Mar 2009
Location: Gladys, Virginia
Photos: Gallery
Posts: 27,603
My guess is that you get much less extra information than you would think. Olympus has their super-resolution mode, but honestly, studying the photos, while there is a bump in detail, it isn't anything like the amount of increase you would expect going from 20 megapixels to 80 megapixels.

In my experience if your goal is to add detail in a landscape situation, you are far better shooting multiple images panorama style and stitching.

Pentax's pixel shift is more about adding color detail and decreasing noise than adding resolution, although I suppose it does some of that as well.

04-07-2020, 08:42 AM   #8
Pentaxian




Join Date: Feb 2015
Photos: Gallery
Posts: 12,173
Original Poster
Originally posted by Rondec:
while there is a bump in detail, it isn't anything like the amount of increase you would expect going from 20 megapixels to 80 megapixels.
To me, the super-res images from Olympus look more like noise-free images (thanks to stacking) with sharpening applied. I haven't seen examples where you can really see detail in the super-res file that is not present in a single frame. Although the super-res image looks smoother, that's for sure.

---------- Post added 07-04-20 at 17:44 ----------

Originally posted by Rondec:
In my experience if your goal is to add detail in a landscape situation, you are far better shooting multiple images panorama style and stitching.
Sure, with the downside that you eventually need a tripod, with or without a pano head. Whereas if super-res really works, it can be done hand-held (like the K-1 II's HHPS).
04-07-2020, 09:47 AM - 1 Like   #9
Senior Member




Join Date: Oct 2015
Posts: 142
I think the key point is that it is possible to get better-than-pixel-pitch alignment. The first step in the "manual" process is to enlarge the pixel count 4x or more using interpolation. Since the "true" pixels are an average of the new interpolated pixels (ignoring the fact that they were partially interpolated anyway due to the Bayer array), slight sub-pixel shifts will produce pixels with slightly different averages of the real-world values. A two-pixel example might make it easier to see:

True world has a gradient from 100 to 0 value (with pure 100 to the left and pure 0 to the right). Gradient is one pixel pitch wide for simplicity.

Images:
75 - 25 (each pixel is averaging the left half and the right half of the gradient, the true gradient is perfectly centered on the pixels)
90 - 40 (each pixel is averaging left and right halves, but shifted a bit less than half a pixel to the left, so it's brighter overall)
50 - 0 (shifted a full half pixel to the right, so the left pixel is taking the center of the first image and the right has only pure black. This is the best sub-pixel-shift case.)

Now you expand these images to 3 pixels wide with interpolation
75 - 50 - 25
90 - 60 - 40
50 - 25 - 0

Best alignment can be done to something like this:
90 - 60 - 40
75 - 50 - 25
------50 - 25 - 0

Stacking would output
82.5 - 53.3 - 30 - 0
or somesuch, that is a much better description of the "true state" of a linear gradient from 100->0. More interpolated pixels and more frames with different offsets would help fill this out and capture the full range.

Of course it works well in this case because the linear interpolation is "correct" -- but it works even if this is not the case with enough frames, just not quite as quickly/obviously.

Last edited by fehknt; 04-07-2020 at 09:48 AM. Reason: spaces didn't space out my stacking example
04-07-2020, 09:53 AM   #10
Site Supporter
Site Supporter
microlight's Avatar

Join Date: Sep 2011
Location: Hampshire, UK
Posts: 2,127
I’ve done super-resolution quite a few times using this article as a guide: A Practical Guide to Creating Superresolution Photos with Photoshop and I also found that it works - as long as you take enough exposures. Much lower noise, and sharper detail - and to pick up on biz’s point - it not only can be done hand-held; it must be done hand-held as the method relies on micro-movements to mimic what pixel-shift does.

I can’t explain the micro-detail of how or why it works, but it does. (biz - you’ve clearly thought about this a lot but I didn’t get from your post whether you’ve actually tried it in practice yet.) I tried dropping from 20 exposures to four or six, but the increased resolution drops even though the file size remains of the same order due to the upscaling required.

An alternative method of increased resolution, as Vincent said, is stitching multiple images, Brenizer-like - but this can produce even larger file sizes depending on how many images you stitch.
04-07-2020, 10:13 AM - 1 Like   #11
Pentaxian
photoptimist's Avatar

Join Date: Jul 2016
Photos: Albums
Posts: 5,113
Originally posted by biz-engineer:
In principle, yes, I agree. But how can images be aligned with sub-pixel precision without knowing in advance how much shift there was between frames?

---------- Post added 07-04-20 at 17:35 ----------

I get that. What I don't get is how frames can be aligned when they don't contain the same information and no information other than the pixels is provided. If the camera moves each frame by 0.5 pixel, you know it and can use it for alignment; but if the movement between frames is random, you would seem to need the underlying high-res image to realign the frames. What I mean is: if you sample a sine wave (amplitude vs. time) with two clocks shifted by a known delay, you can combine the samples as if the sampling frequency were doubled. You can also sample the sine wave with two clocks and a random delay between them and still rebuild the oversampled sine wave, because you already know it is a sine wave. But in the case of an image, you don't know what the image is; you've only got the samples.
Actually, you don't need to know the underlying high res image to realign to the nearest pixel and then also estimate subpixel offset between the frames.

One approach is to cross-correlate the image pairs at different offsets and find the offset that maximizes the cross-correlation. That gets you the nearest integer-pixel alignment. Next, you analyze the shape of the peak in the cross-correlation. Consider the following three examples of how different sub-pixel offsets change the shape of the cross-correlation curve in predictable ways:

1) If the images were perfectly aligned, you'd see cross-correlation values that go low, medium, high, medium, low for the offsets around the aligned value. The shape of the cross-correlation curve would be symmetric about the central value.

2) If the images were offset by exactly half a pixel, you'd see cross-correlation values that go low, medium-low, medium-high, medium-high, medium-low, low for the offsets around the aligned value. The shape of the cross-correlation curve would be symmetric about the midpoint between the two central values.

3) If the images were offset by a quarter pixel, you'd see cross-correlation values that go low, medium-low, medium-high, medium, low for the offsets around the aligned value. The shape of the cross-correlation curve would be asymmetric, biased toward the closer offset by an amount related to the actual fractional offset.

In any case, if you fit the three highest values of the cross-correlation curve to a quadratic equation and solve for the location of the peak of that curve, that location will be a decent estimate of the sub-pixel offset.
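As a one-dimensional sketch of that idea (hypothetical synthetic data; the parabola fit gives an approximate, slightly biased estimate, which is why real pipelines refine it further):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=512)  # frame 1: white-noise "detail"

# Frame 2: the same signal shifted by 3.3 pixels via a Fourier shift.
true_shift = 3.3
f = np.fft.fftfreq(a.size)
b = np.fft.ifft(np.fft.fft(a) * np.exp(-2j * np.pi * f * true_shift)).real

# Integer alignment: peak of the circular cross-correlation.
xc = np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real
k = int(np.argmax(xc))

# Sub-pixel refinement: fit a parabola through the peak and its neighbors
# and solve for the parabola's vertex.
ym, y0, yp = xc[k - 1], xc[k], xc[(k + 1) % xc.size]
delta = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)

print(k, k + delta)  # integer peak at 3; the fit pushes the estimate toward 3.3
```

The asymmetry of the correlation peak is what carries the fractional part of the shift, exactly as described above.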

Another approach finds the integer alignment as before, then takes an FFT of the two images and calculates the subpixel offset from the differences in the phases of the two FFTs.

There are other approaches in the literature.

The deeper point is that fractional-pixel offsets do create measurable differences between the frames and those measurable differences can be used to estimate the fractional-pixel offset.

P.S. Back in the late 1980s, I developed the algorithms for using image data (and a third approach) to directly estimate the perspective equations (2-D offset, scale, rotation, and 2-D keystoning) that related overlapping frames to each other for panoramic tiling. The typical accuracies of those estimates were about 0.2 pixels.
04-07-2020, 11:13 AM   #12
Site Supporter
Site Supporter




Join Date: Mar 2017
Photos: Gallery | Albums
Posts: 568
Noise reduction and increasing detail aren't exactly the same thing. Stacking to reduce noise is common and well accepted in astrophotography. But doing that isn't necessarily going to increase resolution; it's just potentially going to improve what one can see within the existing resolution.
04-07-2020, 11:40 AM - 1 Like   #13
Moderator
Loyal Site Supporter
Loyal Site Supporter
pschlute's Avatar

Join Date: Mar 2007
Location: Surrey, UK
Photos: Gallery
Posts: 8,110
I understand there are occasions where super high resolution is required, but for 99% of the time, for casual or serious amateurs, it is not. Look at the great photographers of the past working with the technology available to them: high resolution was a pipe dream.

A great image, one that will be remembered as iconic, depends on many factors. Resolution will never be one of them.
04-07-2020, 11:43 AM   #14
Otis Memorial Pentaxian
stevebrot's Avatar

Join Date: Mar 2007
Location: Vancouver (USA)
Photos: Gallery | Albums
Posts: 42,007
Originally posted by microlight:
I’ve done super-resolution quite a few times using this article as a guide: A Practical Guide to Creating Superresolution Photos with Photoshop and I also found that it works - as long as you take enough exposures.
Yep...and the article explains how it works too! Thanks also to the comments above that quite nicely expand the explanation in concise detail. I would suggest that doubters give it a try.


Steve
04-07-2020, 11:47 AM - 1 Like   #15
Otis Memorial Pentaxian
stevebrot's Avatar

Join Date: Mar 2007
Location: Vancouver (USA)
Photos: Gallery | Albums
Posts: 42,007
Originally posted by pschlute:
I understand there are occasions where super high resolution is required, but for 99% of the time for casual or serious ameteurs it is not. Look at the great photographers of the past working with the available technology, high resolution was a pipe dream.
When they wanted higher resolution, they simply used a larger negative and fine-grained film; a side-effect being that tonality is enhanced as well. I have seen 8x10 and larger contact prints by Edward Weston and Ansel Adams and even at normal viewing distance, they are spectacular.


Steve