08-14-2017, 06:30 PM   #16
Site Supporter
Site Supporter
BruceBanner's Avatar

Join Date: Dec 2015
Posts: 5,404
Original Poster
QuoteOriginally posted by Adam Quote
There is about a 1/4s delay between the shots regardless of shutter speed, so there's not much that can be done about it.

Also with MC on, hand-held pixel shifting should essentially result in a single conventional image (it will cancel out the merge with the remaining 3 frames). Not something that is intended, nor should it (theoretically or practically) carry any benefit given the current implementation. As others have pointed out, manual stacking should prove to be a superior alternative.
QuoteOriginally posted by dcshooter Quote
The bolded statement is absolutely meaningless if it's meant to prove the superiority of Pixel shift over other "superresolution" techniques.

Using multiple exposures and averaging pixel values over multiple aligned images gives you data just as "real" as using pixel shift. Or do you believe sampling error magically disappears just because the same photosite is used for the R, G, and B channels? If that were the case, Foveon sensors would be perfectly noise-free at any ISO value. And they are terrible at high ISO.

Increasing the number of sample events, regardless of whether they have been RGB-interpolated prior to averaging as with the Photoshop compositing method described above, will still give you resulting values that are closer to the "real" color value for each pixel, i.e. higher-quality data. It's simple statistics.

The way the PS algorithm works, if motion is detected at a given pixel, the exported image simply throws out 3 of the exposures for that pixel and goes with the first one. For a hand-held series of exposures (i.e. the subject at hand), this can literally mean the entire image has its PS information discarded, leaving you with a giant file that is entirely interpolated, no better in quality than a non-PS image.
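As a rough sketch of that per-pixel fallback (a toy model only, not Pentax's actual implementation; the threshold, the 0..1 data layout, and the plain mean used for the merge are all assumptions):

```python
import numpy as np

def merge_with_motion_correction(frames_rgb, threshold=0.05):
    """Toy model of the behaviour described above: merge 4 aligned frames,
    but wherever they disagree too much, fall back to the first frame."""
    frames = np.stack(frames_rgb)            # (4, H, W, 3), values in 0..1
    merged = frames.mean(axis=0)             # naive full merge of all four frames
    spread = frames.max(axis=0) - frames.min(axis=0)
    moved = spread.max(axis=-1) > threshold  # per-pixel "motion detected" flag
    merged[moved] = frames[0][moved]         # motion: keep only the first exposure
    return merged
```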
Yeah, I was going to ask for clarification on that motion-detection behaviour. In camera, once a pixel-shifted shot has been taken, I can see no way to confirm its file size other than checking on the computer once the file(s) have been transferred across; then I can see the image file is 150 MB+ and therefore likely a PS shot or HDR etc. With all this talk of handheld PS using one image and discarding the other 3 due to too much movement, I wondered whether that means the PS shot is the same size as a regular non-PS shot, or whether it still has the same file size and now contains nasty artifacts.
Checking some shots from yesterday, for example, taken PS handheld with MC on: the dotted-matrix stitching effect is widely visible on a lot of them, and the files are large. They most definitely do not look like a single frame with the other 3 discarded, i.e. they look way worse. So at those times when no tripod is available, I'm convinced manual stacking beats handheld PS for sure.

My question then is... in what scenario do you ever want to use PS with Motion Correction on? Thus far my success with PS has been on stationary subjects such as flowers and very still insects, taken of course with a tripod and longish exposure times... but all of these had MC off. If an insect's leg moved slightly during a 4-second exposure, would MC on handle that better, or...?


QuoteOriginally posted by stevebrot Quote
You have the camera in hand and should be able to do an apples-to-apples comparison in less time than it took any of us to respond. But since we are all here, it has been tried before by users on this site and the result without MC was ugly. As Adam noted, with MC on, one basically gets the first of the four exposures. The rest of the data are simply thrown away.

@dcshooter's use of standard modified "super resolution" technique (requires slight movement between frames) is a much better approach if a tripod is not available.


Steve
Come on Steve... this is me we're talking about. Thick as mince and here for the banter.
But in all seriousness, with MC on, yesterday's PS images do not look like one normal regular shot; they look worse. It's not as if 3 frames were thrown away, but rather like a bad job of processing all 4...


QuoteOriginally posted by Rondec Quote
Honestly, I run these images through RawTherapee and click the button that shows the areas that need masking. If that area encompasses fifty percent or more of the image, then I don't bother with the pixel shift at all; instead I use the AMaZE demosaicer and pick the sharpest image of the four to develop.
I keep hearing about this program RawTherapee; I'm only a LR and PS kind of guy... perhaps I should be looking more into RT? If I'm understanding correctly, with RT you can load the 150-200 MB pixel-shifted DNG file, and if the job of processing the 4 images is not ideal, you can remove one or more of the 'layers', leaving a 2- or 3-frame pixel-shifted image that might lack the resolution of the full 4 but is cleaner, with fewer of those dotted-matrix stitching artifacts? And if the whole PS merge is bad, then with RT you could kill 3 of the frames and be left with just one 'regular' DNG file to work with (the first frame etc.)?
Perhaps this is also possible in Photoshop? I am just not that much of a Photoshop wizard...

08-14-2017, 06:43 PM - 1 Like   #17
Administrator
Site Webmaster
Adam's Avatar

Join Date: Sep 2006
Location: Arizona
Photos: Gallery | Albums
Posts: 51,594
QuoteOriginally posted by BruceBanner Quote
In camera, once a pixel-shifted shot has been taken, I can see no way to confirm its file size other than checking on the computer once the file(s) have been transferred across; then I can see the image file is 150 MB+ and therefore likely a PS shot or HDR etc. With all this talk of handheld PS using one image and discarding the other 3 due to too much movement, I wondered whether that means the PS shot is the same size as a regular non-PS shot, or whether it still has the same file size and now contains nasty artifacts.
It will always contain the 4 constituent images in RAW. In JPEG mode, you get one photo which has automatically been processed and merged in-camera.

QuoteOriginally posted by BruceBanner Quote
Checking some shots from yesterday, for example, taken PS handheld with MC on: the dotted-matrix stitching effect is widely visible on a lot of them, and the files are large. They most definitely do not look like a single frame with the other 3 discarded, i.e. they look way worse. So at those times when no tripod is available, I'm convinced manual stacking beats handheld PS for sure. My question then is... in what scenario do you ever want to use PS with Motion Correction on? Thus far my success with PS has been on stationary subjects such as flowers and very still insects, taken of course with a tripod and longish exposure times... but all of these had MC off. If an insect's leg moved slightly during a 4-second exposure, would MC on handle that better, or...?
There's usually no reason not to use MC unless you're certain nothing in your scene is moving or swaying in the wind. In that case you save a little bit of processing time after the image is captured. In RAW you can choose whether or not to enable it retroactively.

Adam
PentaxForums.com Webmaster (Site Usage Guide | Site Help | My Photography)



08-14-2017, 06:52 PM   #18
Site Supporter
Site Supporter
rechmbrs's Avatar

Join Date: Jan 2007
Location: Conroe, TX USA
Posts: 423
Spatial resolution measurement

There are a number of items on the floor right now. I'd like to break off the spatial resolution part and, with some assistance, look at a method of estimating the resolution of Pixel Shift on the K-1 versus the resolution of a single frame from the same Pixel Shift dataset. Once we have a method agreed upon, we can try stacking the frames after removing the Bayer pattern, whatever super-resolution is....

If I can get a volunteer to supply the first part (Pixel Shift data), we can try to measure resolution from it.

REPLIES:
Good idea?
Volunteers?
Just go away!

RONC

---------- Post added 08-14-17 at 21:23 ----------

Shooting with MC on, you lose nothing: if the JPEG is bad you can generate one from the DNG without MC, either in camera or in the PDCU program.
Not only are Pixel Shift files about 4 times the size of a single-shot DNG, there is also a parameter in the maker-notes header that tells you it is PS.
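A quick way to check for that from the computer is to dump the metadata with exiftool and search for a pixel-shift entry; a minimal sketch (exiftool must be installed, and the exact wording of the maker-note tag matched here is a guess):

```python
import subprocess
import sys

def looks_like_pixel_shift(path):
    """Dump all tags with exiftool and look for a pixel-shift related entry."""
    dump = subprocess.run(["exiftool", path], capture_output=True, text=True).stdout
    return any("pixel shift" in line.lower() or "pixelshift" in line.lower()
               for line in dump.splitlines())

if __name__ == "__main__":
    for f in sys.argv[1:]:
        print(f, "->", "pixel shift" if looks_like_pixel_shift(f) else "single frame")
```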

RONC
09-23-2017, 04:58 PM - 1 Like   #19
Veteran Member




Join Date: Mar 2017
Location: Otago, New Zealand
Posts: 422
I've only skim read to here, so I apologize if I've missed something, but something that leaped out at me is the motion correction bit.

My understanding from usage (along with doing product photography as well as rebuilding sensors for IR etc.) runs like this:

You only really want to ever use pixel shift on a tripod, not hand held.

Motion correction is to compensate for movement in the scene, not movement of the camera - so it's useful in a landscape where you have the camera on a tripod, but there might be occasional light puffs of wind and you might have some leaves moving in one shot and not in the others (one from four), so the camera will subtract that part of that frame where the movement occurred (leaving three from four frames to average the colour of the pixel).

If you use this feature handheld you are likely to have movement between all frames, so you will end up with only one of the four frames used, or you will have glitches in the image where the camera has misjudged what colour a section is; either way there is no advantage over a single frame hand held.

If you are in a studio where nothing will move between frames you want to turn the movement correction off; this is when everything is totally locked down (this is the ideal scenario).

Pixel shift will not increase resolution; it will increase colour depth per pixel, and thus the level of information in the image. A regular Bayer sensor will create an image with a colour structure like:

R G R G R G R G R
G B G B G B G B G

So for each cluster of four pixels on the sensor you will have RGBG represented, which is then converted to RGB for the final image

A Foveon sensor's image or a pixel-shifted image will read like:

RGB RGB RGB RGB RGB RGB
RGB RGB RGB RGB RGB RGB

Thus it will have far more colour information per pixel site in the resultant image.

While this does result in a perception of increased detail, this is due to more accurate colour information rather than a greater number of pixels (which is how you get more detail).

You still get the same level of stepping at a pixel level, just the averaging (and thus perception of detail) is better at an image level due to the greater amount of colour information per pixel in the image.
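A tiny sketch of that per-channel sampling difference (an RGGB layout and a toy 4x6 grid are assumed):

```python
import numpy as np

h, w = 4, 6
bayer = np.empty((h, w), dtype="<U1")
bayer[0::2, 0::2] = "R"   # one common RGGB arrangement (assumed)
bayer[0::2, 1::2] = "G"
bayer[1::2, 0::2] = "G"
bayer[1::2, 1::2] = "B"

for c in "RGB":
    print(c, "measured at", f"{(bayer == c).mean():.0%}",
          "of photosites; pixel shift / Foveon measure it at 100%")
```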

If you want to do this handheld you are better off shooting four frames, using Photoshop to align and merge them manually, and then erasing sections where local movement occurred.

Otherwise you might also take one frame, duplicate the layer, change the size of one of the layers by one pixel and then use layer effects to increase the colour depth per pixel for printing (I used to use a technique like that years ago for printing large prints with small files - I haven't felt the need to do it since the 10MP days so my memory of the methodology is a bit hazy).

Hopefully someone will make sense of what I just said and be able to translate it from me to not-me.

09-23-2017, 06:17 PM - 1 Like   #20
Pentaxian
photoptimist's Avatar

Join Date: Jul 2016
Photos: Albums
Posts: 5,121
Pixel shift most certainly does increase resolution significantly.

It may be true that the Bayer filter means the image pixels have a color pattern such as:

R G R G
G B G B
R G R G
G B G B

That filter design means that resolution in red has 1/4 the pixels of the full array (1/2 the linear resolution of details), the same with blue, and green has 50% of the pixels (about 70% the resolution of the full array). In normal mode, the K-1 gets a 9 MPix red image, a 9 Mpix blue image, and an 18 MPix green image which it interpolates to create a 36 MPix color image.


But that's only in one frame. In pixel shift, the second frame shifts the sensor array one pixel to the left to get color array pattern covering the scene of:

G R G R
B G B G
G R G R
B G B G

Then it shifts one pixel up to get:

B G B G
G R G R
B G B G
G R G R

Finally, it shifts one pixel back to the right to get:

G B G B
R G R G
G B G B
R G R G

Those four images are stored in the massive RAW file that is 4X the size of a regular one.

If you look carefully, every location in the scene was measured once in red, once in blue, and twice in green.

The image developer stacks the four images to create a true 36 MPix x 3 color image, with the green channel having noise-reduced double sampling. The result is BOTH much higher resolution AND better color depth.
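A small sketch that builds those four shifted Bayer patterns and confirms the once-red, twice-green, once-blue coverage at every location (the shift order is read off the patterns above; the RGGB origin is an assumption):

```python
import numpy as np

def cfa(h, w, dy=0, dx=0):
    """Bayer colour seen at each scene location for a sensor shifted by (dy, dx)."""
    out = np.empty((h, w), dtype="<U1")
    yy, xx = np.indices((h, w))
    yy, xx = (yy + dy) % 2, (xx + dx) % 2
    out[(yy == 0) & (xx == 0)] = "R"
    out[(yy == 1) & (xx == 1)] = "B"
    out[yy != xx] = "G"
    return out

shifts = [(0, 0), (0, 1), (1, 1), (1, 0)]        # the four one-pixel steps
frames = [cfa(4, 4, dy, dx) for dy, dx in shifts]
for colour, expected in (("R", 1), ("G", 2), ("B", 1)):
    counts = sum((f == colour) for f in frames)  # samples of this colour per location
    print(colour, "sampled", expected, "time(s) everywhere:",
          bool((counts == expected).all()))
```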


Yes, you could just grab a hand-held stack of images and use super-resolution techniques, but you need a lot more than 4 images to get resolution results as good as pixel shift because the super-resolution stacking technique relies on a lot of luck that every bit of the scene was visited by sensor pixels of all three colors. With only 4 hand-held frames stacked, about 32% of pixel locations will be missing the red pixel data, about 32% will be missing blue pixel data, but about 6% will be missing green pixel data. Even with 8 frames stacked, about 10% of pixel locations will be missing the red pixel data, about 10% will be missing blue pixel data, but almost all (99.6%) will have green pixel data. It will be good, but not as good as pixel shift, which ensures every spot gets measured.
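Those percentages follow from the same assumption used above (each hand-held frame lands a given scene point on one of the four Bayer cells with equal probability):

```python
# Chance that a given output pixel is never covered by a red, blue or green
# photosite after N randomly placed hand-held frames.
for n in (4, 8):
    miss_red = (3 / 4) ** n     # 3 of the 4 Bayer cells are not red
    miss_blue = (3 / 4) ** n
    miss_green = (1 / 2) ** n   # only 2 of the 4 Bayer cells are green
    print(f"{n} frames: missing R {miss_red:.1%}, B {miss_blue:.1%}, G {miss_green:.1%}")
# 4 frames: 31.6% / 31.6% / 6.2%    8 frames: 10.0% / 10.0% / 0.4%
```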
09-23-2017, 07:08 PM   #21
Veteran Member




Join Date: Mar 2017
Location: Otago, New Zealand
Posts: 422
That was actually what I said.

The way that I am using the term 'resolution' is in referencing line pairs per mm in the image.

By that logic;
resolution is how many pixels across the image is by how many high; you need more pixels to have more resolution. You need at least three horizontal pixels to represent a vertical line pair, hence the whole megapixel race thing.

Colour depth is how much information there is at each pixel site, hence the idea of Foveon sensors or Pixel Shift, which does make it easier to perceive subtle edges.

There is four times the information in the raw file because each pixel site is sampled four times - the resolution isn't quadrupled, the colour depth per pixel is.

(bar the edges I guess, because trigonometry & physics & stuff)
09-24-2017, 07:58 AM   #22
Site Supporter
Site Supporter
rechmbrs's Avatar

Join Date: Jan 2007
Location: Conroe, TX USA
Posts: 423
QuoteOriginally posted by photoptimist Quote
Pixel shift most certainly does increase resolution significantly.

It may be true that the Bayer filter means the image pixels have a color pattern such as:

R G R G
G B G B
R G R G
G B G B

That filter design means that resolution in red has 1/4 the pixels of the full array (1/2 the linear resolution of details), the same with blue, and green has 50% of the pixels (about 70% the resolution of the full array). In normal mode, the K-1 gets a 9 MPix red image, a 9 Mpix blue image, and an 18 MPix green image which it interpolates to create a 36 MPix color image.


But that's only in one frame. In pixel shift, the second frame shifts the sensor array one pixel to the left to get color array pattern covering the scene of:

G R G R
B G B G
G R G R
B G B G

Then it shifts one pixel up to get:

B G B G
G R G R
B G B G
G R G R

Finally, it shifts one pixel back to the right to get:

G B G B
R G R G
G B G B
R G R G

Those four images are stored in the massive RAW file that is 4X the size of a regular one.

If you look carefully, every location in the scene was measured once in red, once in blue, and twice in green.

The image developer stacks the four images to create a true 36 MPix x 3 color image, with the green channel having noise-reduced double sampling. The result is BOTH much higher resolution AND better color depth.


Yes, you could just grab a hand-held stack of images and use super-resolution techniques, but you need a lot more than 4 images to get resolution results as good as pixel shift because the super-resolution stacking technique relies on a lot of luck that every bit of the scene was visited by sensor pixels of all three colors. With only 4 hand-held frames stacked, about 32% of pixel locations will be missing the red pixel data, about 32% will be missing blue pixel data, but about 6% will be missing green pixel data. Even with 8 frames stacked, about 10% of pixel locations will be missing the red pixel data, about 10% will be missing blue pixel data, but almost all (99.6%) will have green pixel data. It will be good, but not as good as pixel shift, which ensures every spot gets measured.
Photoptimist,

Thanks for the response. I have a couple of comments and questions.

QuoteOriginally posted by photoptimist Quote
The image developer stacks the four images to create a true 36 MPix x 3 color image with the green image have a noise-reduced double-sampling. The result is BOTH much higher resolution AND better color depth.
Being rather pedantic, I refrain from using the term 'stack' with Pixel Shift, as to most people 'stack' implies a summation or addition. I use 'composite' instead. The green channel typically is not summed, as only one value is used in PDCU, ACR, and RT.

In the last paragraph you have "% of pixel locations will be missing the pixel data." I have a general mistrust of the % sign (zero divided by zero???) and wonder how you derived the values for the channels.

I'd like to create a resolution display, probably using a 2-D Fourier transform, but have not found a way that is simple enough to be interpreted given the large knowledge differences across photographers. Resolution entails more than a cutoff spatial frequency; there are also the notches caused by sampling. If you have an idea, I'd be willing to try it.

Regards,
RONC
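For what it's worth, a minimal sketch of the kind of resolution display described above: radially average the 2-D FFT magnitude of one colour channel so that the pixel-shift and single-frame versions of the same crop can be compared as simple curves (loading of the crops is left out, and windowing/calibration details are ignored):

```python
import numpy as np

def radial_spectrum(channel):
    """Radially averaged magnitude spectrum of a 2-D single-channel crop."""
    f = np.fft.fftshift(np.fft.fft2(channel - channel.mean()))
    mag = np.abs(f)
    h, w = channel.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)   # radius = spatial-frequency bin
    total = np.bincount(r.ravel(), weights=mag.ravel())
    count = np.bincount(r.ravel())
    return total / np.maximum(count, 1)

# Usage idea: develop the same crop from the pixel-shift DNG and from a single
# constituent frame, then compare radial_spectrum() of the green channel of each;
# more energy at high radii suggests more resolved fine detail.
```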

09-24-2017, 08:02 AM   #23
Pentaxian
photoptimist's Avatar

Join Date: Jul 2016
Photos: Albums
Posts: 5,121
QuoteOriginally posted by sqrrl Quote
That was actually what I said.

The way that I am using the term 'resolution' is in referencing line pairs per mm in the image.

By that logic;
resolution is how many pixels across the image is by how high, you need more pixels to have more resolution - You need at least three horizontal pixels to represent a vertical line pair, hence the whole megapixel race thing.

Colour depth is how much information there is at each pixel site, hence the idea of Foveon sensors or Pixel Shift, which does make it easier to perceive subtle edges.

There is four times the information in the raw file because each pixel site is sampled four times - the resolution isn't quadrupled, the colour depth per pixel is.

(bar the edges I guess, because trigonometry & physics & stuff)
If you take a picture of a red object with a K-1 and measure the resolution in line pairs per mm, you'll find that the K-1 acts like a 9 MPix camera because it samples red in only 9 million places on the scene.

If you take a picture of a red object with a K-1 in pixel shift mode and measure the resolution in line pairs per mm, you'll find that the K-1 PS acts like a 36 MPix camera because it then samples red in a total of 36 million places on the scene.

Pixel shift most assuredly boosts resolution in line pairs per mm, especially in red and blue but also in green, too (see Pentax K-3 II Review - Pixel Shift Resolution mode for examples).

The color depth of a Bayer filter image is actually better than you think because the demosaicer averages together neighboring pixels. But the resolution of the shape of a colored object is much worse with Bayer than with pixel shift.

---------- Post added 09-24-17 at 09:30 AM ----------

QuoteOriginally posted by rechmbrs Quote
Photoptimist,

Thanks for the response. I have a couple of comments and questions.



Being rather pedantic, I refrain from using the stack term with Pixel Shift as stack implies to most it is a summation or addition. I use composite instead. The green channel typically is not summed as only one value is used in PDCU, ACR, and RT.

In the last paragraph you have "% of pixel locations will be missing the pixel data." I have a general mistrust of the % sign (zero divided by zero???) and wonder how you derived the values for the channels.

I'd like to create a resolution display, probably using a 2-D Fourier transform, but have not found a way that is simple enough to be interpreted given the large knowledge differences across photographers. Resolution entails more than a cutoff spatial frequency; there are also the notches caused by sampling. If you have an idea, I'd be willing to try it.

Regards,
RONC
Pedantic is good! It's just another term for precise and accurate! Yes, pixelshift is not like most other stacking algorithms for the reasons that you state. Pixelshift is more of a careful disassembly of the 4 carefully captured frames with reassembly into a final true full resolution full color image.

The percentage figures in the last paragraph come from simple statistical calculations of the chance that a given point in the scene (i.e., a given pixel in the output image) was ever visited by all three categories of color sensor. For example, if one shoots four hand-held frames with little motions between each frame, it's entirely possible that by chance a certain pixel in the scene was only sampled by green camera pixels. The probability of that occurrence is (1/2)^4 or 1/16. One can use basic probability math to estimate the chance that a place in the scene is never sampled in red, or blue, or green, or whatever. And if a place was not sampled by all colors, the resolution will be lower because the stacker was forced to interpolate.

A more careful analysis of this kind of superresolution stacking of images under camera motion will reveal very complex pixel-by-pixel patterns of higher and lower resolution in which some pixels were well sampled by all three colors and others were not. There's even a tiny chance that the superresolution stack is no better than a single frame because by chance the Bayer filter colors lined up on every shot.
09-24-2017, 12:52 PM   #24
Site Supporter
Site Supporter
rechmbrs's Avatar

Join Date: Jan 2007
Location: Conroe, TX USA
Posts: 423
QuoteOriginally posted by photoptimist Quote
If you take a picture of a red object with a K-1 and measure the resolution in line pairs per mm, you'll find that the K-1 acts like a 9 MPix camera because it samples red in only 9 million places on the scene.

If you take a picture of a red object with a K-1 in pixel shift mode and measure the resolution in line pairs per mm, you'll find that the K-1 PS acts like a 36 MPix camera because it then samples red in a total of 36 million places on the scene.

Pixel shift most assuredly boosts resolution in line pairs per mm, especially in red and blue but also in green, top. (see Pentax K-3 II Review - Pixel Shift Resolution mode for examples)

The color depth of a Bayer filter image is actually better than you think because the demosaicer averages together neighboring pixels. But the resolution of the shape of color object is much worse with Bayer than with pixelshift.

---------- Post added 09-24-17 at 09:30 AM ----------

Pedantic is good! It's just another term for precise and accurate! Yes, pixelshift is not like most other stacking algorithms for the reasons that you state. Pixelshift is more of a careful disassembly of the 4 carefully captured frames with reassembly into a final true full resolution full color image.

The percentage figures in the last paragraph come from simple statistical calculations of the chance that a given point in the scene (i.e., a given pixel in the output image) was ever visited by all three categories of color sensor. For example, if one shoots four hand-held frames with little motions between each frame, it's entirely possible that by chance a certain pixel in the scene was only sampled by green camera pixels. The probability of that occurrence is (1/2)^4 or 1/16. One can use basic probability math to estimate the chance that a place in the scene is never sampled in red, or blue, or green, or whatever. And if a place was not sampled by all colors, the resolution will be lower because the stacker was forced to interpolate.

A more careful analysis of this kind of superresolution stacking of images under camera motion will reveal very complex pixel-by-pixel patterns of higher and lower resolution in which some pixels were well sampled by all three colors and others were not. There's even a tiny chance that the superresolution stack is no better than a single frame because by chance the Bayer filter colors lined up on every shot.
I understand the statistics but there must be an associated movement to go with it. Are you assuming a one pixel maximum movement and how often is the movement relative to the exposure time?

The reason I ask is that large jitter probably violates our assumptions about the statistics of both the signal and all noise types. We assume that the statistics are common for nearby pixels, or other processes will fail in some way. This commonality is what really makes pixel shift so powerful. Just averaging a bunch of pixels without concern for their relationships to each other is not following what we know about good image processing methods. Super-resolution sounds good but, like interpolation, it buys one little.

RONC
09-24-2017, 01:44 PM   #25
Pentaxian
photoptimist's Avatar

Join Date: Jul 2016
Photos: Albums
Posts: 5,121
QuoteOriginally posted by rechmbrs Quote
I understand the statistics but there must be an associated movement to go with it. Are you assuming a one pixel maximum movement and how often is the movement relative to the exposure time?

The reason I ask is that large jitter probably violates our assumptions about the statistics of both the signal and all noise types. We assume that the statistics are common for nearby pixels, or other processes will fail in some way. This commonality is what really makes pixel shift so powerful. Just averaging a bunch of pixels without concern for their relationships to each other is not following what we know about good image processing methods. Super-resolution sounds good but, like interpolation, it buys one little.

RONC
I'm assuming a frame-to-frame statistical distribution of motion with a mean of at least a couple of pixels, a standard deviation of at least a couple of pixels, and a random angle. It doesn't take much motion for the chance of an R, G, or B pixel landing on a given spot to become almost totally random. Of course, there's always some chance of really weird results with interactions between hand-held motions (which probably have pretty strong autocorrelation in velocity and direction) and the geometry of the Bayer filter (which has strong structure for some angles and frame-to-frame spacings).

The most sophisticated stacking algorithms used in astrophotography and remote sensing really do model each pixel as a rectangular sampling area that intersects with other rectangular sampling areas and the algorithm can compute the most likely sub-pixel resolution structure of the scene that would give rise to all the data. But it takes a lot of frames, a lot of computer power, and a sophisticated calibration of the lens and sensor. And it can never guarantee uniformly high resolution across the frame because of interactions in all the periodic sampling functions.

Pixel shift needs only four frames and always delivers a big boost to resolution as long as the lens is good enough, the scene is stationary, and the lighting levels are stable.
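A quick Monte Carlo of that hand-held model (offsets of a couple of pixels in a random direction plus a tiny rotation; every number here is an assumption) shows the patchy per-pixel colour coverage described above:

```python
import numpy as np

rng = np.random.default_rng(1)
h, w, n_frames = 200, 200, 4
y, x = np.indices((h, w))

covered = np.zeros((3, h, w), dtype=bool)          # channel 0 = R, 1 = G, 2 = B
for _ in range(n_frames):
    dy, dx = rng.normal(0, 2, size=2)              # shift of ~2 px, random direction
    theta = rng.normal(0, 0.002)                   # tiny hand-held rotation (radians)
    yr = (y - h / 2) * np.cos(theta) - (x - w / 2) * np.sin(theta) + h / 2 + dy
    xr = (y - h / 2) * np.sin(theta) + (x - w / 2) * np.cos(theta) + w / 2 + dx
    yy, xx = np.round(yr).astype(int) % 2, np.round(xr).astype(int) % 2
    colour = np.where((yy == 0) & (xx == 0), 0,    # which Bayer cell this spot hit
                      np.where((yy == 1) & (xx == 1), 2, 1))
    for c in range(3):
        covered[c] |= (colour == c)

print("pixels sampled in all three colours:", f"{covered.all(axis=0).mean():.1%}")
print("pixels never sampled in red:        ", f"{(~covered[0]).mean():.1%}")
```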
09-24-2017, 03:37 PM   #26
Veteran Member




Join Date: Mar 2017
Location: Otago, New Zealand
Posts: 422
Yes, the ps image will look at every pixel site for colour - but that is still not the same thing as resolution.

The solid red thing is a red herring for two reasons. The first is that the demosaicer algorithm will return a 36 MP red image if all it can see is red; it won't return 9 MP (an image with 99.9% red and randomly scattered green & blue flecks would likely fail, but honestly who does that?).

Secondly, the demosaicer algorithm looks at surrounding pixels to determine colour, but you are forgetting that it also looks at each pixel individually to work out the gamma; a false colour simply cannot be lighter than the representative pixel.

The sub sampling effectively happens twice - once for colour and once for light & dark - The edges in a normal image are typically represented by tonality more than colour.

This is kinda fun - I could imagine us having an interesting conversation over coffee.

Have you considered the implications of polarisation on this, given that the waveform amplitudes of different coloured light are different and that those light streams may impact different pixel sites differently at the edges given the time difference between frames? I might have to have a play to see if it's a thing.


I think we are agreeing though that hand held pixel shift is a bad idea and likely to degrade the image.
09-24-2017, 04:27 PM - 1 Like   #27
Pentaxian
photoptimist's Avatar

Join Date: Jul 2016
Photos: Albums
Posts: 5,121
Resolution is the ability to resolve two objects as separate things rather than one blob. If I create a scene with two red dots (or two thin red lines) on a black background and measure how close they can be before a camera can no longer resolve them, then I'd see that both the Foveon and pixel-shift cameras have twice the resolution of the normal Bayer filter camera because they make 4X the number of red channel measurements. With the Bayer camera, the red dots or lines must be at least 4 pixels apart so that in a given RGRGRGRGR row of pixels, there is a dark R pixel between two bright R pixels. With pixel shift and Foveon, the camera can resolve lines or dots with only a 2 pixel spacing.
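A one-row sketch of that line-pair argument (the 16-pixel row, line positions, and 0/1 scene values are all made up):

```python
import numpy as np

def red_row(width, first, gap):
    """One scene row: two thin red lines on black, `gap` pixels apart."""
    row = np.zeros(width)
    row[first] = row[first + gap] = 1.0
    return row

for gap in (2, 4):
    scene = red_row(16, 4, gap)
    print(f"gap = {gap} px")
    print("  red measured at every 2nd pixel (Bayer row):", scene[0::2])
    print("  red measured at every pixel (pixel shift)  :", scene)
```

With the 2-pixel gap the Bayer row returns two adjacent bright red samples with no dark sample between them, so the two lines merge into one blob; sampling red at every pixel keeps the dark pixel between them and resolves the pair.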

The model by which the demosaicer uses surrounding pixels to estimate color for a patch and separately estimate light-and-dark variations makes dangerous assumptions about the nature of the object. The demosaicer assumes color does not vary much over short distances which is to say a Bayer camera cannot resolve high-resolution variation in color. It's why pictures of fine black-and-white objects often have weird color artifacts. It also causes problems with star color in astrophotography depending upon how the bright point of the star falls more on one color or another.

But we can agree about handheld pixelshift and talking over coffee!
09-25-2017, 02:35 AM   #28
Loyal Site Supporter
Loyal Site Supporter




Join Date: Mar 2009
Location: Gladys, Virginia
Photos: Gallery
Posts: 27,650
QuoteOriginally posted by sqrrl Quote
Yes, the ps image will look at every pixel site for colour - but that is still not the same thing as resolution.

The solid red thing is a red herring for two reasons. The first is that the demosaicer algorithm will return a 36 MP red image if all it can see is red; it won't return 9 MP (an image with 99.9% red and randomly scattered green & blue flecks would likely fail, but honestly who does that?).

Secondly, the demosaicer algorithm looks at surrounding pixels to determine colour, but you are forgetting that it also looks at each pixel individually to work out the gamma; a false colour simply cannot be lighter than the representative pixel.

The sub sampling effectively happens twice - once for colour and once for light & dark - The edges in a normal image are typically represented by tonality more than colour.

This is kinda fun - I could imagine us having an interesting conversation over coffee.

Have you considered the implications of polarisation on this - given that the waveform amplitude of different coloured light are different and the probability of those light streams impacting different pixel sites differently at the edges given the time difference between frames. I might have to have a play to see if it's a thing


I think we are agreeing though that hand held pixel shift is a bad idea and likely to degrade the image.
My experience has been that pixel shift does add detail to images as well as improving color depth. If you pixel peep at images that have been pixel shifted and look at them at one hundred percent, they have more detail than standard single-shot images do. It probably isn't noticeable in most situations, and the resulting image isn't any more pixels across than a standard image, but it is definitely a higher-quality image in most respects.
09-25-2017, 04:22 AM   #29
Site Supporter
Site Supporter
rechmbrs's Avatar

Join Date: Jan 2007
Location: Conroe, TX USA
Posts: 423
QuoteOriginally posted by Rondec Quote
My experience has been that pixel shift does add detail to images as well as improving color depth. If you pixel peep at images that have been pixel shifted and look at them at one hundred percent, they have more detail in the images than standard single shot images do. It probably isn't noticeable in most situations and the resulting image isn't any more pixels across than a standard image, but it is definitely a more high quality image in most respects.
In light of what I stated in my last post, and since so many comment that the Pixel Shift process is for pixel peepers and buys one nothing once the image is sub-sampled for display, I would like to point out that the coherency gained from the Pixel Shift process also enhances later processes such as sharpening. We should always keep in mind what benefits are gained much later in processing when we choose both which processes to apply and what parameters to use.

RONC
09-25-2017, 05:40 AM   #30
Loyal Site Supporter
Loyal Site Supporter




Join Date: Mar 2009
Location: Gladys, Virginia
Photos: Gallery
Posts: 27,650
QuoteOriginally posted by rechmbrs Quote
In light of what I stated in my last post, so many comment that the Pixel Shift process is for pixel peepers and buys one nothing once it is sub-sampled for display, I would like to point out that because of the coherency gained from the Pixel Shift process it also enhances later processes such as sharpening. We should always try to keep in mind what benefits are gained much later in processing when we choose both which processes to apply and what parameters to use.

RONC
I have always been of the opinion that you want the very best base image possible as your starting point -- good dynamic range, color depth, hopefully no blurred areas, blown out highlights, or aberrations. If you have that, then your post processing is going to go a lot better than if you have significant problems at the beginning.

As you say, you are then able to sharpen and bump shadows and do the other things you would do to any image, and do so more aggressively if needed without as much penalty.