Trying to get a grip on image processing, I started by taking 50 frames with my Pentax K-5. The settings were as follows:
- Manual mode
- ISO 100
- Exposure 1/8000 s
- Caps on both the viewfinder and the lens
- K-5 kept completely in the dark at 22 °C
- Frame format: DNG
- Interval between frames: 10 s
- All in-camera noise reduction, filtering, and correction switched off
As far as I understand, these are the conditions for shooting bias frames.
The frames are imported into Python using rawpy (the LibRaw/dcraw bindings, which can read and interpret DNG files). They are loaded in 16-bit RGB format, which gives me an array with the three RGB channels.
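For reference, the loading step looks roughly like this. The rawpy call is shown in a comment (the postprocess options are the ones I believe are relevant, not necessarily optimal); a small synthetic array stands in for a real frame in this sketch:

```python
import numpy as np

# Reading one DNG with rawpy would look roughly like this:
#   import rawpy
#   with rawpy.imread("bias_0001.dng") as raw:
#       rgb16 = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)
# Here a small synthetic uint16 RGB array stands in for a real frame
# (actual K-5 frames are about 3264 x 4928 pixels).
rgb16 = np.random.default_rng(0).integers(0, 2**16, size=(100, 100, 3), dtype=np.uint16)
print(rgb16.shape, rgb16.dtype)  # (100, 100, 3) uint16
```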
I picked one pixel at random near the middle of the frame and compared its channel intensities across all frames in the sequence.
Bias frames capture the read-out error around the zero level of the K-5's 14-bit ADC, and I expected these error values to be rather small and stable.
To my big surprise I got the following result: the same pixel (pixel [1500, 1500]) shows channel intensities that fluctuate between frames by up to 25% of the maximum representable value.
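The per-pixel comparison can be sketched like this (synthetic random frames stand in for the real stack here; in the real data the pixel was [1500, 1500]):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the stack of 50 bias frames, shape (n_frames, H, W, 3);
# real frames would come from rawpy as uint16 RGB arrays.
frames = rng.integers(0, 2**16, size=(50, 16, 16, 3), dtype=np.uint16)

y, x = 5, 5                                       # real data: pixel [1500, 1500]
series = frames[:, y, x, :].astype(np.float64)    # shape (50, 3): one row per frame
spread = series.max(axis=0) - series.min(axis=0)  # per-channel peak-to-peak range
print(spread / 65535 * 100)                       # fluctuation as % of 16-bit full scale
```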
For me this is huge, so probably something went wrong in reading the frames. If I divide the 16-bit RGB array by 256 in Python, I get an 8-bit representation of the frame that can be visualized. With light frames taken with the same camera and stored as DNG, this procedure shows the 8-bit picture in its true colors.
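The 16-to-8-bit conversion I use is a plain integer division:

```python
import numpy as np

# One sample pixel with three 16-bit channel values.
rgb16 = np.array([[[65535, 32768, 256]]], dtype=np.uint16)

# Integer division by 256 maps 0..65535 onto 0..255 (simple 16->8 bit scaling).
rgb8 = (rgb16 // 256).astype(np.uint8)
print(rgb8)  # [[[255 128   1]]]
```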
My question to the forum is: is this frame-to-frame fluctuation of the pixel RGB values normal and to be expected?
Thanks in advance,
Stefan