11-20-2018, 05:14 AM   #76
Unregistered User
Guest




An admission of error. I'd had the idea that a JPEG recorded at highest quality & resolution was uncompressed data. There are such things as uncompressed JPEGs, but my cameras don't record them. Looking at the FAQ page for the K-50 (K-50 | FAQ | Support | RICOH IMAGING) under "memory cards", I see a table that says three times as many images can be stored on a 32GB card in JPEG format as in raw format, even though the number of pixels recorded and the resolution are the same for both. So, no question, that's a compressed format, and thus a "lossy" image, etc.

I still maintain that editing such a JPEG to correct all the same problems one might correct with raw data is perfectly feasible; but the people who say you can't get the same range of possibilities with JPEGs in terms of white balance, exposure value, etc., are correct. I've only just gotten into this question recently, after having read something someone else wrote (in a discussion of image editing software), and being a compulsive optimizer, I had to figure out what the "best" way to set up my cameras is. I'm going to keep doing raw+, because I don't need the functions you can't have with raw data storage (full-auto multi-shot, for example), the memory cards are of sufficient capacity that one could rule the world (of photographs at least), and, well, I've been pretty lucky so far - I can't remember when I last had to do any serious editing. A lot of cropping and touching up unfortunate artifacts of human culture that show up sometimes (e.g., power lines), but that's about it. But just in case... (I've seen the word "recovery" used more than once in this context). It's insurance, or as we say in my neck of the woods, IN-surance.

And let me say that I admire BruceBanner's consistent civility and drive to learn. Just don't make him angry ... you won't like him when he's angry.

Oh, and as to the raised eyebrows regarding an earlier post, I referred to this comment with approval:

QuoteOriginally posted by StevenVH Quote
When shooting in RAW none of the in-camera effects (WB, Custom Image, etc.) are applied to the image. In-camera effects only apply to JPEGs processed by the camera. See p.148 of the manual for more info.
which resulted in a further comment:


QuoteOriginally posted by AndrewG NY Quote
This is half-true. The settings you make in-camera will get applied to the embedded JPEG that is used when reviewing on LCD (and in some browser software on the PC), and the white balance will be recorded as well and will normally be used as the default during later raw image post-processing. So it is true that these are not part of the raw sensor data recorded, but there are still benefits to making some efforts for white-balance in-camera.
Makes perfect sense to me, both as to the raw data conversion that occurs when you display the image (whether on the camera or in software) and as to the effects on JPEGs. The way I understand AndrewG's comment, the white balance stored as EXIF data in the raw file is taken as the user's preferred starting point when the software gets it, but it's merely another parameter that can be reset to suit.

I'd point out further that the white balance that you can set manually is one of the settings that gets saved with the user defined profiles. I never take defaults and turn off most of the funky in-camera processing options.


Last edited by Unregistered User; 11-20-2018 at 06:34 AM.
11-21-2018, 11:12 AM   #77
Veteran Member




Join Date: Feb 2016
Posts: 706
You may also wish to consider this when talking/thinking about JPEG vs raw.

A camera JPEG is limited to 8 bits (I am not aware of any camera that uses JPEG 2000). This really means a hard limit in DR of up to 8 EV, vs up to 14 EV with a raw file generated from most modern DSLRs, and I believe up to 16 EV for the newer Hasselblad. Generally you cannot exceed the camera's bit depth in DR terms, i.e. 14 bit means a limit of 14 stops. Bear in mind that these are theoretical values; usable data, assuming some post-processing, will limit real-world DR.

Not necessarily a big deal: old analogue colour film, and particularly transparency film, probably did not approach 8 usable stops (no data to hand).

Truth is that 8-bit JPEG has been, and is, very usable as long as the scene DR does not exceed 8 stops, the needs of editing and colour gamut are within its capabilities, and you are not going to be pushing the data too much.
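
To see why bit depth puts a theoretical ceiling on DR for a linear encoding, here is a minimal sketch (Python, my own illustration rather than anything from a camera maker; real-world figures will be lower because of noise and tone curves):

```python
import math

# For a purely LINEAR encoding, each stop down halves the signal, so the
# number of available code values caps the number of representable stops
# at roughly one stop per bit.  Theoretical ceiling only.
for bits in (8, 12, 14, 16):
    levels = 2 ** bits
    print(f"{bits:2d}-bit linear: {levels:6d} levels, ceiling ~{int(math.log2(levels))} stops")
```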
11-22-2018, 02:02 AM   #78
sbh
Site Supporter
Site Supporter
sbh's Avatar

Join Date: Oct 2012
Location: Black Forest, Germany
Photos: Gallery
Posts: 849
QuoteOriginally posted by TonyW Quote
...

Truth is that 8-bit JPEG has been, and is, very usable as long as the scene DR does not exceed 8 stops, the needs of editing and colour gamut are within its capabilities, and you are not going to be pushing the data too much.
AFAIK 8 bit is not equal to 8 stops. Bit depth is the number of shades the file can store between the two ends.
Leaving RGB aside for a moment and taking b/w as an example, a 1-bit image would have only two shades: black or white (1/0). A 2-bit file would have 4 shades (00, 01, 10, 11). A 3-bit file has more, and so on.

So it's the number of shades (aka details) between the brightest and darkest points, rather than a matter of clipping highlights or shadows.

However, with more bits the file can contain more detail in dark or bright areas. Lower bit depth might sometimes be perceived as clipping because there is less detail, for example in bright skies or dark shadows.

An 8-bit RGB file has 256 shades of each of R, G and B. Mix them together and we get a maximum of about 16.8 million colours.

A 16-bit RGB file has 65,536 shades of each of R, G and B.
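
To put the arithmetic in one place, a tiny sketch (Python, purely illustrative):

```python
# Shades per channel and total RGB colours for a given bit depth.
for bits in (1, 2, 3, 8, 16):
    shades = 2 ** bits
    print(f"{bits:2d} bit: {shades:5d} shades per channel, {shades ** 3:,} RGB colours")
# 8 bit  -> 256 shades per channel, 16,777,216 (~16.8 million) colours
# 16 bit -> 65,536 shades per channel
```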
11-22-2018, 03:16 AM   #79
Veteran Member




Join Date: Feb 2016
Posts: 706
QuoteOriginally posted by sbh Quote
AFAIK 8 bit is not equal to 8 stops. Bit depth is the number of shades the file can store between the two ends.
Leaving RGB aside for a moment and taking b/w as an example, a 1-bit image would have only two shades: black or white (1/0). A 2-bit file would have 4 shades (00, 01, 10, 11). A 3-bit file has more, and so on.

So it's the number of shades (aka details) between the brightest and darkest points, rather than a matter of clipping highlights or shadows.

However, with more bits the file can contain more detail in dark or bright areas. Lower bit depth might sometimes be perceived as clipping because there is less detail, for example in bright skies or dark shadows.

An 8-bit RGB file has 256 shades of each of R, G and B. Mix them together and we get a maximum of about 16.8 million colours.

A 16-bit RGB file has 65,536 shades of each of R, G and B.
I may not have expressed it well enough, but although it is perhaps not obvious, you will find that there is an indirect link between bit depth and dynamic range. The link is the fact that the dynamic range of a camera cannot really exceed the total number of bits in a digital capture.

If you accept that the first stop uses 1/2 of the available levels, and each further stop 1/2 of the preceding levels, it may be easier to picture where you run out of data. Maybe the following illustrates what I was trying to say; this is how it was explained to me many years ago.


http://1.bp.blogspot.com/-br7TwQVK3qM/UHgnBPei_VI/AAAAAAAAFDs/VDzrnCNFDZs/s1..._histogram.jpg
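
In code form, the same idea (a hypothetical 14-bit example of my own, ignoring noise and black level):

```python
# Linear capture: the brightest stop holds half of all code values, the
# next stop half of what remains, and so on until you run out.
bits = 14
upper = 2 ** bits                      # 16384 code values in total
for stop in range(1, bits + 1):
    lower = upper // 2
    print(f"stop {stop:2d}: {upper - lower:5d} levels ({lower}..{upper - 1})")
    upper = lower
# Stop 1 gets 8192 levels, stop 2 gets 4096, ... and the deepest stops
# get only a handful, which is why shadows fall apart first when pushed.
```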

Attached Images
 

Last edited by TonyW; 11-22-2018 at 05:56 AM.
11-23-2018, 06:34 AM   #80
Pentaxian




Join Date: May 2013
Location: Hertfordshire, England
Posts: 845
QuoteOriginally posted by TonyW Quote
You may also wish to consider this when talking/thinking about JPEG vs raw.

A camera JPEG is limited to 8 bits (I am not aware of any camera that uses JPEG 2000). This really means a hard limit in DR of up to 8 EV, vs up to 14 EV with a raw file generated from most modern DSLRs, and I believe up to 16 EV for the newer Hasselblad. Generally you cannot exceed the camera's bit depth in DR terms, i.e. 14 bit means a limit of 14 stops...

...Truth is that 8-bit JPEG has been, and is, very usable as long as the scene DR does not exceed 8 stops, the needs of editing and colour gamut are within its capabilities, and you are not going to be pushing the data too much.
This seems misleading to me, or perhaps that's simply due to my own limited understanding of the maths, physics and computing involved.

I would expect that the full bit depth and dynamic range of the sensor would be recorded and used by the camera to make its 16-bit raw data file output. However, I would also expect that all of that data would be utilised by the camera's firmware, when it is processed in-camera into a viewable image file according to the user's chosen settings (white balance, custom Image, D-range, etc.), all the data of the processed image then being compressed into the recorded 8-bit JPEG file.

The implication of this would seem to be that the JPEG file has the potential to contain some data from everywhere in the full dynamic range of the sensor; but, as the table in post 79 shows, the number of levels in the dynamic range recorded in the 8-bit JPEG file is far lower than in the 16-bit raw data file.

There are still sufficient levels that it would seem likely to have little effect on the experience of someone viewing an image from a JPEG file recorded at high quality settings. However, the fewer available levels might become distinctly noticeable when we try to post-process the JPEG file, possibly leading, for example, to colour banding in areas of slightly varying tones (e.g. a blue sky), and to more noise artefacts in the shadow areas which contain the fewest levels.
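
A quick sketch of what I mean (Python, hypothetical numbers, not from any real file): the same narrow, sky-like gradient quantised to 16 bit and to 8 bit, then pushed with a strong brightening curve; counting the distinct output values shows where the banding comes from.

```python
import numpy as np

grad = np.linspace(0.2, 0.4, 2000)                  # narrow tonal range
as16 = np.round(grad * 65535).astype(np.uint16)     # 16-bit quantisation
as8  = np.round(grad * 255).astype(np.uint8)        # 8-bit quantisation

def push(x, maxval):
    """Crude 2.5x brightening 'edit', clipped to the format's maximum."""
    return np.round(np.clip(x.astype(float) * 2.5, 0, maxval))

print("distinct levels after push, 16 bit:", len(np.unique(push(as16, 65535))))
print("distinct levels after push, 8 bit: ", len(np.unique(push(as8, 255))))
# The 8-bit version collapses to roughly 50 distinct values stretched over
# a wide output range (visible banding); the 16-bit version keeps almost
# every sample distinct.
```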

Philip

Last edited by MrB1; 11-23-2018 at 06:46 AM. Reason: spelling
11-23-2018, 10:23 AM   #81
Veteran Member




Join Date: Feb 2016
Posts: 706
QuoteOriginally posted by MrB1 Quote
This seems misleading to me, or perhaps that's simply due to my own limited understanding of the maths, physics and computing involved.
It may be misleading because I have failed to put it across in an understandable format? I have attached another graphic to illustrate bit precision and how I understand it all fits together. I have not included the simple maths behind it all, as it is probably not needed (I only do simple maths). The chart is from my own notes from years back, and I trust I have not made any gross errors in the calculations.
Please be aware that the values quoted relate to ADC precision only and are a theoretical maximum. You cannot use them to calculate image file results for 8- or 16-bit files. It should also be borne in mind that noise, and your tolerance to it, will be a limiting factor.

QuoteQuote:
I would expect that the full bit depth and dynamic range of the sensor would be recorded and used by the camera to make its 16-bit raw data file output.
Yes, within the limitations of the capture, including ADC limits. Your camera may well be classed as 12, 14, or in the case of Hasselblad 16 bit, but the actual amount of DR achieved from the analogue capture, and the subsequent sampling and conversion by the ADC, may leave you feeling short-changed in real-world figures. Have a look at this chart from Bill Claff (a man whose work I trust) and you will see that, for various reasons, the full bit depth is not usable. I picked these particular cameras because I have owned and used three of them, and the other two as the lowest-performing I could find in his tests. I don't believe the actual testing methodology is of huge importance to most of us, but if we are comparing we should be careful to use the same testing methods and not mix and match, i.e. don't try to compare DXOmark with sites like Photonstophotos, as they use totally different methods, DXO always coming out higher (I think they use a back-illuminated Stouffer wedge?). As an aside, and sorry for mentioning Nikon here (purely based on user experience): when Nikon announced a 14-bit camera that DXOmark evaluated as having 14-plus stops of DR, we did not see (AFAIK) a huge changeover from the Canon users who only had 12 stops.

Photographic Dynamic Range versus ISO Setting
QuoteQuote:
However, I would also expect that all of that data would be utilised by the camera's firmware, when it is processed in-camera into a viewable image file according to the user's chosen settings (white balance, custom Image, D-range, etc.), all the data of the processed image then being compressed into the recorded 8-bit JPEG file.
I agree that all of the usable data is taken into account and, as you say, compressed (data is discarded; I believe colour data goes first, then finally luminance if necessary). You have little control over what data is discarded.

QuoteQuote:
The implication of this would seem to be that the JPEG file has the potential to contain some data from everywhere in the full dynamic range of the sensor; but, as the table in post 79 shows, the number of levels in the dynamic range recorded in the 8-bit JPEG file is far lower than in the 16-bit raw data file.
Yes, I believe that is correct, and the JPEG file has the potential to contain any data within the file. However, you do not have much control over how it chooses its data and what it is going to discard. Any AI will be contained within the camera manufacturer's algorithms for the JPEG conversion.

What I think is important to take away from the raw vs JPEG debate is that once you have literally baked your JPEG cake, you cannot really unbake it and recover discarded data. So if you only have a JPEG and you realise that you need more detail in the shadow areas, an edit may not be possible without introducing noise artifacts, posterisation, etc. With your raw file you should be able to push the data much more. This also applies to highlights. FWIW, a "correctly" exposed JPEG will result in an underexposed raw file, or perhaps it is more correct to say a less than optimally exposed raw - based on the assumption that you are going to try, and have the time, to optimise exposure. BTW, I am not suggesting I am perfect; I can and have made some doleful errors when calculating exposure, slide/transparency in particular being super picky.

QuoteQuote:
There are still sufficient levels that it would seem likely to have little effect on the experience of someone viewing an image from a JPEG file recorded at high quality settings. However, the fewer available levels might become distinctly noticeable when we try to post-process the JPEG file, possibly leading, for example, to colour banding in areas of slightly varying tones (e.g. a blue sky), and to more noise artefacts in the shadow areas which contain the fewest levels.

Philip
Exactly. If the scene dynamic range (or what you select to capture) fits comfortably within perhaps a 6-8 stop range, then your JPEG is good to go; and yes, it is possible to break your image (including raw) if you need to push your capture too far in post.

Sorry for the rather wordy reply, but I was concerned that my original could be seen as misleading and therefore needed some clarification by offering further information.
Attached Images
 

Last edited by TonyW; 11-23-2018 at 10:29 AM. Reason: added image
11-24-2018, 05:05 AM   #82
sbh
Site Supporter
Site Supporter
sbh's Avatar

Join Date: Oct 2012
Location: Black Forest, Germany
Photos: Gallery
Posts: 849
QuoteOriginally posted by TonyW Quote
I may not have expressed it well enough, but although it is perhaps not obvious, you will find that there is an indirect link between bit depth and dynamic range. The link is the fact that the dynamic range of a camera cannot really exceed the total number of bits in a digital capture.

If you accept that the first stop uses 1/2 of the available levels, and each further stop 1/2 of the preceding levels, it may be easier to picture where you run out of data. Maybe the following illustrates what I was trying to say; this is how it was explained to me many years ago.


http://1.bp.blogspot.com/-br7TwQVK3qM/UHgnBPei_VI/AAAAAAAAFDs/VDzrnCNFDZs/s1..._histogram.jpg
Interesting discussion. I see what you mean. I'm not sure why the first stop takes half of the available levels; I would think they are used where they are needed.

However, I think it comes down to how granular the brightness levels are resolved. With fewer bits there should be fewer vertical bars in the histogram - perhaps 256 per R, G, and B channel for 8 bit.

With more bits there should be more bars, but each lower, because the pixels are spread out over more individual shades.

Leaving colour aside, in b/w it is a little easier to see. I found this example quite useful.
1-bit image below. Notice that the histogram has only two bars, left and right.


5-bit image, which has 32 bars (aka shades) in the histogram. But the dynamic range of the sensor is still covered.


11-24-2018, 10:25 AM   #83
Veteran Member




Join Date: Feb 2016
Posts: 706
QuoteOriginally posted by sbh Quote
Interesting discussion. I see what you mean. Not sure why the first stop takes half of the available levels. I would think thery are used where they are needed.
This is just down to the way that digital acquisition works, i.e. it is linear.

It is sampling discrete areas from an analogue capture. The sampling is linear, hence the first stop taking half the levels and then, working down stop by stop, halving the previous. It is possible to view the raw capture data in a number of applications, both as the individual RGBG channels and as a colour demosaiced version. I can assure you that you would not like it: the image, being raw, is just that - raw data without any processing - and you will find that the individual RGB channels are very dark and low contrast, and likewise the demosaiced version is very flat, dark and very green. That is why all raw editors have to apply WB, TRC etc. to give an acceptable first view, ready for further refinement in editing. Not that it is of any value, but I could show an example or two if this is not clear.
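
A toy sketch of the kind of thing the converter does (my own made-up numbers; real converters use camera-specific matrices and curves), showing why linear data looks dark and green until white-balance gains and a tone curve are applied:

```python
import numpy as np

linear_rgb = np.array([0.05, 0.09, 0.04])   # hypothetical linear R, G, B values
wb_gains   = np.array([2.0, 1.0, 1.6])      # illustrative daylight-ish multipliers
                                             # (green usually needs the least gain)

balanced = np.clip(linear_rgb * wb_gains, 0.0, 1.0)   # white balance
encoded  = balanced ** (1 / 2.2)                       # simple gamma "TRC" for display

print("linear (dark, green-biased):", np.round(linear_rgb, 3))
print("after WB + tone curve:      ", np.round(encoded, 3))
```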


QuoteQuote:
However, i think it comes down to how granular the brightness levels are resolved. With less bits there should be less vertical bars in the histogram. Perhaps 256 per R, G, and B channel for 8 bit.

With more bits there should be more bars but less high because the pixels are spread out over more individual shades.
I think there is a distinct danger here of overthinking this and equating what you see in the Photoshop (or other) histogram too directly with the bit-depth precision of the image capture. Not sure I understand your reference to granular brightness levels and being resolved.

Your PS histogram usually displays exactly 256 brightness levels from 0 to 255 left to right. The height of the histogram is calculated relative to other tones and normalised and scaled accordingly. This is not the same as image acquisition and bit depth precision from analogue to digital via the ADC system.
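
For what it's worth, a minimal sketch (Python/NumPy, not Photoshop's actual code) of building a 256-bin display histogram and normalising the bar heights:

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=100_000)          # fake 8-bit luminance values

counts, _ = np.histogram(pixels, bins=256, range=(0, 256))
bar_heights = counts / counts.max()                   # tallest bar scaled to 1.0

print(len(counts), "bins; tallest bar =", bar_heights.max())
```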

Below is a selection of images going from 0 to 255. Like your images, each displays distinct levels, in this case split to show the relationship when we reduce the levels - but this is purely an editing exercise in your editor of choice.


Your camera will do something similar in converting its analogue capture by sampling to produce your image, which will be limited by the bit-depth precision of the conversion, i.e. if your camera is 14 bit, you have 14 bits maximum (probably 12 or fewer usable) at your disposal between an empty sensel and full sensel saturation.
Attached Images
 
11-26-2018, 02:21 PM - 1 Like   #84
Veteran Member
emalvick's Avatar

Join Date: Feb 2008
Location: Davis, CA
Photos: Gallery
Posts: 1,642
Rather than overthink it, recognize that JPG is a standardized format, with standardized practices for how it is used. While theoretically you might be able to represent all the data from a larger bit-depth image within a JPG, the JPG format was likely developed for most normal image histograms, which will have data throughout the full 8-bit range it was designed for. JPG is an old format, and it predates a lot of what we are currently using. Other formats try to update this, but the reality is that JPG itself is a bit limited, yet that is what everyone knows and uses when it comes to a standard (8-bit) format.

By the way, while limited to 8 bits, the quality of what you can get out of JPG is also going to be controlled by how compressed the image is and the quality one sets in the camera and in software. A comment a while back suggested that because a JPG file is smaller it is compressed and lossy; that is partially true for a JPG, but not for all file formats. For instance, you can compress some file formats without loss (TIFF, DNG, PNG to name a few). And JPGs are smaller predominantly because they are 8-bit rather than 16-bit; that ends up being a lot of potential data lost. Interestingly, TIFF files can be larger than your RAW images or smaller, depending on whether they are completely uncompressed or compressed. And you can bet that any image format where the files out of your camera, shot at the same resolution, vary in size is compressed (albeit not necessarily lossy). For instance, your Pentax camera will shoot PEF and DNG files of various sizes because it does somewhat compress the files.
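
As a rough demonstration of the lossless vs lossy point (a sketch assuming Pillow and NumPy are installed; actual sizes depend heavily on image content):

```python
import os
import numpy as np
from PIL import Image

# A simple synthetic 8-bit RGB gradient stands in for a photo.
row = np.linspace(0, 255, 1500).astype(np.uint8)
arr = np.stack([np.tile(row, (1000, 1))] * 3, axis=-1)
img = Image.fromarray(arr, mode="RGB")

img.save("test.png")                    # losslessly compressed
img.save("test_q95.jpg", quality=95)    # mildly lossy
img.save("test_q60.jpg", quality=60)    # more heavily lossy

for name in ("test.png", "test_q95.jpg", "test_q60.jpg"):
    print(name, os.path.getsize(name) // 1024, "KiB")
```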
11-27-2018, 04:43 AM   #85
Unregistered User
Guest




One more area of data storage confusion: the word length of the image data - when they say JPEG is an "eight-bit standard", what that refers to is the number of bits in the smallest addressable unit, which is one byte, eight bits, in the JPEG standard. TIFFs can be eight or sixteen bits. But whether the image data is stored as, for example, 32768 sixteen-bit chunks or as 65536 eight-bit chunks makes no significant difference, and no difference at all in terms of image quality.
11-27-2018, 05:00 AM   #86
sbh
Site Supporter
Site Supporter
sbh's Avatar

Join Date: Oct 2012
Location: Black Forest, Germany
Photos: Gallery
Posts: 849
QuoteOriginally posted by TonyW Quote
... Not sure I understand your reference to granular brightness levels and being resolved.
That's what you show in your three graphics. The first gradient has more shades available (brightness levels), which makes it a smooth gradient. The others have fewer brightness levels. Images with fewer bits have fewer brightness levels. That's what causes the so-called banding often seen in images of blue skies, from dark blue to bright blue. More bits means more brightness levels and smoother gradients.

I don't understand the phrase "1st stop taking half the levels". I'm afraid that this is beyond my understanding but maybe you could explain what "the 1st stop" is? The darkest part of the image?

There was an interesting video from FilmmakerIQ on "Diving into Dynamic Range" on YouTube (just for those who are interested).
11-27-2018, 06:28 AM   #87
Veteran Member




Join Date: Feb 2016
Posts: 706
QuoteOriginally posted by dlh Quote
One more area of data storage confusion: the word length of the image data - when they say JPEG is an "eight-bit standard", what that refers to is the number of bits in the smallest addressable unit, which is one byte, eight bits, in the JPEG standard. TIFF's can be eight or sixteen bits. But whether the image data is stored as, for example, 32768 sixteen bit chunks or as 65536 eight bit chunks makes no significant difference, and no difference at all in terms of image quality.
Sorry, but that information is incorrect; perhaps you meant to say something different, or I have misunderstood?

For example, TIFF can be 32 bit if you want, or even 64 bit, and the number of bits, which corresponds to the number of levels, can make a big difference to IQ.

Last edited by TonyW; 11-27-2018 at 10:14 AM.
11-27-2018, 09:25 AM   #88
Veteran Member




Join Date: Feb 2016
Posts: 706
QuoteOriginally posted by sbh Quote
That's what you show in your three graphics. The first gradient has more shades available (brightness levels), which makes it a smooth gradient. The others have fewer brightness levels. Images with fewer bits have fewer brightness levels. That's what causes the so-called banding often seen in images of blue skies, from dark blue to bright blue. More bits means more brightness levels and smoother gradients.
Thanks for your clarification, and yes, that is correct; it was just your use of "granular" that threw me - not that it is incorrect, just that I had not heard the word granular or granularity used in reference to digital images.

QuoteQuote:
I don't understand the phrase "1st stop taking half the levels". I'm afraid that this is beyond my understanding but maybe you could explain what "the 1st stop" is? The darkest part of the image?
In photographic terms a stop (not to be confused with an f-stop or f-number) is often used as a means to identify a doubling or halving of exposure. In analogue terms, for film, an increase in value of +0.3 meant a doubling of density and a -0.3 value a halving of density (just in case you are from the film era). I could have said EV, I guess. But a camera capture is actually a digital conversion (sampling, if you like) of the analogue signal it records at the time of exposure (photon collecting), and it goes through an analogue-to-digital converter (ADC) to become the numbers which are the digital file, whether raw, JPEG or TIFF.

The first stop contains half of the total exposure, counting down from the brightest part of the image, and that is the way it goes with linear capture: each stop down uses half the remaining levels, and this continues until you run out.

Does this rough graphic help?
Top: the scene we wish to capture.

Middle: one channel (green) of the four channels captured, how it looks in raw, and the typical histogram of an unprocessed channel. The other channels will look very similar visually, but their response depends on the colour of the array sensel capturing the data.


Bottom: a composite of the RGBG channels after demosaicing, but prior to TRC, WB and any other tweaks that make the image look nice in your raw converter of choice.
Attached Images
 

Last edited by TonyW; 11-27-2018 at 09:43 AM.
11-28-2018, 04:36 AM   #89
Unregistered User
Guest




QuoteOriginally posted by TonyW Quote
Sorry but that information is incorrect perhaps you meant to say something different or I have misunderstood?

For example TIFF can be 32 bit if you want or even 64 bit, and the number coinciding with levels can happen to make a big difference to IQ.
Yes, I think you misunderstood. My point was that there are different aspects of data representation in which the number of bits or bytes is used to describe the stored image data. One of those merely has to do with how the data is put into blocks, strips, or some other chunks, as part of a record structure, which has nothing to do with "color depth", but merely identifies a standard by which software can interpret the data. What I'm talking about is similar to the idea that you could have a personnel record in a database with a standard that defines how long each field is, and the data type contained therein. So for example, you could say Dcl (Lastname, Firstname) char(32);, and the length of that field, being thirty-two bytes, has nothing to do with how tall the person may be, his age, etc.
11-28-2018, 06:51 AM - 1 Like   #90
Veteran Member




Join Date: Dec 2010
Location: Ontario
Photos: Gallery
Posts: 3,332
QuoteOriginally posted by dlh Quote
One more area of data storage confusion: the word length of the image data - when they say JPEG is an "eight-bit standard", what that refers to is the number of bits in the smallest addressable unit, which is one byte, eight bits, in the JPEG standard. TIFF's can be eight or sixteen bits. But whether the image data is stored as, for example, 32768 sixteen bit chunks or as 65536 eight bit chunks makes no significant difference, and no difference at all in terms of image quality.
Where has anyone used the bit depth of a JPEG or TIFF in this way?

"8 or 16 bit TIFF" is referencing the number of bits per channel, higher means you are capable of storing more tonal information.