11-20-2018, 05:14 AM | #76 |
Unregistered User Guest |
An admission of error. I'd had the idea that a JPEG recorded at highest quality and resolution was uncompressed data. There are such things as uncompressed JPEGs, but my cameras don't record them. Looking at the FAQ page for the K-50 (K-50 | FAQ | Support | RICOH IMAGING) under "memory cards", I see a table showing that three times as many images can be stored on a 32 GB card in JPEG format as in raw format, even though the number of pixels recorded and the resolution are the same for both. So, no question, that's a compressed format, thus a "lossy" image, etc. I still maintain that editing such a JPEG to correct all the same problems one might correct with raw data is perfectly feasible; but the people who say you can't get the same range of possibilities with JPEGs in terms of white balance, exposure value, etc., are correct. I've only just gotten into this question recently, after having read something someone else wrote (in a discussion of image editing software), and being a compulsive optimizer, I had to figure out what the "best" way to set up my cameras is. I'm going to keep shooting raw+, because I don't need the functions you can't have with raw data storage (full-auto multi-shot, for example), the memory cards are of sufficient capacity that one could rule the world (of photographs at least), and, well, I've been pretty lucky so far: I can't remember when I last had to do any serious editing - a lot of cropping and touching up unfortunate artifacts of human culture that show up (e.g., power lines), but that's about it. But just in case... (I've seen the word "recovery" used more than once in this context). It's insurance, or as we say in my neck of the woods, IN-surance. And let me say that I admire BruceBanner's consistent civility and drive to learn. Just don't make him angry ... you won't like him when he's angry. 
Oh, and as to the raised eyebrows regarding an earlier post, I referred to this comment with approval:

Quote:
This is half-true. The settings you make in-camera will get applied to the embedded JPEG that is used when reviewing on the LCD (and in some browser software on the PC), and the white balance will be recorded as well and will normally be used as the default during later raw image post-processing. So it is true that these are not part of the raw sensor data recorded, but there are still benefits to making some effort with white balance in-camera.

I'd point out further that the white balance you can set manually is one of the settings that gets saved with the user-defined profiles. I never take defaults, and I turn off most of the funky in-camera processing options. Last edited by Unregistered User; 11-20-2018 at 06:34 AM. |
11-21-2018, 11:12 AM | #77 |
You may also wish to consider this when talking/thinking about JPEG vs raw. A camera JPEG is limited to 8 bits (I'm not aware of any camera that uses JPEG 2000). This really means a hard limit in DR of up to 8 EV, versus up to 14 EV with a raw file generated by most modern DSLRs (and, I believe, up to 16 EV for the newer Hasselblads). Generally you cannot exceed the camera's bit depth in DR terms, i.e. 14 bits means a limit of 14 stops. Bear in mind that these are theoretical values; usable data, assuming some post-processing, will limit real-world DR. Not necessarily a big deal: old analogue colour film, and transparency film in particular, probably did not approach 8 usable stops (no data to hand). The truth is that 8-bit JPEG has been, and is, very usable, as long as the scene DR does not exceed 8 stops, the needs of editing and colour gamut are within requirements, and you are not going to be pushing the data too much. | |
11-22-2018, 02:02 AM | #78 |
Leaving RGB aside for a moment and taking b/w as an example: a 1-bit image would only have two shades, black or white (1/0). A 2-bit file would have 4 shades (00, 01, 10, 11). A 3-bit file has more, and so on. So it's the number of shades (aka details) between the brightest and darkest, rather than clipping highlights or shadows. However, with more than one bit the file can contain more detail in dark or bright areas. Lower bit depth might sometimes be perceived as clipping because there is less detail, for example in bright skies or dark shadows. An 8-bit RGB file has 256 shades of each of R, G and B. Mix them together and we can get a maximum of 16.8 million colours. A 16-bit RGB file has 65536 shades of each of R, G and B. | |
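The arithmetic in the post above can be sketched in a few lines of Python (an illustrative sketch, not code from the thread):

```python
# Number of distinct shades a channel can store at a given bit depth.
def shades(bits):
    return 2 ** bits

assert shades(1) == 2        # black or white
assert shades(8) == 256      # one 8-bit JPEG channel
assert shades(16) == 65536   # one 16-bit TIFF channel

# Three 8-bit channels mixed together: the familiar "16.8 million colours".
print(shades(8) ** 3)  # 16777216
```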
11-22-2018, 03:16 AM | #79 |
Quote:
Afaik 8-bit is not equal to 8 stops. Bit depth is the number of shades which the file can save between both ends. Leaving RGB aside for a moment and taking b/w as an example: a 1-bit image would only have two shades, black or white (1/0). A 2-bit file would have 4 shades (00, 01, 10, 11). A 3-bit file has more, and so on. So it's the number of shades (aka details) between the brightest and darkest, rather than clipping highlights or shadows. However, with more than one bit the file can contain more detail in dark or bright areas. Lower bit depth might sometimes be perceived as clipping because there is less detail, for example in bright skies or dark shadows. An 8-bit RGB file has 256 shades of each of R, G and B. Mix them together and we can get a maximum of 16.8 million colours. A 16-bit RGB file has 65536 shades of each of R, G and B.

If you accept that the first stop uses 1/2 the available levels, and each further stop 1/2 of the preceding levels, it may be easier to picture where you run out of data. Maybe the following illustrates what I was trying to say. This is how it was explained to me many years ago: http://1.bp.blogspot.com/-br7TwQVK3qM/UHgnBPei_VI/AAAAAAAAFDs/VDzrnCNFDZs/s1..._histogram.jpg Last edited by TonyW; 11-22-2018 at 05:56 AM. | |
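The "half the levels per stop" idea can be sketched like this (an illustrative sketch of a linear encoding, not code from the thread):

```python
# In a linear raw encoding, the brightest stop occupies half of all levels,
# the next stop half of the remainder, and so on down the range.
def levels_per_stop(bits, stops):
    remaining = 2 ** bits
    out = []
    for _ in range(stops):
        half = remaining // 2
        out.append(half)
        remaining -= half
    return out

print(levels_per_stop(14, 5))  # [8192, 4096, 2048, 1024, 512]
print(levels_per_stop(8, 5))   # [128, 64, 32, 16, 8]
```

The deepest shadows are left with only a handful of levels, which is where the "running out of data" in the linked histogram graphic comes from.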
11-23-2018, 06:34 AM | #80 |
Quote:
You may also wish to consider this when talking/thinking about JPEG vs raw. A camera JPEG is limited to 8 bits (not aware of any camera that uses JPEG 2000). This really means a hard stop limit in DR of up to 8 EV vs up to 14 EV with a raw generated from most modern DSLR and the newer Hasselblad I believe up to 16 EV. Generally you cannot exceed the cameras bit depth in DR terms I.e. 14 bit means a limit of 14 stops... ...Truth is that 8 bit JPEG has been and is very usable as long as the scene DR does not exceed 8 stops and the needs of editing and colour gamut are within requirements and you are not going to be pushing the data too much.

I would expect that the full bit depth and dynamic range of the sensor would be recorded and used by the camera to make its 16-bit raw data file output. However, I would also expect that all of that data would be utilised by the camera's firmware when it is processed in-camera into a viewable image file according to the user's chosen settings (white balance, Custom Image, D-Range, etc.), all the data of the processed image then being compressed into the recorded 8-bit JPEG file.

The implication of this would seem to be that the JPEG file has the potential to contain some data from everywhere in the full dynamic range of the sensor, but, as the table in post 79 shows, the number of levels recorded in the 8-bit JPEG file is far smaller than in the 16-bit raw data file. There are still sufficient levels that it would seem likely to have little effect on the experience of someone viewing an image from a JPEG file recorded at high quality settings. However, the fewer available levels might become distinctly noticeable when we try to post-process the JPEG file, possibly leading, for example, to colour banding in areas of slightly varying tones (e.g. a blue sky), and to more noise artefacts in the shadow areas, which contain the fewest levels. Philip Last edited by MrB1; 11-23-2018 at 06:46 AM. 
Reason: spelling | |
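Philip's point about fewer levels becoming visible in post-processing can be demonstrated with a toy quantisation in Python (an illustrative sketch, not code from the thread):

```python
# A smooth 14-bit ramp squeezed into 8 bits: many nearby input tones
# collapse into the same output level, which is where banding starts.
ramp14 = list(range(0, 2 ** 14, 7))   # a smooth 14-bit gradient
ramp8 = [v >> 6 for v in ramp14]      # truncate to 8 bits (14 - 6 = 8)

print(len(set(ramp14)))  # 2341 distinct input tones
print(len(set(ramp8)))   # only 256 distinct output levels survive
```

Any later edit that stretches those 256 levels apart (e.g. lifting shadows) exposes the gaps between them as visible bands.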
11-23-2018, 10:23 AM | #81 |
Please be aware that the values quoted relate to ADC precision only and are a theoretical maximum. You cannot use them to calculate image file results for 8- or 16-bit files. It should also be borne in mind that noise, and your tolerance to it, will be a limiting factor.

Quote:
I would expect that the full bit depth and dynamic range of the sensor would be recorded and used by the camera to make its 16-bit raw data file output.

Photographic Dynamic Range versus ISO Setting

Quote:
However, I would also expect that all of that data would be utilised by the camera's firmware, when it is processed in-camera into a viewable image file according to the user's chosen settings (white balance, custom Image, D-range, etc.), all the data of the processed image then being compressed into the recorded 8-bit JPEG file.

Quote:
The implication of this would seem to be that the JPEG file has the potential to contain some data from everywhere in the full dynamic range of the sensor but, as the info shown in the table in post 79 shows, the number of levels in the dynamic range recorded in the 8-bit JPEG file are many fewer than in the 16-bit raw data file.

What I think is important to take away from the raw vs JPEG debate is that once you have literally baked your JPEG cake, you cannot really unbake it and recover discarded data. So if you only have a JPEG and you realise that you need more detail in the shadow areas, an edit may not be possible without introducing noise artifacts, posterisation, etc. With your raw file you should be able to push the data much more. This also applies to highlights. FWIW, a "correctly" exposed JPEG will result in an underexposed raw file (or, perhaps more correctly, a less than optimally exposed raw), on the assumption that you are going to try, and have the time, to optimise exposure. 
BTW, I am not suggesting I am perfect: I can and have made some doleful errors when calculating exposure, slide/transparency film in particular being super picky.

Quote:
There are still sufficient levels that it would seem likely to have little effect on the experience of someone viewing an image from a JPEG file recorded at high quality settings. However, the fewer available levels might become distinctly noticeable when we try to post-process the JPEG file, possibly leading, for example, to colour banding in areas of slightly varying tones (e.g. a blue sky), and to more noise artefacts in the shadow areas which contain the fewest levels. Philip

Sorry for the rather wordy reply, but I was concerned that my original post could be seen as misleading and therefore needed some clarification. Last edited by TonyW; 11-23-2018 at 10:29 AM. Reason: added image | |
11-24-2018, 05:05 AM | #82 |
Quote:
I may not have expressed it well enough, but I believe that, although perhaps not obvious, you will find that there is an indirect link between bit depth and dynamic range. The link is the fact that the dynamic range of a camera cannot really exceed the total number of bits in a digital capture. If you accept that the first stop uses 1/2 the available levels and each further stop 1/2 of the preceding levels it may be easier to picture where you run out of data. Maybe the following illustrates what I was trying to say. This is how it was explained to me many years ago http://1.bp.blogspot.com/-br7TwQVK3qM/UHgnBPei_VI/AAAAAAAAFDs/VDzrnCNFDZs/s1..._histogram.jpg

However, I think it comes down to how granularly the brightness levels are resolved. With fewer bits there should be fewer vertical bars in the histogram, perhaps 256 per R, G and B channel for 8-bit. With more bits there should be more bars, but lower ones, because the pixels are spread out over more individual shades. Leaving colour aside, in b/w it is a little easier to see. I found this example quite useful. 1-bit image below. Notice that the histogram is only two bars, left and right. 5-bit image, which has 32 bars (aka shades) in the histogram. But the dynamic range of the sensor is still covered. | |
11-24-2018, 10:25 AM | #83 |
It is sampling discrete areas from an analogue capture. The sampling is linear, hence the 1st stop taking half the levels, then working down stop by stop, halving the previous. It is possible to view the raw capture data in a number of applications, in the individual RGBG channels and also in a colour demosaiced version. I can assure you that you would not like it: the image, being raw, is just that, raw data without any processing. You will find that the individual RGB channels are very dark and low contrast, and likewise the demosaiced version is very flat, dark and very green. That is why all raw editors have to apply WB, TRC, etc. to give an acceptable first view, ready for further refinement in editing. Not that it is of much value, but I could show an example or two if this is not clear.

Quote:
However, i think it comes down to how granular the brightness levels are resolved. With less bits there should be less vertical bars in the histogram. Perhaps 256 per R, G, and B channel for 8 bit. With more bits there should be more bars but less high because the pixels are spread out over more individual shades.

Your PS histogram usually displays exactly 256 brightness levels, from 0 to 255, left to right. The height of the histogram is calculated relative to other tones and normalised and scaled accordingly. This is not the same as image acquisition and bit depth precision from analogue to digital via the ADC system. Below is a selection of images going from 0 to 255; like your images it displays distinct levels, in this case split to show the relationship when we split the levels, but this is purely an editing exercise in your editor of choice. Your camera will do similar in converting its analogue capture by sampling to produce your image, which will be limited by the bit depth precision of the conversion, i.e. if your camera is 14-bit you have 14 bits maximum (probably 12 or less) at your disposal between zero sensel and full sensel saturation. | |
11-26-2018, 02:21 PM - 1 Like | #84 |
Rather than overthink it, recognize that JPG is a standardized format, with standardized practices for how it is used. While theoretically you might be able to represent all the data from a larger bit-depth image within a JPG, the JPG format was designed around normal image histograms, which will have data throughout the full 8-bit range it was designed for. JPG is an old format, and it predates a lot of what we are currently using. Other formats try to update this, but the reality is that JPG itself is a bit limited, yet it is what everyone knows and uses when it comes to a standard (8-bit) format. By the way, while limited to 8 bits, the quality of what you can get out of a JPG is also going to be controlled by how compressed the image is and the quality one sets in the camera and in software. A comment a while back suggested that because a JPG file is smaller, it is compressed and lossy; that is partially true for a JPG, but not for all file formats. For instance, you can compress some file formats without loss (TIFF, DNG, PNG, to name a few). And JPGs are smaller predominantly because they are 8-bit rather than 16-bit. That ends up being a lot of potential data lost. Interestingly, TIFF files can be larger or smaller than your raw images depending on whether they are completely uncompressed or compressed. And you can bet that any image format where the files out of your camera, shot at the same resolution, vary in size is compressed (albeit not necessarily lossy). For instance, your Pentax camera will shoot PEF and DNG files of various sizes because it does somewhat compress the files. | |
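The lossless-compression point can be illustrated with zlib, the Deflate algorithm PNG uses (an illustrative sketch, not code from the thread):

```python
import zlib

# Lossless compression shrinks the data yet round-trips exactly.
# Compressed size depends on content, which is why same-resolution
# files can vary in size without any data being thrown away.
flat = bytes(128 for _ in range(10_000))              # featureless grey frame
busy = bytes((i * 97) % 256 for i in range(10_000))   # detailed content

for data in (flat, busy):
    packed = zlib.compress(data)
    assert zlib.decompress(packed) == data            # nothing lost
    print(len(data), "->", len(packed))
```

The flat frame compresses far more than the busy one, yet both decompress back to the exact original bytes; a lossy codec like JPEG gives up that guarantee in exchange for smaller files.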
11-27-2018, 04:43 AM | #85 |
Unregistered User Guest |
One more area of data storage confusion: the word length of the image data. When they say JPEG is an "eight-bit standard", what that refers to is the number of bits in the smallest addressable unit, which is one byte (eight bits) in the JPEG standard. TIFFs can be eight or sixteen bits. But whether the image data is stored as, for example, 32768 sixteen-bit chunks or as 65536 eight-bit chunks makes no significant difference, and no difference at all in terms of image quality.
11-27-2018, 05:00 AM | #86 |
I don't understand the phrase "1st stop taking half the levels". I'm afraid this is beyond my understanding, but maybe you could explain what "the 1st stop" is? The darkest part of the image? There was an interesting video from FilmmakerIQ, "Diving into Dynamic Range", on YouTube (just for those who are interested). | |
11-27-2018, 06:28 AM | #87 |
Quote:
One more area of data storage confusion: the word length of the image data - when they say JPEG is an "eight-bit standard", what that refers to is the number of bits in the smallest addressable unit, which is one byte, eight bits, in the JPEG standard. TIFF's can be eight or sixteen bits. But whether the image data is stored as, for example, 32768 sixteen bit chunks or as 65536 eight bit chunks makes no significant difference, and no difference at all in terms of image quality.

For example, TIFF can be 32-bit if you want, or even 64-bit, and the number of bits, corresponding to the number of levels, can make a big difference to IQ. Last edited by TonyW; 11-27-2018 at 10:14 AM. | |
11-27-2018, 09:25 AM | #88 |
That's what you show in your three graphics. The first gradient has more shades available (brightness levels), which makes it a smooth gradient. The others have fewer brightness levels. Images with fewer bits have fewer brightness levels. That's what causes the so-called banding often seen in images of blue skies, from dark blue to bright blue. More bits means more brightness levels and smoother gradients.

Quote:
I don't understand the phrase "1st stop taking half the levels". I'm afraid that this is beyond my understanding but maybe you could explain what "the 1st stop" is? The darkest part of the image?

The 1st stop is the brightest stop of the image, and it contains half of the total exposure. That is the way with linear capture: each stop down uses half the levels of the one above, and this continues until you run out. Does this rough graphic help? Top: the scene we wish to capture. Middle: one channel (green) of the four channels captured, how it looks in raw, and the typical histogram of an unprocessed channel. Other channels will look very similar visually, but their response is based on the colour of the array sensel capturing the data. Bottom: a composite of the RGBG channels after demosaic, but prior to TRC, WB and any other tweaks to make the image nice in your raw converter of choice. Last edited by TonyW; 11-27-2018 at 09:43 AM. | |
11-28-2018, 04:36 AM | #89 |
Unregistered User Guest | Yes, I think you misunderstood. My point was that there are different aspects of data representation in which the number of bits or bytes is used to describe the stored image data. One of those merely has to do with how the data is put into blocks, strips, or some other chunks, as part of a record structure, which has nothing to do with "color depth", but merely identifies a standard by which software can interpret the data. What I'm talking about is similar to the idea that you could have a personnel record in a database with a standard that defines how long each field is, and the data type contained therein. So for example, you could say Dcl (Lastname, Firstname) char(32);, and the length of that field, being thirty-two bytes, has nothing to do with how tall the person may be, his age, etc.
11-28-2018, 06:51 AM - 1 Like | #90 |
Quote:
One more area of data storage confusion: the word length of the image data - when they say JPEG is an "eight-bit standard", what that refers to is the number of bits in the smallest addressable unit, which is one byte, eight bits, in the JPEG standard. TIFF's can be eight or sixteen bits. But whether the image data is stored as, for example, 32768 sixteen bit chunks or as 65536 eight bit chunks makes no significant difference, and no difference at all in terms of image quality.

"8 or 16 bit TIFF" refers to the number of bits per channel; a higher bit depth means you are capable of storing more tonal information. | |
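The distinction BrianR is drawing, bits per sample rather than how bytes are chunked, can be sketched in Python (an illustrative sketch, not code from the thread):

```python
import struct

# The same four bytes read two ways. Chunking alone doesn't change the data,
# but bits *per sample* decides how many tonal levels each sample can hold:
# 256 levels for 8-bit samples vs 65536 levels for 16-bit samples.
raw = bytes([0x12, 0x34, 0xAB, 0xCD])
as_8bit = struct.unpack(">4B", raw)   # four 8-bit samples
as_16bit = struct.unpack(">2H", raw)  # two 16-bit samples

print(as_8bit)   # (18, 52, 171, 205)
print(as_16bit)  # (4660, 43981)
```

Both readings come from the same bytes, but a pipeline treating each sample as 8-bit can only ever distinguish 256 tones per channel, no matter how the file groups them.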