Quote: In your example of PP the subject doesn't look like what was there,
And you think yours does? There is quite a bit here you don't understand. Your image uses only the bottom two-thirds of the spectrum available to the camera. The image is essentially under-exposed, and I'm pretty sure you lost some detail in the dark areas, which fortunately are pretty small. It doesn't matter whether you think that is what you saw. The reason we adjust the image to use the full range of the levels window is that that's what our eye does. So unless your eye is damaged and your iris isn't opening and closing, that isn't the way you saw it. I've had this discussion with many students, and one of my first exercises was a project where I could demonstrate exactly what I'm talking about. I had a graduated scale with 12 steps from white to black attached to the bottom of the picture. By having the students try to match the image, I could use that scale to help them understand that when they didn't max out the detail in the scale, it also affected how the picture looked. It was just easier to see it on the scale than to talk about parts of the picture.
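To make the "use the full range of the levels window" idea concrete, here is a minimal Python sketch (using NumPy) of what dragging the black and white sliders in a Levels dialog does: it linearly remaps the values the image actually uses so they fill the whole 0–255 range. The function name and the toy data are illustrative assumptions, not any particular editor's implementation.

```python
import numpy as np

def stretch_levels(img, black=0.0, white=None):
    """Linearly remap pixel values so [black, white] fills 0..255,
    mimicking the black/white sliders in a Levels dialog."""
    img = img.astype(np.float64)
    if white is None:
        white = img.max()  # treat the brightest pixel present as the new white point
    out = (img - black) / (white - black) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

# A toy "under-exposed" image that only uses the bottom two-thirds of 0..255
underexposed = np.random.default_rng(0).integers(0, 170, size=(4, 4))
stretched = stretch_levels(underexposed)
print(underexposed.max(), stretched.max())  # the stretched image now reaches 255
```

After the stretch, the brightest tone present sits at the top of the range, which is exactly what the graduated-scale exercise above makes visible.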
The contrast range in nature can be as high as 20,000:1. On a film print, the highest you could achieve was about 120:1. Think about that for a second: there is nothing you can do to match it in print or on your monitor. Every image is a representation of what is out there, not an exact copy. So no, I can say with 100% certainty: your image is not what you saw. It is a representation of what you saw, with less contrast and so on. The camera quite simply doesn't have the ability to capture what you saw. The sooner you get that out of your head and realize that what you are portraying is a personally selected subset of what was there, and that there is absolutely nothing you can do to recreate the dynamics of reality, the sooner you can start working on getting the best representation of what you saw according to your vision. And your vision is coloured by your interests and the way your brain works. Often, if there is a person in the frame, we will want it to have a special place in the picture, because that's what happens when our brain processes the information: it focuses on what it has been conditioned by years of evolution to focus on.
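Those ratios are easier to compare in photographic stops (EV), where each stop is a doubling of light. A quick sketch of the arithmetic, using the 20,000:1 and 120:1 figures quoted above:

```python
import math

def ratio_to_stops(contrast_ratio):
    """Convert a contrast ratio (e.g. 20000 for 20,000:1) to stops (EV)."""
    return math.log2(contrast_ratio)

print(f"scene  20,000:1 = {ratio_to_stops(20_000):.1f} stops")  # about 14.3
print(f"print     120:1 = {ratio_to_stops(120):.1f} stops")     # about 6.9
```

In stops, the gap is stark: a print can hold roughly half the stops such a scene contains, which is why every print is a compressed representation.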
So what I would argue we are trying to capture is not what's there; that would make for really boring pictures. What we try to capture is what we ourselves saw through our brain's filter, which decides what we see in an image before we even get a chance to tell it differently.
But you can train your brain to see reality differently. Hunters will see animal signs you and I won't even notice, because they have trained their brains to look for them.
So this is way more complicated than claiming you're trying to make the image as close as possible to what you saw. You aren't talking about what you saw; you're talking about what you remember. You don't see the way a camera sees. Your eye's dynamic range is about 7 EV, but your eye adjusts to light quickly enough as you look at different parts of a scene that the impression on your retina covers far more than that. The camera can capture 13 EV, but it's static, not dynamic, so even the way you see cannot be duplicated in an image. No one has any record of what you saw, not even you, so it's an impossible standard to work to.
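Turning those EV figures back into ratios (just the reverse of a base-2 log, using the 7 EV and 13 EV numbers above) shows why the comparison between eye and sensor is apples to oranges:

```python
def stops_to_ratio(stops):
    """Convert a dynamic range in stops (EV) to a contrast ratio."""
    return 2 ** stops

print(f"eye, single glance: ~{stops_to_ratio(7):,}:1")   # 128:1
print(f"camera sensor:      ~{stops_to_ratio(13):,}:1")  # 8,192:1
# The eye's small static range is multiplied by constant re-metering as it
# scans a scene; the sensor gets its larger range once, in a single exposure.
```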
A more appropriate goal is to create an image that has the same emotional impact as what you saw... and that has little to do with what was originally there. So unless you took the picture of that leaf on a grey, overcast day that was depressing the heck out of you, because there was way too much blue light and it was so dark that your iris was wide open and still couldn't gather enough light to see properly, that isn't what you saw. Your eye has its own white balance system that corrected for the blue light. It has an exposure meter that adjusts the iris (its aperture) to get a balanced exposure, and it can open and close that aperture as it looks at brighter or darker parts of a scene, an ability your camera doesn't even have. So no, that image is not what you saw, not even close. And memory is a funny thing: you don't even know what you saw. You certainly didn't see the narrow-DoF interpretation you captured. Your eye doesn't see like that; your camera does.
Your job, should you choose to accept it, is to take the image the camera records and create the most compelling image you can from it. It may be something close to what you saw, or it may be an image that conveys what you felt, with certain parts of the image held back and certain parts accentuated for emotional impact. But don't confuse any of it with some kind of reality. Human reality is way too dynamically fluid to capture on a piece of paper or a computer screen.
Forget what you think you saw; the camera didn't capture it, and the feelings you had at the time you captured the scene are gone. Work with what you captured.