Black point and dynamic range: what is true?

10 replies · 809 views
Soothsayerman
Is this statement true?

In an image of a DSO (or any object) in which true black exists, not setting a true black point is a loss of dynamic range. In other words, if my black background is dark grey instead of black, and my white point sits just below clipping, then not setting a black point is a loss of dynamic range.

Is that a true statement?

If it is true, then is the same true of not setting a true white point if true white exists in the image?

So let's say it is true for the moment.

Is there an instance where it would be advantageous, for whatever reason, to not do this?  Or would not doing this in any instance point to an error in post processing where a true black point and a true white point exist?

The answer to the last question is maybe, but apart from that, not setting a true white point and black point leaves dynamic range on the table and turns your 14- or 16-bit camera into a 10-bit or worse camera.

Now, when you expose an image you may or may not be using the entire range of your histogram, for a variety of reasons, and of course it is wrong to assume that you are supposed to; those decisions are made in post processing.
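To make the "leaving dynamic range on the table" claim concrete, here is a minimal numpy sketch; the 16-bit frame and its value range are invented for illustration. An image whose darkest pixel sits at dark grey spans only a fraction of the available levels (about 10 bits' worth here, echoing the "10 bit or worse" point above), and a black/white point stretch remaps that occupied range onto the full output range.

```python
import numpy as np

# toy 16-bit frame: background is dark grey, brightest pixel well below clipping
rng = np.random.default_rng(0)
img = rng.integers(11000, 12024, size=(100, 100), dtype=np.uint16)

black, white = int(img.min()), int(img.max())
levels_used = white - black + 1
print(f"levels spanned: {levels_used} (~{np.log2(levels_used):.1f} bits of 16)")

# Black/white point stretch: darkest pixel -> 0, brightest -> 65535.
# This spreads the occupied range over the full output range; it does not
# invent new tonal levels, but no output range is wasted on empty tones.
stretched = (img.astype(np.float64) - black) / (white - black) * 65535.0
stretched = stretched.round().astype(np.uint16)
```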
Joon Ren
I think this video will help explain what the white, black and midtone points are all about and how changing them will impact image properties https://www.youtube.com/watch?v=LRwPK4UDYPc&t=446s
Soothsayerman
Adam is the man, and it does help explain what is going on. The idea is a simple but brilliant one developed by Ansel Adams years ago to get the most dynamic range out of a photo by using the Zone System: a range of black and white levels used to expose correctly and stretch correctly.
wsg
Soothsayerman, I am not a trained photographer, but I have a fair amount of experience taking pictures, astro and non-astro. When you or I or Adam Block talk about the black point or white point, we are talking about the histogram, which, in any image, is a sort of map of the darkness and lightness of the image, including color ranges; or at least that is the way I think of it.
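As a rough sketch of that "map" idea (the array here is random, purely for illustration): the histogram just counts how many pixels fall at each brightness level.

```python
import numpy as np

# an 8-bit grayscale image; real data would come from your stacked frame
img = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)

# counts[0] is the number of pure-black pixels, counts[255] pure-white ones;
# in a typical astro image the sky background shows up as a hump near the left
counts, edges = np.histogram(img, bins=256, range=(0, 256))
```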

It may also be true to say that there is a point in most images, astro or otherwise, where black becomes grey and grey becomes white, but…

To address your original question, I think there are a few points of discussion to be made related to astro photography and specifically astro photography seen on this website.

First let me say I think you are correct: dynamic range is a product of the position of the black and white points, and most images on Astrobin should have a black point. After all, most of us are photographing the deep sky, what we used to call outer space, and if the recent JWST images have taught us anything, it is that space is very dark.

What seems to happen around here is that "artistic" interpretation and "personal taste" have greater appeal than the histogram, and the black point particularly. Many members who know what the histogram means often lose track of the black point in the pursuit of stretching to show dust and faint nebulosity, while many, many others might not know what the black point is at all.

The real advantage is to understand how the histogram works and to use it properly.

scott
Soothsayerman
Thanks Scott for the thoughtful reply, and I agree. Understanding what the histogram is and how to use it to its fullest is the main point. There are many ways to do this depending on the software one uses to process images. In Photoshop, GIMP, Affinity, Lightroom, and I'm sure others, setting the white and black points is very straightforward and easy. In some astrophotography software it is not quite as straightforward, but easy enough to do.

For a lot of astrophotographers, I am not sure if it is an artistic motivation, or just not realizing what dynamic range is, or just not thinking about it. I suppose it could be any or all of them. I do think, though, that you can have proper dynamic range and be as artistic as you want. The man who made the "zone" system famous for maximizing dynamic range (histogram) in his finished photos was Ansel Adams, who was very artistic and was "documenting" the beauty of nature on black-and-white film. He wanted to capture as much of the dynamic range as possible because it was truer to real life and presented an opportunity for more dramatic photos. I think his motivations align well with astrophotography.

Contrasts in luminance in photography and cinematography are dramatic, more engaging, and often represent reality more accurately. I think this is particularly true in astrophotography, but many do not take advantage of it.

For more information you can look at the following links.

https://en.wikipedia.org/wiki/Zone_System

https://www.institute-of-photography.com/ansel-adam-zone-system/
wsg

I see you just became a member and have processed your first image that was acquired by someone else,

congratulations and welcome to Astrobin!

scott
Lynn K
If you are imaging near the Milky Way, more likely there will not be any void showing only dark, empty space. One has only to look at wide-field images to see that there is a lot of dust out there in our galaxy, and it is not black.

The Adam Block tutorial is very interesting. Even though he uses the term black point, it is not defined the way I learned years ago using Photoshop. To set the "Black Point" in Photoshop you set a numerical value such as 20, and then no pixel would be below/darker than 20. You didn't clip the data the way Adam is showing. What Adam is demonstrating is similar to using "Levels" in Photoshop: there, by sliding the black point to the right, you would clip all data to the left. Also, if you set the white point to say 240, then no pixel would be lighter than 240. Again, in my experience, what Adam is doing is more similar to moving the right slider in "Levels" to the left in order to do a linear stretch. A lot of the tutorial explains how "Curves" affects the data and why it is no longer linear.
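If it helps, the two operations described above can be sketched in a few lines of Python (the thresholds 20, 240, and 30 are just example values): the first remaps output levels without discarding anything, while the second clips shadows the way the Levels input slider does.

```python
import numpy as np

img = np.random.default_rng(2).integers(0, 256, size=(64, 64)).astype(np.float64)

# Output black/white points, as described above: remap 0..255 into 20..240,
# so no pixel ends up darker than 20 or lighter than 240 -- nothing is clipped.
out_lo, out_hi = 20.0, 240.0
compressed = img / 255.0 * (out_hi - out_lo) + out_lo

# Levels input black slider: everything at or below the chosen level is
# clipped to 0 and the remainder is linearly rescaled (a linear stretch).
in_black = 30.0
clipped = np.clip((img - in_black) / (255.0 - in_black), 0.0, 1.0) * 255.0
```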

Lynn  K.
Soothsayerman
Yeah, there are a lot of ways to go about this; the thing to remember is the end goal of maximizing dynamic range. You can enter a numeric value, you can manipulate the levels or the curves, and you can actually sample the image itself. I like the way Adam does it and explains it. You can grok this whole thing mathematically or visually, which is why Ansel Adams' method became so popular: it simplified the concept. The great thing, and the curse, about all of this is the myriad of ways available to accomplish the same thing.
wsg:
I see you just became a member and have processed your first image that was acquired by someone else, congratulations and welcome to Astrobin!

Thank you Scott for the welcome message, I appreciate it.
Congrats on being snarky and insincere.
Just to be clear, though, I have been processing images and doing photography as an amateur and professional since the early 1980s. How time flies!
dkamen
Hi,
 
I may have misunderstood something fundamental in the discussion, but based on my limited understanding I have two remarks:

1. White is the lowest signal level that saturates your sensor, plus the infinite number of signal levels above that. So there is no "true white", and if your image doesn't already include white, e.g. in star cores, setting it is usually not a good thing. If anything, it emulates an inferior sensor that saturates faster.
2. Similarly, black is the highest signal level that fails to trigger your sensor, and anything below that. True black exists theoretically (complete absence of signal) but not in a long exposure within the general proximity of a diffuse DSO. Again, if it is not already present, setting it emulates an inferior (less sensitive) sensor, and that usually manifests as unnatural-looking "edges" in your DSO.


As for bit depth, it is largely unrelated.

At the extreme, you can imagine a binarized version of the image, captured by a 1-bit sensor. Anything above e.g. 128 is rendered as white (255), anything below is black (0). By definition, the "true black" parts of the image are black, the "true white" parts of the image are white, and the full range of the histogram is covered in a maximal manner ("maximum dynamic range"). Will that image look better than a grayscale counterpart with values between 16 and 235, just because the latter has a smaller dynamic range? I very much doubt it.
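For what it's worth, that thought experiment is easy to reproduce (the 64x64 random frame is just a stand-in): the binarized image spans the full 0..255 range yet carries only one bit of tone per pixel, while the 16..235 image spans less range but far more distinct levels.

```python
import numpy as np

gray = np.random.default_rng(3).integers(16, 236, size=(64, 64), dtype=np.uint8)

# "1-bit sensor": threshold at 128 and render pure black or pure white.
# The full 0..255 range is covered ("maximum dynamic range"), but two tones only.
binary = np.where(gray >= 128, 255, 0).astype(np.uint8)

print("grayscale tones:", np.unique(gray).size)    # up to 220 distinct levels
print("binarized tones:", np.unique(binary).size)  # exactly 2
```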

cheers,
D.
Soothsayerman

A white point (often referred to as reference white or target white in technical documents) is a set of tristimulus values or chromaticity coordinates that serve to define the color "white" in image capture, encoding, or reproduction. The white point in the sRGB space is defined as the values 255, 255, 255 for RGB. The white point is used to determine the maximum luminance that can be retained; beyond it, additional luminance is lost. It is also used in colorimetry, or when you take an image, to set the "white balance" or color temperature of the overall image, measured in kelvins. None of this is new or controversial in professional photography; it has been this way since the creation of the sRGB color space.

The black point is the opposite, defined in the sRGB color space as 0, 0, 0 for RGB. The black and white points are used together to determine the breadth of the measures of luminance, or the range of your grayscale, in a particular image. What is retained by your camera or sensor may or may not accurately capture the breadth of the grayscale, i.e. how many variations of gray can be captured. That breadth is commonly referred to as dynamic range: how many shades of gray, or levels of luminance, can be captured between pure black and pure white. Photographers just use the number of f/stops of range to get an idea of this.

It is very common to speak of dynamic range in the context of bit depth, because a higher bit depth enables a greater range of variations of gray to be retained. This all began in the professional photography world when digital sensors became a thing and photographers wanted a measurement that would translate well into f/stops. That is not the reason it was originally expressed this way, but it is why it became popular to express it this way. So we all know that a 12-bit raw file can record each pixel's brightness with 4096 steps of subtlety, whereas a 14-bit one can capture tonal information with 16,384 levels of precision. It is about how many levels of precision can be retained. Notice I said retained, and not necessarily captured. For this discussion, retained is all that matters, because that is the precision of the information you have to work with in your image file.

So if the image you are processing depicts an object or scene that contains the sRGB value of 0, 0, 0, that is your black point. Notice I said not what your camera captured, but what is actually contained in the scene. What is actually captured is very dependent on many, many things. Likewise, if the same is true of the scene in regard to the color white, with an sRGB value of 255, 255, 255, that is your white point. The reason professional and other photographers use these points is to ensure that during processing they create an image that reflects the true dynamic range of the scene. The reason bit depth is used in this context is that bit depth determines the amount of precision, or the number of levels of gray, that can be retained by the device used to capture the image, and consequently the number of levels of gray in the image. Most people can tell the difference between a black-and-white, un-dithered scene captured on an 8-bit device vs a 14-bit device.
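The arithmetic behind those precision figures is simply levels = 2^bits, as this one-liner confirms:

```python
# tonal precision retained at each bit depth
for bits in (8, 10, 12, 14, 16):
    print(f"{bits}-bit: {2 ** bits:,} levels between pure black and pure white")
# 12-bit -> 4,096 levels; 14-bit -> 16,384 levels, as noted above
```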

I hope that clears up the confusion.
Soothsayerman

Here is an example of what I am talking about.

black point not set
https://imgur.com/gallery/xPE3XNy
black point set
https://imgur.com/gallery/g7Z7V07

Within the context we're talking about, we would say the image with the set black point contains a wider dynamic range because it depicts more shades of gray, all the way to black. We are not destroying or losing any information by doing this. If there is something in the image very close to black that we want to make sure is visible, we can raise the luminance of that shade of gray. We would set the black point to true black because true black exists in the scene we are capturing. That is a different matter from whether the sensor captured true black. What the sensor captures and the end result of our post processing are two different things, because there does not exist a way for a sensor to capture every nuance in every situation with 100% fidelity.
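A small sketch of that last point, with made-up 8-bit values: if the scene's true black sits at pixel value 40 and nothing in the data falls below it, moving the black point there destroys no information, and near-black detail can still be lifted afterwards with a curve.

```python
import numpy as np

# darkest value in the data is 40 -- our measured black point
img = np.random.default_rng(4).integers(40, 256, size=(64, 64)).astype(np.float64)

# set the black point: 40 maps to 0, and since no pixel sits below 40,
# no values collide and no information is destroyed
bp = 40.0
out = (img - bp) / (255.0 - bp) * 255.0

# if faint detail now sits too close to black, lift the shadows with a
# simple curve (gamma < 1 brightens dark tones more than bright ones)
lifted = 255.0 * (out / 255.0) ** 0.8
```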

All of this said, someone may decide their personal preference is different; however, it is important to understand dynamic range and what it means in relation to stretching and the histogram.