16 bit camera, what's the future?

20 replies · 1.1k views
Daniel Renner
I have an ASI 1600mm pro which has a 12 bit sensor. My next upgrade will most likely be the 2600mm pro that has a 16 bit sensor.
But how far away are we from the next bit-depth upgrade, and will it make a huge difference over 16 bit?
Is the next step 18 bit?
ks_observer
Why do you care about bit number?
Clint Lemasters
https://www.photometrics.com/learn/camera-basics/bit-depth

I was wondering the same and found the above article insightful. Going up another bit would definitely get you more gradation in your image. Per the above, it looks pretty substantial going from 12 to 14 to 16. Not sure going above 16 bits is going to give most astrophotographers a useful/perceptible difference due to our own visual limitations.

Clear skies!

Clint
Daniel Renner
Clint Lemasters:
https://www.photometrics.com/learn/camera-basics/bit-depth

I was wondering the same and found the above article insightful. Going up another bit would definitely get you more gradation in your image. Per the above, it looks pretty substantial going from 12 to 14 to 16. Not sure going above 16 bits is going to give most astrophotographers a useful/perceptible difference due to our own visual limitations.

Clear skies!

Clint

Thanks! I guess I was thinking more about QE.
ks_observer
Also note:
Stacking images increases the effective bit depth according to:
number of subframes = 2^(2*n), where n = the bit depth increase
So to gain 2 bits, you have to take 16 subframes.
That means your 12-bit sensor is effectively at least a 14-bit sensor in most cases.
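
(For illustration, here is a minimal Python sketch of that rule of thumb; the relation follows from averaging N subframes reducing noise by √N, i.e. roughly 0.5·log2(N) extra bits, and the 12-bit / 16-sub numbers are just the example from this post.)

```python
# Minimal sketch of the stacking rule of thumb above: averaging N subframes
# reduces noise by sqrt(N), which is often treated as gaining
# 0.5 * log2(N) bits of effective depth.
import math

def subs_needed(bit_increase):
    """Subframes needed to gain `bit_increase` bits of effective depth."""
    return 2 ** (2 * bit_increase)

def effective_bits(adc_bits, num_subs):
    """Approximate effective bit depth of a stack of `num_subs` subframes."""
    return adc_bits + 0.5 * math.log2(num_subs)

print(subs_needed(2))          # 16 subframes for +2 bits
print(effective_bits(12, 16))  # a 12-bit sensor stacked 16x -> ~14 bits
```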
Andy Wray
To be honest, I think this whole chase for bigger numbers is a way for companies to make more money, but it doesn't necessarily result in better images.

It's like the digital camera megapixel thing … I have some wonderful photos taken of my daughter when she was a toddler using a 1.4M pixel Olympus 1400XL that look almost as good as my 24M pixel Sony that I use today.  OK, maybe not quite as sharp, but most people don't notice the difference looking at photos around my house.

I would suggest choosing a camera that has the right sensor size and resolution for your OTA for what you want to image and then getting one with a good well depth and sensitivity.
Ruediger
Hi, 

I am a bit confused by the answers because it seems everybody is using bit depth in a different context.

I think it is necessary to define which bit depth is meant. What the OP quotes is the bit depth of the AD converters. Please do not mix it up with full well, quantum efficiency, or even resolution or the values of a pixel during stacking.

For the final result, more than 16 bits makes no sense since there are no displays that can produce so many colors. Also, the human eye cannot distinguish that many gradations (actually far fewer). A higher bit depth only makes sense when you do a tone mapping and compress the histogram, but there you lose information again. This refers to the output, not processing, where you may need a higher bit depth.

Concerning the ADC bit depth: it should be able to cover the full-well range, so that each detected photon resp. electron can be reflected in a different numeric output. Or in other words: unity gain in all situations, with full well ≤ the ADC's maximum range.
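
(A rough sketch of that criterion, assuming unity gain of 1 e-/ADU; the full-well figures below are made-up examples, not the specs of any particular camera.)

```python
# Sketch of the "ADC should cover full well" criterion: at a gain of
# 1 e-/ADU, the ADC needs roughly ceil(log2(full_well)) bits so that every
# electron count maps to a distinct output value. Full-well numbers below
# are arbitrary examples.
import math

def min_adc_bits(full_well_e, gain_e_per_adu=1.0):
    """Smallest ADC bit depth whose range covers full well at the given gain."""
    levels_needed = full_well_e / gain_e_per_adu
    return math.ceil(math.log2(levels_needed + 1))

print(min_adc_bits(20_000))   # 15 bits -> a 16-bit ADC covers it easily
print(min_adc_bits(50_000))   # 16 bits
print(min_adc_bits(100_000))  # 17 bits -- deeper wells would need wider ADCs
```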

CS
Rüdiger
Andy Wray
Ruediger:
Hi, 

I am a bit confused by the answers because it seems everybody is using bit depth in a different context.

I think it is necessary to define which bit depth is meant. What the OP quotes is the bit depth of the AD converters. Please do not mix it up with full well, quantum efficiency, or even resolution or the values of a pixel during stacking.

For the final result, more than 16 bits makes no sense since there are no displays that can produce so many colors. Also, the human eye cannot distinguish that many gradations (actually far fewer). A higher bit depth only makes sense when you do a tone mapping and compress the histogram, but there you lose information again. This refers to the output, not processing, where you may need a higher bit depth.

Concerning the ADC bit depth: it should be able to cover the full-well range, so that each detected photon resp. electron can be reflected in a different numeric output. Or in other words: unity gain in all situations, with full well ≤ the ADC's maximum range.

CS
Rüdiger

You are right, my analogy of digital camera pixels may have taken it off-topic ... I just meant, don't get hung up on numbers. I do agree with you that 16 bits per channel (as he is considering a 16 bit mono) is more than he would ever need.
Tommy Blomqvist
There seems to be little need to go higher than 16 bits per channel as that doesn't improve the result much. That said, each type of cone cell in the human eye can register around 100 different shades, making the eye capable of registering around 1 million colours.

16 bit - 65 536
24 bit - 16 777 216
32 bit - 4 294 967 296

So maybe 24 bit would be useful (and 32 bit overkill), but it also depends on how the camera is using the levels with filters and/or a Bayer mask…
Ruediger
Tommy Blomqvist:
There seems to be little need to go higher than 16 bits per channel as that doesn't improve the result much. That said, each type of cone cell in the human eye can register around 100 different shades, making the eye capable of registering around 1 million colours.

16 bit - 65 536
24 bit - 16 777 216
32 bit - 4 294 967 296

So maybe 24 bit would be useful (and 32 bit overkill), but it also depends on how the camera is using the levels with filters and/or a Bayer mask...

Hi Tommy,

unfortunately you won’t benefit from more pixel depth. Standard office displays have a LUT (look-up table, which maps values to the pixel signal) of 8-10 bits. High-end NEC graphics displays have a 14-bit LUT and Eizo’s graphics displays a 16-bit one. These LUTs compress and truncate your color space within their limits. Even comparing the 14- and 16-bit LUTs, there is no difference visible to the human eye; it is more or less a selling argument.

Increasing bit depth is a purely theoretical aspect with little to no practical advantage. You won’t notice a difference. Moreover, the question is: will the person who is looking at your picture have such a device too? It also needs hardware calibration to display it properly.

CS
Rüdiger

addendum:
I did a quick search to explain the roughly 2 million colors the human eye can distinguish. It is capable of:
20 saturation steps
200 hues
500 brightness steps

If you multiply 20 × 200 × 500 you come up with 2 million variations. That means 8 bits per channel are already sufficient to map it. That is the reason why RGB works with 8 bits per channel. If you work in other color spaces, e.g. Lab or CMYK, you need different bit depths to map them.
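
(Just to make the arithmetic explicit, a tiny sketch using the approximate perceptual figures quoted above.)

```python
# Quick check of the figures above (rough perceptual estimates, not exact).
saturation_steps = 20
hues = 200
brightness_steps = 500

distinguishable = saturation_steps * hues * brightness_steps
rgb_8bit_levels = (2 ** 8) ** 3  # 8 bits per channel, three channels

print(distinguishable)   # 2000000 distinguishable variations
print(rgb_8bit_levels)   # 16777216 -- comfortably more than the eye resolves
```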

update: rephrased some ambiguous phrases and clarified some points.
Tommy Blomqvist
Ruediger:
Tommy Blomqvist:
There seems to be little need to go higher than 16 bits per channel as that doesn't improve the result much. That said, each type of cone cell in the human eye can register around 100 different shades, making the eye capable of registering around 1 million colours.

16 bit - 65 536
24 bit - 16 777 216
32 bit - 4 294 967 296

So maybe 24 bit would be useful (and 32 bit overkill), but it also depends on how the camera is using the levels with filters and/or a Bayer mask...

Hi Tommy,

unfortunately you won’t benefit from more pixel depth. Standard office displays have a LUT (look-up table, which maps values to the pixel signal) of 8-10 bits. High-end NEC graphics displays have a 14-bit LUT and Eizo’s graphics displays a 16-bit one. These LUTs compress and truncate your color space within their limits. Even comparing the 14- and 16-bit LUTs, there is no difference visible to the human eye; it is more or less a selling argument.

Increasing bit depth is a purely theoretical aspect with little to no practical advantage. You won’t notice a difference. Moreover, the question is: will the person who is looking at your picture have such a device too? It also needs hardware calibration to display it properly.

CS
Rüdiger

addendum:
I did a quick search to explain the roughly 2 million colors the human eye can distinguish. It is capable of:
20 saturation steps
200 hues
500 brightness steps

If you multiply 20 × 200 × 500 you come up with 2 million variations. That means 8 bits per channel are already sufficient to map it. That is the reason why RGB works with 8 bits per channel. If you work in other color spaces, e.g. Lab or CMYK, you need different bit depths to map them.

update: rephrased some ambiguous phrases and clarified some points.

Yes, you are right, but we must also look (!) at the fact that cameras and eyes normally don't work in exactly the same way.
And the fact that what the camera captures will be processed and put on a screen and THEN captured again by someone's eyes.

Look at a dark movie even on a modern high-end TV and you will notice that the luminance comes in steps, making the space where the aliens live look like a dark onion 😉

But in real life I suspect that 16 bits per channel is what we are gonna end up with (with maybe 14 bits in lower end cameras) in both mono and OSC-cameras. 

But I am mostly guessing so I can be terribly wrong.
Ruediger
Tommy Blomqvist:
But in real life I suspect that 16 bits per channel is what we are gonna end up with (with maybe 14 bits in lower end cameras) in both mono and OSC-cameras.

Yes, I share your prediction.
Only one thing could become a game changer: a significant increase in full well. Then we would need wider ADCs.

CS
Ruediger
Soothsayerman
I look at it like this: do most people even get the most out of the currently available color depth anyway?

Do you process your images in the sRGB color gamut or the ProPhoto RGB color gamut? Do you process in 16 bits at a minimum? Do you use destructive file formats, and how destructive is your noise reduction? Things to think about.

What are the capabilities of your monitor, and do you have it calibrated? If not, and you are serious, you need to get a color calibrator for your monitor. You wouldn't take pictures through an uncollimated telescope, so why isn't your monitor calibrated? How do you know exactly what you are looking at? You can get a Spyder color calibrator.

The human eye can detect more color gradations than can be produced by the sRGB color palette. So when processing your images you need to be using ProPhoto RGB and not the sRGB color palette, with at least a 16-bit color depth. This will yield a wider color gamut. Does everyone do this? I have no idea.
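
(A hypothetical illustration of the 16-bit processing point, using NumPy; the faint gradient and the aggressive stretch are invented purely to show the quantization effect, not any real workflow.)

```python
# Hypothetical demo: the same strong stretch applied to data quantized to
# 8 bits vs 16 bits leaves far fewer distinct levels (visible as banding)
# in the 8-bit case. The ramp and stretch factor are arbitrary.
import numpy as np

faint_ramp = np.linspace(0.0, 0.05, 1_000_000)  # a nearly-black gradient

def quantize_and_stretch(data, bits, stretch=20.0):
    levels = 2 ** bits - 1
    quantized = np.round(data * levels) / levels   # simulate limited bit depth
    return np.clip(quantized * stretch, 0.0, 1.0)  # aggressive linear stretch

print(len(np.unique(quantize_and_stretch(faint_ramp, 8))))   # ~14 levels -> banding
print(len(np.unique(quantize_and_stretch(faint_ramp, 16))))  # thousands -> smooth
```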

To get the maximum utility you need to shoot in mono with 6 "colors": RGB + SII, OIII and Ha, or whatever bands you think will give you the widest color gamut, if the widest is what you are aiming for, of course.

What the "best practices" are would be a fruitful debate, but I get the feeling that there is more to be gained from technique because I am not sure how hard very generally the current technology envelope is being pushed.  I know I have a ways to go before I hit the current technology limits, my own skill level is mediocre. 

https://en.wikipedia.org/wiki/Color_depth#12-bit_color
Guillermo de Miranda
I would much rather have the newer generation focus on noise reduction. Getting rid of the cooler (which adds a lot of cost) would be the true game changer.
Lynn K
16 bit CCD cameras have been out for over 20 years. The SBIG 237a, made pre-2000, was 16 bit. I have that camera. I think Rudiger's comments are right on. 16 bit is fairly new to CMOS due to its individual per-pixel AD converters. They solved that problem and have now caught up to CCD. In my opinion, having been a CCD imager for 17 years and now going to CMOS like many others, the biggest advantage of CMOS is its low read noise, which has changed the way imagers think about acquisition. Also, and maybe more importantly, the high QE and small pixels on large chips.
Lynn K.
Rodrigo Roesch
I believe the next generation of cameras will have more efficient sensors, such as the new stacked sensor from Sony. They came out with a new CMOS image sensor technology with a 2-layer transistor pixel that "Widens Dynamic Range and Reduces Noise by Approximately Doubling Saturation Signal Level".
Ruediger
Rodrigo Roesch:
I believe the next generation of cameras will have more efficient sensors, such as the new stacked sensor from Sony. They came out with a new CMOS image sensor technology with a 2-layer transistor pixel that "Widens Dynamic Range and Reduces Noise by Approximately Doubling Saturation Signal Level".

Hello Rodrigo,

can you please help me: what do you mean by "efficient"? Some sensors reach 90% QE (some even more), which means they are already very close to perfect. Hence there is not much room for improvement. Full well can be improved, which is reflected directly in increased dynamic range. I totally agree with that. But efficiency 🤔

CS
Rüdiger
Rodrigo Roesch
More efficient at capturing different wavelengths. It is true that some are at 90%, but it is not 90% all across the spectrum, so yes, there is still room for efficiency. In addition, dark current improvement: we need to cool down the camera to reduce it; wouldn't it be good not to have to do that?
Ruediger
Rodrigo Roesch:
More efficient at capturing different wavelengths. It is true that some are at 90%, but it is not 90% all across the spectrum, so yes, there is still room for efficiency. In addition, dark current improvement: we need to cool down the camera to reduce it; wouldn't it be good not to have to do that?

Definitely a valid point. But this will be a real challenge.
Rodrigo Roesch
I guess at this point the new cameras are not the limiting factor; it is the growing light pollution 🙂, but this is a different topic.
Björn Arnold
I believe the next trend will be lucky DSO imaging, where cameras run without additional cooling and exposure times are a few seconds or even sub-second. Goodbye guiding and high-end encoders. For that to come, read noise will have to be further reduced. Bit depth won't have to be higher than 10 to 12 bit; maybe even 8 bit will do it. Statistics will then do all the heavy lifting.
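
(A back-of-envelope sketch of why low read noise matters for that scenario, using the usual shot-noise SNR estimate; all the rates below are invented example values, not measurements.)

```python
# Rough SNR comparison: one long exposure vs. many short subs with the same
# total integration time. With non-negligible read noise the short subs lose;
# as read noise approaches zero they catch up. All values are made up.
import math

def stack_snr(signal_rate, sky_rate, read_noise, sub_len_s, num_subs):
    """Approximate SNR of `num_subs` stacked subs of `sub_len_s` seconds each."""
    signal = signal_rate * sub_len_s * num_subs
    variance = num_subs * (signal_rate * sub_len_s
                           + sky_rate * sub_len_s
                           + read_noise ** 2)
    return signal / math.sqrt(variance)

print(stack_snr(0.5, 2.0, 1.5, 300, 1))   # one 300 s sub          -> ~5.5
print(stack_snr(0.5, 2.0, 1.5, 1, 300))   # 300 x 1 s, 1.5 e- RN   -> ~4.0
print(stack_snr(0.5, 2.0, 0.2, 1, 300))   # 300 x 1 s, 0.2 e- RN   -> ~5.4
```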

Overall, the vast majority of us will use sensors that are produced for the consumer market or machine vision systems in industry. Look at what's going on there and you'll get an idea of what we will be able to buy in the future.

The "next generation" is already here. CCDs have been improved but unnoticed to most of us as hardly anybody here would pay the money for it. 100k for a sub APS-C sensor? Nope.