Presentation "Noise and Astrophotography"

Christian Großmann avatar
Hello fellow astro photographers,

the discussions on the internet about choosing the right sub-exposure time always have the potential to heat up people's exchange of opinions. Although I may regret writing these lines, I thought I'd share with you a presentation on YouTube's "The Astro Imaging Channel" that was released some days ago. It is one more video on the topic, and I find this one especially interesting. In case you do too, I'd like to share the link:

https://www.youtube.com/watch?v=1PUTWfWgD0g

In NO way do I want to start another of these discussions here. Everybody is free to draw his own conclusions, and as we all know, there are a lot of opinions out there. I have already drawn my own conclusions, and the presentation clears up some questions I had in mind but didn't find the courage to ask, for the reasons mentioned above. Although I have seen some other presentations (like the one from Dr. Robin Glover, or another one by @John Hayes, also on TAIC), this one lines up with the topic and is at least a third explanation that helped me combine some puzzle pieces of the stuff I am not really into. So I thought it might be interesting for some other members like me, too.

But please, if this topic bothers you, just ignore this entry and be nice.

CS
Christian
Oscar avatar
Christian Großmann:
In NO way do I want to start another of these discussions here.

What is better? Short exposures or long exposures?

hehehehe!! 
Byron Miller avatar
My only complaint is that I wish all these great YouTube videos were blogs instead. Video just isn't my thing.
Christian Bennich avatar
Thank you @Christian Großmann - always great with more perspectives. 
I will give it a view/listen now, as the "darkness" (not much these days) settles in DK.
Scott Badger avatar
I thought it was one of the best presentations on noise, calibration, and exposure that I’ve seen.

Cheers,
Scott
Byron Miller avatar
One thing that kind of "lifted my brow" was the dancing around showing the multiple sub exposures of the same galaxy after talking about integration as being averaging. SNR itself isn't averaged; it improves with the square root of the number of subs in the final stack.

So, for example, if you hit diminishing returns at 100 subs of 5-minute exposures and your individual sub is 10 SNR, then the final stack is 100 SNR, not the average of 10 that some may think based on the statements around individual sub quality.

500 minutes of integration is 500 minutes of integration, regardless of sub exposure length. There is also a diminished return on individual subs, and from my experience, noise rejection and averaging work better the more subs you take and the more you dither. Long subs mean fewer dithers. I did like that he talked about full well and contrast - and that's why I'm interested in maybe creating an algorithmic exposure time based on full-well preservation and incoming light (calculated from a prior exposure), and then always averaging, say, 10% above diminished returns vs. dark calibration of fixed times. I'd also add that averaging varying levels of signal is an additive effect rather than averaging the same level of signal - for example, as the target hits better parts of sky, or sky glow changes/diminishes, or it gets higher above the horizon, the average will improve. The more subs (the closer to diminished returns), the more this average improves. Fewer subs have less to average, so one bad sub or losing a sub can have a larger impact... but that's not entirely objective, since if you hit diminished returns anyway, you'd want to make sure you're imaging to full well and target quality, not the individual sub SNR as its own measure.

As much as he has the same notions as I do about overcoming read noise, I almost feel the same about dark noise. I'm not anti-dark-calibration, but if the PSNR and such can be calibrated out or reduced with flat calibration, then you may not need dark calibration, as it introduces noise. The less you dither and the fewer subs you run, the more you *must* do dark calibration. I'd love to run some analysis and see how much dark calibration is needed on modern CMOS if your goal is to always image, say, 10% above diminished returns and use dithering.

The rest of the math/graphs I really enjoyed. A lot of people don't seem to understand diminishing returns, but I felt that point kind of got lost when he again focused on the individual subs ;)
Jon Rista avatar
Byron Miller:
My only complaint is that I wish all these great YouTube videos were blogs instead. Video just isn't my thing.

Same here! I have always been a better and more attentive reader. It's tough to sit and watch something that's long. Further, with text, I can always find what I need very quickly, especially if it's indexed; I can snip parts of it out and share it very easily, etc. With video, any of those things are so much harder to do.
Jon Rista avatar
Byron Miller:
One thing that kind of "lifted my brow" was the dancing around showing the multiple sub exposures of the same galaxy after talking about integration as being averaging. SNR itself isn't averaged; it improves with the square root of the number of subs in the final stack.

So, for example, if you hit diminishing returns at 100 subs of 5-minute exposures and your individual sub is 10 SNR, then the final stack is 100 SNR, not the average of 10 that some may think based on the statements around individual sub quality.

500 minutes of integration is 500 minutes of integration, regardless of sub exposure length. There is also a diminished return on individual subs, and from my experience, noise rejection and averaging work better the more subs you take and the more you dither. Long subs mean fewer dithers. I did like that he talked about full well and contrast - and that's why I'm interested in maybe creating an algorithmic exposure time based on full-well preservation and incoming light (calculated from a prior exposure), and then always averaging, say, 10% above diminished returns vs. dark calibration of fixed times. I'd also add that averaging varying levels of signal is an additive effect rather than averaging the same level of signal - for example, as the target hits better parts of sky, or sky glow changes/diminishes, or it gets higher above the horizon, the average will improve. The more subs (the closer to diminished returns), the more this average improves. Fewer subs have less to average, so one bad sub or losing a sub can have a larger impact... but that's not entirely objective, since if you hit diminished returns anyway, you'd want to make sure you're imaging to full well and target quality, not the individual sub SNR as its own measure.

As much as he has the same notions as I do about overcoming read noise, I almost feel the same about dark noise. I'm not anti-dark-calibration, but if the PSNR and such can be calibrated out or reduced with flat calibration, then you may not need dark calibration, as it introduces noise. The less you dither and the fewer subs you run, the more you *must* do dark calibration. I'd love to run some analysis and see how much dark calibration is needed on modern CMOS if your goal is to always image, say, 10% above diminished returns and use dithering.

The rest of the math/graphs I really enjoyed. A lot of people don't seem to understand diminishing returns, but I felt that point kind of got lost when he again focused on the individual subs ;)

Hmm, there are a few things in here I feel need to be addressed.

There is a key caveat about the following:
500 minutes of integration is 500 minutes of integration, regardless of sub exposure length.

It's just not that simple. This would only be true if you had zero read noise. Noise in our images is multi-sourced...it gets introduced all over the place. Most of the key sources of noise are "temporally random" in nature with "temporal growth", meaning they are random in time and grow over time. Since their growth is time dependent, it doesn't matter how many subs, only the total time integrated, to determine how much of this kind of noise you integrate in total. This is shot noise...photon (from object and light pollution), dark current, etc. One source of noise, however (which itself is also actually multi-sourced, at various different locations in the electronics of the readout pipeline, which manufacturers just lump into one term), is temporally random, but not of temporal growth: read noise. Read noise in fact may have temporally random aspects and some that are not so temporally random, but the key is that you get ONE UNIT of read noise on EVERY READ.

So read noise is temporally randomish in nature, but its growth factor is by sub count. Therefore, 500 minutes of integration is NOT just 500 minutes of integration. It depends on how many subs you acquired to get that 500 minutes. For a given camera, 50 subs is going to have less read noise than 500 subs. Even if your read noise is very low, let's say 1.85e-, which is pretty darn low in the grand scheme of things...it still matters. With 50 subs, you would have about 13e- total read noise. With 500 subs, you would have over 41e- total read noise. That is NOT a trivial difference. Assuming you had a total of 500 minutes of data in either case, you would have the same object signal, Sobj. However, the 50 sub image, assuming you weren't wildly clipping, is going to have a better SNR than the 500 sub image.
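If it helps to see the arithmetic, here is a minimal Python sketch of that quadrature growth, using the same 1.85e- example read noise (the numbers are illustrative, not from any particular camera):

```python
import math

read_noise = 1.85  # e- per read (example value from above)

for subs in (50, 500):
    # One unit of read noise per read, added in quadrature:
    # the total grows with the square root of the sub count.
    total = math.sqrt(subs) * read_noise
    print(f"{subs} subs -> {total:.1f} e- total read noise")

# 50 subs  -> 13.1 e- total read noise
# 500 subs -> 41.4 e- total read noise
```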

Due to read noise, YES, even today with CMOS cameras, stacking fewer, longer subs is always going to be better. Better than what, exactly, is going to be somewhat relative, but generally speaking there is a balance point...where you reach the optimal balance between sub length and sub count. With modern CMOS cameras, that balance has been shifted SIGNIFICANTLY from where it was, and thankfully, too!! It used to be people might stack 10-15 super long subs, and if they lost just one sub, that could mean losing 30 minutes or even much more of their total integration. CMOS cameras have definitely changed that, but...they haven't changed the fundamental rules, nor the underlying physics. There is STILL a balance point, and imagers should still seek to find what that balance point is for each of the cameras they use.

So it's not just as simple as 500 minutes is 500 minutes regardless of exposure length. Read noise growth is relative to sub count (Csubs), and therefore the more subs you have, the more read noise you have. It IS true that Sobj is the same for a given total amount of exposure time, but the SNR of Sobj over all your noise sources would NOT be the same for different sub counts for that total exposure time.

Dark current (and thus dark current shot noise, SQRT(Sdark)), UNLIKE read noise, is a temporal noise. Temporal in nature, temporal in growth. UNLIKE read noise, for a given camera at a given temperature, you get the same amount of dark current REGARDLESS of how many subs you stack. The same goes for photon shot noise (SQRT(Sobj)) which is a temporal noise. You mentioned something else:
As much as he has the same notions as i do about overcoming read noise, I almost feel the same about dark noise.  I'm not anti-dark calibration, but if the PSNR and such can be calibrated out or reduced with flat calibration then you may not need dark calibration as it introduces noise.  The less you dither and the fewer subs you run you *must* do dark calibration.  I'd love to run some analysis and see how much dark calibration is needed on modern CMOS if your goal is to always image say 10% above diminished returns and use dithering.

Dark calibration actually has nothing to do with dark noise (which I assume is dark shot noise, SQRT(Sdark)). Calibration has everything to do with FPN, and in the case of darks that would be DFPN (Dark Fixed Pattern Noise). You actually have it backwards...with fewer subs, you would usually have stronger object signal that buries the DFPN deeper, rendering it LESS of a problem. FPN, Fixed Pattern Noise, is a different kind of noise. It is not temporally random in terms of scale; it grows the same amount in a given amount of time. Its growth is temporal, meaning it gets stronger and stronger over time (you accumulate more and more of it over time.) There is a BASELINE amount of DFPN in every frame, though, dominated by your bias signal, and the more frames you stack, the stronger this baseline DFPN is going to get. If you do not calibrate, then the more subs you stack, the stronger this form of dark fixed pattern noise is going to get. The shallower the sub (usually the case if you are stacking more subs), the less object signal you have to bury the dark signals, and the more likely the DFPN is going to show through your background signal.

The entire reason we calibrate is to remove DFPN, so that it will simply no longer be present and thus be unable to grow. Dark current shot noise is in fact a noise term we simply CANNOT correct with dark calibration. It is a temporally random noise with temporal growth. Being random in time (and actually also in space, as it usually has a Poisson distribution), calibration can do nothing to reduce it (the random terms captured in the master dark are in fact going to increase the total random noise in each calibrated sub, which is why we try to stack at least a certain number of frames into our calibration masters, so we can average that remnant down to a minimum.)



Finally, a note on integration as well. Technically speaking, we could determine SNR with averaging, if we wanted to. It is in fact effectively what's going on when we stack, which uses averaging rather than just addition (we would run out of numerical space if we simply added all the data.) 

Consider the fairly ubiquitous formula for computing basic SNR (this excludes any aspect of FPN, but with proper calibration any remnant FPN is going to be small enough that it won't matter in most use cases):
SNR = (Sobj * Csubs)/SQRT(Csubs * (Sobj + Sdark + Nread^2))

Pretty basic formula. Averaging simply transforms that, to this:
SNR = Sobj/(SQRT(Csubs * (Sobj + Sdark + Nread^2)) / Csubs)

The latter formula would in fact better demonstrate what stacking does, and more clearly explain how we can "average down" the noise terms. With modern averaged stacking algorithms, we aim to hold the signal constant, while reducing its uncertainty. Which is exactly what the second formula here demonstrates. 
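As a quick numeric check that the two forms agree, here is a small Python sketch (the per-sub values for Sobj, Sdark and Nread are made up for illustration):

```python
import math

Sobj, Sdark, Nread = 100.0, 10.0, 1.85  # per-sub e- (illustrative values)
Csubs = 50

# Summed form: total signal over total noise.
snr_summed = (Sobj * Csubs) / math.sqrt(Csubs * (Sobj + Sdark + Nread**2))

# Averaged form: per-sub signal over the averaged-down uncertainty.
snr_averaged = Sobj / (math.sqrt(Csubs * (Sobj + Sdark + Nread**2)) / Csubs)

print(f"{snr_summed:.3f} == {snr_averaged:.3f}")  # same number either way
```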

FPN can be shown in these formulas as well. Usually there is some small remnant DFPN and FPN term after calibration, but if your calibration masters are well crafted, then those remnant terms will usually not matter until you are well into the hundreds if not thousands of subs being integrated, so we can usually leave them out. If you are also dithering enough, any remnant FPN term is going to in essence be converted into a temporally random noise term in the stack as well, and one that usually has a sub-electron scale.
Byron Miller avatar
Jon Rista:
It's just not that simple. This would only be true if you had zero read noise. Noise in our images is multi-sourced...it gets introduced all over the place. Most of the key sources of noise are "temporally random" in nature with "temporal growth", meaning they are random in time and grow over time. Since their growth is time dependent, it doesn't matter how many subs, only the total time integrated, to determine how much of this kind of noise you integrate in total. This is shot noise...photon (from object and light pollution), dark current, etc. One source of noise, however (which itself is also actually multi-sourced, at various different locations in the electronics of the readout pipeline, which manufacturers just lump into one term), is temporally random, but not of temporal growth: read noise. Read noise in fact may have temporally random aspects and some that are not so temporally random, but the key is that you get ONE UNIT of read noise on EVERY READ.


Thanks for the reply Jon!

Nothing is simple, for sure. I know the read noise is there for every sub. The one unit doesn't mean much on its own in reality though, right?

The contribution of read noise is not cumulative in a simple additive sense. Instead, it adds in quadrature, meaning its impact is reduced when averaging multiple exposures.  

In B1 and B2 skies, I'd image "sufficiently long" since you have much lower shot noise... but in B3 and "worse" skies, shot noise is already so sufficient that I agree with the video that read noise may as well not exist. (Noise is funny math... not quite cumulative even on a single sub... I forgot the formula, but it's in the video.)
Jon Rista:
Dark calibration actually has nothing to do with dark noise (which I assume is dark shot noise, SQRT(Sdark)). Calibration has everything to do with FPN, and in the case of darks that would be DFPN (Dark Fixed Pattern Noise). You actually have it backwards...with ewer subs, you would usually have stronger object signal that buries the DFPN deeper, rendering it LESS of a problem. FPN, Fixed Pattern Noise, is a different kind of noise. It is not temporally random in terms of scale, it grows the same amount in a given amount of time. Its growth is temporal, meaning it gets stronger and stronger over time (you accumulate more and more of it over time.) There is a BASELINE amount of DFPN in very frame, though, dominated by your bias signal, and the more frames you stack, the stronger this baseline DFPN is going to get. If you do not calibrate, then the more subs you stack, the stronger this form of dark fixed pattern noise is going to get. The shallower the sub (usually the case if you are stacking more subs), then the less object signal you have to bury the dark signals, and the more likely the DFPN is going to show through your background signal.


In the video they talked about dark calibration calibrating out DFPN; I should have been more clear. What I'm more curious about is whether, if you dither, image to diminishing returns on total integration, and image long enough to preserve full well and contrast, the image quality may be as good as or better than imaging according to your dark calibration library or fixed times.

If you don't have many subs, then dithering alone won't remove all the *FPN, and dark calibration would be a necessity. If you have a lot of subs and hit that "10% above diminishing returns" as a sort of goal, I'm curious what the output would be. Again, on my premise that your sub is long enough for your skies and full well.

In cases where diminishing returns on integration is many, many hours, lots of people choose to sacrifice contrast/well depth so they don't need to integrate for days... so "nothing is simple for sure". 5 or 10 minute subs for 10+ hours is a lot of subs... I'm happy to throw a Threadripper at it and let it rip in any case, vs. "meh, too long to compute, I'm going to blow out my stars because my darks are 10 mins".

Me personally, I like how stacking averages out noise. I've seen lots of people use bad dark frames which subtract from the quality of their image and introduce more noise. Even dark frames are a law of averages, aren't they? People have religious debates about when enough is enough or not enough. Some people say 20, some people say 50. If you dither every sub by a large enough margin, won't the averaging of your subs, if sufficiently into "diminishing returns", achieve the same correction of FPN as, say, making a master dark with some average number of subs?

When I use CCD data, you bet I use dark frames, or on older CMOS with bright amp glow. When I'm on my 6200, I'm still experimenting. I have a dark library because I'm not anti-darks, but I really want to see how I can "unbox" imaging, use modern integration, bias towards full-well imaging, and see how that turns out.
Jon Rista:
FPN can be shown in these formulas as well. Usually there is some small remnant DFPN and FPN term after calibration, but if your calibration masters are well crafted, then those remnant terms will usually not matter until you are well into the hundreds if not thousands of subs being integrated, so we can usually leave them out. If you are also dithering enough, any remnant FPN term is going to in essence be converted into a temporally random noise term in the stack as well, and one that usually has a sub-electron scale.


This is exactly what I want to experiment with.    "Well crafted" can leave a lot up to the imagination though.

Let's say I image from horizon to horizon; my SNR is increasing higher in the sky and then going back down as it gets back closer to the horizon. If you have more subs to average, couldn't the SNR of the final stack be better than fewer, longer subs that have smaller variation of noise between them because of higher saturation on the sensor, but still suffer from the same cumulative effects of noise/signal as it changes? The sky is changing as my scope goes from horizon to horizon, but in the video, it made it sound like heavily saturating a single sub was always preferable, and I struggle with that.

What of the impacts of seeing and aperture (is your scope within the average turbulence??) and pixel scale and scaling or drizzling? If you're undersampled, 2x drizzling is better with more subs, isn't it? If you're oversampled, resampling averages again, right?

For example, doesn't Gaussian resampling benefit more from more subs? I'm interested in playing around with these because, like you said at the beginning, it's not so simple...

Thanks again for the response, always great to cross paths with ya!
Frank Alvaro avatar
Scott Badger:
I thought it was one of the best presentations on noise, calibration, and exposure that I’ve seen.

Cheers,
Scott

 Me too, it was excellent.
Byron Miller avatar
It's not a bad video, but it's pretty much verbatim what Dr. Glover shared in 2019, with a bit more bias toward individual sub length ;) I did appreciate the deeper discussion of the *FPN in this video, and would love to have Dr. Glover do a 2024 version of his video with 6200/2600/533 sensors (and perhaps ones on the horizon).

https://youtu.be/3RH93UvP358?si=2SQD6bkBME4fucia
Arun H avatar
Byron Miller:
The contribution of read noise is not cumulative in a simple additive sense. Instead, it adds in quadrature, meaning its impact is reduced when averaging multiple exposures.


Hi,

This is not correct, or maybe not correctly expressed. Yes, read noise adds in quadrature. But the impact of read noise is never "reduced by averaging". As the number of subs adding to a total integration time is increased, the contribution of read noise will always increase. The total noise contribution in the final image is

SQRT(N*R^2+S+D)

Where S is the signal, D is the dark current, and R is the read noise in each sub. Increasing the number of subs will always increase the noise.
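As a sanity check of that expression, here is a short Python sketch; S, D and R below are invented example totals, and only the read noise term depends on the sub count:

```python
import math

S, D, R = 5000.0, 500.0, 1.6  # total signal e-, total dark e-, read noise e-/sub (examples)

for N in (50, 100, 500):
    total_noise = math.sqrt(N * R**2 + S + D)
    print(f"N={N:4d} subs -> total noise {total_noise:.1f} e-")

# S and D are fixed by the total integration time; only the N*R^2 term
# grows as the same time is split into more, shorter subs.
```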
Jon Rista avatar
Byron Miller:
Jon Rista:
It's just not that simple. This would only be true if you had zero read noise. Noise in our images is multi-sourced...it gets introduced all over the place. Most of the key sources of noise are "temporally random" in nature with "temporal growth", meaning they are random in time and grow over time. Since their growth is time dependent, it doesn't matter how many subs, only the total time integrated, to determine how much of this kind of noise you integrate in total. This is shot noise...photon (from object and light pollution), dark current, etc. One source of noise, however (which itself is also actually multi-sourced, at various different locations in the electronics of the readout pipeline, which manufacturers just lump into one term), is temporally random, but not of temporal growth: read noise. Read noise in fact may have temporally random aspects and some that are not so temporally random, but the key is that you get ONE UNIT of read noise on EVERY READ.


Thanks for the reply Jon!

Nothing is simple, for sure. I know the read noise is there for every sub. The one unit doesn't mean much on its own in reality though, right?

The contribution of read noise is not cumulative in a simple additive sense. Instead, it adds in quadrature, meaning its impact is reduced when averaging multiple exposures.  

In B1 and B2 skies, I'd image "sufficiently long" since you have much lower shot noise... but in B3 and "worse" skies, shot noise is already so sufficient that I agree with the video that read noise may as well not exist. (Noise is funny math... not quite cumulative even on a single sub... I forgot the formula, but it's in the video.)


You seem to be missing the fact that we are talking about relative exposure times. You mentioned that 500 minutes is 500 minutes...but that is not the case, and the only thing that matters there is how much total read noise you have. If you are choosing short exposures and stacking lots of them, then you're choosing to reduce the exposure per frame, and thus how much you are swamping read noise. It's a double-edged sword...MORE subs, thus more read noise, and read noise is ALSO swamped to a lesser degree.

In the end, 500 minutes with lots of short subs is going to be noisier than 500 minutes with longer subs...unless, somehow, you are swamping the read noise SIGNIFICANTLY in either case. If that were the case, then I would offer that most likely you are clipping a lot of signal on the other end of the dynamic range, especially with the longer subs.

There are always two key forces in play that drive choosing an optimal exposure length: read noise, pushing you to longer exposures, and clipping, pushing you to shorter exposures. Somewhere between those two factors is a balance point at which you find the optimal exposure length per sub.

Anyway, the point I was trying to make is that 500 minutes is not just 500 minutes. There IS a difference in the amount of total noise you have, and it depends on how many subs you are stacking. Previously I used a 1.85e- read noise level to determine that with 50 subs vs. 500 subs, the difference in read noise was quite large, 13e- vs. 41e-, and a difference that shouldn't be ignored. Especially considering that to use shorter exposures, you are most likely going to be swamping read noise by a lesser degree, which increases the impact that 41e- read noise is going to have. The situation could be worse...read noise levels are often higher than 2e- with CMOS cameras.
Byron Miller:
Jon Rista:
Dark calibration actually has nothing to do with dark noise (which I assume is dark shot noise, SQRT(Sdark)). Calibration has everything to do with FPN, and in the case of darks that would be DFPN (Dark Fixed Pattern Noise). You actually have it backwards...with fewer subs, you would usually have stronger object signal that buries the DFPN deeper, rendering it LESS of a problem. FPN, Fixed Pattern Noise, is a different kind of noise. It is not temporally random in terms of scale; it grows the same amount in a given amount of time. Its growth is temporal, meaning it gets stronger and stronger over time (you accumulate more and more of it over time.) There is a BASELINE amount of DFPN in every frame, though, dominated by your bias signal, and the more frames you stack, the stronger this baseline DFPN is going to get. If you do not calibrate, then the more subs you stack, the stronger this form of dark fixed pattern noise is going to get. The shallower the sub (usually the case if you are stacking more subs), the less object signal you have to bury the dark signals, and the more likely the DFPN is going to show through your background signal.


In the video they talked about dark calibration calibrating out DFPN; I should have been more clear. What I'm more curious about is whether, if you dither, image to diminishing returns on total integration, and image long enough to preserve full well and contrast, the image quality may be as good as or better than imaging according to your dark calibration library or fixed times.

If you don't have many subs, then dithering alone won't remove all the *FPN, and dark calibration would be a necessity. If you have a lot of subs and hit that "10% above diminishing returns" as a sort of goal, I'm curious what the output would be. Again, on my premise that your sub is long enough for your skies and full well.

In cases where diminishing returns on integration is many, many hours, lots of people choose to sacrifice contrast/well depth so they don't need to integrate for days... so "nothing is simple for sure". 5 or 10 minute subs for 10+ hours is a lot of subs... I'm happy to throw a Threadripper at it and let it rip in any case, vs. "meh, too long to compute, I'm going to blow out my stars because my darks are 10 mins".

Me personally, I like how stacking averages out noise. I've seen lots of people use bad dark frames which subtract from the quality of their image and introduce more noise. Even dark frames are a law of averages, aren't they? People have religious debates about when enough is enough or not enough. Some people say 20, some people say 50. If you dither every sub by a large enough margin, won't the averaging of your subs, if sufficiently into "diminishing returns", achieve the same correction of FPN as, say, making a master dark with some average number of subs?

When I use CCD data, you bet I use dark frames, or on older CMOS with bright amp glow. When I'm on my 6200, I'm still experimenting. I have a dark library because I'm not anti-darks, but I really want to see how I can "unbox" imaging, use modern integration, bias towards full-well imaging, and see how that turns out.


Dithering simply imparts a temporal randomness to ANOTHER noise term. FPN is not eliminated with dithering, it simply becomes random, or maybe randomish, thus allowing it to be averaged down with stacking like other random noise terms. You still have MORE NOISE if all you do is dither, though, because you still have the FPN terms.

Calibration, on the other hand, ELIMINATES the FPN terms. They are no longer there at all to add to the total noise in the image. IMO, elimination of a noise term is best, if you can achieve it. The only noise terms you can eliminate are those that are fixed, and I strongly encourage everyone to calibrate in order to eliminate those terms entirely. I also still recommend everyone still dither, as there are usually artifacts that will appear in integrations that are intrinsic to the nature of a gaussian (or poisson) noise distribution that can still be eliminated (or greatly minimized) with dithering.

Dithering is essential for optimal results. Calibration is essential for optimal results. Don't skimp on either, if you want optimal results.


Regarding diminishing returns. There are definitely diminishing returns on how much you swamp read noise. Beyond swamping it by a factor of 10x, you only gain a couple percent improvement in SNR. Those diminishing returns set in rather quickly, hence why most imagers don't bother exposing each sub beyond the 10xRN^2 criterion. In some cases, you might find that you have trouble swamping read noise that much, depending on the dynamic range of your scene, but again, you only lose a few percent if you, say, swamp by 8xRN^2.

Diminishing returns with continued integration, however, set in a lot more slowly. Every time you double integration time, you improve your SNR by about 40%. That is a lot better than a few percent. Having been someone who has integrated tens of hours per channel before, I can state from experience that it can continue to improve IQ for quite a long while before those "diminished returns" stop returning any real value. It depends a lot on what you care about capturing...sometimes, it may take 10 hours just to barely capture some signals, and doubling your integration can improve those signals quite a lot. Quadrupling your integration (i.e. 40 hours) could turn a barely perceptible signal into something you could reasonably process. A key example that comes to mind is OU4, the Squid Nebula. In about 12 hours I was able to barely resolve most of it, and it was still very noisy. Twenty hours would have improved the signal by 40%, which would still probably not have been enough. Forty hours would have doubled the signal quality, which might have been enough, but I never got that far.
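To put rough numbers on both kinds of diminishing returns, here is a hedged Python sketch: the first loop assumes a sky-limited sub where the background equals k x RN^2 (dark current ignored for simplicity), and the second is just the square-root growth of stack SNR with total time:

```python
import math

# Fraction of the ideal (read-noise-free) per-sub SNR you keep
# when the sky signal swamps the read noise by a factor k:
for k in (4, 8, 10, 20):
    kept = math.sqrt(k / (k + 1))
    print(f"swamp {k:2d}x RN^2 -> {kept:.1%} of ideal SNR")
# 10x already keeps ~95%; pushing further buys only a couple percent.

# Diminishing returns on total integration arrive far more slowly:
for hours in (10, 20, 40):
    print(f"{hours} h -> SNR x{math.sqrt(hours / 10):.2f} relative to 10 h")
# Each doubling of total time is a ~41% SNR improvement.
```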

Diminishing returns does not mean no returns, and exactly when "diminished" occurs is entirely relative to the signal(s) of interest. There is not some hard, concrete wall at, say, 10 hours, at which point there is no further value in continued integration. Diminished returns from integration set in slowly. Every imager is going to have to determine for themselves whether they need to continue or not, and do so for each and every target they image. There is almost always a fainter signal. Diminished returns on brighter signals in the same field of view say nothing about returns that may still be viable on faint signals, especially when you may not even have picked up any signal on such faint objects yet! Sometimes it can take ten(s) of hours to even begin to resolve a signal, let alone integrate enough data for a reasonable, processable SNR. This is still true with CMOS cameras. They have certainly made it a lot easier to pick up fainter signals (I remember the days when capturing something like the Soap Bubble Nebula was practically impossible, and anyone who could barely reveal it was considered a hero!) but they have not eliminated the challenge. I still find it fairly rare to find images of, say, the Orion belt and sword area that clearly depict the extensive amount of faint blue reflection nebulosity in that region...those blue reflections are very faint signals, and diminished returns on those set in FAR later than diminished returns even on the dark dust in the area.
Byron Miller:
Jon Rista:
FPN can be shown in these formulas as well. Usually there is some small remnant DFPN and FPN term after calibration, but if your calibration masters are well crafted, then those remnant terms will usually not matter until you are well into the hundreds if not thousands of subs being integrated, so we can usually leave them out. If you are also dithering enough, any remnant FPN term is going to in essence be converted into a temporally random noise term in the stack as well, and one that usually has a sub-electron scale.


This is exactly what I want to experiment with.    "Well crafted" can leave a lot up to the imagination though.

Let's say I image from horizon to horizon; my SNR is increasing higher in the sky and then going back down as it gets back closer to the horizon. If you have more subs to average, couldn't the SNR of the final stack be better than fewer, longer subs that have smaller variation of noise between them because of higher saturation on the sensor, but still suffer from the same cumulative effects of noise/signal as it changes? The sky is changing as my scope goes from horizon to horizon, but in the video, it made it sound like heavily saturating a single sub was always preferable, and I struggle with that.

What of the impacts of seeing and aperture (is your scope within the average turbulence??) and pixel scale and scaling or drizzling? If you're undersampled, 2x drizzling is better with more subs, isn't it? If you're oversampled, resampling averages again, right?

For example, doesn't Gaussian resampling benefit more from more subs? I'm interested in playing around with these because, like you said at the beginning, it's not so simple...

Thanks again for the response, always great to cross paths with ya!

The additional signal you have acquired has nothing to do with whether calibration will remove DFPN and FPN or not. These are fixed patterns intrinsic to the sensor itself. FPN terms are noise terms that can be completely ELIMINATED with proper calibration. Wouldn't you prefer to eliminate noise, if you can, rather than just try to average MORE noise down? The more noise you have, the more effort (i.e. more total integration) it is going to take to average it all down and improve your object signal to a quality level. IMHO, it's always better to eliminate a noise term entirely if you can, rather than to try and use other means to reduce it. FPN can be eliminated!! Remember that!

FWIW, ultra wide field imaging does present some additional challenges that may be unique compared to narrower fields. If you are imaging with any amount of LP on the horizon, then yes, you might run into some quirks with flat calibration. Those challenges are largely restricted to ultra wide fields (i.e. Milky Way imaging). For fields that don't span quite so much of the open sky, however, calibration corrects CAMERA defects that lead to pattern noise terms (and also shading from the optics, dust motes, etc., which technically is also another pattern), and it really shouldn't matter what is IN the field. Calibration should correct those fixed patterns regardless.

Crafting a good master dark and master flat is not very ambiguous. You need to calibrate (flats only) and integrate the frames properly, and I guess make sure the frames were captured properly; that is really all there is to it. In all my years in the hobby, having helped countless people with processing issues, I've found it's fairly common that people are either integrating their masters incorrectly, or using them incorrectly. Sometimes the frames are "mismatched", which is usually an easily correctable acquisition issue. The most common misuse of a master is scaling the master dark, which IMHO is one of the most common issues with dark calibration (and IMO, one of the worst features of PI, especially since it is used by default with WBPP, and one of the key reasons I will never use it!)

MASTER DARK:

a. Acquire at the same temperature, gain and exposure length as the light frames.
b. Integrate WITHOUT any scaling or normalization, simple averaging, with high-sigma rejection only (the only things you want to reject from dark frames are temporal issues like cosmic ray strikes.)
c. Subtract from each light frame, and use an output pedestal to make sure calibrated lights don't clip to black. DO NOT USE DARK SCALING!!!

MASTER FLAT:

a. Ideally acquire at the same temperature, as well as same gain, as light frames (for optimal PRNU matching.)
b. Calibrate with master bias (follow master dark rules, except use shortest exposure time possible) or with master flat dark (follow master dark rules, only match to flats.)
c. Integrate with multiplicative normalization, simple averaging. Again, rejection should be simple and high sigma only if used. 
d. Divide from each light frame, do not use any kind of scaling. 
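For what it's worth, the per-frame arithmetic those two recipes feed into is simple. Here is a minimal numpy sketch of the idea (the function and variable names are hypothetical, and a real pipeline adds rejection, normalization, and proper data-type handling):

```python
import numpy as np

def calibrate_light(light, master_dark, master_flat, pedestal=100.0):
    """Subtract the matched master dark (no scaling!), add a pedestal so
    nothing clips to black, then divide by the mean-normalized master flat."""
    light = light.astype(np.float64)
    dark_subtracted = light - master_dark + pedestal
    flat_normalized = master_flat / np.mean(master_flat)
    return dark_subtracted / flat_normalized
```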




Regarding saturation, I'm not saying you should saturate any sub. That's relative to the previous discussion on read noise, and remember that I stated there is a balance point where you achieve optimal results. Depending on your available DR, you may have to choose to saturate some stars a little, in order to swamp the read noise to a reasonable degree. But overly saturating any sub is not optimal, nor is under-exposing such that you are not swamping the read noise to a sufficient degree. There is an OPTIMAL configuration for every camera...I think people should aim for the optimal.

Finally, on drizzling. I like to drizzle, simply because I like how drizzling works and the way it integrates the data, better than standard integration. I have always felt my stars take on better profiles, and my noise profile is more pleasing, with drizzled integrations than standard integrations. So, I think drizzling has value regardless of how you are sampled. Even if you are not undersampled, you could drizzle 1x and just benefit from the improved process, or drizzle 2x, then say deconvolve with the highly sampled data, maybe also do a light pass of noise reduction, then downsample by a factor of 2x back to "normal" size, and continue processing. I find that drizzled integrations offer many benefits, and I think they should be a standard part of anyone's workflow. It IS extra time and effort to do drizzling in most cases, but if you want the best results, I think its worth the time. And yes, drizzling is optimal with more subs, however with CMOS cameras and how they are most often used, usually you have plenty of subs for optimal drizzling results regardless.
Byron Miller avatar
Arun H:
Byron Miller:
The contribution of read noise is not cumulative in a simple additive sense. Instead, it adds in quadrature, meaning its impact is reduced when averaging multiple exposures.


Hi,

This is not correct, or maybe not correctly expressed. Yes, read noise adds in quadrature. But the impact of read noise is never "reduced by averaging". As the number of subs adding to a total integration time is increased, the contribution of read noise will always increase. The total noise contribution in the final image is

SQRT(N*R^2+S+D)

Where S is the signal, D is the dark current, and R is the read noise in each sub. Increasing the number of subs will always increase the noise.

Right, I understand the math. But I don't think this number going up in isolation matters, since it is always proportional to the increase of SNR when stacking, isn't it?

Let's be hypothetical and say I image 100 subs at 120 seconds because I know

a) anything longer per sub is diminished returns
b) I need to stack at least 100 subs before I hit diminished returns




Say this exposure doesn't blow out any stars and is ample for my Bortle/skyglow. If I want to shoot 240 seconds because I prefer to expose longer, then I'd reduce my gain below unity gain so as not to blow out my stars and saturate my full well. Doing so would double the read noise, even though I double my exposures to reduce the sub count. The effective SNR is the same in the integrated subs, is it not? Glover talked about this in his "perfect sensor" analogy.

I used to have a Google sheet where I calculated diminished returns of the stack, but for some reason I guess I had linked to it rather than copied it, and my source is deleted... off to go whip up a new one.

I've never seen the graph invert in a way to suggest that read noise from integrating more subs would cause harm; even though technically the read noise is going up, it's still proportional to the signal slowly improving, isn't it?
Arun H avatar
Byron Miller:
I've never seen the graph invert in a way to suggest that read noise from integrating more subs would cause harm; even though technically the read noise is going up, it's still proportional to the signal slowly improving, isn't it?


The way to understand this is to clearly define the problem statement.

In your example, you have 120 seconds x 100 subs = 12,000 seconds total integration.
Going to 240 second subs requires 50 subs for the same total integration.
  1. In each one of these cases, you will gather the same total light from the sky, since that is simply the irradiance on the focal plane times the area of the pixel, times the responsivity, times the total time. For the gathering of this light, the number of exposures is irrelevant; the total time is what matters, so long as you are not saturating the sensor in any one exposure. The shot noise from the light coming from the sky in the stack will be Sqrt(L).
  2. Let us say that the true signal from your object is O. We will assume we can neglect the shot noise from this object if it is much fainter than the sky. It also will be additive.
  3. In each one of these cases, the total dark current accumulation will be the same - it depends on temperature and sub exposure length but it, like signal, is additive. The shot noise from dark current will be Sqrt(D) where D is the total dark current. The only way to reduce this is by reducing sensor temperature.
  4. What's different is that, when you use 120 second subs, you will have 1.414 times the contribution from read noise as you would with 240 second subs. Reducing your gain will actually make it worse because, for most sensors, read noise is higher at low gains. The total read noise component in your stack is Sqrt(N)*Ri, where N is the total number of subs you took.

So the overall SNR, assuming the same total integration time, is:

O/Sqrt(N*Ri^2+L+D)

You can see that increasing N reduces your SNR in all cases. Glover's general recommendation is to choose your sub exposure length so that in each sub, Ri is much less than the contribution of dark current and shot noise. Beyond that, you get to diminishing returns. But it is still mathematically correct that SNR will be reduced with increasing number of subs. There is never a case when increasing subs reduces your total noise. Put mathematically, you never "overcome" read noise. You simply make it less important compared to other sources of noise that you cannot control - usually shot noise from the background sky.

Edit: I realize I oversimplified a little, so I corrected this to differentiate between sky background noise and object signal.
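Plugging Byron's hypothetical (100 x 120 s vs. 50 x 240 s, same 200 minutes total) into that formula makes the point concrete; all the rates below are invented for illustration:

```python
import math

obj_rate, sky_rate, dark_rate = 0.5, 2.0, 0.05  # e-/s (invented rates)
read_noise = 1.6                                # e- per sub (invented)
total_time = 12000                              # seconds (200 minutes)

for sub_len in (120, 240):
    N = total_time // sub_len
    O = obj_rate * total_time   # object signal: depends on time only
    L = sky_rate * total_time   # sky signal: depends on time only
    D = dark_rate * total_time  # dark current: depends on time only
    snr = O / math.sqrt(N * read_noise**2 + L + D)
    print(f"{N:3d} x {sub_len}s -> SNR {snr:.2f}")

# Only the N*Ri^2 term differs; the fewer, longer subs come out slightly ahead.
```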
Byron Miller avatar
Jon Rista:
You seem to be missing the fact that we are talking about relative exposure times. You mentioned that 500 minutes is 500 minutes...but that is not the case, and the only thing that matters there is how much total read noise you have. If you are choosing short exposures and stacking lots of them, then you're choosing to reduce the exposure per frame, and thus how much you are swamping read noise. It's a double-edged sword...MORE subs, thus more read noise, and read noise is ALSO swamped to a lesser degree.


Not necessarily. I'm not supporting under-exposing for the sake of more subs; I'm saying to image long enough that you are correctly exposed for that image, rather than exposed based on your dark library.

At unity gain, I overcome read noise by 95% in just over 10 seconds.



(chart: percentage of read noise swamped vs. sub exposure time)

A typical CCD (the yellow line) takes much longer. I totally get this point and I'm not arguing it.

In the context of correctly exposing, there are diminishing returns in SNR, as shown here:

(chart: stack SNR vs. total integration time)



I know the signal rate changes from horizon to horizon, so my interest is to sample "long enough" to hit diminishing returns on the individual exposure, and integrate long enough to be about 10% above diminishing returns on total integration. (I could still image 10% above diminished returns on my variable time exposure too - why not?)

It's my understanding that integration time is integration time; they don't call it integration quantity. As I mentioned above, the read noise is still proportional to the SNR, is it not?

My throwing darks into the equation is not about debating dark calibration or not, but the fact that dark scaling is terrible, so you can't scale a fixed-length dark library to variable image exposure lengths. And in the end, when you integrate enough that you are into diminishing returns of SNR and you have dithered, I haven't had any negative impacts from not dark calibrating. FPN doesn't seem to survive this averaging, much as if it had been removed by the averaging of dark frame subtraction. The rejection maps show it rather well.

Dark calibration is some random number of subs you use to try and reduce FPN that would be reduced by the same average of your total number of subs in integration where i'm curious if that averaging on modern sensors today is better than dark calibration alone since we are talking about large numbers of subs to begin with where the sensor has already been swamped, but the objective is now WELL DEPTH and contrast, not "how long my darks are".

I wish I could have people over for a BBQ to chew on this over some good food.
Byron Miller avatar
Arun H:
You can see that increasing N reduces your SNR in all cases. Glover's general recommendation is to choose your sub exposure length so that in each sub, Ri is much less than the contribution of dark current and shot noise. Beyond that, you get to diminishing returns. But it is still mathematically correct that SNR will be reduced with increasing number of subs.

I don't think the closing statement is true, or at least, it doesn't make sense. I've never seen an integration time graph have a convergence where read noise will increase disproportionately to SNR, if the quality of the subs isn't in question.

So I don't disagree with the math showing the numbers go up... but I do disagree that they go up disproportionately so as to cause more harm than good, which I feel is being insinuated here as a reason to take fewer, longer subs because of read noise.
Jon Rista avatar
@Byron Miller You are still focused on individual sub length and the diminishing returns there. As I mentioned in my previous post, sub length has to do with how much you swamp read noise, and yes, the diminishing returns on that come in quite quickly. 

That wasn't what I was addressing in my previous posts, though. I was addressing the notion that "500 minutes of integration is 500 minutes of integration, regardless of sub exposure length." This is not quite the same thing as individual sub length and diminishing returns, and the number of subs you stack, and the amount of TOTAL read noise you have in the end, does matter. I see it as being relatively simple: more noise is more noise, period. If you are stacking lots of short subs, even if they are swamping the read noise, and you have 40, 50, 60e- total read noise, and if you could expose longer without any serious consequences and stack fewer, and have less total read noise...say 10e-, or 15e-, then that's a good thing. So long as you can expose longer without any serious consequences, then expose longer, stack fewer subs. Even if you go beyond the 95% SNR criterion, if you are not losing anything significant, go longer. Even if the returns are diminished, there are STILL returns! It's always going to be better, unless for some reason you simply cannot track well enough. With CMOS cameras, longer is certainly a relative term...that may mean 60 seconds instead of 30, or 120 seconds instead of 45, or something like that. I'm not necessarily saying expose for 10 minutes. If you can expose longer without any serious consequences (i.e. a little bit of additional bright star clipping is usually not going to matter), then longer is better.

Before I continue, I'm curious...where do you usually image? You mention exposures of just 10 seconds swamp the read noise to 95% SNR (same as 10xRN^2). Are you imaging under high LP? If so, then that is the critical factor here. It would sound like you are imaging under very high LP, and if that is the case, then worrying about other noise terms is indeed kind of moot. In the case of high LP, the pollutant signal is a VASTLY greater problem than read noise, and even FPN. The thing about LP is it's not just some offset you can easily eliminate; it's a pollutant SIGNAL, which brings with it variations in that signal (frequently in color, which leads to color noise, not just noise) which can even have structure. This pollutant signal is by far the worst thing any astrophotographer has to deal with. If this is what you are dealing with, I'd stop worrying about any hardware factor, and find a way to eliminate the LP first and foremost.

Use narrow band filters. Or find and use a nearby dark site. Something. Anything to eliminate the LP. Until you do that, if LP is your largest source of noise, then no amount of camera hardware is really going to matter, no matter how good the camera may be. LP is the single most insidious and hateful thing astrophotographers have to deal with. Eliminating this, first and foremost, is IMHO the best thing any astrophotographer can do. (Viable dark sites are often far closer than most people think!)

Now if you are not dealing with high LP...maybe you have an extremely fast telescope. If that's the case, say something like RASA or Hyperstar, then your single biggest limiting factor is going to be dynamic range. Scopes like that absolutely JAM signal into stars, far more and far faster than they do on background signal. Stars saturate ludicrously fast with such scopes, and if it is easy to swamp the read noise, then I'd say find the highest dynamic range gain with the lowest possible read noise, and use that. If you aren't up for finding that optimal gain, then just use minimum gain. Forget unity, find the maximum possible DR you can. These scopes will still crush any camera with even 14 stops of DR, but 14 stops is better than say 12, or 11, or 8 stops. 

I need to know more, though, as swamping read noise in just 10 seconds is unusual. There has to be a reason why.

Under more normal circumstances, we are usually not going to be working at such extremes. In which case, "short" vs. "long" exposures are going to be more along the lines of, say, 60 seconds vs. 240 seconds, or something like that. Under more normal circumstances, you are going to be facing certain tradeoffs...clip some stars to achieve 95% SNR, or end up with less signal per sub in order to avoid clipping. Something like that. Now, if you are able to achieve 95% SNR without clipping and still have headroom? No harm in exposing longer. With CMOS cameras? Longer is still in the "fairly short" realm, and it's not like modern equipment can't guide well enough for 5 minute subs. If you CAN do 5 minute subs, you are better off using them. Doesn't matter if the returns are diminished; there will still be returns, and that certainly isn't going to harm anything.
Jon Rista avatar
Byron Miller:
Dark calibration is some random number of subs you use to try and reduce FPN that would be reduced by the same average of your total number of subs in integration where i'm curious if that averaging on modern sensors today is better than dark calibration alone since we are talking about large numbers of subs to begin with where the sensor has already been swamped, but the objective is now WELL DEPTH and contrast, not "how long my darks are".


I don't think I understand this... It doesn't make sense to me, and maybe there is a misconception about FPN here. Dark calibration does not reduce FPN. Dark calibration ELIMINATES FPN.

I am also not sure how the length of the darks is a factor. Your darks match the length of your lights, period. If you are using 10 second lights, then you need 10 second darks, matched to the same temperature and gain. If you are using 240 second lights, then the darks must match accordingly... Swamping has to do with read noise, which you will have just the same whether you calibrate or not...

FPN is a noise term. We normally ignore this term in the math, because the assumption is that it's calibrated out, thus being eliminated (effectively entirely, unless you are stacking a huge number of subs). So, the math here is what we usually think about:
SNR = (Sobj * Csubs)/SQRT(Csubs * (Sobj + Sdark + Nread^2))

This DOES NOT INCLUDE the FPN terms. If you are NOT dithering enough, then the FPN terms, to one degree or another, will GROW like your object signal. Therefore, they pose the risk of limiting your SNR, and depending on exactly how much FPN your camera has, that limitation can start kicking in relatively "quickly" these days with CMOS cameras (they tend to have higher FPN due to their pixel and readout architectures compared to CCDs). If you do not dither and do not calibrate, then THIS is actually your SNR formula:
SNR = (Sobj * Csubs)/(SQRT(Csubs * (Sobj + Sdark + Nread^2)) + Sdfpn + Sfpn)

It should be easy enough to see how and why fixed pattern noise can limit your SNR.

If you ARE dithering, then you have ADDITIONAL noise terms similar to read noise (introduced per read) that must be added in quadrature, not normally reflected in the above equation. Right? You have the DFPN term, which, if it has any structured pattern, may require very significant dithers that are randomized enough in scale to make sure that structure is randomized enough not to correlate in the stack (and even then, it's tough to completely avoid any correlation between, say, horizontal or vertical banding structure). Even if you are able to totally randomize the DFPN, you still have another, additional noise term that has to be added in quadrature:
SNR = (Sobj * Csubs)/SQRT(Csubs * (Sobj + Sdark + Sdfpn + Nread^2))

You also have the FPN from PRNU, which is ANOTHER ADDITIONAL noise term. You have to factor that in as well and add it in quadrature with all the rest, too:
SNR = (Sobj * Csubs)/SQRT(Csubs * (Sobj + Sdark + Sdfpn + Sfpn + Nread^2))

These terms could be fairly significant... Let's say the dark FPN is another 1e-, and the FPN is another 1.3e-. So you now have a lot more noise to overcome with your signal than you originally thought. You were previously only considering the read noise. If your read noise is say 1.85e-, then you would need 1.85^2*10 background sky signal, in the DARKEST areas of the frame (i.e. vignetted corners?), to sufficiently swamp the read noise. But if you account for these additional dithered FPN terms, you need 2.47^2*10 background sky signal instead. Accounting for just read noise, you need ~34e- background sky signal; however, accounting for the randomized FPN terms, you need 61e- background signal...just about a FACTOR OF TWO difference! Simply dithering your FPN, depending on exactly how significant it is, could introduce very non-trivial additional noise into your images, and require a change in the assumptions about what it actually takes to "swamp" all the camera noise.
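That last calculation is easy to reproduce; here is a small Python check using the same example values (1.85e- read noise, 1e- dithered DFPN, 1.3e- dithered FPN, 10x swamping factor):

```python
import math

read_noise, dfpn, fpn = 1.85, 1.0, 1.3  # e-, example values from above
swamp = 10

# Swamping read noise alone:
print(f"read noise only: {swamp * read_noise**2:.0f} e- background needed")  # ~34 e-

# Once dithering randomizes the fixed patterns, they add in quadrature
# with the read noise, raising the bar for the background signal:
combined = math.sqrt(read_noise**2 + dfpn**2 + fpn**2)                       # ~2.47 e-
print(f"all camera terms: {swamp * combined**2:.0f} e- background needed")   # ~61 e-
```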

These two FPN factors could be smaller, but they could also be much larger. DFPN banding patterns, for example, can be quite significant in terms of scale in electrons, and worse...they may not be randomizable in a way that completely eliminates any correlation in the stack. If there IS some correlation, then your SNR is actually going to be affected by that. You would have a NON-quadrature term that you would have to add to your noise term:
SNR = (Sobj * Csubs)/(SQRT(Csubs * (Sobj + Sdark + Sdfpn + Sfpn + Nread^2)) + Sdfpn_remnant)

FPN from PRNU is not immune to having larger scale structure that could correlate in the stack, ultimately leaving you with something like this, which could impose localized limitations on SNR in certain areas of the field:
SNR = (Sobj * Csubs)/(SQRT(Csubs * (Sobj + Sdark + Sdfpn + Sfpn + Nread^2)) + Sdfpn_remnant + Sfpn_remnant)

Sure, dithering can, to one degree or another, allow the FPN terms to be averaged down. But it's often imperfect. Dithering can be insufficient, or patterns can be of such a nature that parts of them still correlate in the stack. Why not just do the easy thing, and calibrate? Calibration will ELIMINATE, not just average down or reduce, the FPN. Then you just have this:
SNR = (Sobj * Csubs)/SQRT(Csubs * (Sobj + Sdark + Nread^2))

The thing about the consequences of correlated FPN... You are not necessarily going to overtly notice what it does to your images. A limit on SNR will start subtle, and increase the more you stack. But it's not an obvious thing. You won't necessarily see overt patterns showing through your data (although you certainly could!! I see it often enough in a lot of images people share here). The best way to OBSERVE a limit on SNR is to progressively stack more and more subs into different integrations, then blink through those integrations with a focus on the areas where you believe FPN may be limiting your SNR. You will eventually notice that the noise and signal in those areas cease to change as you continue to stack more and more data. If you were NOT limited by FPN, then the signal/object structure would stop changing, but you would continue to see changes in, and reduction of, the random noise in the area (which might require progressively stronger stretches to observe), no matter how much you stacked.
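
If you want to script that test rather than eyeball it, here is a rough sketch (assuming your calibrated, registered subs are stored as 2-D numpy arrays; the file names and patch are hypothetical):

import numpy as np

subs = [np.load(f"sub_{i:03d}.npy") for i in range(256)]   # registered subs
patch = (slice(100, 200), slice(100, 200))                 # area where you suspect FPN

for depth in (16, 32, 64, 128, 256):
    stack = np.mean(subs[:depth], axis=0)
    # random-noise-limited: std falls roughly as 1/sqrt(depth); FPN-limited: it flattens out
    print(depth, stack[patch].std())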

FPN can be a silent SNR killer, and its effects are not necessarily overtly obvious. Many imagers these days say they "see" no problems in their images when they don't calibrate. I would offer that their observational techniques are...not broad enough. ;)
Helpful Insightful Engaging
Arun H avatar
Byron Miller:
So i don't disagree on the math to show numbers go up... but i do disagree that they go up so disproportionately as to cause more harm than good, which i feel is being insinuated here as a reason to take fewer, longer subs because of read noise.


Just to make sure we are talking about the same things, the following two statements are both true:
  1. For fixed subexposure time, SNR will always go up with the number of subs, because accumulated signal increases faster than noise. Therefore, 100 one second subs will have better SNR than a single one second sub, and 1200 one second subs will have better SNR than 100 one second subs.
  2. For the same total integration time, fewer, longer subs will result in a stack with better SNR than many shorter subs. Therefore, a stack comprised of 120 ten second subs will have better SNR than 1200 one second subs, due to the effect of read noise.

That said - swamping read noise is only one component of a sub exposure time decision. The quality of your mount/guiding, the risk of saturating bright regions, the risk of losing a larger portion of your work to bad frames for one reason or another - all of these also factor into what sub exposure time you choose.
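
Both statements fall straight out of the stacking formula. A sketch with hypothetical numbers (2 e-/s total signal rate, 1.85 e- read noise):

import math

rate, RN = 2.0, 1.85   # e-/s signal rate (hypothetical), e- read noise

def stack_snr(t, n):   # sub length t in seconds, n subs
    S = rate * t       # signal per sub
    return S * n / math.sqrt(n * (S + RN**2))

print(round(stack_snr(1, 100), 1), round(stack_snr(1, 1200), 1))   # statement 1: SNR rises with count
print(round(stack_snr(10, 120), 1), round(stack_snr(1, 1200), 1))  # statement 2: same 1200 s total, longer subs win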
Well Written Helpful Insightful Engaging Supportive
Byron Miller avatar
Jon Rista:
I don't think I understand this... It doesn't make sense to me, and maybe there is a misconception about FPN here. Dark calibration does not reduce FPN. Dark calibration ELIMINATES FPN.


It's my understanding that dark calibration subtracts the average FPN from individual subs, and that average can vary depending on how many darks you integrate into your master dark. This calibration improvement is particularly noticeable where thermal noise and fixed pattern noise are a significant component of the total noise. I was following along with the lecturer in wondering if, on modern sensors, it's much like read noise - not nearly as significant as it once was. As i understand it, dark calibration adds some read noise to the individual subs, and this varies with how deeply your master dark is integrated and how well its average smooths things out.

Since read noise is random, this goes back to me talking about integration time - more subs to average out the random noise. More lights integrated do the same, right?

Dark calibration isn't perfect - what is the magic number of darks to integrate? How stable are the thermal properties of your imager? How clean is the power? There is random and temporal noise left over - and that is also easier to average out with more dithered, integrated subs, right?

I'm curious if highly dithered subs, appropriately exposed and integrated to diminishing returns, will be of comparable quality because of the increased averaging out of FPN and other noise - is that just as good as dark calibration on modern sensors that don't have amp glow and have low read noise and low dark current to begin with?

I know that dark calibration improves the SNR of the individual sub by removing FPN, but you still have photon noise, thermal noise and read noise, and you introduced some read noise to remove something that integration will also remove if highly dithered. Does dithered integration remove enough to be at the transition point?

During integration, clipping will remove outliers, and that works better the more subs you have to stack, right?

If the subs are integrated with median or sigma clipping, the outliers are removed, and the FPN isn't contributing to the noise of the integrated stack, is it? Instead of subtracting FPN from individual subs before integrating, I'm just integrating and rejecting it. Couldn't highly dithered, deeply stacked data also average out the noise left over from dark calibration?
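
The kind of rejection i mean is roughly this - a minimal kappa-sigma clip, assuming registered subs stacked into a (n_subs, H, W) numpy cube:

import numpy as np

def sigma_clip_mean(cube, kappa=3.0):
    # a dithered hot pixel lands on a different sky position in each registered
    # sub, so it shows up as a per-pixel outlier and gets rejected here
    med = np.median(cube, axis=0)
    std = cube.std(axis=0)
    clipped = np.where(np.abs(cube - med) <= kappa * std, cube, np.nan)
    return np.nanmean(clipped, axis=0)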

During integration that noise floor is averaged out, and i'm curious whether that averaging is as good as, or better than, dark calibration's average-and-subtract.

The reason dark calibration came back up is that i said if you don't dither enough and don't have enough subs for good rejection, then of course you have to dark calibrate (and we got sidetracked on what dark calibration actually is) - especially if you're not on a modern CMOS.
Jon Rista:
I am also not sure how the length of the darks ...is a factor? Your darks match the length of your lights, period. If you are using 10 second lights, then you need 10 second darks, matched to the same temperature and gain. If you are using 240 second lights, then the darks must match accordingly... Swamping has to do with read noise, which you will have  just the same whether you calibrate or not...


The idea is that if you don't need to dark calibrate, then you don't need to expose your subs according to your dark library - one could image entirely based on the sensor's well depth and diminishing returns on swamping, without blowing it out.

With CMOS, swamping itself isn't dependent on a library of fixed times that most people try and use with dark calibration.  
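A sketch of what i mean by imaging to the sensor instead of to the dark library (every input here is hypothetical; sky_rate would be measured from a prior sub):

RN = 1.5               # e- read noise
sky_rate = 0.8         # e-/s/pixel background flux, from a previous exposure (hypothetical)
swamp = 10             # target: sky signal >= swamp * RN^2
full_well = 50000      # e-
bright_rate = 200      # e-/s for the brightest source to protect (hypothetical)

t_min = swamp * RN**2 / sky_rate                    # shortest sub that swamps read noise (~28 s here)
t_max = 0.7 * full_well / (sky_rate + bright_rate)  # crude cap to preserve well depth (~174 s here)
print(round(t_min), round(t_max))                   # expose somewhere in between
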
Jon Rista:
This DOES NOT INCLUDE the FPN terms. If you are NOT dithering enough, then the FPN terms, to one degree or another, will GROW like your object signal. Therefore, they pose the risk of limiting your SNR, and depending on exactly how much FPN your camera has, that limitation can start kicking in relatively "quickly" these days with CMOS cameras (they tend to have higher FPN due to their pixel and readout architectures compared to CCDs). If you do not dither and do not calibrate, then THIS is actually your SNR formula:


My entire point is to image to diminishing returns with highly dithered data. It falls apart if you a) don't image enough subs or b) don't dither.
Jon Rista:
These terms could be fairly significant... Let's say the dark FPN is another 1e-, and the FPN is another 1.3e-. So you now have a lot more noise to overcome with your signal than you originally thought. You were previously only considering the read noise. If your read noise is say 1.85e-, then you would need 1.85^2*10 background sky signal, in the DARKEST areas of the frame (i.e. vignetted corners?), to sufficiently swamp the read noise. But if you account for these additional dithered FPN terms, you need 2.47^2*10 background sky signal instead. Accounting for just read noise, you need ~34e- background sky signal; accounting for the randomized FPN terms, you need ~61e- background signal...just about a FACTOR OF TWO difference! Simply dithering your FPN, depending on exactly how significant it is, could introduce very non-trivial additional noise into your images, and require a change in the assumptions about what it actually takes to "swamp" all the camera noise.


I don't disagree with this - it is, after all, not that simple.

I know that if you shoot narrowband, the SNR of a 10-hour stack shot at 120 seconds (not swamped enough) is about 65% of the SNR of one shot at 300 seconds. The other point is that for hydrogen, going from 300 seconds to 600 seconds only gains about 5%, but maybe for OIII it's worth it. I'm curious if we're at the point where we can free imagers from thinking everything has to be dark calibrated in static blocks of time, as if dark calibration were magic - when they could dither, image to their full well, and integrate to diminishing returns. After all, the goal is peak SNR, isn't it?
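
At fixed total integration T, with sky rate r and sub length t, the stack SNR works out to r*sqrt(T)/sqrt(r + RN^2/t) (each sub collects r*t, and there are T/t of them), so anyone can sanity-check those percentages with their own numbers - the sky rate below is a made-up narrowband value:

import math

RN, T, r = 1.85, 36000, 0.01   # e- read noise, 10 h total in seconds, e-/s sky rate (hypothetical)

for t in (120, 300, 600):      # sub length in seconds
    print(t, round(r * math.sqrt(T) / math.sqrt(r + RN**2 / t), 2))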

Someone imaging the Squid at a very low signal rate may run 20-minute subs near the horizon and 35-minute subs at the apex, because they imaged according to their sky, scope, sensor and well depth rather than to their dark library. For an OSC imager it might be 30 to 45 second subs; for RGB mono, maybe 135-second subs. This still doesn't change the fact that total integration time is what matters. It would be neat to calculate this against readings of sky flux, so you could shorten exposures as moonglow and the like demand.

I know that in the end, imaging is about optimizing sub frame durations and maximizing total integration, and that calibration is important. I'm just curious whether dark calibration is necessary if you hit the point of diminishing returns on single subs and integration time. For example, if you take flats and calibrate them with bias, won't your flats divide out some FPN? i always flat calibrate, and i'm annoyed at some of the online scopes that have flats from 6 months ago.

I feel like many people image to their dark library no matter what, because they have been told over and over that dark calibration is necessary. It very well was necessary, but is it still? You couldn't dither enough to rely on integration rejection if your old 8300 CCD required 40-minute exposures - it still may not have been fully swamped at 40 minutes, and dark calibration was absolutely necessary because, holy cow, dark frames on CCDs are gnarly... Ditto with early CMOS - good luck averaging out amp glow when you could just dark-subtract it.

but is this absolutely our reality today?

i pixel peep a lot of images and i watch a lot of videos from well-known processors, and a lot of them start with masters that don't look so hot... "oh hey, here is a 3-hour video where 2 hours are spent removing a dead CCD column and fixing some leftover motes"

i guess tl;dr

few subs, longer exposures, dark calibrate because integration won't solve it all.
more subs, correctly exposed, highly dithered, flat calibrated, imaging bias towards diminishing returns on integration = best of all worlds if you ask me. (but dependent on a totally good sensor and big stacks)
Byron Miller avatar
Arun H:
Byron Miller:
So i don't disagree on the math to show numbers go up... but i do disagree that they go up so disproportionately as to cause more harm than good, which i feel is being insinuated here as a reason to take fewer, longer subs because of read noise.


Just to make sure we are talking about the same things, the following two statements are both true:
  1. For fixed subexposure time, SNR will always go up with the number of subs, because accumulated signal increases faster than noise. Therefore, 100 one second subs will have better SNR than a single one second sub, and 1200 one second subs will have better SNR than 100 one second subs.
  2. For the same total integration time, fewer, longer subs will result in a stack with better SNR than many shorter subs. Therefore, a stack comprised of 120 ten second subs will have better SNR than 1200 one second subs, due to the effect of read noise.

That said - swamping read noise is one component of making a sub exposure time decision. The quality of your mount/guiding, risk of saturating bright regions, risk of loss of a larger portion of your work due to bad images for one or other reason - all of these would also factor into your decision on what sub exposure time to choose.

Right, but they're just mathematical statements, not what I'm talking about.

This hypothetical ignores being exposed to diminishing returns and integrated to diminishing returns. It's just numbers to prove a point that's true if i was underexposed.

All of those variables are exactly why i'm interested in variable-timed subs, and they support my questioning of taking the longest individual sub as the de facto choice. 👍 (which means dark calibration is out the door)
Jon Rista avatar
Byron Miller:
Jon Rista:
I don't think I understand this... It doesn't make sense to me, and maybe there is a misconception about FPN here. Dark calibration does not reduce FPN. Dark calibration ELIMINATES FPN.


It's my understanding that dark calibration subtracts the average FPN from individual subs, and that average can vary depending on how many darks you integrate into your master dark. This calibration improvement is particularly noticeable where thermal noise and fixed pattern noise are a significant component of the total noise. I was following along with the lecturer in wondering if, on modern sensors, it's much like read noise - not nearly as significant as it once was. As i understand it, dark calibration adds some read noise to the individual subs, and this varies with how deeply your master dark is integrated and how well its average smooths things out.

I'm just going to stick with this for the moment, until you've fully grasped FPN. 

FPN is like your object signal. Technically speaking, it IS a signal. The bias offset is a signal. It can be considered a NOISE, however, because it can interfere with the signal of interest and your ability to discern the true nature of that signal. FPN can appear to be just as random as a temporally random noise, say photon shot noise, or it can exhibit strong structure.

The key with FPN, is that it is FIXED. The average of FPN, IS the FPN. Here is an old animation, that I've been able to use over and over again, to demonstrate. The first frame here is a single frame, and represents one full unit of read noise and the FPN. Read noise in this case was pretty high, IIRC, 3-4e-, maybe more. I forget exactly what gain I used for this demonstration, and IIRC it was an ASI1600, so the read noise might in fact be 4.88e- RMS. In any case...the first frame here is one single frame, one full unit of read noise. As the animation progresses, I stack more and more frames, which "averages down" the read noise (a temporally and spatially random noise), and strengthens the FPN. 

[animation: the same master dark integrated progressively deeper - the random read noise averages down while the fixed pattern remains]
By the time the stack is just 16 frames deep, you can see the PATTERN (DFPN in this case; as it happens, this is a master dark). By 25 frames, the pattern is fairly strong. You can still see changes in the random noise all the way up to 256 frames, though the pattern is very clearly depicted by about 64 frames or so.

The only changes from one frame to the next in this animation are the number of frames stacked into the master, and as that number increases we see the read noise decrease. Eventually, the read noise is so small that it won't have any meaningful impact on any light frame calibrated with it. Even if you were stacking thousands of light frames, a 256-frame master has such a minute remnant of read noise that it's doubtful stacking any more frames would offer any benefit.

What you are seeing here is me resolving, or maybe better, REVEALING, the DFPN of this particular camera. Now, the read noise is pretty high if it was indeed at the minimum gain with an ASI1600...with today's cameras, you can often see the pattern without any stacking at all, with read noise levels sometimes as low as 1.5e- at HCG gain modes. FPN with modern cameras is usually at least as much as, and sometimes higher than, the most popular CCD cameras in use for AP at the end of the CCD era. CCDs had one key benefit over CMOS: just one (or maybe two, for very large sensors) readout pipelines...one set of circuitry...one shift register, one output register, one floating diffusion, one amplifier, one ADC. That provided a certain amount of consistency in how each pixel's charge was handled, and it helped manage DFPN. CMOS sensors, on the other hand, have unique circuitry for each pixel, or maybe for every 2x2 group of four pixels (called a 4-shared readout architecture). There is usually a separate floating diffusion and amplifier per readout group, and usually a separate ADC for each column. This tends to lead to higher DFPN with CMOS. Now, CCDs had other issues...like column defects, so they weren't immune to structured patterns, and in fact over their lifetimes they tended to accumulate more and more defects, which would change their dark patterns over time...something CMOS sensors are not very susceptible to (although I've certainly heard of powerful enough cosmic ray strikes causing entire sensors to go bad, so not entirely immune either).

SO...DFPN IS the average of your dark frames. FPN IS the average of your flat frames. These are, technically, signals...averaging many dark or flat frames together just averages down the RANDOM noise terms, to reveal the fixed patterns. So, yes, a master dark is the average of FPN. Really, the FPN is there, as it always is, in a single frame. The averaging just reduces the remaining read noise to a point where it's no longer a factor. The FPN doesn't actually change. Therefore, a WELL CRAFTED master dark should quite accurately calibrate out that fixed pattern. Ironically, the Panasonic M sensor in the ASI1600 was a particularly bad sensor when it came to DFPN. Any change in gain, USB traffic setting, and a few other things could cause the DFPN to change dramatically...which made it a bit of a problematic sensor at times. Sony sensors, on the other hand, have far lower drift in DFPN, and usually a well crafted master dark for a given gain can be reused for quite some time, and the correction of that FPN should be very good. Hot and cold pixels can indeed change over time, but they represent only a tiny fraction of the pixels in the sensor, and they do NOT represent the whole pattern. EVERY pixel in the sensor (as you should be able to see above) will have some response to bias voltage and dark current, and therefore EVERY pixel is part of the pattern (even if that pattern APPEARS to be spatially random...if you average a bunch of dark frames together, you'll see the same thing as above...read noise averages down, but the pattern remains, even a spatially random one).

THIS is why we do dark calibration. It has nothing to do with read noise. It has nothing to do with dark current SHOT noise. It has everything to do with DFPN. We average many dark frames into a master dark in order to reduce the amount of read noise from the dark that we introduce into each and every light frame we calibrate. For stacks of say 50 frames or so, 25 dark frames is usually plenty. For stacking a hundred frames, you might want a 36-frame master. For stacking hundreds of frames, I usually went with a 49-frame master dark. FPN IS the average...and FPN IS the reason we calibrate. FPN is usually plenty significant today with CMOS cameras, in most cases probably more significant than it ever was with the top-of-the-line CCD cameras of yesteryear. It's worth calibrating.
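
You can demonstrate this to yourself with synthetic data (hypothetical pattern and read noise values; the pattern is identical in every frame, only the read noise changes):

import numpy as np

rng = np.random.default_rng(0)
fpn = rng.normal(0, 1.3, (256, 256))   # the fixed pattern, e- (the same in every dark)
RN = 3.0                               # e- read noise per dark frame

for n in (1, 25, 49, 256):
    master = np.mean([fpn + rng.normal(0, RN, fpn.shape) for _ in range(n)], axis=0)
    # the read-noise remnant falls as RN/sqrt(n); the pattern itself is untouched
    print(n, round((master - fpn).std(), 3))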

To bring my previous post into full clarity: the pattern is the pattern. It generally doesn't change, except maybe slowly (and often trivially) over time (mainly more hot/cold pixels, but there can be other nuanced changes as well), unless we are talking about a sensor like the Panasonic M, which seemed to change its pattern frequently in response to gain or other driver setting changes. Because the pattern doesn't change, subtracting a well crafted master dark from a light frame REMOVES the pattern from the calibrated light frame. Entirely. The DFPN from the camera is no longer present in that light frame. It can no longer cause any problems. You should still dither, and dithering has added benefits beyond randomizing FPN (including DFPN, FPN from PRNU, or remnant patterns from other sources). If you only dither and do not calibrate, then the original camera FPN is NOT removed; it remains, and it therefore represents ADDITIONAL noise, above and beyond the read noise, dark current shot noise, object shot noise, and also light pollution shot noise. MORE noise.

Why would you leave FPN in place and only dither, if there is a very easy and effective way to ELIMINATE it ENTIRELY? 

This is really my question to everyone who proclaims that calibration is no longer necessary. The great irony is that FPN is usually higher with CMOS cameras, not lower. It is generally very easy to remove, to eliminate, with proper calibration. So it honestly confuses me why anyone would forego calibration and just dither... You are then dealing with more noise. And not just spatially random noise, which at least has a relatively pleasant distribution, but noise that usually has some kind of structure, usually repetitive structure, that the human eye is very good at detecting... Honestly, it totally baffles me why there is even a debate on this these days... Especially dark calibration - it is SO EASY to do. Earlier today I was browsing through images, and I kept thinking of this thread...so many images today exhibit the problems inherent in NOT calibrating. I encountered quite a few images with walking noise that clearly demonstrated a lack of dark calibration. I encountered several other images that had notable mottling, discoloration along the edges, and other issues that indicated either a lack of proper dark calibration, or a lack of flat calibration, or both.

These issues are generally very EASY to fix, because the things that cause them can be completely ELIMINATED from the light frames, removing the source of so many of these problems. This is why I fight so hard against the notion that calibration is no longer necessary. I strongly believe it is more necessary today than ever, and I sadly see the evidence all over the place.
Helpful Insightful Engaging
Jon Rista avatar
Byron Miller:
i guess tl;dr

few subs, longer exposures, dark calibrate because integration won't solve it all.
more subs, correctly exposed, highly dithered, flat calibrated, imaging bias towards diminishing returns on integration = best of all worlds if you ask me. (but dependent on a totally good sensor and big stacks)

I think you have these backwards? One, you should be dithering, period, no matter how long your subs are. I've been dithering since I started imaging. 

Two, longer exposures will bury the DFPN in deeper object and background sky signal... So, technically, you would be more able to get away with NOT dark calibrating with longer exposures, rather than shorter. Shorter exposures are more likely to let FPN show through.

When stacking LOTS of short subs, even with dithering, you run a high risk that portions of the FPN will correlate in the stack. By that I mean that two vertical bands, for example, even though dithered, might stack on top of each other. The more subs you stack, the more likely this is to occur, and the more often such bands stack on top of each other, the more likely you are to see some banding in your integration. It's a matter of chance, but it's also a probability, and the probability increases as stack depth does. So, again, I think you have it backwards...even at a 95% criterion, even with dithering, you are more likely to end up with a lower quality integration.
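
A toy Monte Carlo of that correlation risk (synthetic 1-D banding; the band amplitude, pitch, and dither range are all made up):

import numpy as np

rng = np.random.default_rng(1)
band = np.zeros(1024)
band[::64] = 2.0   # 2 e- vertical bands every 64 columns

for n in (30, 300, 3000):
    stack = sum(np.roll(band, rng.integers(0, 16)) for _ in range(n)) / n
    # dithers smaller than the pattern pitch leave a correlated remnant
    # (~amplitude/16 here) that no amount of extra stacking averages away
    print(n, round(stack.max(), 3))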

I'm not necessarily advocating for long exposures per se. I'm really advocating for identifying the optimal configuration for your camera, and then for producing an optimal integration. Uncalibrated data is not optimal.
Helpful Insightful Engaging
Byron Miller avatar
Jon Rista:
I'm just going to stick with this for the moment, until you've fully grasped FPN.


Yes, i know what FPN is.

I can see it in my rejection subs when I integrate a well dithered set of a large number of subs.

Here's a sample rejection sub i had floating around from an old 2600:

[image: pixel rejection map from a highly dithered integration]
pretty impressive what pixel rejection can do!

So when I did a lot of testing of this on a 2600MC Pro, i noticed something odd when i dark calibrated. The noise measurement of my dark calibrated images was lower, as you would expect, but the image quality was totally meh. Here's a screenshot zoomed in on the same area of sky; the left side was just highly dithered and integrated, and the right side was highly dithered, dark calibrated and integrated.

[screenshot: same area of sky - left: dithered and integrated only; right: dithered, dark calibrated and integrated]
I shared this before, and it put me down the rabbit hole of wondering if darks were making things worse. On my dark calibrated subs, i had a lot more color noise and hot pixels:

[screenshots: zoomed comparison - left without dark calibration, right with dark calibration; the calibrated side shows more color noise and hot pixels]
again, left side is not dark calibrated, same stack, but the right side was... and yeah, the PixInsight noise script says the right side has less noise.

so for me, it seemed better to average down the noise rather than subtract it... even though the average noise level was higher, the image was a lot easier to work with.

anyone else see this?
Helpful Insightful Respectful Engaging