Byron Miller:
Jon Rista:
It's just not that simple. This would only be true if you had zero read noise. Noise in our images is multi-sourced... it gets introduced all over the place. Most of the key sources of noise are "temporally random" in nature with "temporal growth", meaning they are random in time and grow over time. Since their growth is time dependent, it doesn't matter how many subs you use, only the total time integrated, in determining how much of this kind of noise you accumulate in total. This is shot noise... photon (from object and light pollution), dark current, etc. One source of noise, however (which itself is actually multi-sourced, arising at various locations in the electronics of the readout pipeline, which manufacturers just lump into one term), is temporally random but not of temporal growth: read noise. Read noise may in fact have temporally random aspects and some that are not so temporally random, but the key is that you get ONE UNIT of read noise for EVERY READ.
Thanks for the reply Jon!
Nothing is simple for sure
I know the read noise is there for every sub. The one unit doesn't mean much on its own in reality though, right?
The contribution of read noise is not cumulative in a simple additive sense. Instead, it adds in quadrature, meaning its impact is reduced when averaging multiple exposures.
In B1 and B2 skies, I'd image "sufficiently long" since you have much lower shot noise... but in B3 and "worse" skies, shot noise already dominates so thoroughly that I agree with the video: read noise may as well not exist. (Noise is funny math... not quite cumulative even on a single sub... forgot the formula, but it's in the video.)
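For what it's worth, the formula being reached for here is most likely the standard quadrature sum of independent noise terms. A minimal numpy sketch with made-up electron counts (none of these values come from the video):

```python
import numpy as np

# Illustrative electron counts for one sub; these are assumptions,
# not values from the video.
S_obj  = 200.0   # object signal collected in one sub
S_sky  = 800.0   # sky / light-pollution signal (dominant under B3+ skies)
S_dark = 10.0    # dark current signal
RN     = 1.85    # read noise per read (e- RMS)

# Shot noise variance equals the signal (Poisson); independent terms
# add in quadrature rather than linearly:
noise = np.sqrt(S_obj + S_sky + S_dark + RN**2)
snr   = S_obj / noise
print(f"per-sub noise = {noise:.1f} e-, per-sub SNR = {snr:.2f}")
```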
You seem to be missing the fact that we are talking about relative exposure times. You mentioned that 500 minutes is 500 minutes... but that is not the case, and what matters there is how much total read noise you have. If you are choosing short exposures and stacking lots of them, then you're choosing to reduce the exposure per frame, and thus how much you are swamping read noise. It's a double-edged sword... MORE subs, thus more read noise, and read noise that is ALSO swamped to a lesser degree.
In the end, 500 minutes with lots of short subs is going to be noisier than 500 minutes with longer subs... unless, somehow, you are swamping the read noise SIGNIFICANTLY in either case. If that was the case, then I would offer that most likely you are clipping a lot of signal on the other end of the dynamic range, especially with the longer subs.
There are always two key forces in play that drive choosing an optimal exposure length: read noise, pushing you to longer exposures, and clipping, pushing you to shorter exposures. Somewhere between those two factors is a balance point at which you find the optimal exposure length per sub.
Anyway, the point I was trying to make is that 500 minutes is not just 500 minutes. There IS a difference in the amount of total noise you have, and it depends on how many subs you are stacking. Previously I used a 1.85e- read noise level to determine that with 50 subs vs. 500 subs, the difference in total read noise was quite large, 13e- vs. 41e-, a difference that shouldn't be ignored. Especially considering that to use shorter exposures, you are most likely going to be swamping read noise by a lesser degree, which increases the impact that 41e- of read noise is going to have. The situation could be worse... read noise levels are often higher than 2e- with CMOS cameras.
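Those figures check out if you assume read noise accumulates in quadrature, one unit per read; a quick sanity check:

```python
import numpy as np

RN = 1.85  # e- read noise per frame, as in the example above
for n_subs in (50, 500):
    # One unit of read noise per read, summed in quadrature across the stack
    total = RN * np.sqrt(n_subs)
    print(f"{n_subs:3d} subs: total read noise ~ {total:.0f} e-")
# -> 50 subs: ~13 e-, 500 subs: ~41 e-
```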
Byron Miller:
Jon Rista:
Dark calibration actually has nothing to do with dark noise (which I assume means dark shot noise, SQRT(Sdark)). Calibration has everything to do with FPN, and in the case of darks that would be DFPN (Dark Fixed Pattern Noise). You actually have it backwards... with fewer subs, you would usually have stronger object signal per sub that buries the DFPN deeper, rendering it LESS of a problem.

FPN, Fixed Pattern Noise, is a different kind of noise. It is not temporally random in terms of scale; it grows the same amount in a given amount of time. Its growth is temporal, meaning it gets stronger and stronger over time (you accumulate more and more of it over time). There is a BASELINE amount of DFPN in every frame, though, dominated by your bias signal, and the more frames you stack, the stronger this baseline DFPN is going to get. If you do not calibrate, then the more subs you stack, the stronger this form of dark fixed pattern noise is going to get. The shallower the sub (usually the case if you are stacking more subs), the less object signal you have to bury the dark signals, and the more likely the DFPN is going to show through your background signal.
In the video they talked about dark calibration calibrating out DFPN; I should have been more clear. What I'm more curious about is whether, if you dither, integrate to diminishing returns on total time, and expose each sub long enough to preserve full well and contrast, the image quality may be as good as or better than imaging according to your dark calibration library or fixed times.
If you don't have many subs, then dithering alone won't remove all the *FPN, and dark calibration would be a necessity. If you have a lot of subs and hit that "10% above diminishing returns" as a sort of goal, I'm curious what the output would be. Again, this is on my premise that your sub is long enough for your skies and your well depth.
In cases where diminishing returns on integration set in after many, many hours, lots of people choose to sacrifice contrast/well depth so they don't need to integrate for days... so "nothing is simple for sure". 5- or 10-minute subs for 10+ hours is a lot of subs... I'm happy to throw a Threadripper at it and let it rip in any case, vs. "meh, too long to compute, I'm going to blow out my stars because my darks are 10 mins".
Me personally, I like how stacking averages out noise. I've seen lots of people use bad dark frames, which subtract from the quality of their image and introduce more noise. Even dark frames are a law of averages, aren't they? People have religious debates about when enough is enough or not enough. Some people say 20, some people say 50. If you dither every sub by a large enough margin, won't the averaging of your subs, if sufficiently into "diminishing returns", achieve the same correction of FPN as, say, making a master dark with some arbitrary number of subs?
When I use CCD data, you bet I use dark frames, and the same on older CMOS with bright amp glow. When I'm on my 6200, I'm still experimenting. I have a dark library because I'm not anti-darks, but I really want to see how I can "unbox" imaging, use modern integration, bias toward full-well imaging, and see how that turns out.
Dithering simply imparts a temporal randomness to ANOTHER noise term. FPN is not eliminated with dithering, it simply becomes random, or maybe randomish, thus allowing it to be averaged down with stacking like other random noise terms. You still have MORE NOISE if all you do is dither, though, because you still have the FPN terms.
Calibration, on the other hand, ELIMINATES the FPN terms. They are no longer there at all to add to the total noise in the image. IMO, elimination of a noise term is best, if you can achieve it. The only noise terms you can eliminate are those that are fixed, and I strongly encourage everyone to calibrate in order to eliminate those terms entirely. I also still recommend everyone still dither, as there are usually artifacts that will appear in integrations that are intrinsic to the nature of a gaussian (or poisson) noise distribution that can still be eliminated (or greatly minimized) with dithering.
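To make that concrete, here's a toy numpy simulation (made-up noise levels, and whole-frame shifts standing in for dither plus registration): dithering turns the fixed pattern into a randomish term that averages down with the stack, while subtracting a perfect master dark removes it outright.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SUBS, SIZE = 50, 256
FPN = rng.normal(0.0, 5.0, (SIZE, SIZE))  # fixed pattern, identical in every frame

def stacked_noise(dither: bool, calibrate: bool) -> float:
    acc = np.zeros((SIZE, SIZE))
    for _ in range(N_SUBS):
        frame = FPN + rng.normal(0.0, 10.0, (SIZE, SIZE))  # pattern + random noise
        if calibrate:
            frame -= FPN  # a perfect master dark: the pattern is eliminated
        if dither:
            # After star alignment, the fixed pattern lands on different pixels
            # each sub; model that with a random whole-frame shift.
            frame = np.roll(frame, tuple(rng.integers(-8, 9, 2)), axis=(0, 1))
        acc += frame
    return (acc / N_SUBS).std()

print("no dither, no cal :", stacked_noise(False, False))  # FPN survives, ~5.2
print("dither only       :", stacked_noise(True,  False))  # FPN averaged down, ~1.6
print("calibrated        :", stacked_noise(False, True))   # FPN gone, ~1.4
```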
Dithering is essential for optimal results. Calibration is essential for optimal results. Don't skimp on either, if you want optimal results.
Regarding diminishing returns. There are definitely diminishing returns on how much you swamp read noise. Beyond swamping it by a factor of 10x, you only gain a couple percent improvement in SNR. Those diminishing returns set in rather quickly, hence why most imagers don't bother exposing each sub beyond the 10xRN^2 criterion. In some cases, you might find that you have trouble swamping read noise that much, depending on the dynamic range of your scene, but again, you only lose a few percent if you, say, swamp by 8xRN^2.
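The size of that penalty is easy to quantify; a short sketch, assuming the background-limited case where the sky signal per sub is k x RN^2:

```python
import numpy as np

# Per-sub background noise is sqrt(S_sky + RN^2). With S_sky = k * RN^2,
# the SNR relative to a hypothetical zero-read-noise camera is
# sqrt(k / (k + 1)), independent of the actual RN value.
for k in (3, 5, 8, 10, 20):
    loss = 1.0 - np.sqrt(k / (k + 1.0))
    print(f"swamp {k:2d}x RN^2: SNR penalty ~ {100 * loss:.1f}%")
# 10x -> ~4.7%, 8x -> ~5.7%: only a few percent, as described above
```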
Diminishing returns with continued integration, however, set in a lot slower. Every time you double integration time, you improve your SNR by about 40%. That is a lot better than a few percent. Having been someone who has integrated tens of hours per channel before, I can state from experience that it can continue to improve IQ for quite a long while before those "diminished returns" stop returning any real value. It depends a lot on what you care about capturing... sometimes, it may take 10 hours just to barely capture some signals, and doubling your integration can improve those signals quite a lot. Quadrupling your integration (i.e. 40 hours) could turn a barely perceptible signal into something you could reasonably process. A key example that comes to mind is OU4, the Squid Nebula. In about 12 hours I was able to barely resolve most of it, and it was still very noisy. Twenty hours would have improved the signal by roughly 30%, which would still probably not have been enough. Forty hours would have nearly doubled the signal quality, which might have been enough, but I never got that far.
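Since SNR grows as the square root of total integration time, those numbers are simple to check (the sqrt law is the only assumption here):

```python
import numpy as np

BASE = 12.0  # hours, the OU4 integration mentioned above
for hours in (12, 20, 24, 40):
    print(f"{hours:>2} h: SNR x{np.sqrt(hours / BASE):.2f} relative to 12 h")
# SNR scales as sqrt(integration time): doubling (24 h) buys ~1.41x (~40%),
# 20 h buys ~1.29x, and 40 h buys ~1.83x, close to double.
```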
Diminishing returns does not mean no returns, and exactly when "diminished" occurs is entirely relative to the signal(s) of interest. There is not some hard, concrete wall at, say, 10 hours beyond which there is no further value in continued integration. Diminished returns from integration set in slowly. Every imager is going to have to determine for themselves whether they need to continue or not, and do so for each and every target they image. There is almost always a fainter signal. Diminished returns on brighter signals in the same field of view say nothing about returns that may still be viable on faint signals, especially when you may not even have picked up any signal on such faint objects yet! Sometimes it can take ten(s) of hours to even begin to resolve a signal, let alone integrate enough data for a reasonable, processable SNR.

This is still true with CMOS cameras. They have certainly made it a lot easier to pick up fainter signals (I remember the days when capturing something like the Soap Bubble Nebula was practically impossible, and anyone who could barely reveal it was considered a hero!), but they have not eliminated the challenge. I still find it fairly rare to find images of, say, the Orion belt and sword area that clearly depict the extensive amount of faint blue reflection nebulosity in that region... those blue reflections are very faint signals, and diminished returns on them set in FAR later than diminished returns even on the dark dust in the area.
Byron Miller:
Jon Rista:
FPN can be shown in these formulas as well. Usually there is some small remnant DFPN and FPN term after calibration, but if your calibration masters are well crafted, then those remnant terms will usually not matter until you are well into the hundreds if not thousands of subs being integrated, so we can usually leave them out. If you are also dithering enough, any remnant FPN term is going to in essence be converted into a temporally random noise term in the stack as well, and one that usually has a sub-electron scale.
This is exactly what I want to experiment with. "Well crafted" can leave a lot up to the imagination though.
Let's say I image from horizon to horizon: my SNR increases as the target climbs higher in the sky and then drops again as it gets back closer to the horizon. If you have more subs to average, couldn't the SNR of the final stack be better than with fewer, longer subs, which have smaller noise variation between them because of higher saturation on the sensor but still suffer from the same cumulative changes in noise/signal as the sky changes? The sky is changing as my scope goes from horizon to horizon, but in the video it sounded like heavily saturating a single sub was always preferable, and I struggle with that.
What of the impacts of seeing and aperture (is your scope within the average turbulence?) and pixel scale and scaling or drizzling? If you're undersampled, 2x drizzling is better with more subs, isn't it? If you're oversampled, resampling averages again, right?
For example, doesn't Gaussian resampling benefit more from more subs? I'm interested in playing around with these because, like you said at the beginning, it's not so simple...
thanks again for the response, always great to cross paths with ya!
The additional signal you have acquired has nothing to do with whether calibration will remove DFPN and FPN or not. These are fixed patterns intrinsic to the sensor itself. FPN terms are noise terms that can be completely ELIMINATED with proper calibration. Wouldn't you prefer to eliminate noise, if you can, rather than just try to average MORE noise down? The more noise you have, the more effort (i.e. more total integration) it is going to take to average it all down and bring your object signal up to a quality level. IMHO, it's always better to eliminate a noise term entirely if you can, rather than to try and use other means to reduce it. FPN can be eliminated!! Remember that!
FWIW, ultra wide field imaging does present some additional challenges that may be unique compared to narrower fields. If you are imaging with any amount of LP on the horizon, then yes, you might run into some quirks with flat calibration. Those challenges are largely restricted to ultra wide fields (i.e. Milky Way imaging). For fields that don't span quite so much of the open sky, however, calibration corrects CAMERA defects that lead to pattern noise terms (and also shading from the optics, dust motes, etc., which technically are also another pattern), and it really shouldn't matter what is IN the field. Calibration should correct those fixed patterns regardless.
Crafting a good master dark and master flat is not very ambiguous. You need to calibrate (flats only) and integrate the frames properly, and, I guess, make sure the frames were captured properly; that is really all there is to it. In all my years in the hobby, having helped countless people with processing issues, I've found it's fairly common that people are either integrating their masters incorrectly or using them incorrectly. Sometimes the frames are mismatched, which is usually an easily correctable acquisition issue. The most common misuse of a master is scaling the master dark, which IMHO is one of the most common issues with dark calibration (and IMO one of the worst features of PI, especially since it is used by default with WBPP, and one of the key reasons I will never use it!)
MASTER DARK:
a. Acquire at the same temperature, gain and exposure length as the light frames.
b. Integrate WITHOUT any scaling or normalization, simple averaging, with high-sigma rejection only (the only things you want to reject from dark frames are temporal issues like cosmic ray strikes.)
c. Subtract from each light frame, and use an output pedestal to make sure calibrated lights don't clip to black. DO NOT USE DARK SCALING!!!
MASTER FLAT:
a. Ideally acquire at the same temperature, as well as same gain, as light frames (for optimal PRNU matching.)
b. Calibrate with a master bias (follow the master dark rules, except use the shortest exposure time possible) or with a master flat dark (follow the master dark rules, only matched to the flats).
c. Integrate with multiplicative normalization, simple averaging. Again, rejection should be simple and high sigma only if used.
d. Divide from each light frame, do not use any kind of scaling.
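Putting those two recipes together, a minimal numpy sketch (synthetic arrays stand in for real FITS frames; the frame counts, noise levels, and pedestal value are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
SHAPE = (100, 100)

def integrate_simple(frames, hi_sigma=5.0):
    """Plain average with high-sigma rejection only (cosmic rays and the like)."""
    stack = np.stack(frames)
    med = np.median(stack, axis=0)
    std = stack.std(axis=0) + 1e-9
    rejected = stack > med + hi_sigma * std      # reject high outliers only
    return np.ma.masked_array(stack, rejected).mean(axis=0).filled(0.0)

# Stand-in frames; in practice these come from FITS files captured at matching
# temperature/gain (and, for darks, matching exposure).
dark_frames      = [rng.normal(50, 2, SHAPE) for _ in range(20)]
flat_dark_frames = [rng.normal(50, 2, SHAPE) for _ in range(20)]
flat_frames      = [rng.normal(30000, 200, SHAPE) for _ in range(20)]
light            = rng.normal(500, 25, SHAPE)

# MASTER DARK (rule b): integrate with no scaling or normalization.
master_dark = integrate_simple(dark_frames)

# MASTER FLAT (rules b + c): calibrate the flats with a master flat dark, then
# integrate with multiplicative normalization to a common level.
master_flat_dark = integrate_simple(flat_dark_frames)
flats = [f - master_flat_dark for f in flat_frames]
ref = np.median(flats[0])
flats = [f * (ref / np.median(f)) for f in flats]
master_flat = integrate_simple(flats)
master_flat /= master_flat.mean()    # unity mean, so division preserves flux

# LIGHTS (rules c + d): subtract the master dark with an output pedestal -- no
# dark scaling -- then divide by the master flat.
PEDESTAL = 100.0  # e-, keeps the calibrated background from clipping to black
calibrated = (light - master_dark + PEDESTAL) / master_flat
print(f"calibrated background ~ {calibrated.mean():.1f} e-")
```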
Regarding saturation, I'm not saying you should saturate any sub. That's relative to the previous discussion on read noise, and remember that I stated there is a balance point where you achieve optimal results. Depending on your available DR, you may have to let some stars saturate a little in order to swamp the read noise to a reasonable degree. But overly saturating any sub is not optimal, nor is under-exposing such that you are not swamping the read noise to a sufficient degree. There is an OPTIMAL configuration for every camera... I think people should aim for the optimal.
Finally, on drizzling. I like to drizzle, simply because I like how drizzling works and the way it integrates the data, better than standard integration. I have always felt my stars take on better profiles, and my noise profile is more pleasing, with drizzled integrations than standard integrations. So I think drizzling has value regardless of how you are sampled. Even if you are not undersampled, you could drizzle 1x and just benefit from the improved process, or drizzle 2x, then say deconvolve with the highly sampled data, maybe also do a light pass of noise reduction, then downsample by a factor of 2x back to "normal" size, and continue processing. I find that drizzled integrations offer many benefits, and I think they should be a standard part of anyone's workflow. It IS extra time and effort to do drizzling in most cases, but if you want the best results, I think it's worth the time. And yes, drizzling is optimal with more subs, but with CMOS cameras and how they are most often used, you usually have plenty of subs for optimal drizzling results regardless.