120s vs 300s subs

John Hayes · Freestar8n · Arun H · Guiem Kimi · Christian Großmann
31 replies · 2.1k views
Guiem Kimi avatar
Hi!

I read a lot about recommended exposure times depending on your setup, target, sky conditions, etc., and watched several videos on the subject. Either way, I wanted to run some tests with my own equipment so I could draw my own conclusions. I just want to share them with the community. There is nothing new here, just my results.

- Camera: ASI 294MC PRO
- Scope: TS Optics CF APO 80
- Total integration time: 1h (30 x 120s subs vs 12 x 300s). I know it would be better to gather more data, because 12 subs is not that many...
- Target: M 31
- Both images have been processed with the same basic workflow, just to keep it simple and to be able to compare them.

Please forget about the stars. I just wanted to check the galaxy itself.

120s:



300s:



So, my thoughts are:

- Both of them are almost the same. Individual subs showed slightly less noise in the 300s ones.
- 120s has a little more contrast.
- The core is blown out in both of them. This is expected, and HDR will be the way to go for this project.

So, I am staying at 120s because:

- Lots of subs, so if I have to discard several of them, this is less painful.
- SNR gets improved.
- Data storage is cheap, so the amount of subs is not a problem at the moment.

Well, I just wanted to share this, although I know it doesn't show any new information. But it kept me thinking for a long time until I tried it myself.

CS.
Guiem.
Helpful Engaging Supportive
Christian Großmann avatar
Hi,

Thanks for sharing your results. I know about all the discussions, and to a certain degree I agree with your results. But I am not convinced that this is really the case. M31 is maybe not the best example, because it is quite bright. If you try the same on a faint nebula, and you are imaging with narrowband (I know your camera is color), then I guess things may be different. An example may be M27 (the Dumbbell Nebula). Take a look at the images here and you will see nearly all of them showing only its inner bright core. But if you look deeper into some of the photos, you see those "wings" appear. They are really faint. If you take short subs of M27, the signal may never rise above the noise. You really have to take longer exposures to get those details. I know that, in theory, statistics should be on our side and with enough exposures you should at some point see those faint things too, but in my experience I get better results with longer exposures. At least signal-wise. There are obviously other drawbacks, but I am by far no expert in astrophotography :)

I am concerned that this may be the beginning of a long discussion, but that's my own experience and it kind of makes sense to me. So I would suggest you do the same on a fainter target, and maybe with a longer total exposure time.

Just my thoughts…

CS

Christian
Helpful Insightful Respectful Engaging
John Hayes avatar
If you aren’t bandwidth constrained and you don’t mind the extra processing time, your results confirm that you should generally use the shortest exposures that your read noise allows. You generally want to operate in the photon-noise-limited regime, so it’s a good idea to expose so that the photon noise is ~10x larger than the read noise.
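As a rough sketch of that criterion (assuming you know your camera's read noise and can measure the sky background in electrons per pixel per second; the values below are placeholders, not measurements):

```python
def min_sub_exposure(read_noise_e, sky_flux_e_per_s, ratio=10.0):
    # Want the sky shot noise, sqrt(sky_flux * t), to be at least `ratio` times
    # the read noise, so t >= (ratio * read_noise)^2 / sky_flux.
    return (ratio * read_noise_e) ** 2 / sky_flux_e_per_s

# Placeholder values: 1.5 e- read noise, 2 e-/pixel/s sky background.
print(min_sub_exposure(1.5, 2.0))  # ~112 s
```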

John
Helpful Insightful
Guiem Kimi avatar
Christian Großmann:
Thanks for sharing your results. I know about all the discussions, and to a certain degree I agree with your results. But I am not convinced that this is really the case. M31 is maybe not the best example, because it is quite bright. If you try the same on a faint nebula, and you are imaging with narrowband (I know your camera is color), then I guess things may be different. An example may be M27 (the Dumbbell Nebula). Take a look at the images here and you will see nearly all of them showing only its inner bright core. But if you look deeper into some of the photos, you see those "wings" appear. They are really faint. If you take short subs of M27, the signal may never rise above the noise. You really have to take longer exposures to get those details. I know that, in theory, statistics should be on our side and with enough exposures you should at some point see those faint things too, but in my experience I get better results with longer exposures. At least signal-wise. There are obviously other drawbacks, but I am by far no expert in astrophotography


@Christian Großmann Sorry, I forgot to mention that I was aiming at the OSC broadband case. You are totally right, and I did not take into account that M31 is a bright object that may be hiding the differences. So, as you suggest, I will try on fainter objects as well. Thanks!

John Hayes:
If you aren’t bandwidth constrained and you don’t mind the extra processing time, your results confirm that you should generally use the shortest exposures that your read noise allows. You generally want to operate in the photon-noise-limited regime, so it’s a good idea to expose so that the photon noise is ~10x larger than the read noise.


@John Hayes  I will find info about how to do that. Thanks for the tip!
Arun H avatar
- SNR gets improved.


Why is this, given that the total integration time is the same?
Ashraf AbuSara avatar
This question has been answered very well by Dr. Robin Glover; I always refer to his presentation. Read noise is so low on modern cooled CMOS cameras that for broadband targets it is quickly overwhelmed by light pollution for anyone living under Bortle 5 skies or worse. If you are shooting at f/7 like I am (C11 with a 0.7x reducer) with a monochrome camera, then for the L filter I would really only need 24s exposures to get close to the lowest total noise in my final stack under Bortle 5/6, as long as the integration time is equal. For OSC cameras (because of the Bayer filter) I would have to multiply by 3, so 72s exposures are fine. Probably shorter, because I am really under Bortle 6.

For 3nm narrowband filters on a monochrome camera, it would be 10 x 24s, so 240s. I usually go for 300s because that is where my darks are, but I might start cutting it shorter.

Notice that the brightness of the object is not a factor in his equation, by the way. How faint it is does not matter, as long as the integration time is the same and the read noise is significantly overwhelmed by light pollution.

So in the end what matters (as long as integration time is equal) is:
1) Type of Camera (CCD / CMOS)
2) Filters (Monochrome / OSC bayer filter / Narrowband)
3) Focal Ratio
4) Light pollution level. 

I strongly recommend you watch the whole thing if you have not already, but you can skip to the end for the TL;DR conclusion.

https://www.youtube.com/watch?v=3RH93UvP358
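As a rough sketch of how those multipliers stack up (the base exposure and factors are just the example numbers quoted above, so treat them as illustrative, not as Glover's exact formula):

```python
# Illustrative only: scale a base mono/luminance exposure by the factors above.
BASE_LUM_EXPOSURE_S = 24  # mono + L filter, f/7, Bortle 5/6 (example value from above)

multipliers = {
    "mono, L filter": 1,         # broadband, monochrome
    "OSC (Bayer filter)": 3,     # each pixel sees roughly 1/3 of the light
    "mono, 3nm narrowband": 10,
}

for setup, factor in multipliers.items():
    print(f"{setup}: ~{BASE_LUM_EXPOSURE_S * factor} s per sub")
# mono, L filter: ~24 s | OSC: ~72 s | mono, 3nm narrowband: ~240 s
```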
Helpful Insightful Engaging Supportive
Guiem Kimi avatar
Arun H:
- SNR gets improved.


Why is this, given that the total integration time is the same?

I saw that the 120s subs have more noise than the 300s ones. Just the raw subs, with no processing at all, only an STF. But when stacked (no calibration, just the lights), the 120s stack is less noisy than the 300s one (it is very subtle).

I read that the amount of noise reduction when stacking is the square root of the number of individual sub frames stacked. So I understand that more subs means more noise reduction.

I think that in this case the comparison is not fair, because there are only 12 subs for the 300s image. So I think that if I stacked 5 hours instead of 1 hour, the 300s version would be much better in SNR.

Guiem.
Arun H avatar
I read that the amount of noise reduction when stacking is the square root of the number of individual sub frames stacked. So I understand that more subs means more noise reduction.


This is incorrect, or maybe I misunderstood what you wrote.

There are three main sources of noise - photon shot noise (both from subject and sky), dark current noise, and read noise.

The former two depend only on total integration time, but read noise increases the more subs you use to get to that total integration time. So if your only goal is to minimize noise, longer subs are better, but as John pointed out, once you use a sub exposure time where the dark current and sky background noise are about 10x the read noise in each sub, there is minimal benefit to longer subs.
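To put rough numbers on that for the OP's two stacks (a sketch with made-up sky, dark, and read-noise values, since the actual ones aren't known):

```python
import math

def summed_stack_noise_e(total_s, sub_s, sky_e_s, dark_e_s, read_e):
    """Noise (electrons) in the summed stack: shot and dark noise depend only on
    total time, while read noise is added once per sub."""
    n_subs = total_s / sub_s
    return math.sqrt(sky_e_s * total_s + dark_e_s * total_s + n_subs * read_e ** 2)

# Made-up values: 2 e-/s sky, 0.01 e-/s dark current, 1.5 e- read noise, 1 h total.
for sub_len in (120, 300):
    print(sub_len, round(summed_stack_noise_e(3600, sub_len, 2.0, 0.01, 1.5), 1))
# 120 s subs: ~85.5 e-, 300 s subs: ~85.2 e-  (the read-noise penalty is negligible here)
```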

Regardless, the only reason why SNR will be better  for the same total integration time  with shorter subs (and hence a larger number of individual subs for the same total integration time) is if there is variation in imaging conditions.

And yes, as Ashraf noted, Robin Glover has excellent videos on this.
Helpful Respectful
Guiem Kimi avatar
Arun H:
I read that the amount of noise reduction when stacking is the square root of the number of individual sub frames stacked. So I understand that more subs means more noise reduction.


This is incorrect, or maybe I misunderstood what you wrote.

There are three main sources of noise - photon shot noise (both from subject and sky), dark current noise, and read noise.

The former two depend only on total integration time, but read noise increases the more subs you use to get to that total integration time. So if your only goal is to minimize noise, longer subs are better, but as John pointed out, once you use a sub exposure time where the dark current and sky background noise are about 10x the read noise in each sub, there is minimal benefit to longer subs.

Regardless, the only reason why SNR will be better  for the same total integration time  with shorter subs (and hence a larger number of individual subs for the same total integration time) is if there is variation in imaging conditions.

And yes, as Ashraf noted, Robin Glover has excellent videos on this.

You're right, Arun. Thanks for the correction. What made me understand the concept is this:



Arun H:
but as John pointed out, once you use a sub exposure time where the dark current and sky background noise are about 10x the read noise in each sub, there is minimal benefit to longer subs.

Thanks!
John Hayes avatar
Arun H:
I read that the amount of noise reduction when stacking is the square root of the number of individual sub frames stacked. So I understand that more subs means more noise reduction.


This is incorrect, or maybe I misunderstood what you wrote.

There are three main sources of noise - photon shot noise (both from subject and sky), dark current noise, and read noise.

The former two depend only on total integration time, but read noise increases the more subs you use to get to that total integration time. So if your only goal is to minimize noise, longer subs are better, but as John pointed out, once you use a sub exposure time where the dark current and sky background noise are about 10x the read noise in each sub, there is minimal benefit to longer subs.

Regardless, the only reason why SNR will be better  for the same total integration time  with shorter subs (and hence a larger number of individual subs for the same total integration time) is if there is variation in imaging conditions.

And yes, as Ashraf noted, Robin Glover has excellent videos on this.

Just to be clear, the more exposure time you have, the more noise you will get, so don't think of it in terms of noise. SNR is what counts. Signal grows linearly with exposure time (or N frames), whereas Poisson noise grows as the square root of the exposure time (or N frames). Therefore, SNR increases as the square root of exposure time. Stacking doesn't reduce noise; it increases the ratio of signal to noise. Low noise relative to the signal is what makes an image look cleaner.
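A quick toy simulation of that scaling (arbitrary numbers; only the square-root behaviour matters):

```python
import numpy as np

rng = np.random.default_rng(0)
signal_per_sub = 100.0  # mean signal in electrons per sub (arbitrary)

for n in (1, 4, 16, 64):
    # Each sub carries Poisson photon noise; "stacking" here is summing n subs.
    stacks = rng.poisson(signal_per_sub, size=(100_000, n)).sum(axis=1)
    print(n, round(stacks.mean() / stacks.std(), 1))
# SNR comes out ~10, ~20, ~40, ~80: it grows as sqrt(N), not linearly.
```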

John
Well Written Helpful Insightful Concise
Arun H avatar
John Hayes:
Just to be clear, the more exposure time you have, the more noise you will get so don't think of it in terms of noise.  SNR is what counts


Yes, this is why I  was careful in the terminology I used:

"Regardless, the only reason why SNR will be better  for the same total integration time  with shorter subs (and hence a larger number of individual subs for the same total integration time) is if there is variation in imaging conditions."

and

"The former two depend only on total integration time, but read noise increases the more subs you use to get to that total integration time."


Both signal and noise increase with total integration time, but signal increases faster.

Was there something factually incorrect in what I posted?
John Hayes avatar
Arun H:
John Hayes:
Just to be clear, the more exposure time you have, the more noise you will get so don't think of it in terms of noise.  SNR is what counts


Yes, this is why I  was careful in the terminology I used:

"Regardless, the only reason why SNR will be better  for the same total integration time  with shorter subs (and hence a larger number of individual subs for the same total integration time) is if there is variation in imaging conditions."

and

"The former two depend only on total integration time, but read noise increases the more subs you use to get to that total integration time."


Both signal and noise increase with total integration time, but signal increases faster.

Was there something factually incorrect in what I posted?

Arun,
You did a fine job.  I was simply clarifying the importance of SNR relative to your comment specifically about noise itself  ("if your only goal is to minimize noise, longer subs are better...").   Longer subs increase noise while increasing SNR at a faster rate.  I know that you know that, but your mention of decreasing noise really means relative to the signal.

John
Helpful Insightful Respectful
Joe Linington avatar
I have a different method for determining sub time for RGB/LRGB imaging. I expose as long as possible, until the number of saturated pixels climbs somewhat (a few hundred) above the base level. With my 294M in Bin1 this is around 45s for Lum and 60s for RGB. My OSC camera can handle 120s. I am sure that if I used Bin2 with the 294M it would be longer, but I haven't tried that yet.
Helpful Concise
John Hayes avatar
Joe Linington:
I have a different method for determining sub time for RGB/LRGB imaging. I expose as long as possible, until the number of saturated pixels climbs somewhat (a few hundred) above the base level. With my 294M in Bin1 this is around 45s for Lum and 60s for RGB. My OSC camera can handle 120s. I am sure that if I used Bin2 with the 294M it would be longer, but I haven't tried that yet.

Joe,
Avoiding saturation is a good goal as well. Binning on its own increases both signal and SNR, so to avoid saturation, binning 2x2 will require a shorter exposure time. Binning requires a bit more consideration of how you want to run your system relative to image detail, but that's another, much longer discussion.

John
Concise
Freestar8n avatar
To provide a bit of clarity on some things that I think are misworded in this thread:

If you have a pile of images, the noise and signal in that pile is whatever it is.  You don't have a single, final image with noise and signal to talk about until you somehow combine them.  The main ways to combine them are 1) sum and 2) average.

If you sum the stack the noise increases as sqrt(N) and the signal increases as N, so noise increases and SNR also increases.

If you average the stack, the noise decreases as 1/sqrt(N) and the signal remains constant.  Again SNR increases as in the sum.

The only difference between the average and the sum is  just a scaling factor of N that has no impact on the quality of the end result.
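A small numerical check of that point (pure shot noise, arbitrary numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
subs = rng.poisson(50.0, size=(32, 100_000))  # 32 subs of a flat 50 e- patch (toy values)

summed = subs.sum(axis=0)
averaged = subs.mean(axis=0)
print(summed.mean() / summed.std())      # SNR of the sum     (~40)
print(averaged.mean() / averaged.std())  # SNR of the average (~40): identical, as expected
```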

I think most processing tools do a form of averaging, so in those cases it's natural to say the noise decreases in the stack.  The point is, it depends on context as to whether it is a sum or average - and I think usually the context is to average the stack.  Noise goes up or down depending on the implicit way the images are combined.

If you have a stack of the same exposure, T, the noise in the average of the stack goes down as 1/sqrt(N).

If you have a different stack of longer exposures but the same total time, the noise in the average will be lower than with the shorter exposures because it will have less total read noise contribution - but that difference may be negligible.

There is no benefit in having images under a variety of conditions and a range of qualities, as opposed to a consistent set of images of high quality.  Departures from high quality will only hurt the stack.

The key noise term not discussed here is very important - and that is fixed pattern noise.  FPN will increase linearly in the sum and is constant in the average - so it has potential to kill the benefit of a tall stack of images.  Dithering should greatly reduce it, though.

With low noise cmos cameras and typical light pollution, the choice of exposure time is determined more by convenience and the avoidance of star saturation than noise models - as shown by the OP's example.  I would never bin during acquisition even if heavily oversampled - because you can always bin or smooth in the last stage of processing.  The only motivation to bin during acquisition is smaller files, faster downloads, and faster processing.

Frank
Well Written Helpful Insightful
Christian Großmann avatar
With low noise cmos cameras and typical light pollution, the choice of exposure time is determined more by convenience and the avoidance of star saturation than noise models - as shown by the OP's example. I would never bin during acquisition even if heavily oversampled - because you can always bin or smooth in the last stage of processing. The only motivation to bin during acquisition is smaller files, faster downloads, and faster processing.

Hi Frank,

I think your comment makes some things clearer for me, and it all makes sense. The only thing I do not agree with is your last sentence. I use a ZWO 183MM and later added a 294MM to my collection. They both have pixel sizes of around 2.4 microns. When I started doing astrophotography, I bought a 10" f/5 Newtonian with an ES HR coma corrector (1.06x scaling factor). The total focal length of this setup is 1344mm, at about 20kg on an EQ6. Without binning, this setup is really useless and I am extremely oversampled. Things get much worse if you have a slower scope. I took images without knowing what I was doing (hopefully that has changed a bit in the meantime). I always kept my camera at Bin1. Sometimes I was not even able to stack the images, because the signal in the subs was so weak. Even when it works, what sense does it make to store high-quality files of poor-signal frames? It only blows up storage space, takes a lot more processing power, and the processing time increases rapidly. And while that's another story, without binning I wouldn't be able to complete an autofocus run within the span of the night.

Coming from regular photography, I always tried to keep the image quality as high as possible and kept my settings as I would have with my DSLR. But I learned the hard way that this is the wrong way to do it, unless you get at least some advantages out of it. I am totally happy now to bin the setup above at Bin3 on the 294MM and get nice data with "only" 5 megapixels. Otherwise, all I would have is a really good basis for analyzing the seeing at my location. It does not increase the quality of the content, in my experience.

The first part of your comment is really interesting, and I guess the last sentence may be a matter of personal experience. But that's not what I have experienced. I only wanted to add this to the discussion.

One word about learning astrophotography: it has even changed the way I take images with a normal camera. I shoot at much higher ISO now than I did before AP.

Clear skies

Christian
Helpful
Scott Badger avatar
In these discussions of long vs. short subs, the statistical accuracy of an increased sample size rarely comes up. Isn't that, and its impact on the weighting used for integration, a factor to be considered?

Cheers,
Scott
dkamen avatar

Hi,

FPN is not noise in the statistical sense; it is simply undesirable signal ("noise" in everyday terms), and that is precisely why it grows linearly with the number of subs, just like desirable signal does. It is corrected with calibration, which leaves you only with "proper" noise, the kind that decreases with the square root, etc. Dithering will also help, but it must be very aggressive, because the patterns are typically tens or even hundreds of pixels across.

Cheers,
D.
Well Written Helpful Insightful Concise Engaging
Arun H avatar
FPN is not noise in the statistical sense; it is simply undesirable signal ("noise" in everyday terms), and that is precisely why it grows linearly with the number of subs, just like desirable signal does. It is corrected with calibration, which leaves you only with "proper" noise, the kind that decreases with the square root, etc.


You are correct, but this is going to start a whole chain of ultimately pointless discussion, largely having to do with terminology and with what someone considers to be "noise". There have been innumerable threads on this on CN and now also on AB. Some people are very passionate about FPN really being noise and get very upset when you point out, as you did, that it is different from random noise, repeatable, and hence correctable with calibration (as opposed to statistically random noise such as read noise, dark current noise, and photon shot noise, which is not).
Helpful Insightful
dkamen avatar
Well, I don't like FPN either. But if you have calibrated, it is simply not a factor in this discussion.
Freestar8n avatar
Christian Großmann:
With low noise cmos cameras and typical light pollution, the choice of exposure time is determined more by convenience and the avoidance of star saturation than noise models - as shown by the OP's example. I would never bin during acquisition even if heavily oversampled - because you can always bin or smooth in the last stage of processing. The only motivation to bin during acquisition is smaller files, faster downloads, and faster processing.

Hi Frank,

I think your comment makes some things clearer for me, and it all makes sense. The only thing I do not agree with is your last sentence. I use a ZWO 183MM and later added a 294MM to my collection. They both have pixel sizes of around 2.4 microns. When I started doing astrophotography, I bought a 10" f/5 Newtonian with an ES HR coma corrector (1.06x scaling factor). The total focal length of this setup is 1344mm, at about 20kg on an EQ6. Without binning, this setup is really useless and I am extremely oversampled. Things get much worse if you have a slower scope. I took images without knowing what I was doing (hopefully that has changed a bit in the meantime). I always kept my camera at Bin1. Sometimes I was not even able to stack the images, because the signal in the subs was so weak. Even when it works, what sense does it make to store high-quality files of poor-signal frames? It only blows up storage space, takes a lot more processing power, and the processing time increases rapidly. And while that's another story, without binning I wouldn't be able to complete an autofocus run within the span of the night.

Coming from regular photography, I always tried to keep the image quality as high as possible and kept my settings as I would have with my DSLR. But I learned the hard way that this is the wrong way to do it, unless you get at least some advantages out of it. I am totally happy now to bin the setup above at Bin3 on the 294MM and get nice data with "only" 5 megapixels. Otherwise, all I would have is a really good basis for analyzing the seeing at my location. It does not increase the quality of the content, in my experience.

The first part of your comment is really interesting, and I guess the last sentence may be a matter of personal experience. But that's not what I have experienced. I only wanted to add this to the discussion.

One word about learning astrophotography: it has even changed the way I take images with a normal camera. I shoot at much higher ISO now than I did before AP.

Clear skies

Christian

What you're describing are exactly the issues of storage space and convenience I was talking about when not binning vs. binning. My point is that there is no loss of effective well depth or any other downside to just using the small pixels, except for having to deal with file size and so forth.

But it is possible that some cameras will have loss of bit depth or other issues when binning during acquisition - so that is something to check.  They may even saturate earlier depending on how they bin.  With ccd cameras there was at least a potential reduction in noise by binning - but that doesn't happen with cmos - other than the usual SNR gain by batching 4 pixels into 1, which also applies with software binning after acquisition.
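A sketch of what 2x2 software binning after acquisition looks like (numpy, toy sky-limited frame); it shows the same 4-pixels-into-1 SNR gain without touching the camera settings:

```python
import numpy as np

def bin2x2(frame):
    """Sum 2x2 pixel blocks in software, after acquisition."""
    h, w = frame.shape
    return frame[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

rng = np.random.default_rng(2)
frame = rng.poisson(10.0, size=(1000, 1000)).astype(float)  # toy background-limited frame

print(frame.mean() / frame.std())                  # per-pixel SNR ~3.2
print(bin2x2(frame).mean() / bin2x2(frame).std())  # ~6.3: binning 4 pixels doubles the SNR
```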

There is a key difference between "real life" photography and astro because the former usually works with one exposure, while the latter requires aligning and stacking many exposures.  The process of aligning and stacking adds an additional blur on the scale of the pixels due to the need to shift the image by sub-pixels and interpolate somehow into the final image.  The final blurring will be less if it is done with non-binned pixels, so that the final result after binning will be slightly sharper.  This is consistently ignored in these discussions, but it is quite real - and it means smaller pixels will always have some amount of benefit over larger ones - though the difference may be small.

In your case, if the storage is a burden and you are vastly oversampled and you are sure that binning during acquisition isn't limiting performance somehow - then it makes sense to go ahead and bin.  But I think it's important to confirm all these things because once you acquire binned - you can't unbin it.

Frank
Helpful Insightful
Freestar8n avatar

Hi,

FPN is not noise in the statistical sense; it is simply undesirable signal ("noise" in everyday terms), and that is precisely why it grows linearly with the number of subs, just like desirable signal does. It is corrected with calibration, which leaves you only with "proper" noise, the kind that decreases with the square root, etc. Dithering will also help, but it must be very aggressive, because the patterns are typically tens or even hundreds of pixels across.

Cheers,
D.

By definition FPN is a spatial noise term in the image and it is constant in time - so statistically it is indeed a noise term and statistically it adds linearly in the sum of the stack.  It is still noise and statistics, but it does not involve noise terms that are independent - so they don't add in quadrature.

There is no need for this to be controversial - it's just how it works, and when it is ignored you end up with ugly streaks in the background that don't make sense in typical discussions involving only read noise and shot noise.  If it isn't dealt with properly it will limit the theoretical 1/sqrt(N) noise reduction because there is a constant added noise term - in the average - that never goes down.

I don't know if the OP dithered the frames, but if not the FPN could be a significant term that reduces the benefit of longer exposures and more frames.  It can be present, and a dominant noise term, even if the streaks aren't visible.  So for comparisons like this you would always want to dither well.

Frank
Helpful Insightful Respectful
Sean Mc avatar
So now I have a follow up question. How many dithered lights before it becomes statistically moot?  (Noise wise not integration time wise)

30? 150? 3000?
Engaging
Freestar8n avatar
So now I have a follow up question. How many dithered lights before it becomes statistically moot?  (Noise wise not integration time wise)

30? 150? 3000?

In my model of how pattern noise behaves - if the dithers are far enough apart the pattern noise should go down as 1/sqrt(NDithers) - in the average.
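A toy 1-D simulation of that behaviour (made-up pattern and read-noise amplitudes):

```python
import numpy as np

rng = np.random.default_rng(3)
npix, nsubs = 2048, 36
pattern = rng.normal(0.0, 5.0, npix)   # fixed pattern, identical in every sub

def read_noise():
    return rng.normal(0.0, 2.0, npix)  # random read noise, fresh in each sub

# No dithering: the pattern hits the same pixels every time and never averages out.
undithered = np.mean([pattern + read_noise() for _ in range(nsubs)], axis=0)

# Dithering: shift each sub by a random offset, so after "alignment" the pattern
# lands on different pixels each time and averages down like random noise.
dithered = np.mean([np.roll(pattern, rng.integers(npix)) + read_noise()
                    for _ in range(nsubs)], axis=0)

print(round(undithered.std(), 2))  # ~5.0: still dominated by the fixed pattern
print(round(dithered.std(), 2))    # ~0.9: pattern reduced roughly by 1/sqrt(nsubs)
```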

The thing is - the pattern noise could be very large, in which case it dominates the other noise terms mentioned in this thread.  Or it could be negligible and dithers don't help much.  It isn't as easy to quantify or model since it isn't a simple property of the sensor like read noise and dark current.

I dither on every exposure since for me it only takes a few seconds.

In the context of this thread, there is a benefit in a large number of frames if each frame is dithered well - and technically that would somewhat penalize longer exposures since there might be fewer dithers.  But it may not matter much if the pattern noise isn't huge and you have a good number of dithers anyway.

Frank
Helpful Insightful Respectful
Christian Großmann avatar
So I'd like to ask one more question. If the noise goes down as 1/sqrt(N) (let's say it does), then there should be a number of frames beyond which the SNR (let's say with a constant signal) only improves slightly. So there should be a number of frames that makes sense in terms of noise "reduction" (I know it isn't really a reduction), and above that number you would not get much more benefit noise-wise. Or am I wrong here?

In other words, nowadays I tend to collect several hours of data for my images. I will get more signal (linear) with this, but the noise keeps increasing only slowly (as the square root), which improves my SNR. So there should be a point where it doesn't really matter whether I take short or long subs, or fewer or more dithers, as long as I have enough frames to get to the part of the 1/sqrt(N) curve that is nearly flat? Did I miss something here?

Christian