What sub length should I aim for?
Single choice poll (50 votes)
42% (21 votes)
26% (13 votes)
32% (16 votes)
Daniel Renner
I never go over 300s exposures with my ASI1600MM Pro. Is that a bad thing?

I have read that some people say it doesn't matter whether you take 2x300s exposures or 1x600s, but is that true in your opinion?

The risk with 600s is that a 10-minute sub can be ruined by clouds or wind. But does it bring out fainter stuff?

(I live in a Bortle 5-6 zone and use an EQ6-R Pro with an Esprit 100ED and 3nm Antlia filters.)
Tim McCollum
It depends; you never want to overexpose and blow out the stars or your subject.
I would say every target is different, although at 3nm there isn't much light making it to the sensor.
Tim
Arun H
Increasing sub length minimizes the effect of read noise, at the possible expense of saturating bright stars.

There are three main noise contributors - dark current noise, photon shot noise, and read noise. Of these, read noise is the least important. Both dark current noise and photon shot noise increase with increased total exposure time and are independent of sub length. Read noise, however, is proportional to the number of subs, hence a greater number of subs yielding the same total integration time will have a greater read noise component. But so long as the sub length is chosen so the read noise is small compared to the shot noise in each sub, the sub length becomes irrelevant and only total integration matters.
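This scaling can be sketched numerically. Below is a minimal noise-model sketch with made-up numbers (the sky flux and read noise values are illustrative assumptions, not measurements from anyone's setup); it only shows how the read-noise contribution grows with sub count while shot noise depends on total time alone.

```python
import math

def stack_noise(total_time_s, sub_length_s, sky_rate_e_per_s, read_noise_e):
    """Total noise (electrons) in a stack covering total_time_s.

    Shot-noise variance grows with total exposure time only;
    read-noise variance is added once per sub, so shorter subs
    mean more subs and a larger read-noise contribution.
    (Dark current noise behaves like the sky term and is omitted.)
    """
    n_subs = total_time_s / sub_length_s
    shot_var = sky_rate_e_per_s * total_time_s   # Poisson: variance = mean
    read_var = n_subs * read_noise_e ** 2        # one hit per frame
    return math.sqrt(shot_var + read_var)

# One hour total, assumed sky flux 0.2 e-/s/pixel, read noise 1.7 e-:
print(stack_noise(3600, 300, 0.2, 1.7))  # 12 x 300 s -> ~27.5 e-
print(stack_noise(3600, 600, 0.2, 1.7))  # 6 x 600 s  -> ~27.2 e-
```

Once the subs are long enough that the sky term dominates, halving or doubling the sub length barely moves the total, which is the sense in which only total integration matters.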

There are formulae that let you calculate how long your subs need to be, for your sky, to swamp out read noise. In truth, unless you are using very short subs, read noise is unlikely to matter much, and hence neither is sub length. A case can be made that, even if you don't completely swamp out read noise by a factor of 50 or so, it is still to your advantage to use shorter subs to avoid blowing out stars. And of course, shorter subs mean fewer demands on guiding accuracy and less loss to one-off events.
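One common form of such a formula (a sketch, not necessarily the exact one meant above) sets the sky signal collected in a single sub to some multiple of the read-noise variance; the multiple (10 here) and the example sky rates are assumptions:

```python
def min_sub_length(sky_rate_e_per_s, read_noise_e, swamp_factor=10.0):
    """Shortest sub (seconds) such that sky shot-noise variance
    exceeds read-noise variance by swamp_factor:
        sky_rate * t >= swamp_factor * RN^2
    """
    return swamp_factor * read_noise_e ** 2 / sky_rate_e_per_s

# Illustrative sky rates, read noise 1.7 e-:
print(min_sub_length(2.0, 1.7))    # broadband sky: ~14 s already suffices
print(min_sub_length(0.02, 1.7))   # 3 nm narrowband: ~24 minutes
```

This is also why narrowband imaging tolerates (and benefits from) much longer subs: a 3nm filter cuts the sky rate by orders of magnitude, so it takes far longer for sky noise to swamp the read noise.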
kuechlew
I don't have sufficient experience to answer the question, but I would like to point out that there are more parameters than just the quality of 1x600s compared to 2x300s. As an example: you won't be able to avoid capturing satellites and/or planes over the course of one or even multiple imaging sessions. Rejection algorithms rely on statistical methods which require a sufficiently large number of frames. Having twice the number of frames may make all the difference between getting the rejection algorithm to work well or not.

I regularly process Telescope Live data, and on occasion they have defective columns in their sensors. I had a few issues getting rid of those with 600s frames when the overall integration time was not high. The problem gets worse for mono data if the issue is only in one filter and the number of frames per filter is 1/3 or 1/4 of the overall number of frames. Doing statistics with fewer than 15 frames can get difficult.
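The frame-count point can be illustrated with a toy kappa-sigma clip (a sketch only: real stackers normalize frames and iterate, and the pixel values below are invented):

```python
import statistics

def sigma_clip(values, kappa=2.5):
    """One pass of kappa-sigma rejection over one pixel's stack of values."""
    mean = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) <= kappa * sigma]

# A satellite trail puts one bright outlier into the pixel stack.
few  = [100, 101, 99, 500]                     # 4 frames
many = [100, 101, 99, 98, 102, 100, 99, 500]   # 8 frames

print(sigma_clip(few))   # outlier survives: it inflates sigma enough to hide
print(sigma_clip(many))  # outlier rejected once there are more frames
```

With only four frames the single bright pixel inflates the estimated sigma enough to escape rejection; with eight frames the statistics are good enough to throw it out.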

So I would opt for the fourth option "it depends".

Clear skies
Wolfgang
Andy Wray
At unity gain (139 for the ASI1600MM Pro) and in a Bortle 5 area I currently do:

30 secs Lum
90 secs RGB
300 secs Ha/Sii/Oiii (7nm filters)

I think I would be tempted to do 600 secs narrowband for 3nm filters.

N.B. This is with an f/5 8" Newtonian.
Lynn K
As stated, IT DEPENDS on camera, light pollution, filter, telescope design and object. AND your mount's tracking, AND your software processing ability.

I have different thoughts about exposure time. For one, don't assume the whole astro imaging community uses CMOS cameras, and primarily ZWO. They do not. There is an attitude on AstroBin that CMOS has always been used. Actually, it is fairly new, and the CMOS revolution is primarily due to one manufacturer, ZWO.

I have been imaging since 2005 and primarily used CCD. I now have a QHY268M, but still use CCD. I find the big advantage of CMOS is not low read noise but download time and QE. The read noise of my QHY268M is not all that much lower than that of my Sony ICX814/694AL chip cameras. I have found that imagers who say CCD is dead have never used quality Sony CCDs.

It was not uncommon a few years ago to see outstanding images made with CCD cameras using 20 to 30 min subs, and often fewer than 20 of them. A 10 to 15 hour session was a very long session. The approach of short subs in the range of 60 to 100, with sessions of 15 to 20 hours, is a result of CMOS.

The constant mention of read noise is a fairly new development. That is the inability of the chip to accurately read out the correct photon-to-electron count of each pixel. Prior to that, with CCD, it was DARK NOISE, the random electrons given off by heat in the chip. Larger dark noise requires better cooling. Also, the constant mention of oversaturated stars is fairly new, due to the low well depth of CMOS. Usually not an issue with CCD. AND, with new star removal software, not a big problem. The stars can be replaced by short RGB sub images.

Shot noise is the random nature of photon counts as they land on individual pixels. Only more photons will even out the noise level, through either a larger scope, faster optics OR LONGER EXPOSURE. The QE of the camera can also affect this. Shot noise is NOT increased by exposure time, it is decreased. DARK noise is increased with heat generated by longer exposures. A huge problem with DSLRs.

NOW, to reply to your question: "Are 600 sec subs worth it?" With CCD, yes. With CMOS, not sure. In Bortle 1-4 skies, I would say yes. In Bortle 5-8 skies, probably not; the sky glow will wipe out any advantage. With narrowband, definitely needed with CCD, not so much with CMOS due to its high QE.

As far as airplane/satellite trails, any good sigma clip stacking software should see those as outliers and reject those pixels.

As far as the real answer to your question: try it and find out. Pick a very faint object, do 5 x 10 min and 10 x 5 min sessions, and see if there is a noticeable difference in the faint signal. Given your particular site and equipment, it is probably the only way you will ever really know. That is exactly what I am doing with my new QHY268M. I found 600 sec subs made a big difference with CCD narrowband, but not a real difference going to 900 sec. I am hoping to get similar results with 300 sec using the high QE of the QHY268M's Sony IMX571 chip. I will see.

Lynn K.
Arun H
Lynn K:
Shot noise is NOT increased by exposure time, it is decreased. DARK noise is increased with heat generated by longer exposures. A huge problem with DSLRs.


This is incorrect. Shot noise increases with total exposure time; that is the nature of the Poisson distribution that governs the incidence of photons on the sensor. But signal increases as well, and does so faster than shot noise. So what increases is the signal-to-noise ratio.

If the photon flux is S, then over the total integration time T the mean signal is S.T and the noise (the standard deviation of the distribution) is Sqrt(S.T). Hence SNR is S.T/Sqrt(S.T) = Sqrt(S.T), which increases as the square root of total integration time. Sub times have nothing to do with it.
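The claim that only total integration time sets the read-noise-free SNR can be checked with a quick Monte Carlo (the flux value and the two sub splits below are arbitrary assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def stack_snr(sub_length_s, n_subs, flux_e_per_s=5.0, trials=20000):
    """Empirical SNR of a stack of pure-Poisson subs (read noise ignored)."""
    counts = rng.poisson(flux_e_per_s * sub_length_s, size=(trials, n_subs))
    totals = counts.sum(axis=1)          # stack = sum of subs
    return totals.mean() / totals.std()  # signal / noise of the stack

# Same 600 s of total integration, split two ways:
print(stack_snr(300, 2))  # 2 x 300 s
print(stack_snr(600, 1))  # 1 x 600 s
# Both land near Sqrt(S.T) = Sqrt(5 * 600), about 54.8
```

Both estimates agree with Sqrt(S.T) to within sampling error; adding a per-sub read-noise term is precisely what breaks this equivalence for short subs.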
Lynn K:
Also, the constant mention of oversaturated stars is fairly new, due to the low well depth of CMOS. Usually not an issue with CCD. AND, with new star removal software, not a big problem. The stars can be replaced by short RGB sub images.


Most modern CMOS sensors have better dynamic range, higher QE,  higher well depth, and lower read noise than CCDs on a per square micron basis. Something easily verified by looking at published specs of these sensors. This is the outcome of vast sums of R&D money put into the development of CMOS driven by cell phones and, to a much lesser extent, DSLRs and other consumer cameras. Unless you are using unreasonably high gain settings with CMOS, you are less likely to saturate stars with a CMOS than with a CCD.
Ioan Popa
It depends on your camera and telescope. I do 900s for my f/5.4 RC and 300-600s for narrowband on my RH 200 f/3… If you have a camera with shallow well depth (specifically CMOS cameras like the 1600, 183, or other smaller chips), then 300-600s will just oversaturate the stars. I used to do 1800s for my 16200 CCD because it could.

Technically the exposure doesn't matter as much as long as you collect data. Shorter subs simplify stacking and reduce the load on your machine, but in light-polluted areas longer exposures bring in more sky glow and can make flattening harder.

So your question cannot be answered in the abstract, because it depends on your setup and, more importantly, on whether your mount can track properly at 600s+.

thanks,
ioan
Brent Newton
Everyone has covered the main point: exposure time in a vacuum is not meaningful, and the optics, local conditions, gain, and filters in use can all wildly vary what the "ideal exposure" is. In short, neither 300s nor 600s is necessarily the right answer (when we define "right" as the exposure time which swamps the read noise and leads to a sky-limited SNR).
Arun H:
There are formulae that let you calculate how long your subs need to be, for your sky, to swamp out read noise. In truth, unless you are using very short subs, read noise is unlikely to matter much, and hence neither is sub length. A case can be made that, even if you don't completely swamp out read noise by a factor of 50 or so, it is still to your advantage to use shorter subs to avoid blowing out stars. And of course, shorter subs mean fewer demands on guiding accuracy and less loss to one-off events.

Jon Rista discusses this on this Cloudy Nights thread. Using this formula I found that, at Gain 76 under Bortle 1 skies, I don't require much beyond 120s for Luminance, and around 4 minutes for RGB. That doesn't mean narrowband wouldn't benefit from a 300s exposure, but other factors may apply (3nm filters / excessively dim target / Gain 0 / etc.).
Daniel Renner
Wow thank you all for commenting on this post! It's really interesting to read what you know and think about this!
GalacticRAVE
I strongly recommend the excellent talk by Robin Glover (the brain behind SharpCap) on exposure times, cooling, CMOS, gain, etc.

https://www.youtube.com/watch?v=3RH93UvP358&t=0s
John Hayes

Lynn,
You are correct that "IT DEPENDS" but I want to add my two cents to Arun's comments.

1) CMOS technology started in the early '90s, but its acceptance in the market gained speed largely because of video requirements and for economic reasons. It was major camera manufacturers like Olympus, Nikon, Canon, Sony, and others making DSLRs who first brought CMOS technology to the mass market. ZWO did a great job of migrating that technology to dedicated, cooled astro cameras at a reasonable price, and as I recall, QHY paralleled their efforts in the amateur astro imaging market. Cheap, high-performance, dedicated CMOS cameras certainly helped move many amateurs away from their DSLRs.

2) Read noise has always been a key performance parameter for both CMOS and CCD sensors, although you may be right that the amateur community seemed to suddenly figure it out and begin to discuss it as if it were more important than dark noise. RN is indeed an important consideration because if it can be reduced to sufficiently low levels, it greatly relaxes the need for longer exposures, no matter what type of sensor you use. To be clear, RN is any noise that is not exposure dependent, and it is a key parameter that determines the minimum exposure time. You generally want to use an exposure that makes RN much less significant than photon noise (<~0.1).

3) Photon noise (shot noise) increases as the square root of the number of photons that you collect, so it always increases with exposure time.  Perhaps what you meant is that SNR always increases with exposure time.  SNR increases with the square root of the number of photons you collect.

4) Your experiment with NB data is a good one. Since NB imaging collects fewer photons, you always have to use longer exposures with a NB filter to get the photon noise (shot noise) significantly higher than the RN. The higher QE will help with your CMOS sensor, but I think you'll find that you probably won't get to a factor of two relative to your CCD camera. I believe it will scale with the ratio of the QE values at the filter wavelength.

John
Lynn K
John, thank you for your clarification. I appreciate your expertise on these subjects.

Lynn
Arun H
The evolution of CMOS in recent years has been really interesting to follow for those of us who are into regular terrestrial photography.

Earlier DSLR CMOS sensors were notoriously noisy, and I think the off-chip analog-to-digital conversion led to the banding issues so familiar to those of us who used Canon DSLRs for astro work. There is in fact a dedicated Canon Banding Reduction script in PixInsight!

Sony brought several innovations to the sensor market that they implemented in their mirrorless cameras: things like BSI, on-chip analog-to-digital conversion, faster read speeds, etc. I think they purchased some of these technologies from others who developed them. Regardless, the expanding base of cell phones to which they were supplying CMOS sensors allowed them to make those investments. Canon, which makes its own sensors, was forced to follow suit to keep up with its DSLRs. The astrophotography community is the beneficiary of these developments. Current astro CMOS cameras pretty much exclusively use Sony sensors that Sony develops for its own cameras and for Nikon (which uses Sony CMOS).