Lynn K:
As stated, IT DEPENDS on Camera, Light Pollution, Filter, Telescope Design and Object. AND your mount's tracking, AND your software processing ability.
I have different thoughts about exposure time. For one, don't assume the whole astro imaging community uses CMOS cameras, and primarily ZWO. They do not. There is an attitude on Astrobin that CMOS has always been used. Actually, it is fairly new, and the CMOS revolution is primarily due to one manufacturer, ZWO.
I have been imaging since 2005 and primarily used CCD. I now have a QHY268M, but still use CCD. I find the big advantage of CMOS is not low read noise but download time and QE. The read noise in my QHY268M is not all that much lower than in my Sony ICX814/694AL chip cameras. I have found that imagers who say CCD is dead have never used quality Sony CCDs.
It was not uncommon a few years ago to see outstanding images taken with CCD cameras using 20 to 30 min subs, and often fewer than 20 of them. A 10 to 15 hour session was a very long session. The approach of short subs in the 60 to 100 second range, with sessions running 15 to 20 hours, is a result of CMOS.
The constant mention of Read Noise is a fairly new development. Read noise is the inability of the chip to accurately read out the correct photon-to-electron count of each pixel. Prior to that, with CCD, the concern was DARK NOISE, the random electrons given off by heat in the chip. Larger dark noise requires better cooling. Also, the constant mention of over-saturated stars is fairly new, due to the lower well depth of CMOS; it is usually not an issue with CCD. AND, with new star removal software, it is not a big problem. The stars can be replaced from short RGB subs.
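To put rough numbers on the dark-noise and well-depth points, here is a minimal per-sub noise-budget sketch in Python. Every value in it is a made-up example, not any particular camera's spec; swap in the numbers from your own spec sheet and your measured star flux.

import math

# Rough per-sub noise budget, using made-up example values -- replace them with
# the numbers from your own camera's spec sheet and your measured star flux.
read_noise_e = 6.0     # e- RMS per read (older CCDs often 5-10 e-, many CMOS 1-3 e-)
dark_current = 0.02    # e-/pixel/sec at the cooled set point
full_well_e  = 50000   # e- full-well depth
star_rate_e  = 200     # e-/sec landing on a bright star's core pixel (assumed)

for t in (300, 600, 900):                      # sub length in seconds
    dark_noise = math.sqrt(dark_current * t)   # shot noise of the dark current
    star_core  = star_rate_e * t               # electrons collected in the star core
    print(f"{t:4d}s  read={read_noise_e:.1f}e-  dark={dark_noise:.2f}e-  "
          f"star core={star_core:.0f}e-  saturated={star_core > full_well_e}")

The point of the print-out is simply that the read noise is paid once per sub, the dark noise grows with the square root of the sub length, and the bright star cores are what hit the full well first as subs get longer.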
Shot noise is the random nature of photon numbers as they land on individual pixels. Only more photons will even out the noise level, through either a larger scope, faster optics OR LONGER EXPOSURE. Also, the QE of the camera can affect this. Shot noise is NOT increased by exposure time, it is decreased. DARK noise is increased with the heat generated by longer exposures, a huge problem with DSLRs.
NOW, to reply to your question: "Are 600 sec. subs worth it?" With CCD, yes. With CMOS, not sure. With Bortle 1-4 skies, I would say yes. In Bortle 5-8 skies, probably not; the sky glow will wipe out any advantage. With narrowband, long subs are definitely needed with CCD, not so much with CMOS due to its high QE.
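To make the sky-glow point concrete, here is a small sketch with assumed sky rates (illustration values only; they vary hugely with optics, filter and site). It shows how quickly the sky's own shot noise buries the per-sub read noise as the sky gets brighter, which is why long subs stop paying off under light pollution.

import math

read_noise = 3.0        # e- RMS per sub (assumed)
sub_len    = 600        # seconds

cases = [("narrowband, dark sky", 0.02),
         ("broadband, ~Bortle 3", 0.3),
         ("broadband, ~Bortle 7", 5.0)]

for label, sky_rate in cases:                               # sky_rate in e-/pixel/sec
    sky_noise   = math.sqrt(sky_rate * sub_len)             # sky shot noise per sub
    total_noise = math.sqrt(sky_noise**2 + read_noise**2)   # add read noise in quadrature
    penalty     = 100 * (total_noise / sky_noise - 1)       # extra noise from the read
    print(f"{label}: sky noise {sky_noise:5.1f} e-, read noise adds ~{penalty:.1f}%")

With these example numbers the read noise only matters in the narrowband case; under a bright broadband sky it is already negligible at 600 sec.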
As far as airplane/satellite trails, any good sigma-clip stacking software should see those as outliers and reject those pixels.
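As an illustration of how that rejection works, here is a minimal sigma-clip combine sketch in Python/NumPy. It is not any particular package's implementation, just the general idea: pixels that sit several sigma from the per-pixel center across the stack, like a bright trail in one sub, get thrown out before averaging.

import numpy as np

def sigma_clip_stack(subs, kappa=3.0, iterations=2):
    # Mean-combine registered subs, rejecting pixels that sit more than
    # `kappa` sigma from the per-pixel median across the stack (e.g. an
    # airplane or satellite trail that only appears in one sub).
    data = np.asarray(subs, dtype=np.float64)   # shape: (n_subs, height, width)
    mask = np.ones(data.shape, dtype=bool)      # True = pixel kept
    for _ in range(iterations):
        kept   = np.where(mask, data, np.nan)
        center = np.nanmedian(kept, axis=0)
        spread = np.nanstd(kept, axis=0)
        mask  &= np.abs(data - center) <= kappa * spread
    return np.nanmean(np.where(mask, data, np.nan), axis=0)

# Example: 10 fake registered subs, one of them carrying a bright trail
rng  = np.random.default_rng(0)
subs = rng.normal(1000, 30, size=(10, 100, 100))
subs[3, 50, :] += 5000                          # simulated satellite trail
stacked = sigma_clip_stack(subs)                # trail pixels are rejected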
As far as the real answer to your question, try it and find out. Pick a very faint object and do 5 x 10 min and 10 x 5 min sessions and see if there is a noticeable difference in the faint signal. Given your particular site and equipment, it is probably the only way you will ever really know. That is exactly what I am doing with my new QHY268M. I found 600 sec subs made a big difference with CCD narrowband; not a real difference going to 900 sec. I am hoping to get similar results with 300 sec using the high QE of the QHY268M's Sony IMX571 chip. I will see.
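Before running that test, a back-of-envelope expectation can be useful for comparison. The sketch below uses assumed example flux rates (not measured values; substitute your own) to stack the same 50 minutes both ways. The only term that differs between the two is how many read-noise hits you pay, so the predicted gap shrinks as sky glow rises or read noise falls, and grows again for narrowband where the sky rate is tiny.

import math

target_rate = 0.05   # e-/pixel/sec from the faint object (assumed)
sky_rate    = 0.5    # e-/pixel/sec from sky glow (assumed; much lower with narrowband)
dark_rate   = 0.02   # e-/pixel/sec dark current (assumed)
read_noise  = 3.0    # e- RMS per sub (assumed)

def stacked_snr(n_subs, sub_len):
    total_t = n_subs * sub_len
    signal  = target_rate * total_t
    # Variance: shot noise of target + sky + dark, plus one read-noise hit per sub
    variance = (target_rate + sky_rate + dark_rate) * total_t + n_subs * read_noise**2
    return signal / math.sqrt(variance)

print("5 x 600s :", round(stacked_snr(5, 600), 2))
print("10 x 300s:", round(stacked_snr(10, 300), 2))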
Lynn K.
Lynn,
You are correct that "IT DEPENDS" but I want to add my two cents to Arun's comments.
1) CMOS sensors started in the early '90s, but their acceptance in the market gained speed largely because of video requirements and for economic reasons. It was major camera manufacturers like Olympus, Nikon, Canon, Sony, and others making DSLRs who first brought CMOS technology to the mass market. ZWO did a great job of migrating that technology to dedicated, cooled astro-cameras at a reasonable price, and as I recall, QHY paralleled their efforts in the amateur astro-imaging market. Cheap, high-performance, dedicated CMOS cameras certainly helped move many amateurs away from their DSLRs.
2) Read noise has always been a key performance parameter for both CMOS and CCD sensors, although you may be right that the amateur community seemed to suddenly figure it out and began to discuss it as if it were more important than dark noise. RN is indeed an important advantage because, if it can be reduced to sufficiently low levels, it greatly relaxes the need for longer exposures--no matter what type of sensor you use. To be clear, RN is any noise that is not exposure dependent, and it is a key parameter that determines the minimum exposure time. You generally want to use an exposure that makes RN much less significant than photon noise (a ratio of <~0.1).
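One way to turn that rule of thumb into a number, reading the "<~0.1" as the read-noise-to-sky-photon-noise ratio, is sketched below. The sky rates are assumed example values only; measure your own in e-/pixel/sec.

read_noise = 3.0      # e- RMS per read (assumed)
ratio      = 0.1      # want RN to be no more than ~10% of the sky shot noise

for sky_rate in (5.0, 1.0, 0.2, 0.05):   # bright broadband sky ... narrowband (assumed)
    # Condition: read_noise <= ratio * sqrt(sky_rate * t)  ->  solve for t
    t_min = read_noise**2 / (ratio**2 * sky_rate)
    print(f"sky {sky_rate:5.2f} e-/px/s -> sub length of at least ~{t_min:7.0f} s")

Taken strictly, the criterion gives impractically long subs for narrowband, which is why in practice people relax the ratio there and accept a modest noise penalty.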
3) Photon noise (shot noise) increases as the square root of the number of photons that you collect, so it always increases with exposure time. Perhaps what you meant is that SNR always increases with exposure time. SNR increases with the square root of the number of photons you collect.
4) Your experiment with NB data is a good one. Since NB data collects fewer photons, you always have to use longer exposures with an NB filter to get the photon noise (shot noise) significantly higher than the RN. The higher QE will help with your CMOS sensor, but I think you'll find that you probably won't get to a factor of two relative to your CCD camera. I believe that it will scale with the ratio of the QE values at the filter wavelength.
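To illustrate that scaling, here is a tiny sketch with assumed QE figures (not published specs; look up your sensors' QE curves at the Ha line). It simply scales the CCD sub length by the inverse QE ratio to estimate the CMOS sub that collects the same number of photons.

qe_ccd  = 0.55        # assumed QE of the CCD at the filter wavelength
qe_cmos = 0.80        # assumed QE of the CMOS at the filter wavelength
ccd_sub = 600         # sub length (s) that worked well with the CCD

# Equal photons collected per sub => exposure scales inversely with QE
cmos_sub = ccd_sub * qe_ccd / qe_cmos
print(f"Equivalent CMOS sub: ~{cmos_sub:.0f} s (QE ratio {qe_cmos/qe_ccd:.2f}x)")

With these example values the equivalent sub comes out around 400 s rather than 300 s, which is the "less than a factor of two" point above.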
John