What subframes do you keep / How many subframes?

14 replies · 1.2k views
Robert Winslow
I know this question is very subjective, but I am trying to improve my processing.

I will use NGC 6960 (Witch's Broom) as the image to discuss, as we all know it.

I have collected images over the last two months on nights when I could. I am imaging it at gain 120 with 300 s exposures. At this point I have about 300 GB of data, roughly 40 hours of subframes. I am now processing the data, and as expected some of it is higher quality than the rest. I am using APP to stack the frames, and after registration I check the quality score. For this object my scores range from 350 down to the low 50s.

I know it is best not to use all of this data, as bad data hurts image quality; not all data is usable data, IMHO. So I set rules for myself, for example no frames with a score under 100. This weeds out a lot of the really low-quality subs. Then, using the rejection parameters in APP, I set it to reject 10% of all frames based on quality.

While this leads to nice-looking images, I could not help thinking that weeding out more and using only higher-quality frames would look better. So far this has been true, and I now weed out frames that score under 200.
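For anyone who wants to script that kind of cull, here is a minimal sketch in Python. It assumes the quality scores have been exported to a CSV called app_quality.csv with "filename" and "quality" columns - that file and its layout are placeholders for the example, not something APP writes automatically.

```python
# Hypothetical cull: park any sub whose exported quality score is below the
# cutoff in a "rejected" folder instead of deleting it.
import csv
import shutil
from pathlib import Path

QUALITY_CUTOFF = 200           # current "keep" threshold
REJECT_DIR = Path("rejected")  # low-score subs get parked here, not deleted
REJECT_DIR.mkdir(exist_ok=True)

with open("app_quality.csv", newline="") as f:
    for row in csv.DictReader(f):
        sub = Path(row["filename"])
        if float(row["quality"]) < QUALITY_CUTOFF and sub.exists():
            shutil.move(str(sub), str(REJECT_DIR / sub.name))
```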

I would like to understand from more experienced imagers what your strategy is for keeping and using frames.
Christian Großmann
Hi Robert,

In the beginning, I kept everything. But my situation may be different from other users': I own several NAS storage drives in RAID configurations that also hold my regular photographs, so storage space was not a problem for a long time. But things have changed a bit.

I have a library of darks and bias frames that I use to process my images. I use them for several months, and from time to time I update this library based on my needs, to save time. I processed those subs into master frames and only keep the masters, to save space.

I take flats every time I change something in my scope setup, which of course includes changing the camera angle. But usually I set up my rig for a target and leave it untouched for several days, so I reuse the flats for a few nights (about three, it depends). I take new flats whenever I feel the need (for example after windy days, etc.). I also process the subs into a master flat and keep only that one.

With the lights, things are a bit different. I move the unused frames into a separate folder. If they are really bad, I delete them, but most of the time I keep them; all usable light frames I keep. I have always had the feeling that I will get better at image processing and will want to be able to redo my images later on. It hasn't happened yet, and it may never happen. But if it does, I'm prepared :)
Keeping the light subs has the advantage that you can add data later on and process everything again.

Storage space is quite cheap these days. The situation is much worse if you are doing video work. But with astrophotography, I like to keep the work that I've done. I'm proud of it.

Clear Skies

Christian
David Nozadze
Hi Rob, 

Thank you for bringing up this topic. It will be very interesting for me, too, to learn from the more experienced people here.

In my case, I first blink for obvious issues like trees, heavy clouds, etc. Then I evaluate my subs for eccentricity, star count and SNR (in that order) for each channel separately. Eccentricity lets me discard the frames where tracking was not good; a low star count means poor visibility or thin clouds that I did not notice while blinking; and SNR lets me select the best of what is left.
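For illustration, a bare-bones version of that three-stage filter could look like the sketch below. It assumes the per-sub measurements (eccentricity, star count, SNR) are already available from whatever analysis tool you use; the threshold values are placeholders, not recommendations.

```python
# Hypothetical three-stage cull: tracking first, transparency second,
# then keep the best remaining subs by SNR.
def select_subs(measurements, ecc_max=0.6, min_stars=300, keep_fraction=0.8):
    """measurements: list of dicts with 'file', 'eccentricity', 'stars', 'snr'."""
    # 1) drop subs with poor tracking (elongated stars)
    subs = [m for m in measurements if m["eccentricity"] <= ecc_max]
    # 2) drop subs shot through thin clouds / poor transparency (few stars found)
    subs = [m for m in subs if m["stars"] >= min_stars]
    # 3) of what is left, keep the best fraction by SNR
    subs.sort(key=lambda m: m["snr"], reverse=True)
    return subs[: max(1, int(len(subs) * keep_fraction))]
```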

CS 

D
Sean van Drogen
Hi Robert,

Don't know if I am any more experienced than you are, but I based my strategy on what I picked up from others.

For me, I keep all lights except ones with obvious star trails, and those are very rare to begin with. Cold storage is relatively cheap, and like Christian, a nice NAS will help make sure it's protected against a single disk failure. I now have around 380 hours of published data across my various captures, and it takes up about 600 GB on my NAS, which has 12 TB of storage, so lots of room to grow.

I also like to experiment to see the differences, so I will integrate the top 50% of captures and compare that to an integration of all captures.
Also, especially when using for instance the NSG script in PI, the best and worst frames are relative: a single sub that might be in the bottom 10% on one night can be middle of the range when included in a three-night set.
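Setting up that comparison can be as simple as the sketch below, which assumes each sub already has an overall quality score saved in a plain scores.txt file (one "filename score" pair per line) - the file name and format are invented for the example.

```python
# Hypothetical comparison setup: write one file list for "integrate everything"
# and one for "integrate the top 50% by score".
from pathlib import Path

scores = []
for line in Path("scores.txt").read_text().splitlines():
    name, value = line.rsplit(maxsplit=1)
    scores.append((name, float(value)))

scores.sort(key=lambda item: item[1], reverse=True)
top_half = scores[: len(scores) // 2]

Path("integrate_all.txt").write_text("\n".join(name for name, _ in scores))
Path("integrate_top50.txt").write_text("\n".join(name for name, _ in top_half))
```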

Similarly, I keep the master flats for each night and target together in a single folder, and I use a master dark library that I store and update every 6 months, so I can easily match the right darks to the correct capture date.

CS
Sean
Bob Lockwood
Hi Robert,

Attached is an image I did of NGC 6960 12 years ago with equipment I haven't used in 9 years. As for what I do, I pretty much do just what David Nozadze does; I don't do the math or anything that pre-judges my work. I open everything in CCDStack and blink all the images, looking at the stars for bloating, noise, and any guiding issues, which are very rare. Keep in mind that this image was done with a Tak E-210 f/3 and an SBIG ST-10XME. But to your question, what to keep and how many? This image is just 4 x 900 s each of Ha and OIII, bin 1x1, and 3 x 240 s each of RGB, bin 2x2. All pre-processing at the time was done in MaxIm DL, then moved to Photoshop. If I were to image this today in NB, I would max out at 5 hours each of Ha, SII, and OIII at 900 s or 1200 s; I will rarely ever do more than 5 hours in NB, plus maybe some short 120 s RGBs for the stars. For an LRGB, I would probably do maybe 2-3 hours each, say 12 x 600 s for L and 12 x 900 s for RGB, and again, I would just open all the subs and toss anything I didn't like. I get it if the sky conditions will not let you do long exposures, but the process would be the same with lots of shorter exposures.

Robert Winslow
Thanks for all the responses. Let me be clear, my concern is not about storage. As others have said, storage is cheap, and being an IT guy, I have tons of storage, including network-attached storage. What I am wondering about is the impact on the image of using lower-quality frames.
Simon Todd
I have found that unless the frames are severely washed out, every photon counts. I am trying to get the SGPro guys to include "Sky quality" as a key in the filename; since I have an Eagle4 Pro with a sky quality camera, I could then filter out frames below a certain value very easily.
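If that key ever existed, the filter could be as trivial as the sketch below - the SQM token in the filename is hypothetical, not an existing SGPro feature, and the cutoff is just an example.

```python
# Hypothetical filter: keep subs whose (imagined) sky-quality token in the
# filename, e.g. "..._SQM21.3_...", is at or above a chosen darkness.
import re
from pathlib import Path

SQM_MIN = 20.8  # mag/arcsec^2; frames at least this dark are kept
pattern = re.compile(r"SQM(\d+(?:\.\d+)?)")

keep, set_aside = [], []
for sub in Path("lights").glob("*.fits"):
    match = pattern.search(sub.name)
    sqm = float(match.group(1)) if match else None
    (keep if sqm is not None and sqm >= SQM_MIN else set_aside).append(sub)

print(f"keeping {len(keep)} subs, setting aside {len(set_aside)}")
```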

If, after I create a stack, I notice any oddities, I will then do what others have described and Blink the images in PixInsight to find the offending image(s). I have lots of storage, so that is not an issue, and since I use an 18-core system, re-stacking the images is not really an issue either.

In the past I used to go by the 80/20 rule, but I realised that the 20% of images I was discarding were actually usable.

The other side of things is that modern CMOS cameras like the ASI6200MM Pro, which I have, are fantastic at shorter exposures in high-gain mode, which means you can take more frames and boost the SNR in the final stack. I make it a rule that with 60 s exposures I do no fewer than 151 frames per filter, and with 150 s exposures no fewer than 101 frames per filter. Why such short exposures, you might ask? Well, I am imaging at f/3.2 in Bortle 4/5, and SGPro recommends those exposure times based on frame analysis. So I generally stick to 60 s for LRGB and 150 s for NB, unless it's an unusual target like M31, where I will also do much shorter exposures for the core.

So unless your frames are really bad, don't discard them. They get weighted by quality in the stack anyway, so a frame with a lower quality score has less bearing on the final stack than one with a higher quality score.

Just my opinion and my experience.

Simon
jewzaam
I'm in the camp of keeping everything that isn't terrible and letting the integration handle the weighting. I do an initial screen of the RMS and star count, which are written to the filename. Then I blink everything. If there are obvious issues with a sub, like obstructions or a clearly visible gradient from clouds, I delete the frame. I stopped keeping my rejected frames long ago; I have never gone back to add them back in.
Dark Matters Astrophotography
Generally, I toss anything that has tree bits, clouds, or trailed stars. Everything else I keep, unless something appears after integration that warrants an investigation. That usually doesn't happen though.
kuechlew
Dark Matters Astrophotography:
Generally, I toss anything that has tree bits, clouds, or trailed stars. Everything else I keep, unless something appears after integration that warrants an investigation. That usually doesn't happen though.

A recent unintended test of mine revealed that trees are handled well by PixInsight's integration algorithm if the affected frames are less than 10% of the total.
I didn't even have to apply Adam Block's trick of filling the affected area with black pixels to help the rejection algorithm deal with it. Still, I don't recommend this as common practice, but occasionally sh*t happens ...
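For anyone curious, that trick boils down to something like the sketch below, as I understand it - the file names and the masked region are placeholders, and this is only a rough illustration, not Adam Block's actual procedure.

```python
# Hypothetical fix-up: zero out the obstructed corner of one sub so the
# pixel-rejection stage treats it as an obvious outlier during integration.
import numpy as np
from astropy.io import fits

with fits.open("light_with_tree.fits") as hdul:
    data = hdul[0].data.astype(np.float32)
    data[0:800, 0:1200] = 0.0  # the corner the tree crept into
    fits.writeto("light_with_tree_masked.fits", data, hdul[0].header,
                 overwrite=True)
```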

Clear skies
Wolfgang
Robert Winslow
Very interesting responses. So what I am hearing is that low-quality frames are worth keeping and processing. I guess I assumed that if the frame quality was low, it would hurt the completed image.
Dark Matters Astrophotography
Robert Winslow:
Very interesting responses. So what I am hearing is that low-quality frames are worth keeping and processing. I guess I assumed that if the frame quality was low, it would hurt the completed image.


It depends on the overall size of the dataset in my experience. If I have a bunch of great data and a few lesser subs in the mix, leaving them in doesn't really do much to bring down the image quality since weighting takes care of that. I figure I do get to keep some of that signal in the image by not discarding them.
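To put some back-of-the-envelope numbers behind that (this is just textbook noise-weighted averaging, not how any particular stacker computes its weights):

```latex
S = \frac{\sum_i w_i x_i}{\sum_i w_i},
\qquad
\sigma_S^2 = \frac{\sum_i w_i^2 \sigma_i^2}{\left(\sum_i w_i\right)^2}
```

With weights chosen as w_i proportional to 1/sigma_i^2, adding a noisier frame can only lower the noise of the stack (or leave it essentially unchanged), never raise it. For example, adding one sub with twice the noise of ten good ones still improves the stack's SNR by roughly 1% instead of hurting it.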
Mina B.
I keep everything, except obviously flawed subs with huuuge star trails from guiding failing due to clouds or trees.
I very, very rarely have subs without pinpoint stars, and if they are only slightly eggy, I integrate them; APP's algorithm takes good care of that.
Satellite trails, planes - I integrate them. Slightly lighter background - I integrate it.
My FWHM cutoff in APP is 4"; everything bigger gets tossed, because I have found that, for me personally, things like satellites, lighter backgrounds, gradients, and even slightly eggy stars (we are talking slightly here - only recognizable if you zoom in) get taken care of either while stacking or quite easily in post-processing. Big, out-of-focus stars or stars bloated by bad seeing are, in my opinion, much harder to fix in post-processing - and while it's doable, it takes time, needs a good star mask, and you still risk artifacts. It's easier if the object stands isolated in the star field, like a galaxy or a planetary nebula, than with a nebula that fills the whole sensor - there you risk artifacts in the nebula itself, and you just don't want that; again, it's easier to fix if the stars stand on a plain black background.
So the more nebulosity, the stricter I am, with FWHM as nearly my only criterion for tossing subs; the less nebulosity, the less rigid I get. If it's only a few - let's say 10 out of 200 120 s subs - that are 4" and over, and I went for a galaxy, I just toss them. If half of the 200 subs are like that, the seeing was bad, and I just have to deal with it in post-processing, which can be painful though.
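As a rough illustration of that kind of FWHM cut (the pixel scale, file list and FWHM values below are made-up examples, and the measurement itself would come from your analysis tool, not from this snippet):

```python
# Hypothetical FWHM cut: convert a measured FWHM in pixels to arcseconds
# and keep only subs at or below the chosen limit.
PIXEL_SCALE = 1.9          # arcsec per pixel, example value
FWHM_LIMIT_ARCSEC = 4.0    # anything softer than this gets tossed

def passes_fwhm(fwhm_pixels, limit=FWHM_LIMIT_ARCSEC, scale=PIXEL_SCALE):
    return fwhm_pixels * scale <= limit

subs = {"sub_001.fits": 1.8, "sub_002.fits": 2.4}  # file -> FWHM in pixels
kept = [name for name, fwhm in subs.items() if passes_fwhm(fwhm)]
print(kept)
```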
kuechlew
It's sort of a bad joke: we call ourselves "astrophotographers", which basically means photographing stars (Greek "astron" = star), and then the biggest enemies in our images are the stars ... :)

Clear skies
Wolfgang
Simon Todd
kuechlew:
It's sort of a bad joke: we call ourselves "astrophotographers", which basically means photographing stars (Greek "astron" = star), and then the biggest enemies in our images are the stars ...

Clear skies
Wolfgang

On top of that, with today's technology we have to live with the cloud, yet clouds are exactly what we don't want when it comes to astrophotography - and still we store our images in the cloud, too.