Culling choices

Tony Gondola

This is something I've been observing for a while now and wanted to share. My usual culling routine has been to reject images with both high eccentricity and high FWHM. But on closer inspection, it seems that a lot of my subs with high eccentricity still retained good detail in the non-stellar parts of the image. I think I understand the reason for this. Let's say you're taking 60 sec. subs and you get a little wind gust or an RA guiding bump that lasts a few seconds. Because stars are so bright, that brief bump is enough to pull the stars out of round and produce a frame that scores poorly on eccentricity; but because signal builds up so slowly in the fainter areas, those are relatively unaffected. Here's a side-by-side comparison that shows the effect. These are 60 sec. subs of M51 with a uniform histogram stretch and slight noise reduction to make the detail easier to see:

📷 result.png · 📷 result-2.png

You'll note that in the core, the high-eccentricity frame actually has better detail than the best-eccentricity frame and is close to the best-FWHM frame. I've done this side-by-side test many times and always get a similar result. Of course you still want to discard frames with extreme elongation or multiple star images, but that's about it as far as eccentricity goes. You do want to cull deeply using FWHM, as that really seems to be the best indicator of sub quality. I've also found that the number of stars detected in a frame often correlates well with the best-FWHM frames, so it can also be a powerful indicator of sub quality.

Just to give some numbers here: among the FWHM-sorted subs, the best has an average of 3.12″ and the worst comes in at 3.75″. The worst-eccentricity sub has an average FWHM of 3.17″.
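
For anyone who wants to script this, here is a minimal sketch of the culling policy described above: hard-reject only the extreme eccentricity outliers, cull deeply on FWHM, and optionally require a minimum star count as a cross-check. The CSV column names and the threshold values are hypothetical; adapt them to whatever your metrics tool (e.g., PixInsight's SubframeSelector) exports.

```python
# Sketch of the culling policy: reject only extreme elongation, cull deeply
# on FWHM, optionally require a minimum star count. Column names ("File",
# "FWHM", "Eccentricity", "Stars") and thresholds are hypothetical.
import csv

def select_subs(metrics_csv, fwhm_keep_fraction=0.6, ecc_limit=0.8, min_stars=0):
    with open(metrics_csv, newline="") as f:
        subs = list(csv.DictReader(f))

    # Keep everything except extreme elongation / multiple-star frames.
    subs = [s for s in subs
            if float(s["Eccentricity"]) < ecc_limit
            and int(s["Stars"]) >= min_stars]

    # Cull deeply on FWHM: keep only the best fraction of what remains.
    subs.sort(key=lambda s: float(s["FWHM"]))
    keep = subs[: max(1, int(len(subs) * fwhm_keep_fraction))]
    return [s["File"] for s in keep]
```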

SonnyE

I tend to only remove the very obviously bad images, when I catch them. Some examples would be an airplane, or dawn frames. Not much as culling goes, really.

I depend more on my ASI Studio and its rejection settings (is that AI stuff?), which are usually very mild at the stock settings. Sometimes nothing is rejected after my light culling of the obvious. I might take 200 to 300 images at a certain exposure length but throw away the last 40 to 60 or so due to "dawn burn", or maybe neighboring trees interfering. (I generally don't cull for the overhead wires, as another example.)

I often like to just run the bulk of my images after my casual culling and save the result as a JPG for the web. I usually start at a minimum of 30° altitude and run to destruction at astronomical dawn.

My delete key forgives all sins. No picture = didn't happen. 😄 The SSD I save my images on was getting critically full, so I deleted 2025's image files to make more room for this year's. The old stuff is just old stuff. Onward through the fog! 😜

John Hayes

Tony,

I agree with your conclusions. I used to cull everything until I compared my culled results with simply using PSF Signal weighting and dumping it all into the stack. In terms of FWHM, culling is better; however, that comes at the sacrifice of SNR, and the difference in FWHM compared to using everything is very small. When the seeing conditions are really variable I still cull, but only to remove the very worst stuff. You can see my results in my Galaxy Studio presentation on TIAC and judge for yourself.
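
As a rough illustration of the weight-don't-cull idea (a toy sketch, not John's actual pipeline), here is a weighted mean stack where each registered sub contributes in proportion to a quality score. PixInsight's PSF Signal Weight plays that role in practice; the random scores below are stand-ins.

```python
# Toy "weight, don't cull" stack: every sub contributes, but each one is
# scaled by a quality score, so poor frames still add SNR while counting
# for less. The scores here are placeholders for a real quality metric.
import numpy as np

def weighted_stack(subs, weights):
    """subs: (N, H, W) registered frames; weights: length-N quality scores."""
    w = np.asarray(weights, dtype=np.float64)
    w /= w.sum()                           # normalize so the weights sum to 1
    return np.tensordot(w, subs, axes=1)   # sum_i w_i * subs[i]

rng = np.random.default_rng(0)
subs = rng.normal(100.0, 5.0, size=(10, 64, 64))   # fake registered subs
stack = weighted_stack(subs, weights=rng.uniform(0.5, 1.0, size=10))
```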

John

Tony Gondola

I think the effectiveness of useful little refinements like this depends a lot on the system in question and the conditions. A small refractor that's under-sampling probably won't benefit much. With my 150 mm aperture, I'm usually sampling at 0.66″ or 0.38″ per pixel with a tiny 585 sensor. It takes a lot to make things look good with a sensor that small; it pushes me to the wall every time. I've found little improvements like this have a large impact. Bottom line: everyone really should just do the testing and see what works best for them.
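
For context on those sampling figures, the standard pixel-scale relation is scale (″/px) = 206.265 × pixel size (µm) / focal length (mm). Assuming the IMX585's 2.9 µm pixels (the exact optical configuration here is my guess, not stated above), 0.66″/px corresponds to roughly 900 mm of focal length and 0.38″/px to roughly 1570 mm:

```python
# Standard pixel-scale formula; focal lengths below are back-calculated
# assumptions from the quoted sampling rates and 2.9 um IMX585 pixels.
def pixel_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

print(round(pixel_scale(2.9, 900), 2))   # 0.66 arcsec/px
print(round(pixel_scale(2.9, 1570), 2))  # 0.38 arcsec/px
```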

One last point, and this is something I'm still refining: combining the low-frequency data from the full stack with the high-frequency information from the deeply culled stack can (I think) give you the best of both worlds. I'm not sure this would be effective with all classes of objects, but what's nice about it is that no data is wasted… work in progress.
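
One way to prototype that band combination: low-pass the full (high-SNR) stack, high-pass the deeply culled (sharper) stack, and add the two. This is only a sketch of the idea under that reading; the Gaussian sigma sets the crossover between the bands and is a tuning parameter, not a prescription.

```python
# Band-combination sketch: smooth base from the full stack plus fine detail
# from the culled stack. Both inputs are assumed registered to the same frame.
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_stacks(full_stack, culled_stack, sigma=8.0):
    low = gaussian_filter(full_stack, sigma)                     # low-frequency base
    high = culled_stack - gaussian_filter(culled_stack, sigma)   # high-frequency detail
    return low + high
```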
