An observation about culling

18 replies · 482 views
Tony Gondola
I was watching the subs come in for the Crescent Nebula the other night and after a while I noticed that when there was a hiccup in the guiding (it was a windy night) the stars in the image would trail, but the details in the nebula seemed unchanged. It really took a disturbance that lasted for most of the exposure to degrade the detail in the nebula to any significant degree. This was while shooting Ha at 60 sec.

Coming from a photography background, my thinking was that if a frame is degraded by, let's say, camera shake, the whole image is ruined. Applying that to astrophotography told me that if the stars are wonky, the frame is bad. After looking at this more closely I realized that's not the case, and here's why:

On a 60 sec. sub-frame I took that night, the noise floor was 491, the core of a medium-bright star was 22,714 and the nebula brightness was 550. Subtracting the noise floor gave me 22,223 for the star core and 59 for the nebula. If I calculate counts per second I get 370 for the star core and 0.98 for the nebula. If there's, say, a 5 sec. disturbance in the guiding, 1,850 counts from the star core will be displaced while only 4.9 counts will be displaced from the nebula. 1,850 counts that aren't where they're supposed to be will be clearly noticeable and significant, while the 4.9 counts from the nebula won't be.
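
A minimal sketch of that arithmetic in script form (just the numbers above, under the simplifying assumption that the disturbance relocates whatever arrives during its 5 seconds):

```python
# Back-of-the-envelope: how many counts does a short guiding upset displace
# for a bright star core vs. faint nebulosity? Numbers are from the 60 s sub.
exposure_s = 60.0        # sub-exposure length (s)
upset_s = 5.0            # duration of the guiding disturbance (s)
noise_floor = 491        # ADU
sources = {"star core": 22714, "nebula": 550}   # raw ADU readings

for name, raw in sources.items():
    signal = raw - noise_floor          # background-subtracted signal
    rate = signal / exposure_s          # ADU per second
    displaced = rate * upset_s          # ADU that land in the wrong place
    print(f"{name}: {signal} ADU -> {rate:.2f} ADU/s -> ~{displaced:.0f} ADU displaced")
```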

This is, of course, relative to the sub-exposure length. The shorter the disturbance as a percentage of the sub-exposure length, the less effect it will have on the dim details, but it will still affect the stars in a visible way. That tells me that stars are a poor indicator of image quality, especially in longer exposures. This is also what I see in stacking. If I go in and filter out the frames with wonky stars I end up with a good star field, but at the expense of S/N in my subject. If I just include everything, the subject is more detailed than it was with the culled stack, even though the stars had minor problems, which could easily be solved by sub-stacking the stars and general star-removal processes.

The lesson I take from this is that for longer sub-exposures (60 sec. or more), culling should be kept to a minimum, if done at all. For short exposures culling becomes important, giving you some leverage over the seeing and guiding quality. It's certainly an interesting finding that in my case was unexpected and changes the way I approach the work.

I would be curious about what the more experienced imagers here think.
Chris White- Overcast Observatory
Signal will accumulate, so those 4.9 DN will also become more visible if you stack a ton of smeared images. Think about halos from high-altitude clouds that don't show up in a single sub but appear with longer integration time. Of course the movement is more random from wind, and it doesn't happen every sub in your case, so this might not matter at all.
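
A toy comparison of the two effects (all numbers hypothetical, and assuming the smear tends to land in roughly the same direction so it doesn't average away):

```python
# Displaced signal that repeats in the same place survives averaging,
# while random noise shrinks as sqrt(N). Hypothetical numbers throughout.
displaced_per_sub = 4.9     # ADU smeared off-position in an affected sub
affected_fraction = 0.3     # assume ~30% of subs caught a gust
per_sub_noise = 25.0        # assumed per-sub background noise (ADU, 1 sigma)

for n_subs in (25, 100, 400):
    smear = affected_fraction * displaced_per_sub   # does not average away
    noise = per_sub_noise / n_subs ** 0.5           # averages down with N
    print(f"N={n_subs}: smear ~{smear:.1f} ADU vs noise ~{noise:.1f} ADU")
```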

I would worry more about getting good rejection, without artifacts, where a squiggly star gets rejected. You may need to fiddle with sigma rejection, and this could have an overall impact on optimal rejection across the frame.

Best thing is to shield the wind. 

These are my worries; I've never had a wind problem so haven't had to worry about this. Your final result may look better with these subs included. Test, test, test… and share.
andrea tasselli
Appearances can be deceiving… The effect is the same; the smearing is the same as well. So the question is really whether to prioritize resolution or SNR. In all but a few circumstances SNR wins hands down.
NeilM
When I started in astrophotography (not that long ago) I did a manual culling process based on how the subframes looked and FWHM measurements. For one of my recent challenging images (Antennae Galaxies), I did an experiment where I processed the image first using manually culled images, and next using PixInsight's Weighted Batch Preprocessing (WBPP) script. I saw noticeably better results (more detail and lower noise) with the WBPP script versus my manual process. I know that this was only a single data point on a specific target.
Tony Gondola
andrea tasselli:
Appearances can be deceiving... The effect is the same, the smearing is the same as well. So the question is really whether to prioritize resolution or SNR. In all but few circumstances SNR wins hands down.

I agree about S/N, but because of temporal differences I can see how the smearing might not be the same.

Here's a thought experiment:

Imagine a circular dispenser that drops 10 bbs in a circular pattern onto a bed of sand, one bb every ten seconds. Next to it is another dispenser streaming bbs at a rate of 20 per second. Now think of passing a bar between the sand bed and the dispensers at a random time. The bar takes, say, 5 seconds to pass through the path of the bbs and represents an upset due to guiding. Clearly, as the bar passes through it will displace at most one bb from the circular dispenser; it's also possible it won't displace any, so the resulting pattern will still be largely circular. However, many bbs from the fast stream will be displaced, creating a very noticeable disturbance in the pattern made by the faster stream.
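
For what it's worth, here's a rough simulation of that thought experiment (toy numbers only; the dispensers stand in for a faint and a bright source, the bar for a 5-second guiding upset):

```python
import random

# Toy bb-dispenser model: a 5 s "bar" sweeps through at a random time during
# a 100 s run and displaces whatever arrives while it is in the way.
run_s, bar_s = 100.0, 5.0
slow = [i * 10.0 for i in range(10)]               # one bb every 10 s
fast = [i / 20.0 for i in range(int(run_s * 20))]  # 20 bbs per second

bar_start = random.uniform(0.0, run_s - bar_s)
for name, arrivals in (("slow stream", slow), ("fast stream", fast)):
    hit = sum(1 for t in arrivals if bar_start <= t < bar_start + bar_s)
    print(f"{name}: {hit} of {len(arrivals)} bbs displaced ({hit / len(arrivals):.1%})")
```

The expected fraction lost is about the same for both streams, but the slow stream drops either zero or one bb per run, so its circular pattern usually survives intact.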
andrea tasselli
Don't think the analogy is either appropriate or cogent.
Tony Gondola
Can you expand on the why? I've posted this idea for comment. If the conclusions are incorrect it would be valuable to know why.
Dunk
Just chuck everything into WBPP and let it decide ;-)
John Hayes
For an incoherent imaging system, the image of the stars represents the time integrated point response of the imaging system.  The image signal is given by the convolution of the integrated point response (PSF) with the object irradiance.   In the ideal world a PSF with a delta function response perfectly reproduces the object.  A smeared PSF produces a smeared image.  Your best bet is to deconvolve the individual subs to remove the smear before stacking.  Unfortunately, BXT isn't trained to use weirdly smeared PSF functions but if it's simply an oblong star image,  it might work just fine.  If you have oddly shaped smeared PSF functions, you might have to use a more traditional approach to deconvolution using the actual PSF in the image.  Simply stacking images with motion blur will degrade image sharpness.
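
Here is a little illustration of that convolution picture, with NumPy/SciPy and a made-up streaked kernel standing in for a time-integrated PSF that caught a few seconds of drift (just the textbook operation, not BXT's model):

```python
import numpy as np
from scipy.signal import fftconvolve

# The recorded sub is (object irradiance) convolved with the time-integrated PSF.
# A guiding upset adds a short streak to that PSF, which smears everything.
rng = np.random.default_rng(0)
scene = rng.poisson(1.0, (256, 256)).astype(float)   # stand-in object irradiance
scene[128, 128] += 5000.0                            # one bright "star"

psf = np.zeros((15, 15))
psf[7, 7] = 55.0          # ~55 s of good tracking piles up in one spot
psf[7, 8:13] = 1.0        # ~5 s of drift spreads into a short streak
psf /= psf.sum()          # normalise the integrated PSF

sub = fftconvolve(scene, psf, mode="same")           # what the sensor records
```

The bright star picks up an obvious tail; faint extended signal is smeared by exactly the same kernel, it is just much harder to see against the noise, which is why deconvolving with the actual (streaked) PSF is what undoes it.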

John
andrea tasselli
The relative rate of photon arrival is the same and, for one thing, depends only on the relative aperture (given the same sensor and sky conditions). Whether they are many or few, the relative effect is the same as long as the rate is larger than the disturbance's frequency. Given that the wind gusts may be a few seconds long (plus the damping time of the system), it would seem fair that, unless you are in something like single-photon-counting mode, you would still be subject to the same smear due to random motion, hence the effect is the same. Which is why measuring single-frame quality via some defined metric based on FWHM and associated shape information works so well in optimizing the weights in a weight-based approach to frame integration, and why basing the assessment on how the stars visually look is likely to produce an inferior result. Furthermore, the arrival/detection time of each photon is distributed according to some probability function, so while you can assume that on average you have n photons per unit time, you cannot tell with certainty when they will arrive.
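
For instance, a quick Poisson sketch (toy numbers, purely illustrative, assuming a 5 s upset simply relocates whatever lands during its window):

```python
import numpy as np

# Expected displaced fraction is 5/60 for both a bright and a faint source;
# only the shot-noise scatter around that expectation differs.
rng = np.random.default_rng(1)
exposure_s, upset_s, trials = 60.0, 5.0, 100_000

for name, rate in (("star core", 370.0), ("nebula", 0.98)):    # counts per second
    n_upset = rng.poisson(rate * upset_s, trials)              # counts during the upset
    n_rest = rng.poisson(rate * (exposure_s - upset_s), trials)
    frac = n_upset / np.maximum(n_upset + n_rest, 1)           # displaced fraction per sub
    print(f"{name}: mean {frac.mean():.3f}, sigma {frac.std():.3f}")
```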
Tony Gondola
Thanks John and Andrea, that gives me a much better handle on how to think about this.
DiscoDuck
Rejection of outliers during stacking could help mitigate the issue. As long as the smear is not supported by too many other images, it will not appear in the stack, yet the contribution of the "unsmeared" photons in that same sub will still be counted. I have seen this work in a set of images I took under very windy conditions: without any culling, the stack was better (and the stars not too bloated) than the subs would have led me to believe it could be.
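
The mechanism, in very rough sketch form (a plain kappa-sigma clip; real integration tools are iterative and noise-model aware, but the idea is the same):

```python
import numpy as np

# Pixels where one sub's smeared star landed off-position get rejected as
# outliers; that same sub's well-behaved pixels still contribute to the mean.
def sigma_clipped_mean(stack, kappa=3.0):
    stack = np.asarray(stack, dtype=float)       # shape (n_subs, height, width)
    med = np.median(stack, axis=0)
    std = stack.std(axis=0)
    keep = np.abs(stack - med) <= kappa * std    # per-pixel accept/reject mask
    return np.nanmean(np.where(keep, stack, np.nan), axis=0)
```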
danieldh206
Some tricks when dealing with data from a windy night. 

Blink through and toss the ones with double stars. These sometimes end up in the stack because they appear to have a good signal and still register. 

Run WBPP twice if you have a limited number of subs. Run WBPP as you usually do. For the second run, set "Subframe Weighting" to Weighting Formula and choose Cluster. This will weight the sharpest images higher. Under Image Integration, set the minimum weight to 0.5 or higher. This will result in one masterLight with more signal and a second, sharper masterLight with less signal.

If you have a large number of subs, you only need to run WBPP once with the "Subframe Weighting" formula set to "Cluster" and the Image Integration minimum weight set to 0.5 or higher, then process as usual.

Essentially, you are attempting to create a really sharp and crisp stars image and LUM channel that can be combined with a higher signal RGB or SHO image that has been slightly blurred through convolution.  

With OSC and a narrowband image, you will need to extract the CIE L* component from the "sharper" image to combine with the higher-signal but slightly blurred image.
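
Outside of PixInsight, the extraction step looks something like this (a scikit-image sketch, assuming two registered RGB stacks scaled to 0-1; the names are just placeholders):

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

# Take L* (lightness) from the sharper stack and keep the colour (a*, b*)
# from the deeper but softer stack, then recombine.
def combine_sharp_lightness(sharp_rgb, deep_rgb):
    sharp_lab = rgb2lab(np.clip(sharp_rgb, 0.0, 1.0))
    deep_lab = rgb2lab(np.clip(deep_rgb, 0.0, 1.0))
    deep_lab[..., 0] = sharp_lab[..., 0]     # replace only the L* channel
    return lab2rgb(deep_lab)
```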

If many subs are being rejected, it is a sign that more imaging time is needed. I only resort to trying different stacks of various weighting methods when I cannot get more imaging time on a target.
Tony Gondola
Yes, one more night on this should do it. I really didn't lose that many subs, I just need more data to smooth things out.

It's funny, but this thread has sort of wandered off from the original intent. It never was about dealing with windy conditions, but rather an observation about the relationship between culling and sub-exposure length, and how it's possible that in longer subs the stars might not tell the whole story about definition in the faint parts of the image.
Kevin Morefield
My culling methods are pretty simple.  I get rid of any sub that is:

1) above a FWHM cut-off threshold (dependent on the FOV and the season)  OR
2) above 0.6 eccentricity (25% out of round)  OR
3) any sub that shows glowing around the bright stars.  

Of those three rules I'd occasionally let the first two slip a bit but never the third rule.  That is in part because WBPP will not consider those subs with high cloud glow to be bad subs.  

My FWHM cut-off is 3.5" for the 530mm/full frame FOV, 2.5" for the 1050mm/full frame FOV, and 2" for the 2953mm/full frame FOV.  I consider the FOV rather than the image scale because I believe our eye/mind assesses sharpness as an amount of detail across the frame.  We are looking at each of the images on the same size of screen, so for the smallest details on the screen to be the same percentage of the screen, I need smaller FOVs to have lower FWHMs.  Of course, the FOV/FWHM combinations above are not linear, so I'm bending to practicalities somewhat here.  The 530mm at 3.5" FWHM is going to appear the sharpest of the three, but it's easier to get under that 3.5" threshold than the others, so I can be more selective.
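
Expressed as a simple filter, the rules above look roughly like this (the cut-offs are the ones I listed; the field names are just placeholders for whatever your subframe metrics tool reports):

```python
# Max acceptable FWHM (arcsec) per setup, keyed by focal length (mm).
FWHM_LIMIT_ARCSEC = {530: 3.5, 1050: 2.5, 2953: 2.0}

def keep_sub(fwhm_arcsec, eccentricity, has_cloud_glow, focal_length_mm):
    if has_cloud_glow:                # rule 3: never bend this one
        return False
    if eccentricity > 0.6:            # rule 2: stars too far out of round
        return False
    return fwhm_arcsec <= FWHM_LIMIT_ARCSEC[focal_length_mm]   # rule 1
```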

I also benefit from remote imaging, so I'm less time-limited and can go back and get more subs much more easily than a traveler can.

Getting back to the OP's original question: when would I let the eccentricity threshold bend a bit? If I'm collecting data that is more about SNR than resolution. That could be simply RGB subs versus Luminance, but these days I use RGB only for stars, so I want round stars. I also use the RGB masters with the SuperLuminance, so again I want sharp RGB. In some cases where I'm collecting super-faint OIII that is not rich in structures I could let eccentricity slip a bit. And if I'm doing discovery work, like suspected PNs, those FWHM and eccentricity thresholds go way up.

Kevin
Arun H
Tony Gondola:
I was watching the subs come in for the Crescent Nebula the other night and after a while I noticed that when there would be a hiccup in the guiding (it was a windy night) the stars in the image would trail but the details in the nebula seemed unchanged. […]

As John mentioned, stars are an indicator of the point spread function of the optical system. Distortion in the stars indicates defects or deteriorating performance of the system, and will show up as a loss of contrast, particularly at high spatial frequencies. Here is a very nice paper that goes into this without being overly mathematical:

https://lenspire.zeiss.com/photo/app/uploads/2022/02/technical-article-how-to-read-mtf-curves-01.pdf
John Hayes
If you really want to go crazy, this presentation goes into some of the gory details of how an imaging system works with an emphasis on MTF.

https://www.youtube.com/watch?v=_d7mMNlZxRQ

John
Bill McLaughlin
Kevin Morefield:
Of those three rules I'd occasionally let the first two slip a bit but never the third rule.  That is in part because WBPP will not consider those subs with high cloud glow to be bad subs.


Pretty much my method. One point is that for some of my planetaries the RGB is used only for the stars, so FWHM becomes less critical for the RGB than eccentricity, since one can use star reduction on anything that is just stars.

Also, FWHM standards can be relaxed a small amount if  most or all of the detail in the object is pretty large scale on the given detector. My most recent image is a good example of both of the above, in fact.
Tony Gondola
I'm assuming, since no one has directly addressed it, that the temporal, high-flux/low-flux basis of my idea has no significant effect, at least in the realm of "normal" sub-exposure times (15 to 600 sec.).