Should the dither settings be changed to a lower number of pixels?
Multiple choice poll, 34 votes: 16 votes (47%) / 18 votes (53%)
Eugène de Goeij
Dear colleagues,

My main scope/camera combo has the same FOV as my guidescope/camera combo. The main camera has 6.5 times the pixel resolution.
I dither 10 pixels between every shot, which translates to 65 pixels on the main camera. A settling time of 5 s back within a 2" window works fine.
One of the club members here in The Netherlands says I should dither less aggressively, as this might affect my picture.
Is dithering 10 pixels too much / too aggressive? What could the downside of my dither settings be?

Advice is much appreciated!
Eugène de Goeij
The Netherlands

Quattro 8" with ASI 294MM Pro; Evoguide 50ED with ASI 290mm mini
andrea tasselli
I never dither, so it is hard to say, other than: why should you? You haven't got a Canon DSLR either…
Jérémie
I can't give you a straight answer, just some thoughts: I can't see why it would be "too" aggressive. The only downside, as far as I can tell, is that you will need to crop the edges of the image a bit more at the end.
The purpose of dithering is partly to avoid having potential "defects" of the sensor always at the same place in the image, where they would remain visible in the stacked result (dithering distributes the defects across the image and "dilutes" their effect thanks to the statistical rejection algorithms…).
Mina B.
I just realised… I'm probably not dithering aggressively enough? I don't have walking-pattern noise, but a fair share of residual noise even with dithering and matching darks. Likely, though, my light pollution and small aperture combined with an IMX183 sensor play into that as well, as I have less noise in narrowband data or with very bright objects, probably thanks to a higher SNR to start with.
andrea tasselli
Mina B.:
I just realised… I'm probably not dithering aggressively enough? I don't have walking-pattern noise, but a fair share of residual noise even with dithering and matching darks. Likely, though, my light pollution and small aperture combined with an IMX183 sensor play into that as well, as I have less noise in narrowband data or with very bright objects, probably thanks to a higher SNR to start with.

Dithering only avoids fixed pattern noise, and it doesn't take much displacement (a few pixels would suffice) to avoid that. It does nothing to the overall noise floor.
Mina B.
andrea tasselli:
Dithering only avoids fixed pattern noise, and it doesn't take much displacement (a few pixels would suffice) to avoid that. It does nothing to the overall noise floor.

Okay, thanks for the clarification. So I can leave it as it is; that's good to know, because fixed pattern noise hasn't been an issue for me since I started dithering.
kuechlew
I'm dithering with about 20 pixels on the main camera and have no issues, so 65 pixels sounds like a lot, and why not reduce it if it doesn't hurt? I find Andrea's comment interesting: is it really only Canon cameras? I know that they are particularly "bad" with their tendency for banding, but I always thought that hot pixels can cause walking noise too. I'm referring to this discussion: Walking Noise: What is it? - Experienced Deep Sky Imaging - Cloudy Nights.
 
Clear skies
Wolfgang
Andy Wray
andrea tasselli:
I never dither, so it is hard to say, other than: why should you? You haven't got a Canon DSLR either…

I started to dither because everyone was saying "dither or die", but I actually didn't see the point of it with my mono CMOS sensor, so I gave up. Am I missing anything here?
Manuel Peitsch
Andy Wray:
I started to dither because everyone was saying "dither or die", but I actually didn't see the point of it with my mono CMOS sensor, so I gave up. Am I missing anything here?

I don't think so. I did not dither; then, when I started to increase the number of subs, I realized I had defects. So I started dithering with a color CMOS when I capture lots of subs. I actually dither every 5 subs with a limit of about 30 px. It has solved my issues, hence I am keeping these settings for all my plans.
CS
Manuel
andrea tasselli
kuechlew:
I'm dithering with about 20 pixels on the main camera and have no issues, so 65 pixels sounds like a lot, and why not reduce it if it doesn't hurt? I find Andrea's comment interesting: is it really only Canon cameras? I know that they are particularly "bad" with their tendency for banding, but I always thought that hot pixels can cause walking noise too. I'm referring to this discussion: Walking Noise: What is it? - Experienced Deep Sky Imaging - Cloudy Nights.

I have no issue with FPN in any of my cameras, including the Nikons but excluding the Canons. There are two main reasons why I wouldn't bother dithering:

1. Unless by some miracle you have perfect PA, you always end up with drift (in both directions). You can take advantage of that to avoid actual dithering.

2. Appropriate cosmetic correction is the cure for all the residual FPN you might have.

As I said before, I don't do it and I never felt the need to. Processing takes care of it all, and I'd rather waste less precious time with some of my mounts (e.g., the GEM28 has a 3 min settling time!).
andrea tasselli
Andy Wray:
I started to dither because everyone was saying "dither or die", but I actually didn't see the point of it with my mono CMOS sensor, so I gave up. Am I missing anything here?

If you don't see it, it ain't there. Simples…
Eugène de Goeij
Dear all,
Thank you all so much for your thoughts!
On the next clear nights I am going to test different numbers of pixels to see where the minimum needs to be.
Colleagues at the Eindhoven astro club looked at the data and saw that the resulting movement on the main camera varies between 12 and 170 pixels on the X-axis and between 128 and 860 pixels on the Y-axis, so that seems a bit of overkill for getting the positive effect of dithering.
Clear skies everyone!
Eugène
Jared Willson
Yes, I would reduce the amount. Dithering has two downsides. First, it requires you to crop the edges of your frame. A few pixels or even a hundred is not so bad, but 800 pixels? That's a lot of real estate. The second issue with dithering is that it reduces throughput. Time spent dithering and waiting for guiding to stabilize afterwards is time you aren't collecting photons. It adds up. Larger dithers and frequent dithers use up more time.

Try a value close to one fifth of what you are using now. I suspect that will be just as effective against FPN while reducing cropping and increasing throughput.
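
As a rough illustration of the two costs mentioned above; the exposure length, settle overhead, and sensor width below are assumptions for the sketch, not values from the thread:

```python
# Hypothetical numbers to illustrate the throughput and crop costs of dithering.
exposure_s = 180        # assumed sub-exposure length
settle_s = 10           # assumed dither + settling overhead, applied after every sub
frame_width_px = 4144   # assumed main-camera frame width
max_dither_px = 800     # worst-case excursion mentioned above

overhead = settle_s / (exposure_s + settle_s)
crop = max_dither_px / frame_width_px   # rough proxy for the total span of dither offsets
print(f"Imaging time lost to dithering:    {overhead:.1%}")  # ~5% with these numbers
print(f"Frame width lost to edge cropping: {crop:.1%}")      # ~19% with these numbers
```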
Roman Pearah
andrea tasselli:
I have no issue with FPN in any of my cameras, including the Nikons but excluding the Canons. There are two main reasons why I wouldn't bother dithering:

1. Unless by some miracle you have perfect PA, you always end up with drift (in both directions). You can take advantage of that to avoid actual dithering.

2. Appropriate cosmetic correction is the cure for all the residual FPN you might have.

As I said before, I don't do it and I never felt the need to. Processing takes care of it all, and I'd rather waste less precious time with some of my mounts (e.g., the GEM28 has a 3 min settling time!).

But you also lose the ability to do CFA Drizzle and have to fall back to debayering. That's a significant disadvantage in terms of recovering high-frequency detail. Part of the reason so many people think the gap between mono and OSC is as wide as they do is that they're comparing non-interpolated data to interpolated data.
David Redwine
Dithering is necessary, especially with CMOS cameras, but it reduces your effective FOV. 10 pixels is plenty; try 3 or 5 to see if that works.

CS
andrea tasselli
Roman Pearah:
But you also lose the ability to do CFA Drizzle and have to fall back to debayering. That's a significant disadvantage in terms of recovering high-frequency detail. Part of the reason so many people think the gap between mono and OSC is as wide as they do is that they're comparing non-interpolated data to interpolated data.

Absolutely not. It would work anyway, because interpolation (debayering) occurs beforehand, at least in PI. And because of main point no. 1, i.e., for all practical purposes you always drift (the sum of drift and periodic error). And the gap isn't going to go away because you dither (or not).
David Redwine:
Dithering is necessary, especially with CMOS cameras, but it reduces your effective FOV. 10 pixels is plenty; try 3 or 5 to see if that works.


It is necessary when the sensor displays significant FPN; otherwise it ain't worth the bother. Or when you have very few frames, fewer than 3 per channel.
Steve Argereow
Hi Eugene,

First of all, you dither to get rid of "walking noise". Like you, I used to dither on every sub. I found out that if you only have about 20 subs you probably need to do that. Once you get into the 50-60-sub range, dither every other sub. If you get above 100 plus, every third frame is probably OK. This has saved me a lot of time. As far as the pixel setting goes, I've tried different settings, aiming for the minimum amount with as little settle time as possible.
The great thing about dithering is that if you haven't gone aggressive enough, that data is still good data. Just add more subs with increased pixel movement and that will correct the problem. I'm a retired engineer, so I don't want to make this an engineering project. I don't want it to feel like work, so I don't think about pixel-to-sensor scaling etc… I just try something and if I like the results, I keep doing it.

Best of luck!
Steve A.
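
A minimal sketch of the dither-frequency rule of thumb above; the exact sub-count cut-offs are one reading of the post, not a standard:

```python
def dither_every_n_subs(total_subs: int) -> int:
    """Rough heuristic from the post above: dither every sub for small stacks,
    every other sub around 50-60 subs, every third sub above roughly 100."""
    if total_subs <= 30:
        return 1
    if total_subs <= 100:
        return 2
    return 3

print(dither_every_n_subs(20))   # 1 -> dither after every sub
print(dither_every_n_subs(60))   # 2 -> dither every other sub
print(dither_every_n_subs(150))  # 3 -> dither every third sub
```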
John Hayes
Dithering is a way to use the stacking filter to statistically minimize signals and patterns that are fixed with respect to the sensor. Dithering helps with FPN, but the correct way to deal with FPN is with flat correction, not dithering. Remember that FPN is the result of PRNU (Photo Response Non-Uniformity) in the sensor, so it's a source of uncertainty across the sensor that increases with signal strength. Depending on the level of responsivity variation, FPN can have a significantly larger impact on perceived noise than photon noise, but that also depends on the brightness of the object. FPN is multiplicative with the signal, so it is corrected exactly like vignetting or radiometric fall-off: by dividing the signal by the flat-field data. In fact, FPN is one of the three big reasons that we use flats in the first place! I have two QHY600M cameras. One has almost no FPN (that I can see) but the other has a LOT, which requires good flat correction.

Dithering can help to further blur the effects of FPN, but it is also very useful for removing residual dark noise and RBI ghost images around stars (and yes, both of these effects can happen in CMOS devices). If you assume that the variations in FPN (and everything else) are mostly random across the sensor, the minimum dither distance is determined by the autocorrelation distance of the pattern. That's the distance between two points on the surface where the values are statistically uncorrelated. Unfortunately, many (most?) of these patterns are not randomly distributed, and many such patterns have spatial structure. That's why it's best to start with flat correction to remove FPN and to use a relatively large dither distance to "average out" any residual spatial patterns that may "poke through" the calibration process (for whatever reason, but usually due to variations with time). The only way that a large dither distance hurts is when it becomes too large a fraction of the total field. If you keep it to less than 1%-3% of the full field, you won't be losing any significant amount of your field coverage by dithering over a big distance.

John
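
To put the 1%-3% guideline above into concrete numbers; the sensor width below is an assumption for illustration, so substitute your own camera's value:

```python
# Upper bound on dither distance from the 1%-3%-of-field guideline above.
sensor_width_px = 4144   # assumed main-camera frame width
for fraction in (0.01, 0.03):
    print(f"{fraction:.0%} of the field = {fraction * sensor_width_px:.0f} px")
# With this width, dithers of roughly 41-124 px stay within the guideline.
```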
jewzaam
I've found dithering to be important. There are always hot/cold pixels, and if your guiding is good enough they will stack up in the same place in your final integration. In that case you'll end up with a bright or dark spot. You might get away with cosmetic correction to clean up some of these, but it depends on many factors. This spring I was using a Rokinon 135mm lens with an ASI533MC and found that at those pixel scales (almost 6") CC was removing the cores of smaller stars, and I ended up with weird star donuts. I was dithering so I could drizzle and not have blocky stars, so I just ditched CC. I was shooting at f/2 broadband. My rule of thumb is to dither every 5 minutes for fast optics or every 10 minutes for slower optics, and make sure the dither is 5 px on the main camera. Remember that with different focal lengths or pixel sizes between the guiding and imaging scope/camera you have to do some math:

guide pixel dither count = (guiding focal length / imaging focal length) * (imaging pixel size / guiding pixel size) * imaging dither pixel count
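
The formula above transcribed into a small helper; the values in the example call are placeholders for illustration, not the setups from this thread:

```python
def guide_dither_px(imaging_dither_px: float,
                    imaging_focal_length_mm: float, guiding_focal_length_mm: float,
                    imaging_pixel_size_um: float, guiding_pixel_size_um: float) -> float:
    """Guide-camera dither amount that produces the desired shift on the imaging camera."""
    return ((guiding_focal_length_mm / imaging_focal_length_mm)
            * (imaging_pixel_size_um / guiding_pixel_size_um)
            * imaging_dither_px)

# Placeholder values, for illustration only:
print(guide_dither_px(imaging_dither_px=10,
                      imaging_focal_length_mm=1000, guiding_focal_length_mm=250,
                      imaging_pixel_size_um=3.76, guiding_pixel_size_um=2.9))
# -> about 3.2 guide-camera pixels
```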
Roman Pearah
andrea tasselli:
Absolutely not. It would work anyway, because interpolation (debayering) occurs beforehand, at least in PI. And because of main point no. 1, i.e., for all practical purposes you always drift (the sum of drift and periodic error). And the gap isn't going to go away because you dither (or not).

But CFA Drizzle doesn't use the interpolated, debayered data. Debayering is done to get you through standard integration, which also produces the drizzle files; the drizzle process then uses the raw monochrome CFA data. As for drifting, that's not a generalization I think one can make and then give out as advice about best practices for noise reduction. If I don't dither, for example, my frames are damn near in lock step. Everyone's mileage will vary. Even so, drifting doesn't get you the same result as random dithering, since it's directional and is often why people end up with "walking noise".
andrea tasselli
OK, I checked the documentation and you're right: it does require the original calibrated/corrected CFA data. That said, it does not require a random variation in displacements between frames, only a non-integer one, which is easily achieved when the images are not significantly undersampled. Additionally, you'd need a rather large number of files to make it work properly, which isn't always the case. The flip side is that, while providing better results in terms of aliasing artefacts, it does reduce the overall SNR (no free lunch). Besides, the raison d'être of dithering is to deal with FPN and random noise when the number of frames is small. In this respect, natural drift seems to me a suitable alternative to forced dithering IF the sensor isn't prone to producing FPN artefacts (e.g. Canon DSLRs). As I have said many times before, I never dither and never ever had this sort of issue, so I can only advise dithering if, after proper reduction techniques are applied, the data still show FPN in the usual form of "walking noise" or similar FPN artefacts.
Roman Pearah
Yes, of course it reduces SNR per unit time. I could, after all, be the master of SNR by using one giant pixel, but my images would be rather disappointing. My point was only that it's a trade-off one might not want to preclude by failing to fulfil the prerequisites. But that's only the same thing as acknowledging that mono imaging is necessarily less SNR-efficient, all else being equal, because it doesn't obtain all channels in parallel (but is rewarded with a "natural" preservation of high-frequency detail that one would otherwise have to sacrifice in order to do so, unless you have multiple rigs).