Removing the fixed-pattern background from GSENSE4040-based camera images

Howard Trottier:
INTRODUCTION

This post presents a calibration strategy that removes virtually all of the fixed-pattern background in most deep-sky images taken with GSENSE4040-based cameras. This chip has a so-called scientific CMOS (sCMOS) architecture, which provides dual-gain readouts of each pixel. Cameras with this chip include the Finger Lakes KL4040, the SBIG AC4040, the Moravian C4-16000, and the QHY4040PRO. Some of these cameras have been promoted as replacements for CCD cameras based on the popular Kodak KAF-16803. However, the GSENSE4040 has an unusual and potentially intrusive pattern background that can compete with good target data at low light levels, unless it is removed in post-processing.

While this fixed-pattern background (FPB) is sometimes referred to as fixed-pattern "noise" (FPN), it is an intrinsic and persistent part of the signal generated by the 4040, and unlike actual noise (readout and shot), it cannot be suppressed by taking more data. This distinctive pattern appears to be generated by the novel dual-gain readout architecture of sCMOS chips. 

Moravian Instruments is upfront about the astrophotography implications of the FPB produced by the 4040, stating on the web page for their C4-16000 that "aesthetic astro-photography can be negatively influenced if these differences are not removed during image processing." Unfortunately, Moravian does not actually provide a processing strategy to remove the FPB, at least not in publicly-available documentation that I can find, and the recommendations that are currently provided by other manufacturers often fail to give adequate results. Some of the frustrations with the 4040 FPB that have been experienced by several astrophotographers are discussed at length in this Cloudy Nights thread.

I own the FLI KL4040 with a front-illuminated version of the chip, and for the past two years I've used the calibration procedure detailed in this post to remove all traces of the FPB from my images, without loss of good-quality data at low light levels. This calibration strategy differs from the standard two-step procedure of dark-subtraction and flat-field division only in how the exposures for the flat frames are chosen. The method is presented in detail further down, with illustrations using high-quality light-frame stacks for two targets.

My KL4040 images can be viewed on my Astrobin page, and I was lucky enough to have two of them appear in APOD: the Hercules Galaxy Cluster in 2020, and the Eastern Veil Nebula in 2021. In addition, several owners of 4040-based cameras have asked about my approach, and were able to use it to dramatically improve the quality of their images. It seems plausible that the same strategy will also work with other dual-gain chips, such as the GSENSE2020, and the back-illuminated version of the GSENSE4040. However, I have not had access to images produced with other dual-gain chips, and can only vouch for the effectiveness of this approach with the front-illuminated GSENSE4040.

OVERVIEW OF THE CALIBRATION TECHNIQUE

I give two examples further down that illustrate the results of the proposed calibration technique when applied to real images. But it will prove useful to start with an outline of the procedure, and the reasons why something like it is often necessary.  

For dark frames, one should follow the standard approach that is usually recommended for CMOS chips, which is to use exactly the same exposure time (and temperature) as the light frames, owing to the much larger dark current of most CMOS chips compared with CCDs; in other words, it is best to avoid using bias frames and scaled darks to interpolate between exposure times. This recommendation may be even more important for the 4040 (and possibly for other sCMOS chips), due to its intrusive FPB. Moreover, using a master bias does not remove the FPB.

On the other hand, to acquire flat frames for the 4040, it is often essential to adopt a fundamentally different criterion for the exposures compared with most CCDs and other CMOS chips. For conventional good-quality chips, the standard recommendation is to adjust the exposure time (and/or signal strength) to get a chip response at about 50% of saturation; this approach assumes a high degree of linearity in the chip response, except near saturation, which implies that individual flat frames will differ from one another only by overall factors (noise aside, all other things being equal), and can be rescaled to a common mean value that divides out when the flat master is actually used.

Published data for the response of the 4040 does show a high degree of linearity, but the examples below strongly suggest that the low-level FPB has a nonlinear dependence on the signal strength that is significant for deep-sky astrophotography (on the other hand, for applications at high illumination levels, the FPB will be swamped by shot noise). Consequently, for many deep-sky targets, an intrusive amount of FPB may remain in the calibrated light frames, unless the exposures for the flats are chosen to closely match their mean values to the light-frame mean; this serves to roughly equalize the strength of the FPB in the two image types. Fortunately, an exact match is not needed for adequate suppression of the FPB. Empirically, it appears that a difference of a few hundred ADUs between the flats and lights is usually acceptable. Alternatively, a good rule of thumb is to build the master flat from a stack of frames with means that cover roughly the same range as the stack of light frames.
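To make that rule of thumb concrete, here is a minimal Python sketch that picks out which flats qualify, given their measured mean ADU counts. The function name, the tolerance value, and all of the ADU numbers are hypothetical illustrations, not measured data:

```python
def select_matching_flats(flat_means, light_means, tolerance=300):
    """Return indices of flats whose mean ADU falls within the range
    spanned by the light-frame means, padded by a few hundred ADU
    (the empirical slack suggested above)."""
    lo = min(light_means) - tolerance
    hi = max(light_means) + tolerance
    return [i for i, m in enumerate(flat_means) if lo <= m <= hi]

# Hypothetical mean ADU counts for a set of twilight flats and lights:
flat_means = [80, 220, 480, 550, 610, 900, 2100]
light_means = [450, 520, 650]
print(select_matching_flats(flat_means, light_means))  # [1, 2, 3, 4, 5]
```

In other words, of the seven hypothetical flats above, only the five with means reasonably close to the 450-650 ADU light-frame range would go into the master.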

To avoid introducing an excessive amount of noise from flat-frame exposures at that level, I find it necessary to stack a much larger number of flats than is usually the case with other chips. This is because the shot noise in the individual flats and lights will necessarily be similar, if their means are similar. In contrast, with conventional chips, where the flats are typically acquired at about 50% of saturation, the flat-frame noise is small relative to the light frames of most deep-sky targets. The number of flats can be minimized by applying aggressive noise reduction to the master flat; I typically apply a strong instance of PixInsight's MureDenoise script (variance set to 3.0), and then remove the first two layers. Nevertheless, I typically acquire 50-100 flat frames to beat down the noise in the master to the point where I can't notice any added noise in the calibrated light frames. I use twilight flats, and explain my strategy for acquiring them near the end of the post.
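The frame counts above follow from simple noise arithmetic: since flats taken at light-frame signal levels have per-frame noise comparable to the lights, the noise the master flat adds falls only as 1/sqrt(N). This back-of-envelope helper is my own illustration, not part of any vendor tool:

```python
import math

def flats_needed(max_added_noise_fraction):
    """Smallest flat-stack size N such that the master flat's noise,
    which falls as 1/sqrt(N), stays below the given fraction of a
    single frame's shot noise (flats and lights assumed comparable)."""
    return math.ceil(1.0 / max_added_noise_fraction ** 2)

print(flats_needed(0.15))  # 45
print(flats_needed(0.10))  # 100
```

Keeping the added noise below 10-15% of a single frame's shot noise lands in the 50-100 frame range quoted above; aggressive noise reduction on the master effectively relaxes the fraction and lowers the count.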

GENERAL COMMENTS

1) There are some excellent 4040 images out there, but I have not been able to find information on the post-processing strategies that were used. And there are situations where the FPB is not very intrusive and can simply be ignored, such as may happen when the target produces a much larger contribution to the chip response than the pattern background. Furthermore, when there is not enough good data at low light levels, it may be possible to suppress the FPB enough by brute force, using an aggressive black point, without compromising the rest of the image. But it often happens that the FPB competes with dim but good-quality parts of the data, and then brute force approaches to suppressing it will cause an unacceptable loss of data and image quality.

2) The PixInsight CanonBandingReduction script, developed by Georg Viehoever, can significantly reduce the FPB, as discussed in the CloudyNights post on the 4040 mentioned above. However, it adds some extra noise to the image, and does not suppress the FPB as effectively as an optimized flat. On the other hand, some extra effort is required to create an optimized master flat using the approach described here. Individual mileage may vary! I recommend first trying the PixInsight script if you have a 4040-based camera and have not done so already, and, if so inclined, comparing the results with the processing examples shown below, to get some sense of the relative quality of the two procedures, in light of the additional overhead of creating optimized master flats.

3) I have seen some discussion about using dithering at extremely large amplitudes to partially suppress the 4040's FPB, but I find that the impact of aggressive dithering is limited, and in any case is unnecessary, if optimized master flats are used. I dither using amplitudes of just a few pixels, as is typical with other chips.

4) If I've missed any other useful information out there, or if anything I've written here could use correcting, or clarification, I would welcome hearing about it, so please fire away with posts here, or messages to me! I have also posted this entire screed on CloudyNights.

FPB IN DARKS, FLATS, AND LIGHTS

A vivid illustration of the FPB produced by the 4040 chip is given by this screen capture of two master darks created with my FLI 4040:

The master dark on the left is from a stack of fifty 0.1-sec exposures at -10C, while the master on the right is from a stack of thirty-five 120-sec exposures, also at -10C. All frames were acquired using the HDR ("merged") camera mode, which automatically combines the low- and high-gain 12-bit ADC outputs on the fly, to produce a (pseudo) 16-bit image. The question of which camera mode to use is addressed in detail near the end of the post. The images were in linear form and displayed with a standard screen stretch in PixInsight. 

These examples show that the FPB consists of two components: a thicket of vertical bands, and seams across four quadrants. The quadrants are present because this architecture uses four adjacent chips, whose outputs are combined to produce the overall image. Reflections from the surface of the chip can reveal the quadrants, as seen for example in this webpage for the QHY4040. 

The FPB in the master dark on the right is partially obscured by hot pixels, which dominate the screen stretch, given the much longer exposure time than the master on the left, and the chip's appreciable dark current. This illustrates a more general point, which is that the FPB, in a given image of any type, may, or may not, be "obvious". That depends on several factors, which include the image type, the exposure time, the nature of the target, and what kind of post-processing may have been applied. But the FPB is always present in the raw data in darks, flats, and lights.

This screen capture shows a dark-subtracted master flat on the left, and a single dark-subtracted luminance frame of M77 on the right (240-sec exposure):

The master flat is from a stack of nineteen 0.1-sec twilight-flat frames, each of which was dark-subtracted before combining into the master (the mean ADU counts of the dark-subtracted flats ranged from about 1500 to 2000). These examples illustrate the important fact that intrusive FPB is present in flats and lights even after dark subtraction, though it often happens that the FPB is not "obvious" in individual dark-subtracted light frames (without careful inspection), but comes back to bite when many light frames are stacked, if an optimized master flat is not used.

Since the FPBs in dark-subtracted flats and lights are so similar, one might expect that flat division will remove it from the light frames. However, as argued above, an intrusive amount of FPB may remain even after flat calibration, unless the flat-field exposures are chosen to closely match the means of the flats to the light-frame means. This is demonstrated in the two examples coming up.
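For completeness, the two-step calibration itself is entirely standard; only the choice of flat exposures changes. Here is a minimal NumPy sketch on toy numbers (the helper function and the synthetic "vignette" are purely illustrative, not my actual pipeline):

```python
import numpy as np

def calibrate(light, master_dark, master_flat):
    """Standard two-step calibration: subtract the matched-exposure
    master dark, then divide by the master flat normalized to unit
    mean. The 4040-specific twist is upstream, in choosing flat
    exposures whose means match the lights'."""
    return (light - master_dark) / (master_flat / master_flat.mean())

# Toy frames with a linear shading gradient standing in for vignetting:
signal = 400.0
vignette = np.linspace(0.8, 1.0, 16).reshape(4, 4)
master_dark = np.full((4, 4), 100.0)
light = master_dark + signal * vignette
master_flat = 500.0 * vignette  # dark-subtracted flat with the same shading
cal = calibrate(light, master_dark, master_flat)
print(cal.round(6))  # uniform 360.0 everywhere: the shading divides out
```

The point of the toy example is that flat division removes whatever multiplicative pattern the flat and light share; the question for the 4040 is whether the FPB really is shared at mismatched signal levels.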

TWO SAMPLE IMAGING TARGETS

#1: M77 LUMINANCE STACK

The screen capture below compares the results of integrating the same stack of sixty-eight dark-subtracted luminance images of M77 (240-sec exposures), but with the flat calibration done using five different master flats, except at the top-left, where no flat calibration was done. The five masters were obtained from five different flat-frame stacks with means that spanned different ADU ranges, which are listed below. The light-frame stack had means in the range 450-650 ADU (all quoted ranges are approximate). All images were in linear form, and displayed with a standard PixInsight screen stretch, and all frames were again acquired in the camera's HDR mode (the discussion of which camera mode to use is coming up).

Top row, left to right: No flat calibration; Calibration using flat-frames with means in the range 50-100 ADU; Calibration with flat-frame range 150-300.
Bottom row: Range 450-650, which roughly matches the light-frame stack; Range 750-1000; Range 2000-2400.

There is a clear trend. The FPB is very intrusive in the top-left stack, where flat-calibration was not done; it is almost completely removed in the stack on the bottom-left, where the range of flat-frame means was chosen to closely match the range of light-frame means; and the FPB is again very intrusive in the stack on the bottom-right, where the means of the flats and lights had the greatest mismatch. The other cases show a gradual improvement leading up to the "optimal" stack, and then a gradual deterioration away from it. 

In case the comparisons are unclear at this small scale, here is a closeup along the centre-left edge of two cases:

Left: Optimal flat mean-ADU range 450-650; Right: flat range 2000-2400.

Four more comments to round out this example.

1) In the "optimal" flat calibration for this target, there is still a trace of the horizontal seam between the two chip quadrants on the left side of the image. That some vestiges of the FPB may remain with this approach is not surprising, since I've chosen to create the master flat from a stack of flat frames with different means, which by construction cannot exactly match the mean of every individual light frame (if any). In any event, an "exact" match between flats and lights is not possible in general, since the illumination across the flats and lights will usually be different, even if their overall means are identical. On the other hand, the visibility of the seams has been dramatically reduced using the "optimal" flat, and what little is left can be removed with some simple additional post-processing. In the case of my posted M77 image, I created a mask to select the seam that was not suppressed enough by the calibration, and used CurvesTransformation to brighten that area to match the surrounding region. No trace of the FPB, including the seam, is visible in the posted image, and I did not have to resort to an aggressive black point to suppress any of it.

2) The seams between the quadrants are not usually a problem after a reasonably-optimized flat calibration, and the next example target is a more typical case, where no further correction is needed.  This M77 image is a fairly extreme case in this regard, evidently because an extremely-bright nucleus is centred on the common corner between the quadrants, leading to a strong illumination of the seams. In situations like this, it is usually the case that the only trace of a seam that might be left after optimal flat calibration is the horizontal one on the left side of the image, but I don't know why the other seams don't also show up. 

3) The highest flat-frame ADU range of 2000-2400 that I used here shows that the currently-available manufacturer recommendations for calibration will not work in some, if not most, cases. One manufacturer's recommendation is to use flats at 30-50% of saturation, which is about 10X higher than the highest range shown above. Moreover, at that level, the flats will come entirely from the low-gain channel (as explained below), while the light frames here come almost entirely from the high-gain output, as will usually be the case with deep-sky targets (see the section on camera modes below for more info). Another manufacturer recommends taking flats at about 1/2 of the 12-bit high-gain channel output, or about 2000 ADU, which as shown above does not produce an acceptable result.

4) Whether or not any residual FPB in a calibrated light-frame stack is a problem, including when the flats are not fully optimized, depends on the details of the target, and the aggressiveness of the rest of the post-processing, and perhaps other factors. But I think that the FPB with this camera will invariably come back to haunt the imager, if it is not dealt with effectively.

#2: ABELL 426 LUMINANCE STACK

The following screen capture compares the result of integrating the same stack of seventy-six dark-subtracted luminance exposures of Abell 426 (240-sec), this time with the flat-calibration handled in three ways:

Left to right: No flat calibration; Calibration with flat means in the range 350-550, which roughly matches the light-frame range; Flat-frame range 2000-2400.

The FPB is less prominent overall with this target than in the M77 example above, evidently because the illumination of the frame is fairly uniform, with localized features that are not excessively bright, and with relatively few interesting elements in the dim regions of the frame. The flat-calibration is again optimized by roughly matching the range of ADU counts of the flats to the lights. Despite the fairly benign overall appearance of the three examples at this scale, the FPB is noticeable, except with the optimized master flat, as seen in the following closeups of the image centres in two cases:

Left: Optimal flat range 350-550; Right: flat range 2000-2400.

Whether or not the FPB in the unoptimized calibration on the right is a significant problem with this FOV depends somewhat on the goals of the image processing. The FPB still remaining in this case can be removed by brute force, using a fairly aggressive black point, with a loss of data that may be acceptable. On the other hand, as illustrated on the left, a brute force chop to the data can be avoided by using a reasonable optimization of the flat-frame stack. My final Abell 426 image is posted here.

ACQUIRING TWILIGHT FLATS

One might be concerned about the feasibility of obtaining the small ADU values that are needed for optimal flat calibration with this chip, typically just a few hundred ADUs, without using a flat panel. However, I have been using twilight flats, and find that an exposure time of 0.1-sec, starting roughly 20 minutes after sunset, gives plenty of frames with suitable mean ADU counts, down to less than 50 ADU, well before drifting stars become a problem. With CCDs, such short exposures would not be feasible, due to the severe shutter shadow that would result, but the rolling-readout mode of CMOS chips allows the shutter (if any is present!) to remain open during the entire acquisition process. I've found that four or five twilight runs may be necessary to get enough flats to cover the various ADU ranges that are relevant to my images.

TO MERGE OR NOT TO MERGE?

Last topic!

GSENSE4040-based cameras allow the user to save the dual low- and high-gain 12-bit ADC outputs as separate images, as well as to have the camera merge the two channels on the fly, producing a single "HDR" (or "merged") image that is supposed to mimic the output of a genuine 16-bit ADC. 

At least one manufacturer recommends to calibrate the low- and high-gain images separately, which requires saving both channel outputs for all three image types: darks, flats, and lights. In this approach, one prepares separate dark- and flat-masters for each channel, which are used to independently calibrate the corresponding light-frame stacks, with the calibrated stacks integrated separately. Calibration is completed by merging the two integrated outputs to produce a single calibrated 16-bit master light frame (details on the merging algorithm coming up). The rest of the post-processing (noise reduction, de-linearization, RGB combination, etc.) is done using conventional techniques. 

This two-channel calibration strategy obviously places a significant additional burden on the imager compared with using chips with genuine 16-bit ADCs, and does not avoid the need to optimize the flats. Moreover, the software currently provided by the manufacturers does not implement this procedure for post-processing, at least not for important platforms like PixInsight. This is not a ginormous barrier to using the 4040, and I've implemented a simple version of the two-channel calibration procedure in PixInsight.

But who really wants to do all that extra work???!!! 

Fortunately, I think that for the vast majority of deep-sky targets, it is not necessary to deal directly with the two channel outputs!  

The claim here is that high-quality results can usually be obtained using the single HDR 16-bit output produced by the camera on the fly, which makes the entire acquisition and processing chain almost identical to what is routinely done with conventional CCDs and CMOS chips, except for the need to optimize the flat-frame exposures.

The reason is that almost all of the image data for most deep-sky targets comes from the high-gain channel. For example, with the recommended settings for the FLI KL4040, only those parts of the image above 3800 ADUs will come from the low-gain output, and that will usually occur only in small, localized regions of the target, mainly the cores of bright stars and galactic nuclei. Moreover, flats with small mean-ADU counts that are acquired in HDR mode will likewise come almost entirely from the high-gain channel. Consequently, an optimized HDR master flat will properly calibrate the high-gain parts of HDR light frames, with calibration errors arising at some level only near the brightest parts of the image. If the bright regions are small and localized, the calibration errors are not likely to be noticeable (and in any event, calibration is never perfect in practice, with any chip!). I have not noticed problems with this approach in any of my images. Although one might be inclined by this logic to use only the high-gain output, there is no advantage to doing so, and in fact, one would needlessly saturate moderately bright regions of the image, near the limit of the 12-bit ADC (4096 ADU).

For the record, here is how the low- and high-gain channels are merged in HDR mode. The high-gain output is used for pixels where that channel's 12-bit ADC output lies below some threshold, otherwise the low-gain output is used by mapping it to the 16-bit ADU range above the threshold, using a multiplier and offset that are chosen in part to give continuity at the threshold. The manufacturers provide recommended values for the settings, although they can also be changed by the user. In the case of my KL4040, I use the recommended settings, with the gains of the low- and high-gain channels set to 2.8 and 16.5, respectively, the cross-over threshold set to 3800 ADU, and the multiplier and offset for the low-channel map set to 20.5 and -1228, respectively.
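In code, that merge rule looks something like the sketch below, using the KL4040 numbers just quoted (threshold 3800, multiplier 20.5, offset -1228). The function name and array handling are my own illustration of the described algorithm, not vendor code:

```python
import numpy as np

def merge_hdr(high_gain, low_gain, threshold=3800, mult=20.5, offset=-1228.0):
    """Use the high-gain 12-bit output where it is below the threshold;
    elsewhere, map the low-gain output into the 16-bit range with a
    linear transform (chosen by the manufacturer for continuity)."""
    high = np.asarray(high_gain, dtype=np.float64)
    low = np.asarray(low_gain, dtype=np.float64)
    return np.where(high < threshold, high, low * mult + offset)

# A dim pixel keeps its high-gain value; a pixel that saturates the
# high-gain ADC (4095) falls back to the mapped low-gain value:
print(merge_hdr([1200, 4095], [60, 300]))  # values 1200.0 and 4922.0
```

This also makes the earlier point explicit: any pixel below 3800 ADU in the merged image is pure high-gain data, which is why flats with small means exercise the same channel as the lights.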

CONCLUSIONS

No conclusions, except to say that if anyone has the patience to read this long post, and has any constructive comments, criticisms, or additional information to add, please fire away!
Howard Trottier:
In an earlier version of this post, I did not correctly identify the author of the script that is behind PixInsight's CanonBandingReduction tool. The post has now been corrected with the proper attribution.
-Howard.
我可是汞:
That's great work, Howard. I'll try this method on my QHY268M, since I found a latticed texture in my images, which I had never seen in my QHY695A images. Thanks for providing a new calibration tip!
John Hayes:
Thanks for such an outstanding post Howard!  I don't have the same camera but it's great to see this kind of information being made available for those who do. A+

John
Rouz Astro:
我可是汞:
That's great work, Howard. I'll try this method on my QHY268M, since I found a latticed texture in my images, which I had never seen in my QHY695A images. Thanks for providing a new calibration tip!

Did you try changing the USB traffic setting? That might help.
Howard Trottier:
我可是汞,  John, and Rouz:

Thanks for your kind comments!

I don't know much about the QHY695, but since it doesn't use the dual-gain sCMOS architecture, my hunch would be that these are unrelated issues. Rouz's suggestion to try changing the USB traffic setting sounds promising, from my experience with readout problems that I had with some other CMOS cameras when I had dialled up the USB rate too high.

Best regards,
Howard.
Rouz Astro:
Thank you for posting all that information! I wasn't aware that the sensor was actually 4 sensors stitched together.

Sounds like  you have worked quite hard on this issue. Thank you for sharing this post! I suppose it will be very helpful to users of these cameras.

Makes you appreciate your results more.



I was looking into one of these primarily for the large FOV, as you mentioned, but opted to go for the IMX455 chip + the PW 0.66x reducer.

Having read your post, I suspect the IMX will be easier to handle.

Rouz
Howard Trottier:
Hello again:

I finally see that my troubles were almost certainly self-inflicted, and I want to thank a member of CloudyNights, who uses the handle freestar8n, for reading my screed so carefully, and catching the off-hand remark that is the tell. 

I have been dithering by only a few pixels, after initially experimenting with a variety of dithering amplitudes, of up to 100 pixels, and convincing myself that this didn't help. But reading the comment by freestar8n forced me to realize that that makes no sense. Large-amplitude dithering *has* to work, precisely because the pattern is fixed. The only question is how big the amplitude must be, and freestar8n's suggestion of around 20 pixels ought to do it.

I'll start using large dithering amplitudes right away, and will report back with confirmation that this works, as it must, and what the minimum amplitude might be. 

And time to eat more than a little humble pie, even in advance of trying sensible dithering: sincerest apologies to the manufacturers for suggesting that it was necessary to do something different to handle the 4040 chip than what is correctly the industry standard.

Short story: disregard my original post, except to marvel at its foolishness: the nonlinearity that I claimed to see was an artifact of not dithering enough!

Howard.
John Hayes:
Howard,
It is actually possible to find the correct dithering distance using a little math.  As long as the dithering distance is greater than the FWHM of the autocorrelation function of the dark signal, you are good.  This would be an easy tool to implement in PI but it can also be done by hand since PI includes FFTs.  The autocorrelation is the inverse transform of the power spectrum of the image (which is the square modulus of the transform of the image).  Making the dither distance greater than maybe 3x-5x the width of the central peak will be completely safe.  Frank's advice to use 20 pixels is almost certainly a good number.  When I get some time it might be an interesting exercise to write a PixelMath routine to do this.
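A quick NumPy sketch of that recipe, run on a synthetic banding pattern (all names and numbers here are illustrative, not measured 4040 data):

```python
import numpy as np

def autocorr_peak_width(dark):
    """Full width at half maximum (in pixels, along one axis) of the
    central autocorrelation peak. Per the recipe above, the
    autocorrelation is the inverse FFT of the power spectrum (the
    square modulus of the FFT of the mean-subtracted frame)."""
    d = dark - dark.mean()
    power = np.abs(np.fft.fft2(d)) ** 2
    acf = np.fft.fftshift(np.real(np.fft.ifft2(power)))
    acf = acf / acf.max()
    row = acf[acf.shape[0] // 2]  # horizontal cut through the central peak
    c = acf.shape[1] // 2
    w = 1
    while c + w < row.size and row[c + w] >= 0.5 and row[c - w] >= 0.5:
        w += 1
    return 2 * w - 1

# Synthetic vertical banding with a 16-pixel period (a stand-in for FPN):
bands = np.tile(np.cos(2 * np.pi * np.arange(96) / 16), (32, 1))
print(autocorr_peak_width(bands))  # 5 -> dither by several times this width
```

Multiplying the returned width by the 3x-5x safety factor gives the dither amplitude.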

John
Howard Trottier:
Hello once again!

After staring into space for a while, I thought that it might be a tiny bit useful to elaborate on why my evidence for a supposed nonlinear behaviour in the fixed-pattern noise was actually an artifact produced by the inadequate dithering. Not that this really matters, so please don't bother with this message, unless you want to kill some time.

(Incidentally, freestar8n on CloudyNights *also* correctly pointed out that the pattern should be characterized as noise, i.e. FPN, as everyone in the industry does, so that it was pointless for me to make a big deal about calling it a background, or FPB!)

Anyway, here again is one of those sequences that I falsely claimed as evidence for the supposed nonlinearity:

In retrospect, it makes perfect sense that taking flats with means similar to the lights would actually be necessary to suppress the FPN in this light-frame stack, but *only* because the amplitude I used to dither was much too small. This plays out in the above sequence as follows.
 
It starts in the upper-left, where no calibration was used, and the pattern stands out in a big way, because it is actually *reinforced* by the inadequate dithering. The mean-ADUs of the flats increase when going from left-to-right along the top row, and then along the bottom row. In the bottom-left frame, the flats now roughly match the lights in mean values, and the pattern in the calibrated stack is mostly wiped out: the amplitude of the FPN now roughly matches in the flats and lights, and the locations roughly match, because flats cannot be dithered.
 
The sequence continues, with the means of the flats increasing along the bottom row: the amplitude of the FPN in the flats gradually becomes swamped by shot noise (they add in quadrature), while the amplitude stays the same in the poorly-dithered light frames. By the end of the sequence, the flats are at about 50% of saturation in the dominant high-gain channel (about 2000 ADU), and the FPN has largely disappeared from the flats. So in the final frame, it doesn't matter that the location of the FPN in the master flat is the same as in the lights (its amplitude is effectively zero), and the pattern in the calibrated stack has once again become obvious.
 
But all this provides evidence for nothing, other than that one should dither by a lot, and take the flats at about 50% of saturation, both of which constitute the industry standard for CMOS chips with abundant FPN!
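The effect is easy to demonstrate on synthetic data. In the toy model below (assuming NumPy; none of this is real camera data), a fixed vertical-band pattern survives an undithered stack at full strength, but is beaten down by random offsets of up to 20 pixels:

```python
import numpy as np

# Toy model of the stacking argument above: a fixed vertical-band
# pattern survives stacking when the frames are not dithered, and
# averages down when the registered frames carry it at random offsets.
rng = np.random.default_rng(42)
size, n_frames = 128, 64
fpn = np.tile(rng.normal(0.0, 1.0, size), (size, 1))  # vertical bands

def residual_fpn(dither_max):
    """Std of the mean stack when each registered frame carries the
    FPN shifted by a random offset in [-dither_max, dither_max]."""
    acc = np.zeros((size, size))
    for _ in range(n_frames):
        dx = int(rng.integers(-dither_max, dither_max + 1)) if dither_max else 0
        acc += np.roll(fpn, dx, axis=1)  # registration displaces the FPN
    return (acc / n_frames).std()

print(residual_fpn(0), residual_fpn(20))  # undithered vs ~20-pixel dither
```

With no dither the residual is the full pattern amplitude; with 20-pixel dithers it drops several-fold, and it keeps falling as the number of distinct dither positions grows.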
 
Anyway, if you've bothered to come this far, thank you, and I hope that I have managed to make some sense!
 
Howard.
Howard Trottier:
John Hayes:
Howard,
It is actually possible to find the correct dithering distance using a little math.  As long as the dithering distance is greater than the FWHM of the autocorrelation function of the dark signal, you are good.  This would be an easy tool to implement in PI but it can also be done by hand since PI includes FFTs.  The autocorrelation is the inverse transform of the power spectrum of the image (which is the square modulus of the transform of the image).  Making the dither distance greater than maybe 3x-5x the width of the central peak will be completely safe.  Frank's advice to use 20 pixels is almost certainly a good number.  When I get some time it might be an interesting exercise to write a PixelMath routine to do this.

John

Hi John:

Many thanks for the succinct explanation of how to estimate the requisite dithering amplitude. I'll compute the autocorrelation for myself, while accumulating light frames using a few dithering amplitudes, and then compare the results, which will give me an instructive cross-check of both. 

Thanks again for taking the time to write both of your followups to my initial post.

Warmest regards,
Howard.
fred.germain2812:
Hi Howard,

As I told you, I have tested some heavy dithering without managing to completely remove the FPN after calibration and integration in PixInsight, which is weird.
I will post here an animated GIF, from the Blink process, of the calibrated and registered lights, along with the integrated light frame.

Thanks again for your help

Fred
Freestar8n:
When dithering to remove FPN it is essential that the same dither location is never repeated - which is presumably what was happening here.  The goal of dithering is to randomize the locations of the FPN in the stack - and if you have any repeated locations it will stack linearly like a normal signal.  In that case it doesn't matter what the scale of the FPN structure is - it will stack linearly.

There is a secondary effect that larger scale structure, as is clearly present here, will remain perceptually prominent if the FPN isn't removed effectively.

In any case - I think most people tend to dither by a much smaller radius than they should - particularly when stacking a large number of frames where the location is likely to repeat.  The only motivation to use a small radius is if  it takes time to move a longer distance, or the pixel count of the sensor is small - but nowadays neither of those issues is a factor.

Anyway - I'm glad the result here is improved.

Frank
Howard Trottier avatar
Hi Frank, and Fred:

Frank, thanks for echoing your discussion from the CloudyNights forum.  

Fred, my experiments with the dithering amplitude, from when I first started using the camera, produced results that sound like what you are describing, as best as I can recall. I can't explain at the moment how I justified my conclusion, in part because I don't remember important details, like the range of amplitudes that I tried, how systematically I tried to look for a trend, and whether the data was what I would now consider to be good enough quality.

But the argument that dithering with a sufficiently large amplitude will remove the FPN seems inescapable to me now. I will try to dig up my old data, but more importantly, I'll generate a bunch of new data for a new target, to do a systematic study, to make sure that I've got a new workflow under control. And it would definitely help me with all that if you could share your results.

Also, I'm going to calculate the autocorrelation as a function of the dithering amplitude, following John Hayes's suggestion, to get an a priori estimate of the necessary amplitude, and I'll report back with that ASAP. 

Many thanks,
Howard.
Rouz Astro avatar
While I don't use this chip myself, the discussions here gave me some very good insight.

Out of curiosity I will also try a larger dither distance with my IMX455 to see if that makes much difference.
Howard Trottier avatar
Hi Rouz:

Thank you for your very kind email. Looking forward to hearing what you find.

Howard.
John Hayes avatar
Howard Trottier:
Hello once again!

After staring into space for a while, I thought it might be useful to elaborate a bit on why my evidence for a supposed nonlinear behaviour in the fixed-pattern noise was actually an artifact produced by inadequate dithering. Not that this really matters, so please don't bother with this message, unless you want to kill some time.

(Incidentally, freestar8n on CloudyNights *also* correctly pointed out that the pattern should be characterized as noise, i.e. FPN, as everyone in the industry does, so that it was pointless for me to make a big deal about calling it a background, or FPB!)

Anyway, here again is one of those sequences that I falsely claimed as evidence for the supposed nonlinearity:

In retrospect, it makes perfect sense that taking flats with means similar to the lights would actually be necessary to suppress the FPN in this light-frame stack, but *only* because the amplitude I used to dither was much too small. This plays out in the above sequence as follows.
 
It starts in the upper-left, where no calibration was used, and the pattern stands out in a big way, because it is actually *reinforced* by the inadequate dithering. The mean-ADUs of the flats increase when going from left-to-right along the top row, and then along the bottom row. In the bottom-left frame, the flats now roughly match the lights in mean values, and the pattern in the calibrated stack is mostly wiped out: the amplitude of the FPN now roughly matches in the flats and lights, and the locations roughly match, because flats cannot be dithered.
 
The sequence continues, with the means of the flats increasing along the bottom row: the amplitude of the FPN in the flats gradually becomes swamped by shot noise (they add in quadrature), while the amplitude stays the same in the poorly-dithered light frames. By the end of the sequence, the flats are at about 50% of saturation in the dominant high-gain channel (about 2000 ADU), and the FPN has largely disappeared from the flats. So in the final frame, it doesn't matter that the location of the FPN in the master flat is the same as in the lights (its amplitude is effectively zero), and the pattern in the calibrated stack has once again become obvious.
 
But all this provides evidence for nothing, other than that one should dither by a lot, and take the flats at about 50% of saturation, both of which constitute the industry standard for CMOS chips with abundant FPN! 
 
Anyway, if you've bothered to come this far, thank you, and I hope that I have managed to make some sense!
 
Howard.

Howard,
I want to clarify something here.  As defined by Janesick (Photon Transfer, Sec. 3.4), FPN is caused by variation in responsivity between pixels across the sensor (called PRNU).  It is fixed with respect to the sensor and, unlike shot noise, which is proportional to the square root of the signal, FPN is directly proportional to signal strength.  It appears in the form of spatial noise, but as it relates to a measurement, it is neither a true signal nor a true noise source.  It is a characteristic of the sensor that relates to the uncertainty of the measurement between pixels, and that's why it is historically called a "noise term."  Since we typically use pixel-to-pixel measurements to characterize measurement noise, the variance of FPN is added to the variance of other measurement noise sources to get the total variance as measured across the sensor.

FPN is a multiplicative contributor (the same as vignetting), so it is removed strictly by flat fielding.  Using dithering to reduce the effects of FPN simply reduces the effect by stack filtering and spatial averaging, which may require a LOT of data to achieve a result as good as simple flat fielding.  If you have to carefully match light levels to get flat-fielding to work, the sensor is not linear.  Simply dithering to reduce the effects of FPN is akin to dithering to reduce the effects of dark current: you can sometimes make things a little better, but dithering is a bandaid in both cases.  Dithering is always a good idea to filter small random residual calibration errors, and I agree that larger values are better than small values for many sensors, but something is screwy if you are really fighting FPN and flat fielding doesn't almost completely fix it in the calibration process--before you stack.
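John's point that FPN is multiplicative, and therefore removed by flat fielding, can be illustrated with a toy NumPy simulation (the 1% PRNU, the signal levels, and the frame counts are all invented for illustration, not measured from any real sensor):

```python
import numpy as np

rng = np.random.default_rng(42)
shape = (256, 256)

# Fixed per-pixel responsivity (PRNU): ~1% rms, fixed for this "sensor".
prnu = 1.0 + rng.normal(0.0, 0.01, shape)

def expose(mean_e):
    """One exposure: multiplicative PRNU plus Poisson shot noise (e-)."""
    return rng.poisson(mean_e * prnu).astype(float)

# FPN scales with signal: at 10,000 e- a 1% PRNU contributes ~100 e- of
# spatial "noise", comparable to the ~100 e- of shot noise (sqrt(10,000)).
light = expose(10_000)

# A master flat from many bright frames averages away shot noise,
# leaving the PRNU map, which division then cancels in the light.
master_flat = np.mean([expose(30_000) for _ in range(16)], axis=0)
master_flat /= master_flat.mean()

calibrated = light / master_flat

# Spatial rms before vs after flat-fielding.
print(round(light.std()), round(calibrated.std()))
```

After division, the pixel-to-pixel rms drops back to roughly the shot-noise level alone: the multiplicative pattern is removed in a single frame, before any stacking, which is John's point about not needing dithering to beat true PRNU.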

John
Freestar8n avatar
Howard Trottier:
Hi Frank, and Fred:

Frank, thanks for echoing your discussion from the CloudyNights forum.  

Fred, my experiments with the dithering amplitude, from when I first started using the camera, produced results that sound like what you are describing, as best as I can recall. I can't explain at the moment how I justified my conclusion, in part because I don't remember important details, like the range of amplitudes that I tried, how systematically I tried to look for a trend, and whether the data was what I would now consider to be good enough quality.

But the argument that dithering with a sufficiently large amplitude will remove the FPN seems inescapable to me now. I will try to dig up my old data, but more importantly, I'll generate a bunch of new data for a new target, to do a systematic study, to make sure that I've got a new workflow under control. And it would definitely help me with all that if you could share your results.

Also, I'm going to calculate the autocorrelation as a function of the dithering amplitude, following John Hayes's suggestion, to get an a priori estimate of the necessary amplitude, and I'll report back with that ASAP. 

Many thanks,
Howard.



Hi Howard-

Dithering is unfortunately fairly poorly understood and I don't know good write ups about it - but I have been describing it in CN for many years.  It wasn't long ago that not many people were doing it at all - and now it is pretty much the norm.  But I don't think the software and methodology is optimal for most.

The main thing about a dither radius is that it needs to be large enough that in N exposures you don't repeat the same location.  It's ok if there are some repeats, but it won't be as effective.  I think the people who feel only a few pixels radius is fine assume that any jiggling of the image will blur it or something.  But what matters is how the pixels line up in the stack - and any repeated positions will end up summing any pattern noise in that stack - rather than help cancel it.
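Frank's point about repeated positions can be quantified with a simple birthday-problem estimate. This sketch assumes, purely for illustration, that dither offsets land uniformly on integer pixels inside a square of half-width equal to the dither radius (real dither strategies differ):

```python
def prob_repeat(n_frames, radius):
    """Chance that at least two of n_frames uniform random integer-pixel
    dither positions coincide, over a (2r+1) x (2r+1) square of cells."""
    cells = (2 * radius + 1) ** 2
    if n_frames > cells:
        return 1.0               # pigeonhole: a repeat is guaranteed
    p_distinct = 1.0
    for k in range(n_frames):
        p_distinct *= (cells - k) / cells
    return 1.0 - p_distinct

# With 100 frames: radius 3 guarantees repeats (only 49 cells to land in),
# while radius 20 leaves most positions unique.
for r in (3, 10, 20):
    print(r, round(prob_repeat(100, r), 3))
```

Even at radius 20 the chance of at least one coincidence among 100 frames is high (the expected number of coincident pairs is C(100,2)/1681, about 3), which fits Frank's remark that a few repeats are tolerable but a small radius, where repeats dominate, is not.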

The other thing is that the patterns and structure in your master dark don't directly correspond to the pattern that is actually being dithered - because you are stacking calibrated frames, and the relevant pattern noise is what remains after calibration - ideally with good masters.  The residual pattern will be a combination of what remains from imperfect dark subtraction, and what remains from imperfect flats and associated PRNU.  But the only way to see either of those things is by calibrating a  single dark or calibrating a single light - in which case the patterns and structure should be largely eliminated but what remains is an overall pattern noise present and very similar in each calibrated light.  As a result, you may not even know it is there because it could be purely random and gaussian with no structure.

So  you could have purely random and non-correlated pattern noise, but you still need a large enough dither radius so the dither positions never repeat.  And if you do have local structure in the residual pattern noise you would want to dither far enough to avoid correlations in the stack - within reason.

But in your case I think a primary issue was not having a good master dark and master flat - and matching dark flats - all in combination with a small dither radius.

Anyway - it sounds like things are working and that's great.  If you still find anything different about calibrating that camera vs. others I would be interested to hear it.  If there is any drift in behavior then nothing is "fixed" anymore and it's hard to correct for it systematically.

Frank
Rouz Astro avatar
Howard,

Are you using darkflats as well?

I remember I had issues with persistent amp glow on a chip that could not be removed; the problem was solved by making darks for the flats as well as for the lights.

The outcome was much better.
I didn't see whether you had done that, but it might be worth a try, as it's very easy at this point.


Rouz
Howard Trottier avatar
Hi Rouz:

Thanks for asking about the darks.

I always create master darks from dark-frame exposures that are exactly matched to whatever is being dark-subtracted, and never interpolate the exposure times (that is, I never use bias frames). The 4040-based camera manufacturers all recommend exactly matching the dark-frame exposure times in that way, and avoiding bias frames, as I think is typically the case with CMOS chips, due to their much higher dark current compared with good CCDs. I think that might be even more important for the 4040, since its FPN is so significant, at least to my eyes! 

Here are two examples of master darks that I showed in the original post: 0.1-sec on the left from about 50 frames (used on twilight flats), and 120-sec on the right (used on corresponding light frame stacks), also from about 50 frames:

I'm still digesting the new information provided by John and Frank, and how this might explain why Fred and I found significant FPN remained after large-amplitude dithering.  A few other 4040 owners have communicated with me privately after these exchanges to clarify that they too had had limited success with large-amplitude dithering. 

I had been up almost round the clock for several days and crashed before  John and Frank posted their most recent comments, and am still a little punch drunk, so I probably won't have much to say of any coherence for another couple of days, but will be digging into all this in the meantime!

Many thanks for all of the very helpful posts that have been made on this thread, even after I ate all of that humble pie! 

Howard.
Rouz Astro avatar
Hi Howard,

I can imagine it gets tiring when it's non-stop work. 

Thank you for the answer, I see the 0.1s darks for the flats. Was wondering if that had been done.

I did notice the dark current is significantly high on these, and I can see you really have to be on top of your calibration and processing to tame this chip!


Best regards,

Rouz
Freestar8n avatar
I would place priority on making sure you have dark flats that match the flats in exposure time, flats that have a good amount of flux in them, and darks that match the lights in exposure time and temperature.

As for dithering distance I was mainly responding to your earlier comment that it was 'small.'  If small meant only a few pixels then that would definitely be bad - but at the same time it doesn't need to be hundreds or something - hence my ballpark value of radius 20 for a large number of exposures.

Instead of looking at master darks or flats I would look at a single dark that has had the master dark subtracted from it.  That will tell you the pattern noise due to the dark that you are dithering to reduce.

Separately you can use a very faint light/flat exposure to mimic imaging a very dark uniform sky background - with the full exposure duration of a normal light but only a faint and uniform background corresponding to your typical image background.  And then calibrate that "light" the normal way - with master dark and master flat.  You can use a normal exposure of an empty part of the sky but it's nice to have no stars at all to confuse things.

You will then have two single exposures that capture the pattern noise due to the dark itself - and the pattern noise due to the dark in combination with PRNU in the light - and it will represent the amplitude and structure that is actually in your individual exposures - after calibration.  And that is the pattern noise you are actually dithering to reduce.  Ideally it will have much smaller amplitude than in the masters themselves, but some structure will remain.  If the structure seems more prominent and stubborn than it should be then there is still something wrong with the calibration.  But if the amplitude has been reduced a good amount and there is high variation in pixel values locally - then dithering well is likely to be very effective.
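A minimal synthetic version of Frank's single-frame check (the pattern and read-noise amplitudes below are invented for illustration): subtract the master dark from one extra dark and compare the spatial rms before and after, which shows how much fixed structure calibration removes and what residual the dithering actually has to handle:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (256, 256)

# Synthetic stand-ins: a fixed dark pattern shared by every frame,
# plus independent read noise per frame (amplitudes are invented).
fixed_pattern = rng.normal(0.0, 30.0, shape)
READ_NOISE = 8.0

def dark_frame():
    return 500.0 + fixed_pattern + rng.normal(0.0, READ_NOISE, shape)

master_dark = np.mean([dark_frame() for _ in range(32)], axis=0)
single_dark = dark_frame()

# The residual of one calibrated dark is the pattern noise that
# dithering has to beat - far smaller than the raw fixed structure.
residual = single_dark - master_dark
print(round(single_dark.std(), 1), round(residual.std(), 1))
```

In this toy model the residual rms collapses from the fixed-pattern level down to roughly the read-noise level, which is the "much smaller amplitude" Frank says a good calibration should leave behind; a stubborn residual well above that would point to a calibration problem.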

Frank