Tim Hutchison:
I almost didn't want to post this message because I'm sure it will draw some negative responses and strong opinions. But in the end I just wanted to make sure that everyone has a clear understanding of what is going on.
I am an electrical engineer and have experience with electronics. Perhaps that credential will lend some credibility to this post.
There are multiple types of noise that we deal with in astrophotography, but here we are talking about two of them.
Bias noise is noise introduced into the image by the camera itself. Typically the electronics of the camera introduce interference into the circuitry, resulting in noise when the signal is read.
Dark current noise is caused by heat that builds up in the sensor during the exposure. This noise is dependent on the time of the exposure (more heat builds up over time) as well as the ambient conditions.
There are other types of noise, but we'll leave them out of this discussion.
Both of these types of noise are present in every image that we take. EVERY image!
What is interesting is that, for most CCD cameras, the bias is largely the same across the range of operation, and the dark current tends to scale very linearly with exposure time if held at a consistent ambient temperature. CMOS behaves differently: the dark current doesn't scale linearly (it's closer to exponential), and the bias isn't always consistent either.
Regarding CCDs, because of the behavior of the bias and dark current, it became commonplace to extract the bias signal into a separate master so that it could be subtracted from the master dark. Once you subtract the master bias from the master dark, the master dark is left with only the dark current, which can be scaled based on exposure time. So a single master dark could be used on both the light frame and the flat frame: you would subtract the bias, and the scaled, bias-subtracted master dark, from the light frame and from the flat frame. It was a processing shortcut made possible by the way CCD sensors behave.
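To put numbers on that workflow, here is a minimal sketch in Python/NumPy of the CCD-style calibration described above (the array and function names are just my own illustration, not any particular program's implementation):

import numpy as np

def ccd_calibrate(light, flat, master_bias, master_dark,
                  dark_exp, light_exp, flat_exp):
    # Remove the bias so the master dark holds only the dark current.
    dark_current = master_dark - master_bias

    # CCD dark current scales roughly linearly with time, so one master
    # dark can be rescaled to match any exposure length.
    dark_for_light = dark_current * (light_exp / dark_exp)
    dark_for_flat = dark_current * (flat_exp / dark_exp)

    # Subtract the bias and the appropriately scaled dark from each frame
    # type, then divide by the normalized flat.
    light_cal = light - master_bias - dark_for_light
    flat_cal = flat - master_bias - dark_for_flat
    return light_cal / (flat_cal / np.mean(flat_cal))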
CMOS sensors (the majority of CMOS sensors) are not so well behaved. Both the changes in bias for different exposure, brightness, etc., and the non-linear nature of the changes in dark current for different exposure times make this shortcut impossible.
So for CMOS, it became common and recommended to take flat darks whose temperature and exposure length match your flat frames and use them to calibrate your flats, to take darks whose temperature and exposure match your light frames and use those to calibrate your lights, and to take no bias frames at all for either. This does not mean that there is no bias correction! It just means that the bias AND the dark current for that particular exposure/temperature are present in the master dark associated with each type of sub.
Taking and building a master flat dark to calibrate your flats, and taking and building a master dark to match your lights and ignoring bias completely will always work for both CMOS and CCD sensors. Again, this is because the appropriate bias signal will always be present in the master dark. By ignoring the bias frame we are just using that bias signal that is already present in the master dark.
It is very important, if you do this, to NOT scale the master darks. Turn off the "optimize" setting in PixInsight and make sure you are using a master dark that was built from frames taken at the same temperature and exposure as the type of frames you are processing.
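For comparison, here is a sketch of the no-bias CMOS workflow (again, the names are mine and this is only an illustration): the master darks are used as-is, with no scaling, because each one already carries the bias appropriate to its exposure and temperature.

import numpy as np

def cmos_calibrate(light, flat, master_dark_light, master_flat_dark):
    # master_dark_light: built from darks matching the lights' exposure/temperature
    # master_flat_dark:  built from flat darks matching the flats' exposure/temperature
    # No bias frames and no dark scaling: subtracting each matched master dark
    # removes both the bias offset and the dark current in one step.
    light_cal = light - master_dark_light
    flat_cal = flat - master_flat_dark
    return light_cal / (flat_cal / np.mean(flat_cal))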
It is my opinion that this is the appropriate way to calibrate most CMOS sensor images (I would say ALL but someone will find an edge case that I don't know about, so I'll say MOST). It will also work just fine for CCD sensor images, but it is also appropriate for CCD subs to take the shortcut, use bias frames, and scale the darks.
I hope that is helpful and clear.
Best.
Tim
Tim,
I want to clarify and correct a few points here.
First, there are two things that are important to understand when it comes to calibrating an image. The first is signal and the second is noise--and it is VERY important to understand the difference and to use the correct terms. Signal is what we get when we average many measurements. The signal that we are most concerned about is the one that comes from the object itself, but it is important to understand that we also have other (unwanted) signals mixed into a raw image, including both dark and bias signals. The second thing that we get is noise. Noise IS NOT unwanted signal! Noise is the variation in signal that we measure about the average across many measurements, and it is characterized by the standard deviation of the distribution. Noise is almost always a by-product of signal (read noise is a notable exception). It comes from the quantum nature of light and small particles, and it follows Poisson statistics. So, be careful to distinguish between signals and noise! Calling everything "noise" is not only confusing, it is incorrect. Signals and noise are very different and do not behave mathematically in the same way, which leads to my next point.
Remember that signals can add, subtract, multiply, and divide; but when you add or subtract signals, noise can only add in quadrature. (This is relatively easy to show mathematically, but that's unnecessary here.) I should also point out that when you multiply or divide signals, noise always increases--but by adding the fractional (noise-to-signal) terms in quadrature.
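A toy simulation (made-up numbers) shows the quadrature rule: subtracting one noisy measurement from another removes the unwanted signal, but the noise of the result is sqrt(sigma1^2 + sigma2^2), not sigma1 - sigma2.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two sets of measurements with independent noise of 5 and 12 units.
a = rng.normal(loc=1000.0, scale=5.0, size=n)
b = rng.normal(loc=300.0, scale=12.0, size=n)

diff = a - b
print(diff.std())                  # ~13.0
print(np.sqrt(5.0**2 + 12.0**2))   # 13.0: the noise adds in quadrature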
When we measure bias, we get the offset that is set by the electronics in the camera that are typically there to keep the lowest possible output above zero. That offset (which is a signal) creates a little bit of noise. We also get a contribution from read noise and a few other more esoteric sources in a CMOS device. So, removing bias serves to subtract any electronic offset from your data--at the expense of increasing the noise in the result by a little bit. The bias offset may be important in the calibration process since offsets can play havoc when you divide by the flat data. A lot of folks get away without removing the bias offset when calibrating, but that's only because the bias offset is so small that it doesn't matter. If the bias signal is not very close to zero, it is not true that "ignoring bias completely will always work for both CMOS and CCD sensors". Yes, the bias signal is in both the image data and the master dark data, but it is also in the master flat data. When you subtract the dark data from the image, you do indeed remove the bias offset; however, it is still in the flat data, which divides the dark-subtracted image.
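A tiny numeric example (invented numbers) of why an offset left in the flat matters: suppose a corner pixel receives half the light of a center pixel and the camera adds a 500 ADU bias offset.

bias = 500.0
flat_center = 20000.0 + bias   # raw flat value at the center of the field
flat_corner = 10000.0 + bias   # raw flat value in a vignetted corner

# Bias removed from the flat before normalizing: the corner really is at 50%.
ratio_correct = (flat_corner - bias) / (flat_center - bias)   # 0.50

# Bias left in the flat: the corner appears to be at ~51%.
ratio_wrong = flat_corner / flat_center                       # ~0.512

print(ratio_correct, ratio_wrong)
# Dividing by the wrong ratio boosts the corner by 1/0.512 instead of 1/0.50,
# leaving it under-corrected by a couple of percent.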
Regardless of how it might vary between sensors, the most important thing to understand about dark current is that it is very repeatable for any given exposure time and temperature. When you do dark calibration, you are subtracting the dark signal and adding noise to the result. Dark correction only works well when you match both temperature and exposure to your light data. That can be difficult for DSLR users.
Finally, I am not familiar with the notion that CMOS changes bias levels with exposure time, and that runs completely counter to the very definition of bias signal. Bias signal is defined as the signal that you measure with an exposure time that is vanishingly small (i.e., zero). As I've said, in most cases (when the electronics are properly set up to have near-zero offset), a bias frame mostly shows read noise. I think that it is correct to say that dark current and bias levels can vary with the mode, gain, and offset values that you select in a CMOS camera. It is almost NEVER going to work if you take data at one setting and then take calibration data using another. THAT is a big no-no with a CMOS camera! If basic CMOS characteristics such as bias levels, linearity, or gain were to vary with exposure time, CMOS would be useless as an imaging device for astronomy.
Flat data can be taken using relatively short exposures with both CCD and CMOS cameras. In general, it is possible to take flat data using short enough exposures (1-8 seconds) that flat-dark data is not needed. (Remember that dark current is proportional to exposure time.) Some flat panels aren't bright enough to take narrow-band flat data with exposures less than a minute or two, and in that case flat darks may become more important; although, I've never seen a problem with it. I also want to add that flat calibration is extremely important with CMOS sensors. Flat correction corrects for vignetting and radiometric light fall-off; however, don't forget that it also corrects for PRNU (pixel response non-uniformity), which looks like spatial noise that is linear with signal strength. PRNU is due simply to the variation in responsivity between pixels across the sensor, and because a CMOS device has a separate amp for every pixel, it can be a serious issue with some sensors. I don't know about modern sensors, but older ones corrected for this effect during readout using on-board "trim" data stored in a LUT.
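A minimal sketch (invented numbers) of how flat division removes PRNU:

import numpy as np

rng = np.random.default_rng(1)

# Invented per-pixel responsivity: each pixel's gain differs by about 1%.
prnu = 1.0 + rng.normal(0.0, 0.01, size=(100, 100))

true_sky = 5000.0            # uniform illumination, arbitrary units
light = true_sky * prnu      # the sensor imprints its pixel-to-pixel gains
flat = 30000.0 * prnu        # a flat sees the same gain pattern

corrected = light / (flat / flat.mean())
print(light.std() / light.mean())          # ~0.01: PRNU shows up as spatial "noise"
print(corrected.std() / corrected.mean())  # ~0: divided out by the flat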
John