How bad is over-sampling?

9 replies · 684 views
Willem Jan Drijfhout:
Matching pixel size to focal length sounds great, but in reality most modern CMOS cameras have 3.8 micron pixels (with few exceptions). So most of us with a long focal length of around 2 m or more (SCTs, RCs, CDKs, etc.) will have highly oversampled images. How do most people deal with that? Are you binning your data to increase the pixel scale? Or do you just carry the burden of larger-than-necessary files, with the associated demands on processing power and storage?
So far I have mostly done the latter, but recently I ran some comparisons to find some answers. In my experience the oversampled data do improve image quality over binned data (see below). When applying the Nyquist criterion, most of the focus is on stars. But for larger structures, could it be that modern tools like BXT, NXT, etc. actually benefit from the extra information in the source? For more examples and details on how the comparisons were conducted, feel free to check out this blog.
Looking forward to hearing your thoughts.
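For anyone who wants to check their own setup, the sampling arithmetic is only a couple of lines. This sketch uses the 3.8 micron / 2 m figures from the post; the 2 arcsec seeing value is an assumption for illustration:

```python
def pixel_scale_arcsec(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# 3.8 um pixels on a ~2000 mm focal length:
scale = pixel_scale_arcsec(3.8, 2000)   # ~0.39 arcsec/pixel
print(f"{scale:.2f} arcsec/px, {2.0 / scale:.1f} px across a 2 arcsec FWHM")
```

With a 2 arcsec seeing disc sampled by ~5 pixels, this setup is well past the usual 2-3 px/FWHM target, which is the oversampling the post describes.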
Ralf Dinkelmeyer:
I also do oversampling. And one reason is, as you said, that sharpening with BXT, for example, works much better. In my opinion, this also makes drizzling unnecessary.
Sean Mc:
I think it comes down to the noise in the compared images and what BXT does with it. I'd bet that the smaller scope would look virtually the same with more integration time.
Rob Lyons:
A few variables have to be considered when it comes to oversampling. One is how good the seeing conditions are. If the seeing doesn't support the image scale, then you are giving up signal by being oversampled. The next is guiding. At extremely high resolution (a large scope with small pixels), the guiding has to be as good as, or really better than, your image scale. If it is not, images are going to be on the soft side.

I recently switched from a 2600 to a 294 on my 8" SCT and immediately noticed that my images were slightly sharper. Going from 0.55"/pixel to 0.67"/pixel made a noticeable improvement. I'm assuming the signal was being spread across too many pixels because the guiding was slightly worse than the image scale required. If I could get even larger pixels, I would; somewhere in the 1"/pixel range is probably ideal in my skies. The improvement I see from this subtle change has made me happier with my images. When people say image scale is not a big deal, they are mistaken.

BlurXTerminator can benefit from lightly oversampled images, but there is a limit to what can be deconvolved. Too much oversampling, in either poor skies or with borderline guiding, will not benefit from BlurX because the data is just soft to begin with. I would rather start with a well-sampled image. It is less demanding in terms of guiding, and the files will be smaller, resulting in more efficient storage and processing. On a large scope you will still likely be oversampled, but within a range where BlurX will do a fantastic job.
Bill McLaughlin:
Ralf Dinkelmeyer:
I also do oversampling. And one reason is, as you said, that sharpening with BXT, for example, works much better.


Agreed. Oversampling is generally better, within reason, but that means any binning needs to be matched to the seeing. That is the critical point here. Oversampling at some average East Coast or Midwest site with 2.5-3.0 arcsec seeing won't help that much. Doing so at Sierra Remote (or SAROS, where I am) is another matter, where average seeing is 1.5 arcsec and often hovers around 1.0 in the early AM. Location, location, location. There's no getting around seeing.

The bottom line is that whether you bin or not should be based on the expected seeing. I like to see maybe 3-5 pixels across the present or expected seeing level (as measured by a seeing monitor). That gives BXT something to work with. That means that at 2500 mm (for example), you would bin if your seeing is fair to poor, but not if it is good to excellent.
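Bill's 3-5 px rule is easy to turn into a quick bin/no-bin check. A sketch, using 3.76 micron pixels at 2500 mm as an example (those numbers are assumptions, not from his post):

```python
def pixels_per_fwhm(seeing_arcsec: float, pixel_um: float, focal_mm: float) -> float:
    """How many pixels span the seeing disc (FWHM) at this image scale."""
    scale = 206.265 * pixel_um / focal_mm  # arcsec per pixel
    return seeing_arcsec / scale

# 3.76 um pixels at 2500 mm (~0.31 arcsec/px):
for seeing in (1.0, 1.5, 2.5):
    n = pixels_per_fwhm(seeing, 3.76, 2500)
    # bin 2x2 once you are well past the ~3-5 px/FWHM target
    verdict = "bin 2x2" if n > 5 else "no bin"
    print(f'seeing {seeing}": {n:.1f} px/FWHM -> {verdict}')
```

At 1.0-1.5 arcsec seeing this setup lands inside the 3-5 px window, while at 2.5 arcsec it blows past it, matching the "bin in fair-to-poor seeing" advice above.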
MaksPower:
Correct. I'm imaging at f/12, and seeing is often around 1.5 arcsec and can drop to 1.0 after midnight.

I don’t bin the ASI2600 and am quite happy with 3-4 pixel stars, and no need to drizzle either.

If the seeing were as poor as 2-3 arcsec, I'd be hopping into my car to find a better location.
Jared Willson:
Oversampling is not too bad. If you aren't happy with the SNR, you can always down-sample after the fact. With CMOS cameras, the only real downside to oversampling is the additional storage requirement, as the files are larger than necessary.

I'm imaging at well under 2,000mm, so I am not significantly oversampled, but the only time I find myself dropping my resolution is when I'm building a mosaic. Then I tend to bin on-camera. There are no real advantages to doing it on-camera, since I don't lower read noise and I don't gain any additional full well capacity, but I do reduce the storage and processing requirements, and that can sometimes be helpful.

I don't think I would worry about oversampling very much. In the days of CCD sensors it was more important, as it was harder to overwhelm 10-12 e- of read noise with sub-4-micron pixels, but now that read noise has dropped to around 1 e- for most sensors in high-gain mode, the advantages of larger pixels, even for longer focal length scopes, are mostly gone.

My recommendation: if you think you are oversampled, just don't worry about it unless you need faster processing or want to lighten your storage requirements. If you find you aren't getting enough data to make the image "smooth" with the smaller pixels, and you aren't seeing any additional resolution from them, you can down-sample your final result.
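Jared's read-noise point can be sanity-checked with back-of-the-envelope numbers. A sketch, assuming a shot-noise-plus-read-noise model; the 50 e- of sky signal per small pixel per sub is a made-up figure for illustration:

```python
import math

def snr(signal_e: float, read_noise_e: float) -> float:
    """Single-sub SNR with shot noise plus read noise (dark current ignored)."""
    return signal_e / math.sqrt(signal_e + read_noise_e**2)

# 50 e- of sky per small pixel in one sub (hypothetical):
print(snr(50, 10))  # CCD-era 10 e- read noise: ~4.1
print(snr(50, 1))   # modern CMOS 1 e- read noise: ~7.0
print(math.sqrt(50))  # shot-noise-limited ceiling: ~7.07
```

With 1 e- read noise the small pixel is already essentially shot-noise limited, which is why the old "big pixels to swamp read noise" argument has mostly evaporated.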
John Hayes:
The biggest penalty of oversampling is the loss of signal per pixel, which reduces SNR in the raw image. Having said that, I agree with others that BXT seems to benefit a little from oversampling. BXT doesn't violate information theory, but I do believe that starting with oversampled data simply allows a "little" bit more information to be extracted that otherwise might not be usable with more traditional deconvolution methods. BXT doesn't necessarily help with SNR, but neural-net driven noise reduction algorithms can take it from there (e.g. with tools like NoiseXTerminator).

John
Arun H:
John Hayes:
The biggest penalty of oversampling is the loss of signal, which reduces SNR in the raw image.


Isn't it true, though, that while signal is lower at the pixel level (since signal is simply irradiance * pixel area, and smaller pixels have less area), the signal is not truly lost? For example, consider two cases:
  • 3.78 micron pixel size on four thirds sensor
  • 5.6 micron pixel size on four thirds sensor

Assume identical QE and identical optics/subject. Also assume that sub exposure time is set so read noise is swamped.

Irradiance on the sensor plane is the same for the same optics and subject. The 3.78 micron pixels will most certainly have lower SNR due to the smaller area. However, is it not true that (within reasonable limits) if I resample the 3.78 micron image so the pixel sizes are now 5.6 microns, I will end up with the same SNR as if I had started at 5.6 micron pixels?

There is an obvious computational and storage penalty to using small pixels, but I'd think that, so long as you are willing to accept this, there is no other penalty to small pixels.
John Hayes:
Arun,
I think that you've got it. Signal is irradiance * pixel area * responsivity * exposure time. Assuming the same exposure time and responsivity, the signal always goes down with a smaller pixel; in the limit of a pixel with zero area, the signal is also zero. You can indeed resample the image, say by binning 2x2, to increase the signal by 4x and the noise by 2x, for an increase in SNR of 2x. This is something that you can easily verify with your own data using PI. So it doesn't matter whether the pixel is bigger or you resample the smaller pixels in post-processing; you'll get the same result. Of course, as you've said, the problem with smaller pixels comes back to bandwidth, and that becomes significant with a larger sensor. That's why I'd personally love to have an IMX461-sized sensor with ~5 micron pixels. I'd get better sampling AND smaller files at the same time.
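The 4x signal / 2x noise / 2x SNR claim can also be verified on synthetic data rather than real subs. A sketch, assuming a shot-noise-limited flat field and 2x2 software binning by summation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic flat field: 100 e- mean per "small" pixel, Poisson (shot) noise only,
# i.e. read noise assumed swamped as in the discussion above.
img = rng.poisson(100.0, size=(1000, 1000)).astype(float)
snr_small = img.mean() / img.std()

# Software bin 2x2: sum each 2x2 block -> 4x signal, 2x noise, 2x SNR.
binned = img.reshape(500, 2, 500, 2).sum(axis=(1, 3))
snr_binned = binned.mean() / binned.std()

print(snr_small, snr_binned, snr_binned / snr_small)  # ratio ~2
```

The measured ratio comes out at ~2, matching the argument that small pixels plus software binning land you in the same place as big pixels, within read-noise limits.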

John