Matching pixel size to focal length sounds great, but in reality most modern CMOS cameras have pixels around 3.8 µm (with few exceptions). So most of us with a long focal length of around 2 m or more (SCTs, RCs, CDKs, etc.) will have highly over-sampled images. How do most people deal with that? Are you binning your data to increase pixel scale? Or do you just carry the burden of larger-than-necessary files, with the associated demands on processing power and storage?
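To make the over-sampling point concrete, here is a small sketch that computes image scale from pixel size and focal length and compares it to the seeing. The numbers (3.76 µm pixels, 2000 mm focal length, 2" seeing, and 2–3 px per FWHM as "adequate") are illustrative assumptions, not from any particular setup:

```python
# Sketch: estimate image scale and sampling for an assumed setup.

def pixel_scale_arcsec(pixel_um: float, focal_mm: float) -> float:
    """Arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

def pixels_per_fwhm(seeing_arcsec: float, scale: float) -> float:
    """How many pixels span the seeing FWHM; ~2-3 is the usual rule of thumb."""
    return seeing_arcsec / scale

scale = pixel_scale_arcsec(3.76, 2000)   # ~0.39 arcsec/px
ratio = pixels_per_fwhm(2.0, scale)      # ~5.2 px per FWHM -> over-sampled
print(f"{scale:.2f} arcsec/px, {ratio:.1f} px per FWHM")
```

With these assumed numbers the sampling comes out well above the 2–3 px rule of thumb, which is exactly the over-sampling situation described above.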
So far I have mostly just done the latter, but recently I've done some comparisons to find some answers. In my experience the over-sampled data do improve image quality over binned data (see below). When applying Nyquist theory, most focus is on stars. But for larger structures could it be that modern tools like BXT, NXT etc actually benefit from the extra information in the source? For more examples and details on how the comparisons were conducted, feel free to check out this blog.
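For anyone curious what software binning actually does to the data, here is a minimal NumPy sketch of 2×2 average binning (assuming even frame dimensions; real tools also offer sum or median binning, and hardware binning on CCDs behaves differently):

```python
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Average-bin a 2D frame by 2x2, trimming odd edges if present."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

frame = np.arange(16, dtype=float).reshape(4, 4)
print(bin2x2(frame))  # each output value is the mean of one 2x2 block
```

Each output pixel covers twice the angular scale, so the example setup above would drop from roughly 5 px per FWHM to a more conventional 2.6.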
Looking forward to hearing your thoughts.
