Hi Björn
I tried doing that some years ago. I had the same idea you are talking about. What if we combined data from all the astrophotographers into one?
The thing is, as you already know, that an image is just a set of data. Each pixel contains information (signal) about the object being imaged, and on top of that it also contains noise of all sorts. From those two you can calculate a signal-to-noise ratio (SNR). This is the basic principle of astrophotography: we image an extremely faint object like a galaxy against an even darker background, so whenever you gather signal, you also gather noise.
The longer you expose, the better the SNR. However, it doesn't scale linearly. The SNR grows roughly with the square root of the total exposure time: double the exposure and the SNR improves by the square root of two, quadruple it and you gain a factor of 2, and so forth. 100 times longer exposure "only" results in 10 times better SNR, so there is a diminishing return on the time invested.
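Just to make the scaling concrete, here is a tiny Python sketch (the numbers are only illustrative, and it assumes the noise between exposures is uncorrelated):

    import math

    # SNR gain from exposing (or stacking) N times longer, assuming
    # uncorrelated noise so that SNR grows as sqrt(N).
    for n in (1, 2, 4, 100):
        print(f"{n:>3} x exposure -> {math.sqrt(n):.1f} x SNR")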
The data that any telescope/camera combination outputs depend on the setup, and also on the operator (photographer). Some are better than others, but when I started having this idea, there was one thing that kept me going: as long as you can see the object of interest in the image (the faint fuzzy), the image contains valuable information that can be added to similar information from other images. That was my central philosophy.
I started out arranging a collaboration of roughly 10 Danish amateur astronomers through the forum of the Danish Astronomical Society. We collected data on the relatively large (fewer problems with resolution) planetary nebula called Jones Emberson 1. The result can be seen here:
https://www.astrobin.com/136820/?nc=user
This really motivated me, but gathering the data and keeping track of it all was, let's call it, a challenge.
Next, I went all in and collected as many images of the Andromeda Galaxy as I could: several thousand JPEG images from all over the internet. I started out doing this with homemade scripts and code, but quickly turned towards PixInsight. That is hands down the best software for combining large amounts of data (sorry Maxim et al). Using PixInsight I started experimenting, and it literally took months and months. Stubbornness pays in the long run.
First thing to do is to choose one master image with a good, wide framing. Then you align all the other images to the master. Use distortion correction! Also, the master should contain as little distortion as possible. You'll learn that along the way.
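If you want to experiment with the registration step outside PixInsight, here is a minimal sketch in Python. It assumes the astroalign package and frames already loaded as 2D numpy arrays, and it only fits a simple star-pattern transform, not the full distortion correction I mentioned above:

    import numpy as np
    import astroalign as aa

    def align_to_master(master: np.ndarray, frames: list[np.ndarray]) -> list[np.ndarray]:
        """Register every frame onto the master's pixel grid."""
        aligned = []
        for frame in frames:
            # astroalign matches star patterns between the two images and
            # warps 'frame' onto 'master'.
            registered, _footprint = aa.register(frame, master)
            aligned.append(registered)
        return aligned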
Once the images are aligned (registered), you stack (integrate) them. A basic average stack is a good start and will get you a long way. Then you can experiment with all sorts of pixel rejection schemes. For normalization I typically use scale and offset, but you can do without.
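As a rough illustration of the integration step, a plain-numpy average with a simple sigma-clipping rejection could look like this (the 3-sigma threshold is just an example value, and any normalization would happen before this):

    import numpy as np

    def sigma_clipped_average(frames: np.ndarray, sigma: float = 3.0) -> np.ndarray:
        """Average a stack of aligned frames with shape (N, height, width).

        Pixels further than `sigma` standard deviations from the per-pixel
        mean (satellite trails, cosmic rays, hot pixels) are rejected
        before the final average.
        """
        mean = frames.mean(axis=0)
        std = frames.std(axis=0)
        rejected = np.abs(frames - mean) > sigma * std
        clipped = np.ma.masked_array(frames, mask=rejected)
        # Fall back to the plain mean wherever everything was rejected.
        return clipped.mean(axis=0).filled(fill_value=mean)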
The final stack will blow your mind. Once you start stretching the stack you'll realize how high the SNR can get. This is a later version of M31 that I made, but it does show the potential (300+ images stacked):
https://www.astrobin.com/120204/?nc=user
The method makes it possible to create very deep images of the night sky. I've stopped publishing my images, but I still make one once in a while, when I want to see something and can't find a proper image. Recently I've been looking for galaxy clusters, and I've "seen" pretty far out using this method.
The method was dubbed "Crowd Imaging" (CI) by one of the members of the Jones Emberson team. The name also hints at one of the most time-consuming parts of it: keeping track of all the participants. I decided to avoid copyright issues by only using Creative Commons licensed images, and since I knew how much work every amateur astronomer put into each separate image, I kept a spreadsheet with the names etc. of everyone involved. With several hundred people, that is time consuming. You might argue that this is covered by "Fair Use", but for the experiments I did and published, I think it's fair to give credit to everyone.
I highly recommend trying it. The method definitely works, and it has both strengths and limitations. You can get very high SNR, and if you start assigning different weights to each image based on SNR, FWHM etc., you can make different stacks that you can combine. An example could be a high-SNR stack for the dark areas of the field and a sharper one for the galaxy core, etc.
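If you want to play with the weighting idea, the simplest version is a weighted average with one weight per frame, derived from whatever quality metric you prefer. A small sketch with made-up numbers (the weights and frames here are purely hypothetical):

    import numpy as np

    def weighted_stack(frames: np.ndarray, weights: np.ndarray) -> np.ndarray:
        """Weighted average of aligned frames (N, height, width), one weight per frame."""
        return np.average(frames, axis=0, weights=weights)

    # Example: three aligned frames weighted by an estimated SNR.
    frames = np.random.default_rng(0).normal(size=(3, 100, 100))
    snr_weights = np.array([12.0, 8.5, 15.2])
    deep_stack = weighted_stack(frames, snr_weights)
    # A second stack could instead weight by sharpness (e.g. 1/FWHM),
    # and the two results blended afterwards.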
Good luck, and feel free to ask. I'm pretty busy with other work right now, but I'll happily try to help. Most of all, this is about diving into huge amounts of information (literally physical information, in bits) and trying to optimize the method. That is the challenge, and that is where the real fun is found.
