Proper Methods for Combining Data from Different Cameras

Brian Puhl
Hey folks, looking for a little guidance here from some more experienced imagers…

I am currently setting up my second mono rig, which will be ALMOST identical to the first, with one exception.

Each rig will be an Esprit 100 with an Antlia 3nm filter set on an EQ6-R Pro. The ONLY difference is the camera: one is a QHY268M, the other will be an OGMA 26MC (mono). Both cameras use the IMX571 mono sensor. The goal is to combine data from the same filters and go much deeper on targets in half the time. Currently I average around 20-40 hours per target, but I'm ready to double that.

If I had to guess, I'd imagine I need to match up the gain settings on the two cameras; since one is QHY and the other is ToupTek, the numbers are very different. I imagine I can achieve this through sensor analysis?
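Sensor-analysis tools essentially measure system gain (e-/ADU) for you, but as a sanity check it can be done by hand with the classic photon-transfer (mean/variance) method on a pair of flats and a pair of biases. A minimal sketch, assuming numpy arrays for the frames; the function name and the synthetic demo data are illustrative, not from any particular tool:

```python
import numpy as np

def system_gain(flat1, flat2, bias1, bias2):
    """Estimate system gain in e-/ADU via the photon-transfer method.

    Differencing a pair of flats (and biases) cancels fixed-pattern
    noise, leaving only temporal (shot + read) noise.
    """
    flat1, flat2 = flat1.astype(float), flat2.astype(float)
    bias1, bias2 = bias1.astype(float), bias2.astype(float)
    # Mean illumination signal above the bias level, in ADU
    signal = 0.5 * (flat1.mean() + flat2.mean() - bias1.mean() - bias2.mean())
    # Variance of a difference frame is twice the single-frame temporal
    # variance, so halve it; subtract the read-noise term from the biases
    shot_var = 0.5 * ((flat1 - flat2).var() - (bias1 - bias2).var())
    return signal / shot_var  # e-/ADU

# Synthetic demo: a fake sensor with a known gain of 0.8 e-/ADU
rng = np.random.default_rng(42)
true_gain = 0.8          # e-/ADU
electrons = 8000         # mean photo-electrons per pixel in the flats
shape = (400, 400)
flat = lambda: rng.poisson(electrons, shape) / true_gain + 500 + rng.normal(0, 2, shape)
bias = lambda: 500 + rng.normal(0, 2, shape)
g = system_gain(flat(), flat(), bias(), bias())  # recovers ~true_gain
```

Running this on real flat/bias pairs from each camera at candidate gain settings would let you pick the settings where the two e-/ADU figures line up.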

Now, as for the stacking workflow: does it make sense to simply calibrate separately and then stack the images as normal? Will WBPP allow this, or should I stick with manual ImageIntegration? The two cameras have slightly different resolutions, so I know I will need to manually select a reference frame. Will I need LN or something of the sort? My goal here is to stack data of the SAME filter from DIFFERENT cameras. That's where this becomes more technical: I'm worried that the camera that MAY be slightly weaker could get rejected because the gains didn't match up. I've stacked mixed data sets before, but never tried something like this…


Thoughts? Am I on the right track? I hope my goal is clear.
Daniele Borsari
I think that using a grouping keyword in WBPP will let you calibrate each camera's data set separately; it will then proceed to stack them as normal.
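For example (hypothetical folder names; the keyword itself is arbitrary), a `CAMERA` grouping keyword embedded in the file paths might look like this:

```
lights/
  CAMERA_QHY268M/Ha/*.fits
  CAMERA_OGMA26/Ha/*.fits
darks/
  CAMERA_QHY268M/*.fits
  CAMERA_OGMA26/*.fits
```

As I understand WBPP's grouping behavior, the keyword is matched against the file path, so each camera's lights pick up their own darks and flats during calibration; if the keyword is left active only for the pre-processing phase, the two calibrated Ha sets should then fall into a single integration group.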
Regarding the gain, just try to get the most similar settings (maybe by looking at mean ADU and/or sensor analysis and charts from the manufacturers).

WBPP should, in theory, weight the images and apply normalization (if enabled) to bring images shot under different conditions (whether it's different cameras, sky quality, optics…) onto a common reference.
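Conceptually, what normalization does here is fit one data set's signal scale against the other's and rescale it, so a slightly "weaker" camera isn't penalized at rejection time. A toy sketch of that idea (hypothetical helper, not WBPP's actual algorithm, which is more robust than a plain least-squares fit):

```python
import numpy as np

def match_scale(reference, target):
    """Fit target ~= a * reference + b, then return the target
    rescaled onto the reference's signal scale: (target - b) / a."""
    a, b = np.polyfit(reference.ravel(), target.ravel(), 1)
    return (target - b) / a

# Demo: a second "camera" with different gain (1.7x) and offset (+0.05)
rng = np.random.default_rng(1)
ref = rng.random((256, 256))
tgt = 1.7 * ref + 0.05 + rng.normal(0, 0.002, ref.shape)
matched = match_scale(ref, tgt)  # now on the same scale as ref
```

The point is just that scale/offset differences between the cameras are correctable after the fact, which is why a short test integration is a cheap way to see whether rejection is unfairly eating one camera's frames.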

The best thing IMO is to test with a short integration and see how things turn out.

Daniele