Combining dual telescope setup data for narrowband imaging efficiency (Mono Ha/OIII + OSC L-Extreme)

Pranalabs

Good morning,

For some time now, I have been experimenting with how to get the most out of my dual setup.

Basically, I have the following instruments mounted in parallel on my mount, which capture the same subject synchronously:

1) A WO FLT 120 with an IMX571M monochrome camera and a complete set of LRGB and SHO filters (6.5 nm)

2) A WO GT81 with an IMX571C color camera with UV/IR filter and an L-Extreme (7 nm Ha and OIII)

Beyond the easily manageable difference in FOV and resolution, I would like to use the two instruments to double (so to speak) the images collected per night, specifically those in the narrow band (Ha and OIII for now).

If this is feasible, I could then purchase an L-SinergY (OIII+SII) for the GT81 and complete all the narrow bands on the color camera.

Taking an example for a subject in HOO, for a hypothetical session (say 80 images of 5 minutes each) I would have:

1) FLT120: 40 x Ha and 40 x OIII in mono

2) GT81: 80 x L-Extreme (Ha+OIII)

My question is: what is the best way to integrate the images (I use WBPP) from the two setups in order to obtain two masterlights in Ha and OIII respectively?


Best regards

GT

Tommy Mastro

Unfortunately, I cannot answer your question. However, I once started a thread about combining OSC data and mono data, and in short, there was almost complete consensus not to do it. You're only adding inferior data to superior data (the dumbed-down response).

Let me see if I can find that thread for you. There were a lot of good ideas on how to do it "if I insisted on doing it". In other words: here are the best ways to combine the data, but you still shouldn't. I never really understood why.

I’ll reply back again shortly.

Tommy Mastro

Tommy Mastro · Oct 11, 2025, 03:47 PM

Let me see if I can find that thread for you. […]

Found it . . . Stacking LRGB (Or SHO) with OSC subs - is it possible & worth the effort? - Experienced Deep Sky Imaging - Cloudy Nights

I’m going to follow this thread. Hopefully smarter people than me chime in and make it make sense for me. :)

David Foust

I don’t have experience blending mono and OSC Ha and OIII, but I do have experience creating HOO from OSC dual band, so perhaps I can offer a suggested workflow.

  • Of course, first stack your mono Ha and OIII data to obtain those master lights.

  • Next, stack your OSC data as usual, to obtain a master light for that.

  • Next, you need to register your OSC image to your mono images, so they are aligned, and the blend process works seamlessly later. If your OSC FOV is wider than your mono FOV, I would suggest registering the Ha and OIII lights to the OSC light, as they would presumably fit within the FOV to a greater degree.

  • Crop all three images to an identical FOV. You will need to re-do an astrometric solution after the crop, and you can use ImageSolver for that.

  • Next, you will need to do your usual steps for each image: AutomaticDBE, GraXpert, or MSGC for background extraction and gradient correction (SPCC color correction on your OSC image) and background neutralization, and then BlurXTerminator and StarXTerminator (or StarNet) to wind up with a star mask and a starless image. Then you can stretch each with your preferred method (I'm partial to SetiAstro's Statistical Stretch and Star Stretch) and use NoiseXTerminator (or GraXpert noise reduction) to taste.

  • The OSC image will then need to be extracted into the mono color channels. There's a handful of ways to do this out there, but one of the newest and most convenient is to use the DBExtract script, which will do all the steps for you. Basically, it will take the color image and break it out into RGB, where R would be the Ha data, and then it will create a synthetic G and combine it with B to give you an OIII image. You should now have your Ha and OIII lights from mono and Ha and OIII lights from OSC; let's assume they're called "HaMono" and "HaOSC", and likewise for the two OIII images.

  • From here, I'm sure others' processes will differ, but I would use PixelMath to blend the HaMono and HaOSC images, and then OIIIMono and OIIIOSC, to create a single blended Ha and a single blended OIII in greyscale. Make sure "Use a single RGB/K expression" is checked, and under the destination section, select "Create New Image" and give it a unique Image ID, like "BlendedHa" (and likewise for OIII). You can use a simple formula like .5*HaMono+.5*HaOSC and likewise for OIII; you can adjust the factors for the mono and OSC contributions to blend to taste, but make sure they add up to 1 (like .6+.4 or .7+.3, etc.). The result should be a single blended Ha image called "BlendedHa" and a single blended OIII image called "BlendedOIII".

  • I usually end up doing a Foraxx palette, so for the next step you will need to create an intermediate HO image from the BlendedHa and BlendedOIII images. We will use this as part of creating our final color image. I use PixelMath for this, though others may have another method. Open PixelMath and combine the BlendedHa and BlendedOIII images with the formula (BlendedHa*BlendedOIII)^~(BlendedHa*BlendedOIII). Again, make sure "Use a single RGB/K expression" is checked, and under the destination section, select "Create New Image" and give it a unique Image ID, something simple like "HO". This gives you a mono HO image.

  • Next, I use PixelMath again to create the Foraxx color image. In PixelMath, uncheck the box that says "Use a single RGB/K expression" so you can input each color channel individually and create a final RGB image. BlendedHa goes in the R channel, BlendedOIII goes in the B channel, and the formula HO*BlendedHa+~HO*BlendedOIII goes in the G channel.

  • You should now have a color image in Foraxx Palette, and you can do the remaining bits to taste, such as saturation, contrast, color adjustment etc. (For instance, I like to shift the colors to be more reds and blues rather than oranges, golds, and silver-blues etc.)
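The PixelMath arithmetic in the steps above can be sketched numerically. This is a toy illustration with random arrays standing in for the stretched, starless masters (the image names follow the steps above; in PixelMath, `~x` means `1 - x`):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the stretched, starless masters (float arrays in [0, 1]).
HaMono, HaOSC = rng.random((2, 64, 64))
OIIIMono, OIIIOSC = rng.random((2, 64, 64))

# Weighted blend: the weights must sum to 1 (e.g. 0.6/0.4 to favor the mono data).
BlendedHa = 0.6 * HaMono + 0.4 * HaOSC
BlendedOIII = 0.6 * OIIIMono + 0.4 * OIIIOSC

# Intermediate HO image: (Ha*OIII)^~(Ha*OIII), where ~x is (1 - x).
p = BlendedHa * BlendedOIII
HO = p ** (1 - p)

# Foraxx-style RGB combination: Ha -> R, HO-weighted mix -> G, OIII -> B.
R = BlendedHa
G = HO * BlendedHa + (1 - HO) * BlendedOIII
B = BlendedOIII
foraxx = np.stack([R, G, B], axis=-1)
```

Since HO stays in [0, 1], the G channel is always a per-pixel mix lying between the blended Ha and OIII values, which is what gives the palette its characteristic hues.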

I will caveat all of this with the fact that I haven't done the mono+OSC elements myself, so I can't guarantee this will give the result you're after. You could also try some alternative approaches, like processing both the mono and OSC data independently into two Foraxx images and blending at the end, or some other combination of steps.

If you’d like a video tutorial to follow, I adapted Paulyman Astro’s process to better match my workflow, but it’s a great video to watch and follow along while you develop your workflow for it.

As Tommy mentioned, most people recommend not doing it, but I think experimentation is always a good thing. It’s better to try and decide you don’t like it than not try at all.

Good luck and lmk if you have questions as I may have skipped a step while typing this out 😅

Tommy Mastro

Pranalabs · Oct 11, 2025, 01:44 PM

I would like to use the two instruments to double (so to speak) the images collected per night, specifically those in the narrow band (Ha and OIII for now).

I think it all comes down to what you (the OP) are looking for with your above sentence. Is your expectation that you will be able to collect all the band-passes/color lines you need every session? Or do you also expect this "double data" to double your SNR (the benefit of additional subs)?

That is what I was trying to accomplish, but I was told that to reap the benefits of additional SNR, you would need to stack them all together in one stack, not in two separate stacks. Once you stack them separately and then combine them in post-processing, you are simply blending two images (there is no increase in SNR).

Your image will have all the band-passes and color you want, but you effectively lost half your potential SNR (which you would have had if using two OSC rigs or two Mono rigs).
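For what it's worth, the noise arithmetic behind this can be checked with a toy simulation. The sketch below uses idealized equal-quality subs with independent Gaussian noise (real mono and OSC subs differ in image scale, bandpass, and noise, which is where the real-world losses come from); under these assumptions a 50/50 blend of two separate N-frame masters is arithmetically identical to one joint 2N-frame stack:

```python
import numpy as np

rng = np.random.default_rng(1)
N, pixels, signal, sigma = 50, 20_000, 1.0, 0.5

# Two sets of N subs each (say, one per rig): same signal,
# independent Gaussian noise of equal strength.
frames_a = signal + sigma * rng.standard_normal((N, pixels))
frames_b = signal + sigma * rng.standard_normal((N, pixels))

# Option 1: one joint stack of all 2N subs.
joint = np.concatenate([frames_a, frames_b]).mean(axis=0)

# Option 2: stack each set separately, then blend the two masters 50/50.
blend = 0.5 * frames_a.mean(axis=0) + 0.5 * frames_b.mean(axis=0)

# Both reach noise ~ sigma / sqrt(2N); the penalty only appears when the
# two stacks differ in quality and the blend weights don't reflect that.
print(joint.std(), blend.std(), sigma / np.sqrt(2 * N))
```

So in this idealized case the SNR loss is not from blending per se; it comes from blending stacks of unequal quality with weights that don't match that quality.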

Now, most importantly, please understand I have no idea what I’m talking about. I’m just trying to understand and regurgitate information I was given a while back. Confirm everything I say with someone who actually knows about these things.

Tommy

andrea tasselli

Pranalabs:

Good morning,

For some time now, I have been experimenting with how to get the most out of my dual setup. […] My question is: what is the best way to integrate the images (I use WBPP) from the two setups in order to obtain two masterlights in Ha and OIII respectively?

I don't use WBPP, but the process can be done and I have done it in the past with satisfactory results. Just process the two data streams in parallel, then combine the Ha-Mono with the Ha-OSC (potentially continuum-subtracted) into the final combined Ha (use ImageIntegration for that, scaling the addenda using SNR weight). Repeat the same with the OIII and/or SII (again, possibly continuum-subtracted) and the respective mono channels. Blend everything together with Foraxx. Done.
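The SNR-weighted scaling mentioned here can be illustrated with a toy example. For independent inputs with a common signal, weighting each master by its inverse variance (proportional to SNR²) minimizes the noise of the combination; this is a generic sketch with made-up noise levels, not PixInsight's actual ImageIntegration code:

```python
import numpy as np

rng = np.random.default_rng(2)
signal, n = 1.0, 200_000
sigma_mono, sigma_osc = 0.05, 0.15   # made-up noise levels; mono master is cleaner

mono = signal + sigma_mono * rng.standard_normal(n)
osc = signal + sigma_osc * rng.standard_normal(n)

# Inverse-variance weights (proportional to SNR^2), normalized to sum to 1.
w = np.array([1 / sigma_mono**2, 1 / sigma_osc**2])
w /= w.sum()

weighted = w[0] * mono + w[1] * osc   # SNR-weighted combine
equal = 0.5 * (mono + osc)            # naive 50/50 blend

print(weighted.std(), equal.std())    # the weighted combine is less noisy
```

With these numbers the naive blend is dominated by the noisier OSC master, while the SNR-weighted combine stays close to the clean mono master's noise floor.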
Rick Krejci

I’ve done much experimenting with this (130/140mm refractor with mono IMX455 with LRGBSHO and 90mm refractor with IMX571 both Mono and OSC).

My conclusions are:

  • For a luminance type of signal which is capturing detail, the smaller refractor subs when scaled up to match the longer refractor’s resolution tend to degrade S/N overall. Not a good use case. Same with any case where both scopes are capturing the same type of data (SHO/SHO, RGB/RGB L/L)…the data from the smaller scope will not be adding much, if any, value.

  • The best use case is using the smaller refractor to capture only RGB color data for broadband images, which is much less sensitive to resolution and can be de-noised without much impact to the final product. Example here: M51 and IFN - Dual Dissimilar Camera/Telescope experiment https://app.astrobin.com/i/4a8kl2/

  • For narrowband, I tried to expand this same theory (large scope for mono luminance, small scope for color). I tried to do a mono narrowband luminance using a quad-band Radian Triad Ultra filter, which passes Ha, Oiii, Sii and Hb with about a 4.5nm pass, and did HOO with the smaller scope and my normal luminance-layering processing in PI. While the result looked somewhat pleasing, the major issue is the large disparity of signal strength between Ha and the other bands, so the pseudo mono narrowband luminance is dominated by Ha with no ability to separate the bands in the luminance. So the Oiii areas didn't show up well when using the Radian filter as mono luminance. The resulting images are shown:

    1. Cat91 HOO (1 hour each channel)

    2. #1 with the 3 hour 140 multi band luminance

    3. 140 HOO (1 hour each channel)

    4. #3 with the 3 hour 140 multi band luminance.

I would probably have gotten better results above if I had just used a dual-pass filter rather than the quad band, since Hb and, to an extent, Sii contributed to the Ha domination. I have an L-Ultimate filter (3nm Ha and Oiii only) that I'll try in the future, but it will only be of any use for certain strong Oiii targets.

  • For SHO, with both scopes capturing the same data, per my first bullet point, the smaller scope will likely not contribute and will likely be detrimental in the final stack.

For my setup, the FOV of the larger scope with FF mono luminance is just a bit smaller than the smaller scope with APSC color, which works well for combining in PI. I basically just feed the subs for both scopes in, manually set a reference frame to one from the narrower-FOV FF stack and let 'er rip. It up-scales the APSC to match the FF resolution, aligns them pretty much seamlessly, and integrates them separately since they are different filters/data. Then I just proceed with normal processing. The only issue you need to watch is if the optics have different distortion characteristics (barrel or pincushion). If so, you need to do a StarAlignment with splines so the stars match throughout the images.

Pranalabs

Tommy Mastro · Oct 11, 2025, 04:57 PM

I think it all comes down to what you (the OP) are looking for with your above sentence. Is your expectation that you will be able to collect all the band-passes/color lines you need every session? Or do you also expect this "double data" to double your SNR (the benefit of additional subs)? […]

Hi,

you've hit the nail on the head!

I would like to achieve parallel acquisition to “double the data”.

From the tests I have carried out, there is no SNR gain by combining the post-integrations.

I already suspected this!

The problem is pre-processing the OSC images so as to split the Ha and OIII after debayering (into two separate monochrome files), so that they can then be integrated in a subsequent WBPP batch together with the mono ones.

In practice, I would have to batch-run a script such as DBXtract on a list of color files, obtaining two mono files as output, one for Ha and the other for OIII.

I tried using an ImageContainer passed to DBXtract, but it only writes the original image as output, clogging PixInsight with a thousand windows.
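The channel split itself is simple enough to sketch outside DBXtract. A minimal numpy illustration of the per-image operation (the file I/O is left out to keep it dependency-free, and the exact OIII recipe DBXtract uses is an assumption here; it may weight G and B differently):

```python
import numpy as np

def split_dualband(rgb: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a debayered dual-band frame (H x W x 3) into Ha and OIII.

    With an L-Extreme, Ha lands almost entirely in R, while OIII is
    sampled by both G and B; averaging G and B is one simple recipe
    (DBXtract's actual weighting may differ).
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    ha = r
    oiii = 0.5 * (g + b)
    return ha, oiii

# Batch usage over a list of already-loaded frames (loading FITS files,
# e.g. with astropy.io.fits, would replace the random stand-ins below):
frames = [np.random.default_rng(i).random((8, 8, 3)) for i in range(3)]
ha_frames, oiii_frames = zip(*(split_dualband(f) for f in frames))
```

In PixInsight itself the equivalent loop would be a small PJSR script over the debayered file list, writing each pair of mono channels to disk before the second WBPP run.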

Best Regards

Gianluca

Pranalabs

Rick Krejci · Oct 11, 2025, 05:04 PM

  • For a luminance type of signal which is capturing detail, the smaller refractor subs when scaled up to match the longer refractor's resolution tend to degrade S/N overall. […]

  • The best use case is using the smaller refractor to capture only RGB color data for broadband images […]

Hi,

until now, I too have been using the long refractor for Luminance/SHO (1.24"/pix) and the small one for RGB (2.2"/pix) with the color camera.

This approach pays off for broadband subjects (e.g., galaxies, LDN, LBN, reflection nebulae).

However, the ideal approach would be to use the two instruments to capture the same narrow band and maximize the number of exposures per night, increasing the SNR (for emission nebulae etc.).

I live under a Bortle 5/6 sky and need to capture many exposures...

Best regards

Gianluca

Rick Krejci

Pranalabs · Oct 11, 2025, 07:32 PM

However, the best approach would be to use two instruments to capture the same "narrow band" and maximize the number of exposures per night, increasing the SNR (for emission nebulas etc). […]

I live under a Bortle 5/6 sky and need to capture many exposures...

I found when throwing all of the narrowband images into a WBPP stack for each band with both cameras combined, I did not see a benefit in S/N. The images from the shorter scope have to be up-sampled to stack with the longer one since the pixel sizes are the same, and I'm guessing WBPP weights them very little in comparison to the longer-FL ones.

In my case above with the Crescent Nebula with the Cat91 and Askar 140, the FWHMs were actually pretty similar since seeing wasn’t great and the Cat91 is a great scope with very tight stars. And the Cat91 was faster as well at f4.9. But if you compare #1 & #3, you’ll see how much less noise the image from the larger scope has despite the same exposure time and faster f ratio. Aperture wins is the lesson here.

I think with targets like the Veil area and the Dumbbell where Oiii is pretty strong, doing an HO dual band mono luminance on the larger scope (like with the L-Ultimate) and HO for color with the smaller one (either with separate filters on a mono camera or another L-Ultimate on an OSC) could be beneficial. But it will be a compromise, since you'll still be unable to balance the mono narrowband luminance between Ha and Oiii, so there will be a mismatch in luminance and chrominance. For typical SHO targets where Ha is dominant, I just use the scope with the FF camera that gives me the framing I want and don't bother with the other scope/camera…it's just along for the ride.

I encourage you to experiment on your own, but I will no longer be doing parallel captures with the dual scopes with matching dual chrominance imaging (i.e. both shooting SHO or RGB). Not worth the disk space or processing time.

Stjepan Prugovečki

I am doing this all the time (sometimes even 3 setups). I have scopes on separate mounts, though. The process is nicely described by @David Foust, nothing much to add, except that you can also combine RGB from your OSC with Lum from your mono (although I personally do not take Lum often; I rather spend all the time on RGB). I am not sure the investment in a dual-band filter with the SII line is fully justified, but if so I would suggest one with Hb/SII lines (the Antlia ALP-T, for example).

You can have a look at my gallery to get an impression of what you can do with average equipment and average skills.

Cs

Pranalabs

Rick Krejci · Oct 11, 2025, 10:24 PM

I encourage you to experiment on your own, but I will no longer be doing parallel captures with the dual scopes with matching dual chrominance imaging (i.e. both shooting SHO or RGB). Not worth the disk space or processing time.

Hi Rick,

you have highlighted a number of points that have given me a lot to think about. I will definitely do some more testing. At present, I still have a number of ideas to check out, and I will be happy to share the results with you.

Thank you

Gianluca

Pranalabs

Stjepan Prugovečki · Oct 11, 2025, 11:22 PM

I am doing this all the time (sometimes even 3 setups). I have scopes on separate mounts though. Process is nicely described by @David Foust , nothing much to add , except that you can also combine RGB from your OSC with Lum from your mono (although I personally do not take Lum often, I rather spend all time on RGB)

Hi Stjepan,

as @Tommy Mastro and @Rick Krejci pointed out, the approach described by @David Foust (and which I have also used several times) works well when, for example, we collect RGB with the color camera and the shorter focal-length telescope, and luminance and narrowband with the longer focal-length telescope (to simplify). The situation changes if we want to maximize the SNR on the same band by collecting from both instruments.

In this specific case, if we perform two separate stackings (one for mono and the other for OSC) and then combine them, we do not obtain an increase in SNR. On the contrary, from the tests I have carried out, there is a degradation of SNR!

Therefore, since I am stubborn, I would like to approach it with the following method:

  • Let's assume we have a set of 100 images taken with the OSC camera through the L-Extreme and a set of 100 OIII images taken with the mono camera.

  • A first WBPP run is performed on the OSC images, up to debayering, but without normalizing, registering, or integrating them.

  • The files obtained must be fed into DBXtract to obtain two files from each image, one for the Ha channel and the other for the OIII channel. Here I got stuck, because I couldn't find a way to do this in PixInsight as a batch process over 100 images.

  • Once I have the separate Ha and OIII channels from the OSC camera, I start another WBPP run, combining them with the frames taken with the monochrome camera, and stack everything together.

In this case, if my assumptions are correct, we should see an improvement in the SNR compared to the separate integration process… notwithstanding the fact that, as @Rick Krejci says, the difference in resolution could compromise the whole thing… and a thousand other things could happen! But the beauty also lies in trying new things.

Now the problem is how to execute the third bullet point.

Can any Pixinsight “GURU” help me?

Best regards

Gianluca

Pranalabs

Rick Krejci · Oct 11, 2025, 10:24 PM

I found when throwing all of the narrow band images on a WBPP stack for each band with both cameras combined, I did not see a benefit in S/N. The images from the shorter scope have to be up-sampled to stack with the longer one since the pixel sizes are the same, and I’m guessing WBPP weighs them very little in comparison to the longer FL ones when determining their weights.

Hi Rick,

Your reasoning is correct. Perhaps I could do the opposite, i.e., downsample the longer one... Maybe I'm talking nonsense.

Gianluca

Pranalabs

Hi,

As a first test, I prepared a sample of 11 images of 300 seconds in OIII_OSC (2.2"/pix) and the same number in OIII_MONO (1.24"/pix). I registered them in WBPP and, unexpectedly, the OIII_OSC images were all "weighted" as better! I would have expected the opposite...

[Screenshots: Immagine 2025-10-12 112114.jpg, Immagine 2025-10-12 114416.jpg] On the right, a single OIII_OSC frame (2.2"/pix); on the left, the OIII_MONO (1.24"/pix).

Stjepan Prugovečki

Pranalabs · Oct 12, 2025 at 08:27 AM

Therefore, since I am stubborn, I would like to approach it with the following method: […] Now the problem is how to execute the third bullet point.

Can any Pixinsight "GURU" help me?

Hi Gianluca,

My 2 setups are very close in FOV, and both cameras are the same (one OSC, another mono). Having a big difference in image scale (arcsec/pixel) could indeed be an issue (also depending on which image scale you use as a reference). I almost always image over multiple nights and with multiple filters. I stack per filter, per night, and per scope. For dual band I integrate with Bayer drizzle.

I extract Ha and OIII from the dual-band stack for each night, then I stack the stacks (if the difference in SNR among nights is dramatic, you could put a weighting on each stack). Do you get the SNR you would get by stacking it all together? Sure, yes; the difference might not be as big as one would expect. I am not sure you would benefit from splitting each dual-band frame into separate channels and then stacking them together with the mono frames, as you need to debayer before extracting, and the extraction routine also modifies the data.

Rick Krejci

Pranalabs · Oct 12, 2025, 09:46 AM

As a first test, I prepared a sample of 11 images of 300 seconds in OIII_OSC (2.2"/pix) and the same number in OIII_MONO (1.24"/pix). I registered them in WBPP and, unexpectedly, the OIII_OSC images were all "weighted" as better! I would have expected the opposite...

Yea, I would have also intuitively expected the opposite. In WBPP, measurements are made before registration/scaling, so I think it’s just not optimized for combining the same type of data from such different setups.
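One plausible contributor to the surprising weights: with the same camera behind a shorter focal length, each pixel covers more sky, so the per-pixel signal (and hence any per-pixel SNR estimate made before registration) comes out higher on the wide-field subs even though they resolve less detail. A toy photon-count sketch (made-up flux and read-noise numbers, ignoring the aperture difference between the two scopes):

```python
import numpy as np

rng = np.random.default_rng(3)
flux_per_arcsec2, read_noise, n = 50.0, 3.0, 100_000  # made-up numbers

snr = {}
for scale in (1.24, 2.2):                 # arcsec/pixel of the two rigs
    sig = flux_per_arcsec2 * scale**2     # photons per pixel grow with pixel area on sky
    pix = rng.poisson(sig, n) + read_noise * rng.standard_normal(n)
    snr[scale] = sig / pix.std()

print(snr)  # the coarser image scale measures a higher per-pixel SNR
```

Under these assumptions the 2.2"/pix subs roughly double the per-pixel SNR, which says nothing about how much spatial information they carry, so a per-pixel weighting metric can prefer them.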

When I was experimenting, my best result for combining data from each setup was generally registering them all together but stacking each separately and then using pixel math to blend them together like @David Foust mentioned with different weightings. That takes the auto-weighting of dissimilar stacks with WBPP out of the picture. I found that, for my setup, weighting something like 0.2 for the smaller scope and 0.8 for the larger was about the best subjectively. But it really didn’t yield a significant benefit to make it worth it.

Pranalabs

Rick Krejci · Oct 12, 2025, 03:23 PM

When I was experimenting, my best result for combining data from each setup was generally registering them all together but stacking each separately and then using PixelMath to blend them together, like @David Foust mentioned, with different weightings.

Hi @Rick Krejci ,

I created a script that launches DBXtract in batch mode on a list of dual-band OSC files (debayered only), writing the Ha and OIII channels to disk. This allows me to stack them with the Ha and OIII channels captured in mono, registering and integrating them all together. Let's see what happens.

Bye

Gianluca

Pranalabs

Hi,

after a whole day of running scripts and processing, we finally found the answer.

Is it worth the effort to stack narrowband files taken by two telescopes in parallel, one with a color camera with an L-Extreme filter and the other with a monochrome camera and narrowband filters?

[Screenshot: Senza-titolo-2.jpg] Assuming that both images are aligned in FOV and resolution, the result is striking. The SNR of the stacked image without the addition of the 52 frames from the monochrome camera wins hands down.

I can't quite explain it, since the stacking was done at the source using both sets of images together and not subsequently.

Good to know.

In conclusion, it is a waste of time and computing power to capture in parallel in the same band in order to maximize the SNR. If you have any thoughts on this, I would be very interested to hear them.

Best regards

Gianluca

David Foust

Thanks for chasing this one down the rabbit hole! I was curious if this would be worth trying some day and it's clear the answer is no, at least not under circumstances similar to yours. Did you also stack only the mono and run a SNR analysis? What was the result?

I'm guessing the SNR on the OSC may be higher because of signal bleed from overlap in bandpass from the Bayer matrix on the osc sensor?

Nonetheless… interesting result!

Pranalabs

David Foust · Oct 13, 2025, 05:50 PM

Did you also stack only the mono and run a SNR analysis? What was the result? […]

Hi @David Foust ,

Let's say that I stacked the 52 mono images but did not align and measure them at the FOV and resolution of the OSC set.

If I measured them natively at 1.24"/pix, I would probably get a better but misleading SNR.

One thing I noticed is that WBPP has problems with weighting in this case. In my opinion, the difference in resolution and debayering combined do a lot of damage. Anyway, good to know. I have a few other ideas for future attempts, but I would have to create an ad-hoc script to replace WBPP, perhaps using Bayer Drizzle 1x… For now, this is fine.

My strategy will be as follows:

Broadband subjects:

1) OSC Camera @ 2.2"/pix for RGB imaging (UVIR or L-QEF filter)

2) Mono Camera @1.24"/pix for luminance or narrowband (e.g., Ha for galaxies)

Narrowband subjects:

1) OSC Camera @ 2.2"/pix to capture broadband RGB (UVIR or L-QEF filter) or Ha and OIII (L-Extreme), acting as a sort of 2x bin (in a nutshell)

2) Mono Camera @ 1.24"/pix for Ha, to use as luminance, or for SII, since I don't capture it with the first setup. I would not shoot OIII, and I will no longer stack the same band from both rigs

Best regards

Gianluca

David Foust avatar

That makes sense. I wonder if you might get a different result with a different stacking program. Have you tried stacking in Siril or SetiAstro Suite?

Well Written
Rick Krejci avatar

Pranalabs · Oct 13, 2025, 06:53 PM

David Foust · Oct 13, 2025, 05:50 PM

Thanks for chasing this one down the rabbit hole! I was curious if this would be worth trying some day and it's clear the answer is no, at least not under circumstances similar to yours. Did you also stack only the mono and run a SNR analysis? What was the result?

I'm guessing the SNR on the OSC may be higher because of signal bleed from overlap in bandpass from the Bayer matrix on the osc sensor?

Nonetheless… interesting result!

Hi @David Foust ,

Let's say I stacked the 52 mono images, but I did not align and measure them at the FOV and resolution of the OSC frames.

If I had measured them natively at 1.24"/pix, I would probably have obtained a better, but misleading, SNR.

One thing I noticed is that WBPP has problems with weighting in this case. In my opinion, the combined effect of the resolution difference and debayering does a lot of damage. Anyway, good to know. I have a few other ideas for future attempts, but I would have to write an ad-hoc script to replace WBPP, perhaps using Bayer drizzle 1x… for now, this is fine.

My strategy will be as follows:

Broadband subjects:

1) OSC Camera @ 2.2"/pix for RGB imaging (UVIR or L-QEF filter)

2) Mono Camera @1.24"/pix for luminance or narrowband (e.g., Ha for galaxies)

Narrowband subjects:

1) OSC Camera @ 2.2"/pix to capture broadband RGB (UVIR or L-QEF filter) or Ha and OIII (L-Extreme), acting as a sort of 2x bin (in a nutshell)

2) Mono Camera @ 1.24"/pix for Ha, to use as luminance, or for SII, since I don't capture it with the first setup. I would not shoot OIII, and I will no longer stack the same band from both rigs

Best regards

Gianluca

Yeah, that's about where I landed, although for narrowband, using Ha or SII for luminance will result in a large mismatch between luminance and chrominance, which isn't ideal. It might make a pretty picture, but you'll end up emphasizing one band in luminance, so the chrominance-to-luminance mapping will be off, much like I saw in the crescent series I presented above.

In your example image, I guess I don't understand how the addition of mono images could degrade S/N, assuming the OSC stack was up-sampled to the same "/pix as the mono. Even in the single-image comparison you posted earlier, the mono image had far less noise, and that was before up-sampling the OSC image to match it.

Were the OSC images upsampled before being put into WBPP? If not, since the measuring/weighting in WBPP is done before any upscaling, it will put more weight than it should on the lower-resolution images, since they will probably have tighter FWHMs (in pixels) and similar S/N per subexposure. That's why I combined each set first and then used PixelMath to put less weight on the lower-resolution stack. But even then, it really wasn't worth it, as you concluded.

I think your conclusions are valid, just trying to understand your data to support them.
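Rick's approach (integrate each set separately, then weight the noisier stack down) can be sketched with a robust noise estimate. This is an illustrative numpy version, not the actual PixelMath expression he used:

```python
import numpy as np

rng = np.random.default_rng(1)

def noise_sigma(img):
    # Robust noise estimate: 1.4826 * median absolute deviation, which
    # equals the standard deviation for Gaussian noise. On real frames
    # this should be run on a background patch, not on bright structure.
    med = np.median(img)
    return 1.4826 * np.median(np.abs(img - med))

def weighted_combine(a, b):
    # Inverse-variance weighted average of two registered, same-scale
    # master lights: the "put less weight on the noisier stack" combine.
    wa = 1.0 / noise_sigma(a) ** 2
    wb = 1.0 / noise_sigma(b) ** 2
    return (wa * a + wb * b) / (wa + wb)

# Illustrative flat patches: a cleaner stack and a noisier one.
clean = 100 + rng.normal(0, 5.0, 200_000)
noisy = 100 + rng.normal(0, 12.0, 200_000)
combined = weighted_combine(clean, noisy)
print(noise_sigma(clean), noise_sigma(noisy), noise_sigma(combined))
```

With weights chosen this way the combined stack is at worst as noisy as the cleaner input, which is why the separate-stacks-plus-PixelMath route at least cannot make things worse, even when the gain is marginal.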

Pranalabs avatar

Hello,

I finally decided to use the Ha and OIII channels from the color camera and the SII channel from the monochrome camera. To standardize the resolutions between the two telescopes, I drizzled the OSC integration at 2x. You can see the result on my page.

Lessons learned:

  1. I was amazed by the quality of the data obtained from the L-Extreme filter, at least for bright subjects and under my Bortle 5/6 sky.

  2. To standardize resolutions, I found it advantageous to drizzle the OSC images at 2x (careful dithering is therefore essential).

  3. Integrating images from the two setups in the same band is not only cumbersome but also offers no SNR advantage (at least with the techniques I used).

  4. Doubling the hours of imaging per night is therefore feasible with certain filter combinations, depending on the subject.
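As a sanity check on point 2, the plate-scale arithmetic (using only the scales quoted earlier in this thread) shows why 2x is the natural drizzle factor here:

```python
# Plate-scale arithmetic behind the "drizzle 2x" choice; both values
# are the sampling figures quoted in this thread.
osc_scale  = 2.20   # arcsec/pixel, GT81 + OSC rig
mono_scale = 1.24   # arcsec/pixel, FLT120 + mono rig

ratio = osc_scale / mono_scale
print(f"scale ratio = {ratio:.2f}")   # ~1.77, so 2x is the nearest integer factor

# After 2x drizzle the OSC data lands at:
drizzled = osc_scale / 2
print(f"OSC after 2x drizzle: {drizzled:.2f} arcsec/px vs mono {mono_scale:.2f}")
```

The 2x-drizzled OSC data ends up slightly finer than the mono scale, so the final registration is a small downsample of the mono frames rather than an upsample of the OSC ones.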

Best Regards

Gianluca

Well Written Insightful Respectful Concise