Combining Different Focal Lengths with the Same Camera But with the Optolong L-eNhance

Zak Jones:
Hi all,

I am curious to know whether it is possible to combine data from different focal lengths taken with the same camera.

For example, I am currently re-processing some data I shot of NGC 3372 (Carina Nebula) back in February this year taken through my Radian 61 with an Optolong L-eNhance filter. Fast forward to late May, I shot another dataset of NGC 3372 with my Canon 6Da but this time I used my Canon EF 400mm f/5.6L USM lens instead of my Radian 61. For reference, the focal length of my Radian 61 is 275mm.

My question is, would it be possible to combine these two datasets even though they were taken at different focal lengths, with different optics, and with the Optolong L-eNhance on only one of them? I sure hope so, as it would be cool to merge the two datasets into one, like I did with my Gum 15 data I shot last week.

I understand that sampling might be a factor in this, but I think it should be OK, as both datasets were taken with the same camera, although according to calculations from the Astronomy Tools CCD Suitability website, my Canon 400mm data is slightly less undersampled than the dataset shot with my Radian 61.
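For a rough comparison of the sampling at the two focal lengths, the image scale in arcseconds per pixel follows directly from pixel pitch and focal length. A minimal sketch; the 6.5 µm pitch is an assumed, illustrative value for a Canon 6D-class sensor, so substitute your camera's actual pitch:

```python
# Image scale ["/px] = 206.265 * pixel_pitch_um / focal_length_mm
def pixel_scale(pixel_pitch_um: float, focal_length_mm: float) -> float:
    """Image scale in arcseconds per pixel."""
    return 206.265 * pixel_pitch_um / focal_length_mm

PITCH_UM = 6.5  # assumed pitch for illustration only

radian61 = pixel_scale(PITCH_UM, 275)  # Radian 61 at 275mm
canon400 = pixel_scale(PITCH_UM, 400)  # Canon EF 400mm
print(f'Radian 61: {radian61:.2f}"/px, 400mm: {canon400:.2f}"/px')
```

With these assumed numbers the 400mm data comes out around 3.35"/px versus roughly 4.88"/px for the Radian 61, i.e. the longer lens samples more finely, which matches the "slightly less undersampled" result from the suitability calculator.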

Looking forward to hearing everyone's thoughts and solutions!

Zak
Joon Ren:
Yes, it can be done. PixInsight's Star Alignment will project any subs to the same FOV and orientation as a given reference sub, which allows direct stacking of all the aligned subs regardless of focal length. Other software should have similar functions as well.
Stjepan Prugovečki:
Indeed, Star Alignment does the job; you can take either the shorter or the longer FL as the reference. What you do with the aligned frames later and how you combine the colors is up to your taste.
Zak Jones:
Joon Ren:
Yes, it can be done. PixInsight's Star Alignment will project any subs to the same FOV and orientation as a given reference sub, which allows direct stacking of all the aligned subs regardless of focal length. Other software should have similar functions as well.

Awesome, thanks for clarifying!

I thought so, just wanted to double check in case it was something that couldn't be done.

I have initiated another stack containing these two datasets, selecting one of my Radian 61 images as the reference image for image registration.

Zak
Zak Jones:
Stjepan Prugovečki:
Indeed, Star Alignment does the job; you can take either the shorter or the longer FL as the reference. What you do with the aligned frames later and how you combine the colors is up to your taste.

Awesome, thanks for confirming, really appreciate it!

I have started another stack with these datasets and used my Radian 61 image as reference.

I will see what I can come up with in the edit!

Zak
Yungshih Lee:
Yes. In fact, I do this quite often. With PixInsight you can combine images taken with different scopes, cameras, binning settings, samplings, FOVs, focal lengths, filters, etc. In PixInsight you can either dump them all into WBPP (but resolution needs to be standardized first) and let it do the work or, as some pointed out, use star alignment/image integration or pixel math after stacking. 

Filters are generally not an issue, especially with OSC. 

It is worth noting it is usually better to use the smaller FOV as the reference frame so you don't get clear-cut edges in the final image. But you can crop it later as well.

Here is an example: https://www.astrobin.com/u26bof/F/
I used two scopes in different focal lengths, different cameras (one mono, one OSC), different filters, and even different binning settings (ASI294mm Pro was shot in bin1, so 4x the resolution).
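The "resolution needs to be standardized first" step above is just a resampling onto a common pixel scale. A toy nearest-neighbour sketch of the geometry in NumPy; WBPP and Star Alignment do this properly with high-quality interpolation, and the scale values here are assumed for illustration:

```python
import numpy as np

def resample_to_scale(img, src_scale, dst_scale):
    """Nearest-neighbour resample so that img's pixel scale (arcsec/px)
    becomes dst_scale. Only a toy illustration of the scale change;
    real registration uses proper interpolation."""
    factor = src_scale / dst_scale          # >1 means enlarging the image
    h, w = img.shape
    new_h, new_w = int(round(h * factor)), int(round(w * factor))
    # Map each output pixel back to its nearest source pixel.
    rows = np.minimum((np.arange(new_h) / factor).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) / factor).astype(int), w - 1)
    return img[np.ix_(rows, cols)]

# Assumed scales: ~4.88"/px (short scope) resampled onto ~3.35"/px (long lens).
coarse = np.arange(100.0 * 100.0).reshape(100, 100)
matched = resample_to_scale(coarse, 4.88, 3.35)
print(matched.shape)  # (146, 146)
```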
Zak Jones:
Yungshih Lee:
Yes. In fact, I do this quite often. With PixInsight you can combine images taken with different scopes, cameras, binning settings, samplings, FOVs, focal lengths, filters, etc. In PixInsight you can either dump them all into WBPP (but resolution needs to be standardized first) and let it do the work or, as some pointed out, use star alignment/image integration or pixel math after stacking. 

Filters are generally not an issue, especially with OSC. 

It is worth noting it is usually better to use the smaller FOV as the reference frame so you don't get clear-cut edges in the final image. But you can crop it later as well.

Here is an example: https://www.astrobin.com/u26bof/F/
I used two scopes in different focal lengths, different cameras (one mono, one OSC), different filters, and even different binning settings (ASI294mm Pro was shot in bin1, so 4x the resolution).

Awesome, glad to know that I can do what you did! I'm surprised that other people do it quite often!

I didn't check the standardized resolution first, so I will cancel the stack and restart it with the master files created to save time.

I did use my Radian 61 image as reference, as it was the smaller FOV out of the two datasets.

Awesome image of M101 and the supernova! Unfortunately, I can't image M101 from where I live in Australia, as it only rises a maximum of 8 degrees above the horizon, which is way too low to get decent images; plus it's obscured by neighbouring houses and sits in a light dome to the north. Definitely not the best conditions, but oh well, hopefully there might be a supernova in our neck of the woods soon, as it's way overdue!

Zak
Zak Jones:
After much experimentation, I have managed to get the two datasets to stack together, although there is now another issue that has arisen.

The integrated image is coming out with a similar look to my Canon 400mm data, with not much of the Radian 61 data showing in the exposures. I might be doing something wrong, but I am not too sure.

I think it might be something to do with the exposure tolerance in WBPP. If so, what would I set it to? I did try two runs of WBPP with the exposure tolerance set to 2 on the first run, then I set it to 999 on the second run.

I could also try to combine and average the two different stacked datasets in PixelMath, if that is another possible solution.
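That combine-and-average idea can be sketched in a few lines of NumPy; the array names are hypothetical stand-ins for two stacked masters already registered to the same geometry, and the expression mirrors a PixelMath weighted average of the form `w*a + (1-w)*b`:

```python
import numpy as np

# Hypothetical stacked masters, already registered to the same geometry.
rng = np.random.default_rng(0)
radian_master = rng.random((4, 4))
canon_master = rng.random((4, 4))

# w controls how much the filtered (Radian 61) data contributes;
# w = 0.5 is a straight average, like PixelMath's (a + b)/2.
w = 0.5
combined = w * radian_master + (1 - w) * canon_master
```

Raising `w` above 0.5 would be one way to stop the unfiltered data from dominating the result.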

I hope that I can sort this out, as I am really keen to find out if I can come up with a new rendition of the Carina Nebula with these two datasets.

Looking forward to hearing everyone's thoughts.

Zak
Joon Ren:
Hmm, only one of the datasets uses the Optolong, right? WBPP might be weighting the non-filter stack higher, since its signal and stars are stronger. In that case, what you can do is star-align both datasets to one reference but stack them separately. Then, in post-processing, combine the Ha (red channel) and OIII (green, as OSC has more green pixels and OIII is actually turquoise) using PixelMath, where you can manually control their contribution to the final image. The L-eNhance data will be more prominent if you use narrowband techniques; otherwise it will get swamped by the other dataset, which has more signal.
Zak Jones:
Joon Ren:
Hmm, only one of the datasets uses the Optolong, right? WBPP might be weighting the non-filter stack higher, since its signal and stars are stronger. In that case, what you can do is star-align both datasets to one reference but stack them separately. Then, in post-processing, combine the Ha (red channel) and OIII (green, as OSC has more green pixels and OIII is actually turquoise) using PixelMath, where you can manually control their contribution to the final image. The L-eNhance data will be more prominent if you use narrowband techniques; otherwise it will get swamped by the other dataset, which has more signal.

Yes, that's correct. My Radian 61 data used the Optolong L-eNhance, whereas my Canon 400mm data did not.

You might be right in that the weights are being determined by WBPP, will have to do more research into that. I will try and stack the two datasets separately and then combine them into PixelMath.

I have used Cuiv the Lazy Geek's HOO PixelMath formulas to create a HOO image whilst also creating Ha and OIII masks. Would I be right in saying that I can use this method to extract the Ha and OIII channels? If so, how would I go about combining them with my RGB data, and what averaging method would be best to use?

Thanks for your tips, I really appreciate it!

Zak
andrea tasselli:
Zak Jones:
I have used Cuiv the Lazy Geek's HOO PixelMath formulas to create a HOO image whilst also creating Ha and OIII masks. Would I be right in saying that I can use this method to extract the Ha and OIII channels? If so, how would I go about combining them with my RGB data, and what averaging method would be best to use?


Know nothing about that chap. Be advised that whilst the R channel (of the L-eNhance) is mostly Ha, the G&B channels aren't only OIII+Hb unless there is no broadband emission at all (so much so that you can use it for reflection nebulae in significantly LP skies). As I suggested to you on a similar topic, you should really be adding them in PixelMath once you have used LinearFit on the two separate images. Be careful in deciding which one to use as the master when registering them, as you don't want a dark border, nor do you want to oversample the image with the smaller scale.
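The LinearFit-then-add order of operations can be sketched with an ordinary least-squares fit standing in for PixInsight's LinearFit process; all names and values below are hypothetical illustrations:

```python
import numpy as np

def linear_fit(target, reference):
    """Match target's intensity scale to reference with a least-squares
    fit y = m*x + c -- a simplified sketch of the LinearFit idea."""
    m, c = np.polyfit(target.ravel(), reference.ravel(), 1)
    return m * target + c

rng = np.random.default_rng(1)
reference = rng.random((8, 8))
target = 0.5 * reference + 0.1        # same signal, different scale/offset

fitted = linear_fit(target, reference)  # brought onto reference's scale
blended = 0.5 * (fitted + reference)    # then add/average in PixelMath
```

Matching the intensity scales first is what keeps one dataset from visibly dominating the sum.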
Yungshih Lee:
If you are referring to Cuiv's video on creating Hubble palette images with an OSC camera using masks, it could be outdated (if published over a year ago). He now uses Bill Blanshan's PixelMath scripts to do that, which create better Hubble palette images more efficiently. That's what I use, too. I recommend searching for "another astro channel" on YouTube for some really excellent tools.

But regardless, those still may not get you what you are looking for. Mixing dualband and broadband images usually doesn't work well unless you are taking a small portion of one to add to the other, such as extracting RGB stars to add to a narrowband image or adding Ha data to a galaxy to emphasize starburst regions. You can certainly try playing with PixelMath and see if you like what turns out, but it would most likely not give you a "conventional" astro image.

I recently shot M16 on two consecutive nights and on the second night forgot to change my filter back to dualband (running too many rigs at the same time).  I ended up with two images, one RGB and one HOO. The best use I can make of the RGB image is to use the stars for my narrowband image.
Joon Ren:

You can use the parts where you're separating the Ha and OIII layers. However, the part where only the Ha layer is used for luminance in OSC dualband techniques isn't going to be suitable for combining filter/non-filter data. Some possibilities, just to pass on the rough concept of combining L-eNhance/RGB data:
1) Split the RGB image into its R/G/B layers and combine the R and G/B with Ha and OIII respectively. RGB-combine the resulting layers back into a new image, which now contains both datasets.
2) Take the luminance from the RGB image and combine it with the Ha and OIII layers. Then apply this new luminance to the current RGB image using LRGB combination.

Lots of ways to go about (1) and (2). You can decide the combinations using different weightings in PixelMath. (1) and (2) could even be used together, or you could vary the stretches of the different layers to be combined… the list goes on. The only caveat would be the stars: I would recommend (not a must) using either only the RGB or only the narrowband stars so that their colours don't run wild. For the same reason, it might be easier to execute the combinations on starless versions and add the stars back later.
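Option (1) above can be sketched in NumPy, assuming both stacks are already registered and intensity-matched; the arrays and the weight are hypothetical:

```python
import numpy as np

# Hypothetical registered stacks: a broadband RGB image, and an
# L-eNhance image whose R channel ~ Ha and G/B channels ~ OIII.
rng = np.random.default_rng(2)
rgb = rng.random((4, 4, 3))   # (H, W, RGB)
duo = rng.random((4, 4, 3))

ha = duo[..., 0]                         # Ha lives in the red channel
oiii = 0.5 * (duo[..., 1] + duo[..., 2])  # OIII spans green and blue

w = 0.5  # contribution of the dualband layers; tune to taste
combined = np.stack([
    w * ha   + (1 - w) * rgb[..., 0],  # red   <- Ha  + broadband R
    w * oiii + (1 - w) * rgb[..., 1],  # green <- OIII + broadband G
    w * oiii + (1 - w) * rgb[..., 2],  # blue  <- OIII + broadband B
], axis=-1)
```

Each line of the stack corresponds to one PixelMath expression per output channel, with `w` playing the role of the manual weighting mentioned above.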
Zak Jones:
andrea tasselli:
Zak Jones:
I have used Cuiv the Lazy Geek's HOO PixelMath formulas to create a HOO image whilst also creating Ha and OIII masks. Would I be right in saying that I can use this method to extract the Ha and OIII channels? If so, how would I go about combining them with my RGB data, and what averaging method would be best to use?


Know nothing about that chap. Be advised that whilst the R channel (of the L-eNhance) is mostly Ha, the G&B channels aren't only OIII+Hb unless there is no broadband emission at all (so much so that you can use it for reflection nebulae in significantly LP skies). As I suggested to you on a similar topic, you should really be adding them in PixelMath once you have used LinearFit on the two separate images. Be careful in deciding which one to use as the master when registering them, as you don't want a dark border, nor do you want to oversample the image with the smaller scale.

All good, he's got awesome videos about astrophotography. I only mentioned him because I was following his technique for creating HOO images, but today I noticed that Bill Blanshan updated his narrowband normalization PixelMath expressions to v5 a couple of months ago, which is a massive improvement over his previous versions. From here on, I am definitely going to be using them for my SHO and HOO images. There is even an HSO mode too, but I am unlikely to use it, as the filters I use don't really cover the SII side of the spectrum. I might give it a crack if I end up getting the Antlia ALP-T Hb and SII filter, which is unlikely at this point in time.

Yes, I've taken that on board and will try it out on these images. I ended up restacking my Gum 15 data separately so that I can combine the two stacked datasets with PixelMath and compare the result to the WBPP method I used, instead of procrastinating over it.

I agree, I just need to really think about which one to use as reference. I tried it initially in WBPP by setting the reference to one of the Radian 61 images. It didn't turn out that well unfortunately, mainly due to the borders around the Carina Nebula that you explained in your reply. Will have to work it out so that it will align properly without affecting the pixel scale too much, but I will definitely try PixelMath once I finish editing my backlog of data I have shot over the past month.

Winter is in full swing here in Australia, and there have been many clear nights recently, which I have been taking advantage of. Tonight I'm adding more data to a project I started last night on the Vela Supernova Remnant. It might take me a while to actually finish editing the full backlog, but I will get there in the end lol.

Zak
Zak Jones:
Yungshih Lee:
If you are referring to Cuiv's video on creating Hubble palette images with an OSC camera using masks, it could be outdated (if published over a year ago). He now uses Bill Blanshan's PixelMath scripts to do that, which create better Hubble palette images more efficiently. That's what I use, too. I recommend searching for "another astro channel" on YouTube for some really excellent tools.

But regardless, those still may not get you what you are looking for. Mixing dualband and broadband images usually doesn't work well unless you are taking a small portion of one to add to the other, such as extracting RGB stars to add to a narrowband image or adding Ha data to a galaxy to emphasize starburst regions. You can certainly try playing with PixelMath and see if you like what turns out, but it would most likely not give you a "conventional" astro image.

I recently shot M16 on two consecutive nights and on the second night forgot to change my filter back to dualband (running too many rigs at the same time).  I ended up with two images, one RGB and one HOO. The best use I can make of the RGB image is to use the stars for my narrowband image.

Yes, that's it! He's got great videos, but you are right, the one I was following is very outdated now, especially since the introduction of Bill Blanshan's PixelMath expressions for narrowband normalization. I only noticed today that he updated them to v5 a couple of months ago. It's a HUGE improvement over v4, so I will definitely be using them moving forward.

I do use Bill's other expressions in my workflow as well, and they are excellent. I am looking forward to seeing what he comes out with in the near future! Using Bill's HOO normalization expressions gave me an image I was happy with when I tried them on some old data this morning. I will try them out on some of my other duoband data and see how it goes.

I haven't imaged M16 for a couple of years for some reason lol. Once I get this current project over and done with, I will try to image some other targets that I haven't shot for a while, such as the Cat's Paw Nebula, the Prawn Nebula, and maybe the Dark Doodad Nebula again, to add more data to my current four-and-a-half-hour project, which I have left alone for the moment to focus on having fun with my Optolong L-eXtreme.

I also need to try and shoot some RGB data for the stars, so then I can add them into my narrowband data. Will be a challenge for me as I haven't done it before, but I will see what prevails.

Zak
Zak Jones:

Joon Ren:
You can use the parts where you're separating the Ha and OIII layers. However, the part where only the Ha layer is used for luminance in OSC dualband techniques isn't going to be suitable for combining filter/non-filter data. Some possibilities, just to pass on the rough concept of combining L-eNhance/RGB data:
1) Split the RGB image into its R/G/B layers and combine the R and G/B with Ha and OIII respectively. RGB-combine the resulting layers back into a new image, which now contains both datasets.
2) Take the luminance from the RGB image and combine it with the Ha and OIII layers. Then apply this new luminance to the current RGB image using LRGB combination.

Lots of ways to go about (1) and (2). You can decide the combinations using different weightings in PixelMath. (1) and (2) could even be used together, or you could vary the stretches of the different layers to be combined… the list goes on. The only caveat would be the stars: I would recommend (not a must) using either only the RGB or only the narrowband stars so that their colours don't run wild. For the same reason, it might be easier to execute the combinations on starless versions and add the stars back later.

Awesome, thanks for the pointers on combining broadband data with duo-band data, really appreciate it!

Yes, I do use the technique of extracting the stars before stretching, then processing the starless image and stars separately. That's for both broadband and duoband data.

I haven't actually tried shooting RGB data with RGB stars then combining them into the duoband data, it's something that I have been procrastinating over tbh lol, need to actually do it.

Zak