Tips for dark nebulas

12 replies · 625 views
Thomas
Hi, I'm looking to see if anyone has tips for acquisition and/or processing of dark or reflection nebulas, in particular when using LRGB. Aside from darker skies and more integration time, is it better to get more RGB frames for this style of image? I've done LRGB galaxies with no problem, averaging around 4-6 hrs of luminance and then about 30-45 min for each RGB filter. The colors come out great and there is plenty of integration time in luminance. If I were imaging, say, the Iris Nebula or the Dark Shark Nebula, should I be aiming to acquire a larger percentage of RGB frames? Is that what's needed to get more color in the reflection nebulas?
David Nozadze
Hi Thomas, 

I tried several dark nebulas myself and failed quite miserably each time 🙂

When I glance through others' works here, I notice that the good-quality shots have 4 or 5 times more luminance time than all the RGB combined.

CS

D
Jeff Ridder
My last two dark nebula shots made it to Top Pick, so I’m either starting to figure it out or got lucky 😂. For me…more luminance integration time is key. I need to do even more than I have been, but luminance first and foremost. 

For processing, it depends on what you're after: dark and smoky, or that 3D-ish look of dust suspended in front of a background of stars. For the latter, starless processing of the dark nebula and then blending it back into the original works well for me.
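A minimal sketch of that blend-back step in Python (an editorial illustration, not necessarily Jeff's exact workflow; the array names are hypothetical):

```python
import numpy as np

# Hypothetical inputs, float arrays scaled 0..1:
#   starless   - the dark nebula processed on its own (e.g., after star removal)
#   stars_only - original minus starless, i.e., just the stars
def screen_blend(starless: np.ndarray, stars_only: np.ndarray) -> np.ndarray:
    """Classic screen blend: puts the stars back on top of the processed dust."""
    return 1.0 - (1.0 - starless) * (1.0 - stars_only)
```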
Thomas
Jeff Ridder:
My last two dark nebula shots made it to Top Pick, so I’m either starting to figure it out or got lucky 😂 …

Thanks, your photos look great. I think where I'm struggling is getting the brownish color in the dust; mine usually just comes out in shades of gray. Any tips for getting that brown color to pop, like in your Iris shot?
Jeff Ridder
Thomas:
Thanks, your photos look great. I think where I'm struggling is getting the brownish color in the dust. …

I think that was mostly a matter of using all of the usual tools in PixInsight to get the balance right: masks, curves, color saturation, etc. Nothing special with regard to how much RGB I had or how I treated it beyond that.
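As an editorial illustration of what such a masked saturation tweak does (the mask recipe below is an assumption, not Jeff's, and PixInsight handles all of this natively):

```python
import numpy as np
from skimage import color

def boost_dust_saturation(rgb: np.ndarray, gain: float = 1.5) -> np.ndarray:
    """Raise saturation only where the faint mid-tone dust lives.

    rgb is a float image in 0..1; the triangular mid-tone mask is a rough
    stand-in for the kind of luminance mask one would build in PixInsight."""
    hsv = color.rgb2hsv(rgb)
    lum = rgb.mean(axis=2)
    mask = np.clip(1.0 - np.abs(lum - 0.35) / 0.35, 0.0, 1.0)  # peaks near 0.35
    hsv[..., 1] = np.clip(hsv[..., 1] * (1.0 + (gain - 1.0) * mask), 0.0, 1.0)
    return color.hsv2rgb(hsv)
```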
Alberto Ibañez
I would suggest going to an L-vs-RGB ratio of about 3. If you are looking to pop the colors out, you need to be as near to pure RGB acquisition as possible to be able to get detail in the different colors. I would also suggest avoiding binning.

I tried this myself on the Shark Nebula.


https://www.astrobin.com/43ckmd/

Apart from acquisition, there's a delicate task when processing these faint nebulas: recovering local contrast on a low-SNR background.
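One concrete, editorial illustration of that recovery step (the choice of technique is an assumption, not necessarily Alberto's): a mild locally adaptive equalization blended at low opacity keeps the noise in check.

```python
import numpy as np
from skimage import exposure

def recover_local_contrast(lum: np.ndarray, amount: float = 0.3) -> np.ndarray:
    """Blend a gentle CLAHE pass into a stretched luminance (floats in 0..1).

    Keeping 'amount' low is the point: on a low-SNR background, full-strength
    equalization amplifies noise faster than it reveals dust."""
    eq = exposure.equalize_adapthist(lum, clip_limit=0.01)
    return (1.0 - amount) * lum + amount * eq
```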

Good luck!
David Nozadze
Alberto Ibañez:
I would suggest going to an L-vs-RGB ratio of about 3. … I would also suggest avoiding binning.

Hi Alberto! What a fantastic image! But could you please explain why binning is not desirable for dark nebulas? I thought that, as I need to expose longer, binning would give me greater full-well depth and help avoid star saturation. Obviously, your image proves that my understanding is not correct, but I really would be happy to know why. Thank you!
Alberto Ibañez
Well, this can be like opening a can of worms, but...

In my opinion, using L and/or binning RGB is just a shortcut, a time saver to reach a reasonable SNR in a reasonable amount of time, but nothing beats a proper RGB acquisition (well, maybe a narrowband acquisition...).
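To put a rough number on the time saving (an editorial back-of-envelope with idealized assumptions): a broadband L filter passes roughly the combined R+G+B band, so per sub it collects on the order of 3x the photons of any single color filter, and in sky-limited imaging SNR scales with the square root of the collected signal.

```python
import math

flux_color = 1.0                       # relative photon flux through one color filter
flux_L = 3.0 * flux_color              # idealized: L covers the combined RGB band
print(math.sqrt(flux_L / flux_color))  # ~1.73x SNR from L for the same exposure time
```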

As you can see, I'm not a big fan of L, because:

- L usually has worse FWHM than RGB (in my system, always by a lot; in fact, I had to develop a new technique to fix that in my Shark Nebula project).

- Many people say "L brings the detail, RGB just the color," and I just don't agree with that. Let me show why with an example:

The image on the left (RGBtest) is composed of a pure blue square (0,0,255) and a pure red square (255,0,0).
The image on the right (Ltest) is the luminance extracted from RGBtest.


Now imagine that this RGB test is called "The Squared Nebula." If you shoot it with the L filter, you're going to obtain Ltest: no detail at all.
But if you shoot it in pure RGB, you are going to obtain two images, each of which has detail.
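The arithmetic is easy to verify (an editorial sketch assuming a broadband L filter weights the three channels roughly equally, i.e., behaves like a mean):

```python
import numpy as np

red  = np.array([255, 0, 0])    # the pure red square
blue = np.array([0, 0, 255])    # the pure blue square
print(red.mean(), blue.mean())  # 85.0 85.0 -> identical L reading, zero contrast
# The R and B channels on their own still carry the full 255-vs-0 contrast.
```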

Explanation aside... I use L too, of course, because I also need this tradeoff!

Best Regards.
Alberto.
David Nozadze
Alberto Ibañez:
In my opinion, using L and/or binning RGB is just a shortcut, a time saver to reach a reasonable SNR in a reasonable amount of time, but nothing beats a proper RGB acquisition. …


Dear Alberto, 

Thank you so much for taking the time to give such an excellent explanation. I think I understand this test. But, if I am not abusing your kind attention, I would like to ask more (as always, a good answer generates additional questions, and I really am eager to learn):

This test, in my opinion, shows a case of two fully saturated pixels. Naturally, the L reading will be the same for both, hence no distinction in detail (or no contrast, if that is the right expression). But if we were to simulate several pixels with different signal levels, what would the corresponding luminance image be?

FWHM: I usually break my filter-change sequence down into at least three equal parts per night. For example, if I want to take 30 subs per channel, I split them into 3 cycles and run autofocus on each filter change (more cycles for L, naturally, but still with a frequent autofocus routine). Thus, I get more or less consistent FWHM results (I think). Or perhaps I am wrong (again). Is there any other inherent optical reason why FWHM would be bigger in L than in the color channels if the system is focused as sharply as possible?
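For concreteness, an editorial sketch of that cycling plan in pseudo-sequencer form (the loop and printout are hypothetical):

```python
# 30 subs per filter, split into 3 cycles, refocusing on every filter change.
FILTERS = ["L", "R", "G", "B"]
SUBS_PER_FILTER, CYCLES = 30, 3

for cycle in range(1, CYCLES + 1):
    for f in FILTERS:
        print(f"cycle {cycle}: autofocus with {f}, "
              f"then take {SUBS_PER_FILTER // CYCLES} subs")
```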

Thank you!

D
Rodney Bell
What about filters? I've been working on the dark Shark for weeks now, through bad weather. I have 8.03 hours on it so far and will shoot for 15-20…

The UV/IR cut seems to be pulling it out pretty well; I'm wondering about the Optolong L-Pro broadband… I do OSC with an ASI294MC Pro.

Thanks! Great job on that Shark…!

Rod
Alberto Ibañez
David Nozadze:
But if we were to simulate several pixels with different signal levels, what would the corresponding luminance image be? … Is there any other inherent optical reason why FWHM would be bigger in L than in the color channels if the system is focused as sharply as possible?


Hi David, 

Sorry, I don't know if I understood correctly... Do you mean how this plays out in a "random" pattern? I understand that, in any case, a certain number of pixels will suffer this condition...

I made another test; let me know your thoughts.

In this case they are red and blue crossed stripes, with two different values, mixed together (ORIGINAL). Note that there's no green in the original image, so the G channel is pure black. I extracted the luminance of ORIGINAL and applied a sharpening, resulting in ORIGINAL_LUM_ATWT.
Then I made an LRGB combination with the RGB channels from ORIGINAL and the L from ORIGINAL_LUM_ATWT. The result is LRGB COMBINATION_FROM_LUM_ATWT.
Then I extracted the RGB channels and was very surprised. I expected to see what R and B would look like (that's why I was curious to make the test), but I hadn't realized that even G would get messed up!! :S



Here you can see the effect of working the channels separately, starting with the same ORIGINAL. Sharpening affects only the channel itself, as can be seen once they are extracted.
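Alberto's first experiment can be roughly re-created in a few lines (an editorial sketch assuming LRGB combination behaves like replacing L* in CIE L*a*b*, which is close to, though not exactly, what PixInsight's LRGBCombination does):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import color

img = np.zeros((8, 8, 3))  # red/blue stripes, two different values, G exactly 0
img[::2, :, 0] = 1.0       # bright red rows
img[1::2, :, 2] = 0.5      # dimmer blue rows

lab = color.rgb2lab(img)
L = lab[..., 0]
lab[..., 0] = np.clip(L + 2.0 * (L - gaussian_filter(L, 1.0)), 0, 100)  # sharpen L only

out = color.lab2rgb(lab)   # stand-in for the LRGB combination
print(out[..., 1].max())   # > 0: the green channel is no longer pure black
```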


Sorry David, I don't know if I'm clarifying or messing everything up!

Regarding the FWHM, my understanding is that higher filter bandwidth results in higher FWHM, but I can't elaborate more; I'm far from being an expert in optics...

Clear Skies!!
David Nozadze
Alberto Ibañez:
I made another test; let me know your thoughts. … Sorry David, I don't know if I'm clarifying or messing everything up!


Dear Alberto, 

Thank you so much for such a detailed explanation! I think I understood what you mean quite well! I will repeat the same test myself to see the results even better. 

Once again, thank you for taking the time to share your knowledge.

Clear skies!

David
Thomas
Jeff Ridder:
I think that was mostly a matter of using all of the usual tools in PixInsight to get the balance right: masks, curves, color saturation, etc. …

Thanks for the tips! I think where I might need to improve is the processing of the RGB image before I add in the luminance. Would it be possible for you to post an RGB vs. luminance comparison? I'm just curious what a good RGB image looks like before the luminance goes in.