The How and Why of Superluminance

Mark McComiskey · andrea tasselli · John Hayes · Scott Badger
24 replies · 435 views
Mark McComiskey avatar

I am hoping to tap into the Astrobin braintrust. I regularly collect substantial amounts of RGB and Luminance data. I equally regularly replace the lightness component of my RGB images with the Luminance data.

If I run 10 hours of each of RGB and L, then throwing away the Lightness data from the RGB is essentially halving the amount of Lightness data I have (assuming 100nm bandwidths for the RGB and 300nm for the L filters), which seems silly. Assuming the FWHMs are equal across all channels, why bother with the Luminance in this approach?

Leave aside for these purposes any discussion about how one could collect less RGB. Assume FWHMs of the RGB and L are all roughly the same.

I am aware that one approach to not losing the lightness data in the RGB is to create a superluminance. But I have not found any serious discussion of the best approach to creating a superluminance that explains why it is the best and what the pros and cons are. There are advocates for running image integration on all the RGBL subs, for running image integration on the RGBL masters, and for using PixelMath to blend the Luminance and the lightness from the RGB.

So that is my question to the deep thinkers out there: is there a generally best approach to creating a superluminance? If so, what is it, why is it the best, and what downsides are there, if any, to using that superluminance versus just an L or the lightness in the RGB?

andrea tasselli avatar
Data is data: just add the masters up with SNR weights and you're done. I've always done it that way. Everything else is just mucking around.
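Andrea's "add the masters up with SNR weights" amounts to a weighted average. Here is a minimal sketch in Python, assuming a crude SNR proxy (median signal over a MAD noise estimate); the function name and the estimator are illustrative only, not what ImageIntegration actually computes:

```python
import numpy as np

def snr_weighted_combine(masters):
    """Average channel masters with weights proportional to a crude SNR proxy.

    Hypothetical sketch only: real integration tools use far more careful
    noise and scale estimators than median / MAD.
    """
    weights = []
    for m in masters:
        sigma = 1.4826 * np.median(np.abs(m - np.median(m)))  # MAD -> sigma
        weights.append(np.median(m) / sigma)                   # crude SNR proxy
    w = np.array(weights)
    return np.tensordot(w / w.sum(), np.stack(masters), axes=1)

# Example: combine a cleaner and a noisier master of the same field.
rng = np.random.default_rng(0)
lum = rng.normal(1.0, 0.05, (8, 8))    # higher-SNR "L" master
rgb_l = rng.normal(1.0, 0.15, (8, 8))  # noisier lightness from RGB
super_lum = snr_weighted_combine([lum, rgb_l])
```

With positive weights normalized to sum to one, the result is a per-pixel convex combination, so the cleaner master dominates in proportion to its SNR.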
Scott Badger avatar

I think there are good arguments to go with straight RGB, but for me it’s more a strategic decision to promote best resolution and maximum imaging time in poor and variable seeing conditions. I shoot L when seeing is better than average, and R, G, & B when average or worse.

In any case, I also (usually) create a super luminance, where I create second R, G, and B masters intended for the super luminance only. The subs for these masters are culled for FWHM much more aggressively than the RGB masters I use for color. I also cull the luminance master a bit more aggressively, since I’m more than replacing any signal given up. I then integrate the 4 masters using PSF SNR for weighting and no rejection. The result will generally have a slightly larger average FWHM than the luminance master (even without the extra culling), but a significantly higher SNR.

When processing, one step that I’m not sure would work as well without using a luminance is applying HDRMultiScaleTransform. When I use it on the luminance, the result often looks non-optimal until combined with the RGB.

Cheers,
Scott

Mark McComiskey avatar

Thanks, but I’m looking to understand the reasons why one approach is better than another. It isn’t in fact always done in the way you described, as I discovered when I searched the topic and found people advocating a variety of approaches. It is entirely possible that the way you suggest is the best, but I am trying to get to a theoretical foundation of why.

David Jones avatar

The L produced from RGB combination includes the bandpass overlap between the filters, which would ultimately give a bias at those particular wavelengths. Is it significant? Probably not, but it is there, unless it is somehow accounted for in the math at combination. I suspect most software doesn’t, unless there are filter selections like those in SPCC in PixInsight to account for it.

The resolution gained is generally far more important for most folks than any bias incorporated. It is hard-earned data, so using it makes sense to me.

Dave

Daniel.P avatar

Just like Andrea, I am adding the L, R, G & B channels to create a super luminance.

For this I am using the PI ImageIntegration process, with no rejection and weighting set to either PSFSignalWeight or PSFSNR depending on the object processed (PSFSignalWeight for galaxies to promote best FWHM, and PSFSNR for nebulae where the SNR is most critical).

So, if I have acquired 2h of each filter, for example (so 8h in total), and the images are identical in terms of FWHM and sky brightness, the result will be very close to a 4h Lum exposure: 2h of true Lum + 2h of (R+G+B) …
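Daniel's equivalence can be checked with back-of-the-envelope arithmetic, assuming (per the bandwidths in the opening post) that the L filter passes roughly three times the light of each color filter and that everything else is equal; the rates below are illustrative numbers, not measurements:

```python
# Relative photon rates are assumptions for illustration only.
l_rate = 3.0       # L filter: ~300 nm bandwidth
rgb_rate = 1.0     # each of R, G, B: ~100 nm bandwidth
hours = 2.0        # per-filter exposure

signal_l = l_rate * hours            # true Lum signal
signal_rgb = 3 * rgb_rate * hours    # lightness recoverable from R + G + B
total = signal_l + signal_rgb        # super-luminance signal

equivalent_l_hours = total / l_rate
print(equivalent_l_hours)  # 4.0 -> behaves like about 4 h of pure L
```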

Daniel

Michele Bayliss avatar

I am probably not nearly as technical as 99% of people out there, so this may not answer your question at all, but here is my process. I try to shoot L and RGB. Then I use ImageIntegration to create a super L, throwing all those channels in the “pot” and using the L as a reference. I have measured the SNR of both the “plain” L and the super L, and the super L is always better, so I toss the L and use the super L.

For SHO images, I create a synthL and sub that in for the lightness and it ALWAYS adds detail. As for the WHY, that I can’t tell you as I’m the humanist astro photographer…

John Hayes avatar

Mark,

There are a couple of reasons for both a separate L-channel and for creating a “super” L-channel.

1) For any given exposure time, the signal will be higher with the L filter than with any of the RGB filters simply because more light gets through. So, the pure Lum data will go deeper than any of the RGB channels. Remember that SNR varies as the square root of the signal strength, so the Lum data will also be “cleaner” than your RGB data. In an LRGB image, it’s the Lum data that drives the overall SNR, so that’s the primary reason to take the Lum data in the first place.
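John's square-root relationship is easy to see under photon (shot) noise, where noise ≈ √S, so SNR = S/√S = √S. A toy illustration with made-up electron counts:

```python
import math

# Shot-noise-limited SNR: quadrupling the signal doubles the SNR.
snr = {signal: signal / math.sqrt(signal) for signal in (900, 3600)}
print(snr)  # {900: 30.0, 3600: 60.0}
```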

2) You are right that you can further improve the SNR by not tossing out the luminance portion of the RGB combination. The trick is to properly combine it with the “real” luminance signal, which requires proper statistical weighting. Simply tossing everything into the ImageIntegration tool does not properly combine the results and in some circumstances, that approach can actually make things worse. I’ve had success with the following method.

1) Extract the Lum signal from the RGB combination converted to LAB space.

2) Make a copy of the file.

3) Make a copy of your Lum data.

4) Load the originals plus the copies (4 files) into the ImageIntegration tool.

5) Average them using “Additive with scaling” normalization and SNR weighting.

You need to use four files simply because the ImageIntegration tool won’t allow a weighted average using only two files. SNR weighting will ensure that the combination is statistically “correct” with respect to signal strength. I believe that you could also use the PSF Signal Weight weighting factor and get a similar result. Juan has stated that the latter is more statistically “robust” but I’ve never done any experimenting to see if there’s a measurable difference.
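The four-file workaround gives exactly the same answer as a direct two-image SNR-weighted mean, because duplicating an image doubles its count but not its relative weight. A small sketch verifying this (the weights and the `weighted_mean` helper are made up for illustration, not an ImageIntegration call):

```python
import numpy as np

def weighted_mean(images, weights):
    # Per-pixel weighted average with normalized weights.
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), np.stack(images), axes=1)

rng = np.random.default_rng(1)
lum = rng.normal(1.0, 0.05, (4, 4))     # stacked Lum master
lab_l = rng.normal(1.0, 0.10, (4, 4))   # lightness extracted from RGB
w_lum, w_lab = 2.0, 1.0                 # hypothetical SNR weights

two = weighted_mean([lum, lab_l], [w_lum, w_lab])
four = weighted_mean([lum, lum, lab_l, lab_l], [w_lum, w_lum, w_lab, w_lab])
assert np.allclose(two, four)  # the duplicates change nothing
```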

Remember that there is a small difference between the bandpass of your Lum-filter and the Lum channel that you pull from the RGB data, but in my experience, that issue has no significant effect on the result.

Hopefully this answers your question.

John

Rodolphe Goldsztejn avatar

If not already done, I would suggest watching the official PixInsight tutorial videos on YouTube. There’s one that addresses exactly that point and shows the benefits of creating a superluminance, essentially for SNR improvement.

https://youtu.be/Q2PLUI2hBvQ

My 2 cents.

Mark McComiskey avatar

To be clear, I am a believer in the benefits of a super luminance and I understand why it is better. I am really asking about the best approach for creating one.

Mark McComiskey avatar

John,

That makes sense. I have always felt that running integration using the individual R, G and B masters plus the Lum must be problematic given the massive mismatch in bandwidth. Cloning the Lum and RGB lightness to get over the 4 frame minimum makes a lot of sense.

andrea tasselli avatar
Mark McComiskey:
Cloning the Lum and RGB lightness to get over the 4 frame minimum makes a lot of sense.

*Only if you scale the members by their relative SNR
Kevin Morefield avatar

John Hayes · Mar 7, 2026, 05:48 PM

I’ve had success with the following method. […]

I think it’s key to add that you do not use rejection when creating the SuperLum.

John Hayes avatar

andrea tasselli · Mar 7, 2026 at 07:03 PM

*Only if you scale the members by their relative SNR

That was my whole point and that’s why I recommended using weighted averaging by either SNR or PSF Signal Weight.

John

John Hayes avatar

Kevin Morefield · Mar 7, 2026 at 07:33 PM

I think it’s key to add that you do not use rejection when creating the SuperLum.

Thanks Kevin. That’s right and I neglected to mention that.

John

Mark McComiskey avatar

A follow-on question: should DBE be applied to the masters before running integration?

andrea tasselli avatar
I'd hope the masters would have already been corrected for background flatness.
Mark McComiskey avatar

Not sure I follow. The unprocessed masters come out of preprocessing without having had gradients removed (apart from the impact of local normalization, if used). It’s pretty typical to then apply DBE or the equivalent as an early processing step to remove the gradients that appear in the masters.

It occurred to me that removing these gradients from the Lum and the RGB before integrating the Lum and the RGB lightness might help avoid issues created by incompatible gradients, given how few “subs” go into the integration.

andrea tasselli avatar
That's what I'm saying. I expect that background gradient removal would be carried out before any further processing.
Mark McComiskey avatar

Got it. So you apply gradient removal before creating the Superluminance, just to confirm?

andrea tasselli avatar
Mark McComiskey:
Got it. So you apply gradient removal before creating the Superluminance, just to confirm?

Yes indeed.
Charles Michaud avatar

Every time I’ve tried to create a super luminance using RGB data, I’ve measured a fairly clear, meaningful gain in SNR. In practice, however, that improvement is often too subtle to be noticeable to the eye once it’s integrated into my full processing workflow, so for me it rarely feels worth the time required to build it.

My impression is that it might only really be worthwhile if the RGB integration time is much longer than the luminance integration time. But since that’s not really a logical way to structure an acquisition plan, it ends up being a fairly rare situation.

I mainly use luminance data to reveal contrast and dynamic range. In the end, I feel that the most noticeable choices affecting the final image come from aesthetic decisions: how contrast and stretching tools are applied to the luminance alone.

So I tend not to get distracted by things that are objectively better from a data standpoint but that, in practice, don’t translate into a clearly visible improvement in the final image, at least for me.

Scott Badger avatar

John Hayes · Mar 7, 2026, 05:48 PM

I’ve had success with the following method. […]

Hi John, I hadn’t heard of this method before and since I’m right at that point in my current processing, I’ll give it a try.

Before extracting the Lab lightness, are you doing anything with the RGB combo: SPCC, gradient correction, BlurX correct only, BlurX sharpen? And/or anything to either the Lum or lightness before integration?

Cheers,
Scott

John Hayes avatar

Scott Badger · Mar 9, 2026 at 03:34 PM

Before extracting the Lab lightness, are you doing anything with the RGB combo; SPCC, gradient correction, BlurX correct only, BlurX sharpen? And/or anything to either the Lum or lightness before integration?

Scott,

It is probably best to create the super-Lum combination with the image statistics that come right out of the raw stack. So, I’d recommend combining the raw, stacked Lum data with the Lum channel of the combined RGB data right after it’s stacked. Once you have everything combined, then do SPCC, gradient correction, BXT, NXT, or whatever you want on that result.

John

Scott Badger avatar

John Hayes · Mar 10, 2026, 12:31 AM

It is probably best to create the super-Lum combination with the image statistics that come right out of the raw stack. […]

Thanks John! I guessed that was probably the approach to use and already started with a couple variations. I’ll let you know how it goes tomorrow.

Cheers,
Scott
