Adding "L" Data To RGB Data in Pixinsight

Jim Raskett · Mikołaj Wadowski · Craig Towell
29 replies · 749 views
Jim Raskett:
I have been struggling with my first LRGB dataset of M33 for several months now. I have been an OSC shooter for many years and have enjoyed OSC processing. I added a Minicam8 mono to my kit a while back and have had tons of fun with mono processing, but only with narrowband data.

It seems to me that my biggest issue is how to incorporate the Luminance data into the RGB image in Pixinsight. I have seen several videos on YouTube that work somewhat for me, but they are all quite different in approach. I have been working with the Pixinsight LRGB Composition series and have just gotten to the point of adjusting the brightness/contrast of the non-linear processed L image to closely match the brightness/contrast of the partially processed non-linear RGB image. I then add the L image to the RGB image with ChannelCombination in the CIE L*c*h* color space. This can result in a nice brightness/detail enhancement to the RGB image, but only if I adjust the L image just right. Determining “just right” is one of my issues, but more importantly, is this the best way to go about adding L to RGB?

Another question that I have is just how far to bring both the L image and the RGB image in processing before blending?

I have read that the ImageBlend script offers another method and may be a better choice for combining L with RGB. I have looked at it, but it would take some work to fully understand.

I am looking for some basic information like combination methods and how/when blending plays into workflow.

Thanks a bunch for any tips or direction that you can give me.
Mikołaj Wadowski:

Matching the intensity of L and RGB when combining LRGB non-linearly is, at least in my opinion, in the top 3 most difficult parts of processing. There is a simpler way though. L can be combined with an effectively perfect intensity match while linear using a very simple and predictable procedure:

  1. Basic processing on both L and RGB (Color calibration necessary!)

  2. Extract the average of the color calibrated RGB

  3. Linear fit L to that average

  4. Multiply RGB by L and divide by the median of L to preserve the original brightness - RGB * L / med(L)

If done correctly, you won’t need to worry about stretching both L and RGB to the same intensity, washed out colors, or other issues with non-linear combination.
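In plain-Python terms (just an illustration of the math, not PixInsight code; each "image" is a flat list of pixel values in [0, 1], and the function names are made up), the four steps look like this:

```python
from statistics import median

def linear_fit(source, reference):
    """Least-squares fit a*source + b ≈ reference; returns the fitted source."""
    n = len(source)
    mx = sum(source) / n
    my = sum(reference) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(source, reference))
    var = sum((x - mx) ** 2 for x in source)
    a = cov / var
    b = my - a * mx
    return [a * x + b for x in source]

def combine_lrgb_linear(r, g, b, lum):
    # Step 2: average of the color-calibrated RGB channels
    avg_rgb = [(ri + gi + bi) / 3 for ri, gi, bi in zip(r, g, b)]
    # Step 3: linear fit L to that average
    l_fit = linear_fit(lum, avg_rgb)
    # Step 4: RGB * L / med(L) preserves the original brightness
    m = median(l_fit)
    scale = [li / m for li in l_fit]
    return ([ri * s for ri, s in zip(r, scale)],
            [gi * s for gi, s in zip(g, scale)],
            [bi * s for bi, s in zip(b, scale)])
```

In PixInsight itself, steps 2–4 are just PixelMath expressions (avg($T[0], $T[1], $T[2]) for the average, $T * L / med(L) for the combine) plus the LinearFit process.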

As to how far both images need to be processed before combining, it’s mostly preference. I would definitely remove gradients and calibrate the colors. I also like to BlurX both images separately. Same with denoising - RGB at 100% and L to taste. But I prefer to extract stars after combination, so there aren’t any issues with galaxies getting extracted from RGB but not from L, or vice versa.

Jim Raskett:
Mikołaj Wadowski:
Extract the average of the color calibrated RGB

Linear fit L to that average


Hi Mikolaj,

Thank you very much for the response! Sounds like a fairly simple process, but I want to be sure what you mean by "Extract the average of the color calibrated RGB". It is probably something quite simple, but possibly I am overthinking what it means. What exactly am I extracting, and what do I do with the extracted average?

Thank you! 
Jim
Mikołaj Wadowski:

Jim Raskett · Jan 4, 2026, 03:23 PM


…What exactly am I extracting and what do I do with the extracted average?

It’s just the average of the RGB channels: avg($T[0], $T[1], $T[2]) in PixelMath.
It is indeed simple, but to be fair I phrased it quite poorly.

Jim Raskett:
Hi Again Mikolaj,

I appreciate the response and the extraction PM worked perfectly. 

I'm having some "Incompatible image geometry" errors running LinearFit on the L image. I tried registering the two images and applying the same DynamicCrop to both, and am still getting the same error.
I am sure I am just being humbled by PI!

Much thanks,
Jim
Mikołaj Wadowski:

Jim Raskett · Jan 4, 2026, 04:54 PM

Having some Incompatible image geometry issues running LinearFit on the L image. …

Hi,

The image you’re creating with PixelMath is probably a color image. By default, PixelMath makes an image with the same number of channels as the target image, so here it’s three. You just need to set it to create a new grayscale image instead, and then it’ll work. Otherwise, you’re trying to linear fit a monochrome image to what is, technically, a color image.

Jim Raskett:
@Mikołaj Wadowski

I just figured that out! I converted the extracted image to grayscale and LF worked perfectly. Below is the result: the extracted RGB image on the left, and the "L" image on the right that has had LF applied. The brightness looks quite different. Does this look like the expected result?

I apologize for so many questions. I didn't want you to have to spend this much time responding to my post!

Mikołaj Wadowski:

Jim Raskett · Jan 4, 2026, 05:47 PM

…The brightness looks quite different. Does this look like the expected result?

Hm, yeah that doesn’t look right, they should be near identical in brightness.

I assume they have the same STF applied? It looks like they do but it’s better to rule out the simplest solutions first :)

If they do, try lowering the ‘reject high’ parameter in LF. If that doesn’t work, I see a dark halo around the galaxy in your RGB data, likely a gradient extraction artifact. Try redoing gradient extraction, this time being more careful not to extract actual signal. Lum on the other hand looks perfectly fine if you need a reference of what to aim for.

My guess is that either the dark halo or different clipping points might be confusing LinearFit (the latter being less likely but easier to test).
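For intuition, ‘reject high’ does something like this (a plain-Python sketch of the idea, not the actual LinearFit implementation; the function name and default threshold are made up):

```python
# Pixels above the rejection threshold in either image are excluded from
# the least-squares fit, so near-saturated stars can't skew the slope.
def fitted_coefficients(source, reference, reject_high=0.92):
    pairs = [(x, y) for x, y in zip(source, reference)
             if x < reject_high and y < reject_high]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    var = sum((x - mx) ** 2 for x, _ in pairs)
    a = cov / var
    return a, my - a * mx  # slope and offset, applied to the whole image
```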

And don’t worry about the questions! These are perfectly valid. If you need any further help, feel free to ask.

Jim Raskett:
Thanks Mikołaj. When I get time later this afternoon, I’m going to start from scratch.
I appreciate all the help that you’ve given me so far and I will respond back with what I find. Thanks again.
John Hayes:

Jim Raskett · Jan 4, 2026 at 01:55 PM

I have been struggling with my first LRGB dataset of M33 for several months now. … I am looking for some basic information like combination methods and how/when blending plays into workflow.

If you look through PixInsight’s educational videos, there’s a great one about how to combine Lum with RGB. I also talked about this subject in my TAIC talk, “Galaxy Studio”, which you can watch here: https://www.youtube.com/watch?v=1XBon7x6kio.

John

Jim Raskett:
Hi John,

Is this the Pixinsight instructional video that you were referring to? 
https://youtu.be/eciH4yn3r0Q?si=fXTFsnZsWIjMVIfA

I referred to that one in my original post, but started getting stuck. See below.

“I have been working with the Pixinsight LRBG Composition series and have just gotten to the point of adjusting the brightness/contrast of the non-linear processed L image to closely match the brightness/contrast of the partially processed non-linear RGB image. Then adding the L image to the RGB image with ChannelCombination in the CIE L*c*h* Color Space. This can result in a nice brightness/detail enhancement to the RGB image but only if I adjust the L image just right. Determining “Just right” is one of my issues, but more important, is this the best way to go about adding L to RGB?”

That video is well done, and if it is the one that you are referring to, I will dive back in.

First though, I will watch your TAIC presentation. Thanks a bunch, John!
John Hayes:

Yes, that’s the PI video I was referring to. If you wade through the whole thing, it is pretty good. The method that I outline in my TAIC presentation is very similar. If you can find the time, watch both videos (mine and the one from PI). Sometimes setting the speed to 1.25x helps speed things up, and you can always slow down if you miss something.

Good luck with it!!

John

Marco Montella:

I read the available resources (including John’s excellent lecture at TAIC!), and it seems that LRGB merging was coded up and intended to work strictly on non-linear inputs.

On the other hand, it’s not clear to me why the linear method should not be used. I have been following this pipeline for months, with (at least apparent) success:

RGB: gradient correction, color calibration, mild deconvolution, star removal

Lum: gradient correction, mild deconvolution, star removal

Linear Fit of Lum to RGB_L component > LRGB Combination

Interestingly, LF results are substantially better if the stars are removed from both reference and target. I imagine that (similar to SPCC), saturated pixels can throw off linearity and generate skewed results such as those shown by Jim Raskett.

Jim Raskett:
Marco Montella:
I read the available resources (including John’s excellent lecture at TAIC!), and it seems that LRGB merging was coded up and intended to work strictly on non linear inputs. … Interestingly, LF results are substantially better if the stars are removed from both reference and target.

Very interesting, Marco! Looking forward to trying your technique, hopefully later today. Yes, something definitely seems to be skewing the result; however, I have yet to try Mikołaj Wadowski's idea of lowering the 'reject high' parameter in LF.

Thank you so much for this information and I am looking forward to giving it a spin!

Jim
Jim Raskett:
Hi @Mikołaj Wadowski  and @John Hayes 

Thank you both for all of the very helpful information!

Mikołaj,
I will try to work more with the LF settings that you recommended very soon. The information that you have given me will be very helpful in my quest to correctly add L to RGB. Many thanks. I have to go out of town today, so it might be a day or two before I get back to working with the data.


John,
I really appreciate that you let me know that I was on the right track with the PI instructional video. I kind of lost my way where I mentioned above, but after I finish your TAIC presentation, I am sure that I will have a much better understanding of the blending process. 
I hope to get back soon and update my progress on this thread. I am very grateful for your help!


A question for either of you folks. I know that this sounds strange (it really does to me!), but applying the same stretch to the RGB and Lum was not as simple as I thought. I figured that I could just apply an STF stretch, drag it to HT, and apply it to each image. Well, apparently it isn't that easy. The RGB image stretches fine, and the Lum turns white (per usual), but when the screen transfer function is disabled, the Lum stays white!
I feel like a beginner here, but can you recommend a method to apply the same stretch to both the RGB and the Lum images?
Thank you!
Jim Raskett:
Marco Montella:
I read the available resources (including John’s excellent lecture at TAIC!), and it seems that LRGB merging was coded up and intended to work strictly on non linear inputs. … Interestingly, LF results are substantially better if the stars are removed from both reference and target.

Very interesting technique Marco!
I quickly went through your method before leaving for the day and it went together quite easily. I am going to work with Mikołaj's and John's methods to compare.

Thanks!
Craig Towell:

Not sure if it’s of any use, but I follow the method whereby a stretched and processed Luminance image is combined with a stretched and processed RGB image using ChannelCombination, but I use the CIE L*a*b* colour space. I have the L box checked and select the luminance image for that. The a and b boxes are unchecked, and then I drag the instance onto the RGB image, which is then replaced with the LRGB image.

Before doing this I stretch the two images by eye to the same intensity, using the histogram in HistogramTransformation as a guide.

Before combining I usually also give the RGB image a good blur with the Convolution tool, to really smooth it out. I also do some selective desaturation of the background if it is a bit colourful (using an inverted luminance mask and the saturation tool).

Jim Raskett:
Craig Towell:
Not sure if it’s of any use but I follow the method whereby a stretched and processed Luminance image is combined with a stretched and processed RGB image using channel combination, but I use the CIE L*a*b colour space. …

Thanks Craig,
Sounds like your method is very similar to John’s. I haven’t watched all of John’s presentation yet, so I’m not sure if he does any blurring or desaturation.
I was about ready to give up on the luminance altogether, but I have several methods here in this thread to try out.
Thank you for sharing your experience!
Jim Raskett:
Just a quick query.

I see that Adam Block has a video on YouTube about using the ImageBlend PI script for combining L with RGB.

Has anyone tried this? And what is your experience?
starfield:

Hi Jim,

When I’m having a hard time blending the RGB and Lum, I do find the ImageBlend approach can help. The main advantage is that it allows you to interactively tweak the blackpoints/midtones of each of the two layers to help you get a better match. A couple of caveats:

1) You should try to get the Lum/RGB match as close as possible, as described in the Pixinsight videos.

2) When you’re tweaking the black levels, you do need to make sure you’re not clipping the layer.

—Steve

Mikołaj Wadowski:

Jim Raskett · Jan 5, 2026, 11:42 AM

…can you recommend a method to apply the same stretch to both the RGB and the Lum images?

Unless you linear fit them, Lum and RGB will almost always have different scaling and background levels. In your case, RGB is probably way darker as an image, and applying its STF to Lum overstretches Lum big time.

The only way I know to apply the same stretch is either the method I described, or @Marco Montella’s. They’re very similar, though I found that using LRGBCombination is slightly less accurate, since it’s meant for non-linear images (could be me doing something wrong, though).

What Marco said about star extraction is interesting, though. If it is indeed the different clipping points that confuse LinearFit, you could extract stars before fitting as they said, and then obtain the linear fit functions. You can then apply them manually to the image before stars were removed, using pixelmath. This sounds more reliable than moving LF sliders around.
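Roughly, in plain Python (illustrative names, not PixInsight code; in PI the last step would be a PixelMath expression like a*$T + b, typing in the constants LinearFit reports):

```python
# Derive linear-fit coefficients from the STARLESS images, then apply the
# same a*x + b to the original image that still has its stars.
def fit_coefficients(source, reference):
    n = len(source)
    mx = sum(source) / n
    my = sum(reference) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(source, reference))
    var = sum((x - mx) ** 2 for x in source)
    a = cov / var
    return a, my - a * mx

def apply_fit(image, a, b):
    return [a * x + b for x in image]

# a, b = fit_coefficients(starless_lum, starless_rgb_avg)
# lum_fitted = apply_fit(lum_with_stars, a, b)
```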

Helpful Insightful Respectful Engaging
Michele Campini:

Marco Montella · Jan 5, 2026, 08:27 AM

Linear Fit of Lum to RGB_L component > LRGB Combination

Can you explain this step in detail? Thank you :)

And to confirm, the stretch comes after the LRGB combination, right?

Ferran Bosch (S.A.C.):

📷 image.png

Fast and easy.

Jim Raskett:
Hi Ferran,

Yes, this method works fairly quickly and does produce nice results.

Workflow:
Gradient Correction to L, R, G, and B masterlights (autocropped)
LinearFit to R, G, and B using L as the reference.
For L: Stretched with MultiscaleAdaptiveStretch
For L: Light NXT
For L: AdvSharpening script
ChannelCombination on the R, G, and B images to produce an RGB image.
For RGB: SPCC
For RGB: BXT
For RGB: Light NXT
For RGB: MultiscaleAdaptiveStretch
LRGBCombination to apply L to the RGB image
Proceed with non-linear processing.

This is an easy starting point. I tried something similar before and didn't get the results that I was looking for, but got it this time.

Thanks to all that contributed to this thread. I am going to try all of the suggested methods and compare them to the result above. 

Thank you, thank you, thank you!

Jim
Craig Towell:

Jim Raskett · Jan 6, 2026 at 05:35 PM

Yes, this method works fairly quickly and does produce nice results.

Workflow: … For RGB: BXT … LRGBCombination to apply L to the RGB image …

You need to be using BXT on your L channel, not your RGB. The RGB is supplying the colour only, not the details. Because of the way our vision works, all the detail and the vast majority of the information contained in the image comes from the brightness values of each pixel (luminance). The colour values dictate the colour of each pixel (obviously), but it’s surprising just how poor an RGB image can be whilst still giving a great LRGB image, if the L is good.

During capture, and processing, concentrate on the L channel and try to make sure it’s the best it can be.
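To illustrate the "luminance carries the detail, RGB supplies the colour" idea with Python's standard colorsys module (using HLS lightness as a crude stand-in for CIE L*, which the standard library doesn't provide; real LRGBCombination works in L*a*b*/L*c*h*):

```python
import colorsys

# Replace each pixel's lightness with the processed Lum value while
# keeping its hue and saturation - the RGB contributes only colour.
def replace_luminance(rgb_pixels, lum_pixels):
    out = []
    for (r, g, b), new_l in zip(rgb_pixels, lum_pixels):
        h, _old_l, s = colorsys.rgb_to_hls(r, g, b)  # discard RGB lightness
        out.append(colorsys.hls_to_rgb(h, new_l, s))
    return out
```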

📷 IMG_0491.jpeg
