[RCC] Galaxy post processing M101

10 replies•933 views
akshay87kumar avatar

Requesting your critique - this is my first attempt at post-processing galaxy data. I obtained the data from a TAIC workshop.

The LRGB subs were already stacked and registered, and even cropped to a good extent. I proceeded with the steps below, providing my rationale to get your critique and feedback. There are some subs before and after the nova in the original data, but I did not bother going into those details for now!

  1. Background removal of each channel individually

  2. Used RGB channel recombination to get a colour image. I tried LRGB recombination directly as well, but my images with that workflow came out lacking colour. I watched a few YouTube videos - especially PixInsight ones - which explained that the “lightness” of the RGB and L channels has to be comparable. So I followed this sequence:
    2a - Linear fit using the L channel as reference
    2b - RGB combination followed by SPCC
    2c - Stretched the RGB image (without removing stars yet, but ensuring stars don’t become white). In this case, I just used the STF, applied it to HistogramTransformation, and got a decent image. I have usually struggled with GHS, and when I have attempted Seti Astro’s Statistical Stretch, I get carried away with details like Target Median and Curves Boost. In my ‘greed’ to get maximum detail, I would end up using a lot of curves boost, which usually ruined the image. In the end I chose to just apply STF to HT and get the stretch done
    2d - Stretched the L channel similarly
    2e - Since the stretched L channel was too bright compared to RGB, I applied HDRMultiscaleTransform (9 layers) to reduce the core and arm brightness
    2f - With the lightness (brightness) matching now, I injected L into the RGB channel and got my LRGB image.
    [Question]: I have seen tutorials suggesting the use of CIELab (perceptive brightness) - what is the general recommendation that you follow?

  3. I applied a galaxy core mask and increased the saturation of the galaxy (while avoiding the dark areas) - done in ~3 steps. Applied some S-curve stretches as well.

  4. This gave a decent enough image. Since I had an Ha layer as well, I decided to play around with continuum subtraction and CombineNBwithRGB too. This was a long trial and error. I actually started from the raw basics - using ChatGPT to derive the raw PixelMath as a step-by-step process. While this really helped me understand “why” and “how” the script does it, it was a painful (but enjoyable) learning process. The first image was horrid, but I gave it another pass using the scripts. While I do not remember the thresholds I used, the scripts were far easier than raw PixelMath.

    4a - Applied BlurXterminator and Graxpert Denoise to linear layer
    4b - Stretched the Ha channel by applying STF to HT (did not bother with GHS or scripts for stretching)

  5. Again applied the galaxy core mask (the mask did not have ‘screen’ turned on, i.e. it was of uniform brightness), then did LocalHistogramEqualization - once with kernel radius ~60 and 15% strength, and again with kernel radius ~160 and 15% strength.

  6. With the galaxy core mask still on, applied Unsharp Mask (default Std. Dev = 2, reduced the amount to 0.65, deringing turned on, no changes to dynamic range extension)

  7. Some final touch-ups using S-curves and colour saturation, very mildly boosting the blue channel as well.
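For anyone curious what the linear fit in step 2a is doing under the hood: it solves for a per-channel scale and offset in a least-squares sense so the channel's intensities match the reference. A minimal pure-Python sketch of the idea (the function name is mine, not PixInsight's):

```python
def linear_fit(channel, reference):
    """Least-squares scale/offset (a, b) so a*channel + b best matches
    reference - an illustrative sketch of what LinearFit computes."""
    n = len(channel)
    mx = sum(channel) / n
    my = sum(reference) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(channel, reference))
    var = sum((x - mx) ** 2 for x in channel)
    a = cov / var
    b = my - a * mx
    return [a * x + b for x in channel]
```

After this, all channels sit on a comparable intensity scale, which is why it is done before RGB combination.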

Would appreciate your critique on:

  • The overall workflow - what looks OK and does not look OK in the image

  • BlurXterminator and GraXpert denoise - I applied these only on the L channel, assuming that this is the channel that lends all the detail to the image. Is this correct? Somehow I overdo NoiseXterminator and have since moved to GraXpert denoise - maybe a personal bias!

  • I struggled a lot with LRGB initially - what is the ‘right’ way to do this? The problem with going through too many videos is that I am now confused about whether it is done on the stretched or unstretched image. In the end I went with the PixInsight tutorial - performed a linear fit on the RGB channels, combined them in the linear state, and then stretched. Then stretched the L channel too, with some HDRMultiscaleTransform to bring the L brightness down. I am sure there will be nebulae and galaxies where I can’t make this brightness match - what is the right thing to do there?

  • Should a linear fit be used on the Ha image as well (using RGB or LRGB as reference, etc.)? I feel there is a need, because I had to stretch the Ha image when my first continuum-subtraction output came out too dim - just not sure what the right approach is here.

  • Anything related to Continuum Subtraction and HaRGB combination that I should know?

📷 M101_TAIC_Akshay Kumar.jpg

Quinn Groessl avatar

It’s pretty good. Personally, I would have cropped it tighter on the right to completely get rid of that star and its diffraction spike. Sticking with cropping, you also didn’t crop enough off the bottom - there’s still a stacking artifact down there. Just a few more pixels would have done it.

I don’t know what settings you used for SPCC, but the color correction is off. Sometimes it’s hard to know what settings to use when it’s physically not your camera/filters.

Even though I don’t particularly like the colors, the amount of saturation is pretty good. I think you did a pretty good job with your masking. Your Ha looks pretty good. Sometimes it can be tricky to get it to look like it’s naturally in the image, but yours does. There are some pretty red pixels in the background that are probably from when you added your Ha. Unless you’re adding a lot of Ha to the background, you might want to mask it out so that that doesn’t happen. Also your stars look decent. The colors are “meh” but the shape and brightness are pleasing. Last the dust in the spiral arms and the core look good to me. By that I mean stretched just about right and not overly processed.

There are some artifacts in your image, probably from BlurX or GraXpert denoise. They’re in the background areas and really hard to notice, but if you know what you’re looking for, they’re there.

At what point exactly did you use BlurX anyway? Reading your post it seems you did it to basically your finished image?

akshay87kumar avatar

Quinn Groessl ¡ Aug 19, 2025, 10:19 AM

Even though I don’t particularly like the colors

Now that I am relooking at the image, even I feel the colour could be better - it isn’t the usual blue of spiral arms 🙂 I really have a hard time getting the arms right! Either everything would come out washed out, almost white (when I was not getting LRGB right), or when the image came out decent, the arm colours were way off! What settings do you typically use to manage those colours - curves on the RGB channels, or other ways of adjusting it?

Quinn Groessl ¡ Aug 19, 2025, 10:19 AM

At what point exactly did you use BlurX anyway

I used BlurXT on the linear luminance image only, pretty much at the beginning. So it was background extraction → BlurXT → stretch. I ran BlurXT with sharpen stars at 20%, nonstellar sharpening at 50%, and reduce halos practically at zero.

Quinn Groessl ¡ Aug 19, 2025, 10:19 AM

the color correction is off

Yes, I did not have any clue about the filters used, so I just ran it on the default settings after plate solving. Do you mean the star colours are off, or that the spiral colours don’t look OK?

Mikołaj Wadowski avatar
I think there's a lot of room for improvement in your workflow. I think the overall brightness of the image looks pretty good but I'm not a fan of the colors.
Background removal of each channel individually

I don't think I understand what you did here. If you just moved the black point such that the lowest-brightness pixel is 0.0, this is pointless and can be safely skipped. Background Neutralization (either the process or the one in SPCC) will rescale the image either way.
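To see why a pure black-point shift is a no-op here: it only subtracts a constant, which any later rescaling or neutralization cancels out. A toy illustration (helper names are mine):

```python
def shift_black_point(img):
    """Offset the image so the darkest pixel sits at 0.0."""
    lo = min(img)
    return [v - lo for v in img]

def normalize(img):
    """Rescale to the [0, 1] range, as later rescaling steps effectively do."""
    lo, hi = min(img), max(img)
    return [(v - lo) / (hi - lo) for v in img]
```

After normalization, the shifted and unshifted images are pixel-for-pixel identical, which is the sense in which the step can be skipped.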
2a - Linear fit using the L channel as reference

Until recently I was making the same mistake. Extracting lightness from RGB is not a linear process, and while it can give close-enough results, it's better to extract the average of all RGB channels from a color-calibrated image. If you're going to use LRGB Combination process, this also doesn't matter too much I don't think, since you're going to have to visually match the brightness either way. More on that later.
Stretched the RGB channel (without removing stars yet, but ensuring stars dont become white). In this case, I just used the STF and applied it to local histogram and got a decent image. I have struggled with GHS usually, and at times when I have attempted to use Seti Astro statistical stretch, I get carried away with details like Target Median and Curves Boost. In my ‘greed’ to get maximum detail, I would end up using lot of curves boost that usually ruined the image. Finally I chose to just apply STF to HT and get the stretched done

Personally, I think applying STF to HT is a perfectly fine way to stretch, you don't need to use any fancy stretches IMO. The immediate preview built into STF is very handy and paired with proper HDR, like iHDR or HDRMT, it's more than enough.
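For reference, applying STF to HT just bakes the auto-stretch parameters into a real histogram transform; the core of it is the midtones transfer function. A Python sketch of the MTF as I understand it from PixInsight's documentation:

```python
def mtf(m, x):
    """Midtones transfer function: maps x in [0, 1] given midtones
    balance m. m = 0.5 is the identity; mtf(m, m) = 0.5."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)
```

A small midtones balance (e.g. m = 0.25) brightens the faint signal, which is exactly what the STF auto-stretch chooses for you.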
2b - RGB combination followed by SPCC

I'm a bit unsure whether the colors, more specifically purple arms, are caused by inputting wrong data into SPCC, but it's possible. It's either that or Ha combination. You can always try plain old Color Calibration, it's simple and achieves pretty much the same in the end.
[Question]: I have seen tutorials suggesting the use of CIELab (perceptive brightness) - what is the general recommendation that you follow?

There are two ways to combine L with RGB that I know of: the LRGB Combination process, and combining in PixelMath while both files are still linear. Personally I switched to the latter as it's much more convenient and there's less room for error. How it works: you decon/BlurX and denoise the images first (I like to denoise RGB at 100% and then denoise Lum to taste), then linear fit Lum to the average of the RGB channels. After that you use a simple PixelMath expression Charles Hagen came up with: `Lum * ($T / Avg($T[0],$T[1],$T[2]))`, and that's it! Then you can stretch the image and work on it like normal. This ensures good color rendition as there's no mismatch between L and RGB brightness.
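That PixelMath expression works out to scaling each RGB pixel by the ratio of the new luminance to the current channel average, which preserves the colour ratios. A per-pixel Python sketch of the same idea (names and the epsilon guard are mine):

```python
def inject_luminance(r, g, b, lum, eps=1e-10):
    """Lum * ($T / Avg($T[0],$T[1],$T[2])) for one pixel: rescale RGB
    so its channel average becomes lum, keeping colour ratios intact.
    eps guards against division by zero in black pixels; real images
    would also be clipped back to [0, 1]."""
    avg = (r + g + b) / 3.0
    k = lum / max(avg, eps)
    return r * k, g * k, b * k
```

Because every channel is multiplied by the same factor k, hue and saturation are untouched while the brightness comes entirely from L.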
This gave a decent enough image. Since I had Ha layer as well, decided to play around with Continuum subtraction and CombineNBwithRGB also. This was a long trial and error. I actually started from the raw basics - using ChatGPT to get the raw pixel math as a step-by-step process. While this really helped me understand “why” and “how” the script does it, it was a very painful (but enjoyable) learning process. The first image was horrid, but I gave another pass using the scripts. While I do not remember the thresholds I used, the outcome was far easier than raw pixel maths.

Give Photometric Continuum Subtraction a shot, it's a one-click solution that always worked perfectly for me.
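Under the hood, continuum subtraction boils down to removing the scaled broadband signal from the narrowband frame. A per-pixel sketch, with the scale factor q taken as a given (Photometric Continuum Subtraction estimates it for you from star photometry):

```python
def continuum_subtract(ha, red, q):
    """Ha_cs = Ha - q * R: subtract the scaled broadband (red) continuum
    from the narrowband frame, clipping negatives to zero."""
    return [max(h - q * r, 0.0) for h, r in zip(ha, red)]
```

Getting q right is the whole game - too low leaves stars and galaxy continuum in the Ha frame, too high punches dark holes - which is why a photometric estimate beats manual trial and error.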


Maybe also try applying the denoise at a lower strength as it's starting to look a bit too smooth to me. I think that's it though, the rest of the workflow looks fine.

Hope all of that helps!
CS
Quinn Groessl avatar

For LRGB, it can be tricky. The way I do it is I stretch the RGB histogram until it looks good to me. Then I try to (roughly) match the L histogram to the RGB histogram. Sometimes I stretch it more, sometimes less; it takes a bit of trial and error to get it how I like. For me that works to keep the colors in the LRGB image pretty true to what I’ve gotten in my RGB image.

Okay, just reading it, it seemed like you were doing it on a nearly completed image because of how far down in your post you mentioned it.

I would say both. That bright star I thought you should cut off has green tint in it that it probably shouldn’t. Then a lot of the other stars are just kind of washed out. I can see the color that some are leaning towards, but they just don’t have it.

As for the spiral arms, they are more purple or magenta than what I’d expect. That really bright blue that you see in a lot of images would be taking it too far the other way.

It’s funny I was trying to find an image that I thought had really good colors and I found this one by Eric Coles (https://app.astrobin.com/i/r3kbme) and then I clicked the link to the data in your original post, and I see it looks like it’s his data that you’re using.

Tony Gondola avatar

The color balance is certainly way off. I can’t zoom in to really see star colors, but there seems to be a teal/green contamination in the brighter stars. Obviously the galaxy colors are off: the core looks good, but the arms shouldn’t be purple. I’m not a PI guy, so I can’t really give you processing advice. In general, it might be a good idea to simplify the workflow until you can get a reasonable color balance.

JayBac avatar

I think you need to crop out the noisy area at the bottom. That can mess things up sometimes and produce unpredictable results. Also, the arms should not be purple - did that perhaps happen when you boosted the blue channel?

Look up a good YouTube video on processing LRGB galaxies, try that, and extend out from there. It takes a lot of practice to get it down.

Die Launische Diva avatar

I find it hard to obtain an RGB image with decent color balance given the available dataset. I suspect this is due to excessive light pollution combined with the galaxy occupying most of the frame, which makes gradient removal difficult. I wonder if anyone else has tried this. Did anyone else, for a moment, think the channels were wrongly assigned?

My advice is to start with the RGB data alone and try to obtain a reasonably color-balanced image.

Quinn Groessl avatar

So I did a quick process on the data from the zip in the website you linked. I have a bit of magenta on the spiral arms, but it wasn’t there until I added the Ha in. SPCC using the correct filters corrected the colors great for me.

SPCC on RGB, BlurX on RGB and L, NoiseX on both, StarX on both, only kept the RGB stars. Used histogram saturation to stretch the stars how I wanted. Then I used GHS to stretch both the L and the RGB images, and LRGB Combination to add the L to the RGB. A bit of playing around with the curves. Masked the galaxy so I wouldn’t affect the background. Added some saturation. Added the Ha in. Reversed the mask and took some color out of the background to help balance it. Used unsharp mask to sharpen it (probably too much). Then rescreened the stars to finish.

📷 Image36.jpg

Nedim Bevrnja avatar

Hey,

my curiosity got sparked as well, so I downloaded the data and had a look.

This is what I came up with:

Over time I’ve developed my own workflow, and this time I just did a quick and dirty run. But honestly, the data is amazing - pinpoint stars, incredible depth and sharpness… wow!


CS
Nedim

📷 M101_LRGB_Ha.jpg

Erik Westermann avatar

It’s really quite a good capture. I found the color data a little noisy, but when I combined the L data, it improved a lot.
