[RCC] SHO M16 Current Version and Narrowband Processing

5 replies · 156 views
Antha Adkins

I’m still working on narrowband imagery processing and would appreciate any comments you have on my current version of M16 or the workflow below. My goal here was to get as much detail and brightness as I could in the “jewel box” outer nebula while retaining the details of the inner nebula and the Pillars of Creation. Also, I like color.

https://app.astrobin.com/u/acubedsf?i=w0nvua#staging

📷 M16_SHO4_stars_RGB_rot_crop_sign.jpg

Workflow:

Separate data for RGB stars and SHO nebula

RGB stars

Channel Combination

DBE

SPCC

SXT

BXT (run after SXT to avoid making blotches in the nebula in the RGB version, not relevant here)

Arcsinh Stretch by 4

ScreenTransferFunction to HistogramTransformation
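As an aside, the “Arcsinh Stretch by 4” step above is a color-preserving stretch: all three channels of a pixel are scaled by the same factor, so star color survives the stretch. Here is a minimal numpy sketch of the idea (using the per-pixel mean as the intensity estimate is my assumption, and PixInsight’s ArcsinhStretch also has a black-point parameter not modeled here):

```python
import numpy as np

def arcsinh_stretch(rgb, beta=4.0):
    """Color-preserving arcsinh stretch (sketch of the idea behind
    PixInsight's ArcsinhStretch; beta plays the role of the stretch
    factor).  rgb: float array in [0, 1], shape (H, W, 3)."""
    intensity = rgb.mean(axis=-1, keepdims=True)  # per-pixel mean intensity
    # Scale factor is 1 where intensity == 1 and grows for faint pixels,
    # so faint nebulosity is boosted while the top end stays in range.
    with np.errstate(divide="ignore", invalid="ignore"):
        factor = np.arcsinh(beta * intensity) / (intensity * np.arcsinh(beta))
    factor = np.where(intensity > 0, factor, 0.0)  # guard the I == 0 case
    return np.clip(rgb * factor, 0.0, 1.0)
```

Because one factor multiplies all three channels, the R:G:B ratios of each pixel are preserved, which is the point of using arcsinh on stars.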

SHO nebula – version 2

LinearFit to brightest data, Oiii

ChannelCombination using SHO palette

PixelMath to make one corner 1 to prevent stretching by DBE

DBE

BXT

NXT

ScreenTransferFunction to HistogramTransformation

NarrowbandNormalization

Curves

NXT
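For context on the first step of this block: LinearFit rescales a channel with a linear transform a + b·x so its statistics match a reference image (here Oiii, the brightest data), which keeps one channel from dominating the SHO combination. A plain least-squares sketch of the idea (the real PixInsight tool uses a robust fit with rejection limits, not modeled here):

```python
import numpy as np

def linear_fit(target, reference):
    """Match 'target' to 'reference' with a linear transform a + b*x,
    a least-squares sketch of what PixInsight's LinearFit does.
    Both arguments are float arrays of the same shape."""
    # Fit reference ~ a + b * target, then apply that transform to target.
    b, a = np.polyfit(target.ravel(), reference.ravel(), 1)
    return a + b * target
```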

SHO nebula – version 3

Start with BXT version

GeneralizedHyperbolicStretch

NarrowbandNormalization

NXT

Curves

HistogramTransformation to shift black point

ImageBlend – version 4 (one shown)

Version 2 as base, Version 3 as blend, Darken

HistogramTransformation to adjust black

PixelMath combine RGB stars and SHO nebula
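For anyone following along, the two blending steps above can be sketched in numpy. The ImageBlend “Darken” mode is a per-pixel minimum, and a common PixelMath recipe for putting RGB stars back into a starless nebula is a screen blend; the exact expression used here isn’t stated in the thread, so treat the second function as an illustration:

```python
import numpy as np

def darken_blend(base, blend):
    """ImageBlend 'Darken' mode: per-pixel minimum of base and blend."""
    return np.minimum(base, blend)

def screen_combine(nebula, stars):
    """Screen blend, the common PixelMath idiom ~((~nebula)*(~stars)),
    often used to recombine RGB stars with a starless SHO nebula.
    Whether this exact expression was used here is an assumption."""
    return 1.0 - (1.0 - nebula) * (1.0 - stars)
```

Screen never darkens either input and never exceeds 1, which is why it is a popular way to add stars without clipping the nebula.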

Thanks for your insights,

Antha

Brian Puhl

The biggest issue that stands out in this image is the lack of a neutral, black background. It’s very blue. Little things like this can be solved easily at the end of the process with HistogramTransformation, either by aligning the black points or by using the ‘Boosted STF’ function to expose any color cast in the background. The following screenshot is not an ideal image, but it demonstrates how this could be done. In this histogram window, you can see how the black points of the RGB channels do not align. You don’t even have to see the image preview to achieve this step. A subsequent step would be working in CurvesTransformation to bring the luminance up just a tad to compensate for the darker area.
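The black-point alignment Brian describes can be sketched in numpy: estimate the background level of each channel, then shift every channel so those levels coincide, which neutralizes the color cast. Two assumptions in this sketch: it uses the per-channel median as a stand-in for the histogram peak, and `target` is an arbitrary illustrative value:

```python
import numpy as np

def align_black_points(rgb, target=0.05):
    """Shift each channel so its background level sits at the same value,
    neutralizing a background color cast.  A simplified stand-in for
    aligning the RGB black points in HistogramTransformation."""
    out = np.empty_like(rgb)
    for c in range(rgb.shape[-1]):
        bg = np.median(rgb[..., c])          # crude background estimate
        out[..., c] = np.clip(rgb[..., c] - bg + target, 0.0, 1.0)
    return out
```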

📷 image.png

The core looks good from a distance; I can tell some care was taken during processing to preserve things. What stands out, unfortunately (and this holds true for the entire image), is something going on with the blue channel: a noise pattern that stands out even more once you bring up the midpoint. In the following screenshot I have not altered your data, only raised the RGB midpoint to illustrate what I’m seeing.

📷 image.png

I’m not sure how you got here, but since the majority of your issues exist in the blue channel, I suspect it started with GHS.

This noise pattern is exhibited throughout the image, and as a whole I suspect you were trying to push your data a bit too hard. The signal wasn’t quite there, and the noise got amplified.
📷 image.png

A lot of folks use GHS, but I’m not a fan of it. Personally, I just use the auto-STF function, relax the midpoint and black point a tad, then apply the STF to HT to stretch. Further stretching is accomplished with Curves, with each step done at 2:1 zoom or better to inspect for noise. It’s similar to stretching a rubber band: eventually you see the little cracks. Take care not to go too far with it.
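For readers unfamiliar with what “apply the STF to HT” actually does: the auto-STF computes a per-channel shadows clip and a midtones balance, and applying it to HistogramTransformation bakes that function into the data. The midtones part is PixInsight’s midtones transfer function (MTF); a sketch, with the parameter names mine:

```python
def mtf(m, x):
    """PixInsight's midtones transfer function: maps x == m to 0.5 while
    fixing 0 -> 0 and 1 -> 1.  'm' is the midtones balance in (0, 1)."""
    if x == 0.0:
        return 0.0
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def stf_stretch(x, shadows, midtones):
    """One channel of an STF-style stretch: clip the shadows, rescale,
    then apply the midtones balance.  'shadows' and 'midtones' are the
    sliders you relax 'by a tad' before applying the STF to HT."""
    t = min(max((x - shadows) / (1.0 - shadows), 0.0), 1.0)
    return mtf(midtones, t)
```

Relaxing the sliders before applying, as Brian suggests, simply means a larger `shadows`-to-`midtones` gap, i.e. a gentler stretch that leaves headroom for Curves afterwards.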

Brian Puhl

Also worth noting: you cannot properly use BlurXterminator on a starless image unless you have sampled the PSF of your stars for manual input. The PSF is a critical input to how much BlurX sharpens.

Antha Adkins

Thank you, Brian, this is very helpful. One of the things that I was struggling with was the background, and you’ve given me some useful pointers on that. I had not considered using HT to fix it. Is that something better done earlier in the flow?

I will look at the noise as well.

FWIW, I had sampled the PSF from the stars, just didn’t write it down as a step.

Thanks for your help!

Antha

Brian Puhl

Antha Adkins · Sep 20, 2025, 09:02 PM

Thank you, Brian, this is very helpful. One of the things that I was struggling with was the background, and you’ve given me some useful pointers on that. I had not considered using HT to fix it. Is that something better done earlier in the flow?

With narrowband, there are multiple ways to get to the same result. In my basic narrowband process, LinearFit is not involved at all. From day to day I try different methods, such as combining the channels while linear and performing a basic background neutralization plus color calibration (a linear-only function), or combining them after stretching and then using HT to align the black points properly. Care must be taken when stretching individual channels with Curves to anchor the black point on your curve line, to avoid things like what happened in your image. I don’t believe any one way is better than the others, but sometimes stretching the images before combining lets you boost those lower-signal areas more cleanly.

The end goal is always the same: present your data to its potential. Don’t try to stretch the rubber band too far, and apply conservative amounts of denoise. Create a neutral, appealing background if possible, with no visible color casts. I strongly urge you to inspect your image at the pixel level after every single step, examining your noise profile.

If it helps keep you in check, create a basic SHO combination of your data set (unlinked STF to HT) prior to any manipulation, and compare it to your end result. Did you lose detail? Did the noise pattern get wacky? Try creating a proper image without using any noise reduction, with relaxed stretches and mild curves. It will take some effort, and you cannot stretch the data far, but hopefully it gets you in the mindset to use denoise less.

Antha Adkins

📷 M16_SHO_stars_RGB_advice4_crop_sign.jpg

Hi Brian,

Thanks so much for your help with this.

Based on your feedback, I’ve done a number of things:

- Calculated the FWHM from a combined SHO with stars image – I had been using the FWHM from my RGB stars. In retrospect, I shouldn’t have thought they would be the same, especially since the RGB data was 20 second subs and the SHO data was 3 minute subs. I just hadn’t thought that through.

- Spent some time looking at the noise after each step. It turned out that NXT wasn’t happy with the bad edges on the original uncropped image, so the first application of NXT didn’t actually do much. I cropped after DBE and then NXT did a much better job. (Of course that also meant I had to crop my RGB stars at the same time so they’d stay aligned, which is what I had been trying to avoid.) I tried processing without NXT, but the noise just got so much worse as I stretched the image, so I left NXT in early. I did as you suggested and watched the noise as I stretched it, but then ran NXT again towards the end since it became necessary.

- Used HT to shift RGB peaks at the end and set the black point.

So the final workflow:

LinearFit to brightest data, Oiii

ChannelCombination using SHO palette

PixelMath to make one corner 1 to prevent stretching by DBE

DBE

DynamicCrop to get rid of bad edges

Same DynamicCrop on RGB stars

BXT

NXT

NarrowbandNormalization

ScreenTransferFunction to HistogramTransformation

Curves

NXT

HT to align histograms and set black point

PixelMath to combine nebula with RGB stars

I’m happy to get more comments or declare victory.

Either way, your comments were really helpful and I’ve added some tricks to my toolbox.

Thanks!

Antha
