Wizard Nebula - Just can't seem to get it right

Craig Dixon, Taras_M, andrea tasselli, Dustin Gazz, Jan Erik Vallestad
40 replies · 2.1k views
Craig Dixon
Is it just me that has a particular set of data that I just can't seem to process well? I collected 8 hours of data on the Wizard Nebula in January and still don't have a finished image as I just can't seem to process it to my satisfaction, despite several attempts. The image attached is where I'm at with it. I haven't put this on my profile yet as I'm still not happy but I'm just not sure where I'm going wrong.

Equipment Used:
Skywatcher 72ED (Doublet refractor - 420mm), ASI533MC Pro, L-enhance, EQ6-R Pro, ASIAir Plus, 120mm mini guide cam & scope, EAF

Processing in Pixinsight:
Crop, SPCC, BlurX, NoiseX, GHS (as best as I can figure it out), StarX, Bill's HOO normalisation & then colour masks, Curves, etc

Note: I know the general procedure for Bill's PixelMath tools is not to use colour calibration and then to stretch using his unlinked stretch tool, but I don't seem to get any better results that way either.

Is anyone able to give me any feedback? The image just looks kind of forced. Even after stretching gently in GHS, the true colour version just looks mottled and rough. When I compare what I'm getting to other images on AstroBin, it just looks like a completely different target.

I appreciate any help. Thanks in advance!

andrea tasselli
I guess you're right, it does look a bit forced. Blunted, for lack of a better word. And odd looking. I'd never do the processing steps you mentioned if you're looking to create a HOO-ish image.
Bob Lockwood
Your processing must have changed as your other NB images look really good. Try not to fix this one, and just start over.
Brian Puhl
It's really hard to tell, but your data looks kinda rough to start with. The stars are a little fat, probably doublet things, but the mottled background suggests a significant amount of color noise. You could share the stack on here if you want; I'd be willing to give it a shot. Also, since you're doing narrowband, I wouldn't worry about color calibration. Some folks do it, but I see zero point in it; narrowband is generally false color. Do you extract your red and green channels as narrowband? I've never messed with the L-enhance, but this is how I treat L-extreme data. If the mottle is still prevalent, you could do some range selection masking. Just a few things that I'd use, but that data looks pretty rough.
Craig Dixon
Thanks for the replies. I can't attach the stacked file here as there is a 5 MB limit, but I've added it to Google Drive here:

https://drive.google.com/file/d/11VvbHIs2CH3OGGCS2Db8qpaVDPNXteCO/view?usp=sharing

This data does seem bad from the start. The background is very blotchy, even just after applying an STF or an EZ Soft Stretch. It looks particularly terrible after star removal.

The stars are indeed really bloated, a lot of which is probably down to the doublet, but there isn't anything I can do about that. This stack is made up of 300-sec subs, so I did wonder if I might be better off reducing the exposure time to reduce bloat. At the time I took these subs I always set a 300-second exposure when using the L-enhance, just because that's what everyone seems to do, but I now wonder if I'd be better with shorter subs. Would this control the stars a bit?
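On the shorter-subs question, the deciding factor is whether bright stars are actually clipping during the exposure. A back-of-envelope sketch; the flux figure is a hypothetical placeholder, and the full-well value is only roughly right for the ASI533MC Pro at low gain:

```python
# Back-of-envelope: does a bright star saturate in a 300 s sub?
# Assumed numbers (illustrative, not measurements):
#   full_well_e        - ~50,000 e- full well, ASI533MC Pro at low gain
#   star_flux_e_per_s  - photoelectron rate at the star's peak pixel

def saturates(star_flux_e_per_s, exposure_s, full_well_e=50_000):
    """Return True if the peak pixel would clip during the exposure."""
    return star_flux_e_per_s * exposure_s >= full_well_e

# A star collecting 500 e-/s at its peak pixel clips in a 300 s sub
# but survives a 60 s one:
print(saturates(500, 300))
print(saturates(500, 60))
```

Once the core clips, only the star's wings keep growing, which is part of why long subs look bloated; shorter subs keep more stars below full well at the cost of more read-noise events per hour.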

When going for a HOO image I usually just use Bill Blanshan's PixelMath, but I've experimented with splitting the channels too. I don't normally colour calibrate either, for the reasons you mention, but I've tried so many different ways of processing this data to try and figure out where I'm going wrong. With this particular data it doesn't matter what I do, it just looks mottled. The L-enhance is very similar to the L-extreme; the band passes are a bit wider, that's all.
wsg
Craig,
Your data is fine; like my own it's not great, but it is easily processed by the currently available tools. There isn't much wrong with your stars in the integrated image. Basic process, in order of use:
Using Pixinsight:
SPCC
Bxt
Nxt
StarX

Using Photoshop:  starless image
Light stretch with levels
Color adjustment with saturation and vibrance
curve adjusting for detail in shadows and black point

Pixel Math in PI to replace the stars

I've used Bill B's tools and they work great.  I'm not sure what you are doing wrong but this very basic process took me about 10 minutes.
I would try a smaller first stretch with GHS to start with and make smaller changes in the direction you want to go.


scott
andrea tasselli
Your color planes aren't aligned, I mean in the original image.
Brian Puhl
I digress... the data isn't quite as bad as I thought. I have no idea how you ended up with that image. I just spent about 15 minutes here.

BlurX, extract stars, extract R + G, toss B. NoiseX, stretch, curve each channel as fit. Combine HOO (I use the dreamsplease palette). Reapply Ha as luminance since it's a tad cleaner than the Oiii. Add stars. The hardest part of this whole image is dealing with your stars. I think you've got some substantial tilt towards the top left. Normally I try to bring out some color, even if it's L-enhance/L-extreme stars, by utilizing SCNR and cranking some color, but unfortunately that tilt made it extremely difficult... so I did what I could, then minimized them using Bill's star scripts. I could have spent a bit more time neutralizing the background, but I think it's just fine as is.
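For anyone following along, the "extract R + G, toss B, combine HOO" step can be sketched in numpy. The Ha-to-red, OIII-to-green-and-blue mapping is my assumption of a plain HOO palette, and `combine_hoo` is a hypothetical helper (the dreamsplease palette itself is not reproduced here):

```python
import numpy as np

def combine_hoo(ha, oiii):
    """Map narrowband channels to a plain HOO colour image:
    Ha drives red, OIII drives both green and blue."""
    return np.stack([ha, oiii, oiii], axis=-1)

# With OSC + dual-band-filter data, the red channel approximates Ha and
# the green channel approximates OIII; the blue channel is discarded.
rgb = np.random.rand(4, 4, 3).astype(np.float32)
ha, oiii = rgb[..., 0], rgb[..., 1]
hoo = combine_hoo(ha, oiii)
```

The per-channel curves and the Ha-as-luminance step would then be applied to `hoo` as usual.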

andrea tasselli
The other post of mine somehow disappeared. So, here is my take (I don't change background levels):

Jan Erik Vallestad
I hope you don't mind me having a quick and dirty go at it as well. It's my first time processing OSC data though, and I might have gone in a completely different direction than you were aiming for.



Stars
As I'm not familiar with any techniques for OSC data, I just started out by separating the channels and running FWHMEccentricity to determine the PSF. Then I went back to the OSC image and applied BXT before removing the stars (not unscreening them). I stretched the stars manually with the histogram tool, adjusted the RGB curves to my liking and applied SCNR, with minor curves adjustments at the end.

Lum
I then extracted the L component from the RGB image, stretched it and applied NXT at around 70% plus some increase to detail. I copied the image and clipped the blacks, then applied this as a mask to two other copies before applying MLT and USM to the respective lum copies and discarding the mask. To finish up the lum channel I merged them in PixelMath (original lum 40%, USM lum 40% and MLT lum 20%), and ran DarkStructureEnhance to wrap it up.
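The three-way luminance merge described above is just a weighted blend; a minimal numpy sketch (`merge_lum` is a hypothetical helper name, standing in for the PixelMath expression):

```python
import numpy as np

def merge_lum(lum, lum_usm, lum_mlt, w=(0.4, 0.4, 0.2)):
    """Weighted blend of three luminance versions; weights must sum to 1
    so the overall brightness is preserved."""
    assert abs(sum(w) - 1.0) < 1e-9
    return w[0] * lum + w[1] * lum_usm + w[2] * lum_mlt

# Flat toy images make the arithmetic easy to see:
lum = np.full((8, 8), 0.30)       # original luminance
lum_usm = np.full((8, 8), 0.50)   # unsharp-masked copy
lum_mlt = np.full((8, 8), 0.10)   # MLT-processed copy
merged = merge_lum(lum, lum_usm, lum_mlt)
```

In PixelMath this is literally `0.4*lum + 0.4*lum_usm + 0.2*lum_mlt` with the three views as operands.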

RGB
Applied an unlinked STF and applied the values to the histogram, applied NXT, then made manual adjustments overall and on each curve (again, to my liking) before applying it. Played around with the black point a bit in GHS and made curves adjustments to RGB/K and L, and reduced saturation a bit.

LRGB
Opened up LRGBCombination and applied the lum channel to the RGB data, slightly lightened and reduced saturation, and made some more minor adjustments with curves to the L component. Finished up with DynamicCrop to get rid of artifacts around the edges.

LRGB+STARS
Simple combination in pixelmath: ~(~stars*~starless)

Export
Exported as 16-bit TIFF, imported into LR and made some more slight adjustments, lightening the shadows and, even more carefully, raising the black point.

PS: I made copies of the originals all the way through, in case something went wrong. I also know that NXT should be applied in the linear state according to RC, Adam Block and others, but the simple fact is that in the current version I only get weird and blotchy results if I do that. So I remedy it by applying the process to non-linear data, where it has proven to be less destructive for me.


EDIT:
I forgot to add that I ran a simple ABE with function degree 3 and subtraction. I did that just after removing the stars from the image, and before I extracted the lum.
Die Launische Diva
I agree with @andrea tasselli, the color planes aren't aligned. And his rendering is the best so far, probably because of his simple and targeted workflow. Split the RGB and perform star alignment with surface splines & distortion correction, using the best channel in terms of FWHM as reference. Recombine and you'll have a better image to start with. BXT/SXT just sweeps the problem under the rug.

When troubleshooting, avoid complicated workflows. You are just adding more potential points of failure.
Craig Dixon
I really appreciate all the responses here, so thanks so much to everyone. It’s great to see how others process the same data differently, and all the versions above (apart from mine) look fantastic. There is quite a lot of info above that I don’t quite understand, so I have much to learn. I’m glad that the data is good enough to be used, but I’m now questioning everything I thought I knew about processing. I’ve learned mainly from watching YouTube tutorials, usually from the main channels, but I’m definitely way off where I want to be with processing. I’m now wondering if I should be using a different resource to learn. Here is the basic workflow that I perform on all images:

Dynamic crop - just remove a few pixels from each edge unless anything looks particularly out of place.

DBE: I used to place the sample points manually and evenly throughout the image, avoiding nebulosity, but I recently saw a video on one of the popular YouTube channels (can’t remember which) that said to set the sample size pretty large (I use 100, and 10 per row), delete all samples apart from those around the edge, set a tolerance of 2 and a shadow relaxation of 6, then apply as division, and then again as subtraction.

BlurX: Default settings

NoiseX: Default settings

Stretch: I was using EZ Soft Stretch but I’m trying to learn my way around GHS so have been using that lately.

StarX

Curves adjustments to starless as necessary/to taste. Maybe some local histogram equalisation, etc

Add saturation to stars using Curves.

Re-combine image using ~(~stars*~starless)

So my thoughts now as to where I’m going wrong are either:

A: I’m using the wrong tools to process.

B: I’m using the tools incorrectly

or C: I’m using the tools in the wrong order.

I note above that nobody has mentioned using DBE. I was under the assumption that this is one of the first things to do to any image. Could I be ruining the data by using DBE incorrectly?
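For intuition about what DBE is doing (and how a bad sample layout can hurt): conceptually it fits a smooth model through your background samples and removes it. A toy sketch under that assumption, using a simple least-squares polynomial; this is not PixInsight's actual algorithm, and `fit_background` is a hypothetical helper:

```python
import numpy as np

def fit_background(img, samples, degree=2):
    """Fit a smooth 2-D polynomial surface through (y, x) background
    sample points - a much-simplified toy model of DBE's samples."""
    ys = np.array([p[0] for p in samples])
    xs = np.array([p[1] for p in samples])
    vals = img[ys, xs]
    # Design matrix with monomials x^i * y^j for i + j <= degree
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack([xs**i * ys**j for i, j in terms], axis=-1).astype(float)
    coef, *_ = np.linalg.lstsq(A, vals, rcond=None)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return sum(c * xx**i * yy**j for c, (i, j) in zip(coef, terms))

# A frame with a linear light-pollution gradient and no real signal;
# edge-only samples recover it exactly because the gradient is smooth.
yy, xx = np.mgrid[0:16, 0:16]
frame = 0.10 + 0.010 * xx + 0.020 * yy
edge_samples = [(0, 0), (0, 8), (0, 15), (8, 0), (8, 15),
                (15, 0), (15, 8), (15, 15)]
model = fit_background(frame, edge_samples)
corrected = frame - model  # subtract for additive gradients; divide for vignetting
```

The failure mode is also visible here: if a sample lands on nebulosity, the fitted surface absorbs real signal and subtracting it eats the nebula, which is one way an aggressive DBE can make data look worse.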

I also have an Adobe CC subscription, so I have access to Photoshop. What’s the workflow for switching between PixInsight and PS? Just save as a 16-bit TIFF and swap between the two as and when you need to? I find the colour tools, as well as masking, far more intuitive in PS, but have been trying to stick to PixInsight for simplicity.

All of your attempts above look so elegant in comparison to mine, and after reading through this thread several times, my processing feels crude. I think I need to start again from scratch, but where/how to start?
Jan Erik Vallestad
Craig Dixon:
I really appreciate all the responses here so thanks so much to everyone. […]

I agree with the last post, Andrea's image looks very good. I'm not sure where things go wrong for you but I just wanted to add a few things.

1. In images with lots of nebulosity etc., ABE (AutomaticBackgroundExtractor) can be very useful; play around with the function degree and see how it works/affects your image.

2. Test by making a duplicate image before running NXT. Run it on one image in the linear state and on the other in the non-linear state, and see if you can spot the difference. After you run NXT you can re-apply the STF to see what I mean. One caveat: running NXT at default strength would be a bit harsh IMO, so try reducing it some, or even do some in linear and then a tiny amount as you finish up your image in its final state.

3. I always remove the stars in the linear state, as it gives more control, but in return the stars need to be stretched manually with HT. I generally extract stars after BXT.

4. Generally I do colour calibration (SPCC) first of all. It depends whether I treat my mono channels separately or combined. If there's a lot of stuff in the background I would also run BXT/SXT first, then do DBE with large samples. But I often choose to see how ABE works first when doing nebulae.

5. Use masks when you do curves adjustments, and be careful not to overshoot the adjustments.

6. Before stretching, play around with SCNR to remove some of the green. The values vary, but I usually start off trying something in the region of 40-70 on a preview and work my way towards what looks most balanced. 
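For intuition, SCNR's "average neutral" protection (as I understand the standard formula) clamps green toward the mean of red and blue, blended in by the amount slider. A numpy sketch under that assumption; `scnr_average_neutral` is a hypothetical helper, not PixInsight's implementation:

```python
import numpy as np

def scnr_average_neutral(rgb, amount=1.0):
    """SCNR with 'average neutral' protection: clamp G to mean(R, B),
    blended in by `amount` (0 = no change, 1 = full clamp)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_clamped = np.minimum(g, 0.5 * (r + b))
    out = rgb.copy()
    out[..., 1] = (1.0 - amount) * g + amount * g_clamped
    return out

# A green-dominated pixel gets pulled back; a pixel whose green is
# already below mean(R, B) is left untouched.
px = np.array([[[0.2, 0.8, 0.2]], [[0.5, 0.2, 0.5]]])
fixed = scnr_average_neutral(px, amount=1.0)
```

The 40-70 range mentioned above would correspond to `amount` values of roughly 0.4 to 0.7 here.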


If you need to switch to Adobe you'll need to save a 16-bit TIFF file. I've never done that myself, but lots of people probably do. I prefer to do only minor touch-ups in LR before exporting to JPEG.
Jerry Yesavage
The weather is really lousy here, so I thought I would take a look.

First, your RGB does not line up over the stars... not sure if collimation or registration is the problem.

Second, I used BlurXTerminator to do my best to deal with these distorted stars. They look better in the upper-right close-up, after NoiseXTerminator was applied.

The final image is at the upper right, and the GHS stretch is shown on the right side. This is an OK histogram pattern; nothing unusual about the stretch.

I added HDRMT. 

I could have removed the stars and wildly stretched the image, but if you look at the histogram it is well proportioned, so I did not want to mess with reality, which was not too bad.

Zak Jones


Decided to jump on the editing bandwagon lol.

Here's my attempt at your Wizard Nebula data. It was great processing data from a target I can't image because of where I live, and also data from a dedicated deep-sky camera, that being the ZWO ASI533MC Pro. It came out awesome! Really thinking about getting a dedicated astrophotography camera now lol.

Thanks for providing your data, there's definitely nothing wrong with it!

Zak
Taras_M
Craig Dixon:
Is it just me that has a particular set of data that I just can't seem to process well? […]


Hi, the first thing that strikes my eye is the stars. Try stretching them by @Adam Block's method: just a little bit less than the autostretch does; the stars should be just short of all white. Then SXT, and process the starless image to your desired level.
Second, colour balance: the OSC sensor and filter may cause some colour imbalance; try to beat it with curves.
Why are you stacking in DSS if you have/use PI?
Do you have the stacked XISF in its virgin (linear) state?
Craig Dixon
I've been trying to troubleshoot this again tonight and I think I've made some good progress. I've attached a screenshot showing where I'm at with it.
IMAGE 1: Crop, DBE, SPCC, BlurX, NoiseX, StarX, EZ Soft Stretch
IMAGE 2: Crop, SPCC, BlurX, NoiseX, StarX, EZ Soft Stretch
IMAGE 3: Crop, SPCC, EZ Soft Stretch
IMAGE 4: Crop, SPCC, StarX, EZ Soft Stretch
IMAGE 5: Crop, SPCC, BlurX, StarX, EZ Soft Stretch, NoiseX
I've used EZ Soft Stretch in all versions just to keep things simple, but I think image 5 is the best. There is quite a big improvement between images 1 and 2, and the difference there is that I didn't do DBE in image 2. That leads me to believe that I'm either using DBE incorrectly or it's just not suitable for this image. The background does look pretty even and neutral to me in images 2-5 (no DBE), so maybe it isn't needed with this image.

As @Jan Erik V suggests above, applying NoiseX after the stretch does look better (image 5 in comparison to image 2). Even though the documentation for NoiseX says to use it on linear data, image 5 is clearly better than image 2.

Re: The colour channel alignment... This is beyond my knowledge, so I'm afraid I'll need some more explanation here.

Re: Tilt... This is also something I'm not familiar with so need to research.

Does it look like I'm on the right track here now?
andrea tasselli
You're on the right track there and, obviously, choosing which one to go for is a matter of personal taste. As for the issue of colour plane mis-alignment, you'd need to re-align the 3 separate RGB planes using thin-plate spline alignment, checking all the tick-boxes. The cause is your doublet's optical prescription, which produces lateral chromatic aberration (LCA). As for the tilt, that is easy to diagnose and hard to correct. I'd try using shims.
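A much-simplified illustration of the re-alignment idea: estimate each plane's shift against a reference channel, then roll it back. Real StarAlignment with surface splines also handles rotation, scale and field-dependent distortion; this phase-correlation sketch (a stand-in, not PI's method) only finds a global integer shift:

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the integer (dy, dx) translation of `moving` relative
    to `ref` by FFT phase correlation."""
    cross = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (int(dy - h) if dy > h // 2 else int(dy),
            int(dx - w) if dx > w // 2 else int(dx))

# Toy check: a green plane displaced 2 px down and 3 px right
rng = np.random.default_rng(1)
red = rng.random((64, 64))
green = np.roll(red, (2, 3), axis=(0, 1))
dy, dx = estimate_shift(red, green)
aligned_green = np.roll(green, (-dy, -dx), axis=(0, 1))
```

Because LCA grows with distance from the optical axis, a single global shift like this is only a first-order fix, which is why the spline-based registration is the right tool.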
Jan Erik Vallestad
I agree, it seems to be getting along very well now. Good job!

As for the NXT in linear vs non-linear: I wish I could provide some sources to back it up, but I simply cannot remember where I had this discussion, as it was a while ago now. I do remember talking about it and getting some kind of confirmation that it has issues that will be fixed. But which forum/platform this occurred on eludes me now.
Dustin Gazz
Really good data. I did my basic process in PI. For the last one I cropped a bit; in the middle one I did a double star reduction. Not sure if this is kinda what you were thinking or not, but they look ok to me. - "Dustin"
Joe Linington
That was fun. Not bad data at all. I did re-align it by splitting the channels and then using StarAlignment to register them to the red channel, but I didn't have to use any special halo-reducing tricks at all. It might be a touch over-processed, but this was a 20 minute pass.

Craig Dixon
Thanks everyone for your help with this. It really has helped me progress and learn some new things so I really appreciate it. I’m going off to learn how to re-align the colour channels now as this seems to be a common recommendation.

It’s great to see everyone’s processing of the data, all of the images look fantastic.
Craig Dixon
Taras_M:
Why are you stacking in DSS if you have/use PI?
Do you have the stacked XISF in its virgin (linear) state?

Sorry, I forgot to answer your question about stacking. I stacked once in PixInsight, but I found that it took significantly more time than DSS, used a lot more temporary storage and didn’t produce a better result, so I’ve always just stuck to DSS. I’m happy to be convinced of its merits though.
andrea tasselli
Craig Dixon:
I stacked once in PixInsight, but I found that it took significantly more time than DSS, used a lot more temporary storage and didn’t produce a better result, so I’ve always just stuck to DSS. I’m happy to be convinced of its merits though.

Rest assured that PI does a way better job at stacking than anything DSS would ever be able to conjure; e.g., it would have aligned the colour planes affected by lateral chromatic aberration such as yours.
Joe Linington
There is a list of programs that outperform DSS. Siril and ASTAP are both free and very good; Siril can also be much faster than DSS. AstroPixelProcessor and PI are paid, but both are fantastic. Siril and APP are my favourites for various reasons. APP and ASTAP can be easy on drive space, and ASTAP is very light on the hardware.