BlurXterminator technique and usage thread

99 replies · 10.3k views
Chris White- Overcast Observatory
Starting a new thread for folks who want to talk about using this tool. How have you used it? What tricks have you learned? Have you found any applications where it was useful beyond what it was intended for? Where in your workflow are you inserting it? What don't you like about it? Any pitfalls?

I've got some ideas to contribute and as I do some more testing I will add my thoughts and what I have learned. 

If you want to discuss ethics or anything else off topic from using this new tool… please find somewhere else to do so.
vercastro
I haven't quite solidified a process yet, but I have experimented with the below.

For luminance:
Crop > DBE > BlurXTerminator > TGV > NoiseXTerminator > StarNet2 or StarXTerminator (starless only) > Stretch however you like > Curves adjustments > HDRMT & LHE (if necessary).

For RGB:
Crop > DBE > BlurXTerminator > TGV > NoiseXTerminator > SPCC > StarNet2 or StarXTerminator (stars and starless) > Stretch starless however you like, stretch stars only with HT > Curves adjustments > Saturation.

LRGB combine both starless. Then add saturated stars back with this technique: https://www.nightphotons.com/guides/star-addition
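The star-addition technique linked above boils down to a screen blend, which in PixelMath is written `~(~starless * ~stars)`. A minimal numpy sketch of the same math (array names are mine, data assumed normalized to [0,1]):

```python
import numpy as np

def screen_blend(starless, stars):
    """Screen blend for [0,1] data: a + b - a*b, i.e. ~(~a * ~b) in PixelMath."""
    return 1.0 - (1.0 - starless) * (1.0 - stars)
```

Because screen never pushes values above 1 and leaves black pixels untouched, the stretched stars drop back onto the starless image without hard clipping.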

For narrowband, treat Ha like luminance. LinearFit the SHO channels and stretch them separately. Combine them in the non-linear state however you prefer.
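For reference, a LinearFit of the kind mentioned here just solves for a per-channel gain and offset so the channels share the same brightness scale before combining. A small numpy sketch of the idea (the function name is mine, not PixInsight's API):

```python
import numpy as np

def linear_fit(reference, target):
    """Least-squares gain/offset so `target` matches `reference`, LinearFit-style."""
    A = np.column_stack([target.ravel(), np.ones(target.size)])
    gain, offset = np.linalg.lstsq(A, reference.ravel(), rcond=None)[0]
    return gain * target + offset
```

Applied to, say, the SII and OIII linear masters with Ha as the reference, this evens out the very different signal levels before the SHO combination.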

For OSC, extract a copy of the luminance and treat that as the luminance.

As for settings inside BlurXTerminator, I find galaxies look best with lower settings (Sharpen Stars about 0.15-0.20, Sharpen Nonstellar 0.20-0.35). Large nebulae can tolerate higher nonstellar settings.
Jonny Bravo
I've only just started playing around with it. So far I've tried it on data collected with my 294MM Pro and 8" EdgeHD at f/10 - so an image scale of about 0.45"/px. I've tried it on a narrowband HOO image of Mel-15 (with RGB stars) and also on RGB data of M13. I've played around with the sliders a bit, and tried a few of the different checkbox options.
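The image scale quoted here follows from the usual formula: scale ("/px) = 206.265 × pixel size (µm) / focal length (mm). A quick check, assuming the 294MM Pro is binned 2x2 (4.63 µm effective pixels) and the 8" EdgeHD is at its native 2032 mm (both are my assumptions, not stated above):

```python
def image_scale(pixel_um, focal_mm):
    """Arcseconds per pixel: 206.265 * pixel size [um] / focal length [mm]."""
    return 206.265 * pixel_um / focal_mm
```

With those numbers, `image_scale(4.63, 2032)` comes out around 0.47"/px, close to the ~0.45"/px quoted.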

I use it very near the beginning of my workflow, as I don't want to have anything altering the data before the PSF is calculated. My typical workflow is:

Dynamic Crop for getting rid of any stacking/dithering artifacts
DBE if necessary
Channel combination (various different methods)
Image solver if necessary
SPCC

I have tried BXT at each stage of the workflow above... so as the first step, as the step immediately after crop, as the step after DBE, etc.

What I've found while playing around with it...

On its default settings it did quite a nice job on my M13 RGB data. Here are a couple screenshots of the before/after in PI. First is the "before":



Here's the "after":



I've zoomed in quite a bit to show the effects. I did BXT just after SPCC. This is just a standard STF applied.

As with the M13 data, I applied BXT after doing the HOO channel combination, crop and SPCC on Mel-15. Same default STF applied. Here's the before:



And After:



Stars are tighter and there's considerably more detail visible in the nebulous structure. Just for fun, I maxed out all the sliders on that same data:



Definitely over-sharpened and the stars look a bit... off... almost like it's trying too hard to deal with the halos, and as a result softens the stars. Going the other way (sliding the halo corrector down to -0.50) gives, in my opinion, an awful result on the stars:



It looks better here than it does on my monitor open in PI. There, you can clearly see ringed artifacts around every star where the halos used to be.

Like Russell's other tools, I imagine this one will need to be applied judiciously per image, and there won't be a one-click-fits-all setting.
Dale Penkala
Chris White:
Starting a new thread for folks who want to talk about using this tool. […]

I'm about where you are Chris, I've only started to play with it a little with my last image that I processed.

I'm doing my normal workflow which is:
Crop
DBE
BG Neutralization
SPCC
BlurXTerminator 
My settings were what I'd call very conservative: star sharpening around 0.15-0.20, halos around 0.25, and nonstellar in the 0.35-0.40 range.

From here I like to run StarXTerminator and process my Stars & Nebula or whatever separately and add them back in using PM.

I know that you're not supposed to run it twice, but I run it again on my starless image with very slight sharpening, just enough to see it when it's done.

This wasn't the greatest data (only 480mm FL) but if anyone is interested in the results you can see it here: https://www.astrobin.com/full/vl5gs3/D/

In the coming days I plan to try this on data sets with a much longer focal length (1500mm). From what I understand, it is really supposed to make a difference at that image scale.

Dale
Bruce Donzanti
Fascinating piece of software. Just playing with it now on some old data taken with a C11 EdgeHD @ f/10 with a ZWO ASI6200MM and a Chroma Ha filter (0.224 arcsec/px). I left the stellar adjustments at default but reduced the Sharpen Nonstellar adjustment to 0.5. Given my usual seeing conditions, these data would be oversampled, but fortunately I can get decent guiding. So I was assuming that BlurXTerminator would produce an effect; whether it's considered good or bad remains to be determined.
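Whether data like these count as oversampled can be sanity-checked with the common rule of thumb of roughly three pixels across the seeing FWHM. A small sketch (the seeing figures below are my assumptions, not Bruce's):

```python
def is_oversampled(scale_arcsec_px, seeing_fwhm_arcsec, samples_per_fwhm=3.0):
    """True if the pixel scale is finer than ~1/3 of the seeing FWHM
    (a rough rule of thumb, not a hard limit)."""
    return scale_arcsec_px < seeing_fwhm_arcsec / samples_per_fwhm
```

At 0.224"/px, even excellent 1.5" seeing puts nearly seven pixels across the FWHM, so these data are heavily oversampled, which is exactly the regime where deconvolution has room to work.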

BEFORE using BlurXterminator


AFTER using BlurXterminator



Cropped- to see the effects more readily.

BEFORE

AFTER

The effect I see here is way more dramatic than on one of my Tak 85mm images (as one would expect), but it did do a nice job of star reduction in the 85mm image too.
James
Right now I use BXT very early.

For RGB and NB:

Channel combination > Crop > color calibration > BXT > star removal > DBE


For Lum

Crop > BXT > star removal > DBE

I toss the lum stars and use only the RGB stars when adding them back in.
Dark Matters Astrophotography
It's been very good for me. I have looked out for the extra added crap people claim happens, and I saw none of it with this tool. I think the problem there is that people are tin-foil-hat conspiracy lovers, and they should just bow the F out of this hobby and bother someone else.

With that said, let's see some killer results:

Melotte 15: Melotte 15 (rockstarbill) - Full resolution | AstroBin

Heart and HB3: Heart Nebula + HB3 SN Remnant - Redux (rockstarbill) - Full resolution | AstroBin

Neither of these images will ever be image of the day on Astrobin, because the judges hate me from CN and refuse to vote for me. Nonetheless, this is excellent data on all counts. 

Much better than the chuff they promote for IOTD.

If you disagree with me, great; let me see you do better with an E-160ED. My data here is bin 1x1 and not downsampled to improve star quality. These are literally the frames from the camera and scope, processed and shared.
Aygen
@Chris White thank you for this thread. I am back in the hobby after a few months on "hold", and lucky that you started this discussion; I will follow it thoroughly. I just gave it a quick try yesterday with some old data and I must say "waoooo", what a game changer.

@Bill Long you won't be surprised if I tell you that the quality of the two images you posted earlier today is just "EXCELLENT"!!!!! Sure thing, WE DON'T HATE YOU, on the contrary!

PS: mannnnn, I am still waiting for my E-160ED; still a few months to go before enjoying it.

PS2: worth watching Adam's video covering this topic : BlurXTerminator
Dark Matters Astrophotography
Aygen:
[…] Sure thing, WE DON'T HATE YOU, on the contrary!

PS2: worth watching Adam's video covering this topic: BlurXTerminator



My comment was not serious, but thanks.

I'll check out that video.
Alex Ranous
I just posted my first images using BXT. I picked M8, as I had issues when I first processed the data this summer and had shelved it. I shot it with my focal reducer, at a pixel scale of 0.89 arcsec/pixel, which was more amenable to deconvolution than the 1.3 arcsec/pixel of my other configuration. I found the defaults a bit too strong, and ended up dialing back to about 0.8 for the non-stellar features and about 0.2 for the stars.

As I didn't like my original version due to other processing sins, I don't have a comparison.

RGB version of M8

SHO version of M8

Has anyone used the tool on a mosaic yet? I'm wondering if it's best to use BXT on the individual panels before combining, or after the mosaic has been assembled. As I'm lazy, I'd prefer to do it after, but I'm wondering if there are benefits to doing the panels first: if your edge stars are problematic, BXT may correct them, improving the result. I guess I'll have to do it both ways and see.
Aygen
Alex Ranous:
I just posted my first images using BXT. […]

Alex, your images are great! Can you give some details about your workflow?
Norman Hey
FWIW, here is the reply from Russell to a question about how to use BXT on narrowband images where you plan to blend channels. I asked just to be sure I understood the documentation correctly.

I ran BXT on data from my scope obtained before getting the bespoke flattener from A-P. It confirms what Stewart shows above, correcting star shapes out in the edges and corners almost as well, if not as well, as the flattener that cost 10 times as much as the software!

To Alex's question about pre- or post-mosaic assembly, hopefully I will get time to run a comparison as well and we can share results.

RC-Astro Support <support@rc-astro.com>, Sat, 17 Dec, 11:57
to me

Hi Norman,

Yep… that's exactly it.

-Russ

On Dec 17, 2022, at 10:47 AM, RC-Astro BlurXTerminator Support <support@rc-astro.com> wrote:

"When combining narrow-band data into a color image, it is best to avoid "mixing" color channels, as is done in some advanced color palette techniques, prior to deconvolution. If channels are mixed prior to deconvolution, especially with very different gains, star profiles can be significantly altered, perhaps so much as to make them unrecognizable as stars to the neural network. Do a simple SHO color combination, run BlurXTerminator, then perform any mixing between channels afterwards."

From this, I am assuming that this implies combining the linear S, H and O masters, running BXT, then separating the channels for further steps, including blending these "primary" sources for each R, G or B channel (using the BXT-processed S, H and O in, say, R = 0.75*SII + 0.25*Ha, where SII and Ha are now BXT-processed as above). I would be grateful if you could confirm or correct my interpretation of the excerpted documentation from BlurXTerminator.
Scott Badger
Working on NGC 206, a star association within M31, and at first BXT wasn't doing those 'embedded' stars any favors, even with Sharpen Nonstellar knocked back to 0.35. Enabling 'Nonstellar then stellar' helped some, but disabling auto PSF and setting the PSF diameter to 3 pixels made a big improvement and let me push the nonstellar sharpening back up.
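For anyone translating Scott's manual PSF setting into other tools' units: treating the 3 px diameter as a FWHM (my assumption), a Gaussian profile's FWHM and sigma are related by a fixed factor, which is handy when a tool asks for sigma instead:

```python
import math

# Exact conversion factor for a Gaussian profile
FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))  # ~ 2.3548

def sigma_from_fwhm(fwhm_px):
    """Gaussian sigma corresponding to a given FWHM."""
    return fwhm_px / FWHM_PER_SIGMA
```

So a 3 px FWHM corresponds to a Gaussian sigma of roughly 1.27 px.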

Cheers,
Scott
Alicia Rossiter
Chris White:
Starting a new thread for folks who want to talk about using this tool. […]

I have used it once, in my latest picture (Pisces trio). I used the default settings very early in the processing, right after the first stretch. I could see a difference mainly in the sharpening of the stars and a very subtle change in the detail of the galaxy. The star effect was pretty good; in fact the picture did not require (in my opinion) further star reduction or deconvolution. I tried the "correct only" feature with the star feature disabled on planets and it did not work at all.
Brent Newton
Workflow-wise, it simply replaces Deconvolution in most or all cases so far. I've been tearing through several of my previous shots lately and so far have not found a single deficiency in the process. At worst, it may require a manually set PSF value (and I suggested to RC that it might benefit from being able to accept sample PSF images, as the Deconvolution process does now), but this is barely a complaint. Many of my DSLR lens-based integrations and my older doublet work suffered from poor corner stars (imperfect focus or loose focus tubes), and while I previously had to crop out what I considered unfixable areas of an image, BXT performs what I can only describe as a miracle.

I have also found it to work very well with close-in lunar data, with the notable exception of darker areas around the terminator. When facing a sudden drop to near-zero values (black pixels off the edge of a bright Moon), it adds a large bias to the image and severely lowers the contrast. Still, some skilled use of range masks could mitigate this issue. I have not tried it on planetary or solar data but would expect similar results.
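A range mask of the kind Brent mentions (similar in spirit to PixInsight's RangeSelection tool, though the function below is my own sketch, not its API) simply selects pixels within a brightness window, with feathered edges so the protection blends smoothly:

```python
import numpy as np

def range_mask(img, low, high, feather=0.05):
    """1 inside [low+feather, high-feather], 0 outside [low, high],
    with linear ramps between, for [0,1] image data."""
    up = np.clip((img - low) / feather, 0.0, 1.0)
    down = np.clip((high - img) / feather, 0.0, 1.0)
    return up * down
```

For the lunar case, something like `range_mask(img, 0.05, 1.0)` would exclude the near-black terminator pixels from sharpening while leaving the bright disk fully affected.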

Chris White- Overcast Observatory
Brent Newton:
Workflow-wise, it simply replaces Deconvolution in most or all cases so far. […]

Brent, that is remarkable what it did for those stars!
Alex Ranous
Aygen:
Alex, your images are great! Can you give some details about your workflow?

So the raw subs were processed with WBPP with all the normalization options. For this target I didn't do any gradient removal; there was so much nebulosity everywhere I couldn't tell what might have been gradient vs. nebula. For the RGB image, I combined RGB and color-corrected using SPCC, then hit it with BlurXTerminator. I ended up backing off the non-stellar sharpening to about 0.7 or 0.8, if I recall, and I believe I kept the default star shrink.

I then extracted the stars with StarXTerminator and stretched the starless image and the star image separately, bringing the stars up to the size I wanted. I then hit the RGB with NoiseXTerminator and made various curves adjustments for saturation and brightness. For the lum, I hit it with BXT with the same settings as the RGB, and then removed the stars. I stretched it and, in addition to curves tweaks, did some LocalHistogramEqualization and DarkStructureEnhance. After the LRGBCombination, I combined the RGB stars back in with the op_screen operation in PixelMath.

I do SHO a bit differently. I used BXT on each individual filter, then tossed the stars. I stretch each filter separately and tweak each individually in curves to get the levels right for the SHO channel combination. I then blended my Ha and SII (about 70/30) to create a lum, and did some LHE and DarkStructureEnhance to get it looking the way I want. I used some masks to selectively tweak different regions as well. I believe I also used the EZ Processing Suite's EZ HDR script to tone down some of the really bright areas. I then used LRGBCombination with that new lum and my SHO image, and added in the stars from my RGB image.

I tend to do a lot of tweaking using curves and sometimes masks, so a lot of what I do is very situational. I often use the GAME and ColorMask scripts to create masks so I can target specific areas for work.
Alex Ranous
Alex Ranous:
Has anyone used the tool on a mosaic yet? […]

So I messaged Russ about mosaic use. He suggested running BXT on the individual panels using the "Correct Only" option to fix potential star issues at the edges, and then doing the sharpening separately on the assembled mosaic.
Brent Newton
Chris White:
Brent, that is remarkable what it did for those stars!

It really is; I'm still almost in disbelief at the results. I was out of town for schooling for an eight-month period this year and spent much of that time re-processing my best photos from the past few years, and now that I'm back home I find myself tearing through them again instead of compiling the new data I've taken in the past month. To go from a rather tedious process of generating a PSF, tweaking masking contrast and deringing, and then finding minor background artifacts after Deconvolution is run (artifacts that were not visible in the preview tests), to essentially a single click, is unbelievable.
Alex Ranous
Alex Ranous:
[…]

Hello,
I asked Russ about when to use the tool on SHO data (whether to run it first on the individual stacks of mono data, or to combine and then run BXT), and he told me to do the latter, because I would probably have varying FWHM figures across the different stacks, so it's best to do it this way….🤔🤔

Yeah, that makes sense to me. You want to run BXT on your combined linear data before you mess with it. In my particular case with my M8 images, I wasn't using the SHO stars but subbing in the stars from my RGB image. My RGB stars were created by doing an RGB combine from the get-go, then BXT, then SPCC, and then extracting them, so following Russ's suggested workflow.

If I weren't subbing in RGB stars, what I'd probably do is create an HOO image right from the get-go, even if I'm doing an SHO image. HOO stars are closer to RGB than SHO ones, which tend to look yellow. I'd run BXT on it and then use the narrowband filters mode of SPCC to color-correct the image to get the stars as close to RGB as I could, then extract them and stretch them separately. I'd then probably apply BXT to my individual SHO linear images, remove their stars, and stretch them individually. Once I've combined everything back into my final SHO color image, I'd apply the HOO stars.

The big advantage of StarXTerminator for me is treating the stars separately from the rest of the image: do the least harm to the stars and stretch them to taste. Before modern star extraction tools like SXT and StarNet, I had to take a lot of precautions masking the stars to avoid messing them up when manipulating the rest of the image. I also used various star reduction techniques on my star-dense images, but I was never happy with how they looked when you zoomed in. It's so much easier to remove the stars from the get-go and stretch them to the desired size, without all that star reduction fuss.
Scott Badger
Processing time aside, what about running it on calibrated lights before integrating them? Probably not worth the processing time no matter what you have for a computer, but just curious.... FWIW, it does its job on an M51 luminance sub below (default settings).


Dark Matters Astrophotography
Using the tool very early on in linear mode produces the best results in my experience and that matches what Adam spoke about in his video.
Alan Brunelle
I have been wanting to post this for some time, but no time!

I collected some horrendous data last week on a field that includes NGC 891. I had temperature-induced pinched optics that I failed to diagnose but thought I could process out. Turns out the stars were horrible. In any case, the stacked frame, full field, from a 2x drizzled image is as follows:

This is to show you what I was up against, and you can refer to it when comparing the following images. This is fully unprocessed, except for an autostretch for display here. The stars are horrible, even in the center of the image, and there are some corner effects. The galaxy looks pretty good, but I can tell you that none of the methods could fix the stars. In fact, StarXTerminator had issues even telling what was a star and what was a small galaxy.

Now on to the BXT/NXT comparisons. But first, understand that I cropped these and arranged them in Publisher, which I could not get a good-quality saved JPG from, so I set a 260% image scale and did a screen capture.



Hopefully things transfer well enough for you to see the detail. This is what I mean by checking my work against available data from much larger and better optics. I have always done so with the old methods, and now also do so with the BXT and NXT sharpening tools. I feel that the sharpening shown here is quite good considering the optical defects, and that is in the context that BXT was completely unable to fix the stars. My scope is a 12-inch f/4 Newtonian. The image was captured with a coma corrector (spacing and tilt still being worked out) and the ASI071MC Pro, so this is color, and the processing was done on the color image as you see it.

I think NGC 891 is a very good measure of whether any of these sharpening methods do their jobs without introducing artifacts that are unexpected from the natural object. The dust filaments, to me, act as a fingerprint of what is real (and what is unreal, or unrealizable) with a 12-inch telescope. Not only are dark features sharpened, but blue regions are also improved and stand out, as should be expected if the detail is real rather than fabricated. Here I feel the correlation is remarkably good. And I have not yet done further stretching or contrast enhancement, not even noise reduction.

But take this image back to the original field of view, and I would argue the improvement at that scale is welcome and the features improved are completely non-controversial. The improvement would also stand up to considerable cropping, as one might want to do, and these images show that. In fact, my perusal of NGC 891 images here on AstroBin, using Planewave as a search modifier, pulled up images that I feel this image rivals or exceeds in every case. I believe the few such images I found were processed a while ago, so all used standard convolution-based methods (or none at all, for all I know). But I believe that BXT is creating no fake structures here, nor a level of clarity that is in itself unbelievable. I just can't wait to see what I get when the data isn't complete crap!

Edit to add: I never really addressed the fact that the "Detail" function in NXT is also very good at accomplishing this task, so it alone may serve as a means to an end.

Alan
Tim Hawkes
A quick question for folks: has anyone else tried BlurXT on very tight, small-scale objects such as planetary nebulae? I tried it on some of my luminance images of the Cat's Eye Nebula at 0.41 arcsec/pixel. It is indeed very small in angular size, but there it didn't seem to work very well, and I got much better results from conventional PI deconvolution. Could just be me; has anyone else looked at this sort of object?

So here conventional Deconvolution was better?! (932 x 2s frames from an f/4 300mm telescope at ~0.41 arcsec/pixel.) At least judged against professional and Hubble images, the conventional deconvolution does reflect details that are real, even if at some cost in added artifacts. The BlurXT image at 0.9 doesn't seem much, if at all, improved, and is less close to a true image.

(btw, BlurXT has worked spectacularly well on other objects such as HII regions etc.)

Tim
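For context on the comparison below: PixInsight's conventional Deconvolution tool is based on regularized Richardson-Lucy. A minimal, unregularized 1-D sketch of the core iteration (toy data, not Tim's workflow):

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Plain Richardson-Lucy: iteratively re-weight the estimate by the
    back-projected ratio of observed data to the re-blurred estimate."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate
```

On noiseless data this concentrates a blurred point source back toward a spike; on real data it amplifies noise as iterations grow, which is why the PI tool adds regularization and deringing on top, and why artifacts like the ringing mentioned above can appear when it is pushed.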

start



after PI deconvolution  and some additional processing (maybe taken too far?)



after BlurXterminator


Andy Wray
I used BXT to re-process an old Wizard Nebula image and ended up with the below. I had major collimation and pinched-primary issues, together with the captures being taken on wispy-cloud nights. Unfortunately, my collimation and pinched-optics issues could not be totally resolved by using BXT on the stars.



I used 0.25 sharpening on stars and 0.6 on non-stellar. I should probably have used 0.5 on non-stellar, as I think I have oversharpened.

FWIW:  the process I used was:
* dynamic crop of individual S, H and O images
* ABE applied to each (I was being lazy)
* BlurXterminator applied to all 3 channels using a custom PSF size determined by using PSFImage script
* I have a custom STF that I use, which is a small-to-medium stretch, applied to each channel to effectively achieve the same as a LinearFit but with the stars where I want them brightness-wise
* I did an SHO combination for one image, then a Foraxx combination for another image
* I did a 50/50 combination of the SHO and the Foraxx images
* Used StarXterminator to extract stars
* Used GHS, Curves transformation and ArcSinh stretch on the starless image to get it roughly where I wanted
* Used BackgroundNeutralisation with a range mask to get rid of colour cast that I had created on the background
* I used NoiseXterminator at about 0.4 on the starless image
* Used pixelmath to add the stars back in

Originals are here including non-BXT versions


Fiddling with my scope and captured the Wizard (Added a splash of colour)
Helpful Engaging