Self-built gradient-removal pipeline: comparison vs PixInsight WBPP (Coma Cluster + Leo Triplet, identical data)

Dirk Bausch:

Hi all,

Over the past months I've been developing a custom reduction pipeline focused on background gradient removal. The motivation was simple: I wanted to stretch faint structures (galaxy halos, tidal features, IFN) more aggressively without amplifying background gradients or other artifacts.

Existing tools didn't quite get me there. I find DBE tedious and error-prone in practice: manual sample placement is a lot of work and easy to get wrong, especially around large galaxies or extended nebulosity. GraXpert and multiscale gradient tools are faster but often don't give me the result I want, and it's hard to understand why the algorithm made the choices it did. And none of them address the per-sub flat-fielding stage where many gradient problems originate.

So I built something based on the ABYSS approach from professional deep-imaging surveys (Borlaff et al. 2019, A&A 621, A133): multi-pass background fitting with source-masked sky flats per sub, and conservative gradient removal designed to preserve extended low-surface-brightness signal, implemented with GPU acceleration.
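To make the core idea concrete, here's a stripped-down NumPy sketch of one source-masked background-fit pass: iteratively mask pixels above kappa-sigma of the current fit, then refit a low-order polynomial. This is only an illustration of the general technique, not the production code (no GPU path, no sky-flat stage, and the real thing runs multiple passes per sub):

```python
import numpy as np

def fit_background(img, n_iter=3, kappa=3.0, deg=1):
    """Iteratively mask sources and fit a low-order 2D polynomial background.

    Toy version of source-masked background fitting: pixels more than
    kappa*sigma from the current fit are treated as sources and excluded,
    then the fit is repeated on the remaining "sky" pixels.
    """
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # Design matrix: plane by default, optional quadratic terms.
    terms = [np.ones_like(img, dtype=float), xx, yy]
    if deg >= 2:
        terms += [xx * yy, xx**2, yy**2]
    A = np.stack([t.ravel() for t in terms], axis=1).astype(float)
    b = img.ravel().astype(float)
    mask = np.ones(b.size, dtype=bool)  # True = usable background pixel
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A[mask], b[mask], rcond=None)
        model = A @ coef
        resid = b - model
        sigma = np.std(resid[mask])
        mask = np.abs(resid) < kappa * sigma  # re-mask sources each pass
    return model.reshape(img.shape)
```

Subtracting `fit_background(sub)` from each sub (or dividing, for a multiplicative sky flat) is the gist; the low polynomial order is what keeps extended faint structure from being eaten by the fit.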

The practical result: backgrounds are flat enough that aggressive stretching no longer reveals processing artifacts. Faint structures survive the background fit instead of being subtracted with it.

Comparison images — Pipeline vs PixInsight WBPP on identical subs:

[Images: Coma_Abyss.jpg | Coma_Wbpp.jpg]

The Coma Cluster comparison shows the most striking visual difference — hundreds of cluster member galaxies that are barely visible in the WBPP output emerge cleanly in the pipeline output, along with faint diffuse signal in the cluster core consistent with intracluster light reported in deep imaging surveys.

[Images: M65_Abyss.jpg | M65_Wbpp.jpg]

The Leo Triplet output shows similar improvements with documented structures — the NGC 3628 stellar halo and southern tidal plume are clearly visible in the pipeline output but not in WBPP at identical stretch.

Both comparisons use the same calibrated subs after identical quality filtering. Stretching applied identically. No BXT, NXT, or any post-processing on either side — what you see is the stack output directly.

Quantitative summary:

  • 5σ detection depth: +0.7 to +1.2 mag deeper than WBPP (Gaia DR3 photometric calibration)

  • Background spread: 8-14× flatter

  • For Leo Triplet: 95% of the pipeline's additional detections have a direct PanSTARRS DR1 counterpart, confirming they're real sources rather than artifacts

  • Pipeline tested with pure-noise input (no spurious structures created) and mock-source injection (correct faint-source recovery)
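For anyone wondering what the counterpart check involves: it boils down to nearest-neighbor matching within a radius. A toy version in flat coordinates is below; the actual check uses proper sky coordinates (e.g. astropy's `SkyCoord.match_to_catalog_sky`), so treat this purely as a sketch of the logic:

```python
import numpy as np

def match_fraction(detections, catalog, radius):
    """Fraction of detections with a catalog counterpart within `radius`.

    Toy positional cross-match: coordinates are treated as flat (x, y)
    offsets in the same units; no spherical geometry.
    """
    det = np.asarray(detections, dtype=float)
    cat = np.asarray(catalog, dtype=float)
    # Pairwise distance matrix, shape (n_detections, n_catalog).
    d = np.sqrt(((det[:, None, :] - cat[None, :, :]) ** 2).sum(axis=2))
    # A detection counts as matched if its nearest catalog source is close enough.
    return float((d.min(axis=1) <= radius).mean())
```

The 95% figure above is `match_fraction` computed over the detections that appear in the pipeline stack but not in the WBPP stack, against PanSTARRS DR1 positions.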

Setup: SkyWatcher Esprit 120ED, QHY268M Mono, Bortle 5 skies. Coma: 432×120 s subs. Leo Triplet: 376×120 s L after quality filtering.

For now I'd genuinely appreciate critical feedback. Especially:

  1. Does anyone see issues in the comparison images that I'm missing?

  2. Suggestions for additional validation targets?

  3. Experiences with similar approaches?

Happy to discuss methodology in the thread.

Clear skies, Dirk

Tony Gondola:

I would really like to see your workflow for this. You're right, it's the key to getting everything out of the data, especially for those of us who image under less-than-pristine skies.

Jake Turner:

I also would love to have more insight into the workflow. The results on first look seem quite impressive! I have an image of the Leo Triplet as well where I just could not manage to get the tidal tail to show. I would be really interested in seeing what this method could do with my data.

Validating on an IFN-rich target would be nice to see as well. Polaris is pretty rich with it and always accessible, though I know imaging Polaris can come with other challenges you may not want to deal with for this purpose.

Really interested in seeing how this develops!

Vin:

Those comparison results look very strong, and I'm with you on the potential for improved gradient-reduction tools. I have about 14 hrs of data on a very faint IFN field in Virgo if you want to try running it on that. Another good candidate, if you want to image it yourself, would be M104 in a wide field: there's a dimmer halo and IFN around it, but it's a tough thing to extract without either blowing things out or getting artefacts.

Victor Van Puyenbroeck:

The approach looks promising but your comparison images are not a good example in my opinion.

You should show:

  • The original WBPP image.

  • How your algorithm performs against other gradient-removal tools like DBE, MSGC, or GraXpert.

  • A ground truth reference image for the background that shows any large scale low surface brightness features that should be preserved during gradient correction.

andrea tasselli:

WBPP is not a gradient-removal tool, so those comparisons are meaningless. Compare against the output of known gradient tools and we'll see how it goes…

AstroGadac:

Curious to try this, as I have some images with remaining gradients I just can't get rid of, living under Bortle 8 skies.