WBPP vs Sirilic

11 replies · 239 views
Ethan Sweet

Hey All!

I typically use Sirilic to stack my images. It’s always worked for me, and I never bothered to try WBPP. I recently gathered nearly 27 hours on M81 and M82, and when stacking in Sirilic I wasn’t happy with the results, so I figured, what the heck, I’ll try learning to use WBPP.

Holy smokes, it took nearly 12 hours to complete! Granted, it was 536 images from an ASI2600MC on an i5 laptop from 2018, but everything is stored on an M.2 drive. I did disable fast integration and ran WBPP with standard settings.

Sirilic did the same task in roughly 30 minutes under the same conditions.

WBPP did a noticeably better job stacking the images, and so far I’m happy with it (I haven’t processed it yet, but even the raw stack shows more detail).

From what I’ve seen online, these long processing times are not out of the ordinary.

My question is: can anyone familiar with both programs explain the 24x difference in processing time?

Tony Gondola

WBPP is certainly still slower than Siril, but the difference is much less than it used to be. I suspect that most of your issues with it are due to the fact that you’re running on 8-year-old hardware. Running multi-night sessions with different filters (I shoot mono) rarely takes more than an hour or two on my 2-year-old ZBook Studio (i7). Also, keep in mind that it’s doing much more than Siril does. Sorting and matching all the files according to session, filter and exposure is automatic. Combining like images from multiple nights, plate solving, and then delivering matched, aligned and cropped masters is really wonderful. I have found that the results are better: sharpness is similar, and calibration frames seem to work better, especially flats. All that after just 3 clicks is pretty hard to beat.

Marcelof

You should also try PI's FBPP; it was designed specifically to handle large numbers of frames 😉

Rodolphe Goldsztejn

And if you activate fast integration in WBPP, the script runs much faster, at the expense of local normalization.

Peák Gergely

WBPP gets slower, and not linearly, as the number of subs grows.

I have a desktop i9-14900K with 128 GB of RAM.

A normal 100-200 sub project is done in 30-60 minutes with everything configured for quality.

My Squid project was about 900 subs, and that took a good 10 hours.
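The numbers in this thread are enough for a rough back-of-envelope. Assuming runtime scales as t ≈ c·nᵏ (a guess for illustration; WBPP publishes no such model, and the two machines differ), the figures above give an exponent noticeably above 1:

```python
import math

# Data points from the posts above (subs, minutes); the 150-sub figure
# takes the midpoint of the "100-200 subs in 30-60 minutes" range.
n1, t1 = 150, 45
n2, t2 = 900, 600   # Squid project: ~900 subs, ~10 hours

# Assume t = c * n**k and solve for k from the two points.
k = math.log(t2 / t1) / math.log(n2 / n1)
print(f"apparent scaling exponent k ~ {k:.2f}")

# Predict the original poster's 536-sub run under the same rough model.
c = t1 / n1 ** k
print(f"predicted time for 536 subs ~ {c * 536 ** k / 60:.1f} hours")
```

With k around 1.4 the model puts a 536-sub run in the several-hour range on this class of desktop, so the 12 hours reported on a 2018 i5 looks consistent with superlinear scaling compounded by much slower hardware.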

Tony Gondola

Update!

With the release of the new AMSP script by Cyril Richard there’s no longer any reason to use Sirilic. The new script works just like WBPP, taking multiple file sets, even from multiple nights, and sorting it all out. You don’t even need to use the usual Siril folder structure. I tested it yesterday against WBPP and it completed in 25% of the time with very slightly sharper results.

DeepSpaceAstro has a video with all the details:

https://www.youtube.com/watch?v=80LwMI-WEJE

Ethan Sweet

Tony Gondola · Apr 28, 2026, 12:36 PM

With the release of the new AMSP script by Cyril Richard there’s no longer any reason to use Sirilic. […]

Awesome that’s cool to hear! I will have to check that out. I was happy with the results I got from WBPP but maxing my CPU out for 12 hours straight concerns me haha

Tony Gondola

I’ll be curious as to how long it takes the new Siril script to run the same data.

John Hayes

My projects are typically 500-1200 frames from an IMX455 sensor, so a typical stacking session with WBPP can take anywhere from 4-8 hours. I never want to tie up my primary PC (a MacBook Pro) with that job, so I bought a “cheap” Ryzen 9 powered Minisforum NUC-style PC specifically for that task. It’s fast, headless, and connected directly to my NAS system that receives data from my scopes. I have it networked so that I can turn it on/off and sign into it from wherever I happen to be. I simply sign in, load it up with a job, press “RUN”, and go about the rest of my life. Usually, by the time I feel like I have more time for the rest of the processing, the data is all stacked and waiting for me.

This setup has been life changing. It’s easy, it feels seamless, and I’ve almost always got stacked data sitting around waiting for more attention. The only “minor” problem (besides the cost) is that with my current internet service, the upload speed is unacceptably slow. The stacked data files are pretty large, and if I shoot all 7 filters, it can take 4-5 hours to download them when I’m not home. I work around that by creating a raw RGB combination so that I have fewer files to transfer. This summer, I’ve got to try to address that problem. Hopefully I can find a symmetric fiber service that doesn’t cost an arm and a leg.
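The transfer bottleneck above can be sketched with some quick arithmetic. The assumptions here are mine, not from the post: masters saved as 32-bit float FITS, no drizzle, and an effective upload of about 1 Mbit/s; the 9576 x 6388 frame size is the IMX455’s published resolution.

```python
# Back-of-envelope for moving seven stacked masters off a home NAS.
# Assumed (not stated in the post): 32-bit float FITS, no drizzle,
# ~1 Mbit/s effective upload from the home connection.
width, height = 9576, 6388   # IMX455 full-frame resolution
bytes_per_px = 4             # 32-bit float
filters = 7

master_mb = width * height * bytes_per_px / 1e6
total_mb = master_mb * filters
print(f"per-master size : {master_mb:.0f} MB")
print(f"7-filter total  : {total_mb / 1000:.1f} GB")

upload_mbps = 1.0            # assumed effective upload speed
hours = total_mb * 8 / upload_mbps / 3600
print(f"transfer time   : {hours:.1f} h at {upload_mbps} Mbit/s")
```

Under those assumptions the seven masters come to roughly 1.7 GB and just under four hours of transfer, which lands in the ballpark of the 4-5 hours mentioned; a symmetric fiber upload would cut that to minutes.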

I recognize that this isn’t an ideal solution for everyone, but if you are committed to processing a lot of data, it’s a solution worth considering regardless of how you stack your data. Offloading the really time intensive jobs to a dedicated computer works really well.

John

Spacey

Ethan Sweet · Apr 18, 2026, 04:46 AM

i5 laptop from 2018

You didn’t say how much system RAM you have, but given the laptop’s age, if it’s 16 GB or less you would see a large reduction in processing time if you upgraded to 32 GB or more.

Wei-Hao Wang

WBPP sounds incredibly slow, based on many people’s descriptions over the past few years. I still don’t get why it is so slow. Does it try to plate-solve every sub? PI’s plate solving is indeed very slow; if WBPP does it on every sub, no wonder the whole process can take very long. PI’s local normalization is also a slow process, especially on a slower or fewer-core computer. If WBPP includes local normalization as part of its standard configuration, that may explain the slowness. I found the NormalizeScaleGradient script much faster, but I doubt it can be integrated into WBPP to replace local normalization.

Personally, I only use WBPP for calibration. I do registration, local normalization, and integration separately. I only plate solve when it is absolutely necessary (like before PCC/SPCC, or before Mosaic by Coordinates). Overall, I have rarely needed more than 10 hours to stack a target, even for my 50-hour projects with many subs on a slower, older Intel computer.

Frank Alvaro

Wei-Hao Wang · Apr 29, 2026, 01:04 AM

WBPP sounds incredibly slow, based on many people’s descriptions over the past few years. I still don’t get why it is so slow.

You should post a question on the PixInsight Forum…oh, wait…
