Andy Wray avatar
I believe I have an OK computer for doing PI processing; however, I wondered if people had other suggestions.

Mine looks like this:

* Windows 11
* Intel Core i7-8700 CPU with 6 cores and 12 threads (I would go with a Core i5-13600K nowadays if I had a choice)
* 64GB DDR4 3200 RAM (definitely fast enough and large enough for most of my needs; 32GB wasn't enough)
* NVIDIA RTX 2060 GPU, which provides decent acceleration for StarXTerminator, NoiseXTerminator and StarNet
* 500GB NVMe boot drive
* Cheaper 2TB NVMe data drive (maybe too small for all my data, but I have SSDs and classic hard drives for archival)

If the PI team could get their act together and start using CUDA acceleration on some of their core routines, then maybe we could save quite a bit of money on our computer hardware.
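PixInsight's core routines are not CUDA-accelerated today, so the following is only an illustration of what is at stake: a minimal sketch (assuming NumPy, CuPy and an NVIDIA card; none of this touches PixInsight itself) that times the same large matrix multiply on the CPU and on the GPU.

```python
# Hedged illustration: shows the kind of speedup dense array math can get on a
# CUDA GPU. Requires numpy and a cupy build matching your CUDA toolkit
# (e.g. cupy-cuda12x); it does not interact with PixInsight in any way.
import time
import numpy as np
import cupy as cp

N = 4096
a = np.random.rand(N, N).astype(np.float32)
b = np.random.rand(N, N).astype(np.float32)

t0 = time.perf_counter()
np.matmul(a, b)
cpu_s = time.perf_counter() - t0

a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)
cp.matmul(a_gpu, b_gpu)            # warm-up call (first-use initialization)
cp.cuda.Stream.null.synchronize()

t0 = time.perf_counter()
cp.matmul(a_gpu, b_gpu)
cp.cuda.Stream.null.synchronize()  # wait for the kernel to finish before stopping the clock
gpu_s = time.perf_counter() - t0

print(f"CPU: {cpu_s:.3f} s    GPU: {gpu_s:.3f} s")
```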
Stuart Taylor avatar
Andy Wray:
OK computer

great album!

;-)



I'm using a Windows 10 laptop: i9-10885H CPU @ 2.40GHz with 64GB RAM, NVIDIA GeForce GTX 1650 Ti (which nicely speeds up StarXT and NoiseXT), a 1TB SSD boot drive and 5TB of external data storage.
Andy Wray avatar
Stuart Taylor:
great album!


I'll download it … I never really listened to Radiohead, but I will now
Stuart Taylor avatar
WBPP runs reasonably fast on my system, but if I am crunching a lot of data, I have to prop the laptop up on a couple of paperback novels at left and right so the fans can breathe. Otherwise it sometimes overheats and shuts down.
Wei-Hao Wang avatar
CUDA is great, but unfortunately it only works on NVIDIA GPUs, and not everyone uses NVIDIA. There are at least two other flavors of GPU that are widely used in personal computers (AMD and Apple). If my understanding is correct, Russell Croman's programs can take advantage of different kinds of GPUs, not just NVIDIA. Photoshop is also like this. I think this is a much better approach for a broad user base.
Andy Wray avatar
Stuart Taylor:
WBPP runs reasonably fast on my system, but if I am crunching a lot of data, I have to prop the laptop up on a couple of paperback novels at left and right so the fans can breathe. Otherwise it sometimes overheats and shuts down.

Thankfully I don't have that issue as the CPU is water-cooled and I have 6 fans in the tower case.  I'll be honest, I wouldn't run PI on a laptop if I had a choice.
Daniel Arenas avatar
Hi!

I use a Dell Inspiron 15 5000 series laptop.
11th Gen Intel Core i7-1165G7 @ 2.80GHz
16.0 GB RAM (15.7 GB usable)
Windows 11 Home, 64-bit

And that's all. WBPP takes an eternity to stack 7 hours of data: almost 4 hours. But… that's what I have, so… patience is cheaper.

I'm thinking about buying a desktop computer with the goal of running PixInsight comfortably (my daughter is also insisting that it be able to play games), maybe this summer, so suggestions are welcome!

Clear skies!
dkamen avatar
Intel NUC with 16GB RAM, a 1TB NVMe root drive and a 2TB SSD data drive. Usually raw subs are read from the root drive, but derivative files are placed on the data drive.

I find I/O and RAM make a huge difference, processors not so much. This makes sense considering the workload characteristics of your typical preprocessing and initial stacking workflow.

If I were to upgrade my NUC, I would add more RAM.
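A rough way to put numbers on the I/O side of that observation is to time a large sequential write and read on each drive. Here is a minimal sketch; the file paths are hypothetical, so point them at directories on your own boot and data drives, and keep in mind that the OS page cache flatters the read figure.

```python
# Rough sequential-throughput check for comparing drives. Results are
# approximate: the OS page cache will inflate the read speed in particular.
import os
import time

def drive_throughput(test_file, size_mb=1024, chunk_mb=64):
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    t0 = time.perf_counter()
    with open(test_file, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())        # make sure the data actually reached the drive
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    with open(test_file, "rb") as f:
        while f.read(chunk_mb * 1024 * 1024):
            pass
    read_s = time.perf_counter() - t0

    os.remove(test_file)
    return size_mb / write_s, size_mb / read_s

# Hypothetical locations: one on the root/boot drive, one on the data drive.
for path in (r"C:\temp\pi_io_test.bin", r"D:\temp\pi_io_test.bin"):
    w, r = drive_throughput(path)
    print(f"{path}: write {w:.0f} MB/s, read {r:.0f} MB/s")
```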
Ben Houston avatar
Andy Wray:
* Intel Core i7-8700 CPU with 6 cores and 12 threads (I would go with a Core i5-13600K nowadays if I had a choice)

I recommend getting a faster multi-core CPU. You are already using an NVMe drive and enough RAM, and your GPU is okay. I find the CPU makes a huge difference for integration and registration. Just get the fastest CPU on this list that you can afford; click the “price performance” tab to see the ROI on cost. I am currently running a Ryzen 6950, but if I bought new now I would get a Ryzen 7950.
Rafał Szwejkowski avatar
Apple M1 is a beast in PI. Too bad memory and SSD upgrades for it are cost-prohibitive. I have a separate Intel box with tons of memory and disks for stacking.
TurtleCat avatar
Mac Studio Ultra with 64GB and a 2TB drive for me. I'm looking forward to the native Apple silicon version when they are able to release it: an instant 20-30% performance boost at least. GPU support is still a dream. I saw a post on the forum yesterday that they have some stuff (probably NVIDIA) trialed, but they won't do anything with it until sometime next year at the very earliest. Apple silicon GPU support probably won't happen until the distant future, if it happens at all.
Alan Brunelle avatar
3 year old home built windows 10 machine.

Intel Core i9 (not the latest gen)

128 GB of RAM, but 64 GB are dedicated to 2 RAM disks, as suggested by PI.

At the time I built it, it benchmarked at the top of the heap.

Its PI benchmark also tested near the top.

1TB NVMe boot drive, a couple of 8TB HDDs, and a few 1-2TB SSDs salvaged from an older home build.

I do all my image processing on the NVMe drive. Once done, I move the project folders to the other drives. I probably save too much. I need to delete unused subs after blinking, and I should probably dump all the calibrated, debayered, etc. subs as well; they eat up a ton of space and it's not so hard to repeat the integrations. Also, WBPP keeps getting better anyway… When my drives start filling up, I will likely clean that stuff out.
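For anyone in the same boat, a small script can show how much space each intermediate WBPP stage is actually eating before deciding what to delete. This is a minimal sketch: the output path and the subfolder names (calibrated, debayered, registered, and so on) are assumptions, so adjust them to however your WBPP output is laid out.

```python
# Reports disk usage per subfolder of a WBPP-style output directory so you can
# see which intermediate stages are worth cleaning out after integration.
# The root path below is hypothetical - point it at your own output folder.
from pathlib import Path

root = Path(r"D:\Astro\WBPP_output")

def folder_size_gb(folder: Path) -> float:
    # Sum the sizes of all files beneath this folder, in gigabytes.
    return sum(f.stat().st_size for f in folder.rglob("*") if f.is_file()) / 1024**3

for sub in sorted(p for p in root.iterdir() if p.is_dir()):
    print(f"{sub.name:<20} {folder_size_gb(sub):8.1f} GB")
```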

NVIDIA graphics card, but I can't seem to get CUDA acceleration working lately. The features that use it still move pretty fast anyway.

After almost 3 years it's still not obsolete!
Michael J. Mangieri avatar
Dell XPS 8950
Intel Core i9-12900K  (16 cores/24 logical processors)
32 GB RAM (may add more later)
NVIDIA GeForce RTX 3080 Ti
2 TB SSD
Michael Ring avatar
Apple 14" M1-based notebook with 32GB of RAM, and most of my subs on a network drive so that they are also accessible from my other Intel-based Mac.
Stacking in WBPP works reasonably fast; 10hr projects with IMX571 subs are in the range of 2 hours if I am not mistaken. The project itself is stored on the local SSD, not on the network.

I use the very same notebook in the field for checking the NINA instances on my telescopes and watching YouTube videos when bored; the notebook can easily stay on for a whole winter night without needing to charge.
Ian McIntyre avatar
I started using PI last year on a Clevo laptop running Ubuntu Linux.

i7, 10th Gen (2.8GHz… I think)
32GB DDR4 PC3200
2TB NVMe, 6800MB/sec

It worked fine. But then I wanted the latest Flight Simulator and decided to build a new desktop. I run separate 2TB NVMe drives to dual-boot Linux and Windows.

i9-11900K @ 3.5GHz
64GB DDR4 PC4400 (clocked to spec)
2TB NVMe, 7200MB/sec
Radeon RX 6900 XT

The productivity drive has Pop!_OS installed. I'm finding it stable, but a little clumsy. I may end up going back to Ubuntu or trying out Mint.
Michele Campini avatar
I have:
Ryzen 9 5900X
64 GB RAM at 4800MHz (but I want to upgrade to 128)
RTX 3080 (with the CUDA mod applied)

At the moment it works fine: about 5-10 secs for StarXTerminator (with 26MP images) and normally 4-5 minutes for the stacks, including drizzle, for 20-30 lights, 20 flats and dark masters.
David Moore avatar
I have a desktop I built about 7 years ago, so it's time for a rebuild. I have:
i7-4790K
GeForce GTX 1050 (bought 3 years ago)
DDR3 16GB
Samsung 840 EVO 1 TB SSD, Western Digital SSD and a couple of hard discs.
H97-Pro Gamer motherboard, socket 1150, ATX

BlurXTerminator takes 11 mins to run.

What I would be very interested to know is how important the graphics card is to the performance of things like BlurXTerminator, as I am not playing games that need ray tracing and fast refresh rates.
Michael J. Mangieri avatar
David Moore:
What I would be very interested to know is how important the graphics card is to the performance of things like BlurXTerminator, as I am not playing games that need ray tracing and fast refresh rates.


Generally, not important as far as I can tell; the CPU carries the load of processing the images. My processing computer is the Dell XPS (as shown above), and BXT takes about 2 minutes. However, I recently set up the CUDA software since I have an NVIDIA card that supports parallel processing via the GPU. Now BXT takes about 5-10 sec!! You might want to see if your graphics card supports CUDA.
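The RC-Astro tools run their neural networks on TensorFlow inside PixInsight, so one indirect way to check that a CUDA/cuDNN installation is healthy is to ask a regular Python TensorFlow install whether it can see the GPU. This is only a sanity check of the system-level CUDA stack, not of PixInsight's own bundled libtensorflow; treat it as a rough indicator.

```python
# Hedged sanity check: lists CUDA GPUs visible to a Python TensorFlow install.
# A GPU showing up here suggests the CUDA/cuDNN setup is sound; PixInsight's
# bundled libtensorflow is separate, so this is an indirect indicator only.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("CUDA-capable GPU(s) visible to TensorFlow:", [g.name for g in gpus])
else:
    print("No GPU visible - in a setup like this, BXT/SXT/NXT would fall back to the CPU.")
```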
Jerry Yesavage avatar
Dark Matters Astrophotography avatar
This, plus an NVIDIA RTX 3080 Ti.

John Hayes avatar
I use a MacBook Pro M2 with 64GB RAM and a 2TB SSD and it's plenty fast enough; although when I use the SubframeSelector to sort a few hundred 130MB frames, it takes a while, so I find something else to do while it churns. A super jazzed-up Win10 machine will be faster, but Windows machines are way too unstable for my liking.

John
Alan Brunelle avatar
John Hayes:
I use a MacBook Pro M2 with 64GB RAM and a 2TB SSD and it's plenty fast enough; although when I use the SubframeSelector to sort a few hundred 130MB frames, it takes a while, so I find something else to do while it churns. A super jazzed-up Win10 machine will be faster, but Windows machines are way too unstable for my liking.

John

I assembled one of those jazzed-up Win10 machines and it has worked well for me. The motherboard I used allowed for overclocking and I could push it pretty hard. But, as you say, it became more unstable as PI grew. When I dialed back the overclock to the motherboard's preset of +38% or lower, the instability went away. Honestly, I can't really point to any effect the marginal gains from overclocking accomplished; it's not like gaming, where an increased frame rate or something can be achieved.

My biggest upgrade has been to take my RAM from 64 GB to 128 GB.  And in doing so, I've dedicated more than the extra 64 GB to PI swap files which seems to help the PI benchmarks the most.  But I still take a good long break when doing the integration.

My next real gain will be to get my graphics card working again for the AI functions.

But at this point, living in the Pacific Northwest, I am data-limited and not processor-limited!

Alan
Michael J. Mangieri avatar
Alan Brunelle:
John Hayes:
I use a MacBook Pro M2 with 64GB RAM and a 2TB SSD and it's plenty fast enough; although when I use the SubframeSelector to sort a few hundred 130MB frames, it takes a while, so I find something else to do while it churns. A super jazzed-up Win10 machine will be faster, but Windows machines are way too unstable for my liking.

John

I assembled one of those jazzed-up Win10 machines and it has worked well for me. The motherboard I used allowed for overclocking and I could push it pretty hard. But, as you say, it became more unstable as PI grew. When I dialed back the overclock to the motherboard's preset of +38% or lower, the instability went away. Honestly, I can't really point to any effect the marginal gains from overclocking accomplished; it's not like gaming, where an increased frame rate or something can be achieved.

My biggest upgrade has been to take my RAM from 64 GB to 128 GB.  And in doing so, I've dedicated more than the extra 64 GB to PI swap files which seems to help the PI benchmarks the most.  But I still take a good long break when doing the integration.

My next real gain will be to get my graphics card working again for the AI functions.

But at this point, living in the Pacific Northwest, I am data-limited and not processor-limited!

Alan

I have used Windows machines exclusively, and have rarely had any issues with instability while processing data in PI. Once in a while the software gets stuck and I restart, so I tend to save my project periodically.

As for the upgrades … I find there is a limited return on the $$$ when you start to get overclocked and 'over-RAM'd'. My sweet spot is to not overclock at all and to install 32-64 GB of RAM and a fast CPU (Core i9). A good GPU also helps.
Jerry Yesavage avatar
Yeah to the stability. I got scared after two BSODs and stopped using the RAM disk. I only have 64 GB (only!) of RAM, but I think your 128 is justified; I see mine max out often. I have not tried the CUDA stuff again for fear of instability. This system is so fast I do not have time to get a cup of coffee, so I want to put the premium on stability. I do not want to rebuild the system… I have two different sets of disk images….
Alan Brunelle avatar
Michael J. Mangieri:
Alan Brunelle:
John Hayes:
I use a MacBook Pro M2 with 64GB RAM and a 2TB SSD and it's plenty fast enough; although when I use the SubframeSelector to sort a few hundred 130MB frames, it takes a while, so I find something else to do while it churns. A super jazzed-up Win10 machine will be faster, but Windows machines are way too unstable for my liking.

John

I assembled one of those jazzed-up Win10 machines and it has worked well for me. The motherboard I used allowed for overclocking and I could push it pretty hard. But, as you say, it became more unstable as PI grew. When I dialed back the overclock to the motherboard's preset of +38% or lower, the instability went away. Honestly, I can't really point to any effect the marginal gains from overclocking accomplished; it's not like gaming, where an increased frame rate or something can be achieved.

My biggest upgrade has been to take my RAM from 64 GB to 128 GB.  And in doing so, I've dedicated more than the extra 64 GB to PI swap files which seems to help the PI benchmarks the most.  But I still take a good long break when doing the integration.

My next real gain will be to get my graphics card working again for the AI functions.

But at this point, living in the Pacific Northwest, I am data-limited and not processor-limited!

Alan

I have used Windows machines exclusively, and have rarely had any issues with instability while processing data in PI. Once in a while the software gets stuck and I restart, so I tend to save my project periodically.

As for the upgrades … I find there is a limited return on the $$$ when you start to get overclocked and 'over-RAM'd'. My sweet spot is to not overclock at all and to install 32-64 GB of RAM and a fast CPU (Core i9). A good GPU also helps.

Michael,

Agreed. My motherboard was essentially set to some overclock from the get-go. I only ran into trouble when I pushed it further, and I only did so out of curiosity and learning. It worked fine for a year; then I just dialed it back to the original settings when it seemed to have issues. I did go with the i9.

I started with 64 GB RAM. Upgrading came a year or so later after running into a good value. Much cheaper than any other astro gear I have since bought. But, still, it may not have been worth it, since I'm not sure I can see a huge difference in performance. It did make a difference in the PI benchmark, though. It is incredible how much faster read/write on RAM disks is compared to even an SSD.

Still cannot get the graphics card to process the AI functions, but most complete their tasks within a few minutes. So I guess the setup is running well.