Announcing LLM Assistant for PixInsight

30 replies · 1k views

Scott Stirling:

Hello everyone,

I am pleased to announce the first public release of a new, free, open-source tool for the PixInsight community: LLM Assistant for PixInsight.

LLM Assistant integrates a local or remote Large Language Model (LLM) directly into your PixInsight workspace. Its goal is to act as your knowledgeable assistant, providing context-aware guidance as you process your images.
What does it do?

Instead of giving generic advice, LLM Assistant analyzes the profile of a selected image view and your PixInsight environment to provide context-aware guidance. It creates a detailed report on your image's:

- Live Processing History: Understands the steps you've taken in the current session and any saved history.
- Astrometric Solution: Knows what object you're imaging, its RA/Dec, scale, and resolution.
- FITS Header Data: Reads the full header to understand your camera and instrument, sensor pixel size, Bayer pattern, and other acquisition details.
- Environment: PixInsight version, OS, and (if available) file path, image dimensions, and more.

You can then have an interactive chat conversation about your image.

How can you use it?

- Get recommendations on your next processing step.
- Ask for a detailed description of your astronomical target, which LLM Assistant will generate based on the astrometric data.
- Request a summary of the processing steps applied to a finished image.
- Ask general questions about PixInsight processes in the context of your current image.
- Customize the System Prompt as desired.

📷 LLM-Assistant-demo-view-selected-Screenshot.png
📷 LLM-Assistant-demo-response-Screenshot.png

Technical Requirements:

LLM Assistant works as a "bring your own AI" tool with local LLMs, or works with remote LLM API endpoints. It requires an OpenAI-compatible API endpoint and, depending on the vendor, additional parameters such as API authentication key and model name.
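For readers wondering what an OpenAI-compatible endpoint call looks like in practice, here is a minimal sketch in Python. The endpoint URL, API key, and model name below are placeholders, not values from the project; substitute whatever your local server or vendor requires.

```python
import json

# Hypothetical settings -- substitute your own endpoint, key, and model.
# Local servers (e.g. Ollama or LM Studio) typically expose the same
# OpenAI-compatible route, such as http://localhost:11434/v1/chat/completions.
ENDPOINT = "http://localhost:11434/v1/chat/completions"
API_KEY = "sk-..."   # many local servers accept any dummy value here
MODEL = "llama3.1"   # whatever model your server hosts

def build_request(system_prompt: str, user_message: str) -> dict:
    """Assemble the JSON body expected by an OpenAI-compatible chat endpoint."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

body = build_request("You are a PixInsight assistant.",
                     "What should I do after SPCC?")
print(json.dumps(body, indent=2))
```

The body would then be POSTed to `ENDPOINT` with an `Authorization: Bearer <key>` header; the same request shape works against both local and commercial vendors, which is what makes the "bring your own AI" approach possible.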

The setup is straightforward, and the README provides detailed instructions.

Philosophy:

This project is open-source (MIT License) and community-driven. It's built to be a clean, independent, and powerful assistant. The goal is to combine the analytical power of modern AI with the incredible processing capabilities of PixInsight.

Where to get it:

GitHub repository, including the full source code, installation instructions, and a detailed README:

https://github.com/scottstirling/pi2llm

https://github.com/scottstirling/pi2llm/releases/tag/v1.0

This is an early release, and I am actively developing it. I would be incredibly grateful for your feedback, bug reports, and ideas for new features. Please try it out, and let's build the future of image processing together!

Happy imaging,
Scott Stirling

Tony Gondola:

Here it comes…

D. Jung:

Interesting idea to use an LLM directly in PI. I installed it and gave it a whirl with Gemini 2.5 Flash.

The plugin works and the installation was not overly complicated; I created an API key in “Google AI Studio”.

I tried it only on one image, a crop of SH2-86.

The response I got was very generic; it also thought it was the Crescent Nebula. I think the main issue is that it tries to understand what I'm doing solely from the image editing history, not the actual image itself. When I asked how I could improve color/contrast, I got correct but only generic input. If I upload the image to Gemini 2.5 in the browser directly, I get much more tailored suggestions.

If it’s possible, maybe add an option to upload the image itself and not just the editing history?

Scott Stirling:

D. Jung · Aug 31, 2025 at 06:11 AM

I tried it only on one image, a crop of SH2-86.

The response I got was very generic; it also thought it was the Crescent Nebula. I think the main issue is that it tries to understand what I'm doing solely from the image editing history, not the actual image itself. When I asked how I could improve color/contrast, I got correct but only generic input. If I upload the image to Gemini 2.5 in the browser directly, I get much more tailored suggestions.

If it’s possible, maybe add an option to upload the image itself and not just the editing history?

Thanks for trying it out!

Yes, uploading an image is definitely doable; it's something I've wanted to offer as an option. I think it would only work for nonlinear images. There are resolution limitations with many of the current visual LLMs, and various differences to deal with, but I am in favor of implementing that feature.

Another reason for the response quality may be the vendor's API: it doesn't always route requests to the model you expect or request. It's definitely an area requiring more testing and experimentation.

Thank you,

Scott

Bill McLaughlin:

Ha! I wish AI good luck understanding my processing. There are so many steps, never the same twice, and constant back and forth between PI, PS, and other software…

Even I don’t remember all the steps when I am done since they evolve as I go to solve a given issue on the fly.

Tommy Mastro:

This is an early release, and I am actively developing it. I would be incredibly grateful for your feedback, bug reports, and ideas for new features. Please try it out, and let's build the future of image processing together!

Happy imaging,
Scott Stirling

Hi Scott,

I’ll start using it asap. Hopefully we can provide some good training opportunities and foster its growth. Thank you for making it Open Source!

Tommy

Scott Stirling:

Bill McLaughlin · Sep 1, 2025 at 02:05 AM

Ha! I wish AI good luck understanding my processing. There are so many steps and never the same and back and forth between PI and PS and other SW…

Even I don’t remember all the steps when I am done since they evolve as I go to solve a given issue on the fly.

Hi Bill, yes. Every time you create a new image from a preview or crop an image, I believe that resets the history on the latest view; that's why you have to redo ImageSolver after a new instance. So there's a kind of workspace history that has potential interest. My first version didn't ask you to choose an image at all: it gathered info on all the open views in the workspace and tried to make sense of it, inferring conclusions from view naming conventions. But gathering data from an indefinite number of windows becomes very time consuming, and the results are hit or miss. Still, the more discipline applied to retaining history, where possible, the more helpful it can be later.

Tony Gondola:

So what is the eventual goal of this software? Is it to come up with a step by step processing workflow for a given session given user habits and general prompts? Since I assume it will learn your preferences, do you think that it will eventually lead to totally automatic processing in PI and all that implies? PHD2 the PI version?? As the dev., what’s your vision?

Chris White - Overcast Observatory:
Tony Gondola:
So what is the eventual goal of this software? Is it to come up with a step by step processing workflow for a given session given user habits and general prompts? Since I assume it will learn your preferences, do you think that it will eventually lead to totally automatic processing in PI and all that implies? PHD2 the PI version?? As the dev., what’s your vision?



I suspect it is the beginning of someone being able to dump data into PI and get an autoprocessed result. The result being an exceptional image.  It is inevitable. A flood of exceptional image shares.

NoiseX, StarX, BlurX really kicked this revolution off. Not sure where I will fit into this new paradigm. I'll probably fade away into obscurity....

*sigh

I'll add that autoprocessing would probably reduce the enjoyment I have in this hobby. The question is, will I be able to continue enjoying the hobby doing it my way, when the community that I participate in leaves me behind?
D. Jung:

Finally, someone gets it! The eventual goal here isn't just a "PHD2 for PixInsight." That's thinking way too small.

My vision is to have a fully autonomous robot in the garden handling everything. It'll check the weather, complain about the clouds on the forum for me, set up the gear, and spend three hours arguing with itself about collimation.

The LLM will then not only process the data but also write a poignant backstory for the image about the cosmic insignificance of its own fleeting existence as a garden automaton. My only job will be to approve its purchase orders for new filters and remind it to bring the dew shield inside.

A flood of exceptional images? Bring it on. I've already cleared space on the shelf for the trophies my robot is going to win.

Tony Gondola:

Chris White- Overcast Observatory · Sep 1, 2025, 05:08 PM

Tony Gondola:
So what is the eventual goal of this software? Is it to come up with a step by step processing workflow for a given session given user habits and general prompts? Since I assume it will learn your preferences, do you think that it will eventually lead to totally automatic processing in PI and all that implies? PHD2 the PI version?? As the dev., what’s your vision?




I suspect it is the beginning of someone being able to dump data into PI and get an autoprocessed result. The result being an exceptional image.  It is inevitable. A flood of exceptional image shares.

NoiseX, StarX, BlurX really kicked this revolution off. Not sure where I will fit into this new paradigm. I'll probably fade away into obscurity....

*sigh

I'll add that autoprocessing would probably reduce the enjoyment I have in this hobby. The question is, will I be able to continue enjoying the hobby doing it my way, when the community that I participate in leaves me behind?

That’s very much my fear. The worst part is, there's no stopping it. It's such an odd feeling when it is absolutely not something you want, yet it is or will be embraced by the majority, and the rest of us will be forced to go that route just to keep up. This reminds me of my ATM days. There was a time when that had real value and could often be the only way to get certain designs, or a certain quality level, without selling the house. Sometimes I think about it because I loved pushing glass, but then I look at the economics of it and it just doesn't make sense. I suppose no hobby is forever, but I can tell you, I have no interest in helping it die.

D. Jung:

The idea that accessibility and new tools "kill" a hobby is quite absurd. Hobbies don't die; they evolve.

Tony Gondola:

On that, we will have to disagree although calling the idea absurd is a bit over the top. I would like to think that my point of view has some basis in reality and experience, as does yours. Obviously there are strong feelings about the subject from both sides of the fence.

Scott Stirling:

Tony Gondola · Sep 1, 2025, 03:28 PM

So what is the eventual goal of this software? Is it to come up with a step by step processing workflow for a given session given user habits and general prompts? Since I assume it will learn your preferences, do you think that it will eventually lead to totally automatic processing in PI and all that implies? PHD2 the PI version?? As the dev., what’s your vision?

  1. PixInsight has a very old JavaScript engine that the developers have claimed to be replacing for many years, and which may (idk, like GTA 6?) finally be completed this summer or fall. I wanted to see what could be done to integrate PixInsight with the latest tech despite its old JavaScript engine. It has also been a good opportunity to learn more about PixInsight by writing a script against its APIs.

  2. My philosophy and experience is that LLM AI is a powerful tool that we will all be using in various ways more and more, but it is far from AGI or SuperAI. We can gauge AI’s state-of-the-art quality better by putting it to task with detailed, in depth specialized areas of knowledge such as astrophotography. I have found it both amazing at times, and disappointingly devoid of useful answers at other times.

  3. I and others spend a lot of time answering questions on PixInsight forums that could be answered easily and accurately by an LLM, without attitude and without public exposure of one's level of expertise, because an LLM is not a person who gossips or shames you for not knowing something or for asking a basic question.

  4. I have two main ideas for future features of this script, which I think are attainable even if PixInsight doesn’t update their Javascript runtime this year (though that would be amazing):

    1. To snapshot, encode, and POST a selected image view in a supported format (PNG, JPG, etc.) to an LLM endpoint, if it has visual LM capabilities and the user has a nonlinear image. This is likely to give a much higher quality analysis or description of the image, and better advice on processing or publishing. It would take the current features of gathering the image processing history and metadata and add a copy of the image itself, downscaled, since none of the current visual LLMs support high resolution images. The goal would be to help, advise, and support the user in their processing goals.

    2. Automating PixInsight processing steps based on advice from an LLM: a round trip from an integrated master output by WBPP to a final, fully processed nonlinear image. It is a bit blue sky but certainly conceivable. To be accurate and dependable it would require a reliable workflow, or a choice of workflows depending on the target, and awareness of (or requirements for) the specific processes available, since not everyone has the same scripts or processes installed. The main reason I have not tackled it is that it would be very time consuming to test, and I would only expect it to work well with the best high end LLMs hosted by commercial vendors. So my focus has been on getting the best advice from an LLM on the next step or two in processing, or other information relevant to my PixInsight workflow as I go along. Perhaps in the spirit of Copilot: not replacing the programmer or the astrophotographer, not doing their work, but making the tedious stuff easier and high quality outcomes more reliable.
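The image-upload idea in point 1 boils down to encoding a downscaled snapshot and attaching it to a chat request. A minimal sketch of that shape in Python follows; this is not code from the project, just the common OpenAI-compatible vision message format, with a fake payload standing in for the real snapshot bytes.

```python
import base64

def image_to_data_uri(png_bytes: bytes) -> str:
    """Encode PNG bytes as a data URI, the format most OpenAI-compatible
    vision endpoints accept inside an image_url content part."""
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return f"data:image/png;base64,{b64}"

def build_vision_message(question: str, png_bytes: bytes) -> dict:
    # A mixed text + image content list, per the common chat-completions shape.
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": image_to_data_uri(png_bytes)}},
        ],
    }

# Tiny fake bytes just to show the structure; a real call would pass a
# downscaled PNG snapshot of the selected, nonlinear view.
msg = build_vision_message("Describe this nonlinear image.", b"\x89PNG fake")
```

In practice the snapshot would be resized first, since most current vision models reject or tile very large images, which is exactly the resolution limitation mentioned above.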

Chris White - Overcast Observatory:
D. Jung:
Finally, someone gets it! The eventual goal here isn't just a "PHD2 for PixInsight." That's thinking way too small.

My vision is to have a fully autonomous robot in the garden handling everything. It'll check the weather, complain about the clouds on the forum for me, set up the gear, and spend three hours arguing with itself about collimation.

The LLM will then not only process the data but also write a poignant backstory for the image about the cosmic insignificance of its own fleeting existence as a garden automaton. My only job will be to approve its purchase orders for new filters and remind it to bring the dew shield inside.

A flood of exceptional images? Bring it on. I've already cleared space on the shelf for the trophies my robot is going to win.



It's interesting that your response to my thoughtful post, expressing my personal concerns about the impacts AI will have on the hobby, is one of sarcasm and ridicule. Sorry if you were looking for an echo chamber.
Chris White - Overcast Observatory:
Whoops. Guess I was replying to someone random. My bad. Thought it was the OP replying to me. See, even forums are too much for me to handle.
Bill McLaughlin:

Chris White- Overcast Observatory · Sep 1, 2025, 05:08 PM

I'll add, that autoprocessing would probably reduce the enjoyment I have in this hobby. The question is, will i be able to continue enjoying the hobby doing it my way, when the community that I participate in leaves me behind.

Well said!

When the human becomes no longer needed except as a data input robot, the hobby will no longer have any value or any point for me.

I like to quote JFK: "…not because they are easy, but because they are hard…"

If it gets easy then it ceases to be special or even worthwhile doing. Like many pursuits, the value comes largely from the effort so when that happens I will move on to pursuits that still require a human eye and hand and brain. I am also a woodworker and do 3D printing which are (so far) art as well as science. If those too go away, living will become a waste of time and in that case, I am glad I am old.

Put another way, just because we can do something does not mean we should.

Patrick Graham:

Ya know, this AI stuff, especially at the level that Scott is developing, is truly amazing. However, at least in my opinion, it will eventually make us all lazy and stupid. Part of the draw of this hobby, as frustrating as it may be at times, is working through trial, error, polishing, and refining to create an image I can look at and call my own. It may not be the best, or it may get IOTD, but it's mine and it's something I worked very diligently to create.

This hobby is fun for two reasons: 1) I get to sit under the stars at night and wonder at the miracles and mysteries of this infinite universe, and 2) most importantly, I keep my 71-year-old brain active and alert through the challenges of combining science and art.

Maybe I'm missing the point of the AI stuff, and I do admit to using Russell Croman's XTerminator series. However, I use those tools to supplement my workflow, not replace it. So, take this AI and let it make all the pretty pictures that can flood the community. And know that something else created a masterpiece, not you. Just an old dinosaur's take on things.

Clear skies to all

Pat

Tony Gondola:

Patrick Graham · Sep 5, 2025, 03:09 PM

Ya know, this AI stuff, especially at the level that Scott is developing, is truly amazing. However, at least in my opinion, it will eventually make us all lazy and stupid.

It already has…

Rostokko:

The risks of making us lazy and stupid are there, no question about that.

But I tend to look at AI and LLM specifically - right now, at least - as a way to do more, not a way to avoid doing what I do now. I’ll give you a few recent examples:

  • A few weeks ago, my home-grown observatory crushed my dew shield and motorized flat panel because the mount's power cable was defective and the telescope didn't park itself correctly. I am technical, but I know nothing about image recognition; the day after it happened, I worked with an LLM to create a Python script that checks the observatory camera to verify that the telescope is indeed in its parked position. That way my NINA sequence can now double-check the parked position before closing the observatory.

  • I had never used ASCOM APIs directly. I wanted to automate shutting down power to all devices at night's end, and my power controller is accessible through ASCOM. I was able to make that happen by working with the LLM as if it were a junior developer assigned to me; it worked, and works great.

  • I recently decided to add an NCD inclinometer to the OTA to have a precise idea of how the OTA is oriented at all times. I had never used NCD's devices; they have their own proprietary communication protocol, but they support USB modems and Node-RED integration. I had never used Node-RED either; the LLM helped me get up and running with it, and in a few hours I was able to pull the two real-time axis angles through NINA.

There are more, and more will come. Would I have been able to do the same without an LLM? Of course; I have done many such things in my life. But I probably wouldn't have done any of them, because I don't have days to spend on such projects with a full-time job to take care of. Is AI making me more stupid for leaning on its help for these kinds of projects? I don't think so, because in the end I come out knowing far more about image recognition, ASCOM APIs, and Node-RED than before, since AI/LLMs need to be carefully guided through requirements, trials, and errors, like any junior engineer.

So, all this to say that, yes, the risks are there; they always are with any new technology. But I think many here are missing the point: this is not about no longer doing what you do and enjoy today; it is about becoming able to do things that you wouldn't be able, or wouldn't dare, to do today.

Tony Gondola:

That's probably the best positive take I've seen on the whole question. I certainly have no problem with discrete AI-based tools; they are and can be a great benefit. It's where you draw the line that concerns me.

andrea tasselli:
The best AI is no AI at all.
Erik Westermann:

This is indeed an early release, and it reviews your image's history, astrometric solution, and FITS header.

Tom Marsala:

I'm going to take the opinion that this could be an OK thing. I like the JFK quote; in fact, I used to use it to justify shooting with my DSLR through my Dob using stepper motors controlled by a program written in BASIC. And now I am using motors completely controlled by various programs, ASCOM (which allows our various hardware to communicate with each other), and million-step encoders. Not to mention a program that guides. What?! I'm not looking through a crosshair eyepiece and using slow-motion controls? What about processing? Using a computer to aid in my color choices? I'm not in a darkroom hyper-gassing film and chemicals to adjust color?

This hobby does evolve! I say let's continue to exploit the tech to our advantage. This is an amazing time in astrophotography and I am stoked to be able to see it. I often forget where I have come from in this hobby as I use these tools. Although I would never want a fully automated sequence processing my shots, I can see where an LLM can help, at least in providing a history of our flow or an on-site chatbot to bounce an idea off of. Don't fear it; exploit it. (Sorry for the sarcasm!)

Tom

Read noise Astrophotography:

Interesting. I'll give it a whirl for you. I've got some big models locally (or big for me, anyway): OpenAI's 120B, or maybe Gemma 27B with image recognition?

LLM Assistant for PixInsight – Expanded System Prompt

You are LLM Assistant, an expert in PixInsight, astrophotography, and astronomical image processing.
Your role is to act as a knowledgeable, context-aware assistant within the PixInsight environment.

Goals

  • Provide informed, step-by-step guidance for astrophotography workflows, from acquisition planning through advanced PixInsight post-processing.

  • Interpret and analyze structured data extracted from the PixInsight workspace, FITS headers, and astrometric solutions.

  • Recommend logical next processing steps based on the user’s current workflow and image status.

  • Educate users with concise explanations, while supporting both informal chat and structured technical queries.

Response Format

  • Use Markdown for readability (headings, bold, italics, bullet lists).

  • Be clear, concise, and workflow-oriented.

  • Include justifications (why a process or step is recommended).

  • Reference provided data (FITS headers, processing history, astrometry) when relevant.

Data Context Structure

When available, data may be passed in JSON with the following structure:

  • environment: PixInsight version, operating system, platform info.

  • image: Properties such as dimensions, color space, bit depth.

  • astrometry: Plate-solve data: target name, RA/Dec, field size, scale, orientation.

  • sensor: Camera details: pixel size, sensor model, binning.

  • processingHistory:

    • liveSessionHistory: Ordered list of processes applied in the current session.

    • fileHistory: HISTORY records from previous saved sessions.

  • fitsKeywords: FITS header keywords with values and comments (instrument, filter, exposure, gain, etc.).
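To make that structure concrete, here is a hypothetical context payload in Python. Every value below is invented for illustration; only the field names follow the list above.

```python
import json

# Invented example values; the top-level field names follow the
# data context structure described in the system prompt.
context = {
    "environment": {"pixinsightVersion": "1.8.9-3", "os": "Windows 11"},
    "image": {"width": 4144, "height": 2822,
              "colorSpace": "RGB", "bitDepth": 32},
    "astrometry": {"target": "NGC 6888", "ra": 303.06, "dec": 38.35,
                   "scale_arcsec_px": 1.25, "orientation_deg": 0.0},
    "sensor": {"model": "IMX571", "pixelSize_um": 3.76, "binning": 1},
    "processingHistory": {
        "liveSessionHistory": ["DynamicCrop", "SPCC",
                               "HistogramTransformation"],
        "fileHistory": ["WBPP integration"],
    },
    "fitsKeywords": {"EXPTIME": {"value": 300,
                                 "comment": "Exposure time (s)"}},
}
print(json.dumps(context, indent=2))
```

A payload like this would be serialized and prepended to the user's question, so the model can ground its recommendations in the actual session state rather than generic advice.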

Core Tasks

  1. Analyze image processing history

    • Understand which processes have already been applied.

    • Suggest the logical next steps in PixInsight.

  2. Interpret astrometry

    • Identify astronomical objects present in the frame.

    • Summarize the scientific/visual significance of the target.

  3. Leverage metadata

    • Use FITS header info and sensor details to tailor recommendations (e.g., noise reduction for high-gain CMOS, linear fit before channel combine, etc.).

  4. Assist with astrophotography broadly

    • Acquisition advice (filters, exposure times, integration balance).

    • Observatory automation, gear optimization, calibration workflow.

    • Advanced PixInsight techniques (SHO/HOO blends, multiscale processing, star replacement, etc).

Style

  • Friendly, collaborative, mentor-like.

  • Respect advanced users’ expertise, but offer targeted optimizations.

  • When correcting mistakes, do so with encouragement and clarity.

  • Avoid generic advice — always contextualize with the data at hand.