Been using Claude (Anthropic's AI) as a hands-on co-pilot across my entire AP workflow — not just for generic questions, but with direct access to my files, FITS headers, and sequencer JSON. Here's what a single session looked like.
It started by reading my FITS headers and correctly identifying my target as LBN 239 & LBN 251 (IC 1318 / Gamma Cygni complex) from raw RA/Dec coordinates — I had the wrong name in my notes.
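For anyone curious, the core of that lookup is just nearest-neighbor angular separation against a catalog. A minimal sketch of the idea — the mini-catalog, coordinates, and `identify` helper here are illustrative, not Claude's actual method:

```python
import math

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Haversine angular separation between two (RA, Dec) points, in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    d = math.sin((dec2 - dec1) / 2) ** 2 + \
        math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2
    return math.degrees(2 * math.asin(math.sqrt(d)))

# Tiny illustrative catalog (RA/Dec in degrees, roughly J2000)
catalog = {
    "IC 1318 (Gamma Cygni complex)": (305.56, 40.26),
    "M13": (250.42, 36.46),
    "M65": (169.73, 13.09),
}

def identify(ra, dec):
    """Return the catalog object nearest to the given coordinates."""
    return min(catalog, key=lambda name: ang_sep_deg(ra, dec, *catalog[name]))
```

Feed it the RA/Dec straight out of the FITS header and the nearest catalog entry wins — which is how a wrong name in your notes gets caught.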
It then parsed 164 light frames across 5 nights, built a per-night airmass table showing exactly when each session dropped below 2.0 and when the best transparency window hit (consistently 04:30–05:15 local).
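The airmass table itself is simple spherical trig: altitude from hour angle, declination, and site latitude, then the plane-parallel sec(z) approximation. A sketch under those assumptions — the latitude below is a placeholder, not my site:

```python
import math

def altitude_deg(ha_hours, dec_deg, lat_deg):
    """Target altitude (degrees) from hour angle (hours), declination, and site latitude."""
    ha = math.radians(ha_hours * 15.0)  # 1 hour of hour angle = 15 degrees
    dec, lat = math.radians(dec_deg), math.radians(lat_deg)
    sin_alt = math.sin(dec) * math.sin(lat) + math.cos(dec) * math.cos(lat) * math.cos(ha)
    return math.degrees(math.asin(sin_alt))

def airmass(alt_deg):
    """Plane-parallel airmass, sec(z); a fine approximation above ~10 deg altitude."""
    return 1.0 / math.sin(math.radians(alt_deg))

# e.g. Sadr (dec ~ +40.3) from a mid-northern site (lat 45, placeholder):
# near transit (hour angle 0) the airmass bottoms out close to 1.0
```

Run that over each frame's DATE-OBS and you get exactly the per-night table described above: when each session crossed airmass 2.0 and where the minimum fell.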
From that airmass data it proposed a smarter filter order: Ha first (strongest signal, tolerates the high airmass at session start), SII in the middle, and OIII always last, so it lands in the best transparency window every night.
Obvious in retrospect, easy to miss when you're just clicking through NINA's loop UI.
Then it rewrote my entire multi-target sequence JSON from scratch for four targets (Sadr SHO+RGB, M65, M86, M13 LRGB): removed all loop/iteration structures, replaced them with clean sequential SmartExposure blocks, and calculated exactly how many additional frames each filter still needed to reach balance against existing data.
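The "how many more frames per filter" step is plain arithmetic over the existing counts — a sketch with made-up numbers, topping every filter up to the best-covered one:

```python
# Existing frame counts per filter (numbers are illustrative)
have = {"Ha": 58, "SII": 41, "OIII": 33}
target = max(have.values())  # balance everything up to the best-covered filter

need = {f: target - n for f, n in have.items()}
# need == {"Ha": 0, "SII": 17, "OIII": 25}
```

In practice you might weight by exposure length or per-filter SNR instead of raw frame counts, but the shape of the calculation is the same.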
Finally — and this is the part I care about most — it added park-on-failure safety logic: if plate solving or autofocus fails (typically clouds or bad seeing), the mount parks immediately rather than continuing to track while waiting for the next target to rise. Real pier-crash prevention, not just a nice-to-have.
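For a sense of what that safety logic decides, here's the behavior in miniature. This is a sketch of the decision only — it is not NINA's JSON schema or trigger API, and the `Mount` class is a hypothetical stand-in; in the real sequence this lives in failure triggers attached to the plate-solve and autofocus instructions:

```python
class Mount:
    """Minimal stand-in for a mount driver (hypothetical API, for illustration)."""
    def __init__(self):
        self.parked = False
        self.tracking = True
    def stop_tracking(self):
        self.tracking = False
    def park(self):
        self.parked = True

def on_step_failure(step_name, mount):
    """On a plate-solve or autofocus failure, stop tracking and park
    instead of idling at the pier waiting for the next target to rise."""
    if step_name in ("platesolve", "autofocus"):
        mount.stop_tracking()
        mount.park()
        return "parked"
    return "continue"
```

The point is the default: on ambiguous failure, a parked mount is the only state that can't swing into the pier.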
Three JSON versions, zero manual editing, all from plain conversation.
Ridiculous.