Arun H · May 11, 2026, 04:56 PM
I suspect AI has come a very long way in one year. I have had some pretty detailed math and physics problems I have posed to AI and it seemed to provide answers that were verifiably correct.
I’ll comment on the actual subject of this thread later in order to stay on topic, but as for this point:
I recently graduated from a good university with a degree in physics. AI from 2 years ago (so most definitely AI today) could do nearly all the problems a physics undergraduate is presented with. This is because physics undergraduate work, and even a significant number of the problems you'd find at graduate level, is well documented and studied, because that is the purpose of science. We have established such a strong base for students to learn from that an AI has no issue scraping the internet and providing a solution (with sources) for these problems. Occasionally it makes an error, but it is almost always easily corrected with another prompt. Does this mean AI can do physics? No, as the actual research involves asking new questions, something I've yet to see AI do effectively. I could devolve this topic into a post about how undergrads are just poisoning themselves by not learning the material or whatnot, but that's been covered extensively elsewhere. The point here is: yes, AI can do these "difficult" physics problems because we've already solved them and understand them.
As for the actual subject matter of the thread:
IMO the AI gives a result that is annoyingly close enough that I don't want to bother correcting it, but still wrong. I would parrot basically everything Charles Hagan, John Hayes, and Scott Badger said. Most notably, I find that the AI makes no distinction between the variety of targets within each of the categories presented. Building off my rant above and some personal assumptions, I don't believe there is the same depth of information on astrophotography integration time as there is on physics, so it's no surprise the AI answer is lacking.
Edit because I always think of something else to mention shortly after I hit post:
My post is a bit redundant, as the above few posts cover the ideas I've mentioned, though I would like to clarify that the danger in using AI for questions like Jerry's here is that the AI is built to return an answer, and it has no means to confirm that answer when a clear solution isn't already present.