Claude AI on integration time limits for astrophotography

Arun H · Jerry Gerber · C.Sand · John Hayes
28 replies · 914 views
Jerry Gerber

I asked Claude AI at what total integration time diminishing returns start to occur, based on my scope, location, Bortle number, and the type of sensor I am imaging with. The answer is interesting. Any thoughts on this:

Astrophotography Technical Reference on Integration Times.pdf

Jerry

Tony Gondola

Interesting, I never thought about calibration, specifically flat-fielding accuracy, as a limiting factor with ultra-long integration times. Overall, the numbers it's coming up with seem to be in the ballpark.

The0s

Interesting… I'd be curious to know how it came up with the 0.1-0.5% for the error from flat frames (I doubt those numbers are easily found on the Internet) and how the point of diminishing returns would change for a more light polluted location. In any case, this is a cool topic, so thanks for sharing!

Anderl

The0s · May 10, 2026 at 02:51 AM

Interesting… I'd be curious to know how it came up with the 0.1-0.5% for the error from flat frames (I doubt those numbers are easily found on the Internet) and how the point of diminishing returns would change for a more light polluted location. In any case, this is a cool topic, so thanks for sharing!

Google "Fabian Neyer night sky flats".

I am in the middle of building a set of flat areas for my luminance. I talked about it briefly in the description of one of my latest pictures.

Scott Badger

How do you put a threshold number to ‘diminishing returns’ when it’s diminishing right from the start? And why is the improvement goal always double? Isn’t a 50% increase in SNR significant, or 25%, or 15%? I think integration time really depends on the target, what there is to get, and what you want to get. If you’re imaging a bright galaxy and there isn’t much in the way of tidal tails or IFN to pick up, or that you want to include, then there’ll be a point where additional SNR won’t make a noticeable perceptual difference. On the other hand, images like the recent M104 IOTD show what can be done with integration time far beyond what most consider ‘diminishing returns’.
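To put rough numbers on those percentages: since stack SNR grows as the square root of total integration time, any SNR goal maps directly to a time multiplier. A quick sketch (pure arithmetic; the function name is just for illustration):

```python
def time_multiplier(snr_gain: float) -> float:
    """Factor by which total integration time must grow to raise SNR by
    the given fraction (0.5 = +50%), assuming SNR scales as sqrt(time)."""
    return (1.0 + snr_gain) ** 2

for gain in (0.15, 0.25, 0.50, 1.00):
    print(f"+{gain:.0%} SNR needs {time_multiplier(gain):.2f}x the integration time")
```

So even a "modest" 25% SNR bump costs about 56% more time, which is why doubling tends to be the convenient benchmark rather than a magic threshold.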

It also depends on your own available imaging time and number of targets you personally want to image/publish. Some like going deep, some prefer variety.

Claude’s response seems to be mixing integration time with exposure time and factors like seeing that have little to do with both.

Cheers,
Scott

Craig Towell

Scott Badger · May 10, 2026 at 11:02 AM

On the other hand, images like the recent M104 IOTD show what can be done with integration time far beyond what most consider ‘diminishing returns’.

Just came here to say the same thing. Returns might diminish, but there are still returns to be had for the dedicated.

John Hayes

Jerry,

Your question touches on a number of interesting things that I’ve given a bit of thought to in the past. First, I’ve experimented a bit with AI for solving some slightly mathematically challenging problems. A year ago I gave Claude and ChatGPT the same thermodynamics problem that I had worked out myself in some detail. At that time, Claude really struggled, and after directing it through multiple revisions (around 6x) I finally gave up. It just couldn’t get there. On the other hand, ChatGPT did a much better job. I only had to give it 2-3 course corrections before it provided what appeared to be the correct solution. I recognize that within the last year Claude has improved a great deal; however, the lesson is deeper than that. And here’s the lesson: if you don’t already have a pretty deep understanding of what the solution should look like, and why, you can get completely fooled by an answer provided by AI. To be clear, that’s not to say that AI isn’t a useful tool. It is, but you need to be very cautious to probe the answer, and you need to use other means to confirm that it makes sense.

In this case, Claude has pointed out something interesting. Years ago I first posted a calculation that expanded on the “Rule of Five” published in “The Handbook of Astronomical Image Processing 2nd Edition”, by Berry and Burnell (now out of print). My calculation (posted in other threads here on AB and on CN) showed how dark calibration (using stacked dark masters) affects noise in a stacked image, but that result might not be quite the same as how flats affect stack statistics. As Janesick points out in “Photon Transfer”, signal variation due to FPN varies directly with signal strength and can far exceed photon noise for some sensors—and that’s one of the three key reasons that we use flat calibration. But the issue that Claude has mentioned goes beyond simple image calibration. It extends to how the SNR is affected by FPN when you combine calibrated and dithered frames in the stack—and it’s not clear to me that the answer that you got from Claude is correct. For one thing, Claude doesn’t show its work and the details of why it is saying what it is saying are vague. Dithering is a key component to reducing the effects of FPN so it is a very important factor when you consider any limitation on SNR with respect to total exposure time.
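To illustrate the dithering point, here is a toy 1-D sketch (not a model of any real sensor; the noise levels are invented for illustration). Residual fixed-pattern noise survives a stack of perfectly aligned subs almost intact, but averages down like random noise once the pattern lands at a different position in each registered sub:

```python
import numpy as np

rng = np.random.default_rng(1)

N_PIX, N_SUBS = 1000, 64
SIGNAL = 100.0                             # e- per pixel per sub (flat scene)
FPN = rng.normal(0.0, 5.0, N_PIX)          # fixed-pattern offsets, identical in every sub

def residual_noise(dither: bool) -> float:
    """Spatial std of the mean stack after subtracting the true signal.

    Dithering is modeled as shifting the sensor pattern to a random position
    relative to the (already registered) scene in each sub.
    """
    stack = np.zeros(N_PIX)
    for _ in range(N_SUBS):
        shift = int(rng.integers(N_PIX)) if dither else 0
        shot = rng.normal(0.0, np.sqrt(SIGNAL), N_PIX)   # Gaussian stand-in for photon noise
        stack += SIGNAL + np.roll(FPN, shift) + shot
    stack /= N_SUBS
    return float(np.std(stack - SIGNAL))

print("residual noise, no dither:", round(residual_noise(False), 2))
print("residual noise, dithered: ", round(residual_noise(True), 2))
```

In this toy model the undithered stack keeps nearly the full 5 e- pattern no matter how many subs are added, while the dithered stack beats it down toward the photon-noise floor.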

I’ve got too many other things I’m working on at the moment to dig more deeply into this but perhaps when I get some time, I can give it a bit more thought. In the meantime, I would recommend approaching those results with some caution.

John

Jerry Gerber

John Hayes · May 10, 2026, 06:37 PM

Jerry,

Your question touches on a number of interesting things that I’ve given a bit of thought to in the past. First, I’ve experimented a bit with AI for solving some slightly mathematically challenging problems. A year ago I gave Claude and ChatGPT the same thermodynamics problem that I had worked out myself in some detail. At that time, Claude really struggled, and after directing it through multiple revisions (around 6x) I finally gave up. It just couldn’t get there. On the other hand, ChatGPT did a much better job. I only had to give it 2-3 course corrections before it provided what appeared to be the correct solution. I recognize that within the last year Claude has improved a great deal; however, the lesson is deeper than that. And here’s the lesson: if you don’t already have a pretty deep understanding of what the solution should look like, and why, you can get completely fooled by an answer provided by AI. To be clear, that’s not to say that AI isn’t a useful tool. It is, but you need to be very cautious to probe the answer, and you need to use other means to confirm that it makes sense.

In this case, Claude has pointed out something interesting. Years ago I first posted a calculation that expanded on the “Rule of Five” published in “The Handbook of Astronomical Image Processing 2nd Edition”, by Berry and Burnell (now out of print). My calculation (posted in other threads here on AB and on CN) showed how dark calibration (using stacked dark masters) affects noise in a stacked image, but that result might not be quite the same as how flats affect stack statistics. As Janesick points out in “Photon Transfer”, signal variation due to FPN varies directly with signal strength and can far exceed photon noise for some sensors—and that’s one of the three key reasons that we use flat calibration. But the issue that Claude has mentioned goes beyond simple image calibration. It extends to how the SNR is affected by FPN when you combine calibrated and dithered frames in the stack—and it’s not clear to me that the answer that you got from Claude is correct. For one thing, Claude doesn’t show its work and the details of why it is saying what it is saying are vague. Dithering is a key component to reducing the effects of FPN so it is a very important factor when you consider any limitation on SNR with respect to total exposure time.

I’ve got too many other things I’m working on at the moment to dig more deeply into this but perhaps when I get some time, I can give it a bit more thought. In the meantime, I would recommend approaching those results with some caution.

John

Thanks John! I am definitely skeptical of AI’s response, especially when I know the imagers I respect and admire are getting excellent results with very long integration times. Even with refractors similar to my 130mm aperture, I see fine images made with integrations much longer than what AI says is the “point of diminishing returns”.

Charles Hagen

There are a couple of issues here, though the main issue is the premise of the question. To be extremely pedantic, “diminishing returns” start at sub #1. Every subsequent sub-exposure contributes less to the stack than the one before; the curve begins to flatten before it even starts. What you really want to know is “at what point is it not worth it for me to gather more data?”, the answer to which is entirely subjective.

In a vacuum, stack SNR will increase by ~41% (a factor of √2) for every doubling of integration time, regardless of equipment or light pollution. (Yes, there are some very subtle exceptions to this, and it is complicated by read noise and changing conditions, but for our purposes it’s close enough.) This means that if you double your total integration, the proportional change in SNR will be the same regardless of how much time you’ve already contributed. Obviously, however, doubling your total integration quickly becomes untenable. While we’d all love to have 512, 1024 or even 2048 hours of integration per target, most of us aren’t so patient.

If you are sitting at 100 hours of integration and you are unsatisfied, ask yourself whether you’d be happy doubling your efforts for a perceptible, but not striking, improvement in the noise profile. If yes, carry on; if no, it’s time to devote your efforts elsewhere. While it is unfortunately hard to predict at the beginning of a project, in my experience this is the only reliable way to test whether more time is “worth it”.
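The √2-per-doubling rule is easy to check numerically. A minimal sketch (made-up signal, sky, and read-noise levels, not any particular camera) that simulates one background-subtracted pixel stacked from n sub-exposures:

```python
import numpy as np

rng = np.random.default_rng(0)

def stack_snr(n_subs, signal_rate=50.0, sky_rate=200.0, read_noise=3.0, n_trials=2000):
    """Measured SNR of one mean-stacked pixel after n_subs sub-exposures.

    Each sub collects Poisson photons from target + sky plus Gaussian read
    noise; the stack is the per-trial mean with the known sky level removed.
    """
    counts = (rng.poisson(signal_rate + sky_rate, (n_trials, n_subs))
              + rng.normal(0.0, read_noise, (n_trials, n_subs)))
    stacked = counts.mean(axis=1) - sky_rate
    return stacked.mean() / stacked.std()

# Each doubling of total integration should raise SNR by roughly sqrt(2), i.e. ~41%
for n in (16, 32, 64, 128):
    print(f"{n:4d} subs -> SNR {stack_snr(n):5.1f}")
```

Going from 16 to 64 subs (two doublings) roughly doubles the measured SNR, independent of where you start, which is exactly the point about proportional returns.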

Jerry Gerber

Charles Hagen · May 11, 2026, 06:01 AM

There are a couple of issues here, though the main issue is the premise of the question. To be extremely pedantic, “diminishing returns” start at sub #1. Every subsequent sub-exposure contributes less to the stack than the one before; the curve begins to flatten before it even starts. What you really want to know is “at what point is it not worth it for me to gather more data?”, the answer to which is entirely subjective.

In a vacuum, stack SNR will increase by ~41% (a factor of √2) for every doubling of integration time, regardless of equipment or light pollution. (Yes, there are some very subtle exceptions to this, and it is complicated by read noise and changing conditions, but for our purposes it’s close enough.) This means that if you double your total integration, the proportional change in SNR will be the same regardless of how much time you’ve already contributed. Obviously, however, doubling your total integration quickly becomes untenable. While we’d all love to have 512, 1024 or even 2048 hours of integration per target, most of us aren’t so patient.

If you are sitting at 100 hours of integration and you are unsatisfied, ask yourself whether you’d be happy doubling your efforts for a perceptible, but not striking, improvement in the noise profile. If yes, carry on; if no, it’s time to devote your efforts elsewhere. While it is unfortunately hard to predict at the beginning of a project, in my experience this is the only reliable way to test whether more time is “worth it”.

Hi Charles,

Makes sense. I haven’t yet gone beyond 60 hours or so, and will probably extend that to around 100 hours at some point. I took the AI answer with a grain of salt because I’ve seen so many outstanding images with integration times in the hundreds of hours.

Arun H

John Hayes · May 10, 2026, 06:37 PM

If you don’t already have a pretty deep understanding of what the solution should look like—and why, you can get completely fooled by an answer provided by AI. To be clear, that not to say that AI isn’t a useful tool. It is but you need to be very cautious to probe the answer and you need to use other means to confirm that it makes sense.

I suspect AI has come a very long way in one year. I have had some pretty detailed math and physics problems I have posed to AI and it seemed to provide answers that were verifiably correct.

Charles Hagen

Arun H · May 11, 2026, 04:56 PM

I have found it to be an excellent learning tool. I am sharing this to just give you guys a sense of how powerful it has become.

LLMs are absolutely powerful tools and you’re right that they have progressed a long way in recent times, but they still suffer from training data availability. In your example, you are asking questions that have been well discussed in the scientific literature and broadly across the internet, so the answers are more likely to be correct. Niche subjects within massive fields are still fairly common, and the data used to train Gemini is (relatively) rich in that kind of material. Our area of interest is more niche and less common across the internet, so the training data will contain less relevant context to answer such questions with nuance. My point is that extrapolating its apparent expertise in one field onto another will often give you a false idea of its real capabilities.

Arun H

Charles Hagen · May 11, 2026, 05:15 PM

LLMs are absolutely powerful tools and you’re right that they have progressed a long way in recent times, but they still suffer from training data availability.

Yes - I agree.

And I do think this is the key difference. For our work here, the training data is not extensive since there are very few published works. In the topics I am quizzing it on, there are. But I shared that example because I was extremely impressed with how it answered the question. It is a non-trivial observation. So yes, the tools should be used with caution, but I think it would be a huge mistake to underestimate them. For the questions I posed, there is a lot of very good information, but there is also a lot of unreliable information. The models seem to be doing an increasingly better job of differentiating the two.

Jerry Gerber

I've found that there are certain types of questions that AI answers very accurately. Others not so much. But the real strength of AI is its ability to learn from itself.

I asked Claude a question about classical music harmonic theory and it got it wrong. As soon as I explained the conditions that must exist for a particular type of harmonic structure to be identified as such, it immediately corrected itself. If only people were that quick at self-correction!

Arun H

Charles Hagen · May 11, 2026, 05:15 PM

Niche subjects within massive fields are still fairly common, and the data used to train Gemini is (relatively) rich in that kind of material.

I just wanted to add one point that may be useful here.

For my specific example, I asked Gemini for the source data behind the explanation it gave me. It gave me three very reputable books as sources, which lent a level of confidence to its answers (plus I posed a validation question, which it also answered). If you are using Claude, you should be able to ask it for the sources behind its explanations and judge for yourself whether to buy into them.

C.Sand

Arun H · May 11, 2026, 04:56 PM

I suspect AI has come a very long way in one year. I have had some pretty detailed math and physics problems I have posed to AI and it seemed to provide answers that were verifiably correct.

I’ll comment on the actual subject of this thread later in order to stay on topic, but as for this point:

I recently graduated from a good university with a degree in physics. AI from 2 years ago (so most definitely AI today) could do nearly all the problems a physics undergraduate is presented with. This is because physics undergraduate work, and even a significant number of the problems you’d find at graduate level, is well documented and studied, because that is the purpose of science. We have established such a strong base for students to learn from that an AI has no issue scraping the internet and providing a solution (with sources) for these problems. Occasionally it makes an error, but it is almost always easily corrected by another prompt. Does this mean AI can do physics? No, as actual research involves asking new questions, something I’ve yet to see AI do effectively. I could devolve this topic into a post about how undergrads are poisoning themselves by not learning the material, but that’s been covered extensively elsewhere. The point here is: yes, AI can do these “difficult” physics problems because we’ve already solved them and understand them.

As for the actual subject matter of the thread:

IMO the AI gives a result that is annoyingly close enough that I don’t want to bother correcting it, but still wrong. I would parrot basically everything Charles Hagen, John Hayes, and Scott Badger said. Most notably, I find that the AI makes no distinction between the variety of targets within each of the categories presented. Building off my rant above and some personal assumptions, I don’t believe there is the same depth of information on astrophotography integration times as there is on physics, so it’s no surprise the AI answer is lacking.

Edit because I always think of something else to mention shortly after I hit post:

My post is a bit redundant, as the above few posts cover the ideas I’ve mentioned, though I would like to clarify that the danger in using AI for questions like Jerry’s here is that the AI is built to return an answer, and it does not have the means to confirm that answer if there is not a clear solution already present.

Arun H

C.Sand · May 12, 2026 at 03:41 AM

Does this mean AI can do physics? No, as the actual research involves asking new questions, something I’ve yet to see AI do effectively.

Physics (and engineering) training at the undergrad level is mostly devoted to applying existing principles. Research, of course, is a different matter, but even there, having quick access to see how existing principles are applied is helpful to someone learning a new subject. Consider this problem, where the task is to find v(t), the current, and the power dissipated in a sliding rod of known resistance falling under gravity:

📷 image.png

Solving this requires knowledge of Faraday’s law, the Lorentz force law, Newton’s laws, a bit of basic calculus, and conservation of energy. Today’s AI can solve a problem like this by applying these principles. So too can well-trained humans. To my knowledge, AI is not scouring the internet for solutions to this particular problem; rather, it is using advanced computation to process material, separating good from bad, and applying principles. In many ways, this is similar to what human students would do. To the point about research: what fraction of humans are capable of doing actual physics or engineering research? Most humans would struggle to comprehend basic principles such as the above!
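For reference, the standard textbook solution under the usual idealizations (the symbols are my own labels, not taken from the attached image: rod mass $m$, rail separation $\ell$, uniform field $B$ perpendicular to the circuit, total circuit resistance $R$):

    m \frac{dv}{dt} = mg - \frac{B^2 \ell^2}{R} v

    v(t) = \frac{mgR}{B^2 \ell^2} \left( 1 - e^{-B^2 \ell^2 t / (mR)} \right)

    I(t) = \frac{B \ell \, v(t)}{R}, \qquad P(t) = I(t)^2 R

At terminal velocity $v_t = mgR/(B^2 \ell^2)$ the net force vanishes and $P = mg\,v_t$: all gravitational power is dissipated in the resistance, which is the conservation-of-energy check mentioned above.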

As for the original question of integration time and Claude’s response: I think many humans would struggle too, in the absence of good source data.

C.Sand

Arun H · May 12, 2026, 11:16 AM

Most humans would struggle at comprehending basic principles such as the above!

As for the original question of integration time and Claude’s response. I think many humans would struggle too, in the absence of good source data.

I believe we are mismatched in what standard we’re holding the AI to. I am not suggesting the average person is studying college physics, nor that they are a physics researcher. Likewise, I am not suggesting the average person is an experienced astrophotographer. I am suggesting that the flaws in an AI’s answer are too often forgiven because of (in this case) a lack of adequate reference material, and because it got more basic and verifiable information correct. I have seen far too many examples, both on the internet and in my personal life, of people “developing theories” of dark matter, the big bang, or whatever they’re interested in because they trusted an AI. It is almost always the case that these people verified lower-level information, just like you have done with the E&M problem. Frankly, I do not care whether it was an AI or a person presenting the original attached PDF or their new theory of everything. The confidence with which it is incorrect, and the difficulty for someone ill informed to detect where the information strays from the truth, is a massive issue. Unfortunately, AI has made cases like this much more common, and in turn much more exhausting for all sides involved to debunk.

To be clear, I do not intend any hate towards Jerry or anyone using AI to ask questions, but I do encourage a more formal learning avenue, especially once you’ve moved past well-established topics.

Arun H

C.Sand · May 12, 2026 at 11:44 AM

It is almost always the case that these people verified lower level information just like you have done with the E&M problem.

The problem was taken from an undergrad physics textbook, and so was the solution; both were posed in an exam at a reputable university. This particular problem is a variation of Problem 29.60 in Young and Freedman, volume 2. So perhaps the problem is that lower-level information is being peddled in our undergrad textbooks and universities? To be sure, there is more advanced electrodynamics than what is covered there, but does that mean learning these basic concepts has no use?

Incidentally, I did not verify simple lower-level concepts; I asked and verified questions in differential geometry. That is a graduate-level course.

C.Sand

Arun H · May 12, 2026, 11:52 AM

The problem was taken from an undergrad physics textbook, and so too the solution, both of which were posed in an exam at a reputable university. So perhaps the problem is that lower level information is being peddled in our undergrad textbooks and universities?

This is straying too far from on topic imo so it will be my last comment in this thought line.

If you reference the holy grail of E&M textbooks, Introduction to Electrodynamics by David J. Griffiths, it contains information such as “A+B=B+A” (page 2, chapter 1; in this case it is establishing vector addition), while also containing material on electrodynamics and relativity, which one might consider a bit more advanced.

It is essential to cover the basics; perhaps one might go so far as to call them the essentials, as they are the groundwork from which we have developed modern physics. The problem arises when you take someone from that second-year undergraduate class, who has just done well enough to get an A, and present them as a reference for a new theory. Do not misrepresent understanding of the basics as the ability to answer more complex questions.

Anecdotally, I would say the question you posted is not incredibly “low level”, though it is definitely undergraduate level. When my class saw questions like this, most everyone was halfway through their second year of college and had already taken math classes that, not too long ago, were not widely offered. I believe we were just under halfway through the E&M class when we first saw a problem similar to this.

Arun H

C.Sand · May 12, 2026 at 12:16 PM

Do not misrepresent understanding of the basics as ability to answer more complex questions.

Since you suggested I “misrepresented” something, here is a specific and more complex example. I certainly could have gained this information from a textbook (and more), but it was rather impressive that AI could answer this question. The response was to a question on how geodesics tie in with GTR. How many humans would be able to inform me in such a conceptual manner?

📷 image.png

John Hayes

C.Sand · May 12, 2026, 12:16 PM

If you reference the holy grail of E&M textbooks, Introduction to Electrodynamics by David J. Griffiths, it contains information such as “A+B=B+A” (page 2 chapter 1, in this case it is establishing vector addition), while also containing information on Electrodynamics and Relativity, which one might consider a bit more advanced.

I must be getting old. In my day, “Classical Electrodynamics” by J.D. Jackson was the holy grail of E&M. I have to admit that it’s been so long since I took that class (as a physics grad student), that when I look back at my copy, I can’t even understand any of my margin notes much less any of the material. Use it or lose it! :))))

John

Alex Nicholas

I guess ‘diminishing returns’ is a really subjective statement though… what AI might consider to be diminishing returns may in fact be your minimum requirement…

As per a previous comment regarding the recent M104 IOTD… the integration time on that image was WELL into what most people would consider diminishing returns, BUT would the image have been IOTD without the faint tidal tail? Probably not…

The difference on something big and bright like M42 between 8h and 16h is fairly minimal; really only the outer extents of the Ha background in Barnard’s Loop will improve to any dramatic degree, and the step to 32~64h is going to be unnecessary and ‘diminishing’ by most people’s standards. Whereas the difference when shooting something like NGC 1365 between 8h and 16h is considerable, and even 32h will make an appreciable difference… Maybe when pushing into the hundreds of hours, returns start to diminish. But this is something each astrophotographer has to decide for themselves… Do you want a perfectly acceptable pretty picture, or do you want to show people something they didn’t know existed?

These are the nuances that AI almost always ignores unless very specifically prompted to consider.

As a software engineer, my job these days simply includes AI. There is no way around it; even senior management are pushing ‘Can AI be used to improve this?’ or ‘Can this workload be offloaded to AI?’. As a result, I have a very good idea of what AI is good at and what it typically overlooks. Intention is its biggest weak point: unless you are very specific about your intent, it will provide the broadest possible solution (with the highest likelihood of being correct).

Arun H

John Hayes · May 12, 2026 at 11:43 PM

In my day, “Classical Electrodynamics” by J.D. Jackson was the holy grail of E&M.

I am not a physics major, but I did take Classical Mechanics (Goldstein’s book) in grad school and loved the material. I bought Griffiths a few weeks ago to go deeper into E&M and its relationship to relativity. Believe it or not, I asked Gemini what book it would recommend, and it recommended Griffiths and Jackson, warning me about the denseness of Jackson. I will admit that I have no desire to go after it. That book seems to have a reputation. This Amazon review of Jackson had me laughing:

📷 image.png

John Hayes

Arun H · May 13, 2026 at 12:34 AM

That book seems to have a reputation.

Oh yeah it does. This is a picture of my copy from grad school.

📷 20260512_194928.jpg

It would be a tough book to digest just for fun!

John