Claude AI on integration time limits for astrophotography

Jerry Gerber

I asked Claude AI at what total integration time diminishing returns start to set in, based on my scope, location, Bortle number, and the type of sensor I am imaging with. The answer is interesting. Any thoughts on this:

Astrophotography Technical Reference on Integration Times.pdf

Jerry

Tony Gondola

Interesting, I never thought about calibration, specifically flat-fielding accuracy, as a limiting factor with ultra-long integration times. Overall, the numbers it's coming up with seem to be in the ballpark.

The0s

Interesting… I'd be curious to know how it came up with the 0.1-0.5% for the error from flat frames (I doubt those numbers are easily found on the Internet) and how the point of diminishing returns would change for a more light polluted location. In any case, this is a cool topic, so thanks for sharing!
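For what it's worth, the general idea of a calibration floor can be sketched numerically even if the specific 0.1-0.5% figure can't be sourced. Assuming a residual multiplicative flat-fielding error that scales with signal (all numbers below are made up for illustration, not taken from the PDF), stacking beats down the random noise as 1/sqrt(N) but leaves the systematic residual untouched, so SNR saturates at roughly 1/error:

```python
import math

# Hypothetical numbers for illustration only (not from the PDF):
signal_per_sub = 1000.0   # sky + target electrons per pixel per sub
read_noise = 3.0          # e- RMS per sub
flat_error = 0.002        # assumed 0.2% residual flat-fielding error

def stack_snr(n_subs):
    """SNR of an N-sub stack with a fixed multiplicative calibration residual."""
    signal = n_subs * signal_per_sub
    shot = math.sqrt(signal)                 # photon noise grows as sqrt(signal)
    read = read_noise * math.sqrt(n_subs)    # read noise adds in quadrature
    calib = flat_error * signal              # systematic: scales with signal itself
    return signal / math.sqrt(shot**2 + read**2 + calib**2)

for n in (1, 10, 100, 1000, 10000):
    print(f"{n:>6} subs: SNR = {stack_snr(n):8.1f}  (ceiling = {1/flat_error:.0f})")
```

With these toy numbers the stack SNR climbs quickly at first and then flattens against the 1/0.002 = 500 ceiling, which is the qualitative behavior Claude seems to be describing. Dithering changes this picture, since it decorrelates the residual from the sky, so treat this as the worst case.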

Anderl

The0s · May 10, 2026 at 02:51 AM

Interesting… I'd be curious to know how it came up with the 0.1-0.5% for the error from flat frames (I doubt those numbers are easily found on the Internet) and how the point of diminishing returns would change for a more light polluted location. In any case, this is a cool topic, so thanks for sharing!

Google "Fabian Neyer night sky flats".

I am in the middle of building a set of flats for my luminance. I talked about it briefly in the description of one of my latest pictures.

Scott Badger

How do you put a threshold number to 'diminishing returns' when it's diminishing right from the start? And why is the improvement goal always double? Isn't a 50% increase in SNR significant, or 25%, or 15%? I think integration time really depends on the target, what there is to get, and what you want to get. If you're imaging a bright galaxy and there isn't much in the way of tidal tails or IFN to pick up, or that you want to include, then there'll be a point where additional SNR won't make a noticeable perceptual difference. On the other hand, images like the recent M104 IOTD show what can be done with integration time far beyond what most consider 'diminishing returns'.

It also depends on your own available imaging time and number of targets you personally want to image/publish. Some like going deep, some prefer variety.

Claude's response seems to be mixing integration time with exposure time, and with factors like seeing that have little to do with either.

Cheers,
Scott

Craig Towell

Scott Badger · May 10, 2026 at 11:02 AM

On the other hand, images like the recent M104 IOTD show what can be done with integration time far beyond what most consider 'diminishing returns'.

Just came here to say the same thing. Returns might diminish, but there are still returns to be had for the dedicated.

John Hayes

Jerry,

Your question touches on a number of interesting things that I've given a bit of thought to in the past. First, I've experimented a bit with AI for solving some slightly mathematically challenging problems. A year ago I gave Claude and ChatGPT the same thermodynamics problem that I had worked out myself in some detail. At that time, Claude really struggled, and after directing it through multiple revisions (around 6x) I finally gave up. It just couldn't get there. On the other hand, ChatGPT did a much better job. I only had to give it 2-3 course corrections before it provided what appeared to be the correct solution. I recognize that within the last year, Claude has been much improved; however, the lesson is deeper than that. And here's the lesson: If you don't already have a pretty deep understanding of what the solution should look like—and why—you can get completely fooled by an answer provided by AI. To be clear, that's not to say that AI isn't a useful tool. It is, but you need to be very cautious to probe the answer, and you need to use other means to confirm that it makes sense.

In this case, Claude has pointed out something interesting. Years ago I first posted a calculation that expanded on the “Rule of Five” published in “The Handbook of Astronomical Image Processing 2nd Edition”, by Berry and Burnell (now out of print). My calculation (posted in other threads here on AB and on CN) showed how dark calibration (using stacked dark masters) affects noise in a stacked image, but that result might not be quite the same as how flats affect stack statistics. As Janesick points out in “Photon Transfer”, signal variation due to FPN varies directly with signal strength and can far exceed photon noise for some sensors—and that’s one of the three key reasons that we use flat calibration. But the issue that Claude has mentioned goes beyond simple image calibration. It extends to how the SNR is affected by FPN when you combine calibrated and dithered frames in the stack—and it’s not clear to me that the answer that you got from Claude is correct. For one thing, Claude doesn’t show its work and the details of why it is saying what it is saying are vague. Dithering is a key component to reducing the effects of FPN so it is a very important factor when you consider any limitation on SNR with respect to total exposure time.
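To illustrate the point about dithering (with a toy 1-D model and made-up numbers, not a claim about any particular sensor): FPN is the same pattern in every sub, so without dithering it doesn't average down at all, while with dithering the pattern lands on different sky pixels in each registered frame and is beaten down roughly as 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(42)
npix, nsubs, signal = 2000, 64, 1000.0      # toy 1-D sensor, uniform sky
fpn = rng.normal(0.0, 0.005, npix)          # assumed 0.5% fixed-pattern gain error

def stack(dither):
    acc = np.zeros(npix)
    for _ in range(nsubs):
        shift = rng.integers(0, npix) if dither else 0
        # Every sub sees the same sensor pattern plus fresh photon noise.
        sub = signal * (1.0 + fpn) + rng.normal(0, np.sqrt(signal), npix)
        # Registration to the (shifted) sky: with dithering, the fixed
        # sensor pattern lands on different sky pixels in each frame.
        acc += np.roll(sub, -shift)
    return acc / nsubs

for label, dither in (("undithered", False), ("dithered", True)):
    resid = stack(dither) - signal
    print(f"{label:>10}: residual RMS = {resid.std():.2f} e-")
```

In this sketch the undithered stack is left with a residual dominated by the FPN term (signal times the 0.5% pattern), while the dithered stack's residual drops toward the photon-noise floor, which is the mechanism John describes.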

I’ve got too many other things I’m working on at the moment to dig more deeply into this but perhaps when I get some time, I can give it a bit more thought. In the meantime, I would recommend approaching those results with some caution.

John

Jerry Gerber

John Hayes · May 10, 2026, 06:37 PM

Jerry,

If you don't already have a pretty deep understanding of what the solution should look like—and why, you can get completely fooled by an answer provided by AI. [...] In the meantime, I would recommend approaching those results with some caution.

John

Thanks John! I am definitely skeptical of AI's response, especially when I know the imagers I respect and admire are getting excellent results with very long integration times. Even from refractors similar to my 130mm aperture, I see fine images with integrations much longer than what AI says is the "point of diminishing returns".

Charles Hagen

There are a couple of issues here, though the main one is the premise of the question. To be extremely pedantic, "diminishing returns" start at sub #1: every subsequent sub-exposure contributes less to the stack than the one before, so the curve begins to flatten before it even starts. What you really want to know is "at what point is it not worth it for me to gather more data?", and the answer to that is entirely subjective.

In a vacuum, stack SNR will increase by 41% (sqrt 2) for every doubling of integration time, regardless of equipment or light pollution. (Yes, there are some very subtle exceptions to this, and it is complicated by read noise and changing conditions, but for our purposes it's close enough.) This means that if you double your exposure time, the proportional change in SNR will be the same regardless of how much time you've already contributed. Obviously, however, doubling your total integration quickly becomes untenable. While we'd all love to have 512, 1024 or even 2048 hours of integration per target, most of us aren't so patient.

If you are sitting at 100 hours of integration and you are unsatisfied, it is best to ask yourself whether you'd be happy doubling your efforts for a perceptible, but not striking, improvement in the noise profile. If yes, carry on; if no, it's time to devote your efforts elsewhere. While it is unfortunately hard to predict at the beginning of a project, this is the only reliable way to test whether more time is "worth it" or not, in my experience.
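The 41%-per-doubling point can be tabulated directly. The same relative gain costs twice the telescope time at every step, which is exactly why "diminishing returns" has no natural cutoff:

```python
import math

# SNR grows as sqrt(total integration time): each doubling buys the same
# relative improvement, sqrt(2) - 1 ~ 41%, no matter where you start.
hours, snr = 1.0, 1.0            # normalized to a 1-hour stack
for _ in range(10):
    new_hours = hours * 2
    new_snr = math.sqrt(new_hours)
    gain = (new_snr / snr - 1) * 100
    print(f"{hours:6.0f} h -> {new_hours:6.0f} h: SNR x{new_snr / snr:.3f} (+{gain:.0f}%)")
    hours, snr = new_hours, new_snr
```

Every row shows the same +41%, so the only thing that changes is the price tag: going from 1 h to 2 h costs one extra hour, while going from 512 h to 1024 h costs 512.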
