So an F/4 Newt isn't really faster than an F/10 SCT???

Participants: John Tucker, Tony Gondola, Arun H, andrea tasselli
28 replies · 755 views
John Tucker avatar

Something I guess everyone else knows but I never really grasped until today. My lightning-fast F/2.8 Newt isn’t really faster than my SCT. It just produces massive undersampling. Aperture is the only thing that really matters.

Graph 1: Relative photon flux per pixel for a Celestron 9.25” SCT, for a generic 6” F/6 Newt, and for a 6” F/4 Newt. For any size pixel, the OTA with the lower focal ratio is “faster”.

Graph 2: The same data reformatted to show relative photon flux vs digital resolution (arcsec per pixel). For any given image scale, the photon flux is identical for the two 6” OTAs and less than that obtained with the 9.25” SCT.
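The relationship in the two graphs can be sketched in a few lines. This is a rough illustrative sketch, not the exact plot data: the apertures and focal lengths are nominal (235 mm for the 9.25" SCT, 152.4 mm for the 6" Newts), and the 3.76 µm pixel used for Graph 1 is just an example; only the ratios matter.

```python
def image_scale(pixel_um, focal_mm):
    """Digital resolution in arcsec per pixel."""
    return 206.265 * pixel_um / focal_mm

def rel_flux_per_pixel(aperture_mm, focal_mm, pixel_um):
    """Relative photon flux per pixel: aperture area times sky area per pixel."""
    return aperture_mm ** 2 * image_scale(pixel_um, focal_mm) ** 2

scopes = {
    '9.25in SCT (F/10)': (235.0, 2350.0),
    '6in F/6 Newt': (152.4, 914.4),
    '6in F/4 Newt': (152.4, 609.6),
}

# Graph 1: the same camera on every OTA -> the lower focal ratio is "faster"
for name, (d, f) in scopes.items():
    print(name, round(rel_flux_per_pixel(d, f, 3.76)))

# Graph 2: pixel size chosen so every OTA sits at 1.0 arcsec/px ->
# flux per pixel now depends only on aperture (both 6in Newts are identical)
for name, (d, f) in scopes.items():
    p = f / 206.265  # pixel size (um) that gives exactly 1.0 arcsec/px
    print(name, round(rel_flux_per_pixel(d, f, p)))
```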

The greater speed of low focal ratio scopes arises solely from undersampling. Instead of buying an F/3 Newt I could have just bought a camera with bigger pixels.

Did I do this right?

📷 Screenshot 2026-03-08 110505.jpg

andrea tasselli avatar
What matters is étendue, so yes, you would have been better served by a camera with bigger pixels, except for the FOV.
Arun H avatar

From a photon collection per arc second of object in the sky standpoint, aperture is the only thing that matters. This should not be very surprising. Photons entering your OTA have no idea how they will be concentrated on the sensor!

Focal length determines how the collected photons are concentrated on a sensor at the focal plane. For a rectilinear lens, an arc second in the sky = an arc second on the sensor. So the linear dimension one arc second occupies on the sensor = focal length × 1 arc second (expressed in radians).

If you are able to change the pixel size so each pixel captures the same sky area, the only thing that matters from a photon collection per pixel standpoint is aperture.
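Arun's arithmetic can be sketched directly (the 150 mm-class apertures and the 600 mm / 1200 mm focal lengths below are made-up example numbers):

```python
import math

ARCSEC = math.radians(1.0 / 3600.0)  # one arc second in radians

def arcsec_on_sensor_um(focal_mm):
    """Linear size that one arc second of sky occupies at the focal plane, in microns."""
    return focal_mm * 1000.0 * ARCSEC

# Two hypothetical scopes with the same aperture but different focal lengths:
print(arcsec_on_sensor_um(600.0))   # ~2.9 um per arcsec
print(arcsec_on_sensor_um(1200.0))  # ~5.8 um per arcsec (twice as spread out)

# If each camera's pixel is sized to cover the same patch of sky (say 1" x 1"),
# both pixels receive photons from identical sky areas through identical
# apertures, so the photon count per pixel is the same on both scopes.
pixel_short_um = arcsec_on_sensor_um(600.0)   # needs ~2.9 um pixels
pixel_long_um = arcsec_on_sensor_um(1200.0)   # needs ~5.8 um pixels
```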

John Tucker avatar

I guess one caveat to my post above is related to computer screen resolution. The pixel array on a MacBook Pro 16” screen is 3456×2234, vs 6248×4176 for the ASI2600 sensor.

If you reduced the size of the pixels on the ASI2600 and maintained APS dimensions, I’m not sure you could see the additional detail captured by having an “appropriate” image scale at 400mm while viewing the picture as a whole on a typical computer screen.

As a practical matter, the resolution-limiting factor for viewing whole images captured on an APS sized sensor may be computer screen resolution and not astronomical seeing or image scale.

Tony Gondola avatar

That’s something I’ve mentioned before. Of course it no longer applies once you zoom in and start pixel peeping. Astrophotography is a little weird in that sense. There doesn’t seem to be much thought about intended presentation size or resolution. As a tiny sensor shooter I sometimes wish the bin had more options in that area.

TiffsAndAstro avatar
Tony Gondola:
That’s something I’ve mentioned before. Of course it no longer applies once you zoom in and start pixel peeping. Astrophotography is a little weird in that sense. There doesn’t seem to be much thought about intended presentation size or resolution. As a tiny sensor shooter I sometimes wish the bin had more options in that area.


Image scale is under our control; presentation scale is not, so I don't worry about the latter.

So should I start to worry about it ?
andrea tasselli avatar
TiffsAndAstro:
So should I start to worry about it ?


No, why would you?
TiffsAndAstro avatar
andrea tasselli:
TiffsAndAstro:
So should I start to worry about it ?


No, why would you?


Just checking, as it's not really anything I'd considered.
Tony Gondola avatar

TiffsAndAstro · Mar 8, 2026, 07:13 PM

Tony Gondola:
That’s something I’ve mentioned before. Of course it no longer applies once you zoom in and start pixel peeping. Astrophotography is a little weird in that sense. There doesn’t seem to be much thought about intended presentation size or resolution. As a tiny sensor shooter I sometimes wish the bin had more options in that area.



Image scale is under our control; presentation scale is not, so I don't worry about the latter.

So should I start to worry about it ?

Like many things, it depends. In this case, it depends on what the goal is for an image, how it will be viewed and how much control you want over these things. In the old days, when presenting an image was about making a print, everything was under control. Depending on a lot of factors, you knew how large the print should be and generally how it would be viewed. In the digital world, not so much.

John Tucker avatar

TiffsAndAstro · Mar 8, 2026, 07:13 PM

Tony Gondola:
That’s something I’ve mentioned before. Of course it no longer applies once you zoom in and start pixel peeping. Astrophotography is a little weird in that sense. There doesn’t seem to be much thought about intended presentation size or resolution. As a tiny sensor shooter I sometimes wish the bin had more options in that area.



Image scale is under our control; presentation scale is not, so I don't worry about the latter.

So should I start to worry about it ?

Oh, I’d say they are both absolutely under your control. But I wouldn’t worry about it. It’s an input to the decision-making process, not a goal unto itself.

If I want a relatively noise-free picture and I don’t have access to a lot of acquisition time (because I live in Florida and it’s cloudy 95% of the time during nebula season), I’ll undersample. If I’m aiming to win “Image of the Day” and create something that will withstand scrutiny at any level of examination using my hosted scope in rural Utah, I’ll keep my image scale down and get 100 hours of data.

But I guess my point in writing this was just my own slow thinking realization that there are no fast telescopes, only fast (and undersampled) image scales.

andrea tasselli avatar
John Tucker:
But I guess my point in writing this was just my own slow thinking realization that there are no fast telescopes, only fast (and undersampled) image scales


That very much depends on your circumstances, such as what the seeing throws at you.
John Hayes avatar

F/4 is definitely faster than F/10; however, what counts is signal strength and there are more factors that go into that than simply optical speed. If you want to understand this stuff on a deeper level, take a look at this presentation on TAIC:

https://www.youtube.com/watch?v=HiJoqQp1qFI

John

Rodd Dryfoos avatar

I think what he is saying is that it’s the number of photons collected per unit time that is important. A 2” f/3 scope is not faster than a 17” f/7 scope. Throw a reducer on a scope and you have not changed the number of photons, just how they are distributed. If seeing is 2” and the pixel scale of a 20” scope is 0.5”, the only real difference between that and a 6” scope with a pixel scale of 0.5” is speed. So when comparing f-ratios for speed, one must take aperture into consideration. For equal aperture and all other things being equal, f/3 is faster than f/8. Change the aperture of one of the scopes, and it is not so easy to say. Even if aperture is the same, all things are rarely equal. Stark (creator of PHD Guiding) wrote a paper comparing an 8” (might have been 10”) Newtonian with a 4” refractor. Everyone thought the Newt would prevail easily, but it was darn close.

John Tucker avatar

John Hayes · Mar 9, 2026 at 02:46 AM

F/4 is definitely faster than F/10; however, what counts is signal strength and there are more factors that go into that than simply optical speed. If you want to understand this stuff on a deeper level, take a look at this presentation on TAIC:

https://www.youtube.com/watch?v=HiJoqQp1qFI

John

John Tucker avatar

andrea tasselli · Mar 9, 2026 at 01:17 AM

John Tucker:
But I guess my point in writing this was just my own slow thinking realization that there are no fast telescopes, only fast (and undersampled) image scales



That very much depends on your circumstances, such as what the seeing throws at you.

I can see that there are circumstances in which the seeing is bad and so you don’t get a higher resolution picture by having 1 arcsec/pixel digital resolution instead of 2 arcsec per pixel resolution. So in that sense I’ve misused the word “undersampled” here.

But the greater speed seen with the F/3 Newt relative to the F/4 when using the same camera still arises because you’ve increased the area of sky each pixel “sees”, and thus arises from reduced digital resolution. At least at the first order of analysis, it’s pretty much equivalent to binning.

John Stone avatar
John Tucker:
But the greater speed seen with the F/3 Newt relative to the F/4 when using the same camera still arises because you’ve increased the area of sky each pixel “sees”, and thus arises from reduced digital resolution. At least at the first order of analysis, it’s pretty much equivalent to binning.


John, 

You're exactly right to think about what area of the sky is projected onto a pixel by the optics. Now add in one more consideration: the aperture, which tells you how many of the photons from that area of the sky are concentrated on that pixel.

For counting photons, ignore the rest; it's just a bunch of noise.

https://lambermont.dyndns.org/astro/code/compare-telescopes.html?a&d1=100&l1=500&c1p=2.315&d2=100&l2=1000&c2p=4.63

But for the quality of the images produced by your camera you also need to consider the atmospheric seeing, the amount of distortion in the optics that's concentrating the signal onto your pixels, and the response of the light sensor behind the pixel, i.e., seeing, Strehl ratio, quantum efficiency, and camera noise.

What I do is always start with my site's best seeing. Let's say that it's 1.5".

Then figure out, for that seeing, what the optimal sampling is (as in the number of pixels laid across that 1.5" blurry star spot). Math says the minimum you need is 2, but reasonable people say about 3.5 or so. For 1.5" seeing that's about 0.43"/px. Now find out what focal length gives you that sampling for your camera. For the most popular IMX533/571/455/411/461 Sony sensors with 3.76 µm pixels that is 1803 mm focal length. Now go buy the biggest/fastest/lowest-distortion optics you can possibly afford for that focal length, and at the focal plane put the biggest sensor you possibly can. Be sure to pair it with the highest quality filters you can find and use the highest quality camera you can find.

Now you have a telescope system that can record everything the sky can offer you at your observing site.

If you want to image smaller/finer things then you have to find a new/better observing site...  repeat until you're out of money.  :-)
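That sizing recipe fits in a few lines of Python (same numbers as above; the 3.5 pixels per seeing FWHM is the rule of thumb, not a hard law):

```python
def focal_length_for_sampling(seeing_arcsec, px_per_fwhm, pixel_um):
    """Focal length (mm) that lays px_per_fwhm pixels across the seeing disc."""
    target_scale = seeing_arcsec / px_per_fwhm   # arcsec per pixel
    return 206.265 * pixel_um / target_scale     # plate-scale relation

fl = focal_length_for_sampling(1.5, 3.5, 3.76)
print(round(fl))  # ~1810 mm; the 1803 mm above comes from rounding the scale to 0.43"/px
```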
John Stone avatar
John Tucker:
At least at the first order of analysis, it's pretty much equivalent to binning


You are absolutely correct in this thinking, except that CMOS sensor read noise doubles with each level of binning, so you have to take that into account.

In this analysis it's all about the ratio of the photon shot noise from the sky compared to the other noise you get from the camera. Photons are like raindrops: they don't come down evenly in time but instead arrive in "clumps", so you never know, for a given exposure time, exactly what the true average photon arrival rate is. The true average photon count is the signal you're looking for; it will be the brightness of that pixel in your image compared against the black sky background.

A guy named Robin Glover (the guy behind the SharpCap software) came up with a rule of thumb that says you want the sky background signal in your exposure to be 3x-10x the camera read noise squared. When you reach that ratio the camera noise is so small that you can't notice it in your pictures.

Each level of binning doubles the camera noise. Having more camera noise requires longer exposures to "swamp the read noise"; longer exposures require better mechanics to hold your telescope steady against the sky; and longer exposures leave more time for something to go wrong and spoil that exposure. Spoiled exposures require more cloudless, moonless nights before you get a nice picture...
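The doubling claimed here is just read noise adding in quadrature when 2x2 pixel groups are summed in software. A minimal sketch, using an illustrative 1.5 e- read noise (not any particular camera's spec):

```python
def binned_read_noise(read_noise_e, bin_level):
    """Read noise after software binning: each 2x2 bin level sums 4 pixels,
    so the read noise grows in quadrature by sqrt(4) = 2 per level."""
    return read_noise_e * 2 ** bin_level

print(binned_read_noise(1.5, 0))  # unbinned: 1.5 e-
print(binned_read_noise(1.5, 1))  # 2x2 bin:  3.0 e-
print(binned_read_noise(1.5, 2))  # 4x4 bin:  6.0 e-
```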
John Tucker avatar

John Stone · Mar 9, 2026, 06:54 AM

John Tucker:
But the greater speed seen with the F/3 Newt relative to the F/4 when using the same camera still arises because you’ve increased the area of sky each pixel “sees”, and thus arises from reduced digital resolution. At least at the first order of analysis, it’s pretty much equivalent to binning.



John, 

You're exactly right to think about what area of the sky is projected onto a pixel by the optics. Now add in one more consideration: the aperture, which tells you how many of the photons from that area of the sky are concentrated on that pixel.

For counting photons, ignore the rest; it's just a bunch of noise.

https://lambermont.dyndns.org/astro/code/compare-telescopes.html?a&d1=100&l1=500&c1p=2.315&d2=100&l2=1000&c2p=4.63

But for the quality of the images produced by your camera you also need to consider the atmospheric seeing, the amount of distortion in the optics that's concentrating the signal onto your pixels, and the response of the light sensor behind the pixel, i.e., seeing, Strehl ratio, quantum efficiency, and camera noise.

What I do is always start with my site's best seeing.   Let's say that it's 1.5".   

Then figure out, for that seeing, what the optimal sampling is (as in the number of pixels laid across that 1.5" blurry star spot). Math says the minimum you need is 2, but reasonable people say about 3.5 or so. For 1.5" seeing that's about 0.43"/px. Now find out what focal length gives you that sampling for your camera. For the most popular IMX533/571/455/411/461 Sony sensors with 3.76 µm pixels that is 1803 mm focal length. Now go buy the biggest/fastest/lowest-distortion optics you can possibly afford for that focal length, and at the focal plane put the biggest sensor you possibly can. Be sure to pair it with the highest quality filters you can find and use the highest quality camera you can find.

Now you have a telescope system that can record everything the sky can offer you at your observing site.

If you want to image smaller/finer things then you have to find a new/better observing site...  repeat until you're out of money.  :-)

Agree with all of the above. In focusing on the effect of focal ratio, it was never my intent to suggest that nothing else matters, just to look at the effect of focal ratio with other factors held constant.

Arun H avatar

John Stone · Mar 9, 2026, 07:06 AM

Each level of binning doubles the camera noise, having more camera noise requires longer exposures  to "swamp the read noise"

Hi John -

I do not believe this is correct. Remember that ALL noise doubles when you bin, and in the same ratio. So, yes, the camera read noise will double when you do a 2×2 bin, but so too will the photon shot noise, since the signal has increased by a factor of 4 (a 2×2 bin captures 4x the photons of a single pixel). Therefore the ratio of the read noise to the shot noise remains the same. If your exposure length is such that photon shot noise swamps the read noise in a single pixel, it will also swamp it in a 2×2 bin.
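The invariance described here can be checked in a few lines; the 100 e- signal and 1.5 e- read noise below are made-up illustrative numbers:

```python
import math

def shot_to_read_ratio(signal_e, read_noise_e, n_pixels):
    """Ratio of photon shot noise to read noise after summing n_pixels pixels."""
    shot = math.sqrt(signal_e * n_pixels)      # shot noise of the summed signal
    read = read_noise_e * math.sqrt(n_pixels)  # read noise adds in quadrature
    return shot / read

single = shot_to_read_ratio(100.0, 1.5, 1)
binned = shot_to_read_ratio(100.0, 1.5, 4)  # 2x2 software bin
print(single, binned)  # identical: binning doubles both noises, so the ratio is unchanged
```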

Arun

John Tucker avatar

Arun H · Mar 9, 2026, 03:03 PM

noise will double when you do a 2×2 bin, but so too will the photon shot noise, since the signal has increased by a factor of 4 (a 2×2 bin captures 4x the photons as a single pixel). Therefore the ratio of the read noise to the shot noise remains the same. If your exposure length is such that photon shot noise swamps the read noise in a single pixel, it will also swamp it in a 2×2 bin.

You guys may be looking at second order effects that I am too simple to fully understand.

What I know is this: I got a lot of pixelation when I used my ASI294MC camera with 4.63 µm pixels on my F/4 Newt with a 0.75x reducer (effectively F/3).

So I went out and bought an ASI183MC camera with 2.4 µm pixels. And found that

  • My image scale (arcsec/pixel) improved, giving a sharper picture, and

  • The time needed to get a decent exposure increased, becoming roughly equivalent to what it was with the 294 camera without the focal reducer (i.e., at F/4).

That’s all. Whatever the math is, empirically in my system I get pretty much the exact result predicted using photon flux per pixel as the only important variable.

Bill McLaughlin avatar

Tony Gondola · Mar 8, 2026, 04:33 PM

That’s something I’ve mentioned before. Of course it no longer applies once you zoom in and start pixel peeping. Astrophotography is a little weird in that sense. There doesn’t seem to be much thought about intended presentation size or resolution. As a tiny sensor shooter I sometimes wish the bin had more options in that area.

Indeed! The pixel count of imaging chips is mostly far greater than the screen resolution of most commonly used displays. This is one reason that I still tend to favor APS-C sensors over full frame (and don’t get me started ranting about the even larger sensors 😉). Often the only real advantage of full frame is easier composition, with the rest being mostly negatives related to the greater demands on the optics and displays.

Arun H avatar

John Tucker · Mar 9, 2026, 03:11 PM

You guys may be looking at second order effects that I am too simple to fully understand.

The only thing I am responding to is an inaccurate statement made by John Stone which claims that binning increases the time required to swamp read noise - it does not, and it is a mathematical fact. This is not a second order effect. It is the fundamentals of how noise addition and SNR growth works. It is the exact principle that Robin Glover uses when he suggests optimal exposure times to swamp read noise.

I have agreed already with your original premise on the importance of aperture, and this is actually quite well known. Two systems, with image scale such that each pixel captures the same area of sky in both systems, will have a signal at the pixel level that depends only on aperture. In other words, for constant object space sampling, signal at pixel level depends only on aperture. It is a natural consequence of optics and is covered in detail in John Hayes’s presentation and also several discussions here on AB (and probably elsewhere too).

John Tucker avatar

Arun H · Mar 9, 2026, 03:32 PM

John Tucker · Mar 9, 2026, 03:11 PM

You guys may be looking at second order effects that I am too simple to fully understand.

The only thing I am responding to is an inaccurate statement made by John Stone which claims that binning increases the time required to swamp read noise - it does not, and it is a mathematical fact. This is not a second order effect. It is the fundamentals of how noise addition and SNR growth works. It is the exact principle that Robin Glover uses when he suggests optimal exposure times to swamp read noise.

I have agreed already with your original premise on the importance of aperture, and this is actually quite well known. Two systems, with image scale such that each pixel captures the same area of sky in both systems, will have a signal at the pixel level that depends only on aperture. In other words, for constant object space sampling, signal at pixel level depends only on aperture. It is a natural consequence of optics and is covered in detail in John Hayes’s presentation and also several discussions here on AB (and probably elsewhere too).

I didn’t mean to be argumentative. Sorry.

John Hayes avatar

John Tucker · Mar 9, 2026 at 04:57 AM

Good Lord, John, I’m sure you’re qualified to speak to this, but if you disagree with what I said, tell me why. I’m not going to wade through a 1.75 hour video trying to find your argument! 😊

Start at 35:10, turn the playback speed to 1.25x - 1.5x, and watch to 1:06. That will take you under ~24 minutes. I explained how this stuff works in that presentation so that I don’t have to re-explain it to every single person who expresses a desire to understand it on a deeper level on these forums. I could give you a stack of references or go through pages of discussion on this thread, but I’ve condensed all of that stuff into one concise, easy to watch video. So, it’s up to you whether you want to put in a little effort to understand how the first order properties of a telescope affect how irradiance translates into signal in your camera. Once you watch it, then I’m happy to answer questions about it.

- John

John Tucker avatar

John Hayes · Mar 9, 2026, 03:50 PM

John Tucker · Mar 9, 2026 at 04:57 AM

Good Lord, John, I’m sure you’re qualified to speak to this, but if you disagree with what I said, tell me why. I’m not going to wade through a 1.75 hour video trying to find your argument! 😊

Start at 35:10, turn the playback speed to 1.25x - 1.5x, and watch to 1:06. That will take you under ~24 minutes. I explained how this stuff works in that presentation so that I don’t have to re-explain it to every single person who expresses a desire to understand it on a deeper level on these forums. I could give you a stack of references or go through pages of discussion on this thread, but I’ve condensed all of that stuff into one concise, easy to watch video. So, it’s up to you whether you want to put in a little effort to understand how the first order properties of a telescope affect how irradiance translates into signal in your camera. Once you watch it, then I’m happy to answer questions about it.

- John

deleted