Rick Evans · Dec 31, 2025, 12:27 AM
I know that, for example, a 9mm eyepiece has a focal length of 9mm. Dividing a telescope’s focal length by the focal length of the eyepiece in use gives the approximate magnification. We don’t think in terms of magnification when using a camera sensor instead of an eyepiece, though.
A camera sensor is placed at the prime focus of a telescope system, so the focal length should be given by aperture × focal ratio. I think that the camera sensor size is inversely proportional to the image scale produced. I also think that image resolution is determined at least in part by telescope aperture, sensor pixel size, and pixel count, and that there is a formula for calculating it.
So, with all that as background, I’m just curious to know how to determine which camera sensor characteristics would give me detail equivalent to what I would observe using an eyepiece of a particular focal length with the same telescope.
Let me see if I can clarify a few things for you. Optically, the question you are asking is a little like, “If apples are red and oranges are citrus, how much do grapes weigh?”
Optical magnification is a measure of how much angles are magnified. So, a 1000 mm focal length telescope equipped with an eyepiece that has a 20 mm focal length will magnify angles by 50 times. It is the angular magnification that makes things look bigger (or, if you like, closer). Angular magnification has nothing to do with the field of view that you see through the telescope. The field of view is determined by whatever limits the off-axis rays in the focal plane. In an eyepiece, that is called the “field stop,” and you can easily see it if you look into the back side of the eyepiece. When you remove the eyepiece and place a sensor in the focal plane, the size of the sensor determines the field of view. Many sensors are larger than the field stop in most eyepieces, which means that a camera will see a bigger patch of sky than you can with your eye. (There are of course exceptions to this when you select a smaller sensor size.)
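To put numbers on the field-of-view comparison, here’s a quick Python sketch. The 21 mm field stop and the APS-C sensor dimensions are just illustrative values I picked, not anything specific to your gear:

import math

def true_fov_deg(linear_mm, focal_length_mm):
    """True field of view, in degrees, for a linear dimension at the
    focal plane (eyepiece field stop diameter or sensor width/height)."""
    return math.degrees(2 * math.atan(linear_mm / (2 * focal_length_mm)))

fl = 2000.0                      # telescope focal length, mm
field_stop = 21.0                # assumed eyepiece field stop diameter, mm
sensor_w, sensor_h = 23.5, 15.7  # assumed APS-C sensor dimensions, mm

print(f"Eyepiece true FOV: {true_fov_deg(field_stop, fl):.2f} deg")
print(f"Sensor FOV: {true_fov_deg(sensor_w, fl):.2f} x {true_fov_deg(sensor_h, fl):.2f} deg")

On those assumed numbers the sensor sees about 0.67 × 0.45 degrees versus about 0.60 degrees through the eyepiece, which is the “bigger patch of sky” effect described above.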
The sharpness of the image that you can see with your eye is determined by the resolving power of your eye, which is typically about one minute of arc for most folks with 20/20 vision. If you work that backwards through the telescope, the resolution of the system is typically about 1/m arc-minutes, where ‘m’ is the magnification. This is not an unlimited value, because things like optical aberrations, atmospheric seeing, and the diffraction of light place an upper bound on what can be resolved.
When you put a camera on a telescope, the angular resolution is determined by the pixel spacing of the sensor divided by the focal length. Here again, optical aberrations, atmospheric seeing, and the diffraction of light place the same upper bound on what can be resolved.
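In case it’s useful, here are both of those relationships as a small Python sketch. The function names are mine; the ~1 arc-minute eye limit is the figure from above:

import math

ARCSEC_PER_RAD = math.degrees(1) * 3600   # ~206265

def visual_resolution_arcmin(magnification, eye_limit_arcmin=1.0):
    """On-sky resolution for a visual observer: the eye's ~1 arc-minute
    limit divided by the magnification."""
    return eye_limit_arcmin / magnification

def pixel_scale_arcsec(pixel_um, focal_length_mm):
    """Angular size of one pixel on the sky: arctan(p / f)."""
    p_mm = pixel_um / 1000.0
    return math.atan(p_mm / focal_length_mm) * ARCSEC_PER_RAD

# e.g. a 3.76 micron pixel on a 2000 mm focal length scope:
print(f"{pixel_scale_arcsec(3.76, 2000.0):.2f} arcsec/pixel")   # ~0.39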
So tell me the aperture of your telescope, the focal length, and what eyepiece you are using, and it’s easy to compute the pixel spacing required to match what you see visually. Here’s an example:
Telescope: 200 mm, F/10, FL = 2000 mm
Eyepiece: 20 mm, m = 100x, visual resolution ~ 0.01’ = 0.6”
Sensor resolution θ = arctan(p/f), so p = f × tan(θ)
p = pixel spacing = 5.82 microns (spacing needed to match the eye’s 1’ resolution at 100x, i.e. 0.6”)
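Here’s that worked example as a self-contained Python sketch, using the numbers straight from the example above:

import math

ARCSEC_PER_RAD = math.degrees(1) * 3600   # ~206265

fl_mm = 2000.0                  # 200 mm aperture at F/10
m = fl_mm / 20.0                # 20 mm eyepiece -> 100x
vis_arcsec = (1.0 / m) * 60.0   # eye's 1' limit / m = 0.01' = 0.6"

# Invert the pixel-scale relation: p = f * tan(theta), mm -> microns
theta_rad = vis_arcsec / ARCSEC_PER_RAD
p_um = fl_mm * math.tan(theta_rad) * 1000.0
print(f"pixel spacing to match: {p_um:.2f} microns")   # -> 5.82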
Just be aware that typical atmospheric seeing might limit even the visual resolution to around 2”, so this system would be over-sampled unless the seeing were spectacularly good. Most modern CMOS sensors have a pixel spacing of 3.76 microns, so this sort of sampling is pretty easy to achieve. NOTE: I’ve been pretty loose with terminology here. Sensor sampling is not the same as resolution, but that’s a completely different discussion.
John