Hello ABin! Still unable to image (weather and clouds, or just poor timing on the nights it does clear up), I've been pondering something:
What would the optimal astrophotography image sensor look like?
Thus far, most astrophotography cameras have been built around sensors originally designed for other purposes. CCDs are often designed for entirely different applications and repurposed for astro (although there are some exceptions). sCMOS sensors are usually designed primarily for scientific or medical purposes, and often have characteristics that are quite detrimental to astro use (e.g. high dark current). Most of the CMOS sensors in modern consumer-grade astro cameras were designed for terrestrial photography with DSLR or mirrorless cameras, security cameras, medical imaging, etc.
So while some sensors are very good, such as the IMX455 or IMX533, they are not necessarily optimal for our use case. What WOULD be optimal? I've had a few thoughts, but I'm curious what other ideas might be combined to produce a sensor that was truly optimal for astrophotography. Here are some of my initial thoughts:
A. Non-destructive reads!
- Consider a sensor that allowed non-destructive reads, maybe even of just some portion of the sensor (e.g. the center, a corner, or a strip along one side), so that the primary IMAGING sensor could ALSO GUIDE. Successive non-destructive reads of that region could be used for guiding (a toy sketch of this follows the list).
- Also consider a sensor that could allow full field non-destructive reads…say just to preview the image, see if all is going well!
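Here's a minimal numpy sketch of the guiding idea (all the numbers — flux, noise, drift rate — are made up): since each non-destructive read (NDR) returns the total accumulated charge, differencing consecutive reads of a small guide ROI gives just the charge gained in that interval, which can then be centroided:

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid(frame):
    """Intensity-weighted centroid of a 2D frame (background assumed removed)."""
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return (xs * frame).sum() / total, (ys * frame).sum() / total

def star_psf(shape, cx, cy, flux, sigma=1.5):
    """Gaussian stand-in for a guide star's PSF."""
    ys, xs = np.indices(shape)
    return flux * np.exp(-((xs - cx)**2 + (ys - cy)**2) / (2 * sigma**2))

# Simulate a guide star accumulating in a small ROI of the imaging sensor.
roi_shape = (32, 32)
accumulated = np.zeros(roi_shape)
prev_read = np.zeros(roi_shape)

for i in range(10):                      # ten 1 s guide intervals
    drift = 0.05 * i                     # hypothetical slow mount drift, px/read
    accumulated += rng.poisson(star_psf(roi_shape, 16 + drift, 16, flux=500.0))
    ndr = accumulated + rng.normal(0, 1.0, roi_shape)   # NDR with ~1 e- noise
    diff = ndr - prev_read               # charge gained this interval only
    prev_read = ndr
    cx, cy = centroid(np.clip(diff, 0, None))
    print(f"read {i}: guide centroid x = {cx:.2f} px")
```

The difference frames are what feed the guiding loop; the underlying charge is never reset, so the final full read still carries the whole exposure.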
B. Split pixels in clustered groups.
- Successive non-destructive reads for guiding might not be optimal, as the signal would differ from read to read. That could shift the centroid unnecessarily. It might work, and there might be ways to use a dynamic gain setting or dynamically adjust the exposures to normalize them and minimize such effects, but it could still be less than optimal.
- Split pixels (like the dual-pixel autofocus used in many mirrorless cameras that lack a separate dedicated AF sensor), but much more densely concentrated (and perhaps in several regions of the sensor: center, left, right, top, bottom and the four corners) so that every pixel in a region was split, might allow for more consistent partial reads for guiding. With each "guide" read being a complete (destructive) read of those sub-pixels, you wouldn't have the problem of successively changing signal that non-destructive reads have. Those guide reads could then be recorded (or perhaps successively combined in some kind of per-pixel backing memory) so that the full pixel charge could be applied during the final full readout. This would allow guiding without affecting the total signal for the final image (sketched below).
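A toy model of that split-pixel scheme (a 50/50 split is assumed, and the per-interval flux and noise numbers are purely hypothetical): the guide half is read and reset every interval so each guide frame is self-contained, the digitized values accumulate in a backing memory, and at final readout nothing is lost:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of a split-pixel guide region: each pixel is divided into an
# "imaging" half and a "guide" half. The guide half is fully read (and
# reset) every guide interval, so each guide frame stands alone -- no
# differencing of non-destructive reads needed. The digitized guide
# values go into a per-pixel backing memory and are added back at final
# readout, so the guiding photons still count toward the image.
shape = (32, 32)
imaging_half = np.zeros(shape)   # integrates untouched for the whole sub
backing_mem = np.zeros(shape)    # running sum of destructive guide reads

for interval in range(30):                       # 30 guide reads per sub
    photons = rng.poisson(20.0, shape)           # hypothetical flux per interval
    imaging_half += photons * 0.5                # half the light to each side
    guide_frame = photons * 0.5 + rng.normal(0, 1.0, shape)  # read + reset
    backing_mem += guide_frame                   # accumulate for final image
    # guide_frame would be centroided here for mount corrections

final_pixel_values = imaging_half + backing_mem  # full signal recovered
print(f"mean recovered signal: {final_pixel_values.mean():.1f} e-")
```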
C. Voltage Binning.
- While CMOS sensors don't support the kind of binning that CCDs do, where charge shifting lets you bin arbitrarily large groups of rows (or columns) of pixels on many CCDs (granted, there are limits, as binning enough pixels can eventually saturate the output register or output buffer), there are other forms of binning. You can in fact do small amounts of charge binning in CMOS sensors that support it: most CMOS sensors use a 4-shared pixel architecture, and adding a backing memory in the form of, you guessed it, a CCD (!) to each group can allow binning of those four pixels (2x2 groups). But this requires a much more complex readout architecture.
- A simpler approach is voltage binning, where the output voltages of the binned pixels are combined. There are different ways of doing this…IMO combining the voltages after charge-to-voltage conversion but before amplification is better, but from what I've read it's more complex (doable with BSI sensors, where there is more room for the necessary transistors and wiring). A rough noise comparison of the binning modes is sketched below.
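A back-of-the-envelope comparison of how the 2x2 binning modes pay for read noise. All numbers are assumed (5 e- signal and 2 e- sky per pixel, 1.5 e- read noise), and the voltage-binning line is only a first-order model; real implementations will differ in where exactly the noise enters:

```python
import numpy as np

# Assumed per-pixel numbers for a faint target.
signal_px, sky_px, read_noise = 5.0, 2.0, 1.5
n = 4  # pixels per 2x2 bin

shot = np.sqrt(n * (signal_px + sky_px))   # shot noise of the binned charge

# Charge binning: the four charges are summed BEFORE readout,
# so read noise is applied once to the whole bin.
snr_charge = n * signal_px / np.hypot(shot, read_noise)

# Voltage binning: summing after charge-to-voltage conversion but before
# the noisier downstream stages; to first order it also pays ~one read
# noise per bin (this is the assumption, not a datasheet fact).
snr_voltage = n * signal_px / np.hypot(shot, read_noise)

# Digital (software) binning: each pixel is read separately, so read
# noise adds in quadrature -- sqrt(4) = 2x the per-read noise.
snr_digital = n * signal_px / np.hypot(shot, read_noise * np.sqrt(n))

print(f"charge  bin SNR: {snr_charge:.2f}")
print(f"voltage bin SNR: {snr_voltage:.2f}")
print(f"digital bin SNR: {snr_digital:.2f}")
```

The gap between the modes only matters when the signal is faint enough that read noise competes with shot noise, which is exactly the regime astrophotography lives in.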
D. Continuous read? (Just theory-mongering)
- What if we didn't need to do a single read at the end of an exposure? What if, instead, we could do "continuous" reads and accumulate the result? There is a concept for a quantum-film-type sensor where "jots", dynamic groups of insanely small pixels (1 micron or less), can be "activated" more like a silver halide grain in classic film, but with the ability for each jot to subdivide and become finer-grained as more photons arrive, allowing for, in a sense, dynamic signal and dynamic resolution. Quantum film sensors don't really do a normal readout; they "read out" progressively and are in essence photon-counting, so there is effectively no read noise. You count the active pixels in each active jot and sum the actives over time. Very interesting concept, but still very theoretical. However, perhaps some kind of periodic read, say every second, could be used, with the resulting signal combined with previous reads, progressively accumulating the signal over time?
- This would require sensors with effectively zero read noise, otherwise the accumulation of read noise across so many reads would overwhelm the signal (a quick back-of-envelope on this follows the list). There are potentially ways to achieve this. The quantum film sensor effectively does, as it's not really reading but counting. That said, there are sensors with fractional read noise (a small fraction of an electron per read), and if you can acquire many photons per pixel per read (background sky level), then that might work.
- Photon counting is also an interesting concept, as it effectively eliminates read noise. This might not matter in light-polluted zones, but under truly dark skies, say at a permanent dark-site observatory, such a sensor could allow for better images. Especially narrow-band images, or any imaging of exceptionally faint detail (IFN?).
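On the read-noise accumulation point: stacked read noise grows as sqrt(N) times the per-read noise, so the per-read figure matters enormously. A quick back-of-envelope (the sky rate and read-noise values are illustrative only):

```python
import numpy as np

# How fast does read noise pile up if we read out every second and sum?
# Total read noise over N reads grows as sqrt(N) * sigma_read.
n_reads = 600                          # 1 s reads over a 10-minute sub
sky_rate = 0.5                         # e-/px/s, a dark-site-ish background

for sigma_read in (3.0, 1.0, 0.2, 0.0):
    accumulated_read_noise = np.sqrt(n_reads) * sigma_read
    sky_shot_noise = np.sqrt(n_reads * sky_rate)
    print(f"read noise {sigma_read} e-: "
          f"stacked read noise {accumulated_read_noise:6.1f} e- "
          f"vs sky shot noise {sky_shot_noise:.1f} e-")
```

At 3 e- per read the accumulated read noise swamps the sky shot noise; at 0.2 e- it stays comfortably below it, which is why fractional read noise makes the continuous-read idea plausible.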
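And on photon counting itself: with deep sub-electron read noise, the analog output clusters tightly around integer electron counts, so simple rounding recovers the exact photon number. A quick simulation (sigma values illustrative) shows how fast miscounts vanish as read noise drops:

```python
import numpy as np

rng = np.random.default_rng(2)

# Faint-signal regime: ~1.2 photons per pixel on average.
true_photons = rng.poisson(1.2, 100_000)

for sigma_read in (0.5, 0.25, 0.15):
    analog = true_photons + rng.normal(0, sigma_read, true_photons.shape)
    counted = np.rint(analog)          # threshold to nearest integer count
    error_rate = np.mean(counted != true_photons)
    print(f"sigma = {sigma_read} e-: miscounted {error_rate:.1%} of pixels")
```

Around 0.5 e- the counts are wrong almost a third of the time, but by ~0.15 e- miscounts are rare: the sensor is effectively photon counting, with no read-noise penalty no matter how many reads you stack.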
So, any other ideas? What have you thought of that might make for the IDEAL, optimal imaging sensor for astrophotography?