Stacked sensors

10 replies · 264 views
Tony Gondola

I wonder if the new stacked image sensors that are coming out will make it possible to image and guide with the same chip. No more OAG’s or separate guide scopes. A full frame sensor can achieve a readout rate of 20 fps with this technology so it seems possible. With the right software it would be transparent to the user. The total integration time could be summed in camera while a short duration guide frame is taken every few seconds. The guide exposures could then be rolled back into the internal summing that makes up the total sub so that the total sub length really hasn’t changed. Am I missing something or is this the future?
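To make the idea concrete, here's a toy sketch in Python of what such in-camera summing might look like (everything here is invented for illustration: the frame size, exposure lengths, and the fake_frame/centroid helpers are stand-ins, not any real camera's firmware):

```python
import numpy as np

rng = np.random.default_rng(0)

GUIDE_EXPOSURE_S = 2.0   # hypothetical length of each short guide frame
SUB_LENGTH_S = 300.0     # desired total sub length
N_FRAMES = int(SUB_LENGTH_S / GUIDE_EXPOSURE_S)

def fake_frame(shape=(64, 64)):
    """Stand-in for one short readout from the stacked sensor."""
    return rng.poisson(5.0, size=shape).astype(np.float64)

def centroid(frame):
    """Intensity-weighted centroid: a crude guide-star position estimate."""
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return (xs * frame).sum() / total, (ys * frame).sum() / total

accumulator = np.zeros((64, 64))
for _ in range(N_FRAMES):
    frame = fake_frame()
    cx, cy = centroid(frame)   # guide correction derived from this short frame
    accumulator += frame       # ...then the same photons folded back into the sub

# 'accumulator' now holds the full 300 s of signal: nothing was thrown away,
# so the effective sub length is unchanged by guiding.
```

The point of the sketch is only that each short frame gets used twice: once for a centroid, once as a summand in the sub.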

Spacey

If you mean reading out data while still exposing, then it will require additional and substantial read and memory logic on the CMOS sensor, which reduces performance. This is a good topic for a discussion with AI to give you some insights into the evolution of Sony’s CMOS technology.

Robin Bosshard

Wouldn’t we hit the DUO dilemma again? With NB filters the short guiding frames will struggle to get a good SNR (if you still manage to get’em to be short)…

Tony Gondola

Spacey · Mar 24, 2026, 12:41 AM

If you mean reading out data while still exposing, then it will require additional and substantial read and memory logic on the CMOS sensor, which reduces performance. This is a good topic for a discussion with AI to give you some insights into the evolution of Sony’s CMOS technology.

No, you need to read up on what a stacked sensor does. The main advantage is that it gives very fast readout speeds, with even a full-frame sensor delivering 20 fps. In DSLRs, where the technology is currently deployed, it allows much faster autofocus and does away with rolling-shutter effects. The rest is just imagining what this technology might make possible on the astro camera side.

Tony Gondola

Robin Bosshard · Mar 24, 2026, 01:04 AM

Wouldn’t we hit the DUO dilemma again? With NB filters the short guiding frames will struggle to get a good SNR (if you still manage to get’em to be short)…

Yes, you certainly would, but if you can fold the data from the guide frames into the total for the sub it really wouldn’t matter. In other words, the guide frames could be as long as you need them to be without impacting integration time.

Alex Nicholas

It’s been tried in a sense in the past by Starlight Xpress, using interlaced pixel row readout (yes, I’m aware this is not the exact same thing - but, you know, elephant that is a perfect sphere in a vacuum etc.)

At the end of the day - if you’re imaging and guiding with the same camera, regardless of the technology used to achieve it, your issue is going to be guiding through filters…

I’ve done it with 3nm filters and an F/10 SCT with an SBIG ST-10XME in 2010–2011 or thereabouts… You CAN… but you don’t want to if there is a better way… Spoiler Alert! There is.

My primary question is this. What problem would this solve that isn’t better solved by an OAG, or OnAG?

None.

Change for the sake of change is wasteful, and sure, somewhere in between the lines, innovation exists… But honestly, guiding through your filters is sub-par, so any form of dual-sensor/stacked-sensor/interlaced-readout guiding is going to be sub-par.

Spacey

Tony Gondola · Mar 24, 2026, 02:49 AM

Spacey · Mar 24, 2026, 12:41 AM

If you mean reading out data while still exposing, then it will require additional and substantial read and memory logic on the CMOS sensor, which reduces performance. This is a good topic for a discussion with AI to give you some insights into the evolution of Sony’s CMOS technology.

No, you need to read up on what a stacked sensor does. The main advantage is that it gives very fast readout speeds, with even a full-frame sensor delivering 20 fps. In DSLRs, where the technology is currently deployed, it allows much faster autofocus and does away with rolling-shutter effects. The rest is just imagining what this technology might make possible on the astro camera side.

I’m going to tell you that you need to read up on what a stacked sensor is, and what it is not.

Tony Gondola

Alex Nicholas · Mar 24, 2026, 03:33 AM

It’s been tried in a sense in the past by Starlight Xpress, using interlaced pixel row readout (yes, I’m aware this is not the exact same thing - but, you know, elephant that is a perfect sphere in a vacuum etc.)

At the end of the day - if you’re imaging and guiding with the same camera, regardless of the technology used to achieve it, your issue is going to be guiding through filters…

I’ve done it with 3nm filters and an F/10 SCT with an SBIG ST-10XME in 2010–2011 or thereabouts… You CAN… but you don’t want to if there is a better way… Spoiler Alert! There is.

My primary question is this. What problem would this solve that isn’t better solved by an OAG, or OnAG?

None.

Change for the sake of change is wasteful, and sure, somewhere in between the lines, innovation exists… But honestly, guiding through your filters is sub-par, so any form of dual-sensor/stacked-sensor/interlaced-readout guiding is going to be sub-par.

The major advantage to my mind is the ability to use the entire sensor for guiding. That would resolve the filter issue to some degree simply because more/brighter stars would be available.

SonnyE

Curious subject. It sounds like a step (or a leap) forward. But I would expect to see some catch-22s to it.

I may be in the dark ages with my guide scope and separate guide camera, but I do have the independence of changing individual components, like when my old guide camera failed, to a “New and Improved” specimen (ASI290MM Mini). And with PHD2 and multi-star guiding, 9 guide stars for stability (in my case) seems to work amazingly well.

I, too, think I can see the advantages of guiding with the same picture the image files are being taken from. But I’d lose my independent component flexibility. That was one drawback of the Duo camera concept for me.

Anyway, I appreciate the steps forward. But I’m all warm and cozy where I am.

So for those points, I’m out. 😉

C.Sand

Tony Gondola · Mar 24, 2026, 02:52 AM

Robin Bosshard · Mar 24, 2026, 01:04 AM

Wouldn’t we hit the DUO dilemma again? With NB filters the short guiding frames will struggle to get a good SNR (if you still manage to get’em to be short)…

Yes, you certainly would, but if you can fold the data from the guide frames into the total for the sub it really wouldn’t matter. In other words, the guide frames could be as long as you need them to be without impacting integration time.

Guide frames being long isn’t the concern here. We know we can take longer frames to get the SNR necessary. I believe Robin’s concern is that you may not be able to take guide frames short enough to guide accurately.

I can see how this might be addressed by binning or other tactics, but I believe it misses the bigger issue of still misunderstanding stacked sensors. If I’m interpreting everything right, you are probably referring to the capabilities of cameras such as the A7V with Dual Gain Output. The issue here is that dual gain output takes the same exposure and basically just scales it differently with the two gains. There is no capability for dual exposure times with these sensors.
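To illustrate what I mean by “the same exposure scaled differently” (a toy model; the gain values, the 16-bit clip level, and the function names are all made up for illustration, not Sony’s actual pipeline):

```python
import numpy as np

ADC_MAX = 65535   # assumed 16-bit digitization ceiling

def dual_gain_readout(electrons, gain_hi=16.0, gain_lo=1.0):
    """One and the same exposure, digitized twice at two different gains."""
    hi = np.clip(electrons * gain_hi, 0, ADC_MAX)  # low noise, clips highlights
    lo = np.clip(electrons * gain_lo, 0, ADC_MAX)  # keeps highlights, noisier
    return hi, lo

def combine(hi, lo, gain_hi=16.0, gain_lo=1.0):
    """Prefer the high-gain sample; fall back to low gain where it saturated."""
    out = hi / gain_hi
    saturated = hi >= ADC_MAX
    out[saturated] = lo[saturated] / gain_lo
    return out

electrons = np.array([10.0, 1000.0, 40000.0])  # faint, medium, bright pixels
hi, lo = dual_gain_readout(electrons)
linear = combine(hi, lo)  # recovers the full range from a single exposure
```

Notice there is exactly one integration time anywhere in this scheme, which is the limitation: nothing here gives you a second, shorter exposure to guide on.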

What you’re looking for is a sensor with non-destructive readout capability (maybe you knew this and I’m talking down to you here; apologies if that’s the case). This is being used in JWST’s NIRCam, but to my knowledge it is a separate concept from what we would consider stacked sensors for daytime photography. To be clear, the daytime-photography sensors are primarily speed-focused and have no non-destructive readout capability at this time.

https://jwst-docs.stsci.edu/jwst-near-infrared-camera/nircam-instrumentation/nircam-detector-overview/nircam-detector-readout-patterns#gsc.tab=0

As charge accumulates during a NIRCam integration, the detectors are read out multiple times, non-destructively, sampling the data while conserving the charge in each pixel. This MULTIACCUM technique enables “up-the-ramp” fitting to determine the count rate from multiple data samples obtained over time. Up-the-ramp fitting facilitates cosmic ray rejection, reduces the effective readout noise (approximately by the square root of the number of samples), and increases the dynamic range of the final image (sampling bright sources before they saturate).

Unfortunately this is about where my knowledge drops off, and all the research papers I can find are behind paywalls. I believe that NIRCam isn’t a stacked sensor in the context of what this post is referring to, but I can’t find the exact reasoning behind why NIRCam can do this readout technique. Referencing the JWST user documentation may be helpful to your understanding here, specifically the NIRSpec Detector Readout page and the Understanding Exposure Times page. As you’ll find, there are a number of advantages to this readout method; however, I would also like to note that JWST uses a separate guiding system, the Fine Guidance Sensor (FGS). I believe this is essentially due to the point Robin Bosshard was making: in general you would prefer to have your guide sensor unencumbered by any filters (of which JWST has MANY) so as to get the best SNR possible.
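For intuition, the up-the-ramp idea is easy to simulate (a toy model; the count rate, read noise, and sample cadence are invented numbers, and real pipelines do a properly weighted fit with cosmic-ray flagging rather than a plain least-squares line):

```python
import numpy as np

rng = np.random.default_rng(1)

TRUE_RATE = 50.0    # e-/s incident count rate (invented)
READ_NOISE = 10.0   # e- RMS added by each non-destructive sample (invented)
N_SAMPLES = 20
T_SAMPLE = 1.0      # seconds between samples

# Non-destructive readout: charge keeps accumulating, so every sample sees
# the whole ramp so far, plus its own fresh read noise.
t = np.arange(1, N_SAMPLES + 1) * T_SAMPLE
ramp = TRUE_RATE * t + rng.normal(0.0, READ_NOISE, size=N_SAMPLES)

# "Up-the-ramp" fitting: the slope of a least-squares line through the
# samples estimates the count rate, with the effective read noise beaten
# down roughly as 1/sqrt(N) compared with a single destructive read.
rate_fit, _ = np.polyfit(t, ramp, 1)

# A cosmic-ray hit would show up as a sudden jump between two consecutive
# samples, which is why this scheme also makes cosmic-ray rejection easy.
```

With the invented numbers above, the fitted slope lands close to the true 50 e-/s rate despite 10 e- of read noise per sample.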

So, I could absolutely see this non-destructive readout technology coming to consumer sensors, but probably not for a while, and for the moment I still see only advantages in using an OAG as opposed to a cam like the Duos.

Alex Nicholas

Tony Gondola · Mar 24, 2026, 02:09 PM

The major advantage to my mind is the ability to use the entire sensor for guiding. That would resolve the filter issue to some degree simply because more/brighter stars would be available.

Agree/disagree… Really depends on what you’re imaging… and the OnAG does the same thing, except with two distinct cameras. Obviously, there is a cost difference, doing it all with one sensor alleviates the requirement for the OnAG and second guide camera.

I personally think, for the average user, an OAG with a sufficiently large prism and a high-quality, high-sensitivity guide camera makes the most sense, or, for more serious setups, an OnAG with something like a cooled 585M taking guide exposures and an IMX455 or something similar doing your imaging.

The advantage of the OnAG is that you’re not using a pick-off prism right at the edge of your telescope’s corrected field of view; rather, you are guiding on-axis, right in the middle of your imaging scope’s field of view. You’ll have the best-quality stars and the least vignetting there, as well as literally guiding on your target, as opposed to something just near it…

OAG and OnAG are both widely used in the amateur and professional realms and both do a great job…

For the typical amateur, a 30mm f/4 guide scope with a 120mm is more than sufficient… 50mm f/4 if you need the extra 80mm of focal length on your guider…

Given how new this type of technology is, and how slow the astrophotography market is to respond to new tech, I’d say it would be a long while before it comes to astrocameras, and because the word ‘astro’ is in ‘astrocamera’ expect it to be EXPENSIVE…

If you save your pennies, by the time they release a cooled astro camera with non-destructive readout, you’d have enough money to buy a 10Micron mount and not even bother with guiding at all…
