Why no LRGB Bayer matrix for astronomical OSC sensors?

16 replies · 750 views
Brian Boyle
At the risk of embarrassing myself, I wanted to ask a question that popped into my head the other night, as I struggled with LRGB imaging on a monochrome sensor.

Why don’t they make an OSC camera for astronomical imaging with an LRGB matrix rather than a RGGB matrix?

Sure, it won’t win you back the resolution gains of LRGB imaging with a monochrome sensor, and it limits you to a sub-optimal 1:1:1:1 exposure ratio through the different filters, but wouldn’t it be approximately twice as efficient in photon capture as a Bayer matrix - on the assumption that the second green channel is really “wasted” photons?

I am sure someone more knowledgeable about OSC photography/imaging technology than me can explain the obvious reason why this thinking is flawed (dynamic range issues?), or indeed point me to examples where this approach has already been tried and proved ineffective.

Thanks and Clear skies

Brian
Tim Hawkes
Just guessing - the astronomy market is too small to support the R and D plus production costs?
Minh Lết
Because the Bayer matrices on commercial image sensors are designed to mimic human vision, and the human eye is more sensitive to green (a trait we share with our fellow primates, since green is the most abundant part of terrestrial sunlight after it has passed through the atmosphere - also the reason tree leaves are green).
If a colour sensor did not double the green exposure area (hence 2 green pixels for every 1 R and 1 B), the colour balance of the image would be off and would not look "natural" to the human eye.
Those same sensors are then used in cooled-CMOS astro cameras. Developing an LRGB Bayer matrix is surely "doable" if you have enough R&D and manufacturing prowess to bring one to market, but I doubt you'd get enough volume from a market as niche as astrophotography to justify such a unique image sensor (don't forget you'd have to develop a new debayer algorithm for it too). It's far easier and more efficient to just use a mono sensor and put whatever filter you need in front of it.
Felipe del Alamo
Hi Brian. I love this kind of lateral thinking! Speaking as someone who works with computers, I want to highlight one additional point. Changing the Bayer matrix implies changing the camera's interpreter too. The algorithm that assembles the image would have to change, working more like an astronomy stacking program does, instead of the current algorithm built around RGGB (or similar) channels. That's an additional cost to the manufacturer.
But this is a very interesting idea. I hope someday a manufacturer decides to take the risk.
andrea tasselli
It would be better to use the tri-CCD/CMOS technology used in high-end video recorders. But there is an obvious problem with the different dynamic ranges of colour-filtered pixels versus clear ones. To me it is a clear case of a pointless invention. I mean, who else uses a luminance layer but us…?
John Hayes
Brian,

That's an interesting question, but the answer lies more in radiometry than in color balance.  The reason that LRGB imaging is so useful for AP is that with a monochrome sensor and selectable color filters, you can either go deeper (with the same exposure) -or- reduce your total exposure time relative to using an (OSC) RGB sensor at the same SNR.  I'm not prepared to do a calculation here, but with monochrome imaging, the 'L' filter is what determines the SNR of the image while the RGB data determines the "color noise".  Since your eye is less sensitive to color noise, you can get away with much less exposure in the RGB channels relative to the 'L' channel and still produce a "good" image.  The key is that you can use different exposures for L relative to the RGB data.   If you simply replaced one of the green filters in a Bayer pattern, you would lose that advantage.  With an OSC camera, the best you can do is to sum the channels to create a synthetic "Lum" channel…and that generally works pretty well–assuming that you want to create a LRGB image.

John
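The channel-summing idea above can be sketched with a toy shot-noise model. All photon counts here are invented for illustration, and read noise is ignored (read noise is exactly where a single true-L exposure would gain further over summing three filtered exposures):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                 # pixels simulated
signal = 300.0              # assumed mean photons/pixel in each colour channel

# Photon (shot) noise is Poisson-distributed
r = rng.poisson(signal, n)
g = rng.poisson(signal, n)
b = rng.poisson(signal, n)

synthetic_l = r + g + b     # "sum the channels" synthetic luminance

def snr(x):
    return x.mean() / x.std()

print(f"single colour channel SNR: {snr(r):.1f}")
print(f"synthetic L (R+G+B) SNR:   {snr(synthetic_l):.1f}")
```

With Poisson statistics the summed channel collects 3x the photons, so its SNR improves by about sqrt(3) over any single channel - the basic reason a luminance signal "goes deeper" for the same total time.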
Arun H
Brian Boyle:
Sure, it won’t win you back the resolution gains of LRGB imaging with a monochrome sensor, and it limits you to a sub-optimal 1:1:1:1 exposure ratio through the different filters, but wouldn’t it be approximately twice as efficient in photon capture as a Bayer matrix - on the assumption that the second green channel is really “wasted” photons?


I think John's post captures it well. The only thing I'd add here is that the second green pixel in an OSC quad is not really as much of a waste as you'd think. Many broadband targets such as galaxies and reflection nebulae have significant transmission in all three channels, and hence that additional pixel contributes meaningfully to SNR. Also, if you look at the QE graphs of an OSC, you'll see that the sensitivities of the R, G, and B pixels have significant overlap, so the green pixel is picking up some red photons and vice versa, for example.
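The contribution of that "extra" green pixel can be checked numerically with another toy model (photon counts invented for illustration): averaging two independent samples of the same signal improves SNR by roughly sqrt(2).

```python
import numpy as np

rng = np.random.default_rng(1)
mean_g = 200.0              # assumed mean photons/pixel in the green band

# Two independent Poisson samples, one per G pixel of the RGGB quad
g1 = rng.poisson(mean_g, 100_000)
g2 = rng.poisson(mean_g, 100_000)
g_avg = (g1 + g2) / 2       # what a debayer effectively does with the pair

def snr(x):
    return x.mean() / x.std()

print(f"one G pixel:  SNR ~ {snr(g1):.1f}")
print(f"two G pixels: SNR ~ {snr(g_avg):.1f}")
```

So on broadband targets where green carries real signal, the second G pixel buys about a 1.4x SNR improvement in that channel rather than being pure waste.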
Tim McCollum
Dylan O'Donnell once did a YouTube video on using the green data as luminance to add back in an LRGB combo.
I'm not sure if the video below is the one I was thinking of, but doing a quick search of his channel I did find this one.
You could try splitting the channels and then using the G as L.
This video uses NB filters, subbing the G for L and Ha for R; it might give you some ideas to try.
https://youtu.be/ld5X7xRA8Ck
Tim
Björn Arnold
As a side note, just for "fun", I found a listing of color filter arrays on Wikipedia (there's apparently no limit on imagination):
https://en.wikipedia.org/wiki/Color_filter_array

Björn
Brian Boyle
Thanks for everyone’s responses so far.

The main drivers would appear to be too much investment required for too little gain for too few people.

Those are all great points, but I would like to push a little further.

I completely acknowledge John’s point about it falling well short of LRGB mono imaging efficiency, but I was really thinking of comparing it to an existing RGGB OSC, which many choose for convenience. For example, if you were in the market for a ZWO 2400MC, would you rather buy one with an RGGB matrix or an LRGB one?  If it is a factor of 2 more efficient, then you are saving 40% of your time.

As Tim, Felipe and Minh said, R&D costs might be very high - not only in the matrix fabrication (I don’t know how they are made) but in the processing.  But a number of different matrices already exist, so some R&D exists.

Andrea’s point about dynamic range is a good one, and may limit sub-exposure times, but there are already large variations in photon rates onto a sensor through an RGGB matrix for red and blue objects.   Nevertheless, this may limit its use for a broader public.

As to its pointlessness and limitation to a small niche market…  I smile when I hear that. I recall the technology developed by John O’Sullivan and his team in the 1980s  to detect Hawking radiation from black holes. A bold but fruitless search, but the technology patented as 802.11 went on to have some use in other areas, or so I understand.
Arun H
Brian Boyle:
I completely acknowledge John’s point about it falling well short of LRGB mono imaging efficiency, but I was really thinking of comparing it to an existing RGGB OSC, which many choose for convenience. For example, if you were in the market for a ZWO 2400MC, would you rather buy one with an RGGB matrix or an LRGB one?  If it is a factor of 2 more efficient, then you are saving 40% of your time.


Replacing one "G" pixel with an "L" pixel would give you approximately 30% more photons captured in the same time compared to an RGGB OSC, and would be about 22% less efficient than true LRGB with time split 50% between L and RGB - so somewhat less than 40% time saved, but still meaningful. The bigger issue is being locked into a certain ratio of L:RGB capture versus the flexibility that mono gives you. Assuming someone made one, it might be preferable to an RGGB OSC, but I'd imagine for such a niche market, no one will bother!
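Figures of this kind can be reproduced with a back-of-envelope script. The band fractions below are hypothetical guesses, not values from any real QE curve, so the exact percentages shift with whatever transmissions you assume:

```python
# Assumed fraction of broadband flux passed by each filter (hypothetical):
# a clear (L) pixel passes the whole band, colour pixels pass overlapping slices.
t_L, t_R, t_G, t_B = 1.00, 0.35, 0.50, 0.35

rggb_quad = t_R + t_G + t_G + t_B   # classic Bayer 2x2 quad
lrgb_quad = t_L + t_R + t_G + t_B   # one G replaced by a clear (L) pixel

gain = lrgb_quad / rggb_quad - 1          # extra photons per quad per unit time
time_saved = 1 - rggb_quad / lrgb_quad    # exposure saved at equal photon count

print(f"photon gain per quad: {gain:+.0%}")          # ~+29% with these guesses
print(f"time saved at equal photons: {time_saved:.0%}")
```

With these particular guesses the gain lands near 30%, but the main takeaway is structural: swapping one G for L helps, yet nowhere near the 2x of the original question, because the colour pixels were never discarding two-thirds of the quad's light to begin with.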
Scott Badger
John Hayes:
With an OSC camera, the best you can do is to sum the channels to create a synthetic "Lum" channel...and that generally works pretty well--assuming that you want to create a LRGB image.

John, how well is 'pretty well'? I generally shoot about twice the number of L's than each of R, G, & B, but I've wondered if I'd be better off using a synthetic L from the RGB integration and 'investing' the time spent shooting Luminance in the other channels instead. The integration time for R, G, & B would increase by a little more than 50% (I don't bin).

Side question, if for a target the Ha integration is a better substitute for luminance, what can I do with the luminance data I shot?

Cheers,
Scott
Brian Boyle
Arun H:
Assuming someone made one, it might be preferable to an RGGB OSC, but I'd imagine for such a niche market, no one will bother!

This is an important point.

But Canon clearly saw a market for an Ra.  The technology leap from a Canon R to a Canon Ra is not really a leap, of course….

Also, if it were targeted just at astroimaging, one wouldn’t need the processing R&D (assuming raw FITS output), and PI/APP would develop the processing algorithms…

This just leaves the R&D for the move from an RGGB matrix to an LRGB one. Since I don’t know how an RGGB Bayer matrix is fabricated, I don’t know the R&D costs needed to change it.

Nor do I know the size of the astro camera market, to work out whether the unknown R&D costs could be justified by the unknown sales.

Does anyone know these numbers?
Shawn
I work in the industry of digital camera image processing. There are certainly many color filter array (CFA) patterns out there other than the traditional Bayer pattern. You can see a list of the CFAs on Wikipedia. Consumer cameras want a higher signal-to-noise ratio too, not just astrophotography. Human vision is much more sensitive to contrast in luminance than in color, so it makes sense to have more luminance data than color data. So one strategy is to replace some pixels with a white (luminance) filter. The reason most color cameras still use the Bayer pattern is that with a new type of CFA you need a lot of R&D to optimize demosaicing and color correction to match human vision. Obviously people have been thinking about non-Bayer CFAs for a long time, but since Bayer works well enough, the industry as a whole hasn't felt the need to fix it yet.
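To make the demosaicing point concrete, here is a deliberately naive sketch for a hypothetical LRGB pattern (the layout - L top-left, R top-right, G bottom-left, B bottom-right - and the combine rule are both invented for illustration; real demosaicing interpolates each missing sample from its neighbours rather than collapsing quads):

```python
import numpy as np

def demosaic_lrgb_naive(raw):
    """Collapse each 2x2 LRGB quad into one RGB pixel (toy illustration)."""
    L = raw[0::2, 0::2].astype(float)   # clear / luminance samples
    R = raw[0::2, 1::2].astype(float)
    G = raw[1::2, 0::2].astype(float)
    B = raw[1::2, 1::2].astype(float)
    chroma = np.stack([R, G, B], axis=-1)
    # Crude LRGB combine: colour ratios come from RGB, brightness from L
    total = chroma.sum(axis=-1, keepdims=True).clip(min=1e-9)
    return chroma / total * 3 * L[..., None]

# Tiny 4x4 mosaic: every quad reads L=90, R=30, G=45, B=15
raw = np.tile(np.array([[90, 30], [45, 15]]), (2, 2))
rgb = demosaic_lrgb_naive(raw)
print(rgb.shape)    # (2, 2, 3): one RGB pixel per 2x2 quad
print(rgb[0, 0])    # brightness set by L, hue by the R:G:B ratio
```

Even this toy version shows why a new CFA is not free: the reconstruction logic is specific to the pattern, and a production-quality version would need pattern-specific interpolation, color correction, and tuning.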
Tim McCollum
Mods from the R to the Ra were mainly just removing the built-in IR filter and adding a 30x zoom to help manual focusing.
They didn't even bother to increase the exposure timer past 30 seconds.
Arun H
Tim McCollum:
Mods from the R to the Ra were mainly just removing the built-in IR filter and adding a 30x zoom to help manual focusing.
They didn't even bother to increase the exposure timer past 30 seconds.


I think the EOS Ra is a throwback to the times when dedicated astro cams were in their infancy or were enormously expensive CCDs. So people predominantly used modified cameras for astro imaging, mostly Canons. There were many reasons for this, including Canon's general popularity and Nikon/Sony having a reputation for clipping low signal. Canon saw a market where they could very easily modify their regular cameras for astro work (e.g. the Ra, but also the 60Da). As Tim noted, the investment was minimal and a far cry from a completely new CFA. I'd also note that the newer mirrorless cams from Canon don't have astro versions that I know of. So it is a very small market comparatively, and we have to leverage the market for consumer and industrial sensors.
Minh Lết
Brian Boyle:
As Tim, Felipe and Minh said, R&D costs might be very high - not only in the matrix fabrication but in the processing. But a number of different matrices already exist, so some R&D exists.

The other Bayer matrices exist, but they are all the same RGGB formula, just in different arrangements, for the reason I stated before: color balance on terrestrial targets.
OSC used for astrophotography will always be just that: a hobby, which severely limits the R&D budget poured into it. All the serious, big-budget scientific imaging happens in mono.
The nonlinear nature of the Bayer matrix limits its scientific uses, so I doubt there will be any research into an LRGB Bayer matrix.
The most likely next advancement in CMOS tech is pixel size. A Chinese sensor maker (Gpixel) is making 9 µm sensors right now, but their products are still not up to Sony's specs just yet.