What is the "Mono bin" button in my ASI air app? I'm using a color cam...

20 replies · 1.2k views
Oscar avatar
When I read what it was about in the ASIAir app, all I could understand was that it takes the color pixels, groups them 2x2, and turns them into bigger but monochrome pixels…

What's this for? And why do so few people talk about it?
Brian Diaz avatar
hi

Not good, basically a low-resolution imitation of a monochrome camera. Mono binning is an option for color cameras: if selected, the camera ignores the Bayer matrix information and merges adjacent pixel values into one grayscale value. The result is close to the image from a monochrome sensor, but it is important to remember you only get a quarter of the resolution.
V avatar
Brian Diaz:
hi

Not good, basically a low-resolution imitation of a monochrome camera. Mono binning is an option for color cameras: if selected, the camera ignores the Bayer matrix information and merges adjacent pixel values into one grayscale value. The result is close to the image from a monochrome sensor, but it is important to remember you only get a quarter of the resolution.

The direct way to combat this is by drizzling 2x in that data's stack.
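For anyone curious what the camera is actually doing here, a rough numpy sketch of the idea (the exact merge the firmware uses, sum vs. average, isn't documented, so treat this as illustrative):

```python
import numpy as np

# Toy 4x4 Bayer-mosaic frame (RGGB pattern, purely illustrative numbers)
raw = np.array([
    [100, 200, 120, 210],
    [180,  90, 190,  80],
    [110, 205, 130, 215],
    [185,  95, 195,  85],
], dtype=np.float64)

h, w = raw.shape
# "Mono bin": ignore the Bayer colors and merge each 2x2 cell into one gray pixel
binned = raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(binned.shape)  # (2, 2) -- a quarter of the original resolution
print(binned)
```

Each output pixel is just the mean of a 2x2 neighbourhood, so color can never be recovered from the result.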
Oscar avatar
So my color camera can be a mono camera? That's the strangest-sounding thing I've ever heard in astrophotography. :)
Scott Roffers avatar
I did not know that this was enabled on my ASI2600MC Pro. I think a recent ASIAir update turned it on.

Anyway, I was not using binning. Are my captures still compatible with the color images that I took the next night with this setting off? I want to stack both nights together, but I am not sure if this "mono bin" from the first night affected those images.
Mark Fox avatar
So my color camera can be a mono camera? That's the strangest-sounding thing I've ever heard in astrophotography.

Yes, I can't think of anything more absurd than this setting.

The 'mono bin' option gives you a 2x2 bin gray image.  It loses the Bayer matrix info.
Scott Roffers:
Are my captures still compatible with the color images that I took the next night with this setting off? I want to stack both nights together, but I am not sure if this "mono bin" from the first night affected those images.

Scott, I believe that you have lost the color information from the prior night, and you only have the monochrome grayscale info.  However, not all is lost with that data.  You could use it as a luminance layer for your RGB data - although it wouldn't make sense if you are getting your RGB data as 1x1 bin.

And for ZWO - I find that help diagram completely misleading, if not totally confusing.
The_lazy_Astronomer avatar
Mark Fox:
So my color camera can be a mono camera? That's the strangest-sounding thing I've ever heard in astrophotography.

Yes, I can't think of anything more absurd than this setting.

Hey everyone,

I wanted to share a thought on a feature that I think is seriously underrated and could be a real game-changer for many of us. 

I'm talking about quadrupling the Full Well Capacity on sensors like the IMX571, which goes from 51k to 204k electrons.

On paper, that sounds nice, but in practice, it means being able to shoot 10 to 15-minute subs without clipping the bright stars. Imagine the insane dynamic range: you can finally capture the core of M31 AND its faintest arms in the same sub, without having to mess with HDR or make compromises. The read noise becomes basically negligible.
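To put numbers on that dynamic-range claim, here's a back-of-the-envelope calculation, assuming an illustrative read noise of 3 e- (not a measured IMX571 spec) and taking the quadrupled well at face value:

```python
import math

rn = 3.0     # e- read noise per pixel (illustrative assumption)
fw = 51_000  # e- native full well, as quoted above

# Native pixel dynamic range, in stops
dr_native = math.log2(fw / rn)

# 2x2 superpixel, if the four wells really do sum to 204k e-.
# Four independent reads add noise in quadrature: sqrt(4) * rn = 2 * rn.
dr_binned = math.log2((4 * fw) / (2 * rn))

print(f"native: {dr_native:.1f} stops, binned: {dr_binned:.1f} stops")
```

Under those assumptions the gain works out to log2(4/2) = exactly one stop per 2x2 bin, not two, because the read noise of the summed reads grows too.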

But where it gets really brilliant is with this hybrid workflow that lets you get the best of both worlds with a single camera:
  1. First, you capture your Color (RGB) layers in standard color mode at the sensor's native resolution. Simple and effective.
  2. Then, for the Luminance (L), you switch to monochrome mode. This mode activates the x4 Full Well, but it often works by binning (e.g., 2x2), which gives you an ultra-deep L frame but at a reduced resolution.



And here's the trick: When you stack your Luminance subs, you apply a Drizzle (usually 2x). This process reconstructs an L image that recovers the sensor's full native resolution!

In the end, you can combine this high-resolution "drizzled" Luminance with the color layer you captured earlier. You get a final image that merges the incredible dynamic range of the mono mode with the simple acquisition of color. It’s the perfect solution for galaxies, for example.

Honestly, just for the ability to shoot 15-minute subs without clipping and to finally push the limits of your mount and your sky, I think it's worth considering.

What are your thoughts on this? Have any of you already tried this approach? I'm curious to hear your feedback.
Tony Gondola avatar
I could be wrong, but I don't think the full well depth would change. Combining four oversaturated pixels won't give you one that suddenly has useful information. Plus the light still has to pass through the filter matrix with its associated losses. You could do the same thing in post.
The_lazy_Astronomer avatar
Tony Gondola:
I could be wrong, but I don't think the full well depth would change. Combining four oversaturated pixels won't give you one that suddenly has useful information. Plus the light still has to pass through the filter matrix with its associated losses. You could do the same thing in post.

Yes, bin2 implies the FW is multiplied by 4. It is always valid, and the SNR is improved by 2x. This is physics.
Tony Gondola avatar
Tony Gondola:
I could be wrong, but I don't think the full well depth would change. Combining four oversaturated pixels won't give you one that suddenly has useful information. Plus the light still has to pass through the filter matrix with its associated losses. You could do the same thing in post.

Yes, bin2 implies the FW is multiplied by 4. It is always valid, and the SNR is improved by 2x. This is physics.

You'll need to explain that, because it doesn't make sense to me at first glance. SNR, yes, but please explain how the well depth increases. If true, I'd like to understand why.
Tony Gondola avatar
To expand on the above, my understanding is that with hardware binning the signal from the original pixels is combined before it hits the ADC, so in that case, yes, you can get an increase in dynamic range/effective well depth. The problem is that this is how CCD sensors do it, but not CMOS sensors.

With CMOS chips, each original pixel goes through its own ADC on the chip. You are combining the signal after digitization, not before. In that case you are still limited by the ADC range: 4,096 levels for a 12-bit ADC and 65,536 for a 16-bit ADC.
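Tony's point in a toy example (hypothetical counts, 16-bit ceiling): once each pixel has been clipped at its own limit, summing afterwards cannot recover the lost signal.

```python
import numpy as np

adc_max = 65_535  # 16-bit ADC ceiling per pixel

# Four neighbouring pixels under a bright star; three exceed the ceiling
true_counts = np.array([80_000, 90_000, 70_000, 40_000])
read_counts = np.minimum(true_counts, adc_max)  # each clipped independently

print(true_counts.sum())  # 280000 -- what combining before clipping would preserve
print(read_counts.sum())  # 236605 -- the clipped information is gone for good
```

The same argument applies whether the per-pixel ceiling is the ADC range or the physical full well.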
The_lazy_Astronomer avatar
Because your superpixel of 4 pixels will benefit from the FW of each pixel and sum them. The only drawback comes from the Bayer matrix, which does not give an equal spectral response for every pixel, so you will not be as good as a true mono camera. But it's worth a try, to be honest (though only for broadband imaging, not narrowband).
Regarding the difference between CMOS and CCD for binning, it comes down essentially to SNR: with a bin2, on a CCD the SNR is increased by 4x, and on CMOS only 2x.
Rick Krejci avatar
There really is no difference between the "HW" mono bin feature and doing the same thing in post-processing for these CMOS cameras. The ONLY advantage is smaller file sizes, but that comes at the very hefty price of forever losing your color data.
The_lazy_Astronomer avatar
But what about the huge dynamic gain on broadband targets? It could be worth a try.
Oscar avatar
But what about the huge dynamic gain on broadband targets? It could be worth a try.

I think Rick is saying the FW doesn't change, so the dynamic range should also not change, no?
The_lazy_Astronomer avatar
Oscar:
But what about the huge dynamic gain on broadband targets? It could be worth a try.

I think Rick is saying the FW doesn't change, so the dynamic range should be the same, no?

If the FW does not change, that's correct. But does the FW really not change? I'm not sure. It won't be as effective as a mono cam, for sure, but does it stay the same as bin1 on a broadband target? No.
The_lazy_Astronomer avatar
I just ran a quick test.
I used previous data shot with a 2600MC Duo on the Cocoon Nebula. I shot 7 frames of about 10 minutes each in mono bin mode to create a luminance (7 frames is not a lot, so there is plenty of noise; please don't mind that too much). Here are the results: the left is with the mono bin luminance and the right is the same without it.

There is a clear difference in dynamic range.

Rick Krejci avatar
But what about the huge dynamic gain on broadband targets? It could be worth a try.

Adding the value of 4 pixels in the camera vs. adding them after download is the same.
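That equivalence is easy to check: for already-digitized values, a 2x2 sum done "in camera" and the same sum done later in software are the same arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 65_536, size=(4, 4)).astype(np.int64)  # digitized ADUs

# "In camera" digital 2x2 sum vs. the same sum done later in software
in_camera = frame.reshape(2, 2, 2, 2).sum(axis=(1, 3))
in_post = (frame[0::2, 0::2] + frame[0::2, 1::2]
           + frame[1::2, 0::2] + frame[1::2, 1::2])

print(np.array_equal(in_camera, in_post))  # True: identical by construction
```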
The_lazy_Astronomer avatar
Rick Krejci:
But what about the huge dynamic gain on broadband targets? It could be worth a try.

Adding the value of 4 pixels in the camera vs. adding them after download is the same.



I fear that's not correct; it's all about when the electronic read noise is added.
  • CMOS "mono binning" (on-camera) is smarter because it combines the signals from four pixels before the final measurement. The camera only has to perform one measurement, so it only adds read noise once.
  • Software binning (on your PC) is less efficient. The camera measures each of the four pixels individually, adding read noise to each one. Then your software adds up these four already-noisy measurements, which also adds up all the noise.



Even though the binning on a CMOS sensor isn't "true" hardware binning like on a CCD, it still performs the critical step of combining signals in the analog domain before the primary source of read noise is introduced. This is a physical advantage built into the camera that you cannot replicate with software after the fact.
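The disagreement boils down to one number: how many times read noise gets added per superpixel. A sketch with an illustrative 3 e- read noise; which branch actually applies to a given CMOS camera is exactly what's in dispute here:

```python
import math

rn = 3.0  # e- read noise per read (illustrative assumption)

# If charge were combined before a single read (CCD-style analog binning),
# read noise is added once:
noise_one_read = rn  # 3.0 e- per superpixel

# If four pixels are read out separately and summed afterwards (digital
# binning), the independent read noises add in quadrature:
noise_four_reads = math.sqrt(4) * rn  # 6.0 e- per superpixel

print(noise_one_read, noise_four_reads)
```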
Rick Krejci avatar
Rick Krejci:
But what about the huge dynamic gain on broadband targets? It could be worth a try.

Adding the value of 4 pixels in the camera vs. adding them after download is the same.

I fear that's not correct; it's all about when the electronic read noise is added.
  • CMOS "mono binning" (on-camera) is smarter because it combines the signals from four pixels before the final measurement. The camera only has to perform one measurement, so it only adds read noise once.
  • Software binning (on your PC) is less efficient. The camera measures each of the four pixels individually, adding read noise to each one. Then your software adds up these four already-noisy measurements, which also adds up all the noise.

Even though the binning on a CMOS sensor isn't "true" hardware binning like on a CCD, it still performs the critical step of combining signals in the analog domain before the primary source of read noise is introduced. This is a physical advantage built into the camera that you cannot replicate with software after the fact.

As Tony said, with CMOS the signals are combined after the ADC/digitization, which is where the read noise is introduced. CMOS binning is done by firmware/software in the digital realm, whether in the camera's firmware or your image-processing software. So you're losing color information for no real gain besides faster downloads and less storage space.