[RCC] Galaxy detail without the fuzzy noise

9 replies · 382 views
Jneums87
Really appreciate anyone's thoughts. 

Processed some data on M81 from 7 May 21.  First light for my ASI1600MM and first attempt at mono, period. 

Equip:
Camera: ASI1600MM
Scope: Radian Raptor 61
Mount: EQ6-R Pro
Guide scope: Apertura 60mm
Guide camera: ASI120MM
SW: Astroberry, KStars/Ekos, Photoshop, DSS

Acquisition: 
Bortle 8
35x90s  Lum
35x90s R
35x90s G
35x90s B
30x90s Darks
Does this count as almost 5 hrs of total integration with mono, or really just an hour, since each filter only collected about an hour?

Workflow: 
Individual stacking of LRGB lights
Individual processing of each filter, using star masks right out of the gate to avoid too much star bloating (I was seeing red blowing out on one side and green on the other, even though the stars were centered)
Probably over stretched
Forgot to mask the galaxy itself to target the work directly on it

The more I stretch, the nastier the noise gets and the less detail I see.

Thanks all.
Rodney Watters
Hi,

Interesting question regarding the total integration time. The standard way of expressing this is to total up the integration time for each filter. In your case this is (35*90s) * 4 = 210 minutes or 3.5 hours. The four filters in this case are LRGB (you don't count the darks as part of the integration time).
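The arithmetic above can be sketched in a few lines; this is just an illustration of the convention, with the frame counts taken from the original post.

```python
# Integration time is the sum of the light-frame exposures only;
# darks are calibration frames and don't count towards it.
subs_per_filter = 35
exposure_s = 90
filters = ["L", "R", "G", "B"]

total_s = subs_per_filter * exposure_s * len(filters)
print(total_s / 60)    # 210.0 minutes
print(total_s / 3600)  # 3.5 hours
```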

Before we can evaluate noise, there is something odd about the image above that needs to be understood first. I downloaded a copy of the above image and it presents as an RGB image but with each channel having the exact same value which effectively makes it a grey scale image. Is this meant to be an RGB i.e. a colour image?

Clear skies,
Rodney
Björn Arnold
Hi,

You integrate all exposure times and this gives the total integration time. However, there may certainly be differences, e.g. some people combine the RGB for the color and place the L as a luminance layer. What is also possible is to combine RGB for color and make a super-luminance layer by stacking LRGB into a single L layer. How should the latter be counted? In any case, the time you pointed the scope/camera at the target is always the sum of all single exposures in the final image.

I would say the exposure times of your RGB are too short. Assuming the exposure time for L is "adequate", you'll likely need to triple the exposure time for each color channel (the bandwidth of each color filter is roughly a third of the L bandwidth). Of course, this is a rough rule, as it assumes a broadband emitter (which is the case for galaxies and stars) and equal quantum efficiency (QE) at each wavelength. Usually, the QE is lowest in red and decays even further towards near infrared. Therefore, I usually try to find the right exposure time for the R channel and keep that time for G and B as well.
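The rule of thumb above can be put into numbers; a minimal sketch, assuming each colour filter passes about a third of the L bandwidth (the figures are illustrative, not a recipe):

```python
# "Triple the colour exposure" rule of thumb for LRGB imaging.
l_exposure_s = 90           # a known-good luminance sub length
bandwidth_fraction = 1 / 3  # R, G or B bandwidth relative to L

rgb_exposure_s = l_exposure_s / bandwidth_fraction
print(rgb_exposure_s)  # ~270 s per colour sub
```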

Don't process the color channels individually before combining; if you do, your colors will be completely off. Unless color accuracy isn't important to you, I strongly recommend against it. It is possible to combine a stretched L with combined and stretched RGB. I'm using AstroPixelProcessor as dedicated software, and there I combine all layers in a linear fashion. After this, I do a color calibration on the stars of the linear combined image. Once all this is done, the image is stretched.

I also recommend taking light frames. In my experience, 10 of them are sufficient, and it's usually a quick procedure as their exposure time is rather short. You can also take 20. This will help you with vignetting, dust spots on the filters/sensor, etc.

Cheers!

Björn
Olaf Fritsche
I also recommend taking light frames. In my experience, 10 of them are sufficient, and it's usually a quick procedure as their exposure time is rather short. You can also take 20. This will help you with vignetting, dust spots on the filters/sensor, etc.

I think you are referring to "flat frames", right?
Björn Arnold
Olaf Fritsche:
I think you are referring to "flat frames", right?


Yes, you're right. I meant flat and wrote light. Strange that I didn't notice that even after a second read. Good that you noticed my mistake.
Jneums87
Rodney Watters:
Before we can evaluate noise, there is something odd about the image above that needs to be understood first. I downloaded a copy of the above image and it presents as an RGB image but with each channel having the exact same value which effectively makes it a grey scale image. Is this meant to be an RGB i.e. a colour image?


Yes, I was going for an RGB image, but you're saying it's not? I'm not sure what you mean by them being the exact same value. I did try color balancing, so is that what you are seeing? I moved the histogram sliders for each channel to a shadow value of 30. Thanks for your insight in the rest of your post. REALLY helps.
Jneums87
Björn Arnold:
I would say the exposure times of your RGB are too short. Assuming the exposure time for L is "adequate", you'll likely need to triple the exposure time for each color channel (the bandwidth of each color filter is roughly a third of the L bandwidth). Of course, this is a rough rule, as it assumes a broadband emitter (which is the case for galaxies and stars) and equal quantum efficiency (QE) at each wavelength. Usually, the QE is lowest in red and decays even further towards near infrared. Therefore, I usually try to find the right exposure time for the R channel and keep that time for G and B as well.


Thank you for this. I'll keep that in mind and try that next time.
FiZzZ
Jneums87:
Acquisition:
Bortle 8
35x90s  Lum
35x90s R
35x90s G
35x90s B
30x90s Darks
Does this count as almost 5 hrs of total integration with mono, or really just an hour, since each filter only collected about an hour?


Hi, do you remember the gain and offset (and also the binning and sensor temperature) of the shots?
Rodney Watters
Jneums87:
Yes, I was going for an RGB image, but you're saying it's not? I'm not sure what you mean by them being the exact same value. I did try color balancing, so is that what you are seeing? I moved the histogram sliders for each channel to a shadow value of 30. Thanks for your insight in the rest of your post. REALLY helps.



Technically, the image posted is an RGB image in that it has three channels per pixel. But it is not a colour image, because every pixel in the image has the R, G and B channels all set to exactly the same value. Whilst the pixels vary in intensity across the image, they are all just variations of grey scale.

For example, I downloaded the JPEG image above and opened it in Pixinsight. I then zoomed right in to the pixel level near the centre of the image. The graphic below is a screen shot of four pixels near the galaxy in the centre of the image. Your image is 4376 x 3120 pixels and I was zoomed in at 100:1. I used the readout tool in PI to measure the value of each of the channels in these four pixels. Starting in the top left corner and going clockwise, I get the following results:
Red  Green  Blue
180  180    180
192  192    192
190  190    190
178  178    178



Each pixel in an RGB image has three values: one for the Red channel, one for the Green channel and one for the Blue channel. The different values determine the overall colour of that pixel. When they are all the same value, as you have here, there will only be shades of grey in the image, no colour.
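The per-pixel check described above can be automated; a minimal sketch using NumPy (the sample arrays stand in for the actual image data, which you would load with a library such as Pillow):

```python
import numpy as np

def is_effectively_grayscale(img):
    """True if every pixel has identical R, G and B values."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return bool(np.array_equal(r, g) and np.array_equal(g, b))

# Grey pixels stored in three channels, like the posted JPEG:
grey = np.stack([np.full((2, 2), 180)] * 3, axis=-1)
# A genuinely coloured array: bump the red channel of one pixel.
colour = grey.copy()
colour[0, 0, 0] = 200

print(is_effectively_grayscale(grey))    # True
print(is_effectively_grayscale(colour))  # False
```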

There is something fundamental going wrong in the workflow that needs to be addressed first; after that is sorted, we can move on to looking at noise and general image quality. I see that you are using Deep Sky Stacker and Photoshop. I also note previous comments in this thread about processing each colour channel separately, which is not normal practice. Maybe if you post some more detail of the processing steps you have undertaken in DSS, we can get to the root of this problem.
HR_Maurer
Björn Arnold:
You integrate all exposure times and this gives the total integration time. However, there may certainly be differences, e.g. some people combine the RGB for the color and place the L as a luminance layer. What is also possible is to combine RGB for color and make a super-luminance layer by stacking LRGB into a single L layer. How should the latter be counted?

Hi,
In every case, the integration time is the sum of all included light frames, no matter whether some data is utilized twice, e.g. in an artificial luminance layer.

The strange channel issue could result from an error during preprocessing. Maybe one of the grayscale images was saved as RGB by accident, so the channel combination didn't work. Or you just picked the wrong image from your hard drive at some point.
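One quick way to test the "wrong file" theory is to check whether any two channel stacks contain byte-identical data; a sketch, with the sample byte strings standing in for your actual stacked files from DSS:

```python
import hashlib

def duplicate_channels(stacks):
    """Return (first, second) channel-name pairs whose data is byte-identical."""
    seen, dupes = {}, []
    for name, data in stacks.items():
        h = hashlib.sha256(data).hexdigest()
        if h in seen:
            dupes.append((seen[h], name))
        seen.setdefault(h, name)
    return dupes

# Illustrative data standing in for the stacked channel files:
stacks = {"R": b"\x01\x02", "G": b"\x01\x02", "B": b"\x03\x04"}
print(duplicate_channels(stacks))  # [('R', 'G')] - R and G came from the same data
```

If any pair shows up here, two channels were fed the same stack, which would explain a grey result after combination.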

Cheers,
Horst