What *is* luminance? How can I get it from an unmodified DSLR ... if at all?

firstLight avatar
Please excuse my ignorance but .....

I often read about LRGB where numerous "luminance" (L) frames were taken in addition to R + G + B data for enhanced (L)RGB images. For the time being and probably for a long time to come I (will) stick to my unmodified Canon DSLRs: EOS 5D IV + EOS 7D. Since starting basic and always mobile astrophotography with these cameras and EF lenses in January 2020, I always wonder whether my OSC image data could be improved, maybe in post processing by separating RGB channels or adding a (mix?) of monochrome brightness layer(s) from R / G / B channel(s).

Thus my questions:

Q: What is "luminance" and where do I find it?
My guess:  "luminance" is the brightness information from any / all RGB channel(s)

Q: How is "luminance" digged out of OSC / RGB data from a (unmodified) DSLR?
My guess: Take one/all of the RGB channel(s), convert to B/W, mix at will

Thanks in advance for enlightening me and/or giving some hints and info!

Frank aka firstLight
andrea tasselli avatar
You got the first one right. The second is arrived at by adding all the RGB channels once the raw image has been de-Bayered.
Lynn K avatar
A luminance image is taken through a clear luminance filter with a mono camera. It is usually IR blocked. The luminance image takes in/records all the wavelengths of light that the CCD/CMOS chip is capable of recording. There is no filtering of the different wavelengths.

Whereas your DSLR CMOS chip is filtered with the Bayer array, and the filter in front of your chip is blocking a lot of the light in the red spectrum.

You can create a synthetic luminance, as you mentioned, by combining the RGB, but it will still be missing the red spectrum blocked by the main filter window.

Traditionally in mono LRGB imaging, the luminance filter is thought to hold/have all the detail. The RGB filtered channels are thought to only give color. With your Bayer RGB array you rely on the RGB to give both detail and color.

Lynn K.
Konstantin Firsov avatar
I think you really can improve your OSC images if you generate a "synthetic luminance channel". As for me, I often create separate R, G, B channel images when preprocessing my OSC raw data. I prefer Iris; you might like other software. Iris has a Split CFA command for this purpose, which you can run after debayering. Fitswork gives more options for debayering, btw. This allows me to:
1) perform per-channel registration, eliminating chromatic aberration (if any). It means that I can align all of the colour channels to a single (say, green) reference image.
2) stack all those R-G-B channels together, thus making a synthetic L.
Synthetic L has higher SNR, so take all the benefits you can!
Yes, I always shoot OSC cameras :)

P.S. Certainly all this only makes sense if you shoot with a clear filter or no filter at all.
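A minimal sketch of that final synthetic-L step, assuming the three channel masters have already been registered and stacked and saved as FITS files; the file names and the use of numpy/astropy here are illustrative, not part of any particular tool's workflow:

```python
# Build a synthetic luminance from already registered and stacked
# channel masters; file names are hypothetical examples.
import numpy as np
from astropy.io import fits  # assumption: astropy is installed

r = fits.getdata("master_R.fit").astype(np.float64)
g = fits.getdata("master_G.fit").astype(np.float64)
b = fits.getdata("master_B.fit").astype(np.float64)

# Summing the three channels pools their photons, which is what
# gives the synthetic luminance its higher SNR.
synth_L = r + g + b

fits.writeto("synthetic_L.fit", synth_L, overwrite=True)
```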
Arun H avatar
Q: How is "luminance" digged out of OSC / RGB data from a (unmodified) DSLR?
My guess: Take one/all of the RGB channel(s), convert to B/W, mix at will

Software like PixInsight (probably PS too, though I don't use it) allows you to easily extract  luminance from an RGB image. In the context of a DSLR or OSC image, it is basically a weighted average of the R,G, and B channels with no rejection. When you do this, you are not changing the signal to noise ratio of your data.
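As a rough illustration of that weighted average (not PixInsight's actual implementation; as far as I know its weights depend on the current RGB working space, and the Rec. 709 coefficients below are just one common choice):

```python
# Extract a luminance channel as a weighted average of R, G and B.
import numpy as np

def extract_luminance(rgb: np.ndarray) -> np.ndarray:
    """rgb: float array of shape (H, W, 3)."""
    weights = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 weights, one common choice
    return rgb @ weights  # per-pixel weighted average; the SNR is unchanged

rgb = np.random.rand(4, 4, 3)   # placeholder image
L = extract_luminance(rgb)
```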

True luminance, for those of us who use mono cameras, is acquired by using a luminance filter which admits wavelengths of the entire visible spectrum. This dramatically increases signal to noise ratio at the expense of color information, which is acquired separately using R,G, or B filters. This is a good tradeoff because the human eye/brain is more sensitive to detail than color, and there is nothing like imaging with a luminance filter to gather detail.

In processing, the advantage of using luminance, whether synthetic or actual, is that it is much easier to see the impact of the changes you are making - noise reduction, deconvolution, stretching etc. - than if you apply these to a full color image.
Ferenc Szabo avatar
The best improvement you can make to your images with a DSLR is to keep adding more subframes.
IMO there is no real improvement to be had by playing with RGB channels, adding/subtracting and creating layers and then adding them again.

Keep saving whatever you have imaged, don't delete the subs, don't delete your flats and darks. Some people go back and re-image the same targets and simply re-stack the old subs together with the new lights.
Giovanni Paglioli avatar
Hi! Your question seems to be a trivial one but it is not! There is a good amount of misconception about the term "luminance". In physics, photometry is the science that measures real luminance, which is the photon flux (in the considered part of the spectrum) arriving each second within a given solid angle. As you said, it is a real measurement of luminous intensity.

The "luminance" we normally use in astroimaging is something completely different. If you think about it, it is just a workaround for reducing total integration time when you shoot wide-spectrum RGB, or simply the "colors" we can see. Regardless of whether you acquire data with a monochrome camera or with a one-shot color one (which is nothing other than a mono sensor with a Bayer RGB matrix on top), you need to acquire at least three different measurements, dividing the entire color spectrum into three parts: lower energy (reds), medium energy (greens) and high energy (blues).

The "luminance" filter is just a filter that lets the sum of the three RGB bands pass in a single shot. The transmittance curve of an "L" filter must match exactly the sum of the three R, G and B curves, because if something different or wider than that is captured in "L", the data you get has no correspondence in the "colors"; it is then not possible to colorize it and the resulting image is "washed out". Taking 1 hr of data in each RGB channel takes 3 hrs, but just 1 hr in "L", precisely because it is the sum of the RGBs!

The eye is not particularly sensitive to color changes compared with luminosity changes and "borders" between luminosity levels. Because of that it is possible to acquire less RGB (which means much lower SNR), used just as the color information, together with the "L", which has much higher SNR (about 3 times more signal per frame, remember it is the sum of RGB and covers the entire spectrum) and is used to define the luminosity of the pixels, to which we are much more critical and sensitive.

It would be more correct to call the "L" channel luminosity (which is a very different thing in the RGB color space) rather than "luminance", which is a real physical measurement. Luminosity in RGB is the "perceived" lightness of a color regardless of its real photometry and is related to human vision. Vision is in fact about 90% a brain process, not a "real", physically correct, measurable one.
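A quick numerical illustration of that SNR point, under the simplifying assumption of pure photon (shot) noise and an arbitrary flux of 100 photons per color channel per frame; the numbers and the use of numpy are illustrative only:

```python
# Compare the per-frame SNR of a single color-filtered frame with an
# L frame that collects the summed signal of all three bands.
import numpy as np

rng = np.random.default_rng(0)
n_frames = 100_000
flux = 100                                   # photons per color frame (assumed)

red = rng.poisson(flux, n_frames)            # one color band
lum = rng.poisson(3 * flux, n_frames)        # L gathers roughly 3x the photons

snr_red = red.mean() / red.std()
snr_lum = lum.mean() / lum.std()
print(f"SNR(R) ~ {snr_red:.1f}, SNR(L) ~ {snr_lum:.1f}, "
      f"ratio ~ {snr_lum / snr_red:.2f} (about sqrt(3))")
```

So the L frame collects roughly three times the signal of a single color frame, and its photon-limited SNR per frame improves by about the square root of that factor.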

Hope I've clarified something rather than confused the ideas even more! :)
Giovanni Paglioli avatar
About your second question: it is not possible to acquire a "real" L channel from a single-shot color imaging sensor, since you can't remove the Bayer matrix anyway. The spatial information about the colors is not coherent, and you have to dither the frames to get a better representation of the mean luminosity per channel. As someone said, it is possible to sum all the channels to build a "synthetic" L channel that can be easier to work with, but it is not the "real" L channel that a mono sensor could produce.
firstLight avatar
@andrea tasselli
@Lynn K
@Konstantin Firsov
@Arun H.
@frankszabo75 
@Giovanni Paglioli 


Thank you all for your contributions – so well written and explained – good brain food!

I learned: Only an astro-modified DSLR (sensor filter removed) or a dedicated full spectrum (mono) astro camera can deliver "luminance" data from wavelengths my unmodified DSLRs are unable to see.


I understand: Nothing beats long exposures / integration time! My first DS images, impatiently taken within 40-60 minutes, were a good start. Already last autumn/winter I started to increase my total exposure time to 2 hrs or more, and of course I can dig out more detail since then.

Knowing that a doubled exposure time only yields 1.4 times better S/N ratio while a 2 times better S/N ratio requires a 4x exposure (total integration) time, my intention is to work with longer exposures on one object at a time.
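A tiny sanity check of that square-root rule, assuming the noise is dominated by random (shot) noise so that SNR grows with the square root of the total integration time:

```python
# SNR scales with sqrt(total integration time) for random noise.
import math

for factor in (2, 4, 9):
    print(f"{factor}x the integration time -> {math.sqrt(factor):.2f}x the SNR")
# 2x -> 1.41x, 4x -> 2.00x, 9x -> 3.00x
```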

As I said, I always work mobile, so some things are hard(er) to achieve, e.g. the exact same framing and tracking of an object, without computer support for controlled framing and guiding, after a fully manual setup of my tripod, manual polar alignment and star hopping to the intended target ... with my lovely little "Star Adventurer".

Though I'm occasionally tempted, I don't really want to buy image data from remote telescopes. I prefer learning, improving and enjoying what I have and what I can get out of my equipment at hand.

Nevertheless, I still hesitate getting an (uncooled) astro modified DSLR. I would prefer a cooled astro camera, probably a color model. But then I would also need a lot more things like external storage, probably USB and/or Wifi connection, cables, power supply, autoguider, Laptop with PHD2 or other, .... which would be the end of my mobility.

So, I decided to stay curious and happy with what I have: maximum freedom and mobility by avoiding bigger technical dependencies and time-consuming gear.

Of course, I love the great and inspiring works I see here on Astrobin – a source of inspiration!

Thank you all!
Konstantin Firsov avatar
I learned: Only an astro-modified DSLR (sensor filter removed) or a dedicated full spectrum (mono) astro camera can deliver "luminance" data from wavelengths my unmodified DSLRs are unable to see.


Please note that an astro-modified DSLR has the IR-block filter (glass) removed. It still has the colour Bayer filters in place! Only a mono camera is capable of truly capturing L.
firstLight avatar
Konstantin Firsov:
I learned: Only an astro-modified DSLR (sensor filter removed) or a dedicated full spectrum (mono) astro camera can deliver "luminance" data from wavelengths my unmodified DSLRs are unable to see.


Please note that an astro-modified DSLR has the IR-block filter (glass) removed. It still has the colour Bayer filters in place! Only a mono camera is capable of truly capturing L.

Ah ... yes, of course ... this RGGB matrix thing that needs to be debayered! Thanks for the reminder!
Scott Badger avatar
Like you I have both the 5D IV and 7D and, as others have mentioned, I generally create and add a synthetic luminance as part of my workflow. As noted, a synthetic luminance is not a real luminance, but better than none, I think. During my first scope backorder vigil, I used one of the online scope rental services and shot R, G, B plus luminance separately and really enjoyed doing it that way, so my first equipment upgrade is going to be an astro camera!.....or maybe a new mount….. Anyhow, I use PI and create a synthetic luminance from the RGB integration using ChannelExtraction (after equalizing all channels with RGBWorkingSpace), but only after background extraction and a first run at noise reduction while still linear. Before combining the luminance with the RGB integration, I use deconvolution on it, and then to stretch the luminance I've been using MaskedStretch and ExponentialTransformation to give it a bit more oomph. I then use LRGBCombination to combine it with the RGB integration, and find that nudging the saturation a bit is a good start to bringing out some color.

Here's an article on deconvolution that helped me. Like the author mentions, my previous attempts always seemed to do more harm than good. https://astrodoc.ca/wp-content/uploads/2017/06/Sky-and-Telescope-July-2017-Deconvolution-article.pdf

And here’s a tutorial for using MaskedStretch and ExponentialTransformation. The author goes on to use HDRMultiScaleTransform to bring out more detail, but in the couple attempts I’ve made, I couldn’t get a result I liked so skipped it. https://chaoticnebula.com/pixinsight-hdr-multiscale-transform/

Cheers,
Scott
Gary JONES avatar
Hi Frank,
I think your question has been answered already, but it is an interesting one, so I thought I'd respond as well.

First, I think it's helpful to clarify some definitions ... without getting too scientific.

Luminance is an objective measure of the luminous intensity of light per unit area, based on a given field of view.
Luminous intensity is a measure of the wavelength-weighted power emitted by a light source, based on a standard model of the sensitivity of the human eye. Most people would equate this with brightness, which is an individual's subjective impression of luminance.

Don't confuse these with luminosity, which is an objective measure of electromagnetic energy emitted by an object per unit time.
It's commonly used to measure the energy radiated by a galaxy or star.

Our eye has two types of photosensitive receptors (well, actually there are 3, but only 2 relate directly to how we perceive images and colour) :-

Rods : these are basically intensity receptors.
They work well in low light and are found mostly around the edge of the retina, and therefore contribute to peripheral vision.
This is why pilots are trained to look for targets out of the corner of their eye during night-time sorties.
This corresponds to the L channel.

Cones : these respond to specific wavelengths of light.
Most people have 3 sensitivity bands, although many animals and some people have 4.
(the bluebottle butterfly has 15, some of which cover the UV spectrum, which is really handy for landing on the right flowers when they search for food).

People who are colour-blind have fewer than 3, or might have diminished sensitivity in one or more sets of cones.
These correspond to the R, G, B channels.

So - when you go out at night, you might be able to find your way around, but colours look less vibrant, because your rods are doing most of the work.
In bright light, everything looks more colourful, because your cones are doing most of the work.

In digital photography, the goal is to represent an image on a screen or printed on paper so it looks like the original object.
(although in astrophotography, 'false' colour images are also used to correspond to wavelengths outside the range of human vision, eg X-Rays).

Digital colours are usually represented as triplets of numbers, generally in the range of 0-255, although some colour systems use a larger range.
So in the RGB colour model, pure red = [255, 0, 0], pure green = [0,255,0] and pure blue = [0,0,255].

Every colour is made up of different combinations of these 3 values.
Grey is made up of equal values of R, G and B - so a 'neutral' grey might be [128, 128, 128].
On the other hand, something like 'strawberry' would be [255, 47, 146].

Another way of looking at this is to say that every colour is a shade of grey, with some R, G and/or B added on top.
So 'strawberry' would be Grey = [47, 47, 47] + colour = [208, 0, 99], so LRGB = [47, 208, 0, 99].
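A tiny sketch of that grey-plus-colour split, purely as an illustration of the arithmetic above (this informal "LRGB" notation is Gary's own, not a standard colour model):

```python
# Split a colour into its shared grey part and what remains on top.
def split_grey_and_colour(rgb):
    grey = min(rgb)                    # the common "brightness" component
    colour = [c - grey for c in rgb]   # the chrominance left on top of the grey
    return grey, colour

strawberry = [255, 47, 146]
grey, colour = split_grey_and_colour(strawberry)
print(grey, colour)   # 47 [208, 0, 99]  ->  "LRGB" = [47, 208, 0, 99]
```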

You can represent an image using L only, as you might see on a back & white TV.
But if you take the luminance away, you are left with 3 colour components (chrominance) and an image with a completely different 'ghostly' appearance.

Colour TVs used to work this way - they had luminance and chrominance signals - so if you 'lost' the chrominance signal, you still had a B&W image - which was better than nothing - and if you watched re-runs of old B&W sitcoms, it didn't matter anyway.

Back to astrophotography ...

Essentially, it's much easier and less expensive to get a quality, low noise image using a monochromatic camera.
So the idea is to take one picture as L, then swap filters in front of the camera to separately capture the R, G, B components of colour, then combine them all together.

Another way to think of it is like this ...
Say you have a monochrome camera with 10 million pixels.
If you capture a monochrome image, and then 3 more images using R, G, B colour filters, you get 10 million separate values for each of L, R, G and B,
which are combined to give a 10 mpx colour image.

If you have a 10 mpx one-shot colour (OSC) camera, each pixel has a small filter in front of it - this is known as a Bayer matrix.
Most OSC cameras have a Bayer matrix with 1 red, 1 blue and 2 green filters - known as RGGB.
(there are twice as many green elements as red or blue to mimic the physiology of the human eye).

So - you still get 10 million pixels of colour, but each pixel has to be a lot smaller, because you need 4 of them in an RGGB matrix to equate to 1 pixel in monochrome. As a result (assuming the image sensors are the same size), each pixel gathers less light, and is therefore less sensitive, and has a poorer signal-to-noise ratio.
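As an illustration of that Bayer layout, here is a minimal sketch (assuming numpy and a plain RGGB pattern) of how the four colour planes can be pulled out of a raw mosaic; real raw converters interpolate ("debayer") rather than simply slicing:

```python
# Split an RGGB mosaic into its four colour planes by array slicing.
import numpy as np

def split_rggb(mosaic: np.ndarray):
    """mosaic: 2-D array of raw sensor values laid out as RGGB."""
    r  = mosaic[0::2, 0::2]   # red sites
    g1 = mosaic[0::2, 1::2]   # first green site
    g2 = mosaic[1::2, 0::2]   # second green site
    b  = mosaic[1::2, 1::2]   # blue sites
    return r, g1, g2, b

def superpixel_rgb(mosaic: np.ndarray) -> np.ndarray:
    """One colour pixel per 2x2 cell: a quarter-resolution RGB image."""
    r, g1, g2, b = split_rggb(mosaic)
    return np.dstack([r, (g1 + g2) / 2.0, b])
```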

In addition to the RGB colour model, there are a number of other colour models, including the CMYK model (mostly for printing), and CIE-LAB (or L*a*b).
L*a*b expresses colour as :-
L = perceived brightness
a = green–red opponent colors, with negative values being green and positive values being red
b = blue–yellow opponent colors, with negative numbers being blue and positive being yellow.

These models are mostly interchangeable - in other words, you can convert between them to manipulate images in different ways.
 (I say 'mostly' because different models can represent different ranges of colour, and each model has different colour 'spaces' to represent a range of colours, known as the colour gamut).

For example, you could convert your RGB image into the L*a*b colour space to manipulate just the luminance,
as shown here in the curves adjustment panel in Affinity Photo - then back to RGB.
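A short sketch of that round trip in code, assuming scikit-image is available (Affinity Photo, as mentioned, does the same thing interactively; the 10% luminance boost is just an arbitrary example):

```python
# RGB -> L*a*b -> adjust only L -> back to RGB.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

rgb = np.random.rand(64, 64, 3)                    # placeholder image, floats in [0, 1]

lab = rgb2lab(rgb)                                 # L in [0, 100], a and b centred on 0
lab[..., 0] = np.clip(lab[..., 0] * 1.1, 0, 100)   # brighten the luminance only
rgb_out = lab2rgb(lab)                             # the colours (a, b) are left untouched
```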

So - the answer is 'yes' to your first question, and 'yes, but' to your second question :-
1. Yes - luminance equates to the perceived 'brightness' of the colour image
2. Yes - you can 'dig out' the luminance information and adjust L, R, G, B separately, but it's not straightforward unless your photo editing software supports it.

The best advice I can give you is to experiment !

I hope that helps


Gary
firstLight avatar
@all + @Scott Badger + @Gary JONES 

I’m really impressed how generous with your time you commenters and contributors are – so much food for thought, useful comments and suggestions. Thank you all!

Should I hope for many cloudy nights now?

@Scott – deconvolution (as opposed to sharpening) is so well explained in the S&T article you recommended!

@Gary – great article, clearly described, easy to follow and understand, precise! If you are not a renowned guru / teacher / professor already ... should become one by now!

Having been a purely visual astronomer (mostly a "dobsonaut") for over 3 decades, I learned so much about visual eyesight (rods, cones) by studying Roger N. Clark's fantastic standard work Visual Astronomy of the Deep Sky. This book really opened my eyes, in the best sense of the words. Happy to have it on my bookshelf.

Your explanation of LRGB was an eye opener for me, too: "So 'strawberry' would be Grey = [47, 47, 47] + colour = [208, 0, 99], so LRGB = [47, 208, 0, 99]" – I never saw it this way before, though it should have been obvious.

~ * ~

My biggest challenge in this respect is to meaningfully apply this knowledge to my imaging data with the (deep sky capable) software I have at hand ...

Pre-Processing:
  • Siril (Iris for Linux): only the provided standard OSC scripts, no manual fiddling
  • ASTAP: very good photometric stacking results, sadly very slow (1 CPU only)

Post-Processing:
  • StarTools: a different approach in post processing, lovely, low cost, great support

Final Touch:
  • darktable: use it for my professional work as a photo journalist / photographer, free, open source (OSS)

Off topic:

Being Linux-only since early 1993, I never looked jealously at what was available elsewhere. BTW, PixInsight's main development platform is also Linux. Unfortunately, I had too many bad experiences with proprietary (photography related) software. So while PI seems very promising and capable, I prefer open software wherever possible.

Thanks for listening, nice to meet you!
firstLight avatar
Hello @all

@andrea tasselli
@Lynn K
@Konstantin Firsov 
@Arun H.
@frankszabo75
@Giovanni Paglioli
@Scott Badger
@Gary JONES 

just want to tell you: I did my homework and proudly present my first synthL + R + G + B processed image. Still a way to go, but I learned something already.

Cheers,
Frank aka firstLight
Gary JONES avatar
Hi Frank
Well done - that is a great image :)

Maybe you can post a larger version so we can get a closer look at the detail :)

Gary
firstLight avatar
Gary JONES:
Well done - that is a great image

Thank you, @Gary! Your detailed comment really encouraged me!

Gary JONES:
Maybe you can post a larger version so we can get a closer look at the detail :)

Done: https://www.astrobin.com/nc4189/B/
Gary JONES avatar
Hi Frank,
That's great - I'm really pleased you felt encouraged - I could not have received a nicer compliment.

That is a very nice image !

You might like to try a few things that could improve your image :-
- experiment with black point, white point and saturation - try to enhance the beauty of the nebula without detracting from the overall image.
- your stars are nice and round, but are all very 'white' - try experimenting with colour balance as well as saturation to bring some 'yellowness' into your stars.
- some of your bright stars have an unusual asymmetrical appearance - I'm not sure what's causing this ??
- you seem to have a few bright pixels - I'm not certain whether Siril does bad pixel mapping - or you could add a pixel layer and edit them out manually
- if you're using a DSLR, don't forget to cover the eyepiece, to stop light from behind the telescope getting into your optical train
- your image has very little noise, but try experimenting with noise reduction. Just like your image, noise has luminance and chrominance components, try experimenting with each of these to see which makes the most difference without sacrificing detail.
- also try longer exposures - your tracking looks quite good, so this might improve your signal/noise ratio. For example, instead of 120 x 40", try 40 x 120".

I hope that helps

Gary
firstLight avatar
Hi Gary,

thank you very much for your continued support and suggestions!

Offer / Request:

First of all, a kind offer/request to you and everybody else curious enough to take a dive into my data: I would be happy to send you my complete data set (lights, darks, flats, biases, all RAWs) for this image, zipped (*.zip) with subfolders in place and ready for download from my website.

Please PM me if you are bold enough to try my data.

Motivation:

As I used an OSC / DSLR with little equipment (Star Adventurer + 4 kg payload), a provisional setup (small balcony, no Polaris), no GPS, no guiding, no computer, everything manual, and in addition rather short integration time due to mountains + a tree in sight, the 40 s subs were already very risky for F=300mm.

Actually, the stars are not perfectly round in the light frames; a small motion blur is perceivable. With this balcony setup I was once very happy to get 120 s exposures at F=135mm. Certainly I would prefer having far fewer files with much longer exposure times.

Unfortunately, with my little provisional setup every single test at F=200mm and up failed. So it is not only a question of image capture (poor polar alignment, unguided, ...) but also, for the image processing, of the amount and quality of data I actually have.

This is a challenge! Maybe yours?

Replies to some of your (Gary's) comments:
  • white stars: Usually I take StarTools for post processing. There I know how to handle star colors and such. This time, I tried to do everything with Siril: it can convert RGBs into L*a*b or HSL ... which I used to separate the RGB channels and to create a synthetic L channel. Unfortunately, I found no (obvious) way to improve star colors with Siril.
  • bright pixels: I was surprised, too, that Siril seems not to handle "hot pixels" automatically. Again I didn't find an obvious way to mask / remove them. No problem in StarTools.
  • cover eyepiece of camera: Never thought light could enter here, because the mirror is locked up for exposure. Will try it ...
  • noise reduction: Again, I know how to do it in StarTools but not in Siril. For the "final touch" I use Darktable which is very capable removing luminance and chrominance noise in a fine tuned manner.

After all, for this image my first goal was to try out the RGB to synL+R+G+B approach for the first time. It looks promising and I learned something – thanks to @all! I'm sure there is still potential to further improve the synthL along this way.

Clear skies and happy nights,
Frank
Gary JONES avatar
Hi Frank,
Many thanks for your post ...

Sure - I'm bold enough to try your data

Send me a link and I'll download them.

Also, you might like to read these two articles on the 500 Rule and the NPF (aperture, pixel pitch, focal length) rule, which might help with your manual setup.
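For reference, a rough sketch of those two rules of thumb for untracked (or imperfectly tracked) exposures; the NPF formula below is a commonly quoted simplified form, and the f/4 aperture and ~5.4 µm pixel pitch are only example values, not taken from the thread:

```python
# Two rules of thumb for the longest exposure before stars start to trail.
def rule_500(focal_length_mm: float, crop_factor: float = 1.0) -> float:
    """Classic 500 rule: seconds ~ 500 / (focal length x crop factor)."""
    return 500.0 / (focal_length_mm * crop_factor)

def rule_npf(f_number: float, pixel_pitch_um: float,
             focal_length_mm: float) -> float:
    """Simplified NPF rule: stricter, accounts for aperture and pixel pitch."""
    return (35.0 * f_number + 30.0 * pixel_pitch_um) / focal_length_mm

print(rule_500(300))            # ~1.7 s on a full-frame body at 300 mm
print(rule_npf(4.0, 5.4, 300))  # ~1.0 s with the assumed f/4 and 5.4 um pitch
```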

Clear skies,

Gary