Hi Frank,
I think your question has been answered already, but it is an interesting one, so I thought I'd respond as well.

First, I think it's helpful to clarify some definitions ... without getting too scientific.
Luminance is an objective measure of the luminous intensity of light per unit area, based on a given field of view.
Luminous intensity is a measure of the wavelength-weighted power emitted by a light source, based on a standard model of the sensitivity of the human eye.
Most people would equate this with brightness, which is an individual's subjective impression of luminance.
Don't confuse these with luminosity, which is an objective measure of electromagnetic energy emitted by an object per unit time.
It's commonly used to measure the energy radiated by a galaxy or star.
Our eye has two types of photosensitive receptors (well, actually there are 3, but only 2 relate directly to how we perceive images and colour) :-
Rods : these are basically intensity receptors.
They work well in low light and are found mostly around the edge of the retina, and therefore contribute to peripheral vision.
This is why pilots are trained to look for targets out of the corner of their eye during night-time sorties.
This corresponds to the L channel.
Cones : these respond to specific wavelengths of light.
Most people have 3 sensitivity bands, although many animals and some people have 4.
(the bluebottle butterfly has 15, some of which cover the UV spectrum, which is really handy for landing on the right flowers when they search for food).
People who are colour-blind have fewer than 3, or might have diminished sensitivity in one or more sets of cones.
These correspond to the R, G, B channels.
So - when you go out at night, you might be able to find your way around, but colours look less vibrant, because your rods are doing most of the work.
In bright light, everything looks more colourful, because your cones are doing most of the work.
In digital photography, the goal is to represent an image on a screen or printed on paper so it looks like the original object.
(although in astrophotography, 'false' colour images are also used to represent wavelengths outside the range of human vision, eg X-rays).
Digital colours are usually represented as triplets of numbers, generally in the range of 0-255, although some colour systems use a larger range.
So in the RGB colour model, pure red = [255, 0, 0], pure green = [0,255,0] and pure blue = [0,0,255].
Every colour is made up of different combinations of these 3 values.
Grey is made up of equal values of R, G and B - so a 'neutral' grey might be [128, 128, 128].
On the other hand, something like 'strawberry' would be [255, 47, 146].
Another way of looking at this is to say that every colour is a shade of grey, with some R, G and/or B added on top.
So 'strawberry' would be Grey = [47, 47, 47] + colour = [208, 0, 99], so LRGB = [47, 208, 0, 99].
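If it helps to see that 'grey + colour' idea as arithmetic, here's a tiny Python sketch (the function name split_grey is just something I've made up for illustration):

def split_grey(rgb):
    # Split an RGB triplet into a neutral grey part plus the colour 'on top'.
    # The grey level is simply the smallest of the three channel values.
    grey = min(rgb)
    colour = [c - grey for c in rgb]
    return grey, colour

# 'strawberry' from the example above
grey, colour = split_grey([255, 47, 146])
print(grey, colour)   # 47 [208, 0, 99]  ->  LRGB = [47, 208, 0, 99]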
You can represent an image using L only, as you might see on a black & white TV.
But if you take the luminance away, you are left with 3 colour components (chrominance) and an image with a completely different 'ghostly' appearance.
Colour TVs used to work this way - they had separate luminance and chrominance signals - so if you 'lost' the chrominance signal, you still had a B&W image, which was better than nothing - and if you were watching re-runs of old B&W sitcoms, it didn't matter anyway.
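As a rough illustration of how that B&W 'signal' can be pulled out of a colour image: the classic TV luma formula weights green most heavily, because that's where our eyes are most sensitive. Just a sketch, using the weights from the old ITU-R BT.601 standard:

def luma(r, g, b):
    # Approximate perceived brightness from RGB (ITU-R BT.601 weights,
    # as used by analogue colour TV)
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma(255, 47, 146))   # ~120 - the 'B&W TV' value for 'strawberry'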
Back to astrophotography ...
Essentially, it's much easier and less expensive to get a quality, low-noise image using a monochrome camera.
So the idea is to take one picture as L, then swap filters in front of the camera to separately capture the R, G, B components of colour, then combine them all together.
Another way to think of it is like this ...
Say you have a monochrome camera with 10 million pixels.
If you capture a monochrome image, and then 3 more images using R, G, B colour filters, you get 10 million separate values for each of L, R, G and B,
which are combined to give a 10 mpx colour image.
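To make that concrete, here's one very simplified way to combine four monochrome frames into a colour image using Python and NumPy. Real LRGB combination tools do this more carefully (usually in a Lab-type colour space), so treat it as a sketch only - the frame variables are assumed to be 2-D float arrays you've already loaded and normalised to 0..1:

import numpy as np

def combine_lrgb(l, r, g, b, eps=1e-6):
    # Naive LRGB combine: take the colour ratios from the R, G, B frames
    # and rescale them so their brightness matches the luminance frame.
    rgb = np.stack([r, g, b], axis=-1)              # shape (H, W, 3)
    current_lum = rgb.mean(axis=-1, keepdims=True)  # brightness of the RGB stack
    scaled = rgb * (l[..., np.newaxis] / (current_lum + eps))
    return np.clip(scaled, 0.0, 1.0)

# e.g. with four 10-megapixel frames already loaded as arrays:
# colour_image = combine_lrgb(l_frame, r_frame, g_frame, b_frame)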
If you have a 10 mpx one-shot colour (OSC) camera, each pixel has a small filter in front of it - this is known as a Bayer matrix.
Most OSC cameras have a Bayer matrix with 1 red, 1 blue and 2 green filters - known as RGGB.
(there are twice as many green elements as red or blue to mimic the physiology of the human eye).
So - you still get 10 million pixels of colour, but each pixel has to be a lot smaller, because you need a 2x2 RGGB group of them to equate to 1 full-colour pixel from a monochrome camera - and each one only receives the light that gets through its own colour filter. As a result (assuming the image sensors are the same size), each pixel gathers less light, is therefore less sensitive, and has a poorer signal-to-noise ratio.
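One way to see the '2x2 group per colour pixel' point is the simplest possible demosaic, where each RGGB block becomes a single colour pixel. Real cameras and software interpolate rather than binning like this, so it's only a sketch, and it assumes the mosaic has the red photosite in the top-left corner:

import numpy as np

def superpixel_demosaic(raw):
    # Treat each 2x2 RGGB block of the raw mosaic as one colour pixel:
    # R top-left, G top-right and bottom-left (averaged), B bottom-right.
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

# a 4x4 patch of raw data becomes a 2x2 colour image
raw = np.arange(16, dtype=float).reshape(4, 4)
print(superpixel_demosaic(raw).shape)   # (2, 2, 3)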
In addition to the RGB colour model, there are a number of other colour models, including the CMYK model (mostly for printing), and CIE-LAB (or L*a*b).
L*a*b expresses colour as :-
L = perceived brightness
a = green–red opponent colours, with negative values being green and positive values being red
b = blue–yellow opponent colours, with negative numbers being blue and positive being yellow.
These models are mostly interchangeable - in other words, you can convert between them to manipulate images in different ways.
(I say 'mostly' because different models can represent different ranges of colour, and each model has different colour 'spaces', each covering a particular range of colours known as its gamut).
For example, you could convert your RGB image into the L*a*b colour space to manipulate just the luminance,
as shown here in the curves adjustment panel in Affinity Photo - then back to RGB.
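If you'd rather do it in code than in an editor, here's a sketch using Python and scikit-image (which you'd need to install; the file name and the gamma value are just examples, and it assumes a plain RGB image). It converts to L*a*b, brightens only the L channel, and converts back:

import numpy as np
from skimage import color, io

image = io.imread('my_image.png')[..., :3] / 255.0   # RGB floats in 0..1
lab = color.rgb2lab(image)                           # L runs 0..100, a and b are signed

# a simple 'curve' applied to luminance only - a gamma stretch on L
lab[..., 0] = 100.0 * (lab[..., 0] / 100.0) ** 0.8

adjusted = color.lab2rgb(lab)                        # back to RGB
io.imsave('adjusted.png', (np.clip(adjusted, 0, 1) * 255).astype(np.uint8))

The a and b channels aren't modified at all, which is exactly the point of working in L*a*b.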

So - the answer is 'yes' to your first question, and 'yes, but' to your second question :-
1. Yes - luminance equates to the perceived 'brightness' of the colour image
2. Yes - you can 'dig out' the luminance information and adjust L, R, G, B separately,
but it's not straightforward unless your photo editing software supports it.
The best advice I can give you is to experiment!
I hope that helps

Gary