I am hoping to tap into the Astrobin braintrust. I regularly collect substantial amounts of RGB and Luminance data. I equally regularly replace the lightness component of my RGB images with the Luminance data.
If I shoot 10 hours each of R, G, B, and L, then throwing away the lightness component of the RGB essentially halves the total lightness signal I have collected (assuming ~100nm bandwidths for the RGB filters and ~300nm for the L filter), which seems silly. If the FWHMs are equal across all channels, why bother with the Luminance at all in this approach?
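To make the "halving" claim concrete, here is the back-of-the-envelope bookkeeping I have in mind, treating detected signal as roughly proportional to bandwidth times integration time (which ignores real filter transmission curves, QE, and sky background, so take it as a rough sketch only):

```python
# Rough photon-budget sketch: signal ~ bandwidth (nm) x exposure (h).
# Ignores filter transmission curves, QE, and sky background.
bandwidth_L = 300      # nm, typical broadband luminance filter
bandwidth_rgb = 100    # nm, per colour filter
hours_per_filter = 10

signal_L = bandwidth_L * hours_per_filter            # L master
signal_rgb = 3 * bandwidth_rgb * hours_per_filter    # lightness buried in RGB

# The RGB lightness carries about as much signal as the L master,
# so discarding it throws away roughly half the total lightness signal.
fraction_kept = signal_L / (signal_L + signal_rgb)
```

Under these assumptions the two are equal, so replacing rather than combining keeps only half of what was collected.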
Leave aside for present purposes any discussion of how one could collect less RGB. Assume the FWHMs of the R, G, B, and L data are all roughly the same.
I am aware that one way to avoid losing the lightness data in the RGB is to create a superluminance. But I have not found any serious discussion of the best approach to creating one that explains why it is best and what the pros and cons are. There are advocates for running image integration on all the R, G, B, and L subs together, for running image integration on the RGB and L masters, and for using PixelMath to blend the Luminance with the lightness extracted from the RGB.
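For concreteness, the kind of blend I understand the PixelMath camp to be advocating is essentially an inverse-variance weighted average of the two lightness estimates, which minimizes the noise of the result when the inputs are independent. A minimal sketch in numpy, with entirely synthetic data and made-up noise figures (the function name and numbers are mine, purely for illustration):

```python
import numpy as np

def superluminance(L, rgb_lightness, sigma_L, sigma_rgb):
    """Inverse-variance weighted blend of two lightness estimates.

    If both inputs are independent estimates of the same signal, these
    weights minimize the noise of the combined image.
    """
    w_L = 1.0 / sigma_L**2
    w_rgb = 1.0 / sigma_rgb**2
    return (w_L * L + w_rgb * rgb_lightness) / (w_L + w_rgb)

# Toy demonstration: same underlying signal, different noise levels.
rng = np.random.default_rng(0)
signal = np.full((100, 100), 0.5)
L = signal + rng.normal(0, 0.02, signal.shape)       # deeper L master
rgb_l = signal + rng.normal(0, 0.03, signal.shape)   # noisier RGB lightness
combined = superluminance(L, rgb_l, 0.02, 0.03)
```

Even the noisier RGB lightness pulls the combined noise below that of the L master alone, which is why simply discarding it feels wasteful. Whether this beats integrating all the subs together is exactly what I am asking.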
So here is my question for the deep thinkers out there: is there a generally best approach to creating a superluminance? If so, what is it, why is it best, and what downsides, if any, does using that superluminance have compared with using just the L master or just the lightness of the RGB?