Hi Entropy,
Just my two cents' worth, but I feel that nowhere near enough attention is paid to background extraction. The subtleties of its use can dramatically affect the results of your processing, especially when the field of view contains nebulosity over most of the frame.
You want to make sure your point selection contains only sky glow and not dim nebulosity. I take my strongest signal-to-noise masters (lum, Ha) and combine them to form the strongest signal-to-noise linear mono image I can. Then I overstretch it, particularly the dimmest portions of the image. This image is used only for determining where to place the background points.

Make sure these points are selected only at places where the signal is truly zero: either the dark nebula is so thick that it is truly black, or the nebulosity is truly absent. Because this image is over-stretched, you may have to change the colour of the points to actually see them. If, as in the Heart Nebula, there are very few places that are truly black, then you may have only a few points to compute your background from. Try to avoid the temptation to add points where the subject matter isn't black. Once this is done and your points are selected, iconify DBE, exit dynamic mode, and delete this image.
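To make the combine-and-overstretch idea concrete, here is a rough sketch in Python/NumPy. This is not PixInsight's actual maths; the weights, the asinh stretch, and the `combine_masters`/`overstretch` names are all my own illustrative choices:

```python
import numpy as np

def combine_masters(lum, ha, w_lum=0.7, w_ha=0.3):
    # Weighted average of two linear masters (weights are illustrative;
    # in practice you would weight by signal-to-noise).
    return w_lum * lum + w_ha * ha

def overstretch(img, strength=500.0):
    # Aggressive asinh stretch that exaggerates the dimmest pixels,
    # making faint nebulosity obvious so you can avoid it when
    # placing background sample points.
    stretched = np.arcsinh(strength * img) / np.arcsinh(strength)
    return np.clip(stretched, 0.0, 1.0)

# Tiny example: two 2x2 linear "masters" with values in [0, 1]
lum = np.array([[0.001, 0.02], [0.0, 0.5]])
ha  = np.array([[0.002, 0.01], [0.0, 0.4]])

combined = combine_masters(lum, ha)
preview = overstretch(combined)  # dim pixels boosted, true zeros stay black
```

The point of the stretch is that a pixel with faint real signal becomes visibly non-black, while a truly empty pixel stays at zero, which is exactly the distinction you need when dropping sample points.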
Take one of your master frames (either colour or mono) and double-click on the iconified DBE so that the points you selected appear on the linear image. I agree with what is stated above: avoid making the background spline fit tolerance too big. You want all of your points included, but you don't want to over-fit the spline to your background or you will generate terrible artifacts. Splines can be great, but they can also be terrible, and it is a fine art to know when and how to use them.
Clicking the check mark with no correction method selected will display the background model that will be used. Make sure this model makes sense before applying any correction to each of your masters.
The final crucial part of DBE is which correction to apply: subtraction or division. With experience you will see the different results these two methods yield. Subtraction can clip actual data, while division can amplify or diminish one of the signal channels too much. I usually create two background-removed images, one using subtraction and one using division, and then take a linear combination of the two (50% of each, say) to create the final result. The exact proportion will depend on whether you want to show the dim stuff or not. Including at least some portion of the division result will keep data from being clipped.
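As an illustration of that blend (again not PixInsight code; the `dbe_blend` name, the median rescaling of the division result, and the 50/50 default are my own assumptions):

```python
import numpy as np

def dbe_blend(image, background, k=0.5):
    # Subtraction removes additive gradients but can push faint
    # pixels below zero (clipping); division preserves ratios but
    # can over- or under-correct a channel.
    sub = image - background
    # Rescale the division result back to the original brightness
    # level so the two corrections are comparable before blending.
    div = image / background * np.median(background)
    # Linear combination: k is the fraction taken from the
    # subtraction result (k = 0.5 is a reasonable starting point).
    return k * sub + (1.0 - k) * div
```

In PixInsight itself the same blend can be done with a one-line PixelMath expression over the two corrected images; the Python above just shows the arithmetic.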
I could go on and on about this, but DBE needs to be applied very carefully. You can also try GraXpert, but that involves leaving PixInsight to do your background extraction.
Hope this helps,
Dave