A lot of the replies describe, from personal experience, which techniques and functions work well for individuals. I would like to offer some of the problems I have had with the methods I have used, to help you also assess their limitations. At least from my experience.
MT (Erosion): Morphological Transformation (Erosion), with masking to minimize stars, was the first method I was introduced to, and I still use it often. Lately, though, I use it almost exclusively late in my processing to fine tune the intensity of the starfield, typically just a light touch up, and at that point the artifacts MT can generate are unlikely to show. As suggested, MT erosion requires a mask (inverted so that only the stars are exposed to the MT process), and the quality of this mask can be critical. Getting a good mask can be difficult or nearly impossible in starfields that are extremely dense. The problem is that for the MT process to work, it needs to "borrow" data from the background periphery around the star in order to fill in towards the center of the star, so the mask has to extend out beyond the local boundary of the star. If the stars are very close together, the mask star elements will overlap, and overlapping mask "stars" form a network of linked elements that, when worked on by MT, cause the cores of stars to become linked with strands of illuminated pixels. I find this unacceptable. Preventing it in extremely dense starfields with the normal mask generator can be quite the challenge and take a lot of iterations. Factor in the issues that arise when different parts of the image field require different mask parameters, and frustration can build. (I am really only discussing the worst case scenario here. It is almost never an issue with well isolated stars, such as later on in the process.)
In this worst case, building a proper star mask from either StarNet or StarXTerminator gives more control. But then one has to take the raw mask from those functions and typically apply Binarize (to eliminate any background and also select the size class of stars to work on), MT (dilation) to make the mask "stars" round and broad enough to include the background periphery, and Convolution to "feather" the mask elements to prevent visible edge effects after star reduction with MT (erosion). Using these sub-processes to build a mask allows you to better control the problem of overlapping star elements in the final mask, but sometimes this just cannot be avoided, and that is why I feel that other methods are better than MT at earlier stages of processing. Once mastered, the general method of mask generation is pretty easy to do. Elements of this can be seen in the Bloch method of star de-emphasis.
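For anyone who wants to see the moving parts, here is a rough Python/scipy sketch (not PixInsight, and not anyone's exact settings) of the binarize, dilate, feather, then erode-under-mask sequence described above. The star-map input, threshold, dilation radius, feather sigma, and erosion size are all illustrative assumptions, not recommended values.

```python
from scipy import ndimage

def reduce_stars_erosion(img, star_map, threshold=0.25,
                         dilate_px=3, feather_sigma=2.0, erode_px=2):
    """img: 2-D float image (0..1); star_map: any stars-only image or
    probability map, used solely to build the mask."""
    # "Binarize": keep only pixels confidently belonging to stars,
    # dropping background and (by raising the threshold) the smallest stars
    core = star_map > threshold
    # "MT (dilation)": grow each star so the mask reaches the background
    # periphery that the erosion will borrow from
    grown = ndimage.binary_dilation(core, iterations=dilate_px)
    # "Convolution": feather the mask edges so no hard transition shows
    mask = ndimage.gaussian_filter(grown.astype(float), feather_sigma)
    # Grey-level erosion shrinks bright structures toward their cores
    eroded = ndimage.grey_erosion(img, size=(2 * erode_px + 1,) * 2)
    # Apply the erosion only where the mask exposes the stars
    return img * (1.0 - mask) + eroded * mask
```

The blend at the end is also why overlapping mask elements are a problem: wherever two feathered "stars" merge, the erosion acts on the bridge between them as well, which is exactly where the linked strands come from.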
Bloch (original): I say original because I came across this first on YouTube, and I believe it was the first introduction of this method. I will let you look it up, since I always mess up linking, etc. For fear of making a mistake in relating someone else's method, my understanding is that it is basically intended to be used fairly early in post processing. The theory is that one reduces the star size by filling in the periphery with the image background local to each star (i.e. deep space dark if that is where the star resides, or even nebulosity if that is where the star is). This is not unlike the objective of the MT process, but Bloch seeks to replace the periphery with one that matches almost exactly the color, density, and noise present locally. If done properly, the common ring artifacts around stars that some of the other methods suffer from upon stretching (or further stretching) are eliminated. In other words, the lightly processed image at this stage can then be processed fully, as if unreduced, without ugly artifacts arising. It is interesting just to read and understand his methods of generating the source of the replacement fields used in the process. A good number of the sub-processes he uses and describes can stand alone as methods for other processing work unrelated to star reduction. All good things to have in your processing quiver.
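Just to make the idea concrete, here is a minimal sketch of the replace-the-periphery-with-matched-background concept. To be clear, this is not Bloch's actual procedure; his construction of the replacement field (including matching the noise) is far more careful and worth studying in his own write-up. Here a wide median filter stands in for the local background estimate, and halo_mask is an assumed 0-to-1 mask covering each star's outer periphery.

```python
from scipy import ndimage

def fill_periphery_with_local_background(img, halo_mask, bg_box=25):
    """img: 2-D float image; halo_mask: 0..1 mask over each star's outer
    periphery (cores left out); bg_box: background window size (illustrative)."""
    # Crude local background estimate: a wide median largely ignores small stars
    local_bg = ndimage.median_filter(img, size=bg_box)
    # Replace the halo region with the locally matched background
    return img * (1.0 - halo_mask) + local_bg * halo_mask
```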
Bloch (revised): I believe I found this on his web site. It is a similar background replacement method; however, with this revision he suggests using a StarNet-generated starless image as the source of background pixels for the replacement. In other words, let StarNet do the heavy lifting that you would do in the original method when making the noise-matched background source. I have not used this a lot because of the defects StarNet often creates around the "disappeared" stars. StarNet has a habit of replacing stars with a hatched background (and, not uncommonly, a signal raised above the normal background), and this will actually show up in the reduced stars. Modifying the starless image until it works as well as the original method then amounts to a lot of the work that was supposed to be saved. However, I do use this variation later in processing, where it works better (for me) after the bulk of the stretching and noise reduction has been done. StarNet works better after stretching, but I have found StarXTerminator to be a better process to pair with this method anyway, and it is designed to do star elimination on unstretched images (I see that StarNet now has this capability as well, but I have yet to try it).
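The revised approach amounts to the same blend as above, only with the starless rendition supplying the replacement pixels. A minimal sketch, again with halo_mask as an assumed 0-to-1 mask over the star peripheries:

```python
def fill_periphery_from_starless(img, starless, halo_mask):
    """img and starless: the same 2-D float image with and without stars;
    halo_mask: 0..1 mask targeting the star peripheries."""
    # Any hatching or raised signal the star-removal tool left behind in
    # `starless` gets blended straight back in under the reduced stars
    return img * (1.0 - halo_mask) + starless * halo_mask
```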
In both of the Bloch methods, it is either stated or implied that the point of the process is to create rather subtle reductions. The goal is to leave an image still rather true to the reality of the captured subs. He leaves the brightest/largest stars alone, removing them (far fewer in number than the others) from the halo mask used to target the stars for reduction. It is also stated/implied that the method avoids modifying the very faintest stars as well. One can iterate beyond a single pass with this method to create stronger reductions. In fact, in his "revised" post, he actually states that using a StarNet star mask is not recommended because it is too good: it captures too many of the stars to be reduced. I would like to state that I often adopt my own "ethic" when applying his method, and that the method can be adjusted to change the targeting or strength of the effect. For example, his suggestion that the method be used only to reduce stars within a narrow middle range (i.e. not the brighter stars and not the small stars) works well for scenes where you want only a minor reduction, but I find that using the method this way at a stronger setting causes the starfield to become very flat and uniform. It can completely destroy the wonderful appearance of depth of field and the broad array of stars of different brightness, size, and color. So for me, there are times I simply need (want) a stronger star reduction, and there are so many faint stars that I insist on applying the method to as many stars as possible. (But I normally still leave the brightest few stars alone, since almost no method works well on them anyway.) In other words, I am trying to evenly lower the brightness and size of all but the brightest stars, while keeping all the stars. This typically causes some of the very faintest stars to basically disappear, but it is certainly not what is done in full star replacement, which I try to avoid if at all possible.
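When I want that stronger, broader reduction, the idea is essentially several gentle passes rather than one heavy one. A sketch under the same assumptions as the earlier snippets; build_halo_mask is a hypothetical helper that regenerates the halo mask from the current image, and n_passes and strength are illustrative knobs, not recommended values.

```python
def iterate_reduction(img, starless, build_halo_mask, n_passes=3, strength=0.6):
    """Apply several gentle reduction passes; rebuilding the halo mask each
    pass keeps the shrinking stars targeted."""
    for _ in range(n_passes):
        mask = build_halo_mask(img) * strength
        img = img * (1.0 - mask) + starless * mask
    return img
```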
Star Replacement: I find that I have had to resort to this in extreme cases. I am no expert in this method. It seems to be "de rigueur" for those who do narrow band processing. (Not wanting to start a fight with that statement; just my perception.) I am assuming it comes as a necessity since it is the best way to add RGB stars to the typically unnatural color palettes of most narrow band images. I do OSC exclusively, which may explain why I do things the way I do. In the rare case where I did a full star replacement, with the stars sourced from an earlier image, I recall some difficulty with the regions around the stars that came in with the star image and did not stretch well, leaving ring artifacts. You can see for yourself the struggles I had with this image:

NGC 7380 in OSC (includes Sh2-142 and Sh2-143). An image that I am not so proud of. I think with more experience it would have been much easier. And others here do a great job with it.
I think the thing to avoid with star processing is destroying what reads as a natural PSF for each star, especially for the less-bright stars. You can see this in some images where the star has a fairly sharp edge and the same density across its whole face, kind of like the period at the end of this sentence. Sometimes they are fully saturated; but in the interest of dimming the star, sometimes they just look bland, even when colored. A good PSF leaves the viewer with the impression that the star is a round object, not a circle. I say these are negatives, but that very much depends on the intended scale of view for the image. Even flat circular stars can be fine if they are small enough at the intended image scale. I think we here on AstroBin tend to be pixel peepers. But when I look at an image, even though I do pixel peep (mostly to learn about the processes used), I mainly seek the impression it leaves on me when viewed full frame. Most of the methods discussed tend to preserve the PSF, since they involve an erosion from the outside in.