Just to join the conversation, your suggestion of using a different PSF for different parts of the image is a very good one, and is completely doable. It just takes the time and effort to work out how to measure and interpolate the PSF over the various areas of the image, and then to actually code it. Of course we would then have to build PSFs for the different parts of the picture, which would be more time-consuming.
For my fellow geeks, deconvolution is simply trying to determine what the signal really is, given the response that we record on our image, assuming that the image is a linear sum of the responses to all of the individual signals across the field. In other words, if we see a response (brightness in a pixel), the only assumption is that all of the various potential contributors (i.e., stars, nebulae) in the signal contributed to the overall response in a linear way. For example, the brightness of a given pixel may have a 4% contribution from the sky, a 40% contribution from a star to the east, and a 56% contribution from a star to the west. I just made those numbers up, but the key to deconvolution is determining what those numbers are. Again, the only assumption is that the contributions of every signal (every star) add up (linearly) to the total response seen in the image.
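If it helps to see that linearity in code, here is a minimal Python sketch; the star positions, brightnesses, and the little 3x3 kernel are all made up for illustration:

```python
# Minimal sketch of the linearity assumption: each pixel in the blurred
# image is just the weighted sum of every source's contribution.
import numpy as np
from scipy.signal import fftconvolve

# True signal: two point sources (stars) on an empty sky (made-up numbers).
signal = np.zeros((5, 5))
signal[2, 1] = 40.0   # star to the west
signal[2, 3] = 56.0   # star to the east

# A tiny, normalised PSF: how the optics smear a single point of light.
psf = np.array([[0.05, 0.10, 0.05],
                [0.10, 0.40, 0.10],
                [0.05, 0.10, 0.05]])
psf /= psf.sum()

# Convolution = add up each star's smeared light, pixel by pixel.
image = fftconvolve(signal, psf, mode="same")

# The centre pixel's brightness is the sum of BOTH stars' fuzz overlapping:
print(image[2, 2])   # west star's contribution + east star's contribution
```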
To perform deconvolution, we need to know what the response is to a single, known signal. The details of how the transformation (convolution) occurs are not important. Once we know that, we can "deconvolve" to find what the signal needs to be to give the response on our image. In other words, what combination of "stars" (signals) do we need so that, when turned into responses by our atmosphere, telescope, and camera and all added together, they yield the result we see on the image?
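As a sketch of what "deconvolve" means mathematically: in the Fourier domain, convolution becomes multiplication, so dividing by the PSF's transform undoes it. The naive version below amplifies noise badly (which is why practical algorithms like the Richardson-Lucy iteration shown further down are used instead); it is only meant to illustrate the inversion, and the `eps` fudge factor is an assumption to avoid dividing by zero:

```python
# Illustration only: invert a convolution by spectral division.
import numpy as np

def naive_deconvolve(image, psf, eps=1e-6):
    """Recover the signal whose convolution with psf gives image."""
    H = np.fft.fft2(psf, s=image.shape)   # transform of the PSF
    G = np.fft.fft2(image)                # transform of the blurred image
    F = G / (H + eps)                     # undo the multiplication
    # Note: the result is circularly shifted by the PSF's centre offset
    # unless you re-centre the PSF first; fine for illustration.
    return np.real(np.fft.ifft2(F))
```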
In engineering this is often done via a calibration: put a known, simple, single signal through whatever system you are investigating and see what the response looks like. In most engineering examples, this "calibration" means feeding a simple signal (a point source of light, a pressure pulse, a Dirac delta, etc.) through the process and recording the response.
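In code, that calibration is just: feed an impulse in, record what comes out. Here a Gaussian blur stands in for the real system, purely as an assumption; with a telescope you obviously can't call the atmosphere as a function, which is why the stars themselves do the job, as the next paragraph explains:

```python
# The engineering calibration: push a Dirac-delta image through the
# system and record the response. A Gaussian blur stands in for the
# real (unknown) atmosphere + optics + camera.
import numpy as np
from scipy.ndimage import gaussian_filter

impulse = np.zeros((21, 21))
impulse[10, 10] = 1.0                # one bright pixel: the known signal

response = gaussian_filter(impulse, sigma=2.0)   # what the system returns

# The response to an impulse IS the PSF, by definition.
psf = response / response.sum()      # normalise so total flux is preserved
```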
In astronomy, we are fortunate to already have such calibrations in our image. The signal is a star: a single point source of light (close enough to zero diameter, yet emitting photons) that would appear as a single pixel of a given brightness in our image if our optical system, camera, atmosphere, and everything else were perfect. The response we actually get is a fuzzy circle that is brightest in the centre and tails off with distance from it. So we know both the signal (a single pixel of given brightness) and the response (a fuzzy circle we call the PSF, the point spread function). That is, the machine (our telescope and camera) shows us the point source of light as a fuzzy circle (the PSF). Note that our PSFs also have noise added, but we can get rid of most of that noise by averaging the calibration, i.e., by averaging the PSFs from multiple stars. Deconvolution doesn't care why the response is fuzzy, oval, or wide, only how fuzzy, oval, or wide it is.
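A rough sketch of that averaging, assuming you already have a list of isolated star centres (a real tool would also subtract the background and re-centre each star to sub-pixel accuracy first):

```python
# Estimate the PSF by averaging same-size cutouts around known stars;
# the random noise in each cutout averages toward zero.
import numpy as np

def estimate_psf(image, star_centres, half=7):
    """Average normalised cutouts around (y, x) star centres."""
    cutouts = []
    for (y, x) in star_centres:
        patch = image[y - half:y + half + 1, x - half:x + half + 1]
        cutouts.append(patch / patch.sum())   # normalise each star's flux
    psf = np.mean(cutouts, axis=0)
    return psf / psf.sum()
```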
Finally, knowing that any brightness (other than noise) in an image must be the sum of the contributions from all of the stars in the image, we can determine what the original signal must look like to generate the image (response) from the camera. In a perfect world, the deconvolved image would show only stars that occupy a single pixel, and it would be a perfect recording of the signal. Instead, for a host of reasons, what we get in practice is an image with smaller stars, sharper nebulae, etc. that is more representative of the actual signal: still not perfect, but closer to the signal than the original image. The "host of reasons" includes residual noise, camera resolution, numerical dispersion, non-linearity of the camera response, light diffraction (which is non-linear), and so on.
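For the curious, here is what a practical deconvolution call can look like, using scikit-image's Richardson-Lucy routine; it is one algorithm among several, the synthetic data is made up, and the iteration count is an arbitrary choice:

```python
# End-to-end toy example: make a fake star field, blur it with a PSF,
# add noise, then deconvolve with Richardson-Lucy.
import numpy as np
from scipy.signal import fftconvolve
from skimage import restoration

rng = np.random.default_rng(0)

# A synthetic signal: two point-source "stars" on an empty sky.
signal = np.zeros((64, 64))
signal[20, 20] = 1.0
signal[40, 45] = 0.6

# An assumed Gaussian-ish PSF (in real life, measured from your stars).
x = np.linspace(-2, 2, 9)
psf = np.exp(-x**2)[:, None] * np.exp(-x**2)[None, :]
psf /= psf.sum()

# Forward model: blur, then add a little noise, as a real camera would.
image = fftconvolve(signal, psf, mode="same")
image = np.clip(image + rng.normal(0, 1e-4, image.shape), 0, None)

# Iteratively search for the signal that, once convolved with the PSF,
# best reproduces the recorded image. More iterations = sharper, noisier.
deconvolved = restoration.richardson_lucy(image, psf, num_iter=30)
```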
Using different PSFs for different parts of the picture is no different than using a different PSF for a different picture, a different filter, or a different camera, as long as the scale of the signal-response relationship (the calibration distance, or PSF diameter) is small compared to the overall scale of the picture. The questions are simply: does a point source of light give a circular image or an oval one, and how much spread (standard deviation) results? Any nebulosity in the image is just the linear sum of all of the "point sources" that make up the nebula.
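So a tile-by-tile scheme is one plausible way to code the idea from the top of the post; `psf_for_tile` below is a hypothetical stand-in for however you measure the local PSF, and a real implementation would overlap and blend the tiles to hide seams:

```python
# Sketch of spatially varying deconvolution: split the image into tiles,
# deconvolve each tile with its own locally measured PSF, and stitch.
import numpy as np
from skimage import restoration

def deconvolve_tiled(image, psf_for_tile, tile=128):
    out = np.zeros_like(image)
    for y in range(0, image.shape[0], tile):
        for x in range(0, image.shape[1], tile):
            patch = image[y:y + tile, x:x + tile]
            psf = psf_for_tile(y, x)          # local calibration for this region
            out[y:y + tile, x:x + tile] = restoration.richardson_lucy(
                patch, psf, num_iter=30)
    return out
```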
The beauty of deconvolution is that it doesn't need to know anything about the nature of what happens to the signal to produce the response, and doesn't need to know anything about what created the signal, just what the transformation from signal to response is, and that is the PSF.
Hope this entertained you a little.
Dave