After reading a related post on some problems using SPCC, some questions arose that I have sort of assembled since my recent adoption of this feature in PI.
Please feel free to dump on me over these questions and challenges below, but I am not an expert on this optical stuff. Yes, I am a trained scientist and generally understand the calibration of laboratory equipment. So maybe I ought to know better, but...!
First question: I am in a philosophical conundrum over what this tool asks of us in order to calibrate an image. I believe I basically understand the point of color calibration: take our images, compare how well our equipment has recorded color information, check that performance against standards, and then make the necessary corrections to yield the image we "should" have expected to get. Color-wise, that is. In light of that, I find it odd that we have to supply all this information about what sensor our camera uses, the properties (light-transmittance curves, sensitivities, etc.) of any Bayer filters, additional filters, and so on. If SPCC has "true color" information via the GAIA database (the standard), then why does any of that matter? My guess is that GAIA provides standards only for known stellar objects, and that we ought not to extend those assumptions to our non-stellar objects. But why not? I will revisit that question later. It also raises another question: what, exactly, does SPCC correct, and how? Next question.
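My best understanding of why the curves matter, sketched as toy code: the catalog can tell us a star's spectrum, but what our particular camera *should have recorded* depends on the whole system's throughput, so the expected flux is (roughly) the integral of spectrum times throughput over wavelength. Every curve and number below is invented purely for illustration; this is not the actual SPCC computation.

```python
import numpy as np

# Toy wavelength grid in nm
wavelength = np.linspace(400.0, 700.0, 301)
dw = wavelength[1] - wavelength[0]

# Invented catalog spectrum for one star (arbitrary units, rising redward)
star_spectrum = 1.0 + 0.002 * (wavelength - 400.0)

# Invented "filter x sensor QE" responses: one red-centered, one blue-centered
red_response = np.exp(-0.5 * ((wavelength - 620.0) / 30.0) ** 2)
blue_response = np.exp(-0.5 * ((wavelength - 460.0) / 30.0) ** 2)

# Expected (synthetic) flux through each channel: integrate spectrum x response
red_flux = np.sum(star_spectrum * red_response) * dw
blue_flux = np.sum(star_spectrum * blue_response) * dw

# The same catalog star predicts different fluxes through different curves,
# so the tool cannot know what flux to expect without knowing our curves.
print(red_flux, blue_flux)
```

In other words, the catalog alone fixes the standard, but predicting what a non-defective version of *our* rig would have measured still requires the rig's response curves.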
Second: Why do we need to input all this detailed information on the filters and camera sensors we use? For example: (1) suppose all (or enough) of the correct stellar color information for a particular image is contained in the GAIA database, and (2) that image contains only stars. Should not SPCC be able to take a monochrome image and color it perfectly? I actually tried this, but SPCC returned a warning that it cannot work on monochrome images. Why not? As I suggested above, the assumption may be that star-only color information is not sufficient to calibrate all features of an image. But has that been proven? Maybe how SPCC works is the issue; I have a follow-on question below. It is not uncommon for surrogate information (such as just stars) to be used to calibrate instruments. It is done all the time. Bottom-line question from just an idiot: if one is using SPCC, should not SPCC simply assume that the starting image does not match the color information in the reference frame and make the working frame match? Why should SPCC care "how" that working frame came to be imperfect?
Third: A repeat of the question above, "Why do we need all this detailed information..." But from a different perspective! First off, a good number of the narrowband, LPF, etc. filters are made by vacuum-deposition methods. These filters all have published light-transmittance curves. But that does not mean any individual filter is a perfect fit to the published curve. The method of manufacture is certainly not perfectly reproducible. Some filters can clearly be badly off and require returns when we are unlucky enough to get a bad one. But what of those that are not so bad that we non-experts notice, or can test for? In my opinion, SPCC should be telling us what properties our filters have! There should be a sub-tool of SPCC that measures our filter's performance and logs it. After giving us that information, it could then be included in our personalized list of filters/cameras (or even whole image trains) to use in the SPCC process. (Caveat in the Second question above.) It would seem that if SPCC has highly accurate color information for hundreds (if not thousands) of stars in any one of our images, supported by many redundant measurements by the GAIA telescope, that information could easily be used to calibrate each of our filters (and cameras) from a single image with enough stars. Each star should have a wavelength maximum, and given very accurate signal strength for each star in the image, comparisons of signal strengths between stars within a certain filter's bandpass should quickly reveal what that filter is passing. The function might also require access to an unfiltered image from the same camera with which to compare the bandpass-filtered one.
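To make my proposal concrete, here is a toy sketch of the idea: if the catalog gives accurate spectra for many stars, and we measure each star's total signal through an unknown filter, the filter's transmission in a few coarse wavelength bins falls out of a least-squares fit. All data below are synthetic and made up; this is not an existing SPCC feature, just the shape of the calculation I am imagining.

```python
import numpy as np

rng = np.random.default_rng(0)

n_bins = 5        # coarse wavelength bins across the bandpass
n_stars = 40      # stars with accurate catalog spectra in one frame

# Invented catalog spectra: per-star energy in each wavelength bin
spectra = rng.uniform(0.5, 2.0, size=(n_stars, n_bins))

# The "unknown" filter transmission we would like to recover
true_transmission = np.array([0.05, 0.4, 0.9, 0.5, 0.1])

# Measured signal of each star = spectrum dotted with transmission,
# plus a little photometric noise
measured = spectra @ true_transmission
measured += rng.normal(0.0, 0.01, size=n_stars)

# Least-squares estimate of the per-bin transmission from the stars alone
est, *_ = np.linalg.lstsq(spectra, measured, rcond=None)

print(np.round(est, 2))  # close to true_transmission for this toy setup
```

Whether this is well-conditioned for real spectra (which are far more similar to one another than random toy data) is exactly the kind of question I would love an expert to weigh in on.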
Fourth: So how does SPCC work? What does it do with all this information? What is its output, and how is it generated? For stars, does SPCC correct each star one-to-one for stars it can find in the GAIA database? Or does SPCC create a global correction that resembles the models that DBE or ABE create? Does it correct stars as a whole, separately, and then use a different model to correct the background? If it is not star-to-star correction, then for stars found in the image that are not in the GAIA database, is there a model of how the camera/filter image-train defect/correction is applied? Does it parse the image and work on sectors, using only GAIA data for those sectors, correcting sector by sector the way BXT does its deconvolution? How does it deal with gradients? It seems to me that gradients could be a source of major errors if SPCC can only create a global correction factor. In fact, the PI writeup does state that SPCC should be used after the background is flattened, so that makes sense. Background neutralization as a separate feature then seems odd. Should that not fall out as part of the overall correction? And if not, wouldn't the background issues cause defects in the CC of other features? Would it not make sense to do color correction near the end of processing? After all, that great CC, done early, will easily be skewed by all the other functions applied to the image afterward, some of which are known to cause color shifts.
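For what it's worth, my working mental model of a *global* photometric color calibration looks like this: for each channel, fit one scale factor that maps the measured star fluxes onto the catalog-predicted fluxes, then apply those few factors to every pixel, with no per-star or per-sector correction. The sketch below uses synthetic data and is only my guess at the shape of the computation, not the actual SPCC implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stars = 100

# Invented catalog-predicted fluxes per channel (R, G, B) for the frame's stars
predicted = rng.uniform(1.0, 10.0, size=(n_stars, 3))

# Our camera measured them with unknown per-channel gains plus noise
true_gains = np.array([0.7, 1.0, 1.3])
measured = predicted * true_gains * rng.normal(1.0, 0.02, size=(n_stars, 3))

# Robust global correction: median of per-star predicted/measured ratios,
# one number per channel for the entire image
correction = np.median(predicted / measured, axis=0)

# The whole image is then scaled by just these three numbers
image = rng.uniform(0.0, 1.0, size=(8, 8, 3)) * true_gains
calibrated = image * correction
```

If this picture is right, it would explain both why gradients must be removed first (a single global factor cannot fix a spatially varying cast) and why background neutralization remains a separate step.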
Additional Question (In Edit): One main purpose of GAIA was to detect variable light signals in stars, e.g., to discover planet-transit signals and such. Those are rare, but there are many, many stars that are variable for other reasons. Because SPCC relies on accurate photometry of individual stars, does SPCC specifically exclude stars that GAIA has identified as variable when it makes its CC?
It is interesting that the PI writeup for SPCC has a couple of sections on the history and origins of reference-based CC in PI and the philosophy of what color means in astrophotography. I find it odd that they (or at least the writer of those sections) seem to believe there is somehow a defect (maybe an untruthfulness?) to images that are uncalibrated, improperly stretched, or otherwise unadjusted with regard to color. Yet just above, there is basically a tutorial on how you can "choose" your white-balance source, which yields completely color-changed images! Taken a different way, it implies that if person A arrives at an identically colored image of the same object as person B, yet person A started with a properly color-calibrated image, only the image from person A is somehow OK. But then the final, philosophical question is: how accurately does an image need to be calibrated to yield the images we want?