SPCC Thoughts and Questions

8 replies · 495 views
Alan Brunelle avatar
After reading a related post on some problems using SPCC, a number of questions surfaced that I have been assembling since my recent adoption of this feature in PI.

Please feel free to dump on me over these questions and challenges below, but I am not an expert on this optical stuff.  Yes, I am a trained scientist and generally understand the calibration of laboratory equipment.  So maybe I ought to know better, but...!

First question:  I am in a philosophical conundrum over what this tool asks of us in order to get it to calibrate an image.  I believe that I basically understand the point of color calibration: take our images, compare how well our equipment has recorded color information, check that performance against standards, and then make the necessary corrections to yield the image we "should" have expected to get.  Color-wise, that is.  In light of that, I find it odd that we have to add all this information about what sensor our camera uses, the properties (light transmittance curves, sensitivities, etc.) of any Bayer filters, additional filters, etc.  If SPCC has information on "true color" via the GAIA database (the standard), then why does all that matter?  My guess is that GAIA provides standards only for known stellar objects, and that we ought not extend those assumptions to our non-stellar objects.  But why not?  I will revisit that question later.  It also implicates another question: what does SPCC do for its corrections, and how?  Next question.

Second: Why do we need to input all this detailed information on the filters and camera sensors we use?  For example: 1. Suppose all (or enough) of the correct stellar color information for a particular image is contained in the GAIA database.  2. And suppose that image contains only stars.  Should not SPCC be able to take a monochrome image and color it perfectly?  I actually tried doing this, but SPCC responded with a warning that it cannot work on monochrome images.  Why not?  As I suggested above, the assumption may be that star-only color information is not sufficient to calibrate all features of an image.  But I wonder if this has been proven true?  Maybe how SPCC works is the issue.  I have a follow-on question below.  It is not uncommon for surrogate information (such as just stars) to be used to calibrate instruments; it is done all the time.   Bottom-line question from just an idiot: if one is using SPCC, should not SPCC assume that the starting image does not match the color information in the reference frame and simply make the working frame the same?  Why should SPCC care "how" the working frame came to be imperfect?

Third:  A repeat of the question above, "Why do we need all this detailed information...", but from a different perspective!  First off, a good number of the narrowband, LPF, etc. filters are made by vacuum deposition methods.  These filters all have published light transmittance curves.  But that does not mean that any individual filter is a perfect fit to the published curve.  The method of manufacture is certainly not perfectly reproducible.  Some filters can clearly be badly off, and require returns when you are unlucky enough to get a bad one.  But what of those that are not so bad that we, who are not experts, notice or test for the deviation?  In my opinion, SPCC should be telling us what properties our filters have!  A sub-tool of SPCC should measure our filter's performance and log it.  After giving us that information, it could be included in our personalized list of filters/cameras (or even whole image trains) to use in the SPCC process. (Caveat in the Second above.)  It would seem that if SPCC has highly accurate color information for hundreds (if not thousands) of stars in any one of our images, supported by many redundant measurements by the GAIA telescope, that information could easily be used to calibrate each of our filters (and cameras) using a single image with enough stars.  Each star should have a wavelength maximum.  And given very accurate signal strength for each star in the image, comparisons of signal strengths between stars within a certain filter's bandpass should quickly reveal what that filter is passing.  The function may also require access to an unfiltered image from the same camera to compare with the bandpass filter.
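The filter-characterization idea suggested above can be sketched as a toy simulation: compare each star's measured flux through the filter with its reference flux, and bin the per-star ratios by each star's peak wavelength to recover a coarse transmission curve. The Gaussian bandpass, star data, and noise model below are all invented for illustration; real stellar spectra are far more structured than a single peak wavelength, and this is not how SPCC works.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stars = 2000

# Invented stars: a peak wavelength (nm) and a reference flux each
peak_nm = rng.uniform(400.0, 700.0, n_stars)
ref_flux = rng.uniform(500.0, 5000.0, n_stars)

# Hidden "true" filter: a Gaussian bandpass centered at 656 nm
def true_transmission(nm):
    return 0.9 * np.exp(-0.5 * ((nm - 656.0) / 15.0) ** 2)

# Simulated measurement through the filter, with 2% multiplicative noise
measured = ref_flux * true_transmission(peak_nm) * rng.normal(1.0, 0.02, n_stars)

# Per-star throughput estimate, binned by wavelength to recover the curve
ratio = measured / ref_flux
bins = np.arange(400.0, 701.0, 10.0)
idx = np.digitize(peak_nm, bins)
curve = [ratio[idx == i].mean() if np.any(idx == i) else 0.0
         for i in range(1, len(bins))]
peak_bin = bins[int(np.argmax(curve))]
print(f"Recovered passband peak near {peak_bin:.0f}-{peak_bin + 10:.0f} nm")
```

With enough stars per bin, the binned ratios trace the hidden bandpass closely, which is the gist of the proposal: the filter curve falls out of the star statistics.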

Fourth: So how does SPCC work?  What does it do with all the information?  What is its output and how is it generated?  For stars, does SPCC correct each star one-to-one for stars it can find in the GAIA database?  Or does SPCC create a global correction that resembles the models that DBE or ABE create?  Does it correct stars as a whole, separately, and then use a different model to correct the background?  If it is not star-to-star correction, then for stars found in the image that are not in the GAIA database, is there a model of how the camera/filter image train defect/correction is applied?  Does it parse the image and work on sectors, using only GAIA data for those sectors to correct sector by sector, as BXT does its deconvolution?  How does it deal with gradients?  It seems to me that gradients could be a source of major errors if SPCC can only create a global correction factor.  In fact, the PI writeup does state that SPCC should be used after the background is flattened.  So that makes sense.  Background neutralization as a separate feature then seems odd.  Should that not fall out as part of the overall correction?  And if not, wouldn't the background issues cause defects in the CC of other features?  Would it not make sense to do color correction near the end of processing?  After all, that great CC, done early, will easily be skewed by all the other functions applied to an image, some known to cause color shifts.

Additional Question (In Edit):  One main purpose of GAIA was to detect variable light signals in stars, i.e., to discover planet transit signals and such.  These are rare.  But there are many, many variable stars for other reasons.  Because SPCC relies on the accurate photometry of individual stars, does SPCC specifically exclude stars that GAIA has identified as variable when it makes its CC?

It is interesting that the PI writeup for SPCC has a couple of sections that point to the history and origins of reference-based CC in PI and the philosophy of what color means in astrophotography.  I find it odd that they (or at least the writer of these sections) seem to believe there is somehow a defect (maybe an untruthfulness?) to images that are uncalibrated, improperly stretched, or otherwise altered with regard to color.  Yet just above, there is basically a tutorial on how you can "choose" your white balance source, which yields completely color-changed images!  Taken a different way, it implies that if person A arrives at an identically colored image of the same object as person B, yet person A started with a properly color calibrated image, only the image from person A is somehow OK.  But then the final, philosophical question is: how accurately does an image need to be calibrated to yield the images we want?
Chris Bailey avatar
At the end of the day SPCC is computing a single set of Red and Green Correction factors with respect to Blue (Unity). If the frequency response and sensitivity of filters and sensors were linear then it would be a simple matter of measuring image stars and reference stars and carrying out some form of regression analysis to calculate best fit scaling factors. SPCC would have no need to know what sensor/filter combination you were using. The filters and sensors however are misbehaved so there has to be a correction based on the sensor and filter used. With strong gradients, DBE should be used prior to SPCC as otherwise the correction factors would be 'skewed' by the gradients.

I find, for the most part, SPCC works remarkably well.
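Chris's description can be sketched numerically: with blue pinned at unity, a least-squares fit of measured star fluxes against catalog fluxes yields one scale factor each for R and G. All numbers below (star fluxes, the simulated sensor response) are invented for illustration; this is a sketch of the regression idea, not SPCC's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stars = 200

# Invented catalog fluxes per channel (columns: R, G, B)
catalog = rng.uniform(100.0, 5000.0, size=(n_stars, 3))

# Simulate an instrument that under-reports R and G, plus noise
true_scale = np.array([0.62, 0.81, 1.00])
measured = catalog * true_scale + rng.normal(0.0, 5.0, size=(n_stars, 3))

# Least-squares slope through the origin: ref ≈ k * measured
def fit_scale(meas, ref):
    return np.sum(meas * ref) / np.sum(meas * meas)

# Normalize both sides to blue, then fit one correction per channel
corr_r = fit_scale(measured[:, 0] / measured[:, 2], catalog[:, 0] / catalog[:, 2])
corr_g = fit_scale(measured[:, 1] / measured[:, 2], catalog[:, 1] / catalog[:, 2])
print(f"R correction ≈ {corr_r:.3f}, G correction ≈ {corr_g:.3f}")
```

The fitted corrections land near the reciprocals of the simulated sensor response (about 1.61 for R and 1.23 for G), recovering the skew from the star data alone, which is the linear case Chris describes.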
Bruce Donzanti avatar
Alan

If you have not seen this excellent video by Adam Block (PixInsight SpectroPhotometric Color Calibration: Part 2 (The SPCC Process) - YouTube), it may address just about all of your questions.   And after using it since it was made available, I find it to be excellent.


Bruce
Alan Brunelle avatar
Chris Bailey:
At the end of the day SPCC is computing a single set of Red and Green Correction factors with respect to Blue (Unity). If the frequency response and sensitivity of filters and sensors were linear then it would be a simple matter of measuring image stars and reference stars and carrying out some form of regression analysis to calculate best fit scaling factors. SPCC would have no need to know what sensor/filter combination you were using. The filters and sensors however are misbehaved so there has to be a correction based on the sensor and filter used. With strong gradients, DBE should be used prior to SPCC as otherwise the correction factors would be 'skewed' by the gradients.

I find, for the most part, SPCC works remarkably well.

Thanks Chris,

I very much appreciate your response.  I will try to continue to digest what you are saying.  I have limited capacity to understand everything you say here about what SPCC is generating, but I think I understand the concept of generating a single set of correction factors.  I may never fully appreciate the point about linearity in frequency response.   What you say seems to have major implications for the quality of work done to prep the image for SPCC application.  How well you correct the background is then critical.  No easy task.  Yes, it is easy to get a result by pushing a button on ABE or DBE or another tool, but the quality of what those tools provide is therefore the paramount driver of any CC quality.  Some images are easy: deep dark space between stars, etc.  Others are nearly impossible, such as the North America Nebula, where there is no known background from which to create a model.  But more and more processes are showing up these days that work in the same unstretched environment and are known to color shift stars or the whole image, and these will then impact the SPCC result.  So knowing when to apply SPCC becomes another part of teaching how to apply SPCC correctly.  That, of course, only matters to people who feel they need to get perfect colors.  And also only to those who can discriminate between parts per thousand in any image.  And then also to those who don't immediately destroy their color balance in the many steps that follow color correction.

Perhaps the logical next tool PI needs to make this CC actually approach perfection is a CC tool that uses the background (non-stellar) color from a GAIA-like database to replace DBE?  Logic dictates that you need both to be correct to get to correct color...

I do find that SPCC works well.  I have not used it for very long because I was not sure that devoting a decent fraction of a hard drive to the databases was worth it.  And I am still not sure of that.  Adam Block's video title on this tool, with the implication that "our CC is not accurate unless we use SPCC," is probably technically correct (but only if you choose the correct check boxes and choices, if available).  However, our "art" is visual, and it is one thing to say that a correction factor (of 0.99901) makes your image color correct, but quite another to ask whether you could tell the difference visually.  He implied that PCC was not as accurate as SPCC, and that also may be correct.  But can anyone look at an image and know?

The PI web site states that SPCC uses the same engine as PCC, and that the difference in performance is therefore due to the quality of the lookup tables in the different databases.  SPCC is remarkably fast (and good, in my opinion).  I wonder, then, why we all have to carry the database on our local computers?  Our images' fields of view are mostly all remarkably small.  So why not call up the GAIA info remotely when needed, as PCC did?  I will assume that the folks who hold the GAIA database have asked (or thanked!) PI that their computers not be subjected to downloads from a bunch of astrophotographers!

I chose to adopt SPCC and the database burden because I believe that having the database offers additional benefits beyond its use in SPCC.  And I want to do photometry in the near future.  Otherwise, I certainly would not find it essential, since PCC worked very well for me and I cannot really detect differences in real images.
Alan Brunelle avatar
Bruce Donzanti:
Alan

If you have not seen this excellent video by Adam Block (PixInsight SpectroPhotometric Color Calibration: Part 2 (The SPCC Process) - YouTube), it may address just about all of your questions.   And after using it since it was made available, I find it to be excellent.


Bruce

Thanks Bruce,

I did see the video.  In fact, this video and the SPCC writeup on the PI web site are what caused my questions to surface.  The description of the quality and nature of the GAIA data is very clear.  How comparing current data to a standard data set such as GAIA as a reference to make corrections is also very clear.  As I said, I am familiar with how calibration works in a chemistry and biochemistry lab setting.  Reference standards are key.  But Block's description of SPCC's required inputs is presented in a way that does not explain why the filter and sensor information is necessary.  At all.  Go to the section around 30 min where he describes (with the SPCC example) how the "white point" is derived, and he keeps saying we arrive at the white point that we choose and/or want to use.  Confusing matters further, related to the white point statements, the PI documentation also describes how you can choose different white points to make complete color shifts in an image, and gives multiple examples of doing so.  That the program can choose a white point for us is almost a moot point, because we decide (or are not even aware) whether we want it to use some arbitrary standard, such as a specific type of face-on spiral galaxy, as white.

But I have to say my point still stands.  And please read my response to Chris as well.  Consider that we have an image.  We acquire it with our standard rig (let's say OSC).  And then we reacquire it with a partial blue filter that depresses the R and G by 50%.  If we process both with SPCC, SPCC should see that only a very slight correction is needed (ignoring the Bayer issue) for the standard image.  SPCC should also detect, very accurately, a 50% reduction in R and G, meaning that the stars will look too blue by a specific amount.  An easy and accurate correction would be to either depress B or raise R and G by the specified amount.  All that is required is the information in the images and the reference.  In fact, there is no need for anything but the standard and one of the images we acquired.  Once the image is in memory, there is no reason that any CC should need information about how that image was acquired.  (This in my simple mind.)  What Block's video says, in capsule, is that SPCC generates a single corrective number each for red and green that is applied to the whole of the image without consideration of position.  The number is derived from an arbitrary, changeable (but thoughtful) approach to a white point, and from the accurate GAIA database of the stars presented within the field of view being calibrated.  To state it another way: if my image contains Mu Ceph, a pretty red star near the Elephant Trunk Nebula, my image will present it as a collection of pixels whose color is defined with three numbers, one each for red, green, and blue.  SPCC has no choice but to use that information in a way that pins the blue color at what it is, then corrects the red to come into line with the standard, and the same for the green.  In other words, no matter where my image's information came from, SPCC will make the triad of numbers be the same as the reference.
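The thought experiment above reduces to simple arithmetic. A minimal sketch with invented channel values, assuming a hypothetical filter that halves R and G while passing B:

```python
# Invented "true" star color from the catalog and a hypothetical filter
reference_rgb = (1.00, 0.95, 0.90)
filter_response = (0.50, 0.50, 1.00)   # halves R and G, passes B

# What the instrument records
measured = tuple(r * f for r, f in zip(reference_rgb, filter_response))

# Per-channel corrections from image vs. reference alone,
# normalized so blue stays at unity
corr = tuple((ref / m) / (reference_rgb[2] / measured[2])
             for ref, m in zip(reference_rgb, measured))
print(corr)  # → (2.0, 2.0, 1.0)
```

The corrections (2.0, 2.0, 1.0) fall out of comparing the measured and reference triads; nothing about how the image was acquired entered the calculation, which is the crux of the question.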

The image train imparts its own unique color skew to the image.  That skew is derivable solely from the image itself in comparison to the standard.  The positive feature of SPCC is the accuracy of the standard.  I still can't wrap my head around the complex requirements for use...  But other than what I see as unnecessary complexity, it works well.  To the extent that I can tell.

Ask this question: if we need to input information for each filter and sensor, then why do we not need to input information for sky conditions?  After all, even the best light pollution filters are not perfect, so your proximity to a city is critical for background interference.  So why not a category for which city you live near?  What about oxygen sky glow, which is highly variable?  What about subtle aurora effects that are often present and highly variable?  These all happen in the wavelengths our filters pass.

In a practical sense, this is why the background becomes a critical component of CC and why this task must be fully completed prior to CC.  Anyone who has performed star removal, such as with StarXTerminator, knows that the stars' colors, once the stars are extracted, are different from the stars in the image, because the background contributes very substantially to the final star color.  The continued "accuracy" of stars is critically dependent on the exact levels of the background on which the stars lie in our images.  If, after you do CC, you use any method of star replacement in which the background is altered in brightness or some other property independently of the stars, you will completely waste your CC.  At least for the stars.  The same applies to altering the star intensities independently of the background; star reduction comes to mind as one such task.  So care needs to be taken by anyone who wants to keep color fidelity throughout.
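The background-contamination point can be shown with a small numeric example (invented ADU counts): the apparent color of a star in the image is star plus background, so changing either component independently changes the star's color.

```python
# Invented counts: a reddish star over a bluish sky background
star_adu = (300, 200, 100)       # intrinsic star signal (R, G, B)
background_adu = (50, 100, 200)  # sky background (R, G, B)

# What the image actually records at the star's pixels
combined = tuple(s + b for s, b in zip(star_adu, background_adu))

# R/B ratio as a crude color index
color_star = star_adu[0] / star_adu[2]       # 3.0 — strongly red
color_combined = combined[0] / combined[2]   # ≈1.17 — nearly neutral
print(combined, color_star, round(color_combined, 2))
```

A strongly red star reads as nearly neutral once the background is added, so extracting or rescaling either layer after CC shifts the star colors the calibration was meant to fix.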

Alan
Chris Bailey avatar
I’m not going to try and answer all your points, but in respect of local atmospheric conditions, they are a constant and therefore correctable, as long as they are linearly distributed across your image. If not, you need DBE.

Imagine you are imaging everything with the addition of a red filter with true linear response…again…that is easily correctable.

Imagine further putting on a pair of sunglasses with a tint. When you put them on, the tint is obvious. After a while, it’s not. Brain instigated colour calibration.
Alan Brunelle avatar
Chris Bailey:
Imagine you are imaging everything with the addition of a red filter with true linear response…again…that is easily correctable.


I think that is my point.  Here is my thought process, which may be too simple, and I may never understand this:  SPCC comes up with a single correction value for the red skew.  For example, each star (assume all stars have the same intensity) has a certain value within what the sensor will see as red, but with a different specific wavelength profile, star to star.  The non-linear filter will see each "red" intensity for the different stars and report a slightly different value on the red sensor, because each star has a unique emission in the red spectrum and that light falls on different parts of the filter's bandpass.  So the sensor will report a slightly different value for each star.  This actually points to the spectral throughput of the filter.  (Which is why I feel SPCC should be telling us how our filters are performing, rather than we telling SPCC.  Think of data from thousands of stars with high quality spectral information from the reference!)  So there is potentially great specificity available to the system, considering the spectral data available from GAIA.  But SPCC does not use that.  It creates a single number, or a set of two, that globally impacts the image.  That means all the specific information becomes a best average correction, with each star being corrected the same, which means that no star is specifically, accurately corrected.  The color image reports, and can only report, three values, one for each color.  Only those three values can be compared to a reference.  If the reference has more information than that, i.e. high resolution spectral information, then I can see that, with the non-linear filter information, it can be predicted how the test image pixels should report those three color values.  That seems usable only star-to-star.  But here we are not fixing colors star by star.  SPCC comes up with two reference numbers for the whole image, and the fix is a single homogeneous adjustment to the two color channels globally.
That makes the filters' and sensor's spectral response characteristics irrelevant.  If SPCC generated a "model" image like ABE or DBE can, we would see that SPCC's model would be perfectly flat, with the same values at each pixel.  That is less complex than what DBE or ABE is doing or at least capable of doing.
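The information-loss argument above can be sketched as follows: each star implies its own ideal red correction, but a single global factor is (roughly) their average, leaving per-star residuals uncorrected. The spread of per-star corrections here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stars = 50

# Invented per-star "ideal" red corrections, varying because a
# non-flat filter response weights each star's spectrum differently
per_star_corr = rng.normal(1.30, 0.05, n_stars)

global_corr = per_star_corr.mean()        # one number for the whole image
residual = per_star_corr - global_corr    # per-star error left behind

print(f"global ≈ {global_corr:.3f}, "
      f"worst residual ≈ {np.abs(residual).max():.3f}")
```

The global factor is a best average: every star gets the same multiplier, so each individual star retains a small residual error, which is the trade-off being described.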

I'm not trying to be critical of SPCC.  I use it, I like it.  I'm just wanting to know enough to make sure I apply it knowingly and correctly and understand its caveats.  For example, almost all star reduction methods, or other methods that differentially manipulate the background and star images, put the CC at risk, because star color is the sum of the color intensities of the two independent images when star removal is a component of the technique.
Chris Bailey avatar
I believe a photometric-based ‘DBE’ is in the pipeline, though how this would work with sparsely populated regions is a question.
Alan Brunelle avatar
Chris Bailey:
I believe a photometric-based ‘DBE’ is in the pipeline, though how this would work with sparsely populated regions is a question.

I think the concept behind these new tools is remarkable when coupled with the large amount of space data becoming available.  I can only guess and hope about what the future brings.  In doing astrophotography, it remains to be seen whether more deeply engineered tools will be of true value, but some of the newer ones are being adopted very quickly.  With regard to color, I am not sure how much more this community can use, given the free manipulation of color that is so typical.  But rules would tend to make every image the same, and that would not be a good thing.