Any target and acquisition suggestions for areas with poor seeing?

Connor Kessler · Rick Veregin · Scott Badger · Joe Linington
26 replies · 1.2k views
Connor Kessler
Hi, all.  

I was hoping to pick folks' brains about ideas for targets, and perhaps acquisition techniques, for areas with routinely poor seeing.

For context, I live in the San Francisco Bay Area on the SF peninsula.  My place of residence, where I do 90% of my imaging, is a few minutes south of SF and right on the coast.  As such, light pollution is a bit of an issue (my backyard is at least in a relatively dark pocket, thankfully) and atmospheric moisture is extremely noticeable.

I would bet 75% of nights that are clear enough for imaging have poor to just-below-average seeing because of how much moisture is in the air.  I've really struggled to reconcile myself with this reality.  My current primary setup is an Orion EON 130 with a 0.8x reducer and either the ASI294MM or MC Pro.  Shot at bin 2x2, my pixel scale is about 1.3 arc-sec/px.  I also have an Askar FRA300 quintuplet at my disposal, but the pixel scale is extremely undersampled unless I use 1x1 on the mono.

Far too often I find that the seeing has resulted in data that's just barely too mushy to really give an image that awesome "pop" in detail when responsibly sharpened.  I've definitely been slowly improving my processing skills over the last couple of years, but there comes a point where garbage in still means some garbage out, processing skills be damned.

So my question is: does anyone have any suggestions for targets or acquisition techniques that would help maximize the quality of my images in an area with such difficult seeing conditions?  Maybe this is a question that doesn't really have an answer, but I thought I would give it a shot anyway.  In case it is worth mentioning, I have been considering trading in the 294s for a 2600MM at some point to get more framing flexibility, but I don't know when that might be.

Thoughts?  I'm willing to travel and go camping for better-quality images (like my latest Wizard Nebula), but I can't do it too often.  But maybe 1 night under a good, clean dark sky is better than 5 nights of subpar seeing?  As you can tell, it's all quite frustrating, but rather than giving up on shooting at home altogether, I want to know if anyone has tips on how to make the most of what I'm dealing with.

Thanks for reading!
Brian Boyle
Hi Connor,

What actually is your seeing?  

In my experience [both as a professional astronomer and as an amateur astrophotographer] there is no observation/target that can't be improved by better seeing.  That's the bad news.

But there is good news too.   

I live in a dark Bortle 2 area with routinely awful seeing.  My typical seeing is around 4-5 arcsec; on the best of nights I may get to about 3 arcsec, but this is very much the exception.  Rather than give up and go elsewhere, I have evolved the following strategy.

As with all strategies, it is driven by the end goal.  In my case this is to promote astronomy and dark skies to the public.  I give around 20 presentations a year and my images feature in a few exhibitions every year.  I have even sold some, so it must work to some extent!  Typically, my images are displayed at 4K resolution for the public and printed at 75 x 50 cm.

My primary telescope is an RC8 - 1600 mm focal length - with a 6200MM camera.  On the face of it, this looks crazy.  [On further reflection, it may still be crazy.]  0.48 arcsec/px in 4 arcsec seeing!  This drives me to binning x2.  I used to resample in post-processing, but the size of the frames and the increasing age of my Mac have now led me to bin on the sensor.

At bin x2, I am oversampling the seeing disk by roughly a factor of 3-5 [depending on the seeing], which gives me leeway to crop or further resample in post-processing while still being able to display on a 4K device or print at 200-300 dpi up to 70 cm width.  [And yes, I have sold a number of images at this size!]
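For anyone who wants to check my arithmetic, the numbers above fall out of the standard pixel-scale formula; here's a quick Python sketch [the 3.76 micron pixel size of the 6200MM is the only number not quoted above]:

```python
# Pixel scale in arc-sec/px = 206.265 * pixel size (micron) / focal length (mm)
def pixel_scale(pixel_um, fl_mm):
    return 206.265 * pixel_um / fl_mm

native = pixel_scale(3.76, 1600)      # RC8 + 6200MM at bin 1
binned = pixel_scale(3.76 * 2, 1600)  # bin x2 doubles the effective pixel

print(f'bin1: {native:.2f}"/px, bin2: {binned:.2f}"/px')
for seeing in (3.0, 4.0, 5.0):
    print(f'seeing {seeing}": oversampling factor {seeing / binned:.1f}x at bin2')
```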

Now, I realise that full-frame sensors are not universally loved here on AB.  They are unaffordable for many, and few telescopes [including mine] have the capacity to deliver quality images across the full field of the sensor.  Of course, this latter issue is ameliorated by binning/resampling at factors of 2-3.  Binning doesn't affect field of view, and I still have a decent 1deg x 0.7deg field of view.  If a target won't fit into this, then I will mosaic.  At up to four fields, I don't find mosaicing too painful.

Nevertheless, you did mention that you were thinking of a 2600MM Pro, and I think that is a very good option.  With resampling by a factor of two you can still generate 4K images and very large prints.

You didn't mention it, but I would also invest in RC Astro's BlurXTerminator and StarXTerminator packages for PI, as they have revolutionised my images.  Used wisely, they can further reduce the worst visual effects of poor seeing.

Finally, another approach I have tried is to buy luminance imaging time on a public telescope, to add to my home-grown RGB imaging.  
An hour or two can make a huge difference here, and is certainly more affordable than a new FF sensor.

In summary, I don't pick targets for poor seeing - but try to evolve strategies that give me what I want.  That is, images to entertain and inform the public [and my friends] at a resolution that most people find impressive.

This won't appeal to the pixel peepers, but that is not my intention.   I have learned to stop worrying and live with [if not love] poor seeing. 

CS Brian
Connor Kessler
This is a solid response, Brian, thank you!

Re: my specific seeing, I honestly don't know the best way to even calculate that.  What I can tell you is that there are plenty of nights I think the stars might fall right out of the sky from how much shaking they're doing :)

Yeah, I thankfully managed to accept the truth about seeing pretty early on so I appreciate you understanding where I'm coming from in trying to make the most of it.  

I agree with you on bumping to the 2600mm as I think it will be a lot more flexible across the varying conditions and help get the most out of imaging here.  

I've also considered doing more mosaics to get respectable image scales that print and show well without making the pixel imperfections too obvious or distracting.  I should have also specified that I do use those AMAZING programs from RC and they have really pushed my processing abilities up a notch, but as you said, I need to be careful.  I learned pretty quickly that all the benefits can instantly backfire with overzealous use of those tools.

Honestly, I think the bit you mentioned about using public L data is the most profound to me.  I had never even thought of that option.  I'm obviously far less picky about seeing with shooting only the color so if I can supplement with quality luminance that would be a life saver and still give me the satisfaction of knowing I still acquired the bulk of the data myself.
Guillermo (Guy) Yanez
Bad seeing is not good news when it comes to imaging. Planets, the Moon and the Sun are high-resolution-demanding objects. The same goes for double stars; I do not see how you are going to split them in bad seeing. Nebulae are faint and require a stable atmosphere to get those subs right. Now, I do think that open clusters might be your friends in these adverse seeing circumstances. Those objects are not as impressive in images as they are visually, but you may find some interesting targets among these types of star clusters.
Try short sub exposures, such as one or two minutes at maximum. Then stack a couple of hours and see if you get decent results (especially getting the noise down). I would try to select large clusters in a wide field of view. Wide images are very forgiving of bad seeing as long as you do not need to pull out fine details. I do not know where you are located, but the Double Cluster in Perseus is one good target in the north, and there are plenty in the south. In southern latitudes, you may also include some large globular clusters such as NGC 5139 and 47 Tucanae. M13 might work in northern latitudes.
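The noise part of this is easy to convince yourself of: averaging N subs reduces random noise by roughly the square root of N. A toy Python/numpy sketch with pure synthetic noise (illustration only, no real signal):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subs, shape = 64, (256, 256)

# 64 simulated short subs containing nothing but Gaussian noise (std = 1)
subs = rng.normal(0.0, 1.0, size=(n_subs, *shape))

single_std = subs[0].std()            # noise in one sub
stack_std = subs.mean(axis=0).std()   # noise after averaging all 64

print(f"single sub: {single_std:.3f}, 64-sub stack: {stack_std:.3f} "
      f"(expected ~ 1/sqrt(64) = 0.125)")
```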
I wish you luck

Guy
Rick Veregin
Hi Connor
Can you provide a bit more detail please on your setup? Based on what I could find, here are my numbers and suggestions.
  • From what I can see, the Orion EON 130 is f/7, so 910 mm FL? With a 0.8x reducer that is 728 mm FL and f/5.6. And the pixel size in this camera is 4.63 micron?
  • With this input at 728 mm I get 1.31 arc-sec/px. For your aperture the resolution defined by the Rayleigh limit is 1.1 arc-sec; since your seeing is worse than this, seeing is indeed your limiting factor. This is the most common situation for those of us on planet Earth, at the bottom of our atmosphere.
  • Now, with seeing your limiting factor at 3 to 5 arc-sec, ideally you want your pixel scale to be at least 1/2 to 1/3 of this; this is the Nyquist sampling criterion. That brings you to an optimal pixel scale in the range of 1 to 2.5 arc-sec. So at 1x1 bin and 1.31 arc-sec/px you would be doing the best you can with your seeing for optimal resolution.
  • What I don't understand, then, is why you would bin 2x2. That binning takes you to a 2.62 arc-sec/px scale, way too coarse considering your seeing, so at 2x2 binning it is the binning that limits the resolution. Over-coarse sampling would normally show up as pixelated stars, though, and from what I can see your stars are blurry, not pixelated. So at 2x2 binning it is difficult to understand why the stars would still be fuzzy.
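For reference, here is how those numbers are computed; a quick Python sketch (550 nm is my assumed wavelength for the Rayleigh limit):

```python
import math

def pixel_scale(pixel_um, fl_mm):
    """Arc-sec per pixel: 206.265 * pixel size (micron) / focal length (mm)."""
    return 206.265 * pixel_um / fl_mm

def rayleigh_limit(aperture_mm, wavelength_nm=550.0):
    """Rayleigh resolution limit in arc-sec: 1.22 * lambda / D, converted."""
    return math.degrees(1.22 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)) * 3600

scale = pixel_scale(4.63, 728)   # EON 130 with 0.8x reducer, 4.63 micron pixels
limit = rayleigh_limit(130)      # 130 mm aperture

# Nyquist: for 3-5" seeing, aim for a pixel scale of seeing/3 to seeing/2
lo, hi = 3.0 / 3, 5.0 / 2
print(f'pixel scale {scale:.2f}"/px, Rayleigh limit {limit:.1f}", '
      f'optimal range {lo:.1f}-{hi:.1f}"/px')
```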

Was there a reason you decided to bin? Binning can hide collimation problems, guiding problems, focus problems, and so on, but if you hide an issue rather than dealing with it, you will never resolve the problem. And I would hate to see you give up on what your setup and conditions can do without getting to the root of the issue.

My suggestion: you are in a good spot with no binning, so get rid of the 2x2 bin; it will not be helping and indeed appears to be hurting you. Your images have nice detail, so two common issues remain: either your stars are overexposed, or you are overstretching and bloating them.


If you still see issues with no binning, then reply back to this forum and hopefully we can help to understand the underlying problem. If there is a particular image that illustrates the problem you are seeing that might be helpful.

At this point I would not make any major equipment changes; I don't see how equipment is the root of your problem.
CS
Rick
Connor Kessler
Rick Veregin:
Can you provide a bit more detail please on your setup. […]

Hey, Rick! Learning about the Nyquist bit was really helpful; that's new information for me to start taking into consideration.  As for the binning and image scale, I'm not really sure how the calculation is done, but both Stellarium and the AstroBin measurements have calculated my scale at ~1.31 arc-sec/px.

Also, I don't know if this is what's throwing you off re: why I shoot at bin 2x2, but it's mostly a quirk of the ASI294MM Pro: it shoots at 2x2 as the default to match its MC counterpart.  The 294MM uses a completely different sensor from the MC, though, so the 294MM can actually drop down to bin 1x1 with the native 2.3 micron pixels.  The trade-off is that the image files are gigantic, and the bit depth drops from 14 to 12, as does the well depth.  I almost exclusively shoot bin 2x2 by default.  So the 1.3 arc-sec/px is calculated assuming the camera is operating at 2x2, not 1x1.

Not sure if that's clarified anything at all.  Hopefully that helps you make more sense of the binning situation.
Rick Veregin
Connor Kessler:
Hey, Rick! Learning about the Nyquist bit was actually really helpful as that's new information for me to start taking into consideration. […]

Thanks, I had missed that quirk of your camera--now I understand why it is 2x2 bin. That makes sense.  So at 1.3 arc-sec/px, sampling at a minimum of 2x to 3x, your sampling is good for seeing conditions of 2.6 to 3.9 arc-sec. For most of us that is average to very poor seeing, though you might see 5 arc-sec on a terrible night. So for the most part you are probably not limited by your seeing.  Now, if your seeing were good, then you would be a little under-sampled, so limited by your resolution. Thus if you feel your stars are getting bloated, something other than seeing is the cause.

I would suggest you look at the FWHM of stars in your subs (without stretching). ASTAP (the Astrometric STAcking Program) is free. It shows the HFD (equal to the FWHM if the stars are perfectly Gaussian, but HFD is less sensitive to star shape). It can also show you star asymmetries across your image, image tilt such as from a tilted camera, etc. The point is you can see whether your HFDs are where they should be based on your seeing. Note it gives the values in pixels, but it can plate solve too, so if you plate solve the frame the values will be in arc-sec. If the stars in your raw image are much wider than you expect based on seeing, there is another issue in the data collection (focus--are you using a Bahtinov mask to be sure of focus--tracking, etc.). If your star widths are as expected, you can also inspect the FWHM of your stretched image in ASTAP, to see if you are bloating the stars in processing.
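To give a feel for what a FWHM measurement is doing (this is just an illustration, not ASTAP's actual algorithm): for a roughly Gaussian star, FWHM = 2.355 x sigma, and sigma can be estimated from intensity-weighted moments. A toy numpy sketch on a synthetic star:

```python
import numpy as np

def fwhm_px(star):
    """Estimate FWHM (pixels) from intensity-weighted second moments."""
    yy, xx = np.indices(star.shape)
    w = star / star.sum()
    cx, cy = (w * xx).sum(), (w * yy).sum()
    var = (w * ((xx - cx) ** 2 + (yy - cy) ** 2)).sum() / 2  # mean x/y variance
    return 2.3548 * np.sqrt(var)  # FWHM of a Gaussian = 2.3548 * sigma

# Synthetic Gaussian star, sigma = 2 px, on a 25x25 patch
yy, xx = np.indices((25, 25))
star = np.exp(-((xx - 12) ** 2 + (yy - 12) ** 2) / (2 * 2.0 ** 2))

f = fwhm_px(star)
print(f'FWHM {f:.2f} px -> {f * 1.31:.2f}" at 1.31"/px')
```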

Your star shapes look good to me, though not that Gaussian--they might be overexposed in the subs (but I don't think so based on your settings). I saw 60 seconds in one of your backyard images for RGB, which seems reasonable at low gain, unless you have high LP.  I assume you checked your histogram to make sure you are not blowing out your stars. More likely the stars are stretched too far. One solution, if you cannot find a happy compromise stretch, is to use an AI star removal program like StarXTerminator, which works great in PI or PS (or there is the free standalone StarNet, though not quite as good), then do a second processing pass for the stars only. So one run through processing stretched for nebulosity and one stretched for great stars, then add the good stars to the good nebulosity layer. Also, deconvolution, if not overdone, can make your stars look much better. I also find that using a rejection method in stacking improves the star shapes and overall look quite a bit, as it reduces the effect of big drifts in seeing, for example.
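Once StarXTerminator has split the image, the two-pass recombination is just arithmetic. A minimal numpy sketch, with asinh standing in for whatever stretch you prefer and a screen blend for the usual PixelMath-style recombination (synthetic data, purely illustrative):

```python
import numpy as np

def asinh_stretch(img, strength):
    """Simple asinh stretch; higher strength lifts faint signal more."""
    return np.arcsinh(strength * img) / np.arcsinh(strength)

def screen(a, b):
    """Screen blend, i.e. ~combine(starless, stars) in PixelMath terms."""
    return 1.0 - (1.0 - a) * (1.0 - b)

rng = np.random.default_rng(1)
starless = rng.uniform(0.0, 0.05, (64, 64))  # faint linear nebulosity stand-in
stars = rng.uniform(0.0, 0.80, (64, 64))     # bright linear star layer stand-in

nebula = asinh_stretch(starless, 500.0)  # hard stretch for the nebulosity
star_layer = asinh_stretch(stars, 20.0)  # gentle stretch keeps stars tight

combined = screen(nebula, star_layer)    # recombine the two stretches
```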

By the way, Ha is less affected by seeing, so one could use Ha as the luminance layer for the stars, and the RGB star data just to provide the color. This could help a little when your seeing is really bad. 

Looking at your images, I would not say you should stay away from any targets. That being said, with smaller aperture and FL, targets that fill your field of view will be best. Small targets typically need longer FL and larger aperture to get the plate scale and resolution you need, and seeing needs to be reasonably good. This is especially true for small PNe etc. with a lot of detail; it will be difficult to match images taken with larger FL/aperture and better seeing.

Personally, to help select targets I watch what others have done with aperture/FL similar to mine (in my case 235/1480 mm), taken where seeing is typical (not top-of-the-mountain stuff), but where their images look great. Then I know I am tackling an image that I should be able to do.  Of course, if they have a much smaller f-ratio than mine, I know I will have to increase my total exposure, but at least I know I'm not taking on a target that will never look great for me.

Hope this helps.
Rick
Connor Kessler
Rick Veregin:
Thanks, I had missed that quirk on your camera--I understand now why it is 2x2 bin. […]

Thank you, Rick! This was very, very helpful and I will absolutely look into ASTAP for better analysis of my data. 

As for star stretching specifically, star processing is still very much my Achilles heel at this stage of my processing abilities.  My histograms look good and I'm always careful not to overexpose; however, I fully acknowledge I still oversaturate and/or stretch stars too far, as I'm not yet familiar with a reliable workflow for clean, pleasant star processing.  I'm mostly doing trial-and-error experimenting with bits and pieces of techniques I've picked up on the web (including the use of StarXTerminator).  I find that star processing is a high-finesse thing and there's almost *too* much varying info out there on the best way to process them.  I get a bit lost in it all.  If you have any links to star-processing info that you think is quality and reliable, I would love to see them.  I've also been considering purchasing Adam Block's recent "Stretch Academy" to learn from him as well.
Scott Badger
Moisture is actually a friend of seeing, though not so much of transparency… The real driver of bad seeing, as I understand it, is thermal variance, made worse by wind and/or topography that adds turbulence. I live in the northeast, and a really good night is anything below 2.75" (it never gets below 2"); more typically it's in the 3"-4" range. Summer nights tend to be on the good side, but winter is awful…

I agree that choosing targets according to seeing conditions probably won't make much of a difference. Getting lums on a public scope is a great suggestion that hadn't occurred to me either, but a cheaper version would be to shoot lums when seeing is best and RGB when it's not. Also, bad seeing causes, and gets compounded by, poor guiding and less-than-optimal auto-focus runs, so honing your guiding and focusing as much as possible will mitigate those follow-on effects. And wind. Whether seeing is good or bad, wind sucks. Building a small shed-style observatory was a game-changer for me. Though not a dome, I have a unique (maybe just odd…) roof system which protects against wind almost as much, actually more in some circumstances, and far better than a roll-off. Anyhow, before the observatory even a less-than-5 mph breeze would ruin imaging, but now I can image in 25 mph.

For processing, BlurX makes a huge difference, at least over what my skills got out of traditional tools. BlurX likes lots of signal, so I spend a long time on a target (around 20 hours) to maximize what it can do. You mention StarX, so I assume you're separating stars and target and stretching them separately? If not, that really helped me with star quality. Also, I find GHS to be the best stretch tool for stars.

FWIW,

Cheers,
Scott
Joe Linington
I don't think your situation is very different from that of many of us. I get a few nights of 2 arcsec or better, but they are very rare. I image at a similar scale using a SharpStar 76EDPH and a QHY294M at bin 1, and at 0.83 arc-sec/px with a 102 mm scope and the same camera, both with a 0.8x reducer. Some observations I have made:

Adding lum (or Ha as Lum) from my 102mm scope to data collected with my 76mm really does improve the resolution to the level of the 102mm. I tried this recently as an experiment and was blown away with how well it worked.

This may be controversial, but I have found that I don't like the stars from either of my scopes after processing with BlurXTerminator. I still love what BXT does for the main image, but I have found it is actually too good at reducing stars and makes mine almost always look spiky and pinched. Instead, I have started using less-sharpened stars that I reduce with this procedure, much later in the workflow:

https://pixinsight.com/forum/index.php?threads/star-reduction-using-pixelmath.18855/
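I won't reproduce the whole thread, but the general flavor of this kind of star reduction is an interpolation between the original and a starless version. A generic numpy sketch (not necessarily the exact PixelMath expression from the link):

```python
import numpy as np

def reduce_stars(image, starless, amount):
    """Blend toward the starless image; amount=0 keeps stars, 1 removes them."""
    return (1.0 - amount) * image + amount * starless

rng = np.random.default_rng(2)
starless = rng.uniform(0.1, 0.3, (32, 32))                 # background stand-in
image = np.clip(starless + rng.uniform(0.0, 0.6, (32, 32)), 0.0, 1.0)

reduced = reduce_stars(image, starless, amount=0.5)  # halve the star signal
```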

Just some of my observations.
Scott Badger
Joe Linington:
This may be controversial but I have found that I don't like the stars from either of my scopes after processing with BlurXterminator. […]

Have you decreased the "Sharpen stars" amount from the default and still get spiky/pinched stars? As it sharpens, BX can sometimes accentuate aberrations.

Cheers,
Scott
Joe Linington
Scott Badger:
Have you decreased the "Sharpen stars" amount from the default and still get spiky/pinched stars? As it sharpens, BX can sometimes accentuate aberrations.

Cheers,
Scott

I have played with that a little, but not enough yet. I may try combining both methods to see if I like the results. I also have to play with the halo reduction once I find a nice level of star sharpening, to get just the right amount. It's all a steep learning cliff: once you think you have it figured out, you find something else to improve.
Rick Veregin
Connor Kessler:
Thank you, Rick! This was very, very helpful and I will absolutely look into ASTAP for better analysis of my data. […]


Actually, stars are the most difficult thing for most of us mortals--in my own images, and in many others on AB, more often than not the stars are not wonderful--though personally I'm putting some effort into improving, and hopefully am improving. Nebulosity typically needs a severe stretch, while bright stars look wonderful with little stretch. And stars are symmetric with a well-defined, known shape, so they absolutely show any problems in imaging and processing--they are the canaries in the coal mine.

I use StarTools for processing; it has a manual stretch I like. A steep development stretch gives sharp stars with less halo, but makes fainter stars very faint and bright stars too bright. A shallow stretch brings up fainter stars (and background/noise) without blowing out bright stars, but makes stars fuzzier and increases the halos. Very sharp stars look unnatural; they are not Gaussian (an approximation to the most obvious part of the Airy disk). So it is generally a trade-off: reduce the slope of the stretch enough to give pleasing and slightly soft-looking stars, which is what they should look like--stars are not hard points, they exhibit an Airy disk, which actually trails off to infinity, so definitely not a sharp point.

In Photoshop or PixInsight one can do a masked stretch: mask out the stars, stretch, and repeat. That way only the nebulosity is stretched further; the stars are left alone. StarTools doesn't allow a masked stretch, and I don't do PI.

StarTools also has a sophisticated deconvolution routine, as well as star reduction and halo reduction routines. They are not AI driven but more conventional, and one can control their effects in many ways, which AI typically doesn't allow; AI tools tend to be all or nothing, with few free parameters to choose. Not sure what software you are using, but how to stretch stars depends a lot on that.

If you are not using a masked stretch, it is worth doing; it can be very helpful to get around this trade-off in development. A quick Google search for "masked stretch for stars using xx" will give you lots of hits. Aside from masked stretch, I can't think of any tutorials I have seen specifically on stretching stars, which is quite odd considering the importance and difficulty.

CS Connor!
Rick
Helpful Insightful Respectful
Connor Kessler avatar
Scott Badger:
Moisture is actually a friend of seeing, though not so much to transparency..... The real driver of bad seeing, as I understand it, is thermal variance, made worse by wind and/or topography that adds turbulence to it. I live in the northeast and a really good night is anything below 2.75" (but it never gets below 2") more typically it's in the 3" - 4" range. Summer nights tend to be on the good side, but winter is awful.....

I agree that choosing targets according to seeing conditions probably won't make much of a difference. Getting lums on a public scope is a great suggestion that hadn't occurred to me either, but a cheaper version would be to shoot lums when seeing is best and rgb when it's not. Also, bad seeing causes, and gets compounded by, poor guiding and less-than-optimal auto-focus runs, so honing your guiding and focusing as much as possible to mitigate those follow-on effects will help. And wind. Whether seeing is good or bad, wind sucks. Building a small shed-style observatory was a game-changer for me. Though not a dome, I have a unique (maybe just odd....) roof system which protects against wind almost as much, actually more in some circumstances, and far better than a roll-off. Anyhow, before it, even a less-than-5 mph breeze would ruin imaging, but now I can image in 25 mph.

For processing, BlurX makes a huge difference, at least over what my skills got out of traditional tools. BlurX likes lots of signal, so I spend a long time on a target (around 20 hours) to maximize what it can do. You mention StarX, so I assume you're separating stars and target and stretching them separately? If not, that really helped me with star quality. Also, I find GHS to be the best stretch tool for stars.

FWIW,

Cheers,
Scott

Honestly, after mulling over the info in this thread I am realizing that you are right.  My seeing issues have more to do with the crazy thermal shifts off the marine layer than anything.  My guiding and focus are spot on thanks to the ZWO EAF and to having become very efficient at balancing the mount and getting a very good polar alignment, so those issues are for the most part resolved, aside from some anomalies here and there (usually my own fault). 

I did take a look at some of the subs I've scrapped from stacking, and I'm thinking that the short gusts I get here are brief but just enough to *bump* the scope and ruin a frame here and there, which might slip through my sorting in PI Blink before integrating.  A friend here highly recommended one of the various folding/pop-up observatories to help with the wind (he uses the one from Explore Scientific specifically) and says it makes a subtle but very welcome difference.  Perhaps that would be a worthwhile experiment as well.  

Once again, I really appreciate all your guys' input here.  I'm learning that maybe my gripes are valid, but not about the specific things I mentioned after all. I still consider myself a beginner, having been doing this for not even three years, so all this information is extremely valuable.
Helpful Insightful Respectful Engaging
Connor Kessler avatar
Joe Linington:
I don't think your situation is very far different from many of us. I get a few nights of 2 arcseconds or better but they are very rare. I image at a similar scale using a SharpStar 76EDPH and a QHY294m in bin1 and at 0.83 arcseconds with a 102mm and the same camera. Both reduced to 0.8. Some observations I have made.

Adding lum (or Ha as Lum) from my 102mm scope to data collected with my 76mm really does improve the resolution to the level of the 102mm. I tried this recently as an experiment and was blown away with how well it worked.

This may be controversial, but I have found that I don't like the stars from either of my scopes after processing with BlurXTerminator. I still love what BXT does for the main image, but I have found it is actually too good at reducing stars and makes mine almost always look spiky and pinched. I have started using less-sharpened stars that I reduce with this procedure instead, much later in the workflow.

https://pixinsight.com/forum/index.php?threads/star-reduction-using-pixelmath.18855/

Just some of my observations.
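The linked thread is a specific PixelMath recipe; purely as a sketch of the underlying idea (a hypothetical helper, assuming StarXTerminator-style starless and stars layers scaled to [0, 1]), recombining a dimmed star layer looks something like:

```python
import numpy as np

def recombine_reduced(starless, stars, strength=0.5):
    """Screen-blend a dimmed star layer back onto the starless image.

    strength < 1 dims the stars first; the screen blend
    1 - (1 - a) * (1 - b) brightens without clipping.
    """
    dimmed = np.clip(stars * strength, 0.0, 1.0)
    return 1.0 - (1.0 - starless) * (1.0 - dimmed)

starless = np.full((2, 2), 0.2)
stars = np.full((2, 2), 0.5)
result = recombine_reduced(starless, stars)   # 1 - 0.8 * 0.75 = 0.4
```

The actual recipe in the link is more refined, but the principle is the same: the star layer's contribution is scaled down before being blended back in.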

*I have yet to experiment with combining data from different telescopes and image scales into a single image, but I've been realizing it's actually a pretty common practice.  Does PI StarAlignment rescale the data automatically?  The only reason I've never done it is that I don't really have an understanding of how to blend data together from different sources.  Sounds like I'm way overthinking it?
Engaging
Joe Linington avatar
PI Star Align does rescale the data to the reference image, just make sure you use the higher sampling image as the reference. I have used mismatched Ha and Lum as a Lum layer in PI but for combining more complicated groups of data, I have only ever done it using APP which is kinda built for combining crazy data sets. I have seen others do it in PI so I am sure it's possible, I just don't know how.
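As a toy illustration of the rescaling step only (nearest-neighbour pixel duplication, integer factors; StarAlignment of course uses proper interpolation plus the rotation and translation it derives from star matching):

```python
import numpy as np

def match_pixel_scale(img, scale_from, scale_to):
    """Resample img from scale_from to scale_to (both in arcsec/px).

    Handles only integer upscaling factors, by pixel duplication.
    """
    factor = scale_from / scale_to
    if factor < 1 or factor != int(factor):
        raise ValueError("sketch handles integer upscaling only")
    k = int(factor)
    return np.repeat(np.repeat(img, k, axis=0), k, axis=1)

low = np.arange(4.0).reshape(2, 2)            # pretend 2.0 "/px data
matched = match_pixel_scale(low, 2.0, 1.0)    # now 4x4 on a 1.0 "/px grid
```

This is why the higher-sampling image should be the reference: upscaling the coarser data loses nothing, whereas downscaling the finer data would throw resolution away.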
Concise
Wei-Hao Wang avatar
Just to extend what Brian said a bit: I think you can switch to an equipment suite more optimized for wide-field imaging (shorter focal length, larger sensor). That way, poor seeing hurts you less (if at all), and you can still get high-definition, high-quality images.  And if your site is dark, you can combine the above with long integration time to go deep.  Wide and deep images can be very appealing if you do it right, and you don't have to have arcsec-level resolution.

We usually use the FWHM of stars to describe seeing.  If you know your pixel scale, you can run the FWHMEccentricity script in PI to check the FWHM of your stars and convert it to arcsec.  (The assumption here is that your telescope is large enough that diffraction doesn't contribute to the FWHM, and your optics are good enough that aberration doesn't contribute either.) That's the seeing measurement at the image processing stage.  You may also measure seeing at the image acquisition stage.  Unfortunately, many amateur image acquisition programs report HFD instead of FWHM.
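A crude illustration of the measurement (FWHMEccentricity fits proper PSF models; this toy version just counts pixels above half maximum on a synthetic Gaussian star and converts that area to an equivalent-circle diameter):

```python
import numpy as np

def measure_fwhm_px(star_img):
    """Rough FWHM in pixels: count pixels at or above half the
    background-subtracted peak, take the equivalent-circle diameter."""
    bg = np.median(star_img)
    peak = star_img.max() - bg
    above = (star_img - bg) >= peak / 2.0
    return 2.0 * np.sqrt(above.sum() / np.pi)

# synthetic Gaussian star, sigma = 2 px (true FWHM = 2.355 * 2, about 4.7 px)
y, x = np.mgrid[-15:16, -15:16]
star = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
fwhm_px = measure_fwhm_px(star)   # coarse pixel counting overestimates a bit
# multiply by your plate scale (arcsec/px) to express the seeing in arcsec
```

Real measurement tools fit a Gaussian or Moffat profile for sub-pixel accuracy, but the principle is the same: width of the star at half its peak brightness.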
Well Written Helpful Insightful
Scott Badger avatar
Wei-Hao Wang:
We usually use the FWHM of stars to describe seeing.  If you know your pixel size, you can run the FWHMEccentricity script in PI to check the FWHM of your stars and convert it to arcsec.  (Here the assumption is that your telescope is large enough so diffraction doesn't contribute to the FWHM, and your optics is good enough so aberration doesn't contribute as well.) That's the seeing measurement in the image processing stage.  You may also measure seeing in the image acquisition stage.  Unfortunately many amateur image acquisition programs reports HFD instead of FWHM.

I hope this isn't too much of a diversion, and probably dumb question of the week, but since we use the same metric for both seeing and focus (fwhm/hfr/hfd), is there a visual difference between being slightly out of focus and seeing at the same fwhm? For example, if one night seeing is 2.5" but I'm a bit out of focus and getting 3.5" stars, and the next night my focus is good but seeing is worse and stars are still 3.5", would there be a visual difference in the data from the two nights? I realize that being far enough out of focus will make donuts of the stars, which bad seeing doesn't do, but not at just 1" out of focus.

Side note, for a metric that's so central to astroimaging, why are there 2 (or three counting HFR and HFD) different versions that aren't even translatable between each other, two different units (arcseconds and pixels) and half the time no indication as to which is being reported, and no consistency in the numbers reported between any two softwares, even when using the same metric and units?......

Cheers,
Scott
Engaging
Joe Linington avatar
Scott Badger:
Wei-Hao Wang:
We usually use the FWHM of stars to describe seeing.  If you know your pixel size, you can run the FWHMEccentricity script in PI to check the FWHM of your stars and convert it to arcsec.  (Here the assumption is that your telescope is large enough so diffraction doesn't contribute to the FWHM, and your optics is good enough so aberration doesn't contribute as well.) That's the seeing measurement in the image processing stage.  You may also measure seeing in the image acquisition stage.  Unfortunately many amateur image acquisition programs reports HFD instead of FWHM.

I hope this isn't too much of a diversion, and probably dumb question of the week, but since we use the same metric for both seeing and focus (fwhm/hfr/hfd), is there a visual difference between being slightly out of focus and seeing at the same fwhm. For example, if one night seeing is 2.5", but I'm a bit out of focus and getting 3.5" stars, and the next night my focus is good but seeing is worse and stars are still 3.5", would there be a visual difference in the data from the two nights? I realize that being enough out of focus will make donuts of the stars, which bad seeing doesn't do, but not at just 1" out of focus.

Side note, for a metric that's so central to astroimaging, why are there 2 (or three counting HFR and HFD) different versions that aren't even translatable between each other, two different units (arcseconds and pixels) and half the time no indication as to which is being reported, and no consistency in the numbers reported between any two softwares, even when using the same metric and units?......

Cheers,
Scott

That is a lot of good questions. I only have an answer for the simplest one: why pixels and arcseconds to measure guiding performance. Pixels measure how much a star moves on your sensor, which is not directly comparable to other setups with a different guide scope, OAG, camera, etc. Arcseconds take that pixel number and run it through a formula that accounts for your optics, to get the angular change of the star. That is comparable across systems and really should be the only way we talk about guiding. But PHD2 shows both numbers in a rather unintuitive way, so they get mixed up all the time.
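The formula in question is just the plate scale: one pixel subtends 206.265 × pixel size (µm) / focal length (mm) arcseconds. A sketch, with made-up guide-setup numbers:

```python
def arcsec_per_pixel(pixel_um, focal_mm):
    """Plate scale: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

def guide_rms_arcsec(rms_px, pixel_um, focal_mm):
    """Convert a guiding RMS reported in pixels to arcseconds."""
    return rms_px * arcsec_per_pixel(pixel_um, focal_mm)

# hypothetical guider: 3.75 um pixels on a 240 mm guide scope
scale = arcsec_per_pixel(3.75, 240)       # roughly 3.22 "/px
rms = guide_rms_arcsec(0.25, 3.75, 240)   # 0.25 px RMS, roughly 0.81"
```

The same formula gives the imaging scale too; e.g. the OP's quoted ~1.3 "/px is consistent with a 4.63 µm binned pixel on roughly 730 mm of reduced focal length.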
Helpful Concise
Scott Badger avatar
Joe Linington:
That is a lot of good questions. I only have an answer for the most simple one. Why Pixels and Arcseconds to measure guiding performance. Pixels is measuring how much a star moves on your sensor and is not comparable directly to other setups with a different guide scope, OAG, camera etc. Arcseconds is that pixel number taken and mashed through a formula that accounts for your optics to get a measurement of the angular change of a star. This is comparable to other systems and really should be the only way we talk about guiding. But PHD2  shows both numbers in a rather unintuitive way so they get mixed up all the time.

Exactly! There may be a few situations where hfr/hfd and/or pixels is more informative, but 90%+ of the time, fwhm in arcseconds is as good or better. Now if only the software companies would all hold the ruler the same way.....

Also, when we talk about our own seeing conditions, I'm never sure if it's based on the fwhm in our images, or some reported/forecasted seeing for our location, and what the relationship is between "forecasted" seeing and what I'll actually get. When I look at Meteoblue for example, tonight's seeing is forecasted at 0.85" to 1.03" and 3 or 4 out of 5 depending on the index used, but while my seeing night to night tracks MB pretty well in terms of the 1 to 5 index, my fwhm's will be at least 2.5x what's forecasted....

Cheers,
Scott
Helpful Insightful Respectful Engaging
Wei-Hao Wang avatar
Scott Badger:
Wei-Hao Wang:
We usually use the FWHM of stars to describe seeing.  If you know your pixel size, you can run the FWHMEccentricity script in PI to check the FWHM of your stars and convert it to arcsec.  (Here the assumption is that your telescope is large enough so diffraction doesn't contribute to the FWHM, and your optics is good enough so aberration doesn't contribute as well.) That's the seeing measurement in the image processing stage.  You may also measure seeing in the image acquisition stage.  Unfortunately many amateur image acquisition programs reports HFD instead of FWHM.

I hope this isn't too much of a diversion, and probably dumb question of the week, but since we use the same metric for both seeing and focus (fwhm/hfr/hfd), is there a visual difference between being slightly out of focus and seeing at the same fwhm. For example, if one night seeing is 2.5", but I'm a bit out of focus and getting 3.5" stars, and the next night my focus is good but seeing is worse and stars are still 3.5", would there be a visual difference in the data from the two nights? I realize that being enough out of focus will make donuts of the stars, which bad seeing doesn't do, but not at just 1" out of focus.

Side note, for a metric that's so central to astroimaging, why are there 2 (or three counting HFR and HFD) different versions that aren't even translatable between each other, two different units (arcseconds and pixels) and half the time no indication as to which is being reported, and no consistency in the numbers reported between any two softwares, even when using the same metric and units?......

Cheers,
Scott

In the above I said FWHM reflects seeing if diffraction and optics do not contribute to FWHM.  I think I should add that focusing and guiding errors should be small as well.  

Can we tell the difference between 3.5" seeing with perfect focus and infinitely good seeing with a 3.5" blur caused by defocus?  Definitely.  Out-of-focus stars have distinct shapes (sharp edges, or donut shapes if using a reflector), while the seeing profile is smooth and Gaussian-like (though not exactly Gaussian).  So this one is very easy.  

Then, can we tell the difference between 2.5" seeing with slight defocus that leads to a 3.5" FWHM and 3.5" seeing with perfect focus?  That will be harder. You will need to understand your system very well, including what the images and guide graphs look like under good and poor seeing, how the focus shifts with temperature or even with telescope pointing direction, and so on.  Then there can be some hope of spotting slight defocus, after looking at the image and much other direct/indirect evidence for defocus during your imaging session. 

In professional astronomy, we always use FWHM to describe resolution, whether caused by seeing, by diffraction, or by anything else.  We also very often use the half-light radius to describe the size of small and distant galaxies.  I haven't read any papers using half-light radius or diameter to describe image quality (perhaps because I haven't read enough).  As far as I know, amateur astronomers like to use half-light radius/diameter because it is claimed to be more stable than FWHM under poor seeing and short exposures.  (I am not sure if it is truly more stable; I am just saying that people claim so.  Personally I can't verify this.)  If this is true, then indeed, at least for focusing and guiding, half-light radius/diameter is a better indicator.  But at the same time, if this is the only reason (stability under short exposures and seeing), then there is really no need to use half-light radius/diameter to express seeing under long exposure.  I think people should just use FWHM.

BTW, if the FWHM is purely caused by seeing (no diffraction, no aberration, no defocus, etc.), one should be able to work out a simple relationship between FWHM and HFD, as the seeing profile is quite well known.  However, for focusing, since the shape of stars is no longer controlled just by the seeing profile, there isn't a simple relation between HFD and FWHM.
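For what it's worth, for a purely Gaussian profile the relationship comes out exactly: the encircled energy inside radius r is 1 - exp(-r²/2σ²), so the half-flux radius equals the half width at half maximum, and HFD = FWHM. A quick check:

```python
import math

sigma = 1.7                                   # any width, in pixels
fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

# encircled energy of a circular 2D Gaussian: EE(r) = 1 - exp(-r^2 / (2 sigma^2))
# half flux: r_half = sigma * sqrt(2 ln 2) = HWHM, hence HFD = 2 * r_half = FWHM
r_half = sigma * math.sqrt(2.0 * math.log(2.0))
hfd = 2.0 * r_half

ee_at_r_half = 1.0 - math.exp(-r_half**2 / (2.0 * sigma**2))   # exactly 0.5
```

A real seeing profile is more Moffat-like, so the ratio shifts a little, but for round, well-shaped stars the two measures stay close.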
Helpful Insightful Engaging
Miguel A. avatar
I am new to AP and was encouraged early on, when seeing is not great, to look at some galaxies that can be seen easily under urban conditions, namely M81 & M82, and of course M31. There are more, but these are the brightest ones. Globulars include M13, M92, M3, M5, and M15. Mostly, though, beating the atmospheric seeing is just a matter of patience. Just keep watching, and intermittent good moments may surprise you.
Helpful Supportive
Tareq Abdulla avatar
Which is better or worse in those scenarios: poor seeing with low light pollution (say Bortle 3-5), or nice clear seeing under bad light pollution (Bortle 7-9)?
Brian Boyle avatar
For the record, I measure my seeing as the FWHM of a bright star (or stars) at or near the centre of the field in a short exposure (1 s or less) at the beginning of the night, immediately following a collimation check. 

I will never truly measure seeing - for that I would need a DIMM or some such, as many professional mountaintop observatories use - but it's as close as I can get to taking out as much of the instrumental contribution to image size as I can.  The diffraction limit of my RC8 is 0.5 arcsec, so it will contribute a little to the measured value, but not a great deal at a typical 4-5 arcsec FWHM!

My weather patterns are such that seeing changes quite slowly, although bad seeing nights [5-6 arcsec] also have brief periods of "blow-outs" to even larger values.
At that point, I will admit to giving up - because I can't even see the guide stars.

But I would always trade good seeing for dark skies.  Quite apart from the AP, it is just wonderful to see the stars….

CS Brian
Helpful Insightful
Rick Veregin avatar
Joe Linington:
Scott Badger:
Wei-Hao Wang:
We usually use the FWHM of stars to describe seeing.  If you know your pixel size, you can run the FWHMEccentricity script in PI to check the FWHM of your stars and convert it to arcsec.  (Here the assumption is that your telescope is large enough so diffraction doesn't contribute to the FWHM, and your optics is good enough so aberration doesn't contribute as well.) That's the seeing measurement in the image processing stage.  You may also measure seeing in the image acquisition stage.  Unfortunately many amateur image acquisition programs reports HFD instead of FWHM.

I hope this isn't too much of a diversion, and probably dumb question of the week, but since we use the same metric for both seeing and focus (fwhm/hfr/hfd), is there a visual difference between being slightly out of focus and seeing at the same fwhm. For example, if one night seeing is 2.5", but I'm a bit out of focus and getting 3.5" stars, and the next night my focus is good but seeing is worse and stars are still 3.5", would there be a visual difference in the data from the two nights? I realize that being enough out of focus will make donuts of the stars, which bad seeing doesn't do, but not at just 1" out of focus.

Side note, for a metric that's so central to astroimaging, why are there 2 (or three counting HFR and HFD) different versions that aren't even translatable between each other, two different units (arcseconds and pixels) and half the time no indication as to which is being reported, and no consistency in the numbers reported between any two softwares, even when using the same metric and units?......

Cheers,
Scott

That is a lot of good questions. I only have an answer for the most simple one. Why Pixels and Arcseconds to measure guiding performance. Pixels is measuring how much a star moves on your sensor and is not comparable directly to other setups with a different guide scope, OAG, camera etc. Arcseconds is that pixel number taken and mashed through a formula that accounts for your optics to get a measurement of the angular change of a star. This is comparable to other systems and really should be the only way we talk about guiding. But PHD2  shows both numbers in a rather unintuitive way so they get mixed up all the time.

I'd like to address the comments above, as there seems to be some misunderstanding of HFD; it is definitely not some amateur mistake.

HFD actually fixes some of the issues with FWHM. First, HFD and FWHM, as I mentioned before, give the same value if your stars are good shapes. No translation required; they will be the same. If your stars are off in shape, say due to aberrations or tracking errors, or anything else that distorts the star, you should use HFD. For example, donuts can give a good FWHM, so FWHM is not necessarily good for manual focus, and it can be terrible for autofocus, which can't tell they are donuts. Also, if your stars are trailing in one direction, FWHM will typically be lower even though the stars are oblong. HFD is a better measure of your focus and is more reliable, as it won't tell you a trailed star is better than a round one.  Don't worry that they are different; that is exactly why HFD is used, because it gives a more reliable value.

HFR is just based on the radius of the star shape, while HFD is based on the diameter, so HFR = 1/2 HFD. Nothing simpler.

Note, anyone can use any units they want for HFR, HFD and FWHM. Typically it will be in pixels or arc-seconds. Unfortunately not everyone reports the units, just make sure you do add units when you report your values. If you know the pixel scale and the camera pixel size when someone quotes HFR, HFD or FWHM, you may be able to figure out what makes sense for their units, but sometimes it will be difficult. Best case is to ask if you can what units were used.

Software that doesn't know your plate scale (arcsec/px) will have to report values in pixels. If you just provide an image and no other information, you will typically get back values in pixels; nothing else is possible without more information. Some software can plate solve the image, or you can enter the data so it can figure out your plate scale, in which case it will give you the value in arcsec. So you should be able to tell what the value is depending on what information you gave the program, and whether it told you it plate solved. Finally, there is no guarantee one piece of software will give you values identical to another's: how it determines the background, what algorithm it uses, and so on may differ, so no guarantee at all the values will be identical. 

Hope this clarifies things.
Rick
Helpful Insightful