Unhappy processing results: Witch Head, APP, many short subs

Rod Van Meter · Die Launische Diva · Andy Wray
29 replies · 720 views
Rod Van Meter
I just finished 14 hours of CPU time processing 155 15-second subs of Witch Head Nebula. Tracked but unguided, my drive is about 3% slow in R.A. (see separate post under Equipment), forcing me to such short subs. There was a vicious haze where we were shooting, but fortunately the actual clouds and rain stayed south of us long enough for me to get several hours of total data on three targets. The haze is responsible for Rigel being a giant fuzzball; you can see the halos on the individual subs, so it's not just that I've tortured the data beyond what it has to give (which may also be true).

Run through Astro Pixel Processor, 82 flats, 100 darks and 100 biases. Vignetting and dust eliminated nearly perfectly, but background is still grayish. Go to APP's light pollution tool, and it does a great job of fixing the really harsh gradient, but I'm still left with this bilateral gradient on top and bottom of the frame. My guess is that my calibration frames aren't good enough, that this is noise from a temperature gradient across the sensor caused by Sony a7iii electronics after several hours of continuous use. My calibration frames were taken some hours later, under somewhat different conditions.

Any advice?

P.S. I'm planning to crop this anyway, but that's not the point here; I'd like to solve the problem and be ready for next time, or better yet to fix this image.
Die Launische Diva
I don't use APP, but I suspect it is also good practice in APP to model the background on a cropped image. The crop should remove only the dark/black and low-SNR edges of the stack. Also, if APP lets you place the background samples manually, I would try putting the samples far away from the bright star halos (and the Witch, of course).

Do you have a raw integrated image to share with us?
deleted
You need at least 2-minute exposures at f/5 to get good signal from the Witch Head Nebula...

This object is dim!

Do another acquisition with longer subs.

This is 30-sec subs... 85mm at f/3.2, unguided, overstretched, Bortle 6. Canon T6, unmodified.

KIJJA JEARWATTANAKANOK
@Rod Van Meter

After applying APP1083 beta2's 'remove light pollution' and 'calibrate star colors' (3 times), batch crop, 'HSL selective color', and 'star reducer' tools to your posted image, I arrived at this. I don't know if it is better or worse.

C.S.
Kijja

Rod Van Meter
Thanks, all, for the instructive responses. I knew when I started this one that it was a reach, but I wanted to try a reflection nebula and see what I could get. And of course I wish the sky had been clearer for longer, but so it goes.

@Luis Marco Gutierrez your image is impressive, especially for Bortle 6. Did you use a light pollution filter? I haven't acquired one yet, but am considering it. Of course I want longer subs and longer total time, but (as noted in an Equipment post) my current setup has a tracking problem. I'm hoping to add single-axis (R.A.) guiding later this year or early next. Meanwhile, I'm still trying to figure out why it's running slow and see if I can get up to at least 30-sec or 60-sec subs. Longer than that will definitely require guiding.

@Die Launische Diva thanks for the advice. APP does allow the user to place the background samples used to determine the gradient. It takes five. I put four roughly in the corners and one off the chin of the Witch. I haven't fiddled much with different locations yet, but I certainly could.

@KIJJA JEARWATTANAKANOK I think your edit is probably about the best that can be done with the data I have. That crop is about what I am considering, and the noise is probably inevitable with this data. You are far ahead of me in processing. You pulled quite a bit of detail out around the nose/eye socket that I didn't even know was there. When I get a chance, I will try duplicating your sequence and see if I can come close.

I'm still new here, but images in the forums are limited to 5MB? That's why I posted JPEG in the first place. How do people share files, just link to their own offsite storage? Here are three versions:
Integrated with flats, darks, and biases (no stretch, no light pollution correction, no additional background tweaking; this is the very dark file):
https://www.dropbox.com/s/obwo9y7bql31arj/Witch_Head_Nebula_full-RGB-session_1.fits?dl=0
Same thing, with APP's initial stretch (only) applied (maybe the right starting point for tinkering):
https://www.dropbox.com/s/l0p9dhjgektxhn5/Witch_Head_Nebula_full-RGB-session_1-St.fits?dl=0
The above, with APP's light pollution correction and background correction (close to but not exactly the JPEG above):
https://www.dropbox.com/s/li8r1kpulyu30xa/Witch_Head_Nebula_full-RGB-session_1-lpc-cbg-St.fits?dl=0

In random tidbits, I neglected to mention that this was all shot at ISO 800. I have the impression that noise is a big problem at 1600 and above, but the histogram shows such heavy clipping at the bottom end that maybe it's worth it to boost off the floor and get at least some dynamic range, as long as I'm stuck at such short subs?

I was considering 2x2 binning, sacrificing resolution for noise reduction, but apparently APP doesn't directly support binning. One recommendation from another forum is to drizzle and interpolate during integration, but I haven't figured out what settings would be good for that. Integration times on this wimpy laptop are something like 8 hours, so I need to choose pretty carefully before kicking off a job.
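For my own sanity-checking, here's the kind of 2x2 binning I mean, done by hand in numpy. This is just my sketch of the idea, nothing to do with APP's internals:

```python
import numpy as np

def bin2x2(img):
    """Average each 2x2 block of pixels, halving the resolution.

    Averaging 4 pixels cuts uncorrelated noise by sqrt(4) = 2x,
    which is the resolution-for-noise trade mentioned above."""
    h = img.shape[0] // 2 * 2  # drop an odd trailing row/column if present
    w = img.shape[1] // 2 * 2
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# Sanity check on pure synthetic noise: the binned frame's standard
# deviation should be about half the original's.
rng = np.random.default_rng(0)
frame = rng.normal(100.0, 10.0, size=(1000, 1000))
binned = bin2x2(frame)
print(frame.std(), binned.std())
```

On synthetic Gaussian noise the binned frame's scatter comes out at about half the original, as expected.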
Rod Van Meter
One interesting tidbit I just noticed: right at the borders of the frame, you can see the ragged edges from alignment. The subs were shot in batches of fifty using the a7iii's internal intervalometer, but I had to trigger each set by hand with the shutter button. That doesn't bother me.

But if you look closely, you can see that the set with the overhang on the top has a gradient in one direction, while the set with the overhang on the bottom has a gradient in the other direction: one east-to-west, the other west-to-east. There was a huge bank of clouds to the south, but we did suffer from some clouds coming through before they finally settled in, as well as the high haze that gave us the star halos (my shooting partner had the same haze with a different setup, so the haze/halos are not my equipment). Perhaps this is different cloud gradients?

I actually have about 230 total subs; I tossed out 35 by hand as being clouded over, basically all of them at the end. I let APP process 195 up through normalization, then took 155 of those for final integration based on "score", which is apparently almost entirely star shape. I did try to keep an eye on the SNR too, but I'm not sure how that is actually calculated.
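Out of curiosity, I hacked together a guess at what a per-sub SNR metric might look like. This is pure speculation about APP's internals: sigma-clip to find the background, then compare the flux above it to the background scatter.

```python
import numpy as np

def sub_snr(img, clip=3.0, iters=5):
    """Very crude per-sub SNR guess (NOT APP's actual formula):
    sigma-clip to find the background level and scatter, then take
    (flux above background) / (background noise over those pixels)."""
    bg = img.astype(float)
    mask = np.ones(bg.shape, dtype=bool)
    for _ in range(iters):
        m, s = bg[mask].mean(), bg[mask].std()
        mask = np.abs(bg - m) < clip * s  # keep only background-like pixels
    signal = bg[~mask].sum() - m * (~mask).sum()  # flux above background
    noise = s * np.sqrt((~mask).sum() + 1)        # expected scatter there
    return signal / noise

# A frame with a fake star should score far higher than plain sky.
rng = np.random.default_rng(1)
sky = rng.normal(1000.0, 20.0, size=(200, 200))
star = sky.copy()
star[95:105, 95:105] += 500.0
print(sub_snr(sky), sub_snr(star))
```

On these synthetic frames the star frame scores an order of magnitude higher than the empty sky frame, which at least matches intuition.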

Given existing noise levels I can't afford to toss out a big fraction of the remaining 155, but variation in cloud cover might actually explain the bilateral gradient.
Die Launische Diva
You just need haze-free skies and more integration time. Avoid complicated integration processes at this point. I also believe you have a couple of dark spots due to insufficient flat correction. This is my result after quick background modelling and LP/haze removal in PI:



I didn't perform any other processing, other than LP modelling and a default stretch in order to save the image as JPG.

If your PC can't handle integrating many images, try doing your integration in batches, but for this, you may have to ask in the APP forum.

And this is my attempt on the Witch Nebula, ~ 4.5 hrs of integration under Bortle 5 skies and very, very careful flat frame correction and LP modelling:

Rod Van Meter
Gorgeous. Your edit came out much better than mine. I assume that is primarily due to talent & experience, knowing what to apply and where, but is PI also that much better than APP? I used Nebulosity first, and APP is miles and miles easier to use (plus, of course, I'm learning).
Re: haze and time, roger that. That's also one reason I didn't get the flats right after the lights, which probably resulted in the one or two dust spots you're seeing. (Given how many spots were in the original subs, I'm actually really happy with how effective the flats were.) I was planning to shoot them in the morning twilight, but we packed it in at 2:30 am and I didn't have a light panel or laptop screen for the flats, so I had to tear down my rig and then shoot the flats (and the darks and biases) the next evening a little before sunset against an overcast but light sky.
Re: batches, yeah, I should learn how to do that. I'm still developing my routine.
And your photo is stunning. I hope I'll get there eventually!
Andy Wray
Rod Van Meter:
Gorgeous. Your edit came out much better than mine. I assume that is primarily due to talent & experience, knowing what to apply and where, but is PI also that much better than APP? I used Nebulosity first, and APP is miles and miles easier to use (plus, of course, I'm learning).
Re: haze and time, roger that. That's also one reason I didn't get the flats right after the lights, which probably resulted in the one or two dust spots you're seeing. (Given how many spots were in the original subs, I'm actually really happy with how effective the flats were.) I was planning to shoot them in the morning twilight, but we packed it in at 2:30 am and I didn't have a light panel or laptop screen for the flats, so I had to tear down my rig and then shoot the flats (and the darks and biases) the next evening a little before sunset against an overcast but light sky.
Re: batches, yeah, I should learn how to do that. I'm still developing my routine.
And your photo is stunning. I hope I'll get there eventually!

When you say that you tore down your rig before taking flats, did you actually remove the camera?  If so, then the chances of getting your flats lined up with your lights are pretty slim unless you have some way of guaranteeing the angle of the camera relative to the scope.  I'm probably stating the obvious about making sure flats are taken with the camera in exactly the same orientation relative to the light train, but thought I'd mention it just in case.  FWIW:  light panels are really quite cheap, work well and mean taking the flats takes only a couple of minutes at the end of a session.
Alberto Ibañez
Hi Rod,

You need much more integration time for an object like this. 155 × 15 s is about 0.6 h of total integration time. I would suggest going to at least 5 h (better, 10 h).

Apart from this, what criteria did you use to choose 15 s as the sub-exposure time? You want to overcome the read noise of your camera, or the fainter stuff will be below it and you will be battling the noise floor with almost no chance of pulling it up. This will also reduce the computing workload (in any case, if your computer needs 15 hours to compute 155 subs, you definitely need a new computer. Luckily, Christmas is almost here 🙂).
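To make the read-noise point concrete, here is the usual rule of thumb as a tiny calculation. The factor and the sensor numbers below are purely illustrative assumptions, not measured values for the a7 III:

```python
def min_sub_seconds(sky_e_per_s, read_noise_e, factor=10.0):
    """Shortest sub such that the accumulated sky background (electrons
    per pixel) reaches `factor` times the read noise squared, so sky
    shot noise, not read noise, dominates each sub."""
    return factor * read_noise_e ** 2 / sky_e_per_s

# Hypothetical example: 2 e- read noise and 1 e-/s/pixel of sky glow
# would call for roughly 40 s subs under this rule.
print(min_sub_seconds(sky_e_per_s=1.0, read_noise_e=2.0))
```

The point is the scaling: brighter skies allow shorter subs, and quieter sensors need longer ones before the sky takes over.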

Good luck!
Rod Van Meter
Thanks, Alberto. Yes, I know, I need much more time. I had scheduled three to four hours, but we got clouded out. I took right around one hour, but only two-thirds of it was useful. 15-second subs are the limit of my current setup; I should be able to do 30 if I can correct a slow tracking problem, but to get to 120-sec subs I'm going to need an autoguider. I need lots of things; a new computer is just one of them, but it may be showing up in my budget in a few months.

This one was always a reach for my setup and skill level, I knew, but when I saw the sky conditions I consciously shifted my goal from making a great photo to learning something useful in the long run. So thanks for the advice!
Rod Van Meter
@Andy Wray yes, I decoupled the camera body from the Redcat. It's pretty reproducible in orientation, but of course not 100%. I think the bigger problem is the sensor dust pattern changing. I wasn't thinking entirely clearly at 2:30 am, I admit, but it's also the way my gear has to go in my bag.

If you have a recommendation for a cheap, portable, battery-powered light panel, I'm all ears. A couple of months ago while putting together a shopping list, I spent a short amount of time googling stuff up, and the ones I was finding were several hundred dollars and had complex color temperature control systems, which seemed ridiculous, but I shelved it in favor of other things and haven't gotten back around to it yet. At the moment, I don't need a big one, with my biggest scope being 90mm diameter, but depending on price and portability it might or might not make sense to plan ahead.

I am getting the message on working harder on matching flats to the image, though, thanks for the push!
Andy Wray
Rod Van Meter:
@Andy Wray yes, I decoupled the camera body from the Redcat. It's pretty reproducible in orientation, but of course not 100%. I think the bigger problem is the sensor dust pattern changing. I wasn't thinking entirely clearly at 2:30 am, I admit, but it's also the way my gear has to go in my bag.

If you have a recommendation for a cheap, portable, battery-powered light panel, I'm all ears. A couple of months ago while putting together a shopping list, I spent a short amount of time googling stuff up, and the ones I was finding were several hundred dollars and had complex color temperature control systems, which seemed ridiculous, but I shelved it in favor of other things and haven't gotten back around to it yet. At the moment, I don't need a big one, with my biggest scope being 90mm diameter, but depending on price and portability it might or might not make sense to plan ahead.

I am getting the message on working harder on matching flats to the image, though, thanks for the push!

I use an A3 tracing board (Voilamart A3 Tracing Board Ultra-thin Brightness Adjustable LED Drawing Copy Board) which I bought from Amazon for £37 (about 50 USD).  You probably only need an A4 variant which should be around 36 dollars.
matthew.maclean
@Rod Van Meter , I use APP too for stacking and, if I understood your description above, I remove background gradients by using more than five interrogation boxes. APP is not very clear about what it is doing, but I think each box you draw is an interrogation region from which it extracts the average brightness, then maps some kind of polynomial interpolation to smooth the background across the image. PixInsight does something very similar with its DBE tool (it places a grid of marks over the image that you can control), which I think is why @Die Launische Diva obtained such a nice-looking background result above (certainly lots of experience too!).

So, when I opened the JPG you posted at the top and placed the following grid on it in APP, most of the background disappeared. It is a little tedious to draw all the boxes, and APP definitely needs an automated way to do it like PI has, but the effect seems to be what you are looking for, I think? If you do something like this to your raw 32-bit stack, I bet it will come out more to your liking.
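To show what I mean by the polynomial idea, here is a toy version in numpy. This is my own guess at the concept, not APP's or PI's actual algorithm: take a robust value from each sample box and fit a low-order 2-D polynomial through them.

```python
import numpy as np

def fit_background(img, boxes, order=2):
    """Fit a 2-D polynomial background through the median of each
    sample box and subtract it from the whole frame.

    boxes: list of (y0, y1, x0, x1) regions placed on sky background."""
    ys, xs, zs = [], [], []
    for y0, y1, x0, x1 in boxes:
        ys.append((y0 + y1) / 2)            # box centroid
        xs.append((x0 + x1) / 2)
        zs.append(np.median(img[y0:y1, x0:x1]))  # robust box brightness
    # All polynomial terms x^i * y^j with i + j <= order.
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.array([[x**i * y**j for i, j in terms] for x, y in zip(xs, ys)])
    coef, *_ = np.linalg.lstsq(A, np.array(zs), rcond=None)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    model = sum(c * xx**i * yy**j for c, (i, j) in zip(coef, terms))
    return img - model  # background-subtracted frame

# Synthetic frame: flat sky plus a left-to-right gradient.
h, w = 100, 100
yy, xx = np.mgrid[0:h, 0:w]
frame = 50.0 + 0.5 * xx
boxes = [(y, y + 10, x, x + 10) for y in (0, 45, 90) for x in (0, 45, 90)]
flat = fit_background(frame, boxes)
print(flat.std())  # ~0: the gradient is gone (only a flat pedestal remains)
```

With nine boxes on a synthetic gradient, the residual is flat to numerical precision; the real tools are of course far more robust about stars and nebulosity landing in the boxes.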

Rod Van Meter
Thanks, all.
@matthew.maclean thanks for the really helpful image there. While you were posting that, I was discovering for myself how much more powerful and complex the light pollution correction tool in APP is. It can show the map it creates, too, which is helpful. Until a little while ago I thought it required *exactly* five boxes, but I discovered it will take more and tried up to fourteen medium-sized boxes. But I hadn't tried coverage as complete as yours. I think I accidentally picked up some areas with red emission nebula, because mine turned a little green; I'm not happy with the color.

I don't understand the green, yellow and red boxes here, what's the difference? It seems to only do the greens? Or does it do all of them, but it's recommending that you *not* include the red and yellow ones?

Here's my current learning/playing around version. When I get a chance, I'll rework this with my new knowledge. Apologies for the low image file quality, had to turn the JPEG compression way up to get it under 5MB.

Rod Van Meter
Thanks, @Andy Wray I just ordered a cheap, similar A4 thing (3,000 yen), which should arrive tomorrow. I had seen ones like that, but just assumed they were only illuminated at the edges in a way that makes them unusable for this purpose; for thirty bucks, though, it's worth a shot. Do you also cover it with a cloth (white t-shirt?) to improve the angular diffusion, or do you just trust it? I find that when I use my laptop screen for flats, the color difference depending on the angle you're looking at the screen creeps into the flats, so they are better with extra diffusion.
andrea tasselli
Alberto Ibañez:
Hi Rod,

You need much more integration time for an object like this. 155 × 15 s is about 0.6 h of total integration time. I would suggest going to at least 5 h (better, 10 h).

Apart from this, what criteria did you use to choose 15 s as the sub-exposure time? You want to overcome the read noise of your camera, or the fainter stuff will be below it and you will be battling the noise floor with almost no chance of pulling it up. This will also reduce the computing workload (in any case, if your computer needs 15 hours to compute 155 subs, you definitely need a new computer. Luckily, Christmas is almost here 🙂).

Good luck!

Well, that depends. One hour is good enough if the sky is truly dark, maybe even less. Let's say 1 to 1.5 hours with a good DSLR such as the OP's and a conservative pixel scale.
Andy Wray
@Rod Van Meter I was initially covering the top of the scope with a white t-shirt, but when I tried it without one it made no difference (probably because the drawing board is totally out of focus). I usually just point the scope vertically upwards, lay the board face down on the top of the scope, and tell APT to take 50 flats.
matthew.maclean
Rod Van Meter:
Thanks, all.
@matthew.maclean thanks for the really helpful image there. While you were posting that, I was discovering for myself how much more powerful and complex the light pollution correction tool in APP is. It can show the map it creates, too, which is helpful. Until a little while ago I thought it required *exactly* five boxes, but I discovered it will take more and tried up to fourteen medium-sized boxes. But I hadn't tried coverage as complete as yours. I think I accidentally picked up some areas with red emission nebula, because mine turned a little green; I'm not happy with the color.

I don't understand the green, yellow and red boxes here, what's the difference? It seems to only do the greens? Or does it do all of them, but it's recommending that you *not* include the red and yellow ones?

I am honestly not sure what the green, yellow, red highlighting is for either. I have always assumed that the red ones are outliers in the pattern somehow, but APP definitely does not make it clear if that is a bad thing or not. I used to remove them with the removal button, but lately I often leave them in place and I do not really see any drastic differences with or without. Perhaps that is case specific, but maybe just try it both ways on your image and see what works best?
dkamen
Hi Rod,

Integration time is important for SNR and for making the object stand out, but haze is the main issue here, not noise. Everything is fuzzed out similarly to the big stars; it is just less obvious. The bands could be due to misaligned calibration subs, or cloud cover, or even gradients amplified by the haze. I would say try removing each suspect one by one: first integrate without darks. Then integrate without flats. Then integrate only the absolute best subs (stretch at 30% to inspect them visually for clouds and such). Depending on how the result changes, you will know what is at fault. Integrating with max and equal weights is very helpful for this kind of analysis. Also, downscale to 0.5 or less to speed things up; it's not pixel-level microdetail you are looking to debug, it's the large bands.

Cheers,

Dimitris
Die Launische Diva
Rod Van Meter:
@Die Launische Diva thanks for the advice. APP does allow the user to place the background samples used to determine the gradient. It takes five. I put four roughly in the corners and one off the chin of the Witch. I haven't fiddled much with different locations yet, but I certainly could.

You are welcome, @Rod Van Meter!
The bands could be due to misaligned calibration subs or cloud cover or even gradients amplified by the haze.

@dkamen, if you are referring to the bright bands after LP removal, I believe it is overshoot of the interpolation surface. While this dataset is a good exercise in gradient removal, I believe it will be more fun if the cloud and haze gods cooperate 🤞 and Rod can collect new, haze-free data.
dkamen
No, actually I am referring to the bands before LP removal, in the original pic. I mean, they shouldn't exist on both edges of the frame (I think so, at least) 🙂
Die Launische Diva
No, actually I am referring to the bands before LP removal, in the original pic. I mean, they shouldn't exist on both edges of the frame (I think so, at least).

I see, indeed, these bright corners can't be attributed to haze alone.
Rod Van Meter
Okay, folks, I declare victory.

This one is a little more even, and doesn't have the green-yellow tint of the one above, but it's also not quite as bright. I think I like this one better. I doubt this data has any more to give, but I am now hooked on this subject, and look forward to a night when I can collect a good five hours of multi-minute subs.

@Die Launische Diva I'm including an early engineering integration (10 frames, no calibration), so you can see how much dust there is on the sensor. Flats did a pretty good job except for the two the size of the boulder Sisyphus was pushing. Time to get out the scrub brush and Comet(!) cleanser, I suppose.

I'll post this one as an image later today. Thanks for all the advice.

I actually have two more datasets from the same evening, shot before Orion was high enough to shoot. Will attack them over the next few days using my new skills.

P.S. I'm on the lookout for other targets that are good for beginners and take advantage of the large field of view (5x8 degrees) I have. Suggestions welcome!

