Question about stacking TG images

I picked up a Seestar S50 very recently and have started to play around with it a bit for variable star observation and estimation. Some of the initial results using the TG images seem reasonable (both compared with AAVSO ‘V’ observations of the same stars around the same time, and in how close the check star magnitudes come out), but I did have a question.

I notice that when you upload the raw color images you get three TG images for each: G1, G2, and Gavg, as I believe they’re labeled in the FITS headers. Obviously the last is the average of the two green channels from the original image. My question is: if I want to stack a bunch of these, is it okay to just stack them all together? That is, stack all the G1s and G2s together with the Gavg images to get the image I do photometric analysis on?

If what’s happening in the creation of the Gavg channel and in the stacking is truly an “averaging” in the mathematical sense, then I think this should be OK and yield the same result as just averaging all the G1s and G2s, but I wanted to check to make sure.
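
To make the arithmetic concrete, here’s a quick numpy check. It assumes the stack is a plain, unweighted pixel-by-pixel mean (a stacker that registers frames, rejects outliers, or weights frames would break this identity):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate the G1 and G2 sub-channels of five raw frames (tiny arrays
# for illustration; real frames are full-size images).
g1 = rng.normal(1000.0, 30.0, size=(5, 8, 8))
g2 = rng.normal(1000.0, 30.0, size=(5, 8, 8))
gavg = (g1 + g2) / 2.0  # how we assume the Gavg frames are built

# An unweighted pixel-by-pixel mean of ALL frames (G1 + G2 + Gavg)...
stack_all = np.concatenate([g1, g2, gavg]).mean(axis=0)

# ...versus the mean of only the original G1 and G2 frames.
stack_g1g2 = np.concatenate([g1, g2]).mean(axis=0)

# Because each Gavg frame is itself the mean of its G1 and G2, including
# it doesn't change the unweighted average at all.
print(np.allclose(stack_all, stack_g1g2))  # True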

Thanks,
Brian

Hi, Brian!
First, thanks for asking a very good question. At the root of this question is whether (and in what way) photometry gets better (or worse) when you stack multiple single-color-channel images. This is a question that the AAVSO’s Smart Telescope Working Group has been looking at (and for which we don’t yet have a definitive answer).

Because Gavg was built from G1 and G2, there’s no new information contained in the Gavg image, and I would leave it out of the stacking process. (Or, alternatively, leave out the G1 and G2 images and only stack the Gavg images.) One of the issues we found with stacking G1 and G2 (or with averaging them) is that the images are displaced (shifted) from each other by a pixel or so. This doesn’t seem like much, but on the S50, star images tend to be pretty small (we’re seeing FWHM of about 3 pixels in typical amateur skies), so shifting by a pixel can introduce an error when centering the measurement aperture.
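
To illustrate where that shift comes from: the two green samples sit on diagonally offset pixels of the Bayer mosaic, so the extracted G1 and G2 sub-images don’t sample the sky at the same positions. A toy sketch (assuming an RGGB pattern purely for illustration; the S50’s actual filter layout may differ):

```python
import numpy as np

# Toy 4x4 sensor, assuming an RGGB Bayer pattern (illustrative only):
#   R   G1   R   G1
#   G2  B    G2  B
mosaic = np.arange(16.0).reshape(4, 4)

g1 = mosaic[0::2, 1::2]  # green pixels on the red rows
g2 = mosaic[1::2, 0::2]  # green pixels on the blue rows

# Each g1 sample sits one row up and one column right of the nearest g2
# sample, so the two extracted sub-images are diagonally displaced on
# the sky -- noticeable when stellar FWHM is only ~3 pixels.
print(g1)  # [[ 1.  3.]  [ 9. 11.]]
print(g2)  # [[ 4.  6.]  [12. 14.]]
```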

We’d love to hear more about what you’re trying. Try things both ways (or as many ways as you can think of), and pass along the results of your experiments in the forum.

  • Mark Munkacsy (MMU)

Hi Mark,

Thanks for the info. Up until now I have mostly just been playing around with the Seestar, but I’m going to be more systematic and try some different photometry scenarios, as you suggest, and see how the results compare. I can think of no fewer than four different stacking approaches:

  1. Stack all the TG images in VPhot, including the Gavg images
  2. Stack only the G1 and G2 images in VPhot
  3. Don’t stack in VPhot but use the stacked FITS file that the Seestar provides for the target
  4. Stack the raw images from the Seestar using a different app - I have been using Siril to stack my non-variable DSO images so far

I had assumed that stacking in VPhot was probably better than the other options, on the chance that VPhot’s stacking algorithm has special logic making it particularly appropriate for photometry, but I don’t know whether that’s the case.

One thing I noticed when stacking in VPhot that I wanted to ask about: in some of my variable images, if I stack 5 or 10 minutes’ worth of 10-second exposures, I often get star trails curving in circles around the center of the image (see attached). If I stack only 2 or 3 minutes’ worth of images I don’t see this (or at least it’s not clearly noticeable). Image rotation from the alt-az mount couldn’t be happening in such a short period of time, could it? The interesting thing is that the Seestar’s stacked image of the same raw frames does not show this effect, so maybe this is just a difference in the stacking algorithms?

In any case, I’m really enjoying using the Seestar and look forward to the findings of the Smart Telescope Working Group - I’m sure they will be a big help to me and to the many other visual variable star observers who are using a smart scope to break into digital photometry for the first time.

Brian Scott (SBQ)

Hi, Brian:
Yes, that’s a classic example of field rotation, caused by stacking images from an alt-az mount using only left-right/up-down translation during the stacking process. The Seestar corrects for field rotation when it stacks; VPhot doesn’t. The amount of rotation you get depends on where in the sky you are imaging. The highest rotation rates occur as an object crosses near the zenith (straight up), while there are other spots in the sky where the rotation rate is near zero – so how long you can go before you start to see circular trails depends on both the total time you are stacking and the location in the sky.
(By the way, circular trails are bad for photometry, because they mean that the amount of a star’s light falling inside a fixed-radius circular measurement aperture depends on where the star is in the image (center vs. edges) – this creates a systematic measurement error across the image.)
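
If you want to estimate the effect for a particular target, the standard alt-az field rotation formula gives the rate as 15.04°/hr × cos(latitude) × cos(azimuth) / cos(altitude). A rough sketch with made-up numbers (not taken from your images):

```python
import math

def field_rotation_deg(lat_deg, az_deg, alt_deg, minutes):
    """Field rotation accumulated over a stack, from the standard
    alt-az rotation-rate formula (degrees; the sign gives direction).
    The rate blows up as altitude approaches 90 deg (the zenith)."""
    rate_deg_per_hr = (15.04
                       * math.cos(math.radians(lat_deg))
                       * math.cos(math.radians(az_deg))
                       / math.cos(math.radians(alt_deg)))
    return rate_deg_per_hr * minutes / 60.0

# Hypothetical example: latitude 40 N, target at azimuth 30 deg,
# altitude 70 deg, stacking for 5 minutes:
rot = field_rotation_deg(40.0, 30.0, 70.0, 5.0)
print(f"rotation over the stack: {rot:.2f} deg")  # ~2.4 deg
# A star 500 px from the image center would trail by roughly
# 500 * radians(rot) ~ 21 px -- easily visible.
```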

Within the Smart Telescope Working Group, we’re generally trying to stay away from stacking except when trying to get data on a star that’s too faint to show up in any single exposure. In general, we seem to get better results by averaging multiple measurements (from multiple images).

  • Mark

Hi Mark,

That’s interesting; I didn’t realize that the effects of field rotation would be visible over such a short period of time - I imagined you’d only need to worry about it on longer runs.

I am starting to try some of the different stacking options I listed above to compare the results, and have a question. When you’re looking at a single-image photometry report, what is your primary criterion for deciding whether it is likely accurate? Do you rely first on how close the check star estimate is, or is the size of the Err value for the target more significant? I’ve found that the two don’t always agree, in the sense that with one “process” (different comp stars, or different stacking options as here) the target may have a larger Err even though the check star estimate is closer to the official magnitude. Is it possible to say in such a case which estimate is “better”?

Brian

Ah, this is indeed The Hard Question.
A couple years ago, the AAVSO chartered the Data Quality Task Force, and we spent a lot of time talking it through. My takeaway was that you should start by imaging one of the AAVSO standard fields (see Standard Fields – for the Smart Telescope Working Group, we’ve been using the SA20 field a lot with the Seestar S50). From your images of the field that you choose, you can calculate two numbers that together give a good description of photometric quality:

  1. Repeatability: Take a whole bunch of images of a given field (at least 20-25) and find some way of setting a consistent zero point (either a single comp star or an identical ensemble of at least 16 stars), then measure the magnitudes of the known standard stars in that field. For each standard star, put all your measurements of it (one per image) into a spreadsheet and calculate the standard deviation of that list. Average that set of standard deviations (one per standard star). That average standard deviation tells you how repeatable your measurements are from image to image.
  2. Correctness: Average all of your measurements for each star, so that you have a mean measured magnitude for each standard star to compare to the AAVSO’s reference magnitude for that star. The difference is your error for that star. (The average error should be pretty close to zero.) Calculate the standard deviation of those errors. That’s your measure of correctness. (Both calculations are sketched in the Python snippet below.)

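Here’s a minimal sketch of both calculations, assuming you’ve already pulled your photometry reports into an images-by-stars array of magnitudes (the array layout and the fake data are just for illustration):

```python
import numpy as np

def repeatability(mags):
    """mags[i, j]: measured magnitude of standard star j on image i,
    with a consistent zero point already applied. Returns the average
    (over stars) of each star's image-to-image standard deviation."""
    return np.std(mags, axis=0, ddof=1).mean()

def correctness(mags, ref_mags):
    """Spread of the per-star errors: each star's mean measured
    magnitude minus its AAVSO reference magnitude."""
    errors = mags.mean(axis=0) - np.asarray(ref_mags)
    return np.std(errors, ddof=1)

# Fake example: 25 images of a field with 16 standard stars.
rng = np.random.default_rng(1)
ref_mags = rng.uniform(9.0, 12.0, size=16)
mags = ref_mags + rng.normal(0.0, 0.02, size=(25, 16))

print(f"repeatability: {repeatability(mags):.3f} mag")
print(f"correctness:   {correctness(mags, ref_mags):.3f} mag")
```
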
Now you’ve got two numbers: repeatability and correctness. Different error sources appear differently in those two numbers. For example, color-related measurement errors will show up in correctness but shouldn’t really affect repeatability. Too-low SNR will strongly affect repeatability but (if you have enough images) will affect correctness less – unless you’ve got issues with the way background is subtracted during photometry, which can create correctness problems that primarily affect low-SNR stars; we see this with some smart telescope image processing techniques.
In your case, field rotation issues will affect correctness but probably won’t affect repeatability as much. The Smart Telescope Working Group has seen the de-Bayering process affect both repeatability and correctness. But try it and see. To test different stacking approaches you’ll need lots of images, because you want at least 20-25 stacked images, each of which will need 10 or 15 raw images.
– Mark


I’ve been successfully stacking Seestar images using Tycho Tracker for EB eclipses and DSCT stars for quite a while now without this rotation effect. I would hope that the Working Group does not recommend against stacking, as I think that would detract from what can be achieved with the Seestar.

For performing time series measurements over a few hours, as people may be aware, that’s a lot of images, and VPhot is not really set up to handle that volume (at least in my case the upload is problematic). There are other offline packages that are up to the task of processing and stacking large volumes of Seestar images.


Andrew:
Thanks for the feedback! Yes, you’re right that VPhot has a hard time with that volume of images (not to mention the out-of-pocket cost to the AAVSO for the associated bandwidth and storage). The Working Group sees an evolution away from uploading images to VPhot; instead, images will be converted to a “starlist”, and the starlist will be uploaded in place of the image. The starlist contains information on each star found in the image rather than pixel-by-pixel data. Starlists can be processed much faster (and at less expense) than images. The Working Group has some experimental tools for this – it’s an active area of work. (The starlist file specification will be published by the AAVSO and is based on JSON, so it should be pretty easy for anyone to work with.)
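
Just to give the flavor (the specification hasn’t been published yet, so every field name below is a hypothetical illustration, not the real format), a starlist boils an image down to one small record per detected star:

```python
import json

# Hypothetical starlist sketch. None of these field names come from the
# (not-yet-published) AAVSO specification; they only illustrate the idea
# of shipping per-star records instead of pixels.
starlist = {
    "image_time": "2024-09-15T03:12:40Z",
    "stars": [
        {"x": 512.3, "y": 488.7,            # detector position (px)
         "ra": 305.8412, "dec": 35.9921,    # sky position (deg)
         "flux": 18234.5, "flux_err": 112.0},
    ],
}
print(json.dumps(starlist, indent=2))
```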

Meanwhile, can you amplify on what you mean by “this will detract from what can be achieved with the Seestar”?

– Mark (MMU)


Not stacking would certainly limit the magnitudes one could effectively reach with the Seestar, I think. My target in the case I have mostly been talking about (V482 Cyg) is at around magnitude 10.9; a single 10 s exposure with the Seestar has an SNR of around 25 for the target, while a stack of roughly three minutes of 10 s exposures has an SNR of 97.
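
For what it’s worth, those numbers are close to what shot-noise-limited stacking predicts, where SNR grows as the square root of the number of frames (a back-of-the-envelope check, assuming ~18 frames in the three-minute stack):

```python
import math

snr_single = 25.0   # single 10 s exposure
n_frames = 18       # ~3 minutes of 10 s exposures
snr_predicted = snr_single * math.sqrt(n_frames)
print(f"predicted stacked SNR ~ {snr_predicted:.0f}")  # ~106, vs. 97 measured
```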

You can increase the exposure time on the Seestar to 30 s, I believe, though I’ve read that doing so greatly increases the percentage of frames it rejects, so I don’t know whether in the end that would be much of an improvement to the overall result.

Brian (SBQ)

Hi Mark,

Regarding your message about how to judge the quality of photometry from the Seestar, thanks for the very detailed information about a way to do such an evaluation. I suspected that I should do a standard field test of some kind but wasn’t really sure how exactly to go about it.

It sounds like I have some homework for the next couple of clear nights!

Brian (SBQ)

Back to the issue of stacking and the possible ways to go about it: I just noticed something interesting when looking at the different stacking “strategies”. When I stack three minutes’ worth of raw Seestar TG images in VPhot (including all the G1, G2, and Gavg frames), I get an SNR for my target star of 97. But when I upload the stacked FITS file that the Seestar gives me and look at its TG frame, the target’s SNR is 290. I know these are different stacking algorithms and I wouldn’t expect identical results, but the large difference surprised me.

Also interesting is the fact that, despite the much better SNR, the photometry result from the Seestar stacked image was worse in terms of both the target star error and the check star estimate than the VPhot stack. (I was using the same comparison and check stars and the same aperture settings for both.)

Brian (SBQ)

Hi Mark

Your post stated, “Within the Smart Telescope Working Group, we’re generally trying to stay away from stacking.” I took that as a potential recommendation from the Group. Apologies, I may have jumped the gun!

Of course, stacking Seestar images is a key benefit and an enabler of using it in the first place, so I would not like to see any guide produced that recommends against that approach.

I would strongly encourage the WG, if it is not doing so already, to consider other software packages already out there that work really well with Seestar images, apart from VPhot. I have submitted thousands of TG measurements of EB minima using a very efficient workflow made possible by Tycho Tracker, and analysis of the times of minimum (ToMs) shows pretty good O-C residuals. In fact, based on my experience over the last year, time series observations are probably the best use of the Seestar, rather than multi-colour photometry.

Regards
Andrew

Hi Andrew,

If you would care to share any details of your workflow using Tycho Tracker I would be very interested in hearing about it. Up to now I have been using VPhot for my variable star image stacking and Siril for stacking of other Seestar images but have been looking at Tycho Tracker because of the many good things I’ve heard about its capabilities.

Brian

Hi Brian

Have a look at my posts in this thread - Stacking Software - Technology / Instrumentation & Equipment - American Association of Variable Star Observers

Cheers
Andrew

Hi Andrew,

Thanks, that’s very helpful. I guess it’s fair to say that you find Tycho Tracker to work well for photometry?

Brian

Hi Brian

Yes, I haven’t yet come across a better package than Tycho Tracker for time series measurements with my Seestar.

Regards
Andrew

I agree. I’m also a fan of Tycho Tracker.

So just to clarify a few things:

  1. The Smart Telescope Working Group is not writing a “guide.” Instead, our aim is to develop an approach that makes it easy for smart telescope owners to contribute meaningfully to stellar research. A key part of that approach is the formalization of an intermediate form (the “starlist”) that can be created either on the smart telescope itself or by the smart telescope observer to eliminate the need to transfer images over the Internet.
  2. The Smart Telescope Working Group has been avoiding stacked images for a pragmatic reason: in our experiments, the only clear advantage to stacking is when an “important” star is not detectable in any individual image. In particular, using stacking to improve SNR over that of the individual images is usually equivalent to combining the data from the starlists of each individual image (see the sketch after this list). Brian’s earlier example of stacking 18 exposures to increase SNR by a factor of about 4 reinforces this: 4 is close to the square root of 18. The Working Group’s current thinking is that stars too faint for detection in the individual images are probably not great targets for smart telescopes and should be observed with different equipment, in large part because of pixel scale and the challenge of blending with adjacent faint stars at typical smart telescope focal lengths.
  3. Some targets (and some observing programs) are much better matches for smart telescopes than others – for example, measuring time of minimum or extracting complete light curves for short-period variables. One of the things the Smart Telescope Working Group is thinking about is what to make of this; should this become a recommendation to smart telescope observers? Should this affect the way AAVSO communicates target stars to the vendors? Is there anything we can do about the data fields in a starlist because of this?
  4. Some (many?) of the smart telescope vendors use algorithms to clean up images, usually applied as part of the stacking process. These algorithms generally mess up photometry and/or the image noise statistics, which in turn affects noise estimates pulled from the images. In addition, there’s an interaction between image stacking and the assumed system gain (measured in electrons per ADU), and system gain plays an important role in the computation of SNR. All of that is just a long-winded way of saying that each smart telescope vendor seems to do stacking in a slightly different way, which has made it difficult for the Working Group to get our arms around ensuring that the AAVSO database doesn’t degrade as a result of images passing through an “astrophotography” optimization cycle before photometry is performed.

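As a concrete illustration of point 2, here is one simple way the per-image (starlist-level) measurements can be combined; the inverse-variance weighting is just my choice for the sketch, not necessarily what our experimental tools do:

```python
import numpy as np

def combine_measurements(mags, mag_errs):
    """Inverse-variance weighted mean of single-image magnitudes,
    with the uncertainty of the combined value."""
    mags = np.asarray(mags)
    w = 1.0 / np.asarray(mag_errs) ** 2
    mean = np.average(mags, weights=w)
    err = 1.0 / np.sqrt(w.sum())
    return mean, err

# 18 hypothetical single-image measurements, each good to 0.04 mag,
# combine to ~0.04 / sqrt(18) ~ 0.009 mag -- the same sqrt(N) gain a
# stack would give, without the field-rotation risk.
mean, err = combine_measurements(np.full(18, 10.90), np.full(18, 0.04))
print(f"{mean:.3f} +/- {err:.4f}")  # 10.900 +/- 0.0094
```
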
Don’t let any of my words discourage any of you! What you’re doing with your smart telescopes is fantastic; you are helping the entire community understand better just how transformative these telescopes can become to our mission of enabling anyone to do science from their backyards. And the way you’re sharing your observations, suggestions, experience, and questions on this forum is helping all of us.

– Mark (MMU)


I said earlier that I would experiment with a few different stacking methods and report back what I found. Of course this is only for a single set of 18 10 s observations of a single target, so I wouldn’t try to conclude too much from it, but here are the results. I included both the check star delta (the difference between the photometry report’s check star estimate and the given reference magnitude) and the “Err” value on the target star in the report.

The main things I noticed were that using the Seestar stacked file gave the worst results in terms of both the check star delta and the target error, and that there were minor differences among the different approaches to stacking inside VPhot. I had thought these might all come out the same, being just different combinations of the individual G channels and the G-average channel, but there were slight differences. Going by the check star result, stacking just the Gavg channels was best; going by the target Err value, stacking all the TG images was slightly better.

I used the same set of three comp stars and the same aperture settings in each case.

Brian (SBQ)

Hi Mark,

I took some images of SA20 last night in order to start doing the kind of check you suggest here, and had a question or two. I’ve uploaded my images and started stacking them in small enough batches to avoid field rotation. I have chosen to stack 15 images at a time, which is about 3 minutes’ worth; rotation is not evident on that small a time scale for this target.

When I load the AAVSO standard stars I get 17 in total. If I understand your suggestion correctly, I should choose one comp star - which will be used for the whole sequence of stacked images - and make the other standard stars targets? Then the photometry report will give me estimates of those 16 standard stars, and I can repeat this for the other stacks of 15. Do I have this right?

As an alternative you suggest using a large ensemble of comp stars (at least 16), but with only 17 standard stars showing up in total that doesn’t seem to be an option. (I could try stacking more images at a time, which presumably would load more standard stars, but I don’t know at what point rotation might become an issue.)

Anyway, just want to see if I’m understanding this correctly…

Thanks,
Brian (SBQ)