Status Update from the AAVSO Smart Telescope Working Group

Mark,
Thank you for that explanation. As I wrote before, I’m not a smart telescope user, so I shouldn’t take up too much of your time asking for explanations.

However, my intuition tells me that obtaining a magnitude observation from a 3-hour stack is problematic, because it forces you to choose which stacks to discard when they overlap an eclipse, even partially. I don’t know how you would do that.

I don’t really understand why anyone would do this. If I were setting up a programme to observe such a star to capture the long-term variability, I would just define a much shorter duration for each stack. The long-term variability would then be captured by many such observations taken during the out-of-eclipse phases.

Hi Roy:

Have you contacted Han, the author of ASTAP? He has been very responsive to email questions I have had, and also to some bugs which I have found.

I have recently applied the transformation coefficients determined using standard fields, and have immediately seen my reported errors in the report decrease.

Scott

Hi Scott,
I haven’t contacted Han, but since my previous post mentioning problems with ASTAP I have a routine that works. I calibrate and extract TG and TB files, open them in AstroImageJ to align them, then perform photometry in AstroImageJ. The output data go into spreadsheet templates which apply transformation coefficients and calculate transformed magnitudes. Yes, I understand that ASTAP appears to do all that, but I’ve used AIJ for years on FITS files from astro cameras, and the spreadsheets give me complete control.

Hi @mark_munkacsy , in terms of the AAVSO STWG, is the end goal primarily an automated workflow for submissions from smart telescopes, or will the WG also be providing guidance for smart telescope submissions processed independently of the AAVSO pipeline?

I am particularly interested in guidance on two aspects: (a) use of a UV/IR cut filter, and (b) debayering vs. processing R+G+G+B as clear/luminance. For example, TR, TG and TB for debayered with UV/IR cut; TR (debayered) and/or CR (R+G+G+B) for smart telescopes without UV/IR cut (e.g. Unistellar); and CV and/or CR (R+G+G+B) for both with/without UV/IR cut for the most basic of measurements.

Thanks,
Raymund

Hi, Raymund:
Probably the best response to your question is that the STWG is looking at both of those things. Yes, we have an end goal of an automated submission pipeline (and our development version of that pipeline recently surpassed the 1-million star milestone – we’ve pushed over 1,000,000 smart telescope photometry measurements through the pipeline under development), but along the way we’ve dealt with exactly the questions that you’ve asked.
For example, we’ve looked at both the UV/IR cut filter as well as the “nebula filter” that many smart telescopes bring to the table. We get good results with the UV/IR cut filter in place. We’ve done a small number of tests with the nebula filter as well, and have found that our luminance channel data (transformed to Johnson V) with the nebula filter is just as good as green channel data transformed to V. [We’ve done no testing to date without a UV/IR cut filter.]
And after a lot of testing, we’ve walked away from traditional deBayering (splitting the original image into red, green, and blue derivative images that are processed independently). In every test scenario, we found we got better results (lower residual errors on standard Landolt fields) using what we call “masked RGB photometry”: there is always only one image and one, color-independent (x,y) centroid location on the image for each star, with that centroid location being the center of the photometry aperture for all colors. We then sample just the red, just the green, and just the blue pixels within that one aperture to get red, green, and blue channel photometry. We sample all the pixels to get luminance channel photometry. We suspect this works well because partial pixel calculations are correct and consistent from color to color and because we don’t get the centroid shifting that we found when doing traditional deBayering. For stars with “non-extreme” colors (no strong emission or absorption lines), the green channel and luminance channel photometry transforms equally well into Johnson V.

Mark:

I’ll try to understand a bit differently. Your recommendation is as follows:

  1. Do not split the full Bayer array (e.g., BGGR) image into four separate single bayer color channel images (TB,TG,TG,TR)? Mainly due to poor centroiding of the stars across the different color pixels and subsequent systematic errors in the individual magnitude measurements?
  2. Instead, pretend that all the bayer array pixels are the same and just add their individual flux as you would with a monochrome image? Do you use a similar measurement aperture as you would for a mono camera, or do you use a larger aperture than normal so that you have a sufficient number of individual color channel pixels to meet the Nyquist criterion for all channels?
  3. IOW, you accumulate all blue channel flux, green channel flux and red channel flux inside the measuring aperture. so, you have accumulated something akin to a clear filter/bandpass flux?
  4. You then conduct normal differential aperture photometry using V magnitude comp stars and report the magnitude of the target as CV?
  5. You then transform this CV magnitude to Johnson V magnitude? You find this procedure MOST reliable/accurate?
  6. The problem you observed when separating Bayer channels is the star centroids shift very slightly between image pixels so the amount of target flux changes between the bayer color channels from one image to another and therefore you measure different color pixel fluxes and the measured magnitude has quite a bit of systematic error? Did you try to use a larger aperture radius to include more color pixels of each type and average this error out?
  7. You then stated: “We then sample just the red, just the green, and just the blue pixels within that one aperture to get red, green, and blue channel photometry.” Does this mean that you subsequently accumulated only individual Bayer filter/bandpass (i.e., B,G,or R) pixel fluxes that exist inside the same measuring aperture as used above for all pixels to get CV flux? You certainly have fewer pixels in this case and partial pixels as you noted. But, you indicated this still works. Can you explain with some actual magnitude comparisons?
  8. It is mostly this color sub-sampling that I need help understanding? Are you implying that individual color photometry (B,V,R) can still be conducted with a color camera? Or, are you still recommending that ONLY TG magnitudes and total CV magnitudes are reliable?

Ken

Hi, Ken:

1. Do not split the full Bayer array (e.g., BGGR) image into four separate single bayer color channel images (TB,TG,TG,TR)? Mainly due to poor centroiding of the stars across the different color pixels and subsequent systematic errors in the individual magnitude measurements?

Yes, with one slight tweak: there seem to be two dominant error sources with traditional deBayering: centroiding errors (which we’ve seen approaching a full pixel distance), and photometric measurement error due to undersampling (mostly an issue for the blue and red channels). The good news is that both of these error sources seem to have gaussian distributions around a mean of zero, which means that if you have many exposures, do photometry separately on each image, and average all the resulting data together, both of these errors average out nicely. (Of course, this becomes an issue if you need good time resolution for a time series.)

2. Instead, pretend that all the bayer array pixels are the same and just add their individual flux as you would with a monochrome image? Do you use a similar measurement aperture as you would for a mono camera, or do you use a larger aperture than normal so that you have a sufficient number of individual color channel pixels to meet the Nyquist criterion for all channels?

Yes. We found that using different aperture sizes for the different channels made a significant difference. (But more work is needed on this – our results were kind of backwards relative to our expectations.) We saw smallest errors with one size for blue and red channels, and a different size for the green and luminance channels. I don’t have the graph in front of me, but I recall a “best aperture” for green/luminance that was around 1.8xFWHM and around 0.8xFWHM for blue/red. Arne’s been routinely using multiple aperture sizes, and we’ll probably adopt that.

3. IOW, you accumulate all blue channel flux, green channel flux and red channel flux inside the measuring aperture. so, you have accumulated something akin to a clear filter/bandpass flux?

Yes (I think?). We make four passes over each star centroid (the centroid location stays locked for all four passes; this way we compute partial pixel fractions once (for each aperture size)); in one pass we accumulate just blue pixels (this gives us blue flux), in another we accumulate green pixels (green flux), and in one we accumulate red (red flux). We make a fourth pass to collect our luminance channel using all pixels. (Which would be the same as the red+blue+green sum if the aperture size was the same for all four channels.)
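The four-pass, single-centroid scheme described above can be sketched in a few lines. This is only an illustrative reconstruction under assumed names and a BGGR layout, not the working group’s actual code; in particular, it approximates partial-pixel handling with whole-pixel membership for brevity.

```python
import numpy as np

def bayer_masks(shape, pattern="BGGR"):
    """Boolean masks selecting the R, G, and B pixels of a Bayer mosaic."""
    ys, xs = np.indices(shape)
    cell = {"BGGR": {(0, 0): "B", (0, 1): "G", (1, 0): "G", (1, 1): "R"}}[pattern]
    colors = np.vectorize(lambda y, x: cell[(y % 2, x % 2)])(ys, xs)
    return {c: colors == c for c in "RGB"}

def masked_photometry(image, cx, cy, radius, pattern="BGGR"):
    """Sum flux per channel inside ONE circular aperture.

    The centroid (cx, cy) is shared by all channels, so the aperture
    geometry is identical from color to color -- the key point of the
    'masked RGB photometry' approach described in the post.
    """
    ys, xs = np.indices(image.shape)
    in_aperture = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    masks = bayer_masks(image.shape, pattern)
    flux = {c: image[in_aperture & masks[c]].sum() for c in "RGB"}
    flux["L"] = image[in_aperture].sum()  # luminance: all pixels in aperture
    return flux
```

With equal aperture sizes, the luminance flux is exactly the sum of the three channel fluxes, matching the parenthetical note above.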

4. You then conduct normal differential aperture photometry using V magnitude comp stars and report the magnitude of the target as CV?

We perform differential photometry using massive ensembles (using as many field stars as possible that are not in VSX – giving us ensembles that typically have 100-500 stars for most smart telescope images). Right now we use APASS-10 and ATLAS Refcat2 catalogs for our comparison ensembles. We will typically use V as the reference for the green and luminance channels, B for the blue channel, and Rc (or maybe SR if Rc isn’t available) for the red channel. Our goal is to establish zero points for each image that are very stable from image to image and from observer to observer. We think we’re at the point where zero point scatter (image to image) is significantly less than most of our other residual error sources. Our field of view is small enough that we’re not using Arne’s technique of breaking each image into zones and solving for different zero points in each zone.
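As a rough illustration of the ensemble idea (illustrative function names, not the pipeline’s code): with hundreds of non-VSX field stars, the per-image zero point reduces to a robust average of catalog-minus-instrumental magnitudes over the comparison stars.

```python
import numpy as np

def ensemble_zero_point(instr_mags, catalog_mags, clip_sigma=3.0):
    """Median zero point over the comparison ensemble, with one
    pass of sigma clipping to reject outliers (e.g. unflagged
    variables or blended stars)."""
    diff = np.asarray(catalog_mags, dtype=float) - np.asarray(instr_mags, dtype=float)
    zp = np.median(diff)
    keep = np.abs(diff - zp) <= clip_sigma * np.std(diff)
    return float(np.median(diff[keep]))

def standard_mag(instr_mag, zero_point):
    """Instrumental magnitude placed on the catalog's standard system."""
    return instr_mag + zero_point
```

With 100–500 ensemble stars per image, the statistical scatter of this zero point from image to image becomes small, which is the stability goal described above.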

5. You then transform this CV magnitude to Johnson V magnitude? You find this procedure MOST reliable/accurate?

The standard magnitudes from those four channels (R, G, B, and Luminance) then go through “corrections.” We have correction terms for vignetting, extinction (first order), extinction (second order), and color. We generate correction coefficients over two time intervals: each nightly observing session has a set of corrections for each observer’s telescope, and we also generate a set of corrections for a “season” of up to 90 days of images from that observer. A typical nightly session uses all 100-500 stars in each ensemble during fitting, so that it isn’t uncommon for us to be curve-fitting against several hundred thousand points in each curve.
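The correction step can be imagined as a least-squares fit of ensemble residuals against airmass and color terms. The model below is a deliberately simplified assumption (a zero point, first- and second-order extinction, and one color term; no vignetting term), not the working group’s actual model.

```python
import numpy as np

def fit_corrections(airmass, color, residuals):
    """Least-squares fit of: residual ~ zp + k1*X + k2*X*(B-V) + c*(B-V),
    where X is airmass and (B-V) is catalog color.

    Returns [zero point, first-order extinction k1,
             second-order extinction k2, color term c]."""
    X = np.asarray(airmass, dtype=float)
    bv = np.asarray(color, dtype=float)
    A = np.column_stack([np.ones_like(X), X, X * bv, bv])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(residuals, dtype=float), rcond=None)
    return coeffs
```

When every ensemble star in every image of a session contributes one row, the design matrix easily reaches the “several hundred thousand points” scale mentioned above.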
And then we add a “cadence matching” cycle. For many projects, the natural 10-second image cycle of a smart telescope is faster than what is needed for the science being done. So we sort all the images into “time frames” that have been sized to match target star behavior. (The “time frame” for a long-period variable could easily be 12 hours; the “time frame” for a study of flickering could be as short as 10 seconds.) Within each of those frames we average the corrected standard magnitudes together. This further drives down random errors. For fast time series, the length of each “time frame” becomes as short as 10 seconds, which degenerates into no averaging.
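The time-frame averaging can be sketched as a simple binning step (illustrative code, not the pipeline’s implementation):

```python
from collections import defaultdict
import statistics

def cadence_match(times, mags, frame_seconds):
    """Average corrected magnitudes inside fixed-length time frames.

    times is in seconds; a 10 s frame on a 10 s image cadence puts one
    image per frame and degenerates into no averaging, as noted above.
    Returns (frame midpoint, mean magnitude) pairs sorted by time."""
    frames = defaultdict(list)
    for t, m in zip(times, mags):
        frames[int(t // frame_seconds)].append(m)
    return sorted(
        (k * frame_seconds + frame_seconds / 2, statistics.mean(v))
        for k, v in frames.items()
    )
```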

6. The problem you observed when separating Bayer channels is the star centroids shift very slightly between image pixels so the amount of target flux changes between the bayer color channels from one image to another and therefore you measure different color pixel fluxes and the measured magnitude has quite a bit of systematic error? Did you try to use a larger aperture radius to include more color pixels of each type and average this error out?

The centroiding shifts are an insidious little problem. For one star in one image, there is a single PSF for that star’s image, and that PSF is common to all color channels. If the star centroid is allowed to shift between the red, blue, and green deBayered images, then each color channel ends up sampling a different part of the PSF distribution. This introduces an error that is effectively random, since we haven’t found any systematic behavior in the centroid errors.
As I mentioned, we’re a bit confused as to the connection between aperture size and minimum error residuals. (It may be that what we’re seeing is that when you have small flux, the small aperture works better, and for blue and red channels you only have ¼ of the pixels being sampled, so flux is low and small apertures work better. For green and luminance, you have more pixels, leading to more flux, and a larger aperture works better. Sounds reasonable, but we haven’t proven yet that this is what’s happening.)

8. It is mostly this color sub-sampling that I need help understanding? Are you implying that individual color photometry (B,V,R) can still be conducted with a color camera? Or, are you still recommending that ONLY TG magnitudes and total CV magnitudes are reliable?

We are consistently seeing nearly identical residual errors (around 0.02 mags after averaging) in our green channel and in our luminance channel (both referenced and transformed into V). The blue and red channels (referenced and transformed into B and Rc) show much higher residuals (around 0.08 to 0.12 mags). (All of these are from observations of standard Landolt fields.) Right now it’s our intent to archive the resulting photometry from all four channels.

Hi Everyone! I’m new in this forum and maybe not in the right place. So far I have done only visual observations of variable stars (AAVSO code: BMAT) and, with the Seestar S50, only astrophotography. I have watched Andrew Pearce’s excellent video on YouTube about using the S50 for science, plus some reading here and there to learn how to start photometry with the Seestar. But I couldn’t find a “how to” tutorial or method for beginners.
Could someone recommend a step-by-step method so I could at least try something with the telescope?
Sorry if I’m in the wrong channel…
Manon (beginner)

Manon:

You can contact me offline at kb0fhp@gmail.com, and I will help you as much as I can using a SeeStar and extracting the data.

Scott (MDSA)

That is very nice of you, Scott! Presently the sky is very cloudy here in Québec. As soon as I have some clear skies and can start testing the SeeStar on variable stars, I will certainly have some questions for you.
Tks a lot!

Lately, my magnitude estimates have been varying a lot:

Example on RY Cep on January 4, 2026 (seestar s50; 10s; Altaz):

nb = 37 (best subs)
average = 9.26 (1 AAVSO comp = 93)
Error = standard deviation of 37 estimates = 0.18
SNR = 26 dB = ratio of 398

Although the Error (0.18) seems photometrically unacceptable, the SEM = 0.03 (Error/sqrt(nb)) is still statistically acceptable…
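For reference, the arithmetic spelled out (numbers from the post; note that the SEM describes only the random scatter of the estimates, assuming they are independent):

```python
import math

n = 37          # number of accepted subs
sd = 0.18       # standard deviation of the 37 estimates
sem = sd / math.sqrt(n)   # standard error of the mean, about 0.03
```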

So a simple question: are my results still acceptable to the AAVSO, or should I avoid reporting them?
And what is the reasonable limit?

Thanks a lot !

Remember that SD is a measure of the spread of the individual estimates and therefore doesn’t tell you how accurate the average is. For that, look at the results for your check star. Better still, experiment using photometric standards as targets.


I’m very interested in this topic. I have a Unistellar eVscope. I know some folks who submit data from Unistellar scopes and I want to investigate that. Some Unistellar data is on AAVSO, but I understand there have been some issues with continuing, since not all Unistellar scopes can install a UV/IR cut filter. Is this group looking at Unistellar data? If you need any data, I would love to provide it. Oh, I see Steve is in this group, representing Unistellar.

Hi, Karen:
We don’t have much of anything in our “Sample Data” collection from the eVscope. (The little bit that we have is from a borrowed eVscope, so we don’t have a lot of confidence in that set of files.) It would be good to have a better collection of eVscope images. If you can provide a full set of images from one evening of observing (extra credit if some of the targets are variable star fields!), that would be great. We can then go through the process of configuring our “image to starlist” converter tool to work with these images and then do some quick quality checks.

An immediate question that will help us: are the raw Unistellar FITS images already corrected with darks and flats, or is that something that we need to do as part of processing? (Different vendors have different definitions of what they mean by “raw.”)

Thanks,
– Mark

The raw Unistellar FITS images are not corrected with darks or flats. That would need to be done as part of the processing.

Steve Barkes
Las Cruces, NM

I would love to do that. First I have to ask whether you would like me to use a UV/IR cut filter. The variable star capture process in Unistellar’s citizen science program does not have us use a UV/IR filter. I think I could collect data files both ways. I can do a collection for several targets that are active campaigns. I am also testing a beta app for the Unistellar developers, and I have discovered there are some errors in the FITS headers. This beta version flipped the data, which changed the debayer pattern; however, the FITS header does not reflect that. Unistellar told me they would fix the FITS files. The raw data collected is not corrected, as far as I can see, and the dark files are captured separately. I will collect flats as well. I will start looking for some good targets.

Karen:
Well, in a perfect world I’d like to have one night with a UV/IR cut filter and then a second night without. (It doesn’t need to be the same targets.)

I don’t particularly care what fields you use, but if you could include about 15 images of a single one of the Landolt standard fields (your choice which one), that would be fantastic.

Please don’t try to fix any of the FITS header values – just give it to us the way Unistellar encodes them.

And finally, yes, please include a master dark and master flat (or give us a bunch of raw darks and flats) from that same night.

This would be wonderful!
– Mark