Evaluating quality of photometry from Seestar in VPhot

I have been using my Seestar to do some photometry for about a month and am trying to get a sense of how good the results are. Following suggestions from Mark Munkacsy and others, I took a set of 10-second images of the AAVSO standard field SA20. The individual images did not have enough SNR for many of the standard stars to show up well, so I stacked them in batches of about 2 to 2.5 minutes each (longer stacks could introduce field rotation issues) and ended up with 30 stacked images to analyze, gathered over two nights. Sixteen standard stars were visible in these images with high enough SNR to be usable, so I divided them into 7 comp stars and 9 targets, each group with a mix of magnitudes and B-V values.
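In case it is useful to anyone, the batching step is simple to script. Here is a rough sketch (not my exact workflow) of averaging pre-aligned 10-second FITS subframes into roughly 2.3-minute stacks with astropy; the folder name, file pattern, and batch size below are just placeholders.

```python
# Rough sketch (not the exact workflow): average pre-aligned 10-second FITS
# subframes into ~2.3-minute stacks. Folder name, file pattern, and batch
# size are placeholders.
from pathlib import Path

import numpy as np
from astropy.io import fits

SUB_DIR = Path("sa20_subs")   # hypothetical folder of aligned subframes
BATCH = 14                    # 14 x 10 s ~= 2.3 minutes per stack

subs = sorted(SUB_DIR.glob("*.fit*"))
for i in range(0, len(subs) - BATCH + 1, BATCH):
    batch = subs[i:i + BATCH]
    data = np.mean([fits.getdata(f).astype(float) for f in batch], axis=0)
    header = fits.getheader(batch[0])   # carry over WCS/time info from the first sub
    fits.writeto(f"stack_{i // BATCH:03d}.fits", data, header, overwrite=True)
```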

I used the same comp stars and the same aperture settings for each analyzed stacked image. Then I looked at the differences between my observed magnitudes of the targets and the AAVSO reference magnitudes, and also took the standard deviation of my estimates for each target. (All my measurements use the TG channel with no transformation applied.) A summary of the results is as follows:

Defining “accuracy” to be my observed magnitude minus the reference magnitude, the average accuracy across all targets was -0.009. The best accuracy among the target stars was -0.002 and the worst was -0.023. The worst single target observation among the roughly 270 total observations differed from the reference magnitude by 0.069.

The average standard deviation among all the targets was 0.018, with the best being 0.011 and the worst 0.026.
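For reference, the bookkeeping behind these numbers is a small script. A minimal sketch, assuming the per-image target magnitudes from VPhot are exported to a CSV (the file and column names here are made up):

```python
# Illustrative sketch of the summary statistics described above, assuming the
# VPhot target magnitudes are exported to a CSV with one row per
# (stacked image, target) pair. File and column names are assumptions.
import pandas as pd

obs = pd.read_csv("sa20_targets.csv")           # columns: target, obs_mag, ref_mag
obs["error"] = obs["obs_mag"] - obs["ref_mag"]  # "accuracy": observed minus reference

per_target = obs.groupby("target")["error"].agg(["mean", "std"])
print(per_target)                                # mean error and scatter per target
print("average accuracy:", per_target["mean"].mean())
print("worst single observation:", obs["error"].abs().max())
```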

There didn’t seem to be any trend based on the magnitudes of the target stars - that is, the fainter ones did about as well on average as the brighter ones. Color did seem to make a difference; the three reddest target stars (B-Vs just over 1) and the bluest (B-V of -0.5) all had greater standard deviation among their measurements and slightly poorer accuracy than the targets that had B-V values ranging from roughly 0.4 to 0.8.

The results seem pretty good to me, and better than I expected. But, I admittedly don’t have much experience with all this stuff yet and don’t have a good idea what a reasonable expectation really is here. Any thoughts from those with more experience would be welcome…

Brian (SBQ)


Brian:

  1. You are gathering experience by carrying out such analyses repeatedly. Better than just asking for comments.
  2. Using very high quality comparison stars (precise and accurate primary standards like Landolt standards) is the key to such an analysis, as Mark recommended.
  3. A few plots of your data might help provide more statistically valid conclusions to define ‘pretty good’ precision / accuracy more soundly. The slope and scatter of your plots really help answer your question concerning quality.
  4. Plot O-K of your targets vs their B-V. The observed slope should tell you something about your TG transformation. (See the sketch after this list.)
  5. Switch your targets and comps to analyze another set of stars. Are the results similar? I hope so?
  6. Add/try some of the fainter comps as targets as well to see how std deviation trends with magnitude / SNR. What did you use as your SNR selection criterion (SNR>100?) in your analysis above? Fainter comps / smaller SNR should impact the standard deviation more significantly.
  7. IOW, I recommend that you expand your analysis using the data you have and then perhaps consider another SA field. Or pick two more fields and then present your method and all your findings at the next AAVSO Meeting?
  8. And yes, IMHO your precision and accuracy are very good, especially considering that you were using a Bayer array G filter as opposed to a V photometric filter and working with the short exposures provided by your SeeStar!
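For item 4, a minimal sketch along these lines would do (the file and column names are placeholders; use whatever table of per-target O-K and B-V values you already have):

```python
# Minimal sketch of item 4: plot O-K (observed minus known) for each target
# against its B-V and fit a line. File and column names are assumptions.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

t = pd.read_csv("target_summary.csv")     # columns: target, bv, mean_o_minus_k
slope, intercept = np.polyfit(t["bv"], t["mean_o_minus_k"], 1)

plt.scatter(t["bv"], t["mean_o_minus_k"])
x = np.linspace(t["bv"].min(), t["bv"].max(), 50)
plt.plot(x, slope * x + intercept)
plt.xlabel("B-V")
plt.ylabel("O - K (TG)")
plt.title(f"slope = {slope:.3f}")
plt.show()
```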

Ken


Ken,

Thanks for the detailed response, it’s much appreciated.

When you say “plot O-K against B-V”, I assume O and K are “observed” and “known” magnitudes respectively, is that right?

Brian

Brian, thanks for evaluating Seestar’s accuracy, which speaks to how well the instrument’s mean result agrees with a photometric standard. I am speaking to my local club next week about using Seestar to observe eclipsing binaries, and your post helps my thinking as I put together that presentation.

Another aspect of quality is the scatter of individual observations around the instrumental mean. In the case of eclipsing binaries between V magnitude 9 and 11, I’m finding scatter to be on the order of 0.05 to 0.10 magnitude. Part of the reason for this scatter is that the Seestar’s field “jumps around” from one image to the next. This causes light to fall on different pixels with each image, and no two pixels have the exact same response. If I could figure out how to make flat fields, that might go a long way towards reducing the scatter. Another way to minimize the impact of scatter is to take many images, as you explained. Using 10 second exposures, I normally acquire about 700 images during a three hour time span centered on minimum. This large number of images makes it easy to fit a mean light curve through the data to find the time of minimum light. All in all, the Seestar is proving to be a great telescope to use in variable star observing.
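As a rough illustration of the idea (a sketch only, not my exact routine), a low-order polynomial fit around mid-eclipse can pull the time of minimum out of a dense, noisy series like this:

```python
# One simple way (not necessarily the method used above) to fit a smooth
# curve through a dense, noisy time series and read off the time of minimum:
# a low-order polynomial fit near mid-eclipse. jd and mag are assumed arrays
# of observation times and magnitudes.
import numpy as np

def time_of_minimum(jd, mag, order=4):
    """Fit a polynomial to (jd, mag) and return the JD where it is faintest."""
    jd = np.asarray(jd, dtype=float)
    mag = np.asarray(mag, dtype=float)
    t0 = jd.mean()                          # center times for numerical stability
    coeffs = np.polyfit(jd - t0, mag, order)
    fine = np.linspace(jd.min() - t0, jd.max() - t0, 5000)
    model = np.polyval(coeffs, fine)
    return t0 + fine[np.argmax(model)]      # maximum magnitude = minimum light
```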

Andy (HOA)


Andy,

Do you need to stack to get sufficient SNR for the variables you’re talking about here (magnitude 9 to 11 or so), or are you able to make a successful time series just from the individual 10 second images? I have tried using the longer exposures (20 and 30 seconds) but so far have found that the Seestar drops so many frames at longer exposures that it usually doesn’t help very much, if at all.

Brian

Hi Brian,

At declinations below 60 degrees, I use 10 sec integrations. This works with stars as faint as mag 12.5. Above 60 degrees, I’ve used 20 sec integrations with good success. To answer your question, I work with the individual unstacked frames to generate time series data. I have considered working with stacked data, but this would require manually restarting the image stack periodically, which in my case would be every 3-5 minutes. That’s a lot of trouble, so my preferred method is simply to work with the unstacked images that Seestar saves to the *_sub folder. I download this folder after each run and then extract the TG data to plot a light curve.
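As a rough sketch of that extraction step (with several assumptions: each sub already holds green-channel data, carries a usable WCS from plate solving, and has a JD keyword; the coordinates, folder name, and aperture radius below are placeholders), astropy plus photutils can pull out a differential light curve:

```python
# Rough sketch, not the exact pipeline: assumes green-channel FITS subs with
# WCS and a JD keyword; no background subtraction, just to keep it short.
from pathlib import Path

import numpy as np
from astropy.io import fits
from astropy.wcs import WCS
from photutils.aperture import CircularAperture, aperture_photometry

TARGET = (300.123, 45.678)   # hypothetical RA, Dec in degrees
COMP = (300.456, 45.654)

jd, dmag = [], []
for f in sorted(Path("MyStar_sub").glob("*.fit*")):
    with fits.open(f) as hdul:
        data = hdul[0].data.astype(float)
        hdr = hdul[0].header
    xy = WCS(hdr).wcs_world2pix([TARGET, COMP], 0)        # sky -> pixel, per frame
    phot = aperture_photometry(data, CircularAperture(xy, r=6.0))
    counts = np.array(phot["aperture_sum"])
    jd.append(hdr.get("JD"))                               # assumes a JD keyword
    dmag.append(-2.5 * np.log10(counts[0] / counts[1]))    # target minus comp
```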

Andy (HOA)

Yes, Observed and Known mags. You already have this parameter calculated so it will plot easily.

Perhaps a better standard field to use would be Melotte 111, which has brighter stars and many more of them available. The cluster is in the evening skies in spring, or, if you are a night owl, you could catch it in the morning.

Plots of your data would be much better for evaluation and comment generation.

Here is one I did when I was using a DSLR: Pentax K3II + 300mm f/4 + 1.4x extender, 630mm efl, a decent image scale. Using 23 stars in SA20, the slope of the fit used to transform to V via B-V was -0.16986, with a coefficient standard error of 0.021. Here is the plot; not bad for a DSLR.
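If you want to reproduce that kind of fit, here is a hedged sketch (the CSV and column names are placeholders): regress V - g against B-V and take the slope and its standard error from the fit covariance.

```python
# Sketch of the kind of fit described above: regress V - g (catalog V minus
# instrumental g) against B-V for the standard stars; the slope is the
# transformation coefficient. File and column names are assumptions.
import numpy as np
import pandas as pd

stars = pd.read_csv("sa20_standards.csv")   # columns: V, g_inst, B_V
y = stars["V"] - stars["g_inst"]
coeffs, cov = np.polyfit(stars["B_V"], y, 1, cov=True)
slope, slope_err = coeffs[0], np.sqrt(cov[0, 0])
print(f"slope = {slope:.5f} +/- {slope_err:.3f}")
```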

Jim (DEY)


I took the advice of Ken and others and plotted my O - K values against the B - V values of the stars; this is the result:

The yellow points are the first set of data I gathered from my SA20 observations (described in my original post), and the green points are the result of swapping the comp and target stars among the roughly sixteen standard stars that VPhot identified in my images and following the same process.

I’m not sure how to explain the difference between the two data sets - the target stars in the second set (green) were a bit fainter on average than those in the first (yellow) set, but the same set of images was used, as well as the same aperture settings. The only change was, as noted, swapping the comps and targets.

Brian

Brian,

I have two suggestions and one comment. First, if you reverse your axes - plot B-V on the X axis and TG minus V on the Y axis - that is a more standard presentation. Second, if you have the instrumental magnitudes (g, as opposed to the actual TG magnitude) and plot V - g against B-V (as in Jim’s post), you will actually have a transformation plot, the slope being the transformation coefficient Tv_bv for your system!

The comment is that, just looking at the results, your G filter matches Johnson V more closely than some others do (e.g., a typical DSLR G channel). Thus, across a B-V range of about 1.0, your TG - V is quite small, about 0.03 to 0.04. That is good news, since the B-V difference between target and comp stars will influence the TG magnitude by only a small amount. If your target/comp B-V difference is itself small, the error attributable to that difference will be very small indeed.
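To make that concrete, here is a tiny worked example in the standard single-comp differential form (all the numbers are made up): the color correction is Tv_bv times the target/comp color difference, so a small coefficient or a small color difference keeps it tiny.

```python
# Illustration of the point above in differential form; every value here is
# made up for the example.
Tv_bv = 0.04                             # hypothetical coefficient for a TG-like channel
tg_target, tg_comp = 10.512, 10.000      # instrumental (untransformed) magnitudes
V_comp = 10.021                          # catalog V of the comp star
bv_target, bv_comp = 0.85, 0.55          # catalog colors

V_target = V_comp + (tg_target - tg_comp) + Tv_bv * (bv_target - bv_comp)
print(round(V_target, 3))                # the color-correction term here is only 0.012 mag
```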

Roy


Roy,

Thanks for the tips. I redid the graphs as you suggested - for some reason I could not get the two sets of data to show up properly on the same graph, so I made two:

I intend to look at creating real transformation coefficients in the near future, so thanks for the info on that. (We did talk about transformations in Ken’s recent VPhot course, but I can’t say I really digested it all at the time. :) )

Brian:

  1. What is the same between the plots of the two target/comp sets? Are the slopes similar? If so, that implies the filter transform coefficient is the same - a good thing, since the filter is the same.
  2. What is different between the plots of the two target/comp sets? Is there an intercept difference? Check the average color (B-V) of the two comp ensembles and compare them to the average color of the target sets. An average color difference between the two ensembles would yield such an offset. Is the average comp ensemble color difference large? (See the sketch after this list.)
  3. IF NOT, ain’t science grand! Is random error just a little too big to be sure about differences? ;-(
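A quick sketch of the checks in items 1 and 2 - a function you could point at whatever arrays you already have; nothing here is specific to VPhot:

```python
# Sketch of the checks in items 1 and 2: compare the fitted slope and
# intercept of each target/comp set's O-K vs B-V relation, and compare the
# mean comp-ensemble colors. Inputs are whatever arrays you already have.
import numpy as np

def compare_sets(bv1, ok1, bv2, ok2, comp_bv1, comp_bv2):
    """Return slope, intercept, and mean-comp-color differences between sets."""
    s1, i1 = np.polyfit(bv1, ok1, 1)
    s2, i2 = np.polyfit(bv2, ok2, 1)
    return {
        "slope_diff": s2 - s1,                                  # item 1
        "intercept_diff": i2 - i1,                              # item 2
        "comp_color_diff": np.mean(comp_bv2) - np.mean(comp_bv1),
    }
```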

Ken


See responses inside the quoted message below…