Hi, Ken:
1. So you do not split the full Bayer array (e.g., BGGR) image into four separate single-color channel images (TB, TG, TG, TR)? Mainly due to poor centroiding of the stars across the different color pixels and the resulting systematic errors in the individual magnitude measurements?
Yes, with one slight tweak: there seem to be two dominant error sources with traditional deBayering: centroiding errors (which we’ve seen approaching a full pixel distance), and photometric measurement error due to undersampling (mostly an issue for blue and red channels). The good news is that both of these error sources seem to have Gaussian distributions around a mean of zero, which means that if you have many exposures, do photometry separately on each image, and average all the resulting data together, both of these errors average out nicely. (Of course, this becomes an issue if you need good time resolution for a time series.)
2. Instead, pretend that all the Bayer array pixels are the same and just add their individual flux as you would with a monochrome image? Do you use a similar measurement aperture as you would for a mono camera, or do you use a larger aperture than normal so that you have a sufficient number of individual color channel pixels to meet the Nyquist criterion for all channels?
Yes. We found that using different aperture sizes for the different channels made a significant difference. (But more work is needed on this – our results were kind of backwards relative to our expectations.) We saw the smallest errors with one size for the blue and red channels, and a different size for the green and luminance channels. I don’t have the graph in front of me, but I recall a “best aperture” for green/luminance that was around 1.8xFWHM, and around 0.8xFWHM for blue/red. Arne’s been routinely using multiple aperture sizes, and we’ll probably adopt that.
3. IOW, you accumulate all blue channel flux, green channel flux, and red channel flux inside the measuring aperture. So you have accumulated something akin to a clear filter/bandpass flux?
Yes (I think?). We make four passes over each star centroid; the centroid location stays locked for all four passes, so we compute partial-pixel fractions only once for each aperture size. In one pass we accumulate just blue pixels (this gives us blue flux), in another we accumulate green pixels (green flux), and in another we accumulate red (red flux). We make a fourth pass to collect our luminance channel using all pixels. (Which would be the same as the red+green+blue sum if the aperture size were the same for all four channels.)
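In rough Python, the four-pass idea looks something like the sketch below. This is only an illustration: it uses whole-pixel membership (no partial-pixel fractions), a single aperture radius for all channels, and an assumed BGGR layout; the function and variable names are mine, not from any actual pipeline.

```python
import numpy as np

# Channel of each pixel in a BGGR mosaic, keyed by (row % 2, col % 2).
# The BGGR layout is an assumption for illustration; cameras vary (RGGB, GBRG, ...).
BGGR = {(0, 0): "B", (0, 1): "G", (1, 0): "G", (1, 1): "R"}

def channel_fluxes(mosaic, cx, cy, radius):
    """Sum per-channel flux inside a circular aperture on the raw mosaic.

    The centroid (cx, cy) is computed once and shared by every channel,
    so all channels sample the same part of the PSF.  Whole-pixel
    membership only; a real pipeline would use partial-pixel fractions
    and possibly a different radius per channel.
    """
    fluxes = {"B": 0.0, "G": 0.0, "R": 0.0, "L": 0.0}
    ny, nx = mosaic.shape
    for y in range(max(0, int(cy - radius)), min(ny, int(cy + radius) + 2)):
        for x in range(max(0, int(cx - radius)), min(nx, int(cx + radius) + 2)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                fluxes[BGGR[(y % 2, x % 2)]] += mosaic[y, x]
                fluxes["L"] += mosaic[y, x]  # luminance pass: all pixels
    return fluxes
```

With equal radii, the luminance flux equals the B+G+R sum by construction, matching the point above.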
4. You then conduct normal differential aperture photometry using V magnitude comp stars and report the magnitude of the target as CV?
We perform differential photometry using massive ensembles (using as many field stars as possible that are not in VSX – giving us ensembles that typically have 100-500 stars for most smart telescope images). Right now we use APASS-10 and ATLAS Refcat2 catalogs for our comparison ensembles. We will typically use V as the reference for the green and luminance channels, B for the blue channel, and Rc (or maybe SR if Rc isn’t available) for the red channel. Our goal is to establish zero points for each image that are very stable from image to image and from observer to observer. We think we’re at the point where zero point scatter (image to image) is significantly less than most of our other residual error sources. Our field of view is small enough that we’re not using Arne’s technique of breaking each image into zones and solving for different zero points in each zone.
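The core of that ensemble step can be sketched as below. This is a minimal illustration, not the actual code: it reduces the ensemble to a single robust zero point per image and ignores the per-zone solutions and correction terms discussed elsewhere; all names are my own.

```python
import numpy as np

def ensemble_zero_point(inst_mags, catalog_mags):
    """Zero point from a large comparison ensemble.

    inst_mags: instrumental magnitudes (-2.5*log10(flux)) of the ensemble
    stars.  catalog_mags: matching catalog magnitudes (e.g. APASS-10 V
    for the green channel).  The median resists outliers (unflagged
    variables, blends) better than the mean.
    """
    diffs = np.asarray(catalog_mags) - np.asarray(inst_mags)
    return float(np.median(diffs))

def standard_mag(target_inst_mag, zp):
    """Standardized magnitude of the target for this image."""
    return target_inst_mag + zp
```

With 100-500 ensemble stars per image, a robust statistic like the median keeps one bad comparison star from shifting the whole image's zero point.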
5. You then transform this CV magnitude to Johnson V magnitude? You find this procedure MOST reliable/accurate?
The standard magnitudes from those four channels (R, G, B, and Luminance) then go through “corrections.” We have correction terms for vignetting, extinction (first order), extinction (second order), and color. We generate correction coefficients over two time intervals: each nightly observing session has a set of corrections for each observer’s telescope, and we also generate a set of corrections for a “season” of up to 90 days of images from that observer. A typical nightly session uses all 100-500 stars in each ensemble during fitting, so it isn’t uncommon for us to be curve-fitting against several hundred thousand points in each curve.
And then we add a “cadence matching” cycle. For many projects, the natural 10-second image cycle of a smart telescope is faster than what is needed for the science being done. So we sort all the images into “time frames” that have been sized to match target star behavior. (The “time frame” for a long-period variable could easily be 12 hours; the “time frame” for a study of flickering could be as short as 10 seconds.) Within each of those frames we average the corrected standard magnitudes together. This further drives down random errors. For fast time series, the length of each “time frame” becomes as short as 10 seconds, which degenerates into no averaging.
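The cadence-matching step above can be sketched as simple fixed-width binning. This is an illustration with names of my own choosing; frame boundaries and weighting in the real pipeline may be handled differently.

```python
import numpy as np

def bin_into_time_frames(times, mags, frame_seconds):
    """Average corrected magnitudes within fixed-length time frames.

    frame_seconds might be ~43200 (12 h) for a long-period variable, or
    10 s for a flickering study, at which point each frame holds a
    single image and no averaging occurs.
    Returns (mean time, mean mag, image count) per non-empty frame.
    """
    times = np.asarray(times, dtype=float)
    mags = np.asarray(mags, dtype=float)
    frames = np.floor((times - times.min()) / frame_seconds).astype(int)
    out = []
    for f in np.unique(frames):
        sel = frames == f
        out.append((times[sel].mean(), mags[sel].mean(), int(sel.sum())))
    return out
```

Averaging N images in a frame drives the random errors down by roughly sqrt(N), which is why the natural 10-second cadence is worth keeping even when the science needs much coarser sampling.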
6. The problem you observed when separating Bayer channels is that the star centroids shift very slightly between image pixels, so the amount of target flux changes between the Bayer color channels from one image to another; you therefore measure different color-pixel fluxes, and the measured magnitude carries quite a bit of systematic error? Did you try using a larger aperture radius to include more color pixels of each type and average this error out?
The centroiding shifts are an insidious little problem. For any one star in one image, there is a single PSF for that star’s image, and that PSF is common to all color channels. If the star centroid is allowed to shift between the red, green, and blue deBayered images, then each color channel ends up sampling a different part of the PSF distribution. This introduces an error that is effectively random, since we haven’t found any systematic behavior in the centroid errors.
As I mentioned, we’re a bit confused as to the connection between aperture size and minimum error residuals. (It may be that what we’re seeing is that when you have small flux, the small aperture works better, and for blue and red channels you only have ¼ of the pixels being sampled, so flux is low and small apertures work better. For green and luminance, you have more pixels, leading to more flux, and a larger aperture works better. Sounds reasonable, but we haven’t proven yet that this is what’s happening.)
- It is mostly this color sub-sampling that I need help understanding? Are you implying that individual color photometry (B,V,R) can still be conducted with a color camera? Or, are you still recommending that ONLY TG magnitudes and total CV magnitudes are reliable?
We are consistently seeing nearly identical residual errors (around 0.02 mags after averaging) in our green channel and in our luminance channel (both referenced and transformed into V). The blue and red channels (referenced and transformed into B and Rc) show much higher residuals (around 0.08 to 0.12 mags). (All of these are from observations of standard Landolt fields.) Right now it’s our intent to archive the resulting photometry from all four channels.