Hello! I’m wondering about the sweet spot between FOV and photometry. The main reason for upgrading my rig is that it would allow me to perform photometry on multiple targets with a single image. However, with my current setup, on those occasions when I can get a second target in the FOV, the two targets usually differ enough to warrant different exposure times, so there is less advantage in such a situation.
With my 8" LX200 classic and ST402 with BVIC filters, I get 12’x20’ FOV. It is rare that I have a target without comps or a check star.
Still, the sloppiness of the SCT focuser regardless of the electronic focuser (even with a thrust-bearing upgrade, and since I cannot lock the mirror and use another focuser), and the mirror shift during wide slews, make me discard images during a run. So I’m considering upgrading the scope, for example to an 8" Newtonian running at f/3 (with a Nexus). An ASI2600 seems a good match, giving a 1.5 x 2.25 degree field with correction of coma and field curvature.
However, how often are folks who work with a large FOV actually able to get multiple targets in a single image? If it is not common, then a large FOV would not offer many advantages over my current system, in my opinion. (Though the focuser upgrade of a new scope would be nice!) Best regards.
Mike
Rather than field-of-view, I would reckon on image-scale, i.e. arcseconds per pixel. In typical sorts of set-ups, you might well aim for something around 1 or 2 arcsec/pixel, say between 0".8 and maybe 2". Do the arithmetic based on whatever the detector pixel sizes are and available focal lengths. Obviously you want to aim to get the maximum field and largest telescope aperture consistent with the match to the detector pixel size. It helps to know what your typical seeing is as well (or simply delivered image quality), so you neither under-sample nor greatly over-sample images on most nights.
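For concreteness, here is a minimal sketch of that arithmetic in Python. The ASI2600 pixel size and resolution in the example are assumed values for illustration (based on the commonly quoted IMX571 specs); check your own detector’s datasheet.

```python
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcsec/pixel: 206.265 converts (um / mm) to arcsec."""
    return 206.265 * pixel_um / focal_mm

def fov_deg(n_pixels: int, scale_arcsec: float) -> float:
    """Field of view along one sensor axis, in degrees."""
    return n_pixels * scale_arcsec / 3600.0

# Assumed example: ASI2600-class sensor (3.76 um pixels, 6248 x 4176)
# on a 600 mm focal length, as discussed in this thread.
scale = pixel_scale(3.76, 600.0)
print(f"{scale:.2f} arcsec/pix")                                       # ~1.29 arcsec/pix
print(f"{fov_deg(6248, scale):.2f} x {fov_deg(4176, scale):.2f} deg")  # ~2.24 x 1.50 deg
```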
Anywhere along the Milky Way just about any field more than (say) 15’ across will have multiple targets, so most readily available/affordable set-ups will give plenty of data. That’s not a problem.
Brian
Thanks! My current setup meets the suggested pixel scale, as would an ASI2600 with FL = 600 mm. However, I’m not sure the wider FOV is worth it compared to my present system. Other than minimizing the focus problems of my current LX200 classic, which does not have mirror lock, a 1.5 x 2.25 degree FOV with a 600 mm FL Newtonian operating at f/3 may not offer any advantage over my 12’x20’ FOV for pure photometry.
Pretty pictures yes; but photometry? That is why I’m curious about the advantages/improvements folks have found when moving up to a large FOV. Best regards.
Mike
“… so you neither under-sample nor greatly over-sample images on most nights.”
The problem with undersampling is easy to understand, but I have never been able to understand why oversampling is a problem (I note your adjective “greatly”). It is routinely recommended (by way of defocussing) for photometry with OSC and DSLR cameras. A couple of years back I came across a paper in the professional literature that deliberately used defocussing to obtain higher-precision photometric data. I didn’t save the paper; it may have been exoplanet photometry, but I’m not 100% sure. I understand defocussing may introduce the problem of blended stars, but that is particular to that reason for the oversampling.
At high S/N, as in some exoplanet-transit photometry, it makes sense to defocus with the thought that filling up more pixels with electrons is better. Perhaps in some cases it allows you to take longer exposures so that scintillation noise is reduced.
But the usual thing is that one is trying to go as faint as possible in some reasonable exposure time. So having the images spread over too many pixels means the underlying sky background, flat-fielding errors, etc. become more significant items in the error budget.
Brian
“but I have never been able to understand why oversampling is a problem”
Let’s say there are 10000 photons coming from a star onto your detector during an exposure time of, e.g., 1 second. The stellar image on the telescope focal plane has a non-zero width (the seeing FWHM), measured in, e.g., arcseconds; let it be 3 arcsec. A common rule of thumb is that 99% of the starlight fits into a circular aperture whose radius is 3×FWHM, so in the current case that aperture radius would be 9 arcseconds.
Let’s assume we are using a camera where 1 arcsecond corresponds to the width of a pixel. Then the radius of the stellar image on the detector is 9" / (1 "/pix) = 9 pixels, and the surface area of the aperture would be Pi*(9 pix)^2 ≈ 254 pix^2, i.e. the aperture would encompass about 254 pixels on the detector.
The signal-to-noise equation for some specific exposure time is:
SNR = S / sqrt(S + npix*(S_sky + S_dark + RON^2)),
where S is total signal from a star (in the example 10000), npix is number of pixels on which the stellar signal falls (in the example 254), S_sky is the signal from blank sky (per pixel), S_dark is dark signal (per pixel), and RON is read-out noise (electrons per pixel).
Let’s say the signal from the sky per pixel is 1 and the dark signal is also 1, but the read-out noise is 5 electrons per pixel. Putting these into the equation: SNR = 10000 / sqrt(10000 + 254*(1+1+5^2)) = 10000 / sqrt(10000 + 6858) = 10000 / 130 = 77.
Now suppose the detector has tiny pixels and the plate scale is, e.g., 10 pixels per arcsecond. The measurement aperture radius would then be not 9 but 90 pixels, and the surface area of the aperture 25434 pix^2. The stellar signal of 10000 photons would be divided between all of those pixels. Now the SNR calculation would be:
SNR = 10000 / sqrt(10000 + 25434*(1+1+5^2)) = 10000 / 834 = 12.
That’s a pretty dramatic drop for a quite faint star. However, if the signal from the star is much higher, let’s say 1 million photons, the resulting SNR would be 770, which is very good. In the case of the 1 "/pix camera it would be a bit higher, 996: better, but not dramatically so.
As a rule of thumb: when stars are very bright (on the detector), the SNR is almost exactly sqrt(number of photons), even in the case of quite significant oversampling. When the target is faint, one really would like to work close to optimal sampling, around 2 to 3 pixels per FWHM. And every bit counts: the darker the sky, the cooler the camera, the lower the read-out noise, and the better the seeing (FWHM), the fainter you can usefully go. In practice, the radius of the aperture is often chosen close to 1×FWHM, which is near-optimal for fainter stars.
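For anyone who wants to play with these numbers, here is a minimal sketch of the same calculation in Python, using the toy values above (10000-photon star, 3" FWHM, 3×FWHM aperture radius, sky and dark of 1 e-/pix, read-out noise of 5 e-/pix); substitute your own camera and sky values as needed.

```python
import math

def snr(signal: float, scale_arcsec_per_pix: float, fwhm_arcsec: float = 3.0,
        sky: float = 1.0, dark: float = 1.0, ron: float = 5.0) -> float:
    """SNR = S / sqrt(S + npix*(S_sky + S_dark + RON^2))."""
    radius_pix = 3.0 * fwhm_arcsec / scale_arcsec_per_pix  # 3 x FWHM aperture radius
    npix = math.pi * radius_pix ** 2                       # pixels inside the aperture
    return signal / math.sqrt(signal + npix * (sky + dark + ron ** 2))

print(snr(10_000, 1.0))     # ≈ 77.0  : faint star at 1 "/pix
print(snr(10_000, 0.1))     # ≈ 12.0  : same star at 10 pix/arcsec
print(snr(1_000_000, 0.1))  # ≈ 769.9 : bright star, heavily oversampled
print(snr(1_000_000, 1.0))  # ≈ 996.6 : bright star at 1 "/pix
```

Any tiny differences from the figures quoted above come only from rounding Pi; the behaviour is the same.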
Best wishes,
Tõnis
However, how often are folks who work with a large FOV actually able to get multiple targets in a single image? If it is not common, then a large FOV would not offer many advantages over my current system, in my opinion.
https://britastro.org/vss/VSSC201.pdf may be of interest in this regard. It reports a limited experiment on data taken with a FOV of approximately 12 x 10 arcminutes. I found a roughly four-fold improvement in productivity.
Paul
Thank you! I appreciate your guidance.
Mike
“but I have never been able to understand why oversampling is a problem”
Thanks Brian and Tonisee. My first contact with this issue was the original CCD photometry guide, which basically stated that both undersampling and oversampling were not good, but no real explanation was offered. The current CCD/CMOS photometry guide has modified this statement to say that fine photometry can be done with moderate oversampling.
Over the years, most of my (not very prolific) photometry targeted relatively bright stars, initially using a DSLR camera and then a 12-bit mono CMOS camera. To maximise the S/N for some stars with the latter I defocussed and increased exposure times to collect more photons, and as a result achieved better time-series light curves. BUT my targets were never very faint stars.
Therefore, I had never personally found any problem with defocussing.
It really depends on the field: the closer to the Milky Way plane, the more variables one can expect per FoV. I usually report all the variables in the field which have database entries. It is common to have a couple of variables in, e.g., every typical exoplanet FoV of 38’ x 38’. They are often just pretty faint.
Best wishes,
Tõnis
When I have more than one variable in the field, it often requires a different integration time, so I don’t “gain” anything by imaging it, except saving the time to slew to the field. Best regards.
Mike
Horses for courses. Sometimes the other VS in the FOV are sufficiently close in magnitude for the SNR to be acceptable without saturation; other times they are not.
On average the productivity is >1 in my experience.