Previous discussions on the AAVSO forums have indicated that interference filters for photometry should be used with caution with fast optics. I believe Arne Henden was quoted as saying that he would advise no faster than f/4.
At least one such discussion, if I remember correctly, referred to fast reflectors, something like f/3.
I use fast optics, but not a reflector. The imaging train is a 200mm f/2.8 Canon lens, a ZWO adapter between the lens and a ZWO filter wheel, then an ASI294MM camera. Photometry is performed on in-focus images taken through Astrodon scientific filters. I have never had any systematic problems with the images or the photometry.
Does anyone know if fast lenses such as the one described above should cause problems with interference filters?
Here is a quote from one of the manufacturers:
An increase in the angle of incidence causes a shift of the center wavelength to shorter wavelengths. This can be very useful in tuning a narrowband filter to a desired wavelength. The wavelength shift with angle of incidence can be calculated by: λθ = λo · [1 − (no/ne)² · sin²θ]^(1/2), where λθ is the wavelength at the new angle of incidence; λo is the center wavelength; no is the index of refraction of the environment (no = 1 in air); ne is the effective index of refraction of the filter; and θ is the angle of incidence.
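As a rough sanity check, here is a minimal Python sketch of that formula applied to a fast beam. The effective index ne = 2.0 is just an assumed typical value (the real number depends on the coating design), and the f/2.8 marginal-ray angle is a crude estimate that ignores the lens's actual exit-pupil geometry.

```python
import math

def shifted_center_wavelength(lambda_0, theta_deg, n_o=1.0, n_e=2.0):
    """Center wavelength of a tilted interference filter, using the
    manufacturer's formula quoted above:
        lambda_theta = lambda_0 * sqrt(1 - (n_o/n_e)**2 * sin(theta)**2)
    n_e is an assumed typical effective index, not a measured value."""
    theta = math.radians(theta_deg)
    return lambda_0 * math.sqrt(1.0 - (n_o / n_e) ** 2 * math.sin(theta) ** 2)

# Example: a 550 nm center wavelength at the marginal-ray angle of an
# on-axis f/2.8 cone, roughly atan(1 / (2 * 2.8)).
theta_marginal = math.degrees(math.atan(1.0 / (2.0 * 2.8)))
print(f"marginal ray angle: {theta_marginal:.1f} deg")
print(f"shifted center:     {shifted_center_wavelength(550.0, theta_marginal):.1f} nm")
```

With those assumptions the marginal rays come in at roughly 10 degrees and the largest blue shift any ray sees works out to only a couple of nanometres; most of the cone is shifted less than that.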
Here is an optical encyclopedia that I occasionally use for a refresher:
Interference coatings are 1/4 or 1/2 wave thick, measured at 90 degrees (perpendicular) to the surface, for a single wavelength. There can be up to 39 layers that reinforce the transmission or reflection at a single wavelength. For narrow-band filters the passband shifts a few nm for about 10 degrees of tilt, because the 1/4-wave path is then 1/4 wave for a longer wavelength.
Here are a couple images from IDEX:
It is easy to see how a laser line filter is coated, but I really have no idea how they do it for a wideband filter.
I think I could manage to mount a filter at up to 10 degrees in my cheap spectrophotometer. I suspect that I would see a small shift of the entire passband. It is such a tedious job that I won’t have time to do it anytime soon. Don’t know if I have one that is not in one of my filter wheels. For your application some of your light is perpendicular and some is not, so you might get a 20 nm broader passband of the same shape. Just an educated guess.
Things have advanced since I retired from the optics games. I see they are up to 100 dielectric layers these days. I can’t find the angle versus shift plots that some manufacturers used to publish. One of these days, I’ll measure it and let you know.
Thanks. I tried to find information on light paths and angles of incidence at the sensor surface with the Canon lens but drew a blank. My belief is that there is no problem in terms of the results I get. My main aim is timing eclipses rather than trying to achieve the highest possible photometric accuracy. Under good conditions, measured check star magnitudes are OK.
What stimulated the post was the thought that the ray diagrams for an f/3 reflector might be different from those for a multi-element camera lens, but I really don’t have the data.
I think part of this is the difference between narrow-band imaging and photometric imaging. When I checked with manufacturers, they did not feel that photometric imaging would be affected by the narrow light cone.
Of course, the proof is in the pudding, and I’d be curious to see folks who have actually done tests on this! Best regards.
Looks like a couple of free-ninety-nine ray tracing packages here:
Oops, some of this is for graphics rather than optical ray tracing.
But this is a rabbit hole that may not be productive because you don’t have specifications for all your lenses and mirrors. You could measure all your lenses with a camera, a laser, some collimation optics, and an optical bench.
Ray, not sure I have the fortitude to work through the detail the ray tracing software would require. I should think the main problem would be finding and handling the specifications for all the lens elements.
Yup. An easier solution is a few optics on an optical breadboard.
Start with a laser, then two 1% beam splitters to get 0.0001 of the laser beam. Follow with a pinhole “spatial filter”, then a concave lens, then a convex lens. Vary the distances of the two lenses until you get a parallel beam the size of your telescope objective. You will be playing a version of the Lippershey game (1608 A.D.). Then put it through the telescope. Use the QHY star camera to look at the spot size at two axial points, one on each side of where the filter might sit, do the trig, and calculate the convergence angle. This is not the same as imaging a point source such as a star. It is more like looking at your flats board, except that you don’t get scattered light. The laser beam is collimated so that all the rays are parallel. The laser might be 1 to 5 milliwatts, but the two 99% attenuators reduce it to 0.1 to 0.5 microwatt. That is still a lot of photons, so experiment with 0.001-second exposures. ThorLabs has optical breadboards. Of course you can estimate photons with E = hν.
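If it helps, here is the trig and the photon arithmetic from that procedure as a rough Python sketch. The spot diameters, their separation, and the wavelength are made-up placeholder numbers, not measurements.

```python
import math

def convergence_half_angle_deg(spot_diam_1_mm, spot_diam_2_mm, separation_mm):
    """Half-angle of the converging cone, from beam (spot) diameters measured
    at two points along the axis a known distance apart."""
    return math.degrees(
        math.atan(abs(spot_diam_1_mm - spot_diam_2_mm) / (2.0 * separation_mm))
    )

def photon_rate(power_watts, wavelength_m=550e-9):
    """Photons per second in a beam of the given power, using E = h*nu = h*c/lambda."""
    h = 6.626e-34  # Planck constant, J*s
    c = 2.998e8    # speed of light, m/s
    return power_watts * wavelength_m / (h * c)

# Placeholder spot measurements 20 mm apart, one on each side of the filter position:
print(f"convergence half-angle: {convergence_half_angle_deg(8.0, 4.0, 20.0):.1f} deg")
# Even the attenuated beam (0.1 to 0.5 microwatt) carries a lot of photons:
print(f"photons/s at 0.1 uW:   {photon_rate(0.1e-6):.2e}")
```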
Interesting, and thanks again, but I just don’t think that’s going to happen. I would need to start from scratch to learn, acquire the equipment, set it up then use it. My initial post was made just in case someone already knew the answer to the question.
I have done photometry with the interference filters and f/7.5 refractors, as well as the f/2.8 camera lens. Transformation coefficient plots look good with all of these optics, as do the results of photometry on variable stars and check stars.
In the absence of evidence that suggests there is a problem, I’ll just keep working with what I’m using.
Seems reasonable. Another thought is that even if severe angles shift the passband a bit, BVRI filters don’t have steep edges anyway. Designs intended to include or exclude important spectral lines may not be rendered less useful. I might be more careful with fast optics when using my SDSS filters; they have relatively steep skirts.
Roy, I think that’s the right approach. Your question about geometric wavelength shifts is a very interesting one, and you’re doing exactly the right thing by investigating it, but given the squishiness which already exists in the bandpasses (as Ray mentioned), I’d expect this effect to be insignificant in comparison to other sources of error.
Brian Kloppenborg talked about a related issue during his webinar on filters & transformations a few months back, when he examined the difference in bandpass between classic and Bessel UBVR filters:
Transcript (lightly edited):
Just by visually comparing you can see that these filters are different. Every filter that comes from a manufacturer is going to be different from the adjacent filter, unless they’re really close to each other when they’re built. So you get one filter from one lot versus another filter from another lot, they’re going to have slight differences. This is something that’s to be expected.
I believe in both cases, both in case of the Bessel filters and in case of the classic filters from Chroma, both of them have been shown to transform very well to the standard Johnson-Cousins system.
Since the bandpass differences between classic and Bessel filters are much larger than the shift which you’d expect from putting fast optics in front of an interference filter, I think this indicates that transformation should take care of the shift nicely.
That said: It does seem possible that there might be a small systematic uncertainty added to your transformation coefficients due to the fact that the edge of the FOV sees steeper incoming light rays on average than the center of the field. If you decide to investigate this topic further, I might suggest checking for that effect first: perhaps you could shoot a standard field and plot the transform coefficients for each star as a function of radius from the center? (Maybe you’ve already done this and that’s what you meant by “transformation coefficient plots”; in that case, please forgive a spectroscopic observer her ignorance.)
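For what it's worth, here is a hypothetical Python sketch of that radial check. The star positions, instrumental magnitudes, and the simple V = v + Tv_bv·(B−V) + zeropoint model are all made up for illustration; real data would come from a solved image of a standard field.

```python
import numpy as np

def radial_residual_check(x, y, v_inst, V_std, B_std, Tv_bv, x0, y0):
    """Radius of each star from the image centre (x0, y0) and its residual
    from a simple transformation V = v + Tv_bv*(B-V) + zeropoint."""
    r = np.hypot(x - x0, y - y0)
    zp = np.median(V_std - v_inst - Tv_bv * (B_std - V_std))
    resid = V_std - (v_inst + Tv_bv * (B_std - V_std) + zp)
    return r, resid

# Usage sketch with synthetic data standing in for a standard field:
rng = np.random.default_rng(0)
n = 50
x, y = rng.uniform(0, 4000, n), rng.uniform(0, 2800, n)  # placeholder pixel coords
v_inst = rng.uniform(-8.0, -5.0, n)                      # instrumental v mags
colour = rng.uniform(0.2, 1.2, n)                        # catalogue B-V
V_std = v_inst + 20.0 + 0.054 * colour + rng.normal(0, 0.01, n)
B_std = V_std + colour
r, resid = radial_residual_check(x, y, v_inst, V_std, B_std, 0.054, 2000, 1400)
print(np.polyfit(r, resid, 1))  # slope near zero => no obvious radial trend
```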
Thanks for your thoughts and the quote from Brian’s webinar.
I’ve not tried to find out if the distance of a star from the image centre alters the transform relationships. By transformation plots I meant the graphs themselves, e.g., V-v versus B-V, b-v versus B-V. On good nights, the residuals are small.
That said, although I do have TCs for my filters (B and V), I routinely use non-transformed V because I take time series, mainly of EBs, for several hours each night. My V and B filters are not parfocal, and I have to focus the camera lens manually. Alternating V and B in time series is thus not an option. The aim is to obtain times of mid-eclipse for Variable Stars South. Comp and check stars are chosen so that magnitude and colour are as close as possible to those of the variable. My Tv_bv is about 0.054, so a colour difference of, say, 0.2 mag would yield a theoretical error of 0.011 mag.
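Just to spell out that arithmetic (a trivial sketch; the 0.2 mag colour mismatch is the hypothetical figure used above):

```python
# Systematic error from using untransformed V with a comp star of different colour:
# delta_m ~ Tv_bv * delta(B-V)
Tv_bv = 0.054        # V transformation coefficient quoted above
delta_colour = 0.2   # assumed B-V mismatch between variable and comp, in mag
print(f"approx. systematic error: {Tv_bv * delta_colour:.3f} mag")  # ~0.011 mag
```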
Concerning distance from the centre of the image, I use a CMOS camera in ROI (Region of Interest) mode, with the ROI set to 1/2. Vignetting is thus reduced as is the image file size (saves disk space and processing time), and check and comp stars are chosen to be as close to the variable as possible. To achieve small var-comp B-V differences as well as small distance separation between the measured stars, I may use only one comp and one check. I recognize that this last choice may not be regarded as best practice (I presume most observers use ensembles of comp stars), but for me ensembles often just add stars with larger var-comp B-V differences.
The final choice I have made is to use focus-specific flats, which means I take flats more often than do those observers who use libraries of calibration frames.