Specs for a fully robotic observatory in 2024

Hi everyone!

I was wondering whether there is an up-to-date build specification for a fully robotic observatory and data-processing pipeline capable of producing variable star observations of scientific quality: as turnkey as possible, with minimal human intervention. I’m sure many interchangeable configurations are possible, that there are many points of debate, and that there are plenty of reasons to claim there are simply too many tradeoffs or dependencies to consider. But it would be great if there were some limited consensus around a few standard configurations defining, say, three budget levels, with a continuously updated and comprehensive parts list of currently available technologies, software requirements, setup notes, etc. Perhaps maintained in a Google Doc. Something that captures the expertise, best practices, and collective wisdom of the AAVSO membership.

Let’s assume the observatory isn’t permanently mounted: quick mobile setup with no construction overhead. Feed it a target list and get automatic image acquisition, data reduction, and so on.

I’m very curious: is this a total pipe dream? What’s possible in 2024? Say I had, hypothetically (VERY hypothetically), a $10k budget. What could I do? What about a $3k budget? Is that even possible? I’m imagining a scalable, highly distributed, loosely coupled network of budget astronomical observatories for amateur survey astronomy, with each site capable of generating thousands (tens of thousands?) of observations per year. What’s possible? What’s the state of the art? Thoughts?

John R.

2 Likes

Sounds pretty cool - I’m on the same journey with a (currently) remotely operable micro-observatory that would definitely come in below the $10k (USD, presumably) limit. Right now I’m working on the software to run the whole show, using KStars/EKOS to do the legwork, with a web-based tool that lets me specify which stars to observe, collect the results, do some automated reduction, and report the results. The fact that ubiquitous, free software can automate the process is certainly a big help.

My project is located on Github if anyone is interested:

Thank you for this - I’ll definitely have a closer look. And thx for cluing me in about EKOS. Love the term micro-observatory. That’s the spirit! :grinning:

The term fits even if I don’t :slight_smile:

1 Like

Very interesting question indeed. For a few years now, I have also been working on a remote observatory management software suite for automated spectroscopic surveys.
Unfortunately, it is still lacking a lot of field testing. But there are plenty of nice references that could be of interest if you are willing to do some development:

It also has a (pretty basic) web interface for monitoring: (I cannot put the link but you can search github panoptes PAWS)

But my short-term goal is to plug this observatory management system into a more generic observation and astronomical data management tool such as TOM: GitHub - TOMToolkit/tom_base: The base Django project for a Target and Observation Manager, developed by Las Cumbres Observatory (LCO).

I am also in contact with the developers of a very nice spectroscopic software ecosystem / web platform, responsible for the development of this database/visualization tool: staros-projects (dot) org, which I hope will be my go-to solution for getting a scientific outcome from the gathered data.

2 Likes

Hey, @rachlin

I have been working on a solution for a remote observatory close to Brady, TX. I will install a rig there that will be used for photometry (B, V, R, I) and for low-resolution spectroscopy (diffraction grating). I must say, though, that although the data capture process is completely automated, the reduction process is still a bit convoluted.
Let me begin with the equipment I chose, which is less than 10K USD:
ZWO AM5N mount (just the mount, since the observatory has a pier to hold it)
Askar 103APO F/6.8 telescope with 1.0x field flattener
ASI533MM-PRO
ZWO 5-position Mini EFW, 1.25", with Optolong B, V, R, I photometric filters and an SA200 diffraction grating for spectroscopy
Blue Fire Ball Camera Angle Adjuster (mechanical rotator) to fix the orientation of the camera’s sensor with respect to the diffraction grating
16.5 mm extender for ASI533MM-PRO (for back focus, comes with the camera)
William Optics Uniguide 50mm guide scope
ASI220MM-Mini guide camera
Dew-Not 4" dew heater strips (2 pcs), one for the main scope and one for the guide scope
Wanderer Astro Flat Panel V4-EC (allows me to automate the flat/dark-flat capture)
Pegasus Astro Powerbox Advance Gen 2 with power supply cable and mounting bracket
MeLE Quieter 4C mini PC with 16 GB of RAM and a 512 GB drive, with its mounting bracket
Web Power Switch Pro
Appropriate connection cables for the above-mentioned devices

This “hardware” allows me to control everything from home using Google’s Remote Desktop. The Web Power Switch Pro (overkill, I must confess; there are less expensive options on Amazon) allows me to power everything up remotely. The powerbox, on the other hand, allows me to distribute power, gives me a reliable USB hub, and helps with cable management. The idea is to avoid any human intervention after the installation and polar alignment have been done. I did not consider the very limited capabilities of “off the shelf” solutions such as the ASIAIR or similar devices. A mini PC with appropriate capture programs such as NINA and SharpCap, plus normal ASCOM drivers for the devices, has proved to be more capable and flexible.

The capture software I will be using for photometry is NINA (completely free, although the developers will appreciate it if you buy them a cup of coffee). This software, with its Target Scheduler plugin paired with the Advanced Sequencer, allows me to make observation plans for multiple stars in one session, including control of useful parameters such as altitude in the sky, distance from the Moon, exposure times per filter (which can be customized by target), start and finish times (astronomical, nautical, etc.), and many more (all of this free!).

I preprocess the calibration data (flats, dark-flats, darks) in PixInsight with the Weighted Batch Preprocessing (WBPP) script, and do the alignment, plate solving, and calibration of the files with Tycho Tracker. Afterwards, the calibrated exposures are uploaded to VPhot for data reduction, and the submission files are uploaded immediately after that is done. (I warned you that this is the most convoluted part, but I really want to avoid spending a fortune on MaxIm DL, and it turns out I have not found a way to use AstroImageJ properly on my Windows 11 PC.) I will try the offline software from the AAVSO to do this instead; I think it will save me some time.
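One small way to smooth a convoluted multi-tool hand-off like this is a staging script that collects newly calibrated frames for the VPhot upload step. A minimal sketch with hypothetical directory names (`calibrated`, `vphot_upload`); a cron job or scheduled task could call it between the Tycho Tracker and VPhot steps:

```python
"""Stage newly calibrated FITS files for upload (hypothetical paths)."""
import shutil
import time
from pathlib import Path

def stage_new_fits(watch_dir, staging_dir, settle_seconds=0):
    """Move *.fits files from watch_dir to staging_dir.

    Returns the list of file names that were moved. Files whose size
    changes during a short settle pause are skipped (still being written).
    """
    watch, staging = Path(watch_dir), Path(staging_dir)
    staging.mkdir(parents=True, exist_ok=True)
    moved = []
    for f in sorted(watch.glob("*.fits")):
        size = f.stat().st_size
        time.sleep(settle_seconds)
        if f.stat().st_size != size:
            continue  # file grew while we waited; pick it up next run
        shutil.move(str(f), staging / f.name)
        moved.append(f.name)
    return moved
```

This doesn’t remove any step, but it keeps the per-session file shuffling out of the manual workflow.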

On the spectroscopy side, I am still figuring out how to “automate” the capture of several targets per session (my workflow is still pretty “manual”). If I figure out something that yields good-quality, consistent results, I will definitely let you know. But I fear that will only happen once the telescope is up and running.

Hope this helps for the moment, at least a little.

Cheers!

Enrique Boeneker (BETB)

3 Likes

Thank you Enrique for your detailed response, specs, and insights!

1 Like

Very cool, thank you for the info! I’ll check it out.

Wow - TOM actually seems to duplicate most of the functions I am working on. Since I have the control programs done and operating for both the observatory and telescope, it’s a good time to change direction if TOM can be made to do what I need. Thanks for the link!

1 Like

Thanks for the equipment spec, Enrique. I have gone in a different direction from ASCOM, with INDI and all open-source tools. Here’s what my setup looks like:

  • NEQ6 mount with a tandem mount for a C8 and an Evolux 62ED refractor. I also use a 102mm achromat and a 10" Newtonian (currently being modified from roughly f/5 to f/3.3 by regrinding the mirror)
  • ZWO ASI183MM on the C8 with a 5-position filter wheel containing SHO filters along with B and V photometric filters. The C8 also has a white-light solar filter for solar operations (manual setup)
  • A Starfield 50mm guide scope with an ASI224MC camera and white light solar filter (also currently manual setup)
  • The 62mm is equipped with a Sol’Ex spectroheliograph for solar operations and normally carries an ASI294MC OSC for pretty pictures, plus a Wanderer rotator
  • Both telescopes have motorized focusers controlled by INDI: a Gemini (MyFocuserPro2) on the C8 and an Orion AccuFocus/Shoestring FCUSB on the 62mm
  • A Beelink mini PC running StellarMate X, which includes an INDI server to run the telescope and cameras
  • An A4-size LED panel is used for flats, set very dim with a potentiometer. Flats are taken by EKOS, which automatically adjusts the exposure to a preset ADU, so there is no need to adjust the panel. The panel is powered via a 12 V adapter controlled by the Kasa power strip below
  • A Python script on the telescope computer waits for the telescope to be unparked by the observatory computer, then invokes KStars/EKOS to run the currently scheduled observing tasks (saved as an EKOS scheduler file)
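The unpark-and-launch hand-off in the last bullet can be sketched in a few lines of Python. This is only an illustration, not Gord's actual script: the device name (`Mount`), host, and polling interval are assumptions, and it relies on the `indi_getprop` command-line tool shipped with INDI:

```python
"""Wait for the mount to be unparked, then start the observing run.

Illustrative sketch: device name, host, and poll interval are assumed.
"""
import subprocess
import time

def is_parked(output: str) -> bool:
    """Parse indi_getprop output like 'Mount.TELESCOPE_PARK.PARK=On'."""
    for line in output.splitlines():
        key, _, value = line.partition("=")
        if key.endswith(".TELESCOPE_PARK.PARK"):
            return value.strip() == "On"
    return True  # fail safe: treat unknown state as still parked

def wait_for_unpark(device="Mount", host="localhost", poll=30):
    """Block until the observatory computer unparks the telescope."""
    prop = f"{device}.TELESCOPE_PARK.PARK"
    while True:
        out = subprocess.run(["indi_getprop", "-h", host, prop],
                             capture_output=True, text=True).stdout
        if not is_parked(out):
            return
        time.sleep(poll)

# After wait_for_unpark() returns, launch KStars/EKOS with the saved
# scheduler file; the exact launch mechanism depends on your setup.
```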

My observatory is equipped with:

  • Aleko 1550AR gate opener attached to an Arduino Uno that can trigger an open/close operation
  • The Arduino Uno runs an indi-rolloffino sketch which monitors two switches for opened/closed and triggers the gate opener.
  • A Python script runs on the observatory computer (an HP EliteDesk 800 G2) that monitors clouds (from an indi-allsky camera), rain (from a Hydreon RG-11 rain sensor attached to an Arduino Uno), and weather (from an Argent ADS-WS1 via serial) to determine whether it’s safe to open the roof and start telescope operations. If it is safe, the roof is unparked via INDI; if that succeeds, the telescope is unparked as well (which triggers the observing script described above)
  • A Kasa power bar that is wifi connected and controlled using scripts on the observatory computer to turn up to 8 AC plugs on and off
  • All devices are attached to a 900 VA UPS
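The safe-to-open decision described above can be sketched as a pure function over the sensor readings. The field names and thresholds below are hypothetical (actual limits depend on the site and sensors), but the shape of the logic is the same:

```python
"""Roof safety decision, sketched as a pure function (thresholds assumed)."""
from dataclasses import dataclass

@dataclass
class Conditions:
    sky_temp_c: float  # sky-minus-ambient from the cloud/all-sky sensor
    raining: bool      # Hydreon RG-11 output
    wind_kph: float    # Argent ADS-WS1 anemometer

def safe_to_open(c: Conditions,
                 cloud_limit_c: float = -15.0,
                 wind_limit_kph: float = 40.0) -> bool:
    """Roof may open only when it is clear, dry, and calm."""
    clear = c.sky_temp_c < cloud_limit_c  # a colder sky reads as clearer
    return clear and not c.raining and c.wind_kph < wind_limit_kph
```

Keeping the decision in one side-effect-free function like this also makes it easy to unit-test the safety logic without any hardware attached.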

You can see the scripts that run the observatory and telescope in the MCP (master control program) folder of the obsy Git repo. I was using them last night and they need a few tweaks (for some reason the roof won’t park if the telescope was parked first, but parks fine otherwise!).

I run NordVPN Meshnet, which allows me to connect to the computers from anywhere via NoMachine; I find the latter a lot more reliable than VNC. The observatory is currently connected to the house network via powerline Ethernet, but it will eventually move out of my Bortle 8 backyard to somewhere dark, so it needs to be fully autonomous.

Data collected from the telescope is saved to a repository inside the house for further processing. I’m working on a Python script that auto-processes photometry data, and I want to automate other collection as well. Pretty pictures remain a manual process.
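For an auto-photometry script like the one described, the core differential measurement can be sketched in plain NumPy. This is a deliberately simplified illustration (no centroiding, no sky annulus, no transformation to a standard system), not the actual script:

```python
"""Simplified aperture sum and differential magnitude (illustration only)."""
import numpy as np

def aperture_sum(image, x, y, r):
    """Sum pixel values within radius r of (x, y)."""
    yy, xx = np.indices(image.shape)
    mask = (xx - x) ** 2 + (yy - y) ** 2 <= r ** 2
    return image[mask].sum()

def differential_mag(image, target, comp, r=5, sky=0.0):
    """m_target - m_comp = -2.5 * log10(F_target / F_comp).

    `sky` is an estimated sky contribution per aperture (assumed equal
    for both stars here, which a real script would measure per star).
    """
    flux_t = aperture_sum(image, *target, r) - sky
    flux_c = aperture_sum(image, *comp, r) - sky
    return -2.5 * np.log10(flux_t / flux_c)
```

A production pipeline would add centroiding, per-star sky annuli, and transformation coefficients, but the differential step itself is this small.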

1 Like

Quite an impressive setup! Thanks for sharing!

In fact TOM is eerily similar to my intended software, down to the menu options! Convergent evolution, I guess. I’m going to focus on enhancing TOM rather than building something new!

Hello, @gordtulloch!

I like your observatory very much! Is it possible for you to share the blueprints of it?

Cheers,

Enrique Boeneker (BETB)

I have a Fusion 360 model that is approximately correct and useful, which I can share. I pretty much wing it when I’m building stuff :slight_smile:

Model file is here: https://www.gordtulloch.com/wp-content/uploads/2024/09/Micro-observatory.f3d

I also have a build thread on CN here: https://www.cloudynights.com/topic/767358-new-micro-observatory-project/

1 Like

Really appreciate this, Gordon! Thanks for sharing!

Interestingly, after a thorough look at TOM, I will be returning to my previous codebase to continue development of Obsy. Basically, the design philosophy of TOM is not conducive to use in amateur astronomy: they have chosen a code-driven customization route rather than a data-driven one. To bring a new observatory onstream, one needs to write a bunch of Python code, whereas a data-driven design lets one fill out a configuration form to configure the new observatory. Since the instrumentation and configuration of professional observatories are vastly more complex than those of amateur observatories (which are really just INDI or ASCOM and the devices therein), it’s more work to create a data-driven version of TOM than to write Obsy from scratch. I also found myself removing a lot of code on the observation side. I’ll still be harvesting some code and ideas from TOM, though, so it’s not a total loss.