Rubin Observatory Controls Documentation¶
The LSST Control Software covers the overall control aspects of the survey and the telescope, including the computers, network, communication, and software infrastructure. It encompasses all work required to design, code, test, and integrate, in the lab and in the field, the high-level coordination software.
Warning
This page is under heavy development and is subject to change.
Control System Architecture¶
The Rubin control system is based on a reactive, data-driven, actor-based architecture that uses multicast Data Distribution Service (DDS) messaging middleware. The system consists of numerous individual components, called Commandable SAL Components (CSCs), which are generally grouped together and controlled via a higher-level control package that handles sequencing and simplifies interactions for higher-level use cases (e.g. slewing the telescope). This section and the links therein detail the architecture of the control system, describe the different control packages, and link to the documentation for the individual CSCs.
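As a concrete illustration, the sketch below shows what driving one of these higher-level control packages can look like from Python. It assumes the ATCS class from the ts_observatory_control package; the target name is purely illustrative, and the class and method names should be checked against that package's documentation before use.

```python
# Hedged sketch: slew the Auxiliary Telescope via the high-level ATCS class
# from ts_observatory_control (class/method names assumed from that package).
import asyncio

from lsst.ts.observatory.control.auxtel import ATCS


async def main():
    atcs = ATCS()                        # groups the AuxTel pointing/mount/dome CSCs
    await atcs.start_task                # wait for the underlying SAL remotes to be ready
    await atcs.enable()                  # bring all grouped CSCs to the ENABLED state
    await atcs.slew_object("HD 164461")  # resolve the target name and slew


asyncio.run(main())
```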
Control System User Interfaces¶
Interaction with the observatory software and hardware can be performed through many different interfaces.
LSST Observing Visualization Environment (LOVE)¶
LOVE is the graphical display of the system status. It will also have areas where the user can interact with the system, including a STOP button and an interface to the ScriptQueue.
Nublado (Jupyter) Interface¶
Analogous to the notebook interface described in the LSST Science Platform Notebook Aspect Documentation, there is a Nublado instance connected to the control network. This enables users to interact with the system from a notebook environment, which is useful for development; the instance also hosts example notebooks for certain tasks, such as collimation of the Auxiliary Telescope and some fault-recovery procedures.
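As a hedged example of what notebook interaction can look like, the snippet below uses the salobj library to attach to a single CSC and read its state and telemetry. The CSC name (ATDome) and the topic and field names are illustrative; check the CSC's interface documentation for the topics it actually publishes. Top-level await is used because the code is meant to run in a notebook cell.

```python
# Notebook-style sketch (top-level await): talk to a single CSC with salobj.
# The CSC name and topic/field names below are examples only.
from lsst.ts import salobj

domain = salobj.Domain()
atdome = salobj.Remote(domain=domain, name="ATDome")
await atdome.start_task                          # wait until the remote is ready

# Most recent summary state event and a fresh telemetry sample.
state = await atdome.evt_summaryState.aget(timeout=10)
print(salobj.State(state.summaryState))
position = await atdome.tel_position.next(flush=True, timeout=10)
print(position.azimuthPosition)
```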
Script Queue¶
The ScriptQueue is the main interface for users and operators to perform standard operating procedures. It contains a set of SAL Scripts whose execution users can schedule, organize, and monitor.
There are two ScriptQueues running in parallel in the system, one for the Main Telescope (index=1) and another for the Auxiliary Telescope (index=2).
Users can interact with them either through the LOVE interface (preferred) or from Jupyter notebooks in Nublado.
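For completeness, here is a sketch, under stated assumptions, of queueing a script from a notebook with salobj rather than through LOVE. The script path, configuration, and location value are illustrative; the add command's exact parameters should be confirmed against the ScriptQueue documentation.

```python
# Notebook-style sketch (top-level await): queue a SAL Script on the
# Auxiliary Telescope ScriptQueue (index=2). Path/config are illustrative.
from lsst.ts import salobj

domain = salobj.Domain()
queue = salobj.Remote(domain=domain, name="ScriptQueue", index=2)
await queue.start_task

ack = await queue.cmd_add.set_start(
    isStandard=True,                    # standard (vs. external) scripts repository
    path="auxtel/track_target.py",      # path relative to that repository
    config="target_name: HD 164461",    # the script's YAML configuration as a string
    location=2,                         # append to the end of the queue (assumed Location.LAST)
    timeout=30,
)
print(ack.result)
```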
Scheduler-driven JSON BLOCK Execution¶
Users also have the option of executing JSON BLOCKS, collections of SAL Scripts and their configurations, via the Scheduler. BLOCKS are loaded into the Scheduler during the CSC enable stage, validated for JSON format, and then made available for execution by the Scheduler. The user then commands the Scheduler to add the JSON BLOCK to the ScriptQueue via the addBlock command.
The BLOCK repositories are organized by Scheduler and can be found in the ts_config_ocs repository.
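A minimal sketch of that flow from a notebook follows. It assumes the Scheduler's addBlock command takes the BLOCK identifier in an id field and that index=2 corresponds to the Auxiliary Telescope Scheduler; both the index and the BLOCK identifier are illustrative and should be checked against the Scheduler's interface definition.

```python
# Notebook-style sketch (top-level await): ask a Scheduler to queue a JSON BLOCK.
# The index, BLOCK identifier, and command parameters are assumptions.
from lsst.ts import salobj

domain = salobj.Domain()
scheduler = salobj.Remote(domain=domain, name="Scheduler", index=2)
await scheduler.start_task

await scheduler.cmd_addBlock.set_start(id="BLOCK-123", timeout=30)
```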
Watcher¶
The Watcher is an alarm system responsible for monitoring the observatory status. Alarms raised by the Watcher are displayed in the LOVE interface, where users can acknowledge them and take action to mitigate any problems.
See the Watcher documentation for more information about handling alarms and escalation.
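Although LOVE is the normal place to acknowledge alarms, a hedged sketch of doing so programmatically with salobj is shown below; the alarm name, severity value, and command parameters are assumptions to be checked against the Watcher documentation.

```python
# Notebook-style sketch (top-level await): acknowledge a Watcher alarm with salobj.
# Alarm name, severity value, and parameter names are illustrative assumptions.
from lsst.ts import salobj

domain = salobj.Domain()
watcher = salobj.Remote(domain=domain, name="Watcher")
await watcher.start_task

await watcher.cmd_acknowledgeAlarm.set_start(
    name="Enabled.ATDome",      # name of the alarm being acknowledged
    severity=2,                 # numeric AlarmSeverity value being acknowledged
    acknowledgedBy="your_name",
    timeout=10,
)
```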
Engineering Facility Database¶
The Engineering Facility Database (EFD) contains all information communicated via the control network, such as commands, events, telemetry, and even small files. Interacting with the database can be done in multiple ways depending on your use case. A Python client has been built to query the database, and it can be used from the Nublado interface mentioned above. However, if a more visual, user-friendly interface is required, consider Chronograf, which works well for browsing and plotting time-series data.
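A minimal sketch of a Python-client query, runnable from a Nublado notebook, is shown below. The EFD instance alias and the topic and field names are examples; substitute the alias of the EFD you are connected to and a topic relevant to your use case.

```python
# Notebook-style sketch (top-level await): query the EFD with lsst_efd_client.
# The instance alias and topic/field names are examples only.
from astropy.time import Time, TimeDelta
from lsst_efd_client import EfdClient

client = EfdClient("usdf_efd")                # alias of the EFD instance to query

end = Time.now()
start = end - TimeDelta(3600, format="sec")   # last hour of data

# Returns a pandas DataFrame indexed by time.
df = await client.select_time_series(
    "lsst.sal.ATDome.position",               # topic name
    ["azimuthPosition"],                      # field(s) to retrieve
    start,
    end,
)
df.head()
```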
LSST Camera Image Visualization¶
Due to the large size and complexity of assembling a single LSST image, a visualization tool is being developed to enable observers to view and interact with images as they come off the telescope. This tool is still under heavy development.
System Development Guidelines and Policies¶
The CSC interface definitions are data objects stored in the ts_xml repository; this is where CSCs and their APIs are defined. The Service Abstraction Layer (SAL) is a utility that consumes the XML to produce language-specific libraries that enable communication over the DDS network. The procedures for maintaining these definitions are described in the ts_xml repository itself. Because each CSC is developed under the guidelines of the specific team doing the work, there is no single resource that defines CSC development. However, certain system-level policies govern the development process.
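As a small, hedged illustration of how those XML definitions surface in the Python libraries SAL produces, the snippet below lists the commands, events, and telemetry topics that a salobj Remote exposes for one CSC. The CSC name and the attribute names are assumptions based on salobj and should be verified against its documentation.

```python
# Notebook-style sketch (top-level await): inspect the topics generated from
# ts_xml for one CSC via salobj. CSC name and attributes are assumptions.
from lsst.ts import salobj

domain = salobj.Domain()
remote = salobj.Remote(domain=domain, name="ATDome")
await remote.start_task

# Each entry corresponds to a cmd_*, evt_*, or tel_* attribute on the Remote.
print(remote.salinfo.command_names)
print(remote.salinfo.event_names)
print(remote.salinfo.telemetry_names)
```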
System-Level Policies¶
Team Specific Development Guidelines¶
System Testing, Deployments, and Upgrades¶
Procedures for system testing, software builds, deployment, and upgrades at the summit and at the Tucson Test Stand (TTS) are compiled here.
Troubleshooting¶
Software user manuals are often packaged and released with the code so that the documentation becomes available at the same time as new features. Because issues often arise with software that is already released, this section focuses on current issues and their workarounds.
Project Documentation Information¶
- Information about this documentation tree, including how to contribute.