What is roverd, Anyway?
roverd is a vital part of the ASE software framework. It handles pipeline validation, building, and execution. Its setup is fairly complex, so in this tutorial we will go over its features one by one, focusing on the why and the how. Finally, we will explain how to create and use your own roverd, and how to simulate its behavior.

Elias Groot
Founding Member, Ex-Software Lead and Ex-Project Administrator
roverd is short for "rover daemon": a process that is always running on the Rover, used to manage other processes. In its simplest form, you can think of roverd as a web server: it exposes an HTTP REST API that you can read from and write to in order to modify your Rover's behavior. It follows that roverd must always be running. If it is not, there is no way to control your Rover.
Using the API
There are many interesting endpoints exposed by roverd, but the simplest one is the health endpoint. When your Rover boots up, it automatically starts roverd and you can read its health information by navigating to its hostname. For Rover 7 that would be http://rover07.local. You hopefully see the pattern there.
For this to work, you need to be connected to the same "aselabs" network as the Rover. By using hostnames like "rover07.local" you rely on mDNS, which might require some additional installation for your system. Read more here.
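For example, you can query the health endpoint from a terminal (a sketch; substitute your own Rover number):
# Read roverd's health information by hostname (Rover 7 in this example)
curl http://rover07.local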

If all is well, the health endpoint reports an operational status, indicating that the Rover is ready to execute a pipeline.
Making web requests each time would be cumbersome (and JSON is not made for humans anyway), so to make your life a bit easier we have developed roverctl and roverctl-web. They are thin wrappers around the API exposed by roverd. In fact, when you run roverctl info, it will query the health endpoint for you.

In many cases, using roverctl is the easiest way to get the job done. But the possibility of using the API yourself can be useful at times, for example if you want to perform automated scripting (e.g. using curl).
You can find the list of available endpoints here if you want to play around with the API yourself. Notice the structure: the endpoints are not defined as code, but as an OpenAPI schema. More on that later.

As you can read in the spec, most endpoints are protected using HTTP Basic Auth. When using curl you can use the -u flag to authenticate. The username and password should both be debix (so you can use curl -u debix:debix).
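A sketch of calling a protected endpoint (replace <endpoint> with any path from the OpenAPI spec):
# Authenticate with HTTP Basic Auth; username and password are both "debix"
curl -u debix:debix http://rover07.local/<endpoint>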
Starting roverd as a Daemon
Enough about the API for now. How do we make sure that roverd always runs and restarts when it crashes? How does a program become a daemon?
The Linux-native way of running a program as a daemon is to define it as a systemctl service (do not confuse this with an ASE service!). The benefit is that you can inspect, start, and stop roverd conveniently when SSHed into the Rover, using well-documented systemctl commands.

roverd running on Rover 18 through systemctl.
Right there, you can also see the logs that roverd outputs, which is especially useful when running into errors, or if roverd reports a non-operational status. A list of frequently used systemctl commands is published here.
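A few standard commands we find ourselves using often when SSH'ed into the Rover (the journalctl invocation assumes roverd logs to the default systemd journal):
# Inspect, restart, and follow the logs of the roverd daemon
sudo systemctl status roverd
sudo systemctl restart roverd
sudo journalctl -u roverd -f   # follow roverd's log output live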
Look at the logs in the screenshot above. Do you see the "app daemons" being started? It seems that roverd spawns and manages two processes: "battery" and "display".
Jun 19 17:43:35 rover18 roverd[4005]: INFO roverd::app::daemons: daemon 'battery' started
Jun 19 17:43:35 rover18 roverd[4005]: INFO roverd::app::daemons: daemon 'display' started
These two processes - which are in turn also daemons - take care of two important tasks on the Rover:
- The "battery" daemon measures the battery voltage and reports it to other services. If the voltage drops below a set threshold, it will warn active SSH users. When below a critical threshold, it will shut down the Debix automatically.
  - This is a critical safety feature, to avoid over-discharging the battery
- The "display" daemon reads the system benchmarks (CPU and memory usage) as well as the battery readings from the battery daemon. It then outputs these, together with the Rover name and number on the ssd1306 OLED display on the Rover.
This is the first step toward understanding why roverd is such a critical piece of software. If it is not operational, something is seriously wrong. On startup, roverd always checks for new updates on the internet and automatically downloads them when available.
Pipelines, in Depth
Fundamental to the entire ASE framework is the notion of pipelines. A pipeline is an abstract concept, made concrete by roverd. It relies on several static checks (i.e. checks before runtime) to ensure that services can run together and communicate with each other.
A service is a self-contained software program (that is, you should be able to ZIP a service up and transfer it to another Rover without problems). It can be any normal Linux program, such as a binary or a Python script. Let's take a look at one of the most famous ASE services: controller.
For every service, its most important characteristics are described in the service.yaml file. You can think of this as a declaration of intent: it says "hey, I am service controller, and I would like to read data from a service called imaging". It does not tell you any technical details yet; it is unknown to us how the controller can actually read data from imaging.
Easy enough. Let's clone the service, upload it to roverd, and enable it in the pipeline to see what happens.

Wow, that's a lot of red text, and arguably not super easy to understand. The commands above are analogous to using roverctl-web, selecting the controller service and pressing "start execution". It will give you a similar error:

The same error, as displayed in roverctl-web.
We have taken a moment to convert the raw JSON error (from the terminal) into one that is easier to understand for us humans (in the web interface). See the problem?
In its declaration of intent (the service.yaml file), the controller service made it clear: it wants to read data from the imaging service. Yet, roverd does not have an imaging service enabled in its pipeline. It does not know how to arrange communication for the controller service and will refuse to run the pipeline.
Fair enough, and that is on us. We should have uploaded an imaging service that roverd can connect to the controller for us. This is the power of static checks: you can avoid problems before they happen. Looking at the imaging service.yaml, it also becomes apparent how the outputs and the inputs match up among services (consider the highlighted portions in blue).
Let's download the official ASE imaging service that we need. As a small aside, did you know that you can directly install services from GitHub release links (or any file storage link, for that matter)? Press "install a service" and select "install from URL":

If we enable the controller and the imaging service and press "start execution", we are happy: the pipeline starts running. Sure, in its current configuration it's a tad boring - it does not even drive the Rover because there is no actuator - but all the static checks passed.

This simple introduction already highlighted two important concepts in roverd pipeline validation: invalid pipelines cannot be saved, and the inverse is also true: if a pipeline is saved, it must be valid. This gives us developers some peace of mind. But we still didn't answer the hows:
- How does the imaging service know where to write to?
- How does the controller service know where to read from?
- How does roverd know when to stop the pipeline?
Answering all the hows relies on understanding many different parts of the ASE framework, from roverd to the various "roverlibs". Let's dive in.
From Intent to Address
We have already seen the service.yaml files: the declarations of intent for each service. After checking these declarations among all services in the pipeline, roverd will turn each service into a Linux process, and it needs to pass the necessary communication details (e.g. socket names and ports) into this process. This is what we use bootspecs for. Think of them as the "address book" for a service.
It is thus up to roverd to take the declarations of intent and turn them into actually useful addressing information for each service. It must:
- Validate a pipeline by checking each service's service.yaml file
  - Is each service.yaml's syntax valid?
  - Can each input be connected to exactly one output?
  - Are all service names unique?
- Build/compile each service
- Create a bootspec for each service and inject it into the service when it starts running
We focus on the controller service, but the process is the same for each service in the pipeline. It declared its intent to read data from the imaging service, so roverd must create a bootspec that reflects this.
Because every service has to "register" in the roverd pipeline, roverd is all-knowing: it can hand out bootspecs as it sees fit, and it knows how each pair of services communicates, which also turns out to be surprisingly useful when building a debugging framework.
While service.yaml files are long-lived and meant to be edited by service authors (you and us), bootspecs are ephemeral JSON structures that should be produced by roverd and consumed by a roverlib programmatically. A bootspec is injected into a service's environment through the ASE_SERVICE environment variable. Each bootspec differs per service. For the controller service it would look something like this:
Service.yaml: as edited by the service author (most likely, you).
name: controller
author: vu-ase
source: https://github.com/vu-ase/controller
version: 1.0.0
description: the authority on all steering decisions
commands:
  build: make build
  run: ./bin/controller
inputs:
  - service: imaging
    streams:
      - path
outputs:
  - decision
...
Bootspec: as generated by roverd and injected into the ASE_SERVICE environment variable.
{
  "name": "controller",
  "author": "vu-ase",
  "version": "1.0.0",
  "inputs": [
    {
      "service": "imaging",
      "streams": [
        {
          "name": "path",
          "address": "tcp://localhost:7890" // protocol "TCP" and port 7890 are chosen by roverd
        }
      ]
    }
  ],
  "outputs": [
    {
      "name": "decision",
      "address": "tcp://*:7893" // protocol "TCP" and port 7893 are chosen by roverd
    }
  ]
}
...
There is clear overlap between a service's service.yaml and its bootspec, but they are absolutely not the same.
Consuming a Bootspec
Once injected, it is up to the next link in the chain - the specific "roverlib" bundled with the controller service - to parse the environment variable and the JSON structure. It will do some basic syntax checks and will then take care of opening the necessary ZeroMQ sockets with the information that roverd provided. The roverlib (in the case of our controller service, roverlib-go) forms the glue between the APIs you interact with as a service author and the pipeline setup that roverd takes care of.
Designing our software this way - having roverd be the all-knowing service while the roverlib is a "dumb" JSON parser - allows us to push out and maintain roverlibs for so many different programming languages. After all, if a language can do JSON parsing and ZMQ sockets, it can have a roverlib. Consider the roverlib-go pattern of parsing the bootspec:
...
// Fetch and parse service definition as injected by roverd
definition := os.Getenv("ASE_SERVICE")
if definition == "" {
    panic("No service definition found in environment variable ASE_SERVICE. Are you sure that this service is started by roverd?")
}
service, err := UnmarshalService([]byte(definition))
if err != nil {
    panic(fmt.Errorf("Failed to unmarshal service definition in ASE_SERVICE: %w", err))
}
...
And its roverlib-python counterpart:
...
# Fetch and parse service definition as injected by roverd
definition = os.getenv("ASE_SERVICE")
if definition is None:
    raise RuntimeError(
        "No service definition found in environment variable ASE_SERVICE. Are you sure that this service is started by roverd?"
    )
service_dict = json.loads(definition)
...
See the similarity? All roverlibs are structured almost identically in terms of file names, comments and function names. This is by design, to make switching languages convenient.
The best part is that, because we defined what a valid bootspec should look like in this JSON schema, we can autogenerate almost 40% of the code for each roverlib using QuickType. You can try it out. Just copy the JSON schema and paste it in the online QuickType app:

There are many languages to choose from, and the automatic code generation will take much of the hassle of writing validation code away for you.
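QuickType is also available as a command-line tool, which is convenient for scripting. A minimal sketch (the schema filename is just an example; point it at the actual bootspec JSON schema):
# Install the QuickType CLI and generate Go types from the bootspec JSON schema
npm install -g quicktype
quicktype --src-lang schema --lang go --out bootspec.go bootspec-schema.json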
If a Service Dies
Even though, at this point in the execution, most of the hard work happens in the individual services and their respective roverlibs, the job for roverd is not over yet. During pipeline execution, it has two important tasks:
- Monitor execution for each service
  - If one service process dies, all services must be killed because communication is now disturbed
- Collect all logs for each service
These tasks are fairly common in Linux and are implemented using process groups and stdout/stderr redirection. We will not describe them in depth, but it is good to know that the roverd API will tell you exactly which service caused the pipeline to stop. This is useful information when debugging a pipeline - it is also shown in roverctl-web.

Output from the roverd services endpoint. Notice that the first result shows the currently running services as processes. The second result shows that the pipeline was stopped.
When roverd detects that a service died (either through a crash or a graceful exit), it will send a signal to all other services in the pipeline. These services then get 1 second to gracefully shut down (e.g. to close open files, stop motors from spinning, or reset hardware values). This circles back to the philosophy that a pipeline is only valid if all services are valid. As soon as one service dies, no guarantees can be made, and as a precaution all services will be terminated to prevent errors.
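What this means for you as a service author: make sure your service can clean up within that one-second window. A minimal sketch of a shell-based service that traps common termination signals (the exact signal roverd sends is an assumption here; in practice, the roverlib you use may already provide a hook for this):
#!/bin/sh
# Clean up (close files, stop motors, reset hardware values) when asked to stop
cleanup() {
    echo "shutting down gracefully"
    exit 0
}
trap cleanup TERM INT

# Main service loop
while true; do
    sleep 1
done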
How Services Are Built and Run
roverd exposes a build endpoint that will take care of building your service for you, according to the commands.build definition in its service.yaml. To do so, it runs the build command as the root "debix" user, with the working directory set to the service's installation directory. Say that your build command is set to a shell command that writes "Hello World" into a file:
name: example-service
author: elias # note that this will be replaced by the author configured in roverctl
source: https://github.com/vu-ase/example-service
version: 1.0.0
commands:
  build: echo "Hello World" > test.txt
  run: echo "ok"
inputs: []
outputs: []
configuration: []
Then, after uploading this service using roverctl, it will tell you where you can find it on the Rover.
Now, you can run this build command by using the build API endpoint. This is already integrated in roverctl-web and roverctl:
As proof that roverd actually executed your build command, you can SSH into your Rover and navigate to the location of your service (as reported by roverctl). Notice that the file "test.txt" is generated.
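For example (the Rover number and installation directory are placeholders; use the values reported by roverctl):
# SSH into the Rover and inspect the service's installation directory
ssh debix@rover07.local
cd <service-installation-directory>   # as reported by roverctl after uploading
cat test.txt                          # should print "Hello World"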
Understanding that roverd builds and runs your service as the debix user, from the service's installation directory, is crucial. For example, if you want to write to a file from your service, you need to understand that files are written relative to the service installation directory. If you have global dependencies (such as Python pip packages, or C shared object files), you must make sure that you install these for the root debix user first (by SSH'ing into the Rover).
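For example, a sketch of installing a global Python dependency for the debix user over SSH (the package name is just an example):
# Install a pip package on the Rover itself, for the user that builds and runs your services
ssh debix@rover07.local
pip install numpy   # example package; use pip3 if that is how Python is set up on your Rover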
If builds take suspiciously long through roverd, we recommend executing the build step through SSH. This way, you can see the build logs directly and roverd can still benefit from incremental builds afterwards.
Notice that this is why roverctl uploads your entire service directory to the Rover: the build step needs to be executed on the Rover to make sure that the binary matches the Rover's architecture (ARM64). The official ASE releases (such as this one) do not contain any source code but are already compiled for ARM64 Linux on a Debix Model A, so that you do not have to recompile our binaries.
roverd and the transceiver Service
Because roverd knows and manages which services will run, and how they will communicate, debugging these services becomes fairly easy. roverd itself does not capture any communication between services, but it relies on the transceiver service to do so. How this works is best illustrated with an example pipeline.
We have three basic services: imaging, controller and actuator. The controller service depends on imaging, and the actuator service depends on the controller. As usual, each service expresses its intent in its service.yaml file.
However, for debugging, we want to add one more service: the transceiver. Its purpose is to snoop on all communication that happens between all other services in the pipeline. Unlike normal services, its service.yaml declaration does not contain any inputs:
# Service identity
name: transceiver
author: vu-ase
source: https://github.com/VU-ASE/transceiver
version: 1.0.0
commands:
  build: make build
  run: ./bin/transceiver
# No 'official' dependencies, but the transceiver listens to all streams from all services
inputs: []
outputs:
  - transceiver # outputs all tuning messages received from the passthrough service, so that other services can listen and use them
...
All service.yaml declarations, and the bootspecs that are generated from them by roverd, are highlighted in the schematic below. Notice the color coding between matching inputs and outputs.
This might seem counter-intuitive: a service that declares no inputs actually gets all inputs from other services? That seems to break all prior conventions.
And indeed, it does. The transceiver service is the only service that gets special treatment from roverd. If any pipeline contains a service that is named transceiver, this service will receive all communication information through its bootspec. This decouples debugging from the roverd binary, which has several advantages:
- You can enable or disable debugging quickly by just adding or removing a transceiver service to/from your pipeline
- If you want to create your own debugger (for example, to log values to a CSV file), you can just create your own transceiver service (similar to other ASE services), without needing to modify roverd
- We can release new versions of the ASE transceiver service, without needing to update the roverd binaries on all Rovers
The services that are being debugged are completely unaware of this, due to the underlying ZeroMQ pub/sub mechanism used. The communication pattern in our example is shown below.
If you want to create your own debugger, we recommend taking a look at the official ASE transceiver to understand how it reads from other services. This loop especially shows how to iterate over all available inputs.
For your debugger to work, you must name your service transceiver (case-sensitive). We recommend using the as field to distinguish your code from the official ASE transceiver, like this:
# Service identity
name: my-new-transceiver
as: transceiver # This is what roverd will check
author: just-a-student
...
You can find the relevant source code in roverd here.
Tuning Services
When roverd finds a service called transceiver enabled in its pipeline, it makes another exception: it populates the tuning field in the bootspecs of all services that are not the transceiver. This allows each service to subscribe to data from the transceiver service, which allows for tuning service variables. The example for the imaging service is highlighted below.
Without Transceiver: the bootspec for the imaging service, as generated by roverd, when there is no transceiver service in the current pipeline.
{
  "name": "imaging",
  "author": "vu-ase",
  "version": "1.0.0",
  "inputs": [],
  "outputs": [
    {
      "name": "path",
      "address": "tcp://*:7892" // protocol "TCP" and port 7892 are chosen by roverd
    }
  ],
  "tuning": [
    {
      "enabled": false,
      "address": ""
    }
  ]
}
...
With Transceiver: the bootspec for the imaging service, as generated by roverd, when there is a transceiver service in the current pipeline.
{
  "name": "imaging",
  "author": "vu-ase",
  "version": "1.0.0",
  "inputs": [],
  "outputs": [
    {
      "name": "path",
      "address": "tcp://*:7892" // protocol "TCP" and port 7892 are chosen by roverd
    }
  ],
  "tuning": [
    {
      "enabled": true,
      "address": "tcp://localhost:7893" // protocol "TCP" and port 7893 are chosen by roverd
    }
  ]
}
...
This exception is based purely on the name ("transceiver") as well, and will set the tuning.address field of the bootspec to the outputs.transceiver field of the transceiver's service.yaml.
Modifying roverd
As should be clear by now, roverd is essential to checking, running and debugging services in a pipeline. We provide roverd binaries as releases in our rover repository, but you might want to roll your own version of roverd to modify its behavior for your project's needs. For example, to:
- Customize how services in a pipeline are built or executed
- Customize the environment variables injected into a service
- Customize the graceful exit period for a service
- Customize which protocol is used for service communication (TCP or IPC)
The good news is that, because roverd is just a normal systemctl service and we have set everything up for you, this is fairly easy!
Preserving Type-Safety
First things first. roverd's main purpose is to expose functionality through a web API. When consuming such APIs, one of the first things you lose is type safety (i.e. knowing the structure of the data that you are querying). This can be an important source of bugs and can cause a lot of frustration when writing code. To combat this, we have defined the roverd API as an OpenAPI spec. This allows us (and you!) to define the API endpoints once and automatically generate server- and client-side code for many popular programming languages.
The spec is thus the single source of truth that allows us to parallelize work and pick the best programming language for the job effortlessly. It is defined here. In addition to the available data types per endpoint, it describes authorization and the errors that can be thrown.
Using our CI/CD pipeline, the spec is used every time we create a new release for (one of) roverd, roverctl or roverctl-web, which allows us to preserve type-safety among version matches (e.g. roverd version 1.2.1 is always compatible with roverctl version 1.2.1). If you are writing a consumer yourself, the spec is a must-read.
If you are interested in understanding the CI/CD pipeline and OpenAPI code generation, you can find the responsible github workflows here.
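If you want to generate a typed client yourself, any OpenAPI generator will do. A sketch using openapi-generator (the tool choice and the spec filename are assumptions, not part of our CI/CD setup):
# Generate a Go client from the roverd OpenAPI spec
# "roverd-openapi.yaml" is an example filename; point it at the actual spec file
openapi-generator-cli generate -i roverd-openapi.yaml -g go -o ./roverd-client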
Compiling Your Own roverd
roverd is written in Rust and needs to be compiled to a 64-bit ARM binary to be able to run on the Rover. You can follow these steps to modify and compile roverd:
- Clone the rover repository and open it in VS Code
- Then press ctrl/cmd + shift + p and click "Dev Containers: Rebuild and Reopen in Container"
- Now, select the "ASE-roverd" devcontainer
- It might take a while to build the devcontainer the first time. After that, you are good to go and can inspect and modify the Rust source code. You can find the main code in the "roverd" folder
- Open a new terminal in VS Code, and notice that you are now the roverd user. Devcontainer magic!
- Prepare your dev setup by running make dev-setup and specifying a VERSION of your choosing. You only have to do this once.
We recommend choosing a VERSION that is not officially published by ASE, to distinguish dev releases from production releases (e.g. use a number like 9.9.9)
- After modifying some source code, run make dev with your VERSION. This will compile and run roverd on your machine (see the consolidated example below).
- Compilation might take a while on first run. Later runs will use incremental compilation and will thus be faster. Once done, you will see that roverd is running locally on 0.0.0.0:80, and you can access its API on localhost:80
The "host" network setting that roverd
uses to run locally is only available on Linux, and not on macOS. So you can only preview the API in your browser on Linux.
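Putting the steps above together, a typical dev loop inside the devcontainer might look like this (we assume VERSION is passed as a make variable; check the roverd makefile for the exact invocation):
# One-time setup with a version that is clearly not an official release
make dev-setup VERSION=9.9.9

# Compile and run roverd locally after each change
make dev VERSION=9.9.9

# On Linux, the API is now reachable on localhost:80
curl http://localhost:80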
Deploying Your Own roverd
- Using make dev is nice for local debugging, but when developing for the Rover, you need to produce an ARM binary. We have enabled cross-compilation for you, so you can run make build-arm with your VERSION
- This will produce an ARM binary and output it to ./target/aarch64-unknown-linux-gnu/release/roverd. This is the binary we need to upload to the Rover
- Start your Rover and make sure it is powered on by running roverctl info with your Rover number
- SSH into your Rover and check where roverd is installed using which roverd. This most likely is /usr/local/bin/roverd. Remember this location
- Stop the roverd systemctl service using sudo systemctl stop roverd and rename the original roverd binary using sudo mv. Use the "-BAK" suffix so that you can find it back later. Notice that once you stop roverd, roverctl will no longer work.
sudo systemctl stop roverd
sudo mv /usr/local/bin/roverd /usr/local/bin/roverd-BAK
- Exit the SSH terminal. Use SCP to copy your local roverd binary to your Rover
# ROVER_ADDRESS should be in the form of roverXX.local (e.g. rover01.local)
scp ./target/aarch64-unknown-linux-gnu/release/roverd debix@<ROVER_ADDRESS>:~/my-roverd
- SSH into your Rover again. Notice that when you run ls, your binary "my-roverd" appears.
- Now the final step is to move this new binary into the place where roverd should be and enable it again using systemctl
sudo mv ~/my-roverd /usr/local/bin/roverd
sudo systemctl start roverd
- Use roverctl info again, and notice that it now reports our custom version!
roverctl will now report an "incompatible" version and may suggest updating roverctl. This can safely be ignored.
- As a bonus, you can view the roverd logs and systemctl status. Read more tips and tricks here
roverd-logs
sudo systemctl status roverd
We recommend taking a look at the roverd makefile to view the different compilation and testing options.
Simulating roverd
If you launch a service directly from its service installation directory, for example when SSH'ed into the Rover, or when running a service on your own machine, the roverlib implementation will disallow you from executing the binary standalone (i.e. without roverd).
Yet, sometimes you might still want or need to run services in isolation with full control. You could modify and recompile roverd, but an easier solution exists: simulating roverd.
Recall that roverd works by injecting a bootspec into a service's environment and then watching the spawned process. The service checks if the environment variable is set up correctly and then starts executing, so you can simulate roverd by creating your own bootspec and injecting it. There are many ways to do this, but we recommend the following steps:
- Create a "bootspec.json" file and populate it with a correct bootspec that covers all inputs, outputs and configuration values defined in the service.yaml. You can find an example service.yaml and derived bootspec here.
Normally roverd is in charge of generating a correct bootspec. Writing one by hand can be error-prone, so we recommend using an editor to help with JSON syntax checking.
- Inject the bootspec as an environment variable and start the service as defined in the commands.run field of the service.yaml
# Reads the bootspec and puts it in ASE_SERVICE
export ASE_SERVICE=$(cat bootspec.json)
./bin/imaging # replace with your method of running a service
You can simulate multiple services by using different shells, and can even link communication between them. This also works when SSH'ed into the Rover.
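For example, to simulate the imaging and controller pair from earlier, you could use two shells with bootspecs whose addresses match up (the bootspec filenames and binaries are placeholders):
# Shell 1: simulate the imaging service (its bootspec binds the "path" output, e.g. tcp://*:7892)
export ASE_SERVICE=$(cat imaging-bootspec.json)
./bin/imaging

# Shell 2: simulate the controller service (its bootspec connects the "path" input to tcp://localhost:7892)
export ASE_SERVICE=$(cat controller-bootspec.json)
./bin/controller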