Worker Pools

This article explains how to set up and use on-premises private worker pools.

By default, Spacelift uses a managed worker pool hosted and operated by us. This is very convenient, but you may have special requirements regarding infrastructure, security or compliance that aren't served by the public worker pool. This is why Spacelift also supports private worker pools, which you can use to host the workers that execute Spacelift workflows on your end.

To give you the maximum level of flexibility and security with a private worker pool, temporary run state is encrypted end-to-end, so only the workers in your worker pool can look inside it. We use asymmetric encryption to achieve this, and only you ever have access to the private key.

Setting up

To make sure that we have no access to your private key, you will need to generate it on your end, and use it to create a certificate signing request to give to Spacelift. We'll generate a certificate for you, so that workers can use it to authenticate with the Spacelift backend. The following command will generate the key and CSR:

openssl req -new -newkey rsa:4096 -nodes -keyout spacelift.key -out spacelift.csr
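
If you'd like to sanity-check the CSR before submitting it, you can inspect its contents with openssl:

# Print the CSR in human-readable form to confirm the key type and request fields.
openssl req -in spacelift.csr -noout -text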

You should now store the spacelift.key file in a secure place. You'll need it later, when launching workers in your worker pool.

Now you can submit the spacelift.csr file in the worker pool creation form. In response, you’ll receive a Spacelift token. It contains configuration for your worker pool launchers, as well as the certificate we generated for you based on the certificate signing request.

The launcher binary is available here. In order to work, it needs to be able to write to the local Docker socket. Unless you're using a Docker-based container scheduler like Kubernetes or ECS, please make sure that Docker is installed.
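
If you're not sure whether the host meets this requirement, a quick check along these lines will confirm that the Docker daemon is installed and reachable through the local socket:

# Verify that the Docker daemon is reachable via the local socket.
docker info > /dev/null 2>&1 && echo "Docker socket is accessible" || echo "Docker is missing or the socket is not accessible to this user"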

For AWS users we've prepared an easy way to run Spacelift workers as an autoscaling group (ASG). This repository contains the code for Spacelift's base AMIs, and this one provides a Terraform module to customize and deploy the actual ASG.

Finally, you can run the launcher binary after setting two environment variables:

  • SPACELIFT_TOKEN - the token you’ve received from Spacelift on worker pool creation

  • SPACELIFT_POOL_PRIVATE_KEY - the contents of the private key file you generated, in base64.

You need to encode the entire private key using base64, making it a single line of text. The simplest approach is to just run something like cat spacelift.key | base64 in your command line.
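
Putting the above together, a minimal launch might look like the sketch below. The launcher binary name is an assumption here (use whatever you downloaded), and on GNU systems you may want base64 -w 0 to prevent line wrapping:

# Token received from Spacelift when the worker pool was created.
export SPACELIFT_TOKEN="<your-worker-pool-token>"
# Base64-encode the private key generated earlier (use `base64 -w 0` on GNU systems to keep it on one line).
export SPACELIFT_POOL_PRIVATE_KEY="$(cat spacelift.key | base64)"
# Start the launcher (binary name assumed).
./spacelift-launcher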

Congrats! Your launcher should now connect to the Spacelift backend and start handling runs.

Configuration options

A number of configuration variables are available to customize how your launcher behaves (a combined example follows the list):

  • SPACELIFT_MASK_ENVS - comma-delimited list of whitelisted environment variables that are passed to the workers but should never appear in the logs;

  • SPACELIFT_WORKER_NETWORK - ID or name of the Docker network to connect the launched worker containers to; defaults to bridge;

  • SPACELIFT_WORKER_EXTRA_MOUNTS - additional files or directories to be mounted into the launched worker Docker containers, as a comma-separated list of mounts in the form of /host/path:/container/path;

  • SPACELIFT_WORKER_RUNTIME - runtime to use for the worker container;

  • SPACELIFT_WHITELIST_ENVS - comma-delimited list of environment variables to pass from the launcher's own environment to the workers' environment.
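
To illustrate, a launcher that passes AWS credentials through to workers while masking the secret in logs, uses a custom Docker network, and mounts the host's CA bundle into worker containers could be configured like this (all values below are hypothetical):

# Pass AWS credentials through to workers, but mask the secret key in logs.
export SPACELIFT_WHITELIST_ENVS="AWS_ACCESS_KEY_ID,AWS_SECRET_ACCESS_KEY"
export SPACELIFT_MASK_ENVS="AWS_SECRET_ACCESS_KEY"
# Attach worker containers to a custom Docker network instead of the default bridge.
export SPACELIFT_WORKER_NETWORK="spacelift-workers"
# Mount the host's CA certificates into each worker container.
export SPACELIFT_WORKER_EXTRA_MOUNTS="/etc/ssl/certs:/etc/ssl/certs"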

Passing metadata tags

When the launcher from a private worker pool registers with the mothership, you can send along tags that allow you to uniquely identify the process / machine for the purpose of draining or debugging. Any environment variables using the SPACELIFT_METADATA_ prefix will be passed on. As an example, if you're running Spacelift workers in EC2, you can do the following just before you execute the launcher binary:

export SPACELIFT_METADATA_instance_id=$(ec2-metadata --instance-id | cut -d ' ' -f2)

Doing so will set your EC2 instance ID as the instance_id tag on your worker.

Using worker pools

Worker pools must be explicitly attached to stacks and/or modules in order to start processing their workloads. This can be done in the Behavior section of stack and module settings:

Example when editing an existing stack
Example when setting up a new module