This article covers all settings that are set directly on the stack. It's important to note that these are not the only settings that affect how runs and tasks within a given stack are processed - environment, attached contexts, runtime configuration and various integrations will all play a role here, too.
This setting indicates whether a stack has administrative privileges. Runs executed by administrative stacks receive an API token that gives them administrative access to a subset of the Spacelift API used by our Terraform provider, which means they can create, update and destroy Spacelift resources.
The main use case is to create one or a small number of administrative stacks that declaratively define the rest of your Spacelift resources - other stacks, their environments, contexts, policies, modules, worker pools etc. - in order to avoid ClickOps.
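As an illustrative sketch (the stack name and repository below are made up; see the provider article for the full resource schema), an administrative stack's code might declare another stack like this:

```hcl
# Hypothetical example of an administrative stack declaratively
# managing another Spacelift stack via the Terraform provider.
resource "spacelift_stack" "app" {
  name       = "app-infra"  # made-up name
  repository = "app-infra"  # must be reachable via your VCS integration
  branch     = "main"
  autodeploy = true
}
```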
Another pattern we've seen is stacks exporting their outputs as a context to avoid exposing their entire state through the Terraform remote state pattern or using external storage mechanisms, like AWS Parameter Store or Secrets Manager.
If this sounds interesting and you want to give it a try, please refer to the help article exclusively dedicated to Spacelift's Terraform provider.
Autodeploy»
Indicates whether changes to the stack can be applied automatically. When autodeploy is set to true, any change to the tracked branch will automatically be applied if the planning phase was successful and there are no plan policy warnings.
Consider setting it to true if you always do a code review before merging to the tracked branch, and/or want to rely on plan policies to automatically flag potential problems. If each candidate change goes through a meaningful human code review with stack writers as reviewers, having a separate step to confirm deployment may be overkill. You may also want to refer to a dedicated section on using plan policies for automated code review.
Autoretry»
Indicates whether obsolete proposed changes will be retried automatically. When autoretry is set to true and a change gets applied, all Pull Requests to the tracked branch conflicting with that change will be reevaluated based on the changed state.
This saves you from manually retrying runs on Pull Requests when the state changes. It also gives you more confidence that the proposed changes you see are the changes you will actually get after merging the Pull Request.
Spacelift workflow can be customized by adding extra commands to be executed before and after each of the following phases:
- Initialization
- Planning
- Applying
- Destroying
- Performing
These commands may serve one of two general purposes: either to make some modifications to your workspace (e.g. set up symlinks, move files around etc.) or to run extra validations using an external tool.
When a run resumes after having been paused for any reason (e.g., confirmation, approval policy), the remaining phases are run in a new container. As a result, any tool installed in a phase that occurred before the pause won't be available in the subsequent phases. A better way to achieve this would be to bake the tool into a custom runner image.
If any of the "before" hooks fail (non-zero exit code), the relevant phase is not executed. If the phase itself fails, none of the "after" hooks get executed.
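These semantics can be sketched as follows; this is an illustrative shell model, not Spacelift's actual implementation:

```shell
# Illustrative model of hook ordering: a failing "before" hook skips
# the phase, and a failing phase skips the "after" hooks.
run_with_hooks() {
  before="$1"; phase="$2"; after="$3"
  $before || { echo "before hook failed: phase skipped"; return 1; }
  $phase  || { echo "phase failed: after hooks skipped"; return 1; }
  $after && echo "phase and all hooks succeeded"
}

run_with_hooks false true  true || true   # "before" hook fails
run_with_hooks true  false true || true   # phase itself fails
run_with_hooks true  true  true           # everything succeeds
```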
The workflow can be customized either using our Terraform provider or in the GUI. The GUI provides an editor that lets you select the phase you want to customize, then add, remove, reorder (using drag and drop) and edit commands in-line, both before and after the phase. Note how the commands that precede the customized phase are the "before" hooks (ps aux and ls in the example below), and the ones that follow it are the "after" hooks (ls -la .terraform):
Note that these commands run in the same shell session as the phase itself, so the phase will have access to any shell variables exported by the preceding scripts. These variables will not, however, be persisted between steps unless explicitly requested. This is particularly useful for retrieving one-off initialization secrets (e.g. sensitive credentials).
These scripts can be overridden by the stack's runtime configuration.
Persisting environment variables between steps»
Environment variables can be persisted between steps by writing them to the .env file in the project root. A common pattern uses two hooks, one before and one after the initialization phase: the first retrieves a secret from external storage and puts it in the environment to be used by the initialization phase, while the second persists the secret to the .env file so that subsequent steps can access it.
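A minimal sketch of this pattern, with the two hooks shown as plain shell commands; fetch_secret is a hypothetical stand-in for your real retrieval command (Vault, AWS Parameter Store, Secrets Manager, etc.):

```shell
# Hypothetical helper standing in for your external secret storage.
fetch_secret() { echo "s3cr3t-value"; }

# "before" hook: put the secret in the environment so the phase
# running in the same shell session can use it.
export DB_PASSWORD="$(fetch_secret)"

# "after" hook: persist it to the .env file in the project root so
# that subsequent steps (new shell sessions) can access it.
echo "DB_PASSWORD=$DB_PASSWORD" >> .env
```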
Note that the environment persisted this way is uploaded (with RSA-wrapped AES encryption) to external storage when the tracked run requires manual review. If you don't feel comfortable with that, you have two options:
- use private workers where we don't have the key to decrypt the payload;
- do not persist sensitive environment variables between steps - instead, retrieve them before each step that needs them;
Enable local preview»
Indicates whether creating proposed Runs based on user-uploaded local workspaces is allowed.
If this is enabled, you can use spacectl to create a proposed run based on the directory you're in:
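For example (the stack ID below is a placeholder):

```shell
# Uploads the current directory and creates a proposed run on the stack.
spacectl stack local-preview --id my-stack
```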
This in effect allows anybody with write access to the Stack to execute arbitrary code with access to all the environment variables configured in the Stack.
Use with caution.
Name and description»
Stack name and description are pretty self-explanatory. The required name is what you'll see in the stack list on the home screen and menu selection dropdown. Make sure that it's informative enough to be able to immediately communicate the purpose of the stack, but short enough so that it fits nicely in the dropdown, and no important information is cut off.
The optional description is completely free-form and it supports Markdown. This is perhaps a good place for a thorough explanation of the purpose of the stack, perhaps a link or two, and an obligatory cat GIF.
Based on the original name, Spacelift generates an immutable slug that serves as a unique identifier of this stack. If the name and the slug diverge significantly, things may become confusing.
So even though you can change the stack name at any point, we strongly discourage all non-trivial changes.
Labels»
Labels are arbitrary, user-defined tags that can be attached to Stacks. A single Stack can have an arbitrary number of these, but they must be unique. Labels can be used for any purpose, including UI filtering, but one area where they shine most is user-defined policies which can modify their behavior based on the presence (or lack thereof) of a particular label.
Project root»
Project root points to the directory within the repo where the project should start executing. This is especially useful for monorepos, or indeed repositories hosting multiple somewhat independent projects. This setting plays very well with Git push policies, allowing you to easily express generic rules on what it means for the stack to be affected by a code change.
The project root can be overridden by the stack's runtime configuration.
Repository and branch»
Repository and branch point to the location of the source code for a stack. The repository must either belong to the GitHub account linked to Spacelift (its choice may further be limited by the way the Spacelift GitHub app has been installed) or to the GitLab server integrated with your Spacelift account. For more information about these integrations, please refer to our GitHub and GitLab documentation respectively.
Thanks to the strong integration between GitHub and Spacelift, the link between a stack and a repository can survive the repository being renamed in GitHub. If you're storing your repositories in GitLab, you need to manually (or programmatically, using Terraform) point the stack to the new location of the source code.
Spacelift does not support moving repositories between GitHub accounts, since Spacelift accounts are strongly linked to GitHub ones. In that case, the best course of action is to download your Terraform state and import it while recreating the stack (or stacks) in the other account. After that, all the stacks pointing to the old repository can be safely deleted.
Moving a repository between GitHub and GitLab or the other way around is simple, however. Just change the provider setting on the Spacelift project, and point the stack to the new source code location.
Branch signifies the repository branch tracked by the stack. By default (that is, unless a Git push policy explicitly determines otherwise), a commit pushed to the tracked branch triggers a deployment, represented by a tracked run. A push to any other branch triggers a test, represented by a proposed run. More information about Git push policies, tracked branches, and head commits can be found here.
Results of both tracked and proposed runs are displayed in the source control provider using their specific APIs - please refer to our GitHub and GitLab documentation respectively to understand how Spacelift feedback is provided for your infrastructure changes.
A branch must exist before it's pointed to in Spacelift.
Runner image»
Since every Spacelift job (which we call runs) is executed in a separate Docker container, setting a custom runner image provides a convenient way to prepare the exact runtime environment your infra-as-code flow is designed to use.
Additionally, for our Pulumi integration, overriding the default runner image is the canonical way of selecting the exact Pulumi version and its corresponding language SDK.
You can find more information about our use of Docker in this dedicated help article.
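A custom image typically extends a base runner image and bakes in extra tooling. This is a hedged sketch: the base image name and user below follow Spacelift's public Terraform runner image, but verify both against the Docker help article before relying on them:

```dockerfile
# Assumed base image; check the Docker help article for the current name/tag.
FROM public.ecr.aws/spacelift/runner-terraform:latest

USER root
# Bake tooling into the image instead of installing it in hooks,
# so it survives the container swap after a run is paused.
RUN apk add --no-cache jq
USER spacelift
```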
The runner image can be overridden by the stack's runtime configuration.
On the public worker pool, Docker images can only be pulled from allowed registries. On private workers, images can be stored in any registry, including self-hosted ones.
The Terraform version is set when a stack is created to indicate the version of Terraform that will be used with this project. However, Spacelift covers the entire Terraform version management story, and applying a change with a newer version will automatically update the version on the stack.
Terraform workspaces are supported by Spacelift, too, as long as your state backend supports them. If the workspace is set, Spacelift will try to first select, and then - should that fail - automatically create the required workspace on the state backend.
If you're managing Terraform state through Spacelift, the workspace argument is ignored since Spacelift gives each stack a separate workspace by default.
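In shell terms, the select-then-create behavior is roughly equivalent to the following (the workspace name is illustrative):

```shell
# Select the workspace if it already exists on the backend; otherwise create it.
terraform workspace select staging || terraform workspace new staging
```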
Login URL is the address Pulumi should log into during run initialization. Since we do not yet provide a full-featured Pulumi state backend, you need to bring your own (e.g. Amazon S3).
The name of the Pulumi stack that should be selected for backend operations. Do not confuse it with the Spacelift stack name - they may differ, though it's a good idea to keep them identical where possible.