
Stack settings»

This article covers all settings that are set directly on the stack. It's important to note that these are not the only settings that affect how runs and tasks within a given stack are processed - environment, attached contexts, runtime configuration and various integrations will all play a role here, too.

Video Walkthrough»

Common settings»

Administrative»

This setting indicates whether a stack has administrative privileges within the space it lives in. Runs executed by administrative stacks receive an API token that gives them administrative access to a subset of the Spacelift API used by our Terraform provider, which means they can create, update and destroy Spacelift resources.

The main use case is to create one or a small number of administrative stacks that declaratively define the rest of Spacelift resources like other stacks, their environments, contexts, policies, modules, worker pools etc. in order to avoid ClickOps.

Another pattern we've seen is stacks exporting their outputs as a context, which avoids exposing their entire state through the Terraform remote state pattern or relying on external storage mechanisms like AWS Parameter Store or Secrets Manager.

If this sounds interesting and you want to give it a try, please refer to the help article exclusively dedicated to Spacelift's Terraform provider.

Autodeploy»

Indicates whether changes to the stack can be applied automatically. When autodeploy is set to true, any change to the tracked branch will automatically be applied if the planning phase was successful and there are no plan policy warnings.

Consider setting it to true if you always do a code review before merging to the tracked branch, and/or want to rely on plan policies to automatically flag potential problems. If each candidate change goes through a meaningful human code review with stack writers as reviewers, having a separate step to confirm deployment may be overkill. You may also want to refer to a dedicated section on using plan policies for automated code review.

Autoretry»

Indicates whether obsolete proposed changes will be retried automatically. When autoretry is set to true and a change gets applied, all Pull Requests to the tracked branch conflicting with that change will be reevaluated based on the changed state.

This saves you from manually retrying runs on Pull Requests when the state changes. It also gives you more confidence that the proposed changes are the changes you will actually get after merging the Pull Request.

Autoretry is only supported for Stacks with a private Worker Pool attached.

Customizing workflow»

Spacelift workflow can be customized by adding extra commands to be executed before and after each of the following phases:

  • Initialization (before_init and after_init, respectively)
  • Planning (before_plan and after_plan, respectively)
  • Applying (before_apply and after_apply, respectively)
  • Destroying (before_destroy and after_destroy, respectively)
  • Performing (before_perform and after_perform, respectively)
  • Finally (after_run): Executed after each actively processed run, regardless of its outcome. These hooks will execute as part of the last "active" state of the run and will have access to an environment variable called TF_VAR_spacelift_final_run_state indicating the final state of the run.

Note here that all hooks, including the after_run ones, execute on the worker. Hence, the after_run hooks will not fire if the run is not being processed by the worker - for example, if the run is terminated outside of the worker (eg. canceled, discarded), there is an issue setting up the workspace or starting the worker container, or the worker container is killed while processing the run.
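For example, an after_run hook can surface that variable in the logs. A minimal sketch, kept to a single command in line with the advice on newlines below:

# after_run hook: report how the run ended
echo "Run ended in state: ${TF_VAR_spacelift_final_run_state}"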

These commands generally serve one of two purposes: making modifications to your workspace (eg. setting up symlinks, moving files around) or running validations using something like tfsec, tflint or terraform fmt.
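As an illustration, the validation use case could be covered by a couple of "before" hooks on the planning phase. A minimal sketch, assuming tflint and terraform are available on the runner image:

# before_plan hooks: fail the run early if formatting or lint checks do not pass
terraform fmt -check -recursive
tflint

Each line would be added as a separate hook, so a failing check stops the phase on its own.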

Tip

We don't recommend using newlines (\n) in hooks. The reason is that we chain the Spacelift commands (eg. terraform plan) and the pre/post hooks with a double ampersand (&&), so a non-zero exit code from one command in a newline-separated block can be hidden if the last command in that block succeeds. If you'd like to run multiple commands in a hook, you can either add multiple hooks or add a script as a mounted file and call it from a hook.

Additionally, since we chain the commands, if you use a semicolon (;) the hooks will continue to run even if the phase fails. Use a double ampersand (&&) or wrap your hook in parentheses to ensure that "after" commands are only executed if the phase succeeds.
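To see the difference, compare how the two separators propagate exit codes. A minimal sketch you can try in any shell:

# With a semicolon, the failure of the first command is swallowed (overall exit code 0):
false; echo "still runs"
# With a double ampersand, the failure short-circuits the chain (overall exit code 1):
false && echo "never runs"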

Danger

When a run resumes after having been paused for any reason (e.g., confirmation, approval policy), the remaining phases run in a new container. As a result, any tool installed in a phase that occurred before the pause won't be available in the subsequent phases. A better way to make a tool available across phases is to bake it into a custom runner image.

Info

If any of the "before" hooks fail (non-zero exit code), the relevant phase is not executed. If the phase itself fails, none of the "after" hooks get executed, unless the "after" hook is chained with a semicolon (;). For more information on the use of semicolons and ampersands in hooks, please refer to the tip above.

The workflow can be customized either using our Terraform provider or in the GUI. The GUI has a very nice editor that allows you to customize commands before and after each phase: you can add and remove commands, reorder them using drag and drop, and edit them in-line. The commands that precede the customized phase are the "before" hooks (eg. ps aux and ls), and the ones that follow it are the "after" hooks (eg. ls -la .terraform).

It's worth noting that these commands run in the same shell session as the phase itself, so the phase will have access to any shell variables exported by the preceding scripts.

Environment variables are preserved from one phase to the next.
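For instance, a value exported by a "before" hook is visible to the phase and to the "after" hooks that follow. A minimal sketch (the variable name is just an illustration):

# before_plan hook: export a value for later steps
export BUILD_ID="run-$(date +%s)"
# after_plan hook: the variable exported above is still available
echo "Planned build ${BUILD_ID}"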

Info

These scripts can be overridden by the runtime configuration specified in the .spacelift/config.yml file.

Note on hook ordering»

Hooks added directly to a stack and hooks coming from contexts attached to it follow different ordering rules. Stack hooks are ordered via drag and drop, manually attached context hooks are ordered by context priority, and hooks from auto-attached contexts are ordered alphabetically or reverse-alphabetically by context name, depending on whether they run before or after the phase.

Hooks from manually and auto-attached contexts can only be edited from their respective views.

In the before phase, hook priorities work as follows:

  • context hooks (based on set priorities)
  • context auto-attached hooks (reversed alphabetically)
  • stack hooks

In the after phase, hook priorities work as follows:

  • stack hooks
  • context auto-attached hooks (alphabetically)
  • context hooks (reversed priorities)

Let's suppose you have 4 contexts attached to a stack:

  • context_a (auto-attached)
  • context_b (auto-attached)
  • context_c (priority 0)
  • context_d (priority 5)

In all of these contexts, we have added hooks that echo the context name before and after phases. On top of that, we will add two hooks at the stack level (one before, one after) that simply run "echo stack".

Before phase order:

  • context_c
  • context_d
  • context_b
  • context_a
  • stack

After phase order:

  • stack
  • context_a
  • context_b
  • context_d
  • context_c

Runtime commands»

Spacelift can handle special commands that change the workflow behavior. Runtime commands use the echo command in a specific format.

You can use these commands in any lifecycle step of the workflow.

stack_runtime_command

echo "::command arg1 arg2"

Below is a list of supported commands. See the more detailed documentation after this table.

Command      Description
::add-mask   Adds a set of values that should be masked in log output

::add-mask»

When you mask a value, it is treated as a secret and will be redacted in the log output. Each whitespace-separated masked word is replaced with five * characters.

Example»
# Multiple masks can be set with a single command
echo "::add-mask secret-string another-secret-string"

# You can pull a secret dynamically, for example here we can mask the account ID
echo "::add-mask $(aws sts get-caller-identity | jq -r .Account)"

Enable local preview»

Indicates whether creating proposed Runs based on user-uploaded local workspaces is allowed.

If this is enabled, you can use spacectl to create a proposed run based on the directory you're in:

spacectl stack local-preview --id <stack-id>

Danger

This in effect allows anybody with write access to the Stack to execute arbitrary code with access to all the environment variables configured in the Stack.

Use with caution.

Enable well known secret masking»

This setting determines whether well-known secret patterns will be automatically redacted from logs. If enabled, the following secrets will be masked:

  • AWS Access Key Id
  • GitHub PAT
  • GitHub Fine-Grained PAT
  • GitHub App Token
  • GitHub Refresh Token
  • GitHub OAuth Access Token
  • Slack Token
  • PGP Private Key
  • RSA Private Key
  • PEM block with BEGIN PRIVATE KEY header

Name and description»

Stack name and description are pretty self-explanatory. The required name is what you'll see in the stack list on the home screen and menu selection dropdown. Make sure that it's informative enough to be able to immediately communicate the purpose of the stack, but short enough so that it fits nicely in the dropdown, and no important information is cut off.

The optional description is completely free-form and supports Markdown. This is a good place for a thorough explanation of the purpose of the stack, perhaps a link or two, and an obligatory cat GIF.

Warning

Based on the original name, Spacelift generates an immutable slug that serves as a unique identifier of this stack. If the name and the slug diverge significantly, things may become confusing.

So even though you can change the stack name at any point, we strongly discourage all non-trivial changes.

Labels»

Labels are arbitrary, user-defined tags that can be attached to Stacks. A single Stack can have an arbitrary number of these, but they must be unique. Labels can be used for any purpose, including UI filtering, but one area where they shine most is user-defined policies which can modify their behavior based on the presence (or lack thereof) of a particular label.

There are some magic labels that you can add to your stacks. These labels add/remove functionalities based on their presence.

List of the most useful labels:

  • infracost -- Enables Infracost on your stack
  • feature:enable_log_timestamps -- Enables timestamps on run logs
  • feature:add_plan_pr_comment -- Enables Pull Request plan commenting (deprecated -- use notification policies instead)
  • feature:disable_pr_comments -- Disables Pull Request comments
  • feature:disable_pr_delta_comments -- Disables Pull Request delta comments
  • feature:disable_resource_sanitization -- Disables resource sanitization
  • feature:ignore_runtime_config -- Ignores the runtime configuration in .spacelift/config.yml
  • terragrunt -- Old way of using Terragrunt through the Terraform backend
  • ghenv: Name -- Sets the GitHub deployment environment name (defaults to the stack name)
  • ghenv: - -- Disables the creation of GitHub deployment environments
  • autoattach:autoattached_label -- Used on policies/contexts to auto-attach the policy/context to all stacks containing autoattached_label
  • feature:k8s_keep_using_prune_white_list_flag -- Sets the --prune-whitelist flag instead of --prune-allowlist for the template parameter .PruneWhiteList in the Kubernetes custom workflow

Project root»

Project root points to the directory within the repo where the project should start executing. This is especially useful for monorepos, or indeed repositories hosting multiple somewhat independent projects. This setting plays very well with Git push policies, allowing you to easily express generic rules on what it means for the stack to be affected by a code change.

Info

The project root can be overridden by the runtime configuration specified in the .spacelift/config.yml file.

Project globs»

The project globs option allows you to specify files and directories outside of the project root that the stack cares about. In the absence of push policies, any changes made to the project root and any paths specified by project globs will trigger Spacelift runs.

Warning

Project globs do not mount the files or directories into your project root. They are primarily used to trigger your stack when, for example, a module outside of the project root changes.

Project globs are entirely optional, and you can add as many of them to a stack as you need.

Under the hood, the project globs option takes advantage of the doublestar.Match function to do pattern matching.

Example matches:

  • Any directory or file: **
  • A directory and all of its content: dir/*
  • Match all files with a specific extension: dir/*.tf
  • Match all files that start with one string, end with another, and have a fixed number of characters in between: data-???-report matches exactly three characters between data- and -report
  • Match all files that contain any single character from a sequence: dir/instance[0-9].tf matches dir/instance0.tf through dir/instance9.tf

As you can see from the example matches, these are standard glob patterns (not regular expressions) that you are likely already accustomed to.

VCS integration and repository»

There are two types of VCS integrations: default and Space-level. Default integrations are always available to all stacks, while Space-level integrations are only available to stacks that live in the same Space as the integration or have access to it via inheritance. Read more about VCS integrations on the source control page.

Repository and branch point to the location of the source code for a stack. The repository must either belong to the GitHub account linked to Spacelift (its choice may further be limited by the way the Spacelift GitHub app has been installed) or to the GitLab server integrated with your Spacelift account. For more information about these integrations, please refer to our GitHub and GitLab documentation respectively.

Thanks to the strong integration between GitHub and Spacelift, the link between a stack and a repository survives the repository being renamed in GitHub. If you're storing your repositories in GitLab, you need to manually (or programmatically, using Terraform) point the stack to the new location of the source code.

Info

Spacelift does not support moving repositories between GitHub accounts, since Spacelift accounts are strongly linked to GitHub ones. In that case, the best course of action is to download your Terraform state and import it while recreating the stack (or stacks) in the other account. After that, all the stacks pointing to the old repository can be safely deleted.

Moving a repository between GitHub and GitLab or the other way around is simple, however. Just change the provider setting on the Spacelift project, and point the stack to the new source code location.

Branch signifies the repository branch tracked by the stack. By default - that is, unless a Git push policy explicitly determines otherwise - a commit pushed to the tracked branch triggers a deployment represented by a tracked run, while a push to any other branch triggers a test represented by a proposed run. More information about Git push policies, tracked branches, and head commits can be found here.

Results of both tracked and proposed runs are displayed in the source control provider using their specific APIs - please refer to our GitHub and GitLab documentation respectively to understand how Spacelift feedback is provided for your infrastructure changes.

Info

A branch must exist before it's pointed to in Spacelift.

Runner image»

Since every Spacelift job (which we call runs) is executed in a separate Docker container, setting a custom runner image provides a convenient way to prepare the exact runtime environment your infra-as-code flow is designed to use.

Additionally, for our Pulumi integration, overriding the default runner image is the canonical way of selecting the exact Pulumi version and its corresponding language SDK.

You can find more information about our use of Docker in this dedicated help article.

Info

The runner image can be overridden by the runtime configuration specified in the .spacelift/config.yml file.

Warning

On the public worker pool, Docker images can only be pulled from allowed registries. On private workers, images can be stored in any registry, including self-hosted ones.

Worker pool»

This setting selects the worker pool that will process the stack's runs. Unless a private worker pool is attached, runs are handled by the public worker pool.

Terraform-specific settings»

Version»

The Terraform version is set when a stack is created to indicate the version of Terraform that will be used with this project. However, Spacelift covers the entire Terraform version management story, and applying a change with a newer version will automatically update the version on the stack.

Workspace»

Terraform workspaces are supported by Spacelift, too, as long as your state backend supports them. If the workspace is set, Spacelift will try to first select, and then - should that fail - automatically create the required workspace on the state backend.
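Conceptually, this is similar to running the commands below against your state backend (a sketch of the equivalent Terraform CLI invocation with a placeholder workspace name, not the exact implementation):

# Select the workspace if it exists, otherwise create it
terraform workspace select my-workspace || terraform workspace new my-workspace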

If you're managing Terraform state through Spacelift, the workspace argument is ignored since Spacelift gives each stack a separate workspace by default.

Pulumi-specific settings»

Login URL»

Login URL is the address Pulumi should log into during Run initialization. Since we do not yet provide a full-featured Pulumi state backend, you need to bring your own (eg. Amazon S3).

You can read more about the login process here. A more general explanation of Pulumi state management and backends is available here.
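For example, with an S3 state backend the login URL is the bucket address that the Pulumi CLI itself would accept. A sketch of the equivalent manual command, with a placeholder bucket name:

# Log the Pulumi CLI into a self-managed S3 state backend
pulumi login s3://my-pulumi-state-bucket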

Stack name»

The name of the Pulumi stack that should be selected for backend operations. Do not confuse it with the Spacelift stack name - they may be different, though it's probably a good idea to keep them identical.