terraform-stacks
This skill is a comprehensive guide for working with Terraform Stacks, including configuration examples, CLI usage, and API monitoring. It includes runnable commands such as terraform stacks configuration upload, shows how to extract an API token from ~/.terraform.d/credentials.tfrc.json into TOKEN, and demonstrates curl calls against https://app.terraform.io.
Terraform Stacks
Terraform Stacks simplify infrastructure provisioning and management at scale by providing a configuration layer above traditional Terraform modules. Stacks enable declarative orchestration of multiple components across environments, regions, and cloud accounts.
Core Concepts
Stack: A complete unit of infrastructure composed of components and deployments that can be managed together.
Component: An abstraction around a Terraform module that defines infrastructure pieces. Each component specifies a source module, inputs, and providers.
Deployment: An instance of all components in a stack with specific input values. Use deployments for different environments (dev/staging/prod), regions, or cloud accounts.
Stack Language: A separate HCL-based language (not regular Terraform HCL) with distinct blocks and file extensions.
File Structure
Terraform Stacks use specific file extensions:
- Component configuration: .tfcomponent.hcl
- Deployment configuration: .tfdeploy.hcl
- Provider lock file: .terraform.lock.hcl (generated by CLI)
All configuration files must be at the root level of the Stack repository. HCP Terraform processes all files in dependency order.
Recommended File Organization
my-stack/
├── variables.tfcomponent.hcl # Variable declarations
├── providers.tfcomponent.hcl # Provider configurations
├── components.tfcomponent.hcl # Component definitions
├── outputs.tfcomponent.hcl # Stack outputs
├── deployments.tfdeploy.hcl # Deployment definitions
├── .terraform.lock.hcl # Provider lock file (generated)
└── modules/ # Local modules (optional - only if using local modules)
├── vpc/
└── compute/
Note: The modules/ directory is only required when using local module sources. Components can reference modules from:
- Local file paths: ./modules/vpc
- Public registry: terraform-aws-modules/vpc/aws
- Private registry: app.terraform.io/<org-name>/vpc/aws
When validating Stack configurations, check component source declarations rather than assuming a local modules/ directory must exist.
HCP Terraform processes all .tfcomponent.hcl and .tfdeploy.hcl files in dependency order, so organizing by purpose improves readability for complex Stacks.
Component Configuration (.tfcomponent.hcl)
Variable Block
Declare input variables for the Stack configuration. Variables must define a type field and do not support the validation argument.
variable "aws_region" {
type = string
description = "AWS region for deployments"
default = "us-west-1"
}
variable "identity_token" {
type = string
description = "OIDC identity token"
ephemeral = true # Does not persist to state file
}
variable "instance_count" {
type = number
nullable = false
}
Important: Use ephemeral = true for credentials and tokens (identity tokens, API keys, passwords) to prevent them from persisting in state files. Use stable for longer-lived values like license keys that need to persist across runs.
Required Providers Block
Works the same as traditional Terraform configurations:
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.7.0"
}
random = {
source = "hashicorp/random"
version = "~> 3.5.0"
}
}
Provider Block
Provider blocks differ from traditional Terraform:
- Support the for_each meta-argument
- Define aliases in the block header (not as an argument)
- Accept configuration through a config block
Single Provider Configuration:
provider "aws" "this" {
config {
region = var.aws_region
assume_role_with_web_identity {
role_arn = var.role_arn
web_identity_token = var.identity_token
}
}
}
Multiple Provider Configurations with for_each:
provider "aws" "configurations" {
for_each = var.regions
config {
region = each.value
assume_role_with_web_identity {
role_arn = var.role_arn
web_identity_token = var.identity_token
}
}
}
Authentication Best Practice: Use workload identity (OIDC) as the preferred authentication method for Stacks. This approach:
- Avoids long-lived static credentials
- Provides temporary, scoped credentials per deployment run
- Integrates with cloud provider IAM (AWS IAM Roles, Azure Managed Identities, GCP Service Accounts)
- Eliminates need for platform-managed environment variables
Configure workload identity using identity_token blocks and assume_role_with_web_identity in provider configuration.
Component Block
Each Stack requires at least one component block. Add a component for each module to include in the Stack.
Component Source: Each component's source argument must specify one of the following source types:
- Local file path: ./modules/vpc
- Public registry: terraform-aws-modules/vpc/aws
- Private registry: app.terraform.io/my-org/vpc/aws
- Git repository: git::https://github.com/org/repo.git//modules/vpc?ref=v1.0.0
component "vpc" {
source = "./modules/vpc"
inputs = {
cidr_block = var.vpc_cidr
name_prefix = var.name_prefix
}
providers = {
aws = provider.aws.this
}
}
component "networking" {
source = "app.terraform.io/my-org/vpc/aws"
version = "2.1.0"
inputs = {
cidr_block = var.vpc_cidr
environment = var.environment
}
providers = {
aws = provider.aws.this
}
}
component "compute" {
source = "./modules/compute"
inputs = {
vpc_id = component.vpc.vpc_id
subnet_ids = component.vpc.private_subnet_ids
instance_type = var.instance_type
}
providers = {
aws = provider.aws.this
}
}
Component with for_each for Multi-Region:
component "s3" {
for_each = var.regions
source = "./modules/s3"
inputs = {
region = each.value
tags = var.common_tags
}
providers = {
aws = provider.aws.configurations[each.value]
}
}
Key Points:
- Reference component outputs using component.<name>.<output>
- For components with for_each, reference specific instances: component.<name>[each.value].<output>
- Example: component.vpc["us-east-1"].vpc_id to access the VPC ID for a specific region
- Aggregate outputs from multiple instances using for expressions: [for x in component.instance : x.instance_ids]
- All inputs are provided as a single inputs object
- Provider references are normal values: provider.<type>.<alias> or provider.<type>.<alias>[each.value]
- Dependencies are automatically inferred from component references
Output Block
Outputs require a type argument and do not support preconditions:
output "vpc_id" {
type = string
description = "VPC ID"
value = component.vpc.vpc_id
}
output "endpoint_urls" {
type = map(string)
value = {
for region, comp in component.api : region => comp.endpoint_url
}
sensitive = false
}
Locals Block
Works exactly as in traditional Terraform:
locals {
common_tags = {
Environment = var.environment
ManagedBy = "Terraform Stacks"
Project = var.project_name
}
region_config = {
for region in var.regions : region => {
name_suffix = "${var.environment}-${region}"
}
}
}
Removed Block
Use the removed block to safely remove components from a Stack. HCP Terraform still requires the removed component's providers in order to destroy its resources.
removed {
from = component.old_component
source = "./modules/old-module"
providers = {
aws = provider.aws.this
}
}
Deployment Configuration (.tfdeploy.hcl)
Identity Token Block
Generate JWT tokens for OIDC authentication with cloud providers:
identity_token "aws" {
audience = ["aws.workload.identity"]
}
identity_token "azure" {
audience = ["api://AzureADTokenExchange"]
}
Reference tokens in deployments using identity_token.<name>.jwt
Store Block
Access HCP Terraform variable sets within Stack deployments. You can reference variable sets by either ID or name:
# Reference by ID
store "varset" "aws_credentials" {
id = "varset-ABC123" # Variable set ID from HCP Terraform
source = "tfc-cloud-shared"
category = "terraform"
}
# Or reference by name
store "varset" "api_keys" {
name = "api_keys_default_project" # Variable set name
category = "terraform"
}
Reference variable set values in deployments:
deployment "production" {
inputs = {
aws_access_key = store.varset.aws_credentials.AWS_ACCESS_KEY_ID
aws_secret_key = store.varset.aws_credentials.AWS_SECRET_ACCESS_KEY
api_key = store.varset.api_keys.API_KEY
# ... other inputs
}
}
Attributes:
- id or name - Reference the variable set by ID or name (use one, not both)
- source - Source of the variable set (e.g., "tfc-cloud-shared")
- category - Variable category: "terraform" for Terraform variables, "env" for environment variables
Benefits:
- Centralize credentials and configuration
- Share variables across multiple Stacks
- Manage sensitive values securely in HCP Terraform
- Reference the same variable set from multiple deployments
Locals Block
Define local values for deployment configuration:
locals {
aws_regions = ["us-west-1", "us-east-1", "eu-west-1"]
role_arn = "arn:aws:iam::123456789012:role/hcp-terraform-stacks"
}
Deployment Block
Define deployment instances. Each Stack requires at least one deployment (maximum 20 per Stack).
Single Environment Deployment:
deployment "production" {
inputs = {
aws_region = "us-west-1"
instance_count = 3
role_arn = local.role_arn
identity_token = identity_token.aws.jwt
}
}
Multiple Environment Deployments:
deployment "development" {
inputs = {
aws_region = "us-east-1"
instance_count = 1
name_suffix = "dev"
role_arn = local.role_arn
identity_token = identity_token.aws.jwt
}
}
deployment "staging" {
inputs = {
aws_region = "us-east-1"
instance_count = 2
name_suffix = "staging"
role_arn = local.role_arn
identity_token = identity_token.aws.jwt
}
}
deployment "production" {
inputs = {
aws_region = "us-west-1"
instance_count = 5
name_suffix = "prod"
role_arn = local.role_arn
identity_token = identity_token.aws.jwt
}
}
Destroying a Deployment:
To safely remove a deployment:
deployment "old_environment" {
inputs = {
aws_region = "us-west-1"
instance_count = 2
role_arn = local.role_arn
identity_token = identity_token.aws.jwt
}
destroy = true # Mark for destruction
}
After applying the plan and the deployment is destroyed, remove the deployment block from your configuration.
Deployment Group Block
Group deployments together to configure shared settings. Important: Custom deployment groups require HCP Terraform Premium tier. Organizations on free/standard tiers automatically use default deployment groups (named {deployment-name}_default).
Syntax: Deployments reference groups (not the other way around):
# Define the deployment group
deployment_group "canary" {
# Optional: Configure auto-approve rules
auto_approve_checks = [deployment_auto_approve.safe_changes]
}
deployment_group "production" {
# Groups can have auto-approve configuration
auto_approve_checks = [deployment_auto_approve.applyable_only]
}
# Deployments reference their group
deployment "dev" {
inputs = {
environment = "dev"
# ... other inputs
}
deployment_group = deployment_group.canary # Reference to group
}
deployment "staging" {
inputs = {
environment = "staging"
# ... other inputs
}
deployment_group = deployment_group.canary # Multiple deployments can reference same group
}
deployment "prod_us_east" {
inputs = {
environment = "prod"
region = "us-east-1"
# ... other inputs
}
deployment_group = deployment_group.production
}
Note: On free/standard tiers without custom groups, each deployment automatically gets a default group named {deployment-name}_default.
Deployment Auto-Approve Block
Define rules that automatically approve deployment plans based on specific conditions (Premium feature):
deployment_auto_approve "safe_changes" {
deployment_group = deployment_group.canary
check {
condition = context.plan.changes.remove == 0
reason = "Cannot auto-approve plans with resource deletions"
}
check {
condition = context.plan.applyable
reason = "Plan must be applyable"
}
}
deployment_auto_approve "applyable_only" {
deployment_group = deployment_group.production
check {
condition = context.plan.applyable
reason = "Plan must be successful"
}
}
Available Context Variables:
- context.plan.applyable - Boolean: Plan succeeded without errors
- context.plan.changes.add - Number: Resources to add
- context.plan.changes.change - Number: Resources to change
- context.plan.changes.remove - Number: Resources to remove
- context.plan.changes.total - Number: Total number of changes (add + change + remove)
- context.success - Boolean: Whether the previous operation succeeded
Common patterns:
- Approve plans with no deletions: context.plan.changes.remove == 0
- Approve only successful plans: context.plan.applyable && context.success
- Approve plans with limited changes: context.plan.changes.total <= 10
Note: orchestrate blocks are deprecated. Use deployment_group and deployment_auto_approve instead.
Publish Output Block
Export outputs from a Stack for use in other Stacks (linked Stacks):
publish_output "vpc_id_network" {
type = string
value = deployment.network.vpc_id
}
publish_output "subnet_ids" {
type = list(string)
value = deployment.network.private_subnet_ids
}
Upstream Input Block
Reference published outputs from another Stack:
upstream_input "network_stack" {
type = "stack"
source = "app.terraform.io/my-org/my-project/networking-stack"
}
deployment "application" {
inputs = {
vpc_id = upstream_input.network_stack.vpc_id_network
subnet_ids = upstream_input.network_stack.subnet_ids
}
}
Terraform Stacks CLI
Note: Terraform Stacks is Generally Available (GA) as of Terraform CLI v1.13+. Stacks now count toward Resources Under Management (RUM) for HCP Terraform billing.
Initialize and Validate
Initialize Stack - Downloads providers, modules, and generates provider lock file:
terraform stacks init
Update provider lock file - Regenerate lock file with additional platforms or updated providers:
terraform stacks providers-lock
Validate configuration - Check syntax and validate configuration without uploading:
terraform stacks validate
Deployment Workflow
Important: There are no terraform stacks plan or terraform stacks apply commands. The workflow is:
1. Upload configuration - Send the Stack configuration to HCP Terraform:
terraform stacks configuration upload
2. HCP Terraform automatically triggers deployment runs for each deployment.
3. Monitor deployments - Track deployment progress:
# Watch all deployments in a group (streams status updates)
terraform stacks deployment-group watch -deployment-group=canary
# List all deployment runs (non-interactive, shows current status)
terraform stacks deployment-run list
# Watch a specific deployment run (streams detailed progress)
terraform stacks deployment-run watch -deployment-run-id=sdr-ABC123
4. Approve deployments - Required if auto-approve is not configured:
# Approve all pending plan operations in a specific deployment run
terraform stacks deployment-run approve-all-plans -deployment-run-id=sdr-ABC123
# Approve all pending plans across all runs in a deployment group
terraform stacks deployment-group approve-all-plans -deployment-group=canary
# Cancel a deployment run if needed
terraform stacks deployment-run cancel -deployment-run-id=sdr-ABC123
Configuration Management
List configurations - Show all configuration versions for the current Stack:
terraform stacks configuration list
Fetch configuration - Download a specific configuration version to local directory:
terraform stacks configuration fetch -configuration-id=stc-ABC123
Watch configuration processing - Monitor configuration upload and validation status:
terraform stacks configuration watch
Other Useful Commands
Create Stack - Initialize a new Stack with scaffolding (interactive):
terraform stacks create
Format files - Format Stack configuration files to canonical style:
terraform stacks fmt
List Stacks - Show all Stacks in the current project:
terraform stacks list
Show version - Display Terraform CLI and Stacks version:
terraform stacks version
Rerun deployment - Trigger a new deployment run for a deployment group:
terraform stacks deployment-group rerun -deployment-group=canary
Monitoring Deployments with HCP Terraform API
When CLI commands are insufficient or you need programmatic monitoring (automation, CI/CD), use the HCP Terraform API. This is particularly useful for non-interactive environments like AI agents.
Authentication
Extract your API token from the Terraform credentials file:
TOKEN=$(jq -r '.credentials["app.terraform.io"].token' ~/.terraform.d/credentials.tfrc.json)
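Every API call in the workflow below sends the same two headers. A small wrapper keeps the curl invocations short; this is a sketch, not official tooling — the tfc_get function name and the overridable TFC_API variable are illustrative conventions:

```shell
# Hypothetical helper: wrap the shared curl boilerplate for HCP Terraform
# API calls. TOKEN is assumed to be set as shown above; TFC_API can be
# overridden, e.g. for a Terraform Enterprise install.
TFC_API="${TFC_API:-https://app.terraform.io/api/v2}"

tfc_get() {
  # -L follows redirects (needed for the artifacts endpoint in step 6)
  curl -L -s \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/vnd.api+json" \
    "$TFC_API/$1"
}
```

With this in place, step 1 below reduces to tfc_get "stack-configurations/{configuration-id}" | jq '.'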
API Monitoring Workflow
After uploading a configuration with terraform stacks configuration upload, follow this sequence to monitor deployment progress:
1. Get Configuration Status
curl -s -H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/vnd.api+json" \
"https://app.terraform.io/api/v2/stack-configurations/{configuration-id}" | jq '.'
Returns: Configuration status (pending/completed), components detected, sequence number
2. Get Deployment Group Summaries
curl -s -H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/vnd.api+json" \
"https://app.terraform.io/api/v2/stack-configurations/{configuration-id}/stack-deployment-group-summaries" | jq '.'
Returns: Deployment group ID, name (e.g., dev_default), status, status-counts (pending/succeeded/failed)
3. Get Deployment Runs
curl -s -H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/vnd.api+json" \
"https://app.terraform.io/api/v2/stack-deployment-groups/{group-id}/stack-deployment-runs" | jq '.'
Returns: Deployment run IDs, current status, timestamps
4. Get Deployment Steps
curl -s -H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/vnd.api+json" \
"https://app.terraform.io/api/v2/stack-deployment-runs/{run-id}/stack-deployment-steps" | jq '.'
Returns: Step IDs, operation-type (plan/apply), status (running/completed/failed)
5. Get Error Diagnostics (if deployment fails)
curl -s -H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/vnd.api+json" \
"https://app.terraform.io/api/v2/stack-deployment-steps/{step-id}/stack-diagnostics?stack_deployment_step_id={step-id}" | jq '.'
Critical: The stack_deployment_step_id query parameter is required to retrieve diagnostics. Without it, the API returns empty results.
Returns: Detailed error messages with file locations, line numbers, code snippets, and error descriptions
6. Get Stack Outputs (after successful deployment)
# Get the final apply step ID from step 4, then:
curl -L -s -H "Authorization: Bearer $TOKEN" \
"https://app.terraform.io/api/v2/stack-deployment-steps/{final-apply-step-id}/artifacts?name=apply-description" | \
jq -r '.outputs | to_entries | .[] | "\(.key): \(.value.change.after)"'
Important:
- This endpoint returns an HTTP 307 redirect - use curl -L to follow redirects
- The artifacts endpoint is currently the only way to retrieve Stack outputs programmatically
- This endpoint is not yet documented in the public API documentation
API Notes for AI Agents and Automation
- Interactive commands don't work: Commands like terraform stacks deployment-run watch stream output and block, making them unusable for automation
- Use the API for polling: Poll deployment run status via the API for non-interactive monitoring
- No direct output command: There is currently no CLI command to retrieve Stack outputs (use the artifacts API)
- Parse JSON responses: Use jq to extract relevant fields from API responses
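The polling approach can be sketched with a jq extraction step. The JSON below is a hand-written stand-in mimicking the JSON:API shape of a stack-deployment-runs response; verify the field names against a real response before relying on them:

```shell
# RESPONSE stands in for the output of the stack-deployment-runs curl call
# from step 3 above; its exact shape is an assumption to verify.
RESPONSE='{"data":[{"id":"sdr-ABC123","attributes":{"status":"succeeded"}}]}'

# Print "<run-id> <status>" for each run. A polling loop would repeat the
# curl call and break once every status reaches a terminal state.
echo "$RESPONSE" | jq -r '.data[] | "\(.id) \(.attributes.status)"'
```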
Common Patterns
Multi-Region Deployment
# variables.tfcomponent.hcl
variable "regions" {
type = set(string)
default = ["us-west-1", "us-east-1", "eu-west-1"]
}
# providers.tfcomponent.hcl
provider "aws" "regional" {
for_each = var.regions
config {
region = each.value
assume_role_with_web_identity {
role_arn = var.role_arn
web_identity_token = var.identity_token
}
}
}
# components.tfcomponent.hcl
component "regional_infra" {
for_each = var.regions
source = "./modules/regional"
inputs = {
region = each.value
}
providers = {
aws = provider.aws.regional[each.value]
}
}
Component Dependencies
Dependencies are automatically inferred when one component references another's output:
component "database" {
source = "./modules/rds"
inputs = {
subnet_ids = component.vpc.private_subnet_ids # Creates dependency
}
providers = {
aws = provider.aws.this
}
}
Deferred Changes
Stacks support deferred changes to handle dependencies where values are only known after apply (known_after_apply). This enables configurations that would fail in traditional Terraform due to circular dependencies.
How it works:
- Terraform plans and applies what it can with current information
- Re-evaluates the configuration with newly available values
- Plans and applies remaining resources
- Repeats until all resources converge or the maximum number of iterations is reached
Example use case: Creating a Kubernetes cluster and deploying helm charts in the same deployment:
component "eks_cluster" {
source = "./modules/eks"
# ... configuration
}
component "helm_releases" {
source = "./modules/helm"
inputs = {
cluster_endpoint = component.eks_cluster.endpoint # Known after EKS apply
cluster_ca = component.eks_cluster.ca_cert
}
providers = {
helm = provider.helm.this
# Helm provider needs cluster info from EKS component
}
}
Without deferred changes, this would fail because the helm provider needs values from the EKS cluster that don't exist yet. With deferred changes, Stacks:
- Creates the EKS cluster first
- Retrieves the cluster endpoint and certificate
- Configures the helm provider
- Deploys the helm releases
When to use: Complex multi-component deployments where some resources depend on runtime values from other components (cluster endpoints, generated passwords, IP addresses, etc.)
Best Practices
- Component Granularity: Create components for logical infrastructure units that share a lifecycle
- Module Compatibility:
  - Modules used with Stacks cannot include provider blocks (configure providers in the Stack configuration)
  - Test public registry modules before using them in production Stacks - some modules may have compatibility issues
  - Consider using raw resources for critical infrastructure if module compatibility is uncertain
  - Example: some terraform-aws-modules versions have had compatibility issues with Stacks (e.g., ALB and ECS modules)
- State Isolation: Each deployment has its own isolated state
- Input Variables: Use variables for values that differ across deployments; use locals for shared values
- Provider Lock Files: Always generate and commit .terraform.lock.hcl to version control
- Naming Conventions: Use descriptive names for components and deployments
- Deployment Groups: Always organize deployments into deployment groups, even if you only have one deployment. Deployment groups enable auto-approval rules, logical organization, and provide a foundation for scaling. While deployment groups are a Premium feature, organizing your configurations to use them is a best practice for all Stacks
- Testing: Test Stack configurations in dev/staging deployments before production
Troubleshooting
Circular Dependencies
Issue: Component A references Component B, and Component B references Component A
Solution: Refactor to break the circular reference or use intermediate components
Deployment Destruction
Issue: Cannot destroy deployments from HCP Terraform UI
Solution: Deployment destruction is only available via configuration. Set destroy = true in the deployment block:
deployment "old_env" {
inputs = {
# ... existing inputs
}
destroy = true # Destroys all resources in this deployment
}
Then upload the configuration. HCP Terraform will create a destroy run. You cannot destroy Stack deployments from the UI.
References
For detailed block specifications and advanced features, see:
- references/component-blocks.md - Complete component block reference
- references/deployment-blocks.md - Complete deployment block reference
- references/examples.md - Complete working examples for common scenarios