🌐
Xebia
xebia.com › home › blog › anti-patterns of using layers with terraform
Anti-patterns Of Using Layers With Terraform | Xebia
January 28, 2026 - In Terraform, the typical organizational structure involves modules, which are reusable units of code that encapsulate infrastructure components. These modules can be composed to create more complex infrastructure. There isn't a standardized or official concept referred to as "Terraform layers" in the Terraform documentation.
🌐
Medium
medium.com › @andrey.i.karpov › decomposing-terraform-into-multiple-layers-part-1-76b1ff1f4214
Decomposing Terraform into multiple layers — Part 1 | by Andrey Karpov | Medium
August 3, 2024 - One Terraform state file tracks all resources. To scale the project, we can only add new resources into the same flat file structure and, as a result, track them in the same “tfstate” file. ... Multi-layered deployment is a type of configuration that contains several independent groups.
🌐
Terrateam
terrateam.io › blog › terraform-deployment-with-layered-architecture
Terraform Deployment with Layered Architecture
November 6, 2024 - Learn how to simplify Terraform deployments and manage infrastructure dependencies with a layered architecture approach.
🌐
Medium
blog.infostrux.com › terraform-layered-architecture-explained-8d611433628b
Terraform Layered Architecture Explained | by Balkaran Brar | Infostrux Engineering Blog
June 14, 2024 - Kubernetes Layer: I’m using Kubernetes layer in this example but it could be any other layer as well, like ECS, EC2 or Database (RDS), etc. Now this is where we need to consume the resources exported in the Identity and Network layers. To do this, we need to leverage Terraform data sources.
🌐
Rackspace
manage.rackspace.com › aws › docs › product-guide › iac_beta › terraform-standards.html
Terraform Standards - Fanatical Support for AWS Product Guide
Put simply, a layer is a directory that is treated as a single Terraform configuration. It is a logical grouping of related resources that should be managed together by Terraform. Layers are placed in the layers/ directory inside an Account Repository.
🌐
Theodo
cloud.theodo.com › home › blog › technology › terraform iac from scratch to scale:...
Multi-layering: Terraform IaC from scratch to scale
August 6, 2021 - As there is one state file per layer and per workspace, each team member can make changes to different layers without conflicting with coworkers' modifications to other layers. You can terraform apply two layers simultaneously without worrying about the state lock, which is fantastic.
🌐
Medium
medium.com › @dalethestirling › my-terraform-has-layers-84e295d086a0
My Terraform has layers. Every one has an opinion on how you… | by Dale Stirling | Medium
June 11, 2019 - As the projects I was involved in grew in size and complexity, the concern that the current approach would not scale to the larger platforms and, more importantly, the larger teams that would be required to deliver the infrastructure via Terraform triggered the next evolution. This shifted the segmentation from resource types to responsibility layers, with each layer containing many resource types.
Top answer
1 of 2
13

For larger systems it is common to split infrastructure across multiple separate configurations and apply each of them separately. This is a separate idea from (and complementary to) using shared modules: modules allow a number of different configurations to have their own separate "copy" of a particular set of infrastructure, while the patterns described below allow an object managed by one configuration to be passed by reference to another.

If some configurations will depend on the results of other configurations, it's necessary to store these results in some data store that can be written to by its producer and read from by its consumer. In an environment where the Terraform state is stored remotely and readable broadly, the terraform_remote_state data source is a common way to get started:

data "terraform_remote_state" "resource_group" {
  # The settings here should match the "backend" settings in the
  # configuration that manages the resource group.
  backend = "s3"
  config = {
    bucket = "mycompany-terraform-states"
    region = "us-east-1"
    key    = "azure-resource-group/terraform.tfstate"
  }
}

resource "azurerm_virtual_machine" "example" {
  resource_group_name = data.terraform_remote_state.resource_group.outputs.resource_group_name
  # ... etc ...
}

The resource_group_name attribute exported by the terraform_remote_state data source in this example assumes that a value of that name was exposed by the configuration that manages the resource group using an output.
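For illustration, the producing configuration might declare an output like this (a minimal sketch; the resource name azurerm_resource_group.example is an assumption):

# In the configuration that manages the resource group:
output "resource_group_name" {
  value = azurerm_resource_group.example.name
}

Any value the consumer needs must be explicitly published this way; attributes that are not exported as outputs are not visible through terraform_remote_state.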

This decouples the two configurations so that they have entirely separate lifecycles. You first run terraform apply in the configuration that creates the resource group, and then terraform apply in the configuration that contains the terraform_remote_state data source shown above. You can then apply that latter configuration as many times as you like without risk to the shared resource group or key vault.


While the terraform_remote_state data source is quick to get started with for any organization already using remote state (which is recommended), some organizations prefer to decouple configurations further by introducing an intermediate data store like Consul, which then allows data to be passed between configurations more explicitly.

To do this, the "producing" configuration (the one that manages your resource group) publishes the necessary information about what it created into Consul at a well-known location, using the consul_key_prefix resource:

resource "consul_key_prefix" "resource_group" {
  path_prefix = "shared/resource_group/"
  subkeys = {
    name = azurerm_resource_group.example.name
    id   = azurerm_resource_group.example.id
  }
}

resource "consul_key_prefix" "key_vault" {
  path_prefix = "shared/key_vault/"
  subkeys = {
    name = azurerm_key_vault.example.name
    id   = azurerm_key_vault.example.id
    uri  = azurerm_key_vault.example.uri
  }
}

The separate configuration(s) that use the centrally-managed resource group and key vault would then read it using the consul_keys data source:

data "consul_keys" "example" {
  key {
    name = "resource_group_name"
    path = "shared/resource_group/name"
  }
  key {
    name = "key_vault_name"
    path = "shared/key_vault/name"
  }
  key {
    name = "key_vault_uri"
    path = "shared/key_vault/uri"
  }
}

resource "azurerm_virtual_machine" "example" {
  resource_group_name = data.consul_keys.example.var.resource_group_name
  # ... etc ...
}

In return for the additional complexity of running another service to store these intermediate values, the two configurations now know nothing about each other apart from the agreed-upon naming scheme for keys within Consul. That gives flexibility if, for example, you later decide to refactor these Terraform configurations so that the key vault has its own separate configuration too. Using a generic data store like Consul also potentially makes this data available to the applications themselves, e.g. via consul-template.

Consul is just one example of a data store that happens to already be well-supported in Terraform. It's also possible to achieve similar results using any other data store that Terraform can both read and write. For example, you could even store values in TXT records in a DNS zone and use the DNS provider to read them back, as an "outside the box" solution that avoids running an additional service.
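As a sketch of that DNS idea, the hashicorp/dns provider's dns_txt_record_set data source can read such records (the hostname below is hypothetical, and the TXT record would need to be published by some other process):

data "dns_txt_record_set" "resource_group" {
  host = "resource-group.terraform.example.com"
}

# dns_txt_record_set exports `record` (the first TXT value)
# and `records` (all values), e.g.:
#   data.dns_txt_record_set.resource_group.record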


As usual, there is a tradeoff to be made here between simplicity (with "everything in one configuration" being the simplest possible) and flexibility (with a separate configuration store), so you'll need to evaluate which of these approaches is the best fit for your situation.

As some additional context: I've documented a pattern I used successfully for a moderate-complexity system. In that case we used a mixture of Consul and DNS to create an "environment" abstraction that allowed us to deploy the same applications separately for a staging environment, production, etc. The exact technologies used are less important than the pattern, though. That approach won't apply exactly to all other situations, but hopefully there are some ideas in there to help others think about how to best make use of Terraform in their environment.

2 of 2
1

You can destroy specific resources using terraform destroy -target=RESOURCE_ADDRESS, for example terraform destroy -target=aws_instance.example.

Different parts of a large solution can be split up into modules; these modules do not even have to be part of the same codebase and can be referenced remotely. Depending on your solution, you may want to break your deployment into modules and reference them from a "master" configuration whose single state file contains everything.
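A remote module reference of that kind might look like this (repository URL, path, and tag are hypothetical):

module "network" {
  # Modules can live in a separate repository and be pinned to a tag.
  source = "git::https://github.com/example-org/terraform-modules.git//network?ref=v1.2.0"
}

Pinning to a tag (the ?ref= suffix) keeps the "master" configuration reproducible even as the module repository evolves.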

🌐
GitHub
github.com › enter-at › terraform-aws-lambda-layer
GitHub - enter-at/terraform-aws-lambda-layer: Terraform module designed to facilitate the creation of AWS Lambda layers
Terraform module designed to facilitate the creation of AWS Lambda layers - enter-at/terraform-aws-lambda-layer
Starred by 13 users
Forked by 14 users
Languages   HCL 56.0% | Shell 35.2% | Makefile 8.8%
🌐
Terraform
registry.terraform.io › modules › terraform-aws-modules › lambda › aws › latest › examples › complete
terraform-aws-modules/lambda/aws | complete Example | Terraform Registry
module "lambda_example_complete" { source = "terraform-aws-modules/lambda/aws//examples/complete" version = "8.7.0" } Configuration in this directory creates AWS Lambda Function, Layers, Alias, and so on with the large variety of supported features showing this module in action.
🌐
Medium
medium.com › @david.alvares.62 › the-layering-method-with-terraform-d06e1e851a34
The layering method with Terraform | by Pro Coder | Medium
February 17, 2022 - The layering method with Terraform After a few diverse and varied infra missions using this great tool that is Terraform, I felt there was a need to share what I have learned. In this article, I will …
🌐
Medium
medium.com › @beardr3d › infrastructure-with-terraform-layers-a-practical-approach-0ad16d0add6a
Infrastructure with Terraform Layers: A Practical Approach
December 2, 2025 - Layers never modify each other's resources. A layer cannot change or override resources managed in a different layer. Structuring your AWS infrastructure with Terraform layers creates a stable, predictable, and scalable environment. Each layer has a clear purpose, limited blast radius, and explicit ownership.
🌐
Spacemacs
develop.spacemacs.org › layers › +tools › terraform › README.html
Terraform layer
June 25, 2023 - This layer provides basic support for Terraform .tf files.
🌐
Reddit
reddit.com › r/terraform › referencing resources in previous "layers" in terraform
r/Terraform on Reddit: Referencing resources in previous "layers" in Terraform
June 7, 2021 -

In my current project I'm following a layering pattern for deploying infrastructure to Azure, like the one explained in this article: https://www.padok.fr/en/blog/terraform-iac-multi-layering

So I currently have 4 layers:

  • A bootstrap layer, creating the resource group and state storage (intended to be fire-and-forget)

  • The network layer, setting up the VNET, subnets and firewall/network security group rules

  • Data layer, setting up storage accounts & SQL databases

  • The apps layer, setting up the app services etc. There could be multiple layers here on the same "level", but for different services.

Each of the layers has its own state. Now, this works great for us for the most part, and gives us a nice and tidy organization of our resources. BUT, in the various layers we often need to refer to resources that have been created earlier.

A good example is the subnets, in which we put the SQL databases and app services as well as storage accounts. They are created in the network layer, with a certain naming convention. In e.g. the apps layer, I am using data sources to resolve references, but then I need to know the name of the subnet I want to use. This of course works, but I have to duplicate the subnet name. If I change e.g. the subnet's name, I would have to remember to update ALL subsequent layers.

What is the best way to refer to resources created in earlier stages of deployments? Since all layers are distinct configurations, I can't directly refer to them. Here's the options I've thought of so far:

  • Simply do as I do now - just refer to the subnet (which also requires me to pass in the VNET name as well). For me, this smells - but I'm open to be convinced otherwise.

  • Define all these names for resources that will be referred to in multiple layers as variables that are passed in. I fear this will blow up the number of variables I need to pass in, but somehow feels better than the first option as I then can define the subnet names and VNET name in a common .tfvars file.

  • I have considered simply creating a module in e.g. the network layer that exposes the subnet IDs as outputs. That way at least I can have one place where I define the names. I don't know if this considered an anti-pattern or not, and if so for what reason.

Does anyone have any experience with this design pattern for Terraform, and how to best resolve resources in subsequent layers?

Top answer
1 of 5
9
The traditional way to do this is with the Remote State Data Source. Many (including HashiCorp) don't consider it a best practice, and the documentation has suggestions for alternate approaches: https://www.terraform.io/docs/language/state/remote-state-data.html Personally, I tend to use data sources to look up IDs of resources created in different layers.
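That data-source lookup can be sketched for the subnet case in the question like this (the subnet, VNET, and resource group names are hypothetical and must match what the network layer created):

data "azurerm_subnet" "app" {
  name                 = "app-subnet"
  virtual_network_name = "core-vnet"
  resource_group_name  = "network-rg"
}

# The subnet ID is then available as data.azurerm_subnet.app.id

Note this still establishes a naming contract between layers; the lookup only removes the need to copy the subnet ID itself.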
2 of 5
3
I've gone back and forth on this a lot, building programs on both layers and monoliths. I am only responsible for the bootstrap and foundation, but like you, there are services within that construct and heavy dependencies between them. For instance, our firewalls are NVAs, so for all intents and purposes just VMs that get layered onto the network piece. Part of my design is enforcing that each subnet created in any VNET gets pointed up to its respective firewall next-hop with UDRs. So for each subnet created, there is back-referencing needed to override VNETs from other peers where I require a secure "zone." It's pretty complicated, and it requires remote state to function.

The moral of my story is that remote state can be considered acceptable if its use stays within the responsible span of control. Our service or application deployments cannot create or modify network features, though their requirements are fed into that delegate team. It does have the downside of establishing a contract between network resource naming and the consumers, but as you know, you can't change the names of things without pretty major penalties anyway.
🌐
Garbage Value
garbagevalue.com › blog › terraforms-layered-architecture
Building Scalable Infrastructure with Terraform’s Layered Architecture - Garbage Value
December 1, 2024 - Layered architecture in Terraform refers to structuring infrastructure code in layers for more modularity and reduction in complexity. Under this structure, each layer of the layers is responsible for different parts of the infrastructure.
🌐
Reddit
reddit.com › r/devops › designing infrastructure layers
r/devops on Reddit: Designing infrastructure layers
February 14, 2024 -

I am trying to make sense of the mechanics and best practices around splitting Terraform code into independent root modules that build on top of each other.

An example of this is Google Cloud's Enterprise foundations blueprint which consists of multiple layers of terraform root modules:

  • 0 bootstrap

  • 1 org

  • 2 environments

  • 3 networks-dual-svpc / networks-hub-and-spoke

  • 4 projects

  • 5 app-infra

I would be interested to hear how much layer separation of this kind is happening in real world projects and how the Terraform root modules are connected (e.g. using terraform_remote_state or something else).

One particular thing I am wondering about is how to set up the Terraform deployment itself. I can see how the automated deployment of a Terraform root module requires infrastructure that already has to be set up. Things like:

  • A Terraform backend to persist the state

  • A service account to access the infrastructure

  • A deployment pipeline that runs terraform apply and has access to the service account and the Terraform backend

This cannot be managed by the Terraform root module that is supposed to be deployed by it, right? It either has to be set up by a lower layer or by hand. How do you deal with this?

Thanks.

Top answer
1 of 3
3
For real-world projects, layer separation like Google Cloud's blueprint is common. It helps manage complexity by breaking down the infrastructure into manageable parts. Root modules often connect using terraform_remote_state to access outputs from other modules, maintaining modularity and reusability.

Setting up the Terraform deployment itself requires some initial manual setup or a bootstrap process. Typically, you'd manually set up the backend and service account first; this bootstrap layer is the foundation for automating the rest. For continuous deployment, tools like Jenkins, GitHub Actions, or GitLab CI can automate terraform apply using the pre-configured service account and backend. It's a mix of initial manual setup followed by automation.
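Once that bootstrap layer exists, every later layer simply points at the pre-created backend. A minimal sketch, assuming an S3 backend with hypothetical bucket and lock-table names:

terraform {
  backend "s3" {
    bucket         = "mycompany-terraform-states"
    key            = "networks/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
  }
}

The bucket and DynamoDB table referenced here are exactly the pieces that must be created by hand (or by a lower layer) before the first terraform init of any higher layer.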
2 of 3
3
I do something similar for our scale-up sized company. Every AWS account hosts its own Terraform state (S3 + Dymano for locks), deployed with a CloudFormation template (infra as code is still infra as code) Layers communicate by either publishing resource names to know AWS SSM Parameter store paths, or via deterministic naming (e.g. the Terraform state bucket is always "corp-name-terraform-state-aws-account-id" and the deployment IAM role will be named "ci-role" in each account) We use the layers: Baselines and standard fixtures: Stuff like standard S3 buckets for log storage and backups, audit config, IAM roles, anything you want in every account (regardless of whether it will be used in that account) Network / VPC App databases and storage Kubernetes cluster: This just the base cluster plus any standard controllers, roles, monitoring, etc. (it adds up) Application resources: Anything app dependencies that the infra layer provides App monitoring: trace collectors, log collectors (we have a lot of custom app-specific rules), etc.