How to Manage AWS CloudFormation Stack Dependencies

Automated infrastructure (Infrastructure as Code) is essential for success, not only in the cloud.


AWS provides its own service for managing resource stacks: AWS CloudFormation. What are the options for managing dependencies between stacks, how do we use them, and what are their pros and cons?

In general, we have three options for linking resources from different stacks:

Hard-Coded References

This is the simplest variant as well as the most problematic one. Let's say stack B needs a resource from stack A:

// stackA.yml

Resources:
  ServiceA:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: "service-A"
      ...

// stackB.yml

Resources:
  ServiceB:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: "service-B"
      Environment:
        Variables:
          SERVICE_A: "service-A"
      ...

Well, at least the dependency is set via an environment variable (it could be worse: the reference could be hard-coded directly in the function code), but it's still very impractical. The value of the variable must be changed either via a template change or manually, which breaks the principles of Continuous Delivery. Service B is not informed about a potential change in stack A, and there is no validation that the dependency actually exists and is correct. A system built this way is brittle and can stop working at any time.

Stack Parameters 

Setting references via stack parameters is not very different from hard-coding values, but it is a small step forward, because we can change parameter values via our continuous delivery pipeline. However, there is still no guarantee that the value is correct.

// stackA.yml

Resources:
  ServiceA:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: "service-A"
      ...

Outputs:
  ServiceA:
    Description: "Service A."
    Value: !Ref ServiceA

// stackB.yml

Parameters:
  ServiceA:
    Type: String
    Description: "Reference to the Service A"

Resources:
  ServiceB:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: "service-B"
      Environment:
        Variables:
          SERVICE_A: !Ref ServiceA
      ...

Because stack A publishes service A in its outputs, we can even set the value in an automated manner, as sketched below. But the problem of inconsistency, in case the resource changes, remains.
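For illustration, a pipeline step could read the output of stack A and pass it to stack B as a parameter. Here is a minimal boto3 sketch; the stack names ("stack-a", "stack-b") and the template file name are assumptions for the example, not values from the templates above:

// set_service_a_parameter.py

import boto3

cfn = boto3.client("cloudformation")

# Read the "ServiceA" output published by stack A (assumed stack name: "stack-a").
outputs = cfn.describe_stacks(StackName="stack-a")["Stacks"][0]["Outputs"]
service_a = next(o["OutputValue"] for o in outputs if o["OutputKey"] == "ServiceA")

# Deploy stack B with the value as a parameter (assumed stack name: "stack-b").
with open("stackB.yml") as f:
    cfn.update_stack(
        StackName="stack-b",
        TemplateBody=f.read(),
        Parameters=[{"ParameterKey": "ServiceA", "ParameterValue": service_a}],
    )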

Exports/Imports

The most robust way to deal with stack dependencies in AWS CloudFormation is to use exports and imports. Exported values that are imported elsewhere are protected from changes, and we get a handy overview of our dependencies out of the box.

// stackA.yml

Resources:
  ServiceA:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: "service-A"
      ...

Outputs:
  ServiceA:
    Description: "Service A."
    Value: !Ref ServiceA
    Export:
      Name: "ServiceA"

// stackB.yml

Resources:
  ServiceB:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: "service-B"
      Environment:
        Variables:
          SERVICE_A: !ImportValue "ServiceA"
      ...

Now, any change to the exported value, as long as it is imported somewhere, will cause an integrity error, so we can be sure that our dependencies are always correct.
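The overview of dependencies is available in the CloudFormation console and can also be queried programmatically. A small boto3 sketch, assuming nothing beyond the export defined above:

// list_dependencies.py

import boto3
from botocore.exceptions import ClientError

cfn = boto3.client("cloudformation")

# Print every export in the current account/region together with the stacks
# that import it. Note: list_exports is paginated; a real script would loop
# over NextToken.
for export in cfn.list_exports()["Exports"]:
    try:
        importers = cfn.list_imports(ExportName=export["Name"])["Imports"]
    except ClientError:
        importers = []  # the export is not imported by any stack
    print(f'{export["Name"]} -> {importers}')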

Parameterized Exports/Imports

The approach above is fine for small systems with only a few stacks. As our system grows, there are more and more stacks, and we can easily lose track of which resource belongs to which stack. A good practice here is to use the stack name as a "namespace" to group all of a stack's resources under the same prefix:

// stackA.yml

Resources:
  ServiceA:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: !Sub "${AWS::StackName}-service-A-${AWS::Region}"
      ...

Outputs:
  ServiceA:
    Description: "Service A."
    Value: !Ref ServiceA
    Export:
      Name: !Sub "${AWS::StackName}-ServiceA"

The question is how to pass the name of the exported value. We could hard-code it, but that would couple the template code with the stack name, which is undesirable: the template shouldn't have any knowledge of how stacks are deployed and named.

Another option is to pass the export names as stack parameters. This could work, but it means unnecessary effort, because the export names are part of the stack's API and therefore must not change (only the stack name is variable).

The compromise is to pass only the stack name as a parameter:

// stackB.yml

Parameters:
  StackNameA:
    Type: String
    Description: "Name of the Stack A"

Resources:
  ServiceB:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: "service-B"
      Environment:
        Variables:
          SERVICE_A:
            Fn::ImportValue: !Sub "${StackNameA}-ServiceA"
      ...

With this approach we get all the benefits of export/import integrity, while variability and deployment independence are preserved.
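Deploying stack B then requires only the name of stack A as input. A minimal boto3 sketch, again with assumed stack names ("stack-a", "stack-b") and template file name:

// deploy_stack_b.py

import boto3

cfn = boto3.client("cloudformation")

# Only the name of stack A is passed in; the export name
# ("<stack name of A>-ServiceA") stays encapsulated in the templates.
with open("stackB.yml") as f:
    cfn.create_stack(
        StackName="stack-b",
        TemplateBody=f.read(),
        Parameters=[{"ParameterKey": "StackNameA", "ParameterValue": "stack-a"}],
    )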

Happy infrastructure coding!