Intro

New to the cloud? Usually you start in the console - the portal where you can create, edit or delete the resources of your cloud provider. The first impulse: click, click, click. Great, the server is up and running. “Great,” thinks your colleague. Now we need a few more servers for the team. It’s your turn again: click, click, click - what was the setting again? What did I select before? Click, click, click, crap - the wrong size. Back again, click, click … You get it.

In my current project we are working with different customers. They all have their own infrastructure, but it is always built the same way. With coded infrastructure we can bring a new setup online much more easily and flexibly. And best of all, I don’t have to remember what I’ve set up in the past. Everything is scripted. No “clicky-bunty” dilemma. 😎

What is IaC?

Infrastructure as Code (IaC) is an approach in which a tool sets up your infrastructure in a sustainable way - and, combined with a deployment pipeline, also automatically. You code your infrastructure. How you do that depends on the tool you choose. Some tools have their own language, others are based on YAML or work with common programming languages like Python or TypeScript. Your code is interpreted by the tool, which creates the resources for the chosen provider. A provider is, for example, a cloud provider (AWS, Azure, Google, etc.), Kubernetes, MySQL, or Github.

The IaC workflow - there are a lot of tools

When you code your infrastructure, the structure is clearly described and can therefore also be included in a deployment process. This makes it easy to automate the deployment of your infrastructure. Further advantages of IaC are lower maintenance effort and thus lower costs, as well as less time spent on building an infrastructure (you only code it once).

In this article, I’ll show you how to use Terraform to code your infrastructure in AWS.

What is Terraform?

Terraform is a product of HashiCorp and was released in 2014. Terraform is open source and released under the Mozilla Public License v2.0 on Github. Terraform uses HCL (HashiCorp Configuration Language) - a configuration language developed by HashiCorp.
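
To give you a first taste of HCL before we continue: infrastructure is described declaratively in blocks. The snippet below is only a minimal, hypothetical sketch (the bucket name is made up); a full, working example follows later in this article.

# one resource block describes one piece of infrastructure
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # hypothetical name, only to show the syntax
}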

Installation

To use Terraform, install the Terraform CLI.

# manual installation (WSL/Linux)
# get latest terraform binaries for your system https://www.terraform.io/downloads.html
wget https://releases.hashicorp.com/terraform/0.14.4/terraform_0.14.4_linux_amd64.zip
unzip terraform_0.14.4_linux_amd64.zip
# move terraform to an existing dir included in PATH
# (or add the current dir to PATH); sudo may be needed for /usr/local/bin
sudo mv terraform /usr/local/bin/

A manual installation can be advantageous if you cannot work with the current version of Terraform. Older versions can be found at https://releases.hashicorp.com/terraform/.

💡 Besides the manual installation you can also install the Terraform CLI via Homebrew (macOS), Chocolatey (Windows) and e.g. apt-get or yum (Linux). You can find out more about this in HashiCorp’s Starter Guide.

Keywords

To better understand the individual features of Terraform, I have summarized the most important keywords here:

  • provider: A platform whose resources you can manage with Terraform, e.g. AWS, Github or Azure. Overview of all providers: https://registry.terraform.io/browse/providers

  • resource: A resource of your provider. This can be, for example, a server, a user, a database, etc. Resources are defined in .tf files.

  • variables: To use values dynamically or even multiple times, you can use variables. The values for variables are assigned in .tfvars files.

  • outputs: Attributes of a resource, which you can output or make available for further steps.

  • state: A JSON list of your infrastructure resources that is maintained by Terraform. The state is used to compare the currently existing resources with the resources to be created or changed. By default, it is stored locally in the terraform.tfstate file.

  • remote state: A Terraform state that is stored centrally, e.g. in an AWS S3 bucket or in Terraform Cloud. This is especially useful when multiple developers are working on the setup (a small sketch follows after this list).

  • modules: Terraform offers the possibility to store recurring resources in modules. HashiCorp and the community offer their own module libraries for many providers.

  • Terraform Registry: Here you can find the documentation for each provider and its usable resources in the Documentation section, e.g. for AWS.

  • Terraform CLI: The Terraform Command Line Interface. You work with Terraform from your console.
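
To make remote state and modules more tangible, here is a minimal sketch. The state bucket, lock table, bucket names and the module version are assumptions for illustration; terraform-aws-modules/s3-bucket/aws is a community module from the Terraform Registry.

# remote state: store the state centrally in an (already existing) S3 bucket
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"       # hypothetical state bucket
    key            = "website/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"          # optional: state locking
  }
}

# module: reuse a community module instead of writing every resource yourself
module "website_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 1.0"                            # assumed version constraint

  bucket = "my-module-managed-bucket"           # hypothetical bucket name
}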

With the most important keywords in hand, you can start coding. 💪

Terraform in Action with AWS

In Terraform, you go through several stages. We’ll go through each one with our code example.

The Terraform Workflow

Code Example

Below, we’ll build an AWS S3 bucket together that can be used as a website by providing an index.html. You can also find the whole code example in my Github Repo.

AWS Setup

For our website, we need a publicly accessible S3 bucket and an index.html as an S3 object. To avoid having to type the name of the S3 bucket over and over again, we’ll use the variable s3_bucket_name. You can reference it with var.s3_bucket_name. At the end we output the URL of our website using the output s3_bucket_website_url.

# main.tf - define your resources

# set provider
# the starting point to connect to AWS
provider "aws" {
  profile = "test"      # the profile you configured via AWS CLI 
  region  = "us-east-1" # the region you want to deploy to 
}


# set variables
variable "s3_bucket_name" {
  description = "the s3 bucket name"
  type        = string
}

# set resources
# s3 bucket
resource "aws_s3_bucket" "website" {
  bucket = var.s3_bucket_name
  acl    = "public-read"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Sid": "PublicReadGetObject",
          "Effect": "Allow",
          "Principal": "*",
          "Action": [
              "s3:GetObject"
          ],
          "Resource": [
              "arn:aws:s3:::${var.s3_bucket_name}/*"
          ]
      }
  ]
}
POLICY

  website {
    index_document = "index.html"
  }
}

# s3 object "index.html"
resource "aws_s3_bucket_object" "website" {
  bucket       = aws_s3_bucket.website.id
  key          = "index.html"          # how your file will be named in the S3 Bucket (we need an index.html)
  source       = "index.html"          # set the path to your "index.html" (here it lies in the same directory) 
  content_type = "text/html"           # use the respective MIME type for your object
  etag         = filemd5("index.html") # same path as in source 
}

# set output
output "s3_bucket_website_url" {
  value = aws_s3_bucket.website.website_endpoint
}
# variables.tfvars
# set the values for your variables
s3_bucket_name = "test" # add your unique bucket name

Terraform automatically interprets the resources and their dependencies from all files with the .tf extension. How you distribute your resources across files is therefore up to you. For larger setups it can make sense to create a logical separation across several files, e.g. with:

  • server.tf
  • website.tf
  • variables.tf
  • output.tf etc.

It doesn’t matter what your files are called. There are no limits to your creativity.
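
For our small example, such a split could look like this (just a sketch - the blocks are the same as in main.tf above, only distributed across files):

# variables.tf - only the variable definitions
variable "s3_bucket_name" {
  description = "the s3 bucket name"
  type        = string
}

# output.tf - only the outputs
output "s3_bucket_website_url" {
  value = aws_s3_bucket.website.website_endpoint
}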

Preparation

💡 An AWS account is required for deployment to AWS. Also, programmatic access must be enabled for your user.

Installing the AWS CLI:

wget "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -O "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version
# aws-cli/2.1.1 Python/3.7.4 Linux/4.14.133-113.105.amzn2.x86_64 botocore/2.0.0

👉 https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html

💡 There are analogous options for other providers such as Azure or Google Cloud, e.g. with the Azure CLI or with Google Cloud Shell

Set up your AWS accesses (credentials):

You can use aws configure to set up your AWS credentials. You can read more about it here: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html.

Initialization and Planning

Your first Terraform code is written. The AWS CLI is set up. So we are ready to launch. Yuppieh! Start in the folder where your .tf files are located.

Initialization:

# initialization of terraform and the chosen provider
terraform init
# output
Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 3.25.0...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 3.25"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
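
As the output suggests, it is a good idea to pin the Terraform and provider versions so an upgrade never surprises you. A sketch of how that could look (the constraints are assumptions based on the versions used in this article):

# versions.tf - pin terraform and provider versions (the file name is just a convention)
terraform {
  required_version = ">= 0.14"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.25"
    }
  }
}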

Planning:

With terraform plan you check what your code will result in. If you already have a state (local or remote), this step checks against the existing resources. Since variables are used, you include them with -var-file=variables.tfvars. The output of the plan shows you how many resources will be added, changed and destroyed.

# see a plan of what you want to create
terraform plan -var-file=variables.tfvars
# output
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.


------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.website will be created
  + resource "aws_s3_bucket" "website" { ... }      
    ...
Plan: 2 to add, 0 to change, 0 to destroy.

Let’s take a look at the output:

  • + create (green): the resource or the attribute of a resource is newly added
  • ~ update in place (yellow): the resource is updated in place
  • - delete (red): the resource or the attribute of a resource is deleted
  • -/+ replace (red): the resource is deleted and created anew

In each plan output you will also see which attributes are responsible for the changes.

Let’s Deploy!

The plan fits? If you are satisfied with the output, you can start your deployment to the cloud with terraform apply. A plan is generated again and, after your approval (yes), the apply is executed. With the flag -auto-approve the apply is executed immediately. Since we have defined an output via Terraform, the value for s3_bucket_website_url is displayed.

terraform apply -var-file=variables.tfvars
# output
...
...
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

s3_bucket_website_url = ...

Let’s Delete!

You want to take your setup offline again? With terraform destroy you can delete all defined resources:

terraform destroy -var-file=variables.tfvars

With the flag -auto-approve you can skip the approval prompt here as well.

Each step of the deployment offers further options (e.g. a no-color output or setting individual variables). You can find more about this in the documentation for plan, apply or destroy.

Helper functions

Terraform also offers other great features besides planning. I would like to show you two of them here:

Formatting your code:

terraform fmt formats your code according to the Terraform Code Styleguide

Validating your code:

terraform validate is an important feature to find and fix syntax errors in your code before the planning step. It has often happened to me that a plan or apply broke after a long runtime because I forgot a bracket or misspelled a reference. Make it easier for yourself and build the validation into your Terraform routine.

Terraform routine:

I recommend the following sequence when running Terraform:

# terraform command recommendation
terraform fmt
terraform init
terraform validate
terraform plan -var-file=variables.tfvars
terraform apply -var-file=variables.tfvars

Congratulations! You are now Terraform-approved. Let’s code some Terraform!

Advantages and Challenges

Terraform is great. But there are two sides to every coin, as they say. I would like to conclude by telling you about both.

Advantages of Terraform are:

  • easy entry and fast learning: HCL is easy to understand and Terraform’s documentation is excellent, so you can dive deep into the topic really fast.

  • many providers: Terraform offers a wide range of providers and thus a broad spectrum of resources to build. Not only the big players are covered, but also smaller providers. In addition, you can add your own providers through so-called Private Registries.

  • Multi-cloud implementable: With Terraform you can manage multiple cloud providers at the same time (see the sketch after this list).

  • Community driven & fast update cycles: Terraform is open source and is updated regularly. On Github you can follow the current developments. Many providers now actively work with HashiCorp/Terraform, so the implementation of new resources is much faster than it was 3 to 4 years ago.
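
As a small sketch of what multi-cloud can look like in a single configuration (the Azure resource group is a hypothetical example and assumes that credentials for the azurerm provider are configured):

# two providers side by side in one configuration
provider "aws" {
  region = "us-east-1"
}

provider "azurerm" {
  features {} # required block for the azurerm provider
}

resource "aws_s3_bucket" "assets" {
  bucket = "my-multicloud-assets"   # hypothetical bucket name
}

resource "azurerm_resource_group" "assets" {
  name     = "rg-multicloud-assets" # hypothetical resource group name
  location = "westeurope"
}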

Challenges with Terraform:

  • conditionals: In contrast to a common programming language, it is not that easy to map conditions in Terraform. Especially more complex conditions are sometimes hard to implement. But every release improves things in this direction. With the introduction of v0.12.x, for example, dynamic blocks and for_each were added (see the sketch after this list).

  • Multi-Cloud: As great as multi-cloud sounds, the implementation is just as complex. Terraform is provider specific. This means it is based on the identifiers of each individual provider. A switch from AWS to Azure, for example, is certainly easy in theory, but requires a lot of hard work depending on the size of the setup, since the individual resources have different names and thus - understandably - different configuration options. This is where a developer’s experience with each provider comes into play.

  • Fast Delivery: Terraform delivers new features very fast, at least for AWS. That means you have to stay constantly up to date with the latest developments. It can be challenging to always keep the current improvements in view. Nonetheless, it is a sign of a good tool that is constantly evolving. An overview of the current releases can be found here: https://github.com/hashicorp/terraform/releases and in the Changelog.
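
To illustrate the kind of workaround meant here, a sketch with a hypothetical variable enable_logging: count works as an "if", for_each as a loop (for_each on resources is available since Terraform 0.12.6).

variable "enable_logging" {
  type    = bool
  default = false
}

# "if": the bucket is only created when the variable is true
resource "aws_s3_bucket" "logs" {
  count  = var.enable_logging ? 1 : 0
  bucket = "my-logging-bucket" # hypothetical name
}

# "loop": one bucket per entry in the set
resource "aws_s3_bucket" "per_team" {
  for_each = toset(["team-a", "team-b"]) # hypothetical team names
  bucket   = "my-bucket-${each.value}"
}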

Further development and alternatives

Coding your infrastructure is fun. And it is becoming fun for more and more developers, because there is a lot going on in the area of Infrastructure as Code.

I would like to give you a preview of what else is going on. You’ll find posts about it here on this blog soon.

AWS Cloud Development Kit

At re:Invent 2018, AWS announced the Cloud Development Kit (AWS CDK). In July 2019, the first generally available version was published with support for TypeScript and Python. Since then it has been steadily expanded, and since 2020 there is also the CDK for Terraform. So it remains exciting. ✨

Pulumi

Similar to the Cloud Development Kit, Pulumi builds the infrastructure using a programming language, e.g. Python. In addition to the most common cloud providers, there are also other providers like Kubernetes or even MySQL. I am currently planning a demo for another blog post on this. So stay curious ☺️

Other Providers

AWS offers CloudFormation, its own service for scripting the infrastructure with JSON or YAML. Azure offers a provider-specific approach with Azure Resource Manager (ARM) templates and Bicep. Google Cloud offers the Cloud Deployment Manager. I can say rather little about the latter two at the moment, since I have not actively used them so far.

Conclusion

As you can see, Infrastructure as Code is a great approach to set up your projects in a sustainable way. With Terraform you have a powerful tool to plan and deploy your infrastructure setup. In the end, however, it’s your use case and your preferences that decide - maybe you find it easier to build a setup with Python or TypeScript. Whatever you choose: IaC is always a good choice to make your infrastructure setup more traceable and automatable for you and your team.

Wanna learn more about Terraform? Check out my series: https://wolkencode.de/en/series/terraform/. I regularly add new topics to it 👩‍💻


How did you like this post? Feel free to send me your feedback, like or share on Twitter.

Happy Coding and see ya next time!

Nora