Or how to build a simple architecture in Terraform
If you’ve been following my journey, this project may seem a bit familiar. I previously completed a project using AWS’ CloudFormation and YAML to create a 3-tier example architecture. This time, we’re cutting it down to two tiers, and it’s going to be done in Terraform.
Simply put, Terraform is an Infrastructure as Code (IaC) tool that lets you create whole architectures regardless of which provider you are using (AWS, GCP, Azure, etc.). Similar to CloudFormation, where you create a “template” to launch, Terraform uses a central document that declares the data sources and resources to create.
We have been tasked with building a two-tier architecture with the following specs:
- VPC with CIDR 10.0.0.0/16
- 2 public subnets with CIDRs 10.0.1.0/24 and 10.0.2.0/24. Each in a different AZ for high availability.
- 2 private subnets with CIDRs 10.0.3.0/24 and 10.0.4.0/24. Each in a different AZ for high availability.
- An RDS MySQL instance in one of the private subnets.
- A load balancer that will direct traffic to the public subnets.
- An EC2 t2.micro instance in each public subnet.
To follow along, you’ll need:
- An AWS account with admin privileges
- Access to the Terraform Documentation
- IDE/terminal of your choice. (I use a combination of Visual Studio Code and Cloud9)
- Terraform installed on your IDE or local workstation.
For this project, we’re going to be a little messy and create a “monolithic” Terraform (.tf) file. This just means that all our resources, data sources, etc. will be in one file. Terraform does allow for a more modular design by letting the engineer split elements out into different files, but we’re just going to create a beast for this project.
I recommend creating a new directory for your Terraform project, as there will be a few files/folders created when we run it. But for now, a simple 1,2,3 will get us where we need to be:
The main.tf file is what we’ll be editing.
This is where we tell Terraform which provider we will be using. Note that you could have multiple providers (in fact, that’s one of the beauties of Terraform), but for now we’re just going to use AWS.
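A minimal version of that block might look like the following sketch. The region is my assumption; use whichever one you prefer.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

# Region is an assumption; pick whichever region you prefer.
provider "aws" {
  region = "us-east-1"
}
```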
Next up, we’re going to create our VPC. This will be our environment that contains all of our components. Immediately following, we’ll go ahead and create our subnets:
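Here is a sketch of what that can look like. The resource names and availability zones are my assumptions; the spec only fixes the CIDR blocks.

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Public subnets: one per AZ, with public IPs assigned on launch.
resource "aws_subnet" "public1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "public2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true
}

# Private subnets: no public IPs on launch.
resource "aws_subnet" "private1" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private2" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.4.0/24"
  availability_zone = "us-east-1b"
}
```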
The subnets are easily switched between public and private via the map_public_ip_on_launch argument. The vpc_id argument ensures the subnets are created inside our VPC. The code above also shows how we can manually assign each subnet to a different availability zone for reliability.
With this, we’ve essentially got the “skeleton” of our little project, so now let’s add some organs!
The project, modelling a 2-tier architecture, needs an RDS MySQL instance in the private subnets for our “database tier”, and a couple of EC2 instances in the public subnets to act as our “web tier”. The following code also shows a few of the extras that needed to be created: a key pair for the EC2 instances, their security group, and the subnet group for the RDS instance.
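A sketch of the database tier, assuming the private subnet names from earlier. The storage size, credentials, and identifiers here are placeholders of my own, not values from the original project.

```hcl
# RDS requires a subnet group; this one spans both private subnets.
resource "aws_db_subnet_group" "db" {
  name       = "db-subnet-group"
  subnet_ids = [aws_subnet.private1.id, aws_subnet.private2.id]
}

resource "aws_db_instance" "db" {
  allocated_storage    = 10
  engine               = "mysql"
  engine_version       = "8.0"
  instance_class       = "db.t3.micro"
  db_name              = "mydb"
  username             = "admin"
  password             = "changeme123" # placeholder; use a variable or secrets manager
  db_subnet_group_name = aws_db_subnet_group.db.name
  multi_az             = false        # off, since we don't need it for this project
  skip_final_snapshot  = true         # lets terraform destroy complete without a snapshot
}
```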
The security group we’ll cover in the next section, so for now let’s go over the RDS. As with 99% of this project, we can look to the Terraform Registry for the documentation on all of these elements. There is example code, along with a listing of all the arguments (required or optional), as well as the attributes that can be referenced in outputs, etc. Plug in our values, and we’re off! I will note that RDS can take quite a while to create/destroy, so I chose a much smaller instance size than the example they had provided, and turned off multi-AZ since I didn’t need it for this project.
Many of the elements are really self-explanatory, which made creating the resource (i.e. modifying the example from the docs) fairly quick and easy.
The EC2 instances were just like creating them in the console, the AWS CLI, or pretty much any other way of instantiating an EC2 instance. We just need to make sure that vpc_security_group_ids is set correctly, otherwise it’s a *lot* of troubleshooting. I throw my bootstrap script inline as user_data (though bigger bootstrap scripts can be passed in as a file or a data source), and there we have our instances!
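A sketch of the web tier. The subnet and security group names (aws_subnet.public1, aws_security_group.web) are my naming assumptions, and the bootstrap just installs Apache with a test page.

```hcl
# Look up a recent Amazon Linux 2 AMI instead of hard-coding an ID.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Key pair for SSH access (the public key path is an assumption).
resource "aws_key_pair" "web" {
  key_name   = "web-key"
  public_key = file("~/.ssh/id_rsa.pub")
}

resource "aws_instance" "web1" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public1.id
  vpc_security_group_ids = [aws_security_group.web.id]
  key_name               = aws_key_pair.web.key_name

  # Inline bootstrap: install and start a simple web server.
  user_data = <<-EOF
    #!/bin/bash
    yum install -y httpd
    systemctl enable --now httpd
    echo "Hello from web1" > /var/www/html/index.html
  EOF
}

# Second instance: identical apart from the subnet and test page.
resource "aws_instance" "web2" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public2.id
  vpc_security_group_ids = [aws_security_group.web.id]
  key_name               = aws_key_pair.web.key_name

  user_data = <<-EOF
    #!/bin/bash
    yum install -y httpd
    systemctl enable --now httpd
    echo "Hello from web2" > /var/www/html/index.html
  EOF
}
```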
If the instances were our organs, then the networking interfaces would be like the mouth or eyes… gateways into our environment.
The first part that a user or client actually interacts with is the Internet Gateway. This is what allows traffic into our VPC. We will also set up our route tables here too:
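Something along these lines, again assuming the VPC and subnet names from earlier:

```hcl
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# Route table sending all outbound traffic through the Internet Gateway...
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

# ...associated with both public subnets.
resource "aws_route_table_association" "public1" {
  subnet_id      = aws_subnet.public1.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "public2" {
  subnet_id      = aws_subnet.public2.id
  route_table_id = aws_route_table.public.id
}
```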
The Load Balancer
Then we move inside to the device that divvies up traffic to our web tier. This is what actually provides the DNS name that we use to access our webserver, so it seems like it would be the first part, but since that DNS name resolves to addresses inside our VPC, traffic has to go through the Internet Gateway first.
Pretty straightforward: we build our ALB to point to our public subnets, make sure it has the right security group, set internal to false so that it is an “internet-facing” load balancer, and we’re set! Oh wait, not quite… this took me a bit to figure out, but having a listener (and ESPECIALLY its target_group_arn) is absolutely key to making this work.
So just to quickly recap:
- create an ALB that needs a listener
- create a listener that needs a target group
- create a target group that needs targets
- attach targets to group
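The recap above, sketched out in HCL. The names are my assumptions; notice how each resource references the one before it in the chain.

```hcl
resource "aws_lb" "web" {
  name               = "web-alb"
  internal           = false # internet-facing
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = [aws_subnet.public1.id, aws_subnet.public2.id]
}

resource "aws_lb_target_group" "web" {
  name     = "web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

# The listener's default action is what wires the ALB to the target group.
resource "aws_lb_listener" "web" {
  load_balancer_arn = aws_lb.web.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}

# Finally, attach each EC2 instance to the target group.
resource "aws_lb_target_group_attachment" "web1" {
  target_group_arn = aws_lb_target_group.web.arn
  target_id        = aws_instance.web1.id
  port             = 80
}

resource "aws_lb_target_group_attachment" "web2" {
  target_group_arn = aws_lb_target_group.web.arn
  target_id        = aws_instance.web2.id
  port             = 80
}
```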
Still with me? Good!
I will say that the documentation for all these parts is pretty good at explaining what each part *is*, but generally doesn’t help much with knowing *which* pieces go together, or which pieces you *need* to make it all work. These projects have definitely taught me that part of it.
This last section, before we try to run this, will cover the three security groups we need in order to let the correct traffic flow where it needs to go: one each for the VPC, the ALB, and the EC2 instances.
This is honestly what gave me my greatest struggle on the whole project. Ultimately, I had forgotten the cidr_blocks on my VPC security group, and then needed to create security groups for the EC2 instances and the load balancer. Allowing traffic from the ALB to the web tier was done by referencing the ALB’s security group in the web tier’s ingress rule.
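A sketch of the ALB and web-tier security groups, using the names assumed in the earlier snippets. The key trick from the paragraph above is in the second group: its ingress rule references the ALB’s security group instead of a CIDR block.

```hcl
# ALB security group: allow HTTP in from anywhere.
resource "aws_security_group" "alb" {
  name   = "alb-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Web-tier security group: only accept HTTP from the ALB's security group.
resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```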
With this last piece of the puzzle, we are now ready to walk through the Terraform commands to build this creation!
1. terraform init
terraform init is the first command we’re going to run in our terminal. As you may guess, this initializes the working directory, downloading the provider plugins and dependencies that our project will use.
2. terraform fmt/terraform validate
These two commands are optional, but can come in handy.
fmt merely formats your .tf files so they have clean, consistent spacing.
validate runs through your files and checks the syntax. It won’t be able to detect every issue, but it can definitely save you some time by finding those little annoying errors! fmt will just print the names of any files it changed (add the -diff flag to see exactly what was changed), while validate reports whether the configuration is valid.
3. terraform plan
This next command will go through your code, and provide an output that shows the actions that Terraform will perform upon “apply”.
4. terraform apply
All right folks, here’s the big red button! This command tells Terraform “go”, and after a secondary “yes” to confirm, Terraform will start creating all of the resources.
Meanwhile, grab a cup of coffee, or a snack. For this project, even with a small RDS size, it can take 5–6 minutes.
When the build is complete, we should see an “Apply complete!” message summarizing the resources that were added.
A cool command we can run is terraform state list, which will show us all the different elements that were created.
We can manually confirm that everything has been created by checking out the different spots in the AWS console, but our final check is to see if we can reach our webserver.
5. terraform destroy
Epic success! Now let’s clean up. This single command will shut everything down and remove all of the assets/resources that we’ve created.