How to launch Web Application with AWS using Terraform and Git

Shubham Rasal [SRE]
6 min read · Jun 13, 2020

We will build the infrastructure and deploy our application according to the scenario/use-case below.

Task: Create/launch Application using Terraform

  1. Create a key pair and a security group that allows port 80.
  2. Launch an EC2 instance.
  3. In this EC2 instance, use the key and security group created in step 1.
  4. Launch one EBS volume and mount it on /var/www/html.
  5. A developer has uploaded the code into a GitHub repo; the repo also has some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and make them publicly readable.
  8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

How to do it?

Prerequisite: You must have the AWS CLI and Git installed.

1. Configure the Provider

For this step you need the AWS CLI configured with your credentials. To know more about how to do that, please check how to launch EC2.

provider "aws" {
  region  = "ap-south-1"
  profile = "shubham"
}
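
If you would rather not hard-code the region and profile, the same provider block can be parameterized with input variables. This is a sketch of my own; the variable names are illustrative and not part of the original code:

variable "aws_region" {
  default = "ap-south-1"
}
variable "aws_profile" {
  default = "shubham"
}

# Sketch: the same provider, driven by variables instead of literals
provider "aws" {
  region  = var.aws_region
  profile = var.aws_profile
}

You can then override either value at apply time with -var, e.g. terraform apply -var="aws_region=us-east-1".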

2. Create Key Pair

resource "tls_private_key" "webserver_private_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}
resource "local_file" "private_key" {
  content         = tls_private_key.webserver_private_key.private_key_pem
  filename        = "webserver_key.pem"
  file_permission = "0400"
}
resource "aws_key_pair" "webserver_key" {
  key_name   = "webserver"
  public_key = tls_private_key.webserver_private_key.public_key_openssh
}

Here we used the resource ‘tls_private_key’ to create a private key, saved locally as ‘webserver_key.pem’. Then we used the resource ‘aws_key_pair’ to register the corresponding public key with AWS as a key pair.
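As a small convenience, you could also emit a ready-made SSH command as a Terraform output. This is my own addition, not part of the article's code, and it assumes the aws_instance.webserver resource defined in step 4:

# Sketch (illustrative): print an ssh command for the generated key
output "ssh_command" {
  value = "ssh -i ${local_file.private_key.filename} ec2-user@${aws_instance.webserver.public_ip}"
}

After terraform apply, the command is printed at the end and can be copy-pasted straight into a terminal.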

3. Create a Security Group

We want to access our website over the HTTP protocol, so we need to allow that while creating the security group. We also want remote access to the instance (OS) over SSH to configure it.

resource "aws_security_group" "allow_http_ssh" {
  name        = "allow_http"
  description = "Allow http inbound traffic"
  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "allow_http_ssh"
  }
}

Ingress rules apply to traffic that enters the boundary of a network; egress rules apply to traffic that exits the instance or network. Here we configure the security group to allow SSH and HTTP access.
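If you ever need to open more ports, a dynamic block over a list of ports avoids repeating the ingress block by hand. This is a sketch of an alternative, not part of the original code, and it requires Terraform 0.12 or later:

# Sketch (illustrative): one ingress rule generated per port in the list
variable "open_ports" {
  default = [80, 22]
}

resource "aws_security_group" "allow_web_dyn" {
  name = "allow_web_dyn"

  dynamic "ingress" {
    for_each = var.open_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Adding a port then means editing one list instead of copying a whole block.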

4. Launch EC2 instance

We want to deploy our website on the EC2 instance, so we need to launch it with the required server software and other dependencies. For that, we create an instance and install httpd, PHP, and Git on it.

resource "aws_instance" "webserver" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.webserver_key.key_name
  security_groups = [aws_security_group.allow_http_ssh.name]
  tags = {
    Name = "webserver_task1"
  }
  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = self.public_ip
    port        = 22
    private_key = tls_private_key.webserver_private_key.private_key_pem
  }
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }
}

We have created an instance from an Amazon Linux AMI (ami-0447a12f28fddb066), with instance type ‘t2.micro’, the security group, and the key pair we created above. Note the connection block uses self.public_ip, since a resource cannot reference itself by name. We have also used a provisioner (provisioners execute scripts on a local or remote machine as part of resource creation or destruction). To know more about provisioners check this: Defining Provisioners. I recommend reading it.
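An alternative worth knowing: the same packages can be installed via user_data, a bootstrap script that runs at first boot and needs no SSH connection or open port 22. This is a sketch of that alternative, not the approach used in this article:

# Sketch (illustrative): bootstrap with user_data instead of remote-exec
resource "aws_instance" "webserver_userdata" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.webserver_key.key_name
  security_groups = [aws_security_group.allow_http_ssh.name]

  user_data = <<-EOF
    #!/bin/bash
    yum install -y httpd php git
    systemctl enable --now httpd
  EOF
}

The trade-off: user_data runs asynchronously after boot, so Terraform does not wait for it to finish the way it waits for remote-exec.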

5. Create EBS Volume

Why do we need an EBS volume? We want to store the code on persistent storage so that terminating the instance cannot affect it.

resource "aws_ebs_volume" "my_volume" {
  availability_zone = aws_instance.webserver.availability_zone
  size              = 1
  tags = {
    Name = "webserver-pd"
  }
}

To attach a volume to an EC2 instance, it must be in the same availability zone as the instance. Here the size is 1 GiB and the Name tag is ‘webserver-pd’.

6. Attach EBS volume to EC2 instance.

Our requirement is to copy the code onto the EBS volume; for that we need to attach the volume to the EC2 instance.

resource "aws_volume_attachment" "ebs_attachment" {
  device_name  = "/dev/xvdf"
  volume_id    = aws_ebs_volume.my_volume.id
  instance_id  = aws_instance.webserver.id
  force_detach = true
  depends_on   = [aws_ebs_volume.my_volume, aws_instance.webserver]
}

This resource depends on both the EBS volume and the instance, because we cannot attach them until both exist. While destroying, if the volume is still attached to the instance, we cannot destroy it and get a ‘volume is busy’ error; that is why we set force_detach = true.

7. Create S3 Bucket

resource "aws_s3_bucket" "task1_s3bucket" {
  bucket = "website-images-res"
  acl    = "public-read"
  tags = {
    Name        = "My bucket"
    Environment = "Dev"
  }
}

We used Terraform’s resource ‘aws_s3_bucket’ to create a bucket. To create an S3 bucket you must give it a globally unique name; here the bucket name is ‘website-images-res’.

8. Add Object into S3

Here, in my case, I want to upload images from GitHub into the S3 bucket. I used a local-exec provisioner to clone the images from GitHub locally and then uploaded them to the S3 bucket.

resource "null_resource" "images_repo" {
  provisioner "local-exec" {
    command = "git clone https://github.com/ShubhamRasal/myimages.git my_images"
  }
  provisioner "local-exec" {
    when    = destroy
    command = "rm -rf my_images"
  }
}
resource "aws_s3_bucket_object" "sun_image" {
  bucket     = aws_s3_bucket.task1_s3bucket.bucket
  key        = "sun.png"
  source     = "my_images/sun.png"
  acl        = "public-read"
  depends_on = [aws_s3_bucket.task1_s3bucket, null_resource.images_repo]
}
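
Uploading objects one by one doesn't scale. If the repo held many images, a for_each over fileset() would upload all of them in one resource block. This is a sketch of my own (it needs Terraform 0.12.6+ for for_each on resources), not part of the original code:

# Sketch (illustrative): upload every .png found in the cloned repo
resource "aws_s3_bucket_object" "all_images" {
  for_each = fileset("my_images", "*.png")

  bucket = aws_s3_bucket.task1_s3bucket.bucket
  key    = each.value
  source = "my_images/${each.value}"
  acl    = "public-read"

  depends_on = [null_resource.images_repo]
}

Each file becomes its own object in state, so adding or removing an image in the repo only touches that one object on the next apply.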

9. Create CloudFront for S3

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.task1_s3bucket.bucket_regional_domain_name
    origin_id   = aws_s3_bucket.task1_s3bucket.id

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }
  enabled         = true
  is_ipv6_enabled = true
  comment         = "Some comment"
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = aws_s3_bucket.task1_s3bucket.id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
  }
  price_class = "PriceClass_200"
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
  depends_on = [aws_s3_bucket.task1_s3bucket]
}

We created a CloudFront distribution for the S3 bucket we created earlier. To know more about the attributes, please go through this: resource aws_cloudfront_distribution.
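To check the distribution quickly once it is deployed, it helps to expose its domain name as an output. This is my own addition, not in the original code:

# Sketch (illustrative): surface the CloudFront domain for a quick browser test
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}

After apply, opening https://&lt;cloudfront_domain&gt;/sun.png in a browser confirms the image is served through CloudFront.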

10. Now let's do some Coding.

What we want to do is show images from the S3 bucket using the CloudFront URL. But how do we use the URL when it is created at run time? It changes every time you apply. What if there are many images? And I don’t want to write the website code inside a Terraform provisioner. What to do?

Feeling overwhelmed? Don’t worry, we are not in a hurry...

Let’s create a simple page using PHP language.

<html>
  <body>
    <h1>Hello World!</h1>
    <h5><b>Below image is from cloudfront</b></h5>
    <?php
      $firstline = `head -n1 path.txt`;
      $path_img  = "https://" . $firstline . "/sun.png";
      echo "<br>";
      echo "<img src='{$path_img}' width=500 height=500>";
    ?>
  </body>
</html>

What we are doing here is reading a file that will contain the CloudFront URL, building the image link in the $path_img variable, and using it as the src of the img tag. I have stored this file in the GitHub repository.

11. Time to Deploy Code

We want to download our code from the Git repository into the document root, in our case /var/www/html, and store the CloudFront URL in path.txt so images can be accessed via CloudFront.

resource "null_resource" "nullremote" {
  depends_on = [aws_volume_attachment.ebs_attachment,
                aws_cloudfront_distribution.s3_distribution]
  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = aws_instance.webserver.public_ip
    port        = 22
    private_key = tls_private_key.webserver_private_key.private_key_pem
  }
  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdf",
      "sudo mount /dev/xvdf /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/ShubhamRasal/demo.git /var/www/html/",
      "sudo su << EOF",
      "echo \"${aws_cloudfront_distribution.s3_distribution.domain_name}\" >> /var/www/html/path.txt",
      "EOF",
      "sudo systemctl restart httpd"
    ]
  }
}

We want to store our code on persistent storage, i.e. the EBS volume. To use this storage we need to format it and then mount it. Then we copied (cloned) the code from Git using the ‘git clone’ command. Finally, we created a file inside the instance that holds the CloudFront URL for the S3 content and stored it in /var/www/html.
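A slightly more robust alternative to the sudo su heredoc is to render path.txt on your machine and push it with a file provisioner. This is a sketch of my own, assuming the same connection block as above; /tmp is used as a staging path because the file provisioner cannot write as root:

# Sketch (illustrative): upload path.txt instead of echoing it over SSH
resource "null_resource" "push_cdn_url" {
  depends_on = [aws_volume_attachment.ebs_attachment,
                aws_cloudfront_distribution.s3_distribution]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = aws_instance.webserver.public_ip
    private_key = tls_private_key.webserver_private_key.private_key_pem
  }

  # Stage the file where ec2-user can write, then move it into place as root
  provisioner "file" {
    content     = aws_cloudfront_distribution.s3_distribution.domain_name
    destination = "/tmp/path.txt"
  }
  provisioner "remote-exec" {
    inline = ["sudo mv /tmp/path.txt /var/www/html/path.txt"]
  }
}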

12. Output

output "IP" {
  value = aws_instance.webserver.public_ip
}

Now time to run/apply infrastructure.

# initialise and download plugins
$ terraform init
# check for errors
$ terraform validate
# build the infrastructure
$ terraform apply -auto-approve
# destroy the infrastructure
$ terraform destroy -auto-approve

The public IP address will be printed on the terminal; copy it and paste it into your browser.

Conclusion

We have launched a website using Amazon services: EC2 + EBS + S3 + CloudFront. Above, we created a web server using EC2, attached an EBS volume, then formatted and mounted it. We also created an S3 bucket, uploaded images from the GitHub repository into it, and created a CloudFront distribution for the bucket. Finally, we cloned the Git repository into the document root, i.e. /var/www/html, and created a file with the CloudFront URL.
