Cloud Computing

Discover the power of deploying and scaling containers effortlessly with Amazon ECS and Terraform

ECS Cluster with EC2 Launch Type Using Terraform

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.

It is a fully managed service, which means you don’t need to manage control planes, nodes, or add-ons. It’s integrated with both AWS and third-party tools, such as Amazon Elastic Container Registry and Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane or nodes.

Launch types

There are two models that you can use to run your containers:

Fargate launch type: This is a serverless, pay-as-you-go option. You can run containers without needing to manage your infrastructure.

EC2 launch type: Configure and deploy EC2 instances in your cluster to run your containers. The EC2 launch type is suitable for the following workloads:

  • Workloads that require consistently high CPU core and memory usage
  • Large workloads that need to be optimized for price
  • Applications that need access to persistent storage
  • Workloads where you must directly manage your infrastructure

What is Terraform?

Terraform is an open-source infrastructure-as-code tool. A great advantage of working with Terraform is the reusability of configurations, which can also be shared across projects.

Before explaining how an ECS cluster with the EC2 launch type is deployed, we need the files described below.

Create a new project directory named “techifyecs”. Inside it, first create a provider.tf file, which allows Terraform to interact with cloud providers.

provider.tf
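A minimal sketch of provider.tf could look like the following; the provider version constraint and region are assumptions, so adjust them to your environment.

```
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # assumed version constraint
    }
  }
}

provider "aws" {
  region = "us-east-1" # assumed region
}
```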

Create VPC: Create the new VPC and VPC resources such as route tables, subnets, and an internet gateway. We have to enable DNS hostnames; without this option, the EC2 instances in our cluster won’t be able to register themselves in ECS.

Create public and private subnets to improve network security.

To make our VPC accessible from the internet, it needs an internet gateway, and we have to make the public subnets reachable from the internet. In the vpc.tf file we can see the internet gateway and route table.
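A sketch of vpc.tf under these requirements might look like the following; the CIDR blocks, Availability Zones, and subnet layout are assumptions.

```
# vpc.tf -- a sketch; CIDR blocks and Availability Zones are assumptions
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true # without this, instances cannot register with ECS

  tags = { Name = "techifyecs-vpc" }
}

resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "public_b" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# Public route table with a default route to the internet gateway
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public_a" {
  subnet_id      = aws_subnet.public_a.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "public_b" {
  subnet_id      = aws_subnet.public_b.id
  route_table_id = aws_route_table.public.id
}
```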

Create Security Group: A security group controls the inbound and outbound traffic for the instance. Create a security group for the EC2 instances in the ECS cluster, along with its security group rules.
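A sketch of the instance security group and its rules, assuming the VPC resource from vpc.tf; the port range and CIDR source are assumptions (in practice you might restrict ingress to the load balancer’s security group).

```
# A sketch of the security group for the ECS container instances
resource "aws_security_group" "ecs_instances" {
  name   = "techifyecs-instance-sg"
  vpc_id = aws_vpc.main.id
}

# Allow inbound traffic on the ephemeral port range used by dynamic host port mapping
resource "aws_security_group_rule" "ecs_ingress" {
  type              = "ingress"
  from_port         = 32768
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/16"] # assumed: the VPC CIDR
  security_group_id = aws_security_group.ecs_instances.id
}

# Allow all outbound traffic so instances can pull images and reach the ECS API
resource "aws_security_group_rule" "ecs_egress" {
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.ecs_instances.id
}
```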

Create an Auto Scaling Group for the ECS Cluster with a Launch Configuration: We need to launch EC2 instances using an Auto Scaling group when the load of the ECS cluster rises above a certain metric, such as CPU or memory utilization. First, we need to create an IAM role and an instance profile in the iam.tf file for the instances to use when they are launched.

iam.tf
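A sketch of the instance role and instance profile that iam.tf could contain; the role and profile names are assumptions.

```
# iam.tf -- EC2 instance role and instance profile (names are assumptions)
data "aws_iam_policy_document" "ecs_instance_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ecs_instance_role" {
  name               = "techifyecs-instance-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_instance_assume.json
}

# AWS managed policy that lets the instance register with the ECS cluster
resource "aws_iam_role_policy_attachment" "ecs_instance_role" {
  role       = aws_iam_role.ecs_instance_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ecs_instance_profile" {
  name = "techifyecs-instance-profile"
  role = aws_iam_role.ecs_instance_role.name
}
```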

In the above file, an instance profile is also created. It is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. An EC2 instance cannot be assigned a role directly, so an instance profile is used to assign the role.

We also need to create a task execution role, a role that grants permissions to start the containers defined in a task.

Task role: a role that grants permissions to the actual application once the container is started.
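Continuing iam.tf, a sketch of the task execution role and task role; the names and the attached policy are assumptions.

```
# iam.tf (continued) -- task execution role and task role (names are assumptions)
data "aws_iam_policy_document" "ecs_tasks_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

# Grants ECS permission to pull images and write logs when starting containers
resource "aws_iam_role" "task_execution_role" {
  name               = "techifyecs-task-execution-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_tasks_assume.json
}

resource "aws_iam_role_policy_attachment" "task_execution_role" {
  role       = aws_iam_role.task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

# Grants permissions to the application running inside the containers
resource "aws_iam_role" "task_role" {
  name               = "techifyecs-task-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_tasks_assume.json
}
```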

Create a Launch Configuration: When you create a launch configuration, you must specify information about the EC2 instances to launch, and we should also specify user data so that the instances register with the ECS cluster.

An Amazon ECS-optimized AMI should also be used; for more information see https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html. Then, we need to create an Auto Scaling group that defines the minimum, maximum, and desired EC2 instance counts.
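A sketch of the launch configuration and Auto Scaling group; the AMI filter, instance type, capacity values, and use of the public subnets (so instances can pull images without a NAT gateway) are assumptions.

```
# Look up the latest Amazon ECS-optimized AMI (Amazon Linux 2, x86_64)
data "aws_ami" "ecs_optimized" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-ecs-hvm-*-x86_64-ebs"]
  }
}

resource "aws_launch_configuration" "ecs" {
  name_prefix          = "techifyecs-"
  image_id             = data.aws_ami.ecs_optimized.id
  instance_type        = "t3.medium" # assumed instance type
  iam_instance_profile = aws_iam_instance_profile.ecs_instance_profile.name
  security_groups      = [aws_security_group.ecs_instances.id]

  # Register the instance with our cluster via the ECS agent config
  # (local.cluster_name is defined in local.tf below)
  user_data = <<-EOF
              #!/bin/bash
              echo "ECS_CLUSTER=${local.cluster_name}" >> /etc/ecs/ecs.config
              EOF

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "ecs" {
  name                 = "techifyecs-asg"
  launch_configuration = aws_launch_configuration.ecs.name
  vpc_zone_identifier  = [aws_subnet.public_a.id, aws_subnet.public_b.id]
  min_size             = 1
  max_size             = 3
  desired_capacity     = 2
}
```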

Next, in the variable.tf file, all the required variables have been added.

variable.tf
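A sketch of variable.tf; the exact variables used in the project are assumptions.

```
variable "region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}

variable "instance_type" {
  description = "EC2 instance type for the ECS container instances"
  type        = string
  default     = "t3.medium"
}

variable "desired_capacity" {
  description = "Desired number of container instances in the Auto Scaling group"
  type        = number
  default     = 2
}
```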

In the local.tf file, local values such as the cluster name are defined.

local.tf
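A sketch of local.tf; the cluster name is an assumption.

```
locals {
  cluster_name = "techifyecs-cluster"
}
```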

ECS cluster:

An Amazon ECS cluster is a logical grouping of tasks or services. First, we create the ECS cluster and name it using the local value.

ecs-cluster.tf
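A minimal sketch of ecs-cluster.tf:

```
resource "aws_ecs_cluster" "main" {
  name = local.cluster_name
}
```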

Task-definition:

A task definition is required to run Docker containers in Amazon ECS. The parameters that you use depend on the launch type that you choose for the task.

In the task definition below, we have set the required compatibilities to EC2 and the network mode to bridge. For more information, see the documentation below.

Task definition parameters (docs.aws.amazon.com)

In the volume block, name and host_path (optional) specify the path on the host container instance that is presented to the container. If host_path is not set, ECS creates a nonpersistent data volume that starts empty and is deleted after the task has finished.

task-definition.tf
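A sketch of task-definition.tf using an NGINX container; the image, resource sizes, and volume path are assumptions.

```
resource "aws_ecs_task_definition" "app" {
  family                   = "techifyecs-app"
  requires_compatibilities = ["EC2"]
  network_mode             = "bridge"
  execution_role_arn       = aws_iam_role.task_execution_role.arn
  task_role_arn            = aws_iam_role.task_role.arn

  container_definitions = jsonencode([
    {
      name      = "nginx"
      image     = "nginx:latest"
      cpu       = 256
      memory    = 512
      essential = true
      portMappings = [
        {
          containerPort = 80
          hostPort      = 0 # host port 0 enables dynamic host port mapping
          protocol      = "tcp"
        }
      ]
    }
  ])

  # Named volume; host_path is optional -- without it ECS creates a
  # nonpersistent data volume that is deleted after the task finishes
  volume {
    name      = "app-data"
    host_path = "/ecs/app-data"
  }
}
```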

A task placement constraint is a rule that’s considered during task placement. For example, you can use constraints to place tasks based on Availability Zone or instance type. You can also associate attributes, which are name/value pairs, with your container instances and then use a constraint to place tasks based on attributes.
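As an illustration (not part of the original configuration), a block like the following could be added inside the aws_ecs_task_definition resource above; the Availability Zone values are assumptions.

```
  # Optional: restrict task placement to specific Availability Zones
  placement_constraints {
    type       = "memberOf"
    expression = "attribute:ecs.availability-zone in [us-east-1a, us-east-1b]"
  }
```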

ECS Service:

Create the ECS service with launch type EC2 and the ARN of the task definition to run in your service. The load balancer block takes the ARN of the load balancer target group to associate with the service, along with the container name and port to associate with the load balancer.

ecs-service.tf
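A sketch of ecs-service.tf, assuming the target group and listener defined in the load balancer section below; the desired count and container name are assumptions.

```
resource "aws_ecs_service" "app" {
  name            = "techifyecs-service"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  launch_type     = "EC2"
  desired_count   = 2

  load_balancer {
    target_group_arn = aws_lb_target_group.app.arn
    container_name   = "nginx"
    container_port   = 80
  }

  # Make sure the listener exists before ECS tries to register targets
  depends_on = [aws_lb_listener.http]
}
```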

Load balancer: An Application Load Balancer makes routing decisions at the application layer (HTTP/HTTPS), supports path-based routing, and can route requests to one or more ports on each container instance in your cluster. Application Load Balancers support dynamic host port mapping. For example, if your task’s container definition specifies port 80 for an NGINX container port, and port 0 for the host port, then the host port is dynamically chosen from the ephemeral port range of the container instance.

When the task is launched, the NGINX container is registered with the Application Load Balancer as an instance ID and port combination, and traffic is distributed to the instance ID and port corresponding to that container. This dynamic mapping allows you to have multiple tasks from a single service on the same container instance. We create the load balancer and an HTTP listener.

A target group is used to route requests to one or more registered targets.

For the load balancer, we need to create a security group with inbound and outbound rules.
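A sketch of the load balancer resources (security group, ALB, target group, and HTTP listener); the names and health check settings are assumptions.

```
resource "aws_security_group" "alb" {
  name   = "techifyecs-alb-sg"
  vpc_id = aws_vpc.main.id

  # Allow HTTP traffic from the internet
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic to the targets
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_lb" "app" {
  name               = "techifyecs-alb"
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = [aws_subnet.public_a.id, aws_subnet.public_b.id]
}

# Targets are registered as instance ID and port, which supports dynamic host port mapping
resource "aws_lb_target_group" "app" {
  name        = "techifyecs-tg"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = aws_vpc.main.id
  target_type = "instance"

  health_check {
    path = "/"
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
```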

Finally, we have created all the resources using Terraform. Run the Terraform commands:

  • terraform init: Initializes a new Terraform working directory for the project.
  • terraform plan: Previews the actions defined in the .tf files.
  • terraform apply: Provisions the infrastructure as defined in the .tf files.

Now let’s check in the AWS console… all the resources have been deployed.

Use terraform destroy to tear down everything that Terraform created.

Conclusion: With Amazon ECS we can quickly deploy, manage, and scale containers running applications, services, and batch processes based on our resource needs. Containers are defined in a task definition that you use to run an individual task or a task within a service. ECS also integrates with AWS services such as Amazon CloudWatch, Elastic Load Balancing, EC2 security groups, and IAM roles.