Wednesday, January 6, 2021

Terraform: Backend as S3

Instead of keeping the Terraform state file locally, we can store it in a remote location. In this blog we are going to store the state file in an AWS S3 bucket.

Let's create a main.tf file:
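
The post's original snippet isn't reproduced here, so the following is a minimal sketch of what main.tf might look like, assuming an us-east-1 provider and a placeholder EC2 instance (the AMI ID is hypothetical):

provider "aws" {
  region = "us-east-1" # assumed region
}

# A simple resource so that terraform apply produces some state to store.
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # hypothetical AMI ID
  instance_type = "t2.micro"
}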
Now let's run terraform init --> terraform plan --> terraform apply.

If you look closely at your directory you will see that terraform.tfstate still exists, and upon opening it you will still see data inside it. To move this state file to a remote location, i.e. in our case S3, we first need to create a bucket in S3.
Eg: Bucket Name: abhaybuckets
Now let's add the required permissions to the bucket:
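
The permission details aren't reproduced in this post, so here is a sketch of one way to set the bucket up, assuming you manage it with Terraform (creating it from the S3 console, as above, works equally well). Versioning is enabled because it is recommended for state files:

resource "aws_s3_bucket" "state" {
  bucket = "abhaybuckets"
}

# Keep old versions of the state file so a bad apply can be rolled back.
resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# The identity running Terraform needs, at minimum:
#   s3:ListBucket on arn:aws:s3:::abhaybuckets
#   s3:GetObject and s3:PutObject on arn:aws:s3:::abhaybuckets/terraform.tfstate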

Now that we have created the bucket, let's add the backend configuration to store the state file in S3 (add the configuration to the same main.tf file):
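
A sketch of the backend block (the bucket name comes from the example above; the key and region are assumptions):

terraform {
  backend "s3" {
    bucket = "abhaybuckets"
    key    = "terraform.tfstate" # path of the state object inside the bucket
    region = "us-east-1"         # backend blocks cannot use variables, so values must be literal
  }
}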
We will need to run terraform init again to pick up the backend change. While executing this command, Terraform will ask whether you want to copy the existing state to S3. Upon agreeing, it will move the existing state file to S3, and the data inside the local terraform.tfstate file will be erased.

Please note: you can store your credentials either in a profile or in environment variables. If you store your credentials inside the provider block, you may get the following error:

Error: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
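
For example, instead of hard-coding keys, you can point the provider at a named profile (or export AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and omit the profile entirely); the profile name below is an assumption:

provider "aws" {
  region  = "us-east-1"
  profile = "default" # reads credentials from ~/.aws/credentials
}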

Friday, January 1, 2021

Terraform: Working with Modules

Terraform modules are the best way to reuse your Terraform resources.

Create a following folder structure:

  • terraform-app
  • terraform-app/modules
  • terraform-app/dev
  • terraform-app/prod
So terraform-app/modules will hold the template-style code, which you can reuse in both your dev and prod environments.

In this example we will create a VPC & Subnet template which can be reused in dev & prod.

Let's create a network.tf file inside terraform-app/modules, which will hold the VPC & Subnet templates. We will use an output to print the VPC ID and associate the VPC with the Subnet.
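
The file contents aren't reproduced in this post; a minimal sketch, assuming a vpc_cidr variable (declared in the next step) and a subnet carved out of the VPC's range:

# terraform-app/modules/network.tf

resource "aws_vpc" "this" {
  cidr_block = var.vpc_cidr
}

resource "aws_subnet" "this" {
  vpc_id     = aws_vpc.this.id                 # associate the subnet with the VPC
  cidr_block = cidrsubnet(var.vpc_cidr, 8, 0)  # first /24 of the VPC range
}

# Expose the VPC ID so callers of the module can print or reference it.
output "vpc_id" {
  value = aws_vpc.this.id
}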

Now let's create the variables.tfvars file:
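
A sketch of the variable declaration, assuming a single vpc_cidr variable whose default is the 10.0.0.0/16 CIDR mentioned below. (Note: variable blocks with defaults normally live in a .tf file; .tfvars files only assign values. The declaration below works in any .tf file inside the module.)

# terraform-app/modules/variables.tf (the post names this file variables.tfvars)

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  default     = "10.0.0.0/16"
}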

Let's switch directories and create a main.tf file inside terraform-app/dev which makes use of the module we created above.
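
A sketch of terraform-app/dev/main.tf, overriding the module's default CIDR (the region and module name are assumptions):

# terraform-app/dev/main.tf

provider "aws" {
  region = "us-east-1" # assumed region
}

module "network" {
  source   = "../modules"
  vpc_cidr = "192.168.0.0/16" # override the default for dev
}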

Let's also create another main.tf file inside terraform-app/prod:
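
And a sketch of terraform-app/prod/main.tf, which omits vpc_cidr so the module falls back to its default:

# terraform-app/prod/main.tf

provider "aws" {
  region = "us-east-1" # assumed region
}

module "network" {
  source = "../modules" # vpc_cidr not set, so the 10.0.0.0/16 default applies
}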

So the main.tf file under dev will create a VPC with CIDR block 192.168.0.0/16, and the main.tf file under prod will create a VPC with the default CIDR block 10.0.0.0/16 defined in variables.tfvars.

Saturday, November 14, 2020

AWS ECS - Task Definition Example

{
   "containerDefinitions": [
      {
         "command": [
            "/bin/sh -c \"echo '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p> </div></body></html>' >  /usr/local/apache2/htdocs/index.html && httpd-foreground\""
         ],
         "entryPoint": [
            "sh",
            "-c"
         ],
         "essential": true,
         "image": "httpd:2.4",
         "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
               "awslogs-group": "/ecs/fargate-task-definition",
               "awslogs-region": "us-east-1",
               "awslogs-stream-prefix": "ecs"
            }
         },
         "name": "sample-fargate-app",
         "portMappings": [
            {
               "containerPort": 80,
               "hostPort": 80,
               "protocol": "tcp"
            }
         ]
      }
   ],
   "cpu": "256",
   "executionRoleArn": "arn:aws:iam::012345678910:role/ecsTaskExecutionRole",
   "family": "fargate-task-definition",
   "memory": "512",
   "networkMode": "awsvpc",
   "requiresCompatibilities": [
      "FARGATE"
   ]
}

Thursday, November 5, 2020

Amazon Aurora Connection Management:

Up to 15 Aurora Replicas can serve read-only query traffic.

For each task, you can connect to the appropriate endpoint:


Types of endpoints:

1. Cluster Endpoint (or writer endpoint):

The cluster endpoint (or writer endpoint) for an Aurora DB cluster connects to the current primary DB instance for that DB cluster. This endpoint is the only one that can perform write operations such as DDL statements. Because of this, the cluster endpoint is the one that you connect to when you first set up a cluster or when your cluster only contains a single DB instance.

Each Aurora DB cluster has one cluster endpoint and one primary DB instance.

You use the cluster endpoint for all write operations on the DB cluster, including inserts, updates, deletes, and DDL changes. You can also use the cluster endpoint for read operations, such as queries.

The cluster endpoint provides failover support for read/write connections to the DB cluster. If the current primary DB instance of a DB cluster fails, Aurora automatically fails over to a new primary DB instance. During a failover, the DB cluster continues to serve connection requests to the cluster endpoint from the new primary DB instance, with minimal interruption of service.

The following example illustrates a cluster endpoint for an Aurora MySQL DB cluster.

mydbcluster.cluster-123456789012.us-east-1.rds.amazonaws.com:3306

2. Reader Endpoint:

The reader endpoint for an Aurora DB cluster provides load-balancing support for read-only connections to the DB cluster. Use the reader endpoint for read operations, such as queries. By processing those statements on the read-only Aurora Replicas, this endpoint reduces the overhead on the primary instance. It also helps the cluster to scale the capacity to handle simultaneous SELECT queries, proportional to the number of Aurora Replicas in the cluster. Each Aurora DB cluster has one reader endpoint.

If the cluster contains one or more Aurora Replicas, the reader endpoint load-balances each connection request among the Aurora Replicas. In that case, you can only perform read-only statements such as SELECT in that session. If the cluster only contains a primary instance and no Aurora Replicas, the reader endpoint connects to the primary instance. In that case, you can perform write operations through the endpoint.

The following example illustrates a reader endpoint for an Aurora MySQL DB cluster.

mydbcluster.cluster-ro-123456789012.us-east-1.rds.amazonaws.com:3306
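
If you manage the cluster with Terraform (in keeping with the other posts here), both endpoints are exposed as attributes of the cluster resource; a sketch, assuming a hypothetical aws_rds_cluster named "example":

# Writer and reader endpoints of an Aurora cluster resource (hypothetical name).
output "writer_endpoint" {
  value = aws_rds_cluster.example.endpoint # use for INSERT/UPDATE/DELETE and DDL
}

output "reader_endpoint" {
  value = aws_rds_cluster.example.reader_endpoint # use for SELECT traffic
}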

3. Custom Endpoint:

A custom endpoint for an Aurora cluster represents a set of DB instances that you choose. When you connect to the endpoint, Aurora performs load balancing and chooses one of the instances in the group to handle the connection. You define which instances this endpoint refers to, and you decide what purpose the endpoint serves.

An Aurora DB cluster has no custom endpoints until you create one. You can create up to five custom endpoints for each provisioned Aurora cluster. You can't use custom endpoints for Aurora Serverless clusters.

The custom endpoint provides load-balanced database connections based on criteria other than the read-only or read/write capability of the DB instances. For example, you might define a custom endpoint to connect to instances that use a particular AWS instance class or a particular DB parameter group. Then you might tell particular groups of users about this custom endpoint. For example, you might direct internal users to low-capacity instances for report generation or ad hoc (one-time) querying, and direct production traffic to high-capacity instances.

Because the connection can go to any DB instance that is associated with the custom endpoint, we recommend that you make sure that all the DB instances within that group share some similar characteristic. Doing so ensures that the performance, memory capacity, and so on, are consistent for everyone who connects to that endpoint.

This feature is intended for advanced users with specialized kinds of workloads where it isn't practical to keep all the Aurora Replicas in the cluster identical. With custom endpoints, you can predict the capacity of the DB instance used for each connection. When you use custom endpoints, you typically don't use the reader endpoint for that cluster.

The following example illustrates a custom endpoint for a DB instance in an Aurora MySQL DB cluster.

myendpoint.cluster-custom-123456789012.us-east-1.rds.amazonaws.com:3306
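
In Terraform, a custom endpoint can be sketched with the aws_rds_cluster_endpoint resource; the cluster and instance identifiers below are placeholders:

resource "aws_rds_cluster_endpoint" "reporting" {
  cluster_identifier          = "mydbcluster" # placeholder cluster identifier
  cluster_endpoint_identifier = "myendpoint"
  custom_endpoint_type        = "READER"      # or "ANY"

  # Only these instances receive connections through this endpoint.
  static_members = [
    "mydbinstance1", # placeholder instance identifiers
    "mydbinstance2",
  ]
}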

4. Instance Endpoint:

An instance endpoint connects to a specific DB instance within an Aurora cluster. Each DB instance in a DB cluster has its own unique instance endpoint. So there is one instance endpoint for the current primary DB instance of the DB cluster, and there is one instance endpoint for each of the Aurora Replicas in the DB cluster.

The instance endpoint provides direct control over connections to the DB cluster, for scenarios where using the cluster endpoint or reader endpoint might not be appropriate. For example, your client application might require more fine-grained load balancing based on workload type. In this case, you can configure multiple clients to connect to different Aurora Replicas in a DB cluster to distribute read workloads. For an example that uses instance endpoints to improve connection speed after a failover for Aurora PostgreSQL, see Fast failover with Amazon Aurora PostgreSQL. For an example that uses instance endpoints to improve connection speed after a failover for Aurora MySQL, see MariaDB Connector/J failover support – case Amazon Aurora.

The following example illustrates an instance endpoint for a DB instance in an Aurora MySQL DB cluster.

mydbinstance.123456789012.us-east-1.rds.amazonaws.com:3306


Sunday, October 11, 2020

AWS Route 53 [Part 2]

Routing Policies:

When you create a record, you choose a routing policy, which determines how Amazon Route 53 responds to queries:

1. Simple Routing: picks an IP at random when the record holds multiple values.

2. Weighted Routing: routes traffic according to the weights allotted; in the example, 70% of the traffic is routed to the IP 30.1.1.2 and the rest to the other (see the Terraform sketch after this list).

3. Latency Based Routing: routes based on latency; traffic goes to the instance the user can reach with the lowest latency.

4. Failover Routing: active/passive. Traffic is routed to the active instance; if that instance goes down, the passive instance is made active.

5. Geolocation Based Routing: routing is based on the geographic location of the user and the instance.

Geoproximity Routing (traffic flow only): Geoproximity routing lets Amazon Route 53 route traffic to your resources based on the geographic location of your users and your resources. You can also optionally choose to route more traffic or less to a given resource by specifying a value, known as a bias. A bias expands or shrinks the size of the geographic region from which traffic is routed to a resource.

6. Multivalue Answer Routing: like the simple routing policy, but a health check is added and unhealthy instances are removed from the answers.
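
As referenced under Weighted Routing above, here is a sketch of a weighted record pair in Terraform; the hosted zone ID, record name, and second IP are placeholders:

resource "aws_route53_record" "primary" {
  zone_id        = "Z123456ABCDEFG" # placeholder hosted zone ID
  name           = "app.example.com"
  type           = "A"
  ttl            = 60
  set_identifier = "primary"
  records        = ["30.1.1.2"]

  weighted_routing_policy {
    weight = 70 # 70 of 100 total weight, so roughly 70% of traffic
  }
}

resource "aws_route53_record" "secondary" {
  zone_id        = "Z123456ABCDEFG"
  name           = "app.example.com"
  type           = "A"
  ttl            = 60
  set_identifier = "secondary"
  records        = ["30.1.1.3"] # placeholder second IP

  weighted_routing_policy {
    weight = 30
  }
}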

Saturday, September 19, 2020

AWS Disaster Management

RPO: Recovery Point Objective.

RTO: Recovery Time Objective.


Disaster Recovery Strategies:

  1. Backup & Restore.

  2. Pilot Light

  3. Warm Standby

  4. Hot Site/Multi Site approach

  1. Backup & Restore:

    1. Very easy to implement

    2. The only cost is storing the data

    3. Can take long to restore

    4. High RPO & high RTO

  2. Pilot Light

    1. A small version of the app is always running in the cloud

    2. Useful for the critical core

    3. Very similar to backup & restore

    4. Faster than Backup & Restore, as the critical systems are already up.

  3. Warm Standby:

    1. Full System is up but at minimum size

    2. Upon disaster we can scale up to prod load


  4. Hot Site / Multi Site approach:

    1. Very low RTO (minutes or seconds) → very expensive

    2. Full production scale is running in the cloud or on premises

    3. Active-active setup


All AWS Multi Region:

Disaster Recovery Tips:

Backups:


Terraform Cheat Sheet [WIP]

Installing Terraform