April 2, 2023

Securing the Enchanted Cloud Kingdom: Terraform and AWS Security Best Practices

Introduction:

    Infrastructure as Code (IaC) has brought about a new era in managing and provisioning cloud resources, enhancing efficiency and uniformity. Nevertheless, when wielding the power of IaC, it's vital to incorporate security best practices to protect your enchanted cloud kingdom. This article delves into various security best practices when harnessing Terraform with AWS and presents code examples to help you construct a secure fortress.

Common approach:

1. Enable Logging and Monitoring Services.

To maintain a secure infrastructure, it's important to have visibility into all activities. Enabling logging and monitoring services, such as AWS CloudTrail, AWS Config, and Amazon GuardDuty, can help you achieve this.
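A minimal sketch of such a trail (the bucket name is illustrative, and in practice the bucket also needs a policy that allows CloudTrail to write to it):

```hcl
resource "aws_s3_bucket" "trail_logs" {
  bucket = "main-cloudtrail-logs" # illustrative name
}

resource "aws_cloudtrail" "main" {
  name                          = "main-cloudtrail"
  s3_bucket_name                = aws_s3_bucket.trail_logs.id
  is_multi_region_trail         = true
  include_global_service_events = true
}
```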
 
This example creates a CloudTrail configuration named main-cloudtrail and specifies the S3 bucket that stores the logs. It also enables a multi-region trail and includes global service events.

2. Implement Identity and Access Management (IAM)

Restricting access to your AWS resources is essential for security. Implement least privilege principles and use IAM roles to grant the necessary permissions to users, groups, and services.
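A sketch of what this can look like (the bucket name and the EC2 trust relationship are illustrative):

```hcl
resource "aws_iam_policy" "example" {
  name = "example-policy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:ListBucket"]
      Resource = [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*",
      ]
    }]
  })
}

resource "aws_iam_role" "example" {
  name = "example-role"
  # Trust policy: here we assume EC2 instances will assume the role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "example" {
  role       = aws_iam_role.example.name
  policy_arn = aws_iam_policy.example.arn
}
```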
 
In this example, we create an IAM policy named example-policy that allows access to an S3 bucket named example-bucket. We then create an IAM role named example-role and attach the policy to it.

3. Encrypt Data at Rest and in Transit

Encrypting data ensures that unauthorized parties cannot access it. Use encryption features like AWS Key Management Service (KMS) for data at rest and enforce encryption in transit using HTTPS.
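A sketch with illustrative engine and sizing values; the password is expected to come from a sensitive variable:

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

resource "aws_kms_key" "rds" {
  description         = "KMS key for encrypting the RDS instance"
  enable_key_rotation = true
}

resource "aws_db_instance" "encrypteddb" {
  identifier          = "encrypteddb"
  engine              = "mysql" # illustrative engine and sizing
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "admin"
  password            = var.db_password
  storage_encrypted   = true
  kms_key_id          = aws_kms_key.rds.arn
  skip_final_snapshot = true
}
```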
 
In the example, we create a KMS key for encrypting the RDS instance. We then create an Amazon RDS instance named `encrypteddb`, enabling storage encryption and specifying the KMS key to use.

4. Use Security Groups and Network Access Control Lists (NACLs)

To safeguard your infrastructure, restrict inbound and outbound traffic using security groups and NACLs. Configure them according to the principle of least privilege.
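A sketch of such a security group (the VPC id is assumed to be supplied from elsewhere):

```hcl
variable "vpc_id" {
  type = string
}

resource "aws_security_group" "web_security_group" {
  name        = "web_security_group"
  description = "Allow HTTP/HTTPS in, all traffic out"
  vpc_id      = var.vpc_id

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "All outbound"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```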
 
In this example, we create a security group named web_security_group that allows inbound traffic on ports 80 and 443 (HTTP and HTTPS) and unrestricted outbound traffic. This security group can be attached to web servers to allow only necessary incoming connections and provide a basic level of security.

    By implementing these security best practices in your Infrastructure as Code, you can ensure a more secure and reliable cloud environment. As you work with different cloud resources and IaC tools, it's essential to stay up-to-date with the latest security recommendations and best practices, continuously adapting and improving your infrastructure to minimize risks and protect your data and applications.

Implementing AWS autoscaling with Terraform: A Practical Guide (with examples)

Introduction:

    Dynamically scaling cloud infrastructure is essential for optimizing resources and costs. Infrastructure as Code (IaC) tools like Terraform can help manage the scaling process effectively. In this article, we will discuss how to scale AWS infrastructure using Terraform, complete with code examples.

What resources are needed:

1. Autoscaling groups

Autoscaling groups enable you to scale your EC2 instances automatically based on load or schedule. Using IaC with Terraform, you can manage autoscaling groups easily.
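A sketch, assuming the AMI and subnets are supplied as variables:

```hcl
variable "ami_id" {
  type = string
}

variable "subnet_ids" {
  type = list(string)
}

resource "aws_launch_configuration" "web" {
  name_prefix   = "web-"
  image_id      = var.ami_id
  instance_type = "t3.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  name                 = "web-asg"
  launch_configuration = aws_launch_configuration.web.name
  min_size             = 1
  max_size             = 5
  vpc_zone_identifier  = var.subnet_ids
}
```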
 
In this example, the autoscaling group launches instances using the specified launch configuration, allowing the group to scale between 1 and 5 instances.

2. AWS Auto Scaling policies

Using AWS Auto Scaling policies, you can create rules that define how your infrastructure scales based on specific metrics, such as CPU utilization or network throughput.
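A sketch pairing a simple-scaling policy with the CloudWatch alarm that triggers it (thresholds and cooldown are illustrative):

```hcl
resource "aws_autoscaling_policy" "scale_up" {
  name                   = "scale-up-on-cpu"
  autoscaling_group_name = aws_autoscaling_group.web.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = 1 # add one instance
  cooldown               = 300
}

resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  alarm_name          = "asg-high-cpu"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80
  period              = 60 # one 1-minute period above 80% fires the alarm
  evaluation_periods  = 1

  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.web.name
  }

  alarm_actions = [aws_autoscaling_policy.scale_up.arn]
}
```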
In this example, we create a scaling policy that triggers when the CPU utilization exceeds 80% for 1 minute. The autoscaling group scales up by one instance when this occurs.

3. Scheduled scaling

Scheduled scaling enables you to scale your infrastructure based on predefined schedules, such as daily or weekly peaks in demand.
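A sketch with illustrative capacities and cron schedules (times are UTC):

```hcl
resource "aws_autoscaling_schedule" "weekday_scale_up" {
  scheduled_action_name  = "weekday-scale-up"
  autoscaling_group_name = aws_autoscaling_group.web.name
  min_size               = 2
  max_size               = 5
  desired_capacity       = 3
  recurrence             = "0 7 * * 1-5" # 07:00 Monday-Friday
}

resource "aws_autoscaling_schedule" "weekend_scale_down" {
  scheduled_action_name  = "weekend-scale-down"
  autoscaling_group_name = aws_autoscaling_group.web.name
  min_size               = 1
  max_size               = 2
  desired_capacity       = 1
  recurrence             = "0 20 * * 5" # 20:00 Friday
}

resource "aws_autoscaling_schedule" "preweek_scale_up" {
  scheduled_action_name  = "preweek-scale-up"
  autoscaling_group_name = aws_autoscaling_group.web.name
  min_size               = 2
  max_size               = 5
  desired_capacity       = 3
  recurrence             = "0 22 * * 0" # 22:00 Sunday, ready for Monday
}
```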
In this example, we set up three scheduled scaling actions: one to scale the web app up for weekday traffic, another to scale it down for the weekend, and a third to scale it back up before the new week begins. These actions help ensure that the infrastructure can handle varying loads throughout the week.

    In conclusion, Infrastructure as Code and dynamic scaling are essential for modern cloud infrastructures. By using tools like Terraform and AWS features, you can create a flexible and efficient cloud environment that adapts to your applications and users' changing needs. Keep exploring IaC, stay updated on best practices, and continue optimizing your cloud infrastructure for top performance and cost-efficiency.

Avoid 5 common mistakes when using Terraform and be prepared for challenges in the world of infrastructure!

Introduction:

    Terraform is a powerful Infrastructure as Code (IaC) tool, but it can also be challenging to work with, especially for those new to it. In this article, we will discuss the top 5 common Terraform usage errors, provide code examples of improper and proper usage, and give tips on how to avoid these errors.

Common errors list:

1. Error: Not using variables and hardcoding values

Improper usage:
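Something like this, with values baked into the resource (the AMI ID is illustrative):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # hardcoded
  instance_type = "t3.micro"              # hardcoded
}
```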
 
Proper usage:
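The same resource, with the changeable values pulled out into variables:

```hcl
variable "ami_id" {
  type        = string
  description = "AMI to launch"
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}

resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = var.instance_type
}
```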
Solution: Always use variables for values that might change, and avoid hardcoding values directly in your resource configurations.

2. Error: Insufficient use of modules for reusability

Improper usage:
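Something like this, with the same VPC resources copy-pasted into every configuration that needs them:

```hcl
resource "aws_vpc" "this" {
  cidr_block = "10.0.0.0/16" # repeated in every environment
}

resource "aws_subnet" "a" {
  vpc_id     = aws_vpc.this.id
  cidr_block = "10.0.1.0/24"
}
```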
Proper usage:
  • the calling code in main.tf
  • the reusable vpc module (see the sketch below)
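A sketch of both pieces (the module path and CIDR are illustrative):

```hcl
# main.tf -- call the module instead of repeating the resources
module "vpc" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
}

# modules/vpc/main.tf -- the reusable module
variable "cidr_block" {
  type = string
}

resource "aws_vpc" "this" {
  cidr_block = var.cidr_block
}

output "vpc_id" {
  value = aws_vpc.this.id
}
```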
Solution: Use modules to encapsulate reusable pieces of infrastructure code and promote reusability.

3. Error: Not specifying required provider versions

Improper usage:
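Something like this, with no constraints at all, so terraform init may pick any provider version:

```hcl
provider "aws" {
  region = "us-east-1"
}
```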
Proper usage:
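The same configuration with pinned versions (the constraints are illustrative):

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
```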
Solution: Specify the required provider versions in your Terraform configuration to ensure consistent behavior across different environments and team members.

4. Error: Not properly handling sensitive data

Improper usage:
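Something like this, with a plaintext secret committed straight into the code (all values illustrative):

```hcl
resource "aws_db_instance" "db" {
  identifier          = "mydb"
  engine              = "mysql"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "admin"
  password            = "SuperSecret123!" # plaintext secret in version control
  skip_final_snapshot = true
}
```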
Proper usage:
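The same resource, with the password supplied through a variable marked as sensitive:

```hcl
variable "db_password" {
  type      = string
  sensitive = true # keeps the value out of plan/apply output
}

resource "aws_db_instance" "db" {
  identifier          = "mydb"
  engine              = "mysql"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "admin"
  password            = var.db_password
  skip_final_snapshot = true
}
```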
Solution: Store sensitive data like passwords and API keys in variables marked as sensitive or use services like AWS Secrets Manager to manage secrets securely.

5. Error: Not using .terraformignore or .gitignore to exclude sensitive files

Improper usage:
Not having a .terraformignore or .gitignore file, or not listing sensitive files in them.
Proper usage:
Create .terraformignore and .gitignore files and list sensitive files and directories in them.
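Typical entries for both files might look like this (illustrative; adjust to your own layout):

```
# Local state and state backups may contain secrets in plain text
*.tfstate
*.tfstate.*

# Variable files often hold credentials
*.tfvars

# Local working directory and crash logs
.terraform/
crash.log
```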
Solution: Use .terraformignore to exclude sensitive files from being uploaded to the Terraform Cloud backend, and .gitignore to exclude sensitive files from your Git repository. This helps protect sensitive data and prevents accidental exposure.

    In conclusion, avoiding these Terraform usage errors is crucial to maintaining robust and secure Infrastructure as Code practices. By using variables, leveraging modules, specifying provider versions, managing sensitive data carefully, and properly excluding sensitive files, you can streamline your Terraform workflows and enhance the overall efficiency of your infrastructure management.

May 1, 2020

Implementing "IF" in Terraform "conditional expression"

Terraform Cloud (TFC) can create and manage your infrastructure, roll out updates to your infrastructure and your apps, and cover CI/CD needs as well.

I have found TFC very productive for both infrastructure development and production infrastructure workflows.
It can also drive workflows where conditions matter, at any stage.

So I will focus now on conditional expressions for Terraform arguments.


Case No 1: You want to create an EC2 VM with a disk size of 20 GiB by default, but IF the volume_size workspace variable is set, THEN let it override the default, per workspace.
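A sketch of one way to express this IF/THEN with a conditional expression (the AMI variable is an illustrative assumption):

```hcl
variable "volume_size" {
  type    = number
  default = 0 # 0 means "not set in this workspace"
}

variable "ami_id" {
  type = string
}

resource "aws_instance" "vm" {
  ami           = var.ami_id
  instance_type = "t3.micro"

  root_block_device {
    # IF volume_size is set (non-zero) THEN use it, ELSE default to 20 GiB
    volume_size = var.volume_size != 0 ? var.volume_size : 20
  }
}
```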


To override the default 20 GiB, add a volume_size variable to the Terraform workspace with the requested size value (in GiB).



Don't forget to "queue run" after the change.

Case No 2: You want to keep this volume_size value in some secrets store and fetch it via a "data" object, for example from AWS Secrets Manager.
  • It can be AWS SSM, Vault, or any other place you fetch from with "data".
But IF the volume_size workspace variable is set, THEN you want it to override the fetched value.
Well, this is a bit more complicated:
There is original documentation here, but it is still not easy to understand the implementation; in the end, though, it turns out to be super easy.
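A sketch of how this can look (the secret name and AMI variable are illustrative assumptions):

```hcl
variable "volume_size" {
  type    = string
  default = "" # empty means "not set in this workspace"
}

variable "ami_id" {
  type = string
}

# The secret is assumed to store the desired size in GiB.
data "aws_secretsmanager_secret_version" "volume_size" {
  secret_id = "prod/volume_size"
}

resource "aws_instance" "vm" {
  ami           = var.ami_id
  instance_type = "t3.micro"

  root_block_device {
    # IF the workspace variable is set THEN it wins, ELSE use the secret
    volume_size = var.volume_size != "" ? var.volume_size : data.aws_secretsmanager_secret_version.volume_size.secret_string
  }
}
```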
Imagine that you can create a testing pipeline where each stage is a null_resource with "local-exec" provisioner scripts, the stages depend on each other hierarchically, and every stage builds on the previous stage's outputs.
Write in the comments if you need an example.

Case No 3: You want to create a resource only IF some variable has a "true" value:
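A sketch using count with a conditional expression (the bucket name is illustrative):

```hcl
variable "create_bucket" {
  type    = bool
  default = false
}

# count = 1 creates the resource, count = 0 skips it entirely
resource "aws_s3_bucket" "optional" {
  count  = var.create_bucket ? 1 : 0
  bucket = "my-optional-bucket"
}
```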

* At the time of writing, you still can't pass the "count" argument to a "module", but this will probably be implemented later.

April 24, 2020

Environment Variables in Terraform Cloud Remote Run


It is great to be able to use the outputs (or state data) from another Terraform Cloud workspace.

In most cases, it will be in the same TFC organization.
But one of the required arguments of this "terraform_remote_state" data object is "organization"... hmm, that is exactly where I am running right now.
The second required argument is "name" (the remote workspace name).
Hmm, what if you are using some workspace naming convention or workspace prefixes?

OK, it looks like it can be done easily.
Like any "CI/CD-as-a-service" with remote runners, TFC has unique dynamic (system) environment variables on each run, such as the run ID and the workspace and organization names.
To see exactly which variables exist, just run some TF config with a local-exec "printenv" command: you will see all the Key=Value pairs.
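A minimal throwaway config for that (the run log will show the output):

```hcl
# Dumps every environment variable visible on the TFC remote runner.
resource "null_resource" "show_env" {
  provisioner "local-exec" {
    command = "printenv"
  }
}
```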

Note that only system variables with the "TF_VAR_" prefix are accessible to you from Terraform; this has no connection with Terraform providers, which read their own specific system variables (like AWS_DEFAULT_REGION).

Let's get back to our case.

So we have two workspaces in TFC, under the same organization:

  1. A workspace where we bring up the AWS VPC, EKS, and MongoDB.
  2. A workspace where we deploy Kubernetes services with Helm charts.
To deploy to the created Kubernetes cluster (EKS), the "second" workspace must pass Kubernetes authentication first. The Kubernetes services also need the "MongoDB_URI" connection string.

That's why we will call the "first" workspace "demo" and the "second" one "demo-helm".
Then, in the "second" workspace, a "terraform_remote_state" data object must run before the rest of the objects/resources:
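A sketch of how this can look. It assumes the run environment exposes TF_VAR_TFC_WORKSPACE_SLUG (verify with the printenv trick above); the "-helm" suffix handling and the output names are illustrative:

```hcl
# Declaring a variable with this name picks up the TF_VAR_-prefixed
# system variable that TFC injects on each run (assumed from printenv).
variable "TFC_WORKSPACE_SLUG" {
  default = "" # e.g. "my-org/demo-helm"
}

locals {
  organization = split("/", var.TFC_WORKSPACE_SLUG)[0]
  # "demo-helm" -> "demo": derive the first workspace from our own name
  remote_workspace = replace(split("/", var.TFC_WORKSPACE_SLUG)[1], "-helm", "")
}

data "terraform_remote_state" "demo" {
  backend = "remote"

  config = {
    organization = local.organization
    workspaces = {
      name = local.remote_workspace
    }
  }
}

# Illustrative outputs consumed from the "demo" workspace:
#   data.terraform_remote_state.demo.outputs.eks_cluster_endpoint
#   data.terraform_remote_state.demo.outputs.mongodb_uri
```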

I hope this helps a little: it saves you from defining these things as variables by hand, and by its nature it prevents a Helm deployment from landing on the wrong cluster.