
April 2, 2023

Securing the Enchanted Cloud Kingdom: Terraform and AWS Security Best Practices

Introduction:

    Infrastructure as Code (IaC) has brought about a new era in managing and provisioning cloud resources, enhancing efficiency and uniformity. Nevertheless, when wielding the power of IaC, it's vital to incorporate security best practices to protect your enchanted cloud kingdom. This article delves into various security best practices when harnessing Terraform with AWS and presents code examples to help you construct a secure fortress.

Common approach:

1. Enable Logging and Monitoring Services.

To maintain a secure infrastructure, it's important to have visibility into all activities. Enabling logging and monitoring services, such as AWS CloudTrail, AWS Config, and Amazon GuardDuty, can help you achieve this.
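A minimal sketch of such a setup in Terraform might look like this (the bucket name is an assumption, and the bucket policy that CloudTrail requires is omitted for brevity):

```hcl
# Assumed bucket for trail logs; CloudTrail also needs a bucket policy
# granting it write access, omitted here for brevity.
resource "aws_s3_bucket" "trail_logs" {
  bucket = "my-cloudtrail-logs-bucket"
}

resource "aws_cloudtrail" "main" {
  name                          = "main-cloudtrail"
  s3_bucket_name                = aws_s3_bucket.trail_logs.id
  is_multi_region_trail         = true
  include_global_service_events = true
}
```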
 
This example creates a CloudTrail trail named main-cloudtrail and specifies the S3 bucket that stores the logs. It also enables a multi-region trail and includes global service events.

2. Implement Identity and Access Management (IAM)

Restricting access to your AWS resources is essential for security. Implement least privilege principles and use IAM roles to grant the necessary permissions to users, groups, and services.
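A sketch of such a policy and role (the EC2 trust relationship and the exact S3 actions are assumptions):

```hcl
resource "aws_iam_policy" "example" {
  name = "example-policy"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:ListBucket"] # assumed minimal actions
      Resource = [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }]
  })
}

resource "aws_iam_role" "example" {
  name = "example-role"
  # Assumption: the role is assumed by EC2; adjust the principal to your use case.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "example" {
  role       = aws_iam_role.example.name
  policy_arn = aws_iam_policy.example.arn
}
```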
 
In this example, we create an IAM policy named example-policy that allows access to an S3 bucket named example-bucket. We then create an IAM role named example-role and attach the policy to it.

3. Encrypt Data at Rest and in Transit

Encrypting data ensures that unauthorized parties cannot access it. Use encryption features like AWS Key Management Service (KMS) for data at rest and enforce encryption in transit using HTTPS.
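One possible sketch (the engine, instance class, and credentials handling are assumptions):

```hcl
resource "aws_kms_key" "rds" {
  description = "KMS key for RDS storage encryption"
}

resource "aws_db_instance" "encrypteddb" {
  identifier          = "encrypteddb"
  engine              = "mysql"         # assumption
  instance_class      = "db.t3.micro"   # assumption
  allocated_storage   = 20
  username            = "admin"
  password            = var.db_password # assumed sensitive variable declared elsewhere
  storage_encrypted   = true
  kms_key_id          = aws_kms_key.rds.arn
  skip_final_snapshot = true
}
```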
 
In the example, we create a KMS key for encrypting the RDS instance. We then create an Amazon RDS instance named `encrypteddb`, enabling storage encryption and specifying the KMS key to use.

4. Use Security Groups and Network Access Control Lists (NACLs)

To safeguard your infrastructure, restrict inbound and outbound traffic using security groups and NACLs. Configure them according to the principle of least privilege.
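A minimal sketch of such a security group (the VPC reference is an assumption):

```hcl
resource "aws_security_group" "web_security_group" {
  name        = "web_security_group"
  description = "Allow HTTP/HTTPS in, all traffic out"
  vpc_id      = var.vpc_id # assumption: supplied elsewhere

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```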
 
In this example, we create a security group named web_security_group that allows inbound traffic on ports 80 and 443 (HTTP and HTTPS) and unrestricted outbound traffic. This security group can be attached to web servers to allow only necessary incoming connections and provide a basic level of security.

    By implementing these security best practices in your Infrastructure as Code, you can ensure a more secure and reliable cloud environment. As you work with different cloud resources and IaC tools, it's essential to stay up-to-date with the latest security recommendations and best practices, continuously adapting and improving your infrastructure to minimize risks and protect your data and applications.

Implementing AWS autoscaling with Terraform: A Practical Guide (with examples)

Introduction:

    Dynamically scaling cloud infrastructure is essential for optimizing resources and costs. Infrastructure as Code (IaC) tools like Terraform can help manage the scaling process effectively. In this article, we will discuss how to scale AWS infrastructure using Terraform, complete with code examples.

What resources are needed

1. Autoscaling groups

Autoscaling groups enable you to scale your EC2 instances automatically based on load or schedule. Using IaC with Terraform, you can manage autoscaling groups easily.
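A sketch of an autoscaling group scaling between 1 and 5 instances (the AMI, instance type, and subnets are assumptions):

```hcl
resource "aws_launch_configuration" "web" {
  name_prefix   = "web-"
  image_id      = var.ami_id  # assumption
  instance_type = "t3.micro"  # assumption

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  name                 = "web-asg"
  launch_configuration = aws_launch_configuration.web.name
  min_size             = 1
  max_size             = 5
  vpc_zone_identifier  = var.subnet_ids # assumption
}
```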
 
In this example, the autoscaling group launches instances using the specified launch configuration, allowing the group to scale between 1 and 5 instances.

2. AWS Auto Scaling policies

Using AWS Auto Scaling policies, you can create rules that define how your infrastructure scales based on specific metrics, such as CPU utilization or network throughput.
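A sketch of such a policy, paired with the CloudWatch alarm that triggers it (it assumes an autoscaling group like the one from the previous section):

```hcl
resource "aws_autoscaling_policy" "scale_up" {
  name                   = "scale-up-on-high-cpu"
  autoscaling_group_name = aws_autoscaling_group.web.name # assumed existing group
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = 1   # add one instance when triggered
  cooldown               = 300
}

resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  alarm_name          = "asg-high-cpu"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80 # percent
  period              = 60 # one minute
  evaluation_periods  = 1
  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.web.name
  }
  alarm_actions = [aws_autoscaling_policy.scale_up.arn]
}
```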
In this example, we create a scaling policy that triggers when the CPU utilization exceeds 80% for 1 minute. The autoscaling group scales up by one instance when this occurs.

3. Scheduled scaling

Scheduled scaling enables you to scale your infrastructure based on predefined schedules, such as daily or weekly peaks in demand.
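Scheduled actions can be sketched like this (the exact times and capacities are assumptions; they assume an existing autoscaling group):

```hcl
resource "aws_autoscaling_schedule" "weekday_morning_up" {
  scheduled_action_name  = "weekday-morning-scale-up"
  autoscaling_group_name = aws_autoscaling_group.web.name
  recurrence             = "0 7 * * MON-FRI" # 07:00, weekdays
  min_size               = 2
  max_size               = 10
  desired_capacity       = 4
}

resource "aws_autoscaling_schedule" "weekday_evening_down" {
  scheduled_action_name  = "weekday-evening-scale-down"
  autoscaling_group_name = aws_autoscaling_group.web.name
  recurrence             = "0 19 * * MON-FRI" # 19:00, weekdays
  min_size               = 1
  max_size               = 10
  desired_capacity       = 2
}

resource "aws_autoscaling_schedule" "weekend_down" {
  scheduled_action_name  = "weekend-scale-down"
  autoscaling_group_name = aws_autoscaling_group.web.name
  recurrence             = "0 0 * * SAT" # midnight Saturday
  min_size               = 1
  max_size               = 10
  desired_capacity       = 1
}
```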
In this example, we set up three scheduled scaling actions: one to scale the web app up on weekday mornings, another to scale it down in the evenings, and a third to keep it at minimal capacity over the weekend. These actions help ensure that the infrastructure can handle varying loads throughout the week.

    In conclusion, Infrastructure as Code and dynamic scaling are essential for modern cloud infrastructures. By using tools like Terraform and AWS features, you can create a flexible and efficient cloud environment that adapts to your applications and users' changing needs. Keep exploring IaC, stay updated on best practices, and continue optimizing your cloud infrastructure for top performance and cost-efficiency.

Avoid 5 common mistakes when using Terraform and be prepared for challenges in the world of infrastructure!

Introduction:

    Terraform is a powerful Infrastructure as Code (IaC) tool, but it can also be challenging to work with, especially for those new to it. In this article, we will discuss the top 5 common Terraform usage errors, provide code examples of improper and proper usage, and give tips on how to avoid these errors.

Common errors list:

1. Error: Not using variables and hardcoding values

Improper usage:
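For instance (with made-up values):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # hardcoded (hypothetical ID)
  instance_type = "t3.micro"              # hardcoded
}
```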
 
Proper usage:
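For instance:

```hcl
variable "ami_id" {
  type = string
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}

resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = var.instance_type
}
```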
Solution: Always use variables for values that might change, and avoid hardcoding values directly in your resource configurations.

2. Error: Insufficient use of modules for reusability

Improper usage:
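For instance (a hypothetical sketch):

```hcl
# The same VPC blocks copy-pasted into every environment's configuration:
resource "aws_vpc" "staging" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_vpc" "production" {
  cidr_block = "10.1.0.0/16"
}
# ...plus duplicated subnets, gateways, and route tables for each environment.
```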
Proper usage:
  • In main.tf, call a reusable vpc module instead of repeating the resource blocks.
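A sketch of the module call (the module path and variables are assumptions):

```hcl
# main.tf
module "vpc" {
  source     = "./modules/vpc" # assumed local module path
  cidr_block = var.vpc_cidr
  name       = var.environment
}
```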
Solution: Use modules to encapsulate reusable pieces of infrastructure code and promote reusability.

3. Error: Not specifying required provider versions

Improper usage:
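For instance:

```hcl
provider "aws" {
  region = "us-east-1"
  # No version constraint: `terraform init` may resolve a different
  # provider version on each machine.
}
```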
Proper usage:
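For instance (the version range shown is an assumption):

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # pin a known-good range
    }
  }
}
```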
Solution: Specify the required provider versions in your Terraform configuration to ensure consistent behavior across different environments and team members.

4. Error: Not properly handling sensitive data

Improper usage:
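For instance (with a made-up secret):

```hcl
resource "aws_db_instance" "db" {
  # ...
  username = "admin"
  password = "SuperSecret123!" # plain-text secret committed to the repo (hypothetical)
}
```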
Proper usage:
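For instance:

```hcl
variable "db_password" {
  type      = string
  sensitive = true # keeps the value out of plan output
}

resource "aws_db_instance" "db" {
  # ...
  username = "admin"
  password = var.db_password
}
```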
Solution: Store sensitive data like passwords and API keys in variables marked as sensitive or use services like AWS Secrets Manager to manage secrets securely.

5. Error: Not using .terraformignore or .gitignore to exclude sensitive files

Improper usage:
Not having a .terraformignore or .gitignore file, or failing to list sensitive files in them.
Proper usage:
Create .terraformignore and .gitignore files and list sensitive files and directories in them.
.terraformignore and .gitignore:
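A minimal example of entries such files commonly contain (adjust to your project):

```
.terraform/
*.tfstate
*.tfstate.backup
*.tfvars
crash.log
```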
Solution: Use .terraformignore to exclude sensitive files from being uploaded to Terraform Cloud, and .gitignore to exclude sensitive files from your Git repository. This helps protect sensitive data and prevents accidental exposure.

    In conclusion, avoiding these Terraform usage errors is crucial to maintaining robust and secure Infrastructure as Code practices. By using variables, leveraging modules, specifying provider versions, managing sensitive data carefully, and properly excluding sensitive files, you can streamline your Terraform workflows and enhance the overall efficiency of your infrastructure management.

March 27, 2023

Containerization Basics: An Introduction for Cloud and DevOps Enthusiasts draft

1. Introduction to Containerization

    In recent years, containerization has become an essential aspect of modern application development, particularly in cloud services, architecture, and DevOps.

    Unlike traditional virtualization techniques that rely on running multiple virtual machines, each with its own operating system, containerization enables applications to run in lightweight, portable, and isolated environments. This approach significantly improves resource utilization, efficiency, and consistency across development, testing, and production environments.

2. Core Concepts and Terminology

Understanding containerization requires familiarity with some core concepts and terminology:

  • Containers: These are lightweight, portable, and isolated environments for running applications, which share the host operating system's kernel.
  • Images: These are the blueprints for creating containers, containing all the necessary application code, dependencies, and configuration information.
  • Registries: These repositories store and distribute container images, allowing developers to easily share and deploy applications.
  • Orchestration: This refers to the management and coordination of container deployments, particularly for scaling and managing the lifecycle of containers.

3. Popular Containerization Technologies

Several containerization technologies are popular among developers and DevOps professionals:

  • Docker: The most widely used container platform, Docker provides a simple and powerful way to create, deploy, and manage containers.
  • Podman: A daemonless, open-source container engine that is largely compatible with Docker's CLI and images, and can run containers without root privileges.
  • containerd: A high-level container runtime used by Docker and Kubernetes, containerd is designed to be lightweight and efficient, making it suitable for large-scale deployments.

Additional to the end of every 1st subtopic article (Introductions)

Variants:

  1. If you're interested in delving deeper into {TOPIC_NAME}, we invite you to explore our articles tagged with {TOPIC_TAG}. Don't forget to stay tuned for more content from us!
  2. For further information on {TOPIC_NAME}, be sure to read our articles that are tagged with {TOPIC_TAG}. Keep following us for more updates!
  3. To learn more about {TOPIC_NAME}, check out our articles tagged with {TOPIC_TAG}. Stay tuned!

An Introduction to the Art of Infrastructure as Code (IaC) (ver. 1)

    1. The Magical World of IaC 

    In the grand tale of cloud computing, Infrastructure as Code (IaC) emerges as a modern marvel, a sorcery that conjures and commands cloud infrastructure with mere lines of code. By conjuring infrastructure like a software spell, organizations can deftly navigate the realm of development and operations, bestowing order and banishing chaos. Today, dear reader, we shall embark on a journey, exploring the bountiful blessings of IaC, the enchanted tools and technologies, the sacred practices, and the path to initiation.

    2. The Boons of IaC 

    Embracing IaC bestows upon the cloud architect and DevOps practitioner manifold gifts:

  • Scalability and Flexibility: IaC weaves a powerful charm, enabling one to stretch or shrink infrastructure in response to the ever-changing winds of demand.
  • Swift and Steady Deployments: Automation, the loyal servant of IaC, hastens the provisioning of resources, ensuring consistency and reducing the folly of human error. 
  • Collaboration and the Tome of Versions: IaC unites fellow mages, allowing them to craft infrastructure together, recording their work in the mystical annals of version control.

     3. The Wondrous IaC Tools and Technologies

    In the enchanted realm of IaC, a collection of powerful artifacts aid the diligent practitioner:

  • Terraform: A legendary, open-source IaC relic, this multi-cloud tool conjures a diverse array of cloud providers and platforms.
  • AWS CloudFormation: A mystical service native to the land of AWS, it empowers the adept to define, provision, and manage AWS resources using the sacred scripts of JSON or YAML.
  • Pulumi: A modern IaC talisman that bridges the realms of programming languages and cloud providers, allowing wizards to inscribe infrastructure code in the familiar tongues of Python, TypeScript, and Go.

    4. Sacred IaC Best Practices To fully harness the power of IaC, the wise shall heed these hallowed practices:

  • Version Control and Modularity: Enshrine your IaC incantations in a version control system (like the fabled Git) and craft modular spells for reusability and ease of maintenance.
  • Testing, Validation, and Security: Examine your IaC enchantments for correctness and security, employing arcane instruments like Terratest or Checkov to thwart potential threats before they manifest in the realm of production.
  • Continuous Integration and Deployment (CI/CD): Integrate IaC into your CI/CD rituals, automating the summoning and maintenance of infrastructure, ensuring harmony with your application code.

    5. IaC Tales of Triumph 

    Numerous chronicles recount the success of organizations that embraced the magic of IaC to transform their cloud infrastructure:

  • A renowned e-commerce kingdom employed Terraform to govern their multi-cloud realm, enabling the rapid scaling of infrastructure during the high tide of shopping seasons.
  • A fabled media streaming citadel harnessed the power of AWS CloudFormation to automate the creation of their intricate AWS infrastructure, reducing the time required for deployment and establishing consistency across their dominions.

    6. The Path to IaC Mastery 

    Are you prepared, my dear reader, to embark on the quest for IaC mastery within your own organization? Here, then, are some guiding stars to light your way:

  • Choose wisely an IaC artifact that suits your needs and is compatible with your chosen cloud provider(s).
  • Immerse yourself in the lore of your selected tool, studying official scrolls, engaging in tutorials, and partaking in online courses of wisdom.
  • Begin your journey with humble steps, automating the summoning of a solitary resource or service, and then gradually expanding your arcane prowess.
    Thus concludes our voyage into the mystical realm of Infrastructure as Code. May the knowledge imparted herein serve you well as you weave your own IaC spells, shaping the cloud to your will and forging a future of unparalleled innovation.

CI/CD Overview: Transforming DevOps and Cloud Architecture (ver. 0)

1. Introduction to CI/CD
    Continuous Integration (CI) and Continuous Deployment (CD) are essential components of modern software development and DevOps practices. CI/CD enables developers to integrate code changes frequently and reliably, while automated deployments ensure that the application remains up-to-date and available to users.

2. Benefits of CI/CD
Implementing CI/CD offers several advantages for both development and operations teams:

  • Faster release cycles: CI/CD pipelines enable quicker delivery of new features and bug fixes to users.
  • Improved code quality: Frequent integration and automated testing catch issues early and prevent them from reaching production.
  • Reduced risk of deployment failures: Automated deployments minimize human error and ensure consistency across environments.
  • Increased collaboration: CI/CD fosters a culture of shared responsibility and collaboration between development and operations teams.

3. Key Components of CI/CD
CI/CD relies on several key components:

  • Source control management (e.g., Git) allows developers to track changes and collaborate on code.
  • Build automation tools (e.g., Jenkins, GitLab CI/CD) facilitate the process of compiling, testing, and packaging code.
  • Deployment automation and orchestration (e.g., Kubernetes, Docker) streamline the process of deploying applications to various environments.

4. Popular CI/CD Tools and Platforms
There are numerous CI/CD tools and platforms available to suit the needs of different teams and projects:

  • GitLab CI/CD: An integrated CI/CD platform within GitLab, providing a seamless experience for teams using GitLab for source control and issue management.
  • GitHub Actions: Allows teams to create CI/CD workflows directly in their GitHub repositories, simplifying the setup process and improving integration with other GitHub features.
  • Jenkins: An open-source CI/CD server that offers a wide range of plugins and integrations, making it highly customizable and adaptable to different workflows.
  • CircleCI: A cloud-native CI/CD platform that offers advanced features like parallelism and caching to optimize build performance.

5. Integrating CI/CD with Cloud Architecture
CI/CD can be effectively integrated with cloud architectures to further streamline development and deployment processes:

  • Leveraging cloud-based CI/CD platforms: Cloud-based CI/CD tools can scale on-demand, reducing the need for dedicated build infrastructure.
  • Deploying to cloud infrastructure: CI/CD pipelines can automate the deployment of applications to cloud platforms like AWS, Azure, and Google Cloud.
  • Managing cloud resources with Infrastructure as Code (IaC): Incorporating IaC into CI/CD pipelines enables teams to manage cloud resources alongside application code, ensuring consistency across environments.

6. Best Practices for Implementing CI/CD
To maximize the benefits of CI/CD, consider the following best practices:

  • Automate testing and code review processes to catch issues early and ensure high code quality.
  • Monitor and measure the performance of CI/CD pipelines, tracking success metrics to identify areas for improvement.
  • Ensure security and compliance in CI/CD pipelines by integrating security checks, vulnerability scanning, and access controls.

7. Conclusion
    CI/CD has a transformative impact on DevOps and cloud architecture, enabling faster delivery of new features, improved code quality, and enhanced collaboration between development and operations teams. By selecting the right tools and platforms, integrating CI/CD with cloud architecture, and adhering to best practices, organizations can streamline their software development processes and stay ahead in today's competitive landscape.


 

Infrastructure as Code: A Straightforward Introduction (ver. 2)

1. Grasping Infrastructure as Code

    Infrastructure as Code (IaC) is the practical approach of managing cloud infrastructure with code, streamlining development and operations. In this article, we examine the benefits, the key tools and technologies, best practices, and how to start implementing IaC.

2. Advantages of IaC

IaC brings valuable benefits to cloud architecture and DevOps:

  • Scalability and Flexibility: IaC allows easy adjustment of infrastructure to meet changing demands.
  • Rapid and Consistent Deployments: Automation speeds up provisioning and increases reliability.
  • Collaboration and Version Control: IaC enables teams to work together on infrastructure design and track changes.
3. Tools for IaC

Several IaC tools help manage infrastructure:

  • Terraform: Open-source, multi-cloud tool supporting a variety of cloud providers and platforms.
  • AWS CloudFormation: AWS service for defining, provisioning, and managing AWS resources using JSON or YAML templates.
  • Pulumi: Modern IaC platform that supports multiple languages and cloud providers, allowing developers to write infrastructure code in familiar languages.
4. Best Practices in IaC

To maximize IaC effectiveness, adhere to these best practices:

  • Version Control and Modularity: Use version control systems, such as Git, and modularize code for reusability and maintainability.
  • Testing, Validation, and Security: Regularly test IaC code for correctness and security using tools like Terratest or Checkov to prevent issues before production.
  • Continuous Integration and Deployment (CI/CD): Incorporate IaC into CI/CD pipelines for automated infrastructure provisioning and updates, keeping infrastructure aligned with application code.
5. IaC in Action

Numerous organizations have successfully implemented IaC to improve cloud infrastructure management:

  • A well-known e-commerce company utilized Terraform to manage their multi-cloud environment, allowing rapid scaling during peak shopping seasons.
  • A media streaming service employed AWS CloudFormation to automate provisioning of their complex AWS infrastructure, reducing deployment times and increasing consistency across environments.
6. Embarking on IaC

To begin using IaC in your organization, consider these steps:
  1. Choose an IaC tool that suits your needs and supports your chosen cloud provider(s).
  2. Learn the basics of your chosen tool through official documentation, tutorials, and online courses.
  3. Start small by automating the provisioning of a single resource or service, then gradually expand your IaC implementation.
This introduction to Infrastructure as Code provides a clear understanding of the concept, its advantages, and how to implement it. With this knowledge, you can confidently explore IaC and enhance your organization's cloud architecture and DevOps practices.

Introduction to Infrastructure as Code (IaC) (ver. 0)


1. Introduction to IaC

    Infrastructure as Code (IaC) is a modern approach to managing and provisioning cloud infrastructure through code, rather than manual processes. By treating infrastructure like software, organizations can streamline their development and operations processes, increase consistency, and reduce human error. In this article, we'll explore the benefits of IaC, popular tools and technologies, best practices, and how to get started.

2. Benefits of IaC

Adopting IaC brings several advantages to cloud architecture and DevOps practices:

  • Scalability and Flexibility: IaC makes it easy to scale infrastructure up or down as needed, enabling teams to respond quickly to changes in demand.
  • Faster and More Consistent Deployments: By automating infrastructure provisioning, teams can deploy resources faster and more consistently, reducing the risk of human error.
  • Collaboration and Version Control: IaC allows multiple team members to collaborate on infrastructure design and track changes over time, ensuring everyone is working from the same "blueprint."

3. IaC Tools and Technologies
There are several popular IaC tools available to help manage your infrastructure:

  • Terraform: An open-source, multi-cloud IaC tool that supports a wide range of cloud providers and platforms.
  • AWS CloudFormation: A native AWS service that enables you to define, provision, and manage AWS resources using JSON or YAML templates.
  • Pulumi: A modern IaC platform that supports multiple programming languages and cloud providers, allowing developers to write infrastructure code in familiar languages like Python, TypeScript, and Go.
4. IaC Best Practices
To make the most of IaC, follow these best practices:

  • Version Control and Modularity: Store IaC code in a version control system (such as Git) and modularize your code to make it reusable and easy to maintain.
  • Testing, Validation, and Security: Regularly test your IaC code for correctness and security, using tools like Terratest or Checkov to catch potential issues before they reach production.
  • Continuous Integration and Deployment (CI/CD): Integrate IaC into your CI/CD pipelines to automate infrastructure provisioning and updates, ensuring that your infrastructure stays in sync with your application code.
5. IaC Real-World Examples
Many organizations have successfully implemented IaC to improve their cloud infrastructure management:
  • A popular e-commerce company used Terraform to manage their multi-cloud environment, enabling them to rapidly scale their infrastructure during peak shopping seasons.
  • A media streaming service leveraged AWS CloudFormation to automate the provisioning of their complex AWS infrastructure, reducing deployment times and increasing consistency across environments.
6. Getting Started with IaC
Ready to start using IaC in your organization? Here are some tips and resources to help you get started:
  • Begin by choosing an IaC tool that fits your needs and supports your chosen cloud provider(s).
  • Learn the basics of your chosen tool through official documentation, tutorials, and online courses.
  • Start small by automating the provisioning of a single resource or service, then gradually expand your IaC implementation as you gain confidence and experience.
 

An Overview of CI/CD: Revolutionizing DevOps and Cloud Architecture (ver. 1)

1. Unveiling CI/CD

    Within contemporary software development and DevOps methodologies, Continuous Integration (CI) and Continuous Deployment (CD) hold paramount importance. These approaches enable developers to merge code changes frequently and reliably while ensuring automated deployments keep applications current and accessible.

2. The Perks of CI/CD

CI/CD implementation bestows several advantages upon development and operations teams:

  • Expedited release cycles: CI/CD pipelines facilitate the swift delivery of novel features and bug fixes to users.
  • Superior code quality: Regular integration and automated testing identify issues early, preventing them from infiltrating production environments.
  • Diminished deployment failure risk: Automated deployments curtail human error and maintain uniformity across settings.
  • Amplified collaboration: CI/CD nurtures a culture of mutual responsibility and cooperation between development and operations units.

3. The Cornerstones of CI/CD

CI/CD hinges on multiple fundamental components:

  • Source control management (e.g., Git) empowers developers to track modifications and cooperate on code.
  • Build automation tools (e.g., Jenkins, GitLab CI/CD) ease the compilation, testing, and packaging of code.
  • Deployment automation and orchestration (e.g., Kubernetes, Docker) streamline application deployment in diverse environments.

4. Prominent CI/CD Tools and Platforms

Numerous CI/CD tools and platforms cater to the varying needs of different teams and projects:

  • Jenkins: An open-source CI/CD server with a vast array of plugins and integrations for extensive customization and adaptability.
  • GitLab CI/CD: A CI/CD platform incorporated within GitLab, providing a seamless experience for teams employing GitLab for source control and issue management.
  • GitHub Actions: Allows teams to establish CI/CD workflows directly in their GitHub repositories, simplifying setup and bolstering integration with other GitHub functionalities.
  • CircleCI: A cloud-native CI/CD platform offering advanced features, such as parallelism and caching, to optimize build performance.

5. Merging CI/CD with Cloud Architecture

CI/CD can be efficiently merged with cloud architectures to further optimize development and deployment processes:

  • Adopting cloud-based CI/CD platforms: Cloud-based CI/CD tools scale on-demand, mitigating the need for dedicated build infrastructure.
  • Deploying to cloud infrastructure: CI/CD pipelines can automate application deployment to cloud platforms like AWS, Azure, and Google Cloud.
  • Administering cloud resources with Infrastructure as Code (IaC): Integrating IaC into CI/CD pipelines allows teams to manage cloud resources in conjunction with application code, ensuring consistency across environments.

6. Implementing CI/CD: Best Practices 

To fully reap the rewards of CI/CD, adhere to the following best practices:

  • Automate testing and code review processes to detect issues early and guarantee high code quality.
  • Monitor and assess CI/CD pipeline performance, tracking key success metrics to pinpoint areas for enhancement.
  • Safeguard security and compliance within CI/CD pipelines by incorporating security checks, vulnerability scanning, and access controls.

7. In Conclusion

    CI/CD profoundly influences DevOps and cloud architecture, promoting swifter feature delivery, enhanced code quality, and improved collaboration between development and operations teams. By selecting suitable tools and platforms, integrating CI/CD with cloud architecture, and following best practices, organizations can streamline their software development processes and maintain a competitive edge in today's fast-paced landscape. Embracing CI/CD can revolutionize your approach to DevOps and cloud architecture, ultimately leading to more efficient and reliable application development and deployment.

March 22, 2023

The Right Way to Migrate to AWS: A Comprehensive Guide for DevOps Success

The cloud computing landscape is evolving rapidly, and Amazon Web Services (AWS) is at the forefront of this transformation. As more businesses realize the potential of cloud technologies, migrating to AWS becomes a strategic priority. In this blog post, we'll discuss the right way to migrate to AWS, covering key considerations, best practices, and essential steps for a successful DevOps journey.

  1. Assess Your Current Infrastructure

Before starting the migration process, it's crucial to have a clear understanding of your current infrastructure. This includes assessing the applications, databases, and services you are currently using. Identify any dependencies, as well as the required resources and performance metrics for each component. This information will help you make informed decisions about which AWS services to use and how to optimize them during the migration process.

  2. Define Your Migration Strategy

Once you have assessed your current infrastructure, it's time to define your migration strategy. There are several approaches you can take:

  • Rehosting (Lift and Shift): Migrate your existing applications and infrastructure to AWS without making any significant changes. This approach is suitable for a quick migration with minimal downtime.
  • Replatforming (Lift, Tinker, and Shift): Optimize your applications and infrastructure during the migration process by making some changes to take advantage of AWS services.
  • Refactoring: Re-architect your applications and infrastructure to fully utilize AWS native services and features, such as serverless computing and managed databases.

Each strategy has its pros and cons, so choose the one that aligns with your business goals, budget, and timelines.

  3. Create a Detailed Migration Plan

A detailed migration plan will serve as your roadmap throughout the migration process. This plan should include:

  • A list of applications, services, and databases to be migrated
  • The migration strategy for each component
  • A timeline for each migration phase
  • Roles and responsibilities of team members
  • Risk mitigation strategies
  • Contingency plans for potential issues

  4. Choose the Right AWS Services

AWS offers a wide range of services that can help you migrate, manage, and optimize your infrastructure. Some key services to consider include:

  • Amazon EC2 for compute resources
  • Amazon RDS for managed relational databases
  • Amazon S3 for storage
  • AWS Lambda for serverless computing
  • Amazon VPC for networking

Ensure that you select the right services for your needs by considering factors such as performance, scalability, and cost.

  5. Execute the Migration

With your migration plan in place, it's time to execute the migration. Follow these steps for a smooth migration process:

  • Set up the required AWS services and configure their settings
  • Migrate your data using tools like AWS Database Migration Service (DMS) or AWS Snowball
  • Test the migrated applications and infrastructure to ensure they are functioning correctly
  • Monitor the performance of your applications and infrastructure and optimize them as needed
  • Implement security best practices to protect your AWS environment

  6. Monitor and Optimize Post-Migration

After the migration is complete, it's essential to continually monitor your AWS infrastructure's performance, security, and cost. Utilize tools like Amazon CloudWatch and AWS Trusted Advisor to gain insights into your environment and identify areas for optimization.

Migrating to AWS can be a complex process, but with the right strategy, planning, and execution, it can lead to significant benefits for your organization. By following these best practices and leveraging AWS's robust suite of services, you can ensure a successful DevOps migration that maximizes the potential of the cloud.

May 1, 2020

Implementing "IF" in Terraform "conditional expression"

Terraform Cloud can create and manage your infrastructure, roll out updates to your infrastructure and your applications, and even serve as a CI/CD system.

I have found TFC very productive for both infrastructure development and production infrastructure workflows. It can also handle workflows where conditions matter, at any stage.

So I will now focus on conditional expressions for Terraform arguments.


Case No 1: You want to create an EC2 VM with a disk size of 20 GiB by default, but IF a "volume_size" workspace variable is set, THEN let it override the default, per workspace.


To override the default 20 GiB, add a "volume_size" variable to the Terraform workspace with the requested size value (in GiB).
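A minimal sketch of this pattern (variable and resource names are illustrative, the AMI ID is a placeholder): the conditional expression falls back to 20 when the workspace variable is left empty.

```hcl
variable "volume_size" {
  description = "Root volume size in GiB; leave empty to use the default"
  default     = ""
}

resource "aws_instance" "vm" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t3.micro"

  root_block_device {
    # IF volume_size is set in the workspace, THEN use it; otherwise default to 20 GiB
    volume_size = var.volume_size != "" ? var.volume_size : 20
  }
}
```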



Don't forget to "queue run" after the change.


Case No 2: You want to keep this volume_size argument in some secrets store, fetching it via a "data" object, for example from AWS Secrets Manager.
  • It can be AWS SSM, Vault, or any other place you fetch from with "data".
But IF the volume_size workspace variable is set, THEN you want it to override the fetched value.
Well, this is a bit more complicated:
There is original documentation here, but it is still not easy to understand the implementation; in the end, though, it turns out to be super easy.
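A sketch of the idea, assuming the size is stored as JSON in AWS Secrets Manager under a hypothetical secret name; the workspace variable wins whenever it is non-empty:

```hcl
variable "volume_size" {
  default = "" # set in the TFC workspace to override the secret value
}

data "aws_secretsmanager_secret_version" "disk" {
  secret_id = "prod/disk-config" # hypothetical secret name
}

locals {
  # Value stored in the secret, e.g. {"volume_size": 40}
  secret_volume_size = jsondecode(data.aws_secretsmanager_secret_version.disk.secret_string)["volume_size"]

  # IF the workspace variable is set, THEN use it; otherwise use the secret's value
  effective_size = var.volume_size != "" ? var.volume_size : local.secret_volume_size
}
```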
Imagine that you can create a testing pipeline where each stage is a null_resource with "local-exec" provisioner scripts that depend on each other hierarchically, with every stage based on the previous step's outputs.
Write in the comments if you need an example.

Case No 3: You want to create a resource only IF some variable has a "true" value:
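This is the classic "count trick"; a minimal sketch (the variable and resource are illustrative):

```hcl
variable "create_eip" {
  default = false
}

# The resource is created only when create_eip is true (count = 1),
# and skipped entirely otherwise (count = 0)
resource "aws_eip" "this" {
  count = var.create_eip ? 1 : 0
  vpc   = true
}
```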

* You still can't pass a "count" argument to a "module" block, but this will probably be implemented later.

April 25, 2020

Datadog agent - Kubernetes logs processing


Datadog has great monitoring abilities, including log monitoring.

In the case where your monitored resource is a Kubernetes cluster, there are a couple of ways to get the logs into Datadog. The first-class solution is to use the Datadog agent for most of the roles.

"Ways" here means how the logs are delivered to Datadog's servers.


Why? Because:

  • The agent is a lightweight compiled binary.
  • Compared to other log delivery agents like fluentd or logstash, the Datadog agent uses far fewer resources for its own activity.
  • Logs, metrics, traces, and everything else the Datadog agent can transmit arrive already correctly tagged and pre-formatted, depending on the agent config, of course.
  • And if you are paying for Datadog, support cases involving their agent should get a better response than integrations where 3rd-party services are involved in log delivery (like other agents or AWS Lambda).

And why is it good to use the agent's pre-formatting features? Well...

  • Log messages are delivered in full (multi_line aggregation).
  • Excluding log patterns can save on your Datadog bill, and your log dashboards will be much cleaner.
  • You can mask sensitive data.
  • You can collect static log files from non-default log directories.
  • Here you can find a bit more on processing rule types.
  • And here, more patterns for known integrations. Log collection integration info usually appears at the end of the specific integration article.

So, here is an example of log pre-processing for a Node.js app running on Kubernetes. With the Datadog Kubernetes DaemonSet, there is an option to pass the log processing rules for Datadog in the app's "annotations".


This means that the app-specific log processing config is defined on the app itself, not on the Datadog agent (which is the default option).
This is great in cases where a log parsing rule change is planned only for a specific app and you don't want to redeploy the Datadog DaemonSet.
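As a sketch (the app and service names are hypothetical), such an annotation on a Node.js app's pod template could look like this, using Datadog's ad.datadoghq.com/<container>.logs autodiscovery annotation with a multi_line rule that joins stack traces to the line that starts with a date:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
spec:
  template:
    metadata:
      annotations:
        # Container name after the slash must match the container in the pod spec
        ad.datadoghq.com/nodejs-app.logs: >-
          [{
            "source": "nodejs",
            "service": "nodejs-app",
            "log_processing_rules": [{
              "type": "multi_line",
              "name": "log_start_with_date",
              "pattern": "\\d{4}-\\d{2}-\\d{2}"
            }]
          }]
```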

Don't forget that all of this is only pre-processing; the main parsing and all indexing happen in your Datadog log pipelines.

If the "source" name in your logs matches one of the known integrations, you will see default dynamic pipelines for that integration with ready-made pipeline rules.

You can simply clone and customize them. See the example below.


I hope this helps to implement Datadog logs collection from Kubernetes apps.



April 24, 2020

Environment Variables in Terraform Cloud Remote Run


It is great to be able to use the outputs (or state data) from another Terraform Cloud workspace.

In most cases, it will be in the same TFC organization.
But one of the required arguments of this "terraform_remote_state" data object is "organization"... Hmm, that is the organization I am already running in.
The second required argument is "name" (the remote workspace name).
Hmm, what if you are using a workspace naming convention or workspace prefixes?

OK, it looks like this can be done easily.
Like any "CI/CD as a service" with remote runners, TFC has unique dynamic (system) environment variables on each run, such as the run ID and the workspace and organization names.
To see which variables exist, just run some TF config with a local-exec "printenv" command; you will see all Key=Value pairs.
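A throwaway sketch of that debug trick:

```hcl
# Debug helper: print every environment variable available during a TFC remote run.
# Remove it once you've noted the variable names you need.
resource "null_resource" "debug_env" {
  provisioner "local-exec" {
    command = "printenv | sort"
  }
}
```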

Note that only system variables with the "TF_VAR_" prefix are accessible to you via Terraform; this has no connection to Terraform providers, which have their own pre-compiled specific system variables (like AWS_DEFAULT_REGION).

Let's get back to our case.

So we have two workspaces in TFC, under the same organization:

  1. A workspace where we bring up AWS VPC, EKS, and MongoDB.
  2. A workspace where we deploy Kubernetes services with Helm charts.

To be able to deploy to the created Kubernetes cluster (EKS), the second workspace must pass Kubernetes authentication first. The Kubernetes services should also get the "MongoDB_URI" string.

That's why we will call the first workspace "demo" and the second "demo-helm".
Then, in the second workspace, a "terraform_remote_state" data object must run before the rest of the objects/resources:
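A sketch of what that can look like. The organization name, the output name, and the TF_VAR_TFC_WORKSPACE_NAME environment variable are assumptions; check the printenv output from your own runner for the exact variable names available, and remember that only TF_VAR_-prefixed variables are readable as Terraform variables.

```hcl
# Assumes the run environment exposes the current workspace name as
# TF_VAR_TFC_WORKSPACE_NAME, making it readable as a Terraform variable.
variable "TFC_WORKSPACE_NAME" {
  default = ""
}

locals {
  # "demo-helm" -> "demo": strip the "-helm" suffix per our naming convention
  infra_workspace = replace(var.TFC_WORKSPACE_NAME, "-helm", "")
}

data "terraform_remote_state" "demo" {
  backend = "remote"

  config = {
    organization = "my-org" # hypothetical organization name
    workspaces = {
      name = local.infra_workspace
    }
  }
}

# Example usage: pull outputs produced by the "demo" workspace,
# assuming it defines an output named "mongodb_uri"
# locals {
#   mongodb_uri = data.terraform_remote_state.demo.outputs.mongodb_uri
# }
```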



I hope this helps a little to avoid defining these things as static variables, and to prevent a Helm deployment to the wrong cluster.