
March 27, 2023

Infrastructure as Code: A Straightforward Introduction (ver. 2)

1. Grasping Infrastructure as Code

    Infrastructure as Code (IaC) is the practice of managing cloud infrastructure through code, streamlining development and operations. In this article, we examine the benefits, the key tools and technologies, best practices, and how to start implementing IaC.

2. Advantages of IaC

IaC brings valuable benefits to cloud architecture and DevOps:

  • Scalability and Flexibility: IaC allows easy adjustment of infrastructure to meet changing demands.
  • Rapid and Consistent Deployments: Automation speeds up provisioning and increases reliability.
  • Collaboration and Version Control: IaC enables teams to work together on infrastructure design and track changes.
3. Tools for IaC

Several IaC tools help manage infrastructure:

  • Terraform: Open-source, multi-cloud tool supporting a variety of cloud providers and platforms.
  • AWS CloudFormation: AWS service for defining, provisioning, and managing AWS resources using JSON or YAML templates.
  • Pulumi: Modern IaC platform that supports multiple languages and cloud providers, allowing developers to write infrastructure code in familiar languages.
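
As a minimal illustration of the idea, here is a sketch of a single AWS resource defined in Terraform (the region and bucket name are illustrative assumptions, not part of any real setup):

```hcl
# Sketch: declare an AWS provider and a single S3 bucket as code.
provider "aws" {
  region = "us-east-1" # assumed region
}

resource "aws_s3_bucket" "example" {
  bucket = "my-example-iac-bucket" # hypothetical; bucket names must be globally unique
}
```

Running `terraform init` and `terraform apply` against this file provisions the bucket; editing the file and re-applying updates it, which is the core IaC loop.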
4. Best Practices in IaC

To maximize IaC effectiveness, adhere to these best practices:

  • Version Control and Modularity: Use version control systems, such as Git, and modularize code for reusability and maintainability.
  • Testing, Validation, and Security: Regularly test IaC code for correctness and security using tools like Terratest or Checkov to prevent issues before production.
  • Continuous Integration and Deployment (CI/CD): Incorporate IaC into CI/CD pipelines for automated infrastructure provisioning and updates, keeping infrastructure aligned with application code.
5. IaC in Action

Numerous organizations have successfully implemented IaC to improve cloud infrastructure management:

  • A well-known e-commerce company utilized Terraform to manage their multi-cloud environment, allowing rapid scaling during peak shopping seasons.
  • A media streaming service employed AWS CloudFormation to automate provisioning of their complex AWS infrastructure, reducing deployment times and increasing consistency across environments.
6. Embarking on IaC

To begin using IaC in your organization, consider these steps:
  1. Choose an IaC tool that suits your needs and supports your chosen cloud provider(s).
  2. Learn the basics of your chosen tool through official documentation, tutorials, and online courses.
  3. Start small by automating the provisioning of a single resource or service, then gradually expand your IaC implementation.
This introduction to Infrastructure as Code provides a clear understanding of the concept, its advantages, and how to implement it. With this knowledge, you can confidently explore IaC and enhance your organization's cloud architecture and DevOps practices.


An Overview of CI/CD: Revolutionizing DevOps and Cloud Architecture (ver. 1)

1. Unveiling CI/CD

    Within contemporary software development and DevOps methodologies, Continuous Integration (CI) and Continuous Deployment (CD) hold paramount importance. These approaches enable developers to merge code changes frequently and reliably while ensuring automated deployments keep applications current and accessible.

2. The Perks of CI/CD

CI/CD implementation bestows several advantages upon development and operations teams:

  • Expedited release cycles: CI/CD pipelines facilitate the swift delivery of novel features and bug fixes to users.
  • Superior code quality: Regular integration and automated testing identify issues early, preventing them from infiltrating production environments.
  • Diminished deployment failure risk: Automated deployments curtail human error and maintain uniformity across settings.
  • Amplified collaboration: CI/CD nurtures a culture of mutual responsibility and cooperation between development and operations units.

3. The Cornerstones of CI/CD

CI/CD hinges on multiple fundamental components:

  • Source control management (e.g., Git) empowers developers to track modifications and cooperate on code.
  • Build automation tools (e.g., Jenkins, GitLab CI/CD) ease the compilation, testing, and packaging of code.
  • Deployment automation and orchestration (e.g., Kubernetes, Docker) streamline application deployment in diverse environments.

4. Prominent CI/CD Tools and Platforms

Numerous CI/CD tools and platforms cater to the varying needs of different teams and projects:

  • Jenkins: An open-source CI/CD server with a vast array of plugins and integrations for extensive customization and adaptability.
  • GitLab CI/CD: A CI/CD platform incorporated within GitLab, providing a seamless experience for teams employing GitLab for source control and issue management.
  • GitHub Actions: Allows teams to establish CI/CD workflows directly in their GitHub repositories, simplifying setup and bolstering integration with other GitHub functionalities.
  • CircleCI: A cloud-native CI/CD platform offering advanced features, such as parallelism and caching, to optimize build performance.

5. Merging CI/CD with Cloud Architecture

CI/CD can be efficiently merged with cloud architectures to further optimize development and deployment processes:

  • Adopting cloud-based CI/CD platforms: Cloud-based CI/CD tools scale on demand, reducing the need for dedicated build infrastructure.
  • Deploying to cloud infrastructure: CI/CD pipelines can automate application deployment to cloud platforms like AWS, Azure, and Google Cloud.
  • Administering cloud resources with Infrastructure as Code (IaC): Integrating IaC into CI/CD pipelines allows teams to manage cloud resources in conjunction with application code, ensuring consistency across environments.

6. Implementing CI/CD: Best Practices 

To fully reap the rewards of CI/CD, adhere to the following best practices:

  • Automate testing and code review processes to detect issues early and guarantee high code quality.
  • Monitor and assess CI/CD pipeline performance, tracking key success metrics to pinpoint areas for enhancement.
  • Safeguard security and compliance within CI/CD pipelines by incorporating security checks, vulnerability scanning, and access controls.

7. In Conclusion

    CI/CD profoundly influences DevOps and cloud architecture, promoting swifter feature delivery, enhanced code quality, and improved collaboration between development and operations teams. By selecting suitable tools and platforms, integrating CI/CD with cloud architecture, and following best practices, organizations can streamline their software development processes and maintain a competitive edge in today's fast-paced landscape. Embracing CI/CD can revolutionize your approach to DevOps and cloud architecture, ultimately leading to more efficient and reliable application development and deployment.

March 22, 2023

The Right Way to Migrate to AWS: A Comprehensive Guide for DevOps Success

The cloud computing landscape is evolving rapidly, and Amazon Web Services (AWS) is at the forefront of this transformation. As more businesses realize the potential of cloud technologies, migrating to AWS becomes a strategic priority. In this blog post, we'll discuss the right way to migrate to AWS, covering key considerations, best practices, and essential steps for a successful DevOps journey.

  1. Assess Your Current Infrastructure

Before starting the migration process, it's crucial to have a clear understanding of your current infrastructure. This includes assessing the applications, databases, and services you are currently using. Identify any dependencies, as well as the required resources and performance metrics for each component. This information will help you make informed decisions about which AWS services to use and how to optimize them during the migration process.

  2. Define Your Migration Strategy

Once you have assessed your current infrastructure, it's time to define your migration strategy. There are several approaches you can take:

  • Rehosting (Lift and Shift): Migrate your existing applications and infrastructure to AWS without making any significant changes. This approach is suitable for a quick migration with minimal downtime.
  • Replatforming (Lift, Tinker, and Shift): Optimize your applications and infrastructure during the migration process by making some changes to take advantage of AWS services.
  • Refactoring: Re-architect your applications and infrastructure to fully utilize AWS native services and features, such as serverless computing and managed databases.

Each strategy has its pros and cons, so choose the one that aligns with your business goals, budget, and timelines.

  3. Create a Detailed Migration Plan

A detailed migration plan will serve as your roadmap throughout the migration process. This plan should include:

  • A list of applications, services, and databases to be migrated
  • The migration strategy for each component
  • A timeline for each migration phase
  • Roles and responsibilities of team members
  • Risk mitigation strategies
  • Contingency plans for potential issues

  4. Choose the Right AWS Services

AWS offers a wide range of services that can help you migrate, manage, and optimize your infrastructure. Some key services to consider include:

  • Amazon EC2 for compute resources
  • Amazon RDS for managed relational databases
  • Amazon S3 for storage
  • AWS Lambda for serverless computing
  • Amazon VPC for networking

Ensure that you select the right services for your needs by considering factors such as performance, scalability, and cost.
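
For illustration, here is a hedged Terraform sketch of provisioning two of the services above (Terraform is just one of several IaC options for this; all names and ids are placeholders):

```hcl
# Sketch: an S3 bucket for migrated assets and a small EC2 instance.
resource "aws_s3_bucket" "migrated_assets" {
  bucket = "example-migrated-assets" # hypothetical bucket name
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI id
  instance_type = "t3.micro"              # assumed size; pick per workload
}
```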

  5. Execute the Migration

With your migration plan in place, it's time to execute the migration. Follow these steps for a smooth migration process:

  • Set up the required AWS services and configure their settings
  • Migrate your data using tools like AWS Database Migration Service (DMS) or AWS Snowball
  • Test the migrated applications and infrastructure to ensure they are functioning correctly
  • Monitor the performance of your applications and infrastructure and optimize them as needed
  • Implement security best practices to protect your AWS environment

  6. Monitor and Optimize Post-Migration

After the migration is complete, it's essential to continually monitor your AWS infrastructure's performance, security, and cost. Utilize tools like Amazon CloudWatch and AWS Trusted Advisor to gain insights into your environment and identify areas for optimization.

Migrating to AWS can be a complex process, but with the right strategy, planning, and execution, it can lead to significant benefits for your organization. By following these best practices and leveraging AWS's robust suite of services, you can ensure a successful DevOps migration that maximizes the potential of the cloud.

May 1, 2020

Implementing "IF" in Terraform "conditional expression"

Terraform Cloud (TFC) can create and manage your infrastructure, roll out updates to both your infrastructure and your apps, and serve as a CI/CD system as well.

I have found TFC to be very productive for both infrastructure development and production infrastructure workflows.
It can also handle workflows where conditions matter, at any stage.

So let's focus now on conditional expressions for Terraform arguments.


Case No 1: You want to create an EC2 VM with a 20 GiB disk by default, but IF the volume_size workspace variable is set THEN it should override the default for that workspace.


To override the default 20 GiB, add a "volume_size" variable to the Terraform workspace with the requested size value (in GiB).
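
A minimal sketch of this conditional (the AMI id and resource names are illustrative assumptions):

```hcl
variable "volume_size" {
  type    = number
  default = null # stays unset unless defined as a workspace variable
}

resource "aws_instance" "vm" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI id
  instance_type = "t3.micro"              # assumed size

  root_block_device {
    # IF the workspace variable is set THEN use it, ELSE fall back to 20 GiB
    volume_size = var.volume_size != null ? var.volume_size : 20
  }
}
```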



Don't forget to queue a run after the change.

Case No 2: You want to keep this volume_size argument in some secrets store, and you are fetching secrets via a "data" source, for example from AWS Secrets Manager.
  • It can be AWS SSM, Vault, or any other place you fetch from with "data".
But IF the volume_size workspace variable is set THEN you want it to override the fetched value.
Well, this is a bit more complicated:
There is original documentation here, but it is still not easy to understand the implementation; in the end, though, it turns out to be super easy.
Imagine that you can create a testing pipeline where each stage is a null_resource with "local-exec" provisioner scripts, the stages depend on each other hierarchically, and every stage is based on the previous stage's outputs.
Write in the comments if you need an example.
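
A hedged sketch of how this could look (the secret id, its JSON shape, and the names are illustrative assumptions):

```hcl
variable "volume_size" {
  type    = number
  default = null # stays unset unless defined as a workspace variable
}

# Fetch the stored value from AWS Secrets Manager via a "data" source
data "aws_secretsmanager_secret_version" "disk" {
  secret_id = "app/disk-config" # hypothetical secret, e.g. {"volume_size": 40}
}

locals {
  # IF the workspace variable is set THEN it wins,
  # ELSE fall back to the value fetched from the secrets store
  volume_size = var.volume_size != null ? var.volume_size : tonumber(
    jsondecode(data.aws_secretsmanager_secret_version.disk.secret_string)["volume_size"]
  )
}
```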

Case No 3: You want to create a resource only IF some variable has a "true" value:

* You still can't pass a "count" argument to a "module", but this will probably be implemented later.
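
A minimal sketch, assuming a boolean variable named "create_bucket" and a hypothetical S3 bucket:

```hcl
variable "create_bucket" {
  type    = bool
  default = false
}

# The resource is created only IF the variable is true:
# count = 1 creates it, count = 0 skips it entirely
resource "aws_s3_bucket" "optional" {
  count  = var.create_bucket ? 1 : 0
  bucket = "example-optional-bucket" # hypothetical name
}
```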

April 25, 2020

Datadog agent - Kubernetes logs processing


Datadog has great monitoring abilities, including log monitoring.

In a case where your monitored resource is a Kubernetes cluster, there are a couple of ways to get the logs. The first-class solution is to use Datadog agents for most of the roles.

"Ways" here means how the logs are delivered to the Datadog servers.


Why? See..

  • The agent is well compiled.
  • Compared to other log delivery agents like fluentd or logstash, Datadog agents use far fewer resources for their own activity.
  • Logs, metrics, traces, and the rest of the things the Datadog agent can transmit arrive already "right tagged" and pre-formatted, depending on the agent config, of course.
  • And if you are paying for Datadog, support cases involving their agent should get a better response than additional integrations where 3rd-party services are involved in log delivery (like other agents or AWS Lambda).

And why is it good to use the agent's pre-formatting features? Well...

  • Log messages are delivered in full (multi_line format).
  • Excluding log patterns can save on your Datadog bill, and your log dashboard will be much cleaner.
  • Sensitive data can be masked.
  • Static log files can be picked up from non-default log directories.
  • Here you can find a bit more on processing rule types.
  • And here, more about patterns for known integrations. Logs collection info usually appears at the end of the specific integration article.

So, here is an example of log pre-processing for a nodejs app that runs on Kubernetes. With the Datadog Kubernetes daemonset there is an option to pass the log processing rule for Datadog in the app's "annotations".
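
For example, a sketch of such a Deployment annotation (the app name, image, and multi_line pattern are assumptions for illustration):

```yaml
# Sketch: a Deployment carrying a Datadog Autodiscovery log config annotation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
spec:
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
      annotations:
        # the <container_name> part of the key must match the container below
        ad.datadoghq.com/nodejs-app.logs: >-
          [{
            "source": "nodejs",
            "service": "nodejs-app",
            "log_processing_rules": [{
              "type": "multi_line",
              "name": "log_start_with_date",
              "pattern": "\\d{4}-\\d{2}-\\d{2}"
            }]
          }]
    spec:
      containers:
        - name: nodejs-app
          image: example/nodejs-app:latest # hypothetical image
```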


This means that the app-specific log processing config is defined on the app, and not on the Datadog agent (which is the default option).
And this is great in cases where a log parsing rule change is planned only for a specific app and you don't want to redeploy the Datadog daemonset.

Don't forget that all of this is only pre-processing; the main parsing and all the indexing happen in your Datadog log pipelines.

If the "source" name in your logs matches one of the known integrations, you will see a dynamic default pipeline for that integration with ready-made pipeline rules.

You can just clone them and customize them.


I hope this helps to implement Datadog logs collection from Kubernetes apps.



April 24, 2020

Environment Variables in Terraform Cloud Remote Run


It is great to be able to use the output (or state data) from another Terraform Cloud workspace.

In most cases, it will be in the same TFC organization.
But one of the required arguments of this "terraform_remote_state" data object is "organization"... Hmm, that is the organization I am already running in.
The second required argument is "name" (the remote workspace name).
Hmm, what if you are using some workspace naming convention or workspace prefixes?

Ok, it looks like this can be done easily.
Like any "CICDaaS" with remote runners, TFC has unique dynamic (system) environment variables on each run, like the "runID" and the workspace and organization names.
To see which variables exist, just run some TF config with a local-exec command "printenv" and you will see all the Key=Value pairs.

Note that only system variables with the "TF_VAR_" prefix are accessible to you via Terraform; this has no connection to "terraform providers", which have pre-compiled specific system variables for their own needs (like AWS_DEFAULT_REGION).
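
A minimal sketch of such a config, using the null provider's local-exec provisioner:

```hcl
# Sketch: dump the run's environment variables into the TFC run log.
resource "null_resource" "show_env" {
  provisioner "local-exec" {
    command = "printenv | sort"
  }
}
```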

Let's get back to our case.

So we have two workspaces in TFC, under the same organization:

  1. Workspace where we are bringing up AWS VPC and EKS, MongoDB
  2. Workspace where we will deploy Kubernetes services with helm charts.
To be able to deploy to the created Kubernetes cluster (EKS), the "second" workspace must pass Kubernetes authentication first. Also, the Kubernetes services should get the "MongoDB_URI" string.

That's why we will call the "first" workspace "demo" and the "second" one "demo-helm".
Then, in the "second" workspace, a "terraform_remote_state" data object must run before the rest of the objects/resources:
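
A hedged sketch of that data object (the organization name and the output names are illustrative assumptions):

```hcl
# Sketch: inside "demo-helm", read outputs from the "demo" workspace.
data "terraform_remote_state" "demo" {
  backend = "remote"

  config = {
    organization = "my-org" # hypothetical organization name

    workspaces = {
      name = "demo"
    }
  }
}

locals {
  # Values the "demo" workspace is assumed to expose as outputs
  eks_cluster_name = data.terraform_remote_state.demo.outputs.eks_cluster_name
  mongodb_uri      = data.terraform_remote_state.demo.outputs.mongodb_uri
}
```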



I hope this helps a little: you avoid defining these things in variables, and by its nature it prevents helm deployments to the wrong cluster.