March 22, 2023

The Right Way to Migrate to AWS: A Comprehensive Guide for DevOps Success

The cloud computing landscape is evolving rapidly, and Amazon Web Services (AWS) is at the forefront of this transformation. As more businesses realize the potential of cloud technologies, migrating to AWS becomes a strategic priority. In this blog post, we'll discuss the right way to migrate to AWS, covering key considerations, best practices, and essential steps for a successful DevOps journey.

  1. Assess Your Current Infrastructure

Before starting the migration process, it's crucial to have a clear understanding of your current infrastructure. This includes assessing the applications, databases, and services you are currently using. Identify any dependencies, as well as the required resources and performance metrics for each component. This information will help you make informed decisions about which AWS services to use and how to optimize them during the migration process.

  2. Define Your Migration Strategy

Once you have assessed your current infrastructure, it's time to define your migration strategy. There are several approaches you can take:

  • Rehosting (Lift and Shift): Migrate your existing applications and infrastructure to AWS without making any significant changes. This approach is suitable for a quick migration with minimal downtime.
  • Replatforming (Lift, Tinker, and Shift): Optimize your applications and infrastructure during the migration process by making some changes to take advantage of AWS services.
  • Refactoring: Re-architect your applications and infrastructure to fully utilize AWS native services and features, such as serverless computing and managed databases.

Each strategy has its pros and cons, so choose the one that aligns with your business goals, budget, and timelines.

  3. Create a Detailed Migration Plan

A detailed migration plan will serve as your roadmap throughout the migration process. This plan should include:

  • A list of applications, services, and databases to be migrated
  • The migration strategy for each component
  • A timeline for each migration phase
  • Roles and responsibilities of team members
  • Risk mitigation strategies
  • Contingency plans for potential issues

  4. Choose the Right AWS Services

AWS offers a wide range of services that can help you migrate, manage, and optimize your infrastructure. Some key services to consider include:

  • Amazon EC2 for compute resources
  • Amazon RDS for managed relational databases
  • Amazon S3 for storage
  • AWS Lambda for serverless computing
  • Amazon VPC for networking

Ensure that you select the right services for your needs by considering factors such as performance, scalability, and cost.
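
Where Terraform is part of your toolchain, a minimal sketch of provisioning a couple of these building blocks could look like the following (the region, AMI ID, and bucket name are placeholder assumptions, not recommendations):

    # Assumed AWS provider configuration; the region is an example value.
    provider "aws" {
      region = "us-east-1"
    }

    # A basic EC2 compute instance.
    resource "aws_instance" "app" {
      ami           = "ami-0123456789abcdef0" # placeholder AMI ID
      instance_type = "t3.medium"

      tags = {
        Name = "migrated-app"
      }
    }

    # An S3 bucket for object storage.
    resource "aws_s3_bucket" "assets" {
      bucket = "my-company-migration-assets" # bucket names must be globally unique
    }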

  5. Execute the Migration

With your migration plan in place, it's time to execute the migration. Follow these steps for a smooth migration process:

  • Set up the required AWS services and configure their settings
  • Migrate your data using tools like AWS Database Migration Service (DMS) or AWS Snowball (see the DMS sketch after this list)
  • Test the migrated applications and infrastructure to ensure they are functioning correctly
  • Monitor the performance of your applications and infrastructure and optimize them as needed
  • Implement security best practices to protect your AWS environment
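
For the data-migration step above, here is a hedged Terraform sketch of a minimal DMS setup (all identifiers, hostnames, and credentials are placeholders; a real migration also needs networking, IAM roles, and engine-specific settings):

    # Replication instance that runs the migration tasks.
    resource "aws_dms_replication_instance" "this" {
      replication_instance_id    = "migration-instance"
      replication_instance_class = "dms.t3.medium"
      allocated_storage          = 50
    }

    # Source database (an on-premises MySQL server in this example).
    resource "aws_dms_endpoint" "source" {
      endpoint_id   = "onprem-mysql"
      endpoint_type = "source"
      engine_name   = "mysql"
      server_name   = "db.onprem.example.com" # placeholder hostname
      port          = 3306
      username      = "dms_user"
      password      = "change-me" # placeholder; keep real credentials in a secrets store
    }

    # Target database (an Amazon RDS MySQL instance in this example).
    resource "aws_dms_endpoint" "target" {
      endpoint_id   = "rds-mysql"
      endpoint_type = "target"
      engine_name   = "mysql"
      server_name   = "migrated-db.abc123.us-east-1.rds.amazonaws.com" # placeholder RDS endpoint
      port          = 3306
      username      = "dms_user"
      password      = "change-me" # placeholder; keep real credentials in a secrets store
    }

    # Full-load task copying all schemas and tables from source to target.
    resource "aws_dms_replication_task" "full_load" {
      replication_task_id      = "full-load"
      migration_type           = "full-load"
      replication_instance_arn = aws_dms_replication_instance.this.replication_instance_arn
      source_endpoint_arn      = aws_dms_endpoint.source.endpoint_arn
      target_endpoint_arn      = aws_dms_endpoint.target.endpoint_arn

      table_mappings = jsonencode({
        rules = [{
          "rule-type"      = "selection"
          "rule-id"        = "1"
          "rule-name"      = "include-all"
          "object-locator" = { "schema-name" = "%", "table-name" = "%" }
          "rule-action"    = "include"
        }]
      })
    }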

  6. Monitor and Optimize Post-Migration

After the migration is complete, it's essential to continually monitor your AWS infrastructure's performance, security, and cost. Utilize tools like Amazon CloudWatch and AWS Trusted Advisor to gain insights into your environment and identify areas for optimization.
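
As one concrete example of post-migration monitoring, here is a hedged Terraform sketch of a basic CloudWatch CPU alarm (the instance ID is a placeholder, and the notification target is left out):

    # Alert when average CPU stays above 80% for two consecutive 5-minute periods.
    resource "aws_cloudwatch_metric_alarm" "high_cpu" {
      alarm_name          = "migrated-app-high-cpu"
      namespace           = "AWS/EC2"
      metric_name         = "CPUUtilization"
      statistic           = "Average"
      comparison_operator = "GreaterThanThreshold"
      threshold           = 80
      period              = 300
      evaluation_periods  = 2

      dimensions = {
        InstanceId = "i-0123456789abcdef0" # placeholder instance ID
      }

      # alarm_actions = [aws_sns_topic.alerts.arn] # hypothetical SNS topic for notifications
    }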

Migrating to AWS can be a complex process, but with the right strategy, planning, and execution, it can lead to significant benefits for your organization. By following these best practices and leveraging AWS's robust suite of services, you can ensure a successful DevOps migration that maximizes the potential of the cloud.

May 1, 2020

Implementing "IF" in Terraform "conditional expression"

Terraform Cloud (TFC) can create and manage your infrastructure, roll out updates to your infrastructure and your apps, and serve as CI/CD as well.

I have found TFC very productive for both infrastructure development and production infrastructure workflows.
It can also handle workflows where conditions matter, at any stage.

So here I will focus on conditions for Terraform arguments.


Case No 1: You want to create an EC2 instance with a disk size of 20 GiB by default, but IF the volume_size workspace variable is set, THEN let it override the default for this workspace.
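
A minimal sketch, assuming a number variable named volume_size that stays at 0 when the workspace variable is not set:

    # Workspace-level override; leave at 0 to keep the default size.
    variable "volume_size" {
      type    = number
      default = 0
    }

    resource "aws_instance" "vm" {
      ami           = "ami-0123456789abcdef0" # placeholder AMI ID
      instance_type = "t3.micro"

      root_block_device {
        # IF volume_size is set (non-zero) THEN use it, ELSE fall back to 20 GiB.
        volume_size = var.volume_size != 0 ? var.volume_size : 20
      }
    }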


To override the default 20 GiB, add a "volume_size" variable to the Terraform workspace with the requested size value (in GiB).

Don't forget to queue a run after the change.

Case No 2: You want to keep this volume_size value in some secrets store and fetch it via a "data" object, for example from AWS Secrets Manager.
  • It can be AWS SSM, Vault, or any other place you fetch from with "data".
But IF the volume_size workspace variable is set, THEN you want it to override the value from the secrets store.
Well, this is a bit more complicated:
There is official documentation for this, but it is still not easy to understand the implementation from it; in the end, though, it turns out to be super easy.
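
A minimal sketch, assuming the secret is stored in AWS Secrets Manager as JSON with a (hypothetical) volume_size key, under a placeholder secret name "infra/defaults":

    # Workspace-level override; empty by default.
    variable "volume_size" {
      type    = string
      default = ""
    }

    data "aws_secretsmanager_secret_version" "defaults" {
      secret_id = "infra/defaults" # placeholder secret name
    }

    locals {
      secret_volume_size = jsondecode(data.aws_secretsmanager_secret_version.defaults.secret_string)["volume_size"]

      # IF the workspace variable is set THEN it wins, ELSE use the value from the secrets store.
      volume_size = var.volume_size != "" ? var.volume_size : local.secret_volume_size
    }

    resource "aws_instance" "vm" {
      ami           = "ami-0123456789abcdef0" # placeholder AMI ID
      instance_type = "t3.micro"

      root_block_device {
        volume_size = local.volume_size
      }
    }
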
Imagine that you can create a testing pipeline where each stage is a null_resource with a "local-exec" provisioner script, the stages depend on each other in a hierarchy, and every stage is based on the outputs of the previous step.
Write in comments if you need an example.
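
Since the post invites an example, here is a minimal sketch of such a chain (both commands are hypothetical scripts):

    # Stage 1: build.
    resource "null_resource" "build" {
      provisioner "local-exec" {
        command = "./build.sh" # hypothetical script
      }
    }

    # Stage 2: test, runs only after the build stage has finished.
    resource "null_resource" "test" {
      depends_on = [null_resource.build]

      provisioner "local-exec" {
        command = "./run-tests.sh" # hypothetical script
      }
    }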

Case No 3: You want to create a resource only IF some variable has a "true" value:
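
A minimal sketch using the count meta-argument as the "IF" (the aws_eip resource and the variable name are just examples):

    variable "create_eip" {
      type    = bool
      default = false
    }

    # Created only when create_eip is true; count = 0 skips the resource entirely.
    resource "aws_eip" "this" {
      count = var.create_eip ? 1 : 0
    }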

* You still can't pass a "count" argument to a "module" block, but it will probably be implemented later.

April 25, 2020

Datadog agent - Kubernetes logs processing


Datadog has great monitoring abilities, including log monitoring.

In the case where your monitored resource is a Kubernetes cluster, there are a couple of ways to get the logs. The first-class solution is to use the Datadog agent for most of the roles.

"Ways" here means ways to deliver the logs to the Datadog servers.


Why? See..

  • The agent is a well-built, compiled binary
  • Compared to other log delivery agents like fluentd or logstash, the Datadog agent uses far fewer resources for its own activity.
  • Logs, metrics, traces, and the rest of the things the Datadog agent can transmit arrive already properly tagged and pre-formatted, depending on the agent config of course.
  • And if you are paying for Datadog, support cases about their agent should get a better response than ones about additional integrations where 3rd-party services are involved in log delivery (like other agents or AWS Lambda)

And why is it good to use the agent's pre-formatting features? Well...

  • Log messages are delivered in full form (multi_line aggregation)
  • Excluded log patterns can save on your Datadog bill, and your log dashboard will also be much cleaner.
  • Sensitive data can be masked
  • Static log files can be picked up from non-default log directories
  • Here you can find a bit more on the processing rule types
  • And here is more about patterns for known integrations. Log collection integration info usually appears at the end of the specific integration article.

So, here is an example of log pre-processing for a Node.js app that runs on Kubernetes. With the Datadog Kubernetes DaemonSet there is an option to pass the log processing rules for Datadog in the app's "annotations", as sketched below.
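
One way this could look, sketched here with the Terraform Kubernetes provider rather than a raw manifest (the app name, image, and the multi_line pattern are placeholder assumptions; the annotation key follows Datadog Autodiscovery's ad.datadoghq.com/<container>.logs format):

    # Assumes a configured "kubernetes" provider and a Datadog agent DaemonSet with log collection enabled.
    resource "kubernetes_deployment" "nodejs_app" {
      metadata {
        name = "nodejs-app"
      }

      spec {
        replicas = 1

        selector {
          match_labels = { app = "nodejs-app" }
        }

        template {
          metadata {
            labels = { app = "nodejs-app" }

            annotations = {
              # Log config for the "nodejs-app" container, including a pre-processing rule
              # that joins multi-line entries starting with a date into one log message.
              "ad.datadoghq.com/nodejs-app.logs" = jsonencode([{
                source  = "nodejs"
                service = "nodejs-app"
                log_processing_rules = [{
                  type    = "multi_line"
                  name    = "log_start_with_date"
                  pattern = "\\d{4}-\\d{2}-\\d{2}"
                }]
              }])
            }
          }

          spec {
            container {
              name  = "nodejs-app"
              image = "node:14-alpine" # placeholder image
            }
          }
        }
      }
    }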


This means that the app-specific log processing config is defined on the app itself, and not on the Datadog agent (which is also possible, and is the default place for it).
And this is great in cases where a log parsing rule change is planned only for a specific app and you don't want to redeploy the Datadog DaemonSet.

Don't forget that all of this is only pre-processing; the main parsing and all of the indexing happen in your Datadog log pipelines.

If the "source" name in your logs is the same as one of the known integrations, you will see a dynamic default pipeline for this integration with ready-made pipeline rules.

You can just clone them and customize them.


I hope this helps to implement Datadog logs collection from Kubernetes apps.



April 24, 2020

Environment Variables in Terraform Cloud Remote Run


It is great to be able to use the outputs (or state data) from another Terraform Cloud workspace.

In most cases, it will be in the same TFC organization.
But one of the required arguments of this "terraform_remote_state" data object is "organization"... Hmm, that is the organization I am running in right now.
The second required argument is "name" (the remote workspace name).
Hmm, what if you are using some workspace naming convention or workspace prefixes?

OK, it looks like it can be done easily.
Like any "CICDaaS" with remote runners, TFC has unique dynamic (system) environment variables on each run, such as the run ID and the workspace and organization names.
To see which variables exist, just run some TF config with a local-exec "printenv" command and you will see all of the Key=Value pairs.
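
A minimal sketch of that trick:

    # Throwaway resource that just prints the runner's environment variables into the run log.
    resource "null_resource" "print_env" {
      provisioner "local-exec" {
        command = "printenv | sort"
      }
    }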

Note that only system environment variables with the "TF_VAR_" prefix are accessible to you from Terraform code; this has no connection to "terraform providers", which have their own pre-compiled specific system variables for their own needs (like AWS_DEFAULT_REGION).

Let's get back to our case.

So we have two workspaces in TFC, under the same organization:

  1. A workspace where we bring up the AWS VPC, EKS, and MongoDB
  2. A workspace where we deploy Kubernetes services with Helm charts.

To be able to deploy to the created Kubernetes cluster (EKS), the "second" workspace must pass Kubernetes authentication first. Also, the Kubernetes services should get the "MongoDB_URI" string.

That's why we will call the "first" workspace "demo" and the "second" one "demo-helm".
Then, in the "second" workspace, a "terraform_remote_state" data object must be read before the rest of the objects/resources:
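
A minimal sketch of that data object in the "demo-helm" workspace (the organization name and the output names are assumptions, and reading the current workspace name from a TF_VAR_-prefixed environment variable is just one hypothetical way to avoid hard-coding it):

    # Assumption: the run environment provides the current workspace name via an
    # environment variable such as TF_VAR_current_workspace; otherwise the default is used.
    variable "current_workspace" {
      type    = string
      default = "demo-helm"
    }

    data "terraform_remote_state" "demo" {
      backend = "remote"

      config = {
        organization = "my-org" # placeholder organization name

        workspaces = {
          # "demo-helm" -> "demo": derive the infrastructure workspace from the naming convention.
          name = replace(var.current_workspace, "-helm", "")
        }
      }
    }

    # Example usage, assuming the "demo" workspace exposes these (hypothetical) outputs.
    provider "kubernetes" {
      host                   = data.terraform_remote_state.demo.outputs.eks_endpoint
      cluster_ca_certificate = base64decode(data.terraform_remote_state.demo.outputs.eks_ca_cert)
      token                  = data.terraform_remote_state.demo.outputs.eks_token
    }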



I hope this helps a little: you avoid defining these things in variables by hand, and by its nature it prevents a Helm deployment on the wrong cluster.