Migrating existing AWS infrastructure to a new account
Researching how to import existing AWS infrastructure into configuration templates
Scenario
We want to migrate hundreds of AWS resources to a new account. Up to now, all resources were created manually using the AWS Console or the CLI.
CloudFormation
CloudFormation is AWS's in-house provisioning tool for declaratively describing your cloud resources.
CloudFormation is a hard, complex, inconsistent, and poorly documented piece of software. A stack is an atomic collection of resources in CloudFormation: its creation or update succeeds if and only if all the resources within the stack succeed. A failure during a stack update triggers a rollback to the previous state.
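As a minimal sketch of what that means in practice (the stack name and template file below are hypothetical), a stack is deployed from a template with the AWS CLI, and a failed update can be watched rolling back through the stack events:
# Deploy (create or update) a stack from a template; the IAM capability is only needed if the template creates IAM resources
aws cloudformation deploy --stack-name migrated-stack \
  --template-file cloudformation.yml \
  --capabilities CAPABILITY_NAMED_IAM
# Watch resource-level events, including ROLLBACK_IN_PROGRESS after a failure
aws cloudformation describe-stack-events --stack-name migrated-stack \
  --query 'StackEvents[].[ResourceStatus,LogicalResourceId,ResourceStatusReason]' \
  --output table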
Authoring CloudFormation templates by hand is dreadful, so we want a head start and generate templates from the existing resources.
Generate CloudFormation templates from your existing AWS resources.
Former2
Former2 allows you to generate Infrastructure-as-Code outputs from your existing resources within your AWS account. By making the relevant calls using the AWS JavaScript SDK, Former2 will scan across your infrastructure and present you with the list of resources for you to choose which to generate outputs for.
Installation
npm install -g former2
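The CLI uses the AWS JavaScript SDK, so it picks up credentials from the standard AWS credential chain; a minimal sketch, assuming a hypothetical named profile for the source account:
export AWS_PROFILE=source-account   # hypothetical profile from ~/.aws/credentials
export AWS_REGION=us-west-2         # region to scan (assuming the region is read from the environment)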
Generation
former2 generate --output-cloudformation "cloudformation.yml"
Error
(node:14732) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 4)
(node:14732) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
(node:14732) UnhandledPromiseRejectionWarning: null
AWS CloudFormer
AWS CloudFormer is a template creation beta tool that creates an AWS CloudFormation template from existing AWS resources in your account. You select any supported AWS resources that are running in your account, and CloudFormer creates a template in an Amazon S3 bucket. We want AWS CloudFormer to create a template of existing infrastructure.
CloudFormer is itself an AWS CloudFormation stack, so the first step is to create and launch the stack from the AWS CloudFormation console. The generated template may have some issues or syntax errors, but it is still far better than writing the whole template from scratch.
Prerequisites for CloudFormer: create a VPC, create a VPC internet gateway, and create a CloudFormation role.
Due to insufficient permissions in the source account, we failed to create the CloudFormer stack:
SOCCloudFormer ROLLBACK_IN_PROGRESS The following resource(s) failed to create: [VPCInternetGateway, VPC, CFNRole]. . Rollback requested by user.
VPC CREATE_FAILED API: ec2:CreateVpc You are not authorized to perform this operation.
CFNRole CREATE_FAILED API: iam:CreateRole User: arn:aws:sts::********************:assumed-role/******/****** is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::******:role/SOCCloudFormer-CFNRole-65C5TRSYD8RG
VPCInternetGateway CREATE_FAILED API: ec2:CreateInternetGateway You are not authorized to perform this operation.
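These failures could have been checked for up front: the EC2 API accepts a dry-run flag, and IAM permissions can be simulated for the calling principal. A minimal sketch, with a hypothetical role ARN:
# A DryRunOperation error means the call would be allowed, UnauthorizedOperation means it would not
aws ec2 create-vpc --cidr-block 10.0.0.0/16 --dry-run
aws ec2 create-internet-gateway --dry-run
# Simulate iam:CreateRole for the assumed role (hypothetical ARN)
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:role/SourceAccountRole \
  --action-names iam:CreateRole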
So we cannot use the above tools to import existing resources from the source account, most likely due to insufficient permissions. We would have to author all templates manually, which could be time-consuming for hundreds of resources. Worse, CloudFormation does not support creating DynamoDB Global Tables at all: global tables were introduced in late 2017, but you can still create them only through the AWS Console or the AWS CLI.
AWS::DynamoDB::GlobalTable - Feature Request
Tables need to be created in every region first, and only when all the regional tables are created and ready can the global table be created, using the AWS Console or the AWS CLI. This means the configuration will not be fully declarative.
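A minimal sketch of that manual procedure with the AWS CLI, using a hypothetical table name and region pair (the 2017.11.29 global tables require streams with new and old images enabled on every replica):
# Create an identically named table with streams enabled in each region
aws dynamodb create-table --table-name example-table \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES \
  --region us-west-2
# ...repeat for every other region, then link the replicas into a global table
aws dynamodb create-global-table --global-table-name example-table \
  --replication-group RegionName=us-west-2 RegionName=eu-west-1 \
  --region us-west-2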
Terraform
Terraform is an open-source infrastructure as code software tool created by HashiCorp. It enables users to define and provision infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON. Terraform is platform-agnostic; you can use it to manage bare metal servers or cloud providers such as AWS, Google Cloud Platform, OpenStack, and Azure. Terraform enables you to safely and predictably create, change, and improve infrastructure. It codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
Terraform is controlled via an easy-to-use command-line interface (CLI). It ships as a single command-line application, terraform, which takes a subcommand such as “apply” or “plan”.
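The typical workflow, as a minimal sketch:
terraform init    # download the provider plugins (e.g. the AWS provider)
terraform plan    # preview what would change
terraform apply   # create or update the resources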
Import existing AWS infrastructure into Terraform
Terraform provides an import command to bring existing infrastructure into your Terraform state. It finds the specified resource and imports it into the state, allowing existing infrastructure to come under Terraform management without having been created by Terraform in the first place.
terraform import aws_dynamodb_table.basic-dynamodb-table local-basic-dynamodb-table
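Importing can be scripted, but terraform import only populates the state; a matching resource block still has to exist in the configuration for every resource. A minimal sketch for all DynamoDB tables in a region, assuming each table already has a corresponding aws_dynamodb_table block named after it:
# Import every table in the region into the Terraform state
for table in $(aws dynamodb list-tables --query 'TableNames[]' --output text); do
  terraform import "aws_dynamodb_table.${table}" "${table}"
done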
But what about the hundreds of resources we have to migrate?
Terraformer
Terraformer is a CLI tool that generates Terraform files from existing infrastructure (reverse Terraform).
terraformer import aws --resources=dynamodb,s3,kms,lambda,elasticache --regions=us-west-2
Terraformer by default separates each resource into a file, which is put into a given service directory. The default path for resource files is {output}/{provider}/{service}/{resource}.tf and can vary for each provider.
Just wait a few minutes and you get a folder with all the configuration files organized by service type:
generated
  aws
    dynamodb
    elasticache
    kms
    lambda
    s3
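To check that the generated configuration matches the live resources, a plan can be run inside one of the generated service directories; a minimal sketch, assuming Terraform and the AWS provider are installed and the same credentials are configured:
cd generated/aws/dynamodb
terraform init
terraform plan   # should report no changes if the generated files match reality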
In the end, we do not have to write configuration templates from scratch: everything was imported with a locally installed CLI tool using the current AWS credentials. This is the beginning of the journey to infrastructure as code.