Using Terraform to deploy in multiple data centers on Triton
In this post, you'll learn how to implement a blue-green deployment across multiple Triton data centers. Over the past few months, I've posted a number of tutorials which lay the foundation for this post: create custom infrastructure images with Packer, get started deploying a simple application with Terraform, and use Terraform for blue-green deployments.
There are a number of reasons to consider deploying your application to more than one data center. For starters, it's one of the best ways to maintain availability, greatly reducing the risk from events outside of your control, such as a data center failure or internet connectivity issues between countries or regions. You can also minimize latency on a global scale by maintaining copies of an application in the data centers nearest the majority of your users.
Multi-data center deployments also let you grow capacity for your visitors by expanding across regions. You could even follow a hybrid approach and split the application between on-prem and the cloud. Triton is a public, private, and hybrid compute service, a perfect match for all of those deployment models.
By adding the blue-green deployment workflow to the mix, you can ensure that updates to those multi-data center deployments go smoothly. After all, no two applications have the same requirements, and it's always better to test before sending an application to production.
VERY IMPORTANT NOTE
Do not delete the `.tfstate` file. I encountered numerous headaches as Terraform tried to recreate instances and DNS records that it had already created. If you do delete the `.tfstate` file and try to continue with the demo, you're going to have to go back and start from scratch. That means you have to manually delete any records set up on Cloudflare and manually remove instances on Triton.
In the future, we'll publish a blog post about how you can prevent this headache by storing the state files with Triton Object Storage.
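In the meantime, here's a rough sketch of what that can look like using Terraform's Manta backend with Triton Object Storage credentials in your environment. The attribute spellings have changed between Terraform releases, so treat this as illustrative and check the backend documentation for your version:
<code class="language-hcl"># Illustrative only: keep state in Triton Object Storage (Manta)
# instead of a local .tfstate file that can be deleted by accident.
terraform {
  backend "manta" {
    # attribute spelling varies by Terraform version (objectName vs object_name)
    path        = "terraform-state"       # a folder under /<account>/stor
    object_name = "happy-bg-dcs.tfstate"  # hypothetical object name
  }
}</code>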
Want to skip the step-by-step and see the source code? Check it out on GitHub.
New Terraform terms used in this post
There were a number of terms defined in the first Terraform post, but there's one more that deserves special attention going forward.
Modules: modules are self-contained packages of Terraform configuration. They can be used to create reusable components as well as basic organization of code. In this example, we will use modules for data center configuration and for setting up Cloudflare. Think of modules like functions: modules have input variables and output variables.
The only required attribute in a module block is `source`, which tells Terraform where to find the module's configuration: the data sources and resources that, in turn, tell Terraform what to build.
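To make the function analogy concrete, here's a minimal, hypothetical module and a call to it (the `greeting` variable and `./modules/hello` path are invented for illustration):
<code class="language-hcl"># ./modules/hello/interface.tf -- the module body
variable "greeting" {
  description = "An input variable, like a function parameter."
}

output "message" {
  value = "${var.greeting}, world!" # like a function's return value
}

# main.tf -- the call site; source is the only required attribute
module "hello" {
  source   = "./modules/hello"
  greeting = "Hi"
}</code>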
Cloudflare, like Triton, is a provider. Using this provider requires having a Cloudflare account and API access.
Prerequisites
If you haven't already installed Terraform, go back to step 1 in setting up a simple application with Terraform. Additionally, you should have signed up for a Triton account and installed the `triton` CLI.
You should already have version 1.0 and version 1.1 of the Happy Randomizer image in each of the data centers to which you'll be deploying your application.
NOTE: While I think it's fun to use a custom application where you can see the difference (as well as make use of a prior exercise), you could just as easily run through this post with two stock Ubuntu images: `ubuntu-14.04` and `ubuntu-16.04`. Check out all of the available stock images by executing `triton images`.
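If you'd like to confirm the images exist in every data center before starting, you can point the CLI at each one. A quick sketch, assuming you've created a `triton` CLI profile per data center (the profile names here are hypothetical):
<code class="language-sh"># List the Happy Randomizer images in each data center
$ triton -p us-east-1 images name=happy_randomizer
$ triton -p us-sw-1 images name=happy_randomizer
$ triton -p us-west-1 images name=happy_randomizer</code>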
Step 1: Set up variables and modules
If you've read our post on using Terraform for blue-green deployments, a lot of this content will look familiar to you. Many of the variables stay the same, including `service_production`, `blue_image_name`, `green_package_name`, and `service_networks`, to name a few. Missing from that variable list are `blue_count` and `green_count`, as those numbers are decided for each data center individually.
We're going to create Triton infrastructure managed by Terraform using the composition root design pattern. In this pattern, we'll use a single driver file, typically named `main.tf`, and call into modules in order to effect change. The concept of a single driver which calls into modules makes it easy to understand what Terraform will do, and it enables good reuse of local Terraform modules.
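By the end of this tutorial, the project will have the following layout:
<code class="language-sh">happy-bg-dcs/
├── main.tf              # composition root: module calls and outputs
├── variables.tf         # shared variables for all modules
└── modules/
    ├── service/
    │   └── interface.tf # Triton provider, machines, per-DC outputs
    └── dns/
        └── interface.tf # Cloudflare provider and A records</code>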
Create and change into the directory for our new application deployment:
<code class="language-sh">$ mkdir happy-bg-dcs$ cd happy-bg-dcs</code>
Then, create your `variables.tf` file and open it in your IDE of choice.
<code class="language-sh">$ touch variables.tf$ vi variables.tf</code>
Add the initial set of variables
Copy and paste the first set of variables which will define the production instances, service name, networks, image names, versions, and packages.
NOTE: The Triton provider makes use of the environment variables we've already set up for CloudAPI. The Cloudflare provider has its own set of environment variables. All of these could also be configured using Terraform variables. Regardless of where the values are stored, best practice dictates that we do not store API keys, credentials, or other provider-specific details in Terraform configuration files. Instead, we store them in Terraform variable files or pass them in as environment variables. Read more on direnv.
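For example, a `.envrc` for direnv might look like the following. Every value is a placeholder; the variable names are the ones the Triton and Cloudflare providers read from the environment:
<code class="language-sh"># .envrc -- loaded by direnv; all values below are placeholders
export TRITON_ACCOUNT=your_account_name
export TRITON_KEY_ID=your_ssh_key_fingerprint
export TRITON_URL=https://us-east-1.api.joyent.com
export CLOUDFLARE_EMAIL=you@example.com
export CLOUDFLARE_TOKEN=your_global_api_key</code>
With credentials out of the way, here's the first set of variables: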
<code class="language-hcl">## Details about all deployments of this application#variable "service_production" { type = "string" description = "Which deployment is considered 'production'? The other is 'staging'. Value can be one of 'blue' or 'green'." default = "blue"}variable "service_name" { type = "string" description = "The name of the service in CNS." default = "happiness"}variable "service_networks" { type = "list" default = ["Joyent-SDC-Public"]}## Details about the "blue" deployment#variable "blue_image_name" { type = "string" description = "The name of the image for the 'blue' deployment." default = "happy_randomizer"}variable "blue_image_type" { type = "string" default = "lx-dataset"}variable "blue_image_version" { type = "string" description = "The version of the image for the 'blue' deployment." default = "1.0.0"}variable "blue_package_name" { type = "string" description = "The package to use when making a blue deployment." default = "g4-highcpu-128M"}## Details about the "green" deployment#variable "green_image_name" { type = "string" description = "The name of the image for the 'green' deployment." default = "happy_randomizer"}variable "green_image_type" { type = "string" default = "lx-dataset"}variable "green_image_version" { type = "string" description = "The version of the image for the 'green' deployment." default = "1.1.0"}variable "green_package_name" { type = "string" description = "The package to use when making a green deployment." default = "g4-highcpu-128M"}</code>
Add modules for each data center
We'll be deploying our application to `us-east-1`, `us-sw-1`, and `us-west-1`. Each module will include the data center name, instance counts, and a source where Terraform can find the actionable code (defining providers, data sources, and resources). For now, that source will be a directory two levels down, which we'll create later in this tutorial.
Create the driver file for our composition root pattern, named `main.tf`, and open it in your IDE of choice:
<code class="language-sh">$ touch main.tf$ vi main.tf</code>
Copy the content below into `main.tf`:
<code class="language-hcl">## Details about the deployments for each data center#module "east" { source = "./modules/service" region_name = "us-east-1" blue_count = 3 green_count = 3 service_production = "${var.service_production}" service_name = "${var.service_name}" service_networks = "${var.service_networks}" blue_image_name = "${var.blue_image_name}" blue_image_type = "${var.blue_image_type}" blue_image_version = "${var.blue_image_version}" blue_package_name = "${var.blue_package_name}" green_image_name = "${var.green_image_name}" green_image_type = "${var.green_image_type}" green_image_version = "${var.green_image_version}" green_package_name = "${var.green_package_name}"}output "east_datacenter_green_ips" { value = ["${module.east.green_ips}"]}output "east_datacenter_blue_ips" { value = ["${module.east.blue_ips}"]}module "sw" { source = "./modules/service" region_name = "us-sw-1" blue_count = 0 green_count = 3 service_production = "${var.service_production}" service_name = "${var.service_name}" service_networks = "${var.service_networks}" blue_image_name = "${var.blue_image_name}" blue_image_type = "${var.blue_image_type}" blue_image_version = "${var.blue_image_version}" blue_package_name = "${var.blue_package_name}" green_image_name = "${var.green_image_name}" green_image_type = "${var.green_image_type}" green_image_version = "${var.green_image_version}" green_package_name = "${var.green_package_name}"}output "sw_datacenter_green_ips" { value = ["${module.sw.green_ips}"]}output "sw_datacenter_blue_ips" { value = ["${module.sw.blue_ips}"]}module "west" { source = "./modules/service" region_name = "us-west-1" blue_count = 3 green_count = 0 service_production = "${var.service_production}" service_name = "${var.service_name}" service_networks = "${var.service_networks}" blue_image_name = "${var.blue_image_name}" blue_image_type = "${var.blue_image_type}" blue_image_version = "${var.blue_image_version}" blue_package_name = "${var.blue_package_name}" green_image_name = "${var.green_image_name}" green_image_type = "${var.green_image_type}" green_image_version = "${var.green_image_version}" green_package_name = "${var.green_package_name}"}output "west_datacenter_green_ips" { value = ["${module.west.green_ips}"]}output "west_datacenter_blue_ips" { value = ["${module.west.blue_ips}"]}</code>
You'll notice that for this demo, we'll be creating 3 blue instances and 3 green instances on `us-east-1`, 3 blue instances on `us-west-1`, and 3 green instances on `us-sw-1`. The point is that you could be at any stage of rolling out updates to your application at any time; your data centers do not have to run identical versions of your application.
NOTE: We're deviating from a strict definition of a blue-green deployment by having multiple blue and green hosts up at the same time. This is a demo to prove the value of Terraform, not to establish the exact details of different deployment models.
We've also defined outputs for each data center which report the IP addresses of all of our instances there. These will be helpful for confirming the records attached to your domain name later in this demo.
Add a module for Cloudflare
There's also a module to encapsulate Cloudflare's Terraform provider. Like the Triton service modules, this module feeds a set of inputs to a separate actionable file. Again, this module can have input and output variables, like a function.
Add the following content to `main.tf`:
<code class="language-hcl">## Details for Cloudflare#module "dns" { source = "./modules/dns" zone_name = "alexandra.space" host_name = "@" staging_host_name = "staging" ttl = "300" service_instance_count = "${module.east.blue_count + module.sw.blue_count + module.west.blue_count}" service_instance_list = "${concat(module.east.blue_ips, module.sw.blue_ips, module.west.blue_ips)}" staging_service_instance_count = "${module.east.green_count + module.sw.green_count + module.west.green_count}" staging_service_instance_list = "${concat(module.east.green_ips, module.sw.green_ips, module.west.green_ips)}"}</code>
The full variables file can be found on GitHub.
Step 2: create entry points for each module
As you know, two folders are referenced by the modules in `main.tf`. Each folder, `/modules/service` and `/modules/dns`, will contain an `interface.tf` file defining the providers, data sources, and resources. Let's create those folders:
<code class="language-sh">$ mkdir -p modules/service$ mkdir -p modules/dns</code>
Deployment interface
We'll start by adding a file to the `./modules/service/` directory and opening it in our text editor. Because I'm using `vi`, I can do that in one action:
<code class="language-sh">$ vi modules/service/interface.tf</code>
Much of `interface.tf` will also be identical to the blue-green deployment's `main.tf` file.
NOTE: Some variables won't have `default` values, such as `blue_count` and `green_count`. This is because when you specify a `default`, a value becomes optional. We want the counts to be required attributes of the module in order to prevent accidentally deleting instances with a default value of `0`.
Copy the following content:
<code class="language-hcl">terraform { required_version = ">= 0.10.0"}variable "service_production" { description = "Which deployment is considered 'production'? The other is 'staging'. Value can be one of 'blue' or 'green'."}variable "service_name" { description = "The name of the service in CNS."}variable "service_networks" { type = "list" description = "The name or ID of one or more networks the service will operate on."}variable "blue_image_name" { description = "The name of the image for the 'blue' deployment."}variable "blue_image_type" { description = "The type of the image for the 'blue' deployment."}variable "blue_image_version" { description = "The version of the image for the 'blue' deployment."}variable "blue_package_name" { description = "The package to use when making a blue deployment."}variable "green_image_name" { description = "The name of the image for the 'green' deployment."}variable "green_image_type" { description = "The type of the image for the 'green' deployment."}variable "green_image_version" { description = "The version of the image for the 'green' deployment."}variable "green_package_name" { description = "The package to use when making a green deployment."}provider "triton" { url = "https://${var.region_name}.api.joyent.com"}variable "blue_count" { description = "number of blue machines"}variable "green_count" { description = "number of green machines"}variable "region_name" { description = "Name of the data center for the API endpoint"}## Common details about both "blue" and "green" deployments#data "triton_network" "service_networks" { count = "${length(var.service_networks)}" name = "${element(var.service_networks, count.index)}"}## Details about the "blue" deployment#data "triton_image" "blue_image" { name = "${var.blue_image_name}" version = "${var.blue_image_version}" type = "${var.blue_image_type}" most_recent = true}resource "triton_machine" "blue_machine" { count = "${var.blue_count}" name = "${format("happy-%02d-blue", count.index + 1)}" package = "${var.blue_package_name}" image = "${data.triton_image.blue_image.id}" networks = ["${data.triton_network.service_networks.*.id}"] cns { services = ["${var.service_production == "blue" ? var.service_name : "staging-${var.service_name}" }", "blue-${var.service_name}"] }}## Details about the "green" deployment#data "triton_image" "green_image" { name = "${var.green_image_name}" version = "${var.green_image_version}" type = "${var.green_image_type}" most_recent = true}resource "triton_machine" "green_machine" { count = "${var.green_count}" name = "${format("happy-%02d-green", count.index + 1)}" package = "${var.green_package_name}" image = "${data.triton_image.green_image.id}" networks = ["${data.triton_network.service_networks.*.id}"] cns { services = ["${var.service_production == "green" ? var.service_name : "staging-${var.service_name}" }", "green-${var.service_name}"] }}## Outputs for all deployments#output "blue_ips" { value = ["${triton_machine.blue_machine.*.primaryip}"]}output "green_ips" { value = ["${triton_machine.green_machine.*.primaryip}"]}output "blue_count" { value = "${var.blue_count}"}output "green_count" { value = "${var.green_count}"}</code>
There are some key differences:
- While I've combined all of the variables (and modules) into the composition root, `main.tf`, they've all been re-declared in the module's `interface.tf` file.
- Redeclaring variables inside of modules allows the modules to be reused as abstract, composable units of infrastructure configuration. By being explicit, the redeclaration of variables creates a contract that consumers must satisfy.
- The `provider` uses a `url` setting in place of your Triton environment variable. This is what allows us to target multiple data centers.
- Three variables, `blue_count`, `green_count`, and `region_name`, were created as part of the modules rather than as separate top-level variables. However, they still need to be declared as variables within `interface.tf`.
Additionally, the `output`s allow us to get the IP addresses of all created instances and the total number of instances created. The output variables are what help Terraform build a Directed Acyclic Graph (DAG), guiding the evaluation and execution of the Terraform configuration in the required order. Think of `output` as the return parameters of a function. These outputs will be used in the Cloudflare module.
DNS interface
While Triton CNS is a great resource for applications within a single data center, it does not work across data centers. Even if you declare the same service name, the DNS name still contains the data center's name, i.e. `us-sw-1` or `us-west-1`. With the help of Cloudflare, we'll connect our instances to a vanity domain name, enabling global use of a single DNS name.
The DNS file connects our deployment with Cloudflare. Once again, you must declare all of the necessary variables, including those set within our `dns` module.
Create and edit a new `interface.tf` in the `./modules/dns/` directory:
<code class="language-sh">$ vi modules/dns/interface.tf</code>
Copy the following text to your `interface.tf` file:
<code class="language-hcl">variable "zone_name" { type = "string" description = "The domain to add the DNS record to."}variable "host_name" { type = "string" description = "The name of the DNS host. For production, this is represented by '@'."}variable "staging_host_name" { type = "string" description = "The name of the DNS host. For staging, this is the subdomain 'staging'."}variable "ttl" { type = "string" description = "TTL for the DNS record."}variable "service_instance_list" { type = "list"}variable "service_instance_count" { type = "string"}variable "staging_service_instance_list" { type = "list"}variable "staging_service_instance_count" { type = "string"}provider "cloudflare" { }resource "cloudflare_record" "production" { count = "${var.service_instance_count}" domain = "${var.zone_name}" name = "${var.host_name}" value = "${element(var.service_instance_list, count.index)}" type = "A" ttl = "${var.ttl}"}resource "cloudflare_record" "staging" { count = "${var.staging_service_instance_count}" domain = "${var.zone_name}" name = "${var.staging_host_name}" value = "${element(var.staging_service_instance_list, count.index)}" type = "A" ttl = "${var.ttl}"}</code>
The Cloudflare provider requires your account email address and your API token, which you can get under API Keys on your Cloudflare profile. View and copy your Global API Key. Again, it's best practice that you add API keys as environment variables so that you do not accidentally share these keys.
<code class="language-sh">export CLOUDFLARE_TOKEN=<Global API Key>export CLOUDFLARE_EMAIL=<example@email.com></code>
NOTE: You'll notice in this file, there isn't any mention of "blue" or "green." That's because this module is reusable and can be used for both staging and production, regardless of the color. For our first deployment, our blue instances are in production and our green instances are in staging. By the end of this demo, green will be in production and blue will be in staging.
The `cloudflare_record` resources will set an `A` record for each of the instances. The `production` resource sets records on the root domain, which in my case is `alexandra.space`. The `staging` resource sets an A record for the subdomain `staging.alexandra.space`.
Step 3: initializing Terraform
All of the necessary files for this deployment have been created. You are now ready to build your infrastructure.
The first step is to download the providers.
Execute `terraform init` to download the Triton and Cloudflare providers into the local application directory. You should be using at least Terraform version `0.10.x` to execute the following commands. We recommend always using the latest Triton provider and keeping up to date with Terraform core's changes.
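You can confirm what you're running first (the version shown here is only an example):
<code class="language-sh">$ terraform version
Terraform v0.10.8</code>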
<code class="language-sh"> $ terraform init Initializing modules... - module.east Getting source "./modules/service" - module.sw Getting source "./modules/service" - module.west Getting source "./modules/service" - module.dns Getting source "./modules/dns" Initializing provider plugins... - Checking for available provider plugins on https://releases.hashicorp.com... - Downloading plugin for provider "triton" (0.4.0)... - Downloading plugin for provider "cloudflare" (0.1.0)... The following providers do not have any version constraints in configuration, so the latest version was installed. To prevent automatic upgrades to new major versions that may contain breaking changes, it is recommended to add version = "..." constraints to the corresponding provider blocks in configuration, with the constraint strings suggested below. * provider.cloudflare: version = "~> 0.1" * provider.triton: version = "~> 0.4" Terraform has been successfully initialized!</code>
Step 4: planning your infrastructure
Execute `terraform plan` to review the deployment. Below is a compressed version of the results.
<code class="language-sh"> $ terraform plan -out multidcs.plan Refreshing Terraform state in-memory prior to plan... The refreshed state will be used to calculate this plan, but will not be persisted to local or remote state storage. data.triton_image.green_image: Refreshing state... data.triton_network.service_networks: Refreshing state... data.triton_image.blue_image: Refreshing state... data.triton_image.green_image: Refreshing state... data.triton_network.service_networks: Refreshing state... data.triton_image.blue_image: Refreshing state... data.triton_network.service_networks: Refreshing state... data.triton_image.green_image: Refreshing state... data.triton_image.blue_image: Refreshing state... An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: + module.dns.cloudflare_record.production[0] id: <computed> domain: "alexandra.space" hostname: <computed> name: "@" proxied: "false" ttl: "300" type: "A" value: "${element(var.service_instance_list, count.index)}" zone_id: <computed> [...] + module.dns.cloudflare_record.staging[0] id: <computed> domain: "alexandra.space" hostname: <computed> name: "staging" proxied: "false" ttl: "300" type: "A" value: "${element(var.staging_service_instance_list, count.index)}" zone_id: <computed> [...] + module.east.triton_machine.blue_machine[0] id: <computed> cns.#: "1" cns.0.services.#: "2" cns.0.services.0: "happiness" cns.0.services.1: "blue-happiness" created: <computed> dataset: <computed> disk: <computed> domain_names.#: <computed> firewall_enabled: "false" image: "5d1915c0-f04d-4778-8eb1-255da3b34bcf" ips.#: <computed> memory: <computed> name: "happy-1-blue" networks.#: "1" networks.0: "9ec60129-9034-47b4-b111-3026f9b1a10f" nic.#: <computed> package: "g4-highcpu-128M" primaryip: <computed> root_authorized_keys: <computed> type: <computed> updated: <computed> [...] + module.east.triton_machine.green_machine[0] id: <computed> cns.#: "1" cns.0.services.#: "2" cns.0.services.0: "staging-happiness" cns.0.services.1: "green-happiness" created: <computed> dataset: <computed> disk: <computed> domain_names.#: <computed> firewall_enabled: "false" image: "22fd6286-6828-42ea-bd10-e17c63702e4e" ips.#: <computed> memory: <computed> name: "happy-1-green" networks.#: "1" networks.0: "9ec60129-9034-47b4-b111-3026f9b1a10f" nic.#: <computed> package: "g4-highcpu-128M" primaryip: <computed> root_authorized_keys: <computed> type: <computed> updated: <computed> [...] + module.sw.triton_machine.green_machine[0] id: <computed> cns.#: "1" cns.0.services.#: "2" cns.0.services.0: "staging-happiness" cns.0.services.1: "green-happiness" created: <computed> dataset: <computed> disk: <computed> domain_names.#: <computed> firewall_enabled: "false" image: "8c1bcd0d-d77e-4d82-a025-cf6023a92155" ips.#: <computed> memory: <computed> name: "happy-1-green" networks.#: "1" networks.0: "f7ed95d3-faaf-43ef-9346-15644403b963" nic.#: <computed> package: "g4-highcpu-128M" primaryip: <computed> root_authorized_keys: <computed> type: <computed> updated: <computed> [...] 
+ module.west.triton_machine.blue_machine[0] id: <computed> cns.#: "1" cns.0.services.#: "2" cns.0.services.0: "happiness" cns.0.services.1: "blue-happiness" created: <computed> dataset: <computed> disk: <computed> domain_names.#: <computed> firewall_enabled: "false" image: "28652e20-ff42-44a7-ae5d-75edb09195a0" ips.#: <computed> memory: <computed> name: "happy-1-blue" networks.#: "1" networks.0: "42325ea0-eb62-44c1-8eb6-0af3e2f83abc" nic.#: <computed> package: "g4-highcpu-128M" primaryip: <computed> root_authorized_keys: <computed> type: <computed> updated: <computed> [...] Plan: 24 to add, 0 to change, 0 to destroy. This plan was saved to: multidcs To perform exactly these actions, run the following command to apply: terraform apply multidcs.plan</code>
The plan states: `24 to add, 0 to change, 0 to destroy`. That equates to 12 instances (6 blue and 6 green), as well as 12 address records (6 for production and 6 for staging).
Step 5: execute and build your infrastructure
The plan came back with no errors. We're ready to build our infrastructure and set the A records for our vanity domain.
Again, I'll condense the results below so you get a sense of what's happening.
<code class="language-sh"> $ terraform apply multidcs.plan module.east.triton_machine.blue_machine[1]: Creating... cns.#: "" => "1" cns.0.services.#: "" => "2" cns.0.services.0: "" => "happiness" cns.0.services.1: "" => "blue-happiness" created: "" => "<computed>" dataset: "" => "<computed>" disk: "" => "<computed>" domain_names.#: "" => "<computed>" firewall_enabled: "" => "false" image: "" => "5d1915c0-f04d-4778-8eb1-255da3b34bcf" ips.#: "" => "<computed>" memory: "" => "<computed>" name: "" => "happy-02-blue" networks.#: "" => "1" networks.0: "" => "9ec60129-9034-47b4-b111-3026f9b1a10f" nic.#: "" => "<computed>" package: "" => "g4-highcpu-128M" primaryip: "" => "<computed>" root_authorized_keys: "" => "<computed>" type: "" => "<computed>" updated: "" => "<computed>" module.sw.triton_machine.green_machine[1]: Creating... cns.#: "" => "1" cns.0.services.#: "" => "2" cns.0.services.0: "" => "staging-happiness" cns.0.services.1: "" => "green-happiness" created: "" => "<computed>" dataset: "" => "<computed>" disk: "" => "<computed>" domain_names.#: "" => "<computed>" firewall_enabled: "" => "false" image: "" => "8c1bcd0d-d77e-4d82-a025-cf6023a92155" ips.#: "" => "<computed>" memory: "" => "<computed>" name: "" => "happy-02-green" networks.#: "" => "1" networks.0: "" => "f7ed95d3-faaf-43ef-9346-15644403b963" nic.#: "" => "<computed>" package: "" => "g4-highcpu-128M" primaryip: "" => "<computed>" root_authorized_keys: "" => "<computed>" type: "" => "<computed>" updated: "" => "<computed>" [...] module.sw.triton_machine.green_machine[2]: Creating... cns.#: "" => "1" cns.0.services.#: "" => "2" cns.0.services.0: "" => "staging-happiness" cns.0.services.1: "" => "green-happiness" created: "" => "<computed>" dataset: "" => "<computed>" disk: "" => "<computed>" domain_names.#: "" => "<computed>" firewall_enabled: "" => "false" image: "" => "8c1bcd0d-d77e-4d82-a025-cf6023a92155" ips.#: "" => "<computed>" memory: "" => "<computed>" name: "" => "happy-03-green" networks.#: "" => "1" networks.0: "" => "f7ed95d3-faaf-43ef-9346-15644403b963" nic.#: "" => "<computed>" package: "" => "g4-highcpu-128M" primaryip: "" => "<computed>" root_authorized_keys: "" => "<computed>" type: "" => "<computed>" updated: "" => "<computed>" [...] module.west.triton_machine.blue_machine[2]: Creating... cns.#: "" => "1" cns.0.services.#: "" => "2" cns.0.services.0: "" => "happiness" cns.0.services.1: "" => "blue-happiness" created: "" => "<computed>" dataset: "" => "<computed>" disk: "" => "<computed>" domain_names.#: "" => "<computed>" firewall_enabled: "" => "false" image: "" => "28652e20-ff42-44a7-ae5d-75edb09195a0" ips.#: "" => "<computed>" memory: "" => "<computed>" name: "" => "happy-03-blue" networks.#: "" => "1" networks.0: "" => "42325ea0-eb62-44c1-8eb6-0af3e2f83abc" nic.#: "" => "<computed>" package: "" => "g4-highcpu-128M" primaryip: "" => "<computed>" root_authorized_keys: "" => "<computed>" type: "" => "<computed>" updated: "" => "<computed>" [...] module.east.triton_machine.green_machine[2]: Creating... 
cns.#: "" => "1" cns.0.services.#: "" => "2" cns.0.services.0: "" => "staging-happiness" cns.0.services.1: "" => "green-happiness" created: "" => "<computed>" dataset: "" => "<computed>" disk: "" => "<computed>" domain_names.#: "" => "<computed>" firewall_enabled: "" => "false" image: "" => "22fd6286-6828-42ea-bd10-e17c63702e4e" ips.#: "" => "<computed>" memory: "" => "<computed>" name: "" => "happy-03-green" networks.#: "" => "1" networks.0: "" => "9ec60129-9034-47b4-b111-3026f9b1a10f" nic.#: "" => "<computed>" package: "" => "g4-highcpu-128M" primaryip: "" => "<computed>" root_authorized_keys: "" => "<computed>" type: "" => "<computed>" updated: "" => "<computed>" module.east.triton_machine.blue_machine[0]: Creation complete after 44s (ID: ac120758-c9da-ce41-a0f7-97894929e7a5) module.east.triton_machine.blue_machine[2]: Creation complete after 45s (ID: 9115e6f5-495b-448b-ed5b-ccbdfc976d33) module.west.triton_machine.blue_machine[1]: Creation complete after 53s (ID: 839f7377-5e58-6653-ef20-86e42f32ed80) module.west.triton_machine.blue_machine[2]: Creation complete after 53s (ID: feba4e8b-540a-6658-ed6d-da4a3caa6c2a) module.sw.triton_machine.green_machine[1]: Creation complete after 54s (ID: e8e9c4b0-7f51-43f8-a20f-9e1f83787a20) module.east.triton_machine.green_machine[2]: Creation complete after 55s (ID: f658ab49-6b3e-c087-e933-fb58e153a350) module.sw.triton_machine.green_machine[2]: Creation complete after 1m4s (ID: 7b7a2e48-64b1-e839-cbb0-dfdfb7b68ce3) module.east.triton_machine.green_machine[0]: Creation complete after 1m5s (ID: 7ee232d6-37b4-4209-b547-f5131a50f572) [...] module.dns.cloudflare_record.production[0]: Creating... domain: "" => "alexandra.space" hostname: "" => "<computed>" name: "" => "@" proxied: "" => "false" ttl: "" => "300" type: "" => "A" value: "" => "72.2.113.31" zone_id: "" => "<computed>" [...] module.dns.cloudflare_record.staging[0]: Creating... domain: "" => "alexandra.space" hostname: "" => "<computed>" name: "" => "staging" proxied: "" => "false" ttl: "" => "300" type: "" => "A" value: "" => "165.225.139.88" zone_id: "" => "<computed>" module.dns.cloudflare_record.staging[1]: Creation complete after 0s (ID: e4b01ccc964a46e44a88aea905d1eabe) module.dns.cloudflare_record.staging[0]: Creation complete after 0s (ID: 69896ded978191212881e6876a04b9a6) module.dns.cloudflare_record.staging[3]: Creation complete after 1s (ID: c5ad7311a157911c2a23104f6233fa34) module.dns.cloudflare_record.staging[2]: Creation complete after 1s (ID: cac28d3b03ea1ea1979dd8c768d855a4) module.dns.cloudflare_record.staging[4]: Creation complete after 1s (ID: a77bfa81f0078acb89bcec181cc7b897) module.dns.cloudflare_record.staging[5]: Creation complete after 1s (ID: 61f5e3760ae913795424e23f7ac052f6) Apply complete! Resources: 24 added, 0 changed, 0 destroyed. Outputs: east_datacenter_blue_ips = [ 72.2.113.31, 165.225.139.133, 72.2.112.164 ] east_datacenter_green_ips = [ 165.225.139.88, 165.225.139.56, 72.2.115.240 ] sw_datacenter_blue_ips = [] sw_datacenter_green_ips = [ 165.225.157.17, 64.30.129.88, 64.30.128.197 ] west_datacenter_blue_ips = [ 165.225.148.120, 165.225.151.144, 165.225.149.198 ] west_datacenter_green_ips = []</code>
There were no errors in creating any of our resources. It took a couple of minutes to get everything up and running, but all in all it was a pretty hands-off process.
I'll be able to visit alexandra.space to see my production application (i.e. the blue instances) and staging.alexandra.space to see the staging version (i.e. the green instances).
Using dig to check your domain
With `dig`, you can see the IP addresses associated with your domain name. These should line up with the IP addresses of the instances created with Terraform.
<code class="language-sh">$ dig +noall +answer staging.alexandra.spacestaging.alexandra.space. 300 IN A 165.225.139.88staging.alexandra.space. 300 IN A 165.225.139.56staging.alexandra.space. 300 IN A 72.2.115.240staging.alexandra.space. 300 IN A 165.225.157.17staging.alexandra.space. 300 IN A 64.30.129.88staging.alexandra.space. 300 IN A 64.30.128.197</code>
There are six IP addresses tied to the staging domain. Those IP addresses should be the same as the ones listed in our outputs of green instances:
<code class="language-sh">[...]Outputs:east_datacenter_green_ips = [ 165.225.139.88, 165.225.139.56, 72.2.115.240]sw_datacenter_green_ips = [ 165.225.157.17, 64.30.129.88, 64.30.128.197]west_datacenter_green_ips = []</code>
Looks like we have a match.
You can also view the IP addresses associated with your instances by visiting the portal or by executing `triton inst ip <instance>` in the appropriate data center.
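If you only need to re-check the outputs from the most recent apply, `terraform output` reads them straight from the state without touching any infrastructure:
<code class="language-sh"># Print a single output from the last recorded state
$ terraform output sw_datacenter_green_ips</code>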
Step 6: updating the number of instances to favor green
Everything is up and running as expected. Because this is a blue-green deploy (where blue started as production and green as staging), you'd expect at some point to have a fully green infrastructure running in production. That's easy to accomplish by updating the modules and variables.
We're going to break this up into two parts, with step 6 covering updating the number of instances and step 7 covering the updates to your DNS records. Within the steps will be a series of mini-steps.
For updating the number of instances in step 6, you will:
- Update the driver file to add green instances and reduce blue
- Plan the update to your instances
- Apply that update
Update the driver to add green instances and reduce blue
In the `main.tf` file in your root directory, let's roll out green instances on `us-west-1` and reduce the number of blue instances on `us-east-1`. There are no blue instances on `us-sw-1`, which means no version of the current production application is running in that data center.
<code class="language-sh">$ vi main.tf</code>
NOTE: Terraform always deletes before it creates, so it's important not to set `us-west-1`'s blue instance count to `0` before rolling out the green instances.
The only updates you'll be making are to the `blue_count` and `green_count` values where `main.tf` embeds the per-data center counts for our application. In this case, I'm reducing the number of blue instances from 3 to 1 in each data center that runs blue. You may not want to cut capacity this sharply if it would negatively impact your users or the load on your remaining instances; because this is a demo, I'm not worried about it.
NOTE: You could update the DNS record counts independently of the machine counts by creating variables such as `green_dns_count` and `blue_dns_count`. This would allow DNS entries to age out before the machines are removed, preventing a customer-facing outage.
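A sketch of that idea, using a hypothetical `blue_dns_count` variable wired into the `dns` module in place of the machine-count sums:
<code class="language-hcl"># Hypothetical: drive the Cloudflare record count separately from
# the machine counts so stale records can age out first.
variable "blue_dns_count" {
  description = "How many blue IPs should remain in DNS during the rollout."
  default     = "2"
}

module "dns" {
  source = "./modules/dns"

  # ...other arguments as before...
  service_instance_count = "${var.blue_dns_count}"
}</code>
For this demo, though, the machine counts drive DNS directly. Here's the updated `main.tf`: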
<code class="language-sh">## Details about the deployments for each data center#module "east" { source = "./modules/service" region_name = "us-east-1" blue_count = 1 green_count = 3 service_production = "${var.service_production}" service_name = "${var.service_name}" service_networks = "${var.service_networks}" blue_image_name = "${var.blue_image_name}" blue_image_version = "${var.blue_image_version}" blue_package_name = "${var.blue_package_name}" green_image_name = "${var.green_image_name}" green_image_version = "${var.green_image_version}" green_package_name = "${var.green_package_name}"}module "sw" { source = "./modules/service" region_name = "us-sw-1" blue_count = 0 green_count = 3 service_production = "${var.service_production}" service_name = "${var.service_name}" service_networks = "${var.service_networks}" blue_image_name = "${var.blue_image_name}" blue_image_version = "${var.blue_image_version}" blue_package_name = "${var.blue_package_name}" green_image_name = "${var.green_image_name}" green_image_version = "${var.green_image_version}" green_package_name = "${var.green_package_name}"}module "west" { source = "./modules/service" region_name = "us-west-1" blue_count = 1 green_count = 3 service_production = "${var.service_production}" service_name = "${var.service_name}" service_networks = "${var.service_networks}" blue_image_name = "${var.blue_image_name}" blue_image_version = "${var.blue_image_version}" blue_package_name = "${var.blue_package_name}" green_image_name = "${var.green_image_name}" green_image_version = "${var.green_image_version}" green_package_name = "${var.green_package_name}"}</code>
Plan the update to your instances
Always plan before you execute. Let's see what will happen after implementing those changes.
NOTE: Because plans are single-use files, you could reuse the same plan name as earlier, `multidcs.plan`. However, I've used a different plan name to mark the steps in our process.
<code class="language-sh"> $ terraform plan -out updatecount.plan Refreshing Terraform state in-memory prior to plan... The refreshed state will be used to calculate this plan, but will not be persisted to local or remote state storage. data.triton_image.blue_image: Refreshing state... data.triton_network.service_networks: Refreshing state... data.triton_image.green_image: Refreshing state... [...] triton_machine.green_machine[0]: Refreshing state... (ID: 930eb7d9-2dba-6021-9494-a33a2ae6f767) triton_machine.blue_machine[1]: Refreshing state... (ID: 839f7377-5e58-6653-ef20-86e42f32ed80) [...] cloudflare_record.staging[0]: Refreshing state... (ID: 69896ded978191212881e6876a04b9a6) cloudflare_record.production[0]: Refreshing state... (ID: 0d7d8a40a7f97aa180e52ff5c56d01b5) An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create ~ update in-place - destroy Terraform will perform the following actions: ~ module.dns.cloudflare_record.production[1] value: "165.225.139.133" => "165.225.148.120" - module.dns.cloudflare_record.production[2] - module.dns.cloudflare_record.production[3] - module.dns.cloudflare_record.production[4] - module.dns.cloudflare_record.production[5] ~ module.dns.cloudflare_record.staging[0] value: "165.225.139.88" => "${element(var.staging_service_instance_list, count.index)}" ~ module.dns.cloudflare_record.staging[1] value: "165.225.139.56" => "${element(var.staging_service_instance_list, count.index)}" ~ module.dns.cloudflare_record.staging[2] value: "72.2.115.240" => "${element(var.staging_service_instance_list, count.index)}" ~ module.dns.cloudflare_record.staging[3] value: "165.225.157.17" => "${element(var.staging_service_instance_list, count.index)}" ~ module.dns.cloudflare_record.staging[4] value: "64.30.129.88" => "${element(var.staging_service_instance_list, count.index)}" ~ module.dns.cloudflare_record.staging[5] value: "64.30.128.197" => "${element(var.staging_service_instance_list, count.index)}" + module.dns.cloudflare_record.staging[6] id: <computed> domain: "alexandra.space" hostname: <computed> name: "staging" proxied: "false" ttl: "300" type: "A" value: "${element(var.staging_service_instance_list, count.index)}" zone_id: <computed> + module.dns.cloudflare_record.staging[7] id: <computed> domain: "alexandra.space" hostname: <computed> name: "staging" proxied: "false" ttl: "300" type: "A" value: "${element(var.staging_service_instance_list, count.index)}" zone_id: <computed> + module.dns.cloudflare_record.staging[8] id: <computed> domain: "alexandra.space" hostname: <computed> name: "staging" proxied: "false" ttl: "300" type: "A" value: "${element(var.staging_service_instance_list, count.index)}" zone_id: <computed> - module.east.triton_machine.blue_machine[1] - module.east.triton_machine.blue_machine[2] - module.west.triton_machine.blue_machine[1] - module.west.triton_machine.blue_machine[2] + module.west.triton_machine.green_machine[0] id: <computed> cns.#: "1" cns.0.services.#: "2" cns.0.services.0: "staging-happiness" cns.0.services.1: "green-happiness" created: <computed> dataset: <computed> disk: <computed> domain_names.#: <computed> firewall_enabled: "false" image: "a39fb537-8a8f-47c1-87f6-c33aca78102b" ips.#: <computed> memory: <computed> name: "happy-01-green" networks.#: "1" networks.0: "42325ea0-eb62-44c1-8eb6-0af3e2f83abc" nic.#: <computed> package: "g4-highcpu-128M" primaryip: <computed> root_authorized_keys: <computed> type: <computed> updated: <computed> + 
module.west.triton_machine.green_machine[1] id: <computed> cns.#: "1" cns.0.services.#: "2" cns.0.services.0: "staging-happiness" cns.0.services.1: "green-happiness" created: <computed> dataset: <computed> disk: <computed> domain_names.#: <computed> firewall_enabled: "false" image: "a39fb537-8a8f-47c1-87f6-c33aca78102b" ips.#: <computed> memory: <computed> name: "happy-02-green" networks.#: "1" networks.0: "42325ea0-eb62-44c1-8eb6-0af3e2f83abc" nic.#: <computed> package: "g4-highcpu-128M" primaryip: <computed> root_authorized_keys: <computed> type: <computed> updated: <computed> + module.west.triton_machine.green_machine[2] id: <computed> cns.#: "1" cns.0.services.#: "2" cns.0.services.0: "staging-happiness" cns.0.services.1: "green-happiness" created: <computed> dataset: <computed> disk: <computed> domain_names.#: <computed> firewall_enabled: "false" image: "a39fb537-8a8f-47c1-87f6-c33aca78102b" ips.#: <computed> memory: <computed> name: "happy-03-green" networks.#: "1" networks.0: "42325ea0-eb62-44c1-8eb6-0af3e2f83abc" nic.#: <computed> package: "g4-highcpu-128M" primaryip: <computed> root_authorized_keys: <computed> type: <computed> updated: <computed> Plan: 6 to add, 7 to change, 8 to destroy. This plan was saved to: updatecount.plan To perform exactly these actions, run the following command to apply: terraform apply "updatecount.plan"</code>
There are lots of updates happening with this plan. Although it may look like we only updated the number of instances, remember that the change also cascades into our DNS records. `6 to add, 7 to change, 8 to destroy` translates to:
- 3 green machines are added plus the 3 associated DNS records
- 4 blue machines are destroyed plus their 4 associated DNS records
- There are 7 changes indicated for the DNS records, but that number is misleading. What's actually happening:
  - 1 change is being made to the production DNS records, where `happy-1-blue` on `us-west-1` is being elevated to the second production record (as `happy-2-blue` and `happy-3-blue` on `us-east-1` are being removed).
  - 6 "changes" are really Terraform re-evaluating the existing `staging` records as 3 new records are added, bringing staging to a total of 9 DNS records; those additions are already accounted for above.
Why does Terraform report 7 changes if that's not technically true? It comes down to the way the Cloudflare provider thinks about DNS records.
Apply the update to your instances
The actual application of our plan follows it, minus the 6 "changes" that aren't actually changes (as discussed above), so the number of objects to change drops to 1: the single DNS record associated with production.
<code class="language-sh"> $ terraform apply updatecount.plan module.dns.cloudflare_record.production[4]: Destroying... (ID: 7c54da68bcac09cda3cc4ab07f12cd7a) module.dns.cloudflare_record.production[5]: Destroying... (ID: 4ca04d854727fecf3130f95c3b33cb50) module.west.triton_machine.green_machine[2]: Creating... cns.#: "" => "1" cns.0.services.#: "" => "2" cns.0.services.0: "" => "staging-happiness" cns.0.services.1: "" => "green-happiness" created: "" => "<computed>" dataset: "" => "<computed>" disk: "" => "<computed>" domain_names.#: "" => "<computed>" firewall_enabled: "" => "false" image: "" => "a39fb537-8a8f-47c1-87f6-c33aca78102b" ips.#: "" => "<computed>" memory: "" => "<computed>" name: "" => "happy-03-green" networks.#: "" => "1" networks.0: "" => "42325ea0-eb62-44c1-8eb6-0af3e2f83abc" nic.#: "" => "<computed>" package: "" => "g4-highcpu-128M" primaryip: "" => "<computed>" root_authorized_keys: "" => "<computed>" type: "" => "<computed>" updated: "" => "<computed>" module.west.triton_machine.green_machine[1]: Creating... cns.#: "" => "1" cns.0.services.#: "" => "2" cns.0.services.0: "" => "staging-happiness" cns.0.services.1: "" => "green-happiness" created: "" => "<computed>" dataset: "" => "<computed>" disk: "" => "<computed>" domain_names.#: "" => "<computed>" firewall_enabled: "" => "false" image: "" => "a39fb537-8a8f-47c1-87f6-c33aca78102b" ips.#: "" => "<computed>" memory: "" => "<computed>" name: "" => "happy-02-green" networks.#: "" => "1" networks.0: "" => "42325ea0-eb62-44c1-8eb6-0af3e2f83abc" nic.#: "" => "<computed>" package: "" => "g4-highcpu-128M" primaryip: "" => "<computed>" root_authorized_keys: "" => "<computed>" type: "" => "<computed>" updated: "" => "<computed>" module.west.triton_machine.green_machine[0]: Creating... cns.#: "" => "1" cns.0.services.#: "" => "2" cns.0.services.0: "" => "staging-happiness" cns.0.services.1: "" => "green-happiness" created: "" => "<computed>" dataset: "" => "<computed>" disk: "" => "<computed>" domain_names.#: "" => "<computed>" firewall_enabled: "" => "false" image: "" => "a39fb537-8a8f-47c1-87f6-c33aca78102b" ips.#: "" => "<computed>" memory: "" => "<computed>" name: "" => "happy-01-green" networks.#: "" => "1" networks.0: "" => "42325ea0-eb62-44c1-8eb6-0af3e2f83abc" nic.#: "" => "<computed>" package: "" => "g4-highcpu-128M" primaryip: "" => "<computed>" root_authorized_keys: "" => "<computed>" type: "" => "<computed>" updated: "" => "<computed>" module.dns.cloudflare_record.production[5]: Destruction complete after 0s module.dns.cloudflare_record.production[3]: Destruction complete after 1s module.dns.cloudflare_record.production[4]: Destruction complete after 1s [...] module.dns.cloudflare_record.staging[7]: Creating... domain: "" => "alexandra.space" hostname: "" => "<computed>" name: "" => "staging" proxied: "" => "false" ttl: "" => "300" type: "" => "A" value: "" => "165.225.149.77" zone_id: "" => "<computed>" module.dns.cloudflare_record.staging[6]: Creating... domain: "" => "alexandra.space" hostname: "" => "<computed>" name: "" => "staging" proxied: "" => "false" ttl: "" => "300" type: "" => "A" value: "" => "165.225.150.82" zone_id: "" => "<computed>" module.dns.cloudflare_record.staging[8]: Creating... domain: "" => "alexandra.space" hostname: "" => "<computed>" name: "" => "staging" proxied: "" => "false" ttl: "" => "300" type: "" => "A" value: "" => "165.225.144.136" zone_id: "" => "<computed>" module.dns.cloudflare_record.staging[8]: Creation complete after 1s (ID: 7a2052fa1ee4f6fdc5e9e869a5bf85e1) [...] 
Apply complete! Resources: 6 added, 1 changed, 8 destroyed. Outputs: east_datacenter_blue_ips = [ 72.2.113.31 ] east_datacenter_green_ips = [ 165.225.139.88, 165.225.139.56, 72.2.115.240 ] sw_datacenter_blue_ips = [] sw_datacenter_green_ips = [ 165.225.157.17, 64.30.129.88, 64.30.128.197 ] west_datacenter_blue_ips = [ 165.225.148.120 ] west_datacenter_green_ips = [ 165.225.150.82, 165.225.149.77, 165.225.144.136 ]</code>
The plan was applied successfully. I can confirm that the IP addresses match with `dig`:
<code class="language-sh"># Confirming for staging$ dig +noall +answer staging.alexandra.spacestaging.alexandra.space. 299 IN A 165.225.144.136staging.alexandra.space. 299 IN A 165.225.139.56staging.alexandra.space. 299 IN A 72.2.115.240staging.alexandra.space. 299 IN A 165.225.139.88staging.alexandra.space. 299 IN A 165.225.150.82staging.alexandra.space. 299 IN A 64.30.128.197staging.alexandra.space. 299 IN A 165.225.149.77staging.alexandra.space. 299 IN A 165.225.157.17staging.alexandra.space. 299 IN A 64.30.129.88# Confirming for production$ dig +noall +answer alexandra.spacealexandra.space. 300 IN A 165.225.148.120alexandra.space. 300 IN A 72.2.113.31</code>
Step 7: Setting green as the production instances
Now that you've got more green instances available in production, it's time to swap the DNS records so that production points at the green instances and staging points at blue.
For updating the DNS in step 7, you will:
- Update the variables file to set the production variable and Cloudflare details
- Plan the DNS update
- Apply that update
Update the variables file
Now that all of our green instances are up and running, we can swap the DNS records for staging and production. That requires updates in both `variables.tf` and `main.tf`.
Update the variable `service_production` in `variables.tf`:
<code class="language-hcl">variable "service_production" { type = "string" default = "green"}</code>
Update the `dns` module in `main.tf`, swapping which color feeds production and which feeds staging:
<code class="language-hcl">## Details for Cloudflare#module "dns" { source = "./modules/dns" zone_name = "alexandra.space" host_name = "@" staging_host_name = "staging" ttl = "300" service_instance_count = "${module.east.blue_count + module.sw.blue_count + module.west.blue_count}" service_instance_list = "${concat(module.east.blue_ips, module.sw.blue_ips, module.west.blue_ips)}" staging_service_instance_count = "${module.east.green_count + module.sw.green_count + module.west.green_count}" staging_service_instance_list = "${concat(module.east.green_ips, module.sw.green_ips, module.west.green_ips)}"}</code>
The `service_production` variable updates Triton CNS, while the updates to the `dns` module affect the Cloudflare DNS names.
Plan the DNS update
Let's plan our DNS update. Remember to always run `terraform plan` and save the output so you can review what will happen before you `terraform apply`. Remember, only you can prevent "Terrorform."
<code class="language-sh"> $ terraform plan -out swap.plan Refreshing Terraform state in-memory prior to plan... The refreshed state will be used to calculate this plan, but will not be persisted to local or remote state storage. [...] An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: + create ~ update in-place - destroy Terraform will perform the following actions: ~ module.dns.cloudflare_record.production[0] value: "72.2.113.31" => "165.225.139.88" ~ module.dns.cloudflare_record.production[1] value: "165.225.148.120" => "165.225.139.56" + module.dns.cloudflare_record.production[2] id: <computed> domain: "alexandra.space" hostname: <computed> name: "@" proxied: "false" ttl: "300" type: "A" value: "72.2.115.240" zone_id: <computed> + module.dns.cloudflare_record.production[3] id: <computed> domain: "alexandra.space" hostname: <computed> name: "@" proxied: "false" ttl: "300" type: "A" value: "165.225.157.17" zone_id: <computed> + module.dns.cloudflare_record.production[4] id: <computed> domain: "alexandra.space" hostname: <computed> name: "@" proxied: "false" ttl: "300" type: "A" value: "64.30.129.88" zone_id: <computed> + module.dns.cloudflare_record.production[5] id: <computed> domain: "alexandra.space" hostname: <computed> name: "@" proxied: "false" ttl: "300" type: "A" value: "64.30.128.197" zone_id: <computed> + module.dns.cloudflare_record.production[6] id: <computed> domain: "alexandra.space" hostname: <computed> name: "@" proxied: "false" ttl: "300" type: "A" value: "165.225.150.82" zone_id: <computed> + module.dns.cloudflare_record.production[7] id: <computed> domain: "alexandra.space" hostname: <computed> name: "@" proxied: "false" ttl: "300" type: "A" value: "165.225.149.77" zone_id: <computed> + module.dns.cloudflare_record.production[8] id: <computed> domain: "alexandra.space" hostname: <computed> name: "@" proxied: "false" ttl: "300" type: "A" value: "165.225.144.136" zone_id: <computed> ~ module.dns.cloudflare_record.staging[0] value: "165.225.139.88" => "72.2.113.31" ~ module.dns.cloudflare_record.staging[1] value: "165.225.139.56" => "165.225.148.120" - module.dns.cloudflare_record.staging[2] - module.dns.cloudflare_record.staging[3] - module.dns.cloudflare_record.staging[4] - module.dns.cloudflare_record.staging[5] - module.dns.cloudflare_record.staging[6] - module.dns.cloudflare_record.staging[7] - module.dns.cloudflare_record.staging[8] ~ module.east.triton_machine.blue_machine cns.0.services.0: "happiness" => "staging-happiness" ~ module.east.triton_machine.green_machine[0] cns.0.services.0: "staging-happiness" => "happiness" ~ module.east.triton_machine.green_machine[1] cns.0.services.0: "staging-happiness" => "happiness" ~ module.east.triton_machine.green_machine[2] cns.0.services.0: "staging-happiness" => "happiness" ~ module.sw.triton_machine.green_machine[0] cns.0.services.0: "staging-happiness" => "happiness" ~ module.sw.triton_machine.green_machine[1] cns.0.services.0: "staging-happiness" => "happiness" ~ module.sw.triton_machine.green_machine[2] cns.0.services.0: "staging-happiness" => "happiness" ~ module.west.triton_machine.blue_machine cns.0.services.0: "happiness" => "staging-happiness" ~ module.west.triton_machine.green_machine[0] cns.0.services.0: "staging-happiness" => "happiness" ~ module.west.triton_machine.green_machine[1] cns.0.services.0: "staging-happiness" => "happiness" ~ module.west.triton_machine.green_machine[2] cns.0.services.0: "staging-happiness" => "happiness" Plan: 7 to add, 
15 to change, 7 to destroy. ------------------------------------------------------------------------ This plan was saved to: swap.plan To perform exactly these actions, run the following command to apply: terraform apply "swap.plan"</code>
Let's go into the details of the plan:
- 7 new DNS records are created for production
- The 15 changes include:
  - Updating the CNS services for all 11 existing instances so that blue instances have `staging-happiness` as a CNS service label and green instances have `happiness`.
  - 2 changes to the staging Cloudflare DNS records, accounting for replacing the top two records with blue instance IPs.
  - 2 changes to the production Cloudflare DNS records, accounting for replacing the existing two records with new green records.
- 7 staging DNS records are removed
Apply the DNS update
Everything looks good, so we're ready to implement the final step.
<code class="language-sh"> $ terraform apply swap.plan module.west.triton_machine.blue_machine: Modifying... (ID: f32a1b42-0275-4d8a-dc3c-a497bfcc7861) cns.0.services.0: "happiness" => "staging-happiness" module.west.triton_machine.green_machine[1]: Modifying... (ID: f69117a7-9a62-c7cb-91ed-cc57d95c61d4) cns.0.services.0: "staging-happiness" => "happiness" module.sw.triton_machine.green_machine[0]: Modifying... (ID: 930eb7d9-2dba-6021-9494-a33a2ae6f767) cns.0.services.0: "staging-happiness" => "happiness" module.dns.cloudflare_record.staging[3]: Destruction complete after 1s module.east.triton_machine.blue_machine: Modifying... (ID: ac120758-c9da-ce41-a0f7-97894929e7a5) [...] module.dns.cloudflare_record.production[0]: Modifying... (ID: 0d7d8a40a7f97aa180e52ff5c56d01b5) value: "72.2.113.31" => "165.225.139.88" module.dns.cloudflare_record.production[7]: Creating... domain: "" => "alexandra.space" hostname: "" => "<computed>" name: "" => "@" proxied: "" => "false" ttl: "" => "300" type: "" => "A" value: "" => "165.225.149.77" zone_id: "" => "<computed>" [...] module.dns.cloudflare_record.production[8]: Creation complete after 2s (ID: 365dd60c34743560c24c1220db672218) module.dns.cloudflare_record.production[0]: Modifications complete after 2s (ID: 0d7d8a40a7f97aa180e52ff5c56d01b5) Apply complete! Resources: 7 added, 15 changed, 7 destroyed. Outputs: east_datacenter_blue_ips = [ 72.2.113.31 ] east_datacenter_green_ips = [ 165.225.139.88, 165.225.139.56, 72.2.115.240 ] sw_datacenter_blue_ips = [] sw_datacenter_green_ips = [ 165.225.157.17, 64.30.129.88, 64.30.128.197 ] west_datacenter_blue_ips = [ 165.225.148.120 ] west_datacenter_green_ips = [ 165.225.150.82, 165.225.149.77, 165.225.144.136 ]</code>
Depending on DNS caching (be mindful of long TTL values while making frequent changes), you should be able to check out your DNS records with `dig`. It should be a total swap from when you added the new green instances.
<code class="language-sh"># Confirming for staging$ dig +noall +answer staging.alexandra.spacestaging.alexandra.space. 300 IN A 72.2.113.31staging.alexandra.space. 300 IN A 165.225.148.120# Confirming for production$ dig +noall +answer alexandra.spacealexandra.space. 300 IN A 165.225.157.17alexandra.space. 300 IN A 165.225.150.82alexandra.space. 300 IN A 64.30.128.197alexandra.space. 300 IN A 165.225.139.88alexandra.space. 300 IN A 165.225.144.136alexandra.space. 300 IN A 165.225.139.56alexandra.space. 300 IN A 64.30.129.88alexandra.space. 300 IN A 72.2.115.240alexandra.space. 300 IN A 165.225.149.77</code>
Wrapping up
If you visit the live websites, your production domain and the staging domain, the applications should be updated accordingly. Depending on DNS caching, it may not be an instant update; however, using `dig` or looking at your Cloudflare dashboard should confirm the actions you've taken.
Whether it's Cloudflare or another DNS management provider, you can easily update your DNS names for your geographically distributed application.
Customization doesn't stop there. You can use Triton Compute Service on the Triton Public Cloud exclusively or pair it with an on-prem installation. You could even use multiple cloud services, provided there's a Terraform provider for each.
Check out the full source code for this demo on GitHub.
Let us know if you're running a complex blue-green installation across data centers by tweeting us at @joyent.
Many thanks to Sean Chittenden. Without him, this post could not have happened.
Post written by Alexandra White