Preview: managing containers and VMs together with Terraform on Triton
Now upstreamed!
The Triton provider for Terraform was upstreamed in Terraform 0.6.14 in March 2016.
Learn more
Community and support
- The sdc-discuss mailing list is a great starting point for questions about how to use and manage Triton, including with Terraform: Subscribe, archives.
- The Terraform mailing list and IRC channel (#terraform-tool on Freenode) are great for general questions about Terraform.
- Joyent customers can contact support for issues specific to Terraform on Triton and in the Joyent public cloud.
Bugs
Please report bugs (do not ask for general help) in the GitHub issue tracker for Terraform.
Original post
Hashicorp's Terraform is a powerful tool to create, manage, and version reproducible infrastructure, including compute resources and upstack services. Hashicorp is famous for its infrastructure tools, including Vagrant, Packer, Consul, and more. I could go on, but I'm not sure I can say it more clearly or succinctly than the Terraform.io website:
Terraform allows you to effortlessly combine high-level system providers with your own or with each other. Launch a server from one cloud provider, add a DNS entry with its IP with a different provider. Built-in dependency resolution means things happen in the right order.
Your configuration and state can be stored in version control, shared and distributed among your team. Updates, scaling and modifications will be planned first, so you can iterate with confidence.
In short, Terraform truly allows us to manage infrastructure as code, and it's high time we offered a Triton provider and resources.
Terraform was born in an age of virtual machine infrastructure, but on Triton we wanted first class support for our three types of compute resources:
- Bare-metal Docker containers. These run the Docker images you expect, but without the complication of having to run them in a virtual machine or prepare the infrastructure first.
- Infrastructure containers. These work like hardware virtual machines, but perform like the bare metal containers they are.
- Hardware virtual machines. These allow flexibility to run Windows or other non-Linux operating systems.
Each of those compute resources is a first-order object in Triton, so it was important to make sure we represented them that way in Terraform. Fortunately, Terraform is well suited to that. Terraform can even combine all of those resources together, possibly along with others, in a single .tf infrastructure definition file. Eventually, we hope this integrated support will make its way into Atlas, Hashicorp's hosted infrastructure management toolset based on Terraform.
Right now, the work is in a very early stage, but enough is completed that we'd like to invite you to try it out and give us feedback. We're planning to include support for the following resources in our 1.0 release:
- Terraform provider for Triton with resources:
- ✓ Keys, done in resource_key.go, but not yet merged to Terraform origin
- ✓ Machines, done in resource_machine.go, but not yet merged to Terraform origin
- NICs
- ✓ Firewall Rules, done in resource_firewall_rule.go, but not yet merged to Terraform origin
- ✓ Docker, Done! hashicorp/terraform#3761 is merged to Terraform origin and scheduled for release in version 0.6.9!
- Fabrics
- Networks
Additionally, we're planning to build Packer providers for:
Of course, Packer already has support for building Docker containers.
Installation
Before installing Terraform, you should have a Joyent account and the Joyent CloudAPI tools. To use Docker, you'll need the Docker Engine installed and configured as well.
Hashicorp has some fantastic instructions for getting started with Terraform. However, for this demo we need a custom build that includes our Triton provider and the not-yet-released Docker provider. That custom build is available in our GitHub repo for this project. On my Mac, I can install it using these commands:
mkdir -p ~/terraform/bin/ # create a directory for the Terraform files
curl -L 'https://github.com/joyent/triton-terraform/releases/download/0.0.2/terraform.0.0.2.darwin.tar.gz' | tar xz --strip-components 1 -C ~/terraform/bin # download and extract archive
With everything downloaded to your ~/terraform/bin directory, you can put that directory anywhere you want (I typically put it in /usr/local/), but you can also use it right where it is. You will, however, probably want to set your $PATH so you can execute the files in there:
export PATH=$PATH:~/terraform/bin # set my path to include the Terraform bin.
That will set the path for just this terminal session, but StackOverflow has some instructions on how to do it permanently.
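For example, here's one sketch of making the change permanent, assuming you use bash and ~/.bash_profile (other shells read different profile files):

```shell
# Persist the PATH change for future shells (bash assumed).
PROFILE="$HOME/.bash_profile"
LINE='export PATH=$PATH:~/terraform/bin'
# Append the export line only if it isn't already present, so
# re-running this snippet is harmless.
grep -qxF "$LINE" "$PROFILE" 2>/dev/null || echo "$LINE" >> "$PROFILE"
```

Open a new terminal (or `source` the profile) for the change to take effect.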
However you set the path, you should be able to test your fresh install by typing terraform in your terminal:
$ terraform
usage: terraform [--version] [--help] <command> [<args>]

Available commands are:
    apply      Builds or changes infrastructure
    destroy    Destroy Terraform-managed infrastructure
    get        Download and install modules for the configuration
    graph      Create a visual graph of Terraform resources
    init       Initializes Terraform configuration from a module
    output     Read an output from a state file
    plan       Generate and show an execution plan
    push       Upload this Terraform module to Atlas to run
    refresh    Update local state file against real resources
    remote     Configure remote state storage
    show       Inspect Terraform state or plan
    taint      Manually mark a resource for recreation
    version    Prints the Terraform version
Configuring Terraform for Triton
If you've got an existing version of Terraform installed and you jumped down here without doing the steps above, please go back and install our custom build. You'll need the terraform-provider-triton and terraform-provider-docker binaries provided in there. If you're all set with that, then let's go try it all out.
Terraform stores the description of your infrastructure in .tf files. We're going to build up a triton.tf file that includes a variety of compute resources that Triton supports. Let's use Terraform to download our example files to get started:
terraform init github.com/joyent/triton-terraform/examples/triton-docker-combined triton-terraform-demo
cd triton-terraform-demo
mv terraform.tfvars.example terraform.tfvars
Edit the terraform.tfvars file so that it has your credentials:
triton_account = ""
triton_url = "https://us-sw-1.api.joyentcloud.com"
triton_key_path = "~/.ssh/id_rsa"
triton_key_id = "your key fingerprint, for example: 2e:c9:f9:89:ec:78:04:5d:ff:fd:74:88:f3:a5:18:a5"
docker_cert_path = "/Users//.sdc/docker/"
docker_host = "tcp://us-sw-1.docker.joyent.com:2376"
To generate your SSH key fingerprint for triton_key_id, you can use ssh-keygen as shown below:
ssh-keygen -l -E md5 -f | cut -d' ' -f2 | cut -d: -f2-
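To see what those two cut stages do, here's the pipeline applied to a sample line in the format ssh-keygen -l -E md5 emits (the key size, fingerprint, and comment below are made up for illustration):

```shell
# Hypothetical ssh-keygen -l -E md5 output: bits, fingerprint, comment, type.
sample='2048 MD5:2e:c9:f9:89:ec:78:04:5d:ff:fd:74:88:f3:a5:18:a5 user@host (RSA)'
# The second space-delimited field is "MD5:<fingerprint>"; dropping the
# first colon-delimited field leaves just the fingerprint itself.
echo "$sample" | cut -d' ' -f2 | cut -d: -f2-
# → 2e:c9:f9:89:ec:78:04:5d:ff:fd:74:88:f3:a5:18:a5
```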
My private key is in ~/.ssh/id_rsa, so I substitute that for the triton_key_path value in the example above.
That will complete the configuration we need to create infrastructure containers and hardware virtual machines, but we'll need to do some further configuration to be able to use Terraform to also manage Docker containers.
For that, we'll need to complete the steps to get our Docker Engine set up on our laptop (or wherever we're using Terraform). That can include downloading and installing the Docker Engine (I'm on a Mac, so I use these instructions), and then configuring the Docker Engine to connect to Triton by running sdc-docker-setup.sh.
Once we've got that done, we can set the right Docker provider values in our terraform.tfvars file. Those values include both the Docker host and the cert_path. The easy way to get those details is by looking at the matching environment variables we just set when configuring the Docker Engine for Triton:
echo $DOCKER_CERT_PATH
echo $DOCKER_HOST
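If you'd rather not copy those values over by hand, something like the following can append them to terraform.tfvars directly. The fallback values after `:-`/`:=` are placeholder examples only, not real settings:

```shell
# Write the Docker connection settings into terraform.tfvars using the
# environment variables set by sdc-docker-setup.sh. The defaults below
# are illustrative placeholders for when the variables are unset.
: "${DOCKER_HOST:=tcp://us-sw-1.docker.joyent.com:2376}"
: "${DOCKER_CERT_PATH:=$HOME/.sdc/docker/example-account}"
cat >> terraform.tfvars <<EOF
docker_host = "$DOCKER_HOST"
docker_cert_path = "$DOCKER_CERT_PATH"
EOF
```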
Now here's what that provider configuration will look like in the triton.tf file:
provider "triton" {
  account = "${var.triton_account}"
  key     = "${var.triton_key_path}"
  key_id  = "${var.triton_key_id}"
  url     = "${var.triton_url}"
}

provider "docker" {
  host      = "${var.docker_host}"
  cert_path = "${var.docker_cert_path}"
}
Notice how we're importing our user and connection variables using the "${var.variable_name}" syntax. For more details, check Terraform's documentation on variables.
Usage
Now that you have the providers installed and configured, we can use them to create and destroy infrastructure. The power is truly intoxicating. Muhahhhha.
Ah, um. OK, let's move on. Here's how we can use Terraform to describe Docker containers on Triton:
resource "docker_image" "nginx" {
  name = "nginx:latest"
  keep_updated = true
}

resource "docker_container" "nginx" {
  count = 1
  name = "nginx-terraform-${format("%02d", count.index+1)}"
  image = "${docker_image.nginx.latest}"
  must_run = true
  env = ["env=test", "role=test"]
  restart = "always"
  memory = 128
  labels {
    env = "test"
    role = "docker"
  }
  log_driver = "json-file"
  log_opts = {
    max-size = "1m"
    max-file = 2
  }
  ports {
    internal = 80
    external = 80
  }
  # Docker support requires Terraform v0.6.9+ and includes restart policies, log drivers and labels.
}
We can try this out by running the command terraform plan. We should see output like:
$ terraform plan
Refreshing Terraform state prior to plan...

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ docker_container.nginx
    bridge:                    "" => "<computed>"
    env.#:                     "" => "2"
    env.1532065269:            "" => "env=test"
    env.2234656949:            "" => "role=test"
    gateway:                   "" => "<computed>"
    image:                     "" => "${docker_image.nginx.latest}"
    ip_address:                "" => "<computed>"
    ip_prefix_length:          "" => "<computed>"
    labels.#:                  "" => "2"
    labels.env:                "" => "test"
    labels.role:               "" => "docker"
    log_driver:                "" => "json-file"
    log_opts.#:                "" => "2"
    log_opts.max-file:         "" => "2"
    log_opts.max-size:         "" => "1m"
    memory:                    "" => "1024"
    must_run:                  "" => "1"
    name:                      "" => "nginx-terraform-01"
    ports.#:                   "" => "1"
    ports.1516735375.external: "" => "80"
    ports.1516735375.internal: "" => "80"
    ports.1516735375.ip:       "" => "<computed>"
    ports.1516735375.protocol: "" => "tcp"
    restart:                   "" => "on-failure"

+ docker_image.nginx
    keep_updated: "" => "1"
    latest:       "" => "<computed>"
    name:         "" => "nginx:latest"

Plan: 2 to add, 0 to change, 0 to destroy.
Aside: did you note that we didn't have to create a virtual machine to run the Docker container in first? That's because Docker containers run on bare metal as first-order objects on Triton. We think running containers on bare metal (and having the security to make it possible) makes containers easy, convenient, and fast. I could go on about container-native infrastructure, but we're here to talk about Terraform....
Next, let's add an infrastructure container to our environment using the Triton provider. We'll use container-native Ubuntu Linux for our next demonstration.
Use the sdc-listimages tool to find the image UUID of the latest Ubuntu image in this data center:
$ sdc-listimages | json -a id name version os published_at -c "this.os==='linux'" -c "/ubuntu/.test(this.name)" | tail -1
ffe82a0a-83d2-11e5-b5ac-f3e14f42f12d ubuntu-15.04 20151105 linux 2015-11-05T15:36:36Z
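The two -c filters in that json command keep only Linux images whose name matches /ubuntu/, and tail -1 takes the most recently published one. Here's the same selection sketched in plain awk over a made-up image list, so you can see exactly what's being matched:

```shell
# Columns mirror the sdc-listimages output: id, name, version, os, published_at.
# Every row below is fabricated for illustration.
images='11111111-0000-0000-0000-000000000001 centos-7 20151001 linux 2015-10-01T00:00:00Z
22222222-0000-0000-0000-000000000002 ubuntu-14.04 20150910 linux 2015-09-10T00:00:00Z
33333333-0000-0000-0000-000000000003 ubuntu-15.04 20151105 linux 2015-11-05T15:36:36Z
44444444-0000-0000-0000-000000000004 ws2012std-r2 20140919 windows 2014-09-19T15:27:00Z'
# Keep Linux images named like ubuntu, then take the last (newest) row.
printf '%s\n' "$images" | awk '$4 == "linux" && $2 ~ /ubuntu/' | tail -1
# → prints the ubuntu-15.04 row
```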
Let's add an entry to our triton.tf file for the new container:
resource "triton_machine" "testmachine" {
  name = "test-machine"
  package = "t4-standard-512M"
  image = "ffe82a0a-83d2-11e5-b5ac-f3e14f42f12d"
  count = 1
}
Let's repeat the steps to add a hardware virtual machine running Windows Server. Again we'll find our package and image UUIDs using the tools installed from smartdc earlier. Alternatively you can experiment with the newer Triton tool (also in beta development).
$ sdc-listimages | json -a id name version os published_at -c "this.os==='windows'" | tail -1
66810176-4011-11e4-968f-938d7c9edfa2 ws2012std-r2 20140919 windows 2014-09-19T15:27:00Z
Next let's add that information to our .tf file:
resource "triton_machine" "windowsmachine" {
  name = "win-test-terraform"
  package = "g3-standard-4-kvm"
  image = "66810176-4011-11e4-968f-938d7c9edfa2"
  count = 1
}
We can try this out by running the Terraform tool. Let's begin by viewing its plan by typing terraform plan:
$ terraform plan
Refreshing Terraform state prior to plan...
docker_image.nginx: Refreshing state... (ID: 9fab4090484a840de49347c9c49597ab32df23ec26bb98d7a7ec24d59dff8945nginx:latest)
docker_container.nginx: Refreshing state... (ID: 53af0b01c0f34a8bb49eef831e3b2e716fe2afb8bd3d41269277be6c1ef03dd1)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ triton_machine.testmachine
    administrator_pw:     "" => "<computed>"
    created:              "" => "<computed>"
    dataset:              "" => "<computed>"
    disk:                 "" => "<computed>"
    image:                "" => "ffe82a0a-83d2-11e5-b5ac-f3e14f42f12d"
    ips.#:                "" => "<computed>"
    memory:               "" => "<computed>"
    name:                 "" => "<computed>"
    package:              "" => "t4-standard-512M"
    primaryip:            "" => "<computed>"
    root_authorized_keys: "" => "<computed>"
    state:                "" => "<computed>"
    type:                 "" => "<computed>"
    updated:              "" => "<computed>"
    user_data:            "" => "<computed>"
    user_script:          "" => "<computed>"

+ triton_machine.windowsmachine
    administrator_pw:     "" => "<computed>"
    created:              "" => "<computed>"
    dataset:              "" => "<computed>"
    disk:                 "" => "<computed>"
    image:                "" => "66810176-4011-11e4-968f-938d7c9edfa2"
    ips.#:                "" => "<computed>"
    memory:               "" => "<computed>"
    name:                 "" => "<computed>"
    package:              "" => "g3-standard-4-kvm"
    primaryip:            "" => "<computed>"
    root_authorized_keys: "" => "<computed>"
    state:                "" => "<computed>"
    type:                 "" => "<computed>"
    updated:              "" => "<computed>"
    user_data:            "" => "<computed>"
    user_script:          "" => "<computed>"

Plan: 2 to add, 0 to change, 0 to destroy.
Note above that Terraform will check the state of docker_image.nginx and docker_container.nginx. Since they both already exist, it doesn't recreate them.
If you get a syntax error while running terraform plan, you can compare your file with this triton.tf.
Now, let's create our Windows VM using terraform apply:
$ terraform apply
docker_image.nginx: Refreshing state... (ID: 9fab4090484a840de49347c9c49597ab32df23ec26bb98d7a7ec24d59dff8945nginx:latest)
docker_container.nginx: Refreshing state... (ID: 53af0b01c0f34a8bb49eef831e3b2e716fe2afb8bd3d41269277be6c1ef03dd1)
triton_machine.windowsmachine: Creating...
  administrator_pw:     "" => "<computed>"
  created:              "" => "<computed>"
  dataset:              "" => "<computed>"
  disk:                 "" => "<computed>"
  image:                "" => "66810176-4011-11e4-968f-938d7c9edfa2"
  ips.#:                "" => "<computed>"
  memory:               "" => "<computed>"
  name:                 "" => "<computed>"
  package:              "" => "4436ebfd-fc2d-e670-b4f4-c4828f314eb3"
  primaryip:            "" => "<computed>"
  root_authorized_keys: "" => "<computed>"
  state:                "" => "<computed>"
  type:                 "" => "<computed>"
  updated:              "" => "<computed>"
  user_data:            "" => "<computed>"
  user_script:          "" => "<computed>"
triton_machine.testmachine: Creating...
  administrator_pw:     "" => "<computed>"
  created:              "" => "<computed>"
  dataset:              "" => "<computed>"
  disk:                 "" => "<computed>"
  image:                "" => "ffe82a0a-83d2-11e5-b5ac-f3e14f42f12d"
  ips.#:                "" => "<computed>"
  memory:               "" => "<computed>"
  name:                 "" => "<computed>"
  package:              "" => "t4-standard-512M"
  primaryip:            "" => "<computed>"
  root_authorized_keys: "" => "<computed>"
  state:                "" => "<computed>"
  type:                 "" => "<computed>"
  updated:              "" => "<computed>"
  user_data:            "" => "<computed>"
  user_script:          "" => "<computed>"
triton_machine.testmachine: Creation complete
triton_machine.windowsmachine: Creation complete

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
You can use terraform show to view IP addresses and other metadata:
$ terraform show | grep -i ips\.0
  ips.0 = 72.2.114.230
  ips.0 = 72.2.114.36
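If you want just the addresses themselves (say, to feed a health-check script), an awk field split works too. The sample output is inlined here as a variable so the pipeline is visible; in practice you'd pipe terraform show directly:

```shell
# Sample `terraform show` lines (addresses copied from the demo above).
show_output='  ips.0 = 72.2.114.230
  ips.0 = 72.2.114.36'
# Split each line on " = " and print the value side of every ips.0 entry.
printf '%s\n' "$show_output" | awk -F' = ' '/ips\.0/ {print $2}'
# → 72.2.114.230 and 72.2.114.36, one per line
```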
As a last step for this demo, be sure to tear down all the infrastructure instances we just created using terraform destroy and replying yes at the prompt:
$ terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

triton_machine.testmachine: Refreshing state... (ID: c6cc7012-75b6-61a8-dae4-ed9364eca5b4)
triton_machine.windowsmachine: Refreshing state... (ID: f5f459cb-7fea-c955-88b3-91e52a56d9a6)
docker_image.nginx: Refreshing state... (ID: 9fab4090484a840de49347c9c49597ab32df23ec26bb98d7a7ec24d59dff8945nginx:latest)
docker_container.nginx: Refreshing state... (ID: 53af0b01c0f34a8bb49eef831e3b2e716fe2afb8bd3d41269277be6c1ef03dd1)
triton_machine.windowsmachine: Destroying...
triton_machine.testmachine: Destroying...
docker_container.nginx: Destroying...
docker_container.nginx: Destruction complete
docker_image.nginx: Destroying...
docker_image.nginx: Destruction complete
triton_machine.testmachine: Destruction complete
triton_machine.windowsmachine: Destruction complete

Apply complete! Resources: 0 added, 0 changed, 4 destroyed.
It's a beta!
We wanted to share our work on this so we can get early feedback, but I need to emphasize that this is a beta (actually, since we haven't hit full coverage for the planned resources, this could technically be an alpha). We expect questions and even some bugs. Please report what you find as GitHub issues.
I also need to thank the team at Aster.is for their work and expertise. They're doing the heavy lifting to make this work on Triton and improve Docker provider support as well. They're hard at work now to build support for managing network fabrics and NICs via Terraform, as well as Packer support to create images.
Post written by Drew Miller & Casey Bisson