Test drive Joyent's elastic container infrastructure for Docker
We're hard at work preparing Joyent's next-generation Docker service. This post provides instructions on how you can get an early preview by running the alpha code on your laptop or lab hardware. Before diving in, though, consider watching Bryan Cantrill's Future of Containers in Production talk, which outlines some of the advances we're making with this next-generation container service (e.g. simplified Docker host management, software-defined networking, full-stack introspection, cloud-grade security, and bare metal performance for Docker containers). Jump to the 46-minute mark if you just want a short overview and demo.
Let's dive in.
Get CoaL running
Joyent's Docker magic is part of Triton Elastic Container Infrastructure (formerly SmartDataCenter plus a number of newly developed components), the open-source software that powers our public cloud and on-premises private cloud offerings. Triton Elastic Container Infrastructure typically runs on racks of hardware, but for testing and development we run it on our laptops in VMware. We call that CoaL, or cloud on a laptop.
SmartDataCenter is the open-source foundation for Triton Elastic Container Infrastructure, so follow the directions in the SmartDataCenter repo to get started. You can do this install on standalone hardware, but the instructions below assume CoaL for simplicity. If you do go the hardware route, the biggest change will be substituting the correct IP addresses at a few points.
I run all this in VMware Fusion 7 (not the Pro version) on my MacBook Pro, but there's documentation for configuring everything on VMware on Windows and Linux as well. Be sure to give the VM lots of RAM. I've been able to run it with 4GB for a short time, but it soon runs out of RAM and things fall apart. Even with 8GB of RAM you should expect some swapping, especially as you bring up a number of containers. Also, keep in mind that it's running in a VM on your laptop, so set your performance expectations accordingly.
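If you prefer editing config files to clicking through VMware's settings UI, the VM's RAM can also be set in its .vmx file while the VM is powered off. This is a minimal sketch, assuming a stock VMware Fusion setup; the exact file location depends on where your CoaL VM was unpacked:
# in the CoaL VM's .vmx file (location depends on your install)
# memsize is in MB, so 8192 gives the VM 8GB; numvcpus is optional
memsize = "8192"
numvcpus = "2"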
Once you've completed the installation steps in the documentation there, ssh into your new SDC head node:
ssh root@10.88.88.200
The password is one you entered during the install steps earlier. The 10.88.88.200 IP is the public network IP assigned during install. In a production scenario you'd probably want to firewall that off so that the head node is only reachable from the admin network, but while we're testing (especially in the CoaL context) we'll leave that open.
The first post-install task is to upgrade the sdcadm utility, like so:
sdcadm self-update
With that done, you can complete the common post-install steps that will prepare your SDC CoaL install for development and testing.
These three sdcadm post-setup commands will get you on your way by setting up network interfaces on the public network, enabling CloudAPI, and setting provisioning rules so you can provision containers on the head node. You'd probably never want to provision containers on the head node in a production environment, but testing is a lot easier if we allow that. That's what the last command in the group does for us.
sdcadm post-setup common-external-nics
sdcadm post-setup cloudapi
sdcadm post-setup dev-headnode-prov
Those commands will give you a good, solid platform to explore Triton Elastic Container Infrastructure. If you're running on CoaL, you should consider taking a VMware snapshot now so you can go back to a reliable, non-experimental state easily.
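If you'd rather script that snapshot than use the Fusion UI, VMware's vmrun tool can take it from the command line. The .vmx path below is a placeholder; point it at wherever your CoaL VM actually lives:
vmrun -T fusion snapshot ~/path/to/your/coal-vm.vmx "post-setup-baseline"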
Install the Triton Engine for Docker
After all that, installing SDC-Docker is a breeze. The repo includes some installation docs, but because we've already done some of the work I'll include all the steps here.
The following command will set up and enable the Triton Engine for Docker support on the head node.
sdcadm experimental update-docker
That will likely take a while to complete, so spend some time exploring the operations portal and APIs (see below).
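When it finishes, you can sanity-check that the new services landed. This is a hedged suggestion rather than part of the official procedure, but sdcadm instances lists the service instances running on the head node, and grepping for docker should show the new pieces:
sdcadm instances | grep -i docker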
Setting up the Docker hosts
Okay, that's me trolling you. There are no Docker hosts, not as you might be thinking of them, and this is where the magic happens.
Wouldn't it be great if the same tools that allow us to turn a data center full of server hardware into virtual machines could be used to provision Docker containers? That's exactly what we're doing with Triton Elastic Container Infrastructure. Rather than provision a virtual machine, and then provision containers inside it, we're making Docker containers first class citizens in the infrastructure. If you're not familiar with the OS-based virtualization technology that makes this possible and secure, you should definitely watch Bryan's talk from the beginning.
Instead of spinning up, paying for, and managing Docker hosts, or a whole cluster of Docker hosts, you just launch containers and let the infrastructure layer take care of the rest. And the fantastic part is that the Docker containers run on the metal, rather than inside a virtual machine, so they run faster and you can pack more of them on the same hardware.
Triton Elastic Container Infrastructure with Docker Engine for Triton is like one giant Docker host for each user. It really is that simple.
Connecting to that giant Docker host starts by fetching this shell script that's used to set the Docker environment variables.
curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/docker-client-env && chmod +x docker-client-env
Once you have the shell script, execute it inside backticks so that its output can set the environment variables in your current shell.
`./docker-client-env root@10.88.88.200`
Pause for a moment and consider that the shell script is outputting another shell script, which is executing in your shell's context to set the environment variables. If only the shell script could output its own source code....
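To confirm it worked, check the variables the script just set. DOCKER_HOST is the one the rest of this walkthrough depends on; the grep below is a quick sanity check rather than an official verification step:
env | grep DOCKER
# DOCKER_HOST should now be a tcp:// URL pointing at the Docker service's external IP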
Playing with Docker, or "breaking alpha"
Your laptop is a data center with just one node, so this is where playing with real hardware can be more fun. CoaL in this context is like boot2docker, so you'll just have to imagine doing the following in a data center with 10 or 1,000 or more compute nodes.
Start out with a docker info.
docker info
If you don't have a docker client, see Docker's installation instructions.
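If you're on a Mac and just need the client binary (not a full Docker host), one hedged shortcut is Homebrew's docker formula, which installs only the CLI:
brew install docker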
Now let's try running a real container.
docker run --rm busybox echo "Why howdy there, now you're rocking with sdc-docker. Boom."
The result should look something like this:
Unable to find image 'busybox:latest' locally
Pulling repository library/busybox
4986bf8c1536: Download complete.
511136ea3c5a: Download complete.
df7546f9f060: Download complete.
ea13149945cb: Download complete.
Status: Downloaded newer image for library/busybox:latest
Why howdy there, now you're rocking with sdc-docker. Boom.
Keep it going with a simple benchmarking container I created that reports some CPU and write performance numbers. We can run that like so:
CID=$(docker run -d \
  -e "TERM=xterm" \
  -e "MANTA_URL=$MANTA_URL" \
  -e "MANTA_USER=$MANTA_USER" \
  -e "MANTA_KEY_ID=$MANTA_KEY_ID" \
  -e "SKEY=`cat ~/.ssh/id_rsa`" \
  -e "SKEYPUB=`cat ~/.ssh/id_rsa.pub`" \
  -e "DOCKER_HOST=$DOCKER_HOST" \
  misterbisson/simple-container-benchmarks)
Let's inspect that Docker container:
docker inspect $CID
Take note that the container has an IP address all its own. If you've ever had to struggle with port mapping or collisions, you can do a little dance here, because Joyent's container technology solves that frustration. If you haven't struggled with this problem and you just think every container should have a unique IP, then know that we agree with you, but please let your neighbor dance in joy all the same. Actually, if you like dancing, or just want to see some backstory on this networking magic, this is another good time to mention Bryan's talk.
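If you want just that IP without reading the whole inspect blob, the Docker client's --format flag can extract it. The field path below is the standard one for Docker clients of this vintage; adjust it if your client's JSON layout differs:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CID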
If the container is still running, we can exec into a shell in the container with the following:
docker exec -it $CID bash
And with that you'll be in the container! Go ahead, run htop and see the isolated processes.
The container will exit when it's done and you'll be returned to your terminal, or you can control + d to exit sooner. Once back at your normal terminal, inspect the logs to see the benchmarks:
docker logs $CID
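If the benchmarks are still running, you don't have to wait; docker logs -f follows the output as it's written:
docker logs -f $CID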
Now go try running a container of your own. Keep in mind that this is alpha-quality. Expect it to break somewhere. When you do find an error, tell us about it in the sdc-discuss mailing list.
I haven't bothered to cover the TLS cert implementation, as that's in active development as of this writing, but take a look at the docs to see how to enable it and where it's going.
You're on your own now, but take another peek in the operations portal to see the list of containers in use (https://10.99.99.31/vms for CoaL users). That shows both Docker containers and SDC containers, and gives you some additional visibility into all of them.
Extra: take a look inside the SDC operations portal
The first step in signing into the operations portal is to figure out what the IP address is. For that, let's ssh into the headnode again and get a list of all the components of SDC with the sdc-role list command.
ssh root@10.88.88.200
...and after entering your password:
sdc-role list
That should return something like the following:
ALIAS     SERVER    UUID                                  RAM   STATE    ROLE     ADMIN_IP
adminui0  headnode  ebe25f0c-a2ed-4154-800d-e36644e043be  2048  running  adminui  10.99.99.31
amon0     headnode  ...and so on
You should see 23 different components, each running in its own container in a microservices architecture. The first line is for the adminui container that hosts the operations portal. Point your browser at the IP address there to sign in. For me (and probably for you), https://10.99.99.31 is where I'll find the ops portal.
Now, if you're doing this on separate hardware, you may have to look up the public IP of the admin UI. Again, you'd probably not want to expose this on the public internet in a production situation, but for testing.... To get that IP, let's inspect the adminui container's details, filtering out just the network interfaces:
vmadm lookup -j tags.smartdc_role=~adminui | json -aH nics
You can chain some more json filters in there to get just the external IP address:
vmadm lookup -j tags.smartdc_role=~adminui | json -aH nics | json -aH -c 'nic_tag == "external"' ip
Whatever IP you use to connect to the operations portal, take a look inside. You'll be able to see your Docker instances, networks, and total system resources for the entire data center.
Extra: upgrading the platform image
The platform images (PI) are what each of the microservices runs. Upgrading these images (collectively just called "the image") is how we upgrade SDC. This shouldn't be necessary when first installing, but it's important as a way to stay up to date in the days that follow the initial install. This is much easier and faster than the old way, but it'll still take a while to download all the updates.
sdcadm self-update
sdcadm update --all -y
sdcadm experimental update-docker
You might have to re-run those commands a few times; just continue until they complete without error.
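If you'd rather not babysit those retries, a small shell loop can repeat the update step until it exits cleanly. This is just a convenience sketch, not part of the documented upgrade procedure:
# keep retrying the full update until sdcadm exits 0
until sdcadm update --all -y; do
  echo "update failed, retrying in 30 seconds..." >&2
  sleep 30
done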
Version history
- March 25: Updated with Triton branding, see announcement blog posts from Bryan Cantrill and Casey Bisson.
Post written by Casey Bisson