Modern application blueprint: Node.js + Docker + NoSQL
Community support notice
This application blueprint is not being maintained by Joyent and may no longer work as described.
Please see our tutorial on building Autopilot Pattern applications and our Autopilot Pattern WordPress implementation. The original blueprint is archived below.
Bug reports and pull requests from the community are welcome. Please file them at github.com/autopilotpattern/touchbase.
Original post
One of the most confusing challenges developers and operators face when building modern applications is how to design them for easy deployment and scaling. The most critical piece of that is automating how the application's components connect to one another, in a way that works as well on our laptops as it does in production.
This post demonstrates how to solve those problems in the context of a Node.js application backed by Couchbase and load balanced by Nginx. All the components run in multiple Docker containers on Triton and use ContainerPilot to automate discovery and configuration. We're using Docker Compose to deploy the application and scale it across the data center on Triton. Finally, this post also demonstrates how to use Triton Container Name Service (CNS), the free and automated global DNS built into Triton, to make it easy for users on the internet to find our application.
All the code for this application is on GitHub, so check it out and follow along below.
The shopping list
The application we're going to put together is Touchbase, a Node.js application. We'll need the following pieces:
- Touchbase, the Node.js app at the center of the stack
- Nginx, acting as a load balancer for Touchbase nodes
- Couchbase, for the data tier
- Consul, acting as a discovery service
- ContainerPilot, to help with service discovery
- Triton, our container-native infrastructure platform
- Triton Container Name Service, the free, automated DNS built into Triton
This stack could be used for many different applications, and the individual components can be swapped out easily. Prefer HAProxy to Nginx? No problem - just update the `docker-compose.yml` file with the image you want to use.
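To make that concrete, here's a minimal sketch of what the load balancer's entry in a v1 `docker-compose.yml` might look like; the image tag and port are illustrative, not the repo's exact values:

```yaml
nginx:
    # swap this image for an HAProxy one to change load balancers
    image: autopilotpattern/nginx:latest
    ports:
        - 80
    restart: always
```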
Touchbase
The Touchbase Node.js application was written by Couchbase Labs as a demonstration of Couchbase 4.0's new N1QL query features. It wasn't designed specifically for a container-native world, so we're using ContainerPilot to allow it to fulfill our requirements for service discovery.
Touchbase uses Couchbase as its data layer. We can have it serve requests directly, but we're going to put the Nginx load balancer in front of it because we want Nginx's ability to perform zero-downtime configuration reloads. (Also, in a production project we might have multiple applications behind Nginx.) We're going to use a fork of Touchbase that eliminates the requirement to configure SendGrid, because setting up transactional email services is beyond the scope of this article.
The Touchbase service's ContainerPilot has an `onChange` handler that calls out to `consul-template` to write out a new `config.json` file based on a template that's stored in Consul. Unfortunately, Touchbase does not support a graceful reload, so in order to give Touchbase an initial configuration with a Couchbase cluster IP we'll need a pre-start script that writes that first configuration before the application launches. Having the option to run the `onChange` handler or another startup script before forking the main application would be a great feature to add to ContainerPilot, and I'll circle back on that in an upcoming post.
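For reference, a minimal sketch of that kind of ContainerPilot configuration is below. The field names follow ContainerPilot's JSON config format, but the port, health check, and template paths are assumptions rather than the repo's actual values:

```json
{
    "consul": "consul:8500",
    "services": [
        {
            "name": "touchbase",
            "port": 3000,
            "health": "/usr/bin/curl --fail -s -o /dev/null http://localhost:3000/",
            "poll": 10,
            "ttl": 25
        }
    ],
    "backends": [
        {
            "name": "couchbase",
            "poll": 10,
            "onChange": "consul-template -once -consul consul:8500 -template '/etc/config.json.ctmpl:/opt/touchbase/config.json'"
        }
    ]
}
```

The `backends` block is what fires the `onChange` handler whenever the set of Couchbase nodes registered in Consul changes.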
Nginx
The Nginx virtualhost config has an `upstream` directive to run a least-conns load balancer for the backend Touchbase application nodes. When Touchbase nodes come online, they'll register themselves with Consul.
Just like in our original ContainerPilot example project, the Nginx service's ContainerPilot has an `onChange` handler that calls out to `consul-template` to write out a new virtualhost configuration file based on a template that we've stored in Consul. It then runs `nginx -s reload`, which signals Nginx to gracefully reload its configuration.
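A `consul-template` template for that virtualhost config might look like the sketch below; the service name, port, and upstream label are assumptions:

```
upstream touchbase {
    # least-conns balancing across whatever Touchbase nodes are in Consul
    least_conn;
    {{ range service "touchbase" }}
    server {{ .Address }}:{{ .Port }};
    {{ end }}
}

server {
    listen 80;
    location / {
        proxy_pass http://touchbase;
    }
}
```

Each time the set of healthy `touchbase` instances in Consul changes, rendering this template produces an updated `server` list, and the graceful reload picks it up without dropping connections.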
Couchbase
Couchbase is a clustered NoSQL database. We're going to use the blueprint for clustered Couchbase in containers written by my Joyent colleague Casey Bisson. It uses the Autopilot Pattern Couchbase repo for Couchbase 4.0 to get access to the new N1QL feature.
When the first Couchbase node starts, we use `docker exec` to bootstrap the cluster and register the first node with Consul for discovery. We'll then run the appropriate REST API calls to create Couchbase buckets and indexes for our application. From there, we can add new nodes via `docker-compose scale`; those nodes will pick up a Couchbase cluster IP from Consul, at which point we hand off to Couchbase's own self-clustering.
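That bootstrap step looks roughly like the sketch below. The endpoints are part of Couchbase's REST API, but the container name, credentials, and bucket parameters are illustrative assumptions:

```bash
# find the IP of the first Couchbase container (container name is an example)
COUCHBASE_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' couchbase_1)

# set the cluster's admin credentials on the first node
curl -s -X POST "http://${COUCHBASE_IP}:8091/settings/web" \
    -d "username=${COUCHBASE_USER}&password=${COUCHBASE_PASS}&port=8091"

# create a bucket for the application (name and quota are examples)
curl -s -u "${COUCHBASE_USER}:${COUCHBASE_PASS}" \
    -X POST "http://${COUCHBASE_IP}:8091/pools/default/buckets" \
    -d "name=users&ramQuotaMB=512&authType=sasl&replicaNumber=1"
```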
Running the example
You can run this entire stack using the `start.sh` script found at the top of the GitHub repo. Once you're ready:
- Get a Joyent account.
- Install the Docker Toolbox (including `docker` and `docker-compose`) on your laptop or other environment, as well as the Joyent CloudAPI CLI tools (including the `smartdc` and `json` tools).
- Configure Docker and Docker Compose for use with Joyent (see the sketch after this list).
- Download or clone the repo.
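For the Joyent configuration step, Joyent publishes a helper script that generates the TLS certificates and environment variables the Docker CLI needs to talk to Triton. The data center URL, account name, and key path below are examples, so substitute your own:

```bash
# download Joyent's setup script for the Docker CLI
curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/sdc-docker-setup.sh

# generate certificates and print the environment settings for your account
bash sdc-docker-setup.sh us-east-1.api.joyent.com mylogin ~/.ssh/id_rsa

# export the DOCKER_HOST, DOCKER_CERT_PATH, and DOCKER_TLS_VERIFY values
# the script prints before running docker or docker-compose
```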
At this point you can run the example on Triton:
```bash
./start.sh env
# here you'll be asked to fill in the .env file
./start.sh
```
or in your local Docker environment (note that you may need to increase the memory available to your docker-machine VM to run the full-scale cluster):
```bash
./start.sh env
./start.sh -f docker-compose-local.yml
```
The `.env` file that's created will need to be filled in with the values described below:

```
COUCHBASE_USER=
COUCHBASE_PASS=
```
As the start script runs, it will launch the Consul web UI and the Couchbase web UI. Once Nginx is running, it will launch the login page for the Touchbase site. Take note of the domain name for each of these components: they're each using the automated Triton CNS feature.
At this point there is only one Couchbase node, one application server, and one Nginx server, and you will see the message:

```
Touchbase cluster is launched!
Try scaling it up by running: ./start.sh scale
```
If you do so, you'll be running `docker-compose scale` operations that add two more Couchbase and Touchbase nodes and one more Nginx node. You can watch the nodes come online in the Consul and Couchbase web UIs. Each new instance is automatically added to the DNS records by Triton CNS.
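Under the hood, those operations are roughly equivalent to the following, assuming one instance of each service to start with:

```bash
docker-compose scale couchbase=3    # two more Couchbase nodes
docker-compose scale touchbase=3    # two more Touchbase nodes
docker-compose scale nginx=2        # one more Nginx instance
```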
Using your own domain name
The Nginx containers that front this app are globally discoverable using the DNS names Triton CNS generates. On Joyent's cloud, those names match the following pattern, though private cloud implementations may use a different one:

```
<service name>.svc.<account uuid>.<data center name>.triton.zone
```
If Triton CNS is activated for your account, the `start.sh` script will output the Nginx service domain in the terminal (if CNS is not active, it will output an IP address instead). On compatible platforms, it will even open the default web browser with that domain name.
However, the CNS-generated domain name isn't easy to remember or type. To solve that, we can assign a CNAME from our own domain to the CNS domain name.
In BIND DNS, that would look like the following:

```
<your hostname>  IN  CNAME  <service name>.svc.<account uuid>.<data center name>.triton.zone.
```
That CNAME connects my domain name to the Triton CNS-generated domain name, so any request to my domain name will be sent to the Nginx containers at the front of the Touchbase app. As I scale the number of Nginx instances up and down with demand, or replace instances as I update them, the new IPs of those instances will be reflected automatically in DNS responses. Triton CNS makes it easy to globally discover my application.
Wrapping up
The stack we've built here highlights the advantages of Dockerizing this application. We have an easy, repeatable deployment that we can test locally and then push unchanged to production. We have the automatic discovery and configuration that makes that deployment possible. We have easy horizontal scaling, with fine-grained control over the scale of each tier. And we have global discovery, thanks to Triton Container Name Service, the free and automated DNS built into Triton.
We've used ContainerPilot as an example of the minimal shims that are required to make arbitrary applications container-native. And now that we've seen a production-ready multi-tier application assembled on Triton, we can see that container-native service discovery can be agnostic to any particular scheduling framework. This made it easy to connect components that were never designed to be containerized.
Deploying on Triton made all this even easier. In an environment where application containers have their own NIC(s), as they do on Triton, we can rely on application containers updating the discovery service without a heavyweight scheduler. That means you can use simple tools like Docker Compose to deploy and link containers without any additional software. With Triton's container-native infrastructure there's no need to provision virtual machines, and Triton charges per container, so it's easy to keep track of how your costs will scale with your app. You can deploy on Triton in the Joyent public cloud or in your own data center (it's open source!). Just configure Docker and press the `start.sh` button!
Post written by Tim Gross