Docker bake-off: AWS vs Joyent
In identifying the seven characteristics of container-native infrastructure, I knew many of them would need a demonstration. True container-native infrastructure is so unlike the alternatives that it really must be seen to be believed or even understood. What we need is a container infrastructure bake-off.
I needed an opponent for this bake-off, so I picked AWS, but I invite you to try this on any infrastructure you choose. If you don't want to go through that effort, I'm pretty sure that what I found running Docker on AWS is indicative of what you'll find running Docker on any other major public cloud, or on a private cloud backed by VMware or OpenStack.
Round 1: set up your "cluster"
Amazon's EC2 Container Service is like so many others in that you have to start up and pay for a "cluster" of virtual machines on which the containers will run, so the first step on AWS is to start those EC2 instances. Actually, the first step is to install and configure their CLI tools and set up IAM roles and permissions.
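If you've never configured the AWS CLI, the initial setup is roughly this (a minimal sketch; the region shown is just an example, and the IAM role and permission setup beyond it is covered in AWS' docs):

aws configure
# AWS Access Key ID [None]: <your access key>
# AWS Secret Access Key [None]: <your secret key>
# Default region name [None]: us-west-2
# Default output format [None]: json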
We're just testing, so I'll start a single VM cluster for now. We'll start that instance with AWS' optimized AMI with Docker pre-installed. Be sure to substitute your values for key-name and security-group-ids if you're playing along at home.
AWSIID=$(aws ec2 run-instances --image-id ami-4d3c1a7d --count 1 \
    --instance-type c4.large --key-name aws-oregon --security-group-ids sg-d8ddf2bd \
    | json -aH Instances | json -aH InstanceId)
time aws ec2 wait instance-running
If everything worked, that probably blocked for about 30 seconds. We didn't have to time it, but I was curious. The results I typically see look like this:
real    0m31.450s
user    0m0.732s
sys     0m0.182s
You could have done this via the web UI as well, but it's more repeatable for me to do it on the command line. No matter how you do it, though, you should now have a one-VM cluster on AWS' EC2 Container Service.
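If you want to confirm the instance is really up before moving on, the same describe-instances pipeline used later in this post will return its public DNS name once it's running (a quick sanity check, not a required step):

aws ec2 describe-instances --instance-ids $AWSIID | json -aH Reservations | json -aH Instances | json -aH PublicDnsName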
So what do we have to do to spin up a cluster on Joyent's Triton Elastic Infrastructure for Docker? Nothing.
Seriously, once you've set up the Docker CLI you're pretty much ready to go. You can see all the steps in my previous blog post, including the optional steps to install Joyent's CLI tools. But, because our service is container-native, you emphatically don't have to start up any VMs. You can run your Docker containers across entire data centers without ever creating a "cluster" as other IaaS providers would have you do.
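For reference, "setting up the Docker CLI" for Triton amounts to exporting the standard Docker environment variables; the endpoint and certificate path below are placeholders rather than real values, and the setup script covered in that earlier post generates them for you:

export DOCKER_CERT_PATH=~/.sdc/docker/<your-account>
export DOCKER_HOST=tcp://<triton-docker-endpoint>:2376
export DOCKER_TLS_VERIFY=1
docker info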
Score
I'm giving this one solidly to Joyent on the ops and accounting fronts. I don't mean to perpetuate a stereotype, but this really isn't a development problem, so I've awarded no scores there.
Judge | AWS | Joyent |
---|---|---|
Dev | n/a | n/a |
Ops | Lose | Win |
Finance | Lose | Win |
Container-native difference: There's no need to pre-provision and pay for virtual machines or bare metal on container-native public clouds. Eliminating lifecycle management for those resources saves time and money.
Round 2: run a Docker container already
A quick read through AWS' Container Service docs suggests running containers there requires learning a new API. Registering "tasks" and running them requires the AWS CLI. I can't find any documentation that suggests we can use that API to develop and test tasks on our laptops, so we'll have to pay for time on AWS' cloud just to test it.
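For a sense of what that workflow involves, running even one container through ECS looks roughly like this (a hedged sketch, not the exact steps; the task family, cluster name, and container definition here are made up for illustration):

aws ecs register-task-definition --family benchmarks --container-definitions '[
    {"name": "benchmarks", "image": "misterbisson/simple-container-benchmarks", "cpu": 512, "memory": 256}
]'
aws ecs run-task --cluster default --task-definition benchmarks --count 1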
AWS' notion of tasks might be a great, if incompatible, alternative to Fig and now Docker Compose, but I just want to run a single container. For that I'll ssh into the VM I created earlier (note that I captured the instance ID in $AWSIID above):
ssh ec2-user@$(aws ec2 describe-instances --instance-ids $AWSIID | json -aH Reservations | json -aH Instances | json -aH PublicDnsName)
The Docker daemon is already installed, so we can start the container as soon as we're in. I'm using a simple benchmarking container for the tests we're doing throughout this post, but here I'm just running it to get the date:
docker run --rm -it misterbisson/simple-container-benchmarks date
Let's run that exact same command on Joyent as well.
The first difference you'll notice is that there's no separate API to start Docker containers. We just use the Docker API, which you can access from the Docker CLI. The second difference is that we don't have to ssh into a Docker host. The configuration done in the previous step was to set up the Docker CLI on our laptops (or elsewhere) so that we can securely access the API to start Docker containers remotely.
So, just run the command above and you're done.
Score
Joyent's straightforward use of the native Docker API, instead of layering proprietary APIs on top, preserves our ability to develop and test the entire workflow on our laptops, a big win for both dev and ops.
Judge | AWS | Joyent |
---|---|---|
Dev | Lose | Win |
Ops | Lose | Win |
Finance | n/a | n/a |
Container-native difference: admittedly, this one isn't so much about being container-native as being Docker-native, and the convenience and workflow portability that allows.
Round 3: container performance
None of us are running containers just to get the date, though, so let's try something more interesting.
The container we fetched earlier does some very simple benchmarks on filesystem and CPU performance. Let's run it to see how the two environments perform. Some of these tests will take a while, so you'll want multiple terminal windows or sessions running.
Let's run the image in server mode first. Do this on both the AWS instance and Joyent:
docker run -d \
    -m 4g \
    -p 80:80 \
    -p 5001:5001 \
    --name=simple-container-benchmarks-server-4g \
    misterbisson/simple-container-benchmarks
The server container executes some commands triggered by a client container. The command below will start the Docker image in client mode, then poll for it to complete and output the logs when done. This will block your terminal, so those multiple windows will be especially useful. Let's try this on both AWS and Joyent:
docker run -d \
    -m 256m \
    -e "DOCKER_HOST=$DOCKER_HOST" \
    -e "TARGET=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' simple-container-benchmarks-server-4g)" \
    --name=simple-container-benchmarks-client-4g \
    misterbisson/simple-container-benchmarks
i=1; while [ $i -gt 0 ]; do
    i=$(docker inspect --format '{{ .Name }}' $(docker ps -q) | grep simple-container-benchmarks-client-4g | wc -l)
    t=$(date "+%H:%M:%S")
    printf "%s %03d containers running\n" $t $i
    sleep .3
done
docker logs simple-container-benchmarks-client-4g
Take another look at that docker run command while we wait for these benchmarks to complete. It uses the Docker API as a service directory for discovery of the server container. That might be more interesting later....
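Pulled out on its own, the discovery step is just this docker inspect call against the server container started above; it returns the server's IP address, which the client receives as its TARGET:

docker inspect --format='{{.NetworkSettings.IPAddress}}' simple-container-benchmarks-server-4g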
Are the benchmarks done yet? If not, here's what I found from the AWS c4.large instance with 3.75GB of RAM and 2 vCPUs (repeated test lines truncated):
Client mode...
Target: 172.17.0.3
------------------------------
Performance benchmarks
------------------------------
dockerhost:
host: 172.17.0.3 72fd691cbb6d
eth0: 172.17.0.3
date: Mon Mar 9 06:05:05 UTC 2015
------------------------------
FS write performance
------------------------------
1073741824 bytes (1.1 GB) copied, 19.2424 s, 55.8 MB/s
...
1073741824 bytes (1.1 GB) copied, 20.0602 s, 53.5 MB/s
------------------------------
CPU performance
------------------------------
268435456 bytes (268 MB) copied, 40.9638 s, 6.6 MB/s
...
268435456 bytes (268 MB) copied, 42.013 s, 6.4 MB/s
------------------------------
System info
------------------------------
             total       used       free     shared    buffers     cached
Mem:       3858732     772624    3086108        116      44696     536740
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    2
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Stepping:              2
CPU MHz:               2900.042
BogoMIPS:              5800.08
Hypervisor vendor:     Xen
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              25600K
NUMA node0 CPU(s):     0,1

real    10m25.200s
user    0m0.064s
sys     0m0.032s
The time at the bottom is the total time to run all the benchmarks inside the container. There's some information about the system, but the yummy details are in the "FS write performance" and "CPU performance" sections.
Let's say it right now: benchmarking is a sport with very little relationship to the actual performance of a system. These benchmarks are intentionally simple, both because it makes them easy to run and repeat anywhere, and because too many people have wasted too much time trying to create "better" benchmarks that are no more indicative of how your app will perform on a system than these are. Read the github repo for more background.
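If you'd rather not read through the repo, the tests boil down to timed dd runs, roughly like these (approximations of what the container does, not its exact commands):

# FS write: copy 1GB of zeros to disk and report throughput
dd if=/dev/zero of=/tmp/benchmark-file bs=1M count=1024
# CPU: read 256MB from /dev/urandom, which is CPU-bound, and report throughput
dd if=/dev/urandom of=/dev/null bs=1M count=256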
Still, this is a fun sport, and more fun now that we can compare that against the same benchmarks on Joyent's infrastructure:
Client mode...
Target: 165.225.169.84
------------------------------
Performance benchmarks
------------------------------
dockerhost: tcp://165.225.168.25:2376
host: 165.225.169.84 b08d0738f855
eth0: 165.225.169.84
date: Mon Mar 9 06:36:51 UTC 2015
------------------------------
FS write performance
------------------------------
1073741824 bytes (1.1 GB) copied, 2.19373 s, 489 MB/s
...
1073741824 bytes (1.1 GB) copied, 2.41978 s, 444 MB/s
------------------------------
CPU performance
------------------------------
268435456 bytes (268 MB) copied, 6.73823 s, 39.8 MB/s
...
268435456 bytes (268 MB) copied, 6.33424 s, 42.4 MB/s
------------------------------
System info
------------------------------
             total       used       free     shared    buffers     cached
Mem:       4194304      54228    4140076          0          0          0
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                0
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Stepping:              2
CPU MHz:               2599.946

real    1m41.687s
user    0m0.241s
sys     0m0.286s
Those bottom-line times say a lot: 10m25.200s on AWS vs. 1m41.687s on Joyent. The biggest difference there is in the I/O, where Joyent's container architecture trounced the competition. Benchmarking can be a bloody sport.
Score
Faster performance, especially a gap this large, is a big Joyent win for the dev, ops, and finance judges. It could mean running fewer or smaller containers to handle the same load, or delivering substantially better performance for the same price.
Judge | AWS | Joyent |
---|---|---|
Dev | Lose | Win |
Ops | Lose | Win |
Finance | Lose | Win |
Container-native difference: Containers running on Joyent's bare metal container hypervisor, rather than inside a virtual machine, run faster with much higher I/O performance.
Round 4: container networking
Very few real apps run in just one or two containers. Most apps are composed of a number of connected containers, making networking between containers (and containers on multiple compute nodes) a significant factor in deploying them at scale.
Let's stick with our benchmarking container, but run a few client-server pairs. I've wrapped the docker run command in a loop to start three pairs of containers. Here's the command to run; let's start on Joyent:
i=0; while [ $i -lt 3 ]; do
    docker run -d -m 1g -c 64 -p 80:80 -p 5001:5001 \
        --name=simple-container-benchmarks-server-1g-$i \
        misterbisson/simple-container-benchmarks && \
    docker run -d -m 256m \
        -e "DOCKER_HOST=$DOCKER_HOST" \
        -e "TARGET=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' simple-container-benchmarks-server-1g-$i)" \
        --name=simple-container-benchmarks-client-1g-$i \
        misterbisson/simple-container-benchmarks
    i=$[$i+1]
done
i=1; while [ $i -gt 0 ]; do
    i=$(docker inspect --format '{{ .Name }}' $(docker ps -q) | grep simple-container-benchmarks-client-1g | wc -l)
    t=$(date "+%H:%M:%S")
    printf "%s %03d containers running\n" $t $i
    sleep .3
done
docker inspect --format '{{ .Name }}' $(docker ps -a -q) | grep simple-container-benchmarks-client-1g | xargs -n 1 docker --tls logs
Now let's try it on AWS. Oh, snap! That's an error!
2015/03/12 18:34:29 Error response from daemon: Cannot start container 12bd87861d62b963d820968c94da830f5b611f19348c61cdd29ac14874494cca: Bind for 0.0.0.0:5001 failed: port is already allocated
You probably also saw an error about TLS verification, but that's easy to fix. The harder problem is the port conflicts.
Because published container ports are all bound on the Docker host's single network stack, no two containers can claim the same port. This is like the dark days before Apache implemented named virtual host support, when we had to run multiple daemons on different ports just to serve more than one logical site. That was a pain for ops and even worse for users. We all worked quickly to eliminate port mapping and all its complexity back then; it's a huge shame to bring it back in this more enlightened era.
Our choice now is to either spin up more hosts to run more containers, or modify our container and startup script to run on one host without port conflicts.
This is a simple container, so it's trivial to map the ports all over the place, but not every container is so pliable.
Ironically, because the client container is running on the same host as the server and connects to the server container's internal IP directly, only the host side of the server's port mapping needs to change; the client still connects to port 80. Connections between containers on the same host are handled differently than connections between containers on different hosts.
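In other words, the workaround for AWS is to vary only the host side of each server's port mapping, something like the following (the full modified loop appears in the next round):

# host ports 8080, 8081, 8082... all map to container port 80
docker run -d -p 808$i:80 --name=simple-container-benchmarks-server-1g-$i misterbisson/simple-container-benchmarks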
Score
Making containers full peers on the network, rather than isolating them in the compute node on which they run, is critical to cloud-scale deployments for Docker. Joyent's network simplicity is a win for both dev and ops, as neither has to worry about the lost performance and added complexity of burying the container under the compute node's network.
Judge | AWS | Joyent |
---|---|---|
Dev | Lose | Win |
Ops | Lose | Win |
Finance | n/a | n/a |
Container-native difference: In a container-native environment, each container has an independent IP stack, and you'll no longer have to suffer the frustration of port conflicts. The container's network address isn't complicated by which compute node it's on, or if the request to that container is coming from the same compute node or a different one.
Round 5: multiple container performance
In addition to connecting multiple containers on multiple compute nodes, we need to ensure each container runs with the performance it needs and isn't clobbered by other containers sharing the same compute node. We also need to be able to scale those containers quickly and easily.
The earlier test with three pairs of client-server containers on Joyent is probably complete by now. Let's try the same on AWS with the modified startup script that works around the port collisions to see how the performance compares.
Running containers in our single hardware VM on AWS leaves the containers competing for the same resources. To give the Docker daemon a hint at how I want to divide the resources, I've included -m 1g and -c 512 arguments in the run command for the server. This should allocate about a quarter of the CPU and memory resources to each server container. The CPU share should be about half a vCPU per server container. Let's see how it runs:
i=0; while [ $i -lt 3 ]; do
    docker run -d -m 1g -c 512 -p 808$i:80 \
        --name=simple-container-benchmarks-server-1g-$i \
        misterbisson/simple-container-benchmarks && \
    docker run -d -m 256m \
        -e "DOCKER_HOST=$DOCKER_HOST" \
        -e "TARGET=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' simple-container-benchmarks-server-1g-$i)" \
        --name=simple-container-benchmarks-client-1g-$i \
        misterbisson/simple-container-benchmarks
    i=$[$i+1]
done
i=1; while [ $i -gt 0 ]; do
    i=$(docker inspect --format '{{ .Name }}' $(docker ps -q) | grep simple-container-benchmarks-client-1g | wc -l)
    t=$(date "+%H:%M:%S")
    printf "%s %03d containers running\n" $t $i
    sleep .3
done
docker inspect --format '{{ .Name }}' $(docker ps -a -q) | grep simple-container-benchmarks-client-1g | xargs -n 1 docker logs
Wow, okay, we've been waiting a while with no result, so let's talk about the Joyent results.
First, I have to point out that I ran the Joyent side with -m 1g and -c 64 args to the run command. The CPU share number is different, but in the Joyent world it still maps to about half a vCPU, so this should be apples to apples in those terms.
In my test on Joyent right now, the longest running of the three containers finished with these numbers. Note that all ten iterations of both the FS and CPU benchmarks completed successfully.
Client mode...
Target: 165.225.169.101
------------------------------
Performance benchmarks
------------------------------
dockerhost: tcp://165.225.168.25:2376
host: 165.225.169.101 27104524a098
eth0: 165.225.169.101
date: Mon Mar 9 07:38:47 UTC 2015
------------------------------
FS write performance
------------------------------
1073741824 bytes (1.1 GB) copied, 2.90692 s, 369 MB/s
1073741824 bytes (1.1 GB) copied, 2.90281 s, 370 MB/s
1073741824 bytes (1.1 GB) copied, 2.60176 s, 413 MB/s
1073741824 bytes (1.1 GB) copied, 2.82431 s, 380 MB/s
1073741824 bytes (1.1 GB) copied, 2.54831 s, 421 MB/s
1073741824 bytes (1.1 GB) copied, 2.80067 s, 383 MB/s
1073741824 bytes (1.1 GB) copied, 2.8974 s, 371 MB/s
1073741824 bytes (1.1 GB) copied, 2.89222 s, 371 MB/s
1073741824 bytes (1.1 GB) copied, 2.62163 s, 410 MB/s
1073741824 bytes (1.1 GB) copied, 2.50695 s, 428 MB/s
------------------------------
CPU performance
------------------------------
268435456 bytes (268 MB) copied, 6.56869 s, 40.9 MB/s
268435456 bytes (268 MB) copied, 6.96235 s, 38.6 MB/s
268435456 bytes (268 MB) copied, 6.82501 s, 39.3 MB/s
268435456 bytes (268 MB) copied, 6.92153 s, 38.8 MB/s
268435456 bytes (268 MB) copied, 8.34781 s, 32.2 MB/s
268435456 bytes (268 MB) copied, 7.33672 s, 36.6 MB/s
268435456 bytes (268 MB) copied, 9.36881 s, 28.7 MB/s
268435456 bytes (268 MB) copied, 7.40027 s, 36.3 MB/s
268435456 bytes (268 MB) copied, 8.02375 s, 33.5 MB/s
268435456 bytes (268 MB) copied, 6.8273 s, 39.3 MB/s
------------------------------
System info
------------------------------
             total       used       free     shared    buffers     cached
Mem:       1048576      83388     965188          0          0          0
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                0
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Stepping:              2
CPU MHz:               2599.949

real    1m47.163s
user    0m0.103s
sys     0m0.129s
If you're lucky, we might have the results back on AWS by now.
I've done a number of tests, and I haven't gotten all three client containers to complete every FS or CPU iteration. In many cases the clients can't make any connections at all, like the following:
Client mode...
Target: 172.17.0.24
------------------------------
Performance benchmarks
------------------------------
dockerhost:
curl: failed to connect
date: Mon Mar 9 07:39:30 UTC 2015
------------------------------
FS write performance
------------------------------
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
------------------------------
CPU performance
------------------------------
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
curl: failed to connect
------------------------------
System info
------------------------------
curl: failed to connect
curl: failed to connect

real    0m0.216s
user    0m0.080s
sys     0m0.016s
Sadly, even though each container was configured to use just a portion of the VM's resources, the hardware VM on AWS was quickly overwhelmed by all the activity. On Joyent, however, each container ran with isolated performance and was unharmed by noise and activity from other containers.
Seriously, try this same test on AWS' c4.8xlarge or even c3.8xlarge (SSD-backed) instances with over 30 vCPUs. If you do get all the tests to run, they still clobber each other, and the three pairs run slower than a single test on the c4.large. My test on a c3.8xlarge just now took over 18 minutes! It's like the thing was stuck in mud.
Score
Better performance is a win for everybody. Performance isolation that keeps everything running at high performance despite heavy loads is a double rainbow win. This problem isn't unique to running Docker in VMs; it's a reality of hosting Docker in an operating system without strong performance isolation between containers.
Judge | AWS | Joyent |
---|---|---|
Dev | Lose | Win |
Ops | Lose | Win |
Finance | Lose | Win |
Container-native difference: The performance and security protections in Joyent's container hypervisor isolate every container from trouble in the containers nearby and keep them running and ready even under loads that crush the alternatives.
Round 6: running containers on multiple compute nodes
You probably don't want to try this stunt at home, but let's spin up 100 of these benchmark client-server pairs on Joyent:
i=0; while [ $i -lt 100 ]; do
    docker run -d -m 1g -c 64 -p 80:80 -p 5001:5001 \
        --name=simple-container-benchmarks-server-1g-$i \
        misterbisson/simple-container-benchmarks
    i=$[$i+1]
done
i=0; while [ $i -lt 100 ]; do
    docker run -d -m 256m \
        -e "DOCKER_HOST=$DOCKER_HOST" \
        -e "TARGET=$(docker inspect --format='{{.NetworkSettings.IPAddress}}' simple-container-benchmarks-server-1g-$i)" \
        --name=simple-container-benchmarks-client-1g-$i \
        misterbisson/simple-container-benchmarks &
    i=$[$i+1]
done
i=1; while [ $i -gt 0 ]; do
    i=$(docker inspect --format '{{ .Name }}' $(docker ps -q) | grep simple-container-benchmarks-client-1g | wc -l)
    t=$(date "+%H:%M:%S")
    printf "%s %03d containers running\n" $t $i
    sleep .3
done
docker inspect --format '{{ .Name }}' $(docker ps -a -q) | grep simple-container-benchmarks-client-1g | xargs -n 1 docker --tls logs
We're just using the docker run command here to start all 200 containers (100 servers, 100 clients), but the infrastructure behind that is spinning up the containers on compute nodes throughout the data center. I don't have to use a new API for this or really do anything that I wouldn't or couldn't do on my laptop (well, my laptop can't scale this far).
I can see these containers are running on multiple compute nodes using the Joyent CLI:
sdc-listmachines | json -aH -c '/^simple-container-benchmarks-server-1g/.test(this.name)' compute_node | uniq
The result shows the containers are distributed across seven compute nodes:
44454c4c-4400-1046-8050-b5c04f383432
44454c4c-4400-1043-8053-b5c04f383432
44454c4c-5400-1034-8053-b5c04f383432
44454c4c-5400-1034-8052-b5c04f383432
44454c4c-4400-1056-8050-b5c04f383432
44454c4c-4400-1054-8052-b5c04f383432
44454c4c-4400-1046-8050-b5c04f383432
How'd it perform? 100 tests distributed across seven compute nodes performed the same as a single test on a single compute node. How much abuse can Joyent's infrastructure take? Running 100 tests per compute node sufficiently saturated the filesystem I/O to slow those tests to the same performance as a single test on AWS' c4.large VM.
Thinking of AWS, how are we going to test this on that infrastructure? The first thing we'd have to do is provision a much larger cluster of VMs. We might need 100 VMs, since the round above shows we can't run multiple server containers in the same VM without them clobbering each other.
Launching those 100 VMs is just a matter of changing the --count argument in the aws ec2 run-instances command at the top of this post to 100. The next step would be to implement AWS' proprietary EC2 Container Service API to launch all 200 containers. You could also register each of those VMs in Docker Swarm and deploy that way, but who has time for any of that?
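For illustration, launching those 100 VMs would mean rerunning the earlier command with the count bumped up (same placeholder key name and security group as before; this only starts the VMs, not the containers):

aws ec2 run-instances --image-id ami-4d3c1a7d --count 100 \
    --instance-type c4.large --key-name aws-oregon --security-group-ids sg-d8ddf2bd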
Score
When the internet rushes in, we need to bring new containers online quickly and simply. Coordinating VM provisioning with container provisioning is a barrier to scaling vertically and horizontally as needed for those loads. Using the Docker API to launch containers throughout the data center on Joyent's infrastructure dramatically simplifies development and operations, and allows far easier scaling of resources up and down as needs require.
Judge | AWS | Joyent |
---|---|---|
Dev | Lose | Win |
Ops | Lose | Win |
Finance | Lose | Win |
Container-native difference: Distributing containers among multiple hosts for availability and performance is easy when the infrastructure is built for it.
Round 7: cloud infrastructure providers beginning with "A"
Getting knocked down six rounds in a row is too embarrassing for any cloud infrastructure provider. Joyent would never want to appear unsporting, so we'll take a fall here.
Seriously, AWS created the market for virtual hardware in the cloud, and it's safe to assume we've all used it and are familiar with it. It may not be the best infrastructure for your Docker containers, but that's AWS' market to lose.
Score
Alphabetical priority is obviously a significant issue from all viewpoints, giving AWS the clear win for this round.
Judge | AWS | Joyent |
---|---|---|
Dev | Win | Lose |
Ops | Win | Lose |
Finance | Win | Lose |
Container-native difference: It's easy to pick familiar infrastructure, but that's not always the best infrastructure. People who care about security, performance, or network simplicity, on the other hand, should look carefully at all options. The good news is that Docker containers are making applications more portable than ever, so it's easy to test multiple infrastructure providers to find the ideal fit.
Round 8: on premises
Perhaps you have reasons you can't run in a public cloud, or maybe you've found price advantages in running your infrastructure in-house. Okay, let's try that on AWS.
...we can't. AWS doesn't offer its tools for managing your own infrastructure.
So...does Joyent? You bet. The same software that manages our public cloud is available to run your own infrastructure. It's open source, and Docker support is available now.
Score
Joyent's easy-to-install infrastructure management tools are proven in the public cloud and available as the foundation of your private cloud. They also open up opportunities to run workloads in Joyent's public cloud, with identical APIs, when you need highly elastic infrastructure. This is another win across the board for Joyent.
Judge | AWS | Joyent |
---|---|---|
Dev | Lose | Win |
Ops | Lose | Win |
Finance | Lose | Win |
Knockout: container-native is different
To get anything close to the security and performance isolation, and the unrestricted control of the network stack, that containers on Joyent's infrastructure enjoy, you'd have to run each container in its own VM on AWS. This isn't specific to AWS; it's a reality shared by every other cloud offering on the market.
Judge | AWS | Joyent |
---|---|---|
Dev | 1 | 6 |
Ops | 1 | 7 |
Finance | 1 | 5 |
The security, performance, networking, and infrastructure management problems that are currently limiting broader production use of Docker are well recognized, and the number of potential solutions for each problem is truly amazing. Joyent's container-native infrastructure is the first to solve all of these problems, and we've done it in a way that's ready to use now.
Early access users can see all these benefits for themselves. Be sure to sign up if you haven't yet.
Version history
- March 25: Updated with Triton branding, see announcement blog posts from Bryan Cantrill and Casey Bisson.
Post written by Casey Bisson