In this tutorial, we’ll discuss how to continuously deploy your containerized applications onto Marathon running on top of DC/OS. We will be deploying Mesosphere’s own sample app, tweeter, with some slight modifications. You will need the following:
- A Wercker account
- The tweeter app, forked from here
- A running DC/OS cluster with Marathon installed
- A Docker Hub repository called tweeter
Tweeter is a simple twitter clone developed in Rails. It uses Cassandra for storage and (optionally) Kafka as the message queue. Using Wercker, we will build a container image that is fit to run on Marathon, and another image that we can use to debug and develop the application locally.
Wercker’s configuration lives in one file: the wercker.yml. In the wercker.yml we will specify several automation pipelines that define how our application should be tested, built, and then deployed. The end result will look like this:
```yaml
box: rails:4.1

dev:
  services:
    - spotify/cassandra
    # - spotify/kafka
  steps:
    - bundle-install
    - internal/watch:
        code: |
          until bundle exec rake cassandra:setup; do echo "waiting..."; sleep 5; done;
          bundle exec rspec
          rails server
        reload: false

test:
  services:
    - spotify/cassandra
  steps:
    - bundle-install
    - script:
        name: setup cassandra
        code: until bundle exec rake cassandra:setup; do sleep 5; done;
    - script:
        name: rspec
        code: bundle exec rspec

build-dev:
  steps:
    - script:
        name: move to rails dir
        code: |
          mkdir /rails
          mv $WERCKER_SOURCE_DIR/* /rails
    - bundle-install:
        cwd: /rails
    - internal/docker-push:
        username: $USERNAME
        password: $PASSWORD
        cmd: rails server
        working-dir: /rails
        ports: 3000
        repository: $REPOSITORY
        tag: dev

build-prod:
  box:
    id: alpine:3.2
    cmd: /bin/sh
  steps:
    - script:
        name: install deps
        code: |
          apk update && apk upgrade
          echo installing deps
          apk add curl-dev ruby-dev build-base tzdata
          echo installing ruby stuff
          apk add ruby ruby-io-console ruby-bundler
          apk add nodejs
          rm -rf /var/cache/apk/*
    - script:
        name: bundle install
        code: |
          gem install bundler
          # bundle install --without development test
          bundle install
    - internal/docker-push:
        username: $USERNAME
        password: $PASSWORD
        cmd: rails server
        working-dir: /pipeline/source
        ports: 3000
        repository: $REPOSITORY
        tag: alpine

deploy-to-marathon:
  box:
    id: buildpack-deps:jessie
  steps:
    - script:
        name: generate json file
        code: |
          chmod +x template.sh
          ./template.sh
          cat $APP_NAME.json
    - marathon-deploy:
        marathon-url: $MARATHON_URL
        app-name: $APP_NAME
        app-json-file: $APP_JSON
        instances: "3"
        auth-token: $MESOS_TOKEN
```
Don’t worry if it seems like a lot; we’ll be going over every pipeline in the next couple of sections. One thing to take note of, however, is the use of various environment variables.
If we want to use Wercker locally using the CLI, we’ll need to define these env vars somewhere. By default, Wercker will look for an ENVIRONMENT file, and if present, expose those env vars when executing the pipelines. Alternatively, you can specify a custom file by using the
--environment <your_file> flag. You can copy/paste the following into that file:
```shell
USERNAME=your_dockerhub_username
PASSWORD=your_dockerhub_password
APP_NAME=tweeter
APP_JSON=tweeter.json
REPOSITORY=dockerhub_username/tweeter
CONTAINER_PORT=3000
# found in your cloudformation output
MESOS_DNS_HOST=mesos_master_dns_address
# generated by logging into the DCOS cli
MESOS_TOKEN=your_token
MARATHON_URL=http://$MESOS_DNS_HOST/service/marathon
INSTANCES=3
CASSANDRA_HOSTS="cassandra"
```
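For example, if you saved these variables to a file named local.env (a filename we’re choosing here; any name works), you could point the CLI at it like this:

```shell
# Run the dev pipeline using a custom environment file instead of
# the default ENVIRONMENT file. "local.env" is a hypothetical name.
wercker dev --environment local.env --publish 3000
```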
Now that we’ve configured Wercker, let’s start by taking a closer look at the dev pipeline. We use this pipeline to develop our application inside a container on our local machine. By spinning up our application, alongside its required services, in containers, we achieve a higher level of dev/prod parity.
```yaml
dev:
  services:
    - spotify/cassandra
  steps:
    - bundle-install
    - internal/watch:
        code: |
          until bundle exec rake cassandra:setup; do echo "waiting..."; sleep 5; done;
          bundle exec rspec
          rails server
        reload: false
```
In the services clause, we inform Wercker to spin up a Cassandra container. We use Spotify’s container here because it’s an optimized version. Then, after installing our dependencies using bundle-install, we specify which command should be executed once our container is spun up. In this case we need to set up Cassandra and wait until the migrations complete. Then we run our tests, and finally we run rails server to serve our application.
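The until loop in the watch step is a generic retry pattern: keep re-running a command until it exits successfully, pausing between attempts. Here is the same pattern in isolation, with a fake readiness check standing in for rake cassandra:setup:

```shell
#!/bin/sh
# Simulate a service that only becomes "ready" on the third check.
attempts=0
is_ready() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

# Retry until the check command exits with status 0.
until is_ready; do
  echo "waiting..."   # the real pipeline also does: sleep 5
done
echo "ready after $attempts attempts"
```

The same shape works for any dependency that needs time to boot, which is why it shows up in both the dev and test pipelines.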
To see the dev pipeline in action, execute the following command in your terminal:
```shell
wercker dev --publish 3000
```
Wercker will now execute the
dev pipeline. You should see containers coming up with the docker ps command, and once the rails server has successfully loaded, you can go to
<your_docker_host_ip>:3000 and see our rails app in action:
```
--> Running step: watch
Finished in 12.37 seconds (files took 1 minute 33.09 seconds to load)
1 example, 0 failures
[2016-05-08 08:32:55] INFO  WEBrick 1.3.1
[2016-05-08 08:32:55] INFO  ruby 2.1.5 (2014-11-13) [x86_64-linux]
[2016-05-08 08:32:55] INFO  WEBrick::HTTPServer#start: pid=37 port=3000
```
Building the container
After setting up our local development environment we can now move on to setting up the pipelines that will build our container images. Our Workflow will look like this:
Setting up testing
In the test pipeline, we make sure that our application gets tested before we start building our containers. Again we need Cassandra to run our tests against, so we specify it in the services clause. Then we simply execute some code which we’ve already seen in the dev pipeline.
Once the test pipeline completes, we will set up Wercker in such a way that it will trigger two pipelines simultaneously: build-dev and build-prod.
Building a development image
At Wercker, we consider it a best practice to split up your containers into a debug container and a production-ready container. This pipeline creates the development image and is relatively straightforward: it runs bundle install and then pushes the resulting container image to a registry. We can then use this image to easily distribute the latest version of our application to team members.
Building a production-ready image
When building containers for production, it’s a good idea to make them as much of a clean package as possible. That means getting rid of any dependencies and other files we don’t need. It also means reconsidering which base image we’re using and whether we need a full-fledged OS (most of the time, we don’t). So in our build-prod pipeline we replace our rails image with an Alpine Linux image. This gives us a much smaller footprint to start things off. Of course, this means we need to install Ruby ourselves, along with all the build dependencies required to do that.
```yaml
build-prod:
  box:
    id: alpine:3.2
    cmd: /bin/sh
  steps:
    - script:
        name: install deps
        code: |
          apk update && apk upgrade
          echo installing deps
          apk add curl-dev ruby-dev build-base tzdata
          echo installing ruby stuff
          apk add ruby ruby-io-console ruby-bundler
          apk add nodejs
          rm -rf /var/cache/apk/*
    - script:
        name: bundle install
        code: |
          gem install bundler
          bundle install
```
And finally, we push this minified version of our container to the registry and tag it with alpine.
Deploying the result
Defining the deploy pipeline
Deploying an application to Marathon involves creating a JSON file that specifies how the application should run and which dependencies it might have. To that end, we created the template.sh script, which generates such a JSON file. The file should be relatively self-explanatory, so we won’t go into detail about it here.
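To give a feel for what such a generator does, here is a rough sketch of a script in the same spirit as template.sh. The field values and defaults are illustrative assumptions, not the actual contents of the tweeter repo’s script:

```shell
#!/bin/sh
# Hypothetical sketch of a Marathon app-definition generator.
# The real template.sh in the tweeter repo may use different fields.
APP_NAME=${APP_NAME:-tweeter}
REPOSITORY=${REPOSITORY:-dockerhub_username/tweeter}
CONTAINER_PORT=${CONTAINER_PORT:-3000}
INSTANCES=${INSTANCES:-3}

# Expand the environment variables into a Marathon app definition.
cat > "$APP_NAME.json" <<EOF
{
  "id": "/$APP_NAME",
  "instances": $INSTANCES,
  "cpus": 0.5,
  "mem": 512,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "$REPOSITORY:alpine",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": $CONTAINER_PORT, "hostPort": 0 }
      ]
    }
  }
}
EOF
echo "wrote $APP_NAME.json"
```

Driving the file from environment variables is what lets the same pipeline deploy different apps or instance counts without editing any JSON by hand.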
The deploy-to-marathon pipeline will execute this script and then, using the marathon-deploy step, make an API call to let Marathon know that a new version of our application is available.
```yaml
deploy-to-marathon:
  box:
    id: buildpack-deps:jessie
  steps:
    - script:
        name: generate json file
        code: |
          chmod +x template.sh
          ./template.sh
          cat $APP_NAME.json
    - marathon-deploy:
        marathon-url: $MARATHON_URL
        app-name: $APP_NAME
        app-json-file: $APP_JSON
        instances: "3"
        auth-token: $MESOS_TOKEN
```
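Under the hood, the marathon-deploy step is essentially a call to Marathon’s REST API: a PUT of the generated app definition to the /v2/apps endpoint. A sketch of the equivalent request, using an example URL (your own MARATHON_URL will differ):

```shell
#!/bin/sh
# Example values; in the pipeline these come from the environment.
MARATHON_URL="http://master.mesos/service/marathon"
APP_NAME="tweeter"
ENDPOINT="$MARATHON_URL/v2/apps/$APP_NAME"
echo "$ENDPOINT"

# The deploy itself is then roughly (not executed here):
#   curl -X PUT \
#     -H "Content-Type: application/json" \
#     -H "Authorization: token=$MESOS_TOKEN" \
#     -d @"$APP_NAME.json" \
#     "$ENDPOINT"
```

If the app already exists, Marathon treats the PUT as an update and performs a rolling restart of the instances with the new image.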
Setting up hosted Wercker
Now that we’ve defined all of our pipelines, we’ll have to chain them together using Workflows, which can be done through the Wercker web interface. Go ahead and create a new project on Wercker using your forked tweeter repository. Then, once your project is created, create a new Workflow using the Manage Workflows button in the top right.
You will first need to define which pipelines are available and which environment variables they should expose. The test pipeline should contain one env var, CASSANDRA_HOSTS, set to cassandra. This will allow our app to find the Cassandra service through a DNS lookup. When creating this pipeline, make sure you set the hook to Git push instead of default, since this is the starting point for the Workflow we will define.
Then, create the deploy-to-marathon pipeline and expose these env vars (remember, you can copy/paste from the ENVIRONMENT file you created earlier):
Lastly, create the build-dev and build-prod pipelines. Instead of creating env vars for each of these, we’ll just create the necessary env vars on a project level (since they both require the same ones). You can add these environment variables by navigating to “environment variables” in the project settings:
Creating the Workflow
Now that we’ve defined all the pipelines we can chain them together! Navigate to the
Workflows tab, and you should see your
test pipeline in the Workflows editor, which represents the starting point for our Workflow. Now you can add the remaining pipelines to create the Workflow we want. The end result should look like this:
Preparing DC/OS for deployment
The tweeter app requires Cassandra and Kafka to be installed. You can install both packages from the DC/OS interface, as explained in this tutorial.
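Alternatively, if you have the DC/OS CLI installed and authenticated against your cluster, you can install both packages from the terminal (a sketch; available package versions and options may differ for your cluster):

```shell
# Install Cassandra and Kafka from the DC/OS package repository.
# --yes skips the interactive confirmation prompt.
dcos package install cassandra --yes
dcos package install kafka --yes

# Verify the services are listed before deploying tweeter.
dcos package list
```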
That’s it! You’ve now set up continuous deployment to Mesosphere’s DC/OS and Marathon.