Intended audience

This is not a howto or a guide for running service discovery in production. It is only a write-up of a small experiment to see how service discovery with Consul can work. It assumes basic knowledge of how to use a shell, Docker and AWS.

Service Discovery within an AWS environment

AWS has a blog post available on using Consul with ECS, which references a CloudFormation template. I would definitely run this template as an example to learn from, but I would do it in a “throw-away” AWS environment. In other words: create a trial account, launch the template, and throw the account away after use. Otherwise it will “pollute” your existing AWS account with IAM profiles, policies, security groups and so on.

References I used:


Consul is a distributed key–value store and service discovery layer. In the example below I use this consul-agent Docker image. HashiCorp has published an official Docker image, but that one runs Consul as a docker user instead of the root user. As a side effect the container can’t bind to port 53, so it binds its DNS interface to port 8600 instead. You will have to map port 53 to port 8600 in the docker run command, but with the current beta release of Docker for Mac bridged networking mode seems to be broken.

Consul template

Consul Template provides generic template rendering and notifications with Consul. It interacts with Consul to generate configuration files from templates, e.g. for HAProxy (it can of course be used for other services as well).
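As an illustration of what such a template looks like, here is a minimal sketch of a Consul Template source file for an HAProxy backend. This fragment is my own example, not taken from the setup below; the service name "hello-v1" and the backend name are assumptions:

```
backend hello
    balance roundrobin{{range service "hello-v1"}}
    server {{.Node}} {{.Address}}:{{.Port}} check{{end}}
```

Consul Template watches Consul and re-renders the file whenever service instances appear or disappear, optionally running a command (such as an HAProxy reload) afterwards.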


Registrator

Registrator is responsible for registering (and de-registering) containers with Consul. It listens for container start and stop events on the Docker socket, which is mounted into its container.
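To see what Registrator has actually written into Consul, you can query the Consul HTTP catalog API. The transcript below is an illustrative sketch, not literal output from this setup: the address is made up, and the exact name under which SERVICE_NAME=hello/v1 ends up registered depends on how Registrator and Consul handle the slash; "hello-v1" is an assumption.

```
$ curl http://$CONSUL_IP:8500/v1/catalog/service/hello-v1
[{"Node":"dev","Address":"172.17.0.2","ServiceName":"hello-v1","ServiceTags":["rest"],"ServicePort":80}]
```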

Local test setup

# progrium consul version
$ docker run --name consul -d -h dev progrium/consul -server -bootstrap-expect 1
# official consul version (broken with the Docker for Mac beta)
# the env variables ensure you can actually access the exposed ports
$ docker run -d --net=host -p 53:8600/tcp -p 53:8600/udp --name consul -h dev -e CONSUL_BIND_INTERFACE=eth0 -e CONSUL_CLIENT_INTERFACE=eth0 consul agent -server -bootstrap-expect 1 -ui

# get the IP address of the consul container and store it in an environment variable

$ CONSUL_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul)
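The `--format` argument is a Go template evaluated against the container's inspect JSON; it plucks a single field out of the output. A rough shell equivalent of that extraction, run against a hypothetical inspect snippet (the IP address here is made up; the real value comes from `docker inspect consul`):

```shell
# Hypothetical one-field excerpt of `docker inspect` output
sample='{"NetworkSettings":{"IPAddress":"172.17.0.2"}}'
# Pull out the IPAddress field, as the Go template '{{ .NetworkSettings.IPAddress }}' does
CONSUL_IP=$(printf '%s' "$sample" | sed -n 's/.*"IPAddress":"\([^"]*\)".*/\1/p')
echo "$CONSUL_IP"   # 172.17.0.2
```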

# start registrator
$ docker run -d -v /var/run/docker.sock:/tmp/docker.sock -h registrator --name registrator gliderlabs/registrator -internal consul://$CONSUL_IP:8500

# launch the 1st service
$ docker run -d -e SERVICE_NAME=hello/v1 -e SERVICE_TAGS=rest -h hello1 --name hello1 sirile/scala-boot-test
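Registrator builds the registration from container metadata: the service name comes from the image unless overridden with SERVICE_NAME (as done here), and, as far as I can tell, the service ID is assembled as `<registrator-host>:<container-name>:<port>` — treat that exact format as an assumption. A sketch of that composition for the hello1 container:

```shell
# Hypothetical reconstruction of a Registrator service ID for the hello1 container
host=registrator       # hostname of the registrator container (-h registrator)
container=hello1       # container name (--name hello1)
port=80                # service port inside the container
service_id="${host}:${container}:${port}"
echo "$service_id"     # registrator:hello1:80
```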

# start haproxy
$ docker run -d -e SERVICE_NAME=rest --name=rest -p 80:80 -p 1936:1936 sirile/haproxy

# launch the webinterface
$ docker run --name consulweb -d -h web -p 8500:8500 progrium/consul -ui-dir=/ui -join=$CONSUL_IP

You can now log in to Consul at http://localhost:8500 and view the stats page of HAProxy at http://localhost:1936 (someuser:password). When using docker-machine, replace localhost with the docker-machine IP.

Now launch more clients:

$ docker run -d -e SERVICE_NAME=hello/v1 -e SERVICE_TAGS=rest -h hello2 --name hello2 -p :80 sirile/scala-boot-test
$ docker run -d -e SERVICE_NAME=hello/v1 -e SERVICE_TAGS=rest -h hello3 --name hello3 -p :80 sirile/scala-boot-test
$ docker run -d -e SERVICE_NAME=hello/v1 -e SERVICE_TAGS=rest -h hello4 --name hello4 -p :80 sirile/scala-boot-test

If you now refresh the HAProxy stats page, you will see that the additionally launched instances are added to HAProxy automagically.

If you query port 80, you will get replies back from random nodes, so HAProxy is doing its job.

$ curl http://localhost/hello/v1
$ curl http://localhost/hello/v1
$ curl http://localhost/hello/v1
$ curl http://localhost/hello/v1
$ curl http://localhost/hello/v1

The same applies when you kill some of the Docker containers: those will be removed from your HAProxy stats page.

$ docker rm -f hello1 hello4

This is a very simple hello-world setup of service discovery, but it should allow you to see which moving parts are involved. I intend to post a more production-ready setup using Terraform and Docker Swarm soon.