Rick


Monday, May 18, 2015

Setting up Consul to run with Docker for Microservices Service Discovery

Docker itself provides only limited support for services finding one another (although that may be changing, e.g., with Docker Swarm).
There are many tools containers/microservices can use to find each other: etcd, ZooKeeper, and Consul, to name just a few.
Consul is a nice fit for microservices service discovery because it has an HTTP/JSON (REST-ish) API that can be accessed from any programming language, and it has flexible health checks that feed right into service discovery.
If you are new to Consul, a good place to start is this Consul tutorial. This article won't cover the basics that are covered there.
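To give a concrete taste of that HTTP/JSON API, here is a minimal sketch. It uses Consul's standard agent and catalog endpoints; the address assumes the boot2docker host used later in this article, and the "web" service and its TTL check are made up for illustration:

```shell
#!/bin/sh
# Register a hypothetical "web" service with a TTL health check, then
# list what the catalog knows. Assumes a Consul agent is reachable at
# $CONSUL_HTTP; the curl calls degrade gracefully if it is not running.
CONSUL_HTTP="http://192.168.59.103:8500"

# Service definition (illustrative name/port, not from the article):
cat > /tmp/web-service.json <<'EOF'
{
  "Name": "web",
  "Port": 8080,
  "Check": {
    "TTL": "30s"
  }
}
EOF

# PUT the registration to the local agent.
curl -s -X PUT --data @/tmp/web-service.json \
     "$CONSUL_HTTP/v1/agent/service/register" \
  || echo "no Consul agent reachable at $CONSUL_HTTP"

# Ask the catalog which services it knows about.
curl -s "$CONSUL_HTTP/v1/catalog/services" \
  || echo "no Consul agent reachable at $CONSUL_HTTP"
```

Any language with an HTTP client and a JSON parser can do the same, which is exactly why this API is convenient for polyglot microservices.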
We follow the same basic flow/structure defined in the Java microservice article to create Docker images and containers, and we use the Docker linking described in part 2 of the Gradle Docker Java microservice article.
The basic directory structure to set up three Consul servers is shown here.
.
├── consul_server1
│   ├── Dockerfile
│   ├── buildImage.sh
│   ├── env.sh
│   ├── etc
│   │   └── consul
│   │       └── consul.json
│   ├── opt
│   │   └── consul
│   │       ├── bin
│   │       │   ├── readme.md
│   │       │   └── run.sh
│   │       ├── data
│   │       ├── readme.md
│   │       └── web
│   │           ├── index.html
│   │           └── static
│   │               ├── application.min.js
│   │               ├── base.css
│   │               ├── base.css.map
│   │               ├── bootstrap.min.css
│   │               ├── consul-logo.png
│   │               ├── favicon.png
│   │               └── loading-cylon-purple.svg
│   ├── runContainer.sh
│   ├── ui
│   └── var
│       └── logs
│           └── consul
│               └── readme.md
├── consul_server2
│   ├── Dockerfile
│   ├── buildImage.sh
│   ├── env.sh
│   ├── etc
│   │   └── consul
│   │       └── consul.json
│   ├── opt
│   │   └── consul
│   │       ├── bin
│   │       │   ├── readme.md
│   │       │   └── run.sh
│   │       ├── data
│   │       └── readme.md
│   ├── runContainer.sh
│   └── var
│       └── logs
│           └── consul
│               └── readme.md
├── consul_server3
│   ├── Dockerfile
│   ├── buildImage.sh
│   ├── env.sh
│   ├── etc
│   │   └── consul
│   │       └── consul.json
│   ├── opt
│   │   └── consul
│   │       ├── bin
│   │       │   ├── readme.md
│   │       │   └── run.sh
│   │       ├── data
│   │       └── readme.md
│   ├── runContainer.sh
│   └── var
│       └── logs
│           └── consul
│               └── readme.md
├── readme.md
└── runSample.sh

Consul configuration file for server 1

consul-server1 is set up to launch in bootstrap mode.

/consul-server1/etc/consul/consul.json

{
  "datacenter": "test-dc",
  "data_dir": "/opt/consul/data",
  "log_level": "INFO",
  "node_name": "consul-server1",
  "server": true,
  "bootstrap" : true
}
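As an aside, not part of this article's setup: instead of hard-wiring exactly one server with "bootstrap": true, Consul also supports bootstrap_expect, where every server carries the same config and the cluster bootstraps itself once the expected number of servers have joined. A sketch of that fragment (check the docs for your Consul version):

```json
{
  "server": true,
  "bootstrap_expect": 3
}
```

With bootstrap_expect, no single server is special, which avoids the "which one is the bootstrap node" question when automating.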

Consul server 1 launches the UI.

consul-server1 is also set up to serve the Consul web UI, which took a bit of doing.
Remember: the IP address is not localhost when you are using boot2docker.
To get the IP address of the actual host, we use $HOSTNAME, which contains the host name, and the getent utility to look up the actual IP address. We added this to the run script that launches Consul.

consul-server1/opt/consul/bin/run.sh

/opt/consul/bin/consul agent \
      -config-file=/etc/consul/consul.json \
      -ui-dir=/opt/consul/web \
      -client=`getent hosts $HOSTNAME | cut -d' ' -f1`
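The -client value is the interesting part. getent hosts prints lines of the form "ADDRESS NAME ...", and cut -d' ' -f1 keeps everything before the first space. A self-contained sketch with a canned line (the 172.17.x address is illustrative) shows what gets extracted:

```shell
#!/bin/sh
# Simulate the lookup run.sh does, using a canned getent-style line
# (172.17.0.2 is an illustrative Docker bridge address, not a real lookup).
sample="172.17.0.2 consul-server1"

# cut -d' ' -f1 keeps the first space-delimited field: the IP address.
ip=$(echo "$sample" | cut -d' ' -f1)
echo "$ip"   # -> 172.17.0.2

# The real lookup inside the container:
#   getent hosts "$HOSTNAME" | cut -d' ' -f1
```

Binding -client to that address (instead of the default 127.0.0.1) is what makes the HTTP API and UI reachable from outside the container.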



This makes the UI available at http://192.168.59.103:8500/ui/#/test-dc/services on OS X.

Dockerfile

The Dockerfile is pretty standard.
# Pull base image.
FROM ubuntu

# EXPOSE takes only a container port; publishing to the host happens at run time with -p.
EXPOSE 8500

RUN apt-get update && apt-get install -y wget unzip


COPY opt /opt
COPY etc /etc
COPY var /var

RUN wget https://dl.bintray.com/mitchellh/consul/0.5.1_linux_amd64.zip && \
    unzip 0.5.1_linux_amd64.zip && \
    mv consul /opt/consul/bin/


ENTRYPOINT /opt/consul/bin/run.sh

We just pull down Consul, then launch the run.sh script shown earlier.
The only differences in the consul-server2 and consul-server3 images are the config and that they don't run the web UI.

Consul configuration file for server 3

{
  "datacenter": "test-dc",
  "data_dir": "/opt/consul/data",
  "log_level": "INFO",
  "node_name": "consul-server3",
  "server": true,
  "bootstrap" : false,
  "retry_join" : [
    "consul-server1"]
}
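retry_join is what makes the startup order forgiving: servers 2 and 3 keep retrying the join until consul-server1 is reachable. If you need to tune that behavior, Consul exposes retry knobs alongside retry_join; the values below are illustrative (check the docs for your Consul version's defaults):

```json
{
  "retry_join": ["consul-server1"],
  "retry_interval": "30s",
  "retry_max": 0
}
```

A retry_max of 0 means retry indefinitely, which is usually what you want when container startup order isn't guaranteed.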

run.sh script for servers 2 and 3

/opt/consul/bin/consul agent -config-file=/etc/consul/consul.json  

Building the image

To build the Docker image, run docker build from the server's directory.

Building the docker image for consul-server2

$ docker build -t example/consul-server2:1.0-SNAP .

Starting the container

To start consul-server1, use this command. Note that EXPOSE alone does not publish the port to the host, so we pass -p 8500:8500 to make the UI reachable from outside the container.

Start the docker container for consul-server1

$ docker run --name consul-server1 -p 8500:8500 -i -t example/consul-server1:1.0-SNAP

To start the other two containers, we need to tell them how to find server 1. Once they find it, they remember it. :)

Launching docker container with link to consul-server1 for consul-server2 and consul-server3

$ docker run --name consul-server2 --link consul-server1:consul-server1  \
     -t -i example/consul-server2:1.0-SNAP

At this point, you should be able to open the UI and see all of the nodes in the nodes tab.
Check it out.
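If you'd rather verify from the command line than the UI, ask the catalog for its nodes. The sketch below runs against a canned response of the same shape (addresses are illustrative), with the live command in a comment:

```shell
#!/bin/sh
# With the cluster up, the live query would be:
#   curl -s http://192.168.59.103:8500/v1/catalog/nodes
# Canned response of the same shape (addresses are illustrative):
nodes='[{"Node":"consul-server1","Address":"172.17.0.2"},
        {"Node":"consul-server2","Address":"172.17.0.3"},
        {"Node":"consul-server3","Address":"172.17.0.4"}]'

# Each cluster member appears once as a "Node" entry, so counting the
# "Node" keys counts the registered servers.
count=$(echo "$nodes" | grep -o '"Node"' | wc -l | tr -d ' ')
echo "$count nodes registered"   # -> 3 nodes registered
```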