
Monday, May 18, 2015

Setting up Consul to run with Docker for Microservices Service Discovery

For services to find one another easily, there are limits to what Docker provides out of the box (although that may be changing, e.g., with Docker Swarm).
There are many ways containers/microservices can find each other: etcd, ZooKeeper, and Consul, to name just a few.
Consul is a nice fit for microservices service discovery because it has an HTTP/JSON (REST-ish) API that can be accessed from any programming language, and it has flexible health checks that feed right into the service discovery.
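For example, registering a service with an agent and then looking it up are just two HTTP calls. A minimal sketch using curl; the service name, port, and agent address (localhost:8500) are made-up examples here:

# register a service with the local Consul agent
$ curl -X PUT -d '{"Name": "web", "Port": 8080}' \
    http://localhost:8500/v1/agent/service/register

# ask the catalog where instances of "web" are running
$ curl http://localhost:8500/v1/catalog/service/web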
If you are new to Consul, a good place to start is the Consul tutorial. This article won't cover the basics that are covered there.
We follow the same basic flow/structure defined in the Java microservice article to create Docker images and containers, and we use the Docker linking described in part 2 of the gradle/docker Java microservice article.
The basic directory structure to set up three Consul servers is shown below.
.
├── consul_server1
│   ├── Dockerfile
│   ├── buildImage.sh
│   ├── env.sh
│   ├── etc
│   │   └── consul
│   │       └── consul.json
│   ├── opt
│   │   └── consul
│   │       ├── bin
│   │       │   ├── readme.md
│   │       │   └── run.sh
│   │       ├── data
│   │       ├── readme.md
│   │       └── web
│   │           ├── index.html
│   │           └── static
│   │               ├── application.min.js
│   │               ├── base.css
│   │               ├── base.css.map
│   │               ├── bootstrap.min.css
│   │               ├── consul-logo.png
│   │               ├── favicon.png
│   │               └── loading-cylon-purple.svg
│   ├── runContainer.sh
│   ├── ui
│   └── var
│       └── logs
│           └── consul
│               └── readme.md
├── consul_server2
│   ├── Dockerfile
│   ├── buildImage.sh
│   ├── env.sh
│   ├── etc
│   │   └── consul
│   │       └── consul.json
│   ├── opt
│   │   └── consul
│   │       ├── bin
│   │       │   ├── readme.md
│   │       │   └── run.sh
│   │       ├── data
│   │       └── readme.md
│   ├── runContainer.sh
│   └── var
│       └── logs
│           └── consul
│               └── readme.md
├── consul_server3
│   ├── Dockerfile
│   ├── buildImage.sh
│   ├── env.sh
│   ├── etc
│   │   └── consul
│   │       └── consul.json
│   ├── opt
│   │   └── consul
│   │       ├── bin
│   │       │   ├── readme.md
│   │       │   └── run.sh
│   │       ├── data
│   │       └── readme.md
│   ├── runContainer.sh
│   └── var
│       └── logs
│           └── consul
│               └── readme.md
├── readme.md
└── runSample.sh

Consul configuration file for server 1

consul-server1 is set up to launch in bootstrap mode.

/consul-server1/etc/consul/consul.json

{
  "datacenter": "test-dc",
  "data_dir": "/opt/consul/data",
  "log_level": "INFO",
  "node_name": "consul-server1",
  "server": true,
  "bootstrap" : true
}

Consul server 1 launches the UI.

consul-server1 is also set up to launch the Consul UI, which took a bit of doing.
Remember: the IP address is not localhost when you are using boot2docker.
To get the IP address of the actual host, we used $HOSTNAME, which contains the host name, and the getent utility to look up the actual IP address. We added this to the run script that launches Consul.

consul-server1/opt/consul/bin/run.sh

/opt/consul/bin/consul agent \
      -config-file=/etc/consul/consul.json \
      -ui-dir=/opt/consul/web \
      -client=`getent hosts $HOSTNAME | cut -d' ' -f1`



This makes the UI available at http://192.168.59.103:8500/ui/#/test-dc/services when using boot2docker on OSX.
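Before opening the browser, a quick hedged check that the agent is answering (this assumes port 8500 is published to the boot2docker VM, for example with -p 8500:8500 in runContainer.sh; the leader address in the output will differ):

$ curl http://$(boot2docker ip):8500/v1/status/leader
"172.17.0.2:8300"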

Dockerfile

The Dockerfile is pretty standard.
# Pull base image.
FROM ubuntu

EXPOSE 8500:8500

RUN  apt-get update
RUN  apt-get install -y wget
RUN  apt-get install -y unzip


COPY opt /opt
COPY etc /etc
COPY var /var

RUN wget https://dl.bintray.com/mitchellh/consul/0.5.1_linux_amd64.zip
RUN unzip 0.5.1_linux_amd64.zip
RUN  mv consul /opt/consul/bin/


ENTRYPOINT /opt/consul/bin/run.sh

We just pull down Consul and then launch the run.sh script shown earlier.
The only differences in the consul-server2 and consul-server3 images are the config and that they don't run the web UI.

consul config for server 3

{
  "datacenter": "test-dc",
  "data_dir": "/opt/consul/data",
  "log_level": "INFO",
  "node_name": "consul-server3",
  "server": true,
  "bootstrap" : false,
  "retry_join" : [
    "consul-server1"]
}

Run.sh script for server 2 and 3

/opt/consul/bin/consul agent -config-file=/etc/consul/consul.json  

Building the image

To build the Docker image, run the following from the server's directory:

Building the docker image

$ docker build -t example/consul-server2:1.0-SNAP .
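Each server directory has its own Dockerfile (plus a buildImage.sh wrapper), so building all three images from the top-level directory looks roughly like this; the consul-server3 tag is assumed to follow the same naming pattern as the other two:

$ (cd consul_server1 && docker build -t example/consul-server1:1.0-SNAP .)
$ (cd consul_server2 && docker build -t example/consul-server2:1.0-SNAP .)
$ (cd consul_server3 && docker build -t example/consul-server3:1.0-SNAP .)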

Starting the container

To start consul server 1, use this command.

Start the docker container for consul-server1

$ docker run --name consul-server1 -i -t  example/consul-server1:1.0-SNAP
To start the other two containers, we need to tell them how to find server 1. Once they find it, they remember it. :)

Launching docker container with link to consul-server1 for consul-server2 and consul-server3

$ docker run --name consul-server2 --link consul-server1:consul-server1  \
     -t -i example/consul-server2:1.0-SNAP
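consul-server3 is started the same way; the image tag is assumed to follow the same pattern:

$ docker run --name consul-server3 --link consul-server1:consul-server1  \
     -t -i example/consul-server3:1.0-SNAP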
At this point you should be able to launch the UI and see all of the nodes in the nodes tab.
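If you prefer the command line to the UI, you can also ask the HTTP API for the node list from inside the first container. A hedged sketch, assuming your Docker version has docker exec; wget is already installed by the Dockerfile, and the client address is the same one run.sh computed. You should see all three nodes in the JSON response.

$ docker exec -it consul-server1 bash -c \
    'wget -qO- http://$(getent hosts $HOSTNAME | cut -d" " -f1):8500/v1/catalog/nodes'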
Check it out.

Friday, May 15, 2015

Docker and Gradle to create Java microservices part 2 Connecting containers with links

In the last article, we used gradle and docker to create a docker container that had our Java application running in it. (We pick up right where we left off so go back to that if you have not read it.)
This is great, but microservices typically talk to other microservices or to other resources like databases, so how can we deploy our service so it can talk to another microservice? The Docker answer to this is Docker container links.
When we add a link to another Docker container, Docker adds a domain-name alias for that container so we don't have to ship IP addresses around. For more background on Docker container links, check out the Docker documentation.
This can help with local integration tests and with onboarding new developers. It lets us set up topologies of Docker containers that collaborate with each other over the network.
For this example, we will have one Java application called client and one called server. The project structure will look just like the last project structure.

Running the client application

docker run --name client --link server:server -t -i example/client:1.0-SNAP
Notice that we pass --link server:server. This adds a DNS-like alias for server, so we can configure the host address of our server app simply as server. In practice, you would want a more qualified name. When we start up the server, we will need to give it the Docker name server with --name server so that the client can find its address.
Thus under images/etc/client/conf/conf.properties we would have:

Config for client app image images/etc/client/conf/conf.properties

port=9999
host=server
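Under the covers, --link server:server writes a hosts entry (and, when the linked image exposes ports, some environment variables) into the linking container, which is why host=server resolves. A quick, hedged way to see the entry is to start a throwaway container with the same link; you should see a line mapping server to the server container's IP:

$ docker run --rm --link server:server ubuntu cat /etc/hosts | grep server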

Client gradle build script

Our gradle build script for our client is essentially the same as our myapp example. The major difference is that we now depend on QBit, a microservice lib that makes it easy to work with HTTP clients and servers.

gradle build script for client

apply plugin: 'java'
apply plugin: 'application'
apply plugin: 'idea'


def installOptDir="/opt/client"
def installConfDir="/etc/client"

mainClassName = 'com.example.ClientMain'
applicationName = 'client'

applicationDefaultJvmArgs = [
        "-Dclient.config.file=/etc/client/conf.properties",
        "-Dlogback.configurationFile=/etc/client/logging.xml"]

repositories {
    mavenCentral()
}

task copyDist(type: Copy) {
    dependsOn "installDist"
    from "$buildDir/install/client"
    into installOptDir
}

task copyConf(type: Copy) {
    from "conf/conf.properties"
    into installConfDir

}


task copyLogConf(type: Copy) {
    from "conf/logging.xml"
    into installConfDir

}

task copyAllConf() {
    dependsOn "copyConf", "copyLogConf"

}

task installClient() {
    dependsOn "copyDist", "copyConf", "copyLogConf"

}

task copyDistToImage(type: Copy) {
    dependsOn "installDist"
    from "$buildDir/install/client"
    into "$projectDir/image/opt/client"
}


dependencies {

    compile group: 'io.advantageous.qbit', name: 'qbit-vertx', version: '0.8.2'
    compile 'ch.qos.logback:logback-core:1.1.3'
    compile 'ch.qos.logback:logback-classic:1.1.3'
    compile 'org.slf4j:slf4j-api:1.7.12'
}

The client main app

The client main class looks very similar to our myapp example, except that it now uses the host and port to connect to an actual server.
package com.example;

/**
 * Created by rick on 5/15/15.
 */

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;


import io.advantageous.boon.core.Sys;
import io.advantageous.qbit.http.client.HttpClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static io.advantageous.qbit.http.client.HttpClientBuilder.httpClientBuilder;

public class ClientMain {

    static final Logger logger = LoggerFactory.getLogger(ClientMain.class);

    public static void main(final String... args) throws IOException {

        final String configLocation = System.getProperty("client.config.file");
        final File confFile = configLocation==null  ?
                new File("./conf/conf.properties") :
                new File(configLocation);



        final Properties properties = new Properties();
        if (confFile.exists()) {
            properties.load(Files.newInputStream(confFile.toPath()));
        } else {
            properties.load(Files.newInputStream(new File("./conf/conf.properties").toPath()));
        }

        final int port = Integer.parseInt(properties.getProperty("port"));
        final String host = properties.getProperty("host");

        logger.info(String.format("The port is set to %d %s\n", port, host));



        final HttpClient httpClient = httpClientBuilder()
                .setHost(host).setPort(port).build();
        httpClient.start();

        for (int index=0; index< 10; index++) {
            System.out.println(httpClient.get("/foo/bar").body());
            Sys.sleep(1_000);
        }

        Sys.sleep(1_000);

        httpClient.stop();
    }

}
It connects to the server with httpClient, does 10 HTTP GETs, and prints the results to System.out. The point here is that it is able to find the server without hard coding an IP address, since both the client and server are Docker container instances, which are ephemeral, elastic servers.

Server application

The server uses the same gradle build file as the myapp and client examples. It uses the same image directory and Dockerfile as those examples as well.
Even the Main class looks similar to the earlier two examples, as the focus is on gradle and Docker, not on our app per se.

ServerMain.java

package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;
import io.advantageous.qbit.http.server.HttpServer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import static io.advantageous.qbit.http.server.HttpServerBuilder.httpServerBuilder;

public class ServerMain {

    static final Logger logger = LoggerFactory.getLogger(ServerMain.class);

    public static void main(final String... args) throws IOException {

        final String configLocation = System.getProperty("server.config.file");
        final File confFile = configLocation==null  ?
                new File("./conf/conf.properties") :
                new File(configLocation);



        final Properties properties = new Properties();
        if (confFile.exists()) {
            properties.load(Files.newInputStream(confFile.toPath()));
        } else {
            properties.load(Files.newInputStream(
                    new File("./conf/conf.properties").toPath()));
        }

        final int port = Integer.parseInt(properties.getProperty("port"));


        HttpServer httpServer = httpServerBuilder()
                .setPort(port).build();


        httpServer.setHttpRequestConsumer(httpRequest -> {
            logger.info("Got request " + httpRequest.address()
                    + " " + httpRequest.getBodyAsString());
            httpRequest.getReceiver()
                    .response(200, "application/json", "\"hello\"");
        });


        httpServer.startServer();




    }

}
We just start a server and send a body of "hello" when called. 

Dockerfile

There were some changes to the Dockerfile used to create the images. I ran into some issues with QBit and OpenJDK's version number (which has been fixed but not released), so I switched the example to use the Oracle 8 JDK.

Dockerfile using Oracle JDK 8

# Pull base image.
FROM ubuntu

# Install Java.
RUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections

RUN  apt-get update
RUN  apt-get install -y software-properties-common python-software-properties
RUN  add-apt-repository ppa:webupd8team/java
RUN  apt-get update

RUN  apt-get install -y oracle-java8-installer
RUN  rm -rf /var/lib/apt/lists/*
RUN  rm -rf /var/cache/oracle-jdk8-installer


# Define working directory.
WORKDIR /data

# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle

COPY opt /opt
COPY etc /etc
COPY var /var


ENTRYPOINT /opt/client/bin/run.sh

Running server

To run the server Docker container, use the following:
docker run --name server -t -i example/server:1.0-SNAP
Note that we pass --name server so that the --link server:server on the client can resolve it.
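If you also want to hit the server from the host (or the boot2docker VM on OSX) rather than from the client container, you would need to publish the port as well. A hedged sketch; the -p flag is not part of the command above, and this assumes the QBit server binds on all interfaces:

$ docker run --name server -p 9999:9999 -t -i example/server:1.0-SNAP

# from another terminal
$ curl http://$(boot2docker ip):9999/foo/bar
"hello"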

Directory layout for client / server example

We don't repeat much from the first example, but hopefully the directory structure sheds some light on how we organized the applications and their Docker image directories.
$ pwd
../docker-tut/networked-apps
$ tree
.
├── client
│   ├── build.gradle
│   ├── client.iml
│   ├── conf
│   │   ├── conf.properties
│   │   └── logging.xml
│   ├── gradle
│   │   └── wrapper
│   │       ├── gradle-wrapper.jar
│   │       └── gradle-wrapper.properties
│   ├── gradlew
│   ├── gradlew.bat
│   ├── image
│   │   ├── Dockerfile
│   │   ├── buildImage.sh
│   │   ├── env.sh
│   │   ├── etc
│   │   │   └── client
│   │   │       ├── conf.properties
│   │   │       └── logging.xml
│   │   ├── opt
│   │   │   └── client
│   │   │       ├── bin
│   │   │       │   ├── client
│   │   │       │   ├── client.bat
│   │   │       │   └── run.sh
│   │   │       └── lib
│   │   │           ├── boon-json-0.5.5.jar
│   │   │           ├── boon-reflekt-0.5.5.jar
│   │   │           ├── jackson-annotations-2.2.2.jar
│   │   │           ├── jackson-core-2.2.2.jar
│   │   │           ├── jackson-databind-2.2.2.jar
│   │   │           ├── log4j-1.2.16.jar
│   │   │           ├── logback-classic-1.1.3.jar
│   │   │           ├── logback-core-1.1.3.jar
│   │   │           ├── myapp.jar
│   │   │           ├── netty-all-4.0.20.Final.jar
│   │   │           ├── qbit-boon-0.8.2.jar
│   │   │           ├── qbit-core-0.8.2.jar
│   │   │           ├── qbit-vertx-0.8.2.jar
│   │   │           ├── slf4j-api-1.7.12.jar
│   │   │           ├── vertx-core-2.1.1.jar
│   │   │           └── vertx-platform-2.1.1.jar
│   │   ├── runContainer.sh
│   │   └── var
│   │       └── log
│   │           └── client
│   │               └── readme.md
│   ├── settings.gradle
│   └── src
│       └── main
│           └── java
│               └── com
│                   └── example
│                       └── ClientMain.java
└── server
    ├── build.gradle
    ├── conf
    │   ├── conf.properties
    │   └── logging.xml
    ├── gradle
    │   └── wrapper
    │       ├── gradle-wrapper.jar
    │       └── gradle-wrapper.properties
    ├── gradlew
    ├── gradlew.bat
    ├── image
    │   ├── Dockerfile
    │   ├── buildImage.sh
    │   ├── env.sh
    │   ├── etc
    │   │   └── server
    │   │       ├── conf.properties
    │   │       └── logging.xml
    │   ├── opt
    │   │   └── server
    │   │       ├── bin
    │   │       │   ├── run.sh
    │   │       │   ├── server
    │   │       │   └── server.bat
    │   │       └── lib
    │   │           ├── boon-json-0.5.5.jar
    │   │           ├── boon-reflekt-0.5.5.jar
    │   │           ├── jackson-annotations-2.2.2.jar
    │   │           ├── jackson-core-2.2.2.jar
    │   │           ├── jackson-databind-2.2.2.jar
    │   │           ├── log4j-1.2.16.jar
    │   │           ├── logback-classic-1.1.3.jar
    │   │           ├── logback-core-1.1.3.jar
    │   │           ├── netty-all-4.0.20.Final.jar
    │   │           ├── qbit-boon-0.8.2.jar
    │   │           ├── qbit-core-0.8.2.jar
    │   │           ├── qbit-vertx-0.8.2.jar
    │   │           ├── server.jar
    │   │           ├── slf4j-api-1.7.12.jar
    │   │           ├── vertx-core-2.1.1.jar
    │   │           └── vertx-platform-2.1.1.jar
    │   ├── runContainer.sh
    │   └── var
    │       └── log
    │           └── server
    │               └── readme.md
    ├── server.iml
    ├── settings.gradle
    └── src
        └── main
            └── java
                └── com
                    └── example
                        └── ServerMain.java
As you can see, we followed the first guide as a template very rigorously.

Using Docker, Gradle to create Java docker distributions for Java microservices Part 1

Docker and Vagrant: great tools for onboarding new developers

Docker and Vagrant are used quite a bit to set up development environments quickly. Docker can even be the deployment container for your application.
Docker and Vagrant are real lifesavers when you are trying to do some integration tests or just trying to onboard new developers.
They can help you debug problems that are hard to track down without running "actual servers". Running everything on one box is not the same as running many "servers", but we can get close with Docker and Vagrant. Even if your final deployment is VMware, EC2, or bare metal servers, Docker and Vagrant are great for integration testing and writing setup documentation.
For this blog post, we will focus more on Docker. Perhaps Vagrant can be used for a future blog post.

Goals for the examples in this article

In this article we will do the following:
  • Create an application distribution with gradle.
  • Deploy the application distribution locally with gradle.
  • Set up a docker image.
  • Deploy an application distribution with docker.
  • Run our new docker image with our application distribution in it.

Gradle

Gradle tends to get used a lot these days for new projects, and many have grown quite fond of the gradle application and distribution plugins. The gradle application plugin plus Docker (or Vagrant, or EC2 with boto) is becoming an essential way of doing Java microservice development.
Before we get into Docker, let's try to do something very simple. Let's use the gradle application plugin to create a simple Java application that reads its config from /etc/myapp/conf.properties and /etc/myapp/logging.xml and that we can deploy easily to /opt/myapp/bin (startup scripts) and /opt/myapp/lib (jar files). Then we will build on this to create a Docker image.

Using Gradle and the Gradle Application plugin and Docker

Gradle can create a distribution zip or tar file, which is an archive with the libs and shell scripts you need to run on Linux/Windows/Cygwin/OSX. Or it can just install all of this into a directory of your choice.
What I typically do is this (a quick command sketch follows the list):
  • Create a dist tar file using gradle.
  • Create a dockerfile.
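Both of those steps are single commands. A minimal sketch of the gradle side; installApp was the task name in the Gradle versions used here (newer Gradle calls it installDist):

$ gradle distTar      # writes a tar of the app under build/distributions
$ gradle installApp   # unpacks the same layout into build/install/<app>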
Docker uses the Dockerfile to copy the distribution files into the Docker image. From the Dockerfile, you can build a Docker image and run it as a container that you can ship around. The gradle build and the Dockerfile hold all of the config info that is common across environments.
You may even have special gradle build options for different environments. Or your app talks to Consul or etcd on startup and looks up the environment-specific pieces, like server locations, so the Docker binary dist can be identical everywhere. Consul and etcd are essential ingredients in a microservices architecture, both for elastic, consistent config and for service discovery.
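For the config-at-startup approach, Consul's key/value store is just another HTTP endpoint. A hedged sketch with curl; the key name and the agent address are assumptions:

# write a config value
$ curl -X PUT -d '9999' http://localhost:8500/v1/kv/myapp/port

# read it back as a raw value (no JSON/base64 wrapping)
$ curl http://localhost:8500/v1/kv/myapp/port?raw
9999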

Background on the advantages of Docker and the gradle application plugin

Our binary deliverable is the runnable docker image not a jar file or a zip. A running docker image is called a container. 
The gradle application plugin is an easy way to package up our compiled code and make it easy to shove into our docker image so we can run it as a docker container. 
If you go the Docker route, then the Docker container is our binary (runnable) distribution, not the tar or zip. We do not have to guess which JVM will be used, because we configure the Docker image with exactly the JVM we want. We can install any drivers, daemons, or utilities that we might need from the Linux world into our container.
Think of it this way. With maven and/or gradle, you can create a zip or war file that has the right version of the MySQL jar file. With Docker, you can create a Linux runnable binary that has all of the jar files, and not only the right MySQL jar file but the actual right version of the MySQL server, packaged in the same runnable binary (the Linux Docker container).
The gradle application plugin generates a zip or tar file with everything we need, or installs everything we need into a folder. The gradle application plugin does not require a master Java process or another repo cache of jars, etc. It is not a container and does not produce a container. We just get an easy way to run our Java process.
Between the gradle application plugin and Docker, we can do whatever we need to do with our binary configuration, but in a much more precise manner. Every jar, every Linux utility, everything we need, all in one binary that can be deployed in a private cloud, a public cloud, or just run on your laptop. No need to guess the OS, JVM, or libs. We ship exactly what we need.
Docker is used to make deployments faster and more precise.
If part of your tests includes running some integration with virtualization, then Docker should be the fastest route for creating new virtual instances (since it is just a chroot-like container and not a full virtual machine).
Docker, gradle, and the gradle application plugin are among your best options for creating fast integration tests. But of course if you have EC2/boto, Vagrant, Chef, etc., Docker is not the only option.

Gradle application plugin

Our first goal is to do the following: use the gradle application plugin to create a simple Java application that reads its config from /etc/myapp/conf.properties and /etc/myapp/logging.xml and that we can deploy easily to /opt/myapp/bin (startup scripts) and /opt/myapp/lib (jar files).
Before we get started let's do some prework.

Creating sample directories for config

$ sudo mkdir /etc/myapp
$ sudo chown rick /etc/myapp
Do the same for /opt/myapp, where rick is your username. :)

The Java app

Next let's create a really simple Java app, since our focus is on the gradle build and the Dockerfile.

Really simple Java main app

package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;

public class Main {

    public static void main(String... args) throws IOException {
        final String configLocation = System.getProperty("myapp.config.file");
        final File confFile = configLocation==null ?
            new File("./conf/conf.properties") :
            new File(configLocation);

        final Properties properties = new Properties();

        properties.load(Files.newInputStream(confFile.toPath()));

        System.out.printf("The port is %s\n", properties.getProperty("port"));
    }

}
It is a simple Java app that looks at a configuration file that has the port. The location of the configuration file is passed via a system property. If the system property is null, then it loads the config file from the conf directory under the current working directory.
When you run this program from an IDE, you will get:

Output

The port is 8080
But we want the ability to create an /etc/myapp/conf.properties and an /opt/myapp install dir. To do this we will use the gradle application plugin.
Before we use the application plugin to install our app, let's make sure we have the right install folders set up.

Prework to setup install folders

$ sudo mkdir /opt/
$ sudo mkdir /opt/myapp
$ sudo chown rick /opt/myapp
Replace rick with your username.

Creating an install directory with the application plugin

To create /etc/myapp/conf.properties and an /opt/myapp install dir, we will use the gradle application plugin.

gradle application plugin

apply plugin: 'java'
apply plugin: 'application'

mainClassName = 'com.example.Main'
applicationName = 'myapp'
applicationDefaultJvmArgs = ["-Dmyapp.config.file=/etc/myapp/conf.properties"]

repositories {
    mavenCentral()
}

task copyDist(type: Copy) {
    dependsOn "installApp"
    from "$buildDir/install/myapp"
    into '/opt/myapp'
}

task copyConf(type: Copy) {
    from "conf/conf.properties"
    into "/etc/myapp/"
}


dependencies {
}
Running the copyDist task will also run installApp, which is provided by the application plugin configured at the top of the file. We can use the copyConf task to copy over a sample configuration file.
Here is our build dir layout.

Build dir layout of the myapp gradle project

.
├── build.gradle
├── conf
│   └── conf.properties
├── settings.gradle
└── src
    └── main
        └── java
            └── com
                └── example
                    └── Main.java

conf/conf.properties

port=8080
To build and deploy the project into /opt/myapp, we do the following:

Building and installing our app

$ gradle build copyDist
This creates the directory structure for the install operation.
When we are done, our installed application looks like this:

Our app install

$ tree /opt/myapp/
/opt/myapp/
├── bin
│   ├── myapp
│   └── myapp.bat
└── lib
    └── gradle-app.jar

To deploy a sample config we do this:

Copy sample config

$ gradle build copyConf
Now edit the config file and change the port from 8080 to 9090.

Edit file and change property

$ nano /etc/myapp/conf.properties 
Now run it.
$ /opt/myapp/bin/myapp
The port is 9090
The key point here is that it is printing out 9090 instead of 8080. This means it is reading the config under /etc/myapp and not the config that is included with the app.
Change the properties file again. Run the app again. Do you see the change? If not, check to make sure you are editing the right file.

Logging

Logging should be one of the first things that you set up on any project. If it is a distributed system, then you need to set up a distributed logging aggregator as well.
SLF4J is the standard logging facade. Logback is the successor to Log4j. The nice thing about SLF4J is that you can use built-in logging, Log4j, or Logback underneath. For now, we are recommending Logback.
We are going to use Logback. Technically we are going to use SLF4J, with the Logback implementation underneath.
Logback allows you to set the location of the log configuration via a system property called logback.configurationFile.
Example setting logback via System property
java -Dlogback.configurationFile=/path/to/config.xml chapters.configuration.MyApp1
We need to add these dependencies to our gradle file for Logback.
  • logback-core-1.1.3.jar
  • logback-classic-1.1.3.jar
  • slf4j-api-1.7.7.jar

Adding Logback dependencies to gradle file

dependencies {
    compile 'ch.qos.logback:logback-core:1.1.3'
    compile 'ch.qos.logback:logback-classic:1.1.3'
    compile 'org.slf4j:slf4j-api:1.7.12'
}
The distribution/install that we generate with gradle needs to pass the location to our application. We do that with the applicationDefaultJvmArgs in the gradle build.

Adding logback.configurationFile System property to launcher script

applicationDefaultJvmArgs = [
        "-Dmyapp.config.file=/etc/myapp/conf.properties",
        "-Dlogback.configurationFile=/etc/myapp/logging.xml"]
Now we can keep a logging config in our project so that it is tracked in git.

./conf/logging.xml log config

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>conf %d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/opt/logging/logs</file>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <Pattern>%d{yyyy-MM-dd_HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</Pattern>
        </encoder>

        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <FileNamePattern>/opt/logging/logs%i.log.zip</FileNamePattern>
            <MinIndex>1</MinIndex>
            <MaxIndex>10</MaxIndex>
        </rollingPolicy>

        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <MaxFileSize>2MB</MaxFileSize>
        </triggeringPolicy>
    </appender>

    <logger name="com.example.Main" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="FILE" />
    </logger>

    <root level="INFO">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>
Then we can add some tasks in our build script to copy it to the right location.

Scripts to copy the logging config into the correct location for install

task copyLogConf(type: Copy) {
    from "conf/logging.xml"
    into "/etc/myapp/"
}

task copyAllConf() {
    dependsOn "copyConf", "copyLogConf"
}

task installMyApp() {
    dependsOn "copyDist", "copyConf", "copyLogConf"
}

To deploy our logging config, run:
gradle copyAllConf
Now after you install the logging config, you can turn logging on or off (or change levels) by editing /etc/myapp/logging.xml without rebuilding the app.
Let's change our main method to use the logging configuration instead of System.out.

Main method that uses Logback (via SLF4J) to do logging.

package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;


import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {

    static final Logger logger = LoggerFactory.getLogger(Main.class);

    public static void main(final String... args) throws IOException {
        final String configLocation = System.getProperty("myapp.config.file");
        final File confFile = configLocation==null ?
            new File("./conf/conf.properties") :
            new File(configLocation);

        final Properties properties = new Properties();

        properties.load(Files.newInputStream(confFile.toPath()));

        logger.debug(String.format("The port is %s\n", properties.getProperty("port")));
    }

}
Now when we run the app from the command line, we get:

Output from running the app

12:20:36,081 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
12:20:36,082 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
12:20:36,082 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
12:20:36,082 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@769c9116 - Registering current configuration as safe fallback point

conf 12:20:36.096 [main] DEBUG c.e.Main - The port is 9090

Installing Docker

You will need to install docker on your Mac OSX machine.
To do this, use brew, a package manager for OSX.
Install brew by following the instructions on the brew site, then run:

$ sudo chown -R rick /usr/local
$ brew install caskroom/cask/brew-cask
$ brew cask install virtualbox
$ brew install docker
$ brew install boot2docker
$ boot2docker init
$ boot2docker up

Add the following to your ~/.profile.

~/.profile changes for boot2docker

export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/rick/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1

For Windows and Linux, follow the install instructions for those operating systems. Linux does not need boot2docker; OSX and Windows do. boot2docker runs the docker daemon, which currently only runs on Linux. For OSX and Windows, boot2docker runs it inside a VirtualBox VM.
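Before moving on, it is worth a quick sanity check that the docker client on OSX can reach the daemon inside the VM; the exact output of boot2docker ip varies by version:

$ boot2docker ip
192.168.59.103

$ docker info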

Docker image folder

To facilitate docker usage, let's create an image folder with all the bits we need for our image.
The image folder holds everything that goes into the Docker image. It will be under the project dir.

Docker image dir Layout

$ pwd
$ tree
.
├── Dockerfile
├── buildImage.sh
├── etc
│   └── myapp
│       ├── conf.properties
│       └── logging.xml
├── opt
│   └── myapp
│       ├── bin
│       │   ├── myapp
│       │   ├── myapp.bat
│       │   └── run.sh
│       └── lib
│           ├── logback-classic-1.1.3.jar
│           ├── logback-core-1.1.3.jar
│           ├── myapp.jar
│           └── slf4j-api-1.7.12.jar
├── runContainer.sh
└── var
    └── log
        └── readme.md
We have added a task to our gradle script to copy the application files into this directory structure so we can easily build a Docker image.
Let's look at the Dockerfile which contains the directives for Docker to build our image.

Dockerfile

We kept the Dockerfile really simple.

Dockerfile for myapp (projectDir/image/Dockerfile)

FROM java:openjdk-8

COPY opt /opt
COPY etc /etc
COPY var /var


ENTRYPOINT /opt/myapp/bin/run.sh
This creates an image from an existing image that has Java OpenJDK 8 already installed. The Dockerfile copies opt, etc, and var into the Docker image.
To build this image, we run the following docker command:
$ docker build -t example/myapp:1.0-SNAP .
Ok, so where do all of the files under image come from? Most of them you have seen before. Copy over logging.xml and conf.properties to etc; you can configure the image differently than your dev environment. To get the opt directory populated, we added a task to our gradle script. To simplify (standardize) the entry point, and to allow setting env variables as well as other Java system properties, we added a run.sh script.
#!/usr/bin/env bash
/opt/myapp/bin/myapp
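The gradle-generated start script also honors a JAVA_OPTS environment variable (and an app-specific MYAPP_OPTS one), so a slightly richer run.sh could set per-container JVM flags. A hedged sketch; the heap size is just an example:

#!/usr/bin/env bash
# JAVA_OPTS / MYAPP_OPTS are picked up by the gradle-generated start script
export JAVA_OPTS="-Xmx256m"
exec /opt/myapp/bin/myapp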
Make the launch script executable. We specified the launch script as the entry point (what Docker should run when we run the container), i.e., ENTRYPOINT /opt/myapp/bin/run.sh.

Make run.sh executable.

$ pwd
/Users/rick/github/myapp/image

$ chmod +x opt/myapp/bin/run.sh 
Before you build it, you have to have the jar files and start scripts from the gradle application plugin.

Gradle task that copies application libs and start scripts into the Docker image

task copyDistToImage(type: Copy) {
    dependsOn "installApp"
    from "$buildDir/install/myapp"
    into "$projectDir/image/opt/myapp"
}

Running copyDistToImage

$ gradle copyDistToImage
Once you copy the dist to the image directory, you can build the image with docker build -t example/myapp:1.0-SNAP . as described above.
Once you build it, you can run it.

Running Docker container

$  ./runContainer.sh 
conf 20:39:09.474 [main] DEBUG c.e.Main - The port is set to  9999
From the above run, you can see that I modified the port to 9999 in projectDir/image/etc/myapp/conf.properties.
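runContainer.sh itself is not listed in this post; assuming it is just a thin wrapper, the equivalent command would be something like:

$ docker run --name myapp -t -i example/myapp:1.0-SNAP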

Full gradle build file with copy to image command

apply plugin: 'java'
apply plugin: 'application'


def installOptDir="/opt/myapp"

def installConfDir="/etc/myapp"

mainClassName = 'com.example.Main'
applicationName = 'myapp'

applicationDefaultJvmArgs = [
        "-Dmyapp.config.file=/etc/myapp/conf.properties",
        "-Dlogback.configurationFile=/etc/myapp/logging.xml"]

repositories {
    mavenCentral()
}

task copyDist(type: Copy) {
    dependsOn "installApp"
    from "$buildDir/install/myapp"
    into installOptDir
}

task copyConf(type: Copy) {
    from "conf/conf.properties"
    into installConfDir

}


task copyLogConf(type: Copy) {
    from "conf/logging.xml"
    into installConfDir

}

task copyAllConf() {
    dependsOn "copyConf", "copyLogConf"

}

task installMyApp() {
    dependsOn "copyDist", "copyConf", "copyLogConf"

}

task copyDistToImage(type: Copy) {
    dependsOn "installApp"
    from "$buildDir/install/myapp"
    into "$projectDir/image/opt/myapp"
}


dependencies {
    compile 'ch.qos.logback:logback-core:1.1.3'
    compile 'ch.qos.logback:logback-classic:1.1.3'
    compile 'org.slf4j:slf4j-api:1.7.12'
}
We created an application distribution with gradle, deployed it locally with gradle, set up a docker image, deployed the application distribution with docker, and ran our new docker image with our application distribution in it.

Ideas for future article

  • Show how to link containers
  • Set up Consul

Raw Notes

allprojects {

    group = 'mycompany.router'
    apply plugin: 'idea'
    apply plugin: 'java'
    apply plugin: 'maven'
    apply plugin: 'application'
    version = '0.1-SNAPSHOT'

}


subprojects {


    repositories {
        mavenLocal()
        mavenCentral()
    }

    sourceSets.main.resources.srcDir 'src/main/java'
    sourceCompatibility = JavaVersion.VERSION_1_8
    targetCompatibility = JavaVersion.VERSION_1_8

    dependencies {
        compile "io.fastjson:boon:$boonVersion"

        testCompile "junit:junit:4.11"
        testCompile "org.slf4j:slf4j-simple:[1.7,1.8)"
    }

    task buildDockerfile (type: Dockerfile) {
        dependsOn distTar
        from "java:openjdk-8"
        add "$distTar.archivePath", "/"
        workdir "/$distTar.archivePath.name" - ".$distTar.extension" + "/bin"
        entrypoint "./$project.name"
        if (project.dockerPort) {
            expose project.dockerPort
        }
        if (project.jmxPort) {
            expose project.jmxPort
        }
    }

    task buildDockerImage (type: Exec) {
        dependsOn buildDockerfile
        commandLine "docker", "build", "-t", "mycompany/$project.name:$version", buildDockerfile.dockerDir
    }


    task pushDockerImage (type: Exec) {
        dependsOn buildDockerfile
        commandLine "docker", "push", "mycompany/$project.name"
    }


    task runDockerImage (type: Exec) {
        dependsOn buildDockerImage
        if (project.dockerPort) {
        commandLine "docker", "run", "-i", "-p", "$project.dockerPort:$project.dockerPort", "-t", "mycompany/$project.name:$version"
        } else {
        commandLine "docker", "run", "-i", "-t", "mycompany/$project.name:$version"
        }
    }


    task runDocker (type: Exec) {
        if (project.dockerPort) {
        commandLine "docker", "run", "-i", "-p", "$project.dockerPort:$project.dockerPort", "-t", "mycompany/$project.name:$version"
        } else {
        commandLine "docker", "run", "-i", "-t", "mycompany/$project.name:$version"
        }
    }

}


project(':sample-web-server') {

    mainClassName = "mycompany.sample.web.WebServerApplication"

    applicationDefaultJvmArgs = ["-Dcom.sun.management.jmxremote", "-Dcom.sun.management.jmxremote.port=${jmxPort}",
                                 "-Dcom.sun.management.jmxremote.authenticate=false",  "-Dcom.sun.management.jmxremote.ssl=false"]

    dependencies {
        compile "io.fastjson:boon:$boonVersion"

        compile group: 'io.advantageous.qbit', name: 'qbit-boon', version: '0.5.2-SNAPSHOT'
        compile group: 'io.advantageous.qbit', name: 'qbit-vertx', version: '0.5.2-SNAPSHOT'

        testCompile "junit:junit:4.11"
        testCompile "org.slf4j:slf4j-simple:[1.7,1.8)"
    }

    buildDockerfile {
        add "$project.buildDir/resources/main/conf/sample-web-server-config.json", "/etc/sample-web-server/conf.json"
        add "$project.buildDir/resources/main/conf/sample-web-server-config.ctmpl", "/etc/sample-web-server/conf.ctmpl"
        add "$project.buildDir/resources/main/conf/sample-web-server-consul-template.cfg", "/etc/consul-template/conf/sample-web-server/sample-web-server-consul-template.cfg"
        volume "/etc/consul-template/conf/sample-web-server"
        volume "/etc/sample-web-server"
    }

}


// Minimal custom Gradle task type that builds up a Dockerfile line by line
// and stages the files to ADD into the docker build directory.
class Dockerfile extends DefaultTask {
    def dockerfileInfo = ""
    def dockerDir = "$project.buildDir/docker"
    def dockerfileDestination = "$project.buildDir/docker/Dockerfile"
    def filesToCopy = []

    File getDockerfileDestination() {
        project.file(dockerfileDestination)
    }

    def from(image="java") {
        dockerfileInfo += "FROM $image\r\n"
    }

    def maintainer(contact) {
        dockerfileInfo += "MAINTAINER $contact\r\n"
    }

    def add(sourceLocation, targetLocation) {
        filesToCopy << sourceLocation
        def file = project.file(sourceLocation)
        dockerfileInfo += "ADD $file.name ${targetLocation}\r\n"
    }

    def run(command) {
        dockerfileInfo += "RUN $command\r\n"
    }

    def volume(path) {
        dockerfileInfo += "VOLUME $path\r\n"
    }

    def env(var, value) {
        dockerfileInfo += "ENV $var $value\r\n"
    }

    def expose(port) {
        dockerfileInfo += "EXPOSE $port\r\n"
    }

    def workdir(dir) {
        dockerfileInfo += "WORKDIR $dir\r\n"
    }

    def cmd(command) {
        dockerfileInfo += "CMD $command\r\n"
    }

    def entrypoint(command) {
        dockerfileInfo += "ENTRYPOINT $command\r\n"
    }

    @TaskAction
    def writeDockerfile() {
        for (fileName in filesToCopy) {
            def source = project.file(fileName)
            def target = project.file("$dockerDir/$source.name")
            target.parentFile.mkdirs()
            target.delete()
            target << source.bytes
        }
        def file = getDockerfileDestination()
        file.parentFile.mkdirs()
        file.write dockerfileInfo
    }
}