Rick


Thursday, May 7, 2015

Using Docker and Gradle to create Docker distributions for Java microservices (draft 4)

I have used Docker and Vagrant quite a lot to set up a series of servers. This is a real lifesaver when you are trying to do integration tests and run into issues that would be hard to track down without running actual servers. Running everything on one box is not the same as running many servers. Even if your final deployment is VMware, EC2, or bare metal, Docker and Vagrant are great for integration testing and for writing setup documentation.
I also tend to use Gradle a lot these days and have grown quite fond of its application and distribution plugins. To me, the Gradle application plugin and Docker (or Vagrant, or EC2 with boto) are essential to doing Java microservice development.
Before we get into Vagrant or Docker, let's try to do something very simple. Let's use the Gradle application plugin to create a simple Java application that reads its config from /etc/myapp/conf.properties and /etc/myapp/logging.xml, and that we can deploy easily to /opt/myapp/bin (startup scripts) and /opt/myapp/lib (jar files).

Using Gradle and the Gradle Application plugin and Docker

Gradle can create a distribution zip or tar file, which is an archive with the libs and the shell scripts you need to run on Linux/Windows/Cygwin/OSX. Or it can install all of this into a directory of your choice.
What I typically do is this:
  • Create a dist tar file using gradle.
  • Create a dockerfile.
The Dockerfile copies the dist tar into the container, untars it, and runs it inside of Docker. Once you have a Dockerfile, you can build a Docker container that you can ship around. The Gradle build and the Dockerfile hold all of the config info that is common.
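A hand-written Dockerfile for this flow might look something like the sketch below. The image tag, tar name, and paths are illustrative only (gradle distTar puts the archive under build/distributions by default, and Docker's ADD auto-extracts a local tar archive):

```dockerfile
# Base image with the exact JVM we want (Java 8).
FROM java:openjdk-8

# ADD auto-extracts a local tar archive into the image filesystem.
ADD build/distributions/myapp.tar /

# Run the start script that the gradle application plugin generated.
WORKDIR /myapp/bin
ENTRYPOINT ["./myapp"]
```

Later in this post we generate a Dockerfile just like this from a Gradle task instead of writing it by hand.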
You may even have special Gradle build options for different environments. Or your app talks to Consul or etcd on startup and looks up the environment-specific settings, like server locations, so the Docker binary dist can be identical across environments. Consul and etcd are essential ingredients in a microservices architecture, both for elastic, consistent config and for service discovery.
Our binary deliverable is the runnable Docker container, not a jar file or a zip.
The distZip and/or distTar is just a way to package up our code and make it easy to shove into our Docker container.
If you go the Docker route, then the Docker container is our binary (runnable) distribution, not the tar or zip. We do not have to guess which JVM, because we configure the Docker container with exactly the JVM we want to use. We can install any drivers, daemons, or utilities that we might need from the Linux world into our container.
Think of it this way. With maven and/or gradle you can create a zip or war file that has the right version of the MySQL jar file. With Docker, you can create a Linux runnable binary that has all of the jar files and not only the right MySQL jar file but the actual right version MySQL server which can be packaged in the same runnable binary (the Linux Docker container).
The Gradle application plugin generates a zip or tar file with everything we need, and does not require a master Java process or another repo cache of jars. Between the application plugin and Docker, we do whatever we need to do with our binary configuration, but in a much more precise manner. Every jar, every Linux utility, everything we need, all in one binary that can be deployed in a private cloud, a public cloud, or just run on your laptop. No need to guess the OS, JVM, or libs. We ship exactly what we need.
Docker is used to make deployments faster and more precise.
If part of your testing includes running some integration with virtualization, then Docker should be the fastest route for creating new virtual instances (since it is chroot-like and not a full virtual machine).
I think Docker, Gradle, and the Gradle application plugin are your best option for creating fast integration tests. But of course, if you have EC2/boto, Vagrant, etc., Docker is not the only option.

Gradle application plugin

Our first goal is to do the following: use the Gradle application plugin to create a simple Java application that reads its config from /etc/myapp/conf.properties and /etc/myapp/logging.xml, and that we can deploy easily to /opt/myapp/bin (startup scripts) and /opt/myapp/lib (jar files).
Before we get started let's do some prework.
$ sudo mkdir /etc/myapp
$ sudo chown rhightower /etc/myapp
Do the same for /opt/myapp (where rhightower is your username). :)

The Java app

package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;

public class Main {

    public static void main(String... args) throws IOException {
        final String configLocation = System.getProperty("myapp.config.file");
        final File confFile = configLocation==null ?
            new File("./conf/conf.properties") :
            new File(configLocation);

        final Properties properties = new Properties();

        properties.load(Files.newInputStream(confFile.toPath()));

        System.out.printf("The port is %s\n", properties.getProperty("port"));
    }

}
It is a simple Java app that looks at a configuration file containing the port. The location of the configuration file is passed via a system property. If the system property is null, then it loads the config file from the current working directory.
When you run this program from an IDE, you will get:
The port is 8080
But we want the ability to create an /etc/myapp/conf.properties file and an /opt/myapp install dir. To do this we will use the application plugin.

Creating an install directory with the application plugin

To create /etc/myapp/conf.properties and an /opt/myapp install dir, we will use the gradle application plugin.

gradle application plugin

apply plugin: 'java'
apply plugin: 'application'

mainClassName = 'com.example.Main'
applicationName = 'myapp'
applicationDefaultJvmArgs = ["-Dmyapp.config.file=/etc/myapp/conf.properties"]

repositories {
    mavenCentral()
}

task copyDist(type: Copy) {
    dependsOn "installApp"
    from "$buildDir/install/myapp"
    into '/opt/myapp'
}

task copyConf(type: Copy) {
    from "conf/conf.properties"
    into "/etc/myapp/"
}


dependencies {
}
Running the copyDist task will also run installApp, which is provided by the application plugin configured at the top of the file. We can use the copyConf task to copy over a sample configuration file.
Here is our build dir layout.

Build dir layout of the myapp gradle project

.
├── build.gradle
├── conf
│   └── conf.properties
├── settings.gradle
└── src
    └── main
        └── java
            └── com
                └── example
                    └── Main.java

conf/conf.properties

port=8080
To build and deploy the project into /opt/myapp, we do the following:

Building and installing our app

$ gradle build copyDist
This creates the following directory structure for the install operation.

Our app install

$ tree /opt/myapp/
/opt/myapp/
├── bin
│   ├── myapp
│   └── myapp.bat
└── lib
    └── gradle-app.jar

To deploy a sample config we do this:

Copy sample config

$ gradle build copyConf
Now edit the config file and change the port from 8080 to 9090.

Edit file and change property

$ nano /etc/myapp/conf.properties 
Now run it.
$ /opt/myapp/bin/myapp
The port is 9090
Change the properties file again. Run the app again.

Next up

Configuring logging under /etc/myapp/logging.xml.

Logging

SLF4J is the standard logging facade for Java. Logback is the successor to Log4j. The nice thing about SLF4J is that you can plug in built-in logging, Log4j, or Logback behind it. For now, we are recommending Logback.
We are going to use Logback. Technically, we are going to use SLF4J as the API, with the Logback implementation behind it.
Logback allows you to set the location of the log configuration via a System property called logback.configurationFile.

Example setting the logback config location via System property

java -Dlogback.configurationFile=/path/to/config.xml chapters.configuration.MyApp1
We need to add these dependencies to our gradle file:
  • logback-core-1.1.3.jar
  • logback-classic-1.1.3.jar
  • slf4j-api-1.7.12.jar

Adding dependencies to gradle file

dependencies {
    compile 'ch.qos.logback:logback-core:1.1.3'
    compile 'ch.qos.logback:logback-classic:1.1.3'
    compile 'org.slf4j:slf4j-api:1.7.12'
}
The distribution/install that we generate with Gradle needs to pass the location of this file to our application. We do that with applicationDefaultJvmArgs in the Gradle build.

Adding logback.configurationFile System property to launcher script

applicationDefaultJvmArgs = [
        "-Dmyapp.config.file=/etc/myapp/conf.properties",
        "-Dlogback.configurationFile=/etc/myapp/logging.xml"]
Now we can store a logging config in our project so it gets stored in git.

./conf/logging.xml log config

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>conf %d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/opt/logging/logs</file>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <Pattern>%d{yyyy-MM-dd_HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</Pattern>
        </encoder>

        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <FileNamePattern>/opt/logging/logs%i.log.zip</FileNamePattern>
            <MinIndex>1</MinIndex>
            <MaxIndex>10</MaxIndex>
        </rollingPolicy>

        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <MaxFileSize>2MB</MaxFileSize>
        </triggeringPolicy>
    </appender>

    <logger name="com.example.Main" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="FILE" />
    </logger>

    <root level="INFO">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>
Then we can add some tasks in our build script to copy it to the right location.

Tasks to copy the logging config into the correct location for install

task copyLogConf(type: Copy) {
    from "conf/logging.xml"
    into "/etc/myapp/"
}

task copyAllConf() {
    dependsOn "copyConf", "copyLogConf"
}

To deploy our logging config, run:
gradle copyAllConf
Once the logging config is installed, you can turn loggers on or off by editing it.
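For example, to silence the com.example.Main logger shown above, set its level to OFF in /etc/myapp/logging.xml (set it back to DEBUG to turn it on again):

```xml
<!-- OFF disables this logger; DEBUG re-enables it. -->
<logger name="com.example.Main" level="OFF" additivity="false">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="FILE" />
</logger>
```

Logback picks this file up at startup, so restart the app after editing it.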
Let's change our main method to use the logging configuration.

Main method that uses SLF4J/Logback to do logging.

package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;


import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {

    static final Logger logger = LoggerFactory.getLogger(Main.class);

    public static void main(final String... args) throws IOException {
        final String configLocation = System.getProperty("myapp.config.file");
        final File confFile = configLocation==null ?
            new File("./conf/conf.properties") :
            new File(configLocation);

        final Properties properties = new Properties();

        properties.load(Files.newInputStream(confFile.toPath()));

        logger.debug("The port is {}", properties.getProperty("port"));
    }

}

Next up after that

Configuring the Dockerfile.

Raw Notes

allprojects {

    group = 'mycompany.router'
    apply plugin: 'idea'
    apply plugin: 'java'
    apply plugin: 'maven'
    apply plugin: 'application'
    version = '0.1-SNAPSHOT'

}


subprojects {


    repositories {
        mavenLocal()
        mavenCentral()
    }

    sourceSets.main.resources.srcDir 'src/main/java'
    sourceCompatibility = JavaVersion.VERSION_1_8
    targetCompatibility = JavaVersion.VERSION_1_8

    dependencies {
        compile "io.fastjson:boon:$boonVersion"

        testCompile "junit:junit:4.11"
        testCompile "org.slf4j:slf4j-simple:[1.7,1.8)"
    }

    task buildDockerfile (type: Dockerfile) {
        dependsOn distTar
        from "java:openjdk-8"
        add "$distTar.archivePath", "/"
        workdir "/$distTar.archivePath.name" - ".$distTar.extension" + "/bin"
        entrypoint "./$project.name"
        if (project.dockerPort) {
            expose project.dockerPort
        }
        if (project.jmxPort) {
            expose project.jmxPort
        }
    }

    task buildDockerImage (type: Exec) {
        dependsOn buildDockerfile
        commandLine "docker", "build", "-t", "mycompany/$project.name:$version", buildDockerfile.dockerDir
    }


    task pushDockerImage (type: Exec) {
        dependsOn buildDockerfile
        commandLine "docker", "push", "mycompany/$project.name"
    }


    task runDockerImage (type: Exec) {
        dependsOn buildDockerImage
        if (project.dockerPort) {
        commandLine "docker", "run", "-i", "-p", "$project.dockerPort:$project.dockerPort", "-t", "mycompany/$project.name:$version"
        } else {
        commandLine "docker", "run", "-i", "-t", "mycompany/$project.name:$version"
        }
    }


    task runDocker (type: Exec) {
        if (project.dockerPort) {
        commandLine "docker", "run", "-i", "-p", "$project.dockerPort:$project.dockerPort", "-t", "mycompany/$project.name:$version"
        } else {
        commandLine "docker", "run", "-i", "-t", "mycompany/$project.name:$version"
        }
    }

}


project(':sample-web-server') {

    mainClassName = "mycompany.sample.web.WebServerApplication"

    applicationDefaultJvmArgs = ["-Dcom.sun.management.jmxremote", "-Dcom.sun.management.jmxremote.port=${jmxPort}",
                                 "-Dcom.sun.management.jmxremote.authenticate=false",  "-Dcom.sun.management.jmxremote.ssl=false"]

    dependencies {
        compile "io.fastjson:boon:$boonVersion"

        compile group: 'io.advantageous.qbit', name: 'qbit-boon', version: '0.5.2-SNAPSHOT'
        compile group: 'io.advantageous.qbit', name: 'qbit-vertx', version: '0.5.2-SNAPSHOT'

        testCompile "junit:junit:4.11"
        testCompile "org.slf4j:slf4j-simple:[1.7,1.8)"
    }

    buildDockerfile {
        add "$project.buildDir/resources/main/conf/sample-web-server-config.json", "/etc/sample-web-server/conf.json"
        add "$project.buildDir/resources/main/conf/sample-web-server-config.ctmpl", "/etc/sample-web-server/conf.ctmpl"
        add "$project.buildDir/resources/main/conf/sample-web-server-consul-template.cfg", "/etc/consul-template/conf/sample-web-server/sample-web-server-consul-template.cfg"
        volume "/etc/consul-template/conf/sample-web-server"
        volume "/etc/sample-web-server"
    }

}


class Dockerfile extends DefaultTask {
    def dockerfileInfo = ""
    def dockerDir = "$project.buildDir/docker"
    def dockerfileDestination = "$project.buildDir/docker/Dockerfile"
    def filesToCopy = []

    File getDockerfileDestination() {
        project.file(dockerfileDestination)
    }

    def from(image="java") {
        dockerfileInfo += "FROM $image\r\n"
    }

    def maintainer(contact) {
        dockerfileInfo += "MAINTAINER $contact\r\n"
    }

    def add(sourceLocation, targetLocation) {
        filesToCopy << sourceLocation
        def file = project.file(sourceLocation)
        dockerfileInfo += "ADD $file.name ${targetLocation}\r\n"
    }

    def run(command) {
        dockerfileInfo += "RUN $command\r\n"
    }

    def volume(path) {
        dockerfileInfo += "VOLUME $path\r\n"
    }

    def env(var, value) {
        dockerfileInfo += "ENV $var $value\r\n"
    }

    def expose(port) {
        dockerfileInfo += "EXPOSE $port\r\n"
    }

    def workdir(dir) {
        dockerfileInfo += "WORKDIR $dir\r\n"
    }

    def cmd(command) {
        dockerfileInfo += "CMD $command\r\n"
    }

    def entrypoint(command) {
        dockerfileInfo += "ENTRYPOINT $command\r\n"
    }

    @TaskAction
    def writeDockerfile() {
        for (fileName in filesToCopy) {
            def source = project.file(fileName)
            def target = project.file("$dockerDir/$source.name")
            target.parentFile.mkdirs()
            target.delete()
            target << source.bytes
        }
        def file = getDockerfileDestination()
        file.parentFile.mkdirs()
        file.write dockerfileInfo
    }
}
