Using Docker and Gradle to create Java Docker distributions for Java microservices, Part 1
Docker and Vagrant: great tools for onboarding new developers
Docker and Vagrant are used quite a bit to set up development environments quickly. Docker can even be the deployment container for your application.
Docker and Vagrant are real lifesavers when you are trying to do some integration tests or just trying to onboard new developers.
They can help you debug hard-to-track-down issues without running "actual servers". Running everything on one box is not the same as running many "servers", but we can get close with Docker and Vagrant. Even if your final deployment is VMware or EC2 or bare metal servers, Docker and Vagrant are great for integration testing and writing setup documentation.
For this blog post, we will focus more on Docker. Perhaps Vagrant can be used for a future blog post.
Goals for the examples in this article
We will create an application distribution with gradle. We will deploy the application distribution locally with gradle. We will set up a docker image. We will deploy an application distribution with docker. We will run our new docker image with our application distribution in it.
Gradle
Gradle tends to get used a lot these days for new projects. Many have grown quite fond of the gradle application and distribution plugins. The gradle application plugin and Docker (or Vagrant, or EC2 with boto) are close to essential for Java microservice development.
Before we get into Docker, let's try to do something very simple. Let's use the gradle application plugin to create a simple Java application that reads its config from /etc/myapp/conf.properties and /etc/myapp/logging.xml, and that we can deploy easily to /opt/myapp/bin (startup scripts) and /opt/myapp/lib (jar files). Then we will build on this to create a Docker image.
Using Gradle and the Gradle Application plugin and Docker
Gradle can create a distribution zip or tar file, which is an archive file with the libs and shell scripts you need to run on Linux/Windows/Cygwin/OSX. Or it can just install all of this stuff into a directory of your choice.
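For example, the application plugin adds distZip and distTar tasks that drop an archive under build/distributions (shown here only as a quick illustration; this article mostly uses the install approach below):
$ gradle distTar
$ ls build/distributions/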
What I typically do is this….
- Create a dist tar file using gradle.
- Create a dockerfile.
Docker uses the Dockerfile to copy the distribution files into the Docker image. From the Dockerfile, you can build a Docker image that you can ship around and run as a container. The gradle build file and the Dockerfile hold all of the config info that is common.
You may even have special gradle build options for different environments. Or your app can talk to Consul or etcd on startup and look up environment-specific settings, like server locations, so the Docker binary dist can be identical across environments. Consul and etcd are essential ingredients in a microservices architecture, both for elastic, consistent config and for service discovery.
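As a rough sketch (not part of this article's build), an app or its startup script could pull a server location out of a local Consul agent over Consul's HTTP KV API; the key name myapp/db.host below is made up for illustration:
# hypothetical lookup against a local Consul agent (default port 8500)
$ curl -s http://localhost:8500/v1/kv/myapp/db.host?raw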
Background of why Docker and gradle application plugin advantages
Our binary deliverable is the runnable docker image, not a jar file or a zip. A running docker image is called a container.
The gradle application plugin is an easy way to package up our compiled code and make it easy to shove into our docker image so we can run it as a docker container.
If you go the docker route, then the docker container is our binary (runnable) distribution, not the tar or zip. We do not have to guess which JVM, because we configure the docker image with exactly the JVM we want to use. We can install any drivers or daemons or utilities that we might need from the Linux world into our container.
Think of it this way. With maven and/or gradle you can create a zip or war file that has the right version of the MySQL jar file. With Docker, you can create a Linux runnable binary that has all of the jar files, and not only the right MySQL jar file but also the right version of the actual MySQL server, packaged in the same runnable binary (the Linux Docker container).
Gradle application plugin generates a zip or tar file with everything we need, or installs everything we need into a folder. Gradle application plugin does not require a master Java process, or another repo cache of jars, etc. It is not a container and does not produce a container. We just get an easy way to run our Java process.
Between the gradle application plugin and docker, we can do whatever we need to do with our binary configuration, but in a much more precise manner. Every jar, every Linux utility, everything we need, all in one binary that can be deployed in a private cloud, a public cloud, or just run on your laptop. No need to guess the OS, JVM, or libs. We ship exactly what we need.
Docker is used to make deployments faster and more precise.
If part of your tests includes running some integration with virtualization, then Docker should be the fastest route for creating new virtual instances (since it is just a chroot-like container and not a full virtual machine).
Docker, gradle, and the gradle application plugin are one of your best options for creating fast integration tests. But of course, if you have EC2/boto, Vagrant, Chef, etc., Docker is not the only option.
Gradle application plugin
Our first goal is to do the following: use the gradle application plugin to create a simple Java application that reads its config from /etc/myapp/conf.properties and /etc/myapp/logging.xml, and that we can deploy easily to /opt/myapp/bin (startup scripts) and /opt/myapp/lib (jar files).
Before we get started, let's do some prework.
Creating sample directories for config
$ sudo mkdir /etc/myapp
$ sudo chown rick /etc/myapp
Do the same for /opt/myapp, where
rick
is your username. :)
The Java app
Next let's create a really simple Java app, since our focus is on the gradle build and the Dockerfile.
Really simple Java main app
package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;

public class Main {

    public static void main(String... args) throws IOException {

        final String configLocation = System.getProperty("myapp.config.file");

        final File confFile = configLocation == null ?
                new File("./conf/conf.properties") :
                new File(configLocation);

        final Properties properties = new Properties();
        properties.load(Files.newInputStream(confFile.toPath()));

        System.out.printf("The port is %s\n", properties.getProperty("port"));
    }
}
It is a simple Java app: it looks at a configuration file that has the port. The location of the configuration file is passed via a system property (myapp.config.file). If the system property is not set, then it loads the config file from the conf directory under the current working directory.
When you run this program from an IDE, you will get:
Output
The port is 8080
But we want the ability to create an
/etc/myapp/conf.properties
and an /opt/myapp
install dir. To do this we will use the gradle application plugin.
Before we use the application plugin to install our app, let's make sure we have the right install folders set up.
Prework to set up install folders
$ sudo mkdir /opt/
$ sudo mkdir /opt/myapp
$ sudo chown rick /opt/myapp
Replace
rick
with your username.
Creating an install directory with the application plugin
To create
/etc/myapp/conf.properties
and an /opt/myapp
install dir, we will use the gradle application plugin.
gradle application plugin
apply plugin: 'java'
apply plugin: 'application'

mainClassName = 'com.example.Main'
applicationName = 'myapp'
applicationDefaultJvmArgs = ["-Dmyapp.config.file=/etc/myapp/conf.properties"]

repositories {
    mavenCentral()
}

task copyDist(type: Copy) {
    dependsOn "installApp"
    from "$buildDir/install/myapp"
    into '/opt/myapp'
}

task copyConf(type: Copy) {
    from "conf/conf.properties"
    into "/etc/myapp/"
}

dependencies {
}
Running the copyDist task will also run the installApp task, which is provided by the application plugin configured at the top of the file. We can use the copyConf task to copy over a sample configuration file.
Here is our build dir layout.
Build dir layout of the myapp gradle project
.
├── build.gradle
├── conf
│ └── conf.properties
├── settings.gradle
└── src
└── main
└── java
└── com
└── example
└── Main.java
conf/conf.properties
port=8080
To build and deploy the project into
/opt/myapp
, we do the following:
Building and installing our app
$ gradle build copyDist
This creates the directory structure for the install operation. When we are done, our installed application looks like this:
Our app install
$ tree /opt/myapp/
/opt/myapp/
├── bin
│ ├── myapp
│ └── myapp.bat
└── lib
└── gradle-app.jar
To deploy a sample config we do this:
Copy sample config
$ gradle build copyConf
Now edit the config file and change the port from 8080 to 9090.
Edit file and change property
$ nano /etc/myapp/conf.properties
Now run it.
$ /opt/myapp/bin/myapp
The port is 9090
The key point here is that it is printing out 9090 instead of 8080. This means it is reading the config under
/etc/myapp
and not the config that is included in the app.
Change the properties file again. Run the app again. Do you see the change? If not, check to make sure you are editing the right file.
Logging
Logging should be one of the first things that you set up on any project. If it is a distributed system, then you need to set up a distributed log aggregator as well.
SLF4J is the standard logging facade for Java. Logback is the successor to Log4j. The nice thing about SLF4J is that you can plug in the built-in JDK logging, Log4j, or Logback underneath it. For now, we are recommending Logback.
We are going to use Logback. Technically, we are going to code against SLF4J and use the Logback implementation of it.
Logback allows you to set the location of the log configuration via a System property called
logback.configurationFile
Example setting logback via System property
java -Dlogback.configurationFile=/path/to/config.xml chapters.configuration.MyApp1
We need to add these dependencies to our gradle file for Logback:
- logback-core-1.1.3.jar
- logback-classic-1.1.3.jar
- slf4j-api-1.7.12.jar
Adding Logback dependencies to gradle file
dependencies {
    compile 'ch.qos.logback:logback-core:1.1.3'
    compile 'ch.qos.logback:logback-classic:1.1.3'
    compile 'org.slf4j:slf4j-api:1.7.12'
}
The distribution/install that we generate with gradle needs to pass the config location to our application. We do that with applicationDefaultJvmArgs in the gradle build.
Adding logback.configurationFile System property to launcher script
applicationDefaultJvmArgs = [
"-Dmyapp.config.file=/etc/myapp/conf.properties",
"-Dlogback.configurationFile=/etc/myapp/logging.xml"]
Now we can keep a logging config in our project so it gets checked into git.
./conf/logging.xml log config
<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>conf %d{HH:mm:ss.SSS} [%thread] %-5level %logger{5} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/opt/logging/logs</file>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <Pattern>%d{yyyy-MM-dd_HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</Pattern>
        </encoder>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <FileNamePattern>/opt/logging/logs%i.log.zip</FileNamePattern>
            <MinIndex>1</MinIndex>
            <MaxIndex>10</MaxIndex>
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <MaxFileSize>2MB</MaxFileSize>
        </triggeringPolicy>
    </appender>

    <logger name="com.example.Main" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT" />
        <appender-ref ref="FILE" />
    </logger>

    <root level="INFO">
        <appender-ref ref="STDOUT" />
    </root>

</configuration>
Then we can add some tasks in our build script to copy it to the right location.
Tasks to copy the logging config into the correct location for install
task copyLogConf(type: Copy) {
    from "conf/logging.xml"
    into "/etc/myapp/"
}

task copyAllConf() {
    dependsOn "copyConf", "copyLogConf"
}

task installMyApp() {
    dependsOn "copyDist", "copyConf", "copyLogConf"
}
To deploy our logging config, run:
$ gradle copyAllConf
Now, after you install the logging config, you can turn logging on or off (or change levels) by editing /etc/myapp/logging.xml.
Let's change our main method to use the logging configuration instead of System.out.
Main method that uses SLF4J/Logback to do logging.
package com.example;

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Main {

    static final Logger logger = LoggerFactory.getLogger(Main.class);

    public static void main(final String... args) throws IOException {

        final String configLocation = System.getProperty("myapp.config.file");

        final File confFile = configLocation == null ?
                new File("./conf/conf.properties") :
                new File(configLocation);

        final Properties properties = new Properties();
        properties.load(Files.newInputStream(confFile.toPath()));

        logger.debug(String.format("The port is %s\n", properties.getProperty("port")));
    }
}
Now when we run the app from the command line, we get:
Output from running the app
12:20:36,081 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
12:20:36,082 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
12:20:36,082 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
12:20:36,082 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@769c9116 - Registering current configuration as safe fallback point
conf 12:20:36.096 [main] DEBUG c.e.Main - The port is 9090
Installing Docker
You will need to install docker on your Mac OSX machine.
To do this, use brew, a package manager for OSX.
Install brew by following the instructions from the brew site. Then:
$ sudo chown -R rick /usr/local
$ brew install caskroom/cask/brew-cask
$ brew cask install virtualbox
$ brew install docker
$ brew install boot2docker
$ boot2docker init
$ boot2docker up
Add the following to your ~/.profile
~/.profile changes for boot2docker
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/rick/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
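With boot2docker up and the variables above exported, a quick sanity check (not part of the original steps) is to make sure the docker client can reach the daemon:
$ docker info
$ docker ps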
For Windows and Linux, follow the install instructions for those operating systems. Linux does not need boot2docker; OSX and Windows do. boot2docker runs the docker daemon, which currently only runs on Linux, inside a VirtualBox VM.
Docker image folder
To facilitate docker usage, let's create an image folder with all the bits we need for our image.
The image folder will hold everything that goes into the docker image. It will be under the project dir.
Docker image dir Layout
$ pwd
$ tree
.
├── Dockerfile
├── buildImage.sh
├── etc
│ └── myapp
│ ├── conf.properties
│ └── logging.xml
├── opt
│ └── myapp
│ ├── bin
│ │ ├── myapp
│ │ ├── myapp.bat
│ │ └── run.sh
│ └── lib
│ ├── logback-classic-1.1.3.jar
│ ├── logback-core-1.1.3.jar
│ ├── myapp.jar
│ └── slf4j-api-1.7.12.jar
├── runContainer.sh
└── var
└── log
└── readme.md
We have added a task to our gradle script to copy the application files into this directory structure so we can easily build a docker image.
Let's look at the
Dockerfile
which contains the directives for Docker to build our image.
Dockerfile
We kept the Dockerfile really simple.
Dockerfile for myapp (projectDir/image/Dockerfile)
FROM java:openjdk-8
COPY opt /opt
COPY etc /etc
COPY var /var
ENTRYPOINT /opt/myapp/bin/run.sh
This creates an image from an existing image that has Java OpenJDK 8 already installed. The docker file copies
opt
, etc
, and var
into the Docker image.
To build this image, we run the following docker command:
$ docker build -t example/myapp:1.0-SNAP .
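This is presumably what the buildImage.sh helper in the image directory wraps; its contents are not shown in the article, so the version below is just a minimal sketch:
#!/usr/bin/env bash
# hypothetical contents of image/buildImage.sh (not shown in the article)
docker build -t example/myapp:1.0-SNAP .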
OK, so where do all of the files under image come from? Most of them you have seen before. Copy logging.xml and conf.properties into image/etc/myapp; you can configure the image differently than your dev environment. To get the opt directory populated, we added a task to our gradle script. To standardize the entry point, and to allow setting environment variables as well as other Java system properties, we added a run.sh script.
#!/usr/bin/env bash
/opt/myapp/bin/myapp
We specified that this launch script is the entry point (what Docker runs when the container starts), e.g.,
ENTRYPOINT /opt/myapp/bin/run.sh
Make run.sh executable.
$ pwd
/Users/rick/github/myapp/image
$ chmod +x opt/myapp/bin/run.sh
Before you build the image, you need the jar files and start scripts generated by the gradle application plugin.
Gradle task that copies application libs and start scripts into the Docker image folder
task copyDistToImage(type: Copy) {
    dependsOn "installApp"
    from "$buildDir/install/myapp"
    into "$projectDir/image/opt/myapp"
}
Running copyDistToImage
$ gradle copyDistToImage
Once you copy the dist to the image directory, then you can build the image with
docker build -t example/myapp:1.0-SNAP .
as described above.
Once you build the image, you can run it.
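The runContainer.sh helper from the image directory is not shown in the article either; a minimal version might look like this:
#!/usr/bin/env bash
# hypothetical contents of image/runContainer.sh (not shown in the article)
docker run -i -t example/myapp:1.0-SNAP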
Running Docker container
$ ./runContainer.sh
conf 20:39:09.474 [main] DEBUG c.e.Main - The port is set to 9999
From the above run, you can see that I modified the port to 9999 in projectDir/image/etc/myapp/conf.properties.
Full gradle build file with copy to image command
apply plugin: 'java'
apply plugin: 'application'

def installOptDir = "/opt/myapp"
def installConfDir = "/etc/myapp"

mainClassName = 'com.example.Main'
applicationName = 'myapp'
applicationDefaultJvmArgs = [
        "-Dmyapp.config.file=/etc/myapp/conf.properties",
        "-Dlogback.configurationFile=/etc/myapp/logging.xml"]

repositories {
    mavenCentral()
}

task copyDist(type: Copy) {
    dependsOn "installApp"
    from "$buildDir/install/myapp"
    into installOptDir
}

task copyConf(type: Copy) {
    from "conf/conf.properties"
    into installConfDir
}

task copyLogConf(type: Copy) {
    from "conf/logging.xml"
    into installConfDir
}

task copyAllConf() {
    dependsOn "copyConf", "copyLogConf"
}

task installMyApp() {
    dependsOn "copyDist", "copyConf", "copyLogConf"
}

task copyDistToImage(type: Copy) {
    dependsOn "installApp"
    from "$buildDir/install/myapp"
    into "$projectDir/image/opt/myapp"
}

dependencies {
    compile 'ch.qos.logback:logback-core:1.1.3'
    compile 'ch.qos.logback:logback-classic:1.1.3'
    compile 'org.slf4j:slf4j-api:1.7.12'
}
We created an application distribution with gradle. We deployed the application distribution locally with gradle. We set up a docker image. We deployed an application distribution with docker. We ran a docker image with our application distribution in it.
Ideas for a future article
- Show how to link containers
- Set up Consul
Raw Notes
allprojects {
    group = 'mycompany.router'

    apply plugin: 'idea'
    apply plugin: 'java'
    apply plugin: 'maven'
    apply plugin: 'application'

    version = '0.1-SNAPSHOT'
}

subprojects {

    repositories {
        mavenLocal()
        mavenCentral()
    }

    sourceSets.main.resources.srcDir 'src/main/java'

    sourceCompatibility = JavaVersion.VERSION_1_8
    targetCompatibility = JavaVersion.VERSION_1_8

    dependencies {
        compile "io.fastjson:boon:$boonVersion"
        testCompile "junit:junit:4.11"
        testCompile "org.slf4j:slf4j-simple:[1.7,1.8)"
    }

    task buildDockerfile(type: Dockerfile) {
        dependsOn distTar
        from "java:openjdk-8"
        add "$distTar.archivePath", "/"
        workdir "/$distTar.archivePath.name" - ".$distTar.extension" + "/bin"
        entrypoint "./$project.name"
        if (project.dockerPort) {
            expose project.dockerPort
        }
        if (project.jmxPort) {
            expose project.jmxPort
        }
    }

    task buildDockerImage(type: Exec) {
        dependsOn buildDockerfile
        commandLine "docker", "build", "-t", "mycompany/$project.name:$version", buildDockerfile.dockerDir
    }

    task pushDockerImage(type: Exec) {
        dependsOn buildDockerfile
        commandLine "docker", "push", "mycompany/$project.name"
    }

    task runDockerImage(type: Exec) {
        dependsOn buildDockerImage
        if (project.dockerPort) {
            commandLine "docker", "run", "-i", "-p", "$project.dockerPort:$project.dockerPort", "-t", "mycompany/$project.name:$version"
        } else {
            commandLine "docker", "run", "-i", "-t", "mycompany/$project.name:$version"
        }
    }

    task runDocker(type: Exec) {
        if (project.dockerPort) {
            commandLine "docker", "run", "-i", "-p", "$project.dockerPort:$project.dockerPort", "-t", "mycompany/$project.name:$version"
        } else {
            commandLine "docker", "run", "-i", "-t", "mycompany/$project.name:$version"
        }
    }
}
project(':sample-web-server') {

    mainClassName = "mycompany.sample.web.WebServerApplication"
    applicationDefaultJvmArgs = ["-Dcom.sun.management.jmxremote", "-Dcom.sun.management.jmxremote.port=${jmxPort}",
                                 "-Dcom.sun.management.jmxremote.authenticate=false", "-Dcom.sun.management.jmxremote.ssl=false"]

    dependencies {
        compile "io.fastjson:boon:$boonVersion"
        compile group: 'io.advantageous.qbit', name: 'qbit-boon', version: '0.5.2-SNAPSHOT'
        compile group: 'io.advantageous.qbit', name: 'qbit-vertx', version: '0.5.2-SNAPSHOT'
        testCompile "junit:junit:4.11"
        testCompile "org.slf4j:slf4j-simple:[1.7,1.8)"
    }

    buildDockerfile {
        add "$project.buildDir/resources/main/conf/sample-web-server-config.json", "/etc/sample-web-server/conf.json"
        add "$project.buildDir/resources/main/conf/sample-web-server-config.ctmpl", "/etc/sample-web-server/conf.ctmpl"
        add "$project.buildDir/resources/main/conf/sample-web-server-consul-template.cfg", "/etc/consul-template/conf/sample-web-server/sample-web-server-consul-template.cfg"
        volume "/etc/consul-template/conf/sample-web-server"
        volume "/etc/sample-web-server"
    }
}
// Custom Gradle task type that accumulates Dockerfile directives and stages the files it references.
class Dockerfile extends DefaultTask {

    def dockerfileInfo = ""
    def dockerDir = "$project.buildDir/docker"
    def dockerfileDestination = "$project.buildDir/docker/Dockerfile"
    def filesToCopy = []

    File getDockerfileDestination() {
        project.file(dockerfileDestination)
    }

    def from(image = "java") {
        dockerfileInfo += "FROM $image\r\n"
    }

    def maintainer(contact) {
        dockerfileInfo += "MAINTAINER $contact\r\n"
    }

    def add(sourceLocation, targetLocation) {
        filesToCopy << sourceLocation
        def file = project.file(sourceLocation)
        dockerfileInfo += "ADD $file.name ${targetLocation}\r\n"
    }

    def run(command) {
        dockerfileInfo += "RUN $command\r\n"
    }

    def volume(path) {
        dockerfileInfo += "VOLUME $path\r\n"
    }

    def env(var, value) {
        dockerfileInfo += "ENV $var $value\r\n"
    }

    def expose(port) {
        dockerfileInfo += "EXPOSE $port\r\n"
    }

    def workdir(dir) {
        dockerfileInfo += "WORKDIR $dir\r\n"
    }

    def cmd(command) {
        dockerfileInfo += "CMD $command\r\n"
    }

    def entrypoint(command) {
        dockerfileInfo += "ENTRYPOINT $command\r\n"
    }

    @TaskAction
    def writeDockerfile() {
        // Copy each referenced file into the docker build directory next to the Dockerfile.
        for (fileName in filesToCopy) {
            def source = project.file(fileName)
            def target = project.file("$dockerDir/$source.name")
            target.parentFile.mkdirs()
            target.delete()
            target << source.bytes
        }
        // Write out the accumulated Dockerfile directives.
        def file = getDockerfileDestination()
        file.parentFile.mkdirs()
        file.write dockerfileInfo
    }
}
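Assuming dockerPort, jmxPort, and boonVersion are defined as project properties (for example in a gradle.properties file, which is not shown in these notes), building and running one of the subproject images would presumably look like:
$ gradle :sample-web-server:buildDockerImage
$ gradle :sample-web-server:runDockerImage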