Gradle Dependencies for Java: compile or implementation?

While I was explaining to a colleague how to use Gradle for Java projects (he was moving away from Maven), we came across various code samples. Some of the examples used the compile configuration for dependencies, while others used implementation and api.

dependencies {
    compile 'commons-httpclient:commons-httpclient:3.1'
    compile 'org.apache.commons:commons-lang3:3.5'
}

dependencies {
    api 'commons-httpclient:commons-httpclient:3.1'
    implementation 'org.apache.commons:commons-lang3:3.5'
}

This post is a summary, based on the documentation and StackOverflow questions, that I put together to explain to him which configurations to use.

New Dependency Configurations

Gradle 3.4 introduced the Java-library plugin, which included the then-new configurations implementation and api (amongst others). These were meant to replace the compile configuration, which was deprecated for this plugin. The idea was that the new configurations would help prevent the leaking of transitive dependencies in multi-module projects.

Please note that in this post I am just using the compile vs implementation/api configurations as an example. Other new replacement configurations were introduced as well; please read the documentation for further information.


For a Java project using Gradle 3.4+, it depends on whether you are building an application or a library.

For a library project, or a library module in a multi-module project, it is recommended to use the Java-library plugin, so in build.gradle use this

apply plugin: 'java-library'

instead of

apply plugin: 'java'

Then you would use either implementation or api, depending on whether you want to expose the dependency to consumers of the library.
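
As a sketch, a library module's build.gradle might look like this (reusing the dependencies from the earlier snippet; which configuration each dependency belongs in depends on your library's own code):

apply plugin: 'java-library'

dependencies {
    // part of the library's public API: visible on consumers' compile classpath
    api 'commons-httpclient:commons-httpclient:3.1'

    // an internal implementation detail: hidden from consumers at compile time,
    // but still available on their runtime classpath
    implementation 'org.apache.commons:commons-lang3:3.5'
}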

For a plain application project, you can stick with the java plugin and continue to use the compile configuration. Having said that, I have tried using the Java-library plugin for an app project and it seems to work fine.


For an Android project, the new configurations came with the Android Gradle Plugin 3.0, so unless you are still using a 2.x version of Android Studio / the Android Gradle plugin, the use of compile is deprecated and you should use implementation, even for an app.
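
For example, an app module's build.gradle would declare its dependencies like this (reusing a dependency from the first snippet above):

dependencies {
    implementation 'org.apache.commons:commons-lang3:3.5'
}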

In fact, when I recently upgraded my Android Studio, it came up with the message:

Configuration ‘compile’ is obsolete and has been replaced with ‘implementation’.
It will be removed at the end of 2018

I think this also applies if you use Kotlin instead of Java.


How about a project with Groovy as well as Java? This can be for a mixed Groovy / Java project, or for a Java project which needs Groovy for some support tools (such as Spock or Logback configuration).

In the past I have used the Groovy plugin instead of the Java plugin for mixed projects. The Groovy plugin extends the Java plugin and will handle the compilation for Java sources as well as Groovy sources.

apply plugin: 'groovy'

You can continue to do this for Java application modules, but the documentation states that the Groovy plugin has compatibility issues with the Java-library plugin, so you will need a workaround for library modules.

Of course this short post is for newbies, and has only scratched the surface in terms of learning about all the new dependency configurations.


JRebel for a Gradle Spring Boot App

There is some documentation on how to add JRebel to a Spring Boot app that uses Gradle as the build tool. It is basic but works fine.

All you have to do is to add a few lines to build.gradle:

if (project.hasProperty('rebelAgent')) {
    bootRun.jvmArgs += rebelAgent
}

Then set the property in gradle.properties:

rebelAgent=-agentpath:[path/to/JRebel library]

However there are several ways to improve on this.

Make JRebel Optional

For instance, what if you don’t always want JRebel every time you start the app with ‘bootRun’? JRebel plugins for IDEs like IntelliJ IDEA are smart enough to give you the option of running your app with or without JRebel.

There would be several ways of doing this, but one would be to add the JRebel startup configuration in an optional task.

task addRebelAgent << {
  if (project.hasProperty('rebelAgent')) {
    bootRun.jvmArgs += rebelAgent
  } else {
    println 'rebelAgent property not found'
  }
}

task rebelRun(dependsOn: ['addRebelAgent', 'bootRun'])

Now running ‘bootRun’ would start the app normally, and if you want JRebel then use the ‘rebelRun’ task instead. I have also added a debug message if the ‘rebelAgent’ property is not available.
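
So to start the app with JRebel attached, run the new task:

gradle rebelRun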

Another way would be to pass an optional property to the ‘bootRun’ task to use as a flag whether to add JRebel or not.

if (project.hasProperty('rebelAgent') &&
    project.hasProperty('addJRebel')) {
    bootRun.jvmArgs += rebelAgent
}

Then to use JRebel you just need to add the extra property.

gradle bootRun -PaddJRebel=true

Finding the Rebel Base

Putting the path to the JRebel agent library in a properties file allows each developer to have their own value. However the path is still hard-coded, which is something that should be avoided if possible.

Another way to specify the path is to use a system environment variable to point to where JRebel is installed. The JRebel documentation recommends using REBEL_BASE. Once set up, that allows you to use the environment variable in multiple ways, e.g. in Gradle build files, on the command line, in build scripts, etc.
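
For example, on a Windows machine you might set it once with something like this (the installation path here is just an example):

setx REBEL_BASE "C:\tools\jrebel"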

Here is an example, using the additional ‘addRebelAgent’ task from earlier, that I use on my Windows 64-bit machine.

task addRebelAgent << {
  project.ext {
    rebelAgent = "-agentpath:${System.env.REBEL_BASE}${rebelLibPath}"
  }
  if (project.hasProperty('rebelAgent')) {
    bootRun.jvmArgs += rebelAgent
  } else {
    println 'rebelAgent property not found'
  }
}

task rebelRun(dependsOn: ['addRebelAgent', 'bootRun'])

And in gradle.properties I have specified the path to the agent library, relative to the JRebel installation location.
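
For example (the library name and its location vary with the JRebel version and platform, so treat this as illustrative):

rebelLibPath=/lib/jrebel64.dll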


All I’ve done here is to build the path in the ‘rebelAgent’ property from the REBEL_BASE environment variable and another property specifying the internal path to the library.

rebelAgent = "-agentpath:${System.env.REBEL_BASE}${rebelLibPath}"



Pluggable Tools with Docker Data Containers

There are some apps that have a simple installation process. When using them with other applications in Docker, they can be installed in their own data volume container and used in a pluggable way.

The kind of apps I’m talking about are some Java apps (and in fact, Java itself) which follow this installation process:

  1. Install the contents of the app into a single directory
  2. Set an environment variable to point to the installation directory, e.g. XXX_HOME
  3. Add the executables of the app to the PATH environment variable


That’s it.

An example of an app installation that follows this pattern is Gradle:

  1. Uncompress the Gradle files from an archive to a directory.
  2. Set the environment variable GRADLE_HOME to point to the Gradle installation directory
  3. Add GRADLE_HOME/bin to the PATH
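
In shell terms (the version number and paths here are just an example), those steps amount to something like:

unzip gradle-2.2.1-bin.zip -d /opt
export GRADLE_HOME=/opt/gradle-2.2.1
export PATH=$PATH:$GRADLE_HOME/bin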



Using Gradle as an example, here is a Dockerfile that installs it in a data volume container:

# Install Gradle as a data volume container. 
# The app container that uses this container will need to set the Gradle environmental variables.
# - set GRADLE_HOME to the gradle installation directory
# - add the /bin directory under the gradle directory to the PATH

FROM mini/base

# setup location for installation
WORKDIR /opt

# install Gradle version required
ENV GRADLE_VERSION 2.2.1

RUN curl -L -O https://services.gradle.org/distributions/gradle-${GRADLE_VERSION}-bin.zip && \
    unzip -qo gradle-${GRADLE_VERSION}-bin.zip && \
    rm gradle-${GRADLE_VERSION}-bin.zip

# to make the container more portable, the installation directory name is changed from the default
# gradle-${GRADLE_VERSION} to just gradle, with the version number stored in a text file for reference
# e.g. instead of /opt/gradle-2.2.1, the directory will be /opt/gradle

RUN mv gradle-${GRADLE_VERSION} gradle && \
    echo ${GRADLE_VERSION} > gradle/version

# expose the installation directory as a data volume so other containers can mount it
VOLUME ["/opt/gradle"]

# echo to make it easy to grep
CMD /bin/sh -c '/bin/echo Data container for Gradle'

(The original Dockerfile is available on GitHub.)

Build the image and container from the Dockerfile. Here I’ve tagged the image with the version number of the Gradle installation, and named the container gradle-2.2.1.

docker build -t yourrepo/gradle:2.2.1 .

docker run -i -t --name gradle-2.2.1 yourrepo/gradle:2.2.1

A few things to note about this installation:

  • I have changed the directory name where Gradle is installed from the default, by removing the version number in order to make it generic.
  • No environment variables have been set; that will be done later.
  • You can use any minimal image as the basis for the container; it just needs curl or wget to download the Gradle archive file (plus unzip to unpack it).

Now we have the Gradle installation in a Docker data volume that can be persisted and shared by other containers.

You can then repeat this process with different versions of Gradle to create separate data containers for each version (of course giving the containers different names, e.g. gradle-2.2.1, gradle-1.9, etc).
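
For example, after changing GRADLE_VERSION in the Dockerfile to 1.9, the second data container would be built and created in the same way:

docker build -t yourrepo/gradle:1.9 .

docker run -i -t --name gradle-1.9 yourrepo/gradle:1.9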

Use Case

I originally got this idea when I was running my Jenkins CI docker container. Some of the Jenkins builds required Gradle 2.x while others were using Gradle 1.x.

So instead of building multiple Jenkins + Gradle images for the different versions of Gradle required, I can now just run the Jenkins container with the appropriate Gradle data container. This is done by using --volumes-from to get access to the Gradle installation directory and setting the required environment variables.

To use the data container with Gradle 2.2.1 installed:

docker run -i -t --volumes-from gradle-2.2.1 -e GRADLE_HOME=/opt/gradle -e PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/gradle/bin myjenkins

To use the one with Gradle 1.9:

docker run -i -t --volumes-from gradle-1.9 -e GRADLE_HOME=/opt/gradle -e PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/gradle/bin myjenkins

Of course there are limitations to this technique, since Docker data volume containers were designed to share persistent data rather than application installs. In particular, they do not allow sharing of environment variables.

However, this workaround can be useful in some circumstances.

Backup a Docker Data Container with Fig

I have been using data volume containers to persist data in Docker containers. There are various reasons why this tends to be a better option than just using data volumes, but probably the most important is portability.

Of course we now have to back up the data in the data containers. This can be for archiving, or when the containers that use the data need to be upgraded or recreated. If your backup requirements are simple, you can just use the docker cp command or something like tar.
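
For example, with the data container named jenkins-data (created in the example below), a straight copy of the volume to the host would look something like this:

docker cp jenkins-data:/var/jenkins_home ./jenkins_home_backup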

A Jenkins example

As a simple example, let’s run a Jenkins server in a docker container and use a data volume container to persist its data.

1. Pull or build a Jenkins image from the official repository.

2. The Jenkins image uses the directory /var/jenkins_home as the volume to store its data, so we need a data volume container for that volume. Here is a sample of a Dockerfile to build the data container:
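
A minimal sketch of what that Dockerfile could look like (assuming the same kind of small base image used for the Gradle data container earlier; any minimal image will do):

FROM mini/base

# declare the Jenkins home directory as a data volume so other containers can mount it
VOLUME ["/var/jenkins_home"]

# echo to make it easy to grep
CMD /bin/sh -c '/bin/echo Data container for Jenkins'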

Build and tag the image from the Dockerfile.

docker build -t your_repository:jenkins-data .

You can now create the data container, giving it a name for convenience. Optionally, we can run the docker ps command afterwards to check that the container has been created; it should be in a stopped state.

docker run -i -t --name jenkins-data your_repository:jenkins-data
docker ps -a

3. Run the Jenkins server with the data container attached and make some changes, e.g. create a job, etc. The Jenkins data volume should have your changes in it now.

docker run --name=jenkins-sample -p 8080:8080 --volumes-from=jenkins-data jenkins

4. For this example we will use tar to back up the data container, using this command to create a temporary container that accesses the data container.

docker run --rm --volumes-from jenkins-data -v $(pwd):/backup busybox tar cvf /backup/jenkins_backup.tar /var/jenkins_home

There should now be a file jenkins_backup.tar in the current directory. Of course for real usage, we would probably run this command from a script and make it generic to be able to backup any data volume container.
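
A sketch of such a script (the script name and argument handling are just illustrative):

#!/bin/sh
# backup_data_container.sh <data-container-name> <volume-path>
# e.g. ./backup_data_container.sh jenkins-data /var/jenkins_home
NAME=$1
VOLUME=$2
docker run --rm --volumes-from "$NAME" -v "$(pwd)":/backup busybox \
    tar cvf /backup/${NAME}_backup.tar "$VOLUME"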

I do give a fig …

Something else I use for development with Docker is the orchestration tool Fig (this has saved me a lot of typing!). So here is an example of doing the same backup on the Jenkins data container using Fig.

1. Create a Fig YAML file (fig.yml), using the same information that we used in the backup command.
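
For example, a fig.yml along these lines (assuming Fig passes the existing jenkins-data container name straight through to --volumes-from; the service name is just illustrative):

jenkinsbackup:
  image: busybox
  volumes_from:
    - jenkins-data
  volumes:
    - .:/backup
  command: tar cvf /backup/jenkins_backup.tar /var/jenkins_home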

2. Run Fig, that’s it!

fig up

This is a simple example that has only scratched the surface of what can be done with Docker (and Fig). If the backup requirements for the data are more complex, then you could also consider creating a dedicated container just for doing backups, with all the required tools installed in it.

The great thing about Docker is that once everything has been setup, you can get applications such as Jenkins up and running very quickly.

Another Defection to Android Studio

Like many other developers out there, I have been using Eclipse as my main IDE for many years now. However for Android development I have decided to take the plunge and migrate to Android Studio (especially since it has finally been released).

Here is a blog post I found that closely echoes what I have long thought regarding the issues with Eclipse:

Build, build, build

For me, another reason was that the Ant build files I was using to handle building different versions (free vs paid, dev vs release, etc) were getting too complicated to manage easily. So I can now change over to Gradle at the same time, since that’s what Android Studio uses by default.

Gradle has the concept of build variants to handle building different versions of an Android app.
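
As a sketch (the flavor names and application ids here are just illustrative), the free/paid split maps onto product flavors in the module's build.gradle:

android {
    productFlavors {
        free {
            applicationId 'com.example.myapp.free'
        }
        paid {
            applicationId 'com.example.myapp.paid'
        }
    }
}

Combined with the standard debug and release build types, this gives the variants freeDebug, freeRelease, paidDebug and paidRelease.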

The Recurring Eclipse Re-install

Here are some other problems that I personally have had with using Eclipse.

  • Plugins, well not the plugins themselves, but having too many plugins. I’ve found that having lots of plugins in one Eclipse installation can cause Eclipse to misbehave, especially after several updates. There are several ways I use to get around this:
    • Keep separate Eclipse installations for different types of development, e.g. one for Java, one for Android, one for Cloud, etc. Therefore each installation will only have a few plugins relevant to the type of development. However this is not always convenient if a project does require multiple types of development.
    • Every so often, when Eclipse starts to play up, do a fresh re-install of Eclipse (along with the latest version of the plugins required).
  • Intermittent miscellaneous bugs, e.g. cut and paste stops working, builds not always done automatically, etc. A lot of these issues are more of a nuisance than a serious problem, but all the same they tend to kill your productivity (and isn’t that why we use IDEs in the first place?).

No Pain, No …

Make no mistake, despite what the Android Studio documentation might try to tell you, migrating a non-trivial project will take some time and probably involve some pain. But it is worth the effort, I think.