Checking for Artifactory in a Jenkins Pipeline

One of my projects uses Artifactory as its repository manager. Unfortunately, when running a Jenkins pipeline build I sometimes forget to check that the Artifactory server is up first, and the job fails after running for a while.

I’ve added a script to my Jenkinsfile that checks for the Artifactory server early on and fails fast if it is not running.

Artifactory Check

For my purposes I just try to ping the Artifactory server.
This can be done by sending an HTTP request to:
http://[Your Artifactory URL]/artifactory/api/system/ping
which, if successful, returns the string ‘OK’.
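
You can test the endpoint outside Jenkins with a quick curl (substituting your own server URL):

curl http://[Your Artifactory URL]/artifactory/api/system/ping
# prints 'OK' when the server is up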

Jenkins Pipeline example

This particular example requires the HTTP Request plugin to be installed. I added a declarative stage for the Artifactory check to the pipeline, before the actual build.

In this stage, the HTTP request call from the plugin succeeds if the response status code is in the default range (100 to 399) and the response content includes the string ‘OK’. If the response does not fulfil these conditions, the Jenkins job fails quickly.

pipeline {
  agent any
  options {
    // Stop the build early in case of compile or test failures
    skipStagesAfterUnstable()
  }
  environment {
    // Artifactory server URL (defined at pipeline level so all stages can use it)
    artifactoryUrl = 'http://[Your Artifactory URL]/artifactory'
  }
  stages {
    // stage to check the Artifactory server is up, else fail the job
    stage('Artifactory check') {
      steps {
        script {
          echo 'Pinging Artifactory'

          // for a successful ping, the response status code must be in the default
          // acceptable range (100:399) and the content must contain 'OK'
          def pingResponse = httpRequest url: "${artifactoryUrl}/api/system/ping", validResponseContent: 'OK'

          echo "Ping response status code: ${pingResponse.status}"
          echo "Ping response: ${pingResponse.content}"
        }
      }
    }

    stage('Build') {
      steps {
        // the actual build steps go here
        echo 'Building'
      }
    }

    // continue with other stages for the job
  }
}

Alternatively, if you don’t want the Artifactory check to fail the job, just change the parameters of the HTTP request to allow all response status codes and any response content. Then if the ping fails, you can instead set a flag, send a notification, print some info, etc., and let the job continue.

stage('Artifactory check') {
  steps {
    script {

      // Allow all response codes from the Artifactory ping request so it doesn't fail the job;
      // the normal allowable codes are 100:399.
      def pingResponse = httpRequest url: "${artifactoryUrl}/api/system/ping", validResponseCodes: '100:599'

      echo "Ping response status code: ${pingResponse.status}"
      echo "Ping response: ${pingResponse.content}"

      if (pingResponse.status == 200 && pingResponse.content == 'OK') {
        // flag successful check
        env.ARTIFACTORY_UP = 'true'
      } else {
        // flag ping failure
        env.ARTIFACTORY_UP = 'false'
      }
    }
  }
}
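
A later stage can then consult that flag, for example to skip publishing when Artifactory is down. A minimal sketch (the stage name and the ARTIFACTORY_UP flag are just the illustrations used above, not anything the plugin provides):

stage('Publish to Artifactory') {
  // only run this stage if the earlier ping succeeded
  when {
    expression { env.ARTIFACTORY_UP == 'true' }
  }
  steps {
    // ... publish artifacts ...
    echo 'Publishing'
  }
}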

Pluggable Tools with Docker Data Containers

Some apps have a simple installation process. When using them with other applications in Docker, they can be installed in their own data volume container and used in a pluggable way.

The kind of apps I’m talking about are some Java apps (and, in fact, Java itself) that follow this installation process:

  1. Install the contents of the app into a single directory.
  2. Set an environment variable to point to the installation directory, e.g. XXX_HOME.
  3. Add the executables of the app to the PATH environment variable.

That’s it.

An example of an app installation that follows this pattern is Gradle:

  1. Uncompress the Gradle files from an archive to a directory.
  2. Set the environment variable GRADLE_HOME to point to the Gradle installation directory.
  3. Add GRADLE_HOME/bin to the PATH.
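
On a plain Linux box those three steps would look something like this (a sketch, assuming the archive is unpacked under /opt):

# 1. uncompress the Gradle archive to a directory
unzip -q gradle-2.2.1-bin.zip -d /opt

# 2. point GRADLE_HOME at the installation directory
export GRADLE_HOME=/opt/gradle-2.2.1

# 3. add the Gradle executables to the PATH
export PATH=${GRADLE_HOME}/bin:${PATH}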


Docker

Using Gradle as an example, here is a Dockerfile that installs it in a data volume container:

# Install Gradle as a data volume container.
#
# The app container that uses this container will need to set the Gradle environment variables:
# - set GRADLE_HOME to the Gradle installation directory
# - add the /bin directory under the Gradle directory to the PATH

FROM mini/base

MAINTAINER David Wong

# setup location for installation
ENV INSTALL_LOCATION /opt

# install Gradle version required
ENV GRADLE_VERSION 2.2.1

WORKDIR ${INSTALL_LOCATION}
RUN curl -L -O http://services.gradle.org/distributions/gradle-${GRADLE_VERSION}-bin.zip && \
    unzip -qo gradle-${GRADLE_VERSION}-bin.zip && \
    rm -rf gradle-${GRADLE_VERSION}-bin.zip
    
# to make the container more portable, the installation directory name is changed from the default
# gradle-${GRADLE_VERSION} to just gradle, with the version number stored in a text file for reference
# e.g. instead of /opt/gradle-2.2.1, the directory will be /opt/gradle

RUN mv gradle-${GRADLE_VERSION} gradle && \
    echo ${GRADLE_VERSION} > gradle/version
    
VOLUME ${INSTALL_LOCATION}/gradle

# echo to make it easy to grep (CMD in shell form already runs under /bin/sh -c)
CMD /bin/echo 'Data container for Gradle'

(From GitHub: https://github.com/davidwong/docker/blob/master/gradle/Dockerfile)

Build the image and container from the Dockerfile. Here I’ve tagged the image with the version number of the Gradle installation, and named the container gradle-2.2.1.


docker build -t yourrepo/gradle:2.2.1 .

docker run -i -t --name gradle-2.2.1 yourrepo/gradle:2.2.1

A few things to note about this installation:

  • I have changed the directory name where Gradle is installed from the default, removing the version number in order to make it generic.
  • No environment variables have been set; that will be done later.
  • You can use any minimal image as the basis for the container; it just needs curl or wget in order to download the Gradle archive file.

Now we have the Gradle installation in a Docker data volume that can be persisted and shared by other containers.
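
If you want to verify this, inspecting the container shows the volume and where Docker has stored it on the host (the exact field names vary between Docker versions):

# the volume and its host location appear in the inspect output
docker inspect gradle-2.2.1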

You can then repeat this process with different versions of Gradle to create separate data containers for each version (of course giving the containers different names, e.g. gradle-2.2.1, gradle-1.9, etc.).
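
For instance, to create the gradle-1.9 container used below, you could change the GRADLE_VERSION value in the Dockerfile to 1.9 and rebuild (a sketch; the repository name is a placeholder, as above):

docker build -t yourrepo/gradle:1.9 .

docker run -i -t --name gradle-1.9 yourrepo/gradle:1.9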

Use Case

I originally got this idea when I was running my Jenkins CI Docker container. Some of the Jenkins builds required Gradle 2.x while others were using Gradle 1.x.

So instead of building multiple Jenkins + Gradle images for the different versions of Gradle required, I can now just run the Jenkins container with the appropriate Gradle data container. This is done by using --volumes-from to get access to the Gradle installation directory and setting the required environment variables.

To use the data container with Gradle 2.2.1 installed:

docker run -i -t --volumes-from gradle-2.2.1 -e GRADLE_HOME=/opt/gradle -e PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/gradle/bin myjenkins

To use the one with Gradle 1.9:

docker run -i -t --volumes-from gradle-1.9 -e GRADLE_HOME=/opt/gradle -e PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/gradle/bin myjenkins
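
Once inside the Jenkins container, you can check that the pluggable installation is wired up (the version file is the one written by the Dockerfile above):

echo $GRADLE_HOME          # /opt/gradle
cat $GRADLE_HOME/version   # 2.2.1 or 1.9, depending on the data container
gradle -v                  # confirms Gradle is on the PATH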

Of course there are limitations to this technique, since Docker data volume containers were designed to share persistent data rather than application installs. In particular, they do not allow sharing of environment variables, which is why they have to be passed with -e above.

However, this workaround can be useful in some circumstances.