Setup JRebel with Tomcat and Docker

It’s fairly straightforward to install JRebel on a local instance of Tomcat; here is one way of installing it on Tomcat running in a Docker container instead. This article assumes a basic knowledge of using Docker.

For this particular example I’m using:

  • the Eclipse IDE installation of JRebel
  • the ‘official’ Tomcat 8 image from Docker Hub

Install JRebel in the IDE

I’m using the Eclipse IDE, but there are instructions on the ZeroTurnaround website on using a different IDE or for installing it standalone.

1. For Eclipse, follow these instructions just to install and activate JRebel for the IDE:

https://zeroturnaround.com/software/jrebel/quickstart/eclipse/#!/server-configuration

2. Next we need the JRebel agent (jrebel.jar), which we will install into Tomcat.

You can either get this from the JRebel plugin you have just installed into Eclipse (look for the section titled ‘Where do I find jrebel.jar?’);

http://zeroturnaround.com/software/jrebel/learn/remoting/eclipse/

OR you can get it from an archive:

https://zeroturnaround.com/software/jrebel/download/prev-releases/

(Note that for Tomcat 8, please use the legacy version of jrebel.jar which is found in the lib sub-directory of the zip archive.)

Install JRebel in the Application Server

1. Get the base Tomcat Docker image from Docker Hub.

docker pull tomcat:xxx

Here xxx is the specific version of Tomcat you want to use as the base image, e.g. 8.0.23-jre7, 8-jre8, etc. You can find the list of available tags in the Tomcat Docker repository.

2. Since we are using Docker to run the application server, we will need to run JRebel in remote mode. There are generic instructions on JRebel remoting, which we can adapt to a Docker environment. So what we want to do is create a custom Docker image, based on the Tomcat image, which incorporates the JRebel configuration.

2.1 Create an empty directory and copy the JRebel agent jrebel.jar to it.

2.2 Create a Dockerfile to build your custom Tomcat image, for example:
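Here is a minimal sketch of what the Dockerfile could look like (this assumes the tomcat:8-jre8 base image; the rebel.remoting_plugin system property comes from the JRebel remoting instructions, so check it against the JRebel version you are using):

FROM tomcat:8-jre8

# copy the JRebel agent into the image
ADD jrebel.jar /jrebel/

# load the agent into the JVM and enable JRebel remote mode
ENV CATALINA_OPTS="-javaagent:/jrebel/jrebel.jar -Drebel.remoting_plugin=true"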

Note that for simplicity, I have just added the JRebel agent to the directory /jrebel. You can use a different directory, as long as the -javaagent configuration can find it.

Also you can take this opportunity to do further customizations on the Tomcat server, e.g. if you want to add your list of users, then copy your version of tomcat-users.xml to the Tomcat config directory by adding this line to the Dockerfile:

ADD tomcat-users.xml /usr/local/tomcat/conf/

2.3 Build and run the customized Tomcat server (using your own repository name, image name and container name to replace the values in this example).

docker build -t your_repository/tomcat-jrebel .

docker run -i -t -d --name mytomcat -p 8080:8080 your_repository/tomcat-jrebel

We can verify that the JRebel configuration has been included in Tomcat by checking the startup logs.

docker logs mytomcat

We should be able to see the JRebel version and licensing information.

2015-05-22 10:38:40 JRebel:  #############################################################
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  JRebel Legacy Agent 6.2.0 (201505201206)
2015-05-22 10:38:40 JRebel:  (c) Copyright ZeroTurnaround AS, Estonia, Tartu.
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  Over the last 1 days JRebel prevented
2015-05-22 10:38:40 JRebel:  at least 0 redeploys/restarts saving you about 0 hours.
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  Server is running with JRebel Remoting.
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  #############################################################

Tip: Build Your Own
Of course you can combine these two steps for creating a custom image into one, by creating your own Tomcat image from scratch instead of using the ‘official’ Tomcat image as a base.

Configure the IDE

Finally we need to configure Eclipse to work with the Tomcat server that we have running in Docker. You can do that by following these instructions.

This is a brief summary of the steps:

  1. In Eclipse, right-click on your project, select JRebel -> Add JRebel Nature
  2. Right-click on your project again, select JRebel -> Enable remote server support
  3. Right-click on your project again, select JRebel -> Advanced Properties
  4. In the dialog that pops up, click on “Edit” button next to the “Deployment URLs” text box
  5. Click on “Add” and enter the URL of the application; it will be something like “http://your_docker_host:8080/app_name”
  6. Click on “Continue”, “Apply”, and then “OK”.

Once the app is deployed, any changes you make in the IDE should now be reflected in the server running in the docker container.

No restarts, no redeploys, just code.

Spring Profiles and Reusable Tests, Part 2

In the first part of this post, we used Spring profiles to inject a specific implementation of the class under test into our test cases. Now we will try to create a reusable test case by using Spring dependency injection and Spring profiles to inject the appropriate expected results when testing a particular implementation.

Firstly we need to get the expected test results into a format that can be injected by Spring DI into a test case.

public interface FormatResults {

  public String getExpectedResult(String testMethodName);
}

public abstract class BaseFormatResults implements FormatResults {

  private Map<String, String>  results;

  public BaseFormatResults()
  {
    results = new HashMap<String, String>();

    setUpResults();
  }

  protected abstract void setUpResults();

  protected void addResult(String testMethodName, String result)
  {
    results.put(testMethodName, result);
  }

  @Override
  public String getExpectedResult(String testMethodName)
  {
    return results.get(testMethodName);
  }
}

public class HelloResults extends BaseFormatResults {

  @Override
  protected void setUpResults()
  {
    addResult("testDave", "Hello Dave");
  }
}

public class GoodByeResults extends BaseFormatResults {

  @Override
  protected void setUpResults()
  {
    addResult("testDave", "Good Bye Dave");
  }
}

For the sample code, a class is created as a wrapper around a map, which stores the expected test results for a test case. The expected result for each test method is keyed by the test method name in the map. Then some methods are included for adding and retrieving these expected results.

I have also made this wrapper class implement an interface and be abstract so that it can be subclassed to add the expected results for a particular test class.

Next we refactor the Spring JavaConfig classes to include these result classes, so that they can be injected into the test case along with the implementation of the class to test.

@Configuration
public class CommonTestConfig {

  @Autowired
  LogTestConfiguration logTestConfiguration;

  @Bean
  public Formatter formatter()
  {
    return logTestConfiguration.formatter();
  }

  @Bean
  public FormatResults results()
  {
    return logTestConfiguration.results();
  }

  @Bean
  public String testData()
  {
    return "Dave";
  }
}

public interface LogTestConfiguration {

  public Formatter formatter();

  public FormatResults results();
}

@Configuration
@Profile("hello")
public class HelloConfig implements LogTestConfiguration {

  @Bean
  public Formatter formatter()
  {
    return new HelloFormatter();
  }

  @Bean
  public FormatResults results()
  {
    return new HelloResults();
  }
}

@Configuration
@Profile("goodbye")
public class GoodByeConfig implements LogTestConfiguration {

  @Bean
  public Formatter formatter()
  {
    return new GoodByeFormatter();
  }

  @Bean
  public FormatResults results()
  {
    return new GoodByeResults();
  }
}

An extra method is added to the ‘LogTestConfiguration’ interface to retrieve a result class. Then the extra @Bean method is added to the configuration classes, to get the appropriate result class based on the active Spring profile.

Lastly we again refactor the test case to use the updated configuration files.

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes={CommonTestConfig.class, HelloConfig.class, GoodByeConfig.class})
public class SpringProfileTest {

  @Autowired
  private Formatter formatter;

  @Autowired
  private FormatResults results;

  @Autowired
  private String testData;

  @Test
  public void testDave()
  {
    String result = formatter.format(testData);
    System.out.println("Result = " + result);

    String expected = results.getExpectedResult("testDave");

    assertEquals(expected, result);
  }

  // getters and setters left out for brevity ...
}

Here are the changes from the test case examples in the previous post:

  • there is now only one test case required to test either implementation, ‘HelloFormatter’ or ‘GoodByeFormatter’ (previously a separate test case was required for each)
  • the expected results for the tests are also now injected into the test case via Spring DI
  • all of the Spring configuration files are included in the @ContextConfiguration annotation
  • the @ActiveProfiles annotation has been left out; we now need to specify the active Spring profile externally to the test case. This can be done by setting the ‘spring.profiles.active’ property before running the test case, and there are various ways to do this.

For the sample code, if you are running the test case using the Eclipse IDE, then this could be set in the Run Configuration for the JUnit test class.

(Screenshot: Eclipse JUnit run configuration)
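For example, you could add this VM argument on the Arguments tab of the run configuration:

-Dspring.profiles.active=hello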

If you are using Gradle, then just set the property in the build script in the ‘test’ task.

test {
  systemProperties 'spring.profiles.active': 'hello'
}

The Sample Code

You can download the sample code for this example, SpringProfileTestingExample2.zip, from GitHub.

Note that I have kept the sample code very simple for the sake of brevity, and also to more clearly illustrate the point. For instance, you may need to add the @DirtiesContext annotation to the test classes if you don’t want the injected beans to be cached between tests.

Conclusion

Using Spring bean definition profiles, we can make test cases reusable in the particular scenario where we are testing different implementations of a class.

If you only have a few tests to run, then this setup with the Spring profiles would be overkill.

However for my project, where I had many test cases, it allowed me to reuse them without having to duplicate them for each implementation of a class under test. Of course, it would also allow me to use those same test cases to test any future implementations of the class too.

Spring Profiles and Reusable Tests, Part 1

When writing test cases for an update to an open source project I was working on, I came across the issue of how to make test classes reusable.

I already had dozens of test cases written for testing a particular class. Now I had written another implementation of that class, and I wanted to reuse the test cases I already had, to avoid duplication.

I decided to see if I could use Spring profiles to do this.

Spring Profiles

Spring bean definition profiles allow the wiring of different beans in different environments.

One use of Spring profiles in testing is where different test data or application configurations can be injected into test cases depending on the profile. There are various examples of this.

However my requirements were a bit different.

  • more specifically, I had various implementations of a class that logged an object graph and produced output in differing formats
  • I had many test cases written that would test a particular implementation of the logging class and then compare the generated logging with some expected output
  • I wanted to run the same set of tests with the same test data, but testing different implementations of the logging class

So rather than changing the test data, what I wanted to change was the class under test and the expected results using Spring DI. Using profiles would allow me to reuse the same tests and test data.

Of course, if there are only a few tests to run, this could also be achieved with parameterized tests, using JUnit 4 to pass in parameters for the test data and expected test results for each test.
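As a rough sketch of that alternative (for illustration only, using the Formatter classes introduced below):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class FormatterParameterizedTest {

  @Parameters
  public static Collection<Object[]> data() {
    return Arrays.asList(new Object[][] {
      { new HelloFormatter(), "Dave", "Hello Dave" },
      { new GoodByeFormatter(), "Dave", "Good Bye Dave" },
    });
  }

  private final Formatter formatter;
  private final String testData;
  private final String expected;

  public FormatterParameterizedTest(Formatter formatter, String testData, String expected) {
    this.formatter = formatter;
    this.testData = testData;
    this.expected = expected;
  }

  @Test
  public void testDave() {
    assertEquals(expected, formatter.format(testData));
  }
}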

Here are some simple code examples, showing one way to use Spring profiles in tests.

Step 1: A Simple Test with Spring

Note: Here I’m also using Spring’s JavaConfig instead of XML for the application contexts, but either would work.

Firstly let’s create some classes to test. Here are 2 classes that do some string formatting, both of which have a common interface:

public interface Formatter {

  public String format(String name);
}

public class HelloFormatter implements Formatter {

  @Override
  public String format(String name) {
    return "Hello " + name;
  }
}

public class GoodByeFormatter implements Formatter {

  @Override
  public String format(String name) {
    return "Good Bye " + name;
  }
}

Now let’s write a simple JUnit test case to test ‘HelloFormatter’, using Spring to inject the class under test and the test data.

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes={SimpleTestConfig.class})
public class SimpleTest {

	@Autowired
	private Formatter formatter;

	@Autowired
	private String testData;

	@Test
	public void testDave() {
		String result = formatter.format(testData);
		System.out.println("Result = " + result);

		String expected = "Hello Dave";

		assertEquals(expected, result);
	}

	// getters and setters left out for brevity ...
}

And here’s the JavaConfig used to wire the beans for the test.

@Configuration
public class SimpleTestConfig {

	@Bean
	public Formatter formatter()
	{
		return new HelloFormatter();
	}

	@Bean
	public String testData()
	{
		return "Dave";
	}
}

Now if we want to test the ‘GoodByeFormatter’ implementation class as well, we could just write another test case and another Spring JavaConfig class, but this would mean code duplication.

So instead let’s try using Spring profiles to inject the appropriate test class into the test cases.

Step 2 : Using Spring Profiles

Firstly let’s refactor the Spring JavaConfig classes.

@Configuration
public class CommonTestConfig {

	@Autowired
	LogTestConfiguration logTestConfiguration;

	@Bean
	public Formatter formatter()
	{
		return logTestConfiguration.formatter();
	}

	@Bean
	public String testData()
	{
		return "Dave";
	}
}

public interface LogTestConfiguration {

	public Formatter formatter();

}

@Configuration
@Profile("hello")
public class HelloConfig implements LogTestConfiguration {

	@Bean
	public Formatter formatter()
	{
		return new HelloFormatter();
	}
}

@Configuration
@Profile("goodbye")
public class GoodByeConfig implements LogTestConfiguration {

	@Bean
	public Formatter formatter()
	{
		return new GoodByeFormatter();
	}
}

Notice that we have split the configuration across three classes now:

  • there is one class that configures the beans required for all the test cases
  • there is one configuration file specifically for testing the ‘HelloFormatter’ class, and it has the annotation ‘@Profile(“hello”)’.
  • there is one configuration file specifically for testing the ‘GoodByeFormatter’ class, and it has the annotation ‘@Profile(“goodbye”)’.
  • the 2 configuration files that have profile annotations share a common interface so that they can, in turn, be injected into the common configuration file

This setup is based on the blog post SPRING 3.1 M1: INTRODUCING @PROFILE, which contains a more detailed explanation.

So here are the refactored test cases that use the new configuration files, one test class to test each formatter implementation class.

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes={CommonTestConfig.class, HelloConfig.class})
@ActiveProfiles("hello")
public class SpringProfileHelloTest {

	@Autowired
	private Formatter formatter;

	@Autowired
	private String testData;

	@Test
	public void testDave() {
		String result = formatter.format(testData);
		System.out.println("Result = " + result);

		String expected = "Hello Dave";

		assertEquals(expected, result);
	}

	// getters and setters left out for brevity ...
}

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes={CommonTestConfig.class, GoodByeConfig.class})
@ActiveProfiles("goodbye")
public class SpringProfileGoodbyeTest {

	@Autowired
	private Formatter formatter;

	@Autowired
	private String testData;

	@Test
	public void testDave() {
		String result = formatter.format(testData);
		System.out.println("Result = " + result);

		String expected = "Good Bye Dave";

		assertEquals(expected, result);
	}

	// getters and setters left out for brevity ...
}

Note that each test case class has the @ActiveProfiles annotation, so that only the appropriate test class is injected for testing. Also we have added the required configuration files to the @ContextConfiguration annotation.

The Sample Code

You can download the sample code for this example, SpringProfileTestingExample1.zip, from GitHub.

The requirements to run the sample code are:

Next

We have achieved half of what we set out to do, by using Spring profiles to determine which implementation of the class under test is used for each test case.

In the next part of this post, we will make further improvements so that we have one reusable test case able to test both test class implementations by injecting the expected test results as well.

Path Problems: Java (Android), Windows 64 and 2 Drives

I recently updated my installation of the Android SDK and ran into a problem that I’ve always had on my current computer – the SDK Manager would not run after the update.

This is due to a Java path problem for a particular setup:

  • a computer running 64-bit Windows (Windows 7 in my case)
  • a computer with more than one disk drive installed

I also have the 64-bit version of Java 6 installed on the computer (not Java 7, since I use it for Android development).

Android SDK Manager – it no work

To reiterate the issue: after installing or updating the Android SDK on a system with this setup, the SDK Manager may not be able to run.

When you try to start the SDK Manager, the batch file ‘android.bat’ in the ‘tools’ directory of the SDK installation is run. Inside, the batch file sets various path variables that point to the libraries and to the Java installation required to run the SDK Manager.

This is where I run into problems.

One issue is this line in ‘android.bat’:

for /f %%a in ('%java_exe% -jar lib\archquery.jar') do set swt_path=lib\%%a

Here it is looking for the file ‘swt.jar’ which is located in the SDK installation under ‘tools\lib\x86’ or ‘tools\lib\x86_64’ depending on whether you are running a 64-bit system.

If you have only one drive in your system, then this should work. But if, like me, you have two drives, then it may not work, and the variable swt_path is not set.

Then there is this line where it tries to find Java:

call lib\find_java.bat

This runs the batch file ‘find_java.bat’ to set the variable ‘java_exe’ to point to the path of java.exe in your Java installation.

for /f %%a in ('%~dps0\find_java.exe -s') do set java_exe=%%a

Once again in my system this doesn’t work and the batch file can’t find my Java executables.

By the way, when you try to do a build with Ant, it runs ‘platform-tools\dx.bat’, which also calls ‘find_java.bat’, so it runs into the same problem of not finding Java.

Why Does This Happen?

It seems this all has to do with the way Windows x64 handles the long and short format of file (and directory) names when working out paths.

Of course, this is for my particular setup. On your system it may all work fine – but you only have to do a quick google to see that many others have similar problems.

If you have this issue, then it’s worth checking for more common errors first, e.g.

  • the appropriate version of Java is installed correctly
  • JAVA_HOME and PATH environment variables have been configured

Other Java Programs – GMote

I was recently reminded of all this when I was setting up my HTPC. This is a Windows 7 64-bit PC, once again with 2 drives (one for applications and the other for storing media files).

I was trying out GMote (an Android remote control app for playing media on a computer). This has a server component which runs on the computer and is a Java program.

After doing the default installation, I tried to run the GMote server and kept getting the error “Unable to load library ‘libvlc'”. Even after checking that the ‘libvlc’ library was in a folder on the PATH environment variable, the error kept occurring.

It took me a little while to find the problem with getting the GMote server to run:

  • I had the 64-bit version of Java installed, so I installed the 32-bit Java and pointed the JAVA_HOME environment variable to it.
  • Since the path to the support library was under the default installation of GMote server at “C:\Program Files (x86)\…”, I re-installed the GMote server to another directory whose name did not contain any spaces, “C:\media\…”.

Finally it worked.

Why Two Drives?

At this point you might ask why do I have 2 drives on my computers?

I tend to install the system and application files on one drive, and data files on the other. I find this convenient for various reasons:

  • Backup – I can use different backup strategies on the different drives, e.g. do a disc image backup of the system and application drive and some other type of backup on the data that I want on the other drive. I also point the TEMP environment variable to the data drive so that temporary files don’t clog up my image backups.
  • Performance – use an SSD for the operating system and applications drive for faster booting and app loading. (Of course it would be nice to have an SSD for the data drive as well, but if you have a lot of data then this would be very expensive!)

The Moral of the Story …

So in this particular setup of trying to run Java programs on 64-bit Windows on a computer with multiple disk drives, I follow these guidelines:

  • check which version of Java is required by your program (64-bit or 32-bit)
  • check whether it requires the JDK or JRE
  • install Java programs in a path that doesn’t have spaces in the directory names, as you might do on a Unix or Linux system
  • point your JAVA_HOME environment variable to the appropriate Java installation (you can do this in a batch file before launching the Java program, so that the setting is local and doesn’t affect other Java apps – see the sketch below)
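For example, a minimal launcher batch file along those lines (the paths here are hypothetical; substitute your own Java installation and program):

@echo off
rem Point JAVA_HOME at the Java version this program needs.
rem setlocal keeps the change local to this script, so other
rem Java apps are not affected.
setlocal
set "JAVA_HOME=C:\java\jre6-32bit"
set "PATH=%JAVA_HOME%\bin;%PATH%"
java -jar C:\apps\yourprogram\yourprogram.jar
endlocal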

So how did I resolve the Android SDK issue on my system? If you google this issue, there are various suggestions as to how to fix it. These include things like digging into the Windows and Java internals, registry hacks, etc.

For me, I just took the easy route of editing android.bat to point the variables to the correct paths on my particular system. I know this is not the best way to deal with it, but after many frustrating hours of research, configuration and testing, I wanted to get things up and running quickly.
It’s a shame you have to waste so much time digging into Android issues to find the causes (and if you’re lucky, the solutions) of these problems. It’s not like running Windows x64 or having multiple drives on a computer is all that unusual.

But then the Android build tools have always been a bit flaky, so I guess it’s part of being an Android developer.

Advanced Testing Tips for Android, Part 2

In part 1 of this article, I discussed some of the merits of Android testing on the emulator and rooted devices. In this concluding part, I’ll have a look at the software side of things.

Tracing

It is often useful to include tracing code while developing; one common usage is to trace the entry and exit of method calls, for example.

However it is generally recommended to remove tracing (and debug) logging from releases. This can be a hassle if these trace log statements are sprinkled throughout your code. If you are using ProGuard to obfuscate your release version, then that is one option you can use to disable the tracing.

Another way is to use an AOP library like AspectJ to handle tracing instead (there are various examples of how to do this in Android available on the internet, just google it).

Some advantages of using AspectJ to handle tracing are:

  • The AspectJ tracing code can be put in a separate library project, which is only included in development builds (i.e. leave the library out of the build script for the release build).
  • Use of pointcuts is more configurable, as they can be used to trace as much or as little of your code as you require. This is useful for targeting the tracing at areas of your code when checking for errors.

Generally I would leave general method tracing disabled, but it is sometimes useful when running automated tests, e.g. when doing Continuous Integration or running Monkey, to trace the method flow in case there are crashes or other problems.

Monkey

This is another useful testing tool, but sometimes there are sections of your code that you don’t want to be run while testing with Monkey.

In this case, you can disable the functionality in those sections of code when testing with Monkey, by using ActivityManager.isUserAMonkey().

if (!ActivityManager.isUserAMonkey()) {
    // only run if not running in Monkey
    doSomething();
}

In my case I used this because the app I was working on had some functionality to run other apps on the device. If this code was run during Monkey testing, then Monkey would continue to run on the secondary app rather than the main app that I wanted to test.

Another example of its usage is in the Android API demos (to stop Monkey running any ‘dangerous’ code):
http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/DeviceAdminSample.html

Accessory Apps for Testing

To assist in testing, it may be useful to write some additional apps to handle various accessory tasks:

  • configuration of the device before running tests
  • loading test data – for instance, I wrote a simple app to load test contacts and SMS messages
  • testing for failures in intent passing, e.g. if the main app runs other apps using intents, you could use intents to run an app that deliberately fails, to see how the main app handles it

Using load-time AOP to help testing and debugging

Reading the blog post ‘Practical Introduction into Code Injection with AspectJ, Javassist, and Java Proxy‘ reminded me how handy AOP tools like AspectJ are for testing and debugging applications – particularly on servers.

How many times have you encountered a problem or bug on a staging (or common test or reference) server that you can’t reproduce on your own dev server? Ideally deployments should be consistent and reproducible so that this never happens, but unfortunately in real projects that is not always the case.

Various issues can cause these differences:

  • manual configurations to the build process
  • continuous integration not used rigorously
  • different environmental settings on different server instances
  • different external services used in development, as opposed to a staging environment

Factors like these may lead to inconsistent builds or differing behaviour in different server environments.

I’ve found that using load-time weaving with AspectJ can be useful for testing and debugging application code on a staging server, without requiring any source code changes. The good thing is that because it doesn’t change the actual build artifacts on the file system (of course the code is changed when the affected classes are loaded!), you can remove the load-time weaving at any time and revert back to the original code without having to do another build.

Here are some examples of what can be done to assist testing and debugging on a server that is not directly controlled by the developer. I’m assuming here that you are able to implement load-time weaving on the server.

If you are not familiar with AOP and AspectJ, please read the AspectJ documentation, or a book on AspectJ such as the excellent AspectJ in Action.

Tracing

The generic introductory example for demonstrating how useful AOP is seems to be tracing. The ability to add logging code to trace program flow, method parameters and return values, etc. is very handy for debugging, particularly when using pointcuts to narrow the scope of the tracing to the sections of code that are problematic.

Here is a simple example with some test classes:

public class Customer {
  private String name;
  private String phone;
  private String address;

  public String getName() {
    return name;
  }
  public void setName(String name) {
    this.name = name;
  }
  public String getPhone() {
    return phone;
  }
  public void setPhone(String phone) {
    this.phone = phone;
  }
  public String getAddress() {
    return address;
  }
  public void setAddress(String address) {
    this.address = address;
  }
}

public class AccountService {

  public boolean addCustomer(Customer customer, String accountId, boolean isNewCustomer)
  {
    // real code to add customer to an account would be added here

    return true;
  }
}

For the tracing code, you can use the examples from the AspectJ programming guide or the book AspectJ in Action.

Then in a trace aspect class, use a pointcut to trace the code we’re interested in (in this case AccountService.addCustomer()).

pointcut traceOperations() : execution(* trace.blogexample.AccountService.addCustomer(..));
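For illustration, a minimal code-style aspect around that pointcut could look something like this (a sketch only; the tracing examples in AspectJ in Action are more complete, and the exact output format will differ a little):

import org.aspectj.lang.Signature;

public aspect TraceAspect {

  pointcut traceOperations() : execution(* trace.blogexample.AccountService.addCustomer(..));

  before() : traceOperations() {
    Signature sig = thisJoinPointStaticPart.getSignature();
    System.out.println("Enter  [" + sig.getDeclaringTypeName() + "." + sig.getName() + "]");
    System.out.println("  [ This: " + thisJoinPoint.getThis() + "]");
    System.out.println("  [Args: " + java.util.Arrays.toString(thisJoinPoint.getArgs()) + "]");
  }

  after() returning : traceOperations() {
    Signature sig = thisJoinPointStaticPart.getSignature();
    System.out.println("Exit  [" + sig.getDeclaringTypeName() + "." + sig.getName() + "]");
  }
}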

When the code is run, you might get some tracing, showing that the code was run and the parameters that were passed to the method in the pointcut.

Enter  [trace.blogexample.AccountService.addCustomer]
  [ This: trace.blogexample.AccountService@6262937c]
  [Args: (trace.blogexample.Customer@35c0e45a,ABC123,true)]
Exit  [trace.blogexample.AccountService.addCustomer]

Even more useful is combining the logging code with reflection, to trace not only simple values but also the contents of the objects that you are interested in. For the above example you might get tracing that shows the fields of the objects passed to the method as parameters. Hopefully the info from the tracing will be helpful in fixing a bug or problem.

Enter  [trace.blogexample.AccountService.addCustomer]
  [ This: trace.blogexample.AccountService@1d5a0305]
  [Args: (trace.blogexample.Customer@377653ae[
  name=John Smith
  phone=1234567890
  address=1 Test Street
  ],java.lang.String@54cbf30e[
  value={A,B,C,1,2,3}
  offset=0
  count=6
  hash=1923891888
  ],java.lang.Boolean@442a15cd[
  value=true
  ])]
Exit  [trace.blogexample.AccountService.addCustomer]

As I’ve mentioned, you can target where you want the tracing to occur:

  • by setting the pointcut to trace in detail the areas of code that are (or that you think are) causing problems
  • where exceptions are being thrown (using an ‘after throwing’ advice)

You can also build on the AOP tracing to do other useful things; for example, I have an open source project (TestDataCaptureJ) that uses tracing to capture data for use in unit testing.

Data Validation

Instead of just tracing, you can go further by doing some data validation as well.

For instance, you can insert some code to validate the parameters of a business method, and log any validation errors.
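As a sketch (the validation rule here is made up for illustration), a before advice could check the Customer passed to AccountService.addCustomer() and log any problems:

import trace.blogexample.Customer;

public aspect ValidationAspect {

  before(Customer customer) :
      execution(* trace.blogexample.AccountService.addCustomer(..)) && args(customer, ..) {
    // log the problem rather than failing, so the test run can continue
    if (customer.getName() == null || customer.getName().length() == 0) {
      System.err.println("Validation failed: customer name is missing");
    }
  }
}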

Another way is to use a tool such as Contract4J to test compliance (checking for preconditions, postconditions and invariants in the business requirements). Use load-time weaving to hook in the Contract4J configuration for the business methods that you want to test.

Method Replacement

Another technique I sometimes use is to replace method invocations in the app with test code.
This involves creating an around advice for a pointcut that includes the method we want to replace.
Then inside the around advice you can:
1. run another method altogether

So instead of just invoking the original method …

Object returnValue = joinPoint.proceed(args);

invoke another method with the same return type.

Object returnValue = newMethod(args);

2. run with different parameters

Invoke the original method, but pass in different parameter values (of course the parameter types must match the method signature)

Object returnValue = joinPoint.proceed(newArgs);
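Putting that together, here is a sketch of an annotation-style around advice that swaps in its own test data (the aspect and method names are made up for illustration):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import trace.blogexample.Customer;

@Aspect
public class MethodReplacementAspect {

  @Around("execution(* trace.blogexample.AccountService.addCustomer(..))")
  public Object replaceAddCustomer(ProceedingJoinPoint joinPoint) throws Throwable {
    Object[] args = joinPoint.getArgs();
    // swap in our own test data for the Customer parameter,
    // keeping the remaining arguments as they were
    Customer testCustomer = new Customer();
    testCustomer.setName("Test Customer");
    args[0] = testCustomer;
    return joinPoint.proceed(args);
  }
}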

So using method replacement techniques you can do things like:

  • inject your own data to test a particular scenario
  • proxy external calls, e.g. for a method that uses an external service, replace with a method that returns some test data we can use for diagnosis

Summary

These are just some ideas that I have found useful when testing and debugging issues in app development. They can easily be applied to your development environment, but using load-time weaving with AOP also makes them useful when you encounter problems on a server that is not directly under your control.

Of course these techniques do not replace your standard tools and debuggers; they are just more tools in your toolbox.

Using a custom ServiceLoader in Android

For those looking for a way to create modular and extensible code using some sort of ‘plugin’ architecture, one of the possibilities is to use the ServiceLoader class.

The ServiceLoader class acts as a factory for creating all known implementations of a particular class or interface S. The known implementations are read from a configuration file in META-INF/services/.

Here are some examples of its usage in Java:
Creating Extensible Applications With the Java Platform
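To recap the standard usage (a minimal sketch; MyService and doSomething() are placeholder names): for an interface com.yourdomain.MyService, you list the fully qualified names of its implementation classes, one per line, in a text file named META-INF/services/com.yourdomain.MyService on the classpath, and then iterate over the loaded implementations:

ServiceLoader<MyService> loader = ServiceLoader.load(MyService.class);
for (MyService service : loader) {
  // each iteration lazily instantiates the next implementation
  // listed in the configuration file
  service.doSomething();
}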

What I was after was to use it in Android, and to be able to put the service implementation classes in their own library project or jar file, so they could be used independently in a modular fashion. A possible use of this structure is to have different ‘plugins’ for different versions of an app.

There are other similar Java plugin frameworks out there, but I thought I’d stick with something that already exists in Java rather than learning yet another API.

However there are some issues with using ServiceLoader in Android, mostly to do with where to put the configuration file that contains the list of plugin implementation classes. Unfortunately ServiceLoader is hard-coded to look for the file at ‘META-INF/services/’, which makes it hard to use in Android.

The article ‘Using serviceloader on android‘ gives some ideas about how to use it with Ant. However I wanted a solution that could also be used with Eclipse.

Customizing ServiceLoader

Because ServiceLoader is a final class, it cannot be subclassed, but we can still customize it by making our own copy of the source code from the Java SDK. For example, let’s take a copy of ServiceLoader.java from the JDK 1.6 source and rename it to CustomServiceLoader.

Looking at the source, the first thing we notice is that the location for the config file is hard-coded:

private static final String PREFIX = "META-INF/services/";

Then here is where that location is used (it is only used when hasNext() is called on the iterator due to the lazy initialization pattern):

public boolean hasNext() {
  if (nextName != null) {
    return true;
  }
  if (configs == null) {
    try {
      String fullName = PREFIX + service.getName();
      if (loader == null)
        configs = ClassLoader.getSystemResources(fullName);
      else
        configs = loader.getResources(fullName);
    } catch (IOException x) {
      fail(service, "Error locating configuration files", x);
    }
  } ...

There are several ways we could customize this class to make it easier to use with Android.

Option 1: Replace the hard-coded PREFIX field with your own location

So in our custom class, we could just change the PREFIX constant to point somewhere else in our classpath. This is the quickest way, but it suffers from the same inflexibility as the original ServiceLoader.

private static final String PREFIX = "com.yourdomain.services/";

Since the location is looked up internally using getResources() or getSystemResources() from the classloader, all we have to do is make sure the configuration file is somewhere on the classpath, and then tell CustomServiceLoader where to search for it.

Another idea for this option was to do it the ‘Android’ way and use the AssetManager to find the configuration file in the assets directory. This would mean passing the AssetManager as an extra parameter in the public API methods of CustomServiceLoader.
However the Android build process only takes into account the assets directory in the main project and ignores the assets directories in library projects, so the file would have to go in the main project. I prefer the configuration to live in the library along with its code, so that the library can function as an independent unit, so I did not use the assets folder.

Option 2: Pass the location

Rather than hard-coding the location, we could just pass it as an additional parameter when loading the services. This would mean changing the method signatures to pass the location parameter through (or store it) until it is needed. Hence instead of calling …

CustomServiceLoader.load(MyService.class)

you would call …

CustomServiceLoader.load(MyService.class, "com.yourdomain.services/")

I did not end up using this option, since it would have required more substantial changes to the original class.

This is one of the caveats of copying source code and changing it, and why I generally dislike doing so. If you make any significant changes, there is always the chance that you will introduce bugs into the code (and therefore more testing is required!). Also, if the original code base changes, your copy will not pick up those changes.

Making it Modular for Multiple Plugins

The above solutions would work if you only had one plugin (i.e. a jar or project containing some service implementation classes and a configuration file), but what if you wanted to handle any number of plugins? My idea was to have each plugin as an independent unit, each with its own configuration file.
Then the app would load all the plugin classes it found and aggregate them, to make all the service implementations available.

Why would you want to do this? Well, many Android apps have multiple versions, e.g. a paid vs a free version, or a lite vs a pro version. So you could have one plugin for the lite version offering a limited number of services.
Then for the pro/paid version, you could either replace that plugin with another version that offers more services, or just add the extra services as additional plugins.

It would require a bit more work on CustomServiceLoader to allow it to handle this situation.

This means that if, for instance, the location you want is at ‘com.yourdomain.services/’, and you have two plugin libraries or jars with their configuration files at that same location, then the APK will not build, since the build process considers them to be duplicates:

Error generating final archive: Found duplicate file for APK: com.yourdomain.services/com.yourdomain.plugin.MyService

Let’s have a look at the code again:

private static final String PREFIX = "com.yourdomain.services/";

...

public boolean hasNext() {
  if (nextName != null) {
    return true;
  }
  if (configs == null) {
    try {
      String fullName = PREFIX + service.getName();
      if (loader == null)
        configs = ClassLoader.getSystemResources(fullName);
      else
        configs = loader.getResources(fullName);
    } catch (IOException x) {
      fail(service, "Error locating configuration files", x);
    }
  } ...

The problem is that the code only looks at the one location specified by the PREFIX constant, yet having the same location in multiple plugin libraries breaks the build due to the duplicate file problem.
We could put the configuration files at different locations in each plugin library, but then how does CustomServiceLoader find them?

I’m still working on this, but I have some ideas:

1. Add some code to handle wildcards in the location. For example, Plugin1 would have its configuration file at ‘com.yourdomain.services1/’ and Plugin2 at ‘com.yourdomain.services2/’; we could then pass in a wildcard location, such as ‘com.yourdomain.services*’.

In the Spring Framework, there are classes such as ResourcePatternResolver and PathMatchingResourcePatternResolver that do something like this; it is what enables Spring to find ApplicationContext.xml in different locations. This seems to be the most flexible approach, but requires more work.

2. Another way would be to use option 2 above (passing the location as a parameter), but instead of just one location, pass in multiple locations (in a Collection or array) and aggregate the results of the getResources() calls.
This is easier, but far from ideal, as you have to hard-code the locations of the configuration files in each plugin library somewhere.
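Here is a rough sketch of what that might look like in CustomServiceLoader (the constructor at the end is hypothetical and depends on how the rest of the copied class is adapted; imports from java.io, java.net and java.util are assumed):

public static <S> CustomServiceLoader<S> load(Class<S> service, String... locations) {
  ClassLoader loader = Thread.currentThread().getContextClassLoader();
  List<URL> configs = new ArrayList<URL>();
  for (String location : locations) {
    try {
      // aggregate the configuration files found at each location
      Enumeration<URL> urls = loader.getResources(location + service.getName());
      while (urls.hasMoreElements()) {
        configs.add(urls.nextElement());
      }
    } catch (IOException x) {
      throw new ServiceConfigurationError("Error locating configuration files", x);
    }
  }
  return new CustomServiceLoader<S>(service, loader, configs);
}

This could then be called with, for example, CustomServiceLoader.load(MyService.class, "com.yourdomain.services1/", "com.yourdomain.services2/").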

TestDataCaptureJ – Capture data for your unit tests

When you are writing unit tests or doing Test Driven Development (TDD), one of the tasks you may need to do is get some data to run your tests with. If the data you need is fairly simple, then there are various options, such as manually creating the data classes, creating some mocks, sticking some data in a database, etc.

However if you need a large amount of complex data, then these options can become very time consuming. This is a situation I have come across in various projects, in particular where I needed to test the checkout stage of an online shop using data from the basket. The development environment was running on a snapshot of the production database (therefore ‘real’ data), and for the test scenarios I needed the data from the shopping basket with various combinations of basket items. However the basket items were quite complex: the basket item class had about 35 fields, some of which were other objects each containing maybe 5 – 15 fields themselves.

To get all the test data I needed to cover the scenarios, I created a tool which evolved into the TestDataCaptureJ open source project, hosted on GitHub. This is a Java development tool that can be used in Java (web) applications to capture data as you run through your test scenarios. The data is logged to file, but in a format that you can just cut and paste into your unit tests (or better still, into classes that are used by your unit tests).

As an example, in the tutorial I used the jpetstore sample app from the Spring Framework. This just meant configuring the jpetstore web app to run with TestDataCaptureJ, and then going through the checkout process with some items in the basket. Then I used the generated log in some unit tests that I wrote to test part of the checkout process. This is an example of the code that was generated:

public org.springframework.samples.jpetstore.domain.Account createParam1Account_org_springframework_samples_jpetstore_domain_Order_initOrder() {

    org.springframework.samples.jpetstore.domain.Account account0 = new org.springframework.samples.jpetstore.domain.Account();
    account0.setUsername("j2ee");
    account0.setPassword(null);
    account0.setEmail("yourname@yourdomain.com");
    account0.setFirstName("ABC");
    account0.setLastName("XYX");
    account0.setStatus("OK");
    account0.setAddress1("901 San Antonio Road");
    account0.setAddress2("MS UCUP02-206");
    account0.setCity("Palo Alto");
    account0.setState("CA");
    account0.setZip("94303");
    account0.setCountry("USA");
    account0.setPhone("555-555-5555");
    account0.setFavouriteCategoryId("DOGS");
    account0.setLanguagePreference("english");
    account0.setListOption(true);
    account0.setBannerOption(true);
    account0.setBannerName("");

    return account0;
}

The tutorial also demonstrates some of the limitations of the tool as well.

Caveats

TestDataCaptureJ was really meant to handle data objects which are designed to hold data and follow JavaBean conventions, e.g.

  • objects are created using constructors
  • fields have setter methods using the standard naming convention,
    e.g. a field named ‘userAccountName’ would have a public setter method ‘setUserAccountName()’

Also, in order for TestDataCaptureJ to intercept the processing of the application and log the data, the object you want to capture must either be passed to a method as a parameter or returned from a method as a return value.

Therefore it can’t currently handle objects where this isn’t the case, e.g.

  • objects that are not created with constructors, e.g. if they are created using factory methods instead
  • fields without setter methods, or with setter methods that don’t follow the standard naming convention
  • static fields (just not implemented)
  • objects passed into methods as varargs

There is some configuration that can be used to handle some of these cases.

How it works

Basically this is just a glorified version of the common AspectJ tracing example, using load-time weaving to intercept the data objects that we are interested in. But instead of just logging the contents, there is a two-stage process:

  1. use Java reflection to access the field data recursively and store it in some metadata classes
  2. log the data as Java code to file

There is an explanation page in the documentation that goes into more detail.

Please have a look at the code (or better still, fork it and play around with it) or read the documentation if you think this might be useful to you.