Advanced Testing Tips for Android, Part 2

In part 1 of this article, I discussed some of the merits of Android testing on the emulator and rooted devices. In this concluding part, I’ll have a look at the software side of things.

Tracing

It is often useful to include tracing code while developing; one common usage, for example, is to trace the entry and exit of method calls.

However, it is generally recommended to remove tracing (and debug) logging from releases. This can be a hassle if trace log statements are sprinkled throughout your code. If you are using ProGuard to obfuscate your release version, then that is one option you can use to disable the tracing.
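
For example, if your tracing goes through android.util.Log, a ProGuard rule along these lines can strip the verbose and debug calls from a release build (a sketch; note that it only works when ProGuard's optimization step is enabled):

-assumenosideeffects class android.util.Log {
    public static int v(...);
    public static int d(...);
}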

Another way is to use an AOP library like AspectJ to handle the tracing instead (there are various examples of how to do this on Android available on the internet, just Google it).

Some advantages of using AspectJ to handle tracing are:

  • The AspectJ tracing code can be put in a separate library project, which is only included in development builds (i.e. leave the library out of the build script for the release build).
  • The use of pointcuts is more configurable, as they can be used to trace as much or as little of your code as you require. This is useful for targeting the tracing at the areas of your code you are checking for errors.
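
As a rough sketch, a development-only trace aspect might look like this (the package name and log tag are placeholders):

public aspect MethodTraceAspect {

  // Trace all methods in the app's packages, but not the aspect itself.
  pointcut appMethods() : execution(* com.example.myapp..*(..)) && !within(MethodTraceAspect);

  before() : appMethods() {
    android.util.Log.v("TRACE", "Enter " + thisJoinPointStaticPart.getSignature());
  }

  after() : appMethods() {
    android.util.Log.v("TRACE", "Exit  " + thisJoinPointStaticPart.getSignature());
  }
}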

Generally I would leave method tracing disabled, but it is sometimes useful when running automated tests, e.g. when doing Continuous Integration or running Monkey, to trace the method flow in case there are crashes or other problems.

Monkey

This is another useful testing tool, but sometimes there are sections of your code that you don’t want to be run while testing with Monkey.

In this case, you can disable the functionality in those sections of code when testing with Monkey, using ActivityManager.isUserAMonkey().

if (!ActivityManager.isUserAMonkey()) {
    // only run if not being driven by Monkey
    doSomething();
}

In my case I used this because the app I was working on had some functionality to run other apps on the device. If this code was run during Monkey testing, then Monkey would continue running on the secondary app rather than the main app that I wanted to test.

Another example of its usage is in the Android API demos (to stop Monkey running any ‘dangerous’ code):
http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/DeviceAdminSample.html

Accessory Apps for Testing

To assist in testing, it may be useful to write some additional apps to handle various accessory tasks:

  • configuring the device before running tests
  • loading test data; for instance, I wrote a simple app to load test contacts and SMS messages (a sketch follows this list)
  • testing for failures in intent passing, e.g. if the main app runs other apps using intents, you could use intents to run an app that deliberately fails, to see how the main app handles it
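
For the test-contact loading, here is a bare-bones sketch using the ContactsContract provider (the class and method names are my own; it needs the WRITE_CONTACTS permission, and error handling is omitted):

import android.content.ContentUris;
import android.content.ContentValues;
import android.content.Context;
import android.net.Uri;
import android.provider.ContactsContract;

public class TestDataLoader {

    // Inserts a minimal raw contact with just a display name, and returns its row ID.
    public static long insertTestContact(Context context, String displayName) {
        // An insert with empty values creates a new (unsynced) raw contact.
        Uri rawContactUri = context.getContentResolver()
                .insert(ContactsContract.RawContacts.CONTENT_URI, new ContentValues());
        long rawContactId = ContentUris.parseId(rawContactUri);

        // Attach a structured name data row to the new raw contact.
        ContentValues name = new ContentValues();
        name.put(ContactsContract.Data.RAW_CONTACT_ID, rawContactId);
        name.put(ContactsContract.Data.MIMETYPE,
                ContactsContract.CommonDataKinds.StructuredName.CONTENT_ITEM_TYPE);
        name.put(ContactsContract.CommonDataKinds.StructuredName.DISPLAY_NAME, displayName);
        context.getContentResolver().insert(ContactsContract.Data.CONTENT_URI, name);

        return rawContactId;
    }
}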

Advanced Testing Tips for Android, Part 1

I’m currently developing an Android app and have been reading up on Android testing. Here are some tips that I’ve come across and found useful, as well as some ideas of my own.

Emulator vs Device

Testing on a real device is generally recommended over using the Android emulator for various reasons:

  • You should test as close as possible to the user environment.
  • A device is a more realistic environment for acceptance testing.
  • The emulator is slow and often buggy. I’ve encountered issues with various versions of the emulator when trying to test things like orientation changes, location services, etc.

While I agree that you should do most of your testing on real devices, sometimes testing on the emulator can be a useful addition.

Advantages of Testing on the Emulator

1. You can increase memory and heap for an AVD (Android Virtual Device).

This may be necessary if you have a lot of tests to run together or have complicated tests (e.g. if using recursion). Devices, of course, are limited to the memory they have installed and could run out of memory when running large test suites.
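
For instance, the emulated RAM and VM heap sizes can be bumped up in the AVD's config.ini (the property names are from the AVD hardware options; the values here are just examples):

hw.ramSize=512
vm.heapSize=48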

2. For testing with particular configurations, e.g. system settings such as GPS.

You can set up an AVD with GPS on, and another exactly the same except with GPS off (once configured, they can be saved as snapshots). Then you could use annotations in the test cases to mark which tests to run in which environment.
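
As a sketch of the annotation idea (the annotation name is my own invention), define a marker annotation and have the instrumentation runner only run tests that carry it:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Marker for tests that should only run on an AVD with GPS enabled.
@Retention(RetentionPolicy.RUNTIME)
public @interface RequiresGps {
}

Annotate the relevant test methods with @RequiresGps, then pass the annotation to the runner, e.g. adb shell am instrument -w -e annotation com.example.test.RequiresGps com.example.test/android.test.InstrumentationTestRunner.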

If you were testing this on a device, you would need to remember to configure the settings before the tests are run. This may be alright for manual testing, but it is a major hassle for automated testing (for instance, if you’re using Continuous Integration practices).

In fact, a tool like the Jenkins Android Emulator plugin could be configured to start the appropriate AVD in the emulator when required (or even create one based on a set of configurations).

3. The emulator is free.

Of course using the emulator is cheaper than buying a truckload of devices, especially with so many Android versions and screen sizes out there. You can use the emulator to fill in the gaps for whichever configurations you don’t have devices for (is it worth buying separate devices to test Android 2.3 vs. 2.3.3, for instance?).

Testing on a Rooted Device

Most advice about using rooted devices for testing seems to be: DON’T. Most of the time this is good advice, as you want your test environment to be as close as possible to what your customers are using.

However, you might sometimes find it useful to run some of your tests on a rooted device. In particular, having read and write access to otherwise secure directories can be useful.

Some examples of this would be:

  • copying a test database onto the device
  • copying JUnit report files off the device for archiving
  • doing a full backup of various device configurations, to restore later for testing
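
For example, with root access (adbd running as root), commands like the following become possible; the paths here are illustrative:

adb root
adb push test.db /data/data/com.example.myapp/databases/test.db
adb pull /data/data/com.example.myapp/files/junit-report.xml reports/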

Some more tips later in part 2 of this article.

Using load-time AOP to help testing and debugging

Reading the blog post ‘Practical Introduction into Code Injection with AspectJ, Javassist, and Java Proxy‘ reminded me how handy AOP tools like AspectJ are for testing and debugging applications – particularly on servers.

How many times have you encountered a problem or bug on a staging (or common test or reference) server that you can’t reproduce on your own dev server? Ideally deployments should be consistent and reproducible so that this never happens, but unfortunately in real projects that is not always the case.

Various issues can cause these differences:

  • manual configurations to the build process
  • continuous integration not used rigorously
  • different environmental settings on different server instances
  • different external services used in development, as opposed to a staging environment

Factors like these may lead to inconsistent builds or differing behaviour in different server environments.

I’ve found that using load-time weaving with AspectJ can be useful for testing and debugging application code on a staging server, without requiring any source code changes. The good thing is that because it doesn’t change the actual build artifacts on the file system (of course the code is changed when the affected classes are loaded!), you can remove the load-time weaving at any time and revert to the original code without having to do another build.

Here are some examples of what can be done to assist testing and debugging on a server that is not directly controlled by the developer. I’m assuming here that you are able to implement load-time weaving on the server.

If you are not familiar with AOP and AspectJ, please read the AspectJ documentation or a book on AspectJ, such as the excellent AspectJ in Action.

Tracing

Tracing seems to be the generic introductory example for demonstrating how useful AOP is. The ability to add logging code to trace program flow, method parameters, return values, etc. is very handy for debugging, particularly when using pointcuts to narrow the scope of the tracing to the sections of code that are problematic.

Here is a simple example with some test classes:

public class Customer {
  private String name;
  private String phone;
  private String address;

  public String getName() {
    return name;
  }
  public void setName(String name) {
    this.name = name;
  }
  public String getPhone() {
    return phone;
  }
  public void setPhone(String phone) {
    this.phone = phone;
  }
  public String getAddress() {
    return address;
  }
  public void setAddress(String address) {
    this.address = address;
  }
}

public class AccountService {

  public boolean addCustomer(Customer customer, String accountId, boolean isNewCustomer)
  {
    // real code to add customer to an account would be added here

    return true;
  }
}

For the tracing code, you can use the examples from the AspectJ programming guide or the book AspectJ in Action.

Then in a trace aspect class, use a pointcut to trace the code we’re interested in (in this case AccountService.addCustomer()).

pointcut traceOperations() : execution(* trace.blogexample.AccountService.addCustomer(..));
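
Putting that together, a minimal version of the trace aspect might look something like this (simplified from the published examples, so the output formatting is only approximate):

import java.util.Arrays;

public aspect TraceAspect {

  pointcut traceOperations() : execution(* trace.blogexample.AccountService.addCustomer(..));

  before() : traceOperations() {
    System.out.println("Enter  [" + thisJoinPointStaticPart.getSignature() + "]");
    System.out.println("  [ This: " + thisJoinPoint.getThis() + "]");
    System.out.println("  [Args: " + Arrays.toString(thisJoinPoint.getArgs()) + "]");
  }

  after() : traceOperations() {
    System.out.println("Exit  [" + thisJoinPointStaticPart.getSignature() + "]");
  }
}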

When the code is run, you might get some tracing, showing that the code was run and the parameters that were passed to the method in the pointcut.

Enter  [trace.blogexample.AccountService.addCustomer]
  [ This: trace.blogexample.AccountService@6262937c]
  [Args: (trace.blogexample.Customer@35c0e45a,ABC123,true)]
Exit  [trace.blogexample.AccountService.addCustomer]

Even more useful is combining the logging code with reflection, to trace not only simple values but also the contents of the objects that you are interested in.
For the above example, you might get tracing that shows the fields of the objects passed to the method as parameters.
Hopefully the info from the tracing will be helpful in fixing a bug or problem.

Enter  [trace.blogexample.AccountService.addCustomer]
  [ This: trace.blogexample.AccountService@1d5a0305]
  [Args: (trace.blogexample.Customer@377653ae[
  name=John Smith
  phone=1234567890
  address=1 Test Street
  ],java.lang.String@54cbf30e[
  value={A,B,C,1,2,3}
  offset=0
  count=6
  hash=1923891888
  ],java.lang.Boolean@442a15cd[
  value=true
  ])]
Exit  [trace.blogexample.AccountService.addCustomer]
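
One simple way to get object dumps like the one above is a reflection-based toString, for example from Apache Commons Lang (a sketch, substituting that library for whatever your own logging code uses):

import org.apache.commons.lang.builder.ToStringBuilder;
import org.apache.commons.lang.builder.ToStringStyle;

before() : traceOperations() {
  for (Object arg : thisJoinPoint.getArgs()) {
    // Dumps the object's fields via reflection, one per line.
    System.out.println(ToStringBuilder.reflectionToString(arg, ToStringStyle.MULTI_LINE_STYLE));
  }
}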

As I’ve mentioned, you can target where you want the tracing to occur:

  • by setting the pointcut to trace in detail the areas of code that are (or that you think are) causing problems
  • where exceptions are being thrown (using an after throwing advice – see the sketch below)
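
For the exception case, a sketch of an after throwing advice (the pointcut scope here is illustrative):

after() throwing (Throwable t) : execution(* trace.blogexample..*(..)) {
  System.out.println("Exception in [" + thisJoinPointStaticPart.getSignature() + "]: " + t);
}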

You can also build on the AOP tracing to do other useful things; for example, I have an open source project (TestDataCaptureJ) that uses tracing to capture data for use in unit testing.

Data Validation

Instead of just tracing, you can go further by doing some data validation as well.

For instance, you can insert some code to validate the parameters of a business method and log any validation errors.
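
For example, a before advice could check the parameters of addCustomer() and log anything suspicious (a sketch; the validation rules here are made up, and the aspect is assumed to be in the trace.blogexample package so it can see Customer):

before(Customer customer, String accountId, boolean isNewCustomer) :
    execution(* trace.blogexample.AccountService.addCustomer(..))
    && args(customer, accountId, isNewCustomer) {
  if (customer.getName() == null || customer.getName().trim().length() == 0) {
    System.out.println("Validation error: customer name is empty");
  }
  if (accountId == null || accountId.length() == 0) {
    System.out.println("Validation error: accountId is empty");
  }
}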

Another way is to use a tool such as Contract4J to test compliance (checking for preconditions, postconditions and invariants in the business requirements). Use load-time weaving to hook in the Contract4J configuration for the business methods that you want to test.

Method Replacement

Another technique I sometimes use is to replace method invocations in the app with test code.
This involves creating an around advice for a pointcut that includes the method we want to replace.
Then inside the around advice you can:
1. run another method altogether

So instead of just invoking the original method …

Object returnValue = joinPoint.proceed(args);

invoke another method with the same return type.

Object returnValue = newMethod(args);

2. run with different parameters

Invoke the original method, but pass in different parameter values (of course the parameter types must match the method signature):

Object returnValue = joinPoint.proceed(newArgs);

So using method replacement techniques, you can do things like:

  • inject your own data to test a particular scenario
  • proxy external calls, e.g. for a method that uses an external service, replace it with a method that returns some test data we can use for diagnosis
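
Putting the pieces together, here is a sketch of an around advice for the second variant (in annotation style, to match the joinPoint.proceed() snippets above; the test account id is made up):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class MethodReplacementAspect {

  @Around("execution(* trace.blogexample.AccountService.addCustomer(..))")
  public Object replaceAddCustomerArgs(ProceedingJoinPoint joinPoint) throws Throwable {
    // Run the original method, but swap in a known test account id (parameter index 1).
    Object[] newArgs = joinPoint.getArgs();
    newArgs[1] = "TEST-ACCOUNT-123";
    return joinPoint.proceed(newArgs);
  }
}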

Summary

These are just some ideas that I have found useful when testing and debugging issues in app development. They can easily be applied in your development environment, but using load-time weaving with AOP also makes them useful when you encounter problems on a server that is not directly under your control.

Of course these techniques do not replace your standard tools and debuggers; they are just more tools in your toolbox.

Test Android orientation changes with Robotium

One of the most painful things I found while learning Android development was handling orientation changes (who knew flipping your phone would cause so much trouble?).

This is because activities are destroyed and recreated every time the phone changes between portrait and landscape mode. Here I’m assuming that this is being handled ‘properly’, by saving state data (e.g. in onRetainNonConfigurationInstance()) and then reloading it when the activity is recreated.

Once this saving and reloading code is in place, it needs to be tested, and here I’ve found Robotium to be very useful for running instrumentation tests with orientation changes.

An example

This is just one approach that I’ve found useful. Take this simple test, which simulates the user entering some text and clicking some buttons:

public void testValidateText()
{
  EditText targetText = solo.getEditText(0);
  solo.enterText(targetText, "hello");

  // click on button to bring up dialog
  solo.clickOnButton("Validate");

  // wait for the dialog to pop up
  getInstrumentation().waitForIdleSync();

  // click on button in dialog
  solo.clickOnButton("OK");
}

For the sake of simplicity, I’ve left out the Robotium setup code and the asserts (which would normally be the point of the testing!).

Ring the (orientation) changes

In the next version of this test, I will:

  • rename the test method to something that doesn’t start with ‘test’ – because Android uses JUnit 3, this will stop the method from being run directly as a test case
  • add some parameters to the test method as flags for doing orientation changes
  • add code to the method to change the orientation
  • write some wrapper methods to call our test method:

public void testValidateText()
{
  doTestValidateText(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT, false);
}

public void testValidateTextWithOrientationChange()
{
  doTestValidateText(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT, true);
}

public void testValidateTextInLandscapeWithOrientationChange()
{
  solo.setActivityOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
  doTestValidateText(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE, true);
}

public void doTestValidateText(int currentOrientation, boolean changeOrientation)
{
  int orientation = currentOrientation;

  EditText targetText = solo.getEditText(0);
  solo.enterText(targetText, "hello");

  orientation = changeOrientation(solo, changeOrientation, orientation);

  // click on button to bring up dialog
  solo.clickOnButton("Validate");

  orientation = changeOrientation(solo, changeOrientation, orientation);

  // wait for the dialog to pop up
  getInstrumentation().waitForIdleSync();

  orientation = changeOrientation(solo, changeOrientation, orientation);

  // click on button in dialog
  solo.clickOnButton("OK");
}

So now when the test case is run, the original test method will be run in 3 ways:

  • testValidateText() will run the test as before
  • testValidateTextWithOrientationChange() will run the test, but will change the orientation after every user action, using the line
    orientation = changeOrientation(solo, changeOrientation, orientation);
  • testValidateTextInLandscapeWithOrientationChange() will also run the test with orientation changes, but starting off in landscape mode

Here is the code for the orientation change:

public int getNextOrientation(int currentOrientation)
{
  if (currentOrientation == ActivityInfo.SCREEN_ORIENTATION_PORTRAIT)
  {
    return ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE;
  }
  else
  {
    return ActivityInfo.SCREEN_ORIENTATION_PORTRAIT;
  }
}

public int changeOrientation(Solo solo, int currentOrientation)
{
  int nextOrientation = getNextOrientation(currentOrientation);
  solo.setActivityOrientation(nextOrientation);

  return nextOrientation;
}

public int changeOrientation(Solo solo, boolean changeOrientation, int orientation)
{
  int newOrientation = orientation;
  if (changeOrientation)
  {
    newOrientation = changeOrientation(solo, orientation);
  }
  return newOrientation;
}

So basically the wrapper methods just invoke the original test method, with a flag for the orientation it starts in and whether to do orientation changes during the test or not.

In the next post, I will list some problems in my code that were exposed by this testing, as well as other issues with coding and testing for orientation changes.

TestDataCaptureJ – Capture data for your unit tests

When you are writing unit tests or doing Test Driven Development (TDD), one of the tasks you may need to do is get some data to run your tests with. If the data that you need is fairly simple, then there are various options, such as manually creating the data classes, creating some mocks, sticking some data in a database, etc.

However, if you need a large amount of complex data, then these options can become very time consuming. This was the situation I came across myself in various projects, in particular where I needed to test the checkout stage in an online shop using data from the basket. The development environment was running on a snapshot of the production database (therefore ‘real’ data), and for the test scenarios I needed the data from the shopping basket with various combinations of basket items. However, the basket items were quite complex: the basket item class had about 35 fields, some of which were other objects, each containing maybe 5 – 15 fields themselves.

To get all the test data I needed to test all the scenarios, I created a tool which evolved into the TestDataCaptureJ open source project, hosted on GitHub. This is a Java development tool that can be used in Java (web) applications to capture data as you run through the test scenarios. The data is logged to a file, in a format that you can just cut and paste into your unit tests (or better still, into classes that are used by your unit tests).

As an example, in the tutorial I used the jpetstore sample app from the Spring Framework. This just meant configuring the jpetstore web app to run with TestDataCaptureJ, and then going through the checkout process with some items in the basket. Then I used the generated log in some unit tests that I wrote to test part of the checkout process. Here is an example of the code that was generated:

public org.springframework.samples.jpetstore.domain.Account createParam1Account_org_springframework_samples_jpetstore_domain_Order_initOrder() {

    org.springframework.samples.jpetstore.domain.Account account0 = new org.springframework.samples.jpetstore.domain.Account();
    account0.setUsername("j2ee");
    account0.setPassword(null);
    account0.setEmail("yourname@yourdomain.com");
    account0.setFirstName("ABC");
    account0.setLastName("XYX");
    account0.setStatus("OK");
    account0.setAddress1("901 San Antonio Road");
    account0.setAddress2("MS UCUP02-206");
    account0.setCity("Palo Alto");
    account0.setState("CA");
    account0.setZip("94303");
    account0.setCountry("USA");
    account0.setPhone("555-555-5555");
    account0.setFavouriteCategoryId("DOGS");
    account0.setLanguagePreference("english");
    account0.setListOption(true);
    account0.setBannerOption(true);
    account0.setBannerName("");

    return account0;
}

The tutorial also demonstrates some of the limitations of the tool.

Caveats

TestDataCaptureJ is really meant to handle data objects which are designed to hold data and follow JavaBean conventions, e.g.

  • objects are created using constructors
  • fields have setter methods using the standard naming convention,
    e.g. a field named ‘userAccountName’ would have a public setter method ‘setUserAccountName()’

Also, to intercept the processing of the application in order to log the data, the required test data must be an object that is either passed to a method as a parameter, or returned from a method as a return value.

Therefore it can’t currently handle objects where this isn’t the case, e.g.

  • objects that are not created with constructors, e.g. if they are created using factory methods instead
  • fields without setter methods, or with setter methods that don’t follow the standard naming convention
  • static fields (just not implemented)
  • objects passed into methods as varargs

There is some configuration that can be used to work around some of these limitations.

How it works

Basically this is just a glorified version of the common AspectJ tracing example, using load-time weaving to intercept the data objects that we are interested in. But instead of just logging the contents, there is a two-stage process:

  1. use Java reflection to access the field data recursively and store it in some metadata classes
  2. log the data as Java code to a file
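
As a highly simplified sketch of the idea (this is not the project's actual code, and it only handles flat JavaBean properties, with no recursion):

import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class SetterCodeSketch {

  // Generates 'varName.setX(value);' lines for an object's JavaBean properties.
  public static String generateSetters(Object obj, String varName) throws Exception {
    StringBuilder code = new StringBuilder();
    for (PropertyDescriptor pd
        : Introspector.getBeanInfo(obj.getClass()).getPropertyDescriptors()) {
      // Skip properties without both a getter and a setter (e.g. getClass()).
      if (pd.getReadMethod() == null || pd.getWriteMethod() == null) {
        continue;
      }
      Object value = pd.getReadMethod().invoke(obj);
      // Simplistic literal handling: quote strings, print everything else as-is.
      String literal = (value instanceof String) ? "\"" + value + "\"" : String.valueOf(value);
      code.append(varName).append(".").append(pd.getWriteMethod().getName())
          .append("(").append(literal).append(");\n");
    }
    return code.toString();
  }
}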

There is an explanation page in the documentation that goes into more detail.

Please have a look at the code (or better still, fork it and play around with it) or read the documentation if you think this might be useful to you.