Protractor Testing with Google Map Markers and Markerclusterers, Part 2

Part 1 of this post was a brief introduction to the Protractor spec I was working on, where I had to locate markers and cluster markers in a Google map. In this second part, there are some tips on how to find those Google Maps markers.

Firstly, I must give credit to this blog post, which had similar ideas about finding Google Maps markers for Selenium testing.

http://tech.adstruc.com/post/34230170061/selenium-testing-google-maps

Configure the marker

Most importantly, when the marker is created it must be configured as unoptimized. This means the markers are rendered as elements that can be located in the DOM.

var marker = new google.maps.Marker({
  position: latLng,
  title: 'your title',
  optimized: false
});

Be aware that using unoptimized markers should only be done for development and testing, as it significantly affects performance.

In the test spec we can use xpath to find the divs that represent the markers, but the specific xpath expression will vary depending on several factors, such as whether the marker has events attached to it. For example, you may want to attach a click event to the markers so that something happens when the user clicks on them.

google.maps.event.addListener(marker, 'click', function() {
  // do something
});

Another factor that affects how the DOM structure for a marker is rendered is the platform and browser that the web page with the map is running on.

The best way to formulate the xpath expression you want to use as the locator for the markers is to use a web inspection tool to have a look at the DOM element(s) for the marker. This should be done for the browsers and platforms that you want to support.

Markers without Map Areas

The first marker DOM structure has a div that looks like this.

<div title="your title" class="gmnoprint">
  <img />
</div>

Some examples of situations where the markers have this structure include:

  • Chrome (Windows), markers without events
  • Firefox (Windows), markers without events
  • Chrome (Android), markers with events

In the example I’m using for this post, the test spec is locating the markers in order to get a count of all the markers in a map.

element.all(By.xpath("//div[@class=\"gmnoprint\" and @title]")).count();

Several things to note here:

1. The xpath expression is used to locate all the markers in the DOM.

By.xpath("//div[@class=\"gmnoprint\" and @title]")

2. The function element.all() returns an ElementArrayFinder, which has various utility methods such as count().

3. Since the methods on ElementArrayFinder return promises, you need to use a callback if you want to get values from the marker elements.

For instance if you wanted to get the titles from the markers:

var titles = element.all(By.xpath("//div[@class=\"gmnoprint\" and @title]")).map(function(elem, index) {
  return {
    index : index,
    title : elem.getAttribute('title')
  };
});

Markers with Map Areas

Another marker structure is where the div contains an <area> tag inside a <map> tag.

<div class="gmnoprint">
  <img />
  <map>
    <area title="your title" />
  </map>
</div>

You may encounter this DOM structure in the following:

  • Chrome (Windows), markers with events
  • Firefox (Windows), markers with events

Once again we can get a count of the markers, using an xpath expression that matches the DOM structure for these markers.

element.all(By.xpath("//div[@class=\"gmnoprint\"]/map/area[@title]")).count();
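
And if you need attribute values from these markers as well, the same map() approach from earlier works here too; a quick sketch (the variable name is just for illustration):

var areaTitles = element.all(By.xpath("//div[@class=\"gmnoprint\"]/map/area[@title]")).map(function(elem, index) {
  return {
    index : index,
    title : elem.getAttribute('title')
  };
});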

In the final part of this post, I will show how to find the cluster markers in the map, and also how to find the number of Google maps markers represented by each cluster.

Protractor Testing with Google Map Markers and Markerclusterers, Part 1

While doing e2e testing on an AngularJS app using Protractor, I came across the need to find the markers in a Google map within the app. This was further complicated by the fact that we were using MarkerclustererPlus, which meant that on the map there could be a mixture of single markers and cluster markers.

This first part is just a bit of an introduction, so if you want, you can go straight to Part 2 which shows how to find the Google Maps markers in a Protractor spec or Part 3 for using markerclusterer.

What is a Markerclusterer?

If you are not familiar with markerclusterers: it is a Google Maps utility library that deals with maps that have too many markers, or are too cluttered, by combining markers that are close together into cluster markers. Have a look at the example page:

http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/docs/examples.html

Also note that there are actually two libraries, markerclusterer and markerclustererplus.

I’m assuming that the reader is already familiar with setting up and using Protractor for testing AngularJS applications.

AngularJS and Google Maps

Since this is an AngularJS app, I decided to use a directive to create the Google map.

There are a few AngularJS map directive libraries around, but the one I decided upon was ng-map. The advantage of this particular library is that although you can just use its tags to create the map, it also allows you to use the Google Maps V3 JavaScript API directly. This is very useful: for instance, even though there was a markerclusterer tag, I just wrote the code in JavaScript, which was more flexible and easier to debug (the author of the library also seems to recommend this approach for complicated code).
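
As a rough sketch of what that looks like (the variable names here are illustrative, assuming 'map' is the google.maps.Map created by the directive and 'locations' is an array of LatLng objects):

// create the individual markers
var markers = locations.map(function(latLng) {
  return new google.maps.Marker({ position: latLng, optimized: false });
});

// let MarkerclustererPlus combine markers that are close together into clusters
var markerCluster = new MarkerClusterer(map, markers);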

So for this application, the map libraries that were used were:

  • the Google Maps V3 JavaScript API
  • ng-map
  • MarkerclustererPlus

Count the Markers …

The spec file I was working on needed to count the number of markers that were displayed on the map. Now as I mentioned earlier, because I was using the markerclusterer library, the markers could appear as single markers (i.e. the default Google Maps markers) or as cluster markers, and the number of markers and clusters would vary depending on the zoom level of the map.

Therefore in the tests I needed code to find:

  • single markers
  • cluster markers
  • the number of markers contained in each cluster marker

In the next post, I will show how to find the single markers displayed on the map. All you will need are web tools that can inspect elements in a web page, such as Firebug or the Chrome Developer Tools.

Setup JRebel with Tomcat and Docker

It’s fairly straightforward to install JRebel to run on a local instance of Tomcat; here is one way of installing it on Tomcat running in a Docker container instead. This article assumes a basic knowledge of using Docker.

For this particular example I’m using:

  • the Eclipse IDE installation of JRebel
  • the ‘official’ Tomcat 8 image from the Docker hub

Install JRebel in the IDE

I’m using the Eclipse IDE, but there are instructions on the ZeroTurnaround website on using a different IDE or for installing it standalone.

1. For Eclipse, follow these instructions just to install and activate JRebel for the IDE:

https://zeroturnaround.com/software/jrebel/quickstart/eclipse/#!/server-configuration

2. We need the JRebel agent (jrebel.jar), which will be installed into Tomcat.

You can either get this from the JRebel plugin you have just installed into Eclipse (look for the section titled ‘Where do I find jrebel.jar?’);

http://zeroturnaround.com/software/jrebel/learn/remoting/eclipse/

OR you can get it from an archive

https://zeroturnaround.com/software/jrebel/download/prev-releases/

(Note that for Tomcat 8, please use the legacy version of jrebel.jar which is found in the lib sub-directory of the zip archive.)

Install JRebel in the Application Server

1. Get the base Tomcat docker image from the docker hub.

docker pull tomcat:xxx

Here xxx is the specific version of Tomcat you want to use as the base image, e.g. 8.0.23-jre7, 8-jre8, etc. You can find the list of available tags in the Tomcat repository on the Docker Hub.

2. Since we are using Docker to run the application server, we will need to run JRebel in remote mode. There are generic instructions on JRebel remoting, which we can adapt to a Docker environment. So what we want to do is create a custom Docker image, based on the Tomcat image, which incorporates the JRebel configuration.

2.1 Create an empty directory and copy the JRebel agent jrebel.jar to it.

2.2 Create a Dockerfile to build your custom Tomcat image, for example:
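
The exact contents will depend on your setup, but a minimal sketch might look like this (the base image tag is just an example, and the JRebel options shown are the ones typically used for remoting; check the ZeroTurnaround documentation for your version):

# custom Tomcat image with the JRebel agent included
FROM tomcat:8-jre8

# copy the JRebel agent from the build directory into the image
ADD jrebel.jar /jrebel/jrebel.jar

# add the JRebel agent and remoting flag to the JVM options Tomcat starts with
ENV CATALINA_OPTS="-javaagent:/jrebel/jrebel.jar -Drebel.remoting_plugin=true"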

Note that for simplicity, I have just added the JRebel agent to the directory /jrebel. You can use a different directory, as long as the -javaagent configuration can find it.

Also, you can take this opportunity to do further customizations on the Tomcat server. For example, if you want to add your list of users, copy your version of tomcat-users.xml to the Tomcat config directory by adding this line to the Dockerfile:

ADD tomcat-users.xml /usr/local/tomcat/conf/

2.3 Build and run the customized Tomcat server (using your own repository name, image name and container name to replace the values in this example).

docker build -t your_repository/tomcat-jrebel .

docker run -i -t -d --name mytomcat -p 8080:8080 your_repository/tomcat-jrebel

We can verify that the JRebel configuration has been included in Tomcat by checking the startup logs.

docker logs mytomcat

We should be able to see the JRebel version and licensing information.

2015-05-22 10:38:40 JRebel:  #############################################################
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  JRebel Legacy Agent 6.2.0 (201505201206)
2015-05-22 10:38:40 JRebel:  (c) Copyright ZeroTurnaround AS, Estonia, Tartu.
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  Over the last 1 days JRebel prevented
2015-05-22 10:38:40 JRebel:  at least 0 redeploys/restarts saving you about 0 hours.
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  Server is running with JRebel Remoting.
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  #############################################################

Tip: Build Your Own
Of course, you can combine these two steps for creating a custom image into one, by creating your own Tomcat image from scratch instead of using the ‘official’ Tomcat image as a base.

Configure the IDE

Finally we need to configure Eclipse to work with the Tomcat server that we have running in docker. You can do that by following these instructions.

This is a brief summary of the steps:

  1. In Eclipse, right-click on your project, select JRebel -> Add JRebel Nature
  2. Right-click on your project again, select JRebel -> Enable remote server support
  3. Right-click on your project again, select JRebel -> Advanced Properties
  4. In the dialog that pops up, click on “Edit” button next to the “Deployment URLs” text box
  5. Click on “Add” and enter the URL of the application; it will be something like “http://your_docker_host:8080/app_name”
  6. Click on “Continue”, “Apply”, and then “OK”.

Once the app is deployed, any changes you make in the IDE should now be reflected in the server running in the docker container.

No restarts, no redeploys, just code.

Pluggable Tools with Docker Data Containers

There are some apps that have a simple installation process. When using them with other applications in Docker, they can sometimes be installed in their own data volume container and used in a pluggable way.

The kind of apps I’m talking about are some Java apps (and in fact, Java itself) which follow this installation process:

  1. Install the contents of the app into a single directory
  2. Set an environmental variable to point to the installation directory, e.g. XXX_HOME
  3. Add the executables of the app to the PATH environmental variable


That’s it.

An example of an app installation that follows this pattern is Gradle:

  1. Uncompress the Gradle files from an archive to a directory.
  2. Set the environmental variable GRADLE_HOME to point to the Gradle installation directory
  3. Add GRADLE_HOME/bin to the PATH
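
In shell terms, those manual steps amount to something like this (the paths and version are just examples):

# unpack the Gradle distribution into an installation directory
unzip gradle-2.2.1-bin.zip -d /opt

# point GRADLE_HOME at the installation and add its bin directory to the PATH
export GRADLE_HOME=/opt/gradle-2.2.1
export PATH=$PATH:$GRADLE_HOME/bin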


Docker

Using Gradle as an example, here is a Dockerfile that installs it in a data volume container:

# Install Gradle as a data volume container. 
#
# The app container that uses this container will need to set the Gradle environmental variables.
# - set GRADLE_HOME to the gradle installation directory
# - add the /bin directory under the gradle directory to the PATH

FROM mini/base

MAINTAINER David Wong

# setup location for installation
ENV INSTALL_LOCATION /opt

# install Gradle version required
ENV GRADLE_VERSION 2.2.1

WORKDIR ${INSTALL_LOCATION}
RUN curl -L -O http://services.gradle.org/distributions/gradle-${GRADLE_VERSION}-bin.zip && \
    unzip -qo gradle-${GRADLE_VERSION}-bin.zip && \
    rm -rf gradle-${GRADLE_VERSION}-bin.zip
    
# to make the container more portable, the installation directory name is changed from the default
# gradle-${GRADLE_VERSION} to just gradle, with the version number stored in a text file for reference
# e.g. instead of /opt/gradle-2.2.1, the directory will be /opt/gradle

RUN mv gradle-${GRADLE_VERSION} gradle && \
    echo ${GRADLE_VERSION} > gradle/version
    
VOLUME ${INSTALL_LOCATION}/gradle

# echo to make it easy to grep
CMD /bin/sh -c '/bin/echo Data container for Gradle'

(From github https://github.com/davidwong/docker/blob/master/gradle/Dockerfile)

Build the image and container from the Dockerfile. Here I’ve tagged the image with the version number of the Gradle installation, and named the container gradle-2.2.1.


docker build -t yourrepo/gradle:2.2.1 .

docker run -i -t --name gradle-2.2.1 yourrepo/gradle:2.2.1

A few things to note about this installation:

  • I have changed the directory name where Gradle is installed from the default, by removing the version number in order to make it generic.
  • no environmental variables have been set; that will be done later
  • you can use any minimal image as the basis for the container, it just needs curl or wget in order to download the Gradle archive file

Now we have the Gradle installation in a docker data volume that can be persisted and shared by other containers.

You can then repeat this process with different versions of Gradle to create separate data containers for each version (of course giving the containers different names, e.g. gradle-2.2.1, gradle-1.9, etc).

Use Case

I originally got this idea when I was running my Jenkins CI docker container. Some of the Jenkins builds required Gradle 2.x while others were using Gradle 1.x.

So instead of building multiple Jenkins + Gradle images for the different versions of Gradle required, I can now just run the Jenkins container with the appropriate Gradle data container. This is done by using --volumes-from to get access to the Gradle installation directory and setting the required environmental variables.

To use the data container with Gradle 2.2.1 installed:

docker run -i -t --volumes-from gradle-2.2.1 -e GRADLE_HOME=/opt/gradle -e PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/gradle/bin myjenkins

To use the one with Gradle 1.9:

docker run -i -t --volumes-from gradle-1.9 -e GRADLE_HOME=/opt/gradle -e PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/gradle/bin myjenkins

Of course there are limitations to this technique, since Docker data volume containers were designed to share persistent data rather than application installs. In particular, they do not allow sharing of environmental variables.

However, this workaround can be useful in some circumstances.

Backup a Docker Data Container with Fig

I have been using data volume containers to persist data in docker containers.  There are various reasons why this tends to be a better option than just using data volumes, but probably the most important is portability.

Of course, now we have to back up the data in the data containers. This can be for archiving, or when the containers that use the data need to be upgraded or recreated. If your backup requirements are simple, you can just use the docker cp command or something like tar.

A Jenkins example

As a simple example, let’s run a Jenkins server in a docker container and use a data volume container to persist its data.

1. Pull or build a Jenkins image from the official repository.

http://jenkins-ci.org/content/official-jenkins-lts-docker-image

2. The Jenkins image uses the directory /var/jenkins_home as the volume to store its data, so we need a data volume container for that volume. Here is a sample of a Dockerfile to build the data container:
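
Something along these lines is enough (a sketch, assuming a busybox base image; the official Jenkins image runs as a non-root user, so the volume directory is handed over to that user):

# Data volume container for the Jenkins home directory
FROM busybox

MAINTAINER David Wong

# the official Jenkins image runs as user 'jenkins' (uid 1000),
# so create the data directory and give that user ownership of it
RUN mkdir -p /var/jenkins_home && chown -R 1000:1000 /var/jenkins_home

VOLUME /var/jenkins_home

# echo to make it easy to grep
CMD /bin/sh -c '/bin/echo Data container for Jenkins'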

Build and tag the image from the Dockerfile.

docker build -t your_repository:jenkins-data .

You can now create the data container, giving it a name for convenience. Optionally we can run the docker ps command afterwards to check that the container has been created; it should be in a stopped state.

docker run -i -t --name jenkins-data your_repository:jenkins-data
docker ps -a

3. Run the Jenkins server with the data container attached and make some changes, e.g. create a job, etc. The Jenkins data volume should have your changes in it now.

docker run --name=jenkins-sample -p 8080:8080 --volumes-from=jenkins-data jenkins

4. For this example we will use tar to back up the data container, using this command to create a temporary container that accesses the data container.

docker run --rm --volumes-from jenkins-data -v $(pwd):/backup busybox tar cvf /backup/jenkins_backup.tar /var/jenkins_home

There should now be a file jenkins_backup.tar in the current directory. Of course, for real usage we would probably run this command from a script and make it generic, so it can back up any data volume container.
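
For example, a generic version could be wrapped up in a small script like this (a sketch; the script name and parameters are made up for illustration):

#!/bin/sh
# backup-volume.sh <data-container> <volume-path> <archive-name>
# creates a tar archive of a data volume container in the current directory
CONTAINER=$1
VOLUME=$2
ARCHIVE=$3

docker run --rm --volumes-from "$CONTAINER" -v "$(pwd)":/backup busybox \
    tar cvf "/backup/$ARCHIVE" "$VOLUME"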

I do give a fig …

Something else I use for development with Docker is the orchestration tool Fig (this has saved me a lot of typing!). So here is an example of  doing the same backup on the Jenkins data container using Fig.

1. Create a Fig YAML file, using the same information that we used in the backup command.
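
A sketch of what that fig.yml might look like (the service name is arbitrary, and it mirrors the tar command from step 4 above):

jenkinsbackup:
  image: busybox
  volumes_from:
    - jenkins-data
  volumes:
    - .:/backup
  command: tar cvf /backup/jenkins_backup.tar /var/jenkins_home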

2. Run Fig, that’s it!

fig up

This is a simple example that has only scratched the surface of what can be done with Docker (and Fig). If the backup requirements for the data are more complex, then you could also consider creating a dedicated container just for doing backups, with all the required tools installed in it.

The great thing about Docker is that once everything has been setup, you can get applications such as Jenkins up and running very quickly.

Another Defection to Android Studio

Like many other developers out there, I have been using Eclipse as my main IDE for many years now. However for Android development I have decided to take the plunge and migrate to Android Studio (especially since it has finally been released).

Here is a blog post I found that closely echoes what I have long thought regarding the issues with Eclipse:

http://engineering.meetme.com/2014/02/a-tale-of-migrating-from-eclipse-to-android-studio/

Build, build, build

For me, another reason was that the Ant build files I was using to handle building different versions (free vs paid, dev vs release, etc.) were getting too complicated to manage easily. So I could change over to Gradle at the same time, since that’s what Android Studio uses by default.

Gradle has the concept of build variants to handle building different versions of an Android app.
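
For example, the free vs paid split might be expressed as product flavors in the module's build.gradle, roughly like this (a sketch, not the actual build file for my app):

android {
    // each combination of build type and product flavor becomes a build variant,
    // e.g. freeDebug, freeRelease, paidDebug, paidRelease
    buildTypes {
        debug { }
        release { }
    }
    productFlavors {
        free { }
        paid { }
    }
}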

The Recurring Eclipse Re-install

Here are some other problems that I personally have had with using Eclipse.

  • Plugins, well not the plugins themselves, but having too many plugins. I’ve found that having lots of plugins in one Eclipse installation can cause Eclipse to misbehave, especially after several updates. There are several ways I use to get around this:
    • Keep separate Eclipse installations for different types of development, e.g. one for Java, one for Android, one for Cloud, etc. Therefore each installation will only have a few plugins relevant to the type of development. However this is not always convenient if a project does require multiple types of development.
    • Every so often, when Eclipse starts to play up, do a fresh re-install of Eclipse (along with the latest version of the plugins required).
  • Intermittent miscellaneous bugs, e.g. cut and paste stops working, builds not always done automatically, etc. A lot of these issues are more of a nuisance than a serious problem, but all the same they tend to kill your productivity (and isn’t that why we use IDEs in the first place?).

No Pain, No …

Make no mistake: despite what the Android Studio documentation might try to tell you, migrating a non-trivial project will take some time and probably involve some pain. But it’s worth the effort, I think.

User changes for Address Location Finder

I’m currently working on upgrading my app Address Location Finder. While most of the changes are internal improvements or bug fixes, there are two major changes for users.

1. The map will be dropped from the app.

The simple built-in map screen will be removed from the app for the next version. In the future it will come back as an optional add-on.

Why?

I was getting quite a few error reports from users trying to run the app on devices that did not have the mapping requirements.

One of the requirements stated in the Google Play app store for the app was:

– device that supports the standard Google Mapping API (not the same as having the Google Maps app installed)

This means the manufacturers need to have licensed the Google Mapping API v1 for the device in order for the map in Address Location Finder to work. Unfortunately it may be difficult for users to know whether their device meets this requirement, possibly resulting in the app crashing.

So, removing the map from the app will remove this requirement and allow it to run on more devices (as well as reducing app crashes).

The map will become an optional add-on app for a future version of Address Location Finder.

2. The app will require Android 4.0.3+ to run.

The minimum API level to run the app will be raised to 15, which means it will now require Android version 4.0.3+ on the device to run.

Why?

The original minimum API level required for Address Location Finder was 8 (Android 2.2).

According to the Google dashboard for platform versions, devices running Android 2.x now account for only about 10%.

Supporting old versions requires a fair bit of additional work:

  • needs more testing
  • requires additional code, e.g. Android support library, to implement newer Android functionality
  • may require internal version checks for functions that would not work on old versions, possibly with alternate code

This all means it takes longer for updates of the app to come out. The diminishing returns of supporting the old versions are not really worth the extra effort and time.

Users with Android older than 4.0.3 who have already installed the app can just continue to use the old version. They should not even see the new version of the app when it arrives in the Google Play app store, since the store applies various filters to determine which apps to display (including the minimum Android version required to run).

Help! my Android USB Connection is dropping out

After updating Android on a phone that I use for testing, I started having problems when I connected it up to my PC. The USB connection would be there when I connected the cable, but then would seem to drop out after a short time. Sometimes it would disappear after 10 seconds, sometimes a couple of minutes, sometimes when I started up DDMS or Eclipse.

Very frustrating, but I hoped it would be a quick fix. But an hour later …

I guess I’m writing this as a cautionary tale for myself about going through proper procedures for problem solving instead of just guessing.

Firstly I tried to isolate the problem:

  • use a different USB port
  • use a different USB cable
  • try another Android device

From this I determined that the problem was with the phone, an old Nexus S that I use for testing older hardware and Android versions. Since I had just updated the Android version on the phone, I jumped to the conclusion that it must have been a USB driver problem.

So next I made sure my Android SDK installation was up-to-date, and then updated the Android USB driver from it. Made no difference.

Then I tried uninstalling and re-installing the USB driver. Still not working.

At this point I was out of ideas and just went away for a bit.

Tip: When you hit a brick wall when trying to fix a problem, sometimes it can be helpful just to go away from it for a bit. Work on something else, go for a walk, have a coffee break, whatever. Let your subconscious do the work.

When I came back later, I remembered that I’ve had other intermittent issues with the phone since it was quite old. So I did what I should have done in the first place, the golden rule about fixing tech equipment.

Turn it off, wait, then turn it on again. Working now!

Guess this shows that it is easy sometimes to forget the basics.

Spring Profiles and Reusable Tests, Part 2

In the first part of this post, we were able to use Spring profiles to inject a specific implementation of a class under test into some test cases. Now we will try to create a reusable test case by using Spring Dependency Injection and Spring profiles to inject the appropriate expected results when testing a particular implementation.

Firstly, we need to get the expected test results into a format that can be injected by Spring DI into a test case.

public interface FormatResults {

  public String getExpectedResult(String testMethodName);
}

public abstract class BaseFormatResults implements FormatResults {

  private Map<String, String>  results;

  public BaseFormatResults()
  {
    results = new HashMap<String, String>();

    setUpResults();
  }

  protected abstract void setUpResults();

  protected void addResult(String testMethodName, String result)
  {
    results.put(testMethodName, result);
  }

  @Override
  public String getExpectedResult(String testMethodName)
  {
    return results.get(testMethodName);
  }
}

public class HelloResults extends BaseFormatResults {

  @Override
  protected void setUpResults()
  {
    addResult("testDave", "Hello Dave");
  }
}

public class GoodByeResults extends BaseFormatResults {

  @Override
  protected void setUpResults()
  {
    addResult("testDave", "Good Bye Dave");
  }
}

For the sample code, a class is created as a wrapper around a map, which stores the expected test results for a test case. The expected result for each test method is keyed by the test method name in the map. Then some methods are included for adding and retrieving these expected results.

I have also made this wrapper class implement an interface and be abstract so that it can be subclassed to add the expected results for a particular test class.

Next we refactor the Spring JavaConfig classes to include these result classes, so that they can be injected into the test case along with the implementation of the class to test.

@Configuration
public class CommonTestConfig {

  @Autowired
  LogTestConfiguration logTestConfiguration;

  @Bean
  public Formatter formatter()
  {
    return logTestConfiguration.formatter();
  }

  @Bean
  public FormatResults results()
  {
    return logTestConfiguration.results();
  }

  @Bean
  public String testData()
  {
    return "Dave";
  }
}

public interface LogTestConfiguration {

  public Formatter formatter();

  public FormatResults results();
}

@Configuration
@Profile("hello")
public class HelloConfig implements LogTestConfiguration {

  @Bean
  public Formatter formatter()
  {
    return new HelloFormatter();
  }

  @Bean
  public FormatResults results()
  {
    return new HelloResults();
  }
}

@Configuration
@Profile("goodbye")
public class GoodByeConfig implements LogTestConfiguration {

  @Bean
  public Formatter formatter()
  {
    return new GoodByeFormatter();
  }

  @Bean
  public FormatResults results()
  {
    return new GoodByeResults();
  }
}

An extra method is added to the ‘LogTestConfiguration’ interface to retrieve a result class. Then an extra @Bean method is added to the configuration classes to get the appropriate result class based on the active Spring profile.

Lastly we again refactor the test case to use the updated configuration files.

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes={CommonTestConfig.class, HelloConfig.class, GoodByeConfig.class})
public class SpringProfileTest {

  @Autowired
  private Formatter formatter;

  @Autowired
  private FormatResults results;

  @Autowired
  private String testData;

  @Test
  public void testDave()
  {
    String result = formatter.format(testData);
    System.out.println("Result = " + result);

    String expected = results.getExpectedResult("testDave");

    assertEquals(expected, result);
  }

  // getters and setters left out for brevity ...
}

Here are the changes from the test case examples in the previous post:

  • there is now only one test case required to test either test class implementation ‘HelloFormatter’ or ‘GoodByeFormatter’ (previously a separate test case was required for each)
  • the results for the test are also now injected into the test case via Spring DI
  • all of the Spring configuration files are included in the @ContextConfiguration annotation
  • the @ActiveProfiles annotation has been left out, so now we need to specify the active Spring profile to use external to the test case. This can be done by setting the ‘spring.profiles.active’ property before running the test case, and there are various ways to do this.

For the sample code, if you are running the test case using the Eclipse IDE, then this could be set in the Run Configuration for the JUnit test class.

Eclipse JUnit run configuration
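
For instance, adding a VM argument along these lines (using the ‘hello’ profile from the sample code):

-Dspring.profiles.active=hello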

If you are using Gradle, then just set the property in the build script in the ‘test’ task.

test {
  systemProperties 'spring.profiles.active': 'hello'
}

The Sample Code

You can download the sample code for this example, SpringProfileTestingExample2.zip from GitHub.

Note that I have kept the sample code very simple for the sake of brevity, and also to more clearly illustrate the point. For instance, you may need to add the @DirtiesContext annotation to the test classes if you don’t want the injected beans to be cached between tests.

Conclusion

Using Spring bean definition profiles, we can make test cases reusable in the particular scenario where we are testing different implementations of a class.

If you only have a few tests to run, then this setup with the Spring profiles would be overkill.

However for my project, where I had many test cases, it allowed me to reuse them without having to duplicate them for each implementation of a class under test. Of course, it would also allow me to use those same test cases to test any future implementations of the class too.

Reusable Aspect Library for Android, Part 2

In part 1 of this post, we created an Android library containing aspects that could be used to intercept code in an application. However, the library would require code changes in order to work with other applications, and so was not really reusable as it stood.

In this part, we will modify the sample code to make the library more reusable across multiple applications.

Scenario 2 – The Pointcut in the Application

In this example, the aspect tracing code is kept in the aspect library project, but we will put the pointcut in the application.

Create the Projects

Create an Android application project and library project, and configure them as AspectJ projects in Eclipse, same as for part 1. Then add the library project to the application project, both as an Android library and to the application aspectpath.

Code Changes

1. Aspect Library

Once again put the AspectJ tracing code in the library project,  but this time the pointcut has been made abstract.

public abstract aspect BaseTraceAspect {

    /** Pointcut for tracing method flow tracing */
    protected abstract pointcut traceOperations();

    before() : traceOperations() {
        Signature sig = thisJoinPointStaticPart.getSignature();
        log("Entering [" + sig.toShortString() + "]");
    }

    /** Implement in subaspect for actual logging */
    protected abstract void log(String msg);
}

Notice I have also made the log method abstract; this leaves it up to the application to determine how it wants to log the tracing output. In general, any functionality that is specific to the application should be made abstract here and implemented in the application.

2. Test Application

In the application project, add some code that can be intercepted by the aspect, as for part 1.
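
For example, something along these lines (a sketch; the package and names just need to match the pointcut you define in the application's aspect below):

package au.com.example.test.app;

public class Worker {

    // a method for the trace aspect to intercept
    public void testMethod() {
        // normal application code ...
    }
}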

Then create another aspect file which subclasses the aspect in the library project. This aspect just implements the pointcut (and in this sample, the logging method as well).

public aspect TraceAspect extends BaseTraceAspect {

    private final static String TAG = "Aspect-Test";

    protected pointcut traceOperations(): execution(* au.com.example.test.**.testMethod(..));

    protected void log(String msg)
    {
        Log.i(TAG, msg);
    }
}

Run and Verify

Once again run the application and verify in LogCat that the aspect tracing has worked.

Sample app 2

LogCat tracing

The Sample Code

You can download the sample code for this example, aspect-lib-example_2.zip from GitHub. Follow the same instructions as for the previous example for running the sample app.

Conclusion

Now the aspect library project is reusable in other applications. An application would just need to include the library and then override the abstract pointcut to implement a concrete pointcut that is specific to it.

In the last part of this article, we will add another library to the application and show how to apply the aspects to it as well.