Best RxJava Book, so far

These days I tend to read blogs to catch up on the latest programming techniques, although I still read the occasional book. Every now and then I come across one that is absolutely brilliant, such as Reactive Programming with RxJava.

This is a great book for learning about RxJava as it goes way beyond just repeating the API and documentation. As well as the usual marble diagrams, the authors help you to take a peek under the hood to explain things like:

  • Use cases covering why and when to use various RxJava constructs.
  • Design decisions about why RxJava does things in certain ways.
  • Comparisons with non-Rx alternatives for asynchronous processing, e.g. imperative programming, Futures, manual threading, etc.
  • Integration with existing legacy code and 3rd party libraries.
  • The importance of the flatMap operator for asynchronous processing, as well as for flattening out nested Observables (see the sketch below).
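
As a quick illustration of that last point, flatMap lets each element kick off its own asynchronous Observable, with the results merged back into a single stream. A minimal RxJava 1 sketch, where loadUser() is a made-up helper simulating an asynchronous lookup:

import rx.Observable;
import rx.schedulers.Schedulers;

public class FlatMapExample {

    // made-up helper simulating an asynchronous lookup for a single id
    static Observable<String> loadUser(int id) {
        return Observable.just("user-" + id).subscribeOn(Schedulers.io());
    }

    public static void main(String[] args) {
        Observable.just(1, 2, 3)
                // flatMap subscribes to each inner Observable (potentially
                // concurrently) and flattens the results into one stream
                .flatMap(FlatMapExample::loadUser)
                .toBlocking()
                .forEach(System.out::println);
    }
}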

This just scratches the surface of what the book offers, but these topics were particularly informative for me as a beginner/intermediate RxJava programmer.

Caveat

The reason I put ‘so far’ in the title of this post is that the book documents RxJava 1; unfortunately, RxJava 2 was released soon after the book was published.

Having said that, most of the book’s content is still applicable, but you have to mentally translate the concepts and examples to the RxJava 2 API. Hopefully the authors will update the book for RxJava 2 in a future edition.
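
For example, where RxJava 1 has a single rx.Observable type, RxJava 2 moves backpressure-aware sources into a separate Flowable type. A minimal sketch of the RxJava 2 API:

import io.reactivex.Flowable;

public class Rx2Example {
    public static void main(String[] args) {
        // in RxJava 1 this would have been an rx.Observable; in RxJava 2
        // a source that supports backpressure is a Flowable instead
        Flowable.range(1, 5)
                .map(i -> i * 10)
                .subscribe(System.out::println);
    }
}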

Groovy and Dagger 2 Android Example

I decided to use the Groovy language for an Android app I was working on. Luckily it was fairly straightforward, but I noticed that there are far fewer blogs and examples demonstrating Groovy for Android development than there are for Java. Not only that, but many of them were out of date.

This is particularly true if you want to use libraries that involve more than straightforward coding, such as Dagger 2.

I was intending to write a simple proof-of-concept app to start with, to verify that I could use Dagger 2 and Groovy together. Then I found this example app on GitHub.

https://github.com/cvoronin/android-groovy-swissknife-dagger2

However, this simple Dagger 2 example did not show how to use the @Inject annotation (which is my preferred way to do dependency injection into activities), so I created a fork that did.

https://github.com/davidwong/android-groovy-swissknife-dagger2

Note that you will need to use an up-to-date version of Groovy (2.4.x) to get Groovy and Dagger 2 working on Android.
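
For reference, the Groovy dependency in an Android build.gradle looks something like this (the grooid classifier is the Android packaging of Groovy; the exact version shown is illustrative):

dependencies {
    compile 'org.codehaus.groovy:groovy:2.4.3:grooid'
}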

Code Changes

I only had to make a few code changes to use @Inject; you can check the commits in the project to see what they were.

Firstly, in the component interface (in the example, it is called demo.simplegroovyapp.component.VehicleComponent), I added a statement that would inject the dependency objects into the activity.


void inject(MainActivity mainActivity)

Then, in the activity class (demo.simplegroovyapp.MainActivity), I added the @Inject annotation to the field that was to be injected. The other important change I made was to add the public access modifier; I’ll explain later why this is necessary.

In other words, from this:


Vehicle vehicle

to this:


@Inject
public Vehicle vehicle

In the onCreate() method, this line was added to inject the dependencies into the activity.


vehicleComponent.inject(this)

I also commented out the statement that was previously used to manually retrieve the Vehicle object, as it was no longer necessary.


//vehicle = vehicleComponent.provideVehicle()
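
Putting the pieces together, the relevant part of the activity ends up looking roughly like this. DaggerVehicleComponent is the implementation that Dagger generates from the VehicleComponent interface; treat the wiring as a sketch, since create() assumes the component’s modules have no-arg constructors:

class MainActivity extends Activity {

    @Inject
    public Vehicle vehicle

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // build the Dagger component and inject the annotated fields
        VehicleComponent vehicleComponent = DaggerVehicleComponent.create()
        vehicleComponent.inject(this)

        // vehicle has now been injected and is ready to use
    }
}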

That’s it!

Property or Field?

In Groovy it is quite common to see data members in a class defined without an access modifier (public, protected, private).


Vehicle vehicle

Likewise in many Dagger 2 Java examples I’ve seen, the @Inject annotation is used on fields without the access modifier.


@Inject

MyPresenter mypresenter;

However when this is done in Groovy, you are specifying a property, not a field.

http://groovy-lang.org/objectorientation.html#_fields_and_properties

A Groovy property compiles to a private backing field with a generated getter and setter, so Dagger can’t inject the Vehicle object into the activity, even with the @Inject annotation. Once the public access modifier is added, the declaration becomes a plain public field and Dagger works as expected.
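
A side-by-side sketch of the two declarations (field names are illustrative):

class MainActivity extends Activity {

    // no modifier: a Groovy property, compiled to a private backing field
    // plus a generated getter/setter, so Dagger's generated code cannot
    // assign it directly
    Vehicle vehicleAsProperty

    // public modifier: a plain field that stays visible to Dagger, so
    // @Inject field injection works
    @Inject
    public Vehicle vehicle
}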

 

Protractor Testing with Google Map Markers and Markerclusterers, Part 3

In this final part of the post, we will locate the cluster markers (from the Markerclusterer or MarkerclustererPlus library) in a Google map. This is in the context of e2e testing an AngularJS web application using Protractor.

The first part of the post was a brief introduction, while part 2 showed how to locate Google Maps markers.

Markerclusterer

The cluster markers have this DOM structure.

<div class="cluster">
    <img />
    <div>10</div>
</div>

Once again it is just a case of finding the right xpath expression to use as the locator. If we were only interested in the number of cluster markers on the map, we could just use the count() utility method on the ElementArrayFinder, as we did for getting a count of the Google Maps markers.

element.all(By.xpath("//div[@class=\"cluster\"]/div")).count();

However for the application I was working on, I needed the total number of markers represented by all the cluster markers.

The cluster marker div structure has an inner div that contains a number: the number of Google Maps markers that are not shown but are represented by the cluster. So here we have to get this number from all the cluster markers and add the values up.

ElementArrayFinder has various functions, such as each() and map(), that would allow us to iterate through the cluster markers and extract the information we need to total the numbers. Luckily it also has a reduce() function that does exactly what we need; here is an example of a spec test using it.

var checkClusterNumberCount = function(expectedCount) {
  element.all(By.xpath("//div[@class=\"cluster\"]/div")).reduce(function(accum, elem) {
    // getText() returns a promise, so the running total is built up in its callback
    return elem.getText().then(function(text) {
      var num = parseInt(text, 10);
      return accum + num;
    });
  }, 0).then(function(result) {
    expect(result).toEqual(expectedCount);
  });
};

The explanation for this function:

1. The xpath expression is used to locate all the cluster markers in the DOM.

By.xpath("//div[@class=\"cluster\"]/div")

2. The function element.all() returns an ElementArrayFinder, and its reduce() function is called. In this case it is passed our own reduce callback that has two parameters:

  • accum holds the accumulated number, which is initially set to zero
  • elem is an ElementFinder from the ElementArrayFinder

3. For each ElementFinder we call getText() to get the value from the inner div. This value is converted to a number and added to the accumulated total. Notice that since getText() returns a promise, we need another callback to read the value.

4. We then use the expect() function to compare the final accumulated number to the number that we were expecting in order to pass the test.

Final Tip

Initially, after I ran the code to count the markers (both the displayed markers and the ones contained in the cluster markers), I found that, although the code was working properly, some of the tests would occasionally fail. Eventually I worked out that even though the AngularJS code may have finished running by the time the callbacks fired, I still sometimes needed to wait for the markers and markerclusterers to finish loading in the map.

I worked around this by adding a small delay before trying to locate the markers, e.g.
browser.sleep(…);

This is a bit of a hack, but unfortunately I’m not aware of any way to get a notification from Google maps when markers have finished loading.

Protractor Testing with Google Map Markers and Markerclusterers, Part 2

Part 1 of this post was a brief introduction about the Protractor spec I was working on, where I had to locate markers and cluster markers in a Google map. In this second part, there are some tips on how to find those Google Maps markers.

Firstly, I must give credit to this blog post, which had similar ideas about finding Google Maps markers for Selenium testing.

http://tech.adstruc.com/post/34230170061/selenium-testing-google-maps

Configure the marker

Most importantly, when the marker is created it must be configured as being unoptimized. This means the markers are created as elements that can be located in the DOM.

 var marker = new google.maps.Marker({
   position: latLng,
   title: 'your title',
   optimized: false
 });

Be aware that using unoptimized markers should only be done for development and testing, as it significantly affects performance.

In the spec test we can use xpath to find the divs that represent the markers, but the specific xpath expression will vary depending on various factors, such as whether the marker has events attached to it. For example, you may want to have a click event attached to the markers, so that something happens when the user clicks on them.

google.maps.event.addListener(marker, 'click', function() {
  // do something
});

Another factor that affects how the DOM structure for a marker is rendered is the platform and browser that the web page with the map is running on.

The best way to formulate the xpath expression you want to use as the locator for the markers is to use a web inspection tool to have a look at the DOM element(s) for the marker. This should be done for the browsers and platforms that you want to support.

Markers without Map Areas

The first marker DOM structure has a div that looks like this.

<div title="your title" class="gmnoprint">
  <img />
</div>

Some situations where the markers have this structure include:

  • Chrome (Windows), markers without events
  • Firefox (Windows), markers without events
  • Chrome (Android), markers with events

In the example I’m using for this post, the test spec is locating the markers in order to get a count of all the markers in a map.

element.all(By.xpath("//div[@class=\"gmnoprint\" and @title]")).count();

Several things to note here:

1. The xpath expression is used to locate all the markers in the DOM.

By.xpath("//div[@class=\"gmnoprint\" and @title]")

2. The function element.all() returns an ElementArrayFinder and has various utility methods such as count().

3. Since the methods on ElementArrayFinder return promises, if you need to get values from the marker elements you need to use a callback.

For instance if you wanted to get the titles from the markers:

var titles = element.all(By.xpath("//div[@class=\"gmnoprint\" and @title]")).map(function(elem, index) {
  return {
    index : index,
    title : elem.getAttribute('title')
  };
});
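
The result is itself a promise, and Protractor resolves the promises inside the mapped objects, so the values are read in a callback. A short usage sketch:

titles.then(function(results) {
  results.forEach(function(marker) {
    console.log(marker.index + ': ' + marker.title);
  });
});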

Markers with Map Areas

Another marker structure is where the div contains an <area> tag inside a <map> tag.

<div class="gmnoprint">
  <img />
  <map>
    <area title="your title" />
  </map>
</div>

You may encounter this DOM structure in the following:

  • Chrome (Windows), markers with events
  • Firefox (Windows), markers with events

Once again we can get a count of the markers using an xpath expression that matches the DOM structure for these markers.

element.all(By.xpath("//div[@class=\"gmnoprint\"]/map/area[@title]")).count();

In the final part of this post, I will show how to find the cluster markers in the map, and also how to find the number of Google maps markers represented by each cluster.

Protractor Testing with Google Map Markers and Markerclusterers, Part 1

While doing e2e testing on an AngularJS app using Protractor, I came across the need to find the markers in a Google map within the app. This was further complicated by the fact that we were using MarkerclustererPlus, which meant the map could contain a mixture of single markers and cluster markers.

This first part is just a bit of an introduction, so if you want, you can go straight to Part 2, which shows how to find the Google Maps markers in a Protractor spec, or Part 3 for working with the markerclusterer.

What is a Markerclusterer?

If you are not familiar with markerclusterers: Markerclusterer is a Google Maps utility library that deals with maps that have too many markers, or are too cluttered, by combining markers that are close together into cluster markers. Have a look at the example page:

http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/docs/examples.html

Also note that there are actually two libraries, Markerclusterer and MarkerclustererPlus.

I’m assuming that the reader is already familiar with setting up and using Protractor for testing AngularJS applications.

AngularJS and Google Maps

Being an AngularJS app, I decided to use a directive to create the Google map.

There are a few AngularJS map directive libraries around, but the one I decided upon was ng-map. The advantage of this particular library is that although you can just use its tags to create the map, it also allows you to use the Google Maps V3 JavaScript API directly. This is very useful: for instance, even though there was a markerclusterer tag, I just wrote the code in JavaScript, which was more flexible and easier to debug (the author of the library also seems to recommend this approach for complicated code).

So for this application, the map libraries that were used were:

  • the Google Maps V3 JavaScript API
  • the ng-map AngularJS directive library
  • MarkerclustererPlus

Count the Markers …

The spec file I was working on needed to count the number of markers that were displayed on the map. Now as I mentioned earlier, because I was using the markerclusterer library, the markers could appear as single markers (i.e. the default Google Maps markers) or as cluster markers, and the number of markers and clusters would vary depending on the zoom level of the map.

Therefore in the tests I needed code to find:

  • single markers
  • cluster markers
  • the number of markers contained in each cluster marker

In the next post, I will show how to find the single markers displayed on the map. All you will need are web tools that can inspect elements in a web page, such as Firebug or the Chrome Developer Tools.

Setup JRebel with Tomcat and Docker

It’s fairly straightforward to install JRebel to run on a local instance of Tomcat; here is one way of installing it on Tomcat running in a Docker container instead. This article assumes a basic knowledge of using Docker.

For this particular example I’m using:

  • the Eclipse IDE installation of JRebel
  • the ‘official’ Tomcat 8 image from the Docker hub

Install JRebel in the IDE

I’m using the Eclipse IDE, but there are instructions on the ZeroTurnaround website for using a different IDE or for installing it standalone.

1. For Eclipse, follow these instructions just to install and activate JRebel for the IDE:

https://zeroturnaround.com/software/jrebel/quickstart/eclipse/#!/server-configuration

2. We need the JRebel agent (jrebel.jar), which we will install into Tomcat.

You can either get this from the JRebel plugin you have just installed into Eclipse (look for the section titled ‘Where do I find jrebel.jar?’);

http://zeroturnaround.com/software/jrebel/learn/remoting/eclipse/

OR you can get it from an archive

https://zeroturnaround.com/software/jrebel/download/prev-releases/

(Note that for Tomcat 8, please use the legacy version of jrebel.jar which is found in the lib sub-directory of the zip archive.)

Install JRebel in the Application Server

1. Get the base Tomcat Docker image from Docker Hub.

docker pull tomcat:xxx

Here xxx is the specific version of Tomcat you want to use as the base image, e.g. 8.0.23-jre7, 8-jre8, etc. You can find the list of available tags in the Tomcat repository on Docker Hub:

https://hub.docker.com/_/tomcat

2. Since we are using Docker to run the application server, we will need to run JRebel in remote mode. There are generic instructions on JRebel remoting, which we can adapt to a Docker environment. So what we want to do is create a custom Docker image, based on the Tomcat image, which incorporates the JRebel configuration.

2.1 Create an empty directory and copy the JRebel agent jrebel.jar to it.

2.2 Create a Dockerfile to build your custom Tomcat image, for example:
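
A minimal sketch of what that Dockerfile might look like (the base image tag and the remoting flag are assumptions; adjust them to your Tomcat version and JRebel setup):

FROM tomcat:8-jre8

# copy the JRebel agent into the image
ADD jrebel.jar /jrebel/jrebel.jar

# load the agent and enable JRebel remoting when Tomcat starts
ENV JAVA_OPTS="-javaagent:/jrebel/jrebel.jar -Drebel.remoting_plugin=true"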

Note that for simplicity, I have just added the JRebel agent to the directory /jrebel. You can use a different directory, as long as the -javaagent configuration can find it.

Also, you can take this opportunity to do further customizations on the Tomcat server, e.g. if you want to add your list of users, then copy your version of tomcat-users.xml to the Tomcat config directory by adding this line to the Dockerfile:

ADD tomcat-users.xml /usr/local/tomcat/conf/

2.3 Build and run the customized Tomcat server (using your own repository name, image name and container name to replace the values in this example).

docker build -t your_repository/tomcat-jrebel .

docker run -i -t -d --name mytomcat -p 8080:8080 your_repository/tomcat-jrebel

We can verify that the JRebel configuration has been included in Tomcat by checking the startup logs.

docker logs mytomcat

We should be able to see the JRebel version and licensing information.

2015-05-22 10:38:40 JRebel:  #############################################################
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  JRebel Legacy Agent 6.2.0 (201505201206)
2015-05-22 10:38:40 JRebel:  (c) Copyright ZeroTurnaround AS, Estonia, Tartu.
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  Over the last 1 days JRebel prevented
2015-05-22 10:38:40 JRebel:  at least 0 redeploys/restarts saving you about 0 hours.
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  Server is running with JRebel Remoting.
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  
2015-05-22 10:38:40 JRebel:  #############################################################

Tip: Build Your Own
Of course, you can combine these two steps into one by creating your own Tomcat image from scratch instead of using the ‘official’ Tomcat image as a base.

Configure the IDE

Finally we need to configure Eclipse to work with the Tomcat server that we have running in docker. You can do that by following these instructions.

This is a brief summary of the steps:

  1. In Eclipse, right-click on your project, select JRebel -> Add JRebel Nature
  2. Right-click on your project again, select JRebel -> Enable remote server support
  3. Right-click on your project again, select JRebel -> Advanced Properties
  4. In the dialog that pops up, click on the “Edit” button next to the “Deployment URLs” text box
  5. Click on “Add” and enter the URL of the application; it will be something like “http://your_docker_host:8080/app_name”
  6. Click on “Continue”, “Apply”, and then “OK”.

Once the app is deployed, any changes you make in the IDE should now be reflected in the server running in the docker container.

No restarts, no redeploys, just code.

Pluggable Tools with Docker Data Containers

Some apps have a simple installation process. When using them with other applications in Docker, they can be installed in their own data volume container and used in a pluggable way.

The kind of apps I’m talking about are some Java apps (and in fact, Java itself) which follow this installation process:

  1. Install the contents of the app into a single directory
  2. Set an environment variable to point to the installation directory, e.g. XXX_HOME
  3. Add the executables of the app to the PATH environment variable

That’s it.

An example of an app installation that follows this pattern is Gradle:

  1. Uncompress the Gradle files from an archive to a directory.
  2. Set the environment variable GRADLE_HOME to point to the Gradle installation directory
  3. Add GRADLE_HOME/bin to the PATH

Docker

Using Gradle as an example, here is a Dockerfile that installs it in a data volume container:

# Install Gradle as a data volume container. 
#
# The app container that uses this container will need to set the Gradle environment variables:
# - set GRADLE_HOME to the gradle installation directory
# - add the /bin directory under the gradle directory to the PATH

FROM mini/base

MAINTAINER David Wong

# setup location for installation
ENV INSTALL_LOCATION /opt

# install Gradle version required
ENV GRADLE_VERSION 2.2.1

WORKDIR ${INSTALL_LOCATION}
RUN curl -L -O http://services.gradle.org/distributions/gradle-${GRADLE_VERSION}-bin.zip && \
    unzip -qo gradle-${GRADLE_VERSION}-bin.zip && \
    rm -rf gradle-${GRADLE_VERSION}-bin.zip
    
# to make the container more portable, the installation directory name is changed from the default
# gradle-${GRADLE_VERSION} to just gradle, with the version number stored in a text file for reference
# e.g. instead of /opt/gradle-2.2.1, the directory will be /opt/gradle

RUN mv gradle-${GRADLE_VERSION} gradle && \
    echo ${GRADLE_VERSION} > gradle/version
    
VOLUME ${INSTALL_LOCATION}/gradle

# echo to make it easy to grep
CMD /bin/sh -c '/bin/echo Data container for Gradle'

(From github https://github.com/davidwong/docker/blob/master/gradle/Dockerfile)

Build the image and container from the Dockerfile. Here I’ve tagged the image with the version number of the Gradle installation, and named the container gradle-2.2.1.


docker build -t yourrepo/gradle:2.2.1 .

docker run -i -t --name gradle-2.2.1 yourrepo/gradle:2.2.1

A few things to note about this installation:

  • I have changed the directory name where Gradle is installed from the default, removing the version number in order to make it generic.
  • No environment variables have been set; that will be done later.
  • You can use any minimal image as the basis for the container; it just needs curl or wget in order to download the Gradle archive file.

Now we have the Gradle installation in a Docker data volume that can be persisted and shared by other containers.

You can then repeat this process with different versions of Gradle to create separate data containers for each version (of course giving the containers different names, e.g. gradle-2.2.1, gradle-1.9, etc.).

Use Case

I originally got this idea when I was running my Jenkins CI docker container. Some of the Jenkins builds required Gradle 2.x while others were using Gradle 1.x.

So instead of building multiple Jenkins + Gradle images for the different versions of Gradle required, I can now just run the Jenkins container with the appropriate Gradle data container. This is done by using --volumes-from to get access to the Gradle installation directory and setting the required environment variables.

To use the data container with Gradle 2.2.1 installed:

docker run -i -t --volumes-from gradle-2.2.1 -e GRADLE_HOME=/opt/gradle -e PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/gradle/bin myjenkins

To use the one with Gradle 1.9:

docker run -i -t --volumes-from gradle-1.9 -e GRADLE_HOME=/opt/gradle -e PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/gradle/bin myjenkins

Of course there are limitations to this technique, since Docker data volume containers were designed to share persistent data rather than application installs. In particular, they do not allow sharing of environment variables.

However, this workaround can be useful in some circumstances.

Backup a Docker Data Container with Fig

I have been using data volume containers to persist data in Docker containers. There are various reasons why this tends to be a better option than just using data volumes, but probably the most important is portability.

Of course, now we have to back up the data in the data containers. This can be for archiving, or when the containers that use the data need to be upgraded or recreated. If your backup requirements are simple, you can just use the docker cp command or something like tar.

A Jenkins example

As a simple example, let’s run a Jenkins server in a docker container and use a data volume container to persist its data.

1. Pull or build a Jenkins image from the official repository.

http://jenkins-ci.org/content/official-jenkins-lts-docker-image

2. The Jenkins image uses the directory /var/jenkins_home as the volume to store its data, so we need a data volume container for that volume. Here is a sample of a Dockerfile to build the data container:
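
A minimal sketch, following the same pattern as the Gradle data container above (the busybox base image is an assumption):

FROM busybox

# declare the Jenkins home directory as the shared data volume
VOLUME /var/jenkins_home

# echo to make it easy to grep
CMD /bin/sh -c '/bin/echo Data container for Jenkins'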

Build and tag the image from the Dockerfile.

docker build -t your_repository:jenkins-data .

You can now create the data container, giving it a name for convenience. Optionally, we can run the docker ps command afterwards to check that the container has been created; it should be in a stopped state.

docker run -i -t --name jenkins-data your_repository:jenkins-data
docker ps -a

3. Run the Jenkins server with the data container attached and make some changes, e.g. create a job, etc. The Jenkins data volume should have your changes in it now.

docker run --name=jenkins-sample -p 8080:8080 --volumes-from=jenkins-data jenkins

4. For this example we will use tar to back up the data container, using this command to create a temporary container that can access the data container’s volume.

docker run --rm --volumes-from jenkins-data -v $(pwd):/backup busybox tar cvf /backup/jenkins_backup.tar /var/jenkins_home

There should now be a file jenkins_backup.tar in the current directory. Of course, for real usage we would probably run this command from a script and make it generic, so that it can back up any data volume container.

I do give a fig …

Something else I use for development with Docker is the orchestration tool Fig (this has saved me a lot of typing!). So here is an example of doing the same backup on the Jenkins data container using Fig.

1. Create a Fig YAML file (fig.yml), using the same information that we used in the backup command.
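
A sketch of what the fig.yml might contain, mirroring the tar command above (the service name is illustrative):

backup:
  image: busybox
  volumes_from:
    - jenkins-data
  volumes:
    - .:/backup
  command: tar cvf /backup/jenkins_backup.tar /var/jenkins_home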

2. Run Fig, that’s it!

fig up

This is a simple example that has only scratched the surface of what can be done with Docker (and Fig). If the backup requirements for the data are more complex, then you could also consider creating a dedicated container just for doing backups, with all the required tools installed in it.

The great thing about Docker is that once everything has been setup, you can get applications such as Jenkins up and running very quickly.

Another Defection to Android Studio

Like many other developers out there, I have been using Eclipse as my main IDE for many years now. However for Android development I have decided to take the plunge and migrate to Android Studio (especially since it has finally been released).

Here is a blog post I found that closely echoes what I have long thought regarding the issues with Eclipse:

http://engineering.meetme.com/2014/02/a-tale-of-migrating-from-eclipse-to-android-studio/

Build, build, build

For me, another reason was that the Ant build files I was using to handle building different versions (free vs paid, dev vs release, etc) were getting too complicated to manage easily. So I can now change over to Gradle at the same time, since that’s what Android Studio uses by default.

Gradle has the concept of build variants to handle building different versions of an Android app.
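
For example, combining free/paid product flavors with the standard debug/release build types yields four variants. A sketch of the build.gradle syntax (the flavor names and application ids are illustrative):

android {
    productFlavors {
        free {
            applicationId 'com.example.myapp.free'
        }
        paid {
            applicationId 'com.example.myapp.paid'
        }
    }
}

Each flavor then combines with the debug and release build types, giving build tasks such as assembleFreeDebug and assemblePaidRelease.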

The Recurring Eclipse Re-install

Here are some other problems that I personally have had with using Eclipse.

  • Plugins, well not the plugins themselves, but having too many plugins. I’ve found that having lots of plugins in one Eclipse installation can cause Eclipse to misbehave, especially after several updates. There are several ways I use to get around this:
    • Keep separate Eclipse installations for different types of development, e.g. one for Java, one for Android, one for Cloud, etc. Therefore each installation will only have a few plugins relevant to that type of development. However, this is not always convenient if a project requires multiple types of development.
    • Every so often, when Eclipse starts to play up, do a fresh re-install of Eclipse (along with the latest version of the plugins required).
  • Intermittent miscellaneous bugs, e.g. cut and paste stops working, builds not always done automatically, etc. A lot of these issues are more of a nuisance than a serious problem, but all the same they tend to kill your productivity (and isn’t that why we use IDEs in the first place?).

No Pain, No …

Make no mistake, despite what the Android Studio documentation might try to tell you, migrating a non-trivial project will take some time and probably involve some pain. But it’s worth the effort, I think.

User changes for Address Location Finder

I’m currently working on upgrading my app Address Location Finder. While most of the changes are internal improvements or bug fixes, there are two major changes for users.

1. The map will be dropped from the app.

The simple built-in map screen will be removed from the app for the next version. In the future it will come back as an optional add-on.

Why?

I was getting quite a few error reports from users trying to run the app on devices that did not meet the mapping requirements.

One of the requirements stated in the Google Play app store for the app was:

– device that supports the standard Google Mapping API (not the same as having the Google Maps app installed)

This means the manufacturer needs to have licensed the Google Mapping API v1 for the device in order for the map in Address Location Finder to work. Unfortunately, it may be difficult for users to know whether their device meets this requirement, possibly resulting in the app crashing.

So, removing the map from the app will remove this requirement and allow it to run on more devices (as well as reducing app crashes).

The map will become an optional add-on app for a future version of Address Location Finder.

2. The app will require Android 4.0.3+ to run.

The minimum API level to run the app will be raised to 15, which means it will now require Android version 4.0.3+ on the device to run.

Why?

The original minimum API level required for Address Location Finder was 8 (Android 2.2).

According to the Google dashboard for platform versions, devices running Android 2.x now account for only about 10% of devices.

Supporting old versions requires a fair bit of additional work:

  • needs more testing
  • requires additional code, e.g. the Android support library, to implement newer Android functionality
  • may require internal version checks for functions that would not work on old versions, possibly with alternate code

This all means it takes longer for updates of the app to come out. The diminishing returns from supporting the old versions are not really worth the extra effort and time.

Users with Android versions older than 4.0.3 who have already installed the app can just continue to use the old version. They should not even see the new version when it comes to the Google Play store, since the store applies various filters to determine which apps to display (including the minimum Android version required to run).