Testing Errors and Exceptions Using Unittest in Open Event Server

Like all other helper functions in FOSSASIA's Open Event Server, we also need to test the exception and error helper functions and classes. The error helper classes are mainly used to create error handler responses for known errors. For example, we know that error 403 means Access Forbidden, but we want to send a proper source and a proper error message along with it to help identify and handle the error; hence we use the error classes. To ensure that future commits do not change these mappings, we implemented unit tests for the errors.

There are mainly two kinds of error classes: HTTP status errors and exceptions. Depending on the type of error we get in the try-except block for a particular API, we raise that particular exception or error.
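For instance, a rough sketch of how one of these helpers might be raised inside a try-except block could look like the snippet below; the Ticket model and the query are placeholders and not the actual server code.

from sqlalchemy.orm.exc import NoResultFound

def fetch_ticket(ticket_id):
    # Hedged sketch: Ticket is a placeholder model; NotFoundError is the
    # error helper class discussed in this post.
    try:
        return Ticket.query.filter_by(id=ticket_id).one()
    except NoResultFound:
        # Known HTTP error (404): respond with a readable message
        # instead of a bare traceback.
        raise NotFoundError({'source': ''}, 'Ticket not found.')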

Unit Test for Exception

Exceptions are written in this form:

@validates_schema
def validate_quantity(self, data):
    if 'max_order' in data and 'min_order' in data:
        if data['max_order'] < data['min_order']:
            raise UnprocessableEntity({'pointer': '/data/attributes/max-order'},
                                      "max-order should be greater than min-order")

 

This error is raised whenever the data sent in a POST or PATCH request is unprocessable. For example, this is how we raise this error:

raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'},
                          "min-quantity should be less than max-quantity")

This exception is raised because of a validation error in the data: the maximum quantity should be more than the minimum quantity.

To test that the above line indeed raises an UnprocessableEntity exception with status 422, we use the assertRaises() function. Following is the code:

 def test_exceptions(self):
        # Unprocessable Entity Exception
        with self.assertRaises(UnprocessableEntity):
            raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'},
                                      "min-quantity should be less than max-quantity")


In the above code, with self.assertRaises() creates a context for the given exception type, so that when the next line raises an exception, it is compared against the expected one. This ensures that the correct exception is being raised.
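If we also want to inspect the raised exception itself, assertRaises() can capture it for us. Below is a minimal sketch, assuming the helper exposes a status attribute of 422 as described above (the attribute name is an assumption here; it is the same one used for the error classes later in this post).

def test_exception_status(self):
    # Capture the raised exception so its attributes can be checked too
    with self.assertRaises(UnprocessableEntity) as context:
        raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'},
                                  "min-quantity should be less than max-quantity")
    # Assumes the helper carries the HTTP status it maps to
    self.assertEqual(context.exception.status, 422)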

Unit Test for Error

In the error helper classes, for known HTTP status codes we return a response that is readable and understandable by the user. This is how we raise an error:

ForbiddenError({'source': ''}, 'Super admin access is required')

This is basically the 403: Access Forbidden error, but with the “Super admin access is required” message it becomes far clearer. However, we need to ensure that the status code returned along with this message stays 403 and is not unintentionally modified in the future.
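Internally, such an error helper can be as small as a subclass that pins down the status code and title. The following is only a rough sketch of the idea, not the exact server implementation.

class ErrorResponse(Exception):
    # Illustrative base class for readable error responses
    status = 500
    title = 'Unknown error'

    def __init__(self, source, detail):
        self.source = source    # e.g. {'source': ''} or a JSON pointer
        self.detail = detail    # human readable message


class ForbiddenError(ErrorResponse):
    status = 403
    title = 'Access Forbidden'

With the status fixed on the class, a unit test only needs to construct the object and compare the attribute, which is exactly what we do next.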

Here, errors work a little differently from exceptions. When we declare a custom error class, we don't really raise that error; instead, we return it as a response. So we can't use the assertRaises() function. What we can do, however, is compare the status code and ensure that the error is the same as the expected one. So we do this:

def test_errors(self):
        with app.test_request_context():
            # Forbidden Error
            forbidden_error = ForbiddenError({'source': ''}, 'Super admin access is required')
            self.assertEqual(forbidden_error.status, 403)

            # Not Found Error
            not_found_error = NotFoundError({'source': ''}, 'Object not found.')
            self.assertEqual(not_found_error.status, 404)


Here we first create an object of the error class ForbiddenError with a sample source and message. We then use the assertEqual() function to assert that the status attribute of this object is 403, which ensures that this error is of the Access Forbidden type, as expected.
This helps us ensure that nobody unknowingly or mistakenly changes the error messages and status codes in the future, so that the HTTP status codes in the responses stay correct.


Resources:

Open Event Server: Testing Image Resize Using PIL and Unittest

FOSSASIA's Open Event Server project uses a set of functions to resize an image from its original size to, for example, a thumbnail, icon or larger image. How do we test these image resizing functions in the Open Event Server project? To test the resizing functionality, we need to verify that the resized image dimensions are the same as the dimensions provided for the resize. For example, in this function we provide the url of the image that we received, and it creates and saves a resized version.

def create_save_resized_image(image_file, basewidth, maintain_aspect, height_size, upload_path,
                              ext='jpg', remove_after_upload=False, resize=True):
    """
    Create and Save the resized version of the background image
    :param resize:
    :param upload_path:
    :param ext:
    :param remove_after_upload:
    :param height_size:
    :param maintain_aspect:
    :param basewidth:
    :param image_file:
    :return:
    """
    filename = '{filename}.{ext}'.format(filename=get_file_name(), ext=ext)
    image_file = cStringIO.StringIO(urllib.urlopen(image_file).read())
    im = Image.open(image_file)

    # Convert to jpeg for lower file size.
    if im.format != 'JPEG':
        img = im.convert('RGB')
    else:
        img = im

    if resize:
        if maintain_aspect:
            width_percent = (basewidth / float(img.size[0]))
            height_size = int((float(img.size[1]) * float(width_percent)))

        img = img.resize((basewidth, height_size), PIL.Image.ANTIALIAS)

    temp_file_relative_path = 'static/media/temp/' + generate_hash(str(image_file)) + get_file_name() + '.jpg'
    temp_file_path = app.config['BASE_DIR'] + '/' + temp_file_relative_path
    dir_path = temp_file_path.rsplit('/', 1)[0]

    # create dirs if not present
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)

    img.save(temp_file_path)
    upfile = UploadedFile(file_path=temp_file_path, filename=filename)

    if remove_after_upload:
        os.remove(image_file)

    uploaded_url = upload(upfile, upload_path)
    os.remove(temp_file_path)

    return uploaded_url


In this function, we send the image url, the width and height to resize to, and the aspect ratio flag (True or False), along with the folder to save to. For this blog, we will assume the aspect ratio is False, which means we do not maintain the aspect ratio while resizing. Given these parameters, we get back the url of the resized image that was saved.
To test whether it has been resized to the correct dimensions, we use Pillow, or as it is popularly known, PIL. We write a separate function named getsizes() which takes the image file as a parameter. Using the Image module of PIL, we open the file as a JpegImageFile object. The JpegImageFile object has a size attribute which holds (width, height), so we return that attribute from this function. Following is the code:

def getsizes(self, file):
        # get file size *and* image size (None if not known)
        im = Image.open(file)
        return im.size


Now that we have this function, it is time to look at the unit test itself. In the unit test we set a dummy width and height to resize to, and set the aspect ratio to False as discussed above. This lets us test that both width and height are properly resized. We are using a Creative Commons licensed image for resizing. This is the code:

def test_create_save_resized_image(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            width = 500
            height = 200
            aspect_ratio = False
            upload_path = 'test'
            resized_image_url = create_save_resized_image(image_url_test, width, aspect_ratio, height, upload_path, ext='png')
            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_width, resized_height = self.getsizes(resized_image_file)


In the above code, create_save_resized_image returns the url of the resized image. Since we have written all the unit tests for local settings, we get a url with localhost as the server. However, we don't have the server running, so we can't access the image through the url. So we build the absolute path to the image file from the url and store it in resized_image_file. Then we find the size of the image using the getsizes function that we have already written. This gives us the width and height of the newly resized image. We now assert that the width we wanted to resize to is equal to the actual width of the resized image, and we make the same check with the height as well. If both match, the resizing function has worked perfectly. Here is the complete code:

def test_create_save_resized_image(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            width = 500
            height = 200
            aspect_ratio = False
            upload_path = 'test'
            resized_image_url = create_save_resized_image(image_url_test, width, aspect_ratio, height, upload_path, ext='png')
            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_width, resized_height = self.getsizes(resized_image_file)
            self.assertTrue(os.path.exists(resized_image_file))
            self.assertEqual(resized_width, width)
            self.assertEqual(resized_height, height)


In the Open Event Orga Server, we use this resize function to create three resized images in various modules, such as events, users, etc. The three sizes are named Large, Thumbnail and Icon. Depending on which one is more suitable, we use it to avoid loading a very big image for a very small div. The exact width and height for these three sizes can be changed from the admin settings of the project. We use the same technique as mentioned above and check the sizes for all of them. Here is the code:

def test_create_save_image_sizes(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            image_sizes_type = "event"
            width_large = 1300
            width_thumbnail = 500
            width_icon = 75
            image_sizes = create_save_image_sizes(image_url_test, image_sizes_type)

            resized_image_url = image_sizes['original_image_url']
            resized_image_url_large = image_sizes['large_image_url']
            resized_image_url_thumbnail = image_sizes['thumbnail_image_url']
            resized_image_url_icon = image_sizes['icon_image_url']

            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large.split('/localhost')[1]
            resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail.split('/localhost')[1]
            resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon.split('/localhost')[1]

            resized_width_large, _ = self.getsizes(resized_image_file_large)
            resized_width_thumbnail, _ = self.getsizes(resized_image_file_thumbnail)
            resized_width_icon, _ = self.getsizes(resized_image_file_icon)

            self.assertTrue(os.path.exists(resized_image_file))
            self.assertEqual(resized_width_large, width_large)
            self.assertEqual(resized_width_thumbnail, width_thumbnail)
            self.assertEqual(resized_width_icon, width_icon)

Resources:

Creating Unit Tests for File Upload Functions in Open Event Server with Python Unittest Library

In FOSSASIA's Open Event Server, we use the Python unittest library for unit testing various modules of the API code. The unittest library provides us with various assertion functions to compare the actual and the expected values returned by a function or a module. In normal modules, we simply use these assertions to compare the result, since the functions mostly take normal data types as input. However, one very important area for unit testing is file uploading. We cannot really send a particular file or similar payload to the function to unit test it properly, since it expects request.files-style data, which is obtained only when a file is uploaded or sent as a request to an endpoint. For example, in this function:

def uploaded_file(files, multiple=False):
    if multiple:
        files_uploaded = []
        for file in files:
            extension = file.filename.split('.')[1]
            filename = get_file_name() + '.' + extension
            filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
            if not os.path.isdir(filedir):
                os.makedirs(filedir)
            file_path = filedir + filename
            file.save(file_path)
            files_uploaded.append(UploadedFile(file_path, filename))

    else:
        extension = files.filename.split('.')[1]
        filename = get_file_name() + '.' + extension
        filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
        if not os.path.isdir(filedir):
            os.makedirs(filedir)
        file_path = filedir + filename
        files.save(file_path)
        files_uploaded = UploadedFile(file_path, filename)

    return files_uploaded


So we need to create a mock uploading system to replicate this. Inside the unit test function we create an API route, scoped to this test, that accepts a file as a request. Following is the code:

@app.route("/test_upload", methods=['POST'])
        def upload():
            files = request.files['file']
            file_uploaded = uploaded_file(files=files)
            return jsonify(
                {'path': file_uploaded.file_path,
                 'name': file_uploaded.filename})


The above code creates an app route with the endpoint test_upload. It accepts request.files, sends this object to the uploaded_file function (the function to be unit tested), gets the result, and returns it in JSON format.
With this we have the endpoint to mock a file upload ready. Next we need to send a request with a file object. We cannot send normal data, which would be treated as request.form; we want it to be received in request.files. So we create two classes inheriting from existing classes.

def test_upload_single_file(self):

        class FileObj(StringIO):

            def close(self):
                pass

        class MyRequest(Request):
            def _get_file_stream(*args, **kwargs):
                return FileObj()

        app.request_class = MyRequest


The MyRequest class inherits from the Request class of the Flask framework. We define the file stream of the Request class as FileObj, and then we set the request_class attribute of the Flask app to this new MyRequest class.
After we have it all set up, we need to send the request and see whether the uploaded file is saved properly. For this purpose we take the help of the StringIO library. StringIO provides a file-like object which can then be used to replicate a file upload. So we send the data as {'file': (StringIO('1,2,3,4'), 'test_file.csv')} to the /test_upload endpoint that we created previously. As a result, the endpoint receives the file, saves it, and returns the filename and file_path of the stored file.

 with app.test_request_context():
            client = app.test_client()
            resp = client.post('/test_upload', data = {'file': (StringIO('1,2,3,4'), 'test_file.csv')})
            data = json.loads(resp.data)
            file_path = data['path']
            filename = data['name']
            actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename
            self.assertEqual(file_path, actual_file_path)
            self.assertTrue(os.path.exists(file_path))


After this is done, we need to check whether the file_path we receive is the expected file path. Secondly, we also check whether the file was really created or whether this is just some dummy data. We get the expected path like this:

actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename

Then we use assertEqual to assert that actual_file_path is the same as the resulting path we received. Thirdly, we use assertTrue to ensure that a file actually exists at that path. That is,

self.assertTrue(os.path.exists(file_path))

This gives True if the file exists and False if not.

So that basically sums up the unit test. We check:
1) whether the file is saved at the correct path, and
2) whether the file actually exists.
The unit test passes only if both are true, and is thus successful. Otherwise we get either an error or a failure.

Following is the entire code snippet for this unit testing function:

def test_upload_single_file(self):

        class FileObj(StringIO):

            def close(self):
                pass

        class MyRequest(Request):
            def _get_file_stream(*args, **kwargs):
                return FileObj()

        app.request_class = MyRequest

        @app.route("/test_upload", methods=['POST'])
        def upload():
            files = request.files['file']
            file_uploaded = uploaded_file(files=files)
            return jsonify(
                {'path': file_uploaded.file_path,
                 'name': file_uploaded.filename})

        with app.test_request_context():
            client = app.test_client()
            resp = client.post('/test_upload', data = {'file': (StringIO('1,2,3,4'), 'test_file.csv')})
            data = json.loads(resp.data)
            file_path = data['path']
            filename = data['name']
            actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename
            self.assertEqual(file_path, actual_file_path)
            self.assertTrue(os.path.exists(file_path))

Resources:

Testing Deploy Functions Using Sinon.JS in Yaydoc

In Yaydoc, we deploy the generated documentation to GitHub Pages as well as Heroku. It is one of the important functions in the source code. I don't want to break the build in the future through an unnoticed change, so I decided to write a test case for the deploy function. But the deploy function has a lot of dependencies like child processes, sockets, etc. Also, it is not a pure function, so there is no return value to assert against. So I decided to stub the child process to check whether the correct script was passed or not. To write the stub I decided to use the Sinon.JS framework, because it can be used for writing stubs, mocks and spies. One of the advantages of Sinon is that it works with any testing framework.

sinon.stub(require("child_process"), "spawn").callsFake(function (fileName, args) {
  if (fileName !== "./ghpages_deploy.sh" ) {
    throw new Error(`invalid ${fileName} invoked`);
  }

  if (fileName === "./ghpages_deploy.sh") {
    let ghArgs = ["-e", "-i", "-n", "-o", "-r"];
    ghArgs.forEach(function (x)  {
      if (args.indexOf(x) < 0) {
        throw new Error(`${x} argument is not passed`);
      }
    })
  }
 
  let process = {
    on: function (listenerId, callback) {
      if (listenerId !== "exit") {
        throw new Error("listener id is not exit");
      }
    }
  }
  return process;
});

In Sinon you can create a stub by passing the object as the first parameter and the method name as the second parameter to Sinon's stub method. On the object it returns, you pass the replacement function to the callsFake method.

In the above code, I wrote a simple stub which overrides the spawn method of Node.js child_process. So I passed the "child_process" module as the first parameter and the "spawn" method name as the second parameter. We must check whether the correct deploy script and the correct parameters are being passed, so I wrote a function which checks these conditions and passed it to the callsFake method.

describe('deploy script', function() {
  it("gh-pages deploy", function() {
    deploy.deployPages(fakeSocket, {
      gitURL: "https://github.com/sch00lb0y/yaydoc.git",
      encryptedToken: crypter.encrypt("dummykey"),
      email: "[email protected]",
      uniqueId: "ajshdahsdh",
      username: "fossasia"
    });
  });
});

Finally, we test the deploy function by calling it. I use Mocha as the testing framework; I have already written a blog post on Mocha, so if you are interested please check it out.

Resources:

Acceptance Testing of a Feature in Open Event Frontend

In Open Event Frontend, we have integration tests for ember components which are used throughout the project. But even after those tests, user interaction could pose some errors. We perform acceptance tests to alleviate such scenarios.

Acceptance tests interact with the application as a user does and ensure the proper functionality of a feature, or determine whether or not the software system has met the requirement specifications. They are quite helpful for ensuring that our core features work properly.

Let us write an acceptance test for register feature in Open Event Frontend.

import { test } from 'qunit';
import moduleForAcceptance from 'open-event-frontend/tests/helpers/module-for-acceptance';

moduleForAcceptance('Acceptance | register');

In the first line we import test from 'qunit' (the default unit testing framework for Ember), which contains all the required test functions. For example, here we are using the test function to check the rendering of our component. We can use the test function multiple times to check multiple components.

Next, we import moduleForAcceptance from ‘open-event-frontend/tests/helpers/module-for-acceptance’ which deals with application setup and teardown.

test('visiting /register', function(assert) {
  visit('/register');

  andThen(function() {
    assert.equal(currentURL(), '/register');
  });
});

Inside our test function, we simulate visiting the /register route and then check that the current route is /register.

test('visiting /register and registering with existing user', function(assert) {
  visit('/register');
  andThen(function() {
    assert.equal(currentURL(), '/register');
    fillIn('input[name=email]', '[email protected]');
    fillIn('input[name=password]', 'opev_test_user');
    fillIn('input[name=password_repeat]', 'opev_test_user');
    click('button[type=submit]');
    andThen(function() {
      assert.equal(currentURL(), '/register');
      // const errorMessageDiv = findWithAssert('.ui.negative.message');
      // assert.equal(errorMessageDiv[0].textContent.trim(), 'An unexpected error occurred.');
    });
  });
});

Then we simulate visiting the /register route and registering a dummy user. For this, we first go to the /register route and check that the current route is /register. We then fill the register form with appropriate data and hit submit.

test('visiting /register after login', function(assert) {
  login(assert);
  andThen(function() {
    visit('/register');
    andThen(function() {
      assert.equal(currentURL(), '/');
    });
  });
});

The third test simulates visiting the /register route with a user logged in, and this is very simple. We just visit the /register route and then check whether we are at the / route, because a user is redirected to / when trying to visit /register after logging in.

And since we have checked all the possible combinations, we simply run the tests with the following command:

ember test --server

But there is a little demerit to acceptance tests. They boot up the whole EmberJS application and start us at the application.index route. We then have to navigate to the page that contains the feature being tested. Writing acceptance tests for each and every feature would be a big waste of time and CPU cycles. For this reason, only core features are tested for acceptance.

Resources

Writing Selenium Tests for Checking Bookmark Feature and Search functionality in Open Event Webapp

We integrated Selenium testing in the Open Event Webapp and are in full swing writing tests to check the major features of the webapp. Tests help us catch issues/bugs which were solved earlier but keep resurfacing when new changes are incorporated in the repo. In this post I describe the major features that we are testing.

Bookmark Feature
The first major feature that we want to test is the bookmark feature. It allows the users to mark a session they are interested in and view them all at once with a single click on the starred button. We want to ensure that the feature is working on all the pages.

Let us discuss the design of the test. First, we start with the tracks page. We select a few sessions (two here) for the test and note down their session ids. Finding an element by its id is simple in Selenium. After we find the session element, we find the bookmark button inside it (with the help of its class name) and click on it to mark the session. After that, we click on the starred button to display only the marked sessions and count the number of visible session elements on the page. If the number of visible session elements comes out to be 2 (the ones that we marked), it means that the feature is working. If the number deviates, it indicates that something is wrong and the test fails.


Here is a part of the code implementing the above logic. The whole code can be seen here

// Returns the number of visible session elements on the tracks page
TrackPage.getNoOfVisibleSessionElems = function() {
 return this.findAll(By.className('room-filter')).then(this.getElemsDisplayStatus).then(function(displayArr) {
   return displayArr.reduce(function(counter, value) { return value == 1 ? counter + 1 : counter; }, 0);
 });
};
// Bookmark the sessions, scrolls down the page and then count the number of visible session elements
TrackPage.checkIsolatedBookmark = function() {
 // Sample sessions having ids of 3014 and 3015 being checked for the bookmark feature
 var sessionIdsArr = ['3014', '3015'];
 var self = this;
 return  self.toggleSessionBookmark(sessionIdsArr).then(self.toggleStarredButton.bind(self)).then(function() {
   return self.driver.executeScript('window.scrollTo(0, 400)').then(self.getNoOfVisibleSessionElems.bind(self));
 });
};

Here is the excerpt of code which matches the actual number of visible session elements to the expected number. You can view the whole test script here

//Test for checking the bookmark feature on the tracks page
it('Checking the bookmark toggle', function(done) {
 trackPage.checkIsolatedBookmark().then(function(num) {
   assert.equal(num, 2);
   done();
 }).catch(function(err) {
   done(err);
 });
});

Now, we want to test this feature on the other pages: the schedule and rooms pages. We could simply follow the same approach as on the tracks page, but it is expensive in time: checking the visibility of all the session elements on the page takes quite some time due to the large number of sessions. We need a different approach. We have already marked two sessions on the tracks page. We then go to the schedule page and click on the starred mode. We calculate the current height of the page, unmark a session, and then recalculate the height of the page. If the bookmark feature is working, the height should decrease. This determines the correctness of the test. We follow the same approach on the rooms page too. While this is not absolutely rigorous, it is a good way to check the feature. We have already employed the thorough method on the tracks page, so there was no need to apply it on the schedule and rooms pages, since it would have increased the testing time by quite a large margin.

Here is an excerpt of the code. The whole work can be viewed here

RoomPage.checkIsolatedBookmark = function() {
 // We go into starred mode and unmark sessions having id 3015 which was marked previously on tracks pages. If the bookmark feature works, then length of the web page would decrease. Return true if that happens. False otherwise
 var getPageHeight = 'return document.body.scrollHeight';
 var sessionIdsArr = ['3015'];
 var self = this;
 var oldHeight, newHeight;
 return self.toggleStarredButton().then(function() {
   return self.driver.executeScript(getPageHeight).then(function(height) {
     oldHeight = height;
     return self.toggleSessionBookmark(sessionIdsArr).then(function() {
       return self.driver.executeScript(getPageHeight).then(function(height) {
         newHeight = height;
         return oldHeight > newHeight;
       });
     });
   });
 });
};

Search Feature
Now, let us go to the testing of the search feature in the webapp. The main object of focus is the Search bar. It is present on all the pages: tracks, rooms, schedule, and speakers page and allows the user to search for a particular session or a speaker and instantly fetches the result as he/she types.

We want to ensure that this feature works across all the pages. The Tracks, Rooms and Schedule pages are similar in that they display all the sessions of the event, albeit in different manners. Any query made on any one of these pages should fetch the same number of session elements on the other pages too. The speaker page contains mostly information about the speakers only. So, we make a single common test for the former three pages and a slightly different test for the latter page.

Designing a test for this feature is interesting. We want it to be fast and accurate. A simple way to approach this is to think of the components involved. One is the query text which is entered in the search input bar. The other is the list of sessions which match the entered text and will be visible on the page after it has been entered. We decide upon a text string and a list of session ids. This list contains the ids of the sessions which should be visible for the above query, and also a few ids of sessions which do not match the entered text. During the actual test, we enter the chosen text string and check the visibility of the sessions present in the list. If the result matches the expected pattern, it means that the feature is working well and the test passes. Otherwise, it means that there is some problem with the implementation and the test fails.

For eg: We decide upon the search text ‘Mario’ and then note the ids of the sessions which should be visible in that search.


Suppose the list of ids comes out to be

['3017', '3029', '3013', '3031']

We then add a few more session ids which should not be visible for that search text. For example, we add two extra false ids, 3014 and 3015. The modified list would be something like this:

['3017', '3029', '3013', '3031', '3014', '3015']

Now we run the test and determine the visibility of the sessions present in the above list, compare it to the expected output and accordingly determine the fate of the test.

Expected: [true, true, true, true, false, false]
Actual Output: [true, true, true, true, true, true]

Then the test would fail since the last two sessions were not expected to be visible.

Here is some code related to it. The whole work can be seen here

function commonSearchTest(text, idList) {
 var self = this;
 var searchText = text || 'Mario';
 // First 4 session ids should show up on default search text and the last two not. If no idList provided for testing, use the idList for the default search text
 var arrId = idList || ['3017', '3029', '3013', '3031', '3014', '3015'];
 var promise = new Promise(function(resolve) {
   self.search(searchText).then(function() {
     var promiseArr = arrId.map(function(curElem) {
       return self.find(By.id(curElem)).isDisplayed();
     });

     self.resetSearchBar().then(function() {
       resolve(Promise.all(promiseArr));
     });
   });
 });
 return promise;
}

Here is the code for comparing the expected and the actual output. You can view the whole file here

it('Checking search functionality', function(done) {
 schedulePage.commonSearchTest().then(function(boolArr) {
   assert.deepEqual(boolArr, [true, true, true, true, false, false]);
   done();
 }).catch(function(err) {
   done(err);
 });
});

The search functionality test for the speakers page is done in the same style; instead of session ids, we work with speaker ids there. Everything else is done in a similar manner.

Resources:

Adding unit tests for effects in Loklak Search

Loklak Search uses @ngrx/effects to listen to actions dispatched by the user and send API requests to the Loklak server. Loklak Search currently has seven effects, such as Search Effects and Suggest Effects, which run to make the application reactive. It is important to test these effects to ensure that they make API calls at the right time and map the responses to send back to the reducer.

I will explain here how I added unit tests for the effects. Surprisingly, the test coverage increased from 43% to 55% after adding these tests.

Effects to test

We are going to test the effects for user search. This effect listens for events of type USER_SEARCH and makes a call to the user-search service with the query as a parameter. After a response is received, it maps the response and passes it on to the UserSearchCompleteSuccessAction, which performs the subsequent operations. If the service fails to get a response, it dispatches the UserSearchCompleteFailAction.

Code

ApiUserSearchEffects is the effect which detects when the USER_SEARCH action is dispatched from some component of the application; consequently, it makes a call to the UserSearchService and handles the JSON response received from the server. The effect then dispatches a new UserSearchCompleteSuccessAction if a response is received from the server, or a new UserSearchCompleteFailAction if no response is received. The debounce time is set to 400 so that the response can be flushed if a new USER_SEARCH is dispatched within the next 400ms.

For this effect, we need to test whether the effect actually runs when a USER_SEARCH action is dispatched. Further, we need to test whether the correct parameters are supplied to the service and the response is handled correctly. We also need to check whether the response is really flushed within the debounce time limit.

@Injectable()
export class ApiUserSearchEffects {

  @Effect()
  search$: Observable<Action> = this.actions$
    .ofType(userApiAction.ActionTypes.USER_SEARCH)
    .debounceTime(400)
    .map((action: userApiAction.UserSearchAction) => action.payload)
    .switchMap(query => {
      const nextSearch$ = this.actions$.ofType(userApiAction.ActionTypes.USER_SEARCH).skip(1);
      const follow_count = 10;

      return this.apiUserService.fetchQuery(query.screen_name, follow_count)
        .takeUntil(nextSearch$)
        .map(response => new userApiAction.UserSearchCompleteSuccessAction(response))
        .catch(() => of(new userApiAction.UserSearchCompleteFailAction()));
    });

  constructor(
    private actions$: Actions,
    private apiUserService: UserService
  ) { }

}

Unit test for effects

  • Configure the TestBed class before starting the unit test and add all the necessary imports (most important being the EffectsTestingModule) and providers. This step helps to isolate the effects completely from all other components and test them independently. We also need to create a spy object which spies on the method userService.fetchQuery, with the provider being UserService.

beforeEach(() => TestBed.configureTestingModule({
  imports: [
    EffectsTestingModule,
    RouterTestingModule
  ],
  providers: [
    ApiUserSearchEffects,
    {
      provide: UserService,
      useValue: jasmine.createSpyObj('userService', ['fetchQuery'])
    }
  ]
}));
  • Now we need a setup function which takes params, the data to be returned by the mock UserService. With it we can configure the response to be returned by the service. Moreover, this function initializes the EffectsRunner and returns ApiUserSearchEffects so that it can be used for unit testing.

function setup(params?: {userApiReturnValue: any}) {
  const userService = TestBed.get(UserService);
  if (params) {
    userService.fetchQuery.and.returnValue(params.userApiReturnValue);
  }

  return {
    runner: TestBed.get(EffectsRunner),
    apiUserSearchEffects: TestBed.get(ApiUserSearchEffects)
  };
}

 

  • Now we will add the unit tests for the effects. In these tests, we are going to check whether the effect recognises the action and returns a new action based on the response we want, and whether it maps the response only after a certain debounce time. We have used fakeAsync(), which gives us access to the tick() function. Next, we call the setup function and pass on the mock response so that whenever the UserService is called it returns the mock response and never actually runs the service. We then queue the UserSearchAction in the runner and subscribe to the value returned by the effects class. We can now test the returned value using an expect() block, and verify that the value is returned only after the debounce time using tick().

it('should return a new userApiAction.UserSearchCompleteSuccessAction, ' +
  'with the response, on success, after the de-bounce', fakeAsync(() => {
  const response = MockUserResponse;
  const {runner, apiUserSearchEffects} = setup({userApiReturnValue: Observable.of(response)});

  const expectedResult = new userApiAction.UserSearchCompleteSuccessAction(response);

  runner.queue(new userApiAction.UserSearchAction(MockUserQuery));

  let result = null;
  apiUserSearchEffects.search$.subscribe(_result => result = _result);
  tick(399); // test debounce
  expect(result).toBe(null);
  tick(401);
  expect(result).toEqual(expectedResult);
}));

it('should return a new userApiAction.UserSearchCompleteFailAction, ' +
  'if the SearchService throws', fakeAsync(() => {
  const { runner, apiUserSearchEffects } = setup({ userApiReturnValue: Observable.throw(new Error()) });

  const expectedResult = new userApiAction.UserSearchCompleteFailAction();

  runner.queue(new userApiAction.UserSearchAction(MockUserQuery));

  let result = null;
  apiUserSearchEffects.search$.subscribe(_result => result = _result);

  tick(399); // Test debounce
  expect(result).toBe(null);
  tick(401);
  expect(result).toEqual(expectedResult);
}));

Reference

Documenting Open Event API Using API-Blueprint

FOSSASIA's Open Event Server API documentation is done using API Blueprint. API Blueprint is a language used to describe an API in a blueprint file (or a set of files). To work with the blueprint, the Apiary editor is used. This editor renders the API Blueprint and prints the result in a user-readable, documented API format. We create the API blueprint manually.

Using API Blueprint:-
We create the API blueprint by first adding the name and metadata for the API we aim to design. This step looks like this:

FORMAT: V1
HOST: https://api.eventyay.com

# Open Event API Server

The Open Event API Server

# Group Authentication

The API uses JWT Authentication to authenticate users to the server. For authentication, you need to be a registered user. Once you have registered yourself as an user, you can send a request to get the access_token.This access_token you need to then use in Authorization header while sending a request in the following manner: `Authorization: JWT <access_token>`


An API Blueprint document starts with the metadata; here FORMAT and HOST are the defined metadata. The FORMAT keyword specifies the version of API Blueprint, and HOST defines the host for the API.

The heading starts with # and the first heading is regarded as the name of the API.

NOTE – All headings start with one or more # symbols. The number of symbols indicates the level of the heading: one # followed by the heading text is the top level, ## is the second level, and so on. This is in compliance with normal markdown format.
Following the heading section comes the description of the API. Further headings are used to break up the description section.

Resource Groups:
—————————–
    By using the group keyword at the start of a heading, we create a group of related resources. Just like in the example below, where we have created a Group Users.

# Group Users

For using the API you need(mostly) to register as an user. Registering gives you access to all non admin API endpoints. After registration, you need to create your JWT access token to send requests to the API endpoints.


| Parameter | Description | Type | Required |
|:----------|-------------|------|----------|
| `name`  | Name of the user | string | - |
| `password` | Password of the user | string | **yes** |
| `email` | Email of the user | string | **yes** |

 

Resources:
——————
    In the Group Users we have created a resource, Users Collection. The URI used to access the resource is specified inside the square brackets after the heading. Here we have used parameters in the resource URI, which brings us to how to add parameters to the URI. The code below shows how to add parameters to the resource URI.

## Users Collection [/v1/users{?page%5bsize%5d,page%5bnumber%5d,sort,filter}]
+ Parameters
    + page%5bsize%5d (optional, integer, `10`) - Maximum number of resources in a single paginated response.
    + page%5bnumber%5d (optional, integer, `2`) - Page number to be fetched for the paginated response.
    + sort (optional, string, `email`) - Sort the resources according to the given attribute in ascending order. Append '-' to sort in descending order.
    + filter(optional, string, ``) - Filter according to the flask-rest-jsonapi filtering system. Please refer: http://flask-rest-jsonapi.readthedocs.io/en/latest/filtering.html for more.

 

Actions:
————–
    An action is specified with a sub-heading inside a resource, giving the name of the action followed by the HTTP method inside square brackets.
    Before we go further, let us discuss what a payload is. A payload is an HTTP transaction message including its discussion and any additional assets, such as an entity-body validation schema.

There are two payloads inside an Action:

  1. Request: It is a payload containing one specific HTTP Request, with Headers and an optional body.
  2. Response: It is a payload containing one HTTP Response.

A payload may have an identifier: a string for a request payload or an HTTP status code for a response payload.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)


Types of HTTP methods for Actions:

  • GET – In this action, we simply send the header data like Accept and Authorization and no body. Along with it we can send some GET parameters like page[size]. There are two cases for GET: List and Detail. For example, if we consider users, a GET for List retrieves information about all users in the response, while Detail retrieves information about a particular user. (A short client-side sketch of sending these documented requests appears after this list.)

The API Blueprint implementation of both the GET list and detail requests and responses is as follows.

### List All Users [GET]
Get a list of Users.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
            "meta": {
                "count": 2
            },
            "data": [
                {
                    "attributes": {
                        "is-admin": true,
                        "last-name": null,
                        "instagram-url": null,

 

### Get Details [GET]
Get a single user.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
            "data": {
                "attributes": {
                    "is-admin": false,
                    "last-name": "Doe",
                    "instagram-url": "http://instagram.com/instagram",

 

  • POST – In this action, apart from the header information, we also need to send data. The data must conform to the jsonapi specification. The POST body data for a users API would look something like this:
### Create User [POST]
Create a new user using an email, password and an optional name.

+ Request (application/vnd.api+json)

    + Headers

            Authorization: JWT <Auth Key>

    + Body

            {
              "data":
              {
                "attributes":
                {
                  "email": "[email protected]",
                  "password": "password",


A POST request with this data would create a new entry in the table and then return, in jsonapi format, the entry that was made, along with the id assigned to this new entry.

  • PATCH – In this action, we change or update an already existing entry in the database. It has header data like all other requests, and body data which is almost the same as for POST, except that it also needs to mention the id of the entry we are trying to modify.
### Update User [PATCH]
+ `id` (integer) - ID of the record to update **(required)**

Update a single user by setting the email, email and/or name.

Authorized user should be same as user in request body or must be admin.

+ Request (application/vnd.api+json)

    + Headers

            Authorization: JWT <Auth Key>

    + Body

            {
              "data": {
                "attributes": {
                  "password": "password1",
                  "avatar_url": "http://example1.com/example1.png",
                  "first-name": "Jane",
                  "last-name": "Dough",
                  "details": "example1",
                  "contact": "example1",
                  "facebook-url": "http://facebook.com/facebook1",
                  "twitter-url": "http://twitter.com/twitter1",
                  "instagram-url": "http://instagram.com/instagram1",
                  "google-plus-url": "http://plus.google.com/plus.google1",
                  "thumbnail-image-url": "http://example1.com/example1.png",
                  "small-image-url": "http://example1.com/example1.png",
                  "icon-image-url": "http://example1.com/example1.png"
                },
                "type": "user",
                "id": "2"
              }
            }

Just like in POST, after we have updated our entry, we get back as response the new updated entry in the database.

  • DELETE – In this action, we delete an entry from the database. The entry in our case is soft deleted by default, which means that instead of deleting it from the database, we set the deleted_at column to the time of deletion. For deleting, we just need to send the header data in a DELETE request to the proper endpoint. If deleted successfully, we get the response “Object successfully deleted”.
### Delete User [DELETE]
Delete a single user.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
          "meta": {
            "message": "Object successfully deleted"
          },
          "jsonapi": {
            "version": "1.0"
          }
        }
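To see how these documented requests look from the client side, here is a minimal sketch using the Python requests library. The token value and the user id are placeholders, and the delete URL assumes the usual /v1/users/{id} form, so treat this only as an illustration of the headers and payloads described above.

import requests

BASE = 'https://api.eventyay.com/v1'
HEADERS = {
    'Accept': 'application/vnd.api+json',
    'Authorization': 'JWT <access_token>'   # placeholder token
}

# List All Users [GET], as documented in the Users Collection resource
users = requests.get(BASE + '/users', headers=HEADERS).json()
print(users['meta']['count'])

# Delete User [DELETE]; the id 2 and the URL shape are only examples
resp = requests.delete(BASE + '/users/2', headers=HEADERS)
print(resp.json()['meta']['message'])   # "Object successfully deleted" on success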


How do we check all this after entering it manually? We can use the Apiary website to render it, or simply use a different renderer. How? Check out my next blog post on Apiary and Aglio.

Learn more about api blueprint here: https://apiblueprint.org/

Testing Asynchronous Code in Open Event Orga App using RxJava

In the last blog post, we saw how to test complex interactions through our apps using stubbed behaviors with Mockito. In this post, I'll be talking about how to test RxJava components such as Observables. This one will focus on testing complex situations using RxJava, as the library itself provides methods to unit test your reactive streams, so that you don't have to go out of your way to set up contraptions like callback captors or implement your own interfaces as stubs of the original ones. The testing support provided by RxJava also allows you to test the fate of your stream, like confirming that it got subscribed or that an error was thrown, or to test an individual emitted item, by its value or with predicate logic of your own, etc. We have used this heavily in the Open Event Orga App (Github Repo) to check that the app correctly loads and refreshes resources from the correct source. We also capture certain triggers happening on events, like the saving of data on reloading, so that the database remains in a consistent state. Here, we'll look at some basic examples and move to some complex ones later. So, let's start.

public class AttendeeRepositoryTest {

    private AttendeeRepository attendeeRepository;

    @Before
    public void setUp() {
        // set up the repository under test (dependencies omitted here)
        attendeeRepository = new AttendeeRepository();
    }

    @Test
    public void shouldReturnAttendeeByName() {
        // TODO: Implement test
    }

}

 

This is our basic test class setup with general JUnit settings. We'll start by writing our tests, the first of which will test that we can get an attendee by name. The Attendee class is a model class with firstName and lastName, and we will check that we get a valid attendee when passing a full name. Note that although we will talk about the implementation of the code we are writing tests for, it will only be in an abstract manner, meaning we won't be dealing with production code, just the tests.

As we know, Observables provide a stream of data. But here, we are only concerned with one attendee. Technically, we should be using Single, but for generality, we'll stick with Observables.

A person coming from a JUnit background would be tempted to write the code below.

Attendee attendee = attendeeRepository.getByAttendeeName("John Wick")
    .blockingFirst();

assertEquals("John Wick", attendee.getFirstName() + attendee.getLastName());

 

So, what this code is doing is blocking the thread till the first attendee is provided in the stream and then checking that the attendee is actually John Wick.

While this code works, it is not reactive. With the reactive way of testing, not only can you test more complex logic than this with less verbosity, but it naturally provides ways to test other behaviors of reactive streams such as subscriptions, errors, completions, etc. We'll only be covering a few. So, let's see the reactive version of the above test.

attendeeRepository.getByAttendeeName("John Wick")
    .firstElement()
    .test()
    .assertNoErrors()
    .assertValue(attendee -> "John Wick".equals(
        attendee.getFirstName() + attendee.getLastName()
    ));

 

So clean and complete. Just by calling test() on the returned observable, we got this whole suite of testing methods with which, not only did we test the name but also that there are no errors while getting the attendee.

Testing for Network Error on loading of Attendees

OK, so let’s move towards a more realistic test. Suppose that you call this method on AttendeeRepository, and that you can fetch attendees from the network. So first, you want to handle the simplest case, that there should be an error if there is no connection. So, if you have (hopefully) set up your project using abstractions for the model using MVP, then it’ll be a piece of cake to test this. Let’s suppose we have a networkUtil object with a method isConnected.

The NetworkUtil class is a dependency of AttendeeRepository and we have set it up as a mock in our test using Mockito. If this is sounding somewhat unfamiliar, please read my previous article “The Joy of Testing with MVP”.

So, our test will look like this

@Test
public void shouldStreamErrorOnNetworkDown() {
    when(networkUtils.isConnected()).thenReturn(false);
    
    attendeeRepository.getAttendees()
        .test()
        .assertErrorMessage("No Network");
}

 

Note that, if you don’t define the mock object’s behavior like I have here, attendeeRepository will likely throw an NPE as it will be calling isConnected() on an undefined object.

With RxJava, you get a whole lot of methods for each use case. Even for checking errors, you get to assert a particular Throwable, or a predicate defining an operation on the Throwable, or error message as I have shown in this case.

Now, if you run this code, it'll probably fail. That's because if you are testing this by offloading the networking task to a different thread using the subscribeOn and observeOn methods, the test body may be detached from the main thread while the requests complete. Furthermore, in an application made for Android, you would have used AndroidSchedulers.mainThread(), but as it is an Android dependency, the test will fail. Well actually, crash. There were some workarounds involving abstractions even for RxJava schedulers, but RxJava2 provides a very convenient way to override the default schedulers in the form of RxJavaPlugins. Similarly, RxAndroidPlugins is present in the rx-android package. Let's suppose you plan to use Schedulers.io() for asynchronous work and want to get the stream on Android's main thread, meaning you use AndroidSchedulers.mainThread() in the observeOn method. To override these schedulers with Schedulers.trampoline(), which queues your tasks and performs them one by one like the main thread, your setUp will include this:

RxJavaPlugins.setIoSchedulerHandler(scheduler ->  Schedulers.trampoline());
RxAndroidPlugins.setInitMainThreadSchedulerHandler(scheduler -> Schedulers.trampoline());

 

And if you are not using isolated tests and need to resume the default scheduler behavior after each test, then you’ll need to add this in your tearDown method

RxJavaPlugins.reset();
RxAndroidPlugins.reset();

Testing for Correct loading of Attendees

Now that we have tested that our Repository is correctly throwing an error when the network is down, let’s test that it correctly loads attendees when the network is connected. For this, we’ll need to mock our EventService to return attendees when queried, since we don’t want our unit tests to actually hit the servers.

So, we’ll need to keep these things in mind:

  • Mock the network until it shows that it is connected to the Internet
  • Mock the EventService to return attendees when queried
  • Call the getter on the attendeeRepository and test that it indeed returned a list of attendees

For these conditions, our test will look like this:

@Test
public void shouldLoadAttendeesSuccessfully() {
    List<Attendee> attendees = Arrays.asList(
        new Attendee(),
        new Attendee(),
        new Attendee()
    );

    when(networkUtils.isConnected()).thenReturn(true);
    when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

    attendeeRepository.getAttendees()
        .test()
        .assertValues(attendees.toArray(new Attendee[attendees.size()]));
}

 

The assertValues function asserts that these values were emitted by the observable. And if you want to be terser, you can even verify that in fact EventService’s getAttendees function was called by

verify(eventService).getAttendees();

 

But the problem with this approach is that the getAttendees function returns an observable, and just calling it does not necessarily mean that it was subscribed and emitting results; hence we need to ensure that it was indeed subscribed. If we call the normal test() function on the observable, it is already subscribed, so an assertion on its subscription would always pass. In order to test that correctly, let's look at our final use case.

Testing for saving of Attendees

In the Open Event Orga App, we have strived to create self-sufficient and intelligent classes, and our repository is built this way too. It detects when new attendees are loaded from the server and saves them in the database. Now we want to test this functionality.

In this test, there is an added dependency of DatabaseRepository for saving the attendees, which we will mock. The conditions for this test will be:

  • Network is connected
  • EventService returns attendees
  • DatabaseRepository mocks the saving of attendees

For DatabaseRepository’s save method, we’ll be returning a Completable, which will notify when the saving of data is completed. The primary purpose of this test will be to assert that this completable is indeed subscribed when the attendee loading is triggered. This will not only ensure that the correct function to save the attendees is called, but also that it is indeed triggered and not just left hanging after the call. So, our test will look like this.

@Test
public void shouldSaveAttendeesInDatabase() {
    List<Attendee> attendees = Arrays.asList(
        new Attendee(),
        new Attendee(),
        new Attendee()
    );

    TestObserver testObserver = TestObserver.create();
    Completable completable = Completable.complete()
        .doOnSubscribe(testObserver::onSubscribe);

    when(networkUtils.isConnected()).thenReturn(true);
    when(databaseRepository.save(attendees)).thenReturn(completable);
    when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

    attendeeRepository.getAttendees()
        .test()
        .assertNoErrors();

    testObserver.assertSubscribed();
}


Here, we have created a separate test observer and arranged for it to be subscribed when the Completable is subscribed, and we return that Completable when the save method is called. At the end, we assert that the test observer was indeed subscribed.
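For context, here is a simplified sketch (not the actual Orga App implementation) of a getAttendees() method whose behaviour would satisfy both of the tests above: it errors out when the network is down, otherwise fetches from the service and chains the database save so that the Completable returned by save() really is subscribed before the attendees are emitted:

public Observable<Attendee> getAttendees() {
    if (!networkUtils.isConnected())
        return Observable.error(new Throwable("Network not connected"));

    return eventService.getAttendees()
        .flatMap(attendees ->
            // andThen subscribes to the save Completable and then emits the attendees
            databaseRepository.save(attendees)
                .andThen(Observable.fromIterable(attendees)));
}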

You can create more complex use cases and assert subscriptions, errors, the emptiness of a stream, and much more by using the built-in test functionalities of RxJava2. That’s all for this blog; you can visit these links for more details on unit testing RxJava:

http://fedepaol.github.io/blog/2015/09/13/testing-rxjava-observables-subscriptions/

https://www.infoq.com/articles/Testing-RxJava

The Joy of Testing with MVP in Open Event Orga App

Testing applications is hard, and testing Android applications is harder. The natural way an Android Developer codes is completely untestable. We are born and molded into creating God classes (Activities) and performing every bit of logic inside them. Some thought that the introduction of Fragments would bring a little bit of modularity, but we proved otherwise by shifting to God Fragments. When the natural urge of an Android Developer to

  • apply logic,
  • load UI,
  • handle concurrency (hopefully, not using AsyncTask),
  • load data – from the network; disk; and cache,
  • and manage the state of the Activity/Fragment

finally meets with a new form of revelation, that he/she should test his/her application, all of the concepts acquired are shattered in a moment. The person goes to the Android Docs to see why he started coding that way and realizes that even the system is flawed: the Android documentation is full of examples promoting God Activities, which are used to show the use of an API for only one reason, brevity. It’s not uncommon for us to steal some code from StackOverflow or the Android Docs, but we don’t realize that it is presented there without an application environment or structure, all the required components just glued together to get it functionally complete, so that the reader does not have to configure anything more. Why would an answer from StackOverflow load a barcode scanner using a builder? Or build a ContentProvider to show how to query sorted data from SQLite? Or use a singleton for your provider classes? The simple answer is, they won’t. But that doesn’t mean you shouldn’t.

The first thought that enters a developer’s mind when he gets his hands on a piece of code is to paste it in the correct position and get it to work. There is always this moment of hesitation where conscience rings a bell saying, “What are you doing? Is this the right way to do it? How many lines until you stop overloading this Activity? It’s 2000 lines already”. But it dies as soon as we see the feature working. And why test something which we have confirmed to work, right? I just wrote this code to show a progress bar while the data loads and hide it when it is done. I can see it working; what will I achieve by painfully writing a test for it, mocking the loading conditions and all? Wrong! Unit tests which cover trivial utils like date modification and string parsing are good and needed, but they are so trivial that they hardly ever go wrong, and if they do, they are easy to fix: they have a single usage throughout the app, so it is easy to spot the bug and fix it.

The real problem is testing your app under dynamic conditions, where you have to emulate those conditions to check that your app makes the right decisions. The progress bar example may work for now, but what if it breaks during refactoring, or you write the same code elsewhere and forget to hide the progress bar? Simply copying a well-written test and changing 2-3 class names will fail the build and tell you what’s wrong. A well-contracted app can even contain tests to check that there will be no memory leaks. But none of this is possible if everything is jumbled into a single Activity with callbacks, loaders, UI handling, business logic, and so on. You can’t even tell where to begin. In this post, I will briefly discuss how to design and test applications using the MVP pattern.

MVP to the Rescue

Before moving ahead, I must put out a disclaimer: MVP is a design pattern with very opinionated implementations. The extent to which you want to refactor your app and the number of abstractions you are willing to introduce are in your hands. There aren’t any golden rules your implementation must follow for it to be called MVP. Remember, the client doesn’t care about architecture; it’s for you, so whatever makes your life and your testing easier works. I’ll also borrow an axiom related to testing here: “Test until fear turns into boredom”. You can apply the same to your MVP implementation. Testing and design patterns are here to eradicate your fear of failure, not to bore you.

So, in this guide, I’ll use the example of a QR Code Scanner Activity built with the MVP pattern, as employed in the Open Event Orga Application (Github Repo). Because we are focusing on the test pattern and on making our app logic easy to test, I have omitted the model part from the MVP equation. The reason is that the possible models in this application would have been the camera loader or the barcode initializer, and the caveat is that both of these modules rely heavily on Android-specific view classes, namely SurfaceView, and on other lifecycle methods. You could always work your way around this to extract them into a separate model, but it won’t help us write unit tests for them, because firstly, they aren’t our logic to test, and secondly, they can’t be tested in a unit test (those models would probably have just implemented certain setters and getters).

So, the main purpose of our QR Code Scanner class is to scan a QR code, match it against a list of identifiers, and if there is a match, return it successfully to the caller. In this specific example, the identifiers will be the ticket IDs of event attendees, and the caller will be the event organizer scanning QR codes to check attendees in. The use case sounds simple, but it has several mini use cases and dependencies of its own, which we have to take into account while designing the View and Presenter classes. Let’s discuss them one by one:

App State – Start: Activity starts, loads camera

  • Permission Granted: detects that the app already has Camera Permission,
    • starts scanning
  • Permission Absent: detects that the app doesn’t have Camera Permission,
    • Denied: asks for it, denied, shows error
    • Accepted: asks for it, granted, starts scanning

App State – QR Code Detected: Starts parsing them

  • Attendees are not present: attendees may be absent due to an internal error or because they have not loaded yet
    • stops parsing
  • Attendees are present:
    • QR Code does not match any attendee: do nothing
    • QR Code matches one of the attendees: send that attendee to the caller

App State: Camera or containing View is getting destroyed

  • Release Camera

There can be many more internal data flows, but this much is sufficient for our example. So, let’s start defining our contracts using the above knowledge. First, we will design our View. So what should be our strategy? Always think of presenters and views as being mapped in a 1:1 relation. They can have conversations with each other; the difference is that the view is only allowed to talk to the presenter, while the presenter may talk to models too. So, the view gets all of its information from the presenter and only takes action when the presenter tells it to, essentially making the view dumb and passive. Since the presenter can talk to the view and to the model(s), the information the presenter gathers does not have to come from the view alone. In fact, the less the presenter depends on the view, the better. The main motive of MVP is to make our logic less dependent on views.

Presenter Contract

In our example, the presenter relies on the view to tell it when a certain event happens; these can be lifecycle callbacks, permission grants/denials, camera load/destroy, or anything else purely related to the Android implementation. We generally know from the start what kind of information the presenter needs from a View, so let’s design our presenter first.

public interface IScanQRPresenter {

    void attach(long eventId, IScanQRView scanQRView);

    void start();

    void detach();

    void cameraPermissionGranted(boolean granted);

    void onBarcodeDetected(Barcode barcode);

    void onScanStarted();

    void onCameraLoaded();

    void onCameraDestroyed();

}


The methods are self-explanatory. Don’t worry if you didn’t get why we defined certain hooks like onScanStarted() or why we didn’t define onScanStopped(). These kinds of details reveal themselves as you develop your components. You can skip to the next section if you don’t want to know why we did it.

Basically, you should define a callback for events which are not guaranteed to happen synchronously. What? Let me explain. Let’s say you got your camera permission and requested the view to start scanning, but you have to wait until the camera is loaded, as it is not a synchronous call (and it shouldn’t be, or your main thread will block). So instead, our presenter will request the camera to load, go into an idle state, wait for the onCameraLoaded() call, and then request the scan to start; since that is also not synchronous work, it will go into an idle state again, wait for onScanStarted(), and then continue its work. Whenever the camera is destroyed, the onCameraDestroyed() callback will be called and we will stop the scanning; since there is nothing to be done after that, we won’t wait until scanning has stopped, thus dropping the need for an onScanStopped() callback.
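To make this concrete, here is a rough sketch of how a presenter might react to these callbacks. It is only an outline against the contract above, not the actual Orga App implementation (null checks for a detached view are omitted for brevity):

public void onCameraLoaded() {
    if (scanQRView.hasCameraPermission())
        scanQRView.startScan();
    else
        scanQRView.requestCameraPermission();
}

public void cameraPermissionGranted(boolean granted) {
    if (granted)
        scanQRView.startScan();
    else
        scanQRView.showPermissionError("User denied permission");
}

public void onCameraDestroyed() {
    scanQRView.stopScan();
}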

View Contract

The View contract will come naturally to you once you have understood the data flow. It will contain the commands the presenter issues on the view and also the requests for data that the view holds. So, this will be our view.

public interface IScanQRView {

    boolean hasCameraPermission();

    void requestCameraPermission();

    void showPermissionError(String error);

    void onScannedAttendee(Attendee attendee);

    void showBarcodePanel(boolean show);

    void showBarcodeData(@NonNull String data);

    void showProgressBar(boolean show);

    void loadCamera();

    void startScan();

    void stopScan();

}


Almost all commands are straightforward but let me quickly explain showBarcodePanel(boolean) and showBarcodeData(String). They are used to display the currently visible barcode data to the user.

So, with our contracts set and the data flow in place, let’s start writing tests for the feature. Yes, we’ll write tests before the implementation, and then you’ll see how easy it is to write your views and presenters with only one goal in mind: to make the tests pass. If your tests are written correctly and cover everything, you should feel confident that your app will work without even seeing the actual implementation, because that is what tests are for. And by making passive and dumb views, imagine how light your instrumentation tests will be. Just check that the individual methods in the view implementation work as expected and you are done! No need to test logic or complex interactions, because you have those covered in the unit tests themselves. This is not only a benefit of data flow tests but also a best practice. You always want to follow DRY, Don’t Repeat Yourself, even while testing.

Tests

So, we will start writing our tests now, and you’ll realize how easy it is and how all the hard work of abstraction and design pays off.

Attach Tests

So, firstly, we will test whether the presenter calls the appropriate methods when it is attached and started:

@Test
public void shouldLoadAttendeesAutomatically() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));

    scanQRPresenter.start();

    verify(eventRepository).getAttendees(eventId, false);
}

@Test
public void shouldLoadCameraAutomatically() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));

    scanQRPresenter.start();

    verify(scanQRView).loadCamera();
}


Here, we are using Mockito to mock our EventDataRepository to return locally defined attendees instead of making an actual network call. We then start the presenter in each test and verify, in the first test, that the presenter calls getAttendees on the EventDataRepository, and in the second, that it requests the view to load the camera.

Note that in the implementation, both the loading of the camera and the loading of attendees take place together, but it is best practice to test them separately so that when a test fails, we know exactly why it did.
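For reference, the tests in this post assume a common fixture roughly like the one below. The mock names mirror those used in the tests; the rule setup, the repository interface name and the presenter constructor are assumptions made for illustration:

@Rule
public MockitoRule mockitoRule = MockitoJUnit.rule();

@Mock
private IScanQRView scanQRView;

@Mock
private IEventRepository eventRepository;

private ScanQRPresenter scanQRPresenter;

private final long eventId = 32;
// The barcode tests additionally assign ticket identifiers to these attendees
private final List<Attendee> attendees = Arrays.asList(new Attendee(), new Attendee(), new Attendee(), new Attendee());

@Before
public void setUp() {
    scanQRPresenter = new ScanQRPresenter(eventRepository);
    scanQRPresenter.attach(eventId, scanQRView);
}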

Detach Tests

@Test
public void shouldDetachViewOnStop() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));

    scanQRPresenter.start();

    assertNotNull(scanQRPresenter.getView());

    scanQRPresenter.detach();

    assertNull(scanQRPresenter.getView());
}

@Test
public void shouldNotAccessViewAfterDetach() {
    scanQRPresenter.detach();

    scanQRPresenter.start();
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.cameraPermissionGranted(false);
    scanQRPresenter.cameraPermissionGranted(true);
    scanQRPresenter.onScanStarted();
    scanQRPresenter.onBarcodeDetected(barcode2);
    scanQRPresenter.onCameraDestroyed();

    verifyZeroInteractions(scanQRView);
}


In the detach tests, we verify that after attaching (so the view is not null), calling detach makes the presenter drop its reference to the view, leaving it null. In the second test, we perform every possible interaction after detaching and confirm that no call whatsoever is made on the view. Tests like these help avoid memory leaks and guard against NullPointerExceptions that may occur after the view has been set to null.

Note: This does not mean that memory leaks will not happen if this test passes. You can still cause memory leaks by handing the view reference to any long-lived object, not just the presenter. This test just ensures that the presenter will not hold the view reference after detach and won’t touch the null view in the future.
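A hedged sketch of attach/detach handling that would make these tests pass is shown below; the actual Orga App presenter may structure this differently, for example through a base presenter class:

private IScanQRView scanQRView;
private long eventId;

public void attach(long eventId, IScanQRView scanQRView) {
    this.eventId = eventId;
    this.scanQRView = scanQRView;
}

public void detach() {
    // Drop the reference so the presenter can neither leak nor touch a dead view
    scanQRView = null;
}

public IScanQRView getView() {
    return scanQRView;
}

// Every interaction with the view should then be guarded, for example:
public void onCameraDestroyed() {
    if (scanQRView == null)
        return;
    scanQRView.stopScan();
}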

Permission Tests

@Test
public void shouldStartScanOnCameraLoadedIfPermissionPresent() {
    when(scanQRView.hasCameraPermission()).thenReturn(true);

    scanQRPresenter.onCameraLoaded();

    verify(scanQRView).startScan();
}

@Test
public void shouldAskPermissionOnCameraLoadedIfPermissionsAbsent() {
    when(scanQRView.hasCameraPermission()).thenReturn(false);

    scanQRPresenter.onCameraLoaded();

    verify(scanQRView).requestCameraPermission();
}

@Test
public void shouldStartScanningOnPermissionGranted() {
    scanQRPresenter.cameraPermissionGranted(true);

    verify(scanQRView).startScan();
}

@Test
public void shouldShowErrorOnPermissionDenied() {
    scanQRPresenter.cameraPermissionGranted(false);

    verify(scanQRView).showPermissionError(matches("(.*permission.*denied.*)|(.*denied.*permission.*)"));
}

The permission tests are straightforward:

Implicit Permission Handling

  1. If the view already has the camera permission and camera has loaded, start scanning
  2. If the view does not have camera permission and the camera has loaded, request the permission

Request Handling

  1. If the request was granted, start scanning
  2. If the permission was denied, show the permission error. Here, I have used a regex to assert that the error message contains the words “permission” and “denied”; you can use anyString() from Mockito for more flexibility, or a specific message for tighter testing

Camera Destroy Test

@Test
public void shouldStopScanOnCameraDestroyed() {
    scanQRPresenter.onCameraDestroyed();

    verify(scanQRView).stopScan();
}

Pretty simple: stop the scan on the destruction of the camera.

Flow Tests

You can also test that the callback flow happens in order so that not only the unit tests work but also the implementation logic is correct.

/**
 * Checks that the flow of commands happen in order
 */
@Test
public void shouldFollowFlowOnImplicitPermissionGrant() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(true);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).loadCamera();
    scanQRPresenter.onCameraLoaded();
    inOrder.verify(scanQRView).startScan();
}

@Test
public void shouldShowProgressInBetweenImplicitPermissionGrant() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(true);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).showProgressBar(true);
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.onScanStarted();
    inOrder.verify(scanQRView).showProgressBar(false);
}

Here, we are verifying two things: first, that if the permission is granted implicitly, we load the camera and start the scan in order. This test isn’t that useful, as we have already tested that the camera is loaded on attach and that the scan is started when the camera is loaded. In fact, this is an example of breaking the DRY rule. Even though it doesn’t hurt to include it, it also doesn’t help, as it does not cover anything that hasn’t already been tested.

The second test is important and checks that the progress bar is correctly shown and then hidden after certain communications have taken place and the scan has started. Similarly, we can test the progress bar behavior over all possible combinations of cases. The code snippets below show those tests:

@Test
public void shouldShowProgressInBetweenImplicitPermissionDenyRequestGrant() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(false);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).showProgressBar(true);
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.cameraPermissionGranted(true);
    scanQRPresenter.onScanStarted();
    inOrder.verify(scanQRView).showProgressBar(false);
}

@Test
public void shouldShowProgressInBetweenImplicitPermissionDenyRequestDeny() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(false);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).showProgressBar(true);
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.cameraPermissionGranted(false);
    inOrder.verify(scanQRView).showProgressBar(false);
}

QR Code Detection Tests

@Test
public void shouldNotSendAnyBarcodeIfAttendeesAreNull() {
    sendNullInterleaved();

    verify(scanQRView, never()).onScannedAttendee(any(Attendee.class));
}

@Test
public void shouldNotSendAttendeeOnWrongBarcodeDetection() {
    scanQRPresenter.setAttendees(attendees);
    sendNullInterleaved();

    verify(scanQRView, never()).onScannedAttendee(any(Attendee.class));
}

@Test
public void shouldSendAttendeeOnCorrectBarcodeDetection() {
    // Somehow the setting in setUp is not working, a workaround till fix is found
    RxJavaPlugins.setComputationSchedulerHandler(scheduler -> Schedulers.trampoline());

    scanQRPresenter.setAttendees(attendees);

    barcode1.displayValue = "test4-91";
    scanQRPresenter.onBarcodeDetected(barcode1);

    verify(scanQRView).onScannedAttendee(attendees.get(3));
}


Lastly, the core tests of our presenter. To explain them, let me show you the sendNullInterleaved() method:

private void sendNullInterleaved() {
    sendBarcodeBurst(barcode1);
    sendBarcodeBurst(null);
    sendBarcodeBurst(barcode1);
    sendBarcodeBurst(null);
    sendBarcodeBurst(barcode2);
    sendBarcodeBurst(barcode2);
    sendBarcodeBurst(null);
    sendBarcodeBurst(barcode1);
    sendBarcodeBurst(null);
}


What it basically does is send some barcodes to the presenter via the onBarcodeDetected method, interleaved with null (meaning no barcode detected), to emulate a real camera that sends barcode values and occasionally sends null whenever no barcode is in view.
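The sendBarcodeBurst() helper itself is not shown in the post; a plausible implementation (an assumption on my part) simply pushes the same detection result to the presenter several times, the way a stream of camera frames would:

private void sendBarcodeBurst(Barcode barcode) {
    // Emulate several consecutive camera frames reporting the same (or no) barcode
    for (int i = 0; i < 4; i++)
        scanQRPresenter.onBarcodeDetected(barcode);
}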

The first test simply checks that no attendee is sent if the attendee list is null, which seems pretty obvious. The second one checks that if the barcode does not match any attendee’s identifier, no attendee is sent either; it does this by setting an arbitrary list of attendees with different identifiers and sending non-matching barcodes to the presenter. Lastly, in the successful test, a single matching barcode is sent to the presenter, and it should send the particular attendee with the matching identifier.
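To tie it together, here is a minimal sketch of matching logic that would satisfy these three tests. The getIdentifier() accessor on Attendee is hypothetical, and the real presenter likely does more work, for example updating the barcode panel on the view and processing the detection stream on a background scheduler (which is what the computation scheduler override in the last test suggests):

public void onBarcodeDetected(Barcode barcode) {
    if (barcode == null || attendees == null)
        return;

    for (Attendee attendee : attendees) {
        // Compare the scanned value against the attendee's ticket identifier
        if (barcode.displayValue.equals(attendee.getIdentifier())) {
            scanQRView.onScannedAttendee(attendee);
            return;
        }
    }
}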

Phew! Quite a lot of tests, and we are done. The tests were not very large and are mostly self-explanatory. Now, you just have to implement the view and presenter methods until all the tests turn green, and you are done! You have created a feature using the MVP design and implemented it using test-driven development.

So start testing and get a seal of reliability and confidence about your code and the satisfaction of seeing the green bar fill up.