Parallelizing the Travis build in Open Event Web app

 

Open Event Web app uses Travis CI as a platform to perform unit testing. Travis CI is a hosted, distributed continuous integration service used to build and test projects hosted on GitHub. Travis CI automatically detects when a commit has been made and pushed to a GitHub repository that is using Travis CI, and each time this happens, it will try to build the project and run the tests. The Travis build took around 24 minutes to complete whenever a commit was made to the project, which is a very long time. To reduce the build time, we parallelized the build so that it uses the maximum amount of resources available at the time and runs the jobs in parallel, which resulted in better use of resources as well as a much shorter build time.

Open Event Web app uses a Sauce Labs integration to perform Selenium tests and Travis CI to perform continuous integration.

Why parallelize the build?

When unit tests are independent of each other and can be executed using a common set of dependencies, they can be run in parallel on different virtual machines, giving maximum throughput.

Running a large number of tests on a single machine can increase the build time considerably. This time can be reduced significantly by running the tests in parallel on different machines: the wall-clock time then depends on the slowest job rather than on the sum of all jobs. Open Event Web app had a build time of around 24 minutes, which was cut roughly in half by parallelizing the build.

Parallelizing your builds across virtual machines

To speed up a test suite, you can break it up into several parts using Travis CI’s build matrix feature.

Say you want to split up your unit tests and your integration tests into two different build jobs. They'll run in parallel and fully utilize the available build capacity.
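For instance, a minimal build matrix that splits unit and integration tests into two parallel jobs could look like the sketch below (the TEST_SUITE variable and the npm scripts are illustrative placeholders, not the Web app's actual configuration):

# .travis.yml (sketch): each entry under env becomes a separate parallel job
language: node_js

env:
  - TEST_SUITE=unit
  - TEST_SUITE=integration

script:
  - npm run test:$TEST_SUITE

Travis expands the env list into independent jobs, so both suites run at the same time on separate virtual machines.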

The architecture of Open Event Web app supports a test suite for all the pages in the generated application. To parallelize the build, the test suite is divided into separate files with the directory structure shown below:

   
├── test
   ├── serverTest.js
   ├── roomsAndSpeakers.js
   ├── tracks.js
   ├── generatorAndSchedule.js
   ├── sessionAndEvent.js

 

The env key in .travis.yml is modified as shown below:

env:
  - TESTFOLDER=test/serverTest.js
  - TESTFOLDER=test/roomsAndSpeakers.js
  - TESTFOLDER=test/tracks.js
  - TESTFOLDER=test/generatorAndSchedule.js
  - TESTFOLDER=test/sessionAndEvent.js

 

The test script fetches the environment variable and runs the corresponding test file, as shown below:

# installing required items for build
install:
 - npm install -g istanbul mocha@3
 - npm install
 - npm install --save-dev

# testing script
script:
 - istanbul cover _mocha -- $TESTFOLDER

# notify codecov and deploy to cloud
after_success:
 if ([ "$TESTFOLDER" == "test/serverTest.js" ]); then
   bash <(curl -s https://codecov.io/bash);
   bash gh_deploy.sh && kubernetes/travis/deploy.sh;
 fi

Results:

The build time, which was earlier around 24 minutes, is reduced to about 12 minutes after parallelizing the build.


Tags: GSoC'18, FOSSASIA, Eventyay, Open Event Web App, Unit Test, Parallelize Build

 

Unit Tests for REST-API in Python Web Application

Badgeyay's backend has now shifted to a REST API, and to test the functions used in the API we need a testing framework that exercises each and every one of them. For our purposes, we chose Python's popular unittest framework.

In this blog, I'll be discussing how I have written unit tests to test the Badgeyay REST API.

First, let's understand what unittest is and why we have chosen it. Then we will move on to writing API tests for Badgeyay. These tests have a generic structure, so the code I mention here would work in other REST API testing scenarios, often with little to no modification.

Let’s get started and understand API testing step by step.

What is unittest?

unittest is a Python unit-testing framework that supports test automation, sharing of setup and shutdown code for tests, aggregation of tests into collections, and independence of the tests from the reporting framework. The unittest module provides classes that make it easy to support these qualities for a set of tests.
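As a quick illustration (a generic example, not Badgeyay code), a minimal unittest test case looks like this:

import unittest


class TestArithmetic(unittest.TestCase):
    def test_addition(self):
        # assertEqual fails the test if the two values differ
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main()

Running the file executes every method whose name starts with test and reports the results.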

Why unittest?

We get two primary benefits from unit testing, with a majority of the value going to the first:

  • Guides your design to be loosely coupled and well fleshed out. If doing test-driven development, it limits the code you write to only what is needed and helps you to evolve that code in small steps.
  • Provides fast automated regression for refactors and small changes to the code.
  • Unit testing also gives you living documentation about how small pieces of the system work.

We should always strive to write comprehensive tests that cover the working code pretty well.

Now, here is a glimpse of how I wrote unit tests for the code in the REST API backend of Badgeyay. Using the unittest Python package and the requests module, we can automate the testing of a REST API.
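For example, a test that exercises a running API over HTTP with requests could look roughly like the sketch below (the base URL and the /api/hello endpoint are placeholders, not Badgeyay's actual routes):

import unittest

import requests


class TestApiEndpoint(unittest.TestCase):
    BASE_URL = 'http://localhost:5000'  # placeholder; point this at the running API

    def test_hello_returns_json(self):
        # hypothetical endpoint, used only for illustration
        resp = requests.get(self.BASE_URL + '/api/hello')
        self.assertEqual(resp.status_code, 200)
        # requests parses the JSON body into a Python dict for us
        self.assertIn('response', resp.json())


if __name__ == '__main__':
    unittest.main()

For Badgeyay itself, though, the tests below use Flask's test client and request context instead of live HTTP calls.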

Below is the code snippet for which I have written unit tests in one of my pull requests.

from flask import jsonify  # jsonify builds the JSON response returned below

def output(response_type, message, download_link):
    if download_link == '':
        response = [
            {
                'type': response_type,
                'message': message
            }
        ]
    else:
        response = [
            {
                'type': response_type,
                'message': message,
                'download_link': download_link
            }
        ]
    return jsonify({'response': response})
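To make the two branches concrete, this is roughly what the function returns in each case (the values are illustrative, and the JSON is produced inside a Flask request context):

# output('success', 'Badges generated', 'http://example.com/badges.zip') ->
#   {"response": [{"type": "success",
#                  "message": "Badges generated",
#                  "download_link": "http://example.com/badges.zip"}]}
#
# output('error', 'Invalid CSV', '') ->
#   {"response": [{"type": "error", "message": "Invalid CSV"}]}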

 

To test this function, I basically created a mock object that can simulate the behavior of the real object in a controlled way. In this case, the mock object simulates the behavior of the output function and returns something like a JSON response without hitting the real REST API. The next challenge is to parse the JSON response and feed specific values from it to the Python automation script. Python reads the JSON as a dictionary object, which greatly simplifies the way the JSON needs to be parsed and used.

And here’s the content of the backend/tests/test_basic.py file.

#!/usr/bin/env python3
"""Tests for Basic Functions"""
import sys
import json
import unittest

sys.path.append("../..")
from app.main import *


class TestFunctions(unittest.TestCase):
    """Test case for the client methods."""

    def setUp(self):
        # unittest runs setUp (note the capital U) before every test
        app.config['TESTING'] = True
        self.app = app.test_client()

    # Test of output function
    def test_output(self):
        with app.test_request_context():
            # mock object
            out = output('error', 'Test Error', 'local_host')
            # Passing the mock object
            response = [
                {
                    'type': 'error',
                    'message': 'Test Error',
                    'download_link': 'local_host'
                }
            ]
            data = json.loads(out.get_data(as_text=True))
            # Assert response
            self.assertEqual(data['response'], response)


if __name__ == '__main__':
    unittest.main()

 

And finally, we can verify that everything works by running nosetests.
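For instance, from the directory that contains the test file (the path follows the backend/tests/test_basic.py layout mentioned above):

$ nosetests test_basic.py -v

The -v flag simply prints each test name as it runs.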

This is how I wrote unit tests in the BadgeYaY repository. You can find more of my work here.

Resources:

  • The Purpose of Unit Testing – Link
  • Unit testing framework – Link