Improving Loklak apps site

In this blog post I will describe some of the recent improvements made to the Loklak apps site. A new utility script has been added to automatically update the Loklak app wall after a new app has been created, and invalid app queries on the app details page are now handled gracefully.

A proper message is shown when a user enters an invalid app name in the URL of the details page, and tests have been added for the details page.

Developing updatewall script

This is a small utility script to update the Loklak app wall in order to expose a newly created app or update an existing one. Before moving into the working of this script, let us discuss how the Loklak apps site tracks all the apps and their details. In the root of the project there is a file named apps.json. This file contains an aggregation of all the app.json files present in the individual apps. When the site is loaded, index.html loads the JavaScript code present in app_list.js. This file makes an AJAX call to the root apps.json file, loads all the app details into a list and attaches this list to an AngularJS scope variable. After this, the app wall consisting of the various app details is rendered as HTML. So whenever a new app is created, in order to expose it on the wall, the developer needs to copy the contents of the application’s app.json and paste them into the root apps.json file. This is quite tedious: to publish a new app the developer first has to know how the site works, which is not directly related to the development work itself. Moreover, whenever the app’s app.json is updated, the root apps.json file has to be updated again with the new data.
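
For illustration, the root apps.json has roughly the following shape, with an apps key holding one JSON object per app; the fields inside each object come from the individual app.json files and are shown here only as an example:

{
    "apps": [
        {
            "name": "MultiLinePlotter",
            "headline": "App to plot tweet aggregations and statistics",
            "author": "Deepjyoti Mondal"
        }
    ]
}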

This newly added script (updatewall) automates this entire process. After creating a new app, all the developer needs to do is run this script from within the app directory and the app wall will be updated automatically.

Now, let us move into the working of this script. The basic workflow of the updatewall script is as follows: the script first loads the JSON data present in the app.json file of the app under consideration, and then loads the JSON data present in the root apps.json file.

import json
from collections import OrderedDict

if __name__ == '__main__':

    #open file containing the aggregated json object
    json_list_file = open(PATH_TO_ROOT_JSON, 'r')

    #load json object
    json_list = json.load(json_list_file, object_pairs_hook=OrderedDict)
    json_list_file.close()

    app_json_file = open(PATH_TO_APP_JSON, 'r')
    app_json = json.load(app_json_file, object_pairs_hook=OrderedDict)
    app_json_file.close()

    #method to update the Loklak app wall
    expose_app(json_list, app_json)

When loading the JSON data we use object_pairs_hook in order to load the data into an OrderedDict rather than a normal Python dictionary. We do this so that the order of the dictionary items is maintained (a quick illustration follows below). Once the data is loaded, we invoke the expose_app function.
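
As a quick illustration of why the hook matters (on older Python versions a plain dict does not preserve insertion order):

import json
from collections import OrderedDict

data = json.loads('{"name": "app", "version": "1.0"}', object_pairs_hook=OrderedDict)
print(list(data.keys()))  #['name', 'version'] - keys stay in the order they appear in the file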

def expose_app(json_list, app_json):
    #if app is already present in list then fetch that app
    app = getAppIfPesent(json_list, app_json)

    #if app is not present then add a new entry
    if app == None:
        json_list['apps'].append(app_json)
        update_list_file(json_list)
        print colors.BOLD + colors.OKGREEN + 'App exposed on app wall' + colors.ENDC

    #else update the existing app entry
    else:
        for key in app_json:
            app[key] = app_json[key]
        update_list_file(json_list)
        print colors.BOLD + colors.OKGREEN + 'App updated on app wall' + colors.ENDC

The apps.json file contains a key called apps. The value of this key is a list of JSON objects, each object being the JSON data of an individual app’s app.json file. In the above function we look through all the objects present in this list. If we are unable to find an object whose name value is the same as that of the newly created app, we simply append the new app’s app.json object to the list. However, if we find an object with the same name value, we update its properties. In short, if the app is a new one its data gets added to apps.json, otherwise the corresponding app data is updated.
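
For completeness, here is a minimal sketch of what the getAppIfPesent and update_list_file helpers could look like; the actual implementation in the repository may differ.

def getAppIfPesent(json_list, app_json):
    #return the existing entry whose name matches the new app, or None
    for app in json_list['apps']:
        if app.get('name') == app_json.get('name'):
            return app
    return None

def update_list_file(json_list):
    #write the updated aggregation back to the root apps.json file
    with open(PATH_TO_ROOT_JSON, 'w') as json_list_file:
        json.dump(json_list, json_list_file, indent=4)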

Handling invalid app names in the URL of details page

The URL of the app details page takes the app name as a parameter. If a user wants to see the store listing of an app, they have to use the following URL.

https://apps.loklak.org/details.html?q=<app_name>

Here the app name is a URL parameter used to load the store listing information. If anyone enters an invalid app name, that is, an app which does not exist, then a proper error message has to be shown to the user. This can be done by checking whether the given app name is present in the root apps.json file or not. If it is not present, we simply set a flag so that the error message can be conditionally rendered.

$scope.getSelectedApp = function() {
        for (var i = 0; i < $scope.apps.length; i++) {
            if ($scope.apps[i].name === $scope.appName) {
                $scope.selectedApp = $scope.apps[i];
                $scope.found = true;
                $("nav").show();
                break;
            }
        }
        if ($scope.found == false) {
            $scope.notFound = true;
        }
    }

In the above snippet if the app is not found then we set notFound to true. This causes the error message to appear on the page.

<div ng-if="notFound" class="not-found">
        <span class="brand-and-image">
          <img src="images/loklak_icon.png">
          <span class="loklak-brand"> <span class="loklak-header">
            loklak </span> <span>apps</span>
          </span>
        </span>
        <span class="error-404">
          Error: Requested app not found
        </span>
        <span class="go-back">
          <a href="/"> Go Back to Home Page >> </a>
        </span>
      </div>

The code renders the error message if notFound is set to true.

Writing tests for store listing page

Almost the entire content of the store listing is loaded dynamically by JavaScript logic, so it is very important to write tests for the store listing page. The Protractor framework has been used to write automated browser tests. The tests make sure that, for a given app, the content of the middle section is loaded correctly.

it("should have basic information", function() {
    expect(element(by.css(".app-name")).getText()).toEqual("MultiLinePlotter");
    expect(element(by.css(".app-headline")).getText()).toEqual("App to plot tweet aggregations and statistics");
    expect(element(by.css(".author")).getText()).toEqual("by Deepjyoti Mondal");
    expect(element(by.css(".short-desc")).getText()).toEqual("An applicaton to visually compare tweet statistics");
  });

The above tests make sure that the top section is loaded properly. Next we check that the getting started section and the app use section are not empty.

it("main content should not be empty", function() {
    expect(element(by.css(".get-started-md")).getText()).not.toBe("");
    expect(element(by.css(".app-use-md")).getText()).not.toBe("");
  });

Apart from these, two more tests are performed to check the behaviour of the side bar menu items on click event and the functionality of the Try now button.

Future roadmap

There is still a lot of scope for the site’s improvement and enhancement. Some of the features which can be implemented next are given below.

  • Add more tests to make the site stable and add the tests to the Travis build.
  • Make the apps independent. Work on this has already been started and can be viewed here – issue, PR
  • Optimise the site for mobile using services workers and caching (making a progressive web app).
  • Add a splash screen and home screen icon for mobile.

Using Protractor for UI Tests in Angular JS for Loklak Apps Site

The Loklak apps site’s home page and app details page have sections where data is dynamically loaded from external JavaScript and JSON files. Data is fetched from the JSON files using AngularJS, processed and then rendered to the corresponding views by controllers. Any erroneous modification to the controller functions might cause discrepancies in the frontend. Since Loklak apps is a frontend project, any bug in the home page or details page will lead to poor UI/UX. How do we deal with this? One way is to write unit tests for the various controller functions and check their behaviour. But how do we test the behaviour of the site itself? Most of the controller functions render something on the view, so one thing we can do is simulate the various browser actions and test the site against known, accepted behaviours with Protractor.

What is Protractor

Protractor is an end-to-end test framework for Angular and AngularJS apps. It runs tests against our app in a real browser, as if a real user were interacting with it, and uses browser-specific drivers to interact with the web application as any user would.

Using Protractor to write tests for Loklak apps site

First we need to install Protractor and its dependencies. Let us begin by creating an empty json file in the project directory using the following command.

echo {} > package.json

Next we will have to install Protractor.
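
A typical local installation with npm (saving Protractor as a dependency, as in the package.json shown later) looks like this:

npm install protractor --save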

The above command installs protractor and webdriver-manager. After this we need to get the necessary binaries to set up our selenium server. This can be done using the following.

./node_modules/protractor/bin/webdriver-manager update
./node_modules/protractor/bin/webdriver-manager start

Let us tidy things up a bit. We will include these commands in the package.json file under the scripts section so that we can shorten them.

Given below is the current state of package.json

{
    "scripts": {
        "start": "./node_modules/http-server/bin/http-server",
        "update-driver": "./node_modules/protractor/bin/webdriver-manager update",
        "start-driver": "./node_modules/protractor/bin/webdriver-manager start",
        "test": "./node_modules/protractor/bin/protractor conf.js"
    },
    "dependencies": {
        "http-server": "^0.10.0",
        "protractor": "^5.1.2"
    }
}

The package.json file currently holds our dependencies and scripts. It contains commands for starting the development server, updating and starting the webdriver (mentioned just before this), and running the tests.

Next we need to include a configuration file for Protractor. The configuration file should contain the test framework to be used, the address at which Selenium is running, and the path to the spec files.

// conf.js
exports.config = {
    framework: "jasmine",
    seleniumAddress: "http://localhost:4444/wd/hub",
    specs: ["tests/home-spec.js"]
};

We have set the framework to Jasmine and the Selenium address to http://localhost:4444/wd/hub. Next we need to define our actual spec file, but before writing tests we need to find out what we need to test. We will mostly be testing dynamic content loaded by JavaScript files. Let us define a spec; a spec is a collection of tests. We will start by testing the category name: initially, when the page loads, it should be equal to All apps. Next we test the top right-hand side menu, which is loaded by JavaScript using the topmenu.json file.

it("should have a category name", function() {
    expect(element(by.id("categoryName")).getText()).toEqual("All apps");
  });

  it("should have top menu", function() {
    let list = element.all(by.css(".topmenu li a"));
    expect(list.count()).toBe(5);
  });

As mentioned earlier, we are using the Jasmine framework for writing our specs. In the above code snippet, it describes a particular test. It takes a test description and a callback function, thereby providing a very efficient way to document our tests while writing the test code itself. In the first test we use the expect function to check whether the category name is equal to All apps or not. Here we select the div containing the category name by its id.

Next we write a test for the top menu. There should be five menu options in total. We select all the list items that are supposed to contain the top menu items and check whether the number of such items is five or not using the expect function. As can be seen from the snippet, the process of selecting a node is very similar to that of the jQuery library.

Next we test the left-hand side category list. This list is loaded by the AngularJS controller from the apps.json file. We should make sure the list is loaded properly and all the options are present.

it("should have a category list", function() {
    let categoryIds = ["All", "Scraper", "Search", "Visualizer", "LoklakLibraries", "InternetOfThings", "Misc"];
    let categoryNames = ["All", "Scraper", "Search", "Visualizer", "Loklak Libraries", "Internet Of Things", "Misc"];

    expect(element(by.css("#catTitle")).getText()).toBe("Categories");

    let categoryList = element.all(by.css(".category-main"));
    expect(categoryList.count()).toBe(7);

    categoryIds.forEach(function(id, index) {
      element(by.css("#" + id)).isPresent().then(function(present) {
        expect(present).toBe(true);
      });

      element(by.css("#" + id)).getText().then(function(text) {
        expect(text).toBe(categoryNames[index]);
      });
    });
  });

First we maintain two lists: category ids and category names. We begin by confirming that the category title is equal to Categories. Next we get the list of categories and iterate over them. For each category we check whether the corresponding id is present in the DOM or not. After confirming this, we match the names of the categories with the expected names. The element.all function allows us to get a list of selected nodes.

Finally we check the click functionality of the left side menu. The expected behaviour is that, on clicking a menu item, the category name gets replaced with the selected category name. For this we need to simulate the click event, which Protractor allows us to do very easily using the click function.

it("category list should respond to click", function() {
    let categoryIds = ["All", "Scraper", "Search", "Visualizer", "LoklakLibraries", "InternetOfThings", "Misc"];
    let categoryNames = ["All apps", "Scraper", "Search", "Visualizer", "Loklak Libraries", "Internet Of Things", "Misc"];

    categoryIds.forEach(function(id, index) {
      element(by.id(categoryIds[index])).click().then(function() {
        browser.getCurrentUrl().then(function(url) {
          expect(url).toBe("http://127.0.0.1:8080/#/" + categoryIds[index]);
        });
        element(by.id("categoryName")).getText().then(function(text) {
          expect(text).toBe(categoryNames[index]);
        });
      });
    });
  });



Once again we maintain two lists, category ids and category names. We obtain the current list of categories and iterate over them. For each category link we simulate a click event, and for each click event we check two values: the new browser URL, which should now contain the category id, and the value of the category name, which should be equal to the selected category.

Finally, after all the tests are over, we get the final report on our terminal. In order to run the tests, use the following command.

npm test

This will start executing the tests.

Setting up Travis Continuous Integration in Giggity

Travis is a continuous integration service that enables you to run tests against your latest Android builds. You can set up your projects to run both unit and integration tests, which can also include launching an emulator. I recently added Travis Continuous Integration to the Connfa, Giggity and Giraffe apps. In this blog post, I describe how to set up Travis Continuous Integration in an Android project with reference to the Giggity app.

  • Using your GitHub account, sign in either to travis-ci.org for public repositories or travis-ci.com for private repositories.
  • Accept the GitHub access permissions confirmation.
  • Once you’re signed in to Travis CI and have synchronized your GitHub repositories, go to your profile page and enable the repository you want to build.

  • Now you need to add a .travis.yml file to the root of your project. This file tells Travis how to handle the builds. You should check your .travis.yml file on Travis Web Lint before committing any changes to it.
  • You can find the very basic instructions for building an Android project in the Travis documentation, but here we tailor the .travis.yml build for Giggity’s continuous integration. Here, language shows that it is an Android project; we would write “language: ruby” if it were a Ruby project. If you need a more customizable environment running in a virtual machine, use the sudo-enabled infrastructure. Similarly, we define the API level, Play Services and libraries as shown.
language: android
sudo: required
jdk: 
 - oraclejdk8
# Use the Travis Container-Based Infrastructure
android:
  components:
    - platform-tools
    - tools
    - build-tools-25.0.3
    - android-25
    
    # For Google APIs
    - addon-google_apis-google-$ANDROID_API_LEVEL
    # Google Play Services
    - extra-google-google_play_services
    # Support library
    - extra-android-support
    # Latest artifacts in local repository
    - extra-google-m2repository
    - extra-android-m2repository
    - android-sdk-license-.+
    - '.+'

before_script:
  - chmod +x gradlew    

script:
  - ./gradlew build

Now, when you make a commit or open a pull request, Travis checks whether all the defined checks pass and whether it can be merged. To go further, you can also specify whether you want to build APKs with every build.

References:

  • Travis Continuous Integration Documentation – https://docs.travis-ci.com/user/getting-started/

Using Firebase Test Lab for Testing test cases of Phimpme Android

We have now started writing test cases for Phimpme Android. While running my instrumentation test case, I noticed a Cloud Testing tab in Android Studio. This is for Firebase Test Lab. Firebase Test Lab provides cloud-based infrastructure for testing Android apps. Not everyone has devices running every Android version, but testing on all of them is equally important.

How I used test lab in Phimpme

  • Run your first test on Firebase

Select Test Lab in your project in the left nav of the Firebase console, and then click Run a Robo test. The Robo test automatically explores your app on a wide array of devices to find defects and report any crashes that occur. It doesn’t require you to write test cases; all you need is the app’s APK.

Upload your application’s APK (app-debug-unaligned.apk) on the next screen and click Continue.

Configure the device selection; a wide range of devices and all API levels are available there. You can save the template for future use.

Click on Start test to begin testing. It will run the tests and show the real-time progress as well.

  • Using Firebase Test Lab from Android Studio

This requires Android Studio 2.0+. You need to edit the configuration of the Android Instrumentation test.

Select the Firebase Test Lab Device Matrix under Target. You can configure the matrix, which specifies the virtual and physical devices on which you want to run your tests. See the screenshot below for details.

Note: You need to enable Firebase in your project.

So, using Test Lab on Firebase, we can easily run our test cases on multiple devices and make the app more robust across devices.

UI Espresso Test Cases for Phimpme Android

We are heading toward a release of Phimpme soon, so we are increasing the code coverage by writing test cases for our app. What is a test case? Test cases are the scripts against which we run our code to test the implementation of features. They basically describe the expected output, flow and feature steps of the app. To release an app on multiple platforms, it is highly recommended to run the app against test cases.

For example, let’s consider that we are developing an app which has one button. First we write a UI test case which checks whether the button is displayed on the screen or not, and in response to that it reports the pass or fail of the test case.

Steps to add a UI test case using Espresso

The Espresso testing framework provides concise APIs to simulate user interactions. In the new version of Android Studio, there is even a feature to record Espresso test cases. I’ll show you how to use the recorder to write test cases in the steps below.

  • Setup Project Directory

Android instrumentation tests must be placed in the androidTest directory. If it is not there, create a directory at app/src/androidTest/java…

  • Write Test Case

Firstly, I am writing a very simple test case which checks whether the three bottom navigation view items are displayed or not.

The Espresso testing framework basically has three components:

ViewMatchers

These help to find the correct view on which some action can be performed, e.g. onView(withId(R.id.navigation_accounts)). Here I am taking the view of the accounts item in the bottom navigation view.

ViewActions

These allow us to perform actions on the view we obtained earlier, e.g. the most basic operation used is click().

ViewAssertions

These allow us to assert the current state of the view, e.g. isDisplayed() is an assertion on the view we get. So the basic structure of an Espresso test case is

onView(ViewMatcher)       
 .perform(ViewAction)     
   .check(ViewAssertion);

We can also use the Hamcrest framework, which provides extra features for checking conditions in the code.

Setup Espresso in Code

Add this to your application-level build.gradle:

// Android Testing Support Library's runner and rules
androidTestCompile "com.android.support.test:runner:$rootProject.ext.runnerVersion"
androidTestCompile "com.android.support.test:rules:$rootProject.ext.rulesVersion"

// Espresso UI Testing dependencies.
androidTestCompile "com.android.support.test.espresso:espresso-core:$rootProject.ext.espressoVersion"
androidTestCompile "com.android.support.test.espresso:espresso-contrib:$rootProject.ext.espressoVersion"

  • Use the recorder to write the test case

The new recorder feature is great if you want to set everything up quickly. Go to Run → Record Espresso Test in Android Studio.

It dumps the current user interface hierarchy and provides the ability to add assertions on it.

You can edit the assertions by selecting an element and applying the assertion to it.

Save and run the test cases by right-clicking on the name of the class and selecting Run ‘Test Case name’.

The console will show the progress of the test case. Here it is showing passed, which means it found the entire view hierarchy required by the test case.

Calibrating the PSLab’s Analog Features for Maximum Accuracy

The hardware design of the PSLab aims to achieve the maximum possible performance from a very conservative bill of materials. There are several analog components such as op-amps, voltage dividers, and level shifters involved in input signal processing that have inherent offsets and slopes which must be corrected in order to get the best results. Similarly, some analog output signals from the PSLab are also modified by buffers, amplifiers, and level-shifting circuits.

One way to improve the initial accuracy is to choose high-performance analog components that are factory calibrated and do not require any additional correction to achieve error margins smaller than the least count of the PSLab’s measurement capabilities. However, such components, such as laser-trimmed resistor pairs and low-offset op-amps, are quite expensive, and we must instead use software-based correction methods to achieve similar performance from affordable parts.

Identifying a suitable calibrator for analog signals

In order to calibrate a device, we must first own a similar device whose measurements we can trust, and which has a finer resolution than the PSLab itself. Calibration is a one-time task that will quantify and store the gain and offset errors, and these errors are not expected to behave very differently unless a significant change in temperature or mechanical stress is experienced.

Such a device may be as expensive as a 24-bit, research-grade multimeter, which generally costs upwards of $500, or it can be an inexpensive analog-to-digital converter that requires some expertise to extract data from, but can still be used for calibration.

Fortunately, we have been able to identify a cheaply available device that puts the calibration process within the reach and capabilities of the end user. The ADS1115 16-bit ADC is a 4-channel, 0-3.3V ADC that can be interfaced via I2C. The typical initial accuracy of the internal voltage reference is 0.01%, and data rates higher than 500 SPS are possible. It is cost effective, and is available in convenient module formats that can be plugged directly into the PSLab itself. It can be purchased from various vendors (A, B, C).

Therefore, it appears to be most suited to calibrate individual PSLab devices.

Basic requirements for the calibration process

The process to calibrate the analog inputs and outputs involves looping them back externally and monitoring the actual values via the external calibrator.

We’re killing two birds with one stone by calibrating inputs and outputs in tandem, and it makes for a faster calibration process. The complete calibration process for the digital-to-analog converter outputs has enough complexity to warrant a separate blog post.

Let’s take an example: PV1 (an analog output that can be set between -5V and +5V) can be connected to CH1 (an analog input which can read voltage values between -16V and +16V) with a small segment of wire, and various voltage values can be set on PV1 and read back by CH1. At the same time, the external calibration utility will also monitor this voltage, and store the error in PV1 (Set Voltage – Actual Voltage) as well as the error in CH1 (Read Voltage – Actual Voltage).

In a similar manner, PV2 can be connected to CH2 and the second channel of the ADS1115 calibrator can be used to monitor the real value, and so on.
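
As a rough sketch, the loop-back measurements could be collected with something like the following, where set_output, read_input and read_reference are hypothetical callables standing in for the PSLab analog output, the PSLab analog input and the ADS1115 calibrator respectively:

import numpy as np

def collect_loopback_errors(set_output, read_input, read_reference,
                            vmin=-5.0, vmax=5.0, points=100):
    #sweep the output over its range and record the output and input errors
    set_points = np.linspace(vmin, vmax, points)
    output_errors, input_errors = [], []
    for v in set_points:
        set_output(v)                       #program the output (e.g. PV1)
        actual = read_reference()           #trusted reading from the external calibrator
        read = read_input()                 #value reported by the PSLab input (e.g. CH1)
        output_errors.append(v - actual)    #Set Voltage - Actual Voltage
        input_errors.append(read - actual)  #Read Voltage - Actual Voltage
    return set_points, np.array(output_errors), np.array(input_errors)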

Deviations of various analog input channels and their different voltage ranges from the actual values. As is evident from the graph, errors can be as much as 40mV in a full scale range of +/-16,000mV. But since these errors are quite repeatable, we can apply a calibration polynomial to correct these.

 

Integral Non-linearity of the ADC

In addition to the overall slope and offset, you have probably observed in the previous image a sawtooth pattern, with an amplitude as small as the least count of the analog inputs, superimposed on them. This error arises from the integral non-linearity (INL) of the analog-to-digital converter of the PIC, and affects all analog inputs uniformly. While in principle we can ignore this for all practical purposes, in order to further improve the analog accuracy we can also store this INL error of the ADC and apply the correction to any channel after its slope and offset have been corrected.

The overall slope and offset are caused by the analog references and components, and can be corrected with a simple 3-degree polynomial. However, the sawtooth pattern is characteristic of the INL and must be stored in a correction array with 4096 elements (each element represents the error of the corresponding ADC code of the 12-bit ADC).

The yellow trace represents the error in readings from the ADC after applying the polynomial and table-based correction. There appears to be a small offset that can be attributed to a change in ambient temperature, but it can be neglected as it is of the order of 100uV.
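
A minimal sketch of how this two-stage correction could be computed and applied, assuming NumPy and calibration data (raw channel readings, reference readings from the ADS1115 and the corresponding ADC codes) gathered with the loop-back procedure above; this only illustrates the idea and is not the PSLab’s actual calibration code:

import numpy as np

def fit_channel_polynomial(raw_readings, reference_readings):
    #fit a 3-degree polynomial mapping raw channel readings to the reference values,
    #absorbing the overall slope and offset error of the channel
    return np.polyfit(raw_readings, reference_readings, 3)

def build_inl_table(adc_codes, residual_errors, codes=4096):
    #store the residual (sawtooth) error for each code of the 12-bit ADC
    table = np.zeros(codes)
    table[np.asarray(adc_codes)] = residual_errors
    return table

def correct_reading(raw_voltage, adc_code, poly, inl_table):
    #apply the slope/offset polynomial first, then subtract the stored INL error
    return np.polyval(poly, raw_voltage) - inl_table[adc_code]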

 

The following utilities and code are necessary for this process:
  • An I2C communication library for ADS1115 must be present in order to acquire data from it via the PSLab
  • The library should be able to handle the following tasks
    • Read single-ended and differential voltage values from any of the channels
    • Enable selection of voltage range and voltage reference
  • A graphical interface with the following features and algorithms will be required:
    • Vary the output voltages from PV1,2,3 in small, definite intervals
    • Store the errors in the analog outputs and inputs as a function of the actual voltage
    • Generate Cubic interpolation functions for each input and output channel
    • The Programmable Current Source can be calibrated using a measured Load resistor, and calibrated analog channel. Its interpolation function must also be stored.
    • Write all calibration constants into flash memory after assigning a timestamp
    • Store raw calibration data in a client-side folder

Testing of the analog inputs after applying calibration polynomials. It can be observed that the accuracy has been brought within a +/-5mV range for the wide input channels. For CH3, a +/-1mV accuracy is achieved.

Testing Errors and Exceptions Using Unittest in Open Event Server

Like all other helper functions in FOSSASIA‘s Open Event Server, we also need to test the exception and error helper functions and classes. The error helper classes are mainly used to create error handler responses for known errors. For example, we know error 403 is Access Forbidden, but we want to send a proper source along with a proper error message to help identify and handle the error, hence we use the error classes. To ensure that future commits do not change the errors unintentionally, we implemented unit tests for them.

There are mainly two kinds of error classes: one kind are HTTP status errors and the other are exceptions. Depending on the type of error we get in the try-except block for a particular API, we raise that particular exception or return that error.

Unit Test for Exception

Exceptions are written in this form:

@validates_schema
def validate_quantity(self, data):
    if 'max_order' in data and 'min_order' in data:
        if data['max_order'] < data['min_order']:
            raise UnprocessableEntity({'pointer': '/data/attributes/max-order'},
                                      "max-order should be greater than min-order")

 

This error is raised wherever the data that is sent as POST or PATCH is unprocessable. For example, this is how we raise this error:

raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'},
                          "min-quantity should be less than max-quantity")

This exception is raised due to an error in validation of the data, where the maximum quantity should be more than the minimum quantity.

To test that the above line indeed raises an exception of UnprocessableEntity with status 422, we use the assertRaises() function. Following is the code:

 def test_exceptions(self):
        # Unprocessable Entity Exception
        with self.assertRaises(UnprocessableEntity):
            raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'},
                                      "min-quantity should be less than max-quantity")


In the above code, with self.assertRaises() creates a context for the expected exception type, so that when the next line raises an exception, it asserts that the exception raised is the same as the one it was expecting and hence ensures that the correct exception is being raised.

Unit Test for Error

In the error helper classes, what we do is: for known HTTP status codes we return a response that is user-readable and understandable. So this is how we create an error:

ForbiddenError({'source': ''}, 'Super admin access is required')

This is basically the 403: Access Denied error, but with the “Super admin access is required” message it becomes far clearer. However, we need to ensure that the status code returned when this error message is shown still stays 403 and isn’t modified unintentionally in the future.
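
Internally, such an error helper class is essentially a container for a status code, a source and a message. A minimal sketch of the idea (the real classes in Open Event Server carry more JSON API specific detail) might look like this:

class ErrorResponse(object):
    #base class: bundles the pieces needed to build an error response
    status = 500

    def __init__(self, source, detail):
        self.source = source
        self.detail = detail


class ForbiddenError(ErrorResponse):
    #403: the client is not allowed to perform the requested action
    status = 403


class NotFoundError(ErrorResponse):
    #404: the requested object does not exist
    status = 404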

Here, errors and exceptions work a little differently. When we declare a custom error class, we don’t really raise that error; instead we return that error as a response. So we can’t use the assertRaises() function. What we can do, however, is compare the status code and ensure that the error created is the same as the expected one. So we do this:

def test_errors(self):
        with app.test_request_context():
            # Forbidden Error
            forbidden_error = ForbiddenError({'source': ''}, 'Super admin access is required')
            self.assertEqual(forbidden_error.status, 403)

            # Not Found Error
            not_found_error = NotFoundError({'source': ''}, 'Object not found.')
            self.assertEqual(not_found_error.status, 404)


Here we first create an object of the error class ForbiddenError with a sample source and message. We then assert, using the assertEqual() function, that the status attribute of this object is 403, which ensures that this error is of the Access Denied type, as expected. This helps us ensure that no one in future unknowingly or by mistake changes the error messages and status codes, so that the HTTP status codes in the responses are maintained.


Open Event Server: Testing Image Resize Using PIL and Unittest

FOSSASIA‘s Open Event Server project uses a certain set of functions to resize an image from its original size, for example to a thumbnail, icon or large image. How do we test these image resizing functions in the Open Event Server project? To test the resizing functionality, we need to verify that the resized image dimensions are the same as the dimensions provided for the resize. For example, in the following function, we provide the URL for the image we received and it creates a resized image and saves the resized version.

def create_save_resized_image(image_file, basewidth, maintain_aspect, height_size, upload_path,
                              ext='jpg', remove_after_upload=False, resize=True):
    """
    Create and Save the resized version of the background image
    :param resize:
    :param upload_path:
    :param ext:
    :param remove_after_upload:
    :param height_size:
    :param maintain_aspect:
    :param basewidth:
    :param image_file:
    :return:
    """
    filename = '{filename}.{ext}'.format(filename=get_file_name(), ext=ext)
    image_file = cStringIO.StringIO(urllib.urlopen(image_file).read())
    im = Image.open(image_file)

    # Convert to jpeg for lower file size.
    if im.format is not 'JPEG':
        img = im.convert('RGB')
    else:
        img = im

    if resize:
        if maintain_aspect:
            width_percent = (basewidth / float(img.size[0]))
            height_size = int((float(img.size[1]) * float(width_percent)))

        img = img.resize((basewidth, height_size), PIL.Image.ANTIALIAS)

    temp_file_relative_path = 'static/media/temp/' + generate_hash(str(image_file)) + get_file_name() + '.jpg'
    temp_file_path = app.config['BASE_DIR'] + '/' + temp_file_relative_path
    dir_path = temp_file_path.rsplit('/', 1)[0]

    # create dirs if not present
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)

    img.save(temp_file_path)
    upfile = UploadedFile(file_path=temp_file_path, filename=filename)

    if remove_after_upload:
        os.remove(image_file)

    uploaded_url = upload(upfile, upload_path)
    os.remove(temp_file_path)

    return uploaded_url


In this function, we send the image URL, the width and height to resize to, and the aspect ratio flag as either True or False, along with the folder in which to save the image. For this blog we assume the aspect ratio flag is False, which means that we don’t maintain the aspect ratio while resizing. So, given the above as parameters, we get the URL for the resized image that is saved.

To test whether it has been resized to the correct dimensions, we use Pillow or, as it is popularly known, PIL. We write a separate function named getsizes() which takes the image file as a parameter. Then, using the Image module of PIL, we open the file as a JpegImageFile object. The JpegImageFile object has a size attribute which returns (width, height). So from this function, we return the size attribute. Following is the code:

def getsizes(self, file):
        # get file size *and* image size (None if not known)
        im = Image.open(file)
        return im.size


Now that we have this function, it’s time to look at the unit testing function. In the unit test we set a dummy width and height that we want to resize to, and set the aspect ratio flag to False as discussed above. This helps us test that both width and height are properly resized. We are using a Creative Commons licensed image for resizing. This is the code:

def test_create_save_resized_image(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            width = 500
            height = 200
            aspect_ratio = False
            upload_path = 'test'
            resized_image_url = create_save_resized_image(image_url_test, width, aspect_ratio, height, upload_path, ext='png')
            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_width, resized_height = self.getsizes(resized_image_file)


In the above code, from create_save_resized_image we receive the URL for the resized image. Since we have written all the unit tests for local settings, we get a URL with localhost as the server. However, we don’t have the server running, so we can’t access the image through the URL. Instead, we build the absolute path to the image file from the URL and store it in resized_image_file. Then we find the size of the image using the getsizes function that we have already written. This gives us the width and height of the newly resized image. We now assert that the width we wanted to resize to is equal to the actual width of the resized image, and we make the same check for the height as well. If both match, then the resizing function has worked perfectly. Here is the complete code:

def test_create_save_resized_image(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            width = 500
            height = 200
            aspect_ratio = False
            upload_path = 'test'
            resized_image_url = create_save_resized_image(image_url_test, width, aspect_ratio, height, upload_path, ext='png')
            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_width, resized_height = self.getsizes(resized_image_file)
            self.assertTrue(os.path.exists(resized_image_file))
            self.assertEqual(resized_width, width)
            self.assertEqual(resized_height, height)


In the Open Event Orga Server, we use this resize function to create 3 resized images in various modules, such as events, users, etc. The 3 sizes are named Large, Thumbnail and Icon. Depending on which one is more suitable, we use it, avoiding the need to load a very big image for a very small div. The exact width and height for these 3 sizes can be changed from the admin settings of the project. We use the same technique as mentioned above and check the sizes for all of them. Here is the code:

def test_create_save_image_sizes(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            image_sizes_type = "event"
            width_large = 1300
            width_thumbnail = 500
            width_icon = 75
            image_sizes = create_save_image_sizes(image_url_test, image_sizes_type)

            resized_image_url = image_sizes['original_image_url']
            resized_image_url_large = image_sizes['large_image_url']
            resized_image_url_thumbnail = image_sizes['thumbnail_image_url']
            resized_image_url_icon = image_sizes['icon_image_url']

            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large.split('/localhost')[1]
            resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail.split('/localhost')[1]
            resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon.split('/localhost')[1]

            resized_width_large, _ = self.getsizes(resized_image_file_large)
            resized_width_thumbnail, _ = self.getsizes(resized_image_file_thumbnail)
            resized_width_icon, _ = self.getsizes(resized_image_file_icon)

            self.assertTrue(os.path.exists(resized_image_file))
            self.assertEqual(resized_width_large, width_large)
            self.assertEqual(resized_width_thumbnail, width_thumbnail)
            self.assertEqual(resized_width_icon, width_icon)

Creating Unit Tests for File Upload Functions in Open Event Server with Python Unittest Library

In FOSSASIA‘s Open Event Server, we use the Python unittest library for unit testing various modules of the API code. The unittest library provides us with various assertion functions to compare the actual and expected values returned by a function or a module. In normal modules, we simply use these assertions to compare the result, since the parameters mostly take normal data types as input. However, one very important area for unit testing is file uploading. We cannot simply send a particular file or any such payload to the function to unit test it properly, since it expects request.files-like data, which is obtained only when a file is uploaded or sent as a request to an endpoint. For example, in this function:

def uploaded_file(files, multiple=False):
    if multiple:
        files_uploaded = []
        for file in files:
            extension = file.filename.split('.')[1]
            filename = get_file_name() + '.' + extension
            filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
            if not os.path.isdir(filedir):
                os.makedirs(filedir)
            file_path = filedir + filename
            file.save(file_path)
            files_uploaded.append(UploadedFile(file_path, filename))

    else:
        extension = files.filename.split('.')[1]
        filename = get_file_name() + '.' + extension
        filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
        if not os.path.isdir(filedir):
            os.makedirs(filedir)
        file_path = filedir + filename
        files.save(file_path)
        files_uploaded = UploadedFile(file_path, filename)

    return files_uploaded


So we need to create a mock uploading system to replicate this behaviour. Inside the unit testing function we create an API route for this particular scope that accepts a file as a request. Following is the code:

@app.route("/test_upload", methods=['POST'])
        def upload():
            files = request.files['file']
            file_uploaded = uploaded_file(files=files)
            return jsonify(
                {'path': file_uploaded.file_path,
                 'name': file_uploaded.filename})


The above code creates an app route with the endpoint test_upload. It accepts a file via request.files, sends this object to the uploaded_file function (the function to be unit tested), gets the result of the function, and returns the result in JSON format.

With this, we have the endpoint to mock a file upload ready. Next we need to send a request with a file object. We cannot send normal data, which would then be treated as a normal request.form; we want it to be received in request.files. So we create 2 different classes inheriting from other classes.

def test_upload_single_file(self):

        class FileObj(StringIO):

            def close(self):
                pass

        class MyRequest(Request):
            def _get_file_stream(*args, **kwargs):
                return FileObj()

        app.request_class = MyRequest


The MyRequest class inherits the Request class of the Flask framework. We define the file stream of the Request class as the FileObj, and then set the request_class attribute of the Flask app to this new MyRequest class.

After we have it all set up, we need to send the request and see whether the uploaded file is saved properly or not. For this purpose we take the help of the StringIO library. StringIO creates a file-like object which can then be used to replicate a file upload. So we send the data as {‘file’: (StringIO(‘1,2,3,4’), ‘test_file.csv’)} to the /test_upload endpoint that we created previously. As a result, the endpoint receives the file, saves it, and returns the filename and file_path for the stored file.

 with app.test_request_context():
            client = app.test_client()
            resp = client.post('/test_upload', data = {'file': (StringIO('1,2,3,4'), 'test_file.csv')})
            data = json.loads(resp.data)
            file_path = data['path']
            filename = data['name']
            actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename
            self.assertEqual(file_path, actual_file_path)
            self.assertTrue(os.path.exists(file_path))


After this is done, we need to check whether the file_path that we receive is the expected file path that we should get. Secondly, we also check whether the file was really created or whether this is just some dummy data. We get the expected path like this:

actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename.

Then we assert, using assertEqual, that actual_file_path is the same as the resulting path we received. Thirdly, we use assertTrue to ensure that there is a file at that path. That is,

self.assertTrue(os.path.exists(file_path))

which gives True if the file exists and False if not.

So that basically sums up the unit testing. We check:
1) whether the file is saved in the correct path, and
2) whether the file actually exists.
The unit test passes only if both are true; otherwise we get either an error or a failure.

Following is the entire code snippet for this unit testing function:

def test_upload_single_file(self):

        class FileObj(StringIO):

            def close(self):
                pass

        class MyRequest(Request):
            def _get_file_stream(*args, **kwargs):
                return FileObj()

        app.request_class = MyRequest

        @app.route("/test_upload", methods=['POST'])
        def upload():
            files = request.files['file']
            file_uploaded = uploaded_file(files=files)
            return jsonify(
                {'path': file_uploaded.file_path,
                 'name': file_uploaded.filename})

        with app.test_request_context():
            client = app.test_client()
            resp = client.post('/test_upload', data = {'file': (StringIO('1,2,3,4'), 'test_file.csv')})
            data = json.loads(resp.data)
            file_path = data['path']
            filename = data['name']
            actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename
            self.assertEqual(file_path, actual_file_path)
            self.assertTrue(os.path.exists(file_path))

Testing Deploy Functions Using Sinon.JS in Yaydoc

In Yaydoc, we deploy the generated documentation to GitHub Pages as well as Heroku. This is one of the important functions in the source code, and I don’t want to break the build in future through an unnoticed change, so I decided to write a test case for the deploy function. But the deploy function had a lot of dependencies like child processes, sockets, etc. Also, it is not a pure function, so there is no return value to assert against. I therefore decided to stub the child process to check whether the correct script was passed or not. To write the stub I decided to use the Sinon.JS framework, because it can be used for writing stubs, mocks and spies. One of the advantages of Sinon is that it works with any testing framework.

sinon.stub(require("child_process"), "spawn").callsFake(function (fileName, args) {
  if (fileName !== "./ghpages_deploy.sh" ) {
    throw new Error(`invalid ${fileName} invoked`);
  }

  if (fileName === "./ghpages_deploy.sh") {
    let ghArgs = ["-e", "-i", "-n", "-o", "-r"];
    ghArgs.forEach(function (x)  {
      if (args.indexOf(x) < 0) {
        throw new Error(`${x} argument is not passed`);
      }
    })
  }
 
  let process = {
    on: function (listenerId, callback) {
      if (listenerId !== "exit") {
        throw new Error("listener id is not exit");
      }
    }
  }
  return process;
});

In Sinon you can create a stub by passing the object as the first parameter and the method name as the second parameter to Sinon’s stub method. It returns an object on which you call callsFake, passing the function you would like to run in place of the original.

In the above code, I wrote a simple stub which overwrites the spawn method of Node.js’s child_process module. So I passed the “child_process” module as the first parameter and the “spawn” method name as the second parameter. We must check whether the correct deploy script and the correct parameters are being passed, so I wrote a function which checks these conditions and passed it to the callsFake method.

describe('deploy script', function() {
  it("gh-pages deploy", function() {
    deploy.deployPages(fakeSocket, {
      gitURL: "https://github.com/sch00lb0y/yaydoc.git",
      encryptedToken: crypter.encrypt("dummykey"),
      email: "[email protected]",
      uniqueId: "ajshdahsdh",
      username: "fossasia"
    });
  });
});

Finally, we test the deploy function by calling it. I use Mocha as the testing framework; I have already written a blog post on Mocha, so if you’re interested please check it out.
