Setting up Travis Continuous Integration in Giggity

Travis is a continuous integration service that enables you to run tests against your latest Android builds. You can set up your projects to run both unit and integration tests, which can also include launching an emulator. I recently added Travis continuous integration to the Connfa, Giggity and Giraffe apps. In this blog post, I describe how to set up Travis continuous integration in an Android project, with reference to the Giggity app.

  • Using your GitHub account, sign in to either travis-ci.org for public repositories or travis-ci.com for private repositories.
  • Accept the GitHub access permissions confirmation.
  • Once you’re signed in to Travis CI and have synchronized your GitHub repositories, go to your profile page and enable the repository you want to build:

  • Now you need to add a .travis.yml file to the root of your project. This file tells Travis how to handle the builds. You should check your .travis.yml file with the Travis Web Lint tool before committing any changes to it (a command-line alternative is sketched after the configuration below).
  • You can find the very basic instructions for building an Android project in the Travis documentation. Here we specify the .travis.yml accordingly for Giggity’s continuous integration. The language key shows that it is an Android project; we would write “language: ruby” if it were a Ruby project. If you need a more customizable environment running in a virtual machine, use the sudo-enabled infrastructure. Similarly, we define the API level, Play Services and libraries as shown.
language: android
sudo: required
jdk:
  - oraclejdk8
# Use the Travis sudo-enabled (VM-based) infrastructure
android:
  components:
    - platform-tools
    - tools
    - build-tools-25.0.3
    - android-25
    
    # For Google APIs
    - addon-google_apis-google-$ANDROID_API_LEVEL
    # Google Play Services
    - extra-google-google_play_services
    # Support library
    - extra-android-support
    # Latest artifacts in local repository
    - extra-google-m2repository
    - extra-android-m2repository
    - android-sdk-license-.+
    - '.+'

before_script:
  - chmod +x gradlew    

script:
  - ./gradlew build
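As mentioned earlier, besides the web-based lint tool, the configuration can also be checked locally with the Travis CLI. This is an optional extra, not a required part of the setup described here:

# Install the Travis CLI (a Ruby gem) and lint the configuration file locally
gem install travis
travis lint .travis.yml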

Now, when you make a commit or open a pull request, Travis checks whether all the defined checks pass and whether the branch can be merged. To go further, you can also configure the build to produce APKs with every run, as sketched below.
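As a rough sketch (this is not part of Giggity’s actual configuration), the script step could additionally assemble a debug APK, and a deploy step could attach it to GitHub releases for tagged commits. The $GITHUB_TOKEN variable and the APK path are assumptions that depend on the project’s setup:

script:
  - ./gradlew build
  # Assemble a debug APK on every build (module name "app" assumed)
  - ./gradlew assembleDebug

deploy:
  provider: releases          # attach build artifacts to GitHub releases
  api_key: $GITHUB_TOKEN      # assumed: a GitHub token stored as an encrypted Travis variable
  file: app/build/outputs/apk/app-debug.apk
  skip_cleanup: true
  on:
    tags: true                # only deploy builds for tagged commits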

References:

  • Travis Continuous Integration Documentation – https://docs.travis-ci.com/user/getting-started/

Using Firebase Test Lab for Testing test cases of Phimpme Android

We have now started writing test cases for Phimpme Android. While running my instrumentation test cases, I noticed a Cloud Testing tab in Android Studio, which is for Firebase Test Lab. Firebase Test Lab provides cloud-based infrastructure for testing Android apps. Not everyone has access to devices running every Android version, but testing on all of them is equally important.

How I used test lab in Phimpme

  • Run your first test on Firebase

Select Test Lab in your project in the left nav of the Firebase console, and then click Run a Robo test. The Robo test automatically explores your app on a wide array of devices to find defects and reports any crashes that occur. It doesn’t require you to write test cases; all you need is the app’s APK.

Upload your application’s APK (app-debug-unaligned.apk) on the next screen and click Continue.

Configure the device selection; a wide range of devices and all API levels are available there. You can save the template for future use.

Click Start Test to begin testing. It will run the tests and show real-time progress as well. (A command-line alternative is sketched below.)
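If you prefer the command line, a Robo test can also be started with the gcloud CLI. The following is an illustrative sketch, not part of the Phimpme setup; the APK path and device model are assumptions, and the flags may vary slightly between gcloud versions:

# Run a Robo test on a representative device/API combination
gcloud firebase test android run \
  --type robo \
  --app app/build/outputs/apk/app-debug.apk \
  --device model=Nexus6,version=25,locale=en,orientation=portrait \
  --timeout 5m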

  • Using Firebase Test Lab from Android Studio

It requires Android Studio 2.0+. You need to edit the run configuration of your Android instrumentation test.

Select Firebase Test Lab Device Matrix under Target. You can configure the matrix, which defines the virtual and physical devices on which you want to run your tests. See the screenshot below for details.

Note: You need to enable Firebase in your project.

So, using Test Lab on Firebase, we can easily run our test cases on multiple devices and make our app more robust across devices.


UI Espresso Test Cases for Phimpme Android

We are now heading toward a release of Phimpme soon, so we are increasing code coverage by writing test cases for our app. What is a test case? Test cases are scripts against which we run our code to test the implementation of features. They basically describe the expected output, flow and feature steps of the app. To release an app on multiple platforms, it is highly recommended to test it against test cases.

For example, let’s consider an app which has one button. First we write a UI test case which checks whether the button is displayed on the screen or not, and in response it reports whether the test case passes or fails.

Steps to add a UI test case using Espresso

The Espresso testing framework provides APIs to simulate user interactions. It has a concise API. New versions of Android Studio even include a feature to record Espresso test cases. I’ll show you how to use the recorder to write test cases in the steps below.

  • Setup Project Directory

Android instrumentation tests must be placed in the androidTest directory. If it is not there, create the directory at app/src/androidTest/java…

  • Write Test Case

First, I am writing a very simple test case which checks whether the three bottom navigation view items are displayed or not.

Espresso Testing framework has basically three components:

ViewMatchers

ViewMatchers help us find the correct view on which actions can be performed, e.g. onView(withId(R.id.navigation_accounts)). Here I am taking the view of the accounts item in the bottom navigation view.

ViewActions

ViewActions allow us to perform actions on the view we obtained earlier, e.g. the very basic click() operation.

ViewAssertions

ViewAssertions allow us to assert the current state of the view, e.g. isDisplayed() is an assertion on the view we obtained. So the basic structure of an Espresso test case is

onView(ViewMatcher)       
 .perform(ViewAction)     
   .check(ViewAssertion);

We can also use the Hamcrest framework, which provides extra matchers for checking conditions in the code.
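Putting these three components together, a minimal sketch of such a test could look like the following. The activity name and the navigation item ids other than navigation_accounts are assumptions for illustration; Phimpme’s actual identifiers may differ:

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;

@RunWith(AndroidJUnit4.class)
public class BottomNavigationTest {

    // Launches the activity under test before each test method (activity name assumed)
    @Rule
    public ActivityTestRule<MainActivity> activityRule =
            new ActivityTestRule<>(MainActivity.class);

    @Test
    public void bottomNavigationItemsAreDisplayed() {
        // Assert that each bottom navigation item is visible on the screen (ids assumed)
        onView(withId(R.id.navigation_home)).check(matches(isDisplayed()));
        onView(withId(R.id.navigation_share)).check(matches(isDisplayed()));
        onView(withId(R.id.navigation_accounts)).check(matches(isDisplayed()));
    }
}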

Setup Espresso in Code

Add this in your application level build.gradle

// Android Testing Support Library's runner and rules
androidTestCompile "com.android.support.test:runner:$rootProject.ext.runnerVersion"
androidTestCompile "com.android.support.test:rules:$rootProject.ext.rulesVersion"

// Espresso UI Testing dependencies.
androidTestCompile "com.android.support.test.espresso:espresso-core:$rootProject.ext.espressoVersion"
androidTestCompile "com.android.support.test.espresso:espresso-contrib:$rootProject.ext.espressoVersion"
  • Use recorder to write the Test Case

The new recorder feature is great if you want to set everything up quickly. Go to Run → Record Espresso Test in Android Studio.

It dumps the current user interface hierarchy and lets you add assertions against it.

You can edit the assertions by selecting an element and applying the assertion to it.

Save and run the test cases by right-clicking the class name and selecting Run ‘Test case name’.

The console will show the progress of the test case. Here it shows passed, which means it found all the views in the hierarchy required by the test case.


Calibrating the PSLab’s Analog Features for Maximum Accuracy

The hardware design of the PSLab aims to achieve the maximum possible performance from a very conservative bill of materials. There are several analog components such as Op-amps, voltage dividers, and level-shifters involved in input signal processing that have inherent offsets and slopes that must be corrected in order get the best results. Similarly, some analog output signals from the PSLab are also modified by buffers, amplifiers, and level shifting circuits.

One way to improve the initial accuracy is to choose high-performance analog components that are factory calibrated and do not require any additional correction to achieve error margins smaller than the least count of the PSLab’s measurement capabilities. However, such components, like laser-trimmed resistor pairs and low-offset op-amps, are quite expensive, and we must instead use software-based correction methods to achieve similar performance from affordable parts.

Identifying a suitable calibrator for analog signals

In order to calibrate a device, we must first own a similar device whose measurements we can trust, and which has a finer resolution than the PSLab itself. Calibration is a one-time task that quantifies and stores the gain and offset errors, and these errors are not expected to change appreciably unless a significant change in temperature or mechanical stress is experienced.

Such a device may be as expensive as a 24-bit, research-grade multimeter, which generally costs upwards of $500, or as inexpensive as an analog-to-digital converter module that might require some expertise to extract data from, but can still be used for calibration.

Fortunately, we have been able to identify a cheaply available device that puts the calibration process within the reach and capabilities of the end user. The ADS1115 16-bit ADC is a 4-channel, 0-3.3V ADC that can be interfaced via I2C. The typical initial accuracy of its internal voltage reference is 0.01%, and data rates higher than 500 SPS are possible. It is cost effective, and is available in convenient module formats that can be plugged directly into the PSLab itself. It can be purchased through various vendors (A, B, C).

Therefore, it appears to be most suited to calibrate individual PSLab devices.

Basic requirements for the calibration process

The process of calibrating the analog inputs and outputs involves looping them back externally and monitoring the actual values via the external calibrator.

We’re killing two birds with one stone by calibrating inputs and outputs in tandem, and it makes for a faster calibration process. The complete calibration process for  Digital to Analog converter outputs has enough complexity to warrant a separate blog post.

Let’s take an example: PV1 (an analog output that can be set between -5V and +5V) can be connected to CH1 (an analog input which can read voltage values between -16V and +16V) with a small segment of wire, and various voltage values can be set on PV1 and read back by CH1. At the same time, the external calibration utility also monitors this voltage, and stores the error in PV1 (Set Voltage - Actual Voltage) as well as the error in CH1 (Read Voltage - Actual Voltage).

In a similar manner, PV2 can be connected to CH2, and the second channel of the ADS1115 calibrator can be used to monitor the real value, and so on.

Deviations of the various analog input channels, over their different voltage ranges, from the actual values. As is evident from the graph, errors can be as large as 40mV over a full-scale range of +/-16,000mV. But since these errors are quite repeatable, we can apply a calibration polynomial to correct them.

 

Integral Non-linearity of the ADC

In addition to the overall slope and offset, you have probably observed in the previous image a sawtooth pattern, with an amplitude as small as the least count of the analog inputs, superimposed on them. This error arises from the integral non-linearity (INL) of the PIC’s analog to digital converter, and affects all analog inputs uniformly. While in principle we can ignore this for all practical purposes, to further improve the analog accuracy we can also store this INL error of the ADC, and apply the correction to any channel after its slope and offset have been corrected.

The overall slope and offset are caused by the analog references and components, and can be corrected with a simple 3-degree polynomial. However, the sawtooth pattern is characteristic of the INL, and must be stored in a correction array with 4096 elements (each element represents the error of the corresponding ADC code of the 12-bit ADC).
The yellow trace represents the error in readings from the ADC after applying polynomial and table-based correction. There appears to be a small offset that can be attributed to a change in ambient temperature, but it can be neglected as it is on the order of 100uV.
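As an illustrative sketch (not the PSLab library’s actual API), the two-stage correction described above could be applied to raw ADC codes roughly as follows; the polynomial coefficients and the INL table would come from the stored calibration data:

import numpy as np

def apply_calibration(raw_codes, poly_coeffs, inl_table, code_to_volts):
    """Convert raw 12-bit ADC codes into corrected voltages.

    raw_codes     : array of integer ADC codes (0..4095)
    poly_coeffs   : 3-degree polynomial coefficients from calibration (slope/offset correction)
    inl_table     : 4096-element array of per-code INL errors, in volts
    code_to_volts : nominal conversion factor from ADC code to volts
    """
    raw_codes = np.asarray(raw_codes)
    nominal = raw_codes * code_to_volts           # uncorrected voltage
    corrected = np.polyval(poly_coeffs, nominal)  # correct overall slope and offset
    corrected -= inl_table[raw_codes]             # subtract the per-code INL error
    return corrected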

 

The following utilities and code are necessary for this process (a rough sketch of the sweep-and-record step is shown after this list):
  • An I2C communication library for ADS1115 must be present in order to acquire data from it via the PSLab
  • The library should be able to handle the following tasks
    • read single-ended and differential voltage values from any of the channels
    • Enable selection of voltage range and voltage reference
  • A graphical interface with the following features and algorithms will be required:
    • Vary the output voltages from PV1,2,3 in small, definite intervals
    • Store the errors in the analog outputs and inputs as a function of the actual voltage
    • Generate Cubic interpolation functions for each input and output channel
    • The Programmable Current Source can be calibrated using a measured Load resistor, and calibrated analog channel. Its interpolation function must also be stored.
    • Write all calibration constants into flash memory after assigning a timestamp
    • Store raw calibration data in a client-side folder
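A rough sketch of the sweep-and-record step referenced in the list above is shown here. The set_output, read_input and read_reference callables are hypothetical wrappers around the PSLab output (e.g. PV1), the PSLab input (e.g. CH1) and the ADS1115 calibrator; they are not actual PSLab library calls:

import numpy as np
from scipy.interpolate import interp1d

def calibrate_channel_pair(set_output, read_input, read_reference, points):
    """Sweep an analog output, record errors, and build cubic correction functions."""
    set_values, actual, read_back = [], [], []
    for v in points:
        set_output(v)                     # program the analog output (e.g. PV1)
        actual.append(read_reference())   # trusted value from the ADS1115
        read_back.append(read_input())    # value reported by the PSLab input (e.g. CH1)
        set_values.append(v)

    set_values = np.array(set_values)
    actual = np.array(actual)
    read_back = np.array(read_back)

    output_error = set_values - actual    # Set Voltage - Actual Voltage
    input_error = read_back - actual      # Read Voltage - Actual Voltage

    # Cubic interpolation functions mapping a value to its error
    output_correction = interp1d(set_values, output_error, kind='cubic')
    input_correction = interp1d(read_back, input_error, kind='cubic')
    return output_correction, input_correction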
Testing of the analog inputs after applying calibration polynomials. It can be observed that the accuracy has been brought within a +/-5mV range for the wide input channels. For CH3, a +/-1mV accuracy is achieved.

Testing Errors and Exceptions Using Unittest in Open Event Server

Like all the other helper functions in FOSSASIA‘s Open Event Server, we also need to test the exception and error helper functions and classes. The error helper classes are mainly used to create error handler responses for known errors. For example, we know error 403 is Access Forbidden, but we want to send a proper source and a proper error message along with it to help identify and handle the error; hence we use the error classes. To ensure that future commits do not change these errors unintentionally, we implemented unit tests for them.

There are mainly two kinds of error classes: one is for HTTP status errors and the other for exceptions. Depending on the type of error we get in the try-except block for a particular API, we raise the corresponding exception or error.

Unit Test for Exception

Exceptions are written in this form:

@validates_schema
    def validate_quantity(self, data):
        if 'max_order' in data and 'min_order' in data:
            if data['max_order'] < data['min_order']:
                raise UnprocessableEntity({'pointer': '/data/attributes/max-order'},
                                          "max-order should be greater than min-order")

 

This error is raised wherever the data that is sent as POST or PATCH is unprocessable. For example, this is how we raise this error:

raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'},

           "min-quantity should be less than max-quantity")

This exception is raised due to an error in the validation of data, where the maximum quantity should be more than the minimum quantity.

To test that the above line indeed raises an exception of UnprocessableEntity with status 422, we use the assertRaises() function. Following is the code:

 def test_exceptions(self):
        # Unprocessable Entity Exception
        with self.assertRaises(UnprocessableEntity):
            raise UnprocessableEntity({'pointer': '/data/attributes/min-quantity'},
                                      "min-quantity should be less than max-quantity")


In the above code, with self.assertRaises() creates a context for the expected exception type, so that when the next line raises an exception, the test asserts that the raised exception matches the expected one, ensuring that the correct exception is being raised.

Unit Test for Error

In the error helper classes, for known HTTP status codes we return a response that is readable and understandable by the user. So this is how we raise an error:

ForbiddenError({'source': ''}, 'Super admin access is required')

This is basically the 403: Access Forbidden error. But with the “Super admin access is required” message it becomes far clearer. However, we need to ensure that the status code returned along with this error message stays 403 and is not modified unintentionally in the future.
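For context, a simplified sketch of what such an error helper class might look like is shown below; this is only an illustration, and the actual classes in the project may be structured differently:

class ErrorResponse(object):
    """Base class for error responses returned by the API (simplified sketch)."""
    status = 500
    title = 'Unknown error'

    def __init__(self, source, detail):
        self.source = source
        self.detail = detail


class ForbiddenError(ErrorResponse):
    """403: the requester lacks the required permissions."""
    status = 403
    title = 'Access Forbidden'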

Here, errors and exceptions work a little differently. When we declare a custom error class, we don’t actually raise that error; instead we return it as a response. So we can’t use the assertRaises() function. What we can do instead is compare the status code and ensure that the error created is the same as the expected one. So we do this:

def test_errors(self):
        with app.test_request_context():
            # Forbidden Error
            forbidden_error = ForbiddenError({'source': ''}, 'Super admin access is required')
            self.assertEqual(forbidden_error.status, 403)

            # Not Found Error
            not_found_error = NotFoundError({'source': ''}, 'Object not found.')
            self.assertEqual(not_found_error.status, 404)


Here we first create an object of the ForbiddenError class with a sample source and message. We then assert, using the assertEqual() function, that the status attribute of this object is 403, which ensures that this error is of the Access Forbidden type, as expected.
This helps us make sure that no one unknowingly or by mistake changes the error messages and status codes in the future, so the HTTP status codes in the responses stay consistent.



Open Event Server: Testing Image Resize Using PIL and Unittest

FOSSASIA‘s Open Event Server project uses a certain set of functions to resize an image from its original size to, for example, a thumbnail, icon or larger image. How do we test these image-resizing functions in the Open Event Server project? To test the resizing functionality, we need to verify that the resized image’s dimensions are the same as the dimensions provided for the resize. For example, in this function, we provide the URL of the image that we received, and it creates and saves a resized version.

def create_save_resized_image(image_file, basewidth, maintain_aspect, height_size, upload_path,
                              ext='jpg', remove_after_upload=False, resize=True):
    """
    Create and Save the resized version of the background image
    :param resize:
    :param upload_path:
    :param ext:
    :param remove_after_upload:
    :param height_size:
    :param maintain_aspect:
    :param basewidth:
    :param image_file:
    :return:
    """
    filename = '{filename}.{ext}'.format(filename=get_file_name(), ext=ext)
    image_file = cStringIO.StringIO(urllib.urlopen(image_file).read())
    im = Image.open(image_file)

    # Convert to jpeg for lower file size.
    if im.format != 'JPEG':
        img = im.convert('RGB')
    else:
        img = im

    if resize:
        if maintain_aspect:
            width_percent = (basewidth / float(img.size[0]))
            height_size = int((float(img.size[1]) * float(width_percent)))

        img = img.resize((basewidth, height_size), PIL.Image.ANTIALIAS)

    temp_file_relative_path = 'static/media/temp/' + generate_hash(str(image_file)) + get_file_name() + '.jpg'
    temp_file_path = app.config['BASE_DIR'] + '/' + temp_file_relative_path
    dir_path = temp_file_path.rsplit('/', 1)[0]

    # create dirs if not present
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)

    img.save(temp_file_path)
    upfile = UploadedFile(file_path=temp_file_path, filename=filename)

    if remove_after_upload:
        os.remove(image_file)

    uploaded_url = upload(upfile, upload_path)
    os.remove(temp_file_path)

    return uploaded_url


In this function, we send the image URL, the width and height to resize to, and the aspect ratio flag (True or False), along with the folder in which to save the image. For this blog post, we assume the aspect ratio is False, which means we don’t maintain the aspect ratio while resizing. So, given the above parameters, we get back the URL of the resized image that is saved.
To test whether it has been resized to the correct dimensions, we use Pillow or, as it is popularly known, PIL. We write a separate function named getsizes() which takes the image file as a parameter. Then, using the Image module of PIL, we open the file as a JpegImageFile object. The JpegImageFile object has an attribute size which returns (width, height). So from this function, we return the size attribute. Following is the code:

def getsizes(self, file):
        # get file size *and* image size (None if not known)
        im = Image.open(file)
        return im.size


Now that we have this function, it’s time to look at the unit test itself. In the unit test we set a dummy width and height that we want to resize to and set the aspect ratio to False, as discussed above. This lets us test that both the width and the height are resized properly. We are using a Creative Commons licensed image for resizing. This is the code:

def test_create_save_resized_image(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            width = 500
            height = 200
            aspect_ratio = False
            upload_path = 'test'
            resized_image_url = create_save_resized_image(image_url_test, width, aspect_ratio, height, upload_path, ext='png')
            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_width, resized_height = self.getsizes(resized_image_file)


In the above code, from create_save_resized_image we receive the URL of the resized image. Since we have written all the unit tests for local settings, we get a URL with localhost as the server. However, we don’t have the server running, so we can’t access the image through the URL. Instead, we build the absolute path to the image file from the URL and store it in resized_image_file. Then we find the size of the image using the getsizes function that we have already written. This gives us the width and height of the newly resized image. We now assert that the width we wanted to resize to is equal to the actual width of the resized image, and we make the same check for the height. If both match, then the resizing function has worked perfectly. Here is the complete code:

def test_create_save_resized_image(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            width = 500
            height = 200
            aspect_ratio = False
            upload_path = 'test'
            resized_image_url = create_save_resized_image(image_url_test, width, aspect_ratio, height, upload_path, ext='png')
            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_width, resized_height = self.getsizes(resized_image_file)
            self.assertTrue(os.path.exists(resized_image_file))
            self.assertEqual(resized_width, width)
            self.assertEqual(resized_height, height)


In the Open Event Orga Server, we use this resize function to create three resized images in various modules, such as events, users, etc. The three sizes are named Large, Thumbnail and Icon. Depending on which is more suitable, we use it to avoid loading a very big image for a very small div. The exact width and height for these three sizes can be changed from the admin settings of the project. We use the same technique as mentioned above and run a loop to check the sizes for all of them. Here is the code:

def test_create_save_image_sizes(self):
        with app.test_request_context():
            image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'
            image_sizes_type = "event"
            width_large = 1300
            width_thumbnail = 500
            width_icon = 75
            image_sizes = create_save_image_sizes(image_url_test, image_sizes_type)

            resized_image_url = image_sizes['original_image_url']
            resized_image_url_large = image_sizes['large_image_url']
            resized_image_url_thumbnail = image_sizes['thumbnail_image_url']
            resized_image_url_icon = image_sizes['icon_image_url']

            resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
            resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large.split('/localhost')[1]
            resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail.split('/localhost')[1]
            resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon.split('/localhost')[1]

            resized_width_large, _ = self.getsizes(resized_image_file_large)
            resized_width_thumbnail, _ = self.getsizes(resized_image_file_thumbnail)
            resized_width_icon, _ = self.getsizes(resized_image_file_icon)

            self.assertTrue(os.path.exists(resized_image_file))
            self.assertEqual(resized_width_large, width_large)
            self.assertEqual(resized_width_thumbnail, width_thumbnail)
            self.assertEqual(resized_width_icon, width_icon)


Creating Unit Tests for File Upload Functions in Open Event Server with Python Unittest Library

In FOSSASIA‘s Open Event Server, we use the Python unittest library for unit testing various modules of the API code. The unittest library provides us with various assertion functions to compare the actual and expected values returned by a function or module. For normal modules we simply use these assertions to compare the results, since the parameters are mostly normal data types. However, one very important area for unit testing is file uploading. We cannot simply send a particular file or payload to the function to unit test it properly, since it expects request.files-style data, which is only available when a file is uploaded or sent as a request to an endpoint. For example, take this function:

def uploaded_file(files, multiple=False):
    if multiple:
        files_uploaded = []
        for file in files:
            extension = file.filename.split('.')[1]
            filename = get_file_name() + '.' + extension
            filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
            if not os.path.isdir(filedir):
                os.makedirs(filedir)
            file_path = filedir + filename
            file.save(file_path)
            files_uploaded.append(UploadedFile(file_path, filename))

    else:
        extension = files.filename.split('.')[1]
        filename = get_file_name() + '.' + extension
        filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
        if not os.path.isdir(filedir):
            os.makedirs(filedir)
        file_path = filedir + filename
        files.save(file_path)
        files_uploaded = UploadedFile(file_path, filename)

    return files_uploaded


So, we need to create a mock uploading system to replicate this behaviour. Inside the unit test function we create an API route, scoped to this test, that accepts a file in the request. Following is the code:

@app.route("/test_upload", methods=['POST'])
        def upload():
            files = request.files['file']
            file_uploaded = uploaded_file(files=files)
            return jsonify(
                {'path': file_uploaded.file_path,
                 'name': file_uploaded.filename})


The above code creates an app route with the endpoint test_upload. It accepts a file via request.files, sends this object to the uploaded_file function (the function being unit tested), gets the result, and returns it in JSON format.
With this, the endpoint to mock a file upload is ready. Next we need to send a request with a file object. We cannot send normal data, since that would be treated as request.form; we want it to arrive in request.files. So we create two classes that inherit from existing classes.

def test_upload_single_file(self):

        class FileObj(StringIO):

            def close(self):
                pass

        class MyRequest(Request):
            def _get_file_stream(*args, **kwargs):
                return FileObj()

        app.request_class = MyRequest


The MyRequest class inherits from Flask’s Request class. We define the file stream of the Request class as FileObj, and then set the request_class attribute of the Flask app to this new MyRequest class.
After everything is set up, we need to send the request and see whether the uploaded file is saved properly. For this purpose we take the help of the StringIO library. StringIO creates a file-like object which can then be used to replicate a file upload. So we send the data as {‘file’: (StringIO(‘1,2,3,4’), ‘test_file.csv’)} to the /test_upload endpoint that we created previously. As a result, the endpoint receives the file, saves it, and returns the filename and file_path of the stored file.

 with app.test_request_context():
            client = app.test_client()
            resp = client.post('/test_upload', data = {'file': (StringIO('1,2,3,4'), 'test_file.csv')})
            data = json.loads(resp.data)
            file_path = data['path']
            filename = data['name']
            actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename
            self.assertEqual(file_path, actual_file_path)
            self.assertTrue(os.path.exists(file_path))


After this is done, we need to check whether the file_path that we receive is the expected file path. Secondly, we also check whether the file was really created or whether this is just some dummy data. We get the expected path like this:

actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename.

Then we assert, using assertEqual, that actual_file_path is the same as the path we received. Thirdly, we use assertTrue to ensure that there is a file at that path. That is,

self.assertTrue(os.path.exists(file_path))

which returns True if the file exists and False if not.

So that basically sums up the unit test. It checks:
1) that the file is saved at the correct path, and
2) that the file actually exists.
The unit test passes only if both are true; otherwise we get either an error or a failure.

Following is the entire code snippet for this unit testing function:

def test_upload_single_file(self):

        class FileObj(StringIO):

            def close(self):
                pass

        class MyRequest(Request):
            def _get_file_stream(*args, **kwargs):
                return FileObj()

        app.request_class = MyRequest

        @app.route("/test_upload", methods=['POST'])
        def upload():
            files = request.files['file']
            file_uploaded = uploaded_file(files=files)
            return jsonify(
                {'path': file_uploaded.file_path,
                 'name': file_uploaded.filename})

        with app.test_request_context():
            client = app.test_client()
            resp = client.post('/test_upload', data = {'file': (StringIO('1,2,3,4'), 'test_file.csv')})
            data = json.loads(resp.data)
            file_path = data['path']
            filename = data['name']
            actual_file_path = app.config.get('BASE_DIR') + '/static/uploads/' + filename
            self.assertEqual(file_path, actual_file_path)
            self.assertTrue(os.path.exists(file_path))


Testing Deploy Functions Using Sinon.JS in Yaydoc

In Yaydoc, we deploy the generated documentation to GitHub Pages as well as to Heroku. It is one of the important functions in the source code, and I don’t want an unnoticed change to break the build in the future, so I decided to write a test case for the deploy function. But the deploy function had a lot of dependencies, like child processes, sockets, etc. It is also not a pure function, so there is no return value to assert. So I decided to stub the child process to check whether the correct script is passed or not. To write the stub I decided to use the Sinon.JS framework, because it can be used for writing stubs, mocks and spies. One of the advantages of Sinon is that it works with any testing framework.

sinon.stub(require("child_process"), "spawn").callsFake(function (fileName, args) {
  if (fileName !== "./ghpages_deploy.sh" ) {
    throw new Error(`invalid ${fileName} invoked`);
  }

  if (fileName === "./ghpages_deploy.sh") {
    let ghArgs = ["-e", "-i", "-n", "-o", "-r"];
    ghArgs.forEach(function (x)  {
      if (args.indexOf(x) < 0) {
        throw new Error(`${x} argument is not passed`);
      }
    })
  }
 
  let process = {
    on: function (listenerId, callback) {
      if (listenerId !== "exit") {
        throw new Error("listener id is not exit");
      }
    }
  }
  return process;
});

In Sinon you can create a stub by passing the object as the first parameter and the method name as the second parameter to Sinon’s stub method. It returns a stub object; pass the function you would like to replace the original with to its callsFake method.

In the above code, I wrote a simple stub which overwrites Node.js child_process’s spawn method. So I passed the “child_process” module as the first parameter and the “spawn” method name as the second parameter. We must check whether the correct deploy script and the correct parameters are being passed, so I wrote a function which checks these conditions and passed it to the callsFake method.

describe('deploy script', function() {
  it("gh-pages deploy", function() {
    deploy.deployPages(fakeSocket, {
      gitURL: "https://github.com/sch00lb0y/yaydoc.git",
      encryptedToken: crypter.encrypt("dummykey"),
      email: "admin@fossasia.org",
      uniqueId: "ajshdahsdh",
      username: "fossasia"
    });
  });
});

Finally, test the deploy function by calling it. I use Mocha as the testing framework. I have already written a blog post on Mocha; if you’re interested in Mocha, please check it out.
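One small addition worth considering (it is not part of the snippets above, so treat it as a suggested sketch): keep a reference to the stub when creating it and restore it once the suite finishes, so that other tests see the real child_process.spawn again. Here fakeSpawn stands for the fake function shown earlier.

// Keep a reference when creating the stub so it can be restored later
const spawnStub = sinon.stub(require("child_process"), "spawn").callsFake(fakeSpawn);

after(function () {
  // Undo the fake so subsequent tests use the real spawn implementation
  spawnStub.restore();
});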


Acceptance Testing of a Feature in Open Event Frontend

In Open Event Frontend, we have integration tests for the Ember components which are used throughout the project. But even with those tests, user interaction can still surface errors. We write acceptance tests to cover such scenarios.

Acceptance tests interact with the application as a user does, ensuring that a feature works properly and determining whether or not the software system has met the requirement specifications. They are quite helpful for ensuring that our core features work properly.

Let us write an acceptance test for register feature in Open Event Frontend.

import { test } from 'qunit';
import moduleForAcceptance from 'open-event-frontend/tests/helpers/module-for-acceptance';

moduleForAcceptance('Acceptance | register');

In the first line we import test from ‘qunit’ (the default testing helper suite for Ember), which contains the required test functions. We use the test function for each scenario we want to check, and we can use it multiple times to cover multiple scenarios.

Next, we import moduleForAcceptance from ‘open-event-frontend/tests/helpers/module-for-acceptance’ which deals with application setup and teardown.

test('visiting /register', function(assert) {
  visit('/register');

  andThen(function() {
    assert.equal(currentURL(), '/register');
  });
});

Inside our test function, we simulate visiting the /register route and then check that the current route is /register.

test('visiting /register and registering with existing user', function(assert) {
  visit('/register');
  andThen(function() {
    assert.equal(currentURL(), '/register');
    fillIn('input[name=email]', 'opev_test_user@nada.email');
    fillIn('input[name=password]', 'opev_test_user');
    fillIn('input[name=password_repeat]', 'opev_test_user');
    click('button[type=submit]');
    andThen(function() {
      assert.equal(currentURL(), '/register');
      // const errorMessageDiv = findWithAssert('.ui.negative.message');
      // assert.equal(errorMessageDiv[0].textContent.trim(), 'An unexpected error occurred.');
    });
  });
});

Then we simulate visiting the /register route and registering a dummy user that already exists. For this, we first go to the /register route and check that the current route is /register. We then fill the registration form with the appropriate data and hit submit.

test('visiting /register after login', function(assert) {
  login(assert);
  andThen(function() {
    visit('/register');
    andThen(function() {
      assert.equal(currentURL(), '/');
    });
  });
});

The third test simulates visiting the /register route while a user is logged in, and it is very simple. We just visit the /register route and then check whether we are at the / route, because a user is redirected to / when they try to visit /register after logging in.
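The login helper used in this test is a custom acceptance test helper registered in the project; a rough sketch of what such a helper might look like is given below. The route, selectors and credentials here are assumptions based on the register test above, not the project’s actual helper:

import Ember from 'ember';

Ember.Test.registerAsyncHelper('login', function(app, assert) {
  // Log in with a known test user and confirm the redirect to the index route
  visit('/login');
  fillIn('input[name=email]', 'opev_test_user@nada.email');
  fillIn('input[name=password]', 'opev_test_user');
  click('button[type=submit]');
  andThen(function() {
    assert.equal(currentURL(), '/');
  });
});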

And since we have checked all the possible combinations, we simply run the tests using the following command:

ember test --server

But there is a small drawback to acceptance tests. They boot up the whole EmberJS application and start us at the application.index route. We then have to navigate to the page that contains the feature being tested. Writing acceptance tests for each and every feature would be a big waste of time and CPU cycles. For this reason, only core features are tested for acceptance.


Writing Selenium Tests for Checking Bookmark Feature and Search functionality in Open Event Webapp

We integrated Selenium testing into the Open Event Webapp and are in full swing writing tests to check the major features of the webapp. Tests help us catch issues/bugs which have been solved earlier but keep resurfacing when new changes are incorporated into the repo. In this post I describe the major features that we are testing.

Bookmark Feature
The first major feature that we want to test is the bookmark feature. It allows users to mark sessions they are interested in and view them all at once with a single click on the starred button. We want to ensure that the feature works on all the pages.

Let us discuss the design of the test. First, we start with the tracks page. We select a few sessions (2 here) for the test and note down their session ids. Finding an element by its id is simple in Selenium. After we find the session element, we find the bookmark button inside it (with the help of its class name) and click on it to mark the session. After that, we click on the starred button to display only the marked sessions and proceed to count the number of visible session elements on the page. If the number of visible session elements comes out to be 2 (the ones that we marked), the feature is working. If the number deviates, something is wrong and the test fails.


Here is a part of the code implementing the above logic. The whole code can be seen here

// Returns the number of visible session elements on the tracks page
TrackPage.getNoOfVisibleSessionElems = function() {
 return this.findAll(By.className('room-filter')).then(this.getElemsDisplayStatus).then(function(displayArr) {
   return displayArr.reduce(function(counter, value) { return value == 1 ? counter + 1 : counter; }, 0);
 });
};
// Bookmark the sessions, scrolls down the page and then count the number of visible session elements
TrackPage.checkIsolatedBookmark = function() {
 // Sample sessions having ids of 3014 and 3015 being checked for the bookmark feature
 var sessionIdsArr = ['3014', '3015'];
 var self = this;
 return  self.toggleSessionBookmark(sessionIdsArr).then(self.toggleStarredButton.bind(self)).then(function() {
   return self.driver.executeScript('window.scrollTo(0, 400)').then(self.getNoOfVisibleSessionElems.bind(self));
 });
};

Here is the excerpt of code which matches the actual number of visible session elements to the expected number. You can view the whole test script here

//Test for checking the bookmark feature on the tracks page
it('Checking the bookmark toggle', function(done) {
 trackPage.checkIsolatedBookmark().then(function(num) {
   assert.equal(num, 2);
   done();
 }).catch(function(err) {
   done(err);
 });
});

Now, we want to test this feature on the other pages: the schedule and rooms pages. We could simply follow the same approach as on the tracks page, but it is expensive in time: checking the visibility of all the session elements on the page takes quite a while due to the large number of sessions. We need a different approach. We had already marked two elements on the tracks page. We then go to the schedule page, click on the starred mode and calculate the current height of the page. We then unmark a session and recalculate the height of the page. If the bookmark feature is working, the height should decrease; this determines the correctness of the test. We follow the same approach on the rooms page too. While this is not absolutely rigorous, it is a good way to check the feature. We have already employed the exact method on the tracks page, so there was no need to apply it on the schedule and rooms pages, since it would have increased the testing time by quite a large margin.

Here is an excerpt of the code. The whole work can be viewed here

RoomPage.checkIsolatedBookmark = function() {
 // We go into starred mode and unmark sessions having id 3015 which was marked previously on tracks pages. If the bookmark feature works, then length of the web page would decrease. Return true if that happens. False otherwise
 var getPageHeight = 'return document.body.scrollHeight';
 var sessionIdsArr = ['3015'];
 var self = this;
 var oldHeight, newHeight;
 return self.toggleStarredButton().then(function() {
   return self.driver.executeScript(getPageHeight).then(function(height) {
     oldHeight = height;
     return self.toggleSessionBookmark(sessionIdsArr).then(function() {
       return self.driver.executeScript(getPageHeight).then(function(height) {
         newHeight = height;
         return oldHeight > newHeight;
       });
     });
   });
 });
};

Search Feature
Now, let us move to testing the search feature of the webapp. The main object of focus is the search bar. It is present on all the pages: the tracks, rooms, schedule, and speakers pages. It allows the user to search for a particular session or speaker and instantly fetches the results as they type.

We want to ensure that this feature works across all the pages. The tracks, rooms and schedule pages are similar in that they display all the sessions of the event, albeit in different ways. Any query made on any one of these pages should fetch the same number of session elements on the other pages too. The speakers page mostly contains information about the speakers only. So, we write a single common test for the former three pages and a slightly different test for the latter.

Designing a test for this feature is interesting. We want it to be fast and accurate. A simple way to approach this is to think of the components involved. One is the query text which will be entered in the search input bar. The other is the list of sessions which match the entered text and will be visible on the page after the text has been entered. We decide upon a text string and a list containing session ids. This list contains the ids of the sessions that should be visible for the above query and also a few ids of sessions which do not match the entered text. During the actual test, we enter the chosen text string and check the visibility of the sessions present in the chosen list. If the result matches the expected order, the feature is working well and the test passes. Otherwise, there is some problem with the implementation and the test fails.

For example, we decide upon the search text ‘Mario’ and then note the ids of the sessions which should be visible for that search.


Suppose the list of the ids come out to be

['3017', '3029', '3013', '3031']

We then add a few more session ids which should not be visible for that search text. For instance, we add two extra false ids, 3014 and 3015. The modified list would look something like this:

['3017', '3029', '3013', '3031', '3014', '3015']

Now we run the test and determine the visibility of the sessions present in the above list, compare it to the expected output and accordingly determine the fate of the test.

Expected: [true, true, true, true, false, false]
Actual Output: [true, true, true, true, true, true]

Then the test would fail since the last two sessions were not expected to be visible.

Here is some code related to it. The whole work can be seen here

function commonSearchTest(text, idList) {
 var self = this;
 var searchText = text || 'Mario';
 // First 4 session ids should show up on default search text and the last two not. If no idList provided for testing, use the idList for the default search text
 var arrId = idList || ['3017', '3029', '3013', '3031', '3014', '3015'];
 var promise = new Promise(function(resolve) {
   self.search(searchText).then(function() {
     var promiseArr = arrId.map(function(curElem) {
       return self.find(By.id(curElem)).isDisplayed();
     });

     self.resetSearchBar().then(function() {
       resolve(Promise.all(promiseArr));
     });
   });
 });
 return promise;
}

Here is the code for comparing the expected and the actual output. You can view the whole file here

it('Checking search functionality', function(done) {
 schedulePage.commonSearchTest().then(function(boolArr) {
   assert.deepEqual(boolArr, [true, true, true, true, false, false]);
   done();
 }).catch(function(err) {
   done(err);
 });
});

The search functionality test for the speakers page is done in the same style; instead of session ids, we work with speaker ids there. Everything else is done in a similar manner.
