Add Unit Test in SUSI.AI Android App

Unit testing is an integral part of software development. Hence, this blog focuses on adding unit tests to the SUSI.AI Android app. To keep things simple, take a very basic example: the anonymized feedback section. In this section the email of the user is truncated after the ‘@’ symbol in order to maintain the anonymity of the user. Here is the function that takes an email as a parameter and returns the truncated email that is to be displayed in the feedback section:

// Truncates the given email just after the '@' symbol, e.g.
// "testuser@example.com" -> "testuser@ ...". Returns null when the
// input is null, empty or contains no '@' symbol.
fun truncateEmailAtEnd(email: String?): String? {
    if (!email.isNullOrEmpty()) {
        val truncateAt = email?.indexOf('@')
        if (truncateAt is Int && truncateAt != -1) {
            return email.substring(0, truncateAt.plus(1)) + " ..."
        }
    }
    return null
}

 

The unit test has to be written for the above function.

Step – 1 : Add the following dependencies to your build.gradle file.

//unit test
testImplementation "junit:junit:4.12"
testImplementation "org.mockito:mockito-core:1.10.19"

 

Step – 2 : Add a file in the test folder, in the same package as the file to be tested. The function above is present in the Utils.kt file. Thus create a file called UtilsTest.kt in the test folder, in the package ‘org.fossasia.susi.ai.helper’.

Step – 3 : Add a method, called testTruncateEmailAtEnd(), to UtilsTest.kt and add the ‘@Test’ annotation before this method.

Step – 4 : Now add tests for various cases, including all possible corner cases that might occur. This can be done using assertEquals(), which takes two parameters – the expected value and the actual value.

For example, consider the email ‘testuser@example.com’. This email is passed as a parameter to the truncateEmailAtEnd() method. The expected returned string would be ‘testuser@ …’. So, add a test for this case using assertEquals() as:

assertEquals("testuser@ ...", Utils.truncateEmailAtEnd("testuser@example.com"))

 

Similarly, add other cases, like an empty email string, a null string, an email with numbers and symbols, and so on.

Here is how the UtilsTest.kt class looks:

package org.fossasia.susi.ai.helper

import junit.framework.Assert.assertEquals
import org.junit.Test

class UtilsTest {
   @Test
   fun testTruncateEmailAtEnd() {
       assertEquals("testuser@ ...", Utils.truncateEmailAtEnd("testuser@example.com"))
       assertEquals(null, Utils.truncateEmailAtEnd("testuser"))
       assertEquals(null, Utils.truncateEmailAtEnd(""))
       assertEquals(null, Utils.truncateEmailAtEnd(" "))
       assertEquals(null, Utils.truncateEmailAtEnd(null))
       assertEquals("test.user@ ...", Utils.truncateEmailAtEnd("test.user@example.com"))
       assertEquals("test_user@ ...", Utils.truncateEmailAtEnd("test_user@example.com"))
       assertEquals("test123@ ...", Utils.truncateEmailAtEnd("test123@example.com"))
       assertEquals(null, Utils.truncateEmailAtEnd("test user@example.com"))
   }
}

 

Note: You can add more tests to check for other general and corner cases.

Step – 5 : Run the tests in UtilsTest.kt.
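The tests can be run from Android Studio by right-clicking the UtilsTest class and choosing Run, or from the command line via Gradle (the exact unit-test task name may vary with the build variant):

./gradlew test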

If all the assertions hold, the tests pass. But if the tests fail, try to figure out the cause of the failure and add/modify the code in Utils.kt accordingly. This approach helps recognize flaws in the existing code, thereby reducing the risk of bugs and failures.


Handling Android Runtime permissions in UI Tests in SUSI.AI Android

With the introduction of runtime permissions in Marshmallow (API level 23), in SUSI.AI it was necessary to ensure that:

  • It was verified if we had the permission that was needed, when required
  • The user was requested to grant permission, when it deemed appropriate
  • The request (empty states or data feedback) was correctly handled within the UI to represent the outcome of being granted or denied the required permission

You might have written UI tests. What about instances where the app needs the user’s permissions, like allow the app to access contacts on the device, for the tests to run? Would the tests pass when the test is run on Android 6.0+ devices?
And, can Espresso be used to achieve this? Unfortunately, Espresso doesn’t have the ability to access components from outside of the application package. So, how to handle this?
There are two approaches to handle this :
1) Using the UI Automator
2) Using the GrantPermissionRule

Let us have a look at both of these approaches in detail :

Using UI Automator to Handle Runtime Permissions on Android for UI Tests :

UI Automator is a UI testing framework suitable for cross-app functional UI testing across system and installed apps. This framework requires Android 4.3 (API level 18) or higher.
The UI Automator testing framework provides a set of APIs to build UI tests that perform interactions on user apps and system apps. The UI Automator APIs allow you to perform operations such as opening the Settings menu or the app launcher on a test device. This testing framework is well-suited for writing black box-style automated tests, where the test code does not rely on internal implementation details of the target app.

The key features of this testing framework include the following :

  • A viewer to inspect layout hierarchy. For more information, see UI Automator Viewer.
  • An API to retrieve state information and perform operations on the target device. For more information, see Accessing device state.
  • APIs that support cross-app UI testing. For more information, see UI Automator APIs.
  • Unlike Espresso, UIAutomator can interact with system applications which means that you’ll be able to interact with the permissions dialog, if needed.

So, how to do this? Well, if you want to grant a permission in a UI test then you need to find the corresponding UiObject that you wish to click on. In our case, the permissions dialog box is the UiObject. This object is a representation of a view – it is not bound to the view but contains information to locate the matching view at runtime, based on the properties of the UiSelector instance within its constructor. A UiSelector instance is an object that declares elements to be targeted by the UI test within the layout. You can set various properties, such as a text value, class name or content-description, for this UiSelector instance.

So, once you have your UiObject (the permissions dialog), you can determine which option you want to select and then use the click() method to grant or deny the permission.

fun allowPermissionsIfNeeded() {
    if (Build.VERSION.SDK_INT >= 23) {
        val allowPermissions: UiObject = mDevice
            .findObject(UiSelector().text("ALLOW"))
        if (allowPermissions.exists()) {
            try {
                allowPermissions.click()
            } catch (e: UiObjectNotFoundException) {
                Log.e(TAG, "There is no permission dialog to interact with", e)
            }
        }
    }
}
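The mDevice handle used above is a UiDevice instance. A minimal sketch of how it could be obtained inside an instrumentation test (assuming the pre-AndroidX support-test artifacts that were current at the time) looks like this:

import android.support.test.InstrumentationRegistry
import android.support.test.uiautomator.UiDevice

// Obtain a handle to the device the instrumentation test is running on.
val mDevice: UiDevice = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())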

Similarly, you can also handle the “DENY” option.

So, this is how you can use UI Automator to handle Android runtime permissions for UI testing. Now, let us have a look at the other, newer approach:

Using GrantPermissionRule to Handle Runtime Permissions on Android for UI Tests :

GrantPermissionRule is used to grant runtime permissions in advance, which prevents the permission dialog from showing up and blocking the UI of the app. In this approach, permissions can only be requested on API level 23 (Android M) or higher. All you need to do is add the following rule to your UI test:

@Rule
public GrantPermissionRule mRuntimePermissionRule =
        GrantPermissionRule.grant(android.Manifest.permission.ACCESS_FINE_LOCATION);

ACCESS_FINE_LOCATION (in the above code) can be replaced by any other permission that your app requires.
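Since the SUSI.AI Android app is written in Kotlin, the equivalent declaration in a Kotlin test class would look roughly like this (a sketch, not taken verbatim from the app’s sources):

@get:Rule
val runtimePermissionRule: GrantPermissionRule =
    GrantPermissionRule.grant(android.Manifest.permission.ACCESS_FINE_LOCATION)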

This will also be implemented in the SUSI.AI Android app for UI tests. Unit tests and UI tests form an integral part of good software. Hence, you need to write quality tests for your projects to detect and fix bugs and flaws easily and conveniently.


Implement Tests for Feedback List in Open Event Orga App

In the Open Event Orga App, tests have been written for all the presenter and view model classes to ensure that the implemented functionalities work well.

In the following blog post I discuss one particular test that I implemented: the FeedbackList presenter test.

Implementation

  1. Instantiation of the variables.
@Rule
public MockitoRule mockitoRule = MockitoJUnit.rule();
@Mock
public FeedbackListView feedbackListView;
@Mock
public FeedbackRepository feedbackRepository;

We should first know the meaning of the Annotations being used:

@Rule : It tells JUnit to apply the MockitoRule, which creates the mocks based on the @Mock annotations. This annotation always needs to be used.

@Mock: It tells Mockito to mock the FeedbackListView interface and the FeedbackRepository class.

Here we are mocking two classes, namely FeedbackListView and FeedbackRepository; MockitoRule itself is a JUnit rule, not a mock.

Before moving forward we first need to understand the meaning of Mock. A mock object is a dummy implementation for an interface or a class in which you define the output of certain method calls. Mock objects are configured to perform a certain behavior during a test. They typically record the interaction with the system and tests can validate that.

 

private static final List<Feedback> FEEDBACKS = Arrays.asList(
  Feedback.builder().id(2L).comment("Amazing!").build(),
  Feedback.builder().id(3L).comment("Awesome!").build(),
  Feedback.builder().id(4L).comment("Poor!").build()
);

The list of feedbacks is populated with demo values which can be used for testing purposes later.

2) The @Before annotation is applied to the setup method. Before any test is run, setUp( ) is executed. A feedbackListPresenter object is created and the required parameters are passed. The RxJavaPlugins’ setIoSchedulerHandler and setComputationSchedulerHandler and the RxAndroidPlugins’ setInitMainThreadSchedulerHandler all use Schedulers.trampoline( ), which lets the internal Observable call finish before the result is asserted.

setIoSchedulerHandler( ) -> It overrides the Scheduler that handles the input and output operations of the RxJava code.

setComputationSchedulerHandler( ) -> It overrides the Scheduler that handles the computations carried out by the RxJava methods.

setInitMainThreadSchedulerHandler( ) -> It overrides the scheduler that RxAndroid uses for the main thread, so that work scheduled on the main thread runs synchronously in the test.

@Before
public void setUp() {
  feedbackListPresenter = new FeedbackListPresenter(feedbackRepository);
  feedbackListPresenter.attach(ID, feedbackListView);

  RxJavaPlugins.setIoSchedulerHandler(scheduler -> Schedulers.trampoline());
  RxJavaPlugins.setComputationSchedulerHandler(scheduler -> Schedulers.trampoline());
  RxAndroidPlugins.setInitMainThreadSchedulerHandler(schedulerCallable -> Schedulers.trampoline());
}

Some of the tests are discussed below:

→  The following test is written to ensure that the feedback list is loaded automatically when the presenter starts.

@Test
public void shouldLoadFeedbackListAutomatically() {
  when(feedbackRepository.getFeedbacks(anyLong(), anyBoolean())).thenReturn(Observable.fromIterable(FEEDBACKS));

  feedbackListPresenter.start();

  verify(feedbackRepository).getFeedbacks(ID, false);
}

As can be seen above, I have used the when/thenReturn functionality of Mockito. It is used to stub a method call: when the required parameters are passed to getFeedbacks( ), the value specified in thenReturn( ) is returned.

verify ensures that getFeedbacks( ) was indeed called on the feedbackRepository mock, with the expected arguments.

→ The following test is written to ensure that an error message is shown when loading data after a swipe refresh fails. Firstly, the list of feedbacks is fetched from the feedbackRepository with the help of getFeedbacks( ), where the event id and the boolean variable true are passed as parameters. Then thenReturn( ) contains the statement Observable.error(Logger.TEST_ERROR), which specifies the expected result, i.e. in this case we expect TEST_ERROR to be propagated as the response, and hence it is written beforehand.

At the end, the behaviour is verified with the verify statement, where feedbackListView is passed and the error message is checked.

 

@Test
public void shouldShowErrorMessageOnSwipeRefreshError() {
  when(feedbackRepository.getFeedbacks(ID, true)).thenReturn(Observable.error(Logger.TEST_ERROR));

  feedbackListPresenter.loadFeedbacks(true);

  verify(feedbackListView).showError(Logger.TEST_ERROR.getMessage());
}

3) After the tests have run, the RxJava plugins are reset.

@After
public void tearDown() {
  RxJavaPlugins.reset();
  RxAndroidPlugins.reset();
}

Resources:

→ Mockito tutorial :

http://www.vogella.com/tutorials/Mockito/article.html

→ Testing in RxJava:

https://stackoverflow.com/questions/40233956/how-to-use-schedulers-trampoline-inrxjava


Adding New tests for Knowledge Service

Testing is done to verify our application: we create instances of the corresponding classes, execute functions and check the actual behaviour of our app against the expected result. The tools and frameworks used for this in Angular are Jasmine and Karma. In this blog, I will describe how I implemented tests for the newly added Knowledge API service, which helped us increase the overall code coverage by 1.05% in Susper.

Adding tests for Knowledge API:

We need to check whether the API is functioning or not. This can be done by using a mocked (hardcoded) response for a query and then comparing it with the response received from the API. This will help us verify the proper functioning of our API.

This is a common practice in Angular and to achieve this we will be using some dependencies like MockBackend, MockConnection, BaseRequestOptions provided by Angular.

import { MockBackend, MockConnection } from '@angular/http/testing';
import { Http, Jsonp, BaseRequestOptions, RequestMethod, Response, ResponseOptions, HttpModule, JsonpModule } from '@angular/http';

 

We will also need to define a mock response; here I have used a hardcoded response for the query India. Here is the mocked response:

export const MockKnowledgeApi = {
  results: [
    { "batchcomplete": "", "query":
      { "normalized":
        [{ "from": "india", "to": "India" }], "pages":
        { "14533": { "pageid": 14533, "ns": 0, "title": "India",
          "extract": `India (IAST: Bh\u0101rat), also called the Republic of India (IAST: Bh\u0101rat Ga\u1e47ar\u0101jya), is a country in South Asia.` } } } }
  ],
  MaxHits: 5
};

 

Now we will define a mock provider, mockHttp_provider, for Http; it injects instances of MockBackend and BaseRequestOptions.

const mockHttp_provider = {
  provide: Http,
  deps: [MockBackend, BaseRequestOptions],
  useFactory: (backend: MockBackend, options: BaseRequestOptions) => {
    return new Http(backend, options);
  }
};

 

Now we need to add all the services and dependencies we will be using to the providers array, and we will inject instances of KnowledgeapiService and MockBackend in the beforeEach function.

beforeEach(inject([KnowledgeapiService, MockBackend],
  (knowledgeService: KnowledgeapiService, mockBackend: MockBackend) => {
    service = knowledgeService;
    backend = mockBackend;
}));

 

Now we will use the same query for which we created the mocked response, and we will check the response received from our API.

const searchquery = 'india';
// Subscribe to the mock backend's connections so that the outgoing request
// can be inspected and answered with the mocked response (the ResponseOptions
// body is assumed here to wrap MockKnowledgeApi).
backend.connections.subscribe((connection: MockConnection) => {
  expect(connection.request.method).toEqual(RequestMethod.Get);
  expect(connection.request.url).toBe(
    `https://en.wikipedia.org/w/api.php?&origin=*&format=json&action=query&prop=extracts&exintro=&explaintext=&` +
    `titles=${searchquery}`);
  const options = new ResponseOptions({ body: MockKnowledgeApi });
  connection.mockRespond(new Response(options));
});
service.getSearchResults(searchquery).subscribe((res) => {
  expect(res).toEqual(MockKnowledgeApi);
});

 

This will check the working of our API; if it is functioning correctly, our test case will pass.
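Assuming the standard Angular CLI setup that Susper uses, the Jasmine/Karma suite can then be run with:

ng test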

In this way we implemented the tests for the KnowledgeapiService, which helped us test our API and increase the overall code coverage significantly.

Resources

  1. Testing in Angular: https://angular.io/guide/testing
  2. Testing by MockBackend: https://angular.io/api/http/testing/MockBackend
  3. Using MockBackend to simulate a response: https://codecraft.tv/courses/angular/unit-testing/http-and-jsonp/#_using_the_code_mockbackend_code_to_simulate_a_response

Setting up the Circle.CI config for SUSI Android

The SUSI.AI Android app uses CircleCI to run tests whenever a new PR is made. This is done to ensure that the app stays consistent with the new changes and there is no problem in adding that particular code change. CircleCI has now launched a v2 version of the .yml file; therefore, to continue using CircleCI, it is time to upgrade to the v2 version of the script.

Circle.CI config version 1

The config file tells the CI what commands to run to test the build and determine whether it is successful or failed (tests, lints, etc.), which commands to run if the tests pass, and which environment to run the code in (Python, Java, Android, etc.).

Circle.CI published that 31 August 2018 is the last date up to which they will support config version 1.0; therefore the Circle.CI config was updated to version 2.0.

Version 2.0 has an updated syntax for the script, and the previous script was modified so that it could provide the configuration for the CI builds.

The updated script is shown below :

version: 2
jobs:
  build:
    working_directory: ~/code
    docker:
      - image: circleci/android:api-27-alpha
    environment:
      JVM_OPTS: -Xmx3200m
    steps:
      - checkout
      - restore_cache:
          key: jars-{{ checksum "build.gradle" }}-{{ checksum "app/build.gradle" }}
      - run:
          name: Download Dependencies
          command: ./gradlew androidDependencies
      - save_cache:
          paths:
            - ~/.gradle
          key: jars-{{ checksum "build.gradle" }}-{{ checksum "app/build.gradle" }}
      - run:
          name: lint check
          command: ./gradlew lint
      - run:
          name: Run Tests
          command: ./gradlew build
      - run:
          command: |
            bash exec/prep-key.sh
            bash exec/apk-gen.sh
      - store_artifacts:
          path: app/build/reports
          destination: reports
      - store_artifacts:
          path: app/build/outputs
          destination: outputs
      - store_test_results:
          path: app/build/test-results

A few checks, such as connectedCheck, which were required for UI testing, are not included in this script; instead, an approach of increasing the number of unit tests is followed. The reason is that implementing UI tests is a hard task for apps which have no constant design and have constantly changing specifications. Since the architecture used is MVP, once all logic is moved to presenter components there won’t be a need for most UI tests at all. This problem with UI tests will only increase in the future. Therefore, moving in the direction of more unit tests is better, because they are easy, small and quick to write and run, and the degree of dependence on flaky UI tests also decreases.

If the view is implementing small sections of display logic, just verifying that the presenter calls those is enough to guarantee that the UI is going to work.
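As an illustration of that idea, a unit test with a mocked view only has to verify that the presenter drives the view correctly. The ChatView/ChatPresenter pair below is hypothetical and not taken from the SUSI codebase:

import org.junit.Test
import org.mockito.Mockito.mock
import org.mockito.Mockito.verify

// Hypothetical MVP pair, used only for illustration.
interface ChatView {
    fun showMessages(messages: List<String>)
}

class ChatPresenter(private val view: ChatView) {
    fun onMessagesLoaded(messages: List<String>) = view.showMessages(messages)
}

class ChatPresenterTest {
    @Test
    fun presenterUpdatesView() {
        val view = mock(ChatView::class.java)
        ChatPresenter(view).onMessagesLoaded(listOf("hi"))
        // The display contract is verified without launching any UI.
        verify(view).showMessages(listOf("hi"))
    }
}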

The command in the previous config such as

./gradlew connectedCheck

was removed, and instead ./gradlew build was added to run the unit tests. Also, due to the updated Gradle dependencies, changes were made to the apk uploading commands, and changes were performed in the apk-gen.sh file.

bash exec/prep-key.sh
chmod +x exec/apk-gen.sh
./exec/apk-gen.sh

In the above code, lines concerned with apk-gen.sh can be combined and the resultant command was written as :

bash exec/apk-gen.sh

The apk-gen.sh script was configured and the latest build tools were added to make the script work.

REFERENCES

Continuous Integration – Martin Fowler:

https://martinfowler.com/articles/continuousIntegration.html

Circle CI version 2.0 for android :

https://circleci.com/docs/2.0/configuration-reference/#run

Circle CI configuration reference :

https://circleci.com/docs/2.0/configuration-reference/

 


Adding new test cases for increasing test coverage of Loklak Server

It is a good practice to have test cases covering a major portion of the actual code base. The idea was the same: to add new test cases to Loklak Server to increase its test coverage. The results were quite amazing, with a significant increase of about 3% in the total test coverage of the overall project, and about an 80-100% increase in the test coverage of the individual files for which tests were written.

Integration Process

For integration, a total of 6 new test cases have been written:

  1. ASCIITest
  2. GeoLocationTest
  3. CommonPatternTest
  4. InstagramProfileScraperTest
  5. EventBriteCrawlerServiceTest
  6. LocationWiseTimeServiceTest

For increasing code coverage, Java docs have also been written, which increases the lines of code being covered.

Implementation

The basic implementation of adding a new test case is the same in every case. Here is an example of the EventBriteCrawlerServiceTest implementation, which can be used as a reference for adding a new test case to the project.

Prerequisite: If the test file being written tests an additional external service (e.g. EventBriteCrawlerServiceTest tests an event created on the EventBrite platform), then a corresponding new test service or test event should be created beforehand on the given platform. For EventBriteCrawlerServiceTest, a test event has been created.

The given below steps will be followed for creating a new test case (EventBriteCrawlerServiceTest), assuming the prerequisite step has been done:

  • A new java file is generated in test/org/loklak/api/search/ as EventBriteCrawlerServiceTest.java.
  • Define package for the test file and import EventBriteCrawlerService.java file which has to be tested along with necessary packages and methods.

package org.loklak.api.search;

import org.loklak.api.search.EventBriteCrawlerService;
import org.loklak.susi.SusiThought;
import org.junit.Test;
import org.json.JSONArray;
import org.json.JSONObject;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.assertNotNull;

 

  • Define the class and the methods that need to be tested. It is a good idea to break the testing into several test methods rather than testing many features in a single method.

public class EventBriteCrawlerServiceTest {
  @Test
  public void apiPathTest() { }

  @Test
  public void eventBriteEventTest() { }

  @Test
  public void eventBriteOrganizerTest() { }

  @Test
  public void eventExtraDetailsTest() { }
}

 

  • Create an object of EventBriteCrawlerService in each test method.

EventBriteCrawlerService eventBriteCrawlerService = new EventBriteCrawlerService();

 

  • Call the specific method of EventBriteCrawlerService that needs to be tested in each of the defined methods of the test file, and assert the actual result against the expected result.

@Test
public void apiPathTest() {
    EventBriteCrawlerService eventBriteCrawlerService = 
        new EventBriteCrawlerService();

     assertEquals("/api/eventbritecrawler.json",
        eventBriteCrawlerService.getAPIPath());
}

 

  • For methods fetching an actual result from the event page (integration tests), define and initialise an expected set of results, and assert the expected result against the actual result by parsing the JSON response.

@Test
public void eventBriteOrganizerTest() {
    EventBriteCrawlerService eventBriteCrawlerService =
        new EventBriteCrawlerService();
    String eventTestUrl =
        "https://www.eventbrite.com/e/testevent-tickets-46017636991";
    SusiThought resultPage =
        eventBriteCrawlerService.crawlEventBrite(eventTestUrl);
    JSONArray jsonArray = resultPage.getData();
    JSONObject organizer_details = jsonArray.getJSONObject(1);
    String organizer_contact_info =
        organizer_details.getString("organizer_contact_info");
    String organizer_link =
        organizer_details.getString("organizer_link");
    String organizer_profile_link =
        organizer_details.getString("organizer_profile_link");
    String organizer_name =
        organizer_details.getString("organizer_name");

    assertEquals("https://www.eventbrite.com/e/testevent-tickets-46017636991#lightbox_contact",
        organizer_contact_info);
    assertEquals("https://www.eventbrite.com/e/testevent-tickets-46017636991#listing-organizer",
        organizer_link);
    assertEquals("", organizer_profile_link);
    assertEquals("Saurabh Srivastava", organizer_name);
}

 

  • If the test file tests the harvester, then import and add the test class to the TestRunner.java file, e.g.

import org.loklak.harvester.TwitterScraperTest;

@RunWith(Suite.class)
@Suite.SuiteClasses({
  TwitterScraperTest.class
})

Testing

The last step is to check that the new test case passes together with the other tests. Gradle will run the test case along with all the other tests in the project:

./gradlew build
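To iterate on a single test class rather than running the whole build, Gradle’s test filtering can also be used (assuming the standard Java test task is configured):

./gradlew test --tests org.loklak.api.search.EventBriteCrawlerServiceTest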


Deployment terms in Open Event Frontend

In Open Event Frontend, once a pull request is opened, we see some checks running for that specific pull request, like ‘Codacy’, ‘Codecov’, ‘Travis’, etc. New contributors often get confused about what these checks mean. So this blog is a walkthrough of the terms we use and what they say about the PR.

Travis: Every time you make a pull request, you will see this check running and, after some time, reporting whether the tests passed or failed. Travis is the continuous integration service we use to verify that the changes proposed in the pull request do not break anything else. Sometimes you will see a failure message, which indicates that your changes are unintentionally breaking something else.

Thus, by looking at the Travis logs, you can see where the changes proposed in the pull request break things. You can then go ahead, correct the code and push again to re-run the Travis build until it passes.

Codacy: Codacy is used to check code style, duplication, complexity, coverage, etc. When you create or update a pull request, this check runs and verifies whether the code follows a certain style guide, whether there is duplication in the code, and so on. For instance, let’s say your code has an html page in which a tag has an attribute that is left undefined. Then Codacy will throw an error, failing the check. You then need to look at the logs and correct the bug in the code. A green status indicates that the Codacy check has passed.

Codecov:

Codecov is a code coverage check which indicates how much of the code change proposed in the pull request is actually executed by the tests. Consider that out of the 100 lines of code you wrote, only 80 lines are actually executed and the rest are not; then the code coverage decreases. The Codecov report shows which files are affected and by what percentage.

Surge:

The surge link is nothing but the deployment link for the changes in your pull request.

By checking the link manually, we can test the behaviour of the app in terms of UI/UX and the other features that the pull request adds.


UI automated testing using Selenium in Badgeyay

With all the major functionalities packed into the badgeyay web application, it was time to add automated testing, both to speed up the review process for known errors and to check that code contributions do not break anything. We decided to go with Selenium for our testing requirements.

What is Selenium?

Selenium is a portable software-testing framework for web applications. Selenium provides a playback (formerly also recording) tool for authoring tests without the need to learn a test scripting language. In other words, Selenium does browser automation: Selenium tells a browser to click some element, populate and submit a form, navigate to a page, and perform any other form of user interaction.

Selenium supports multiple languages including C#, Groovy, Java, Perl, PHP, Python, Ruby and Scala. Here, we are going to use Python (and specifically python 2.7).

First things first:
To install these packages, run the following on the CLI:

pip install selenium==2.40
pip install nose

Don’t forget to add them to the requirements.txt file.

Web Browser:
We also need to have Firefox installed on your machine.

Writing the Test
An automated test automates what you’d do via manual testing – but it is done by the computer. This frees up time and allows you to do other things, as well as repeat your testing. The test code is going to run a series of instructions to interact with a web browser – mimicking how an actual end user would interact with an application. The script is going to navigate the browser, click a button, enter some text input, click a radio button, select a drop down, drag and drop, etc. In short, the code tests the functionality of the web application.

A test for the web page title:

import unittest
from selenium import webdriver

class SampleTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()
        cls.driver.get('http://badgeyay-dev.herokuapp.com/')

    def test_title(self):
        self.assertEqual(self.driver.title, 'Badgeyay')

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

 

Run the test using nosetests test.py

Clicking the element
For our next test, we click the menu button, and check if the menu becomes visible.

elem = self.driver.find_element_by_css_selector(".custom-menu-content")
self.driver.find_element_by_css_selector(".glyphicon-th").click()
self.assertTrue(elem.is_displayed())

 

Uploading a CSV file:
For our next test, we upload a CSV file and see if a success message pops up.

def test_upload(self):
        Imagepath = os.path.abspath(os.path.join(os.getcwd(), 'badges/badge_1.png'))
        CSVpath = os.path.abspath(os.path.join(os.getcwd(), 'sample/vip.png.csv'))
        self.driver.find_element_by_name("file").send_keys(CSVpath)
        self.driver.find_element_by_name("image").send_keys(Imagepath)
        self.driver.find_element_by_css_selector("form .btn-primary").click()
        time.sleep(3)
        success = self.driver.find_element_by_css_selector(".flash-success")
        self.assertIn(u'Your badges has been successfully generated!', success.text)

 

The entire code can be found on: https://github.com/fossasia/badgeyay/tree/development/app/tests

We can also use the PhantomJS package along with Selenium to run UI tests without opening a web browser window. We use this in badgeyay to run the tests for every commit in Travis CI, which cannot open a program window.


Improving Loklak apps site

In this blog I will describe some of the recent improvements made to the Loklak apps site. A new utility script has been added to automatically update the Loklak app wall after a new app has been made, and invalid app queries on the app details page are now handled gracefully.

A proper message is shown when a user enters an invalid app name in the URL of the details page. Tests have been added for the details page.

Developing updatewall script

This is a small utility script to update the Loklak wall in order to expose a newly created app or update an existing app. Before moving into the working of this script, let us discuss how the Loklak apps site tracks all the apps and their details. In the root of the project there is a file named apps.json. This file contains an aggregation of all the app.json files present in the individual apps. Now when the site is loaded, index.html loads the Javascript code present in app_list.js. This app_list.js file makes an ajax call to the root apps.json file, loads all the app details into a list and attaches this list to an AngularJS scope variable. After this, the app wall consisting of the various app details is rendered as html. So whenever a new app is created, in order to expose the app on the wall, the developer needs to copy the contents of the application’s app.json and paste them into the root apps.json file. This is quite tedious on the part of the developer, as for making a new app he will first have to know how the site works, which is not directly related to his development work at all. Next, whenever he updates the app.json of his app, he needs to again update the apps.json file with the new data.

This newly added script (updatewall) automates this entire process. After creating a new app all that the developer needs to do is run this script from within his app directory and the app wall will be updated automatically.
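For instance, assuming the script is saved as updatewall.py at the repository root (the actual filename and location in the repository may differ), the invocation from inside an app directory would be something like:

python ../updatewall.py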

Now, let us move into the working of this script. The basic workflow of the updatewall script is as follows: it loads the json data present in the app.json file of the app under consideration, and then loads the json data present in the root apps.json file.

if __name__ == '__main__':

    #open file containing json object
    json_list_file = open(PATH_TO_ROOT_JSON, 'r')

    #load json object
    json_list = json.load(json_list_file,  object_pairs_hook=OrderedDict)
    json_list_file.close()

    app_json_file = open(PATH_TO_APP_JSON, 'r')
    app_json = json.load(app_json_file,  object_pairs_hook=OrderedDict)
    app_json_file.close()

    #method to update Loklak app wall
    expose_app(json_list, app_json)

When loading the json data, we use object_pairs_hook in order to load the data into an OrderedDict rather than a normal python dictionary. We do this so that the order of the dictionary items is maintained. Once the data is loaded, we invoke the expose_app method.

def expose_app(json_list, app_json):
    #if app is already present in list then fetch that app
    app = getAppIfPesent(json_list, app_json)

    #if app is not present then add a new entry
    if app == None:
        json_list['apps'].append(app_json)
        update_list_file(json_list)
        print colors.BOLD + colors.OKGREEN + 'App exposed on app wall' + colors.ENDC

    #else update the existing app entry
    else:
        for key in app_json:
            app[key] = app_json[key]
        update_list_file(json_list)
        print colors.BOLD + colors.OKGREEN + 'App updated on app wall' + colors.ENDC

The apps.json file contains a key called apps. The value of this key is a list of json objects, each object being the json data of an individual app’s app.json file. In the above function we iterate over all the json objects present in the list. If we are unable to find a json object whose name value is the same as that of the newly created app, then we simply append the new app’s app.json object to the list. However, if we find an object containing the same name value as that of the newly created app, then we simply update its properties. In short, if the app is a new one, its data gets added to apps.json; otherwise the corresponding app data is updated.

Handling invalid app names in the URL of details page

The URL of the app details page takes the app name as a parameter. If a user wants to see the store listing of an app, then he has to use the following URL:

https://apps.loklak.org/details.html?q=<app_name>

Here the app name is a URL parameter used to load the store listing information. Now if anyone enters an invalid app name, that is, an app which does not exist, then a proper error message has to be shown to the user. This can be done by checking whether the given app name is present in the root apps.json file or not. If it is not present, we simply set a flag so that the error message can be conditionally rendered.

$scope.getSelectedApp = function() {
        for (var i = 0; i < $scope.apps.length; i++) {
            if ($scope.apps[i].name === $scope.appName) {
                $scope.selectedApp = $scope.apps[i];
                $scope.found = true;
                $("nav").show();
                break;
            }
        }
        if ($scope.found == false) {
            $scope.notFound = true;
        }
    }

In the above snippet if the app is not found then we set notFound to true. This causes the error message to appear on the page.

<div ng-if="notFound" class="not-found">
        <span class="brand-and-image">
          <img src="images/loklak_icon.png">
          <span class="loklak-brand"> <span class="loklak-header">
            loklak </span> <span>apps</span>
          </span>
        </span>
        <span class="error-404">
          Error: Requested app not found
        </span>
        <span class="go-back">
          <a href="/"> Go Back to Home Page >> </a>
        </span>
      </div>

The above markup renders the error message if notFound is set to true.

Writing tests for store listing page

Almost the entire content of the store listing page is loaded dynamically by Javascript logic, so it is very important to write tests for it. The Protractor framework has been used to write automated browser tests. The tests make sure that, for a given app, the content of the middle section is loaded correctly.

it("should have basic information", function() {
    expect(element(by.css(".app-name")).getText()).toEqual("MultiLinePlotter");
    expect(element(by.css(".app-headline")).getText()).toEqual("App to plot tweet aggregations and statistics");
    expect(element(by.css(".author")).getText()).toEqual("by Deepjyoti Mondal");
    expect(element(by.css(".short-desc")).getText()).toEqual("An applicaton to visually compare tweet statistics");
  });

The above tests make sure that the top section is loaded properly. Next we check that getting started section and app use section are not empty.

it("main content should not be empty", function() {
    expect(element(by.css(".get-started-md")).getText()).not.toBe("");
    expect(element(by.css(".app-use-md")).getText()).not.toBe("");
  });

Apart from these, two more tests are performed to check the behaviour of the side bar menu items on click event and the functionality of the Try now button.

Future roadmap

There is still a lot of scope for the site’s improvement and enhancement. Some of the features which can be implemented next are given below.

  • Add more tests to make the site stable and add tests to travis build.
  • Make the apps independent. Work on this has already been started and can be viewed here – issue, PR
  • Optimise the site for mobile using services workers and caching (making a progressive web app).
  • Add a splash screen and home screen icon for mobile.


Using Protractor for UI Tests in Angular JS for Loklak Apps Site

The Loklak apps site’s home page and app details page have sections where data is dynamically loaded from external javascript and json files. Data is fetched from the json files using AngularJS, processed and then rendered to the corresponding views by controllers. Any erroneous modification to the controller functions might cause discrepancies in the frontend. Since Loklak apps is a frontend project, any bug in the home page or details page will lead to poor UI/UX. How do we deal with this? One way is to write unit tests for the various controller functions and check their behaviour. But how do we test the behaviour of the site itself? Most of the controller functions render something on the view, so one thing we can do is simulate the various browser actions and test the site against known, accepted behaviours with Protractor.

What is Protractor

Protractor is an end-to-end test framework for Angular and AngularJS apps. It runs tests against our app running in a browser, as if a real user were interacting with it, using browser-specific drivers to interact with the web application as any user would.

Using Protractor to write tests for Loklak apps site

First we need to install Protractor and its dependencies. Let us begin by creating an empty json file in the project directory using the following command.

echo {} > package.json

Next we will have to install Protractor.
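Assuming the dependency versions shown in the package.json below, the install step would be along the lines of:

npm install protractor --save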

The above command installs protractor and webdriver-manager. After this we need to get the necessary binaries to set up our selenium server. This can be done using the following.

./node_modules/protractor/bin/webdriver-manager update
./node_modules/protractor/bin/webdriver-manager start

Let us tidy up things a bit. We will include these commands in package.json file under scripts section so that we can shorten our commands.

Given below is the current state of package.json

{
    "scripts": {
        "start": "./node_modules/http-server/bin/http-server",
        "update-driver": "./node_modules/protractor/bin/webdriver-manager update",
        "start-driver": "./node_modules/protractor/bin/webdriver-manager start",
        "test": "./node_modules/protractor/bin/protractor conf.js"
    },
    "dependencies": {
        "http-server": "^0.10.0",
        "protractor": "^5.1.2"
    }
}

The package.json file currently holds our dependencies and scripts. It contains commands for starting the development server, updating the webdriver, starting the webdriver (mentioned just before this) and running the tests.

Next we need to include a configuration file for Protractor. The configuration file should contain the test framework to be used, the address at which the selenium server is running and the path to the spec files.

// conf.js
exports.config = {
    framework: "jasmine",
    seleniumAddress: "http://localhost:4444/wd/hub",
    specs: ["tests/home-spec.js"]
};

We have set the framework to jasmine and the selenium address to http://localhost:4444/wd/hub. Next we need to define our actual spec file. But before writing tests we need to find out what we need to test. We will mostly be testing dynamic content loaded by Javascript files. Let us define a spec; a spec is a collection of tests. We will start by testing the category name: initially, when the page loads, it should be equal to ‘All apps’. Next we test the top right-hand-side menu, which is loaded by javascript using the topmenu.json file.

it("should have a category name", function() {
    expect(element(by.id("categoryName")).getText()).toEqual("All apps");
  });

  it("should have top menu", function() {
    let list = element.all(by.css(".topmenu li a"));
    expect(list.count()).toBe(5);
  });

As mentioned earlier, we are using the jasmine framework for writing our specs. In the above code snippet, ‘it’ describes a particular test. It takes a test description and a callback function, thereby providing a very efficient way to document our tests within the test code itself. In the first test we use the expect function to check whether the category name is equal to ‘All apps’ or not. Here we select the div containing the category name by its id.

Next we write a test for the top menu. There should be five menu options in total in the top menu. We select all the list items that are supposed to contain the top menu items and check whether the number of such items is five, using the expect function. As can be seen from the snippet, the process of selecting a node is very similar to that of the Jquery library.

Next we test the left-hand-side category list. This list is loaded by an AngularJS controller from the apps.json file. We should make sure the list is loaded properly and all the options are present.

it("should have a category list", function() {
    let categoryIds = ["All", "Scraper", "Search", "Visualizer", "LoklakLibraries", "InternetOfThings", "Misc"];
    let categoryNames = ["All", "Scraper", "Search", "Visualizer", "Loklak Libraries", "Internet Of Things", "Misc"];

    expect(element(by.css("#catTitle")).getText()).toBe("Categories");

    let categoryList = element.all(by.css(".category-main"));
    expect(categoryList.count()).toBe(7);

    categoryIds.forEach(function(id, index) {
      element(by.css("#" + id)).isPresent().then(function(present) {
        expect(present).toBe(true);
      });

      element(by.css("#" + id)).getText().then(function(text) {
        expect(text).toBe(categoryNames[index]);
      });
    });
  });

At first we maintain two lists: category ids and category names. We begin by confirming that the category title is equal to ‘Categories’. Next we get the list of categories and iterate over them. For each category we check whether the corresponding id is present in the DOM or not. After confirming this, we match the names of the categories with the expected names. The element.all function allows us to get a list of selected nodes.

Finally we check the click functionality of the left side menu. The expected behaviour is that, on clicking a menu item, the category name should get replaced with the selected category name. For this we need to simulate the click event, which Protractor allows us to do very easily using the click function.

it("category list should respond to click", function() {
    let categoryIds = ["All", "Scraper", "Search", "Visualizer", "LoklakLibraries", "InternetOfThings", "Misc"];
    let categoryNames = ["All apps", "Scraper", "Search", "Visualizer", "Loklak Libraries", "Internet Of Things", "Misc"];

    categoryIds.forEach(function(id, index) {
      element(by.id(categoryIds[index])).click().then(function() {
        browser.getCurrentUrl().then(function(url) {
          expect(url).toBe("http://127.0.0.1:8080/#/" + categoryIds[index]);
        });
        element(by.id("categoryName")).getText().then(function(text) {
          expect(text).toBe(categoryNames[index]);
        });
      });
    });
  });



Once again we maintain two lists: category ids and category names. We obtain the present list of categories and iterate over them. For each category link we simulate a click event, and for each click event we check two values: the new browser URL, which should now contain the category id, and the value of the category name, which should be equal to the selected category.
Finally, after all the tests are over, we get the final report on our terminal.
In order to run the tests, use the following command.

npm test

This will start executing the tests.
