Adding new test cases for increasing test coverage of Loklak Server

It is good practice to have test cases covering a major portion of the actual code base. With that in mind, new test cases were added to Loklak Server to increase its test coverage. The results were quite remarkable: a significant increase of about 3% in the total test coverage of the overall project, and an increase of about 80-100% in the test coverage of the individual files for which tests were written.

Integration Process

As part of the integration, a total of 6 new test classes have been written:

  1. ASCIITest
  2. GeoLocationTest
  3. CommonPatternTest
  4. InstagramProfileScraperTest
  5. EventBriteCrawlerServiceTest
  6. LocationWiseTimeServiceTest

Alongside the new tests, Javadoc comments have also been written for the code being covered.

Implementation

The basic implementation of adding a new test case is the same in each case. Here is an example of the EventBriteCrawlerServiceTest implementation, which can be used as a reference for adding a new test case to the project.

Prerequisite: If the test file being written tests an additional external service (e.g. EventBriteCrawlerServiceTest tests events created on the EventBrite platform), then a corresponding test service or test event should be set up beforehand on that platform. For EventBriteCrawlerServiceTest, a test event has been created.

The steps below are followed to create a new test case (EventBriteCrawlerServiceTest), assuming the prerequisite step has been completed:

  • A new Java file is created in test/org/loklak/api/search/ as EventBriteCrawlerServiceTest.java.
  • Define the package for the test file and import EventBriteCrawlerService.java, the file to be tested, along with the necessary packages and methods.

package org.loklak.api.search;

import org.loklak.api.search.EventBriteCrawlerService;
import org.loklak.susi.SusiThought;
import org.junit.Test;
import org.json.JSONArray;
import org.json.JSONObject;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.assertNotNull;

 

  • Define the class and the methods that need to be tested. It is a good idea to split the testing into several test methods rather than testing many features in a single method.

public class EventBriteCrawlerServiceTest {
  @Test
  public void apiPathTest() { }

  @Test
  public void eventBriteEventTest() { }

  @Test
  public void eventBriteOrganizerTest() { }

  @Test
  public void eventExtraDetailsTest() { }
}

 

  • Create an object of EventBriteCrawlerService in each test method.

EventBriteCrawlerService eventBriteCrawlerService = new EventBriteCrawlerService();

 

  • Call the specific method of EventBriteCrawlerService that needs to be tested in each of the defined methods of the test file and assert the actual result against the expected result.

@Test
public void apiPathTest() {
    EventBriteCrawlerService eventBriteCrawlerService =
        new EventBriteCrawlerService();

    assertEquals("/api/eventbritecrawler.json",
        eventBriteCrawlerService.getAPIPath());
}

 

  • For methods that fetch an actual result from the event page (integration tests), define and initialise the expected set of results, parse the JSON result, and assert the expected values against the actual ones.

@Test
public void eventBriteOrganizerTest() {
    EventBriteCrawlerService eventBriteCrawlerService =
        new EventBriteCrawlerService();
    String eventTestUrl =
        "https://www.eventbrite.com/e/testevent-tickets-46017636991";
    SusiThought resultPage =
        eventBriteCrawlerService.crawlEventBrite(eventTestUrl);
    JSONArray jsonArray = resultPage.getData();
    JSONObject organizer_details = jsonArray.getJSONObject(1);
    String organizer_contact_info =
        organizer_details.getString("organizer_contact_info");
    String organizer_link =
        organizer_details.getString("organizer_link");
    String organizer_profile_link =
        organizer_details.getString("organizer_profile_link");
    String organizer_name =
        organizer_details.getString("organizer_name");

    assertEquals("https://www.eventbrite.com/e/testevent-tickets-46017636991"
        + "#lightbox_contact", organizer_contact_info);
    assertEquals("https://www.eventbrite.com/e/testevent-tickets-46017636991"
        + "#listing-organizer", organizer_link);
    assertEquals("", organizer_profile_link);
    assertEquals("Saurabh Srivastava", organizer_name);
}

 

  • If the test file tests a harvester, then import and add the test class to the TestRunner.java file, e.g.

import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.loklak.harvester.TwitterScraperTest;

@RunWith(Suite.class)
@Suite.SuiteClasses({
  TwitterScraperTest.class
})
public class TestRunner {
}

Testing

The last step is to check whether the new test case passes when integrated with the other tests. Gradle runs the new test case along with all the other tests in the project.

./gradlew build
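If only the tests need to be run, without the rest of the build, Gradle's standard test task can also be used (assuming the project's default Gradle Java setup):

./gradlew test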


UI automated testing using Selenium in Badgeyay

With all the major functionalities packed into the badgeyay web application, it was time to add automated testing to speed up the review process in case of known errors and to check that code contributions are not breaking anything. We decided to go with Selenium for our testing requirements.

What is Selenium?

Selenium is a portable software-testing framework for web applications. Selenium provides a playback (formerly also recording) tool for authoring tests without the need to learn a test scripting language. In other words, Selenium does browser automation: Selenium tells a browser to click some element, populate and submit a form, navigate to a page, and perform any other form of user interaction.

Selenium supports multiple languages including C#, Groovy, Java, Perl, PHP, Python, Ruby and Scala. Here, we are going to use Python (specifically, Python 2.7).

First things first:
To install these packages, run the following commands on the CLI:

pip install selenium==2.40
pip install nose

Don’t forget to add them to the requirements.txt file.
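For reference, the corresponding requirements.txt entries would look roughly like this (nose is left unpinned here, matching the install command above):

selenium==2.40
nose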

Web Browser:
We also need to have Firefox installed on the machine.

Writing the Test
An automated test automates what you’d do via manual testing – but it is done by the computer. This frees up time and allows you to do other things, as well as repeat your testing. The test code is going to run a series of instructions to interact with a web browser – mimicking how an actual end user would interact with an application. The script is going to navigate the browser, click a button, enter some text input, click a radio button, select a drop down, drag and drop, etc. In short, the code tests the functionality of the web application.

A test for the web page title:

import unittest
from selenium import webdriver

class SampleTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()
        cls.driver.get('http://badgeyay-dev.herokuapp.com/')

    def test_title(self):
        self.assertEqual(self.driver.title, 'Badgeyay')

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

 

Run the test with nose, e.g. nosetests test.py

Clicking the element
For our next test, we click the menu button, and check if the menu becomes visible.

elem = self.driver.find_element_by_css_selector(".custom-menu-content")
self.driver.find_element_by_css_selector(".glyphicon-th").click()
self.assertTrue(elem.is_displayed())

 

Uploading a CSV file:
For our next test, we upload a CSV file and see if a success message pops up.

def test_upload(self):
    Imagepath = os.path.abspath(os.path.join(os.getcwd(), 'badges/badge_1.png'))
    CSVpath = os.path.abspath(os.path.join(os.getcwd(), 'sample/vip.png.csv'))
    self.driver.find_element_by_name("file").send_keys(CSVpath)
    self.driver.find_element_by_name("image").send_keys(Imagepath)
    self.driver.find_element_by_css_selector("form .btn-primary").click()
    time.sleep(3)
    success = self.driver.find_element_by_css_selector(".flash-success")
    self.assertIn(u'Your badges has been successfully generated!', success.text)

 

The entire code can be found on: https://github.com/fossasia/badgeyay/tree/development/app/tests

We can also use PhantomJS along with Selenium for UI testing without opening a web browser. We use this in badgeyay to run the tests for every commit in Travis CI, which cannot open a browser window.
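As a rough sketch, only the driver construction changes compared to the earlier title test; this assumes the phantomjs binary is installed and available on the PATH (the class name here is hypothetical):

import unittest
from selenium import webdriver

class HeadlessSampleTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # PhantomJS runs headlessly, so no browser window is opened.
        # Assumes the phantomjs binary is installed and on the PATH.
        cls.driver = webdriver.PhantomJS()
        cls.driver.get('http://badgeyay-dev.herokuapp.com/')

    def test_title(self):
        self.assertEqual(self.driver.title, 'Badgeyay')

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()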


Testing child process using Mocha in Yaydoc

Mocha is a JavaScript testing framework. It can be used both in Node.js and in the browser, and it is one of the most popular testing frameworks available. Mocha is widely used for Behavior Driven Development (BDD). In Yaydoc, we use Mocha to test our web UI. One of the main tasks in Yaydoc is documentation generation, for which we built a bash script. We run the bash script using Node's child_process module, so in order to run the test, the child process has to be executed before test execution. This can be achieved with Mocha's before hook. Install Mocha on your system:

npm install -g mocha

Here is the test case I wrote in the Yaydoc test file.

const assert = require('assert')
const spawn = require('child_process').spawn
const uuidV4 = require("uuid/v4")
describe('WebUi Generator', () => {
  let uniqueId = uuidV4()
  let email = 'fossasia@gmail.com'
  let args = [
    "-g", "https://github.com/fossasia/yaydoc.git",
    "-t", "alabaster",
    "-m", email,
    "-u", uniqueId,
    "-w", "true"
  ]
  let exitCode

  before((done) => {
    let process = spawn('./generate.sh', args)
    process.on('exit', (code) => {
      exitCode = code
      done()
    })
  })
  it('exit code should be zero', () => {
    assert.equal(exitCode, 0)
  })
 })

The describe() function is used to describe our test case; in our scenario we are testing the generate script, so we name it "WebUi Generator". As mentioned above, we have to run our child_process in the before hook. The it() function is where we write our test case; if the test case fails, an error is thrown. We use Node's built-in assert module to do the assertion. You can see our assertion in the it() block, checking whether the exit code is zero.

mocha test.js --timeout 1500000

Since documentation generation takes time, we have to specify a timeout when running Mocha. If the test case passes successfully, you will get output similar to this:

WebUi Generator
    ✓ exit code should be zero


Generating Documentation and Modifying the sTeam-REST API

societyserver aims to be a platform for developing collaborative applications.
sTeam server project repository: sTeam.
sTeam-REST API repository: sTeam-REST

Documentation

Documentation is an important part of software engineering. Types of documentation include:

  1. Requirements – Statements that identify attributes, capabilities, characteristics, or qualities of a system. This is the foundation for what will be or has been implemented.
  2. Architecture/Design – Overview of software. Includes relations to an environment and construction principles to be used in design of software components.
  3. Technical – Documentation of code, algorithms, interfaces, and APIs.
  4. End user – Manuals for the end-user, system administrators and support staff.
  5. Marketing – How to market the product and analysis of the market demand.

Doxygen

Doxygen is the de facto standard tool for generating documentation from annotated C++ sources, but it also supports other popular programming languages such as C, Objective-C, C#, PHP, Java, Python, IDL (Corba, Microsoft, and UNO/OpenOffice flavors), Fortran, VHDL, Tcl, and to some extent D.
Doxygen treats files of other languages as C/C++ and creates documentation for them accordingly.
An attempt was made to generate the sTeam documentation with Doxygen, but the resulting documentation was empty because the project's sources lack Doxygen annotations.
(Screenshots: Doxygen doc generation in the terminal; the generated Doxygen docs.)
The next approach for creating the documentation was to use the autodoc utility provided by Pike, which is available in later versions of Pike (>= 8.0.155).
The autodoc files are generated first and then converted into HTML pages. The commands used for generating the autodoc are:
pike -x extract_autodoc /source
pike -x autodoc_to_html /src /opfile
The autodoc_to_html utility converts a single autodoc file to an HTML page, so a shell script was written to convert all of the generated autodoc files to HTML files.
docGenerator.sh
#!/bin/bash

# Convert every generated autodoc XML file into an HTML page under doc/.
shopt -s globstar
for filename in ./**/*.pike.xml; do
    outputFile=doc/${filename#./}
    outputFile=${outputFile%.xml}.html
    # Create the target directory (and file) if it does not exist yet.
    if [ -d "$(dirname "./$outputFile")" ]; then
        touch "./$outputFile"
    else
        mkdir -p "$(dirname "./$outputFile")" && touch "./$outputFile"
    fi
    pike -x autodoc_to_html "$filename" "./$outputFile"
done

Autodoc Documentation

(Screenshot: generated autodoc documentation.)

The documentation generated this way was less informative and lacked references to other classes and headers. The societyserver project was developed long before the autodoc utility was introduced in later versions of Pike, so the source files lack the autodoc tags required to generate well-informative documentation with bindings to other files.

Restructuring the sTeam-REST API

The sTeam-REST API project made use of angular-seed to bootstrap development during the early phases. However, these seed files still existed in the project, which led to confusion and made the project difficult to understand. The files had to be removed, and the app was in dire need of restructuring. The following issues have been reported and resolved.

Issue             | Github Issue | Github PR
sTeam-REST Issues | Issues       | PR

The new UI can be seen below.

(Screenshots: Home, Register, and About pages.)

Testing the REST API

The functionality to run the tests using the npm test command was added to the project. The user can now run the tests with this command instead of the traditional approach of invoking jasmine-node and specifying the test directory. The domain name of the URLs to which the requests are sent was also changed. The e2e tests and Frisby tests were run successfully.
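As a rough sketch, wiring up npm test only needs a scripts entry in package.json; the exact spec directory name used here is an assumption:

{
  "scripts": {
    "test": "jasmine-node spec/"
  }
}

With this in place, npm test simply invokes jasmine-node on the test directory.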

(Screenshots: e2e tests, Frisby tests, and npm test output.)

The next step would be to add more tests for the REST API.

Feel free to explore the repository. Suggestions for improvements are welcome.

Check out the FOSSASIA Ideas page for more information on projects supported by FOSSASIA.
