Unit Testing and Travis

Tests are an important part of any software development process. We need to write tests for every feature we develop to check that it works properly.
In this post, I am going to talk about writing unit tests and running them.

If you are a developer, I assume you have heard about unit tests. Most of you have probably even written one at some point. Unit testing is becoming more and more popular in software development. Let’s first talk about what unit testing is:

What is unit testing?

Unit testing is the process through which units of source code are tested to verify if they work properly. Performing unit tests is a way to ensure that all functionalities of an application are working as they should. Unit tests inform the developer when a change in one unit interferes with the functionality of another. Modern unit testing frameworks are typically implemented using the same code used by the system under test. This enables a developer who is writing application code in a particular language to write their unit tests in that language as well.
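As a tiny, generic illustration of the idea (written in Python here purely for brevity; the same applies in any language), a unit test exercises one unit in isolation and asserts on its result. The function and test below are made up for this example:

import unittest

def add(a, b):
    """The unit under test."""
    return a + b

class AddTest(unittest.TestCase):
    def test_add_two_numbers(self):
        # If a change breaks this unit, the failing test tells the developer.
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()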

What is a unit testing framework?

Unit testing frameworks are developed for the purpose of simplifying the process of unit-testing. Those frameworks enable the creation of Test Fixtures, which are classes that have specific attributes enabling them to be picked up by a Test Runner.

Although it is possible to perform unit tests without such a framework, the process can be difficult, complicated and very manual.

There are a lot of unit testing frameworks available. Each framework has its own merits, and selecting one depends on which features are needed and on the level of expertise of the development team. For my project, Engelsystem, I chose PHPUnit as the testing framework.

PHPUnit

With PHPUnit, the most basic thing you’ll write is a test case. A test case is just a term for a class with several different tests all related to the same functionality. There are a few rules you’ll need to worry about when writing your cases so that they’ll work with PHPUnit:

  • The test class must extend the PHPUnit_Framework_TestCase class.
  • The test methods never receive any parameters, and their names should start with test.

Below is an example of a test from my project, Engelsystem:

<?php

class ShiftTypes_Model_test extends PHPUnit_Framework_TestCase {

  private $shift_id = null;

  public function create_ShiftType() {
    $this->shift_id = ShiftType_create('test', '1', 'test_description');
  }

  public function test_ShiftType_create() {
    $count = count(ShiftTypes());

    // Creating a ShiftType should not fail
    $this->create_ShiftType();
    $this->assertNotFalse($this->shift_id);

    // There should be one more ShiftType now
    $this->assertEquals(count(ShiftTypes()), $count + 1);
  }

  public function test_ShiftType() {
    $this->create_ShiftType();
    $shift_type = ShiftType($this->shift_id);
    $this->assertNotFalse($shift_type);
    $this->assertNotNull($shift_type);
    $this->assertTrue(count(ShiftTypes()) > 0);
    $this->assertEquals($shift_type['name'], 'test');

    // Looking up a ShiftType that does not exist should return null
    $this->assertNull(ShiftType(-1));
  }

  public function tearDown() {
    if ($this->shift_id != null) {
      ShiftType_delete($this->shift_id);
    }
  }

}

?>

We can use different Assertions to test the functionality.

We are running these tests on Travis-CI.

What is Travis-CI?

Travis CI is a hosted, distributed continuous integration service used to build and test software projects hosted on GitHub.

Open source projects may be tested at no charge via travis-ci.org. Private projects may be tested at the same location on a fee basis. TravisPro provides custom deployments of a proprietary version on the customer’s own hardware.

Although the source is technically free software and available piecemeal on GitHub under permissive licenses, the company notes that it is unlikely that casual users could successfully integrate it on their own platforms.

To get started with Travis-CI, have a look at the Getting started with Travis-CI guide.

We are developing new features for Engelsystem. Developers who are interested in contributing are welcome to work with us.

Development: https://github.com/fossasia/engelsystem
Issues/Bugs: https://github.com/fossasia/engelsystem/issues


Responsive UI: Testing & Reporting

A few days back I wrote a blog post about how to make a website responsive. But how do you actually check the responsiveness of a website? Well, you can do it using your browser itself. Both Mozilla Firefox and Google Chrome provide a responsive mode for testing the responsiveness of websites at standard resolutions, and also for dragging to check any resolution. Open Event Organizer’s Server has quite a few responsiveness issues, so I will also talk about reporting those issues.


How can you add a bug?

It’s very simple to start testing, and you don’t need any special experience in the testing area. To start testing the Open Event project you just need to open our web application at http://open-event.herokuapp.com/ and you can start. Isn’t it easy? You should focus on finding as many bugs as possible so we can provide users with perfectly working software. If you find a bug, you need to describe it in detail.

How can you report a bug to Open Event?

Go to the GitHub issues page and click New issue (the green button).


Our Requirements:

Good description – If you have found a bug you have to reproduce it, so you have a good background to describe it well. This is important because a good issue description saves developers time: they don’t need to ask the reporter about the details of the bug. The best description a tester can add is a step-by-step guide to how a developer can reproduce the bug.

Logs – A description is sometimes not enough, so you may need to attach logs which are generated from our server (it’s nice if you have access; if you don’t, ask me).

Pictures – These are helpful because we can quickly find where the bug is located.

Labels – You need to assign the appropriate label to the issue.

That’s all!

 


Code Quality in the knittingpattern Python Library

In our Google Summer of Code project, part of our work is to bring knitting into the digital age. We are Kirstin Heidler and Nicco Kunzmann. Our knittingpattern library aims at being the exchange and conversion format between different types of knit work representations: hand knitting instructions, machine commands for different machines and SVG schemata.

[Images: the schema generated by the knittingpattern library, next to the original pattern schema Cafe.]

The image above was generated by this Python code:

import knittingpattern, webbrowser
example = knittingpattern.load_from().example("Cafe.json")
webbrowser.open(example.to_svg(25).temporary_path(".svg"))

So much for the context. Now about the quality tools we use:


Continuous integration

We use Travis CI [FOSSASIA] to upload packages of a specific git tag automatically. The Travis build runs under Python 3.3 to 3.5. It first builds the package and then installs it with its dependencies. To upload tags automatically, one can configure Travis, preferably with the command line interface, to save the username and password for the Python Package Index (PyPI). [TravisDocs] Our process of releasing a new version is the following:

  1. Increase the version in the knitting pattern library and create a new pull request for it.
  2. Merge the pull request after the tests passed.
  3. Pull and create a new release with a git tag using
    setup.py tag_and_deploy

Travis then builds the new tag and uploads it to PyPI.
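The tag_and_deploy command itself lives in setup.py. Its real implementation is in the repository; as a rough, hypothetical sketch of how such a command can be wired up with setuptools (the details below are assumptions, not the actual code), it only needs to create the tag and push it, since Travis takes care of the upload:

# setup.py (illustrative sketch only)
import subprocess
from setuptools import setup, Command

__version__ = "0.0.1"  # normally read from the package itself

class TagAndDeploy(Command):
    """Tag the current version and push the tag so Travis deploys it."""
    description = "create a git tag for the current version and push it"
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        tag = "v" + __version__
        # Create an annotated tag for the current version ...
        subprocess.check_call(["git", "tag", "-a", tag, "-m", "release " + tag])
        # ... and push it; the Travis build of this tag uploads the package to PyPI.
        subprocess.check_call(["git", "push", "origin", tag])

setup(
    name="knittingpattern",
    version=__version__,
    cmdclass={"tag_and_deploy": TagAndDeploy},
)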

With this we have basic quality assurance. Pull requests need to pass all tests before they can be merged. Travis can be configured to automatically reject a request with errors.

Documentation Driven Development

As mentioned in a blog post, documentation-driven development is something worth checking out. In our case that means writing the documentation first, then the tests and then the code.

Writing the documentation first means thinking in the space of the mental model you have for the code. It defines the interfaces you would be happy to use. A lot of edge cases can be thought of at this point.

When writing the tests, they are often split up and no longer represent the flow of thought you had when writing down your wishes. Tests can be seen as the glue between the code and the documentation. Just as with writing code to pass the tests, in the conversation between the tests and the documentation I find out some things I have forgotten.

When writing the code in a test-driven way, another conversation starts. I call implementing the tests a conversation because the tests tell the code that it should be different, and the code tells the tests about their inconsistencies, like misspellings and bloated interfaces.

With writing documentation first, we have the chance to have two conversations about our code, in spoken language and in code. I like it when the code hears my wishes, so I prefer to talk a bit more.
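As a tiny, hypothetical illustration of this order of steps (the function below is invented for this example and is not part of the library), documentation first, then tests, then code might look like this:

# Step 1: the documentation - the docstring describes the wish.
def row_width(row):
    """Return the number of stitches in a knitting row.

    An empty row has width 0.
    """
    # Step 3: the code comes last, just enough to satisfy the docs and tests.
    return len(row)


# Step 2: the tests are written against the documented behaviour.
def test_empty_row_has_width_zero():
    assert row_width([]) == 0

def test_row_width_counts_stitches():
    assert row_width(["knit", "purl", "knit"]) == 3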

Testing the Documentation

Our documentation is hosted on Read the Docs. It should have these properties:

  1. Every module is documented.
  2. Everything that is public is documented.
  3. The documentation is syntactically correct.

These are qualities that can be tested, so they are tested. The code cannot be deployed if it does not meet these standards. We use Sphinx for building the docs, which makes it possible to test these properties in this way:

  1. For every module there exists a .rst file which automatically documents the module with autodoc.
  2. A Sphinx build outputs a list of objects that should be covered by documentation but are not.
  3. Sphinx outputs warnings throughout the build.

Testing our documentation allows us to keep it at a higher quality. Many more tests could be imagined, but the basic ones already help.
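For illustration, a minimal conf.py fragment that supports checks of this kind could look like the following; the project’s real configuration will differ in its details:

# docs/conf.py (illustrative fragment)
extensions = [
    "sphinx.ext.autodoc",    # pulls the documentation out of the docstrings (1)
    "sphinx.ext.coverage",   # lists public objects that lack documentation (2)
]

# Building with warnings treated as errors catches syntax problems (3):
#     sphinx-build -W -b html docs docs/_build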

Code Coverage

It is possible to test our code coverage and see how well we do using Codeclimate.com. It shows us the files we need to work on when we want to improve the quality of the package.

Landscape

Landscape is also free for open source projects. It can give hints about where to improve next. Also it is possible to fail pull requests if the quality decreases. It shows code duplication and can run pylint. Currently, most of the style problems arise from undocumented tests.

Summary

When we started with stricter quality assurance, the question arose whether it would only slow us down. Now we have learned to write properly styled PEP 8 code and are beginning to do automatically what pylint demands. High test coverage allows us to change the underlying functionality without changing the interface, and without fear that we may break something irrecoverably. With all these tools free for open-source software, I feel as if a burden has been taken from me: they spare me the time of setting quality assurance up myself.

Future Work

In the future we would also like to create a user interface. User interfaces are sometimes hard to test, so we plan not to put it into the package but to build it on top of the package.


Testing with Jasmine-Node

This week I have started testing my own code with Jasmine-Node. Testing is beneficial only when we do it with the right approach. I have gone through various articles and found Jasmine to be one of the best frameworks for testing JavaScript applications.

For NodeJS specifically, a better option is to use Jasmine-Node.

How to start with Jasmine-Node ?

For a quick-start, you need to have Jasmine-Node installed with npm.

npm install jasmine-node -g

After the installation, make a folder, say spec/, in the root directory of the project. It will contain all the tests for the Node app.

We can include the command to run the tests in the package.json file of the project.

./node_modules/.bin/jasmine-node spec

 

[Screenshot: the test command registered in the scripts section of package.json]

Here it is included under the key tester. Now we are ready to write our first test file.

So, in the Open-event-webapp spec/ folder I have created a file serverSpec.js. It will test a basic function written in fold.js, the file that includes all the functions of Open-event-webapp.

//fold.js

'use strict';

var exports = module.exports = {};

const fs = require('fs-extra');
const moment = require('moment');
const handlebars = require('handlebars');
const async = require('async');
const archiver = require('archiver');

// function to test: formats a speaker's name, appending the organisation if there is one
function speakerNameWithOrg(speaker) {
  return speaker.organisation ?
    `${speaker.name} (${speaker.organisation})` :
    speaker.name;
}

// the function has to be exported so that the spec can call fold.speakerNameWithOrg()
exports.speakerNameWithOrg = speakerNameWithOrg;

To test the above function, I have written the test like this

//serverSpec.js

'use strict'

const request = require("request");
const fold = require(__dirname +'/../src/generator/backend/fold.js');
const port = process.env.PORT || 5000;
const base_url = "http://localhost:"+ port +"/"

describe("Open Event Server", function() {
  describe("GET /", function() {
    it("returns status code 200", function(done) {
      request.get(base_url, function(error, response, body) {
      expect(response.statusCode).toBe(200);
      done();
    });
   });
 }); 
}); 

describe("Testing Fold.js",function() {
  describe("Testing SpeakerWithOrg function",function(){
    it("it will test speakername", function(){
       let speaker= {
          name: "speakername",
        }
    expect(fold.speakerNameWithOrg(speaker)).toEqual("speakername");
   });

    it("it will test speakername and organisation", function(){
       let speakerWithOrg= {
          name:"speakername",
          organisation:"Organisation",
        }
       expect(fold.speakerNameWithOrg(speakerWithOrg)).toEqual("speakername (Organisation)");
    });
 });
 
 });

There are two describe blocks here: one to test whether the server is running properly, and a second block that includes two ‘it’ statements used to test the function’s behavior.

The describe blocks are nested, which is a nice feature provided by Jasmine-Node.

To run the above tests in Open-event-webapp, use the commands:

1. npm run start
2. npm run tester

On success, the output will be something like this:

[Screenshot: Jasmine-Node test output]

Other Sources

  1. Jasmine Presentation
  2. HTML goodies

 

 


Importance of the test cases for the KnitLib

Having test cases is very important, especially for a library like KnitLib, because using test cases we can clearly test particular features. In KnitLib, the test cases show how KnitLib should be checked. Test cases also help new contributors to understand KnitLib.

There are several test cases for the current KnitLib implementation, such as tests on ayab communication, tests on the ayab image, tests on the command line interface, tests on the KnitPat module and tests on the knitting plugin.

For example, in ayab communication several important functions have been tested: closing the serial port communication, opening the serial port with a baud rate of 115200 (which ayab uses), sending the start message to the controller, sending a line of data via the serial port and reading a line from the serial communication. Most of these tests have been done using mocks. Mock is a Python library for testing in Python. Using mocks we can replace parts of our system with mock objects and make assertions about how they have been used. We can easily represent some complex objects without having to manually set up stubs during a test.
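As a rough illustration of the idea (the class and method below are hypothetical and not KnitLib’s actual API), a mocked serial port lets a test assert which bytes a communication object writes, without any real hardware attached:

from unittest import mock

# Hypothetical communication class for illustration; KnitLib's real API differs.
class AyabCommunication(object):
    def __init__(self, serial_port):
        self._serial = serial_port

    def req_start(self, start_line, end_line):
        """Send the start message for the given line range to the controller."""
        self._serial.write(bytes([0x01, start_line, end_line]))

def test_req_start_writes_start_message():
    # Replace the real serial port with a mock object.
    fake_serial = mock.Mock()
    communication = AyabCommunication(fake_serial)

    communication.req_start(0, 10)

    # Assert how the mock was used instead of touching real hardware.
    fake_serial.write.assert_called_once_with(bytes([0x01, 0, 10]))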

It is very important to further improve the test cases for KnitLib, because with the help of good test cases we can be confident that KnitLib’s features and functionalities keep working as intended.

Regards,

Shiluka.
