Keep your node server running using PM2

The Open Event webapp generator is a Node.js project (using an Express server), and a copy of it runs all the time on my personal DigitalOcean box (in addition to our Heroku instance).

On a service like Heroku, the platform manages the task of bringing your server process up. But on, say, a plain Linux distro on the DigitalOcean box, you have to manually run

 npm run server

to start the server.

That is all good, but it is a foreground shell process, which means you will lose the node server when you log out (or when your SSH connection breaks).
So we need a way to keep the process running in the background.

In bash on Unix, we can do either of the following:

 npm run server &

The “&” at the end forks the task into the background. If you have already started the server without it, you can do the following instead.

npm run server # starts in foreground
# Press Ctrl+Z: this suspends the task and frees the shell
bg 1 # resumes job number 1 in the background

Again, both of these are hacky methods; they work only on Unix-like OSs and are not really recommended for production.
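(Depending on shell settings, a backgrounded job can still be killed by the hangup signal when the terminal closes. If you do have to rely on these shell tricks for a while, a slightly more robust variant is nohup, which detaches the job from that signal:

 nohup npm run server &   # process ignores the hangup signal sent on logout
 # output is written to nohup.out when stdout is a terminal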
For production we need a process manager, and for Node.js the best we can get is pm2 – a purpose-built process manager for Node.

Install pm2 first

sudo npm install -g pm2

Using pm2, we can start any process that can be started with node. We can start the app.js script like this

pm2 start src/app.js

pm2 can run npm tasks too:

pm2 start npm -- start

pm2 displays a pretty status table, and we can start, stop, kill and/or restart any managed process.
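For example, the everyday pm2 commands look like this (process names/ids come from the status table; the last two, which bring the processes back after a reboot, are optional extras):

 pm2 list            # show the status table of all managed processes
 pm2 stop app        # stop the process named "app" (or use its numeric id)
 pm2 restart app     # restart it
 pm2 logs app        # tail its logs
 pm2 delete app      # remove it from pm2's process list
 pm2 startup         # generate an init script so pm2 itself starts on boot
 pm2 save            # save the current process list to resurrect after reboot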

 


sTeam GSoC 2016 Windup

sTeam (societyserver) aims to be a platform for developing collaborative applications.
sTeam server project repository: sTeam.
sTeam-REST API repository: sTeam-REST

An overview of the work done by ajinkya007 during Google Summer of Code 2016 with FOSSASIA on its project sTeam.

The community bonding period saw the creation of a Docker image and a Debian package for the sTeam server. The integration of the sTeam shell into vi, improvements in the export and import to git scripts, user and group manipulation commands, sending mails through the command line, viewing logs, and the edit script modifications followed. In the later part of GSoC the sTeam-rest repository was restructured, unit and API endpoint tests were performed, and the newly developed web interface was tested.
The code written during this period by Siddhant and me was merged and the conflicts were resolved. The merged code was tested thoroughly, as no automated test integration tool supports the Pike programming language. Documentation was generated using Doxygen and deployed in the gh-pages of the sTeam server repository.

A Trello board was maintained throughout the course of GSoC 2016.

Trello Board: sTeam

Accomplishments

Issues Reported and Resolved

A list of tasks covered and all the Pull requests related to each:

  • Make changes in the Makefile for installation of sTeam — Issue-25, Issue-27; PR-66, PR-67
  • Edit script modifications — Issue-20, Issue-29, Issue-43; PR-44, PR-48
  • Indentation of output in steam-shell — Issue-24; PR-42
  • Integrate steam-shell into vim or emacs — Issue-37, Issue-43, Issue-49; PR-41, PR-48, PR-51
  • Improve the import and export from git scripts — Issue-9, Issue-14, Issue-16, Issue-18, Issue-19, Issue-46; PR-45, PR-54, PR-55, PR-76
  • Create, delete and list users through the command line — Issue-58, Issue-69, Issue-72; PR-59, PR-70, PR-78
  • Sending mails through the command line — Issue-74; PR-85
  • Generate error logs and display them in the CLI — Issue-83; PR-86
  • Create a file of any MIME type from the command line — Issue-79; PR-82
  • Add more commands for group operations — Issue-80; PR-84
  • Add more utility to the steam-shell — Issue-56, Issue-71, Issue-73; PR-57, PR-75, PR-81
  • Restructure the sTeam-rest repository — list of issues; list of PRs
  • Write test cases to test the sTeam-rest API — list of issues; list of PRs
  • Create a Debian package and a Docker image for easy deployment — Create docker image; Docker Image
  • Document the work done — Issue-149; sTeam Server Structure, sTeam Server Documentation
  • Test the web interface

Commits Merged

During the course of GSoC 2016, work was done on the sTeam and sTeam-rest repositories.

1. The work done on the sTeam repository.

We have combined all the work into two branches for the ease of creating a Debian package. The commits made by me in each branch can be seen here.

2. The work done on the sTeam-rest repository

The pull requests sent for the issues are yet to be merged into the main repository. The list of PRs for the sTeam-rest repository:

sTeam-rest PRs

The weekly blogs

The blogs summarizing the work done during each week were published on my personal website. These can be found at Weekly Blogs.
All the blogs can also be found on the FOSSASIA blog.

Scrums

Scrum reports were posted on #steam-devel on irc.freenode.net and the sTeam Google group. The sTeam Trello board also has everyday scrum reports.

Further Improvements

  1. The sTeam command line lacks the functionality to read and set object access permissions. A function analogous to getfacl() is needed to read and change sTeam server object permissions.
  2. The sTeam Debian package, for easy installation of the sTeam server, is yet to be fully packaged.

Special Thanks

  • I would like to thank my mentors Mario Behling, Hong Phuc Dang, Martin Bahr, Trilok Tourani and my peers for being there to help me and guide me.
  • I would like to thank FOSSASIA, sTeam and Pike Community for giving me this opportunity and guiding me in this endeavour.
  • I would also like to thank Google Summer of Code for this experience.

Feel free to explore the repository. Suggestions for improvements are welcome.

Check out the FOSSASIA Ideas page for more information on projects supported by FOSSASIA.

Software Testing

In this post I would like to explain what software testing is, along with the different methods and levels of software testing.

What is software testing?

Software Testing is the process of evaluating a system or its components with the intent to find whether it satisfies the specified requirements or not.

Testing means executing a system in order to identify gaps, errors, or missing requirements relative to the actual requirements.

What are the Levels of software testing?

Software products are tested at four levels:
  • Unit testing
  • Integration testing
  • System testing
  • Acceptance testing

Unit testing

  • During unit testing, modules are tested in isolation: if all modules were tested together, it would not be easy to determine which module has the error.
  • Unit testing reduces debugging effort severalfold. Programmers carry out unit testing immediately after they complete the coding of a module.

Role of Unit Testing

  • Assure minimum quality of units before integration into system
  • Focus attention on relatively small units
  • Testing forces us to read our own code – spend more time reading than writing
  • Automated tests support maintainability and extendibility

Integration testing

After the different modules of a system have been coded and unit tested:
  • the modules are integrated in steps according to an integration plan
  • the partially integrated system is tested at each integration step.

Role of Integration Testing

  • Gain confidence in the integrity of overall system design
  • Ensure proper interaction of components

Integration Testing Strategies

  • Big-bang
  • Top-down
  • Bottom-up
  • Critical-first
  • Function-at-a-time
  • As-delivered
  • Sandwich

System Testing

  • Gain confidence in the integrity of the system as a whole
  • Ensure compliance with functional requirements
  • Ensure compliance with performance requirements

Acceptance Testing

  • Testing performed by the customer or end user
  • to determine whether the system should be accepted or rejected

Testing is a very important step of software development. We have used unit testing for our project, Engelsystem, and we are developing new features. Interested developers can work with us.

Development: https://github.com/fossasia/engelsystem

Issues/Bugs: https://github.com/fossasia/engelsystem/issues

Unit Testing and Travis

Tests are an important part of any software development process. We need to write tests for any feature we develop, to check that it works properly.
In this post, I am going to talk about writing unit tests and running them.

If you are a developer, I assume you have heard about unit tests, and most of you have probably even written one. Unit testing is becoming more and more popular in software development. Let’s first talk about what unit testing is:

What is unit testing?

Unit testing is the process through which units of source code are tested to verify if they work properly. Performing unit tests is a way to ensure that all functionalities of an application are working as they should. Unit tests inform the developer when a change in one unit interferes with the functionality of another. Modern unit testing frameworks are typically implemented using the same code used by the system under test. This enables a developer who is writing application code in a particular language to write their unit tests in that language as well.

What is a unit testing framework?

Unit testing frameworks are developed for the purpose of simplifying the process of unit-testing. Those frameworks enable the creation of Test Fixtures, which are classes that have specific attributes enabling them to be picked up by a Test Runner.

Although it is possible to perform unit tests without such a framework, the process can be difficult, complicated and very manual.

There are a lot of unit testing frameworks available. Each framework has its own merits, and selecting one depends on which features are needed and the level of expertise of the development team. For my project, Engelsystem, I chose PHPUnit as the testing framework.

PHPUnit

With PHPUnit, the most basic thing you’ll write is a test case. A test case is just a term for a class with several different tests all related to the same functionality. There are a few rules you’ll need to follow when writing your cases so that they work with PHPUnit:

  • The test class must extend the PHPUnit_Framework_TestCase class.
  • The test methods never receive any parameters.

Below is an example of test code from my project, Engelsystem:

<?php

class ShiftTypes_Model_test extends PHPUnit_Framework_TestCase {

  private $shift_id = null;

  public function create_ShiftType() {
    $this->shift_id = ShiftType_create('test', '1', 'test_description');
  }

  public function test_ShiftType_create() {
    $count = count(ShiftTypes());

    // Creating a shift type should succeed and hand back an id.
    $this->create_ShiftType();
    $this->assertNotFalse($this->shift_id);

    // There should be one more ShiftType now.
    $this->assertEquals(count(ShiftTypes()), $count + 1);
  }

  public function test_ShiftType() {
    $this->create_ShiftType();
    $shift_type = ShiftType($this->shift_id);
    $this->assertNotFalse($shift_type);
    $this->assertTrue(count(ShiftTypes()) > 0);
    $this->assertNotNull($shift_type);
    $this->assertEquals($shift_type['name'], 'test');

    // Looking up an id that does not exist should return null.
    $this->assertNull(ShiftType(-1));
  }

  public function tearDown() {
    if ($this->shift_id != null) {
      ShiftType_delete($this->shift_id);
    }
  }

}

?>

We can use different Assertions to test the functionality.
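Assuming PHPUnit is installed (globally or via Composer), a test file like the one above can typically be run from the command line; the paths below are illustrative, not necessarily the project's exact layout:

phpunit test/ShiftTypes_Model_test.php   # run a single test file
phpunit test/                            # run every test in the directory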

We are running these tests on Travis-CI.

What is Travis-CI?

Travis CI is a hosted, distributed continuous integration service used to build and test software projects hosted on GitHub.

Open source projects may be tested at no charge via travis-ci.org. Private projects may be tested at the same location on a fee basis. TravisPro provides custom deployments of a proprietary version on the customer’s own hardware.

Although the source is technically free software and available piecemeal on GitHub under permissive licenses, the company notes that it is unlikely that casual users could successfully integrate it on their own platforms.

To get started with Travis-CI, visit the following link: Getting started with Travis-CI.
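For a rough idea, here is a minimal sketch of what a .travis.yml for a PHP project like Engelsystem might look like; the PHP versions and the test path are illustrative, not our actual configuration:

language: php
php:
  - '5.6'
  - '7.0'
script: phpunit test/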

We are developing new features for Engelsystem. Developers who are interested in contributing can work with us.

Development: https://github.com/fossasia/engelsystem
Issues/Bugs: https://github.com/fossasia/engelsystem/issues

Responsive UI: Testing & Reporting

A few days back I wrote a blog post about how to make a website responsive. But how exactly do you check a website for responsiveness? Well, you can do it using your browser itself. Both Mozilla Firefox and Google Chrome provide a responsive mode for testing the responsiveness of websites at standard resolutions, and for dragging to check any resolution. The Open Event Organizer’s Server has quite a few responsiveness issues, so I will also describe how to report them.


How can you add a bug?

It’s very simple to start testing; you don’t need any special experience in the testing area. To start testing the Open Event project you just need to open our web application http://open-event.herokuapp.com/ and begin. Isn’t it easy? You should focus on finding as many bugs as possible to provide your users with perfectly working software. If you find a bug, you need to describe it in detail.

How can you report a bug to Open Event?

Go to the GitHub issues page and click “New issue” (the green button).


Our Requirements:

Good description – if you have found a bug, you have had to reproduce it, so you have a good background to describe it well. This is important because a good issue description saves developers time: they don’t need to ask the reporter about the details of the bug. The best description a tester can add is step-by-step instructions for how a developer can reproduce the bug.

Logs – a description is sometimes not enough, so you need to attach the logs generated by our server (if you have access; if you don’t, ask me).

Pictures – they are helpful because they let us quickly find where the bug is located.

Labels – you need to assign the appropriate label to the issue.

That’s all!

 

Code Quality in the knittingpattern Python Library

In our Google Summer of Code project, part of our work is to bring knitting to the digital age. We are Kirstin Heidler and Nicco Kunzmann. Our knittingpattern library aims at being the exchange and conversion format between different types of knit-work representations: hand-knitting instructions, machine commands for different machines, and SVG schemata.

[Images: the schema generated by the knittingpattern library for the “Cafe” instructions, next to the original pattern schema “Cafe”.]
The image above was generated by this Python code:

import knittingpattern, webbrowser
example = knittingpattern.load_from().example("Cafe.json")
webbrowser.open(example.to_svg(25).temporary_path(".svg"))

So much for the context. Now about the quality tools we use:


Continuous integration

We use Travis CI [FOSSASIA] to upload packages for a specific git tag automatically. The Travis build runs under Python 3.3 to 3.5. It first builds the package and then installs it with its dependencies. To upload tags automatically, one can configure Travis, preferably with the command line interface, to save the username and password for the Python Package Index (Pypi).[TravisDocs] Our process of releasing a new version is the following:

  1. Increase the version in the knitting pattern library and create a new pull request for it.
  2. Merge the pull request after the tests passed.
  3. Pull and create a new release with a git tag using
    setup.py tag_and_deploy

Travis then builds the new tag and uploads it to Pypi.
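As a sketch, the credential setup mentioned above can be done with the Travis command line client (a Ruby gem):

gem install travis
travis setup pypi   # prompts for Pypi credentials and writes the encrypted deploy section into .travis.yml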

With this we have a basic quality assurance. Pull requests need to pass all tests before they can be merged. Travis can be configured to automatically reject a request with errors.

Documentation Driven Development

As mentioned in a blog post, documentation-driven development is something worth checking out. In our case that means writing the documentation first, then the tests, and then the code.

Writing the documentation first means thinking in the space of the mental model you have for the code. It defines the interfaces you would be happy to use. A lot of edge cases can be thought of at this point.

When writing the tests, they are often split up and no longer represent the flow of thought you had when writing the documentation. Tests can be seen as the glue between the code and the documentation. Just as with writing code to pass the tests, in the conversation between the tests and the documentation I find out things I have forgotten.

When writing the code in a test-driven way, another conversation starts. I call implementing the tests a conversation because the tests tell the code that it should be different, and the code tells the tests about their inconsistencies, like misspellings and bloated interfaces.

With writing documentation first, we have the chance to have two conversations about our code, in spoken language and in code. I like it when the code hears my wishes, so I prefer to talk a bit more.

Testing the Documentation

Our documentation is hosted on Read the Docs. It should have these properties:

  1. Every module is documented.
  2. Everything that is public is documented.
  3. The documentation is syntactically correct.

These are qualities that can be tested, so they are tested. The code cannot be deployed if it does not meet these standards. We use Sphinx for building the docs, which makes it possible to test these properties in the following way (see the command sketch after the list):

  1. For every module there exists a .rst file which automatically documents the module with autodoc.
  2. A Sphinx build outputs a list of objects that should be covered by documentation but are not.
  3. Sphinx outputs warnings throughout the build.
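
A sketch of how the second and third checks can be run with standard Sphinx tooling; the docs/ paths are illustrative, not necessarily the project's actual layout:

sphinx-build -W -b html docs docs/_build/html        # -W turns warnings into errors (property 3)
sphinx-build -b coverage docs docs/_build/coverage   # sphinx.ext.coverage lists undocumented objects (property 2)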

Testing our documentation allows us to keep it at a higher quality. Many more tests could be imagined, but the basic ones already help.

Code Coverage

It is possible to measure code coverage and see how well we do using Codeclimate.com. It shows us the files we need to work on when we want to improve the quality of the package.

Landscape

Landscape is also free for open-source projects. It can give hints about where to improve next. It is also possible to fail pull requests if the quality decreases. It shows code duplication and can run pylint. Currently, most of the style problems arise from undocumented tests.

Summary

When we started with the stricter quality assurance, the question arose whether it would only slow us down. Now, we have learned to write properly styled PEP 8 code and are beginning to do automatically what pylint demands. High test coverage allows us to change the underlying functionality without changing the interface and without fear that we may break something irrecoverably. With all these free tools for open-source software sparing me the time to set up quality assurance, I feel like a burden has been taken from me.

Future Work

In the future we would also like to create a user interface. User interfaces are sometimes hard to test, so we plan not to put the UI into the package but to build it on top of the package.

Testing with Jasmine-Node

This week I started testing my own code with Jasmine-Node. Testing is beneficial only when we do it with the right approach. I went through various articles and found Jasmine to be one of the best frameworks for testing JavaScript applications.

Specifically for Node.js, a better option is to use Jasmine-Node.

How to start with Jasmine-Node ?

For a quick start, you need to have Jasmine-Node installed with npm:

npm install jasmine-node -g

After the installation, make a folder, say spec/, in the root directory of the project. It will contain all the tests for the node app.

We can include the command to run the tests in the project’s package.json file:

./node_modules/.bin/jasmine-node spec

 

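A sketch of what the scripts section of package.json might look like (the start entry is illustrative, not necessarily the project's exact script):

{
  "scripts": {
    "start": "node src/app.js",
    "tester": "./node_modules/.bin/jasmine-node spec"
  }
}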

Here it is included under the key tester. Now we are ready to write our first test file.

So, in the Open-event-webapp spec/ folder I have created a file serverSpec.js. The file will test a basic function written in fold.js, the file that includes all the functions of Open-event-webapp.

//fold.js

'use strict';


var exports = module.exports = {};

const fs = require('fs-extra');
const moment = require('moment');
const handlebars = require('handlebars');
const async = require('async');
const archiver = require('archiver');

//function to test

function speakerNameWithOrg(speaker) {
  return speaker.organisation ?
    `${speaker.name} (${speaker.organisation})` :
    speaker.name;
}

// exported so that the spec below can call fold.speakerNameWithOrg()
exports.speakerNameWithOrg = speakerNameWithOrg;

To test the above function, I have written the test like this:

//serverSpec.js

'use strict'

const request = require("request");
const fold = require(__dirname +'/../src/generator/backend/fold.js');
const port = process.env.PORT || 5000;
const base_url = "http://localhost:"+ port +"/"

describe("Open Event Server", function() {
  describe("GET /", function() {
    it("returns status code 200", function(done) {
      request.get(base_url, function(error, response, body) {
      expect(response.statusCode).toBe(200);
      done();
    });
   });
 }); 
}); 

describe("Testing Fold.js",function() {
  describe("Testing SpeakerWithOrg function",function(){
    it("it will test speakername", function(){
       let speaker= {
          name: "speakername",
        }
    expect(fold.speakerNameWithOrg(speaker)).toEqual("speakername");
   });

    it("it will test speakername and organisation", function(){
       let speakerWithOrg= {
          name:"speakername",
          organisation:"Organisation",
        }
       expect(fold.speakerNameWithOrg(speakerWithOrg)).toEqual("speakername (Organisation)");
    });
 });
 
 });

There are two top-level describe blocks here: one tests whether the server is running properly, and the second contains two ‘it’ blocks that test the function’s behavior.

The describe blocks are nested, which is a nice feature provided by Jasmine-Node.

To run the above test in Open-event-webapp, use the commands:

1. npm run start
2. npm run tester

On success, the output will look something like this:

[Image: terminal output of the passing test run.]

Other Sources

  1. Jasmine Presentation
  2. HTML goodies

 

 

Importance of the test cases for the KnitLib

Having test cases is very important, especially for a library like KnitLib, because with test cases we can clearly test particular fields. In KnitLib, the test cases show how the library should be checked, and they also help new contributors understand the KnitLib.

There are several test cases for the current KnitLib implementation, such as tests on ayab communication, tests on the ayab image, tests on the command line interface, tests on the KnitPat module and tests on the knitting plugin.

For example, in ayab communication several important functions have been tested: closing the serial port communication, opening the serial port with a baud rate of 115200 (which ayab requires), sending the start message to the controller, sending a line of data via the serial port, and reading a line from the serial communication. Most of these tests have been done using mocks. Mock is a Python library for testing: using mocks, we can replace parts of our system with mock objects and make assertions about how they have been used. We can easily represent some complex objects without having to manually set up stubs during a test.
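As a loose illustration of the mock approach, using the standard-library variant of the mock API (the byte values here are made up for the example, not KnitLib's actual protocol):

from unittest.mock import MagicMock

# Replace the serial port with a mock object instead of real hardware.
serial_port = MagicMock()

# The code under test would write a message to the port, e.g.:
serial_port.write(b'\x01\x00')

# Afterwards we can assert how the mock was used.
serial_port.write.assert_called_once_with(b'\x01\x00')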

It is very important to keep improving the test cases for the KnitLib, because with the help of good test cases we can guarantee that KnitLib’s features and functionalities work as expected.

Regards,

Shiluka.