Debugging with Node

Node.js is a powerful event-driven, server-side JavaScript environment based on V8. Debugging Node applications is not as easy as debugging browser code, but Node's wonderful debugger can make it easy if used effectively.

How to start the debugger

To debug a Node application, we only have to run a debug command. For a quick demonstration, I have used the Open-Event-Webapp fold.js code here.

Suppose there is a problem in the script file and we have to debug it. All we have to do is run the debug command from the directory containing the file.

node debug ./fold.js


This brings us into debug mode. Once in debug mode, we can try various commands: ‘n’ to step to the next line, ‘s’ to step into a function, and ‘list(k)’, where k is the number of lines of code you want shown on the screen.


The ‘n’ command always takes us to the next instruction, so in long codebases it is recommended to use ‘c’ (continue execution) to run ahead to the next breakpoint.

To set the breakpoint, we can use the command:

setBreakpoint() or sb()
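Both forms accept optional arguments for breaking at a specific place; for example (the file name and line numbers here are illustrative):

setBreakpoint()               // break on the current line
setBreakpoint(10)             // break on line 10
setBreakpoint('fold.js', 10)  // break on line 10 of fold.js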

Once the breakpoint is set, we can inspect values at that point by using the repl command.
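The debugger also stops at debugger; statements placed in the source. Here is a minimal sketch with a hypothetical script (not the actual fold.js from Open-Event-Webapp):

// example.js – a hypothetical script for trying out the debugger
var sessions = ["session-a", "session-b", "session-c"];

function fold(list) {
    var result = [];
    for (var i = 0; i < list.length; i++) {
        debugger; // 'c' stops here on each iteration
        result.push(list[i].toUpperCase());
    }
    return result;
}

console.log(fold(sessions));

Running node debug example.js and pressing ‘c’ pauses at the debugger statement; typing repl then lets you inspect list[i] and result.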

Debugging code correctly can save a lot of time and effort, and these debugger techniques are well worth learning.


How can you add a bug?

It’s very simple to start testing; you don’t need any special experience in the testing area. To start testing the Open Event project, just open our web application http://open-event.herokuapp.com/ and begin. Isn’t that easy? You should focus on finding as many bugs as possible to provide your users with perfectly working software. When you find a bug, you need to describe it in detail.

How can you report a bug to Open Event?

Go to the GitHub issues page and click New issue (the green button).


Our Requirements:

Good description – if you find a bug, reproduce it first, so that you have a good background to describe it very well. This is important because a good issue description saves developers time; they don’t need to ask the reporter about the details of the bug. The best description a tester can add is how a developer can simply reproduce the bug, step by step.

Logs – a description is sometimes not enough, so you need to attach the logs generated by our server (it’s nice if you have access; if you don’t, ask me).

Pictures – screenshots are helpful because they let us quickly find where the bug is located.

Labels – you need to assign the appropriate label to the issue.

That’s all!

 


sTeam REST API

sTeam (societyserver) aims to be a platform for developing collaborative applications.
sTeam server project repository: sTeam.
sTeam-REST API repository: sTeam-REST

REST Services

REST (Representational State Transfer) is the software architectural style of the World Wide Web, introduced by Roy Fielding in his doctoral dissertation in 2000. Its purpose is to promote performance, scalability, simplicity, modifiability, visibility, portability, and reliability. It follows a client/server model with a uniform interface and is stateless. REST is most commonly associated with HTTP, but it is not strictly tied to it.

REST Principles

  • Resources: each and every component is a resource. A resource is accessed through a common interface using standard HTTP methods.
  • Messages use HTTP methods like GET, POST, PUT, and DELETE.
  • Resource identification through URI: resources are identified using URIs and represented using JSON or XML.
  • Stateless interactions take place between the server and the client: no request context is saved at the server, and the client maintains the state of the session.

HTTP methods

The CRUD (create, retrieve, update, and delete) operations are performed using the HTTP methods.

GET

It is used to retrieve information. A GET request executed any number of times with the same parameters returns the same results; this makes it idempotent. Partial or conditional requests can be sent. It is a read-only type of operation.

Retrieve a list of users:

GET /api.example.com/UserService/users

POST

POST is usually used to create a new entity, but it can also be used to update an existing one. The server decides what to do with the entity provided in the request to the URI.

Create a new user with an ID 2:

POST /api.example.com/UserService/users/2

PUT

A PUT request is always idempotent: executing the same request any number of times will not change the outcome. PUT can be used to create a new entity or to update an existing one.

Modify the user with an ID of 1:

PUT /api.example.com/UserService/users/1

PATCH

A PATCH request updates only the specified fields of an entity. Unlike PUT, PATCH is not guaranteed to be idempotent.

Modify a user with an id of 1:

PATCH /api.example.com/UserService/users/1

DELETE

It removes the resource, either immediately or at a later point in time. A DELETE request can be asynchronous or long-running.

Delete a user with an ID of 1:

DELETE /api.example.com/UserService/users/1
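To make this concrete, here is a minimal JavaScript sketch of these calls using the Fetch API; the base URL and the payload are illustrative only:

var base = "https://api.example.com/UserService";

// GET – retrieve the list of users (idempotent, read-only)
fetch(base + "/users")
    .then(function (res) { return res.json(); })
    .then(function (users) { console.log(users); });

// PUT – modify the user with an ID of 1 (repeatable with the same outcome)
fetch(base + "/users/1", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "Alice" })
});

// DELETE – remove the user with an ID of 1
fetch(base + "/users/1", { method: "DELETE" });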

sTeam-REST API

Installing and activating the REST API

The REST API is developed as an application inside the sTeam server. This simplifies development quite a lot, as we don’t need to restart the server for every change; instead, just the API code gets updated and reloaded. It may eventually be integrated into the core, but the long-term plan is actually to move functionality out of the core, to make development easier.

To get the current version of the API, clone the steam-rest repo into your home directory or wherever you keep your development repos. Then change to the tools directory of your installation and run import-from-git.

git clone https://github.com/societyserver/steam-rest
cd steam-rest
git checkout origin/rest.pike
export steamrest=`pwd`
cd /usr/local/lib/steam/tools
./import-from-git.pike -u root $steamrest /

Note: The new import-from-git.pike script supports importing documents of all mime types.

It is important that the first import is done as root because the API code needs to run with root privileges and it will only do that if the object that holds the source is created as root.

Once the API code is loaded, there are just a few tweaks needed to make it work.

We need to fix the mime-type, as the import script is not doing that yet.

OBJ("/sources/rest.pike")->set_attribute("DOC_MIME_TYPE", "source/pike");

Changing the mime type will change the class of the REST API script from Document to DocLpc.

> OBJ("/sources/rest.pike");                                               
(1) Result: 127.0.0.1:1900/rest.pike(#840,/classes/Document,17,source/pike)
> OBJ("/sources/rest.pike");                                               
(2) Result: 127.0.0.1:1900/rest.pike+(#840,/classes/DocLpc,529,source/pike,0 Instances, ({  }))

This takes a moment, so check the type a few times until it’s done. Then instantiate an object from the source, give it a proper name, and move it to the /scripts/ container:

object rest = OBJ("/sources/rest.pike")->provide_instance();
rest->set_attribute("OBJ_NAME", "rest.pike");
rest->move(OBJ("/scripts/"));

Instantiating the object needs to be done as sTeam-root, in order for it to have permissions to run on behalf of other users.

Once this is done you are ready to start using the API.

sTeam-REST API tests

The project contains a set of examples and tests for the RESTful API for the sTeam server.

The code is written in CoffeeScript and needs Node.js only for the CoffeeScript translation. Deployment can be done as static JavaScript files and does not need any kind of dynamic server for the front-end. The back-end is a RESTful API written for the sTeam server, as used by steam.realss.com.

Development instructions

step 1: install node.js

http://nodejs.org/download/

step 2: clone the repository

git clone https://github.com/societyserver/steam-rest

step 3: install node packages:

npm install

This installs all dependencies (including coffee) for our project into the project’s node_modules directory, based on the package.json file.

step 4: start the server

node_modules/.bin/coffee scripts/server.coffee

but for convenience we can install coffee in the global node environment:

npm install -g coffee-script

so we can just say

coffee scripts/server.coffee

if the server is working you’ll see:

Listening on port 7000

Testing

Frisby.js is used to test the API. It runs through Jasmine and is based on Node.js.

Once you have nodejs installed, run the following statement to install Frisby and Jasmine:

npm install -g jasmine-node frisby

Then execute the tests:

cd project/directory
jasmine-node test/
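For illustration, a minimal Frisby spec could look like this; the /users endpoint is an assumption, while port 7000 matches the server output above:

var frisby = require('frisby');

frisby.create('GET the list of users')
    .get('http://localhost:7000/users')
    .expectStatus(200)
    .expectHeaderContains('content-type', 'application/json')
    .toss();

Saved as, say, test/users_spec.js, it is picked up by jasmine-node test/.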

The karma testing framework is also used for testing the sTeam REST API.

There were some inherent issues with the test framework which were addressed.

Issue                                                      GitHub Issue    GitHub PR
Update Readme.md                                           Update Readme   PR-2
Add javascript dependencies                                Issue-4         PR-6
Add node dependencies                                      Issue-5         PR-7
Add angular-mocks.js script for testing the REST services  Issue-8         PR-9

The project dependencies were not met, and this resulted in errors when the project was run on localhost. The angular-ui-router, angular-bootstrap, and bootstrap JS frameworks were not installed in the node modules of the project. As a result, the bower.json script was modified to include these dependencies.

bower.json

{
  "name": "bower",
  "version": "0.1",
  "private": true,
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "test",
    "tests"
  ],
  "dependencies": {
    "angular": "",
    "angular-route": "~1.4.8",
    "angular-ui-router": "",
    "angular-bootstrap": "",
    "bootstrap": ""
  }
}

The node dependencies karma, frisby, and jasmine-node were included in package.json; these are installed when npm install is executed.

package.json

{
  "name": "TechGrind",
  "version": "0.1.1",
  "private": true,
  "dependencies": {
    "express": "",
    "coffee-script": "",
    "morgan": "",
    "compression": "",
    "method-override": "",
    "body-parser": "",
    "serve-static": "",
    "errorhandler": "",
    "bower": "",
    "jasmine-node": "",
    "frisby": "",
    "karma": ""
  },
  "production_dirs": {
    "coffee_src": "src/",
    "src": "app/",
    "dest": "app_production/"
  },
  "devDependencies": {
  },
  "scripts": {
    "postinstall": "bower install"
  }
}

Feel free to explore the repository. Suggestions for improvements are welcome.

Check out the FOSSASIA Ideas page for more information on projects supported by FOSSASIA.


Sending Email using Sendgrid API


One of the important features when writing server-side code for a website or web application is sending emails. But how do we send emails? There are different ways and packages with which you can set up SMTP ports and send emails. So why SendGrid specifically? Because along with SMTP relay, SendGrid also allows you to send emails using its Web API, which makes the work much easier. Here, we will discuss using the Web API.
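As a sketch of what such a Web API call looks like, here is a minimal Node example posting directly to SendGrid's v3 mail/send endpoint; the API key and the addresses are placeholders:

var https = require('https');

var body = JSON.stringify({
    personalizations: [{ to: [{ email: "recipient@example.com" }] }],
    from: { email: "sender@example.com" },
    subject: "Hello from SendGrid",
    content: [{ type: "text/plain", value: "Sent via the Web API." }]
});

var req = https.request({
    hostname: "api.sendgrid.com",
    path: "/v3/mail/send",
    method: "POST",
    headers: {
        "Authorization": "Bearer " + process.env.SENDGRID_API_KEY,
        "Content-Type": "application/json",
        "Content-Length": Buffer.byteLength(body)
    }
}, function (res) {
    console.log("Status:", res.statusCode); // 202 means accepted for delivery
});

req.on("error", console.error);
req.write(body);
req.end();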


Drag and Drop directives in Angular Js | sTeam-web-UI | sTeam


Recent development on the sTeam web interface involved writing drag and drop directives to support drag/move interactions in the workarea and the workspace. The concept here is to provide users with the ability to arrange rooms and documents swiftly and easily.

The idea behind it:
In sTeam-web-UI the concept of rooms/documents is implemented using the workareaCtrl. After creating a document or room, the created objects appear in the workarea, where in the future they can be sorted and searched. Adding support for drag movements makes the UI/UX considerably better.

Implementation strategy:
First we need to create directives in Angular that support the desired actions, so that when those actions are triggered in the front end they result in the corresponding change. In order to link those actions, we must write directives that properly catch the emitted events.

The drag Objective
The way to implement the drag objective is to get the element selected for dragging and then emit the event that completes the drag action. We have to go the traditional way to work this out.

// get the element first and make sure it has an id
angular.element(element).attr("isDraggable", true);
var id = angular.element(element).attr("id");
if (!id) {
    id = uuid.create();
    angular.element(element).attr("id", id);
}

Now we emit the events.

// emit events
element.bind("startDrag", function (data) {
    // put the dragged element's id into the drag payload
    data.originalEvent.dataTransfer.setData('text', id);
    console.log("Starting to drag");
    $rootScope.$emit("STEAM-DRAG-STRT");
});
element.bind("stopDrag", function (data) {
    console.log('Stopping to drag');
    $rootScope.$emit("STEAM-DRAG-STOP");
});

If we observe the element-capturing part, we take the id of the element in the DOM and bind the respective element to an action, be it startDrag or stopDrag. The idea here is to store the id of the element and bind the respective action/event to it, as the directive sketch below shows.
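For context, here is a minimal sketch of how such handlers could be registered as an AngularJS directive. The module name, directive name, and the uuid service are assumptions, and the native dragstart/dragend events stand in for the custom ones above:

angular.module('steamWebUI')
    .directive('steamDraggable', ['$rootScope', 'uuid', function ($rootScope, uuid) {
        return {
            restrict: 'A',
            link: function (scope, element, attrs) {
                // make the element draggable and ensure it carries an id
                element.attr('draggable', true);
                var id = element.attr('id') || uuid.create();
                element.attr('id', id);

                element.bind('dragstart', function (event) {
                    // jqLite may hand us the raw DOM event, jQuery wraps it
                    (event.originalEvent || event).dataTransfer.setData('text', id);
                    $rootScope.$emit('STEAM-DRAG-STRT');
                });
                element.bind('dragend', function () {
                    $rootScope.$emit('STEAM-DRAG-STOP');
                });
            }
        };
    }]);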

The drop Objective
Before we look at how the drop directive is written, there has to be some clarity on how to go about writing it. The drag and drop directives must be carefully monitored when implemented, since the entire process is interlinked: the elements must be properly caught and the respective events properly bound. While testing, we should check whether the events emit the proper actions on the elements.

$rootScope.$on("STEAM-DRAG-STRT", function () {
    var domelm = document.getElementById(id);
    angular.element(domelm).addClass("drag-target");
});
$rootScope.$on("STEAM-DRAG-STOP", function () {
    var domelm = document.getElementById(id);
    angular.element(domelm).removeClass("drag-target");
    angular.element(domelm).removeClass("drag-over");
});

There are alternatives for replicating this mechanism in jQuery and also in plain JavaScript, but the implementation is only as effective as the ease with which the DOM can be manipulated.

That's it folks,
Happy hacking!!


PSLab Code Repository and Installation

PSLab is a new addition to the FOSSASIA Science Lab. This tiny pocket science lab provides an array of necessary equipment for doing science and engineering experiments. It can function as an oscilloscope, waveform generator, frequency counter, programmable voltage and current source, and also as a data logger.

[Image: new front panel design, size 62 mm × 78 mm × 13 mm]

The control and measurement functions are written in the Python programming language, and pyqtgraph is used as the plotting library. We are now working on Qt-based GUI applications for various experiments.

The following are the code repositories of PSLab: pslab-apps and pslab.

Installation

To install PSLab on a Debian-based GNU/Linux system, the following dependencies must be installed.

Dependencies
============
PyQt 4.7+, PySide, or PyQt5
python 2.6, 2.7, or 3.x
NumPy, Scipy
pyqt4-dev-tools          #for pyuic4
Pyqtgraph                #Plotting library
pyopengl and qt-opengl   #for 3D graphics
iPython-qtconsole        #optional
Now clone both repositories, pslab-apps and pslab, as shown in the example below.
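For example (assuming both repositories live under the fossasia GitHub organization):

$ git clone https://github.com/fossasia/pslab-apps
$ git clone https://github.com/fossasia/pslab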

Libraries must be installed in the following order

1. pslab-apps

2. pslab

To install, cd into the directories

$ cd <SOURCE_DIR>

and run the following (for both the repos)

$ sudo make clean
$ sudo make 

$ sudo make install

Now you are ready with the PSLab software on your machine 🙂

For the main GUI (Control panel), you can run Experiments from the terminal.

$ Experiments

If the device is not connected the following splash screen will be displayed.

[Image: splash screen – device not connected]

After clicking OK, you will get the control panel with menus for Experiments, Controls, Advanced Controls, Help, etc. (Experiments cannot be accessed unless the device is connected.)

[Image: control panel – device not connected]

Below are the splash screen and the control panel shown when PSLab is connected to the PC.

[Image: splash screen – PSLab connected]
[Image: control panel – main GUI]

From this control panel one can access controls, help files, and various experiments through independent GUIs written for each experiment.

You can help
------------

Please report a bug/install errors here 
Your suggestions to improve PSLab are welcome :)

What Next:

We are now working on a general-purpose experiment designer. It will allow selecting controls and channels and then generate a spreadsheet; the columns from this spreadsheet can be selected and plotted.

 


CommonsNet – WiFi Standards

Introduction

There is no doubt that WiFi is a crucial technology that most of us use every day. But have you ever noticed that your WiFi router has a few different numbers and letters tagged on the end? These designations describe different properties of the WiFi, like speed, supported devices, range, and frequency, and they make up the WiFi standards. If you know which standard you have, you can tell a lot about your wireless connection and use it the way you want. The CommonsNet team focuses on providing users with transparent WiFi information, so let's talk about the standards today.

WiFi Standards

802 – this number comes from the naming system used by networking standards; WiFi uses 802.11. All WiFi varieties carry this number, followed by a letter or two which, as mentioned above, is very useful for consumers because it helps identify WiFi properties like maximum speed, range, and required devices.

A specific router may support not only a single standard but multiple standards at the same time, in order to ensure compatibility with different pieces of hardware and networks.

 

802.11

This standard was created in 1997 by the Institute of Electrical and Electronics Engineers (IEEE), in a frequency band also used for medical and industrial purposes. Unfortunately, 802.11 supported a maximum network bandwidth of only 1 or 2 Mbps – not fast enough for most applications – so it was rapidly supplanted and is no longer used.

802.11b

This standard became the most commonly adopted in consumer devices, especially because of its low cost. It supports bandwidth up to 11 Mbps. 802.11b uses a radio signaling frequency of 2.4 GHz; thanks to this, its signal has good range – about 100 m – and is not easily obstructed. However, because it works at 2.4 GHz, it may interfere with home appliances.

802.11a

This standard supports bandwidth up to 54 Mbps and uses signals in a regulated frequency band around 5 GHz. There is no doubt that this higher frequency shortens the range and requires more power to work correctly; it also means the signal has more difficulty penetrating obstructions like walls, doors, and windows. Because it works on a different frequency, this standard is incompatible with 802.11b.

802.11g

In 2002 products supporting a new standard emerged on the market. It is actually the most popular WiFi standard, combining the best of both 802.11a and 802.11b: it supports bandwidth up to 54 Mbps and uses the 2.4 GHz frequency for greater range. It is compatible with other standards, but when used with older devices the speed drops roughly fourfold.

802.11n

This standard was designed to improve on 802.11g by utilizing multiple wireless signals and antennas (so-called MIMO technology) instead of one. It provides up to 600 Mbps of network bandwidth, though in practice it usually reaches up to 150 Mbps. 802.11n also offers better range than earlier WiFi standards due to its increased signal intensity, and it is backward-compatible with 802.11b/g gear.

802.11ac

The newest generation of WiFi signaling in popular use utilizes dual-band wireless technology, supporting simultaneous connections on both the 2.4 GHz and 5 GHz bands. It offers compatibility with 802.11b/g/n and bandwidth up to 1300 Mbps on the 5 GHz band plus up to 450 Mbps on 2.4 GHz.

Summary

As the descriptions above show, there are different WiFi standards, which differ from each other in speed, range, and device support. Some of them are no longer current, while others can still be used simultaneously. You can choose the one best suited to your needs.

As the CommonsNet team, we believe that we will create a great CommonsNet website which helps users be aware of the properties of the WiFi they have or use and, if necessary, improve it to provide and share transparent wireless connections of the best quality with other people.

Sources:
http://compnetworking.about.com/cs/wireless80211/a/aa80211standard.htm
http://www.androidauthority.com/wifi-standards-explained-802-11b-g-n-ac-ad-ah-af-666245/


Deploying PHP and Mysql Apps on Heroku

This tutorial will help you deploy a PHP and MySQL app on Heroku.

Prerequisites

  1. a free Heroku account.
  2. PHP installed locally.
  3. Composer installed locally.

Set up

In this step you will install the Heroku Toolbelt. This provides you access to the Heroku Command Line Interface (CLI), which can be used for managing and scaling your applications and add-ons.

To install the Toolbelt on Ubuntu/Debian:

 wget -O- https://toolbelt.heroku.com/install-ubuntu.sh | sh

After installing Toolbelt you can use the heroku command from your command shell.

$ heroku login
Enter your Heroku credentials.
Email: dz@example.com
Password:
...

Authenticating is required to allow both the heroku and git commands to operate.

Prepare the app

In this step, you will prepare the fossasia/engelsystem application for deployment.

To clone the sample application so that you have a local version of the code that you can then deploy to Heroku, execute the following commands in your local command shell or terminal:

$ git clone --recursive https://github.com/fossasia/engelsystem.git
$ cd engelsystem/

If it is not a git repository, follow these steps:

$ cd engelsystem/
$ git init

You now have a functioning git repository that contains a simple application. Next we need to add a composer.json file; make sure you have installed Composer.

The Heroku PHP Support will be applied to applications only when the application has a file named composer.json in the root directory. Even if an application has no Composer dependencies, it must include at least an empty ({}) composer.json in order to be recognized as a PHP application.

When Heroku recognizes a PHP application, it will respond accordingly during a push:

$ git push heroku master
-----> PHP app detected

Define a Procfile

A Procfile is a text file in the root directory of your application that defines process types and explicitly declares what command should be executed to start your app. Your Procfile will look something like this for engelsystem:

web: vendor/bin/heroku-php-apache2 public/

Since our public folder contains the JavaScript, CSS, images, and index.php file, the Procfile defines the Apache web server with that directory used as the document root.

Create the app

In this step you will create the app on Heroku.

Create an app on Heroku, which prepares Heroku to receive your source code:

$ heroku create
Creating sharp-rain-871... done, stack is cedar-14
http://sharp-rain-871.herokuapp.com/ | https://git.heroku.com/sharp-rain-871.git
Git remote heroku added

When you create an app, a git remote (called heroku) is also created and associated with your local git repository.

Heroku generates a random name (in this case sharp-rain-871) for your app, or you can pass a parameter to specify your own app name.

But once you open http://sharp-rain-871.herokuapp.com/, you will not be able to view the site if it needs a database connection. We need to migrate the database using ClearDB.

ClearDB MySQL

Migrating database

Creating your ClearDB database

To create your ClearDB database, simply type the following Heroku command:

$ heroku addons:create cleardb:ignite
-----> Adding cleardb to sharp-mountain-4005... done, v18 (free)

This will automatically provision your new ClearDB database for you and will return the database URL to access it.

You can retrieve your new ClearDB database URL by issuing the following command:

$ heroku config | grep CLEARDB_DATABASE_URL
CLEARDB_DATABASE_URL: mysql://bda37eff166954:69445d28@us-cdbr-iron-east-04.cleardb.net/heroku_3c94174e0cc6cd8?reconnect=true

After getting the ClearDB database URL, we can import the tables with the following command:

$ mysql -u bda37eff166954 -h us-cdbr-iron-east-04.cleardb.net -p heroku_3c94174e0cc6cd8

Then you will get a MySQL prompt connected to the database, and you can import the tables using the following commands:

mysql> source [path to engelsystem]/engelsystem/db/install.sql;
mysql> source [path to engelsystem]/engelsystem/db/update.sql;
mysql> exit;

Now the tables are migrated successfully.

Declare app dependencies

Since we have added the MySQL database, we need to add the corresponding dependencies as well.

{
  "require": {
    "ext-mysql": "*",
    "ext-gettext": "*"
  },
  "require-dev": {
    "heroku/heroku-buildpack-php": "*"
  }
}

The composer.json file specifies the dependencies that should be installed with your application. When an app is deployed, Heroku reads this file and installs the appropriate dependencies into the vendor directory.

Run the following command to install the dependencies, preparing your system for running the app locally:

$ composer update
Loading composer repositories with package information
Updating dependencies (including require-dev)
  - Installing psr/log (1.0.0)
    Loading from cache
...
Writing lock file
Generating autoload files

You should always check composer.json and composer.lock into your git repo. The vendor directory should be included in your .gitignore file.

Using ClearDB with PHP

Connecting to ClearDB from PHP merely requires the parsing of the CLEARDB_DATABASE_URL environment variable and passing the extracted connection information to your MySQL library of choice, e.g. MySQLi:

We need to set this up in the config/config.php file:

$url = parse_url(getenv("CLEARDB_DATABASE_URL"));
$server = $url["host"];
$username = $url["user"];
$password = $url["pass"];
$db = substr($url["path"], 1);

$config = array(
    'host' => $server ,
    'user' => $username ,
    'pw' => $password,
    'db' => $db 
);

Deploy the app

All the steps are completed, so now we deploy. Push the code to Heroku; to push the development branch, use the following commands:

$ git add -A
$ git commit -m "heroku deploy"
$ git push heroku development:master
Initializing repository, done.
Counting objects: 7, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (7/7), 1.66 KiB | 0 bytes/s, done.
Total 7 (delta 0), reused 0 (delta 0)

-----> PHP app detected
-----> Setting up runtime environment...
       - PHP 5.5.12
       - Apache 2.4.9
       - Nginx 1.4.6
-----> Installing PHP extensions:
       - opcache (automatic; bundled, using 'ext-opcache.ini')
-----> Installing dependencies...
       Composer version 64ac32fca9e64eb38e50abfadc6eb6f2d0470039 2014-05-24 20:57:50
       Loading composer repositories with package information
       Installing dependencies from lock file
...
         - Installing monolog/monolog (1.9.1)
       Generating optimized autoload files
-----> Building runtime environment...
-----> Discovering process types
       Procfile declares types -> web
-----> Compressing... done, 57.4MB
-----> Launching... done, v3
       http://sharp-rain-871.herokuapp.com/ deployed to Heroku

To git@heroku.com:sharp-rain-871.git
 * [new branch]      development -> master

Now your app is successfully deployed; you can view it at http://sharp-rain-871.herokuapp.com/.

Engelsystem

Development: https://github.com/fossasia/engelsystem

Issues/Bugs: Issues

 

 


Code Quality in the knittingpattern Python Library

In our Google Summer of Code project, part of our work is to bring knitting into the digital age. We are Kirstin Heidler and Nicco Kunzmann. Our knittingpattern library aims to be the exchange and conversion format between different representations of knit work: hand-knitting instructions, machine commands for different machines, and SVG schemata.

[Image: the schema generated by the knittingpattern library]
[Image: the original pattern schema Cafe]
The image above was generated by this Python code:

import knittingpattern, webbrowser
example = knittingpattern.load_from().example("Cafe.json")
webbrowser.open(example.to_svg(25).temporary_path(".svg"))

So much for the context. Now for the quality tools we use:


Continuous integration

We use Travis CI [FOSSASIA] to automatically upload packages for a specific git tag. The Travis build runs under Python 3.3 to 3.5. It first builds the package and then installs it with its dependencies. To upload tags automatically, one can configure Travis, preferably with the command line interface, to save the username and password for the Python Package Index (PyPI).[TravisDocs] Our process for releasing a new version is the following:

  1. Increase the version in the knitting pattern library and create a new pull request for it.
  2. Merge the pull request after the tests pass.
  3. Pull and create a new release with a git tag using
    setup.py tag_and_deploy

Travis then builds the new tag and uploads it to PyPI.

With this we have basic quality assurance. Pull requests need to pass all tests before they can be merged, and Travis can be configured to automatically reject a request with errors.

Documentation Driven Development

As mentioned in a blog post, documentation-driven development was something worth checking out. In our case that means writing the documentation first, then the tests, and then the code.

Writing the documentation first means thinking in the space of the mental model you have for the code. It defines the interfaces you would be happy to use. A lot of edge cases can be thought of at this point.

When writing the tests, they are often split up and no longer represent the flow of thought you had when thinking about your wishes. Tests can be seen as the glue between the code and the documentation. As with writing code to pass the tests, in the conversation between the tests and the documentation I find out things I have forgotten.

When writing the code in a test-driven way, another conversation starts. I call implementing the tests a conversation because the tests tell the code that it should be different, and the code tells the tests about their inconsistencies, like misspellings and bloated interfaces.

With writing documentation first, we have the chance to have two conversations about our code, in spoken language and in code. I like it when the code hears my wishes, so I prefer to talk a bit more.

Testing the Documentation

Our documentation is hosted on Read the Docs. It should have these properties:

  1. Every module is documented.
  2. Everything that is public is documented.
  3. The documentation is syntactically correct.

These are qualities that can be tested, so they are tested; the code cannot be deployed if it does not meet these standards. We use Sphinx for building the docs, which makes it possible to test these properties in this way:

  1. For every module there exists a .rst file which automatically documents the module with autodoc (see the sketch after this list).
  2. A Sphinx build outputs a list of objects that should be covered by documentation but are not.
  3. Sphinx outputs warnings throughout the build.
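As an illustration, such a per-module .rst file is essentially an autodoc stub; the module name here is hypothetical:

knittingpattern.Parser
======================

.. automodule:: knittingpattern.Parser
   :members:
   :undoc-members:
   :show-inheritance: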

Testing our documentation allows us to keep it at a higher quality. Many more tests could be imagined, but the basic ones already help.

Code Coverage

Codeclimate.com makes it possible to measure code coverage and see how well we are doing. It points us to the files we need to work on when we want to improve the quality of the package.

Landscape

Landscape is also free for open-source projects. It can give hints about where to improve next, and it is possible to fail pull requests if the quality decreases. It shows code duplication and can run pylint. Currently, most of the style problems arise from undocumented tests.

Summary

When we started with stricter quality assurance, the question arose whether it would only slow us down. Now, we have learned to write properly styled PEP 8 code and begin to do automatically what pylint demands. High test coverage allows us to change the underlying functionality without changing the interface, and without fear that we may break something irrecoverably. I feel like a burden has been taken from me by all those free tools for open-source software, which spare me the time to set up quality assurance.

Future Work

In the future we would also like to create a user interface. User interfaces are sometimes hard to test, so we plan not to put it into the package but to build it on top of the package.


Adding more functions to command line interface of steam-shell

sTeam allows the creation of groups, as well as joining, leaving, and listing them. However, these functions were only available in the web interface, and my task involved adding them to the command line interface, that is, steam-shell. The task sounded difficult because it involved coding new commands for the shell and performing actions that had never been done from the shell before. This didn't turn out to be true.

Issue: https://github.com/societyserver/sTeam/issues/68

I began by using and understanding the group functions in the web interface. First I took up the command for the creation of groups. I listed the attributes needed by referring to the web interface and then extended the create command already present in the shell to also create groups. The task turned out to be easier than I thought, thanks to the elegance of Pike and the modularity of the sTeam server. The code for creating an object was already present in the command, and I only had to pass the type of object, that is group, and write a few lines to accept the required attributes.

The next command was for listing groups. For this I created a new command called 'group', and inside the function called by group I switch-cased on the next sub-command to find out whether it was join, leave, or list. After that I wrote the code to perform the action for each command in its respective case. This is where the modularity of sTeam helped me a lot: the core portion of these functions turned out to be one-liners.

Code to get a list of all groups:

array(object) groups = _Server->get_module("groups")->get_groups();

Code to join a group:

int result = group->add_member(me);

Code to leave a group:

group->remove_member(me);


Soon all my commands were ready. I tested them and everything seemed to be working fine, so I pushed my changes and created a new pull request. It was after this that Martin asked me to change the interface. He introduced me to MUDs (Multi User Dungeons), a type of text-based game. The interface for sTeam is based on these games, and they are also an inspiration for the entire project: just as MUDs create a virtual space, we at sTeam create a virtual office space. This helped me understand not only the interface but also the project. I will talk more about this in my next blog. Anyway, the standard interface is

<action> <object> <additional attributes>

I changed the interface, and now the syntax for the commands is:

Create a group: create group <group_name>


 siddhant@omega:~/Documents/sTeam/tools$ ./steam-shell.pike
 Connecting to sTeam server...
 Password for root@127.0.0.1 [steam]: *****
 Pike v7.8 release 866 running Hilfe v3.5 (Incremental Pike Frontend)
 /home/root> create group group_test
 How would you describe it?^Jtest group
 Subgroup of?^J
 /home/root>

List groups: list groups


 siddhant@omega:~/Documents/sTeam/tools$ ./steam-shell.pike
 Connecting to sTeam server...
 Password for root@127.0.0.1 [steam]: *****
 Pike v7.8 release 866 running Hilfe v3.5 (Incremental Pike Frontend)
 /home/root> list groups


Here is a list of all groups
abcd Admin coder Everyone Groups group_test
help justnow PrivGroups sTeam testg testg;
testGroup testing test_group WikiGroups


 /home/root>
 

Join a group: join group <group_name>


 siddhant@omega:~/Documents/sTeam/tools$ ./steam-shell.pike
 Connecting to sTeam server...
 Password for root@127.0.0.1 [steam]: *****
 Pike v7.8 release 866 running Hilfe v3.5 (Incremental Pike Frontend)
 /home/root> join group group_test
 Joined group group_test
 /home/root>
 

Leave a group: leave group <group_name>


 siddhant@omega:~/Documents/sTeam/tools$ ./steam-shell.pike
 Connecting to sTeam server...
 Password for root@127.0.0.1 [steam]: *****
 Pike v7.8 release 866 running Hilfe v3.5 (Incremental Pike Frontend)
 /home/root> leave group group_test
 /home/root>
 

Solution: https://github.com/societyserver/sTeam/pull/77
