Continuous Integration and Automated Testing for Engelsystem

Every software development group tests its products, yet delivered software always has defects. Test engineers strive to catch them before the product is released, but they always creep in, and they often reappear even with the best manual testing processes. Using automated testing is the best way to increase the effectiveness, efficiency and coverage of your software testing.

Manual software testing is performed by a human sitting in front of a computer carefully going through application screens, trying various usage and input combinations, comparing the results to the expected behavior and recording their observations. Manual tests are repeated often during development cycles for source code changes and other situations like multiple operating environments and hardware configurations.

Continuous integration (CI) has emerged as one of the most efficient ways to develop code. But testing has not always been a major part of the CI conversation.

In some respects, that’s not surprising. Traditionally, CI has been all about speeding up the coding, building, and release process. Instead of having each programmer write code separately, integrate it manually, and then wait until the next daily or weekly build to see if the changes broke anything, CI lets developers code and compile on a virtually continuous basis. It also means developers and admins can work together seamlessly since the programming and build processes are always in sync.

Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

By integrating regularly, you can detect errors quickly, and locate them more easily.

Solve problems quickly

Because you’re integrating so frequently, there is significantly less back-tracking to discover where things went wrong, so you can spend more time building features.

Continuous Integration is cheap. Not continuously integrating is costly. If you don’t follow a continuous approach, you’ll have longer periods between integrations. This makes it exponentially more difficult to find and fix problems. Such integration problems can easily knock a project off-schedule, or cause it to fail altogether.

Continuous Integration brings multiple benefits to your organization:

  • Say goodbye to long and tense integrations
  • Increase visibility which enables greater communication
  • Catch issues fast and nip them in the bud
  • Spend less time debugging and more time adding features
  • Proceed with the confidence you’re building on a solid foundation
  • Stop waiting to find out if your code’s going to work
  • Reduce integration problems allowing you to deliver software more rapidly

“Continuous Integration doesn’t get rid of bugs, but it does make them dramatically easier to find and remove.”

– Martin Fowler, Chief Scientist, ThoughtWorks

Continuous Integration is backed by several important principles and practices.

Practices in Continuous Integration:

  • Maintain a single source repository
  • Automate the build
  • Make your build self-testing (see the sketch after this list)
  • Every commit should build on an integration machine
  • Keep the build fast
  • Test in a clone of the production environment
  • Make it easy for anyone to get the latest executable
  • Everyone can see what’s happening
  • Automate deployment
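
To illustrate the "self-testing" practice above (a sketch, not Engelsystem code): wire the project's whole test suite to a single command, for example an npm test script pointing at a file like this, so the CI server needs to run only one command to verify a build.

// test.js - minimal self-testing sketch; add() is a hypothetical unit under test.
const assert = require('assert');

function add(a, b) {
  return a + b;
}

// A failing assertion exits non-zero, which fails the CI build.
assert.strictEqual(add(2, 2), 4);
console.log('All tests passed.');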

How to do Continuous Integration:

  • Developers check out code into their private workspaces.
  • When done, commit the changes to the repository.
  • The CI server monitors the repository and checks out changes when they occur.
  • The CI server builds the system and runs unit and integration tests.
  • The CI server releases deployable artifacts for testing.
  • The CI server assigns a build label to the version of the code it just built.
  • The CI server informs the team of the successful build.
  • If the build or tests fail, the CI server alerts the team.
  • The team fixes the issue at the earliest opportunity.
  • Continue to continually integrate and test throughout the project.
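
The steps above are exactly what a CI server automates for you. Here is a deliberately tiny, illustrative sketch of that loop; real servers such as Travis or CircleCI use webhooks, clean build machines and richer notifications, and this sketch assumes a project whose self-testing build runs with npm test:

// ci-loop.js - illustrative sketch of the check-out/build/report cycle.
const { execSync } = require('child_process');

function integrate() {
  execSync('git pull'); // check out the latest changes
  try {
    execSync('npm test', { stdio: 'inherit' }); // build and run the self-tests
    console.log('Build green: label the build and publish artifacts here.');
  } catch (err) {
    console.error('Build red: alert the team so the issue is fixed early.');
  }
}

setInterval(integrate, 5 * 60 * 1000); // poll the repository every 5 minutes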

The CI services used in Engelsystem are as follows:

  • Travis-CI: Travis CI is a hosted, distributed continuous integration service used to build and test software projects hosted on GitHub. It is integrated using the .travis.yml file in the root folder.
    language: php
    php:
    - '5.4'
    - '5.5'
    - '5.6'
    - '7.0'
    script: cd test && phpunit
  • Nitpick-CI: Automatic comments on PSR-2 violations in one click, so your team can focus on better code review. It requires one click to integrate it with the repository.
  • Circle-CI: CircleCI was founded in 2011 with the mission of giving every developer state-of-the-art automated testing and continuous integration tools. It is integrated using a circle.yml file in the root folder of the repository.
    machine:
      php:
        version: 5.4.5
    deployment:
      master:
        branch: master
        owner: fossasia
        commands:
          - ./deploy_master.sh
    dependencies:
      pre:
        - curl -s http://getcomposer.org/installer | php
        - php composer.phar install -n
        - sed -i 's/^;//' ~/.phpenv/versions/$(phpenv global)/etc/conf.d/xdebug.ini

    test:
      post:
        - php test/
        - bash <(curl -s https://codecov.io/bash)
  • Codacy: Check code style, security, duplication, complexity and coverage on every change while tracking code quality throughout your sprints.

Development: https://github.com/fossasia/engelsystem
Issues/Bugs: https://github.com/fossasia/engelsystem/issues


sTeam REST API

sTeam (societyserver) aims to be a platform for developing collaborative applications.
sTeam server project repository: sTeam.
sTeam-REST API repository: sTeam-REST

REST Services

REST is the software architectural style of the World Wide Web. REST (Representational State Transfer) was introduced by Roy Fielding in his doctoral dissertation in 2000. Its purpose is to promote performance, scalability, simplicity, modifiability, visibility, portability, and reliability. A RESTful system has a client/server relationship with a uniform interface and is stateless. REST is most commonly associated with HTTP, but it is not strictly tied to it.

REST Principles

  • Resources: Each and every component is a resource. A resource is accessed through a common interface using the standard HTTP methods.
  • Messages use HTTP methods like GET, POST, PUT, and DELETE.
  • Resource identification through URI: resources are identified using URIs and represented using JSON or XML.
  • Stateless interactions take place between the server and the client: no request context is saved on the server; the client maintains the state of the session.

HTTP methods

The CRUD (create, retrieve, update and delete) operations are performed using the HTTP methods.

GET

It is used to retrieve information. A GET request executed any number of times with the same parameters returns the same results; this makes it idempotent. Partial or conditional requests can be sent. It is a read-only type of operation.

Retrieve a list of users:

GET /api.example.com/UserService/users

POST

POST is usually used to create a new entity, but it can also be used to update an existing one. The server processes the entity provided in the request at the given URI.

Create a new user with an ID 2:

POST /api.example.com/UserService/users/2

PUT

A PUT request is idempotent: executing the same request any number of times will not change the output. PUT can be used to create or update an entity.

Modify the user with an ID of 1:

PUT /api.example.com/UserService/users/1

PATCH

PATCH requests update only the specified fields of an entity. Unlike PUT, PATCH is not guaranteed to be idempotent.

Modify a user with an id of 1:

PATCH /api.example.com/UserService/users/1

DELETE

It removes the resource. DELETE can be asynchronous or a long-running request, so the resource can be removed immediately or at a later point in time.

Delete a user with an ID of 1:

DELETE /api.example.com/UserService/users/1
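
To see these method semantics side by side, here is a small illustrative sketch in Node.js 18+ (which ships a global fetch). The UserService host and the user fields are the hypothetical examples from above, not a real API:

const base = 'https://api.example.com/UserService';

async function demo() {
  // GET: read-only and idempotent; repeating it changes nothing.
  console.log(await (await fetch(`${base}/users`)).json());

  // POST: create a new user with an ID of 2.
  await fetch(`${base}/users/2`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Alice' }),
  });

  // PUT: idempotent; sending the same full representation twice
  // leaves the resource in the same state.
  await fetch(`${base}/users/1`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ id: 1, name: 'Bob' }),
  });

  // PATCH: update only the specified fields.
  await fetch(`${base}/users/1`, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Bob B.' }),
  });

  // DELETE: remove the resource, possibly at a later point in time.
  await fetch(`${base}/users/1`, { method: 'DELETE' });
}

demo().catch(console.error);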

 sTeam-REST API

Installing and activating the REST API

The REST API is developed as an application inside the sTeam server. This simplifies development quite a lot, as we don't need to restart the server for every change; instead, just the API code gets updated and reloaded. It may eventually be integrated into the core, however the long-term plan is actually to move functionality out of the core, to make development easier.

To get the current version of the API, clone the steam-rest repo into your home directory or any place where you keep your development repos. Then change to the tools directory of your installation and run import-from-git.

git clone https://github.com/societyserver/steam-rest
cd steam-rest
git checkout origin/rest.pike
export steamrest=`pwd`
cd /usr/local/lib/steam/tools
./import-from-git.pike -u root $steamrest /

Note: The new import-from-git.pike script supports importing documents of all mime types.

It is important that the first import is done as root because the API code needs to run with root privileges and it will only do that if the object that holds the source is created as root.

Once the API code is loaded there are just a few tweaks needed to make it work.

We need to fix the mime-type, as the import script is not doing that yet.

OBJ("/sources/rest.pike")->set_attribute("DOC_MIME_TYPE", "source/pike");

Changing the mime type will change the class of the rest api script from Document to DocLpc.

> OBJ("/sources/rest.pike");                                               
(1) Result: 127.0.0.1:1900/rest.pike(#840,/classes/Document,17,source/pike)
> OBJ("/sources/rest.pike");                                               
(2) Result: 127.0.0.1:1900/rest.pike+(#840,/classes/DocLpc,529,source/pike,0 Instances, ({  }))

This takes a moment; check the type a few times until it's done. Then instantiate an object from the source, give it a proper name, and move it to the /scripts/ container:

object rest = OBJ("/sources/rest.pike")->provide_instance();
rest->set_attribute("OBJ_NAME", "rest.pike");
rest->move(OBJ("/scripts/"));

Instantiating the object needs to be done as sTeam-root, in order for it to have permissions to run on behalf of other users.

Once this is done you are ready to start using the API.

sTeam-REST API tests

The project contains a set of examples and tests for the RESTful API for the sTeam server.

The code is written in CoffeeScript and needs Node.js only for the CoffeeScript translation. Deployment can be done as static JavaScript files and does not need any kind of dynamic server for the front-end. The back-end is a RESTful API written for the sTeam server, as used by steam.realss.com.

Development instructions

step 1: install node.js

http://nodejs.org/download/

step 2: clone the repository

git clone https://github.com/societyserver/steam-rest

step 3: install node packages:

npm install

This installs all dependencies (including coffee) for our project into the project's node_modules directory, based on the package.json file.

step 4: start the server

node_modules/.bin/coffee scripts/server.coffee

but for convenience we can install coffee in the global node environment:

npm install -g coffee-script

so we can just say

coffee scripts/server.coffee

if the server is working you’ll see:

Listening on port 7000

Testing

FrisbyJS is used to test the API. It runs through Jasmine and is based on Node.js.

Once you have nodejs installed, run the following statement to install Frisby and Jasmine:

npm install -g jasmine-node frisby

Then execute the test by:

cd project/directory
jasmine-node test/
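
For reference, a minimal Frisby spec in the classic frisby 0.x style used with jasmine-node looks like the sketch below. The /users route and the expected response are hypothetical and only illustrate the shape of such a test:

// spec/users_spec.js - illustrative only; the route is hypothetical.
var frisby = require('frisby');

frisby.create('GET the list of users')
  .get('http://localhost:7000/users')
  .expectStatus(200)
  .expectHeaderContains('content-type', 'application/json')
  .toss();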

The karma testing framework is also used for testing the sTeam REST API.

There were some inherent issues with the test framework which were addressed.

Each fix, with its GitHub issue and pull request:

  • Update Readme.md (Update Readme, PR-2)
  • Add javascript dependencies (Issue-4, PR-6)
  • Add node dependencies (Issue-5, PR-7)
  • Add angular-mocks.js script for testing the REST services (Issue-8, PR-9)

The project dependencies were not met, and this resulted in errors when the project was run on localhost. The angular-ui-router, angular-bootstrap and bootstrap JS frameworks were not installed in the node modules of the project. As a result, the bower.json file was modified to include these dependencies.

bower.json

{
  "name": "bower",
  "version": "0.1",
  "private": true,
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "test",
    "tests"
  ],
  "dependencies": {
    "angular": "",
    "angular-route": "~1.4.8",
    "angular-ui-router": "",
    "angular-bootstrap": "",
    "bootstrap": ""
  }
}

The node dependencies karma, frisby and jasmine-node were included in package.json. These are installed when npm install is executed.

package.json

{
  "name": "TechGrind",
  "version": "0.1.1",
  "private": true,
  "dependencies": {
    "express": "",
    "coffee-script": "",
    "morgan": "",
    "compression": "",
    "method-override": "",
    "body-parser": "",
    "serve-static": "",
    "errorhandler": "",
    "bower": "",
    "jasmine-node": "",
    "frisby": "",
    "karma": ""
  },
  "production_dirs": {
    "coffee_src": "src/",
    "src": "app/",
    "dest": "app_production/"
  },
  "devDependencies": {
  },
  "scripts": {
    "postinstall": "bower install"
  }
}

Feel free to explore the repository. Suggestions for improvements are welcome.

Check out the FOSSASIA Ideas page for more information on projects supported by FOSSASIA.


Sending Email using Sendgrid API


One of the important features when writing server-side code for a website or web application is sending emails. But how do we send them? There are different ways and packages with which you can set up SMTP ports and send emails. So why SendGrid specifically? Because along with an SMTP relay, SendGrid also allows you to send emails using its Web API, which makes the work much easier. Here, we will discuss using the Web API.
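
As a first taste of the Web API (a hedged sketch, not SendGrid's official example): a single authenticated POST to the v3 mail/send endpoint is enough to send an email. The API key and both addresses below are placeholders:

// Node.js 18+ (global fetch). SENDGRID_API_KEY and the addresses are placeholders.
const message = {
  personalizations: [{ to: [{ email: 'recipient@example.com' }] }],
  from: { email: 'sender@example.com' },
  subject: 'Hello from the Web API',
  content: [{ type: 'text/plain', value: 'Sent without touching SMTP ports.' }],
};

fetch('https://api.sendgrid.com/v3/mail/send', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ' + process.env.SENDGRID_API_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(message),
}).then(function (res) {
  // SendGrid answers 202 Accepted when the mail is queued.
  console.log('SendGrid responded with status', res.status);
});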


Drag and Drop directives in Angular Js | sTeam-web-UI | sTeam


The recent developments with the sTeam web interface involved me writing drag and drop directives to support drag/move actions in the workarea and the workspace. The concept here is to provide the user with the ability to arrange rooms and documents swiftly and easily.

The idea behind it:
In sTeam-web-UI the concept of rooms/documents is implemented using the workareaCtrl. After creating a document or room, the created objects appear in the workarea. In the future, the items in the workarea can be sorted and searched for; adding support for drag movements makes the UI/UX much more pleasant.

Implementation strategy:
First we need to create directives in Angular that support the desired actions, so that an action triggered in the front end results in the corresponding change. In order to link those actions we must write directives which properly catch the emitted events.

The drag Objective
The way to implement the drag objective is to get the element which is selected for the drag option and then emit the event in order to complete the drag action. So we have to go the traditional way to work around this.

// get the element first and make it draggable
angular.element(element).attr("isDraggable", true);
var id = angular.element(element).attr("id");
if (!id) {
  id = uuid.create();
  angular.element(element).attr("id", id);
}

Now we emit the events.

// emit events
element.bind("startDrag", function (event) {
  event.originalEvent.dataTransfer.setData('text', id);
  console.log("Starting to drag");
  $rootScope.$emit("STEAM-DRAG-STRT");
});
element.bind("stopDrag", function (event) {
  console.log('Stopping to drag');
  $rootScope.$emit("STEAM-DRAG-STOP");
});

If we observe the element-capturing part, we are just taking the ids of the elements in the DOM and then binding the respective element to an action. Be it startDrag or stopDrag, the idea is to store the id of the element and bind the respective action/event to it.

The drop Objective
Before we understand how the drop directive is written, there has to be some clarity on how to approach it. The drag and drop directives must be carefully monitored when implemented, since the entire process is interlinked. The elements must be properly caught and the respective events properly bound. While testing, we should check whether the events emit the proper actions to the elements.

$rootScope.$on("STEAM-DRAG-STRT", function () {
var domelm = document.getElementById(id);
angular.element(domelm).addClass("drag-target");
});
$rootScope.$on("STEAM-DRAG-STOP", function () {
var domelm = document.getElementById(id);
angular.element(domelm).removeClass("drag-target");
angular.element(domelm).removeClass("drag-over");
});

There are alternatives for replicating this mechanism in jQuery and also in plain JavaScript. But the implementation only pays off when the DOM can be manipulated easily.
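
For comparison, here is the same class toggling done with the native HTML5 drag events in plain JavaScript. This is a hedged sketch: the element id is hypothetical, while the drag-target/drag-over classes are the ones used above.

// Plain JavaScript equivalent; 'room-42' is a hypothetical element id.
var item = document.getElementById('room-42');
item.setAttribute('draggable', 'true');

item.addEventListener('dragstart', function (event) {
  event.dataTransfer.setData('text', item.id);
  item.classList.add('drag-target');
});

item.addEventListener('dragend', function () {
  item.classList.remove('drag-target');
  item.classList.remove('drag-over');
});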

That's it folks,
Happy Hacking !!


PSLab Code Repository and Installation

PSLab is a new addition to the FOSSASIA Science Lab. This tiny pocket science lab provides an array of necessary equipment for doing science and engineering experiments. It can function as an oscilloscope, waveform generator, frequency counter, programmable voltage and current source, and also as a data logger.

New front panel design
Size: 62 mm x 78 mm x 13 mm

The control and measurement functions are written in the Python programming language. Pyqtgraph is used as the plotting library. We are now working on Qt-based GUI applications for various experiments.

The following are the code repositories of PSLab.

Installation

To install PSLab on Debian based Gnu/Linux system, the following dependencies must be installed.

Dependencies
============
PyQt 4.7+, PySide, or PyQt5
python 2.6, 2.7, or 3.x
NumPy, Scipy
pyqt4-dev-tools          #for pyuic4
Pyqtgraph                #Plotting library
pyopengl and qt-opengl   #for 3D graphics
iPython-qtconsole        #optional

Now clone both the repositories, pslab-apps and pslab.

Libraries must be installed in the following order

1. pslab-apps

2. pslab

To install, cd into the directories

$ cd <SOURCE_DIR>

and run the following (for both the repos)

$ sudo make clean
$ sudo make 

$ sudo make install

Now you are ready with the PSLab software on your machine 🙂

For the main GUI (Control panel), you can run Experiments from the terminal.

$ Experiments

If the device is not connected the following splash screen will be displayed.

Device not connected

After clicking OK, you will get the control panel with menus for Experiments, Controls, Advanced Controls and Help etc. (Experiments can not be accessed unless the device is connected)


The splash screen and the control panel, when PSLab is connected to the pc.

PSLab connected
Control Panel – Main GUI

From this control panel one can access controls, help files and various experiments through independent GUI’s written for each experiment.

You can help
------------

Please report a bug/install errors here 
Your suggestions to improve PSLab are welcome :)

What Next:

We are now working on a general-purpose experiment designer. It will allow selecting controls and channels and then generate a spreadsheet. The columns from this spreadsheet can be selected and plotted.

 


CommonsNet – WiFi Standards

Introduction

There is no doubt that WiFi is a crucial technology that most of us use every day. But have you ever noticed on your wifi router that there are a few different numbers and letters tagged on the end? These designations describe different properties of the WiFi, like speed, supported devices, range and frequency, and they make up the WiFi standards. If you know which standard you have, you can tell a lot about your wireless connection and use it the way you want. The CommonsNet team focuses on providing users with transparent wifi information, so let's talk about these standards today.

WIFI Standards

802 – this strange number refers to the naming system used for networking standards. WiFi uses 802.11. All WiFi varieties have this number, followed by a letter or two which is very useful for consumers, because, as mentioned above, it helps to identify wifi properties like maximum speed, range and required devices.

A specific router may support not just a single standard but multiple standards at the same time. This ensures compatibility with different pieces of hardware and networks.

 

802.11

This standard was created in 1997 by the Institute of Electrical and Electronics Engineers (IEEE). It was used for medical and industrial purposes. Unfortunately, 802.11 supported a maximum network bandwidth of only 1 or 2 Mbps – not fast enough for most applications. Therefore this standard was rapidly supplanted and is no longer used.

802.11b

This standard became the most commonly adopted in consumer devices, especially because of its low cost. It supports bandwidth up to 11 Mbps. 802.11b uses a radio signaling frequency of 2.4 GHz, and thanks to this its signal has good range – about 100 m – and is not easily obstructed; but because it works at 2.4 GHz it may interfere with home appliances.

802.11a

This standard's bandwidth is up to 54 Mbps, and it signals in a regulated frequency range around 5 GHz. There is no doubt that this higher frequency shortens the range and needs more power to work correctly. It also means that the signal has more difficulty penetrating obstructions like walls, doors and windows. Because it works on a different frequency, this standard is incompatible with the 802.11b standard.

802.11g

In 2002 products supporting a new standard emerged on the market. It is actually the most popular WiFi standard. It combines the best of both 802.11a and 802.11b: it supports bandwidth up to 54 Mbps, and it uses the 2.4 GHz frequency for greater range. It is compatible with other standards, but when used with older devices the speed will be about 4 times slower.

802.11n

This standard was designed to improve on 802.11g by utilizing multiple wireless signals and antennas (called MIMO technology) instead of one. It provides up to 600 Mbps of network bandwidth, although in reality it usually reaches up to 150 Mbps. 802.11n also offers better range than earlier Wi-Fi standards due to its increased signal intensity, and it is backward-compatible with 802.11b/g gear.

802.11ac

The newest generation of Wi-Fi signaling in popular use utilizes dual-band wireless technology, supporting simultaneous connections on both the 2.4 GHz and 5 GHz bands. It offers compatibility with 802.11b/g/n and bandwidth of up to 1300 Mbps on the 5 GHz band plus up to 450 Mbps on 2.4 GHz.

Summary

As the descriptions above show, there are different wifi standards which differ from each other in speed, range and device support. Some of them are no longer in use, but others can still be used simultaneously. You can choose the one which best suits your needs.

As the CommonsNet team we believe that we will create a great CommonsNet website which helps users to be aware of the properties of the wifi they have or use, and if necessary improve it to provide and share with other people a transparent wireless connection of the best quality.

With support of http://compnetworking.about.com/cs/wireless80211/a/aa80211standard.htm and http://www.androidauthority.com/wifi-standards-explained-802-11b-g-n-ac-ad-ah-af-666245/


Deploying PHP and Mysql Apps on Heroku

This tutorial will help you deploy a PHP and MySQL app to Heroku.

Prerequisites

  1. a free Heroku account.
  2. PHP installed locally.
  3. Composer installed locally.

Set up

In this step you will install the Heroku Toolbelt. This gives you access to the Heroku Command Line Interface (CLI), which can be used for managing and scaling your applications and add-ons.

To install the Toolbelt on Ubuntu/Debian:

 wget -O- https://toolbelt.heroku.com/install-ubuntu.sh | sh

After installing Toolbelt you can use the heroku command from your command shell.

$ heroku login
Enter your Heroku credentials.
Email: dz@example.com
Password:
...

Authenticating is required to allow both the heroku and git commands to operate.

Prepare the app

In this step, you will prepare the fossasia/engelsystem application for deployment.

To clone the sample application so that you have a local version of the code that you can then deploy to Heroku, execute the following commands in your local command shell or terminal:

$ git clone --recursive https://github.com/fossasia/engelsystem.git
$ cd engelsystem/

If it is not already a git repository, follow these steps:

$ cd engelsystem/
$ git init

You now have a functioning git repository that contains a simple application. Now we need to add a composer.json file. Make sure you've installed Composer.

The Heroku PHP Support will be applied to applications only when the application has a file named composer.json in the root directory. Even if an application has no Composer dependencies, it must include at least an empty ({}) composer.json in order to be recognized as a PHP application.

When Heroku recognizes a PHP application, it will respond accordingly during a push:

$ git push heroku master
-----> PHP app detected

Define a Procfile

A Procfile is a text file in the root directory of your application that defines process types and explicitly declares what command should be executed to start your app. For engelsystem, your Procfile will look something like this:

web: vendor/bin/heroku-php-apache2 public/

Since our public folder contains the JavaScript, CSS, images and the index.php file, the Procfile defines an Apache web server with that directory as the document root.

Create the app

In this step you will create the app on Heroku.

Create an app on Heroku, which prepares Heroku to receive your source code:

$ heroku create
Creating sharp-rain-871... done, stack is cedar-14
http://sharp-rain-871.herokuapp.com/ | https://git.heroku.com/sharp-rain-871.git
Git remote heroku added

When you create an app, a git remote (called heroku) is also created and associated with your local git repository.

Heroku generates a random name (in this case sharp-rain-871) for your app, or you can pass a parameter to specify your own app name.

But once you open http://sharp-rain-871.herokuapp.com/ you will not be able to view the site if it makes database connections. We need to migrate the database using ClearDB.

ClearDB MySQL

Migrating database

Creating your ClearDB database

To create your ClearDB database, simply type the following Heroku command:

$ heroku addons:create cleardb:ignite
-----> Adding cleardb to sharp-mountain-4005... done, v18 (free)

This will automatically provision your new ClearDB database for you and will return the database URL to access it.

You can retrieve your new ClearDB database URL by issuing the following command:

$ heroku config | grep CLEARDB_DATABASE_URL
CLEARDB_DATABASE_URL: mysql://bda37eff166954:69445d28@us-cdbr-iron-east-04.cleardb.net/heroku_3c94174e0cc6cd8?reconnect=true

After getting the ClearDB database URL we can import the tables with the following command:

$mysql -u bda37eff166954 -h us-cdbr-iron-east-04.cleardb.net -p heroku_3c94174e0cc6cd8

Then you will get a mysql prompt with a connection to the database, and you can import the tables using the following commands:

mysql> source [path to engelsystem]/engelsystem/db/install.sql;
mysql> source [path to engelsystem]/engelsystem/db/update.sql;
mysql> exit;

Now the tables are migrated successfully.

Declare app dependencies

Since we have added the MySQL database, we need to declare the dependencies as well.

{
  "require": {
    "ext-mysql": "*",
    "ext-gettext": "*"
  },
  "require-dev": {
    "heroku/heroku-buildpack-php": "*"
  }
}

The composer.json file specifies the dependencies that should be installed with your application. When an app is deployed, Heroku reads this file and installs the appropriate dependencies into the vendor directory.

Run the following command to install the dependencies, preparing your system for running the app locally:

$ composer update
Loading composer repositories with package information
Updating dependencies (including require-dev)
  - Installing psr/log (1.0.0)
    Loading from cache
...
Writing lock file
Generating autoload files

You should always check composer.json and composer.lock into your git repo. The vendor directory should be included in your .gitignore file.

Using ClearDB with PHP

Connecting to ClearDB from PHP merely requires the parsing of the CLEARDB_DATABASE_URL environment variable and passing the extracted connection information to your MySQL library of choice, e.g. MySQLi:

We need to modify this in the config/config.php file:

$url = parse_url(getenv("CLEARDB_DATABASE_URL"));
$server = $url["host"];
$username = $url["user"];
$password = $url["pass"];
$db = substr($url["path"], 1);

$config = array(
    'host' => $server ,
    'user' => $username ,
    'pw' => $password,
    'db' => $db 
);

Deploy the app

All the steps are completed; now we need to deploy. Push the code to Heroku. For pushing the development branch we need to follow these commands:

$ git add -A
$ git commit -m "heroku deploy"
$ git push heroku development:master
Initializing repository, done.
Counting objects: 7, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (7/7), 1.66 KiB | 0 bytes/s, done.
Total 7 (delta 0), reused 0 (delta 0)

-----> PHP app detected
-----> Setting up runtime environment...
       - PHP 5.5.12
       - Apache 2.4.9
       - Nginx 1.4.6
-----> Installing PHP extensions:
       - opcache (automatic; bundled, using 'ext-opcache.ini')
-----> Installing dependencies...
       Composer version 64ac32fca9e64eb38e50abfadc6eb6f2d0470039 2014-05-24 20:57:50
       Loading composer repositories with package information
       Installing dependencies from lock file
...
         - Installing monolog/monolog (1.9.1)
       Generating optimized autoload files
-----> Building runtime environment...
-----> Discovering process types
       Procfile declares types -> web
-----> Compressing... done, 57.4MB
-----> Launching... done, v3
       http://sharp-rain-871.herokuapp.com/ deployed to Heroku

To git@heroku.com:sharp-rain-871.git
 * [new branch]      development -> master

Now your app is successfully deployed you can view it here http://sharp-rain-871.herokuapp.com/

Engelsystem

Development: https://github.com/fossasia/engelsystem

Issues/Bugs: https://github.com/fossasia/engelsystem/issues

 

 


Code Quality in the knittingpattern Python Library

In our Google Summer of Code project, a part of our work is to bring knitting to the digital age. We are Kirstin Heidler and Nicco Kunzmann. Our knittingpattern library aims at being the exchange and conversion format between different types of knit work representations: hand knitting instructions, machine commands for different machines, and SVG schemata.

The generated schema from the knittingpattern library.
The original pattern schema Cafe.

The image above was generated by this Python code:

import knittingpattern, webbrowser
example = knittingpattern.load_from().example("Cafe.json")
webbrowser.open(example.to_svg(25).temporary_path(".svg"))

So far about the context. Now about the Quality tools we use:


Continuous integration

We use Travis CI [FOSSASIA] to upload packages of a specific git tag automatically. The Travis build runs under Python 3.3 to 3.5. It first builds the package and then installs it with its dependencies. To upload tags automatically, one can configure Travis, preferably with the command line interface, to save the username and password for the Python Package Index (PyPI). [TravisDocs] Our process of releasing a new version is the following:

  1. Increase the version in the knitting pattern library and create a new pull request for it.
  2. Merge the pull request after the tests passed.
  3. Pull and create a new release with a git tag using
    setup.py tag_and_deploy

Travis then builds the new tag and uploads it to Pypi.

With this we have basic quality assurance. Pull requests need to pass all tests before they can be merged. Travis can be configured to automatically reject a request with errors.

Documentation Driven Development

As mentioned in a blog post, documentation-driven development was something worth checking out. In our case that means writing the documentation first, then the tests and then the code.

Writing the documentation first means thinking in the space of the mental model you have for the code. It defines the interfaces you would be happy to use. A lot of edge cases can be thought of at this point.

When writing the tests, they are often split up and no longer represent the flow of thought you had when thinking about your wishes. Tests can be seen as the glue between the code and the documentation. As with writing code to pass the tests, in the conversation between the tests and the documentation I find out some things I have forgotten.

When writing the code in a test-driven way, another conversation starts. I call implementing the tests a conversation because the tests tell the code that it should be different, and the code tells the tests about their inconsistencies, like misspellings and bloated interfaces.

With writing documentation first, we have the chance to have two conversations about our code, in spoken language and in code. I like it when the code hears my wishes, so I prefer to talk a bit more.

Testing the Documentation

Our documentation is hosted on Read the Docs. It should have these properties:

  1. Every module is documented.
  2. Everything that is public is documented.
  3. The documentation is syntactically correct.

These are qualities that can be tested, so they are tested. The code cannot be deployed if it does not meet these standards. We use Sphinx for building the docs, which makes it possible to test these properties in this way:

  1. For every module there exists a .rst file which automatically documents the module with autodoc.
  2. A Sphinx build outputs a list of objects that should be covered by documentation but are not.
  3. Sphinx outputs warnings throughout the build.

Testing our documentation allows us to keep it at a higher quality. Many more tests could be imagined, but the basic ones already help.

Code Coverage

It is possible to measure code coverage and see how well we do using Codeclimate.com. It shows us the files we need to work on when we want to improve the quality of the package.

Landscape

Landscape is also free for open-source projects. It can give hints about where to improve next. It is also possible to fail pull requests if the quality decreases. It shows code duplication and can run pylint. Currently, most of the style problems arise from undocumented tests.

Summary

When we started with the stricter quality assurance, the question arose whether it would only slow us down. Now, we have learned to write properly styled PEP 8 code and begin to do automatically what pylint demands. High test coverage allows us to change the underlying functionality without changing the interface, and without fear that we may break something irrecoverably. I feel like a burden has been taken from me by all those free tools for open-source software that spare me the time of setting quality assurance up myself.

Future Work

In the future we would also like to create a user interface. These are sometimes hard to test, so we plan not to put it into the package but to build it on top of the package.


Adding more functions to command line interface of steam-shell

sTeam allows the creation of groups and joining, leaving and listing them. However, these functions were only available in the web interface. My task involved adding these functions to the command line interface, that is, steam-shell. The task sounded difficult because it involved coding new commands for the shell and performing actions that had never been done from the shell before. This didn't turn out to be true.

Issue: https://github.com/societyserver/sTeam/issues/68

I began by using and understanding the group functions in the web interface. First I took up the command for the creation of groups. I listed the attributes needed by referring to the web interface and then extended the create command already present in the shell to also create groups. The task turned out to be easier than I had thought. This was because of the elegance of Pike and the modularity of the sTeam server. The code for the creation of objects was already present in the command; I had to pass the type of object, that is group, and write a few lines to accept the required attributes.

The next command was for listing groups. For this I created a new command called 'group', and inside the function called by group I switch-cased on the next sub-command to find out whether it was join, leave or list. After that I wrote the code to perform the action for each command in its respective case. This is where the modularity of sTeam helped me a lot: the core portion of these functions turned out to be one-liners.

Code to get a list of all groups:

array(object) groups = _Server->get_module("groups")->get_groups();

Code to join a group:

int result = group->add_member(me);

Code to leave a group:

group->remove_member(me);


Soon all my commands were ready. I tested them and everything seemed to be working fine. I pushed my changes and created a new pull request. It was after this that Martin asked me to change the interface. He introduced me to MUDs, Multi User Dungeons. MUDs are a type of text-based game. The interface for sTeam is based on these games, and they are also an inspiration for the entire project. Just like MUDs create a virtual space, we at sTeam create a virtual office space. This helped me to understand not only the interface but also the project. I will be talking more about this in my next blog. Anyway, the standard interface is

<action> <object> <additional attributes>

I changed the interface, and the syntax for the commands is now:

Create a group: create group <group_name>


 siddhant@omega:~/Documents/sTeam/tools$ ./steam-shell.pike
 Connecting to sTeam server...
 Password for root@127.0.0.1 [steam]: *****
 Pike v7.8 release 866 running Hilfe v3.5 (Incremental Pike Frontend)
 /home/root> create group group_test
 How would you describe it?^Jtest group
 Subgroup of?^J
 /home/root>

List groups: list groups


 siddhant@omega:~/Documents/sTeam/tools$ ./steam-shell.pike
 Connecting to sTeam server...
 Password for root@127.0.0.1 [steam]: *****
 Pike v7.8 release 866 running Hilfe v3.5 (Incremental Pike Frontend)
 /home/root> list groups


Here is a list of all groups
abcd Admin coder Everyone Groups group_test
help justnow PrivGroups sTeam testg testg;
testGroup testing test_group WikiGroups


 /home/root>
 

Join a group: join group <group_name>


 siddhant@omega:~/Documents/sTeam/tools$ ./steam-shell.pike
 Connecting to sTeam server...
 Password for root@127.0.0.1 [steam]: *****
 Pike v7.8 release 866 running Hilfe v3.5 (Incremental Pike Frontend)
 /home/root> join group group_test
 Joined group group_test
 /home/root>
 

Leave a group: leave group <group_name>


 siddhant@omega:~/Documents/sTeam/tools$ ./steam-shell.pike
 Connecting to sTeam server...
 Password for root@127.0.0.1 [steam]: *****
 Pike v7.8 release 866 running Hilfe v3.5 (Incremental Pike Frontend)
 /home/root> leave group group_test
 /home/root>
 

Solution: https://github.com/societyserver/sTeam/pull/77


FOSSASIA Code-In Grand Prize Winners Gathering at Google Headquarter

FOSSASIA was thrilled to be selected once again as a mentor organisation of Google Code-In (GCI) 2015 – a contest to introduce pre-university students (ages 13-17) to open source software development. Together with 13 other orgs, we reached out to 980 students from 65 countries who completed a total of 4,776 tasks. As a part of our participation, I got the chance to attend the Grand Prize Winners trip and present FOSSASIA.

FOSSASIA Team, photo by Jeremy Allison

GCI 2015 Awards Ceremony

28 grand prize winners and their parents, along with one mentor from each participating organisation, were invited on a trip to the Bay Area as a reward for their hard work during the last GCI program. Students had a chance to meet the mentors and to interact with fellow students from other projects, enjoyed a few days in San Francisco, and received many cool gifts and swag from Google.

Chris DiBona and Jason Wong, photo by Jeremy Allison

Chris DiBona – Director of Open Source at Google – a super busy man, was so kind as to spend his morning personally congratulating each single student in front of his or her parents. I do believe that enjoying what you are doing and getting recognition for your work is the best gift ever, and being able to share it with your family is even better. Thanks Google for celebrating the open source culture.

Meet, learn and share

I was very impressed by the level of knowledge and the abilities of all 28 students. They are young, enthusiastic and inspiring. Thanks to all the parents for believing in their kids and supporting them in pursuing their open source journey.

Group photo by Jeremy Allison

It was wonderful to meet our two FOSSASIA GCI students for the first time. Jason, who grew up in the States, seemed a bit reserved, while Yathannsh from India was very outspoken. Both were very new to open source when they joined the program and have now become active contributors who are eager to learn more. The three of us gave a team presentation on FOSSASIA Labs and our achievements from GCI 2015. Jason expressed his wish to go on as a mentor for the next GCI.

Jason and Yathannsh

I had several interesting conversations with the parents, who finally understood why their kids were on the computers all the time. About 14% of the parents work in IT and are very aware of open technology. The rest were super excited to learn about the various open source projects. Many told me they would love to have their second son or daughter join the program as well.

The mentor group had a few discussions on pros and cons, on how to improve and maximize the outcome of the program, and on ways to keep students engaged afterwards. I learned a lot from the other orgs and also shared the FOSSASIA workflow and guidelines with them. The 7 weeks of GCI were an amazing experience for me and my team. I must give our FOSSASIA mentors credit for their incredible efforts. It was truly a pleasure to work with Mario, Sean, Mohit, Praveen, Nikunj, Abhishek, Jigyasa, Dukeleto, Manan, Saptaks, Aruna, Rohit, Arnav, Diwanshi, Martin, Nicco, Sudheesh, Samarjeet, Harsh, Luther, Jung and many more.

GCI 2015 – 14 Org Mentors, photo by Jeremy Allison

The Fun

It was the best field trip ever! The program was carefully planned: a meeting with Google engineers, a tour of the Google campus, a day of sightseeing around San Francisco and much more.

Segway Tour

It was my first time on a Segway and I loved it, so cool! Thanks Stephanie for encouraging me to try this. It is never too late to learn something.

Segway by the bay

Afternoon walk over the Golden Gate Bridge

Sanya and I could have completed the entire bridge but, because of our slow male mentors, we only made it halfway through. To all my geek friends out there: please do more exercise!

On Golden Gate Bridge
Walking on the bridge, photo by Florian Schmidt

Yacht Dinner Cruise

This was the highlight for many of us: sailing along the bay, relaxing time on the water, beautiful landscape, nice chats and yummy food.

Photo by Jeremy Allison
Ladies pose, photo by Jeremy Allison
with Cat Allman, photo by Jeremy Allison

Thank you organisers!

We just couldn't thank Stephanie enough for her hard work and the extreme energy she gives not only to GCI but also to the whole open source community, and especially her care for us all during our trip. I was amazed by the level of detail that was brought in: additional medication, sunscreen, chocolate tips, gift cards, a travel guide, luggage storage, special diets etc.

Stephanie Taylor, GCI program manager, photo by Jeremy Allison

Last but not least, thanks to the lovely Mary, kind Helen, cool Eric, friendly Josh, awesome photographer Jeremy and of course my favorite Cat Allman for another unforgettable experience!

Links

Photos: https://goo.gl/photos/htCKY4yJooX9ZSNBA

Google Code-In: developers.google.com/open-source/gci/

Google Summer of Code: developers.google.com/open-source/gsoc/

FOSSASIA Labs: http://labs.fossasia.org/
