Testing Hero

Our sTeam code base had no tests, so we were facing a lot of issues and were not able to merge our code easily. My next task dealt with this problem: I had to write test cases for the function calls to COAL commands. After some basic research I found out that the only testing framework available for Pike is the one used for testing the Pike interpreter itself. It consists of a set of scripts; mktestsuite is one of these scripts and is responsible for generating the tests. The problem with this is that the tests must follow a particular syntax, and since the framework is meant to test the interpreter, it assumes each line of Pike code is a separate test. This prevented us from writing multi-line tests and also from setting up the client.

Issue:

https://github.com/societyserver/sTeam/issues/104 (Establish testing framework),

https://github.com/societyserver/sTeam/issues/107 (Add tests for create)

My first approach to the problem was to write the tests using the available scripts; however, this didn’t turn out very well, and the tests written were confusing and out of context. The lines written for setting up the client were also being counted as tests, and there was no continuity: variables defined on one line were not accessible on the next. I realized that this was not going to work out and decided to write my own testing framework, starting with a simple testing structure.

The framework has a central script called test.pike, which is used to run all the test cases. It uses the scripts move.pike and create.pike, which contain the actual test cases. These scripts contain various functions, each of which is a test case that returns 1 when the case passes and 0 when it fails. test.pike, the central node, is responsible for looping through these functions, calling each of them, and recording the results. The framework then outputs the total number of cases passed or failed. Implementing this turned out to be simple, as Pike scripts can be imported as objects and their functions extracted.

[Screenshot: running the tests for move]
[Screenshot: running the tests for create (earlier version of the framework)]

test.pike takes care of initializing various variables. It establishes a connection to the server and initializes _Server and me, which are then passed to all the test cases. move.pike has various test cases involving moving things around: moving a user into a room, into a container, or to a non-existent location, and moving a room inside a container. I also got my first failing test case, which is moving a user into a non-existent location. This shouldn’t be allowed, but the code does not throw any error and incorrectly writes the trail.

create.pike contains cases that create various kinds of objects and also attempt to create objects of classes that do not exist. As I went along writing these cases I also kept improving the central test.pike. I added code that creates a special room for testing during initialization and clears this room before destroying the object and exiting.

Solution:

https://github.com/societyserver/sTeam/pull/105 (Establish testing framework),

https://github.com/societyserver/sTeam/pull/108 (Add tests for create)


WordPress Upgrade Process

Have you ever wondered how WordPress handles the automated core, plugin, and theme upgrades? Well, it’s time for some quick education!

WordPress is an Open Source project, which means there are hundreds of people all over the world working on it. (More than most commercial platforms.)

There are mainly two kinds of automatic upgrades for WordPress:

  • The main ‘Core’ upgrades
  • The ‘Child’ upgrades, which work for the themes and plugins used in your WordPress installation.

We have all seen the plugin and theme installer that runs when we install a theme or a plugin: it downloads the package to /wp-content/upgrade/, extracts it, deletes the old version (if one exists), and copies the new one into place. It is used every time we install a theme or a plugin in WordPress.

Since this is used more often than the core updater (most of the time, as many different plugins are used), it’s the sort of upgrade we’re all used to and familiar with. And we think, ‘WordPress deletes before upgrading, sure.’ This makes sense: after all, you want to clean out the old files, especially the ones that aren’t used anymore.

But that is not how the WordPress core updates work.

Core updates become available when there is a new version of WordPress with new features or bug fixes.

Core updates are subdivided into three types:

  1. Core development updates, known as the “bleeding edge”
  2. Minor core updates, such as maintenance and security releases
  3. Major core release updates

By default, every site has automatic updates enabled for minor core releases and translation files. Sites already running a development version also have automatic updates to further development versions enabled by default.

WordPress updates can be configured in two ways:

  • defining constants in wp-config.php,
  • or adding filters using a Plugin

Let’s discuss both methods.

Configuration via wp-config.php

We can edit wp-config.php to disable automatic updates entirely, or to disable or configure core updates based on the update type.

To completely disable all types of automatic updates, core or otherwise, add the following to your wp-config.php file:

define( 'AUTOMATIC_UPDATER_DISABLED', true );

To enable automatic updates for major releases or development purposes, the place to start is the WP_AUTO_UPDATE_CORE constant. Defining this constant in one of three ways allows you to blanket-enable or blanket-disable several types of core updates at once.

define( 'WP_AUTO_UPDATE_CORE', true );

WP_AUTO_UPDATE_CORE can be defined with one of three values, each producing a different behavior (see the example after this list):

  • Value of true – Development, minor, and major updates are all enabled
  • Value of false – Development, minor, and major updates are all disabled
  • Value of 'minor' – Minor updates are enabled; development and major updates are disabled

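For instance, to keep only minor (maintenance and security) core updates enabled, the documented 'minor' value from the list above can be used in wp-config.php:

define( 'WP_AUTO_UPDATE_CORE', 'minor' );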

Configuration via Filters using Plugins

Using filters allows for fine-tuned control of automatic updates.

The best place to put these filters is a must-use plugin.

Do not add add_filter() calls directly in wp-config.php: at that point WordPress isn’t fully loaded, and doing so can cause conflicts with other tools such as WP-CLI.

We can also enable or disable automatic updates by hooking a filter:

add_filter( 'update_val', '__return_true' );

Here update_val is only a placeholder; replace it with one of the actual filter names below (a full sketch follows the list):

  • To disable automatic updates, hook automatic_updater_disabled with __return_true
  • To enable automatic updates, hook one of the following filters with __return_true:
    • auto_update_core to enable all core updates
    • allow_dev_auto_core_updates to enable development updates
    • allow_minor_auto_core_updates to enable minor updates
    • allow_major_auto_core_updates to enable major updates
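Putting this together, here is a sketch of such a must-use plugin. The file name update-control.php is an example choice; the filter names are the documented ones listed above:

<?php
/* Plugin Name: Update Control (example sketch) */
/* Place this file in wp-content/mu-plugins/ so it always runs. */

// Either switch off every kind of automatic update...
add_filter( 'automatic_updater_disabled', '__return_true' );

// ...or, alternatively, opt in to specific core update types:
add_filter( 'allow_minor_auto_core_updates', '__return_true' );  // maintenance and security releases
add_filter( 'allow_major_auto_core_updates', '__return_false' ); // no major version jumps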

WordPress has a magnificent auto-update system that notifies you when new versions of the WordPress core, installed plugins or themes are available. The notifications are displayed in the Admin Bar and also on the plugins page where you can get more details about the new version.

To install the new version, you simply hit the “Update automatically” button. WordPress will automatically download the new package, extract it, and replace the old files. No FTP, manual removal of old files, or uploading is required.


There is also a dedicated page for updates which can be reached from the dashboard menu. It’s helpful when you want to do bulk updates of multiple plugins instead of updating each one separately. It also has a “Check Again” button which checks for new updates. By default, WordPress does this check every 12 hours.


In the coming weeks we will be applying this WordPress-like update system to Engelsystem. Developers who are interested in contributing can work with us.

Development: https://github.com/fossasia/engelsystem
Issues/Bugs: https://github.com/fossasia/engelsystem/issues

 


Dockerizing a PHP and MySQL application

Docker Introduction

Docker is based on the concept of building images that contain the necessary software and configuration for applications. We can also build distributable images that contain pre-configured software like an Apache server, a caching server, a MySQL server, etc. We can share our final image on Docker Hub to make it accessible to everyone.

First we need to install Docker on our local machine. Here are the steps to install Docker on Ubuntu.

Prerequisites

  • Docker requires a 64-bit installation regardless of your Ubuntu version.
  • Your kernel must be 3.10 at minimum. The latest 3.10 minor version or a newer maintained version are also acceptable.

To check your current kernel version, open a terminal and use uname -r to display your kernel version:

$ uname -r
3.11.0-15-generic

Update your apt sources

Docker’s APT repository contains Docker 1.7.1 and higher. To set APT to use packages from the new repository:

  1. Log into your machine as a user with sudo or root privileges.
  2. Open a terminal window.
  3. Update package information, ensure that APT works with the https method, and that CA certificates are installed.
     $ sudo apt-get update
     $ sudo apt-get install apt-transport-https ca-certificates
    
  4. Add the new GPG key.
    $ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
    
  5. Open the /etc/apt/sources.list.d/docker.list file in your favorite editor. If the file doesn’t exist, create it.
  6. Remove any existing entries.
  7. Add an entry for your Ubuntu operating system. For example, on Ubuntu Trusty 14.04 (LTS):
    deb https://apt.dockerproject.org/repo ubuntu-trusty main
    
  8. Save and close the /etc/apt/sources.list.d/docker.list file.
  9. Update the APT package index.
    $ sudo apt-get update

For Ubuntu Trusty, Wily, and Xenial, it’s recommended to install the linux-image-extra kernel package, which allows you to use the aufs storage driver.

Install both the required and optional packages.

$ sudo apt-get install linux-image-generic-lts-trusty

INSTALL

Log into your Ubuntu installation as a user with sudo privileges.

  1. Update your APT package index.
    $ sudo apt-get update
    
  2. Install Docker.
    $ sudo apt-get install docker-engine
    
  3. Start the docker daemon.
    $ sudo service docker start
    
  4. Verify docker is installed correctly.
    $ sudo docker run hello-world
    

    This command downloads a test image and runs it in a container. When the container runs, it prints an informational message. If it runs successfully, then Docker is installed correctly.

Docker Images

Docker images are the basis of containers. An image can be considered a class definition: we define its properties and behavior. To browse the available images, we can visit Docker Hub and run docker pull <image> to download them to the host machine.
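For example, to pull the Ubuntu image that appears in the listing below (the tag is taken from that listing):

$ docker pull ubuntu:14.04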

Listing images on the host

$ docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              14.04               1d073211c498        3 days ago          187.9 MB
busybox             latest              2c5ac3f849df        5 days ago          1.113 MB
training/webapp     latest              54bb4e8718e8        5 months ago        348.7 MB

Working with Dockerfile

Create a Dockerfile in your PHP project. This is the Dockerfile for Engelsystem.

Our Dockerfile is now complete and ready to be built:

[Screenshot: Dockerfile for Engelsystem]
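Since the Dockerfile itself survives only as a screenshot, here is a minimal sketch of what a Dockerfile for a PHP and MySQL application of that era might look like. The base image tag and the extension choice are assumptions for illustration, not the exact Engelsystem file:

# Official PHP image with Apache (tag chosen for illustration)
FROM php:5.6-apache

# The mysqli extension lets PHP talk to the MySQL server
RUN docker-php-ext-install mysqli

# Copy the application source into Apache's document root
COPY . /var/www/html/

# Apache listens on port 80 inside the container
EXPOSE 80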

Building the Image

The docker build . command builds an image from the Dockerfile in the current directory.

Once the image is built, labeled, and tagged, the final step is to push it to Docker Hub. This step is optional, but it’s still useful if we’re planning on sharing the image and helping others with their development environment.
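A sketch of the corresponding commands; the repository name fossasia/engelsystem is an assumption for illustration:

$ docker build -t fossasia/engelsystem .
$ docker login
$ docker push fossasia/engelsystem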


Implementing Database Migrations

Database Migrations Using Phinx

Database migrations can transform your database in many ways, such as creating new tables, inserting rows, adding indexes and modifying columns. Phinx avoids the need to write MySQL by hand and instead offers a powerful API for creating migrations using PHP code.

Advantages of using Phinx

  • Phinx keeps track of which migrations have been run, so you can worry less about the state of your database and instead focus on building better software.
  • Each migration is represented by a PHP class in a unique file. We can write our migrations using the Phinx PHP API, or raw SQL.
  • Phinx has an easy installation process, easy-to-use command line instructions, and is easy to integrate with other PHP tools (Phing, PHPUnit) and web frameworks.

Installing Phinx

Phinx should be installed using Composer. Composer is a tool for dependency management in PHP. We need to require the dependency in composer.json.

php composer.phar require robmorgan/phinx

Then run Composer:

php composer.phar install --no-dev

Now create a folder called migrations, with adequate permissions, in your database directory. This is where we write our migrations. In Engelsystem it is created in the db directory.

Phinx can now be executed from within your project:

php vendor/bin/phinx init

Writing Migrations For SQL files

Creating a New Migration

Let’s start by creating a new Phinx migration by running Phinx with the create command. This creates a new migration in the format YYYYMMDDHHMMSS_my_new_migration.php, where the first 14 characters are the current timestamp down to the second, and generates a skeleton file with a single method.

$ php vendor/bin/phinx create MyNewMigration

The file looks something like this:

[Screenshot: generated skeleton migration]
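As the skeleton survives only as a screenshot, here is what a freshly generated Phinx migration of that generation typically looked like (a sketch following Phinx’s documented skeleton):

<?php

use Phinx\Migration\AbstractMigration;

class MyNewMigration extends AbstractMigration
{
    /**
     * Change Method: Phinx can automatically reverse
     * migrations written here when migrating down.
     */
    public function change()
    {
    }
}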

Explaining the File

The AbstractMigration Class

All Phinx migrations extend from the AbstractMigration class, which provides the necessary support to create your database migrations. Phinx provides different methods in this class: change, up and down.

The Change Method

This is the default migration method. I will explain how to write the change method for an example MySQL query. The following MySQL query can be executed using the equivalent change method shown below it.

MySQL Query

ALTER TABLE `AngelTypes` ADD `requires_driver_license` BOOLEAN NOT NULL;

Equivalent change method

public function change()
{
    $table = $this->table('AngelTypes');
    $table->addColumn('requires_driver_license', 'boolean', array('null' => false))
          ->update();
}

The Up Method

We should use the up method to transform the database with our intended changes. For example, the following MySQL query, which creates a new Settings table, can be executed using the equivalent up method.

MySQL Query

DROP TABLE IF EXISTS `Settings`;
 CREATE TABLE IF NOT EXISTS `Settings` (
    `event_name` varchar(255) DEFAULT NULL,
   `buildup_start_date` int(11) DEFAULT NULL,
   `event_start_date` int(11) DEFAULT NULL,
   `event_end_date` int(11) DEFAULT NULL,
   `teardown_end_date` int(11) DEFAULT NULL,
   `event_welcome_msg` varchar(255) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 ;

Equivalent up method

public function up()
{
    $table = $this->table('Settings');
    $table->addColumn('event_name', 'string', array('limit' => 255))
          ->addColumn('buildup_start_date', 'integer', array('limit' => 11))
          ->addColumn('event_start_date', 'integer', array('limit' => 11))
          ->addColumn('event_end_date', 'integer', array('limit' => 11))
          ->addColumn('teardown_end_date', 'integer', array('limit' => 11))
          ->addColumn('event_welcome_msg', 'string', array('limit' => 255))
          ->save();
}

We have now created a table. Next we will learn to insert data into tables using migrations.

MySQL Query

INSERT INTO `Privileges` (`id`, `name`, `desc`) VALUES (39, 'admin_settings', 'Admin Settings');
INSERT INTO `GroupPrivileges` (`id`, `group_id`, `privilege_id`) VALUES (218, -4, 39);

Equivalent up method

public function up()
{
    // inserting into Privileges
    $rows = [
        [
            'id'   => 39,
            'name' => 'admin_settings',
            'desc' => 'Admin Settings'
        ]
    ];
    $this->insert('Privileges', $rows);

    // inserting into GroupPrivileges
    $rows = [
        [
            'id'           => 218,
            'group_id'     => -4,
            'privilege_id' => 39
        ]
    ];
    $this->insert('GroupPrivileges', $rows);
}

The Down Method

The down method is automatically run by Phinx when you are migrating down. We should use the down method to reverse/undo the transformations described in the up method.

MySQL Query

DELETE FROM `Users`;

Equivalent Down method

public function down()
    {
        $this->execute('DELETE FROM Users');
    }

Now that we have learned how to write migrations, let us execute them.

Configuring phinx.yml

When you initialize your project using the init command, Phinx creates a default file called phinx.yml. In it we can edit the database name and the environment, and we need to add the password for the MySQL user. The file looks something like this:

[Screenshot: phinx.yml configuration]
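As the file is shown above only as a screenshot, here is a sketch of a typical phinx.yml of that Phinx generation; the database name, user and password are placeholder assumptions:

paths:
    migrations: db/migrations

environments:
    default_migration_table: phinxlog
    default_database: development
    development:
        adapter: mysql
        host: localhost
        name: engelsystem
        user: root
        pass: 'secret'
        port: 3306
        charset: utf8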

Executing the migrations

To migrate the database we use the migrate command. It runs all the available migrations. The command to migrate the development environment is:

$ phinx migrate -e development

To migrate to a specific version we use the --target parameter or -t for short.

$ phinx migrate -e development -t 20110103081132

To know whether all your migrations have run successfully, we use the status command:

$ phinx status -e development

After migrating the database for Engelsystem, the status command gives the following output.

[Screenshot: phinx status output]


Transcript from the Python Toolbox 101

At the Python User Group Berlin, I led a talk/discussion about free-of-charge tools for open-source development, based on what we use in GSoC. The whole content was in an Etherpad and people could add their ideas.

Because there are a lot of tools, I thought I would share it with you. Maybe it is of use. Here is the talk:


Python Users Berlin 2016/07/14 Talk & Discussion

 

START: 19:15
Agenda 1min END: 19:15
======
– Example library
– What is code
– Version Control
  – Python Package Index
– …, see headings
– discussion: write down, what does not fit into my structure
Example Library (2min)  19:17
======================
What is Code (2min) 19:19
===================
.. note:: This frames our discussion
– Source files .py, .pyw
– tests
– documentation
– quality
– readability
– bugs and problems
– <3
Configuration files: plain text for editing
Version Control (2min) 19:21
======================
.. note:: Sharing and Collaboration
– no Version Control:
  – Dropbox
  – Google drive
  – Telekom cloud
  – ftp, windows share
– Version Control Tools:
  – git
    – gitweb own server
    – 
  – mercurial
  – svn
  – perforce (proprietary)
Python Package Index (3min) 19:24
—————————
.. note:: Shipping to the users
hosts python packages you develop.
Example: “knittingpattern” package
pip
Installation from Pypi:
    $ python3 -m pip install knittingpattern # Linux
    > py -3.4 -m pip install knittingpattern # Windows
Documentation upload included!
Documentation (3min) 19:27
====================
.. note:: Inform users
I came across a talk:
Documentation can be:
– tutorials
– how to
– introduction to the community/development process
– code documentation!!!
– chat
– 
Building the documentation (3min)  19:30
———————————
Formats:
– HTML
– PDF
– reST
– EPUB
– doc strings in source code
– test?
Tools:
– Sphinx
– doxygen
– doc strings
  – standard how to put in docstrings in Python
    – 
Example: Sphinx  3min 19:33
~~~~~~~~~~~~~~~
– Used for Python
– Used for knittingpattern
Python file:
Documentation file with sphinx.ext.autodoc:
Built documentation:
    See the return type str, Intersphinx can reference across documentations.
    Intersphinx uses objects inventory, hosted with the documentation:
Testing the documentation:
    – TODO: link
      – everything is included in the docs
      – everything that is public is documented
      
      syntax
      – numpy 
      – google 
      – sphinx
Hosting the Documentation (3min) 19:36
——————————–
Tools:
– pythonhosted
  only latest version
– readthedocs.io
  several branches, versions, languages
– wiki pages
– 
Code Testing 2min 19:38
============
.. note:: Tests show the presence of mistakes, not their absence.
What can be tested:
– features
– style: pep8, pylint, 
– documentation
– complexity
– 
Testing Features with unit tests 4min 19:42
——————————–
code:
    def fib(i): …
Tools with different styles
– unittest
  
    import unittest
    from fibonacci import fib
    class FibonacciTest(unittest.TestCase):
        def testCalculation(self):
            self.assertEqual(fib(0), 0)
            self.assertEqual(fib(1), 1)
            self.assertEqual(fib(5), 5)
            self.assertEqual(fib(10), 55)
            self.assertEqual(fib(20), 6765)
    if __name__ == "__main__":
        unittest.main()
 
– doctest
    import doctest
    def fib(n):
        """
        Calculates the n-th Fibonacci number iteratively  
        >>> fib(0)
        0
        >>> fib(1)
        1
        >>> fib(10) 
        55
        >>> fib(15)
        610
        >>> 
        """
        a, b = 0, 1
        for i in range(n):
            a, b = b, a + b
        return a
    if __name__ == "__main__":
        doctest.testmod()
– pytest (works with unittest)
    import pytest
    from fibonacci import fib
    
    @pytest.mark.parametrize("parameter,value", [(0, 0), (1, 1), (10, 55), (15, 610)])
    def test_fibonacci(parameter, value):
        assert fib(parameter) == value
– nose tests?
– …
– pyhumber
– assert in code,  PyHamcrest
– Behaviour driven development
  – human test
Automated Test Run & Continuous Integration 2min 19:44
===========================================
.. note:: 
Several branches:
– production branch always works
– feature branches
– automated test before feature is put into production
Tools running tests 6min 19:50
——————-
– Travis CI for Mac, Ubuntu
– Appveyor for Windows
Host yourself:
– buildbot
– Hudson
– Jenkins
– Teamcity
– circle CI
  + selenium for website test
– 
– …?????!!!!!!
Tools for code quality 4min 19:54
———————-
– landscape
  complexity, style, documentation
  – libraries are available separately
    – flake8
    – destinate
    – pep257
– codeclimate
  code duplication, code coverage
  – libraries are available separately
– PyCharm
  – integrated what landscape has 
  – + complexity
Bugs, Issues, Pull Requests, Milestones 4min 19:58
=======================================
.. note:: this is also a way to get people into the project
1. find bug
2. open issue if big bug, discuss
3. create pull request
4. merge
5. deploy
– github
  issue tracker
– waffle.io – scrumboard
  merge several github issues tracker
– Redmine
JIRA
– trac 
– github issues + zenhub integrated in github
– gitlab
– gerrit framework that does alternative checking https://www.gerritcodereview.com/
  1. propose change
  2. test
  3. someone reviews the code
      – X people needed
  QT company uses it
Localization 2min 20:00
============
crowdin.com
    Crowdsourced translation tool:
    
Discussion
– spellchecker is integrated in PyCharm
  – character set
  – new vocabulary
  – not for continuous integration (CI)
– Emacs
  – 
– pylint plugin 
   – not all languages?
– readthedocs
  – add github project, 
  – hosts docs
– sphinx-plugin?
– PyCon testing talk:
    – Hypothesis package
      – tries to break your code
      – throws in a lot of edge cases (huge number, nothing, …)
      -> find obscure edge cases
      
Did someone create a Pylint plugin
– question:
    – cyclomatic code complexity
    – which metrics tools do you know?
    –
Virtual Environment:
    nobody should install everything in the system
    -> switch between different python versions
    – python3-venv
      – slightly different than virtual-env(more mature)
Beginners:
    Windows:
        install Anaconda

Read and Understand Codacy Reports

To begin understanding reports, let’s start with what Codacy is.

So. What is Codacy?

Codacy is an automated code review tool that helps developers to save time in code reviews and to tackle technical debt. It centralises customizable code patterns and enforces them within engineering teams. Codacy tracks new issues by severity level for every commit and pull request.

It can be integrated with the GitHub repository to review every commit and pull request in terms of quality and errors. It checks code style, security, duplication, complexity and coverage on every change while tracking code quality throughout your sprints.

You can integrate Codacy in your private/public repository by going here. Sign up with your Github account and follow the steps mentioned. More information regarding GitHub integration can be found here.

Features of Codacy:

  • SAVE TIME IN CODE REVIEWS
  • INTEGRATED IN YOUR WORKFLOW
  • TRACK YOUR PROJECT QUALITY EVOLUTION

Now we get to understanding Codacy reports.

Below is an image of what a Codacy dashboard looks like when evaluating a project. To evaluate a project, we should first know what “software metrics” are.

[Image: Codacy dashboard]

A software metric is a standard of measure of the degree to which a software system or process possesses some property. Even if a metric is not a measurement (metrics are functions, while measurements are the numbers obtained by applying metrics), the two terms are often used as synonyms. Since quantitative measurements are essential in all sciences, there is a continuous effort by computer science practitioners and theoreticians to bring similar approaches to software development. The goal is to obtain objective, reproducible and quantifiable measurements, which may have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance testing, software debugging, software performance optimization, and optimal personnel task assignments. You can learn more about software metrics by visiting here.

A Codacy dashboard provides answers to the following three main questions:

  • What is the state of your project’s code quality?
  • How is it evolving throughout time?
  • What are the hotspots in your code?

Component of the Dashboard:

  • Introduction: The dashboard is the central screen of any project on Codacy.
  • Project Certification: After running a complete analysis of the project or the GitHub repository, Codacy gives the project an overall grade from A to F. The grade depends on the following parameters:
    • Error Prone
    • Code Complexity
    • Code Style
    • Unused Code
    • Security
    • Compatibility
    • Documentation
    • Performance
    [Image: project certification]
  • Issues Breakdown: represents the different kinds of issues from different areas in a pictorial form. It provides a quick overview of the total number of issues in the repository and the breakdown per category.
    [Image: issues breakdown]
    Users can click on a specific category for more details.
  • Code Coverage: If you set up code coverage on your repository, you will be able to see the overall covered percentage on the dashboard. It will also show the files with the worst code coverage, allowing you to jump directly to a file to see the details.
    [Image: coverage]
  • Goals: Users can define individual goals to remove errors and get better grades for their projects.
  • Historic data: The Codacy dashboard also provides an analysis of historic data, so as to keep track of the progress made in improving the code and the milestones covered in reaching the goal.
    [Image: historic issues]

Codacy provides a nice dashboard showing the metrics. It saves hours in code review and code quality monitoring, from small teams to big companies. And as the Codacy team itself says, it is “LOVED BY DEVELOPERS”; being a developer, I wouldn’t deny this statement. It helped a lot in improving the code quality of my project, Engelsystem.

Development: https://github.com/fossasia/engelsystem
Issues/Bugs: https://github.com/fossasia/engelsystem/issues


Adding extensive help for sTeam

This task was something I came up with as an enhancement because of the problems I faced while using sTeam for the first time. During my first week of using sTeam I had a tough time getting used to the commands, and that is when I opened an issue to improve the help. The help for each command was a one-liner and not very helpful, so I took up the task of improving it, so that new users don’t have to face the difficulties that I faced.

Issue: https://github.com/societyserver/sTeam/issues/30

Solution: https://github.com/societyserver/sTeam/pull/95

Not a lot of technical detail was involved in this task, but it was time consuming. I wrote a few lines explaining what each command does and also added a syntax for each command. While doing this I realized more improvements that could be made and added them to my task list. My mentor had explained to me how rooms and gates are the same. I discovered that the gothrough command was violating this, as it allowed users to go through gates but not rooms. I discussed this on the IRC and we came up with a solution: the command should be changed to enter and allowed to operate on both rooms and gates.

This enhancement became my next task, and I worked on changing the command. The function gothrough was changed to enter, and the conditions required for it to work on rooms were added. This paved the way for my next task: the look command showed rooms and gates under different sections, and now that there was no difference between rooms and gates, I combined these two sections, changing the output of the look command.

[Screenshot: output of look before the changes]
[Screenshot: output of look after the changes]

 

Issue: https://github.com/societyserver/sTeam/issues/100

Solution: https://github.com/societyserver/sTeam/pull/101

By the end of the week I had started on my next task, which was a major one: writing a testing framework and test cases for COAL command calls. I will discuss this in more detail in my next blog post.


Deploying a Kivy Application with PyInstaller for Mac OSX with Travis CI to Github

In this sprint for the kniteditor library we focused on automatic deployment for Windows and Mac. The idea: whenever a tag is pushed to Github, a new Travis build is triggered. The newly built app is then uploaded to Github as a “.dmg” file.

Travis

Travis is configured with the “.travis.yml” file which you can see here:

language: python

# see https://docs.travis-ci.com/user/multi-os/
matrix:
  include:
    - os: linux
      python: 3.4
    - os: osx
      language: generic
  allow_failures:
    - os: osx

install:
  - if [ "$TRAVIS_OS_NAME" == "osx"   ] ; then   mac-build/install.sh ; fi

script:
  - if [ "$TRAVIS_OS_NAME" == "osx"   ] ; then   mac-build/test.sh ; fi

before_deploy:
  - if [ "$TRAVIS_OS_NAME" == "osx"   ] ; then cp mac-build/dist/KnitEditor.dmg /Users/travis/KnitEditor.dmg ; fi

deploy:
  # see https://docs.travis-ci.com/user/deployment/releases/
  - provider: releases
    api_key:
      secure: v18ZcrXkIMgyb7mIrKWJYCXMpBmIGYWXhKul4/PL/TVpxtg2f/zfg08qHju7mWnAZYApjTV/EjOwWCtqn/hm2CfPFo=
    file: /Users/travis/KnitEditor.dmg
    on:
      tags: true
      condition:  "\"$TRAVIS_OS_NAME\" == \"osx\""
      repo: AllYarnsAreBeautiful/kniteditor

Note that it builds for both Linux and OSX, so each step must distinguish between the two. Here, only the OSX parts are shown. These steps are executed:

  1. Installation. The app and dmg files are built.
  2. Testing. The tests are shipped with the app in our case. This allows us to execute them at many more locations – where the user is.
  3. Before Deploy. Somehow Travis did not manage to upload from the original location; maybe it was a bug. Thus, an absolute path was created for use in (4).
  4. Deployment to github. In this case we use an API key. One could also use a password.

Installation:

#!/bin/bash
#
# execute with --user to pip install in the user home
#
set -e

HERE="`dirname \"$0\"`"
USER="$1"
cd "$HERE"

brew update

echo "# install python3"
brew install python3
echo -n "Python version: "
python3 --version
python3 -m pip install --upgrade pip

echo "# install pygame"
python3 -m pip uninstall -y pygame || true
# locally compiled pygame version
# see https://bitbucket.org/pygame/pygame/issues/82/homebrew-on-leopard-fails-to-install#comment-636765
brew install sdl sdl_image sdl_mixer sdl_ttf portmidi
brew install mercurial || true
python3 -m pip install $USER hg+http://bitbucket.org/pygame/pygame

echo "# install kivy dependencies"
brew install sdl2 sdl2_image sdl2_ttf sdl2_mixer gstreamer

echo "# install requirements"
python3 -m pip install $USER -I Cython==0.23 \
                       --install-option="--no-cython-compile"
USE_OSX_FRAMEWORKS=0 python3 -m pip install $USER kivy
python3 -m pip uninstall -y Cython==0.23
python3 -m pip install $USER -r ../requirements.txt
python3 -m pip install $USER -r ../test-requirements.txt
python3 -m pip install $USER PyInstaller

./build.sh $USER

The first step is to update brew. It cost me 4 hours to find this bug and 2 hours to work around it. If brew is not updated, Python 3.4 is installed instead of Python 3.5.

Then Python is installed, along with Pygame as the window provider for Kivy and the other requirements. It goes on with the build step. While installation is executed once on a personal Mac, the build step is executed several times, whenever the source code is changed.

#!/bin/bash
#
# execute with --user to make pip install in the user home
#
set -e

HERE="`dirname \"$0\"`"
USER="$1"
cd "$HERE"

(
  cd ..

  echo "# removing old installation of kniteditor"
  python3 -m pip uninstall -y kniteditor || true

  echo "# build the distribution"
  python3 -m pip install $USER wheel
  python3 setup.py sdist --formats=zip
  python3 setup.py bdist_wheel
  python3 -m pip uninstall -y wheel

  echo "# install the current version from the build"
  python3 -m pip install $USER --upgrade dist/kniteditor-`linux-build/package_version`.zip

  echo "# install test requirements"
  python3 -m pip install $USER --upgrade -r test-requirements.txt
)

echo "# build the app"
# see https://pythonhosted.org/PyInstaller/usage.html
python3 -m PyInstaller -d -y KnitEditor.spec

echo "# create the .dmg file"
# see http://stackoverflow.com/a/367826/1320237
KNITEDITOR_DMG="`pwd`/dist/KnitEditor.dmg"
rm -f "$KNITEDITOR_DMG"
hdiutil create -srcfolder dist/KnitEditor.app "$KNITEDITOR_DMG"

echo "The installer can be found in \"$KNITEDITOR_DMG\"."

In the first steps we install kniteditor from the built “sdist” zip file. This way we can uninstall it with pip. Also, the dependencies are installed. Then PyInstaller is invoked with a spec, and the .dmg file is created.

The spec looks like this:

# -*- mode: python -*-

import sys
site_packages = [path for path in sys.path if path.rstrip("/\\").endswith('site-packages')]
print("site_packages:", site_packages)

from kivy.tools.packaging.pyinstaller_hooks import get_deps_all, \
    hookspath, runtime_hooks

block_cipher = None

added_files = [(site_packages_, ".") for site_packages_ in site_packages]

kwargs = get_deps_all()
kwargs["datas"] = added_files
kwargs["hiddenimports"] += ['queue', 'unittest', 'unittest.mock']


a = Analysis(['_KnitEditor.py'],
             pathex=[],
             binaries=None,
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             hookspath=hookspath(),
             runtime_hooks=runtime_hooks(),
             **kwargs)
pyz = PYZ(a.pure, a.zipped_data,
             cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          exclude_binaries=True,
          name='KnitEditorX',
          debug=False,
          strip=False,
          upx=True,
          console=True )
coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas,
               strip=False,
               upx=True,
               name='KnitEditor')
app = BUNDLE(coll,
             name='KnitEditor.app',
             icon=None,
             bundle_identifier="com.ayab-knitting.KnitEditor")

Note, that all files in all site-packages are included. This way, we do not need to cope with missing modules. Also, there are three different names for

  • the entry script “_KnitEditor.py”
  • the executable “KnitEditorX”
  • the library “kniteditor”

While the OSX command line is case sensitive, the file system is not. Thus, if two of these names were the same, we could get errors during the PyInstaller build.

Lessons learned

Do “brew update” on travis.

Use absolute paths for deployment on Mac OSX travis.

Never use the same names in PyInstaller for the main script, a library and the executable. Otherwise you get a “not a directory” or “not a file” error.

Travis OSX builds max out from time to time. It is much faster to have a Mac computer at hand to create the scripts.


What is CommonsNet?

There have already been a few posts about CommonsNet, but I suppose you may still wonder what CommonsNet is and what it can do for you. I think it’s high time to talk about CommonsNet. Let me explain it step by step.

CommonsNet is an open source project of FOSSASIA. FOSSASIA has participated for several years as a mentor organization in Google Summer of Code, and CommonsNet is being developed as this year’s project.

This is how FOSSASIA sees CommonsNet:

CommonsNet

Develop Website for Standardized Open Networks Agreement similar to Creative Commons

Creative Commons is a wonderful example how standardized processes can enable millions of people to share freely. What about sharing your Internet connection? Across the world there are different legal settings and requirements for sharing of Internet connections and specifically Open Wifi connections. The Picopeering Agreement already offered a basis for completely free and open sharing: http://www.picopeer.net/

However, in many settings people cannot enable unrestricted level of sharing, e.g. if you share Internet in your office, you might need to reserve a certain bandwidth for you and restrict the bandwidth of users.

We need a website, that reflects these details and makes it transparent to the user. Just like at Creative Commons the site should generate a) a human readable file and b) a machine readable file of the level of sharing that is offered by someone. Examples are, that networks are completely free (all ports are open, all services are enabled, bandwidth is unrestricted) compared to networks with restrictions e.g. no torrent sharing, limited bandwidth and an accepted user agreement that is required.

This is how it actually looks and is being developed on the CommonsNet website.

We decided to create a simple and user-friendly wizard form to let our users provide their wifi details in a convenient way. We divided the wifi information into four different sections. The first is wireless settings: the most basic wifi information like SSID, password, authentication, speed and standard. Then we decided that payment and time limits are also a really important part of sharing a wireless connection, so we enable our users to mention these details as well.

CommonsNet-wizard 1

Then you can provide the conditions of use for your wireless connection.

CommonsNet wizard3

And finally we’ve added a legal restrictions section, which covers one of the most important parts of using a wireless connection and Internet resources, and which varies around the world due to different legal systems.

For now our users are obliged to type legal restrictions on their own, but we are working hard on making it simpler for everyone by creating a database of legal restrictions. It will let you simply choose your own country, and the legal details will be provided and put into the wizard form automatically, thanks to the collected resources.

CommonsNet wizard 4

And as you successfully finish filling in the form, you will be able to generate a PDF file or code for your website in order to share transparent wireless connection details with your users. It is simple, but it makes our world more transparent and trustworthy.

Feel free to visit us and test it https://commonsnet.herokuapp.com/

Don’t forget to join us on our social media profiles, where you can stay up to date with our amazing work and updates.


How to create a Windows Installer from tagged commits

I am working on an open-source Python project, an editor for knit work called the “KnitEditor”. Development takes place on Github. Here, I would like to give some insight into how we automated deployment of the application to a Windows installer.

The process works like this:

  1. Create a tag with git and push it to Github.
  2. AppVeyor builds the application.
  3. AppVeyor pushes the application to the Github release.

(1) Create a tag and push it

Tags should reflect the version of the software. Version “0.0.1” is in tag “v0.0.1”. We automated the tagging with the “setup.py” in the repository. Now, you can run

py -3.4 setup.py tag_and_deploy

This command checks that no such tag exists already: several commits can share the same version, so we like to make sure that we do not create two tags with the same name. Also, it can only be executed on the master branch; this way, the software has gone through all the automated quality assurance. Here is the code from the setup.py:

from distutils.core import Command
# ...
class TagAndDeployCommand(Command):

    description = "Create a git tag for this version and push it to origin."\
                  "To trigger a travis-ci build and and deploy."
    user_options = []
    name = "tag_and_deploy"
    remote = "origin"
    branch = "master"

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        if subprocess.call(["git", "--version"]) != 0:
            print("ERROR:\n\tPlease install git.")
            exit(1)
        status_lines = subprocess.check_output(
            ["git", "status"]).splitlines()
        current_branch = status_lines[0].strip().split()[-1].decode()
        print("On branch {}.".format(current_branch))
        if current_branch != self.branch:
            print("ERROR:\n\tNew tags can only be made from branch"
                  " \"{}\".".format(self.branch))
            print("\tYou can use \"git checkout {}\" to switch"
                  " the branch.".format(self.branch))
            exit(1)
        tags_output = subprocess.check_output(["git", "tag"])
        tags = [tag.strip().decode() for tag in tags_output.splitlines()]
        tag = "v" + __version__
        if tag in tags:
            print("Warning: \n\tTag {} already exists.".format(tag))
            print("\tEdit the version information in {}".format(
                    os.path.join(HERE, PACKAGE_NAME, "__init__.py")
                ))
        else:
            print("Creating tag \"{}\".".format(tag))
            subprocess.check_call(["git", "tag", tag])
        print("Pushing tag \"{}\" to remote \"{}\"."
              "".format(tag, self.remote))
        subprocess.check_call(["git", "push", self.remote, tag])
# ...
SETUPTOOLS_METADATA = dict(
# ...
    cmdclass={
# ...
        TagAndDeployCommand.name: TagAndDeployCommand
    },
)
# ...
if __name__ == "__main__":
    import setuptools
    METADATA.update(SETUPTOOLS_METADATA)
    setuptools.setup(**METADATA) # METADATA can be found in several other 

Above, you can see a “distutils” command that executed git through the command line interface.

(2) AppVeyor builds the application

As mentioned above, you can configure AppVeyor to build your application. Here are some parts of the “appveyor.yml” file, which I comment on inline:

# see https://packaging.python.org/appveyor/#adding-appveyor-support-to-your-project
environment:
  PYPI_USERNAME: niccokunzmann3
  PYPI_PASSWORD:
    secure: Gxrd9WI60wyczr9mHtiQHvJ45Oq0UyQZNrvUtKs2D5w=

  # For Python versions available on Appveyor, see
  # http://www.appveyor.com/docs/installed-software#python
  # The list here is complete (excluding Python 2.6, which
  # isn't covered by this document) at the time of writing.

  # we only need Python 3.4 for kivy
  PYTHON: "C:\\Python34"


install:
  - "%PYTHON%\\python.exe -m pip install docutils pygments pypiwin32 kivy.deps.sdl2 kivy.deps.glew"
  - "%PYTHON%\\python.exe -m pip install -r requirements.txt"
  - "%PYTHON%\\python.exe -m pip install -r test-requirements.txt"
  - "%PYTHON%\\python.exe setup.py install"
  
build_script:
- cmd: cmd /c windows-build\build.bat

test_script:
  # Put your test command here.
  # If you don't need to build C extensions on 64-bit Python 3.3 or 3.4,
  # you can remove "build.cmd" from the front of the command, as it's
  # only needed to support those cases.
  # Note that you must use the environment variable %PYTHON% to refer to
  # the interpreter you're using - Appveyor does not do anything special
  # to put the Python version you want to use on PATH.
  - windows-build\dist\KnitEditor\KnitEditor.exe /test
  - "%PYTHON%\\python.exe -m pytest --pep8 kniteditor"

artifacts:
  # bdist_wheel puts your built wheel in the dist directory
- path: dist/*
  name: distribution
- path: windows-build/dist/Installer/KnitEditorInstaller.exe
  name: installer
- path: windows-build/dist/KnitEditor
  name: standalone

deploy:
- provider: GitHub
  # http://www.appveyor.com/docs/deployment/github
  tag: $(APPVEYOR_REPO_TAG_NAME)
  description: "Release $(APPVEYOR_REPO_TAG_NAME)"
  auth_token:
    secure: j1EbCI55pgsetM/QyptIM/QDZC3SR1i4Xno6jjJt9MNQRHsBrFiod0dsuS9lpcC7
  artifact: installer
  force_update: true
  draft: false
  prerelease: false
  on:
    branch: master                 # release from master branch only
    appveyor_repo_tag: true        # deploy on tag push only

Note that the line

  - windows-build\dist\KnitEditor\KnitEditor.exe /test

executes the tests in the built application.

The application is built by this step, which invokes windows-build\build.bat; the script runs the following commands:

build_script:
- cmd: cmd /c windows-build\build.bat
"%PYTHON%\python.exe" -m pip install pyinstaller

The line above installs PyInstaller.

"%PYTHON%\python.exe" -m PyInstaller KnitEditor.spec

The line above uses pyinstaller to create an executable from the specification.

"Inno Setup 5\ISCC.exe" KnitEditor.iss

The line above uses Inno Setup to create the Installer for the built application.

(3) Deploy to Github

As you can see in the “appveyor.yml” file, the resulting executable is listed as an artifact. Artifacts can be downloaded directly from appveyor or used to deploy. In this case, we use the github deploy, which can be customized via the UI of appveyor.

- path: windows-build/dist/Installer/KnitEditorInstaller.exe
  name: installer
deploy:
- provider: GitHub
  # http://www.appveyor.com/docs/deployment/github
  tag: $(APPVEYOR_REPO_TAG_NAME)
  description: "Release $(APPVEYOR_REPO_TAG_NAME)"
  auth_token:
    secure: j1EbCI55pgsetM/QyptIM/QDZC3SR1i4Xno6jjJt9MNQRHsBrFiod0dsuS9lpcC7
  artifact: installer
  force_update: true
  draft: false
  prerelease: false
  on:
    branch: master                 # release from master branch only
    appveyor_repo_tag: true        # deploy on tag push only

Summary

Now, every time we push a tag to Github, AppVeyor builds a new installer for our application.
