Transcript from the Python Toolbox 101

At the Python User Group Berlin, I led a talk and discussion about free-of-charge tools for open-source development, based on what we use in GSoC. The whole content was in an Etherpad to which people could add their ideas.

Because there are a lot of tools, I thought I would share the list with you. Maybe it is of use. Here is the talk:


Python Users Berlin 2016/07/14 Talk & Discussion

 

START: 19:15
Agenda 1min END: 19:15
======
– Example library
– What is code
– Version Control
  – Python Package Index
– …, see headings
– discussion: write down what does not fit into my structure
Example Library (2min)  19:17
======================
What is Code (2min) 19:19
===================
.. note:: This frames our discussion
– Source files .py, .pyw
– tests
– documentation
– quality
– readability
– bugs and problems
– <3
– configuration files: plain text for editing
Version Control (2min) 19:21
======================
.. note:: Sharing and Collaboration
– no Version Control:
  – Dropbox
  – Google drive
  – Telekom cloud
  – ftp, windows share
– Version Control Tools:
  – git
    – gitweb (own server)
  – mercurial
  – svn
  – perforce (proprietary)
Python Package Index (3min) 19:24
—————————
.. note:: Shipping to the users
Hosts Python packages you develop.
Example: the “knittingpattern” package
Installation from PyPI with pip:
    $ python3 -m pip install knittingpattern # Linux
    > py -3.4 -m pip install knittingpattern # Windows
Documentation upload included!
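
A sketch of how a release could be uploaded to PyPI (assuming a standard setup.py project; twine is one common upload tool, not the only option):

    $ python3 setup.py sdist bdist_wheel
    $ python3 -m pip install twine
    $ twine upload dist/*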
Documentation (3min) 19:27
====================
.. note:: Inform users
I came across a talk:
Documentation can be:
– tutorials
– how to
– introduction to the community/development process
– code documentation!!!
– chat
Building the documentation (3min)  19:30
———————————
Formats:
– HTML
– PDF
– reST
– EPUB
– doc strings in source code
– test?
Tools:
– Sphinx
– doxygen
– doc strings
  – standard for how to write docstrings in Python
Example: Sphinx  3min 19:33
~~~~~~~~~~~~~~~
– Used for Python
– Used for knittingpattern
Python file:
Documentation file with sphinx.ext.autodoc:
Built documentation:
    See the return type str; Intersphinx can reference across documentation sets.
    Intersphinx uses objects inventory, hosted with the documentation:
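
The original pad showed the files as screenshots here. As a hedged reconstruction, the Python file could contain a Sphinx-style docstring like this (the function load and its contents are hypothetical):

    def load(path):
        """Load a knitting pattern from a file.

        :param str path: the path of the file to load
        :return: a message describing the result
        :rtype: str
        """
        return "loaded {}".format(path)

The documentation file with sphinx.ext.autodoc then only needs:

    .. automodule:: knittingpattern
       :members:

In the built documentation, the :rtype: str above is what Intersphinx can turn into a link to the Python documentation.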
Testing the documentation:
    – TODO: link
      – everything is included in the docs
      – everything that is public is documented
      
      docstring syntax styles:
      – numpy
      – google
      – sphinx
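
For illustration, here is the same function documented in each of the three styles (a sketch):

    # sphinx style
    def fib(n):
        """Return the n-th Fibonacci number.

        :param int n: the index of the number
        :rtype: int
        """

    # google style
    def fib(n):
        """Return the n-th Fibonacci number.

        Args:
            n (int): the index of the number

        Returns:
            int: the Fibonacci number
        """

    # numpy style
    def fib(n):
        """Return the n-th Fibonacci number.

        Parameters
        ----------
        n : int
            the index of the number

        Returns
        -------
        int
            the Fibonacci number
        """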
Hosting the Documentation (3min) 19:36
——————————–
Tools:
– pythonhosted
  only latest version
– readthedocs.io
  several branches, versions, languages
– wiki pages
Code Testing 2min 19:38
============
.. note:: Tests show the presence of mistakes, not their absence.
What can be tested:
– features
– style: pep8, pylint, …
– documentation
– complexity
Testing Features with unit tests 4min 19:42
——————————–
code:
    def fib(i): …
Tools with different styles:
– unittest
  
    import unittest
    from fibonacci import fib
    class FibonacciTest(unittest.TestCase):
        def testCalculation(self):
            self.assertEqual(fib(0), 0)
            self.assertEqual(fib(1), 1)
            self.assertEqual(fib(5), 5)
            self.assertEqual(fib(10), 55)
            self.assertEqual(fib(20), 6765)
    if __name__ == "__main__":
        unittest.main()
 
– doctest
    import doctest
    def fib(n):
        """
        Calculates the n-th Fibonacci number iteratively  
        >>> fib(0)
        0
        >>> fib(1)
        1
        >>> fib(10) 
        55
        >>> fib(15)
        610
        """
        a, b = 0, 1
        for i in range(n):
            a, b = b, a + b
        return a
    if __name__ == "__main__":
        doctest.testmod()
– pytest (works with unittest; see the usage sketch after this list)
    import pytest
    from fibonacci import fib
    
    @pytest.mark.parametrize("parameter,value", [(0, 0), (1, 1), (10, 55), (15, 610)])
    def test_fibonacci(parameter, value):
        assert fib(parameter) == value
– nose tests?
– …
– pyhumber
– assert in code, PyHamcrest
– Behaviour-driven development
  – human-readable tests
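
A typical local invocation for the pytest example above could be (a sketch; pytest discovers files named test_*.py automatically):

    $ python3 -m pip install pytest
    $ python3 -m pytest test_fibonacci.py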
Automated Test Run & Continuous Integration 2min 19:44
===========================================
Several branches:
– production branch always works
– feature branches
– automated test before feature is put into production
Tools running tests 6min 19:50
——————-
– Travis CI for Mac, Ubuntu
– Appveyor for Windows
Host yourself:
– buildbot
– Hudson
– Jenkins
– Teamcity
– Circle CI
  + Selenium for website tests
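
A minimal .travis.yml for a Python project could look like this (a sketch; a complete real-world example appears in the deployment posts further down in this document):

    language: python
    python:
      - "3.4"
    install:
      - pip install -r requirements.txt
    script:
      - python -m pytest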
Tools for code quality 4min 19:54
———————-
– landscape
  complexity, style, documentation
  – libraries are available separately
    – flake8
    – destinate
    – pep257
– codeclimate
  code duplication, code coverage
  – libraries are available separately
– PyCharm
  – integrates what landscape has
  – + complexity
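
Most of these checks can also be run locally. A sketch using flake8 and pep257 (the package name knittingpattern is just the example from above):

    $ python3 -m pip install flake8 pep257
    $ flake8 knittingpattern
    $ pep257 knittingpattern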
Bugs, Issues, Pull Requests, Milestones 4min 19:58
=======================================
.. note:: this is also a way to get people into the project
1. find bug
2. open an issue if it is a big bug, discuss
3. create pull request
4. merge
5. deploy
– github
  issue tracker
– waffle.io – scrum board
  merges several GitHub issue trackers
– Redmine
– JIRA
– trac 
– github issues + zenhub integrated in github
– gitlab
– gerrit: a framework for an alternative review workflow https://www.gerritcodereview.com/
  1. propose change
  2. test
  3. someone reviews the code
      – X people needed
  The Qt Company uses it
Localization 2min 20:00
============
crowdin.com
    Crowdsourced translation tool
Discussion
– spellchecker is integrated in PyCharm
  – character set
  – new vocabulary
  – not for continuous integration (CI)
– Emacs
– pylint plugin 
   – not all languages?
– readthedocs
  – add a GitHub project
  – hosts docs
– sphinx-plugin?
– PyCon testing talk:
    – Hypothesis package
      – tries to break your code
      – throws in a lot of edge cases (huge number, nothing, …)
      -> find obscure edge cases
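
      A minimal sketch of such a Hypothesis test (assuming the fib function from the testing examples above):

          from hypothesis import given
          from hypothesis import strategies as st
          from fibonacci import fib

          @given(st.integers(min_value=2, max_value=500))
          def test_fib_satisfies_recurrence(n):
              # Hypothesis generates many values of n, including edge cases.
              assert fib(n) == fib(n - 1) + fib(n - 2)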
      
Did someone create a Pylint plugin?
– question:
    – cyclomatic code complexity
    – which metrics tools do you know?
Virtual Environment:
    nobody should install everything into the system Python
    -> switch between different Python versions
    – python3-venv (see the sketch below)
      – slightly different from virtualenv (which is more mature)
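
A minimal venv workflow could look like this (a sketch for Linux/Mac):

    $ python3 -m venv myenv                    # create an isolated environment
    $ source myenv/bin/activate                # activate it
    $ python3 -m pip install knittingpattern   # installs only into myenv
    $ deactivate                               # leave the environment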
Beginners:
    Windows:
        install Anaconda

Read and Understand Codacy Reports

To begin understanding reports, let’s start with what Codacy is.

So. What is Codacy?

Codacy is an automated code review tool that helps developers to save time in code reviews and to tackle technical debt. It centralises customizable code patterns and enforces them within engineering teams. Codacy tracks new issues by severity level for every commit and pull request.

It can be integrated with the GitHub repository to review every commit and pull request in terms of quality and errors. It checks code style, security, duplication, complexity and coverage on every change while tracking code quality throughout your sprints.

You can integrate Codacy in your private/public repository by going here. Sign up with your Github account and follow the steps mentioned. More information regarding GitHub integration can be found here.

Features of Codacy:

  • SAVE TIME IN CODE REVIEWS
  • INTEGRATED IN YOUR WORKFLOW
  • TRACK YOUR PROJECT QUALITY EVOLUTION

Now we get to understanding Codacy reports.

Below is an image of what a Codacy dashboard looks like when evaluating a project. To evaluate a project, we should know what "software metrics" are.


A software metric is a standard of measure of a degree to which a software system or process possesses some property. Even if a metric is not a measurement (metrics are functions, while measurements are the numbers obtained by the application of metrics), the two terms are often used as synonyms. Since quantitative measurements are essential in all sciences, there is a continuous effort by computer science practitioners and theoreticians to bring similar approaches to software development. The goal is to obtain objective, reproducible and quantifiable measurements, which may have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance testing, software debugging, software performance optimization, and optimal personnel task assignments. You can learn more about software metrics by visiting here.
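
As a concrete illustration, one such metric, cyclomatic complexity, can be computed locally with the radon library (a sketch; radon is an independent Python tool, not part of Codacy):

    # pip install radon
    from radon.complexity import cc_visit

    code = """
    def classify(x):
        if x < 0:
            return "negative"
        elif x == 0:
            return "zero"
        return "positive"
    """

    # cc_visit parses the source and returns one result per function,
    # each with a cyclomatic complexity score (here: classify 3).
    for block in cc_visit(code):
        print(block.name, block.complexity)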

A Codacy Dashboard provides answers to the following three main questions:

  • What is the state of your project's code quality?
  • How is it evolving over time?
  • What are the hotspots in your code?

Components of the Dashboard:

  • Introduction: The Dashboard is the central screen of any project on Codacy.
  • Project Certification: After running a complete analysis of the project or the GitHub repository, Codacy provides an overall grade for the project from A-F. The grade depends on the following parameters:
    • Error Prone
    • Code Complexity
    • Code Style
    • Unused Code
    • Security
    • Compatibility
    • Documentation
    • Performance
  • Issues Breakdown: The issues breakdown represents the different issues from different areas in a pictorial form. It provides a quick overview of the total number of issues in the repository and the breakdown per category.
    Users can click on the specific category for more details.
  • Code Coverage: If you set up code coverage on your repository, you will be able to see the overall covered percentage on the dashboard. It will also show the files with the worst code coverage, allowing you to jump directly to the file to see the details.
  • Goals: Users can define individual goals to remove errors and get better grades for their projects.
  • Historic data: The Codacy dashboard also provides an analysis of historic data, so as to keep track of the progress on improving the code and the milestones covered in reaching the goal.

Codacy provides a nice dashboard showing the metrics. Codacy saves hours in code review and code quality monitoring, for small teams as well as big companies. And as the Codacy team itself says, "LOVED BY DEVELOPERS" – being a developer, I wouldn't deny this statement. It helped a lot in improving the code quality of my project, Engelsystem.

Development: https://github.com/fossasia/engelsystem Issues/Bugs: https://github.com/fossasia/engelsystem/issues

Adding extensive help for sTeam

This task was something I came up with as an enhancement because of the problems I faced while using sTeam for the first time. During my first week of using sTeam I had a tough time getting used to the commands, and that is when I opened the issue to improve the help. Help for commands consisted of one-liners that were not very helpful, so I took up the task of improving it, so that new users don't have to face the difficulties that I faced.

Issue: https://github.com/societyserver/sTeam/issues/30

Solution: https://github.com/societyserver/sTeam/pull/95

Not a lot of technical details were involved in this task, but it was time consuming. I wrote down a few lines explaining what each command does and also added the syntax for the commands. While doing this I also realized more improvements that could be made and added them to my task list. My mentor had explained to me how rooms and gates are the same. I discovered that the command gothrough was violating this, as it allowed users to go through gates but not rooms. I discussed this on IRC and we came up with the solution that we should change this command to enter and allow it to operate on both rooms and gates.

This enhancement became my next task and I worked on changing this command. The function gothrough was changed to enter and the conditions required for it to work on rooms were added. This paved the way for my next task. The look command showed rooms and gates under different sections. Now that there was no difference between rooms and gates, I combined these two sections to change the output of the look command.

[Screenshot: output of look before the changes]
[Screenshot: output of look after the changes]

 

Issue: https://github.com/societyserver/sTeam/issues/100

Solution: https://github.com/societyserver/sTeam/pull/101

By the end of the week I had started on my next task, which was a major one: writing a testing framework and test cases for coal command calls. I will discuss this in more detail in my next blog post.

Deploying a Kivy Application with PyInstaller for Mac OSX with Travis CI to Github

In this sprint for the kniteditor library we focused on automatic deployment for Windows and Mac. The idea: whenever a tag is pushed to Github, a new Travis build is triggered. The newly built app is uploaded to Github as a “.dmg” file.

Travis

Travis is configured with the “.travis.yml” file, which you can see here:

language: python

# see https://docs.travis-ci.com/user/multi-os/
matrix:
  include:
    - os: linux
      python: 3.4
    - os: osx
      language: generic
  allow_failures:
    - os: osx

install:
  - if [ "$TRAVIS_OS_NAME" == "osx"   ] ; then   mac-build/install.sh ; fi

script:
  - if [ "$TRAVIS_OS_NAME" == "osx"   ] ; then   mac-build/test.sh ; fi

before_deploy:
  - if [ "$TRAVIS_OS_NAME" == "osx"   ] ; then cp mac-build/dist/KnitEditor.dmg /Users/travis/KnitEditor.dmg ; fi

deploy:
  # see https://docs.travis-ci.com/user/deployment/releases/
  - provider: releases
    api_key:
      secure: v18ZcrXkIMgyb7mIrKWJYCXMpBmIGYWXhKul4/PL/TVpxtg2f/zfg08qHju7mWnAZYApjTV/EjOwWCtqn/hm2CfPFo=
    file: /Users/travis/KnitEditor.dmg
    on:
      tags: true
      condition:  "\"$TRAVIS_OS_NAME\" == \"osx\""
      repo: AllYarnsAreBeautiful/kniteditor

Note that it builds on both Linux and OSX. Thus, each step must distinguish between the two; here, only the OSX parts are shown. These steps are executed:

  1. Installation. The app and dmg files are built.
  2. Testing. The tests are shipped with the app in our case. This allows us to execute them at many more locations – where the user is.
  3. Before Deploy. Somehow Travis did not manage to upload from the original location. Maybe it was a bug. Thus, an absolute path was created for use in step 4.
  4. Deployment to github. In this case we use an API key. One could also use a password.

Installation:

#!/bin/bash
#
# execute with --user to pip install in the user home
#
set -e

HERE="`dirname \"$0\"`"
USER="$1"
cd "$HERE"

brew update

echo "# install python3"
brew install python3
echo -n "Python version: "
python3 --version
python3 -m pip install --upgrade pip

echo "# install pygame"
python3 -m pip uninstall -y pygame || true
# locally compiled pygame version
# see https://bitbucket.org/pygame/pygame/issues/82/homebrew-on-leopard-fails-to-install#comment-636765
brew install sdl sdl_image sdl_mixer sdl_ttf portmidi
brew install mercurial || true
python3 -m pip install $USER hg+http://bitbucket.org/pygame/pygame

echo "# install kivy dependencies"
brew install sdl2 sdl2_image sdl2_ttf sdl2_mixer gstreamer

echo "# install requirements"
python3 -m pip install $USER -I Cython==0.23 \
                       --install-option="--no-cython-compile"
USE_OSX_FRAMEWORKS=0 python3 -m pip install $USER kivy
python3 -m pip uninstall -y Cython==0.23
python3 -m pip install $USER -r ../requirements.txt
python3 -m pip install $USER -r ../test-requirements.txt
python3 -m pip install $USER PyInstaller

./build.sh $USER

The first step is to update brew. It cost me four hours to find this bug, and two hours before that to work around it. If brew is not updated, Python 3.4 is installed instead of Python 3.5.

Then Python, Pygame (as the window provider for Kivy), and the other requirements are installed. It goes on with the build step. While the installation is executed once on a personal Mac, the build step is executed several times, whenever the source code changes.

#!/bin/bash
#
# execute with --user to make pip install in the user home
#
set -e

HERE="`dirname \"$0\"`"
USER="$1"
cd "$HERE"

(
  cd ..

  echo "# removing old installation of kniteditor"
  python3 -m pip uninstall -y kniteditor || true

  echo "# build the distribution"
  python3 -m pip install $USER wheel
  python3 setup.py sdist --formats=zip
  python3 setup.py bdist_wheel
  python3 -m pip uninstall -y wheel

  echo "# install the current version from the build"
  python3 -m pip install $USER --upgrade dist/kniteditor-`linux-build/package_version`.zip

  echo "# install test requirements"
  python3 -m pip install $USER --upgrade -r test-requirements.txt
)

echo "# build the app"
# see https://pythonhosted.org/PyInstaller/usage.html
python3 -m PyInstaller -d -y KnitEditor.spec

echo "# create the .dmg file"
# see http://stackoverflow.com/a/367826/1320237
KNITEDITOR_DMG="`pwd`/dist/KnitEditor.dmg"
rm -f "$KNITEDITOR_DMG"
hdiutil create -srcfolder dist/KnitEditor.app "$KNITEDITOR_DMG"

echo "The installer can be found in \"$KNITEDITOR_DMG\"."

In the first steps we install the kniteditor from the built “sdist” zip file. This way we can uninstall it with pip. Also, the dependencies are installed. Then PyInstaller is invoked with a spec file, and the .dmg file is created.

The spec looks like this:

# -*- mode: python -*-

import sys
site_packages = [path for path in sys.path if path.rstrip("/\\").endswith('site-packages')]
print("site_packages:", site_packages)

from kivy.tools.packaging.pyinstaller_hooks import get_deps_all, \
    hookspath, runtime_hooks

block_cipher = None

added_files = [(site_packages_, ".") for site_packages_ in site_packages]

kwargs = get_deps_all()
kwargs["datas"] = added_files
kwargs["hiddenimports"] += ['queue', 'unittest', 'unittest.mock']


a = Analysis(['_KnitEditor.py'],
             pathex=[],
             binaries=None,
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             hookspath=hookspath(),
             runtime_hooks=runtime_hooks(),
             **kwargs)
pyz = PYZ(a.pure, a.zipped_data,
             cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          exclude_binaries=True,
          name='KnitEditorX',
          debug=False,
          strip=False,
          upx=True,
          console=True )
coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas,
               strip=False,
               upx=True,
               name='KnitEditor')
app = BUNDLE(coll,
             name='KnitEditor.app',
             icon=None,
             bundle_identifier="com.ayab-knitting.KnitEditor")

Note that all files in all site-packages are included. This way, we do not need to cope with missing modules. Also, there are three different names for

  • the entry script “_KnitEditor.py”
  • the executable “KnitEditorX”
  • the library “kniteditor”

While the OSX command line is case sensitive, the file system is not. Thus, if any of these names match, we can get errors during the PyInstaller build.

Lessons learned

Do “brew update” on travis.

Use absolute paths for deployment on Mac OSX travis.

Never use the same names in PyInstaller for the main script, a library and the executable. Otherwise you get a “not a directory” or “not a file” error.

Travis OSX builds max out from time to time. It is much faster to have a Mac computer at hand to create the scripts.

What is CommonsNet?

There have already been a few posts about CommonsNet, but I suppose you may wonder what CommonsNet is and what it can do for you. I think it's high time to talk about CommonsNet. Let me explain it step by step.

CommonsNet is an open source project of FOSSASIA. FOSSASIA has participated for several years as a mentor organization in Google Summer of Code, and CommonsNet is being developed as this year's project.

This is how FOSSASIA sees CommonsNet:

CommonsNet

Develop Website for Standardized Open Networks Agreement similar to Creative Commons

Creative Commons is a wonderful example of how standardized processes can enable millions of people to share freely. What about sharing your Internet connection? Across the world there are different legal settings and requirements for the sharing of Internet connections, specifically Open Wifi connections. The Picopeering Agreement already offers a basis for completely free and open sharing: http://www.picopeer.net/

However, in many settings people cannot enable an unrestricted level of sharing; e.g. if you share Internet in your office, you might need to reserve a certain bandwidth for yourself and restrict the bandwidth of users.

We need a website that reflects these details and makes them transparent to the user. Just like at Creative Commons, the site should generate a) a human-readable file and b) a machine-readable file describing the level of sharing that is offered by someone. Examples are networks that are completely free (all ports are open, all services are enabled, bandwidth is unrestricted) compared to networks with restrictions, e.g. no torrent sharing, limited bandwidth, and a required user agreement.

This is how it actually looks as it is being developed on the CommonsNet website.

We decided to create a simple and user-friendly wizard form to let our users provide their wifi details in a convenient way. We divided the different kinds of wifi information into four sections. The first is wireless settings – the most basic wifi information like SSID, password, authentication, speed and standard. Then we decided that payment and time limit are also really important parts of sharing a wireless connection, so we enable our users to mention these details as well.

[Screenshot: CommonsNet wizard, step 1]

Then, you can provide the conditions of use of your wireless connection.

[Screenshot: CommonsNet wizard, step 3]

And finally we've put in a section on legal restrictions, which are one of the most important parts of using a wireless connection and Internet resources, and which vary across the world due to different legal systems.

For now our users are obliged to type legal restrictions on their own, but we are working hard on making it simpler for everyone by creating a legal restrictions database, which will let you simply choose your own country, and the legal details will be provided and put into the wizard form automatically thanks to collected resources.

[Screenshot: CommonsNet wizard, step 4]

And as you successfully finish filling in the form, you will be able to generate a PDF file or code for your website in order to share transparent wireless connection information with your users. It is so simple, but it makes our world more transparent and trustworthy.

Feel free to visit us and test it: https://commonsnet.herokuapp.com/

Don't forget to join us on our social media profiles, where you can stay up to date with our amazing work and updates.

How to create a Windows Installer from tagged commits

I am working on an open-source Python project, an editor for knit work called the “KnitEditor”. Development takes place at Github. Here, I would like to give some insight into how we automated deployment of the application as a Windows installer.

The process works like this:

  1. Create a tag with git and push it to Github.
  2. AppVeyor builds the application.
  3. AppVeyor pushes the application to the Github release.

(1) Create a tag and push it

Tags should reflect the version of the software. Version “0.0.1” is in tag “v0.0.1”. We automated the tagging with the “setup.py” in the repository. Now, you can run

py -3.4 setup.py tag_and_deploy

which checks that no such tag already exists. Several commits can have the same version, so we like to make sure that we do not create two tags with the same name. Also, this can only be executed on the master branch; this way, the software has gone through all the automated quality assurance. Here is the code from the setup.py:

import os
import subprocess

from distutils.core import Command
# ...
class TagAndDeployCommand(Command):

    description = "Create a git tag for this version and push it to origin "\
                  "to trigger a travis-ci build and deploy."
    user_options = []
    name = "tag_and_deploy"
    remote = "origin"
    branch = "master"

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        if subprocess.call(["git", "--version"]) != 0:
            print("ERROR:\n\tPlease install git.")
            exit(1)
        status_lines = subprocess.check_output(
            ["git", "status"]).splitlines()
        current_branch = status_lines[0].strip().split()[-1].decode()
        print("On branch {}.".format(current_branch))
        if current_branch != self.branch:
            print("ERROR:\n\tNew tags can only be made from branch"
                  " \"{}\".".format(self.branch))
            print("\tYou can use \"git checkout {}\" to switch"
                  " the branch.".format(self.branch))
            exit(1)
        tags_output = subprocess.check_output(["git", "tag"])
        tags = [tag.strip().decode() for tag in tags_output.splitlines()]
        tag = "v" + __version__
        if tag in tags:
            print("Warning: \n\tTag {} already exists.".format(tag))
            print("\tEdit the version information in {}".format(
                    os.path.join(HERE, PACKAGE_NAME, "__init__.py")
                ))
        else:
            print("Creating tag \"{}\".".format(tag))
            subprocess.check_call(["git", "tag", tag])
        print("Pushing tag \"{}\" to remote \"{}\"."
              "".format(tag, self.remote))
        subprocess.check_call(["git", "push", self.remote, tag])
# ...
SETUPTOOLS_METADATA = dict(
# ...
    cmdclass={
# ...
        TagAndDeployCommand.name: TagAndDeployCommand
    },
)
# ...
if __name__ == "__main__":
    import setuptools
    METADATA.update(SETUPTOOLS_METADATA)
    setuptools.setup(**METADATA) # METADATA can be found in several other 

Above, you can see a “distutils” command that executes git through the command-line interface.

(2) AppVeyor builds the application

As mentioned above, you can configure AppVeyor to build your application. Here are some parts of the “appveyor.yml” file, which I comment on inline:

# see https://packaging.python.org/appveyor/#adding-appveyor-support-to-your-project
environment:
  PYPI_USERNAME: niccokunzmann3
  PYPI_PASSWORD:
    secure: Gxrd9WI60wyczr9mHtiQHvJ45Oq0UyQZNrvUtKs2D5w=

  # For Python versions available on Appveyor, see
  # http://www.appveyor.com/docs/installed-software#python
  # The list here is complete (excluding Python 2.6, which
  # isn't covered by this document) at the time of writing.

  # we only need Python 3.4 for kivy
  PYTHON: "C:\\Python34"


install:
  - "%PYTHON%\\python.exe -m pip install docutils pygments pypiwin32 kivy.deps.sdl2 kivy.deps.glew"
  - "%PYTHON%\\python.exe -m pip install -r requirements.txt"
  - "%PYTHON%\\python.exe -m pip install -r test-requirements.txt"
  - "%PYTHON%\\python.exe setup.py install"
  
build_script:
- cmd: cmd /c windows-build\build.bat

test_script:
  # Put your test command here.
  # If you don't need to build C extensions on 64-bit Python 3.3 or 3.4,
  # you can remove "build.cmd" from the front of the command, as it's
  # only needed to support those cases.
  # Note that you must use the environment variable %PYTHON% to refer to
  # the interpreter you're using - Appveyor does not do anything special
  # to put the Python version you want to use on PATH.
  - windows-build\dist\KnitEditor\KnitEditor.exe /test
  - "%PYTHON%\\python.exe -m pytest --pep8 kniteditor"

artifacts:
  # bdist_wheel puts your built wheel in the dist directory
- path: dist/*
  name: distribution
- path: windows-build/dist/Installer/KnitEditorInstaller.exe
  name: installer
- path: windows-build/dist/KnitEditor
  name: standalone

deploy:
- provider: GitHub
  # http://www.appveyor.com/docs/deployment/github
  tag: $(APPVEYOR_REPO_TAG_NAME)
  description: "Release $(APPVEYOR_REPO_TAG_NAME)"
  auth_token:
    secure: j1EbCI55pgsetM/QyptIM/QDZC3SR1i4Xno6jjJt9MNQRHsBrFiod0dsuS9lpcC7
  artifact: installer
  force_update: true
  draft: false
  prerelease: false
  on:
    branch: master                 # release from master branch only
    appveyor_repo_tag: true        # deploy on tag push only

Note that the line

  - windows-build\dist\KnitEditor\KnitEditor.exe /test

executes the tests in the built application.

The following commands build the application; they are executed by this step:

build_script:
- cmd: cmd /c windows-build\build.bat
"%PYTHON%\python.exe" -m pip install pyinstaller

The line above installs PyInstaller.

"%PYTHON%\python.exe" -m PyInstaller KnitEditor.spec

The line above uses PyInstaller to create an executable from the specification.

"Inno Setup 5\ISCC.exe" KnitEditor.iss

The line above uses Inno Setup to create the Installer for the built application.

(3) Deploy to Github

As you can see in the “appveyor.yml” file, the resulting executable is listed as an artifact. Artifacts can be downloaded directly from AppVeyor or used for deployment. In this case, we use the GitHub deploy provider, which can be customized via the AppVeyor UI.

- path: windows-build/dist/Installer/KnitEditorInstaller.exe
  name: installer
deploy:
- provider: GitHub
  # http://www.appveyor.com/docs/deployment/github
  tag: $(APPVEYOR_REPO_TAG_NAME)
  description: "Release $(APPVEYOR_REPO_TAG_NAME)"
  auth_token:
    secure: j1EbCI55pgsetM/QyptIM/QDZC3SR1i4Xno6jjJt9MNQRHsBrFiod0dsuS9lpcC7
  artifact: installer
  force_update: true
  draft: false
  prerelease: false
  on:
    branch: master                 # release from master branch only
    appveyor_repo_tag: true        # deploy on tag push only

Summary

Now, every time we push a tag to Github, AppVeyor builds a new installer for our application.

Pike

sTeam (societyserver) aims to be a platform for developing collaborative applications.
sTeam server project repository: sTeam.
sTeam-REST API repository: sTeam-REST

The project is written in the Pike programming language. Many of us may not be aware of it. Let's take a dive into Pike and learn more about it.

What is pike?

Pike is a general purpose programming language, which means that you can put it to use for almost any task. Its application domain spans anything from the world of the Net to the world of multimedia applications, or environments where your shell could use some spicy text processing or system administration tools. Your imagination sets the limit, but Pike will probably extend it far beyond what you previously considered within reach. Pike is a dynamic programming language with a syntax similar to Java and C. It is simple to learn, does not require long compilation passes and has powerful built-in data types allowing simple and really fast data manipulation. Pike is released under the GNU GPL, GNU LGPL and MPL; this means that you can fetch it and use it for almost any purpose you please.

Who made pike?

We will not bother you here with the entire history of Pike, but in a quick summary we should credit Fredrik Hübinette, who started writing Pike (at that time called µLPC), Roxen Internet Software, who funded the Pike development during its first years, the Pike development team that continues its development at present and the Software and Systems division of the Department of Computer and Information Science (IDA for short) at Linköping University, who currently provides funding for some Pike research and development, as well as this site. Also, without the participation of the friendly community of Pike users and advocates all over the world, Pike would hardly be the same either; we are grateful for your commitment.

Who uses pike?

Besides those already mentioned (Roxen IS and SaS, IDA, LiU), there are many other people scattered throughout the world who have put Pike to good use. Read some of their testimonials and find out more about how they value Pike.

…and for what?

Roxen Internet Software wrote the free web servers Spinner, Roxen Challenger and Roxen WebServer in Pike, as well as the highly appraised commercial content management system Roxen Platform / Roxen CMS. SaS uses Pike for their research, currently concentrated on the field of compositioning technology and language connectors. Other noteworthy applications include the works of Per Hedbor, who among other things has written AIDO, a nifty network aware peer-to-peer client/server media player and a distributed jukebox system, both in Pike.

Why use pike?

Pike is Powerful – Being a high-level language, Pike gives you concise, modular code, automatic memory management, flexible and efficient data types, transparent bignum support, a powerful type system, exception handling and quick iterative development cycles, alleviating the need for compiling and linking code before you can run it; on-the-fly modifications are milliseconds away from being put to practice.
Pike is Fast – Most of the time critical parts of Pike are heavily optimized; Pike is really, really fast and uses efficient, carefully handcrafted algorithms and data types. Visit The Computer Language Shootout Benchmarks for more facts and figures on Pike’s performance.
Pike is Extendable – with modules written in C for speed or Pike for brevity. Pike supports multiple inheritance and most other constructs you would demand from a modern programming language.
Pike is Scalable – as useful for small scripts as for bigger and more complex applications. Where some other scripting languages aim for providing unreadable language constructs for minimal code size, Pike aims for a small orthogonal set of readable language elements that encourage good habits and improve maintainability.
Pike is Portable – Platform independence has always been our aim with Pike, and it compiles on most flavors of Unix, as well as on Windows (both ia32 and ia64 versions) and Mac OS X. To see the present status of how well the stable and development branches of Pike work on some of the many hardware architectures and operating systems Pike supports, visit the pikefarm pages.
Pike is Free – Pike is released under multiple licenses; the GNU licenses GPL and LGPL, as well as the Mozilla license, MPL.
Paradigms – Pike supports most programming paradigms, including (but not limited to) object orientation, functional programming, aspect orientation and imperative programming.
Pike is Constantly Improving – While already being a great language, Pike is actively developed and backed by both an active Pike community and the computer science scientific research community. This means that Pike will stay the razorsharp tool that Pike people over the world expect it to be, while assimilating recent findings from the scientific forefront of research, spanning fields such as compositioning, regexp technology and the world of ontologies, also known as the Semantic Web.
Pike is available via git – To help you get your hands on the very latest development versions of Pike, we provide Pike to anyone and everyone who knows his/her way around git. Stay as updated as you like on recent activity in the repository with our on-site repository browser.
You Too are Invited – We welcome contributions to pike, and it is our intention to provide write access to our repositories for those of you who want to join us in improving Pike, be it by contributing code, documentation, work with the web site or making tools and applications of general interest.

Example:


int main() {
  write("Hello world!\n");
  return 0;
}
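
Assuming the example above is saved as hello.pike, it can be run directly with the Pike interpreter:

    $ pike hello.pike
    Hello world!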

Pike Resources

Documentation and other resources about the project can be found on the official website of Pike.

Installing Pike

Pike can be installed on Debian-based distros by running the command

sudo apt-get install pike

Feel free to explore more about this programming language.

(source: http://pike.lysator.liu.se/)

Suggestions for improvements are welcome.

Check out the FOSSASIA Ideas page for more information on projects supported by FOSSASIA.

Continuous Integration and Automated Testing for Engelsystem

Every software development group tests its products, yet delivered software always has defects. Test engineers strive to catch them before the product is released but they always creep in and they often reappear, even with the best manual testing processes. Using automated testing is the best way to increase the effectiveness, efficiency and coverage of your software testing.

Manual software testing is performed by a human sitting in front of a computer carefully going through application screens, trying various usage and input combinations, comparing the results to the expected behavior and recording their observations. Manual tests are repeated often during development cycles for source code changes and other situations like multiple operating environments and hardware configurations.

Continuous integration (CI) has emerged as one of the most efficient ways to develop code. But testing has not always been a major part of the CI conversation.

In some respects, that’s not surprising. Traditionally, CI has been all about speeding up the coding, building, and release process. Instead of having each programmer write code separately, integrate it manually, and then wait until the next daily or weekly build to see if the changes broke anything, CI lets developers code and compile on a virtually continuous basis. It also means developers and admins can work together seamlessly since the programming and build processes are always in sync.

Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

By integrating regularly, you can detect errors quickly, and locate them more easily.

Solve problems quickly

Because you’re integrating so frequently, there is significantly less back-tracking to discover where things went wrong, so you can spend more time building features.

Continuous Integration is cheap. Not continuously integrating is costly. If you don’t follow a continuous approach, you’ll have longer periods between integrations. This makes it exponentially more difficult to find and fix problems. Such integration problems can easily knock a project off-schedule, or cause it to fail altogether.

Continuous Integration brings multiple benefits to your organization:

  • Say goodbye to long and tense integrations
  • Increase visibility which enables greater communication
  • Catch issues fast and nip them in the bud
  • Spend less time debugging and more time adding features
  • Proceed with the confidence you’re building on a solid foundation
  • Stop waiting to find out if your code’s going to work
  • Reduce integration problems allowing you to deliver software more rapidly

“Continuous Integration doesn’t get rid of bugs, but it does make them dramatically easier to find and remove.”

– Martin Fowler, Chief Scientist, ThoughtWorks

Continuous Integration is backed by several important principles and practices.

Practices in Continuous Integration:

  • Maintain a single source repository
  • Automate the build
  • Make your build self-testing
  • Every commit should build on an integration machine
  • Keep the build fast
  • Test in a clone of the production environment
  • Make it easy for anyone to get the latest executable
  • Everyone can see what’s happening
  • Automate deployment

How to do Continuous Integration:

  • Developers check out code into their private workspaces.
  • When done, commit the changes to the repository.
  • The CI server monitors the repository and checks out changes when they occur.
  • The CI server builds the system and runs unit and integration tests.
  • The CI server releases deployable artifacts for testing.
  • The CI server assigns a build label to the version of the code it just built.
  • The CI server informs the team of the successful build.
  • If the build or tests fail, the CI server alerts the team.
  • The team fixes the issue at the earliest opportunity.
  • Continue to continually integrate and test throughout the project.

The CI tools implemented in Engelsystem are as follows:

  • Travis-CI: Travis CI is a hosted, distributed continuous integration service used to build and test software projects hosted on GitHub. It is integrated using the .travis.yml file in the root folder.
    language: php
    php:
    - '5.4'
    - '5.5'
    - '5.6'
    - '7.0'
    script: cd test && phpunit
  • Nitpick-CI: Automatic comments on PSR-2 violations in one click, so your team can focus on better code review. It requires one click to integrate with the repository.
  • Circle-CI: CircleCI was founded in 2011 with the mission of giving every developer state-of-the-art automated testing and continuous integration tools. It is integrated using a circle.yml file in the root folder of the repository.
    machine:
      php:
        version: 5.4.5
    deployment:
      master:
        branch: master
        owner: fossasia
        commands:
          - ./deploy_master.sh
    dependencies:
      pre:
        - curl -s http://getcomposer.org/installer | php
        - php composer.phar install -n
        - sed -i 's/^;//' ~/.phpenv/versions/$(phpenv global)/etc/conf.d/xdebug.ini
    test:
      post:
        - php test/
        - bash <(curl -s https://codecov.io/bash)
  • Codacy: Check code style, security, duplication, complexity and coverage on every change while tracking code quality throughout your sprints.

Development: https://github.com/fossasia/engelsystem Issues/Bugs: https://github.com/fossasia/engelsystem/issues

PSLab Code Repository and Installation

PSLab is a new addition to the FOSSASIA Science Lab. This tiny pocket science lab provides an array of necessary equipment for doing science and engineering experiments. It can function as an oscilloscope, waveform generator, frequency counter, programmable voltage and current source, and also as a data logger.

[Image: New Front Panel Design]
[Image: Size: 62mm x 78mm x 13mm]

The control and measurement functions are written in the Python programming language. PyQtGraph is used as the plotting library. We are now working on Qt-based GUI applications for various experiments.

The following are the code repositories of PSLab.

Installation

To install PSLab on a Debian-based GNU/Linux system, the following dependencies must be installed.

Dependencies
============
PyQt 4.7+, PySide, or PyQt5
python 2.6, 2.7, or 3.x
NumPy, SciPy
pyqt4-dev-tools          #for pyuic4
Pyqtgraph                #Plotting library
pyopengl and qt-opengl   #for 3D graphics
iPython-qtconsole        #optional
Now clone both repositories, pslab-apps and pslab.

The libraries must be installed in the following order:

1. pslab-apps

2. pslab

To install, cd into the directories

$ cd <SOURCE_DIR>

and run the following (for both the repos)

$ sudo make clean
$ sudo make 

$ sudo make install

Now you are ready with the PSLab software on your machine 🙂

For the main GUI (Control panel), you can run Experiments from the terminal.

$ Experiments

If the device is not connected, the following splash screen will be displayed.

[Screenshot: Device not connected]

After clicking OK, you will get the control panel with menus for Experiments, Controls, Advanced Controls, Help, etc. (Experiments cannot be accessed unless the device is connected.)


The splash screen and the control panel when PSLab is connected to the PC:

[Screenshot: PSLab connected]
[Screenshot: Control Panel – Main GUI]

From this control panel one can access controls, help files, and various experiments through independent GUIs written for each experiment.

You can help
------------

Please report bugs/install errors here.
Your suggestions to improve PSLab are welcome :)

What Next:

We are now working on a general-purpose experiment designer. This will allow selecting controls and channels and then generating a spreadsheet. The columns from this spreadsheet can be selected and plotted.

 

CommonsNet – WiFi Standards

Introduction

There is no doubt that WiFi is a crucial technology that most of us use every day. But have you ever noticed that on a wifi router there are a few different numbers and letters tagged on the end? These designations represent different properties of the WiFi, like speed, allowed devices, range and frequency, and they make up the WiFi standards. If you know which standard you have, you can tell a lot about your wireless connection and use it in the way you want. The CommonsNet team focuses on providing users with transparent wifi information, so let's talk about these standards today.

WIFI Standards

802 – this strange number refers to the naming system used by networking standards. WiFi uses 802.11. All WiFi varieties have this number, followed by a letter or two which is very useful for consumers, because as mentioned above it helps to identify wifi properties like maximum speed, range and required devices.

A specific router may support not only a single standard, but multiple standards at the same time. This is done to ensure compatibility with different pieces of hardware and the network.

 

802.11

This standard was created in 1997 by the Institute of Electrical and Electronics Engineers (IEEE). It was used for medical and industrial purposes. Unfortunately, 802.11 supported a maximum network bandwidth of only 1 or 2 Mbps – not fast enough for most applications. Therefore this standard was rapidly supplanted and is no longer used.

802.11b

This standard became the most commonly adopted in consumer devices, especially because of its low cost. It supports bandwidth up to 11 Mbps. 802.11b uses a radio signaling frequency of 2.4 GHz, and due to this its signal has good range – about 100 m – and is not easily obstructed; but because it works at 2.4 GHz, it may interfere with home appliances.

802.11a

This standard's bandwidth is up to 54 Mbps and it uses signals in a regulated frequency band around 5 GHz. There is no doubt that this higher frequency shortens the range and needs more power to work correctly. It also means that the signal has more difficulty penetrating obstructions like walls, doors and windows. Due to working on a different frequency, this standard is incompatible with the 802.11b standard.

802.11g

In 2002, products supporting a new standard emerged on the market. It is actually the most popular WiFi standard. It focuses on combining the best of both 802.11a and 802.11b: it supports bandwidth up to 54 Mbps, and it uses the 2.4 GHz frequency for greater range. It is compatible with other standards, but when used with older devices the speed will be about 4 times slower.

802.11n

This standard was designed to improve on 802.11g by utilizing multiple wireless signals and antennas (so-called MIMO technology) instead of one. It provides up to 600 Mbps of network bandwidth, but in reality it usually reaches up to 150 Mbps. 802.11n also offers better range than earlier Wi-Fi standards due to its increased signal intensity, and it is backward compatible with 802.11b/g gear.

802.11ac

The newest generation of Wi-Fi signaling in popular use utilizes dual-band wireless technology, supporting simultaneous connections on both the 2.4 GHz and 5 GHz bands. It offers compatibility with 802.11b/g/n and bandwidth up to 1300 Mbps on the 5 GHz band plus up to 450 Mbps on the 2.4 GHz band.

Summary

As you can see from the above description, there are different wifi standards which differ from each other in speed, range and device support. Some of them are no longer current, but some of them can still be used simultaneously. You can choose the one which is best suited to your needs.

As the CommonsNet team, we believe that we will create a great CommonsNet website which helps users to be aware of the properties of the wifi they have or use, and if necessary improve it, to provide and share with other people a transparent wireless connection of the best quality.

With support of http://compnetworking.about.com/cs/wireless80211/a/aa80211standard.htm and http://www.androidauthority.com/wifi-standards-explained-802-11b-g-n-ac-ad-ah-af-666245/