Involving the end user

The Knit Editor software aims to be installable and usable by end users. Throughout the Summer of Code, we focused on development and code documentation from the perspective of a developer. In this blog post we discuss how the Knit Editor is presented to the end user. So, if you are reading this blog: please comment with your thoughts on the sketches.

A site is currently in the making that presents the Knit Editor software in its current state. The inspiration came from Tracy Osborn's talk “Web Design for Non Designers” at PyCon 2016. The site lives at fossasia.github.io/kniteditor. If you click on the following images, you get redirected to the implementation of the concept.

The Main Page

The first thing that comes into view is the download button, which leads to the download site. Then we can see three popular use cases of the Knit Editor. At the bottom, new developers can see that they can contribute.

End users are knitters of all ages. As tested with my mom, they expect the language selection to be at the top right of the page.

Both the download and the start-developing button are highlighted in a different color to make clear that they are actions the user is expected to perform.

When you access the site, you get automatically redirected to your browser's configured language.

The Download Site

When you click the download button on the main page, you reach the download site. Depending on the operating system information the browser sends, your download starts automatically. Below, this is sketched for Windows.


Next to the download, you may or may not want to find other versions of the software. This is still to be evaluated. Maybe a slightly less visible button is right for that, or it can be left out; usually, no one uses the old versions of the software.

At the bottom, you can see that there is a predecessor of the software which is called “AYAB-Apparat”. Some people may expect to find this software, too.

The Developer Site

If you clicked “Start Developing” on the main page, you will be confronted with the site for development.

There are two main flavors of contributing: either you translate or you write program code. Therefore, we have two buttons that skip to the corresponding sections.


At the bottom, you can see that there are tutorials on how to set up the development environment. Videos for this can be found in this YouTube playlist.

Summary

At the end of GSoC we are expected to document the code. Since we did documentation-driven development, there was a focus on developers from the start, but end-user involvement fell short during the development phase. “Documentation is the way of informing people” – this is something I learned from a talk. Thus, I am creating the new site for the Knit Editor as documentation about the project that is fit for non-developers.


The new AYABInterface module

One creates knit work with knitting machines and the AYAB shield. For this, the computer communicates with the machine. In the future, this communication shall be done with this new library, the AYABInterface.

Here are some design decisions:

Complete vs. Incomplete

The idea is to keep the AYAB machine separated from the knittingpattern format. The knittingpattern format is an incomplete format that can be extended for any use case. In contrast, the AYAB machine has a complete instruction set. The knittingpattern format is a means to translate between such complete instruction sets. They should be convertible but not mixed.

Descriptive vs. Imperative

The idea is to be able to pass the format to the AYABInterface as a description. As much knowledge about the behavior as possible is encapsulated in the AYABInterface module. This way, we are less prone to intermixing concerns across the applications.

Responsibility-Driven Design

I see these separated responsibilities:

  • A communication part focusing on the protocol to talk and the messages sent across the wire. It is an interpreter of the protocol, transforming it from bytes to objects.
  • A configuration that is passed to the interface
  • Different machine types supported.
  • Actions the user shall perform.

Different Representations

I see these representations:

  • Commands are transferred across the wire. (PySerial)
  • For each movement of a carriage, the needles are used and put into a new position, B or D. (communication; see the sketch after this list)
  • We would like to knit a list of rows with different colors. (interface)
    • Holes can be described by a list of orders in which meshes are moved to other locations, i.e. on needle 1 we can find mesh 1, on needle 2 we find mesh 2 first and then mesh 3, so mesh 2 and mesh 3 are knit together in the following step
  • The knitting pattern format.
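For illustration, turning one row of a two-color pattern into needle positions could look roughly like the sketch below. This is not the actual AYABInterface code, and whether the selected needles end up in position B or D depends on the machine; the sketch simply picks one convention.

# illustrative sketch only – not the real AYABInterface implementation
def needle_positions(row_colors, color_to_knit):
    # needles that shall knit the given color are selected (here: position B),
    # all other needles stay in the other position (here: D)
    return ["B" if color == color_to_knit else "D" for color in row_colors]

# one row with two colors, 0 and 1:
print(needle_positions([0, 0, 1, 1, 0], 1))  # ['D', 'D', 'B', 'B', 'D']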

Actions and Information for the User

The user should be informed about actions to take. These actions should not be in the form of text but rather in the form of an object that represents the action, i.e. [“move”, “this carriage”, “from right to left”]. This way, they can be adequately represented in the UI and translated in one central place.
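A minimal sketch of such an action object is shown below. The names are illustrative and not the final AYABInterface API; the point is that the action is data which the UI can render and translate in one place.

# hypothetical sketch – names are not the real AYABInterface API
class MoveCarriage:

    def __init__(self, carriage, from_side, to_side):
        self.carriage = carriage
        self.from_side = from_side
        self.to_side = to_side

    def translation_key(self):
        # the UI looks this key up in its translation files
        return "move {} from {} to {}".format(
            self.carriage, self.from_side, self.to_side)

action = MoveCarriage("K carriage", "right", "left")
print(action.translation_key())  # move K carriage from right to left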

Summary

The new design separates concerns and allows testing. The bridge between the machine and the knittingpattern format consists of primitive, descriptive objects such as lists and integers.


Transcript from the Python Toolbox 101

At the Python User Group Berlin, I led a talk and discussion about free-of-charge tools for open-source development, based on what we use in GSoC. The whole content was in an Etherpad and people could add their ideas.

Because there are a lot of tools, I thought I would share the list with you. Maybe it is of use. Here is the talk:


Python Users Berlin 2016/07/14 Talk & Discussion

 

START: 19:15
Agenda 1min END: 19:15
======
– Example library
– What is code
– Version Control
  – Python Package Index
– …, see headings
– discussion: write down, what does not fit into my structure
Example Library (2min)  19:17
======================
What is Code (2min) 19:19
===================
.. note:: This frames our discussion
– Source files .py, .pyw
– tests
– documentation
– quality
– readability
– bugs and problems
– <3
Configuration files: plain text for editing
Version Control (2min) 19:21
======================
.. note:: Sharing and Collaboration
– no Version Control:
  – Dropbox
  – Google drive
  – Telekom cloud
  – ftp, windows share
– Version Control Tools:
  – git
    – gitweb own server
    – 
  – mercurial
  – svn
  – perforce (proprietary)
  
  
  
  
  
  
Python Package Index (3min) 19:24
—————————
.. note:: Shipping to the users
hosts python packages you develop.
Example: “knittingpattern” package
pip
Installation from Pypi:
    $ python3 -m pip install knittingpattern # Linux
    > py -3.4 -m pip install knittingpattern # Windows
Documentation upload included!
Documentation (3min) 19:27
====================
.. note:: Inform users
I came across a talk:
Documentation can be:
– tutorials
– how to
– introduction to the community/development process
– code documentation!!!
– chat
– 
Building the documentation (3min)  19:30
———————————
Formats:
– HTML
– PDF
– reST
– EPUB
– doc strings in source code
– test?
Tools:
– Sphinx
– doxygen
– doc strings
  – standard how to put in docstrings in Python
    – 
Example: Sphinx  3min 19:33
~~~~~~~~~~~~~~~
– Used for Python
– Used for knittingpattern
Python file:
Documentation file with sphinx.ext.autodoc:
Built documentation:
    See the return type str, Intersphinx can reference across documentations.
    Intersphinx uses objects inventory, hosted with the documentation:
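    A rough sketch of the relevant part of a Sphinx conf.py (URLs and settings are examples, not our exact configuration):
        # excerpt of a hypothetical conf.py
        extensions = [
            "sphinx.ext.autodoc",       # pull documentation from docstrings
            "sphinx.ext.intersphinx",   # link to objects of other documentations
        ]
        # intersphinx reads the objects.inv inventory hosted with each documentation
        intersphinx_mapping = {
            "python": ("https://docs.python.org/3", None),
        }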
Testing the documentation:
    – TODO: link
      – everything is included in the docs
      – everything that is public is documented
      
      syntax
      – numpy 
      – google 
      – sphinx
Hosting the Documentation (3min) 19:36
——————————–
Tools:
– pythonhosted
  only latest version
– readthedocs.io
  several branches, versions, languages
– wiki pages
– 
Code Testing 2min 19:38
============
.. note:: Tests show the presence of mistakes, not their absence.
What can be tested:
– features
– style: pep8, pylint, 
– documentation
– complexity
– 
Testing Features with unit tests 4min 19:42
——————————–
code:
    def fib(i): …
Tools with different styles
– unittest
  
    import unittest
    from fibonacci import fib
    class FibonacciTest(unittest.TestCase):
        def testCalculation(self):
            self.assertEqual(fib(0), 0)
            self.assertEqual(fib(1), 1)
            self.assertEqual(fib(5), 5)
            self.assertEqual(fib(10), 55)
            self.assertEqual(fib(20), 6765)
    if __name__ == "__main__":
        unittest.main()
 
– doctest
    import doctest
    def fib(n):
        """
        Calculates the n-th Fibonacci number iteratively  
        >>> fib(0)
        0
        >>> fib(1)
        1
        >>> fib(10) 
        55
        >>> fib(15)
        610
        >>> 
        """
        a, b = 0, 1
        for i in range(n):
            a, b = b, a + b
        return a
    if __name__ == "__main__":
        doctest.testmod()
– pytest (works with unittest)
    import pytest
    from fibonacci import fib
    
    @pytest.mark.parametrize("parameter,value", [(0, 0), (1, 1), (10, 55), (15, 610)])
    def test_fibonacci(parameter, value):
        assert fib(parameter) == value
– nose tests?
– …
– pyhumber
– assert in code, PyHamcrest (see the sketch below)
– Behaviour driven development
  – human test
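  A small sketch of PyHamcrest-style assertions, reusing the fib example from above (assuming a fibonacci module as in the other snippets):

    # readable assertions with PyHamcrest; assumes fib() from the examples above
    from hamcrest import assert_that, equal_to
    from fibonacci import fib

    def test_fibonacci_with_hamcrest():
        assert_that(fib(10), equal_to(55))
        assert_that(fib(15), equal_to(610))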
Automated Test Run & Continuous Integration 2min 19:44
===========================================
.. note:: 
Several branches:
– production branch always works
– feature branches
– automated test before feature is put into production
Tools running tests 6min 19:50
——————-
– Travis CI for Mac, Ubuntu
– Appveyor for Windows
Host yourself:
– buildbot
– Hudson
– Jenkins
– Teamcity
– circle CI
  + selenium for website test
– 
– …?????!!!!!!
Tools for code quality 4min 19:54
———————-
– landscape
  complexity, style, documentation
  – libraries are available separately
    – flake8
    – destinate
    – pep257
– codeclimate
  code duplication, code coverage
  – libraries are available separately
– PyCharm
  – integrated what landscape has 
  – + complexity
Bugs, Issues, Pull Requests, Milestones 4min 19:58
=======================================
.. note:: this is also a way to get people into the project
1. find bug
2. open issue if big bug, discuss
3. create pull request
4. merge
5. deploy
– github
  issue tracker
– waffle.io – scrumboard
  merge several github issues tracker
– Redmine
JIRA
– trac 
– github issues + zenhub integrated in github
– gitlab
– gerrit framework that does alternative checking https://www.gerritcodereview.com/
  1. propose change
  2. test
  3. someone reviews the code
      – X people needed
  QT company uses it
Localization 2min 20:00
============
crowdin.com
    Crowdsourced translation tool:
    
Discussion
– spellchecker is integrated in PyCharm
  – character set
  – new vocabulary
  – not for continuous integration (CI)
– Emacs
  – 
– pylint plugin 
   – not all languages?
– readthedocs
  – add github project, 
  – hosts docs
– sphinx-plugin?
– PyCon testing talk:
    – Hypothesis package
      – tries to break your code
      – throws in a lot of edge cases (huge number, nothing, …)
      -> find obscure edge cases
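      A minimal sketch (not from the talk) of pointing Hypothesis at the fib example from above:
        # Hypothesis generates many inputs and shrinks failing ones to small examples
        from hypothesis import given, strategies as st
        from fibonacci import fib

        @given(st.integers(min_value=2, max_value=300))
        def test_fib_recurrence(n):
            # the defining recurrence must hold for every generated n
            assert fib(n) == fib(n - 1) + fib(n - 2)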
      
Did someone create a Pylint plugin
– question:
    – cyclomatic code complexity
    – which metrics tools do you know?
    –
Virtual Environment:
    nobody should install everything in the system
    -> switch between different python versions
    – python3-venv
      – slightly different from virtualenv (more mature)
Beginners:
    Windows:
        install Anaconda

Deploying a Kivy Application with PyInstaller for Mac OSX with Travis CI to Github

In this sprint for the kniteditor library, we focused on automatic deployment for Windows and Mac. The idea: whenever a tag is pushed to Github, a new Travis build is triggered. The newly built app is uploaded to Github as a “.dmg” file.

Travis

Travis is configured with the “.travis.yml” file which you can see here:

language: python

# see https://docs.travis-ci.com/user/multi-os/
matrix:
  include:
    - os: linux
      python: 3.4
    - os: osx
      language: generic
  allow_failures:
    - os: osx

install:
  - if [ "$TRAVIS_OS_NAME" == "osx"   ] ; then   mac-build/install.sh ; fi

script:
  - if [ "$TRAVIS_OS_NAME" == "osx"   ] ; then   mac-build/test.sh ; fi

before_deploy:
  - if [ "$TRAVIS_OS_NAME" == "osx"   ] ; then cp mac-build/dist/KnitEditor.dmg /Users/travis/KnitEditor.dmg ; fi

deploy:
  # see https://docs.travis-ci.com/user/deployment/releases/
  - provider: releases
    api_key:
      secure: v18ZcrXkIMgyb7mIrKWJYCXMpBmIGYWXhKul4/PL/TVpxtg2f/zfg08qHju7mWnAZYApjTV/EjOwWCtqn/hm2CfPFo=
    file: /Users/travis/KnitEditor.dmg
    on:
      tags: true
      condition:  "\"$TRAVIS_OS_NAME\" == \"osx\""
      repo: AllYarnsAreBeautiful/kniteditor

Note that it builds both Linux and OSX. Thus, each step must distinguish between the operating systems. Here, only the OSX parts are shown. These steps are executed:

  1. Installation. The app and dmg files are built.
  2. Testing. The tests are shipped with the app in our case. This allows us to execute them at many more locations – where the user is.
  3. Before Deploy. Somehow Travis did not manage to upload from the original location; maybe it was a bug. Thus, an absolute path was created for use in (4).
  4. Deployment to github. In this case we use an API key. One could also use a password.

Installation:

#!/bin/bash
#
# execute with --user to pip install in the user home
#
set -e

HERE="`dirname \"$0\"`"
USER="$1"
cd "$HERE"

brew update

echo "# install python3"
brew install python3
echo -n "Python version: "
python3 --version
python3 -m pip install --upgrade pip

echo "# install pygame"
python3 -m pip uninstall -y pygame || true
# locally compiled pygame version
# see https://bitbucket.org/pygame/pygame/issues/82/homebrew-on-leopard-fails-to-install#comment-636765
brew install sdl sdl_image sdl_mixer sdl_ttf portmidi
brew install mercurial || true
python3 -m pip install $USER hg+http://bitbucket.org/pygame/pygame

echo "# install kivy dependencies"
brew install sdl2 sdl2_image sdl2_ttf sdl2_mixer gstreamer

echo "# install requirements"
python3 -m pip install $USER -I Cython==0.23 \
                       --install-option="--no-cython-compile"
USE_OSX_FRAMEWORKS=0 python3 -m pip install $USER kivy
python3 -m pip uninstall -y Cython==0.23
python3 -m pip install $USER -r ../requirements.txt
python3 -m pip install $USER -r ../test-requirements.txt
python3 -m pip install $USER PyInstaller

./build.sh $USER

The first step is to update brew. It cost me four hours to find this bug and two more hours to work around it. If brew is not updated, Python 3.4 is installed instead of Python 3.5.

Then Python is installed, Pygame as the window provider for Kivy, and the other requirements. After that, the build step follows. While the installation is executed only once on a personal Mac, the build step is executed several times, whenever the source code changes.

#!/bin/bash
#
# execute with --user to make pip install in the user home
#
set -e

HERE="`dirname \"$0\"`"
USER="$1"
cd "$HERE"

(
  cd ..

  echo "# removing old installation of kniteditor"
  python3 -m pip uninstall -y kniteditor || true

  echo "# build the distribution"
  python3 -m pip install $USER wheel
  python3 setup.py sdist --formats=zip
  python3 setup.py bdist_wheel
  python3 -m pip uninstall -y wheel

  echo "# install the current version from the build"
  python3 -m pip install $USER --upgrade dist/kniteditor-`linux-build/package_version`.zip

  echo "# install test requirements"
  python3 -m pip install $USER --upgrade -r test-requirements.txt
)

echo "# build the app"
# see https://pythonhosted.org/PyInstaller/usage.html
python3 -m PyInstaller -d -y KnitEditor.spec

echo "# create the .dmg file"
# see http://stackoverflow.com/a/367826/1320237
KNITEDITOR_DMG="`pwd`/dist/KnitEditor.dmg"
rm -f "$KNITEDITOR_DMG"
hdiutil create -srcfolder dist/KnitEditor.app "$KNITEDITOR_DMG"

echo "The installer can be found in \"$KNITEDITOR_DMG\"."

In the first steps we install the kniteditor from the built “sdist” zip file. This way we can uninstall it with pip. Also, the dependencies are installed. Then, PyInstaller is invoked with a spec file and the .dmg file is created.

The spec looks like this:

# -*- mode: python -*-

import sys
site_packages = [path for path in sys.path if path.rstrip("/\\").endswith('site-packages')]
print("site_packages:", site_packages)

from kivy.tools.packaging.pyinstaller_hooks import get_deps_all, \
    hookspath, runtime_hooks

block_cipher = None

added_files = [(site_packages_, ".") for site_packages_ in site_packages]

kwargs = get_deps_all()
kwargs["datas"] = added_files
kwargs["hiddenimports"] += ['queue', 'unittest', 'unittest.mock']


a = Analysis(['_KnitEditor.py'],
             pathex=[],
             binaries=None,
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             hookspath=hookspath(),
             runtime_hooks=runtime_hooks(),
             **kwargs)
pyz = PYZ(a.pure, a.zipped_data,
             cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          exclude_binaries=True,
          name='KnitEditorX',
          debug=False,
          strip=False,
          upx=True,
          console=True )
coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas,
               strip=False,
               upx=True,
               name='KnitEditor')
app = BUNDLE(coll,
             name='KnitEditor.app',
             icon=None,
             bundle_identifier="com.ayab-knitting.KnitEditor")

Note that all files in all site-packages are included. This way, we do not need to cope with missing modules. Also, there are three different names for

  • the entry script “_KnitEditor.py”
  • the executable “KnitEditorX”
  • the library “kniteditor”

While the command line on OSX is case sensitive, the file system is not. Thus, if one of the names is the same as another (even with different capitalization), we can get errors during the PyInstaller build.

Lessons learned

Do “brew update” on travis.

Use absolute paths for deployment on Mac OSX travis.

Never use the same names in PyInstaller for the main script, a library and the executable. Otherwise you get a “not a directory” or “not a file” error.

Travis OSX builds max out from time to time. It is much faster to have a Mac computer at hand to create the scripts.


How to create a Windows Installer from tagged commits

I am working on an open-source Python project, an editor for knit work called the “KnitEditor”. Development takes place on Github. Here, I would like to give some insight into how we automated deployment of the application as a Windows installer.

The process works like this:

  1. Create a tag with git and push it to Github.
  2. AppVeyor builds the application.
  3. AppVeyor pushes the application to the Github release.

(1) Create a tag and push it

Tags should reflect the version of the software. Version “0.0.1” is in tag “v0.0.1”. We automated the tagging with the “setup.py” in the repository. Now, you can run

py -3.4 setup.py tag_and_deploy

This command checks that there is no such tag already. Several commits can have the same version, so we want to make sure that we do not have two versions with the same name. Also, the command can only be executed on the master branch; this way, the software has gone through all the automated quality assurance. Here is the code from the setup.py:

import os
import subprocess

from distutils.core import Command
# ...
class TagAndDeployCommand(Command):

    description = "Create a git tag for this version and push it to origin."\
                  "To trigger a travis-ci build and and deploy."
    user_options = []
    name = "tag_and_deploy"
    remote = "origin"
    branch = "master"

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        if subprocess.call(["git", "--version"]) != 0:
            print("ERROR:\n\tPlease install git.")
            exit(1)
        status_lines = subprocess.check_output(
            ["git", "status"]).splitlines()
        current_branch = status_lines[0].strip().split()[-1].decode()
        print("On branch {}.".format(current_branch))
        if current_branch != self.branch:
            print("ERROR:\n\tNew tags can only be made from branch"
                  " \"{}\".".format(self.branch))
            print("\tYou can use \"git checkout {}\" to switch"
                  " the branch.".format(self.branch))
            exit(1)
        tags_output = subprocess.check_output(["git", "tag"])
        tags = [tag.strip().decode() for tag in tags_output.splitlines()]
        tag = "v" + __version__
        if tag in tags:
            print("Warning: \n\tTag {} already exists.".format(tag))
            print("\tEdit the version information in {}".format(
                    os.path.join(HERE, PACKAGE_NAME, "__init__.py")
                ))
        else:
            print("Creating tag \"{}\".".format(tag))
            subprocess.check_call(["git", "tag", tag])
        print("Pushing tag \"{}\" to remote \"{}\"."
              "".format(tag, self.remote))
        subprocess.check_call(["git", "push", self.remote, tag])
# ...
SETUPTOOLS_METADATA = dict(
# ...
    cmdclass={
# ...
        TagAndDeployCommand.name: TagAndDeployCommand
    },
)
# ...
if __name__ == "__main__":
    import setuptools
    METADATA.update(SETUPTOOLS_METADATA)
    setuptools.setup(**METADATA) # METADATA can be found in several other 

Above, you can see a “distutils” command that executes git through the command-line interface.

(2) AppVeyor builds the application

As mentioned above, you can configure AppVeyor to build your application. Here are some parts of the “appveyor.yml” file, which I comment on inline:

# see https://packaging.python.org/appveyor/#adding-appveyor-support-to-your-project
environment:
  PYPI_USERNAME: niccokunzmann3
  PYPI_PASSWORD:
    secure: Gxrd9WI60wyczr9mHtiQHvJ45Oq0UyQZNrvUtKs2D5w=

  # For Python versions available on Appveyor, see
  # http://www.appveyor.com/docs/installed-software#python
  # The list here is complete (excluding Python 2.6, which
  # isn't covered by this document) at the time of writing.

  # we only need Python 3.4 for kivy
  PYTHON: "C:\\Python34"


install:
  - "%PYTHON%\\python.exe -m pip install docutils pygments pypiwin32 kivy.deps.sdl2 kivy.deps.glew"
  - "%PYTHON%\\python.exe -m pip install -r requirements.txt"
  - "%PYTHON%\\python.exe -m pip install -r test-requirements.txt"
  - "%PYTHON%\\python.exe setup.py install"
  
build_script:
- cmd: cmd /c windows-build\build.bat

test_script:
  # Put your test command here.
  # If you don't need to build C extensions on 64-bit Python 3.3 or 3.4,
  # you can remove "build.cmd" from the front of the command, as it's
  # only needed to support those cases.
  # Note that you must use the environment variable %PYTHON% to refer to
  # the interpreter you're using - Appveyor does not do anything special
  # to put the Python version you want to use on PATH.
  - windows-build\dist\KnitEditor\KnitEditor.exe /test
  - "%PYTHON%\\python.exe -m pytest --pep8 kniteditor"

artifacts:
  # bdist_wheel puts your built wheel in the dist directory
- path: dist/*
  name: distribution
- path: windows-build/dist/Installer/KnitEditorInstaller.exe
  name: installer
- path: windows-build/dist/KnitEditor
  name: standalone

deploy:
- provider: GitHub
  # http://www.appveyor.com/docs/deployment/github
  tag: $(APPVEYOR_REPO_TAG_NAME)
  description: "Release $(APPVEYOR_REPO_TAG_NAME)"
  auth_token:
    secure: j1EbCI55pgsetM/QyptIM/QDZC3SR1i4Xno6jjJt9MNQRHsBrFiod0dsuS9lpcC7
  artifact: installer
  force_update: true
  draft: false
  prerelease: false
  on:
    branch: master                 # release from master branch only
    appveyor_repo_tag: true        # deploy on tag push only

Note that the line

  - windows-build\dist\KnitEditor\KnitEditor.exe /test

executes the tests in the built application.
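For illustration, the entry script of the app could dispatch on such a flag roughly like the sketch below. This is a hypothetical sketch, not the real _KnitEditor.py; the module names are assumptions.

# hypothetical sketch of an entry script dispatching on a /test flag
import sys

def main():
    if "/test" in sys.argv:
        # run the test suite that is shipped inside the built application
        import pytest
        sys.exit(pytest.main(["--pyargs", "kniteditor"]))
    # otherwise start the editor (module name is an assumption)
    from kniteditor import main as start_editor
    start_editor()

if __name__ == "__main__":
    main()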

The application is built by the commands in “windows-build\build.bat”, which are invoked by this step:

build_script:
- cmd: cmd /c windows-build\build.bat
"%PYTHON%\python.exe" -m pip install pyinstaller

The line above installs PyInstaller.

"%PYTHON%\python.exe" -m PyInstaller KnitEditor.spec

The line above uses PyInstaller to create an executable from the specification.

"Inno Setup 5\ISCC.exe" KnitEditor.iss

The line above uses Inno Setup to create the installer for the built application.

(3) Deploy to Github

As you can see in the “appveyor.yml” file, the resulting executable is listed as an artifact. Artifacts can be downloaded directly from AppVeyor or used for deployment. In this case, we use the Github deployment, which can be customized via the AppVeyor UI.

- path: windows-build/dist/Installer/KnitEditorInstaller.exe
  name: installer
deploy:
- provider: GitHub
  # http://www.appveyor.com/docs/deployment/github
  tag: $(APPVEYOR_REPO_TAG_NAME)
  description: "Release $(APPVEYOR_REPO_TAG_NAME)"
  auth_token:
    secure: j1EbCI55pgsetM/QyptIM/QDZC3SR1i4Xno6jjJt9MNQRHsBrFiod0dsuS9lpcC7
  artifact: installer
  force_update: true
  draft: false
  prerelease: false
  on:
    branch: master                 # release from master branch only
    appveyor_repo_tag: true        # deploy on tag push only

Summary

Now, every time we push a tag to Github, AppVeyor builds a new installer for our application.


Code Quality in the knittingpattern Python Library

In our Google Summer of Code project, a part of our work is to bring knitting to the digital age. We are Kirstin Heidler and Nicco Kunzmann. Our knittingpattern library aims at being the exchange and conversion format between different types of knit-work representations: hand-knitting instructions, machine commands for different machines, and SVG schemata.

The generated schema from the knittingpattern library.
The original pattern schema Cafe.

The image above was generated by this Python code:

import knittingpattern, webbrowser
example = knittingpattern.load_from().example("Cafe.json")
webbrowser.open(example.to_svg(25).temporary_path(".svg"))

So much for the context. Now about the quality tools we use:


Continuous integration

We use Travis CI [FOSSASIA] to upload packages of a specific git tag  automatically. The Travis build runs under Python 3.3 to 3.5. It first builds the package and then installs it with its dependencies. To upload tags automatically, one can configure Travis, preferably with the command line interface, to save username and password for the Python Package Index (Pypi).[TravisDocs] Our process of releasing a new version is the following:

  1. Increase the version in the knitting pattern library and create a new pull request for it.
  2. Merge the pull request after the tests passed.
  3. Pull and create a new release with a git tag using
    setup.py tag_and_deploy

Travis then builds the new tag and uploads it to Pypi.

With this we have basic quality assurance. Pull requests need to pass all tests before they can be merged. Travis can be configured to automatically reject a request with errors.

Documentation Driven Development

As mentioned in a blog post, documentation-driven development was something worth checking out. In our case that means writing the documentation first, then the tests and then the code.

Writing the documentation first means thinking in the space of the mental model you have for the code. It defines the interfaces you would be happy to use. A lot of edge cases can be thought of at this point.

When writing the tests, they are often split up and no longer represent the flow of thought you had when writing down your wishes. Tests can be seen as the glue between the code and the documentation. As with writing code to pass the tests, in the conversation between the tests and the documentation I find things I have forgotten.

When writing the code in a test-driven way, another conversation starts. I call implementing the tests a conversation because the tests tell the code that it should be different, and the code tells the tests about their inconsistencies, like misspellings and bloated interfaces.

With writing documentation first, we have the chance to have two conversations about our code, in spoken language and in code. I like it when the code hears my wishes, so I prefer to talk a bit more.

Testing the Documentation

Our documentation is hosted on Read the Docs. It should have these properties:

  1. Every module is documented.
  2. Everything that is public is documented.
  3. The documentation is syntactically correct.

These are qualities that can be tested, so they are tested. The code cannot be deployed if it does not meet these standards. We use Sphinx for building the docs. That makes it possible to test these properties in this way (a sketch follows the list below):

  1. For every module there exists a .rst file which automatically documents the module with autodoc.
  2. A Sphinx build outputs a list of objects that should be covered by documentation but are not.
  3. Sphinx outputs warnings throughout the build.
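A minimal sketch of how such checks can be automated is shown below; the directory layout is an assumption and the real tests in the repository are more detailed.

# simplified sketch of documentation tests; paths are assumptions
import os
import subprocess

def test_every_module_has_a_documentation_file():
    for name in os.listdir("knittingpattern"):
        if name.endswith(".py") and name != "__init__.py":
            rst = os.path.join("docs", "reference", name[:-3] + ".rst")
            assert os.path.isfile(rst), "missing documentation file for " + name

def test_documentation_builds_without_warnings():
    # "-W" turns Sphinx warnings into errors so the build fails loudly
    subprocess.check_call(["sphinx-build", "-W", "-b", "html", "docs", "docs/_build"])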

Testing our documentation allows us to keep it at a higher quality. Many more tests could be imagined, but the basic ones already help.

Code Coverage

It is possible to test code coverage and see how well we do using Codeclimate.com. It gives us the files we need to work on when we want to improve the quality of the package.

Landscape

Landscape is also free for open-source projects. It can give hints about where to improve next. Also, it is possible to fail pull requests if the quality decreases. It shows code duplication and can run pylint. Currently, most of the style problems arise from undocumented tests.

Summary

When we started with the more strict quality assurance, the question arose whether it would only slow us down. Now, we have learned to write properly styled pep8 code and begin to automatically do what pylint demands. High test coverage allows us to change the underlying functionality without changing the interface and without fear that we may break something irrecoverably. I feel like a burden has been taken from me by all those free tools for open-source software that spare me the time of setting up quality assurance myself.

Future Work

In the future we would also like to create a user interface. User interfaces are sometimes hard to test, so we plan not to put the UI into the package but to build it on top of the package.


Knitting Pattern Conversion


We can convert knitting patterns to SVG (middle), which proves the concept but is still different from the original (right).

Our goal is to create a knit-work exchange format. This includes the conversion to a schematic view of the knitting pattern as SVG – to make it endlessly scalable and allow conversions to PNG, PDF and paper.

This week we finished the prototype of the SVG conversion. The positions are a bit off and instructions are placed above each other, but most of the work is done.

We are also able to load and save knitting patterns as PNG files.

(1) the loaded PNG, (2) the PNG saved again after conversion, (3) the generated SVG of the knit instructions.

We loaded a PNG (1), converted it to a knitting pattern and then saved it again as PNG (2). This way we pave our way towards using the AYAB software and actually knitting the pattern. Also, we can convert the knitting pattern to an SVG consisting of all the knit instructions (3). Here is the code for it in version 0.0.8.

>>> import knittingpattern
>>> from knittingpattern.convert.image_to_knittingpattern import *
>>> convert_image_to_knitting_pattern.path("head-band.png").temporary_path(".json")
"head-band.json"
>>> k = knittingpattern.load_from_path("head-band.json")
>>> k.to_svg(10).temporary_path(".svg")
"head-band.svg"

Here you can see a proof of concept video:

 


A look into the knitting pattern format

We are currently working on a format that allows the exchange of instructions for knitting independently of how it is going to be knit: by a machine like Brother or Pfaff, or by hand.

For this to be possible we need the format to be as general as possible and have no ties to a specific form of knitting. At some point the transition to a format specifically designed for a certain machine is necessary. However, we believe that it is possible to have a format so general that instructions for all types of machines and for hand knitting could be generated from it.

We have decided to use JSON as the language to describe the format, because it is machine readable and human readable at the same time.
The structure of our format follows the rows in knitting.

Our first thought was about creating a format that would describe how meshes are connected and how the thread travels. This would allow for great flexibility and it should be possible to represent everything like this. However, we have decided against it, because we think this format would be quite complicated (different orientations and twists of meshes possible, different threads for multiple colors…) and would get quite big very quickly, because of all the different properties for each mesh and because of all the meshes. Furthermore, and this point is probably more important, knitters do not think this way. If we had a format like that, it would not be easy for a human being to understand what is happening.

Whenever I knit by hand, I never think about how all the meshes are connected by this single thread I am using. I always think about which operations I am performing when knitting each row. Instructions for creating patterns in knitting are also written this way. They give the knitter a set of knitting instructions to perform and possibly repeat. We have concluded that most knitters think in knitting operations performed rather than connections between meshes.


Knitting instructions from Garnstudio’s Café.

Therefore we have decided to base our format on knitting instructions/operations. The most common instructions probably are: knit, purl, cast on, bind off, knit two together, yarn over. Of course for increases and decreases there are many different operations which work in a similar way but have slight differences (e.g. skp, k2tog).
Since in knitting many things are possible and it is unlikely that we will ever manage to create a complete list of all the possible operations you can perform in knitting, we have decided to have a very open format that allows the definition of new instructions.

Instructions are also defined in JSON format. Here is an example, the “k2tog” instruction:

[
    {
        "type" : "k2tog",
        "title" : {
            "en-en" : "Knit 2 Together"
        },
        "number of consumed meshes" : 2,
        "description" : {
            "wikipedia" : {
                "en-en" : "https://en.wikipedia.org/wiki/Knitting_abbreviations#Types_of_knitting_abbreviations"
            },
            "text" : {
                "en-en" : "Knit two stitches together, as if they were one stitch."
            }
        }
    }
]

 

Knitting patterns consist of multiple rows, which consist of multiple instructions. Furthermore, we want to define the connections between rows. This is important so we can express gaps or slits which are multiple rows long. For example, when knitting pants the two legs will be separate: they will be knit separately and their combined width will be increased in comparison to the width of the hip.
Here is an example of a pattern which specifies a cast-on in the first row, then a row where all stitches are knit; the last row is bound off.

{
  "type" : "knitting pattern",
  "version" : "0.1",
  "comment" : {
    "content" : "cast on and bind off",
    "type" : "markdown"
    },
  "patterns" : [
    {
      "id" : "knit",
      "name" : "cobo",
      "rows" : [
        {
          "id" : 1,
          "instructions" : [
            {"id": "1.0", "type": "co"},
            {"id": "1.1", "type": "co"},
            {"id": "1.2", "type": "co"},
            {"id": "1.3", "type": "co"}
          ]
        },
        {
          "id" : 2,
          "instructions" : [
            {"id": "2.0"},
            {"id": "2.1"},
            {"id": "2.2"},
            {"id": "2.3"}
          ]
        },
        {
          "id" : 3,
          "instructions" : [
            {"id": "3.0", "type": "bo"},
            {"id": "3.1", "type": "bo"},
            {"id": "3.2", "type": "bo"},
            {"id": "3.3", "type": "bo"}
          ]
        }
      ],
      "connections" : [
        {
          "from" : {
            "id" : 1
          }, 
          "to" : {
            "id" : 2
          }
        },
        {
          "from" : {
            "id" : 2
          }, 
          "to" : {
            "id" : 3
          }
        }
      ]
    }
  ]
}

Connections are defined “from” one row “to” another. The ids identify the rows. The optional attribute “start” defines the mesh where the connection starts; if “start” is not defined, the first mesh of the row is assumed. When indexing the list of meshes in a row, the first index is 1. The optional attribute “meshes” describes how many meshes will be connected, starting from the mesh defined in “start”.
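For illustration, a connection that uses the optional attributes could look like the sketch below, written as a Python dict; the exact placement of “start” and “meshes” is our reading of the description above, not a normative example.

# illustrative sketch of one entry of the "connections" list
connection = {
    "from": {"id": 1,
             "start": 2},   # begin at mesh 2 of row 1 (meshes are 1-indexed)
    "to": {"id": 2},        # connect to row 2, starting at its first mesh
    "meshes": 3,            # connect three meshes, counted from "start"
}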

The resulting parsed Python object structure looks like this:


The Python object structure for working with the parsed knitting pattern.

Each row has a list of instructions. Each instruction produces a number of meshes and consumes a number of meshes. These meshes are also the meshes that are consumed/produced by the rows.
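As a rough illustration, walking the parsed structure could look like the sketch below; the attribute names are hypothetical and may differ from the actual knittingpattern API.

# hypothetical sketch – attribute names are illustrative, not the exact API
# "pattern" is assumed to be one parsed pattern object
for row in pattern.rows:
    for instruction in row.instructions:
        print(row.id, instruction.type,
              "produces", len(instruction.produced_meshes),
              "consumes", len(instruction.consumed_meshes))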

 


Towards a unified digital approach to knitting

Our idea is to create a knitting library for a format that allows the conversion of knitting projects, patterns and tutorials. Usually, communities only focus on the knitting format for their machines. Our approach is different and should be able to support the efforts of any knitting community.

Here is our strategy to achieve this:

  • We connect to different communities to get a broader view on what their needs are.
  • Our knitting format is based on knitting instructions like knit, purl, yarn over, skp. We found a comprehensive list on Wikipedia.

Other Communities

From time to time we meet with other people who also knit and could use our software.

First, we met with Viktoria from ETIB Berlin. She taught us a lot about knitting: how she does it, and that almost everything can be created in one piece with the machine. Also, AYAB is used for lace patterns. We saw examples where she let meshes fall so that larger holes were created. Our goal is to support lace in the file format. Color patterns should be possible across sewing edges.

We are also in touch with Valentina Project. With their software we would be able to connect to yet another community and use their sewing patterns for custom-fit clothes.

We got in touch with Kniterate. They and we share a lot of goals. Because they are building a startup, they are very cautious about what they release. They focus on their open-source knitting machine first and on the software later. They have already created an editor much like we imagined ours to be, but as a web application. A way of collaborating could be that we understand their file format and see how we can support it.

Just talking about our GSoC project is worthwhile, as other people may have seen similar efforts at Maker Faires and other hacky places. We have the chance to bring communities and efforts together.

Knitting Format

A universal knitting format has many concerns:

  • Languages of users differ
  • It should be possible to knit by hand
  • Mesh sizes and wool differ
  • Different knitting machines with different abilities
  • A knitting format for exchange is never complete. A knitting format for machines must be complete.

In contrast to a knitting format for an automatic machine, it is possible to have machines operate in semi-automatic modes or just to knit by hand. In both cases, meshes could be changed in a way that was never foreseen. This is why we did not base the format on meshes and mesh types but rather on instructions – closer to the mental model of the knitters who perform instructions with their hands.

Some of the instructions are understood by the machines, some could be adapted a bit so the machine can do them automatically or faster, and some still have to be done by hand. We created a Python module for that, “knittingpattern”. We work on it in a test-driven way.

 


Introduction of the 2016 GSoC projects with AYAB

Hi, we are Kirstin and Nicco, and for the next couple of months we will be working on the AYAB project. AYAB is an abbreviation for All Yarns Are Beautiful, and the project works on the software that interfaces with the knitting machine and directs it in what to knit. You can read more about the existing software in the recent blog posts.
We want to implement two new features and expand on the functionality of the software that replaces the original firmware of the knitting machine.

We want to work on an exchangeable format that stores all information necessary for knitting e.g. a sweatshirt. It should contain the sewing pattern, the color pattern(s) defined for the different parts of the knit piece and the stitch pattern, also defined for the different parts of the knit piece.

This new information that we will have through the format will allow us to create directions for the user in case the machine cannot switch yarn colors on its own or add or remove stitches, but the user can. We want the software to show on-screen instructions, telling the user what to do before they knit the next row. This will make knitting with more than two yarn colors possible and make it easier to knit shapes where stitches need to be added or removed, which is very much needed when knitting clothes.

Our next steps are to conduct user studies to find out how people use the knitting machines and what is necessary and important. If you would like to lend us a hand, please contact us. Also, we are doing weekly hangouts in the evening, Tuesdays at 21:00 (Berlin, GMT+2).
