PSLab Code Repository and Installation

PSLab is a new addition to the FOSSASIA Science Lab. This tiny pocket science lab provides an array of equipment necessary for science and engineering experiments. It can function as an oscilloscope, waveform generator, frequency counter, programmable voltage and current source, and also as a data logger.

[Image: new front panel design of PSLab, size 62 mm × 78 mm × 13 mm]

The control and measurement functions are written in Python. PyQtgraph is used as the plotting library. We are now working on Qt-based GUI applications for various experiments.

The following are the code repositories of PSLab.

Installation

To install PSLab on a Debian-based GNU/Linux system, the following dependencies must be installed.

Dependencies
============
PyQt 4.7+, PySide, or PyQt5
python 2.6, 2.7, or 3.x
NumPy, SciPy
pyqt4-dev-tools          # for pyuic4
pyqtgraph                # plotting library
pyopengl and qt-opengl   # for 3D graphics
ipython-qtconsole        # optional
Now clone both repositories, pslab-apps and pslab.

Libraries must be installed in the following order

1. pslab-apps

2. pslab

To install, cd into the directories

$ cd <SOURCE_DIR>

and run the following (for both the repos)

$ sudo make clean
$ sudo make
$ sudo make install

Now you are ready with the PSLab software on your machine 🙂

For the main GUI (Control panel), you can run Experiments from the terminal.

$ Experiments

If the device is not connected, the following splash screen will be displayed.

[Image: splash screen – device not connected]

After clicking OK, you will get the control panel with menus for Experiments, Controls, Advanced Controls, Help, etc. (Experiments cannot be accessed unless the device is connected.)

[Image: control panel – device not connected]

The splash screen and the control panel when PSLab is connected to the PC:

[Image: splash screen – PSLab connected]
[Image: control panel – main GUI]

From this control panel one can access controls, help files, and various experiments through independent GUIs written for each experiment.

You can help
------------

Please report bugs and installation errors here.
Your suggestions to improve PSLab are welcome :)

What Next:

We are now working on a general-purpose experiment designer. It will allow selecting controls and channels and then generate a spreadsheet. Columns from this spreadsheet can be selected and plotted.


Code Quality in the knittingpattern Python Library

In our Google Summer of Code project, a part of our work is to bring knitting to the digital age. We are Kirstin Heidler and Nicco Kunzmann. Our knittingpattern library aims at being the exchange and conversion format between different representations of knit work: hand-knitting instructions, machine commands for different machines, and SVG schemata.

[Image: the schema generated by the knittingpattern library]
[Image: the original pattern schema, Cafe]

The image above was generated by this Python code:

import knittingpattern, webbrowser
example = knittingpattern.load_from().example("Cafe.json")
webbrowser.open(example.to_svg(25).temporary_path(".svg"))

So much for the context. Now about the quality tools we use:


Continuous integration

We use Travis CI [FOSSASIA] to upload packages for a specific git tag automatically. The Travis build runs under Python 3.3 to 3.5. It first builds the package and then installs it with its dependencies. To upload tags automatically, one can configure Travis, preferably with the command-line interface, to save a username and password for the Python Package Index (PyPI) [TravisDocs]. Our process for releasing a new version is the following:

  1. Increase the version in the knittingpattern library and create a new pull request for it.
  2. Merge the pull request after the tests pass.
  3. Pull and create a new release with a git tag using
    setup.py tag_and_deploy

Travis then builds the new tag and uploads it to PyPI.

With this we have basic quality assurance. Pull requests need to pass all tests before they can be merged. Travis can be configured to automatically reject a request with errors.

Documentation Driven Development

As mentioned in a blog post, documentation-driven development was something worth checking out. In our case that means writing the documentation first, then the tests, and then the code.

Writing the documentation first means thinking in the space of the mental model you have for the code. It defines the interfaces you would be happy to use. A lot of edge cases can be thought of at this point.

When writing the tests, they are often split up and no longer represent the flow of thought you had when thinking about your wishes. Tests can be seen as the glue between the code and the documentation. As with writing code to pass the tests, in the conversation between the tests and the documentation I find things I have forgotten.

When writing the code in a test-driven way, another conversation starts. I call implementing the tests a conversation because the tests tell the code that it should be different, and the code tells the tests about their inconsistencies, like misspellings and bloated interfaces.

With writing documentation first, we have the chance to have two conversations about our code, in spoken language and in code. I like it when the code hears my wishes, so I prefer to talk a bit more.

Testing the Documentation

Our documentation is hosted on Read the Docs. It should have these properties:

  1. Every module is documented.
  2. Everything that is public is documented.
  3. The documentation is syntactically correct.

These are qualities that can be tested, so they are tested. The code cannot be deployed if it does not meet these standards. We use Sphinx for building the docs, which makes it possible to test these properties in this way:

  1. For every module there exists a .rst file which automatically documents the module with autodoc.
  2. A Sphinx build outputs a list of objects that should be covered by documentation but are not.
  3. Sphinx outputs warnings throughout the build.
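
As an illustration, the first property can be checked by an ordinary test. Here is a minimal sketch, assuming the package is importable and the .rst files live in a docs/reference directory (both are assumptions, not necessarily the project’s actual layout):

import os
import pkgutil

import knittingpattern

DOCS_DIR = "docs/reference"  # assumed location of the module .rst files

def test_every_module_has_an_rst_file():
    # walk all modules of the package and require a matching .rst file
    modules = pkgutil.walk_packages(knittingpattern.__path__, "knittingpattern.")
    for _, module_name, _ in modules:
        rst = os.path.join(DOCS_DIR, module_name + ".rst")
        assert os.path.isfile(rst), "undocumented module: " + module_name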

Testing our documentation allows us to keep it at a higher quality. Many more tests could be imagined, but the basic ones already help.

Code Coverage

Codeclimate.com lets us track test coverage and see how well we are doing. It shows us the files we need to work on when we want to improve the quality of the package.

Landscape

Landscape is also free for open-source projects. It can give hints about where to improve next. It is also possible to fail pull requests if the quality decreases. Landscape shows code duplication and can run pylint. Currently, most of the style problems arise from undocumented tests.

Summary

When we started with the stricter quality assurance, the question arose whether it would only slow us down. Now, we have learned to write properly styled PEP 8 code and begin to do what pylint demands automatically. High test coverage allows us to change the underlying functionality without changing the interface, and without fear that we may break something irrecoverably. I feel like a burden has been taken from me by all these free tools for open-source software, which save me the time of setting quality assurance up myself.

Future Work

In the future we would also like to create a user interface. User interfaces are sometimes hard to test, so we plan not to put it into the package but to build it on top of the package.

Adding revisioning to SQLAlchemy Models

{ Repost from my personal blog @ https://blog.codezero.xyz/adding-revisioning-to-sqlalchemy-models }

In an application like Open Event, where a single piece of information can be edited by multiple users, it’s always good to know who changed what. One should also be able to revert to a previous version if needed. That is where revisioning comes into the picture.

We use SQLAlchemy as the database toolkit and ORM. So, we wanted a versioning tool that would work well with our existing setup. That’s when I came across SQLAlchemy-Continuum – a versioning extension for SQLAlchemy.

Installation

Installing the module is just like any other Python library. (Don’t forget to add it to your requirements.txt file, if you have one.)

pip install SQLAlchemy-Continuum
Setup

Now, it’s time to enable SQLAlchemy-Continuum for the required models.

Let’s consider an Event model.

import sqlalchemy as sa

class Event(Base):
    __tablename__ = 'events'
    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    name = sa.Column(sa.String)
    start_time = sa.Column(sa.DateTime, nullable=False)
    end_time = sa.Column(sa.DateTime, nullable=False)
    description = sa.Column(sa.Text)
    schedule_published_on = sa.Column(sa.DateTime)

We need to do three things to enable SQLAlchemy-Continuum.

  1. Call make_versioned() before the models are defined.
  2. Add __versioned__ = {} to all the models that we want to be versioned.
  3. Call configure_mappers from SQLAlchemy after declaring all the models.

import sqlalchemy as sa
from sqlalchemy_continuum import make_versioned

# Must be called before defining all the models
make_versioned()

class Event(Base):

    __tablename__ = 'events'
    __versioned__ = {}  # Must be added to all models that are to be versioned

    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    name = sa.Column(sa.String)
    start_time = sa.Column(sa.DateTime, nullable=False)
    end_time = sa.Column(sa.DateTime, nullable=False)
    description = sa.Column(sa.Text)
    schedule_published_on = sa.Column(sa.DateTime)

# Must be called after defining all the models
sa.orm.configure_mappers()

SQLAlchemy-Continuum creates two tables:

  1. events_version, which stores the version history of the Event model, linked to the transaction table via a foreign key
  2. transaction, which stores information about each transaction (the user who performed the transaction, the transaction datetime, etc.)

SQLAlchemy-Continuum also adds listeners to Event to record all create, update, delete actions.
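
Each version record also points back to its transaction, so the metadata of a change can be read off directly once an event row exists. A small sketch (attribute names follow the SQLAlchemy-Continuum documentation; the user information is only populated when a user plugin is configured):

version = event.versions[0]        # a single version record
transaction = version.transaction  # the transaction that created it
print(transaction.issued_at)       # when the change was made
print(transaction.user_id)         # who made it, if a user was recorded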

Usage

All the CRUD (Create, read, update, delete) operations can be done as usual. SQLAlchemy-Continuum takes care of creating a version record for each CUD operation. The versions can be easily accessed using the versions property that is now available in the Event model.

event = Event(name="FOSSASIA 2017", description="Open source conference in Asia")
session.add(event)  # Event added to the transaction
session.commit()  # Transaction committed and event record created

# This creates the first version record, which can be accessed
event.versions[0].name == "FOSSASIA 2017"

# Let's make some changes to the record.
event.name = "FOSSASIA 2016"
session.add(event)
session.commit()

# This creates the second version record, which can be accessed
event.versions[1].name == "FOSSASIA 2016"

# The first version record still remains and can be accessed
event.versions[0].name == "FOSSASIA 2017"
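
Reverting to a previous version is just as easy: each version object has a revert() method that copies its values back onto the parent row. A sketch based on the library’s documented API:

# Revert the event to its first version
event.versions[0].revert()
session.commit()

event.name == "FOSSASIA 2017"  # back to the original name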

So, that’s how basic versioning can be implemented in SQLAlchemy using SQLAlchemy-Continuum.

Using S3 for Cloud storage

In this post, I will talk about how we can use Amazon S3 (Simple Storage Service) for cloud storage. As you may know, S3 is a no-fuss, super easy cloud storage service based on the IaaS model. There is no limit on file size or on the number of files you can keep on S3; you are charged only for the storage and bandwidth you use. This makes S3 very popular among enterprises of all sizes and individuals.

Now let’s see how to use S3 in Python. Luckily, we have a very nice library called Boto for it. Boto is a library developed by the AWS team to provide a Python SDK for Amazon Web Services. Using it is very simple and straightforward. Here is a basic example of uploading a file to S3 using Boto –

import boto
from boto.s3.key import Key
# connect to S3 and get the bucket
conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket(BUCKET_NAME)
# the key for the item and the path of the file to upload
key = 'key/for/file'
file = '/full/path/to/file'
# create a key to keep track of our file in the storage
k = Key(bucket)
k.key = key
k.set_contents_from_filename(file)

The above example uploads a file to the S3 bucket BUCKET_NAME.

Buckets are containers which store data. The key here is the unique key for an item in the bucket; every item in the bucket is identified by a unique key assigned to it. The file can then be downloaded from the URL BUCKET_NAME.s3.amazonaws.com/{key}. It is therefore essential to choose key names smartly so that you don’t end up overwriting an existing item on the server.
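
Downloading is just the reverse operation. A minimal sketch, reusing the conn and bucket objects from the snippet above:

# fetch an existing item from the bucket by its key
k = Key(bucket)
k.key = 'key/for/file'
k.get_contents_to_filename('/where/to/save/the/file')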

In the Open Event project, I thought of a scheme that allows us to avoid conflicts. It relies on using the IDs of items to distinguish them and goes as follows (a code sketch follows the list) –

  • When uploading a user avatar, the key should be ‘users/{userId}/avatar’
  • When uploading an event logo, the key should be ‘events/{eventId}/logo’
  • When uploading session audio, the key should be ‘events/{eventId}/sessions/{sessionId}/audio’
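
In code, this scheme boils down to a few small helpers. Here is a sketch of the idea with hypothetical function names, not Open Event’s actual code:

def avatar_key(user_id):
    return 'users/{}/avatar'.format(user_id)

def logo_key(event_id):
    return 'events/{}/logo'.format(event_id)

def session_audio_key(event_id, session_id):
    return 'events/{}/sessions/{}/audio'.format(event_id, session_id)

# avatar_key(42) == 'users/42/avatar': the key is unique per user, so a
# re-upload simply overwrites the old avatar instead of colliding with
# any other item in the bucket.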

Note that to store the user ‘avatar’, I am setting the key as /avatar and not /avatar.extension. This is because if a user uploads pictures in different formats, we would end up storing different copies of the avatar for the same user. This is nice, but its limitation is that downloading the file from the URL will give a file without an extension. To solve this issue, we can use the Content-Disposition header.

k.set_contents_from_filename(
    file,
    headers={
        'Content-Disposition': 'attachment; filename=filename.extension'
    }
)

So now when someone tries to download the file from that link, they will get the file with an extension instead of a no-extension “Choose what you want to do” file.

This covers the basics of using S3 for your Python project. You may explore Boto’s S3 documentation to find other interesting functions, like deleting a folder or copying one folder to another.

Also don’t forget to have a look at the awesome documentation we wrote for the Open Event project. It provides a more pictorial and detailed guide on how to set up S3 for your project.


{{ Repost from my personal blog http://aviaryan.in/blog/gsoc/s3-for-storage.html }}

Knitting Pattern Conversion

[Image: conversion diagram]

We can convert knitting patterns to SVG (middle), which proves the concept but is still different from the original (right).

Our goal is to create a knit-work exchange format. This includes conversion to a schematic view of the knitting pattern as SVG – to make it endlessly scalable and allow conversions to PNG, PDF, and paper.

This week we finished the prototype of the SVG conversion. The positions are a bit off and instructions are placed on top of each other, but most of the work is done.

We are also able to load and save knitting patterns as PNG files.

[Images: (1) the original PNG, (2) the PNG saved again after conversion, (3) the SVG of the knit instructions]

We loaded a PNG (1), converted it to a knitting pattern and then saved it again as PNG (2). This way we pave our way towards using the AYAB software and actually knitting the pattern. We can also convert the knitting pattern to an SVG consisting of all the knit instructions (3). Here is the code for it in version 0.0.8.

>>> import knittingpattern
>>> from knittingpattern.convert.image_to_knittingpattern import *
>>> convert_image_to_knitting_pattern.path("head-band.png").temporary_path(".json")
"head-band.json"
>>> k = knittingpattern.load_from_path("head-band.json")
>>> k.to_svg(10).temporary_path(".svg")
"head-band.svg"

Here you can see a proof of concept video:


New Tools and Sensors for FOSSASIA PSLab and ExpEYES

‘ExpEYES: Open Source Science Lab’ is a project FOSSASIA has supported since 2014. As a part of GSoC 2014 and GSoC 2015 we started actively developing the Pocket Science Lab for open science education. The objective is to create the most affordable open-source pocket lab, which can help millions of students and citizen scientists all over the world learn science by exploring and experimenting.

We are currently working on adding new tools and sensors and also on developing a new lab interface with higher capabilities to be added to the FOSSASIA Science Lab. My goal for this year’s project is to add new experiments to the ExpEYES library. I have also started working on the new lab interface.

Here is my kitchen converted to a workspace, my GSoC lab :)

Linear air track for mechanics experiments, and a super-critical dryer which uses PSLab for temperature control and monitoring, along with other instruments.

In May 2016, I spent a few days at IUAC (Inter University Accelerator Centre), New Delhi, to work with Dr. Ajith Kumar (inventor of ExpEYES). The time spent at IUAC was most useful, as we got help and input from many people there, as well as from the participating teachers of the ExpEYES training programme. We designed some new experiments to be done with ExpEYES and planned improvements in the mechanics experiments, especially the experiments on the linear air track. We also started working on the new lab interface. Thanks to Jithin B.P. for helping us out with all the development. With these continuous collective efforts we now have a new lab interface: PSLab, the Pocket Science Lab from FOSSASIA. Here I am trying to give all the details of the equipment, the development done so far and the things planned for the next couple of months.


PSLab: Pocket Science Lab from FOSSASIA

The size of PSLab is 62 mm × 78 mm × 13 mm. The front panel will be slightly different from the one in the picture: it will have a little extra portion in the top right corner to accommodate 90-degree connector pins.
We will finalize the front panel design in a week and get the panels screen printed. Sample kits will be sent to my mentors for testing and suggestions.

Main Features and GUIs

PSLab can function as an oscilloscope, data logger, waveform generator, frequency counter, programmable voltage source, etc. It can be plugged into the USB port of a PC or of SBCs like the Raspberry Pi. PSLab has:

  • 2 variable sine waves
  • 4 programmable square wave generators
  • 3 programmable voltage sources
  • A programmable constant current source
  • 4 channels for fetching data
  • Sensor input
  • Berg strip sockets, etc.

We are also working on adding a wireless sensor interface. This will enable PSLab to access various sensors using a wireless module.

PSLab Code Repository, Installation and Communicating with PSLab

All the programs are written in Python. PyQt is used for GUI design and PyQtgraph as the plotting library. I have created two repositories for PSLab:

  • https://github.com/fossasia/pslab-apps: GUI programs and templates for various experiments (depends on python-pyqtgraph (>=0.9.10), python-qt4 (>=4.10), ipython (>=1.2), ipython-qtconsole (>=1.2))
  • https://github.com/fossasia/pslab: the Python library for communicating with the PSLab hardware

In addition to the above development work, we also conducted a few demonstration sessions at science and engineering colleges in Belgaum, India. The feedback from teachers and students is really helpful in improving the kit and modifying the GUIs for a better user experience.

Next Steps/To Do

  • Add new experiments to PSLab
  • Complete Voltammetry module for ExpEYES
  • Complete Unified GUI for all  Mechanics Experiments using ExpEYES
  • Documentation for PSLab

We are getting about 25 PSLab kits ready in the first batch by the end of this month, thanks to funding from GSoC 2015. We need to work on the [email protected] website. The next immediate plan is to get about 50-100 kits ready and to update the website with all the information and user manuals before FOSSASIA 2017. I am also working on a plan to reach out to the maximum number of science and engineering students who will definitely benefit from PSLab.

A look into the knitting pattern format

We are currently working on a format that allows the exchange of knitting instructions independent of how the pattern is going to be knit: by a machine like Brother or Pfaff, or by hand.

For this to be possible we need the format to be as general as possible and have no ties to a specific form of knitting. At some point the transition to a format specifically designed for a certain machine is necessary. However, we believe that it is possible to have a format so general that instructions for all types of machines and for hand knitting could be generated from it.

We have decided to use JSON as the language to describe the format, because it is machine readable and human readable at the same time.
The structure of our format follows the rows in knitting.

Our first thought was to create a format that would describe how meshes are connected and how the thread travels. This would allow for great flexibility, and it should be possible to represent everything this way. However, we decided against it, because we think this format would be quite complicated (different orientations and twists of meshes are possible, different threads for multiple colors…) and would get quite big very quickly, because of all the different properties of each mesh and because of the sheer number of meshes. Furthermore, and this point is probably more important, knitters do not think this way. With a format like that, it would not be easy for human beings to understand what was happening.

Whenever I knit by hand, I never think about how all the meshes are connected by the single thread I am using. I always think about which operations I am performing in each row. Instructions for creating patterns in knitting are also written this way: they give the knitter a set of knitting instructions to perform and possibly repeat. We have concluded that most knitters think in knitting operations performed rather than in connections between meshes.

[Image: Knitting instructions from Garnstudio’s Café]

Therefore we have decided to base our format on knitting instructions/operations. The most common instructions are probably: knit, purl, cast on, bind off, knit two together, yarn over. Of course, for increases and decreases there are many different operations which work in similar ways but have slight differences (e.g. skp, k2tog).
Since many things are possible in knitting and it is unlikely that we will ever manage to create a complete list of all the operations one can perform, we have decided on a very open format that allows the definition of new instructions.

Instructions are also defined in JSON format. Here is an example, the “k2tog” instruction:

[
    {
        "type" : "k2tog",
        "title" : {
            "en-en" : "Knit 2 Together"
        },
        "number of consumed meshes" : 2,
        "description" : {
            "wikipedia" : {
                "en-en" : "https://en.wikipedia.org/wiki/Knitting_abbreviations#Types_of_knitting_abbreviations"
            },
            "text" : {
                "en-en" : "Knit two stitches together, as if they were one stitch."
            }
        }
    }
]


Knitting patterns consist of multiple rows, which consist of multiple instructions. Furthermore, we want to define the connections between rows. This is important so that we can express gaps or slits which are multiple rows long. For example, when knitting pants, the two legs will be separate: they will be knit separately, and their combined width will be increased in comparison to the width of the hip.
Here is an example of a pattern which specifies a cast on in the first row, then a row where all stitches are knit; the last row is bound off.

{
  "type" : "knitting pattern",
  "version" : "0.1",
  "comment" : {
    "content" : "cast on and bind off",
    "type" : "markdown"
    },
  "patterns" : [
    {
      "id" : "knit",
      "name" : "cobo",
      "rows" : [
        {
          "id" : 1,
          "instructions" : [
            {"id": "1.0", "type": "co"},
            {"id": "1.1", "type": "co"},
            {"id": "1.2", "type": "co"},
            {"id": "1.3", "type": "co"}
          ]
        },
        {
          "id" : 2,
          "instructions" : [
            {"id": "2.0"},
            {"id": "2.1"},
            {"id": "2.2"},
            {"id": "2.3"}
          ]
        },
        {
          "id" : 3,
          "instructions" : [
            {"id": "3.0", "type": "bo"},
            {"id": "3.1", "type": "bo"},
            {"id": "3.2", "type": "bo"},
            {"id": "3.3", "type": "bo"}
          ]
        }
      ],
      "connections" : [
        {
          "from" : {
            "id" : 1
          }, 
          "to" : {
            "id" : 2
          }
        },
        {
          "from" : {
            "id" : 2
          }, 
          "to" : {
            "id" : 3
          }
        }
      ]
    }
  ]
}

Connections are defined “from” one row “to” another. The ids identify the rows. The optional attribute “start” defines the mesh where the connection starts; if it is not defined, the first mesh of the row is assumed. When indexing the list of meshes in a row, the first index is 1. The optional attribute “meshes” describes how many meshes will be connected, starting from the mesh defined in “start”.

The resulting parsed Python object structure looks like this:

[Image: row model – the Python object structure for working with the parsed knitting pattern]

Each row has a list of instructions. Each instruction produces a number of meshes and consumes a number of meshes. These meshes are also the meshes that are consumed/produced by the rows.
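
To make the produced/consumed relation concrete, here is a sketch that counts meshes per row. It works directly on the JSON shown above rather than on the library’s classes; the file name is hypothetical, and the numbers follow the definitions in this post (a plain knit consumes and produces one mesh, a cast on consumes none, a bind off produces none):

import json

with open("cobo.json") as f:  # the cast on / bind off pattern from above
    pattern = json.load(f)["patterns"][0]

CONSUMED = {"co": 0}  # a cast on creates meshes from nothing
PRODUCED = {"bo": 0}  # a bind off ends meshes

for row in pattern["rows"]:
    types = [i.get("type", "knit") for i in row["instructions"]]
    consumed = sum(CONSUMED.get(t, 1) for t in types)
    produced = sum(PRODUCED.get(t, 1) for t in types)
    print("row", row["id"], "consumes", consumed, "and produces", produced)

For the example above this prints that row 1 consumes 0 and produces 4 meshes, row 2 consumes 4 and produces 4, and row 3 consumes 4 and produces 0 – exactly the meshes along which the rows are connected.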


Towards a unified digital approach to knitting

Our idea is to create a knitting library with a format that allows the conversion of knitting projects, patterns and tutorials. Usually, communities only focus on the knitting format for their own machines. Our approach is different: it should be able to support any knitting community’s efforts.

Here is our strategy to achieve this:

  • We connect to different communities to get a broader view on what their needs are.
  • Our knitting format is based on knitting instructions like knit, purl, yarn over, skp. We found a comprehensive list on Wikipedia.

Other Communities

From time to time we meet with other people who also knit and could use our software.

First, we met with Viktoria from ETIB Berlin. She taught us a lot about knitting and how she does it – almost everything can be created in one piece with the machine. Also, AYAB is used for lace patterns: we saw examples where she let meshes fall so that larger holes were created. Our goal is to support laces in the file format. Color patterns should be possible across sewing edges.

We are also in touch with Valentina Project. With their software we would be able to connect to yet another community and use their sewing patterns for custom-fit clothes.

We got in touch with Kniterate. They and we share a lot of goals. Because they are creating a startup, they are very cautious about what they release. They focus on their open-source knitting machine first and on the software later. They have already created an editor much like we imagined ours to be, but as a web application. One way of collaborating could be to understand their file format and see how we can support it.

Even just talking about our GSoC project is worthwhile, as other people may have seen similar efforts at Maker Faires and other hacker venues. We have the chance to bring communities and efforts together.

Knitting Format

A universal knitting format has many concerns:

  • The languages of users differ
  • It should be possible to knit by hand
  • Mesh sizes and wool differ
  • Different knitting machines have different abilities
  • A knitting format for exchange is never complete; a knitting format for machines must be complete

In contrast to a knitting format for an automatic machine, it is possible to have machines operate in semi-automatic modes or just to knit by hand. In both cases, meshes can be changed in ways that were never foreseen. This is why we did not base the format on meshes and mesh types but rather on instructions – closer to the mental model of the knitters, who perform instructions with their hands.

Some of the instructions are understood by the machines, some could be adapted a bit so the machine can do them automatically or faster, and some still need to be done by hand. We created a Python module for this, “knittingpattern”. We work on it in a test-driven way.
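
To give a flavor of this test-driven style: a test exercises the public interface exactly as a user would. Here is a sketch, reusing the example calls shown earlier in this series (the zoom value 25 is simply the one used before):

import knittingpattern

def test_example_pattern_converts_to_svg():
    # load a shipped example pattern and convert it, as a user would
    pattern = knittingpattern.load_from().example("Cafe.json")
    path = pattern.to_svg(25).temporary_path(".svg")
    assert path.endswith(".svg")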


A low-cost laboratory for everyone: Sensor Plug-ins for ExpEYES to measure temperature, pressure, humidity, wind speed, acceleration, tilt angle and magnetic field

Working on ExpEYES over the last few months has been an amazing journey, and I am grateful for the support of Mario Behling, Hong Phuc Dang and Andre Rebentisch at FOSSASIA. I had a lot of learning adventures experimenting and exploring new ideas to build sensor plug-ins for ExpEYES. There were some moments which were disappointing, and there were other moments which brought the joy of creating sensor plug-ins, add-on devices and GUI improvements for ExpEYES.

My GSoC gallery of sensors and devices: here are all the sensors I played with for PSLab.

The complete list of sensor plug-ins developed is available at http://gnovi.edublogs.org/2015/08/21/gsoc-2015-with-fossasia-list-of-sensor-plug-ins-developed-for-expeyes/

Sensor Plugins for ExpEYES

The aim of my project is to develop new sensor plug-ins for ExpEYES to measure a variety of parameters like temperature, pressure, humidity, wind speed, acceleration, tilt angle and magnetic field, and to provide low-cost open-source laboratory equipment for students and citizen scientists all over the world.

We are enhancing the scope of ExpEYES so it can be used to perform several new experiments. Developing a low-cost stand-alone data acquisition system that can be used for weather monitoring or environmental studies is another objective of our project.

I am happy to see that things have taken good shape, with additional gas sensors added which were not included in the initial plan. We have almost achieved all the objectives of the project, except for some difficulties in calibrating sensor outputs and in documentation; these issues will be solved in a couple of days.

Experimenting with different sensors in my kitchen laboratory

I started exploring and experimenting with different sensors. After preliminary studies, I procured analog and a few digital sensors for measuring weather parameters like temperature, relative humidity and barometric pressure. A few other sensors, like a low-cost piezoelectric sensor, the ADXL335 accelerometer, a Hall effect magnetic sensor and a gyro module, were also added to my kitchen laboratory. We then decided to add gas sensors for detecting carbon monoxide, LPG and methane.

With this development, ExpEYES can now be used for pollution monitoring and also in safety systems in physics and chemistry laboratories. Work on a low-cost dust sensor is in progress.

Challenges, Data Sheet, GUI programs

I had to spend a lot of time getting the sensor components, studying their data sheets, soldering and setting them up with ExpEYES, and then a little time writing GUI programs. I worked almost 8 to 10 hours every evening after college hours (sometimes whole nights), and now things have taken good shape.

Thanks to my mentor at FOSSASIA for pushing me, sometimes with strict words. I was able to add many new sensor plug-ins to ExpEYES, and I will also be working on light sensors so that the Pocket Science Lab can be used in optics. With these new sensor plug-ins, one can replace many costly devices in physics, chemistry, biology and also geology labs.

What’s next? My Plan for next steps

  • Calibration of sensor data

  • Prototyping stand-alone weather station

  • Pushing data to Loklak server

  • Work on [email protected] website

  • FOSSASIA live CD based on Lubuntu with ExpEYES and other educational software

  • Set up documentation for possible science experiments with the sensor plug-ins and low-cost, open source apparatus

Importance of the test cases for the KnitLib

Having test cases is very important, especially for a library like KnitLib, because with test cases we can clearly test particular features. In KnitLib, the test cases show how the library should be checked. Test cases also help new contributors understand the KnitLib.

There are several test suites for the current KnitLib implementation, such as tests on AYAB communication, tests on the AYAB image, tests on the command-line interface, tests on the KnitPat module and tests on the knitting plugin.

For example, in the AYAB communication module several important functions have been tested: closing the serial port, opening the serial port at a baud rate of 115200 (which AYAB uses), sending the start message to the controller, sending a line of data via the serial port, and reading a line from the serial connection. Most of these tests were done using mocks. Mock is a Python library for testing: using mocks, we can replace parts of our system with mock objects and make assertions about how they have been used. We can easily represent complex objects without having to manually set up stubs during a test.
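
As a self-contained illustration of this style, here is a sketch with a made-up minimal class (not KnitLib’s actual code) showing how mocking the serial port lets us assert that it is opened at AYAB’s baud rate without any hardware attached:

from unittest import mock  # the standalone 'mock' package works the same way

import serial  # pyserial


class AyabSerial(object):
    """Minimal stand-in for KnitLib's serial communication (illustration only)."""

    def open(self, port):
        # open the serial port at the baud rate AYAB uses
        self.port = serial.Serial(port, 115200)


def test_open_uses_ayab_baud_rate():
    with mock.patch("serial.Serial") as serial_mock:
        AyabSerial().open("/dev/ttyACM0")
        # the port object was created exactly once, with the right arguments
        serial_mock.assert_called_once_with("/dev/ttyACM0", 115200)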

It is very important to further improve the test cases for KnitLib, because with good test cases we can guarantee that KnitLib’s features and functionality work as intended.

Regards,

Shiluka.