Communication by pySerial python module in PSLab

In the PSLab Desktop App we use Python for communication between the PC and the PSLab device. The PSLab device is connected to the PC via a USB cable, which also supplies power to the hardware from the host. We need well-structured methods to establish communication between the PC and the PSLab device, and this is where the pySerial module comes in. We will discuss how to communicate efficiently from a PC to a device like the PSLab using the pySerial module.

How to read and write data back to the PSLab device?

pySerial is a Python module used to communicate serially with microcontroller devices like the Arduino, Raspberry Pi and PSLab (Pocket Science Lab). Serial data transfer becomes easier with this module: you just open a port and obtain a serial object, which provides useful and powerful functionality. Users can send a string (which is an array of bytes) or any other data type, since all data types can be expressed as byte strings using the struct module in Python, and can read a specific number of bytes or read until a specific character like '\n' is encountered. We are using this module to create custom read and write functions.

How to install pySerial and obtain a serial object for communication?

You can install pySerial using pip with the following command:

pip install pyserial

Once it’s installed, we can import it in our Python script for use.

Obtain Serial Object

In Linux

>>> import serial
>>> ser = serial.Serial('/dev/ttyUSB0')

In Windows

>>> ser = serial.Serial()
>>> ser.baudrate = 19200
>>> ser.port = 'COM1'

Or

>>> ser = serial.Serial('COM1', 19200)

You can specify other properties like timeout, stopbits, etc. in the Serial constructor.

The complete list of parameters is available here. This "ser" object is an instance of the Serial class and provides all of pySerial's functionality through its interface. In PSLab we obtain a serial object and implement custom methods on top of it to handle communication that pySerial doesn't provide directly; for example, a function to get the version of the connected PSLab device. Inside the version-read function we send some bytes to the device in order to obtain the version string from the device as a byte response.
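
For illustration, such a version-read wrapper could look like the sketch below. The two command bytes are the ones used in the example later in this post; the port name and baud rate are assumptions that depend on your setup.

import struct
import serial

ser = serial.Serial('/dev/ttyUSB0', 1000000, timeout=1)  # port/baud depend on your setup

def get_version():
    ser.write(struct.pack('B', 11))  # command group byte
    ser.write(struct.pack('B', 5))   # command byte within the group
    return ser.readline()            # device replies with a '\n'-terminated byte string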

What goes under the hood?

We send a sequence of bytes to the PSLab device; every sequence of bytes corresponds to a unique function already written in the device's firmware. The device recognises the function and responds accordingly.

Let’s look at code to understand it better.

ser.write(struct.Struct('B').pack(11))  #  Sends 11 as a byte string
ser.write(struct.Struct('B').pack(5))   #  Sends 5 as a byte string
x = ser.readline()                      #  Reads bytes until '\n' is encountered

To understand packing and unpacking using the struct module, you can read my other blog post, Packing And Unpacking Data in JAVA, in which I discussed packing and unpacking data as byte strings and touched a bit on how it's done in Python.
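
As a quick standalone illustration of the round trip (plain Python, nothing PSLab-specific):

import struct

packed = struct.Struct('B').pack(11)          # -> b'\x0b', a single unsigned byte
value = struct.Struct('B').unpack(packed)[0]  # -> 11 again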

You can also specify how many bytes you want to read, as shown in the code below for 100 bytes:

x = ser.read(100)

After your communication is complete you can simply close the port by:

ser.close()

Based on these basic interface methods, more complex functions can be written to handle your specific needs. More details on how to implement custom methods are available in the python-communication-library of PSLab, which uses pySerial for communication between the client and the PSLab device.

As an example of a custom read function, suppose I want to write a function to read an int from the device. An int is 2 bytes, as the firmware is written in C, so we read 2 bytes from the device and unpack them on the client side, i.e. on the PC. For more such custom functions refer to packet_handler.py of the PSLab Python communication library.

def getInt(self):
    """
    reads two bytes from the serial port and
    returns an integer after combining them
    """
    ss = ser.read(2)  # reading 2 bytes from the serial object
    try:
        if len(ss) == 2:
            return CP.ShortInt.unpack(ss)[0]  # unpacking bytes to make an int
    except Exception as ex:
        self.raiseException(ex, "Communication Error , Function : get_Int")


Documenting Open Event API Using API-Blueprint

FOSSASIA‘s Open Event Server API documentation is done using API Blueprint. The API Blueprint language is a format used to describe an API in a blueprint file (or a set of files). To follow up with the blueprint, the Apiary editor is used. This editor is responsible for rendering the API blueprint and printing the result in a user-readable, documented-API format. We create the API blueprint manually.

Using API Blueprint:
We create the API blueprint by first adding the name and metadata for the API we aim to design. This step looks like this:

FORMAT: V1
HOST: https://api.eventyay.com

# Open Event API Server

The Open Event API Server

# Group Authentication

The API uses JWT Authentication to authenticate users to the server. For authentication, you need to be a registered user. Once you have registered yourself as a user, you can send a request to get the access_token. You then need to use this access_token in the Authorization header while sending a request, in the following manner: `Authorization: JWT <access_token>`


An API blueprint starts with the metadata; here FORMAT and HOST are the defined metadata. The FORMAT keyword specifies the version of API Blueprint. HOST defines the host for the API.

Headings start with # and the first heading is regarded as the name of the API.

NOTE – All headings start with one or more # symbols. The number of symbols indicates the level of the heading: one # = top level, ## = second level, and so on. This is in compliance with normal markdown format.

Following the heading section comes the description of the API. Further headings are used to break up the description section.

Resource Groups:

By using the group keyword at the start of a heading, we create a group of related resources. Just like in the snippet below, we have created a Group Users.

# Group Users

For using the API you (mostly) need to register as a user. Registering gives you access to all non-admin API endpoints. After registration, you need to create your JWT access token to send requests to the API endpoints.


| Parameter | Description | Type | Required |
|:----------|-------------|------|----------|
| `name`  | Name of the user | string | - |
| `password` | Password of the user | string | **yes** |
| `email` | Email of the user | string | **yes** |


Resources:

In the Group Users we have created a resource, Users Collection. The heading specifies, inside square brackets after the heading, the URI used to access the resource. We have used parameters in the resource URI here, which brings us to how to add parameters to the URI. The code below shows how to add parameters to the resource URI.

## Users Collection [/v1/users{?page%5bsize%5d,page%5bnumber%5d,sort,filter}]
+ Parameters
    + page%5bsize%5d (optional, integer, `10`) - Maximum number of resources in a single paginated response.
    + page%5bnumber%5d (optional, integer, `2`) - Page number to be fetched for the paginated response.
    + sort (optional, string, `email`) - Sort the resources according to the given attribute in ascending order. Append '-' to sort in descending order.
    + filter (optional, string, ``) - Filter according to the flask-rest-jsonapi filtering system. Please refer to http://flask-rest-jsonapi.readthedocs.io/en/latest/filtering.html for more.


Actions:

An action is specified with a sub-heading inside a resource with the name of the action, followed by the HTTP method inside square brackets.

Before we go further, let us discuss what a payload is. A payload is an HTTP transaction message including its discussion and any additional assets, such as an entity-body validation schema.

There are two payloads inside an Action:

  1. Request: It is a payload containing one specific HTTP Request, with Headers and an optional body.
  2. Response: It is a payload containing one HTTP Response.

A payload may have an identifier: a string for a request payload or an HTTP status code for a response payload.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)


Types of HTTP methods for Actions:

  • GET – In this action, we simply send header data like Accept and Authorization and no body. Along with it we can send some GET parameters like page[size]. There are two cases for GET: List and Detail. For example, if we consider users, a GET for List retrieves information about all users in the response, while Detail retrieves information about a particular user.

The API Blueprint implementations of both the GET List and Detail requests and responses are as follows.

### List All Users [GET]
Get a list of Users.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
            "meta": {
                "count": 2
            },
            "data": [
                {
                    "attributes": {
                        "is-admin": true,
                        "last-name": null,
                        "instagram-url": null,


### Get Details [GET]
Get a single user.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
            "data": {
                "attributes": {
                    "is-admin": false,
                    "last-name": "Doe",
                    "instagram-url": "http://instagram.com/instagram",


  • POST – In this action, apart from the header information, we also need to send data. The data must conform to the jsonapi specification. A POST body for the users API would look something like this:
### Create User [POST]
Create a new user using an email, password and an optional name.

+ Request (application/vnd.api+json)

    + Headers

            Authorization: JWT <Auth Key>

    + Body

            {
              "data":
              {
                "attributes":
                {
                  "email": "example@example.com",
                  "password": "password",


A POST request with this data would create a new entry in the table and then return, in jsonapi format, the particular entry that was made into the table along with the id assigned to the new entry.

  • PATCH – In this action, we change or update an already existing entry in the database. So it has header data like all other requests and body data which is almost similar to POST, except that it also needs to mention the id of the entry we are trying to modify.
### Update User [PATCH]
+ `id` (integer) - ID of the record to update **(required)**

Update a single user by setting the email, password and/or name.

The authorized user should be the same as the user in the request body, or must be an admin.

+ Request (application/vnd.api+json)

    + Headers

            Authorization: JWT <Auth Key>

    + Body

            {
              "data": {
                "attributes": {
                  "password": "password1",
                  "avatar_url": "http://example1.com/example1.png",
                  "first-name": "Jane",
                  "last-name": "Dough",
                  "details": "example1",
                  "contact": "example1",
                  "facebook-url": "http://facebook.com/facebook1",
                  "twitter-url": "http://twitter.com/twitter1",
                  "instagram-url": "http://instagram.com/instagram1",
                  "google-plus-url": "http://plus.google.com/plus.google1",
                  "thumbnail-image-url": "http://example1.com/example1.png",
                  "small-image-url": "http://example1.com/example1.png",
                  "icon-image-url": "http://example1.com/example1.png"
                },
                "type": "user",
                "id": "2"
              }
            }

Just like with POST, after we have updated our entry, we get back the newly updated database entry as the response.

  • DELETE – In this action, we delete an entry from the database. The entry in our case is soft-deleted by default, which means that instead of deleting it from the database, we set the deleted_at column to the time of deletion. For deleting, we just need to send header data and a DELETE request to the proper endpoint. If deleted successfully, we get a response with the message “Object successfully deleted”.
### Delete User [DELETE]
Delete a single user.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
          "meta": {
            "message": "Object successfully deleted"
          },
          "jsonapi": {
            "version": "1.0"
          }
        }


How do we check the documentation after manually writing all this? We can use the apiary website to render it, or simply use a different renderer. How? Check out my next blog post on apiary and aglio.

Learn more about API Blueprint here: https://apiblueprint.org/


Trigger Controls in Oscilloscope in PSLab

The PSLab Desktop App features an oscilloscope. Modern oscilloscopes found in laboratories support a lot of advanced features, and the addition of trigger controls was one attempt at adding such an advanced feature to the PSLab oscilloscope. As the current implementation of the trigger is not robust enough, this feature helps in better stabilisation of waveforms.

Captured waveforms often face the problem of distortion, and the trigger helps to solve it. The trigger is an essential oscilloscope feature for signal characterisation, as it synchronises the horizontal sweep of the oscilloscope to the proper point of the signal. The trigger control enables users to stabilise repetitive waveforms as well as capture single-shot waveforms. By repeatedly displaying a similar portion of the input signal, the trigger makes a repetitive waveform look static. To visualise how an oscilloscope display looks with and without a trigger, see the figures below.

Fig 1: (a) Without trigger  (b) With trigger

Fig 1(a) shows the actual waveform received by the oscilloscope; it is confusing to interpret because multiple waveforms overlap. In Fig 1(b) the trigger control stabilises the waveform and captures just one trace.

In general, the commonly used trigger modes in laboratory oscilloscopes are:

  • Auto – This trigger mode allows the oscilloscope to acquire a waveform even when it does not detect a trigger condition. If no trigger condition occurs while the oscilloscope waits for a specific period (as determined by the time-base setting), it will force itself to trigger.
  • Normal – The Normal mode allows the oscilloscope to acquire a waveform only when it is triggered. If no trigger occurs, the oscilloscope will not acquire a new waveform, and the previous waveform, if any, will remain on the display.
  • Single – The Single mode allows the oscilloscope to acquire one waveform each time you press the RUN button and the trigger condition is detected.
  • Scan – The Scan mode continuously sweeps the waveform from left to right.

Implementing Trigger function in PSLab

PSLab has a built-in basic trigger control in the configure_trigger method in sciencelab.py. The method gets called when the trigger is enabled in the GUI. The trigger is activated when the incoming wave reaches a certain voltage threshold, and PSLab also provides the option of triggering on either the rising or the falling edge. The trigger is especially useful in experiments handling waves like sine waves, square waves etc., where it helps to get a clear picture.

In order to initiate the trigger in the PSLab desktop app, the configure_trigger method in sciencelab.py is called. The method takes some input parameters, but they are optional; if values are not specified, defaults are assumed.

def configure_trigger(self, chan, name, voltage, resolution=10, **kwargs):
    prescaler = kwargs.get('prescaler', 0)
    try:
        self.H.__sendByte__(CP.ADC)
        self.H.__sendByte__(CP.CONFIGURE_TRIGGER)
        self.H.__sendByte__((prescaler << 4) | (1 << chan))
        if resolution == 12:
            level = self.analogInputSources[name].voltToCode12(voltage)
            level = np.clip(level, 0, 4095)
        else:
            level = self.analogInputSources[name].voltToCode10(voltage)
            level = np.clip(level, 0, 1023)

        if level > (2 ** resolution - 1):
            level = (2 ** resolution - 1)
        elif level < 0:
            level = 0

        self.H.__sendInt__(int(level))  # Trigger level
        self.H.__get_ack__()
    except Exception as ex:
        self.raiseException(ex, "Communication Error , Function : " + inspect.currentframe().f_code.co_name)

The method takes the following parameters in the method call

  • chan – Channel: 0, 1, 2 or 3, corresponding to the channels being recorded by the capture routine (not the analog inputs).
  • name – The name of the channel. ‘CH1’… ‘V+’.
  • voltage – The voltage level that should trigger the capture sequence (in Volts).
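
Putting it together, a minimal usage sketch could look like the code below. It assumes a connected device and the PSL package from pslab-python; the exact import path and capture call may differ between versions.

from PSL import sciencelab

I = sciencelab.connect()            # open the serial link to the device
I.configure_trigger(0, 'CH1', 1.5)  # trigger when CH1 crosses 1.5 V
x, y = I.capture1('CH1', 1000, 10)  # then capture 1000 samples, 10 us apart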

A similar feature will also be used in the oscilloscope in the Android app, with the code corresponding to this method written in Java in ScienceLab.

Additional Resources

  1. Read more about Trigger here – http://www.radio-electronics.com/info/t_and_m/oscilloscope/oscilloscope-trigger.php
  2. Learn more about trigger modes in oscilloscopes – https://www.picotech.com/library/oscilloscopes/advanced-digital-triggers
  3. PSLab Python repository to know the underlying code – https://github.com/fossasia/pslab-python



Improving Custom PyPI Theme Support In Yaydoc

Yaydoc has supported custom themes since nearly its inception. Themes which it could not find locally, it would automatically try to install via pip, setting up appropriate metadata about them in the generated conf.py. It was one of the first major enhancements we provided compared to using bare sphinx to generate documentation. Since then, a large number of features have been added to ease the process of documentation generation, but the core theming aspects have remained unchanged.

To use a theme, sphinx needs the exact name of the theme and the absolute path to it. To obtain this metadata, the existing implementation accessed the __file__ attribute of the imported package to get the absolute path to the __init__.py file, a necessary element of all python packages. From there we searched for a file named theme.conf; the directory containing that file was our required theme.

There were a few mistakes in our earlier implementation. For starters, we assumed that the distribution name of the theme in PyPI and the package name which should be imported would be the same. This is generally true but not guaranteed. One such theme from PyPI is Flask-Sphinx-Themes. While you need to install it using

pip install Flask-Sphinx-Themes

to import it in a module, one needs to write

import flask_sphinx_themes

This led to build errors when specific themes like this were used. To solve this, we used the pkg_resources package. It allows us to get various metadata about a package in an abstract way, without needing to specifically handle whether the package is zipped or not.

import os
import pkg_resources

try:
    dist = pkg_resources.get_distribution('{{ html_theme }}')
    top_level = list(dist._get_metadata('top_level.txt'))[0]
    dist_path = os.path.join(dist.location, top_level)
except (pkg_resources.DistributionNotFound, IndexError):
    print("\nError with distribution {0}".format('{{ html_theme }}'))
    html_theme = 'fossasia_theme'
    html_theme_path = ['_themes']

The idea here is that instead of searching for __init__.py, we read the name of the top level directory from the first entry of top_level.txt, a file created by setuptools when installing the package. We build the path by joining the location attribute of the Distribution object with the name of the top level directory. The advantage of this approach is that we don't need to import anything, and thus no longer need to know the exact package name.
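
The same lookup can be tried standalone. A small sketch, assuming some pip-installed theme such as sphinx_rtd_theme is present; note that _get_metadata is technically a private pkg_resources API:

import os
import pkg_resources

dist = pkg_resources.get_distribution('sphinx_rtd_theme')
top_level = list(dist._get_metadata('top_level.txt'))[0]
print(os.path.join(dist.location, top_level))  # absolute path to the theme package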

With this update, support for custom themes has been greatly improved.


Using a YAML file to read configuration options in Yaydoc

Yaydoc provides access to a lot of configurable variables which can be set as per requirements to configure various sections of the build process. You can see the entire list of variables on the project's homepage. Until now, the only way to set them was through environment variables. Since a web user interface for yaydoc is in development, providing a clean UI was very important. This meant that we could not just create a bunch of input fields for all variables, as that could be overwhelming for any new user. So we decided to ask only for minimal information in the web form and read the other variables, if the user chooses to specify them, from a YAML file in the target repository.

To read a YAML file, we used PyYAML. It is a well established Python package to safely read info from a YAML file and convert it to a Python dictionary. Here is the code snippet for that.

import yaml

def get_yaml_config():
    try:
        with open('.yaydoc.yml', 'r') as file:
            conf = yaml.safe_load(file)
    except FileNotFoundError:
        return {}
    return conf

The above code snippet returns a dictionary with all keys read from the YAML file. Since none of the options are required, we first create a dictionary with all defaults and recursively merge it with the yaml dict. The merging is done using the following code snippet:

def update_dict(base, head):
    """Recursively merge the keys of head into base."""
    for key, value in head.items():
        if isinstance(base, dict):
            if isinstance(value, dict):
                base[key] = update_dict(base.get(key, {}), value)
            else:
                base[key] = head[key]
        else:
            base = {key: head[key]}
    return base
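
For instance, a hypothetical set of defaults could be merged with the user's file like this:

# Hypothetical defaults; the real ones live in yaydoc's build scripts
defaults = {'build': {'doctheme': 'alabaster', 'docpath': 'docs/'}}
conf = update_dict(defaults, get_yaml_config())
print(conf['build']['doctheme'])  # the user's value if set, otherwise 'alabaster'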

Now you can create a .yaydoc.yml file in the root of your repository and yaydoc will read options from there. Here is a sample yml file:

metadata:
  author: FOSSASIA
  projectname: Yaydoc
  version: development

build:
  doctheme: fossasia_theme
  docpath: docs/
  logo: images/logo.svg
  markdown_flavour: markdown_github

publish:
  ghpages:
    docurl: yaydoc.fossasia.org

It should be noted that the layout of the file may change in the future as the project is in active development.


Automatically Generating index for documentation in Yaydoc

Yaydoc, which uses the Sphinx Documentation Generator internally, needs a document named index.rst describing the overall layout of the documentation in order to generate a proper table of contents. Without an index.rst present, the build fails. With this week's update, that constraint has been relaxed: if yaydoc detects that index.rst has not been supplied, it automatically generates a minimal index for basic use. Although it is still recommended to provide your own index, you won't be punished for its absence. The following sections show how this was implemented and the feature in action.

Implementation

For generating a minimal index.rst, we perform the following steps:

  • If the repository has a README.rst or a README.md, we include it in the index
  • Several toctrees are generated as per how the documents in the repository are arranged.

The following code snippet returns a valid rst block which includes the document dirpath/filename

import os

def get_include(dirpath, filename):
    ext = os.path.splitext(filename)[1]
    if ext == '.md':
        directive = 'mdinclude'
    else:
        directive = 'include'
    template = '.. {directive}:: {document}'
    path = os.path.relpath(os.path.join(dirpath, filename))
    document = path.replace(os.path.sep, '/')
    return template.format(directive=directive, document=document)

The following code snippet returns a valid rst block which creates a toctree of dirpath.

def get_toctree(dirpath, filenames):
    toctree = ['.. toctree::', '   :maxdepth: 1']
    caption_template = '   :caption: {caption}'
    content_template = '   {document}'

    caption = os.path.basename(dirpath).replace('_', ' ').title()
    if caption == os.curdir:
        caption = 'Contents'
    toctree.append(caption_template.format(caption=caption))
    # Inserting a blank line
    toctree.append('')

    valid = False
    for filename in filenames:
        path, ext = os.path.splitext(os.path.join(dirpath, filename))
        if ext not in ('.md', '.rst'):
            continue
        document = path.replace(os.path.sep, '/')
        document = document.lstrip('./').rstrip('/')
        toctree.append(content_template.format(document=document))
        valid = True

    if valid:
        return '\n'.join(toctree)
    else:
        return ''

The following code snippet walks the documentation directory and returns a valid content to be written to index.rst.

def get_index(root):
    index = []
    # Include README from root
    root_files = next(os.walk(root))[2]
    if 'README.rst' in root_files:
        index.append(get_include(root, 'README.rst'))
    elif 'README.md' in root_files:
        index.append(get_include(root, 'README.md'))
    # Add toctrees as per the directory structure
    for (dirpath, dirnames, filenames) in os.walk(os.curdir):
        if filenames:
            toctree = get_toctree(dirpath, filenames)
            if toctree:
                index.append(toctree)
    return '\n\n'.join(index) + '\n'
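
A hypothetical glue snippet then writes the generated content only when no index has been supplied (the actual yaydoc build script may differ):

import os

if not os.path.exists('index.rst'):
    with open('index.rst', 'w') as stream:
        stream.write(get_index(os.curdir))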

Result

Let’s assume that a sample project has the following directory tree for documentation.

+--- README.md
+--- docs/
|    +--- installation_guide/
|    |    +--- setup_heroku.md
|    |    +--- setup_docker.md
|    +--- tutorial/
|    |    +--- basic.md
|    |    +--- advanced.md

The following index.rst would be generated from the above tree

.. mdinclude:: ../README.md

.. toctree::
   :caption: Installation Guide
   :maxdepth: 1

   setup_heroku
   setup_docker

.. toctree::
   :caption: Tutorial
   :maxdepth: 1

   basic
   advanced

As you can see, this index.rst would be enough for most use cases. This update decreases the entry barrier for yaydoc. More features are on the way.


Using Root Directory as the Documentation Directory with Yaydoc

In our test builds for Yaydoc, we found that if we set the root as the documentation directory, the build would fail with a very long build log. In the build process, we create some temporary directories, such as a virtual environment and the build directory, in the root. After some inspection of the build logs, we found out that when the root is itself used as the documentation directory, we were accidentally recursively copying the build directory into itself, which led to the build failure. On top of this, since the virtual environment directory was also being accidentally copied to the build directory, we were actually building the documentation of the entire Python standard library on each build.

Once the problem and its cause were known, the course of action was clear. We needed to ensure that any temporary directories we create as part of the build process were not copied to the build directory. The following changes were made to achieve that.

  • The virtual environment directory is now created in the HOME directory instead of the root.
  • Any temporary directories, except the main build directory, are now deleted before copying.
  • To prevent the recursive copying, we used the --exclude parameter of rsync, as shown below.

rsync --exclude=BUILD_DIR DOCS_DIR/ BUILD_DIR/

After this patch, the root can also be used as the documentation directory with Yaydoc. To do so, just set the environment variable DOCPATH to “.”


Handling date-time in the Open Event Project

Handling date-time in code becomes a little tricky when the project is used internationally, because then the additional concept of timezones comes in. A timezone is a property of a location which needs to be considered while comparing that location's time with the time in another location. For example, take two villages, A and B. One day Ram from village A calls his friend Shyam in village B at 8:00 am to wish him “good morning”. But Shyam receives Ram's call at 6:00 pm on the same day and replies “good evening”. That means village A's timezone is 10 hrs behind village B's timezone.

So we need some reference timezone relative to which all other timezones can be declared. This is where UTC (Coordinated Universal Time) comes into play. UTC is the reference timezone against which all timezones are declared. For example, the Indian timezone is 5 hrs and 30 mins ahead of UTC, which is denoted as UTC+05:30. In date-time libraries, timezones are declared using identifiers such as 'Asia/Kolkata', which is Indian Standard Time. I will be talking about working with date-time in Python in this blog. In FOSSASIA's Open Event project, an event management system, handling date-time along with timezones is one of the important tasks.

Here is the relevant code:

>>> import datetime
>>> import pytz
>>> now = datetime.datetime.now()

datetime.datetime.now() returns a naive datetime based on the time setting of the machine on which the code is running. A naive datetime doesn't contain any info about the timezone; it just contains number values for year, month, hours etc. So just by looking at a naive datetime we cannot actually determine the absolute time. For that there is the aware datetime, which contains timezone info.

>>> now
datetime.datetime(2017, 5, 12, 21, 46, 16, 909983)
>>> now.isoformat()
'2017-05-12T21:46:16.909983'
>>> aware_now = pytz.timezone('Asia/Kolkata').localize(now)
>>> aware_now
datetime.datetime(2017, 5, 12, 21, 46, 16, 909983, tzinfo=<DstTzInfo 'Asia/Kolkata' IST+5:30:00 STD>)

pytz provides a timezone object, which takes a string argument for the timezone and has a localize method that adds timezone info to a datetime object. Hence the aware datetime now has timezone info too. Now if we print the time:

>>> aware_now.isoformat()
'2017-05-12T21:46:16.909983+05:30'

We get the extra +05:30 string at the end, which gives the timezone info: the timezone is 5 hrs and 30 mins ahead of UTC. Comparisons between datetimes can only be made naive-to-naive or aware-to-aware. If we try to compare a naive datetime with an aware one:

>>> now < aware_now
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't compare offset-naive and offset-aware datetimes

>>> now2 = datetime.datetime.now()
>>> now2
datetime.datetime(2017, 5, 15, 9, 44, 25, 990666)
>>> now < now2
True
>>> aware_now.tzinfo
<DstTzInfo 'Asia/Kolkata' IST+5:30:00 STD>

tzinfo carries the timezone info of the datetime object. We can make an aware datetime naive again by replacing its tzinfo with None.

>>> unaware_now = aware_now.replace(tzinfo=None)
>>> unaware_now.isoformat()
'2017-05-12T21:46:16.909983'

Formatting a datetime is mostly done by these two methods: strftime, which takes the string format in which the result is required, and isoformat, which returns the datetime in ISO format.

>>> now.strftime('%Y-%m-%dT%H:%M:%S%z')
'2017-05-12T21:46:16'
>>> aware_now.strftime('%Y-%m-%dT%H:%M:%S%z')
'2017-05-12T21:46:16+0530'

>>> now.time().isoformat()
'21:46:16.909983'
>>> now.date().isoformat()
'2017-05-12'

>>> now_in_brazil_east = datetime.datetime.now(pytz.timezone('Brazil/East'))
>>> now_in_brazil_east.isoformat()
'2017-05-15T06:49:51.311012-03:00'

We can pass a timezone argument to the now method to get the current time in that timezone. But care must be taken, as this relies on the clock of the machine on which the code is running to calculate the time in the supplied timezone.

Application

In FOSSASIA’s Open Event Project, the date-time is taken from the user along with a timezone, as in the example shown below.

Date Time Getter in Create Event Step 1, Open Event Front End

This is part of the event creation page, where the user has to provide a date-time along with a timezone choice. At the back-end, the date-time and the timezone are stored separately in the database. The event model looks like:

class Event(db.Model):
 """Event object table"""
 ...
 start_time = db.Column(db.DateTime, nullable=False)
 end_time = db.Column(db.DateTime, nullable=False)
 timezone = db.Column(db.String, nullable=False, default="UTC")
 ...

The stored date-time info cannot be compared with the real time directly, since the timezones of both times need to be considered along with the date-time values during comparison. For instance, there is one case in the project code where tickets are filtered based on their sales start time.

def get_sales_open_tickets(event_id, event_timezone='UTC'):
  tickets = Ticket.query.filter(Ticket.event_id == event_id).filter(
      Ticket.sales_start <= datetime.datetime.now(pytz.timezone(event_timezone)).replace(tzinfo=None)).filter(
      Ticket.sales_end >= datetime.datetime.now(pytz.timezone(event_timezone)).replace(tzinfo=None))
  …

In this case, the current time is first found in the timezone stored as a separate field in the database. Since comparison cannot be done between aware and naive date-times, once the current date-time is found in the event's timezone, it is made naive using the replace method and can then be compared with the naive date-time already stored.


Adding support for Markdown in Yaydoc

Yaydoc, being based on sphinx, natively supports reStructuredText. From the official docs:

reStructuredText is an easy-to-read, what-you-see-is-what-you-get plaintext markup syntax and parser system. It is useful for quickly creating simple web pages, and for standalone documents. reStructuredText is designed for extensibility for specific application domains.

Although reStructuredText is superior to markdown in terms of features, Markdown is still the most heavily used markup language out there. This week we added support for markdown to Yaydoc. Now you can use Markdown to document your project and Yaydoc will create a site with no changes required from your end. To achieve this, we used recommonmark, which enables sphinx to parse CommonMark, a strongly defined, highly compatible specification of Markdown. It solved most of the problem with 3 lines of code in our customized conf.py.

from recommonmark.parser import CommonMarkParser

source_parsers = {
    '.md': CommonMarkParser,
}

source_suffix = ['.rst', '.md']

With this addition, sphinx can now use recommonmark to convert markdown to html. But not everything has been solved. Here is an excerpt from a previous blogpost which explains a problem yet to be solved.

Now sphinx requires an index.rst file within the docs directory which it uses to generate the first page of the site. A very obvious way to fill it, which helps us avoid unnecessary duplication, is to use the include directive of reStructuredText to include the README file from the root of the repository. But the include directive can only properly include a reStructuredText document, not a markdown document. Given a markdown document, it tries to parse the markdown as reStructuredText, which leads to errors.

To solve this problem, a custom directive mdinclude was created. Directives are the primary extension mechanism of reStructuredText. Most of its implementation is a copy of the built-in Include directive from the docutils package. Before including content in the doctree, mdinclude converts the docs from markdown to reStructuredText using pypandoc. The implementation is similar to the one discussed in a previous blogpost.
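
For reference, the md2rst helper the directive relies on can be as small as the sketch below, assuming pypandoc (and pandoc itself) is installed; the actual helper in Yaydoc may differ.

import pypandoc

def md2rst(text):
    # Convert a Markdown string to reStructuredText via pandoc
    return pypandoc.convert_text(text, 'rst', format='md')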

import os

from docutils import io, nodes, statemachine, utils
from docutils.parsers import rst
from docutils.utils.error_reporting import SafeString, ErrorString


class MdInclude(rst.Directive):

    required_arguments = 1
    optional_arguments = 0

    def run(self):
        if not self.state.document.settings.file_insertion_enabled:
            raise self.warning('"%s" directive disabled.' % self.name)
        source = self.state_machine.input_lines.source(
            self.lineno - self.state_machine.input_offset - 1)
        source_dir = os.path.dirname(os.path.abspath(source))
        path = rst.directives.path(self.arguments[0])
        path = os.path.normpath(os.path.join(source_dir, path))
        path = utils.relative_path(None, path)
        path = nodes.reprunicode(path)

        encoding = self.options.get(
            'encoding', self.state.document.settings.input_encoding)
        e_handler = self.state.document.settings.input_encoding_error_handler
        tab_width = self.options.get(
            'tab-width', self.state.document.settings.tab_width)

        try:
            self.state.document.settings.record_dependencies.add(path)
            include_file = io.FileInput(source_path=path,
                                        encoding=encoding,
                                        error_handler=e_handler)
        except UnicodeEncodeError as error:
            raise self.severe('Problems with "%s" directive path:\n'
                              'Cannot encode input file path "%s" '
                              '(wrong locale?).' %
                              (self.name, SafeString(path)))
        except IOError as error:
            raise self.severe('Problems with "%s" directive path:\n%s.' %
                              (self.name, ErrorString(error)))

        try:
            rawtext = include_file.read()
        except UnicodeError as error:
            raise self.severe('Problem with "%s" directive:\n%s' %
                              (self.name, ErrorString(error)))

        output = md2rst(rawtext)  # convert Markdown to reST (pypandoc helper shown above)
        include_lines = statemachine.string2lines(output,
                                                  tab_width,
                                                  convert_whitespace=True)
        self.state_machine.insert_input(include_lines, path)
        return []

With this, Yaydoc can now be used on projects that exclusively use markdown. There are some more hurdles which we need to cross in the following weeks. Stay tuned for more updates.


DetachedInstanceError: Dealing with Celery, Flask’s app context and SQLAlchemy in the Open Event Server

In the Open Event Server project, we chose celery for async background tasks. From the official website:

What is celery?

Celery is an asynchronous task queue/job queue based on distributed message passing.

What are tasks?

The execution units, called tasks, are executed concurrently on a single or more worker servers using multiprocessing.

After the tasks had been set up, an error constantly came up whenever a task was called.

The error was:

DetachedInstanceError: Instance <User at 0x7f358a4e9550> is not bound to a Session; attribute refresh operation cannot proceed

The above error usually occurs when you try to access the session object after it has been closed. It may have been closed by an explicit session.close() call or after committing the session with session.commit().
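
A minimal illustration of the failure mode, using a hypothetical User model (with SQLAlchemy's default expire_on_commit=True, committing expires the instance's loaded attributes):

user = db.session.query(User).first()
db.session.commit()  # expires the instance's attributes
db.session.close()   # detaches the instance from the session
print(user.email)    # raises DetachedInstanceError on attribute refresh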

The celery tasks in question were performing some database operations. So the first thought was that these operations might be causing the error. To test this theory, the celery task was changed to:

@celery.task(name='lorem.ipsum')
def lorem_ipsum():
    pass

But sadly, the error still remained. This proved that the celery task itself was fine and that the session was being closed whenever a celery task was called. The method from which the celery task was being called was of the following form:

def restore_session(session_id):
    session = DataGetter.get_session(session_id)
    session.deleted_at = None
    lorem_ipsum.delay()
    save_to_db(session, "Session restored from Trash")
    update_version(session.event_id, False, 'sessions_ver')


In our app, the app context was not being passed whenever a celery task was initiated. Thus the celery task, whenever called, closed the previous app_context, eventually closing the session along with it. The solution to this error is to follow the pattern suggested at http://flask.pocoo.org/docs/0.12/patterns/celery/.

from celery import Celery
from flask import current_app

def make_celery(app):
    celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    task_base = celery.Task

    class ContextTask(task_base):
        abstract = True

        def __call__(self, *args, **kwargs):
            if current_app.config['TESTING']:
                with app.test_request_context():
                    return task_base.__call__(self, *args, **kwargs)
            with app.app_context():
                return task_base.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

celery = make_celery(current_app)


The __call__ method ensures that the celery task is provided with a proper app context to work with.
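
With this in place, a task like the earlier restore example can safely touch the database. A hedged sketch reusing the helpers from the snippets above:

@celery.task(name='session.restore')
def restore_session_task(session_id):
    # Runs inside app.app_context(), so the SQLAlchemy session stays usable
    session = DataGetter.get_session(session_id)
    session.deleted_at = None
    save_to_db(session, "Session restored from Trash")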

