Mailing Attachments Using Terminal in Open Event Android

The latest version of the Open Event Android App Generator, v2, lacked the ability to email the generated APK to the email address entered at the start of the app generation process, as well as to email the error logs in case the APK generation failed.

This is an important feature for the app generator because app generation is time-consuming: users otherwise have to wait for the build to finish before they can download the APK. With this feature, the generator automatically emails the APK as soon as it is generated.

I took up this issue a few days back and began by thinking about how it could be implemented. After some discussion with the mentors and co-developers, we finalised two approaches:

  • Using Sendgrid
  • Using SMTP

I will discuss the implementation of both in this blog post. The code for APK mailing starts with the call to Notification.send in generator.py:

if completed and apk_path and not error:
   Notification.send(
       to=self.creator_email,
       subject='Your android application for %s has been generated ' % self.event_name,
       message='Hi,<br><br>'
               'Your android application for the \'%s\' event has been generated. '
               'The APK file has been attached along with this email.<br><br>'
               'Thanks,<br>'
               'Open Event App Generator' % self.event_name,
       file_attachment=apk_path,
       via_api=self.via_api
   )
else:
   Notification.send(
       to=self.creator_email,
       subject='Your android application for %s could not be generated ' % self.event_name,
       message='Hi,<br><br> '
               'Your android application for the \'%s\' event could not be generated. '
               'The error message has been provided below.<br><br>'
               '<code>%s</code><br><br>'
               'Thanks,<br>'
               'Open Event App Generator' % (self.event_name, str(error) if error else ''),
       file_attachment=apk_path,
       via_api=self.via_api
   )

This leads us to the Notification class in Notification.py. It has three functions:

1. send(to, subject, message, file_attachment, via_api)
2. send_mail_via_smtp(payload)
3. send_email_via_sendgrid(payload)

As the name suggests, the first function,

send(to, subject, message, file_attachment, via_api)

decides which service (SMTP or SendGrid) to use for sending the email, based on the input parameters and, in particular, the 'EMAIL_SERVICE' setting in config.py.
It is inside send() that the other two functions are called: if the email service is SMTP, it calls Notification.send_mail_via_smtp(payload); otherwise, Notification.send_email_via_sendgrid(payload) is called.
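
The body of send() is not reproduced here, but based on the description above, a rough sketch of the dispatch logic could look like the following (the 'FROM_EMAIL' config key and the exact payload keys are illustrative assumptions):

@staticmethod
def send(to, subject, message, file_attachment=None, via_api=True):
    # Build the common payload consumed by both mailing backends.
    # via_api is forwarded by the generator; its handling is omitted in this sketch.
    payload = {
        'to': to,
        'from': current_app.config['FROM_EMAIL'],  # assumed config key
        'subject': subject,
        'message': message,
        'attachment': file_attachment,
    }
    # 'EMAIL_SERVICE' in config.py decides which backend handles the email.
    if current_app.config['EMAIL_SERVICE'] == 'smtp':
        Notification.send_mail_via_smtp(payload)
    else:
        Notification.send_email_via_sendgrid(payload)
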
The sendgrid function is pretty straightforward:

@staticmethod
def send_email_via_sendgrid(payload):

   key = current_app.config['SENDGRID_KEY']
   if not key:
       logger.info('Sendgrid key not defined')
       return
   headers = {
       "Authorization": ("Bearer " + key)
   }
   requests.post(
       "https://api.sendgrid.com/api/mail.send.json",
       data=payload,
       headers=headers
   )

It requires a personal SendGrid key, which is read from config.py. If the key is not defined, it simply logs a message in the Celery task logs and returns. The email itself is sent by a POST request made with the Python 'requests' library. The request is made as follows:

 requests.post(
       "https://api.sendgrid.com/api/mail.send.json",
       data=payload,
       headers=headers
   )

The send_mail_via_smtp(payload) function reads a few configuration values before sending the mail:

@staticmethod
def send_mail_via_smtp(payload):
   """
   Send email via SMTP
   :param payload:
   :return:
   """
   smtp_encryption = current_app.config['SMTP_ENCRYPTION']
   if smtp_encryption == 'tls':
       smtp_encryption = 'required'
   elif smtp_encryption == 'ssl':
       smtp_encryption = 'ssl'
   elif smtp_encryption == 'tls_optional':
       smtp_encryption = 'optional'
   else:
       smtp_encryption = 'none'
   config = {
       'host': current_app.config['SMTP_HOST'],
       'username': current_app.config['SMTP_USERNAME'],
       'password': current_app.config['SMTP_PASSWORD'],
       'encryption': smtp_encryption,
       'port': current_app.config['SMTP_PORT'],
   }
   mailer_config = {
       'transport': {
           'use': 'smtp',
           'host': config['host'],
           'username': config['username'],
           'password': config['password'],
           'tls': config['encryption'],
           'port': config['port']
       }
   }

   mailer = Mailer(mailer_config)
   mailer.start()
   message = Message(author=payload['from'], to=payload['to'])
   message.subject = payload['subject']
   message.plain = strip_tags(payload['message'])
   message.rich = payload['message']
   message.attach(payload['attachment'])
   mailer.send(message)
   mailer.stop()

It uses the Marrow Mailer Python library to send emails with attachments (the APK in our case). The library can be installed with:
pip install marrow.mailer
To use Marrow Mailer you instantiate a marrow.mailer.Mailer object with the configuration, then pass Message instances to the Mailer instance’s send() method.
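
For example, a minimal standalone sketch (the transport settings, addresses and message contents here are placeholders):

from marrow.mailer import Mailer, Message

# Configure the SMTP transport, start the mailer, send one message, stop it.
mailer = Mailer({'transport': {'use': 'smtp', 'host': 'localhost'}})
mailer.start()

message = Message(author='generator@example.com', to='user@example.com')
message.subject = 'Test email'
message.plain = 'Hello from Marrow Mailer'
mailer.send(message)

mailer.stop()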

You can refer to the following guides for more information on sending emails from the command line:
https://github.com/marrow/mailer is the official Marrow Mailer repository.
https://pypi.python.org/pypi/marrow.mailer
More detailed information on sending emails using SendGrid can be found at https://www.daveperrett.com/articles/2013/03/19/setting-up-sendmail-with-sendgrid-on-ubuntu/

Keep updating Build status in Meilix Generator

One of the problems we faced while working on Meilix Generator was showing the user the status of the custom ISO build in the Meilix Generator web app. We came up with the idea of checking the status of the download link generated by the web app: if the link is available the status code is 200, otherwise it is 404.

We use a Python script to check the status of the URL. To build the URL, we use the Travis tag name (which identifies the event the user wants the ISO for) and the current date; the rest of the link stays the same.

import os
import datetime

tag = os.environ["TRAVIS_TAG"]
date = datetime.datetime.now().strftime('%Y%m%d')
url = "https://github.com/xeon-zolt/meilix/releases/download/" + tag + "/meilix-zesty-" + date + "-i386.iso"


Now we use urllib to monitor the status of the link.

from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

# status() is later called from the Flask route shown below.
def status():
    req = Request(url)
    try:
        response = urlopen(req)
    except HTTPError:
        return 'Building Your ISO'
    except URLError:
        return 'We failed to reach the server.'
    else:
        return 'Build Successful: ' + url


After monitoring the status the next step was to update the status dynamically on the status page.

So we'll use a status function in the Flask app, which JavaScript polls at regular intervals to get the status of the link.

Flask :

@app.route('/now')
def status_url():
    return (status())


Javascript:

<script type="text/javascript">
let url = "/now"

function getstatus(url) {
    fetch(url).then(function(response) {
        return response.text()
    }).then(function(text) {
        console.log("status", text)
        document.querySelector("div#status").innerHTML = text
    })
}

window.onload = function() {
    getstatus(url)
    /* setInterval(function, interval in milliseconds) */
    window.setInterval(getstatus.bind(null, url), 30 * 1000)
}
</script>


This covers the steps needed to tell the user whether the build is ready or not.


Flask App to Upload Wallpaper On the Server for Meilix Generator

We needed to get a wallpaper from the user through Meilix Generator and use it with the Meilix build scripts to generate the ISO. So the wallpaper has to be hosted on the server and downloaded by Travis CI during the build to include it in the ISO.

The solution is to render an HTML template and access the data sent by POST through Flask's request object. redirect and url_for are used to redirect the user once the upload is done, and send_from_directory helps us serve the uploaded file under /uploads, from where Travis downloads it for building the ISO.

We start by creating the HTML form marked with enctype=multipart/form-data.

<form action="upload" method="post" enctype="multipart/form-data">
        <input type="file" name="file"><br /><br />
        <input type="submit" value="Upload">
 </form>


First, we import the required modules. The most important one is werkzeug's secure_filename().

import os
from flask import Flask, render_template, request, redirect, url_for, send_from_directory
from werkzeug.utils import secure_filename


Now we'll define where files are uploaded and which file types are allowed. The path to the upload directory on the server is set in app.config['UPLOAD_FOLDER'], which is uploads/ here, and the allowed extensions in app.config['ALLOWED_EXTENSIONS'].

app.config['UPLOAD_FOLDER'] = 'uploads/'
app.config['ALLOWED_EXTENSIONS'] = set(['png', 'jpg', 'jpeg'])


This function checks for a valid wallpaper extension, which in this case is png, jpg or jpeg, as defined above in app.config.

def allowed_file(filename):
    return '.' in filename and \
           filename.rsplit('.', 1)[1] in app.config['ALLOWED_EXTENSIONS']


After getting the name of the uploaded file from the user, we use the above function to check whether it is an allowed file type, store the name in a variable filename, and then move the file to the upload folder to save it.

The upload function checks whether the file name is safe and removes unsupported characters, then moves the file from the temporary folder to the upload folder. After moving, it renames the file to 'wallpaper' so that the download link, which the Meilix build script uses to download from the server, always stays the same.

def upload():
    file = request.files['file']
    if file and allowed_file(file.filename):
        filename = secure_filename(file.filename)
        file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
        os.rename(os.path.join(app.config['UPLOAD_FOLDER'], filename),
                  os.path.join(app.config['UPLOAD_FOLDER'], 'wallpaper'))
        filename = 'wallpaper'


At this point, we have only uploaded the wallpaper and renamed it to 'wallpaper'. We cannot yet access the file from outside the server (it would result in a 403 error), so to make it available the uploaded file needs to be registered and served using the code snippet below.

We can also register uploaded_file as a build_only rule and use SharedDataMiddleware instead.

@app.route('/uploads/<filename>')
def uploaded_file(filename):
    return send_from_directory(app.config['UPLOAD_FOLDER'],filename)

The hosted wallpaper is then used by Meilix in Travis CI to generate the ISO, using the download link that stays the same for every uploaded wallpaper.

Why should we use the secure_filename() function?

Just imagine someone sends the following as the filename to your app:

filename = "../../../../home/username/.sh"


If the number of ../ is right and you joined this with your UPLOAD_FOLDER, the attacker could modify a file on the server's filesystem that they should not be able to touch.

Now, let's look at how the function works:

secure_filename('../../../../home/username/.sh')
'home_username_.sh'

Improving the uploads

We can also validate the size of the uploaded file, so that a user cannot upload a file so big that it puts excessive load on the server.

from flask import Flask, Request
app = Flask(__name__)
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024
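
With MAX_CONTENT_LENGTH set, Flask rejects bigger uploads with a 413 status code. A small sketch of how a friendlier message could be returned for that case (this handler is an illustration, not part of the Meilix Generator code):

from flask import Flask

app = Flask(__name__)
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024

@app.errorhandler(413)
def file_too_large(error):
    # Reached when the uploaded wallpaper exceeds MAX_CONTENT_LENGTH.
    return 'The uploaded wallpaper is too large.', 413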


Managing Related Endpoints in Permission Manager of Open Event API Server

Open Event API Server has a permission manager to manage permissions for the different endpoints, and some of the remaining gaps were filled by the new helper method has_access. The next challenge for the permission manager was to support the fact that many related endpoints point to the same resource.
Example:

  • /users-events-roles/<int:users_events_role_id>/user or
  • /event-invoices/<int:event_invoice_id>/user

Both endpoints point to the Users API and fetch the record of a single user, so we apply the permission "is_user_itself". This permission ensures that the logged-in user is the same user whose record is requested through the API, and for this we need the "user_id" as the "id" in the permission function "is_user_itself".
Thus the permission manager needs the ability to fetch this user_id from different models for different endpoints. For example, for the endpoints above we need to get the user_id from the UsersEventsRole and EventInvoice models and pass it to the permission function so that it can use it for the check.

Adding support

To add support for multiple keys, we have to look at two things:

  • fetch_key_url
  • model

These two are the key attributes for this feature: fetch_key_url takes a comma-separated list of keys that are matched against view_kwargs, and model receives an array of model classes used to fetch the related records.
This snippet provides the main logic for this:

for index, mod in enumerate(model):
   if is_multiple(fetch_key_url):
       f_url = fetch_key_url[index]
   else:
       f_url = fetch_key_url
   try:
       data = mod.query.filter(getattr(mod, fetch_key_model) == view_kwargs[f_url]).one()
   except NoResultFound:
       pass
   else:
       found = True

if not found:
   return NotFoundError({'source': ''}, 'Object not found.').respond()

In the above snippet we:

  • iterate through the models list;
  • check whether fetch_key_url has multiple keys or not;
  • pick the key from fetch_key_url based on whether it holds multiple keys or a single one;
  • try to get the object from the model for the current iteration;
  • if a matching record/object exists in the database, that is our data and we skip further processing;
  • otherwise we continue iterating until we get the object or reach the end.

To use multiple mode

Instead of providing a single model to the model option of the permission manager, provide an array of models. Providing comma-separated values to fetch_key_url is optional.
There can be a scenario where you want to fetch a resource from the database using different keys present in your view_kwargs. For example, consider these endpoints:

  1. `/notifications/<notification_id>/event`
  2. `/orders/<order_id>/event`

Since they point to the same resource, and you want to ensure that the logged-in user is the organizer, you can use these two options as:

  1. fetch_key_url="notification_id, order_id"
  2. model=[Notification, Order]

The permission manager always matches indexes across the two options: the first key of fetch_key_url is used only with the first model in the list, and so on.
Also, fetch_key_url is an optional parameter, and even in multiple mode you can provide a single value. But if you provide multiple comma-separated values, make sure the number of values in fetch_key_url equals the number of models.
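
As a purely hypothetical sketch of how this could be wired onto a resource (only fetch_key_url and model come from this post; the decorator entry point and the other keyword arguments are assumptions):

class EventDetail(ResourceDetail):
    # Hypothetical: ensure the logged-in user organizes the related event,
    # resolving it through either Notification or Order depending on the URL.
    decorators = (api.has_permission('is_organizer',
                                     fetch='event_id', fetch_as='event_id',
                                     model=[Notification, Order],
                                     fetch_key_url='notification_id,order_id'),)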


Custom Data Layer in Open Event API Server

Open Event API Server uses the flask-rest-jsonapi module to implement JSON API. This module provides a good logical abstraction in the data layer.
The data layer is a CRUD interface between the resource manager and the data, and it is flexible enough to work with any ORM or data storage. The default layer in flask-rest-jsonapi is the SQLAlchemy ORM layer, and the API Server uses it almost everywhere, except in the email verification part that I worked on.

To support user email verification in the API Server, an endpoint for POST /v1/users/<int:user_id>/verify had to be created.
Clearly we are working on a single resource here, i.e. a specific user record. This would normally call for ResourceDetail, but there is no POST method or view in the ResourceDetail class. To solve this, I created a custom data layer, which lets me redefine all methods and views by inheriting from the abstract class. A custom data layer must inherit from flask_rest_jsonapi.data_layers.base.Base.
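
In outline, such a layer is a class deriving from that abstract base and overriding the hooks it needs, roughly like this (a skeleton only; the real implementation of create_object() is shown below):

class VerifyUserLayer(Base):  # Base: the abstract data layer class named above
    def create_object(self, data, view_kwargs):
        # Override the create hook to run the verification instead of
        # inserting a new record; see the next snippet for the actual logic.
        ...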

Creating Custom Layer

To handle the email verification process, a custom layer was created at app/api/data_layers/VerifyUserLayer.py:

def create_object(self, data, view_kwargs):
   user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
   s = get_serializer()
   try:
       data = s.loads(data['token'])
   except Exception:
       raise UnprocessableEntity({'source': 'token'}, "Invalid Token")

   if user.email == data[0]:
       user.is_verified = True
       save_to_db(user)
       return user
   else:
       raise UnprocessableEntity({'source': 'token'}, "Invalid Token")

Using custom layer in API

We can easily provide the custom layer to an API resource using the data_layer property of the resource class:

data_layer = {
   'class': VerifyUserLayer,
   'session': db.session
}

This is all we have to provide; now all CRUD methods will be directed to our custom data layer.

Solution to our issue
Setting up a custom layer gives us the ability to define our own resource methods, i.e. modifying the view for the POST request and allowing us to verify registered users in the API Server.
With the data layer in place, all that remains is to create a ResourceList that uses this layer, along with the permissions:

class VerifyUser(ResourceList):

   methods = ['POST', ]
   decorators = (jwt_required,)
   schema = VerifyUserSchema
   data_layer = {
       'class': VerifyUserLayer,
       'session': db.session
   }

This enables the custom layer, VerifyUserLayer, for the VerifyUser ResourceList resource.


Using Flask SocketIO Library in the Apk Generator of the Open Event Android App

Recently the Flask-SocketIO library was used in the APK generator of the Open Event Android App, as it gives access to low-latency bi-directional communication between the client and the server. The client side of the APK generator is written in JavaScript, so we could use the official Socket.IO client library to establish a permanent connection to the server.

The main purpose of using the library was to display logs to the user while the app generation process is running. This helps the user check what went wrong in the uploaded file if an error occurs. The library establishes a connection between the server and the client so that during the app generation process the server can send real-time logs to the client, which are then displayed in the frontend.

To use this library, we first need to install it with pip:

pip install flask-socketio

This library depends on an asynchronous service, which can be selected from the following:

  1. Eventlet
  2. Gevent
  3. Flask development server based on Werkzeug

Of the three, eventlet is the most performant option, as it supports both long-polling and WebSocket transports.

The next step is importing this library into the Flask application, i.e. the APK generator, as follows:

from flask_socketio import SocketIO

current_app = create_app()
socketio = SocketIO(current_app)

if __name__ == '__main__':
    socketio.run(current_app)

The statement socketio.run(current_app) encapsulates the startup of the web server. When the application runs in debug mode, the Werkzeug development server is used; in production, the eventlet or gevent web server is recommended.
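
If we want to pin the asynchronous server explicitly instead of relying on Flask-SocketIO's auto-detection, the constructor accepts an async_mode argument; a small illustrative sketch, assuming eventlet is installed:

from flask_socketio import SocketIO

# Explicitly select eventlet rather than letting Flask-SocketIO pick a server.
socketio = SocketIO(current_app, async_mode='eventlet')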

We wanted to show the status messages, currently displayed to the user, in the form of logs. For this, we first looked at the generator.py file, which has the status messages, and sent them to the client side of the generator over the connection established with this library. The server-side code that sends messages to the client is as follows:

def handle_message(self, message):
    # flask_socketio.send() emits an unnamed 'message' event to the namespace.
    if message is not None:
        namespace = "/" + self.identifier
        send(message, namespace=namespace)

As we can see from the code above, the message is sent to a particular namespace. The idea behind using namespaces is that if more than one user uses the generator at the same time, a plain send() would deliver messages to all connected clients and mix up their status messages; with a namespace, each message reaches only the client it belongs to.

Once the server sends the messages, the client side needs to receive them and update the logs area with the content received. For that, the socket connection is established as soon as the generate button is clicked, which also generates the identifier used as the namespace for that process.

socket = io.connect(location.protocol + "//" + location.host + "/" + identifier, {reconnection: false});

This piece of code establishes the socket connection and lets the client side send and receive messages to and from the server.

The next thing is receiving messages from the server. For this, the following code snippet was added:

socket.on('message', function(message) {
    $buildLog.append(message);
});

This piece of code receives messages from the server for unnamed events. Once the connection is established and messages arrive, each one is appended to the logs so the user sees real-time information about the build status of their app.

This is how log display was incorporated into the APK generator, by making the required changes to the server- and client-side code using the Flask-SocketIO library.


Using Marshmallow Fields in Open Event API Server

The next-gen Open Event API Server provides API endpoints to fetch data and to modify and update it. These endpoints have been written using flask-rest-jsonapi, a Flask extension for building APIs around the JSON API 1.0 specification.

flask-rest-jsonapi's data abstraction layer lets us expose resources in a flexible way. This is achieved by using marshmallow fields provided by marshmallow-jsonapi. This blog post explains how we use marshmallow fields for building API endpoints in the Open Event API Server.

The marshmallow library is used to serialize, deserialize and validate input data. Marshmallow uses classes to define output schemas, which makes it easier to reuse, configure and extend the schemas. When we write the API schema for any of the Open Event database models, all the columns have to be added as schema fields in the API class.

The API Server's event schema is written using marshmallow fields. Marshmallow provides Field classes for the various data types, and a number of parameters can be passed when creating a field object; the ones used in the API Server are described below, and for the rest you can read the marshmallow docs.

Let's take a look at each of these fields. Each of the following snippets shows how a field is written for an API schema.

identifier = fields.Str(dump_only=True)
  • This is a field of data-type String.
  • dump_only: this field will be skipped during deserialization, as it is set to True here. Setting this essentially marks `identifier` as read-only for the HTTP API.
name = fields.Str(required=True)
  • This again is a field of data-type String.
  • This is a required field, and a ValidationError is raised if it is missing during deserialization.

Since this column is non-nullable in the database model, it is made required in the API schema.

 external_event_url = fields.Url(allow_none=True)
  • This is a field of datatype URL.
  • Since this is not a required field, NULL values are allowed for it in the database model. To reflect that in the API, we have to add allow_none=True; unless set explicitly, allow_none defaults to False (or to True when missing=None).
ends_at = fields.DateTime(required=True, timezone=True)
  • Field of datatype DateTime
  • It is a required field for an event and the time used here is timezone aware.
latitude = fields.Float(validate=lambda n: -90 <= n <= 90, allow_none=True)
  • Field of datatype Float.
  • In marshmallow fields, we can validate the input value of a field using the validate parameter. The validator returns a boolean, and when it is false a ValidationError is raised. These validators are called during deserialization.
is_map_shown = fields.Bool(default=False)
  • Field of datatype boolean.
  • A default value for a marshmallow field can be set using the default parameter. Here, the is_map_shown attribute defaults to false for an event.
privacy = fields.Str(default="public")
  • privacy is set to default “public”.

When the input value for a field is missing during serialization, the default value will be used. This parameter can either be a value or a callable.

As described in the examples above, a field is written as fields.<data-type>(*parameters to the marshmallow.fields.Field constructor*).
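
Putting the snippets above together, a minimal sketch of such a schema might look like this (the class name, Meta values and view names are illustrative placeholders, not the actual schema from the server):

from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Schema

class EventSchemaSketch(Schema):
    class Meta:
        type_ = 'event'
        self_view = 'v1.event_detail'  # assumed view name
        self_view_kwargs = {'id': '<id>'}

    id = fields.Str(dump_only=True)
    identifier = fields.Str(dump_only=True)
    name = fields.Str(required=True)
    external_event_url = fields.Url(allow_none=True)
    ends_at = fields.DateTime(required=True, timezone=True)
    latitude = fields.Float(validate=lambda n: -90 <= n <= 90, allow_none=True)
    is_map_shown = fields.Bool(default=False)
    privacy = fields.Str(default="public")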

The parameters passed to the class constructor must reflect the column definition in the database model, else you might run into unexpected errors.


An example from Open Event development: null values were not being accepted on POST even for nullable columns. This happened because allow_none defaults to False in the schema and has to be set to True explicitly in order to accept null values (see the issue "Make non-required attributes nullable" and the pull request made for the fix).

Fields represent database model columns and are serialized and deserialized so that they can be used in any format, such as the JSON objects we use in the API server. Each field corresponds to an attribute of the object type, like location, starts-at, ends-at or event-url for an event. marshmallow lets us define data types for the fields, validate input data and enforce column-level constraints from the database model.

This list is not exhaustive of all the parameters available for marshmallow fields. To read further about them and marshmallow, check out their documentation.


Working With Inter-related Resource Endpoints In Open Event API Server

For each resource object, we have the following endpoints related to it:

– GET, POST for List

– GET, PATCH, DELETE for Detail

– GET, POST, PATCH, DELETE for Relationship

In this blog post I will discuss how the inter-related resource endpoints work. These are endpoints that involve two resource objects which are related to one another.

The discussion in this post covers the endpoints related to the Session model.

In the API server, there is a relationship between events and sessions. Apart from that, a session also has relationships with microlocations, tracks, speakers and session-types. Let's take a look at the endpoints related to these.

`/v1/tracks/<int:track_id>/sessions` is a list endpoint which can be used to list and create the sessions related to a particular track of an event. To get the list, we define the query() method in the ResourceList class.

The query method is executed for GET requests, and an if clause in it looks for 'track_id' in the view_kwargs dict. When a request is made to `/v1/tracks/<int:track_id>/sessions`, the track id will be present as 'track_id' in view_kwargs. The tracks are filtered by the id passed there and then joined on the query with all session objects from the database.
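
The original snippet is not reproduced here, but based on that description, a sketch of the query() method could look like this (the model names and the exact join are assumptions):

def query(self, view_kwargs):
    query_ = self.session.query(Session)
    if view_kwargs.get('track_id'):
        # Narrow the sessions down to those belonging to the requested track.
        query_ = query_.join(Track).filter(Track.id == view_kwargs['track_id'])
    return query_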

For the POST method, we need to take the track_id from view_kwargs and pass it into the track_id field of the database model. This is achieved using flask-rest-jsonapi's before_create_object() method.

When a POST request is made to `/v1/tracks/<int:track_id>/sessions`, the view_kwargs dict will have 'track_id' in it. So if track_id is present in the URL params, we first ensure that a track with the passed id exists, and only then proceed to create a session object under the given track. The safe_query() method is a generic custom helper written for such checks: the model is passed along with the filter attribute and a field name to include in the error message, and it throws an ObjectNotFound exception if no object exists for the given id.
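
A sketch of that before_create_object() implementation, following the description above (the exact code is not reproduced in this post):

def before_create_object(self, data, view_kwargs):
    if view_kwargs.get('track_id'):
        # Ensure the track exists before creating a session under it.
        track = safe_query(self, Track, 'id', view_kwargs['track_id'], 'track_id')
        data['track_id'] = track.id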

We also need to take care of permissions for these endpoints. As the decorators are called even before schema validation, it was difficult to get the event_id for permissions without adding highly endpoint-specific code to the permission manager core, which would mean a loss of generality. So the leave_if parameter of the permission check was used to overcome this issue. Since the permissions manager isn't fully developed yet, this will be changed in its improved implementation.

Similar implementations were done for microlocations and session types; they are not all explained in this blog post. For the full code, take a look at the source of this schema.

For the speaker relation, a few things are different, because speakers-sessions is a many-to-many relationship. Let's take a look:

As it is a many-to-many relationship, an association table is used with flask-sqlalchemy. So for the query() method, this association table is queried after extracting the speaker_id from the view_kwargs dict.

For a POST request on `/v1/speakers/<int:speaker_id>/sessions`, flask-rest-jsonapi's after_create_object() method is used to insert the entry into the association table. This method takes the parameters self, obj, data and view_kwargs.

view_kwargs contains the URL parameters, so we check for speaker_id in view_kwargs. If it is present, then before inserting data we ensure that a speaker with that id exists, using the safe_query() method described above. After that, the obj argument of the method is used: it contains the object created in the previous method. So once the session object has been created and we are sure a speaker exists with the given speaker_id, the speaker is simply appended to obj.speakers, so that the relationship tuple is inserted into the association table.
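
A sketch of that after_create_object() method based on the description (how the change is persisted at the end is an assumption):

def after_create_object(self, obj, data, view_kwargs):
    if view_kwargs.get('speaker_id'):
        # Verify the speaker exists, then link it to the freshly created session.
        speaker = safe_query(self, Speaker, 'id', view_kwargs['speaker_id'], 'speaker_id')
        obj.speakers.append(speaker)
        self.session.commit()  # assumed: persists the new row in speakers_session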

This updates the association table ‘speakers_session’ in this case. The other such endpoints are being worked upon in a similar fashion and will be consolidated as part of a set of robust APIs, along with the improved permissions manager for the Open Event API Server.


Designing A Virtual Laboratory With PSLab

What is a virtual laboratory

A virtual lab interface gives students remote access to equipment in laboratories via the Internet without having to be physically present near the equipment. The idea is that lab experiments can be made accessible to a larger audience which may not have the resources to set up the experiment at their place. Another use-case scenario is that the experiment setup must be placed at a specific location which may not be habitable.

The PSLab’s capabilities can be increased significantly by setting up a framework that allows remote data acquisition and control. It can then be deployed in various test and measurement scenarios such as an interactive environment monitoring station.

What resources will be needed for such a setup

The proposed virtual lab will be platform independent and should run in web browsers. This necessitates a lightweight web server running on the hardware to which the PSLab is connected. The web server must run a framework that can handle multiple connections and allow control access only to authenticated users.

Proposed design for the backend

The backend framework must be able to handle the following tasks:

  • Communicate with the PSLab hardware attached to the server
  • Host lightweight web-pages with various visual aids
  • Support an authentication framework via a database that contains user credentials
  • Reply with JSON data after executing single commands on the PSLab
  • Execute remotely received python scripts, and relay the HTML formatted output. This should include plots

Proposed design for the frontend

  • Responsive, aesthetic layouts and widget styles.
  • Essential utilities such as Sign-up and Sign-in pages.
  • Embedded plots with basic zooming and panning facilities.
  • Embedded code-editor with syntax highlighting
  • Widgets to submit the code to the server for execution, and subsequent display of the received response.

A selection of tools that can assist with this project, and the purpose they will serve:

Backend

  • The Python communication library for the PSLab
  • Flask: 'Flask is a BSD Licensed microframework for Python based on Werkzeug, Jinja 2 and good intentions.' It can handle concurrent requests and is well suited to serve as our web server.
  • MySQL: a database management system that can be used to store user credentials, user scripts, queues, etc.
  • Werkzeug: its utilities for creating and checking password hashes are essential for exchanging passwords via the database (see the sketch after this list).
  • JSON: for relaying measurement results to the client.
  • Gunicorn + Nginx: to be used when a more scalable deployment is needed and Flask's built-in web server can no longer handle the load.
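
For instance, the Werkzeug helpers mentioned above could be used roughly like this (a sketch; how and where the hash is stored in MySQL is left to the final design):

from werkzeug.security import generate_password_hash, check_password_hash

# Store only the hash in the users table, never the plain-text password.
pw_hash = generate_password_hash('secret-password')

check_password_hash(pw_hash, 'secret-password')   # True
check_password_hash(pw_hash, 'wrong-password')    # False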

Frontend

  • Bootstrap-css: For neatly formatted, responsive UIs
  • Jqplot: A versatile and expandable js based plotting library
  • Ace code editor: A browser based code editor with syntax highlighting, automatic indentation and other user-friendly features. Written in JS
  • Display documentation:  These can be generated server side from Markdown files using Jekyll. Several documentation files are already available from the pslab-desktop-apps, and can be reused after replacing the screenshot images only.

Flow Diagram

Recommended Reading

[1]: Tutorial series  for creating a web-app using python-flask and mysql. This tutorial will be extensively followed for creating the virtual-lab setup.

[2]: Introduction to the Virtual Labs initiative by the Govt of India

[3]: Virtual labs at IIT Kanpur

Documenting Open Event API Using API-Blueprint

FOSSASIA's Open Event Server API documentation is written as an API blueprint. API Blueprint is a language used to describe an API in a blueprint file (or a set of files). To work with the blueprint, the Apiary editor is used: it renders the API blueprint and presents the result as human-readable API documentation. We write the API blueprint manually.

Using API Blueprint:
We create the API blueprint by first adding the name and metadata for the API we aim to design. This step looks like this:

FORMAT: V1
HOST: https://api.eventyay.com

# Open Event API Server

The Open Event API Server

# Group Authentication

The API uses JWT Authentication to authenticate users to the server. For authentication, you need to be a registered user. Once you have registered yourself as a user, you can send a request to get the access_token. This access_token then needs to be used in the Authorization header when sending a request, in the following manner: `Authorization: JWT <access_token>`


The API blueprint starts with the metadata; here FORMAT and HOST are the defined metadata. The FORMAT keyword specifies the version of API Blueprint, and HOST defines the host for the API.

The heading starts with # and the first heading is regarded as the name of the API.

NOTE: All headings start with one or more # symbols. The number of symbols indicates the level of the heading: one # is the top level, ## is the second level, and so on, in line with normal Markdown format.
Following the heading section comes the description of the API. Further headings are used to break up the description section.

Resource Groups:
—————————–
By using the Group keyword at the start of a heading, we create a group of related resources. In the snippet below we have created a Group Users.

# Group Users

For using the API you need (mostly) to register as a user. Registering gives you access to all non-admin API endpoints. After registration, you need to create your JWT access token to send requests to the API endpoints.


| Parameter | Description | Type | Required |
|:----------|-------------|------|----------|
| `name`  | Name of the user | string | - |
| `password` | Password of the user | string | **yes** |
| `email` | Email of the user | string | **yes** |


Resources:
——————
Within Group Users we have created a resource, Users Collection. The heading specifies, inside square brackets after the resource name, the URI used to access the resource. We have also used parameters in the resource URI, which brings us to how parameters are added to a URI. The code below shows how to add parameters to the resource URI.

## Users Collection [/v1/users{?page%5bsize%5d,page%5bnumber%5d,sort,filter}]
+ Parameters
    + page%5bsize%5d (optional, integer, `10`) - Maximum number of resources in a single paginated response.
    + page%5bnumber%5d (optional, integer, `2`) - Page number to be fetched for the paginated response.
    + sort (optional, string, `email`) - Sort the resources according to the given attribute in ascending order. Append '-' to sort in descending order.
    + filter (optional, string, ``) - Filter according to the flask-rest-jsonapi filtering system. Please refer: http://flask-rest-jsonapi.readthedocs.io/en/latest/filtering.html for more.


Actions:
————–
An action is specified with a sub-heading inside a resource, giving the name of the action followed by the HTTP method in square brackets.
Before we go further, let us discuss what a payload is. A payload is an HTTP transaction message, including its description and any additional assets such as an entity-body validation schema.

There are two payloads inside an Action:

  1. Request: It is a payload containing one specific HTTP Request, with Headers and an optional body.
  2. Response: It is a payload containing one HTTP Response.

A payload may have an identifier-a string for a request payload or an HTTP status code for a response payload.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)


Types of HTTP methods for Actions:

  • GET – In this action, we send only header data such as Accept and Authorization and no body. Along with it we can send GET parameters like page[size]. There are two cases for GET: List and Detail. For example, for users, a GET for List retrieves information about all users in the response, while Detail retrieves information about a particular user.

The API Blueprint implementations of both the GET List and Detail requests and responses are as follows.

### List All Users [GET]
Get a list of Users.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
            "meta": {
                "count": 2
            },
            "data": [
                {
                    "attributes": {
                        "is-admin": true,
                        "last-name": null,
                        "instagram-url": null,


### Get Details [GET]
Get a single user.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
            "data": {
                "attributes": {
                    "is-admin": false,
                    "last-name": "Doe",
                    "instagram-url": "http://instagram.com/instagram",


  • POST – In this action, apart from the header information, we also need to send body data, which must conform to the JSON API specification. A POST body for the users API would look something like this:
### Create User [POST]
Create a new user using an email, password and an optional name.

+ Request (application/vnd.api+json)

    + Headers

            Authorization: JWT <Auth Key>

    + Body

            {
              "data":
              {
                "attributes":
                {
                  "email": "[email protected]",
                  "password": "password",


A POST request with this data creates a new entry in the table and then returns, in JSON API format, the entry that was created along with the id assigned to it.

  • PATCH – In this action, we update an already existing entry in the database. It has header data like all other requests, and body data which is almost the same as for POST, except that it also needs to mention the id of the entry we are trying to modify.
### Update User [PATCH]
+ `id` (integer) - ID of the record to update **(required)**

Update a single user by setting the email, password and/or name.

Authorized user should be same as user in request body or must be admin.

+ Request (application/vnd.api+json)

    + Headers

            Authorization: JWT <Auth Key>

    + Body

            {
              "data": {
                "attributes": {
                  "password": "password1",
                  "avatar_url": "http://example1.com/example1.png",
                  "first-name": "Jane",
                  "last-name": "Dough",
                  "details": "example1",
                  "contact": "example1",
                  "facebook-url": "http://facebook.com/facebook1",
                  "twitter-url": "http://twitter.com/twitter1",
                  "instagram-url": "http://instagram.com/instagram1",
                  "google-plus-url": "http://plus.google.com/plus.google1",
                  "thumbnail-image-url": "http://example1.com/example1.png",
                  "small-image-url": "http://example1.com/example1.png",
                  "icon-image-url": "http://example1.com/example1.png"
                },
                "type": "user",
                "id": "2"
              }
            }

Just like in POST, after we have updated our entry, we get back as response the new updated entry in the database.

  • DELETE – In this action, we delete an entry from the database. In our case the entry is soft-deleted by default, which means that instead of removing it from the database, we set the deleted_at column to the time of deletion. For deleting we just need to send header data with a DELETE request to the proper endpoint. If the deletion succeeds, we get the response "Object successfully deleted".
### Delete User [DELETE]
Delete a single user.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
          "meta": {
            "message": "Object successfully deleted"
          },
          "jsonapi": {
            "version": "1.0"
          }
        }


How do we check the result after entering all of this manually? We can use the Apiary website to render it, or simply use a different renderer. How? Check out my next blog post on Apiary and Aglio.
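
(For a quick local check, a renderer such as aglio can also turn the blueprint file into a static HTML page, for example with `aglio -i api_blueprint.apib -o output.html`; the file names here are just placeholders.)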

Learn more about api blueprint here: https://apiblueprint.org/