Creating an Installer for PSLab Desktop App

The PSLab device is made useful through applications running on two platforms. One is Android and the other is a desktop application developed with Python frameworks. The desktop application depends on half a dozen libraries, which need to be installed before installing the application itself.

For someone with little or no knowledge of how to install packages in a Linux environment, this task can be quite difficult. To ease the process of installing the desktop application on a computer, we can use a script that runs the specific commands needed to install the dependencies and the application.

Dependencies required by the PSLab Desktop app

  • PyQt 4.7
  • Python 2.6, 2.7 or 3.x
  • NumPy, SciPy
  • pyqt4-dev-tools
  • PyQtGraph
  • PyOpenGL and Qt OpenGL
  • IPython qtconsole

These dependencies can be installed using a bash script run with root permission. A bash script has the file extension “.sh” and starts with a header line:

#!/bin/bash

A bash script needs to be made executable by the user. To do this, the user types the following one-line command in the terminal and enters their password:

sudo chmod +x <Name_of_the_script>.sh

The keyword “sudo” is interpreted as “Super User DO”, and the line that follows it is executed with root permission, in other words with the administrative privileges needed to modify system settings, such as copying content into system folders.

The keyword “chmod” stands for “Change Mode”, which alters the mode of a file. In the current context, the file is made executable by adding the executable bit to the bash script using the “+x” syntax.

Once the script is made executable, it can be executed using:

sudo ./<Name_of_the_script>.sh

An installer can be made more attractive by using different colors rather than plain old text output. For this purpose we can use color codes in the bash script. They are represented using ANSI escape codes, and the following is a list of commonly used colors:

Black        0;30     Dark Gray     1;30
Red          0;31     Light Red     1;31
Green        0;32     Light Green   1;32
Brown/Orange 0;33     Yellow        1;33
Blue         0;34     Light Blue    1;34
Purple       0;35     Light Purple  1;35
Cyan         0;36     Light Cyan    1;36
Light Gray   0;37     White         1;37

As in any programming language, rather than repeating the same value in many places, we can define variables in a bash script. The syntax is the variable name followed by an equals sign and the value. There cannot be spaces around the equals sign, or the assignment will generate an error.

GREEN='\033[0;32m'

These variables can be accessed using a special syntax as follows:

${GREEN}

Finally, we can output a message to the console using the “echo” command:

echo -e "${GREEN}Welcome to PSLab Desktop app installer${NOCOLOR}"

Note that the “-e” flag is used to enable interpretation of the backslash escapes that follow. The ${NOCOLOR} variable, typically defined as '\033[0m', resets the terminal color at the end of the message.

In order to install the packages and libraries, we use two package management tools. One is “apt”, which stands for “Advanced Packaging Tool”, and the second is “pip”, which is used to download Python-related packages from the “Python Package Index”. The following two lines illustrate how the two commands can be used:

apt-get install python-pip python-dev build-essential -y

pip install pyqtgraph

The “-y” flag avoids the confirmation prompt in the console, so that the user does not have to press the “Y” key every time “apt” installs a package.


Implementing Permissions for Orders API in Open Event API Server

The Orders API is one of the core APIs of the Open Event API Server. The permissions in the Orders API need to be robust and secure enough to ensure there is no leak in payments or ticketing. The permission manager provides the permissions framework to implement these permissions and proper access controls based on the developer handbook.

The permissions in the developer handbook can be summarized as follows:

  • Superadmin/admin: list, view, create, update and delete (full access)
  • Event organizer: list, view and create [1]; update [1][2]; delete [1][3]
  • Registered user: create; view [4]
  • Everyone else: no access

  1. Only self-owned events
  2. Can only change order status
  3. A refund will also be initiated if the ticket was paid
  4. Only if the order was placed by self

Super Admins and Admins are allowed to create any order with any amount, but any coupon they apply is not consumed when creating the order. They can update almost every field of the order and can give the order any custom status. Permissions are applied with the help of the Permission Manager, which takes care of the authorization roles. For example, if a permission is set for admin access, then it is automatically granted to Super Admins as well, i.e., to people with a higher rank.
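To illustrate the idea of higher ranks inheriting permissions, here is a minimal stand-alone sketch; it is not the actual Permission Manager code, and the role names and rank values are only illustrative.

# Simplified illustration of rank-based permission fallthrough,
# not the Open Event Permission Manager implementation.
ROLE_RANK = {'user': 0, 'coorganizer': 1, 'organizer': 2, 'admin': 3, 'super_admin': 4}

def has_at_least(current_role, required_role):
    """A check written for `required_role` also passes for every higher rank."""
    return ROLE_RANK[current_role] >= ROLE_RANK[required_role]

assert has_at_least('super_admin', 'admin')      # a super admin satisfies an admin check
assert not has_at_least('user', 'coorganizer')   # a plain user does not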

Self-owned events

This allows the event admins, Organizer and Co-Organizer, to manage the orders of the events they own. They can view all orders, create orders with or without a discount coupon at any custom price, and update the status of orders. Event admins can provide a specific status while others cannot:

if not has_access('is_coorganizer', event_id=data['event']):
   data['status'] = 'pending'

And listing requires Co-Organizer access:

elif not has_access('is_coorganizer', event_id=kwargs['event_id']):
   raise ForbiddenException({'source': ''}, "Co-Organizer Access Required")

Can only change order status

The organizer cannot change any field of the order except its status. Only Server Admins and Super Admins are allowed to update every field of the order:

if not has_access('is_admin'):
    for element in data:
        if element != 'status':
            # keep only the status change; revert every other field to its current value
            data[element] = getattr(order, element)

Delete access is prohibited to event admins; only Server Admins can delete orders, providing a cancelling note which will be sent to the Attendee/Buyer.

def before_delete_object(self, order, view_kwargs):
   if not has_access('is_coorganizer', event_id=order.event.id):
       raise ForbiddenException({'source': ''}, 'Access Forbidden')

Registered User

A registered user can create an order with basic details like the attendees’ records and the payment method, with fields like country and city. They are not allowed to give any custom status to the order they are creating; all orders are set to “pending” by default.

Also, they are not allowed to update any field of their order. Any status update is done internally, thus maintaining the security of the order system. They are, however, allowed to view the orders they have placed. This is done by comparing their logged-in user id with the user id of the purchaser:

if not has_access('is_coorganizer_or_user_itself', event_id=order.event_id, user_id=order.user_id):
    raise ForbiddenException({'source': ''}, 'Access Forbidden')

Event Admins

The event admins have one more restriction: as an event admin, you cannot provide a discount coupon, and even if you do, it will be ignored.

# Apply discount only if the user is not event admin
if data.get('discount') and not has_access('is_coorganizer', event_id=data['event']):

Also, as an event admin, any amount you provide when creating the order is final, and no further calculation of the amount takes place:

if not has_access('is_coorganizer', event_id=data['event']):
   TicketingManager.calculate_update_amount(order)

Creating Attendees Records

Before sending a request to the Orders API, it is required to create the attendees mapped to some ticket. For this, registered users are allowed to create attendees without adding a relationship to an order. The mapping with the order is done internally by the Orders API and its helpers.

Resources

  1. The Open Event Developer Handbook – Niranjan R
  2. Flask-REST-JSONAPI documentation: Permissions and Data layer
  3. A guide to use the permission manager in the Open Event API Server: https://blog.fossasia.org/a-guide-to-use-permission-manager-in-open-event-api-server/

 


Generating Ticket PDFs in Open Event API Server

In the ordering system of the Open Event API Server, there is a requirement to send email notifications to the attendees. These emails contain the URL of the PDF of the generated ticket. On creating the order, the PDFs are first generated and stored in the preferred storage location, and then they are sent to the users through email.

Generating a PDF is a simple process: using xhtml2pdf we can generate PDFs from HTML. The generated PDF is then passed to the storage helpers to store it in the desired location, and the pdf-url is updated in the attendee’s record.

Sample PDF

PDF Template

The templates are written in HTML, which is then converted using the xhtml2pdf module. To store the templates, a new directory was created at app/templates where all the HTML files are stored. Now the template directory needs to be registered when initializing the Flask app so that the template engine can pick the templates from there. So in app/__init__.py we updated the Flask initialization with:

template_dir = os.path.dirname(__file__) + "/templates"

app = Flask(__name__, static_folder=static_dir, template_folder=template_dir)

This allows the template engine to pick the template files from this directory.

Generating PDFs

Generating a PDF is done by rendering the HTML template first. This HTML content is then parsed into the PDF:

file = open(dest, "wb")
pisa.CreatePDF(cStringIO.StringIO(pdf_data.encode('utf-8')), file)
file.close()

The generated PDF is stored in a temporary location and then passed to the storage helper to upload it:

uploaded_file = UploadedFile(dest, filename)
upload_path = UPLOAD_PATHS['pdf']['ticket_attendee'].format(identifier=get_file_name())
new_file = upload(uploaded_file, upload_path)

The path of the generated PDF is then returned.
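Putting these pieces together, a simplified sketch of such a create_save_pdf helper could look like the following; it relies on the storage helpers shown above (UploadedFile, UPLOAD_PATHS, get_file_name, upload) and is not the server’s exact implementation, and the file naming scheme is only illustrative.

import cStringIO
import os
import uuid

from xhtml2pdf import pisa


def create_save_pdf(pdf_data):
    # Render the HTML string into a PDF stored at a temporary location
    filename = 'ticket-%s.pdf' % uuid.uuid4()   # hypothetical naming scheme
    dest = os.path.join('/tmp', filename)
    with open(dest, "wb") as f:
        pisa.CreatePDF(cStringIO.StringIO(pdf_data.encode('utf-8')), f)

    # Hand the temporary file to the storage helpers shown above
    uploaded_file = UploadedFile(dest, filename)
    upload_path = UPLOAD_PATHS['pdf']['ticket_attendee'].format(identifier=get_file_name())

    # The returned path/URL is what gets saved on the attendee record
    return upload(uploaded_file, upload_path)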

Rendering HTML and storing PDF

for holder in order.ticket_holders:
    if holder.id != current_user.id:
        pdf = create_save_pdf(render_template('/pdf/ticket_attendee.html', order=order, holder=holder))
    else:
        pdf = create_save_pdf(render_template('/pdf/ticket_purchaser.html', order=order))
    holder.pdf_url = pdf
    save_to_db(holder)

The HTML is rendered using the Flask template engine and passed to create_save_pdf, and the returned link is saved on the attendee record.

Sending the PDF via email

These PDFs are sent as links in the email after creating the order. Thus a ticket is sent to each attendee, and a summarized order detail with the attendees is sent to the purchaser.

send_email(
    to=holder.email,
    action=TICKET_PURCHASED_ATTENDEE,
    subject=MAILS[TICKET_PURCHASED_ATTENDEE]['subject'].format(
        event_name=order.event.name,
        invoice_id=order.invoice_number
    ),
    html=MAILS[TICKET_PURCHASED_ATTENDEE]['message'].format(
        pdf_url=holder.pdf_url,
        event_name=order.event.name
    )
)

References

  1. xhtml2pdf README: https://github.com/xhtml2pdf/xhtml2pdf/blob/master/README.rst
  2. Generating PDF files in Python using xhtml2pdf: https://micropyramid.com/blog/generating-pdf-files-in-python-using-xhtml2pdf/

 


User Guide for the PSLab Remote-Access Framework

The remote-lab framework of the Pocket Science Lab has been designed to enable users to access their devices remotely via the internet. The pslab-remote repository includes an API server built with Python-Flask and a web app that uses EmberJS. This post is a guide for users who wish to test the framework. A series of blog posts have been written previously which explore and elaborate on various aspects of the remote lab, such as designing the API server, remote execution of function strings, automatic deployment on various domains, etc. In this post, we shall explore how to execute function strings, run the example scripts, and write a script ourselves.

A live demo is hosted at pslab-remote.surge.sh. The API server is hosted at pslab-stage.herokuapp.com, and an API reference which is being developed can be accessed at pslab-stage.herokuapp.com/apidocs. A screencast of the remote lab is also available.

Create an account

Signing up at this point is very straightforward and does not include any third-party verification tools, since the framework is under active development and cannot yet be claimed to be ready for release.

Click on the sign-up button, and provide a username, email, and password. The e-mail will be used as the login-id, and needs to be unique.

Login to the remote lab

Use the email-id used for signing up, enter the password, and the app will redirect you to your new home-page, where you will be greeted with a screen similar to the one below.

Your home-page

On the home-page, you will find that the first section includes a text box for entering a function string, and an execute button. Here, you can enter any valid PSLab function such as `get_resistance()` , and click on the execute button in order to run the function on the PSLab device connected to the API server, and view the results. A detailed blog post on this process can be found here.

Since this is a new account, no saved scripts are present in the Your Scripts section. We will come to that shortly; for now, there are some pre-written example scripts that let you run them as well as view their source code, so that you can copy them into your own collection and modify them.

Click on the play icon next to `multimeter.py` in order to run the script. The eye icon to the right of the row enables you to view the source code, but this can also be done while the app is running. The multimeter app looks something like this, and you can click on the various buttons to try them out.

You may also click on the Source Code tab in order to view the source.

Create and execute a small python script

We can now try to create a simple script of our own. Click on the `New Python Script` button in the top bar to navigate to a page that allows you to create and save your own scripts. We shall write a small three-line script to print some sinusoidal coordinates, save it, and test it. Copy the following code for a sine wave with 30 points, and publish your script.

import numpy as np
x=np.linspace(0,2*np.pi,30)
print (x, np.sin(x))

Create a button widget and associate a callback to the get_voltage function

A small degree of object-oriented capability has also been added, and pslab-remote allows you to create button widgets and associate their targets with other widgets and labels.
The multimeter demo script uses this feature, and a single line of code suffices to demonstrate it.

button('Voltage on CH1 >',"get_voltage('CH1')","display_number")

You can copy the above line into a new script in order to try it out.

Associate a button’s callback to the capture routines, and set the target as a plot

The callback target for a button can be set to point to a plot. This is useful if the callback returns arrays, such as those produced by the capture routines.

Example code to show a sine wave in a plot, and a button which will replace it with data captured from the oscilloscope:

import numpy as np
x=np.linspace(0,2*np.pi,30)
plt = plot(x, np.sin(x))
button('capture 1',"capture1('CH1',100,10)","update-plot",target=plt)
Figure: Demo animation from the plot_test example. Capture1 is connected to the plot shown.

How to Send a Script as Variable to the Meilix ISO with Travis and Meilix Generator

We wanted to add more features to the Meilix Generator web app so that the Meilix ISO could be customized with more features. At first we thought of sending each customization we want to apply as a separate variable and then using the scripts from the Meilix Generator repo to generate the ISO, but that idea was bad: many variables would have to be created and maintained on both Heroku and Travis CI, and they would keep growing with every feature added to the web app.

So we came up with a better idea: have the web app create one combined script containing every feature to be applied to the ISO, and send it as a single variable to Travis CI.

Now another problem was how to send the script as a variable, since JSON does not support the special characters inside the generated script. We tried escaping the special characters, and the data was successfully sent to Travis CI and shown in the config, but when setting that variable as an environment variable in Travis CI, the whole value of the variable was not taken, because we had spaces in the script.

To eliminate that problem we encoded the variable in the app as base64, sent it to Travis CI, and used it with the following code.

To generate the variable from the script:

# base64-encode the generated script so it survives JSON transport and env vars
with open('travis_script_1.sh', 'rb') as f:
    os.environ["TRAVIS_SCRIPT"] = base64.b64encode(f.read()).decode('ascii')

 

For this, we have to import the base64 module, open the generated script in binary mode and encode it with base64, and then send the variable to Travis CI using the Travis CI API, so that the ISO is built with the script inside the chroot. We also had to make changes in Meilix so that it can decode the script and copy it into the chroot during the ISO build.

sudo su <<EOF
echo "$TRAVIS_SCRIPT" > edit/meilix-generator.sh
mv browser.sh edit/browser.sh
EOF 

 

Using the script inside the chroot:

chmod +x meilix-generator.sh browser.sh
echo "$(<meilix-generator.sh)" #to test the file
./browser.sh
rm browser.sh

Resources

Base64 python documentation from docs.python.org

Base64 bash tutorial from scottlinux.com by Scott Miller

su in a script from unix.stackexchange.com answered by Ankit


Creating GUI for configuring SUSI Linux Settings

The SUSI Linux app provides access to SUSI on Linux distributions on the desktop as well as on hardware devices like the Raspberry Pi. The settings for SUSI Linux are controlled with a config.json file. You may edit the file manually, but to provide safe configuration we have a config generator script. You may run the script to configure settings like the TTS engine, the STT engine, authentication, the choice of hotword engine, etc. Generally, it is easier to configure application settings through a GUI, so we added one using PyGTK and Glade.

Glade is a GUI designer for GNOME-based Linux systems. I wrote a blog post about how to create user interfaces in Glade and access them from Python code in SUSI Linux. Now, for creating the UI for the configuration screen, we need to choose a suitable layout. Glade provides various choices like BoxLayout, GridLayout, FlowBox, ListBox, Notebook etc. Since we need to display only basic settings options, we select the BoxLayout for this purpose.

BoxLayout, as the name suggests, forms a box-like arrangement for widgets. You can arrange the widgets in either a vertical or a horizontal layout. We select Application Window as the top-level container and add a BoxLayout container to it. Now, in each box of the BoxLayout, we need to add widgets like a ComboBox or Switch for the user’s choice, and a Label. This can be done by using a horizontal BoxLayout with the corresponding widgets. After arranging the UI in the manner described above, we have a GUI like the one below.

If you see the current window in the preview now, you will find that the ComboBoxes do not have any items. We need to define the items of a ComboBox using a GtkListStore. You may refer to this video tutorial to see how this can be done in Glade.
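For reference, the same thing can also be done from Python code. Here is a minimal sketch that populates one of the combo boxes programmatically; it assumes stt_combobox has been obtained from the Glade builder, and the engine names simply mirror the choices handled later in this post.

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

# Fill the Speech To Text combo box with a ListStore of engine names
stt_store = Gtk.ListStore(str)
for engine in ("Google", "IBM Watson", "Bing"):
    stt_store.append([engine])

stt_combobox.set_model(stt_store)   # stt_combobox comes from the Glade builder
renderer = Gtk.CellRendererText()
stt_combobox.pack_start(renderer, True)
stt_combobox.add_attribute(renderer, "text", 0)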

Now, when we see the preview, our GUI is fully functional. We have options for the Speech Recognition Service and the Text to Speech Service in ComboBoxes. Other simple settings are available as switches.

Now we need to add functionality to our UI. We want our code to be modular and structured, therefore we declare a ConfigurationWindow class. Though the ideal way to handle such cases would be to inherit from the Gtk.Window class, reading the documentation of PyGTK+ 3 I could not find a way to do this for windows created through Glade. Thus, we use composition for storing the window object. We add the window and the other widgets present in the UI as properties of the ConfigurationWindow class like this:

class ConfigurationWindow:
   def __init__(self) -> None:
       super().__init__()
       builder = Gtk.Builder()
       builder.add_from_file(os.path.join("glade_files/configure.glade"))

       self.window = builder.get_object("configuration_window")
       self.stt_combobox = builder.get_object("stt_combobox")
       self.tts_combobox = builder.get_object("tts_combobox")
       self.auth_switch = builder.get_object("auth_switch")
       self.snowboy_switch = builder.get_object("snowboy_switch")
       self.wake_button_switch = builder.get_object("wake_button_switch")

Now we need to connect the signals from our configuration window to the handler. We declare the Handler as a nested class in the ConfigurationWindow class because its scope of usage is inside the ConfigurationWindow object. Then you may connect the signals to an object of the Handler class:

builder.connect_signals(ConfigurationWindow.Handler(self))

Since we may need to modify the state of the widgets, we hold a reference to the parent ConfigurationWindow object in the Handler and pass self as a parameter to it. You may read more about using handlers in my previous blog post.

In the Handler, we write to the config.json file and change its parameters based on the user’s input in the GUI. We handle this for the Speech To Text selection ComboBox in the following manner. We also declare two additional dialogs for handling the input of credentials by the users for the Watson and Bing services.

def on_stt_combobox_changed(self, combo: Gtk.ComboBox):
   selection = combo.get_active()

   if selection == 0:
       config['default_stt'] = 'google'

   elif selection == 1:
       credential_dialog = WatsonCredentialsDialog(self.config_window.window)
       response = credential_dialog.run()

       if response == Gtk.ResponseType.OK:
           username = credential_dialog.username_field.get_text()
           password = credential_dialog.password_field.get_text()
           config['default_stt'] = 'watson'
           config['watson_stt_config']['username'] = username
           config['watson_stt_config']['password'] = password
       else:
           self.config_window.init_stt_combobox()

       credential_dialog.destroy()

   elif selection == 2:
       credential_dialog = BingCredentialDialog(self.config_window.window)
       response = credential_dialog.run()

       if response == Gtk.ResponseType.OK:
           api_key = credential_dialog.api_key_field.get_text()
           config['default_stt'] = 'bing'
           config['bing_speech_api_key']['username'] = api_key
       else:
           self.config_window.init_stt_combobox()

       credential_dialog.destroy()

Now, we declare two more methods to show and exit the Window.

def show_window(self):
   self.window.show_all()
   Gtk.main()

def exit_window(self):
   self.window.destroy()
   Gtk.main_quit()

Now we may use a ConfigurationWindow object anywhere in our code. This modularized approach is better when you need to manage multiple windows, as you can just create a window of a particular type and show it whenever needed in your code.
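For instance, a hedged sketch of how the class might be used from elsewhere in the app; the import path and the callback name are only illustrative, not the actual SUSI Linux module layout.

# Illustrative usage; the actual module path in SUSI Linux may differ.
from configuration_window import ConfigurationWindow

def on_settings_selected(*args):
    # Build the window from the Glade file and enter the GTK main loop
    config_window = ConfigurationWindow()
    config_window.show_window()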

Resources

  • Glade usage Youtube tutorial: https://www.youtube.com/watch?v=vOGK3TveDDk
  • Creating GUI using PyGTK for SUSI Linux: https://blog.fossasia.org/making-gui-for-susi-linux-with-pygtk/
  • PyGObject Documentation: http://pygobject.readthedocs.io/en/latest/getting_started.html

Showing Pull Request Build logs in Yaydoc

In Yaydoc, I added the feature to show the build status of a pull request. But there was no way for the user to see the reason for a build failure, so I decided to show the build log in the pull request, similar to Travis CI. For this, I had to save the build log in the database, then use the GitHub status API to attach a build log URL to the pull request, which redirects to the Yaydoc website where we render the build log.

StatusLog.storeLog(name, repositoryName, metadata, `temp/admin@fossasia.org/generate_${uniqueId}.txt`, function(error, data) {
  if (error) {
    status = "failure";
  } else {
    targetBranch = `https://${process.env.HOSTNAME}/prstatus/${data._id}`
  }
  github.createStatus(commitId, req.body.repository.full_name, status, description, targetBranch, repositoryData.accessToken, function(error, data) {
    if (error) {
      console.log(error);
    } else {
      console.log(data);
    }
  });
});

In the above snippet, I store the build log generated by the build script in MongoDB, and I append the MongoDB unique ID to the `prstatus` URL so that we can use that ID to retrieve the build log from the database.

exports.createStatus = function(commitId, name, state, description, targetURL, accessToken, callback) {
  request.post({
    url: `https://api.github.com/repos/${name}/statuses/${commitId}`,
    headers: {
      'User-Agent': 'Yaydoc',
      'Authorization': 'token ' + crypter.decrypt(accessToken)
    },
    "content-type": "application/json",
    body: JSON.stringify({
      state: state,
      target_url: targetURL,
      description: description,
      context: "Yaydoc CI"
    })
  }, function(error, response, body) {
    if (error !== null) {
      return callback({description: 'Unable to create status'}, null);
    }
    callback(null, JSON.parse(body));
  });
};

After saving the build log, I send a request to GitHub to set the status of the build along with the build log URL, where the user can click the Details link and see the build log.


Making GUI for SUSI Linux with PyGTK

The SUSI Linux app provides access to SUSI on Linux distributions on the desktop as well as on hardware devices like the Raspberry Pi. It started off as a headless client, but we decided to add a minimalist GUI to SUSI Linux for performing login and configuring settings. Since SUSI Linux is a Python app, it was desirable to use a GUI framework compatible with Python. Many popular GUI frameworks now provide bindings for Python. Some popular choices are:

wxPython: wxPython is a Python GUI framework based on wxWidgets, a cross-platform GUI library written in C++. In addition to the standard dialogs, it includes a 2D path drawing API, dockable windows, support for many file formats, and both text-editing and word-processing widgets. wxPython, though, mainly supports Python 2 as the programming language.

PyQt: Qt is a multi-licensed cross-platform framework written in C++. Qt needs a commercial licence for use, but if the application is completely open source, the community license can be used. Qt is an excellent choice for GUIs and many applications are based on it.

PyGTK / PyGObject: PyGObject is a Python module that lets you write GUI applications in GTK+. It provides bindings to GObject, a cross-platform C library. GTK+ applications are natively supported in most distros, and you do not need to install any other development tools for developing with PyGTK.

Comparing these frameworks, PyGTK was found to meet our needs very well. To make UIs in PyGTK, you have a WYSIWYG (what you see is what you get) editor called Glade. Though you can design the whole UI programmatically, it is always convenient to use an editor like Glade to simplify the creation and styling of widgets.

To create a UI, you need to install Glade on your specific distribution. After that, open Glade and add a top-level container, a Window or an AppWindow, to your app.

Once that is done, you may pick from the available layout managers. We are using the BoxLayout manager in the SUSI Linux GUIs. Then add your widgets to the Application Window using drag and drop.

Properties of widgets are available in the right panel. Edit your widget properties to give them meaningful IDs so we can address them later in our code. GTK also provides signals for notifying about events associated with the widgets. Open the Signals tab in the widget properties pane; there you need to write the name of a signal handler for each event you care about. A signal handler is a function that is fired upon the occurrence of the associated event. For example, we have signals like changed for text entry boxes and clicked for buttons.

After completing the design of the GUI, we can load the .glade file of the UI we just created from the Python code. We can do this using the following snippet.

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

builder = Gtk.Builder()
builder.add_from_file("glade_files/signin.glade")

You can reference each widget from the Glade file using its ID like below.

email_field = builder.get_object("email_field")

Now, to handle all the signals declared in the Glade file, we need to make a Handler class. In this class, you need to define the callbacks for your signals. On the occurrence of a signal, the respective callback is fired.

class Handler:

   def onDeleteWindow(self, *args):
       Gtk.main_quit(*args)

   def signInButtonClicked(self, *args):
       # implementation

   def input_changed(self, *args):
       # implementation

We may associate a handler function with more than one signal. For that, we just need to specify the same function for both signals.

Now, we need to connect this Handler to builder signals. This can be done using the following line.

builder.connect_signals(Handler())

Now, we can show our window using the following lines.

window.show_all()
Gtk.main()

The above lines display the window and start the GTK main loop. The script waits in the GTK main loop, and the app may be quit using the Gtk.main_quit() call. Running this script shows the login screen of our app, like below.
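Putting the pieces together, a minimal self-contained sketch looks like the following; the Glade file name follows the snippet above, while the window ID "signin_window" is only a placeholder for whatever ID you set in Glade.

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk


class Handler:
    def onDeleteWindow(self, *args):
        Gtk.main_quit(*args)

    def signInButtonClicked(self, *args):
        print("Sign in clicked")   # the real app would run the SUSI sign-in logic here


builder = Gtk.Builder()
builder.add_from_file("glade_files/signin.glade")
builder.connect_signals(Handler())

# "signin_window" is a placeholder; use the ID you gave the window in Glade
window = builder.get_object("signin_window")
window.show_all()
Gtk.main()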


Handling Errors While Parsing the yaml File in Yaydoc

Yaydoc, our automatic documentation generator, uses a yaml file to read a user’s configuration. The internal configuration parser basically converts the yaml file to a Python dictionary. Then it serializes the values of that dictionary using a custom serialization format. From there, it associates those values with environment variables which are then passed to bash scripts for various tasks such as deployment, generation, etc. Some of those environment variables are again passed to another Python layer which interacts with Sphinx, where they are deserialized before use. This whole system works pretty well for our use cases.

Now let’s assume a user adds a yaml file with a malformed section. For example, to specify a theme, one needs to add the following to the yaml file.

build:
  theme:
    name: sphinx_fossasia_theme

But our user has the following in their yaml file.

build:
  theme: sphinx_fossasia_theme

Now this will raise an error, as we expect a dictionary as the value for the key ‘theme’ but we got a string. How do we handle such cases without ignoring the entire file, which would be too heavy a penalty for such a small mistake? One approach would have been to wrap each call to connect in a bunch of try-except blocks, but that would render the code unreadable, as the initial motivation for implementing the connect method was to abstract the internal implementation so that other contributors who may not be well versed in Python can also easily add config options without needing to learn a bunch of Python constructs.

So what we did is that, while merging the dictionary containing the default options with the dictionary containing the user preferences, we check whether the default value has the same data type as the incoming value. If it does, it is deemed safe to merge. There are certain relaxations, though: if the current type is a list, then the incoming value can be of any type, as it can always be converted to a list with a single element. This is required to support the following syntax.

key:
  - value

key: value

The above two blocks are equivalent due to the above-mentioned approach although the type is different.
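The type-based merge described above could look roughly like the following minimal sketch; it is an illustration of the idea, not Yaydoc's actual implementation.

# Sketch of merging user preferences into defaults while rejecting type mismatches.
def merge(defaults, user):
    result = dict(defaults)
    for key, value in user.items():
        if key not in defaults:
            result[key] = value
        elif isinstance(defaults[key], list):
            # Relaxation: any value can be wrapped into a single-element list
            result[key] = value if isinstance(value, list) else [value]
        elif isinstance(value, type(defaults[key])):
            result[key] = value
        # else: type mismatch; keep the default value for this key
    return result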

Now, after this pre-validation step is over, we can ensure that if the assumed type for a key is, let’s say, a dictionary, then it will be a dictionary. Hence no type errors are raised, such as trying to access a dict method on another object, say a string, which happened with the earlier implementation. After this, an extra parameter was added to the connect method, to which we can now pass a validation function; if it returns false, those values are ignored. This feature is currently used at a small scale, where we validate the links to subprojects and include them only if they look like a valid GitHub repo. Note that their existence is not checked; only a regex-based validation is performed, as sketched below.
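A validator of that kind could be as simple as the following sketch; the pattern and the connect signature shown in the comment are only illustrative, not Yaydoc's actual code.

import re

# Loose check that a subproject link looks like a GitHub repository URL;
# it does not verify that the repository actually exists.
GITHUB_REPO_RE = re.compile(r'^https?://github\.com/[\w.-]+/[\w.-]+/?$')

def looks_like_github_repo(url):
    return bool(GITHUB_REPO_RE.match(url))

# Hypothetical usage with the validation callback passed to connect():
# connect('subprojects', validator=looks_like_github_repo)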

It was also important to notify the user when we detect that a specific section is invalid, and to provide informative and helpful error messages without failing the build. Hence proper error messages were added, so that the user knows exactly which section is to blame. This is similar to compilers, where the error message is crucial for debugging a certain piece of code.


Showing Pull Request Build Status in Yaydoc

Yaydoc is integrated into various open source projects in FOSSASIA. We have to make sure that a contributor’s PR does not break the build. So I decided to check whether the PR breaks the build or not, and then notify the status of the build using the GitHub status API.

exports.registerHook = function (data, accessToken) {
  return new Promise(function(resolve, reject) {
    var hookurl = 'http://' + process.env.HOSTNAME + '/ci/webhook';
    if (data.sub === true) {
      hookurl += `?sub=true`;
    }
    request({
      url: `https://api.github.com/repos/${data.name}/hooks`,
      headers: {
        'User-Agent': 'Yaydoc',
        'Authorization': 'token ' + crypter.decrypt(accessToken)
      },
      method: 'POST',
      json: {
        name: "web",
        active: true,
        events: [
          "push",
          "pull_request"
        ],
        config: {
          url: hookurl,
          content_type: "json"
        }
      }
    }, function(error, response, body) {
      if (response.statusCode !== 201) {
        console.log(response.statusCode + ': ' + response.statusMessage);
        resolve({status: false, body:body});
      } else {
        resolve({status: true, body: body});
      }
    });
  });
};

I register the webhook when the user registers the repository with Yaydoc, for the push and pull_request events. The push event is used for building the documentation and hosting it on GitHub Pages. The pull_request event is used for checking the build of the pull request.

github.createStatus(commitId, req.body.repository.full_name, "pending", "Yaydoc is checking your build", repositoryData.accessToken, function(error, data) {
  if (!error) {
    var user = req.body.pull_request.head.label.split(":")[0];
    var targetBranch = req.body.pull_request.head.label.split(":")[1];
    var gitURL = `https://github.com/${user}/${req.body.repository.name}.git`;
    var data = {
      email: "admin@fossasia.org",
      gitUrl: gitURL,
      docTheme: "",
      debug: true,
      docPath: "",
      buildStatus: true,
      targetBranch: targetBranch
    };
    generator.executeScript({}, data, function(error, generatedData) {
      var status, description;
      if (error) {
        status = "failure";
        description = error.message;
      } else {
        status = "success";
        description = generatedData.message;
      }
      github.createStatus(commitId, req.body.repository.full_name, status, description, repositoryData.accessToken, function(error, data) {
        if (error) {
          console.log(error);
        } else {
          console.log(data);
        }
      });
    });
  }
});

When anyone opens a new PR, GitHub sends a request to the Yaydoc webhook. Then I send a status to GitHub saying “Yaydoc is checking your build” with the state `pending`. After that, the documentation is generated and I check the exit code. If the exit code is zero, I send the `success` status, otherwise I send the `failure` status.