Designing A Virtual Laboratory With PSLab

What is a virtual laboratory? A virtual lab interface gives students remote access to laboratory equipment via the Internet, without their having to be physically present near the equipment. The idea is that lab experiments can be made accessible to a larger audience that may not have the resources to set up the experiment themselves. Another use case is an experiment setup that must be placed at a specific location which may not be habitable. The PSLab's capabilities can be increased significantly by a framework that allows remote data acquisition and control; it can then be deployed in various test and measurement scenarios, such as an interactive environment-monitoring station.

What resources will be needed for such a setup? The proposed virtual lab will be platform independent and should run in web browsers. This necessitates a lightweight web server running on the hardware to which the PSLab is connected. The web server must have a framework that handles multiple connections and allows control access only to authenticated users.

Proposed design for the backend. The backend framework must be able to handle the following tasks:
- Communicate with the PSLab hardware attached to the server
- Host lightweight web pages with various visual aids
- Support an authentication framework via a database that contains user credentials
- Reply with JSON data after executing single commands on the PSLab
- Execute remotely received Python scripts and relay the HTML-formatted output, including plots

Proposed design for the frontend:
- Responsive, aesthetic layouts and widget styles
- Essential utilities such as sign-up and sign-in pages
- Embedded plots with basic zooming and panning facilities
- An embedded code editor with syntax highlighting
- Widgets to submit the code to the server for execution, and subsequent display of the received response
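The "reply with JSON data after executing single commands" requirement can be sketched in a few lines of plain Python. This is only an illustration: `run_command`, `read_voltage` and the command names are hypothetical placeholders; a real backend would call into the PSLab Python communication library instead.

```python
import json

def read_voltage():
    # Placeholder for a real PSLab measurement call.
    return 3.29

# Hypothetical registry mapping single-command names to instrument calls.
COMMANDS = {"get_voltage": read_voltage}

def run_command(name):
    """Execute one named command and return the reply as a JSON string."""
    if name not in COMMANDS:
        return json.dumps({"status": "error", "message": "unknown command"})
    return json.dumps({"status": "ok", "result": COMMANDS[name]()})

print(run_command("get_voltage"))  # {"status": "ok", "result": 3.29}
```

A web framework would expose `run_command` behind an authenticated route; the dispatch-table shape keeps the set of remotely callable commands explicit and auditable.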
A selection of tools that can assist with this project, and the purpose each will serve:

Backend
- The Python communication library for the PSLab
- Flask: 'Flask is a BSD Licensed microframework for Python based on Werkzeug, Jinja 2 and good intentions.' It can handle concurrent requests and is well suited to serve as our web server.
- MySQL: a database management utility that can be used to store user credentials, user scripts, queues, etc.
- Werkzeug: its utilities for creating and checking password hashes are essential for safely storing and verifying passwords in the database
- JSON: for relaying measurement results to the client
- Gunicorn + Nginx: for when a more scalable deployment is needed and Flask's built-in web server is unable to handle the load

Frontend
- Bootstrap CSS: for neatly formatted, responsive UIs
- jqPlot: a versatile and extensible JS-based plotting library
- Ace code editor: a browser-based code editor written in JS, with syntax highlighting, automatic indentation and other user-friendly features
- Documentation display: these pages can be generated server-side from Markdown files using Jekyll. Several documentation files are already available from the pslab-desktop-apps and can be reused after replacing only the screenshot images.

Flow Diagram

Recommended Reading [1]: Tutorial series for creating a web-app using…
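Werkzeug's `generate_password_hash` and `check_password_hash` would cover the credential-storage need above; the underlying idea (salted, iterated hashing with a constant-time comparison) can be sketched with the standard library alone. The iteration count and salt size below are illustrative choices, not Werkzeug's defaults.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Return (salt, digest) suitable for storing in a users table."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def check_password(password, salt, digest, iterations=100_000):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
print(check_password("s3cret", salt, digest))  # True
print(check_password("wrong", salt, digest))   # False
```

The key point either way: only salted hashes, never plaintext passwords, are ever written to the database.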

Continue Reading: Designing A Virtual Laboratory With PSLab

Documenting Open Event API Using API-Blueprint

FOSSASIA's Open Event Server API documentation is written using API Blueprint. API Blueprint is a language for describing an API in a blueprint file (or a set of files). To work with the blueprint, the Apiary editor is used; it renders the API Blueprint and presents the result as human-readable API documentation. We create the API blueprint manually.

Using API Blueprint: We create the API blueprint by first adding the name and metadata for the API we aim to design. This step looks like this:

FORMAT: V1
HOST: https://api.eventyay.com

# Open Event API Server

The Open Event API Server

# Group Authentication

The API uses JWT Authentication to authenticate users to the server. For authentication, you need to be a registered user. Once you have registered yourself as a user, you can send a request to get the access_token. You then need to use this access_token in the Authorization header when sending a request, in the following manner: `Authorization: JWT <access_token>`

An API blueprint starts with the metadata; here FORMAT and HOST are the defined metadata. The FORMAT keyword specifies the version of API Blueprint, and HOST defines the host for the API. A heading starts with # and the first heading is regarded as the name of the API. Note that every heading starts with one or more # symbols, and the number of symbols indicates the level of the heading: one # is the top level, ## is the second level, and so on, in compliance with normal Markdown format. Following the heading section comes the description of the API; further headings are used to break up the description section.

Resource Groups: By using the Group keyword at the start of a heading, we create a group of related resources.
Just as in the screenshot below, we have created a Group Users.

# Group Users

For using the API you (mostly) need to register as a user. Registering gives you access to all non-admin API endpoints. After registration, you need to create your JWT access token to send requests to the API endpoints.

| Parameter  | Description          | Type   | Required |
|:-----------|----------------------|--------|----------|
| `name`     | Name of the user     | string | -        |
| `password` | Password of the user | string | **yes**  |
| `email`    | Email of the user    | string | **yes**  |

Resources: In the Group Users we have created a resource, Users Collection. The heading specifies the URI used to access the resource, inside square brackets after the heading name. We have used parameters in the resource URI here, which brings us to how to add parameters to the URI. The code below shows how to add parameters to the resource URI. ## Users Collection…

Continue Reading: Documenting Open Event API Using API-Blueprint

Open Event Server: Working with Migration Files

FOSSASIA's Open Event Server uses Alembic migration files to handle all database operations and updates. From creating tables to updating tables and the database, everything works with the help of migration files. However, we often miss that automatically generated migration files mostly drop and re-add columns rather than just changing them. One example of this would be:

def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.add_column('session', sa.Column('submission_date', sa.DateTime(), nullable=True))
    op.drop_column('session', 'date_of_submission')

Here the idea was to rename the column date_of_submission to submission_date, but the autogenerated migration drops the whole column and creates a new one, so all the data under date_of_submission is lost. How do we solve that? Here are two approaches:

op.alter_column: When the update is as simple as changing a column name, we can use this. As discussed above, if we migrate directly after changing a column in our model, the automatic migration would drop the old column and create a new column with the changes. Doing this in production would cause a huge loss of data, which we don't want. Suppose we want to change just the name of the column start_time to starts_at; we don't want the entire column to be dropped. The alternative is op.alter_column. Its two mandatory parameters are the table name and the column you want to alter; the other parameters describe the changes. Some commonly used parameters are:

- nullable: optional; specify True or False to alter the column's nullability.
- new_column_name: optional; specify a string name here to indicate the new name within a column rename operation.
- type_: optional; a TypeEngine type object to specify a change to the column's type. For SQLAlchemy types that also indicate a constraint (i.e. Boolean, Enum), the constraint is generated as well.
- autoincrement: optional; set the AUTO_INCREMENT flag of the column; currently understood by the MySQL dialect.
- existing_type: optional; a TypeEngine type object specifying the previous type. This is required for all column alter operations that don't otherwise specify a new type, as well as when nullability is being changed on a column.

So, for example, if you want to change a column name from "start_time" to "starts_at" in the events table, you would write:

op.alter_column('events', 'start_time', new_column_name='starts_at')

def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.alter_column('sessions_version', 'end_time', new_column_name='ends_at')
    op.alter_column('sessions_version', 'start_time', new_column_name='starts_at')
    op.alter_column('events_version', 'end_time', new_column_name='ends_at')
    op.alter_column('events_version', 'start_time', new_column_name='starts_at')

Here, sessions_version and events_version are the table names, and we alter the columns start_time to starts_at and end_time to ends_at with the op.alter_column parameter new_column_name.

op.execute: With alter_column, most alterations to a column's name, constraints or type are achievable. But there can be a separate scenario for changing the column's data itself. Suppose I change a table with column "aspect_ratio", which was a string column and had values "on" and…
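The point of op.alter_column, that a rename keeps the existing rows, can be demonstrated outside Alembic with plain SQL. The sketch below uses SQLite's ALTER TABLE ... RENAME COLUMN (available in SQLite 3.25 and later); the events table and its values are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, start_time TEXT)")
conn.execute("INSERT INTO events (start_time) VALUES ('2017-05-24 08:33')")

# Rename in place, as op.alter_column would: the row data survives.
conn.execute("ALTER TABLE events RENAME COLUMN start_time TO starts_at")

row = conn.execute("SELECT starts_at FROM events").fetchone()
print(row[0])  # 2017-05-24 08:33
```

A drop-and-add migration would instead leave starts_at full of NULLs, which is exactly the production data loss the article warns about.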

Continue Reading: Open Event Server: Working with Migration Files

Automatic Imports of Events to Open Event from online event sites with Query Server and Event Collect

One goal for the next version of the Open Event project is to allow automatic imports of events from various event listing sites. We will implement this using the Open Event Import APIs and two additional modules: Query Server and Event Collect. The idea is to run the modules as micro-services or as stand-alone solutions.

Query Server: The query server is, as the name suggests, a query processor. As we are moving towards an API-centric approach for the server, the query server also has API endpoints (v1). Using this API you can get the data from the server in the requested format. The API itself is quite intuitive.

API to get data from the query server:

GET /api/v1/search/<search-engine>/query=query&format=format

Sample response header:

Cache-Control: no-cache
Connection: keep-alive
Content-Length: 1395
Content-Type: application/xml; charset=utf-8
Date: Wed, 24 May 2017 08:33:42 GMT
Server: Werkzeug/0.12.1 Python/2.7.13
Via: 1.1 vegur

The server is built in Flask. The GitHub repository of the server contains a simple Bootstrap front-end, which is used as a testing ground for results. The query string calls the search-engine result scraper scraper.py, which is based on the scraper at searss. This scraper takes a search engine (presently Google, Bing, DuckDuckGo or Yahoo) as additional input and searches on that engine. The output from the scraper, which can be in XML or JSON depending on the API parameters, is returned, while the search query is stored in a MongoDB database indexed by the query string. This is done keeping in mind the capabilities to be added later for using Kibana analysis tools. The frontend prettifies results with the help of PrismJS. The query server will be used for the initial listing of events from different search engines, accessed through the following API. The query server app can be accessed on Heroku.

➢ api/list: to provide an initial list of events (titles and links) to be displayed in Open Event search results.
When an event is searched for on Open Event, the query is passed on to the query server, where a search is made by calling scraper.py with some details appended for better event hunting. Recent developments at Google include their event-search feature: in the Google search app, event searches take over when Google detects that a user is looking for an event. The feed from the scraper is parsed inside the query server to generate a list containing event titles and links. Each event in this list is then searched for in the database to check whether it already exists. We will be using Elasticsearch to achieve fuzzy searching for events in the Open Event database, as Elasticsearch is planned for the API. One example of what we wish to achieve by implementing this type of search in the database follows. The user may search for:

- Google Cloud Event Delhi
- Google Event, Delhi
- Google Cloud, Delhi
- google cloud delhi
- Google Cloud Onboard Delhi
- Google Delhi Cloud event

All these searches should match "Google Cloud Onboard Event, Delhi" with good accuracy.…
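Elasticsearch will do the fuzzy matching in production; the intent can be illustrated with the standard library's difflib, scoring each query variant against the stored title. The 0.5 threshold here is an arbitrary illustrative choice, not a tuned parameter from the project.

```python
from difflib import SequenceMatcher

def similarity(query, title):
    """Case-insensitive similarity ratio between 0.0 and 1.0."""
    return SequenceMatcher(None, query.lower(), title.lower()).ratio()

stored_title = "Google Cloud Onboard Event, Delhi"
queries = [
    "Google Cloud Event Delhi",
    "google cloud delhi",
    "Google Delhi Cloud event",
]

# Each variant scores well against the stored title despite differing
# word order, casing and punctuation.
for q in queries:
    print(q, round(similarity(q, stored_title), 2))
```

Elasticsearch goes further (token-level analysis, edit-distance fuzziness, relevance ranking), but the principle of scoring approximate matches instead of requiring exact equality is the same.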

Continue Reading: Automatic Imports of Events to Open Event from online event sites with Query Server and Event Collect

DetachedInstanceError: Dealing with Celery, Flask’s app context and SQLAlchemy in the Open Event Server

In the Open Event Server project, we had chosen Celery for async background tasks. From the official website:

What is Celery? Celery is an asynchronous task queue/job queue based on distributed message passing.

What are tasks? The execution units, called tasks, are executed concurrently on one or more worker servers using multiprocessing.

After the tasks had been set up, an error constantly came up whenever a task was called:

DetachedInstanceError: Instance <User at 0x7f358a4e9550> is not bound to a Session; attribute refresh operation cannot proceed

This error usually occurs when you try to access the session object after it has been closed. It may have been closed by an explicit session.close() call, or after committing the session with session.commit(). The Celery tasks in question were performing some database operations, so the first thought was that these operations might be causing the error. To test this theory, the Celery task was changed to:

@celery.task(name='lorem.ipsum')
def lorem_ipsum():
    pass

But sadly, the error remained. This proved that the Celery task itself was fine and that the session was being closed whenever the task was called. The method in which the Celery task was being called was of the following form:

def restore_session(session_id):
    session = DataGetter.get_session(session_id)
    session.deleted_at = None
    lorem_ipsum.delay()
    save_to_db(session, "Session restored from Trash")
    update_version(session.event_id, False, 'sessions_ver')

In our app, the app context was not being passed whenever a Celery task was initiated. Thus the Celery task, whenever called, closed the previous app context, eventually closing the session along with it. The solution to this error is to follow the pattern suggested at http://flask.pocoo.org/docs/0.12/patterns/celery/.
def make_celery(app):
    celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    task_base = celery.Task

    class ContextTask(task_base):
        abstract = True

        def __call__(self, *args, **kwargs):
            if current_app.config['TESTING']:
                with app.test_request_context():
                    return task_base.__call__(self, *args, **kwargs)
            with app.app_context():
                return task_base.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

celery = make_celery(current_app)

The __call__ method ensures that the Celery task is provided with a proper app context to work with.

Continue Reading: DetachedInstanceError: Dealing with Celery, Flask's app context and SQLAlchemy in the Open Event Server

Event-driven programming in Flask with Blinker signals

Setting up Blinker: The Open Event Project offers event managers a platform to organize all kinds of events, including concerts, conferences, summits and regular meetups. In the server part of the project, the issue at hand was to perform multiple tasks in the background (we use Celery for this) whenever changes occurred within an event or the speakers/sessions associated with it. The usual approach would be to add a function call after every relevant change. But the statements making these changes were distributed all over the project in multiple places. It would be cumbersome to add 3-4 function calls (irrelevant to the function in which they are executed) in so many places; moreover, the code would become unstructured and really hard to maintain over time. That's when signals came to our rescue. Since Flask 0.6 there is integrated support for signalling in Flask; refer to http://flask.pocoo.org/docs/latest/signals/. The Blinker library is used here to implement signals. If you're coming from some other language, signals are analogous to events.

Given below is the code to create named signals in a custom namespace:

from blinker import Namespace

event_signals = Namespace()
speakers_modified = event_signals.signal('event_json_modified')

If you want to emit a signal, you can do so by calling the send() method:

speakers_modified.send(current_app._get_current_object(), event_id=event.id, speaker_id=speaker.id)

From the user guide itself: "Try to always pick a good sender. If you have a class that is emitting a signal, pass self as sender. If you are emitting a signal from a random function, you can pass current_app._get_current_object() as sender." To subscribe to a signal, Blinker provides neat decorator-based signal subscriptions.
@speakers_modified.connect
def name_of_signal_handler(app, **kwargs):
    ...

Some design decisions: When sending a signal, it may carry lots of information which a given subscriber may or may not want, e.g. when multiple subscribers listen to the same signal, some of the information sent may not be of use to your specific function. We therefore decided to enforce the pattern below to ensure flexibility throughout the project:

@speakers_modified.connect
def new_handler(app, **kwargs):
    # do whatever you want to do with kwargs['event_id']

In this case, the function new_handler needs to perform some task based solely on the event_id. If the function were of the form def new_handler(app, event_id), an error would be raised by the app. A big plus of this approach: if you later want to send more information with the signal, for example speaker_name, this pattern ensures that no error is raised by any of the subscribers defined before that change was made.

When to use signals, and when not? The call to send a signal will of course lie inside another function. The signal and the function should be independent of each other. If the task done by any of the signal's subscribers even remotely affects your current function, a signal shouldn't be
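The reason for insisting on the (app, **kwargs) signature can be shown without Blinker at all: calling a strict handler with a keyword argument it does not declare raises a TypeError, while a **kwargs handler simply ignores the extras. The handler names and the payload below are made up for illustration.

```python
def strict_handler(sender, event_id):
    # Declares exactly one keyword; breaks if the signal grows.
    return event_id

def flexible_handler(sender, **kwargs):
    # Picks out what it needs, ignores the rest.
    return kwargs["event_id"]

# The signal later starts sending speaker_name as well.
payload = {"event_id": 7, "speaker_name": "Ada"}

print(flexible_handler(None, **payload))  # 7

try:
    strict_handler(None, **payload)
except TypeError as exc:
    print("strict handler broke:", exc)
```

This is exactly the failure mode the project's convention guards against: adding a field to a signal's payload must never break existing subscribers.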

Continue Reading: Event-driven programming in Flask with Blinker signals

Adding Google Analytics To All Pages Using Flask

Google Analytics gives you detailed insight into your website: how many people visited, when, their demographics, how many were returning visitors, and so on. It's a really important tool to have. All you have to do is create a Universal Analytics tracking code and use it in a JavaScript snippet. The only problem is that this snippet needs to be present on every page you want analytics data for, so any change to the JavaScript has to be repeated in every .html file. However, there is a better way of doing it in Flask. Create a file base.html and write the code:

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

ga('create', '<track-code>', 'auto');
ga('send', 'pageview');
</script>

Then, using Jinja2 template inheritance, extend this file in all the HTML files, i.e. {% extends 'gentelella/admin/base.html' %}. Now when you change the JavaScript snippet above, you need to change it in only one place, and the change propagates to all the other pages.

Continue Reading: Adding Google Analytics To All Pages Using Flask

R14 – Memory Quota Exceeded

We, like many other organisations, use Heroku as the deployment server for our project, the Open Event Organizer Server. Things are pretty simple and awesome when your project is in its beginning phase and everything runs smoothly. But as your project grows, server problems appear, and one of the biggest is memory. Different plans on generic hosting providers such as Heroku assign you different amounts of memory, so a growing project can exceed its memory quota. Recently we faced exactly that: R14 - Memory Quota Exceeded. It took us quite some time to understand what had happened, why, and how. So let me share a few things I found out about this error.


Continue Reading: R14 – Memory Quota Exceeded

Python code examples

I've met many examples of weird behaviour in the Python language while working on the Open Event project. Today I'd like to share some of them with you. This knowledge is useful if you'd like to deepen your understanding of Python a bit.

Simply appending one element to a Python list:

def foo(value, x=[]):
    x.append(value)
    return x

>>> print(foo(1))
>>> print(foo(2))
>>> print(foo(3, []))
>>> print(foo(4))

OUTPUT
[1]
[1, 2]
[3]
[1, 2, 4]

The first output is obvious, but the second is not. It happens because the default value of x (an empty list) is evaluated only once, so on every call to foo() we modify that same list, appending a value to it. Finally we get the output [1, 2, 4]. I recommend avoiding mutable default parameters.

Another example: do you know which type each of these is?

>>> print(type([el for el in range(10)]))
>>> print(type({el for el in range(10)}))
>>> print(type((el for el in range(10))))

Again, the first and second types are obvious: <class 'list'> and <class 'set'>. You might think the last one should be a tuple, but it is a generator: <class 'generator'>.

Example: do you think the code below raises an exception?

values = [1, 2, 3, 4, 5]
>>> print(values[8:])

If you think the expression above raises an IndexError, you're wrong. It returns an empty list, [].

Example: funny boolean operators

>>> 'c' == ('c' or 'b')
True
>>> 'd' == ('a' or 'd')
False
>>> 'c' == ('c' and 'b')
False
>>> 'd' == ('a' and 'd')
True

You might think that the or and and operators are broken, but you just have to know how the Python interpreter evaluates them. An or expression evaluates the first operand and checks whether it is truthy; if so, Python returns that value without evaluating the second operand. If the first operand is falsy, the interpreter evaluates and returns the second operand. An and expression checks whether the first operand is falsy, in which case the whole expression must be falsy.
So it returns the first value; but if the first operand is truthy, it evaluates and returns the second operand. Below I will show how it works:

>>> 'c' == ('c' or 'b')   # ('c' or 'b') evaluates to 'c'
True
>>> 'd' == ('a' or 'd')   # ('a' or 'd') evaluates to 'a'
False
>>> 'c' == ('c' and 'b')  # ('c' and 'b') evaluates to 'b'
False
>>> 'd' == ('a' and 'd')  # ('a' and 'd') evaluates to 'd'
True

I hope I have explained how the Python interpreter evaluates the or and and operators, and that the examples above are now more understandable.
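The usual fix for the mutable-default pitfall shown above is to use None as a sentinel and create a fresh list inside the function. This is a standard Python idiom rather than anything specific to the Open Event code.

```python
def foo(value, x=None):
    # A new list is created on every call that does not pass x explicitly,
    # so no state leaks between calls.
    if x is None:
        x = []
    x.append(value)
    return x

print(foo(1))      # [1]
print(foo(2))      # [2]
print(foo(3, []))  # [3]
print(foo(4))      # [4]
```

Compare this with the [1, 2, 4] output of the original version: each call now starts from an empty list unless one is passed in.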

Continue Reading: Python code examples

GET and POST requests

If you wonder how to get or update a page resource, you should read this article. It's trivial if you have basic knowledge of the HTTP protocol, and I'd like to get you a little involved in this subject. GET and POST are the most commonly used methods in the HTTP protocol.

What is HTTP? The Hypertext Transfer Protocol allows us to communicate between the client and the server side. In the Open Event project we use a web browser as the client, and for now we use Heroku for the server side.

Difference between the GET and POST methods:
- GET: retrieves data from a specified resource
- POST: submits new data to a specified resource, for example from an HTML form

GET samples: For example, we use it to get details about an event:

curl http://open-event-dev.herokuapp.com/api/v2/events/95

Response from the server:

Of course you can use this for other needs. If you are a poker player, I suppose you'd like to know what percentage chance your hand has:

curl http://www.propokertools.com/simulations/show?g=he&s=generic&b&d&h1=AA&h2=KK&h3&h4&h5&h6&_

POST samples:

curl -X POST https://example.com/resource.cgi

You can often find this action on a contact page or a login page.

How does a request look in Python? We use the Requests library for communication between the client and server side. It's very readable for developers; you can find great documentation and a lot of code samples on their website. It's very important to see how it works:

>>> r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
>>> r.status_code
200

Samples are very important, but take a look at how the Requests library fulfils our requirements 100%. We decided to use it because we need to communicate between the Android app generator and the Organizer Server application. We needed to send a request with params (email, app_name, and the API URL of the event) by the POST method to the Android generator resource. It triggers the process of sending an email with a package of the Android application to the provided email address.
data = {
    "email": login.current_user.email,
    "app_name": self.app_name,
    "endpoint": request.url_root + "api/v2/events/" + str(self.event.id)
}
r = requests.post(self.app_link, json=data)
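The same POST can be expressed with the standard library alone, which also makes the mechanics visible: urllib.request switches the method to POST as soon as a body is attached. The URL and payload values below are placeholders, and no request is actually sent here.

```python
import json
import urllib.request

# Placeholder values standing in for the app generator payload.
payload = {
    "email": "user@example.com",
    "app_name": "MyEvent",
    "endpoint": "https://example.com/api/v2/events/95",
}

req = urllib.request.Request(
    "https://example.com/resource.cgi",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# A body implies POST; urllib.request.urlopen(req) would send it.
print(req.get_method())  # POST
```

Requests hides all of this behind requests.post(url, json=data), which is why the project chose it, but it helps to know what the one-liner is doing.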

Continue Reading: GET and POST requests