Implementing Permissions for Speakers API in Open Event API Server

In my previous blog post I talked about what the permissions listed in the developer handbook mean and which parts of the codebase define which parts of the permission clauses. The permission manager provides the framework to implement these permissions and proper access controls based on the dev handbook. In this blog post, the actual implementation of the permissions is described, with the Speakers API under consideration. The following table shows the permissions in the developer handbook:

                      List      View      Create    Update    Delete
    Superadmin/admin  ✓         ✓         ✓         ✓         ✓
    Event organizer   ✓ [1]     ✓ [1]     ✓ [1]     ✓ [1]     ✓ [1]
    Registered User   ✓ [3]     ✓ [3]     ✓ [4]     ✓ [3]     ✓ [3]
    Everyone else     ✓ [2][4]  ✓ [2][4]

    [1] Only self-owned events
    [2] Only of sessions with state approved or accepted
    [3] Only of self-submitted sessions
    [4] Only to events with state published

Super admin and admin should be able to access all the methods: list, view, create, update and delete. All the permissions are implemented through functions derived from the permission manager. Since every function first checks for super admin and admin, these two roles are automatically taken care of.

Only of self-submitted sessions

This means that a registered user can list, view, edit or delete speakers of a session which he himself submitted. This requires adding a 'creator' attribute to the session object, which helps us determine whether the session was created by the user. So before making a POST for sessions, the current user's identity is included as part of the payload:

    def before_post(self, args, kwargs, data):
        data['creator_id'] = current_identity.id

Now that we have added a creator id to the session, a method is used to check whether a session was created by the current user:

    def is_session_self_submitted(view, view_args, view_kwargs, *args, **kwargs):
        user = current_identity

First the current identity is set as user, which is later used to check the id. Sequentially, admin, super admin, organizer and co-organizers are checked.
After this, a session is fetched using kwargs['session_id']. If the current user's id is the same as the creator id of the fetched session, access is granted; otherwise a Forbidden error is returned:

    if session.creator_id == user.id:
        return view(*view_args, **view_kwargs)

In the before_post method of the speakers class, the session ids received in the data are passed to this function in kwargs as session_id. The permissions are then checked there using the current user. If the session ids are not those of self-submitted sessions, 'Session Not Found' is returned:

    if not has_access('is_session_self_submitted', session_id=session_id):
        raise ObjectNotFound({'parameter': 'session_id'},
                             "Session: {} not found".format(session_id))

Only of sessions with state approved or accepted

This check is required for a user who has not submitted the session himself, so that he can only see speaker profiles of accepted sessions. First, if the user is not authenticated, permissions are not checked. If co-organizer access is available, the user can see all the speakers, so no filtering is done in that case. If not, 'is_session_self_submitted' is checked. If it passes, again no filtering; if not, the following query filters approved or accepted sessions (note that a plain Python `or` between the two comparisons would be evaluated before SQLAlchemy sees it, so sqlalchemy's or_ is used):

    if not has_access('is_session_self_submitted', session_id=session.id):
        query_ = query_.filter(or_(Session.state == "approved",
                                   Session.state == "accepted"))

Similarly, all the permissions first generate a list of all objects and then filter it based on the access level, instead of getting the list based…
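The creator check described above can be sketched framework-free. The User, Session and Forbidden classes below are simplified stand-ins for the real Open Event models and errors, not the actual server code:

```python
# Minimal sketch of the permission pattern: admin short-circuit first,
# then the self-submitted (creator) check. Stand-in classes, not real models.
from dataclasses import dataclass

@dataclass
class User:
    id: int
    is_admin: bool = False

@dataclass
class Session:
    id: int
    creator_id: int

class Forbidden(Exception):
    pass

def is_session_self_submitted(user, session):
    """Grant access to admins, or to the user who submitted the session."""
    if user.is_admin:                   # admin / super admin short-circuit
        return True
    if session.creator_id == user.id:   # self-submitted check
        return True
    raise Forbidden("Access Forbidden")

creator = User(id=7)
other = User(id=8)
session = Session(id=1, creator_id=7)

print(is_session_self_submitted(creator, session))  # True
try:
    is_session_self_submitted(other, session)
except Forbidden as e:
    print(e)  # Access Forbidden
```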


Understanding Permissions for Various APIs in Open Event API Server

Since the Open Event Server has various elements, a proper permissions system is essential. This large list of permissions is well compiled in the developer handbook, which can be found here. In this blog post, the permissions listed in the developer handbook are discussed. Let's start with what we wish to achieve: making sense of these permissions and seeing where each clause fits in the API Server's codebase. For example, the Sponsors API has the following permissions:

                      List      View      Create    Update    Delete
    Superadmin/admin  ✓         ✓         ✓         ✓         ✓
    Event organizer   ✓ [1]     ✓ [1]     ✓ [1]     ✓ [1]     ✓ [1]
    Registered User   ✓ [3]     ✓ [3]     ✓ [4]     ✓ [3]     ✓ [3]
    Everyone else     ✓ [2][4]  ✓ [2][4]

    [1] Only self-owned events
    [2] Only sessions with state approved or accepted
    [3] Only self-submitted sessions
    [4] Only to events with state published

Based on the flask-rest-jsonapi resource manager, List and Create fall under ResourceList through its GET and POST methods, whereas View, Update and Delete work on single objects and hence are provided by ResourceDetail's GET, PATCH and DELETE respectively. Each function of the permission manager has a jwt_required decorator:

    @jwt_required
    def is_super_admin(view, view_args, view_kwargs, *args, **kwargs):

    @jwt_required
    def is_session_self_submitted(view, view_args, view_kwargs, *args, **kwargs):

This ensures that whenever a check for access control is made to the permission manager, the user is signed in to Open Event. Additionally, the permissions are written in a hierarchical way, such that for every permission the user is first checked for admin or super admin, then for other accesses. A similar hierarchy is kept for organizer accesses like track organizer, registrar, staff or organizer and co-organizer. Some API resources require no authentication for List. To support this we need to add a check for the authentication token in the headers.
Since each of the functions of the permission manager has jwt_required as a decorator, it is important to check for the presence of a JWT token in the request headers, because only in that case can we proceed to check for specific permissions:

    if 'Authorization' in request.headers:
        _jwt_required(current_app.config['JWT_DEFAULT_REALM'])

Since the resources are created by endpoints of the form '/v1/<resource>/', this is derived from the separate ResourceListPost class. This class is POST-only and has a before_create object method where the required relationships and permissions are checked before inserting the data into the tables. In the before_create method, let's say that event is a required relationship, which will be defined by the ResourceRelationRequired; then we use our custom method

    def require_relationship(resource_list, data):
        for resource in resource_list:
            if resource not in data:
                raise UnprocessableEntity({'pointer': '/data/relationships/{}'.format(resource)},
                                          "A valid relationship with {} resource is required".format(resource))

to check whether the required relationships are present in the data. The event_id here can also be used in the permission manager to check for organizer or co-organizer access for a particular event. Here is another permissions structure, for a different API - Settings:

                      List   View    Create   Update   Delete
    Superadmin/admin         ✓                ✓
    Everyone else            ✓ [1]

    [1] Only the app_name, tagline, analytics_key, stripe_publishable_key, google_url, github_url, twitter_url, support_url, facebook_url, youtube_url, android_app_url, web_app_url fields

This API does not allow access to the complete object, but only to the fields listed above. The complete details can be checked here.

Resources

- The Open Event Developer Handbook - Niranjan R
- Resource Manager | Flask-REST-JSONAPI
- Permissions | Flask-REST-JSONAPI
- REST APIs with Python and Flask - Miguel Grinberg [blog]
- User Roles in Flask - Safari, O'Reilly Media [blog]
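The require_relationship helper quoted above can be run stand-alone. The UnprocessableEntity class below is a simplified stand-in for the flask-rest-jsonapi exception of the same name:

```python
# Runnable sketch of the require_relationship helper, with a stand-in
# exception class instead of the flask-rest-jsonapi one.
class UnprocessableEntity(Exception):
    def __init__(self, source, detail):
        self.source, self.detail = source, detail
        super().__init__(detail)

def require_relationship(resource_list, data):
    for resource in resource_list:
        if resource not in data:
            raise UnprocessableEntity(
                {'pointer': '/data/relationships/{}'.format(resource)},
                "A valid relationship with {} resource is required".format(resource))

data = {'event': 1}
require_relationship(['event'], data)            # passes silently
try:
    require_relationship(['event', 'session'], data)
except UnprocessableEntity as e:
    print(e.detail)  # A valid relationship with session resource is required
```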


Using Custom Forms In Open Event API Server

One feature of the Open Event management system is the ability to add a custom form for an event. The next-gen API Server exposes endpoints to view, edit and delete forms and form fields. This blog post describes how to use a custom form in the Open Event API Server. Custom forms allow the event organizer to make personalized forms for his/her event. The form object includes an identifier set by the user, and the form itself as a string. The user can also set the type of the form, which can be either text or checkbox depending on the user's needs. There are other fields as well, which are abstracted. These fields include:

- id: auto-generated unique identifier for the form
- event_id: id of the event with which the form is associated
- is_required: if the form is required
- is_included: if the form is to be included
- is_fixed: if the form is fixed

The last three of these are boolean fields and give the user better control over form use-cases in event management. Only the event organizer has permission to edit or delete these forms, while any user who is logged in to eventyay.com can see the fields available for a custom form for an event. To create a custom form for the event with id=1, the following request is to be made:

    POST https://api.eventyay.com/v1/events/1/custom-forms?sort=type&filter=[]

with all the fields described above included in the request body. For example:

    {
      "data": {
        "type": "custom_form",
        "attributes": {
          "form": "form",
          "type": "text",
          "field-identifier": "abc123",
          "is-required": "true",
          "is-included": "false",
          "is-fixed": "false"
        }
      }
    }

The API returns the custom form object along with the event relationships and other self and related links. To see exactly what the response looks like, please check the sample here. Now that we have created a form, any user can get its fields.
But let's say that the event organizer wants to update some field or some other attribute of the form; he can make the following request along with the custom-form id:

    PATCH https://api.eventyay.com/v1/custom-forms/1

(Note: the custom-form id must be included in both the URL and the request body.) Similarly, to delete the form,

    DELETE https://api.eventyay.com/v1/custom-forms/1

can be used.

Resources

- Adding Custom Form to Open Event API Server
- Commit: Implement Custom Form API
- Create a Custom Form Field Type from Template - blog, general reading
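As a rough illustration, the request body from the example above can be assembled with the standard library. The custom_form_payload helper is hypothetical, not part of the server:

```python
# Sketch: building the JSON:API body for the custom-form POST shown above.
# Actually sending it needs a live server and credentials, so only the
# payload construction is shown here.
import json

def custom_form_payload(identifier, form, field_type="text",
                        required="false", included="false", fixed="false"):
    """Build the request body in the shape used by the example above."""
    return {
        "data": {
            "type": "custom_form",
            "attributes": {
                "form": form,
                "type": field_type,
                "field-identifier": identifier,
                "is-required": required,
                "is-included": included,
                "is-fixed": fixed,
            }
        }
    }

body = json.dumps(custom_form_payload("abc123", "form", required="true"))
print(body)
```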


Writing Dredd Test for Event Topic-Event Endpoint in Open Event API Server

The API Server exposes a large set of endpoints which are well documented using apiary's API Blueprint. To ensure that this documentation describes exactly what the API does, that is, the response made to a request, testing it is crucial. This testing is done through Dredd documentation testing with the help of FactoryBoy for faking objects. In this blog post I describe how to use FactoryBoy to write Dredd tests for the Event Topic-Event endpoint of the Open Event API Server. The endpoint for which tests are described here is this: For testing this endpoint, we need to simulate the API GET request by making a call to our database and then compare the response received to the expected response written in the api_blueprint.apib file. For GET to return some data we need to insert an event with some event topic into the database. The documentation for this endpoint is the following: To add the event topic and event objects for GET event-topics/1/events, we use a hook. This hook is written in the hook_main.py file and is run before the request is made. We add this decorator on the function which will add objects to the database. The decorator traverses the APIB docs, matching the levels of '#' in the documentation to '>' in the decorator. So for  we have, Now let's write the method itself. In the method, we first add the event topic object using the EventTopicFactory defined in the factories/event-topic.py file, the code for which can be found here. Since the endpoint also requires an event to be created in order to fetch events related to an event topic, we add an event object too, based on the EventFactoryBasic class in the factories/event.py file. [Code] To fetch the events related to a topic, the event must reference that particular event topic. This is achieved by passing event_topic_id=1 when creating the event object, so that for the event created by the constructor, the event topic is set with id = 1.
    event = EventFactoryBasic(event_topic_id=1)

In the EventFactoryBasic class, event_topic_id is set to 'None' so that we don't have to create an event topic when creating events for testing other endpoints. This also lets us not add event-topic as a related factory. To add event_topic_id=1 as the event's attribute, an event topic with id = 1 must already be present, hence the event_topic object is added first. After adding the event object as well, we commit both of these into the database. Now that we have an event topic object with id = 1, an event object with id = 1, and the event related to that event topic, we can make a call to GET event-topics/1/events and get the correct response.

Related:

- Python Hooks for Dredd API Testing Framework
- Use Automatic API Documentation Testing - Blog, Anthony Art
- Commit: Add Tests for event topic-event endpoint
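The ordering constraint described above (the topic row must exist before an event can reference it) can be illustrated with a bare sqlite3 database standing in for the real models and factories:

```python
# Stand-in demonstration: inserting an event that references event_topic id 1
# fails until the topic row exists, which is why the hook adds the topic first.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")
db.execute("CREATE TABLE event_topic (id INTEGER PRIMARY KEY)")
db.execute("CREATE TABLE event (id INTEGER PRIMARY KEY,"
           " event_topic_id INTEGER REFERENCES event_topic(id))")

try:
    db.execute("INSERT INTO event (id, event_topic_id) VALUES (1, 1)")
except sqlite3.IntegrityError as e:
    print("insert failed:", e)      # foreign key constraint fails

db.execute("INSERT INTO event_topic (id) VALUES (1)")            # topic first
db.execute("INSERT INTO event (id, event_topic_id) VALUES (1, 1)")
db.commit()
print(db.execute("SELECT event_topic_id FROM event").fetchone()[0])  # 1
```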


How User Event Roles relationship is handled in Open Event Server

Users and events are the most important parts of FOSSASIA's Open Event Server. Through the advent and evolution of the project, the way user event roles are implemented has gone through many changes. When the Open Event Organizer Server was first decoupled to serve as an API server, user event roles, like all other models, were served as a separate API to provide a data layer above the database for making changes to the entries. Whenever a new role invite was accepted, a POST request was made to the User Event Roles table to insert the new entry. Whenever there was a change in the role of a user for a particular event, a PATCH request was made. Permissions were set up such that a user could insert only his/her own user id, and not someone else's entry.

    def before_create_object(self, data, view_kwargs):
        """
        method to create object before post
        :param data:
        :param view_kwargs:
        :return:
        """
        if view_kwargs.get('event_id'):
            event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
            data['event_id'] = event.id
        elif view_kwargs.get('event_identifier'):
            event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
            data['event_id'] = event.id
        email = safe_query(self, User, 'id', data['user'], 'user_id').email
        invite = self.session.query(RoleInvite).filter_by(email=email).filter_by(role_id=data['role'])\
            .filter_by(event_id=data['event_id']).one_or_none()
        if not invite:
            raise ObjectNotFound({'parameter': 'invite'}, "Object: not found")

    def after_create_object(self, obj, data, view_kwargs):
        """
        method to create object after post
        :param data:
        :param view_kwargs:
        :return:
        """
        email = safe_query(self, User, 'id', data['user'], 'user_id').email
        invite = self.session.query(RoleInvite).filter_by(email=email).filter_by(role_id=data['role'])\
            .filter_by(event_id=data['event_id']).one_or_none()
        if invite:
            invite.status = "accepted"
            save_to_db(invite)
        else:
            raise ObjectNotFound({'parameter': 'invite'}, "Object: not found")

Initially, when a POST request was sent to the User Event Roles API endpoint, we would first check whether a role invite from the organizer existed for that particular combination of user, event and role. Only if it existed would we make an entry in the database; otherwise we would raise an "Object: not found" error. After the entry was made in the database, we would update the role_invites table to change the status of the role invite. Later it was decided that a separate API endpoint was not needed. Since API endpoints are all user-accessible and may cause problems with permissions, it was decided that user event roles would be handled entirely through the model instead of a separate API. Also, the workflow wasn't very clear for a user. So we settled on a workflow where the role_invites table is first updated with the particular status, and after the update has been made, we make an entry in the user_event_roles table with the data we get from the role_invites table. When a role invite is accepted, sqlalchemy's add() and commit() are used to insert a new entry into the table. When a role is changed for a particular user, we make a query, update the values and save them back into the table. So the entire process is handled at the data-layer level rather than the API level. The code implementation is as follows:

    def before_update_object(self, role_invite, data, view_kwargs):
        """
        Method to edit object…
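The final workflow can be sketched without the framework. RoleInvite and UserEventRole below are simplified stand-ins for the real models, and the list stands in for the database table:

```python
# Framework-free sketch of the revised workflow: update the invite status
# first, then derive the user_event_roles entry from the invite's own data.
from dataclasses import dataclass

@dataclass
class RoleInvite:
    email: str
    role_id: int
    event_id: int
    status: str = "pending"

@dataclass
class UserEventRole:
    user_id: int
    role_id: int
    event_id: int

user_event_roles = []      # stand-in for the user_event_roles table

def accept_invite(invite, user_id):
    invite.status = "accepted"                     # update the invite first
    user_event_roles.append(                       # then insert the role entry
        UserEventRole(user_id, invite.role_id, invite.event_id))

invite = RoleInvite("organizer@example.com", role_id=2, event_id=1)
accept_invite(invite, user_id=5)
print(invite.status)                  # accepted
print(user_event_roles[0].role_id)    # 2
```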


Designing A Remote Laboratory With PSLab: execution of function strings

In the previous blog post, we introduced the concept of a 'remote laboratory' which would enable users to access the various features of the PSLab via the internet. Many aspects of the project were worked upon, including the creation of a web app using EmberJS that enables users to create accounts, sign in, and prepare Python programs to be sent to the server for execution. A backend API server based on Python-Flask was also developed to handle these tasks and maintain a PostgreSQL database using SQLAlchemy. The following screencast shows the basic look and feel of the proposed remote lab running in a web browser. This blog post deals with implementing a way for the remote user to submit a simple function string, such as get_voltage('CH1'), and retrieve the results from the server. There are three parts to this:

- Creating a dictionary of the functions available in the sciencelab instance. The user will only be allowed access to these functions remotely, and we may protect some functions, such as the initialization and destruction routines, by blocking them from the remote user.
- Creating an API method to receive a form containing the function string, execute the corresponding function from the dictionary, and reply with JSON data.
- Testing the API using the Postman Chrome extension.

Creating a dictionary of functions

The function dictionary maps function names against references to the actual functions from an instance of PSL.sciencelab.
A simple dictionary containing just the get_voltage function can be generated in the following way:

    from PSL import sciencelab
    I = sciencelab.connect()
    functionList = {'get_voltage': I.get_voltage}

This dictionary is then used with the eval method in order to evaluate a function string (the outer quotes must differ from those around CH1):

    result = eval("get_voltage('CH1')", functionList)
    print(result)
    0.0012

A more efficient way to create this list is to use the inspect module and automatically extract all the available methods into a dictionary:

    functionList = {}
    for a in dir(I):
        attr = getattr(I, a)
        if inspect.ismethod(attr) and a != '__init__':
            functionList[a] = attr

In the above, we have made a dictionary of all the methods except __init__. This approach can also easily be extrapolated to automatically generate a dictionary of inline documentation strings which can then be passed on to the web app.

Creating an API method to execute submitted function strings

We create an API method that accepts a form containing the function string and an option that specifies whether the returned value is to be formatted as a string or as JSON data. A special case arises for numpy arrays, which cannot be directly converted to JSON; the tolist method must first be used on them.

    @app.route('/evalFunctionString', methods=['POST'])
    def evalFunctionString():
        if session.get('user'):
            _stringify = False
            try:
                _user = session.get('user')[1]
                _fn = request.form['function']
                _stringify = request.form.get('stringify', False)
                res = eval(_fn, functionList)
            except Exception as e:
                res = str(e)
            # dump string if requested. Otherwise json array
            if _stringify:
                return json.dumps({'status': True, 'result': str(res), 'stringified': True})
            else:
                # Try to simply convert the results to json
                try:
                    return json.dumps({'status': True, 'result': res, 'stringified': False})
                # If that didn't work, it's due to the result containing numpy arrays.
                except Exception as e:
                    # try to convert the…
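Putting the two ideas together, here is a runnable sketch of building the whitelist with inspect while blocking protected routines. DummyLab is a stand-in for the PSL.sciencelab instance, which is not available here:

```python
# Build a whitelist of callable methods, excluding protected routines, then
# dispatch a submitted function string against that whitelist with eval.
import inspect

class DummyLab:
    """Stand-in for PSL.sciencelab's instrument object."""
    def get_voltage(self, channel):
        return 0.0012          # canned reading for the sketch
    def disconnect(self):      # a routine we would NOT expose remotely
        pass

BLOCKED = {'__init__', 'disconnect'}
I = DummyLab()

functionList = {}
for a in dir(I):
    attr = getattr(I, a)
    if inspect.ismethod(attr) and a not in BLOCKED:
        functionList[a] = attr

print(sorted(functionList))                      # ['get_voltage']
print(eval("get_voltage('CH1')", functionList))  # 0.0012
```

Because eval only sees the names in functionList, a remote caller cannot reach disconnect or anything else on the instance.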


Generate Requirement File for Python App for Meilix-Generator

Meilix-Generator is based upon Flask (a Python framework), which has several dependencies to fulfill before the app can run properly. This article will guide you through the way I automatically generated the requirements file for the Meilix Generator app, so that one doesn't have to type all the requirements manually. An app powered by Python always has several dependencies to fulfill in order to run successfully. The app's root directory contains a file named requirements.txt which lists each dependency and its version. There are several ways to generate the requirements file for an app, but the one I demonstrate below worked best for me, so I used it to generate the requirements file for the web app Meilix Generator.

Ways to get the requirements.txt

A commonly suggested way is to run a single command which dumps all the different dependencies within an app:

    pip freeze > requirements.txt

This generates a bunch of dependencies that we do not even require. But why generate the requirements file with a command at all? One may ask why we cannot simply write the dependencies into requirements.txt by hand. Because a generated file takes care of two important things:

1. It ensures that all the dependencies have been included; writing them from memory, one may forget to find and include some dependency.
2. It takes care of Python package version pinning, which is really important. People often pin Python requirements in the ">=" style. It's important to follow the "==" style instead, because if we want to install the program a year in the future, the required packages should be pinned to ensure that API changes in the installed packages do not break the program. Please read here for more info.

The way mentioned below provides both these features.

How I generated it for Meilix Generator
Meilix Generator runs on Flask, which requires a requirements.txt file to fulfill its dependencies. Let's get straight to the way to generate it for the project. First we simply create a requirements.in file in which we list all the dependencies in a simple way:

    Flask
    gunicorn
    Werkzeug

Now we use a command to install the latest packages:

    pip install --upgrade -r requirements.in
    # Note that if you would like to change the requirements, please edit the
    # requirements.in file and run this command to update the dependencies

Then we use pip-compile to generate the requirements.txt file from requirements.in:

    pip-compile --output-file requirements.txt requirements.in
    # fix the versions that definitely work for an eternity

This will generate a file something like:

    click==6.7          # via flask
    Flask==0.12.2
    gunicorn==19.7.1
    itsdangerous==0.24  # via flask
    Jinja2==2.9.6       # via flask
    MarkupSafe==1.0     # via jinja2
    Werkzeug==0.12.2    # via flask

Now you have generated a perfect requirements.txt file with all the dependencies satisfied and proper Python package pinning. The meilix-generator repo which uses this:…
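In the same spirit, a small standard-library checker can flag any requirement that is not pinned in the "==" style; the regular expression here is a rough sketch, not a full PEP 508 parser:

```python
# Flag requirements that are not pinned with "==", the style argued for above.
import re

requirements = """\
click==6.7
Flask==0.12.2
Werkzeug>=0.12
"""

unpinned = []
for line in requirements.splitlines():
    # capture the package name and the first version operator, if any
    name, op = re.match(r'([A-Za-z0-9_.-]+)\s*(==|>=|<=|~=|>|<)?', line).groups()
    if op != '==':
        unpinned.append(name)

print(unpinned)  # ['Werkzeug']
```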


Checking Whether Migrations Are Up To Date With The Sqlalchemy Models In The Open Event Server

In the Open Event Server pull requests, when there is a change in the sqlalchemy models, the corresponding migrations are sometimes missed in the PR. The first approach to checking whether the migrations were up to date with the database was the following health check function:

    from subprocess import check_output

    def health_check_migrations():
        """
        Checks whether database is up to date with migrations, assumes there is a single migration head
        :return:
        """
        head = check_output(["python", "manage.py", "db", "heads"]).split(" ")[0]
        if head == version_num:
            return True, 'database up to date with migrations'
        return False, 'database out of date with migrations'

In the above function, we get the head according to the migration files as follows:

    head = check_output(["python", "manage.py", "db", "heads"]).split(" ")[0]

The table alembic_version contains the latest alembic revision to which the database was actually upgraded. We can get this revision from the following line:

    version_num = (db.session.execute('SELECT version_num from alembic_version').fetchone())['version_num']

Then we compare the two heads and return a proper tuple based on the comparison output. While this method was pretty fast, it had a drawback: if the user forgets to generate the migration files for the changes done in the sqlalchemy model, this approach fails to raise a failure status in the health check. To overcome this drawback, all the sqlalchemy models are fetched automatically and simple sqlalchemy SELECT queries are made to check whether the migrations are up to date. Note that a raw SQL query would not serve our purpose in this case, since you would have to specify the columns explicitly in the query. A sqlalchemy query, on the other hand, generates the SQL query based on the fields defined in the db model, so if migrations are missing for a model change, a proper error will be raised.
We can accomplish this with the following function:

    def health_check_migrations():
        """
        Checks whether database is up to date with migrations by performing a select query on each model
        :return:
        """
        # Get all the models in the db, all models should have an explicit __tablename__
        classes, models, table_names = [], [], []
        # noinspection PyProtectedMember
        for class_ in db.Model._decl_class_registry.values():
            try:
                table_names.append(class_.__tablename__)
                classes.append(class_)
            except:
                pass
        for table in db.metadata.tables.items():
            if table[0] in table_names:
                models.append(classes[table_names.index(table[0])])
        for model in models:
            try:
                db.session.query(model).first()
            except:
                return False, '{} model out of date with migrations'.format(model)
        return True, 'database up to date with migrations'

In the above code, we automatically get all the models and tables present in the database. Then for each model we try a simple SELECT query which returns the first row found. If there is any error in doing so, False, '{} model out of date with migrations'.format(model) is returned, so as to ensure a failure status in the health check.

Related:

- Alembic: http://alembic.zzzcomputing.com/en/latest/
- Alembic migrations quick start: https://michaelheap.com/alembic-python-migrations-quick-start/
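The idea behind the per-model probe can be illustrated stand-alone with sqlite3 in place of the Flask-SQLAlchemy session; the table names here are made up for the example:

```python
# A select on each model's table fails loudly when the table is missing,
# which is exactly how the health check above detects a missing migration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")  # "migrated" table
tables = ["users", "speakers"]    # pretend the speakers model has no migration

statuses = {}
for table in tables:
    try:
        db.execute("SELECT * FROM {} LIMIT 1".format(table))
        statuses[table] = "up to date"
    except sqlite3.OperationalError:
        statuses[table] = "out of date with migrations"

print(statuses)  # {'users': 'up to date', 'speakers': 'out of date with migrations'}
```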


Implementing Health Check Endpoint in Open Event Server

A health check endpoint was required in the Open Event Server to be used by Kubernetes to know when the web instance is ready to receive requests. The following checks were our primary focus:

- Connection to the database
- Ensuring sqlalchemy models are in line with the migrations
- Connection to celery workers
- Connection to the redis instance

Runscope/healthcheck seemed like the way to go. Healthcheck wraps a Flask app object and adds a way to write simple health-check functions that can be used to monitor your application. It's useful for asserting that your dependencies are up and running and that your application can respond to HTTP requests. The healthcheck functions are exposed via a user-defined flask route, so you can use an external monitoring application (monit, nagios, Runscope, etc.) to check the status and uptime of your application. The health check endpoint was implemented at /health-check as follows:

    from healthcheck import HealthCheck
    health = HealthCheck(current_app, "/health-check")

The following is the function for checking the connection to the database:

    def health_check_db():
        """
        Check health status of db
        :return:
        """
        try:
            db.session.execute('SELECT 1')
            return True, 'database ok'
        except:
            sentry.captureException()
            return False, 'Error connecting to database'

Check functions take no arguments and should return a tuple of (bool, str). The boolean indicates whether or not the check passed; the message is any string or output that should be rendered for this check, useful for error messages and debugging. The above function executes a query on the database to check whether it is connected properly. If the query runs successfully, it returns the tuple True, 'database ok'. sentry.captureException() makes sure that the sentry instance receives a proper exception event with all the information about the exception. If there is an error connecting to the database, the exception will be thrown.
The tuple returned in this case will be False, 'Error connecting to database'. Finally, to add this to the endpoint:

    health.add_check(health_check_db)

The following is the response for a successful health check:

    {
        "status": "success",
        "timestamp": 1500915121.52474,
        "hostname": "shubham",
        "results": [
            {
                "output": "database ok",
                "checker": "health_check_db",
                "expires": 1500915148.524729,
                "passed": true,
                "timestamp": 1500915121.524729
            }
        ]
    }

If the database is not connected, the following error will be shown:

    {
        "output": "Error connecting to database",
        "checker": "health_check_db",
        "expires": 1500965798.307425,
        "passed": false,
        "timestamp": 1500965789.307425
    }

Related:

- Health Endpoint in API Design: http://byterot.blogspot.in/2014/11/health-endpoint-in-api-design-slippery-rest-api-design-canary-endpoint-hysterix-asp-net-web-api.html
- Kubernetes Health Checking: https://kubernetes.io/docs/user-guide/walkthrough/k8s201/#application-health-checking
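The (bool, str) check contract can be exercised without the healthcheck package. run_checks below is a hypothetical, minimal stand-in for what the library does with the registered checks:

```python
# Minimal sketch of the check contract: each check returns (passed, output),
# and the runner aggregates them into a report like the responses shown above.
import time

def health_check_db():
    return True, 'database ok'   # stand-in for the real SELECT 1 probe

def run_checks(checks):
    results = []
    for check in checks:
        passed, output = check()
        results.append({'checker': check.__name__, 'passed': passed,
                        'output': output, 'timestamp': time.time()})
    status = 'success' if all(r['passed'] for r in results) else 'failure'
    return {'status': status, 'results': results}

report = run_checks([health_check_db])
print(report['status'])                 # success
print(report['results'][0]['output'])   # database ok
```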


Designing a Remote Laboratory with PSLab using Python Flask Framework

In the introductory post about remote laboratories, a general set of tools to create a framework and handle its various aspects was introduced. In this blog post, we will explore the implementation of several aspects of the backend app designed with Python-Flask, and the frontend based on EmberJS. A clear separation of the frontend and backend ensures that changes to either section cause minimal disruption to the other.

Implementing API methods in Python-Flask

In the Flask web server, page requests are handled via 'routes', which are essentially URLs linked to a Python function. Routes are also capable of handling payloads such as POST data, and various return types are supported as well. We shall use an example to demonstrate how a sign-up request, sent from the sign-up form in the remote lab frontend for PSLab, is handled.

    @app.route('/signUp', methods=['POST'])
    def signUp():
        """Sign Up for Virtual Lab

        POST: Submit sign-up parameters. The following must be present:
            inputName : The name of your account. does not need to be unique
            inputEmail : e-mail ID used for login . must be unique.
            inputPassword: password .
        Returns HTTP 404 when data does not exist.
        """
        # read the posted values from the UI
        _name = request.form['inputName']
        _email = request.form['inputEmail']
        _password = request.form['inputPassword']
        # validate the received values
        if _name and _email and _password:
            _hashed_password = generate_password_hash(_password)
            newUser = User(_email, _name, _hashed_password)
            try:
                db.session.add(newUser)
                db.session.commit()
                return json.dumps({'status': True, 'message': 'User %s created successfully. e-mail:%s !' % (_name, _email)})
            except Exception as exc:
                reason = str(exc)
                return json.dumps({'status': False, 'message': str(reason)})

In this example, the first line indicates that all URL requests made to <domain:port>/signUp will be handled by the function signUp.
During development, we host the server on localhost and use the default port number 8000, so sign-up forms must be submitted to 127.0.0.1:8000/signUp. For deployment on a globally accessible server, a machine with a static IP and a DNS record must be used. An example of such a deployment is the Heroku subdomain where pslab-remote is automatically deployed: https://pslab-stage.herokuapp.com/signUp. A closer look at the above example will tell you that POST data can be accessed via the request.form dictionary, and that the sign-up routine requires inputName, inputEmail, and inputPassword. A password hash is generated before the parameters are written to the database.

Testing API methods using the Postman Chrome extension

The route described in the above example requires form data to be submitted along with the URL, and we will use a rather handy developer tool called Postman to help us do this. In frontend apps, AJAX methods are usually employed to do such tasks as well as to handle the response from the server. The above screenshot shows Postman being used to submit form data to /signUp on our API server running at localhost:8000. The fields inputName, inputDescription, and inputPassword are also posted along with it. In the bottom section, one can see that the server returned a positive status variable, as well as a descriptive message.

Submitting the sign-up form via an Ember controller

Setting up a…
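The hash-before-store step can be sketched with the standard library. The real server uses werkzeug's generate_password_hash, so the PBKDF2 helpers below are only stand-ins for illustration:

```python
# Sketch: never store the plain password; store a salted hash and verify
# submissions against it. PBKDF2-HMAC-SHA256 stands in for werkzeug's helper.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False
```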
