Data Access Layer in Open Event Organizer Android App

Open Event Organizer is an Android App for Organizers and Entry Managers. Its core feature is scanning a QR code to validate an attendee's check-in. Other features include displaying an overview of sales and managing tickets. The App maintains a local database and syncs it with the Open Event API Server. The data access layer in the App is designed so that data is fetched either from the server or from the local database, according to the user's need. For example, simply showing the event sales overview to the user fetches the data from the locally saved database. But when the user wants to see the latest data, the App needs to fetch it from the server, show it to the user, and also update the locally saved data for future reference. In this blog post I will talk about the data access layer in the Open Event Organizer App.

The App uses RxJava to perform all the background tasks. All the data access methods in the App return Observables, which are then subscribed to in the presenter to get the data items. So, depending on the data request, the App has to create an Observable which will either load the data from the locally saved database or fetch it from the API server. For this, the App has the AbstractObservableBuilder class, which decides which Observable to return for a data request.

Relevant Code:

final class AbstractObservableBuilder<T> {
   ...
   ...
   @NonNull
   private Callable<Observable<T>> getReloadCallable() {
       return () -> {
           if (reload)
               return Observable.empty();
           else
               return diskObservable
                   .doOnNext(item -> Timber.d("Loaded %s From Disk on Thread %s",
                       item.getClass(), Thread.currentThread().getName()));
       };
   }

   @NonNull
   private Observable<T> getConnectionObservable() {
       if (utilModel.isConnected())
           return networkObservable
               .doOnNext(item -> Timber.d("Loaded %s From Network on Thread %s",
                   item.getClass(), Thread.currentThread().getName()));
       else
           return Observable.error(new Throwable(Constants.NO_NETWORK));
   }

   @NonNull
   private <V> ObservableTransformer<V, V> applySchedulers() {
       return observable -> observable
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread());
   }

   @NonNull
   public Observable<T> build() {
       if (diskObservable == null || networkObservable == null)
           throw new IllegalStateException("Network or Disk observable not provided");

       return Observable
               .defer(getReloadCallable())
               .switchIfEmpty(getConnectionObservable())
               .compose(applySchedulers());
   }
}

 

The class is used to build the resulting Observable, which wraps both kinds of Observables: one making the data request to the API server and one querying the locally saved database. Take a look at the build method. The method getReloadCallable provides the observable that is subscribed to by default, and it checks the reload parameter. If reload is false, meaning data can be fetched from the locally saved database, getReloadCallable returns the disk observable and the data is fetched locally. If reload is true, meaning the data request must be made to the API server, the method returns an empty observable.

The method getConnectionObservable returns a network observable which makes the data request to the API server. In the build method, the switchIfEmpty operator is applied to the default observable (which is empty when reload is true) and the network observable is passed to it. So when reload is true the network observable is subscribed to, and when it is false the disk observable is subscribed to. An example of using this class to make an events data request:

public Observable<Event> getEvents(boolean reload) {
   Observable<Event> diskObservable = Observable.defer(() ->
       databaseRepository.getAllItems(Event.class)
   );

   Observable<Event> networkObservable = Observable.defer(() ->
       eventService.getEvents(JWTUtils.getIdentity(getAuthorization()))
           ...
   );

   return new AbstractObservableBuilder<Event>(utilModel)
       .reload(reload)
       .withDiskObservable(diskObservable)
       .withNetworkObservable(networkObservable)
       .build();
}

 

So, according to the boolean parameter reload, the correct observable is subscribed to in order to complete the data request.
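For instance, a presenter can simply pass the reload flag down to this repository method and subscribe to the result. Below is a minimal, hypothetical sketch; the names eventRepository, getView(), showResult() and showError() are placeholder assumptions, not the App's actual contract.

// Hypothetical presenter method: 'reload' decides whether the data comes from the network or the disk.
public void loadEvents(boolean reload) {
    eventRepository.getEvents(reload)   // returns the Observable built by AbstractObservableBuilder
        .toList()                       // collect the emitted events into a single list
        .subscribe(
            events -> getView().showResult(events),                     // data loaded from disk or network
            throwable -> getView().showError(throwable.getMessage()));  // e.g. Constants.NO_NETWORK
}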

Links:
1. Documentation about the Operators in ReactiveX
2. Information about the Data Access Layer on Wikipedia


FOSSASIA at Google Code-In 2016 Grand Prize Trip

This year FOSSASIA came up with a whopping number of GCI participants, making it to the top. FOSSASIA is a mentor organization in the Google Code-In contest, which introduces pre-university students to open source development.

Every year Google hosts a grand prize trip for the GCI winners, and I represented FOSSASIA as a mentor.

FOSSASIA GCI winners and Mentor at Google Mountain View Campus.

Day 1: Meet and Greet with the Diverse Communities

We all headed towards the San Francisco Google office and had a great time interacting with members of diverse open source organizations from different parts of the world. I had some interesting conversations with the kids about how they scheduled their sleep hours in order to complete tasks and get feedback from mentors in different time zones! I was also amazed listening to their interests apart from open source contributions.

“I am a science enthusiast, mainly interested in Computer Science and its wide range of applications. I also enjoy playing the piano, reading, moving, and having engaging conversations with my friends. As a participant in the GCI contest, I got the chance to learn by doing, I got an insight of how it is like to work on a real open-source project, met some great people, helped others (and received help myself). Shortly, it was amazing, and I’m proud to have been a part of it.” shared one of our winners, Oana Rosca.

There were people from almost 14 different countries; in fact, FOSSASIA, as a team, was the most diverse group 🙂

Day 2: Award Ceremony

We had two winners from FOSSASIA, Arkhan Kaiser from Indonesia and Oana Rosca from Romania. There were 8 organizations with 16 winners in total. The award ceremony was held on day 2, and each winner was felicitated by Chris DiBona, the director of the Google open source team.

Talks by Googlers

We had amazing speakers from Google who spoke about their work, experiences, and journey to Google. Our first speaker was Jeremy Allison, a notable contributor to “Samba” which is a free software re-implementation of the SMB/CIFS networking protocol. He spoke on “How the Internet works” and gave a deeper view of the internet magic.

We had various speakers from different domains such as Grant Grundler from the Chrome team, Lyman Missimer from Google Expeditions, Katie Dektar from the Making and Science team, Sean Lip from Oppia(Googler and Oppia org admin), Timothy Papandreou from Waymo and Andrew Selle from TensorFlow.

Day 3: Fun Activities

We had various fun activities organized by the Google team. I had a great time cruising towards Alcatraz Island. Later we had a walk on the Golden Gate Bridge. Then came the fun part of the tour, the cruise dinner, which was the best part of the day.

Day 4: End of the trip

Oana, Arkhan and I gave a nice presentation about our work during GCI. We spoke about all the amazing projects under FOSSASIA. One cool thing we did was “Doodle” our presentation 🙂 Here are a few images from the actual presentation.

The day ended well with loads of good memories and information. Thanks to open source technologies and their availability, along with a beautiful, friendly community, these memories and connections will remain for a lifetime.


Export an Event using APIs of Open Event Server

In FOSSASIA’s Open Event Server project, we allow the organizer, co-organizer and admins to export all the data related to an event as an archive of JSON files. This way the data can be reused elsewhere for various purposes. The basic workflow is as follows:

  • Send a POST request to /events/{event_id}/export/json with a payload specifying whether you require the various media files.
  • The POST request starts a celery task in the background to extract the data related to the event and convert it to JSON.
  • The celery task URL is returned as the response. Sending a GET request to this URL gives the status of the task; if the status is FAILED or SUCCESS, the corresponding error message or result is included.
  • Separate JSON files for events, speakers, sessions, micro-locations, tracks, session types and custom forms are created.
  • All these files are then archived, and the zip is served at the endpoint /events/{event_id}/exports/{path}.
  • Sending a GET request to the above-mentioned endpoint downloads a zip containing all the data related to the event.

Let’s dive into each of these points one-by-one

POST request ( /events/{event_id}/export/json)

Like most of the other API endpoints, making this POST request requires JWT authentication. You need to send a payload containing the settings for whether you want the media files related to the event to be downloaded along with the JSON files. An example payload looks like this:

{
   "image": true,
   "video": true,
   "document": true,
   "audio": true
 }

def export_event(event_id):
    from helpers.tasks import export_event_task

    settings = EXPORT_SETTING
    settings['image'] = request.json.get('image', False)
    settings['video'] = request.json.get('video', False)
    settings['document'] = request.json.get('document', False)
    settings['audio'] = request.json.get('audio', False)
    # queue task
    task = export_event_task.delay(
        current_identity.email, event_id, settings)
    # create Job
    create_export_job(task.id, event_id)

    # in case of testing
    if current_app.config.get('CELERY_ALWAYS_EAGER'):
        # send_export_mail(event_id, task.get())
        TASK_RESULTS[task.id] = {
            'result': task.get(),
            'state': task.state
        }
    return jsonify(
        task_url=url_for('tasks.celery_task', task_id=task.id)
    )


Taking the settings about the media files and the event ID, we pass them as parameters to the export event celery task and queue it up. We then create an entry in the database with the task ID, the event ID and the user who triggered the export, to keep a record of the activity. After that we return the URL for the celery task as the response to the user.

If the celery task is still underway, the response shows 'state': 'WAITING'. Once the task is completed, the value of 'state' is either 'FAILED' or 'SUCCESS'. If it is SUCCESS, the result of the task is returned as well, in this case the download URL for the zip.

Celery Task to Export Event

Exporting an event is a very time-consuming process, and we don't want it to get in the way of the user's interaction with other services. So we needed a queueing system that queues the tasks and executes them in the background, without keeping the main worker from serving other user requests. We use celery to queue tasks and execute them in the background without disturbing the other user requests.

We have created a celery task named “export.event” which calls event_export_task_base(), which in turn calls export_event_json(), where all the JSON conversion is carried out. To start the celery task, all we do is call export_event_task.delay(current_identity.email, event_id, settings), and it returns a celery task object with a task ID that can be used to check the status of the task.

@celery.task(base=RequestContextTask, name='export.event', bind=True)
def export_event_task(self, email, event_id, settings):
    event = safe_query(db, Event, 'id', event_id, 'event_id')
    try:
        logging.info('Exporting started')
        path = event_export_task_base(event_id, settings)
        # task_id = self.request.id.__str__()  # str(async result)
        download_url = path

        result = {
            'download_url': download_url
        }
        logging.info('Exporting done.. sending email')
        send_export_mail(email=email, event_name=event.name, download_url=download_url)
    except Exception as e:
        print(traceback.format_exc())
        result = {'__error': True, 'result': str(e)}
        logging.info('Error in exporting.. sending email')
        send_export_mail(email=email, event_name=event.name, error_text=str(e))

    return result


After exporting, the path to the export zip is returned. We then get the download endpoint and return it as the result of the celery task. In case there is an error in the celery task, we print the entire traceback in the celery worker and return the error as the result.

Make the Exported Zip Ready

We have a separate export_helpers.py file in the helpers module of the API for performing the various tasks related to exporting all the data of an event. The most important function in this file is export_event_json(), which accepts the event_id and the settings dictionary. In the export helpers we also have global constant dictionaries which contain the order in which the fields are to appear in the JSON files created while exporting.

Firstly, we create the directory for storing the exported JSON files and, finally, the archive of all of them. Then we have a global constant named EXPORTS which lists all the tables and their corresponding models that we want to extract from the database and store as JSON. From EXPORTS we get the model names, and we use these models to make queries with the given event_id and retrieve the data from the database. After retrieving the data, we use another helper function named _order_json which converts the sqlalchemy data to JSON in the order mentioned in those dictionaries. After this we download the media data, i.e. the slides, images, videos etc. related to that particular model, depending on the settings.

def export_event_json(event_id, settings):
    """
    Exports the event as a zip on the server and return its path
    """
    # make directory
    exports_dir = app.config['BASE_DIR'] + '/static/uploads/exports/'
    if not os.path.isdir(exports_dir):
        os.mkdir(exports_dir)
    dir_path = exports_dir + 'event%d' % int(event_id)
    if os.path.isdir(dir_path):
        shutil.rmtree(dir_path, ignore_errors=True)
    os.mkdir(dir_path)
    # save to directory
    for e in EXPORTS:
        if e[0] == 'event':
            query_obj = db.session.query(e[1]).filter(
                e[1].id == event_id).first()
            data = _order_json(dict(query_obj.__dict__), e)
            _download_media(data, 'event', dir_path, settings)
        else:
            query_objs = db.session.query(e[1]).filter(
                e[1].event_id == event_id).all()
            data = [_order_json(dict(query_obj.__dict__), e) for query_obj in query_objs]
            for count in range(len(data)):
                data[count] = _order_json(data[count], e)
                _download_media(data[count], e[0], dir_path, settings)
        data_str = json.dumps(data, indent=4, ensure_ascii=False).encode('utf-8')
        fp = open(dir_path + '/' + e[0], 'w')
        fp.write(data_str)
        fp.close()
    # add meta
    data_str = json.dumps(
        _generate_meta(), sort_keys=True,
        indent=4, ensure_ascii=False
    ).encode('utf-8')
    fp = open(dir_path + '/meta', 'w')
    fp.write(data_str)
    fp.close()
    # make zip
    shutil.make_archive(dir_path, 'zip', dir_path)
    dir_path = dir_path + ".zip"

    storage_path = UPLOAD_PATHS['exports']['zip'].format(
        event_id=event_id
    )
    uploaded_file = UploadedFile(dir_path, dir_path.rsplit('/', 1)[1])
    storage_url = upload(uploaded_file, storage_path)

    return storage_url


After we receive the JSON data from the _order_json() function, we create a dump of it using json.dumps with an indentation of 4 spaces and UTF-8 encoding. Then we save this dump in a file named after the model from which the data was retrieved. This process is repeated for all the models mentioned in EXPORTS. After all the JSON files are created and all the media is downloaded, we make a zip of the folder.

To do this we use shutil.make_archive to create the zip. The zip is then uploaded to the storage service used by the server, such as S3, Google Cloud Storage, etc., and the URL through which it can be accessed is returned.

Apart from this function, the other major function in this file creates an export job entry in the database, so that we can keep track of which user started a task for which event; this helps with debugging and security.

Downloading the Zip File

After the exporting is completed, if you send a GET request to the task url, you get a response similar to this:

{
   "result": {
     "download_url": "http://localhost:5000/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
   },
   "state": "SUCCESS"
 }

So on opening the download url in the browser or using any other tool, you can download the zip file.

One big question, however, remains: the workflow is fine, but how do you know, after sending the POST request, that the task is completed and the export is ready to be downloaded? One way of solving this problem is a technique known as polling. In polling we send a GET request repeatedly at a fixed interval of time. From the POST request we get the URL for the export task, and we keep polling this task URL until the state is either “FAILED” or “SUCCESS”. If it is a SUCCESS, you place the download URL somewhere in your website where it can be clicked to download the archived export of the event.
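As an illustration, here is a minimal, hypothetical polling client written in Java using the standard java.net.http client. The task URL placeholder, the JWT header format and the five-second interval are assumptions for this sketch; the response it checks against is the JSON shown above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ExportPoller {

    public static void main(String[] args) throws Exception {
        // Placeholders: use the task_url returned by the POST /export/json request and your own JWT.
        String taskUrl = "http://localhost:5000/v1/tasks/123";
        String jwtToken = "your-jwt-token";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(taskUrl))
            .header("Authorization", "JWT " + jwtToken)
            .GET()
            .build();

        while (true) {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            String body = response.body();

            // Crude state check for the sketch; a real client would parse the JSON body properly.
            if (body.contains("\"SUCCESS\"")) {
                System.out.println("Export ready: " + body);   // the body contains the download_url
                break;
            } else if (body.contains("\"FAILED\"")) {
                System.out.println("Export failed: " + body);
                break;
            }

            Thread.sleep(5000);   // poll again after a fixed interval (5 seconds here)
        }
    }
}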

 


Uploading Files via APIs in the Open Event Server

There are two file upload endpoints: one for image uploads and one for all other files. The latter endpoint is used for uploading files such as slides, videos and other presentation materials for a session. So, in FOSSASIA's Orga Server project, when we need to upload a file, we make an API request to this endpoint, which in turn uploads the file to the server and returns the URL for the uploaded file. We then store this URL in the database with the corresponding row entry.

Sending Data

The endpoint /upload/file accepts a POST request containing a multipart/form-data payload. If a single file is uploaded, it is sent under the key “file”; otherwise an array of files is sent under the key “files”.

A typical single file upload cURL request would look like this:

curl -H "Authorization: JWT <key>" -F file=@file.pdf -X POST http://localhost:5000/v1/upload/file

A typical multi-file upload cURL request would look something like this:

curl -H "Authorization: JWT <key>" -F files=@file1.pdf -F files=@file2.pdf -X POST http://localhost:5000/v1/upload/file

Thus, unlike other endpoints in the Open Event Orga Server project, we don't send a JSON-encoded request; instead it is a form-data request.

Saving Files

We use different services such as S3, Google Cloud Storage and so on for storing the files, depending on the settings decided by the admin of the project. One can even ask to save the files locally by passing the GET parameter force_local=true. So, in the backend we have two cases to tackle: single file upload and multiple file upload.

Single File Upload

if 'file' in request.files:
        files = request.files['file']
        file_uploaded = uploaded_file(files=files)
        if force_local == 'true':
            files_url = upload_local(
                file_uploaded,
                UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
            )
        else:
            files_url = upload(
                file_uploaded,
                UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
            )


We get the file to be uploaded using request.files['file'], with the same key 'file' that was used in the payload. Then we use the uploaded_file() helper function to convert the file data received as payload into a proper file and store it in temporary storage. After this, if force_local is set to true, we use the upload_local helper function to upload it to local storage, i.e. the server where the application is hosted; otherwise we use whatever storage service is set by the admin in the admin settings.

In the uploaded_file() function of the helpers module, we extract the filename and the extension of the file from the form-data payload. Then we check whether a suitable directory already exists; if it doesn't, we create a new directory and then save the file in it.

extension = files.filename.split('.')[1]
filename = get_file_name() + '.' + extension
filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
if not os.path.isdir(filedir):
    os.makedirs(filedir)
file_path = filedir + filename
files.save(file_path)


After that, the upload function gets the settings key for either S3 or Google Storage and then uses the corresponding functions to upload this temporary file to the storage.

Multiple File Upload

 elif 'files[]' in request.files:
        files = request.files.getlist('files[]')
        files_uploaded = uploaded_file(files=files, multiple=True)
        files_url = []
        for file_uploaded in files_uploaded:
            if force_local == 'true':
                files_url.append(upload_local(
                    file_uploaded,
                    UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
                ))
            else:
                files_url.append(upload(
                    file_uploaded,
                    UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
                ))


In the case of multiple file upload, we get a list of files instead of a single file. Hence we get the list of files sent as form data using request.files.getlist('files[]'). Here 'files' is the key that is used, and since it is an array of file content it is written as files[]. We again use the uploaded_file() function to get back a list of temporary files from the content that has been uploaded as form data. After that we loop over all the temporary files that are stored in the variable files_uploaded in the above code. Next, for every file in the list of temporary files, we use the upload() helper function to save these files in the storage system of the application.

In the uploaded_file() function of the helpers module, things work differently this time since multiple files and their contents are sent. We loop over all the files that are received, and for each of these files we find the filename and extension. Then we create directories to save these files in and save the content of each file with the corresponding filename and extension. After a file has been saved, we append it to a list, and finally we return the entire list so that we get a list of all the uploaded files.

if multiple:
        files_uploaded = []
        for file in files:
            extension = file.filename.split('.')[1]
            filename = get_file_name() + '.' + extension
            filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
            if not os.path.isdir(filedir):
                os.makedirs(filedir)
            file_path = filedir + filename
            file.save(file_path)
            files_uploaded.append(UploadedFile(file_path, filename))


The upload() function then finally returns the URLs for the files after saving them.

API Response

The file upload endpoint returns either a single URL or a list of URLs, depending on whether a single file or multiple files were uploaded. The URL for a file depends on the storage system that has been used. After the URL or list of URLs is received, we jsonify the entire response so that we can send a proper JSON response that can be parsed in the frontend and used for saving the corresponding information to the database via the other API services.

A typical single file upload response looks like this:

{
     "url": "https://xyz.storage.com/asd/fgh/hjk/12332233.docx"
 }

Multiple file upload response looks like this:

{
     "url": [
         "https://xyz.storage.com/asd/fgh/hjk/12332233.docx",
         "https://xyz.storage.com/asd/fgh/hjk/66777777.ppt"
     ]
 }

You can find the related documentation and example payloads on how to use this endpoint to upload files here: http://open-event-api.herokuapp.com/#upload-file-upload.

 


How User Event Roles relationship is handled in Open Event Server

Users and Events are the most important parts of FOSSASIA's Open Event Server. As the project has evolved, the way user event roles are implemented has gone through many changes. When the Open Event Organizer Server was first decoupled to serve as an API server, user event roles, like all other models, were served as a separate API to provide a data layer above the database for making changes to the entries. Whenever a new role invite was accepted, a POST request was made to the Users Events Roles table to insert the new entry. Whenever there was a change in the role of a user for a particular event, a PATCH request was made. Permissions were set such that a user could insert only his/her own user ID and not someone else's entry.

def before_create_object(self, data, view_kwargs):
        """
        method to create object before post
        :param data:
        :param view_kwargs:
        :return:
        """
        if view_kwargs.get('event_id'):
            event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
            data['event_id'] = event.id

        elif view_kwargs.get('event_identifier'):
            event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
            data['event_id'] = event.id
        email = safe_query(self, User, 'id', data['user'], 'user_id').email
        invite = self.session.query(RoleInvite).filter_by(email=email).filter_by(role_id=data['role'])\
                .filter_by(event_id=data['event_id']).one_or_none()
        if not invite:
            raise ObjectNotFound({'parameter': 'invite'}, "Object: not found")

    def after_create_object(self, obj, data, view_kwargs):
        """
        method to create object after post
        :param data:
        :param view_kwargs:
        :return:
        """
        email = safe_query(self, User, 'id', data['user'], 'user_id').email
        invite = self.session.query(RoleInvite).filter_by(email=email).filter_by(role_id=data['role'])\
                .filter_by(event_id=data['event_id']).one_or_none()
        if invite:
            invite.status = "accepted"
            save_to_db(invite)
        else:
            raise ObjectNotFound({'parameter': 'invite'}, "Object: not found")


Initially, when a POST request was sent to the User Event Roles API endpoint, we would first check whether a role invite from the organizer existed for that particular combination of user, event and role. Only if it existed would we make an entry in the database; otherwise we would raise an “Object: not found” error. After the entry was made in the database, we would update the role_invites table to change the status of the role invite.

Later it was decided that a separate API endpoint was not needed. Since API endpoints are all user-accessible and may cause problems with permissions, it was decided that the user event roles would be handled entirely through the model instead of a separate API. Also, the workflow wasn't very clear for a user. So we decided on a workflow where the role_invites table is first updated with the particular status, and after the update has been made, we make an entry in the users_events_roles table with the data that we get from the role_invites table.

When a role invite is accepted, sqlalchemy's add() and commit() are used to insert a new entry into the table. When a role is changed for a particular user, we make a query, update the values and save them back into the table. So the entire process is handled at the data layer level rather than at the API level.

The code implementation is as follows:

def before_update_object(self, role_invite, data, view_kwargs):
        """
        Method to edit object
        :param role_invite:
        :param data:
        :param view_kwargs:
        :return:
        """
        user = User.query.filter_by(email=role_invite.email).first()
        if user:
            if not has_access('is_user_itself', id=user.id):
                raise UnprocessableEntity({'source': ''}, "Only users can edit their own status")
        if not user and not has_access('is_organizer', event_id=role_invite.event_id):
            raise UnprocessableEntity({'source': ''}, "User not registered")
        if not has_access('is_organizer', event_id=role_invite.event_id) and (len(data.keys())>1 or 'status' not in data):
            raise UnprocessableEntity({'source': ''}, "You can only change your status")

    def after_update_object(self, role_invite, data, view_kwargs):
        user = User.query.filter_by(email=role_invite.email).first()
        if 'status' in data and data['status'] == 'accepted':
            role = Role.query.filter_by(name=role_invite.role_name).first()
            event = Event.query.filter_by(id=role_invite.event_id).first()
            uer = UsersEventsRoles.query.filter_by(user=user).filter_by(event=event).filter_by(role=role).first()
            if not uer:
                uer = UsersEventsRoles(user, event, role)
                save_to_db(uer, 'Role Invite accepted')


In the above code there are two main functions: before_update_object, which gets executed before the entry in the role_invites table is updated, and after_update_object, which gets executed after.

In before_update_object, we verify that the user is accepting or rejecting his/her own role invite and not someone else's. Also, we ensure that the user is allowed to update only the status of the role invite and not any other sensitive data like the role_name or email. If the user tries to edit any field other than status, an error is shown to him/her. However, if the user has organizer access, then he/she can edit the other fields of the role_invites table as well. The has_access() helper permission function helps us ensure these permission checks.

In after_update_object we make the entry in the users_events_roles table. From the role_invite parameter we get the exact values of the newly updated row in the table, and we use this data to find the user, event and role associated with the invite. Then we create a UsersEventsRoles object with the user, event and role as parameters for the constructor, and use the save_to_db helper function to save the new entry to the database. The save_to_db function uses the session.add() and session.commit() functions of flask-sqlalchemy to add the new entry directly to the database.

Thus, we maintain the flow of the user event roles relationship. All the database entries and operations related to the users_events_roles table remain encapsulated from the client, so that the various API features can be used without worrying about the complications of the implementation.

 


Automatic handling of view/data interactions in Open Event Orga App

During the development of Open Event Orga Application (Github Repo), we have strived to minimize duplicate code wherever possible and make the wrappers and containers around data and views intelligent and generic. When it comes to loading the data into views, there are several common interactions and behaviours that need to be replicated in each controller (or presenter in case of MVP architecture as used in our project). These interactions involve common ceremony around data loading and setting patterns and should be considered as boilerplate code. Let’s look at some of the common interactions on views:

Loading Data

While loading data, there are 3 scenarios to be considered:

  • Data loading succeeded – Pass the data to view
  • Data loading failed – Show appropriate error message
  • Show progress bar on starting of the data loading and hide when completed

If instead of loading a single object, we load a list of them, then the view may be emptiable, meaning you’ll have to show the empty view if there are no items.

Additionally, there may be a success message too, and if we are refreshing the data, there will be a refresh complete message as well.

These use cases, present in each of the presenters, cause a lot of duplication and can be easily handled by using Transformers from RxJava to compose common scenarios on views. Let's see how we achieved it.

Generify the Views

The first step in reducing repetition in code is to use Generic classes. And as the views used in Presenters can be any class such as Activity or Fragment, we need to create some interfaces which will be implemented by these classes so that the functionality can be implementation agnostic. We broke these scenarios into common uses and created disjoint interfaces such that there is little to no dependency between each one of these contracts. This ensures that they can be extended to more contracts in future and can be used in any View without the need to break them down further. When designing contracts, we should always try to achieve fundamental blocks of building an API rather than making a big complete contract to be filled by classes. The latter pattern makes it hard for this contract to be generally used in all classes as people will refrain from implementing all its methods for a small functionality and just write their own function for it. If there is a need for a class to make use of a huge contract, we can still break it into components and require their composition using Java Generics, which we have done in our Transformers.

First, let’s see our contracts. Remember that the names of these Contracts are opinionated and up to the developer. There is no rule in naming interfaces, although adjectives are preferred as they clearly denote that it is an interface describing a particular behavior and not a concrete class:

Emptiable

A view which contains a list of items and thus can be empty

public interface Emptiable<T> {
   void showResults(List<T> items);
   void showEmptyView(boolean show);
}

Erroneous

A view that can show an error message on failure of loading data

public interface Erroneous {
   void showError(String error);
}

ItemResult

A view that contains a single object as data

public interface ItemResult<T> {
   void showResult(T item);
}

Progressive

A view that can show and hide a progress bar while loading data

public interface Progressive {
   void showProgress(boolean show);
}

Note that even though a Progressive view would typically also be an ItemResult or Emptiable view, as those are the ones containing any data, we have decoupled it, making it possible for a view to load data without progress, or to show progress for any implementation other than loading data.

Refreshable

A view that can be refreshed and show the refresh complete message

public interface Refreshable {
   void onRefreshComplete();
}

There should also be a method for refresh failure; as the app is under development, it will be added soon

Successful

A view that can show a success message

public interface Successful {
   void onSuccess(String message);
}

Implementation

Now, we will implement the Observable Transformers for these contracts

Erroneous

public static <T, V extends Erroneous> ObservableTransformer<T, T> erroneous(V view) {
   return observable ->  observable
             .doOnError(throwable -> view.showError(throwable.getMessage()));
}

We simply call showError on a view implementing Erroneous on the call of doOnError of the Observable

Progressive

private static <T, V extends Progressive> ObservableTransformer<T, T> progressive(V view) {
   return observable -> observable
           .doOnSubscribe(disposable -> view.showProgress(true))
           .doFinally(() -> view.showProgress(false));
}

Here we show the progress when the observable is subscribed and finally, we hide it whether it succeeded or failed

ItemResult

public static <T, V extends ItemResult<T>> ObservableTransformer<T, T> result(V view) {
   return observable -> observable.doOnNext(view::showResult);
}

We call showResult on call of onNext

 

Refreshable

private static <T, V extends Refreshable> ObservableTransformer<T, T> refreshable(V view, boolean forceReload) {
   return observable ->
       observable.doFinally(() -> {
           if (forceReload) view.onRefreshComplete();
       });
}

As we only refresh a view on a forceReload, we check it before calling onRefreshComplete

 

Emptiable

public static <T, V extends Emptiable<T>> SingleTransformer<List<T>, List<T>> emptiable(V view, List<T> items) {
   return observable -> observable
       .doOnSubscribe(disposable -> view.showEmptyView(false))
       .doOnSuccess(list -> {
           items.clear();
           items.addAll(list);
           view.showResults(items);
       })
       .doFinally(() -> view.showEmptyView(items.isEmpty()));
}

Here we hide the empty view at the start of data loading and finally show it if the items are empty. Also, since we keep only one copy of a final list variable which is shared between the view and the presenter, we clear it, add all the items to it, and call showResults on the view

Bonus: You can also merge the functions for composite usage as mentioned above like this

public static <T, V extends Progressive & Erroneous> ObservableTransformer<T, T> progressiveErroneous(V view) {
   return observable -> observable
       .compose(progressive(view))
       .compose(erroneous(view));
}

public static <T, V extends Progressive & Erroneous & ItemResult<T>> ObservableTransformer<T, T> progressiveErroneousResult(V view) {
   return observable -> observable
       .compose(progressiveErroneous(view))
       .compose(result(view));
}

Usage

Finally we use the above transformers

eventsDataRepository
   .getEvents(forceReload)
   .compose(dispose(getDisposable()))
   .compose(progressiveErroneousRefresh(getView(), forceReload))
   .toSortedList()
   .compose(emptiable(getView(), events))
   .subscribe(Logger::logSuccess, Logger::logError);

To give you an idea of what we have accomplished here, this is how we did the same before adding transformers

eventsView.showProgressBar(true);
eventsView.showEmptyView(false);

getDisposable().add(eventsDataRepository
   .getEvents(forceReload)
   .toSortedList()
   .subscribeOn(Schedulers.computation())
   .subscribe(events -> {
       if(eventsView == null)
           return;
       eventsView.showEvents(events);
       isListEmpty = events.size() == 0;
       hideProgress(forceReload);
   }, throwable -> {
       if(eventsView == null)
           return;

       eventsView.showEventError(throwable.getMessage());
       hideProgress(forceReload);
   }));

Sure looks ugly as compared to the current solution.

Note that if you don't provide the error handler in the subscribe method of the observable, it will throw an OnErrorNotImplementedException even if you have added a doOnError side effect


Implementing User Permissions API on Open Event Frontend to View and Update User Permission Settings

This blog article will illustrate how the user permissions are displayed and updated on the admin permissions page in Open Event Frontend, using the user permissions API. Our discussion will primarily involve the admin/permissions/index route to illustrate the process.

The primary endpoint of the Open Event API with which we are concerned for fetching the permissions of a user is

GET /v1/user-permissions

First we need to create a model for the user permissions, which will have the fields corresponding to the API, so we proceed with the ember CLI command:

ember g model user-permission

Next we define the model according to the requirements. The model needs to extend the base model class, and other than the name and description all the fields will be boolean, since the user permissions frontend primarily consists of checkboxes to grant and revoke permissions. Hence the model will be of the following format:

import attr from 'ember-data/attr';
import ModelBase from 'open-event-frontend/models/base';

export default ModelBase.extend({
 name           : attr('string'),
 description    : attr('string'),
 unverifiedUser : attr('boolean'),
 anonymousUser  : attr('boolean')
});

Now we need to load the data from the API using this model, so we will send a GET request to the API to fetch the current permissions. This can be easily achieved via a store query in the model hook of the admin/permissions/system-roles route. It is important to note here that findAll is preferred over an empty query. To quote the source of this information:

The reason findAll is preferred over query when no filtering is done is, query will always make a server request. findAll on the other hand, will not make a server request if findAll has already been used once somewhere before. It’ll re-use the data already available whenever possible.

model() {
   return this.get('store').findAll('user-permission');
 }

The user permissions form is not a separate component and is directly embedded in the route template; hence there is no need to explicitly pass the model, as it is available in the route template by default and can be used as follows:

{{#each model as |userPermission|}}
<tr>
  <td>
    {{userPermission.name}}
    <div class="muted text">
      {{userPermission.description}}
    </div>
  </td>
  <td>
     {{ui-checkbox label=(t 'Unverified User') checked=userPermission.unverifiedUser onChange=(action (mut userPermission.unverifiedUser))}}
  </td>
  <td>
    {{ui-checkbox label=(t 'Anonymous User') checked=userPermission.anonymousUser onChange=(action (mut userPermission.anonymousUser))}}
  </td>
</tr>
{{/each}}

In the template, after mutating the model's values according to whether the checkboxes are checked or not, the only thing left is the update action in the controller, which is triggered by the default submit action of the form.

updatePermissions() {
     this.set('isLoading', true);
     this.get('model').save()
       .then(() => {
         this.set('isLoading', false);
         this.notify.success(this.l10n.t('User permissions have been saved successfully.'));
       })
       .catch(()=> {
         this.set('isLoading', false);
         this.notify.error(this.l10n.t('An unexpected error has occurred. User permissions not saved.'));
       });
   }

The controller action first sets the isLoading property to true. This adds the Semantic UI class loading to the form, so the form goes into the loading state to let the user know the request is being processed. Then the save() call occurs, which makes a PATCH request to the API to update the values stored in the database. If the PATCH request is successful, the .then() clause executes, which, in addition to setting isLoading to false, notifies the user that the settings have been saved successfully using the notify service.

However, in case there is an unexpected error and the PATCH request fails, the .catch() executes. After setting isLoading to false, it notifies the user of the error via an error notification.


API Error Handling in the Open Event Organizer Android App

Open Event Organizer is an Android App for Organizers and Entry Managers, and the Open Event API server acts as its backend. The App makes data requests to the API; in return, the API performs the required actions on the data and sends back the response, which is used to display relevant info to the user and to update the App's local database. The error responses returned by the API need to be parsed so that an understandable error message can be shown to the user.

The App uses Retrofit+OkHttp for making network requests to the API. Hence the request method returns a Throwable in the case of an error in the action. The Throwable contains a string message which can be retrieved using the getMessage method, but that message is not easily understandable by a normal user. The Open Event Organizer App uses the ErrorUtils class for this purpose: it has a method which takes a Throwable as a parameter and returns a clearer error message which is easier for the user to understand.

Relevant code:

public final class ErrorUtils {

   public static final int BAD_REQUEST = 400;
   public static final int UNAUTHORIZED = 401;
   public static final int FORBIDDEN = 403;
   public static final int NOT_FOUND = 404;
   public static final int METHOD_NOT_ALLOWED = 405;
   public static final int REQUEST_TIMEOUT = 408;

   private ErrorUtils() {
       // Never Called
   }

   public static String getMessage(Throwable throwable) {
       if (throwable instanceof HttpException) {
           switch (((HttpException) throwable).code()) {
               case BAD_REQUEST:
                   return "Something went wrong! Please check any empty field if a form.";
               case UNAUTHORIZED:
                   return "Invalid Credentials! Please check your credentials.";
               case FORBIDDEN:
                   return "Sorry, you are not authorized to make this request.";
               case NOT_FOUND:
                   return "Sorry, we couldn't find what you were looking for.";
               case METHOD_NOT_ALLOWED:
                   return "Sorry, this request is not allowed.";
               case REQUEST_TIMEOUT:
                   return "Sorry, request timeout. Please retry after some time.";
               default:
                   return throwable.getMessage();
           }
       }
       return throwable.getMessage();
   }
}

ErrorUtils.java
app/src/main/java/org/fossasia/openevent/app/common/utils/core/ErrorUtils.java

All the error codes are stored as static final fields. It is good practice to make the constructor of a utility class private, to make sure the class is never instantiated anywhere in the app. The getMessage method takes a Throwable and checks whether it is an instance of HttpException in order to get an HTTP error code. There are actually two relevant exceptions, HttpException and IOException; the former corresponds to an error response returned from the server. Using the error codes, the method returns relevant, readable error messages which are shown to the user in a snackbar layout.

It is always good practice to show more understandable, user-friendly error messages rather than simply the default ones, which are not clear to a normal user.
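For illustration, here is a minimal sketch of how a presenter and its view might use ErrorUtils to surface such a message in a snackbar. The names eventRepository, getView(), showResults(), showError() and containerView are assumptions for this sketch, not the App's exact code.

// In the presenter: convert the Throwable into a readable message before handing it to the view.
eventRepository.getEvents(true)
    .toList()
    .subscribe(
        events -> getView().showResults(events),
        throwable -> getView().showError(ErrorUtils.getMessage(throwable)));

// In the Activity/Fragment implementing the view contract:
@Override
public void showError(String error) {
    Snackbar.make(containerView, error, Snackbar.LENGTH_LONG).show();
}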

Links:
1. List of the HTTP Client Error Codes – Wikipedia Link
2. Class Throwable javadoc


Implementing Speakers Call API in Open Event Frontend

This article will illustrate how to display the speakers call details on the call for speakers page in the Open Event Frontend project using the Open Event Orga API. The API endpoint we will mainly focus on for fetching the speakers call details is:

GET /v1/speakers-calls/{speakers_call_id}

In the case of Open Event, speakers are asked to submit their proposals beforehand if they are interested in giving a talk. For this purpose, the event's public page has a section called Call for Speakers, where the details about the speakers call are shown along with a Submit Proposal button, which redirects to a link where a proposal can be uploaded if the speakers call is open. Since the speakers call page is part of the event's public page, the routes concerned are the public/index route and its subroute public/index/cfs in the application. As the call for speakers details are nested within the event model, we first need to fetch the event and then fetch the speakers call details from that model.

The code to fetch the event model looks like this:

model(params) {
return this.store.findRecord('event', params.event_id, { include: 'social-links' });
}

The above model takes care of fetching all the data related to the event, but we can see that the speakers call is not included as a parameter. The main reason behind this is that the speakers call is not required on every public route; it is required only for the subroute public/index/cfs. Let's see how the model for the speakers call route works to fetch the speakers call details from the above event model.

model() {
    const eventDetails = this.modelFor('public');
    return RSVP.hash({
      event        : eventDetails,
      speakersCall : eventDetails.get('speakersCall')
    });
}

In the above code, we use this.modelFor('public') to reuse the event data fetched in the model of the public route, eliminating a separate API call for getting the event details in the speakers call route. Next, using Ember's get method, we fetch the speakers call data from eventDetails and place it inside the speakersCall property of the returned hash, to use it later for displaying the speakers call details in the public/index subroute.

Until now, we have fetched the event details and speakers call details in the speakers call subroute, but we need to display them on the index page of the subroute. So we pass the model from the file cfs.hbs to call-for-speakers.hbs, the code for which looks like this:

{{public/call-for-speakers speakersCall=model.speakersCall}}  

The trickiest part of implementing the speakers call is checking whether the speakers call is open or closed. The code which checks whether the call for speakers is open or closed is:

isOpen: computed('startsAt', 'endsAt', function() {
     return moment().isAfter(this.get('startsAt')) && moment().isBefore(this.get('endsAt'));
})

In the computed property isOpen of the speakers-call model above, we depend on the starting time and the ending time of the speakers call. We then check whether the current time is after the starting time and before the ending time; if both conditions are true, the speakers call is open, otherwise it is closed.

Now we need a template file where we define the user interface for call-for-speakers based on the above property, isOpen. The code for displaying the UI based on the open or closed status is:

  {{#if speakersCall.isOpen}}
    <a class="ui basic green label">{{t 'Open'}} </a>
    <div class="sub header">
      {{t 'Call for Speakers Open until'}} {{moment-format speakersCall.endsAt 'ddd, MMM DD HH:mm A'}}
    </div>
  {{else}}
    <a class="ui basic red label">{{t 'Closed'}}</a>
  {{/if}}

In the above code, if speakersCall is open we show an Open label and display the date until which the speakers call is open, using the moment helper in the format “ddd, MMM DD HH:mm A”; otherwise we show a Closed label. The UI for the above code looks like this.

Fig. 1: The heading of speakers call page when the call for speakers is open

The complete UI of the page looks like this.

Fig. 2: The user interface for the speakers call page

The entire code for implementing the speakers call API can be seen here.

To conclude, this is how we efficiently fetched the speakers call details using the Open-Event-Orga speakers call API, ensuring that there is no unnecessary API call to fetch the data.  


Adding JSON API support to ember-models-table in Open Event Front-end

The Open Event Front-end project uses ember-models-table for handling all the table components in the application. Although ember-models-table is great at handling server requests for operations like pagination, sorting & filtering, it does not support the JSON API specification used in the Front-end project.

In this blog post we will see how we integrated the JSON API standard into ember-models-table, added support for JSON API to the table, and made requests to the Open Event Orga-server.

Adding JSON API support for filtering & sorting

The JSON API spec follows a strict structure for supporting metadata & filtering options: the server expects an array of objects specifying the name of the field, the operation and the value for filtering. The `name` attribute specifies the column to which the filter is applied, e.g. we use `name` for the event's name. The `op` attribute specifies the operation to be used for filtering, and the `val` attribute provides the value for comparison. You can check the list of all the supported operations here.

To implement filtering, we check whether the column filter is being used, i.e. whether the filter string is empty or not. If the string is not empty we add a filter object for the column using the structure described above; otherwise we remove the filter object for that column.

if (filter) {
  query.filter.pushObject({
    name : filterTitle,
    op   : 'ilike',
    val  : `%${filter}%`
  });
} else {
  query.filter.removeObject({
    name : filterTitle,
    op   : 'ilike',
    val  : `%${filter}%`
  });
}

For sort functionality we need to pass a query parameter called `sort`, which is a string value in the URL. Sorting can be done in ascending or descending order, for which the server expects different values: we pass `sort=name` for sorting in ascending order and `sort=-name` for descending order.

const sortSign = {
  none : '',
  asc  : '-',
  desc : ''
};
let sortedBy = get(column, 'sortedBy');
if (typeOf(sortedBy) === 'undefined') {
  sortedBy = get(column, 'propertyName');
}

Adding support for pagination

Pagination in JSON API is implemented using the query parameters `page[size]` & `page[number]`, which specify the size of the page & the current page number respectively, e.g.

page[size]=10&page[number]=1

This will load the first ten events from the server in the application.

Once the data is loaded in the application, we calculate the number of pages to be rendered. The response from the server has attached metadata which contains the total number of events, in the following structure:

meta: {
  count: 100
}

We calculate the number of pages by dividing the total count by the size of the page. We check if the number of items is greater than the pageSize and calculate the number of pages using the formula `items / pagesize + (items % pagesize ? 1 : 0)`. If the number of items is less than the pageSize, we do not have to calculate the pages and simply hide the pagination in the footer.

if (pageSize > items) {
  this.$('.pagination').css({
    display: 'none'
  });
} else {
  this.$('.pagination').removeAttr('style');
  pages = parseInt((items / pageSize));
  if (items % pageSize) {
    pages = pages + 1;
  }
}

Adding dynamic routing support to ember-models-table

We may want to use ember-models-table in dynamic routes like the `events/list` route, where we load live, drafted & past events based on the current route. By default, ember-models-table does not support dynamic routes. To add this, we override the didReceiveAttrs() method of the component, which is executed every time the component updates, and reset the pageSize, the currentPageNumber and the content of the table as the route changes.

didReceiveAttrs() {
  set(this, 'pageSize', 10);
  set(this, 'currentPageNumber', 1);
  set(this, 'filteredContent', get(this, 'data'));
}

As a result, we now have tables supporting JSON API in the Open Event Front-end application.

Thank you for reading the blog; you can check the source code for the example here.
