Adding Static Code Analyzers in Open Event Orga Android App

This week, in the Open Event Orga App project (GitHub repo), we wanted to add some static code analysers that run on each build to ensure that the app code is free of potential bugs and follows a certain style. Codacy handles a few of these things, but it is quirky and sometimes produces false positives. Furthermore, it is not a required check for builds, so errors can creep in gradually. We chose Checkstyle, PMD and FindBugs for static analysis, as they are among the most popular tools for Java. Their areas of analysis overlap to some extent, but together they give stronger assurance of code quality. FindBugs actually analyses the bytecode instead of the source code to find possible JVM bugs.

Adding dependencies

The first step was to add the required dependencies. We chose the library android-check, as it bundles all 3 tools, is focused on Android, and is easily configurable. First, we add the classpath in the project-level build.gradle:

buildscript {
   dependencies {
       classpath 'com.noveogroup.android:check:1.2.4'
   }
}


Then we apply the plugin in the app-level build.gradle:

apply plugin: 'com.noveogroup.android.check'


This much is enough to get you started, but by default the build will not fail if any violations are found. To change this behaviour, we add this block in the app-level build.gradle:

check {
   abortOnError true
}


There are many configuration options available for the library; do check out the project GitHub repo using the link provided above.

Configuration

The default configuration is the easy level, which will be enough for most projects, but it is of course configurable. So we took the default hard configs for the 3 analysers and disabled the properties we did not need. The config files need to be stored in a folder named config, in either the root project directory or the app directory, and should be named checkstyle.xml, pmd.xml and findbugs.xml.
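
With the config folder placed at the project root, the expected layout is simply:

config/
├── checkstyle.xml
├── findbugs.xml
└── pmd.xml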

These are the default settings, and you can of course configure them further by following the instructions on the project repo.

Checkstyle

For Checkstyle, you can find the easy and hard configurations here.

The basic principle is that if you need to add a check, you include a module like this:

<module name="NewlineAtEndOfFile" />


If you want to modify the default value of some property, you do it like this:

<module name="RegexpSingleline">
   <property name="format" value="\s+$" />
   <property name="minimum" value="0" />
   <property name="maximum" value="0" />
   <property name="message" value="Line has trailing spaces." />
   <property name="severity" value="info" />
</module>


And if you want to remove a check, you can ignore it like this:

<module name="EqualsHashCode">
   <property name="severity" value="ignore" />
</module>


It’s pretty straightforward and easy to configure.

Findbugs

For FindBugs, you can find the easy and hard configurations here.

FindBugs configuration exists in the form of filters, where we list the resources it should skip analysing, like:

<Match>
   <Class name="~.*\.BuildConfig" />
</Match>


If we want to ignore a particular pattern, we can do so like this:

<!-- No need to force hashCode for simple models -->
<Match>
   <Bug pattern="HE_EQUALS_USE_HASHCODE" />
</Match>


Sometimes you’d want to ignore a pattern only for certain files or fields. FindBugs supports regex to match such items:

<!-- Don't complain about rules in tests. -->
<Match>
   <Field name="~.*mockitoRule"/>
   <Bug pattern="URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD" />
</Match>


You can also annotate your code to suppress a warning for a particular class, method or field, rather than disabling the check for the whole project. For that, you need to add the FindBugs annotations dependency to the project:

compile 'com.google.code.findbugs:findbugs-annotations:3.0.1'


And then use it like this:

@SuppressFBWarnings(
   value = "ICAST_IDIV_CAST_TO_DOUBLE",
   justification = "We want granularity to be integer")
public void showChart(LineChart lineChart) {
   ...
}


It also allows setting a justification for suppressing the rule, for clarity.
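
The annotation works on fields as well. For instance, a sketch with a hypothetical field name (assuming the same annotations dependency as above):

@SuppressFBWarnings(
   value = "UWF_UNWRITTEN_FIELD",
   justification = "Field is written reflectively by the DI framework")
private EventRepository eventRepository; // hypothetical field for illustration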

PMD

For PMD, you can find the easy and hard configurations here.

Like checkstyle, you have to first add a rule set to tell PMD which checks to perform:

<rule ref="rulesets/java/android.xml" />


If you want to modify the default value of the rule, you can do it like this:

<rule ref="rulesets/java/codesize.xml/TooManyMethods">
   <properties>
       <property name="maxmethods" value="15" />
   </properties>
</rule>


Or if you want to entirely exclude a rule, you can do it like this:

<rule ref="rulesets/java/basic.xml">
   <exclude name="OverrideBothEqualsAndHashcode" />
</rule>


PMD also supports suppressing warnings in the code itself using annotations. You don’t require any external libraries for it, as it supports the built-in java.lang.SuppressWarnings annotation. You can use it like this:

@SuppressWarnings("PMD.AvoidInstantiatingObjectsInLoops") // Entries cannot be created outside loop
private LineDataSet setData(Map<String, Long> map, String label) throws ParseException {
   ...
}


As you can see, we need to prepend “PMD.” to the rule name so that there are no clashes during annotation processing. Remember to comment the reason for suppressing the warning, so that your co-developers know why and can remove it in the future if the criteria no longer apply.

There is a lot more to learn about these static analysers, which you can read up on in their official documentation.

Implementing Admin Statistics Mail and Session API on Open Event Frontend

This blog article will illustrate how the admin-statistics-mail and admin-statistics-session APIs are implemented on the admin dashboard page in Open Event Frontend. Our discussion will primarily involve the admin/index route to illustrate the process. The primary endpoints of the Open Event API with which we are concerned for fetching the admin statistics for the dashboard are:

GET /v1/admin/statistics/mails
GET /v1/admin/statistics/sessions

First we need to create the corresponding models according to the type of response returned by the server, which in this case will be admin-statistics-mail and admin-statistics-session, so we proceed with the Ember CLI commands:

ember g model admin-statistics-mail
ember g model admin-statistics-session

Next we define the models according to the requirements. Each model needs to extend the base model class, and all the fields will be of type number, since all the data obtained via these models from the API are numerical statistics. The mail statistics model is:

import attr from 'ember-data/attr';
import ModelBase from 'open-event-frontend/models/base';

export default ModelBase.extend({
 oneDay     : attr('number'),
 threeDays  : attr('number'),
 sevenDays  : attr('number'),
 thirtyDays : attr('number')
});

And the model for sessions will be the following. It too consists entirely of attributes of type number, since it represents statistics:

import attr from 'ember-data/attr';
import ModelBase from 'open-event-frontend/models/base';

export default ModelBase.extend({
 confirmed : attr('number'),
 accepted  : attr('number'),
 submitted : attr('number'),
 draft     : attr('number'),
 rejected  : attr('number'),
 pending   : attr('number')
});

Now we need to load the data from the API using the models, so we will send a GET request to fetch the statistics. This can be easily achieved via a store query in the model hook of the admin/index route. However, this cannot be a normal GET request, because the urls for the endpoints are /v1/admin/statistics/mails & /v1/admin/statistics/sessions, and there is no relationship between statistics and various sub-routes, which is what Ember’s default behaviour would expect.

Hence we need to override the default generated request url using custom adapters, and use the buildURL method to customise the request urls.

import ApplicationAdapter from './application';

export default ApplicationAdapter.extend({
 buildURL(modelName, id, snapshot, requestType, query) {
   let url = this._super(modelName, id, snapshot, requestType, query);
   url = url.replace('admin-statistics-session', 'admin/statistics/session');
   return url;
 }
});

The buildURL method replaces the default URL for admin-statistics-session with admin/statistics/session; otherwise the default request would have been:

GET v1/admin-statistics-session

The same must be done for the mail statistics adapter too; a sketch is shown below. These adapters ensure that the correct requests are sent to the server. Now all that remains is making the requests in the model hook and adjusting the template slightly for the new models.
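
A sketch of the corresponding mail adapter, assuming it mirrors the session adapter above (Ember pluralizes the model name in the default URL, so the same replace yields /v1/admin/statistics/mails):

import ApplicationAdapter from './application';

export default ApplicationAdapter.extend({
 buildURL(modelName, id, snapshot, requestType, query) {
   let url = this._super(modelName, id, snapshot, requestType, query);
   // 'admin-statistics-mails' in the default URL becomes 'admin/statistics/mails'
   url = url.replace('admin-statistics-mail', 'admin/statistics/mail');
   return url;
 }
});

With both adapters in place, the model hook fetches the two statistics records in parallel: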

model() {
  return RSVP.hash({
    mails: this.get('store').queryRecord('admin-statistics-mail', {
      filter: {
        name : 'id',
        op   : 'eq',
        val  : 1
      }
    }),
    sessions: this.get('store').queryRecord('admin-statistics-session', {
      filter: {
        name : 'id',
        op   : 'eq',
        val  : 1
      }
    })
  });
}


queryRecord is used instead of query because only a single record is expected to be returned by the API.

Resources

Tags:

Open event, Open event frontend, ember JS, ember service, semantic UI, ember-data, ember adapters, tickets, Open Event API, Ember models

Implementing Tracks Filter in Open Event Webapp using the side track name list

[Screenshot]

On Clicking the Design, Art, Community Track

[Screenshot]

But, it was not an elegant solution. We already had a track names list present on the side of the page which remained unused. A better idea was to use this side track names list to filter the sessions. Other event management sites like http://sched.org follow the same idea. The relevant issue for it is here and the major work can be seen in this Pull Request. Below is the screenshot of the unused side track names list.

[Screenshot]

The end behaviour should be something like this: the user clicks on a track, only sessions belonging to that track remain visible, and the rest are hidden. There should also be a button for clearing the applied filter and reverting the page back to its default view. Let’s jump to the implementation part.

First, we make the side track name list and make the individual tracks clickable.

<div class="track-names col-md-3 col-sm-3"> 
  {{#tracknames}}
    <div class="track-info">
      <span style="background-color: {{color}};" 
      class="titlecolor"></span>
      <span class="track-name" style="cursor: pointer">{{title}}
      </span>
    </div>
  {{/tracknames}}
</div>

[Screenshot]

Now we need to write a function for handling the user click event on a track name. Before writing the function, we need to see the basic structure of the tracks page. The divs with the class date-filter contain all the sessions scheduled on a given day. Inside each of them, we have divs with the class track-filter which contain the name of a track, and all the sessions of that track are inside the div with the class room-filter.

Below is a relevant block of code from the tracks.hbs file

<div class="date-filter">
  // Contains all the sessions present in a single day
  <div class="track-filter row">
    // Contains all the sessions of a single track
    <div class="row">
      // Contains the name of the track
      <h5 class="text">{{caption}}</h4>
    </div>
    <div class="room-filter" id="{{session_id}}">
      // Contain the information about the session
    </div>
  </div>
</div>

We iterate over all the date-filter divs and check all the track-filter divs inside them. We extract the name of the track and compare it to the track the user selected. If both are the same, we show that track div and all the sessions inside it; if the names don’t match, we hide it and all the content inside. We also keep a variable named flag, initially set to 0; if the user-selected track is present on a given day, we set the flag to 1. Based on it, we decide whether to display that particular day: if the flag is set, we display the date-filter div of that day and the matched track inside it, otherwise we hide the div and all tracks inside it.

$('.track-name').click(function() {
  // Get the name of the track which the user clicked
  trackName = $(this).text();
  // Show the button for clearing the filter applied and reverting to the default view
  $('#filterInfo').show();
  $('#curFilter').text(trackName);
  // Iterate through the divs and show sessions of user selected track
  $('.date-filter').each(function() {
    var flag = 0;
    $(this).find('.track-filter').each(function() {
      var name = $(this).find('.text').text();
      if(name != trackName) {
        $(this).hide();
        return;
      }
      flag = 1;
      $(this).show();
    });
    if (flag) {
     $(this).show();
    } else {
      $(this).hide();
    }
  });
});

On selecting the Android track of the FOSSASIA Summit, we see something like this:

[Screenshot]

Now the user may want to remove the filter. He/she can just click on the Clear Filter button shown in the above screenshot to remove the filter and revert back to the default view of the page.

$('#clearFilter').click(function() {
  trackFilterMode = 0;
  display();
  $('#filterInfo').hide();
});

Back to the default view of the page

[Screenshot]

References:

Implementing the UI of the Scheduler in Open Event Frontend with Sub Rooms and External Events

This blog article will illustrate how exactly the UI of the scheduler is implemented in Open Event Frontend, using the fullcalendar library. Our discussion will primarily involve the events/view/scheduler route.

FullCalendar is an open source JavaScript calendar with an option to use its scheduler functionality. To use it with Ember.js, we make use of its Ember wrapper (ember-fullcalendar). We begin by installing it via the CLI:

npm install ember-fullcalendar

Next, to initialise it, we need to specify some properties in the config/environment.js file.

Note: this is mentioned incorrectly in the official documentation (see the linked issue), hence it is specified in this blog.

emberFullCalendar: {
  includeScheduler: true
}

This enables the scheduler functionality of the calendar. Next we need to specify the model, which will include the details of the rooms and the events to be displayed; the FullCalendar scheduler requires them in a specific format, hence the model hook will be:

model() {
  return RSVP.hash({
    events: [{
      title      : 'Session 1',
      start      : '2017-07-26T07:08:08',
      end        : '2017-07-26T09:08:08',
      resourceId : 'a'
    }],
    rooms: [
      { id: 'a', title: 'Auditorium A' },
      { id: 'b', title: 'Auditorium B', eventColor: 'green' },
      { id: 'c', title: 'Auditorium C', eventColor: 'orange' },
      { id: 'd', title: 'Auditorium D', children: [
        { id: 'd1', title: 'Room D1' },
        { id: 'd2', title: 'Room D2' }
      ] },
      { id: 'e', title: 'Auditorium E' },
      { id: 'f', title: 'Auditorium F', eventColor: 'red' }
    ]
  });
}

Now we begin with the basic structure we will require for the scheduler in the template files. Since we need the option of dragging and dropping external events, we will split the page into two columns and make a separate component for the list of external events, aptly named external-event-list. Hence scheduler.hbs will have the following grid column structure:

<div class="ui grid">
<div class="row">
<div class="three wide column">
{{scheduler/external-event-list}}
</div>
<div class="thirteen wide column">
{{full-calendar events=model.events
editable=true
resources=model.rooms
header=header
views=views
viewName='timelineDay'
drop=(action 'drop')
eventReceive=(action 'eventReceive')
ondragover=(action 'eventDrop')
}}
</div>
</div>
</div>

It is important to note here that FullCalendar has various callbacks which are triggered when events are dragged and dropped. The editable property allows the user to change an event’s venue and timeline, and is enabled only for the organiser of the event. The resources refer to the rooms/locations where the sessions/events will be held. Since Ember executes functionality via functions, each standard FullCalendar callback has been translated into an action; please see the official docs for what each callback does. The only thing which remains is the list of external events. It is necessary for each actual event to be wrapped in an fc-event span, as these are the default classes for FullCalendar, and the calendar fetches the event or session name from these spans.

<div id='external-events'>
  <h4 class="ui header">{{t 'Events'}}</h4>
  <span class='fc-event' draggable="true">My Event 1</span>
  <span class='fc-event' draggable="true">My Event 2</span>
  <span class='fc-event' draggable="true">My Event 3</span>
</div>

Whenever an external event is dragged and dropped onto the scheduler, it triggers the drop action, which fetches the data from the dragged element and creates an event object for the scheduler; a minimal sketch of such a handler is shown below.
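
This sketch assumes the handler lives in the route’s controller; the one-hour default duration and the pushObject call are illustrative, not the project’s actual code:

actions: {
  // called by fullcalendar when an external fc-event span is dropped;
  // `date` is a moment object and `resourceId` identifies the target room
  drop(date, jsEvent, ui, resourceId) {
    this.get('model.events').pushObject({
      title      : jsEvent.target.textContent.trim(),
      start      : date.format(),
      end        : date.clone().add(1, 'hour').format(), // assumed default slot
      resourceId : resourceId
    });
  }
}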

Resources

Tags:

Open event, Open event frontend, ember JS, ember service, semantic UI, ember-data, ember controllers, tickets, Open Event API, Ember models

Handling Requests for hasMany Relationships in Open Event Front-end

In Open Event Front-end we use Ember Data and the JSON API spec to integrate the application with the server. Ember Data provides an easy way to handle API requests; however, it does not support a direct POST for saving bulk data, which was the problem we faced while implementing event creation using the API.

In this blog we will take a look at how we implemented POST requests for saving hasMany relationships, using the example of the sessions-speakers route to see how we saved the tracks, microlocations & session-types. Let’s see how we did it.

Fetching the data from the server

Ember by default does not support hasMany requests for getting related model data. However, we can use an external add-on which enables hasMany GET requests; we use ember-data-has-many-query, which is a great add-on for querying the hasMany relations of a model.
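
A minimal wiring sketch, assuming the ModelMixin name from the add-on’s README (in the actual project the event model extends its own ModelBase rather than DS.Model directly):

import DS from 'ember-data';
import HasManyQuery from 'ember-data-has-many-query';

export default DS.Model.extend(HasManyQuery.ModelMixin, {
  tracks         : DS.hasMany('track'),
  microlocations : DS.hasMany('microlocation'),
  sessionTypes   : DS.hasMany('session-type')
});

With the mixin applied, the related data can be queried in the route’s model hook: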

let data = this.modelFor('events.view.edit');
data.tracks = data.event.get('tracks');
data.microlocations = data.event.get('microlocations');
data.sessionTypes = data.event.get('sessionTypes');
return RSVP.hash(data);

In the above example we are querying the tracks, microlocations & sessionTypes, which are hasMany relationships related to the events model. We can simply do a GET request for a related model:

data.event.get('tracks');

In the above example we are retrieving all the tracks of the event.

Sending a POST request for a hasMany relationship

Ember currently does not support saving bulk data via POST requests for hasMany relations. We solved this by doing a POST request for each individual item of the hasMany array.

We start by creating a `promises` array which will contain all the individual requests. We then iterate over the hasMany relations & push their save requests into the `promises` array, so that each request is an individual promise.

let promises = [];

// spread each mapped array so that `promises` stays a flat array of promises
promises.push(...this.get('model.event.tracks').toArray().map(track => track.save()));
promises.push(...this.get('model.event.sessionTypes').toArray().map(type => type.save()));
promises.push(...this.get('model.event.microlocations').toArray().map(location => location.save()));

Once we have all the promises, we use RSVP to make the POST requests. We make use of the all() method, which takes an array of promises as a parameter and resolves them all. If the promises do not resolve successfully, we simply notify the user using the notify service; else we redirect to the home page.

RSVP.Promise.all(promises)
  .then(() => {
    this.transitionToRoute('index');
  }, () => {
    this.get('notify').error(this.l10n.t('Data did not save. Please try again.'));
  });

As a result, we can now retrieve & create new tracks, microlocations & sessionTypes on the sessions-speakers route.

Thank you for reading the blog; you can check the source code for the example here.

Resources


Data Access Layer in Open Event Organizer Android App

Open Event Organizer is an Android app for organizers and entry managers. Its core feature is scanning a QR code to validate an attendee’s check-in. Other features of the app are displaying an overview of sales and ticket management. The app maintains a local database and syncs it with the Open Event API Server. The data access layer in the app is designed such that data is fetched from the server or taken from the local database according to the user’s need. For example, simply showing the event sales overview to the user fetches the data from the locally saved database. But when the user wants to see the latest data, the app needs to fetch it from the server, show it to the user, and also update the locally saved data for future reference. I will be talking about the data access layer in the Open Event Organizer App in this blog.

The app uses RxJava to perform all the background tasks. All the data access methods in the app return Observables, which are then subscribed to in the presenters to get the data items. So, for each data request, the app has to create an Observable which will either load the data from the locally saved database or fetch it from the API server. For this, the app has the AbstractObservableBuilder class, which gets to decide which Observable to return for a data request.

Relevant Code:

final class AbstractObservableBuilder<T> {
   ...
   ...
   @NonNull
   private Callable<Observable<T>> getReloadCallable() {
       return () -> {
           if (reload)
               return Observable.empty();
           else
               return diskObservable
                   .doOnNext(item -> Timber.d("Loaded %s From Disk on Thread %s",
                       item.getClass(), Thread.currentThread().getName()));
       };
   }

   @NonNull
   private Observable<T> getConnectionObservable() {
       if (utilModel.isConnected())
           return networkObservable
               .doOnNext(item -> Timber.d("Loaded %s From Network on Thread %s",
                   item.getClass(), Thread.currentThread().getName()));
       else
           return Observable.error(new Throwable(Constants.NO_NETWORK));
   }

   @NonNull
   private <V> ObservableTransformer<V, V> applySchedulers() {
       return observable -> observable
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread());
   }

   @NonNull
   public Observable<T> build() {
       if (diskObservable == null || networkObservable == null)
           throw new IllegalStateException("Network or Disk observable not provided");

       return Observable
               .defer(getReloadCallable())
               .switchIfEmpty(getConnectionObservable())
               .compose(applySchedulers());
   }
}


The class is used to build an abstract Observable which wraps both kinds of Observables: one making the data request to the API server, and one reading the locally saved database. Take a look at the build method. The method getReloadCallable provides the default observable to be subscribed to. It checks the parameter reload: if reload is false, meaning the data can be fetched from the locally saved database, it returns the disk observable; if reload is true, meaning the data request must be made to the API server, it returns an empty observable.

The method getConnectionObservable returns a network observable which makes the data request to the API server. In the build method, the switchIfEmpty operator is applied to the default observable (which is empty if reload is true), with the network observable passed to it. So when reload is true, the network observable is subscribed, and when it is false, the disk observable is subscribed. An example of using this class to make an events data request is:

public Observable<Event> getEvents(boolean reload) {
   Observable<Event> diskObservable = Observable.defer(() ->
       databaseRepository.getAllItems(Event.class)
   );

   Observable<Event> networkObservable = Observable.defer(() ->
       eventService.getEvents(JWTUtils.getIdentity(getAuthorization()))
           ...
           ...

   return new AbstractObservableBuilder<Event>(utilModel)
       .reload(reload)
       .withDiskObservable(diskObservable)
       .withNetworkObservable(networkObservable)
       .build();
}


So, according to the boolean parameter reload, the correct observable is subscribed to complete the data request. For instance, a presenter can use it as sketched below.
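
A hypothetical presenter-side usage sketch (eventRepository and the view methods are illustrative names, not the app’s actual interface):

// reload = true forces a fetch from the API server; applySchedulers()
// in the builder already delivers results on the Android main thread
eventRepository.getEvents(true)
    .subscribe(
        event -> view.showEvent(event),
        throwable -> view.showError(throwable.getMessage()));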

Links:
1. Documentation about the Operators in ReactiveX
2. Information about the Data Access Layer on Wikipedia

FOSSASIA at Google Code-In 2016 Grand Prize Trip

This year FOSSASIA came up with a whopping number of GCI participants, making it to the top. FOSSASIA is a mentor organization at Google Code-in, a contest which introduces pre-university students to open source development.

Every year Google conducts a grand prize trip for the GCI winners, and I represented FOSSASIA as a mentor.

FOSSASIA GCI winners and Mentor at Google Mountain View Campus.

Day 1: Meet and Greet with the Diverse Communities

We all headed to the San Francisco Google office and had a great time interacting with members of diverse open source organizations from different parts of the world. I had some interesting conversations with the kids about how they scheduled their sleep hours in order to complete tasks and get feedback from mentors in different time zones! I was also overwhelmed listening to their interests apart from open source contributions.

“I am a science enthusiast, mainly interested in Computer Science and its wide range of applications. I also enjoy playing the piano, reading, moving, and having engaging conversations with my friends. As a participant in the GCI contest, I got the chance to learn by doing, I got an insight of how it is like to work on a real open-source project, met some great people, helped others (and received help myself). Shortly, it was amazing, and I’m proud to have been a part of it,” shared one of our winners, Oana Rosca.

There were people from almost 14 different countries; in fact, FOSSASIA, as a team, was the most diverse group 🙂

Day 2: Award Ceremony

We had two winners from FOSSASIA: Arkhan Kaiser from Indonesia and Oana Rosca from Romania. In total there were 8 organizations with 16 winners. The award ceremony was celebrated on day 2, and each winner was felicitated by Chris DiBona, the director of the Google open source team.

Talks by Googlers

We had amazing speakers from Google who spoke about their work, experiences, and journey to Google. Our first speaker was Jeremy Allison, a notable contributor to “Samba” which is a free software re-implementation of the SMB/CIFS networking protocol. He spoke on “How the Internet works” and gave a deeper view of the internet magic.

We had various speakers from different domains such as Grant Grundler from the Chrome team, Lyman Missimer from Google Expeditions, Katie Dektar from the Making and Science team, Sean Lip from Oppia (Googler and Oppia org admin), Timothy Papandreou from Waymo and Andrew Selle from TensorFlow.

Day 3: Fun Activities

We had various fun activities organized by the Google team. I had a great time cruising toward Alcatraz Island. Later we had a walk on the Golden Gate Bridge. Then came the fun part of the tour, the cruise dinner, which was the best part of the day.

Day 4: End of the trip

Oana, Arkhan and I gave a nice presentation about our work during GCI. We spoke about all the amazing projects under FOSSASIA. One cool thing we did is that we “Doodled” our presentation 🙂 Here are a few images from the actual presentation.

The day ended well with loads of good memories and information. Thanks to the open source technologies and their availability along with a beautiful friendly community, these memories and connections will now remain for a lifetime.

Export an Event using APIs of Open Event Server

We in FOSSASIA’s Open Event Server project allow the organizer, co-organizer and the admins to export all the data related to an event in the form of an archive of JSON files. This way the data can be reused in other places for various purposes. The basic workflow is something like this:

  • Send a POST request to /events/{event_id}/export/json with a payload specifying whether you require the various media files.
  • The POST request starts a celery task in the background to extract the data related to the event and convert it to JSON.
  • The celery task url is returned as a response. Sending a GET request to this url gives the status of the task; if the status is either FAILED or SUCCESS, the corresponding error message or result is included.
  • Separate JSON files for the event, speakers, sessions, micro-locations, tracks, session types and custom forms are created.
  • All these files are then archived, and the zip is served at the endpoint /events/{event_id}/exports/{path}.
  • Sending a GET request to the above mentioned endpoint downloads a zip containing all the data related to the event.

Let’s dive into each of these points one-by-one

POST request (/events/{event_id}/export/json)

To make a POST request you first need JWT authentication, like for most of the other API endpoints. You need to send a payload containing settings for whether you want the media files related to the event to be downloaded along with the JSON files. An example payload looks like this:

{
    "image": true,
    "video": true,
    "document": true,
    "audio": true
}
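
A request to this endpoint might look like this with cURL (the host, event id and token are placeholders):

curl -H "Authorization: JWT <key>" \
     -H "Content-Type: application/json" \
     -X POST \
     -d '{"image": true, "video": true, "document": true, "audio": true}' \
     http://localhost:5000/v1/events/1/export/json

On the server, the view function handling this endpoint queues the celery task and returns its url: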

def export_event(event_id):
    from helpers.tasks import export_event_task

    settings = EXPORT_SETTING
    settings['image'] = request.json.get('image', False)
    settings['video'] = request.json.get('video', False)
    settings['document'] = request.json.get('document', False)
    settings['audio'] = request.json.get('audio', False)
    # queue task
    task = export_event_task.delay(
        current_identity.email, event_id, settings)
    # create Job
    create_export_job(task.id, event_id)

    # in case of testing
    if current_app.config.get('CELERY_ALWAYS_EAGER'):
        # send_export_mail(event_id, task.get())
        TASK_RESULTS[task.id] = {
            'result': task.get(),
            'state': task.state
        }
    return jsonify(
        task_url=url_for('tasks.celery_task', task_id=task.id)
    )


Taking the settings for the media files and the event id, we pass them as parameters to the export event celery task and queue it up. We then create an entry in the database with the task url, the event id and the user who triggered the export, to keep a record of the activity. After that we return the url for the celery task as the response to the user.

If the celery task is still underway, the response contains ‘state’: ‘WAITING’. Once the task is completed, the value of ‘state’ is either ‘FAILED’ or ‘SUCCESS’. If it is SUCCESS, the result of the task is returned too, in this case the download url for the zip.

Celery Task to Export Event

Exporting an event is a very time consuming process, and we don’t want it to get in the way of the user’s interaction with other services. So we needed a queueing system that queues the tasks and executes them in the background, without blocking the main worker from serving other user requests. We have used celery for this.

We have created a celery task named “export.event”, which calls event_export_task_base(), which in turn calls export_event_json(), where all the jsonification is carried out. To start the celery task, all we do is call export_event_task.delay(email, event_id, settings), and it returns a celery task object with a task id that can be used to check the status of the task.

@celery.task(base=RequestContextTask, name='export.event', bind=True)
def export_event_task(self, email, event_id, settings):
    event = safe_query(db, Event, 'id', event_id, 'event_id')
    try:
        logging.info('Exporting started')
        path = event_export_task_base(event_id, settings)
        # task_id = self.request.id.__str__()  # str(async result)
        download_url = path

        result = {
            'download_url': download_url
        }
        logging.info('Exporting done.. sending email')
        send_export_mail(email=email, event_name=event.name, download_url=download_url)
    except Exception as e:
        print(traceback.format_exc())
        result = {'__error': True, 'result': str(e)}
        logging.info('Error in exporting.. sending email')
        send_export_mail(email=email, event_name=event.name, error_text=str(e))

    return result


After exporting, the path to the export zip is returned. We then get the download endpoint and return it as the result of the celery task. In case there is an error in the celery task, we print the entire traceback in the celery worker and return the error as the result.

Make the Exported Zip Ready

We have a separate export_helpers.py file in the helpers module of the API for performing the various tasks related to exporting all the data of an event. The most important function in this file is export_event_json(), which accepts the event_id and the settings dictionary. The export helpers also contain global constant dictionaries defining the order in which the fields are to appear in the JSON files created while exporting.

Firstly, we create the directory for storing the exported JSON files and, finally, the archive of all of them. Then we have a global dictionary named EXPORTS which maps the tables to the corresponding models which we want to extract from the database and store as JSON. From the EXPORTS dict we get the model names, which we use to query the database with the given event_id and retrieve the data. After retrieving the data, we use another helper function named _order_json, which jsonifies the sqlalchemy data in the order mentioned in the dictionary. After this we download the media data, i.e. the slides, images, videos etc. related to that particular model, depending on the settings.

def export_event_json(event_id, settings):
    """
    Exports the event as a zip on the server and return its path
    """
    # make directory
    exports_dir = app.config['BASE_DIR'] + '/static/uploads/exports/'
    if not os.path.isdir(exports_dir):
        os.mkdir(exports_dir)
    dir_path = exports_dir + 'event%d' % int(event_id)
    if os.path.isdir(dir_path):
        shutil.rmtree(dir_path, ignore_errors=True)
    os.mkdir(dir_path)
    # save to directory
    for e in EXPORTS:
        if e[0] == 'event':
            query_obj = db.session.query(e[1]).filter(
                e[1].id == event_id).first()
            data = _order_json(dict(query_obj.__dict__), e)
            _download_media(data, 'event', dir_path, settings)
        else:
            query_objs = db.session.query(e[1]).filter(
                e[1].event_id == event_id).all()
            data = [_order_json(dict(query_obj.__dict__), e) for query_obj in query_objs]
            for count in range(len(data)):
                data[count] = _order_json(data[count], e)
                _download_media(data[count], e[0], dir_path, settings)
        data_str = json.dumps(data, indent=4, ensure_ascii=False).encode('utf-8')
        fp = open(dir_path + '/' + e[0], 'w')
        fp.write(data_str)
        fp.close()
    # add meta
    data_str = json.dumps(
        _generate_meta(), sort_keys=True,
        indent=4, ensure_ascii=False
    ).encode('utf-8')
    fp = open(dir_path + '/meta', 'w')
    fp.write(data_str)
    fp.close()
    # make zip
    shutil.make_archive(dir_path, 'zip', dir_path)
    dir_path = dir_path + ".zip"

    storage_path = UPLOAD_PATHS['exports']['zip'].format(
        event_id=event_id
    )
    uploaded_file = UploadedFile(dir_path, dir_path.rsplit('/', 1)[1])
    storage_url = upload(uploaded_file, storage_path)

    return storage_url


After we receive the JSON data from the _order_json() function, we create a dump of it using json.dumps with an indentation of 4 spaces and utf-8 encoding. Then we save this dump in a file named after the model from which the data was retrieved. This process is repeated for all the models mentioned in the EXPORTS dictionary. After all the JSON files are created and all the media has been downloaded, we make a zip of the folder.

To do this we use shutil.make_archive, which creates the zip. We then upload the zip to the storage service used by the server, such as S3, Google Storage, etc., and return the url through which it can be accessed.

Apart from this, the other major function in this file creates an export job entry in the database, so that we can keep track of which user started a task for which event; this helps with debugging and security.

Downloading the Zip File

After the exporting is completed, if you send a GET request to the task url, you get a response similar to this:

{
    "result": {
        "download_url": "http://localhost:5000/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
    },
    "state": "SUCCESS"
}

So on opening the download url in the browser or using any other tool, you can download the zip file.

One big question, however, remains: the workflow is okay, but how do you know, after sending the POST request, that the task is completed and ready to be downloaded? One way of solving this is a technique known as polling, where we send a GET request repeatedly after every fixed interval of time. From the POST request we get the url for the export task; we keep polling this task url until the state is either “FAILED” or “SUCCESS”. If it is SUCCESS, we show the download url somewhere on the website, which can then be clicked to download the archived export of the event.
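
A minimal polling sketch in Python (the requests library and the five-second interval are illustrative; the response keys match the example above):

import time
import requests

def poll_export(task_url, jwt_token, interval=5):
    """Poll the celery task url until the export either succeeds or fails."""
    headers = {'Authorization': 'JWT ' + jwt_token}
    while True:
        status = requests.get(task_url, headers=headers).json()
        if status['state'] == 'SUCCESS':
            return status['result']['download_url']
        if status['state'] == 'FAILED':
            raise RuntimeError('Export failed: %s' % status.get('result'))
        time.sleep(interval)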


Reference:


Uploading Files via APIs in the Open Event Server

There are two file upload endpoints: one for image uploads and another for all other files. The latter is used for uploading files such as slides, videos and other presentation materials for a session. So, in FOSSASIA’s Orga Server project, when we need to upload a file, we make an API request to this endpoint, which in turn uploads the file to the server and returns the url for the uploaded file. We then store this url in the corresponding row entry of the database.

Sending Data

The endpoint /upload/file accepts a POST request containing a multipart/form-data payload. If a single file is uploaded, it is sent under the key “file”; otherwise an array of files is sent under the key “files[]”.

A typical single file upload cURL request would look like this:

curl -H "Authorization: JWT <key>" -F file=@/path/to/file.pdf -X POST http://localhost:5000/v1/upload/file

A typical multi-file upload cURL request would look something like this:

curl -H "Authorization: JWT <key>" -F files[]=@/path/to/file1.pdf -F files[]=@/path/to/file2.pdf -X POST http://localhost:5000/v1/upload/file

Thus, unlike other endpoints in the Open Event Orga Server project, we don’t send a JSON-encoded request; instead, it is a form-data request.

Saving Files

We use different services such as S3, Google Cloud Storage and so on for storing the files, depending on the admin settings of the project. One can even ask to save the files locally by passing the GET parameter force_local=true. So, in the backend we have two cases to tackle: single file upload and multiple file upload.

Single File Upload

if 'file' in request.files:
    files = request.files['file']
    file_uploaded = uploaded_file(files=files)
    if force_local == 'true':
        files_url = upload_local(
            file_uploaded,
            UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
        )
    else:
        files_url = upload(
            file_uploaded,
            UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
        )


We get the file to be uploaded using request.files['file'], with the same key 'file' that was used in the payload. Then we use the uploaded_file() helper function to convert the file data received as payload into a proper file and store it in temporary storage. After this, if force_local is set to true, we use the upload_local helper function to upload it to the local storage, i.e. the server where the application is hosted; else we use whatever service is set by the admin in the admin settings.

In the uploaded_file() function of the helpers module, we extract the filename and the extension of the file from the form-data payload. Then we check whether a suitable directory already exists; if it doesn’t, we create one and then save the file in it:

extension = files.filename.split('.')[1]
filename = get_file_name() + '.' + extension
filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
if not os.path.isdir(filedir):
    os.makedirs(filedir)
file_path = filedir + filename
files.save(file_path)


After that, the upload function gets the settings key for either S3 or Google Storage and uses the corresponding functions to upload this temporary file to the storage.

Multiple File Upload

elif 'files[]' in request.files:
    files = request.files.getlist('files[]')
    files_uploaded = uploaded_file(files=files, multiple=True)
    files_url = []
    for file_uploaded in files_uploaded:
        if force_local == 'true':
            files_url.append(upload_local(
                file_uploaded,
                UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
            ))
        else:
            files_url.append(upload(
                file_uploaded,
                UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
            ))


In case of multiple file upload, we get a list of files instead of a single file. Hence we get the list of files sent as form data using request.files.getlist('files[]'). Here 'files' is the key that is used, and since it is an array of file content, it is written as files[]. We again use the uploaded_file() function to get back a list of temporary files from the content uploaded as form-data. After that, we loop over all the temporary files, stored in the variable files_uploaded in the above code. Next, for every file in the list, we use the upload() helper function to save it in the storage system of the application.

In the uploaded_file() function of the helpers module, things work differently this time, since multiple files and their content are sent. We loop over all the files received, and for each of them we find the filename and extension, create the directories to save them in, and save the content of the file with the corresponding filename and extension. After a file has been saved, we append it to a list, and finally we return the entire list of uploaded files.

if multiple:
    files_uploaded = []
    for file in files:
        extension = file.filename.split('.')[1]
        filename = get_file_name() + '.' + extension
        filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
        if not os.path.isdir(filedir):
            os.makedirs(filedir)
        file_path = filedir + filename
        file.save(file_path)
        files_uploaded.append(UploadedFile(file_path, filename))


The upload() function then finally returns the urls for the files after saving them.

API Response

The file upload endpoint returns either a single url or a list of urls, depending on whether a single file or multiple files were uploaded. The url for a file depends on the storage system used. After the url or list of urls is received, we jsonify the entire response so that we can send a proper JSON response that can be parsed in the frontend and used to save the corresponding information to the database via the other API services.

A typical single file upload response looks like this:

{
    "url": "https://xyz.storage.com/asd/fgh/hjk/12332233.docx"
}

Multiple file upload response looks like this:

{
    "url": [
        "https://xyz.storage.com/asd/fgh/hjk/12332233.docx",
        "https://xyz.storage.com/asd/fgh/hjk/66777777.ppt"
    ]
}

You can find the related documentation and example payloads on how to use this endpoint to upload files here: http://open-event-api.herokuapp.com/#upload-file-upload.


Reference:

How User Event Roles relationship is handled in Open Event Server

Users and Events are the most important parts of FOSSASIA‘s Open Event Server. Through the advent and evolution of the project, the way of implementing user event roles has gone through many changes. When the Open Event Organizer Server was first decoupled to serve as an API server, the user event roles, like all other models, were served as a separate API to provide a data layer above the database for making changes to the entries. Whenever a new role invite was accepted, a POST request was made to the Users Events Roles table to insert the new entry. Whenever there was a change in the role of a user for a particular event, a PATCH request was made. Permissions were set such that a user could insert only his/her own user id and not someone else’s entry.

def before_create_object(self, data, view_kwargs):
        """
        method to create object before post
        :param data:
        :param view_kwargs:
        :return:
        """
        if view_kwargs.get('event_id'):
            event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
            data['event_id'] = event.id

        elif view_kwargs.get('event_identifier'):
            event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
            data['event_id'] = event.id
        email = safe_query(self, User, 'id', data['user'], 'user_id').email
        invite = self.session.query(RoleInvite).filter_by(email=email).filter_by(role_id=data['role'])\
                .filter_by(event_id=data['event_id']).one_or_none()
        if not invite:
            raise ObjectNotFound({'parameter': 'invite'}, "Object: not found")

    def after_create_object(self, obj, data, view_kwargs):
        """
        method to create object after post
        :param data:
        :param view_kwargs:
        :return:
        """
        email = safe_query(self, User, 'id', data['user'], 'user_id').email
        invite = self.session.query(RoleInvite).filter_by(email=email).filter_by(role_id=data['role'])\
                .filter_by(event_id=data['event_id']).one_or_none()
        if invite:
            invite.status = "accepted"
            save_to_db(invite)
        else:
            raise ObjectNotFound({'parameter': 'invite'}, "Object: not found")


Initially, when a POST request was sent to the User Event Roles API endpoint, we would first check whether a role invite from the organizer existed for that particular combination of user, event and role. Only if it existed would we make an entry in the database; else we would raise an “Object: not found” error. After the entry was made in the database, we would update the role_invites table to change the status of the role invite.

Later it was decided that we need not have a separate API endpoint. Since API endpoints are all user accessible and may cause problems with permissions, it was decided that the user event roles would be handled entirely through the model instead of a separate API. Also, the workflow wasn’t very clear for a user. So we settled on a workflow where the role_invites table is first updated with the new status, and after the update has been made, we make an entry in the users_events_roles table with the data we get from the role_invites table.

When a role invite is accepted, sqlalchemy’s add() and commit() are used to insert a new entry into the table. When a role is changed for a particular user, we make a query, update the values and save them back into the table. So the entire process is handled at the data layer level rather than the API level.
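
Under the hood, the insert amounts to something like this sketch (assuming flask-sqlalchemy’s db.session, which is what the save_to_db helper shown later wraps):

# create the association row and persist it
uer = UsersEventsRoles(user, event, role)
db.session.add(uer)
db.session.commit()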

The code implementation is as follows:

def before_update_object(self, role_invite, data, view_kwargs):
        """
        Method to edit object
        :param role_invite:
        :param data:
        :param view_kwargs:
        :return:
        """
        user = User.query.filter_by(email=role_invite.email).first()
        if user:
            if not has_access('is_user_itself', id=user.id):
                raise UnprocessableEntity({'source': ''}, "Only users can edit their own status")
        if not user and not has_access('is_organizer', event_id=role_invite.event_id):
            raise UnprocessableEntity({'source': ''}, "User not registered")
        if not has_access('is_organizer', event_id=role_invite.event_id) and (len(data.keys())>1 or 'status' not in data):
            raise UnprocessableEntity({'source': ''}, "You can only change your status")

    def after_update_object(self, role_invite, data, view_kwargs):
        user = User.query.filter_by(email=role_invite.email).first()
        if 'status' in data and data['status'] == 'accepted':
            role = Role.query.filter_by(name=role_invite.role_name).first()
            event = Event.query.filter_by(id=role_invite.event_id).first()
            uer = UsersEventsRoles.query.filter_by(user=user).filter_by(event=event).filter_by(role=role).first()
            if not uer:
                uer = UsersEventsRoles(user, event, role)
                save_to_db(uer, 'Role Invite accepted')


In the above code, there are two main functions: before_update_object, which gets executed before the entry in the role_invites table is updated, and after_update_object, which gets executed after.

In before_update_object, we verify that the user is accepting or rejecting his/her own role invite and not someone else’s. Also, we ensure that the user is allowed to update only the status of the role invite and not any other sensitive data like the role_name or email. If the user tries to edit any other field except status, an error is shown, unless he/she has organizer access, in which case the other fields of the role_invites table can be edited as well. The has_access() helper permission function ensures these permission checks.

In after_update_object we make the entry in the users events roles table. From the role_invite parameter we can get the exact values of the newly updated row in the table. We use the data of this role invite to find the user, event and role associated with it. Then we create a UsersEventsRoles object with the user, event and role as constructor parameters, and use the save_to_db helper function to save the new entry to the database. The save_to_db function uses the session.add() and session.commit() functions of flask-sqlalchemy to add the new entry directly to the database.

Thus we maintain the flow of the user event roles relationship. All the database entries and operations related to the users-events-roles table remain encapsulated from the client user, so that they can use the various API features without worrying about the complications of the implementation.


Reference: