Handling Requests for hasMany Relationships in Open Event Front-end

In Open Event Front-end we use Ember Data and the JSON API spec to integrate the application with the server. Ember Data provides an easy way to handle API requests; however, it does not support a direct POST for saving bulk data, which was the problem we faced while implementing event creation using the API.

In this blog post we will take a look at how we implemented POST requests for saving hasMany relationships, using the sessions-speakers route as an example to see how we saved the tracks, microlocations & session-types. Let's see how we did it.

Fetching the data from the server

Ember by default does not support GET requests for the hasMany relations of a model. However, we can use an external add-on which enables hasMany GET requests; we use ember-data-has-many-query, which is a great add-on for querying the hasMany relations of a model.

let data = this.modelFor('events.view.edit');
data.tracks = data.event.get('tracks');
data.microlocations = data.event.get('microlocations');
data.sessionTypes = data.event.get('sessionTypes');
return RSVP.hash(data);

In the above example we are querying the tracks, microlocations & sessionTypes, which are hasMany relationships related to the event model. We can simply do a GET request for a related model:

data.event.get('tracks');

In the above example we are retrieving all the tracks of the event.

Sending a POST request for hasMany relationship
Ember currently does not support bulk-data POST requests for hasMany relations. We solved this by making a POST request for each individual record of the hasMany array.

We start by creating a `promises` array which contains all the individual requests. We then iterate over each hasMany relation & push its save request into the `promises` array, so each request is an individual promise.

let promises = [];

// spread each mapped array so `promises` stays a flat array of save promises
promises.push(...this.get('model.event.tracks').toArray().map(track => track.save()));
promises.push(...this.get('model.event.sessionTypes').toArray().map(type => type.save()));
promises.push(...this.get('model.event.microlocations').toArray().map(location => location.save()));

Once we have all the promises, we use RSVP to make the POST requests. We make use of the all() method, which takes an array of promises as a parameter and resolves them all. If the promises do not resolve successfully, we simply notify the user using the notify service; otherwise we redirect to the home page.

RSVP.Promise.all(promises)
  .then(() => {
    this.transitionToRoute('index');
  }, () => {
    this.get('notify').error(this.l10n.t('Data did not save. Please try again'));
  });

As a result of this, we can now retrieve & create new tracks, microlocations & sessionTypes on the sessions-speakers route.

Thank you for reading the blog; you can check the source code for the example here.


Data Access Layer in Open Event Organizer Android App

Open Event Organizer is an Android App for Organizers and Entry Managers. Its core feature is scanning a QR Code to validate Attendee Check In. Other features of the App are displaying an overview of sales and ticket management. The App maintains a local database and syncs it with the Open Event API Server. The data access layer in the App is designed such that data is fetched from the server or taken from the local database according to the user's need. For example, simply showing the event sales overview to the user fetches the data from the locally saved database. But when the user wants to see the latest data, the App needs to fetch it from the server, show it to the user and also update the locally saved data for future reference. I will be talking about the data access layer in the Open Event Organizer App in this blog post.

The App uses RxJava to perform all the background tasks. All the data access methods in the App return Observables, which are then subscribed to in the presenter to get the data items. So, according to the data request, the App has to create an Observable which will either load the data from the locally saved database or fetch it from the API server. For this, the App has an AbstractObservableBuilder class, which gets to decide which Observable to return on a data request.

Relevant Code:

final class AbstractObservableBuilder<T> {
   ...
   ...
   @NonNull
   private Callable<Observable<T>> getReloadCallable() {
       return () -> {
           if (reload)
               return Observable.empty();
           else
               return diskObservable
                   .doOnNext(item -> Timber.d("Loaded %s From Disk on Thread %s",
                       item.getClass(), Thread.currentThread().getName()));
       };
   }

   @NonNull
   private Observable<T> getConnectionObservable() {
       if (utilModel.isConnected())
           return networkObservable
               .doOnNext(item -> Timber.d("Loaded %s From Network on Thread %s",
                   item.getClass(), Thread.currentThread().getName()));
       else
           return Observable.error(new Throwable(Constants.NO_NETWORK));
   }

   @NonNull
   private <V> ObservableTransformer<V, V> applySchedulers() {
       return observable -> observable
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread());
   }

   @NonNull
   public Observable<T> build() {
       if (diskObservable == null || networkObservable == null)
           throw new IllegalStateException("Network or Disk observable not provided");

       return Observable
               .defer(getReloadCallable())
               .switchIfEmpty(getConnectionObservable())
               .compose(applySchedulers());
   }
}

 

The class is used to build an abstract Observable which wraps both types of Observables: one making the data request to the API server and one reading the locally saved database. Take a look at the build method. The method getReloadCallable provides the observable which will be subscribed by default. It checks the parameter reload: if reload is false, meaning the data can be fetched from the locally saved database, it returns the disk observable; if reload is true, meaning the data request must be made to the API server, it returns an empty observable.

The method getConnectionObservable returns a network observable which makes the data request to the API server. In the build method, the switchIfEmpty operator is applied to the default observable (which is empty if reload is true) with the network observable passed to it. So when reload is true the network observable is subscribed, and when it is false the disk observable is subscribed. An example of using this class to make an events data request:

public Observable<Event> getEvents(boolean reload) {
   Observable<Event> diskObservable = Observable.defer(() ->
       databaseRepository.getAllItems(Event.class)
   );

   Observable<Event> networkObservable = Observable.defer(() ->
       eventService.getEvents(JWTUtils.getIdentity(getAuthorization()))
           ...
           ...

   return new AbstractObservableBuilder<Event>(utilModel)
       .reload(reload)
       .withDiskObservable(diskObservable)
       .withNetworkObservable(networkObservable)
       .build();
}

 

So, according to the boolean parameter reload, the correct observable is subscribed to complete the data request.

Links:
1. Documentation about the Operators in ReactiveX
2. Information about the Data Access Layer on Wikipedia


Using Data Access Object to Store Information

We often need to store the information received from the network so that we can retrieve it later. Although we can store and read data directly, using a data access object to store information enables us to perform data operations without exposing the details of the database. Using a data access object is also a best practice in software engineering. Recently I modified the Connfa app to store the data received in the Open Event format. In this blog post, I describe how to use a data access object.

The goal is to abstract and encapsulate all access to the data and provide an interface. This is called the Data Access Object pattern. In a nutshell, the DAO "knows" which data source (that could be a database, a flat file or even a WebService) to connect to and is specific to this data source. It makes no difference to the application whether it accesses a relational database or parses XML files (using a DAO). The DAO is usually able to create an instance of a data object ("to read data") and also to persist data ("to save data") to the data source.

Consider the example from the Connfa app in which we get the tracks from the API and store them in an SQL database. We use a DAO to create a layer between the model and the database. AbstractEntityDAO is an abstract class which has the functions to perform CRUD operations; we extend it to implement them in our DAO model. Here is the TrackDao structure:

public class TrackDao extends AbstractEntityDAO<Track, Long> {

    public static final String TABLE_NAME = "table_track";

    @Override
    protected String getSearchCondition() {
        return "_id=?";
    }
    
    ...
}

Find the complete class, with the detailed methods implementing search conditions, key columns, instance creation, etc., here.

Here is a general method to get the data from the database. getFacade() returns the requested facade object for the underlying database, which we open before reading and close when we are done:

public List<ClassToSave> getAllSafe() {
   ILAPIDBFacade facade = getFacade();
   try {
       facade.open();
       return getAll();

   } finally {
       facade.close();
   }
}

Now we can create an instance and use these methods instead of using SQL operations directly. This function gets the data and sorts it by track order:

private TrackDao mTrackDao;
 public List<Track> getTracks() {
   List<Track> tracks = mTrackDao.getAllSafe();
   Collections.sort(tracks, new Comparator<Track>() {
       @Override
       public int compare(Track track, Track track2) {
           return Double.compare(track.getOrder(), track2.getOrder());
       }
   });
   return tracks;
}


Receiving Data From the Network Asynchronously

We often need to receive information from the network in Android apps. Although it is a simple process, a problem arises when it is done on the main user interaction thread. To understand this problem better, consider an app which shows some text downloaded from an online database. When the user clicks a button to display the text, it may take some time to get the HTTP response. So what does the activity do in that time? It creates a lag in the user interface and makes the app stop responding. Recently I implemented this in the Connfa App to download data in the Open Event Format.

To solve this problem we use a concurrent thread to download the data and send the result back for processing, separating it from the main user interaction thread. AsyncTask allows you to perform background operations and publish results on the UI thread without having to manipulate threads and/or handlers.

AsyncTask is designed to be a helper class around Thread and Handler and does not constitute a generic threading framework. AsyncTasks should ideally be used for short operations (a few seconds at the most). If you need to keep threads running for long periods of time, it is highly recommended you use the various APIs provided by the java.util.concurrent package such as Executor, ThreadPoolExecutor and FutureTask.

An asynchronous task is defined by a computation that runs on a background thread and whose result is published on the UI thread. An asynchronous task is defined by 3 generic types, called Params, Progress and Result, and 4 steps, called onPreExecute, doInBackground, onProgressUpdate and onPostExecute.

In this blog, I describe how to download data in an android app with AsyncTask using OkHttp library.

First, you need to add the dependency for the OkHttp library:

compile 'com.squareup.okhttp:okhttp:2.4.0' 


Extend the AsyncTask class and override its functions to modify them accordingly. Here I declare a class named AsyncDownloader extending AsyncTask, with a String parameter (our URL), Integer progress values and a String result, which will be our JSON data. Instead of using AsyncTask's get() function, we implement an interface to receive the data, so the main user interaction thread does not have to wait for the result to be returned.

public class AsyncDownloader extends AsyncTask<String, Integer, String> {
    private String jsonData = null;

    public interface JsonDataSetter {
        void setJsonData(String str);
    }


    JsonDataSetter jsonDataSetterListener;

    public AsyncDownloader(JsonDataSetter context) {

        this.jsonDataSetterListener = context;
    }

    @Override
    protected void onPreExecute() {
        super.onPreExecute();
    }

    @Override
    protected String doInBackground(String... params) {

        String url = params[0];

        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder()
                .url(url)
                .build();
        Call call = client.newCall(request);

        Response response = null;

        try {
            response = call.execute();

            if (response.isSuccessful()) {
                jsonData = response.body().string();

            } else {
                jsonData = null;
            }

        } catch (IOException e) {
            e.printStackTrace();
        }


        return jsonData;
    }

    @Override
    protected void onPostExecute(String jsonData) {
        super.onPostExecute(jsonData);
        jsonDataSetterListener.setJsonData(jsonData);
    }
}


We download the data in doInBackground, making a request using the OkHttp library, and send the result back in the onPostExecute method using our interface method. See above for the complete class implementation.

You can create an instance of AsyncDownloader in an activity and implement the interface to receive the data.

Tip: Always cancel an AsyncTask after use to avoid having many concurrent tasks at one time.

Find the complete code here.


Advantage of Open Event Format over xCal, pentabarf/frab XML and iCal

Event apps like Giggity and Giraffe use event formats like xCal, pentabarf/frab XML, iCal etc. In this blog post, I present some of the advantages of using FOSSASIA's Open Event data format over other formats. I added support for the Open Event format in these two apps, so I describe the advantages and improvements that were made with respect to them.

  • The main problem faced in the Giggity app is that data like social links, microlocations, the link to the logo file etc. cannot all be fetched from a single file, so a separate raw file has to be added to provide this data. The Open Event format provides all this information from a single URL received from the server, so there is no need for a separate file.
  • Here is the pentabarf format data for the FOSSASIA 2016 conference, excluding sessions. Although it provides the basic information, it leaves out the logo URL, the longitude and latitude of microlocations (rooms), and links to social media and the website. The Open Event format provides all the missing details, including extra information like language, comments etc. See the FOSSASIA 2016 Open Event format sample.
<conference>
<title>FOSSASIA 2016</title>
<subtitle/>
<venue>Science Centre Road</venue>
<city>Singapore</city>
<start>2016-03-18</start>
<end>2016-03-20</end>
<days>3</days>
<day_change>09:00:00</day_change>
<timeslot_duration>00:00:00</timeslot_duration>
</conference>
  • Parsing the received file gets very complicated in the case of iCal, xCal etc., as tags need to be matched to extract the data. In contrast, there are various libraries available for parsing JSON data, so we can simply create an array list of the received data and send it to the adapter. See this example for working code. You can also look at the parser for iCal to compare the complexity of the code.
  • Another common problem with the structure of the received formats is that it can become complicated to define the sub-parts of a single element. For example, for the location we define latitude and longitude separately, while in the iCal format they are just separated by a comma:

iCal

GEO:1.333194;103.736132

JSON

{
           "id": 1,
           "name": "Stage 1",
           "floor": 0,
           "latitude": 37.425420,
           "longitude": -122.080291,
           "room": null
}

And the information provided is more detailed.

  • Open Event format is well documented and it makes it easier for other developers to work on it. Find the documentation here.


Export an Event using APIs of Open Event Server

In FOSSASIA's Open Event Server project, we allow the organizer, co-organizer and the admins to export all the data related to an event in the form of an archive of JSON files. This way the data can be reused in other places for various purposes. The basic workflow is something like this:

  • Send a POST request to /events/{event_id}/export/json with a payload specifying whether you require the various media files.
  • The POST request starts a celery task in the background to extract the data related to the event and jsonify it.
  • The celery task URL is returned as the response. Sending a GET request to this URL gives the status of the task. If the status is either FAILED or SUCCESS, there is a corresponding error message or result.
  • Separate JSON files for events, speakers, sessions, micro-locations, tracks, session types and custom forms are created.
  • All these files are then archived, and the zip is served at the endpoint /events/{event_id}/exports/{path}.
  • Sending a GET request to the above-mentioned endpoint downloads a zip containing all the data related to the event.

Let's dive into each of these points one by one.

POST request ( /events/{event_id}/export/json)

For making a POST request you first need JWT authentication, like most of the other API endpoints. You need to send a payload containing the settings for whether you want the media files related to the event to be downloaded along with the JSON files. An example payload looks like this:

{
   "image": true,
   "video": true,
   "document": true,
   "audio": true
 }

def export_event(event_id):
    from helpers.tasks import export_event_task

    settings = EXPORT_SETTING
    settings['image'] = request.json.get('image', False)
    settings['video'] = request.json.get('video', False)
    settings['document'] = request.json.get('document', False)
    settings['audio'] = request.json.get('audio', False)
    # queue task
    task = export_event_task.delay(
        current_identity.email, event_id, settings)
    # create Job
    create_export_job(task.id, event_id)

    # in case of testing
    if current_app.config.get('CELERY_ALWAYS_EAGER'):
        # send_export_mail(event_id, task.get())
        TASK_RESULTS[task.id] = {
            'result': task.get(),
            'state': task.state
        }
    return jsonify(
        task_url=url_for('tasks.celery_task', task_id=task.id)
    )


Taking the settings for the media files and the event id, we pass them as parameters to the export event celery task and queue it up. We then create an entry in the database with the task URL, the event id and the user who triggered the export, to keep a record of the activity. After that we return the URL for the celery task to the user as the response.

If the celery task is still underway, it shows a response with 'state': 'WAITING'. Once the task is completed, the value of 'state' is either 'FAILED' or 'SUCCESS'. If it is 'SUCCESS', it returns the result of the task, in this case the download URL for the zip.
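While the export is still running, polling the task URL returns a response of roughly this shape (illustrative, based on the states described above):

{
   "state": "WAITING"
 }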

Celery Task to Export Event

Exporting an event is a very time-consuming process, and we don't want it to come in the way of the user's interaction with other services. So we needed a queueing system that would queue the tasks and execute them in the background, without keeping the main worker from serving other user requests. We have used celery to queue tasks in the background and execute them without disturbing other user requests.

We have created a celery task named "export.event" which calls event_export_task_base(), which in turn calls export_event_json(), where all the jsonification is carried out. To start the celery task all we do is call export_event_task.delay(event_id, settings), and it returns a celery task object with a task id that can be used to check the status of the task.

@celery.task(base=RequestContextTask, name='export.event', bind=True)
def export_event_task(self, email, event_id, settings):
    event = safe_query(db, Event, 'id', event_id, 'event_id')
    try:
        logging.info('Exporting started')
        path = event_export_task_base(event_id, settings)
        # task_id = self.request.id.__str__()  # str(async result)
        download_url = path

        result = {
            'download_url': download_url
        }
        logging.info('Exporting done.. sending email')
        send_export_mail(email=email, event_name=event.name, download_url=download_url)
    except Exception as e:
        print(traceback.format_exc())
        result = {'__error': True, 'result': str(e)}
        logging.info('Error in exporting.. sending email')
        send_export_mail(email=email, event_name=event.name, error_text=str(e))

    return result


After exporting, a path to the export zip is returned. We then get the download endpoint and return it as the result of the celery task. In case there is an error in the celery task, we print the entire traceback in the celery worker and return the error as the result.

Make the Exported Zip Ready

We have a separate export_helpers.py file in the helpers module of the API for performing the various tasks related to exporting all the data of an event. The most important function in this file is export_event_json(). It accepts the event_id and the settings dictionary. In the export helpers we have global constant dictionaries which contain the order in which the fields are to appear in the JSON files created while exporting.

Firstly, we create the directory for storing the exported JSON and, eventually, the archive of all the JSON files. Then we have a global constant named EXPORTS which lists all the tables and their corresponding Models which we want to extract from the database and store as JSON. From EXPORTS we get the Model names, and we use these Models to make queries with the given event_id to retrieve the data from the database. After retrieving the data, we use another helper function named _order_json which jsonifies the sqlalchemy data in the order that is mentioned in the dictionary. After this we download the media data, i.e. the slides, images, videos etc. related to that particular Model, depending on the settings.
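For reference, EXPORTS is iterated as a sequence of (name, Model) entries, with e[0] the table name and e[1] the Model, as seen in the code below. A minimal sketch of what it might contain (the exact entries and Model names here are illustrative):

EXPORTS = [
    ('event', Event),
    ('microlocations', Microlocation),
    ('sessions', Session),
    ('speakers', Speaker),
    ('tracks', Track),
    ('session_types', SessionType),
    ('custom_forms', CustomForms),
]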

def export_event_json(event_id, settings):
    """
    Exports the event as a zip on the server and return its path
    """
    # make directory
    exports_dir = app.config['BASE_DIR'] + '/static/uploads/exports/'
    if not os.path.isdir(exports_dir):
        os.mkdir(exports_dir)
    dir_path = exports_dir + 'event%d' % int(event_id)
    if os.path.isdir(dir_path):
        shutil.rmtree(dir_path, ignore_errors=True)
    os.mkdir(dir_path)
    # save to directory
    for e in EXPORTS:
        if e[0] == 'event':
            query_obj = db.session.query(e[1]).filter(
                e[1].id == event_id).first()
            data = _order_json(dict(query_obj.__dict__), e)
            _download_media(data, 'event', dir_path, settings)
        else:
            query_objs = db.session.query(e[1]).filter(
                e[1].event_id == event_id).all()
            data = [_order_json(dict(query_obj.__dict__), e) for query_obj in query_objs]
            for count in range(len(data)):
                data[count] = _order_json(data[count], e)
                _download_media(data[count], e[0], dir_path, settings)
        data_str = json.dumps(data, indent=4, ensure_ascii=False).encode('utf-8')
        fp = open(dir_path + '/' + e[0], 'w')
        fp.write(data_str)
        fp.close()
    # add meta
    data_str = json.dumps(
        _generate_meta(), sort_keys=True,
        indent=4, ensure_ascii=False
    ).encode('utf-8')
    fp = open(dir_path + '/meta', 'w')
    fp.write(data_str)
    fp.close()
    # make zip
    shutil.make_archive(dir_path, 'zip', dir_path)
    dir_path = dir_path + ".zip"

    storage_path = UPLOAD_PATHS['exports']['zip'].format(
        event_id=event_id
    )
    uploaded_file = UploadedFile(dir_path, dir_path.rsplit('/', 1)[1])
    storage_url = upload(uploaded_file, storage_path)

    return storage_url


After we receive the JSON data from the _order_json() function, we create a dump of the JSON using json.dumps with an indentation of 4 spaces and utf-8 encoding. Then we save this dump in a file named after the model from which the data was retrieved. This process is repeated for all the models mentioned in the EXPORTS list. After all the JSON files are created and all the media is downloaded, we make a zip of the folder.

To do this we use shutil.make_archive. It creates a zip, uploads the zip to the storage service used by the server, such as S3, Google Storage etc., and returns the URL through which it can be accessed.

Apart from this, the other major function in this file creates an export job entry in the database so that we can keep track of which user started a task related to which event, which helps us with debugging and security.

Downloading the Zip File

After the exporting is completed, if you send a GET request to the task url, you get a response similar to this:

{
   "result": {
     "download_url": "http://localhost:5000/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
   },
   "state": "SUCCESS"
 }

So on opening the download url in the browser or using any other tool, you can download the zip file.

One big question, however, remains: the workflow is fine, but how do you know, after sending the POST request, that the task is completed and ready to be downloaded? One way of solving this problem is a technique known as polling, where we send a GET request repeatedly at a fixed interval of time. From the POST request we get the URL for the export task, and we keep polling this task URL until the state is either "FAILED" or "SUCCESS". If it is a SUCCESS, you append the download URL somewhere in your website, which can then be clicked to download the archived export of the event.
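A minimal polling client could look like this. This is a sketch using Python's requests library; the host, API prefix, event id, token and polling interval are placeholder values based on the examples above:

import time
import requests

HOST = 'http://localhost:5000'                    # placeholder host
HEADERS = {'Authorization': 'JWT <key>'}          # placeholder JWT token

# Start the export; the response carries the celery task URL
payload = {'image': True, 'video': True, 'document': True, 'audio': True}
response = requests.post(HOST + '/v1/events/1/export/json',
                         json=payload, headers=HEADERS)
task_url = response.json()['task_url']

# Poll the task URL at a fixed interval until the task settles
while True:
    status = requests.get(HOST + task_url, headers=HEADERS).json()
    if status['state'] in ('SUCCESS', 'FAILED'):
        break
    time.sleep(2)

if status['state'] == 'SUCCESS':
    print('Download the export from:', status['result']['download_url'])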

 


Uploading Files via APIs in the Open Event Server

There are two file upload endpoints: one for image uploads and the other for all other files being uploaded. The latter endpoint is used for uploading files such as slides, videos and other presentation materials for a session. So, in FOSSASIA's Orga Server project, when we need to upload a file, we make an API request to this endpoint, which in turn uploads the file to the server and returns the URL for the uploaded file. We then store this URL in the database with the corresponding row entry.

Sending Data

The endpoint /upload/file accepts a POST request containing a multipart/form-data payload. If a single file is uploaded, it is sent under the key "file"; otherwise an array of files is sent under the key "files".

A typical single file upload cURL request would look like this:

curl -H "Authorization: JWT <key>" -F file=@file.pdf -X POST http://localhost:5000/v1/upload/file

A typical multi-file upload cURL request would look something like this:

curl -H "Authorization: JWT <key>" -F files[]=@file1.pdf -F files[]=@file2.pdf -X POST http://localhost:5000/v1/upload/file

Thus, unlike other endpoints in the Open Event Orga Server project, we don't send a JSON-encoded request; instead it is a form-data request.
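The same uploads can be made programmatically; for example, a sketch with Python's requests library (the host and token are placeholders, and the field names mirror the cURL requests above and the handler code below):

import requests

HOST = 'http://localhost:5000'
HEADERS = {'Authorization': 'JWT <key>'}  # placeholder JWT token

# Single file upload, sent under the key 'file'
with open('file.pdf', 'rb') as f:
    r = requests.post(HOST + '/v1/upload/file',
                      files={'file': f}, headers=HEADERS)
    print(r.json()['url'])          # a single URL

# Multiple file upload, each part sent under the key 'files[]'
with open('file1.pdf', 'rb') as f1, open('file2.pdf', 'rb') as f2:
    r = requests.post(HOST + '/v1/upload/file',
                      files=[('files[]', f1), ('files[]', f2)],
                      headers=HEADERS)
    print(r.json()['url'])          # a list of URLs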

Saving Files

We use different services such as S3, Google Cloud Storage and so on for storing the files, depending on the admin settings of the project. One can even ask to save the files locally by passing the GET parameter force_local=true. So, in the backend we have two cases to tackle: single file upload and multiple file upload.

Single File Upload

if 'file' in request.files:
        files = request.files['file']
        file_uploaded = uploaded_file(files=files)
        if force_local == 'true':
            files_url = upload_local(
                file_uploaded,
                UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
            )
        else:
            files_url = upload(
                file_uploaded,
                UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
            )


We get the file that is to be uploaded using request.files['file'], with the key 'file' that was used in the payload. Then we use the uploaded_file() helper function to convert the file data received in the payload into a proper file and store it in temporary storage. After this, if force_local is set to true, we use the upload_local helper function to upload it to the local storage, i.e. the server where the application is hosted; otherwise we use whatever service is set by the admin in the admin settings.

In the uploaded_file() function of the helpers module, we extract the filename and the extension of the file from the form-data payload. Then we check whether a suitable directory already exists; if it doesn't, we create a new directory and then save the file in it.

extension = files.filename.split('.')[1]
        filename = get_file_name() + '.' + extension
        filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
        if not os.path.isdir(filedir):
            os.makedirs(filedir)
        file_path = filedir + filename
        files.save(file_path)


After that, the upload function gets the settings key for either S3 or Google Storage and then uses the corresponding functions to upload this temporary file to the storage.

Multiple File Upload

 elif 'files[]' in request.files:
        files = request.files.getlist('files[]')
        files_uploaded = uploaded_file(files=files, multiple=True)
        files_url = []
        for file_uploaded in files_uploaded:
            if force_local == 'true':
                files_url.append(upload_local(
                    file_uploaded,
                    UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
                ))
            else:
                files_url.append(upload(
                    file_uploaded,
                    UPLOAD_PATHS['temp']['event'].format(uuid=uuid.uuid4())
                ))


In the case of multiple file upload, we get a list of files instead of a single file, so we retrieve the list sent as form data using request.files.getlist('files[]'). Here 'files' is the key that is used, and since it is an array of file content, it is written as files[]. We again use the uploaded_file() function to get back a list of temporary files from the content uploaded as form-data. After that we loop over all the temporary files, stored in the variable files_uploaded in the above code. For every file in the list, we use the upload() helper function to save it in the storage system of the application.

In the uploaded_file() function of the helpers module, things work differently this time since multiple files and their content are sent. We loop over all the files received, and for each of them we find the filename and extension, create a directory to save it in, and save the content with the corresponding filename and extension. After a file has been saved, we append it to a list, and finally we return the entire list of uploaded files.

if multiple:
        files_uploaded = []
        for file in files:
            extension = file.filename.split('.')[1]
            filename = get_file_name() + '.' + extension
            filedir = current_app.config.get('BASE_DIR') + '/static/uploads/'
            if not os.path.isdir(filedir):
                os.makedirs(filedir)
            file_path = filedir + filename
            file.save(file_path)
            files_uploaded.append(UploadedFile(file_path, filename))


The upload() function then finally returns the URLs for the files after saving them.

API Response

The file upload endpoint returns either a single URL or a list of URLs, depending on whether a single file or multiple files were uploaded. The URL for a file depends on the storage system that has been used. After the URL or list of URLs is received, we jsonify the entire response so that we can send a proper JSON response that can be parsed in the frontend and used for saving the corresponding information to the database via the other API services.

A typical single file upload response looks like this:

{
     "url": "https://xyz.storage.com/asd/fgh/hjk/12332233.docx"
 }

Multiple file upload response looks like this:

{
     "url": [
         "https://xyz.storage.com/asd/fgh/hjk/12332233.docx",
         "https://xyz.storage.com/asd/fgh/hjk/66777777.ppt"
     ]
 }

You can find the related documentation and example payloads on how to use this endpoint to upload files here: http://open-event-api.herokuapp.com/#upload-file-upload.

 


How User Event Roles relationship is handled in Open Event Server

Users and Events are the most important parts of FOSSASIA's Open Event Server. Over the course of the project's development, the way user event roles are implemented has gone through many changes. When the Open Event Organizer Server was first decoupled to serve as an API server, user event roles, like all other models, were served as a separate API to provide a data layer above the database for making changes to the entries. Whenever a new role invite was accepted, a POST request was made to the Users Events Roles table to insert the new entry. Whenever there was a change in the role of a user for a particular event, a PATCH request was made. Permissions were set up such that a user could insert only his/her own user id and not someone else's entry.

def before_create_object(self, data, view_kwargs):
        """
        method to create object before post
        :param data:
        :param view_kwargs:
        :return:
        """
        if view_kwargs.get('event_id'):
            event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
            data['event_id'] = event.id

        elif view_kwargs.get('event_identifier'):
            event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
            data['event_id'] = event.id
        email = safe_query(self, User, 'id', data['user'], 'user_id').email
        invite = self.session.query(RoleInvite).filter_by(email=email).filter_by(role_id=data['role'])\
                .filter_by(event_id=data['event_id']).one_or_none()
        if not invite:
            raise ObjectNotFound({'parameter': 'invite'}, "Object: not found")

    def after_create_object(self, obj, data, view_kwargs):
        """
        method to create object after post
        :param data:
        :param view_kwargs:
        :return:
        """
        email = safe_query(self, User, 'id', data['user'], 'user_id').email
        invite = self.session.query(RoleInvite).filter_by(email=email).filter_by(role_id=data['role'])\
                .filter_by(event_id=data['event_id']).one_or_none()
        if invite:
            invite.status = "accepted"
            save_to_db(invite)
        else:
            raise ObjectNotFound({'parameter': 'invite'}, "Object: not found")


Initially, when a POST request was sent to the Users Events Roles API endpoint, we would first check whether a role invite from the organizer existed for that particular combination of user, event and role. Only if it existed would we make an entry in the database; else we would raise an "Object: not found" error. After the entry was made in the database, we would update the role_invites table to change the status of the role invite.

Later it was decided that we need not have a separate API endpoint. Since API endpoints are all user-accessible and may cause problems with permissions, it was decided that user event roles would be handled entirely through the model instead of a separate API. Also, the workflow wasn't very clear for a user. So we decided on a workflow where the role_invites table is first updated with the particular status, and after the update has been made, we make an entry in the users_events_roles table with the data that we get from the role_invites table.

When a role invite is accepted, sqlalchemy's add() and commit() are used to insert a new entry into the table. When a role is changed for a particular user, we make a query, update the values and save them back into the table. So the entire process is handled at the data layer level rather than the API level.

The code implementation is as follows:

def before_update_object(self, role_invite, data, view_kwargs):
        """
        Method to edit object
        :param role_invite:
        :param data:
        :param view_kwargs:
        :return:
        """
        user = User.query.filter_by(email=role_invite.email).first()
        if user:
            if not has_access('is_user_itself', id=user.id):
                raise UnprocessableEntity({'source': ''}, "Only users can edit their own status")
        if not user and not has_access('is_organizer', event_id=role_invite.event_id):
            raise UnprocessableEntity({'source': ''}, "User not registered")
        if not has_access('is_organizer', event_id=role_invite.event_id) and (len(data.keys())>1 or 'status' not in data):
            raise UnprocessableEntity({'source': ''}, "You can only change your status")

    def after_update_object(self, role_invite, data, view_kwargs):
        user = User.query.filter_by(email=role_invite.email).first()
        if 'status' in data and data['status'] == 'accepted':
            role = Role.query.filter_by(name=role_invite.role_name).first()
            event = Event.query.filter_by(id=role_invite.event_id).first()
            uer = UsersEventsRoles.query.filter_by(user=user).filter_by(event=event).filter_by(role=role).first()
            if not uer:
                uer = UsersEventsRoles(user, event, role)
                save_to_db(uer, 'Role Invite accepted')


In the above code, there are two main functions: before_update_object, which gets executed before the entry in the role_invites table is updated, and after_update_object, which gets executed after.

In before_update_object, we verify that the user is accepting or rejecting his/her own role invite and not someone else's. Also, we ensure that the user is allowed to update only the status of the role invite and not any other sensitive data like the role_name or email. If the user tries to edit any field other than status, an error is shown, unless he/she has organizer access, in which case the other fields of the role_invites table can be edited as well. The has_access() helper permission function helps us ensure these permission checks.

In after_update_object we make the entry in the users_events_roles table. From the role_invite parameter we get the exact values of the newly updated row, and we use them to find the user, event and role associated with this invite. Then we create a UsersEventsRoles object with the user, event and role as constructor parameters, and use the save_to_db helper function to save the new entry to the database. The save_to_db function uses the session.add() and session.commit() functions of flask-sqlalchemy to add the new entry directly to the database.
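For reference, a minimal sketch of what such a save_to_db helper looks like (the message parameter follows the call save_to_db(uer, 'Role Invite accepted') above; the exact implementation lives in the server's helpers, and logging is illustrative):

def save_to_db(item, msg='Saved to db'):
    """Convenience wrapper around flask-sqlalchemy's session."""
    # assumes flask-sqlalchemy's `db` and the standard `logging` module
    db.session.add(item)    # stage the new or updated entry
    db.session.commit()     # write it to the database
    logging.info(msg)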

Thus, we maintain the flow of the user event roles relationship. All the database entries and operations related to the users-events-roles table remain encapsulated from the client user, so that they can use the various API features without thinking about the complications of the implementation.

 


Automatic handling of view/data interactions in Open Event Orga App

During the development of the Open Event Orga Application (Github Repo), we have strived to minimize duplicate code wherever possible and to make the wrappers and containers around data and views intelligent and generic. When it comes to loading data into views, there are several common interactions and behaviours that need to be replicated in each controller (or presenter, in the case of the MVP architecture used in our project). These interactions involve common ceremony around data loading and setting patterns and should be considered boilerplate code. Let's look at some of the common interactions on views:

Loading Data

While loading data, there are 3 scenarios to be considered:

  • Data loading succeeded – Pass the data to view
  • Data loading failed – Show appropriate error message
  • Show progress bar on starting of the data loading and hide when completed

If instead of loading a single object, we load a list of them, then the view may be emptiable, meaning you’ll have to show the empty view if there are no items.

Additionally, there may be a success message too, and if we are refreshing the data, there will be a refresh complete message as well.

These use cases, present in each of the presenters, cause a lot of duplication and can be easily handled by using Transformers from RxJava to compose the common scenarios on views. Let's see how we achieved it.

Generify the Views

The first step in reducing repetition in code is to use Generic classes. And as the views used in Presenters can be any class such as Activity or Fragment, we need to create some interfaces which will be implemented by these classes so that the functionality can be implementation agnostic. We broke these scenarios into common uses and created disjoint interfaces such that there is little to no dependency between each one of these contracts. This ensures that they can be extended to more contracts in future and can be used in any View without the need to break them down further. When designing contracts, we should always try to achieve fundamental blocks of building an API rather than making a big complete contract to be filled by classes. The latter pattern makes it hard for this contract to be generally used in all classes as people will refrain from implementing all its methods for a small functionality and just write their own function for it. If there is a need for a class to make use of a huge contract, we can still break it into components and require their composition using Java Generics, which we have done in our Transformers.

First, let’s see our contracts. Remember that the names of these Contracts are opinionated and up to the developer. There is no rule in naming interfaces, although adjectives are preferred as they clearly denote that it is an interface describing a particular behavior and not a concrete class:

Emptiable

A view which contains a list of items and thus can be empty

public interface Emptiable<T> {
   void showResults(List<T> items);
   void showEmptyView(boolean show);
}

Erroneous

A view that can show an error message on failure of loading data

public interface Erroneous {
   void showError(String error);
}

ItemResult

A view that contains a single object as data

public interface ItemResult<T> {
   void showResult(T item);
}

Progressive

A view that can show and hide a progress bar while loading data

public interface Progressive {
   void showProgress(boolean show);
}

Note that even though a Progressive view will normally also be an ItemResult or Emptiable, as those are the ones containing any data, we have decoupled it. This makes it possible for a view to load data without progress, or to show progress for an implementation other than loading data.

Refreshable

A view that can be refreshed and show the refresh complete message

public interface Refreshable {
   void onRefreshComplete();
}

There should also be a method for refresh failure, but the app is under development and it will be added soon.

Successful

A view that can show a success message

public interface Successful {
   void onSuccess(String message);
}

Implementation

Now, we will implement the Observable Transformers for these contracts

Erroneous

public static <T, V extends Erroneous> ObservableTransformer<T, T> erroneous(V view) {
   return observable ->  observable
             .doOnError(throwable -> view.showError(throwable.getMessage()));
}

We simply call showError on a view implementing Erroneous in the doOnError callback of the Observable.

Progressive

private static <T, V extends Progressive> ObservableTransformer<T, T> progressive(V view) {
   return observable -> observable
           .doOnSubscribe(disposable -> view.showProgress(true))
           .doFinally(() -> view.showProgress(false));
}

Here we show the progress when the observable is subscribed and, finally, hide it whether it succeeded or failed.

ItemResult

public static <T, V extends ItemResult<T>> ObservableTransformer<T, T> result(V view) {
   return observable -> observable.doOnNext(view::showResult);
}

We call showResult in the onNext callback.

 

Refreshable

private static <T, V extends Refreshable> ObservableTransformer<T, T> refreshable(V view, boolean forceReload) {
   return observable ->
       observable.doFinally(() -> {
           if (forceReload) view.onRefreshComplete();
       });
}

As we only refresh a view on a forceReload, we check it before calling onRefreshComplete.

 

Emptiable

public static <T, V extends Emptiable<T>> SingleTransformer<List<T>, List<T>> emptiable(V view, List<T> items) {
   return observable -> observable
       .doOnSubscribe(disposable -> view.showEmptyView(false))
       .doOnSuccess(list -> {
           items.clear();
           items.addAll(list);
           view.showResults(items);
       })
       .doFinally(() -> view.showEmptyView(items.isEmpty()));
}

Here we hide the empty view when data loading starts, and finally show it if the items are empty. Also, since we keep only one copy of a final list variable, which is shared between the view and the presenter, we clear it, add all the items into it, and call showResults on the view.

Bonus: you can also merge the functions for composite usage, as mentioned above, like this:

public static <T, V extends Progressive & Erroneous> ObservableTransformer<T, T> progressiveErroneous(V view) {
   return observable -> observable
       .compose(progressive(view))
       .compose(erroneous(view));
}

public static <T, V extends Progressive & Erroneous & ItemResult<T>> ObservableTransformer<T, T> progressiveErroneousResult(V view) {
   return observable -> observable
       .compose(progressiveErroneous(view))
       .compose(result(view));
}

Usage

Finally, we use the above transformers:

eventsDataRepository
   .getEvents(forceReload)
   .compose(dispose(getDisposable()))
   .compose(progressiveErroneousRefresh(getView(), forceReload))
   .toSortedList()
   .compose(emptiable(getView(), events))
   .subscribe(Logger::logSuccess, Logger::logError);

To give you an idea of what we have accomplished here, this is how we did the same before adding transformers:

eventsView.showProgressBar(true);
eventsView.showEmptyView(false);

getDisposable().add(eventsDataRepository
   .getEvents(forceReload)
   .toSortedList()
   .subscribeOn(Schedulers.computation())
   .subscribe(events -> {
       if(eventsView == null)
           return;
       eventsView.showEvents(events);
       isListEmpty = events.size() == 0;
       hideProgress(forceReload);
   }, throwable -> {
       if(eventsView == null)
           return;

       eventsView.showEventError(throwable.getMessage());
       hideProgress(forceReload);
   }));

Sure looks ugly as compared to the current solution.

Note that if you don't provide the error handler in the subscribe method of the observable, it will throw an onErrorNotImplemented exception even if you have added a doOnError side effect.


Adding Github buttons to Generated Documentation with Yaydoc

Many times repository owners want to link to their GitHub source code, issue tracker etc. from their documentation. This can also help direct some users towards becoming potential contributors to the repository. As a step towards this feature, we added the ability to add automatically generated GitHub buttons to the top of docs built with Yaydoc.

To do so we created a custom Sphinx extension which makes use of http://buttons.github.io/, an excellent service for embedding GitHub buttons in any website. The extension takes multiple config values and, using them, generates the HTML which it adds to the top of the internal docutils tree using a raw node.

GITHUB_BUTTON_SPEC = {
    'watch': ('eye', 'https://github.com/{user}/{repo}/subscription'),
    'star': ('star', 'https://github.com/{user}/{repo}'),
    'fork': ('repo-forked', 'https://github.com/{user}/{repo}/fork'),
    'follow': ('', 'https://github.com/{user}'),
    'issues': ('issue-opened', 'https://github.com/{user}/{repo}/issues'),
}

def get_button_tag(user, repo, btn_type, show_count, size):
    spec = GITHUB_BUTTON_SPEC[btn_type]
    icon, href = spec[0], spec[1].format(user=user, repo=repo)
    tag_fmt = '<a class="github-button" href="{href}" data-size="{size}"'
    if icon:
        tag_fmt += ' data-icon="octicon-{icon}"'
    tag_fmt += ' data-show-count="{show_count}">{text}</a>'
    return tag_fmt.format(href=href,
                          icon=icon,
                          size=size,
                          show_count=show_count,
                          text=btn_type.title())

The above snippet shows how the method takes various parameters: the user name, the name of the repository, the button type (one of fork, issues, watch, follow and star), whether to display counts beside the buttons, and whether a large button should be used. Another method named get_button_tags reads the various configs and calls the above method with the appropriate parameters to generate each button.
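get_button_tags itself is not shown in the post, but based on the config options used below it could look roughly like this (a sketch only; the github_button_* config attribute names are assumptions mirroring the .yaydoc.yml keys shown later):

def get_button_tags(config):
    # Build one button tag per enabled button type
    size = 'large' if config.github_button_large else 'none'
    tags = []
    for btn_type in GITHUB_BUTTON_SPEC:
        if config.github_button_buttons.get(btn_type):
            tags.append(get_button_tag(config.github_user_name,
                                       config.github_repo,
                                       btn_type,
                                       config.github_button_show_count,
                                       size))
    return '\n'.join(tags)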

The extension makes use of the doctree-resolved event emitted by sphinx to hook into the internal doctree. The following snippet shows how it is done.

def on_doctree_resolved(app, doctree, docname):
    if not app.config.github_user_name or not app.config.github_repo:
        return
    buttons = nodes.raw('', get_button_tags(app.config), format='html')
    doctree.insert(0, buttons)

Finally, we add the custom JavaScript using the add_javascript method:

app.add_javascript('https://buttons.github.io/buttons.js')

To use this with Yaydoc, users just need to add the following to their .yaydoc.yml file:

build:
  github_button:
    buttons:
      watch: true
      star: true
      issues: true
      fork: true
      follow: true
    show_count: true
    large: true

Resources

  1. Homepage of GitHub:buttons – http://buttons.github.io/
  2. Sphinx extension tutorial – http://www.sphinx-doc.org/en/stable/extdev/tutorial.html