Enabling Google App Signing for Android Project

Managing the signing keys of Android apps is a hectic procedure and can grow out of hand rather quickly for large organizations with several independent projects. We at FOSSASIA also had to face similar difficulties in the management of individual keys by project maintainers, and wanted to gather all our Android projects under a single key management platform.

To handle the complexities and the security aspect of the process, this year Google announced its optional App Signing program. Google takes your existing key’s encrypted file and stores it on their servers, and asks you to create a new upload key which will be used to sign further updates of the app. It takes the certificate of your new upload key and maps it to the managed private key. Now, whenever there is a new upload of the app, its signing certificate is matched with the upload key certificate, and after verification the app is signed by the original private key on the server itself and delivered to the user. The advantage comes when you lose your key or its password, or when the key is compromised. Before the App Signing program, if your key got lost, you had to launch your app under a new package name, losing your existing user base. With Google managing your key, if you lose your upload key, the account owner can request Google to assign a new upload key, as the private key remains secure on their servers.

There is no difference in the delivered app from the previous one, as it is still finally signed by the original private key as before, except that Google also optimizes the app by splitting it into multiple APKs according to hardware, demographics and other factors, resulting in a much smaller app! This blog will take you through the steps to enable the program for existing and new apps. A bit of a warning though: for security reasons, opting in to the program is permanent, and once you do it, it is not possible to back out, so think it through before committing.

For existing apps:

First you need to go to the particular app’s detail section and then into Release Management > App Releases. There you would see the Get Started button for App Signing.

The account owner must first agree to its terms and conditions, and once that is done, a page like this will be presented with information about the app signing infrastructure at the top.

So, as per the instructions, download the PEPK jar file to encrypt your private key. For this process, you need your existing keystore with the private key, along with its alias and password. It is fine if you don’t know the key password, but the store password is needed to generate the encrypted file. Then execute this command in the terminal, as written in Step 2 of your Play console:

java -jar pepk.jar --keystore={{keystore_path}} --alias={{alias}} --output={{encrypted_file_output_path}} --encryptionkey=eb10fe8f7c7c9df715022017b00c6471f8ba8170b13049a11e6c09ffe3056a104a3bbe4ac5a955f4ba4fe93fc8cef27558a3eb9d2a529a2092761fb833b656cd48b9de6a

You will have to change the placeholders inside double curly braces to the correct keystore path, alias and desired output file path respectively.

Note: The encryption key has been the same for me across 3 different Play Store accounts, but it might be different for you, so please confirm it in the Play console first.

When you execute the command, it will ask you for the keystore password, and once you enter it, the encrypted file will be generated at the path you specified. You can upload it using the button on the console.

After this, you’ll need to generate a new upload key. You can do this using any of the several methods listed here, but for demonstration we’ll be using the command line:

keytool -genkey -v -keystore {{keystore_path}} -alias {{alias_name}} -keyalg RSA -keysize 2048 -validity 10000

The command will ask you a couple of questions related to the passwords and signing information, and then the key will be generated. This keystore holds your new upload key, which will be used to sign all further updates of your app. So keep it and its password secure and handy (even though the upload key is replaceable now).

After this step, you need to create a PEM upload certificate for this key, and in order to do so, execute this command:

keytool -export -rfc -keystore {{keystore_path}} -alias {{alias_name}} -file {{upload_certificate.pem}}

After this is executed, it’ll ask you for the keystore password, and once you enter it, the PEM file will be generated, which you will have to upload to the Play console.

If everything goes right, your Play console will look something like this:

Click Enrol and you’re done! Now you can go to the App Signing section of Release Management in the console and see your app signing and new upload key certificates.

You can use the SHA-1 hashes to confirm which certificate corresponds to the private key and which to the upload key, if you are ever in doubt.
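If you want to print those fingerprints locally for comparison, standard keytool can list them for your upload keystore (the placeholders follow the same convention as the commands above):

keytool -list -v -keystore {{keystore_path}} -alias {{alias_name}}

The SHA1 line in its output should match the upload key certificate shown in the console.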

For new apps:

For new apps, the process is a walk in the park. You just need to enable App Signing, and you’ll get an option to continue, opt out, or re-use an existing key.

If you re-use an existing key, the process finishes then and there, and that key is deployed as the upload key for this app. But if you choose Continue, App Signing will be enabled, Google will use its own generated key as the private key for the app, and the key that signs the first APK you upload will be registered as the upload key.

This is a screenshot of the App Signing console before the first APK has been uploaded; you can see that it already has an app signing certificate for a key which you did not upload and do not have access to.

If you want to know more about the App Signing program, check out these links:

Implementing Roles API on Open Event Frontend to Create Roles Using an External Modal

This blog article will illustrate how roles are created via an external modal on the admin permissions page in Open Event Frontend, using the roles API. Our discussion will primarily involve the admin/permissions/index route. The primary endpoint of the Open Event API with which we are concerned for creating a role is

POST /v1/roles
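Under the JSON API conventions the server follows, the request body for this endpoint would look roughly like this (the type string and values are illustrative; the exact payload is produced by ember-data’s serializer from the model fields defined below):

{
  "data": {
    "type": "role",
    "attributes": {
      "name": "registrar",
      "title-name": "Registrar"
    }
  }
}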

First we need to create a model for the role, which will have the fields corresponding to the API, so we proceed with the ember CLI command:

ember g model role

Next we define the model according to the requirements. The model needs to extend the base model class, and it has only two fields: one for the title and one for the actual name of the role.

import attr from 'ember-data/attr';
import ModelBase from 'open-event-frontend/models/base';

export default ModelBase.extend({
 name           : attr('string'),
 titleName      : attr('string')
});

Next we need to modify the existing modal to incorporate the API and the creation of roles in it. It is very important to note here that using createRecord as the route’s model would result in a major flaw: if createRecord were used and the user tried to create multiple roles, every request after the first POST would be a PATCH request and would keep modifying the same role. To avoid this, a new record needs to be created every time the user clicks on Add Role. We slightly modify the modal component call to pass the name and titleName to it.

{{modals/add-system-role-modal  isOpen=isAddSystemRoleModalOpen
                                isLoading=isLoading
                                name=name
                                titleName=titleName
                                addSystemRole=(action 'addSystemRole')}}

Upon entering the details of the role and successful validation of the form, if the user clicks the Add Role button of the modal, the action addSystemRole is triggered. We write the entire logic for it in the respective controller of the route.

addSystemRole() {
    this.set('isLoading', true);
    this.get('store').createRecord('role', {
      name      : this.get('name'),
      titleName : this.get('titleName')
    }).save()
      .then(() => {
        this.set('isLoading', false);
        this.notify.success(this.l10n.t('User permissions have been saved successfully.'));
        this.set('isAddSystemRoleModalOpen', false);
        this.setProperties({
          name      : null,
          titleName : null
        });
      })
      .catch(() => {
        this.set('isLoading', false);
        this.notify.error(this.l10n.t('An unexpected error has occurred. User permissions not saved.'));
      });
  },

At first, the isLoading property is set to true. This adds the Semantic UI class loading to the form, so the form goes into the loading state. Next, a record of type role is created, and its properties are set to the corresponding values entered by the user.

Then save() is called, which subsequently makes a POST request to the server. If the request is successful, the modal is closed by setting the isAddSystemRoleModalOpen property to false. Also, the fields of the modal are cleared for a better user experience in case multiple roles need to be added one after the other.

In case there is an error during the processing of the request, the catch() block executes; the modal is not closed, and neither are the fields cleared.

Resources

Giving Offline Support to the Open Event Organizer Android App

Open Event Organizer is an Android Application for Event Organizers and Entry Managers which uses the Open Event API Server as a backend. The core feature of the App is to scan a QR code to validate an attendee’s check in. The App maintains a local database and syncs it with the server. The basic workflow of attendee check in is: the App scans a QR code on an attendee’s ticket, the scanned code is processed to look the attendee up in the locally maintained attendees database, and on finding a match the App makes a check-in status toggling request to the server. The server toggles the status of the attendee and sends back a response containing the updated attendee’s data, which is then updated in the local database. All of this works well as long as the App always has a good network connection, which cannot be assumed, as the network can go down at the event site. So, to support this functionality even in the absence of a network, the Orga App uses job schedulers, which hold requests while the network is absent and make them once the network is available again. I will be talking about its implementation in the App through this blog.

The App uses the Android-Job library developed by Evernote, which handles jobs in the background. The library provides a class JobManager which does most of the work; a singleton of this class is initialized in the Application class. Job is the class where a background task is actually implemented. There can be more than one job in the App, hence the library requires you to implement the JobCreator interface, whose create method takes a string tag and returns the relevant Job. The JobCreator is passed to the JobManager in the Application class during initialization. The relevant code is:

JobManager.create(this).addJobCreator(new OrgaJobCreator());

Initialization of JobManager in Application class

public class OrgaJobCreator implements JobCreator {
   @Override
   public Job create(String tag) {
       switch (tag) {
           case AttendeeCheckInJob.TAG:
               return new AttendeeCheckInJob();
           default:
               return null;
       }
   }
}

Implementation of JobCreator

public class AttendeeCheckInJob extends Job {
   ...
   ...
   @NonNull
   @Override
   protected Result onRunJob(Params params) {
       ...
       ...
       Iterable<Attendee> attendees = attendeeRepository.getPendingCheckIns().blockingIterable();
       for (Attendee attendee : attendees) {
           try {
               Attendee toggled = attendeeRepository.toggleAttendeeCheckStatus(attendee).blockingFirst();
               ...
           } catch (Exception exception) {
               ...
               return Result.RESCHEDULE;
           }
       }
       return Result.SUCCESS;
   }

   public static void scheduleJob() {
       new JobRequest.Builder(AttendeeCheckInJob.TAG)
           .setExecutionWindow(1, 5000L)
           .setBackoffCriteria(10000L, JobRequest.BackoffPolicy.EXPONENTIAL)
           .setRequiredNetworkType(JobRequest.NetworkType.CONNECTED)
           .setRequirementsEnforced(true)
           .setPersisted(true)
           .setUpdateCurrent(true)
           .build()
           .schedule();
   }
}

Job class for attendee check in job

To create a Job, these two methods are overridden. onRunJob is where the actual background job runs; this is the place where you implement the job logic which should run in the background. In this method, the attendees with pending sync are fetched from the local database and the network requests are made. On failure, the same job is scheduled again, and the process goes on until the job succeeds. The scheduleJob method is where the related options are set; it is used to schedule an incomplete job.

So after this implementation, the workflow described above changes: when an attendee is found, the record is updated in the local database before any request is made to the server, and the attendee is flagged as pending sync. Accordingly, a single tick is shown in the UI for an attendee who is pending sync with the server, as sketched below. Once the request is made to the server and the response is received, the pending sync flag of the attendee is removed and a double tick is shown against the attendee.
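A minimal sketch of that local-first flow might look like this (the method names setPendingSync and saveAttendee are assumptions for illustration, not the App’s exact code):

private void checkInOffline(Attendee attendee) {
    // Toggle the status locally first so the UI updates immediately
    attendee.setCheckedIn(!attendee.isCheckedIn());
    // Flag the record as pending sync -- rendered as a single tick in the UI
    attendee.setPendingSync(true);
    attendeeRepository.saveAttendee(attendee);
    // Schedule the background job; it runs once the network is available
    // and clears the flag when the server responds (double tick)
    AttendeeCheckInJob.scheduleJob();
}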

Links:
1. Documentation for Android-Job Library by evernote
2. Github Repository of Android-Job Library

Implementing Users API to Display the Users at Admin Route in Open Event Frontend

This article will illustrate how the users are displayed and updated on the /admin/users route, along with their roles, links etc., using the users API in Open Event Frontend. The primary endpoint of the Open Event API with which we are concerned for fetching the users is

GET /v1/users

First, we need to create a model for the user, which will have the fields corresponding to the API, so we proceed with the ember CLI command:

ember g model user

Next, we need to define the model according to the requirements. The model needs to extend the base model class. As a user can have multiple notifications, orders, sessions etc., we have to use the ember-data relationship “hasMany”. Hence, the model will have the following format.

import attr from 'ember-data/attr';
import ModelBase from 'open-event-frontend/models/base';
import { hasMany } from 'ember-data/relationships';

export default ModelBase.extend({
  email        : attr('string'),
  password     : attr('string'),
  isVerified   : attr('boolean', { readOnly: true }),
  isSuperAdmin : attr('boolean', { readOnly: true }),
  isAdmin      : attr('boolean', { readOnly: true }),
  firstName    : attr('string'),
  lastName     : attr('string')
});

The complete code for the model can be seen here.

Now, we need to load the data from the API using the above model, so we will send a GET request to the API to fetch the users. This can be easily achieved as follows.

return this.get('store').query('user', {'page[size]': 10 });

The above line queries the users from the store, which is the place where a cache of all the records loaded by our application is kept; if a route asks for a record, the store can return it immediately if it is present in the cache. Since we want to display only 10 users on a page, we have also defined how many users are to be loaded at a time.

Now we need to filter the users based on whether they are active or have deleted their accounts. For this purpose, we need to pass a filter to the query, which tells what type of users are to be loaded at once, as illustrated below.
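For example, a sketch of a filter restricting the query to active (non-deleted) users, following the name/op/val filter convention of the Open Event server (the field name deleted-at is an assumption for illustration):

return this.get('store').query('user', {
  filter: [
    {
      name : 'deleted-at', // assumed field name
      op   : 'eq',
      val  : null
    }
  ],
  'page[size]': 10
});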

The next thing we need to do is to display the above data fetched from the API in an ember table. The ember table allows us to render very large data sets by rendering only the rows that are being displayed. For this, we define a controller class which lets the table know which columns are required for display and the attributes they correspond to in the API. We can also define the template for each column. The code for the controller class looks like this.

import Ember from 'ember';

const { Controller } = Ember;

export default Controller.extend({
  columns: [
    {
      propertyName     : 'first-name',
      title            : 'Name',
      disableSorting   : true,
      disableFiltering : true
    },
    {
      propertyName     : 'email',
      title            : 'Email',
      disableSorting   : true,
      disableFiltering : true
    },
    {
      propertyName     : 'last-accessed-at',
      title            : 'Last Accessed',
      template         : 'components/ui-table/cell/admin/users/cell-last-accessed-at',
      disableSorting   : true,
      disableFiltering : true
    }
  ]
});

In the above code, we can see a field called ‘disableSorting’, which is true if we don’t want to sort the table based on that column. Since we want the last-accessed-at column to be customized, we have separately added a template for the column which determines how it will look. The complete code for the other columns of the table can be found here.

Now to display the ember table we will write the following code.

{{events/events-table columns=columns data=model
    useNumericPagination=true
    showGlobalFilter=true
    showPageSize=true
}}

In the above piece of code, we are calling the same ember table as we used in the case of events, to reduce code duplication. We pass in the columns and data, which remain unique to the table. Next, we ensure that our page shows how much data we are fetching at one go and allows filtering the table based on the columns.

The UI of the users page for the above code snippets looks like this.

Fig 1: The UI of the users table under admin/users route

The entire code for implementing the users API can be seen here.

To conclude, this is how we efficiently fetched the users using the Open-Event-Orga users API, ensuring that there is no unnecessary API call to fetch the data and no code duplication by reusing the same ember table.

Resources:

Implementing Sessions API for the event in Open Event Frontend

This article will illustrate how the sessions are displayed and updated on the events/{event_id}/sessions route, showing the sessions available for a particular event using the sessions API in Open Event Frontend. The primary endpoint of the Open Event API with which we are concerned for fetching the sessions of an event is

GET /v1/events/{event_id}/sessions

First, we need to create a model for the sessions, which will have the fields corresponding to the API, so we proceed with the ember CLI command:

ember g model session

Next, we need to define the model according to the requirements. The model needs to extend the base model class. As a session can have multiple speakers and always belongs to an event, we have to use the ember-data relationships “hasMany” and “belongsTo”. Hence, the model will have the following format.

import attr from 'ember-data/attr';
import ModelBase from 'open-event-frontend/models/base';
import { belongsTo, hasMany } from 'ember-data/relationships';

export default ModelBase.extend({
  title    : attr('string'),
  subtitle : attr('string'),

  speakers : hasMany('speaker'),
  event    : belongsTo('event')
});

The complete code for the model can be seen here.

Now, we need to load the data from the API using the above model, so we will send a GET request to fetch the sessions corresponding to a particular event. This can be easily achieved as follows.

return this.modelFor('events.view').query('sessions');

The above line gets the current model on the route events.view and queries the sessions relationship of that model.

Now we need to filter the sessions based on their state, whether they have been accepted, confirmed, pending or rejected, and display them on different pages. For this purpose, we need to pass a filter and page options to the query, which tell what type and what number of sessions are to be loaded at once. Also, we need to display the speakers associated with a session and the event details. For this case, the above query will be formatted like this.

return this.modelFor('events.view').query('sessions', {
  include      : 'event,speakers',
  filter       : filterOptions,
  'page[size]' : 10
});

In the above query, the filterOptions are designed in such a way that they check what type of sessions the user is querying for; the code can be found here. An illustrative example follows.
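As an illustration, the filterOptions for the pending tab might be built roughly like this (same name/op/val convention as before; the exact construction lives in the linked code):

let filterOptions = [
  {
    name : 'state',
    op   : 'eq',
    val  : 'pending'
  }
];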

The next thing we need to do is to display the above data fetched from the API in an ember table. For this, we need a controller class which lets the table know which columns are required for display and the attributes they correspond to in the API. We can also define the template for each column. The code for the controller class looks like this.

import Ember from 'ember';

const { Controller } = Ember;

export default Controller.extend({
  columns: [
    {
      propertyName   : 'state',
      title          : 'State',
      disableSorting : true,
      template       : 'components/ui-table/cell/events/view/sessions/cell-session-state'
    },
    {
      propertyName : 'title',
      title        : 'Title'
    },
    {
      propertyName   : 'speakers',
      template       : 'components/ui-table/cell/cell-speakers',
      title          : 'Speakers',
      disableSorting : true
    }
  ]
});

In the above code, we can see a field called ‘disableSorting’, which is true if we don’t want to sort the table based on that column. Since we want the state column to be customized, we have separately added a template for the column which determines how it will look. The complete code for the columns other than state, title and speakers can be found here.

Now to display the ember table we will write the following code.

{{events/events-table columns=columns data=model
    useNumericPagination=true
    showGlobalFilter=true
    showPageSize=true
}}

In the above piece of code, we are calling the same ember table as we used in the case of events, to reduce code duplication. We pass in the columns and data, which remain unique to the table. Next, we ensure that our page shows how much data we are fetching at one go and allows filtering the table based on the columns.

The UI of the sessions page for the above code snippets looks like this.

Fig 1: The UI of the session table under events/{event_id}/session route

The entire code for implementing the sessions API can be seen here.

To conclude, this is how we efficiently fetched the session details using the Open-Event-Orga sessions API, ensuring that there is no unnecessary API call to fetch the data and no code duplication by reusing the same ember table.

Resources:

Implementing ICS/ICAL to sync calendars with the event schedule in Open Event Webapp

As the end result, we want to provide a button which will export the whole event schedule to an ICS file and present it to the user for download when clicked. The whole work regarding the feature can be seen here.

Instead of implementing the whole specification ourselves, which would be much tougher and more time-consuming, we looked for a good open source library to do a bit of the heavy lifting for us. After searching exhaustively, we came across this library, which seemed appropriate for the job. The heart of the library is a function which takes in an object containing information about a session. It expects information about the start and end time, subject, description and location of the session. Here is an excerpt from the function; the whole file can be seen here.

var addEvent = function (session) {
  var calendarEvent = [
    'BEGIN:VEVENT',
    'UID:' + session.uid,
    'CLASS:PUBLIC',
    'DESCRIPTION:' + session.description,
    'DTSTART;VALUE=DATE-TIME:' + session.begin,
    'DTEND;VALUE=DATE-TIME:' + session.stop,
    'LOCATION:' + session.location,
    'SUMMARY;LANGUAGE=en-us:' + session.subject,
    'TRANSP:TRANSPARENT',
    'END:VEVENT'
  ];
  calendarEvents.push(calendarEvent);
};

We need to call the above function for every session in the event schedule. In the schedule template file, we have the jsonData object available, which contains all the information about the event. It contains a field called timeList holding the different sessions in chronological order throughout the event. The structure of that sub-object is something like this.

[{'slug': '2017-03-20', 'times': [{'caption': '09:00-09:30', 'sessions': [{'title': 'Welcome', 'description': 'Opening of the event', 'start': '09:00', 'end': '09:30'}]}]}]

So, we define a function that iterates through every session in the above object and adds it to the calendar. We can use most of the attributes directly, but have to convert the date and time fields of the session to an appropriate format before adding them. The specification expects time in the ISO 8601 format; you can read more about the specification here. For example, if the date is 2017-03-20 and the time is 09:30, then it should be written as 20170320T093000. Here is the relevant part of the function.

function exportICS() {
 var scheduleArr = {{{json timeList}}};
 // Helper functions for converting time to ISO 8601 Format
 function removeDashFromDate(date) {
   return date.replace(/-/g, '');
 }
 function removeColonFromTime(time) {
   return time.replace(/:/g, '');
 }
 // Iteration through the object and adding every session to the calendar
 scheduleArr.forEach(function(scheduleDay) {
   var date = removeDashFromDate(scheduleDay.slug);
   scheduleDay.times.forEach(function(time) {
     time.sessions.forEach(function(session) {
       var sessObj = {};
       sessObj.begin = date + 'T' + removeColonFromTime(session.start) + '00';
       sessObj.stop = date + 'T' + removeColonFromTime(session.end) + '00';
       sessObj.subject = session.title;
       sessObj.description = session.description;
       sessObj.location = session.location;
       cal.addEvent(sessObj);
     });
   });
 });
 cal.download('calendar', 'ics', false); // Download the ics file of the calendar
}
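For the sample ‘Welcome’ session shown earlier, the entry added to the calendar would come out roughly like this (the UID and LOCATION values depend on the actual session data):

BEGIN:VEVENT
UID:<generated uid>
CLASS:PUBLIC
DESCRIPTION:Opening of the event
DTSTART;VALUE=DATE-TIME:20170320T090000
DTEND;VALUE=DATE-TIME:20170320T093000
LOCATION:<session location>
SUMMARY;LANGUAGE=en-us:Welcome
TRANSP:TRANSPARENT
END:VEVENT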

After defining the function, we add a button for starting the download of the whole schedule of the event. On clicking it, we call the function, which initiates the download after all the sessions of the event have been added.

<span class="schedule-download">
 <button type="button" class="btn btn-default export-schedule"><i class="fa fa-calendar" aria-hidden="true"></i></button>
</span>

$('.export-schedule').click(function() {
 exportICS();
});

Here is the export schedule button


This is the download pop-up of the ICS file of the event.


After importing it in the Google calendar


References

Customising URL Using Custom Adapters in Open Event Front-end

Open-Event Front-end uses Ember data for handling the Open Event Orga API, which abides by the JSON API specs. The API has relationships which represent models in the database; however, there are some API endpoints for which the URL is not direct. We make use of a custom adapter to build a custom URL for such requests.
In this blog we will see how to implement relationships which do not have a model on the API server. Let’s see how we implemented the admin-statistics-event API using a custom adapter.

Creating the admin-statistics-event model
To create a new model we use the ember-cli command:

ember g model admin-statistics-event

The generated model:

import attr from 'ember-data/attr';
import ModelBase from 'open-event-frontend/models/base';

export default ModelBase.extend({
  draft     : attr('number'),
  published : attr('number'),
  past      : attr('number')
});

The API returns 3 attributes, namely draft, published & past, which represent the total number of drafted, live and past events in the system. The admin-statistics-event is an admin related model.
Creating custom adapter
To create a new adapter we use the ember-cli command:

ember g adapter admin-statistics-event

If we try to make a GET request, the URL for the request will be ‘v1/admin-statistics-event’, which is an incorrect endpoint. We create a custom adapter to override the buildURL method.

import ApplicationAdapter from './application';

export default ApplicationAdapter.extend({
  buildURL(modelName, id, snapshot, requestType, query) {
    let url = this._super(modelName, id, snapshot, requestType, query);
    url = url.replace('admin-statistics-event', 'admin/statistics/event');
    return url;
  }
});

We create a new variable url which holds the URL generated by the buildURL method of the super adapter, which we call using this._super. We then replace ‘admin-statistics-event’ with ‘admin/statistics/event’ in the url variable and return the new url. This results in the generation of the correct URL for the request.
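With the adapter in place, fetching the statistics is an ordinary store call. A hypothetical usage from a route’s model hook could be:

// Resolves against v1/admin/statistics/event thanks to the custom buildURL
return this.get('store').queryRecord('admin-statistics-event', {});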
Thank you for reading the blog; you can check the source code for the example here.
Resources

Preparing for Automatic Publishing of Android Apps in Play Store

I spent this week searching through libraries and services which provide a way to publish built APKs directly through an API, so that the repositories for Android apps can trigger publishing automatically after each push on the master branch. The projects to be auto-deployed are:

I had my eyes on fastlane for a couple of months, and it turned out to be the best solution for the task. The tool not only allows publishing of APK files, but also of Play Store listings, screenshots, and changelogs. And that is only a subset of its capabilities, bundled in a subservice called supply.

There is a process before getting started to use this service, which I will go through step by step in this blog. The process is also outlined in the README of the supply project.

Enabling API Access

The first step in the process is to enable API access in your Play Store Developer account if you haven’t done so. For that, you have to open the Play Dev Console and go to Settings > Developer Account > API access.

If this is the first time you are opening it, you’ll be presented with a confirmation dialog detailing the ramifications of the action and asking if you agree to proceed. Read the terms carefully and click accept if you agree with them. Once you do, you’ll be presented with a settings panel like this:

Creating Service Account

As you can see, there is no registered service account here, so we need to create one. Click on the CREATE SERVICE ACCOUNT button and this dialog will pop up, giving you instructions on how to do so:

So, open the highlighted link in the new tab and Google API Console will open up, which will look something like this:

Click on Create Service Account and fill in these details:

Account Name: Any name you want

Role: Project > Service Account Actor

And then, select Furnish a new private key and select JSON. Click CREATE.

A new JSON key will be created and downloaded to your device. Keep this secret, as anyone with access to it can at least change the Play Store listings of your apps, if not upload new apps in place of existing ones (those are protected by signing keys).

Granting Access

Now return to the Play Console tab (we were there in Figure 2 at the start of Creating Service Account) and click Done, as you have created the Service Account now. You should see the created service account listed like this:

Now click on Grant Access, choose Release Manager from the Role dropdown, and select these PERMISSIONS:

Of course, you don’t want the fastlane API to access financial data or manage orders. Other than that, it is up to you what to allow or disallow. The same goes for the expiry date; we have left it at never expire. Click on ADD USER and you’ll see the Release Manager created in the user list like below:

Now you are ready to use the fastlane service, or any other release management service for that matter.

Using fastlane

Install fastlane with:

sudo gem install fastlane

Go to your project folder and run

fastlane supply init

First it will ask for the location of the private key JSON file you downloaded, and then for the package name of the application you are initializing fastlane for.

Then it will create a metadata folder with the listing information, excluding the images, so you’ll have to download and place the images manually the first time.
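The generated folder roughly follows supply’s standard layout, something like this (the exact files depend on your listing):

metadata/android/en-US/
├── title.txt
├── short_description.txt
├── full_description.txt
├── video.txt
├── changelogs/
└── images/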

After modifying the listing, images or APK, run the command:

fastlane supply run

That’s it. Your app along with the store listing has been updated!
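Once configured, the whole flow can also run non-interactively, which is handy on a CI server. A sketch of such an invocation (flags taken from supply’s command line options; the values are placeholders):

fastlane supply --json_key key.json --package_name com.example.app --apk app-release.apk --track beta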

This is a very brief introduction to the capabilities of the supply service. All interactive options can be supplied via command line arguments, certain parts of the metadata can be omitted, and alpha/beta management along with staged release rollouts can be done in steps! Make sure to check out the links below:

Create Event by Importing JSON files in Open Event Server

Apart from the usual way of creating an event in FOSSASIA’s Orga Server project by using POST requests on the Events API, another way is to import a zip file which is an archive of multiple JSON files. This way you can create a large event like FOSSASIA, with lots of data related to sessions, speakers, microlocations and sponsors, just by uploading JSON files to the system. A sample JSON file can be found in the open-event project of FOSSASIA. The basic workflow of importing an event and how it works is as follows:

  • The first step is similar to uploading files to the server. We need to send a POST request with multipart form data containing the zipped archive of the JSON files.
  • The POST request starts a celery task to import the data from the JSON files and store it in the database.
  • The celery task URL is returned as a response to the POST request. You can poll this celery task to get its status. If the status is FAILURE, we get the error text along with it; if the status is SUCCESS, we get the resulting event data.
  • In the celery task, each JSON file is read separately and the data is stored in the db with the proper relations.
  • Sending a GET request to the above mentioned celery task after the task has been completed returns the event id along with the event URL.

Let’s see how each of these steps works in the background.

Uploading ZIP containing JSON Files

For uploading a zip archive, instead of sending JSON data in the POST request we send multipart form data. The multipart/form-data format allows an entire file to be sent as data in the POST request along with the relevant file information. One can read about the various form content types here.

An example cURL request looks something like this:

curl -H "Authorization: JWT <access token>" -X POST -F 'file=@event1.zip' http://localhost:5000/v1/events/import/json

The above cURL request uploads the file event1.zip from your current directory, with the key ‘file’, to the endpoint /v1/events/import/json. The user uploading the file needs to have a JWT authentication key, or in other words be logged in to the system, as it is necessary to create an event.

@import_routes.route('/events/import/<string:source_type>', methods=['POST'])
@jwt_required()
def import_event(source_type):
    if source_type == 'json':
        file_path = get_file_from_request(['zip'])
    else:
        file_path = None
        abort(404)
    from helpers.tasks import import_event_task
    task = import_event_task.delay(email=current_identity.email, file=file_path,
                                   source_type=source_type, creator_id=current_identity.id)
    # create import job
    create_import_job(task.id)

    # if testing
    if current_app.config.get('CELERY_ALWAYS_EAGER'):
        TASK_RESULTS[task.id] = {
            'result': task.get(),
            'state': task.state
        }
    return jsonify(
        task_url=url_for('tasks.celery_task', task_id=task.id)
    )


After the request is received we check if a file exists in the key ‘file’ of the form-data. If it is there, we save the file and get the path to the saved file. Then we send this path over to the celery task and run the task with the .delay() function of celery. After the celery task is started, the corresponding data about the import job is stored in the database for future debugging and logging purposes. After this we return the task url for the celery task that we started.

Celery Task to Import Data

Just like exporting an event, importing is a time consuming task, and we don’t want other application requests to be paused because of it. Hence, we use a celery queue to execute this task. Whenever an import task is started, it is added to the celery queue; when it comes to the front of the queue, it is executed.

For importing, we have created a celery task, import.event, which calls the import_event_task_base() function that uses the import helper functions to get the data from the JSON files imported and saved in the DB. After the task is completed, we update the import job entry in the table with the status as either SUCCESS or FAILURE, depending on the outcome of the celery task.

As a result of the celery task, the newly created event’s id and the frontend link from where we can visit the event are returned. This, along with the status of the celery task, is returned as the response for a GET request on the celery task. If the celery task fails, the state is changed to FAILURE, and the error which celery faced is returned as the error message in the result key. We also print an error traceback in the celery worker.

@celery.task(base=RequestContextTask, name='import.event', bind=True, throws=(BaseError,))
def import_event_task(self, file, source_type, creator_id):
    """Import Event Task"""
    task_id = self.request.id.__str__()  # str(async result)
    try:
        result = import_event_task_base(self, file, source_type, creator_id)
        update_import_job(task_id, result['id'], 'SUCCESS')
        # return item
    except BaseError as e:
        print(traceback.format_exc())
        update_import_job(task_id, e.message, e.status if hasattr(e, 'status') else 'failure')
        result = {'__error': True, 'result': e.to_dict()}
    except Exception as e:
        print(traceback.format_exc())
        update_import_job(task_id, e.message, e.status if hasattr(e, 'status') else 'failure')
        result = {'__error': True, 'result': ServerError().to_dict()}
    # send email
    send_import_mail(task_id, result)
    # return result
    return result

 

Save Data from JSON

In the import helpers, we have the functions which perform the main task of reading the JSON files, creating sqlalchemy model objects from them and saving them in the database. There are a few global dictionaries which help maintain the order in which the files are to be imported and saved, and also the file to model mapping. The first JSON file to be imported is the event JSON file; since all the other tables to be imported are related to the event table, we read the event JSON file first. After that, the order in which the files are read is as follows:

  1. SocialLink
  2. CustomForms
  3. Microlocation
  4. Sponsor
  5. Speaker
  6. Track
  7. SessionType
  8. Session

This order helps maintain the foreign key constraints. For importing the data from these files we use the function create_service_from_json(). It sorts the elements in the data list based on the key “id” and then loops over all the elements or dictionaries contained in the data list. In each iteration it deletes the unnecessary key-value pairs from the dictionary and sets the event_id for that element to the id of the newly created event from the import, instead of the old id present in the data. After all this is done, it creates a model object based on the filename-to-model mapping with the dict data, and saves that model object into the database.

def create_service_from_json(task_handle, data, srv, event_id, service_ids=None):
    """
    Given :data as json, create the service on server
    :service_ids are the mapping of ids of already created services.
        Used for mapping old ids to new
    """
    if service_ids is None:
        service_ids = {}
    global CUR_ID
    # sort by id
    data.sort(key=lambda k: k['id'])
    ids = {}
    ct = 0
    total = len(data)
    # start creating
    for obj in data:
        # update status
        ct += 1
        update_state(task_handle, 'Importing %s (%d/%d)' % (srv[0], ct, total))
        # trim id field
        old_id, obj = _trim_id(obj)
        CUR_ID = old_id
        # delete not needed fields
        obj = _delete_fields(srv, obj)
        # related
        obj = _fix_related_fields(srv, obj, service_ids)
        obj['event_id'] = event_id
        # create object
        new_obj = srv[1](**obj)
        db.session.add(new_obj)
        db.session.commit()
        ids[old_id] = new_obj.id
        # add uploads to queue
        _upload_media_queue(srv, new_obj)

    return ids


After the data has been saved, the next thing to do is to upload all the media files to the server. We do this using the _upload_media_queue() function. It takes the paths to upload the files to from the storage.py helper file for the APIs, and then uploads the files using the various helper functions to static data storage services like AWS S3, Google Storage, etc.

Other than this, the import helpers also contain the function to create an import job, which keeps a record of all the imports along with the task url and the user id of the user who started the import task; it also stores the status of the task. Then there is the get_file_from_request() function, which saves the file uploaded through the POST request and returns the path to that file.

Get Response about Event Imported

The POST request returns a task url of the form /v1/tasks/ebe07632-392b-4ae9-8501-87ac27258ce5. To get the final result, you need to keep polling this URL, as shown below. To know more about polling, read my previous blog about exporting an event or visit this link. When the task is completed, you get a “result” key along with the state. The state can either be SUCCESS or FAILURE. If it is FAILURE, you get the corresponding error message due to which the celery task failed. If it is a success, you get data related to the event that was created by the import: the event id, event name and the event url, which you can use to visit the event from the frontend. This data is also sent to the user as an email and a notification.
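Polling here just means repeating a GET request on the task URL until the state changes, for example (the task id is the sample one from above; include the auth header in case the endpoint requires it):

curl -H "Authorization: JWT <access token>" http://localhost:5000/v1/tasks/ebe07632-392b-4ae9-8501-87ac27258ce5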

An example response looks something like this:

{
    "result": {
        "event_name": "FOSSASIA 2016",
        "id": "24",
        "url": "https://eventyay.com/events/ab3de6"
    },
    "state": "SUCCESS"
}

The corresponding event name and url are also sent to the user who started the import task. On the frontend, one can use the values in the result object to show the name of the imported event along with the event url. Since the id and the identifier are both present in the result, one can also use them to send GET, PATCH and other API requests to the events/ endpoint and get the corresponding relationship urls from it to query the other APIs. Thus, the entire imported data becomes available to the frontend as well.

 

Reference Links:

 

Image Loading in Open Event Organizer Android App using Glide

Open Event Organizer is an Android App for Event Organizers and Entry Managers which uses the Open Event API Server as a backend. The core feature of the App is to scan a QR code from a ticket to validate an attendee’s check in. Other features of the App are displaying an overview of sales and ticket management. Given this functionality, the performance of the App is very important, and it should remain functional even on a weak network. Speaking of performance, the image loading part of the app should be handled efficiently, as it is not an essential part of the App’s core functionality. Open Event Organizer uses Glide, a fast and efficient image loading library created by Sam Judd. I will be talking about its implementation in the App in this blog.

The first part is the configuration of Glide in the App, and the library provides a very easy way to do that. Your app implements a class extending AppGlideModule using annotations provided by the library, and the library generates a Glide API which can be used in the app for all the image loading work. The AppGlideModule implementation in the Orga App looks like:

@GlideModule
public final class GlideAPI extends AppGlideModule {

   @Override
   public void registerComponents(Context context, Glide glide, Registry registry) {
       registry.replace(GlideUrl.class, InputStream.class, new OkHttpUrlLoader.Factory());
   }

   // TODO: Modify the options here according to the need
   @Override
   public void applyOptions(Context context, GlideBuilder builder) {
       int diskCacheSizeBytes = 1024 * 1024 * 10; // 10mb
       builder.setDiskCache(new InternalCacheDiskCacheFactory(context, diskCacheSizeBytes));
   }

   @Override
   public boolean isManifestParsingEnabled() {
       return false;
   }
}

 

This generates an API named GlideApp by default in the same package, which can be used in the whole app. Just make sure to add the annotation @GlideModule to the implementation, which is used to find this class in the app. The second part is using the generated API GlideApp in the app to load images using URLs. The Orga App uses data binding for layouts, so all the image loading related code is placed at a single place in the DataBinding class which is used by the layouts. The class has a method named setGlideImage which takes an image view, an image URL, a placeholder drawable and a transformation. The relevant code is:

private static void setGlideImage(ImageView imageView, String url, Drawable drawable, Transformation<Bitmap> transformation) {
       if (TextUtils.isEmpty(url)) {
           if (drawable != null)
               imageView.setImageDrawable(drawable);
           return;
       }
       GlideRequest<Drawable> request = GlideApp
           .with(imageView.getContext())
           .load(Uri.parse(url));

       if (drawable != null) {
           request
               .placeholder(drawable)
               .error(drawable);
       }
       request
           .centerCrop()
           .transition(withCrossFade())
           .transform(transformation == null ? new CenterCrop() : transformation)
           .into(imageView);
   }

 

The method is very clear. First, the URL is checked for emptiness; if it is empty, the placeholder drawable is set on the imageview and the method returns. Usage of GlideApp is simple: pass the context to it using the with method, then pass the URL using load, which returns a GlideRequest that has operators to set the other required options like transitions, transformations, placeholder etc. Lastly, pass the imageview using the into operator. By default, Glide uses the HttpURLConnection provided by android to load images, which can be changed to OkHttp using the extension provided by the library. This is set in the AppGlideModule implementation in the registerComponents method.
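Since the App uses data binding, a small binding adapter is what would typically connect this helper to the layouts. A sketch of such a wrapper (the attribute name imageUrl is an assumption for illustration):

@BindingAdapter("imageUrl")
public static void bindImageUrl(ImageView imageView, String url) {
    // Delegates to the helper above with no placeholder and the default transformation
    setGlideImage(imageView, url, null, null);
}

A layout could then bind an image with something like app:imageUrl="@{attendee.avatarUrl}", where the bound property is again hypothetical.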

Links:
1. Documentation for Glide, an Image Loading Library
2. Documentation for Okhttp, an HTTP client for Android and Java Applications