Copying Event in Open Event API Server

The Event Copy feature of the Open Event API Server provides the ability to create an exact copy of an event with just one API call. It creates a complete copy of the event by also copying the related objects such as tracks, sponsors, micro-locations, etc. The API is based on a simple method: an object is first removed from the current DB session and then made transient with make_transient. The next step is to remove the unique identifying columns like “id” and “identifier”, generate a new identifier, and save the new record. The process seems simple but becomes a little complex when you also have to generate copies of the associated media files and of multiple related objects, while ensuring that no orders, attendees or access_codes relations are copied.

Initial Step

The first step in copying an event is to fetch the event object and all of its related objects.

if view_kwargs.get('identifier').isdigit():
   identifier = 'id'

event = safe_query(db, Event, identifier, view_kwargs['identifier'], 'event_'+identifier)

The next step is to fetch all objects related to this event.
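
For example, the related collections can be loaded with simple queries against the event’s id. This is only a sketch: the model names used here (Ticket, Track, Sponsor, Microlocation) are assumptions and may not match the actual Open Event models exactly.

tickets = Ticket.query.filter_by(event_id=event.id).all()
tracks = Track.query.filter_by(event_id=event.id).all()
sponsors = Sponsor.query.filter_by(event_id=event.id).all()
micro_locations = Microlocation.query.filter_by(event_id=event.id).all()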

Creating the new event

After removing the current event object from “db.session”, we need to remove its “id” attribute and regenerate the “identifier” of the event.

db.session.expunge(event)  # expunge the object from session
make_transient(event)
delattr(event, 'id')
event.identifier = get_new_event_identifier()
db.session.add(event)
db.session.commit()

Updating related objects with the new event

The newly created event has a new “id” and “identifier”. This new “id” is set in the foreign key columns of the related objects, thus establishing a relationship with the new event.

for ticket in tickets:
   ticket_id = ticket.id
   db.session.expunge(ticket)  # expunge the object from session
   make_transient(ticket)
   ticket.event_id = event.id
   delattr(ticket, 'id')
   db.session.add(ticket)
   db.session.commit()

Finishing up

This step of updating related objects is repeated for every related object to create the copy. Thus a new event is created with all of its related objects copied, all through a single endpoint.
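
Since the same expunge/make_transient/reassign pattern applies to every related collection, it can be factored into a small helper. The following is only a sketch under the assumption that each related model has an event_id foreign key; the helper name clone_for_event is hypothetical and not part of the actual server code.

from sqlalchemy.orm import make_transient

def clone_for_event(objects, new_event_id):
    # Clone each related object and attach it to the newly created event.
    for obj in objects:
        db.session.expunge(obj)       # detach from the current session
        make_transient(obj)           # allow the instance to be re-inserted
        obj.event_id = new_event_id   # point the copy at the new event
        delattr(obj, 'id')            # let the database assign a fresh primary key
        db.session.add(obj)
    db.session.commit()

# applied to every related collection of the event, for example:
for collection in (tickets, tracks, sponsors, micro_locations):
    clone_for_event(collection, event.id)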

References

How to clone a sqlalchemy object
https://stackoverflow.com/questions/28871406/how-to-clone-a-sqlalchemy-db-object-with-new-primary-key


Filtering List with Search Manager in Connfa Android App

It is a good practice to provide the facility to filter lists in Android apps to improve the user experience. It often becomes very unpleasant to scroll through an entire list when you want to reach a certain data point. Recently I modified the Connfa app to read the list of speakers from the Open Event Format. In this blog I describe how to add a filtering facility to lists with Search Manager.

First, we declare the search menu so that the widget appears in it.

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/search"
        android:title="Search"
        android:icon="@drawable/search"
        android:showAsAction="collapseActionView ifRoom"
        android:actionViewClass="android.widget.SearchView" />
</menu>

In the above menu item, the collapseActionView attribute allows the SearchView to expand to take up the whole action bar and to collapse back down into a normal action bar item when not in use. Now we create the searchable configuration which defines how the SearchView behaves.

<?xml version="1.0" encoding="utf-8"?>
<searchable
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/app_name"
    android:hint="Search friend">
</searchable>

Also add this configuration to the activity via a <meta-data> tag in the manifest file. Then associate the searchable configuration with the SearchView in the activity class:

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.search_menu, menu);

    SearchManager searchManager = (SearchManager)
                            getSystemService(Context.SEARCH_SERVICE);
    searchMenuItem = menu.findItem(R.id.search);
    searchView = (SearchView) searchMenuItem.getActionView();

    searchView.setSearchableInfo(searchManager.
                            getSearchableInfo(getComponentName()));
    searchView.setSubmitButtonEnabled(true);
    searchView.setOnQueryTextListener(this);

    return true;
}

Implement SearchView.OnQueryTextListener in the activity, which requires overriding two new methods:

@Override
public boolean onQueryTextSubmit(String searchText) {
  
  return true;
}

@Override
public boolean onQueryTextChange(String searchedText) {

   if (mSpeakersAdapter != null) {
       lastSearchRequest = searchedText;
       mSpeakersAdapter.getFilter().filter(searchedText);
   }
   return true;
}

Find the complete implementation in the Connfa app repository.

References

Android Search View documentation – https://developer.android.com/reference/android/widget/SearchView.html


Reset password in Open Event API Server

The addition of the reset password API in the Open Event API Server enables the user to send a forgot-password request to the server so that the user can reset the password. The Reset Password API is a two-step process. The first endpoint allows you to request a token to reset the password; this token is sent to the user via email. The second step is making a PATCH request with the token and the new password to set the new password on the user’s account.

Creating a Reset token

This endpoint is not a JSON API spec based endpoint. A reset token is simply a string of random bits which is stored in a specific column of the users table.

hash_ = random.getrandbits(128)
self.reset_password = str(hash_)

Once the user has completed the password reset using that specific token, the old token is flushed and a new token is generated. These tokens are therefore one-time use only.
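
As a minimal sketch, flushing the used token simply means overwriting the stored value with a freshly generated one once the reset succeeds; the exact place where this happens in the server code may differ.

import random

# overwrite the used token with a new random value so the old link
# can no longer be reused
user.reset_password = str(random.getrandbits(128))
save_to_db(user)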

Requesting a Token

A token can be requested at a specific endpoint: POST /v1/auth/reset-password
The token, embedded in a direct link, is then sent to the registered email.

link = make_frontend_url('/reset-password', {'token': user.reset_password})
send_email_with_action(user, PASSWORD_RESET,     
                       app_name=get_settings()['app_name'], link=link)

Flow with frontend

The flow is broken into two steps, with the frontend acting as the interface to the backend. When the user clicks on “forgot password”, they are redirected to the reset password page on the frontend, which calls the API endpoint on the backend with the email address so that the token can be sent. The email received contains a link to a frontend URL which, when clicked, takes the user to the frontend page for providing the new password. The frontend then sends the new password along with the token to the API server, which completes the password reset.
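
Put together, the two calls made by the frontend could look roughly like the sketch below, written here with the requests library. The endpoints are the ones described above, but the base URL and the exact request body fields are assumptions for illustration, not the server’s documented schema.

import requests

BASE = 'https://api.example.com/v1/auth/reset-password'  # hypothetical server URL

# Step 1: ask the server to email a reset token to the user
requests.post(BASE, json={'email': 'user@example.com'})

# Step 2: after the user follows the emailed link, send the token
# together with the new password chosen on the frontend page
requests.patch(BASE, json={'token': '<token from the email>',
                           'password': '<new password>'})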

Updating Password

Once the user clicks the link in the email, they are asked to provide a new password. This password is sent to the endpoint PATCH /v1/auth/reset-password. The request body carries the token and the new password. The user is identified using the token and the password is updated for that user.

try:
   user = User.query.filter_by(reset_password=token).one()
except NoResultFound:
   return abort(
       make_response(jsonify(error="User not found"), 404)
   )
else:
   user.password = password
   save_to_db(user)

References

  1. Understand Self-service reset password
    https://en.wikipedia.org/wiki/Self-service_password_reset
  2. Python – getrandbits()
    https://docs.python.org/2/library/random.html

Adding EditText With Google Input Option While Sharing In Phimpme App

In the Phimpme Android app a user can share images on multiple platforms. While sharing, we have also included a caption option to enter a description of the image. The caption can be entered using the keyboard as well as the Google Voice Input method. So in this post, I will explain how to add an EditText with the Google Voice Input option.

Let’s get started

Step-1 Add EditText and Mic Button in layout file

<ImageView
       android:layout_width="20dp"
       android:id="@+id/button_mic"
       android:layout_height="20dp"
       android:background="?android:attr/selectableItemBackground"
       android:src="@drawable/ic_mic_black"
       android:scaleType="fitCenter" />
</RelativeLayout>


Caption Option in Share Activity in Phimpme

In Phimpme we have a material design dialog box, so here I am using getTextInputDialogBox(). It prompts the material design dialog box to enter a caption before sharing the image on multiple platforms.

Step-2

Now we can easily get the caption from the EditText using the following code:

if (!captionText.isEmpty()) {
  caption = captionText;
  text_caption.setText(caption);
  captionEditText.setSelection(caption.length());
} else {
  caption = null;
  text_caption.setText(caption);
}


Step – 3 (Now add the Google Voice input option)

To use the Google voice input option I have added a global function in the Utils class. To use it, just call the method with the proper arguments.

public static void promptSpeechInput(Activity activity, int requestCode, View parentView, String promtMsg) {
  Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
  intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
  intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
  intent.putExtra(RecognizerIntent.EXTRA_PROMPT, promtMsg);
  try {
      activity.startActivityForResult(intent, requestCode);
  } catch (ActivityNotFoundException a) {
      SnackBarHandler.show(parentView,activity.getString(R.string.speech_not_supported));
  }

}

Just pass the request code, which is used to receive the speech text in the onActivityResult() method, and pass the prompt message which will be visible to the user.

Step – 4 (Set the text to the caption)

if (requestCode == REQ_CODE_SPEECH_INPUT && data!=null){
       ArrayList<String> result = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
       String voiceInput = result.get(0);
       text_caption.setText(voiceInput);
       caption = voiceInput;
       return;

}

Now we can set the text in the caption string. Here I am appending the recognized text to the existing caption, i.e. if the user enters some text using the EditText and then clicks on the mic button, the extra text is added after the previous text.

So this is how I have used the Google voice input method and made the function available globally.

Resources:


Store Log History for Repositories Registered to Yaydoc

Yaydoc, our automatic documentation generation and deployment project, generates and deploys documentation for each of its registered repositories. For every commit made to a registered repository, there is a corresponding build process running at Yaydoc. These build processes have their own logs which are stored as text files. However, until now, these logs were never visible to the user. So, if a build process failed, the user would never know the reason behind it, leaving them unable to rectify the error.

Hence, there was a need to make these logs available to users. The initial thought was to store only the latest log, overwriting all the previous logs for the repository. However, it was unanimously decided by the developers to store a history of logs for the repository. The main motive behind this was to enable users to compare logs between different commits.

The content from the log files created is stored in a MongoDB collection. Following is the schema defined for the build logs.

const BuildLog = mongoose.model('BuildLog', mongoose.Schema({
  repository: String,  // `full_name` of the repository
  buildNumber: {       // Incrementing number for each build
    type: Number,
    default: 0,
  },
  generate: {
    data: Buffer,      // Generate Logs content
    datetime: Date,    // Date time of generate log creation
  },
  ghpages: {
    data: Buffer,      // Github Pages Logs content
    datetime: Date,    // Date time of github pages log creation
  },
}));

The repository collection is also updated, adding a builds key that stores the number of times the build process has been triggered for a given repository. This key is incremented on every new build and the new value is stored in the build log as buildNumber.

The build process involves a documentation generation and a documentation deployment script. The process of incrementing the build number in the repository occurs when we store the documentation generation logs. After that, Github pages logs are stored when the documentation deployment process is completed.

Since the logs are stored in a text file at the location temp/<email>/<filename>.txt, we had to read the file using NodeJS File system. The file is read synchronously using the fs.readFileSync(filename) function and then stored in the MongoDB collection.

/**
 * Store logs created while generating docs for a given repository
 * @param name: `full_name` of the repository
 * @param filepath: file path of the generate logs
 * @param callback
 */
module.exports.storeGenerateLogs = function (name, filepath, callback) {
  Repository.incrementBuildNumber(name, function (error, repository) {
    var buildlog = new BuildLog({
      repository: name,
      buildNumber: repository.builds,
      generate: {
        data: fs.readFileSync(filepath),
        datetime: new Date()
      }
    });
    buildlog.save(function (error, repository) {
      callback(error, repository);
    });
  });
};

/**
 * Store logs created while deploying docs for a given repository
 * @param name: `full_name` of the repository
 * @param filepath: file of the ghpages deploy logs
 * @param callback
 */
module.exports.storeGithubPagesLogs = function (name, filepath, callback) {
  Repository.getRepositoryByName(name, function (error, repository) {
    if (error) {
      callback(error);
    } else {
      BuildLog.getParticularBuildLog(repository.name, repository.builds, 
      function (error, buildLog) {
        buildLog.ghpages.data = fs.readFileSync(filepath);
        buildLog.ghpages.datetime = new Date();
        buildLog.save(function (error, buildLog) {
          callback(error, buildLog);
        });
      });
    }
  });
};

The stored logs can then be retrieved at two different routes, with /:owner/:name/logs showing a list of logs from at most the last 10 builds and /:owner/:name showing the latest log. Similar to logs generated by Travis, accessing these routes doesn’t require the user to log in to Yaydoc.

/**
 * Get a single repository with a log history of 10
 * @param name: `full_name` of the repository
 * @param callback
 */
module.exports.getRepositoryWithLogs = function (name, callback) {
  Repository.aggregate([
    { $match: {name: name}},
    {
      $lookup: {
        from: 'buildLogs',
        localField: 'name',
        foreignField: 'repository',
        as: 'logs'
      }
    },
    { $unwind: '$logs' },
    { $sort: { 'logs.buildNumber': -1 } },
    { $limit: 10 }
  ]).exec(function (error, results){
    callback(error, results);
  });
};

In order to retrieve a repository along with its logs, we perform an aggregation in MongoDB which is similar to a Left Join in SQL. This is the $lookup aggregation and it performs a left outer join to an unsharded collection in the same database to filter in documents from the “joined” collection for processing. A similar method is used to retrieve the latest log by setting the limit aggregation to 1.

Resources:

  1. MongoDB Aggregation Lookup: https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/
  2. Mongoose Aggregate Constructor: http://mongoosejs.com/docs/api.html#index_Mongoose-Aggregate
  3. NodeJS File System: https://nodejs.org/api/fs.html#fs_fs_readfilesync_path_options

Creating Sajuno Filter in Editor of Phimpme Android

What is Sajuno filter?

Sajuno filter is an image filter which we used in the editor of Phimpme Android application for brightening the skin region of a portrait photo.

How to perform?

Generally, in a portrait photo, the dark regions formed due to shadows, low lighting conditions or even tanning of the skin contain darker reds compared to the greens and blues. So, to fix the darkness of the picture, we need to find the area where the reds are more dominant than the greens and blues. After finding this region of interest, we need to brighten that area in proportion to the difference between the reds and the other colors.

How we implemented in Phimpme Android?

In the Phimpme Android application, we first create a mask satisfying the above conditions. It can be created by subtracting the blues and greens from the reds. The intensity can be varied by adjusting the weight of the reds. The condition expressed programmatically is shown below.


bright = saturate_cast<uchar>((intensity * r - g * 0.4 - b * 0.4));

In the above statement, r, g and b are the red, green and blue values of the pixel respectively. The coefficients can be tweaked a little, but these are the values which we used in the Phimpme Android application. After the above operation, a mask is generated as shown below.

 

This mask has values that correspond to the difference between the reds and the greens and the difference between the reds and the blues. So, we use this mask directly to increase the brightness of the dark-toned skin in the original image: we simply need to add the mask to the original image. This results in the required output image shown below.

 

As you can see, the resultant image has fewer dark shades on the face than the original image. The implementation we used in the Phimpme Android editor is shown below.


double intensity = 0.5 + 0.35 * val;     // 0 < val < 1
dst = Mat::zeros(src.size(), src.type());
uchar r, g, b;
int bright;

for (int y = 0; y < src.rows; y++) {
   for (int x = 0; x < src.cols; x++) {
       r = src.at<Vec3b>(y, x)[0];
       g = src.at<Vec3b>(y, x)[1];
       b = src.at<Vec3b>(y, x)[2];

       bright = saturate_cast<uchar>((intensity * r - g * 0.4 - b * 0.4));
       dst.at<Vec3b>(y, x)[0] =
               saturate_cast<uchar>(r + bright);
       dst.at<Vec3b>(y, x)[1] =
               saturate_cast<uchar>(g + bright);
       dst.at<Vec3b>(y, x)[2] =
               saturate_cast<uchar>(b + bright);
   }
}

Resources:


Showing sample queries in SUSI.AI Bots

We need to give the user a good start to their chat with SUSI.AI. Engaging users with some good skills at the start of the conversation can leave a good impression of SUSI.AI. In the SUSI messenger bots, we show some sample queries to try during the conversation with SUSI.AI. In this blog, SUSI_Tweetbot and SUSI_FBbot are used as examples.

These queries are shown as quick replies i.e. the user can click on any of these sample queries and get an answer from SUSI.AI.  

Facebook:

When the user clicks on the “Start chatting” button, we send a descriptive message about what the user can ask SUSI.AI.

Code snippet used for this step is:

var queryUrl = 'http://api.susi.ai/susi/chat.json?q=' + 'Start+chatting';
var startMessage = '';
// Wait until done and reply
request({
    url: queryUrl,
    json: true
}, function (error, response, body) {
    if (!error && response.statusCode === 200) {
        startMessage = body.answers[0].actions[0].expression;
    } else {
        startMessage = errMessage;
    }
    sendTextMessage(sender, startMessage, 0);
});

A plain text message is not very engaging. To further enhance the user experience, we show some quick reply options. We have finalized some skills to show to the user:

Due to the character limit for the text shown on buttons, we try to show short queries as shown in the above picture. This way the user gets an idea about what type of queries can be asked.

The generic template helps us achieve this feature in the SUSI_FBbot.

The code snippet used:

var messageT = {
               "type": "template",
               "payload": {
                "template_type": "generic",
                "elements": [{
                                    "title": 'You can try the following:',
                                    "buttons": [{                                               
                                               "type":"postback",
                                               "title":"What is FOSSASIA?",                                  
                                               "payload":"What is FOSSASIA?"            
                                            }]
                            }]
                }
            };
sendTextMessage(sender, messageT, 1);

As seen in the code above, each button has a corresponding postback text, so that whenever the button is clicked the postback text is sent to the chat automatically:

This postback text acts as a query to SUSI API which fetches the response from the server and shows it back to the user.

Twitter:

As the SUSI.AI bots must behave consistently across all the available messenger platforms, we bring the same skills available in the SUSI_FBbot to the SUSI_Tweetbot. The quick reply feature provided by the Twitter API helps us accomplish this task.

As in the SUSI_FBbot, a descriptive message is shown to the user first, followed by some quick reply options.

The message_create event helps in adding quick replies:

var msg = {
               "event": {
               "type": "message_create",
               "message_create": {
                   "target": {
                       "recipient_id": senderId
                    },
                    "message_data": {
                        "text": "You can try the following:",
                        "quick_reply": {
                            "type": "options",
                            "options": [{
                                "label": "What is FOSSASIA?",
                                "metadata": "external_id_4"
                            }]
                        }
                    }
                }
           }
    };
T.post('direct_messages/events/new', msg, sent);

One thing to keep in mind while coding is to send the quick reply message after the initial descriptive message, i.e. the code used to send quick replies should be written inside the function that sends the descriptive message, so that it runs only after that step is complete. If we accidentally write the quick reply code outside that function, bugs in the replies by SUSI.AI become very likely.

Resources

  1. Speed up customer service with quick replies and welcome messages by Ian Cairns from Twitter blog.
  2. Link Ads to Messenger, Enhanced Mobile Websites, Payments and More by Seth Rosenberg from Facebook developers blog

Real Time Upload Progress in Phimpme Android

The Phimpme Android application, along with a wonderful gallery, an image editor and a camera section, comes with an option to share images to different connected accounts. For sharing images to different accounts, we have made use of the SDKs provided, which help users share images to multiple accounts at once without having to install other applications on their devices. When the user connects an account and shares an image to any account, we display a snackbar at the bottom indicating that the upload has started and then display the progress of the upload in the notification panel, as depicted in the screenshot below.

In this tutorial, I will be explaining how we achieved this feature of displaying the upload progress in the Phimpme Android application using a Notification handler class.

Step 1

The first thing we need to do is to create an AsyncTask that will handle the upload progress and the notifications in the background without affecting the main UI of the application. This can be done using the UploadProgress class, which is a subclass of the AsyncTask class, as depicted below.

private class UploadProgress extends AsyncTask<Void, Integer, Void> {
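    // A hedged sketch of the three callbacks this class overrides; the
    // bodies below are illustrative placeholders, not the exact Phimpme code.
    @Override
    protected void onPreExecute() {
        // show the "upload started" notification via the notification handler
    }

    @Override
    protected Void doInBackground(Void... params) {
        // perform the upload and publish its progress while it runs
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        // mark the notification as completed (upload passed or failed)
    }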
}

The AsyncTask overrides three methods: onPreExecute, doInBackground and onPostExecute. In the onPreExecute method we make the upload notification visible to the user via the notification handler class.

Step 2

After this, we need to create a notification handler class which will handle the upload progress. We need four methods inside the notification handler class to:

  1. Make the app notification in the notification panel.
  2. To update the progress of the upload.
  3. To display the upload failed progress.
  4. To display the upload passed progress.

The notification can be displayed using the following lines of code:

mNotifyManager = (NotificationManager) ActivitySwitchHelper.getContext().getSystemService(Context.NOTIFICATION_SERVICE);
mBuilder = new NotificationCompat.Builder(ActivitySwitchHelper.getContext());
mBuilder.setContentTitle(ActivitySwitchHelper.getContext().getString(R.string.upload_progress))
      .setContentText(ActivitySwitchHelper.getContext().getString(R.string.progress))
      .setSmallIcon(R.drawable.ic_cloud_upload_black_24dp)
      .setOngoing(true);
mBuilder.setProgress(0, 0, true);
// Issues the notification
mNotifyManager.notify(id, mBuilder.build());

The above code makes use of Android’s NotificationManager class to get the notification service and sets the title and the upload icon which are displayed to the user while the image uploads.

Now we need to update the notification every second to display the real-time upload progress to the user. This can be done using the updateProgress method, which takes the total file size and the amount of data uploaded as parameters.

public static void updateProgress(int uploaded, int total, int percent){
  mBuilder.setProgress(total, uploaded, false);
  mBuilder.setContentTitle(ActivitySwitchHelper.getContext().getString(R.string.upload_progress)+" ("+Integer.toString(percent)+"%)");
  // Issues the notification
  mNotifyManager.notify(id, mBuilder.build());
}

The above update can be performed in the doInBackground method of the AsyncTask described in step 1.

Step 3

After the upload has completed, the onPostExecute method is executed. In it we need to display whether the upload passed or failed, and set the ongoing flag of the notification to false so that the user can dismiss the notification. This can be done using the following lines of code:

mBuilder.setContentText(ActivitySwitchHelper.getContext().getString(R.string.upload_done))
      // Removes the progress bar
      .setProgress(0,0,false)
      .setContentTitle(ActivitySwitchHelper.getContext().getString(R.string.upload_complete))
      .setOngoing(false);
mNotifyManager.notify(0, mBuilder.build());
mNotifyManager.cancel(id);

This is how we created and used the notification handler class to display the upload progress in the Phimpme application. To get the full source code for implementing uploads to multiple accounts and displaying the notification, please refer to the Phimpme Android GitHub repository.

Resources

  1. Google Developer’s Guide – Notification Handling – https://developer.android.com/guide/topics/ui/notifiers/notifications.html
  2. Google Developer’s Guide – AsyncTask in Android – https://developer.android.com/reference/android/os/AsyncTask.html
  3. StackOverflow – Notification Handling – https://stackoverflow.com/questions/31063920/how-to-program-android-notification
  4. GitHub – Phimpme Android Repository – https://github.com/fossasia/phimpme-android/

Implement Sessions API for the ‘admin/sessions’ Route in Open Event Frontend

In Open Event Frontend, under the ‘admin/sessions’ route, the admin can keep track of the sessions. The info shown along with each session includes the speakers for the session, its title, submitted date, start time and end time. To integrate the sessions API, we followed the following approach:

Firstly, we add a sessions model and establish its relationship with the speakers model since we need speaker names also:

export default ModelBase.extend({
  title         : attr('string'),
  subtitle      : attr('string'),
  startsAt      : attr('moment'),
  endsAt        : attr('moment'),
  sessionType   : belongsTo('session-type'),
  microlocation : belongsTo('microlocation'),
  track         : belongsTo('track'),
  speakers      : hasMany('speaker'),
  event         : belongsTo('event'), // temporary
});

After creating the model for sessions, we query for the sessions.
Since the sessions here are classified into sections like ‘pending’, ‘accepted’, ‘rejected’, ‘deleted’, etc., we need to filter the response returned by the query:

if (params.sessions_state === 'pending') {
      filterOptions = [
        {
          name : 'state',
          op   : 'eq',
          val  : 'pending'
        }
      ];
    } else if (params.sessions_state === 'accepted') {
      filterOptions = [
        {
          name : 'state',
          op   : 'eq',
          val  : 'accepted'
        }
      ];
    }

The above code shows how we filter the response on the server side itself. We then pass the filterOptions array to the query as follows:

return this.get('store').query('session', {
      include      : 'event,speakers',
      filter       : filterOptions,
      'page[size]' : 10
});

Once we retrieve the data from the query, we just pass it to our controller to implement the columns. Since we are using a different way to render the Ember data in the columns, the approach goes as follows.
In our controller we do:

export default Controller.extend({
  columns: [
    {
      propertyName   : 'event.name',
      title          : 'Event Name',
      disableSorting : true
    },
    {
      propertyName : 'title',
      title        : 'Title'
    },
    {
      propertyName   : 'speakers',
      template       : 'components/ui-table/cell/cell-speakers',
      title          : 'Speakers',
      disableSorting : true
    }
  ]
});

I have shown only a few columns here; there are others too. We map the propertyName to the attribute returned by the server, and the ‘title’ indicates the column name. We can also create a custom template (as a component) if we want to customize how the data is rendered in the rows and columns. For example, if we want to iterate over the multiple speaker names for a session, we can do:

<div class="ui list">
  {{#each record.speakers as |speaker|}}
    <div class="item">
      {{speaker.name}}
    </div>
  {{/each}}
</div>

As a result, we can make custom templates for a particular property. Another example of formatting the response is:

<span>
  {{moment-format record.startsAt 'MMMM DD, YYYY - HH:mm A'}}
  {{moment-format record.endsAt 'MMMM DD, YYYY - HH:mm A'}}
</span>

Thus, we get the required sessions view after the integration of the API.

Resources:

Ember data Official guide

Blog on medium: By “Embedly” publication


Implementing a Custom Serializer for Yaydoc

At the crux of it, Yaydoc comprises a number of specialized bash scripts which perform various tasks such as generating documentation and publishing it to GitHub Pages, Heroku, etc. These bash scripts also serve as the central communication portal for the various technologies used in Yaydoc. The core generator is composed of several Python modules extending the Sphinx documentation generator. The web interface has been built using Node, Express, etc. Yaydoc also contains a Python package dedicated to reading configuration options from a YAML file.

Till now the options were read and then converted to strings irrespective of the actual data type, based on some simple rules.

  • Lists were converted to comma separated strings (nested lists were not handled).
  • Boolean values were converted to true | false respectively.
  • None was converted to an empty string.

While these simple rules were enough at the time, it was certain that a better solution would be required as the project grew in size. The approach was also getting tough to maintain because a lot of hard-coding was required when we wanted to convert those strings back to Python objects. To handle these cases, I decided to create a custom serialization format which would be simple for our use cases and easily parseable from a bash script, yet could handle all edge cases. The format is mostly similar to its earlier form, apart from lists, where it takes heavy inspiration from the Python language itself.

With the new implementation, lists are converted to comma-separated strings enclosed in square brackets. This allows us to encode the type of the object in the string so that it can later be decoded, and handles the cases of an empty list or a list with a single element well. The implementation also handles nested lists.

Two functions were created, namely serialize and deserialize, which detect the type of the corresponding object using several heuristics and apply the proper serialization or deserialization rule.

def serialize(value):
    """
    Serializes a python object to a string.
    None is serialized to an empty string.
    bool values are converted to strings True False.
    list or tuples are recursively handled and are comma separated.
    """
    if value is None:
        return ''
    if isinstance(value, str):
        return value
    if isinstance(value, bool):
        return "true" if value else "false"
    if isinstance(value, (list, tuple)):
        return '[' + ','.join(serialize(_) for _ in value) + ']'
    return str(value)

To deserialize we also had to handle the case of nested lists. The following snippet does that properly.

def deserialize(value, numeric=True):
    """
    Deserializes a string to a python object.
    Strings True False are converted to bools.
    `numeric` controls whether strings should be converted to
    ints or floats if possible. List strings are handled recursively.
    """
    if value.lower() in ("true", "false"):
        return value.lower() == "true"
    if numeric and _is_numeric(value):
        return _to_numeric(value)
    if value.startswith('[') and value.endswith(']'):
        split = []
        element = ''
        level = 0
        for c in value:
            if c == '[':
                level += 1
                if level != 1:
                    element += c
            elif c == ']':
                if level != 1:
                    element += c
                level -= 1
            elif c == ',' and level == 1:
                split.append(element)
                element = ''
            else:
                element += c
        if split or element:
            split.append(element)
        return [deserialize(_, numeric) for _ in split]
    return value
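
As a quick illustration of the round trip (a sketch which assumes the two functions above are available and that the private numeric helpers behave as their names suggest):

value = ['sphinx_rtd_theme', ['en', 'de'], True, 3]
encoded = serialize(value)
print(encoded)      # [sphinx_rtd_theme,[en,de],true,3]
decoded = deserialize(encoded)
print(decoded)      # ['sphinx_rtd_theme', ['en', 'de'], True, 3]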

With this new approach, we are able to handle many more cases compared to the previous implementation, and it is much more robust. It does, however, still lack certain features such as serializing dictionaries. That may be implemented in the future if need be.

Resources
