Implementing Scheduler Actions on Open Event Frontend

With the functionality to display scheduled sessions added to Open Event Frontend, the read-only part of the scheduler was complete. What remained were the write actions, i.e., the actual scheduling of sessions, which event organizers do by deciding their timings, duration and venue.

First of all, these actions required the editable flag of the fullcalendar plugin to be set to true, which allows the displayed sessions to be dragged and dropped. Once this was enabled, the next task was to embed data in each of the unscheduled sessions so that, when they are dropped on the fullcalendar space, the calendar recognizes them and places them at the appropriate location. For this, they had to be jQuery UI draggables and carry an "event" data object. This was accomplished by the following code:

this.$().draggable({
  zIndex         : 999,
  revert         : true,      // will cause the event to go back to its
  revertDuration : 0  //  original position after the drag
});

this.$().data('event', {
  title    : this.$().text().replace(/\s\s+/g, ' '), // use the element's text as the event title
  id       : this.$().attr('id'),
  serverId : this.get('session.id'),
  stick    : true, // maintain when user navigates (see docs on the renderEvent method)
  color    : this.get('session.track.color')
});

Here, "this" refers to each unscheduled session component. Note that the session color is fetched via the corresponding session track. Once the unscheduled sessions contain the relevant data and are of the right type (i.e., jQuery UI draggables), they're ready to be dropped on the fullcalendar space.

Now, when an unscheduled session is dropped on the fullcalendar space, fullcalendar's eventReceive callback is triggered after its drop callback. In the eventReceive callback, the code removes the session from the unscheduled sessions' list, so it disappears from there and stays attached to the fullcalendar space. The code in the drop callback then makes a PATCH request to Open Event Server with the relevant data, i.e., the start and end times as well as the microlocation, which updates the corresponding session on the server.

Similarly, another callback is triggered when an event is resized, i.e., when its duration is changed; this again sends a corresponding session PATCH request to the server. Furthermore, the functionality to pop a scheduled event out of the calendar and return it to the unscheduled sessions' list is also implemented, just like in Eventyay version 1. For this, a cross button embedded in each scheduled session is provided; clicking it pops the session out of the calendar, adds it back to the unscheduled sessions list and, again, sends a corresponding PATCH request to the server.

After the response to such a request arrives, a notification is displayed on the screen informing the user whether the action was successful. The main PATCH functionality lives in a separate function which the different callbacks call as needed, which increases code reusability:

updateSession(start, end, microlocationId, sessionId) {
    let payload = {
      data: {
        attributes: {
          'starts-at' : start ? start.toISOString() : null,
          'ends-at'   : end ? end.toISOString() : null
        },
        relationships: {
          microlocation: {
            data: {
              type : 'microlocation',
              id   : microlocationId
            }
          }
        },
        type : 'session',
        id   : sessionId
      }
    };

    let config = {
      skipDataTransform: true
    };
    return this.get('loader')
      .patch(`sessions/${sessionId}`, JSON.stringify(payload), config)
      .then(() => {
        this.get('notify').success('Changes have been made successfully');
      })
      .catch(reason => {
        this.set('error', reason);
        this.get('notify').error(`Error: ${reason}`);
      });
  },

This completes the scheduler implementation on Open Event Frontend. Here is how it looks in action:

[GIF: scheduler actions]

Resources

Open Event Server – Export Speakers as PDF File

FOSSASIA‘s Open Event Server is the REST API backend for the event management platform, Open Event. Here, event organizers can create their events, add tickets for them and manage all aspects from the schedule to the speakers. Once an organizer makes an event public, others can view it and buy tickets if interested.

The organizer can see all the speakers in a detailed view in the event management dashboard, along with their statuses. The possible statuses are pending, accepted and rejected. The organizer can also take actions such as editing the speakers.

If the organizer wants to download the list of all the speakers as a PDF file, he or she can do it very easily by simply clicking on the Export As PDF button in the top right-hand corner.

Let us see how this is done on the server.

Server side – generating the Speakers PDF file

Here we will be using the pisa package to convert HTML to PDF. It is an html2pdf converter that uses the ReportLab Toolkit, html5lib and pyPdf. It supports HTML5 and CSS 2.1 (and some of CSS 3), and since it is written in pure Python it is platform independent.

from xhtml2pdf import pisa

We have a utility method create_save_pdf which creates and saves PDFs from HTML. It takes the following arguments:

  • pdf_data – the rendered HTML content which has to be converted to PDF
  • key – the upload key, used to build the final file name and upload path
  • dir_path – the temporary directory in which the PDF is first created

It returns the newly formed PDF file. The code is as follows:

def create_save_pdf(pdf_data, key, dir_path='/static/uploads/pdf/temp/'):
    filedir = current_app.config.get('BASE_DIR') + dir_path

    if not os.path.isdir(filedir):
        os.makedirs(filedir)

    filename = get_file_name() + '.pdf'
    dest = filedir + filename

    file = open(dest, "wb")
    pisa.CreatePDF(io.BytesIO(pdf_data.encode('utf-8')), file)
    file.close()

    uploaded_file = UploadedFile(dest, filename)
    upload_path = key.format(identifier=get_file_name())
    new_file = upload(uploaded_file, upload_path)
    # Removing old file created
    os.remove(dest)

    return new_file

The HTML file is formed using the render_template method of Flask. This method takes the HTML template and its required variables as arguments. In our case, we pass in 'pdf/speakers_pdf.html' (the template) and speakers, where speakers is the list of speakers to be included in the PDF file. In the template, we loop through each item of speakers and print the name, email, list of sessions, mobile number, a short biography, organisation and position. All these fields form a row in the table; hence, each speaker is a row in our PDF file.
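To give an idea of how the two pieces fit together, the export code roughly does something like the following (a sketch only; the helper name, key and directory values below are illustrative placeholders, not the exact server code):

from flask import render_template

# Hypothetical glue code: render the Jinja template for the speakers list and
# hand the resulting HTML to create_save_pdf() shown above.
speakers = get_speakers_of_event(event_id)  # assumed helper
pdf_html = render_template('pdf/speakers_pdf.html', speakers=speakers)
pdf_url = create_save_pdf(pdf_html,
                          key='exports/{identifier}/speakers.pdf',   # placeholder key
                          dir_path='/static/uploads/pdf/temp/')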

The various columns are as follows:

<thead>
<tr>
   <th>
       {{ ("Name") }}
   </th>
   <th>
       {{ ("Email") }}
   </th>
   <th>
       {{ ("Sessions") }}
   </th>
   <th>
       {{ ("Mobile") }}
   </th>
   <th>
       {{ ("Short Biography") }}
   </th>
   <th>
       {{ ("Organisation") }}
   </th>
   <th>
       {{ ("Position") }}
   </th>
</tr>
</thead>

A snippet of the code which handles iterating over the speakers’ list and forming a row is as follows:

{% for speaker in speakers %}
   <tr class="padded" style="text-align:center; margin-top: 5px">
       <td>
           {% if speaker.name %}
               {{ speaker.name }}
           {% else %}
               {{ "-" }}
           {% endif %}
       </td>
       <td>
           {% if speaker.email %}
               {{ speaker.email }}
           {% else %}
               {{ "-" }}
           {% endif %}
       </td>
       <td>
           {% if speaker.sessions %}
               {% for session in speaker.sessions %}
                   {{ session.name }}<br>
               {% endfor %}
           {% else %}
               {{ "-" }}
           {% endif %}
       </td>
      …. So on
   </tr>
{% endfor %}

The full template can be found here.

Obtaining the Speakers PDF file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/speakers/pdf

Here, event_identifier is the unique identifier of the event. This endpoint starts a Celery task on the server to export the speakers of the event as a PDF file, and returns the URL of the task, which can be used to check the status of the export. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL to check the status of the Celery task. If the task has completed successfully, he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server is:

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.
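Putting the two endpoints together, a client can drive the whole export flow roughly as follows (a sketch using the Python requests library; the base URL and authorization header are placeholders):

import time

import requests

BASE = 'https://api.example.com'            # placeholder server URL
HEADERS = {'Authorization': 'JWT <token>'}  # placeholder auth header

# 1. Start the speakers export task for event 1
task = requests.get(BASE + '/v1/events/1/export/speakers/pdf',
                    headers=HEADERS).json()

# 2. Poll the returned task URL until the Celery task finishes
while True:
    status = requests.get(BASE + task['task_url'], headers=HEADERS).json()
    if status['state'] in ('SUCCESS', 'FAILURE'):
        break
    time.sleep(2)

# 3. On success, download the generated file from the reported URL
if status['state'] == 'SUCCESS':
    exported = requests.get(BASE + status['result']['download_url'], headers=HEADERS)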

Resources

Implementing Unscheduled Sessions List for Event Scheduler

Until recently, Open Event Server didn’t allow the storage of unscheduled sessions. However, supporting unscheduled sessions was necessary so that event organizers can keep track of them and schedule them easily. It also allows organizers to remove scheduled sessions from the scheduler and place them back in the unscheduled sessions list, so that they can be scheduled later. Since the unscheduled sessions list was also present in Eventyay version 1, it was decided to have the same in version 2.

The first step was to enable the storage of unscheduled sessions on the server. For this, the starts-at and ends-at fields of the session model were made non-required (earlier they were mandatory). Once this change was done, the next step was to fetch the list of unscheduled sessions from the server on the frontend. Unscheduled sessions are the ones whose starts-at and ends-at fields are null. The speakers’ details also needed to be fetched so that their names could be shown along with the sessions’ titles, in accordance with Eventyay version 1. Thus, the following filter options were used to fetch the unscheduled sessions:

let unscheduledFilterOptions = [
      {
        and: [
          {
            name : 'starts-at',
            op   : 'eq',
            val  : null
          },
          {
            name : 'ends-at',
            op   : 'eq',
            val  : null
          }
        ]
      }
];
 
let unscheduledSessions = await eventDetails.query('sessions', {
      include : 'speakers,track',
      filter  : unscheduledFilterOptions
    });

 

This gave us the list of unscheduled sessions on the frontend. The next step was to display this list to the event organizer, so the scheduler’s Handlebars template file was modified appropriately. The colors and sizes were chosen so that the list looks similar to the one in Eventyay version 1. Also, the Ember add-on ember-drag-drop was used to make the unscheduled session components draggable, so that they can ultimately be scheduled on the scheduler. After installing this package and making the necessary changes in the project’s package.json file, the component file for unscheduled sessions was modified to adapt to the draggable components’ UI. This was the final step and completed the implementation of listing unscheduled sessions.

[GIF: unscheduled sessions list]

Resources

Open Event Frontend – Events Explore Page

This blog post illustrates how the events explore page is implemented in the Open Event Frontend web app. The user can land on the explore page by clicking on the Browse Events button in the top panel of the home page.

Here, the user can use the various filter options provided to search for events as per their requirements. They can filter by category, sub-category of each category, event type and date range. A nice feature here is that the user can pick from start date range options such as today, tomorrow, this week, this weekend, next week and many more; if none of these fits their needs, they can use custom dates as well. The user can also filter events by location, which is autocompleted using the Google Maps API. Thus, searching for events is fast, easy and fun.

Let us see how this has been implemented.

Implementation

The explore route has a method _loadEvents(params). Here, params holds the various query parameters for filtering the events. This method forms the query, sends it to the server and returns the list of events returned by the server. The server uses Flask-REST-JSONAPI, which has a very flexible filtering system that is completely tied to the data layer used by the ResourceList manager. More information about this can be found here.

So, the filters are formed using syntax specified in the link mentioned above. We form an array filterOptions which stores the various filters. The default filter is that the event should be published:

let filterOptions = [
 {
   name : 'state',
   op  : 'eq',
   val  : 'published'
 }
];

Then we check whether each filter option is present, and if so, we add it to filterOptions. An example is as follows:

if (params.category) {
 filterOptions.push({
   name : 'event-topic',
   op  : 'has',
   val  : {
     name : 'name',
     op : 'eq',
     val : params.category
   }
 });
}

This is repeated for sub_category, event_type, location, start_date and end_date. An event is considered to fulfil the date filter if it satisfies any one of the following conditions:

  • If both start_date and end_date are given:
    • The event start date is after the filter start date and before the filter end date.
    • Or, the event end date is after the filter start date and before the filter end date.
    • Or, the event start date is before the filter start date and the event end date is after the filter end date.
  • If only start_date is given, then the event start date is after the filter start date or the event end date is after the filter start date.

The code for this can be found here. For the date ranges mentioned above (today, tomorrow, etc.), the start and end dates are calculated using the moment.js library and then passed on as params.

The filteredEvents are passed in the route’s model:

async model(params) {
 return {
   eventTypes     : await this.store.findAll('event-type'),
   eventTopics    : await this.store.findAll('event-topic', { include: 'event-sub-topics' }),
   filteredEvents : await this._loadEvents(params)
 };
}

The variable is set in the controller, and changes to the query params are observed; the method _loadEvents is called whenever the query params change.

setupController(controller, model) {
 this._super(...arguments);
 controller.set('filteredEvents', model.filteredEvents);
 this.set('controller', controller);
},

actions: {
 async queryParamsDidChange(change, params) {
   if (this.get('controller')) {
     this.get('controller').set('filteredEvents', await this._loadEvents(params));
   }
 }
}

The template iterates over the filteredEvents and displays each one in a card.

Resources

Open Event Server – Export Sessions as PDF File

FOSSASIA‘s Open Event Server is the REST API backend for the event management platform, Open Event. Here, event organizers can create their events, add tickets for them and manage all aspects from the schedule to the speakers. Once an organizer makes an event public, others can view it and buy tickets if interested.

The organizer can see all the sessions in a detailed view in the event management dashboard, along with their statuses. The possible statuses are pending, accepted, confirmed and rejected. The organizer can take actions such as accepting or rejecting the sessions.

If the organizer wants to download the list of all the sessions as a PDF file, he or she can do it very easily by simply clicking on the Export As PDF button in the top right-hand corner.

Let us see how this is done on the server.

Server side – generating the Sessions PDF file

Here we will be using the pisa package to convert HTML to PDF. It is an html2pdf converter that uses the ReportLab Toolkit, html5lib and pyPdf. It supports HTML5 and CSS 2.1 (and some of CSS 3), and since it is written in pure Python it is platform independent.

from xhtml2pdf import pisa

We have a utility method create_save_pdf which creates and saves PDFs from HTML. It takes the following arguments:

  • pdf_data – the rendered HTML content which has to be converted to PDF
  • key – the upload key, used to build the final file name and upload path
  • dir_path – the temporary directory in which the PDF is first created

It returns the newly formed PDF file. The code is as follows:

def create_save_pdf(pdf_data, key, dir_path='/static/uploads/pdf/temp/'):
    filedir = current_app.config.get('BASE_DIR') + dir_path

    if not os.path.isdir(filedir):
        os.makedirs(filedir)

    filename = get_file_name() + '.pdf'
    dest = filedir + filename

    file = open(dest, "wb")
    pisa.CreatePDF(io.BytesIO(pdf_data.encode('utf-8')), file)
    file.close()

    uploaded_file = UploadedFile(dest, filename)
    upload_path = key.format(identifier=get_file_name())
    new_file = upload(uploaded_file, upload_path)
    # Removing old file created
    os.remove(dest)

    return new_file

The HTML file is formed using the render_template method of Flask. This method takes the HTML template and its required variables as arguments. In our case, we pass in 'pdf/sessions_pdf.html' (the template) and sessions, where sessions is the list of sessions to be included in the PDF file. In the template, we loop through each item of sessions and check whether it is deleted. If it is not deleted, we print its title, state, list of speakers, track, creation time and whether an email has been sent for it. All these fields form a row in the table; hence, each session is a row in our PDF file.

The various columns are as follows:

<thead>
<tr>
   <th>
       {{ ("Title") }}
   </th>
   <th>
       {{ ("State") }}
   </th>
   <th>
       {{ ("Speakers") }}
   </th>
   <th>
       {{ ("Track") }}
   </th>
   <th>
       {{ ("Created At") }}
   </th>
   <th>
       {{ ("Email Sent") }}
   </th>
</tr>
</thead>

A snippet of the code which handles iterating over the sessions list and forming a row is as follows:

{% for session in sessions %}
   {% if not session.deleted_at %}
       <tr class="padded" style="text-align:center; margin-top: 5px">
           <td>
               {% if session.title %}
                   {{ session.title }}
               {% else %}
                   {{ "-" }}
               {% endif %}
           </td>
           <td>
               {% if session.state %}
                   {{ session.state }}
               {% else %}
                   {{ "-" }}
               {% endif %}
           </td>
           <td>
               {% if session.speakers %}
                   {% for speaker in session.speakers %}
                       {{ speaker.name }}<br>
                   {% endfor %}
               {% else %}
                   {{ "-" }}
               {% endif %}
           </td>
          ….. And so on
       </tr>
   {% endif %}
{% endfor %}

The full template can be found here.

Obtaining the Sessions PDF file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/sessions/pdf

Here, event_identifier is the unique identifier of the event. This endpoint starts a Celery task on the server to export the sessions of the event as a PDF file, and returns the URL of the task, which can be used to check the status of the export. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL to check the status of the Celery task. If the task has completed successfully, he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server is:

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.
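Internally, the export endpoint above only enqueues the work; a Celery task performs the actual rendering and upload. A simplified sketch of what such a task might look like is shown below (the task name, helper function and upload key are illustrative assumptions, not the exact server code):

from flask import render_template

@celery.task(name='export.sessions.pdf')           # hypothetical task name
def export_sessions_pdf_task(event_id):
    # Fetch the event's sessions, render the Jinja template shown above
    # and store the resulting PDF via create_save_pdf()
    sessions = get_sessions_of_event(event_id)      # assumed helper
    html = render_template('pdf/sessions_pdf.html', sessions=sessions)
    download_url = create_save_pdf(html,
                                   key='exports/{identifier}/sessions.pdf',   # placeholder key
                                   dir_path='/static/uploads/pdf/temp/')
    return {'download_url': download_url}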

Resources

Upgrading Open Event to Use Sendgrid API v3

Sendgrid recently upgraded its web API for sending emails and deprecated support for the previous versions. As a result, Open Event Server’s mail sending tasks started failing, because the requests they were sending to Sendgrid were no longer being processed. On top of that, it was later found that the existing Sendgrid API key on the development server had expired. This had to be fixed at the earliest, because emails are a core part of Open Event’s functionality.

The existing way of sending emails via Sendgrid used to hit the endpoint “https://api.sendgrid.com/api/mail.send.json” with the following payload structure:

payload = {
    'to': to,
    'from': email_from,
    'subject': subject,
    'html': html
}

Also, a header "Authorization": "Bearer <api-key>" accompanied the above payload. However, Sendgrid changed the payload structure to the following format:

{
  "personalizations": [
    {
      "to": [
        {"email": "[email protected]"}
      ]
    }
  ],
  "from": {
    "email": "[email protected]"
  },
  "subject": "Hello, World!",
  "content": [
    {
      "type": "text/plain",
      "value": "Heya!"
    }
  ]
}

Furthermore, the endpoint was changed to “https://api.sendgrid.com/v3/mail/send”. To incorporate all these changes with the minimum number of modified lines in the codebase, the structure change itself had to happen at a fairly low level. This was because there are lots of features in the server that perform a wide variety of email actions, so changing all of them would not have been efficient. The perfect place to implement the API changes was therefore the function send_email() in mail.py, because all other higher-level email functions are built on top of it. But this was not the only change, because send_email() itself uses another function, send_email_task() in tasks.py, specifically for sending email via Sendgrid. So, in conclusion, the header modifications were made in send_email(), while the payload structure and endpoint modifications were made within send_email_task(). This brought the server codebase back on track to send emails successfully. Finally, the key for the development server was also renewed and added to its settings in the Heroku Postgres database.
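For reference, a minimal sketch of what the updated low-level call might look like against the v3 endpoint is shown below (using the requests library; the function name and variables are illustrative rather than the exact server code):

import requests

def send_email_via_sendgrid(to, email_from, subject, html, api_key):
    # Build the v3-style payload described above
    payload = {
        'personalizations': [{'to': [{'email': to}]}],
        'from': {'email': email_from},
        'subject': subject,
        'content': [{'type': 'text/html', 'value': html}]
    }
    headers = {'Authorization': 'Bearer ' + api_key}
    # POST to the new v3 mail send endpoint
    return requests.post('https://api.sendgrid.com/v3/mail/send',
                         json=payload, headers=headers)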


Resources

Add check-in restrictions to Open Event Organizer App

The Open Event Organizer Android App has the ability to scan and check-in attendees holding different ticket types for an event. But often there are cases when the attendees holding a particular ticket type need to be check-in restricted. It can be because of reasons such as facilitating entry of premium ticket holders before general ticket holders, or not allowing general ticket holders in a VIP queue.

To facilitate this, we have a field called ‘is-checkin-restricted’ on the Ticket entity. When it is set to true, any check-ins for holders of that particular ticket type will be restricted. Let’s look at how this was implemented in the Orga App.


Even though this option needed to be present in the settings screen, it also had to be dynamic in nature, as the ticket types themselves are dynamic. This meant that we couldn’t achieve it using the plain old preference themes; we had to create a whole new fragment for it and try to make it look as similar to a preference screen as possible.

We need the following to create a dynamic tickets fragment:

  1. The fragment itself, which should implement the Progressive and Erroneous interfaces to show progress and errors.
  2. An Adapter and a ViewHolder
  3. A ViewModel

The CheckinRestriction fragment is mostly similar to the TicketsFragment, except for the part where we restrict check-in. In the fragment we provide a checkbox at the top to restrict check-in for all ticket types, so we need to set up click listeners not just for the checkbox but for the whole view as well, like this:

binding.restrictAll.setOnClickListener(v -> {
    restrictAll(!binding.restrictAllCheckbox.isChecked());
});
binding.restrictAllCheckbox.setOnClickListener(v -> {
    // the checkbox state has already been toggled by the click itself
    restrictAll(binding.restrictAllCheckbox.isChecked());
});

The restrictAll() method restricts check-in for all ticket types by updating the view and updating the tickets using the ViewModel:

private void restrictAll(boolean toRestrict) {
   binding.restrictAllCheckbox.setChecked(toRestrict);
   ticketSettingsViewModel.updateAllTickets(toRestrict);
   ticketsAdapter.notifyDataSetChanged();
}

It’s also important to note here how we are handling the clicks in the ViewHolder for each ticket item:

public void bind(Ticket ticket) {
   binding.setTicket(ticket);
   View.OnClickListener listener = v -> {
       ticket.isCheckinRestricted = ticket.isCheckinRestricted == null || !ticket.isCheckinRestricted;
       binding.ticketCheckbox.setChecked(ticket.isCheckinRestricted);
       updateTicketAction.push(ticket);
       binding.executePendingBindings();
   };
   itemView.setOnClickListener(listener);
   binding.ticketCheckbox.setOnClickListener(listener);
}

The following method is run each time to check whether all the tickets are restricted, and ticks the ‘restrict all’ checkbox accordingly:

private void checkRestrictAll() {
   if (ticketSettingsViewModel.getTickets() == null) {
       return;
   }
    boolean restrictAll = true;
    for (Ticket ticket : ticketSettingsViewModel.getTickets().getValue()) {
       if (ticket.isCheckinRestricted == null || !ticket.isCheckinRestricted) {
           restrictAll = false;
           break;
       }
   }
   binding.restrictAllCheckbox.setChecked(restrictAll);
}

This is all the code we need, apart from the boilerplate, to build a working check-in restrictions fragment.

Read more of the code here

Resources:

Implementing Checkout Times for Attendees on Open Event Server

As of this writing, Open Event Server did not have the functionality to add, manipulate and delete the checkout times of attendees. Event organizers should be able to log and update attendee checkout times, so it was decided to implement this functionality on the server. This boiled down to adding a checkout_times attribute to the ticket holder model.

So the first step was to add a string column named checkout_times to the ticket holder database model, since this is where the comma-separated values (CSV) of attendee checkout times would be stored. An additional boolean attribute named is_checked_out was also added to convey whether an attendee has checked out or not; roughly, the new model lines are sketched below. After the addition of these attributes to the model, we saved the file and performed the required database migration.
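A sketch of the two new columns (the class and table names here are written from memory, and the default value is an assumption rather than the exact server code):

class TicketHolder(db.Model):
    __tablename__ = 'ticket_holders'
    # ... existing columns ...

    # comma-separated list of checkout timestamps for this attendee
    checkout_times = db.Column(db.String)
    # whether the attendee is currently checked out (assumed default)
    is_checked_out = db.Column(db.Boolean, default=False)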

To create the migration file for the above changes:

$ python manage.py db migrate

To upgrade the database instance:

$ python manage.py db upgrade

Once the migration was done, the API schema file was modified accordingly:

class AttendeeSchemaPublic(SoftDeletionSchema):
    """
    Api schema for Ticket Holder Model
    """
    ...
    checkout_times = fields.Str(allow_none=True)  # newly added
    is_checked_out = fields.Boolean()             # newly added
    ...

After the schema change, the attendees API file needed code to incorporate these new fields. The way it works is that when the server receives an update request, it appends the current time to the checkout times CSV to record a checkout, so the checkout_times field is essentially read-only for clients:

from datetime import datetime
...

class AttendeeDetail(ResourceDetail):
    def before_update_object(self, obj, data, kwargs):
        ...
        if 'is_checked_out' in data and data['is_checked_out']:
            ...
        else:
            if obj.checkout_times and data['checkout_times'] not in \
                    obj.checkout_times.split(","):
                data['checkout_times'] = '{},{},{}'.format(
                    obj.checkout_times,
                    data['checkout_times'],
                    datetime.utcnow())
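From a client’s point of view, checking an attendee out then becomes a plain JSON:API PATCH on the attendee resource. Below is a hedged sketch using the requests library; the URL, resource type and header values are assumptions based on the usual conventions of the API, not taken from the server code:

import requests

payload = {
    'data': {
        'type': 'attendee',                       # assumed JSON:API resource type
        'id': '42',
        'attributes': {'is-checked-out': True}    # assumed dasherized attribute name
    }
}
headers = {
    'Authorization': 'JWT <token>',               # placeholder token
    'Content-Type': 'application/vnd.api+json'
}
# The before_update_object hook shown above records the checkout time
# in the attendee's checkout_times CSV when it processes this update.
requests.patch('https://api.example.com/v1/attendees/42',
               json=payload, headers=headers)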

 

This completes the implementation of checkout times, so now organizers can process attendee checkouts on the server with ease.

Resources

Adding Tickets Relationship with Discount Codes in Open Event Server

Recently (as of this writing), it was discovered that the relationship between discount codes and tickets was not implemented yet in Open Event Server. It turns out that the server has two types of discount codes – discount codes for entire events and discount codes for individual tickets of a specific event. More information on how discount code themselves are implemented in the server can be found in this blog post from 2017 – Discount Codes in Open Event Server.

So it was decided that the tickets relationship would exist only for discount codes that have the DiscountCodeSchemaTicket schema, since those are the discount codes used for individual tickets. As a first step, the tickets attribute of the discount code model was removed, as it was redundant; the already implemented used_for attribute did the same job, with better validation. At the same time, a discount code relationship was added to the ticket model.

In the ticket model file:

discount_code_id = db.Column(db.Integer, db.ForeignKey('discount_codes.id', ondelete='CASCADE'))
discount_code = db.relationship('DiscountCode', backref="tickets")

Also, in the __init__ constructor:

def __init__(self, ..., discount_code_id=None, ...):
    ...
    ...
    self.discount_code_id = discount_code_id

After that, we added a discount_code_id field in the ticket schema file:

discount_code_id = fields.Integer(allow_none=True)

In this file, we also removed the redundant tickets field.

Now, we migrated the Open Event Server database via the following commands:

$ python manage.py db migrate

then

$ python manage.py db upgrade

Next, we added the tickets relationship in the discount code schema file. Note that this is a one-to-many relationship: one discount code (for tickets) can be mapped to many tickets. Here is the code for that relationship, under the DiscountCodeSchemaTicket class:

tickets = Relationship(attribute='tickets',
                       self_view='v1.discount_code_tickets',
                       self_view_kwargs={'id': '<id>'},
                       related_view='v1.ticket_list',
                       related_view_kwargs={'discount_code_id': '<id>'},
                       schema='TicketSchemaPublic',
                       many=True,
                       type_='ticket')

For this, we, of course, imported the TicketSchemaPublic in this file first. After that, we created a DiscountCodeTicketRelationship class in the discount codes API file:

class DiscountCodeTicketRelationship(ResourceRelationship):
    """
    DiscountCode Ticket Relationship
    """
    decorators = (jwt_required,)
    methods = ['GET', 'PATCH']
    schema = DiscountCodeSchemaTicket
    data_layer = {'session': db.session, 'model': DiscountCode}

The next step was to add the query code to fetch the tickets related to a particular discount code from the database. For this, we added the following snippet to the query() method of the TicketList class in the tickets API file:

if view_kwargs.get('discount_code_id'):
    discount_code = safe_query(self, DiscountCode, 'id', view_kwargs['discount_code_id'], 'discount_code_id')
    # discount_code - ticket :: one-to-many relationship
    query_ = self.session.query(Ticket).filter_by(discount_code_id=discount_code.id)

The only thing that remains now is adding the API routes for this relationship. We do that in the project’s __init__.py file:

api.route(TicketList, 'ticket_list', '/events/<int:event_id>/tickets',
          '/events/<event_identifier>/tickets', '/ticket-tags/<int:ticket_tag_id>/tickets',
          '/access-codes/<int:access_code_id>/tickets', '/orders/<order_identifier>/tickets',
          '/discount-codes/<int:discount_code_id>/tickets')

api.route(DiscountCodeTicketRelationship, 'discount_code_tickets',
          '/discount-codes/<int:id>/relationships/tickets')

 

Many routes already map to TicketList; we added the one that comes from the discount codes API. Now we can use Postman to check this relationship, and it indeed works as expected.
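With the new route in place, fetching all the tickets mapped to a given discount code is a single request; for example, with the requests library (the base URL, auth header and printed attribute are placeholders):

import requests

BASE = 'https://api.example.com'              # placeholder server URL
headers = {'Authorization': 'JWT <token>'}    # placeholder auth header

# Related-resource view registered above: all tickets using discount code 1
resp = requests.get(BASE + '/v1/discount-codes/1/tickets', headers=headers)
for ticket in resp.json()['data']:
    print(ticket['id'], ticket['attributes'].get('name'))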


References:

Implementing Event Average Rating with SQLAlchemy

While implementing Open Event Server version 2, we decided to have a better way of ranking events by their quality. To define the “quality” of events, the programmers decided to accumulate the feedbacks of specific events and take the average of the ratings involved. Thus, the average rating of an event proves to be a good (enough) measure of its quality. While there are many ways to implement aggregate relationships in an app, here I demonstrate a rather modern methodology which insists on storing such aggregates once they’re computed.

Since there is always a space-time/computation tradeoff in software development, this task was no exception. At first, the straightforward idea that came to my mind was to query the Postgres database every time a request for average rating was made. This sounds simple, but with hundreds of events stored on a server, and potentially thousands of users querying for events, this seemed to be a computationally expensive approach. It was costly because the average rating aggregate would be computed for each request, and there could potentially be thousands of such concurrent requests. Therefore, a better idea is to compute the aggregate once, store it in the database (compromising space in the tradeoff mentioned above, but saving a large amount of computation at the same time), and update only when a change is made. In our specific case, the update should happen only when a new rating is added, a rating is deleted or an existing rating is modified. Since the advantages outnumbered the disadvantages, this was the strategy to be implemented.

The first step in implementing average rating was to modify the database model of events accordingly. For this, I performed the necessary imports in the events’ database model file:

from sqlalchemy_utils import aggregated
from app.models.feedback import Feedback

Now comes the tricky part. We want an average_rating column in the events table that contains the mean rating of each event. The values in this column should be updated every time a change is made to the feedbacks table. The rawest tool for this sort of functionality is a Postgres trigger: one fired after every change to the feedbacks table, which updates the average rating values in the events table. Here’s what the raw code of such a trigger function looks like:

create or replace function UpdateAverageRating() returns trigger AS
$$
BEGIN
    UPDATE events SET average_rating = (
        SELECT avg(rating) FROM feedbacks
        WHERE event_id = NEW.event_id
        GROUP BY event_id
    )
    WHERE id = NEW.event_id;
    RETURN NEW;
END
$$
language plpgsql;

Fortunately, the translation of such a trigger into SQLAlchemy-speak is not only easy, but also very elegant. The imports I showed above already set the context for this translation.

The event model class looks like the following:

class Event(db.Model):
    """Event object table"""
    __tablename__ = 'events'
    __versioned__ = {
        'exclude': ['schedule_published_on', 'created_at']
    }
    id = db.Column(db.Integer, primary_key=True)
    identifier = db.Column(db.String)
    name = db.Column(db.String, nullable=False)
    external_event_url = db.Column(db.String)
    ...

The list of attributes continues, and at the end of this list we now add a decorated method:

xcal_url = db.Column(db.String)
is_sponsors_enabled = db.Column(db.Boolean, default=False)
discount_code_id = db.Column(db.Integer, db.ForeignKey(
    'discount_codes.id', ondelete='CASCADE'))

@aggregated('feedbacks', db.Column(db.Float))
def average_rating(self):
    return db.func.avg(Feedback.rating)

That’s it with the translation – this slick, decorated method can be thought of as a bridge between Python and the trigger shown earlier that’s usually implemented in the database itself. Once this method is added, we save the model file and perform a database migration:

$ python manage.py db migrate

This generates a migration file associated with our changes. This file shows the following alembic migration code:

"""empty message

Revision ID: 1471fe0d04ee
Revises: 49f3a33f5437
Create Date: 2018-06-08 19:32:47.485543

"""

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = '1471fe0d04ee'
down_revision = '49f3a33f5437'


def upgrade():
    op.add_column('events', sa.Column('average_rating', sa.Float(), nullable=True))
    op.add_column('events_version', sa.Column('average_rating', sa.Float(), autoincrement=False, nullable=True))

def downgrade():
    op.drop_column('events_version', 'average_rating')
    op.drop_column('events', 'average_rating')

Now that the file is generated, we upgrade our database state by utilizing this migration file:

$ python manage.py db upgrade

And here are the successful migration logs that immediately follow the upgrade command:

INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade 49f3a33f5437 -> 1471fe0d04ee, empty message

 

This completes the implementation of the average rating attribute of events. We can use the same technique to implement other aggregate attributes in our server, such as sums and minimums, to name a couple. We saw how elegantly SQLAlchemy handles the mapping from Python code to database commands. This is just one of the many advantages of using a database object-relational mapper (ORM), and Open Event Server uses it to full extent.
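For instance, a hypothetical rating_count column could be maintained for each event with the very same decorator, just to illustrate the pattern:

@aggregated('feedbacks', db.Column(db.Integer, default=0))
def rating_count(self):
    # number of feedback entries for this event, kept up to date by
    # sqlalchemy_utils in the same way as average_rating
    return db.func.count(Feedback.rating)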

Resources: