Implementing job queue in Open Event Web app

The Open Event Web app handles multiple requests by implementing a job queue in its generator. Every request received from the client is saved and stored in a queue backed by a Redis server. The jobs are then processed one at a time using the FCFS (First Come, First Served) scheduling policy. Processing the requests one by one prevents the app from crashing and ensures that no client request is lost.

Initialising job queue

The job queue is initialised with a name and the Redis connection object as arguments.

const redisClient = require('redis').createClient(process.env.REDIS_URL);
const Queue = require('bee-queue');
const queue = new Queue('generator-queue', {redis: redisClient});

Handling jobs in queue

The client emits a 'live' event when a request for event generation is made. The server listens for this event, creates a new job for the request and enqueues it in the job queue. Every request received from the client is saved to ensure that none is lost. The queue is then searched for jobs in the 'waiting' state; if the current job's id is among them, the socket emits a 'waiting' event.

socket.on('live', function(formData) {
 const req = {body: formData};
 const job = queue.createJob(req);

 job.on('succeeded', function() {
   console.log('completed job ' + job.id);
 });

 job.save(async function(err, currentJob) {
   if (err) {
     console.log('job failed to save');
     return;
   }
   // remember the socket so the processor can report progress back to this client
   emitter = socket;
   console.log('saved job ' + currentJob.id);

   // check whether the newly saved job is queued behind other jobs
   const jobs = await queue.getJobs('waiting', {start: 0, end: 25});
   const jobIds = jobs.map((currJob) => currJob.id);

   if (jobIds.indexOf(currentJob.id) !== -1) {
     socket.emit('waiting');
   }
 });

});

Updating the status of request

If the socket emits the 'waiting' event, it signifies that another job is currently being processed and the status of the current request is 'waiting'.

socket.on('waiting', function () {
 updateStatusAnimate('Request status: Waiting');
});

Processing the jobs

When the queue is in the ready state and no job is currently being processed, it starts processing the saved jobs. A job is not marked complete until its callback is invoked. The generator starts generating the event web app when the processing of the request starts.

queue.on('ready', function() {
 queue.process(function(job, done) {
   console.log('processing job ' + job.id);
   generator.createDistDir(job.data, emitter, done);
 });
 console.log('processing jobs...');
});

 

The generator calls the callback for the current job when the event generation completes or when it is halted midway due to an error. As soon as the current job completes, the next job in the queue is taken up for processing.

generator.createDistDir = function(req, socket, callback) {

  // ... (event generation steps elided) ...

  mailer.uploadAndsendMail(req.body.email, eventName, socket, (obj) => {
    if (obj.mail)
      logger.addLog('Success', 'Mail sent successfully', socket);
    else
      logger.addLog('Error', 'Error sending mail', socket);

    if (emit) {
      socket.emit('live.ready', {
        appDir: appFolder,
        url: obj.url
      });
      callback(null);
    }
    else {
      callback(appFolder);
    }

    done(null, 'write');
  });
}


Parallelizing Travis build in Open Event Web app

 

Open Event Web app uses Travis CI as the platform for its unit testing. Travis CI is a hosted, distributed continuous integration service used to build and test projects hosted on GitHub. It automatically detects when a commit has been pushed to a repository that uses it and, each time this happens, builds the project and runs the tests. The Travis build took around 24 minutes to complete whenever a commit was made to the project, which is a very long time. To reduce the build time we parallelized the build, which makes maximum use of the resources available at the time and runs the jobs in parallel, resulting in better use of resources as well as a much shorter build.

Open Event Web app uses a Sauce Labs integration to run its Selenium tests and Travis CI for continuous integration.

Why parallelize the build?

When unit tests are independent of each other and can be executed with a common set of dependencies, they can be run in parallel on different virtual machines, giving maximum throughput.

Running a large number of tests on a single machine increases the build time considerably; it can be reduced significantly by running the tests in parallel on different machines. Open Event Web app had a build time of around 24 minutes, which was roughly halved by parallelizing the build.

Parallelizing your builds across virtual machines

To speed up a test suite, you can break it up into several parts using Travis CI’s build matrix feature.

Say you want to split up your unit tests and your integration tests into two different build jobs. They’ll run in parallel and fully utilize the available build capacity and the resources.

The Open Event Web app test suite covers all the pages of the generated application. To parallelize the build, the test suite is divided into separate files with the directory structure shown below:

   ├── test
      ├── serverTest.js
      ├── roomsAndSpeakers.js
      ├── tracks.js
      ├── generatorAndSchedule.js
      ├── sessionAndEvent.js

 

The env key in .travis.yml is modified as shown below:

env:
  - TESTFOLDER=test/serverTest.js
  - TESTFOLDER=test/roomsAndSpeakers.js
  - TESTFOLDER=test/tracks.js
  - TESTFOLDER=test/generatorAndSchedule.js
  - TESTFOLDER=test/sessionAndEvent.js

 

The test script reads the environment variable and runs the corresponding test file, as shown below:

# installing required items for build
install:
 - npm install -g istanbul mocha@3
 - npm install
 - npm install --save-dev

# testing script
script:
 - istanbul cover _mocha -- $TESTFOLDER

# notify codecov and deploy to cloud
after_success:
 if ([ "$TESTFOLDER" == "test/serverTest.js" ]); then
   bash <(curl -s https://codecov.io/bash);
   bash gh_deploy.sh && kubernetes/travis/deploy.sh;
 fi

Results:

The build time, which was earlier 23 minutes, was reduced to 12 minutes after parallelizing the build.


Enable web app generation for multiple API formats

Open Event Server has two API (Application Programming Interface) formats: one generated by the legacy server and the other by the server side of the decoupled development structure. The Open Event Web app supported only the new API format, so an error was thrown while reading the JSON contents of the old API format. To support both formats, so that a web app can be generated from either of them without converting JSON files from version v1 to v2, we added an option field to the generator where the client can choose the API version.

Excerpts and description of the differences between the data formats of API v1 and v2

The following excerpt shows the function getCopyrightData in both versions, v1 and v2. The key for getting licence details is 'licence_details' in v1 and 'licence-details' in v2. Similarly, the key for getting copyright details is 'copyright' in v1 and 'event-copyright' in v2.

So the data is extracted from the JSON files depending on the API version the client has selected.

API V1

function getCopyrightData(event) {
 if(event.licence_details) {
   return convertLicenseToCopyright(event.licence_details, event.copyright);
 } else {
   return event.copyright;
 }
}

 

API V2

function getCopyrightData(event) {
 if(event['licence-details']) {
   return convertLicenseToCopyright(event['licence-details'], event['event-copyright']);
 } else {
   event['event-copyright'].logo = event['event-copyright']['logo-url'];
   return event['event-copyright'];
 }
}

 

Another example showing the difference between the API formats of v1 and v2 is excerpted below.

The following excerpt shows a constant urls containing the event URLs and related details. Version v1 uses event_url as the key for the main page URL whereas v2 uses event-url. A similar structural difference is present for the rest of the fields, where the underscore has been replaced by a hyphen, along with slight changes in the name format of keys such as start_time and end_time.

API v1

const urls= {
 main_page_url: event.event_url,
 logo_url: event.logo,
 background_url: event.background_image,
 background_path: event.background_image,
 description: event.description,
 location: event.location_name,
 orgname: event.organizer_name,
 location_name: event.location_name,
};

 

API v2

const urls= {
 main_page_url: event['event-url'],
 logo_url: event['logo-url'],
 background_url: event['original-image-url'],
 background_path: event['original-image-url'],
 location: event['location-name'],
 orgname: event['organizer-name'],
 location_name: event['location-name'],
};

How we enabled support for both API formats

To add support for both API formats, we added an options field on the generator's index page where the user chooses the API version for web app generation.

<label>Choose your API version</label>
<ul style="list-style-type:none">
 <li id="version1"><input name="apiVersion" type="radio" value="api_v1">   API_v1</li>
 <li id="version2"><input name="apiVersion" type="radio" value="api_v2"> API_v2</li>
</ul>

Depending on the API version chosen, the generator picks the file where the data extraction from the input JSON files takes place. These files are fold_v1.js and fold_v2.js for extracting JSON v1 and JSON v2 data respectively.

var type = req.body.apiVersion || 'api_v2';

if(type === 'api_v1') {
 fold = require(__dirname + '/fold_v1.js');
}
else {
 fold = require(__dirname + '/fold_v2.js');
}

 

The code excerpts above, showing the difference between the v1 and v2 API formats, are taken from the fold_v1.js and fold_v2.js files respectively.


Open Event Server – Export Attendees as CSV File

FOSSASIA's Open Event Server is the REST API backend for the event management platform, Open Event. Here, event organizers can create their events, add tickets for them and manage all aspects from the schedule to the speakers. Once an event is made public, others can view it and buy tickets if interested.

The organizer can see all the attendees in a detailed view in the event management dashboard, along with the status of each attendee. The possible statuses are completed, placed, pending, expired, canceled, checked in and not checked in. The organizer can also take actions such as checking in an attendee.

If the organizer wants to download the list of all the attendees as a CSV file, he or she can do it very easily by simply clicking on the Export As and then on CSV.

Let us see how this is done on the server.

Server side – generating the Attendees CSV file

Here we will be using the csv package provided by Python for writing the CSV file.

import csv
  • We define a method export_attendees_csv which takes the attendees to be exported as a CSV file as the argument.
  • Next, we define the headers of the CSV file. It is the first row of the CSV file.
def export_attendees_csv(attendees):
   headers = ['Order#', 'Order Date', 'Status', 'First Name', 'Last Name', 'Email',
              'Country', 'Payment Type', 'Ticket Name', 'Ticket Price', 'Ticket Type']
  • A list is defined called rows. This contains the rows of the CSV file. As mentioned earlier, headers is the first row.
rows = [headers]
  • We iterate over each attendee in attendees and form a row for that attendee from the values of each of the columns. Here, every row is one attendee.
  • The newly formed row is added to the rows list.
for attendee in attendees:
   column = [str(attendee.order.get_invoice_number()) if attendee.order else '-',
             str(attendee.order.created_at) if attendee.order and attendee.order.created_at else '-',
             str(attendee.order.status) if attendee.order and attendee.order.status else '-',
             str(attendee.firstname) if attendee.firstname else '',
             str(attendee.lastname) if attendee.lastname else '',
             str(attendee.email) if attendee.email else '',
             str(attendee.country) if attendee.country else '',
             str(attendee.order.payment_mode) if attendee.order and attendee.order.payment_mode else '',
             str(attendee.ticket.name) if attendee.ticket and attendee.ticket.name else '',
             str(attendee.ticket.price) if attendee.ticket and attendee.ticket.price else '0',
             str(attendee.ticket.type) if attendee.ticket and attendee.ticket.type else '']

   rows.append(column)
  • rows contains the contents of the CSV file and hence it is returned.
return rows
  • We iterate over each item of rows and write it to the CSV file using the methods provided by the csv package.
from app.api.helpers.csv_jobs_util import export_attendees_csv

content = export_attendees_csv(attendees)
writer = csv.writer(temp_file)
for row in content:
    writer.writerow(row)
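
To show how the pieces fit together, here is a minimal, self-contained sketch of writing such rows to a file on disk. The helper write_rows_to_csv and the output path are illustrative and not the server's actual code, which writes to a temporary file inside the export task (hence the temp_file variable above).

import csv

def write_rows_to_csv(rows, file_path):
    # 'rows' is the list returned by export_attendees_csv (headers first).
    # newline='' avoids blank lines between rows on some platforms.
    with open(file_path, 'w', newline='') as csv_file:
        writer = csv.writer(csv_file)
        for row in rows:
            writer.writerow(row)

# Illustrative usage:
# rows = export_attendees_csv(attendees)
# write_rows_to_csv(rows, '/tmp/attendees.csv')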

Obtaining the Attendees CSV file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/attendees/csv

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the attendees of the event as a CSV file. It returns the URL of the task to get the status of the export task. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL to check the status of his/her Celery task. If the task completes successfully, he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server is:

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.
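
For a rough end-to-end picture from a client's point of view, the sketch below starts the export, polls the task URL and downloads the file once the task succeeds. The host, event id and authorization header are placeholders, and error states of the task are not handled.

import time
import requests

SERVER = 'https://api.example.com'          # placeholder host
HEADERS = {'Authorization': 'JWT <token>'}  # placeholder credentials

# 1. Start the export task for event 1
response = requests.get(SERVER + '/v1/events/1/export/attendees/csv', headers=HEADERS)
task_url = response.json()['task_url']

# 2. Poll the task endpoint until the export finishes
while True:
    status = requests.get(SERVER + task_url, headers=HEADERS).json()
    if status['state'] == 'SUCCESS':
        download_url = status['result']['download_url']
        break
    time.sleep(2)

# 3. Download the generated CSV file
with open('attendees.csv', 'wb') as csv_file:
    csv_file.write(requests.get(SERVER + download_url, headers=HEADERS).content)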


Open Event Server – Export Orders as CSV File

FOSSASIA's Open Event Server is the REST API backend for the event management platform, Open Event. Here, event organizers can create their events, add tickets for them and manage all aspects from the schedule to the speakers. Once an event is made public, others can view it and buy tickets if interested.

The organizer can see all the orders in a detailed view in the event management dashboard, along with the status of each order. The possible statuses are completed, placed, pending, expired and canceled.

If the organizer wants to download the list of all the orders as a CSV file, he or she can do it very easily by simply clicking on the Export As and then on CSV.

Let us see how this is done on the server.

Server side – generating the Orders CSV file

Here we will be using the csv package provided by Python for writing the CSV file.

import csv
  • We define a method export_orders_csv which takes the orders to be exported as a CSV file as the argument.
  • Next, we define the headers of the CSV file. It is the first row of the CSV file.
def export_orders_csv(orders):
   headers = ['Order#', 'Order Date', 'Status', 'Payment Type', 'Total Amount', 'Quantity',
              'Discount Code', 'First Name', 'Last Name', 'Email']
  • A list is defined called rows. This contains the rows of the CSV file. As mentioned earlier, headers is the first row.
rows = [headers]
  • We iterate over each order in orders and form a row for that order from the values of each of the columns. Here, every row is one order.
  • The newly formed row is added to the rows list.
for order in orders:
   if order.status != "deleted":
       column = [str(order.get_invoice_number()), str(order.created_at) if order.created_at else '',
                 str(order.status) if order.status else '', str(order.paid_via) if order.paid_via else '',
                 str(order.amount) if order.amount else '', str(order.get_tickets_count()),
                 str(order.discount_code.code) if order.discount_code else '',
                 str(order.user.first_name)
                 if order.user and order.user.first_name else '',
                 str(order.user.last_name)
                 if order.user and order.user.last_name else '',
                 str(order.user.email) if order.user and order.user.email else '']
       rows.append(column)
  • rows contains the contents of the CSV file and hence it is returned.
return rows
  • We iterate over each item of rows and write it to the CSV file using the methods provided by the csv package.
from app.api.helpers.csv_jobs_util import export_orders_csv

content = export_orders_csv(orders)
writer = csv.writer(temp_file)
for row in content:
    writer.writerow(row)

Obtaining the Orders CSV file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/orders/csv

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the orders of the event as a CSV file. It returns the URL of the task to get the status of the export task. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL to check the status of his/her Celery task. If the task completes successfully, he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server is:

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.


Handling No internet cases in Open Event Android

It's pretty common to face connectivity issues, and when the user has no Internet connection he should be shown an appropriate message rather than being allowed to send requests to the server. Let's have a look at how we handle such cases in Open Event Android.

Firstly we need to add the required permission in the manifest. We need the permission to access the user’s WiFi state and network state.

<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

 

We use the following function to check if the user is connected to the Internet. It returns a Boolean which is true if the user is connected to the Internet and false otherwise.

private fun isNetworkConnected(): Boolean {
    val connectivityManager = context?.getSystemService(Context.CONNECTIVITY_SERVICE) as? ConnectivityManager

    return connectivityManager?.activeNetworkInfo != null
}

 

This function is used to decide which screen should be shown to the user. If the user has an active Internet connection he will see the events fragment, but if there is no Internet he will see the no-Internet card.

private fun showNoInternetScreen(show: Boolean) {
    rootView.homeScreenLL.visibility = if (show) View.VISIBLE else View.GONE
    rootView.noInternetCard.visibility = if (!show) View.VISIBLE else View.GONE
}

 

Let’s see how the above two functions are used in the events fragment. When the app starts we check if there is a need to show the no Internet screen. If the user is not connected to the Internet, the no Internet card will be shown. Then when the user clicks on retry, the events fragment is shown again if the user is connected to the Internet.

showNoInternetScreen(isNetworkConnected())

rootView.retry.setOnClickListener {
    showNoInternetScreen(isNetworkConnected())
}

 

Let's have a look at how the XML code looks; only a part of it is shown here as the rest is straightforward. We have a CardView, and inside it all the views, i.e. the ImageView and TextView, are placed in a LinearLayout with vertical orientation so that they appear below each other.

<android.support.v7.widget.CardView
    android:id="@+id/noInternetCard"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_margin="@dimen/layout_margin_medium"
    app:cardBackgroundColor="@color/white"
    app:cardCornerRadius="@dimen/card_corner_radius"
    app:cardElevation="@dimen/card_elevation">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_margin="@dimen/layout_margin_extra_large"
        android:orientation="vertical">

        <ImageView
            android:id="@+id/noInternetImageView"
            android:layout_width="@dimen/item_image_view_large"
            android:layout_height="@dimen/item_image_view_large"
            android:layout_gravity="center_horizontal"
            android:layout_marginTop="@dimen/layout_margin_large"
            android:scaleType="centerCrop"
            app:srcCompat="@drawable/ic_no_internet" />

        <TextView
            android:id="@+id/noInternetTextview"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="center_horizontal"
            android:layout_marginTop="@dimen/layout_margin_large"
            android:text="@string/no_internet_message"
            android:textSize="@dimen/text_size_medium"
            tools:text="No Internet" />

 

References

  1. AndroidHive tutorial – https://www.androidhive.info/2012/07/android-detect-internet-connection-status/
  2. Official Android Documentation – https://developer.android.com/training/monitoring-device-state/connectivity-monitoring
  3. StackOverflow – https://stackoverflow.com/questions/9570237/android-check-internet-connection

Opening Orga App through Intent in Open Event Android App

In the Open Event Android app there is a section called "Manage Events". We direct users to the Organizer app if they already have it installed; otherwise we open the Organizer app's page in the Play Store. This way users are encouraged to create events using the Organizer app. Let's see how this feature was implemented.

When the user clicks on the menu item "Manage Events", the startOrgaApp function is called with the package name of the Organizer app as the argument. Android apps are uniquely identified in the Play Store by their package names.

override fun onOptionsItemSelected(item: MenuItem?): Boolean {
    when (item?.getItemId()) {
        R.id.orgaApp -> {
            startOrgaApp("org.fossasia.eventyay")
            return true
        }
    }
    return super.onOptionsItemSelected(item)
}

 

Let's have a look at the startOrgaApp function that we call above. We use a try/catch block: in the try block we launch the app if it is installed, otherwise an ActivityNotFoundException is thrown, which we catch and handle by calling the showInMarket function to show the Organizer app in the Play Store.

private fun startOrgaApp(packageName: String) {
    val manager = activity?.packageManager
    try {
        val intent = manager?.getLaunchIntentForPackage(packageName)
                ?: throw ActivityNotFoundException()
        intent.addCategory(Intent.CATEGORY_LAUNCHER)
        startActivity(intent)
    } catch (e: ActivityNotFoundException) {
        showInMarket(packageName)
    }
}

 

Since the Play Store page for the given package name will always exist, showInMarket will never raise an exception, so no try/catch block is needed here. We just create an intent with the required parameters and pass it to startActivity.

private fun showInMarket(packageName: String) {
    val intent = Intent(Intent.ACTION_VIEW, Uri.parse("market://details?id=$packageName"))
    intent.flags = Intent.FLAG_ACTIVITY_NEW_TASK
    startActivity(intent)
}

 

Lastly we just need to add the XML code to create a menu item. It should contain the id so that we can reference it and the title that will be visible.

<group android:id="@+id/profileMenu">
    <item
        android:id="@+id/orgaApp"
        android:title="@string/manage_events" />
</group>

 

That's it. Now we can open the Organizer app directly if it is installed, or open its Play Store page if it is not.

Resources

  1. Vogella Intent tutorial – http://www.vogella.com/tutorials/AndroidIntent/article.html
  2. Official Android Documentation Intent – https://developer.android.com/guide/components/intents-filters
  3. Javatpoint intent tutorial – https://www.javatpoint.com/android-explicit-intent-example

Implementing Tax Endpoint in Open Event Server

The Open Event Server enables organizers to manage events from concerts to conferences and meetups. It offers features for events with several tracks and venues. Event organizers may want to charge taxes on the event tickets, and the Open Event Server has a Tax endpoint to support this. This blog post goes over its implementation details in the project.

Model

First up, we will discuss what fields are stored in the database for the Tax endpoint. The most important ones are as follows:

  • The tax rate charged in percentage
  • The id for the Tax
  • The registered company
  • The country
  • The address of the event organiser
  • The additional message to be included as the invoice footer

We also store a field specifying whether the tax should be included in the ticket price or not. Each event can have only one associated Tax entry. You can check out the full model for reference here.
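
As a rough illustration, a simplified Flask-SQLAlchemy model covering the fields listed above could look like the sketch below. The column and table names here are illustrative, not the project's exact definitions.

from app.models import db  # the project's SQLAlchemy instance (import path assumed)


class Tax(db.Model):
    """Simplified sketch: a single tax entry associated with one event."""
    __tablename__ = 'taxes'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False)
    rate = db.Column(db.Float, nullable=False)      # tax rate charged, in percent
    tax_id = db.Column(db.String, nullable=False)   # the registered tax id
    registered_company = db.Column(db.String)
    country = db.Column(db.String)
    address = db.Column(db.String)                  # address of the event organiser
    invoice_footer = db.Column(db.String)           # additional message for the invoice footer
    is_tax_included_in_price = db.Column(db.Boolean, default=False)

    # each event has at most one tax entry
    event_id = db.Column(db.Integer, db.ForeignKey('events.id'))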

Schema

We have defined two schemas for the Tax endpoint because a few fields contain sensitive information that should only be shown to the event organizer or the admin, while the others can be shown to the public. Fields like name and rate aren't sensitive and can be disclosed to the public; they are defined in the TaxSchemaPublic class. Sensitive information like the tax id, address and registered company is included in the TaxSchema class, which inherits from TaxSchemaPublic. You can check out the full schema for reference here.
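
The split between the two schemas can be sketched with marshmallow-jsonapi roughly as follows; only representative fields are included and the Meta view names are assumptions, not the project's exact code.

from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Schema


class TaxSchemaPublic(Schema):
    """Fields safe to disclose to anyone viewing the event."""
    class Meta:
        type_ = 'tax'
        self_view = 'v1.tax_detail'          # assumed view name
        self_view_kwargs = {'id': '<id>'}

    id = fields.Str(dump_only=True)
    name = fields.Str(required=True)
    rate = fields.Float(required=True)
    is_tax_included_in_price = fields.Boolean()


class TaxSchema(TaxSchemaPublic):
    """Adds sensitive fields visible only to the organizer or admin."""
    class Meta(TaxSchemaPublic.Meta):
        pass

    tax_id = fields.Str(required=True)
    registered_company = fields.Str(allow_none=True)
    address = fields.Str(allow_none=True)
    country = fields.Str(allow_none=True)
    invoice_footer = fields.Str(allow_none=True)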

Resources

The endpoint supports all the CRUD operations i.e. Create, Read, Update and Delete.

Create and Update

The Tax entry for an event can be created using a POST request to the /taxes endpoint. We check whether the posted data contains a related event id or identifier, which is necessary since every tax entry must be related to an event. We also check whether a tax entry already exists for the event, since an event should have only one. An error is raised if either check fails; otherwise the tax entry is created and saved in the database. An existing entry can be updated by making a PATCH request.
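
As an illustration of where these checks live, a sketch of the list resource using Flask-REST-JSONAPI is shown below. The exception choices and the shape of the posted data dictionary follow common patterns in the project but are assumptions here; Tax, TaxSchema and db refer to the sketches above.

from flask_rest_jsonapi import ResourceList
from flask_rest_jsonapi.exceptions import JsonApiException


class TaxList(ResourceList):
    """Handles POST /taxes (sketch)."""

    def before_post(self, args, kwargs, data=None):
        # Every tax entry must be related to an event.
        if not data.get('event'):
            raise JsonApiException({'pointer': '/data/relationships/event'},
                                   'Event id or identifier is required')

        # An event may have only one tax entry.
        if Tax.query.filter_by(event_id=data['event']).first() is not None:
            raise JsonApiException({'pointer': '/data/relationships/event'},
                                   'A tax entry already exists for this event')

    schema = TaxSchema
    data_layer = {'session': db.session,
                  'model': Tax}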

Read

A Tax entry can be fetched using a GET request to the /taxes/{tax_id} endpoint with the id of the tax entry. The entry for an event can also be fetched from the /events/{event_id}/tax endpoint.

Delete

An existing Tax entry can be deleted by making a DELETE request to the /taxes/{tax_id} endpoint with the id of the entry. We make sure the tax entry exists; an error is raised if it does not, otherwise the entry is deleted from the database.


Open Event Server – Export Event as a Pentabarf XML File

FOSSASIA's Open Event Server is the REST API backend for the event management platform, Open Event. Here, event organizers can create their events, add tickets for them and manage all aspects from the schedule to the speakers. Once an event is made public, others can view it and buy tickets if interested.

To make event promotion easier, we also allow the event organizer to export the event as a Pentabarf XML file. Pentabarf XML stores events/conferences in a format that most scheduling applications can read, so that the particular event/conference can be added to a user's schedule.

Server side – generating the Pentabarf XML file

Here we will be using the pentabarf package for Python for parsing and creating the file.

from pentabarf.Conference import Conference
from pentabarf.Day import Day
from pentabarf.Event import Event
from pentabarf.Person import Person
from pentabarf.Room import Room
  • We define a class PentabarfExporter which has a static method export(event_id).
  • Query the event using the event_id passed and start forming the event in the required format:
event = EventModel.query.get(event_id)
diff = (event.ends_at - event.starts_at)

conference = Conference(title=event.name, start=event.starts_at, end=event.ends_at,
                       days=diff.days if diff.days > 0 else 1,
                       day_change="00:00", timeslot_duration="00:15",
                       venue=event.location_name)
dates = (db.session.query(cast(Session.starts_at, DATE))
        .filter_by(event_id=event_id)
        .filter_by(state='accepted')
        .filter(Session.deleted_at.is_(None))
        .order_by(asc(Session.starts_at)).distinct().all())
  • We have queried for the dates of the event and saved them in dates.
  • We will now iterate over each date and query the microlocations that have a session on that particular date.
for date in dates:
   date = date[0]
   day = Day(date=date)
   microlocation_ids = list(db.session.query(Session.microlocation_id)
                            .filter(func.date(Session.starts_at) == date)
                            .filter_by(state='accepted')
                            .filter(Session.deleted_at.is_(None))
                            .order_by(asc(Session.microlocation_id)).distinct())
  • For each microlocation thus obtained, we will query for accepted sessions to be held at those microlocations.
  • We will also initialize a Room for each microlocation.
for microlocation_id in microlocation_ids:
   microlocation_id = microlocation_id[0]
   microlocation = Microlocation.query.get(microlocation_id)
   sessions = Session.query.filter_by(microlocation_id=microlocation_id) \
       .filter(func.date(Session.starts_at) == date) \
       .filter_by(state='accepted') \
       .filter(Session.deleted_at.is_(None)) \
       .order_by(asc(Session.starts_at)).all()

   room = Room(name=microlocation.name)
  • We will now iterate over the above-obtained sessions and instantiate an Event for each session.
  • Then we will iterate over all the speakers of that session and instantiate a Person for each speaker.
  • Finally, we will add that Event to the Room we created earlier.
for session in sessions:

   session_event = Event(id=session.id,
                         date=session.starts_at,
                         start=session.starts_at,
                         duration=str(session.ends_at - session.starts_at) + "00:00",
                         track=session.track.name,
                         abstract=session.short_abstract,
                         title=session.title,
                         type='Talk',
                         description=session.long_abstract,
                         conf_url=url_for('event_detail.display_event_detail_home',
                                          identifier=event.identifier),
                         full_conf_url=url_for('event_detail.display_event_detail_home',
                                               identifier=event.identifier, _external=True),
                         released="True" if event.schedule_published_on else "False")

   for speaker in session.speakers:
       person = Person(id=speaker.id, name=speaker.name)
       session_event.add_person(person)

   room.add_event(session_event)
  • Then we will add the room to the day and then add each day to the conference.
day.add_room(room)
conference.add_day(day)
  • Finally, we will call the generate method of the conference to generate the XML file. The resulting string can be written directly to a file (a small sketch of this follows the snippet below).
return conference.generate("Generated by " + get_settings()['app_name'])
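
For completeness, here is a small sketch of how the generated XML string might be persisted so that it can be served for download; the helper name and directory are illustrative, not the export task's actual code.

import os


def write_pentabarf_file(event_id, xml_string, base_dir='/tmp/exports'):
    # Persist the string returned by conference.generate() as an .xml file.
    os.makedirs(base_dir, exist_ok=True)
    file_path = os.path.join(base_dir, 'event_{}_pentabarf.xml'.format(event_id))
    with open(file_path, 'w') as xml_file:
        xml_file.write(xml_string)
    return file_path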

Obtaining the Pentabarf XML file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/pentabarf

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the event as a Pentabarf XML file. It returns the URL of the task to get the status of the export task. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL to check the status of his Celery task. If the task completes successfully, he will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server is:

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.

Hence, now the event can be added to any scheduling app which recognizes the Pentabarf XML format.
