Exporting CSV data through API

A badge generator like Badgeyay must be able to generate, store and export the user data as and when needed. This blog post is about adding the export functionality to the Badgeyay backend.

Why do we need such an API?

Exporting data is a basic requirement for a user. A user may want to know the details he/she has uploaded to the system or server. In our case, that means exporting the CSV data from the backend of Badgeyay.

Adding the functionality to backend

Let us see how we implemented this functionality in the backend of the project.

Step 1 : Adding the necessary imports

We first need to import the required dependencies for the route to work:

import os
import base64
import uuid
from flask import request, Blueprint, jsonify
from flask import current_app as app
from api.models.file import File
from api.schemas.file import ExportFileSchema
from api.utils.errors import ErrorResponse
from api.schemas.errors import FileNotFound

Step 2 : Adding a route

This step involves adding a separate route that provides us with the exported data from the backend.

@router.route('/csv/data', methods=['GET'])
def export_data():
    input_data = request.args
    file = File().query.filter_by(filename=input_data.get('filename')).first()

    if file is None:
        return ErrorResponse(FileNotFound(input_data.get('filename')).message, 422, {'Content-Type': 'application/json'}).respond()

    export_obj = {
        'filename': file.filename,
        'filetype': file.filetype,
        'id': str(uuid.uuid4()),
        'file_data': None}

    with open(os.path.join(app.config.get('BASE_DIR'), 'static', 'uploads', 'csv', export_obj['filename']), "r") as f:
        export_obj['file_data'] = f.read()

    export_obj['file_data'] = base64.b64encode(export_obj['file_data'].encode())

    return jsonify(ExportFileSchema().dump(export_obj).data)
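Once the route is in place, the exported data can be fetched with a plain GET request. The sketch below assumes a local deployment on port 5000 and that the blueprint is mounted under /api – adjust both to your setup:

$ curl -X GET "http://localhost:5000/api/csv/data?filename=upload.csv"

On success, the response is the JSON API object produced by ExportFileSchema, with file_data carrying the base64-encoded contents of the CSV file.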

Step 3 : Adding a relevant Schema

After creating the route we need to add a relevant schema that serializes the exported file data for the Ember JS frontend, so that it can be consumed as a JSON API object and offered to the user.

class ExportFileSchema(Schema):
    class Meta:
        type_ = 'export-data'
        kwargs = {'id': '<id>'}

    id = fields.Str(required=True, dump_only=True)
    filename = fields.Str(required=True, dump_only=True)
    filetype = fields.Str(required=True, dump_only=True)
    file_data = fields.Str(required=True, dump_only=True)

This is the ExportFileSchema that produces the output results of the GET request on the route. This helps us get the data onto the frontend.

Further Improvements

We are working on making badgeyay more comprehensive yet simple. This API endpoint needs to get registered onto the frontend. This can be a further improvement to the project and can be iterated over the next days.


Dated queries in Badgeyay admin

Badgeyay is not just an anonymous badge generator that creates badges according to your needs, but it now has an admin section that allows the admin of the website to control and look over the statistics of the website.

Why do we need such an API?

For an admin, one of the most common tasks is to gather the details of the users or the files being served on the server. Not just that, but the admin must also be aware of the traffic or files on the server during a particular duration of time. So we need an API that can handle all the requests that involve dated queries on the backend database.

Adding the functionality to backend

Let us see how we implemented this functionality in the backend of the project.

Step 1 : Adding a route

This step involves adding a separate route that serves the results of dated badge queries from the backend.

@router.route('/get_badges_dated', methods=['POST'])
def get_badges_dated():
    schema = DatedBadgeSchema()
    input_data = request.get_json()
    data, err = schema.load(input_data)
    if err:
        return jsonify(err)
    dated_badges = Badges.query.filter(Badges.created_at <= data.get(
        'end_date')).filter(Badges.created_at >= data.get('start_date'))
    return jsonify(AllBadges(many=True).dump(dated_badges).data)
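The request body carries the date range as a JSON API payload. Below is an illustrative example only; since DatedBadgeSchema is a marshmallow_jsonapi schema, the exact attribute naming (underscored vs. dasherized) depends on the configured inflection, so treat this as a sketch:

{
  "data": {
    "type": "dated-badges",
    "attributes": {
      "start_date": "2018-05-01",
      "end_date": "2018-05-31"
    }
  }
}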

This route allows us to get badges produced by any user during a certain duration as a JSON API data object. This object is fed to the frontend to render the badges as cards.

Step 2 : Adding a relevant Schema

After creating a route we need to add a relevant schema that will help us to deliver the badges generated by the user to the Ember JS frontend so that it can be consumed as JSON API objects and shown to the user.

class DatedBadgeSchema(Schema):
    class Meta:
        type_ = 'dated-badges'
        kwargs = {'id': '<id>'}

    id = fields.Str(required=True, dump_only=True)
    start_date = fields.Date(required=True)
    end_date = fields.Date(required=True)

class AllBadges(Schema):
    class Meta:
        type_ = 'all-badges'
        self_view = 'admin.get_all_badges'
        kwargs = {'id': '<id>'}

    id = fields.Str(required=True, dump_only=True)
    image = fields.Str(required=True)
    csv = fields.Str(required=True)
    badge_id = fields.Str(required=True)
    text_color = fields.Str(required=True)
    badge_size = fields.Str(required=True)
    created_at = fields.Date(required=True)
    user_id = fields.Relationship(
        self_url='/api/upload/get_file',
        self_url_kwargs={'file_id': '<id>'},
        related_url='/user/register',
        related_url_kwargs={'id': '<id>'},
        include_resource_linkage=True,
        type_='User'
    )

The DatedBadgeSchema validates and deserializes the date range sent in the POST request, while the AllBadges schema serializes the matching badges into the response.

Further Improvements

We are working on adding multiple routes and adding modifications to database models and schemas so that the functionality of Badgeyay can be extended to a large extent. This will help us in making this badge generator even better.


Get My Badges from Badgeyay API

Badgeyay is no longer a simple badge generator. It has more cool features than before.

Badgeyay now supports a feature that shows your badges. It is called the ‘my-badges’ component. To get this component to work, we need to design a backend API to deliver the badges produced by a particular user.

Why do we need such an API?

The main aim of Badgeyay has changed from being a standard and simple badge generator to a complete suite that solves your badge generation and management problem. So to tackle the problem of managing the produced badges per user, we need to define a separate route and schema that delivers the generated badges.

Adding the functionality to backend

Let us see how we implemented this functionality in the backend of the project.

Step 1 : Adding a route

This step involves adding a separate route that provides the generated badges linked with the user account.

@router.route('/get_badges', methods=['GET'])
def get_badges():
    input_data = request.args
    user = User.getUser(user_id=input_data.get('uid'))
    badges = Badges().query.filter_by(creator=user)
    return jsonify(UserBadges(many=True).dump(badges).data)

This route allows us to get badges produced by the user as a JSON API data object. This object is fed to the frontend to render the badges as cards.

Step 2 : Adding a relevant Schema

After creating a route we need to add a relevant schema that will help us to deliver the badges generated by the user to the Ember JS frontend so that it can be consumed as JSON API objects and shown to the user.

class UserBadges(Schema):
    class Meta:
        type_ = 'user-badges'
        self_view = 'generateBadges.get_badges'
        kwargs = {'id': '<id>'}

    id = fields.Str(required=True, dump_only=True)
    image = fields.Str(required=True)
    csv = fields.Str(required=True)
    badge_id = fields.Str(required=True)
    text_color = fields.Str(required=True)
    badge_size = fields.Str(required=True)
    user_id = fields.Relationship(
        self_url='/api/upload/get_file',
        self_url_kwargs={'file_id': '<id>'},
        related_url='/user/register',
        related_url_kwargs={'id': '<id>'},
        include_resource_linkage=True,
        type_='User'
    )

This is the UserBadges schema that produces the output of the GET request on the route.

Finally, once this is done we can fire up a GET request on our deployment to receive results. The command that you need to run is given below.

$ curl -X GET "http://localhost:5000/api/get_badges?uid={user_id}"
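If badges exist for the given user, the route responds with a JSON API collection serialized by the UserBadges schema. The response below is illustrative only – the values and exact attribute naming depend on the schema configuration:

{
  "data": [
    {
      "type": "user-badges",
      "id": "1",
      "attributes": {
        "image": "background.png",
        "csv": "upload.csv",
        "badge_id": "1",
        "text_color": "#ffffff",
        "badge_size": "A3"
      }
    }
  ]
}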

Further Improvements

We are working on adding multiple routes and adding modifications to database models and schemas so that the functionality of Badgeyay can be extended to a large extent. This will help us in making this badge generator even better.


Custom Colored Images with Badgeyay

Backend functionality of any badge generator is to generate badges as per the requirements of the user. Currently Badgeyay is capable of generating badges in the following ways:

  • Adding or Selecting a Pre-defined Image from the given set
  • Uploading a new image and then using it as a background

However, Badgeyay has been missing the functionality of generating custom colored images.

What is meant by Custom Colored Badges?

Currently, there is a set of 7 different pre-defined images to choose from. But let’s say that a user wants to choose from the images but doesn’t like any of the colors. Therefore we provide the user with an additional option of applying a custom background color to their badges. This allows Badgeyay to deliver a more versatile range of badges than ever before.

Adding the functionality to backend

Let us see how this functionality has been implemented in the backend of the project.

Step 1 : Adding a background-color route to backend

Before generating badges, we need to know what color the user wants on the badge. Therefore we created a route that gathers the color and saves user_defined.svg with that particular background color.

@router.route('/background_color', methods=['POST'])
def background_color():
    try:
        data = request.get_json()['data']['attributes']
        bg_color = data['bg_color']
    except Exception:
        return ErrorResponse(PayloadNotFound().message, 422, {'Content-Type': 'application/json'}).respond()

    svg2png = SVG2PNG()

    bg_color = '#' + str(bg_color)
    user_defined_path = svg2png.do_svg2png(1, bg_color)
    with open(user_defined_path, "rb") as image_file:
        image_data = base64.b64encode(image_file.read())
    os.remove(user_defined_path)

    try:
        imageName = saveToImage(imageFile=image_data.decode('utf-8'), extension=".png")
    except Exception:
        return ErrorResponse(ImageNotFound().message, 422, {'Content-Type': 'application/json'}).respond()

    uid = data['uid']
    fetch_user = User.getUser(user_id=uid)
    if fetch_user is None:
        return ErrorResponse(UserNotFound(uid).message, 422, {'Content-Type': 'application/json'}).respond()

    file_upload = File(filename=imageName, filetype='image', uploader=fetch_user)
    file_upload.save_to_db()
    return jsonify(ColorImageSchema().dump(file_upload).data)
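A request to this route might look like the sketch below. The payload keys bg_color and uid come straight from the route above (note that the leading ‘#’ is added server-side, so the color is sent without it); the host and port are assumptions for a local deployment, and the /api/upload prefix is inferred from the schema’s self_url:

$ curl -X POST http://localhost:5000/api/upload/background_color \
    -H "Content-Type: application/json" \
    -d '{"data": {"attributes": {"bg_color": "ff0000", "uid": "<user_id>"}}}'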

Step 2: Adding Schema for background-color to backend

To get and save values from and to the database, we need some layer of abstraction, so we use schemas created using marshmallow_jsonapi.

class ColorImageSchema(Schema):
    class Meta:
        type_ = 'bg-color'
        self_view = 'fileUploader.background_color'
        kwargs = {'id': '<id>'}

    id = fields.Str(required=True, dump_only=True)
    filename = fields.Str(required=True)
    filetype = fields.Str(required=True)
    user_id = fields.Relationship(
        self_url='/api/upload/background_color',
        self_url_kwargs={'file_id': '<id>'},
        related_url='/user/register',
        related_url_kwargs={'id': '<id>'},
        include_resource_linkage=True,
        type_='User'
    )

Now that we have our schema and route done, we can move forward with the logic of making badges.

Step 3 : Converting the SVG to PNG and adding custom color

Now we have the user-defined color for the badge background, but we still need a way to apply it to the badges. This is done using the code below.

def do_svg2png(self, opacity, fill):
    """
    Module to convert svg to png
    :param `opacity` - Opacity for the output
    :param `fill` - Background fill for the output
    """
    filename = os.path.join(self.APP_ROOT, 'svg', 'user_defined.svg')
    tree = parse(open(filename, 'r'))
    element = tree.getroot()
    # changing style using XPath.
    path = element.xpath('//*[@id="rect4504"]')[0]
    style_detail = path.get("style")
    style_detail = style_detail.split(";")
    style_detail[0] = "opacity:" + str(opacity)
    style_detail[1] = "fill:" + str(fill)
    style_detail = ';'.join(style_detail)
    path.set("style", style_detail)
    # changing text using XPath.
    path = element.xpath('//*[@id="tspan932"]')[0]
    # Saving in the original XML tree
    etree.ElementTree(element).write(filename, pretty_print=True)
    print("done")
    png_name = os.path.join(self.APP_ROOT, 'static', 'uploads', 'image', str(uuid.uuid4())) + ".png"
    svg2png(url=filename, write_to=png_name)
    return png_name
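This method lives on the SVG2PNG class and depends on a few libraries that are not shown in the snippet – lxml for the XML manipulation and CairoSVG for the rasterization. The surrounding module presumably carries imports along these lines (treat the exact list as an assumption):

import os
import uuid
from lxml import etree
from lxml.etree import parse
from cairosvg import svg2png  # assumed source of the svg2png() call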

Finally, we have our badges generated with a custom colored background.

Here is a sample image:


Open Event Server – Export Speakers as CSV File

FOSSASIA's Open Event Server is the REST API backend for the event management platform, Open Event. Here, the event organizers can create their events, add tickets for them and manage all aspects from the schedule to the speakers. Also, once an organizer makes an event public, others can view it and buy tickets if interested.

The organizer can see all the speakers in a very detailed view in the event management dashboard, including the status of each speaker. The possible statuses are pending, accepted and rejected. He/she can take actions such as editing or viewing speakers.

If the organizer wants to download the list of all the speakers as a CSV file, he or she can do it very easily by simply clicking on the Export As CSV button in the top right-hand corner.

Let us see how this is done on the server.

Server side – generating the Speakers CSV file

Here we will be using the csv package provided by Python for writing the CSV file.

import csv
  • We define a method export_speakers_csv which takes the speakers to be exported as a CSV file as the argument.
  • Next, we define the headers of the CSV file. It is the first row of the CSV file.
def export_speakers_csv(speakers):
   headers = ['Speaker Name', 'Speaker Email', 'Speaker Session(s)',
              'Speaker Mobile', 'Speaker Bio', 'Speaker Organisation', 'Speaker Position']
  • A list is defined called rows. This contains the rows of the CSV file. As mentioned earlier, headers is the first row.
rows = [headers]
  • We iterate over each speaker in speakers and form a row for that speaker by separating the values of each of the columns by a comma. Here, every row is one speaker.
  • As a speaker can contain multiple sessions we iterate over each session for that particular speaker and append each session to a string. ‘;’ is used as a delimiter. This string is then added to the row. We also include the state of the session – accepted, rejected, confirmed.
  • The newly formed row is added to the rows list.
for speaker in speakers:
   column = [speaker.name if speaker.name else '', speaker.email if speaker.email else '']
   if speaker.sessions:
       session_details = ''
       for session in speaker.sessions:
           if not session.deleted_at:
               session_details += session.title + ' (' + session.state + '); '
       column.append(session_details[:-2])
   else:
       column.append('')
   column.append(speaker.mobile if speaker.mobile else '')
   column.append(speaker.short_biography if speaker.short_biography else '')
   column.append(speaker.organisation if speaker.organisation else '')
   column.append(speaker.position if speaker.position else '')
   rows.append(column)
  • rows contains the contents of the CSV file and hence it is returned.
return rows
  • We iterate over each item of rows and write it to the CSV file using the methods provided by the csv package.
with open(file_path, "w") as temp_file:
   writer = csv.writer(temp_file)
   from app.api.helpers.csv_jobs_util import export_speakers_csv
   content = export_speakers_csv(speakers)
   for row in content:
       writer.writerow(row)
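For a single speaker with two sessions, the generated file would look roughly like this (illustrative values only):

Speaker Name,Speaker Email,Speaker Session(s),Speaker Mobile,Speaker Bio,Speaker Organisation,Speaker Position
John Doe,john@example.com,Keynote (accepted); Workshop (confirmed),+6512345678,Loves open source,FOSSASIA,Developer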

Obtaining the Speakers CSV file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/speakers/csv

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the speakers of the event as a CSV file. It returns the URL of the task to get the status of the export task. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL and check the status of his/her Celery task. If the task completed successfully he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server –

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.


Open Event Server – Export Sessions as CSV File

FOSSASIA's Open Event Server is the REST API backend for the event management platform, Open Event. Here, the event organizers can create their events, add tickets for them and manage all aspects from the schedule to the speakers. Also, once an organizer makes an event public, others can view it and buy tickets if interested.

The organizer can see all the sessions in a very detailed view in the event management dashboard, including the status of each session. The possible statuses are pending, accepted, confirmed and rejected. He/she can take actions such as accepting or rejecting the sessions.

If the organizer wants to download the list of all the sessions as a CSV file, he or she can do it very easily by simply clicking on the Export As CSV button in the top right-hand corner.

Let us see how this is done on the server.

Server side – generating the Sessions CSV file

Here we will be using the csv package provided by Python for writing the CSV file.

import csv
  • We define a method export_sessions_csv which takes the sessions to be exported as a CSV file as the argument.
  • Next, we define the headers of the CSV file. It is the first row of the CSV file.
def export_sessions_csv(sessions):
   headers = ['Session Title', 'Session Speakers',
              'Session Track', 'Session Abstract', 'Created At', 'Email Sent']
  • A list is defined called rows. This contains the rows of the CSV file. As mentioned earlier, headers is the first row.
rows = [headers]
  • We iterate over each session in sessions and form a row for that session by separating the values of each of the columns by a comma. Here, every row is one session.
  • As a session can contain multiple speakers we iterate over each speaker for that particular session and append each speaker to a string. ‘;’ is used as a delimiter. This string is then added to the row.
  • The newly formed row is added to the rows list.
for session in sessions:
   if not session.deleted_at:
       column = [session.title + ' (' + session.state + ')' if session.title else '']
       if session.speakers:
           in_session = ''
           for speaker in session.speakers:
               if speaker.name:
                   in_session += (speaker.name + '; ')
           column.append(in_session[:-2])
       else:
           column.append('')
       column.append(session.track.name if session.track and session.track.name else '')
       column.append(strip_tags(session.short_abstract) if session.short_abstract else '')
       column.append(session.created_at if session.created_at else '')
       column.append('Yes' if session.is_mail_sent else 'No')
       rows.append(column)
  • rows contains the contents of the CSV file and hence it is returned.
return rows
  • We iterate over each item of rows and write it to the CSV file using the methods provided by the csv package.
writer = csv.writer(temp_file)
from app.api.helpers.csv_jobs_util import export_sessions_csv
content = export_sessions_csv(sessions)
for row in content:
   writer.writerow(row)
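Note that strip_tags is used above to remove HTML markup from the abstract before it is written to the CSV file. That helper is not shown here; presumably it is an HTML-sanitizing utility from the codebase, conceptually similar to this sketch:

import re

def strip_tags(html):
    # naive tag stripper for illustration only; the real helper
    # in the codebase may handle entities and edge cases
    return re.sub(r'<[^>]+>', '', html)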

Obtaining the Sessions CSV file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/sessions/csv

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the sessions of the event as a CSV file. It returns the URL of the task to get the status of the export task. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL and check the status of his/her Celery task. If the task completed successfully he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server –

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.


Implementing Endpoint to Resend Email Verification

Earlier, when a user registered via Open Event Frontend, s/he received a verification link via email to confirm their account. However, this was not enough in the long-term. If the confirmation link expired, or for some reasons the verification mail got deleted on the user side, there was no functionality to resend the verification email, which prevented the user from getting fully registered. Although the front-end already showed the option to resend the verification link, there was no support from the server to do that, yet.

So it was decided that a separate endpoint should be implemented to allow re-sending the verification link to a user. /resend-verification-email was an endpoint that would fit this action, so we decided to go with it and create a route in the `auth.py` file, which was the appropriate place for this feature to reside. The first step was to do the necessary imports and then the definition:

from app.api.helpers.mail import send_email_confirmation
from app.models.mail import USER_REGISTER_WITH_PASSWORD
...
...
@auth_routes.route('/resend-verification-email', methods=['POST'])
def resend_verification_email():
...

Now we safely fetch the email mentioned in the request and then search the database for the user corresponding to that email:

def resend_verification_email():
    try:
        email = request.json['data']['email']
    except TypeError:
        return BadRequestError({'source': ''}, 'Bad Request Error').respond()

    try:
        user = User.query.filter_by(email=email).one()
    except NoResultFound:
        return UnprocessableEntityError(
            {'source': ''}, 'User with email: ' + email + ' not found.').respond()
    else:

    ...

Once a user has been identified in the database, we proceed further and create an essentially unique hash for the user verification. This hash is in turn used to generate a verification link that is then ready to be sent via email to the user:

else:
    serializer = get_serializer()
    hash_ = str(base64.b64encode(str(serializer.dumps(
        [user.email, str_generator()])).encode()), 'utf-8')
    link = make_frontend_url(
        '/email/verify'.format(id=user.id), {'token': hash_})

Finally, the email is sent:

    send_email_with_action(
        user, USER_REGISTER_WITH_PASSWORD,
        app_name=get_settings()['app_name'], email=user.email)
    if not send_email_confirmation(user.email, link):
        return make_response(jsonify(message="Some error occured"), 500)
    return make_response(jsonify(message="Verification email resent"), 200)
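With the route in place, a client can trigger a resend with a POST request like the one sketched below; the host and port assume a local development server, and the /v1/auth prefix assumes the usual mounting of the auth blueprint:

$ curl -X POST http://localhost:5000/v1/auth/resend-verification-email \
    -H "Content-Type: application/json" \
    -d '{"data": {"email": "user@example.com"}}'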

But this was not enough. When the endpoint was tested, it was found that actual emails were not being delivered, even after correctly configuring the email settings locally. So, after a bit of debugging, it was found that the settings, which were using Sendgrid to send emails, were using a deprecated Sendgrid API endpoint. A separate email function is used to send emails via Sendgrid and it contained an old endpoint that was no longer recommended by Sendgrid:

@celery.task(name='send.email.post')
def send_email_task(payload, headers):
   requests.post(
       "https://api.sendgrid.com/api/mail.send.json",
       data=payload,
       headers=headers
   )

The new endpoint, as per Sendgrid’s documentation, is:

https://api.sendgrid.com/v3/mail/send

But this was not the only change required. Sendgrid had also modified the structure of requests they accepted, and the new structure was different from the existing one that was used in the server. Following is the new structure:

'{"personalizations": [{"to": [{"email": "example@example.com"}]}],"from": {"email": "example@example.com"},"subject": "Hello, World!","content": [{"type": "text/plain", "value": "Heya!"}]}'

The header structure was also changed, so the structure in the server was also updated to

headers = {
    "Authorization": ("Bearer " + key),
    "Content-Type": "application/json"
}

The Sendgrid function (which is executed as a Celery task) was modified as follows, to incorporate the changes in the API endpoint and structure:

import json
...
@celery.task(name='send.email.post')
def send_email_task(payload, headers):
    data = {"personalizations": [{"to": []}]}
    data["personalizations"][0]["to"].append({"email": payload["to"]})
    data["from"] = {"email": payload["from"]}
    data["subject"] = payload["subject"]
    data["content"] = [{"type": "text/html", "value": payload["html"]}]
    requests.post(
        "https://api.sendgrid.com/v3/mail/send",
        data=json.dumps(data),
        headers=headers,
        verify=False  # doesn't work with verification in celery context
    )
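For reference, the payload consumed by this task is a plain dict assembled by the email helper. A hypothetical invocation, with field names taken from the task above and key standing in for the Sendgrid API key:

payload = {
    "to": "recipient@example.com",
    "from": "noreply@example.com",
    "subject": "Verify your email",
    "html": "<p>Click the link to verify your account.</p>"
}
headers = {
    "Authorization": "Bearer " + key,
    "Content-Type": "application/json"
}
send_email_task.delay(payload, headers)  # queue the Celery task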

 

As can be seen, there is a bug that doesn’t allow SSL verification within the celery context. However, the verification is successful when the functionality is executed independent of the celery context. But now email sending via Sendgrid actually works, which makes our verification resend endpoint functional. Email is received successfully by the recipient.


Thus, a working email verification endpoint is implemented, which can be easily integrated in the frontend.



Adding Check-in Attributes to Tickets

Recently, it was decided by the Open Event Orga App team that the event ticket API response from Open Event Server should have two additional attributes for specifying event check-in access. At first sight, it seemed that adding these options would only require changes in the orga app, but it turned out that the entire Ticket API on the server would need this addition.

Implementing these attributes turned out to be quite straightforward. Specifically, the fields to be added were the booleans is_checkin_restricted and auto_checkin_enabled. By default, check-in is restricted and not automatic, so the default values for these fields were chosen to be True and False respectively. To add them, the ticket model file was changed first – due to the addition of these two columns:

class Ticket(SoftDeletionModel):
    ...
    is_checkin_restricted = db.Column(db.Boolean)  # <--
    auto_checkin_enabled = db.Column(db.Boolean)  # <--
    ...
    def __init__(self,
        name=None,
        event_id=None,
        ...
        is_checkin_restricted=True,
        auto_checkin_enabled=False):

        self.name = name
        ...
        self.is_checkin_restricted = is_checkin_restricted
        self.auto_checkin_enabled = auto_checkin_enabled
        ...

Since the ticket database model was updated, a migration had to be performed. The following shell commands (run at the Open Event Server project root) did the migration and database upgrade; a migration file was then generated:

$ python manage.py db migrate
$ python manage.py db upgrade

Here’s the generated migration file:

from alembic import op
import sqlalchemy as sa

revision = '6440077182f0'
down_revision = 'eaa029ebb260'

def upgrade():
    op.add_column('tickets', sa.Column('auto_checkin_enabled', sa.Boolean(), nullable=True))
    op.add_column('tickets', sa.Column('is_checkin_restricted', sa.Boolean(), nullable=True))

def downgrade():
    op.drop_column('tickets', 'is_checkin_restricted')
    op.drop_column('tickets', 'auto_checkin_enabled')

The next code change was required in the ticket API schema. The change was essentially the same as the one added in the model file – just these 2 new fields were added:

class TicketSchemaPublic(SoftDeletionSchema):
    ...
    id = fields.Str(dump_only=True)
    name = fields.Str(required=True)
    ...
    is_checkin_restricted = fields.Boolean(default=True)  # <--
    auto_checkin_enabled = fields.Boolean(default=False)  # <--
    event = Relationship(attribute='event',
                         self_view='v1.ticket_event',
                         self_view_kwargs={'id': '<id>'},
                         related_view='v1.event_detail',
                         related_view_kwargs={'ticket_id': '<id>'},
                         schema='EventSchemaPublic',
                         type_='event')
    ...

Now all that remained were changes in the API documentation, which were made accordingly. This completed the addition of these two check-in attributes in the ticket API, and they eventually made their way into the orga app. The attributes can be requested as usual by the front-end and user app as well.
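After this change, a ticket fetched from the API carries the two new flags among its attributes. An illustrative (not verbatim) response fragment, assuming the API’s usual dasherized attribute naming:

{
  "data": {
    "type": "ticket",
    "id": "1",
    "attributes": {
      "name": "Early Bird",
      "is-checkin-restricted": true,
      "auto-checkin-enabled": false
    }
  }
}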



Adding Port Specification for Static File URLs in Open Event Server

Until now, static files stored locally on the Open Event server did not have port specification in their URLs. This opened the door for problems while consuming local APIs. It would have created inconsistencies if two server processes were run on the same machine but at different ports. In this blog post, I will explain my approach towards solving this problem, and describe code snippets to demonstrate the changes I made in the Open Event Server codebase.

The first part in this process involved finding the source of the bug. For this, my open-source integrated development environment, Microsoft Visual Studio Code, turned out to be especially useful. It allowed me to jump from function calls to function definitions quickly.


I started at events.py and jumped all the way to storage.py, where I finally found out the source of this bug, in upload_local() function:

def upload_local(uploaded_file, key, **kwargs):
    """
    Uploads file locally. Base dir - static/media/
    """
    filename = secure_filename(uploaded_file.filename)
    file_relative_path = 'static/media/' + key + '/' + generate_hash(key) + '/' + filename
    file_path = app.config['BASE_DIR'] + '/' + file_relative_path
    dir_path = file_path.rsplit('/', 1)[0]
    # delete current
    try:
        rmtree(dir_path)
    except OSError:
        pass
    # create dirs
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)
        uploaded_file.save(file_path)
        file_relative_path = '/' + file_relative_path
    if get_settings()['static_domain']:
        return get_settings()['static_domain'] + \
            file_relative_path.replace('/static', '')
    url = urlparse(request.url)
    return url.scheme + '://' + url.hostname + file_relative_path

Look closely at the return statement:

return url.scheme + '://' + url.hostname + file_relative_path

Bingo! This is the source of our bug. A straightforward solution is to simply concatenate the port number in between, but that will make this one-liner look clumsy – unreadable and un-pythonic. We therefore use Python string formatting:

return '{scheme}://{hostname}:{port}{file_relative_path}'.format(
    scheme=url.scheme, hostname=url.hostname, port=url.port,
    file_relative_path=file_relative_path)

But this statement isn’t perfect. There’s an edge case that might give an unexpected URL. If the port isn’t originally specified, Python’s string formatting will substitute url.port with None. This will result in a URL like http://localhost:None/some/file_path.jpg, which is obviously something we don’t desire. We therefore append a call to Python’s string replace() method: replace(':None', '')

The resulting return statement now looks like the following:

return '{scheme}://{hostname}:{port}{file_relative_path}'.format(
    scheme=url.scheme, hostname=url.hostname, port=url.port,
    file_relative_path=file_relative_path).replace(':None', '')
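To see why the replace() call is needed, consider how urlparse behaves when no port is specified. This small illustration (not part of the server code) demonstrates the edge case:

from urllib.parse import urlparse

url = urlparse('http://localhost/static/media/x.jpg')
print(url.port)  # None – no port was specified
formatted = '{scheme}://{hostname}:{port}'.format(
    scheme=url.scheme, hostname=url.hostname, port=url.port)
print(formatted)  # http://localhost:None – the unwanted ':None'
print(formatted.replace(':None', ''))  # http://localhost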

This should fix the problem. But that’s not enough. We need to ensure that our project adapts well with the change we made. We check this by running the project tests locally:

$ nosetests tests/unittests

Unfortunately, the tests fail with the following traceback:

======================================================================
ERROR: test_create_save_image_sizes (tests.unittests.api.helpers.test_files.TestFilesHelperValidation)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/open-event-server/tests/unittests/api/helpers/test_files.py", line 138, in test_create_save_image_sizes
resized_width_large, _ = self.getsizes(resized_image_file_large)
File "/open-event-server/tests/unittests/api/helpers/test_files.py", line 22, in getsizes
im = Image.open(file)
File "/usr/local/lib/python3.6/site-packages/PIL/Image.py", line 2312, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/open-event-server:5000/static/media/events/53b8f572-5408-40bf-af97-6e9b3922631d/large/UFNNeW5FRF/5980ede1-d79b-4907-bbd5-17511eee5903.jpg'

It’s evident from this traceback that the code in our test framework is not converting the image url to file path correctly. The port specification part is working fine, but it should not affect file names, they should be independent of port number. The files saved originally do not have port specified in their name, but the code in test framework is expecting port to be involved, hence the above error.

Using the traceback, I went to the code in the test framework where this problem occurred:

def test_create_save_image_sizes(self):
       with app.test_request_context():
           image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'

           image_sizes_type = "event"
           width_large = 1300
           width_thumbnail = 500
           width_icon = 75
           image_sizes = create_save_image_sizes(image_url_test, image_sizes_type)

           resized_image_url = image_sizes['original_image_url']
           resized_image_url_large = image_sizes['large_image_url']
           resized_image_url_thumbnail = image_sizes['thumbnail_image_url']
           resized_image_url_icon = image_sizes['icon_image_url']

           resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
           resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large.split('/localhost')[1]
           resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail.split('/localhost')[1]
           resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon.split('/localhost')[1]

           resized_width_large, _ = self.getsizes(resized_image_file_large)
           resized_width_thumbnail, _ = self.getsizes(resized_image_file_thumbnail)
           resized_width_icon, _ = self.getsizes(resized_image_file_icon)

           self.assertTrue(os.path.exists(resized_image_file))
           self.assertEqual(resized_width_large, width_large)
           self.assertEqual(resized_width_thumbnail, width_thumbnail)
           self.assertEqual(resized_width_icon, width_icon)

 

Obviously, resized_image_url.split('/localhost')[1] will involve the port number. So we have to change this line. But this means we also have to change the subsequent lines involving the thumbnail, icon and large images. Instead of stripping the port for each of these, we can simply do this collectively at an earlier stage. So we redefine the image_sizes dictionary after the create_save_image_sizes() function call:

image_sizes = {
    url_name: urlparse(image_sizes[url_name]).path
    for url_name in image_sizes
}  # Now file names don't contain port (this gives relative urls).

Now we can simplify the lines each of which earlier required port-stripping code:

resized_image_file = app.config.get('BASE_DIR') + resized_image_url
resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large
resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail
resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon

We now do a similar modification in test_create_save_resized_image() test method as it also involves URL to file path conversion. We break the line

resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]

to 2 lines:

resized_image_path = urlparse(resized_image_url).path
resized_image_file = app.config.get('BASE_DIR') + resized_image_path

Now let’s run the tests (which failed earlier) again:

Finally, the tests pass without errors! Now, we can add some extra convenience functionality: we can also strip the port when it corresponds with the protocol we’re using. For example, if we’re using the https protocol, then we need not specify the port if it is 443, as 443 corresponds to that protocol. We can add this functionality by creating a mapping of such correspondences and checking it before generating the URL. To do this, we now go back to storage.py and add the following:

SCHEMES = {80: 'http', 443: 'https'}

And add

# No need to specify scheme-corresponding port
port = url.port
if port and url.scheme == SCHEMES.get(url.port, None):
    port = None

just before

return '{scheme}://{hostname}:{port}{file_relative_path}'.format(
    scheme=url.scheme, hostname=url.hostname, port=port,
    file_relative_path=file_relative_path).replace(':None', '')

And that finishes our work! The tests again pass successfully, plus on top of that we have this new functionality of mentioning ports only when they don’t correspond with the URL scheme!



Open Event Server – Export Attendees as CSV File

FOSSASIA's Open Event Server is the REST API backend for the event management platform, Open Event. Here, the event organizers can create their events, add tickets for them and manage all aspects from the schedule to the speakers. Also, once an organizer makes an event public, others can view it and buy tickets if interested.

The organizer can see all the attendees in a very detailed view in the event management dashboard, including the status of each attendee. The possible statuses are completed, placed, pending, expired and canceled, along with whether the attendee has checked in or not. He/she can take actions such as checking in the attendee.

If the organizer wants to download the list of all the attendees as a CSV file, he or she can do it very easily by simply clicking on Export As and then on CSV.

Let us see how this is done on the server.

Server side – generating the Attendees CSV file

Here we will be using the csv package provided by Python for writing the CSV file.

import csv
  • We define a method export_attendees_csv which takes the attendees to be exported as a CSV file as the argument.
  • Next, we define the headers of the CSV file. It is the first row of the CSV file.
def export_attendees_csv(attendees):
   headers = ['Order#', 'Order Date', 'Status', 'First Name', 'Last Name', 'Email',
              'Country', 'Payment Type', 'Ticket Name', 'Ticket Price', 'Ticket Type']
  • A list is defined called rows. This contains the rows of the CSV file. As mentioned earlier, headers is the first row.
rows = [headers]
  • We iterate over each attendee in attendees and form a row for that attendee by separating the values of each of the columns by a comma. Here, every row is one attendee.
  • The newly formed row is added to the rows list.
for attendee in attendees:
   column = [str(attendee.order.get_invoice_number()) if attendee.order else '-',
             str(attendee.order.created_at) if attendee.order and attendee.order.created_at else '-',
             str(attendee.order.status) if attendee.order and attendee.order.status else '-',
             str(attendee.firstname) if attendee.firstname else '',
             str(attendee.lastname) if attendee.lastname else '',
             str(attendee.email) if attendee.email else '',
             str(attendee.country) if attendee.country else '',
             str(attendee.order.payment_mode) if attendee.order and attendee.order.payment_mode else '',
             str(attendee.ticket.name) if attendee.ticket and attendee.ticket.name else '',
             str(attendee.ticket.price) if attendee.ticket and attendee.ticket.price else '0',
             str(attendee.ticket.type) if attendee.ticket and attendee.ticket.type else '']

   rows.append(column)
  • rows contains the contents of the CSV file and hence it is returned.
return rows
  • We iterate over each item of rows and write it to the CSV file using the methods provided by the csv package.
writer = csv.writer(temp_file)
from app.api.helpers.csv_jobs_util import export_attendees_csv
content = export_attendees_csv(attendees)
for row in content:
   writer.writerow(row)

Obtaining the Attendees CSV file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/attendees/csv

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the attendees of the event as a CSV file. It returns the URL of the task to get the status of the export task. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL and check the status of his/her Celery task. If the task completed successfully he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server –

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.
