Uploading Badges to Google Cloud in Badgeyay

Badgeyay is an open source project developed by the FOSSASIA community. The project mainly aims at generating badges for technical conferences and events. It is divided into two parts: a backend developed in Flask and a frontend developed in Ember.js. The problem is that after badge generation, the Flask server stores and serves those files. In practice this is not a good convention; serving files should be handled by a dedicated server program such as Gunicorn or Nginx. A better approach is to use Firebase Storage and the Admin SDK to store the badges on Google Cloud. This offloads storage from the Flask server and also provides a public link for accessing each badge over the network.
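Before any upload can happen, the Firebase Admin SDK must be initialized with a service account and a default storage bucket. Below is a minimal initialization sketch; the key path and bucket name are placeholders, and in Badgeyay something like this would run once at application startup.

import firebase_admin
from firebase_admin import credentials

# Placeholder credentials and bucket name -- substitute your project's values
cred = credentials.Certificate('path/to/serviceAccountKey.json')
firebase_admin.initialize_app(cred, {
    'storageBucket': '<project-id>.appspot.com'
})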

Procedure

  1. Get the file path of the temporary badge generated on the Flask server. Currently badges are saved in the directory of the uploaded image file, and the final combined badge is written to all-badges.pdf:
badgePath = os.getcwd() + '/static/temporary/' + badgeFolder
badgePath + '/all-badges.pdf'

  2. Create the blob path for the storage. A blob can be understood as the final reference to the location where the contents are saved on the server. It can be a nested directory structure or simply a filename in the root directory:
'badges/' + badge_created.id + '.pdf'

In our case, it is the ID of the generated badge, placed inside the badges directory.

  3. Write the function for uploading the file generated in temporary storage to Google Cloud Storage:
from firebase_admin import storage

def fileUploader(file_path, blob_path):
    # Bucket configured during Firebase Admin SDK initialization
    bucket = storage.bucket()
    fileUploaderBlob = bucket.blob(blob_path)
    try:
        with open(file_path, 'rb') as file_:
            fileUploaderBlob.upload_from_file(file_)
    except Exception as e:
        print(e)
        return None  # don't publish a blob that failed to upload
    # Make the uploaded blob publicly readable and return its URL
    fileUploaderBlob.make_public()
    return fileUploaderBlob.public_url

The function gets a bucket reference through the Firebase Admin SDK, opens the file from the given path, and writes its contents to the cloud storage. Once the data is written, the blob is made public and its public access link is fetched, which is then returned and saved in the local database.
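Putting the steps together, a hypothetical call site could look like the sketch below; badge_created is assumed to be the badge row just saved to the database, and download_link is an assumed column name for storing the public URL.

# Hypothetical wiring of the steps above (download_link is an assumed name)
public_link = fileUploader(badgePath + '/all-badges.pdf',
                           'badges/' + badge_created.id + '.pdf')
badge_created.download_link = public_link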

Topics Involved

  • Firebase Admin SDK for storage
  • Google Cloud Storage SDK

Resources

  • Firebase Admin SDK documentation
  • Google Cloud Storage SDK for Python documentation
  • Blob management documentation


Using external UART modules to debug PSLab operations

Pocket Science Lab by FOSSASIA is a compact tool that can be used for circuit analytics and debugging. To make things more interesting, the device can be accessed via an Android app or a desktop app. Both apps use the UART protocol (Universal Asynchronous Receiver-Transmitter) to transmit commands to the PSLab device from a mobile phone or PC and to receive responses back. The peculiar thing about hardware is that the developer cannot simply log data the way they would when developing and debugging a software program. They need some kind of external mechanism or tool to visualize the data packets travelling through the wires.

Figure 1: UART Interface in PSLab

PSLab has a UART interface exposed for exactly this reason, and also to connect external sensors that use the UART protocol. With it, a developer debugging either the Android app or the desktop app can view the command and data packets transmitted between the device and the user-end application.

This requires some additional components. The UART interface has two communication-related pins: Rx (receiver) and Tx (transmitter). We will need to monitor both of these pin signals for input and output data packets. Keep in mind that PSLab uses 3.3V signals; this voltage level is important because applying 5V signals to these pins will damage the main IC. FTDI modules are available on the market for this purpose. FTDI stands for Future Technology Devices International, a company whose main product is this USB transceiver chip. These chips play a major role in the electronics industry due to their reliability and multiple-voltage support. Since PSLab uses 3.3V on its Tx and Rx pins, modules that do not support 3.3V logic will not work with it.

Figure 2: FTDI Module from SparkFun

The module shown in Fig. 2 is an FTDI module which you can simply plug into a computer to get a serial monitor interface. There are cheaper versions on shopping websites such as eBay or Alibaba, and they will also work fine. Monitoring both the Tx and Rx pins requires two of these modules, connected as follows:

PSLab [Rx Pin] → FTDI Module 1 [Rx Pin]
PSLab [Tx Pin] → FTDI Module 2 [Rx Pin]

This might look strange, because normally a UART module is connected Rx → Tx and Tx → Rx. Notice that our goal is to monitor data packets, not to communicate with the PSLab device directly. We want to see whether the Android app is sending the correct commands to the PSLab device, and whether the device is transmitting back the expected results. This method helped a lot when debugging the resistance measurement application in the PSLab Android app.

Monitoring these UART data packets can be done simply using a serial monitor. You can either download and install an existing serial monitor, or write a small Python script like the following, which does the trick:

import serial

# Open the serial port exposed by the FTDI module; PSLab communicates
# at 1000000 baud (1 Mbps). Adjust the port name to match your system.
ser = serial.Serial(
    port='/dev/ttyUSB1',
    baudrate=1000000
)

while True:
    data = b''
    while ser.in_waiting > 0:
        data += ser.read(1)

    if data:
        print(">>", data)

Once you are receiving commands and responses, it will feel like debugging software using a console.

References:

  1. PySerial Library : http://pyserial.readthedocs.io/en/latest/shortintro.html
  2. UART Protocol : https://en.wikipedia.org/wiki/Universal_asynchronous_receiver-transmitter
  3. FTDI : https://en.wikipedia.org/wiki/FTDI

Exporting CSV data through API

A badge generator like Badgeyay must be able to generate, store and export user data as and when needed. This blog post is about adding export functionality to the Badgeyay backend.

Why do we need such an API?

A user may want to retrieve the details he/she has uploaded to the system or server. In our case, that means exporting the uploaded CSV data from the Badgeyay backend.

Adding the functionality to backend

Let us see how we implemented this functionality in the backend of the project.

Step 1 : Adding the necessary imports

We first need to import the required dependencies for the route to work

import os
import base64
import uuid
from flask import request, Blueprint, jsonify
from flask import current_app as app
from api.models.file import File
from api.schemas.file import ExportFileSchema
from api.utils.errors import ErrorResponse
from api.schemas.errors import FileNotFound

Step 2 : Adding a route

This step involves adding a separate route that provides us with the exported data from the backend.

@router.route('/csv/data', methods=['GET'])
def export_data():
    input_data = request.args
    file = File().query.filter_by(
        filename=input_data.get('filename')).first()

    if file is None:
        return ErrorResponse(
            FileNotFound(input_data.get('filename')).message,
            422,
            {'Content-Type': 'application/json'}).respond()

    export_obj = {
        'filename': file.filename,
        'filetype': file.filetype,
        'id': str(uuid.uuid4()),
        'file_data': None}

    with open(os.path.join(app.config.get('BASE_DIR'), 'static',
                           'uploads', 'csv', export_obj['filename']),
              'r') as f:
        export_obj['file_data'] = f.read()

    export_obj['file_data'] = base64.b64encode(
        export_obj['file_data'].encode())

    return jsonify(ExportFileSchema().dump(export_obj).data)

Step 3 : Adding a relevant Schema

After creating the route, we need to add a relevant schema that delivers the exported file data to the Ember.js frontend, so that it can be consumed as JSON API objects and shown to the user.

from marshmallow_jsonapi import Schema, fields

class ExportFileSchema(Schema):
    class Meta:
        type_ = 'export-data'
        kwargs = {'id': '<id>'}

    id = fields.Str(required=True, dump_only=True)
    filename = fields.Str(required=True, dump_only=True)
    filetype = fields.Str(required=True, dump_only=True)
    file_data = fields.Str(required=True, dump_only=True)

This is the ExportFileSchema that produces the output results of the GET request on the route. This helps us get the data onto the frontend.
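To try the route out, a request like the sketch below should work; the /api prefix for the blueprint and the sample filename are assumptions here.

$ ~ curl -X GET "http://localhost:5000/api/csv/data?filename=sample.csv"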

Further Improvements

We are working on making Badgeyay more comprehensive yet simple. This API endpoint still needs to be wired into the frontend, which is a further improvement that can be taken up over the coming days.


Dated queries in Badgeyay admin

Badgeyay is not just an anonymous badge generator that creates badges according to your needs; it now has an admin section that allows the admin of the website to control it and look over its statistics.

Why do we need such an API?

For an admin, one of the most common tasks is gathering details about the users and the files being served from the server. The admin must also be aware of the traffic or files on the server during a particular period of time. So we need an API that can handle dated queries against the backend database.

Adding the functionality to backend

Let us see how we implemented this functionality in the backend of the project.

Step 1 : Adding a route

This step involves adding a separate route that provides us with the output of the dated badge queries from the backend.

@router.route('/get_badges_dated', methods=['POST'])
def get_badges_dated():
    schema = DatedBadgeSchema()
    input_data = request.get_json()
    data, err = schema.load(input_data)
    if err:
        return jsonify(err)
    dated_badges = Badges.query.filter(
        Badges.created_at <= data.get('end_date')).filter(
        Badges.created_at >= data.get('start_date'))
    return jsonify(AllBadges(many=True).dump(dated_badges).data)

This route allows us to get badges produced by any user during a certain duration as a JSON API data object. This object is fed to the frontend to render the badges as cards.
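A hedged request sketch for this route follows; the /api/admin mount point and the date values are assumptions, and the payload follows the JSON API document shape that DatedBadgeSchema expects.

$ ~ curl -X POST http://localhost:5000/api/admin/get_badges_dated \
    -H "Content-Type: application/json" \
    -d '{"data": {"type": "dated-badges", "attributes": {"start_date": "2018-06-01", "end_date": "2018-06-30"}}}'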

Step 2 : Adding a relevant Schema

After creating a route we need to add a relevant schema that will help us to deliver the badges generated by the user to the Ember JS frontend so that it can be consumed as JSON API objects and shown to the user.

class DatedBadgeSchema(Schema):
    class Meta:
        type_ = 'dated-badges'
        kwargs = {'id': '<id>'}

    id = fields.Str(required=True, dump_only=True)
    start_date = fields.Date(required=True)
    end_date = fields.Date(required=True)


class AllBadges(Schema):
    class Meta:
        type_ = 'all-badges'
        self_view = 'admin.get_all_badges'
        kwargs = {'id': '<id>'}

    id = fields.Str(required=True, dump_only=True)
    image = fields.Str(required=True)
    csv = fields.Str(required=True)
    badge_id = fields.Str(required=True)
    text_color = fields.Str(required=True)
    badge_size = fields.Str(required=True)
    created_at = fields.Date(required=True)
    user_id = fields.Relationship(
        self_url='/api/upload/get_file',
        self_url_kwargs={'file_id': '<id>'},
        related_url='/user/register',
        related_url_kwargs={'id': '<id>'},
        include_resource_linkage=True,
        type_='User'
    )

The DatedBadgeSchema loads the start and end dates from the POST request, while the AllBadges schema serializes the badges returned by the dated query.

Further Improvements

We are working on adding multiple routes and adding modifications to database models and schemas so that the functionality of Badgeyay can be extended to a large extent. This will help us in making this badge generator even better.


Get My Badges from Badgeyay API

Badgeyay is no longer a simple badge generator. It has more cool features than before.

Badgeyay now supports a feature that shows your badges, called the 'my-badges' component. To make this component work, we need a backend API that delivers the badges produced by a particular user.

Why do we need such an API?

The main aim of Badgeyay has changed from being a standard and simple badge generator to a complete suite that solves your badge generation and management problem. So to tackle the problem of managing the produced badges per user, we need to define a separate route and schema that delivers the generated badges.

Adding the functionality to backend

Let us see how we implemented this functionality in the backend of the project.

Step 1 : Adding a route

This step involves adding a separate route that returns the generated badges linked with the user's account.

@router.route('/get_badges', methods=['GET'])
def get_badges():
    input_data = request.args
    user = User.getUser(user_id=input_data.get('uid'))
    badges = Badges().query.filter_by(creator=user)
    return jsonify(UserBadges(many=True).dump(badges).data)

This route allows us to get badges produced by the user as a JSON API data object. This object is fed to the frontend to render the badges as cards.

Step 2 : Adding a relevant Schema

After creating a route we need to add a relevant schema that will help us to deliver the badges generated by the user to the Ember JS frontend so that it can be consumed as JSON API objects and shown to the user.

class UserBadges(Schema):
    class Meta:
        type_ = 'user-badges'
        self_view = 'generateBadges.get_badges'
        kwargs = {'id': '<id>'}

    id = fields.Str(required=True, dump_only=True)
    image = fields.Str(required=True)
    csv = fields.Str(required=True)
    badge_id = fields.Str(required=True)
    text_color = fields.Str(required=True)
    badge_size = fields.Str(required=True)
    user_id = fields.Relationship(
        self_url='/api/upload/get_file',
        self_url_kwargs={'file_id': '<id>'},
        related_url='/user/register',
        related_url_kwargs={'id': '<id>'},
        include_resource_linkage=True,
        type_='User'
    )

This is the UserBadges schema that produces the output of the GET request on the route.

Finally, once this is done we can fire up a GET request on our deployment to receive results. The command that you need to run is given below.

$ ~ curl -X GET http://localhost:5000/api/get_badges?uid={user_id}
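The response is a JSON API collection; a trimmed, illustrative shape (all values here are hypothetical) would look roughly like:

{
  "data": [
    {
      "type": "user-badges",
      "id": "<badge-id>",
      "attributes": {
        "image": "background.png",
        "csv": "attendees.csv",
        "badge_id": "<badge-id>",
        "text_color": "#ffffff",
        "badge_size": "A7"
      }
    }
  ]
}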

Further Improvements

We are working on adding multiple routes and adding modifications to database models and schemas so that the functionality of Badgeyay can be extended to a large extent. This will help us in making this badge generator even better.


Custom Colored Images with Badgeyay

The backend functionality of any badge generator is to generate badges as per the requirements of the user. Currently, Badgeyay is capable of generating badges in the following ways:

  • Adding or Selecting a Pre-defined Image from the given set
  • Uploading a new image and then using it as a background

However, Badgeyay has been missing the ability to generate custom colored badges.

What is meant by Custom Colored Badges?

Currently, there is a set of 7 different pre-defined background images to choose from. But suppose a user likes one of these designs but not its color. Therefore we provide the user with an additional option of applying a custom background color to their badges. This allows Badgeyay to deliver a wider variety of badges than ever before.

Adding the functionality to backend

Let's see how this functionality has been implemented in the backend of the project.

Step 1 : Adding a background-color route to the backend

Before generating badges, we need to know what color the user wants on the badge. Therefore we created a route that receives the color and recolors the user_defined.svg template accordingly.

@router.route('/background_color', methods=['POST'])
def background_color():
    try:
        data = request.get_json()['data']['attributes']
        bg_color = data['bg_color']
    except Exception:
        return ErrorResponse(
            PayloadNotFound().message,
            422,
            {'Content-Type': 'application/json'}).respond()

    svg2png = SVG2PNG()

    bg_color = '#' + str(bg_color)
    user_defined_path = svg2png.do_svg2png(1, bg_color)
    with open(user_defined_path, "rb") as image_file:
        image_data = base64.b64encode(image_file.read())
    os.remove(user_defined_path)

    try:
        imageName = saveToImage(imageFile=image_data.decode('utf-8'),
                                extension=".png")
    except Exception:
        return ErrorResponse(
            ImageNotFound().message,
            422,
            {'Content-Type': 'application/json'}).respond()

    uid = data['uid']
    fetch_user = User.getUser(user_id=uid)
    if fetch_user is None:
        return ErrorResponse(
            UserNotFound(uid).message,
            422,
            {'Content-Type': 'application/json'}).respond()

    file_upload = File(filename=imageName, filetype='image',
                       uploader=fetch_user)
    file_upload.save_to_db()
    return jsonify(ColorImageSchema().dump(file_upload).data)
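A hedged request sketch for this route is given below; the /api/upload mount point is inferred from the schema's self_url in the next step, and the color and uid values are placeholders.

$ ~ curl -X POST http://localhost:5000/api/upload/background_color \
    -H "Content-Type: application/json" \
    -d '{"data": {"attributes": {"bg_color": "ff0000", "uid": "<user-id>"}}}'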

Step 2 : Adding a schema for background-color to the backend

To read and write values from and to the database, we need a layer of abstraction, so we use schemas created using marshmallow_jsonapi:

class ColorImageSchema(Schema):
    class Meta:
        type_ = 'bg-color'
        self_view = 'fileUploader.background_color'
        kwargs = {'id': '<id>'}

    id = fields.Str(required=True, dump_only=True)
    filename = fields.Str(required=True)
    filetype = fields.Str(required=True)
    user_id = fields.Relationship(
        self_url='/api/upload/background_color',
        self_url_kwargs={'file_id': '<id>'},
        related_url='/user/register',
        related_url_kwargs={'id': '<id>'},
        include_resource_linkage=True,
        type_='User'
    )

Now that we have our schema and route done, we can move forward with the logic of making badges.

Step 3 : Converting the SVG to PNG and adding custom color

Now we have the user-defined color for the badge background, but we still need a way to apply it to the badges. This is done using the code below.

def do_svg2png(self, opacity, fill):
    """
    Module to convert svg to png
    :param `opacity` - Opacity for the output
    :param `fill` - Background fill for the output
    """
    filename = os.path.join(self.APP_ROOT, 'svg', 'user_defined.svg')
    tree = parse(open(filename, 'r'))
    element = tree.getroot()
    # changing style using XPath
    path = element.xpath('//*[@id="rect4504"]')[0]
    style_detail = path.get("style")
    style_detail = style_detail.split(";")
    style_detail[0] = "opacity:" + str(opacity)
    style_detail[1] = "fill:" + str(fill)
    style_detail = ';'.join(style_detail)
    path.set("style", style_detail)
    # selecting the text element using XPath
    path = element.xpath('//*[@id="tspan932"]')[0]
    # saving the modified tree back to the original SVG file
    etree.ElementTree(element).write(filename, pretty_print=True)
    png_name = os.path.join(self.APP_ROOT, 'static', 'uploads', 'image',
                            str(uuid.uuid4())) + ".png"
    svg2png(url=filename, write_to=png_name)
    return png_name

Finally, we have our badges generating with a custom colored background.


Open Event Server – Export Speakers as CSV File

FOSSASIA's Open Event Server is the REST API backend for the event management platform, Open Event. Here, event organizers can create their events, add tickets and manage all aspects from the schedule to the speakers. Once an organizer makes an event public, others can view it and buy tickets if interested.

The organizer can see all the speakers in a very detailed view in the event management dashboard, including each speaker's status. The possible statuses are pending, accepted and rejected. The organizer can also take actions such as editing or viewing speakers.

If the organizer wants to download the list of all the speakers as a CSV file, he or she can do it very easily by simply clicking on the Export As CSV button in the top right-hand corner.

Let us see how this is done on the server.

Server side – generating the Speakers CSV file

Here we will be using the csv package provided by Python for writing the CSV file.

import csv
  • We define a method export_speakers_csv which takes the speakers to be exported as a CSV file as the argument.
  • Next, we define the headers of the CSV file. It is the first row of the CSV file.
def export_speakers_csv(speakers):
   headers = ['Speaker Name', 'Speaker Email', 'Speaker Session(s)',
              'Speaker Mobile', 'Speaker Bio', 'Speaker Organisation', 'Speaker Position']
  • A list is defined called rows. This contains the rows of the CSV file. As mentioned earlier, headers is the first row.
rows = [headers]
  • We iterate over each speaker in speakers and form a row for that speaker by separating the values of each of the columns by a comma. Here, every row is one speaker.
  • As a speaker can have multiple sessions, we iterate over each session for that particular speaker and append each session to a string, using ';' as a delimiter. This string is then added to the row. We also include the state of each session – accepted, rejected or confirmed.
  • The newly formed row is added to the rows list.
for speaker in speakers:
   column = [speaker.name if speaker.name else '', speaker.email if speaker.email else '']
   if speaker.sessions:
       session_details = ''
       for session in speaker.sessions:
           if not session.deleted_at:
               session_details += session.title + ' (' + session.state + '); '
       column.append(session_details[:-2])
   else:
       column.append('')
   column.append(speaker.mobile if speaker.mobile else '')
   column.append(speaker.short_biography if speaker.short_biography else '')
   column.append(speaker.organisation if speaker.organisation else '')
   column.append(speaker.position if speaker.position else '')
   rows.append(column)
  • rows contains the contents of the CSV file and hence it is returned.
return rows
  • We iterate over each item of rows and write it to the CSV file using the methods provided by the csv package.
with open(file_path, "w") as temp_file:
   writer = csv.writer(temp_file)
   from app.api.helpers.csv_jobs_util import export_speakers_csv
   content = export_speakers_csv(speakers)
   for row in content:
       writer.writerow(row)
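For illustration, a resulting CSV could look like the following (the speaker shown is hypothetical):

Speaker Name,Speaker Email,Speaker Session(s),Speaker Mobile,Speaker Bio,Speaker Organisation,Speaker Position
Jane Doe,jane@example.com,Intro to Flask (accepted); Scaling APIs (confirmed),+65 1234 5678,Backend developer,FOSSASIA,Engineer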

Obtaining the Speakers CSV file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/speakers/csv

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the speakers of the event as a CSV file. It returns the URL of the task to get the status of the export task. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL and check the status of his/her Celery task. If the task completed successfully he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server –

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.


Open Event Server – Export Sessions as CSV File

FOSSASIA's Open Event Server is the REST API backend for the event management platform, Open Event. Here, event organizers can create their events, add tickets and manage all aspects from the schedule to the speakers. Once an organizer makes an event public, others can view it and buy tickets if interested.

The organizer can see all the sessions in a very detailed view in the event management dashboard, including each session's status. The possible statuses are pending, accepted, confirmed and rejected. The organizer can also take actions such as accepting or rejecting the sessions.

If the organizer wants to download the list of all the sessions as a CSV file, he or she can do it very easily by simply clicking on the Export As CSV button in the top right-hand corner.

Let us see how this is done on the server.

Server side – generating the Sessions CSV file

Here we will be using the csv package provided by Python for writing the CSV file.

import csv
  • We define a method export_sessions_csv which takes the sessions to be exported as a CSV file as the argument.
  • Next, we define the headers of the CSV file. It is the first row of the CSV file.
def export_sessions_csv(sessions):
   headers = ['Session Title', 'Session Speakers',
              'Session Track', 'Session Abstract', 'Created At', 'Email Sent']
  • A list is defined called rows. This contains the rows of the CSV file. As mentioned earlier, headers is the first row.
rows = [headers]
  • We iterate over each session in sessions and form a row for that session by separating the values of each of the columns by a comma. Here, every row is one session.
  • As a session can have multiple speakers, we iterate over each speaker for that particular session and append each speaker to a string, using ';' as a delimiter. This string is then added to the row.
  • The newly formed row is added to the rows list.
for session in sessions:
   if not session.deleted_at:
       column = [session.title + ' (' + session.state + ')' if session.title else '']
       if session.speakers:
           in_session = ''
           for speaker in session.speakers:
               if speaker.name:
                   in_session += (speaker.name + '; ')
           column.append(in_session[:-2])
       else:
           column.append('')
       column.append(session.track.name if session.track and session.track.name else '')
       column.append(strip_tags(session.short_abstract) if session.short_abstract else '')
       column.append(session.created_at if session.created_at else '')
       column.append('Yes' if session.is_mail_sent else 'No')
       rows.append(column)
  • rows contains the contents of the CSV file and hence it is returned.
return rows
  • We iterate over each item of rows and write it to the CSV file using the methods provided by the csv package.
writer = csv.writer(temp_file)
from app.api.helpers.csv_jobs_util import export_sessions_csv
content = export_sessions_csv(sessions)
for row in content:
   writer.writerow(row)

Obtaining the Sessions CSV file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/sessions/csv

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the sessions of the event as a CSV file. It returns the URL of the task to get the status of the export task. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL and check the status of his/her Celery task. If the task completed successfully he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server –

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.


Implementing Endpoint to Resend Email Verification

Earlier, when a user registered via Open Event Frontend, s/he received a verification link via email to confirm their account. However, this was not enough in the long term. If the confirmation link expired, or if for some reason the verification mail got deleted on the user's side, there was no functionality to resend the verification email, which prevented the user from getting fully registered. The frontend already showed the option to resend the verification link, but there was no support from the server to do so yet.

So it was decided that a separate endpoint should be implemented to allow re-sending the verification link to a user. /resend-verification-email fit this action well, so we went with it and created a route in the `auth.py` file, which was the appropriate place for this feature to reside. The first step was to do the necessary imports and then the definition:

from app.api.helpers.mail import send_email_confirmation
from app.models.mail import USER_REGISTER_WITH_PASSWORD
...
...
@auth_routes.route('/resend-verification-email', methods=['POST'])
def resend_verification_email():
...

Now we safely fetch the email mentioned in the request and then search the database for the user corresponding to that email:

def resend_verification_email():
    try:
        email = request.json['data']['email']
    except TypeError:
        return BadRequestError({'source': ''}, 'Bad Request Error').respond()

    try:
        user = User.query.filter_by(email=email).one()
    except NoResultFound:
        return UnprocessableEntityError(
            {'source': ''}, 'User with email: ' + email + ' not found.').respond()
    else:
        ...

Once a user has been identified in the database, we proceed further and create an essentially unique hash for the user verification. This hash is in turn used to generate a verification link that is then ready to be sent via email to the user:

else:
    serializer = get_serializer()
    hash_ = str(base64.b64encode(str(serializer.dumps(
        [user.email, str_generator()])).encode()), 'utf-8')
    link = make_frontend_url(
        '/email/verify'.format(id=user.id), {'token': hash_})

Finally, the email is sent:

    send_email_with_action(
        user, USER_REGISTER_WITH_PASSWORD,
        app_name=get_settings()['app_name'], email=user.email)
    if not send_email_confirmation(user.email, link):
        return make_response(jsonify(message="Some error occurred"), 500)
    return make_response(jsonify(message="Verification email resent"), 200)

But this was not enough. When the endpoint was tested, it was found that actual emails were not being delivered, even after correctly configuring the email settings locally. After a bit of debugging, it turned out that the settings, which used Sendgrid to send emails, pointed at a deprecated Sendgrid API endpoint. A separate email function is used to send emails via Sendgrid, and it contained an old endpoint that is no longer recommended by Sendgrid:

@celery.task(name='send.email.post')
def send_email_task(payload, headers):
   requests.post(
       "https://api.sendgrid.com/api/mail.send.json",
       data=payload,
       headers=headers
   )

The new endpoint, as per Sendgrid’s documentation, is:

https://api.sendgrid.com/v3/mail/send

But this was not the only change required. Sendgrid had also modified the structure of requests they accepted, and the new structure was different from the existing one that was used in the server. Following is the new structure:

'{"personalizations": [{"to": [{"email": "example@example.com"}]}],"from": {"email": "example@example.com"},"subject": "Hello, World!","content": [{"type": "text/plain", "value": "Heya!"}]}'

The header structure was also changed, so the structure in the server was also updated to

headers = {
    "Authorization": ("Bearer " + key),
    "Content-Type": "application/json"
}

The Sendgrid function (which is executed as a Celery task) was modified as follows, to incorporate the changes in the API endpoint and structure:

import json
...
@celery.task(name='send.email.post')
def send_email_task(payload, headers):
    data = {"personalizations": [{"to": []}]}
    data["personalizations"][0]["to"].append({"email": payload["to"]})
    data["from"] = {"email": payload["from"]}
    data["subject"] = payload["subject"]
    data["content"] = [{"type": "text/html", "value": payload["html"]}]
    requests.post(
        "https://api.sendgrid.com/v3/mail/send",
        data=json.dumps(data),
        headers=headers,
        verify=False  # doesn't work with verification in celery context
    )

 

As can be seen, there is a bug that doesn't allow SSL verification within the Celery context, although the verification succeeds when the same functionality is executed outside Celery. But now email sending via Sendgrid actually works, which makes our verification resend endpoint functional. The email is received successfully by the recipient.

Thus, a working email verification endpoint is implemented, which can be easily integrated in the frontend.
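For reference, here is a hedged sketch of the request the frontend would make; the /v1/auth prefix is an assumption about where the auth routes are mounted, and the email is a placeholder.

curl -X POST http://localhost:5000/v1/auth/resend-verification-email \
  -H "Content-Type: application/json" \
  -d '{"data": {"email": "user@example.com"}}'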



Adding Check-in Attributes to Tickets

Recently, the Open Event Orga App team decided that the event ticket API response from Open Event Server should have two additional attributes specifying event check-in access. At first sight it seemed that adding these options would only require changes in the orga app, but it turned out that the entire ticket API on the server needed this addition.

Implementing these attributes turned out to be quite straightforward. Specifically, the fields to be added were the booleans is_checkin_restricted and auto_checkin_enabled. By default, check-in is restricted and not automatic, so the default values for these fields were chosen to be True and False respectively. To add them, the ticket model file was changed first, with the addition of these two columns:

class Ticket(SoftDeletionModel):
    ...
    is_checkin_restricted = db.Column(db.Boolean)  # <--
    auto_checkin_enabled = db.Column(db.Boolean)  # <--
    ...
    def __init__(self,
        name=None,
        event_id=None,
        ...
        is_checkin_restricted=True,
        auto_checkin_enabled=False):

        self.name = name
        ...
        self.is_checkin_restricted = is_checkin_restricted
        self.auto_checkin_enabled = auto_checkin_enabled
        ...

Since the ticket database model was updated, a migration had to be performed. The following shell commands (run at the open event server project root) generated a migration file and upgraded the database:

$ python manage.py db migrate
$ python manage.py db upgrade

Here’s the generated migration file:

from alembic import op
import sqlalchemy as sa

revision = '6440077182f0'
down_revision = 'eaa029ebb260'

def upgrade():
    op.add_column('tickets', sa.Column('auto_checkin_enabled', sa.Boolean(), nullable=True))
    op.add_column('tickets', sa.Column('is_checkin_restricted', sa.Boolean(), nullable=True))

def downgrade():
    op.drop_column('tickets', 'is_checkin_restricted')
    op.drop_column('tickets', 'auto_checkin_enabled')

The next code change was required in the ticket API schema. The change was essentially the same as the one added in the model file – just these 2 new fields were added:

class TicketSchemaPublic(SoftDeletionSchema):
    ...
    id = fields.Str(dump_only=True)
    name = fields.Str(required=True)
    ...
    is_checkin_restricted = fields.Boolean(default=True)  # <--
    auto_checkin_enabled = fields.Boolean(default=False)  # <--
    event = Relationship(attribute='event',
                         self_view='v1.ticket_event',
                         self_view_kwargs={'id': '<id>'},
                         related_view='v1.event_detail',
                         related_view_kwargs={'ticket_id': '<id>'},
                         schema='EventSchemaPublic',
                         type_='event')
    ...

Now all that remained were changes in the API documentation, which were made accordingly. This completed the addition of the two check-in attributes to the ticket API, which eventually made their way into the orga app. They can be requested as usual by the frontend and user app as well.
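For illustration, a trimmed ticket resource from the updated API might carry the new attributes as below; the id, name and dasherized attribute keys are assumptions following the JSON API style of the server's responses.

{
  "data": {
    "type": "ticket",
    "id": "1",
    "attributes": {
      "name": "Standard Ticket",
      "is-checkin-restricted": true,
      "auto-checkin-enabled": false
    }
  }
}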

