Adding Online Payment Support in Open Event Frontend via PayPal

Open Event Frontend includes a ticketing system that supports both paid and free tickets. To buy a paid ticket, Open Event provides several options such as debit card, credit card, cheque, bank transfer and on-site payments. To support debit and credit card payments, Open Event uses PayPal Checkout as one of the options. On the PayPal checkout screen, users can enter their card details and pay for their tickets, or they can pay using their PayPal wallet balance.

Given below are the steps that are followed to successfully charge a user for a ticket using his/her card.

  • We create an application on the PayPal developer dashboard to receive a client ID and secret key.
  • We set these keys in the admin dashboard of Open Event, and during checkout we use them to render the checkout screen.
  • After the checkout button is clicked, a request is sent to the create-paypal-payment endpoint of Open Event Server to create a PayPal token, which is used in the checkout procedure.
  • After the user’s verification, PayPal generates a payment ID, which is used by Open Event Frontend to charge the user for the stipulated amount.
  • We send this ID to Open Event Server, which processes it and charges the user.
  • We get an error or success message from Open Event Server as per the outcome of the process.

To render the PayPal checkout elements we use the paypal-checkout library provided by npm. The PayPal button is rendered using the Button.render method of the library. A code snippet is given below.

// app/components/paypal-button.js

paypal.Button.render({
   env: 'sandbox', // or 'production' for live transactions

   commit: true, // displays a 'Pay Now' button on the checkout screen

   style: {
     label : 'pay',
     size  : 'medium', // tiny, small, medium
     color : 'gold',   // gold, blue, silver
     shape : 'pill'    // pill, rect
   },

   payment() {
     // used to obtain the PayPal token that initializes the payment process
   },

   onAuthorize(data) {
     // called once the user authorizes the payment
   }

 }, this.elementId);

 

After the button is rendered, the next step is to obtain a payment token from the create-paypal-payment endpoint of Open Event Server. For this we use the payment() callback of paypal-checkout. The code snippet for the payment callback method is given below:

// app/components/paypal-button.js

let createPayload = {
      'data': {
        'attributes': {
          'return-url' : `${window.location.origin}/orders/${order.identifier}/placed`,
          'cancel-url' : `${window.location.origin}/orders/${order.identifier}/placed`
        },
        'type': 'paypal-payment'
      }
    };
paypal.Button.render({
     // Button attributes

      payment() {
        return loader.post(`orders/${order.identifier}/create-paypal-payment`, createPayload)
          .then(res => {
            return res.payment_id;
          });
      },

      onAuthorize(data) {
        // this callback will be for authorizing the payments
      }

    }, this.elementId);

 

After getting the token, the payment screen is initialized and the user is asked to enter his/her credentials. This process is handled by PayPal’s servers. After the user verifies the payment, PayPal generates a paymentID and a payerID and sends them back to Open Event. After the payment authorization, the onAuthorize() method is called and the payment is further processed in this callback. The payment ID and payer ID received from PayPal are sent to the charge endpoint of Open Event Server to charge the user. After receiving a success or failure message from PayPal, a proper message is displayed to the user and the order is confirmed or cancelled respectively. The code snippet for onAuthorize is given below:

// app/components/paypal-button.js

onAuthorize(data) {
        // this callback will be for authorizing the payments
        let chargePayload = {
          'data': {
            'attributes': {
              'stripe'            : null,
              'paypal_payer_id'   : data.payerID,
              'paypal_payment_id' : data.paymentID
            },
            'type': 'charge'
          }
        };
        let config = {
          skipDataTransform: true
        };
        chargePayload = JSON.stringify(chargePayload);
        return loader.post(`orders/${order.identifier}/charge`, chargePayload, config)
          .then(charge => {
            if (charge.data.attributes.status) {
              notify.success(charge.data.attributes.message);
              router.transitionTo('orders.view', order.identifier);
            } else {
              notify.error(charge.data.attributes.message);
            }
          });
      }

 

Full code can be seen here.

In this way we achieve the functionality of adding PayPal payment support in Open Event Frontend.


Countrywise Usage Analytics of a Skill in SUSI.AI

Statistics regarding the countries in which a skill is being used are quite important. They help in updating the skill to support the native languages of those countries. SUSI.AI must be able to understand as well as reply in its user’s language. This blog explains mainly the server-side and some client-side (web client) implementation of country-wise skill usage statistics.

Fetching the user’s location on the web client

  1. Add a function in chat.susi.ai/src/actions/API.actions.js to fetch the user’s location. The function calls the freegeoip.net API, which returns the client’s location based on its IP address; the country name and code from this response are what country-wise usage analytics requires.

export function getLocation(){
  $.ajax({
    url: 'https://cors-anywhere.herokuapp.com/http://freegeoip.net/json/',
    success: function (response) {
      // _Location is a module-level variable that caches the user's location
      _Location = {
        lat: response.latitude,
        lng: response.longitude,
        countryCode: response.country_code,
        countryName: response.country_name
      };
    },
  });
}
  2. Send the location parameters along with the query while fetching a reply from the SUSI server through the chat.json API. The parameters are country_name and country_code.

if(_Location){
	url += '&latitude='+_Location.lat+'&longitude='+_Location.lng+'&country_code='+_Location.countryCode+'&country_name='+_Location.countryName;
}
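
For instance, the final query URL could look like this (the endpoint path and parameter values here are illustrative):

https://api.susi.ai/susi/chat.json?q=hello&latitude=1.35&longitude=103.81&country_code=SG&country_name=Singapore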

Storage of Country Wise Skill Usage Data on SUSI Server

  1. Create a countryWiseSkillUsage.json file to store the country-wise skill usage stats and make a JSONTray object for it in the src/ai/susi/DAO.java file. The JSON file contains the country name, country code and the usage count for that country.
  2. Modify the src/ai/susi/server/api/susi/SusiService.java file to fetch country_name and country_code from the query parameters and pass them to the SusiCognition constructor.

String countryName = post.get("country_name", "");
String countryCode = post.get("country_code", "");
...

SusiCognition cognition = new SusiCognition(q, timezoneOffset, latitude, longitude, countryCode, countryName, language, count, user.getIdentity(), minds.toArray(new SusiMind[minds.size()]));
  3. Modify the src/ai/susi/mind/SusiCognition.java file to accept countryCode and countryName in the constructor parameters. Check which skill is being used for the current response and update that skill’s usage stats for the country in countryWiseSkillUsage.json by calling the updateCountryWiseUsageData() function.

if (!countryCode.equals("") && !countryName.equals("")) {
    List<String> skills = dispute.get(0).getSkills();
    for (String skill : skills) {
        try {
            updateCountryWiseUsageData(skill, countryCode, countryName);
        }
        catch (Exception e) {
            e.printStackTrace();
        }
    }
}

The updateCountryWiseUsageData() function accepts the skill path, country name and country code. It parses the skill path to get the skill metadata, such as its model name, group name and language. The function then checks whether the country already exists in the JSON file. If it does, the usage count is incremented by 1; otherwise an entry for the skill is created in the JSON file, initialized with the current country name and a usage count of 1 (a sketch of this branch is given after the loop below).

for (int i = 0; i < countryWiseUsageData.length(); i++) {
  countryUsage = countryWiseUsageData.getJSONObject(i);
  if (countryUsage.get("country_code").equals(countryCode)) {
    countryUsage.put("count", countryUsage.getInt("count")+1);
    countryWiseUsageData.put(i,countryUsage);
  }
}
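
The branch that creates a fresh entry when the country is not yet present is analogous. A minimal sketch, assuming a countryFound boolean that the loop above sets to true when a matching entry is found:

if (!countryFound) {
    // first usage of this skill from this country
    JSONObject newCountryUsage = new JSONObject();
    newCountryUsage.put("country_code", countryCode);
    newCountryUsage.put("country_name", countryName);
    newCountryUsage.put("count", 1);
    countryWiseUsageData.put(newCountryUsage);
}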

API to access the Country Wise Skill Usage Data

  1. Create a GetCountryWiseSkillUsageService.java file to return the usage stats stored in countryWiseSkillUsage.json.

public ServiceResponse serviceImpl(Query call, HttpServletResponse response, Authorization rights, final JsonObjectWithDefault permissions) {

  ...  // Fetch the query parameters (model, group, language, skill) and build the result object
  JSONArray countryWiseSkillUsage = languageName.getJSONArray(skill_name);
  return new ServiceResponse(result);
}
  2. Add the API file to src/ai/susi/server/api/susi/SusiServer.java.

services = new Class[]{
	...
	//Skill usage data
	GetCountryWiseSkillUsageService.class
	...
}

 

Endpoint: /cms/getCountryWiseSkillUsage.json

Parameters

  • model
  • group
  • language
  • skill

Sample query: /cms/getCountryWiseSkillUsage.json?model=general&group=Knowledge&language=en&skill=aboutsusi

Sample response:

{  
   "skill_usage":[  
      {  
         "country_code":"MYS",
         "country_name":"Malaysia",
         "count":1
      },
      {  
         "country_code":"MYS",
         "country_name":"Malaysia",
         "count":1
      }
   ],
   "session":{  
      "identity":{  
         "type":"host",
         "name":"162.158.154.147_81c88a10",
         "anonymous":true
      }
   },
   "skill_name":"ceo",
   "accepted":true,
   "message":"Country wise skill usage fetched"
}


Open Event Server – Export Speakers as CSV File

FOSSASIA's Open Event Server is the REST API backend for the event management platform, Open Event. Here, event organizers can create their events, add tickets for them and manage all aspects from the schedule to the speakers. Also, once organizers make their events public, others can view them and buy tickets if interested.

The organizer can see all the speakers in a detailed view in the event management dashboard, including the status of each speaker. The possible statuses are pending, accepted and rejected. The organizer can also take actions such as editing or viewing speakers.

If the organizer wants to download the list of all the speakers as a CSV file, he or she can do it very easily by simply clicking on the Export As CSV button in the top right-hand corner.

Let us see how this is done on the server.

Server side – generating the Speakers CSV file

Here we will be using the csv package provided by Python for writing the CSV file.

import csv
  • We define a method export_speakers_csv which takes the speakers to be exported as a CSV file as the argument.
  • Next, we define the headers of the CSV file. It is the first row of the CSV file.
def export_speakers_csv(speakers):
   headers = ['Speaker Name', 'Speaker Email', 'Speaker Session(s)',
              'Speaker Mobile', 'Speaker Bio', 'Speaker Organisation', 'Speaker Position']
  • A list called rows is defined. It contains the rows of the CSV file. As mentioned earlier, headers is the first row.
rows = [headers]
  • We iterate over each speaker in speakers and form a row for that speaker by separating the values of each of the columns by a comma. Here, every row is one speaker.
  • As a speaker can have multiple sessions, we iterate over each session of that particular speaker and append each session to a string, using ';' as a delimiter. This string is then added to the row. We also include the state of each session (accepted, rejected or confirmed).
  • The newly formed row is added to the rows list.
for speaker in speakers:
   column = [speaker.name if speaker.name else '', speaker.email if speaker.email else '']
   if speaker.sessions:
       session_details = ''
       for session in speaker.sessions:
           if not session.deleted_at:
               session_details += session.title + ' (' + session.state + '); '
       column.append(session_details[:-2])
   else:
       column.append('')
   column.append(speaker.mobile if speaker.mobile else '')
   column.append(speaker.short_biography if speaker.short_biography else '')
   column.append(speaker.organisation if speaker.organisation else '')
   column.append(speaker.position if speaker.position else '')
   rows.append(column)
  • rows contains the contents of the CSV file and hence it is returned.
return rows
  • We iterate over each item of rows and write it to the CSV file using the methods provided by the csv package.
from app.api.helpers.csv_jobs_util import export_speakers_csv

with open(file_path, "w") as temp_file:
   writer = csv.writer(temp_file)
   content = export_speakers_csv(speakers)
   for row in content:
       writer.writerow(row)
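
For illustration, a generated file could look like this (the speaker data here is hypothetical):

Speaker Name,Speaker Email,Speaker Session(s),Speaker Mobile,Speaker Bio,Speaker Organisation,Speaker Position
Jane Doe,jane@example.com,Opening Keynote (accepted); Community Panel (confirmed),9876543210,Open source enthusiast,FOSSASIA,Developer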

Obtaining the Speakers CSV file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/speakers/csv

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the speakers of the event as a CSV file. It returns the URL of the task to get the status of the export task. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL and check the status of his/her Celery task. If the task completed successfully he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server –

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.


Open Event Server – Export Sessions as CSV File

FOSSASIA's Open Event Server is the REST API backend for the event management platform, Open Event. Here, event organizers can create their events, add tickets for them and manage all aspects from the schedule to the speakers. Also, once organizers make their events public, others can view them and buy tickets if interested.

The organizer can see all the sessions in a detailed view in the event management dashboard, including the status of each session. The possible statuses are pending, accepted, confirmed and rejected. The organizer can take actions such as accepting or rejecting the sessions.

If the organizer wants to download the list of all the sessions as a CSV file, he or she can do it very easily by simply clicking on the Export As CSV button in the top right-hand corner.

Let us see how this is done on the server.

Server side – generating the Sessions CSV file

Here we will be using the csv package provided by Python for writing the CSV file.

import csv
  • We define a method export_sessions_csv which takes the sessions to be exported as a CSV file as the argument.
  • Next, we define the headers of the CSV file. It is the first row of the CSV file.
def export_sessions_csv(sessions):
   headers = ['Session Title', 'Session Speakers',
              'Session Track', 'Session Abstract', 'Created At', 'Email Sent']
  • A list called rows is defined. It contains the rows of the CSV file. As mentioned earlier, headers is the first row.
rows = [headers]
  • We iterate over each session in sessions and form a row for that session by separating the values of each of the columns by a comma. Here, every row is one session.
  • As a session can have multiple speakers, we iterate over each speaker of that particular session and append each speaker to a string, using ';' as a delimiter. This string is then added to the row.
  • The newly formed row is added to the rows list.
for session in sessions:
   if not session.deleted_at:
       column = [session.title + ' (' + session.state + ')' if session.title else '']
       if session.speakers:
           in_session = ''
           for speaker in session.speakers:
               if speaker.name:
                   in_session += (speaker.name + '; ')
           column.append(in_session[:-2])
       else:
           column.append('')
       column.append(session.track.name if session.track and session.track.name else '')
       column.append(strip_tags(session.short_abstract) if session.short_abstract else '')
       column.append(session.created_at if session.created_at else '')
       column.append('Yes' if session.is_mail_sent else 'No')
       rows.append(column)
  • rows contains the contents of the CSV file and hence it is returned.
return rows
  • We iterate over each item of rows and write it to the CSV file using the methods provided by the csv package.
from app.api.helpers.csv_jobs_util import export_sessions_csv

with open(file_path, "w") as temp_file:
   writer = csv.writer(temp_file)
   content = export_sessions_csv(sessions)
   for row in content:
       writer.writerow(row)

Obtaining the Sessions CSV file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/sessions/csv

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the sessions of the event as a CSV file. It returns the URL of the task to get the status of the export task. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL and check the status of his/her Celery task. If the task completed successfully he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server –

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.


Implementing Endpoint to Resend Email Verification

Earlier, when a user registered via Open Event Frontend, s/he received a verification link via email to confirm the account. However, this was not enough in the long term. If the confirmation link expired, or if for some reason the verification mail got deleted on the user's side, there was no way to resend the verification email, which prevented the user from getting fully registered. Although the frontend already showed the option to resend the verification link, there was no support from the server for it yet.

So it was decided that a separate endpoint should be implemented to allow re-sending the verification link to a user. /resend-verification-email was an endpoint that would fit this action, so we went with it and created a route in the `auth.py` file, which was the appropriate place for this feature to reside. The first step was to do the necessary imports and the definition:

from app.api.helpers.mail import send_email_confirmation
from app.models.mail import USER_REGISTER_WITH_PASSWORD
...
...
@auth_routes.route('/resend-verification-email', methods=['POST'])
def resend_verification_email():
...

Now we safely fetch the email mentioned in the request and then search the database for the user corresponding to that email:

def resend_verification_email():
    try:
        email = request.json['data']['email']
    except TypeError:
        return BadRequestError({'source': ''}, 'Bad Request Error').respond()

    try:
        user = User.query.filter_by(email=email).one()
    except NoResultFound:
        return UnprocessableEntityError(
            {'source': ''}, 'User with email: ' + email + ' not found.').respond()
    else:
        ...
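
From the request.json access above, the endpoint expects a payload of the following shape (the email value here is only illustrative):

{
  "data": {
    "email": "user@example.com"
  }
}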

Once a user has been identified in the database, we proceed further and create an essentially unique hash for the user verification. This hash is in turn used to generate a verification link that is then ready to be sent via email to the user:

else:
    serializer = get_serializer()
    hash_ = str(base64.b64encode(str(serializer.dumps(
        [user.email, str_generator()])).encode()), 'utf-8')
    link = make_frontend_url(
        '/email/verify'.format(id=user.id), {'token': hash_})

Finally, the email is sent:

send_email_with_action(
    user, USER_REGISTER_WITH_PASSWORD,
    app_name=get_settings()['app_name'], email=user.email)
if not send_email_confirmation(user.email, link):
    return make_response(jsonify(message="Some error occurred"), 500)
return make_response(jsonify(message="Verification email resent"), 200)

But this was not enough. When the endpoint was tested, it was found that actual emails were not being delivered, even after correctly configuring the email settings locally. After a bit of debugging, it turned out that the settings, which used Sendgrid to send emails, pointed to a deprecated Sendgrid API endpoint. A separate email function is used to send emails via Sendgrid, and it contained an old endpoint that was no longer recommended by Sendgrid:

@celery.task(name='send.email.post')
def send_email_task(payload, headers):
   requests.post(
       "https://api.sendgrid.com/api/mail.send.json",
       data=payload,
       headers=headers
   )

The new endpoint, as per Sendgrid’s documentation, is:

https://api.sendgrid.com/v3/mail/send

But this was not the only change required. Sendgrid had also modified the structure of requests they accepted, and the new structure was different from the existing one that was used in the server. Following is the new structure:

'{"personalizations": [{"to": [{"email": "example@example.com"}]}],"from": {"email": "example@example.com"},"subject": "Hello, World!","content": [{"type": "text/plain", "value": "Heya!"}]}'

The header structure also changed, so the headers in the server were updated accordingly:

headers = {
    "Authorization": ("Bearer " + key),
    "Content-Type": "application/json"
}

The Sendgrid function (which is executed as a Celery task) was modified as follows, to incorporate the changes in the API endpoint and structure:

import json
...
@celery.task(name='send.email.post')
def send_email_task(payload, headers):
    data = {"personalizations": [{"to": []}]}
    data["personalizations"][0]["to"].append({"email": payload["to"]})
    data["from"] = {"email": payload["from"]}
    data["subject"] = payload["subject"]
    data["content"] = [{"type": "text/html", "value": payload["html"]}]
    requests.post(
        "https://api.sendgrid.com/v3/mail/send",
        data=json.dumps(data),
        headers=headers,
        verify=False  # doesn't work with verification in celery context
    )

 

As can be seen, there is a bug that doesn't allow SSL verification within the Celery context, although verification succeeds when the functionality is executed independently of Celery. But now email sending via Sendgrid actually works, which makes our verification resend endpoint functional, and the email is received successfully by the recipient.

Thus, a working email verification endpoint is implemented, which can be easily integrated in the frontend.



Adding Port Specification for Static File URLs in Open Event Server

Until now, static files stored locally on Open Event Server did not have a port specification in their URLs. This opened the door for problems while consuming local APIs and would have created inconsistencies if two server processes were served on the same machine at different ports. In this blog post, I will explain my approach towards solving this problem and describe code snippets that demonstrate the changes I made in the Open Event Server codebase.

The first part of this process involved finding the source of the bug. For this, my open-source integrated development environment, Microsoft Visual Studio Code, turned out to be especially useful, as it allowed me to jump from function calls to function definitions quickly.

I started at events.py and jumped all the way to storage.py, where I finally found the source of this bug, in the upload_local() function:

def upload_local(uploaded_file, key, **kwargs):
    """
    Uploads file locally. Base dir - static/media/
    """
    filename = secure_filename(uploaded_file.filename)
    file_relative_path = 'static/media/' + key + '/' + generate_hash(key) + '/' + filename
    file_path = app.config['BASE_DIR'] + '/' + file_relative_path
    dir_path = file_path.rsplit('/', 1)[0]
    # delete current
    try:
        rmtree(dir_path)
    except OSError:
        pass
    # create dirs
    if not os.path.isdir(dir_path):
        os.makedirs(dir_path)
        uploaded_file.save(file_path)
        file_relative_path = '/' + file_relative_path
    if get_settings()['static_domain']:
        return get_settings()['static_domain'] + \
            file_relative_path.replace('/static', '')
    url = urlparse(request.url)
    return url.scheme + '://' + url.hostname + file_relative_path

Look closely at the return statement:

return url.scheme + '://' + url.hostname + file_relative_path

Bingo! This is the source of our bug. A straightforward solution is to simply concatenate the port number in between, but that would make this one-liner clumsy: unreadable and un-pythonic. We therefore use Python string formatting:

return '{scheme}://{hostname}:{port}{file_relative_path}'.format(
    scheme=url.scheme, hostname=url.hostname, port=url.port,
    file_relative_path=file_relative_path)

But this statement isn't perfect. There's an edge case that might give an unexpected URL. If the port isn't originally specified, Python's string formatting will substitute url.port with None. This results in a URL like http://localhost:None/some/file_path.jpg, which is obviously something we don't desire. We therefore append a call to Python's string replace() method: replace(':None', '').
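
A quick illustration of the edge case (the URL value here is hypothetical):

from urllib.parse import urlparse

url = urlparse('http://localhost/some/file_path.jpg')  # no explicit port
print('{scheme}://{hostname}:{port}'.format(
    scheme=url.scheme, hostname=url.hostname, port=url.port))
# prints http://localhost:None, hence the trailing .replace(':None', '')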

The resulting return statement now looks like the following:

return '{scheme}://{hostname}:{port}{file_relative_path}'.format(
    scheme=url.scheme, hostname=url.hostname, port=url.port,
    file_relative_path=file_relative_path).replace(':None', '')

This should fix the problem. But that's not enough: we also need to ensure that the project adapts well to the change we made. We check this by running the project tests locally:

$ nosetests tests/unittests

Unfortunately, the tests fail with the following traceback:

======================================================================
ERROR: test_create_save_image_sizes (tests.unittests.api.helpers.test_files.TestFilesHelperValidation)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/open-event-server/tests/unittests/api/helpers/test_files.py", line 138, in test_create_save_image_sizes
resized_width_large, _ = self.getsizes(resized_image_file_large)
File "/open-event-server/tests/unittests/api/helpers/test_files.py", line 22, in getsizes
im = Image.open(file)
File "/usr/local/lib/python3.6/site-packages/PIL/Image.py", line 2312, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/open-event-server:5000/static/media/events/53b8f572-5408-40bf-af97-6e9b3922631d/large/UFNNeW5FRF/5980ede1-d79b-4907-bbd5-17511eee5903.jpg'

It's evident from this traceback that the code in our test framework is not converting the image URL to a file path correctly. The port specification part is working fine, but it should not affect file names; these should be independent of the port number. The files saved originally do not have the port specified in their names, but the code in the test framework expects the port to be involved, hence the above error.

Using the traceback, I went to the code in the test framework where this problem occurred:

def test_create_save_image_sizes(self):
       with app.test_request_context():
           image_url_test = 'https://cdn.pixabay.com/photo/2014/09/08/17/08/hot-air-balloons-439331_960_720.jpg'

           image_sizes_type = "event"
           width_large = 1300
           width_thumbnail = 500
           width_icon = 75
           image_sizes = create_save_image_sizes(image_url_test, image_sizes_type)

           resized_image_url = image_sizes['original_image_url']
           resized_image_url_large = image_sizes['large_image_url']
           resized_image_url_thumbnail = image_sizes['thumbnail_image_url']
           resized_image_url_icon = image_sizes['icon_image_url']

           resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]
           resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large.split('/localhost')[1]
           resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail.split('/localhost')[1]
           resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon.split('/localhost')[1]

           resized_width_large, _ = self.getsizes(resized_image_file_large)
           resized_width_thumbnail, _ = self.getsizes(resized_image_file_thumbnail)
           resized_width_icon, _ = self.getsizes(resized_image_file_icon)

           self.assertTrue(os.path.exists(resized_image_file))
           self.assertEqual(resized_width_large, width_large)
           self.assertEqual(resized_width_thumbnail, width_thumbnail)
           self.assertEqual(resized_width_icon, width_icon)

 

Obviously, resized_image_url.split('/localhost')[1] will involve the port number, so we have to change this line. But this means we also have to change the subsequent lines involving the thumbnail, icon and large images. Instead of stripping the port for each of these, we can do it collectively at an earlier stage. So we redefine the image_sizes dictionary after the create_save_image_sizes() function call:

image_sizes = {
    url_name: urlparse(image_sizes[url_name]).path
    for url_name in image_sizes
}  # Now file names don't contain the port (this gives relative urls).

Now we can simplify the lines each of which earlier required port-stripping code:

resized_image_file = app.config.get('BASE_DIR') + resized_image_url
resized_image_file_large = app.config.get('BASE_DIR') + resized_image_url_large
resized_image_file_thumbnail = app.config.get('BASE_DIR') + resized_image_url_thumbnail
resized_image_file_icon = app.config.get('BASE_DIR') + resized_image_url_icon

We now make a similar modification in the test_create_save_resized_image() test method, as it also involves URL to file path conversion. We break the line

resized_image_file = app.config.get('BASE_DIR') + resized_image_url.split('/localhost')[1]

to 2 lines:

resized_image_path = urlparse(resized_image_url).path
resized_image_file = app.config.get('BASE_DIR') + resized_image_path

Now let’s run the tests (which failed earlier) again:

Finally, the tests pass without errors! Now we can add some extra convenience functionality: we can also strip the port when it corresponds to the scheme we're using. For example, if we're using the https scheme, we need not specify port 443, as it is the default for that scheme. We add this functionality by creating a mapping of such correspondences and checking against it before generating the URL. To do this, we go back to storage.py and add the following:

SCHEMES = {80: 'http', 443: 'https'}

And add

# No need to specify scheme-corresponding port
port = url.port
if port and url.scheme == SCHEMES.get(url.port, None):
    port = None

just before

return '{scheme}://{hostname}:{port}{file_relative_path}'.format(
    scheme=url.scheme, hostname=url.hostname, port=port,
    file_relative_path=file_relative_path).replace(':None', '')

And that finishes our work! The tests again pass successfully, and on top of that we have this new functionality of mentioning ports only when they don't correspond to the URL scheme!



Multiple Languages Filter in SUSI.AI Skills CMS and Server

There are numerous users of SUSI.AI globally. Most use skills in the English language, while some prefer their native languages, and some users want SUSI skills in multiple languages at once. The process of fetching skills of multiple languages is explained in this blog.

Server side implementation

The language parameter in ListSkillService.java is modified to accept a string that contains the required languages separated by commas. This parameter is split at the comma symbol, which returns an array of the required languages.

String language_list = call.get("language", "en");
String[] language_names = language_list.split(",");

Then we simply loop over this array, language by language, and keep adding the skills' metadata in each language to the response object, as sketched after the loop below.

for (String language_name : language_names) {
	// fetch the skills in this language.
}
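
As a rough sketch, the loop body reuses the existing single-language listing logic and merges each language's result into the response object. Note that getSkillMetadata() below is just a placeholder for that existing logic, not the actual server method:

JSONObject skillObject = new JSONObject();
for (String language_name : language_names) {
    // placeholder call standing in for the existing per-language fetch logic
    JSONObject skillsInLanguage = getSkillMetadata(model_name, group_name, language_name);
    for (String key : skillsInLanguage.keySet()) {
        skillObject.put(key, skillsInLanguage.get(key));
    }
}
json.put("skills", skillObject);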

CMS side implementation

Convert the state variable languageValue in BrowseSkill.js from a string to an array so that multiple languages can be kept in it.

languageValue: ['en']

Change the language dropdown menu to allow the selection of multiple values and attach an onChange listener to it. Its value is the same as that of the state variable languageValue, and its contents are filled by calling the function languageMenuItems().

<SelectField
    multiple={true}
    hintText="Languages"
    value={languageValue}
    onChange={this.handleLanguageChange}
  >
    {this.languageMenuItems(languageValue)}
</SelectField>

The languageMenuItems() function gets the list of checked languages as a parameter. The whole list of languages is stored in a global variable called languages. The function loops over all the languages and checks / unchecks them based on the values passed in the argument. It builds a menu item for each language and puts the ISO 639-1 native name of that language into the menu item.

languageMenuItems(values) {
  return languages.map(name => (
    <MenuItem
      insetChildren={true}
      checked={values && values.indexOf(name) > -1}
      value={name}
      primaryText={
        ISO6391.getNativeName(name)
          ? ISO6391.getNativeName(name)
          : 'Universal'
      }
    />
  ));
}

The language change handler gets the values of the selected languages from the dropdown menu in the form of an array. It simply assigns this value to the state variable languageValue and calls the loadCards() function to load the skills based on the new filter.

this.setState({ languageValue: values }, function() {
    this.loadCards();
  });


Implementation of Sponsors API in Open Event Organizer Android App

New contributors to this project are sometimes not experienced with the set of libraries and the MVP pattern this app uses. This blog post is an attempt to walk a new contributor through some parts of the app's code by implementing an operation for one endpoint of the API: the sponsor endpoint.

The Open Event Organizer Android app uses a robust architecture, presently MVP (Model-View-Presenter). This blog post therefore aims at giving some brief insights into the app architecture through the implementation of the Sponsor endpoint. It will focus only on one operation of the endpoint, the list operation, to keep the post short.

This blog post relates to Pull Request #901 of Open Event Organizer App.

Project structure:

These are the parts of the project structure where major updates will be made for the implementation of Sponsor endpoint:

core

data

Setting up elements in the data module for the respective endpoint

Sponsor.java

@Data
@Builder
@Type("sponsor")
@AllArgsConstructor
@JsonNaming(PropertyNamingStrategy.KebabCaseStrategy.class)
@EqualsAndHashCode(callSuper = false, exclude = { "sponsorDelegate", "checking" })
@Table(database = OrgaDatabase.class)
public class Sponsor extends AbstractItem<Sponsor, SponsorsViewHolder> implements Comparable<Sponsor>, HeaderProvider {

    @Delegate(types = SponsorDelegate.class)
    private final SponsorDelegateImpl sponsorDelegate = new SponsorDelegateImpl(this);

This class uses Lombok, Jackson and RaizLabs DbFlow, extends the AbstractItem class (from FastAdapter) and implements Comparable and HeaderProvider.

All the annotation processors help us reduce boilerplate code.

From the Lombok plugin, we are using:

Lombok has annotations to generate getters, setters, constructors, and toString(), equals() and hashCode() methods; thus it is very efficient in reducing boilerplate code:

@Getter, @Setter, @ToString, @EqualsAndHashCode

@Data is a shortcut annotation that bundles the features of @Getter, @Setter, @ToString and @EqualsAndHashCode.

@Delegate generates methods in the annotated class that forward calls to the specified delegate. It basically separates the model class from other methods which do not pertain to data.

Jackson

@JsonNaming – used to choose the naming strategy for properties during serialization, overriding the default, e.g. KEBAB_CASE, LOWER_CASE, SNAKE_CASE, UPPER_CAMEL_CASE:

@JsonNaming(PropertyNamingStrategy.KebabCaseStrategy.class)

@JsonProperty – used to store a property from the JSON schema under the given variable name. So "type" from JSON will be stored as sponsorType:

@JsonProperty("type")
public String sponsorType;

RaizLabs-DbFlow

DbFlow uses model classes which must be annotated using the annotations provided by the library. The basic annotations are @Table, @PrimaryKey, @Column, @ForeignKey etc. These create a table for the sponsor model with the annotated columns and relationships.

SponsorDelegate.java and SponsorDelegateImpl.java

The above are required only for the method declarations of the classes and interfaces that Sponsor.java extends or implements. They basically separate the required method overrides from the base item class.

public class SponsorDelegateImpl extends AbstractItem<Sponsor, SponsorsViewHolder> implements SponsorDelegate {

SponsorRepository.java and SponsorRepositoryImpl.java

A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes.

public interface SponsorRepository {
  @NonNull
  Observable<Sponsor> getSponsors(long eventId, boolean reload);
}
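
For completeness, here is a sketch of what the getSponsors() implementation in SponsorRepositoryImpl could look like, following the disk-first/network observable pattern used by the other repositories in the app. The helper names (observableOf, getItems, Sponsor_Table) follow that pattern and should be treated as assumptions, not verified code from the PR:

@Override
public Observable<Sponsor> getSponsors(long eventId, boolean reload) {
    // emit cached sponsors from the local DbFlow database
    Observable<Sponsor> diskObservable = Observable.defer(() ->
        repository.getItems(Sponsor.class, Sponsor_Table.event_id.eq(eventId)));

    // fetch fresh sponsors from the server
    Observable<Sponsor> networkObservable = Observable.defer(() ->
        sponsorApi.getSponsors(eventId)
            .flatMapIterable(sponsors -> sponsors));

    return repository
        .observableOf(Sponsor.class)
        .reload(reload)
        .withDiskObservable(diskObservable)
        .withNetworkObservable(networkObservable)
        .build();
}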

Presently the app uses the MVP architecture: the core package contains the respective Views and their Presenters, whereas the data package contains the Model implementation.

To understand Observables, one will need to dive into RxJava; this video of a presentation by Jake Wharton can be a great start. Basically, Observables represent sources of data: whenever there is a change in an observable, i.e. our data, it fires an event, and we get to know about it if we have subscribed to it.

Setting up elements in core module (views and presenters)

As mentioned, the core package contains the respective Views and Presenters for the different properties of an event.

SponsorsView.java

public interface SponsorsView extends Progressive, Erroneous, Refreshable, Emptiable<Sponsor> {
}

SponsorsFragment.java

This is a simple fragment that extends BaseFragment (a generic fragment class predefined in the project) and implements the SponsorsView interface.

SponsorsListAdapter.java

A simple adapter for a RecyclerView.

SponsorsPresenter.java

The presenter is the middleman between the model and the view. All the presentation logic belongs to it. A Presenter in an MVP architecture is responsible for querying the model and updating the view, and for reacting to user interactions by updating the model.

In this Presenter we have used the @Inject annotation and some RxJava code:

@Inject
public SponsorsPresenter(SponsorRepository sponsorRepository, DatabaseChangeListener<Sponsor> sponsorChangeListener) {
  this.sponsorRepository = sponsorRepository;
  this.sponsorChangeListener = sponsorChangeListener;
}

The @Inject annotation is used to request dependencies. It can be used on a constructor, a field, or a method. If you annotate a constructor with @Inject, Dagger 2 can also use an instance of this object to fulfill dependencies. Note that we are not instantiating any object here ourselves.

Here is some RxJava code, a detailed treatment of which is not in the scope of this blog:

private void listenChanges() {
  sponsorChangeListener.startListening();
  sponsorChangeListener.getNotifier()
      .compose(dispose(getDisposable()))
      .map(DbFlowDatabaseChangeListener.ModelChange::getAction)
      .filter(action -> action.equals(BaseModel.Action.INSERT))
      .subscribeOn(Schedulers.io())
      .subscribe(sponsorModelChange -> loadSponsors(false), Logger::logError);
}

 

public void loadSponsors(boolean forceReload) {
  getSponsorSource(forceReload)
      .compose(dispose(getDisposable()))
      .compose(progressiveErroneousRefresh(getView(), forceReload))
      .toList()
      .compose(emptiable(getView(), sponsors))
      .subscribe(Logger::logSuccess, Logger::logError);
}

Make changes to Dependency Injection modules

We are using Dagger for dependency injection, so we need to update the modules that provide the endpoint-specific objects.

This video and this blog are a great start to learn about dependency injection using dagger.

Dependency injection is a technique whereby one object supplies the dependencies of another object. A dependency is an object that can be used (a service). An injection is the passing of a dependency to a dependent object (a client) that would use it. The service is made part of the client’s state. Passing the service to the client, rather than allowing a client to build or find the service, is the fundamental requirement of the pattern.(source)

Dependency injection is built upon the concept of Inversion of Control, which says that a class should get its dependencies from outside. In simple words, no class should instantiate another class itself but should get the instances from a configuration class.

Dagger 2 analyzes the dependencies for you and generates code to help wire them together. It relies purely on Java annotation processors and compile-time checks to analyze and verify dependencies, and it is considered one of the most efficient dependency injection frameworks built to date.

@Provides specifies that the annotated method returns an object that should be available for injection into dependencies requested with @Inject.

@Singleton annotation signals to the Dagger compiler that the instance should be created only once in the application.

The @Binds annotation is a replacement for @Provides methods that simply return an injected parameter; its generated implementation is likely to be more efficient. A method annotated with @Binds must be abstract.

ApiModule.java

@Provides
@Singleton
SponsorApi providesSponsorApi(Retrofit retrofit) {
    return retrofit.create(SponsorApi.class);
}
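
The SponsorApi provided above is a plain Retrofit interface. A minimal sketch, where the exact path and any query parameters are assumptions rather than code verified against the PR:

public interface SponsorApi {

    @GET("events/{id}/sponsors")
    Observable<List<Sponsor>> getSponsors(@Path("id") long id);
}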

ChangeListenerModule.java

@Provides
DatabaseChangeListener<Sponsor> providesSponsorChangeListener() {
   return new DbFlowDatabaseChangeListener<>(Sponsor.class);
}

NetworkModule.java

@Provides
Class[] providesMappedClasses() {
    return new Class[]{Event.class, Attendee.class, Ticket.class, User.class,
        EventStatistics.class, Faq.class, Copyright.class, Feedback.class,
        Track.class, Session.class, Sponsor.class};
}

RepoModule.java

@Binds
@Singleton
abstract SponsorRepository bindsSponsorRepository(SponsorRepositoryImpl sponsorRepositoryImpl);

MainFragmentBuilderModule.java

@ContributesAndroidInjector
abstract SponsorsFragment contributeSponsorsFragment();

Also remember to make this change to FragmentNavigator.java:

case R.id.nav_sponsor:
    fragment = SponsorsFragment.newInstance(eventId);
    break;

XML resources to be added:

  1. sponsors_item.xml
  2. sponsors_fragment.xml
  3. Make changes to activity_main_drawer.xml to include the "sponsors" option.


References:

Lombok Plugin: https://blog.fossasia.org/using-lombok-to-reduce-boilerplate-code-in-open-event-android-app/

Jackson: https://blog.fossasia.org/shrinking-model-classes-boilerplate-in-open-event-android-projects/

RaizLabs DbFlow: https://blog.fossasia.org/persistence-layer-in-open-event-organizer-android-app/

Dependency Injection:
https://medium.com/@harivigneshjayapalan/dagger-2-for-android-beginners-di-part-i-f5cc4e5ad878


Enable web app generation for multiple API formats

Open Event Server has two types of API (Application Programming Interface) formats: one generated by the legacy server and the other by the server side of the decoupled development structure. The Open Event Web app supported only the new version of the API format, so an error in reading the contents of the JSON was thrown for the old version of the API format. To enable support for both kinds of API formats, so that a web app can be generated from either of them without converting JSON files from version v1 to v2, we added an option field to the generator where the client can choose the API version.

Excerpts showing the difference between the data formats of API v1 and v2

The following excerpts show the subprogram getCopyrightData in both versions v1 and v2. The key for getting licence details is 'licence_details' in v1 and 'licence-details' in v2. Similarly, the key for getting copyright details is 'copyright' in v1 and 'event-copyright' in v2.

So the data is extracted from the JSON files depending on the API version the client has selected.

API v1

function getCopyrightData(event) {
 if(event.licence_details) {
   return convertLicenseToCopyright(event.licence_details, event.copyright);
 } else {
   return event.copyright;
 }
}

 

API v2

function getCopyrightData(event) {
 if(event['licence-details']) {
   return convertLicenseToCopyright(event['licence-details'], event['event-copyright']);
 } else {
   event['event-copyright'].logo = event['event-copyright']['logo-url'];
   return event['event-copyright'];
 }
}

 

Another example showing the difference between the API formats of v1 and v2 is excerpted below.

The following excerpt shows a constant url containing the event URLs and details. Version v1 uses event_url as the key for the main page URL, whereas v2 uses event-url for the same. A similar structural difference is present for the rest of the fields, where the underscore has been replaced by a hyphen, along with slight changes in the name format for keys such as start_time and end_time.

API v1

const urls= {
 main_page_url: event.event_url,
 logo_url: event.logo,
 background_url: event.background_image,
 background_path: event.background_image,
 description: event.description,
 location: event.location_name,
 orgname: event.organizer_name,
 location_name: event.location_name,
};

 

API v2

const urls= {
 main_page_url: event['event-url'],
 logo_url: event['logo-url'],
 background_url: event['original-image-url'],
 background_path: event['original-image-url'],
 location: event['location-name'],
 orgname: event['organizer-name'],
 location_name: event['location-name'],
};

How we enabled support for both API formats?

To add support for both API formats, we added an options field on the generator's index page where the user chooses the type of API format for web app generation.

<label>Choose your API version</label>
<ul style="list-style-type:none">
 <li id="version1"><input name="apiVersion" type="radio" value="api_v1">   API_v1</li>
 <li id="version2"><input name="apiVersion" type="radio" value="api_v2"> API_v2</li>
</ul>

Depending on the version of the API format, the generator chooses the correct file where the data extraction from the input JSON files takes place. The file names are fold_v1.js and fold_v2.js for the extraction of JSON v1 data and JSON v2 data respectively.

var type = req.body.apiVersion || 'api_v2';

if(type === 'api_v1') {
 fold = require(__dirname + '/fold_v1.js');
}
else {
 fold = require(__dirname + '/fold_v2.js');
}

 

The excerpts of code showing the difference between the API formats of v1 and v2 above are the contents of the fold_v1.js and fold_v2.js files respectively.


Open Event Server – Export Attendees as CSV File

FOSSASIA's Open Event Server is the REST API backend for the event management platform, Open Event. Here, event organizers can create their events, add tickets for them and manage all aspects from the schedule to the speakers. Also, once organizers make their events public, others can view them and buy tickets if interested.

The organizer can see all the attendees in a detailed view in the event management dashboard, including the status of each attendee. The possible statuses are completed, placed, pending, expired and cancelled, and each attendee is either checked in or not checked in. The organizer can take actions such as checking in an attendee.

If the organizer wants to download the list of all the attendees as a CSV file, he or she can do so very easily by simply clicking on Export As and then on CSV.

Let us see how this is done on the server.

Server side – generating the Attendees CSV file

Here we will be using the csv package provided by Python for writing the CSV file.

import csv
  • We define a method export_attendees_csv which takes the attendees to be exported as a CSV file as the argument.
  • Next, we define the headers of the CSV file. It is the first row of the CSV file.
def export_attendees_csv(attendees):
   headers = ['Order#', 'Order Date', 'Status', 'First Name', 'Last Name', 'Email',
              'Country', 'Payment Type', 'Ticket Name', 'Ticket Price', 'Ticket Type']
  • A list called rows is defined. It contains the rows of the CSV file. As mentioned earlier, headers is the first row.
rows = [headers]
  • We iterate over each attendee in attendees and form a row for that attendee by separating the values of each of the columns by a comma. Here, every row is one attendee.
  • The newly formed row is added to the rows list.
for attendee in attendees:
   column = [str(attendee.order.get_invoice_number()) if attendee.order else '-',
             str(attendee.order.created_at) if attendee.order and attendee.order.created_at else '-',
             str(attendee.order.status) if attendee.order and attendee.order.status else '-',
             str(attendee.firstname) if attendee.firstname else '',
             str(attendee.lastname) if attendee.lastname else '',
             str(attendee.email) if attendee.email else '',
             str(attendee.country) if attendee.country else '',
             str(attendee.order.payment_mode) if attendee.order and attendee.order.payment_mode else '',
             str(attendee.ticket.name) if attendee.ticket and attendee.ticket.name else '',
             str(attendee.ticket.price) if attendee.ticket and attendee.ticket.price else '0',
             str(attendee.ticket.type) if attendee.ticket and attendee.ticket.type else '']

   rows.append(column)
  • rows contains the contents of the CSV file and hence it is returned.
return rows
  • We iterate over each item of rows and write it to the CSV file using the methods provided by the csv package.
from app.api.helpers.csv_jobs_util import export_attendees_csv

with open(file_path, "w") as temp_file:
   writer = csv.writer(temp_file)
   content = export_attendees_csv(attendees)
   for row in content:
       writer.writerow(row)

Obtaining the Attendees CSV file:

Firstly, we have an API endpoint which starts the task on the server.

GET - /v1/events/{event_identifier}/export/attendees/csv

Here, event_identifier is the unique ID of the event. This endpoint starts a celery task on the server to export the attendees of the event as a CSV file. It returns the URL of the task to get the status of the export task. A sample response is as follows:

{
  "task_url": "/v1/tasks/b7ca7088-876e-4c29-a0ee-b8029a64849a"
}

The user can go to the above-returned URL and check the status of his/her Celery task. If the task completed successfully he/she will get the download URL. The endpoint to check the status of the task is:

GET - /v1/tasks/{task_id}

and the corresponding response from the server –

{
  "result": {
    "download_url": "/v1/events/1/exports/http://localhost/static/media/exports/1/zip/OGpMM0w2RH/event1.zip"
  },
  "state": "SUCCESS"
}

The file can be downloaded from the above-mentioned URL.
