Adding Check-in Attributes to Tickets

Recently, the Open Event Orga App team decided that the event ticket API response from the Open Event Server should have two additional attributes specifying check-in access. At first sight it seemed that adding these options would only require changes in the Orga App, but it turned out that the entire Ticket API on the server needed the addition.

Implementing these attributes turned out to be quite straightforward. The fields to be added were the booleans is_checkin_restricted and auto_checkin_enabled. By default, check-in is restricted and not automatic, so the default values for these fields were chosen to be True and False respectively. To add them, the ticket model file was changed first, with the two new columns:

class Ticket(SoftDeletionModel):
    ...
    is_checkin_restricted = db.Column(db.Boolean)  # <--
    auto_checkin_enabled = db.Column(db.Boolean)  # <--
    ...
    def __init__(self,
        name=None,
        event_id=None,
        ...
        is_checkin_restricted=True,
        auto_checkin_enabled=False):

        self.name = name
        ...
        self.is_checkin_restricted = is_checkin_restricted
        self.auto_checkin_enabled = auto_checkin_enabled
        ...

Since the ticket database model was updated, a migration had to be performed. The following shell commands (run at the Open Event Server project root) generated the migration file and updated the database:

$ python manage.py db migrate
$ python manage.py db upgrade

Here’s the generated migration file:

from alembic import op
import sqlalchemy as sa

revision = '6440077182f0'
down_revision = 'eaa029ebb260'

def upgrade():
    op.add_column('tickets', sa.Column('auto_checkin_enabled', sa.Boolean(), nullable=True))
    op.add_column('tickets', sa.Column('is_checkin_restricted', sa.Boolean(), nullable=True))

def downgrade():
    op.drop_column('tickets', 'is_checkin_restricted')
    op.drop_column('tickets', 'auto_checkin_enabled')

The next change was required in the ticket API schema. The change was essentially the same as the one made in the model file; just the two new fields were added:

class TicketSchemaPublic(SoftDeletionSchema):
    ...
    id = fields.Str(dump_only=True)
    name = fields.Str(required=True)
    ...
    is_checkin_restricted = fields.Boolean(default=True)  # <--
    auto_checkin_enabled = fields.Boolean(default=False)  # <--
    event = Relationship(attribute='event',
                         self_view='v1.ticket_event',
                         self_view_kwargs={'id': '<id>'},
                         related_view='v1.event_detail',
                         related_view_kwargs={'ticket_id': '<id>'},
                         schema='EventSchemaPublic',
                         type_='event')
    ...

Now all that remained were the corresponding changes in the API documentation. This completed the addition of the two check-in attributes in the Ticket API, which eventually made its way into the Orga App; the attributes can be requested as usual by the front-end and attendee app as well.
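As a quick illustration, here is a minimal sketch of reading the new flags over the API with Python's requests library; the host and ticket id are placeholders, and the dasherized attribute names assume the server's usual JSON:API inflection:

import requests

# hypothetical host and ticket id, for illustration only
resp = requests.get('https://api.example.com/v1/tickets/1')
attrs = resp.json()['data']['attributes']

# the JSON:API output dasherizes the attribute names
print(attrs.get('is-checkin-restricted'))  # True unless the organizer changed it
print(attrs.get('auto-checkin-enabled'))   # False by default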



Patching an Attribute Type Across a Flask App

Recently, a contributor discovered that the rating attribute for event feedback in Open Event was of type String. The type was indeed incorrect, and after a discussion the developers concluded that it should be of type Float. In this post, I explain how to perform this simple migration task of changing a data type across a typical Flask app's stack.

To begin this change, we first modify the database model. The model file for feedback (feedback.py) looks like the following:

from app.models import db


class Feedback(db.Model):
    """Feedback model class"""
    __tablename__ = 'feedback'
    id = db.Column(db.Integer, primary_key=True)
    rating = db.Column(db.String, nullable=False)  # <-- should be Float
    comment = db.Column(db.String, nullable=True)
    user_id = db.Column(db.Integer,
                        db.ForeignKey('users.id', ondelete='CASCADE'))
    event_id = db.Column(db.Integer,
                         db.ForeignKey('events.id', ondelete='CASCADE'))

    def __init__(self, rating=None, comment=None, event_id=None, user_id=None):
        self.rating = rating  # <-- cast here for safety
        self.comment = comment
        self.event_id = event_id
        self.user_id = user_id





The change here is quite straightforward and spans just two lines:

rating = db.Column(db.Float, nullable=False)

and

self.rating = float(rating)

We now perform the database migration using a couple of manage.py commands on the terminal. This file is different for different projects, but the migration commands essentially look the same. For Open Event Server, the manage.py file is at the root of the project directory (as is conventional). After cd’ing to the root, we execute the following commands:

$ python manage.py db migrate

and then

$ python manage.py db upgrade

These commands should update our Open Event database so that the rating is stored as a Float. However, when we execute them one after the other, we see that an exception is thrown:

sqlalchemy.exc.ProgrammingError: column "rating" cannot be cast automatically to type float
HINT:  Specify a USING expression to perform the conversion.
'ALTER TABLE feedback ALTER COLUMN rating TYPE FLOAT USING rating::double precision'

This happens because PostgreSQL cannot automatically cast the existing string values to floats; the hint tells us to use the USING clause to perform the conversion. We do this manually by connecting to our database with the psql client and issuing the type change:

$ psql oevent
psql (10.1)
Type "help" for help.

oevent=# ALTER TABLE feedback ALTER COLUMN rating TYPE FLOAT USING rating::double precision;

We now exit the psql shell and run the above migration commands again. We see that the migration commands pass successfully this time, and a migration file is generated. For our migration, the file looks like the following:

from alembic import op
import sqlalchemy as sa


# These values would be different for your migrations.
revision = '194a5a2a44ef'
down_revision = '4cac94c86047'


def upgrade():
    op.alter_column('feedback', 'rating',
                    existing_type=sa.VARCHAR(),
                    type_=sa.Float(),
                    existing_nullable=False)


def downgrade():
    op.alter_column('feedback', 'rating',
                    existing_type=sa.Float(),
                    type_=sa.VARCHAR(),
                    existing_nullable=False)

This is an auto-generated file (built by the database migration tool Alembic), and we need to specify any extra commands we used while migrating our database. Since we used an extra command to specify the cast, we need to add it here. The PostgreSQL USING clause can be added to Alembic migration files via the postgresql_using keyword argument. The edited version of the upgrade function thus looks like the following:

def upgrade():
    op.alter_column('feedback', 'rating',
                    existing_type=sa.VARCHAR(),
                    type_=sa.Float(),
                    existing_nullable=False,
                    postgresql_using='rating::double precision')

This completes our work on database migration. Migration files are useful for a variety of purposes – they allow us to easily get to a previous database state, or a new database state as suggested by a project collaborator. Just like git, they allow for easy version control and collaboration.

We didn't finish after the database migration. We also decided to impose limits on the rating value and concluded that 0-5 would be a good range. Furthermore, we decided to round the rating to the nearest 0.5, so an input rating of 2.3 becomes 2.5, and 4.1 becomes 4.0. Such values are conventional for ratings across numerous web and mobile apps, which should make adoption easier for new users.

For the validation part, marshmallow came to the rescue. It is a simple object serialization and deserialization tool for Python; it converts complex Python objects to JSON data for communicating over HTTP and vice versa, among other things. It also facilitates pre-processing input data and therefore allows clean validation of payloads. In our case, marshmallow was used specifically to validate the range of the rating attribute of feedback. The original feedback schema file looked like the following:

from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Schema, Relationship

from app.api.helpers.utilities import dasherize


class FeedbackSchema(Schema):
    """
    Api schema for Feedback Model
    """
    class Meta:
        """
        Meta class for Feedback Api Schema
        """
        type_ = 'feedback'
        self_view = 'v1.feedback_detail'
        self_view_kwargs = {'id': '<id>'}
        inflect = dasherize

    id = fields.Str(dump_only=True)
    rating = fields.Str(required=True)  # <-- need to validate this
    comment = fields.Str(required=False)
    event = Relationship(attribute='event',
                         self_view='v1.feedback_event',
                         self_view_kwargs={'id': '<id>'},
                         related_view='v1.event_detail',
                         related_view_kwargs={'feedback_id': '<id>'},
                         schema='EventSchemaPublic',
                         type_='event')




To validate the rating attribute, we use marshmallow’s Range class:

from marshmallow.validate import Range

Now we change the line

rating = fields.Str(required=True)

to

rating = fields.Float(required=True, validate=Range(min=0, max=5))

So with marshmallow, about two lines of work implement rating validation for us!
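To see the validator in action outside the JSON:API schema, here is a minimal standalone sketch using plain marshmallow; the schema and payloads are made up for illustration:

from marshmallow import Schema, fields
from marshmallow.validate import Range


class RatingSchema(Schema):
    rating = fields.Float(required=True, validate=Range(min=0, max=5))


# validate() returns a dict of errors, non-empty when validation fails
print(RatingSchema().validate({'rating': 7.2}))  # rating fails the 0-5 range check
print(RatingSchema().validate({'rating': 4.5}))  # {} means the payload is valid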

After the validation part, what remains is the rounding-off business logic. This is simple mathematics; to get to the nearest 0.5, the formula goes as follows:

rating * 2 --> round off --> divide the result by 2
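As a quick sanity check of the formula, here is a small sketch; note that Python 3's round() uses banker's rounding, so exact quarter values such as 2.25 can round down:

def round_to_half(x):
    # multiply by 2, round to the nearest integer, then divide by 2
    return round(x * 2) / 2


print(round_to_half(2.3))  # 2.5
print(round_to_half(4.1))  # 4.0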

To implement this business logic, we go back to the feedback model class and modify its constructor using Python's built-in round() function. Before the type change, the constructor looked like the following:

def __init__(self, rating=None, comment=None, event_id=None, user_id=None):
    self.rating = rating
    self.comment = comment
    self.event_id = event_id
    self.user_id = user_id

We change this by first converting the input rating to float, rounding it off, and finally assigning the result to the feedback's rating attribute. The new constructor is shown below:

def __init__(self, rating=None, comment=None, event_id=None, user_id=None):
    rating = float(rating)
    self.rating = round(rating * 2, 0) / 2  # rounds to the nearest 0.5

    self.comment = comment
    self.event_id = event_id
    self.user_id = user_id

This completes the rounding-off part and concludes rating's type change from String to Float. We saw how a simple high-level type change requires editing code across multiple files and using different tools along the way. In doing so, we also learned the utility of Alembic and marshmallow in database migration and data validation, respectively.



Open Event Server – Pages API

This article illustrates how the Pages API has been designed and implemented on the server side, i.e., FOSSASIA's Open Event Server. The Pages endpoint is used to create static pages, such as an "About" page or any other page that doesn't need frequent updates and only shows fixed content.

Parameters

  1. name – This stores the name of the page.
      1. Type – String
      2. Required – Yes
  2. title – This stores the title of the page.
      1. Type – String
      2. Required – No
  3. url – This stores the url of the page.
      1. Type – String
      2. Required – Yes
  4. description – This stores the description of the page.
      1. Type – String
      2. Required – Yes
  5. language – This stores the language of the page.
      1. Type – String
      2. Required – No
  6. index – This stores the position of the page.
      1. Type – Integer
      2. Required – No
      3. Default – 0
  7. place – Location where the page will be placed.
      1. Type – String
      2. Required – No
      3. Accepted Values – ‘footer’ and ‘event’

These are the allowed parameters for the endpoint.
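To make the parameters concrete, here is a hedged sketch of creating a page through the API with Python's requests library; the host, the authorization header and the JSON:API type name are assumptions for illustration:

import requests

payload = {
    'data': {
        'type': 'page',  # assumed JSON:API type for the Page model
        'attributes': {
            'name': 'about',
            'title': 'About Us',
            'url': '/about',
            'description': 'A static page describing the organization.',
            'language': 'en',
            'index': 0,
            'place': 'footer',
        }
    }
}

resp = requests.post('https://api.example.com/v1/pages',  # hypothetical host
                     json=payload,
                     headers={'Content-Type': 'application/vnd.api+json',
                              'Authorization': 'JWT <admin-token>'})  # POST needs an admin
print(resp.status_code)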

Model

Let's see how we model this API. The ORM model looks like this:

class Page(db.Model):
    __tablename__ = 'pages'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False)
    title = db.Column(db.String)
    url = db.Column(db.String, nullable=False)
    description = db.Column(db.String)
    place = db.Column(db.String)
    language = db.Column(db.String)
    index = db.Column(db.Integer, default=0)

As you can see, we created a table called "pages". This table has 8 columns, 7 of which are the parameters mentioned above. The column "id" is an Integer primary key, which uniquely identifies each entry in the table.
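The routes below reference a PageSchema, which is not shown in the original post. Following the pattern of the other schemas in this article, a sketch of it could look like the following; the type_ and view names are assumptions:

from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Schema

from app.api.helpers.utilities import dasherize


class PageSchema(Schema):
    """API schema for the Page model (illustrative sketch)"""
    class Meta:
        type_ = 'page'                # assumed JSON:API type
        self_view = 'v1.page_detail'  # assumed view name
        self_view_kwargs = {'id': '<id>'}
        inflect = dasherize

    id = fields.Str(dump_only=True)
    name = fields.Str(required=True)
    title = fields.Str()
    url = fields.Str(required=True)
    description = fields.Str(required=True)
    language = fields.Str()
    index = fields.Integer(default=0)
    place = fields.Str()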


API

We support the following operations:

  1. GET all the pages in the database
  2. POST create a new page
  3. GET details of a single page as per id
  4. PATCH a single page by id
  5. DELETE a single page by id

To implement this, we first add the routes in our Python file as follows:

api.route(PageList, 'page_list', '/pages')
api.route(PageDetail, 'page_detail', '/pages/<int:id>')

Then we define the classes that handle these requests. The first one looks as follows:

class PageList(ResourceList):
   """
   List and create page
   """
   decorators = (api.has_permission('is_admin', methods="POST"),)
   schema = PageSchema
   data_layer = {'session': db.session,
                 'model': Page}

As can be seen above, this route requires the user to be an admin for POST requests (listing pages via GET is open). It uses the Page model described above.

The second route is:

class PageDetail(ResourceDetail):
   """
   Page detail by id
   """
   schema = PageSchema
   decorators = (api.has_permission('is_admin', methods="PATCH,DELETE"),)
   data_layer = {'session': db.session,
                 'model': Page}

This route likewise requires admin permissions for PATCH and DELETE requests. It uses the Page model and serves the details of a single page.

To summarise, our APIs are:

GET     /v1/pages{?sort,filter}
POST    /v1/pages
GET     /v1/pages/{page_id}
PATCH   /v1/pages/{page_id}
DELETE  /v1/pages/{page_id}


Implementing Database Migrations to Badgeyay

The Badgeyay project is divided into two parts: a front-end built with Ember.js and a back-end REST API written in Python.

We have integrated PostgreSQL as the object-relational database in Badgeyay, and we use the SQLAlchemy SQL toolkit and ORM to work with the database from Python. Since the back-end uses the Flask microframework, we use Flask-SQLAlchemy, a Flask extension that adds support for SQLAlchemy's ORM.

One of the challenging jobs is to manage changes we make to the models and propagate these changes to the database. For this purpose, I added migrations to Flask-SQLAlchemy for handling database changes using the Flask-Migrate extension in my Pull Request, and in this blog I will discuss how.

First, let's understand database models, migrations, and the Flask-Migrate extension. Then we will move on to adding migrations using Flask-Migrate. Let's get started and understand it step by step.

What are Database Models?

A database model defines the logical design and structure of a database, including the relationships and constraints that determine how data can be stored and accessed. Presently, the project has User and File models.

What are Migrations?

A database migration captures a change to the database schema. Migrations enable us to track modifications we make to the models and propagate these adjustments to the database. For example, if we later change a field in one of the models, all we need to do is create and run a migration, and the database will replicate the change.
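For instance, if we later added a hypothetical is_verified column to the User model, the generated migration script would contain operations along these lines, much like the migration files shown earlier in this article (the column name is illustrative):

from alembic import op
import sqlalchemy as sa


def upgrade():
    # illustrative: add a hypothetical is_verified column to the users table
    op.add_column('users', sa.Column('is_verified', sa.Boolean(), nullable=True))


def downgrade():
    op.drop_column('users', 'is_verified')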

What is Flask Migrate?

Flask-Migrate is an extension that handles SQLAlchemy database migrations for Flask applications using Alembic. The database operations are made available through the Flask command-line interface or through the Flask-Script extension.

Now let’s add support for migration in Badgeyay.

Step 1:

pip install flask-migrate

 

Step 2:

We need to edit run.py so that it looks like this:

import os
from flask import Flask
from flask_migrate import Migrate  # import Flask-Migrate

from api.db import db
from api.config import config

......

db.init_app(app)
migrate = Migrate(app, db)  # this allows us to run migrations
......

@app.before_first_request
def create_tables():
    db.create_all()

if __name__ == '__main__':
    app.run()

 

Step 3:

Create the migration directory:

export FLASK_APP=run.py
flask db init

 

This will create a migrations directory inside the backend API folder:

└── migrations
    ├── README
    ├── alembic.ini
    ├── env.py
    ├── script.py.mako
    └── versions

 

Step 4:

We create our first migration with the following command:

flask db migrate

 

Step 5:

We apply the migration with the following command:

flask db upgrade

 

Now we are done setting up migrations in Flask-SQLAlchemy for handling database changes in the Badgeyay repository. We can verify the migration by checking the tables in the database.

This is how I added migrations for handling database changes using the Flask-Migrate extension in my Pull Request.

Resources:

  • PostgreSQL documentation
  • Flask-Migrate documentation
  • SQLAlchemy documentation
  • Flask-SQLAlchemy documentation

Database Listener for User Centric Events

Badgeyay is an open-source utility developed by FOSSASIA to generate badges for conferences and events. The project is separated into two components to ease maintainability: the frontend, written in Ember, and the backend, written in Flask. The database supporting the backend is PostgreSQL.

Now comes the problem: whenever a user registers in the database, they should receive a verification mail confirming that they are successfully registered on the platform. For this we have to listen to database events on the User model. This capability extends well beyond sending greeting or verification mails; we can use it to trigger any service that depends on user registration, such as subscribing the user to a set of services based on the plan they opted for while registering, and many more.

These kinds of requirements cannot be handled by ordinary relationships between tables and other entities; there has to be logic in place to support such functionality. The challenges in tackling the problem are as follows:

  • Listen to the insert_action in User model
  • Extracting the details necessary for the logic
  • Execute particular logic

Procedure

  1. Attach an insert_action listener to the User model. This function will be triggered whenever an entity is saved to the User model.


@db.event.listens_for(User, "after_insert")
def logic(mapper, connection, target):
    ......
  2. When the function is triggered, extract the details of the saved user that the logic needs. Since we are currently sending a greeting mail, we only need the user's email. target is the actual saved user, passed as an argument to the listening function by the library.


msg = {}
msg['subject'] = "Welcome to Badgeyay"
msg['receipent'] = target.email
msg['body'] = "It's good to have you onboard with Badgeyay. Welcome to " \
              "FOSSASIA Family."
sendMail(msg)
  3. The details are then passed to the sendMail() function, which uses the flask-mail library to send the mail to the recipient.
def sendMail(message):
    # 'message' is the plain dict built above, so values are read by key
    # ('receipent' is spelled as in the original code)
    if message and message.get('receipent'):
        try:
            msg = Message(
                subject=message['subject'],
                sender=app.config['MAIL_USERNAME'],
                recipients=[message['receipent']],
                body=message['body'])
            Mail(app).send(msg)
        except Exception as e:
            return jsonify(
                Response(500).exceptWithMessage(
                    str(e),
                    'Unable to send the mail'))
        return jsonify(
            Response(200).generateMessage(
                'Mail Sent'))
    else:
        return jsonify(
            Response(403).generateMessage(
                'No data received'))
  4. This will send the mail to the user who has registered on the application.

Similarly, we can attach other logic according to the needs of the application, as sketched below.
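For example, here is a hedged sketch of a second listener that could subscribe a new user to services based on the plan chosen at registration; the plan field and the enroll_premium_services() helper are hypothetical, not part of the actual Badgeyay models:

def enroll_premium_services(user):
    # hypothetical helper: subscribe the user to premium services
    pass


@db.event.listens_for(User, "after_insert")
def subscribe_services(mapper, connection, target):
    # 'plan' is a hypothetical field used only for illustration
    if getattr(target, 'plan', None) == 'premium':
        enroll_premium_services(target)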

 

The Pull Request for the above functionality is at this Link

Topics Involved

Working on this issue involved the following topics:

  • Configuring the mail service to allow access from less secure apps.
  • Sending mail via flask-mail to the end user.
  • Attaching a listener to listen for database changes.
  • Extracting data from the saved object in SQLAlchemy.

Resources

  • Sending mails programmatically
  • Flask-Mail documentation
  • Listening to database events
  • Enabling GMAIL access to send mails to recipients

Adding multiple email support for users on Open Event Server

The Open Event Server enables organizers to manage events from concerts to conferences and meet-ups. It offers features for events with several tracks and venues. Event managers can create invitation forms for speakers and build schedules in a drag and drop interface. The event information is stored in a database. The system provides API endpoints to fetch the data, and to modify and update it.

The Open Event Server is based on the JSON:API 1.0 specification and hence built on top of Flask-REST-JSONAPI (for building REST APIs) and Marshmallow (for schemas).

In this blog, we will talk about how to add support for multiple emails per user in the Open Event Server. The focus is on the model and schema creation for this support.

Model Creation

For the UserEmail, we’ll make our model as follows

from app.models import db


class UserEmail(db.Model):
    """user email model class"""
    __tablename__ = 'user_emails'
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(120), unique=True, nullable=False)
    verified = db.Column(db.Boolean, default=False)
    user_id = db.Column(db.Integer, db.ForeignKey('users.id', ondelete='CASCADE'))
    user = db.relationship("User", backref="emails", foreign_keys=[user_id])

    def __init__(self, email=None, user_id=None):
        self.email = email
        self.user_id = user_id

    def __str__(self):
        return 'User:' + unicode(self.user_id).encode('utf-8') + ' email: ' + unicode(self.email).encode('utf-8')

    def __unicode__(self):
        return unicode(self.id)

Now, let’s try to understand the attributes of this model.

  1. id is the primary key, required in every model, and uniquely identifies a UserEmail object.
  2. email is required, hence unique and non-nullable.
  3. verified indicates whether an email has been verified (thus a boolean).
  4. user_id specifies the id of the user whose email is contained in the UserEmail object.
  5. Finally, user is a relationship to the user with id user_id; the emails associated with that user (User.id == user_id) are available through the emails backref on the User model (see the usage sketch after this list).
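As promised above, here is a minimal usage sketch of the model, assuming an application context, a database session and an existing user with id 1:

# assumes an app context and an existing user with id 1
entry = UserEmail(email='alice@example.com', user_id=1)
db.session.add(entry)
db.session.commit()

print(entry.user.emails)  # the 'emails' backref exposes all emails of the user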

Schema Creation

For the model UserEmail, we’ll make our schema UserEmailSchema as follows

from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Schema, Relationship

from app.api.helpers.utilities import dasherize


class UserEmailSchema(Schema):
    """API schema for user email Model"""
    class Meta:
        """Meta class for user email API schema"""
        type_ = 'user-emails'
        self_view = 'v1.user_emails_detail'
        self_view_kwargs = {'id': '<id>'}
        inflect = dasherize

    id = fields.Str(dump_only=True)
    email = fields.Email(allow_none=False)
    user_id = fields.Integer(allow_none=False)
    user = Relationship(attribute='user',
                        self_view='v1.user_email',
                        self_view_kwargs={'id': '<id>'},
                        related_view='v1.user_detail',
                        related_view_kwargs={'user_id': '<id>'},
                        schema='UserSchema',
                        type_='user')

  • Marshmallow-jsonapi provides a simple way to produce JSON API-compliant data in any Python web framework.

Now, let’s try to understand the schema UserEmailSchema

  1. id: as in the model, uniquely identifies a UserEmail object.
  2. email: required, as in the model, so allow_none is set to False.
  3. user_id: the id of the user whose email is contained in the UserEmailSchema object.
  4. user: a relationship exposing the attributes of the user to whom this email belongs.

So we saw how to add multiple email support for users on the Open Event Server. We only needed to create a model and its schema to add this feature. Similarly, to add support for any database model in the project, we need to create a model and a schema with all the attributes specified in the model. The schema creation follows the guidelines of the JSON:API 1.0 specification, using Marshmallow.


How to use Realm in SUSI Android to Save Data

Sometimes we need to store information locally on the device so that we can use it offline and also query data faster. Initially, SQLite was the only option to store information on the device. But working with SQLite is sometimes difficult, it can make code harder to understand, and SQL queries can take a long time. Now we have Realm, a better alternative to SQLite. Realm is a lightweight mobile database with its own C++ core that stores data in a universal, table-based format. This allows Realm to support data access from multiple languages as well as a range of queries. In this blog post, I will show why we used Realm and how we save data in SUSI Android using Realm.

“How about performance? Well, we’re glad you asked 🙂 For all the API goodness & development productivity we give you, we’re still up to 100x faster than some SQLite ORMs and on average ~10x faster than raw SQLite and common ORMs for typical operations.” (compare: https://blog.realm.io/realm-for-android/)

The advantages of Realm over SQLite are the following:

  • It is faster than SQLite, as explained on the Realm blog. One reason is that the traditional SQLite + ORM abstraction is leaky: the ORM simply converts objects and their methods into SQL statements. Realm, on the other hand, is an object database, meaning your objects directly reflect your database.
  • It is easier to use as it stores data as objects. With SQLite we need boilerplate code to convert values to and from the database, setting up mappings between classes and tables, fields and columns, foreign keys, etc. In Realm, data is directly exposed as objects and can be queried without any conversion.

Prerequisites

To include this library in your project you need

  • Android studio version 1.5.1 or higher.
  • JDK version 7.0 or higher.
  • Android API level 9 or higher.

How to use Realm in Android

To use Realm in your project, add the dependency of the library to the build.gradle (project) file

dependencies {
    classpath "io.realm:realm-gradle-plugin:3.3.1"
}

and build.gradle(module) file.

apply plugin: 'realm-android'
dependencies {
    compile 'io.realm:android-adapters:1.3.0'
}

Now you have to initialize Realm in your Application class. Setting a default configuration there ensures that it is available in the rest of your code.

Realm.init(this);  // initialize Realm once with the Application context
RealmConfiguration realmConfiguration = new RealmConfiguration.Builder()
        .deleteRealmIfMigrationNeeded().build();
Realm.setDefaultConfiguration(realmConfiguration);

Now we need to create a model class. A model class is used to save data in Realm and retrieve saved data, and it must extend RealmObject. For example:

public class Person extends RealmObject {
   private String name;
   public String getName() {
       return name;
   }
   public void setName(String name) {
       this.name = name;
   }
}

Fields in the model class define the columns; for example, 'name' is a column name. Methods like setName() are used to save data and getName() to retrieve it.

Now create an instance of the Realm in the activity where you want to use it. It will be used to read data from the Realm and write data to the Realm.

Realm realm = Realm.getDefaultInstance();

Before you write data you must call beginTransaction(), which starts a write transaction on the database.

realm.beginTransaction();

To write data to the Realm you need an instance of the model class. Since our model class extends RealmObject, we use createObject() to create a managed instance:

Person person = realm.createObject(Person.class);

Write data to realm.

person.setName("MSDHONI");

Afterwards you must call commitTransaction(), which commits the write and ends the transaction.

realm.commitTransaction();

Reading data from Realm is easier than writing data to it. You need to create an instance of the Realm.

Realm realm = Realm.getDefaultInstance();

To create a query, use the where() method and pass the class of the object you want to query. After creating the query you can fetch all matching data using the findAll() method:

realm.where(Person.class).findAll();


Filtering List with Search Manager in Connfa Android App

It is good practice to let users filter lists in Android apps to improve the user experience; scrolling through an entire list to reach a certain item is unpleasant. Recently I modified the Connfa app to read the list of speakers from the Open Event Format. In this blog I describe how to add filtering to lists with Search Manager.

First, we declare the search menu so that the widget appears in it.

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:id="@+id/search"
        android:title="Search"
        android:icon="@drawable/search"
        android:showAsAction="collapseActionView|ifRoom"
        android:actionViewClass="android.widget.SearchView" />
</menu>

In the above menu item, the collapseActionView attribute allows the SearchView to expand to take up the whole action bar and collapse back down into a normal action bar item when not in use. Now we create the searchable configuration, which defines how the SearchView behaves.

<?xml version="1.0" encoding="utf-8"?>
<searchable
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/app_name"
    android:hint="Search friend">
</searchable>

Also reference this configuration from a <meta-data> tag in the activity's entry in the manifest file. Then associate the searchable configuration with the SearchView in the activity class:

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.search_menu, menu);

    SearchManager searchManager = (SearchManager)
                            getSystemService(Context.SEARCH_SERVICE);
    searchMenuItem = menu.findItem(R.id.search);
    searchView = (SearchView) searchMenuItem.getActionView();

    searchView.setSearchableInfo(searchManager.
                            getSearchableInfo(getComponentName()));
    searchView.setSubmitButtonEnabled(true);
    searchView.setOnQueryTextListener(this);

    return true;
}

Implement SearchView.OnQueryTextListener in the activity; we now need to override two methods:

@Override
public boolean onQueryTextSubmit(String searchText) {
  
  return true;
}

@Override
public boolean onQueryTextChange(String searchedText) {

   if (mSpeakersAdapter != null) {
       lastSearchRequest = searchedText;
       mSpeakersAdapter.getFilter().filter(searchedText);
   }
   return true;
}

Find the complete implementation here.

References

Android Search View documentation – https://developer.android.com/reference/android/widget/SearchView.html

Using Data Access Object to Store Information

We often need to store information received from the network so that we can retrieve it later. Although we can store and read data directly, using a Data Access Object enables us to perform data operations without exposing the details of the database. Using a Data Access Object is also a best practice in software engineering. Recently I modified the Connfa app to store the data received in Open Event Format. In this blog, I describe how to use a Data Access Object.

The goal is to abstract and encapsulate all access to the data behind an interface. This is called the Data Access Object (DAO) pattern. In a nutshell, the DAO "knows" which data source (a database, a flat file or even a web service) to connect to and is specific to this data source. It makes no difference to the application whether it accesses a relational database or parses XML files (through a DAO). The DAO is usually able to create an instance of a data object ("to read data") and also to persist data ("to save data") to the data source.

Consider the example from the Connfa app in which we get the tracks from the API and store them in an SQL database. We use a DAO to create a layer between the model and the database. AbstractEntityDAO is an abstract class that has the functions to perform CRUD operations; we extend it to implement them in our DAO model. Here is the TrackDao structure:

public class TrackDao extends AbstractEntityDAO<Track, Long> {

    public static final String TABLE_NAME = "table_track";

    @Override
    protected String getSearchCondition() {
        return "_id=?";
    }
    
    ...
}

Find the complete class, with the detailed methods to implement search conditions, get key columns, create instances, etc., here.

Here is a general method to get the data from the database; getFacade() returns the facade object that manages access to the underlying database:

public List<ClassToSave> getAllSafe() {
   ILAPIDBFacade facade = getFacade();
   try {
       facade.open();
       return getAll();

   } finally {
       facade.close();
   }
}

Now we can create an instance and use these methods instead of performing SQL operations directly. This function gets the data and sorts it by track order:

private TrackDao mTrackDao;
 public List<Track> getTracks() {
   List<Track> tracks = mTrackDao.getAllSafe();
   Collections.sort(tracks, new Comparator<Track>() {
       @Override
       public int compare(Track track, Track track2) {
           return Double.compare(track.getOrder(), track2.getOrder());
       }
   });
   return tracks;
}


How User Event Roles relationship is handled in Open Event Server

Users and Events are the most important parts of FOSSASIA's Open Event Server. Through the advent and evolution of the project, the way user event roles are implemented has gone through many changes. When the Open Event Organizer Server was first decoupled to serve as an API server, user event roles, like all other models, were served as a separate API providing a data layer above the database for making changes to the entries. Whenever a new role invite was accepted, a POST request was made to the Users Events Roles table to insert the new entry. Whenever there was a change in the role of a user for a particular event, a PATCH request was made. Permissions were set so that a user could insert only his/her own user id, not someone else's.

def before_create_object(self, data, view_kwargs):
        """
        method to create object before post
        :param data:
        :param view_kwargs:
        :return:
        """
        if view_kwargs.get('event_id'):
            event = safe_query(self, Event, 'id', view_kwargs['event_id'], 'event_id')
            data['event_id'] = event.id

        elif view_kwargs.get('event_identifier'):
            event = safe_query(self, Event, 'identifier', view_kwargs['event_identifier'], 'event_identifier')
            data['event_id'] = event.id
        email = safe_query(self, User, 'id', data['user'], 'user_id').email
        invite = self.session.query(RoleInvite).filter_by(email=email).filter_by(role_id=data['role'])\
                .filter_by(event_id=data['event_id']).one_or_none()
        if not invite:
            raise ObjectNotFound({'parameter': 'invite'}, "Object: not found")

    def after_create_object(self, obj, data, view_kwargs):
        """
        method to create object after post
        :param data:
        :param view_kwargs:
        :return:
        """
        email = safe_query(self, User, 'id', data['user'], 'user_id').email
        invite = self.session.query(RoleInvite).filter_by(email=email).filter_by(role_id=data['role'])\
                .filter_by(event_id=data['event_id']).one_or_none()
        if invite:
            invite.status = "accepted"
            save_to_db(invite)
        else:
            raise ObjectNotFound({'parameter': 'invite'}, "Object: not found")


Initially, when a POST request was sent to the User Event Roles API endpoint, we would first check whether a role invite from the organizer existed for that particular combination of user, event and role. Only if it existed would we make an entry in the database; otherwise we would raise an "Object: not found" error. After the entry was made, we would update the role_invites table to change the status of the role invite.

Later it was decided that a separate API endpoint was not needed. Since API endpoints are all user accessible and may cause permission problems, user event roles are now handled entirely through the model instead of a separate API. Also, the workflow wasn't very clear for a user. So we settled on a workflow where the role_invites table is first updated with the new status, and after the update has been made, we insert an entry into the users_events_roles table with the data we get from the role_invites table.

When a role invite is accepted, SQLAlchemy's add() and commit() are used to insert a new entry into the table. When a role is changed for a particular user, we make a query, update the values and save them back into the table. So the entire process is handled at the data layer level rather than the API level.

The code implementation is as follows:

def before_update_object(self, role_invite, data, view_kwargs):
        """
        Method to edit object
        :param role_invite:
        :param data:
        :param view_kwargs:
        :return:
        """
        user = User.query.filter_by(email=role_invite.email).first()
        if user:
            if not has_access('is_user_itself', id=user.id):
                raise UnprocessableEntity({'source': ''}, "Only users can edit their own status")
        if not user and not has_access('is_organizer', event_id=role_invite.event_id):
            raise UnprocessableEntity({'source': ''}, "User not registered")
        if not has_access('is_organizer', event_id=role_invite.event_id) and (len(data.keys())>1 or 'status' not in data):
            raise UnprocessableEntity({'source': ''}, "You can only change your status")

    def after_update_object(self, role_invite, data, view_kwargs):
        user = User.query.filter_by(email=role_invite.email).first()
        if 'status' in data and data['status'] == 'accepted':
            role = Role.query.filter_by(name=role_invite.role_name).first()
            event = Event.query.filter_by(id=role_invite.event_id).first()
            uer = UsersEventsRoles.query.filter_by(user=user).filter_by(event=event).filter_by(role=role).first()
            if not uer:
                uer = UsersEventsRoles(user, event, role)
                save_to_db(uer, 'Role Invite accepted')


In the above code, there are two main functions: before_update_object, which is executed before the entry in the role_invites table is updated, and after_update_object, which is executed after.

In before_update_object, we verify that the user is accepting or rejecting his/her own role invite and not someone else's. We also ensure that the user is allowed to update only the status of the role invite and not other sensitive data like the role_name or email. If the user tries to edit any field other than status, an error is shown, unless the user has organizer access, in which case he/she can edit the other fields of the role_invites table as well. The has_access() helper function performs these permission checks.

In after_update_object we make the entry in the users_events_roles table. From the role_invite parameter we get the exact values of the newly updated row, and we use them to find the user, event and role associated with this invite. We then create a UsersEventsRoles object with the user, event and role as constructor parameters and save it with the save_to_db helper function, which uses flask-sqlalchemy's session.add() and session.commit() to add the new entry directly to the database.
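For reference, here is a simplified sketch of what such a helper boils down to; the real helper may also log the message and handle errors:

def save_to_db(item, msg=''):
    # simplified: add the item to the session and commit it
    db.session.add(item)
    db.session.commit()


# accepting an invite then reduces to:
uer = UsersEventsRoles(user, event, role)
save_to_db(uer, 'Role Invite accepted')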

Thus we maintain the flow of the user event roles relationship. All the database entries and operations related to the users_events_roles table remain encapsulated from the client, so that users can use the various API features without worrying about the complications of the implementation.

 
