Implementing a suggestion box in Susper Angular JS Front-end

In Susper, we have implemented a suggestion box for our input box. This was done using the YaCy suggest API. This is how the box looks:

In this blog post, let us see how to implement such a box using HTML, CSS, Twitter Bootstrap and TypeScript. You can also check the code in the Susper repository.

The HTML code is simple and straightforward:

 

<div class="suggestion-box" *ngIf="results">
  <!-- suggestion entries, each linked via [routerLink] and [queryParams], go here -->
</div>

A few points to notice:

  • *ngIf="results" ensures that the box is displayed only when it has suggestions to display, and not otherwise.
  • The [routerLink] and [queryParams] attributes together link every result to the search page, with the correct query.

This is the CSS code:

a {
  text-decoration: none;
}

.suggestions {
  font-family: Arial, sans-serif;
  font-size: 17px;
  margin-left: 2.4%;
}

.query-suggestions:hover {
  background: #E3E3E3;
}

.suggestion-box {
  width: 635.2px;
  max-width: 100%;
  border: 1px solid rgba(150, 150, 150, 0.3);
  background-color: white;
  margin-left: -25.7px;
  position: absolute;
  box-shadow: 0px 0.2px 0px;
}

A few points to notice again:

    • box-shadow: This gives the suggestion box a drop-shadow effect, which looks really nice. The first three parameters are the dimensions (x-offset, y-offset, blur). The rgba() value (used here on the border) specifies a color, with the parameters (red component, green component, blue component, opacity).
    • text-decoration: This property is used to add or remove decoration, such as the underline on links.
    • font-family: The font specified here is Arial; if Arial is unavailable, a generic sans-serif font is used.

In the TypeScript file, there are two major tasks:

  1. Splice the results if there are more than five. A neat suggestion box should show at most five results, hence we splice the array:

this.results = this.results.concat(res[0]);  // concat returns a new array, so assign it back
if (this.results.length > 5) {
  this.results = this.results.splice(0, 5);
}
  2. Hide the suggestion-box if there are no suggestions from the API:
@Output() hidecomponent: EventEmitter<any> = new EventEmitter<any>();

this.query$.subscribe(query => {
  if (query) {
    this.autocompleteservice.getsearchresults(query).subscribe(res => {
      if (res) {
        this.results = res[1];
        if (this.results.length === 0) {
          this.hidecomponent.emit(1);
        } else {
          this.hidecomponent.emit(0);
        }
      }
    });
  }
});

If you want a more elaborate picture, you can view the entire HTML, CSS and TypeScript files of the auto-complete component.

This tutorial, http://4dev.tech/2016/03/tutorial-creating-an-angular2-autocomplete/, is very useful in case you want to implement the auto-complete suggestion feature from scratch.

In addition, this stack overflow thread has some interesting insights too: https://stackoverflow.com/questions/35881815/implementing-autocomplete-for-angular2


Using Flask-REST-JSONAPI’s Resource Manager In Open Event API Server

For the next-gen Open Event API Server, we are using flask-rest-jsonapi to write all the API endpoints. flask-rest-jsonapi is based on the JSON API 1.0 specification for JSON object responses.

In this blog post, I describe how I wrote the API schema and endpoints for an already existing database model in the Open Event API Server. Following this post, you can learn how to write similar classes for your own database models. Let's dive into how the API schema is defined for any resource in the Open Event API Server. A resource, here, is an object based on a database model; it provides a link between the data layer and your logical data abstraction. The Resource Manager provides three classes:

  1. Resource List
  2. Resource Detail
  3. Resource Relationship

We'll take a look at the Speakers API. First, here is the already implemented Speaker model:

class Speaker(db.Model):
    """Speaker model class"""
    __tablename__ = 'speaker'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False)
    photo = db.Column(db.String)
    website = db.Column(db.String)
    organisation = db.Column(db.String)
    is_featured = db.Column(db.Boolean, default=False)
    sponsorship_required = db.Column(db.Text)

    def __init__(self,
                 name=None,
                 photo_url=None,
                 website=None,
                 organisation=None,
                 is_featured=False,
                 sponsorship_required=None):
        self.name = name
        self.photo = photo_url
        self.website = website
        self.organisation = organisation
        self.is_featured = is_featured
        self.sponsorship_required = sponsorship_required

Here’s the Speaker API Schema:

class SpeakerSchema(Schema):
   class Meta:
       type_ = 'speaker'
       self_view = 'v1.speaker_detail'
       self_view_kwargs = {'id': '<id>'}
   id = fields.Str(dump_only=True)
   name = fields.Str(required=True)
   photo_url = fields.Url(allow_none=True)
   website = fields.Url(allow_none=True)
   organisation = fields.Str(allow_none=True)
   is_featured = fields.Boolean(default=False)
   sponsorship_required = fields.Str(allow_none=True)
class SpeakerList(ResourceList):  
   schema = SpeakerSchema
   data_layer = {'session': db.session,
                 'model': Speaker}
class SpeakerDetail(ResourceDetail):
   schema = SpeakerSchema
   data_layer = {'session': db.session,
                 'model': Speaker}
class SpeakerRelationship(ResourceRelationship):
   schema = SpeakerSchema
   data_layer = {'session': db.session,
                 'model': Speaker}

 

The last piece of code lists the actual endpoints in the __init__ file for flask-rest-jsonapi:

api.route(SpeakerList, 'speaker_list', '/events/<int:event_id>/speakers', '/sessions/<int:session_id>/speakers', '/users/<int:user_id>/speakers')
api.route(SpeakerDetail, 'speaker_detail', '/speakers/<int:id>')
api.route(SpeakerRelationship, 'speaker_event', '/speakers/<int:id>/relationships/event')
api.route(SpeakerRelationship, 'speaker_user', '/speakers/<int:id>/relationships/user')
api.route(SpeakerRelationship, 'speaker_session', '/speakers/<int:id>/relationships/sessions')

 

How to write API schema from database model?

Each column of the database model is a field in the API schema. These are marshmallow fields and can be of several data types – String, Integer, Float, DateTime, Url.

Three class definitions follow the Schema class.

  • List:
    SpeakerList class is the basis of endpoints:
api.route(SpeakerList, 'speaker_list', '/events/<int:event_id>/speakers',
          '/sessions/<int:session_id>/speakers',                                                             
          '/users/<int:user_id>/speakers')

This class contains methods that generate a list of speakers based on the id passed in view_kwargs. Let's say that '/sessions/<int:session_id>/speakers' is requested. As the view_kwargs here contains session_id, the query methods in the SpeakerList class fetch a list of speaker profiles related to the session identified by session_id.

The flask-rest-jsonapi allows GET and POST methods for ResourceList. When using these endpoints for POST, the before_create_object and before_post methods can be written. These methods are overridden from the base ResourceList class in flask-rest-jsonapi/resource.py when they are defined in Speaker’s class.
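For illustration, here is a minimal, hypothetical sketch of how such data-layer hooks can be attached to SpeakerList. The event_id column and the filtering logic are assumptions made for the example, not the actual Open Event implementation:

class SpeakerList(ResourceList):
    def query(self, view_kwargs):
        # narrow the list when it is requested as /events/<int:event_id>/speakers
        # (assumes Speaker has an event_id foreign key, not shown in the trimmed model above)
        query_ = self.session.query(Speaker)
        if view_kwargs.get('event_id') is not None:
            query_ = query_.filter_by(event_id=view_kwargs['event_id'])
        return query_

    def before_create_object(self, data, view_kwargs):
        # attach the parent event to a speaker POSTed on a nested endpoint
        if view_kwargs.get('event_id') is not None:
            data['event_id'] = view_kwargs['event_id']

    schema = SpeakerSchema
    data_layer = {'session': db.session,
                  'model': Speaker,
                  'methods': {'query': query,
                              'before_create_object': before_create_object}}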

  • Detail: 

SpeakerDetail class provides these endpoints:

 api.route(SpeakerDetail, 'speaker_detail', '/speakers/<int:id>')

The ResourceDetail class provides methods to handle the GET, PATCH and DELETE requests for these endpoints. Methods like before_get_object, before_update_object and after_update_object are derived from the ResourceDetail class. The endpoints return an object of the resource, based on the view_kwargs, in a JSON response.
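A similarly hedged sketch for the detail resource, with the hook bodies left as placeholders since the real logic is project-specific:

class SpeakerDetail(ResourceDetail):
    def before_get_object(self, view_kwargs):
        # e.g. resolve a nested URL into the speaker id before the object is fetched
        pass

    def before_update_object(self, obj, data, view_kwargs):
        # e.g. validate or adjust incoming data before the PATCH is applied
        pass

    schema = SpeakerSchema
    data_layer = {'session': db.session,
                  'model': Speaker,
                  'methods': {'before_get_object': before_get_object,
                              'before_update_object': before_update_object}}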

  • Relationship:

The SpeakerRelationship class, as you might have guessed, provides:

api.route(SpeakerRelationship, 'speaker_event', '/speakers/<int:id>/relationships/event')

api.route(SpeakerRelationship, 'speaker_user', '/speakers/<int:id>/relationships/user')

api.route(SpeakerRelationship, 'speaker_session', '/speakers/<int:id>/relationships/sessions')

SpeakerRelationship class provides methods to create, update and delete relationships between the speaker and related resources – events, users and sessions in this case.

The above is a bare-bones API schema example. The actual implementation in the Open Event Server also has many helper methods to cater to our specific needs.



Environment Monitoring with PSLab

In this post, we shall explore the working principle and output signals of particulate matter sensors, and see how the PSLab can be used as a data acquisition device for them.

Working Principle

A commonly used technique employed by particulate matter sensors is to study the scattering of light by dust particles, and to estimate the concentration from a parameter termed the 'occupancy factor'. The following image illustrates how the most elementary particle sensors work, using a photogate and a small heating element to ensure continuous air flow by convection.

Occupancy Rate

Each time a dust particle of aerodynamic diameter around 2.5 µm passes through the lit area, Mie scattering (the scattering of an electromagnetic plane wave by a homogeneous sphere whose diameter is comparable to the wavelength of the incident light) produces a photo-signal that is detected by the photosensor. In more accurate dust sensors, a single-wavelength source with a high quality factor, such as a laser, is used instead of LEDs, which typically have broader spectra.

The signal output from the photosensor is a series of intermittent digital pulses, one whenever a particle is detected. The occupancy ratio can be determined as the total time a positive signal was output by the sensor divided by the total averaging time (occupancy % = 100 × t_high / t_total). Readings can be taken over a fairly long period, such as 30 seconds, in order to get a more accurate representation of the occupancy ratio.

Using the Logic analyzer to capture and interpret signals

The PSLab has a built-in logic analyzer that can acquire data signals up to 67 seconds long at its highest sampling rate, and this period is more than sufficient to record and interpret a dataset from a dust sensor. An inexpensive dust sensor, the DSM501A, was chosen for the readings, and the following results were obtained:

Dust sensor readings from an indoor, climate controlled environment. After the 100 second mark, the windows were opened to expose the sensor to the outdoor environment.

A short averaging time has resulted in large fluctuations in the readings, and therefore it is important to maintain longer averaging times for stable measurements.

Recording data with a python script instead of the app

The output of the dust sensor must be connected to ID1 of the PSLab, and both devices must share a common ground, which is a prerequisite for the exchange of DC signals. All that is required is to start the logic analyzer in single-channel mode, wait for a specified averaging time, and interpret the acquired data:

Record_dust_sensor.py

from PSL import sciencelab   # import the required library
import time
import numpy as np

I = sciencelab.connect()     # create the instance
I.start_one_channel_LA(channel='ID1', channel_mode=1, trigger_mode=0)  # record all level changes
time.sleep(30)               # wait for 30 seconds while the PSLab gathers data from the dust sensor

a, _, _, _, e = I.get_LA_initial_states()       # read the status of the logic analyzer
raw_data = I.fetch_long_data_from_LA(a, 1)      # fetch the samples available in channel #1
I.dchans[0].load_data(e, raw_data)
stamps = I.dchans[0].timestamps                 # obtain a copy of the timestamps

if len(stamps) > 2:                             # at least one dust particle was detected
    if not I.dchans[0].initial_state:           # ensure the timestamps start from a LOW state
        stamps = stamps[1:] - stamps[0]
    diff = np.diff(stamps)    # individual time gaps between successive level changes

    lows = diff[::2]          # durations when a particle was not present
    highs = diff[1::2]        # durations when a particle was present
    low_occupancy = 100 * sum(lows) / stamps[-1]   # occupancy ratio (%)
    print(low_occupancy)      # datasheets of individual dust sensors also provide an equation
                              # to interpret the occupancy ratio as a concentration of
                              # particulate matter

Further Reading, and application notes:

[1] LED-based dust sensor application note


Analyzing Sensor Data on PSLab

The PSLab Android app and desktop app have the functionality of reading data from sensors. The raw sensor data received is in the form of a long string and needs to be parsed to understand what the data actually conveys.

The sensor data is unique in terms of the volume of data sent, the units of measurement of the data, etc.; however, none of this is reflected in the raw data. This blog post describes how the sensor data received by the Android/desktop app is parsed, interpreted and finally presented to the user for viewing.

The image below displays the raw data sent by the sensors


Fig: Raw Sensor data displayed below the Get Raw button

  • In order to understand the data sent from the sensor, we need to understand what the sensor does.
    • For example, the HMC5883L is a 3-axis magnetometer and it returns the value of the magnetic field along the x, y & z axes, on the order of nanoteslas.
    • Similarly, the DAC of the PSLab (MCP4728) can also be read like other sensors; it returns the values of its channels in millivolts.
    • The MPU6050 is a 3-axis accelerometer & gyroscope that returns the values of acceleration & angular velocity along the x, y & z axes in their respective SI units.
  • Each sensor has a sensitivity value, which can be modified to adjust the accuracy of the data received. For PSLab, the data returned is a float, with each data point occupying 4 bytes of memory at the highest sensitivity. Sensitivity, however, is not a reliable indicator of accuracy: each value received has many trailing digits after the decimal point, and since no sensor can achieve accuracy that high, the data after 2-3 decimal places is garbage and is not taken into consideration.
  • Some sensors, like the MPU6050, are configurable to a great extent (limits can be set on the range of data, the volume of data sent, etc.), whereas others are not configurable and simply send data at regular intervals.
  • In order to parse the above data: if the sensor returns a single value, the data is ready to be used as is. However, in most cases like the ones above, where the sensor returns multiple values, the data stream can be divided into equal parts, since each value occupies equal space, and each part can be stored in a separate variable (see the sketch after this list).
  • The stored data has to be presented to the user in an easily understandable format that makes clear what each value represents. For example, in the case of 3-axis sensors, the data of each axis must be distinctly represented to the user.
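As a rough illustration in Python (the format of the raw payload here is an assumption made for the example, not the actual PSLab protocol), a stream of values from a 3-axis sensor can be split into per-axis readings like this:

# Illustrative sketch only: assumes the raw payload has already been unpacked into a flat
# list of floats in which every group of three values is one (x, y, z) reading.
def parse_three_axis_data(raw_values, decimals=2):
    readings = []
    for i in range(0, len(raw_values) - 2, 3):
        x, y, z = raw_values[i:i + 3]
        # drop the meaningless trailing decimals mentioned above
        readings.append({'x': round(x, decimals),
                         'y': round(y, decimals),
                         'z': round(z, decimals)})
    return readings

# Example: six raw values become two distinct (x, y, z) readings
print(parse_three_axis_data([12.3456, -4.5678, 30.0012, 12.3398, -4.5701, 29.9987]))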

Shown below are the mock-ups of the sensor UIs in which each value has been distinctly represented.

         

Fig: Mock-ups for the sensor UIs (a) – HMC5883L (b) – MPU6050

Each UI has a card to display those values. These values are updated in real time and there are additional options to plot the data received in real time and in some cases also configure the sensor. In addition to that there are features for data logging where the data is recorded for a given time interval specified by the user and on completion of recording, calculations like the mean, standard deviation etc. are presented to the user.

Additional Resources

  1. Analyzing sensor data using Arduino, similar to method for PSLab – http://tronixstuff.com/2014/01/21/online-data-analysis-arduino-plotly/
  2. YouTube video to understand analysis of data from MPU6050 in Arduino – https://www.youtube.com/watch?v=taZHl4Mr-Pk

Addition Of Track Tags In Open Event Android App

In the Open Event Android app we only had a list of sessions in the schedule page without  knowing which track each session belonged to. There were several iterations of UI design for the same. Taking cues from the Google I/O 17 App we thought that the addition of track tags would be informative to the user.  In this blog post I will be talking about how this feature was implemented.

TrackTag Layout in Session

<Button
                android:layout_width="wrap_content"
                android:layout_height="@dimen/track_tag_height"
                android:id="@+id/slot_track"
                android:textAllCaps="false"
                android:gravity="center"
                android:layout_marginTop="@dimen/padding_small"
                android:paddingLeft="@dimen/padding_small"
                android:paddingRight="@dimen/padding_small"
                android:textStyle="bold"
                android:layout_below="@+id/slot_location"
                android:ellipsize="marquee"
                android:maxLines="1"
                android:background="@drawable/button_ripple"
                android:textColor="@color/white"
                android:textSize="@dimen/text_size_small"
                tools:text="Track" />

The important part here is that the track tag was modelled as a <Button/> to give users touch feedback via a ripple effect.

Tag Implementation in DayScheduleAdapter

All the dynamic colors for the track tags were handled separately in the DayScheduleAdapter.

final Track sessionTrack = currentSession.getTrack();
int storedColor = Color.parseColor(sessionTrack.getColor());
holder.slotTrack.getBackground().setColorFilter(storedColor, PorterDuff.Mode.SRC_ATOP);

holder.slotTrack.setText(sessionTrack.getName());
holder.slotTrack.setOnClickListener(v -> {
    Intent intent = new Intent(context, TrackSessionsActivity.class);
    intent.putExtra(ConstantStrings.TRACK, sessionTrack.getName());
    intent.putExtra(ConstantStrings.TRACK_ID, sessionTrack.getId());
    context.startActivity(intent);
});

As the colors associated with a track were all stored inside the track model we needed to obtain the track from the currentSession. We could get the storedColor by calling getColor() on the track object associated with the session.

In order to set this color as the background of the button, we needed to set a color filter with a PorterDuff mode called SRC_ATOP.

The name of the parent class is an homage to the work of Thomas Porter and Tom Duff, presented in their seminal 1984 paper titled “Compositing Digital Images”. In this paper, the authors describe 12 compositing operators that govern how to compute the color resulting of the composition of a source (the graphics object to render) with a destination (the content of the render target).

Ref : https://developer.android.com/reference/android/graphics/PorterDuff.Mode.html

To understand exactly what the SRC_ATOP mode does, we can refer to the image below. There are several other modes with similar functionality.

Fig: SRC image, DEST image, and the result of SRC ATOP DEST

Now we also set an onClickListener for the track tag so that it directs us to the track sessions page.

So now we have successfully added track tags to the app.



Addition Of Settings Page Animations in Open Event Android App

In order to bring in a uniform user flow in the Open Event Android app, there was a need to implement slide animations for various screens. It turned out that the settings page didn't adhere to such a rule. In this blog post I'll be highlighting how this was implemented in the app.

Android Animations

Animations for views/layouts in android were built in order to have elements of the app rely on real life motion and physics. These are mainly used to choreograph motion among various elements within a page or multiple pages. Here we would be implementing a simple slide_in and a slide_out animation for the settings page. Below I highlight how to write a simple animator.

slide_in_right.xml

<?xml version="1.0" encoding="utf-8"?>
<set

   xmlns:android="http://schemas.android.com/apk/res/android">
   <translate
       android:duration="@android:integer/config_shortAnimTime"
       android:fromXDelta="100%"
       android:toXDelta="0%" />

</set>

This XML file is mainly what performs the animation for us in the app. We just need to provide the translate element with fromXDelta, the position the view starts from, and toXDelta, the position it moves to.

@android:integer/config_shortAnimTime is a default integer resource for animation durations.

Here toXDelta="0%" is the final, on-screen position of the settings page, while fromXDelta="100%" refers to the virtual screen space to the right of the actual screen (outside the screen of the phone).

With this animation, it appears that the screen slides into the user's view from the right.

We will be using overridePendingTransition to override the default enter and exit animations for the activities which is fade_in and fade_out as of Android Lollipop.

The two integers you provide for overridePendingTransition(int enterAnim, int exitAnim) correspond to the two animations – removing the old Activity and adding the new one.

We have already defined the enterAnim here, which is slide_in_right. For the exit animation we define another animation, stay_in_place.xml, which has both fromXDelta and toXDelta set to 0%. This is done just to avoid any visible exit animation; a sketch of it follows.
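Based on that description, stay_in_place.xml is simply a translate animation whose start and end positions are both 0%. The sketch below mirrors slide_in_right.xml above; the exact contents of the file in the app may differ slightly:

<?xml version="1.0" encoding="utf-8"?>
<set
    xmlns:android="http://schemas.android.com/apk/res/android">
    <translate
        android:duration="@android:integer/config_shortAnimTime"
        android:fromXDelta="0%"
        android:toXDelta="0%" />
</set>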

We need to add this line after the super.onCreate() call within the SettingActivity:

@Override
protected void onCreate(Bundle savedInstanceState) {
    //Some Code
    super.onCreate(savedInstanceState);
    overridePendingTransition(R.anim.slide_in_right, R.anim.stay_in_place);
    //Some Code
}
Now we are done.

We can see below that the settings page animates with the prescribed animations.



Documenting Open Event API Using API-Blueprint

FOSSASIA's Open Event Server API documentation is done using API Blueprint. API Blueprint is a language/format used to describe an API in an API blueprint file, where a blueprint file (or a set of files) describes an API using the API Blueprint language. To work with the blueprint, the Apiary editor is used. This editor is responsible for rendering the API blueprint and presenting the result as user-readable API documentation. We create the API blueprint manually.

Using API Blueprint:
We create the API blueprint by first adding the name and metadata for the API we aim to design. This step looks like this:

FORMAT: V1
HOST: https://api.eventyay.com

# Open Event API Server

The Open Event API Server

# Group Authentication

The API uses JWT Authentication to authenticate users to the server. For authentication, you need to be a registered user. Once you have registered yourself as a user, you can send a request to get the access_token. You then need to use this access_token in the Authorization header while sending a request, in the following manner: `Authorization: JWT <access_token>`


The API blueprint starts with the metadata; here FORMAT and HOST are the defined metadata. The FORMAT keyword specifies the version of API Blueprint. HOST defines the host for the API.

The heading starts with # and the first heading is regarded as the name of the API.

NOTE – All headings start with one or more # symbols. Each symbol indicates the level of the heading: one # = top level, ## = second level, and so on. This is in compliance with normal markdown format.

Following the heading section comes the description of the API. Further headings are used to break up the description section.

Resource Groups:
—————————–
By using the group keyword at the start of a heading, we create a group of related resources. Just like in the snippet below, where we have created a Group Users.

# Group Users

For using the API you need (mostly) to register as a user. Registering gives you access to all non-admin API endpoints. After registration, you need to create your JWT access token to send requests to the API endpoints.


| Parameter | Description | Type | Required |
|:----------|-------------|------|----------|
| `name`  | Name of the user | string | - |
| `password` | Password of the user | string | **yes** |
| `email` | Email of the user | string | **yes** |

 

Resources:
——————
In the Group Users we have created a resource, Users Collection. The heading specifies the URI used to access the resource, inside square brackets after the heading name. We have used parameters in the resource URI here, which brings us to how to add parameters to the URI. The code below shows how to add parameters to the resource URI.

## Users Collection [/v1/users{?page%5bsize%5d,page%5bnumber%5d,sort,filter}]
+ Parameters
    + page%5bsize%5d (optional, integer, `10`) - Maximum number of resources in a single paginated response.
    + page%5bnumber%5d (optional, integer, `2`) - Page number to be fetched for the paginated response.
    + sort (optional, string, `email`) - Sort the resources according to the given attribute in ascending order. Append '-' to sort in descending order.
    + filter(optional, string, ``) - Filter according to the flask-rest-jsonapi filtering system. Please refer: http://flask-rest-jsonapi.readthedocs.io/en/latest/filtering.html for more.

 

Actions:
————–
An action is specified with a sub-heading inside a resource, giving the name of the action followed by the HTTP method in square brackets.
Before we go further, let us discuss what a payload is. A payload is an HTTP transaction message, including its discussion and any additional assets such as an entity-body validation schema.

There are two payloads inside an Action:

  1. Request: It is a payload containing one specific HTTP Request, with Headers and an optional body.
  2. Response: It is a payload containing one HTTP Response.

A payload may have an identifier-a string for a request payload or an HTTP status code for a response payload.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)


Types of HTTP methods for Actions:

  • GET – In this action, we simply send header data like Accept and Authorization, and no body. Along with it we can send some GET parameters like page[size]. There are two cases for GET: List and Detail. For example, if we consider users, a GET for List retrieves information about all users in the response, while a GET for Detail retrieves information about a particular user.

The API Blueprint examples implementation of both GET list and detail request and response are as follows.

### List All Users [GET]
Get a list of Users.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
            "meta": {
                "count": 2
            },
            "data": [
                {
                    "attributes": {
                        "is-admin": true,
                        "last-name": null,
                        "instagram-url": null,

 

### Get Details [GET]
Get a single user.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
            "data": {
                "attributes": {
                    "is-admin": false,
                    "last-name": "Doe",
                    "instagram-url": "http://instagram.com/instagram",

 

  • POST – In this action, apart from the header information, we also need to send body data. The data must conform to the JSON API specification. A POST body for the users API would look something like this:
### Create User [POST]
Create a new user using an email, password and an optional name.

+ Request (application/vnd.api+json)

    + Headers

            Authorization: JWT <Auth Key>

    + Body

            {
              "data":
              {
                "attributes":
                {
                  "email": "example@example.com",
                  "password": "password",


A POST request with this data, would create a new entry in the table and then return in jsonapi format the particular entry that was made into the table along with the id assigned to this new entry.

  • PATCH – In this action, we change or update an already existing entry in the database. So it has header data like all other requests, and body data which is almost the same as in POST, except that it also needs to mention the id of the entry we are trying to modify.
### Update User [PATCH]
+ `id` (integer) - ID of the record to update **(required)**

Update a single user by setting the email, password and/or name.

Authorized user should be same as user in request body or must be admin.

+ Request (application/vnd.api+json)

    + Headers

            Authorization: JWT <Auth Key>

    + Body

            {
              "data": {
                "attributes": {
                  "password": "password1",
                  "avatar_url": "http://example1.com/example1.png",
                  "first-name": "Jane",
                  "last-name": "Dough",
                  "details": "example1",
                  "contact": "example1",
                  "facebook-url": "http://facebook.com/facebook1",
                  "twitter-url": "http://twitter.com/twitter1",
                  "instagram-url": "http://instagram.com/instagram1",
                  "google-plus-url": "http://plus.google.com/plus.google1",
                  "thumbnail-image-url": "http://example1.com/example1.png",
                  "small-image-url": "http://example1.com/example1.png",
                  "icon-image-url": "http://example1.com/example1.png"
                },
                "type": "user",
                "id": "2"
              }
            }

Just like in POST, after we have updated our entry, we get back as response the new updated entry in the database.

  • DELETE – In this action, we delete an entry from the database. The entry in our case is soft-deleted by default, which means that instead of deleting it from the database, we set the deleted_at column to the time of deletion. For deleting, we just need to send header data with a DELETE request to the proper endpoint. If deleted successfully, we get the response "Object successfully deleted".
### Delete User [DELETE]
Delete a single user.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
          "meta": {
            "message": "Object successfully deleted"
          },
          "jsonapi": {
            "version": "1.0"
          }
        }


How do we check everything after manually entering all of this? We can use the Apiary website to render it, or simply use a different renderer. How? Check out my next blog post on Apiary and Aglio.

Learn more about api blueprint here: https://apiblueprint.org/


Isolating each request in Dredd testing for Open Event Server

In the Open Event Server, we are using API Blueprint along with Aglio for the API documentation in the project. The foremost concern with any API documentation is making sure it stays updated to match the actual implementation of the API backend.

It was only a matter of days before we realized we needed some sort of testing mechanism for the API documentation. Dredd came to our rescue at that moment. Quoting the official documentation:

Dredd is a language-agnostic command-line tool for validating API description document against backend implementation of the API. Dredd reads your API description and step by step validates whether your API implementation replies with responses as they are described in the documentation. 

For loading DB fixtures, handling authentication, sessions etc. Dredd Hooks are used. Quoting their official documentation:

Similar to any other testing framework, Dredd supports executing code around each test step. Hooks are code blocks executed in defined stage of execution lifecycle. In the hooks code you have an access to compiled HTTP transaction object which you can modify.

Now, after setting up Dredd testing successfully, the challenge was to make sure every request remains isolated with respect to the database, i.e. making sure that database changes made by one request do not affect the database used by another request, similar to what is done in the case of unit tests ;).

In unit tests, you can just access the session of the test after it is completed and gracefully roll back the changes made by the test. But as Dredd is completely separate from the API backend being tested, accessing that session is not possible.

So the only choice that remains is purging the database after each request and recreating it with the required data before the next one. To do this, we initialize a vanilla Flask app and attach a database to it:

import dredd_hooks as hooks
from flask import Flask
from flask_migrate import Migrate, stamp

# `db` is the project's SQLAlchemy instance, imported from the application package;
# `stash` is a plain dict used to share state between hooks
stash = {}

@hooks.before_all
def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')
    db.init_app(app)
    Migrate(app, db)
    stash['app'] = app
    stash['db'] = db

Before each request, the pre-existing database is purged to remove the changes made by the previous request. After that, a new database is created and repopulated with the necessary static values.

@hooks.before_each
def before_each(transaction):
   with stash['app'].app_context():
       db.engine.execute("drop schema if exists public cascade")
       db.engine.execute("create schema public")
       db.create_all()
       stamp()
       populate_db_with_required_static_values()

Database fixtures specific to the request can be loaded in the before hook in the following way:

@hooks.before("Users > Users Collection > List All Users")
def user_get_list(transaction):
   """
   GET /users
   :param transaction:
   :return:
   """
   with stash['app'].app_context():
       user = UserFactory()
       db.session.add(user)
       db.session.commit()
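Once the hooks are in place, Dredd can be run against the blueprint and a locally running server; the file paths below are illustrative, not the project's actual layout:

dredd docs/api_blueprint.apib http://127.0.0.1:5000 --language python --hookfiles ./hooks.py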



Generate a Live feed from Facebook for the Open Event Android App

To increase user engagement in the Open Event Android app, a feed was a welcome feature for the users. In order to fetch data from a public Facebook page, the Facebook page id first has to be fetched via an API GET request. To fetch the page id, the page name has to be included in the API request, along with an access token generated from the Facebook developers page itself.

Normally, fetching data from Facebook requires the user to log in to Facebook, and Facebook requires its SDK to be set up in the Android app itself to make the requests. This defeats the purpose of showing the feed of a public Facebook page.

Solution

The solution involves making custom API requests to the Facebook servers in order to fetch the data. Three API requests have to be made. The first fetches the access token that lets the developer access the Facebook servers for the data; it is a one-time call, so just make the API request in the browser itself and fetch the token with the help of the app ID and the app secret. The second fetches the page id of a particular Facebook page from the given Facebook page name. The third fetches the data using the page id, with the fields specified in the query. The API requests are made with the help of the Retrofit library, and object communication throughout the app is handled by the RxJava library.

Implementation

Create a Facebook developer account. Generate the app ID for your app along with its secret.

Make a GET request to fetch the access token from the app ID and app secret.

https://graph.facebook.com/oauth/access_token?client_id={APP_ID}&client_secret={APP_SECRET}&grant_type=client_credentials

#NOTE: Remove the brackets!

Make the appropriate JSON getters and setters of the feed from the GET request. Use http://www.jsonschema2pojo.org/ to autogenerate them with the help of a JSON input.

Create the Facebook Graph API interface to make the GET requests. Make them observable for object communication using rxJava.

public interface FacebookGraphAPI {

  @GET("/{event_name}")
  Observable<FacebookPageId> getPageId(@Path("event_name") String eventName, @Query("access_token") String accessToken);

  @GET("/{page_id}/feed")
  Observable<Feed> getPosts(@Path("page_id") String pageId, @Query("fields") String fields, @Query("access_token") String accessToken);

}

Create an API client to build the retrofit along with the OkHttpClient

public final class APIClient {

  private static final int CONNECT_TIMEOUT_MILLIS = 20 * 1000; // 20s

  private static final int READ_TIMEOUT_MILLIS = 50 * 1000; // 50s

  private static FacebookGraphAPI facebookGraphAPI;

  private static Retrofit.Builder retrofitBuilder;

  static {
      OkHttpClient.Builder okHttpClientBuilder = new OkHttpClient().newBuilder()
              .connectTimeout(CONNECT_TIMEOUT_MILLIS, TimeUnit.MILLISECONDS)
              .readTimeout(READ_TIMEOUT_MILLIS, TimeUnit.MILLISECONDS);
      if (BuildConfig.DEBUG)
          okHttpClientBuilder.addNetworkInterceptor(new StethoInterceptor());
      OkHttpClient okHttpClient = okHttpClientBuilder.addInterceptor(new HttpLoggingInterceptor()
              .setLevel(HttpLoggingInterceptor.Level.BASIC))
              .build();

      retrofitBuilder = new Retrofit.Builder()
              .addConverterFactory(JacksonConverterFactory.create(new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)))
              .client(okHttpClient);
  }

  public static FacebookGraphAPI getFacebookGraphAPI() {
      if (facebookGraphAPI == null)
          facebookGraphAPI = retrofitBuilder
                  .baseUrl(Urls.FACEBOOK_BASE_URL)
                  .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
                  .build()
                  .create(FacebookGraphAPI.class);
      return facebookGraphAPI;
  }

}

Create a utility method to get the relative time from the timestamp of the post.

private static final Locale defaultLocale = Locale.getDefault();
private static final String iso8601WithTimezone = "yyyy-MM-dd'T'HH:mm:ssZ";
private static final SimpleDateFormat ISO_TIMEZONE_FORMATTER = new SimpleDateFormat(iso8601WithTimezone, defaultLocale);

@NonNull
private static Date getDate(@NonNull SimpleDateFormat formatter, @NonNull String isoDateString) throws ParseException {
  return formatter.parse(isoDateString);
}

public static String getRelativeTimeFromTimestamp(String timeStamp) throws ParseException {
  Date timeCreatedDate = getDate(ISO_TIMEZONE_FORMATTER, timeStamp);

  return (String) android.text.format.DateUtils.getRelativeTimeSpanString(
          (timeCreatedDate.getTime()),
          System.currentTimeMillis(), android.text.format.DateUtils.SECOND_IN_MILLIS);
}

Create a feed fragment, download and subscribe to the feed with the help of the retrofit and rxjava library. Notify the adapter when the data set changes.

private void downloadFeed() {
  APIClient.getFacebookGraphAPI()
          .getPosts(sharedPreferences.getString(ConstantStrings.FACEBOOK_PAGE_ID, null),
                  getContext().getResources().getString(R.string.fields),
                  getContext().getResources().getString(R.string.facebook_access_token))
          .subscribeOn(Schedulers.io())
          .observeOn(AndroidSchedulers.mainThread())
          .subscribe(feed -> {
              feedItems.clear();
              feedItems.addAll(feed.getData());
              feedAdapter.notifyDataSetChanged();
              handleVisibility();
          }, throwable -> {
              Snackbar.make(swipeRefreshLayout, getActivity()
                      .getString(R.string.refresh_failed), Snackbar.LENGTH_LONG)
                      .setAction(R.string.retry_download, view -> refresh()).show();
              Timber.d("Refresh not done");
          }, () -> {
              swipeRefreshLayout.setRefreshing(false);
              Timber.d("Refresh done");
          });
}

Conclusion

The Open Event Android project currently only fetches the feed from Facebook. Feeds from Twitter and YouTube will be fetched too in the near future. Watch out for the next blog post on how to display the comments of a Facebook post in a DialogFragment over the feed fragment.

For more information on the code, please take help from here.


Iteration through the Android File System in the phimpme project

Android uses a single file system structure which has a single root. The task involved creating a custom folder chooser to whitelist folders while displaying images in the gallery in the Phimpme Photo App. The challenge arose in iterating over the files in the most efficient way. The best possible way to represent the file structure is as a tree data structure, as given below.

Current Alternative

Currently, the MediaStore class contains metadata for all available media on both internal and external storage devices. Since it only returns a list of files of a particular media format, it prevents the developer from customizing the structure in their own way.

Implementation

Create a public class which represents the file tree. Since each subtree of the tree can itself be represented as a file tree, the parent of a node is also a FileTree object. Therefore, declare a list of FileTree objects as the children of the node, a FileTree object as the parent of the particular node, and the node's own File object, along with the filepath and display name strings associated with it.

import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class FileTree {
 public final File file;          // the node's own File object
 public final String filepath;
 public final String displayName;
 public final List<FileTree> childFileTreeList = new ArrayList<>();
 public final FileTree parent;
 public boolean hasMedia = false;

 public FileTree(File file, String displayName, FileTree parent) {
    this.file = file;
    this.filepath = file.getAbsolutePath();
    this.displayName = displayName;
    this.parent = parent;
 }
}

For iterating through the file system, we create a recursive function which is called on the root of the Android file system. If a particular file is a directory, it is traversed with the help of a depth-first traversal; otherwise, the file is added to the list of files. The code snippet below shows the recursive function.

public static void walkDir(FileTree fileTree, List<File> files) {
  File listFile[] = fileTree.file.listFiles();

  if (listFile != null) {
      for (File file : listFile) {
          if (file.isDirectory()) {
              FileTree childFileTree = new FileTree(file, file.getName(), fileTree);
              fileTree.childFileTreeList.add(childFileTree);
              walkDir(childFileTree, files);
          }
          else {
                  files.add(file);

          }
      }
  }
}
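A hypothetical usage sketch (the starting directory here is just an example; android.os.Environment provides the primary external storage path):

// Hypothetical usage: build the tree from external storage and collect every file found
File root = Environment.getExternalStorageDirectory();
FileTree rootTree = new FileTree(root, root.getName(), null);
List<File> allFiles = new ArrayList<>();
walkDir(rootTree, allFiles);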

Conclusion

The Android file system iteration was used to whitelist folders so that the images in those folders could neither be uploaded nor edited.

For the complete guide to whitelisting folders, navigate here
