Developing MultiLinePlotter App for Loklak

MultiLinePlotter is a web application which uses the Loklak API under the hood to plot multiple tweet aggregations for different user-provided query words in the same graph. The user can give several query words, and a separate line for each query will be plotted in the same graph. In this way, users can compare the tweet distributions for various keywords and visualise the comparison. All the searched queries are shown under the search record section. Clicking on a record pops up a dialog box where the individual tweets related to the query word are displayed. Users can also remove a series from the plot dynamically by just pressing the Remove button beside the query word in the record section. The app is presently hosted on the Loklak apps site.

Related issue – https://github.com/fossasia/apps.loklak.org/issues/225

Getting started with the app

Let us delve into the working of the app. The app uses the Loklak aggregation API to get the data.

A call to the API looks something like this:

http://api.loklak.org/api/search.json?q=fossasia&source=cache&count=0&fields=created_at

A small snippet of the aggregation returned by the above API request is shown below.

"aggregations": {"created_at": {
    "2017-07-03": 3,
    "2017-07-04": 9,
    "2017-07-05": 12,
    "2017-07-06": 8,
}}

The API provides a nice date vs. number-of-tweets aggregation. Now we need to plot this. For plotting, Morris.js has been used: a lightweight JavaScript library for visualising data.

One of the main features of this app is the addition and removal of multiple series from the graph dynamically. How do we achieve that? This can be achieved by manipulating the Morris.js data list whenever a new query is made. Let us understand this in steps.

At first, the data is fetched using the Angular $http service.

$http.jsonp('http://api.loklak.org/api/search.json?callback=JSON_CALLBACK',
            {params: {q: $scope.tweet, source: 'cache', count: '0', fields: 'created_at'}})
                .then(function (response) {
                    $scope.getData(response.data.aggregations.created_at);
                    $scope.plotData();
                    $scope.queryRecords.push($scope.tweet);
                });

Once we get the data, the getData function is called and the aggregation data is passed to it. The query word is also stored in the queryRecords list for future use.

In order to plot a line graph, Morris.js requires a data object which contains the required values for a series. Given below is an example of such a data object.

data: [
    { x: '2006', a: 100, b: 90 },
    { x: '2007', a: 75,  b: 65 },
    { x: '2008', a: 50,  b: 40 },
    { x: '2009', a: 75,  b: 65 },
],

For every ‘x’, ‘a’ and ‘b’ will be plotted. Thus two lines will be drawn. Our app will also maintain a data list like the one shown above, however, in our case, the data objects will have a variable number of keys. One key will determine the ‘x’ value and other keys will determine the ordinates (number of tweets).
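
For example, after searching for ‘fossasia’ and ‘google’, the data list maintained by the app might look like this (the counts here are purely illustrative):

data: [
    { day: '2017-07-03', fossasia: 3, google: 10 },
    { day: '2017-07-04', fossasia: 9, google: 7 },
],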

All the data objects present in the data list need to be updated whenever a new search is done.

The getData function does this for us.

var value = $scope.tweet;
for (var date in aggregations) {
    var present = false;
    for (var i = 0; i < $scope.data.length; i++) {
        var item = $scope.data[i];
        if (item['day'] === date) {
            // Date already present: add the new query word as another key
            item[value] = aggregations[date];
            $scope.data[i] = item;
            present = true;
            break;
        }
    }
    if (!present) {
        // Date not seen before: insert a new object for this date and query word
        $scope.data.push({day: date, [value]: aggregations[date]});
    }
}


The for loop in the above code snippet updates the global data list used by Morris.js. It iterates over the dates in the aggregation, extracts the object corresponding to a particular date, and adds the new query word as a key with the number of tweets on that date as the value. If a date is not already present in the list, it inserts a new object corresponding to the date and query word. Once our data list is updated, we are ready to redraw the graph with the updated data. This is done using the plotData function. The plotData function simply checks the user-selected graph type. If the selected type is aggregations, it calls plotAggregationGraph() to redraw the aggregations plot.

$scope.remove = function(record) {
        $scope.queryRecords = $scope.queryRecords.filter(function(e) {
            return e !== record });

        $scope.data.forEach(function(item) {
            delete item[record];
        });

        $scope.data = $scope.data.filter(function(item) {
            return Object.keys(item).length !== 1;
        });

        $scope.ykeys = $scope.ykeys.filter(function(item) {
            return item !== record;
        });

        $scope.labels = $scope.labels.filter(function(item) {
            return item !== record;
        });

        $scope.plotData();
}

The above function scans the data list, deletes the selected record's key from every object, and then uses the filter method of JavaScript arrays to drop the objects that are left with only the date key. It also removes the corresponding entries from the labels and ykeys arrays. Finally, it calls the plotData function once again to redraw the plot.

Given below is a sample plot generated by this app with the query words – google, android, microsoft, samsung.


Conclusion

This blog post explained how multiple series are plotted dynamically in the MultiLinePlotter app. Apart from the aggregations plot, the app also plots tweet statistics like the maximum and average number of tweets containing a query word, and visualises them using a stacked bar chart. I will be discussing these in my subsequent blogs.


Implementing Registration API in Open Event Front-end

In this post I will discuss how I implemented the registration feature in Open Event Front-end using the Open-Event-Orga API. The project uses Ember Data for consumption of the API in the Ember application. The front-end sends a POST request to the Open Event Orga Server, which verifies it and creates the user.

We use a custom serialize method to trim the request payload of the user model by creating a custom user serializer. Let's see how we did it.

Implementing register API

The register API takes an email & password in the payload of a POST request, which are validated in the register-form component using the semantic-ui form validation. After validating the inputs from the user, we bubble the save action from the component up to the controller.

submit() {
  this.onValid(() => {
    this.set('errorMessage', null);
    this.set('isLoading', true);
    this.sendAction('submit');
  });
}

In the controller we have the `createUser()` action, where we send a POST request to the server using the save() method, which returns a promise.

createUser() {
  var user = this.get('model');
  user.save()
    .then(() => {
      this.set('session.newUser', user.get('email'));
      this.set('isLoading', false);
      this.transitionToRoute('login');
    })
    .catch(reason => {
      this.set('isLoading', false);
      if (reason.hasOwnProperty('errors') && reason.errors[0].status === 409) {
        this.set('errorMessage', this.l10n.t('User already exists.'));
      } else {
        this.set('errorMessage', this.l10n.t('An unexpected error occurred.'));
      }
    });
}

The `user.save()` call returns a promise, so we handle it using a then-catch clause. If the request is successful, the `then` clause executes and we redirect to the login route. If the request fails, we check whether the status is 409, which translates to a duplicate entry, i.e. the user already exists on the server.

Serializing the user model using custom serializer

Ember lets us customise the payload using serializers for models. A serializer has a serialize function where we can trim the payload of the model. In the user serializer we check if the request is for record creation using `options.includeId`. If it is, we trim the payload using lodash's `pick` method and keep only email & password in the payload of the POST request.

serialize(snapshot, options) {
  const json = this._super(...arguments);
  if (options && options.includeId) {
    json.data.attributes = pick(json.data.attributes, ['email', 'password']);
  }
  return json;
}

Thank you for reading the blog, you can check the source code for the example here.


Face detection in Phimpme Android’s Camera

The Phimpme Android application comes with a well-featured camera application which offers almost all the functionality an advanced camera user searches for. It comes with a wide range of options to apply different scene modes in the camera and also to detect faces using the front or the back camera of the device. In this tutorial, I will be discussing how we achieved the face detection functionality in Phimpme.

In the Phimpme application, we have an option in the settings to enable face detection, as depicted in the screenshot below. After enabling it, the camera starts detecting faces and draws rectangular boxes around the faces it detects.

I will explain step by step how to achieve this, using some code snippets.

Step 1

First, we have to check whether the device supports the face detection functionality, using Android's Camera.Parameters class, to avoid unnecessary application crashes.
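
A minimal sketch of such a check (assuming an already opened android.hardware.Camera instance named camera; this is not Phimpme's exact code) could be:

// Sketch: only enable face detection if the hardware supports it
Camera.Parameters params = camera.getParameters();
if (params.getMaxNumDetectedFaces() > 0) {
    // Face detection is supported; safe to register a listener and start detection
} else {
    // Not supported; keep the feature disabled to avoid a crash
}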

After the check, we have to create a new class named MyFaceDetectionListener which implements the face detection listener interface (here Open Camera's CameraController.FaceDetectionListener, a wrapper over Android's Camera.FaceDetectionListener). The class overrides the onFaceDetection function, which receives the array of detected faces as its parameter.

class MyFaceDetectionListener implements CameraController.FaceDetectionListener {
  @Override
  public void onFaceDetection(CameraController.Face[] faces) {
    // Copy the detected faces into a field for the drawing code to use
    faces_detected = new CameraController.Face[faces.length];
    System.arraycopy(faces, 0, faces_detected, 0, faces.length);
  }
}

Step 2

After creating this class, we need to open the camera of the application and set the face detection listener on it. This can be done with the code snippet provided below.

camera = Camera.open(cameraId);

We can open the front camera or the back camera by simply changing the cameraId. If we want to open the front camera, we set the camera Id value to 1, and if we want the back camera, we set the camera Id to 0.
After this, we can set the face detection listener on the camera. This can be done using the below code snippet.

mCamera.setFaceDetectionListener(fDetectionListener);
mCamera.startFaceDetection();

The setFaceDetectionListener function takes an object of the class we created in step 1 as its parameter, and startFaceDetection() calls Android's predefined routine to start detecting faces. The listener object can be created and initialised with the help of the code snippet below.

MyFaceDetectionListener fDetectionListener = new MyFaceDetectionListener();

After we have set the detection listener on the camera, it will call the overridden function onFaceDetection as soon as it detects a face. But how does the user know whether a face has been detected or not? For this, we have to draw a rectangular box approximately the size of the detected face. This can be done with the following code snippet.

int l = faces[i].rect.left;
int r = faces[i].rect.right;
int t = faces[i].rect.top;
int b = faces[i].rect.bottom;
Rect uRect = new Rect(l, t, r, b);
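
Note that the Camera API reports face rectangles in the camera's own coordinate system, which ranges from -1000 to 1000, so before drawing, the rectangle has to be mapped into view coordinates. A sketch of this mapping follows the pattern from the Android documentation rather than Phimpme's exact code (mirror, displayOrientation, the view dimensions, canvas and paint are assumed to be available):

// Sketch: map a face rect from camera space (-1000..1000) into view space and draw it
Matrix matrix = new Matrix();
matrix.setScale(mirror ? -1 : 1, 1);           // mirror horizontally for the front camera
matrix.postRotate(displayOrientation);         // align with the display rotation
matrix.postScale(viewWidth / 2000f, viewHeight / 2000f);
matrix.postTranslate(viewWidth / 2f, viewHeight / 2f);

RectF faceRect = new RectF(uRect);
matrix.mapRect(faceRect);                      // faceRect is now in view coordinates
canvas.drawRect(faceRect, paint);              // draw the box with a pre-configured Paint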

To get the full source code, please check out the Phimpme Android github repository.

Resources

  1. Phimpme Android Github repository
  2. Complete tutorial on face detection in Android
  3. Leafpic github repository
  4. Android Camera API Google developer page

Multiple Color Effects in Phimpme Camera

The Phimpme Android camera comes with an option to switch between various color effects, along with its various other functionality. To select different color modes, we added a toggle button at the top right corner of the camera interface which cycles through the range of available color effects; on a long click, the toggle button resets the effect to normal. To show the functionality of the toggle button, we made use of the Showcase view in the application, which displays all the functionality of the toggle button on the first run of the application.

In this tutorial, I will be discussing how we implemented the color effects feature in the Phimpme Android application with the help of some code snippets.

Step 1

Firstly, we have to create a toggle button in the camera interface and set an OnClickListener on it to change between the various color effects on each button click. This can be done with the following code snippet.

toggle.setOnClickListener(new View.OnClickListener() {
  @Override
  public void onClick(View v) {
       // Actions here: switch to the next color effect
  }
});

Similarly, we have to set a long click listener on the toggle button, which will handle the reset of the color effect in the application.

Step 2

The next thing we need is to extract all the color modes supported by the device and create an ArrayList of them, so that we can pick the respective value by just increasing an index on each toggle button click, as sketched below.
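
A minimal sketch of this extraction using Android's Camera.Parameters (assuming an open camera instance; the variable names are illustrative, not Phimpme's exact code):

// Sketch: collect the color effects supported by the device into a list
Camera.Parameters params = camera.getParameters();
List<String> colorEffect = new ArrayList<>();
List<String> supported = params.getSupportedColorEffects();
if (supported != null) {           // may be null if the device supports no effects
    colorEffect.addAll(supported); // e.g. "none", "mono", "negative", "sepia", ...
}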

Now we have all the supported color modes, along with the normal mode, stored in the list. For instance,

  1. Mono
  2. Negative
  3. Solarize
  4. Sepia
  5. Neon

Hence, on a button click, we have to get the color value using the list index and set that value on the camera parameters from which we extracted the supported color effects.

For this, we can make use of a static variable named colorNum initialised to 0; on each button click we increment this variable by 1 and set the color effect using the code snippet provided below.

final String color = colorEffect.get(colorNum);
CameraController.SupportedValues supported_values = camera_controller.setColorEffect(color);
if (supported_values != null) {
    color_effects = supported_values.values;
    applicationInterface.setColorEffectPref(supported_values.selected_value);
}
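
One detail worth noting: colorNum has to wrap around once it passes the last supported effect, otherwise the list index would eventually run out of bounds. A simple way to advance it inside the click listener (a sketch, assuming the colorEffect list built earlier):

// Sketch: advance colorNum on each click, wrapping around at the end of the list
colorNum = (colorNum + 1) % colorEffect.size();
final String color = colorEffect.get(colorNum);  // then apply it as shown above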

And in the long click listener of the toggle button, we can set the variable back to 0 and reset the color effect accordingly.

To get the full source code for changing the color effects in the camera, and to learn about adding the Showcase view we used here to demonstrate the functionality, please refer to the Phimpme Android repository.

Resources

  1. Open camera Github repository
  2. Color effects in Android camera
  3. Camera API developer page
  4. Amlcurran Showcaseview


Working With Inter-related Resource Endpoints In Open Event API Server

In this blogpost I will discuss how the inter-related resource endpoints work. These are endpoints which involve two related resource objects, both of which are also related to the same third resource.


This post discusses the endpoints related to the Sessions model. In the API server, there exists a relationship between event and sessions. Apart from this, a session also has relationships with microlocations, tracks, speakers and session-types. Let's take a look at the endpoints related to the above.

`/v1/tracks/<int:track_id>/sessions` is a list endpoint which can be used to list and create the sessions related to a particular track of an event. To get the list, we define the query() method in the ResourceList class.

The query method is executed for GET requests, and its if clause looks for track_id in the view_kwargs dict. When a request is made to `/v1/tracks/<int:track_id>/sessions`, the track id will be present as 'track_id' in the view_kwargs. The tracks are filtered based on the id passed here and then joined in the query with all session objects from the database.

For the POST method, we need to take the track_id from view_kwargs and pass it into the track_id field of the database model. This is achieved using flask-rest-jsonapi's before_create_object() method.

When a POST request is made to `/v1/tracks/<int:track_id>/sessions`, the view_kwargs dict will have 'track_id' in it. So if track_id is present in the url params, we first ensure that a track with the passed id exists, and only then proceed to create a sessions object under the given track. The safe_query() method is a generic custom method written to check for such things. The model is passed along with the filter attribute and a field to include in the error message. This method throws an ObjectNotFound exception if no such object exists for a given id.

We also need to take care of the permissions for these endpoints. As the decorators are called even before schema validation, it was difficult to get the event_id for permissions without adding highly endpoint-specific code to the permission manager core, leading to a loss of generality. So the leave_if parameter of the permission check was used to overcome this issue. Since the permissions manager isn't fully developed yet, this is to be changed in the improved implementation of the permissions manager.

Similar implementations were done for microlocations and session types; they are not all explained in this blogpost. For the extended code, take a look at the source code of this schema.

For the speaker relation, a few things were different, because speakers-sessions is a many-to-many relationship. Let's take a look at how that was handled.

As it is a many-to-many relationship, an association table was used with flask-sqlalchemy. So for the query() method, this association table is queried after extracting the speaker_id from the view_kwargs dict.

For a POST request on `/v1/speakers/<int:speaker_id>/sessions`, flask-rest-jsonapi's after_create_object() method is used to insert the relationship into the association table. This method takes the following parameters: self, obj, data, view_kwargs.

Now view_kwargs contains the url parameters, so we check for speaker_id in view_kwargs. If it is present, then before proceeding to insert data, we ensure that a speaker exists with that id using the safe_query() method described above. After that, the obj argument of the method is used. This contains the object that was created in the previous method. So once the sessions object has been created and we are sure that a speaker exists with the given speaker_id, the speaker just has to be appended to obj.speakers so that this relationship tuple is inserted into the association table.

This updates the association table, 'speakers_session' in this case. The other such endpoints are being worked on in a similar fashion and will be consolidated as part of a set of robust APIs, along with the improved permissions manager for the Open Event API Server.

Additional Resources

Code, Issues and Pull Request involved


Adding API endpoint to SUSI.AI for Skill Historization

SUSI Skill CMS is an editor to write and edit skills easily. It follows an API-centric approach where the SUSI server acts as the API server. Using Skill CMS we can browse the history of a skill, where we get the commit ID, the commit message and the name of the author who made the changes to that skill. In this blogpost we will see how to fetch the complete commit history of a skill in the susi_skill_data repository. A skill is a set of intents; one text file represents one skill, and it may contain several intents which all belong together. SUSI skills are stored in the susi_skill_data repository, and we can access any skill based on the four-tuple of parameters model, group, language and skill. For managing version control in the skill data repository, the following dependency is added to build.gradle. JGit is a library which implements Git functionality in Java.

dependencies {
 compile 'org.eclipse.jgit:org.eclipse.jgit:4.6.1.201703071140-r'
}

To implement our servlet we extend it from AbstractAPIHandler. SUSI Server provides an abstract class AbstractAPIHandler which extends HttpServlet and implements the APIHandler interface.

public class HistorySkillService extends AbstractAPIHandler implements APIHandler {}

The AbstractAPIHandler checks the permissions of the user by comparing the user's role with the minimal base user role of each servlet. Thus, to specify the user permission for a servlet, we need to override the getMinimalBaseUserRole method.

 @Override
    public BaseUserRole getMinimalBaseUserRole() {
        return BaseUserRole.ANONYMOUS;
    }

User roles can be Admin, Privilege, User or Anonymous. In our case it is Anonymous: a user need not log in to access this endpoint.

  @Override
    public String getAPIPath() {
        return "/cms/getSkillHistory.json";
    }

This method sets the API endpoint path. One needs to send requests to http://api.susi.ai/cms/getSkillHistory.json to get the modification history of a skill. Next we will implement the serviceImpl method, where we process the user request and return the service response.

@Override
    public ServiceResponse serviceImpl(Query call, HttpServletResponse response, Authorization rights, final JsonObjectWithDefault permissions) {

        // Resolve the skill file from the four-tuple: model, group, language, skill
        String model_name = call.get("model", "general");
        File model = new File(DAO.model_watch_dir, model_name);
        String group_name = call.get("group", "knowledge");
        File group = new File(model, group_name);
        String language_name = call.get("language", "en");
        File language = new File(group, language_name);
        String skill_name = call.get("skill", "wikipedia");
        File skill = new File(language, skill_name + ".txt");

        JSONArray commitsArray = new JSONArray();
        boolean success = false;
        String path = skill.getPath().replace(DAO.model_watch_dir.toString(), "models");

        FileRepositoryBuilder builder = new FileRepositoryBuilder();
        try {
            Repository repository = builder.setGitDir((DAO.susi_skill_repo))
                    .readEnvironment() // scan environment GIT_* variables
                    .findGitDir()      // scan up the file system tree
                    .build();
            try (Git git = new Git(repository)) {
                // Fetch all commits that touched this skill file
                Iterable<RevCommit> logs = git.log().addPath(path).call();
                int i = 0;
                for (RevCommit rev : logs) {
                    JSONObject commit = new JSONObject();
                    commit.put("commitRev", rev);
                    commit.put("commitName", rev.getName());
                    commit.put("commitID", rev.getId().getName());
                    commit.put("commit_message", rev.getShortMessage());
                    commit.put("author", rev.getAuthorIdent().getName());
                    commitsArray.put(i, commit);
                    i++;
                }
                success = true;
            } catch (GitAPIException e) {
                e.printStackTrace();
                success = false;
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (commitsArray.length() == 0) {
            success = false;
        }

        JSONObject result = new JSONObject();
        result.put("commits", commitsArray);
        result.put("success", success);
        return new ServiceResponse(result);
    }

To access any skill we need the parameters model, group and language. We get these through the call.get() method, where the first parameter is the key whose value we want and the second parameter is the default value. Based on the received model, group and language, we build the path of the skill file inside the susi_skill_data repository, read the git variables and scan up the file system tree using FileRepositoryBuilder's build() method. Next we fetch all the logs of the skill file, store them in a JSON commits array and finally pass them as a service response with a success message. In case of an exception, the service responds with the success flag set to false.

We have successfully implemented the servlet. Check the working of the endpoint by sending a request like http://api.susi.ai/cms/getSkillHistory.json?model=general&group=knowledge&language=en&skill=bitcoin and checking the response.

SUSI Skill CMS uses this endpoint to fetch the skill history; try it out at http://skills.susi.ai/browseHistory


Unifying Data from Different Scrapers of loklak server using Post

The Loklak Server is a project that scrapes data from different websites through different endpoints. Creating a single endpoint for all of them is difficult: it needs a decent design for using multiple scrapers, which in turn requires multiple changes. That is why one of the changes I introduced was the Post class, which acts as both a wrapper and an interface for the data objects of search scrapers (though the implementation in the scrapers is in progress).

Post is a subclass of JSONObject, the class that helps in working with JSON data in Java. In other words, a Post is a JSONObject with an identity (we call it postId) and a timestamp of when the data was scraped. It is used to capture data fetched by the web scrapers. The benefit of JSONObject as the superclass is that it provides methods to capture and access data efficiently.
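
A minimal sketch of what the Post class can look like (field and method names here are illustrative, not the exact loklak implementation):

// Sketch: a Post is a JSONObject carrying an identity and a scrape timestamp
public class Post extends JSONObject {

    private String postId;   // unique identity of this data object
    private long timestamp;  // when the data was scraped

    public Post() {
        this.timestamp = System.currentTimeMillis();
    }

    public String getPostId() {
        return this.postId;
    }

    public void setPostId(String postId) {
        this.postId = postId;
    }

    public long getTimestamp() {
        return this.timestamp;
    }
}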

Why Post?

At present there is a class MessageEntry which is the superclass of TwitterTweet (the data object of TwitterScraper). It has numerous methods that can be used by data objects to clean and analyse data. But it has a disadvantage: it is specialized for social websites like Twitter, and becomes redundant for different types of websites like Quora, Github, etc.

The Post object, on the other hand, is a small but powerful and flexible object with the ability to deal with data like a JSONObject. It contains getter and setter methods and identity members used to give each Post object a unique identity. It doesn't have any methods for the analysis and cleaning of data, but the MessageEntry class' methods can be used for this purpose.

Uses of Post Object

When I started working on the Post object, it could be used as a marker interface for data objects. Following are the advantages I came up with:

1) Accessing the data object of any scraper using its variable. And yes, this is the primary reason it is an interface.

2) In addition to accessing the data objects, one can also directly use it to fetch, modify or use data without knowing the scraper it belongs to. This feature is useful in the Timeline iterator.

This is an example of how the Post interface is used to append two lists of Posts (possibly carrying different types of data) into one.

public void mergePost(PostTimeline list) {
    for (Post post: list) {
        this.add(post);
    }
}


Post as a wrapper object

While working on the Post object, I converted it into a class so that it can also be used as a wrapper. But why a wrapper? A wrapper can be used to wrap a list of Post objects into one object. It doesn't have any identity or timestamp; it is just a utility to dump a pack of data objects with homogeneous attributes.

This is an example implementation of the Post object as a wrapper. typeArray is a wrapper which is used to store two arrays of data objects. These data object arrays are timeline objects that are saved as JSONArray objects in the Post wrapper.

    Post typeArray = new Post(true);
    switch(type) {
        case "users":
            typeArray.put("users", scrapeProfile(br, url).toArray());
            break;
        case "question":
            typeArray.put("question", scrapeQues(br, url).toArray());
            break;
        default:
            break;
    }


Using SUSI AI Server to Store User Feedback for a Skill

User feedback is valuable information that plays an important role in improving the quality of a service. In SUSI AI server we are planning to build a feedback mechanism to see whether the user liked the answer or not. The result of that user input (which can be given using a vote button) will then be learned from, to enhance the future use of the rule. So as a first step towards implementing a skill rating system with guided learning, we need to store the user rating of a skill. In this blogpost we will learn how to make an endpoint for getting a skill rating from the user. This API endpoint will be used by the web and mobile clients.
Before implementing the API, let's look at how the data is stored. SUSI server uses a DAO in which the skill rating is stored as a JsonTray.

 public JsonTray(File file_persistent, File file_volatile, int cachesize) throws IOException {
        this.per = new JsonFile(file_persistent);
        this.vol = new CacheMap<String, JSONObject>(cachesize);
        this.file_volatile = file_volatile;
        if (file_volatile != null && file_volatile.exists()) try {
            JSONObject j = JsonFile.readJson(file_volatile);
            for (String key: j.keySet()) this.vol.put(key, j.getJSONObject(key));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

JsonTray takes three parameters, the persistent file, the volatile file and the cache size, and stores the data as a cache map of String and JSONObject pairs. SUSI Server provides an abstract class AbstractAPIHandler which extends HttpServlet (the class providing methods such as doGet and doPost for handling HTTP-specific services) and implements the APIHandler interface. Next we will inherit our RateSkillService class from AbstractAPIHandler and implement the APIHandler interface.

public class RateSkillService extends AbstractAPIHandler implements APIHandler {
    private static final long serialVersionUID =7947060716231250102L;
    @Override
    public BaseUserRole getMinimalBaseUserRole() {
        return BaseUserRole.ANONYMOUS;
    }

    @Override
    public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
        return null;
    }

    @Override
    public String getAPIPath() {
        return "/cms/rateSkill.json";
    }

}

The getMinimalBaseUserRole method tells the minimum user role required to access this servlet; it can also be ADMIN or USER. In our case it is Anonymous: a user need not log in to access this endpoint. The getAPIPath() method sets the API endpoint path; it gets appended to the base path, which gives 127.0.0.1:4000/cms/rateSkill.json on localhost.

Next we will implement the serviceImpl method.

  @Override
    public ServiceResponse serviceImpl(Query call, HttpServletResponse response, Authorization rights, final JsonObjectWithDefault permissions) {

        String model_name = call.get("model", "general");
        File model = new File(DAO.model_watch_dir, model_name);
        String group_name = call.get("group", "knowledge");
        File group = new File(model, group_name);
        String language_name = call.get("language", "en");
        File language = new File(group, language_name);
        String skill_name = call.get("skill", null);
        File skill = new File(language, skill_name + ".txt");
        String skill_rate = call.get("rating", null);

        JSONObject result = new JSONObject();
        result.put("accepted", false);
        if (!skill.exists()) {
            result.put("message", "skill does not exist");
            return new ServiceResponse(result);

        }
        JsonTray skillRating = DAO.skillRating;
        JSONObject modelName = new JSONObject();
        JSONObject groupName = new JSONObject();
        JSONObject languageName = new JSONObject();
        if (skillRating.has(model_name)) {
            modelName = skillRating.getJSONObject(model_name);
            if (modelName.has(group_name)) {
                groupName = modelName.getJSONObject(group_name);
                if (groupName.has(language_name)) {
                    languageName = groupName.getJSONObject(language_name);
                    if (languageName.has(skill_name)) {
                        JSONObject skillName = languageName.getJSONObject(skill_name);
                        skillName.put(skill_rate, skillName.getInt(skill_rate) + 1 + "");
                        languageName.put(skill_name, skillName);
                        groupName.put(language_name, languageName);
                        modelName.put(group_name, groupName);
                        skillRating.put(model_name, modelName, true);
                        result.put("accepted", true);
                        result.put("message", "Skill ratings updated");
                        return new ServiceResponse(result);
                    }
                }
            }
        }
        languageName.put(skill_name, createRatingObject(skill_rate));
        groupName.put(language_name, languageName);
        modelName.put(group_name, groupName);
        skillRating.put(model_name, modelName, true);
        result.put("accepted", true);
        result.put("message", "Skill ratings added");
        return new ServiceResponse(result);

    }

    /* Utility function*/
    public JSONObject createRatingObject(String skill_rate) {
        JSONObject skillName = new JSONObject();
        skillName.put("positive", "0");
        skillName.put("negative", "0");
        skillName.put(skill_rate, skillName.getInt(skill_rate) + 1 + "");
        return skillName;
    }


One can access any skill based on the four-tuple of parameters model, group, language and skill. Before rating a skill we must ensure that it exists. We get the required parameters through the call.get() method, where the first parameter is the key for which we want to get the value and the second parameter is the default value. If the skill.exists() method returns false, we generate an error message stating that the skill does not exist. Otherwise we check if the skill already exists in our skillRating.json file; if so, we update the current ratings, otherwise we create a new JSON object and add it to the rating file based on model, group and language. After a successful implementation, go ahead and test your endpoint on http://localhost:4000/cms/rateSkill.json?model=general&group=knowledge&skill=who&rating=positive

You can also check for the updated JSON file in susi_server/data/skill_rating/skillRating.json

{"general": {
 "assistants": {"en": {
   "language_translation": {
     "negative": "1",
     "positive": "0"
  }}},
 "smalltalk": {"en": {
   "aboutsusi": {
   "negative": "0",
   "positive": "1"
 }}},
 "knowledge": {"en": {
   "who": {
     "negative": "2",
     "positive": "4"
   }}}
}}

And if the skill is not present, it will generate an error message.

We have successfully implemented the API endpoint for storing the user's skill feedback. For more information take a look at SUSI server and join the gitter chat channel for discussions.


Rendering Open Event Server’s API-Blueprint document

After writing the FOSSASIA Open Event Server project's API Blueprint document manually, we wanted to know how we could render the document, view it in a friendly HTML format, and change its look as we go. To do that, we found two ways of rendering it.

They are:

1) The apiary editor:

This editor helps us render API blueprints and print them in a user-readable API document format. When we create an API blueprint manually, we always follow the standard blueprint pattern, i.e. the name and metadata followed by resource groups and actions, which was already discussed in the last blog. To use the apiary editor, we start off by creating our first project. On our first use of the editor, we get a default "polls and vote" example API project; this is a template we can use as a guide. The polls/vote API looks something like this in the editor mode:


Apiary has facilities to test an API, document an API, inspect an API or simply edit an API. We first start off by creating a project "open-event-api". Next, in the editor mode of apiary, we add the contents of our API Blueprint document.
Here is an example of how the USERS API is rendered. If we get our request and response right, then on clicking List All Users we get a good 200 response like this in the editor:

However, if we go off format with the API blueprint, we get an invalid error:

The final rendering, with the respective API's request and response, can be seen in the document mode.
The document-mode request and response look like this:

This rendered doc can be viewed publicly via the link obtained in the document mode. Similarly, we test the rest of the API in the editor. This is a simple way to render your API blueprint.

2) The aglio renderer:

Since an API blueprint is written in the .apib format, the downside is that it is not easily viewable by readers. Even though we can use apiary to view the rendered docs and get a shareable link, we would also like the docs for our API server to be hosted on our own server. We use Aglio to do exactly that.

It is an API Blueprint renderer which supports multiple themes. It converts the .apib file into user-readable formats such as PDF, HTML, etc. Here, since we want to host it as a webpage, we render it as .html. It outputs static HTML which can be served by any web host. Since API Blueprint is a Markdown-based document format, this lets us write API descriptions and documentation in a simple and straightforward way.
An example of how an aglio-rendered document in the three-column format looks:

The best thing about Aglio is that not only does it support many themes and templates, it also allows you to provide your own custom theme and template to render the HTML file from the API blueprint.

How to use aglio renderer:

  • We first follow up with installation:
npm install -g aglio
  • After installation, we go to the folder where the .apib file is stored and generate the HTML. There are 5 built-in themes available with two-column and three-column layouts. They are:
# Default theme
aglio -i input.apib -o output.html

-> This command takes the input.apib file as the API Blueprint input and creates a rendered output file named output.html.


# Use three-column layout
aglio -i input.apib --theme-template triple -o output.html

-> This command takes the input.apib file as the API Blueprint input and creates a rendered output file named output.html, this time using the theme-template flag. The theme-template flag defines whether the layout of the rendered HTML has two or three columns. Here it is set to triple, which means three columns.

# Built-in color scheme
aglio --theme-variables slate -i input.apib -o output.html

-> Aglio has different color schemes that you can use while rendering the docs html file. Some of them are Olio, Streak, Slate, etc.

# Customize a built-in style
aglio --theme-style default --theme-style ./my-style.less -i input.apib -o output.html

-> Suppose you want to provide your own syntactical stylesheet (SASS, LESS, etc.) to define your own styling. You can do that as shown in the above example. my-style.less is a user-defined stylesheet, which is then used to style the rendered output file.

# Custom layout template
aglio --theme-template /path/to/template.jade -i input.apib -o output.html

-> You can write your own custom layout template in a template.jade file and use it to generate the output.html instead of the two- or three-column layout.

We run the built-in color scheme aglio --theme-variables slate -i api_blueprint.apib -o output.html to generate the document for our Open Event Server API, which looks something like this:

You can visit the live version of FOSSASIA‘s Open Event Server API Document right here: https://api.eventyay.com/


Smart Data Loading in Open Event Android Orga App

In any API-centric native application like the Open Event Organizer app (Github Repo), there is a need to access data through the network, cache it for later use in a database, and retrieve data selectively from both sources, network and disk. Most Android applications use SQLite (including countless wrapper libraries on top of it) or Realm to manage their database, and Retrofit has become the de facto standard for consuming a REST API. But there is no standard way to manage the bridge between these two for smart data loading. Most applications directly call the DB or the API service to selectively load their data and display it on the UI, but this breaks the fluidity and cohesion between the two data sources. After all, both sources manage the same kind of data.

Suppose you want to load your data from a single source without having to worry about where it comes from. That requires a smart model or repository which checks which kind of data you want to load, checks if it is available in the DB, and loads it from there if it is. If it is not, it calls the API service to load the data and also saves it in the DB right there. These smart models are self-contained, meaning they handle the loading logic, handle edge cases and errors, and take actions for themselves about storing and retrieving the data. This keeps the presentation or UI layer free of the data aspects of the application and also removes unnecessary handling code unrelated to the UI.

From the start of the Open Event Android Orga application's planning, we proposed to create an efficient MVP-based design with a clear separation of concerns. With the use of RxJava, we have created a single-source repository pattern which automatically handles connection, reload, and database and network management. This blog post will discuss the implementation of the AbstractObservableBuilder class, which intelligently manages which source to load the data from.

Feature Set

So, first, let's discuss what features our AbstractObservableBuilder class should have:

  • Should take two inputs – disk source and network source
  • Should handle force reload from server
  • Should handle network connection logic
  • Should load from disk observable if data is present
  • Should load from network observable if data is not present in disk observable

This constitutes the most basic set of data operations done in any API-based Android application. Now, let's talk about the implementation.

Implementation

Since our class will be generic, we will use the type variable T to denote the item type. Firstly, we have 4 declarations globally:

private IUtilModel utilModel;
private boolean reload;
private Observable<T> diskObservable;
private Observable<T> networkObservable;

 
  • UtilModel tells us if the device is connected to internet or not
  • reload tells if the request should bypass database and fetch from network source only
  • diskObservable is the database source of the item to be fetched
  • networkObservable is the network source of the same item

Next is a very simple implementation of the builder-pattern methods which will be used to set these fields.

@Inject
public AbstractObservableBuilder(IUtilModel utilModel) {
    this.utilModel = utilModel;
}

AbstractObservableBuilder<T> reload(boolean reload) {
    this.reload = reload;

    return this;
}

AbstractObservableBuilder<T> withDiskObservable(Observable<T> diskObservable) {
    this.diskObservable = diskObservable;

    return this;
}

AbstractObservableBuilder<T> withNetworkObservable(Observable<T> networkObservable) {
    this.networkObservable = networkObservable;

    return this;
}


UtilModel is the required dependency, and so is added as a constructor parameter.

All right, all variables are set up, now we need to create the build function to actually create the observable:

@NonNull
public Observable<T> build() {
    if (diskObservable == null || networkObservable == null)
        throw new IllegalStateException("Network or Disk observable not provided");

    return Observable
            .defer(getReloadCallable())
            .switchIfEmpty(getConnectionObservable())
            .compose(applySchedulers());
}

Reloading Logic

First of all, we check if the caller forgot to add a disk or network source and throw an exception if so. Next, we use the defer operator to defer the call to getReloadCallable() so that it is not executed until this observable is subscribed to. Some articles on the internet directly use Rx combination operators to make things easy, but this lazy calling is the most efficient way to do things because no actual call will be made to the observables until needed.

Secondly, you can easily test the behaviour in unit tests, by verifying that

  • no call to the network observable was made if the data inside disk observable was present; or
  • call to network observable was made even if there was data in disk observable if the reload request was made

These tests would not have been possible if we had not employed the lazy-call technique: to create this observable eagerly, the calls to the observables and utilModel would have been made before the subscription to this model happens. A sketch of such a test is given below.
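
As an illustration, a test of the second behaviour might look roughly like this (a sketch using Mockito and RxJava's TestObserver; it assumes the schedulers are replaced with trampolines in the test setup so the assertion runs synchronously):

// Sketch: a reload request must fetch from the network even when disk data exists
@Test
public void shouldLoadFromNetworkOnReload() {
    when(utilModel.isConnected()).thenReturn(true);

    new AbstractObservableBuilder<String>(utilModel)
        .reload(true)
        .withDiskObservable(Observable.just("stale disk item"))
        .withNetworkObservable(Observable.just("fresh network item"))
        .build()
        .test()
        .assertValue("fresh network item"); // the disk source was bypassed
}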

Now, let’s see what getReloadCallable does

@NonNull
private Callable<Observable<T>> getReloadCallable() {
    return () -> {
        if (reload)
            return Observable.empty();
        else
            return diskObservable
                .doOnNext(item -> 
                    Timber.d("Loaded %s From Disk on Thread %s",
                    item.getClass(), Thread.currentThread().getName()));
    };
}


This function's role is to return the disk observable if the request is not a reload call, or else an empty observable, so that a forced network request happens to reload the data.

So it returns a Callable which encapsulates this logic; besides that, when loading from disk, it also logs the type of the item loaded and the thread it was loaded on.

Connection and Database Switch Logic

In the next chain of operation, we make a switchIfEmpty call to getConnectionObservable(). Because of the above reloading logic, switchIfEmpty serves two purposes here; it switches to the API call:

  • if db does not contain data
  • If it is a reload call

The observable we switch to is returned by getConnectionObservable(), and its purpose is to check if the device is connected to the internet: if it is, it forwards the network request, and if not, it returns an error observable.

@NonNull
private Observable<T> getConnectionObservable() {
    if (utilModel.isConnected())
        return networkObservable
            .doOnNext(item -> Timber.d("Loaded %s From Network on Thread %s",
                item.getClass(), Thread.currentThread().getName()));
    else
        return Observable.error(new Throwable(Constants.NO_NETWORK));
}

We use the util model to determine if we are connected to the internet and take action accordingly. As you can see, here too we log the data being loaded and the thread information.

Threading

Lastly, we want to ensure that all processing happens on the correct threads, and for that we call compose with an ObservableTransformer so that all requests happen on the I/O scheduler and the data is received on Android's main thread.

@NonNull
private <V> ObservableTransformer<V, V> applySchedulers() {
    return observable -> observable
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread());
}

And that's all it takes to create a reactive, generic and reusable data handler for disk- and network-based operations. In the repository pattern employed in the Open Event Android Orga application, all data switching and handling code is delegated to it, with unit tests and integration tests covering the individual components and their interaction in all cases.
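
To round things off, here is how a repository might wire the builder up in practice (a sketch; loadEventFromDb, saveEventToDb and eventService are illustrative names, not the app's exact code):

// Sketch: a repository method combining disk and network sources through the builder
public Observable<Event> getEvent(long eventId, boolean reload) {
    return new AbstractObservableBuilder<Event>(utilModel)
        .reload(reload)
        .withDiskObservable(loadEventFromDb(eventId))   // DB query wrapped in an Observable
        .withNetworkObservable(
            eventService.getEvent(eventId)              // Retrofit call returning Observable<Event>
                .doOnNext(this::saveEventToDb))         // cache the fresh copy for next time
        .build();
}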

If you want to learn more about other implementations, you can read these articles
