Developing MultiLinePlotter App for Loklak

MultiLinePlotter is a web application which uses the Loklak API under the hood to plot tweet aggregations for several user-provided query words in the same graph. The user can enter multiple query words, and a separate line is plotted for each query, so users can compare and visualise the tweet distribution for various keywords. All the searched queries are shown under the search record section. Clicking on a record pops up a dialogue box where the individual tweets related to that query word are displayed. Users can also remove a series from the plot dynamically by pressing the Remove button beside the query word in the record section. The app is presently hosted on the Loklak apps site.

Related issue – https://github.com/fossasia/apps.loklak.org/issues/225

Getting started with the app

Let us delve into the working of the app. The app uses the Loklak aggregation API to get the data.

A call to the API looks something like this:

http://api.loklak.org/api/search.json?q=fossasia&source=cache&count=0&fields=created_at

A small snippet of the aggregation returned by the above API request is shown below.

"aggregations": {"created_at": {
    "2017-07-03": 3,
    "2017-07-04": 9,
    "2017-07-05": 12,
    "2017-07-06": 8,
}}

The API provides a nice date v/s number-of-tweets aggregation. Now we need to plot this. For plotting, Morris.js has been used; it is a lightweight JavaScript library for visualising data.
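
For reference, a minimal Morris.js line chart is created like this (the element id and data here are illustrative, not taken from the app):

new Morris.Line({
    // ID of the element in which to draw the chart
    element: 'chart',
    // one object per date in the aggregation
    data: [
        {day: '2017-07-03', fossasia: 3},
        {day: '2017-07-04', fossasia: 9}
    ],
    xkey: 'day',           // the key containing x-axis values
    ykeys: ['fossasia'],   // keys containing y-axis values, one per series
    labels: ['fossasia']   // labels shown in the hover legend
});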

One of the main features of this app is the dynamic addition and removal of multiple series from the graph. How do we achieve that? This is done by manipulating the Morris.js data list whenever a new query is made. Let us understand this in steps.

First, the data is fetched using the Angular $http service.

$http.jsonp('http://api.loklak.org/api/search.json?callback=JSON_CALLBACK',
            {params: {q: $scope.tweet, source: 'cache', count: '0', fields: 'created_at'}})
                .then(function (response) {
                    $scope.getData(response.data.aggregations.created_at);
                    $scope.plotData();
                    $scope.queryRecords.push($scope.tweet);
                });

Once we get the data, the getData function is called and the aggregation data is passed to it. The query word is also stored in the queryRecords list for future use.

In order to plot a line graph, Morris.js requires a data object containing the required values for each series. Given below is an example of such a data object.

data: [
    { x: '2006', a: 100, b: 90 },
    { x: '2007', a: 75,  b: 65 },
    { x: '2008', a: 50,  b: 40 },
    { x: '2009', a: 75,  b: 65 },
],

For every ‘x’, ‘a’ and ‘b’ will be plotted, so two lines will be drawn. Our app also maintains a data list like the one shown above; however, in our case, the data objects have a variable number of keys. One key determines the ‘x’ value and the other keys determine the ordinates (number of tweets).
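
For instance, after searches for “fossasia” and “google”, one data object in our list could look like this (counts illustrative):

{ day: '2017-07-05', fossasia: 12, google: 7 }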

All the data objects present in the data list need to be updated whenever a new search is done.

The getData function does this for us.

var value = $scope.tweet;
for (var date in aggregations) {
    var present = false;
    for (var i = 0; i < $scope.data.length; i++) {
        var item = $scope.data[i];
        if (item['day'] === date) {
            // date already present: add the new query word as a key
            item[value] = aggregations[date];
            present = true;
            break;
        }
    }
    if (!present) {
        // new date: insert an object for it with the query word as a key
        $scope.data.push({day: date, [value]: aggregations[date]});
    }
}


The for loop in the above code snippet updates the global data list used by Morris.js. It simply iterates over the dates in the aggregation, extracts the object corresponding to a particular date, adds the new query word as a key, and the number of tweets on that date as the value. If a date is not already present in the list, it inserts a new object for that date and query word. Once our data list is updated, we are ready to redraw the graph with the updated data. This is done using the plotData function, which simply checks the user-selected graph type. If the selected type is aggregations, it calls plotAggregationGraph() to redraw the aggregations plot.
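
The plotting code itself is not shown in this post; a sketch of what plotAggregationGraph() might look like, assuming the chart container has the id 'aggregations' and the chart is redrawn from the global data, ykeys and labels arrays:

$scope.plotAggregationGraph = function() {
    // clear the previous chart before redrawing (container id is hypothetical)
    document.getElementById('aggregations').innerHTML = '';
    new Morris.Line({
        element: 'aggregations',
        data: $scope.data,      // list of {day: ..., <query word>: ...} objects
        xkey: 'day',            // dates go on the x-axis
        ykeys: $scope.ykeys,    // one entry per query word
        labels: $scope.labels   // legend labels, kept in sync with ykeys
    });
};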

$scope.remove = function(record) {
    // drop the query word from the search records
    $scope.queryRecords = $scope.queryRecords.filter(function(e) {
        return e !== record;
    });

    // delete the corresponding key from every data object
    $scope.data.forEach(function(item) {
        delete item[record];
    });

    // remove objects left with only the 'day' key
    $scope.data = $scope.data.filter(function(item) {
        return Object.keys(item).length !== 1;
    });

    $scope.ykeys = $scope.ykeys.filter(function(item) {
        return item !== record;
    });

    $scope.labels = $scope.labels.filter(function(item) {
        return item !== record;
    });

    $scope.plotData();
};

The above function simply scans the data list, removes the selected record's key from each object, and filters out objects that are left with no series, using the filter method of JavaScript arrays. It also removes the corresponding entries from the labels and ykeys arrays. Finally, it once again calls the plotData function to redraw the plot.

Given below is a sample plot generated by this app with the query words – google, android, microsoft, samsung.

Conclusion

This blog post explained how multiple series are plotted dynamically in the MultiLinePlotter app. Apart from the aggregations plot, it also plots tweet statistics like the maximum and average number of tweets containing a query word, and visualises them using a stacked bar chart. I will discuss those in my subsequent blogs.


Managing Related Endpoints in Permission Manager of Open Event API Server

Open Event API Server has its own permission manager to manage access to different endpoints, and some of the remaining gaps were filled by the new helper method has_access. The next challenge for the permission manager was to support the case where many related endpoints point to the same resource.
Example:

  • /users-events-roles/<int:users_events_role_id>/user or
  • /event-invoices/<int:event_invoice_id>/user

Both endpoints point to the Users API, where they fetch the record of a single user, and for this we apply the permission “is_user_itself”. This permission ensures that the logged-in user is the same user whose record is requested through the API, and for this we need the “user_id” as the “id” in the permission function “is_user_itself”.
Thus there is a need to add the ability in the permission manager to fetch this user_id from different models for different endpoints. For example, if we consider the above endpoints, we need the ability to get user_id from the UsersEventsRole and EventInvoice models and pass it to the permission function so that it can use it for the check.

Adding support

To add support for multiple keys, we have to look for two things.

  • fetch_key_url
  • model

These two are the key attributes for this feature: fetch_key_url takes a comma-separated list of keys which are matched against view_kwargs, and model receives an array of model classes which are used to fetch the related records.
This snippet provides the main logic:

found = False
for index, mod in enumerate(model):
    if is_multiple(fetch_key_url):
        f_url = fetch_key_url[index]
    else:
        f_url = fetch_key_url
    try:
        # fetch the related record, e.g. a UsersEventsRole by users_events_role_id
        data = mod.query.filter(getattr(mod, fetch_key_model) == view_kwargs[f_url]).one()
    except NoResultFound:
        pass
    else:
        found = True
        break

if not found:
    return NotFoundError({'source': ''}, 'Object not found.').respond()

In the above snippet:

  • We iterate through the models list
  • Check whether fetch_key_url holds multiple keys or not (see the is_multiple sketch below)
  • Get the key from fetch_key_url on the basis of whether it holds multiple keys or a single one
  • Attempt to get the object from the model for the respective iteration
  • If there is a matching record/object in the database then it’s our data, and we skip further processing
  • Else we continue iterating until we get the object or reach the end
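
The is_multiple() helper used in the snippet above is not shown here; a minimal sketch of what it may look like (the actual implementation in the server may differ):

def is_multiple(data):
    # a list of keys always counts as multiple
    if isinstance(data, list):
        return True
    # a comma separated string also counts as multiple keys
    if isinstance(data, str) and data.find(',') > 0:
        return True
    return False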

To use multiple mode

Instead of providing a single model to the model option of the permission manager, provide an array of models. Providing comma-separated values to fetch_key_url remains optional.
Now there can be a scenario where you want to fetch the resource from a database model using different keys present in your view_kwargs.
For example, consider these endpoints:

  1. `/notifications/<notification_id>/event`
  2. `/orders/<order_id>/event`

Since they point to the same resource, and if you want to ensure that the logged-in user is the organizer, you can set these two options as:

  1. fetch_key_url="notification_id, order_id"
  2. model=[Notification, Order]

The permission manager will always match indexes in both options: the first key of fetch_key_url is used only with the first entry of model, and so on.
Also, fetch_key_url is an optional parameter, and even in multiple mode you can provide a single value. But if you provide multiple comma-separated values, make sure you provide all of them, such that the number of values in fetch_key_url and model is equal.
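
Putting it together for the endpoints above, a usage sketch could look like this. It assumes the permission manager is applied through an api.has_permission decorator with the option names described in this post; the permission name and exact call signature may differ in the codebase:

decorators = (api.has_permission('is_organizer',
                                 fetch_key_url='notification_id, order_id',
                                 model=[Notification, Order]),)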


Custom Data Layer in Open Event API Server

Open Event API Server uses the flask-rest-jsonapi module to implement the JSON API. This module provides a good logical abstraction in the data layer.
The data layer is a CRUD interface between the resource manager and the data. It is a very flexible system which can use any ORM or data storage. The default layer you get in flask-rest-jsonapi is the SQLAlchemy ORM layer, and the API Server makes use of this default alchemy layer almost everywhere, except for the email verification part that I worked on.

To add support for users' email verification in the API Server, there was a need to create an endpoint for POST /v1/users/<int:user_id>/verify.
Clearly, here we are working on a single resource, i.e. a specific user record. This calls for ResourceDetail, and the only issue was that there is no POST method or view in the ResourceDetail class. To solve this, I created a custom data layer, which let me redefine all methods and views by inheriting from an abstract class. A custom data layer must inherit from flask_rest_jsonapi.data_layers.base.Base.

Creating Custom Layer

To solve the email verification process, a custom layer was created at app/api/data_layers/VerifyUserLayer.py:

class VerifyUserLayer(Base):
    def create_object(self, data, view_kwargs):
        # fetch the user record referenced in the URL
        user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
        s = get_serializer()
        try:
            # decode the verification token sent to the user's email
            data = s.loads(data['token'])
        except Exception:
            raise UnprocessableEntity({'source': 'token'}, "Invalid Token")

        if user.email == data[0]:
            user.is_verified = True
            save_to_db(user)
            return user
        else:
            raise UnprocessableEntity({'source': 'token'}, "Invalid Token")

Using custom layer in API

We can easily provide a custom layer to an API resource using the data_layer property of the resource class:

data_layer = {
   'class': VerifyUserLayer,
   'session': db.session
}

This is all we have to provide; now all CRUD methods will be directed to our custom data layer.

Solution to our issue

Setting up a custom layer gives us the ability to define custom resource methods, i.e. modifying the view for POST requests and allowing us to verify registered users in the API Server.
After setting up the data layer, all I needed to do was create a ResourceList using this layer, together with the permissions:

class VerifyUser(ResourceList):

   methods = ['POST', ]
   decorators = (jwt_required,)
   schema = VerifyUserSchema
   data_layer = {
       'class': VerifyUserLayer,
       'session': db.session
   }

This enables the custom layer, VerifyUserLayer, for the VerifyUser resource.
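
Finally, the resource has to be registered on the endpoint. With flask-rest-jsonapi this is done through api.route(); the view name here is illustrative, and the /v1 prefix is assumed to come from the API's url_prefix:

api.route(VerifyUser, 'verify_user', '/users/<int:user_id>/verify')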


Integrating Stock Sensors with PSLab Android App

A sensor is a digital device (almost always an integrated circuit) which receives data from the outer environment and produces an electric signal proportional to it. This signal is then processed by a microcontroller or a processor to provide useful functionality. A mobile device running the Android operating system usually has a few sensors built into it. The main purpose of these sensors is to provide the user with a better experience, such as rotating the screen as the device moves, or turning off the screen during a call to prevent unwanted touch events. The PSLab Android application is capable of processing inputs received by different sensors plugged into it using the PSLab device and producing useful results. Developers are currently planning on integrating the stock sensors with the PSLab application so that it can be used even without the PSLab device.

This blog is about how to initiate a stock sensor available in an Android device and get readings from it. The Sensor API provided by Google developers is really helpful in achieving this task. The process consists of several steps. It is also important to note that some devices support only a few sensors while others support many. A few basic sensors are available in almost every device, such as:

  • “Accelerometer” – Measures acceleration along X, Y and Z axis
  • “Gyroscope” – Measures device rotation along X, Y and Z axis
  • “Light Sensor” – Measures illumination in Lux
  • “Proximity Sensor” – Measures distance to an obstacle from sensor

The implementation steps are as follows:

  1. Check availability of sensors

The first step is to invoke the SensorManager from the Android system services. This class has a method to list all the available sensors in the device.

SensorManager sensorManager;
sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
List<Sensor> sensors = sensorManager.getSensorList(Sensor.TYPE_ALL);

Once the list is populated, we can iterate through it to find out whether the required sensors are available, and avoid displaying activities related to sensors the device does not support.

for (Sensor sensor : sensors) {
   switch (sensor.getType()) {
       case Sensor.TYPE_ACCELEROMETER:
           break;
       case Sensor.TYPE_GYROSCOPE:
           break;
       ...
   }
}

  2. Read data from sensors

To read data sent by a sensor, one should implement the SensorEventListener interface. Under this interface, there are two methods that need to be overridden.

public class StockSensors extends AppCompatActivity implements SensorEventListener {

    @Override
    public void onSensorChanged(SensorEvent sensorEvent) {

    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int i) {

    }
}

Out of these two methods, the onSensorChanged() method is the one we need to address. This method provides a SensorEvent parameter, whose sensor.getType() call returns an integer representing the type of sensor that produced the event.

@Override
public void onSensorChanged(SensorEvent sensorEvent) {
   switch (sensorEvent.sensor.getType()) {
       case Sensor.TYPE_ACCELEROMETER:
           // acceleration along X, Y, Z is in sensorEvent.values[0..2]
           break;
       case Sensor.TYPE_GYROSCOPE:
           // rotation rates along X, Y, Z are in sensorEvent.values[0..2]
           break;
       ...
   }
}

Each available sensor should be registered with the SensorEventListener to make it available in the onSensorChanged() method. The following code block illustrates how to modify the previous loop to register each sensor with the listener.

for (Sensor sensor : sensors) {
   switch (sensor.getType()) {
       case Sensor.TYPE_ACCELEROMETER:
           sensorManager.registerListener(this, sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER), SensorManager.SENSOR_DELAY_UI);
           break;
       case Sensor.TYPE_GYROSCOPE:
           sensorManager.registerListener(this, sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE), SensorManager.SENSOR_DELAY_UI);
           break;
   }
}

Depending on the readings, we can provide the user with numerical data, or with graphical data plotted using MPAndroidChart in the PSLab Android application.
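
As an illustration (not code from the PSLab app), readings received in onSensorChanged() can be fed into an MPAndroidChart line chart roughly as follows, assuming MPAndroidChart 3.x and a LineChart named chart obtained from the layout:

private final List<Entry> entries = new ArrayList<>();

@Override
public void onSensorChanged(SensorEvent sensorEvent) {
    if (sensorEvent.sensor.getType() == Sensor.TYPE_LIGHT) {
        // the event timestamp is in nanoseconds; convert it to seconds for the x-axis
        float t = sensorEvent.timestamp / 1_000_000_000f;
        entries.add(new Entry(t, sensorEvent.values[0])); // illuminance in lux

        LineDataSet dataSet = new LineDataSet(entries, "Light (lux)");
        chart.setData(new LineData(dataSet));
        chart.invalidate(); // redraw the chart with the new reading
    }
}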

The following images illustrate how a similar implementation is available in Science Journal application developed by Google.


Changing Dimensions of Search Box Dynamically in Susper

Earlier, the Susper search box had fixed dimensions: when a user typed in a query, the dimensions of the search box remained the same. Changing this raised a couple of requirements:

  • Matching the dimensions of the search bar with those of the market leader.
  • When the dimensions change dynamically, the alignment w.r.t. the tabs on the results page should not be disturbed.

What actually happens is, when a user enters a query, the search box quickly changes its dimensions as the results appear. I will discuss below how we achieved this goal.

On the home page, we gave the search bar dimensions of 584 x 44 pixels.

On the results page, we gave the search bar dimensions of 632 x 44 pixels, similar to the market leader:

How we proceeded?

Susper is built on Angular v4.1.3, which provides the ngOnInit() function for every newly created component. ngOnInit() is one of the lifecycle hooks in Angular 4 (and in Angular 2 as well); it is called once the component has been rendered completely. This was the key to changing the dimensions of the search bar dynamically as soon as a component is created.

When a user types a query on the homepage and hits enter, the results component is created. As soon as it is created, the ngOnInit() function is called.

The default dimensions of the search bar are provided as follows:

search-bar.component.css

#navgroup {
  height: 44px;
  width: 584px;
}

When the homepage loads up, the dimensions by default are 584 x 44.

search-bar.component.ts

private navbarWidth: any;

ngOnInit() {
  // widen the search bar once the results component is initialized
  this.navbarWidth = '632px';
}

search-bar.component.html

We used the [style.width] attribute to change the dimensions dynamically. Add this attribute inside the input element:

<input #input type="text" name="query" class="form-control" id="nav-input" (ngModelChange)="onquery($event)" [(ngModel)]="searchdata.query" autocomplete="off" (keypress)="onEnter($event)" [style.width]="navbarWidth">

As soon as the results component is loaded, the dimensions of the search bar change to 632 x 44. In this way, we change the dimensions of the search bar dynamically as soon as results are loaded.


Controlling Camera Actions Using Voice Interaction in Phimpme Android

In this blog, I will explain how I implemented Google voice actions to control camera features on the Phimpme Android project. I will cover the following features I have implemented on the Phimpme project:

  • Opening the application using Google Voice command.
  • Switching between the cameras.
  • Clicking a Picture and saving it through voice command.

Opening the application when the user gives a command to Google Now: when the user gives the command “Take a selfie” or “Click a picture” to Google Now, it directly opens the Phimpme camera activity.

First, we need to add an intent filter to the manifest file so that Google Now can detect the Phimpme camera activity:

<activity
   android:name=".opencamera.Camera.CameraActivity"
   android:screenOrientation="portrait"
   android:theme="@style/Theme.AppCompat.NoActionBar">
   <intent-filter>
       <action android:name="android.media.action.IMAGE_CAPTURE"/>

       <category android:name="android.intent.category.DEFAULT"/>
       <category android:name="android.intent.category.VOICE"/>
   </intent-filter>
</activity>

The category android:name="android.intent.category.VOICE" is added to the IMAGE_CAPTURE intent filter for Google Now to detect the camera activity. For the Google Now assistant to accept commands in the camera activity, we need to add the following to the STILL_IMAGE_CAMERA intent filter of the camera activity:

<intent-filter>
   <action android:name="android.media.action.STILL_IMAGE_CAMERA"/>

   <category android:name="android.intent.category.DEFAULT"/>
   <category android:name="android.intent.category.VOICE"/>
</intent-filter>

So, now when the user says “OK Google” and “Take a Picture”, the camera activity of Phimpme opens.

Integrating Google Voice assistance in Camera Activity

Second, after opening the camera application, the Google Assistant should ask a question.

The camera activity in Phimpme can be opened in two ways:

  • When opened from a different application.
  • When given the command through Google Now assistance.

We need to check whether the camera activity was prompted by Google Assistant or not in order to activate voice commands. We check this in the onResume function:

@Override
public void onResume() {
   super.onResume();
   if (CameraActivity.this.isVoiceInteraction()) {
      startVoiceTrigger();
   }
}

If isVoiceInteraction() returns true, the voice assistant prompts the user.

Asking which camera to use

Third, after the camera activity opens, the Google Assistant should ask which camera to use: front or back.

To take any voice input from the user, we need to store the expected commands in a VoiceInteractor.PickOptionRequest. This request listens for the command from the user. We also need to add synonyms for the same command.

To choose the rear camera

VoiceInteractor.PickOptionRequest.Option rear = new VoiceInteractor.PickOptionRequest.Option(getResources().getString(R.string.camera_rear), 0);
rear.addSynonym(getResources().getString(R.string.rear));
rear.addSynonym(getResources().getString(R.string.back));
rear.addSynonym(getResources().getString(R.string.normal)); 

I added synonyms like rear, normal and back.

To choose front camera

VoiceInteractor.PickOptionRequest.Option front = new VoiceInteractor.PickOptionRequest.Option(getResources().getString(R.string.camera_front), 1);
front.addSynonym(getResources().getString(R.string.front));
front.addSynonym(getResources().getString(R.string.selfie_camera));
front.addSynonym(getResources().getString(R.string.forward));

I added synonyms like front, selfie camera and forward.

For the assistant to ask a question such as “Which camera would you like to use?”, I used getVoiceInteractor() and inflated a VoiceInteractor.PickOptionRequest.Option[] array with the front and rear options.

CameraActivity.this.getVoiceInteractor()
     .submitRequest(new VoiceInteractor.PickOptionRequest(
           new VoiceInteractor.Prompt("Which camera would you like to use?"),
           new VoiceInteractor.PickOptionRequest.Option[]{front, rear},
           null) {
           // onPickOptionResult() and onCancel() callbacks go here (shown below)
     });

The Google Assistant waits for a response from the user for only a few seconds before it goes inactive. If the user gives an unexpected command, the assistant asks the question one more time.

Check if the user gives an expected command or not.

We override the onPickOptionResult(boolean finished, Option[] selections, Bundle result) function. If (finished && selections.length == 1) holds, the speech matched exactly one of the provided options, and we check which option it was.

Check the command given by the user to switch between the cameras.

Two options were passed, with indexes 0 and 1. If the command given was “rear”, then selections[0].getIndex() == 0 and the camera activity switches to the rear camera; if the command given was “front”, then selections[0].getIndex() == 1 and the camera activity switches to the front camera.

@Override
public void onPickOptionResult(boolean finished, Option[] selections, Bundle result) {
   if (finished && selections.length == 1) {
      Message message = Message.obtain();
      message.obj = result;
      if (selections[0].getIndex() == 0) {
         rearCamera();
         asktakePicture();
      }
      if (selections[0].getIndex() == 1) {
         asktakePicture();
      }
   } else {
      getActivity().finish();
   }
}

Click a picture when the user says “Cheese”

After switching the camera, the assistant prompts the message “Say cheese”. We need to add a VoiceInteractor.Prompt(“Say cheese”) for this.

We need to store the synonyms in a VoiceInteractor.PickOptionRequest.Option. I have added synonyms like ready, go, take it, OK and cheese to click a picture. If the user gives an unexpected input, the assistant checks whether selections.length == 1 and prompts the message “Say cheese” again.

private void asktakePicture() {
   VoiceInteractor.PickOptionRequest.Option option = new VoiceInteractor.PickOptionRequest.Option(getResources().getString(R.string.cheese), 2);
   option.addSynonym(getResources().getString(R.string.ready));
   option.addSynonym(getResources().getString(R.string.go));
   option.addSynonym(getResources().getString(R.string.take));
   option.addSynonym(getResources().getString(R.string.ok));
   getVoiceInteractor()
         .submitRequest(new VoiceInteractor.PickOptionRequest(
               new VoiceInteractor.Prompt(getResources().getString(R.string.say_cheese)),
               new VoiceInteractor.PickOptionRequest.Option[]{option},
               null) {
            @Override
            public void onPickOptionResult(boolean finished, Option[] selections, Bundle result) {
               if (finished && selections.length == 1) {
                  Message message = Message.obtain();
                  message.obj = result;
                  takePicture();
               } else {
                  getActivity().finish();
               }
            }

            @Override
            public void onCancel() {
               getActivity().finish();
            }
         });
}

Conclusion

Now, users can start the camera activity on Phimpme through the voice command “Take a Selfie”. They can switch between the cameras through voice commands like “selfie camera”, “back camera”, “back” or “front”, and finally click a picture by giving the voice command “Cheese”, “Click it” or a related synonym.


Aligning Images to Same Height Maintaining Ratio in Susper

In this blog, I’ll be sharing how we aligned images to the same height in Susper without disturbing their ratio. When it comes to aligning images perfectly, they should have:

  • The same height.
  • A proper ratio to maintain the image quality. Many developers apply the same width and height without keeping the image ratio in mind, which results in:
    • Blurred images,
    • Pixelated images,
    • Cropping of the image.

Earlier, Susper had an image layout like this:

In the screenshot, the images are not properly aligned and do not have the same height. We wanted to improve the layout of images just like the market leaders Google and DuckDuckGo.

  • How did we implement a better layout for images?

<div class="container">
  <div class="grid" *ngIf="Display('images')">
    <div class="cell" *ngFor="let item of item$ | async">
      <a class="image-pointer" href="{{item.link}}">
        <img class="responsive-image" src="{{item.link}}"></a>
    </div>
  </div>
</div>

I have created a container in which the images returned by the yacy server are loaded, and a grid with an equal number of rows and columns. The height and width of the rows and columns are adjusted so that each division of the grid acts as a cell, and each cell holds exactly one image.
.grid {
  padding-left: 80px;
}

.container {
  width: 100%;
  margin: 0;
  top: 0;
  bottom: 0;
  padding: 0;
}

After implementing this, we faced issues like cropping of the image inside a cell. So, to avoid cropping and maintain the image ratio, we introduced the .responsive-image class:

.responsive-image {
  max-width: 100%;
  height: 200px;
  padding-top: 20px;
  padding: 0.6%;
  display: inline-block;
  float: left;
}

This is how Susper’s image section looks now:

It took some time to align the images, but we succeeded in creating a clean layout for them.

We are still facing some issues regarding images: some of them don’t appear due to broken links. This issue will be resolved soon on the server.


Showing RSS and Table Type Replies in SUSI Messenger Bots

All the messengers have “plain text” reply support. To show web search (RSS) or table type replies, either:

  1. We need a “list type” (as in Facebook Messenger) or “table type” reply support built into the messenger itself, or
  2. We need to convert the RSS or table type reply from the SUSI API to plain text, so that we can send it using the “plain text” reply support available in almost every messenger.

The second approach is the more favorable one, as that way replying with RSS or table type results depends only on text support in the messenger. This way, RSS or table type replies can be shown even in messengers like Gitter, whose REST API supports only text replies.

In the SUSI web app, the web search and table type results have this UI:

As the SUSI web app is made in React.js, it has the necessary features to show the results this way. The messengers may not have these required features.

So the task is: we need to convert the RSS or table type replies from the SUSI API to plain text before sending them to the messenger.

Let’s work it out.

Converting RSS results to text:

First, get familiar with the SUSI API reply to the query “why” by visiting this url – http://api.susi.ai/susi/chat.json?q=why.

The returned JSON object contains an array of objects as the value of the data key, like:

"data": [
        {
        "title": "Why is Oracle male?",
        "description": "Why is Oracle male?. http dba oracle com why is male htm. Oracle Why is masculine?. ",
        "link": "http://dba-oracle.com/t_why_is_oracle_male.htm"
      }
]

If we notice carefully, each object has 3 main keys, namely “title”, “description” and “link”. So extracting these 3 properties from each object and binding them together into one string is the task we need to do.

So we traverse each object (i.e. RSS result) in the data array and keep appending the values of the “title”, “description” and “link” keys to the ans variable. At the end, we send this resultant string to the messenger bot as a reply.

Suppose we have the returned JSON object, in the data variable.

// holds the final plain text reply
var ans = '';

// storing the number of rss results
var metaCnt = data.answers[0].metadata.count;
for(var i=0;i<metaCnt;i++){
    ans += ('Title : ');
    ans += data.answers[0].data[i].title+', ';
    ans += ('Description : ');
    ans += data.answers[0].data[i].description+', ';
    ans += ('Link : ');
    ans += data.answers[0].data[i].link+', ';
    ans += '\n\n';
}

// send the message in ans variable to the messenger

Converting table type replies to text:

First, get familiar with the SUSI API reply to the query “universities in australia” by visiting this url – http://api.susi.ai/susi/chat.json?q=universities+in+australia.

The returned JSON object contains an array of objects representing universities as the value of the data key, in this form:

{
    "alpha_two_code": "AU",
    "name": "Australian Correspondence Schools",
    "country": "Australia",
    "web_page": "http://www.acs.edu.au/"
}

Here too, each object has 3 main keys of interest, namely “name”, “country” and “web_page”. So extracting these 3 properties from each object and binding them together into one string makes things work.

Again, we traverse each object (i.e. university object) in the data array and keep appending the values of the “name”, “country” and “web_page” keys to the ans variable. At the end, we send this resultant string to the messenger bot as a reply.

Suppose we have the returned JSON object in the data variable.

// holds the final plain text reply
var ans = '';

// the 3 main columns which we need are stored in the colNames variable
var colNames = data.answers[0].actions[0].columns;

// storing the number of table entries
var metaCnt = data.answers[0].metadata.count;
for(var i=0;i<metaCnt;i++){
    for(var cN in colNames){
        // the column name
        ans += (colNames[cN]+' : ');
        // value for that column name
        ans += data.answers[0].data[i][cN]+', ';
    }
    ans += '\n\n';
}

    // send the message in ans variable to the messenger

Resources

  1. Develop a chat bot with Node.js – by Slobodan Stojanović, Smashing Magazine.
  2. List templates and checkbox plugin – by Mikhail Larionov, Facebook blogs.

Using Templates in SUSI FBbot

The SUSI AI FBbot previously showed RSS and table type replies as plain text to the user. To enhance the user experience, Facebook provides templates which can be used in its Messenger. Using those, we show RSS and table type replies wrapped up in a better UI, creating a better user experience.

The 4 basic template structures that can be used for this task are:

  1. Button template
  2. Generic template
  3. List template
  4. Receipt template

The list template is used in the SUSI AI FBbot because RSS replies and table type replies are both lists of data.
The basic syntax for the list template, with reference to the Fb official docs, is:

"message": {
    "attachment": {
        "type": "template",
        "payload": {
            "template_type": "list",
            "top_element_style": "compact",
            "elements": [
                {
                    "title": "Classic White T-Shirt",
                    "subtitle": "100% Cotton, 200% Comfortable",
                    "default_action": {
                        "type": "web_url",
                        "url": "https://peterssendreceiveapp.ngrok.io/view?item=100"
                    },
                    "buttons": [
                        {
                            "title": "Buy",
                            "type": "web_url",
                            "url": "https://peterssendreceiveapp.ngrok.io/shop?item=100"                     
                        }
                    ]                
                }
            ]
        }
    }
}

This code shows a reply to the user like this:

If we want to show a “View More” button at the end of the list, we can add a “buttons” key with an array as its value, holding the information for all the buttons we want to show.

The code below will show a “View more” button at the end of the list:

"elements": [
            {
                // all the elements, like shirt in the above case               
            }
       ],
       "buttons": [
            {
                "title": "View More",
                "type": "postback",
                "payload": "payload"                        
               }
       ]

Let’s learn how to incorporate these features for RSS results in the SUSI FBbot:

When sending a reply using the list template, the “elements” key must have an array as its value, made up of the list items. Therefore, we need to push all the RSS results into that array and set it as the value of the “elements” key.

Let’s develop the code part:

Fetch the JSON object from the SUSI API url with the query “why”, i.e. http://api.susi.ai/susi/chat.json?q=why. Let body be the variable which stores the returned JSON object.

  • The below code fills up an array (namely elementsVal) with all the RSS results:
var elementsVal = [];
var metaCnt = body.answers[0].metadata.count;
for(var i=0;i<((metaCnt>4)?4:metaCnt);i++){
    elementsVal.push(
        {
            "title": body.answers[0].data[i].title,
            "subtitle": body.answers[0].data[i].link
        }
    );
}
  • Setting the elementsVal array as the value of the “elements” key:
var message = {
    "type": "template",
    "payload": 
    {
        "template_type": "list",
        "top_element_style": "compact",
        "elements": elementsVal
    }
};
  • Sending this message as a reply to the user:
sendTextMessage(sender, message);

The same procedure can be used to render table type replies in the bot using the list template, as the sketch below shows.
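
For example, a table type reply like the universities result can be pushed into elementsVal in the same way (key names taken from the universities response shown earlier):

var elementsVal = [];
var metaCnt = body.answers[0].metadata.count;
for(var i=0;i<((metaCnt>4)?4:metaCnt);i++){
    elementsVal.push(
        {
            // the university name becomes the list item title
            "title": body.answers[0].data[i].name,
            // country and web page go into the subtitle
            "subtitle": body.answers[0].data[i].country + ' - ' + body.answers[0].data[i].web_page
        }
    );
}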

The generic template provided by Facebook can also be used to render the web and table type replies. This template shows the results in square boxes which can be cycled through using left or right arrows.

Resources:

  1. List templates and checkbox plugin – by Mikhail Larionov, Facebook blogs.
  2. Develop a chat bot with Node.js – by Slobodan Stojanović, Smashing Magazine.

How to Implement Memory like Servlets in SUSI.AI

In this blog, I’ll be discussing how the server fetches previous messages from the log files in the SUSI server. SUSI AI clients (Android, iOS and the web chat) follow a very simple rule: whenever a user logs in to the app, the app makes an HTTP GET call to the server in the background, and in response the server returns the chat history.

Link to the API endpoint -> http://api.susi.ai/susi/memory.json

But the time spent parsing a lot of data depends on the connection speed; if the connection is poor or slow, fetching the full history would cost the user time. To prevent this, the server by default returns the last 10 pairs of messages, and it is up to the client how many messages it wants to render. So, for example, if the client wants only the last 2 message pairs, it has to pass the cognitions parameter in the GET request. The modified endpoint is:

http://api.susi.ai/susi/memory.json?cognitions=2

But how does the server process it? Let us see.
Browse to the susi_server/src/ai/susi/server/api/susi/UserService.java file. This is the main working servlet. If you are new and wondering how servlets for SUSI are implemented, please go through how-to-add-a-new-servletapi-to-susi-server first.
This is how the serviceImpl() method looks:

@Override
public ServiceResponse serviceImpl(Query post, HttpServletResponse response, Authorization user, final JsonObjectWithDefault permissions) throws APIException {

    // cap the reply at 10 message pairs, even if more are requested
    int cognitionsCount = Math.min(10, post.get("cognitions", 10));
    String client = user.getIdentity().getClient();
    List<SusiCognition> cognitions = DAO.susi.getMemories().getCognitions(client);
    JSONArray coga = new JSONArray();
    for (SusiCognition cognition: cognitions) {
        coga.put(cognition.getJSON());
        if (--cognitionsCount <= 0) break;
    }
    JSONObject json = new JSONObject(true);
    json.put("cognitions", coga);
    return new ServiceResponse(json);
}

In the first step, we take the minimum of the default value (i.e. 10) and the value of cognitions received as a GET parameter. That many message pairs are encoded in a JSONArray and sent to the client.

Whenever the server receives a valid signup request, it makes a directory with the name “email_emailid”. In this directory, a log.txt file is maintained which stores all the queries along with the other details associated with them. For example, if a user has signed up with the email id example@example.com, then the path of this directory will be /data/susi/email_example@example.com. If the user queries “http://api.susi.ai/susi/chat.json?timezoneOffset=-330&q=flip+a+coin”, then

{
	"query": "flip a coin",
	"count": 1,
	"client_id": "",
	"query_date": "2017-06-30T12:22:05.918Z",
	"answers": [{
		"data": [{
			"0": "flip a coin",
			"token_original": "coin",
			"token_canonical": "coin",
			"token_categorized": "coin",
			"timezoneOffset": "-330",
			"answer": "tails",

			"skill": "/susi_skill_data/models/general/entertainment/en/flip_coin.txt",
			"_etherpad_dream": "cricket"
	}],
		"metadata": {
			"count": 1
		},
		"actions": [{
			"type": "answer",
			"expression": "tails"
		}],
		"skills": ["/susi_skill_data/models/general/entertainment/en/flip_coin.txt"]
	}],
	"answer_date": "2017-06-30T12:22:05.928Z",
	"answer_time": 10,
	"language": "en"
}

The server has the user’s identity. It uses this identity and appends the response to the respective log file.

The next steps in retrieving the messages are pretty easy: get the identity of the current user session, use it to populate the JSONArray named coga, and finally encode this array in a JSONObject along with other basic details. The JSONObject is returned to the clients, which render the data received and show the messages in an appropriate way.
