Generating Real-Time Graphs in PSLab Android App

In the PSLab Android App, we need to log data from the sensors and correspondingly generate real-time graphs. A real-time graph is a data-streaming chart that automatically updates itself every n seconds. This is different from what we did for the Oscilloscope's graph: here we need to determine the relative time at which the data is recorded from the sensor by the PSLab.

Another thing we needed to take care of was the range of the x axis. Since the data to be streamed is ever growing, setting a large range for the x axis would only make reading sensor data tedious for the user. The solution was a real-time rolling-window graph: once the data exceeds the maximum range of the x axis, the earliest plots are no longer shown. For example, if the graph is set to show only a 10-second window, then when the 11th-second data point is plotted, the 1st-second data point drops out of view, keeping the difference between the maximum and minimum of the visible range constant. The graph library we are going to use is MPAndroidChart. Let's break down the implementation step by step.

First, we create a long variable, startTime, which records the time at which the entire process starts. This is the reference time. A flag makes sure it is set only once (and indicates when to reset it).

if (flag == 0) {
   startTime = System.currentTimeMillis();
   flag = 1;
}

 

We used the AsyncTask approach, in which the data from the sensors is acquired in a background thread and the graph is updated on the UI thread. Here we consider the example of the HMC5883L sensor, which is a magnetometer. We calculate the time elapsed by subtracting startTime from the current time, and the result is taken as the x coordinate.

private class SensorDataFetch extends AsyncTask<Void, Void, Void> {
    // dataHMC5883L holds the magnetometer readings (Bx, By, Bz) fetched from the PSLab device
    ArrayList<Double> dataHMC5883L = new ArrayList<Double>();
    long timeElapsed;

    @Override
    protected Void doInBackground(Void... params) {
        // time elapsed (in seconds) since startTime is used as the x coordinate
        timeElapsed = (System.currentTimeMillis() - startTime) / 1000;

        entriesbx.add(new Entry((float) timeElapsed, dataHMC5883L.get(0).floatValue()));
        entriesby.add(new Entry((float) timeElapsed, dataHMC5883L.get(1).floatValue()));
        entriesbz.add(new Entry((float) timeElapsed, dataHMC5883L.get(2).floatValue()));

        return null;
    }
}

 

As we need to create a rolling-window graph, we have to add a few lines of code to the standard MPAndroidChart implementation. This entire code is placed in the onPostExecute method of the AsyncTask. The following code sets the data set for the LineChart and tells the chart that new data has been acquired. It is very important to call notifyDataSetChanged; without it, the chart will not update.

mChart.setData(data);
mChart.notifyDataSetChanged();

 

Now, we will set the visible range of the x axis. This means that the graph window won't start scrolling until the range set by this method is reached. Here we set it to 10, as we need a 10-second window.

mChart.setVisibleXRangeMaximum(10);

Then we call the moveViewToX method to move the view to the latest entry of the graph. Here, we pass data.getEntryCount(), which returns the number of data points in the data set.

mChart.moveViewToX(data.getEntryCount());
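Putting these pieces together, a minimal sketch of the onPostExecute method could look like the following (the LineDataSet label "Bx" and the use of only one data set are our own simplifications of the snippets above):

@Override
protected void onPostExecute(Void aVoid) {
    super.onPostExecute(aVoid);

    // wrap the accumulated entries into a data set and attach it to the chart
    LineDataSet dataSetBx = new LineDataSet(entriesbx, "Bx");
    LineData data = new LineData(dataSetBx);
    mChart.setData(data);

    // tell the chart that the underlying data changed
    mChart.notifyDataSetChanged();

    // keep a rolling 10-second window and scroll to the latest entry
    mChart.setVisibleXRangeMaximum(10);
    mChart.moveViewToX(data.getEntryCount());
    mChart.invalidate();
}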

 

We will get the following results.

To see the entire code visit this link.

Resources


Flask App to Upload Wallpaper On the Server for Meilix Generator

We had the problem of getting a wallpaper from the user through Meilix Generator and using that wallpaper with the Meilix build scripts to generate the ISO. So, we needed to host the wallpaper on the server so that it could be downloaded by Travis CI during the build and included in the ISO.

The solution is to render HTML templates and access the data sent by POST using the request object from Flask. redirect and url_for are used to redirect the user once the upload is done, and send_from_directory helps us host the file the user just uploaded under /uploads, from where it is downloaded by Travis for building the ISO.

We start by creating the HTML form marked with enctype=multipart/form-data.

<form action="upload" method="post" enctype="multipart/form-data">
        <input type="file" name="file"><br /><br />
        <input type="submit" value="Upload">
 </form>

 

First, we need to import the required modules. The most important one is werkzeug's secure_filename().

import os
from flask import Flask, render_template, request, redirect, url_for, send_from_directory
from werkzeug.utils import secure_filename

app = Flask(__name__)

 

Now, we'll define where to upload files and the file types allowed for uploading. The path to the upload directory on the server and the allowed extensions are set in app.config; here the upload folder is uploads/.

app.config['UPLOAD_FOLDER'] = 'uploads/'
app.config['ALLOWED_EXTENSIONS'] = set(['png', 'jpg', 'jpeg'])

 

This function checks for a valid extension of the wallpaper, which is png, jpg or jpeg in this case, as defined above in app.config.

def allowed_file(filename):
    return '.' in filename and \
           filename.rsplit('.', 1)[1] in app.config['ALLOWED_EXTENSIONS']
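
Assuming the configuration above, the helper behaves as follows (hypothetical file names, shown as an interactive session):

>>> allowed_file('background.png')
True
>>> allowed_file('script.sh')
False
>>> allowed_file('README')
False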

 

After getting the uploaded file from the user, we use the above function to check that it has an allowed file type, store its name in the variable filename, and then move the file to the upload folder to save it.

The upload function checks that the file name is safe and removes unsupported characters (the secure_filename call), then moves the file from a temporary location to the upload folder. After moving, it renames the file to wallpaper so that the download link is always the same; this link is used in the Meilix build script to download the file from the server.

@app.route('/upload', methods=['POST'])  # route assumed from the form's action attribute
def upload():
    file = request.files['file']
    if file and allowed_file(file.filename):
        # strip unsupported characters from the user-supplied name
        filename = secure_filename(file.filename)
        file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
        # always keep the file under the same name so the download link never changes
        os.rename(os.path.join(app.config['UPLOAD_FOLDER'], filename),
                  os.path.join(app.config['UPLOAD_FOLDER'], 'wallpaper'))
        filename = 'wallpaper'
        return redirect(url_for('uploaded_file', filename=filename))

 

At this point, we have only uploaded the wallpaper and renamed the uploaded file to 'wallpaper'. The file cannot yet be accessed from outside the server; trying to do so results in a 403 error. To make it available, the uploaded file needs to be registered and then hosted using the code snippet below.

We can also register uploaded_file as build_only rule and use the SharedDataMiddleware.

@app.route('/uploads/<filename>')
def uploaded_file(filename):
    return send_from_directory(app.config['UPLOAD_FOLDER'],filename)

The hosted wallpaper is used by Meilix in Travis CI to generate ISO using the download link which remains same for the uploaded wallpaper.

Why should we use the secure_filename() function?

Just imagine someone sends the following information as the filename to your app.

filename = "../../../../home/username/.sh"

 

If the number of ../ segments is right and you joined this with your UPLOAD_FOLDER, the attacker might be able to modify a file on the server's filesystem that he or she should not be able to modify.

Now, let's look at how the function works.

>>> secure_filename('../../../../home/username/.sh')
'home_username_.sh'

Improving the uploads

We can add validation for the size of the file to be uploaded, so that if a user tries to upload a file that is too big, it doesn't put extra load on the server.

from flask import Flask
app = Flask(__name__)
# reject request bodies larger than 16 MB
app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024
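
If a request exceeds this limit, Flask aborts it with HTTP status 413 (RequestEntityTooLarge). As a small sketch building on the app above (the handler name and error message are ours), such requests can be answered with a friendlier response:

from flask import jsonify

@app.errorhandler(413)
def wallpaper_too_large(error):
    # triggered when the uploaded file exceeds MAX_CONTENT_LENGTH
    return jsonify(error="Uploaded wallpaper is too large (max 16 MB)"), 413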

Resources


Sending Data between components of SUSI MagicMirror Module

SUSI MagicMirror module is a module to add the SUSI assistant right on your MagicMirror. The software for MagicMirror consists of an Electron app to which modules can be added easily. Since there are many modules, there may be functionality that needs interaction between various modules through transfer of information. MagicMirror also provides a node_helper script that lets a module perform background tasks. Therefore, a mechanism to transfer information from node_helper to the various components of a module is also needed.

MagicMirror provides an inbuilt module notification system that can be used to send notification across the modules and a socket notification system to send information between node_helper and various components of the system.

Our codebase for SUSI MagicMirror is divided mainly into two parts: a Main module that handles the whole process of hotword detection, speech recognition, calling the SUSI API and saving audio after Text to Speech, and a Renderer module which manages the display of content on the mirror screen and plays back the file obtained by speech synthesis. Plainly put, the Main module handles the backend logic of the application and the Renderer handles the frontend. The Main and Renderer modules work on different layers of the application, so we need a mechanism to facilitate communication between them. A schematic of the required flow can be highlighted as:

As you can see in the above diagram, we need to transfer a lot of information between the components. We display animation and text based on the current state of recognition in the Renderer module, so we need to transfer this information frequently. This task is accomplished by utilizing the inbuilt socket notification system in MagicMirror. For every event, such as when the system enters the listening, busy or recognized-speech state, we need to pass a message to the renderer. To achieve this, we made a rendererSend function to send notifications to the renderer.

const rendererSend =  (event: NotificationType , payload: any) => {
   this.sendSocketNotification(event, payload);
}

This function takes an event and a payload as arguments. The event tells which event occurred and the payload is any data that we wish to send. This method in turn calls the method provided by the MagicMirror module to send socket notifications within the module.
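
The NotificationType used above is simply a union of the states the mirror can be in; a plausible definition (ours, not necessarily the exact one in the repository) matching the states handled later would be:

// possible states of the assistant that the renderer needs to know about
type NotificationType = "idle" | "listening" | "busy" | "recognized" | "speak";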

When certain events occur, like when the system enters the busy or listening state, we trigger a rendererSend call to send a socket notification to the renderer. The rendererSend method is supplied in the State Machine components available to every state. Sending notifications can then be done with a code snippet like the following:

// system enters busy state
this.components.rendererSend("busy", {});
// send speech recognition hypothesis text to renderer
this.components.rendererSend("recognized", {text: recognizedText});
// send susi api output json to renderer to display interactive results while Speech Output is performed
this.components.rendererSend("speak", {data: susiResponse});

The socket notification sent via the above method is received in the SUSI module via a callback called socketNotificationReceived. We need to define this callback while registering the module with MagicMirror. So, we register the MMM-SUSI-AI module and add a definition for the socketNotificationReceived method.

Module.register("MMM-SUSI-AI", {
//other function definitions
***
   // define socketNotificationReceived function
   socketNotificationReceived: function (notification, payload) {
       susiMirror.receivedNotification(notification, payload);
   },
***
});

In this way, we forward every notification received to the susiMirror object in the renderer module by calling its receivedNotification method.

We can now receive all the notifications in the SusiMirror and update the UI. To handle notifications, we define the receivedNotification method as follows:

public receivedNotification(type: NotificationType, payload: any): void {

   this.visualizer.setMode(type);
   switch (type) {
       case "idle":
           // handle idle state
           break;
       case "listening":
           // handle listening state
           break;
       case "busy":
           // handle busy state
           break;
       case "recognized":
           // handle recognized state. This notification also contains a payload with the hypothesis text
           break;
       case "speak":
           // handle speaking state. We need to play back the audio file and display text on screen for the SUSI output. The notification payload contains the SUSI response
           break;
   }
}

In this way, we utilize the Socket Notification System provided by the MagicMirror Electron Application to send data across the components of Magic Mirror module for SUSI AI.

Resources


Shifting from Java to Kotlin in SUSI Android

SUSI Android (https://github.com/fossasia/susi_android) is written in Java. After Google's announcement of official support for Kotlin as a first-class language for Android development, we decided to shift to Kotlin, as it is more robust and code friendly than Java.

Advantages of Kotlin over Java

  1. Kotlin is a null-safe language. Instances are non-nullable by default unless declared otherwise, which helps ensure the developer doesn't run into a NullPointerException (see the short sketch after the link below).
  2. Kotlin provides a way to declare extension functions, similar to C#. We can use these functions the same way we use the member functions of our class.
  3. Kotlin also provides support for lambda functions and other higher-order functions.

For more details refer to this link.
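
As a quick, standalone illustration of the first two points (this sketch is ours, not code from the SUSI app):

// extension function: adds a shout() method to the existing String class
fun String.shout(): String = this.uppercase() + "!"

fun main() {
    // a plain String can never hold null; nullable types are declared with '?'
    val nullableName: String? = null

    // the safe-call operator returns null instead of throwing a NullPointerException
    println(nullableName?.length)      // prints "null"

    // the Elvis operator supplies a default for the null case
    val name: String = nullableName ?: "susi"
    println(name.shout())              // prints "SUSI!"
}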

After seeing the above points, it is clear that Kotlin is more effective than Java and that there is no harm in switching the code from Java to Kotlin. Let's now see the implementation in SUSI Android.

Implementation in Susi Android

In the SUSI Android app we are implementing the MVP design with Kotlin. We are converting the code from Java to Kotlin one activity at a time. The advantage of Kotlin here is that it is fully interoperable with Java at all times, allowing the developer to change the code bit by bit instead of all at once. Let's now look at the SignUp Activity implementation in SUSI Android.

The SignUpView interface contains all the functions related to the view.

interface ISignUpView {

   fun alertSuccess()
   fun alertFailure()
   fun alertError(message: String)
   fun setErrorEmail()
   fun setErrorPass()
   fun setErrorConpass(msg: String)
   fun setErrorUrl()
   fun enableSignUp(bool: Boolean)
   fun clearField()
   fun showProgress()
   fun hideProgress()
   fun passwordInvalid()
   fun emptyEmailError()
   fun emptyPasswordError()
   fun emptyConPassError()
}

The SignUpActivity implements the view interface in the following way. The view is responsible for all the interaction of the user with the UI elements of the app. It does not contain any business logic related to the app.

class SignUpActivity : AppCompatActivity(), ISignUpView {

   var signUpPresenter: ISignUpPresenter? = null
   var progressDialog: ProgressDialog? = null

   override fun onCreate(savedInstanceState: Bundle?) {
       super.onCreate(savedInstanceState)
       setContentView(R.layout.activity_sign_up)
       addListeners()
       setupPasswordWatcher()

       progressDialog = ProgressDialog(this@SignUpActivity)
       progressDialog?.setCancelable(false)
       progressDialog?.setMessage(this.getString(R.string.signing_up))

       signUpPresenter = SignUpPresenter()
       signUpPresenter?.onAttach(this)
   }

   fun addListeners() {
       showURL()
       hideURL()
       signUp()
   }

   override fun onOptionsItemSelected(item: MenuItem): Boolean {
       if (item.itemId == android.R.id.home) {
           finish()
           return true
       }
       return super.onOptionsItemSelected(item)
   }
}

Now we will see the implementation of models in Susi Android in Kotlin and compare it with Java.

Let's first see the implementation in Java.

public class WebSearchModel extends RealmObject {

   private String url;
   private String headline;
   private String body;
   private String imageURL;

   public WebSearchModel() {
   }

   public WebSearchModel(String url, String headline, String body, String imageUrl) {
       this.url = url;
       this.headline = headline;
       this.body = body;
       this.imageURL = imageUrl;
   }

   public void setUrl(String url) {
       this.url = url;
   }

   public void setHeadline(String headline) {
       this.headline = headline;
   }

   public void setBody(String body) {
       this.body = body;
   }

   public void setImageURL(String imageURL) {
       this.imageURL = imageURL;
   }

   public String getUrl() {
       return url;
   }

   public String getHeadline() {
       return headline;
   }

   public String getBody() {
       return body;
   }

   public String getImageURL() {
       return imageURL;
   }
}
The same model can be written in Kotlin as:

open class WebSearchModel : RealmObject {

   var url: String? = null
   var headline: String? = null
   var body: String? = null
   var imageURL: String? = null

   constructor() {}

   constructor(url: String, headline: String, body: String, imageUrl: String) {
       this.url = url
       this.headline = headline
       this.body = body
       this.imageURL = imageUrl
   }
}

You can see the difference for yourself: with Kotlin properties, the boilerplate getters and setters disappear and the code shrinks drastically.

For diving more into the code, we can refer to the GitHub repo of Susi Android (https://github.com/fossasia/susi_android).

Resources


Implementing Toolbar(ActionBar) in SUSI Android

SUSI is an artificial intelligence for interactive chat bots. The SUSI Android app (https://github.com/fossasia/susi_android) has a toolbar that gives access to different user options right in the menu. These include functionalities such as search, login, logout and rating the app. The material design toolbar provides a clean and efficient way to do these things. We will see how we implemented this in the SUSI Android app.

To implement the ActionBar in the Android app, we start by adding the dependency below to the gradle file.

compile 'com.android.support:appcompat-v7:22.2.0'

For more information regarding setting up ActionBar in your own project please refer to this link.

Now we will add the Toolbar in our XML file as shown below.

<android.support.v7.widget.Toolbar xmlns:app="http://schemas.android.com/apk/res-auto"
   xmlns:android="http://schemas.android.com/apk/res/android"
   style="@style/Theme.AppCompat"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:background="?attr/colorPrimary"
   android:minHeight="@dimen/abc_action_bar_default_height_material">

   <ImageView
       android:id="@+id/toolbar_img"
       android:layout_width="@dimen/toolbar_logo_width"
       android:layout_height="@dimen/toolbar_logo_height"
       android:layout_gravity="center"
       app:srcCompat="@drawable/ic_susi_white" />

</android.support.v7.widget.Toolbar>

On adding the above code, we can see a preview of the Toolbar, which appears like this.

 

Now we will see how to implement the menu options. In the menu_main.xml we can add options to be shown in the toolbar.

<menu xmlns:android="http://schemas.android.com/apk/res/android"
   xmlns:app="http://schemas.android.com/apk/res-auto"
   xmlns:tools="http://schemas.android.com/tools"
   tools:context=".activities.MainActivity">

   <item
       android:id="@+id/action_search"
       android:icon="@android:drawable/ic_menu_search"
       android:orderInCategory="200"
       android:title="@string/search"
       app:actionViewClass="android.support.v7.widget.SearchView"
       app:showAsAction="always" />

   <item
       android:id="@+id/down_angle"
       android:icon="@drawable/ic_arrow_down_24dp"
       android:title="Search Down"
       android:visible="false"
       app:showAsAction="always" />

   <item
       android:id="@+id/up_angle"
       android:icon="@drawable/ic_arrow_up_24dp"
       android:title="Search Up"
       android:visible="false"
       app:showAsAction="always" />

   <item
       android:id="@+id/action_settings"
       android:orderInCategory="100"
       android:title="@string/action_settings"
       app:showAsAction="never" />

   <item
       android:id="@+id/wall_settings"
       android:orderInCategory="100"
       android:title="@string/action_wall_settings"
       app:showAsAction="never" />

   <item
       android:id="@+id/action_share"
       android:orderInCategory="101"
       android:title="@string/action_share"
       app:showAsAction="never" />

</menu>
We can define different items in the ActionBar as shown above. We can also control how each item is displayed: items with app:showAsAction="never" are not shown as icons but appear in the overflow menu, like this, while items with android:visible="false" stay hidden until they are made visible programmatically.

Now we will see how we can add functionality to the options present in the menu. To add listeners to the options, we override the onCreateOptionsMenu function and add the logic for the different options there, as shown below; clicks on the simple menu items can then be handled in onOptionsItemSelected (see the sketch after the snippet).

@Override
public boolean onCreateOptionsMenu(final Menu menu) {
    this.menu = menu;
    getMenuInflater().inflate(R.menu.menu_main, menu);

    if (PrefManager.getBoolean(Constant.ANONYMOUS_LOGGED_IN, false)) {
        menu.findItem(R.id.action_logout).setVisible(false);
        menu.findItem(R.id.action_login).setVisible(true);
    } else if (!(PrefManager.getBoolean(Constant.ANONYMOUS_LOGGED_IN, false))) {
        menu.findItem(R.id.action_logout).setVisible(true);
        menu.findItem(R.id.action_login).setVisible(false);
    }

    searchView = (SearchView) MenuItemCompat.getActionView(menu.findItem(R.id.action_search));
    final EditText editText = (EditText) searchView.findViewById(android.support.v7.appcompat.R.id.search_src_text);
    searchView.setOnSearchClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View view) {
            toolbarImg.setVisibility(View.GONE);
            ChatMessage.setVisibility(View.GONE);
            btnSpeak.setVisibility(View.GONE);
            sendMessageLayout.setVisibility(View.GONE);
            isEnabled = false;
        }
    });

    return true;
}
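
Clicks on the overflow items are then handled in onOptionsItemSelected. A minimal sketch (the SettingsActivity target and the share text are placeholders, not the exact code used in the app) could look like:

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
        case R.id.action_settings:
            // open the (hypothetical) settings screen
            startActivity(new Intent(this, SettingsActivity.class));
            return true;
        case R.id.action_share:
            // share a short text about the app
            Intent share = new Intent(Intent.ACTION_SEND);
            share.setType("text/plain");
            share.putExtra(Intent.EXTRA_TEXT, "Check out the SUSI.AI Android app!");
            startActivity(Intent.createChooser(share, "Share via"));
            return true;
        default:
            return super.onOptionsItemSelected(item);
    }
}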

For diving more into the code, we can refer to the GitHub repo of Susi Android (https://github.com/fossasia/susi_android).

Resources


Implementation of Text to Speech alongside Hotword Detection in SUSI Android App

In this blog post, we'll learn how to implement Text to Speech. Now you may be wondering what is so difficult about that: one can easily find many tutorials on TTS and look at its official documentation, but there's a catch here. In this blog post I'll describe how to implement Text to Speech alongside hotword detection.

Let me give you a rough idea about how hotword detection works in the SUSI Android app; for more details, read my other blog here on Hotword Detection. There is a constantly running background recording thread which detects when the hotword is spoken. Now, you may be thinking, why do we need to stop that thread for text to speech? Well, there are two reasons:

  1. Recording while playing causes problems with the mic and may crash the app.
  2. Even if we implemented that, what would happen if the answer contains the word "susi"? The hotword would be detected because the speech output contained the word "susi" (which is our hotword).

So, to avoid these problems, we had to come up with a way to stop hotword detection only for the time when SUSI is giving speech output, and resume it immediately once the speech output is finished.

Let’s see how we did that.

Implementation

Check out this video to see how this work in the app

https://youtu.be/V9N6K4SzpXw

Initiating the TTS engine

The first task is to initialize the Text to Speech engine. This process takes some time, so it is done at app start on a new Handler.

new Handler().post(new Runnable() {
   @Override
   public void run() {
       textToSpeech = new TextToSpeech(getApplicationContext(), new TextToSpeech.OnInitListener() {
           @Override
           public void onInit(int status) {
               if (status != TextToSpeech.ERROR) {
                   Locale locale = textToSpeech.getLanguage();
                   textToSpeech.setLanguage(locale);
               }
           }
       });
   }
});

Check Audio Focus

The next step is to check whether audio focus is granted. Suppose there is some music playing in the background; in that case we won't be able to give voice output. So, we check audio focus using the code below.

final AudioManager audiofocus = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
 int result = audiofocus.requestAudioFocus(afChangeListener, AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
//DO WORK HERE
}

Using OnAudioFocusChangeListener, we keep a track of when we have access to give speech output and when we don’t.

private AudioManager.OnAudioFocusChangeListener afChangeListener =
       new AudioManager.OnAudioFocusChangeListener() {
           public void onAudioFocusChange(int focusChange) {
               if (focusChange == AUDIOFOCUS_LOSS_TRANSIENT) {
                   textToSpeech.stop();
               } else if (focusChange == AudioManager.AUDIOFOCUS_GAIN) {
                   // Resume playback
               } else if (focusChange == AudioManager.AUDIOFOCUS_LOSS) {
                   textToSpeech.stop();
               }
           }
       };

Converting the given text to speech

Now that we have audio focus, we just have to convert the given text to speech using the textToSpeech.speak() method.

private void voiceReply(final String reply) {
    Handler handler = new Handler();
    handler.post(new Runnable() {
        @Override
        public void run() {
            textToSpeech.speak(reply, TextToSpeech.QUEUE_FLUSH, ttsParams);
        }
    });
}

Abandon Audio Focus

Now we are done with speech output, it’s time we abandon audio focus.

audiofocus.abandonAudioFocus(afChangeListener);

TTS alongside Hotword Detection

Okay so now the major part. How do we check when to stop hotword detection thread and when to resume it? How do we check if Speech output is finished?

The answer to these questions is textToSpeech.setOnUtteranceProgressListener. The UtteranceProgressListener requires overriding 3 methods:

  1. onStart: indicates the start of text to speech conversion, which means it's time to stop the hotword detection thread.
  2. onDone: called when every word of the provided text has been converted to speech, so we simply resume hotword detection.
  3. onError: called when there is an error and the text is not converted to speech. We need to resume hotword detection here too.
textToSpeech.setOnUtteranceProgressListener(new UtteranceProgressListener() {
                       @Override
                       public void onStart(String s) {
                           if(recordingThread !=null && isDetectionOn){
                               recordingThread.stopRecording();
                               isDetectionOn = false;
                           }
                       }

                       @Override
                       public void onDone(String s) {
                           if(recordingThread != null && !isDetectionOn && checkHotwordPref()) {
                               recordingThread.startRecording();
                               isDetectionOn = true;
                           }
                       }

                       @Override
                       public void onError(String s) {
                           if(recordingThread != null && !isDetectionOn && checkHotwordPref()) {
                               recordingThread.startRecording();
                               isDetectionOn = true;
                           }
                       }
                   });

                   HashMap<String,String> ttsParams = new HashMap<String, String>();
                   ttsParams.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID,
                           MainActivity.this.getPackageName());

Summary

So, the main thing required for implementation of Text to Speech alongside Hotword detection is a way to control stopping and resuming hotword detection when Text to speech is in process. For that we used UtteranceProgressListener of TextToSpeech class which makes it so easier to do the task we required. You may follow this same approach as well or if you have a better approach, open an issue here.

Resources

  1. Official Documentation of TextToSpeech https://developer.android.com/reference/android/speech/tts/TextToSpeech.html
  2. Documentation of UtteranceProgressListener https://developer.android.com/reference/android/speech/tts/UtteranceProgressListener.html
  3. Blog link to Hotword Detection https://docs.google.com/document/d/1auTyuk32i15Rw94TOkrSruRJ9LZVtjcThoWVJkvnAz8/edit?usp=sharing

Custom UI Implementation for Web Search and RSS actions in SUSI iOS Using Kingfisher for Image Caching

The SUSI server is an AI-powered server which is capable of responding with intelligent answers to users' queries. The answers may come either as a web search or as an RSS feed; two of the action types are accordingly websearch and rss. These actions, as the names suggest, respond to queries with search results from the web, which are rendered in the clients. In order to use these action types and display them in the SUSI iOS client, we first need to parse the actions, look for these action types, and then create a custom UI to display them.

To start with, we need to send the query to the server to receive an intelligent response. This response is parsed into the different action types supported by the server and saved into the relevant objects. Here, we check the action types by looping through the answers array containing the actions and, based on the type, save the data for that action.

if type == ActionType.rss.rawValue {
   message.actionType = ActionType.rss.rawValue
   message.rssData = RSSAction(data: data, actionObject: action)
} else if type == ActionType.websearch.rawValue {
   message.actionType = ActionType.websearch.rawValue
   message.message = action[Client.ChatKeys.Query] as? String ?? ""
}

Here, we parsed the response from the server, looked for the rss and websearch action types, and saved the data received for each action type into its own object.

Next, when a message object is created, we append it to the dataSource and use the collection view's `insertItems(at: [IndexPath])` method to insert it into the view at a particular index. Before adding them, we need to create a custom UI for these actions. This UI consists of a collection view, scrollable in the horizontal direction, inside a collection view cell. To start, we create a new class called `WebsearchCollectionView`, which is a `UIView` containing a `UICollectionView`.

We start by adding a collection view to the UIView by overriding the `init` method. We declare a collection view using a flow layout with the scroll direction set to `horizontal`, hide the scroll indicators, and assign the delegate and dataSource to `self`, roughly as in the sketch below.
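
A minimal sketch of that setup (property names like `collectionView` and `cellId` are assumptions, not necessarily the exact names used in the app):

import UIKit

class WebsearchCollectionView: UIView, UICollectionViewDelegate, UICollectionViewDataSource {

    let cellId = "websearchCell"

    lazy var collectionView: UICollectionView = {
        // horizontal flow layout so the result cards scroll sideways inside the chat cell
        let layout = UICollectionViewFlowLayout()
        layout.scrollDirection = .horizontal

        let cv = UICollectionView(frame: .zero, collectionViewLayout: layout)
        cv.showsHorizontalScrollIndicator = false
        cv.showsVerticalScrollIndicator = false
        cv.register(UICollectionViewCell.self, forCellWithReuseIdentifier: self.cellId)
        cv.delegate = self
        cv.dataSource = self
        return cv
    }()

    override init(frame: CGRect) {
        super.init(frame: frame)
        addSubview(collectionView)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        collectionView.frame = bounds
    }

    // dataSource stubs; the real implementations based on the message object follow below
    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
        return 0
    }

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        return collectionView.dequeueReusableCell(withReuseIdentifier: cellId, for: indexPath)
    }
}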

Now to populate this collection view, we need to specify the number of items that will show up. For this, we make use of the `message` variable declared. We use the `websearchData` in case of websearch action and `rssData` otherwise.

Now to specify the number of cells, we use the method below, which returns the number of rss or websearch action objects and defaults to 0.

func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
    if let rssData = message?.rssData {
        return rssData.count
    } else if let webData = message?.websearchData {
        return webData.count
    }
    return 0
}

We display the title, description and image for each object, so we need to create a UI for the cells. Let's start by creating a custom collection view cell with the imageview and 2 labels for title and description.

The imageview is given a contentMode of `aspectFit` and assigned a placeholder image in case the image doesn’t exist. The title and description labels are assigned the same font size, the title being bolder and both are center aligned.

class WebsearchCell: BaseCell {

   var imageView: UIImageView = {
       let iv = UIImageView()
       iv.contentMode = .scaleAspectFit
       iv.image = UIImage(named: "placeholder")
       return iv
   }()

   let titleLabel: UILabel = {
       let label = UILabel()
       label.textColor = .black
       label.font = UIFont.boldSystemFont(ofSize: 14)
       label.textAlignment = .center
       label.numberOfLines = 2
       label.backgroundColor = Color.grey.lighten3
       return label
   }()

   let descriptionLabel: UILabel = {
       let label = UILabel()
       label.font = UIFont.systemFont(ofSize: 14)
       label.textAlignment = .center
       return label
   }()

}

Next, we add constraints for each of these views, laying out the image view alongside the title and description labels and adding a small margin on both sides of the cell for a cleaner UI.

addSubview(imageView)
addSubview(titleLabel)
addSubview(descriptionLabel)
descriptionLabel.numberOfLines = 5
addConstraintsWithFormat(format: "H:|-4-[v0(\(frame.width * 0.4))]-4-[v1]-4-|", views: imageView, titleLabel)
addConstraintsWithFormat(format: "|-\(frame.width * 0.4 + 8)-[v0]-4-|", views: descriptionLabel)
addConstraintsWithFormat(format: "V:|-4-[v0]-4-|", views: imageView)
addConstraintsWithFormat(format: "V:|-4-[v0(44)]-4-[v1]-4-|", views: titleLabel, descriptionLabel)

Now to use this custom cell, we first need to register it with the collection view and then we can use it easily in the `cellForItemAt` method. Since we are using Kingfisher for image caching, we use the `.kf.setImage` method to download the image from a URL and cache it as soon as it is downloaded.

collectionView.register(WebsearchCell.self, forCellWithReuseIdentifier: cellId)

func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: cellId, for: indexPath) as? WebsearchCell {
        cell.backgroundColor = .white

        if message?.actionType == ActionType.rss.rawValue {
            let feed = message?.rssData?.rssFeed[indexPath.item]
            cell.titleLabel.text = feed?.title
            cell.descriptionLabel.text = feed?.desc
            cell.imageView.kf.setImage(with: URL(string: feed?.image ?? ""))
        } else if message?.actionType == ActionType.websearch.rawValue {
            let webData = message?.websearchData?[indexPath.item]
            cell.titleLabel.text = webData?.title
            cell.descriptionLabel.text = webData?.desc.html2String
            cell.imageView.kf.setImage(with: URL(string: webData?.image ?? ""))
        }
        return cell
    } else {
        return UICollectionViewCell()
    }
}

We check the action type and assign data based on that. Also, for the images, if they don’t exist a placeholder is added.

Since we are done with the custom UI of the cell, we need to add it to the chat cell. For that, we add this `UIView` as a subview. For reusability, it was extracted into a separate cell class which calls the `prepareForReuse()` method, followed by adding the subview, setting constraints for the cell, and assigning the message object.

class RSSCell: ChatMessageCell {

   var message: Message? {
       didSet {
           self.addWebsearchView()
       }
   }

   lazy var websearchView: WebsearchCollectionView = {
       let view = WebsearchCollectionView()
       return view
   }()

   override func setupViews() {
       super.setupViews()
       prepareForReuse()
   }

   func addWebsearchView() {
       self.addSubview(websearchView)
       self.addConstraintsWithFormat(format: "H:|[v0]|", views: websearchView)
       self.addConstraintsWithFormat(format: "V:[v0(135)]", views: websearchView)
       websearchView.message = message
   }

}
if message.actionType == ActionType.rss.rawValue || message.actionType == ActionType.websearch.rawValue {
   if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: ControllerConstants.rssCell, for: indexPath) as? RSSCell {
       cell.message = message
       return cell
   } else {
       return UICollectionViewCell()
   }
}

This is all we need to add a custom UI to a chat cell in SUSI iOS, very simple and clean.

Check the screenshot below, the app in action.

Resources:


Hotword Recognition in SUSI iOS

Hot word recognition is a feature by which a specific action can be performed each time a specific word is spoken. There is a service called Snowboy which helps us achieve this for various clients (for ex: iOS, Android, Raspberry pi, etc.). It is basically a DNN based hotword recognition toolkit.

In this blog, we will learn how to integrate the snowboy hotword detection wrapper in the SUSI iOS client. This service can be used in any open source project but for using it commercially, a commercial license needs to be obtained.

The following files, provided by the service itself, need to be added to the project: snowboy-detect.h, libsnowboy-detect.a, and a trained model file, which can be created using their online service snowboy.kitt.ai. For the sake of this blog, we will be using the hotword "Susi"; the model file can be found here.

The way snowboy works is that speech is recorded for a few seconds and this data is checked against a model already trained for a specific hotword; if snowboy returns 1, the word has been successfully detected, otherwise it hasn't.

We start with the creation of a wrapper class in Objective-C, which can be found here (wrapper), along with the bridging header needed in case this is added to a Swift project. The wrapper contains methods for setting sensitivity and audio gain and for running the detection on a buffer. It is a wrapper class built on top of the snowboy-detect.h header file.

Let’s initialize the service and run it. Below are the steps followed to enable hotword recognition and print out whether it successfully detected the hotword or not:

  • Create a ViewController class with extensions
    • AVAudioRecorderDelegate
    • AVAudioPlayerDelegate

since we will be recording speech.

  • Import AVFoundation
  • Create a basic layout containing a label which shows whether the hotword was detected or not, create the corresponding `IBOutlet` in the ViewController, and add a button to trigger the start and stop of recognition.
  • Create the following variables:
    • let WAKE_WORD = "Susi" // hotword used
    • let RESOURCE = Bundle.main.path(forResource: "common", ofType: "res")
    • let MODEL = Bundle.main.path(forResource: "susi", ofType: "umdl") // path where the model file is stored
    • var wrapper: SnowboyWrapper! = nil // wrapper instance for running detection
    • var audioRecorder: AVAudioRecorder! // audio recorder instance
    • var audioPlayer: AVAudioPlayer!
    • var soundFileURL: URL! // stores the URL of the temp recording file
    • var timer: Timer! // timer to fire a function after an interval
    • var isStarted = false // variable to check if audio recorder already started
  • In `viewDidLoad`, initialize the wrapper and set the sensitivity and audio gain. According to the docs, recognition works best when sensitivity is set to `0.5` and audio gain to `1.0`.
override func viewDidLoad() {
    super.viewDidLoad()
    wrapper = SnowboyWrapper(resources: RESOURCE, modelStr: MODEL)
    wrapper.setSensitivity("0.5")
    wrapper.setAudioGain(1.0)
}
  • Create an `IBAction` for the button to start recognition. This action will be used to start or stop the recording in which the action toggles based on the `isStarted` variable. When true, recording is stopped and the timer invalidated else a timer is started which calls the `startRecording` method with an interval of 4 seconds.
@IBAction func onClickBtn(_ sender: Any) {
  if (isStarted) {
    stopRecording()
    timer.invalidate()
    btn.setTitle("Start", for: .normal)
    isStarted = false
  } else {
    timer = Timer.scheduledTimer(timeInterval: 4, target: self, 
    selector: #selector(startRecording), userInfo: nil, repeats: true)
    timer.fire()
    btn.setTitle("Stop", for: .normal)
    isStarted = true
  }
}
  • Next, we add the start and stop recording methods.
    • First, a temp file is created which stores the recorded audio output
    • After which, necessary record configurations are made such as setting the sampling rate.
    • The recording is then started and the output stored in the temp file.
func startRecording() {
  do {
    let fileMgr = FileManager.default
    let dirPaths = fileMgr.urls(for: .documentDirectory, in: .userDomainMask)
    soundFileURL = dirPaths[0].appendingPathComponent("temp.wav")
    let recordSettings = [AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
                          AVEncoderBitRateKey: 128000,
                          AVNumberOfChannelsKey: 1,
                          AVSampleRateKey: 16000.0] as [String : Any]
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(AVAudioSessionCategoryRecord)
    try audioRecorder = AVAudioRecorder(url: soundFileURL,
                                        settings: recordSettings as [String : AnyObject])
    audioRecorder.delegate = self
    audioRecorder.prepareToRecord()
    audioRecorder.record(forDuration: 2.0)
    instructionLabel.text = "Speak wake word: \(WAKE_WORD)"
    print("Started recording...")
  } catch let error {
    print("Audio session error: \(error.localizedDescription)")
  }
}
  • The stop recording method stops the audioRecorder instance and updates the instruction label to show the same.
func stopRecording() {
  if (audioRecorder != nil && audioRecorder.isRecording) {
    audioRecorder.stop()
  }
  instructionLabel.text = "Stop"
  print("Stopped recording...")
}

The final recognition is done in the `audioRecorderDidFinishRecording` delegate method, which runs the snowboy detection function. This processes the audio recording in the temp file by reading it into a buffer and passing that buffer to the wrapper, which processes it and returns `1` if the hotword was successfully detected.

func runSnowboy() {
  let file = try! AVAudioFile(forReading: soundFileURL)
  let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, 
  sampleRate: 16000.0, channels: 1, interleaved: false)
  let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(file.length))
  try! file.read(into: buffer)
  let array = Array(UnsafeBufferPointer(start: 
  buffer.floatChannelData![0], count:Int(buffer.frameLength)))
  // print output
  let result = wrapper.runDetection(array, length: Int32(buffer.frameLength))
  print("Result: \(result)")
}
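
For completeness, a minimal sketch of the delegate method that ties recording and detection together might look like this (the implementation here is our own; it simply calls runSnowboy once the recorder finishes):

func audioRecorderDidFinishRecording(_ recorder: AVAudioRecorder, successfully flag: Bool) {
    // the short recording written to temp.wav is ready; run it through snowboy
    if flag {
        runSnowboy()
    } else {
        print("Recording was not successful")
    }
}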

To test this out, click the start button and speak different words; you will notice that once the hotword is spoken, a log with `Result: 1` is printed out.

The snowboy hotword recognition service also offers training a personalized model with the help of REST APIs, for which the docs can be found here. The complete project implementation can be found here.

Sources:


Adding React based World Mood Tracker to loklak Apps

loklak apps is a website that hosts various apps built using the loklak API. It uses static pages and angular.js to make API calls and show results to users. As a part of my GSoC project, I had to introduce the World Mood Tracker app using loklak's mood API. But since I had planned to work with React, I had to go off the track of typical app development in loklak apps and integrate a React app into apps.loklak.org.

In this blog post, I will be discussing how I introduced a React based app to apps.loklak.org and solved the problem of country-wise visualisation of mood related data on a World map.

Setting up development environment inside apps.loklak.org

After following the steps to create a new app in apps.loklak.org, I needed to add proper tools and libraries for smooth development of the World Mood Tracker app. In this section, I’ll be explaining the basic configuration that made it possible for a React app to be functional in the angular environment.

Pre-requisites

The most obvious prerequisite for the project was Node.js. I used node v8.0.0 during development of the app. Instead of npm, I decided to go with yarn because of offline caching and Internet speed issues in India.

Webpack and Babel

To begin with, I initiated yarn in the app directory inside project and added basic dependencies –

$ yarn init
$ yarn add webpack webpack-dev-server path
$ yarn add babel-loader babel-core babel-preset-es2015 babel-preset-react --dev

 

Next, I configured webpack to set an entry point and output path for the node project in webpack.config.js

module.exports = {
    entry: './js/index.js',
    output: {
        path: path.resolve('.'),
        filename: 'index_bundle.js'
    },
    ...
};

This tells webpack to look for ./js/index.js as the entry point while bundling. Similarly, I configured Babel with the es2015 and React presets –

{
  "presets":[
    "es2015", "react"
  ]
}

 

After this, I could define the module loaders in webpack.config.js. The loaders check for /\.js$/ and /\.jsx$/ files and assign them to babel-loader (with node_modules excluded), roughly as in the sketch below.
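
A sketch of that loader configuration (written in webpack 2/3 style; the exact keys may differ slightly between webpack versions):

// webpack.config.js (excerpt)
const path = require('path');

module.exports = {
    entry: './js/index.js',
    output: {
        path: path.resolve('.'),
        filename: 'index_bundle.js'
    },
    module: {
        rules: [
            // run .js and .jsx files through babel-loader, skipping node_modules
            { test: /\.jsx?$/, exclude: /node_modules/, use: 'babel-loader' }
        ]
    }
};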

React

After configuring the basic presets and loaders, I added React to dependencies of the project –

$ yarn add react react-dom

 

The React related files needed to be in the ./js/ directory so that webpack could bundle them. I used ./js/index.js to create a simple React app –

import React from 'react';
import ReactDOM from 'react-dom';

ReactDOM.render(
    <div>World Mood Tracker</div>,
    document.getElementById('app')
);

 

After this, I was in a stage where it was possible to use this app as a part of apps.loklak.org app. But to do this, I first needed to compile these files and bundle them so that the external app can use it.

Configuring the build target for webpack

In apps.loklak.org, we need to have a file by the name of index.html in the app’s root directory. Here, we also needed to place the bundled js properly so it could be included in index.html at app’s root.

HTML Webpack Plugin

Using html-webpack-plugin, I enabled auto building of project in the app’s root directory by using the following configuration in webpack.config.js

...
const HtmlWebpackPlugin = require('html-webpack-plugin');
const HtmlWebpackPluginConfig = new HtmlWebpackPlugin({
    template: './js/index.html',
    filename: 'index.html',
    inject: 'body'
});
module.exports = {
    ...
    plugins: [HtmlWebpackPluginConfig]
};

 

This would build index.html at app’s root that would be discoverable externally.

To enable bundling of the project using simple yarn build command, the following lines were added to package.json

{
  ..
  "scripts": {
    ..
    "build": "webpack -p"
  }
}

After a simple yarn build command, we can see the bundled js and html being created at the app root.

Using datamaps for visualization

Datamaps is a JS library which allows plotting of data on map using D3 as backend. It provides a simple interface for creating visualizations and comes with a handy npm installation –

$ yarn add datamaps

Map declaration and usage as state

A map from datamaps was used as state for the React component, which allowed fast rendering of changes in the map as the state of the React component changed –

export default class WorldMap extends React.Component {
    constructor() {
        super();
        this.state = {
            map: null
        };
    }
    render() {
        return (<div className={styles.container}>
                <div id="map-container"></div></div>)
    }
    componentDidMount() {
        this.setState({map: new Datamap({...})});
    }
    ...
}

 

The declaration of the map goes in the componentDidMount method because the map cannot be created until the div with id="map-container" is in the DOM; drawing it before the component has mounted would fail for that reason.

Defining data for countries

Data for every country had two components –

data = {
    positiveScore: someValue,
    negativeScore: someValue
}

 

This data is used to generate the popup for each country –

this.setState({
    map: new Datamap({
        ...
        geographyConfig: {
            ...
            popupTemplate: function (geo, data) {
                // Configure variables pScore so that it gives “No Data” when data.positiveScore is not set (similar for negative)
                return [
                    // Use pScore and nScore to generate results here
                    // geo.properties.name would give current country name
                ].join('');
            }
        }
    })
});

The result for countries with unknown data values looks something like this –

Conclusion

In this blog post, I explained about introducing a React based app in app.loklak.org’s angular based environment. I discussed the setup and bundling process of the project so it becomes available from the project’s external HTTP server.

I also discussed using datamaps as a visualisation tool for data about Tweets. The app was first introduced in pull request fossasia/apps.loklak.org#189 and was improved step by step in subsequent patches.

Resources


Using Picasso to Show Images in SUSI Android

Important skills of SUSI.AI are to display web search queries, maps of any location, and lists of relevant information on a topic. This blog post covers why Glide was replaced by Picasso to show images related to these action types and how it is implemented in SUSI Android. Picasso is a powerful open source image downloading and caching library developed by Square.

Why was Glide replaced by Picasso to show images in SUSI Android?

Previously we used the Glide library to show previews in SUSI Android, but we replaced it because it kept producing the following error.

java.lang.IllegalArgumentException: You cannot start a load for a destroyed activity
    at com.bumptech.glide.manager.RequestManagerRetriever.get(RequestManagerRetriever.java:102)
    at com.bumptech.glide.manager.RequestManagerRetriever.get(RequestManagerRetriever.java:87)
    at com.bumptech.glide.Glide.with(Glide.java:629)
    at org.fossasia.susi.ai.adapters.recycleradapters.WebSearchAdapter.onBindViewHolder(WebSearchAdapter.java:74)

The reason for this error is that when the activity is destroyed and recreated, the context used by Glide is the old one, belonging to the activity that has already been destroyed.

Glide.with(context).load(imageList.get(0))

One solution to this error is to use context.getApplicationContext(), but that is a bad idea. Another solution is to replace Glide with Picasso, and the latter is the better choice because Picasso is also a very good image downloading and caching library.

To use Picasso in your project, you have to add the dependency in the build.gradle (Module) file.

dependencies {
   compile "com.squareup.picasso:picasso:2.4.0"
}

How Picasso is used for different action types

Map

"actions": [
     {
       "type": "map",
       "latitude": "1.2896698812440377",
       "longitude": "103.85006683126556",
       "zoom": "13"
     }
   ]

The link we used to retrieve the image URL is

http://staticmap.openstreetmap.de/staticmap.php?center=latitude,longitude&zoom=zoom&size=lengthXbreadth

Picasso loads the image from this URL and shows it in the imageview. Here mapImage is the imageview in which the map image is shown.

Picasso.with(currContext).load(mapHelper.getMapURL())
        .into(mapImage, new com.squareup.picasso.Callback() {
            @Override
            public void onSuccess() {
                pointer.setVisibility(View.VISIBLE);
            }

            @Override
            public void onError() {
                Log.d("Error", "map image can't be loaded");
            }
        });

WebSearch

When we send a query like "Search for fog", we get a 'query' field in the reply from the server:

"query": "fog"

Now we use this query to retrieve the image URL, which is then given to Picasso to show the image. Picasso loads this image into the previewImageView imageview. The image URL is retrieved using the DuckDuckGo API. We are using the URL

https://api.duckduckgo.com/?format=json&pretty=1&q=query&ia=meanings

It gives a JSON response which contains the image URL.

Picasso.with(context).load(iconUrl)
        .into(holder.previewImageView, new com.squareup.picasso.Callback() {
            @Override
            public void onSuccess() {
                Log.d("Success", "image loaded successfully");
            }

            @Override
            public void onError() {
                holder.previewImageView.setVisibility(View.GONE);
            }
        });

Here also, com.squareup.picasso.Callback is used to find out whether the image loaded successfully or not.

RSS

When we send a query like "dhoni", we get a 'link' in the reply from the server:

"title": "Dhoni",
"description": "",
"link": "http://de.wikipedia.org/wiki/Dhoni"

We use this link with the android-link-preview library to retrieve a relevant image URL, and then Picasso uses this URL to load the image into the imageview previewImageView.

Picasso.with(currContext).load(imageList.get(0))
      .fit().centerCrop()
      .into(previewImageView);

Reference
