Utilizing Readiness Probes for loklak Dependencies in Kubernetes

When we use an application and fail to connect to it, we do not give up; we retry connecting to it again and again. In reality, however, applications themselves often break instantly: when an app connects to an API or database that is not ready yet, it gets upset and refuses to continue working. Something similar was happening with api.loklak.org.
We cannot really rewrite the whole application every time this problem occurs, so we need to define dependencies of some kind that can handle the situation rather than disappointing the users of the loklak app.

Solution:

The idea is to simply wait until a dependent API or backend of loklak is ready and only then start the loklak app. To achieve this, we used Kubernetes health checks.

Kubernetes health checks are divided into liveness and readiness probes.
The purpose of liveness probes is to indicate that your application is running.
Readiness probes are meant to check if your application is ready to serve traffic.
The right combination of liveness and readiness probes used with Kubernetes deployments can:

  • Enable zero downtime deploys
  • Prevent deployment of broken images
  • Ensure that failed containers are automatically restarted

Is the Pod Ready to be Used?

A Pod with a defined readiness probe won’t receive any traffic until the defined request can be fulfilled successfully. These health checks are defined through Kubernetes, so no changes are required in our services (APIs); we just need to set up a readiness probe for the APIs that the loklak server depends on. Here you can see the relevant part of the container spec you need to add (in this example we want to know when loklak is ready):

        readinessProbe:
          httpGet:
            path: /api/status.json
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 3

 

Readiness Probes

Updating loklak deployments without readiness probes, for example whenever something is pushed to development, can result in downtime, because old pods are replaced by new pods during a Kubernetes rolling update. If the new pods are misconfigured or somehow broken, that downtime lasts until you detect the problem and roll back.
With readiness probes, Kubernetes will not send traffic to a pod until its readiness probe succeeds. When updating a loklak deployment, it also leaves the old pods running until the probes have succeeded on the new copies. That means that if the new loklak server pods are broken in some way, they will never see traffic; instead, the old loklak server pods will continue to serve all traffic for the deployment.

Conclusion

Readiness probes are a simple solution to ensure that Pods with dependencies do not receive traffic before their dependencies are ready (in this case, the loklak server). This also works with more than one dependency.

Resources

Skill History Component in Susi Skill CMS

SUSI Skill CMS is an editor to write and edit skills easily. It is built on the ReactJS framework and follows an API-centric approach where the SUSI server acts as the API server. Using the Skill CMS we can browse the history of a skill, which gives us the commit ID, the commit message and the name of the author who made the changes to that skill. In this blog post, we will see how to add a skill revision history component to SUSI Skill CMS.

One text file represents one skill; it may contain several intents which all belong together. SUSI skills are stored in the susi_skill_data repository. We can access any skill based on a four-tuple of parameters: model, group, language and skill.

<Menu.Item key="BrowseRevision">
 <Icon type="fork" />
   Browse Skills Revision
 <Link to="/browseHistory"></Link>
</Menu.Item>

First, let’s create an option in the sidebar menu (shown above) and link it to the “/browseHistory” route created in index.js:

 <Route path="/browseHistory" component={BrowseHistory} />

Next we will add skill versioning using the endpoints provided by the SUSI server. To select a skill we will create a drop-down list; for this we will use SelectField, a component of Material-UI.

request('http://cors-anywhere.herokuapp.com/api.susi.ai/cms/getModel.json').then((data) => {
    console.log(data.data);
    data = data.data;
    for (let i = 0; i < data.length; i++) {
        models.push(<MenuItem value={i} key={data[i]} primaryText={`${data[i]}`}/>);
    }

    console.log(models);
});
 <SelectField
   floatingLabelText="Expert"
   style={{width: '50px'}}
   value={this.state.value}
   onChange={this.handleChange}>
      {experts}
  </SelectField>

We store the models, groups and languages in arrays using the endpoints api.susi.ai/cms/getModel.json, api.susi.ai/cms/getGroups.json and api.susi.ai/cms/getAllLanguages.json, and set the values in the respective select fields (a sketch of the group and language requests is shown a little further below). The request function takes the URL as a string, parses the JSON and resolves to an object containing either the data or an error, depending on the response from the server. Now run your project using

npm start

And you should be able to see the drop-down list working.
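
For completeness, the groups and languages can be fetched in the same way as the models. The snippet below is only a sketch that follows the pattern of the model request above; the key holding the array in each response is an assumption here and may differ per endpoint:

// groups and languages are arrays declared like `models` above
request('http://cors-anywhere.herokuapp.com/api.susi.ai/cms/getGroups.json').then((data) => {
    data = data.data; // assumed payload key, mirroring the model request
    for (let i = 0; i < data.length; i++) {
        groups.push(<MenuItem value={i} key={data[i]} primaryText={`${data[i]}`}/>);
    }
});

request('http://cors-anywhere.herokuapp.com/api.susi.ai/cms/getAllLanguages.json').then((data) => {
    data = data.data; // assumed payload key
    for (let i = 0; i < data.length; i++) {
        languages.push(<MenuItem value={i} key={data[i]} primaryText={`${data[i]}`}/>);
    }
});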

Next, we will use Material-UI tables to display the organized data. To use the table component, we need to import the table, its body, header and row/column components from the material-ui package.

import {Table, TableBody, TableHeader, TableHeaderColumn, TableRow, TableRowColumn} from "material-ui/Table";

We then create our header columns; in our case there are three, namely Commit ID, Commit Message and Author Name.

 <TableRow>
  <TableHeaderColumn tooltip="Commit ID">Commit ID</TableHeaderColumn>
  <TableHeaderColumn tooltip="Commit Message">Commit Message</TableHeaderColumn>
  <TableHeaderColumn tooltip="Author Name">Author Name</TableHeaderColumn>
 </TableRow>

To get the modification history of a skill, we will use the endpoint “http://api.susi.ai/cms/getSkillHistory.json”. It uses JGit for managing version control in the skill data repository; JGit is a library which implements Git functionality in Java. An endpoint is accessed based on user roles, which can be Admin, Privilege, User or Anonymous. In our case it is Anonymous, so a user does not need to log in to access this endpoint.

 url = "http://api.susi.ai/cms/getSkillHistory.json?model="+models[this.state.modelValue].key+"&group="+groups[this.state.groupValue].key+"&language="+languages[this.state.languageValue].key+"&skill="+this.state.expertValue;

After building the URL, we make an AJAX network call to get the modification history of the skill. If the call succeeds, we collect the desired data into a table array and display it in rows through the render() method, which checks whether the data is set in the state: if so, it renders the contents, otherwise we display the error that occurred while processing the request.

 $.ajax({
            url: url,
            jsonpCallback: 'pccd',
            dataType: 'jsonp',
            jsonp: 'callback',
            crossDomain: true,
            success: function(data) {
                data = data.commits;
                let array = [];
                for(let i = 0; i < data.length; i++){
                    array.push(data[i]);
                }
                // the commits collected in `array` are saved into the component
                // state (tableData) so that render() can display them
            }
 });

In render(), the rows of the table are built from this data:

{tableData.map((row, index) => (
  <TableRow key={index}>
    <TableRowColumn>{row.commitID}</TableRowColumn>
    <TableRowColumn>{row.commit_message}</TableRowColumn>
    <TableRowColumn>{row.author}</TableRowColumn>
  </TableRow>
))}

Test the final output on http://skills.susi.ai/browseHistory or http://localhost:3000/browseHistory; select the model, group, language and skill to get the history of that skill.


Next time you need drop-down lists or tables to organize your data, do check out https://github.com/callemall/material-ui for examples, which can help in giving a good outline to your apps. For contributions to susi_skill_cms, join our chat channel on gitter: https://gitter.im/fossasia/susi_server and browse https://github.com/fossasia/susi_skill_cms for the complete code.

Resources

Getting Response Feedback In SUSI.AI Web Chat

The SUSI.AI Web Chat provides responses for various queries, but the quality of the responses is not always the best possible. Machine learning and deep learning algorithms will help us to solve this step by step. In order to implement machine learning, we need feedback mechanisms. The first step in this direction is to provide users with a way to give feedback on responses with a “thumbs up” or “thumbs down”. In this blog, I explain how we fetch the feedback of responses from the server.

On asking a query like “tell me a quote”, SUSI responds with the following message:

Now the user can rate the response by pressing the thumbs up or thumbs down button, and we store this rating on the server. To get the feedback counts we use the following endpoint:

BASE_URL+'/cms/getSkillRating.json?'+'model='+model+'&group='+group+'&skill='+skill;

Here:

  • BASE_URL: Base URL of our server: http://api.susi.ai/
  • model: Model of the skill from which response is fetched. For example “general”.
  • group: The group of the skill from which response is fetched. For example “entertainment”.
  • skill: name of the skill from which response is fetched. For example “quotes”.

We make an ajax call to the server to fetch the data:

$.ajax({
          url: getFeedbackEndPoint,
          dataType: 'jsonp',
          crossDomain: true,
          timeout: 3000,
          async: false,
          success: function (data) {
            console.log(getFeedbackEndPoint);
            console.log(data);
            if (data.accepted) {
              let positiveCount = data.skill_rating.positive;
              let negativeCount = data.skill_rating.negative;
              receivedMessage.positiveFeedback = positiveCount;
              receivedMessage.negativeFeedback = negativeCount;
            }
          }
});

In the success function we receive the data, which is returned as JSONP. We parse it to get the desired result and store it in the variables positiveCount and negativeCount. An example data response looks like this:
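
The original post showed the response as a screenshot. Based on the keys read by the code above, it has roughly the following shape (the counts are made up for illustration, and the server may include additional fields such as session information):

{
  "accepted": true,
  "skill_rating": {
    "positive": 9,
    "negative": 2
  }
}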

In the client, we can get the values corresponding to the positive and negative keys as follows:

let positiveCount = data.skill_rating.positive;
let negativeCount = data.skill_rating.negative;

This way we can fetch the positive and negative counts corresponding to a particular response. This data can be used in many ways, for example:

  • It can be used to display the number of positive and negative votes next to the thumbs (a rough sketch follows after this list):

  • It can be used in machine learning algorithms to improve the response that SUSI.AI provides.
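
As a rough sketch of the first idea, the counts stored on the message object could be rendered next to the feedback icons like this (ThumbUp and ThumbDown are placeholders for whatever icon components the chat actually uses; this is not the exact markup of the Web Chat):

// Illustrative only: show the vote counts beside the thumb icons
<span className='feedback-counts'>
  <ThumbUp /> {message.positiveFeedback}
  <ThumbDown /> {message.negativeFeedback}
</span>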

Resources:

Testing Link:

http://chat.susi.ai/

Adding Manual ISO Controls in Phimpme Android

The Phimpme Android application comes with a well-featured camera to take high resolution photographs. It features an auto mode as well as a manual mode for users who like to customise the camera experience to their own liking. In manual mode it lets users select from the range of ISO values supported by their devices, to enhance images in circumstances where the auto mode fails, such as low lighting conditions.

In this tutorial, I will be discussing how we achieved this in Phimpme Android with some code snippets and screenshots.

To provide the users with an option to select from the range of ISO values, the first thing we need to do is scan the phone for all the supported ISO values and store them in an ArrayList to be displayed later on. This can be done with the snippet provided below:

String iso_values = parameters.get("iso-values");
if( iso_values == null ) {
 iso_values = parameters.get("iso-mode-values"); // Galaxy Nexus
 if( iso_values == null ) {
    iso_values = parameters.get("iso-speed-values"); // Micromax A101
    if( iso_values == null )
       iso_values = parameters.get("nv-picture-iso-values"); // LG dual P990
 }
}

Every device supports a different keyword for providing the list of ISO values, hence we have tried to add every possible keyword to extract the values. The keywords used above cover almost 90% of Android devices and fetch the set of ISO values successfully.

For the devices which support ISO values but don’t provide a keyword to extract them, we can provide a standard list of ISO values manually using the code snippet below:

values.add("200");
values.add("400");
values.add("800");
values.add("1600");

After extracting the set of ISO values, we need to create a list to display to the user, from which a particular ISO value can be selected, as depicted in the Phimpme camera screenshot below.

Now, to apply the selected ISO value, we first need to get the ISO key under which the value has to be set, as depicted in the code snippet below:

// iso_key already holds the default key before these checks
if( parameters.get(iso_key) == null ) {
 iso_key = "iso-speed"; // Micromax A101
 if( parameters.get(iso_key) == null ) {
    iso_key = "nv-picture-iso"; // LG dual P990
    if( parameters.get(iso_key) == null ) {
       if ( Build.MODEL.contains("Z00") )
          iso_key = "iso"; // Asus Zenfone 2 Z00A and Z008
    }
 }
}

Getting the key to set the ISO value is similar to getting the key to extract the ISO values from the device. The ISO keys listed above cover most devices.

Once we have the ISO key, we change the camera parameter to reflect the selected value.

parameters.set(iso_key, supported_values.selected_value);
setCameraParameters(parameters);

To get the full source code on how to set the ISO values manually, please refer to the Phimpme Android repository.

Resources

  1. Stackoverflow – Keywords to extract ISO values from the device: http://stackoverflow.com/questions/2978095/android-camera-api-iso-setting
  2. Open camera Android source code: https://sourceforge.net/p/opencamera/code/ci/master/tree/
  3. Blog – Learn more about ISO values in photography: https://photographylife.com/what-is-iso-in-photography

Handling High Resolution Images in Phimpme Android

In Android, loading heavy, high resolution images is a difficult task. For instance, if we try to load a photo taken at a resolution four times that of the screen into an ImageView, it may crash the app with an OutOfMemory exception. This happens because a limited amount of memory is allocated to our application at runtime, and loading high quality images can exceed it. To make a perfect gallery application, one must take care of all the possible causes of application crashes. In Phimpme Android, we have done this with the help of the Glide library along with some other tweaks to avoid OutOfMemory exceptions. In this tutorial, I will discuss how we display heavy images with the help of the Glide library and other tweaks that avoid the above mentioned exception.

Step 1:

To avoid the OutOfMemory exception, we first have to add the line below to the AndroidManifest.xml file.

android:largeHeap="true"

This attribute increases the amount of heap memory allocated to the application at runtime. Hence, more heap memory, less chance of running out of memory.

Step 2:

To load the images into the image view we make use of the Glide library, as it is the recommended way according to Google’s Android developer page on caching bitmaps. The code below loads the image into the ImageView from a pager adapter.

Glide.with(getContext())
          .load(img.getUri())
          .asBitmap().format(DecodeFormat.PREFER_RGB_565)
          .signature(useCache ? img.getSignature(): new StringSignature(new Date().getTime()+""))
          .diskCacheStrategy(DiskCacheStrategy.SOURCE)
          .thumbnail(0.5f)
          .transform(new RotateTransformation(getContext(), img.getOrientation(), false))
          .animate(R.anim.fade_in)
          .into(new SimpleTarget<Bitmap>() {
              @Override
              public void onResourceReady(Bitmap bitmap, GlideAnimation<? super Bitmap> glideAnimation) {
                  photoView.setImageBitmap(bitmap);
              }
          });

This is the way we have done it in the Phimpme Android application using the Glide library. We load the image as a bitmap, preferring the RGB_565 format because it consumes 50% less memory than the RGB_8888 format, which may be the format of the original image. Of course the image quality will seem a bit lower, but this is not noticeable unless we zoom in to full extent.

The next thing we do is cache the source image on disk using the Glide line below.

.diskCacheStrategy(DiskCacheStrategy.SOURCE)

Caching images offers faster access to them and also helps in avoiding OutOfMemory crashes when we are working with a large list of images. A link for caching images without using the Glide library is mentioned in the resources section below. After this, we load the image into the PhotoView, a module we use in the Phimpme Android application which extends ImageView and comes with many image zooming functionalities.

This is how we have implemented the loading of images in the gallery view of the Phimpme Android application so that it can handle images of any resolution without running out of memory. To get the full source code on how to load high resolution images, please refer to the Phimpme Android repository.

Resource

  1. Introduction to glide image loader: https://inthecheesefactory.com/blog/get-to-know-glide-recommended-by-google/en
  2. Google developer guide to cache bitmaps without using glide library : https://developer.android.com/topic/performance/graphics/cache-bitmap.html.
  3. Google developer guide to OutOfMemory Exceptions: https://developer.android.com/reference/java/lang/OutOfMemoryError.html
  4. LeafPic GitHub repository: https://github.com/HoraApps/LeafPic

Implementing Hiding App Bar of SUSI Web Chat Application

In the SUSI Web Chat application we got a requirement to build a responsive app bar for static pages, and another requirement to show and hide the app bar when the user scrolls. Basically this is how it should work: the app bar should be hidden after the user scrolls down to a certain extent, and when the user scrolls up it should appear again.

First we tried ready-made node packages for this task, but these packages are hard to customize, so we decided to build the feature from scratch using jQuery. This is how we built it.

First we installed the jQuery package using this command:

npm install jquery

Next we imported it at the top of the application like this:

import $ from 'jquery'

We have discussed this app bar and how we made it in a previous blog post. Our app bar looks like this:

             <header className="nav-down" id="headerSection">
             <AppBar
               className="topAppBar"
               title={<img src="susi-white.svg" alt="susi-logo"
               className="siteTitle"/>}
               style={{backgroundColor:'#0084ff'}}
               onLeftIconButtonTouchTap={this.handleDrawer}
               iconElementRight={<TopMenu />}
             />
             </header>

We have to use these HTML elements in our jQuery code, but we can’t refer to HTML elements before they render. So we have to run our code just after the render method executes, which we can do with a React lifecycle method: we add our code to the “componentDidMount()” method.
This is how we used jQuery inside the “componentDidMount()” lifecycle method. Here we read the height of the app bar using “$('header').outerHeight();”

componentDidMount(){
     var didScroll;
     var lastScrollTop = 0;
     var delta = 5;
     var navbarHeight = $('header').outerHeight();

Here we assigned the height of the app bar to the “navbarHeight” variable.

     $(window).scroll(function(event){
         didScroll = true;
     });

In this part we check whether the user has scrolled or not. If the user has scrolled, we set the value of “didScroll” to “true”.
Now we have to define what to do when the user has scrolled.

     function hasScrolled() {
         var st = $(window).scrollTop();
         if(Math.abs(lastScrollTop - st) <= delta){

             return;
         }

Here we take the absolute difference between the last scroll position and the current one. If the difference is less than the delta value we defined, we do nothing and simply return.

         if (st > lastScrollTop && st > navbarHeight){
             $('header').removeClass('nav-down').addClass('nav-up');
         } else if(st + $(window).height() < $(document).height()) {
             $('header').removeClass('nav-up').addClass('nav-down');
         }
         lastScrollTop = st;
     }

Here we hide the app bar after the user has scrolled down more than the height of the app bar. If we need to change the scroll distance at which the app bar disappears, we just add a value to the condition like this:

if (st > lastScrollTop && st > navbarHeight + 200){

If the user scrolls down more than that value, we change the element’s class name from “nav-down” to “nav-up”, and we change “nav-up” back to “nav-down” when the user scrolls up.
We defined CSS classes in the stylesheet to handle these states and the animation of the transition.

header {
   background: #f5b335;
   height: 40px;
   position: fixed;
   top: 0;
   transition: top 0.5s ease-in-out;
   width: 100%;
}

.nav-up {
   top: -100px;
}

We have now defined what needs to happen when the user scrolls.
Next we have to call this function when the user has scrolled:

     setInterval(function() {
         if (didScroll) {
             hasScrolled();
             didScroll = false;
         }
     }, 2500);
   }

If “didScroll” is “true” we execute the “hasScrolled()” function. We used a 2500 millisecond interval, so the app bar does not hide immediately after the user scrolls; the check runs at most once every 2.5 seconds.
This is how we built the app bar hiding feature using ReactJS and jQuery.

Resources:

  • Learn more about React LifeCycle Methods http://busypeoples.github.io/post/react-component-lifecycle/
  • Use jQuery in React component: https:[email protected]/using-jquery-in-react-component-the-refs-way-969de9aa651f

Custom UI Implementation for Web Search and RSS actions in SUSI iOS Using Kingfisher for Image Caching

The SUSI server is an AI-powered server which is capable of responding with intelligent answers to users’ queries. Two of the action types it can return are websearch and RSS. These actions, as the names suggest, answer queries with search results from the web, which are rendered in the clients. In order to use these action types and display them in the SUSI iOS client, we first need to parse the actions looking for these action types and then create a custom UI to display them.

To start with, we need to send the query to the server to receive an intelligent response. This response is parsed into the different action types supported by the server and saved into the relevant objects. Here, we check the action types by looping through the answers array containing the actions and, based on that, save the data for each action.

if type == ActionType.rss.rawValue {
   message.actionType = ActionType.rss.rawValue
   message.rssData = RSSAction(data: data, actionObject: action)
} else if type == ActionType.websearch.rawValue {
   message.actionType = ActionType.websearch.rawValue
   message.message = action[Client.ChatKeys.Query] as? String ?? ""
}

Here, we parsed the data response from the server and looked for the rss and websearch action types, after which we saved the data we received for each of the action types in their own objects.

Next, when a message object is created, we append it to the dataSource and use the `insertItems(at: [IndexPath])` method of the collection view to insert it into the view at a particular index. Before adding it, we need to create a custom UI for it. This UI will consist of a collection view, scrollable in the horizontal direction, inside a collection view cell. To start with, we create a new class called `WebsearchCollectionView`, which will be a `UIView` containing a `UICollectionView`.

We start by adding a collection view to the UIView by overriding the `init` method. We declare a collection view using a flow layout with the scroll direction set to `horizontal`, hide the scroll indicators, and assign the delegate and dataSource to `self`.

Now, to populate this collection view, we need to specify the number of items that will show up. For this, we make use of the `message` variable declared earlier: we use `websearchData` in the case of a websearch action and `rssData` otherwise.

Now to specify the number of cells, we use the below method which returns the number of rss or websearch action objects and defaults to 0 such cells.

func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
    if let rssData = message?.rssData {
        return rssData.count
    } else if let webData = message?.websearchData {
        return webData.count
    }
    return 0
}

We display the title, description and image for each object, so we need to create a UI for the cells. Let’s start by creating a custom collection view cell with an image view and two labels for the title and description.

The imageview is given a contentMode of `aspectFit` and assigned a placeholder image in case the image doesn’t exist. The title and description labels are assigned the same font size, the title being bolder and both are center aligned.

class WebsearchCell: BaseCell {

   var imageView: UIImageView = {
       let iv = UIImageView()
       iv.contentMode = .scaleAspectFit
       iv.image = UIImage(named: "placeholder")
       return iv
   }()

   let titleLabel: UILabel = {
       let label = UILabel()
       label.textColor = .black
       label.font = UIFont.boldSystemFont(ofSize: 14)
       label.textAlignment = .center
       label.numberOfLines = 2
       label.backgroundColor = Color.grey.lighten3
       return label
   }()

   let descriptionLabel: UILabel = {
       let label = UILabel()
       label.font = UIFont.systemFont(ofSize: 14)
       label.textAlignment = .center
       return label
   }()

}

Next, we add constraints for each of these views, laying out the image view, title and description labels with a small margin on both sides of the cell for a cleaner UI.

addSubview(imageView)
addSubview(titleLabel)
addSubview(descriptionLabel)
descriptionLabel.numberOfLines = 5
addConstraintsWithFormat(format: "H:|-4-[v0(\(frame.width * 0.4))]-4-[v1]-4-|", views: imageView, titleLabel)
addConstraintsWithFormat(format: "|-\(frame.width * 0.4 + 8)-[v0]-4-|", views: descriptionLabel)
addConstraintsWithFormat(format: "V:|-4-[v0]-4-|", views: imageView)
addConstraintsWithFormat(format: "V:|-4-[v0(44)]-4-[v1]-4-|", views: titleLabel, descriptionLabel)

Now to use this custom cell, we first need to register it with the collection view and then we can use it easily in the `cellForItemAt` method. Since we are using Kingfisher for image caching, we use the `.kf.setImage` method to download the image from a URL and cache it as soon as it is downloaded.

collectionView.register(WebsearchCell.self, forCellWithReuseIdentifier: cellId)
 func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
       if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: cellId, for: indexPath) as? WebsearchCell {
           cell.backgroundColor = .white

           if message?.actionType == ActionType.rss.rawValue {
               let feed = message?.rssData?.rssFeed[indexPath.item]
               cell.titleLabel.text = feed?.title
                cell.descriptionLabel.text = feed?.desc
                cell.imageView.kf.setImage(with: URL(string: feed?.rssData?.image))
           } else if message?.actionType == ActionType.websearch.rawValue {
               let webData = message?.websearchData[indexPath.item]
               cell.titleLabel.text = webData?.title
                cell.descriptionLabel.text = webData?.desc.html2String
                cell.imageView.kf.setImage(with: URL(string: webData?.image))
           }
           return cell
       } else {
           return UICollectionViewCell()
       }
   }

We check the action type and assign data based on that. Also, for the images, if they don’t exist a placeholder is added.

Now that we are done with the custom UI of the cell, we need to add it to the chat cell. For that, we add this `UIView` as a subview. For reusability, the cell was extracted into a separate class which calls the `prepareForReuse()` method. This is followed by adding the subview, setting its constraints in each cell and assigning the message object.

class RSSCell: ChatMessageCell {

   var message: Message? {
       didSet {
           self.addWebsearchView()
       }
   }

   lazy var websearchView: WebsearchCollectionView = {
       let view = WebsearchCollectionView()
       return view
   }()

   override func setupViews() {
       super.setupViews()
       prepareForReuse()
   }

   func addWebsearchView() {
       self.addSubview(websearchView)
       self.addConstraintsWithFormat(format: "H:|[v0]|", views: websearchView)
       self.addConstraintsWithFormat(format: "V:[v0(135)]", views: websearchView)
       websearchView.message = message
   }

}

Finally, in the chat view’s `cellForItemAt` method we dequeue this cell whenever the message’s action type is rss or websearch:

if message.actionType == ActionType.rss.rawValue || message.actionType == ActionType.websearch.rawValue {
   if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: ControllerConstants.rssCell, for: indexPath) as? RSSCell {
       cell.message = message
       return cell
   } else {
       return UICollectionViewCell()
   }
}

This is all we need to add a custom UI to a chat cell in SUSI iOS, very simple and clean.

Check the screenshot below, the app in action.

Resources:

Hotword Recognition in SUSI iOS

Hotword recognition is a feature by which a specific action can be performed each time a specific word is spoken. There is a service called Snowboy which helps us achieve this for various clients (for example iOS, Android, Raspberry Pi, etc.). It is basically a DNN-based hotword recognition toolkit.

In this blog, we will learn how to integrate the snowboy hotword detection wrapper in the SUSI iOS client. This service can be used in any open source project but for using it commercially, a commercial license needs to be obtained.

The following files, which are provided by the service itself, need to be added to the project: snowboy-detect.h, libsnowboy-detect.a and a trained model file, which can be created using their online service snowboy.kitt.ai. For the sake of this blog, we will be using the hotword “Susi”; the model file can be found here.

The way Snowboy works is that speech is recorded for a few seconds and this data is checked against a model already trained for a specific hotword; if Snowboy returns 1, the word was successfully detected, otherwise it wasn’t.

We start with the creation of a wrapper class in Objective-C (see wrapper) and the bridging header needed in case this is added to a Swift project. The wrapper contains methods for setting the sensitivity and audio gain and for running the detection on a buffer; it is built on top of the snowboy-detect.h header file.

Let’s initialize the service and run it. Below are the steps followed to enable hotword recognition and print out whether it successfully detected the hotword or not:

  • Create a ViewController class with extensions
    • AVAudioRecorderDelegate
    • AVAudioPlayerDelegate

since we will be recording speech.

  • Import AVFoundation
  • Create a basic layout containing a label which shows whether the hotword was detected or not, a corresponding `IBOutlet` in the ViewController, and a button to trigger the start and stop of recognition.
  • Create the following variables:
    • let WAKE_WORD = "Susi" // hotword used
    • let RESOURCE = Bundle.main.path(forResource: "common", ofType: "res")
    • let MODEL = Bundle.main.path(forResource: "susi", ofType: "umdl") // path where the model file is stored
    • var wrapper: SnowboyWrapper! = nil // wrapper instance for running detection
    • var audioRecorder: AVAudioRecorder! // audio recorder instance
    • var audioPlayer: AVAudioPlayer!
    • var soundFileURL: URL! // stores the URL of the temp recording file
    • var timer: Timer! // timer to fire a function after an interval
    • var isStarted = false // variable to check if the audio recorder has already started
  • In `viewDidLoad` initialize the wrapper and set sensitivity and audio gain. Recognition best happens when sensitivity is set to `0.5` and audio gain is set to `1.0` according to the docs.
override func viewDidLoad() {
    super.viewDidLoad()
    wrapper = SnowboyWrapper(resources: RESOURCE, modelStr: MODEL)
    wrapper.setSensitivity("0.5")
    wrapper.setAudioGain(1.0)
}
  • Create an `IBAction` for the button to start recognition. This action starts or stops the recording, toggling based on the `isStarted` variable: when it is true, recording is stopped and the timer invalidated; otherwise a timer is started which calls the `startRecording` method at an interval of 4 seconds.
@IBAction func onClickBtn(_ sender: Any) {
  if (isStarted) {
    stopRecording()
    timer.invalidate()
    btn.setTitle("Start", for: .normal)
    isStarted = false
  } else {
    timer = Timer.scheduledTimer(timeInterval: 4, target: self, 
    selector: #selector(startRecording), userInfo: nil, repeats: true)
    timer.fire()
    btn.setTitle("Stop", for: .normal)
    isStarted = true
  }
}
  • Next, we add the start and stop recording methods.
    • First, a temp file is created which stores the recorded audio output
    • After which, necessary record configurations are made such as setting the sampling rate.
    • The recording is then started and the output stored in the temp file.
func startRecording() {
  do {
    let fileMgr = FileManager.default
    let dirPaths = fileMgr.urls(for: .documentDirectory, in: .userDomainMask)
    soundFileURL = dirPaths[0].appendingPathComponent("temp.wav")
    let recordSettings = [AVEncoderAudioQualityKey: 
    AVAudioQuality.high.rawValue,
    AVEncoderBitRateKey: 128000,
    AVNumberOfChannelsKey: 1,
    AVSampleRateKey: 16000.0] as [String : Any]
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(AVAudioSessionCategoryRecord)
    try audioRecorder = AVAudioRecorder(url: soundFileURL,
settings: recordSettings as [String : AnyObject])
    audioRecorder.delegate = self
    audioRecorder.prepareToRecord()
    audioRecorder.record(forDuration: 2.0)
    instructionLabel.text = "Speak wake word: \(WAKE_WORD)"
    print("Started recording...")
  } catch let error {
    print("Audio session error: \(error.localizedDescription)")
  }
}
  • The stop recording method stops the audioRecorder instance and updates the instruction label to show the same.
func stopRecording() {
  if (audioRecorder != nil && audioRecorder.isRecording) {
    audioRecorder.stop()
  }
  instructionLabel.text = "Stop"
  print("Stopped recording...")
}

The final recognition is done in the `audioRecorderDidFinishRecording` delegate method, which runs the Snowboy detection function. This function processes the audio recording in the temp file by creating a buffer, storing the audio in it and passing this buffer to the wrapper as input; the wrapper processes the buffer and returns `1` if the hotword was successfully detected.

func runSnowboy() {
  let file = try! AVAudioFile(forReading: soundFileURL)
  let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, 
  sampleRate: 16000.0, channels: 1, interleaved: false)
  let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(file.length))
  try! file.read(into: buffer)
  let array = Array(UnsafeBufferPointer(start: 
  buffer.floatChannelData![0], count:Int(buffer.frameLength)))
  // print output
  let result = wrapper.runDetection(array, length: Int32(buffer.frameLength))
  print("Result: \(result)")
}

To test this out, click the start button and speak different words; you will notice that once the hotword is spoken, a log with `Result: 1` is printed out.

Snowboy also offers training a personalized model with the help of REST APIs, for which the docs can be found here. The complete project implementation can be found here.

Sources:

Handling Change of Password of SUSI.AI Account

In this blog, we will talk about a special case of changing a password, where the user tries to change the password to the current one, in other words, enters the same password in both the current password and the new password fields. This case is now handled by the SUSI.AI server.

Considering the example of SUSI.AI Web Chat, we have the following dialog when the user tries to change his/her password:

Here the user can enter his/her current password and new password. When the new password meets the minimum conditions (minimum 6 characters), the user can press the CHANGE button.

We make an AJAX call to the server with the following endpoint:

BASE_URL+'/aaa/changepassword.json?'+
            'changepassword=' + email +
            '&password=' + this.state.passwordValue +
            '&newpassword=' + this.state.newPasswordValue +
            '&access_token='+cookies.get('loggedIn');

Here we have 4 parameters:

  • changepassword: This takes the email of the current user
  • password: This is the password of the current user, which is saved in the state named “passwordValue”
  • newpassword: This is the new password which the user enters
  • access_token: The access token fetched from cookies; it is set on login and deleted on logout.

This is now handled on the server by a file named PasswordChangeService.java. Here we have to check whether the newpassword and the password match or not.

In this file, we have a function named serviceImpl with return type ServiceResponse, which takes an argument Query post (Query being the type of the parameter). The query is not the only argument; please read the file linked in the resources below for all the arguments. To handle our case we just need to work with post.

We extract the password, newpassword and email as follows:

String useremail = post.get("changepassword", null);
String password = post.get("password", null);
String newpassword = post.get("newpassword",null);

So, to handle the case where password and newpassword match, we define an if block in Java and compare the two parameters as follows:

if(password.equals(newpassword)){
            result.put("message", "Your current password and new password matches");
            result.put("accepted", false);
            return new ServiceResponse(result);
}

Here we put the message as “Your current password and new password matches”, set the accepted flag of the result JSON to false and then return the ServiceResponse.

Now in our web chat client, the ajax call is as follows:

$.ajax({
                url: changePasswordEndPoint,
                dataType: 'jsonp',
                crossDomain: true,
                timeout: 3000,
                async: false,
                statusCode: {
                    422: function() {
                      let msg = 'Invalid Credentials. Please check your Email or Password.';
                      let state = this.state;
                      state.msg = msg;
                      this.setState(state);
                    }
                },
                success: function (response) {
                    let msg = response.message+'\n Please login again.';
                    let state = this.state;
                    state.msg = msg;
                    state.success = true;
                    state.msgOpen = true;
                    this.setState(state);
                }.bind(this),
                error: function(jqXHR, textStatus, errorThrown) {
                    let msg = 'Failed. Try Again';
                    if (textStatus === 'timeout') {
                      msg = 'Please check your internet connection';
                    }
                    let state = this.state;
                    state.msg = msg;
                    state.msgOpen = true;
                    this.setState(state);
                }.bind(this)
            });

In the success method of the AJAX call, we receive the JSON response in a variable named response, store its message in the state variable msg and set the success state to true. We then use the state and the message to respond accordingly.

Our JSON object when both the password and the new password are the same looks like this:
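
The original post showed this object as a screenshot; based on the server code above, it looks roughly like the sketch below (additional fields, such as session information, may also be present in the real response):

{
  "message": "Your current password and new password matches",
  "accepted": false
}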

So this is how clients can act according to the message received from the server, instead of implementing this check on their own end.

Resources

Change Background of Message Section in SUSI.AI Web Chat

In SUSI.AI Web Chat we pay special attention to the UI to make it easy to use and to attract more users. Many chat apps offer customization such as changing the colors and the background. As this is very popular, we decided to give users the option to customize the background of the message section of SUSI.AI Web Chat. The message section previously had a default gray background which could not be modified by the user. The goal was to allow the user to customize the look of his or her own client, starting with the background of the message section.

We added the setting to change the background image to the Custom Theme menu, which appears only when the user is logged in. The option looks like this:

The user can add the URL of any image of his/her choice and it will be set as the background of the message section.
Now let’s take a look at the implementation of this option. We added messageBackgroundImage to the state of the message section and initialised it to '' (an empty string). The TextField component looks like this:

<TextField
   name="messageImg"
   style={{display:component.component==='body'?'block':'none'}}
   ref={(input) => { this.backImage = input }}
   onChange={
     (e,value)=>
     this.handleChangeMessageBackground(value) }
    value={this.state.messageBackgroundImage}
    floatingLabelText="Message Background Image URL"
 />

The onChange handler takes the input URL and calls the function handleChangeMessageBackground, which has the following implementation:

handleChangeMessageBackground(backImage){
    this.setState({messageBackgroundImage:backImage});
  }

It sets messageBackgroundImage equal to the URL entered by the user. Now, to change the background of the message section, we made a custom React style object, messageBackgroundStyles:

const messageBackgroundStyles = {
        backgroundImage: `url(${this.state.messageBackgroundImage})`,
        backgroundRepeat: 'no-repeat',
        backgroundSize: '100% 100%'
    }

In the above object:

backgroundImage: Sets the message section background to the image at the URL entered by the user.

backgroundRepeat: Prevents the image from being repeated.

backgroundSize: Makes the background image fill the entire message section.

This style object was then added to the message section component:

<div className='message-section'>
     <ul
      className='message-list'
      ref={(c) => { this.messageList = c; }}
      style={messageBackgroundStyles}> // Styles added
       // Some relevant code    
      </ul>
</div>

Now we have the following UI, where the user can modify the message section background with an image of his or her choice by adding the appropriate URL. The image will cover the entire message section without repeating itself.

Background Image source:link

This gives our users a custom look according to their own choice.

Resources

Testing Link

http://chat.susi.ai

Background Image source of featured image : link