Implementing a Collapsible Responsive App Bar of SUSI Web Chat

In the SUSI Web Chat application we wanted to make a few static pages such as Overview, Terms, Support, Docs, Blog, Team, Settings and About. The idea was to show links to them in the app bar. The requirements were that it should collapse on small viewports and be built with Material UI components. In this blog post I’m going to elaborate on how we built the responsive app bar using Material UI components.

First we added the usual Material UI AppBar component like this.

             <header className="nav-down" id="headerSection">
               <AppBar
                 className="topAppBar"
                 title={<a href={this.state.baseUrl}><img src="susi-white.svg" alt="susi-logo" className="siteTitle"/></a>}
                 style={{backgroundColor: '#0084ff'}}
                 onLeftIconButtonTouchTap={this.handleDrawer}
                 iconElementRight={<TopMenu />}
               />
             </header>

We added the SUSI logo instead of a text title using the code snippet below and linked it to the home page like this.

title={<a href={this.state.baseUrl} ><img src="susi-white.svg" alt="susi-logo"  className="siteTitle"/></a>}

We defined “this.state.baseUrl” in the constructor; it holds the base URL of the web application.

this.state = {
    openDrawer: false,
    baseUrl: window.location.protocol + '//' + window.location.host + '/'
};

We need to open the drawer when the user clicks the button in the top-left corner, so we defined two methods to open and close the drawer as below.

   handleDrawer = () => this.setState({openDrawer: !this.state.openDrawer});
   handleDrawerClose = () => this.setState({openDrawer: false});
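For completeness, here is a minimal sketch of how a Drawer could consume these handlers inside render(). This is only an illustration based on the material-ui Drawer component; the actual drawer markup in SUSI Web Chat may differ.

import Drawer from 'material-ui/Drawer';
import MenuItem from 'material-ui/MenuItem';

// Inside render(): a navigation drawer controlled by this.state.openDrawer
<Drawer
  docked={false}
  open={this.state.openDrawer}
  onRequestChange={(open) => this.setState({openDrawer: open})}
>
  {/* close the drawer after a navigation item is picked */}
  <MenuItem onTouchTap={this.handleDrawerClose}>Overview</MenuItem>
  <MenuItem onTouchTap={this.handleDrawerClose}>Team</MenuItem>
</Drawer>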

Now we have to add the components we want to show on the right side of the app bar. We connect those elements to the app bar through the “iconElementRight” prop shown above.

We defined the “TopMenu” items like this.

   const TopMenu = (props) => (
   <div>
     <div className="top-menu">
     <FlatButton label="Overview"  href="/overview" style={{color:'#fff'}} className="topMenu-item"/>
     <FlatButton label="Team"  href="/team" style={{color:'#fff'}} className="topMenu-item"/>
     </div>

We added FlatButtons to place links to the other static pages. Finally, we needed an IconButton that opens an IconMenu to show options such as login and signup.

     <IconMenu {...props} iconButtonElement={
         <IconButton iconStyle={{color:'#fff'}} ><MoreVertIcon /></IconButton>
     }>
     <MenuItem primaryText="Chat" containerElement={<Link to="/logout" />}
                   rightIcon={<Chat/>}/>
     </IconMenu>
   </div>
   );
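The snippet above shows only a single MenuItem. The login and signup entries mentioned earlier can be added to the same IconMenu in the same way; the routes below are illustrative only:

     <MenuItem primaryText="Login" containerElement={<Link to="/login" />} />
     <MenuItem primaryText="Sign Up" containerElement={<Link to="/signup" />} />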

After adding all of these correctly, you will see an app bar like this in your application.

Now our app bar is ready, but it does not collapse on small viewports.
So we planned to hide the flat buttons on small screens and show the menu button instead. For that we used media queries.

@media only screen and (max-width: 800px){
   .topMenu-item{ display: none !important;  }
   .topAppBar button{ display: block  !important; }
}

This is how we built the responsive app bar using Material UI components. You can check the preview from this url. If you are willing to contribute to SUSI Web Chat, here is the GitHub repository.

Resources:

  • Material UI Components: http://www.material-ui.com/#/components/
  • Learn More about media queries: https://www.w3schools.com/css/css_rwd_mediaqueries.asp

Skill History Component in Susi Skill CMS

SUSI Skill CMS is an editor to write and edit skills easily. It is built on the ReactJS framework and follows an API-centric approach where the SUSI server acts as the API server. Using Skill CMS we can browse the history of a skill: for each revision we get the commit ID, the commit message and the name of the author who made the change to that skill. In this blog post, we will see how to add the skill revision history component to SUSI Skill CMS.

One text file represents one skill; it may contain several intents which all belong together. SUSI skills are stored in the susi_skill_data repository. We can access any skill using a four-tuple of parameters: model, group, language and skill.

<Menu.Item key="BrowseRevision">
 <Icon type="fork" />
   Browse Skills Revision
 <Link to="/browseHistory"></Link>
</Menu.Item>

First let’s create an option in the sidebar menu (shown above) and link it to the “/browseHistory” route created in index.js:

 <Route path="/browseHistory" component={BrowseHistory} />

Next we will add skill versioning using endpoints provided by the SUSI server. To select a skill we will create a drop-down list using SelectField, a Material UI component.

request('http://cors-anywhere.herokuapp.com/api.susi.ai/cms/getModel.json').then((data) => {
    console.log(data.data);
    data = data.data;
    for (let i = 0; i < data.length; i++) {
        models.push(<MenuItem value={i} key={data[i]} primaryText={`${data[i]}`}/>);
    }

    console.log(models);
});
 <SelectField
   floatingLabelText="Expert"
   style={{width: '50px'}}
   value={this.state.value}
   onChange={this.handleChange}>
      {experts}
  </SelectField>

We store the models, groups and languages in arrays using the endpoints api.susi.ai/cms/getModel.json, api.susi.ai/cms/getGroups.json and api.susi.ai/cms/getAllLanguages.json, and set the values in the respective select fields. The request function takes the URL as a string, parses the JSON and fetches the object containing the data or an error, depending on the response from the server. Now run your project using

npm start

And you will be able to see the drop-down lists working.
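The onChange handler wired to the SelectField above is not shown in the post. A minimal sketch, with the state field name assumed for illustration, could look like this:

handleChange = (event, index, value) => {
    // store the selected index so it can be used later to build the getSkillHistory URL
    this.setState({ value });
};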

Next, we will use Material UI tables to display the organized data. To use the table component we need to import the table, its body, header and row/column components from Material UI.

import {Table, TableBody, TableHeader, TableHeaderColumn, TableRow, TableRowColumn} from "material-ui/Table";

We then create our header columns, three in our case: Commit ID, Commit Message and Author Name.

 <TableRow>
  <TableHeaderColumn tooltip="Commit ID">Commit ID</TableHeaderColumn>
  <TableHeaderColumn tooltip="Commit Message">Commit Message</TableHeaderColumn>
  <TableHeaderColumn tooltip="Author Name">Author Name</TableHeaderColumn>
 </TableRow>

To get the modification history of a skill, we will use the endpoint “http://api.susi.ai/cms/getSkillHistory.json”. It uses JGit for managing version control in the skill data repository; JGit is a library which implements Git functionality in Java. An endpoint is accessed based on user roles, which can be Admin, Privilege, User or Anonymous. In our case it is Anonymous, so a user does not need to log in to access this endpoint.

 url = "http://api.susi.ai/cms/getSkillHistory.json?model="+models[this.state.modelValue].key+"&group="+groups[this.state.groupValue].key+"&language="+languages[this.state.languageValue].key+"&skill="+this.state.expertValue;

After building the URL, we make an AJAX call to get the modification history of the skill. If the call succeeds, we store the commits in a table array in state and display them as rows through the render() method, which checks whether the data is set in state: if so, it renders the contents, otherwise it displays the error that occurred while processing the request.

 $.ajax({
     url: url,
     jsonpCallback: 'pccd',
     dataType: 'jsonp',
     jsonp: 'callback',
     crossDomain: true,
     success: function (data) {
         data = data.commits;
         let array = [];
         for (let i = 0; i < data.length; i++) {
             array.push(data[i]);
         }
         // store the commits in state so that render() can display them as table rows
         this.setState({ tableData: array });
     }.bind(this)
 });

The commits stored in state are then rendered as rows of the table:
{tableData.map((row, index) => (
  <TableRow key={index}>
    <TableRowColumn>{row.commitID}</TableRowColumn>
    <TableRowColumn>{row.commit_message}</TableRowColumn>
    <TableRowColumn>{row.author}</TableRowColumn>
   </TableRow>
))}

Test the final output on http://skills.susi.ai/browseHistory or http://localhost:3000/browseHistory: select the model, group, language and skill to get the history of that skill.


Next time you need drop-down lists or tables to organize your data, check out https://github.com/callemall/material-ui for examples, which can help in providing good outlines for your apps. For contributions to susi_skill_cms, join our chat channel on Gitter: https://gitter.im/fossasia/susi_server and browse https://github.com/fossasia/susi_skill_cms for the complete code.

Resources


Sending Data between components of SUSI MagicMirror Module

SUSI MagicMirror is a module that adds the SUSI assistant right to your MagicMirror. The MagicMirror software consists of an Electron app to which modules can be added easily. Since there are many modules, some functionality needs interaction between modules through the transfer of information. MagicMirror also provides a node_helper script that allows a module to perform background tasks, so a mechanism to transfer information from the node_helper to the various components of a module is also needed.

MagicMirror provides an inbuilt module notification system that can be used to send notification across the modules and a socket notification system to send information between node_helper and various components of the system.
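As a rough sketch, the two mechanisms surface in a module definition through the following callbacks and send methods of the MagicMirror module API (the module name and bodies here are placeholders):

Module.register("MMM-Example", {
    // module-to-module notifications, sent elsewhere via this.sendNotification(...)
    notificationReceived: function (notification, payload, sender) {
        // react to notifications from other modules or the core system
    },
    // notifications from this module's node_helper, sent via this.sendSocketNotification(...)
    socketNotificationReceived: function (notification, payload) {
        // react to data coming from the background node_helper
    }
});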

Our codebase for SUSI MagicMirror is divided mainly into two parts: a Main module that handles the whole process of hotword detection, speech recognition, calling the SUSI API and saving the audio obtained from Text to Speech, and a Renderer module which manages the display of content on the mirror screen and plays back the file obtained from speech synthesis. Plainly put, the Main module handles the backend logic of the application and the Renderer handles the frontend. The Main and Renderer modules work on different layers of the application, so we need a mechanism that facilitates communication between them. A schematic of the flow that needs to be maintained is shown in the diagram below.

As you can see in the diagram, we need to transfer a lot of information between the components. We display animation and text based on the current state of recognition in the module, so we need to transfer this information frequently. This task is accomplished by utilizing the inbuilt socket notification system in MagicMirror. For every event, such as the system entering the listening, busy or recognized-speech state, we need to pass a message to the renderer. To achieve this, we made a rendererSend function to send notifications to the renderer.

const rendererSend =  (event: NotificationType , payload: any) => {
   this.sendSocketNotification(event, payload);
}

This function takes an event and a payload as arguments: the event tells which event occurred and the payload is any data that we wish to send. It in turn calls the method provided by the MagicMirror module to send socket notifications within the module.

When certain events occur, such as the system entering the busy or listening state, we trigger a rendererSend call to send a socket notification to the renderer. The rendererSend method is supplied in the state machine components available to every state. Sending notifications can then be done using code snippets like the following:

// system enters busy state
this.components.rendererSend("busy", {});
// send speech recognition hypothesis text to renderer
this.components.rendererSend("recognized", {text: recognizedText});
// send susi api output json to renderer to display interactive results while Speech Output is performed
this.components.rendererSend("speak", {data: susiResponse});

The socket notification sent via the above method is received in the SUSI module via a callback called socketNotificationReceived. We need to provide an implementation for this callback while registering the module with MagicMirror. So, we register the MMM-SUSI-AI module and add the definition of the socketNotificationReceived method.

Module.register("MMM-SUSI-AI", {
//other function definitions
***
   // define socketNotificationReceived function
   socketNotificationReceived: function (notification, payload) {
       susiMirror.receivedNotification(notification, payload);
   },
***
});

In this way, we forward every notification received to the susiMirror object in the renderer module by calling its receivedNotification method.

We can now receive all the notifications in susiMirror and update the UI accordingly. To handle them, we define the receivedNotification method as follows:

public receivedNotification(type: NotificationType, payload: any): void {

   this.visualizer.setMode(type);
   switch (type) {
       case "idle":
            // handle idle state
           break;
       case "listening":
           // handle listening state
           break;
       case "busy":
           // handle busy state
         break;
       case "recognized":
           // handle recognized state. This notification also contains a payload about the hypothesis text           
           break;
       case "speak":
           // handle speaking state. We need to play back audio file and display text on screen for SUSI Output. Notification Payload contains SUSI Response
           break;
   }
}

In this way, we utilize the Socket Notification System provided by the MagicMirror Electron Application to send data across the components of Magic Mirror module for SUSI AI.

Resources


Implementing Hiding App Bar of SUSI Web Chat Application

In the SUSI Web Chat application we got a requirement to build a responsive app bar for static pages, and another requirement to show and hide the app bar as the user scrolls. Basically this is how it should work: the app bar should be hidden after the user scrolls down past a certain point, and when the user scrolls up it should appear again.

First we tried ready-made npm packages for this task, but those packages are hard to customize. So we planned to build this feature from scratch using jQuery. This is how we built it.

First we installed the jQuery package using this command:

npm install jquery

Next we imported it at the top of the application like this:

import $ from 'jquery'

We discussed this app bar and how we made it in the previous blog post. Our app bar looks like this:

             <header className="nav-down" id="headerSection">
             <AppBar
               className="topAppBar"
               title={<img src="susi-white.svg" alt="susi-logo"
               className="siteTitle"/>}
               style={{backgroundColor:'#0084ff'}}
               onLeftIconButtonTouchTap={this.handleDrawer}
               iconElementRight={<TopMenu />}
             />
             </header>

We have to use these HTML elements in the jQuery code, but we can’t refer to HTML elements before they render. So we have to run our code soon after the render method executes, which we can do with a React lifecycle method: we add our code to the “componentDidMount()” method.
This is how we used jQuery inside the “componentDidMount()” lifecycle method. Here we read the height of the app bar using “$('header').outerHeight();”.

componentDidMount(){
     var didScroll;
     var lastScrollTop = 0;
     var delta = 5;
     var navbarHeight = $('header').outerHeight();

Here we assigned the height of the app bar to the “navbarHeight” variable.

     $(window).scroll(function(event){
         didScroll = true;
     });

In this part we check whether the user has scrolled or not. If the user has scrolled, we set the value of “didScroll” to “true”.
Now we have to define what to do when the user has scrolled.

     function hasScrolled() {
         var st = $(window).scrollTop();
         if(Math.abs(lastScrollTop - st) <= delta){

             return;
         }

Here we get the absolute change in scroll position since the last check. If it is less than or equal to the delta value we defined, the function does nothing and simply returns.

         if (st > lastScrollTop && st > navbarHeight){
             $('header').removeClass('nav-down').addClass('nav-up');
         } else if(st + $(window).height() < $(document).height()) {
             $('header').removeClass('nav-up').addClass('nav-down');
         }
         lastScrollTop = st;
     }

Here we hide the app bar after the user has scrolled down more than the height of the app bar. If we need to change the scroll distance at which the app bar disappears, we just add a value to the condition like this:

if (st > lastScrollTop && st > navbarHeight + 200){

If the user has scrolled down more than that value, we change the element’s class name from “nav-down” to “nav-up”, and we change it back from “nav-up” to “nav-down” when the user scrolls up.
We defined CSS classes in the stylesheet to handle these states and animate the transitions.

header {
   background: #f5b335;
   height: 40px;
   position: fixed;
   top: 0;
   transition: top 0.5s ease-in-out;
   width: 100%;
}

.nav-up {
   top: -100px;
}

We have defined what needs to happen when the user scrolls.
Now we have to call this function whenever the user has scrolled:

     setInterval(function() {
         if (didScroll) {
             hasScrolled();
             didScroll = false;
         }
     }, 2500);
   }

If “didScroll” is “true”, we execute the “hasScrolled()” function. We run this check on a 2500-millisecond interval, so the app bar does not hide immediately when the user scrolls; the check is triggered up to 2.5 seconds later.
This is how we built the scroll-based app bar hiding feature using React and jQuery.
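One detail the post does not cover: the interval and the scroll handler should be cleaned up when the component unmounts, otherwise they keep running. A minimal sketch, assuming the return value of setInterval was saved as this.checkScrollInterval in componentDidMount (a name used here only for illustration):

componentWillUnmount() {
    // stop the periodic check and detach the jQuery scroll handler
    clearInterval(this.checkScrollInterval);
    $(window).off('scroll');
}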

Resources:

  • Learn more about React LifeCycle Methods http://busypeoples.github.io/post/react-component-lifecycle/
  • Use jQuery in React component: https://medium.com/@shuvohabib/using-jquery-in-react-component-the-refs-way-969de9aa651f

Implementation of Text to Speech alongside Hotword Detection in SUSI Android App

In this blog post, we’ll learn how to implement text to speech. Now you may be wondering what is so difficult about implementing text to speech: one can easily find many tutorials on it and look at the official TTS documentation. But there’s a catch here: in this blog post I’ll explain how to implement text to speech alongside hotword detection.

Let me give you a rough idea of how hotword detection works in the SUSI Android app; for more details, read my other blog on hotword detection. There is a constantly running background recording thread which detects when the hotword is spoken. Now, you may be wondering why we need to stop that thread for text to speech. Well, there are two reasons:

  1. Recording while playing causes problems with the mic and may crash the app.
  2. Even if we implemented that, what would happen if the answer contained the word “susi”? The hotword would be detected because the speech output itself contains our hotword.

So, to avoid these problems, we had to come up with a way to stop hotword detection only while SUSI is giving speech output and to resume it immediately when the speech output is finished.

Let’s see how we did that.

Implementation

Check out this video to see how this work in the app

https://youtu.be/V9N6K4SzpXw

Initiating the TTS engine

The first task is to initiate the text to speech engine. This process takes some time, so it is done at the start of the app in a new handler.

new Handler().post(new Runnable() {
   @Override
   public void run() {
       textToSpeech = new TextToSpeech(getApplicationContext(), new TextToSpeech.OnInitListener() {
           @Override
           public void onInit(int status) {
               if (status != TextToSpeech.ERROR) {
                   Locale locale = textToSpeech.getLanguage();
                   textToSpeech.setLanguage(locale);
               }
           }
       });
   }
});

Check Audio Focus

The next step is to check whether audio focus is granted. If, for example, some music is playing in the background, we won’t be able to give voice output. So, we request audio focus using the code below.

final AudioManager audiofocus = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
 int result = audiofocus.requestAudioFocus(afChangeListener, AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
//DO WORK HERE
}

Using OnAudioFocusChangeListener, we keep a track of when we have access to give speech output and when we don’t.

private AudioManager.OnAudioFocusChangeListener afChangeListener =
       new AudioManager.OnAudioFocusChangeListener() {
           public void onAudioFocusChange(int focusChange) {
               if (focusChange == AUDIOFOCUS_LOSS_TRANSIENT) {
                   textToSpeech.stop();
               } else if (focusChange == AudioManager.AUDIOFOCUS_GAIN) {
                   // Resume playback
               } else if (focusChange == AudioManager.AUDIOFOCUS_LOSS) {
                   textToSpeech.stop();
               }
           }
       };

Converting the given text to speech

Now that we have audio focus, we just have to convert the given text to speech using the textToSpeech.speak() method.

private void voiceReply(final String reply) {
    Handler handler = new Handler();
    handler.post(new Runnable() {
        @Override
        public void run() {
            // "reply" is the text to speak; ttsParams carries the utterance id set below
            textToSpeech.speak(reply, TextToSpeech.QUEUE_FLUSH, ttsParams);
        }
    });
}

Abandon Audio Focus

Now that we are done with the speech output, it’s time to abandon audio focus.

audiofocus.abandonAudioFocus(afChangeListener);

TTS alongside Hotword Detection

Okay, now for the major part: how do we know when to stop the hotword detection thread and when to resume it? How do we know when the speech output is finished?

The answer to these questions is textToSpeech.setOnUtteranceProgressListener. The UtteranceProgressListener has three callback methods:

  1. onStart: Indicates the start of text to speech conversion, which means it’s time to stop the hotword detection thread.
  2. onDone: Called when every word of the provided text has been converted to speech, so we simply resume hotword detection.
  3. onError: Called when there is an error and the text is not converted to speech; we need to resume hotword detection here too. The listener is set up like this:

textToSpeech.setOnUtteranceProgressListener(new UtteranceProgressListener() {
                       @Override
                       public void onStart(String s) {
                           if(recordingThread !=null && isDetectionOn){
                               recordingThread.stopRecording();
                               isDetectionOn = false;
                           }
                       }

                       @Override
                       public void onDone(String s) {
                           if(recordingThread != null && !isDetectionOn && checkHotwordPref()) {
                               recordingThread.startRecording();
                               isDetectionOn = true;
                           }
                       }

                       @Override
                       public void onError(String s) {
                           if(recordingThread != null && !isDetectionOn && checkHotwordPref()) {
                               recordingThread.startRecording();
                               isDetectionOn = true;
                           }
                       }
                   });

                   HashMap<String,String> ttsParams = new HashMap<String, String>();
                   ttsParams.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID,
                           MainActivity.this.getPackageName());

Summary

So, the main thing required to implement text to speech alongside hotword detection is a way to stop and resume hotword detection while text to speech is in progress. For that we used the UtteranceProgressListener of the TextToSpeech class, which makes the task much easier. You may follow this same approach, or if you have a better one, open an issue here.

Resources

  1. Official Documentation of TextToSpeech https://developer.android.com/reference/android/speech/tts/TextToSpeech.html
  2. Documentation of UtteranceProgressListener https://developer.android.com/reference/android/speech/tts/UtteranceProgressListener.html
  3. Blog link to Hotword Detection https://docs.google.com/document/d/1auTyuk32i15Rw94TOkrSruRJ9LZVtjcThoWVJkvnAz8/edit?usp=sharing

Custom UI Implementation for Web Search and RSS actions in SUSI iOS Using Kingfisher for Image Caching

The SUSI server is an AI-powered server capable of responding with intelligent answers to users’ queries. Two of the action types it can return are websearch and RSS. These actions, as the names suggest, answer queries with search results from the web, which are rendered in the clients. In order to use these action types and display them in the SUSI iOS client, we first need to parse the actions looking for these action types and then create a custom UI to display them.

To start with, we need to send the query to the server to receive an intelligent response. This response is parsed into the different action types supported by the server and saved into the relevant objects. Here, we check the action types by looping through the answers array containing the actions and, based on the type, save the data for that action.

if type == ActionType.rss.rawValue {
   message.actionType = ActionType.rss.rawValue
   message.rssData = RSSAction(data: data, actionObject: action)
} else if type == ActionType.websearch.rawValue {
   message.actionType = ActionType.websearch.rawValue
   message.message = action[Client.ChatKeys.Query] as? String ?? ""
}

Here, we parsed the response from the server, looked for the rss and websearch action types, and saved the data received from the server for each action type in its own object.

Next, when a message object is created, we append it to the dataSource and use the `insertItems(at: [IndexPath])` method of the collection view to insert it into the view at a particular index. Before adding it, we need to create a custom UI for it. This UI consists of a collection view, scrollable in the horizontal direction, inside a collection view cell. To start, we create a new class called `WebsearchCollectionView`, which is a `UIView` containing a `UICollectionView`.

We start by adding a collection view into the UIView inside the `init` method by overriding it. Declare a collection view using flow layout and scroll direction set to `horizontal`. Also, hide the scroll indicators and assign the delegate and datasource to `self`.

Now to populate this collection view, we need to specify the number of items that will show up. For this, we make use of the `message` variable declared. We use the `websearchData` in case of websearch action and `rssData` otherwise.

Now to specify the number of cells, we use the below method which returns the number of rss or websearch action objects and defaults to 0 such cells.

func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
    if let rssData = message?.rssData {
        return rssData.count
    } else if let webData = message?.websearchData {
        return webData.count
    }
    return 0
}

We display the title, description and image for each object for which we need to create a UI for the cells. Let’s start by creating a Custom Collection View cell with the imageview and 2 labels for title and description.

The imageview is given a contentMode of `aspectFit` and assigned a placeholder image in case the image doesn’t exist. The title and description labels are assigned the same font size, the title being bolder and both are center aligned.

class WebsearchCell: BaseCell {

   var imageView: UIImageView = {
       let iv = UIImageView()
       iv.contentMode = .scaleAspectFit
       iv.image = UIImage(named: "placeholder")
       return iv
   }()

   let titleLabel: UILabel = {
       let label = UILabel()
       label.textColor = .black
       label.font = UIFont.boldSystemFont(ofSize: 14)
       label.textAlignment = .center
       label.numberOfLines = 2
       label.backgroundColor = Color.grey.lighten3
       return label
   }()

   let descriptionLabel: UILabel = {
       let label = UILabel()
       label.font = UIFont.systemFont(ofSize: 14)
       label.textAlignment = .center
       return label
   }()

}

Next, we add constraints for each of these views, laying out the image view, title and description labels with a small margin on both sides of the cell for a cleaner UI.

addSubview(imageView)
addSubview(titleLabel)
addSubview(descriptionLabel)
descriptionLabel.numberOfLines = 5
addConstraintsWithFormat(format: "H:|-4-[v0(\(frame.width * 0.4))]-4-[v1]-4-|", views: imageView, titleLabel)
addConstraintsWithFormat(format: "|-\(frame.width * 0.4 + 8)-[v0]-4-|", views: descriptionLabel)
addConstraintsWithFormat(format: "V:|-4-[v0]-4-|", views: imageView)
addConstraintsWithFormat(format: "V:|-4-[v0(44)]-4-[v1]-4-|", views: titleLabel, descriptionLabel)

Now to use this custom cell, we first need to register it with the collection view and then we can use it easily in the `cellForItemAt` method. Since we are using Kingfisher for image caching, we use the `.kf.setImage` method to download the image from a URL and cache it as soon as it is downloaded.

collectionView.register(WebsearchCell.self, forCellWithReuseIdentifier: cellId)
 func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
       if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: cellId, for: indexPath) as? WebsearchCell {
           cell.backgroundColor = .white

           if message?.actionType == ActionType.rss.rawValue {
                let feed = message?.rssData?.rssFeed[indexPath.item]
                cell.titleLabel.text = feed?.title
                cell.descriptionLabel.text = feed?.desc
                cell.imageView.kf.setImage(with: URL(string: feed?.image ?? ""))
            } else if message?.actionType == ActionType.websearch.rawValue {
                let webData = message?.websearchData[indexPath.item]
                cell.titleLabel.text = webData?.title
                cell.descriptionLabel.text = webData?.desc.html2String
                cell.imageView.kf.setImage(with: URL(string: webData?.image ?? ""))
           }
           return cell
       } else {
           return UICollectionViewCell()
       }
   }

We check the action type and assign data based on that. Also, for the images, if they don’t exist a placeholder is added.

Since we are done with the custom UI of the cell, we need to add it to the chat cell. For that, we add this `UIView` as a subview. For reusability, the cell was extracted into a separate class and the `prepareForReuse()` method is called. This is followed by adding the subview, setting constraints for the cell, and assigning the message object.

class RSSCell: ChatMessageCell {

   var message: Message? {
       didSet {
           self.addWebsearchView()
       }
   }

   lazy var websearchView: WebsearchCollectionView = {
       let view = WebsearchCollectionView()
       return view
   }()

   override func setupViews() {
       super.setupViews()
       prepareForReuse()
   }

   func addWebsearchView() {
       self.addSubview(websearchView)
       self.addConstraintsWithFormat(format: "H:|[v0]|", views: websearchView)
       self.addConstraintsWithFormat(format: "V:[v0(135)]", views: websearchView)
       websearchView.message = message
   }

}

Finally, in the chat view’s cell configuration we dequeue this cell whenever the message’s action type is rss or websearch:

if message.actionType == ActionType.rss.rawValue || message.actionType == ActionType.websearch.rawValue {
   if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: ControllerConstants.rssCell, for: indexPath) as? RSSCell {
       cell.message = message
       return cell
   } else {
       return UICollectionViewCell()
   }
}

This is all we need to add a custom UI to a chat cell in SUSI iOS, very simple and clean.

Check the screenshot below to see the app in action.

Resources:


Hotword Recognition in SUSI iOS

Hotword recognition is a feature by which a specific action can be performed each time a specific word is spoken. There is a service called Snowboy which helps us achieve this for various clients (for example iOS, Android, Raspberry Pi, etc.). It is basically a DNN-based hotword recognition toolkit.

In this blog, we will learn how to integrate the snowboy hotword detection wrapper in the SUSI iOS client. This service can be used in any open source project but for using it commercially, a commercial license needs to be obtained.

The following files, provided by the service itself, need to be added to the project: snowboy-detect.h, libsnowboy-detect.a and a trained model file, which can be created using their online service snowboy.kitt.ai. For the sake of this blog, we will be using the hotword “Susi”; the model file can be found here.

The way Snowboy works is that speech is recorded for a few seconds and this data is run against a model already trained for a specific hotword; if Snowboy returns 1, the word has been successfully detected, otherwise it was not.

We start with the creation of a wrapper class in Objective-C (the wrapper), and a bridging header in case this needs to be added to a Swift project. The wrapper contains methods for setting the sensitivity and audio gain and for running the detection on a buffer. It is a wrapper class built on top of the snowboy-detect.h header file.

Let’s initialize the service and run it. Below are the steps followed to enable hotword recognition and print out whether it successfully detected the hotword or not:

  • Create a ViewController class with extensions
    • AVAudioRecorderDelegate
    • AVAudioPlayerDelegate

since we will be recording speech.

  • Import AVFoundation
  • Create a basic layout containing a label which shows whether the hotword was detected or not, create the corresponding `IBOutlet` in the ViewController, and add a button to trigger the start and stop of recognition.
  • Create the following variables:
    • let WAKE_WORD = “Susi” // hotword used
    • let RESOURCE = Bundle.main.path(forResource: “common”, ofType: “res”)
    • let MODEL = Bundle.main.path(forResource: “susi”, ofType: “umdl”) //path where the model file is stored
    • var wrapper: SnowboyWrapper! = nil // wrapper instance for running detection
    • var audioRecorder: AVAudioRecorder! // audio recorder instance
    • var audioPlayer: AVAudioPlayer!
    • var soundFileURL: URL! //stores the URL of the temp recording file
    • var timer: Timer! //timer to fire a function after an interval
    • var isStarted = false // variable to check if audio recorder already started
  • In `viewDidLoad` initialize the wrapper and set sensitivity and audio gain. Recognition best happens when sensitivity is set to `0.5` and audio gain is set to `1.0` according to the docs.
override func viewDidLoad() {
    super.viewDidLoad()
    wrapper = SnowboyWrapper(resources: RESOURCE, modelStr: MODEL)
    wrapper.setSensitivity("0.5")
    wrapper.setAudioGain(1.0)
}
  • Create an `IBAction` for the button to start recognition. This action starts or stops the recording, toggling based on the `isStarted` variable: when it is true, recording is stopped and the timer invalidated; otherwise a timer is started which calls the `startRecording` method at an interval of 4 seconds.
@IBAction func onClickBtn(_ sender: Any) {
  if (isStarted) {
    stopRecording()
    timer.invalidate()
    btn.setTitle("Start", for: .normal)
    isStarted = false
  } else {
    timer = Timer.scheduledTimer(timeInterval: 4, target: self, 
    selector: #selector(startRecording), userInfo: nil, repeats: true)
    timer.fire()
    btn.setTitle("Stop", for: .normal)
    isStarted = true
  }
}
  • Next, we add the start and stop recording methods.
    • First, a temp file is created which stores the recorded audio output
    • After which, necessary record configurations are made such as setting the sampling rate.
    • The recording is then started and the output stored in the temp file.
func startRecording() {
  do {
    let fileMgr = FileManager.default
    let dirPaths = fileMgr.urls(for: .documentDirectory, in: .userDomainMask)
    soundFileURL = dirPaths[0].appendingPathComponent("temp.wav")
    let recordSettings = [AVEncoderAudioQualityKey: 
    AVAudioQuality.high.rawValue,
    AVEncoderBitRateKey: 128000,
    AVNumberOfChannelsKey: 1,
    AVSampleRateKey: 16000.0] as [String : Any]
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(AVAudioSessionCategoryRecord)
    try audioRecorder = AVAudioRecorder(url: soundFileURL,
settings: recordSettings as [String : AnyObject])
    audioRecorder.delegate = self
    audioRecorder.prepareToRecord()
    audioRecorder.record(forDuration: 2.0)
    instructionLabel.text = "Speak wake word: \(WAKE_WORD)"
    print("Started recording...")
  } catch let error {
    print("Audio session error: \(error.localizedDescription)")
  }
}
  • The stop recording method stops the audioRecorder instance and updates the instruction label to show the same.
func stopRecording() {
  if (audioRecorder != nil && audioRecorder.isRecording) {
    audioRecorder.stop()
  }
  instructionLabel.text = "Stop"
  print("Stopped recording...")
}

The final recognition is done in the `audioRecorderDidFinishRecording` delegate method, which runs the Snowboy detection function. This function processes the audio recording in the temp file by reading it into a buffer and passing the buffer to the wrapper as input; the wrapper processes the buffer and returns `1` if the hotword was successfully detected.

func runSnowboy() {
  let file = try! AVAudioFile(forReading: soundFileURL)
  let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, 
  sampleRate: 16000.0, channels: 1, interleaved: false)
  let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(file.length))
  try! file.read(into: buffer)
  let array = Array(UnsafeBufferPointer(start: 
  buffer.floatChannelData![0], count:Int(buffer.frameLength)))
  // print output
  let result = wrapper.runDetection(array, length: Int32(buffer.frameLength))
  print("Result: \(result)")
}

To test this out, click the start button and speak different words; you will notice that once the hotword is spoken, a log with `Result: 1` is printed.

Snowboy also offers training of a personalized model through REST APIs, for which the docs can be found here. The complete project implementation can be found here.

Sources:


Search Functionalities in SUSI Android App Using Android SearchView Widget

Searching is a common feature required in most applications, but the problem with implementing search is that there is no single standard way to do it; people argue over which approach is best. In this blog post we’ll look at how search functionality works in the SUSI Android app and how it is implemented. We have used Android’s SearchView widget to do it. There are many other ways, but this one best suits our requirements. Let’s see how it works.

UI Components used for Searching

1. Search icon (magnifying glass icon)

In the action bar, you can see a small icon. Clicking on the icon initiates search.

2. Edit text

An obvious requirement is an edit text to enter the search query.

3. Up and Down arrow keys

Required to navigate through the results in the whole app. Simply use the up and down arrow keys to move through the chat and find each occurrence of the word you searched for.


4. Cross Button

Last but not least, a close (cross) button to close the search action.

Implementation

We have used Android’s built-in SearchView widget. According to the official Android documentation:

A widget that provides a user interface for the user to enter a search query and submit a request to a search provider. Shows a list of query suggestions or results, if available, and allows the user to pick a suggestion or result to launch into.

This widget makes searching a lot easier. It provides all methods and listeners which are actually required for searching. Let’s cover them one by one.

  1. Starting the search: the searchView.setOnSearchClickListener listener activates when the user clicks the search icon in the toolbar. Do any work that needs to happen at the start of the search, such as hiding other UI elements or running an animation, inside this listener.
searchView.setOnSearchClickListener({
   chatPresenter.startSearch()
})
  2. Stopping the search: the searchView.setOnCloseListener listener is activated when the user clicks the cross icon to close the search. Put any code that needs to run when the search is closed inside it, such as notifying the adapter about data set changes or closing the database.
searchView.setOnCloseListener({
   chatPresenter.stopSearch()
   false
})
  3. Searching a query: the searchView.setOnQueryTextListener listener overrides two methods:

3.1 onQueryTextSubmit: As the name suggests, this method is called when the query to be searched is submitted.

3.2 onQueryTextChange: This method is called whenever the query you are typing changes.

We basically wanted the same thing to happen whether the user has submitted the query or is still typing: take the query at that particular moment, find it in the database and highlight it. So, the chatPresenter.onSearchQuerySearched(query) method is called in both onQueryTextSubmit and onQueryTextChange to do that.

 searchView.setOnQueryTextListener(object : SearchView.OnQueryTextListener {
 
      override fun onQueryTextSubmit(query: String): Boolean {
           //Handle Search Query
           chatPresenter.onSearchQuerySearched(query)
           recyclerAdapter.query = query
           return false
       }

       override fun onQueryTextChange(newText: String): Boolean {
           if (TextUtils.isEmpty(newText)) {
               modifyMenu(false)
               recyclerAdapter.highlightMessagePosition = -1
               recyclerAdapter.notifyDataSetChanged()
               if (!editText.isFocused) {
                   editText.requestFocus()
               }
           } else {
               chatPresenter.onSearchQuerySearched(newText)
               recyclerAdapter.query = newText
           }
           return false
       }
   })
   return true
}
  4. Finding the query in the database: now that we have a query to search for, we can use a database operation to do it. The code snippet below finds all the messages which contain the query and works on them. If the query is not found, it simply displays a toast saying “Not found”.
override fun onSearchQuerySearched(query: String) {
   chatView?.displaySearchElements(true)
   results = databaseRepository.getSearchResults(query)
   offset = 1
   if (results.size > 0) {
       chatView?.modifyMenu(true)
       chatView?.searchMovement(results[results.size - offset].id.toInt())
   } else {
       chatView?.showToast(utilModel.getString(R.string.not_found))
   }
}

This is the database operation.

override fun getSearchResults(query: String): RealmResults<ChatMessage> {
   return realm.where(ChatMessage::class.java).contains(Constant.CONTENT,
           query, Case.INSENSITIVE).findAll()
}

  5. Highlighting the matched part of the message: now that we know which messages contain the query, we just want to highlight it with a bright color to display the result. For that, we used SpannableString to highlight a part of the complete string.
String text = chatTextView.getText().toString();
SpannableString modify = new SpannableString(text);
Pattern pattern = Pattern.compile(query, Pattern.CASE_INSENSITIVE);
Matcher matcher = pattern.matcher(modify);
while (matcher.find()) {
   int startIndex = matcher.start();
   int endIndex = matcher.end();
   modify.setSpan(new BackgroundColorSpan(Color.parseColor("#ffff00")), startIndex, endIndex, Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
}
chatTextView.setText(modify);

Summary

The whole point of this blog post was to introduce Android’s SearchView widget and show how it makes searching easy. All the methods you need are already implemented; you just need to call them and add the database operations.

Resources

  1. The link to official android documentation explaining about different methods in SearchView Class https://developer.android.com/reference/android/widget/SearchView.html
  2. Another tutorial about SearchView http://www.journaldev.com/12478/android-searchview-example-tutorial

Reset SUSI.AI User Password & Parameter extraction from Link

In this blog I will discuss how Accounts handles an incoming request to reset a password. If users forget their password, they can use the forgot password button on http://accounts.susi.ai, and its implementation is quite straightforward. As soon as a user enters their email id and hits the RESET button, an AJAX call is made which first checks whether the email id is registered with SUSI.AI. If not, a failure message is shown to the user. If the user is found in the database, an email is sent to them containing a 30-character token. On the server, the token is hashed against the user’s email id and given a validity of 7 days.
Let us have a look at the reset password link one receives.

http://accounts.susi.ai/?token={30 characters long token}

On clicking this link, the user is redirected to http://accounts.susi.ai with the token as a request parameter. On the client side, a check is made to evaluate whether the URL has a token parameter or not. That was the overview. Since http://accounts.susi.ai is based on the ReactJS framework, this is not as direct as the native PHP way, but it is much more logical and systematic. Let us now take a closer look at how this parameter is searched for and how the token is extracted and validated.

As you can see, http://accounts.susi.ai and http://accounts.susi.ai/?token={token} both take the user to the same page, so the first task is to check whether a token parameter is passed in the URL or not.
First import addUrlProps and UrlQueryParamTypes from the react-url-query package and PropTypes from the prop-types package; these will be required in further steps.
Have a look at the code and then we will understand how it works.

const urlPropsQueryConfig = {
  token: { type: UrlQueryParamTypes.string },
};

class Login extends Component {
	static propTypes = {
    // URL props are automatically decoded and passed in based on the config
    token: PropTypes.string,
    // change handlers are automatically generated when given a config.
    // By default they update that single query parameter and maintain existing
    // values in the other parameters.
    onChangeToken: PropTypes.func,
  }

  static defaultProps = {
    token: "null",
  }

Above in the first step, we have defined by what parameter should the Reset Password triggered. It means, if and only if the incoming parameter in the URL is token, we move to next step, otherwise normal http://accounts.susi.ai page should be rendered. Also we have defined the data type of the token parameter to be UrlQueryParamTypes.string.
PropTypes are attributes in ReactJS similar to tags in HTML. So again, we have defined the data type. onChangeToken is a custom attribute which is fired whenever token parameter is modified. To declare default values, we have used defaultProps function. If token is not passed in the URL, by default it makes it null. This is still not the last step. Till now we have not yet checked if token is there or not. This is done in the componentDidMount function. componentDidMount is a function which gets called before the client renders the page. In this function, we will check that if the value of token is not equal to null, then a value must have been passed to it. If the value of token is not equal to null, it extracts the token from the URL and redirects user to reset password application. Below is the code snippet for better understanding.

componentDidMount() {
		const {
      token
    } = this.props;
		console.log(token)
		if(token !== "null") {
			this.props.history.push('/resetpass/?token='+token);
		}
		if(cookies.get('loggedIn')) {
			this.props.history.push('/userhome', { showLogin: false });
		}
	}

With this, the first step of the process is done. In the second step, we extract the token from the URL in a similar fashion, followed by an AJAX request to the server which validates the token and responds accordingly (the token might be invalid or expired). Success is encoded in a JSON object and sent to the client; an error is thrown as an error, and the client shows an “invalid token” message. On success, the server sends the email id of the user against which the token was hashed, and the client displays it on the page. Now the user has to enter a new password and confirm it; the client also checks whether both fields have the same value. If they do, a simple AJAX call is made to the server with the new password, the email id and the token. If nothing goes wrong in between (connection timeout, server down, etc.), the client receives a JSON response with the “accepted” flag set to true and the message “Your password has been changed!”.
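As an illustration of that final call, here is a minimal sketch. The endpoint path, parameter names and placeholder values below are assumptions made purely for illustration; refer to the SUSI server API for the actual reset password endpoint and its parameters.

import $ from 'jquery';

// Placeholder values, for illustration only
const BASE_URL = 'https://api.susi.ai';
const email = 'user@example.com';                // email id returned by the token validation step
const token = new URL(window.location.href).searchParams.get('token');
const newPassword = 'the-new-password';          // value from the confirmed password fields

$.ajax({
    url: BASE_URL + '/aaa/changepassword.json',  // endpoint path assumed for illustration
    dataType: 'jsonp',
    crossDomain: true,
    data: { email: email, token: token, newpass: newPassword },
    success: function (response) {
        if (response.accepted) {
            console.log(response.message);       // e.g. "Your password has been changed!"
        }
    },
    error: function () {
        console.log('Invalid or expired token, or the request failed');
    }
});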

Additional Resources:

(npmjs – official documentation of create-react-apps)

(Fortnox developer’s blog post)

(Dave Ceddia’s blog post for new ReactJS developers)


Download SUSI.AI Setting Files from Data Folder

In this blog, I will discuss how the DownloadDataSettings servlet hosted on the SUSI server works. This post also gives a step-by-step demonstration of how to use this feature if you host your own custom SUSI server and have admin rights to it. Given below is the endpoint to which the request to download a particular file has to be made.

/data/settings

For a systematic workflow, users with an admin login are given special access. This allows them to download the settings files and go through them easily when needed. There are various files which contain the email ids of registered users (accounting.json), the user roles associated with them (authorization.json), the groups they are a part of (groups.json), etc. To list all the files in the folder, use the endpoint given below:

/aaa/listSettings.json

How does the above servlet work? Before that, let us see how to get admin rights on your custom SUSI.AI server.
For admin login, you need access to the files and folders on the server. Sign up with an account and browse to

/data/settings/authorization.json

Find the email id with which you signed up for admin login and change userRole to “admin”. For example,

{
	"email:test@test.com": {
		"permissions": {},
		"userRole": "user"
	}
}

If you have signed up with an email id “test@test.com” and want to give admin access to it, modify the userRole to “admin”. See below.

{
	"email:test@test.com": {
		"permissions": {},
		"userRole": "admin"
	}
}

Until now, the server did not have any email id with admin login or a user role equal to admin, hence this exercise is required only for the first admin. Later admins can use the changeUserRole application to give, change or modify user roles for any registered user. By now you should have an admin login session. Let’s now see how the download and file listing servlets work.
First, the server creates a path by referencing the settings folder locally with the help of DAO.data_dir.getPath(). This gives a string path to the data directory containing all the data/settings files. The server then just has to create a JSONArray, passing to its constructor a String array which contains the names of all the data/settings files. If the process is not successful, “accepted” = false is sent as an error to the user. The base user role required to access the servlet is ADMIN, as only admins are allowed to download data/settings files.
The name of the file to download has to be sent in the HTTP request as a GET parameter. For example, if an admin wants to download accounting.json to get the list of all registered users, the request is made in the following way:

BASE_URL+/data/settings?file=file_name

*BASE_URL is the URL where the server is hosted. For standard server, use BASE_URL = http://api.susi.ai.
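For illustration, an admin client could download a file with a call like the sketch below. The authentication handling is an assumption; the exact mechanism depends on how your server session is managed.

// Sketch: fetch accounting.json from the server (admin session assumed to be sent with the request)
const BASE_URL = 'http://api.susi.ai';

fetch(BASE_URL + '/data/settings?file=accounting', { credentials: 'include' })
    .then((response) => response.blob())
    .then((blob) => console.log('Downloaded settings file,', blob.size, 'bytes'));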

In the initial steps, the server generates a path to the data/settings folder and finds the file whose name it receives in the request. If no filename is specified in the API call, the server sends the accounting.json file by default.

File settings = new File(DAO.data_dir.getPath()+"/settings");
String filePath = settings.getPath(); 
String fileName = post.get("file","accounting"); 
filePath += "/"+fileName+".json";

Next, the server extracts the file and, using the ServletOutputStream class, generates properties for it and sets the appropriate context. This context, in turn, fetches the MIME type for the file. If the MIME type comes back as null, the MIME type of the file defaults to application/octet-stream. For more information on MIME types, please look at the following link; a complete list of MIME types is compiled and documented here.

response.setContentType(mimetype);
response.setContentLength((int)file.length());

In the above code snippet, the MIME type and the length of the file being downloaded are set. Next, we set the headers for the download response, using the filename for that.

response.setHeader("Content-Disposition", "attachment; filename=" + fileName +".json");

All the manual work is done by now. The only thing left is to open a buffered stream, whose size has been defined as a class variable.
Here we use a byte array of 4096 elements and write the file to the client’s default download location.

private static final int BUFSIZE = 4096;

byte[] byteBuffer = new byte[BUFSIZE];
int length;
DataInputStream in = new DataInputStream(new FileInputStream(file));
while ((in != null) && ((length = in.read(byteBuffer)) != -1)) {
    outStream.write(byteBuffer, 0, length);
}

in.close();
outStream.close();

All the above-mentioned steps are enclosed in a try-catch block, which catches any exception and logs it in the log file. The message is also sent to the client for appropriate user information, along with a success or failure indication through a boolean flag. Do not forget to close the input and output streams: leaving them open may lead to memory leaks, and someone with sufficient knowledge of networking and buffered streams could be able to steal essential or secured data.

Additional Resources
