How SUSI AI Searches the Web For You

SUSI is now capable of performing web searches to answer your queries. When SUSI doesn't know how to answer a query, it performs a web search on the client side and displays the results as horizontally swipeable tiles, each tile giving a brief description and a link to the relevant source.

Let's visit SUSI WebChat and try it out.

Query: Search for Google
Response: <Web Search Results>

How does SUSI know when to perform a web search?

It uses action types to identify whether a web search is to be performed. The API response is parsed to check the action types, and if a websearch action type is present, an API call is made to the DuckDuckGo API with the relevant query. The results are displayed as tiles with:

  • Category: the topic related to the given query
  • Text: the result text from the web search
  • Image: an image related to the query, if present
  • Link: a URL linking to the relevant source

Parsing the actions:

Let us look at the API response for a query.

Sample Query: search for google

Response: <API Response>

"actions": [
  {
    "type": "answer",
    "expression": "Here is a web search result:"
  },
  {
    "type": "websearch",
    "query": "google"
  }
]

Note: The API response has been trimmed to show only the relevant content.
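
As a quick sketch (assuming the parsed response shape shown above), the client can collect the action types before checking for a websearch action:

// Collect the action types from the first answer of the API response;
// the resulting array, e.g. ['answer', 'websearch'], is what the
// actions.indexOf('websearch') check further below operates on.
let actions = response.answers[0].actions.map(action => action.type);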

We find a websearch action type, with the query to be searched being google. So we now make an API call to the DuckDuckGo API to get our web search results.

API Call Format: api.duckduckgo.com/?q={query}&format=json

API Call for the query google: http://api.duckduckgo.com/?q=google&format=json

From the DuckDuckGo API response we then generate our web search tiles, using the fields present in each object.

This is a sample object from the DuckDuckGo API response under RelatedTopics, which we use to create our web search result tiles.

{
  "Result": "<a href=\"https:\/\/duckduckgo.com\/Google\">Google<\/a> An American multinational technology company specializing in Internet-related services and...",
  "Icon": {
    "URL": "https:\/\/duckduckgo.com\/i\/8f85c93f.png",
    "Height": "",
    "Width": ""
  },
  "FirstURL": "https:\/\/duckduckgo.com\/Google",
  "Text": "Google An American multinational technology company specializing in Internet-related services and..."
},
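
The JSX shown further below consumes tile objects with title, description, link and icon fields. A hedged sketch of how one RelatedTopics entry could map onto that shape (the exact title/description split used in the app is an assumption here):

// Map one DuckDuckGo RelatedTopics entry onto the tile shape used later
function toTile(topic) {
  return {
    title: topic.Text,                  // headline text for the tile
    description: topic.Text,            // brief description shown in the tile
    link: topic.FirstURL,               // URL of the relevant source
    icon: topic.Icon && topic.Icon.URL  // optional image for the tile
  };
}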

Let us look at the code for querying data from the API:

if (actions.indexOf('websearch') >= 0) {

  let actionIndex = actions.indexOf('websearch');
  let query = response.answers[0].actions[actionIndex].query;

  $.ajax({
    url: 'http://api.duckduckgo.com/?format=json&q=' + query,
    dataType: 'jsonp',
    crossDomain: true,
    timeout: 3000,
    async: false,

    success: function (data) {
      // The related topics become the web search result tiles
      receivedMessage.websearchresults = data.RelatedTopics;

      // If the response carries an abstract, show it as the first tile
      if (data.AbstractText) {
        let abstractTile = {
          Text: '',
          FirstURL: '',
          Icon: { URL: '' },
        };
        abstractTile.Text = data.AbstractText;
        abstractTile.FirstURL = data.AbstractURL;
        abstractTile.Icon.URL = data.Image;
        receivedMessage.websearchresults.unshift(abstractTile);
      }

      let message = ChatMessageUtils.getSUSIMessageData(
        receivedMessage, currentThreadID);

      ChatAppDispatcher.dispatch({
        type: ActionTypes.CREATE_SUSI_MESSAGE,
        message
      });
    },

    error: function (errorThrown) {
      console.log(errorThrown);
      receivedMessage.text = 'Please check your internet connection';
    }

  });

}

Here, from the actions object, we get the query needed to search the web. We then make an AJAX call with that query to the DuckDuckGo API. If the API call succeeds, we collect the required data as an array of tile objects, store it as websearchresults, and dispatch the message. The message store is updated, and when the message is passed to the components, we use the websearchresults to create the result tiles:

<MuiThemeProvider>
  <Paper zDepth={0} className='tile'>
    <a rel='noopener noreferrer'
      href={tile.link} target='_blank'
      className='tile-anchor'>
      {tile.icon &&
        (<Paper className='tile-img-container'>
          <img src={tile.icon}
            className='tile-img' alt=''/>
        </Paper>)
      }
      <Paper className='tile-text'>
        <p className='tile-title'>
          <strong>
            {processText(tile.title,'websearch-rss')}
          </strong>
        </p>
        {processText(tile.description,'websearch-rss')}
      </Paper>
    </a>
  </Paper>
</MuiThemeProvider>

We then display the tiles as a horizontally swipeable carousel, ensuring a good and interactive UX.

The react-slick module was used to implement the horizontal swiping feature.

function renderTiles(tiles) {

  if (tiles.length === 0) {
    let noResultFound = 'NO Results Found';
    return (<center>{noResultFound}</center>);
  }

  let resultTiles = drawTiles(tiles);

  var settings = {
    speed: 500,
    slidesToShow: 3,
    slidesToScroll: 1,
    swipeToSlide: true,
    swipe: true,
    arrows: false
  };

  return (
    <Slider {...settings}>
      {resultTiles}
    </Slider>
  );

}

Here we handle the corner case where there are no results to display by rendering `NO Results Found`. We then have our web search results displayed as swipeable tiles, each with an image, title, description and link to the source.

This is how SUSI performs web search to respond to user queries, ensuring that no query goes unanswered! Don't forget to swipe left and go through all the results displayed!


How SUSI AI Tabulates Answers For You

SUSI is an artificial intelligence chat bot that responds to all kinds of user queries. It isn't a regular chat bot replying in just plain text: it supports various response types, which we refer to as 'actions'. One such action is the "table" type. When the response to a user query contains a list of answers that can be grouped, it is better visualised as a table than as plain text.

Let's visit SUSI WebChat and try it out. In our example we ask SUSI for the 2009 race statistics of the British Formula 1 racing driver Lewis Hamilton.

Query: race stats of hamilton in f1 season 2009

Response: <table> (API response)


How does SUSI do that? Let us look at the skill teaching SUSI to give table responses.

# Returns race stats as a table

race summary of * in f1 season *|race stats of * in f1 season *
!console:
{
  "url":"http://ergast.com/api/f1/$2$/drivers/$1$/status.json",
  "path":"$.MRData.StatusTable.Status",
  "actions":[{
     "type":"table",
     "columns":{"status":"Race Status","count":"Number Of Races"}
   }]
}
eol

Here, we tell SUSI that the response type is a table through the type attribute in actions. We also define the column names, and under which column each value must be put, via their respective keys. Using this information SUSI generates a response with the table schema and data points.

How do we know when to render a table?

We know it through the type attribute in the actions from the API response.

"actions": [{
  "type": "table",
  "columns": {
    "status": "Race Status",
    "count": "Number Of Races"
  },
  "count": -1
  }]
}],

We can see that the type is table, so we now know that we have to render a table.
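
A minimal sketch of how the client can branch on the action type (the switch wrapper and variable names here are illustrative; drawTable is the actual function shown later in this post):

switch (action.type) {
  case 'table': {
    // read the column schema and row cap from this action,
    // then build the table from the data array
    let columns = action.columns;
    let count = action.count;
    let table = drawTable(columns, data.answers[0].data, count);
    break;
  }
  // other action types (answer, anchor, map, ...) are handled elsewhere
}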

But what is the table schema? What do we fill it with?

There is a columns key under actions, and its value is an object whose key-value pairs give us the column names and tell us which data to put under each column.

Here, we have two columns – Race Status and Number Of Races

And the data to put under each column is found in answers[0].data, with the same keys as those used for the column names, i.e. 'status' and 'count'.

Sample data object from the API response:

{
  "statusId": "2",
  "count": "1",
  "status": "Disqualified"
}

The value under the 'status' key is 'Disqualified', and the column name for the 'status' key is 'Race Status', so Disqualified is entered under the Race Status column in the table. Similarly, 1 is entered under the Number Of Races column. We thus have a row of our table, and we populate the table for each object in the data array using the same procedure.

let columns = data.answers[0].actions[index].columns;
let count = data.answers[0].actions[index].count;
let table = drawTable(columns, data.answers[0].data, count);

We also have a 'count' attribute in the API response. It denotes how many rows to populate in the table. If count is -1, it means there is no limit, so all the results are displayed.

function drawTable(columns, tableData, count) {

  let parseKeys;
  let showColName = true;

  // If columns is an array, it lists the keys directly and carries no
  // display names, so the header row is skipped
  if (columns.constructor === Array) {
    parseKeys = columns;
    showColName = false;
  }
  else {
    parseKeys = Object.keys(columns);
  }

  let tableheader = parseKeys.map((key, i) => {
    return (<TableHeaderColumn key={i}>{columns[key]}</TableHeaderColumn>);
  });

  // count > -1 caps the number of rows; -1 means show everything
  let rowCount = tableData.length;

  if (count > -1) {
    rowCount = Math.min(count, tableData.length);
  }

  let rows = [];

  for (var j = 0; j < rowCount; j++) {

    let eachrow = tableData[j];

    let rowcols = parseKeys.map((key, i) => {
      return (
        <TableRowColumn key={i}>
          <Linkify properties={{target: '_blank'}}>
            {eachrow[key]}
          </Linkify>
        </TableRowColumn>
      );
    });

    rows.push(
      <TableRow key={j}>{rowcols}</TableRow>
    );

  }

  const table =
    <MuiThemeProvider>
      <Table selectable={false}>
        <TableHeader displaySelectAll={false} adjustForCheckbox={false}>
          { showColName && <TableRow>{tableheader}</TableRow>}
        </TableHeader>
        <TableBody displayRowCheckbox={false}>{rows}</TableBody>
      </Table>
    </MuiThemeProvider>

  return table;

}

Here we first determine how many rows to populate using the count attribute, then parse the columns to get the column names and keys. We then loop through the data and populate each row.

This is how SUSI responds with tabulated data!

You can create your own table skill and SUSI will give the tabulated response you need. Check out this tutorial to know more about SUSI and the various other action types it supports.


Implementing Real Time Speech to Text in SUSI Android App

Android has a very interesting built-in feature called Speech to Text. As the name suggests, it converts the user's speech into text. Android provides this feature in the form of a recognizer intent: just the four lines of code below are needed to start a recognizer intent that captures speech and converts it into text. This was how it was implemented earlier in the SUSI Android App too. Simple, isn't it?

Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
       RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE,
       "com.domain.app");
startActivityForResult(intent, REQ_CODE_SPEECH_INPUT);

But there is a problem here: it pops up a dialog box to take the speech input, like this:

Although this works, it has several disadvantages:

  1. It breaks the connection between the user and the app.
  2. When the user speaks, it first listens to the complete sentence and only then converts it into text, so the user cannot see what the first word was until the whole sentence has been said.
  3. There is no way for the user to check whether the sentence being spoken is actually being converted to text correctly; they only find out once the sentence is complete.
  4. The user cannot cancel the conversion if a word is converted wrongly, because they do not even know whether a word was converted wrongly. Whether the conversion is right stays a suspense until the end.

So, the main challenge was: how can we improve Speech to Text and implement it like Google Now? What Google Now does is convert speech to text while taking the input, making it a real-time speech to text converter. So, by the time you speak the second word of a sentence, the first word has already been converted and displayed. You can also cancel the conversion of the rest of the sentence if the first word was converted wrongly.

I searched this problem a lot on the internet, read various blogs and Stack Overflow answers, and read the Speech to Text documentation, until I finally came across the SpeechRecognizer class and RecognitionListener. The plan was to use a SpeechRecognizer object with the RecognizerIntent and a RecognitionListener to generate partial results.

So, I just added this line to the above code snippet of the recognizer intent:

intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);

And introduced the SpeechRecognizer class and RecognitionListener like this:

SpeechRecognizer recognizer = SpeechRecognizer
  .createSpeechRecognizer(this.getApplicationContext());

RecognitionListener listener = new RecognitionListener() {
   @Override
   public void onResults(Bundle results) {
   }

   @Override
   public void onError(int error) {
   }

   @Override
   public void onPartialResults(Bundle partialResults) {
    // The main method required. Just keep displaying the results
    // received here, e.g. from partialResults
    //   .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
   }

   // RecognitionListener is an interface, so its remaining callbacks
   // (onReadyForSpeech, onBeginningOfSpeech, onRmsChanged,
   // onBufferReceived, onEndOfSpeech, onEvent) must also be overridden,
   // even if left empty.
};

Now combine the SpeechRecognizer class with the RecognizerIntent and RecognitionListener with these two lines:

recognizer.setRecognitionListener(listener);
recognizer.startListening(intent);

Everything is now set up. You can use the results in the onPartialResults method and keep displaying the partial results. So, if you want to say "My name is…", "My" will be displayed on screen by the time you speak "name", and "My name" will be displayed by the time you speak "is". Isn't it cool?

This is how real-time Speech to Text is implemented in the SUSI Android App. If you want to implement this in your app too, just follow the above steps and you are good to go. The results look like this.



Test SUSI Web App with Facebook Jest

Jest is used by Facebook to test all JavaScript code, especially React code snippets. If you need to set up Jest on your React application you can follow these simple steps. But if your React application was made with "create-react-app", you do not need to set up Jest manually, because it ships with Jest; you can run tests using the "react-scripts" node module.

Since SUSI chat is made with "create-react-app", we do not need to install and run Jest directly. We execute our test cases using "npm test", which runs the "react-scripts test" command. It executes all ".js" files under "__tests__" folders, and all other files with ".spec.js" or ".test.js" suffixes.

React apps made from "create-react-app" come with a sample test case (a smoke test) that checks whether the whole application builds correctly. If the smoke test passes, the rest of the test cases are run.

import React from 'react';
import ReactDOM from 'react-dom';
import ChatApp from '../../components/ChatApp.react';

it('renders without crashing', () => {
  const div = document.createElement('div');
  ReactDOM.render(<ChatApp />, div);
});

This renders all the components inside the "<ChatApp />" component and checks whether they all integrate correctly.

If we need to check only one component in an isolated environment, we can use the shallow rendering API. We have used shallow rendering to check each and every component in isolation.

We have to install enzyme and the test renderer before using it:

npm install --save-dev enzyme react-test-renderer

import React from 'react';
import MessageSection from '../../components/MessageSection.react';
import { shallow } from 'enzyme';
it('render MessageSection without crashing', () => {
  shallow(<MessageSection />);
});

This test case tests only the "MessageSection" component, not its child components.

After executing "npm test" you will get the numbers of passed and failed test cases.

If you need to see the coverage you can see it without installing additional dependencies.

You just need to run this.

npm test -- --coverage

It will show the output like this.

This view shows how many lines, functions, statements and branches your program has, and how much of each is covered by your test cases.

If we are going to write new test cases for SUSI chat, we have to make a separate file in the "__tests__" folder and name it after the corresponding file that we are going to test.

it('your test case description',()=>{
 //test what you need 
})

Normally test cases look like this. In test cases you can use "test" instead of "it". After the test case description there is a fat arrow function; inside this fat arrow function you add what you need to test.

In the below example I have compared the returned value of a function with a static value.

function funcName(){
 return 1;
}

it('your test case description',()=>{
 expect(funcName()).toBe(1);
})

You have to pass the function or variable that needs to be tested into "expect()" and the value you expect from that function or variable into "toBe()". Instead of "toBe()" you can use different matchers according to your need.
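
For instance, here are a few other standard Jest matchers (the values are illustrative, not from the SUSI test suite):

it('uses other matchers', () => {
  expect({name: 'SUSI'}).toEqual({name: 'SUSI'}); // deep equality for objects
  expect(['a', 'b']).toContain('a');              // membership in an array
  expect('SUSI chat').toMatch(/SUSI/);            // regular expression match
});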

If you have a long list of test cases you can group them using describe().

describe('my beverage', () => {
  test('is delicious', () => {
    expect(myBeverage.delicious).toBeTruthy();
  });

  test('is not sour', () => {
    expect(myBeverage.sour).toBeFalsy();
  });
});

This post covers only a few aspects of testing. You can learn more about Jest testing from the official documentation here.


How to make SUSI AI Line Bot

In order to integrate SUSI's API with a Line bot you will first need a Line account, so that you can follow the procedure below. You can download the app from here.

Pre-requisites:

  • Line app
  • Github
  • Heroku

    Steps:
    1. Install Node.js from the link below on your computer if you haven't installed it already: https://nodejs.org/en/.
    2. Create a folder with any name, open a shell, and change your current directory to the new folder you created.
    3. Type npm init in the command line and enter details like name, version and entry point.
    4. Create a file with the same name that you entered as the entry point in the step above, i.e. index.js, and keep it in the same folder you created.
    5. Type the following commands in the command line: npm install --save @line/bot-sdk. After bot-sdk is installed, type npm install --save express, and after express is installed, type npm install --save request. When all the modules are installed, check your package.json; the modules will be included within the dependencies section.

      Your package.json file should look like this.

      {
        "name": "SUSI-Bot",
        "version": "1.0.0",
        "description": "SUSI AI LINE bot",
        "main": "index.js",
        "dependencies": {
          "@line/bot-sdk": "^1.0.0",
          "express": "^4.15.2",
          "request": "^2.81.0"
        },
        "scripts": {
          "start": "node index.js"
        }
      }
    6. Copy the following code into the file you created, i.e. index.js:
      'use strict';
      const line = require('@line/bot-sdk');
      const express = require('express');
      var request = require("request");
      
      // create LINE SDK config from env variables
      
      const config = {
         channelAccessToken: process.env.CHANNEL_ACCESS_TOKEN,
         channelSecret: process.env.CHANNEL_SECRET,
      };
      
      // create LINE SDK client
      
      const client = new line.Client(config);
      
      
      // create Express app
      // about Express: https://expressjs.com/
      
      const app = express();
      
      // register a webhook handler with middleware
      
      app.post('/webhook', line.middleware(config), (req, res) => {
         Promise
             .all(req.body.events.map(handleEvent))
             .then((result) => res.json(result));
      });
      
      // event handler
      
      function handleEvent(event) {
         if (event.type !== 'message' || event.message.type !== 'text') {
             // ignore non-text-message event
             return Promise.resolve(null);
         }
      
         var options = {
             method: 'GET',
             url: 'http://api.asksusi.com/susi/chat.json',
             qs: {
                 timezoneOffset: '-330',
                 q: event.message.text
             }
         };
      
         request(options, function(error, response, body) {
             if (error) throw new Error(error);
             // answer fetched from susi
             //console.log(body);
             var ans = (JSON.parse(body)).answers[0].actions[0].expression;
             // create a echoing text message
             const answer = {
                 type: 'text',
                 text: ans
             };
      
             // use reply API
      
             return client.replyMessage(event.replyToken, answer);
         })
      }
      
      // listen on port
      
      const port = process.env.PORT || 3000;
      app.listen(port, () => {
         console.log(`listening on ${port}`);
      });
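
      The code above reads the channel access token and secret from environment variables. Assuming a Heroku deployment as described in the later steps, these can be set with the Heroku CLI (placeholder values shown):

      heroku config:set CHANNEL_ACCESS_TOKEN=<your token> CHANNEL_SECRET=<your secret>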
    7. Now we have to get the channel access token and channel secret. To get them, follow the steps below.

    8. If you have a Line account, move to the next step; else sign up for an account and make one.
    9. Create a Line account on the Line Business Center with the messaging API and follow these steps:
    10. In the Line Business Center, select Messaging API under the Service category at the top of the page.
    11. Select start using messaging API, enter the required information and confirm it.
    12. Click the LINE@ Manager option; in settings, go to bot settings and enable the messaging API.
    13. Now we have to configure the settings: allow messages using webhook by selecting allow for "Use Webhooks".
    14. Go to the Accounts option at the top of the page and open LINE Developers.
    15. To get a channel access token for accessing the API, click ISSUE for the "Channel access token" item.
    16. Click EDIT and set a webhook URL for your channel. To get the webhook URL, deploy your bot to Heroku as described in the steps below.
    17. Before deploying, we have to make a GitHub repository for the chatbot. To make a GitHub repository follow these steps:

      In the command line, change the current directory to the folder we created above and write:

      git init
      git add .
      git commit -m "initial"
      git remote add origin <URL for remote repository>
      git remote -v
      git push -u origin master

      You will get the URL for the remote repository by making a repository on your GitHub and copying its link.

    18. To deploy your bot to Heroku you need an account on Heroku; after making an account, create an app.
    19. Deploy the app using the GitHub deployment method.
    20. Select the automatic deployment method.
    21. After the app is deployed, copy its URL and paste it as the webhook URL on the Line channel console page from where we got the channel access token:

                https://<your_heroku_app_name>.herokuapp.com/webhook
    22. Your SUSI AI Line bot is ready! Add this account as a friend and start chatting with SUSI.
      Here is the LINE API reference https://devdocs.line.me/en/

Conversion of CSS styles into React styles in SUSI Web Chat App

Earlier this week we had an issue where the text in the search box of the SUSI web app was not white, even though all the required styles were applied. After careful inspection it was found that there is an attribute named -webkit-text-fill-color which was set to black.

Adding such an attribute directly to our ReactJS code would cause lint errors, so after carefully searching Stack Overflow, I found a way to add this CSS attribute to our React code by using a different case. I decided to write a blog post on this for future reference, and it might come in handy for other developers as well.

If you want to write CSS in JavaScript, you have to turn dashed-key-words into camelCaseKeys.

For example:

background-color => backgroundColor
border-radius => borderRadius

But a vendor prefix starts with a capital letter (except ms):

-webkit-box-shadow => WebkitBoxShadow (capital W)
-ms-transition => msTransition ('ms' is the only lowercase vendor prefix)

const containerStyle = {
  WebkitBoxShadow: '0 0 0 1000px white inset'
};

So in our case:

-webkit-text-fill-color became WebkitTextFillColor

The final code of the styles looked like this:

const searchstyle = {
  WebkitTextFillColor: 'white',
  color: 'white'
};

Now, because inline styles get attached to tags directly instead of using selectors, we have to put this style on the <input> tag itself, not on the container.
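
For example, a minimal sketch (the other props here are placeholders, not the app's actual input element):

<input style={searchstyle} type='text' placeholder='Search...' />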

See the react doc #inline-styles section for more details.


How to keep crash records of SUSI.AI Android with Crashlytics

At this stage of the development of SUSI.AI Android there are many changes, and at times this results in inconsistencies and crashes of the app. One important question we face is how to keep records of crashes so that we can improve our app. Using Crashlytics is a way to keep records of crashes. The easiest way to add Crashlytics to an app is to integrate the Fabric plugin in Android Studio.

  • First create an account at Fabric.
  • When you create the account it will send you a confirmation mail.
  • After clicking the confirmation mail it will redirect you to the Fabric page.
  • It shows you different platform options. Select Android as the platform.

  • For Windows/Linux users, select Settings from the File menu. For Mac users, select Preferences from the menu.
  • Select Plugins, click the Browse repositories button and search for “Fabric for Android”.
  • Click the install plugin button to download and install the plugin.
  • You can see the Fabric option on the right side. Click on it and enter your credentials to sign in.
  • Select the susi_android project and click next.
  • Fabric will list all the organizations you registered, so select the organization you want to associate the app with and click next. In my case the organization is susi.
  • Fabric will then list all of its kits. Select Crashlytics and click Next.
  • Click the install button. It will add Crashlytics to the project.
  • Fabric wants to make changes in the MainApplication and AndroidManifest.xml files, so click the Apply button for the changes to happen.

  • Build and run your application to make sure that everything is configured properly. If your app was successfully configured, you will get an email sent instantly to the email address you used to sign up with Fabric.
  • Now you can track the crashes of your app on the dashboard of your Fabric account.
  • It will give you details such as: 1) how many users are affected and how many times the app crashed, with dates; 2) details of the devices on which the app crashed; 3) the causes of the errors.

For more information use these links:

https://fabric.io/home

https://fabric.io/kits/android/crashlytics


How to add the Google Books API to SUSI AI

SUSI.AI is an open-source personal assistant, and you can easily add new skills to SUSI. In this blog post I'm going to add Google's Books API to SUSI as a skill. A complete tutorial on SUSI.AI skills is in the repository; check out Tutorial Level 11: Call an external API here, and you will understand how we can integrate an external API with SUSI AI.

To start adding a book skill to SUSI.AI, first go to this URL http://dream.susi.ai/, give a name in the text field and press OK.

Copy and paste the skill code given below into the newly opened etherpad.

Go to this URL http://chat.susi.ai to test the new skill.

Type “dream blogpost” in the chat and press enter. Now we can use the skills we added to the etherpad.

To understand Google's Books API, use this URL. Your request URL should look like this:

https://www.googleapis.com/books/v1/volumes?q=BOOKNAME&key=yourAPIKey

You should replace yourAPIKey with your own API key.

To get started you first need to get an API key.

Go to this URL, click the GET A KEY button at the top right, and select “Create a new project”.

Add a name to the project and click the “CREATE AND ENABLE API” button.

Copy your API key and replace the yourAPIKey part of the request URL with it.

Paste the request URL in your browser's address bar, replace the BOOKNAME part with “flower” and go to the URL. It will give this JSON.

We need to get the full name of the book, which is in the items array. To get it we have to go through this hierarchy:

items array > first item > volumeInfo > title

Go to the etherpad we made before and paste the following code.


is there any book called * ?
!console:did you mean "$title$" ? Here is a link to read more: $infoLink$
{
"url":"https://www.googleapis.com/books/v1/volumes?q=$1$&key=AIzaSyCt3Wop5gN3S5H0r1CKZlXIgaM908oVDls",
"path":"$.items[0].volumeInfo"
}
eol
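
For reference, the part of the Books API response that this skill reads has roughly the following shape (trimmed to the fields used; the values are placeholders):

{
  "items": [
    {
      "volumeInfo": {
        "title": "The Flower Book",
        "infoLink": "https://books.google.com/..."
      }
    }
  ]
}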

The first line of the code, “is there any book called *?”, is the question the user asks. * is the variant part of the question. That part can be used in the code as $1$; if there are more variants we can add multiple asterisk marks and refer to them by the corresponding number, e.g. $1$, $2$, $3$.
  • In this code, “path”: “$.items[0].volumeInfo”
  • $ represents the full JSON result.
  • items[0] gets the first element.
  • .volumeInfo refers to the volumeInfo object.

The line !console:did you mean “$title$” ? Here is a link to read more: $infoLink$ produces the output.

  • $title$ refers to the “title” part of the data that comes from “path”.
  • $infoLink$ gives a link to more details.

Now go to the chat UI and type “dream blogpost” again. After it shows “dreaming enabled”, type in “is there any book called world war?”. It will result in the following.

This is a simple way to add any service to SUSI as a skill.


Adding Send Button in SUSI.AI webchat

Our SUSI.AI web chat app is improving day by day. At one point it looked like this:

It replied to queries and had all the basic functionality, but something was missing. When viewed on mobile, we realised that it should have a send button.

Send buttons actually make chat apps look cool and give them their complete look.

Now a method was defined in the MessageComposer component of the React app, which takes the value of the textarea and passes it on.

Method:

_onClickButton(){
  let text = this.state.text.trim();
  if (text) {
    Actions.createMessage(text, this.props.threadID);
  }
  this.setState({text: ''});
}

Now this method was to be called in the onClick action of our send button, which was included in the div rendered by the MessageComposer component.

This method is also called when the ENTER key is pressed on the keyboard. This has been implemented as well, as sketched below, and the full version can be seen here.
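
A minimal sketch of such a key handler, assuming the same component (the actual handler in the app may differ):

_onKeyDown(event) {
  if (event.keyCode === 13 && !event.shiftKey) { // ENTER without Shift
    event.preventDefault();
    this._onClickButton(); // reuse the send logic shown above
  }
}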

Why wrap the textarea and button in a div and not render them as two independent items?

Well, in React a component can only render a single root element, so wrapping them in a div is our only option.

Now that we had our functionality running, it was time for styling.

Our team chose to use http://www.material-ui.com/ and its components for styling.

We chose the FloatingActionButton as the send button.

Now, to use Material UI components in our component, several imports were needed. To enable these features we also needed to change our render-to-DOM code to:

import MuiThemeProvider from 'material-ui/styles/MuiThemeProvider';

const App = () => (
  <MuiThemeProvider>
    <ChatApp />
  </MuiThemeProvider>
);

ReactDOM.render(
  <App />,
  document.getElementById('root')
);

The imports in our MessageComposer looked like this:

import Send from 'material-ui/svg-icons/content/send';
import FloatingActionButton from 'material-ui/FloatingActionButton';
import injectTapEventPlugin from 'react-tap-event-plugin';

injectTapEventPlugin();

injectTapEventPlugin is a very important method: in order to have event handlers on our send button, we need to call this method, and the method which handles the onClick event is known as onTouchTap.

The JSX code which was to be rendered looked like this:

<div className="message-composer">
         <textarea
           name="message"
           value={this.state.text}
           onChange={this._onChange.bind(this)}
           onKeyDown={this._onKeyDown.bind(this)}
           ref={(textarea)=> { this.nameInput = textarea; }}
           placeholder="Type a message..."
         />
         <FloatingActionButton
           backgroundColor=' #607D8B'
           onTouchTap={this._onClickButton.bind(this)}
           style={style}>
           <Send />
         </FloatingActionButton>
       </div>

The styling for the button was done separately and looked like this:

const style = {
  mini: true,
  top: '1px',
  right: '5px',
  position: 'absolute',
};

Ultimately, after successfully implementing all of this, our SUSI.AI web chat had a good-looking floating action send button.

This can be tested here.


Map Support for SUSI Webchat

The SUSI chat client now supports map tiles for queries related to location. SUSI responds with an interactive embedded map tile with the location pointed out by a marker. It also provides a link to Open Street Maps, where you can see the whole view of the location using the zoom options provided, and it gives the population count for that location.

Let's visit SUSI WebChat and try it out.

Query: Where is london
Response:

Implementation:

How do we know that a map tile is to be rendered?
The actions in the API response tell the client what to render. The client loops through the actions array and renders the response for each action accordingly.

"actions": [
  {
    "type": "answer",
    "expression": "City of London is a place with a population of 7556900.             Here is a map: https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228"
  },
  {
    "type": "anchor",
    "link":    "https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228",
    "text": "Link to Openstreetmap: City of London"
  },
  {
    "type": "map",
    "latitude": "51.51279067225417",
    "longitude": "-0.09184009399817228",
    "zoom": "13"
  }
]

Note: The API response has been trimmed to show only the relevant content.

The first action element is of type answer so the client renders the text response, ‘City of London is a place with a population of 7556900. Here is a map: https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228

The second action element is of type anchor with the text to display and the link to hyperlink specified by the text and link attributes, so the client renders the text `Link to Openstreetmap: City of London`, hyperlinked to “https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228”.

Finally, the third action element is of type map. The latitude, longitude and zoom level are specified by the latitude, longitude and zoom attributes, and the client renders a map using them.

I used the react-leaflet module to render the interactive map tiles.

To integrate it into our project and set the required style for the map tiles, we need to load Leaflet's CSS style sheet, and we also need to set a height and width for the map component.

<link rel="stylesheet"  href="http://cdn.leafletjs.com/leaflet/v0.7.7/leaflet.css" />
.leaflet-container {
  height: 150px;
  width: 80%;
  margin: 0 auto;
}
case 'map': {

  let lat = parseFloat(data.answers[0].actions[index].latitude);
  let lng = parseFloat(data.answers[0].actions[index].longitude);
  let zoom = parseFloat(data.answers[0].actions[index].zoom);
  let mymap = drawMap(lat, lng, zoom);

  listItems.push(
    <li className='message-list-item' key={action+index}>
      <section className={messageContainerClasses}>
        {mymap}
        <p className='message-time'>
          {message.date.toLocaleTimeString()}
        </p>
      </section>
    </li>
  );

  break;
}
import { divIcon } from 'leaflet';
import { Map, Marker, Popup, TileLayer } from 'react-leaflet';

// Draw a map centred on the given coordinates

function drawMap(lat, lng, zoom) {

  let position = [lat, lng];

  // Custom marker icon (the default Leaflet icon was not rendering)
  const icon = divIcon({
    className: 'map-marker-icon',
    iconSize: [35, 35]
  });

  const map = (
    <Map center={position} zoom={zoom}>
      <TileLayer
        attribution=''
        url='http://{s}.tile.osm.org/{z}/{x}/{y}.png'
      />
      <ExtendedMarker position={position} icon={icon}>
        <Popup>
          <span><strong>Hello!</strong> <br/> I am here.</span>
        </Popup>
      </ExtendedMarker>
    </Map>
  );

  return map;

}

Here, I used a custom marker icon because the default icon provided by Leaflet had an issue and was not being rendered. I used divIcon from Leaflet to create a custom map marker icon.

When the map tile is rendered, we see a popup message at the marker. The ExtendedMarker class is used to keep the popup open initially.

class ExtendedMarker extends Marker {
  componentDidMount() {
    super.componentDidMount();
    this.leafletElement.openPopup();
  }
}


The drawMap function returns a Map component which is rendered, and we have our interactive map!
