Setup Lint Testing in SUSI Android

Developers inevitably make mistakes while writing code, and even small mistakes or issues can negatively affect the overall functionality and speed of an app. That is why it is necessary to understand the importance of Lint testing in Android.

Android Lint is a tool built into Android Studio that scans the project and reports different types of bugs present in the code; it also finds typos and security issues in the app. Each issue is reported along with a severity level, allowing the developer to fix it according to its priority and the damage it can cause. It is easy to use and can significantly improve the quality of your code.

Effect of Lint testing on Speed of the Android App

Lint testing can significantly improve the speed of the app in the following ways:

  1. The Android Lint test helps remove declaration redundancy in the code, so Gradle does not need to bind the same object again and again, which helps improve speed.
  2. The Lint test helps find bugs related to the class structure in different Activities of the application, which is necessary to avoid memory leaks.
  3. Lint testing also tells the developer about the size of the resources used, for example drawable resources, which sometimes take up a large amount of memory in the application. Cleaning up these resources or replacing them with lightweight drawables helps increase the speed of the app.
  4. Overall, Lint testing helps remove typos, unused import statements and redundant strings, refactoring the whole codebase and increasing stability and speed (a small, hypothetical example of the kind of code Lint flags follows this list).
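To make this concrete, here is a small, entirely hypothetical Activity containing the kind of problems static analysis typically reports. The class and field names are made up for this illustration, and the exact issue IDs reported depend on the Lint version in use.

import android.app.Activity;
import android.content.Context;
import android.os.Bundle;
import java.util.List;   // unused import: flagged by the IDE's Lint-style inspections

public class ExampleActivity extends Activity {

    // A static reference to a Context is the kind of issue Lint's memory-leak
    // checks (for example StaticFieldLeak) warn about.
    private static Context leakedContext;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        leakedContext = this;

        // A user-visible string hardcoded in Java instead of strings.xml is
        // another typical finding of Lint-style inspections.
        setTitle("Welcome to SUSI");
    }
}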

Setup

We can use Gradle to invoke the lint test with the following commands in the root directory of the project.

To set it up, we can add this code to our build.gradle file:

 lintOptions {
        lintConfig file("lint.xml")
    }

The lint.xml file referenced above will look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<lint>
   <!-- Changes the severity of these to "error" for getting to a warning-free build -->
   <issue id="UnusedResources" severity="error"/>
</lint>

To explicitly run the lint test, we can use the following commands.

On Windows

gradlew lint

On Mac or Linux

./gradlew lint

We can also run lint on specific build variants of the app, using commands such as

gradle lintDebug

  or

gradle lintRelease

The lint.xml configuration lists each issue along with its severity level; for example:

<?xml version="1.0" encoding="UTF-8"?>
<lint>
   <issue id="Invalid Package" severity="ignore" />
   <!-- All below are issues that have been brought to informational (so they are visible, but don't break the build) -->
   <issue id="GradleDependency" severity="informational" />    <issue id="Old TargetApi" severity="informational" />
</lint>

Testing on SUSI Android

After running the test on SUSI Android, we find the following errors.

As we can see, there are two errors and a lot of warnings. Though warnings are not that severe, we can definitely improve on them. Making a habit of testing your code with Lint will improve the performance of your app and make it faster.

The test provides a complete and detailed list of issues present in the project.

We can find the exact location as well as the cause of the error by going deeper into the directory like this.

We can see there is an error in the build.gradle file, which is due to different versions of libraries being present in the Gradle files, as all com.android.support libraries must be of the same version.

In this way we can test our code and find errors in it.


How to add a new Servlet/API to SUSI Server

Suppose you have a new feature to enhance SUSI.AI (in the web, Android or iOS application) but cannot find an API that would let you make the required calls to the server (since the principle of all SUSI.AI clients is to contact the SUSI server for every feature). Writing a servlet for SUSI is quite different from writing a normal Java servlet: the real working logic remains the same, but there are classes which let you focus directly on one thing, maintaining the flow of your feature. To find already implemented servlets, first clone the susi_server repository from here.

git clone https://github.com/fossasia/susi_server.git

cd into the susi_server directory, or open your terminal in the susi_server directory. (This post focuses on servlet development for SUSI only, so it is assumed that you have Java 8 installed properly.) If you have not gone through how to run susi_server manually, follow the steps below to start the server:

./gradlew build	   //some set of files and dependencies will be downloaded
bin/start.sh		   //command to start the server

This will start your SUSI server, and it will listen on port 4000.

The first step is to decide which class of API your servlet is going to be added to. Let us take a small example and proceed step by step by looking at the development of the ListSettingsService servlet (to find the code of this servlet, browse to susi_server -> src -> ai -> susi -> server -> api -> aaa). Once you have decided the classification of your servlet, create a .java file there (like we created the ListSettingsService.java file in the aaa folder). Extend the AbstractAPIHandler class and implement APIHandler in your class. If you are using an IDE like IntelliJ IDEA or Eclipse, it will show an error message and, when you click on it, ask you to override some methods. Select that option; if you are using a simple text editor, override the following methods as shown:

@Override
    public String getAPIPath() {
        return null;
    }
@Override
    public BaseUserRole getMinimalBaseUserRole() {
        return null;
    }
@Override
    public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
        return null;
    }
@Override
    public ServiceResponse serviceImpl(Query post, HttpServletResponse response, Authorization rights, JsonObjectWithDefault permissions) throws APIException {
        return null;
    }

What are all these methods for and why do we need them?

These are the four methods that make our work much easier. When the server compiles and registers the servlet, getAPIPath() is called first to evaluate the endpoint. Whenever this endpoint is called properly, it responds with whatever is defined in serviceImpl(). In our case we have given the endpoint

"/aaa/listSettings.json".

Ensure that you do not have two servlets with the same endpoint.

Next in line is the getMinimalBaseUserRole() method. While developing certain features, a special privilege (like an admin login) might be required. If you are implementing a feature for admins only (as we are doing in this servlet), return BaseUserRole.ADMIN. If you want to give access to anyone (registered or not), for example login, signup or a search endpoint, return BaseUserRole.ANONYMOUS. By default all these methods return null. Once you have decided what the servlet should respond with, encode it in the serviceImpl() method.

Look at the implementation of the servlet below:

@Override
    public String getAPIPath() {
        return "/aaa/listSettings.json";
}

@Override
    public BaseUserRole getMinimalBaseUserRole() {
        return BaseUserRole.ADMIN;
}

@Override
    public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
        return null;
}

@Override
    public ServiceResponse serviceImpl(Query post, HttpServletResponse response, Authorization rights, JsonObjectWithDefault permissions) throws APIException {

        String path = DAO.data_dir.getPath()+"/settings/";
        File settings = new File(path);
        String[] files = settings.list();
        JSONArray fileArray = new JSONArray(files);
        return new ServiceResponse(fileArray);
    }
}

As discussed earlier, the task of this servlet is to list all the files in the data/settings folder, but the list is only available to users with an admin login.

DAO.data_dir.getPath() returns a String which is the path to the data directory inside the susi_server folder. We append "/settings/" to reach the settings folder inside it. Next we list all the files present in the settings folder, wrap them in a JSONArray object and return that array in a ServiceResponse. With the server running locally, this servlet responds at http://localhost:4000/aaa/listSettings.json; since the minimal role is ADMIN, the request must come from a logged-in admin.

Think you can enhance Susi-server now? Get started right away!!


Making SUSI’s login experience easy


Every app should provide a smooth and user-friendly login experience, as it is the first point of contact between the app and the user. To make login easy in SUSI, email address auto-suggestion is used. With this feature, if the user has successfully logged in earlier, they can select their email from an autocomplete dropdown just by typing its first few letters. Thus one need not type the whole email address every time they log in.
Let’s see how to implement it.
AutoCompleteTextView is a subclass of EditText that displays a list of suggestions in a drop-down menu, from which the user can select a single suggestion or value. To use AutoCompleteTextView inside a TextInputLayout, the dependency on the latest version of the design support library should be added to the Gradle build file:

dependencies {
compile "com.android.support:design:$support_lib_version"
}

Next, in susi_android/app/src/main/res/layout/activity_login.xml, the AutoCompleteTextView is wrapped inside a TextInputLayout to provide an email input field to the user:

<android.support.design.widget.TextInputLayout
    android:id="@+id/email"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:errorEnabled="true">

    <AutoCompleteTextView
        android:id="@+id/email_input"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="@string/email"
        android:inputType="textEmailAddress"
        android:textColor="@color/edit_text_login_screen"
        android:textColorHint="@color/edit_text_login_screen" />

</android.support.design.widget.TextInputLayout>

In susi_android/app/src/main/java/org/fossasia/susi/ai/activities/LoginActivity.java the following import statements are added to import the required classes and collections:

import android.widget.ArrayAdapter;
import android.widget.AutoCompleteTextView;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Set;

The following code binds the AutoCompleteTextView using ButterKnife.

@BindView(R.id.email_input)
AutoCompleteTextView autoCompleteEmail;

To store every successfully logged-in email id, PrefManager is used inside the login response handler:

...
if (response.isSuccessful() && response.body() != null) {
Toast.makeText(LoginActivity.this, response.body().getMessage(), Toast.LENGTH_SHORT).show();
// Save email for autocompletion
savedEmails.add(email.getEditText().getText().toString());
PrefManager.putStringSet(Constant.SAVED_EMAIL,savedEmails);
}
...

Here Constant.SAVED_EMAIL is a string defined in Constants.java as

public static final String SAVED_EMAIL="saved_email";
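
PrefManager itself is SUSI Android's small wrapper around SharedPreferences. The actual implementation lives in the repository and may differ; a minimal sketch of the two helpers used here, assuming the wrapper holds a SharedPreferences instance initialised with the application context, could look like this:

import android.content.SharedPreferences;
import java.util.Set;

public class PrefManager {

    // Assumed to be initialised once with the application context elsewhere in the app
    private static SharedPreferences prefs;

    // Persist a set of strings, e.g. every email id that has logged in successfully
    public static void putStringSet(String key, Set<String> values) {
        prefs.edit().putStringSet(key, values).apply();
    }

    // Read the stored set back; returns null if nothing has been saved yet
    public static Set<String> getStringSet(String key) {
        return prefs.getStringSet(key, null);
    }
}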

Next, to specify the list of suggested emails to be displayed, the ArrayAdapter class is used. The setAdapter method sets the adapter of the autoCompleteTextView:

private Set<String> savedEmails = new HashSet<>();

if (PrefManager.getStringSet(Constant.SAVED_EMAIL) != null) {
    savedEmails.addAll(PrefManager.getStringSet(Constant.SAVED_EMAIL));
    autoCompleteEmail.setAdapter(new ArrayAdapter<>(this, android.R.layout.simple_list_item_1, new ArrayList<>(savedEmails)));
}

Then just test it: log in once, and from then on, every time you log in, type the first few letters and watch the email suggestions appear. So, next time you build an app with a login interface, do include an AutoCompleteTextView for hassle-free login.


Using Speech To Text Engine in Susi Android

SUSI is an intelligent chatbot that supports speech-to-text input. The user can talk to SUSI just as if talking to another person, and when the input is speech, SUSI replies with text-to-speech output, giving the user a seamless conversational experience.

To achieve speech-to-text input in SUSI Android, or any other Android application, we have the following options:

  1. Using Android inbuilt Speech to Text function.
  2. Using the Google Cloud Speech API.

We will talk about each of these.

Using Android inbuilt Speech to Text function.

Android provides an inbuilt way to convert speech into text; it is the easiest way to do the conversion.

This method uses the android.speech package and, specifically, the android.speech.RecognizerIntent class. Basically, we trigger an intent (android.speech.RecognizerIntent) which shows a dialog box to recognize the speech input. That activity then converts the speech into text and sends the result back to our calling Activity. Since we must listen for the resulting text, we have to invoke the intent with startActivityForResult().

The code snippet for this is:

private void promptSpeechInput() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT,
                getString(R.string.speech_prompt));
        try {
            startActivityForResult(intent, REQ_CODE_SPEECH_INPUT);
        } catch (ActivityNotFoundException a) {
            Toast.makeText(getApplicationContext(),
                    getString(R.string.speech_not_supported),
                    Toast.LENGTH_SHORT).show();
        }
    }

In the above code we can see that we put some extra information into the intent. This information is used by the speech-to-text engine to determine the language of the user. Thus, while invoking RecognizerIntent, we must provide the extra RecognizerIntent.EXTRA_LANGUAGE_MODEL; here we set it to LANGUAGE_MODEL_FREE_FORM and pass the device's default locale as the language.

Since the recognizer was started with startActivityForResult(), we receive a callback in onActivityResult(int requestCode, int resultCode, Intent data), the overridden method that handles the result. The RecognizerIntent converts the speech input to text and sends the result back as an ArrayList under the key RecognizerIntent.EXTRA_RESULTS. This list is generally ordered in descending order of speech recognizer confidence and is only present when RESULT_OK is returned in the activity result. We then simply set the recognized text on the text view (mVoiceInputTv in the snippet below) using setText(). The handler looks like this:

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);

        switch (requestCode) {
            case REQ_CODE_SPEECH_INPUT: {
                if (resultCode == RESULT_OK && null != data) {
                    ArrayList<String> result = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                    mVoiceInputTv.setText(result.get(0));
                }
                break;
            }
        }
    }

The screenshot of the implementation is

Using Google Cloud Speech API

The Google Cloud Speech API enables developers to convert speech to text in real time. It is used in Google Allo and Google Assistant. It is backed by powerful neural network and machine learning models, which makes it both efficient and fast. The API is capable of recognizing more than 80 languages. To find more details about the Google Cloud Speech API, refer to the official documentation at this link.

The Google Speech API is not free; usage is billed. To use this API, the developer has to sign up at the Google console to generate an API key. On enabling the Speech API, a JSON credentials file is created.

The whole implementation of the API can be found here.


How to keep crash records of SUSI.AI Android with Crashlytics

At this stage of development of SUSI.AI Android there are many changes, and at times this results in inconsistencies and crashes in the app. One important question we face is how to keep a record of crashes so that we can improve our app. Using Crashlytics is one way to keep a record of crashes, and the easiest way to add Crashlytics to an app is to integrate the Fabric plugin into Android Studio.

  • First create an account at Fabric.
  • When you create the account, it will send you a confirmation mail.
  • After clicking the link in the confirmation mail, you will be redirected to the Fabric page.
  • It shows you different platform options; select Android as the platform.

  • For Windows/Linux users, select Settings from the File menu. For Mac users, select Preferences from the menu.
  • Select Plugins, click the Browse repositories button and search for "Fabric for Android".
  • Click the Install plugin button to download and install the plugin.
  • You can now see the Fabric option on the right side. Click on it and enter your credentials to sign in.
  • Select the susi_android project and click Next.
  • Fabric will list all the organizations you registered, so select the organization you want to associate the app with and click Next. In my case the organization is susi.
  • Fabric will then list all of its kits. Select Crashlytics and click Next.
  • Click the Install button. It will add Crashlytics to the project.
  • Fabric wants to make changes to the MainApplication and AndroidManifest.xml files, so click the Apply button for the changes to happen.

  • Build and run your application to make sure that everything is configured properly. If your app was configured successfully, you will instantly get an email sent to the address you used to sign up with Fabric.
  • Now you can track the crashes of your app on the dashboard of your Fabric account.
  • It will give you details such as: 1) how many users are affected and how many times the app has crashed, with dates; 2) details of the devices on which the app crashed; 3) the cause of the errors. (A small sketch of how handled exceptions can also be logged from code follows this list.)
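
Besides catching uncaught crashes automatically, the Fabric Crashlytics SDK also lets you record handled (non-fatal) exceptions and breadcrumbs from code. The following is a minimal sketch, assuming the Crashlytics kit has been added by the plugin as described above; the loadSettingsSafely and riskyOperation methods are hypothetical and only illustrate the calls:

import android.app.Application;

import com.crashlytics.android.Crashlytics;

import io.fabric.sdk.android.Fabric;

public class MainApplication extends Application {

    @Override
    public void onCreate() {
        super.onCreate();
        // The Fabric plugin normally inserts this line for you: it starts the Crashlytics kit
        Fabric.with(this, new Crashlytics());
    }

    // Example of reporting a handled (non-fatal) exception so it shows up on the dashboard
    public void loadSettingsSafely() {
        try {
            riskyOperation(); // some work that might fail (hypothetical helper for this sketch)
        } catch (Exception e) {
            Crashlytics.log("loadSettingsSafely failed");   // breadcrumb attached to the report
            Crashlytics.logException(e);                    // records the exception as a non-fatal
        }
    }

    private void riskyOperation() throws Exception {
        // placeholder used only for this sketch
    }
}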

For more information use these links:

https://fabric.io/home

https://fabric.io/kits/android/crashlytics


How to teach SUSI.AI skills using external API’s

A powerful feature of SUSI is that it can use external APIs to learn new skills. The generic syntax used here is:

Question string
!console:constant answer string + answer variable
{
 "url" : "API to be called",
 "path" : "path from where answer will be fetched"
}
eol

I will try to explain this syntax with the help of some useful examples. Let’s start with a very basic example:

I want SUSI to be able to answer questions like “What is the date today?”.

Let's try to tackle this step by step. As we can infer from the syntax written above, to teach SUSI a skill involving an external API call, we need to be clear about five things, namely:

  1. The question string, i.e. "What is the date today?" (in this case).
  2. The constant answer string, i.e. "The date today is ".
  3. The API to be called, i.e. "http://calapi.inadiutorium.cz/api/v0/en/calendars/default/today".
  4. The path which contains our answer.

When we visit this API url, we get the result as follows:

{
  "date":"2017-05-16",
  "season":"easter",
  "season_week":5,
  "celebrations":[
    {
      "title":"",
      "colour":"white",
      "rank":"ferial",
      "rank_num":3.13
    }
  ],
  "weekday":"tuesday"
}

The whole JSON object is represented by the '$' sign. As date is a property of this object, it can be accessed with "$.date"; this string is referred to as the path.

  5. The last one is the answer variable.

We can see that the result of the API URL contains many "key:value" pairs. The answer variable is the value of the last key (i.e. date) referred to in the path string. This value is stored in a variable named $object$.

So our answer variable turns out to be $object$.   

Now, as we have all the five things ready with us, we can make our SUSI skill:

What is the date today?
!console:$object$
{
  "url" : "http://calapi.inadiutorium.cz/api/v0/en/calendars/default/today",
  "path" : "$.date"
}
eol

Kudos! But where do we feed this skill and check whether the SUSI chat bot answers "What is the date today?" appropriately?

To test the working of a skill:

  1. Open dream.asksusi.com, write whatever name you like for the pad and then click OK.

  2. Replace the data written on your pad with the skill code you created. You don’t need to save it, it is saved automatically. Now your page should look something like this:
  3. To check if this skill is working properly:
  • Visit SUSI chat bot.
  • In the textbox below, write dream followed by the name of your pad and then press Enter key. SUSI will reply with “dreaming enabled for YOUR-PAD-NAME”.
  • Now write the question string i.e. What is the date today? and you should be shown today’s date!

For more clarity, refer to this image:


Great that you made it! You can now contribute skills by making a PR to this repository and see those skills live on SUSI without enabling any dream! Just ask your question and get your own skilled answers.

Let's learn more about skills by introducing some changes to this question and going through a few variations of it:

  • We want SUSI to give the same answer when we ask "What is the date today?" or "today's date". To achieve this we can use the '|' symbol when writing our question.

The new syntax of our skill will be:

What is the date today? | today's date?
!console:$object$
{
  "url" : "http://calapi.inadiutorium.cz/api/v0/en/calendars/default/today",
  "path" : "$.date"
}
eol
  • We want SUSI to answer according to the question, i.e. to answer all questions like today's date?, tomorrow's date? or yesterday's date?

The new syntax of our skill will be:

*'s date?
!console:$object$
{
  "url" : "http://calapi.inadiutorium.cz/api/v0/en/calendars/default/$1$",
  "path" : "$.date"
}
eol

Here * acts as a wildcard character. That means * will be "today" in "today's date" and "tomorrow" in "tomorrow's date". $1$ is the variable which stores the value of *.

Let’s dive into more examples:

  1. Sometimes we may need 2 wildcard characters in our question:
* plot of * | * summary of *
!console:$object$
{
  "url":"http://api.tvmaze.com/singlesearch/shows?q=$2$",
  "path":"$.summary"
}
eol   

The API used above tells the plot of a TV show. We need to query this API with the name of the show.

For questions like "Tell me the plot of Game of Thrones" or "What is the plot of Game of Thrones", we want to ignore the string before "plot of" and store the string after it. The stored string can then be used to query the API.

The variables storing the values of the * wildcards are numbered: the value of the first * in the question is stored in $1$, the value of the second in $2$, and so on.

Now the above-written skill should make sense to everyone. Let’s see the skill in action:

 

  2. What if we want two answers from the same API? Consider this case: we have a public API to check the details of a space agency. We need the abbreviation of the space agency and append it to the API URL.

For example, when we visit https://launchlibrary.net/1.2/agency/ISRO, we get the following as output:

We want SUSI to answer the full form of a space agency along with its country code.

The skill used for it:

what is the full form of * and its country code?
!console:Full form - $name$, Country code - $countryCode$
{
  "url":"https://launchlibrary.net/1.2/agency/$1$",
  "path":"$.agencies[0]"
}
eol

How does this skill work?

Let's break down the path variable and check what it leads to. The '$' fetches the whole object.

Further, "$.agencies[0]" will fetch this:

{
  "Id":31,
  "name":"Indian Space  Research Organization",
  "countryCode":"IND",
  "abbrev":"ISRO",
  "Type":1,
  "infoURL":"http:\/\/www.isro.org\/",
 "wikiURL":"http:\/\/en.wikipedia.org\/wiki\/Indian_Space_Research_Organiation",
  "infoURLs":["http:\/\/www.isro.org\/"]
}

To fetch the value of any of the keys, we can use the key name enclosed in '$' signs, i.e. $KEY_NAME$. The value of that key will be automatically stored in this variable.

Hence we use $name$ and $countryCode$ in our skill, to get the required answer.

The skill in action:

In the same way we can use other APIs and contribute new skills to SUSI. To help you get started, see the public APIs repository available here. As said before, you can contribute skills by making a PR to this repository and see those skills live on SUSI!


How to add the Google Books API to SUSI AI

SUSI.AI is an open-source personal assistant, and you can easily add new skills to SUSI. In this blog post I'm going to add Google's Books API to SUSI as a skill. A complete tutorial on SUSI.AI skills is in the repository; check out Tutorial Level 11: Call an external API here and you will understand how we can integrate an external API with SUSI.AI.

To start adding the book skill to SUSI.AI, first go to this URL http://dream.susi.ai/, give a name in the text field and press OK.

 

Copy and paste the skill code (shown further below) into the newly opened etherpad.

Go to http://chat.susi.ai to test the new skill.

Type "dream blogpost" in the chat and press Enter. Now we can use the skills we added to the etherpad.

To understand Google's Books API, use this URL. Your request URL should look like this:

https://www.googleapis.com/books/v1/volumes?q=BOOKNAME&key=yourAPIKey

 

You should replace yourAPIKey with your own API key.

To get started you first need to get an API key.

Go to this URL, click the GET A KEY button at the top right, and select "Create a new project".

Add a name for the project and click the "CREATE AND ENABLE API" button.

Copy your API key and replace the API key part of the request URL with it.

Paste the request URL into your browser's address bar, replace the BOOKNAME part with "flower" and go to the URL. It will return the following JSON.

We need to get the full title of the book, which is inside the items array. To do that we have to go through this hierarchy:
items array > first item > volumeInfo > title
Go to the etherpad we made before and paste the following code.


is there any book called * ?
!console:did you mean "$title$" ? Here is a link to read more: $infoLink$
{
"url":"https://www.googleapis.com/books/v1/volumes?q=$1$&key=AIzaSyCt3Wop5gN3S5H0r1CKZlXIgaM908oVDls",
"path":"$.items[0].volumeInfo"
}
eol

The first line of the code, "is there any book called *?", is the question the user asks. The * is the variable part of the question; it can be referred to in the code as $1$. If there are more variable parts, we can add multiple asterisks and refer to them by the corresponding numbers, e.g. $1$, $2$, $3$.
  • In this code, "path" : "$.items[0].volumeInfo"
  • $ represents the full JSON result.
  • items[0] gets the first element.
  • .volumeInfo refers to the volumeInfo object.
The line !console:did you mean "$title$" ? Here is a link to read more: $infoLink$ produces the output.
  • $title$ refers to the "title" field of the data selected by "path".
  • $infoLink$ gives a link to read more details.

Now go to the chat UI and type "dream blogpost" again. After it shows "dreaming enabled", type in "is there any book called world war?". It will result in the following.

This is a simple way to add any service to SUSI as a skill.


Adding Send Button in SUSI.AI webchat

Our SUSI.AI web chat app is improving day by day. One such day it looked like this: 

It replied to queries and had all the basic functionality, but something was missing: when viewed on mobile, we realised that it should have a send button.

Send buttons actually make chat apps look cool and give them their complete look.

A method was defined in the MessageComposer component of the React app, which takes the current value of the textarea and passes it on to create the message.

Method:

_onClickButton(){
     let text = this.state.text.trim();
     if (text) {
       Actions.createMessage(text, this.props.threadID);
     }
     this.setState({text: ''});
   }

Now this method is called in the onClick action of our send button, which is included in the div rendered by the MessageComposer component.

This method is also called when the ENTER key is tapped on the keyboard. That implementation has also been done and can be seen here.

Why wrap the textarea and button in a div and not render them as two independent items?

Well, in React a component can only render a single root element, so wrapping them in a div is our option here.

Now that we had our functionality running, it was time for styling.

Our team chose to use http://www.material-ui.com/ and its components for styling.

We chose a FloatingActionButton as the send button.

Now, to use Material-UI components in our component, several imports were needed. To enable these features we also had to change our render-to-DOM code to:

import MuiThemeProvider from 'material-ui/styles/MuiThemeProvider';
 
 const App = () => (
   <MuiThemeProvider>
     <ChatApp />
   </MuiThemeProvider>
 );
 
 ReactDOM.render(
   <App /> ,
   document.getElementById('root')
 );

The imports in our MessageComposer looked like this:

import Send from 'material-ui/svg-icons/content/send';
import FloatingActionButton from 'material-ui/FloatingActionButton';
import injectTapEventPlugin from 'react-tap-event-plugin';
 injectTapEventPlugin();

injectTapEventPlugin is a very important method: in order to have event handlers on our send button, we need to call it, and the handler for the click event is then known as onTouchTap.

The JSX code which was to be rendered looked like this:

<div className="message-composer">
         <textarea
           name="message"
           value={this.state.text}
           onChange={this._onChange.bind(this)}
           onKeyDown={this._onKeyDown.bind(this)}
           ref={(textarea)=> { this.nameInput = textarea; }}
           placeholder="Type a message..."
         />
         <FloatingActionButton
           backgroundColor=' #607D8B'
           onTouchTap={this._onClickButton.bind(this)}
           style={style}>
           <Send />
         </FloatingActionButton>
       </div>

Styling for the button was done separately and looked like this:

const style = {
     mini: true,
     top: '1px',
     right: '5px',
     position: 'absolute',
 };

Ultimately, after implementing all of this successfully, our SUSI.AI web chat had a good-looking floating action send button.

This can be tested here.


Map Support for SUSI Webchat

The SUSI chat client now supports map tiles for queries related to location. SUSI responds with an interactive inline map tile with the location pointed out by a marker. It also provides a link to OpenStreetMap, where you can get the whole view of the location using the zoom options provided, and it gives the population count for that location.

Let's visit SUSI Web Chat and try it out.

Query : Where is london
Response :

Implementation:

How do we know that a map tile is to be rendered?
The actions in the API response tell the client what to render. The client loops through the actions array and renders the response for each action accordingly.

"actions": [
  {
    "type": "answer",
    "expression": "City of London is a place with a population of 7556900.             Here is a map: https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228"
  },
  {
    "type": "anchor",
    "link":    "https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228",
    "text": "Link to Openstreetmap: City of London"
  },
  {
    "type": "map",
    "latitude": "51.51279067225417",
    "longitude": "-0.09184009399817228",
    "zoom": "13"
  }
]

Note: The API response has been trimmed to show only the relevant content.

The first action element is of type answer so the client renders the text response, ‘City of London is a place with a population of 7556900. Here is a map: https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228

The second action element is of type anchor with the text to display and the link to hyperlink specified by the text and link attributes, so the client renders the text `Link to Openstreetmap: City of London`, hyperlinked to “https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228”.

Finally, the third action element is of type map. Latitude, Longitude and zoom level information are also  specified using latitude, longitude and zoom attributes. The client renders a map using these attributes.

I used the react-leaflet module to render the interactive map tiles.

To integrate it into our project and set the required style for the map tiles, we need to load Leaflet’s CSS style sheet and we also need to include height and width for the map component. 

<link rel="stylesheet"  href="http://cdn.leafletjs.com/leaflet/v0.7.7/leaflet.css" />
.leaflet-container {
  height: 150px;
  width: 80%;
  margin: 0 auto;
}
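
When the client loops over the actions array and finds an action of type map, it reads the latitude, longitude and zoom values from that action and builds a map component for the message: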
case 'map': {

  let lat = parseFloat(data.answers[0].actions[index].latitude);
  let lng = parseFloat(data.answers[0].actions[index].longitude);
  let zoom = parseFloat(data.answers[0].actions[index].zoom);
  let mymap = drawMap(lat,lng,zoom);

  listItems.push(
    <li className='message-list-item' key={action+index}>
      <section className={messageContainerClasses}>
        {mymap}
        <p className='message-time'>
          {message.date.toLocaleTimeString()}
        </p>
      </section>
    </li>
  );

  break;
}
import { divIcon } from 'leaflet';
import { Map, Marker, Popup, TileLayer } from 'react-leaflet';


// Draw a Map

function drawMap(lat,lng,zoom){

  let position = [lat, lng];

  const icon = divIcon({
    className: 'map-marker-icon',
    iconSize: [35, 35]
    });

  const map = (
    <Map center={position} zoom={zoom}>
      <TileLayer
      attribution=''
      url='http://{s}.tile.osm.org/{z}/{x}/{y}.png'
      />
      <ExtendedMarker position={position} icon={icon}>
        <Popup>
          <span><strong>Hello!</strong> <br/> I am here.</span>
        </Popup>
      </ExtendedMarker>
    </Map>
  );

return map;

}

Here, I used a custom marker icon because the default icon provided by leaflet had an issue and was not being rendered. I used divIcon from leaflet to create a custom map marker icon.

When the map tile is rendered, we see a Popup message at the marker. The extended marker class is used to keep the Popup open initially.

class ExtendedMarker extends Marker {
  componentDidMount() {
    super.componentDidMount();
    this.leafletElement.openPopup();
  }
}


The function drawMap returns a Map tile component which is rendered and we have our interactive map!


Hyperlinking Support for SUSI Webchat

SUSI responses can contain links or email ids. When the user wants to open those links or send a mail to those email ids, having to manually copy the link and check out the contents is very inconvenient and bad UX.

I used a module called 'react-linkify' to address this issue.
'React-linkify' is a React component that parses links (URLs, emails, etc.) in text into clickable links.

Usage:

<Linkify>{text to linkify}</Linkify>

Any link that appears inside the Linkify component is hyperlinked and made clickable. It uses regular expressions and pattern matching to detect URLs and mail ids; clicking a URL opens the link in a new window, and clicking a mail id opens a "mailto:" link.

Code:

export const parseAndReplace = (text) => {return <Linkify properties={{target:"_blank"}}>{text}</Linkify>;}

Let's visit SUSI Web Chat and try it out.

Query: search internet

Response: Internet The global system of interconnected computer networks that use the Internet protocol suite to… https://duckduckgo.com/Internet

The link has been parsed from the response text and has been successfully hyperlinked. Clicking the links opens the respective URL in a new window.
