YouTube videos in the SUSI iOS Client

The iOS and Android clients already have the functionality to play videos based on user queries. To implement video playback in the iOS client, we use the YouTube Data API v3. The task here was to create the UI/UX for playing videos within the app. An API call is made first to fetch YouTube videos matching the query, and the video ID of the first result is extracted and used to play the video.

The API endpoint for the YouTube Data API looks like:

https://www.googleapis.com/youtube/v3/search?part=snippet&q={query}&key={your_api_key}

Using this we get the following result (only the first item is shown, since the full response is too long):

Path: $.items[0]

{
  "kind": "youtube#searchResult",
  "etag": "\"m2yskBQFythfE4irbTIeOgYYfBU/oR-eA572vNoma1XIhrbsFTotfTY\"",
  "id": {
    "kind": "youtube#channel",
    "channelId": "UCQprMsG-raCIMlBudm20iLQ"
  },
  "snippet": {
    "publishedAt": "2015-01-01T11:06:00.000Z",
    "channelId": "UCQprMsG-raCIMlBudm20iLQ",
    "title": "FOSSASIA",
    "description": "FOSSASIA is supporting the development of Free and Open Source technologies for social change in Asia. The annual FOSSASIA Summit brings together ...",
    "thumbnails": {
      "default": {
        "url": "https://yt3.ggpht.com/-CP18cWbo34A/AAAAAAAAAAI/AAAAAAAAAAA/kEmIgO8OjCk/s88-c-k-no-mo-rj-c0xffffff/photo.jpg"
      },
      "medium": {
        "url": "https://yt3.ggpht.com/-CP18cWbo34A/AAAAAAAAAAI/AAAAAAAAAAA/kEmIgO8OjCk/s240-c-k-no-mo-rj-c0xffffff/photo.jpg"
      },
      "high": {
        "url": "https://yt3.ggpht.com/-CP18cWbo34A/AAAAAAAAAAI/AAAAAAAAAAA/kEmIgO8OjCk/s240-c-k-no-mo-rj-c0xffffff/photo.jpg"
      }
    },
    "channelTitle": "FOSSASIA",
    "liveBroadcastContent": "upcoming"
  }
}

We parse the above object to grab the videoID for the query, using the code below:

// Parse the "items" array and extract the video ID of the first result
if let itemsObject = response[Client.YoutubeResponseKeys.Items] as? [[String : AnyObject]] {
    if let items = itemsObject[0][Client.YoutubeResponseKeys.ID] as? [String : AnyObject] {
        let videoID = items[Client.YoutubeResponseKeys.VideoID] as? String
        completion(videoID, true, nil)
    }
}

This videoID is returned to the controller that called the method.
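
For context, here is a minimal sketch of how such a client method could be wired up with URLSession. The Client.YoutubeResponseKeys names and the completion signature come from the snippet above; the method name, the apiKey constant and the networking details are assumptions, not the app's actual implementation.

// Sketch: fetch the first matching video ID for a query (apiKey is assumed to exist)
func searchYoutubeVideos(_ query: String, _ completion: @escaping (String?, Bool, String?) -> Void) {
    let encodedQuery = query.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed) ?? query
    guard let url = URL(string: "https://www.googleapis.com/youtube/v3/search?part=snippet&q=\(encodedQuery)&key=\(apiKey)") else {
        completion(nil, false, "Invalid URL")
        return
    }
    URLSession.shared.dataTask(with: url) { data, _, error in
        guard let data = data, error == nil,
            let response = (try? JSONSerialization.jsonObject(with: data)) as? [String : AnyObject] else {
            completion(nil, false, error?.localizedDescription)
            return
        }
        // Same parsing as above: items[0].id.videoId
        if let itemsObject = response[Client.YoutubeResponseKeys.Items] as? [[String : AnyObject]],
            let items = itemsObject.first?[Client.YoutubeResponseKeys.ID] as? [String : AnyObject] {
            completion(items[Client.YoutubeResponseKeys.VideoID] as? String, true, nil)
        } else {
            completion(nil, false, "No video found")
        }
    }.resume()
}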

Now we design the UI. First of all, we need a view on which the YouTube video will be played; tapping this view will also dismiss the video.

First, we add the blackView to the entire screen.

// declaration
let blackView = UIView()

// Add backgroundView
func addBackgroundView() {
    if let window = UIApplication.shared.keyWindow {
        self.view.addSubview(blackView)

        // Cover the entire screen
        blackView.frame = window.frame
        blackView.backgroundColor = UIColor(white: 0, alpha: 0.5)

        // Tap anywhere on the dimmed background to dismiss the player
        blackView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleDismiss)))
    }
}

func handleDismiss() {
    // Fade the dimmed background out, then remove it; removal itself is not animatable
    UIView.animate(withDuration: 0.5, delay: 0, usingSpringWithDamping: 1, initialSpringVelocity: 1, options: .curveEaseOut, animations: {
        self.blackView.alpha = 0
    }, completion: { _ in
        self.blackView.removeFromSuperview()
    })
}

Next, we add the YouTube player view. For this we use the `YoutubePlayer` pod. Since it showed a few warnings and failed to play some videos, I had to fix the original pod and use my own customized version (available here).

// Youtube Player
lazy var youtubePlayer: YouTubePlayerView = {
    let frame = CGRect(x: 0, y: 0, width: self.view.frame.width - 16, height: self.view.frame.height * 1 / 3)
    let player = YouTubePlayerView(frame: frame)
    return player
}()

// Shows Youtube Player

func addYoutubePlayer(_ videoID: String) {
    if let window = UIApplication.shared.keyWindow {
        // Add the player view on top of blackView
        self.blackView.addSubview(self.youtubePlayer)

        // Calculate and set the frame
        let centerX = UIScreen.main.bounds.size.width / 2
        let centerY = UIScreen.main.bounds.size.height / 3
        self.youtubePlayer.center = CGPoint(x: centerX, y: centerY)

        // Load the player using the video ID
        self.youtubePlayer.loadVideoID(videoID)

        // Fade both views in
        blackView.alpha = 0
        youtubePlayer.alpha = 0
        UIView.animate(withDuration: 0.5, delay: 0, usingSpringWithDamping: 1, initialSpringVelocity: 1, options: .curveEaseOut, animations: {
            self.blackView.alpha = 1
            self.youtubePlayer.alpha = 1
        }, completion: nil)
    }
}

The UI is now set up; all that is left is to call the API from the client and, after getting the `videoID`, call the method above with it. Before calling, we check whether the query contains a play action; if it does, we make the API call and add the player.

if let text = inputTextField.text {
    if text.contains("play") || text.contains("Play") {
        let query = text.replacingOccurrences(of: "play", with: "").replacingOccurrences(of: "Play", with: "")
        Client.sharedInstance.searchYoutubeVideos(query) { (videoID, _, _) in
            DispatchQueue.main.async {
                if let videoID = videoID {
                    self.addYoutubePlayer(videoID)
                }
            }
        }
    }
}

We are all set now! Below is the output for the YouTube player:


Websearch and Link Preview support in SUSI iOS

The SUSI.AI server responds to API calls with answers to the queries made. These answers might contain an action, for example a web search, where the client needs to make a web search request to fetch web pages matching the query. Thus, we need to add a link preview in the iOS client for each such page, extracting and displaying the title, description and main image of the webpage.

First we make the API call, passing the user's text as the query parameter, and read the result.

API Call:

http://api.susi.ai/susi/chat.json?timezoneOffset=-330&q=amazon

And get the following result:

{
  "query": "amazon",
  "count": 1,
  "client_id": "aG9zdF8xMDguMTYyLjI0Ni43OQ==",
  "query_date": "2017-06-02T14:34:15.675Z",
  "answers": [{
    "data": [{
      "0": "amazon",
      "1": "amazon",
      "timezoneOffset": "-330"
    }],
    "metadata": {
      "count": 1
    },
    "actions": [{
      "type": "answer",
      "expression": "I don't know how to answer this. Here is a web search result:"
    },
    {
      "type": "websearch",
      "query": "amazon"
    }]
  }],
  "answer_date": "2017-06-02T14:34:15.773Z",
  "answer_time": 98,
  "language": "en",
  "session": {
    "identity": {
      "type": "host",
      "name": "108.162.246.79",
      "anonymous": true
    }
  }
}

After parsing this response, we first recognise the type of action that needs to be performed; here we get `websearch`, which means we need to make a web search for the query. For that we use DuckDuckGo's API.
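
As a rough sketch, picking the websearch action out of the parsed chat.json payload could look like this. The key strings mirror the JSON above; the dictionary types and the surrounding parsing code are assumptions.

// Walk answers[0].actions and react to a websearch action
if let answers = json["answers"] as? [[String : AnyObject]],
    let actions = answers.first?["actions"] as? [[String : AnyObject]] {
    for action in actions {
        if let type = action["type"] as? String, type == "websearch",
            let query = action["query"] as? String {
            // Kick off the DuckDuckGo request for `query` here
            print("websearch action for query: \(query)")
        }
    }
}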

API Call to DuckDuckGo:

http://api.duckduckgo.com/?q=amazon&format=json

I am adding just the first object of the relevant data, since the full API response is too long.

Path: $.RelatedTopics[0]

{
  "Result": "<a href=\"https://duckduckgo.com/Amazon.com\">Amazon.com</a>Amazon.com, also called Amazon, is an American electronic commerce and cloud computing company...",
  "Icon": {
    "URL": "https://duckduckgo.com/i/d404ba24.png",
    "Height": "",
    "Width": ""
  },
  "FirstURL": "https://duckduckgo.com/Amazon.com",
  "Text": "Amazon.com Amazon.com, also called Amazon, is an American electronic commerce and cloud computing company..."
}

For the link preview we need an image, a URL and a description, so we use the `Icon.URL` and `Text` keys. We have our own class to parse this data into an object.

class WebsearchResult: NSObject {
   
   var image: String = "no-image"
   var info: String = "No data found"
   var url: String = "https://duckduckgo.com/"
   var query: String = ""
   
   init(dictionary: [String:AnyObject]) {
       
       if let relatedTopics = dictionary[Client.WebsearchKeys.RelatedTopics] as? [[String : AnyObject]] {
           
            if let icon = relatedTopics[0][Client.WebsearchKeys.Icon] as? [String : String] {
                // The image link lives under the "URL" key of the Icon object
                if let image = icon["URL"] {
                    self.image = image
                }
            }
           
           if let url = relatedTopics[0][Client.WebsearchKeys.FirstURL] as? String {
               self.url = url
           }
           
           if let info = relatedTopics[0][Client.WebsearchKeys.Text] as? String {
               self.info = info
           }
           
           if let query = dictionary[Client.WebsearchKeys.Heading] as? String {
               let string = query.lowercased().replacingOccurrences(of: " ", with: "+")
               self.query = string
           }   
       }   
   }   
}

We now have the data; the only thing left is to display it in the UI.

Within the chat bubble, we need to add a container view which will contain the image and the text description.

let websearchContentView = UIView()

let searchImageView: UIImageView = {
    let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
    imageView.contentMode = .scaleAspectFit

    // Placeholder image assigned
    imageView.image = UIImage(named: "no-image")
    return imageView
}()

let websiteText: UILabel = {
    let label = UILabel()
    label.textColor = .white
    return label
}()

func addLinkPreview(_ frame: CGRect) {
    textBubbleView.addSubview(websearchContentView)
    websearchContentView.backgroundColor = .lightGray
    websearchContentView.frame = frame

    websearchContentView.addSubview(searchImageView)
    websearchContentView.addSubview(websiteText)

    // Add constraints in UI
    websearchContentView.addConstraintsWithFormat(format: "H:|-4-[v0(44)]-4-[v1]-4-|", views: searchImageView, websiteText)
    websearchContentView.addConstraintsWithFormat(format: "V:|-4-[v0]-4-|", views: searchImageView)
    websearchContentView.addConstraintsWithFormat(format: "V:|-4-[v0(44)]-4-|", views: websiteText)
}

Next, in the collection view, while checking the other action types, we add a check for `websearch`, call the API there, then compute the frame sizes and call the `addLinkPreview` function.

else if message.responseType == Message.ResponseTypes.websearch {
    let params = [
        Client.WebsearchKeys.Query: message.query!,
        Client.WebsearchKeys.Format: "json"
    ]

    Client.sharedInstance.websearch(params, { (results, success, error) in
        if success {
            cell.message?.websearchData = results
            message.websearchData = results
            self.collectionView?.reloadData()
            self.scrollToLast()
        } else {
            print(error)
        }
    })

    cell.messageTextView.frame = CGRect(x: 16, y: 0, width: estimatedFrame.width + 16, height: estimatedFrame.height + 30)
    cell.textBubbleView.frame = CGRect(x: 4, y: -4, width: estimatedFrame.width + 16 + 8 + 16, height: estimatedFrame.height + 20 + 6 + 64)

    let frame = CGRect(x: 16, y: estimatedFrame.height + 20, width: estimatedFrame.width + 16 - 4, height: 60 - 8)
    cell.addLinkPreview(frame)
}

Finally, we set the collection view cell's size:

else if message.responseType == Message.ResponseTypes.websearch {
  return CGSize(width: view.frame.width, height: estimatedFrame.height + 20 + 64)
}

And we are done 🙂

Here is how the final version looks on the device:

 


Setup Lint Testing in SUSI Android

Developers tend to make mistakes while writing code, and even small mistakes can negatively impact the overall functionality and speed of an app. That is why it is important to understand lint testing in Android.

Android Lint is a tool in Android Studio that scans for and reports many types of bugs in the code; it also finds typos and security issues in the app. Each issue is reported with a severity level, allowing the developer to fix issues based on priority and the level of damage they can cause. It is easy to use and can significantly improve the quality of your code.

Effect of Lint testing on Speed of the Android App

Lint testing can significantly improve the speed of an app in the following ways:

  1. Android Lint tests help remove declaration redundancy in the code, so Gradle does not need to bind the same object again and again, which helps improve speed.
  2. Lint tests help find bugs related to the class structure in different activities of the application, which is necessary to avoid memory leaks.
  3. Lint testing also tells the developer about the size of the resources used, for example drawable resources, which sometimes take up a large chunk of memory in the application. Cleaning these resources or replacing them with lightweight drawables helps increase the speed of the app.
  4. Overall, lint testing helps remove typos, unused import statements and redundant strings, refactoring the whole code and increasing stability and speed.

Setup

We can use Gradle to invoke the lint test with the following commands in the root directory of the project.

To set it up, we add this code to our build.gradle file:

 lintOptions {
        lintConfig file("lint.xml")
    }
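
For context, this block sits inside the android section of the module's build.gradle. A slightly fuller sketch is below; the abortOnError and checkAllWarnings flags are optional extras, not part of the original setup.

android {
    lintOptions {
        // Use the project's own severity overrides
        lintConfig file("lint.xml")
        // Report lint errors but do not fail the build on them
        abortOnError false
        // Surface every warning lint knows about
        checkAllWarnings true
    }
}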

The lint.xml generated will look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<lint>
   <!-- Changes the severity of these to "error" for getting to a warning-free build -->
   <issue id="UnusedResources" severity="error"/>
</lint>

To explicitly run the test, we can use the following commands.

On Windows

gradlew lint

On Mac

./gradlew lint

We can also run the lint test on specific build variants of the app, using commands such as:

gradle lintDebug

or

gradle lintRelease

The XML file lists the issues along with their severity levels:

<?xml version="1.0" encoding="UTF-8"?>
<lint>
   <issue id="InvalidPackage" severity="ignore" />
   <!-- All below are issues that have been brought to informational (so they are visible, but don't break the build) -->
   <issue id="GradleDependency" severity="informational" />
   <issue id="OldTargetApi" severity="informational" />
</lint>

Testing on SUSI Android

After running the lint test on SUSI Android, we found the following issues.

As we can see, there are two errors and a lot of warnings. Though warnings are not that severe, we can definitely improve on them. Making a habit of testing your code with lint will improve the performance of your app and make it faster.

The test provides a complete and detailed list of issues present in the project.

We can find the exact location as well as the cause of an error by drilling deeper into the report, like this:

We can see there is an error in the build.gradle file, caused by different versions of libraries in the Gradle files: the com.android.support libraries must all be the same version.

In this way we can test our code and find the errors in it.


How to add a new Servlet/API to SUSI Server

Suppose you have a new feature to enhance SUSI.AI (in the web, Android or iOS application) but cannot find an API that would let you make the required calls to the server (since the principle of all SUSI.AI clients is to contact the SUSI server for every feature). Writing servlets for SUSI is quite different from writing a normal Java servlet. The underlying logic remains the same, but there are classes that let you focus on one thing only: the flow of your feature. To find the already implemented servlets, first clone the susi_server repository from here.

git clone https://github.com/fossasia/susi_server.git

cd into the susi_server directory, or open a terminal there. (This blog focuses on servlet development for SUSI only, so it is assumed that you have Java 8 installed properly.) If you have not yet run the SUSI server manually, follow the steps below to start it:

./gradlew build	   //some set of files and dependencies will be downloaded
bin/start.sh		   //command to start the server

This will start your SUSI server, and it will listen on port 4000.

The first step is to decide which class of API your servlet belongs to. Let us take a small example and proceed step by step, looking at the development of the ListSettingsService servlet. (To find the code of this servlet, browse to susi_server->src->ai->susi->server->api->aaa.) Once you have decided on the classification of your servlet, create a .java file there (like the ListSettingsService.java file in the aaa folder). Extend the AbstractAPIHandler class and implement APIHandler in your class. If you are using an IDE like IntelliJ IDEA or Eclipse, it will show an error and offer to override some methods; select that option. If you are using a plain text editor, override the following methods as shown:

@Override
public String getAPIPath() {
    return null;
}

@Override
public BaseUserRole getMinimalBaseUserRole() {
    return null;
}

@Override
public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
    return null;
}

@Override
public ServiceResponse serviceImpl(Query post, HttpServletResponse response, Authorization rights, JsonObjectWithDefault permissions) throws APIException {
    return null;
}

What are all these methods for, and why do we need them?

These are the four methods that make our work much easier. First, getAPIPath() is called to evaluate the endpoint. Whenever this endpoint is called properly, it responds with whatever is defined in serviceImpl(). In our case the endpoint is

"/aaa/listSettings.json".

Ensure that no two servlets share the same endpoint.

Next in line is the getMinimalBaseUserRole() method. Certain features require a special privilege, like an admin login. If you are implementing a feature for admins only (as we are in this servlet), return BaseUserRole.ADMIN. If you want to give access to anyone, registered or not (for example login, signup or a search endpoint), return BaseUserRole.ANONYMOUS. By default all these methods return null. Once you have decided what to return, implement the response in the serviceImpl() method.

Look at the implementation of the servlet below:

@Override
public String getAPIPath() {
    return "/aaa/listSettings.json";
}

@Override
public BaseUserRole getMinimalBaseUserRole() {
    return BaseUserRole.ADMIN;
}

@Override
public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
    return null;
}

@Override
public ServiceResponse serviceImpl(Query post, HttpServletResponse response, Authorization rights, JsonObjectWithDefault permissions) throws APIException {

    String path = DAO.data_dir.getPath() + "/settings/";
    File settings = new File(path);
    String[] files = settings.list();
    JSONArray fileArray = new JSONArray(files);
    return new ServiceResponse(fileArray);
}

As discussed earlier, the task of this servlet is to list all the files in the data/settings folder, but the list is available only to users with an admin login.

DAO.data_dir.getPath() returns a String identifier, which is the path to the data directory inside the susi_server folder. We append "/settings/" to access the settings folder inside it. Next we list all the files present in the settings folder, encode them as a JSONArray object and return it.
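
To try the endpoint, here is a quick sketch of a call against a locally running server on port 4000 (the access_token parameter name follows SUSI's other authenticated endpoints, and the token value is a placeholder):

curl "http://localhost:4000/aaa/listSettings.json?access_token=YOUR_ADMIN_ACCESS_TOKEN"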

Think you can enhance SUSI server now? Get started right away!


Making SUSI’s login experience easy


Every app should provide a smooth and user-friendly login experience, as it is the first point of contact between the app and the user. To make login easy in SUSI, email-address auto-suggestion is used. With this feature the user can pick their email from an autocomplete dropdown, provided they have successfully logged in earlier, just by typing the first few letters of the address. Thus one need not write the whole email address every time to log in.
Let's see how to implement it.
AutoCompleteTextView is a subclass of EditText that displays a list of suggestions in a dropdown menu, from which the user can select one value. To use AutoCompleteTextView, the latest version of the design library should be added as a dependency to the Gradle build file.

dependencies {
compile "com.android.support:design:$support_lib_version"
}

Next, in susi_android/app/src/main/res/layout/activity_login.xml, the AutoCompleteTextView is wrapped inside a TextInputLayout to provide the email input field:

<android.support.design.widget.TextInputLayout
    android:id="@+id/email"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:errorEnabled="true">
    <AutoCompleteTextView
        android:id="@+id/email_input"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="@string/email"
        android:textColor="@color/edit_text_login_screen"
        android:inputType="textEmailAddress"
        android:textColorHint="@color/edit_text_login_screen" />
</android.support.design.widget.TextInputLayout>

Then, in susi_android/app/src/main/java/org/fossasia/susi/ai/activities/LoginActivity.java, the following import statements are added:

import android.widget.ArrayAdapter;
import android.widget.AutoCompleteTextView;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Set;

The following code binds the AutoCompleteTextView using ButterKnife.

@BindView(R.id.email_input)
AutoCompleteTextView autoCompleteEmail;

To store every email ID that has successfully logged in, PrefManager is used in the login response handler:

...
if (response.isSuccessful() && response.body() != null) {
    Toast.makeText(LoginActivity.this, response.body().getMessage(), Toast.LENGTH_SHORT).show();
    // Save email for autocompletion
    savedEmails.add(email.getEditText().getText().toString());
    PrefManager.putStringSet(Constant.SAVED_EMAIL, savedEmails);
}
...

Here Constant.SAVED_EMAIL is a string defined in Constants.java as:

public static final String SAVED_EMAIL="saved_email";
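
PrefManager here is the app's thin wrapper around SharedPreferences. A minimal sketch of the two methods used above is shown below; the real implementation lives in the project, and the getSharedPreferences() helper is assumed.

public static void putStringSet(String key, Set<String> value) {
    getSharedPreferences().edit().putStringSet(key, value).apply();
}

public static Set<String> getStringSet(String key) {
    // Returns null when nothing has been stored yet
    return getSharedPreferences().getStringSet(key, null);
}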

Next, to supply the list of suggested emails to display, the ArrayAdapter class is used. The setAdapter method sets the adapter of the AutoCompleteTextView.

private Set<String> savedEmails = new HashSet<>();

if (PrefManager.getStringSet(Constant.SAVED_EMAIL) != null) {
    savedEmails.addAll(PrefManager.getStringSet(Constant.SAVED_EMAIL));
    autoCompleteEmail.setAdapter(new ArrayAdapter<>(this, android.R.layout.simple_list_item_1, new ArrayList<>(savedEmails)));
}

Now test it: log in once, and from then on just type the first few letters and watch the email suggestions appear. So the next time you build an app with a login interface, do include an AutoCompleteTextView for hassle-free login.


Using Speech To Text Engine in Susi Android

SUSI is an intelligent chatbot that supports speech-to-text input: the user can talk to SUSI just like talking to another person. With speech input, SUSI's output is also spoken via text-to-speech, giving the user a seamless conversational experience.

To achieve speech-to-text input in SUSI Android, or any other Android application, we have the following options:

  1. Using Android's inbuilt Speech to Text function.
  2. Using the Google Cloud Speech API.

We will talk about each of these.

Using Android's inbuilt Speech to Text function

Android provides an inbuilt method to convert speech into text; it is the easiest way to do the conversion.

This method uses the android.speech package and specifically the android.speech.RecognizerIntent class. We trigger an intent (android.speech.RecognizerIntent) which shows a dialog box to recognize speech input. The spawned Activity converts the speech into text and sends the result back to our calling Activity. Since we must listen for the returned text, we invoke the intent with startActivityForResult().

The code snippet for this is:

private void promptSpeechInput() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT,
                getString(R.string.speech_prompt));
        try {
            startActivityForResult(intent, REQ_CODE_SPEECH_INPUT);
        } catch (ActivityNotFoundException a) {
            Toast.makeText(getApplicationContext(),
                    getString(R.string.speech_not_supported),
                    Toast.LENGTH_SHORT).show();
        }
    }

In the code above we attach some extra information when firing the intent; the speech-to-text engine uses it to determine the user's language. When invoking RecognizerIntent we must provide RecognizerIntent.EXTRA_LANGUAGE_MODEL; here we set it to LANGUAGE_MODEL_FREE_FORM and pass the device's default locale as RecognizerIntent.EXTRA_LANGUAGE.

Once the recognizer finishes, we receive a callback to onActivityResult(int requestCode, int resultCode, Intent data), an overridden method that handles the result. The RecognizerIntent converts the speech input to text and sends back the result as an ArrayList under the key RecognizerIntent.EXTRA_RESULTS. This list is ordered in descending order of recognizer confidence, and it is only present when RESULT_OK is returned. We simply set the first result on our TextView using setText().
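
The request code ties the prompt and the callback together. REQ_CODE_SPEECH_INPUT is just an arbitrary int constant defined in the Activity; the value here is illustrative.

private static final int REQ_CODE_SPEECH_INPUT = 100;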

A screenshot of the implementation is shown below.

Using Google Cloud Speech API

The Google Cloud Speech API enables developers to convert speech to text in real time. It is used in Google Allo and Google Assistant. It is backed by powerful neural-network and machine-learning models, which make it both efficient and fast, and it can recognize more than 80 languages. For more detail about the Google Cloud Speech API, refer to the official documentation at this link.

The Cloud Speech API is not free; it is billed by usage. To use it, the developer has to sign up at the Google Cloud console and generate an API key; on enabling the Speech API, a JSON credentials file is created.

For completeness, here is the onActivityResult() callback for the inbuilt RecognizerIntent approach described above:

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);

        switch (requestCode) {
            case REQ_CODE_SPEECH_INPUT: {
                if (resultCode == RESULT_OK && null != data) {
                    ArrayList<String> result = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                    mVoiceInputTv.setText(result.get(0));
                }
                break;
            }

        }
    }

The whole implementation of the API can be found here.


How to keep crash records of SUSI.AI Android with Crashlytics

At this stage of development of SUSI.AI Android there are many changes, which at times cause inconsistencies and crashes in the app. One important question we face is how to keep records of crashes so that we can improve the app. Crashlytics is one way to keep such records, and the easiest way to add it to an app is to integrate the Fabric plugin in Android Studio.

  • First create an account at Fabric.
  • When you create the account, it will send you a confirmation mail.
  • Clicking the link in the confirmation mail redirects you to the Fabric page.
  • It shows different platform options; select Android as the platform.

  • For Windows/Linux users, select Settings from the File menu. Mac users select Preferences from the menu.
  • Select Plugins, click the Browse repositories button and search for "Fabric for Android".
  • Click the Install plugin button to download and install the plugin.
  • You will see a Fabric option on the right side. Click on it and enter your credentials to sign in.
  • Select the susi_android project and click Next.
  • Fabric lists all the organizations you registered, so select the organization you want to associate the app with and click Next. In my case the organization is susi.
  • Fabric then lists all of its kits; select Crashlytics and click Next.
  • Click the Install button. It will add Crashlytics to the project.
  • Fabric wants to make changes in the MainApplication and AndroidManifest.xml files, so click the Apply button for the changes to happen (the Gradle side of these changes is sketched below).
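
For reference, the plugin's changes boil down to a few Gradle entries. A typical configuration from that era is sketched below; the version numbers are illustrative.

buildscript {
    repositories {
        maven { url 'https://maven.fabric.io/public' }
    }
    dependencies {
        // Fabric's Gradle plugin
        classpath 'io.fabric.tools:gradle:1.+'
    }
}

apply plugin: 'io.fabric'

dependencies {
    // Crashlytics kit
    compile('com.crashlytics.sdk.android:crashlytics:2.6.8@aar') {
        transitive = true
    }
}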

  • Build and run your application to make sure that everything is configured properly. If your app was configured successfully, you will instantly get an email at the address you used to sign up with Fabric.
  • Now you can track crashes of your app on the dashboard of your Fabric account.
  • It gives you details such as: 1) how many users are affected and how many times the app crashed, with dates; 2) details of the devices on which the app crashed; 3) the cause of each error.

For more information use these links:

https://fabric.io/home

https://fabric.io/kits/android/crashlytics


How to teach SUSI.AI skills using external APIs

A powerful feature of SUSI is that it can use external APIs to learn new skills. The generic syntax used here is:

Question string
!console:constant answer string + answer variable
{
 "url" : "API to be called",
 "path" : "path from where answer will be fetched"
}
eol

I will try to explain this syntax with the help of some useful examples. Let’s start with a very basic example:

I want SUSI to be able to answer questions like “What is the date today?”.

Let's tackle this step by step. As the syntax above shows, to teach SUSI a skill involving an external API call, we need to be clear about five things, namely:

  1. The question string, i.e. “What is the date today?” (in this case).
  2. The constant answer string, i.e. “The date today is ”.
  3. The API to be called, i.e. “http://calapi.inadiutorium.cz/api/v0/en/calendars/default/today”.
  4. The path which contains our answer.

When we visit this API url, we get the result as follows:

{
  "date":"2017-05-16",
  "season":"easter",
  "season_week":5,
  "celebrations":[
    {
      "title":"",
      "colour":"white",
      "rank":"ferial",
      "rank_num":3.13
    }
  ],
  "weekday":"tuesday"
}

The whole JSON object is represented with the ‘$’ sign. As date is a property of this object, so date can be accessed with “$.date” – this string is referred to as the path.

  5. The last one is the answer variable.

The result of the API URL contains many “key:value” pairs. The answer variable is the value of the last key (here, date) referred to in the path string. This value is stored in a variable named $object$.

So our answer variable turns out to be $object$.

Now, as we have all the five things ready with us, we can make our SUSI skill:

What is the date today?
!console:$object$
{
  "url" : "http://calapi.inadiutorium.cz/api/v0/en/calendars/default/today",
  "path" : "$.date"
}
eol

Kudos! But where do we feed this skill, and how do we check that the SUSI chatbot answers “What is the date today?” appropriately?

To test the working of a skill:

  1. Open dream.asksusi.com, write whatever name you like for the pad and then click OK.

  2. Replace the data written on your pad with the skill code you created. You don’t need to save it, it is saved automatically. Now your page should look something like this:
  3. To check if this skill is working properly:
  • Visit SUSI chat bot.
  • In the textbox below, write dream followed by the name of your pad and then press Enter key. SUSI will reply with “dreaming enabled for YOUR-PAD-NAME”.
  • Now write the question string i.e. What is the date today? and you should be shown today’s date!

For more clarity, refer to this image:


Great, that you made it! You can now contribute skills by making a PR to
this repository and see those skills live on SUSI without enabling any dream! Just ask your question and get your own skilled answers.

Let's learn more about skills by introducing some changes to this question. Here are some variations:

  • We want SUSI to give the same answer whether we ask “What is the date today?” or “today’s date”. To achieve this we use the ‘|’ symbol when writing the question.

The new syntax of our skill will be:

What is the date today? | today's date?
!console:$object$
{
  "url" : "http://calapi.inadiutorium.cz/api/v0/en/calendars/default/today",
  "path" : "$.date"
}
eol
  • We want SUSI to answer according to the question, i.e. to handle all of today's date?, tomorrow's date? and yesterday's date?

The new syntax of our skill will be:

*'s date?
!console:$object$
{
  "url" : "http://calapi.inadiutorium.cz/api/v0/en/calendars/default/$1$",
  "path" : "$.date"
}
eol

Here * acts as a wildcard: it matches “today” in “today's date” and “tomorrow” in “tomorrow's date”. $1$ is the variable that stores the value matched by *.

Let’s dive into more examples:

  1. Sometimes we may need 2 wildcard characters in our question:
* plot of * | * summary of *
!console:$object$
{
  "url":"http://api.tvmaze.com/singlesearch/shows?q=$2$",
  "path":"$.summary"
}
eol   

The API used above tells the plot of a TV show; we query it with the name of the show.

For questions like “Tell me the plot of Game of Thrones” or “What is the plot of Game of Thrones”, we want to ignore the string before “plot of” and store the string after it. This string stored can be used to query the API later.

The variables storing the * values are numbered: the value of the first * in the question is stored in $1$, the second in $2$, and so on.

Now the above-written skill should make sense to everyone. Let’s see the skill in action:

 

  2. What if we want two answers from the same API? Consider this question: we have a public API to check the details of a space agency, and we need to append the agency's abbreviation to the API URL.

For example, when we visit https://launchlibrary.net/1.2/agency/ISRO, we get the following as output:

We want SUSI to answer the full form of a space agency along with its country code.

The skill used for it:

what is the full form of * and its country code?
!console:Full form - $name$, Country code - $countryCode$
{
  "url":"https://launchlibrary.net/1.2/agency/$1$",
  "path":"$.agencies[0]"
}
eol

How does this skill work?

Let's break down the path variable and see where it leads. The ‘$’ fetches the whole object.

Further, “$.agencies[0]” will fetch this:

{
  "Id":31,
  "name":"Indian Space  Research Organization",
  "countryCode":"IND",
  "abbrev":"ISRO",
  "Type":1,
  "infoURL":"http:\/\/www.isro.org\/",
 "wikiURL":"http:\/\/en.wikipedia.org\/wiki\/Indian_Space_Research_Organiation",
  "infoURLs":["http:\/\/www.isro.org\/"]
}

To fetch the value of any key, we use the key name enclosed in ‘$KEY_NAME$’; the value of that key is automatically stored in the variable $KEY_NAME$.

Hence we use $name$ and $countryCode$ in our skill, to get the required answer.

The skill in action:

In the same way we can use other APIs and contribute new skills to SUSI. To help you get started, see the public APIs repository available here. As said before, you can contribute skills by making a PR to this repository and see those skills live in SUSI!


How to add the Google Books API to SUSI AI

SUSI.AI is an open-source personal assistant to which you can easily add new skills. In this blog post I'm going to add Google's Books API to SUSI as a skill. A complete tutorial on SUSI.AI skills is in the repository; check out Tutorial Level 11: Call an external API here to understand how we can integrate an external API with SUSI.AI.

To start adding the books skill to SUSI.AI, first go to http://dream.susi.ai/, give a name in the text field and press OK.

 

Copy and paste the skill code (shown further below) into the newly opened Etherpad.

Go to http://chat.susi.ai to test the new skill.

Type “dream blogpost” in the chat and press Enter. Now we can use the skills we added to the Etherpad.

To understand Google's Books API, use this URL. Your request URL should look like this:

https://www.googleapis.com/books/v1/volumes?q=BOOKNAME&key=yourAPIKey

 

You should replace the APIKey part with your own API key.

To get started, you first need to get an API key.

Go to this URL, click the GET A KEY button at the top right, and select “Create a new project”.

Give the project a name and click the “CREATE AND ENABLE API” button.

Copy your API key and substitute it into the request URL.

Paste the request URL into your browser's address bar, replace the BOOKNAME part with “flower” and open it. It returns this JSON.
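
For orientation, the slice of that response the skill relies on has this shape (a trimmed, illustrative excerpt, not the exact response):

{
  "items": [
    {
      "volumeInfo": {
        "title": "Sample Book Title",
        "infoLink": "https://books.google.com/books?id=..."
      }
    }
  ]
}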

We need the full title of the book, which lives in the items array; to reach it we follow the hierarchy:
items array > first item > volumeInfo > title
Go to the Etherpad we made before and paste the following code:


is there any book called * ?
!console:did you mean "$title$" ? Here is a link to read more: $infoLink$
{
"url":"https://www.googleapis.com/books/v1/volumes?q=$1$&key=AIzaSyCt3Wop5gN3S5H0r1CKZlXIgaM908oVDls",
"path":"$.items[0].volumeInfo"
}
eol

The first line of the code, “is there any book called *?”, is the question the user asks. * is the variant part of the question; it can be referenced in the code by $1$. If there are more variants, we can add multiple asterisks and refer to them by their corresponding numbers, e.g. $1$, $2$, $3$.
  • In this code, “path” : “$.items[0].volumeInfo”
  • $ represents the full JSON result.
  • items[0] gets the first element.
  • .volumeInfo refers to the volumeInfo object.
The line !console:did you mean "$title$" ? Here is a link to read more: $infoLink$ produces the output.
  • $title$ refers to the “title” part of the data that comes from “path”.
  • $infoLink$ gives a link to more details.

Now go to the chat UI and type “dream blogpost” again. After it shows “dreaming enabled”, type in “is there any book called world war?”. It will result in the following.

This is a simple way to add any service to SUSI as a skill.


Adding Send Button in SUSI.AI webchat

Our SUSI.AI web chat app is improving day by day. At one point it looked like this:

It replied to your queries and had all the basic functionality, but something was missing: when viewed on mobile, we realised it needed a send button.

Send buttons actually make chat apps look cool and give them their complete look.

A method was defined in the MessageComposer component of the React app which takes the current textarea value from state and dispatches it as a new message.

Method:

_onClickButton(){
    let text = this.state.text.trim();
    if (text) {
        Actions.createMessage(text, this.props.threadID);
    }
    this.setState({text: ''});
}

This method is then called in the onClick action of our send button, which is included in the div rendered by the MessageComposer component.

The method is also called when the ENTER key is pressed; that implementation can be seen here.
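
A sketch of what that Enter-key handler can look like (the Shift+Enter branch is an assumption; the actual code is in the linked implementation):

_onKeyDown(event) {
    // Send on plain Enter; Shift+Enter keeps inserting a newline
    if (event.keyCode === 13 && !event.shiftKey) {
        event.preventDefault();
        this._onClickButton();
    }
}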

Why wrap the textarea and the button in a div instead of rendering them as two independent items?

Well, a React component can only render a single root element, so wrapping them in a div is our only option.

Now that we had the functionality running, it was time for styling.

Our team chose to use http://www.material-ui.com/ and its components for styling.

We chose a FloatingActionButton as the send button.

To use Material-UI components in our component, several imports were needed, and we first had to change our render-to-DOM code to:

import MuiThemeProvider from 'material-ui/styles/MuiThemeProvider';
 
 const App = () => (
   <MuiThemeProvider>
     <ChatApp />
   </MuiThemeProvider>
 );
 
 ReactDOM.render(
   <App /> ,
   document.getElementById('root')
 );

The imports in our MessageComposer looked like this:

import Send from 'material-ui/svg-icons/content/send';
import FloatingActionButton from 'material-ui/FloatingActionButton';
import injectTapEventPlugin from 'react-tap-event-plugin';
 injectTapEventPlugin();

injectTapEventPlugin is a very important call: for the event handlers on our send button to work, we need to invoke this method, and the handler for the tap/click event is known as onTouchTap.

The JSX code which was to be rendered looked like this:

<div className="message-composer">
         <textarea
           name="message"
           value={this.state.text}
           onChange={this._onChange.bind(this)}
           onKeyDown={this._onKeyDown.bind(this)}
           ref={(textarea)=> { this.nameInput = textarea; }}
           placeholder="Type a message..."
         />
         <FloatingActionButton
           backgroundColor=' #607D8B'
           onTouchTap={this._onClickButton.bind(this)}
           style={style}>
           <Send />
         </FloatingActionButton>
       </div>

Styling for the button was done separately and looked like this:

const style = {
     mini: true,
     top: '1px',
     right: '5px',
     position: 'absolute',
 };

Ultimately, after implementing all of this successfully, our SUSI.AI web chat had a good-looking floating-action send button.

This can be tested here.
