Making a SUSI Skill to get details about a bank from its IFSC

We are going to make a SUSI skill that fetches information about a bank when its IFSC (Indian Financial System Code) is known. Here is a detailed explanation of how we go about doing this.

Getting started with the skill creation

API endpoint that returns the bank details

Before getting to the skill development, we need to find an API that returns the bank details for a given IFSC. On browsing through various open source projects, I found an apt endpoint by Razorpay. Razorpay is a payment gateway for India which allows businesses to accept, process and disburse payments with ease. The GitHub link to the repository is https://github.com/razorpay/ifsc.

API endpoint – https://ifsc.razorpay.com/<:ifsc>
Request type – GET
Response type – JSON
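
Before wiring this into a skill, you can sanity-check the endpoint yourself. Here is a minimal sketch using fetch, run from a browser console or Node (the IFSC is the same sample code used later in this post):

// Sketch: query the Razorpay IFSC endpoint and print two fields from the JSON.
fetch("https://ifsc.razorpay.com/SBIN0007245")
    .then((response) => response.json())
    .then((bank) => console.log(bank.BANK + ", " + bank.ADDRESS))
    .catch((err) => console.error("Lookup failed:", err));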

Now, head over to the SUSI Etherpad, which is the current SUSI Skill Development Environment and create a new Pad. 

Here, we need to define the skill in the Etherpad. We will now write rules/intents for the skill. An intent represents an action that fulfills a user’s spoken request.

Intents consist of 2 parts –

  • User query – It contains different patterns of query that user can ask.
  • Answer – It contains the possible answer to the user query.

The main intent that our skill focuses on is returning the bank's name and address from the IFSC code. Here is how it looks –

Name of bank with IFSC code * | Bank's name with IFSC code *
!example:Name of bank with IFSC code SBIN0007245
!expect: The name of bank is State Bank of India
!console:The name of bank with IFSC code $1$ is $object$
{
"url":"https://ifsc.razorpay.com/$1$",
"path":"$.BANK"
}
eol

Part-wise explanation of the intent

  • The first line contains the query patterns that the user can use while querying. You can see that a wildcard character (*) is used in the pattern. It matches the IFSC of the bank that we wish to know about, which we will later use to fetch the details via the API.
  • The second line contains an example query, followed by third line that contains the expected answer.
  • The last part of the rule contains the answer that is fetched from an external API – https://ifsc.razorpay.com/<:ifsc> – via the console service provided by SUSI Skills. Here, <:ifsc> refers to the IFSC that the user wants to know about. We get it from the user query itself, and we can access it through the variable $1$ as it matches the 1st wildcard present in the query. If there were 2 wildcards, we could access them through $1$ and $2$ respectively.
  • The console service provides us with an option to enter the URL of the API that we want to hit and the path of the key we want to use.

The sample response of the endpoint looks like this:

{
  "BANK": "Karnataka Bank",
  "IFSC": "KARB0000001",
  "BRANCH": "RTGS-HO",
  "ADDRESS": "REGD. & HEAD OFFICE, P.B.NO.599, MAHAVEER CIRCLE, KANKANADY, MANGALORE - 575002",
  "CONTACT": "2228222",
  "CITY": "DAKSHINA KANNADA",
  "RTGS": true,
  "DISTRICT": "MANGALORE",
  "STATE": "KARNATAKA"
}

 

  • Since we want to extract the name of the bank, the BANK key contains our desired value, so we use $.BANK as the path in the console service. The fetched value can then be accessed through $object$ in the answer. We frame the answer using the $object$ and $1$ variables, so that it looks like the one mentioned in the expected answer. eol marks the end of the console service.
  • Similarly, the intent that gives us the address of the bank looks like this –
Address of bank with IFSC code * | Bank's address with IFSC code *
!example:Address of bank with IFSC code SBIN0007245
!expect: The address of bank is TILAK ROAD HAKIMPARA, P.O.SILIGURI DARJEELING, WEST BENGAL ,PIN - 734401
!console:The address of bank with IFSC code $1$ is $object$
{
  "url":"https://ifsc.razorpay.com/$1$",
  "path":"$.ADDRESS"
}
eol

Testing the skill

  • Open any SUSI client and write dream <your dream name> so that dreaming is enabled for SUSI. We will write dream ifsc. Once dreaming is enabled, you can test any skills that you have made in your Etherpad.
  • We can test the skill by asking queries and matching the replies with the expected answers, as in the sample session below. Once the testing is done, write stop dreaming to disable dreaming for SUSI.
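
A test session might look like this (the reply follows the console answer template defined above):

dream ifsc
Name of bank with IFSC code SBIN0007245
> The name of bank with IFSC code SBIN0007245 is State Bank of India
stop dreaming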

  • After the testing has completed successfully, we will go ahead and add the skill to susi_skill_data.
  • The general skill format is –
::name <Skill_name>
::author <author_name>
::author_url <author_url>
::description <description> 
::dynamic_content <Yes/No>
::developer_privacy_policy <link>
::image <image_url>
::terms_of_use <link>

#Intent
User query1|query2|query3....
Answer answer1|answer2|answer3...

We will add the basic skill details and author details to the Etherpad file and arrange it in the format mentioned above. The final text file looks like this –

::name IFSC to Bank Details
::author Akshat Garg
::author_url https://github.com/akshatnitd
::description It is a bank lookup skill that takes in an IFSC code from the user and provides all the necessary details of the bank. It is valid for banks in India only.
::dynamic_content Yes
::developer_privacy_policy 
::image images/download.jpeg
::terms_of_use 

Name of bank with IFSC code * | Bank's name with IFSC code *
!example:Name of bank with IFSC code SBIN0007245
!expect: The name of bank is State Bank of India
!console:The name of bank with IFSC code $1$ is $object$
{
"url":"https://ifsc.razorpay.com/$1$",
"path":"$.BANK"
}
eol

Address of bank with IFSC code * | Bank's address with IFSC code *
!example:Address of bank with IFSC code SBIN0007245
!expect: The address of bank is TILAK ROAD HAKIMPARA, P.O.SILIGURI DARJEELING, WEST BENGAL, PIN - 734401
!console:The address of bank with IFSC code $1$ is $object$
{
"url":"https://ifsc.razorpay.com/$1$",
"path":"$.ADDRESS"
}
eol

Submitting the skill

The final part is adding the skill to the list of skills for SUSI. We can do it in 2 ways:

1st method (using the web interface)

  • Open https://skills.susi.ai and log in to your SUSI account (or sign up, if you have not done so).
  • Click on the create skill button.
  • Select the appropriate fields like Category, Language, Skill name, Logo.
  • Paste the contents of the text file that we created.
  • Add comments regarding the skill and click on Save to save the skill.

2nd method (sending a PR)

  • Send a Pull Request to susi_skill_data repository providing the dream name. The PR should have the text file containing the skill.

So, this was a short blog on how we can develop a SUSI skill of our choice.


Creating Onboarding Screens for SUSI iOS

Onboarding screens are designed to introduce users to how the application works and what main functions it has, to help them understand how to use it. It can also be helpful for developers who intend to extend the current project.

When you enter the SUSI iOS app for the first time, you see onboarding screens displaying information about SUSI iOS features. SUSI iOS uses Material Design, so the UI of the onboarding screens follows Material Design as well.

There are four onboarding screens:

  1. Login (Showing the login features of SUSI iOS) – Login to the app using SUSI.AI account or else signup to create a new account or just skip login.
  2. Chat Interface (Showing the chat screen of SUSI iOS) – Interact with SUSI.AI asking queries. Use microphone button for voice interaction.
  3. SUSI Skill (Showing SUSI Skills features) – Browse and try your favorite SUSI.AI Skill.
  4. Chat Settings (SUSI iOS Chat Settings) – Personalize your chat settings for a better experience.

Onboarding Screens User Interface

 

There are three important components of every onboarding screen:

  1. Title – Title of the screen (Login, Chat Interface etc).
  2. Image – Showing the visual presentation of SUSI iOS features.
  3. Description – Small descriptions of features.

Onboarding screen user control:

  • Pagination – Gives the user the ability to go to the next and previous onboarding screens.
  • Swiping – Left and right swipes are implemented to enable the user to go to the next and previous onboarding screens.
  • Skip Button – Enable users to skip the onboarding instructions and go directly to the login screen.

Implementation of Onboarding Screens:

  • Initializing PaperOnboarding:
override func viewDidLoad() {
    super.viewDidLoad()

    UIApplication.shared.statusBarStyle = .lightContent
    view.accessibilityIdentifier = "onboardingView"

    setupPaperOnboardingView()
    skipButton.isHidden = false
    bottomLoginSkipButton.isHidden = true
    view.bringSubview(toFront: skipButton)
    view.bringSubview(toFront: bottomLoginSkipButton)
}

private func setupPaperOnboardingView() {
    let onboarding = PaperOnboarding()
    onboarding.delegate = self
    onboarding.dataSource = self
    onboarding.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(onboarding)

    // Add constraints
    for attribute: NSLayoutAttribute in [.left, .right, .top, .bottom] {
        let constraint = NSLayoutConstraint(item: onboarding,
                                            attribute: attribute,
                                            relatedBy: .equal,
                                            toItem: view,
                                            attribute: attribute,
                                            multiplier: 1,
                                            constant: 0)
        view.addConstraint(constraint)
    }
}

 

  • Adding content using dataSource methods:

    let items = [
        OnboardingItemInfo(informationImage: Asset.login.image,
                           title: ControllerConstants.Onboarding.login,
                           description: ControllerConstants.Onboarding.loginDescription,
                           pageIcon: Asset.pageIcon.image,
                           color: UIColor.skillOnboardingColor(),
                           titleColor: UIColor.white, descriptionColor: UIColor.white,
                           titleFont: titleFont, descriptionFont: descriptionFont),
        OnboardingItemInfo(informationImage: Asset.chat.image,
                           title: ControllerConstants.Onboarding.chatInterface,
                           description: ControllerConstants.Onboarding.chatInterfaceDescription,
                           pageIcon: Asset.pageIcon.image,
                           color: UIColor.chatOnboardingColor(),
                           titleColor: UIColor.white, descriptionColor: UIColor.white,
                           titleFont: titleFont, descriptionFont: descriptionFont),
        OnboardingItemInfo(informationImage: Asset.skill.image,
                           title: ControllerConstants.Onboarding.skillListing,
                           description: ControllerConstants.Onboarding.skillListingDescription,
                           pageIcon: Asset.pageIcon.image,
                           color: UIColor.loginOnboardingColor(),
                           titleColor: UIColor.white, descriptionColor: UIColor.white,
                           titleFont: titleFont, descriptionFont: descriptionFont),
        OnboardingItemInfo(informationImage: Asset.skillSettings.image,
                           title: ControllerConstants.Onboarding.chatSettings,
                           description: ControllerConstants.Onboarding.chatSettingsDescription,
                           pageIcon: Asset.pageIcon.image,
                           color: UIColor.iOSBlue(),
                           titleColor: UIColor.white, descriptionColor: UIColor.white,
                           titleFont: titleFont, descriptionFont: descriptionFont)]

    extension OnboardingViewController: PaperOnboardingDelegate, PaperOnboardingDataSource {
        func onboardingItemsCount() -> Int {
            return items.count
        }

        func onboardingItem(at index: Int) -> OnboardingItemInfo {
            return items[index]
        }
    }
    

     

  • Hiding/Showing Skip Buttons:
    func onboardingWillTransitonToIndex(_ index: Int) {
        skipButton.isHidden = index == 3
        bottomLoginSkipButton.isHidden = index != 3
    }
    


FOSSASIA Internship Program 2018

Are you interested in participating in the development of open source projects in a summer internship? Build up your developer profile with FOSSASIA and spend your summer coding on an open source project. Contribute to SUSI.AI, Open Event, Badgeyay, Yaydoc, Meilix or PSLab and join us at a workshop week and Jugaadfest in India. Please find the details below and submit your application to our form. Be sure to check out FOSSASIA's program guidelines.

1. Program Details

  • Sign up on our dedicated form at fossasia.org/internship (Interns need to become members of the org and sign up on its social channels)
  • Internships are 3 months with monthly evaluations
  • plus a preparation and onboarding period after acceptance
  • Eligible are contributors above 18 years of age. Any contributor is eligible, including students, professionals, university staff etc. Preferred are contributors who have participated in the community previously.
  • Benefits of the program include shirts, swag and certificates. All participants who pass the final evaluation will be eligible to participate in a workshop week and Jugaadfest in September 2018 in Hyderabad. Travel grants and accommodation will be provided.
  • The program is intended as a full-time program. However, contributors who have a day job and would like to participate can still join and pass the program if they fulfill all program requirements. All contributors who pass the program will be able to receive funding for workshops and Jugaadfest participation.

2. Timeline

  • Application period ongoing until May 12
  • Acceptance ongoing until May 12
  • Start of pre-period:  May
  • Start of Internship: 1st June
  • Evaluation 1: July
  • Evaluation 2: August
  • Evaluation 3: September
  • End of Internship:  September, 2018
  • Issuing of Certificates: September 2018
  • FOSSASIA Workshop Week /Jugaadfest: September/October

3. Deliverables

  • Daily scrum email to project mailing list answering three questions: What did I do yesterday? What is my plan for today? Is there anything preventing me from achieving my goals, e.g. blockers?
  • Work according to pull requests and issues (submit code on Github and match it with issues)
  • Daily code submissions (software, hardware)
  • Documentation: Text, YouTube videos
  • 1 technical blog post a month with details on solving a problem in a FOSSASIA project (Monthly – 1: by Monday of second week)
  • Design items (in open formats, e.g. XCF, SVG, EPS)

4. Participating Projects

5. Best Practices

Please follow best practices as defined here: https://blog.fossasia.org/open-source-developer-guide-and-best-practices-at-fossasia/

6. Participant Benefits/Support

Participants will receive Swag, certificates and travel support to the FOSSASIA Workshop week and Jugaadfest.

  • Evaluation 1: July, 2018: Successful participants receive a FOSSASIA T-shirt (sent out together with the bag in evaluation 2)
  • Evaluation 2: August: Successful Participants receive a beautiful FOSSASIA bag
  • Evaluation 3: September: Successful Participants receive the following support to participate in the FOSSASIA India Workshop Week and Jugaadfest:
    • 100 SGD travel support from within India and 200 SGD support if coming from outside India
    • One week accommodation in Hyderabad (organized by FOSSASIA)
    • Catering during workshops

Maintaining Extension State in SUSI.AI Chrome Bot Using Chrome Storage API

SUSI Chrome Bot is a browser action Chrome extension which is used to communicate with SUSI AI. The browser action extension in Chrome is like any other web app that you visit. It stores all your preferences, like theme and speech synthesis settings, and your data while you are interacting with it; but once you close it, it forgets all of your data unless you save it in some database or use cookies for the session. We want to be able to save the chats and other preferences, like theme settings, that the user makes when interacting with SUSI AI through SUSI Chrome Bot. In this blog, we'll explore Chrome's chrome.storage API for storing data.

What options do we have for storing data offline?

IndexedDB: IndexedDB is a low-level API for client-side storage of data. IndexedDB allows us to store a large amount of data and works like an RDBMS, but it is a JavaScript-based object-oriented database.

localStorage API: localStorage allows us to store data in key/value pairs which is much more effective than storing data in cookies. localStorage data persists even if the user closes and reopens the browser.

chrome.storage: Chrome provides us with the chrome.storage API. It offers the same storage capabilities as the localStorage API, with some advantages.

For susi_chromebot we will use chrome.storage because of the following advantages it has over the localStorage API:

  1. User data can be automatically synced with Chrome sync if the user is logged in.
  2. The extension’s content scripts can directly access user data without the need for a background page.
  3. A user’s extension settings can be persisted even when using incognito mode.
  4. It’s asynchronous so bulk read and write operations are faster than the serial and blocking localStorage API.
  5. User data can be stored as objects whereas the localStorage API stores data in strings.

Integrating chrome.storage to susi_chromebot for storing chat data

To use chrome.storage we first need to declare the necessary permission in the extension’s manifest file. Add “storage” in the permissions key inside the manifest file.

"permissions": [
         "storage"
       ]

 

We want to store the chats the user has had with SUSI. We will use a JavaScript object to store the chat data.

var storageObj = {
    senderClass: "",
    content: ""
};

The storageObj object has two keys, namely senderClass and content. The senderClass key represents the sender of the message (user or SUSI) whereas the content key holds the actual content of the message.

We will use chrome.storage.get and chrome.storage.set methods to store and retrieve data.

var susimessage = newDiv.innerHTML;
var storageArr = []; // fallback in case nothing is stored yet
storageObj.content = susimessage;
storageObj.senderClass = "susinewmessage";
chrome.storage.sync.get("message", (items) => {
    if (items.message) {
        storageArr = items.message;
    }
    storageArr.push(storageObj);
    chrome.storage.sync.set({"message": storageArr}, () => {
        console.log("saved");
    });
});

 

In the above code snippet, susimessage contains the actual message content sent by the SUSI server. We then set the corresponding properties of the storageObj object that we declared earlier. We could now use chrome.storage.set to save the storageObj object, but that would overwrite the current data that we have inside Chrome's StorageArea. To prevent the old message data from getting overwritten, we first get all the message content in our storage using chrome.storage.sync.get. Notice how we are passing the "message" string as the first parameter to the function. This is done because we only want the message content that was saved in the StorageArea; if we passed null instead, it would return everything inside the StorageArea. Once we have our messages (an array of the objects we store as storageObj), we assign them to a new array storageArr. We then push our new storageObj, containing the message and the sender, into that array. Finally, we use chrome.storage.sync.set to save the message content in Chrome's StorageArea, from where it can later be retrieved using the "message" key.
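
For completeness, reading the saved conversation back (for example when the popup is reopened) mirrors the code above. This is a sketch assuming the same "message" key and the senderClass/content shape of storageObj:

// Sketch: restore previously saved chat messages.
chrome.storage.sync.get("message", (items) => {
    var savedMessages = items.message || [];
    savedMessages.forEach((msg) => {
        // msg.senderClass identifies the sender, msg.content is the text
        console.log(msg.senderClass + ": " + msg.content);
    });
});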


We use the same procedure to save messages sent by the user.

Note: chrome.storage is not very large, so we need to be careful about what we store, or we may run out of storage space. Also, we should not store confidential data in it, since the storage area is not encrypted.
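
Since a write can fail silently, for instance when the sync quota is exceeded, a defensive variant of the save (a sketch, not part of the current susi_chromebot code) checks chrome.runtime.lastError in the set callback:

chrome.storage.sync.set({"message": storageArr}, () => {
    if (chrome.runtime.lastError) {
        // The write failed, e.g. because the quota was exceeded.
        console.error("Storage write failed:", chrome.runtime.lastError.message);
    } else {
        console.log("saved");
    }
});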


Adding Map Type Response to SUSI.AI Chromebot

SUSI.AI Chromebot supports almost every sort of reply that the SUSI.AI server can generate, but it was still missing the map type response generated by the SUSI.AI server.

This blog explains how the map type response was added to the chromebot.

Brief Introduction

The original issue was planned by Manish Devgan and Mohit Sharma as an advanced task for Google Code-In 2017. The link to it can be found here: #157

For a long time the issue remained untouched, and after GCI got over I assigned the issue to myself as a priority, since the map type was a major response from the SUSI.AI server.

How was Map Type response added?

There were a lot of things to keep in mind before starting work on this issue.

  • Changing code scheme during GCI and other PRs
  • API Response from the SUSI.AI Server
  • Understanding the new codebase that got altered during GCI-17
  • Doing it quickly

I will go through all the steps in detail.

Changing Code Scheme

The code was altered numerous times with the addition of a number of pull requests during GCI-17, and there were no docstrings for any functions or methods. So I had to figure them out in order to start working on the map type response.

API Response from the SUSI.AI Server

To understand the JSON that the server sends, I went to the SUSI.AI API and did a simple search for "Where is Berlin?". The response generated is given below.

(Since the JSON is very big, I am only posting the relevant data for this issue.)

 

    "actions": [
      {
        "type": "answer",
        "language": "en",
        "expression": "Berlin (, German: [bɛɐ̯ˈliːn] ( listen)) is the capital and the largest city of Germany as well as one of its 16 constituent states."
      },
      {
        "type": "anchor",
        "link": "https://www.openstreetmap.org/#map=13/52.52436820069531/13.41053001275776",
        "text": "Here is a map",
        "language": "en"
      },
      {
        "type": "map",
        "latitude": "52.52436820069531",
        "longitude": "13.41053001275776",
        "zoom": "13",
        "language": "en"
      }
    ]

 

Here we see that "actions" is an array of JSON objects and the third one has "type" set to "map". This is the relevant information that we require for generating the map type response.

The important variables in this context are: “latitude” and “longitude”.
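
In JavaScript terms, picking that action out of the response takes only a few lines. Here is a sketch, where data stands for the parsed server JSON (the variable name is mine, not the chromebot's):

// Sketch: find the map action in the parsed server response.
var mapAction = data.actions.find(function (action) {
    return action.type === "map";
});
if (mapAction) {
    var latitude = Number(mapAction.latitude);
    var longitude = Number(mapAction.longitude);
    var zoom = Number(mapAction.zoom);
}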

Understanding the Codebase

Now I had to figure out the new pattern of adding response types to the SUSI.AI Chromebot.

After having a talk with @ms10398 I figured out the route map.

The above image shows the correct flow of JavaScript code that generates the response. After this, I was good to go and start my work.

Adding the Map-Type Response

To start with, I chose LEAFLET.JS as the JavaScript library that would be used to create maps.

  • So I added the LEAFLET.JS to the JS folder.
  • Now changes were made to the “index.html” file

 

<link rel="stylesheet" href="https://unpkg.com/leaflet@1.3.1/dist/leaflet.css" />
<script src="js/leaflet.js"></script>

 

Appropriate CSS was added, along with a link to leaflet.js.

  • Adding CSS to the mapClass

.mapClass {
    height: 200px;
    width: 200px;
}

 

  • Generating Maps with dynamic IDs

This part was where I had to apply some thought: to add the map to any div, the div needs a proper and unique ID, so a way to generate unique IDs for the divs without using any external source had to be found.

I came up with the idea of using a timestamp, as it will always be unique.

var timeStamp = Date.now().toString();

 

Then I created the composeReplyMap() function.

function composeReplyMap(response, action) {
    var newDiv = messages.childNodes[messages.childElementCount];
    var mapDiv = document.createElement("div");
    var mapDivId = Date.now().toString();
    mapDiv.setAttribute("id", mapDivId);
    mapDiv.setAttribute("class", "mapClass");
    newDiv.appendChild(mapDiv);
    messages.appendChild(newDiv);
    var newMap = L.map(mapDivId).setView([Number(action.latitude), Number(action.longitude)], 13);
    L.tileLayer("https://api.tiles.mapbox.com/v4/{id}/{z}/{x}/{y}.png?access_token={accessToken}", {
        /*
            This part contains the data for the API call.
        */
    }).addTo(newMap);
    response.isMap = true;
    response.newMap = mapDiv;
    return response;
}

 

The complete code can be found: here

At last, after adding these snippets of code, we were able to generate the map type response for SUSI.AI Chromebot.

A gif showing the map type response in action.

 


Toggling Voice On/Off in SUSI Chromebot

SUSI Chromebot has a lot of features that make it one of the best projects of FOSSASIA.

Recently, voice/speech was added to SUSI Chromebot, but there was no option to control whether speech output was wanted or not.

The latest addition to SUSI Chromebot is toggling the voice of SUSI on or off.

How was it achieved?

Toggling voice for SUSI required adding a button and a snippet of JavaScript code to the main JS file. The code takes care of whether the voice is to be toggled on or off.

I started off by adding a button to the main HTML file.

<a href="javascript:void(0)" id="speak" style="color: white"><i class="material-icons" id="speak-icon">volume_up</i></a>

The above snippet of HTML code adds a voice button to the top bar of chromebot.

Then came the major part, where the JavaScript code was added to give the button its functionality.

var shouldSpeak = true;

I started off by creating a variable called shouldSpeak, which determines whether or not SUSI should use Chrome's speech synthesis API to speak.

Then I changed the speakOut() function and added another parameter to it.

function speakOut(msg, speak = false) {
    if (speak) {
        var voiceMsg = new SpeechSynthesisUtterance(msg);
        window.speechSynthesis.speak(voiceMsg);
    }
}

The above code makes sure that SUSI speaks only when the speak parameter is set to true.
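
A hypothetical call site then simply forwards the global flag, so every reply respects the toggle (susiMessage stands for whatever reply text the bot produced):

// Sketch: the reply is spoken only if the user has voice turned on.
speakOut(susiMessage, shouldSpeak);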

Then event listeners were added to the button to hook up the functionality.

document.getElementById('speak').addEventListener('click', changeSpeak);

It adds a click event to "speak" and associates it with the function changeSpeak.

Now the function changeSpeak is created as follows. It toggles the on/off mechanism of voice in SUSI Chromebot.

function changeSpeak() {
    shouldSpeak = !shouldSpeak;
    var SpeakIcon = document.getElementById('speak-icon');
    if (!shouldSpeak) {
        SpeakIcon.innerText = "volume_off";
    } else {
        SpeakIcon.innerText = "volume_up";
    }
    console.log('Should be speaking? ' + shouldSpeak);
}

Every time the user clicks on the icon to toggle voice on or off, the icon must also change, and this functionality is taken care of by the above piece of code.


Setting up SUSI Desktop Locally for Development and Using Webview Tag and Adding Event Listeners

SUSI Desktop is a cross-platform desktop application based on Electron which presently uses chat.susi.ai as a submodule and allows users to interact with SUSI right from their desktop.

Any Electron app essentially comprises the following components (a minimal sketch follows the list):

    • Main Process (Managing windows and other interactions with the operating system)
    • Renderer Process (Manage the view inside the BrowserWindow)
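
As a minimal sketch (illustrative only, not SUSI Desktop's actual code), the split looks like this: the main process creates a BrowserWindow, and the page it loads runs in the renderer process.

// main.js: a minimal, hypothetical Electron main process
const { app, BrowserWindow } = require('electron');

app.on('ready', () => {
    // The main process owns the window and talks to the operating system...
    const win = new BrowserWindow({ width: 800, height: 600 });
    // ...while the page loaded here runs in the renderer process.
    win.loadURL(`file://${__dirname}/index.html`);
});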

Steps to set up the development environment

      • Clone the repo locally.
$ git clone https://github.com/fossasia/susi_desktop.git
$ cd susi_desktop
      • Install the dependencies listed in package.json file.
$ npm install
      • Start the app using the start script.
$ npm start

Structure of the project

The project was restructured to ensure that the working environments of the main and renderer processes are separate, which makes the codebase easier to read and debug. This is how the current project is structured.

The root directory of the project contains another directory, 'app', which contains our Electron application. Then we have a package.json which holds the information about the project and the modules required for building it, and then there are other GitHub helper files.

Inside the app directory-

  • Main – Files for managing the main process of the app
  • Renderer – Files for managing the renderer process of the app
  • Resources – Icons for the app and the tray/media files
  • Webview Tag

    Displays external web content in an isolated frame and process. This is used to load chat.susi.ai in a BrowserWindow as:

    <webview src="https://chat.susi.ai/"></webview>
    

    Adding event listeners to the app

    Various electron APIs were used to give a native feel to the application.

  • Send focus to the window WebContents on focussing the app window.
  • win.on('focus', () => {
    	win.webContents.send('focus');
    });
    
  • Display the window only once the DOM has completely loaded.
  • const page = mainWindow.webContents;
    ...
    page.on('dom-ready', () => {
    	mainWindow.show();
    });
    
  • Display the window on ‘ready-to-show’ event
  • win.once('ready-to-show', () => {
    	win.show();
    });
    

    Resources

    1. A quick article to understand electron’s main and renderer process by Cameron Nokes at Medium link
    2. Official documentation about the webview tag at https://electron.atom.io/docs/api/webview-tag/
    3. Read more about electron processes at https://electronjs.org/docs/glossary#process
    4. SUSI Desktop repository at https://github.com/fossasia/susi_desktop.


    Link Preview Holder on SUSI.AI Android Chat

    SUSI Android contains several view holders which bind views based on their type, and one of them is LinkPreviewHolder. As the name suggests, it is used for previewing links in the chat window. As soon as it receives a link as input, it inflates a link preview layout. The problem was that whenever a user entered a link as input, the app crashed: the holder tried to inflate a component that doesn't exist in the view given to the ViewHolder, which threw a NullPointerException. The workaround for fixing this bug was to inflate the layout and its components based on the type of sender. Let's see how all the functionality is implemented in the LinkPreviewHolder class.

    Components of LinkPreviewHolder

    @BindView(R.id.text)
    public TextView text;
    @BindView(R.id.background_layout)
    public LinearLayout backgroundLayout;
    @BindView(R.id.link_preview_image)
    public ImageView previewImageView;
    @BindView(R.id.link_preview_title)
    public TextView titleTextView;
    @BindView(R.id.link_preview_description)
    public TextView descriptionTextView;
    @BindView(R.id.timestamp)
    public TextView timestampTextView;
    @BindView(R.id.preview_layout)
    public LinearLayout previewLayout;
    @Nullable @BindView(R.id.received_tick)
    public ImageView receivedTick;
    @Nullable
    @BindView(R.id.thumbs_up)
    protected ImageView thumbsUp;
    @Nullable
    @BindView(R.id.thumbs_down)
    protected ImageView thumbsDown;

    Currently it binds the view components to the associated ids using the @BindView(id) annotation.

    Instantiates the class with a constructor

    public LinkPreviewViewHolder(View itemView , ClickListener listener) {
       super(itemView, listener);
       realm = Realm.getDefaultInstance();
       ButterKnife.bind(this,itemView);
    }

    Here it binds the current class to the view passed in the constructor using ButterKnife and initializes the ClickListener.

    Now the components described above are set in the setView function:

    Spanned answerText;
    text.setLinksClickable(true);
    text.setMovementMethod(LinkMovementMethod.getInstance());
    if (android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
        answerText = Html.fromHtml(model.getContent(), Html.FROM_HTML_MODE_COMPACT);
    } else {
        answerText = Html.fromHtml(model.getContent());
    }

    This sets the TextView inside the view with a clickable link. Version checking has also been put in place to check the Android version (above Nougat or not) and call the appropriate function.

    This ViewHolder inflates different components based on who sent the message. If the message is a user query containing a link, an extra set of components is inflated that need not be inflated for the response, apart from the basic layout.

    if (viewType == USER_WITHLINK) {
       if (model.getIsDelivered())
           receivedTick.setImageResource(R.drawable.ic_check);
       else
           receivedTick.setImageResource(R.drawable.ic_clock);
    }

    In the above code the received-tick image resource is set according to whether the query sent by the user has been delivered or not. These components only get initialized when the user has sent a link.

    Now comes the configuration for the result obtained from the query. Every skill has a rating associated with it, so a counter needs to be set for rating the skill, positive or negative. This code should execute only for the response and not for the query part. This was the reason the app crashed: the logic tried to inflate components that belong to the response into a view that belongs to the query, which threw a NullPointerException. So the logic of the response needs to be separated from that of the query.

    if (viewType != USER_WITHLINK) {
       if(model.getSkillLocation().isEmpty()){
           thumbsUp.setVisibility(View.GONE);
           thumbsDown.setVisibility(View.GONE);
       } else {
           thumbsUp.setVisibility(View.VISIBLE);
           thumbsDown.setVisibility(View.VISIBLE);
       }
    
       if(model.isPositiveRated()){
           thumbsUp.setImageResource(R.drawable.thumbs_up_solid);
       } else {
           thumbsUp.setImageResource(R.drawable.thumbs_up_outline);
       }
    
       if(model.isNegativeRated()){
           thumbsDown.setImageResource(R.drawable.thumbs_down_solid);
       } else {
           thumbsDown.setImageResource(R.drawable.thumbs_down_outline);
       }
    
       thumbsUp.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View view) { . . . }
       });
    
    
    
       thumbsDown.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View view) { . . . }
       });
    
    }

    As you can see in the above code, it inflates the rating components (thumbsUp and thumbsDown) for the view of the SUSI.AI response and sets the click listeners for the rating buttons. Then, in the code below, it previews the link and commits the data to the database through the WebLink class, using Realm.

    LinkPreviewCallback linkPreviewCallback = new LinkPreviewCallback() {
       @Override
       public void onPre() { . . . }
    
       @Override
       public void onPos(final SourceContent sourceContent, boolean b) { . . . }
    }

    This method calls the API and sets the rating of the skill on the server. On a successful result it changes the thumb icon, alters the rating, and commits those changes to the database using Realm.

    private void rateSusiSkill(final String polarity, String locationUrl, final Context context) {..}


    SUSI.AI Chrome Bot and Web Speech: Integrating Speech Synthesis and Recognition

    Susi Chrome Bot is a Chrome extension which is used to communicate with SUSI AI. The advantage of Chrome extensions is that they are readily accessible, letting the user perform certain tasks without having to move to another tab or site.

    In this blog post, we will be going through the process of integrating the web speech API to SUSI Chromebot.

    Web Speech API

    Web Speech API enables web apps to use voice data. The Web Speech API has two components:

    Speech Recognition:  Speech recognition gives web apps the ability to recognize voice data from an audio source. Speech recognition provides the speech-to-text service.

    Speech Synthesis: Speech synthesis provides the text-to-speech services for the web apps.

    Integrating speech synthesis and speech recognition in SUSI Chromebot

    Chrome provides the webkitSpeechRecognition() interface which we will use for our speech recognition tasks.

    var recognition = new webkitSpeechRecognition();
    

     

    Now we have a speech recognition instance, recognition. Let us define the necessary checks for error detection and for resetting the recognizer.

    var recognizing;
    
    function reset() {
    recognizing = false;
    }
    
    recognition.onerror = function(e){
    console.log(e.error);
    };
    
    recognition.onend = function(){
    reset();
    };
    

     

    We now define the toggleStartStop() function, which checks if recognition is already being performed, in which case it stops recognition and resets the recognizer; otherwise, it starts recognition.

    function toggleStartStop() {
        if (recognizing) {
          recognition.stop();
          reset();
        } else {
          recognition.start();
          recognizing = true;
        }
    }
    

     

    We can then attach an event listener to a mic button which calls the toggleStartStop() function to start or stop our speech recognition.

    mic.addEventListener("click", function () {
        toggleStartStop();
    });
    

     

    Finally, when the speech recognizer has some results it calls the onresult event handler. We’ll use this event handler to catch the results returned.

    recognition.onresult = function (event) {
        for (var i = event.resultIndex; i < event.results.length; ++i) {
          if (event.results[i].isFinal) {
            textarea.value = event.results[i][0].transcript;
            submitForm();
          }
        }
    };
    

     

    The above code snippet tests the results produced by the speech recognizer, and if a result is final, it sets the textarea value to the transcript and submits it to the backend.

    One problem that we might face is the extension not being able to access the microphone. This can be resolved by asking for microphone access from an external tab/window/iframe. For SUSI Chromebot this is done using an external tab: pressing the settings icon opens a new tab which then asks for microphone access from the user. This needs to be done only once, so it does not cause a lot of trouble.

    setting.addEventListener("click", function () {
        chrome.tabs.create({
            url: chrome.runtime.getURL("options.html")
        });
    });

    // In the options page, request microphone access once:
    navigator.webkitGetUserMedia({
        audio: true
    }, function (stream) {
        stream.stop();
    }, function () {
        console.log('no access');
    });
    

     

    In contrast to speech recognition, speech synthesis is very easy to implement.

    function speakOutput(msg){
        var voiceMsg = new SpeechSynthesisUtterance(msg);
        window.speechSynthesis.speak(voiceMsg);
    }
    

     

    This function takes a message as input, declares a new SpeechSynthesisUtterance instance and then calls the speak method to convert the text message to voice.
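
    For example, calling it with any string speaks that string aloud:

    speakOutput("Hello, this is SUSI!");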

    There are many properties and attributes that come with this speech recognition and synthesis interface. This blog post only introduces the very basics.


    Enhancing SUSI Desktop to Display a Loading Animation and Auto-Hide Menu Bar by Default

    SUSI Desktop is a cross-platform desktop application based on Electron which presently uses chat.susi.ai as a submodule and allows users to interact with SUSI right from their desktop. The benefit of using chat.susi.ai as a submodule is that the app inherits all the features the webapp offers and serves them in a nicely built native application.

    Display a loading animation during DOM load.

    Electron apps should give a native feel rather than feeling like they are just rendering some DOM, so it would be great to display a loading animation while the web content is actually loading; the gif below depicts how I implemented that.
    Electron provides a nice, easy to use API for handling BrowserWindow and WebContents events. I read through the official docs and came up with a simple solution, as depicted in the snippet below.

    onload = function () {
    	const webview = document.querySelector('webview');
    	const loading = document.querySelector('#loading');
    
    	function onStopLoad() {
    		loading.classList.add('hide');
    	}
    
    	function onStartLoad() {
    		loading.classList.remove('hide');
    	}
    
    	webview.addEventListener('did-stop-loading', onStopLoad);
    	webview.addEventListener('did-start-loading', onStartLoad);
    };
    

    Hiding menu bar as default

    Menu bars are useful, but annoying since they take up space in the main window, so I hid the menu bar by default; users can toggle its display by pressing the Alt key at any time. I used the autoHideMenuBar property of the BrowserWindow class while creating the window object to achieve this.

    const win = new BrowserWindow({
        show: false,
        autoHideMenuBar: true
    });
    

    Resources

    1. More information about BrowserWindow class in the official documentation at electron.atom.io.
    2. Follow a quick tutorial to kickstart creating apps with electron at https://www.youtube.com/watch?v=jKzBJAowmGg.
    3. SUSI Desktop repository at https://github.com/fossasia/susi_desktop.
