How SUSI WebChat Implements RSS Action Type

SUSI.AI now has a new action type called RSS. As the name suggests, SUSI is now capable of searching the internet to answer user queries. This web search can be performed either on the client side or the server side. When the web search is performed on the client side, it is denoted by the websearch action type. When the web search is performed by the server itself, it is denoted by the rss action type. The server searches the internet and, using RSS feeds, returns an array of objects containing:

  • Title
  • Description
  • Link
  • Count

Each object is displayed as a result tile and all the results are rendered as swipeable tiles.

Let's visit SUSI WebChat and try it out.

Query: Google
Response: API response

SUSI WebChat uses the same code abstraction to render websearch and rss results, as both are web search results; the only difference is where the search is performed, i.e. on the client side or the server side.
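As a rough sketch (illustrative, not the exact WebChat source), the client can branch on the action type while walking through the actions array of a response:

// Illustrative dispatch on SUSI action types; `data` is the parsed JSON response
data.answers[0].actions.forEach((action) => {
  switch (action.type) {
    case 'answer':
      // plain text, e.g. "I found this on the web:"
      break;
    case 'websearch':
      // the client performs the web search itself
      break;
    case 'rss':
      // the server already searched; render answers[0].data as result tiles
      break;
  }
});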

How does the client know that it is a rss action type response?

"actions": [
  {
    "type": "answer",
    "expression": "I found this on the web:"
  },
  {
    "type": "rss",
    "title": "title",
    "description": "description",
    "link": "link",
    "count": 3
  }
],

The actions attribute in the JSON API response has information about the action type and the keys to be parsed for title, link and description.

  • The type attribute tells that the action type is rss.
  • The title attribute tells that the title for each result is under the key title in each object of answers[0].data.
  • Similarly, the keys to be parsed for description and link are description and link respectively.
  • The count attribute tells the client how many results to display.

We then loop through the objects in answers[0].data and from each object extract the title, description and link.

let rssKeys = Object.assign({}, data.answers[0].actions[index]);

delete rssKeys.type;

let count = -1;

if(rssKeys.hasOwnProperty('count')){
  count = rssKeys.count;
  delete rssKeys.count;
}

let rssTiles = getRSSTiles(rssKeys,data.answers[0].data,count);

We use the count attribute and the length of answers[0].data to fix the number of results to be displayed.

// Fetch RSS data

export function getRSSTiles(rssKeys,rssData,count){

  let parseKeys = Object.keys(rssKeys);
  let rssTiles = [];
  let tilesLimit = rssData.length;

  if(count > -1){
    tilesLimit = Math.min(count,rssData.length);
  }

  for(var i=0; i<tilesLimit; i++){
    let respData = rssData[i];
    let tileData = {};

    parseKeys.forEach((rssKey,j)=>{
      tileData[rssKey] = respData[rssKeys[rssKey]];
    });

    rssTiles.push(tileData);
  }

  return rssTiles;

}

We now have our list of objects with the information parsed from the response. We then pass this list to our renderTiles function, where each object in the rssTiles array returned from getRSSTiles is converted into a Paper tile with the title and description, and the entire tile is hyperlinked to the given link using the Material UI Paper component and a few CSS attributes.

// Draw Tiles for Websearch RSS data

export function drawTiles(tilesData){

let resultTiles = tilesData.map((tile,i) => {

  return(
    <div key={i}>
      <MuiThemeProvider>
        <Paper zDepth={0} className='tile'>
          <a rel='noopener noreferrer'
          href={tile.link} target='_blank'
          className='tile-anchor'>
            {tile.icon &&
            (<div className='tile-img-container'>
               <img src={tile.icon}
               className='tile-img' alt=''/>
             </div>
            )}
            <div className='tile-text'>
              <p className='tile-title'>
                <strong>
                  {processText(tile.title,'websearch-rss')}
                </strong>
              </p>
              {processText(tile.description,'websearch-rss')}
            </div>
          </a>
        </Paper>
      </MuiThemeProvider>
    </div>
  );

});

return resultTiles;
}

The tile title and description are also processed for HTML special entities and emojis using the processText function.

case 'websearch-rss': {
  let htmlText = entities.decode(text);
  processedText = <Emojify>{htmlText}</Emojify>;
  break;
}

We now display our result tiles as a carousel-like swipeable display using react-slick. We initialise our slider with a few default options specifying the swipe speed and the slider UI.

import Slider from 'react-slick';

// Render Websearch RSS tiles

export function renderTiles(tiles){

  if(tiles.length === 0){
    let noResultFound = 'NO Results Found';
    return(<center>{noResultFound}</center>);
  }

  let resultTiles = drawTiles(tiles);
  
  var settings = {
    speed: 500,
    slidesToShow: 3,
    slidesToScroll: 1,
    swipeToSlide:true,
    swipe:true,
    arrows:false
  };

  return(
    <Slider {...settings}>
      {resultTiles}
    </Slider>
  );
}

We finally add CSS attributes to style our result tiles, adding overflow handling for text while maintaining a standard width for all tiles. We also add some CSS for our carousel display to show multiple tiles instead of one by default; this is done by adding some margin to the child components in the slider.

.slick-slide{
  margin: 0 10px;
}

.slick-list{
  max-height: 100px;
}

We finally have our swipeable display of rss data tiles, each hyperlinked to the source of the data. When the user clicks on a tile, they are redirected to the link in a new window, i.e. the entire tile is hyperlinked. And when there are no results to display, we show a `NO Results Found` message.

The complete code can be found in the SUSI WebChat repo. Feel free to contribute!


Landing Page for Phimpme Android

A landing page for any app is very important for its distribution. It provides all the relevant information about the app, its download/source links and screenshots of the app to preview its views and features. As our Phimpme app has now reached a milestone, we decided to bring out a landing page for it.

A landing page is a simple static website which gives relevant information and links on a single page. To develop a landing page, we can use any of several frameworks, for example Bootstrap, or basic pages using HTML and CSS. Since GitHub provides a free hosting service, GitHub Pages, I decided to work with Jekyll, which is easy to customize and can be hosted with GitHub Pages.

How I did it in Phimpme

  • Search for Open Source theme

There are various open source themes available for Jekyll, though most of them are for blogs. I found an awesome app landing page theme that fulfils our requirements of showing a screenshot, a brief description and links.

  • Create separate branch for Landing page

GitHub Pages works from a separate branch, so I first created an orphan branch in the main Phimpme repository and cleared the working directory. Orphan branches are branches that do not have any commit history, which is required here because the main repository's history is irrelevant to the gh-pages branch.

git checkout --orphan gh-pages
git rm --cached -r .

  • Test on a fork repo

Now rebase your branch with the gh-pages branch created on the main repository and add the complete Jekyll code there. To reflect the changes on the github.io page, you need to enable GitHub Pages in the settings.

  • Enable gh-pages from settings

Go to the Settings of your repo.

Scroll down to the gh-pages section.

Here things are clear: we need to select the source branch from which the gh-pages site is hosted. We can directly choose some open source themes built over Jekyll, and there is also an option to add a custom domain, such as phimp.me in our case.

All this I am writing from the developer's perspective, where the developer has limited access to the repository and cannot access the settings of the main repo. In that case, push your code by selecting gh-pages as the base branch.

  • Problems I faced

The usual problem any developer faces here is creating an orphan branch in the main repo, since this requires removing all the files as well. Also, to face fewer problems, I recommend using another editor such as Sublime Text or Atom rather than Android Studio, because after changing branches Android Studio syncs and rebuilds the project again, which takes a lot of time.


JSON Deserialization Using Jackson in Open Event Android App

The Open Event project uses the JSON format for transferring event information like tracks, sessions, microlocations and others. The event exported in the zip format from the Open Event server also contains the data in JSON format. The Open Event Android application uses this JSON data. Before we use this data in the app, we have to parse it to get Java objects that can be used for populating views. Deserialization is the process of converting JSON data to Java objects. In this post I explain how to deserialize JSON data using Jackson.

1. Add dependency

In order to use Jackson in your app, add the following dependencies to your app module's build.gradle file.

dependencies {
	compile 'com.fasterxml.jackson.core:jackson-core:2.8.9'
	compile 'com.fasterxml.jackson.core:jackson-annotations:2.8.9'
	compile 'com.fasterxml.jackson.core:jackson-databind:2.8.9'
}

2.  Define entity model of data

In Open Event Android we have many models, like event, session, track, microlocation, speaker etc. Here I am only defining the Track model because of its simplicity.

public class Track {

	private int id;
	private String name;
	private String description;
	private String color;
	@JsonProperty("font-color")       
	private String fontColor;
    
	//getters and setters
}

Here, if the property name is the same as the JSON attribute key, there is no need to add the JsonProperty annotation, as with the id, name and color properties. But if the property name is different from the JSON attribute key, as with fontColor and "font-color", then the JsonProperty annotation is necessary.

3.  Create sample JSON data

Let's create the sample JSON data we want to deserialize.

{
        "id": 273,
        "name": "Android",
        "description": "Sample track",
        "color": "#94868c",
        "font-color": "#000000"
}

4.  Deserialize using ObjectMapper

ObjectMapper is Jackson's serializer/deserializer. Its readValue() method is used for simple deserialization and takes two parameters: the JSON data we want to deserialize and the model entity class. Create an ObjectMapper object and initialize it.

ObjectMapper objectMapper = new ObjectMapper();

Now create a Model entity object and initialize it with deserialized data from ObjectMapper’s readValue() method.

Track track = objectMapper.readValue(json, Track.class);

So we have converted JSON data into the Java object.

Jackson is a very powerful library for JSON serialization and deserialization. To learn more about Jackson's features, see the official documentation.


Create Scraper in Javascript for Loklak Scraper JS

Loklak Scraper JS is the latest repository in the Loklak project. It is one of the interesting projects because of the expected benefits of Javascript in web scraping. It runs on the Node.js engine and is used in the Loklak Wok project as a bundled package. It has the potential to be used in other repositories and enhance them.

Scraping in Python is easy (at least for Pythonistas): one just needs to import the Requests library and the BeautifulSoup library (with lxml as a better option), write a few lines using Requests to fetch the webpage and a few lines of bs4 to walk through the HTML and scrape data. This sums up to less than a hundred lines of code. Javascript, by comparison, isn't as easily readable (at least to me), but it has an advantage: it can easily deal with Javascript in the pages we are scraping. This is one of the motives for which the Loklak Scraper JS repository was created, and we contributed and worked on it.

I recently coded a Javascript scraper in the loklak_scraper_js repository. While coding, I found its libraries similar to the libraries I use in Python. Therefore, this blog shows Pythonistas how they can start scraping in Javascript as soon as they finish reading, and also contribute to Loklak Scraper JS.

First, replace the Python interpreter and the Requests and BeautifulSoup libraries with the Node JS interpreter and the Request and Cheerio JS libraries.

1) Node JS interpreter: the Node JS interpreter is used to run Javascript files. This is different from Python, as it deals with the whole project instead of a single module as in Python's case. The Node version most compatible with most of the libraries is 6.0.0, whereas the latest available version (as I checked) is 8.0.0.

TIP: use the `--save` flag with npm while installing a library, so it gets recorded in package.json.

2) Request library: this is used to load the webpage to be processed, similar to the one in Python.

The request-promise library, a wrapper around Request built on the Bluebird promise library, improves readability and makes the code cleaner.
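For instance, a minimal sketch of the same kind of fetch using request-promise (assuming it is installed with `npm install request-promise --save`):

// Promise-based fetch; the URL is the one scraped later in this post
var rp = require('request-promise');

rp('http://www.timeanddate.com/worldclock/results.html?query=London')
  .then(function (html) {
    // scrape data from `html` here
  })
  .catch(function (error) {
    console.log('Error: ' + error);
  });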

 

3) Cheerio library: a Pythonista (a rookie one) can call it the twin of the BeautifulSoup library, but it is faster and is Javascript. Its selector implementation is nearly identical to jQuery's.

Let us code a basic Javascript scraper. I will take the TimeAndDate scraper from loklak_scraper_js as the example here. It takes a place as input and outputs its local time.

Step#1: Fetch the HTML of the webpage with the help of the Request library.

We pass the url to the request function to fetch the webpage, and the response body is saved in the `html` variable. The scrapeTimeAndDate() function then scrapes the data from the html:

var request = require('request');

var url = "http://www.timeanddate.com/worldclock/results.html?query=London";

request(url, function(error, response, body) {
  if (error) {
    console.log("Error: " + error);
    process.exit(-1);
  }
  // save the fetched page body and hand it to the scraping function
  html = body;
  scrapeTimeAndDate();
});

 

Step#2: Scrape the important data from the html using Cheerio JS.

The list of locations with their dates and times is embedded in a table tag, so we will iterate through the <td> tags and extract the text.

  a) Load the html into Cheerio, as we do in BeautifulSoup.

In Python

soup = BeautifulSoup(html,'html5lib')

 

In Cheerio JS

var cheerio = require('cheerio');
$ = cheerio.load(html);

 

  b) This line finds the tr tags inside the table tag.

var htmlTime = $("table").find('tr');

 

  c) Iterate through the td tags' data using the each() function. This function acts like a loop in Python, iterating through the list of elements from which data will be extracted.

htmlTime.each(function (index, element) {
  // in python we would loop: `for element in elements:`
  tag = $(element).find("td");    // in python, `tag = soup.find_all('td')`

  if (tag.text() != "") {
    //
    // EXTRACT DATA (see the next step)
    //
  } else {
    // go to next td tag
    tag = tag.next();
  }
});

 

  d) To extract data

Cheerio JS loads the html and traverses it using the DOM model, which treats the html as a tree. So, go to the tag and scrape the data you want.

// extract location (text) enclosed in the tag
location = tag.text();

// go to next tag
tag = tag.next();

// extract time (text) enclosed in the tag
time = tag.text();

// save in an object, like a dictionary in python
loc_list["location"] = location;
loc_list["time"] = time;

 

Some other useful functions:

1) $(selector, [context], [root])

Returns the object for selector (any tag, with a class or id) searched inside root.

2) $("table").attr(name, value)

.attr(name) returns the value of the attribute name, while .attr(name, value) sets the attribute to `value`.

3) obj.html()

Returns the html enclosed in the tags.

For more, see the Cheerio documentation.

Step#3: Execute the scraper using the command:

node <scrapername>.js

 

I hope this blog is able to show how to scrape in Javascript by finding similarities with Python.


Data Scraping using Python with BeautifulSoup4 for Open Event samples

I was creating samples for Open Event Android and Open Event Webapp when the idea of web scraping through scripts struck me.

I chose Python for web scraping. But what's more to using Python other than its user-friendliness, easy syntax and speed? It's the wide variety of open-source libraries that come along with it!

One of them is the BeautifulSoup library for extracting data from XML and HTML tags and files. It provides ways to search and sort through webpages, find the specific elements you need and extract them into objects of your preference.

Apart from BeautifulSoup, another module that is easily neglected but of great use is urllib2. This module opens the URLs that are fed to it.

Hence, with a combination of the above 2 Python modules, I was able to scrape the main sample data, like speakers, sessions, tracks, sponsors and so on, for PyCon 2017 (a sample I created recently) in a quicker and more efficient way.

I will now talk about these modules in greater detail and their use in my web scraping scripts.

Web Scraping with BeautifulSoup

Start by installing the BeautifulSoup module in your Python environment. I believe it is best to install it using pip. Run the following command to install BeautifulSoup:

pip install beautifulsoup4

Next up, you need to import the module to your script by adding the following line:

from bs4 import BeautifulSoup

Now you need to open a webpage using the module. For that we first have to open the URL of the page that is being scraped; we use urllib2 to do that.

import urllib2
site=urllib2.urlopen("https://us.pycon.org/2017/sponsors/")

Next we use BeautifulSoup to get the contents of the webpage in the following way:

website=BeautifulSoup(site)

Suppose that next I want to obtain the data inside all the <div> tags in the webpage that have class="sponsor-level".

I will use BeautifulSoup along with the 'find_all' method to do so:

divs=website.find_all("div",{'class':'sponsor-level'})

Here, divs will now be a special list-type element containing all the HTML in the div tags of class='sponsor-level'. Now I can do whatever I want with this list: parse it in any way, or store it in any kind of data type without difficulty.

BeautifulSoup also helps in accessing the children tags or elements of different HTML or XML tags using the 'dot' (.) operator. The following example will make this clearer:

tag = soup.b

Here, 'tag' will contain the first <b> element inside the document stored in the 'soup' variable.

To access attributes of HTML or XML tags, there is a special way, as demonstrated below.

To access the 'href' attribute of the first <a> tag in the webpage:

soup.find_all("a")[0]['href']

A sample script…

Following is a script that I used to obtain sponsors from the PyCon 2017 site in json format.

import urllib2
from bs4 import BeautifulSoup

site = urllib2.urlopen("https://us.pycon.org/2017/sponsors/")
s = BeautifulSoup(site)

divs = s.find_all("div", {'class': 'sponsor-level'})

spons = []

for level in divs:
    levelname = level.find_all("h2")[0].string
    level_sponsors = level.find_all("div", {'class': 'sponsor'})

    for level_sponsor in level_sponsors:
        anchor = level_sponsor.find_all("a")

        dictionary = {}
        dictionary['id'] = ""
        dictionary['name'] = anchor[1].string
        dictionary['level'] = ""
        dictionary['sponsor_Type'] = levelname
        dictionary['url'] = anchor[0]['href']
        dictionary['description'] = level_sponsor.find_all("p")[1].string

        spons.append(dictionary)

print spons

There are still a lot of other functionalities of BeautifulSoup that can make your life easier. These can be found in the official BeautifulSoup documentation.


Using SUSI AI Accounting Object to Write User Settings

SUSI Server uses a DAO in which the accounting object is stored as a JsonTray. SUSI clients use this accounting object for user settings data. In this blog post we will focus on how to use the accounting JsonTray to write user settings, so that a client can use such an endpoint to store user-related settings on the SUSI server. The SUSI server provides the required API endpoints to its web and mobile clients. Before starting with the implementation of the servlet, let's take a look at the Accounting.java file to check how the SUSI server stores the accounting data.

public class Accounting {

        private JsonTray parent;
        private JSONObject json;
        private UserRequests requests;
        private ClientIdentity identity;
    ...
}

 

JsonTray is a class to hold the volume as <String, JsonObject> pairs in a JSON file. The UserRequests class holds all the user activities. The ClientIdentity class extends the base class Client and provides an identification string for authentication of users. Now that we have understood accounting in the SUSI server, let's proceed to make an API endpoint to store web client user settings. To make an endpoint we will use the HttpServlet class, which provides methods such as doGet and doPost for handling HTTP-specific services. We will inherit our ChangeUserSettings class from AbstractAPIHandler and implement the APIHandler interface. In SUSI Server, AbstractAPIHandler extends HttpServlet and implements doGet and doPost; all servlets in SUSI Server extend this class to increase code reusability.

Since a user has to store their settings, set the minimal base role required to access this endpoint to USER. Apart from USER there are ADMIN and ANONYMOUS roles too.

   @Override
    public BaseUserRole getMinimalBaseUserRole() {
        return BaseUserRole.USER;
    }

Next, set the path for this endpoint by overriding the getAPIPath() method.

 @Override
    public String getAPIPath() {
        return "/aaa/changeUserSettings.json";
    }

We won't be dealing with default permissions, so null can be returned.

  @Override
    public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
        return null;
    }

Next we implement the serviceImpl method, which takes four parameters: the query, the response, the authorization and the default permissions.

@Override
    public ServiceResponse serviceImpl(Query query, HttpServletResponse response, Authorization authorization, JsonObjectWithDefault permissions) throws APIException {
       String key = query.get("key", null);
       String value =query.get("value", null);
       if (key == null || value == null ) {
           throw new APIException(400, "Bad Service call, key or value parameters not provided");
       } else {
           if (authorization.getIdentity() == null) {
               throw new APIException(400, "Specified User Setting not found, ensure you are logged in");
           } else {
               Accounting accounting = DAO.getAccounting(authorization.getIdentity());
               JSONObject jsonObject = new JSONObject();
               jsonObject.put(key, value);
               if (accounting.getJSON().has("settings")) {
                   accounting.getJSON().getJSONObject("settings").put(key, value);
               } else {
                   accounting.getJSON().put("settings", jsonObject);
               }
               JSONObject result = new JSONObject();
               result.put("message", "You successfully changed settings to your account!");
               return new ServiceResponse(result);
           }
       }

    }

We store the setting in a JSON object using key-value pairs. Take the values from the user using query.get("param", "default value") with the default value set to null, so that if the parameters are not present the servlet can return "Bad service call". To get the accounting object, the user identity string given by the authorization.getIdentity() method is used. Now check whether a user settings object is already present: if yes, overwrite it, and if not, append a new JSON object with the received key and value. Finally, return the success message through a ServiceResponse.

Proceed to test the endpoint at http://127.0.0.1:4000/aaa/changeUserSettings.json?key=theme&value=dark and check that the value is stored using http://127.0.0.1:4000/aaa/listUserSettings.json.
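As a quick illustration, a web client could call this endpoint as below (a sketch only; it assumes the user is already logged in so that authorization.getIdentity() resolves, and that session information travels with the request):

// Hypothetical client-side call to the endpoint implemented above
fetch('http://127.0.0.1:4000/aaa/changeUserSettings.json?key=theme&value=dark', {
  credentials: 'include' // assumption: login/session info is sent along
})
  .then(response => response.json())
  .then(json => console.log(json.message));
// logs: "You successfully changed settings to your account!"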

You have successfully created an endpoint to store user settings and enhanced SUSI Server. Take a look and contribute to SUSI Server.


Implementing Search Feature In SUSI Web Chat

SUSI WebChat now has a search feature. Users now have an option to filter or find messages. The user can enter a keyword or phrase in the search field, and all the matching messages are highlighted with the given keyword; the user can then navigate through the results.

Let's visit SUSI WebChat and try it out.

  1. Clicking on the search icon on the top right corner of the chat app screen, we’ll see a search field expand to the left from the search icon.
  2. Type any word or phrase and you see that all the matches are highlighted in yellow, while the currently focused message is highlighted in orange.
  3. We can use the up and down arrows to navigate between previous and recent messages containing the search string.
  4. We can also choose to search case sensitively using the drop down provided by clicking on the vertical dots icon to the right of the search component.
  5. Click on the `X` icon or the search icon to exit from the search mode. We again see that the search field contracts to the right, back to its initial state as a search icon.

How does the search feature work?

We first make our search component with a search field, navigation arrow icon buttons and an exit icon button. We then listen to input changes in our search field using the onChange function. On input change, we collect the search string and iterate through all the existing messages, checking whether each message contains the search string; if present, we mark that message before passing it to MessageListItem to render.

let match = msgText.indexOf(matchString);

if (match !== -1) {
  msgCopy.mark = {
    matchText: matchString,
    isCaseSensitive: isCaseSensitive
  };
}

We also need to pass the message ID of the currently focused message to MessageListItem, as we need to identify that message and highlight it in orange instead of yellow, differentiating the current match from all other matches.

function getMessageListItem(messages, markID) {
  if(markID){
    return messages.map((message) => {
      return (
        <MessageListItem
          key={message.id}
          message={message}
          markID={markID}
        />
      );
    });
  }
  // when no match is focused, the list can be rendered without the markID prop
}

We also store the indices of the marked messages in the MessageSection component state, which is later used to iterate through the highlighted results.

searchTextChanged = (event) => {

  let matchString = event.target.value;
  let messages = this.state.messages;
  let markingData = searchMsgs(messages, matchString,
    this.state.searchState.caseSensitive);

  if(matchString){

    let searchState = {
      markedMsgs: markingData.allmsgs,
      markedIDs: markingData.markedIDs,
      markedIndices: markingData.markedIndices,
      scrollLimit: markingData.markedIDs.length,
      scrollIndex: 0,
      scrollID: markingData.markedIDs[0],
      caseSensitive: this.state.searchState.caseSensitive,
      open: false,
      searchText: matchString
    };

    this.setState({
      searchState: searchState
    });

  }
}

After marking the matched messages with the search string, we pass the messages array into the MessageListItem component, where the messages are processed and rendered. Here, we check whether the message received from MessageSection is marked, and if so, we highlight it. To highlight all occurrences of the search string in the message text, I used a module called react-text-highlight.

import TextHighlight from 'react-text-highlight';

if(this.props.message.id === markMsgID){
  markedText.push(
    <TextHighlight
      key={key}
      highlight={matchString}
      text={part}
      markTag='em'
      caseSensitive={isCaseSensitive}
    />
  );
}
else{
  markedText.push(
    <TextHighlight
      key={key}
      highlight={matchString}
      text={part}
      caseSensitive={isCaseSensitive}/>
  );
}

Here, we are using the message ID of the currently focused message, sent as props to MessageListItem to identify the currently focused message and highlight it specifically in orange instead of the default yellow color for all other matches.

I used the ‘em’ tag to emphasise the currently highlighted message and colored it orange using CSS attributes.

em{
  background-color: orange;
}

We next need to add functionality to navigate through the matched results using the arrow buttons. We stored all the marked messages in the MessageSection state as `markedIDs` and their corresponding indices as `markedIndices`. Using the length of this array, we get the `scrollLimit`, i.e. the bounds to apply while navigating through the search results.
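A sketch of what the arrow button handlers could look like (handler names are illustrative, not the exact WebChat source):

// Move the focus to the previous match, staying within bounds
previousSearchItem = () => {
  let searchState = Object.assign({}, this.state.searchState);
  if (searchState.scrollIndex > 0) {
    searchState.scrollIndex -= 1;
    searchState.scrollID = searchState.markedIDs[searchState.scrollIndex];
    this.setState({ searchState });
  }
};

// Move the focus to the next match, staying within scrollLimit
nextSearchItem = () => {
  let searchState = Object.assign({}, this.state.searchState);
  if (searchState.scrollIndex < searchState.scrollLimit - 1) {
    searchState.scrollIndex += 1;
    searchState.scrollID = searchState.markedIDs[searchState.scrollIndex];
    this.setState({ searchState });
  }
};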

On clicking the up or down arrows, we update the currently highlighted message through `scrollID` and `scrollIndex`, and also check the bounds using `scrollLimit` in the searchState. Once these are updated, the chat app must automatically scroll to the newly highlighted message. Since findDOMNode is being deprecated, I used the custom scrollbar to find the node of the currently highlighted message without findDOMNode. The custom scrollbar was implemented using the module react-custom-scrollbars. Once the node is found, we use the inbuilt HTML DOM method scrollIntoView() to automatically scroll to that message.

if(this.state.search){
  if (this.state.searchState.scrollIndex === -1
      || this.state.searchState.scrollIndex === null) {
      this._scrollToBottom();
  }
  else {
    let markedIDs = this.state.searchState.markedIDs;
    let markedIndices = this.state.searchState.markedIndices;
    let limit = this.state.searchState.scrollLimit;
    let ul = this.messageList;

    if (markedIDs && ul && limit > 0) {
      let currentID = markedIndices[this.state.searchState.scrollIndex];
      this.scrollarea.view.childNodes[currentID].scrollIntoView();
    }
  }
}

Let us now see how the search field was animated. I used a CSS transition on the width property, which animates any change in the search field's width. I fixed the width at zero when search mode is not activated, so only the search icon is displayed. When search mode is activated, i.e. the user clicks on the search icon, the width becomes 125px. Since the width has changed, the increase is displayed as an expanding animation due to the CSS transition property.

const animationStyle = {
  transition: 'width 0.75s cubic-bezier(0.000, 0.795, 0.000, 1.000)'
};

const baseStyles = {
  open: { width: 125 },
  closed: { width: 0 },
}

We also have a case-sensitive option, displayed on clicking the rightmost button, i.e. the three vertical dots. We can toggle the case-sensitive option, whose value is stored in the MessageSection searchState and passed along with the messages to MessageListItem, where it is used by react-text-highlight to highlight text accordingly and render the highlighted messages.
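The toggle itself can be as simple as flipping the flag in state (a sketch with an assumed handler name):

// Flip the case-sensitivity flag; the next search marks messages accordingly
handleCaseSensitivity = () => {
  let searchState = Object.assign({}, this.state.searchState);
  searchState.caseSensitive = !searchState.caseSensitive;
  this.setState({ searchState });
};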

This is how the search feature was implemented in SUSI WebChat. You can find the complete code at SUSI WebChat.


Processing Text Responses in SUSI Web Chat

SUSI Web Chat client now supports emojis, images, links and special characters. However, these aren't declared as separate action types, i.e. the server doesn't explicitly tell the client that the response contains any of the above features when it sends the JSON response. So the client must parse the text response from the server and add support for each of the above-mentioned features, instead of rendering the plain text as is, to ensure a good UX.

SUSI Web Chat client parses the text responses to support:

  • HTML Special Entities
  • Images and GIFs
  • URLs and Mail IDs
  • Emojis and Symbols

// Process the text for HTML special entities, images, links and emojis

function processText(text){
  if(text){
    let htmlText = entities.decode(text);
    let imgText = imageParse(htmlText);
    let replacedText = parseAndReplace(imgText);

    return <Emojify>{replacedText}</Emojify>;
  }
  return text;
}

Let us write sample skills to test these out. Visit http://dream.susi.ai/ and enter textprocessing.

You can then see a few sample queries and responses at http://dream.susi.ai/p/textprocessing.

Let's visit SUSI WebChat and try it out.

Query: dream textprocessing

Response: dreaming enabled for textprocessing

Query: text with special characters

Response: &para; Here are few “Special Characters&rdquo;!

All the special entities notations have been parsed and rendered accordingly!

Sometimes we might need to use HTML special characters due to reasons like

  • You need to escape HTML special characters like <, >, or &.
  • Your keyboard does not support the required character. For example, many keyboards do not have em-dash or the copyright symbol.

You might be wondering why the client needs to handle this separately, as such entities are generally converted to the relevant characters automatically when rendering HTML. The SUSI Web Chat client uses React, which renders JSX and not raw HTML, and JSX doesn't convert HTML entity notations in strings, so they aren't automatically replaced while rendering. Hence, the client needs to handle this explicitly.

We used the html-entities module to decode all types of HTML special characters and entities. When used to decode text, this module parses it for HTML entities and replaces them with the relevant characters for rendering.

import {AllHtmlEntities} from 'html-entities';
const entities = new AllHtmlEntities();

let htmlText = entities.decode(text);
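For example, the raw skill response shown earlier decodes as follows (illustrative):

// decode() replaces entity notations with the actual characters
entities.decode('&para; Here are few &ldquo;Special Characters&rdquo;!');
// → '¶ Here are few “Special Characters”!'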

Now that the HTML entities are processed, the client then processes the text for image links. Let us now look at how images and gifs are handled.

Query: random gif

Response: https://media1.giphy.com/media/AAKZ9onKpXog8/200.gif

Sometimes the text contains links for images or gifs, and the user would expect a media type like an image or gif instead of text. So we need to replace those image links with actual images to ensure a good UX. This is handled using regular expressions to match image-type URLs and replace them with HTML img tags, so that the response is an image and not URL text.

// Parse text for Image URLs

function imageParse(stringWithLinks){

  let replacePattern = new RegExp([
    '((?:https?:\\/\\/)(?:[a-zA-Z]{1}',
    '(?:[\\w-]+\\.)+(?:[\\w]{2,5}))',
    '(?::[\\d]{1,5})?\\/(?:[^\\s/]+\\/)',
    '*(?:[^\\s]+\\.(?:jpe?g|gif|png))',
    '(?:\\?\\w+=\\w+(?:&\\w+=\\w+)*)?)'
  ].join(''),'gim');

  let splits = stringWithLinks.split(replacePattern);

  let result = [];

  splits.forEach((item,key)=>{
    let checkmatch = item.match(replacePattern);

    if(checkmatch){
      result.push(
        <img key={key} src={checkmatch}
        style={{width:'95%',height:'auto'}} alt=''/>)
    }
    else{
      result.push(item);
    }
  });

  return result;
}

The text is split using the regular expression; because the whole pattern is wrapped in a capturing group, String.split keeps the matched URLs in the resulting array. Every matched part is then replaced with the corresponding image, using an img tag whose source is the URL contained in the text.

The client then parses URLs and Mail IDs.

Query: search internet

Response: Internet The global system of interconnected computer networks that use the Internet protocol suite to… https://duckduckgo.com/Internet

The link has been parsed from the response text and has been successfully hyperlinked. Clicking the links opens the respective url in a new window.

We used the react-linkify module to parse links and email IDs. The module parses the text and hyperlinks all kinds of URLs and Mail IDs.

import Linkify from 'react-linkify';

export const parseAndReplace = (text) => {return <Linkify properties={{target:"_blank"}}>{text}</Linkify>;}

Finally, let us see how emojis are parsed.

Query: dream textprocessing

Response: dreaming enabled for textprocessing

Query: susi, do you use emojis?

Response: Ofcourse ^__^ 😎 What about you!? 😉 😛

All the notations for emojis have been parsed and rendered as emojis instead of text!

We used the react-emojione module to emojify the text.

import Emojify from 'react-emojione';

<Emojify>{text}</Emojify>;

This is how text is processed to support special characters, images, links and emojis, ensuring a rich user experience. You can find the complete code at SUSI WebChat.


Integration of SUSI AI to Alexa

This is an Alexa skill which can be used to ask SUSI for answers, like: "Alexa, ask susi chat who are you" or "Alexa, ask susi chat what is the temperature in berlin".

If at any point of time, you are unclear about the code in the blog post, you can check the code of the already made SUSI Alexa skill from the susi_alexa_skill repository.

Getting Started: Alexa SUSI AI Skill

Follow the instructions below:

Visit the Amazon developer site and log in.

Click Alexa on the top bar.

Click Alexa Skills Kit.

Click on the add a new skill button on the top right of the page.

We will be at the skill information tab.

 

Write the name of the skill and the invocation name of the skill, i.e. the name that will be used to trigger your skill. In our case, since we have 'susi chat' as the invocation name, whenever we need to ask anything we will prefix our question with "Alexa, ask susi chat".

By clicking next, we will be redirected to the second tab, i.e. Interaction model. We need to fill two fields here: intent schema and sample utterances. For the intent schema, we need to write all the available intents and the parameters for each of them. In our case:

{
 "intents": 
 [
   {
     "slots": [
       {
         "name": "query",
         "type": "AMAZON.LITERAL"
       }
     ],
     "intent": "callSusiApi"
   }
 ]
}

We have a single intent, "callSusiApi", and the parameter it accepts is "query" of type "AMAZON.LITERAL" (in simple words, a string type). Parameters are termed slots here. The different types of slots available can be seen in the Alexa documentation.

For sample utterances, we need to tell what utterances by the client will lead to what intent. In our case:

We have just one intent and the whole string uttered by the client should be fed to this intent as a “query” slot (parameter).
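With the old AMAZON.LITERAL slot type, each sample utterance embeds an example phrase together with the slot name in curly braces, so our sample utterances could look roughly like this (illustrative phrases, not necessarily the exact ones used):

callSusiApi {who are you|query}
callSusiApi {what is the temperature in berlin|query}
callSusiApi {tell me a joke|query}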

Let’s click next now.

We will be shifted to the configuration tab.

We will be making a lambda function which will hold the code for our SUSI skill; we then need to link that code to this skill. To do the linking, we need to get the Amazon Resource Name (ARN) and fill it in the field named endpoint:


To get the Amazon Resource Name, open the AWS console in a new tab. Visit "Lambda", click the get started button, and then click on "Create a lambda function":

We need to select a blueprint for our lambda function. Select the “blank function” option for that.


Click next.

For the configure triggers option, click the box and select the "Alexa Skills Kit" option.


Click next.

In the configure function tab, just write the name of the function and its description. Let's code our lambda function:

// basic syntax that should be available in the lambda function
var http = require('http');
exports.handler = (event, context) => {
  try {
    if (event.session.new) {
      // New Session
      console.log("NEW SESSION")
    }
    switch (event.request.type) {
      case "LaunchRequest":
        // Launch Request
        console.log(`LAUNCH REQUEST`)
        context.succeed(
          generateResponse(
            buildSpeechletResponse("Welcome to a Susi Skill, this is an A.I. chatbot developed by Fossasia open source community. Ask anything to me like temperature at a place or rating of a movie or any other thing which you would like to ask?", false),
            {}
          )
        )
        break;
      case "IntentRequest":
        // Intent Request
        console.log(`INTENT REQUEST`)

        switch(event.request.intent.name) {
          case "callSusiApi":
            console.log(event.request.intent.slots.query.value)
            var endpoint = "http://api.susi.ai/susi/chat.json?q="+event.request.intent.slots.query.value; // ENDPOINT GOES HERE
            var body = ""
            http.get(endpoint, (response) => {
              response.on('data', (chunk) => { body += chunk })
              response.on('end', () => {
                var data = JSON.parse(body)
                // fetching answer from susi
                var viewCount = data.answers[0].actions[0].expression;
                if(viewCount.indexOf('I found this on the web') != -1)
                    viewCount = 'I have no idea about it, sorry.';
                context.succeed(
                  generateResponse(
                    buildSpeechletResponse(`${viewCount}`, false),
                    {}
                  )
                )
              })
            })
            break;

          default:
            throw "Invalid intent"
        }

        break;
      case "SessionEndedRequest":
        // Session Ended Request
        console.log(`SESSION ENDED REQUEST`)
        break;
      default:
        context.fail(`INVALID REQUEST TYPE: ${event.request.type}`)
    }
  } catch(error) { context.fail(`Exception: ${error}`) }
}

// Helpers
buildSpeechletResponse = (outputText, shouldEndSession) => {
  return {
    outputSpeech: {
      type: "PlainText",
      text: outputText
    },
    shouldEndSession: shouldEndSession
  }
}

generateResponse = (speechletResponse, sessionAttributes) => {
  return {
    version: "1.0",
    sessionAttributes: sessionAttributes,
    response: speechletResponse
  }
}

Paste this code into the space given below "lambda function code". In "lambda function handler and role", click the field named role and select "create a custom role" from the dropdown shown.

You will be redirected to a new page. Select the IAM role as lambda_basic_execution:

Click the allow button in the bottom right. We will be redirected back to our previous page. We don't need to worry about the other settings on this page.

Click next.

Again cross-check the details shown and click next.

Now we will have our ARN (Amazon resource name) on the top right of the page.


Copy that and paste it into the field “endpoint” on our previously open browser tab:


Click next.

Great, our SUSI AI skill is now ready!

Now we can test it with a sample query, when we get redirected to the test tab:


We can also test it using our voice through the Reverb app available on the Play Store, or on Echosim by logging into your Amazon account.

Till now, the skill can only be invoked or tested from your own Amazon ID. To make this skill public, you need to fill in the two remaining tabs, "publishing information" and "privacy and compliance".

Some sample strings that we can speak to test it: “Alexa, ask susi chat where are you” “Alexa, ask susi chat tell me a joke” “Alexa, ask susi chat what is a table” (where ‘susi chat’ is the invocation name).

A video tutorial helped a lot in making this skill for Alexa; it can be referred to as well.


Customizing Chromium for the Meilix Generator

Imagine being able to make the bookmark star icon on the extreme right of the address bar disappear, disable the auto-fill option, block particular extensions from being installed, and many more things. And you could even distribute the same settings to any friend with just a copy and paste. We are working on such features for the FOSSASIA Meilix generator for Chromium.

Chromium is one of the most popular browsers. But have you ever thought how great it would be if you were able to customize your Chromium to a larger extent? Sometimes it feels like there are features we barely use.

So here is the way to tailor the browser's configuration, and you can even give it to your friend to try out, as it only requires copying and pasting a file. Not to forget to mention where this file comes from: fossasia/meilix.

How can you do that?

This gist is a .json file which has to be copied into etc/chromium-browser/policies/managed with the name chrome.json. That's it.
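For illustration, a minimal chrome.json might look like this (BookmarkBarEnabled and PasswordManagerEnabled are real policy names from the template; the values here are just an example):

{
  "BookmarkBarEnabled": false,
  "PasswordManagerEnabled": false
}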

Chrome.json
It is a Linux policy template for the Chromium browser. This file contains different policies, all commented out. Uncomment the required entries and set your desired values.

The format of a policy in the JSON file:

Each policy is well-structured so that a person can easily understand and change its values.

Let’s take an example:

1 // Enable Bookmark Bar
2 //-------------------------------------------------------------------------
3 // Enables the bookmark bar on Google Chrome.  If you enable this setting,
4 // Google Chrome will show a bookmark bar.  If you disable this setting, users
5 // will never see the bookmark bar.  If you enable or disable this setting,
6 // users cannot change or override it in Google Chrome.  If this setting is
7 // left not set the user can decide to use this function or not.
8 //"BookmarkBarEnabled": true,

This is an example of controlling the bookmark bar, which is mentioned on line 1.
Line 2 is left intentionally blank for proper formatting.
Lines 3-7 explain the purpose of the policy; it explains itself quite briefly.
Line 8 is the option set by default; to alter it, uncomment the line and reverse the value.

The same pattern is followed throughout. There are many other options available which can be of great help.

Edit the file, share it with your friends, and ask them to copy it to the same location; then they can also get the benefit of the feature.
