Using a YAML file to read configuration options in Yaydoc

Yaydoc provides access to a number of configurable variables which can be set as per requirements to configure various sections of the build process. You can see the entire list of variables on the project's homepage. Until now, the only way to do this was to set appropriate environment variables. Since a web user interface for Yaydoc is in development, providing a clean UI was very important. This meant that we could not just create a bunch of input fields for all variables, as that could be overwhelming for any new user. So we decided to ask for only minimal information in the web form and to read the other variables, if the user chooses to specify them, from a YAML file in the target repository.

To read a YAML file, we used PyYAML. It is a well-established Python package to safely read information from a YAML file and convert it to a Python dictionary. Here is the code snippet for that.

import yaml


def get_yaml_config():
    try:
        with open('.yaydoc.yml', 'r') as file:
            conf = yaml.safe_load(file)
    except FileNotFoundError:
        return {}
    return conf

The above code snippet returns a dictionary containing all keys read from the YAML file. Since none of the options are required, we first create a dictionary with all the defaults and recursively merge it with the YAML dictionary. The merging is done using the following code snippet:

def update_dict(base, head):
    """Recursively merge the ``head`` dict into the ``base`` dict."""
    for key, value in head.items():
        if isinstance(base, dict):
            if isinstance(value, dict):
                base[key] = update_dict(base.get(key, {}), value)
            else:
                base[key] = head[key]
        else:
            base = {key: head[key]}
    return base
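
To see how these two pieces fit together, here is a small usage sketch; the default values below are only placeholders for illustration, not Yaydoc's actual defaults.

# Hypothetical defaults for illustration; Yaydoc's real defaults differ.
defaults = {
    'metadata': {'author': 'Unknown', 'version': '0.1'},
    'build': {'doctheme': 'alabaster', 'docpath': 'docs/'},
}

user_config = get_yaml_config()          # returns {} if no .yaydoc.yml exists
config = update_dict(defaults, user_config)

# Values from .yaydoc.yml override the defaults, while unspecified
# options keep their default values.
print(config['build']['doctheme'])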

Now you can create a .yaydoc.yml file in the root of your repository and Yaydoc will read options from there. Here is a sample YAML file.

metadata:
  author: FOSSASIA
  projectname: Yaydoc
  version: development

build:
  doctheme: fossasia_theme
  docpath: docs/
  logo: images/logo.svg
  markdown_flavour: markdown_github

publish:
  ghpages:
    docurl: yaydoc.fossasia.org

It should be noted that the layout of the file may change in the future as the project is in active development.


SUSI AI Bots with Microsoft’s Bot Framework

The Bot Framework is used to build intelligent chatbots and it supports .NET, Node.js, and REST. To learn about building bots using the Bot Framework, go to https://docs.microsoft.com/en-us/bot-framework/bot-builder-overview-getstarted. To build SUSI AI bots for different platforms like Facebook, Telegram, Kik and Skype, follow the steps given below.

  1. Install Node.js from the link below on your computer if you haven’t installed it already.
    https://nodejs.org/en/
  2. Create a folder with any name, then open a shell and change your current directory to the new folder you created.
  3. Type npm init in the shell and enter details like name, version and entry point.
  4. Create a file with the same name that you entered as the entry point in the step above (e.g. index.js); it should be in the same folder you created.
  5. Type the following commands in the shell, one after the other:
     npm install --save restify
     npm install --save botbuilder
     npm install --save request
     When all the modules are installed, check your package.json; the modules will be listed under the dependencies section.

  6. Your package.json file should look like this.

    {
      "name": "skype-bot",
      "version": "1.0.0",
      "description": "SUSI AI Skype Bot",
      "main": "app.js",
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1",
        "start": "node app.js"
      },
      "author": "",
      "license": "ISC",
      "dependencies": {
        "botbuilder": "^3.8.1",
        "request": "^2.81.0",
        "restify": "^4.3.0"
      }
    }
    
  7. Copy the following code into the file you created, i.e. index.js:

    var restify = require('restify');
    var builder = require('botbuilder');
    var request = require('request');

    // Setup Restify server
    var server = restify.createServer();
    server.listen(process.env.port || process.env.PORT || 8080, function() {
        console.log('%s listening to %s', server.name, server.url);
    });

    // Create chat bot
    var connector = new builder.ChatConnector({
        appId: process.env.appId,
        appPassword: process.env.appPassword
    });

    var bot = new builder.UniversalBot(connector);
    server.post('/api/messages', connector.listen());

    // When the bot is added by a user
    bot.on('contactRelationUpdate', function(message) {
        if (message.action === 'add') {
            var name = message.user ? message.user.name : null;
            var reply = new builder.Message()
                .address(message.address)
                .text("Hello %s... Thanks for adding me. You can talk to SUSI now.", name || 'there');
            bot.send(reply);
        }
    });

    // Getting a response from the SUSI API upon receiving messages from the user
    bot.dialog('/', function(session) {
        var options = {
            method: 'GET',
            url: 'http://api.asksusi.com/susi/chat.json',
            qs: {
                timezoneOffset: '-330',
                q: session.message.text
            }
        };
        // Sending a request to the SUSI API for a response
        request(options, function(error, response, body) {
            if (error) throw new Error(error);
            var ans = (JSON.parse(body)).answers[0].actions[0].expression;
            // Responding back to the user
            session.send(ans);
        });
    });
    
  8. You have to supply your own app ID and password (the appId and appPassword environment variables used in the code), which you can get by following the steps below.
  9. Sign in or sign up at https://dev.botframework.com/. After signing in, go to the My Bots option at the top of the page and create/register your bot. Enter the details of your bot and click on "Create Microsoft App ID and password".

  10. Leave the messaging endpoint empty for now; after getting the app ID and password we will set up the messaging endpoint.
  11. Copy your app ID and password and save them for later use. Paste your app ID in the box given for the ID on the bot registration page.

  12. Now we have to create a messaging endpoint to listen for requests. Make a GitHub repository and push the files in the folder we created above.

    In the command line, change the current directory to the folder we created above and run:
    git init
    git add .
    git commit -m "initial"
    git remote add origin <URL for remote repository>
    git remote -v
    git push -u origin master
    You will get the URL for the remote repository by creating a repository on GitHub and copying its link.

  13. Now we have to deploy this GitHub repository to Heroku to get the URL for the messaging endpoint. If you don't have an account on Heroku, sign up at https://www.heroku.com/; otherwise just sign in and create a new app.
  14. Deploy your repository to Heroku from the Deploy option, choosing GitHub as the deployment method.
  15. Select automatic deployment so that whenever you make changes to the GitHub repository, they are deployed to Heroku.

  16. Open your app from the option on the top right, copy the link of your Heroku app, append /api/messages to it, and enter this URL as the messaging endpoint.

    https://{Your_App_Name}.herokuapp.com/api/messages
  17. Register the bot and add the app ID and password you saved to your Heroku app under Settings -> Config Variables.
  18. Now go to https://dev.botframework.com/, then in My Bots go to your bot, click on the Skype bot, add it to your contacts and start chatting.
  19. You can connect the same bot to different channels like Kik, Slack, Telegram, Facebook and many others.

    Add different channels on your bot page and follow the respective links for deploying onto the different platforms.

If you want to learn more about the Bot Framework, you can refer to https://docs.microsoft.com/en-us/Bot-Framework/index.

Resources:
Code: https://github.com/fossasia/susi_skypebot
Bot Framework: https://docs.microsoft.com/en-us/bot-framework/bot-builder-overview-getstarted
Bot Framework Logo: https://goo.gl/images/Vw5xZp

How to Collaborate Design on Hardware Schematics in PSLab Project

Generally, ECAD tools are not built to support collaborative features the way git supports them in software development. PSLab hardware is developed using an open source ECAD tool called KiCAD. It is a practice in the electronics industry to use hierarchical blocks to support collaboration: one person can work on a specific block while leaving the rest of the design untouched. This provides a workaround that lets a team work on one hardware design just as it would on a software design. In the PSLab hardware repository, many developers can work simultaneously using this technique without causing conflicts in the project files.

Printed Circuit Board (PCB) design is an art. The way the components are placed and how they are interconnected through different types of wires and pads is a matter of craft for hardware design engineers. If they do not use auto-routing, PCB designs for the same schematic will differ considerably from one designer to another.

There are two major approaches in designing PCBs.

  • Top Down method
  • Bottom Up method

Any of these methods can be implemented in PSLab hardware repository to support collaboration by multiple developers at the same time.

Top Down Method

In this method, the design starts from the most abstract definition. We can think of this as a black box with several wires coming out of it. The user knows how to use the wires and which devices they need to be connected to, but the inside of the black box is not visible. A designer can then open up this box and break the design down into several smaller black boxes, each performing a subset of the functionality of the bigger black box. They can keep breaking it down into even smaller boxes until they reach the very bottom, where basic components such as transistors, resistors and diodes are found.

Bottom Up Method

In the bottom up method, the opposite approach to the top down method is used. Small parts are combined to design a much bigger part, and those are combined to build an even bigger part, which eventually creates the final design. The human body is a great example of the bottom up method: cells create organs, organs create systems and systems create the body.

Designing Top Down Designs using KiCAD

In PCB design, designers are free to choose whichever approach they find more suitable for their project. In this blog, the top down method is used to demonstrate how to create a design from abstract concepts. The steps below illustrate how to create a design that goes one layer deep using hierarchical blocks. However, this procedure can be repeated as many times as the designer wants, depending on the complexity of the project.

Step 01 – Create a new project in KiCAD

Step 02 – Open up Eeschema to begin the design

Step 03 – Create a Hierarchical Sheet

Step 04 – Place the hierarchical sheet on the design sheet and give it a name

Step 05 – Enter sheet

Step 06 – Place components and create a schematic design inside the sheet and place hierarchical labels

Step 07 – Define the labels as input or output and give them an identifier. Once done, place them on appropriate places and connect with wires

Step 08 – Go back to main sheet to complete the hierarchical block

Step 09 – Place hierarchical pins on the block

Click on the "Place hierarchical pin" icon on the toolbar and click on the block. The pins can be placed anywhere on the block. As a convention, input pins are placed on the left side and output pins on the right side of the block.

Step 10 – Complete the circuit


How to write your own custom AST parser?

In Yaydoc, we use pandoc to convert text from one format to another. Pandoc is one of the best text conversion tools; it helps users convert text between different markup formats. It is written in Haskell, and wrapper libraries are available for many programming languages, including Python, Node.js and Ruby. But in Yaydoc, for a few particular scenarios, we have to customize the conversion to meet our needs, so I started to build a custom parser. The parser I made converts a yml code block into a yaml code block, because Sphinx needs a yaml code block for rendering. In order to parse, we have to split the text into tokens that fit our needs, so initially we have to write a lexer to split the text into tokens. Here is a sample snippet for a basic lexer.

import re


class Node:
    def __init__(self, text, token):
        self.text = text
        self.token = token
 
    def __str__(self):
        return self.text+' '+self.token
 
 
def lexer(text):
    def syntax_highliter_lexer(nodes, words):
        splitted_syntax_highligter = words.split('```')
        if splitted_syntax_highligter[0] is not '':
            nodes.append(Node(splitted_syntax_highligter[0], 'WORD'))
        splitted_syntax_highligter[0] = '```'
        words = ''.join([x for x in splitted_syntax_highligter])
        nodes.append(Node(words, 'SYNTAX HIGHLIGHTER'))
        return nodes
 
    syntax_re = re.compile('```')
    nodes = []
    pos = 0
    words = ''
    while pos < len(text):
        if text[pos] == ' ':
            if len(words) > 0:
                if syntax_re.search(words) is not None:
                    nodes = syntax_highliter_lexer(nodes, words)
                else:
                    nodes.append(Node(words, 'WORD'))
                words = ''
            nodes.append(Node(text[pos], 'SPACE'))
            pos = pos + 1
        elif text[pos] == '\n':
            if len(words) > 0:
                if syntax_re.search(words) is not None:
                    nodes = syntax_highliter_lexer(nodes, words)
                else:
                    nodes.append(Node(words, 'WORD'))
                words = ''
            nodes.append(Node(text[pos], 'NEWLINE'))
            pos = pos + 1
        else:
            words += text[pos]
            pos = pos + 1
    if len(words) > 0:
        if syntax_re.search(words) is not None:
            nodes = syntax_highliter_lexer(nodes, words)
        else:
            nodes.append(Node(words, 'WORD'))
    return nodes

After converting the text into tokens, we have to parse them to match our needs. In this case we only need to build a simple parser.

I chose to build the parser around an abstract syntax tree (AST). An AST is a simple tree based on a root node expression: the left node is evaluated first, then the right node. If there is only one node after the root node, its value is simply returned. Here is a sample snippet for the AST-based parser:

def parser(nodes, index):
    if nodes[index].token == 'NEWLINE':
        if index + 1 < len(nodes):
            return nodes[index].text + parser(nodes, index + 1)
        else:
            return nodes[index].text
    elif nodes[index].token == 'WORD':
        if index + 1 < len(nodes):
            return nodes[index].text + parser(nodes, index + 1)
        else:
            return nodes[index].text
    elif nodes[index].token == 'SYNTAX HIGHLIGHTER':
        if index + 1 < len(nodes):
            word = ''
            j = index + 1
            end_highligher = False
            end_pos = 0
            while j < len(nodes):
                if nodes[j].token == 'SYNTAX HIGHLIGHTER':
                    end_pos = j
                    end_highligher = True
                    break
                j = j + 1
            if end_highligher:
                for k in range(index, end_pos + 1):
                    word += nodes[k].text
                if index != 0:
                    if nodes[index - 1].token != 'NEWLINE':
                        word = '\n' + word
                if end_pos + 1 < len(nodes):
                    if nodes[end_pos + 1].token != 'NEWLINE':
                        word = word + '\n'
                    return word + parser(nodes, end_pos + 1)
                else:
                    return word
            else:
                return nodes[index].text + parser(nodes, index + 1)
        else:
            return nodes[index].text
    elif nodes[index].token == 'SPACE':
        if index + 1 < len(nodes):
            return nodes[index].text + parser(nodes, index + 1)
        else:
            return nodes[index].text

In the end we did not use this parser in Yaydoc, because maintaining a custom parser is a huge hurdle, but it provided a good learning experience.


Building Metapackages to Customize the Meilix Linux Distro Generator

This article will guide you through building a metapackage with your required configuration, using it inside the Meilix distro to customize it, and using the inbuilt metapackages to customize the configuration files of packages and the properties of various browsers.
Metapackages are packages which simply link to existing packages; a metapackage is a .deb file. Just as packages include dependencies, metapackages analogously include packages. So we can say that metapackages do not contain actual software; they depend upon other packages. This guide will help you make your own metapackage easily, configure it and distribute it among your friends and other Linux users.

How to get started to build a metapackage for meilix?

At first, one needs to sort out the packages that need to be in the metapackage. One may also come across a package which one does not want to install directly but which comes in as a dependency of some other package.
It's easy: a few lines of commands and you will have a .deb metapackage in your hand.
We will use equivs as the tool to build metapackages.

Install equivs :

sudo apt-get install equivs
equivs-control ns-control

This will create a file with the name ns-control, and that file looks similar to this:

 1. ### Commented entries have reasonable defaults.
 2. ### Uncomment to edit them.
 3. # Source: <source package name; defaults to package name>
 4. Section: misc
 5. Priority: optional
 6. # Homepage: <enter URL here; no default>
 7. Standards-Version: 3.9.2
 8. Package: <package name; defaults to equivs-dummy>
 9. # Version: <enter version here; defaults to 1.0>
10. # Maintainer: Your Name <[email protected]>
11. # Pre-Depends: <comma-separated list of packages>
12. # Depends: <comma-separated list of packages>
13. # Recommends: <comma-separated list of packages>
14. # Suggests: <comma-separated list of packages>
15. # Provides: <comma-separated list of packages>
16. # Replaces: <comma-separated list of packages>
17. # Architecture: all
18. # Multi-Arch: <one of: foreign|same|allowed>
19. # Copyright: <copyright file; defaults to GPL2>
20. # Changelog: <changelog file; defaults to a generic changelog>
21. # Readme: <README.Debian file; defaults to a generic one>
22. # Extra-Files: <comma-separated list of additional files for the doc directory>
23. # Files: <pair of space-separated paths; First is file to include, second is destination>
24. #  <more pairs, if there's more than one file to include. Notice the starting space>
25. Description: <short description; defaults to some wise words>
    long description and info

    second paragraph

 

Now the question is what to do with this:
Lines 3-7: the control information for the source package.
Lines 8-25: the control information for the binary package.
Source packages are those which contain the source code of the package. One can compile the source and install it on a machine of any architecture.
Binary packages are those which are specific to the architecture of the machine, and one can easily install them with a click.

Description of important lines:
Line 3: the name of the source package, the same as in Line 8.
Line 4: the section of the distribution; there are various categories into which a source package can be put.
Line 9: the version of the package; it is helpful if you want to install packages of a particular version.
Line 11: the dependencies of the package (it is better to leave this commented).
Line 12: include the names of the packages that you want to include in the metapackage.
Line 17: Architecture is set to all, that is, for both 32-bit and 64-bit.
Line 25: provide a description.
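
For illustration, here is what a minimal filled-in control file might look like; the package name meilix-sample-tools and the package list are purely hypothetical and should be replaced with whatever your metapackage actually needs:

Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: meilix-sample-tools
Version: 1.0
Maintainer: Your Name <[email protected]>
Depends: vlc, gedit, evince
Architecture: all
Description: Sample metapackage for Meilix
 Installs a small set of everyday desktop applications,
 used here only as an example for this guide.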

What next?
After filling in the control file, it's time to build it.

Build the package:

equivs-build ns-control

Running it will give you a .deb file.
dpkg -i *.deb will install the deb file.

This is the metapackage containing the packages you have included.
I have used this wiki as a source for the required information.

Consider one of the most popular metapackages: gnome-desktop-environment. It is the GNOME-flavoured desktop environment. It gives the user a graphical interface together with popular email, office and music tools and a wide range of other applications.

How can a common Linux user benefit from it?

We know that many people avoid Linux because of its command line; they just want to use the mouse/touchpad throughout. With the help of this, a person can build a metapackage, distribute it to friends and keep it for future use.
One can also use this to make a collection of metapackages of different sets of packages, like hacking tools, text tools, etc.

How do we use metapackages?

The Meilix script uses metapackages for building all the required packages. In our web app version (meilix-generator) we made several metapackages; the user is asked to choose one among them according to their requirements. Each metapackage also contains the information about which packages it is made of.

For example, an event metapackage includes the packages people need for event purposes, predefined by us; it consists of a lightweight text editor, a media player, a document viewer etc. An education-related metapackage contains packages related to schools and workshops.

The Meilix repository now contains its own metapackages that it uses to build the distro.

How is the Meilix metapackage used to control the distro configuration?

We can even control distro properties, including the browser configuration, its startup page, search page and many more things, through metapackages. Let's see how:

We created a metapackage with the name meilix-default-settings and used it to configure various features in the distro. The Meilix settings metapackage contains an etc folder where we can make the changes that should end up in the distro. We can even include a property folder in .config under the skel folder to copy the changes into the home folder of a new user. To change the Chrome configuration, we need to edit the chrome.json file. To change the Firefox configuration, we need to edit the prefs.js file.
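
As a rough illustration of the kind of change that goes into such a file, a Firefox preference entry to set the startup page could look like the line below; the pref name is a standard Firefox preference, but the URL and the exact file the distro reads it from are only assumptions here, so adapt it to how meilix-default-settings is actually laid out.

// Illustrative only: set the default Firefox start page
pref("browser.startup.homepage", "https://fossasia.org");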

The metapackage folder is: https://github.com/fossasia/meilix/tree/master/meilix-default-settings

Repository using metapackages

https://github.com/fossasia/meilix
https://github.com/fossasia/meilix-generator  (the webapp)

Solving slow template rendering in loklak search

I was working on an issue to render the top hashtags of the last 24 hours on the home page of loklak search. The loklak search.json API took just milliseconds to return the response; however, it took about 3 minutes to render the HTML. Generally, the API response acts as the rate-determining step, but here the template rendering process was the slow part, so optimizing it became crucial.

This blog will explain how I fixed the slow template rendering problem using some of Angular's APIs. I will explain common mistakes we often make and how to render the HTML quickly once data is received from the APIs.

Previous Code

In this code, we make an API request with the query 'since:day' on initialization of the home page component. Next, we select the hashtags from the store, which returns an observable of hashtags. Then we subscribe to that observable and extract the data from it.

In the HTML file, we then present the hashtags using the *ngFor and *ngIf directives.

This code works fine. The only fault is that the template is not aware when some input is received or some state property has changed. Therefore, we need a trigger to detect the change and render the HTML.

TypeScript File

export class HomeComponent implements OnInit, OnDestroy {
    public apiResponseHashtags$: Observable<Array<{ tag: string, count: number }>>;
    public apiResponseHashtags: Array<{ tag: string, count: number }>;

    constructor(private store: Store<fromRoot.State>) { }

    ngOnInit() {
        this.getTopHashtags();
        this.getDataFromStore();
    }

    private getTopHashtags() {
        this.store.dispatch(new queryAction.RelocationAfterQuerySetAction());
        this.store.dispatch(new queryAction.InputValueChangeAction('since:day'));
    }

    private getDataFromStore() {
        this.apiResponseHashtags$ = this.store.select(fromRoot.getApiResponseTags);
        this.apiResponseHashtags$.subscribe((data) => {
            this.apiResponseHashtags = data;
        });
    }
}

HTML File

<div class="top-hashtags">
    <div *ngIf="apiResponseHashtags.length !== 0">
        <span *ngFor ="let item of apiResponseHashtags">
            <a [routerLink]="['/search']" [queryParams]="{ query : '#' + item.tag }">#{{item.tag}}</a>&nbsp;
        </span>
    </div>
</div>

Solution

The following changes need to be introduced for quick HTML rendering.

  • Here, we will be using 'ChangeDetectionStrategy.OnPush' while defining the component, for pushing changes into the template. Using 'OnPush' will make changes to the template as soon as some input is received.

 

  • Instead of subscribing to the observable of hashtags, we need to use the async pipe to extract data from the observable in the HTML file. This is because, in the previous code, we were using the subscribed value for hashtags. Now, the observable of hashtags will work like an input for the template and 'ChangeDetectionStrategy.OnPush' will act as the trigger.

TypeScript file

@Component({
    selector: 'app-home',
    templateUrl: './home.component.html',
    styleUrls: ['./home.component.scss'],
    changeDetection: ChangeDetectionStrategy.OnPush
})
export class HomeComponent implements OnInit, OnDestroy {
    public apiResponseHashtags$: Observable<Array<{ tag: string, count: number }>>;

    constructor(private store: Store<fromRoot.State>) { }

    ngOnInit() {
        this.getTopHashtags();
        this.getDataFromStore();
    }

    private getTopHashtags() {
        this.store.dispatch(new queryAction.RelocationAfterQuerySetAction());
        this.store.dispatch(new queryAction.InputValueChangeAction('since:day'));
    }

    private getDataFromStore() {
        this.apiResponseHashtags$ = this.store.select(fromRoot.getApiResponseTags);
    }
}

HTML File

<div class="top-hashtags">
    <div *ngIf="(apiResponseHashtags$ | async).length !== 0">
        <span *ngFor ="let item of (apiResponseHashtags$ | async)">
            <a [routerLink]="['/search']" [queryParams]="{ query : '#' + item.tag }">#{{item.tag}}</a>&nbsp;
        </span>
    </div>
</div>

Conclusion

Whenever the template data depends on an API response or user interaction, it is beneficial to use ChangeDetectionStrategy.OnPush to reduce template rendering time. Moreover, instead of subscribing to observables of data received from the API, we use the async pipe to make the observable an input for the template.

Resources

  • https://angular-2-training-book.rangle.io/handout/change-detection/change_detection_strategy_onpush.html
  • https://stackoverflow.com/questions/39795634/angular-2-change-detection-and-changedetectionstrategy-onpush/39802466#39802466
  • https://www.youtube.com/watch?v=X0DLP_rktsc

 

 

 

Deploy SUSI.AI to a Messenger

Integration of SUSI AI into messenger platforms has become a vital step to enhance the popularity of this chatbot and to target a large base of users. For example, Viber claims that it has a user base of 800 million, so just integrating SUSI AI with Viber can increase its user base enormously. This integration also proves to be a big boon, as the chatbot learns from the number and variations of the questions being asked, as in the case of the web chat client (SUSI AI).

This blog post will walk you through how to deploy SUSI.AI to a messenger platform (Viber and Line are used as examples in this post). We will be using Node.js and REST APIs in our example integrations. The repository for the deployment of SUSI AI to Viber can be found at susi_viberbot, and to Line messenger at susi_linebot. The SUSI AI Viberbot can be followed from here and the Linebot by scanning this QR code.

The diagram below gives an overview of the flow followed to deploy the SUSI AI chatbot to various messenger platforms.

Fig: Integration of Susi AI to chat messengers.

Let’s walk through each of the steps mentioned in the above diagram.

  1. To get familiar with SUSI.AI chatbot.

We have an API from which we fetch answers. To get a reply for the query 'hi', we can visit the API link with the query 'hi' appended to it (http://api.susi.ai/susi/chat.json?q=hi). You can chat with SUSI AI here.

  2. To set up a private SUSI AI chatbot account.

An account must be set up on the messenger platform, so that the user can message that account and get a reply from the chatbot. The steps to set up the chatbot account depend on the messenger platform.

  3. To set up a webhook URL.

The messages sent to the chatbot account must somehow reach the chatbot. Each message can be fed as a query to the chatbot, so that the chatbot can accordingly think of a reply. To achieve this we need a URL, referred to as the webhook URL.

The messages sent by the user to the SUSI AI chatbot account on the messenger can then be redirected to this URL.

(The Heroku platform allows 5 apps to be hosted for free, so you can check this documentation on how to host a Node.js app there.)
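
If you host the endpoint on Heroku, the app also needs a process declaration; a minimal Procfile could be the single line below, assuming your server file is named index.js (adjust the file name to your project):

web: node index.js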

Now we need to think about how to handle these messages.

  4. To host code at our webhook URL.

As said earlier, we will be using Node.js.

Generally, the messages from our SUSI AI chatbot account on the messenger will travel as requests to our webhook URL. They arrive as POST requests. To handle them we can use this piece of code:

app.post('/', function(request, response) {
    response.writeHead(200);

    // first step here, getting the message string from the request body

    // second step, calling the chatbot to get the reply to this message

    // third step, to send this reply back to our messenger's API
    
    response.end();
});

Let’s go through the three steps:

  • Getting the message string from the request body:

The request body is JSON in the case of a REST API. To be extra sure, we parse it:

var reqBody = JSON.parse(request.body);

Which property of this reqBody contains our message string depends on the messenger platform. Suppose we have our message in the actions property of reqBody; we can access it with:

var message = reqBody.actions;

For example in Viber, we need to use this piece of code:

app.post('/', function(req, response) {
    response.writeHead(200);

    // If user sends a message in 1-on-1 chat to the susi public account
    if(req.body.event === 'message'){
        // call chatbot or it’s API with event.message.text as the message string.
    }
});

In Line messenger, we accept the request at ‘/webhook’:

// register a webhook handler with middleware
app.post('/webhook', line.middleware(config), (req, res) => {
  // here events property has our message string somewhere nested in it.
  Promise
      .all(req.body.events.map(handleEvent))
      .then((result) => res.json(result));
});

// event handler
function handleEvent(event) {
  if (event.type !== 'message' || event.message.type !== 'text') {
      // ignore non-text-message event
      return Promise.resolve(null);
  }
  // call chatbot API with event.message.text as the query string.
}

So the code is dependent on the messenger platform.

  • Calling the SUSI AI chatbot API to get the reply to this message(‘hi susi’ in this case).

This part of our code will remain constant, for any messenger platform.

Let’s first see SUSI API’s answer to query “hi susi” and get familiar with it. Visit http://api.asksusi.com/susi/chat.json?q=hi susi from the browser and get familiar with the JSON object returned.

We get a JSON object as follows:
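
(The object below is trimmed to the fields that matter for us; the actual response carries additional metadata around them.)

{
    "answers": [
        {
            "actions": [
                {
                    "expression": "Hi, I'm Susi"
                }
            ]
        }
    ]
}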

The answer can be found as the value of the key named expression. In this case it is “Hi, I’m Susi”.

To fetch the answer through code, we can use this snippet in Node.js:

// including request module
var request = require('request');

// setting options to make a successful call to Susi API.
var options = {
    method: 'GET',          
    url:'http://api.asksusi.com/susi/chat.json',
    qs:
    {
        timezoneOffset: '-330',
        q:'hi susi'
    }
};

// A request to the Susi bot
request(options, function (error, response, body) {
    if (error)
        throw new Error(error);
    // answer fetched from susi
    ans = (JSON.parse(body)).answers[0].actions[0].expression;
});

The properties required for the call are set up through a JSON object (i.e. options). We pass the options object to our request function as its first parameter. The response from the API will be stored in the body variable. We need to parse this body to be able to access the properties of that object, and hence fetch the answer from the SUSI API.

  • To send the answer fetched from our SUSI API, back to our messenger.

This part too is dependent on the messenger platform. Initially, the messenger platform sends us a request which contains the user's message string; now it's our turn to send a request back to the messenger platform with a reply (i.e. the answer we fetch from our chatbot's API).

Generally, it is sent as a POST request.

The basic code to send a message to the messenger API:

// Assuming ans variable has the reply by our chatbot.
// setting options to request the chat api of viber.
var options1 = {
    method: 'POST',
    url: MESSENGER_API_URL,
    headers: headerBody,
    body:
    {
        // other properties dependent to the messenger
        text: ans
        // the property name can be different from 'text'
    },
    json: true
};

// request to the api of messenger.
request(options1, function (error1, res, body1) {
    if (error1) 
        throw new Error(error1);
    console.log(body1);
});

In case of Viber, we set up an options variable and request Viber’s chat API:

// setting options to request the chat api of viber.
var options1 = {
    method: 'POST',
    url: 'https://chatapi.viber.com/pa/send_message',
    headers: headerBody,
    body:
    {
        receiver: req.body.sender.id,
        min_api_version: 1,
        sender:
        {
            name: 'Susi',
            avatar: ''
        },
        tracking_data: 'tracking data',
        type: 'text',
        text: ans
    },
    json: true
};

// request to the chat api of viber.
request(options1, function (error1, res, body1) {
    if (error1) throw new Error(error1);
        console.log(body1);
});

In the case of Line messenger, we use its reply API:

const answer = {
    type: 'text',
    text: ans
};

// use reply API
return client.replyMessage(event.replyToken, answer);

Using react-url-query in SUSI Chat

For SUSI Web Chat, I needed a query parameter which can be passed to the components directly to activate SUSI dreams in my textarea using just the URL, which is not easy when one is using React Router. React URL Query is a package for managing state through query parameters in the URL in React. It integrates well with React Router and Redux and provides additional tools specifically targeted at serializing and deserializing state in URL query parameters. So, for example, if one wants to pass some parameters to populate a component directly through the URL, one can use react-url-query. E.g. http://chat.susi.ai/?dream=fossasia will populate fossasia in our textarea section without actually typing the term in the textarea.

So this in the URL,

Will produce this in the textarea,

To achieve this, the following steps are required:

  1. First we proceed with installing the packages (Dependencies  – history)
npm install history --save
npm install react-url-query --save
  2. We then instantiate a history in the component where we want to listen to the parameters, as in the following code. Our class ChatApp is where we want to pass the params.

ChatApp.react.js

import history from '../history'; 
//Import the history object from the History package.
    
 // force an update if the URL changes inside the componentDidMount function
  componentDidMount() {
      history.listen(() => this.forceUpdate());
   }
  3. Next, we define the props of the parameters in our Message Section. For that we need the following props:
  • urlPropsQueryConfig – this is where we define our URL config.
  • static propTypes – the query param to which we want to pass the value, so for me it's dream.
  • The defaultProps, when no such value is being passed to our param, should be left blank.
  • And then we finally assign the props.
  • This is then passed to the Message Composer section, from where we receive the value passed.

MessageSection.react.js

// Adding the UrlConfig
const urlPropsQueryConfig = {
  dream: { type: UrlQueryParamTypes.string }
};

 // Defining the query param inside our ClassName
  static propTypes = {
    dream: PropTypes.string
  }
// Setting the default param

  static defaultProps = {
    dream: ''
  }

 //Assigning the props inside the render() function
    const {
      dream
    } = this.props;
 //Passing the dream to the MessageComposer Section

                  <MessageComposer
                    threadID={this.state.thread.id}
                    theme={this.state.darkTheme}
                    dream={dream} />                
//Exporting our Class

export default addUrlProps({ urlPropsQueryConfig })(ClassName);
  4. Next we update the Message Composer section with the props we passed. For this we first check if the prop is empty; we don't populate it in our textarea if it is, otherwise we populate the textarea with the value 'dream ' + props.dream, so the value passed in the URL is prepended with the word dream to enable the 'dream value' in our textarea.

The full file is available at MessageComposer.js

 //Add Check to the constructor
 constructor(props) {
    super(props);
    this.state = {text: ''};
    if(props.dream !== '') {
      // Setting the text as received: 'dream <dreamPassed>'
      this.state = {text: 'dream ' + props.dream};
    }
  }
// Populate the textarea
        <textarea
          name="message"
          value={this.state.text}
          onChange={this._onChange.bind(this)}
          onKeyDown={this._onKeyDown.bind(this)}
          ref={(textarea)=> { this.nameInput = textarea; }}
          placeholder="Type a message..."
        />
// Add props to the component
MessageComposer.propTypes = {
  /* other props */,
  dream: PropTypes.string //Setting Proptypes to receive the prop from the MessageSection
};

Now we have the full code working for querying any dream. Head over to chat.susi.ai?dream=fossasia and change fossasia to see the text change.


Using SUSI as your dictionary

SUSI can be taught to give responses from APIs as well. I made use of an API called Datamuse, which is a premier search engine for English words, indexing 10 million unique words and phrases in more than 1000 dictionaries and glossaries.
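
To get a feel for the data before writing any rules: a request such as https://api.datamuse.com/words?ml=happy returns a JSON array of word objects roughly of the shape sketched below (the words and scores here are only illustrative). This is why the skills that follow use "path":"$.[0]" to pick the first entry of the array.

[
    { "word": "glad", "score": 52000 },
    { "word": "joyful", "score": 45000 },
    { "word": "cheerful", "score": 41000 }
]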

1. First, we head towards creating our dream pad for creating rules for the skill. To do this we need to create a dream at dream.susi.ai and give it a name, say dictionary.

2. After that one needs to go to the API and check the response generated.

3. Going through the docs of the API, one can create various queries to produce informative responses as follows –

  • Word with a similar meaning.

define *| Meaning of *| one word for *

!console: $word$
{
"url":"https://api.datamuse.com/words?ml=$1$",
"path":"$.[0]"
}
eol
  • Word related to something that starts with a given letter.

word related to * that start with the letter *

!console: $word$
{
"url":"https://api.datamuse.com/words?ml=$1$&sp=$2$*",
"path":"$.[0]"
}
eol

  • Word that sounds like a given word.

word that sound like *|sounding like *

!console: $word$
{
"url":"https://api.datamuse.com/words?sl=$1$",
"path":"$.[0]"
}
eol

  • Words that are spelled similarly to a given word.

words that are spelled similarly to *| similar spelling to *| spelling of *

!console: $word$
{
"url":"https://api.datamuse.com/words?sp=$1$",
"path":"$.[0]"
}
eol

  • Word that rhymes with a given word.

rhyme *| word rhyming with *

!console: $word$
{
"url":"https://api.datamuse.com/words?rel_rhy=$1$",
"path":"$.[0]"
}
eol

  • Adjectives that are often used to describe a given word.

adjective to describe *|show adjective for *|adjective for *

!console: $word$
{
"url":"https://api.datamuse.com/words?rel_jjb=$1$",
"path":"$.[0]"
}
eol

  • Suggestions for a given word.

suggestions for *| show words like *| similar words to * | words like *

!console: $word$
{
"url":"https://api.datamuse.com/sug?s=$1$",
"path":"$.[0]"
}
eol

This is a sample query response for define *

To create more dictionary skills, go to http://dream.susi.ai/p/dictionary and add skills using the API. To contribute by adding more skills, send a pull request to susi_skill_data. To test the skills, you can go to chat.susi.ai.

20 Amazing Things SUSI can do for You

SUSI.AI has a collection of varied skills in numerous fields such as knowledge, entertainment, problem solving, small talk, assistants etc. Here's a list of the top skills SUSI possesses.

Knowledge Based

  1. Ask SUSI to describe anything.

Sample Queries describe *

  2. Ask SUSI the distance between any two cities.

Sample queries – distance between * and *|What is distance between * and * ?| What is distance between * and *         

  3. Ask SUSI about your site's rank.

Sample Query – site rank of *                

  4. Ask SUSI to know the location of any place.

Sample Queries – where is *

        

  5. Ask SUSI the time in any city.

 Sample Query – current time in *

        

  6. Ask SUSI the weather information of any city.

Sample Queries – temperature in * , hashtags * *, mentions * *, weather in *, Tell me about humidity in *|What is humidity in *|Humidity in *|* Humidity, Tell me tomorrow’s weather in *|Weather forecast of *

        

  7. Ask SUSI to wiki about anything.

Sample Query – wiki *

            

  8. Ask SUSI about any word, words etc.

Sample Queries – define *| Meaning of *| one word for *, word related to * that start with the letter *, word that sound like *|sounding like *, words that are spelled similarly to *| similar spelling to *| spelling of *, rhyme *| word rhyming with *, adjective to describe *|show adjective for *|adjective for *, suggestions for *| show words like *| similar words to * | words like *

  9. Ask SUSI about a day in the calendar.

Sample Queries – Date * ?, Day * ?, Day on year * month * date *?

        

  10. Ask SUSI to convert a currency to USD for you.

Sample Queries –  convert * to USD

        

Problem Solving Based

  11. Ask SUSI to solve a problem for you in Mathematics.

  Sample Queries – compute *| Compute *| Calculate *| calculate *

        

Entertainment Based

  12. Ask SUSI to draw a card for you.

Sample Query – draw a card

        

  13. Ask SUSI to toss a coin for you.

Sample Query – flip a coin

        

  14. Ask SUSI to tell you a Big Bang Theory joke.

Sample Query – * big bang theory| tell me about big bang theory|geek jokes|geek joke|big bang theory *

  15. Ask SUSI to generate a meme for you.

 Sample Query – get me a meme

        

        

  16. Ask SUSI to give you a recipe.

Sample Queries – * cook *, cook *|how to cook *|recipe for

 

  17. Ask SUSI to tell you a random joke.

Sample Queries – tell me a joke|recite me a joke|entertain me

  18. Ask SUSI to give you a random gif.

Sample Query – random gif

        

Assistants

  19. Ask SUSI to translate something for you.

Sample Queries – What is * in french|french for * , What is * in german|german for *, What is * in spanish|spanish for *,  What is * in hindi|hindi for *

  20. Ask SUSI to search anything for you.

Sample Queries – search *|query *|duckduckgo *

        

To contribute to the above skills you can follow the tutorial here. To test or chat with SUSI you can go to chat.susi.ai