Hotword Detection for SUSI Android with CMUsphinx

SUSI being an AI for conversational bots, hotword detection is a top priority for the community. Another requirement was an option for offline hotword detection. So I was searching for an API with these capabilities, and Sphinx by CMU was the obvious choice: it provides a robust mechanism for hotword detection.

What is CMUsphinx?

CMUsphinx is a leading open source speech recognition toolkit. It has different modules for the different tasks it needs to perform. Our requirement for SUSI is that it needs to be lightweight, so we are using Pocketsphinx.

Let us dive into the code and integrate SUSI with Pocketsphinx.

Building Pocketsphinx .AAR file

Git clone sphinxbase, pocketsphinx and pocketsphinx-android into the same folder using the commands below.

git clone https://github.com/cmusphinx/sphinxbase
git clone https://github.com/cmusphinx/pocketsphinx
git clone https://github.com/cmusphinx/pocketsphinx-android

Then import pocketsphinx-android into Android Studio and run the project. The .aar files pocketsphinx-android-5prealpha-debug.aar and pocketsphinx-android-5prealpha-release.aar will be created in build/outputs/aar.

Integrating Susi with Pocketsphinx

In Android Studio you need to import the above generated .aar files into your project. Just go to File > New > New module and choose Import .JAR/.AAR Package. After this, we need to add permissions to the project. Add the following permissions in AndroidManifest.xml.

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />

Import the following classes into your main activity.

import edu.cmu.pocketsphinx.Assets;
import edu.cmu.pocketsphinx.Hypothesis;
import edu.cmu.pocketsphinx.RecognitionListener;
import edu.cmu.pocketsphinx.SpeechRecognizer;
import edu.cmu.pocketsphinx.SpeechRecognizerSetup;

Next we need to sync the assets we get from the .aar file into our project. Edit the app/build.gradle build file to run assets.xml, by adding the following code to build.gradle.

ant.importBuild 'assets.xml'
preBuild.dependsOn(list, checksum)
clean.dependsOn(clean_assets)

Now all the Gradle import and sync errors should disappear and you should be good to go. You can start your recognizer by adding this code to your activity.

recognizer = defaultSetup()
        .setAcousticModel(new File(assetsDir, "en-us-ptm"))
        .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
        .getRecognizer();
recognizer.addListener(this);

Setting up the decoder is a lengthy process that involves many operations, so it is recommended to run it inside an async task (a sketch follows the search definitions below). The following calls configure the decoder's searches; they essentially do the acoustic and language modelling of speech.

// Create keyword-activation search.
recognizer.addKeyphraseSearch(KWS_SEARCH, KEYPHRASE);

// Create grammar-based searches.
File menuGrammar = new File(assetsDir, "menu.gram");
recognizer.addGrammarSearch(MENU_SEARCH, menuGrammar);

// Next search for digits
File digitsGrammar = new File(assetsDir, "digits.gram");
recognizer.addGrammarSearch(DIGITS_SEARCH, digitsGrammar);

// Create language model search.
File languageModel = new File(assetsDir, "weather.dmp");
recognizer.addNgramSearch(FORECAST_SEARCH, languageModel);
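As recommended above, this setup can be wrapped in an async task. Below is a minimal sketch of that pattern, assuming a setupRecognizer(File) helper that contains the defaultSetup() and add*Search() calls shown above, and MainActivity as a placeholder for your activity class (the official pocketsphinx-android demo follows the same structure):

new AsyncTask<Void, Void, Exception>() {
    @Override
    protected Exception doInBackground(Void... params) {
        try {
            // copy the model files bundled in the .aar to device storage
            Assets assets = new Assets(MainActivity.this);
            File assetsDir = assets.syncAssets();
            setupRecognizer(assetsDir);
        } catch (IOException e) {
            return e;
        }
        return null;
    }

    @Override
    protected void onPostExecute(Exception result) {
        if (result == null) {
            // setup succeeded: start listening for the hotword
            recognizer.startListening(KWS_SEARCH);
        }
    }
}.execute();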

Speech recognition ends at the onEndOfSpeech callback of the recognizer listener. We can then call recognizer.stop() or recognizer.cancel(): cancel() discards the recognition, while stop() causes the final result to be passed to you in the onResult callback. During recognition you will get partial results in the onPartialResult callback.
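A hedged sketch of those listener callbacks, assuming the activity implements RecognitionListener and reuses the KWS_SEARCH and KEYPHRASE constants from above:

@Override
public void onPartialResult(Hypothesis hypothesis) {
    if (hypothesis == null)
        return;
    // hotword heard: stop() delivers the final hypothesis to onResult()
    if (hypothesis.getHypstr().equals(KEYPHRASE)) {
        recognizer.stop();
    }
}

@Override
public void onResult(Hypothesis hypothesis) {
    if (hypothesis != null) {
        Log.d("SUSI", "Recognized: " + hypothesis.getHypstr());
    }
}

@Override
public void onEndOfSpeech() {
    // return to hotword detection once an utterance ends
    if (!recognizer.getSearchName().equals(KWS_SEARCH)) {
        recognizer.stop();
        recognizer.startListening(KWS_SEARCH);
    }
}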

Now we have integrated Pocketsphinx with SUSI.AI in Android.


Integration Testing of an Ember Component in Open Event Frontend

Open Event Frontend uses Ember components which are reused several times throughout the project. Making these components is one thing, but we should also ensure they work as expected, and for that we need integration tests. How do we write an integration test for the event-map component? Three files are generated when we generate the event-map component from the command shell. They are namely:

event-map.js

event-map.hbs

event-map-test.js

We are familiar with the above three files as:

  1. event-map.js helps us to provide properties to our event-map.hbs
  2. event-map.hbs is where we define the structure of our component.
  3. event-map-test.js is where we define integration tests for our component, which is the current topic of discussion.

Testing

In order to test the rendering of a component, we define an integration test for it. In integration tests we don't have to launch our whole application and navigate to the location where the component is present, which makes them ideal for testing components. Have a look at the following sample integration test file.

import { test } from 'ember-qunit';
import moduleForComponent from 'open-event-frontend/tests/helpers/component-helper';
import hbs from 'htmlbars-inline-precompile';

moduleForComponent('public/event-map', 'Integration | Component | public/event map');

let event = Object.create({ latitude: 37.7833, longitude: -122.4167, locationName: 'Sample event location address' });

test('it renders', function(assert) {
  this.set('event', event);
  this.render(hbs `{{public/event-map event=event}}`);
  assert.equal(this.$('.address p').text(), 'Sample event location address');
});

So let’s break down this code line by line-

In the first line we have imported test from 'ember-qunit' (the default unit testing helper suite for Ember), which contains all the required test functions. For example, here we are using the test function to check the rendering of our component. We can use the test function multiple times to check multiple components.

Next, we are importing moduleForComponent from ‘open-event-frontend/tests/helpers/component-helper’ helper which helps in finding the component by its name.

Next, we are importing hbs from 'htmlbars-inline-precompile', which lets us precompile HTMLBars template strings within the tests via ES6 tagged template strings.

The moduleForComponent helper will find the component by name (event-map) and its template. The component we are testing here is map-related; therefore, to test it we need to pass a dummy object consisting of latitude, longitude and locationName, and this object's data must render correctly in our app.

Inside our test function, this.set('event', event) assigns a variable (here event) to our test context.

this.render(hbs `{{public/event-map event=event}}`) lets us create a new instance of the component by declaring the component in template syntax, as we would in our application.

assert.equal(this.$('.address p').text(), 'Sample event location address') compares the actual rendered text with the expected value.

For a simple component like a table or a basic UI-only component, it is not necessary to pass any object to the component; we can simply test that it renders, as in the sketch below. Apart from checking the component rendering, we can perform further checks on it using the test functions described above. After writing the integration tests for components, simply run ember test --server on the terminal to see whether all the tests pass.
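For instance, a minimal sketch of such a test, using a hypothetical UI-only component named public/event-card:

test('it renders', function(assert) {
  // no test context needs to be set for a component that takes no data
  this.render(hbs `{{public/event-card}}`);
  assert.ok(this.$().html().length, 'component rendered some markup');
});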

Find out more about Integration testing in ember –

Ember guides, EmberIgniter


Using Dynamic segments to Reduce Code Redundancy of Recurring HTML in Open Event ember Frontend

While developing web apps, at times we require the same HTML for different pages in our app. This leads to redundancy and low code reusability. It can be managed well in Ember.js by using dynamic segments in our routes.

In Open Event Frontend we have a route named /sessions where we want to show the details of the event's sessions, and we want to categorize the sessions as all, pending, accepted, confirmed and rejected, hence we want to create the following subroutes under it.

events/<event-id>/sessions
events/<event-id>/sessions/pending
events/<event-id>/sessions/accepted
events/<event-id>/sessions/confirmed
events/<event-id>/sessions/rejected

All of these subroutes show different data in a table with exactly the same fields. So if we use dynamic segments, we can decrease code redundancy and increase code reusability. Let us see how to add these subroutes as dynamic segments.

Firstly, we have to add a dynamic part (pending, accepted, confirmed, rejected) under our /sessions URL. For this, add the following code snippet to the router.js file. In place of list, write the name of the route handler which handles our subroutes, and in place of session_status write any identifier you want; session_status is the dynamic part which changes according to the subroute. In our case it will be pending, accepted, confirmed or rejected.

     this.route('sessions', function() {
        this.route('list', { path: '/:session_status' });
      });

To display all the sessions in our /sessions route, we have to edit the index route handler and return data from its model hook. Now when we hit the /sessions endpoint, the template which we are reusing, i.e. list.hbs, gets data from the model hook of this route handler.
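A minimal sketch of such an index route handler, assuming the sessions are reachable from the event model loaded by the parent route:

// app/routes/events/view/sessions/index.js (illustrative sketch)
import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    // list.hbs renders whatever this hook returns; here, all sessions
    return this.modelFor('events.view').get('sessions');
  }
});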

Next we need to define a model hook in our list.js file which returns data for the dynamic routes. In list.js we also want to change the title of the page according to the dynamic segment, which is available in the model hook through the params argument. We use this.set, which sets the provided key or path to the given value, to make the segment available to the titleToken function, where a simple switch case changes the title dynamically.
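A hedged sketch of such a list route handler; the store query is an assumption for illustration, not the exact Open Event Frontend code:

// app/routes/events/view/sessions/list.js (illustrative sketch)
import Ember from 'ember';

export default Ember.Route.extend({
  titleToken() {
    switch (this.get('session_status')) {
      case 'pending':
        return 'Pending Sessions';
      case 'accepted':
        return 'Accepted Sessions';
      case 'confirmed':
        return 'Confirmed Sessions';
      case 'rejected':
        return 'Rejected Sessions';
      default:
        return 'Sessions';
    }
  },
  model(params) {
    // the dynamic part of the URL arrives through params
    this.set('session_status', params.session_status);
    // fetch only the sessions matching the dynamic segment
    return this.store.query('session', { status: params.session_status });
  }
});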

Till now, we could access our dynamic segments only by manually changing the URL. Let's add links to make this transition for us. For this, we edit our sessions.hbs file and provide links using the link-to helper. One thing to take care of here is that we should pass the dynamic segment along with the link-to helper.

{{#link-to 'events.view.sessions.list' 'pending' class='item'}}

Here, pending is the dynamic segment which we are passing to our route handler. Similarly, we can make links for all our dynamic segments. Also in this template we should provide an outlet for the common template, i.e. list.hbs, which will be reused by the dynamic subroutes (a sketch follows). And finally, we define our reusable template in the list.hbs file.
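A minimal sketch of sessions.hbs, with one link shown and the outlet into which list.hbs is rendered:

{{!-- app/templates/events/view/sessions.hbs (sketch) --}}
{{#link-to 'events.view.sessions.list' 'pending' class='item'}}Pending{{/link-to}}
{{outlet}}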

Now when we click the different links we are redirected to different routes which use different data but the same template. We can also see this transition in our URL and title.

To know more about dynamic segments refer to Ember guide.


Preparing a release for Phimpme Android

Most of the essential features are now in a stable state in our Phimpme Android app, so we decided to release a beta version of the app. In FOSSASIA we follow a branch policy where all current development takes place in the development branch and the stable code resides in the master branch.

Releasing an app is not just building an apk and submitting it to the distribution platform; certain guidelines should be followed.

How I prepared the release apk for Phimpme

List down the features

We discussed on our public channel which features are now in a stable state and can be released. Features such as the account manager and Share Activity were excluded because they are incomplete and still under development; we don't want to ship an under-development feature. We then made a list of the available features in the different categories of Camera, Gallery and Share.

Follow the branch policy

The releasable and stable codebase should be on the master branch. It is good to follow the branch policy because it helps if we encounter any problem with the released apk: we can directly go to our master branch and check there. The development branch is very volatile because of the active development going on.

Every Contributor’s contribution is important

When we browse our old branches, such as master in our case, we see that they are generally behind development by hundreds of commits. A PR created from such a branch will therefore contain all of those old commits needed to bring the branch up to the latest state.

In this case, while opening and merging the PR, do not squash the commits.

Testing from Developer’s end

Testing is a very essential part of development. Before releasing, it is good practice for developers to test the app from their end. We tested the app's features on different devices with varying Android OS versions and screen sizes.

  • If there is any compatibility issue, report it right away; there are several tools in Android to fix such issues.
  • Support a variety of devices and screen sizes.

Changing package name, application ID

The package name and application ID are the vitals of an app: they uniquely identify it among all apps. For example, I changed the package name of the Phimpme app to org.fossasia.phimpme. Also check all the permissions the app requires.
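For reference, a minimal sketch of where the application ID is set in the app module's build.gradle:

android {
    defaultConfig {
        // uniquely identifies the app on devices and in stores
        applicationId "org.fossasia.phimpme"
    }
}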

Create Release build type

Build types are a great way to categorize builds; Debug and Release are the two defaults. There are various things in the codebase which we want only in debug mode, so when we create a release build it leaves out that part of the code.

Add build types in your application's build.gradle

buildTypes {
   release {
       minifyEnabled false
   }
}

Rebuild the app again and verify the build type from the left tab bar

Generate a signed apk and create a keystore (.jks) file

Navigate to Build → Generate Signed APK

Fill in all the details and proceed further to generate the signed apk in your home directory.

Adding Signing configurations in build.gradle

Copy the keystore (.jks) file to the root of the project and add the signing configurations to build.gradle

signingConfigs {
       config {
           keyAlias 'phimpme'
           keyPassword 'phimpme'
           storeFile file('../org.fossasia.phimpme.jks')
           storePassword 'XXXXXXX'
       }
   }
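One detail worth noting: for the release build to actually be signed with these credentials, the config has to be referenced from the release build type, roughly like this:

buildTypes {
    release {
        minifyEnabled false
        // sign release builds with the keystore defined above
        signingConfig signingConfigs.config
    }
}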

The installRelease Gradle task

Navigate to the right sidebar of Android Studio and click on Gradle.


Click on installRelease to install the release apk. It takes all the credentials from the signing configurations.
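The same task can also be run from a terminal in the project root, assuming the Gradle wrapper is checked in:

./gradlew installRelease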



Setting up Your Own Custom SUSI AI server

When you chat with any of the SUSI clients, all requests are made to the standard SUSI server. But let us say that you have implemented some feature on the server and want to test it. Simply launch your own server and you get your own SUSI. You can then change the option to use your own server in the SUSI Android and Web Chat clients.

The first step to getting your own copy of the SUSI server is to browse to https://github.com/fossasia/susi_server and fork it. Clone your fork to your local machine and make the changes that you want. There are various tutorials on how to add more servlets to the SUSI server, how to add new parameters to an existing action type, or similar modifications such as a memory servlet.

To start coding, you can either use an IDE like IntelliJ IDEA (download it from here) or open the files you want to modify in a text editor. To start the SUSI server via the command line terminal (and not an IDE), do the following:

 

Open the terminal in the cloned project directory. Now write in the following command (it is expected that you have Java installed and your Java environment variables set up):

./gradlew build

This will install all the project dependencies required by the project. Next, execute:

bin/start.sh

This will take a little time, but soon a window will pop up in your default browser and the server will start; the landing page of the website will load. Make the modifications to the server you were aiming for. The server is up now. You can access it at the local address:

http://127.0.0.1/

But if you try to add this local address in the URL field of any client on another device, it will not work and will give you an error, because the loopback address is reachable only from the machine the server runs on. To access the server from a client, open your terminal and execute

ipconfig	//if you are running a windows machine{not recommended}
ifconfig	//if you are running a linux machine{recommended for better performance}

 

This will give you a list of network interfaces and their IP addresses. Find the Wireless LAN adapter Wi-Fi entry and copy the IPv4 address. This is the address you need to enter in the server URL field of the client (do not forget to add http:// before the IP address). Now sign up and you will be good to go.



Implementing Speech To Text in SUSI iOS

SUSI, being an intelligent bot, has capabilities by which the user can provide input in a hands-free mode by talking, without needing to lift the phone to type. The speech-to-text feature in SUSI iOS is implemented with the help of the Speech framework, released alongside iOS 10, which enables continuous speech detection and transcription. Detection is really fast and supports around 50 languages and dialects, from Arabic to Vietnamese. The speech recognition API does its heavy task of detection on Apple's servers, which requires an internet connection. The API is also not always available on all newer devices, so it provides the ability to check whether a specific language is supported at a particular time.

How to use the Speech to Text feature?

  • Go to the view controller and import the Speech framework
  • Now, because the speech is transmitted over the internet and uses Apple's servers for computation, we need to ask the user for permission to use the microphone and speech recognition. Add the following two keys to the Info.plist file; they display alerts asking the user's permission to use speech recognition and to access the microphone. Add a specific sentence for each key string, which will be displayed to the user in the alerts (a sketch of the raw entries follows this list).
    1. NSSpeechRecognitionUsageDescription
    2. NSMicrophoneUsageDescription
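For reference, a sketch of how these entries look in the raw Info.plist source; the description strings here are placeholders:

<key>NSSpeechRecognitionUsageDescription</key>
<string>Speech recognition is used to convert your voice into text.</string>
<key>NSMicrophoneUsageDescription</key>
<string>The microphone is needed to record your voice queries.</string>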

The prompts appear automatically when the functionality is used in the app. Since we already have hotword recognition enabled, the microphone alert shows up automatically after login, and the speech one shows after the microphone button is tapped.

3) To request user authorization for speech recognition, we use the method SFSpeechRecognizer.requestAuthorization.

func configureSpeechRecognizer() {
        speechRecognizer?.delegate = self

        SFSpeechRecognizer.requestAuthorization { (authStatus) in
            var isEnabled = false

            switch authStatus {
            case .authorized:
                print("Autorized speech")
                isEnabled = true
            case .denied:
                print("Denied speech")
                isEnabled = false
            case .restricted:
                print("speech restricted")
                isEnabled = false
            case .notDetermined:
                print("not determined")
                isEnabled = false
            }

            OperationQueue.main.addOperation {

                // handle button enable/disable

                self.sendButton.tag = isEnabled ? 0 : 1

                self.addTargetSendButton()
            }
        }
    }

4)   Now, we create instances of AVAudioEngine, SFSpeechRecognizer, SFSpeechAudioBufferRecognitionRequest and SFSpeechRecognitionTask.

let speechRecognizer = SFSpeechRecognizer(locale: Locale.init(identifier: "en-US"))
var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
var recognitionTask: SFSpeechRecognitionTask?
let audioEngine = AVAudioEngine()

5)  Create a method called readAndRecognizeSpeech. Here we do all the recognition-related work. We first check whether a recognitionTask is already running, and if so we cancel it.

if recognitionTask != nil {
  recognitionTask?.cancel()
  recognitionTask = nil
}

6)  Now, create an instance of AVAudioSession to prepare for audio recording: we set the category of the session to record, set the mode and activate it. Since these calls might throw an exception, they are wrapped in a do-catch block.

let audioSession = AVAudioSession.sharedInstance()

do {

    try audioSession.setCategory(AVAudioSessionCategoryRecord)

    try audioSession.setMode(AVAudioSessionModeMeasurement)

    try audioSession.setActive(true, with: .notifyOthersOnDeactivation)

} catch {

    print("audioSession properties weren't set because of an error.")

}

7)  Instantiate the recognitionRequest.

recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

8) Check if the device has an audio input; otherwise raise a fatal error.

guard let inputNode = audioEngine.inputNode else {

fatalError("Audio engine has no input node")

}

9)  Enable recognitionRequest to report partial results and start the recognitionTask.

recognitionRequest.shouldReportPartialResults = true

recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in

  var isFinal = false // to indicate if final result

  if result != nil {

    self.inputTextView.text = result?.bestTranscription.formattedString

    isFinal = (result?.isFinal)!

  }

  if error != nil || isFinal {

    self.audioEngine.stop()

    inputNode.removeTap(onBus: 0)

    self.recognitionRequest = nil

    self.recognitionTask = nil

  }
})

10) Next, we write the method that performs the actual speech recognition. This will record and process the speech continuously.

  • First, we obtain the device's audio input node using .inputNode
  • .installTap configures the node and sets up the buffer size and the format
let recordingFormat = inputNode.outputFormat(forBus: 0)

inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, _) in

    self.recognitionRequest?.append(buffer)

}

11)  Next, we prepare and start the audio engine.

audioEngine.prepare()

do {

  try audioEngine.start()

} catch {

  print("audioEngine couldn't start because of an error.")

}

12)  Create a method that stops the Speech recognition.

func stopSTT() {

    print("audioEngine stopped")

    audioEngine.inputNode?.removeTap(onBus: 0)

    audioEngine.stop()

    recognitionRequest?.endAudio()

    indicatorView.removeFromSuperview()



    if inputTextView.text.isEmpty {

        self.sendButton.setImage(UIImage(named: ControllerConstants.mic), for: .normal)

    } else {

        self.sendButton.setImage(UIImage(named: ControllerConstants.send), for: .normal)

    }

        self.inputTextView.isUserInteractionEnabled = true
}

13)  Update the view while speech recognition is running, indicating its status to the user. Add the code below just after the audio engine preparation.

// Listening indicator

self.indicatorView.frame = self.sendButton.frame

self.indicatorView.isUserInteractionEnabled = true

let gesture: UITapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(startSTT))

gesture.numberOfTapsRequired = 1

self.indicatorView.addGestureRecognizer(gesture)
self.sendButton.setImage(UIImage(), for: .normal)

indicatorView.startAnimating()

self.sendButton.addSubview(indicatorView)

self.sendButton.addConstraintsWithFormat(format: "V:|[v0(24)]|", views: indicatorView)

self.sendButton.addConstraintsWithFormat(format: "H:|[v0(24)]|", views: indicatorView)

self.inputTextView.isUserInteractionEnabled = false




How to make a SUSI chat bot skill for Cortana

Cortana is an assistant from Microsoft, just like Siri from Apple. We can make a skill for Cortana, i.e. create our own bot within Cortana, so that we can use our bot through Cortana by activating it with an invocation name. To create the SUSI Cortana skill we will use the SUSI API and the Microsoft Bot Framework. First of all we have to make a bot on the Microsoft Bot Framework; to do so, follow this tutorial, replacing its code with the code given below.

var restify = require('restify');
var builder = require('botbuilder');
var request = require('request');
var http = require('http');

// Setup Restify Server
var server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 8080, function() {
   console.log('listening to ', server.name, server.url);
});
// Create chat bot
var connector = new builder.ChatConnector({
   appId: process.env.appId,
   appPassword: process.env.appPassword
});

setInterval(function() {
       http.get(process.env.HerokuURL);
   }, 1200000);

var bot = new builder.UniversalBot(connector);
server.post('/api/messages', connector.listen());

//getting response from SUSI API upon receiving messages from User
bot.dialog('/', function(session) {
   var msg = session.message.text;
   var options = {
       method: 'GET',
       url: 'http://api.asksusi.com/susi/chat.json',
       qs: {
           timezoneOffset: '-330',
           q: session.message.text
       }
   };
//sending request to SUSI API for response
   request(options, function(error, response, body) {
       if (error) throw new Error(error);
       var ans = (JSON.parse(body)).answers[0].actions[0].expression;
       //responding back to user
       session.say(ans,ans);

   })
});

After making the bot we have to configure it for Cortana. To do so, select Cortana from the list of channels for your bot on https://dev.botframework.com and fill in the required details.

The invocation name is the name you call to use your bot. There are many ways to invoke your skill, like "Run <invocation name>" or "Launch <invocation name>"; here is the complete list of invocation commands. The way the SUSI skill in Cortana actually works is that when the user invokes the SUSI skill, Cortana acts as middleware to send and receive responses between the skill and the user. After configuring your bot for Cortana, test it with Cortana using the same Hotmail account you used for the Bot Framework, by calling the invocation name. Here is a demo video of testing the SUSI skill: https://youtu.be/40yX16hxcls.

You have now completed creating a skill for Cortana. If you want to publish it to the world, select the manage Cortana dashboard from your bots on https://dev.botframework.com and publish it by filling in the form.

If you want to learn more about the Bot Framework, refer to https://docs.microsoft.com/en-us/Bot-Framework/index

References:
SUSI Cortana Repository: https://github.com/fossasia/susi_cortana_skill
Tutorial for bot: https://blog.fossasia.org/susi-ai-bots-with-microsofts-bot-framework/


How to make SUSI kik bot

To make the SUSI kik bot, you first have to configure a bot. To do so, go to https://dev.kik.com/ and make your bot by scanning the code with your kik mobile app. You have to answer a few questions from Botsworth to create your bot.

 

After logging in to your dashboard, get your API key by going to the Configuration menu on top.

After your bot is set up, follow the steps given below to create your first SUSI kik bot.

Steps:

  1. Install Node.js from the link below on your computer if you haven't installed it already.
    https://nodejs.org/en/
  2. Create a folder with any name, open a shell, and change your current directory to the new folder you created.
  3. Type npm init in the command line and enter details like name, version and entry point.
  4. Create a file with the same name that you wrote as the entry point in the step above (i.e. index.js); it should be in the same folder you created.
  5. Type the following commands in the command line: npm install --save @kikinteractive/kik. After @kikinteractive/kik is installed, type npm install --save http; after http is installed, type npm install --save request. When all the modules are installed, check your package.json; these modules will be included within the dependencies section.

  6. Your package.json file should look like this.

     
    {
      "name": "susi_kikbot",
      "version": "1.0.0",
      "description": "susi kik bot",
      "main": "index.js",
      "scripts": {
        "test": "node tests/sample.js"
      },
      "license": "MIT",
      "dependencies": {
        "@kikinteractive/kik": "^2.0.11",
        "request": "^2.75.0"
      }
    }
    
    
  7. Copy the following code into the file you created, i.e. index.js, and add your bot name to it in place of username.

    var http = require('http');
    var Bot = require('@kikinteractive/kik');
    var request = require('request');

    var bot = new Bot({
        username: '<your-bot-name>',
        apiKey: process.env.API_KEY,
        baseUrl: process.env.HEROKU_URL
    });

    bot.updateBotConfiguration();

    // query the SUSI API with the user's message and reply with the answer
    bot.onTextMessage((message) => {
        request('http://api.asksusi.com/susi/chat.json?q=' + encodeURI(message.body), function(error, response, body) {
            var answer;
            if (!error && response.statusCode == 200) {
                answer = JSON.parse(body).answers[0].actions[0].expression;
            } else {
                answer = "Oops, looks like Susi is taking a break, she will be back soon";
            }
            // reply inside the callback, once the answer is available
            message.reply(answer);
        });
    });

    http.createServer(bot.incoming()).listen(process.env.PORT || 5000);
    
    
  8. Before deploying our bot to Heroku (so that it can be active) we have to make a GitHub repository for the chatbot. To make the GitHub repository, follow these steps: in the shell, change the current directory to the folder we created above and write
    git init
    git add .
    git commit -m "initial"
    git remote add origin <URL for remote repository>
    git remote -v
    git push -u origin master

    You will get the URL for the remote repository by creating a repository on your GitHub and copying its link.

  9. To deploy your bot to Heroku, you need an account on Heroku; after making an account, create an app.

     

  10. Deploy the app using the GitHub deployment method.
  11. Select the automatic deployment method.
  12. Go to the settings of your app, open config variables, paste your bot's API key and name it API_KEY; also get your Heroku app URL and make a variable for it named HEROKU_URL.
  13. Your SUSI bot is ready; now test it by messaging it.

If you want to learn more about the kik API, refer to https://dev.kik.com/#/docs/messaging

 

Resources:

Github Repository: https://github.com/fossasia/susi_kikbot
KIK bot API: https://dev.kik.com/#/docs/messaging


SUSI AI Bots with Microsoft’s Bot Framework

The Bot Framework is used to build intelligent chatbots and supports .NET, Node.js and REST. To learn about building bots using the Bot Framework, go to https://docs.microsoft.com/en-us/bot-framework/bot-builder-overview-getstarted. Now, to build the SUSI AI bot for different platforms like Facebook, Telegram, kik and Skype, follow the steps given below.

  1. Install Node.js from the link below on your computer if you haven't installed it already.
    https://nodejs.org/en/
  2. Create a folder with any name, open a shell, and change your current directory to the new folder you created.
  3. Type npm init in the shell and enter details like name, version and entry point.
  4. Create a file with the same name that you wrote as the entry point in the step above (i.e. index.js); it should be in the same folder you created.
  5. Type the following commands in the command line: npm install --save restify. After restify is installed, type npm install --save botbuilder; after botbuilder is installed, type npm install --save request. When all the modules are installed, check your package.json; these modules will be included within the dependencies section.

  6. Your package.json file should look like this.

    {
      "name": "skype-bot",
      "version": "1.0.0",
      "description": "SUSI AI Skype Bot",
      "main": "app.js",
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1",
        "start": "node app.js"
      },
      "author": "",
      "license": "ISC",
      "dependencies": {
        "botbuilder": "^3.8.1",
        "request": "^2.81.0",
        "restify": "^4.3.0"
      }
    }
    
  7. Copy the following code into the file you created, i.e. index.js.

    var restify = require('restify');
    var builder = require('botbuilder');
    var request = require('request');
    
    // Setup Restify Server
    var server = restify.createServer();
    server.listen(process.env.port || process.env.PORT || 8080, function() {
       console.log('%s listening to %s', server.name, server.url);
    });
    
    // Create chat bot
    var connector = new builder.ChatConnector({
     appId: process.env.appId,
     appPassword: process.env.appPassword
    });
    
    var bot = new builder.UniversalBot(connector);
    server.post('/api/messages', connector.listen());
    //When bot is added by user
    bot.on('contactRelationUpdate', function(message) {
       if (message.action === 'add') {
           var name = message.user ? message.user.name : null;
           var reply = new builder.Message()
               .address(message.address)
               .text("Hello %s... Thanks for adding me. You can talk to SUSI now.", name || 'there');
           bot.send(reply);
       }
    });
    //getting response from SUSI API upon receiving messages from User
    bot.dialog('/', function(session) {
       var options = {
           method: 'GET',
           url: 'http://api.asksusi.com/susi/chat.json',
           qs: {
               timezoneOffset: '-330',
               q: session.message.text
           }
       };
    //sending request to SUSI API for response 
       request(options, function(error, response, body) {
           if (error) throw new Error(error);
           var ans = (JSON.parse(body)).answers[0].actions[0].expression;
           //responding back to user
           session.send(ans);
    
       })
    });
    
  8. You have to replace appId and appPassword with your own ID and password, which you can get via the steps below.
  9. Sign in/sign up at https://dev.botframework.com/. After signing in, go to the My Bots option at the top of the page and create/register your bot. Enter the details of your bot and click on "Create Microsoft App ID and password".
  10. Leave the messaging endpoint for now; after getting the app ID and password we will write the messaging endpoint.
  11. Copy your app ID and password and save them for later use. Paste your app ID in the box given for ID on the bot registration page.

  12. Now we have to create a messaging endpoint to listen for requests. Make a GitHub repository and push the files in the folder we created above.

    In the command line, change the current directory to the folder we created above and write
    git init
    git add .
    git commit -m "initial"
    git remote add origin <URL for remote repository>
    git remote -v
    git push -u origin master

    You will get the URL for the remote repository by creating a repository on your GitHub and copying its link.

  13. Now we have to deploy this GitHub repository to Heroku to get the URL for the messaging endpoint. If you don't have an account on Heroku, sign up at https://www.heroku.com/; else just sign in and create a new app.
  14. Deploy your repository onto Heroku from the deploy option, choosing GitHub as the deployment method.
  15. Select automatic deployment so that any changes you make in the GitHub repository are deployed to Heroku.

  16. Open your app from the option on the top right, copy the link of your Heroku app and append /api/messages to it; enter this URL as the messaging endpoint.

    https://{Your_App_Name}.herokuapp.com/api/messages
  17. Register the bot and add the app ID and password you saved to your Heroku app in Settings → Config Variables.
  18. Now go to https://dev.botframework.com, then in My Bots go to your bot, click on the Skype bot, add it to your contacts and start chatting.
  19. You can connect the same bot to different channels like kik, Slack, Telegram, Facebook and many others.

    Add the different channels on your bot page and follow the respective guides for deploying onto the different platforms.

If you want to learn more about the Bot Framework, you can refer to https://docs.microsoft.com/en-us/Bot-Framework/index

Resources:
Code: https://github.com/fossasia/susi_skypebot
Bot Framework: https://docs.microsoft.com/en-us/bot-framework/bot-builder-overview-getstarted


Adding Twitter Integration with MVP Architecture in Phimpme Android

The account manager layout of Phimpme is set. Now we need to start adding different accounts to make it functional. We start with Twitter. Twitter functionality is integrated with the help of the Twitter Kit provided by Twitter itself; we followed the steps on the official installation guide.

Note: before starting, first go to apps.twitter.com and create a new app, adding the relevant information such as name, description, URL etc.

How twitter works in Phimpme

A dialog box appears when the user selects the add account option in the account manager. Select the Twitter option from it.

Twitter's guide suggests using the custom TwitterLoginButton for sign-in. But as we are using a common dialog box, how do we initiate login from there?

Using TwitterAuthClient

The TwitterAuthClient invokes the Twitter callback and pops up the login window. On authorizing the correct user, it enters the success method and starts a Twitter session, which gives us the information we want to store in the database, such as the username and access token.

client.authorize(getActivity(), new Callback<TwitterSession>() {
   @Override
   public void success(Result<TwitterSession> result) {

       // Creating twitter session, after user authenticates
       // in the twitter popup
       TwitterSession session = TwitterCore.getInstance()
               .getSessionManager().getActiveSession();
       TwitterAuthToken authToken = session.getAuthToken();

       // Writing values in Realm database
       account.setUsername(session.getUserName());
       account.setToken(String.valueOf(session.getAuthToken()));
   }

   @Override
   public void failure(TwitterException exception) {
       // Authorization failed or was cancelled by the user
       exception.printStackTrace();
   }
});

Working with MVP architecture to show Twitter data to the user in a RecyclerView

Finally, after a successful Twitter login, we also need to show the user that they are successfully logged in to the Phimpme app, and provide a sign-out feature so that the user can log out of Twitter anytime.

The account manager has a RecyclerView which takes data from the database and shows it to the user.

Steps:

class AccountContract {
   internal interface View : MvpView{

       /**
        * Setting up the recyclerView. The layout manager, decorator etc.
        */
       fun setUpRecyclerView()

       /**
        * Account Presenter calls this function after taking data from Database Helper Class
        */
       fun setUpAdapter(accountDetails: RealmResults<AccountDatabase>)

       /**
        * Shows the error log
        */
       fun showError()
   }

   internal interface Presenter {

       /**
        * function to load data from database, using Database Helper class
        */
       fun loadFromDatabase()

       /**
        * setting up the recyclerView adapter from here
        */
       fun handleResults(accountDetails: RealmResults<AccountDatabase>)
   }
}

This class clearly shows which functions are in the View and which are in the Presenter. The View interface extends MvpView, which holds some common functions such as onComplete().

  • Implement the View interface in AccountActivity

class AccountActivity : ThemedActivity(), AccountContract.View

And perform all the actions which happen on the View, such as setting up the RecyclerView:

override fun setUpRecyclerView() {
   val layoutManager = LinearLayoutManager(this)
   accountsRecyclerView!!.setLayoutManager(layoutManager)
   accountsRecyclerView!!.setAdapter(accountAdapter)
}
  • Main Business Logic should not be in Activity class

That's why, using MVP, we have far fewer lines of code in our main activity: it separates the work into different zones, which helps developers work on and maintain the code easily and helps other users contribute.

So, in our case, I need to update the RecyclerView adapter by taking data from the database. That work should not be in the activity, which is why I created a class AccountPresenter and extended it with the Presenter interface of the contract class:

class AccountPresenter extends BasePresenter<AccountContract.View>
       implements AccountContract.Presenter

I added the function which takes care of loading data from the database:

@Override
public void loadFromDatabase() {
   handleResults(databaseHelper.fetchAccountDetails());
}
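For completeness, a hedged sketch of the matching handleResults implementation; getMvpView() is assumed from the ribot-style BasePresenter referenced above:

@Override
public void handleResults(RealmResults<AccountDatabase> accountDetails) {
    if (accountDetails != null) {
        // hand the database results to the view's adapter
        getMvpView().setUpAdapter(accountDetails);
    } else {
        getMvpView().showError();
    }
}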
  • Always consider the future and keep an eye on future development

Right now I don't need to do a lot with the database; I just need to fetch the whole data and show it in the View. But I need to take care of future development in this part as well. There might be more complex operations on the database in the future, and they would create complexity in the codebase if it is not architected well.

So I created a DatabaseHelper class which takes care of all the database operations. The advantage of this is that anyone who has to contribute to the database part or debug the database need not search every activity and scroll through lines of code; the work will be in DatabaseHelper for sure.

I added DatabaseHelper in the data package:

public class DatabaseHelper {

   private Realm realm;

   public DatabaseHelper(Realm realm) {
       this.realm = realm;
   }

   public RealmResults<AccountDatabase> fetchAccountDetails(){
       return realm.where(AccountDatabase.class).findAll();
   }

   public void deleteSignedOutAccount(String accountName){
       final RealmResults<AccountDatabase> deletionQueryResult =  realm.where(AccountDatabase.class)
               .equalTo("name", accountName).findAll();

       realm.executeTransaction(new Realm.Transaction() {
           @Override
           public void execute(Realm realm) {
               deletionQueryResult.deleteAllFromRealm();
           }
       });
   }
}


Browse the Phimpme GitHub Repository for complete illustration.

Resources

  1. Twitter Kit overview: https://dev.twitter.com/twitterkit/android/overview
  2. Login with Twitter: https://dev.twitter.com/twitterkit/android/log-in-with-twitter
  3. MVP Architecture by Ribot: https://github.com/ribot/android-boilerplate

 
