Testing Asynchronous Code in Open Event Orga App using RxJava

In the last blog post, we saw how to test complex interactions in our apps using behaviors stubbed with Mockito. In this post, I’ll be talking about how to test RxJava components such as Observables. RxJava itself provides methods to unit test your reactive streams, so you don’t have to go out of your way to set up contraptions like callback captors or implement your own interfaces as stubs of the original ones. The test support built into RxJava also lets you assert the fate of your stream, like confirming that it got subscribed or that an error was thrown, and inspect individual emitted items, by value or with predicate logic of your own. We have used this heavily in Open Event Orga App (Github Repo) to check that the app correctly loads and refreshes resources from the correct source. We also capture certain triggers happening on events, like the saving of data on reloading, so that the database remains in a consistent state. Here, we’ll look at some basic examples and move to some complex ones later. So, let’s start.

public class AttendeeRepositoryTest {

    private AttendeeRepository attendeeRepository;

    @Before
    public void setUp() {
        // Create the class under test; inject mocked dependencies here as needed
        attendeeRepository = new AttendeeRepository();
    }

    @Test
    public void shouldReturnAttendeeByName() {
        // TODO: Implement test
    }

}

 

This is our basic test class setup with general JUnit settings. We’ll start by writing our tests, the first of which will check that we can get an attendee by name. The Attendee class is a model class with firstName and lastName, and we will be checking whether we get a valid attendee by passing a full name. Note that although we will be talking about the implementation of the code we are writing tests for, it will only be in an abstract manner, meaning we won’t be dealing with production code, just the tests.

As we know, Observables provide a stream of data. But here, we are only concerned with a single attendee. Technically, we should be using Single, but for generality, we’ll stick with Observables.
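If you do go with Single, the same testing entry point applies; a minimal sketch:

// Single exposes the same test() API as Observable
Single.just("John Wick")
    .test()
    .assertValue("John Wick");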

Someone coming from a JUnit background would be tempted to write the code below.

Attendee attendee = attendeeRepository.getByAttendeeName("John Wick")
    .blockingFirst();

assertEquals("John Wick", attendee.getFirstName() + " " + attendee.getLastName());

 

So, what this code is doing is blocking the thread till the first attendee is provided in the stream and then checking that the attendee is actually John Wick.

While this code works, it is not reactive. With the reactive way of testing, not only can you test more complex logic than this with less verbosity, but it naturally provides ways to test other behaviors of reactive streams such as subscriptions, errors, completions, etc. We’ll only be covering a few. So, let’s see the reactive version of the above test.

attendeeRepository.getByAttendeeName("John Wick")
    .firstElement()
    .test()
    .assertNoErrors()
    .assertValue(attendee -> "John Wick".equals(
        attendee.getFirstName() + " " + attendee.getLastName()
    ));

 

So clean and complete. Just by calling test() on the returned observable, we get a whole suite of assertion methods, with which we not only tested the name but also that no errors occurred while getting the attendee.
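To give a taste of what else this suite offers, here is a small self-contained sketch using a plain Observable, with no app code involved:

Observable.just("John", "Neo")
    .test()                       // subscribes and returns a TestObserver
    .assertSubscribed()           // the stream was subscribed to
    .assertValues("John", "Neo")  // exactly these items, in this order
    .assertValueCount(2)          // number of emissions
    .assertComplete()             // onComplete was called
    .assertNoErrors();            // no onError occurred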

Testing for Network Error on loading of Attendees

OK, so let’s move towards a more realistic test. Suppose that you call this method on AttendeeRepository, and that you can fetch attendees from the network. So first, you want to handle the simplest case, that there should be an error if there is no connection. So, if you have (hopefully) set up your project using abstractions for the model using MVP, then it’ll be a piece of cake to test this. Let’s suppose we have a networkUtil object with a method isConnected.

The NetworkUtils class is a dependency of AttendeeRepository, and we have set it up as a mock in our test using Mockito. If this sounds unfamiliar, please read my previous article “The Joy of Testing with MVP”.

So, our test will look like this:

@Test
public void shouldStreamErrorOnNetworkDown() {
    when(networkUtils.isConnected()).thenReturn(false);
    
    attendeeRepository.getAttendees()
        .test()
        .assertErrorMessage("No Network");
}

 

Note that, if you don’t define the mock object’s behavior like I have here, attendeeRepository will likely throw an NPE as it will be calling isConnected() on an undefined object.

With RxJava, you get a whole lot of methods for each use case. Even for checking errors, you can assert on a particular Throwable class, on a predicate of your own over the Throwable, or on the error message, as I have shown in this case.
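Here is a quick sketch of all three flavors against a stream that is known to fail; the exception type is chosen arbitrarily for illustration:

Observable<Attendee> failing =
    Observable.error(new IllegalStateException("No Network"));

failing.test().assertError(IllegalStateException.class);             // by type
failing.test().assertError(t -> t.getMessage().contains("Network")); // by predicate
failing.test().assertErrorMessage("No Network");                     // by message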

Now, if you run the network test above, it’ll probably fail. That’s because if you are offloading the networking task to a different thread using the subscribeOn and observeOn methods, the test body may get detached from the main thread while the request completes. Furthermore, if you are testing an application made for Android, you would have used AndroidSchedulers.mainThread(), and since that is an Android dependency, the test will fail; well actually, crash. There used to be workarounds that created abstractions even for the RxJava schedulers, but RxJava2 provides a very convenient way to override the default schedulers in the form of RxJavaPlugins. Similarly, RxAndroidPlugins is present in the rx-android package. Let’s suppose you plan to use Schedulers.io() for asynchronous work and want to receive the stream on Android’s main thread, meaning you use AndroidSchedulers.mainThread() in the observeOn method. To override these schedulers with Schedulers.trampoline(), which queues your tasks and performs them one by one, like the main thread, your setUp will include this:

RxJavaPlugins.setIoSchedulerHandler(scheduler -> Schedulers.trampoline());
RxAndroidPlugins.setInitMainThreadSchedulerHandler(scheduler -> Schedulers.trampoline());

 

And if you are not using isolated tests and need to restore the default scheduler behavior after each test, you’ll need to add this to your tearDown method:

RxJavaPlugins.reset();
RxAndroidPlugins.reset();
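Putting the two snippets together, the scheduler plumbing of such a test class is just a @Before/@After pair; a minimal sketch, assuming JUnit 4 as in the tests above:

@Before
public void setUp() {
    // Replace the io and Android main-thread schedulers with trampoline for tests
    RxJavaPlugins.setIoSchedulerHandler(scheduler -> Schedulers.trampoline());
    RxAndroidPlugins.setInitMainThreadSchedulerHandler(scheduler -> Schedulers.trampoline());
}

@After
public void tearDown() {
    // Restore the default scheduler behavior for subsequent tests
    RxJavaPlugins.reset();
    RxAndroidPlugins.reset();
}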

Testing for Correct loading of Attendees

Now that we have tested that our repository correctly throws an error when the network is down, let’s test that it correctly loads attendees when the network is connected. For this, we’ll need to mock our EventService to return attendees when queried, since we don’t want our unit tests to actually hit the servers.

So, we’ll need to keep these things in mind:

  • Mock the network utility so that it reports that the device is connected to the Internet
  • Mock the EventService to return attendees when queried
  • Call the getter on attendeeRepository and test that it indeed returns the list of attendees

For these conditions, our test will look like this:

@Test
public void shouldLoadAttendeesSuccessfully() {
    List<Attendee> attendees = Arrays.asList(
        new Attendee(),
        new Attendee(),
        new Attendee()
    );

    when(networkUtils.isConnected()).thenReturn(true);
    when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

    attendeeRepository.getAttendees()
        .test()
        .assertValues(attendees.toArray(new Attendee[attendees.size()]));
}

 

The assertValues function asserts that exactly these values were emitted by the observable. And if you want to be more thorough, you can even verify that EventService’s getAttendees function was in fact called:

verify(eventService).getAttendees();

 

But the problem with this approach is that getAttendees returns an observable, and merely calling it does not necessarily mean that it was subscribed to and emitting results. So we also need to ensure that the returned observable was actually subscribed. And note that calling the normal test() method on an observable subscribes it for you, so a subscription assertion there would trivially pass.
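To see why verify() alone proves too little, consider this sketch; the bare service call stands in for hypothetical production code that obtains the observable but never subscribes to it:

when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

eventService.getAttendees(); // the mock records the call...

verify(eventService).getAttendees(); // ...so this passes, yet nothing was
                                     // subscribed and no attendees were emitted

To assert the subscription itself, let’s look at our final use case.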

Testing for saving of Attendees

In the Open Event Orga App, we have strived to create self-sufficient and intelligent classes, and our repository is built this way too. It detects when new attendees are loaded from the server and saves them in the database. Now we want to test this functionality.

In this test, there is an added dependency of DatabaseRepository for saving the attendees, which we will mock. The conditions for this test will be:

  • Network is connected
  • EventService returns attendees
  • DatabaseRepository mocks the saving of attendees

For DatabaseRepository’s save method, we’ll be returning a Completable, which will notify when the saving of data is completed. The primary purpose of this test will be to assert that this completable is indeed subscribed when the attendee loading is triggered. This will not only ensure that the correct function to save the attendees is called, but also that it is indeed triggered and not just left hanging after the call. So, our test will look like this.

@Test
public void shouldSaveAttendeesInDatabase() {
    List<Attendee> attendees = Arrays.asList(
        new Attendee(),
        new Attendee(),
        new Attendee()
    );

    TestObserver testObserver = TestObserver.create();
    Completable completable = Completable.complete()
        .doOnSubscribe(testObserver::onSubscribe);

    when(networkUtils.isConnected()).thenReturn(true);
    when(databaseRepository.save(attendees)).thenReturn(completable);
    when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

    attendeeRepository.getAttendees()
        .test()
        .assertNoErrors();

    testObserver.assertSubscribed();
}

 

Here, we created a separate test observer, arranged for it to be subscribed when the Completable is subscribed, and returned that Completable from the mocked save method. Finally, we asserted that the test observer was indeed subscribed.

You can create more complex use cases and assert subscriptions, errors, the emptiness of a stream and much more, using the built-in test functionality of RxJava2. So, that’s all for this blog; you can visit these links for more details on unit testing RxJava:

http://fedepaol.github.io/blog/2015/09/13/testing-rxjava-observables-subscriptions/

https://www.infoq.com/articles/Testing-RxJava


Implementing Speech To Text in SUSI iOS

SUSI, being an intelligent bot, lets the user provide input hands-free by talking, without even lifting the phone to type. The speech-to-text feature is available in SUSI iOS with the help of the Speech framework, released alongside iOS 10, which enables continuous speech detection and transcription. The detection is really fast and supports around 50 languages and dialects, from Arabic to Vietnamese. The speech recognition API does its heavy lifting on Apple’s servers, which requires an internet connection. The API is not always available on all devices, and it also provides the ability to check whether a specific language is supported at a particular time.

How to use the Speech to Text feature?

1) Go to the view controller and import the Speech framework.

2) Now, because the speech is transmitted over the internet and uses Apple’s servers for computation, we need to ask the user for permission to use the microphone and the speech recognition feature. Add the following two keys to the Info.plist file, which display alerts asking for the user’s permission to use speech recognition and to access the microphone. Add a specific sentence for each key string, which will be displayed to the user in the alerts.

    1. NSSpeechRecognitionUsageDescription
    2. NSMicrophoneUsageDescription

The prompts appear automatically when the functionality is used in the app. Since we already have hot word recognition enabled, the microphone alert shows up automatically after login, and the speech one shows up after the microphone button is tapped.

3) To request user authorization for speech recognition, we use the method SFSpeechRecognizer.requestAuthorization.

func configureSpeechRecognizer() {
        speechRecognizer?.delegate = self

        SFSpeechRecognizer.requestAuthorization { (authStatus) in
            var isEnabled = false

            switch authStatus {
            case .authorized:
                print("Autorized speech")
                isEnabled = true
            case .denied:
                print("Denied speech")
                isEnabled = false
            case .restricted:
                print("speech restricted")
                isEnabled = false
            case .notDetermined:
                print("not determined")
                isEnabled = false
            }

            OperationQueue.main.addOperation {

                // handle button enable/disable

                self.sendButton.tag = isEnabled ? 0 : 1

                self.addTargetSendButton()
            }
        }
    }

4) Now, we create instances of AVAudioEngine, SFSpeechRecognizer, SFSpeechAudioBufferRecognitionRequest, and SFSpeechRecognitionTask.

let speechRecognizer = SFSpeechRecognizer(locale: Locale.init(identifier: "en-US"))
var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
var recognitionTask: SFSpeechRecognitionTask?
let audioEngine = AVAudioEngine()

5) Create a method called `readAndRecognizeSpeech`. Here, we do all the recognition-related work. We first check whether a recognitionTask is running, and if so, we cancel it.

if recognitionTask != nil {
  recognitionTask?.cancel()
  recognitionTask = nil
}

6) Now, create an instance of AVAudioSession to prepare for audio recording, where we set the category of the session to recording, set the mode, and activate it. Since these calls might throw, they are placed inside a do-catch block.

let audioSession = AVAudioSession.sharedInstance()

do {
    try audioSession.setCategory(AVAudioSessionCategoryRecord)
    try audioSession.setMode(AVAudioSessionModeMeasurement)
    try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
} catch {
    print("audioSession properties weren't set because of an error.")
}

7)  Instantiate the recognitionRequest.

recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

8) Check that the device has an audio input; otherwise raise an error.

guard let inputNode = audioEngine.inputNode else {
    fatalError("Audio engine has no input node")
}

9)  Enable recognitionRequest to report partial results and start the recognitionTask.

// recognitionRequest was created in step 7, so it is safe to unwrap here
recognitionRequest?.shouldReportPartialResults = true

recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest!, resultHandler: { (result, error) in
    var isFinal = false // indicates whether this is the final result

    if result != nil {
        self.inputTextView.text = result?.bestTranscription.formattedString
        isFinal = (result?.isFinal)!
    }

    if error != nil || isFinal {
        self.audioEngine.stop()
        inputNode.removeTap(onBus: 0)
        self.recognitionRequest = nil
        self.recognitionTask = nil
    }
})

10) Next, we write the part that performs the actual speech recognition. This will record and process the speech continuously.

  • First, we obtain the incoming audio via the audio engine’s singleton .inputNode
  • .installTap configures the node and sets up the buffer size and the format
let recordingFormat = inputNode.outputFormat(forBus: 0)

inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, _) in
    self.recognitionRequest?.append(buffer)
}

11)  Next, we prepare and start the audio engine.

audioEngine.prepare()

do {
    try audioEngine.start()
} catch {
    print("audioEngine couldn't start because of an error.")
}

12)  Create a method that stops the Speech recognition.

func stopSTT() {
    print("audioEngine stopped")
    audioEngine.inputNode?.removeTap(onBus: 0)
    audioEngine.stop()
    recognitionRequest?.endAudio()
    indicatorView.removeFromSuperview()

    if inputTextView.text.isEmpty {
        self.sendButton.setImage(UIImage(named: ControllerConstants.mic), for: .normal)
    } else {
        self.sendButton.setImage(UIImage(named: ControllerConstants.send), for: .normal)
    }
    self.inputTextView.isUserInteractionEnabled = true
}

13) Update the view while speech recognition is running, indicating its status to the user. Add the code below just after the audio engine preparation.

// Listening indicator
self.indicatorView.frame = self.sendButton.frame
self.indicatorView.isUserInteractionEnabled = true

let gesture: UITapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(startSTT))
gesture.numberOfTapsRequired = 1
self.indicatorView.addGestureRecognizer(gesture)
self.sendButton.setImage(UIImage(), for: .normal)

indicatorView.startAnimating()
self.sendButton.addSubview(indicatorView)
self.sendButton.addConstraintsWithFormat(format: "V:|[v0(24)]|", views: indicatorView)
self.sendButton.addConstraintsWithFormat(format: "H:|[v0(24)]|", views: indicatorView)
self.inputTextView.isUserInteractionEnabled = false



How to make a SUSI chat bot skill for Cortana

Cortana is an assistant from Microsoft, just like Siri from Apple. We can make a skill for Cortana, i.e. create our own bot within Cortana, so that the bot can be used through Cortana by activating it with an invocation name. To create the SUSI Cortana skill, we will use the SUSI API and the Microsoft Bot Framework. First of all, we have to make a bot on the Microsoft Bot Framework; to do so, follow this tutorial and replace its code with the code given below.

var restify = require('restify');
var builder = require('botbuilder');
var request = require('request');
var http = require('http');

// Setup Restify Server
var server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 8080, function() {
   console.log('listening to ', server.name, server.url);
});
// Create chat bot
var connector = new builder.ChatConnector({
   appId: process.env.appId,
   appPassword: process.env.appPassword
});

setInterval(function() {
       http.get(process.env.HerokuURL);
   }, 1200000);

var bot = new builder.UniversalBot(connector);
server.post('/api/messages', connector.listen());

//getting response from SUSI API upon receiving messages from User
bot.dialog('/', function(session) {
   var msg = session.message.text;
   var options = {
       method: 'GET',
       url: 'http://api.asksusi.com/susi/chat.json',
       qs: {
           timezoneOffset: '-330',
           q: session.message.text
       }
   };
//sending request to SUSI API for response
   request(options, function(error, response, body) {
       if (error) throw new Error(error);
       var ans = (JSON.parse(body)).answers[0].actions[0].expression;
       //responding back to user
       session.say(ans,ans);

   })
});

After making the bot, we have to configure it for Cortana. To do so, select Cortana from the list of channels for your bot on https://dev.botframework.com and add the required details, most importantly the invocation name.

The invocation name is the name you will say to call up your bot. There are many ways to invoke your skill, like “Run <invocation name>” or “Launch <invocation name>”; here is the complete list of invocation commands. The way the SUSI skill in Cortana actually works is that when a user invokes the SUSI skill, Cortana acts as a middleware to send and receive responses between the skill and the user. After configuring your bot for Cortana, test it in Cortana with the same Hotmail account you used for the Bot Framework by calling the invocation name. Here is a demo video for testing the SUSI skill: https://youtu.be/40yX16hxcls.

You have now completed creating a skill for Cortana. If you want to publish it to the world, select Manage Cortana Dashboard for your bot on https://dev.botframework.com and publish it by filling out the form.

If you want to learn more about the Bot Framework, refer to https://docs.microsoft.com/en-us/Bot-Framework/index

References:
SUSI Cortana Repository: https://github.com/fossasia/susi_cortana_skill
Tutorial for bot: https://blog.fossasia.org/susi-ai-bots-with-microsofts-bot-framework/


How to make SUSI kik bot

To make the SUSI kik bot, you first have to configure a bot. To do so, go to https://dev.kik.com/ and create your bot by scanning the code from your kik mobile app. You have to answer a few questions from Botsworth to finish creating the bot.

 

After logging in to your dashboard, get your API key from the Configuration menu at the top.

Once your bot is set up, follow the steps below to create your first SUSI kik bot.

Steps:

  1. Install Node.js from the link below on your computer if you haven’t installed it already.
    https://nodejs.org/en/
  2. Create a folder with any name and open shell and change your current directory to the new folder you created.
  3. Type npm init in command line and enter details like name, version and entry point.
  4. Create a file with the same name that you wrote in entry point in above given step. i.e index.js and it should be in same folder you created.
  5. Type the following command in the command line: npm install --save @kikinteractive/kik. After @kikinteractive/kik is installed, type npm install --save request (http is a built-in Node module and needs no install). When all the modules are installed, check your package.json; these modules will be included within the dependencies portion.

  6. Your package.json file should look like this.

     
    {
      "name": "susi_kikbot",
      "version": "1.0.0",
      "description": "susi kik bot",
      "main": "index.js",
      "scripts": {
        "test": "node tests/sample.js"
      },
      "license": "MIT",
      "dependencies": {
        "@kikinteractive/kik": "^2.0.11",
        "request": "^2.75.0"
      }
    }
    
    
  7. Copy the following code into the file you created, i.e. index.js, and add your bot name to it in place of username.

    var http = require('http');
    var Bot = require('@kikinteractive/kik');
    var request = require('request')
    var answer;
     
    var bot = new Bot({
     
        username: '<your-bot-name>',
        apiKey: process.env.API_KEY,
        baseUrl: process.env.HEROKU_URL
     
    });
     
    bot.updateBotConfiguration();
     
    bot.onTextMessage((message) => {

        // query SUSI with the text the user sent (message.body)
        request('http://api.asksusi.com/susi/chat.json?q=' + encodeURI(message.body), function(error, response, body) {

            if (!error && response.statusCode == 200) {

                answer = JSON.parse(body).answers[0].actions[0].expression;

            } else {

                answer = "Oops, Looks like Susi is taking a break, She will be back soon";

            }

            // reply inside the callback, once SUSI has responded
            message.reply(answer);

        });

    });
     
    http.createServer(bot.incoming()).listen(process.env.PORT || 5000)
    
    
  8. Before deploying our bot to Heroku so that it can be active, we have to make a GitHub repository for the chatbot. To do so, in the shell, change the current directory to the folder we created above and write:
    git init
    git add .
    git commit -m "initial"
    git remote add origin <URL for remote repository>
    git remote -v
    git push -u origin master

    You will get the URL for the remote repository by creating a repository on your GitHub and copying its link.

  9. To deploy your bot to Heroku, you need an account on Heroku; after making an account, create an app.

     

  10. Deploy app using github deployment method.
  11. Select Automatic deployment method.
  12. Go to the settings of your app and, under config variables, paste your bot’s API key and name it API_KEY; then get your Heroku app URL and make a variable for it named HEROKU_URL.
  13. Your SUSI bot is ready; now test it by messaging it.

If you want to learn more about the kik API, refer to https://dev.kik.com/#/docs/messaging

 

Resources:

Github Repository: https://github.com/fossasia/susi_kikbot
KIK bot API: https://dev.kik.com/#/docs/messaging


Using a YAML file to read configuration options in Yaydoc

Yaydoc provides access to a lot of configurable variables which can be set as per requirements to configure various sections of the build process. You can see the entire list of variables on the project’s homepage. Until now, the only way to do this was to set appropriate environment variables. Since a web user interface for Yaydoc is in development, providing a clean UI was very important. This meant that we could not just create a bunch of input fields for all variables, as that could be overwhelming for any new user. So we decided to ask for only minimal information in the web form and read the other variables, if the user chooses to specify them, from a YAML file in the target repository.

To read a YAML file, we used PyYAML. It is a well-established Python package to safely read info from a YAML file and convert it to a Python dictionary. Here is the code snippet for that.

import yaml

def get_yaml_config():
    try:
        with open('.yaydoc.yml', 'r') as file:
            conf = yaml.safe_load(file)
    except FileNotFoundError:
        return {}
    return conf

The above code snippet returns a dictionary specifying all keys read from the YAML file. Since none of the options are required, we first create a dictionary with all the defaults and recursively merge it with the YAML dict. The merging is done using the following code snippet:

def update_dict(base, head):
    """Recursively merge 'head' into 'base', e.g.
    update_dict({'a': 1}, {'b': 2}) -> {'a': 1, 'b': 2}"""
    for key, value in head.items():
        if isinstance(base, dict):
            if isinstance(value, dict):
                base[key] = update_dict(base.get(key, {}), value)
            else:
                base[key] = head[key]
        else:
            base = {key: head[key]}
    return base

Now you can create a .yaydoc.yml file in the root of your repository, and Yaydoc will read options from there. Here is a sample YAML file.

metadata:
  author: FOSSASIA
  projectname: Yaydoc
  version: development

build:
  doctheme: fossasia_theme
  docpath: docs/
  logo: images/logo.svg
  markdown_flavour: markdown_github

publish:
  ghpages:
    docurl: yaydoc.fossasia.org

It should be noted that the layout of the file may change in the future as the project is in active development.


SUSI AI Bots with Microsoft’s Bot Framework

The Bot Framework is used to build intelligent chatbots, and it supports .NET, Node.js, and REST. To learn about building bots using the Bot Framework, go to https://docs.microsoft.com/en-us/bot-framework/bot-builder-overview-getstarted. Now, to build a SUSI AI bot for different platforms like Facebook, Telegram, Kik, or Skype, follow the steps given below.

  1. Install Node.js from the link below on your computer if you haven’t installed it already.
    https://nodejs.org/en/
  2. Create a folder with any name and open a shell and change your current directory to the new folder you created.
  3. Type npm init in shell and enter details like name, version and entry point.
  4. Create a file with the same name that you wrote in entry point in above given step. i.e index.js and it should be in same folder you created.
  5. Type the following commands in the command line: npm install --save restify. After restify is installed, type npm install --save botbuilder. After botbuilder is installed, type npm install --save request. When all the modules are installed, check your package.json; the modules will be included within the dependencies portion.

  6. Your package.json file should look like this.

    {
      "name": "skype-bot",
      "version": "1.0.0",
      "description": "SUSI AI Skype Bot",
      "main": "app.js",
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1",
        "start": "node app.js"
      },
      "author": "",
      "license": "ISC",
      "dependencies": {
        "botbuilder": "^3.8.1",
        "request": "^2.81.0",
        "restify": "^4.3.0"
      }
    }
    
  7. Copy the following code into the entry-point file you created (index.js, or app.js if that is what you entered in step 3).

    var restify = require('restify');
    var builder = require('botbuilder');
    var request = require('request');
    
    // Setup Restify Server
    var server = restify.createServer();
    server.listen(process.env.port || process.env.PORT || 8080, function() {
       console.log('%s listening to %s', server.name, server.url);
    });
    
    // Create chat bot
    var connector = new builder.ChatConnector({
     appId: process.env.appId,
     appPassword: process.env.appPassword
    });
    
    var bot = new builder.UniversalBot(connector);
    server.post('/api/messages', connector.listen());
    //When bot is added by user
    bot.on('contactRelationUpdate', function(message) {
       if (message.action === 'add') {
           var name = message.user ? message.user.name : null;
           var reply = new builder.Message()
               .address(message.address)
               .text("Hello %s... Thanks for adding me. You can talk to SUSI now.", name || 'there');
           bot.send(reply);
       }
    });
    //getting response from SUSI API upon receiving messages from User
    bot.dialog('/', function(session) {
       var options = {
           method: 'GET',
           url: 'http://api.asksusi.com/susi/chat.json',
           qs: {
               timezoneOffset: '-330',
               q: session.message.text
           }
       };
    //sending request to SUSI API for response 
       request(options, function(error, response, body) {
           if (error) throw new Error(error);
           var ans = (JSON.parse(body)).answers[0].actions[0].expression;
           //responding back to user
           session.send(ans);
    
       })
    });
    
  8. You have to replace appId and appPassword with your own ID and password, which you can get by following the steps below.
  9. Sign in/sign up at https://dev.botframework.com/. After signing in, go to the My Bots option at the top of the page and create/register your bot. Enter the details of your bot and click on “Create Microsoft App ID and password”.
  10. Leave the messaging endpoint empty for now; after getting the app ID and password, we will fill in the messaging endpoint.
  11. Copy your app ID and password and save them for later use. Paste your app ID into the ID box on the bot registration page.

  12. Now we have to create a messaging endpoint to listen for requests. Make a GitHub repository and push the files in the folder we created above.

    In the command line, change the current directory to the folder we created above and write:
    git init
    git add .
    git commit -m "initial"
    git remote add origin <URL for remote repository>
    git remote -v
    git push -u origin master

    You will get the URL for the remote repository by creating a repository on your GitHub and copying its link.

  13. Now we have to deploy this GitHub repository to Heroku to get the URL for the messaging endpoint. If you don’t have an account on Heroku, sign up here https://www.heroku.com/; otherwise just sign in and create a new app.
  14. Deploy your repository onto Heroku from the Deploy option, choosing GitHub as the deployment method.
  15. Select automatic deployment, so that any changes you make to the GitHub repository are deployed to Heroku.

  16. Open your app from the option on the top right, copy the link of your Heroku app, append /api/messages to it, and enter this URL as the messaging endpoint:

    https://{Your_App_Name}.herokuapp.com/api/messages
  17. Register the bot and add the app ID and password you saved to your Heroku app in Settings -> Config Variables.
  18. Now go to https://dev.botframework.com/, open your bot under My Bots, click on the Skype bot, add it to your contacts and start chatting.
  19. You can connect the same bot to different channels like Kik, Slack, Telegram, Facebook and many others.

    Add the different channels on your bot page and follow the respective instructions for deploying onto those platforms.

If you want to learn more about the Bot Framework, you can refer to https://docs.microsoft.com/en-us/Bot-Framework/index

Resources:
Code: https://github.com/fossasia/susi_skypebot
Bot Framework: https://docs.microsoft.com/en-us/bot-framework/bot-builder-overview-getstarted


How to Make SUSI AI Slack Bot

To make the SUSI Slack bot, we will use Slack’s Real Time Messaging API, which allows users to receive messages from the bot in real time. To make the SUSI Slack bot, you have to follow these steps:

Steps:

  1. First of all, you have to create a team on Slack where your bot will run. To create a team, go to https://slack.com/ and create a new team.
  2. After creating it, sign in to your team and go to the Apps and Integrations option by clicking on the menu in the left corner.
  3. Click Manage in the top right corner and go to Custom Integrations to add a configuration to Bots.
  4. After adding the configuration data and bot username, and copying the API token, we now have to write the code to set up the bot in Slack. To set up the code, see the steps below.
  5. Install Node.js from the link below on your computer if you haven’t installed it already. https://nodejs.org/en/
  6. Create a folder with any name and open shell and change your current directory to the new folder you created.
  7. Type npm init in command line and enter details like name, version and entry point.
  8. Create a file with the same name that you wrote in entry point in above given step. i.e index.js and it should be in same folder you created.
  9. Type the following commands in the command line: npm install --save @slack/client. After @slack/client is installed, type npm install --save express. After express is installed, type npm install --save request, and then npm install --save http. When all the modules are installed, check your package.json; the modules will be included within the dependencies portion.
  10. Your package.json file should look like this.
    {
      "name": "slack-bot",
      "version": "1.0.0",
      "description": "SUSI Slack Bot",
      "main": "index.js",
      "dependencies": {
        "express": "^4.15.3",
        "http": "0.0.0",
        "request": "^2.81.0"
      },
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1",
        "start": "node index.js"
      }
    }
    
  11. Copy the following code into the file you created, i.e. index.js.
    var Slack = require('@slack/client');
    var request = require('request');
    var express = require('express');
    var http = require('http');
    var app = express();
    var RtmClient = Slack.RtmClient; 
    var RTM_EVENTS = Slack.RTM_EVENTS;
    var token = process.env.APIToken;
    
    var rtm = new RtmClient(token, { logLevel: 'info' }); 
    rtm.start();
    
    //to ping the heroku app every 20 minutes to keep it active
    
    setInterval(function() {
            http.get(process.env.HerokuUrl);
        }, 1200000);
    
    rtm.on(RTM_EVENTS.MESSAGE, function(message) { 
    var channel = message.channel;
    
    var options = {
           method: 'GET',
           url: 'http://api.asksusi.com/susi/chat.json',
           qs: {
               timezoneOffset: '-330',
               q: message.text
           }
       };
    
    //sending request to SUSI API for response
       request(options, function(error, response, body) {
           if (error) throw new Error(error);
           var ans = (JSON.parse(body)).answers[0].actions[0].expression;
           rtm.sendMessage(ans, channel);
       })
    });
    
    const port = process.env.PORT || 3000;
    app.listen(port, () => {
       console.log(`listening on ${port}`);
    });
     
    
    


  12. Now we have to deploy this code to heroku.
  13. Before deploying, we have to make a GitHub repository for the chatbot. To make the repository, follow these steps:

    In the command line, change the current directory to the folder we created above and write:

    git init
    git add .
    git commit -m "initial"
    git remote add origin <URL for remote repository>
    git remote -v
    git push -u origin master

    You will get the URL for the remote repository by creating a repository on your GitHub and copying its link.

  14. To deploy your bot to Heroku, you need an account on Heroku; after making an account, create an app.
  15. Deploy app using github deployment method.
  16. Select Automatic deployment method.
  17. Add the APIToken and HerokuUrl variables to the Heroku app in the Settings options.
  18. Your SUSI Slack bot is ready; enjoy chatting with it. If you want to learn more about the Slack API, refer to https://api.slack.com

Improving Harvesting Decision for Kaizen Harvester in loklak server

About Kaizen Harvester

Kaizen is an alternative approach to do harvesting in loklak. It focuses on queries and information collection to generate more queries from collected timelines. It maintains a queue of queries that is populated by extracting the following information from timelines –

  1. Hashtags in Tweets
  2. User mentions in Tweets
  3. Tweets from areas near to each Tweet in the timeline
  4. Tweets older than the oldest Tweet in the timeline

Further, it can also utilise the Twitter API to get trending keywords from Twitter and to get search suggestions from other loklak peers.

It was introduced by @yukiisbored in pull request loklak/loklak_server#960.

The Problem: Unbiased Harvesting Decision

The Kaizen harvester either searches for queries from the queue, or tries to grab trending queries (using the Twitter API or from the backend). In the previous version of KaizenHarvester, the decision of “harvesting vs. info-grabbing” was taken based on the value from a random boolean generator –

@Override
public int harvest() {
   if (!queries.isEmpty() && random.nextBoolean())
       return harvestMessages();

   grabSuggestions();

   return 0;
}

[SOURCE]

In sane situations, the Kaizen harvester is configured to use a fixed-size queue, dropping the queries that are requested to be added once the queue is full. And since the decision didn’t take into account how full the queue was, it would often call the grabSuggestions() method.

But since the queue would be full, the grabbed suggestions would simply be lost. This resulted in wasted time and resources fetching the suggestions (from the backend or the API). To overcome this, something better had to be done in this part.
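The failure mode is easy to reproduce with any bounded queue; here is a tiny sketch using ArrayBlockingQueue (for illustration only; the harvester’s actual queue class may differ):

import java.util.concurrent.ArrayBlockingQueue;

public class FullQueueDemo {
    public static void main(String[] args) {
        ArrayBlockingQueue<String> queries = new ArrayBlockingQueue<>(2);
        queries.offer("#fossasia");  // accepted
        queries.offer("@loklak");    // accepted, queue now full

        // A freshly grabbed suggestion is silently rejected:
        boolean added = queries.offer("#opensource");
        System.out.println(added);   // false -> the effort of grabbing it was wasted
    }
}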

The Solution: Making Decision Biased

To solve the problem of the dumb harvesting decision, the harvester was triggered based on the following steps –

  1. Calculate the ratio of queue filled (q.size() / q.maxSize()).
  2. Generate a random floating point number between 0 and 1.
  3. If the number is less than the fraction, harvest. Otherwise get harvesting suggestions.

Why would this work?

Initially, when the queue is mostly empty, the ratio would be a small number, so it would be highly probable that a random number generated between 0 and 1 is greater than the ratio, and Kaizen would go for grabbing search suggestions.

If the ratio is large (i.e. the queue is almost full), it is highly likely that the generated random number is less than it, making it more likely to search for results instead of grabbing suggestions.

Graph?

The following graph shows how the harvester’s decision changes: for each queue-fill ratio, 10k iterations are performed, counting the number of times the harvesting decision was taken.
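You can reproduce the experiment behind the graph with a short standalone program; this is only a sketch of the same decision rule, independent of the loklak codebase:

import java.util.Random;

public class HarvestDecisionSimulation {
    public static void main(String[] args) {
        Random random = new Random();
        int trials = 10000;
        for (int percent = 0; percent <= 100; percent += 10) {
            float ratio = percent / 100f;  // fraction of the queue that is filled
            int harvests = 0;
            for (int i = 0; i < trials; i++) {
                // same rule as the harvester: harvest if random < ratio
                if (random.nextFloat() < ratio) {
                    harvests++;
                }
            }
            System.out.printf("queue %3d%% full -> harvested %5d of %d times%n",
                    percent, harvests, trials);
        }
    }
}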

Change in code

The harvest() method was changed in loklak/loklak_server#1158 to take a smart decision on harvesting vs. info-grabbing in the following manner –

@Override
public int harvest() {
   float targetProb = random.nextFloat();
   float prob = 0.5F;
   if (QUERIES_LIMIT > 0) {
       prob = queries.size() / (float)QUERIES_LIMIT;
   }
   if (!queries.isEmpty() && targetProb < prob) {
       return harvestMessages();
   }

   grabSuggestions();

   return 0;
}

[SOURCE]

Conclusion

This change enhanced the Kaizen harvester and made it responsive to how fast its queue is filling. No more requests are made to the backend for suggestions whose queries would never make it into the queue.

 


Using Activities with Bottom navigation view in Phimpme Android

Can we use Bottom Navigation View with activities? In the FOSSASIA Phimpme Project, we integrated two big projects, Open Camera and Leafpic. Both are activity-based projects, not developed over fragments, whereas the bottom navigation view is normally used with fragments.

In Leafpic as well as in Open Camera, there is a lot of multi-level inheritance driving the app components, which made it a problem to keep working with activities. For theme management, a base activity is necessary, and the base activity extends AppCompatActivity. The Leafpic code hierarchy is very complex and depends on various activities. So the better way is to stay with activities for the bottom navigation view and make them work in a way similar to fragments.

Image source: https://storage.googleapis.com/material-design/

Solution:

This is possible if we include the bottom navigation view in each of the activities and set its selected state according to the current menu item ID.

Steps to Implement (How I did in Phimpme)

  • Add a BaseActivity which takes care of two things:

    • Taking the layout resource and setting it in the activity.
    • Setting the correct navigation menu item as selected.

The function selectBottomNavigationBarItem(int itemId) takes care of which item is currently active. updateNavigationBarState() is called in onStart(): the getNavigationMenuItemId() function returns the menu id of the current activity, and selectBottomNavigationBarItem updates the bottom navigation view.

public abstract class BaseActivity extends AppCompatActivity implements BottomNavigationView.OnNavigationItemSelectedListener

private void updateNavigationBarState() {
   int actionId = getNavigationMenuItemId();
   selectBottomNavigationBarItem(actionId);
}

void selectBottomNavigationBarItem(int itemId) {
   Menu menu = navigationView.getMenu();
   for (int i = 0, size = menu.size(); i < size; i++) {
       MenuItem item = menu.getItem(i);
       boolean shouldBeChecked = item.getItemId() == itemId;
       if (shouldBeChecked) {
           item.setChecked(true);
           break;
       }
   }
}
  • Add two abstract functions in the BaseActivity for the above task
public abstract int getContentViewId();

public abstract int getNavigationMenuItemId();
  • Extend every activity from the BaseActivity

For example, I created a blank activity, AccountsActivity, and extended it from BaseActivity:

public class AccountsActivity extends BaseActivity
  • Implement the abstract functions in every activity and return the correct layout and menu item IDs.

@Override
public int getContentViewId() {
 return R.layout.content_accounts;
}

@Override
public int getNavigationMenuItemId() {
   return R.id.navigation_accounts;
}
  • Remove setContentView(R.layout.activity_main); from each activity; the BaseActivity sets the layout using getContentViewId().
  • Add the onNavigationItemSelected in BaseActivity

@Override
public boolean onNavigationItemSelected(@NonNull final MenuItem item) {
    switch (item.getItemId()) {
        case R.id.navigation_home:
            startActivity(new Intent(this, LFMainActivity.class));
            break;
        case R.id.navigation_camera:
            startActivity(new Intent(this, CameraActivity.class));
            break;
        case R.id.navigation_accounts:
            startActivity(new Intent(this, AccountsActivity.class));
            break;
    }
    finish();
    return true;
}

Output:

The transition is as smooth as when using bottom navigation views with fragments, and the bottom navigation appears over each activity just the same.

Source: https://stackoverflow.com/a/42811144/2902518


Phimpme: Merging several Android Projects into One Project

To speed up development of version 2 of the Phimpme Android app, we decided to use some existing open source libraries and projects, such as the Open Camera app for camera features and Leafpic for the photo gallery.

Integrating several big projects into one is a crucial step, and it is not difficult. Here are some points to ponder.

  1. Clone each project separately. Build it and note down the features which you want to integrate. Explore the manifest file to learn about the launcher, the services and the permissions the app requires.

I explored the Leafpic manifest file, found its launcher class and replaced it with our code, and looked at the permissions the app requires, such as camera, access fine location and write external storage.

  2. Follow a bottom-up approach while integrating; it makes life easier. By bottom-up I mean: create a new branch and then start deleting the code which is not required. Always work in a branch so that no code is lost or messed up.

Not everything is required at the moment, so I first integrated the whole Leafpic app and then removed the splash screen, also noting down the features to remove, such as the drawer, the search bar, etc.

  3. Remove and commit. In a big project things are interlinked, so removing one feature may actually require removing a lot of code. My strategy is to focus on one feature and commit once it is removed. This lets you reset your code to a known state anytime.

Many a time I used git reset --hard on my code, because it got messed up.

  4. Licensing: one of the most important things in using open source code is the license. Always follow what is written in the license; this is how we give credit for the authors’ hard work. There are various types of licenses; three famous ones are the Apache License, the MIT License and the GNU General Public License. All have their pros and cons.

Apart from the license, there can be other conditions as well. For example, Leafpic, which we are using in Phimpme, has a condition to inform its developers about the project in which it is being used.

  5. Be aware of file duplication: sometimes many files have the same name, which results in clashes. So refactor them early in the code.

I refactored MainActivity in Leafpic to LFMainActivity so as not to clash with the existing one, and resolved the package names. This is also one of the more tedious tasks; Android Studio already does a lot of the refactoring, but some parts are left over.

  • Make sure your manifest file contains the correct package name.
  • The best way to refactor your package name in XML files is Ctrl+Shift+F in Ubuntu, or Edit → Find → Find in Path in Android Studio. Then search for the old package name and replace it with the new one. You can use Alt+J for multi-cursor as well.
  • Clean and rebuild the project.

Run the app at each step so that you can catch errors as soon as they appear. Also use the debugger for debugging.
