Preparing a release for Phimpme Android

Most of the essential features are now in a stable state in our Phimpme Android app, so we decided to release a beta version. In FOSSASIA we follow a branch policy where all active development takes place on the development branch and the stable code resides on the master branch.

Releasing an app is not just building an APK and submitting it to a distribution platform; certain guidelines should be followed.

How I prepared the release APK for Phimpme

List down the features

We discussed on our public channel which features are now in a stable state and can be released. Features such as the account manager and the Share Activity were excluded because they are incomplete and still under development. We don’t want to ship an under-development feature, so we excluded them and made a list of the available features in the different categories: Camera, Gallery and Share.

Follow the branch policy

The releasable and stable codebase should be on the master branch. It is good to follow the branch policy because it helps if we encounter any problem with the released APK: we can go straight to the master branch and debug there. The development branch is very volatile because of the active development going on.

Every Contributor’s contribution is important

When we browse an old branch, such as master in our case, we generally see that it is hundreds of commits behind development. When we then create a PR to bring it up to date, the PR contains all of those old commits.

In this case, while opening and merging the PR, do not squash the commits.

Testing from Developer’s end

Testing is a very essential part of development. Before releasing, it is good practice for developers to test the app from their end. We tested the app’s features on different devices with varying Android OS versions and screen sizes.

  • If there is any compatibility issue, report it right away; there are several tools in Android to fix such issues.
  • Support a variety of devices and screen sizes

Changing package name, application ID

Package name and application ID are vital properties of an app: they uniquely identify it as an app in the world. For example, I changed the package name of the Phimpme app to org.fossasia.phimpme. Also check all the permissions the app requires.
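As a quick sketch, the application ID is declared in the defaultConfig block of the app module’s build.gradle (the ID below is Phimpme’s; the surrounding block is illustrative):

android {
    defaultConfig {
        // Uniquely identifies the app on devices and on distribution platforms
        applicationId "org.fossasia.phimpme"
    }
}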

Create Release build type

Build types are a great way to categorize builds of an app; Debug and Release are the two defaults. There are various things in the codebase which we want only in Debug mode, so when we create a Release build, that part of the code is left out.

Add the build types in your application’s build.gradle:

buildTypes {
    release {
        minifyEnabled false
    }
}
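For example, debug-only code is commonly guarded with the auto-generated BuildConfig.DEBUG flag; a minimal sketch (the log call is illustrative):

if (BuildConfig.DEBUG) {
    // Runs only in debug builds; release builds skip this block
    Log.d("Phimpme", "Debug-only diagnostics");
}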

Rebuild the app and verify the new build type from the Build Variants tab in the left sidebar.

Generate a signed APK and create a keystore (.jks) file

Navigate to Build → Generate Signed APK

Fill in all the details and proceed further to generate the signed APK in your home directory.

Adding Signing configurations in build.gradle

Copy the keystore (.jks) file to the root of the project and add the signing configurations:

signingConfigs {
    config {
        keyAlias 'phimpme'
        keyPassword 'phimpme'
        storeFile file('../org.fossasia.phimpme.jks')
        storePassword 'XXXXXXX'
    }
}
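To make the release build actually use this keystore, the release build type should reference the signing configuration; a sketch assuming the config name defined above:

buildTypes {
    release {
        minifyEnabled false
        // Sign release builds with the keystore defined in signingConfigs
        signingConfig signingConfigs.config
    }
}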

InstallRelease Gradle task

Navigate to the right sidebar of Android Studio and click on Gradle.


Click on installRelease to install the release APK. It takes all the credentials from the signing configurations.
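The same task can also be run from a terminal using the standard Gradle wrapper invocation:

./gradlew installRelease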


Setting up Your Own Custom SUSI AI server

When you chat with any of the SUSI clients, all the requests are made to the standard SUSI server. But let us say that you have implemented some features on the server and want to test them. Simply launch your own server and you get your own SUSI server. You can then change the option to use your own server in the SUSI Android and Web Chat clients.

The first step to get your own copy of the SUSI server is to browse to https://github.com/fossasia/susi_server and fork it. Clone your fork to your local machine and make the changes that you want. There are various tutorials on how to add more servlets to the SUSI server, how to add new parameters to an existing action type, or how to modify something like the memory servlet.

To start coding, you can either use an IDE like IntelliJ IDEA (download it from here) or open the files you want to modify in a text editor. To start the SUSI server via the command line terminal (and not an IDE), do the following:

 

Open a terminal in the cloned project directory. Now enter the following command (it is expected that you have Java installed with the JAVA_HOME variable set up):

./gradlew build

This will install all the dependencies required for the project. Next, execute:

bin/start.sh

This will take a little time, but you will soon see a window pop up in your default browser and the server will start; the landing page of the website will load. Make the modifications you were aiming for on the server. The server is up now. You can access it at the local IP address:

http://127.0.0.1/

But if you try to add this local address in the URL field of any client, it will not work and will give you an error. To access this server using a client, open your terminal and execute:

ipconfig	// if you are running a Windows machine
ifconfig	// if you are running a Linux machine

 

This will give you a list of IP addresses. Find the Wireless LAN adapter Wi-Fi entry and copy the IPv4 address. This is the address you need to enter in the server URL field in the client (do not forget to add http:// before your IP address). Now sign up and you will be good to go.
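For example, if ifconfig reports an IPv4 address of 192.168.1.5 (a hypothetical address), the server URL to enter in the client would be:

http://192.168.1.5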


Creating inter-component actions in Open Event Front-end

The Open Event Front-end project is built using Ember.js, which lets us create modular components. While implementing the sessions route on the project, we faced the challenge of sending inter-component actions. To solve this problem we used the Ember action helpers, which bubble the action up to the controller and send it to the desired component. How did we solve it?

Handling actions in ember

In Ember we can handle actions using {{action 'function'}}, where function executes every time an action on the element is triggered. This can be used to handle actions for the component. You can define actions on elements as:

<a href="#" class="item {{if (eq selectedTrackId track.id) 'active'}}" {{action 'filter' track.id}}>
  {{track.name}}
</a>

All the actions defined using the {{action}} helper are defined inside the actions section of the component. Here the filter action is bound to the onClick event of the anchor tag. The above helper will pass the id of the track (track.id) as a parameter to the filter function defined in the component.

Whenever the element is clicked, the filter function defined in the component is triggered. This method works for handling actions within a component; however, it does not help when we need to trigger inter-component actions.

Sending actions from component to controller

We will send an action from the component to the controller of the route in which the component is rendered, using a computed property defined inside the controller which watches selectedTrackId.

{{public/session-filter selectedTrackId=selectedTrackId tracks=tracks}}

Whenever the anchor is clicked, it passes the id of the selected track to the filter function of the component. Inside the filter function we set the selectedTrackId variable that was passed to the component in the route template, as specified above.

actions: {
  filter(trackId = null) {
    this.set('selectedTrackId', trackId);
  }
}

The selectedTrackId is observed by a computed property defined in the controller, which modifies the sessions list based on the id passed by the session-filter component.

Handling action in the controller

Inside the controller we have a computed property called sessionsByTracks, which observes the sessions array and the selectedTrackId passed to the controller by the session-filter component.

sessionsByTracks: computed('model.sessions.[]', 'selectedTrackId', function() {
  if (this.get('selectedTrackId')) {
    return chain(this.get('model.sessions'))
      .filter(['track.id', this.get('selectedTrackId')])
      .orderBy(['track.name', 'startAt'])
      .value();
  } else {
    return chain(this.get('model.sessions'))
      .orderBy(['track.name', 'startAt'])
      .groupBy('track.name')
      .value();
  }
}),
tracks: computed('model.sessions.[]', function() {
  return chain(this.get('model.sessions'))
    .map('track')
    .uniqBy('id')
    .orderBy('name')
    .value();
})

sessionsByTracks is the property that gets filtered, using the lodash functions, based on the track id passed to it. On the first render all the sessions are displayed, as selectedTrackId is set to null.

{{public/session-list sessions=sessionsByTracks}}

This property is passed to the session-list component, which renders the filtered list of sessions based on the track selected in the session-filter component. Check the source code for the example here.


Testing Asynchronous Code in Open Event Orga App using RxJava

In the last blog post, we saw how to test complex interactions through our apps using stubbed behaviors with Mockito. In this post, I’ll be talking about how to test RxJava components such as Observables. This one will focus on testing complex situations using RxJava, as the library itself provides methods to unit test your reactive streams, so that you don’t have to go out of your way to set up contraptions like callback captors or implement your own interfaces as stubs of the original ones. The test suite (of sorts) provided by RxJava also allows you to test the fate of your stream, like confirming that it got subscribed or that an error was thrown, or to test an individual emitted item, like its value or with predicate logic of your own. We have used this heavily in the Open Event Orga App (Github Repo) to detect whether the app correctly loads and refreshes resources from the correct source. We also capture certain triggers happening to events, like the saving of data on reloading, so that the database remains in a consistent state. Here, we’ll look at some basic examples and move to some complex ones later. So, let’s start.

public class AttendeeRepositoryTest {

    private AttendeeRepository attendeeRepository;

    @Before
    public void setUp() {
        // Construction simplified here; the mocked dependencies appear later in the post
        attendeeRepository = new AttendeeRepository();
    }

    @Test
    public void shouldReturnAttendeeByName() {
        // TODO: Implement test
    }

}

 

This is our basic test class setup with the general JUnit settings. We’ll start by writing our tests, the first of which will check that we can get an attendee by name. The Attendee class is a model class with firstName and lastName, and we will be checking that we get a valid attendee by passing a full name. Note that although we will be talking about the implementation of the code we are writing tests for, we’ll do so only in an abstract manner, meaning we won’t be dealing with production code, just the tests.

So, as we know, Observables provide a stream of data. But here, we are only concerned with one attendee. Technically, we should be using Single, but for generality, we’ll stick with Observables.

So, a person coming from a JUnit background would be tempted to write the code below.

Attendee attendee = attendeeRepository.getByAttendeeName("John Wick")
    .blockingFirst();

assertEquals("John Wick", attendee.getFirstName() + attendee.getLastName());

 

So, what this code does is block the thread till the first attendee arrives in the stream, and then check that the attendee is actually John Wick.

While this code works, it is not reactive. With the reactive way of testing, not only can you test more complex logic than this with less verbosity, but it naturally provides ways to test other behaviors of reactive streams such as subscriptions, errors, completions, etc. We’ll only be covering a few. So, let’s see the reactive version of the above test.

attendeeRepository.getByAttendeeName("John Wick")
    .firstElement()
    .test()
    .assertNoErrors()
    .assertValue(attendee -> "John Wick".equals(
        attendee.getFirstName() + " " + attendee.getLastName()
    ));

 

So clean and complete. Just by calling test() on the returned observable, we get this whole suite of testing methods, with which we not only tested the name but also that there were no errors while getting the attendee.

Testing for Network Error on loading of Attendees

OK, so let’s move towards a more realistic test. Suppose you call this method on AttendeeRepository, which can fetch attendees from the network. First, you want to handle the simplest case: there should be an error if there is no connection. If you have (hopefully) set up your project using abstractions for the model using MVP, it’ll be a piece of cake to test this. Let’s suppose we have a networkUtils object with an isConnected method.

The NetworkUtils class is a dependency of AttendeeRepository, and we have set it up as a mock in our test using Mockito. If this sounds somewhat unfamiliar, please read my previous article “The Joy of Testing with MVP”.
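For context, here is a sketch of how the earlier setUp might grow to declare and inject these mocks (the constructor injection of AttendeeRepository is an assumption):

@Mock private NetworkUtils networkUtils;
@Mock private EventService eventService;
@Mock private DatabaseRepository databaseRepository;

@Before
public void setUp() {
    // Initialize the @Mock annotated fields before each test
    MockitoAnnotations.initMocks(this);
    // Hypothetical constructor injection of the mocked dependencies
    attendeeRepository = new AttendeeRepository(networkUtils, eventService, databaseRepository);
}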

So, our test will look like this

@Test
public void shouldStreamErrorOnNetworkDown() {
    when(networkUtils.isConnected()).thenReturn(false);
    
    attendeeRepository.getAttendees()
        .test()
        .assertErrorMessage("No Network");
}

 

Note that if you don’t define the mock object’s behavior as I have here, attendeeRepository will likely throw an NPE, as it will be calling isConnected() on an undefined object.

With RxJava, you get a whole lot of methods for each use case. Even for checking errors, you can assert a particular Throwable, or a predicate defining an operation on the Throwable, or an error message, as I have shown in this case.
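For instance, the same failure could also be asserted by Throwable type or by a predicate, both standard TestObserver assertions in RxJava 2:

attendeeRepository.getAttendees()
    .test()
    .assertError(Throwable.class)  // assert by exception type
    .assertError(error -> "No Network".equals(error.getMessage()));  // assert by predicate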

Now, if you run this code, it’ll probably fail. That’s because if you are offloading the networking task to a different thread using the subscribeOn and observeOn methods, the test body may detach from the main thread before the requests complete. Furthermore, in an application made for Android, you would have used AndroidSchedulers.mainThread(), but as it is an Android dependency, the test will fail; well actually, crash. There used to be workarounds involving abstractions over the RxJava schedulers, but RxJava 2 provides a very convenient way to override the default schedulers in the form of RxJavaPlugins. Similarly, RxAndroidPlugins is present in the rx-android package. Let’s suppose you plan to use Schedulers.io() for asynchronous work and want to receive the stream on Android’s main thread, meaning you use AndroidSchedulers.mainThread() in the observeOn method. To override these schedulers with Schedulers.trampoline(), which queues your tasks and performs them one by one like the main thread, your setUp will include this:

RxJavaPlugins.setIoSchedulerHandler(scheduler ->  Schedulers.trampoline());
RxAndroidPlugins.setInitMainThreadSchedulerHandler(scheduler -> Schedulers.trampoline());

 

And if you are not using isolated tests and need to resume the default scheduler behavior after each test, you’ll need to add this to your tearDown method:

RxJavaPlugins.reset();
RxAndroidPlugins.reset();

Testing for Correct loading of Attendees

Now that we have tested that our repository correctly throws an error when the network is down, let’s test that it correctly loads attendees when the network is connected. For this, we’ll need to mock our EventService to return attendees when queried, since we don’t want our unit tests to actually hit the servers.

So, we’ll need to keep these things in mind:

  • Mock the network utils so that they report that the device is connected to the Internet
  • Mock the EventService to return attendees when queried
  • Call the getter on attendeeRepository and test that it indeed returns the list of attendees

For these conditions, our test will look like this:

@Test
public void shouldLoadAttendeesSuccessfully() {
    List<Attendee> attendees = Arrays.asList(
        new Attendee(),
        new Attendee(),
        new Attendee()
    );

    when(networkUtils.isConnected()).thenReturn(true);
    when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

    attendeeRepository.getAttendees()
        .test()
        .assertValues(attendees.toArray(new Attendee[attendees.size()]));
}

 

The assertValues function asserts that exactly these values were emitted by the observable. And if you want to go further, you can even verify that EventService’s getAttendees function was in fact called:

verify(eventService).getAttendees();

 

But the problem with this approach is that the getAttendees function returns an observable, and just calling it does not necessarily mean that it was subscribed to and emitting results; hence we need to ensure that it was indeed subscribed. If we call the normal test() function on the observable, it is already subscribed, making the result of assertSubscribed always true. To test that correctly, let’s look at our final use case.

Testing for saving of Attendees

In the Open Event Orga App, we have strived to create self-sufficient and intelligent classes; thus, our repository is also built this way. It detects when new attendees are loaded from the server and saves them in the database. Now we want to test this functionality.

In this test, there is an added dependency, a DatabaseRepository for saving the attendees, which we will mock. The conditions for this test are:

  • Network is connected
  • EventService returns attendees
  • DatabaseRepository mocks the saving of attendees

For DatabaseRepository’s save method, we’ll return a Completable, which will notify us when the saving of data is completed. The primary purpose of this test is to assert that this Completable is indeed subscribed when the attendee loading is triggered. This not only ensures that the correct function to save the attendees is called, but also that it is actually triggered and not just left hanging after the call. So, our test will look like this.

@Test
public void shouldSaveAttendeesInDatabase() {
    List<Attendee> attendees = Arrays.asList(
        new Attendee(),
        new Attendee(),
        new Attendee()
    );

    TestObserver testObserver = TestObserver.create();
    Completable completable = Completable.complete()
        .doOnSubscribe(testObserver::onSubscribe);

    when(networkUtils.isConnected()).thenReturn(true);
    when(databaseRepository.save(attendees)).thenReturn(completable);
    when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

    attendeeRepository.getAttendees()
        .test()
        .assertNoErrors();

    testObserver.assertSubscribed();
}

 

Here, we have created a separate test observer, set it to be subscribed when the Completable is subscribed, and returned that Completable when the save method is called. At the end, we assert that the test observer is indeed subscribed.

You can create more complex use cases and assert subscriptions, errors, the emptiness of a stream and much more, by using the built-in test functionalities of RxJava 2. So, that’s all for this blog; you can visit these links for more details on unit testing RxJava:

http://fedepaol.github.io/blog/2015/09/13/testing-rxjava-observables-subscriptions/

https://www.infoq.com/articles/Testing-RxJava


Implementing Speech To Text in SUSI iOS

SUSI, being an intelligent bot, has capabilities that let the user provide input hands-free by talking, without even needing to lift the phone to type. The speech-to-text feature is available in SUSI iOS with the help of the Speech framework, released alongside iOS 10, which enables continuous speech detection and transcription. Detection is really fast and supports around 50 languages and dialects, from Arabic to Vietnamese. The speech recognition API does its heavy lifting on Apple’s servers, which requires an internet connection. The same API is not always available on all newer devices, and it also provides the ability to check whether a specific language is supported at a particular time.

How to use the Speech to Text feature?

  • Go to the view controller and import the speech framework
  • Now, because the speech is transmitted over the internet and uses Apple’s servers for computation, we need to ask the user for permission to use the microphone and the speech recognition feature. Add the following two keys to the Info.plist file, which display alerts asking the user for permission to use speech recognition and to access the microphone. Add a specific sentence for each key string that will be displayed to the user in the alerts (a sample Info.plist snippet is shown below).
    1. NSSpeechRecognitionUsageDescription
    2. NSMicrophoneUsageDescription

The prompts appear automatically when the functionality is used in the app. Since we already have hot word recognition enabled, the microphone alert shows up automatically after login, and the speech one shows up after the microphone button is tapped.
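In the Info.plist source, those entries might look like this (the description strings are placeholders to adapt):

<key>NSSpeechRecognitionUsageDescription</key>
<string>SUSI uses speech recognition to transcribe your voice queries.</string>
<key>NSMicrophoneUsageDescription</key>
<string>SUSI needs access to the microphone to listen to your voice queries.</string>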

3) To request user authorization for Speech Recognition, we use the method SFSpeechRecognizer.requestAuthorization.

func configureSpeechRecognizer() {
        speechRecognizer?.delegate = self

        SFSpeechRecognizer.requestAuthorization { (authStatus) in
            var isEnabled = false

            switch authStatus {
            case .authorized:
                print("Autorized speech")
                isEnabled = true
            case .denied:
                print("Denied speech")
                isEnabled = false
            case .restricted:
                print("speech restricted")
                isEnabled = false
            case .notDetermined:
                print("not determined")
                isEnabled = false
            }

            OperationQueue.main.addOperation {

                // handle button enable/disable

                self.sendButton.tag = isEnabled ? 0 : 1

                self.addTargetSendButton()
            }
        }
    }

4) Now, we create instances of AVAudioEngine, SFSpeechRecognizer, SFSpeechAudioBufferRecognitionRequest and SFSpeechRecognitionTask:

let speechRecognizer = SFSpeechRecognizer(locale: Locale.init(identifier: "en-US"))
var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
var recognitionTask: SFSpeechRecognitionTask?
let audioEngine = AVAudioEngine()

5) Create a method called `readAndRecognizeSpeech`. Here, we do all the recognition-related work. We first check whether a recognitionTask is running, and if so, we cancel the task.

if recognitionTask != nil {
  recognitionTask?.cancel()
  recognitionTask = nil
}

6) Now, create an instance of AVAudioSession to prepare for audio recording, where we set the category of the session as recording, set the mode, and activate it. Since these calls might throw an exception, they are added inside a do-catch block.

let audioSession = AVAudioSession.sharedInstance()

do {
    try audioSession.setCategory(AVAudioSessionCategoryRecord)
    try audioSession.setMode(AVAudioSessionModeMeasurement)
    try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
} catch {
    print("audioSession properties weren't set because of an error.")
}

7)  Instantiate the recognitionRequest.

recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

8) Check if the device has an audio input; otherwise, throw an error.

guard let inputNode = audioEngine.inputNode else {
    fatalError("Audio engine has no input node")
}

9)  Enable recognitionRequest to report partial results and start the recognitionTask.

recognitionRequest.shouldReportPartialResults = true

recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
    var isFinal = false // to indicate the final result

    if result != nil {
        self.inputTextView.text = result?.bestTranscription.formattedString
        isFinal = (result?.isFinal)!
    }

    if error != nil || isFinal {
        self.audioEngine.stop()
        inputNode.removeTap(onBus: 0)
        self.recognitionRequest = nil
        self.recognitionTask = nil
    }
})

10) Next, we write the method that performs the actual speech recognition. This will record and process the speech continuously.

  • First, we access the audio engine’s input node for the incoming audio using .inputNode
  • .installTap configures the node and sets up the buffer size and the format
let recordingFormat = inputNode.outputFormat(forBus: 0)

inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, _) in
    self.recognitionRequest?.append(buffer)
}

11)  Next, we prepare and start the audio engine.

audioEngine.prepare()

do {
    try audioEngine.start()
} catch {
    print("audioEngine couldn't start because of an error.")
}

12)  Create a method that stops the Speech recognition.

func stopSTT() {
    print("audioEngine stopped")
    audioEngine.inputNode?.removeTap(onBus: 0)
    audioEngine.stop()
    recognitionRequest?.endAudio()
    indicatorView.removeFromSuperview()

    if inputTextView.text.isEmpty {
        self.sendButton.setImage(UIImage(named: ControllerConstants.mic), for: .normal)
    } else {
        self.sendButton.setImage(UIImage(named: ControllerConstants.send), for: .normal)
    }

    self.inputTextView.isUserInteractionEnabled = true
}

13) Update the view while speech recognition is running, indicating its status to the user. Add the code below just after the audio engine preparation.

// Listening indicator
self.indicatorView.frame = self.sendButton.frame
self.indicatorView.isUserInteractionEnabled = true

let gesture: UITapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(startSTT))
gesture.numberOfTapsRequired = 1
self.indicatorView.addGestureRecognizer(gesture)

self.sendButton.setImage(UIImage(), for: .normal)
indicatorView.startAnimating()
self.sendButton.addSubview(indicatorView)
self.sendButton.addConstraintsWithFormat(format: "V:|[v0(24)]|", views: indicatorView)
self.sendButton.addConstraintsWithFormat(format: "H:|[v0(24)]|", views: indicatorView)

self.inputTextView.isUserInteractionEnabled = false

The screenshot of the implementation is below:

[Screenshot: speech-to-text in action in SUSI iOS]


How to make a SUSI chat bot skill for Cortana

Cortana is an assistant from Microsoft, just like Siri from Apple. We can make a skill for Cortana, i.e. create our own bot within Cortana, so that we can use the bot through Cortana by activating it with its invocation name. To create the SUSI Cortana skill we will use the SUSI API and the Microsoft Bot Framework. First of all we have to make a bot on the Microsoft Bot Framework; to do so, follow this tutorial and replace the code in it with the code given below.

var restify = require('restify');
var builder = require('botbuilder');
var request = require('request');
var http = require('http');

// Setup Restify Server
var server = restify.createServer();
server.listen(process.env.port || process.env.PORT || 8080, function() {
   console.log('listening to ', server.name, server.url);
});
// Create chat bot
var connector = new builder.ChatConnector({
   appId: process.env.appId,
   appPassword: process.env.appPassword
});

setInterval(function() {
       http.get(process.env.HerokuURL);
   }, 1200000);

var bot = new builder.UniversalBot(connector);
server.post('/api/messages', connector.listen());

//getting response from SUSI API upon receiving messages from User
bot.dialog('/', function(session) {
   var msg = session.message.text;
   var options = {
       method: 'GET',
       url: 'http://api.asksusi.com/susi/chat.json',
       qs: {
           timezoneOffset: '-330',
           q: session.message.text
       }
   };
//sending request to SUSI API for response
   request(options, function(error, response, body) {
       if (error) throw new Error(error);
       var ans = (JSON.parse(body)).answers[0].actions[0].expression;
       //responding back to user
       session.say(ans,ans);

   })
});

After making the bot, we have to configure it for Cortana. To do so, select Cortana from the list of channels for your bot on https://dev.botframework.com and add the details shown in the figure below.

The invocation name is the name you call to use your bot. There are many ways to invoke your skill, like “Run <invocation name>” or “Launch <invocation name>”; here is the complete list of invocation commands. The way the SUSI skill in Cortana actually works is that when a user invokes the SUSI skill, Cortana acts as middleware to send and receive responses between the skill and the user. After configuring your bot for Cortana, test it with Cortana using the same Hotmail account you used for the Bot Framework by calling the invocation name. Here is the demo video for testing the SUSI skill: https://youtu.be/40yX16hxcls.

You have now completed creating a skill for Cortana. If you want to publish it to the world, select “Manage Cortana dashboard” from your bots on https://dev.botframework.com and publish it by filling in the form.

If you want to learn more about the Bot Framework, refer to https://docs.microsoft.com/en-us/Bot-Framework/index

References:
SUSI Cortana Repository: https://github.com/fossasia/susi_cortana_skill
Tutorial for bot: https://blog.fossasia.org/susi-ai-bots-with-microsofts-bot-framework/


How to make SUSI kik bot

To make the SUSI kik bot, you first have to configure a bot. To do so, go to https://dev.kik.com/ and make your bot by scanning the code from your kik mobile app. You have to answer the following questions from Botsworth to make your bot.

 

After logging in to your dashboard, get your API key by going to the configuration menu on top.

After your bot is set up, follow the steps given below to create your first SUSI kik bot.

Steps:

  1. Install Node.js from the link below on your computer if you haven’t installed it already.
    https://nodejs.org/en/
  2. Create a folder with any name, open a shell, and change your current directory to the new folder you created.
  3. Type npm init in the command line and enter details like name, version and entry point.
  4. Create a file with the same name that you wrote as the entry point in the step above, i.e. index.js, and it should be in the same folder you created.
  5. Type the following commands in the command line: npm install --save @kikinteractive/kik. After @kikinteractive/kik is installed, type npm install --save http. After http is installed, type npm install --save request. When all the modules are installed, check your package.json; these modules will be included within the dependencies portion.

  6. Your package.json file should look like this.

     
    {
      "name": "susi_kikbot",
      "version": "1.0.0",
      "description": "susi kik bot",
      "main": "index.js",
      "scripts": {
        "test": "node tests/sample.js"
      },
      "license": "MIT",
      "dependencies": {
        "@kikinteractive/kik": "^2.0.11",
        "request": "^2.75.0"
      }
    }
    
    
  7. Copy the following code into the file you created, i.e. index.js, and add your bot name to it in place of username.

    var http = require('http');
    var Bot = require('@kikinteractive/kik');
    var request = require('request');

    var bot = new Bot({
        username: '<your-bot-name>',
        apiKey: process.env.API_KEY,
        baseUrl: process.env.HEROKU_URL
    });

    bot.updateBotConfiguration();

    bot.onTextMessage((message) => {
        // Query the SUSI API with the text of the incoming message
        request('http://api.asksusi.com/susi/chat.json?q=' + encodeURI(message.body), function(error, response, body) {
            var answer;
            if (!error && response.statusCode == 200) {
                answer = JSON.parse(body).answers[0].actions[0].expression;
            } else {
                answer = "Oops, Looks like Susi is taking a break, She will be back soon";
            }
            // Reply inside the callback so the fetched answer is used
            message.reply(answer);
        });
    });

    http.createServer(bot.incoming()).listen(process.env.PORT || 5000);
    
    
  8. Before deploying our bot to Heroku so that it can be active, we have to make a GitHub repository for the chatbot. To make a GitHub repository, follow these steps: in the shell, change the current directory to the folder we created above and write
    git init
    git add .
    git commit -m "initial"
    git remote add origin <URL for remote repository>
    git remote -v
    git push -u origin master

    You will get the URL for the remote repository by creating a repository on your GitHub and copying its link.

  9. To deploy your bot to Heroku, you need an account on Heroku; after making an account, make an app.

     

  10. Deploy the app using the GitHub deployment method.
  11. Select the automatic deployment method.
  12. Go to the settings of your app, open the config variables, paste the API key for your bot into a variable named API_KEY, then get your Heroku app URL and make a variable for it named HEROKU_URL.
  13. Your SUSI bot is ready! Test it by messaging it.

If you want to learn more about the kik API, refer to https://dev.kik.com/#/docs/messaging

 

Resources:

Github Repository: https://github.com/fossasia/susi_kikbot
KIK bot API: https://dev.kik.com/#/docs/messaging


Using a YAML file to read configuration options in Yaydoc

Yaydoc provides access to a lot of configurable variables which can be set as per requirements to configure various sections of the build process. You can see the entire list of variables on the project’s homepage. Till now, the only way to do this was to set appropriate environment variables. Since a web user interface for Yaydoc is in development, providing a clean UI was very important. This meant that we could not just create a bunch of input fields for all the variables, as that could be overwhelming for any new user. So we decided to ask for only minimal information in the web form and read the other variables, if the user chooses to specify them, from a YAML file in the target repository.

To read a YAML file, we used PyYAML. It is a well-established Python package to safely read info from a YAML file and convert it to a Python dictionary. Here is the code snippet for that.

import yaml

def get_yaml_config():
    try:
        with open('.yaydoc.yml', 'r') as file:
            conf = yaml.safe_load(file)
    except FileNotFoundError:
        return {}
    return conf

The above code snippet returns a dictionary with all the keys read from the YAML file. Since none of the options are required, we first create a dictionary with all the defaults and recursively merge it with the YAML dict. The merging is done with the following recursive helper:

def update_dict(base, head):
    for key, value in head.items():
        if isinstance(base, dict):
            if isinstance(value, dict):
                base[key] = update_dict(base.get(key, {}), value)
            else:
                base[key] = head[key]
        else:
            base = {key: head[key]}
    return base
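Putting the two snippets together, the final configuration could be computed like this (DEFAULT_CONFIG is an illustrative name for the dict of defaults):

# Hypothetical defaults; the real project defines one entry per supported option
DEFAULT_CONFIG = {'metadata': {'author': '', 'projectname': ''}}

config = update_dict(DEFAULT_CONFIG, get_yaml_config())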

Now you can create a .yaydoc.yml file in the root of your repository and Yaydoc will read options from there. Here is a sample YAML file:

metadata:
  author: FOSSASIA
  projectname: Yaydoc
  version: development

build:
  doctheme: fossasia_theme
  docpath: docs/
  logo: images/logo.svg
  markdown_flavour: markdown_github

publish:
  ghpages:
    docurl: yaydoc.fossasia.org

It should be noted that the layout of the file may change in the future as the project is in active development.


SUSI AI Bots with Microsoft’s Bot Framework

The Bot Framework is used to build intelligent chatbots and supports .NET, Node.js, and REST. To learn about building bots using the Bot Framework, go to https://docs.microsoft.com/en-us/bot-framework/bot-builder-overview-getstarted. Now, to build a SUSI AI bot for different platforms like Facebook, Telegram, Kik, or Skype, follow the steps given below.

  1. Install Node.js from the link below on your computer if you haven’t installed it already.
    https://nodejs.org/en/
  2. Create a folder with any name, open a shell, and change your current directory to the new folder you created.
  3. Type npm init in the shell and enter details like name, version and entry point.
  4. Create a file with the same name that you wrote as the entry point in the step above, i.e. index.js, and it should be in the same folder you created.
  5. Type the following commands in the command line: npm install --save restify. After restify is installed, type npm install --save botbuilder. After botbuilder is installed, type npm install --save request. When all the modules are installed, check your package.json; the modules will be included within the dependencies portion.

  6. Your package.json file should look like this.

    {
      "name": "skype-bot",
      "version": "1.0.0",
      "description": "SUSI AI Skype Bot",
      "main": "app.js",
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1",
        "start": "node app.js"
      },
      "author": "",
      "license": "ISC",
      "dependencies": {
        "botbuilder": "^3.8.1",
        "request": "^2.81.0",
        "restify": "^4.3.0"
      }
    }
    
  7. Copy the following code into the file you created, i.e. index.js:

    var restify = require('restify');
    var builder = require('botbuilder');
    var request = require('request');
    
    // Setup Restify Server
    var server = restify.createServer();
    server.listen(process.env.port || process.env.PORT || 8080, function() {
       console.log('%s listening to %s', server.name, server.url);
    });
    
    // Create chat bot
    var connector = new builder.ChatConnector({
     appId: process.env.appId,
     appPassword: process.env.appPassword
    });
    
    var bot = new builder.UniversalBot(connector);
    server.post('/api/messages', connector.listen());
    //When bot is added by user
    bot.on('contactRelationUpdate', function(message) {
       if (message.action === 'add') {
           var name = message.user ? message.user.name : null;
           var reply = new builder.Message()
               .address(message.address)
               .text("Hello %s... Thanks for adding me. You can talk to SUSI now.", name || 'there');
           bot.send(reply);
       }
    });
    //getting response from SUSI API upon receiving messages from User
    bot.dialog('/', function(session) {
       var options = {
           method: 'GET',
           url: 'http://api.asksusi.com/susi/chat.json',
           qs: {
               timezoneOffset: '-330',
               q: session.message.text
           }
       };
    //sending request to SUSI API for response 
       request(options, function(error, response, body) {
           if (error) throw new Error(error);
           var ans = (JSON.parse(body)).answers[0].actions[0].expression;
           //responding back to user
           session.send(ans);
    
       })
    });
    
  8. You have to replace appId and appPassword with your own ID and password, which you can get by following the steps below.
  9. Sign in/sign up at https://dev.botframework.com/. After signing in, go to the My Bots option at the top of the page and create/register your bot. Enter the details of your bot and click on “Create Microsoft App ID and password”.
  10. Leave the messaging endpoint blank for now; after getting the app ID and password, we will create the messaging endpoint.
  11. Copy your app ID and password and save them for later use. Paste your app ID in the box given for ID on the bot registration page.

  12. Now we have to create a messaging endpoint to listen for requests. Make a GitHub repository and push the files in the folder we created above.

    In the command line, change the current directory to the folder we created above and write
    git init
    git add .
    git commit -m "initial"
    git remote add origin <URL for remote repository>
    git remote -v
    git push -u origin master

    You will get the URL for the remote repository by creating a repository on your GitHub and copying its link.

  13. Now we have to deploy this GitHub repository to Heroku to get the URL for the messaging endpoint. If you don’t have an account on Heroku, sign up at https://www.heroku.com/; otherwise just sign in and create a new app.
  14. Deploy your repository onto Heroku from the deploy option, choosing GitHub as the deployment method.
  15. Select automatic deployment so that any changes you make in the GitHub repository are deployed to Heroku.

  16. Open your app from the option on the top right, copy the link of your Heroku app and append /api/messages to it, then enter this URL as the messaging endpoint:

    https://{Your_App_Name}.herokuapp.com/api/messages
  17. Register the bot and add the app ID and password you saved to your Heroku app in Settings → Config Variables.
  18. Now go to https://dev.botframework.com/, then in My Bots go to your bot and click on the Skype bot, then add it to your contacts and start chatting.
  19. You can connect the same bot to different channels like Kik, Slack, Telegram, Facebook and many others.

    Add the different channels on your bot page and follow these links for deploying onto the different platforms.

If you want to learn more about the Bot Framework, you can refer to https://docs.microsoft.com/en-us/Bot-Framework/index

Resources:
Code: https://github.com/fossasia/susi_skypebot
Bot Framework: https://docs.microsoft.com/en-us/bot-framework/bot-builder-overview-getstarted


How to Collaborate Design on Hardware Schematics in PSLab Project

Generally, ECAD tools are not built to support collaborative features the way git supports them in software programming. PSLab hardware is developed using an open source ECAD tool called KiCAD. It is a practice in the electronics industry to use hierarchical blocks to support collaboration: one person can work on a specific block while leaving the rest of the design untouched. This is a workaround that lets a team work on one hardware design just like a software design. In the PSLab hardware repository, many developers can work simultaneously using this technique without any conflicts in the project files.

Printed circuit board (PCB) design is an art. The way the components are placed and how they are interconnected through different types of wires and pads makes it an art for hardware design engineers: if they do not use auto-routing, two PCB designs for the same schematic will be quite different from one another.

There are two major approaches in designing PCBs.

  • Top Down method
  • Bottom Up method

Either of these methods can be implemented in the PSLab hardware repository to support collaboration by multiple developers at the same time.

Top Down Method

In this method the design starts from the most abstract definitions. We can think of this as a black box with several wires coming out of it. The user knows how to use the wires and which devices they need to be connected to, but the inside of the black box is not visible. A designer can then open up this box and break the design down into several smaller black boxes, each performing a subset of the functionality the bigger black box did. He can go on breaking it down into even smaller boxes until reaching the very bottom, where basic components such as transistors, resistors and diodes are found.

Bottom Up Method

In the bottom-up method, the opposite of the top-down approach is used: small parts are combined to design a much bigger part, and those are combined to build an even bigger part, eventually creating the final design. The human body is a great example of the bottom-up method: cells create organs, organs create systems and systems create the body.

Designing Top Down Designs using KiCAD

In PCB design, designers are free to choose whichever approach is more suitable for their project. In this blog, the top-down method is used to demonstrate how to create a design from abstract concepts, illustrating how to create a design one layer deep using hierarchical blocks. However, these design procedures can be repeated as many times as the designer wants, depending on the complexity of the project.

Step 01 – Create a new project in KiCAD

Step 02 – Open up Eeschema to begin the design

Step 03 – Create a Hierarchical Sheet

Step 04 – Place the hierarchical sheet on the design sheet and give it a name

Step 05 – Enter sheet

Step 06 – Place components and create a schematic design inside the sheet and place hierarchical labels

Step 07 – Define the labels as input or output and give them an identifier. Once done, place them at appropriate places and connect them with wires

Step 08 – Go back to the main sheet to complete the hierarchical block

Step 09 – Place hierarchical pins on the block

Click on the “Place hierarchical pin” icon from the toolbar and click on the block. The pins can be placed anywhere on the block. As a convention, input pins are placed on the left side and output pins on the right side of the block.

Step 10 – Complete the circuit
