FOSSASIA at Google Code-In 2016 Grand Prize Trip

This year FOSSASIA came up with a whopping number of GCI participants, topping the list of organizations. FOSSASIA is a mentor organization in the Google Code-In contest, which introduces pre-university students to open source development.

Every year Google hosts a grand prize trip for the GCI winners, and I represented FOSSASIA as a mentor.

FOSSASIA GCI winners and mentor at the Google Mountain View campus.

Day 1: Meet and Greet with the Diverse Communities

We all headed to the San Francisco Google office and had a great time interacting with members of diverse open source organizations from different parts of the world. I had some engaging conversations with the kids about how they scheduled their sleep hours to complete tasks and get feedback from mentors in different time zones! I was also overwhelmed listening to their interests apart from open source contributions.

“I am a science enthusiast, mainly interested in Computer Science and its wide range of applications. I also enjoy playing the piano, reading, moving, and having engaging conversations with my friends. As a participant in the GCI contest, I got the chance to learn by doing, I got an insight of how it is like to work on a real open-source project, met some great people, helped others (and received help myself). Shortly, it was amazing, and I’m proud to have been a part of it.” shared one of our winners, Oana Rosca.

There were people from almost 14 different countries; in fact, FOSSASIA, as a team, was the most diverse group 🙂

Day 2: Award Ceremony

We had two winners from FOSSASIA: Arkhan Kaiser from Indonesia and Oana Rosca from Romania. In total there were 8 organizations with 16 winners. The award ceremony was held on day 2, and each winner was felicitated by Chris DiBona, the director of the Google open source team.

Talks by Googlers

We had amazing speakers from Google who spoke about their work, experiences, and journeys to Google. Our first speaker was Jeremy Allison, a notable contributor to Samba, a free software re-implementation of the SMB/CIFS networking protocol. He spoke on “How the Internet works” and gave a deeper view of the magic behind it.

We had various speakers from different domains, such as Grant Grundler from the Chrome team, Lyman Missimer from Google Expeditions, Katie Dektar from the Making and Science team, Sean Lip from Oppia (Googler and Oppia org admin), Timothy Papandreou from Waymo and Andrew Selle from TensorFlow.

Day 3: Fun Activities

We had various fun activities organized by the Google team. I had a great time cruising to Alcatraz Island, and later we walked across the Golden Gate Bridge. Then came the best part of the day: the cruise dinner.

Day 4: End of the trip

Oana, Arkhan and I gave a presentation about our work during GCI and spoke about all the amazing projects under FOSSASIA. One cool thing we did is that we “doodled” our presentation 🙂 Here are a few images from the actual presentation.

The day ended well, with loads of good memories and information. Thanks to the open source technologies, their availability, and a beautiful, friendly community, these memories and connections will now remain for a lifetime.


How Anonymous Mode is Implemented in SUSI Android

Login and signup are important features for apps like chat apps, because users want to save their messages and keep them secure from others. In SUSI Android we save the messages of logged-in users on the server, associated with their account. But users can also use the app without logging in. In this blog post, I will show you how anonymous mode is implemented in SUSI Android.

When the user logs in with a username and password, we provide a token to the user for a limited amount of time. In anonymous mode, however, we never provide a token, and we set the ANONYMOUS_LOGGED_IN flag to true to indicate that the user is using the app anonymously:

PrefManager.clearToken()

PrefManager.putBoolean(Constant.ANONYMOUS_LOGGED_IN, true)

We use the ANONYMOUS_LOGGED_IN flag to check whether the user is using the app anonymously. When a user opens the app, we first check whether they are already logged in. If not, we check the ANONYMOUS_LOGGED_IN flag; true means the user is in anonymous mode:

if (PrefManager.getBoolean(Constant.ANONYMOUS_LOGGED_IN, false)) {
    intent = new Intent(LoginActivity.this, MainActivity.class);
    startActivity(intent);
}

Otherwise, we show the login page. The user can use the app either by logging in with a username and password, or anonymously by clicking the skip button. Clicking the skip button sets the ANONYMOUS_LOGGED_IN flag to true:

public void skip() {
    Intent intent = new Intent(LoginActivity.this, MainActivity.class);
    PrefManager.clearToken();
    PrefManager.putBoolean(Constant.ANONYMOUS_LOGGED_IN, true);
    startActivity(intent);
}

If the user is using the app in anonymous mode but wants to log in, there is a login option in the menu.

When the user selects the login option from the menu, the app redirects to the login screen and sets the ANONYMOUS_LOGGED_IN flag to false. This ensures that if the user closes the app instead of logging in and opens it again, they can't use the app until they either log in or click the skip button.

case R.id.action_login:
    PrefManager.putBoolean(Constant.ANONYMOUS_LOGGED_IN, false);
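For context, here is a minimal sketch of how this case could sit inside the activity's menu handler; the handler shape is an assumption, and SUSI Android's actual onOptionsItemSelected may differ:

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
        case R.id.action_login:
            // Leaving anonymous mode: a real login is required next time.
            PrefManager.putBoolean(Constant.ANONYMOUS_LOGGED_IN, false);
            startActivity(new Intent(this, LoginActivity.class));
            return true;
        default:
            return super.onOptionsItemSelected(item);
    }
}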


How Settings of SUSI Android App Are Saved on Server

SUSI Android allows users to specify whether the action button of the soft keyboard acts as a carriage return or as a send action. A user can use SUSI.AI on different clients, like Android, iOS, or the web. To give users a uniform experience, we save user settings on the server, so that a change made in one chat client is reflected in all the others. Every chat client stores user-specific data on the server, and pushes and pulls that data to and from the server to keep the state of the app in sync for that particular user.

We use special keys to store the setting information on the server. For example:

  1. Enter As Send (key: enter_send, value: true/false): true means pressing the enter key sends the message; false means it adds a new line.
  2. Mic Input (key: mic_input, value: true/false): true means the default input method is speech, with keyboard input still supported; false means keyboard input is the only input method.
  3. Speech Always (key: speech_always, value: true/false): true means we get speech output irrespective of the input type.
  4. Speech Output (key: speech_output, value: true/false): true means we get speech output for speech input; false means we don't.
  5. Theme (key: theme, value: dark/light): selects the dark or the light theme.

How settings are stored on the server

Whenever a user setting is changed, the client updates it on the server so that the state is maintained across all chat clients. When the user changes a setting, we send three parameters to the server: 'key', 'value' and 'access_token'. For example, suppose 'Enter As Send' is set to false; when the user changes it to true, we immediately update it on the server with key 'enter_send' and value 'true'.

The endpoint used to add or update user settings is:

BASE_URL+'/aaa/changeUserSettings.json?key=SETTING_NAME&value=SETTING_VALUE&access_token=ACCESS_TOKEN'

'SETTING_NAME' is the key of the corresponding setting, 'SETTING_VALUE' is its updated value and 'ACCESS_TOKEN' is used to find the correct user account. We use the Retrofit library for the network call.

settingResponseCall = ClientBuilder().susiApi.changeSettingResponse(key, value, PrefManager.getToken())

If the setting is updated on the server successfully, the server replies with the message 'You successfully changed settings of your account!'.
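For reference, here is a minimal sketch of what the Retrofit endpoint declaration behind changeSettingResponse could look like; the interface and response type names are assumptions, not necessarily the ones used in SUSI Android:

// Hypothetical Retrofit 2 declaration; names are assumed for illustration.
public interface SusiService {
    @GET("/aaa/changeUserSettings.json")
    Call<ChangeSettingResponse> changeSettingResponse(
            @Query("key") String key,
            @Query("value") String value,
            @Query("access_token") String accessToken);
}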

How settings are retrieved from the server

We retrieve settings from the server when the user logs in. The endpoint used to fetch user settings is:

BASE_URL+'/aaa/listUserSettings.json?access_token=ACCESS_TOKEN'

It requires the ACCESS_TOKEN to retrieve the settings of a particular user. When the user logs in, we use the getUserSetting method to retrieve their settings from the server, and PrefManager.getToken() to get the ACCESS_TOKEN.

userSettingResponseCall = ClientBuilder().susiApi.getUserSetting(PrefManager.getToken())

We use userSettingResponseCall to get a response of type 'UserSetting', from which we can read the different settings. 'UserSetting' contains 'Session' and 'Settings', and 'Settings' holds the value of every setting. Settings are stored on the server as strings, so after retrieving them we convert each one to the required type. For example, 'Enter As Send' is a boolean, so we convert it after retrieval:

var settings: Settings? = response.body().settings

utilModel.setEnterSend((settings?.enterSend.toString()).toBoolean())


Adding Manual ISO Controls in Phimpme Android

The Phimpme Android application comes with a well-featured camera to take high-resolution photographs. It offers an auto mode as well as a manual mode for users who like to customise the camera experience to their own liking. The manual mode lets users select from the range of ISO values supported by their device, to enhance images in circumstances where auto mode fails, such as low-light conditions.

In this tutorial, I will be discussing how we achieved this in Phimpme Android with some code snippets and screenshots.

To provide the users with an option to select from the range of ISO values, the first thing we need to do is scan the phone for all the supported ISO values and store them in an ArrayList for display later on. This can be done with the snippet below:

String iso_values = parameters.get("iso-values");
if( iso_values == null ) {
    iso_values = parameters.get("iso-mode-values"); // Galaxy Nexus
    if( iso_values == null ) {
        iso_values = parameters.get("iso-speed-values"); // Micromax A101
        if( iso_values == null )
            iso_values = parameters.get("nv-picture-iso-values"); // LG dual P990
    }
}

Every device supports a different keyword for exposing its list of ISO values, hence we have tried to cover every possible keyword. The keywords used above cover almost 90% of Android devices and fetch the set of ISO values successfully.

For devices which support ISO values but don't provide a keyword to extract them, we can fall back to a standard list of ISO values, using the snippet below:

values.add("200");
values.add("400");
values.add("800");
values.add("1600");

After extracting the set of ISO values, we need to build the list shown to the user and handle the selection of a particular ISO value, as depicted in the Phimpme camera screenshot below.
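The iso_values string returned by Camera.Parameters is a comma-separated list (for example "auto,100,200,400,800,1600"), so before displaying it we have to split it into individual entries. A minimal sketch, with the helper name assumed:

// Split the comma-separated ISO string into a list for the UI.
private List<String> splitIsoValues(String iso_values) {
    List<String> values = new ArrayList<>();
    if (iso_values != null && iso_values.length() > 0) {
        for (String value : iso_values.split(",")) {
            values.add(value.trim());
        }
    }
    return values;
}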

Now, to set the selected ISO value, we first need to determine the ISO key under which the value must be written, as depicted in the snippet below:

String iso_key = "iso"; // the common default key (assumed starting value)
if( parameters.get(iso_key) == null ) {
    iso_key = "iso-speed"; // Micromax A101
    if( parameters.get(iso_key) == null ) {
        iso_key = "nv-picture-iso"; // LG dual P990
        if( parameters.get(iso_key) == null ) {
            if( Build.MODEL.contains("Z00") )
                iso_key = "iso"; // Asus Zenfone 2 Z00A and Z008
        }
    }
}

Finding the key to set the ISO value works just like finding the key to extract the values from the device. The keys listed above cover most devices.

Once we have the ISO key, we change the camera parameters to reflect the selected value:

parameters.set(iso_key, supported_values.selected_value);
setCameraParameters(parameters);

To get the full source code on how to set the ISO values manually, please refer to the Phimpme Android repository.

Resources

  1. Stackoverflow – Keywords to extract ISO values from the device: http://stackoverflow.com/questions/2978095/android-camera-api-iso-setting
  2. Open camera Android source code: https://sourceforge.net/p/opencamera/code/ci/master/tree/
  3. Blog – Learn more about ISO values in photography: https://photographylife.com/what-is-iso-in-photography

Using Picasso to Show Images in SUSI Android

Important skills of SUSI.AI are displaying web search results, maps of any location, and lists of relevant information on a topic. This blog post covers why Glide was replaced by Picasso for showing the images attached to these action types, and how Picasso is used in SUSI Android. Picasso is a powerful open source image downloading and caching library developed by Square.

Why was Glide replaced by Picasso in SUSI Android?

Previously we used the Glide library to show previews in SUSI Android, but we replaced it because it kept throwing the following error:

java.lang.IllegalArgumentException: You cannot start a load for a destroyed activity
at com.bumptech.glide.manager.RequestManagerRetriever.get(RequestManagerRetriever.java:102)
at com.bumptech.glide.manager.RequestManagerRetriever.get(RequestManagerRetriever.java:87)
at com.bumptech.glide.Glide.with(Glide.java:629)
at org.fossasia.susi.ai.adapters.recycleradapters.WebSearchAdapter.onBindViewHolder(WebSearchAdapter.java:74)

The reason for this error is that when the activity is destroyed and recreated, the context passed to Glide is the old one, belonging to an activity that has already been destroyed:

Glide.with(context).load(imageList.get(0))

One way around this error is to use context.getApplicationContext(), but that is a bad idea, because it ties image loads to the application lifecycle instead of the activity's. The better solution is to replace Glide with Picasso, which is also a very good image downloading and caching library.

To use Picasso in your project, add the dependency to your build.gradle (Module) file:

dependencies {
    compile "com.squareup.picasso:picasso:2.4.0"
}

How Picasso is used for different action types

Map

"actions": [
    {
      "type": "map",
      "latitude": "1.2896698812440377",
      "longitude": "103.85006683126556",
      "zoom": "13"
    }
]

The URL we use to retrieve the map image is:

http://staticmap.openstreetmap.de/staticmap.php?center=latitude,longitude&zoom=zoom&size=lengthXbreadth

Picasso loads the image from this URL and shows it in the ImageView. Here mapImage is the ImageView in which the map image is shown:

Picasso.with(currContext).load(mapHelper.getMapURL())
        .into(mapImage, new com.squareup.picasso.Callback() {
            @Override
            public void onSuccess() {
                pointer.setVisibility(View.VISIBLE);
            }

            @Override
            public void onError() {
                Log.d("Error", "map image can't be loaded");
            }
        });

WebSearch

When we send a query like "Search for fog", we get a 'query' field in the reply from the server:

"query": "fog"

We use this query to retrieve an image URL, which we then pass to Picasso; Picasso loads the image into the previewImageView ImageView. The image URL is retrieved using the DuckDuckGo API, via the URL:

https://api.duckduckgo.com/?format=json&pretty=1&q=query&ia=meanings

It gives a JSON response which contains the image URL.
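As a sketch of that parsing step: the DuckDuckGo Instant Answer API exposes a top-level "Image" field, though SUSI Android's actual parsing may go through a model class instead:

// Hedged sketch: extract an image URL from the DuckDuckGo JSON reply.
JSONObject response = new JSONObject(body);   // throws JSONException on malformed input
String iconUrl = response.optString("Image"); // "" if the field is absent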

Picasso.with(context).load(iconUrl)
        .into(holder.previewImageView, new com.squareup.picasso.Callback() {
            @Override
            public void onSuccess() {
                Log.d("Success", "image loaded successfully");
            }

            @Override
            public void onError() {
                holder.previewImageView.setVisibility(View.GONE);
            }
        });

Here too, com.squareup.picasso.Callback is used to find out whether the image loaded successfully.

RSS

When we send a query like "dhoni", we get a 'link' in the reply from the server:

"title": "Dhoni",
"description": "",
"link": "http://de.wikipedia.org/wiki/Dhoni"

We pass this link to the android-link-preview library to retrieve a relevant image URL, and then Picasso loads that URL into the ImageView previewImageView:

Picasso.with(currContext).load(imageList.get(0))
      .fit().centerCrop()
      .into(previewImageView);


Sorting Photos in Phimpme Android

The Phimpme Android application features a fully-fledged gallery interface with an option to switch between all-photos mode and albums mode, and to sort photos according to various sort actions. Sorting helps the user get to the desired photo immediately, without having to scroll all the way down in case it is the last photo in the list, which is generated automatically by scanning the phone for images using Android's MediaStore class. In this tutorial, I discuss how we achieved the sorting options in the Phimpme application, with the help of some code snippets.

To sort the all-photos list, first of all we need a list of all the photos, obtained by scanning the phone via the code snippet provided below:

uri = android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI;
String[] projection = {MediaStore.MediaColumns.DATA};
cursor = activity.getContentResolver().query(uri, projection, null, null, null);

In the above code we use a cursor to point to each photo, extracting the path of each image and storing it in a list inside a while loop.
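The while loop that walks the cursor can be sketched as follows; variable names are assumed and Phimpme's actual code may differ:

// Collect each image's file path from the cursor.
// Column 0 corresponds to MediaStore.MediaColumns.DATA in the projection above.
ArrayList<String> listOfAllImages = new ArrayList<>();
if (cursor != null) {
    while (cursor.moveToNext()) {
        listOfAllImages.add(cursor.getString(0));
    }
    cursor.close();
}

After generating the list of paths, we convert it into a list of Media objects using the file paths: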

for (String path : listOfAllImages) {
    list.add(new Media(new File(path)));
}
return list;

After generating the list of all images, we can sort it using Android's Collections class. In Phimpme Android we provide options to sort photos by different criteria, namely:

  1. Name Sort action
  2. Date Sort action
  3. Size Sort action
  4. Numeric Sort action

As sorting is a somewhat heavy task, doing it on the main thread would freeze the UI of the application, so we run it in an AsyncTask with a progress dialog. After adding the above four options to the menu, we define an AsyncTask to sort the images; in its onPreExecute method we display a progress dialog to show the user that sorting is in progress, as depicted in the code snippet below:

AlertDialog.Builder progressDialog = new AlertDialog.Builder(LFMainActivity.this, getDialogStyle());
dialog = AlertDialogsHelper.getProgressDialog(LFMainActivity.this, progressDialog,
      getString(R.string.loading_numeric), all_photos ? getString(R.string.loading_numeric_all) : getAlbum().getName());
dialog.show();

In the doInBackground method of the AsyncTask, we sort the list of all photos using the static sort method of the Collections class, which takes the list of media files as its first parameter and a MediaComparator as its second. The comparator is built from the sorting mode and the sorting order, where the order decides whether the list is arranged ascending or descending.

getAlbum().setDefaultSortingMode(getApplicationContext(), NUMERIC);
Collections.sort(listAll, MediaComparators.getComparator(getAlbum().settings.getSortingMode(), getAlbum().settings.getSortingOrder()));
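As an illustration of what one of these comparators could look like, here is a hedged sketch of a name comparator; Phimpme's actual MediaComparators implementation may differ, and Media.getName() is assumed:

// Compare two Media objects by display name, honouring the sort order.
public static Comparator<Media> getNameComparator(final boolean ascending) {
    return new Comparator<Media>() {
        @Override
        public int compare(Media a, Media b) {
            int result = a.getName().compareToIgnoreCase(b.getName());
            return ascending ? result : -result;
        }
    };
}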

After sorting, we update the data set to reflect the changes in the UI. We do this in the onPostExecute method of the AsyncTask, after dismissing the progress dialog to avoid window leaks in the application. You can read more about window leaks caused by ProgressDialog in the resources listed below.

dialog.dismiss();
mediaAdapter.swapDataSet(listAll);

To get the full source code, you can refer the Phimpme Android repository listed in the resources below.

Resources

  1. Android developer guide to mediastore class: https://developer.android.com/reference/android/provider/MediaStore.html
  2. GitHub LeafPic repository: https://github.com/HoraApps/LeafPic
  3. Stackoverflow – Preventing window leaks in Android: https://stackoverflow.com/questions/6614692/progressdialog-how-to-prevent-leaked-window
  4. Blog – Sorting lists in Android: http://www.worldbestlearningcenter.com/tips/Android-sort-ListView.htm

Implementing Text-to-Speech (TTS) in SUSI Android

Mobile assistants are designed to perform tasks the user commands through a chat UI or speech. The Android OS already provides text-to-speech (TTS) and speech-to-text (STT) features; TTS has been available since Android 1.6. In this blog post I show how TTS is implemented in SUSI Android and how I fixed the 'delay in speech response' issue.

The TextToSpeech class controls the TTS engine. To use it, import it in the activity where you want the text-to-speech feature:

import android.speech.tts.TextToSpeech;

After importing the TextToSpeech class, we need to initialize it:

TextToSpeech tts = new TextToSpeech(this,this);

Here the first parameter is the Context and the second one is the listener, which is used to inform our app that the engine is ready to use. To be notified, we implement TextToSpeech.OnInitListener:

TextToSpeech.OnInitListener listener = new TextToSpeech.OnInitListener() {
    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS)
            tts.setLanguage(Locale.UK); // set the default language
    }
};

If the status is SUCCESS, TTS was initialized successfully and we can use it; otherwise we can't. The setLanguage method sets the language in which we want the reply. The engine can then be initialized as:

TextToSpeech tts = new TextToSpeech(getApplicationContext(), listener);

One thing to remember when using TTS is that it runs on the main thread, so it may cause delays in the text-to-speech conversion or block the UI for a while. It is better to wrap it as in the code below:

new Handler().post(new Runnable() {
    @Override
    public void run() {
        tts = new TextToSpeech(getApplicationContext(), listener);
    }
});

Now our engine is ready to speak; we simply pass the string we want it to read:

tts.speak("text to read", TextToSpeech.QUEUE_FLUSH, null, null);

But before calling tts.speak, it is important to request the audio focus, because only one audio source can have focus at a time. You can track focus changes using the code below:

private AudioManager.OnAudioFocusChangeListener afChangeListener =
        new AudioManager.OnAudioFocusChangeListener() {
            public void onAudioFocusChange(int focusChange) {
                // check for focus
            }
        };

OnAudioFocusChangeListener is called when the audio focus of the system changes, and depending on the value of focusChange we either stop TTS or keep using it.
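A minimal sketch of such a handler; the constants are standard AudioManager values, but the exact policy in SUSI Android may differ:

public void onAudioFocusChange(int focusChange) {
    if (focusChange == AudioManager.AUDIOFOCUS_LOSS
            || focusChange == AudioManager.AUDIOFOCUS_LOSS_TRANSIENT) {
        tts.stop(); // another app took the audio focus, so stop speaking
    }
}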

AudioManager audiofocus = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

audiofocus is an instance of the AudioManager class. We need it to call the requestAudioFocus method of the AudioManager class, which returns the status of the request for the audio focus change. This method takes three parameters: an instance of AudioManager.OnAudioFocusChangeListener, the stream type, and a duration hint. Only if the request is granted can we call tts.speak:

int result = audiofocus.requestAudioFocus(afChangeListener, AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    tts.speak("text to read", TextToSpeech.QUEUE_FLUSH, null, null);
}

We were continuously facing the 'delay in speech response' issue because the voiceReply method's implementation was wrong: we were initializing TextToSpeech on every call of voiceReply, and since onInit runs on the main thread, this caused the delay in the voice response. So instead of initializing TTS each time, I reused the instance already initialized when the activity is created:

String spoken = reply;
textToSpeech.speak(spoken, TextToSpeech.QUEUE_FLUSH, null, null);

You can also control how the engine reads the text; for example, we can modify the pitch and the speech rate:

tts.setPitch((float)pitch);

tts.setSpeechRate((float)speed);


Integration of Susi AI to Gitter

This blog post discusses the development of the SUSI Messenger bot on Gitter. It replies instantly to messages sent to it, using the SUSI API. Gitter's streaming API notifies us when a user messages the SUSI chat room, and the REST API lets us send back a reply from the SUSI API to the room.

Feel free to message the already set up SUSI AI account on Gitter and have a chat with it.

Prerequisites

  1. Basic knowledge of calling APIs and fetching data from or posting data to them.
  2. Node.js language.
  3. Github
  4. Heroku

Figure – Architecture for running SUSI AI on different messaging services.

This blog post will walk you through each of the steps required to integrate SUSI AI with Gitter:

Setup SUSI AI Bot on Gitter

  1. Create a GitHub or Twitter account with a username containing 'Susi' as a substring, because this is the name that will be shown with the reply string we get from SUSI AI.
  2. Now sign in to Gitter with that Twitter or GitHub account from here.
  3. Create your community by visiting this page. After writing your community name press next, invite the people you want in this room and press next. You will be redirected to your community's lobby. This lobby is the chat room to which we will deploy our SUSI AI.
  4. Now visit the Gitter developer page and press sign in on the top right. You will be redirected to your apps page. Copy the personal access token written there, as shown in this image (the area colored black will have your access token).

  5. In a new tab in your browser, visit https://api.gitter.im/v1/rooms?access_token=YOUR_ACCESS_TOKEN, with YOUR_ACCESS_TOKEN replaced by the token we just copied.

A JSON object will be shown in the browser. You will see the value of the 'name' key as YOUR_COMMUNITY_NAME/Lobby. Copy the id of this chat room, as we will need it later. You can refer to the image below; your chat room id will be in the area colored black.

  6. Create a new Heroku app here. This app will accept the requests from Gitter and the SUSI API.

  7. Set the config variables for this Heroku app in the settings tab of your account: set ROOM_ID to the id of the chat room and TOKEN to the personal access token we copied in steps 4 and 5.

These were the formalities needed to set up our chat bot account on Gitter.

  8. Now let's jump to the code of how this integration is done:

To use the two config variables set in Heroku, we need these two lines in our Node.js code:

var roomId = process.env.ROOM_ID;
var token = process.env.TOKEN;

We need to set up an options variable with our access token and room id in it:

// Setting the options variable to use it in the https request block
var options = {
    hostname: 'stream.gitter.im',
    port:     443,
    path:     '/v1/rooms/' + roomId + '/chatMessages',
    method:   'GET',
    headers:  {'Authorization': 'Bearer ' + token}
};

We will send this options variable when making the request so that Gitter lets our request through and notifies us whenever a client messages our SUSI chat room.

The res.on('data') handler accepts a function which is called whenever a client messages our SUSI chat room:

// making a request to the Gitter stream API
var https = require('https');

var req = https.request(options, function(res) {
  res.on('data', function(chunk) {
    // do stuff
  });
});

req.on('error', function(e) {
  console.log('Hey something went wrong: ' + e.message);
});

req.end();

According to the docs of the Gitter REST API, the JSON data we receive when a client sends a message to a chat room looks like this:

To get this response into a variable, we can use this code snippet:

res.on('data', function(chunk) {
  var msg = chunk.toString();
  if (msg != " \n") {              // If message is not an empty message
    var jsonMsg = JSON.parse(msg);
    // ...
  }
});

Now we have the JSON response shown in the figure above in the jsonMsg variable. To extract the client's message from this JSON object:

var clientMsg = jsonMsg.text;

Now that we have the user query in the clientMsg variable, let's call the SUSI API and fetch an answer for it.

As an example, let's first take the query 'hi' and visit http://api.asksusi.com/susi/chat.json?q=hi in the browser. We get a JSON object as follows:

{
        "query": "hi",
        "count": 1,
        "client_id": "aG9zdF8xMDMuMjkuMjIyLjE4MA==",
        "query_date": "2017-07-17T02:29:44.171Z",
        "answers": [{
            "data": [{
                "0": "hi",
                "token_original": "hi",
                "token_canonical": "hi",
                "token_categorized": "hi",
                "timezoneOffset": "-330",
                "language": "en"
            }],
            "metadata": {"count": 1},
            "actions": [{
                    "type": "answer",
                    "language": "de",
                    "expression": "Hallo!"
            }],
   "skills": ["/susi_skill_data/models/general/smalltalk/de/German-Standalone-aiml2susi.txt"]
        }],
        "answer_date": "2017-07-17T02:29:44.179Z",
        "answer_time": 8,
        "language": "en",
        "session": {"identity": {
            "type": "host",
            "name": "103.29.222.180",
            "anonymous": true
        }}
    }

The answer can be found as the value of the key named 'expression'. In this case, it is "Hallo!".

To fetch the answer for our client message in code, we can use this snippet in Node.js:

// including request module
var request = require('request');

// setting options to make a successful call to Susi API.
var susiOptions = {
            method: 'GET',
            url: 'http://api.asksusi.com/susi/chat.json',
            qs:
            {
                timezoneOffset: '-330',
                q: clientMsg  //the client message sent to SUSI chat room.
            }
        };

// A request to the SUSI API
request(susiOptions, function (error, response, body) {
    if (error)
        throw new Error(error);
    // answer fetched from SUSI
    ans = (JSON.parse(body)).answers[0].actions[0].expression;
});

The properties required for the call are set up through a JSON object (susiOptions). We pass susiOptions to the request function as its first parameter; the response from the API is stored in the body variable. We need to parse this body to access the properties of the object, and hence fetch the answer from the SUSI API.

As we now have the answer, let’s call the API of Gitter to show our answer back to the user. Let’s code the request for that:

// To send a reply by Susi AI to client's message back to Gitter
         var gitterOptions = {
                               method: 'POST',
                               url: 'https://api.gitter.im/v1/rooms/'+roomId+'/chatMessages',
                               headers:
                               {
                                 'authorization': 'Bearer '+ token ,
                                 'content-type': 'application/json',
                                 'accept': 'application/json'
                               },
                               body:
                               {
                                 text: ans
                               },
                               json: true
                             };

         // making the request to Gitter API
         request(gitterOptions, function (error, response, body) {
           if(error)
             throw new Error(error);
           console.log(body);
         });

Hence, we have made the basic chat work!

The streaming API of Gitter notifies us of every message sent to our chat room, including the replies sent by our own bot. To avoid SUSI falling into an infinite loop of answering its own messages, we must include this check in our code:

res.on('data', function(chunk) {
  var msg = chunk.toString();
  if (msg != " \n") {                               // If message is not an empty message
    var jsonMsg = JSON.parse(msg);
    if (jsonMsg.fromUser.displayName != 'SusiAI') { // If it's not our own answer
      // do stuff
    }
  }
});

req.on('error', function(e) {
  console.log('Hey something went wrong: ' + e.message);
});

req.end();

The display name in my case is 'SusiAI', but it may differ in your case, according to the GitHub or Twitter account you created.

  9. Upload this code to GitHub.
  10. Connect the Heroku app to the GitHub repository which has your code.

  11. Deploy on the development branch. If you intend to contribute, it is recommended to enable Automatic Deploys.

Branch Deployment.

Successful Deployment.

  12. Go to the Gitter room you created and enjoy chatting with SUSI.

Uploading images to Dropbox from the Phimpme App

The Phimpme Android application, along with its camera, gallery and image editing features, offers the option to upload images to many social media and cloud storage services without having to install several other applications. As we can see from the screenshot below, the Phimpme application contains a user-friendly accounts screen for connecting to accounts with a username and password, so that we can later upload photos from the share screen to a particular account.

One such famous cloud storage service is Dropbox, and in this tutorial I explain how I implemented account authentication and the image uploading feature for Dropbox in the Phimpme Android application.

Step 1

The first thing we need to do is create an application in the Dropbox application console to get the app key and the app secret, which we will require later for authentication. To create an application on the Dropbox application console page:

  1. Go to this link. It will open a page as depicted in the screenshot below:
  2. Now click on the Dropbox API option.
  3. Click on App folder – access to a single folder created specifically for your app.
  4. Write the name of your application and press the create app button.

After this, we will be redirected to a page which contains all the keys required to authenticate and upload photos.

Step 2

After getting the keys, the next thing we need to do is install the Dropbox SDK. To do this:

  1. Download the Android SDK from this link and extract it.
  2. Copy the dropbox-android-sdk-1.6.3.jar and json_simple-1.1.jar files to the libs folder.
  3. Click on the add as library button by right clicking on the jar files added.
  4. Copy the code below into the AndroidManifest.xml file; it defines the Dropbox login activity within a new activity tag.
android:name="com.dropbox.client2.android.AuthActivity"
android:launchMode="singleTask"
android:theme="@android:style/Theme.Translucent.NoTitleBar"
android:configChanges="orientation|keyboard">
<intent-filter>
     <!-- Change this to be db- followed by your app key -->
         <data android:scheme="db-app_key" />
         <action android:name="android.intent.action.VIEW" />
         <category android:name="android.intent.category.BROWSABLE"/>
         <category android:name="android.intent.category.DEFAULT" />
</intent-filter>

In the data element of the snippet, replace app_key with the key you received in step 1.

Step 3

After setting up everything, we need to obtain an access token for the user so that we can upload photos to that particular account. For this we can use the snippet below, which uses the Dropbox SDK we installed in step 2 to create an object named mDBApi and initialise the session used to authenticate the user:

private DropboxAPI<AndroidAuthSession> mDBApi;

AppKeyPair appKeys = new AppKeyPair(APP_KEY, APP_SECRET);
AndroidAuthSession session = new AndroidAuthSession(appKeys);
mDBApi = new DropboxAPI<AndroidAuthSession>(session);

After initialisation in the onCreate method of the activity, we can authenticate the user using the following line of code.

mDBApi.getSession().startOAuth2Authentication(MyActivity.this);

This opens a window where the user is prompted to log in to their Dropbox account. After the login finishes, we are taken back to the activity that made the authentication call, so in the onResume method we need to fetch the user's access token, which will be used later to upload the images:

mDBApi.getSession().finishAuthentication();
String accessToken = mDBApi.getSession().getOAuth2AccessToken();
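The snippet above only reads the token. A minimal sketch of persisting it so later uploads can reuse it; the preference file and key names here are assumptions, and Phimpme may store it differently:

// Save the access token for future upload sessions.
SharedPreferences prefs = getSharedPreferences("dropbox_prefs", MODE_PRIVATE);
prefs.edit().putString("dropbox_access_token", accessToken).apply();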

After we have stored the access token, we can upload the selected image to Dropbox using the following lines of code:

File file = new File("working-draft.jpg");
FileInputStream inputStream = new FileInputStream(file);
Entry response = mDBApi.putFile("/magnum-opus.jpg", inputStream,
                              file.length(), null, null);

For more information on uploading and retrieving data from Dropbox, you can go to the Dropbox developer guide, and for a working example refer to the Phimpme Android repository in the resources below.

Resources

  1. Phimpme Repo : Phimpme Android github repository.
  2. Dropbox official documentation : https://www.dropbox.com/developers-v1/core/start/android
  3. Dropbox application console : https://www.dropbox.com/developers/apps
  4. Stackoverflow example to upload image on Dropbox : https://stackoverflow.com/questions/10827371/upload-photo-to-dropbox-by-android

Face detection in Phimpme Android’s Camera

The Phimpme Android application comes with a well-featured camera which offers almost all the functionality an advanced camera user looks for. It provides a wide range of scene modes and can also detect faces using the front or the back camera of the device. In this tutorial, I discuss how we achieved the face detection functionality in Phimpme.

In the Phimpme application, there is an option in the settings to enable face detection, as depicted in the screenshot below. After enabling it, the camera starts detecting faces and draws rectangular boxes around the faces it detects.

I will explain how to achieve this step by step, using some code snippets.

Step 1

First, we have to check whether the device supports the face detection functionality, to avoid unnecessary application crashes, using Android's Camera.Parameters class.
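A minimal sketch of that check; getMaxNumDetectedFaces() is part of Android's Camera.Parameters, while the surrounding wiring is assumed:

// Face detection is supported if the camera can detect at least one face.
Camera.Parameters params = camera.getParameters();
boolean supportsFaceDetection = params.getMaxNumDetectedFaces() > 0;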

After the check, we create a new class named MyFaceDetectionListener which implements the face detection listener interface (CameraController.FaceDetectionListener in Phimpme's camera controller). The class overrides the onFaceDetection function, which receives the array of detected faces as its parameter.

class MyFaceDetectionListener implements CameraController.FaceDetectionListener {
    @Override
    public void onFaceDetection(CameraController.Face[] faces) {
        faces_detected = new CameraController.Face[faces.length];
        System.arraycopy(faces, 0, faces_detected, 0, faces.length);
    }
}

Step 2

After creating this class, we need to open the camera so we can set the face detection listener on it. This can be done with the snippet below:

camera = Camera.open(cameraId);

We can open either the front or the back camera by simply changing the cameraId: setting it to 1 opens the front camera, and setting it to 0 opens the back camera. After this, we set the face detection listener on the camera, using the snippet below:

mCamera.setFaceDetectionListener(fDetectionListener);
mCamera.startFaceDetection();

The setFaceDetectionListener function takes an object of the class we created in step 1 as its parameter, and startFaceDetection is Android's predefined function for starting the face detection. The listener object can be created and initialised with the snippet below:

MyFaceDetectionListener fDetectionListener = new MyFaceDetectionListener();

After we have set the detection listener on the camera, the overridden onFaceDetection function is called as soon as a face is detected. But how does the user know whether a face has been detected? For this, we draw a rectangular box roughly the size of the detected face, using the snippet below:

int l = faces[i].rect.left;
int r = faces[i].rect.right;
int t = faces[i].rect.top;
int b = faces[i].rect.bottom;
Rect uRect = new Rect(l, t, r, b);
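The drawing itself is not shown above; here is a minimal sketch of painting that rectangle in a camera overlay view's onDraw, with names assumed:

// Draw the face rectangle on a transparent overlay view.
// Note: Camera.Face coordinates are in the camera's (-1000, 1000) space
// and may need mapping to view coordinates before drawing.
Paint paint = new Paint();
paint.setStyle(Paint.Style.STROKE);
paint.setStrokeWidth(3f);
canvas.drawRect(uRect, paint);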

To get the full source code, please check out the Phimpme Android github repository.

Resources

  1. Phimpme Android Github repository
  2. Complete tutorial on face detection in Android
  3. Leafpic github repository
  4. Android Camera API Google developer page