Integration of Flickr in Phimpme Android

Flickr is a widely used photo-sharing website, so we added functionality in Phimpme Android to share images to Flickr. Using this feature, the user can upload a picture to Flickr with a single click, without leaving the application. The way we implemented Flickr support in the Phimpme Android application is shown below.

Setting up Flickr API

Flickr provides only an official web API, so we used the flickrj-android library, which is built on top of it. First of all, we created an application in the Flickr App Garden and requested API tokens for it. We noted down the resultant API key and secret key. These keys are necessary for authenticating Flickr in the application.

Steps for Authentication

Establishing the authentication with Flickr requires the following steps as per the official documentation.

  • Get a request token
  • Get user authorization
  • Exchange the request token for an access token
  • Use the access token with up.flickr.com/services/upload/ to upload pictures

Authorizing the user of Flickr in Phimpme

First, a request token has to be generated. The flickrj library provides this through a Flickr object, which is created using the API key and secret key. The request token is used for directing the user to a page where he/she manually authorizes the Phimpme application to share pictures to his/her Flickr account. In the Phimpme Android application, we created the Flickr object as shown below.

Flickr f = new Flickr(API_KEY, API_SEC, new REST());
OAuthToken oauthToken = f.getOAuthInterface().getRequestToken(
        OAUTH_CALLBACK_URI.toString());
saveTokenSecret(oauthToken.getOauthTokenSecret());
URL oauthUrl = f.getOAuthInterface().buildAuthenticationUrl(
        Permission.WRITE, oauthToken);

Getting Access Token

After the manual authorization of the application, Flickr returns the OAuth token and OAuth verifier. These have to be used for making another request to Flickr to get the “Access Token”, which gives us access to upload pictures to the Flickr account. For implementing this, the intent data is collected when the application reopens after manual authorization. This data contains the OAuth token and OAuth verifier, which are passed to the access-token functions present in the flickrj library. In Phimpme Android, we implemented this in the following way.

Uri uri = intent.getData();
String query = uri.getQuery();
String[] data = query.split("&");
if (data != null && data.length == 2) {
    String oauthToken = data[0].substring(data[0].indexOf("=") + 1);
    String oauthVerifier = data[1].substring(data[1].indexOf("=") + 1);
    OAuth oauth = getOAuthToken();
    if (oauth != null && oauth.getToken() != null && oauth.getToken().getOauthTokenSecret() != null) {
        String oauthTokenSecret = oauth.getToken().getOauthTokenSecret();
        Flickr f = new Flickr(API_KEY, API_SEC, new REST());
        OAuthInterface oauthApi = f.getOAuthInterface();
        return oauthApi.getAccessToken(oauthToken, oauthTokenSecret, oauthVerifier);
    }
}

Uploading Image to Flickr

Once we got the Access Token, we used it to create a new Flickr object. The upload function present in the API requires three arguments – a file name, an input stream and the upload metadata. The input stream is generated using the ContentResolver present in Android. The code showing how we implemented uploading to Flickr is shown below.

Flickr f = new Flickr(API_KEY, API_SEC, new REST());
RequestContext requestContext = RequestContext.getRequestContext();
OAuth auth = new OAuth();
auth.setToken(new OAuthToken(token, secret));
requestContext.setOAuth(auth);
UploadMetaData uploadMetaData = new UploadMetaData();
uploadMetaData.setTitle(fileName);    // fileName is the name of your image file
InputStream is = getContentResolver().openInputStream(Uri.fromFile(file));    // file is your File object
f.getUploader().upload(fileName, is, uploadMetaData);

The full implementation, using AsyncTasks and other well-optimized functions, is available in our Phimpme Android project repo on GitHub.
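
As a rough illustration of that structure, the upload call above can be wrapped in an AsyncTask so the network operation stays off the UI thread. The class and callbacks below are only a sketch, not the actual code from the repository:

private class UploadToFlickrTask extends AsyncTask<Void, Void, Boolean> {
    @Override
    protected Boolean doInBackground(Void... params) {
        try {
            // the upload code shown above goes here
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    @Override
    protected void onPostExecute(Boolean success) {
        // notify the user whether the upload succeeded
    }
}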


Using OpenCV for Native Image Processing in Phimpme Android

OpenCV is a widely used open-source image processing library. After the integration of the OpenCV Android SDK in the Phimpme Android application, the image processing functions can be written either in the Java part or in the native part. Taking the runtime of the functions into consideration, we used native functions for image processing in the Phimpme application.

We didn’t write the whole application in native code; we just called the native functions on the Java OpenCV Mat object. Mat is short for matrix in OpenCV. The image on which we perform image processing operations in the Phimpme Android application is stored as a Mat object in OpenCV.

Creating a Java OpenCV Mat object

The OpenCV Mat object is the same whether we use it from Java or C++, so in Phimpme we have a common OpenCV object accessible from both the Java part and the native part of the application. We have a Java Bitmap object on which we have to perform image processing operations using OpenCV. For doing that, we need to create a Java Mat object and pass its address to native. The Mat object can be created using the bitmapToMat() function present in the OpenCV library. The implementation is shown below.

Mat inputMat = new Mat(bitmap.getHeight(), bitmap.getWidth(), CvType.CV_8UC3);  // rows = height, cols = width
Utils.bitmapToMat(bitmap, inputMat);

“bitmap” is the Java Bitmap object which holds the image to be processed. The third argument in the Mat constructor indicates that the Mat should be of type 8UC3, i.e. three color channels with 8-bit depth. With the second line above, the bitmap gets saved as the OpenCV Mat object.

Passing Mat Object to Native

We have the OpenCV Mat object in memory. If we pass the whole object again to native, the same object gets copied from one memory location to another. In the Phimpme application, instead of doing that, we just get the memory location of the current OpenCV Mat object and pass it to native. As we have the address of the Mat, we can access it directly from native functions. The implementation of this is shown below.

Native Function Definition:

private static native void nativeApplyFilter(long inpAddr);

Native Function call:

nativeApplyFilter(inputMat.getNativeObjAddr());

Getting Native Mat Object to Java

We can follow similar steps for getting the Mat back from the native part after processing. In the Java part of Phimpme, we created an outputMat OpenCV Mat object before passing the inputMat to native for processing. So we have inputMat and outputMat in memory before we send them to native. We get the memory locations of both Mat objects and pass those addresses to the native part. After the processing is done, the data gets written to the same memory locations and can be accessed in Java. The above functions can be modified and rewritten for this purpose as shown below.

Native Function Definition:

private static native void nativeApplyFilter(long inpAddr, long outAddr );

Native Function call:

nativeApplyFilter(inputMat.getNativeObjAddr(), outputMat.getNativeObjAddr());
inputMat.release();

if (outputMat != null) {
    Bitmap outbit = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), bitmap.getConfig());
    Utils.matToBitmap(outputMat, outbit);
    outputMat.release();
    return outbit;
}

Native operations on Mat using OpenCV

The JNI function in the native part of the Phimpme application receives the memory locations of both OpenCV Mat objects. As we have the addresses, we can create Mat references pointing to those memory locations and pass them to the processing functions, just like with any other OpenCV function. This implementation is shown below.

#include <jni.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include "enhance.h"
using namespace std;
using namespace cv;

JNIEXPORT void JNICALL
Java_org_fossasia_phimpme_editor_editimage_filter_PhotoProcessing_nativeApplyFilter(JNIEnv *env, jclass type, jlong inpAddr,jlong outAddr) {
       Mat &src = *(Mat*)inpAddr;
       Mat &dst = *(Mat*)outAddr;
       applyFilter(src, dst);
}

The applyFilter() function can contain any image processing operation. The implementation of an edge detection function using OpenCV in Phimpme Android is shown below. We were able to do this in very few lines, which would otherwise have required a large amount of code.

Mat grey,detected_edges;
cvtColor( src, grey, CV_BGR2GRAY );
blur( grey, detected_edges, Size(3,3) );
dst.create( grey.size(), grey.type() );
Canny( detected_edges, detected_edges, 70, 200, 3 );
dst = Scalar::all(0);
detected_edges.copyTo( dst, detected_edges);

The general structure of an OpenCV function, which is necessary for implementing custom image processing operations, can be understood by referring to the brightness adjustment function below.

int x, y;
int bright = 50;    // example brightness offset; in Phimpme this value comes from the UI
cvtColor(src, src, CV_BGRA2BGR);
dst = Mat::zeros( src.size(), src.type() );
for (y = 0; y < src.rows; y++) {
   for (x = 0; x < src.cols; x++) {
       dst.at<Vec3b>(y, x)[0] =
                  saturate_cast<uchar>((src.at<Vec3b>(y, x)[0]) + bright);
       dst.at<Vec3b>(y, x)[1] =
               saturate_cast<uchar>((src.at<Vec3b>(y, x)[1]) + bright);
       dst.at<Vec3b>(y, x)[2] =
               saturate_cast<uchar>((src.at<Vec3b>(y, x)[2]) + bright);
   }
}

Adding Sentry Integration in Open Event Orga Android App

Sentry is a service that allows you to track events, issues and crashes in your apps and provide deep insights with context about them. This blog post will discuss how we implemented it in Open Event Orga App (Github Repo).

Configuration

First, we need to include the gradle dependency in build.gradle:

compile 'io.sentry:sentry-android:1.3.0'
Now, our project uses proguard for release builds, which obfuscates the code and removes unnecessary classes to shrink the app. For the crash events to make sense in the Sentry dashboard, the proguard mappings need to be uploaded every time a release build is generated. Thankfully, this is automatically handled by Sentry through its gradle plugin; to include it, we add this in our project level build.gradle in the dependencies block:

classpath 'io.sentry:sentry-android-gradle-plugin:1.3.0'

 

And then apply the plugin by writing this at the top of our app/build.gradle:

apply plugin: 'io.sentry.android.gradle'

 

And then configure the options for automatic proguard configuration and mappings upload

sentry {
   // Disables or enables the automatic configuration of proguard
   // for Sentry.  This injects a default config for proguard so
   // you don't need to do it manually.
   autoProguardConfig true

   // Enables or disables the automatic upload of mapping files
   // during a build.  If you disable this you'll need to manually
   // upload the mapping files with sentry-cli when you do a release.
   autoUpload false
}

 

We have set autoUpload to false as we wanted Sentry to be an optional dependency of the project. If we turn it on, the build will fail when Sentry can’t find the configuration, which we don’t want to happen.

Now, as we want Sentry to be configurable, we need to set the Sentry DSN as one of the configuration options. The easiest way to externalize configuration is to use environment variables. Other methods are given in the official configuration documentation: https://docs.sentry.io/clients/java/config/

Lastly, for proguard configuration, we also need 3 other config options, namely:

defaults.project=your-project
defaults.org=your-organisation
auth.token=your-auth-token

 

For getting the auth token, you need to go to https://sentry.io/api/

Now, the configuration is complete and we’ll move to the code.

Implementation

First, we need to initialise the Sentry instance for all further actions to be valid. This is to be done when the app starts, so we add it in the onCreate method of the Application class of our project by calling this method:

// Sentry DSN must be defined as environment variable
// https://docs.sentry.io/clients/java/config/#setting-the-dsn-data-source-name
Sentry.init(new AndroidSentryClientFactory(getApplicationContext()));
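
For context, a minimal sketch of what this initialisation looks like inside an Application subclass (the class name here is illustrative, not necessarily the one used in the project):

import android.app.Application;

import io.sentry.Sentry;
import io.sentry.android.AndroidSentryClientFactory;

public class OrgaApplication extends Application {

    @Override
    public void onCreate() {
        super.onCreate();
        // Initialise Sentry once at app start; the DSN is picked up from
        // the environment/configuration as described above
        Sentry.init(new AndroidSentryClientFactory(getApplicationContext()));
    }
}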

 

Now, we’re all set to send crash reports and other events to our Sentry server. This would have required a lot of refactoring if we didn’t use Timber for logging. We are using the default debug tree for debug builds and a custom Timber tree for release builds:

if (BuildConfig.DEBUG)
   Timber.plant(new Timber.DebugTree());
else
   Timber.plant(new ReleaseLogTree());

 

ReleaseLogTree extends Timber.Tree, an abstract class that requires you to override this function:

@Override
protected void log(int priority, String tag, String message, Throwable throwable) {

 }

 

This function is called whenever there is a log event through Timber, and this is where we send reports through Sentry. First, we return from the function if the event priority is debug or verbose:

if(priority == Log.DEBUG || priority == Log.VERBOSE)
   return;

 

If the event is of info priority, we attach it to the Sentry breadcrumbs:

if (priority == Log.INFO) {
    Sentry.getContext().recordBreadcrumb(new BreadcrumbBuilder()
          .setMessage(message)
          .build());
}

 

Breadcrumbs are stored and only sent along with an event. For us, an event is a crash or something we explicitly want logged to the dashboard when the user does it. Since info events are just user interactions throughout the app, we don’t want to crowd the issue dashboard with them. However, we do want to understand what the user was doing before a crash happened, and that is why we use breadcrumbs: they store the events and are only sent attached to a crash event. Also, only the last 100 breadcrumbs are stored, making it easier to parse through them.

Now, if there is an error event, we want to capture it and send it to the server:

if (priority == Log.ERROR) {
   if (throwable == null)
       Sentry.capture(message);
   else
       Sentry.capture(throwable);
}

 

Lastly, we want to set the Sentry context to be user specific so that we can easily track and filter through issues based on the user. For that, we create a new class ContextManager with two methods:

  • setOrganiser: to be called at login
  • clearOrganiser: to be called at logout

public void setOrganiser(User user) {
   Map<String, Object> userData = new HashMap<>();
   userData.put("details", user.getUserDetail());
   userData.put("last_access_time", user.getLastAccessTime());
   userData.put("sign_up_time", user.getSignupTime());

   Timber.i("User logged in - %s", user);
   Sentry.getContext().setUser(
       new UserBuilder()
       .setEmail(user.getEmail())
       .setId(String.valueOf(user.getId()))
       .setData(userData)
       .build()
   );
}

 

In this method, we put all the information about the user in the context so that every action from here on is attached to this user.

public void clearOrganiser() {
   Sentry.clearContext();
}

 

And here, we just clear the sentry context.

This concludes the implementation of our Sentry client. Now all Timber log events will be routed through Sentry and the appropriate events will appear on the Sentry dashboard. To read more about Sentry features and Timber, visit these links:

Sentry Java Documentation (check Android section)

https://docs.sentry.io/clients/java/

Timber Library

https://github.com/JakeWharton/timber


Integrating OpenCV for Native Image Processing in Phimpme Android

OpenCV is an open source computer vision library written in C and C++. It has many pre-built image processing functions for all kinds of purposes. The filters and image enhancing functions which we implemented in Phimpme are not fully optimized. So we decided to integrate the Android SDK of OpenCV in the Phimpme Android application.

For integrating OpenCV in the application, the files inside the OpenCV Android SDK have to be properly placed inside the project. The build and make files of the project have to be configured correspondingly.

Here, in this post, I will mention how we fully integrated OpenCV Android SDK and configured the Phimpme project.

Setting Up

First, we downloaded the OpenCV-Android-SDK zip from the official release page here.

We extracted the OpenCV-Android-SDK.zip file and navigated to the sdk folder inside it, which contains three directories: etc, java and native.

All the build files are present inside the native directory and necessarily have to be copied into our project. The java directory inside the sdk folder is an external gradle module which has to be imported and added as a dependency to our project.

So we copied all the files from the jni directory of the sdk folder inside OpenCV-Android-SDK to the jni directory of the Phimpme project inside main. Similarly, we copied the 3rdparty directory into the main folder of the Phimpme project. The libs folder should also be copied into the main folder of Phimpme, but it has to be renamed to jniLibs.

Adding Module to Project

Now we headed to Android Studio and right-clicked on the app in the project view to add a new module. In that window, we selected “Import Gradle Project” and browsed to the java folder inside the sdk folder of OpenCV-Android-SDK.

Now we opened the app-level build.gradle file (not the project-level one) and added the following line as a dependency:

  compile project(':openCVLibrary320')

Now open the build.gradle inside the openCVLibrary320 external module and change the platform tools version and SDK versions to match those of the project.

Configuring Native Build Files

We removed all files related to cmake because we use ndk-build to compile the native code. After removing the cmake files, we added the following lines to the Application.mk file:

APP_OPTIM := release
APP_ABI := all
APP_STL := gnustl_static
APP_CPPFLAGS := -frtti -fexceptions
APP_PLATFORM := android-25

We then added the following lines to the top of the Android.mk file:

OPENCV_INSTALL_MODULES:=on
OPENCV_CAMERA_MODULES:=on
OPENCV_LIB_TYPE := STATIC
include $(LOCAL_PATH)/OpenCV.mk

The third line here is a flag indicating that the library has to be statically linked, so there is no need for the external OpenCV Manager application.

Finally, find this line

OPENCV_LIBS_DIR:=$(OPENCV_THIS_DIR)/../libs/$(OPENCV_TARGET_ARCH_ABI)

in OpenCV.mk file and replace it with

OPENCV_LIBS_DIR:=$(OPENCV_THIS_DIR)/../jniLibs/$(OPENCV_TARGET_ARCH_ABI)

And finally, add the following lines to the Java part so that OpenCV is initialized and linked to the application:

static {
    if (!OpenCVLoader.initDebug()) {
        Log.e(TAG + " - Error", "Unable to load OpenCV");
    } else {
        // the module name is the LOCAL_MODULE defined in Android.mk
        System.loadLibrary("modulename_present_in_Android.mk");
    }
}

With this, the integration of OpenCV and the configuration of the project is complete. Remove the build directories, i.e. the .build and .externalNativeBuild directories inside the app folder, for a fresh build so that you don’t face any stale build errors.


Implementing Attendee Detail BottomSheet UI in Open Event Orga App

In Open Event Orga App (Github Repo), we allow the option to check the attendee details before checking him/her in or out. Originally, a dialog was shown with the attendee details, which did not contain much information about the attendee, ticket or order. The disadvantage of such a design was also that it was tied to only one view; we couldn’t show the check in dialog elsewhere in the app, such as during QR scanning, so we had to switch back to the attendee view to show it. We decided to create a reusable detached component in the form of a bottom sheet containing all the required information. This blog will outline the procedure we employed to design the bottom sheet UI.

The attendee check in dialog looked like this:

So, first we decide what we need to show on the check in bottom sheet:

  • Attendee Name
  • Attendee Email
  • Attendee Check In Status
  • Order Status ( Completed, Pending, etc )
  • Ticket Type ( Free, Paid, Donation )
  • Ticket Price
  • Order Date
  • Invoice Number
  • Order ‘Paid Via’

As we are using Android Data Binding in our layout, we’ll start by including the variables required in the layout. Besides the obvious attendee variable, we need a presenter instance to handle the check in and check out of the attendee, and the DateUtils class to parse the order date. Additionally, to handle the visibility of views, we need to include the View class too:

<data>
   <import type="org.fossasia.openevent.app.utils.DateUtils" />
   <import type="android.view.View" />

   <variable
       name="presenter"
       type="org.fossasia.openevent.app.event.checkin.contract.IAttendeeCheckInPresenter" />

   <variable
       name="checkinAttendee"
       type="org.fossasia.openevent.app.data.models.Attendee" />
</data>

 

Then, we make the root layout a CoordinatorLayout and add a NestedScrollView inside it, which contains a vertical LinearLayout. This LinearLayout will contain our fields.

Note: For brevity, I’ll skip most of the layout attributes from the blog and only show the ones that correspond to the text

Firstly, we show the attendee name:

<TextView
   style="@style/TextAppearance.AppCompat.Headline"
   android:text='@{checkinAttendee.firstName + " " + checkinAttendee.lastName }'
   tools:text="Name" />

 

The perks of using data binding can be seen here, as we are using string concatenation in the layout itself. Furthermore, data binding also handles null checks for us if we add a question mark at the end of the variable name ( checkinAttendee.firstName? ).

But our server ensures that both these fields are not null, so we skip that part.

Next up, we display the attendee email

<TextView
   android:text="@{ checkinAttendee.email }"
   tools:text="xyz@example.com" />

 

And then the check in status of the attendee

<TextView
   android:text="@{ checkinAttendee.checkedIn ? @string/checked_in : @string/checked_out }"
   android:textColor="@{ checkinAttendee.checkedIn ? @color/light_green_500 : @color/red_500 }"
   tools:text="CHECKED IN" />

 

Notice that we dynamically change the color and text based on the check in status of the attendee

Now we begin showing the fields with icons to their left. You could use Compound Drawables to achieve this effect, but we use vector drawables, which are incompatible with compound drawables on older versions of Android, so we use a horizontal LinearLayout instead.

The first field is the order status, denoting whether the order is completed or in a transient state:

<LinearLayout android:orientation="horizontal">

   <ImageView app:srcCompat="@drawable/ic_transfer" />
   <TextView android:text="@{ checkinAttendee.order.status }" />
</LinearLayout>

 

Now, again for keeping the snippets relevant, I’ll skip the icon portion and only show the text binding from now on.

Next, we include the type of ticket the attendee has. There are 3 types of tickets supported by the Open Event API – free, paid and donation:

<TextView
   android:text="@{ checkinAttendee.ticket.type }"  />

 

Next, we want to show the price of the ticket, but only when the ticket is of paid type.

I’ll include the previously omitted LinearLayout part in this snippet because it is the view we control to hide or show the field

<LinearLayout
   android:visibility='@{ checkinAttendee.ticket.type.equalsIgnoreCase("paid") ? View.VISIBLE : View.GONE }'>

   <ImageView app:srcCompat="@drawable/ic_coin" />
   <TextView
       android:text='@{ "$" + checkinAttendee.ticket.price }'
       tools:text="3.78" />
</LinearLayout>

 

As you can see, we are showing this layout only if the ticket type equals paid

The next part is about showing the date on which the order took place

<TextView
   android:text="@{ DateUtils.formatDateWithDefault(DateUtils.FORMAT_DAY_COMPLETE, checkinAttendee.order.completedAt) }" />

 

Here we are using an internal DateUtils method to format the ISO 8601 date present in the order object into a complete date-time representation.

Now, we show the invoice number of the order

<TextView
   android:text="@{ checkinAttendee.order.invoiceNumber }" />

 

Lastly, we want to show what the ticket was paid via:

<LinearLayout
   android:visibility='@{ checkinAttendee.order.paidVia.equalsIgnoreCase("free") ? View.GONE : View.VISIBLE }'>

   <ImageView app:srcCompat="@drawable/ic_ray" />
   <TextView  android:text="@{ checkinAttendee.order.paidVia }" />
</LinearLayout>

 

Notice that here too we are controlling the visibility of the layout container and only showing it if the ticket was not free.

This ends our vertical LinearLayout showing the fields of the attendee detail. Now, we add a floating action button to toggle the check in status of the attendee:

<FrameLayout
   android:layout_gravity="top|end">

   <android.support.design.widget.FloatingActionButton
       android:layout_gravity="center"
       android:onClick="@{() -> presenter.toggleCheckIn() }"
       app:backgroundTint="@{ checkinAttendee.checkedIn ? @color/red_500 : @color/light_green_500 }"
       app:srcCompat="@{ checkinAttendee.checkedIn ? @drawable/ic_checkout : @drawable/ic_checkin }"
       app:tint="@android:color/white" />

   <ProgressBar
       android:layout_gravity="center" />

</FrameLayout>

 

We have used a FrameLayout to wrap the FAB and progress bar together at the top end of the bottom sheet. The progress bar shows the indeterminate progress of toggling the attendee status. You can see the click binding on the FAB triggering the presenter method toggleCheckIn(), and how the background color and icon change according to the check in status of the attendee.

This wraps up our layout design. Now we just have to create a BottomSheetDialogFragment, inflate this layout in it and bind the attendee variable, and we are all set; a rough sketch of such a fragment follows below.
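
A minimal sketch of what that fragment could look like, assuming the layout file is named bottomsheet_attendee_check_in.xml (the fragment, binding and layout names here are illustrative, not the actual ones from the project):

import android.databinding.DataBindingUtil;
import android.os.Bundle;
import android.support.design.widget.BottomSheetDialogFragment;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;

public class AttendeeCheckInFragment extends BottomSheetDialogFragment {

    private Attendee checkinAttendee;
    private IAttendeeCheckInPresenter presenter;

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        // Inflate the layout via its generated binding class
        BottomsheetAttendeeCheckInBinding binding = DataBindingUtil.inflate(
                inflater, R.layout.bottomsheet_attendee_check_in, container, false);
        // Bind the variables declared in the <data> block of the layout
        binding.setCheckinAttendee(checkinAttendee);
        binding.setPresenter(presenter);
        return binding.getRoot();
    }
}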

To learn more about bottom sheet and android data binding, please refer to these links:


Invalidating user login using JWT in Open Event Orga App

User authentication is an essential part of Open Event Orga App (Github Repo), which allows an organizer to log in and perform actions on the events he/she organizes. The backend for the application, Open Event Orga Server, sends an authentication token on successful login, and all subsequent privileged API requests must include this token. The token is a JWT (JSON Web Token) which includes certain information about the user, such as an identifier, information about when the token becomes valid and when it expires, and a signature to verify whether it was tampered with.

Parsing the Token

Our job was to parse the token to find two fields:

  • Identifier of user
  • Expiry time of the token

We stored the token in our shared preference file and loaded it from there for any subsequent requests. But the token expires after 24 hours, and we needed our login model to clear it once it had expired and show the login activity instead.

To do this, we needed to parse the JWT and compare the timestamp stored in the exp field with the current timestamp to determine if the token is expired. The first step in the process was to parse the token, which is essentially a Base64 encoded JSON string with sections separated by periods. The sections are as follows:

  • Header ( contains information about the algorithm used to encode the JWT, etc. )
  • Payload ( the data in the JWT – exp, iat, nbf, identity, etc. )
  • Signature ( verification signature of the JWT )

We were interested in the payload, and for getting the JSON string from the token, we could have used Android’s Base64 class to decode it. But we wanted to unit test all the util functions, and that is why we opted for a custom Base64 class used only for decoding our token.

So, first we split the token by the periods, decoded each part and stored it in a SparseArrayCompat:

public static SparseArrayCompat<String> decode(String token) {
   SparseArrayCompat<String> decoded = new SparseArrayCompat<>(2);

   String[] split = token.split("\\.");
   decoded.append(0, getJson(split[0]));
   decoded.append(1, getJson(split[1]));

   return decoded;
}

 

The getJson function primarily decodes the Base64 string:

private static String getJson(String strEncoded) {
   byte[] decodedBytes = Base64Utils.decode(strEncoded);
   return new String(decodedBytes);
}

The decoded information was stored in this way

0={"alg":"HS256","typ":"JWT"},  1={"nbf":1495745400,"iat":1495745400,"exp":1495745800,"identity":344}

Extracting Information

Next, we create a function to get the expiry timestamp from the token. We could have used GSON or Jackson for the task, but we did not want to map the fields into any object. So we simply used the JSONObject class which Android provides. It took 5 ms on average to parse the JSON instead of the 150 ms taken by GSON:

public static long getExpiry(String token) throws JSONException {
   SparseArrayCompat<String> decoded = decode(token);

   // We are using JSONObject instead of GSON as it takes about 5 ms instead of 150 ms taken by GSON
   return Long.parseLong(new JSONObject(decoded.get(1)).get("exp").toString());
}

 

Next, we wanted to get the ID of the user from the token to determine if a new user is logging in or an old one, so that we can clear the database for a new user:

public static int getIdentity(String token) throws JSONException {
   SparseArrayCompat<String> decoded = decode(token);

   return Integer.parseInt(new JSONObject(decoded.get(1)).get("identity").toString());
}

Validating the token

After this, we needed to create a function that tells whether a stored token is expired or not. With all the right functions in place, it was just a matter of comparing the current time with the stored timestamp:

public static boolean isExpired(String token) {
   long expiry;

   try {
       expiry = getExpiry(token);
   } catch (JSONException jse) {
       return true;
   }

   return System.currentTimeMillis()/1000 >= expiry;
}

 

Since the token provides the timestamp from epoch in seconds, we needed to divide the current time in milliseconds by 1000; the function returns true if the current timestamp is greater than or equal to the expiry time of the token.

After writing a few unit tests for both functions (one is sketched below), we just needed to plug them into our login model at the time of authentication.
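
A minimal sketch of how such a unit test might look, assuming JUnit on the classpath; the token-building helper below is purely illustrative and not from the project:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.Base64;

import org.junit.Test;

public class JWTUtilsTest {

    // Hypothetical helper: builds a structurally valid JWT with the given expiry
    private String tokenWithExpiry(long exp) {
        Base64.Encoder encoder = Base64.getUrlEncoder().withoutPadding();
        String header = encoder.encodeToString("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes());
        String payload = encoder.encodeToString(("{\"exp\":" + exp + ",\"identity\":344}").getBytes());
        return header + "." + payload + ".fake-signature";
    }

    @Test
    public void shouldDetectTokenExpiry() {
        long now = System.currentTimeMillis() / 1000;

        assertTrue(JWTUtils.isExpired(tokenWithExpiry(now - 100)));
        assertFalse(JWTUtils.isExpired(tokenWithExpiry(now + 100)));
    }
}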

At the start of the application, we use this function to check if a user is logged in or not:

public boolean isLoggedIn() {
   String token = utilModel.getToken();

   return token != null && !JWTUtils.isExpired(token);
}

 

So, if there is no token or the token has expired, we do not automatically log in the user, and show the login screen instead.

Implementing login

The next tasks were:

  • Request the server to log in
  • Store the acquired token
  • Delete the database if it is a new user

Before implementing the above logic, we needed a function to determine whether the person logging in is a previous user or a new one. For doing so, we first load the saved user from our database; if the query is empty, it is surely a new user logging in, so we return false. If there is a user in the database, we match its ID with the logged in user’s ID:

public Single<Boolean> isPreviousUser(String token) {
   return databaseRepository.getAllItems(User.class)
       .first(EMPTY)
       .map(user -> !user.equals(EMPTY) && user.getId() == JWTUtils.getIdentity(token));
}

 

We have added a default user EMPTY in the first operator so that RxJava returns it if there are no users in the database, and then we simply map the user to a boolean denoting whether they are the same or different, using the EMPTY user and the getIdentity method from JWTUtils.

Finally, we use all this information to implement our self-contained login request:

eventService
   .login(new Login(username, password))
   .flatMapSingle(loginResponse -> {
       String token = loginResponse.getAccessToken();
       utilModel.saveToken(token);

       return isPreviousUser(token);
   })
   .flatMapCompletable(isPrevious -> {
       if (!isPrevious)
           return utilModel.deleteDatabase();

       return Completable.complete();
   });

 

Let’s see what is happening here. A request using the username and password is made to the server, which returns a login response containing a JWT; we store it for future use. Next, we flatMapSingle to the Single returned by the isPreviousUser method. And we finally clear the database if it is not a previous user.

Creating these self-contained models helps reduce complexity in the presenter or view layer, and all data is handled in one layer, making the presenter layer model agnostic.

To learn more about JWT and some of the Rx operators I mentioned here, please visit these links:


Implementing Text-to-Speech (TTS) in SUSI Android

Mobile assistants are designed to perform tasks that the user “commands” through a chat UI or speech. The Android OS already provides Text to Speech (TTS) and Speech to Text (STT) features, available from Android version 1.6 onward. In this blog post I will show how TTS is implemented in SUSI Android and how I fixed the ‘delay in speech response’ issue.

The TextToSpeech class controls the TTS engine. To use it, import it in the activity where you want the text to speech feature:

import android.speech.tts.TextToSpeech;

After importing the TextToSpeech class, we need to initialize it:

TextToSpeech tts = new TextToSpeech(this,this);

Here the first parameter is the Context and the other one is the listener. The listener is used to inform our app that the engine is ready to use. In order to be notified, we have to implement TextToSpeech.OnInitListener:

TextToSpeech.OnInitListener listener = new TextToSpeech.OnInitListener() {
    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS)
            tts.setLanguage(Locale.UK);    // set the default language
    }
};

If the status is success, TTS was initialized successfully and we can use it; otherwise we can’t. The setLanguage method sets the language in which we want the reply. Hence the engine can be initialized as:

TextToSpeech tts = new TextToSpeech(getApplicationContext(), listener);

One thing to remember when using TTS is that it runs on the main thread, so it may cause delays in the text to speech conversion or block the UI for a while. It is better to wrap the initialization as in the code below:

new Handler().post(new Runnable() {
    @Override
    public void run() {
        tts = new TextToSpeech(getApplicationContext(), listener);
    }
});

Now that our engine is ready to speak, we simply pass the string we want it to read:

tts.speak(textToRead, TextToSpeech.QUEUE_FLUSH, null, null);    // textToRead holds the reply string

But before calling tts.speak, it is important to check for the audio focus change request, because only one audio source can have focus at a time. You can check it using the code below.

private AudioManager.OnAudioFocusChangeListener afChangeListener =
        new AudioManager.OnAudioFocusChangeListener() {
            public void onAudioFocusChange(int focusChange) {
                // check for focus
            }
        };

OnAudioFocusChangeListener is called when the audio focus of the system changes, and according to the value of focusChange we either stop TTS or keep using it.

AudioManager audiofocus = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

audiofocus is an instance of the AudioManager class. We need it to call the requestAudioFocus method of the AudioManager class, which returns the status of the request for the audio focus change. This method requires three parameters: an instance of AudioManager.OnAudioFocusChangeListener, the stream type and a duration hint. Only if the request is granted can we use tts.speak:

int result = audiofocus.requestAudioFocus(afChangeListener, AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);

if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    tts.speak(textToRead, TextToSpeech.QUEUE_FLUSH, null, null);
}

We were continuously facing the ‘delay in speech response’ issue because the voiceReply method implementation was wrong. We were initializing TextToSpeech on each call of the voiceReply method, and since the onInit method runs on the main thread, it caused a delay in the voice response. So instead of initializing TTS each time, I used the instance already initialized when the activity was created:

String spoken = reply;

textToSpeech.speak(spoken, TextToSpeech.QUEUE_FLUSH, null, null);
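
Putting the pieces together, the fixed reply path looks roughly like this (a sketch assuming the field names used above, not the exact code from the repository):

// textToSpeech is the instance created once when the activity is created
private void voiceReply(String reply) {
    int result = audiofocus.requestAudioFocus(afChangeListener,
            AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
    if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
        textToSpeech.speak(reply, TextToSpeech.QUEUE_FLUSH, null, null);
    }
}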

You can also control how the engine reads text; for example, we can modify the pitch and the speech rate:

tts.setPitch((float)pitch);

tts.setSpeechRate((float)speed);


Passing Java Bitmap Object to Native for Image Processing and Getting it back in Phimpme Android

To perform any image processing operations on an image, we must have an image object in the native part, like the Bitmap object we have in Java. We cannot just pass the image bitmap object directly as an argument to a native function, because Bitmap is a Java object and C/C++ cannot interpret it directly as an image. So, here I’ll discuss a method to send a Java Bitmap object to the native part for performing image processing operations on it, and to get it back, which we implemented in the image editor of the Phimpme Android application.

Since C/C++ cannot interpret a Java Bitmap object, we have to extract the pixels of the bitmap and send them to the native part to create a native bitmap there.

In Phimpme, we used a “struct” data structure in C to represent the native bitmap. An image has three color channels: red, green and blue. An ARGB_8888 image also has an alpha (transparency) channel, but in Phimpme we considered only the three color channels, as manipulating them is enough to implement the edit functions used in the editor.

We defined a struct, typedef’d as NativeBitmap, with attributes corresponding to all three channels. We defined this in the nativebitmap.h header file in the jni/ folder of Phimpme so that we can include this header file in the C files that need the NativeBitmap struct.

#ifndef NATIVEBITMAP
#define NATIVEBITMAP

typedef struct {
  unsigned int width;
  unsigned int height;
  unsigned int redWidth;
  unsigned int redHeight;
  unsigned int greenWidth;
  unsigned int greenHeight;
  unsigned int blueWidth;
  unsigned int blueHeight;
  unsigned char* red;
  unsigned char* green;
  unsigned char* blue;
} NativeBitmap;

void deleteBitmap(NativeBitmap* bitmap);
int initBitmapMemory(NativeBitmap* bitmap, int width, int height);

#endif

As I explained in my previous post on the flow of native functions in Phimpme Android, we defined the native functions with the necessary arguments in the Java part of Phimpme. We needed the image bitmap to be sent to the native part of the application, so the argument to the function would naturally have been the Java Bitmap object. But, as mentioned earlier, C code cannot interpret this Java Bitmap object directly as an image. So we created a function in the Java part of the Phimpme application for getting the pixels from a Bitmap object. This function returns an array of integers corresponding to the pixels of a particular row of the image bitmap. An array of integers can be interpreted by C, so we can send it to the native part and create a receiving native function that builds the NativeBitmap.

We performed image processing operations like enhancing and applying filters on this NativeBitmap, and after the processing was done, we sent back the arrays of integers corresponding to the rows of the image bitmap and constructed the Java Bitmap object from them in Java.

Each integer in the array corresponds to the color value of a particular pixel of the image bitmap. We used this color value to get the red, green and blue values in the native part of the Phimpme application.

The Java implementation for getting pixels from the bitmap, and for sending them to and receiving them from native, is shown below.

private static void sendBitmapToNative(Bitmap bitmap) {
   int width = bitmap.getWidth();
   int height = bitmap.getHeight();
   nativeInitBitmap(width, height);
   int[] pixels = new int[width];
   for (int y = 0; y < height; y++) {
       bitmap.getPixels(pixels, 0, width, 0, y, width, 1);
                    // gets the pixels of the y'th row
       nativeSetBitmapRow(y, pixels);
   }
}

private static Bitmap getBitmapFromNative(Bitmap srcBitmap) {
   Bitmap bitmap = Bitmap.createBitmap(srcBitmap.getWidth(), srcBitmap.getHeight(), srcBitmap.getConfig());
   int width = bitmap.getWidth();
   int height = bitmap.getHeight();
   int[] pixels = new int[width];
   for (int y = 0; y < height; y++) {
       nativeGetBitmapRow(y, pixels);
       bitmap.setPixels(pixels, 0, width, 0, y, width, 1);
   }
   return bitmap;
}

The native functions declared in the Java part have to be linked to actual native functions, so functions with the proper JNI names have to be created in the main.c (JNI) file of the Phimpme application. We included nativebitmap.h in main.c so that we can use the NativeBitmap struct defined in nativebitmap.h.
The main functions which do the actual work of converting the integer arrays received from Java into the NativeBitmap, and converting it back into integer arrays of pixel color values, are present in nativebitmap.c. The main.c file acts as the Java Native Interface, which links the native functions and the Java functions in Phimpme Android.

After adding all the JNI functions, the main.c file looks as below.

main.c

static NativeBitmap bitmap;

int Java_org_fossasia_phimpme_PhotoProcessing_nativeInitBitmap(JNIEnv* env, jobject thiz, jint width, jint height) {
  return initBitmapMemory(&bitmap, width, height);  // function present in nativebitmap.c
}

void Java_org_fossasia_phimpme_PhotoProcessing_nativeSetBitmapRow(JNIEnv* env, jobject thiz, jint y, jintArray pixels) {
  int cpixels[bitmap.width];
  (*env)->GetIntArrayRegion(env, pixels, 0, bitmap.width, cpixels);
  setBitmapRowFromIntegers(&bitmap, (int)y, cpixels);
}

void Java_org_fossasia_phimpme_PhotoProcessing_nativeGetBitmapRow(JNIEnv* env, jobject thiz, jint y, jintArray pixels) {
  int cpixels[bitmap.width];
  getBitmapRowAsIntegers(&bitmap, (int)y, cpixels);
  (*env)->SetIntArrayRegion(env, pixels, 0, bitmap.width, cpixels);
                    // sending the bitmap row as output
}

We now reach the main part of the implementation, where we create the functions for storing the integer arrays of pixel color values in the NativeBitmap struct defined in nativebitmap.h. The declarations of all the functions are present in nativebitmap.h, so we included that header file in nativebitmap.c to be able to use the NativeBitmap struct in the functions. We implemented the functions defined in main.c, i.e. initializing the bitmap memory, setting a bitmap row from an integer array, getting a bitmap row, and a function for deleting the native bitmap from memory after the processing of the image is complete, in nativebitmap.c.

The implementation of nativebitmap.c after adding all the functions is shown below, with explanations added as comments wherever necessary.

void setBitmapRowFromIntegers(NativeBitmap* bitmap, int y, int* pixels) {
  // y is the number of the row (y'th row)
  // pixels points to an integer array holding the color values of the row
  unsigned int width = (*bitmap).width;
  register unsigned int i = (width*y) + width - 1;
                // absolute index of the pixel in the image bitmap
  register unsigned int x;
                // index of the pixel within this particular row of the image bitmap
  for (x = width; x--; i--) {
     // red(), green() and blue() are defined further below
     (*bitmap).red[i] = red(pixels[x]);
     (*bitmap).green[i] = green(pixels[x]);
     (*bitmap).blue[i] = blue(pixels[x]);
  }
}

void getBitmapRowAsIntegers(NativeBitmap* bitmap, int y, int* pixels) {
  unsigned int width = (*bitmap).width;
  register unsigned int i = (width*y) + width - 1;
  register unsigned int x;
  for (x = width; x--; i--) {
     // rgb() is defined further below
     pixels[x] = rgb((int)(*bitmap).red[i], (int)(*bitmap).green[i], (int)(*bitmap).blue[i]);
  }
}

The native bitmap has to be initialized before we set pixel values in it, and it has to be deleted from memory after the processing is complete. The functions for doing these tasks are given below; they should also be added to the nativebitmap.c file.

void deleteBitmap(NativeBitmap* bitmap) {
  // free up memory
  freeUnsignedCharArray(&(*bitmap).red);
  // do the same for the green and blue channels
}

int initBitmapMemory(NativeBitmap* bitmap, int width, int height) {
  deleteBitmap(bitmap);
             // if the NativeBitmap already holds some data, it gets removed
  (*bitmap).width = width;
  (*bitmap).height = height;
  int size = width*height;
  (*bitmap).redWidth = width;
  (*bitmap).redHeight = height;
             // assign memory to the red, green and blue arrays,
             // checking whether it succeeded at each step
  int resultCode = newUnsignedCharArray(size, &(*bitmap).red);
  if (resultCode != MEMORY_OK) return resultCode;
             // repeat the code given above for the green and blue channels
}

The following functions convert between the packed RGB color value of a pixel and its individual color components. Add them to the top of the nativebitmap.c file.

int rgb(int red, int green, int blue) {
  // builds the color value (int) from the red, green and blue values of a pixel
  return (0xFF << 24) | (red << 16) | (green << 8) | blue;
}

unsigned char red(int color) {
  // extracts the red value from the color value of a pixel
  return (unsigned char)((color >> 16) & 0xFF);
}

// for green, return ((color >> 8) & 0xFF)
// for blue, return (color & 0xFF)

Do not forget to declare the native functions in the Java part. Based on the JNI functions in main.c above, the declarations would look roughly like the sketch below (nativeDeleteBitmap frees the native memory after processing):
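
// Declarations in PhotoProcessing.java matching the JNI functions above;
// sketched from the JNI names, not copied from the repository.
private static native int nativeInitBitmap(int width, int height);
private static native void nativeSetBitmapRow(int y, int[] pixels);
private static native void nativeGetBitmapRow(int y, int[] pixels);
private static native void nativeDeleteBitmap();

Now that everything is set up, we can use all of this in PhotoProcessing.java in the following manner to send a Bitmap object to native, process it, and get it back: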

Bitmap input = someBitmap;
if (input != null) {
    sendBitmapToNative(input);

    // Do some native processing on the native bitmap struct
    // (discussed in the next post)

    Bitmap output = getBitmapFromNative(input);
    nativeDeleteBitmap();
}

Performing image processing operations on the NativeBitmap in the image editor of Phimpme, such as enhancing the image and applying filters, is discussed in the next posts.


Integration of SUSI AI in Twitter

We will be making a Susi messenger bot on Twitter. The messenger bot will tweet back to your tweets and reply instantly when you chat with it. Feel free to tweet to the already made SUSI AI account (mentioning @SusiAI1 in it), and follow it to have a personal chat.

Make a new account which you want to use as the bot account; you can sign up at https://www.twitter.com.

Prerequisites

You will need accounts on the following services, and a working Node.js setup:
1. Twitter
2. Github
3. Heroku

Setup your own Messenger Bot

1. Make a new app here to get the access token and other properties for our application. These properties will help us communicate with Twitter.

Click the “modify the app permissions” link, then select the Read, Write and Access direct messages option.

Don’t forget to click the update settings button at the bottom.

Click the Generate My Access Token and Token Secret button.

3. Create a new heroku app here.

This app will accept the requests from Twitter and the Susi API.

4. Create a config variable by switching to the settings page of your app. The name of your first config variable should be HEROKU_URL and its value is the URL of the heroku app you created.
 

The other config variables that need to be created, in order, are:

  i) Access token
  ii) Access token secret
  iii) Consumer key
  iv) Consumer secret
  
We can find the values of these variables by visiting our app’s page; the keys and access tokens tab has all of them.

  5. Let’s start with the code part of the integration of SUSI AI into Twitter. We will be using Node.js to achieve this integration.

First we need to require some packages:
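
A minimal version of this, assuming the twit and request npm packages are installed, would be:

var Twit = require('twit');
var request = require('request');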

Now using the Twit module, we need to authenticate our requests, by using our environment variables as set up in step 4:
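
The configuration keys come from the Twit documentation; the environment variable names below are assumptions and should match the config variables you set on Heroku in step 4:

var T = new Twit({
    consumer_key:        process.env.CONSUMER_KEY,
    consumer_secret:     process.env.CONSUMER_SECRET,
    access_token:        process.env.ACCESS_TOKEN,
    access_token_secret: process.env.ACCESS_TOKEN_SECRET
});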

Now let’s make a user stream:

var stream = T.stream('user');

We will be using the capabilities of this stream to catch the events of being tweeted at, being followed, or receiving a direct message:

stream.on('tweet', functionToBeCalledWhenTweeted);
stream.on('follow', functionToBeCalledWhenFollowed);
stream.on('direct_message', functionToBeCalledWhenDirectMessaged);

So, when a person tweets to our account like this:

We can catch it with the ‘tweet’ event and execute a set of instructions:

stream.on('tweet', tweetEvent);

function tweetEvent(eventMsg) {
    var replyto = eventMsg.in_reply_to_screen_name;

    // to store the message tweeted, excluding the '@SusiAI1' substring
    var text = eventMsg.text.substring(9);

    // to store the name of the tweeter
    var from = eventMsg.user.screen_name;

    // a timestamp suffix, appended so Twitter does not reject repeated
    // tweets with identical text as duplicates
    var date = ' ' + new Date().toLocaleTimeString();

    if (replyto === 'SusiAI1') {
        var queryUrl = 'http://api.asksusi.com/susi/chat.json?q=' + encodeURI(text);
        var message = '';
        request({
            url: queryUrl,
            json: true
        }, function (err, response, data) {
            if (!err && response.statusCode === 200) {
                // fetching the answer from the data object returned
                message = data.answers[0].actions[0].expression;
            } else {
                message = 'Oops, Looks like Susi is taking a break';
                console.log(err);
            }
            console.log(message);
            // If the message length is more than the tweet limit
            if (message.length > 140) {
                tweetIt('@' + from + ' Sorry, due to the tweet character limit, I have sent you a personal message. Check your inbox.' + date);
                sendMessage(from, message);
            } else {
                tweetIt('@' + from + ' ' + message + date);
            }
        });
    }
}
  • When a person follows this SUSI AI account, we can thank him/her by making use of the “follow” event. Also, we need to follow him/her back to enable a personal chat between Susi and that person (according to the rules of Twitter):
stream.on('follow', followed);

function followed(eventMsg) {
    console.log('Follow event!');
    var name = eventMsg.source.name;
    var screenName = eventMsg.source.screen_name;
    var user_id1 = eventMsg.source.id_str;

    // To follow back the person.
    T.post('friendships/create', {user_id: user_id1}, function(err, tweets, response) {
        if (err) {
            console.log("Couldn't follow back!");
        } else {
            tweetIt('@' + screenName + ' Thank you for following me! I followed you back, you can also direct message me now!');
            console.log("Followed back!");
        }
    });
}

When a person sends a personal message to this SUSI AI account, it can be handled through the ‘direct_message’ event:

stream.on('direct_message', reply);

function reply(directMsg) {
    console.log('You received a message!');
    // If it's our own bot messaging, ignore it, as this can lead to an
    // infinite loop when we answer a user.
    if (directMsg.direct_message.sender_screen_name === 'SusiAI1') {
        return;
    }

    // code to fetch the reply from the SUSI API

    // reply to the user with the SUSI API's message
    sendMessage(directMsg.direct_message.sender_screen_name, message);
}
  • The tweetIt and sendMessage functions can be seen in the repo code; a rough sketch of both follows below.
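
As a sketch, assuming the Twit instance T from above (the actual implementations live in the repository):

function tweetIt(txt) {
    // post a status update (a tweet)
    T.post('statuses/update', { status: txt }, function (err, data, response) {
        if (err) console.log('Something went wrong while tweeting!');
    });
}

function sendMessage(user, msg) {
    // send a direct message to the given screen name
    T.post('direct_messages/new', { screen_name: user, text: msg }, function (err, data, response) {
        if (err) console.log("Couldn't send the direct message!");
    });
}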

6. Connect the heroku app to the forked repository.

Connect the app to Github by selecting the name of this forked repository.

7. Deploy on development branch. If you intend to contribute, it is recommended to Enable Automatic Deploys.

Branch Deployment.

Successful Deployment.

  8. Visit your own personal account and tweet to this new bot account with your query, and enjoy a tweet back from the bot account! Also, you can enjoy personal chatting with Susi.

    Feel free to play around with the already made SUSI AI account on Twitter here. Follow it to have a personal chat with it.


Integration of Susi AI to Gitter

This blog post discusses the development of a Susi messenger bot on Gitter. It replies instantly to messages sent to it, using the Susi API. The Streaming API notifies us when a user messages the SUSI chat room, and the REST API helps us message back with a reply from the Susi API to the SUSI chat room.

Feel free to message to the already made SUSI AI account on Gitter and have a chat with it.

Prerequisites

  1. Basic knowledge about calling API’s and fetching data or posting data to the API.
  2. Node.js language.
  3. Github
  4. Heroku

Figure – Architecture for running SUSI AI on different messaging services.

This blog post will walk you through each of the steps required to integrate SUSI AI to Gitter:

Setup SUSI AI Bot on Gitter

  1. Create a Github or Twitter account with a username having ‘Susi’ as a substring, because this is the name that will be shown with the reply string we get from Susi AI.
  2. Now you need to sign in to Gitter with a Twitter or Github account from here.
  3. Create your community by visiting this page. After writing your community name press next, invite the people you want to be in this room and press next. You will be redirected to your community’s lobby. This lobby is the chat room to which we will deploy our SUSI AI.
  4. Now visit the Gitter developer page and press sign in on the top right. You will be redirected to your apps page. Copy the personal access token written there, as shown in this image (the area colored black will have your access token):

5. In a new tab in your browser, visit https://api.gitter.im/v1/rooms?access_token=YOUR_ACCESS_TOKEN, with YOUR_ACCESS_TOKEN replaced by the token we just copied.

A JSON object will be shown on the browser screen. You will see the value of the ‘name’ key as YOUR_COMMUNITY_NAME/Lobby. Copy the id of this chat room, as we will need it later. You can refer to the image below; your chat room id will be in the area colored black.

  6. Create a new heroku app here.

This app will accept the requests from Gitter and the Susi API.

  7. Set the config variables for this heroku app in the settings tab of your account. Set ROOM_ID to the id of the chat room and TOKEN to the personal access token we copied in steps 4 and 5.

These were the formalities to be done to have our chat bot account on Gitter.

  8. Let’s jump to the code part of how this integration is done:

To use the two config variables set in Heroku, we need these two lines in our Node js code:

var roomId = process.env.ROOM_ID;
var token = process.env.TOKEN;

We need to set up an options variable with our access token and room id in it:

// Setting the options variable to use it in the https request block
var options = {
    hostname: 'stream.gitter.im',
    port:     443,
    path:     '/v1/rooms/' + roomId + '/chatMessages',
    method:   'GET',
    headers:  {'Authorization': 'Bearer ' + token}
};

We will send this options variable when making a request so that Gitter lets our request through and notifies us when a client messages to our SUSI chat room.

res.on(‘data’) accepts a function which is called whenever a client messages our SUSI chat room:

// making a request to gitter stream API
var req = https.request(options, function(res) {
 res.on('data', function(chunk) {
    // do stuff
 }
}

req.on('error', function(e) {
 console.log('Hey something went wrong: ' + e.message);
});

req.end();

According to the docs of the Gitter REST API, the JSON data that we receive when a client sends a message to a chat room contains, among other fields, a text key holding the message body and a fromUser object describing the sender.

To get this response set in a variable, we can use this code snippet:

res.on('data', function(chunk) {
   var msg = chunk.toString();
   if(msg != " \n"){              // If message is not an empty message
     var jsonMsg = JSON.parse(msg);

Now we have this JSON response in the jsonMsg variable. To extract the client’s message from it:

var clientMsg = jsonMsg.text;

Now that we have the user query in the clientMsg variable, let’s call the Susi API and fetch an answer for it.

As an example, let’s first take the query as ‘hi’ and visit http://api.asksusi.com/susi/chat.json?q=hi from the browser. We will get a JSON object as follows:

{
        "query": "hi",
        "count": 1,
        "client_id": "aG9zdF8xMDMuMjkuMjIyLjE4MA==",
        "query_date": "2017-07-17T02:29:44.171Z",
        "answers": [{
            "data": [{
                "0": "hi",
                "token_original": "hi",
                "token_canonical": "hi",
                "token_categorized": "hi",
                "timezoneOffset": "-330",
                "language": "en"
            }],
            "metadata": {"count": 1},
            "actions": [{
                    "type": "answer",
                    "language": "de",
                    "expression": "Hallo!"
            }],
   "skills": ["/susi_skill_data/models/general/smalltalk/de/German-Standalone-aiml2susi.txt"]
        }],
        "answer_date": "2017-07-17T02:29:44.179Z",
        "answer_time": 8,
        "language": "en",
        "session": {"identity": {
            "type": "host",
            "name": "103.29.222.180",
            "anonymous": true
        }}
    }

The answer can be found as the value of the key named expression. In this case, it is “Hallo!”.

To fetch the answer for our client’s message through code, we can use this snippet in Node.js:

// including the request module
var request = require('request');

// setting options to make a successful call to the Susi API.
var susiOptions = {
            method: 'GET',
            url: 'http://api.asksusi.com/susi/chat.json',
            qs:
            {
                timezoneOffset: '-330',
                q: clientMsg  // the client message sent to the SUSI chat room.
            }
        };

// A request to the Susi API
request(susiOptions, function (error, response, body) {
    if (error)
        throw new Error(error);
    // answer fetched from Susi
    ans = (JSON.parse(body)).answers[0].actions[0].expression;
});

The properties required for the call are set up through a JSON object (i.e. susiOptions), which we pass to our request function as its first parameter. The response from the API is stored in the body variable, which we parse to be able to access the properties of the body object, hence fetching the answer from the Susi API.

Now that we have the answer, let’s call the Gitter API to send it back to the user. Let’s code the request for that:

// To send a reply by Susi AI to client's message back to Gitter
         var gitterOptions = {
                               method: 'POST',
                               url: 'https://api.gitter.im/v1/rooms/'+roomId+'/chatMessages',
                               headers:
                               {
                                 'authorization': 'Bearer '+ token ,
                                 'content-type': 'application/json',
                                 'accept': 'application/json'
                               },
                               body:
                               {
                                 text: ans
                               },
                               json: true
                             };

         // making the request to Gitter API
         request(gitterOptions, function (error, response, body) {
           if(error)
             throw new Error(error);
           console.log(body);
         });

Hence, we have made the basic chat work!

The Streaming API of Gitter notifies us of every message sent to our chat room, including the reply messages sent by ourselves. To not fall into an infinite loop of Susi answering its own messages, we must include this check in our code:

res.on('data', function(chunk) {
   var msg = chunk.toString();
   if (msg != " \n") {            // If message is not an empty message
     var jsonMsg = JSON.parse(msg);
     if (jsonMsg.fromUser.displayName != 'SusiAI') { // If it's not our own answer
        // do stuff
     }
   }
}

req.on('error', function(e) {
 console.log('Hey something went wrong: ' + e.message);
});

req.end();

The display name in my case is ‘SusiAI’, but it may be different in your case according to the Github or Twitter id made by you.

  9. Upload this code to Github.
  10. Connect the Heroku app to the Github repository which has your code.

  11. Deploy on the development branch. If you intend to contribute, it is recommended to Enable Automatic Deploys.

Branch Deployment.

Successful Deployment.

  12. Go to the Gitter room you created and enjoy chatting with Susi.