Using Custom Forms In Open Event API Server

One feature of the Open Event management system is the ability to add a custom form for an event. The next-gen API Server exposes endpoints to view, edit and delete forms and form fields. This blog post describes how to use a custom form in the Open Event API Server.

Custom forms allow the event organizer to create a personalized form for their event. The form object includes an identifier set by the user and the form itself as a string. The user can also set the type of the form, which can be either text or checkbox, depending on the user’s needs. There are other fields as well, which are abstracted. These fields include:

  • id : auto generated unique identifier for the form
  • event_id : id of the event with which the form is associated
  • is_required : If the form is required
  • is_included : if the form is to be included
  • is_fixed : if the form is fixed

The last three of these fields are boolean fields and provide the user with better control over form use-cases in event management.

Only the event organizer has permissions to edit or delete these forms, while any user who is logged in to eventyay.com can see the fields available for a custom form for an event.

To create a custom-form for event with id=1, the following request is to be made:
POST  https://api.eventyay.com/v1/events/1/custom-forms?sort=type&filter=[]

with all the above-described fields included in the request body. For example:

{
 "data": {
   "type": "custom_form",
   "attributes": {
     "form": "form",
     "type": "text",
     "field-identifier": "abc123",
     "is-required": "true",
     "is-included": "false",
     "is-fixed": "false"
   }
 }
}

The API returns the custom form object along with the event relationship and other self and related links. To see exactly what the response looks like, please check the sample here.
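
As a rough, illustrative sketch only (the id, links and attribute values below are assumptions, not actual server output), the JSON:API style response looks roughly like this:

{
 "data": {
   "id": "1",
   "type": "custom_form",
   "attributes": {
     "form": "form",
     "type": "text",
     "field-identifier": "abc123",
     "is-required": true,
     "is-included": false,
     "is-fixed": false
   },
   "relationships": {
     "event": {
       "links": {
         "self": "/v1/custom-forms/1/relationships/event",
         "related": "/v1/custom-forms/1/event"
       }
     }
   },
   "links": {
     "self": "/v1/custom-forms/1"
   }
 }
}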

Now that we have created a form, any user can get its fields. But let’s say the event organizer wants to update a field or some other attribute of the form; they can make the following request, supplying the custom-form id.

PATCH https://api.eventyay.com/v1/custom-forms/1

(Note: the custom-form id must be included in both the URL and the request body.)
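
For example, to switch the form field with id 1 to a checkbox, a request body along these lines could be sent (the attribute values here are illustrative):

{
 "data": {
   "id": "1",
   "type": "custom_form",
   "attributes": {
     "type": "checkbox",
     "is-required": "false"
   }
 }
}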

Similarly, to delete the form, the following request can be used:

DELETE https://api.eventyay.com/v1/custom-forms/1

Resources


Implementing the Feedback Functionality in SUSI Web Chat

SUSI AI now has a feedback feature where it collects the user’s feedback for every response in order to learn and improve itself. The first step towards guided learning is building a dataset through a feedback mechanism, which can be used to learn from and improve the skill selection mechanism responsible for answering user queries.

The flow behind the feedback mechanism is:

  1. For every SUSI response show thumbs up and thumbs down buttons.
  2. For older messages, the feedback thumbs are disabled and only display the feedback already given; this feedback cannot be changed.
  3. For the latest SUSI response, the user can change the feedback until a new query is given: thumbs up if they like the response, thumbs down otherwise.
  4. When the new query is given by the user, the feedback recorded for the previous response is sent to the server.

Let’s visit SUSI Web Chat and try this out.

We can find the feedback thumbs for the response messages. The user cannot change the feedback he has already given for previous messages. For the latest message the user can toggle feedback until he sends the next query.

How is this implemented?

We first design the UI for the feedback thumbs using Material UI SVG Icons. We need a separate component for the feedback UI because we have to store the feedback state as positive or negative: the user is allowed to change the feedback for the latest response until a new query is sent. Whenever the user clicks on a thumb, we update the state of the component accordingly.

import ThumbUp from 'material-ui/svg-icons/action/thumb-up';
import ThumbDown from 'material-ui/svg-icons/action/thumb-down';

feedbackButtons = (
  <span className='feedback' style={feedbackStyle}>
    <ThumbUp
      onClick={this.rateSkill.bind(this,'positive')}
      style={feedbackIndicator}
      color={positiveFeedbackColor}/>
    <ThumbDown
      onClick={this.rateSkill.bind(this,'negative')}
      style={feedbackIndicator}
      color={negativeFeedbackColor}/>
  </span>
);

The next step is to store the feedback in the Message Store using the saveFeedback Action. This will later help us send the feedback to the server by querying it from the Message Store. The Action calls the Dispatcher with the FEEDBACK_RECEIVED ActionType, which is handled in the MessageStore, and the feedback is updated there.

let feedback = this.state.skill;

if (!(Object.keys(feedback).length === 0 &&
      feedback.constructor === Object)) {
  feedback.rating = rating;
  this.props.message.feedback.rating = rating;
  Actions.saveFeedback(feedback);
}

case ActionTypes.FEEDBACK_RECEIVED: {
  _feedback = action.feedback;
  MessageStore.emitChange();
  break;
}

The final step is to send the feedback to the server. The server endpoint that stores feedback for a skill requires other parameters apart from the feedback itself to identify the skill. The server response contains an attribute `skills`, which gives the path of the skill used to answer that query. From that path we need to parse the following values (a short parsing sketch is included after the example below):

  • Model : Highest level of abstraction for categorising skills
  • Group : Different groups under a model
  • Language : Language of the skill
  • Skill : Name of the skill

For example, for the query `what is the capital of germany`, the skills object is

"skills": ["/susi_skill_data/models/general/smalltalk/en/English-Standalone-aiml2susi.txt"]

So, for this skill,

    • Model : general
    • Group : smalltalk
    • Language : en
    • Skill : English-Standalone-aiml2susi
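
A minimal sketch of how such a skill path could be split into these four values (the helper name and splitting logic here are illustrative, not the exact SUSI Web Chat code):

// Illustrative helper: parse a skill path such as
// "/susi_skill_data/models/general/smalltalk/en/English-Standalone-aiml2susi.txt"
function parseSkillPath(skillPath) {
  // split into segments and drop the empty segment before the leading "/"
  let parts = skillPath.split('/').filter(part => part.length > 0);
  // parts: ['susi_skill_data', 'models', 'general', 'smalltalk', 'en', 'English-Standalone-aiml2susi.txt']
  return {
    model    : parts[2],
    group    : parts[3],
    language : parts[4],
    skill    : parts[5].replace('.txt', '') // strip the file extension to get the skill name
  };
}

// parseSkillPath(skills[0])
// => { model: 'general', group: 'smalltalk', language: 'en', skill: 'English-Standalone-aiml2susi' }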

The server endpoint to store feedback for a particular skill is:

BASE_URL+'/cms/rateSkill.json?model=MODEL&group=GROUP&language=LANGUAGE&skill=SKILL&rating=RATING'

Here Model, Group, Language and Skill are parsed from the skill attribute of the server response as discussed above, and the Rating is either positive or negative, collected when the user clicks on the feedback thumbs.

When a new query is sent, the sendFeedback Action is triggered with the required attributes, and the client then makes an Ajax call to the rateSkill endpoint to send the feedback to the server.

let url = BASE_URL+'/cms/rateSkill.json?'+
          'model='+feedback.model+
          '&group='+feedback.group+
          '&language='+feedback.language+
          '&skill='+feedback.skill+
          '&rating='+feedback.rating;

$.ajax({
  url: url,
  dataType: 'jsonp',
  crossDomain: true,
  timeout: 3000,
  async: false,
  success: function (response) {
    console.log(response);
  },
  error: function(errorThrown){
    console.log(errorThrown);
  }
});

This is how the feedback mechanism works in SUSI Web Chat. The entire code can be found in the SUSI Web Chat repository.

Resources

 


Implementing Admin Statistics Mail and Session API on Open Event Frontend

This blog article will illustrate how the admin-statistics-mail and admin-statistics-session APIs are implemented on the admin dashboard page in Open Event Frontend. Our discussion will primarily involve the admin/index route to illustrate the process. The primary endpoints of the Open Event API we are concerned with for fetching the admin statistics for the dashboard are:

GET /v1/admin/statistics/mails
GET /v1/admin/statistics/sessions

First we need to create the corresponding models according to the type of response returned by the server, which in this case will be admin-statistics-mail and admin-statistics-session, so we proceed with the Ember CLI commands:

ember g model admin-statistics-mail
ember g model admin-statistics-session

Next we define the model according to the requirements. The model needs to extend the base model class, and all the fields will be of type number, since all the data obtained via these models from the API is numerical statistics.

import attr from 'ember-data/attr';
import ModelBase from 'open-event-frontend/models/base';

export default ModelBase.extend({
 oneDay     : attr('number'),
 threeDays  : attr('number'),
 sevenDays  : attr('number'),
 thirtyDays : attr('number')
});

And the model for sessions will be the following. It too consists entirely of attributes of type number, since it represents statistics.

import attr from 'ember-data/attr';
import ModelBase from 'open-event-frontend/models/base';

export default ModelBase.extend({
 confirmed : attr('number'),
 accepted  : attr('number'),
 submitted : attr('number'),
 draft     : attr('number'),
 rejected  : attr('number'),
 pending   : attr('number')
});

Now we need to load the data from the API using the models, so we will send a GET request to fetch the statistics. This can be easily achieved via a store query in the model hook of the admin/index route. However, this cannot be a normal GET request, because the URLs for the endpoints are /v1/admin/statistics/mails and /v1/admin/statistics/sessions, while there are no relationships between statistics and various sub routes, which is what Ember’s default behaviour would expect.

Hence we need to override the default generated request URL using custom adapters, and use the buildURL method to customize the request URLs.

import ApplicationAdapter from './application';

export default ApplicationAdapter.extend({
 buildURL(modelName, id, snapshot, requestType, query) {
   let url = this._super(modelName, id, snapshot, requestType, query);
   url = url.replace('admin-statistics-session', 'admin/statistics/session');
   return url;
 }
});

The buildURL method replaces the default URL segment for admin-statistics-session with admin/statistics/session; otherwise the default request would have been

GET v1/admin-statistics-session
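
The adapter for the mail statistics follows exactly the same pattern; assuming it lives in the adapter file generated for the admin-statistics-mail model, a sketch would be (the replacement path segment should match the actual server route, i.e. admin/statistics/mail or admin/statistics/mails):

import ApplicationAdapter from './application';

export default ApplicationAdapter.extend({
 buildURL(modelName, id, snapshot, requestType, query) {
   // rewrite the conventional model URL into the actual admin statistics endpoint
   let url = this._super(modelName, id, snapshot, requestType, query);
   url = url.replace('admin-statistics-mail', 'admin/statistics/mail');
   return url;
 }
});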

Similarly, this must be done for the mail statistics (a sketch is shown above). These adapters ensure that the correct requests are sent to the server. Now all that remains is making the requests in the model hook and adjusting the template slightly for the new models.

model() {
  return RSVP.hash({
    mails: this.get('store').queryRecord('admin-statistics-mail', {
      filter: {
        name : 'id',
        op   : 'eq',
        val  : 1
      }
    }),
    sessions: this.get('store').queryRecord('admin-statistics-session', {
      filter: {
        name : 'id',
        op   : 'eq',
        val  : 1
      }
    })
  });
}


queryRecord is used instead of query because only a single record is expected to be returned by the API.

Resources

Tags :

Open event, Open event frontend, ember JS, ember service, semantic UI, ember-data, ember adapters,  tickets, Open Event API, Ember models


Handle Large Size Images in Phimpme

Phimpme is an image app which provides a custom camera and sharing features along with a well-featured gallery section. The gallery allows users to view local images. Right now we are using Glide to load images in the gallery; it works fine for small images but lags a bit when it comes to handling high quality, large images in the app. So in this post, I will explain how to handle large images without lag and without taking much time. To solve this problem I am using the Android Universal Image Loader library, which is very light compared to Glide.

Step – 1

The first step is to include the dependency in the Phimpme project, which can be done in the following way:

dependencies {
    compile 'com.nostra13.universalimageloader:universal-image-loader:1.9.4'
}

Step-2

After this, create an Android Universal Image Loader instance. We can create the ImageLoader instance in our application class if we want to use the image loader globally.

// MAXMEMONRY and cacheDir come from the snippet under Step 4 below;
// defaultOptions is assumed to be a DisplayImageOptions instance configured elsewhere in the app
ImageLoaderConfiguration config = new ImageLoaderConfiguration.Builder(this)
       .memoryCacheExtraOptions(480, 800)
       .defaultDisplayImageOptions(defaultOptions)
       .diskCacheExtraOptions(480, 800, null)
       .threadPoolSize(3)
       .threadPriority(Thread.NORM_PRIORITY - 2)
       .tasksProcessingOrder(QueueProcessingType.FIFO)
       .denyCacheImageMultipleSizesInMemory()
       .memoryCache(new LruMemoryCache(MAXMEMONRY / 5))
       .diskCache(new UnlimitedDiskCache(cacheDir))
       .diskCacheFileNameGenerator(new HashCodeFileNameGenerator()) // default
       .imageDownloader(new BaseImageDownloader(this)) // default
       .imageDecoder(new BaseImageDecoder(false)) // default
       .defaultDisplayImageOptions(DisplayImageOptions.createSimple()).build();

this.imageLoader = ImageLoader.getInstance();
imageLoader.init(config);

Add the above code in the application class.


Step-3

Now that our image loader instance is created, we can load an image easily. But to avoid out-of-memory errors and errors with very large images, we can set a number of options on the image loader. In the options we can set the maximum memory allowed to the image loader, the maximum resolution and other parameters, as shown in the configuration above and the snippet below.


Step-4

File cacheDir = com.nostra13.universalimageloader.utils.StorageUtils.getCacheDirectory(this);
 int MAXMEMONRY = (int) (Runtime.getRuntime().maxMemory());

Now it is time to load an image. We can load images from local storage, drawables or assets easily: to load an image using Universal Image Loader, just pass the URI of the image as shown below.

ImageLoader imageLoader = ((MyApplication)getApplicationContext()).getImageLoader();
 imageLoader.displayImage(imageUri, imageView);

This is how I handled large images in Phimpme.

Large Image in Phimpme


References :


How to Make SUSI Bot for Google Hangouts

Google Hangouts is a communication platform provided by Google. We can also make bots for Google Hangouts, and in this blog we will make a SUSI bot for it. In order to make it we will follow these steps:

  1. Install Node.js from the link below on your computer if you haven’t installed it already.
    https://nodejs.org/en/
  2. Create a folder with any name and open a shell and change your current directory to the new folder you created.
  3. Type npm init in the shell and enter details like name, version and entry point.
  4. Create a file with the same name that you wrote as the entry point in the step above, i.e. app.js, and make sure it is in the same folder you created.
  5. Type the following commands in the shell: npm install --save hangouts-bot. After hangouts-bot is installed, type npm install --save express. On success, type npm install --save request. When all the modules are installed, check your package.json file; therein you’ll see the modules in the dependencies portion.
  6. Your package.json file should look like this.

    {
     "name": "susi_hangout",
     "version": "1.0.0",
     "description": "SUSI AI bot for Google Hangouts",
     "main": "app.js",
     "scripts": {
       "start": "node app.js"
     },
     "author": "",
     "license": "",
     "dependencies": {
       "express": "^4.15.3",
       "hangouts-bot": "^0.2.1",
       "request": "^2.81.0"
     }
    }
    
  7. Copy the following code into the file you created, i.e. app.js:
    var hangoutsBot = require("hangouts-bot");
    // config is assumed to be a local module (e.g. require('./config')) exporting { password: '...' }
    var config = require('./config');
    var bot = new hangoutsBot("your-email-here", process.env.password || config.password);
    var request = require('request');
    var express = require('express');
    
    var app = express();
    app.set('port', 8080);
    
    bot.on('online', function() {
       console.log('online');
    });
    
    bot.on('message', function(from, message) {
       console.log(from + ">> " + message);
    
           var options1 = {
           method: 'GET',
           url: 'http://api.susi.ai/susi/chat.json',
           qs: {
               timezoneOffset: '-330',
               q: message
           }
       };
    
       request(options1, function(error, response, body) {
               if (error) throw new Error(error);
               // answer fetched from susi
               var type = (JSON.parse(body)).answers[0].actions;
               var answer, columns, data, key, i, count;
               if (type.length == 1 && type[0].type == "answer") {
                   answer = (JSON.parse(body)).answers[0].actions[0].expression;
                   bot.sendMessage(from, answer);
               } else if (type.length == 1 && type[0].type == "table") {
                   data = JSON.parse(body).answers[0].data;
                   columns = type[0].columns;
                   key = Object.keys(columns);
                   count = JSON.parse(body).answers[0].metadata.count;
                   console.log(key);
                   for (i = 0; i < count; i++) {
                       answer = "";
                       answer =key[0].toUpperCase() + ": " + data[i][key[0]] + "\n" + key[1].toUpperCase() + ": " + data[i][key[1]] + "\n" + key[2].toUpperCase() + ": " + data[i][key[2]] + "\n\n";
                       bot.sendMessage(from, answer);
                   }
               } else if (type.length == 2 && type[1].type == "rss"){
                 data = (JSON.parse(body)).answers[0].data;
                 columns = type[1];
                 key = Object.keys(columns);
                 for(i = 0; i< 4; i++){
                     if(i == 0){
                         answer = (JSON.parse(body)).answers[0].actions[0].expression;
                         bot.sendMessage(from, answer);
                     } else {
                         answer = "";
                         answer = key[1].toUpperCase() + ": " + data[i][key[1]] + "\n" + key[2].toUpperCase() + ": " + data[i][key[2]] + "\n" + key[3].toUpperCase() + ": " + data[i][key[3]] + "\n\n";
                         bot.sendMessage(from, answer);
                     }
                 }
               } else {
                   answer = "Oops looks like SUSI is taking break try to contact later.";
                   bot.sendMessage(from, answer);
               }
       });
    });
    
    app.listen(app.get('port'));
    
  8. Enter your email in the above code. Now we have to deploy the above-written code to Heroku, but first we will make a GitHub repository and push the files to it, as we will use GitHub to deploy our code.
    In the command line, change the current directory to the folder we created above and write:

    git init
    git add .
    git commit -m "initial"
    git remote add origin <URL for remote repository>
    git remote -v
    git push -u origin master

    You will get the URL for the remote repository by creating a repository on your GitHub account and copying its link.

  9. Now we have to deploy this GitHub repository to Heroku to get a public URL.
  10. If you don’t have an account on Heroku, sign up at https://www.heroku.com/; otherwise just sign in and create a new app.
  11. Deploy your repository to Heroku from the Deploy tab, choosing GitHub as the deployment method.
  12. Select automatic deployment so that whenever you make changes in the GitHub repository, they are automatically deployed to Heroku.

You have successfully made a SUSI Hangouts bot. Try messaging it from your other account on https://hangouts.google.com.

Resources

Hangouts bot in python: https://github.com/hangoutsbot/hangoutsbot


Working of One Click Deployment Buttons in loklak

Today’s topic is deployment. It’s called one-click deployment for a reason: developers are lazy. It’s hard to do less than clicking on one button, and that’s our goal in making use of one-click buttons in loklak.

For one-click buttons we only need a central build server, which is our loklak_server. Everything written here was originally based on Apache Ant, but the Ant build was later deprecated and loklak server switched to a Gradle build. We wanted to make the process of provisioning and setting up a complete infrastructure of your own, from server to continuous integration tasks, as easy as possible. These buttons allow you to do all of that in one click.

How does it work?

You can see the one-click buttons on the README page of the loklak_server repository.

These repositories may include different files, like scalingo.json for Scalingo, or docker-compose.yml and docker-cloud.yml for Docker Cloud, at their root, allowing them to define a few things like a name, description, logo and build environment (a Gradle build in the case of loklak server). Once you’ve clicked on any of the buttons, you will be redirected to the respective app and prompted with this information for you to review before confirming the fork.

This will effectively fork the repository into your account. Once the repo is ready, you can click on it. You will then be asked to “activate” or “deploy” your branch, allowing it to provision actual servers and run tasks. At the same time, you will be asked to review and potentially modify a few variables that were defined in the predefined files of the apps (e.g. app.json for Heroku). These are usually things like the Git URL of the loklak repo, or some details about the cloud provider you want to use (e.g. DigitalOcean).
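
For instance, a minimal Heroku app.json for such a repository could look roughly like the following (all values are illustrative and not the actual loklak manifest):

{
  "name": "loklak_server",
  "description": "Distributed tweet scraper and search server",
  "repository": "https://github.com/loklak/loklak_server",
  "keywords": ["loklak", "java", "gradle"],
  "env": {
    "JAVA_OPTS": {
      "description": "Options passed to the JVM at startup",
      "value": "-Xmx512m",
      "required": false
    }
  }
}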

Once you confirm this last step, your branch (most probably the master branch of the loklak server repo) is activated, and the button will start provisioning and configuring your servers, along with the tasks which allow you to build and deploy your app. In most cases, you can go to the tasks/setup section and run the build task, which will fetch loklak server’s code, build it and deploy it on your server, all configuration included, and the app will get a public IP.

What’s next

In loklak we are also introducing a new one-click “AZURE” button, so that users can also deploy loklak on the Azure platform.

Resources


Implementing 3 legged Authorization in Loklak Wok Android for Twitter

Loklak Wok Android is a peer harvester that posts collected tweets to the Loklak Server. Not only is it a peer harvester, it also allows users to post their tweets from the app. Posting tweets from the app requires users to authorize the Loklak Wok app, the client app created at https://apps.twitter.com/. This blog explains the authorization process in detail.

Adding Dependencies to the project

In app/build.gradle:

apply plugin: 'com.android.application'
apply plugin: 'me.tatarka.retrolambda'

android {
   ...
   packagingOptions {
       exclude 'META-INF/rxjava.properties'
   }
}

dependencies {
   ...
   compile 'com.google.code.gson:gson:2.8.1'

   compile 'com.squareup.retrofit2:retrofit:2.3.0'
   compile 'com.squareup.retrofit2:converter-gson:2.3.0'
   compile 'com.squareup.retrofit2:adapter-rxjava2:2.3.0'

   compile 'io.reactivex.rxjava2:rxjava:2.0.5'
   compile 'io.reactivex.rxjava2:rxandroid:2.0.1'
}

 

In build.gradle project level:

dependencies {
   classpath 'com.android.tools.build:gradle:2.3.3'
   classpath 'me.tatarka:gradle-retrolambda:3.2.0'
}

 

Steps of Authorization

Step 1: Create client app in Twitter

Create a Twitter client app at https://apps.twitter.com/. Provide the mandatory entries and also the Callback URL (it will be used in the next steps). Then go to “Keys and Access Token” and save your consumer key and consumer secret. In case you want to use the Twitter API for yourself, click on “Create my access token”, which provides an access token and access token secret.

Step 2: Obtaining a request token

Using the “consumer key” and “consumer secret”, a request token is obtained by sending a POST request to oauth/request_token. As the Twitter API is OAuth 1.0a based, the request needs to be signed by generating an oauth_signature. The oauth_signature is generated by intercepting the network request sent by the Retrofit REST API client; the OAuth interceptor used in Loklak Wok Android is a modified version of this snippet. The Retrofit TwitterAPI interface is defined as:

public interface TwitterAPI {

   String BASE_URL = "https://api.twitter.com/";

   @POST("/oauth/request_token")
   Observable<ResponseBody> getRequestToken();

   @FormUrlEncoded
   @POST("/oauth/access_token")
   Observable<ResponseBody> getAccessTokenAndSecret(@Field("oauth_verifier") String oauthVerifier);
}

 

And the Retrofit REST client is implemented in TwitterRestClient. The createTwitterAPIWithoutAccessToken method returns a Twitter API client which can be called without providing access keys; this is used because we don’t have access tokens right now.

public static TwitterAPI createTwitterAPIWithoutAccessToken() {
   if (sWithoutAccessTokenRetrofit == null) {
       sLoggingInterceptor.setLevel(HttpLoggingInterceptor.Level.BODY);
       // uncomment to debug network requests
       // sWithoutAccessTokenClient.addInterceptor(sLoggingInterceptor);
       sWithoutAccessTokenRetrofit = sRetrofitBuilder
               .client(sWithoutAccessTokenClient.build()).build();
   }
   return sWithoutAccessTokenRetrofit.create(TwitterAPI.class);
}

 

So, the getRequestToken method is used to obtain the request token; if the request is successful, an oauth_token is returned.

@OnClick(R.id.twitter_authorize)
public void onClickTwitterAuthorizeButton(View view) {
   mTwitterApi.getRequestToken()
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(this::parseRequestTokenResponse, this::onFetchRequestTokenError);
}

 

Step 3: Redirecting the user

Using the oauth_token obtained in Step 2, the user is redirected to the login page using a WebView.

private void setAuthorizationView() {
   ...
   webView.setVisibility(View.VISIBLE);
   webView.loadUrl(mAuthorizationUrl);
}

 

A WebView client is created by extending WebViewClient; it is used to keep track of which webpage is opened, by overriding shouldOverrideUrlLoading.

@Override
public boolean shouldOverrideUrlLoading(WebView view, String url) {
   if (url.contains("github")) {
       String[] tokenAndVerifier = url.split("&");
       mOAuthVerifier = tokenAndVerifier[1].substring(tokenAndVerifier[1].indexOf('=') + 1);
       getAccessTokenAndSecret();
       return true;
   }
   return false;
}

 

Since the callback URL provided while creating our Twitter app is a GitHub page, the WebViewClient checks whether the loaded page is that GitHub page. If yes, it parses the oauth_verifier from the GitHub URL.

Step 4: Converting the request token to an access token

A new REST client is created using the oauth_token obtained in Step 2, as implemented in the createTwitterAPIWithAccessToken method.

public static TwitterAPI createTwitterAPIWithAccessToken(String token) {
   TwitterOAuthInterceptor withAccessTokenInterceptor =
           sInterceptorBuilder.accessToken(token).accessSecret("").build();
   OkHttpClient withAccessTokenClient = new OkHttpClient.Builder()
           .addInterceptor(withAccessTokenInterceptor)
           //.addInterceptor(loggingInterceptor) // uncomment to debug network requests
           .build();
   Retrofit withAccessTokenRetrofit = sRetrofitBuilder.client(withAccessTokenClient).build();
   return withAccessTokenRetrofit.create(TwitterAPI.class);
}

 

Now, to obtain the access token and access token secret, the oauth_verifier obtained in Step 3 is passed as a parameter to the getAccessTokenAndSecret method defined in the TwitterAPI interface, which calls the oauth/access_token endpoint using the REST client created above. This is implemented in the getAccessTokenAndSecret method of the WebViewClient class:

private void getAccessTokenAndSecret() {
   mTwitterApi = TwitterRestClient.createTwitterAPIWithAccessToken(mOauthToken);
   mTwitterApi.getAccessTokenAndSecret(mOAuthVerifier)
           .flatMap(this::saveAccessTokenAndSecret)
           ....
}

 

Finally, the obtained access_token and access_token_secret are saved in SharedPreferences so that they can be used to call other Twitter API endpoints, as in saveAccessTokenAndSecret:

private Observable<Integer> saveAccessTokenAndSecret(ResponseBody responseBody)
       throws IOException {
   String[] responseValues = responseBody.string().split("&");

   String token = responseValues[0].substring(responseValues[0].indexOf("=") + 1);
   SharedPrefUtil.setSharedPrefString(getActivity(), OAUTH_ACCESS_TOKEN_KEY, token);
   mOauthToken = token; // here access_token that would be used for API calls

   String tokenSecret = responseValues[1].substring(responseValues[1].indexOf("=") + 1);
   SharedPrefUtil.setSharedPrefString(
           getActivity(), OAUTH_ACCESS_TOKEN_SECRET_KEY, tokenSecret);
   mOauthTokenSecret = tokenSecret;
   return Observable.just(1);
}

 

Resources:


How to Deploy Node js App to Google Container Engine Using Kubernetes

There are many ways to host Node.js apps. A popular way is to host your app on Heroku, but we can also deploy our app to the Google Cloud Platform on Container Engine using Kubernetes. In this blog, we will learn how to deploy a Node.js app on Container Engine with Kubernetes. There are many resources on the web, but we will use YAML files to create the deployment, and to build the Docker image we will not use Google Container Registry (GCR), as it would cost us more for the deployment. To deploy, start by creating an account on Google Cloud; you can get a free tier of Google Cloud Platform worth $300 for 12 months. After creating the account, create a project with any name of your choice and enable Google Cloud Shell from the icon at the top right.

We will also need a Docker image of our app for the deployment. To create a Docker image, first create an account on https://www.docker.com and create a repository with any name on it. Now we will add a Dockerfile to our repository so that we can build the Docker image. The Dockerfile will contain this code:

FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]

You can see an example of a Dockerfile in the SUSI Telegram repository. After pushing the Dockerfile to your repository, we will now build the Docker image of the app. In Google Cloud Shell, clone your repository with git clone {your-repository-link here} and change your current directory to the cloned app. We will build with the --no-cache and -t flags: --no-cache ensures a clean build without reusing stale cached layers, and -t tags the image so it can be pushed to Docker Hub. Run these two commands to build the Docker image and push it to your Docker Hub:

docker build --no-cache -t {your-docker-username-here}/{repository-name-on-docker here} .

docker push {your-docker-username-here}/{repository-name-on-docker here}


We have successfully created a Docker image for our app. Now we will deploy the app to Container Engine using this image. To deploy it we will use configuration files. Add a yamls folder to your repository (this is the folder name the kubectl command below expects); we will add four files to it. In the first file we will specify the namespace; name it 00-namespace.yml. It will contain the following code:

apiVersion: v1
kind: Namespace
metadata:
 name: web

In the second file we will create a ConfigMap in the namespace we just specified; name it configmap.yml. It will contain the following code:

apiVersion: v1
kind: ConfigMap
metadata:
 name: {name-of-your-deployment-here}
 namespace: web

In the third file we will define our deployment; name it deployment.yml. It will contain the following code:

kind: Deployment
apiVersion: apps/v1beta1
metadata:
 name: {name-of-your-deployment-here}
 namespace: web
spec:
 replicas: 1
 template:
   metadata:
     labels:
       app: {name-of-your-deployment-here}
   spec:
     containers:
     - name: {name-of-your-deployment-here}
       image: {your-docker-username-here}/{repository-name-on-docker here}:latest
       ports:
       - containerPort: 8080
         protocol: TCP
       envFrom:
       - configMapRef:
           name: {name-of-your-deployment-here}
     restartPolicy: Always

In the fourth file we will define a service for our deployment; name it service.yml. It will contain the following code:

kind: Service
apiVersion: v1
metadata:
 name: {name-of-your-deployment-here}
 namespace: web
spec:
 ports:
 - port: 8080
   protocol: TCP
   targetPort: 8080
 selector:
   app: {name-of-your-deployment-here}
 type: LoadBalancer

After adding these files to the repository, we will now deploy our app to a cluster on Container Engine. Go to Google Cloud Shell and run the following commands:

gcloud config set compute/zone us-central1-b

This will set the zone for our cluster.

gcloud container clusters create {name-your-cluster-here}

Now run git pull in your Cloud Shell clone, since we have added new files to the repository, and then run the following command to create your deployment:

kubectl create -R -f ./yamls

This will create our deployment.

kubectl get services --namespace=web


With the above command you will get an external IP for your app; open that IP (with the port) in your browser and you will see your app.

kubectl get deployments --namespace=web

Run this command; if the AVAILABLE column shows 1, it means your deployment is up and running.

You have successfully deployed your app to container engine using kubernetes.

Resources

Tutorial on Google cloud: https://cloud.google.com/container-engine/docs/tutorials/hello-node
Tutorial by Jatin Shridhar: https://www.sitepoint.com/kubernetes-deploy-node-js-docker-app/


Adding Servlet for SUSI Service

Whenever we ask SUSI anything, we get an intelligent reply. The API endpoint which clients use to deliver the reply to users is http://api.susi.ai/susi/chat.json, and this endpoint is defined in SusiService.java. In this blog post I will explain how a query is processed by the SUSI Server and how the output is sent to clients.

public class SusiService extends AbstractAPIHandler implements APIHandler

This is a public class and just like every other servlet in SUSI Server, this extends AbstractAPIHandler class, which provides us many options including AAA related features.

public String getAPIPath() {
    return "/susi/chat.json";
}

The function above defines the endpoint of the servlet, in this case “/susi/chat.json”. The final API endpoint will be http://api.susi.ai/susi/chat.json.

This endpoint accepts six parameters in a GET request: “q” is the query parameter, “count” is the number of answers requested, “timezoneOffset” and “language” allow the reply to match the user’s local time and language, and “latitude” and “longitude” are used for getting the user’s location.
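
For example, a client request to this endpoint could look like the following (the parameter values are illustrative):

GET http://api.susi.ai/susi/chat.json?q=hi&timezoneOffset=-330&language=en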

String q = post.get("q", "").trim();
int count = post.get("count", 1);
int timezoneOffset = post.get("timezoneOffset", 0); // minutes, i.e. -60
double latitude = post.get("latitude", Double.NaN); // i.e. 8.68
double longitude = post.get("longitude", Double.NaN); // i.e. 50.11
String language = post.get("language", "en"); // ISO 639-1

After getting all the parameters, we do a database update of the skill data. This is done using the DAO.susi.observe() function.

Then SUSI checks whether the reply should be given from an Etherpad dream, i.e. whether we are currently dreaming something.

if (etherpad_dream != null && etherpad_dream.length() != 0) {
    String padurl = etherpadUrlstub + "/api/1/getText?apikey=" + etherpadApikey + "&padID=$query$";

If SUSI is dreaming something then we call the etherpad API with the API key and padID.

After we get the response from the etherpad API we parse the JSON and store the text from the skill in a local variable.

JSONObject json = new JSONObject(serviceResponse);
String text = json.getJSONObject("data").getString("text");

After that, we fill an empty SUSI mind with the dream. The susi_memory_dir is a folder named “susi” inside the data folder, present on the server itself.

SusiMind dream = new SusiMind(DAO.susi_memory_dir); 

We need the memory directory here to get access to the memory of previous dialogues; otherwise, we cannot test call-back questions.

JSONObject rules = SusiSkill.readEzDSkill(new BufferedReader(new InputStreamReader(new ByteArrayInputStream(text.getBytes(StandardCharsets.UTF_8)), StandardCharsets.UTF_8)));
dream.learn(rules, new File("file://" + etherpad_dream));

When we call the dream.learn function, SUSI starts dreaming. Now we will try to find an answer out of the dream.

The SusiCognition takes all the parameters, including the dream itself, the query and the user’s identity, and then tries to find the answer to that query based on the skill it just learnt.

    SusiCognition cognition = new SusiCognition(dream, q, timezoneOffset, latitude, longitude, language, count, user.getIdentity());

If SUSI finds an answer, it replies with that answer; otherwise it tries to answer with built-in intents. These intents comprise both existing SUSI Skills and internal console services.

if (cognition.getAnswers().size() > 0) {
    DAO.susi.getMemories().addCognition(user.getIdentity().getClient(), cognition);
    return new ServiceResponse(cognition.getJSON());
}

This is how any query is processed by SUSI Server and the output is sent to clients.

Currently people use Etherpad (http://dream.susi.ai/) to make their own skills, but we are also working on SUSI CMS where we can edit and test the skills on the same web application.

References

Java Servlet Docs: http://docs.oracle.com/javaee/6/tutorial/doc/bnafd.html

Embedding Jetty: http://www.eclipse.org/jetty/documentation/9.4.x/embedding-jetty.html

Server-Side Java: http://www.javaworld.com/article/2077634/java-web-development/how-to-get-started-with-server-side-java.html


Skill Editor in SUSI Skill CMS

SUSI Skill CMS is a web application built on the ReactJS framework for creating and editing SUSI skills easily. It follows an API-centric approach where the SUSI Server acts as the API server. In this blog post we will see how to add a component which can be used to create a new skill in SUSI Skill CMS.

For creating any skill in SUSI we need four parameters, i.e. model, group, language and skill name, so we need to ask the user for these four parameters. For input purposes we have a common card component which has dropdowns for selecting the model, group and language, and a text field for the skill name.

<SelectField
    floatingLabelText="Model"
    value={this.state.modelValue}
    onChange={this.handleModelChange}
>
    {models}
</SelectField>
<SelectField
    floatingLabelText="Group"
    value={this.state.groupValue}
    onChange={this.handleGroupChange}
>
    {groups}
</SelectField>
<SelectField
    floatingLabelText="Language"
    value={this.state.languageValue}
    onChange={this.handleLanguageChange}
>
    {languages}
</SelectField>
<TextField
    floatingLabelText="Enter Skill name"
    floatingLabelFixed={true}
    hintText="My SUSI Skill"
    onChange={this.handleExpertChange}
/>
<RaisedButton label="Save" backgroundColor="#4285f4" labelColor="#fff" style={{marginLeft:10}} onTouchTap={this.saveClick} />

This is the card component where we get the user input. We have API endpoints on the SUSI Server for getting the lists of models, groups and languages; using those APIs we inflate the dropdowns, roughly as sketched below.
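
As a rough sketch, one of these dropdowns could be populated like this (the endpoint name and response shape here are assumptions for illustration, not the exact CMS code):

// illustrative fetch of the language list; the real CMS uses the corresponding
// SUSI Server CMS endpoints for models, groups and languages
let url = 'http://api.susi.ai/cms/getAllLanguages.json';
$.ajax({
    url: url,
    dataType: 'jsonp',
    crossDomain: true,
    success: function (data) {
        // store the list in component state; render() then maps it
        // into <MenuItem> entries for the Language SelectField
        this.setState({ languages: data.languages });
    }.bind(this)
});
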
Then the user needs to edit the skill. For editing skills we have used the Ace Editor. Ace is a code editor written in pure JavaScript that matches the features of native editors like Sublime and TextMate.

To use Ace we need to install the component.

npm install react-ace --save                        

This command will install the dependency and update the package.json file in our project with this dependency.

To use this editor we need to import AceEditor and place it in the render function of our react class.

<AceEditor
    mode="markup"
    theme={this.state.editorTheme}
    width="100%"
    fontSize={this.state.fontSizeCode}
    height= "400px"
    value={this.state.code}
    name="skill_code_editor"
    onChange={this.onChange}
    editorProps={{$blockScrolling: true}}
/>

Now we have a page that looks something like this:

Now we need to handle the click event when a user clicks on the save button.

First we check if the user is logged in or not. For this we check if we have the required cookies and the access token of the user.

if(!cookies.get('loggedIn')) {
    notification.open({
        message: 'Not logged In',
        description: 'Please login and then try to create/edit a skill',
        icon: <Icon type="close-circle" style={{ color: '#f44336' }} />,
    });
    return 0;
}

If the user is not logged in, we show them an error notification and ask them to log in.

Then we check whether all the required fields, like the name of the skill, have been filled in, and after that we call an API endpoint on the SUSI Server that will finally store the skill in the susi_skill_data repository.

let url = 'http://api.susi.ai/cms/modifySkill.json';
$.ajax({
    url:url,
    dataType: 'jsonp',
    jsonp: 'callback',
    crossDomain: true,
    success: function (data) {
        console.log(data);
        if(data.accepted===true){
            notification.open({
                message: 'Accepted',
                description: 'Your Skill has been uploaded to the server',
                icon: <Icon type="check-circle" style={{ color: '#00C853' }} />,
            });
           }
    }
});

In the success callback of the Ajax call we check whether the accepted parameter returned by the server is true. If it is, we show the user a notification with the message “Your Skill has been uploaded to the server”.

To see this component running please visit http://skills.susi.ai/skillEditor.

Resources

Material-UI: http://www.material-ui.com/

Ace Editor: https://github.com/securingsincity/react-ace

Ajax: http://api.jquery.com/jquery.ajax/

Universal Cookies: https://www.npmjs.com/package/universal-cookie
