Avoiding Nested Callbacks using RxJS in Loklak Scraper JS

Loklak Scraper JS, as suggested by the name, is a set of scrapers for social media websites written in NodeJS. One of the most common requirements while scraping is handling a parent webpage that provides links to related child webpages, where the data to be scraped is present on both the parent webpage and the child webpages. For example, let's say we want to scrape Quora user profiles matching the search query "Siddhant". The parent webpage for this example is https://www.quora.com/search?q=Siddhant&type=profile, and the child webpages are the links of each matched profile.

Now, a simplistic approach is to first obtain the HTML of the parent webpage and then synchronously fetch the HTML of the child webpages and parse them to get the desired data. The problem with this approach is that it is slow: each request blocks until the previous one completes.

A different approach is to use request-promise-native to implement the logic asynchronously. But there are limitations with this approach too. The HTML of the child webpages can only be fetched after the HTML of the parent webpage has been obtained, and the number of child webpages is dynamic. So there is a request dependency between parent and child, i.e. only once we have the data from the parent webpage can we extract data from the child webpages. The code would look like this:

request(parent_url)
   .then(data => {
       ...
       request(child_url)
           .then(data => {
               // again nesting of child urls
           })
           .catch(error => {

           });
   })
   .catch(error => {

   });

 

Firstly, with this approach there is callback hell. Horrible, isn't it? Secondly, we don't know how many nested callbacks we would need, as the number of child webpages is dynamic.

The saviour: RxJS

The solution to our problem is Reactive Extensions for JavaScript. Using RxJS we can obtain the required data asynchronously and without callback hell!

First, the request-promise object of the parent webpage is obtained, and an observable is generated from it using Rx.Observable.fromPromise. The flatMap operator is used to parse the HTML of the parent webpage and obtain the links of the child webpages. The map method then transforms the links into request-promise objects, which are again transformed into observables. The HTML returned by the resulting observables is parsed and accumulated using the zip operator. Finally, the accumulated data is subscribed to. This is implemented in the getScrapedData method of the Quora JS scraper.

getScrapedData(query, callback) {
   // observable from parent webpage
   Rx.Observable.fromPromise(this.getSearchQueryPromise(query))
     .flatMap((t, i) => { // t is html of parent webpage
       // request-promise object of child webpages
       let profileLinkPromises = this.getProfileLinkPromises(t);
       // request-promise object to observable transformation
       let obs = profileLinkPromises.map(elem => Rx.Observable.fromPromise(elem));

       // each Quora profile is parsed
       return Rx.Observable.zip( // accumulation of data from child webpages
         ...obs,
         (...profileLinkObservables) => {
           let scrapedProfiles = [];
           for (let i = 0; i < profileLinkObservables.length; i++) {
             let $ = cheerio.load(profileLinkObservables[i]);
             scrapedProfiles.push(this.scrape($));
           }
           return scrapedProfiles; // accumulated data returned
         }
       )
     })
     .subscribe( // desired data is subscribed
       scrapedData => callback({profiles: scrapedData}),
       error => callback(error)
     );
 }

 


Implementing the Feedback Functionality in SUSI Web Chat

SUSI AI now has a feedback feature where it collects the user's feedback for every response in order to learn and improve itself. The first step towards guided learning is building a dataset through a feedback mechanism, which can then be used to learn from and improve the skill selection mechanism responsible for answering user queries.

The flow behind the feedback mechanism is:

  1. For every SUSI response show thumbs up and thumbs down buttons.
  2. For the older messages, the feedback thumbs are disabled and only display the feedback already given. The user cannot change the feedback already given.
  3. For the latest SUSI response the user can change his feedback by clicking on thumbs up if he likes the response, else on thumbs down, until he gives a new query.
  4. When the new query is given by the user, the feedback recorded for the previous response is sent to the server.

Let’s visit SUSI Web Chat and try this out.

We can find the feedback thumbs for the response messages. The user cannot change the feedback he has already given for previous messages. For the latest message the user can toggle feedback until he sends the next query.

How is this implemented?

We first design the UI for the feedback thumbs using Material UI SVG Icons. We need a separate component for the feedback UI because we must store the feedback state (positive or negative), since the user is allowed to change his feedback for the latest response until a new query is sent. Whenever the user clicks on a thumb, we update the state of the component accordingly.

import ThumbUp from 'material-ui/svg-icons/action/thumb-up';
import ThumbDown from 'material-ui/svg-icons/action/thumb-down';

feedbackButtons = (
  <span className='feedback' style={feedbackStyle}>
    <ThumbUp
      onClick={this.rateSkill.bind(this,'positive')}
      style={feedbackIndicator}
      color={positiveFeedbackColor}/>
    <ThumbDown
      onClick={this.rateSkill.bind(this,'negative')}
      style={feedbackIndicator}
      color={negativeFeedbackColor}/>
  </span>
);

The next step is to store the feedback in the Message Store using the saveFeedback Action. This will later help us send the feedback to the server by querying it from the Message Store. The Action calls the Dispatcher with the FEEDBACK_RECEIVED ActionType, which is handled in the MessageStore, where the feedback is updated.

let feedback = this.state.skill;

if (!(Object.keys(feedback).length === 0 &&
      feedback.constructor === Object)) {
  feedback.rating = rating;
  this.props.message.feedback.rating = rating;
  Actions.saveFeedback(feedback);
}

case ActionTypes.FEEDBACK_RECEIVED: {
  _feedback = action.feedback;
  MessageStore.emitChange();
  break;
}

The final step is to send the feedback to the server. The server endpoint to store feedback for a skill requires other parameters apart from the feedback itself in order to identify the skill. The server response contains an attribute `skills` which gives the path of the skill used to answer that query. From that path we need to parse:

  • Model : Highest level of abstraction for categorising skills
  • Group : Different groups under a model
  • Language : Language of the skill
  • Skill : Name of the skill

For example, for the query `what is the capital of germany`, the skills object is:

"skills": ["/susi_skill_data/models/general/smalltalk/en/English-Standalone-aiml2susi.txt"]

So, for this skill,

    • Model : general
    • Group : smalltalk
    • Language : en
    • Skill : English-Standalone-aiml2susi

The server endpoint to store feedback for a particular skill is:

BASE_URL+'/cms/rateSkill.json?model=MODEL&group=GROUP&language=LANGUAGE&skill=SKILL&rating=RATING'

where MODEL, GROUP, LANGUAGE and SKILL are parsed from the skills attribute of the server response as discussed above, and RATING is either positive or negative, collected when the user clicks on the feedback thumbs.

When a new query is sent, the sendFeedback Action is triggered with the required attributes to make the server call to store feedback on server. The client then makes an Ajax call to the rateSkill endpoint to send the feedback to the server.

let url = BASE_URL+'/cms/rateSkill.json?'+
          'model='+feedback.model+
          '&group='+feedback.group+
          '&language='+feedback.language+
          '&skill='+feedback.skill+
          '&rating='+feedback.rating;

$.ajax({
  url: url,
  dataType: 'jsonp',
  crossDomain: true,
  timeout: 3000,
  async: false,
  success: function (response) {
    console.log(response);
  },
  error: function(errorThrown){
    console.log(errorThrown);
  }
});

This is how the feedback mechanism works in SUSI Web Chat. The entire code can be found in the SUSI Web Chat repository.


Implementing 3 legged Authorization in Loklak Wok Android for Twitter

Loklak Wok Android is a peer harvester that posts collected tweets to the Loklak Server. Not only is it a peer harvester, it also allows users to post their tweets from the app. Posting tweets from the app requires users to authorize the Loklak Wok app, the client app created at https://apps.twitter.com/. This blog explains the authorization process in detail.

Adding Dependencies to the project

In app/build.gradle:

apply plugin: 'com.android.application'
apply plugin: 'me.tatarka.retrolambda'

android {
   ...
   packagingOptions {
       exclude 'META-INF/rxjava.properties'
   }
}

dependencies {
   ...
   compile 'com.google.code.gson:gson:2.8.1'

   compile 'com.squareup.retrofit2:retrofit:2.3.0'
   compile 'com.squareup.retrofit2:converter-gson:2.3.0'
   compile 'com.squareup.retrofit2:adapter-rxjava2:2.3.0'

   compile 'io.reactivex.rxjava2:rxjava:2.0.5'
   compile 'io.reactivex.rxjava2:rxandroid:2.0.1'
}

 

In the project-level build.gradle:

dependencies {
   classpath 'com.android.tools.build:gradle:2.3.3'
   classpath 'me.tatarka:gradle-retrolambda:3.2.0'
}

 

Steps of Authorization

Step 1: Create client app in Twitter

Create a Twitter client app at https://apps.twitter.com/. Provide the mandatory entries and also the Callback URL (it will be used in later steps). Then go to "Keys and Access Tokens" and save your consumer key and consumer secret. In case you want to use the Twitter API for yourself, click on "Create my access token", which provides an access token and access token secret.

Step 2: Obtaining a request token

Using the consumer key and consumer secret, a request token is obtained by sending a POST request to oauth/request_token. As the Twitter API is OAuth1 based, the request needs to be signed with a generated oauth_signature. The oauth_signature is generated by intercepting the network request sent by the Retrofit REST API client; the OAuth interceptor used in Loklak Wok Android is a modified version of this snippet. The Retrofit TwitterAPI interface is defined as:

public interface TwitterAPI {

   String BASE_URL = "https://api.twitter.com/";

   @POST("/oauth/request_token")
   Observable<ResponseBody> getRequestToken();

   @FormUrlEncoded
   @POST("/oauth/access_token")
   Observable<ResponseBody> getAccessTokenAndSecret(@Field("oauth_verifier") String oauthVerifier);
}
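
For a feel of what the interceptor computes, here is a minimal, simplified sketch of the HMAC-SHA1 signing step of OAuth 1.0a. It assumes the signature base string (built from the HTTP method, URL and percent-encoded oauth_* parameters) has already been assembled; the real interceptor handles that assembly as well:

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class OAuthSignatureSketch {

    // oauth_signature = Base64(HMAC-SHA1(base string, signing key)).
    // The signing key is "consumerSecret&tokenSecret"; the token secret is
    // empty before an access token has been obtained.
    public static String sign(String baseString, String consumerSecret, String tokenSecret)
            throws Exception {
        String signingKey = consumerSecret + "&" + tokenSecret;
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(signingKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] rawSignature = mac.doFinal(baseString.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(rawSignature);
    }
}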

 

And the Retrofit REST client is implemented in TwitterRestClient. The createTwitterAPIWithoutAccessToken method returns a Twitter API client that can be called without providing access keys; it is used here because we don't have access tokens yet.

public static TwitterAPI createTwitterAPIWithoutAccessToken() {
   if (sWithoutAccessTokenRetrofit == null) {
       sLoggingInterceptor.setLevel(HttpLoggingInterceptor.Level.BODY);
       // uncomment to debug network requests
       // sWithoutAccessTokenClient.addInterceptor(sLoggingInterceptor);
       sWithoutAccessTokenRetrofit = sRetrofitBuilder
               .client(sWithoutAccessTokenClient.build()).build();
   }
   return sWithoutAccessTokenRetrofit.create(TwitterAPI.class);
}

 

So, the getRequestToken method is used to obtain the request token; if the request is successful, an oauth_token is returned.

@OnClick(R.id.twitter_authorize)
public void onClickTwitterAuthorizeButton(View view) {
   mTwitterApi.getRequestToken()
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(this::parseRequestTokenResponse, this::onFetchRequestTokenError);
}
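
The response body is URL-encoded form data of the shape oauth_token=...&oauth_token_secret=...&oauth_callback_confirmed=true. A simplified sketch of what parsing it and building the authorization URL could look like (the actual parseRequestTokenResponse in the app may differ in details):

// Sketch: extract oauth_token from the response and build the authorization URL.
private void parseRequestTokenResponse(ResponseBody responseBody) throws IOException {
    String[] values = responseBody.string().split("&");
    mOauthToken = values[0].substring(values[0].indexOf('=') + 1); // oauth_token
    mAuthorizationUrl = "https://api.twitter.com/oauth/authorize?oauth_token=" + mOauthToken;
    setAuthorizationView();
}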

 

Step 3: Redirecting the user

Using the oauth_token obtained in Step 2, the user is redirected to the Twitter login page in a WebView.

private void setAuthorizationView() {
   ...
   webView.setVisibility(View.VISIBLE);
   webView.loadUrl(mAuthorizationUrl);
}

 

A WebView client is created by extending WebViewClient; it keeps track of which webpage is opened by overriding shouldOverrideUrlLoading.

@Override
public boolean shouldOverrideUrlLoading(WebView view, String url) {
   if (url.contains("github")) {
       String[] tokenAndVerifier = url.split("&");
       mOAuthVerifier = tokenAndVerifier[1].substring(tokenAndVerifier[1].indexOf('=') + 1);
       getAccessTokenAndSecret();
       return true;
   }
   return false;
}

 

Since the callback URL provided while creating our Twitter app is a GitHub page, the WebViewClient checks whether the opened page is that GitHub page. If yes, it parses the oauth_verifier from the GitHub URL.

Step 4: Converting the request token to an access token

A new REST client is created using the request token obtained in Step 2, as implemented in the createTwitterAPIWithAccessToken method.

public static TwitterAPI createTwitterAPIWithAccessToken(String token) {
   TwitterOAuthInterceptor withAccessTokenInterceptor =
           sInterceptorBuilder.accessToken(token).accessSecret("").build();
   OkHttpClient withAccessTokenClient = new OkHttpClient.Builder()
           .addInterceptor(withAccessTokenInterceptor)
           //.addInterceptor(loggingInterceptor) // uncomment to debug network requests
           .build();
   Retrofit withAccessTokenRetrofit = sRetrofitBuilder.client(withAccessTokenClient).build();
   return withAccessTokenRetrofit.create(TwitterAPI.class);
}

 

Now, to obtain the access token and access token secret, the oauth_verifier obtained in Step 3 is passed as a parameter to the getAccessTokenAndSecret method defined in the TwitterAPI interface, which calls the oauth/access_token endpoint using the REST client created above. This is done in the getAccessTokenAndSecret method of the WebViewClient class:

private void getAccessTokenAndSecret() {
   mTwitterApi = TwitterRestClient.createTwitterAPIWithAccessToken(mOauthToken);
   mTwitterApi.getAccessTokenAndSecret(mOAuthVerifier)
           .flatMap(this::saveAccessTokenAndSecret)
           ....
}

 

Finally, the obtained access_token and access_token_secret are saved in SharedPreferences so that they can be used to call other Twitter API endpoints, as in saveAccessTokenAndSecret:

private Observable<Integer> saveAccessTokenAndSecret(ResponseBody responseBody)
       throws IOException {
   String[] responseValues = responseBody.string().split("&");

   String token = responseValues[0].substring(responseValues[0].indexOf("=") + 1);
   SharedPrefUtil.setSharedPrefString(getActivity(), OAUTH_ACCESS_TOKEN_KEY, token);
   mOauthToken = token; // here access_token that would be used for API calls

   String tokenSecret = responseValues[1].substring(responseValues[1].indexOf("=") + 1);
   SharedPrefUtil.setSharedPrefString(
           getActivity(), OAUTH_ACCESS_TOKEN_SECRET_KEY, tokenSecret);
   mOauthTokenSecret = tokenSecret;
   return Observable.just(1);
}

 


Adding TextDrawable as a PlaceHolder in Open Event Android App

The Open Event Android project has a fragment for showing the speakers of an event. Each Speaker model has an image-url which is used to fetch the image from the server and load it in the ImageView. In some cases the image-url is null, or the client cannot fetch the image from the server because of a network problem. In these cases, showing a drawable which contains the first letters of the first name and the last name along with a colored background gives a great UI and UX. In this post I explain how to add a TextDrawable as a placeholder in the ImageView using the TextDrawable library.

1. Add dependency

In order to use TextDrawable in your app, add the following dependency in your app module's build.gradle file.

dependencies {
	compile 'com.amulyakhare:com.amulyakhare.textdrawable:1.0.1'
}

2. Create static TextDrawable builder

Create a static method in the Application class which returns a builder object for creating TextDrawables. We make the method static so that it can be used all over the app.

private static TextDrawable.IShapeBuilder textDrawableBuilder;

public static TextDrawable.IShapeBuilder getTextDrawableBuilder() {
    if (textDrawableBuilder == null) {
        textDrawableBuilder = TextDrawable.builder();
    }
    return textDrawableBuilder;
}

This method first checks whether the builder object is null, initializes it if so, and then returns it.

3.  Create and initialize TextDrawable object

Now create a TextDrawable object and initialize it using the builder object. The builder has the methods buildRound(), buildRect() and buildRoundRect() for making the drawable round, rectangular, or rectangular with rounded corners, respectively. Here we use buildRect() to make the drawable rectangular.

TextDrawable drawable = OpenEventApp.getTextDrawableBuilder()
                    .buildRect(Utils.getNameLetters(name), ColorGenerator.MATERIAL.getColor(name));

The buildRect() method takes two arguments: a String which will be used as the text in the drawable, and an int color which will be used as the background color of the drawable. Here ColorGenerator.MATERIAL returns a material color for the given string.

4.  Create getNameLetters()  method

The getNameLetters(String name) method returns the first letters of the first name and the last name as a String.

For example, if the name is "Shailesh Baldaniya" then it will return "SB".

public static String getNameLetters(String name) {
        if (isEmpty(name))
            return "#";

        String[] strings = name.split(" ");
        StringBuilder nameLetters = new StringBuilder();
        for (String s : strings) {
            if (nameLetters.length() >= 2)
                return nameLetters.toString().toUpperCase();
            if (!isEmpty(s)) {
                nameLetters.append(s.trim().charAt(0));
            }
        }
        return nameLetters.toString().toUpperCase();
}

Here we use the split method to separate the first name and last name, and charAt(0) gives the first character of each part. If the name string is empty, the method returns "#".

5.  Use Drawable

Now, after creating the TextDrawable object, we need to load it as a placeholder in the ImageView. For this we use the Picasso library.

Picasso.with(context)
        .load(imageUrl) // the speaker's image-url
        .placeholder(drawable)
        .error(drawable)
        .into(speakerImage);

Here the placeholder() method displays the drawable while the image is being loaded, and the error() method displays it when the requested image could not be loaded, for example when the device is offline. speakerImage is the ImageView in which we want to load the image.

Conclusion

TextDrawable is a great library for generating drawables with text. It also has support for animations, fonts and shapes. To know more about TextDrawable, check out the library's GitHub repository.


Adding Voice Recognition in Description Dialog Box in Phimpme project

In this blog, I will explain how I added voice recognition in a dialog box to describe an image in the Phimpme Android application. In Phimpme we have an option to add a description for an image. Sometimes the description can be long, and speech-to-text input eases the user's experience while adding it.

Adding appropriate Dialog Box

In order to let the user trigger the voice recognition function, I added an image button in the description dialog box. The description dialog box only contains an EditText and the button; we have used material design to make it look better and added a caption on top of it.

 

<LinearLayout
   android:id="@+id/rl_description_dialog"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:orientation="horizontal"
   android:padding="15dp">
   <EditText
       android:id="@+id/description_edittxt"
       android:layout_weight="1"
       android:layout_width="fill_parent"
       android:layout_height="wrap_content"
       android:padding="15dp"
       android:hint="@string/description_hint"
       android:textColorHint="@color/grey"
       android:layout_margin="20dp"
       android:gravity="top|left"
       android:inputType="text" />
   <ImageButton
       android:layout_width="@dimen/mic_image"
       android:layout_height="@dimen/mic_image"
       android:layout_alignRight="@+id/description_edittxt"
       app2:srcCompat="@drawable/ic_mic_black"
       android:layout_gravity="center"
       android:background="@color/transparent"
       android:paddingEnd="10dp"
       android:paddingTop="12dp"
       android:id="@+id/voice_input"/>
</LinearLayout>

Function to prompt dialog box

We added a function to prompt the dialog box from anywhere in the application. The getDescriptionDialog() function prompts the description dialog box and returns the EditText, which can further be used to manipulate the text in it. Follow these steps to inflate the description dialog box in an activity.

First,

In the getDescriptionDialog() function we inflate the layout using the getLayoutInflater function, passing the layout id as an argument.

public EditText getDescriptionDialog(final ThemedActivity activity, AlertDialog.Builder descriptionDialog) {
    final View descriptionDialogLayout = activity.getLayoutInflater().inflate(R.layout.dialog_description, null);

Second,

Get the TextView in the description dialog box.

final TextView descriptionDialogTitle = (TextView) descriptionDialogLayout.findViewById(R.id.description_dialog_title);

Third,

Present the dialog using a CardView to make use of material design, and take an instance of the EditText. This EditText can then be used to take input from the user, either typed or through voice recognition.

final CardView descriptionDialogCard = (CardView) descriptionDialogLayout.findViewById(R.id.description_dialog_card);
EditText editTextDescription = (EditText) descriptionDialogLayout.findViewById(R.id.description_edittxt);

Fourth,

Set an onClickListener on the mic image icon. This onClickListener prompts the voice recognition in the activity. We need to specify the language for the speech-to-text input; in the case of Phimpme it is English, so "en-US". We set the maximum number of results to 15.

ImageButton voiceRecognition = (ImageButton) descriptionDialogLayout.findViewById(R.id.voice_input);
voiceRecognition.setOnClickListener(new View.OnClickListener() {
   @Override
   public void onClick(View v) {
       // This intent starts the voice recognizer
       Intent i = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
       i.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
       i.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US");
       i.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 15); // maximum number of results
       i.putExtra(RecognizerIntent.EXTRA_PROMPT, activity.getString(R.string.caption_speak));
       startActivityForResult(i, REQ_CODE_SPEECH_INPUT);
   }
});

Putting Text in the EditText

After the voice recognition prompt ends, the onActivityResult function checks whether data has been received:

if (requestCode == REQ_CODE_SPEECH_INPUT && data!=null) {

We get the spoken text from the intent via getStringArrayListExtra() and store it in an ArrayList. To get the received text as a single string, we take the first entry of the ArrayList:

ArrayList<String> result = data
       .getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
voiceInput = result.get(0);

Then we set the received text in the EditText:

editTextDescription.setText(voiceInput);
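
Putting the pieces together, a condensed sketch of the whole onActivityResult might look like this (assuming editTextDescription is the EditText obtained earlier):

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQ_CODE_SPEECH_INPUT && resultCode == RESULT_OK && data != null) {
        ArrayList<String> result =
                data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
        if (result != null && !result.isEmpty()) {
            // The first entry is the recognizer's best guess.
            editTextDescription.setText(result.get(0));
        }
    }
}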

Conclusion

Using voice recognition is a quick and simple way to add a long description to an image. The speech-to-text feature works with few mistakes and is useful in our Phimpme Android application.

Github

https://github.com/fossasia/phimpme-android

Resources

Tutorial for speech to Text: https://www.androidhive.info/2014/07/android-speech-to-text-tutorial/

To add description dialog box: https://developer.android.com/guide/topics/ui/dialogs.html


Google Authentication and sharing Image on Google Plus from Phimpme Android

In this blog, I will explain how I implemented Google authentication and sharing an image on Google Plus from the Phimpme Android application.

Adding Google Plus authentication in accounts activity in Phimpme Android

In the accounts activity, we added a Google Plus option. This is done by adding "Googleplus" to the accountName array.

public static String[] accountName = { "Facebook", "Twitter", "Drupal", "NextCloud", "Wordpress", "Pinterest", "Flickr", "Imgur", "Dropbox", "OwnCloud", "Googleplus"};

Add this to your Gradle build. Please note that the versions of all Google Play services dependencies in the Gradle file should be the same. In the case of Phimpme the version is 10.0.1, so all the services from Google should be 10.0.1.

compile 'com.google.android.gms:play-services-auth:10.0.1'

In onCreate, we need to create a GoogleSignInOptions object, requesting the user's email via requestEmail() and calling build():

GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
       .requestEmail()
       .build();

Then we build a GoogleApiClient with access to the Google Sign-In API and the options specified by gso:

mGoogleApiClient = new GoogleApiClient.Builder(this)
       .enableAutoManage(this , AccountActivity.this)
       .addApi(Auth.GOOGLE_SIGN_IN_API, gso)
       .build();

Sign-in takes place through an intent which lists the Google accounts logged in on the phone. The user selects the account with which he or she wants to authenticate the application.

private void signInGooglePlus() {
   Intent signInIntent = Auth.GoogleSignInApi.getSignInIntent(mGoogleApiClient);
   startActivityForResult(signInIntent, RC_SIGN_IN);
}

Adding Google Plus account in the Realm Database

The handleSignInResult() function is used to handle the sign-in result. This includes storing the received data in the Realm database, showing the appropriate username in the accounts activity, and handling login failure.

If the login is successful, a Toast message pops up with the appropriate message.

private void handleSignInResult(GoogleSignInResult result) {
    if (result.isSuccess()) {
        GoogleSignInAccount acct = result.getSignInAccount();
        Toast.makeText(AccountActivity.this, R.string.success, Toast.LENGTH_SHORT).show();

Next, we create an object to store the details in the Realm database. We begin a Realm transaction, create the account object, set the logged-in username via setUsername(acct.getDisplayName()), and finally commit the transaction:

        realm.beginTransaction();
        account = realm.createObject(AccountDatabase.class, accountName[GOOGLEPLUS]);
        account.setUsername(acct.getDisplayName());
        realm.commitTransaction();
    }
}

Adding Google Plus option in Sharing Activity

To add the Google Plus option in the sharing activity, we first added the Google Plus icon in the resources folder.

The Google Plus icon is in SVG (scalable vector) format so that we can manipulate it, applying any colour and size:

<vector xmlns:android="http://schemas.android.com/apk/res/android"
   android:width="24dp"
   android:height="24dp"
   android:viewportHeight="32.0"
   android:viewportWidth="32.0">
   <path
       android:fillColor="#00000000"
       android:pathData="M16,16m-16,0a16,16 0,1 1,32 0a16,16 0,1 1,-32 0" />
   <path
       android:fillColor="#000000"
       android:pathData="M16.7,17.2c-0.4,-0.3 -1.3,-1.1 -1.3,-1.5c0,-0.5 0.2,-0.8 1,-1.4c0.8,-0.6 1.4,-1.5 1.4,-2.6c0,-1.2 -0.6,-2.5 -1.6,-2.9h1.6L18.8,8h-5c-2.2,0 -4.3,1.7 -4.3,3.6c0,2 1.5,3.6 3.8,3.6c0.2,0 0.3,0 0.5,0c-0.1,0.3 -0.3,0.6 -0.3,0.9c0,0.6 0.3,1 0.7,1.4c-0.3,0 -0.6,0 -0.9,0c-2.8,0 -4.9,1.8 -4.9,3.6c0,1.8 2.3,2.9 5.1,2.9c3.1,0 4.9,-1.8 4.9,-3.6C18.4,19 18,18.1 16.7,17.2zM14.1,14.7c-1.3,0 -2.5,-1.4 -2.7,-3.1c-0.2,-1.7 0.6,-3 1.9,-2.9c1.3,0 2.5,1.4 2.7,3.1C16.2,13.4 15.3,14.8 14.1,14.7zM13.6,23.2c-1.9,0 -3.3,-1.2 -3.3,-2.7c0,-1.4 1.7,-2.6 3.6,-2.6c0.4,0 0.9,0.1 1.2,0.2c1,0.7 1.8,1.1 2,1.9c0,0.2 0.1,0.3 0.1,0.5C17.2,22.1 16.2,23.2 13.6,23.2zM21.5,15v-2h-1v2h-2v1h2v2h1v-2h2v-1H21.5z" />
</vector>

Sharing Image on Google Plus from Sharing Activity

After creating the appropriate button, we need to send the image to Google Plus. For this we import the PlusShare classes in the SharingActivity.

import com.google.android.gms.plus.PlusShare;

Share Image function

To share the image on Google Plus we use the PlusShare builder that comes with the Google Plus API. In the function shareToGoogle() we send the message and the image to Google Plus.

To send the message: share.setText("the message you want to pass").

To send the image: share.addStream(the Uri of the image to be sent).

private void shareToGoogle() {
   Uri uri = getImageUri(context);
   PlusShare.Builder share = new PlusShare.Builder(this);
   share.setText(caption);
   share.addStream(uri);
   share.setType(getResources().getString(R.string.image_type));
   startActivityForResult(share.getIntent(), REQ_SELECT_PHOTO);
}

Show appropriate message after uploading the image

After uploading the image on Google Plus there can be two possibilities:

  1. Image failed to upload
  2. Image uploaded successfully.

If the image uploads successfully, an appropriate message is displayed in a Snackbar; if the upload fails, an error message is displayed.

if (requestCode == REQ_SELECT_PHOTO) {
   if (responseCode == RESULT_OK) {
       Snackbar.make(parent, R.string.success_google, Snackbar.LENGTH_LONG).show();
       return;
   } else {
       Snackbar.make(parent, R.string.error_google, Snackbar.LENGTH_LONG).show();
       return;
   }
}

Conclusion

This is how we implemented image sharing on Google Plus in Phimpme. This method provides an easy way to upload an image to Google Plus without leaving the Phimpme application.

Github

https://github.com/fossasia/phimpme-android


Persistently Storing loklak Server Dumps on Kubernetes

In an earlier blog post, I discussed loklak setup on Kubernetes. The deployment mentioned in the post was to test the development branch. Next, we needed to have a deployment where all the messages are collected and dumped in text files that can be reused.

In this blog post, I will be discussing the challenges with such deployment and the approach to tackle them.

Volatile Disk in Kubernetes

The pods that hold deployments in Kubernetes have their own disk storage, but any data written by the application stays only as long as the same version of the deployment is running. As soon as the deployment is updated or relocated, the data stored by the application is cleaned up.


Due to this, dumps are written while loklak is running, but they get wiped out when the deployment image is updated. In other words, all dumps are lost when the image updates. We needed to find a solution, since we require permanent storage for collecting dumps.

Persistent Disk

In order to have storage which can hold data permanently, we can mount persistent disk(s) on a pod at the appropriate location. This ensures that the data that is important to us stays with us, even when the deployment goes down.


In order to add persistent disks, we first need to create a persistent disk. On Google Cloud Platform, we can use the gcloud CLI to create disks in a given region –

gcloud compute disks create --size=<required size> --zone=<same as cluster zone> <unique disk name>
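
For example, a disk matching the PersistentVolume and StorageClass used later in this post could be created like this (the size and zone here are chosen only to match those configurations):

gcloud compute disks create --size=100GB --zone=us-central1-a data-dump-disk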

After this, we can mount it on a Docker volume defined in Kubernetes configurations –

      ...
      volumeMounts:
        - mountPath: /path/to/mount
          name: volume-name
  volumes:
    - name: volume-name
      gcePersistentDisk:
        pdName: disk-name
        fsType: fileSystemType

But this setup can’t be used for storing loklak dumps. Let’s see “why” in the next section.

Rolling Updates and Persistent Disk

The Kubernetes deployment needs to be updated whenever the master branch of the loklak server is updated. Such an update of the master deployment creates a new pod and tries to start the loklak server on it. During all this, the older deployment is still running and serving requests.


The control will not be transferred to the newer pod until it is ready and all the probes are passing. The newer deployment will now try to mount the disk which is mentioned in the configuration, but it would fail to do so. This would happen because the older pod has already mounted the disk.


Therefore, all new deployments would simply fail to start due to insufficient resources. To overcome such issues, Kubernetes allows persistent volume claims. Let’s see how we used them for loklak deployment.

Persistent Volume Claims

Kubernetes provides Persistent Volume Claims which claim resources (storage) from a Persistent Volume (just like a pod does from a node). The higher level APIs are provided by Kubernetes (configurations and kubectl command line). In the loklak deployment, the persistent volume is a Google Compute Engine disk –

apiVersion: v1
kind: PersistentVolume
metadata:
  name: dump
  namespace: web
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  gcePersistentDisk:
    pdName: "data-dump-disk"
    fsType: "ext4"

[SOURCE]

It must be noted here that a persistent disk by the name of data-dump-disk has already been created in the same region.


The storage class defines the way in which the PV should be handled, along with the provisioner for the service –

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
  namespace: web
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-central1-a

[SOURCE]

After having the StorageClass and PersistentVolume, we can create a claim for the volume by using appropriate configurations –

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dump
  namespace: web
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: slow

[SOURCE]

After this, we can mount this claim on our Deployment –

  ...
  volumeMounts:
    - name: dump
      mountPath: /loklak_server/data
volumes:
  - name: dump
    persistentVolumeClaim:
      claimName: dump

[SOURCE]

Verifying persistence of Dumps

To verify this, we can redeploy the cluster using the same persistent disk and check if the earlier dumps are still present there –

$ http http://link.to.deployment/dump/
HTTP/1.1 200 OK
Cache-Control: public, max-age=60
Content-Type: text/html;charset=utf-8
...


<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">

<h1>Index of /dump</h1>
<pre>      Name 
[gz ] <a href="messages_20170802_71562040.txt.gz">messages_20170802_71562040.txt.gz</a>   Thu Aug 03 00:07:21 GMT 2017   132M
[gz ] <a href="messages_20170803_69925009.txt.gz">messages_20170803_69925009.txt.gz</a>   Mon Aug 07 15:40:04 GMT 2017   532M
[gz ] <a href="messages_20170807_36357603.txt.gz">messages_20170807_36357603.txt.gz</a>   Wed Aug 09 10:26:24 GMT 2017   377M
[txt] <a href="messages_20170809_27974404.txt">messages_20170809_27974404.txt</a>      Thu Aug 10 08:51:49 GMT 2017  1564M
<hr></pre>
...

Conclusion

In this blog post, I discussed the process of deploying loklak with persistent dumps on Kubernetes. This deployment is intended to work as root.loklak.org in the near future. The changes were proposed in loklak/loklak_server#1377 by @singhpratyush (me).


UI Espresso Test Cases for Phimpme Android

We are heading toward a release of Phimpme soon, so we are increasing code coverage by writing test cases for our app. What is a test case? A test case is a script against which we run our code to test the implementation of features; it basically contains the expected output, flow and feature steps of the app. Before releasing an app on multiple platforms, it is highly recommended to run it against test cases.

For example, consider an app which has one button. First we write a UI test case which checks whether the button is displayed on the screen, and running it reports whether the test passed or failed.

Steps to add a UI test case using Espresso

The Espresso testing framework provides APIs to simulate user interactions, and it has a concise API. Newer versions of Android Studio even include a feature to record Espresso test cases. I'll show how to use the recorder in the steps below.

  • Setup Project Directory

Android instrumentation tests must be placed in the androidTest directory. If it is not there, create the directory at app/src/androidTest/java…

  • Write Test Case

Firstly, I am writing a very simple test case which checks whether the three bottom navigation view items are displayed.

The Espresso testing framework has three basic components:

ViewMatchers

These help to find the correct view on which actions can be performed, e.g. onView(withId(R.id.navigation_accounts)). Here I am taking the view of the accounts item in the bottom navigation view.

ViewActions

These perform actions on the view matched earlier; a very basic operation is click().

ViewAssertions

These assert the current state of the matched view, e.g. isDisplayed(). So the basic structure of an Espresso test case is:

onView(ViewMatcher)       
 .perform(ViewAction)     
   .check(ViewAssertion);

We can also use the Hamcrest framework, which provides extra matchers for checking conditions in the code.

Setup Espresso in Code

Add this to your app-level build.gradle:

// Android Testing Support Library's runner and rules
androidTestCompile "com.android.support.test:runner:$rootProject.ext.runnerVersion"
androidTestCompile "com.android.support.test:rules:$rootProject.ext.rulesVersion"

// Espresso UI Testing dependencies.
androidTestCompile "com.android.support.test.espresso:espresso-core:$rootProject.ext.espressoVersion"
androidTestCompile "com.android.support.test.espresso:espresso-contrib:$rootProject.ext.espressoVersion"
  • Use recorder to write the Test Case

The new recorder feature is great if you want to set everything up quickly. Go to Run → Record Espresso Test in Android Studio.

It captures the current user interface hierarchy and provides the option to add assertions on it.

You can edit the assertions by selecting the element and applying the assertion on it.

Save and run the test cases by right-clicking on the class name and selecting Run 'Test Case name'.

The console will show the progress of the test case. Here it shows passed, which means the view hierarchy required by the test case was found.


How to Implement Feedback System in SUSI iOS

The SUSI iOS app provides responses for various queries, but the responses are not always accurate. To improve them, we make use of a feedback system, which is the first step towards implementing machine learning on the SUSI server. For every query, we present the user with an option to upvote or downvote the response, and based on that a positive or negative feedback is saved on the server. In this blog, I will explain how this feedback system was implemented in the SUSI iOS app.

Steps to implement:

We start by adding the UI which is two buttons, one with a thumbs up and the other with a thumbs down image.

textBubbleView.addSubview(thumbUpIcon)
textBubbleView.addSubview(thumbDownIcon)
textBubbleView.addConstraintsWithFormat(format: "H:[v0]-4-[v1(14)]-2-[v2(14)]-8-|", views: timeLabel, thumbUpIcon, thumbDownIcon)
textBubbleView.addConstraintsWithFormat(format: "V:[v0(14)]-2-|", views: thumbUpIcon)
textBubbleView.addConstraintsWithFormat(format: "V:[v0(14)]-2-|", views: thumbDownIcon)
thumbUpIcon.isUserInteractionEnabled = true
thumbDownIcon.isUserInteractionEnabled = true

Here, we add the subviews and assign constraints so that these buttons align to the bottom right next to each other. Also, we enable the user interaction for these buttons.

We know that the user can rate the response by pressing either of the buttons added above. To do that we make an API call to the endpoint below:

BASE_URL+'/cms/rateSkill.json?'+'model='+model+'&group='+group+'&skill='+skill+'&language='+language+'&rating='+rating

Here, BASE_URL is the URL of the server, and the four params model, group, language and skill are retrieved by parsing the skills location parameter we get with the response. The rating is positive or negative based on which button was pressed by the user. The skills param in the response looks like this:

"skills": [
  "/susi_skill_data/models/general/entertainment/en/quotes.txt"
]

Let's write the method that makes the API call and informs the UI whether it was successful.

if let accepted = response[ControllerConstants.accepted] as? Bool {
  if accepted {
    completion(true, nil)
    return
  }
  completion(false, ResponseMessages.ServerError)
  return
}

Here, after receiving a response from the server, we check whether the `accepted` variable is true. Based on that, we pass `true` or `false` to the completion handler. Below is the response we actually receive by making the request.

{
  "session": {
    "identity": {
      "type": "host",
      "name": "23.105.140.146",
      "anonymous": true
    }
  },
  "accepted": true,
  "message": "Skill ratings updated"
}

Finally, let’s update the UI after the request has been successful.

if sender == thumbUpIcon {
    thumbDownIcon.tintColor = UIColor(white: 0.1, alpha: 0.7)
    thumbUpIcon.isUserInteractionEnabled = false
    thumbDownIcon.isUserInteractionEnabled = true
    feedback = "positive"
} else {
    thumbUpIcon.tintColor = UIColor(white: 0.1, alpha: 0.7)
    thumbDownIcon.isUserInteractionEnabled = false
    thumbUpIcon.isUserInteractionEnabled = true
    feedback = "negative"
}
sender.tintColor = UIColor.hexStringToUIColor(hex: "#2196F3")

Here, we check the sender (the thumbs up or down button) and based on that pass the rating (positive or negative) and update the color of the button.



Using Mosquitto as a Message Broker for MQTT in loklak Server

In the loklak server, messages are collected from various sources and indexed using Elasticsearch. To know when a message of interest arrives, users can poll the search endpoint, but this requires a lot of HTTP requests, most of them redundant. Also, if a user would like to collect messages for a particular topic, he would need to make many requests over a period of time to gather enough data.

For GSoC 2017, my proposal was to introduce a stream API in the loklak server, so that we could save ourselves from making too many requests and also enable many new use cases.

Mosquitto is an Eclipse project which acts as a message broker for the popular MQTT protocol. MQTT, based on the publish-subscribe model, is a lightweight and IoT-friendly protocol. In this blog post, I will discuss the basic setup of Mosquitto in the loklak server.

Installation and Dependency for Mosquitto

The installation process of Mosquitto is very simple. On Ubuntu, it is available from the default package repositories –

sudo apt-get install mosquitto
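
Once the broker is up and running, we can do a quick sanity check with the command line clients from the mosquitto-clients package: subscribe to a test channel in one terminal and publish to it from another –

# terminal 1: subscribe to a test channel
mosquitto_sub -h 127.0.0.1 -t test

# terminal 2: publish a message to the same channel
mosquitto_pub -h 127.0.0.1 -t test -m "hello loklak"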

Application clients can then connect to the broker and publish or subscribe to channels. To add an MQTT client as a project dependency, we can introduce the following line in the Gradle dependencies file –

compile group: 'net.sf.xenqtt', name: 'xenqtt', version: '0.9.5'

[SOURCE]

After this, we can use the client libraries in the server code base.

The MQTTPublisher Class

The MQTTPublisher class in loklak provides an interface to perform basic MQTT operations. The implementation uses an AsyncClientListener to connect to the Mosquitto broker –

AsyncClientListener listener = new AsyncClientListener() {
    // Override methods according to needs
};

[SOURCE]

The publish method for the class can be used by other components of the project to publish messages on the desired channel –

public void publish(String channel, String message) {
    this.mqttClient.publish(new PublishMessage(channel, QoS.AT_LEAST_ONCE, message));
}

[SOURCE]

We also have methods which allow publishing multiple messages, and publishing to multiple channels, in order to increase the functionality of the class.
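
As an illustration, a multi-channel variant could be sketched like this (not the exact loklak implementation, but it follows the same xenqtt publish pattern as above):

// Sketch: publish the same message to several channels.
public void publish(String[] channels, String message) {
    for (String channel : channels) {
        this.mqttClient.publish(new PublishMessage(channel, QoS.AT_LEAST_ONCE, message));
    }
}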

Starting Publisher with Server

The flags which signal the use of the streaming service in loklak are located in conf/config.properties. These configurations are read while initializing the Data Access Object, and an MQTTPublisher is created if needed –

String mqttAddress = getConfig("stream.mqtt.address", "tcp://127.0.0.1:1883");
streamEnabled = getConfig("stream.enabled", false);
if (streamEnabled) {
    mqttPublisher = new MQTTPublisher(mqttAddress);
}

[SOURCE]

The mqttPublisher can now be used by other components of loklak to publish messages to the channel they want.
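
As a purely hypothetical usage example (the accessor and channel name here are made up for illustration):

// Sketch: publish a freshly indexed message, assuming the DAO exposes the publisher.
if (DAO.mqttPublisher != null) {
    DAO.mqttPublisher.publish("message/new", messageJson.toString());
}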

Adding Mosquitto to Kubernetes

Since loklak also has a nice Kubernetes setup, it was very simple to introduce a new deployment for Mosquitto.

Changes in Dockerfile

The Dockerfile for the master deployment has to be modified so that loklak discovers the Mosquitto broker in the Kubernetes cluster. For this purpose, the corresponding flags in config.properties are changed at image build time –

sed -i.bak 's/^\(stream.enabled\).*/\1=true/' conf/config.properties && \
sed -i.bak 's/^\(stream.mqtt.address\).*/\1=mosquitto.mqtt:1883/' conf/config.properties && \

[SOURCE]

The Mosquitto broker is available at mosquitto.mqtt:1883 because of the service that is created for it (explained in a later section).

Mosquitto Deployment

The Docker image used in the Kubernetes deployment of Mosquitto is taken from toke/docker-kubernetes. Two ports are exposed to the cluster, but no volumes are needed –

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mosquitto
  namespace: mqtt
spec:
  ...
  template:
    ...
    spec:
      containers:
      - name: mosquitto
        image: toke/mosquitto
        ports:
        - containerPort: 9001
        - containerPort: 8883

[SOURCE]

Exposing Mosquitto to the Cluster

Now that we have the deployment running, we need to expose the required ports to the cluster so that other components may use it. Port 9001 appears as port 80 for the service, and 1883 is also exposed –

apiVersion: v1
kind: Service
metadata:
  name: mosquitto
  namespace: mqtt
  ...
spec:
  ...
  ports:
  - name: mosquitto
    port: 1883
  - name: mosquitto-web
    port: 80
    targetPort: 9001

[SOURCE]

After creating the service using this configuration, we will be able to connect our clients to Mosquitto at address mosquitto.mqtt:1883.

Conclusion

In this blog post, I discussed the process of adding Mosquitto to the loklak server project. This is the first step towards introducing the stream API for messages collected in loklak.

These changes were introduced in pull requests loklak/loklak_server#1393 and loklak/loklak_server#1398 by @singhpratyush (me).
