Save Chat Messages using Realm in SUSI iOS

Fetching data from the server on every launch creates network load and makes the app dependent on the server and the network in order to display data. We use an offline database to store chat messages so that messages can be shown to the user even when the network is unavailable, which makes for a better user experience. Realm is used as the data storage solution because it is easy to use, fast, and efficient. The reasons for choosing Realm and the steps for saving messages received from the server in a local database in SUSI iOS are described below.

The major upsides of Realm are:

  • It is absolutely free of charge.
  • It is fast and easy to use.
  • There is no limit on usage.
  • It runs on its own persistence engine, built for speed and performance.

Below are the steps to install and use Realm in the iOS Client:

Installation:

  • Install CocoaPods
  • Run `pod repo update` in the root folder
  • In your Podfile, add `use_frameworks!` and `pod 'RealmSwift'` to your main and test targets.
  • From the command line run `pod install`
  • Use the `.xcworkspace` file generated by Cocoapods in the project folder alongside `.xcodeproj` file

After installation, we start by importing `Realm` in the `AppDelegate` file and configure Realm as below:

func initializeRealm() {
        var config = Realm.Configuration(schemaVersion: 1,
            migrationBlock: { _, oldSchemaVersion in
                if (oldSchemaVersion < 1) {
                    // Nothing to do!
                }
        })
        config.fileURL = config.fileURL?.deletingLastPathComponent().appendingPathComponent("susi.realm")
        Realm.Configuration.defaultConfiguration = config
}

Next, let's create a few models which will be used to save the data to the DB as well as to retrieve it so that it can be easily used. Since the SUSI server has a number of action types, we will cover some of them, their models, and how they are used to store and retrieve data. Below are the action types that the server currently supports.

enum ActionType: String {
  case answer
  case websearch
  case rss
  case table
  case map 
  case anchor
}

Let's start with the creation of the base model called `Message`. To make it a Realm object, we import `RealmSwift` and inherit from `Object`.

class Message: Object {
  dynamic var queryDate = NSDate()
  dynamic var answerDate = NSDate()
  dynamic var message: String = ""
  dynamic var fromUser = true
  dynamic var actionType = ActionType.answer.rawValue
  dynamic var answerData: AnswerAction?
  dynamic var mapData: MapAction?
  dynamic var anchorData: AnchorAction?
}

Let’s study these properties of the message one by one.

  • `queryDate`: saves the date-time the query was made
  • `answerDate`: saves the date-time the query response was received
  • `message`: stores the query/message that was sent to the server
  • `fromUser`: a boolean which keeps track of who created the message
  • `actionType`: stores the action type
  • `answerData`, `rssData`, `mapData`, `anchorData` are the data objects that actually store the respective action’s data

To initialize this object, we create a method that takes as input the data received from the server.

// saves query and answer date
if let queryDate = data[Client.ChatKeys.QueryDate] as? String,
  let answerDate = data[Client.ChatKeys.AnswerDate] as? String {
  message.queryDate = dateFormatter.date(from: queryDate)! as NSDate
  message.answerDate = dateFormatter.date(from: answerDate)! as NSDate
}

if let type = action[Client.ChatKeys.ResponseType] as? String,
  let data = answers[0][Client.ChatKeys.Data] as? [[String : AnyObject]] {
  if type == ActionType.answer.rawValue {
    message.message = action[Client.ChatKeys.Expression] as! String
    message.actionType = ActionType.answer.rawValue
    message.answerData = AnswerAction(action: action)
  } else if type == ActionType.map.rawValue {
    message.actionType = ActionType.map.rawValue
    message.mapData = MapAction(action: action)
  } else if type == ActionType.anchor.rawValue {
    message.actionType = ActionType.anchor.rawValue
    message.anchorData = AnchorAction(action: action)
    message.message = message.anchorData!.text
  }
}

Since the response from the server for a particular query may contain numerous action types, we loop over the actions inside a method and save each one of them. Because there are multiple action types, we keep a list containing all the messages created for those action types. For each action in the loop, the corresponding data is saved into its specific object.

Let’s discuss the individual action objects now.

1)   AnswerAction

class AnswerAction: Object {
  dynamic var expression: String = ""
  convenience init(action: [String : AnyObject]) {
    self.init()
    if let expression = action[Client.ChatKeys.Expression] as? String {
      self.expression = expression
    }
  }
}

This is the simplest action type implementation. It contains a single property, `expression`, which is of string type. To initialize it, we take the action object, extract the value for the expression key, and save it.

if type == ActionType.answer.rawValue {
  message.message = action[Client.ChatKeys.Expression] as! String
  message.actionType = ActionType.answer.rawValue
  // pass action object and save data in `answerData`
  message.answerData = AnswerAction(action: action)
}

Above is the way an answer action is checked and data saved inside the `answerData` variable.

2)   MapAction

class MapAction: Object {
  dynamic var latitude: Double = 0.0
  dynamic var longitude: Double = 0.0
  dynamic var zoom: Int = 13

  convenience init(action: [String : AnyObject]) {
    self.init()
    if let latitude = action[Client.ChatKeys.Latitude] as? String,
    let longitude = action[Client.ChatKeys.Longitude] as? String,
    let zoom = action[Client.ChatKeys.Zoom] as? String {
      self.longitude = Double(longitude)!
      self.latitude = Double(latitude)!
      self.zoom = Int(zoom)!
    }
  }
}

This action implementation contains three properties: `latitude`, `longitude` and `zoom`. Since the server returns the values as strings, each of them needs to be converted to its respective type using force-casting. Default values are provided for each property in case an illegal value comes from the server.

3)   AnchorAction

class AnchorAction: Object {
  dynamic var link: String = ""
  dynamic var text: String = ""

  convenience init(action: [String : AnyObject]) {
    self.init()
    if let link = action[Client.ChatKeys.Link] as? String,
    let text = action[Client.ChatKeys.Text] as? String {
      self.link = link
      self.text = text
    }
  }
}

Here, the link to the OpenStreetMap website is saved in order to retrieve the image for display.

Finally, we call the API, create the message object, and use the `write` block of a realm instance to save it into the DB.

if success {
  self.collectionView?.performBatchUpdates({
    for message in messages! {
      // realm write block
      try! self.realm.write {
        self.realm.add(message)
        self.messages.append(message)
        let indexPath = IndexPath(item: self.messages.count - 1, section: 0)
        self.collectionView?.insertItems(at: [indexPath])
      }
    }
  }, completion: { (_) in
    self.scrollToLast()
  })
}

Each message is appended to the list of message items and inserted into the collection view. Below is the output of the Realm Browser, which is a UI for viewing the database.


Adding Tumblr Upload Feature in Phimpme

The Phimpme Android application, along with various other cloud storage and social media upload features, provides an option to upload images to Tumblr without having to download any other application. In this post, I will explain how I integrated Tumblr in Phimpme, as there is no proper guide on the web on how to integrate Tumblr in Android. Tumblr provides an Android SDK, but there is no proper documentation for it, and it is not enough on its own to authenticate and upload images. After much research I came to a solution, so read this article to learn how to integrate Tumblr in Android.

Step 1:

First, add two dependencies to your project: one for the Android SDK of Tumblr (Jumblr) and one for Loglr, which helps you log in to Tumblr.

dependencies {
    compile 'com.daksh:loglr:1.2.1'
    compile 'com.tumblr:jumblr:0.0.11'
}

Step 2:

  1. Register your app on Tumblr to obtain the developer keys.
  2. Enter a callback URL; it is required to get the keys.
  3. Generate the CONSUMER_KEY & CONSUMER_SECRET from the official developer console of Tumblr.

Register your application

Step 3:

Now we use the Loglr library to log in to Tumblr. Tumblr doesn't provide any library for login, so I am using the Loglr library to log in to Tumblr. After a successful login, Loglr will return an API_KEY and API_SECRET; we will use these keys later to upload the image. Save these keys as constant variables.

public final static String TUMBLR_CONSUMER_KEY = "ENTER-CONSUMER-KEY";
public final static String TUMBLR_CONSUMER_SECRET = "ENTER-CONSUMER-SECRET";

Step 4:

To authenticate with Tumblr, use the Loglr login instance as follows.

Loglr.getInstance()
     .setConsumerKey(Constants.TUMBLR_CONSUMER_KEY)
     .setConsumerSecretKey(Constants.TUMBLR_CONSUMER_SECRET)
     .setLoginListener(loginListener)
     .setExceptionHandler(exceptionHandler)
     .enable2FA(true)
     .setUrlCallBack(Constants.CALL_BACK_TUMBLR)
     .initiateInActivity(AccountActivity.this);

After that, you will be prompted to enter your Tumblr credentials to authenticate the Phimpme Android app. Once you have done so, it will return the OAuth token and secret, which we now save in the database.

account.setToken(loginResult.getOAuthToken());
account.setSecret(loginResult.getOAuthTokenSecret());

Step 5:

Once the authentication is done, we can upload an image directly to Tumblr from the Share activity in the Phimpme Android application. To upload an image, create an AsyncTask so that the uploading process runs on a background thread and does not block the main UI thread. Keep in mind that Tumblr requires four values to create the Tumblr client: CONSUMER_KEY, CONSUMER_SECRET, API_KEY and API_SECRET. Once the client is created, we are ready to get data from Tumblr and upload an image. Before uploading an image we need the blog name, because a user can have multiple blogs on Tumblr, so we either ask the user to choose a blog name from a list or provide a dialog to enter it manually. A sketch of the client creation is shown below, followed by the upload code that goes in the doInBackground() method of the AsyncTask.
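A minimal sketch of constructing the client from the four values with the Jumblr API might look like the following; the helper method name and the `account` getters used to read the stored OAuth token and secret are assumptions, the actual field names in Phimpme may differ.

// JumblrClient and User come from the Jumblr SDK (com.tumblr.jumblr)
private JumblrClient createTumblrClient() {
    // consumer keys were saved as constants in step 3
    JumblrClient client = new JumblrClient(
            Constants.TUMBLR_CONSUMER_KEY,
            Constants.TUMBLR_CONSUMER_SECRET);
    // OAuth token and secret were stored in the database after the Loglr login (assumed getters)
    client.setToken(account.getToken(), account.getSecret());
    return client;
}

With the client constructed (and the authenticated user obtained from it), the upload code for doInBackground() follows: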

PhotoPost post = null;
try {
    post = client.newPost(user.getBlogs().get(0).getName(), PhotoPost.class);
    if (caption != null && !caption.isEmpty())
        post.setCaption(caption);
    post.setData(new File(imagePath));
    post.save();
} catch (IllegalAccessException | InstantiationException e) {
    success = false;
}

If the success variable is true, it means our image was uploaded successfully. This is how I implemented the upload feature to Tumblr using two different libraries. To get the full source code, please refer to the Phimpme Android repository.


Uploading Images to Box Storage from the Phimpme Application

The Phimpme Android application, which integrates many cloud storage and social media services like Dropbox, Imgur and Pinterest, also has the option to upload images to Box storage without having to install any other application on the device. From the Phimpme app, the user can click a photo, edit it, view any image from the gallery, and then upload multiple images to many storage services or social media without any hassle. In this tutorial, I will discuss how we achieved the functionality to upload images to Box storage.

Step 1:

To integrate Box storage to the application, the first thing we need to do is create an application from the Box developers console and get the CLIENT_ID and the CLIENT_SECRET. To do this:

  1. Go to the Box developer page and log in.
  2. Create a new application from the app console.
  3. It will now ask what kind of application you are building. Select the option partner integration so that the image upload functionality is available to all users.
  4. Give a name to your application and finally click the create application button.
  5. After this, you will be taken to a screen similar to the screenshot below, from where you can copy the CLIENT_ID and CLIENT_SECRET to be used later on.

Step 2:

Now coming to the Android part: to get the image upload functionality for Box storage, we make use of the Android SDK provided by Box. To add the SDK to the Android project via Gradle, copy the following lines of Gradle script, which will download and install the SDK from the Maven repository.

// in the repositories block
maven { url "https://mvnrepository.com/artifact/com.box/box-android-sdk" }
// in the dependencies block
compile group: 'com.box', name: 'box-android-sdk', version: '4.0.8'

Now after adding the SDK, rebuild the project.

Step 3:

After this, to upload files to Box storage, we need to log in to the Box account to get the access token and the user name, which are stored in the Realm database. For this, we first have to configure the Box client, which can be done using the following lines of code.

BoxConfig.CLIENT_ID = BOX_CLIENT_ID;
BoxConfig.CLIENT_SECRET = BOX_CLIENT_SECRET;

Replace the BOX_CLIENT_ID and BOX_CLIENT_SECRET in the above code with the keys received by following step 1.

After configuring the Box API client, we can authenticate the user using the sessionBox object with the following lines of code.

sessionBox = new BoxSession(AccountActivity.this);
sessionBox.authenticate();

After the authentication, we have to get the user name and the access token of the user using the authenticated session box.

account.setUsername(sessionBox.getUser().getName());
account.setToken(String.valueOf(accessToken));

After authenticating the user, we can upload photos directly to Box storage from the Share Activity in the Phimpme Android application. For this, we create a new asynchronous task in Android, which performs the network operations in the background to avoid a NetworkOnMainThreadException. After this, we make use of the Box file API to upload photos to Box storage by calling the function getUploadRequest, which takes three parameters: the file input stream, the upload name of the file, and the id of the destination folder. This can be done with the following lines of code.

mFileApi = new BoxApiFile(sessionBox);
BoxRequestsFile.UploadFile request = mFileApi.getUploadRequest(inputStream, uploadName, destinationFolderId);

This upload request throws a BoxException, so we have to catch that exception in a try/catch block to avoid crashing the application in case the upload request fails.
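A minimal sketch of sending the request inside the AsyncTask's doInBackground() is given below; only request.send() and BoxException come from the Box SDK, while the logging tag and how the result is handled are assumptions.

try {
    // send() executes the request synchronously on the current background thread
    // and returns the metadata of the uploaded file (BoxFile is from the Box SDK models)
    BoxFile uploadedFile = request.send();
    Log.d(LOG_TAG, "Uploaded file with id: " + uploadedFile.getId());
} catch (BoxException e) {
    // the upload failed, e.g. because of a network error or missing permissions
    Log.e(LOG_TAG, "Box upload failed", e);
}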

This is how we have implemented the upload to Box storage functionality in the Phimpme Android application. To get the full source code, please refer to the Phimpme Android repository.

Resources

  1. Box Developers app console : https://app.box.com/developers/console/
  2. Box Android SDK GitHub page : https://github.com/box/box-android-sdk
  3. Android's developer page on NetworkOnMainThreadException : https://developer.android.com/reference/android/os/NetworkOnMainThreadException.html

Posting Scraped Tweets to Loklak server from Loklak Wok Android

Loklak Wok Android is a peer harvester that posts collected messages to the Loklak Server. The suggestions for tweet searches are fetched using the suggest API endpoint. Using the suggestion queries, tweets are scraped. The scraped tweets are shown in a RecyclerView and are simultaneously posted to the Loklak server using the push API endpoint. Let's see how this is implemented.

Adding Dependencies to the project

This feature heavily uses Retrofit2, Reactive extensions (RxJava2, RxAndroid and the Retrofit RxJava adapter) and Retrolambda (for Java lambda support in Android).

In app/build.gradle:

apply plugin: 'com.android.application'
apply plugin: 'me.tatarka.retrolambda'

android {
   ...
   packagingOptions {
       exclude 'META-INF/rxjava.properties'
   }
}

dependencies {
   ...
   compile 'com.google.code.gson:gson:2.8.1'

   compile 'com.squareup.retrofit2:retrofit:2.3.0'
   compile 'com.squareup.retrofit2:converter-gson:2.3.0'
   compile 'com.squareup.retrofit2:adapter-rxjava2:2.3.0'

   compile 'io.reactivex.rxjava2:rxjava:2.0.5'
   compile 'io.reactivex.rxjava2:rxandroid:2.0.1'
}

 

In build.gradle project level:

dependencies {
   classpath 'com.android.tools.build:gradle:2.3.3'
   classpath 'me.tatarka:gradle-retrolambda:3.2.0'
}

 

Implementation

The suggest and push API endpoint is defined in LoklakApi interface

public interface LoklakApi {

   @GET("/api/suggest.json")
   Observable<SuggestData> getSuggestions(@Query("q") String query, @Query("count") int count);

   @POST("/api/push.json")
   @FormUrlEncoded
   Observable<Push> pushTweetsToLoklak(@Field("data") String data);
}

 

The POJOs (Plain Old Java Objects) for suggestions and for posting tweets are generated using jsonschema2pojo; Gson uses these POJOs to convert JSON to Java objects.
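For illustration, a trimmed-down sketch of what such a generated POJO could look like is shown below; only the queries field that is used later in fetchSuggestions is included, the exact field list in the project may differ, and Query is the other generated model for a single suggestion.

import com.google.gson.annotations.SerializedName;

import java.util.ArrayList;
import java.util.List;

public class SuggestData {

    // maps the "queries" array of the suggest API response
    @SerializedName("queries")
    private List<Query> queries = new ArrayList<>();

    public List<Query> getQueries() {
        return queries;
    }
}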

The REST client is created by Retrofit2 and is implemented in the RestClient class. The Gson converter and the RxJava adapter for Retrofit are added in the Retrofit builder. The create method is called to generate the API methods (Retrofit implements the LoklakApi interface).

public class RestClient {

    // BASE_URL ("https://api.loklak.org/api/") and the Gson instance are static fields of the class
    private static Retrofit sRetrofit;

    private RestClient() {
    }

   private static void createRestClient() {
       sRetrofit = new Retrofit.Builder()
               .baseUrl(BASE_URL)
               // gson converter
               .addConverterFactory(GsonConverterFactory.create(gson))
               // retrofit adapter for rxjava
               .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
               .build();
   }

   private static Retrofit getRetrofitInstance() {
       if (sRetrofit == null) {
           createRestClient();
       }
       return sRetrofit;
   }

   public static <T> T createApi(Class<T> apiInterface) {
       // create method to generate API methods
       return getRetrofitInstance().create(apiInterface);
   }

}

 

The suggestions are fetched by calling getSuggestions after the LoklakApi interface is implemented. getSuggestions returns an Observable of type SuggestData, which contains the suggestions in a List. For scraping tweets only a single query needs to be passed to LiquidCore, so flatMap is used to transform the observable and then the fromIterable operator is used to emit single queries as strings to LiquidCore, which then scrapes tweets, as implemented in fetchSuggestions:

private Observable<String> fetchSuggestions() {
   LoklakApi loklakApi = RestClient.createApi(LoklakApi.class);
   Observable<SuggestData> observable = loklakApi.getSuggestions("", 2);
   return observable.flatMap(suggestData -> {
       List<Query> queryList = suggestData.getQueries();
       List<String> queries = new ArrayList<>();
       for (Query query : queryList) {
           queries.add(query.getQuery());
       }
       return Observable.fromIterable(queries);
   });
}

 

As LiquidCore uses callbacks to create a connection between the NodeJS instance and Android, a custom observable is created using the create operator to maintain a flow of observables; it encapsulates the callbacks inside it. For a detailed understanding of how LiquidCore event handling works, please go through the example. This is how it is implemented in getScrapedTweets:

private Observable<ScrapedData> getScrapedTweets(final String query) {
   final String LC_TWITTER_URI = "android.resource://org.loklak.android.wok/raw/twitter";
   URI uri = URI.create(LC_TWITTER_URI);

   return Observable.create(emitter -> { // custom observable creation
       EventListener startEventListener = (service, event, payload) -> {
            service.emit(LC_QUERY_EVENT, query);
           service.emit(LC_FETCH_TWEETS_EVENT);
       };

       EventListener getTweetsEventListener = (service, event, payload) -> {
           ScrapedData scrapedData = mGson.fromJson(payload.toString(), ScrapedData.class);
           emitter.onNext(scrapedData); // data emitted using ObservableEmitter
       };

       MicroService.ServiceStartListener serviceStartListener = (service -> {
           service.addEventListener(LC_START_EVENT, startEventListener);
           service.addEventListener(LC_GET_TWEETS_EVENT, getTweetsEventListener);
       });

       MicroService microService = new MicroService(getActivity(), uri, serviceStartListener);
       microService.start();
   });
}

 

Now that we are getting suggestions and using them to get scraped tweets, this needs to be done periodically so that tweets are pushed continuously to the Loklak server. For this, the interval operator is used. A list is maintained which contains the suggestion queries for which tweets are to be scraped. Once scraping is done and the tweets are displayed in the RecyclerView, the suggestion query is removed from the list, and a new set of suggestions is fetched only if the list is empty.

Observable.interval(4, TimeUnit.SECONDS)
       .flatMap(this::getSuggestionsPeriodically)
       .flatMap(query -> {
           mSuggestionQuerries.add(query); // query added to list
           return getScrapedTweets(query);
       });

 

A method reference is used to maintain modularity, so the logic of periodically fetching suggestions is implemented in getSuggestionsPeriodically:

private Observable<String> getSuggestionsPeriodically(Long time) {
   if (mSuggestionQuerries.isEmpty()) { // checks if list is empty
       mInnerCounter = 0;
       return fetchSuggestions(); // new suggestions
   } else { // wait for a previous request to complete
       mInnerCounter++;
       if (mInnerCounter > 3) { // if some strange error occurs
           mSuggestionQuerries.clear();
       }
        return Observable.never(); // no observable is passed to the subscriber, so the subscriber waits
   }
}

 

Now, it's time to display the fetched tweets and then push them to the Loklak server. When periodic fetching of suggestions was implemented, we used the interval operator and then flatMap to transform observables, i.e. to chain network requests.

Till this point the observables we were creating were cold observables. Cold observables only emit values when a subscription is made. We need to display the scraped tweets and then push them, i.e. there is one source of observables and two (multiple) subscribers, so by intuition the observable should be subscribed to two times, for example:

Observable observable = Observable.interval(4, TimeUnit.SECONDS)
       .flatMap(this::getSuggestionsPeriodically)
       .flatMap(query -> {
           mSuggestionQuerries.add(query);
           return getScrapedTweets(query);
       });

// first time subscription
observable
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
            // display in RecyclerView
        );

// second time subscription
observable
        .flatMap(/* transformations to push data to server */)
        .subscribeOn(Schedulers.io())
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(
            // display the number of tweets pushed
        );

 

But the source observable is a cold observable, i.e. it emits objects only when it is subscribed to, due to which there would be two different network calls, one for the first subscription and one for the second. So, the two subscriptions would receive different data, which is not what is desired. The expected result is a single network call whose data is both displayed and pushed to the Loklak server.

For this, hot Observables are used. Hot observables start emitting objects the moment they are created, irrespective of whether they are subscribed or not.

A cold observable can be converted to a hot observable by using the publish operator, and it starts emitting objects when the connect operator is invoked. This is implemented in displayAndPostScrapedData:

private void displayAndPostScrapedData() {
    ConnectableObservable<ScrapedData> observable = Observable.interval(4, TimeUnit.SECONDS)
            .flatMap(this::getSuggestionsPeriodically)
            .flatMap(query -> {
                mSuggestionQuerries.add(query);
                return getScrapedTweets(query);
            })
            .retry(2)
            .publish();

   // first time subscription to display scraped data
   Disposable viewDisposable = observable
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(
                   this::displayScrapedData,
                   this::setNetworkErrorView
           );
   mCompositeDisposable.add(viewDisposable);
  
   // second time subscription for pushing data to loklak
   Disposable pushDisposable = observable
           .flatMap(this::pushScrapedData) // scraped data transformed for pushing
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(
                   push -> {
                       mHarvestedTweets += push.getRecords();
                       harvestedTweetsCountTextView.setText(String.valueOf(mHarvestedTweets));
                   },
                   throwable -> {}
           );
   mCompositeDisposable.add(pushDisposable);

   Disposable publishDisposable = observable.connect(); // hot observable starts emitting
   mCompositeDisposable.add(publishDisposable);
}

 

The two subscriptions are made before the connect operator is invoked because the hot observable emits objects as a result of network calls, and network calls can't be done on the MainThread (UI thread). So, subscribing beforehand channels the network calls to a background thread.

The scraped data objects are converted to JSON using Gson, the JSON is converted to a string, and then it is posted to the Loklak server using the push API endpoint. This is implemented in the pushScrapedData method, which is used in the second subscription through a method reference.

private Observable<Push> pushScrapedData(ScrapedData scrapedData) throws Exception {
    LoklakApi loklakApi = RestClient.createApi(LoklakApi.class);
    List<Status> statuses = scrapedData.getStatuses();
    String data = mGson.toJson(statuses);
    JSONArray jsonArray = new JSONArray(data);
    JSONObject jsonObject = new JSONObject();
    jsonObject.put("statuses", jsonArray);
    return loklakApi.pushTweetsToLoklak(jsonObject.toString());
}

 

Method reference for displayScrapedData and setNetworkErrorView methods are used to display the scraped data and handle unsuccessful network requests.

Only 80 tweets are preserved in the RecyclerView; if the number of tweets exceeds 80, the old tweets are removed.

private void displayScrapedData(ScrapedData scrapedData) {
   String query = scrapedData.getQuery();
   List<Status> statuses = scrapedData.getStatuses();
   mSuggestionQuerries.remove(query);
   if (mHarvestedTweetAdapter.getItemCount() > 80) {
       mHarvestedTweetAdapter.clearAdapter(); // old tweets removed
   }
   mHarvestedTweetAdapter.addHarvestedTweets(statuses);
   int count = mHarvestedTweetAdapter.getItemCount() - 1;
   recyclerView.scrollToPosition(count);
}

 

In case of a network error, the visibility of the RecyclerView and the TextView (which shows the number of tweets pushed) is changed to GONE and a message is displayed saying that there is a network error.

private void setNetworkErrorView(Throwable throwable) {
   Log.e(LOG_TAG, throwable.toString());
   // recyclerView and TextView showing count of harvested tweets are hidden
   ButterKnife.apply(networkViews, GONE);
   // network error message displayed
   networkErrorTextView.setVisibility(View.VISIBLE);
}


Realm database in Loklak Wok Android for Persistent view

Loklak Wok Android provides suggestions for tweet searches. The suggestions are stored in a local database to provide a persistent view, resulting in a better user experience. The local database used here is the Realm database instead of sqlite3, which is what the Android SDK supports. The proper way to use an sqlite3 database is to first create a contract where the schema of the database is defined, then a database helper class which extends SQLiteOpenHelper and creates the schema, i.e. the tables, and finally to write a ContentProvider so that you don't have to write long SQL queries every time a database operation needs to be performed. This is a lot of hard work, as it involves many steps, and debugging is also difficult. A solution can be to use an ORM that provides a simple API on top of sqlite3, but the currently available ORMs lack in performance; they are too slow. A reliable solution to this problem is the Realm database, which is faster than raw sqlite3 and has a really simple API for database operations. This blog explains the use of the Realm database for storing tweet search suggestions.

Adding Realm database to Android project

In project level build.gradle

buildscript {
   repositories {
       jcenter()
   }
   dependencies {
       classpath 'com.android.tools.build:gradle:2.3.3'
       classpath "io.realm:realm-gradle-plugin:3.3.1"

       // NOTE: Do not place your application dependencies here; they belong
       // in the individual module build.gradle files
   }
}

 

And at the top of app/build.gradle, `apply plugin: 'realm-android'` is added.

Using Realm Database

Let's start with a simple example. We have a Student class that has only two attributes, name and age. To create the model for the database, the Student class simply extends RealmObject.

public class Student extends RealmObject {

   private String name;
   private int age;

   // A constructor needs to be explicitly defined, be it an empty constructor
   public Student(String name, int age) {
       this.name = name;
       this.age = age;
   }

   // getters and setters
}

 

To push data to the database, Java objects are created, a transaction is initialized, then copyToRealm method is used to push the data and finally the transaction is committed. But before all this, the database is initialized and a Realm instance is obtained.

Realm.init(context); // Database initialized
Realm realm = Realm.getDefaultInstance(); // realm instance obtained
      
Student student = new Student("Rahul Dravid", 22); // Simple java object created
realm.beginTransaction(); // initialization of transaction
realm.copyToRealm(student); // pushed to database
realm.commitTransaction(); // transaction committed

 

copyToRealm takes only a single parameter; the parameter can be an object or an Iterable. Of course, the passed parameter should extend RealmObject. A List of Student objects can be passed to copyToRealm to push multiple entries into the database, as shown below.
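For example, a whole list can be inserted in one transaction; a small illustrative snippet:

List<Student> students = new ArrayList<>();
students.add(new Student("Rahul Dravid", 22));
students.add(new Student("Anil Kumble", 23));

realm.beginTransaction();
realm.copyToRealm(students); // the Iterable overload inserts all objects at once
realm.commitTransaction();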

The above way of inserting data is synchronous. Realm also supports asynchronous transactions; you guessed it right, you don't have to depend on AsyncTaskLoader. The same operation can be performed asynchronously as:

realm.executeTransactionAsync(new Realm.Transaction() { // executed on a background thread
    @Override
    public void execute(Realm realm) {
        Student student = new Student("Rahul Dravid", 22);
        realm.copyToRealm(student);
    }
});

 

Now, querying the database is as easy as inserting.

RealmResults<Student> studentList = realm.where(Student.class).findAll();

 

No transaction is required, as we are not manipulating the database; data is only read from it. RealmResults extends Java's List, so List methods which don't manipulate the list can be used on studentList, e.g. get(int index) to obtain the object at an index.
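For instance, a small illustrative snippet (assuming the usual getters on the Student model):

Student firstStudent = studentList.get(0); // object at index 0
String name = firstStudent.getName();      // regular getter defined on the model
int numberOfStudents = studentList.size(); // number of results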

The result can also be a filtered one, for example filtering students who are 22 years old.

RealmResults<Student> studentList = realm.where(Student.class).equalTo("age", 22).findAll();

 

Now, removing data from the database: the deleteAllFromRealm method can be executed on the obtained RealmResults, or, to completely remove the data of a model class, the delete(Model.class) method is invoked on the realm instance. The operations should be enclosed between beginTransaction and commitTransaction if synchronous behaviour is required; for asynchronous behaviour, the operation is done in the execute method of an anonymous Realm.Transaction object.

studentList.deleteAllFromRealm(); // removes the filtered result
realm.delete(Student.class); // removes all data of model class

 

Storing Tweet Search Suggestions in Loklak Wok Android for Persistent view

Loklak Wok Android uses Retrofit2 for sending network requests, for which POJO classes are already created so that the obtained JSON can be parsed easily. Due to this, using the Realm database becomes even easier, as the defined POJOs can simply extend RealmObject to create the model class of the data, e.g. the Query class extends RealmObject and one of its attributes is the suggestion query, i.e. mQuery.

The database is initialized in LoklakWokApplication, the application class; this way the database is initialized only once and persists throughout the app lifecycle.

@Override
public void onCreate() {
   super.onCreate();
   Realm.init(this);
   RealmConfiguration realmConfiguration = new RealmConfiguration.Builder()
           .name(Realm.DEFAULT_REALM_NAME)
           .deleteRealmIfMigrationNeeded()
           .build();
   Realm.setDefaultConfiguration(realmConfiguration);
}

 

deleteRealmIfMigrationNeeded removes the old realm database and creates a new one if any of the model classes change, i.e. an attribute is removed, added or simply renamed. This is done because we are not storing user generated data; we are just using the database to provide a persistent view, so the previously kept data is not important.

The database is closed in the onTerminate callback of the application:

@Override
public void onTerminate() {
   Realm.getDefaultInstance().close();
   super.onTerminate();
}

 

Now that the database is initialized, we fetch the previously stored data and display it in the RecyclerView. If the network request is successful, the queries from the database are replaced by the queries fetched in the network request; otherwise the queries from the database are displayed, providing a persistent view. This is how it is implemented in onCreateView of SuggestFragment:

mRealm = Realm.getDefaultInstance();
...
// old queries obtained from database
RealmResults<Query> queryRealmResults = mRealm.where(Query.class).findAll();
List<Query> queries = mRealm.copyFromRealm(queryRealmResults);
// RecyclerView adapter created with old queries
mSuggestAdapter = new SuggestAdapter(queries, this);
tweetSearchSuggestions.setLayoutManager(new LinearLayoutManager(getActivity()));
tweetSearchSuggestions.setAdapter(mSuggestAdapter);

 

The old queries are replaced in onSuccessfulRequest:

private void onSuccessfulRequest(SuggestData suggestData) {
   // suggestData contains suggestion queries
   if (suggestData != null) {
       // old queries replaced with new ones
       mSuggestAdapter.setQueries(suggestData.getQueries());
   }
   setAfterRefreshingState();
}

 

Now the suggestion queries need to be inserted into the database. Only the latest suggestions are inserted, i.e. the queries present when the onStop lifecycle method of the fragment is called; as the previous queries are not needed anymore, they are deleted. The operation is performed synchronously.

@Override
public void onStop() {
   ...
   mRealm.beginTransaction();
   // old queries deleted
   mRealm.delete(Query.class);
   // new queries inserted
   mRealm.copyToRealm(mSuggestAdapter.getQueries());
   mRealm.commitTransaction();
   // fragment lifecycle called i.e. a new fragment/activity opens
   super.onStop();
}

 

Conclusion: Sqlite3 and Realm comparison

Operations in sqlite3 vs. Realm:

  • Table creation: CREATE TABLE … vs. extending RealmObject
  • Inserting data: INSERT INTO … vs. copyToRealm
  • Searching data: SELECT … vs. realm.where(Model.class)
  • Deleting data: DELETE FROM … vs. realmResults.deleteAllFromRealm() or realm.delete(Model.class)


Implementing Tweet Search Suggestions in Loklak Wok Android

Loklak Wok Android is not only a peer harvester for the Loklak Server, it also lets users search tweets using Loklak's API endpoints. To provide a better tweet search experience to the users, the app provides search suggestions using the suggest API endpoint. This blog describes how "Search Suggestions" is implemented.

Third Party Libraries used to Implement Suggestion Feature

  • Retrofit2: Used for sending network requests.
  • Gson: Used for serialization, JSON to POJOs (Plain old java objects).
  • RxJava and RxAndroid: Used to implement a clean asynchronous workflow.
  • Retrolambda: Provides support for lambdas in Android.

These libraries can be installed by adding the following dependencies in app/build.gradle

android {
   ...
   // removes rxjava file repetitions
   packagingOptions {
      exclude 'META-INF/rxjava.properties'
   }
}

dependencies {
   // gson and retrofit2
    compile 'com.google.code.gson:gson:2.8.1'
    compile 'com.squareup.retrofit2:retrofit:2.3.0'
    compile 'com.squareup.retrofit2:converter-gson:2.3.0'
    compile 'com.squareup.retrofit2:adapter-rxjava2:2.3.0'

   // rxjava and rxandroid
    compile 'io.reactivex.rxjava2:rxjava:2.0.5'
    compile 'io.reactivex.rxjava2:rxandroid:2.0.1'
    compile 'com.jakewharton.rxbinding2:rxbinding:2.0.0'
}

 

To add retrolambda

// in project's build.gradle
dependencies {
    
    classpath 'me.tatarka:gradle-retrolambda:3.2.0'
}

// in app level build.gradle at the top
apply plugin: 'me.tatarka.retrolambda'

 

Fetching Suggestions

Retrofit2 sends a GET request to the suggest API endpoint, and the returned JSON response is serialized to Java objects using the models defined in the models.suggest package. The models can be easily generated using jsonschema2pojo. The benefit of using Gson is that the hard work of parsing JSON is handled for us. The static method createRestClient creates the Retrofit instance to be used for network calls:

private static void createRestClient() {
   sRetrofit = new Retrofit.Builder()
           .baseUrl(BASE_URL) // base url : https://api.loklak.org/api/
           .addConverterFactory(GsonConverterFactory.create(gson))
           .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
           .build();
}

 

The suggest endpoint is defined in LoklakApi interface

public interface LoklakApi {

   @GET("/api/suggest.json")
   Observable<SuggestData> getSuggestions(@Query("q") String query);

   @GET("/api/suggest.json")
   Observable<SuggestData> getSuggestions(@Query("q") String query, @Query("count") int count);

   .
}

 

Now, the suggestions are obtained using the fetchSuggestion method. First, it creates the REST client to send network requests using the createApi method (which internally calls the createRestClient method implemented above). The suggestion query is obtained from the EditText. Then the RxJava Observable is subscribed on a separate thread specially meant for IO operations, and finally the obtained data is observed, i.e. the views are updated, on the main UI thread.

private void fetchSuggestion() {
   LoklakApi loklakApi = RestClient.createApi(LoklakApi.class); // rest client created
   String query = tweetSearchEditText.getText().toString(); // suggestion query from EditText
   Observable<SuggestData> suggestionObservable = loklakApi.getSuggestions(query); // observable created
   Disposable disposable = suggestionObservable
           .subscribeOn(Schedulers.io()) // subscribed on IO thread
           .observeOn(AndroidSchedulers.mainThread()) // observed on MainUI thread
           .subscribe(this::onSuccessfulRequest, this::onFailedRequest); // views are manipulated accordingly
   mCompositeDisposable.add(disposable);
}

 

If the network request is successful onSuccessfulRequest method is called which updates the data in the RecyclerView.

private void onSuccessfulRequest(SuggestData suggestData) {
   if (suggestData != null) {
       mSuggestAdapter.setQueries(suggestData.getQueries()); // data updated.
   }
   setAfterRefreshingState();
}

 

If the network request fails, then onFailedRequest is called, which displays a toast saying "Cannot fetch suggestions, Try Again!". If requests are sent simultaneously and they fail, the previous message, i.e. the previous toast, is removed.

private void onFailedRequest(Throwable throwable) {
   Log.e(LOG_TAG, throwable.toString());
   if (mToast != null) { // checks if a previous toast is present
       mToast.cancel(); // removes the previous toast.
   }
   setAfterRefreshingState();
   // value of networkRequestError: "Cannot fetch suggestions, Try Again!"
   mToast = Toast.makeText(getActivity(), networkRequestError, Toast.LENGTH_SHORT); // toast is created
   mToast.show(); // toast is displayed
}

 

Lively Updating suggestions

One way to update suggestions as the user types is to send a GET request with a query parameter to the suggest API endpoint on every change, cancelling any previous request that is still incomplete. This involves a lot of IO work and seems unnecessary, because we would be sending requests even if the user hasn't finished typing what he/she wants to search. One probable solution is to use a Timer.

private TextWatcher searchTextWatcher = new TextWatcher() {  
    @Override
    public void afterTextChanged(Editable arg0) {
        // user typed: start the timer
        timer = new Timer();
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                String query = editText.getText().toString();
                // send network request to get suggestions
            }
        }, 600); // 600ms delay before the timer executes the "run" method from TimerTask
    }

    @Override
    public void beforeTextChanged(CharSequence s, int start, int count, int after) {}

    @Override
    public void onTextChanged(CharSequence s, int start, int before, int count) {
        // user is typing: reset already started timer (if existing)
        if (timer != null) {
            timer.cancel();
        }
    }
};

 

This one looks good and eventually gets the job done. But don't you think that for simple stuff like this we are writing too much code, which is scary at first sight?

Let's try a second approach by using RxBinding for Android views and RxJava operators. RxBinding simply converts every event on a view into an Observable which can be subscribed to and worked upon.

In place of a TextWatcher, here we use the RxTextView.textChanges method, which takes an EditText as a parameter. textChanges creates an Observable of character sequences for text changes on the view, here the EditText. Then the debounce operator of RxJava is used, which drops emissions until the timeout expires, a clean alternative to the Timer. The second approach is implemented in updateSuggestions.

private void updateSuggestions() {
   Disposable disposable = RxTextView.textChanges(tweetSearchEditText) // generating observables for text changes
           .debounce(400, TimeUnit.MILLISECONDS) // dropping emissions i.e. text changes within 400 ms
           .subscribe(charSequence -> {
               if (charSequence.length() > 0) {
                   fetchSuggestion(); // suggestions obtained
               }
           });
   mCompositeDisposable.add(disposable);
}

 

CompositeDisposable is a bucket that contains Disposable objects, which are returned each time an Observable is subscribed to. So, all the Disposable objects are collected in a CompositeDisposable and disposed when onStop of the fragment is called, to avoid memory leaks.
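A minimal sketch of that cleanup, assuming the CompositeDisposable is not reused afterwards (otherwise clear() would be used instead of dispose()):

@Override
public void onStop() {
    // disposes every subscription collected above so no callbacks fire
    // after the fragment is stopped
    mCompositeDisposable.dispose();
    super.onStop();
}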

A SwipeRefreshLayout is used so that the user can retry if the request fails, or refresh to get new suggestions. While refreshing is not complete, a circular progress indicator is shown and the RecyclerView showing the old suggestions is made invisible, as executed by the setBeforeRefreshingState method.

private void setBeforeRefreshingState() {
   refreshSuggestions.setRefreshing(true);
   tweetSearchSuggestions.setVisibility(View.INVISIBLE);
}

 

Similarly, once refreshing is done, the progress indicator is stopped and the visibility of the RecyclerView, which now contains the updated suggestions, is changed to VISIBLE. This is executed in the setAfterRefreshingState method.

private void setAfterRefreshingState() {
   refreshSuggestions.setRefreshing(false);
   tweetSearchSuggestions.setVisibility(View.VISIBLE);
}

 


Implementing JSON API for ‘settings/contact-info’ route in Open Event Frontend

In Open Event Frontend, under the settings route, there is a 'contact-info' route which allows the user to change his or her info (email and contact). Previously, to achieve this, we were using a mock response from the server. But since we now have the JSON API, we can integrate and use it so as to let the user modify his/her email and contact info. In the following section, I will explain how it is built.

The first thing to do is to create the model for the user so as to have a skeleton of the database and include our required fields in it. The user model looks like:

export default ModelBase.extend({
  email        : attr('string'),
  password     : attr('string'),
  isVerified   : attr('boolean', { readOnly: true }),
  isSuperAdmin : attr('boolean', { readOnly: true }),
  isAdmin      : attr('boolean', { readOnly: true }),

  firstName : attr('string'),
  lastName  : attr('string'),
  details   : attr('string'),
  contact   : attr('string'),
});

Above is the user's model; however, only the related fields are included here (there are more fields in the user's model). The above code shows that email and contact are two attributes which accept 'string' values as inputs. We have a contact-info form located at the 'settings/contact-info' route. It has two input fields as follows:

<form class="ui form {{if isLoading 'loading'}}" {{action 'submit' on='submit'}} novalidate>
  <div class="field">
    <label>{{t 'Email'}}</label>
    {{input type='email' name='email' value=data.email}}
  </div>
  <div class="field">
    <label>{{t 'Phone'}}</label>
    {{input type='text' name='phone' value=data.contact}}
  </div>
  <button class="ui teal button" type="submit">{{t 'Save'}}</button>
</form>

The form has a submit button which triggers the submit action. We redirect the submit action from the component to the controller so as to maintain 'data down, actions up'. That code is not relevant here, hence not shown. Following is the action used to update the user, which we handle in the contact-info.js controller.

updateContactInfo() {
      this.set('isLoading', true);
      let currentUser = this.get('model');
      currentUser.save({
        adapterOptions: {
          updateMode: 'contact-info'
        }
      })
        .then(user => {
          this.set('isLoading', false);
          let userData = user.serialize(false).data.attributes;
          userData.id = user.get('id');
          this.get('authManager').set('currentUserModel', user);
          this.get('session').set('data.currentUserFallback', userData);
          this.get('notify', 'Updated information successfully');
        })
        .catch(() => {
        });
    }

We return the current user's model from the route's model method and store it in the 'currentUser' variable. Since we have data-bound Ember inputs in our contact-info form, the values are grabbed automatically once the form submits. Thus, we can call the 'save' method on the 'currentUser' model and pass an object called 'adapterOptions' which has the key 'updateMode'. We send this key to the 'user' serializer so that it picks only the attributes to be updated and omits the other ones. We have customized our user serializer as:

if (snapshot.adapterOptions && snapshot.adapterOptions.updateMode === 'contact-info') {
        json.data.attributes = pick(json.data.attributes, ['email', 'contact']);
}

The 'save' method on 'currentUser', i.e. 'currentUser.save()', returns a promise. We resolve the promise by setting 'currentUserModel' to the updated 'user' returned by the promise. Thus, we are able to update the email and contact info using the JSON API.

Resources:

Ember data official guide

Blog on models and Ember data by Embedly


How RSS Action Type is Implemented in SUSI Android

Important skills of SUSI.AI include displaying web search results, a map of any location, and a list of relevant information on a topic. The RSS action type is similar to the websearch action type: when the web search is to be performed on the client side it is denoted by the websearch action type, and when the web search is performed by the server itself it is denoted by the rss action type. In this blog, I will show you how the rss action type is implemented in SUSI Android.

In case of the RSS action type, the server searches the internet using RSS feeds and returns an array of objects containing:

  • Title
  • Description
  • Link
{
  "title": "dog-doh: Definitions Index",
  "description": "dog-doh: Definitions Index. dog dog and pony show dog biscuit dog collar dog days ...",
  "link": "http://websters.yourdictionary.com/index/dog-doh/"
}

title: the title related to the user query

description: a description of the result for the user query.

link: if the user wants more information, the link can be followed to find it.

How rss action type is parsed in SUSI Android

SUSI replies in JSON format, which must be parsed properly to show it in the Android app. We use the Retrofit library, developed by Square, to parse the JSON data. Retrofit parses data according to model classes, and we write the model classes according to the expected JSON reply. For example, each SUSI response contains an 'answers' JSON array, inside which there are two JSON arrays, 'data' and 'action'. We made a different model class for each JSON array.

The first model class is SusiResponse. We use this model class to parse the 'answers' JSON array.

@SerializedName("answers")
private List<Answer> answers = new ArrayList<>();

Here we use a List<> because the 'answers' JSON array contains a list of JSON arrays and JSON objects. Answer is the second model class; we use it to parse two important JSON arrays, 'data' and 'action'. The 'action' attribute has information about the action type.

public class Answer {
   @SerializedName("data")
   private RealmList<Datum> data = new RealmList<>();
}

Here also we use a list because the 'data' JSON array contains a list of JSON objects, but instead of a simple List we use RealmList<> because after parsing we save the data using Realm. The 'data' JSON array contains multiple JSON objects, and each JSON object contains three important pieces of information: 'title', 'description' and 'link'.

The Datum class is the main model class, used to save and retrieve 'title', 'description' and 'link'. The setTitle, setLink and setDescription methods of the Datum class are used to save 'title', 'description' and 'link', and the getTitle, getDescription and getLink methods are used to retrieve them; a sketch of the class is shown below.
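The following is only an illustrative sketch of the Datum model based on the description above; the exact annotations and field names in the app may differ.

import com.google.gson.annotations.SerializedName;

import io.realm.RealmObject;

public class Datum extends RealmObject {

    @SerializedName("title")
    private String title;

    @SerializedName("description")
    private String description;

    @SerializedName("link")
    private String link;

    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }

    public String getLink() { return link; }
    public void setLink(String link) { this.link = link; }
}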

How rss action type data is retrieved and saved

As already mentioned, we use Retrofit to retrieve data and Realm to save it. susiResponse is the response we receive from the SUSI server; we use it to retrieve a list of Datum objects.

List<Datum> datumList = susiResponse.getAnswers().get(0).getData();

We then loop through datumList and from each element we extract 'title', 'description' and 'link' using the getTitle(), getDescription() and getLink() methods respectively. The Datum class is the model class for both Retrofit and Realm. realmDatum is an instance of the Datum class and datumRealmList is a RealmList of Datum objects. After extracting the data, we save it using setTitle(), setDescription() and setLink().

for (Datum datum : datumList) {
    Datum realmDatum = bgRealm.createObject(Datum.class);
    realmDatum.setDescription(datum.getDescription());
    realmDatum.setLink(datum.getLink());
    realmDatum.setTitle(datum.getTitle());
    datumRealmList.add(realmDatum);
}

Layout design to show rss action type reply

There are three TextViews, with ids 'title', 'description' and 'link', to show the 'title', 'description' and 'link' retrieved from SUSI's reply. We use a RecyclerView to show the list of results.

Datum datum = datumList.get(position);

holder.titleTextView.setText(Html.fromHtml(datum.getTitle()));
holder.descriptionTextView.setText(Html.fromHtml(datum.getDescription()));
holder.linkTextView.setText(Html.fromHtml(datum.getLink()));


How Anonymous Mode is Implemented in SUSI Android

Login and signup are important features for some Android apps, like chat apps, because users want to save their messages and secure them from others. In SUSI Android, we save the messages of a logged-in user on the server, corresponding to their account. But users can also use the app without logging in. In this blog, I will show you how anonymous mode is implemented in SUSI Android.

When the user logs in using a username and password, we provide a token to the user for a limited amount of time. In anonymous mode, however, we never provide a token to the user, and we set the ANONYMOUS_LOGGED_IN flag to true, which indicates that the user is using the app anonymously.

PrefManager.clearToken()
PrefManager.putBoolean(Constant.ANONYMOUS_LOGGED_IN, true)

We use the ANONYMOUS_LOGGED_IN flag to check whether the user is using the app anonymously or not. When a user opens the app, we first check whether the user is already logged in. If the user is not logged in, we then check whether the ANONYMOUS_LOGGED_IN flag is true or false; true means the user is using the app in anonymous mode.

if (PrefManager.getBoolean(Constant.ANONYMOUS_LOGGED_IN, false)) {
    intent = new Intent(LoginActivity.this, MainActivity.class);
    startActivity(intent);
}

Otherwise, we show the login page to the user. The user can use the app either by logging in with a username and password or anonymously by clicking the skip button. On clicking the skip button, the ANONYMOUS_LOGGED_IN flag is set to true.

public void skip() {
    Intent intent = new Intent(LoginActivity.this, MainActivity.class);
    PrefManager.clearToken();
    PrefManager.putBoolean(Constant.ANONYMOUS_LOGGED_IN, true);
    startActivity(intent);
}

If the user is using the app in anonymous mode but wants to log in, he/she can do so; there is a login option in the menu.

When the user selects the login option from the menu, it redirects the user to the login screen and the ANONYMOUS_LOGGED_IN flag is set to false. The flag is set to false to ensure that if, instead of logging in, the user closes the app and opens it again, he/she can't use the app until logging in or clicking the skip button.

case R.id.action_login:
    // clear the anonymous flag before redirecting the user to the login screen
    PrefManager.putBoolean(Constant.ANONYMOUS_LOGGED_IN, false);
    break;


Using NodeJS modules of Loklak Scraper in Android

Loklak Scraper JS implements scrapers for social media websites so that they can be used on other platforms, like Android, or in a native Java project. This way there is only a single source for each scraper; as a result, it is easier to update the scrapers in response to changes in the websites. This blog explains how Loklak Wok Android, a peer for Loklak Server on the Android platform, uses the Twitter JS scraper to scrape tweets.

LiquidCore is a library available for Android that can be used to run standard NodeJS modules. But the Twitter scraper can't be used directly, due to the following problems:

  • 3rd party NodeJS libraries, like cheerio and request-promise-native, are used to implement the scraper, and LiquidCore doesn't support 3rd party libraries.
  • The scrapers are written in ES6; as of now LiquidCore uses NodeJS 6.10.2, which doesn't support ES6 completely.

So, if 3rd party NodeJS libraries can be included in our scraper code and ES6 can be converted to ES5, LiquidCore can easily execute Twitter scraper.

3rd party NodeJS libraries can be bundled into Twitter scraper using Webpack and ES6 can be transpiled to ES5 using Babel.

The required dependencies can be installed using:

$npm install --save-dev webpack
$npm install --save-dev babel-core babel-loader babel-preset-es2015

Bundling and Transpiling

Webpack does bundling based on the configurations provided in webpack.config.js, present in root directory of project.

var fs = require('fs');

function listScrapers() {
   var src = "./scrapers/"
   var files = {};
   fs.readdirSync(src).forEach(function(data) {
       var entryName = data.substr(0, data.indexOf("."));
       files[entryName] = src+data;
   });
   return files;
}

module.exports = {
 entry: listScrapers(),
 target: "node",
 module: {
     loaders: [
         {
             loader: "babel-loader",
             test: /\.js?$/,
             query: {
                 presets: ["es2015"],
             }
         },
     ]
 },
 output: {
   path: __dirname + '/build',
   filename: '[name].js',
   libraryTarget: 'var',
   library: '[name]',
 }
};

 

Now let's break down the config file. The function listScrapers returns an object whose keys are the scraper names and whose values are the relative locations of the scrapers, e.g.:

{
   twitter: "./scrapers/twitter.js",
   github: "./scrapers/github.js"
   // same goes for other scrapers
}

The parameters in module.exports, as described in the webpack documentation for multiple entry points and for using the generated output externally, are:

  • entry: Since a bundle file is required for each scraper we provide the  the JSONObject returned by listScrapers function. The multiple entry points provided generate multiple bundled files.
  • target: As the bundled files are to be used in NodeJS platform,  “node” is set here.
  • module: Using webpack the code can be directly transpiled while bundling, the end users don’t need to run separate commands for transpiling. module contains babel configurations for transpiling.
  • output: options here customize the compilation of webpack
    • path: Location where bundled files are kept after compilation, “__dirname” means the current directory i.e. root directory of the project.
    • filename: Name of bundled file, “[name]“ here refers to the key of JSONObject provided in entry i.e. key of JSONObect returned from listScrapers. Example for Twitter scraper, the filename of bundled file will be “twitter.js”.
    • libraryTarget: by default the functions or methods inside bundled files can’t be used externally – can’t be imported. By providing the “var” the functions in bundled module can be accessed.
    • library: the name of the library.

Now, time to do the compilation work:

$ ./node_modules/.bin/webpack

The bundled files can be found in build directory. But, the generated bundled files are large files – around 77,000 lines. Large files are not encouraged for production purposes. So, a “-p” flag is used to generate bundled files for production – around 400 lines.

$ ./node_modules/.bin/webpack -p

Using LiquidCore to execute bundled files

The generated bundled files can be copied to the raw directory in res (the resources directory in Android). Now, events are emitted from the Activity/Fragment, and in response to those events the scraping function is invoked in the bundled JS file present in the raw directory; the vice-versa is also possible.

So, we handle some events in our JS file and send some events to the Android Activity/Fragment. The event handling and event emitting code in the JS file:

var query = "";
LiquidCore.on("queryEvent", function(msg) {
  query = msg.query;
});

LiquidCore.on("fetchTweets", function() {
  var twitterScraper = new twitter();
  twitterScraper.getTweets(query, function(data) {
    LiquidCore.emit("getTweets", {"query": query, "statuses": data});
  });
});

LiquidCore.emit('start');

 

First, a "start" event is emitted from the JS file, which is consumed in TweetHarvestingFragment by the getScrapedTweet method using startEventListener.

EventListener startEventListener = (service, event, payload) -> {
   JSONObject jsonObject = new JSONObject();
   try {
       jsonObject.put("query", query);
       service.emit(LC_QUERY_EVENT, jsonObject); // value of LC_QUERY_EVENT is "queryEvent"
   } catch (JSONException e) {
       Log.e(LOG_TAG, e.toString());
   }
   service.emit(LC_FETCH_TWEETS_EVENT); //value of  LC_FETCH_TWEETS_EVENT is  "fetchTweets"
};

 

The startEventListener then emits “queryEvent” with a JSONObject that contains the query to search tweets for scraping. This event is consumed in JS file by:

var query = "";
LiquidCore.on("queryEvent", function(msg) {
  query = msg.query;
});

 

After “queryEvent”, “fetchTweets” event is emitted from fragment, which is handled in JS file by:

LiquidCore.on("fetchTweets", function() {
  var twitterScraper = new twitter(); // scraping object is created
  twitterScraper.getTweets(query, function(data) { // function that scrapes twitter
    LiquidCore.emit("getTweets", {"query": query, "statuses": data});
  });
});

 

Once the scraped data is obtained, it is sent back to the fragment by emitting the "getTweets" event from the JS file; the payload {"query": query, "statuses": data} contains the scraped data. This event is consumed in Android by getTweetsEventListener.

EventListener getTweetsEventListener = (service, event, payload) -> { // payload contains scraped data
   Push push = mGson.fromJson(payload.toString(), Push.class);
   emitter.onNext(push);
};

 

LiquidCore creates a NodeJS instance to execute the bundled JS file; the NodeJS instance is called a MicroService in LiquidCore terminology. For all this event handling to work, the NodeJS instance is created inside a method with a ServiceStartListener where all the EventListeners are added.

MicroService.ServiceStartListener serviceStartListener = (service -> {
   service.addEventListener(LC_START_EVENT, startEventListener);
   service.addEventListener(LC_GET_TWEETS_EVENT, getTweetsEventListener);
});
URI uri = URI.create("android.resource://org.loklak.android.wok/raw/twitter"); // Note .js is not used
MicroService microService = new MicroService(getActivity(), uri, serviceStartListener);
microService.start();
