Route Based Chunking in Loklak Search

The loklak search application running at loklak.org is growing in size as features are added, and this growth is roughly linear. A traditional SPA tends to ship all the code required to run the application in one pass, as a single monolithic JavaScript file, along with the index.html. This approach suits applications with a few pages which are used frequently, where context switching between those logical pages happens at a high rate almost as soon as the application loads.

But generally, only a fraction of the code is accessed frequently by most users, so as the application size grows it does not make sense to ship the code for the entire application on the first request; there are always some parts of the application, some views, which are rarely accessed. The loading of such parts of the application can be delayed until they are accessed. The Angular router provides an easy way to set up such a system, and it is used in the latest version of loklak search.

The technique used here is to load content according to the route. This makes sure only the code for the route being viewed is loaded initially, and subsequent loading is done at runtime, as and when required.

Old setup for baseline

Here are the compiled file sizes of the project without chunking the application. As we can see, the file sizes are huge, especially the vendor bundle at 5.4M and the main bundle at about 0.5M. These are the files which are loaded on the first load, and due to their huge sizes the first paint of the application suffers to a great extent. These numbers will act as a baseline against which we measure the impact of route based chunking.

Setup for route based chunking

The setup for route based chunking is fairly simple. Below is our routing configuration: each module which we want to lazy load is passed as the loadChildren attribute of its route. This attribute is a string consisting of the path of the feature module file, with the part after the hash symbol being the actual class name of the module in that file. This setup enables the router to load that module lazily when it is accessed by the user.

const routes: Routes = [
  {
    path: '',
    pathMatch: 'full',
    loadChildren: './home/home.module#HomeModule',
    data: { preload: true }
  },
  {
    path: 'about',
    loadChildren: './about/about.module#AboutModule'
  },
  {
    path: 'contact',
    loadChildren: './contact/contact.module#ContactModule'
  },
  {
    path: 'search',
    loadChildren: './feed/feed.module#FeedModule',
    data: { preload: true }
  },
  {
    path: 'terms',
    loadChildren: './terms/terms.module#TermsModule'
  },
  {
    path: 'wall',
    loadChildren: './media-wall/media-wall.module#MediaWallModule'
  }
];

Preloading of some routes

As we can see, in two of the configurations above there is a data attribute on which preload: true is specified. Sometimes we need to preload some parts of the application which we know will be accessed soon. Angular therefore enables us to set up our own preloading strategy to preload critical parts of the application. In our case, the Home and Feed modules are the core parts of the application, and we can be sure that if someone is to use our application, these two modules need to be loaded. Defining the preloading strategy is also really simple: it is a class which implements the PreloadingStrategy interface and has a preload method. This method receives the route and a load function as arguments, and returns the load() observable if preload is set to true, or an observable of null otherwise.

import { Route, PreloadingStrategy } from '@angular/router';
// Imports added for completeness (RxJS 6 style; older RxJS versions import `of` from 'rxjs/observable/of')
import { Observable, of } from 'rxjs';

export class CustomPreloadStrategy implements PreloadingStrategy {
  preload(route: Route, load: Function): Observable<any> {
    return route.data && route.data.preload ? load() : of(null);
  }
}
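For the strategy to take effect, it has to be registered with the router. Here is a minimal sketch of the registration, assuming the routes array shown above; the module and file names are illustrative, not necessarily those used in loklak search.

import { NgModule } from '@angular/core';
import { RouterModule } from '@angular/router';

@NgModule({
  // preloadingStrategy tells the router to consult our custom class for each lazy route
  imports: [RouterModule.forRoot(routes, { preloadingStrategy: CustomPreloadStrategy })],
  // the strategy is a normal injectable, so it must also be provided
  providers: [CustomPreloadStrategy],
  exports: [RouterModule]
})
export class AppRoutingModule { }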

Results of route based chunking

The results of route based chunking are a 50% reduction in the file size of the vendor bundle and a 70% reduction in the file size of the main bundle. This provides the edge every application needs to perform well at load time, as unnecessary bytes are not loaded at all until required.

Lazy Loading Images in Loklak Search

In the last blog post, I discussed the basic Web APIs which help us create the lazy image loader component. I also discussed the structure which is used in the application to load images lazily. The core idea is to wrap the <img> element in a wrapper <app-lazy-img> element. This enables us to detect when the element is in the viewport and to load the image only if it is present there.

In this blog post, I will be discussing the implementation details about how this is achieved in Loklak search in an optimized manner.

The logic for lazy loading of images in the application is divided into a Component and a corresponding Service. The reason for this splitting of logic will be explained as we discuss the core parts of the code for this feature.

Detecting the Intersection with Viewport

The lazy image service is a service for the lazy image component, registered by the modules which intend to use the app-lazy-img component. The task of this service is to register elements with the intersection observer and then emit an event when an element comes into the viewport, which the element can react to by using the other methods of the service to actually fetch the image.

import { Injectable } from '@angular/core';
// Imports added for completeness
import { Observable, Subscriber } from 'rxjs';

@Injectable()
export class LazyImgService {
  private intersectionObserver: IntersectionObserver =
    new IntersectionObserver(this.observerCallback.bind(this), { rootMargin: '50% 50%' });

  private elementSubscriberMap: Map<Element, Subscriber<boolean>> =
    new Map<Element, Subscriber<boolean>>();
}

The service has two member attributes: one is the IntersectionObserver, and the other is a Map which stores the references of the subscribers of this intersection observer. These references are later used to emit the event when the element comes into the viewport. The rootMargin of the intersection observer is set to 50% 50%, which makes sure the intersection is reported while the element is still up to 50% of the viewport away, so loading can start before the element actually becomes visible.

The observe public method of the service takes an element, passes it to the intersection observer to observe, and puts the element in the subscriber map.

public observe(element: Element): Observable<boolean> {
  const observable: Observable<boolean> = new Observable<boolean>(subscriber => {
    this.elementSubscriberMap.set(element, subscriber);
  });
  this.intersectionObserver.observe(element);
  return observable;
}

Then there is the observer callback. This method receives as an argument all the entries intersecting with the root of the observer; when the callback fires, we find all the intersecting elements and emit the intersection event, indicating that the element is near the viewport and that this is the time to load it.

private observerCallback(entries: IntersectionObserverEntry[], observer: IntersectionObserver) {
  entries.forEach(entry => {
    if (this.elementSubscriberMap.has(entry.target)) {
      if (entry.intersectionRatio > 0) {
        const subscriber = this.elementSubscriberMap.get(entry.target);
        subscriber.next(true);
        this.elementSubscriberMap.delete(entry.target);
      }
    }
  });
}

Now our LazyImgComponent uses this service to register its element with the intersection observer and to react later, when the event is emitted. This method sets up the intersection observation for the image, subscribes to the event emitted by the service, and eventually calls the loadImage method when the element intersects with the viewport.

private setupIntersectionObserver() {
  this.lazyImgService
    .observe(this.elementRef.nativeElement)
    .subscribe(value => {
      if (value) {
        this.loadImage();
      }
    });
}

Loading and rendering the image

Our lazy image service has another public API method, fetch, to fetch the image resource. This method returns an observable which, on successful fetching of the image, emits a Base64 image string.

public fetch(resource: string): Observable<string> {
  return new Observable<string>(subscriber => {
    fetch(resource)
      .then(this.processStatus)
      .then(this.getBufferResponse)
      .then(this.arrayBufferToBase64)
      .then(strBuffer => {
        subscriber.next(strBuffer);
        subscriber.complete();
      })
      .catch((error) => {
        subscriber.error(error);
        subscriber.complete();
      });
  });
}
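The helper methods used in the promise chain are not shown in the snippet above. Below are minimal sketches of what they might look like, reconstructed from how they are used in the chain; treat these as assumed implementations, not the exact code in the repository.

// Assumed sketch: resolve only for successful HTTP status codes, reject otherwise.
private processStatus(response: Response): Promise<Response> {
  if (response.status >= 200 && response.status < 300) {
    return Promise.resolve(response);
  }
  return Promise.reject(new Error(response.statusText));
}

// Assumed sketch: read the raw bytes of the response body.
private getBufferResponse(response: Response): Promise<ArrayBuffer> {
  return response.arrayBuffer();
}

// Assumed sketch: convert the raw bytes to a Base64 string usable in a data URI.
private arrayBufferToBase64(buffer: ArrayBuffer): string {
  let binary = '';
  new Uint8Array(buffer).forEach(byte => binary += String.fromCharCode(byte));
  return window.btoa(binary);
}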

This intermediate promise chain converts the raw response buffer into a Base64 string, which is then emitted through the observable. The component subscribes to this fetch observable when the loadImage method is called.

private loadImage() {
  this.isLoading = true;
  this.lazyImgService
    .fetch(this.src)
    .subscribe(this.handleResponse.bind(this), this.handleError.bind(this));
}

The handler methods for the response and errors then contain the code to handle the effects of the loaded result, i.e. rendering the image inside the img element. The interesting thing to note here is that if we set a Base64 string as the src attribute of an img tag, instead of a resource path, it still renders the image properly.

private handleResponse(imageStr: string) {
  const base64Flag = `data:image/${this.imageType};base64,`;
  this.elementRef.nativeElement.querySelector('img').src = base64Flag + imageStr;
}

And this completes our workflow of app-lazy-img: a robust lazy image loader which is also compliant with accessibility guidelines, including all the necessary attributes like title, width and height for the generation of a proper accessibility tree. The technique can be extended to any level and is more or less platform and framework independent, as it relies solely on Web Standards APIs. It is an optimized solution, as only one intersection observer is active on a page, watching all the images, rather than one observer per component instance, which could become a performance bottleneck on low-memory devices.

Resources and Links

  • Intersection observer API
  • Intersection Observer polyfill for the browsers which don’t support Intersection Observer
  • Fetch API documentation
  • Fetch API polyfill for the browsers which don’t support fetch.
  • Loklak Search Repo

Enhancing LoklakWordCloud app present on Loklak apps site

LoklakWordCloud app is presently hosted on the loklak apps site. Before moving into the content of this blog, let us get a brief overview of the app. What does the app do? The app generates a word cloud using twitter data returned by loklak, based on the query word provided by the user. The user enters a word in the input field and presses the search button. After that, a word cloud is created using the content (text body, hashtags and mentions) of the various tweets which contain the user-provided query word.

In my previous post I wrote about creating the basic functional app. In this post I will be describing the next steps that have been implemented in the app.

Making the word cloud clickable

This is one of the most important and interesting features added to the app: the words in the cloud are now clickable. Whenever a user clicks on a word present in the cloud, the cloud is replaced by the word cloud of that selected word. How do we achieve this behaviour? Well, for this we use Jqcloud's handler feature. While creating the list of objects for each word and its frequency, we also specify a handler corresponding to each word. The handler is supposed to handle a click event. Whenever a click event occurs, we set the value of $scope.tweet to the selected word and invoke the search function, which calls the loklak API and regenerates the word cloud.

for (var word in $scope.wordFreq) {
    $scope.wordCloudData.push({
        text: word,
        weight: $scope.wordFreq[word],
        handlers: {
            click: function(e) {
                $scope.tweet = e.target.textContent;
                $scope.search();
            }
        }
    });
}

As can be seen in the above snippet, handlers is simply a JavaScript object which takes a function for the click event. In the function we pass the selected word as the value of the tweet variable and call the search method.

Adding filters to the app

Previously the app generated the word cloud using the entire tweet content, that is, hashtags, mentions and the tweet body. This made the app inflexible: the user was not able to decide on which fields the word cloud should be generated. A user might want to generate the word cloud using only the hashtags, only the mentions, or simply the tweet bodies. To make this possible, filters have been introduced. Now we have filters for hashtags, mentions, tweet body and date.

<div class="col-md-6 tweet-filters">
    <strong>Filters</strong>
    <hr>
    <div class="filters">
        <label class="checkbox-inline"><input type="checkbox" value="" ng-model="hashtags">Hashtags</label>
        <label class="checkbox-inline"><input type="checkbox" value="" ng-model="mentions">Mentions</label>
        <label class="checkbox-inline"><input type="checkbox" value="" ng-model="tweetbody">Tweet body</label>
    </div>
    <div class="filter-all">
        <span class="select-all" ng-click="selectAll()"> Select all </span>
    </div>
</div>

We have used checkboxes for the individual filters and have kept an option to select all the filters at once. Next we require to hook this HTML to AngularJS code to make the filters functional.

if ($scope.hashtags) {
    tweet.hashtags.forEach(function (hashtag) {
        $scope.filteredWords.push("#" + hashtag);
    });
}

if ($scope.mentions) {
    tweet.mentions.forEach(function (mention) {
        $scope.filteredWords.push("@" + mention);
    });
}

In the above snippet, before adding the hashtags to the list of filtered words, we first make sure that the checkbox for hashtags is selected. Once we find out that the variable bound to the hashtags checkbox is true, we proceed further and add the hashtags associated with a given tweet to the list of filteredWords. The same strategy is applied for both mentions (shown in the snippet) and tweet bodies.
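For completeness, here is a sketch of the remaining pieces referenced above but not shown: the tweet body check and the selectAll handler bound in the template. These are assumed implementations following the same pattern (with the tweet's words assumed to be available in a tweetWords array, as in the first version of the app); the actual code may differ slightly.

// Assumed sketch: tweet body filter, same pattern as hashtags and mentions.
if ($scope.tweetbody) {
    tweetWords.forEach(function (word) {
        $scope.filteredWords.push(word);
    });
}

// Assumed sketch: the selectAll handler referenced by ng-click in the template.
$scope.selectAll = function () {
    $scope.hashtags = true;
    $scope.mentions = true;
    $scope.tweetbody = true;
};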

Adding error notification

Next, we handle certain errors to notify the users that there is a problem with their input. Such cases include empty input: if the user provides an empty input, we notify him or her and stop the search. We also check whether the From date is before the To date; if the From date is after the To date, we notify the user about the problem.

if ($scope.tweet === "" || $scope.tweet === undefined) {
    $scope.error = "Please enter a valid query word";
    $scope.showError();
    return;
}

In the above snippet we check for empty or undefined input and, accordingly, display a snackbar along with the error.

if ((sinceDate !== "" && sinceDate !== undefined) && (endDate !== "" && endDate !== undefined)) {
    var date1 = new Date(sinceDate);
    var date2 = new Date(endDate);
    if (date1 > date2) {
        $scope.error = "To date should be after From date";
        $scope.showError();
        return;
    }
}

The above snippet compares the dates. For comparing dates, we first fetch the values entered (via the jQuery date widget) into the respective input fields, then create JavaScript Date objects out of them, and finally compare those Date objects to find out if there is an error.

Now it might happen that a particular search takes a long time (perhaps due to a network problem) and the user becomes impatient and tries to search again. In that case we need to inform the user that the previous search is still going on. For this purpose we use a boolean variable to keep track of whether the previous search has completed or is still going on. If the previous search is in progress and the user tries to make a new search, we show a proper notification and prevent the user from making further searches.

Finally, we need to make sure that the user is online and has an active internet connection before the search can take place and the Loklak API can be called. For this we have used the navigator: we poll the onLine property of navigator to find out whether the user is online or not. If the user is offline, we inform him that we cannot initiate a search due to an internet connectivity problem.

if ($scope.isLoading === true) {
    $scope.error = "Previous search not completed. Please wait...";
    $scope.showError();
    return;
}
if (!navigator.onLine) {
    $scope.error = "You are currently offline. Please check your internet connection!";
    $scope.showError();
    return;
}
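The isLoading flag itself is toggled around the search call. Here is a minimal sketch of that bookkeeping, assuming the search flow described above; the exact placement in the app may differ.

// Assumed sketch: mark the search as running, clear the flag when it finishes either way.
$scope.isLoading = true;
$http.jsonp(url)
    .then(function (response) {
        // ... process the response and regenerate the word cloud ...
        $scope.isLoading = false;
    }, function (error) {
        $scope.isLoading = false;
    });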

Important resources

  • View the app source here.
  • View loklak apps site source here.
  • View Loklak API documentation here
  • View Jqcloud documentation here.
  • Learn more about AngularJS here.

Writing Dredd Test for Event Topic-Event Endpoint in Open Event API Server

The API Server exposes a large set of endpoints which are well documented using apiary's API Blueprint. To ensure that this documentation describes exactly what the API does, i.e. the response made to a request, testing it is crucial. This testing is done through Dredd documentation testing, with the help of FactoryBoy for faking objects.

In this blog post I describe how to use FactoryBoy to write Dredd tests for the Event Topic-Event endpoint of the Open Event API Server.

The endpoint for which tests are described here is GET event-topics/1/events. For testing this endpoint, we need to simulate the API GET request by preparing our database and then compare the response received to the expected response written in the api_blueprint.apib file. For GET to return some data we need to insert an event with some event topic in the database.

The documentation for this endpoint is the following:

To add the event topic and event objects for GET event-topics/1/events, we use a hook. This hook is written in the hook_main.py file and is run before the request is made.

We add this decorator on the function which will add objects to the database. The decorator traverses the APIB docs by heading level, mapping the number of '#' characters in the documentation to '>' separators in the decorator string; the heading of this endpoint in the docs therefore gives us the decorator we need.

Now let's write the method itself. In the method, we first add the event topic object using the EventTopicFactory defined in the factories/event-topic.py file, the code for which can be found here.

Since the endpoint also requires some event to be created in order to fetch events related to an event topic, we add an event object too, based on the EventFactoryBasic class in the factories/event.py file. [Code]

To fetch the event related to a topic, the event must be referenced in that particular event topic. This is achieved by passing event_topic_id=1 when creating the event object, so that for the event created by the constructor, the event topic is set as id = 1.

event = EventFactoryBasic(event_topic_id=1)

In the EventFactoryBasic class, event_topic_id is set to None, so that we don't have to create an event topic when creating events while testing other endpoints. This also lets us avoid adding event-topic as a related factory. To add event_topic_id=1 as the event's attribute, an event topic with id = 1 must already be present; hence the event_topic object is added first. After adding the event object as well, we commit both of these into the database. Now that we have an event topic object with id = 1, an event object with id = 1, and the event related to that event topic, we can make a call to GET event-topics/1/events and get the correct response.

One Click Deployment Button for loklak Using Heroku with Gradle Build

The one click deploy button makes it easy for users of loklak to get their own cloud instance created and deployed in their Heroku account, to be used however they like. Heroku uses an app.json manifest in the code repo to figure out what add-ons, config and other deployment steps are required to make the code run; this is used to configure and deploy the app.

Once you have provided the app name and clicked on the deploy button, Heroku will start deploying the loklak server to a new app on your account:

When setup is complete, you can open the deployed app in your browser or inspect it in Dashboard.

All these steps and requirements can now be encoded in an app.json file and placed in a repo alongside a button that kicks off the setup with a single click.

app.json is a manifest format for describing apps and specifying their config requirements. Heroku uses this file to figure out how code in a particular repo should be deployed on the platform. Here is loklak's app.json file, which uses the Gradle buildpack:

{
	"name": "Loklak Server",
	"description": "Distributed Tweet Search Server",
	"logo": "https://raw.githubusercontent.com/loklak/loklak_server/master/html/images/loklak_anonymous.png",
	"website": "http://api.loklak.org",
	"repository": "https://github.com/loklak/loklak_server.git",
	"image": "loklak/loklak_server:latest-master",
	"env": {
		"BUILDPACK_URL": "https://github.com/heroku/heroku-buildpack-gradle.git"
	}
}

If you are interested, you can try deploying a peer from here itself. Check out how simple it can be to deploy.
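For reference, such a button is typically embedded in the repository README with Heroku's standard snippet. A sketch is below; the template URL follows Heroku's convention and is an assumption, not necessarily the exact one used in the loklak README.

[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy?template=https://github.com/loklak/loklak_server)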

Deploy button:

Auto Deploying loklak Server on Google Cloud Using Travis

This is a setup for the loklak server in which only the source files are checked in, while the Kubernetes deployment of the development branch is automatically updated with the compiled output on every push, using details from the Travis build.

How to achieve it?

Unix commands and shell scripts are among the best options for automating deployment and build activities. I explored Kubernetes on Google Cloud, which can be accessed through the gcloud and kubectl command line tools.

1. Checking for Travis build details before deployment:

First, check whether the repository is loklak_server, whether the build is not for a pull request, and whether the branch is either master or development; only then update the docker image. The code for the aforementioned checks is as follows:

if [ "$TRAVIS_REPO_SLUG" != "loklak/loklak_server" ]; then
    echo "Skipping image update for repo $TRAVIS_REPO_SLUG"
    exit 0
fi

if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then
    echo "Skipping image update for pull request"
    exit 0
fi

if [ "$TRAVIS_BRANCH" != "master" ] && [ "$TRAVIS_BRANCH" != "development" ]; then
    echo "Skipping image update for branch $TRAVIS_BRANCH"
    exit 0
fi

2. Setting up Tag and Decrypting the credentials:

For the Kubernetes deployment, each time the Travis build is successful, the commit details from Travis are appended to the tag used for the deployment, and the gcloud credentials are decrypted from the encrypted JSON file.

openssl aes-256-cbc -K $encrypted_48d01dc243a6_key -iv $encrypted_48d01dc243a6_iv  -in kubernetes/gcloud-credentials.json.enc -out kubernetes/gcloud-credentials.json -d
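The tag construction itself is not shown in the snippet; a sketch of what it might look like, assuming the commit details come from Travis environment variables (the exact image name and format are assumptions):

# Assumed sketch: build the image tag from the Travis build details
export TAG="loklak/loklak_server:$TRAVIS_BRANCH-$TRAVIS_COMMIT"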

3. Install, Authenticate and Configure GCloud details with Kubernetes:

In this step, the Google Cloud SDK should first be installed along with kubectl:

curl https://sdk.cloud.google.com | bash > /dev/null
source ~/google-cloud-sdk/path.bash.inc
gcloud components install kubectl

Then, authenticate to Google Cloud using the decrypted credentials mentioned above, and finally configure Google Cloud with details like the zone, project name, cluster details and number of nodes.
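A sketch of these authentication and configuration steps is below; the project, zone and cluster names are placeholders, not the actual values used by loklak:

# Authenticate with the decrypted service account credentials
gcloud auth activate-service-account --key-file kubernetes/gcloud-credentials.json

# Placeholder values: substitute the real project, zone and cluster names
gcloud config set project my-project
gcloud config set compute/zone us-central1-a
gcloud container clusters get-credentials my-cluster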

4. Update the Kubernetes deployment:

Since this setup is specific to the loklak_server development branch, the script checks whether the branch is development and then updates the deployment using the following command:

if [ $TRAVIS_BRANCH == "development" ]; then
    kubectl set image deployment/server --namespace=web server=$TAG
fi

Conclusion:

In this post, we saw how to write a script so that, with each successful Travis build after a push, the deployment on Kubernetes on Google Cloud is updated.

Adding Voice Recognition in Description Dialog Box in Phimpme project

In this blog, I will explain how I added voice recognition to the dialog box used to describe an image in the Phimpme Android application. In Phimpme Android we have an option to add a description for an image. Sometimes the description can be long; adding voice recognition (speech to text) eases the user's experience when adding a description for the image.

Adding appropriate Dialog Box

In order to take input from the user to prompt the voice recognition function, I added an image button in the description dialog box. Since the description dialog box only contains an EditText and a button, we have used material design to make it look better and added a caption on top of it.

<LinearLayout
   android:id="@+id/rl_description_dialog"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:orientation="horizontal"
   android:padding="15dp">
   <EditText
       android:id="@+id/description_edittxt"
       android:layout_weight="1"
       android:layout_width="fill_parent"
       android:layout_height="wrap_content"
       android:padding="15dp"
       android:hint="@string/description_hint"
       android:textColorHint="@color/grey"
       android:layout_margin="20dp"
       android:gravity="top|left"
       android:inputType="text" />
   <ImageButton
       android:layout_width="@dimen/mic_image"
       android:layout_height="@dimen/mic_image"
       android:layout_alignRight="@+id/description_edittxt"
       app2:srcCompat="@drawable/ic_mic_black"
       android:layout_gravity="center"
       android:background="@color/transparent"
       android:paddingEnd="10dp"
       android:paddingTop="12dp"
       android:id="@+id/voice_input"/>
</LinearLayout>

Function to prompt dialog box

We have added a function to prompt the dialog box from anywhere in the application. The getDescriptionDialog() function is used to prompt the description dialog box; it returns the EditText, which can further be used to manipulate the text in the EditText. Please follow the following steps to inflate the description dialog box in an activity.

First,

In the getDescriptionDialog() function we inflate the layout using the getLayoutInflater function, passing the layout id as an argument.

public EditText getDescriptionDialog(final ThemedActivity activity, AlertDialog.Builder descriptionDialog){
final View DescriptiondDialogLayout = activity.getLayoutInflater().inflate(R.layout.dialog_description, null);

Second,

Get the TextView in the description dialog box.

final TextView DescriptionDialogTitle = (TextView) DescriptiondDialogLayout.findViewById(R.id.description_dialog_title);

Third,

Present the dialog using a CardView to make use of material design, then take an instance of the EditText. This EditText can further be used to take input from the user, either by typing or by voice recognition.

final CardView DescriptionDialogCard = (CardView) DescriptiondDialogLayout.findViewById(R.id.description_dialog_card);
EditText editTextDescription = (EditText) DescriptiondDialogLayout.findViewById(R.id.description_edittxt);

Fourth,

Set an onClickListener for when the user clicks the mic image icon. This onClickListener will prompt voice recognition in the activity. We need to specify the language for the speech to text input; in the case of Phimpme it is English, so "en-US". We have set the maximum number of results to 15.

ImageButton VoiceRecognition = (ImageButton) DescriptiondDialogLayout.findViewById(R.id.voice_input);
VoiceRecognition.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // This is the intent needed to start the voice recognizer
        Intent i = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        i.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        i.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US"); // the language for speech to text
        i.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 15); // maximum number of results
        // EXTRA_PROMPT expects a String, so resolve the resource id first
        i.putExtra(RecognizerIntent.EXTRA_PROMPT, activity.getString(R.string.caption_speak));
        activity.startActivityForResult(i, REQ_CODE_SPEECH_INPUT);
    }
});

Putting Text in the EditText

After the voice recognition prompt ends, the onActivityResult function checks whether the data has been received or not.

if (requestCode == REQ_CODE_SPEECH_INPUT && data!=null) {

We get the spoken text from the intent using data.getStringArrayListExtra(), which returns an ArrayList. To store the received data in a string, we take the first element of the ArrayList.

ArrayList<String> result = data
       .getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
voiceInput = result.get(0);

Setting the received data in the EditText:

editTextDescription.setText(voiceInput);

Conclusion

Using voice recognition is a quick and simple way to add a long description to an image. The speech to text feature works without many mistakes and is useful in our Phimpme Android application.

Github

https://github.com/fossasia/phimpme-android

Resources

Tutorial for speech to Text: https://www.androidhive.info/2014/07/android-speech-to-text-tutorial/

To add description dialog box: https://developer.android.com/guide/topics/ui/dialogs.html

Developing LoklakWordCloud app for Loklak apps site

LoklakWordCloud app is an app to visualise data returned by loklak in the form of a word cloud.

The app is presently hosted on Loklak apps site.

Word clouds provide a very simple, easy, yet interesting and effective way to analyse and visualise data. This app allows users to create a word cloud out of twitter data via the Loklak API.

Presently the app is at a very early stage of development and more work is left to be done. The app consists of an input field where the user can enter a query word; on pressing the search button, a word cloud is generated using the words related to the entered query word.

Loklak API is used to fetch all the tweets which contain the query word entered by the user.

These tweets are processed to generate the word cloud.

Related issue: https://github.com/fossasia/apps.loklak.org/pull/279

Live app: http://apps.loklak.org/LoklakWordCloud/

Developing the app

The main challenge in developing this app is implementing its prime feature: generating the word cloud. How do we get a dynamic word cloud which can easily be generated based on the word the user has entered? Well, here comes in Jqcloud, an awesome lightweight jQuery plugin for generating word clouds. All we need to do is provide a list of words along with their weights.

Let us see step by step how this app (first version) works. First we require all the tweets which contain the entered word; for this we use the Loklak search service. Once we get all the tweets, we can parse the tweet bodies to create a list of words along with their frequencies.

var url = "http://35.184.151.104/api/search.json?callback=JSON_CALLBACK&count=100&q=" + query;
$http.jsonp(url)
    .then(function (response) {
        $scope.createWordCloudData(response.data.statuses);
        $scope.tweet = null;
    });

Once we have all the tweets, we need to extract the tweet texts and create a list of valid words. What are valid words? Words like 'the', 'is', 'a', 'for', 'of' and 'then' do not provide us with any important information and will not help us in doing any kind of analysis, so there is no use including them in our word cloud. Such words are called stop words and we need to get rid of them. For this we use a list of commonly used stop words; such lists can very easily be found over the internet. Here is the list which we are using. Once we are able to extract the text from the tweets, we filter out the stop words and insert the valid words into a list.

tweet = data[i];
tweetWords = tweet.text.replace(", ", " ").split(" ");

for (var j = 0; j < tweetWords.length; j++) {
    word = tweetWords[j];
    word = word.trim();
    if (word.startsWith("'") || word.startsWith('"') || word.startsWith("(") || word.startsWith("[")) {
        word = word.substring(1);
    }
    if (word.endsWith("'") || word.endsWith('"') || word.endsWith(")") || word.endsWith("]") ||
        word.endsWith("?") || word.endsWith(".")) {
        word = word.substring(0, word.length - 1);
    }
    if (stopwords.indexOf(word.toLowerCase()) !== -1) {
        continue;
    }
    if (word.startsWith("#") || word.startsWith("@")) {
        continue;
    }
    if (word.startsWith("http") || word.startsWith("https")) {
        continue;
    }
    $scope.filteredWords.push(word);
}

What are we actually doing in the above snippet? We are simply iterating over each of the statuses returned by the Loklak API. For each tweet, first we split the text into words and then iterate over those words. For a given word we do a number of checks. First we check if the word begins or ends with a special character, for example quotation marks or brackets; if so, we remove those characters, as they would cause trouble in calculating frequencies. Next we check if the word begins with '#' or '@'; if it does, we discard it, as hashtags and mentions are handled separately. Finally we check whether the word is a stop word, and if so we discard it. If a word passes all the checks, we add it to our list of valid words.

Once we are done with the tweet bodies, next we need to handle hashtags and mentions.

tweet.hashtags.forEach(function (hashtag) {
    $scope.filteredWords.push("#" + hashtag);
});

tweet.mentions.forEach(function (mention) {
    $scope.filteredWords.push("@" + mention);
});

The above code simply iterates over the hashtags and mentions and inserts them into the filteredWords list. We have handled hashtags and mentions separately so that we can apply filters in future.

Once we are done generating the list of valid words, we need to calculate a weight for each word. Here the weight is nothing but the number of times a particular word is present in the list. We calculate this using a JavaScript object: we iterate over the list of valid words, and if a word is not present in the object (or dictionary, as you wish to call it), we create a new key by the name of that word and set its value to one; if a word is already present as a key, we simply increment its value by one. A sketch of this counting step is shown below.
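This counting step is not shown in the original snippet; here is a minimal sketch of it, assuming the valid words live in $scope.filteredWords and the counts are collected in $scope.wordFreq, as in the rest of the code:

$scope.wordFreq = {};
$scope.filteredWords.forEach(function (word) {
    if ($scope.wordFreq[word] === undefined) {
        // first occurrence: create the key with count one
        $scope.wordFreq[word] = 1;
    } else {
        // seen before: just increment the count
        $scope.wordFreq[word]++;
    }
});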

for (var word in $scope.wordFreq) {
    $scope.wordCloudData.push({
        text: word,
        weight: $scope.wordFreq[word]
    });
}

The above code snippet then converts the frequency object into the list of {text, weight} objects which Jqcloud expects.

Now we are all set to generate our word cloud. We simply use Jqcloud’s interface to configure it with the words and their respective frequencies, provide a list of color codes for a color gradient, and set autoResize to true so that our word cloud resizes itself when the screen size changes.

$scope.generateWordCloud = function() {
    if ($scope.wordCloud === null) {
        $scope.wordCloud = $('.wordcloud').jQCloud($scope.wordCloudData, {
            colors: ["#D50000", "#FF5722", "#FF9800", "#4CAF50", "#8BC34A", "#4DB6AC", "#7986CB", "#5C6BC0", "#64B5F6"],
            fontSize: {
                from: 0.06,
                to: 0.01
            },
            autoResize: true
        });
    } else {
        $scope.wordCloud = $(".wordcloud").jQCloud('update', $scope.wordCloudData);
    }
}

Whenever the user searches for a new word, we simply update the existing word cloud with the cloud of the new word.

Future roadmap

  • Make the words in the cloud clickable. On clicking a word, the cloud should get replaced by the selected word’s cloud.
  • Add filters for hashtags, mentions, date.
  • Add option for exporting the cloud to an image, so that user’s can also use this app as a tool to generate word clouds as images and save them.
  • Add a loader and error notification for invalid or empty input.

Important resources

  • View the app source code here.
  • Learn more about Loklak API here.
  • Learn more about Jqcloud here.
  • Learn more about AngularJS here.

Enhancing SUSI Line Bot UI by Showing Carousels and Location Map

A good UI primarily focuses on attracting a large number of users, and any good UI designer aims to achieve user satisfaction with a user-friendly design. In this blog, we will learn how to show carousels and a location map in the SUSI Line bot to make the UI better and easier to use. We can show web search results from SUSI as plain text responses in the Line bot, but that doesn't follow good design practice, as it is not a good approach to show a lot of text in a constricted space; users deem such a layout less attractive. In SUSI webchat, a location is returned with a map, which looks more appealing to users, as illustrated below:

Web search is returned as carousels and is viewable as:

We can implement web search in Line by sending an array of text responses, as in the code below:

const msg = []; // declaration added for completeness
for (var i = 0; i < 5; i++) {
    msg[i] = {
        type: 'text',
        text: 'text to be sent here'
    };
}
return client.replyMessage(event.replyToken, msg);

If we send the web search as text responses it looks messy; a user will not like it much, as it is difficult to read and understand a lot of text in one place. You can see this in the screenshot below:

To make it look better, we will implement carousels for web search in the SUSI Line bot. Carousels are tiles that can be scrolled horizontally to show rich content. We can implement carousels using this code:

const carousel = []; // declaration added for completeness
for (var i = 1; i < 4; i++) {
    var title = 'title of carousel';
    var query = title;
    var msg = 'description of carousel here';
    var link = 'link to be opened here';
    if (title.length >= 40) {
        title = title.substring(0, 36);
        title = title + "...";
    }
    if (msg.length >= 60) {
        msg = msg.substring(0, 56);
        msg = msg + "...";
    }
    carousel[i] = {
        "title": title,
        "text": msg,
        "actions": [{
                "type": "uri",
                "label": "View detail",
                "uri": link
            },
            {
                "type": "message",
                "label": "Ask SUSI again",
                "text": query
            }
        ]
    };
}

In the above code, we check the lengths of the title and description because the Line API limits the title to 40 characters and the description to 60 characters. If the limit is exceeded, we trim the title or description accordingly before using it. Next, we assign the title, the description, and the link to be opened to the carousel's actions so that we can use them in the code below.

const answer = [{
       type: 'text',
       text: ans
   },
   {
       "type": "template",
       "altText": "this is a carousel template",
       "template": {
           "type": "carousel",
           "columns": [
               carousel[1],
               carousel[2],
               carousel[3]
           ]
       }
   }
]
return client.replyMessage(event.replyToken, answer);

The above code shows an array named answer to which we have added the carousels created in the previous snippet. After implementing this, the web search will look like this:

This UI looks neat and easy to read, and users will like it. "Ask SUSI again" sends the title of the carousel back to SUSI. To show a location map in the SUSI Line bot, just like it is shown in SUSI webchat, we use the following code:

// placeName, address, latitude and longitude are assumed to hold the location details
const answer = [{
        type: 'text',
        text: ans
    },
    {
        "type": "location",
        "title": placeName, // name of the place here
        "address": address,
        "latitude": latitude, // latitude of the location here
        "longitude": longitude // longitude of the location here
    }
];
return client.replyMessage(event.replyToken, answer);

We send a location type message which includes the latitude and longitude of the location, so that the Line bot can show the location using a map. After implementing the location type response, it will look like this:

On clicking this location, a map opens inside the Line app with a pointer at the location we have provided.

If you want to learn more about the Line messaging API, refer to https://devdocs.line.me/en/#messaging-api

Resources
https://devdocs.line.me/en/#messaging-api