Deploying loklak Server on Kubernetes with External Elasticsearch

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

kubernetes.io

Kubernetes is an awesome cloud platform which ensures that cloud applications run reliably. It offers automated deployments, flawless updates, smart rollouts and rollbacks, simple scaling and a lot more.

So as a part of GSoC, I worked on taking the loklak server to Kubernetes on the Google Cloud Platform. In this blog post, I will be discussing the approach followed to deploy the development branch of loklak on Kubernetes.

New Docker Image

Since Kubernetes deployments work on Docker images, we needed one for the loklak project. The existing image was not suitable for Kubernetes as it contained volume declarations and exposed ports. So I wrote a new Docker image which could be used in Kubernetes.

The image simply clones the loklak server repository, builds the project and starts the server as the CMD –

FROM alpine:latest

ENV LANG=en_US.UTF-8
ENV JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8

WORKDIR /loklak_server

RUN apk update && apk add openjdk8 git bash && \
    git clone https://github.com/loklak/loklak_server.git /loklak_server && \
    git checkout development && \
    ./gradlew build -x test -x checkstyleTest -x checkstyleMain -x jacocoTestReport
    # Some configurations and cleanups (omitted here)

CMD ["bin/start.sh", "-Idn"]

[SOURCE]

This image doesn't declare any volumes or expose any ports, leaving us free to configure them in the configuration files (discussed in a later section).

Building and Pushing Docker Image using Travis

To automatically build and push the image on a commit to the development or master branch, a Travis build is used. In the after_success section, a call to push the Docker image is made.

Travis environment variables hold the username and password for Docker Hub and are used for logging in –

docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD

[SOURCE]

We needed checks there to ensure that we are on the right branch for the push and that we are not handling a pull request –

# Build and push Kubernetes Docker image
KUBERNETES_BRANCH=loklak/loklak_server:latest-kubernetes-$TRAVIS_BRANCH
KUBERNETES_COMMIT=loklak/loklak_server:kubernetes-$TRAVIS_COMMIT
  
if [ "$TRAVIS_BRANCH" == "development" ]; then
    docker build -t loklak_server_kubernetes kubernetes/images/development
    docker tag loklak_server_kubernetes $KUBERNETES_BRANCH
    docker push $KUBERNETES_BRANCH
    docker tag $KUBERNETES_BRANCH $KUBERNETES_COMMIT
    docker push $KUBERNETES_COMMIT
elif [ "$TRAVIS_BRANCH" == "master" ]; then
    # Build and push master
else
    echo "Skipping Kubernetes image push for branch $TRAVIS_BRANCH"
fi

[SOURCE]
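The pull request check mentioned earlier is not part of this snippet; a minimal guard using Travis' built-in TRAVIS_PULL_REQUEST variable (a sketch, not the project's exact script) could look like this –

# TRAVIS_PULL_REQUEST is "false" for regular branch builds and the PR number otherwise
if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then
    echo "Skipping Docker image push for pull request builds"
    exit 0
fi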

Kubernetes Configurations for loklak

A Kubernetes cluster can be configured entirely using configuration files written in YAML. The deployment of loklak uses the previously built image. Initially, the image tagged as latest-kubernetes-development is used –

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: server
  namespace: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: server
        image: loklak/loklak_server:latest-kubernetes-development
        ...

[SOURCE]

Readiness and Liveness Probes

Probes act as top-level health checks for a deployment in Kubernetes. They are performed periodically to ensure that things are working fine, and appropriate steps are taken if they fail.

When the image is updated, the older pods keep running and serving requests. They are replaced by the new ones only when the probes on the new pods succeed; otherwise, the update is rolled back.

In loklak, the /api/status.json endpoint gives information about the status of the deployment and hence is a good target for probes –

livenessProbe:
  httpGet:
    path: /api/status.json
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 3
readinessProbe:
  httpGet:
    path: /api/status.json
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 3

[SOURCE]

These probes are performed periodically, and the server is restarted if they fail (i.e. a non-success HTTP status code is returned or the response takes more than 3 seconds).

Ports and Volumes

In the configurations, port 80 is exposed as this is where Jetty serves inside loklak –

ports:
- containerPort: 80
  protocol: TCP

[SOURCE]

Notice that this is the same port we used for running the probes. Since the development branch deployment holds no dumps, we didn't need to specify any explicit volumes for persistence.

Load Balancer Service

While creating the configurations, a new public IP is assigned to the deployment using Google Cloud Platform's load balancer, and the service listens on port 80 –

ports:
- containerPort: 80
  protocol: TCP

[SOURCE]
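For reference, the complete Service of type LoadBalancer exposing the deployment would look roughly like the following sketch (the name, namespace and selector are assumptions based on the Deployment shown earlier, not necessarily the project's exact configuration) –

apiVersion: v1
kind: Service
metadata:
  name: server
  namespace: web
spec:
  type: LoadBalancer        # asks GCP for an external load balancer and a public IP
  selector:
    app: server             # matches the pods created by the Deployment above
  ports:
  - port: 80                # port exposed by the service
    targetPort: 80          # port on which Jetty serves inside the container
    protocol: TCP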

Since this service creates a new public IP, it is recommended not to replace or recreate it, as doing so would result in a new public IP being assigned. Other components can be updated individually.

Kubernetes Configurations for Elasticsearch

To maintain a persistent index, this deployment requires an external Elasticsearch cluster. loklak can connect to an external Elasticsearch cluster by changing a few configurations.

Docker Image and Environment Variables

The image used for Elasticsearch is taken from pires/docker-elasticsearch-kubernetes. It allows Elasticsearch properties to be configured easily through environment variables. Here is a list of configurable variables, but we needed just a few of them for our task –

image: quay.io/pires/docker-elasticsearch-kubernetes:2.0.0
env:
- name: KUBERNETES_CA_CERTIFICATE_FILE
  value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- name: NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: "CLUSTER_NAME"
  value: "loklakcluster"
- name: "DISCOVERY_SERVICE"
  value: "elasticsearch"
- name: NODE_MASTER
  value: "true"
- name: NODE_DATA
  value: "true"
- name: HTTP_ENABLE
  value: "true"

[SOURCE]

Persistent Index using Persistent Cloud Disk

To make the index last even after the deployment is stopped, we needed a stable place to store all that data. Here, a Google Compute Engine standard persistent disk was used. The disk can be created using the GCP web portal or the gcloud CLI.
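For instance, the data-index-disk referenced in the configuration below can be created with a single gcloud command (the size and zone here are illustrative values, not necessarily the ones used for the actual deployment) –

# Create a standard persistent disk for the Elasticsearch index
gcloud compute disks create data-index-disk --size=100GB --zone=us-central1-a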

Before attaching the disk, we need to declare a volume where we could mount it –

volumeMounts:
- mountPath: /data
  name: storage

[SOURCE]

Now that we have a volume, we can simply mount the persistent disk on it –

volumes:
- name: storage
  gcePersistentDisk:
    pdName: data-index-disk
    fsType: ext4

[SOURCE]

Now, whenever we deploy these configurations, we can reuse the previous index.

Exposing Elasticsearch to the Cluster

The HTTP and transport endpoints are enabled on ports 9200 and 9300 respectively. They can be exposed to the rest of the cluster using the following service –

apiVersion: v1
kind: Service
...
spec:
  ...
  ports:
  - name: http
    port: 9200
    protocol: TCP
  - name: transport
    port: 9300
    protocol: TCP

[SOURCE]

Once deployed, other deployments can access the cluster API from ports 9200 and 9300.

Connecting loklak to Elasticsearch

To connect loklak to the external Elasticsearch cluster, the TransportClient Java API is used. In order to enable this, we simply need to make a few changes in the configuration.

Since the service named "elasticsearch" runs in the namespace "elasticsearch", we can access the cluster at the addresses elasticsearch.elasticsearch:9200 (HTTP) and elasticsearch.elasticsearch:9300 (transport).
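For reference, this is roughly what a connection through the TransportClient API looks like for Elasticsearch 2.x (a sketch, not loklak's actual code, which reads these values from config.properties as shown below) –

import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class TransportClientSketch {
    public static TransportClient connect() throws Exception {
        // The cluster name must match the CLUSTER_NAME configured for the Elasticsearch pods
        Settings settings = Settings.settingsBuilder()
                .put("cluster.name", "loklakcluster")
                .build();
        // elasticsearch.elasticsearch resolves to the "elasticsearch" service in the "elasticsearch" namespace
        return TransportClient.builder().settings(settings).build()
                .addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName("elasticsearch.elasticsearch"), 9300));
    }
}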

To confine these changes to the Kubernetes deployment only, we can use the sed command while building the image (in the Dockerfile) –

sed -i.bak 's/^\(elasticsearch_transport.enabled\).*/\1=true/' conf/config.properties && \
sed -i.bak 's/^\(elasticsearch_transport.addresses\).*/\1=elasticsearch.elasticsearch:9300/' conf/config.properties && \

[SOURCE]

Now, when we create the deployments in the Kubernetes cluster, loklak automatically connects to the external Elasticsearch cluster and creates indices if needed.

Verifying persistence of the Elasticsearch Index

In order to see that the data persists, we can completely delete the deployment or even the cluster if we want. Later, when we recreate the deployment, we can see all the messages already present in the index.

[2017-07-29 09:42:51,804][INFO ][node                     ] [Hellion] initializing ...
[2017-07-29 09:42:52,024][INFO ][plugins                  ] [Hellion] loaded [cloud-kubernetes], sites []
[2017-07-29 09:42:52,055][INFO ][env                      ] [Hellion] using [1] data paths, mounts [[/data (/dev/sdb)]], net usable_space [84.9gb], net total_space [97.9gb], spins? [possibly], types [ext4]
[2017-07-29 09:42:53,543][INFO ][node                     ] [Hellion] initialized
[2017-07-29 09:42:53,543][INFO ][node                     ] [Hellion] starting ...
[2017-07-29 09:42:53,620][INFO ][transport                ] [Hellion] publish_address {10.8.1.13:9300}, bound_addresses {10.8.1.13:9300}
[2017-07-29 09:42:53,633][INFO ][discovery                ] [Hellion] loklakcluster/cJtXERHETKutq7nujluJvA
[2017-07-29 09:42:57,866][INFO ][cluster.service          ] [Hellion] new_master {Hellion}{cJtXERHETKutq7nujluJvA}{10.8.1.13}{10.8.1.13:9300}{master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2017-07-29 09:42:57,955][INFO ][http                     ] [Hellion] publish_address {10.8.1.13:9200}, bound_addresses {10.8.1.13:9200}
[2017-07-29 09:42:57,955][INFO ][node                     ] [Hellion] started
[2017-07-29 09:42:58,082][INFO ][gateway                  ] [Hellion] recovered [8] indices into cluster_state

In the last line from the logs, we can see that indices already present on the disk were recovered. Now if we head to the public IP assigned to the cluster, we can see that the message count is restored.

Conclusion

In this blog post, I discussed how we utilised Kubernetes to shift loklak to the Google Cloud Platform. The deployment is active and can be accessed from the link provided under the wiki section of the loklak/loklak_server repo.

I introduced these changes in pull request loklak/loklak_server#1349 with the help of @niranjan94, @uday96 and @chiragw15.

Resources


Caching Elasticsearch Aggregation Results in loklak Server

To provide aggregated data for various classifiers, loklak uses Elasticsearch aggregations. Aggregated data tells a lot more than a few individual instances can. But performing aggregations on each request can be very resource intensive, so we needed to come up with a way to reduce this load.

In this post, I will be discussing how I came up with a caching model for the aggregated data from the Elasticsearch index.

Fields to Consider while Caching

At the classifier endpoint, aggregations can be requested based on the following fields –

  • Classifier Name
  • Classifier Classes
  • Countries
  • Start Date
  • End Date

But to cache results, we can ignore cases where only a few classes or countries are requested and store aggregations for all of them instead. So the fields that define the cache entry to look for will be –

  • Classifier Name
  • Start Date
  • End Date

Type of Cache

The data structure used for caching is Java's HashMap. It maps a special string key to a wrapper object; both are discussed in the sections below.
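In code, this boils down to a map from the string key to the wrapper object; the declaration is essentially (an assumed form, using java.util.HashMap) –

// Assumed declaration: maps the composite string key to the cached wrapper object
private static final Map<String, JSONObjectWrapper> cacheMap = new HashMap<>();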

Key

The key is built using the fields mentioned previously –

private static String getKey(String index, String classifier, String sinceDate, String untilDate) {
    return index + "::::"
        + classifier + "::::"
        + (sinceDate == null ? "" : sinceDate) + "::::"
        + (untilDate == null ? "" : untilDate);
}

[SOURCE]

In this way, we can handle requests where a user asks for every class there is, without running the expensive aggregation job every time. This is because the key for such requests will be the same, as we are not considering country and class for this purpose.
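For instance, two requests for the same classifier and date range, but with different classes or countries, map to the same key (the index and classifier names here are illustrative) –

// Illustrative values: class/country filters never reach getKey, so both keys are identical
String k1 = getKey("messages", "emotion", "2017-07-01", "2017-07-29"); // request for class "joy"
String k2 = getKey("messages", "emotion", "2017-07-01", "2017-07-29"); // request for all classes, country "IN"
assert k1.equals(k2);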

Value

The object used as the value in the HashMap is a wrapper containing the following –

  1. json – It is a JSONObject containing the actual data.
  2. expiry – It is the expiry of the object in milliseconds.

class JSONObjectWrapper {
    private JSONObject json; 
    private long expiry;
    ... 
}

Timeout

The timeout associated with a cache entry is defined in the configuration file of the project as "classifierservlet.cache.timeout". It defaults to 5 minutes and is used to set the expiry of a cached JSONObject –

class JSONObjectWrapper {
    ...
    private static long timeout = DAO.getConfig("classifierservlet.cache.timeout", 300000);

    JSONObjectWrapper(JSONObject json) {
        this.json = json;
        this.expiry = System.currentTimeMillis() + timeout;
    }
    ...
}
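The wrapper also needs to report whether it has expired (used as jw.isExpired() during the cache lookup below); a minimal method consistent with the expiry field above would be –

class JSONObjectWrapper {
    ...
    boolean isExpired() {
        // The entry is stale once the current time passes the stored expiry timestamp
        return System.currentTimeMillis() > this.expiry;
    }
}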

 

Cache Hit

For searching in the cache, the previously mentioned string is composed from the parameters requested by the user. Checking for a cache hit can be done in the following manner –

String key = getKey(index, classifier, sinceDate, untilDate);
if (cacheMap.keySet().contains(key)) {
    JSONObjectWrapper jw = cacheMap.get(key);
    if (!jw.isExpired()) {
        // Do something with jw
    }
}
// Calculate the aggregations
...

But since jw here would contain all the data, we would need to filter out the classes and countries which are not needed.

Filtering results

For filtering out the parts which do not contain the information requested by the user, we can perform a simple pass and exclude the results that are not needed.

Since the number of fields to filter out, i.e. classes and countries, would not be that high, this process is not very resource intensive. At the same time, it saves us from running heavy aggregation tasks for every user request.

Since the data about classes is nested inside the respective country field, we need to perform two levels of filtering –

JSONObject retJson = new JSONObject(true);
for (String key : json.keySet()) {
    JSONArray value = filterInnerClasses(json.getJSONArray(key), classes);
    if ("GLOBAL".equals(key) || countries.contains(key)) {
        retJson.put(key, value);
    }
}
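The filterInnerClasses helper used above is not shown in this post, and its exact logic depends on how the aggregation JSON is laid out. Purely as a hypothetical illustration, assuming each country's array holds one JSONObject per class with an assumed "class" field, it could look like this –

// Hypothetical sketch only: the field names and JSON layout are assumed, not loklak's actual structure
private static JSONArray filterInnerClasses(JSONArray countryData, Set<String> classes) {
    JSONArray filtered = new JSONArray();
    for (int i = 0; i < countryData.length(); i++) {
        JSONObject entry = countryData.getJSONObject(i);
        // Keep only the entries whose (assumed) "class" field was requested
        if (classes.contains(entry.optString("class"))) {
            filtered.put(entry);
        }
    }
    return filtered;
}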

Cache Miss

In the case of a cache miss, the helper functions are called from ElasticsearchClient.java to get results. These results are then converted from a HashMap into a JSONObject and stored in the cache for future use.

JSONObject freshCache = getFromElasticsearch(index, classifier, sinceDate, untilDate);
cacheMap.put(key, new JSONObjectWrapper(freshCache));

The getFromElasticsearch method finds all the possible classes and makes a request to the appropriate method in ElasticsearchClient, getting data for all classes and all countries.

Conclusion

In this blog post, I discussed the need for caching of aggregations and the way it is achieved in the loklak server. This feature was introduced in pull request loklak/loklak_server#1333 by @singhpratyush (me).

Resources


Adding tip to drop downs in Susper using CSS in Angular

Creating simple drop-downs using Twitter Bootstrap is fairly easy for developers. The issue faced in Susper, however, was to add a tip on top of such drop-downs, similar to Google:

This is how it looks finally, in Susper, with a tip over the standard rectangular drop-down:

This is how it was done:

  1. First, make sure you have designed your drop-down according to your requirements, added the desired height, width and padding. These were the specifications used in Susper’s drop-down.

.dropdown-menu {
  height: 500px;
  width: 327px;
  padding: 28px;
}
  2. Next, add the following code to your drop-down class CSS:

.dropdown-menu:before {
  position: absolute;
  top: -7px;
  right: 19px;
  display: inline-block;
  border-right: 7px solid transparent;
  border-bottom: 7px solid #ccc;
  border-left: 7px solid transparent;
  border-bottom-color: rgba(0, 0, 0, 0.2);
  content: '';
}
.dropdown-menu:after {
  position: absolute;
  top: -5px;
  right: 20px;
  display: inline-block;
  border-right: 6px solid transparent;
  border-bottom: 6px solid #ffffff;
  border-left: 6px solid transparent;
  content: '';
}

In CSS, :before inserts content before an element's own content, whereas :after inserts content after it. Some of the parameters are explained here:

  • Top: can be used to change the position of the menu tip vertically, according to the position of your button and menu.
  • Right: can be used to change the position of the menu tip horizontally, so that it is positioned just below the menu icon.
  • Position: absolute is used to make sure the offsets are absolute and not relative to a higher div in the hierarchy.
  • Border: the border attributes specify the border thickness, color and transparency of the :before and :after pseudo-elements, which collectively gives the effect of a tip for the drop-down.
  • Content: this value is set to a blank string '', because otherwise none of our changes will be visible, since the pseudo-elements will have no space allocated to them.

Resources


Implementation of Customizable Instant Search on Susper using Local Storage

Results on Susper can be displayed instantly as the user types a query. Until recently this behaviour was fixed, with no option for the user to customise it. Now this feature can be turned on and off.

To turn this feature on or off, visit 'Search Settings' on Susper at http://susper.com/preferences, where you will find the different options to choose from.

How did we implement this feature?

Searchsettings.component.html:

<div>
 <h4><strong>Susper Instant Predictions</strong></h4>
 <p>When should we show you results as you type?</p>
 <input name="options" [(ngModel)]="instantresults" disabled value="#" type="radio" id="op1"><label for="op1">Only when my computer is fast enough</label><br>
 <input name="options" [(ngModel)]="instantresults" [value]="true" type="radio" id="op2"><label for="op2">Always show instant results</label><br>
 <input name="options" [(ngModel)]="instantresults" [value]="false" type="radio" id="op3"><label for="op3">Never show instant results</label><br>
</div>

The user is shown options to choose from regarding instant search. When the user selects a new option, the selection is stored in the instantresults variable of the search settings component using ngModel.

Searchsettings.component.ts:

Later, when the user clicks on the save button, the instantresults value is stored in the browser's localStorage –

onSave() {
 if (this.instantresults) {
   localStorage.setItem('instantsearch', JSON.stringify({value: true}));
 } else {
   localStorage.setItem('instantsearch', JSON.stringify({ value: false }));
   localStorage.setItem('resultscount', JSON.stringify({ value: this.resultCount }));
 }
 this.router.navigate(['/']);
}

 

Later, this value is retrieved from localStorage whenever a user enters a query in the search bar component, and the search is made according to the user's preference.

Searchbar.component.ts


onquery(event: any) {
 this.store.dispatch(new query.QueryAction(event));
 let instantsearch = JSON.parse(localStorage.getItem('instantsearch'));

 if (instantsearch && instantsearch.value) {
   this.store.dispatch(new queryactions.QueryServerAction({'query': event, start: this.searchdata.start, rows: this.searchdata.rows}));
   this.displayStatus = 'showbox';
   this.hidebox(event);
 } else {
   if (event.which === 13) {
     this.store.dispatch(new queryactions.QueryServerAction({'query': event, start: this.searchdata.start, rows: this.searchdata.rows}));
     this.displayStatus = 'showbox';
     this.hidebox(event);
   }
 }
}

 

Interaction of different components here:

  1. First, we set the instantresults object in Local Storage from the search settings component.
  2. Later, this value is retrieved and used by the search bar component using the localStorage.getItem() method to decide whether to display results instantly or not.

The GIF below shows how you can use this feature in Susper to customise instant results in your browser.

References:

 


Using Hidden Attribute for Angular in Susper

In Angular, we can use the hidden attribute to hide and show different components of the page. This blog explains what the hidden attribute is, how it works and how to use it for some common tasks.
In Susper, we used the [hidden] attribute for two kinds of tasks.

  1. To hide components of the page until all the search results load.
  2. To hide components of the page, if they were meant to appear only in particular cases (say only the first page of the search results etc).

Let us now see how we apply this in an HTML file.
Use the [hidden] attribute on the component to specify a flag variable responsible for hiding it.
When this variable is set to true or 1, the component is hidden; otherwise it is shown.
Here is an example of how the [hidden] attribute is used:

<app-infobox [hidden]="hidefooter" class="infobox col-md-4" *ngIf="Display('all')"></app-infobox>

Note that [hidden] simply sets the CSS of the component to { display: none }, whereas with *ngIf, the component is not loaded into the DOM at all.
So, in this case, unless Display('all') returns true the component is not even loaded into the DOM; but if [hidden] is set to true, the component is still present, only not displayed.
In the typescript files, here is how the two tasks are performed:
1. To hide components of the page until all the search results load:

this.querychange$ = store.select(fromRoot.getquery);
this.querychange$.subscribe(res => {
  this.hidefooter = 1;
});

this.responseTime$ = store.select(fromRoot.getResponseTime);
this.responseTime$.subscribe(responsetime => {
  this.hidefooter = 0;
});

The component is hidden as soon as the query request is sent. It is then kept hidden until the results for that query are available.

2. To hide components of the page, if they were meant to appear only in particular cases.
For example, if you wish to show a component like Autocorrect only when you are on the first page of the search results, here is how you can do it:

if (this.presentPage === 1) {
  this.hideAutoCorrect = 0;
} else {
  this.hideAutoCorrect = 1;
}

This should hopefully give you a good idea on how to use the hidden attribute. These resources can be referred to for more information.


Crawl Job Feature For Susper To Index Websites

The Yacy backend provides search results for Susper using a web crawler (or spider) to crawl and index data from the internet. It also requires some minimal input from the user.

As stated by Michael Christen (@Orbiter) “a web index is created by loading a lot of web pages first, then parsing the content and placing the result into a search index. The question is: how to get a large list of URLs? This is solved by a crawler: we start with a single web page, extract all links, then load these links and go on. The root of such a process is the ‘Crawl Start’.”

Yacy has a web crawler module that can be accessed from here: http://yacy.searchlab.eu/CrawlStartExpert.html. As we would like to have a fully supported front end for Yacy, we also introduced a crawler in Susper. Using the crawler, one can tell Yacy what to do and how to crawl a URL so that search results get indexed on the Yacy server. To support the indexing of web pages with the help of the Yacy server, we implemented a 'Crawl Job' feature in Susper.

1) Visit http://susper.com/crawlstartexpert and give information regarding the sites you want Susper to crawl. Currently, the crawler accepts an input of URLs or a file containing URLs. You can customise the crawling process by tweaking crawl parameters like crawling depth, maximum pages per domain, filters, excluding media etc.

2) Once crawl parameters are set, click on ‘Start New Crawl Job’ to start the crawling process.

3) It will raise a basic authentication pop-up. After filling it in, the user will receive a success alert and will be redirected back to the home page.

The crawl job will then be started on the Yacy server according to the crawling parameters.

Implementation of Crawler on Susper:

We have created a separate component and service in Susper for the crawler.

Source code can be found at:

When the user initiates the crawl job by pressing the start button, it calls the startCrawlJob() function from the component, which in turn calls the CrawlStart service. We send crawlvalues to the service and subscribe to the returned observable, which confirms whether the crawl job has started or not.

crawlstart.component.ts:-

startCrawlJob() {
 this.crawlstartservice.startCrawlJob(this.crawlvalues).subscribe(res => {
   alert('Started Crawl Job');
   this.router.navigate(['/']);
 }, (err) => {
   if (err === 'Unauthorized') {
     alert("Authentication Error");
   }
 });
};

 

When startCrawlJob() in the service is called, it creates a URLSearchParams object with a parameter for each key in the input and sends it to the Yacy server through a JSONP request.

crawlstart.service.ts

startCrawlJob(crawlvalues) {
 let params = new URLSearchParams();
 for (let key in crawlvalues) {
   if (crawlvalues.hasOwnProperty(key)) {
     params.set(key, crawlvalues[key]);
   }

 }
 params.set('callback', 'JSONP_CALLBACK');


 let options = new RequestOptions({ search: params });
 return this.jsonp
   .get('http://yacy.searchlab.eu/Crawler_p.json', options)
   .map(res => res.json());

}

Resources:


Using @Output EventEmitter to Hide Search Suggestions in Angular for Susper Web App

Problem: In Susper the suggestions box doesn’t hide when there are no suggestions. To fix this, we have used @Output to create interaction between the search bar and suggestions box.

Susper gives suggestions to the user as the user types a query. These suggestions are retrieved from the suggest.json endpoint of the Yacy server.

We have a separate component for searching a query and a separate component for showing suggestions (auto-complete.component.ts). The architectural link between the query box, suggestion box and the results page is a bit complicated.

The search bar and the auto-complete component don't interact directly. Whenever a new query is entered, the search bar triggers an action with a payload including the query. On receiving the new query, the auto-complete component calls the Yacy server to get suggestions from the endpoint and displays them inside the suggestion box. The original search bar implementation opened the suggestion box on every new query, even if there were no results. So there should be a way to inform the search bar component that the suggestion box has received empty results, so that the search bar can hide it.

To achieve this, we used @Output to emit an event –

@Output() hidecomponent: EventEmitter<any> = new EventEmitter<any>();

autocomplete.component.ts:-

this.autocompleteservice.getsearchresults(query).subscribe(res => {
  if (res) {
    if (res[0]) {
      this.results = res[1];
      if (this.results.length === 0) {
        this.hidecomponent.emit(1);
      } else {
        this.hidecomponent.emit(0);
      }
    }
  }
});

 

Then, in the search bar component, this event is bound to a function hidesuggestions() which takes care of hiding the suggestion box.

searchbar.component.html

<app-auto-complete (hidecomponent)="hidesuggestions($event)" id="auto-box" [hidden]="!ShowAuto()"></app-auto-complete>

 

searchbar.component.ts

hidesuggestions(data: number) {
 if (data === 1) {
   this.displayStatus = 'hidebox';
 } else {
   this.displayStatus = 'showbox';
 }
}
ShowAuto() {
 return (this.displayStatus === 'showbox');
}

 

Here you can see that the auto-complete component's hidden attribute is bound to the ShowAuto() function defined in searchbar.component.ts, which handles the interaction and hides the suggestion box whenever there are no results.

The GIF below shows how this suggestions feature works on Susper.

Source code related to this implementation is available in this pull request.

References:


Multiple Page Rendering on a Single Query in Susper Angular Front-end

Problem: Susper used to render a new results page for each new character input, while it should render a single page for the final query, as reported in issue 371. For instance, the browser's back button showed five pages, one for each of the five characters entered as a query.

Solution: This problem was arising due to code:

this.router.navigate(['/search'], {queryParams: this.searchdata});

Earlier, we had this one line in the search-bar component, which got called on each character entry.

Fix: To fix this issue, we required calling router.navigate only when we receive results, and not on each character input.

So, we first removed the line which was the cause of this issue from the search-bar component and replaced it with –

this.store.dispatch(new queryactions.QueryServerAction(query));

 

This triggers a QueryServer action and makes a request to the Yacy endpoint for search results.

Now in app.component.ts, we subscribe to resultscomponentchange$, which emits only when new search results are received, and hence we navigate to a new page only after the resultscomponentchange subscription fires.

this.resultscomponentchange$ = store.select(fromRoot.getItems);
this.resultscomponentchange$.subscribe(res => {
 if (this.searchdata.query.length > 0) {
   this.router.navigate(['/search'], {queryParams: this.searchdata});
 }

});
this.wholequery$ = store.select(fromRoot.getwholequery);
this.wholequery$.subscribe(data => {
 this.searchdata = data;
});
if (localStorage.getItem('resultscount')) {
 this.store.dispatch(new queryactions.QueryServerAction({'query': '', start: 0, rows: 10, search: false}));
}

 

 

Finally, this problem got fixed and now only one page is rendered for a valid search. Source code for this implementation is available in this pull request.

Resources:


Using RouterLink in the Susper Angular Frontend to Speed up the Loading Time

In Susper, whenever the user clicked on certain links, the whole application used to load again, thereby taking more time to load the page. But in Single Page Applications (SPAs) we don't need to load the whole application. In fact, SPAs are known to load internal pages faster than traditional HTML web pages. To achieve this, we have to inform the application that a link redirects the user to an internal page, so that the application doesn't reload completely and reinitialize itself. In Angular, this can be done by replacing href with routerLink on the anchor tag.

routerLink, when used with an anchor tag as

<a routerLink="/contact" routerLinkActive="active">Contact</a>

doesn't load the whole page; instead the Angular router renders only the contact component in place of <router-outlet></router-outlet>.

The full page is never re-requested from the server; only the required component is rendered, thereby reducing the time it takes and avoiding a complete reload of the page.
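For context, routerLink paths are resolved against the application's route configuration, which maps each path to the component rendered inside <router-outlet>. A minimal sketch (with stub components standing in for Susper's real ones) looks like this –

import { Component, NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

// Stub components, purely illustrative; Susper's actual components differ
@Component({ template: '<p>contact</p>' })
export class ContactComponent {}

@Component({ template: '<p>results</p>' })
export class ResultsComponent {}

// routerLink="/contact" and router.navigate(['/search']) resolve against routes like these
const appRoutes: Routes = [
  { path: 'contact', component: ContactComponent },
  { path: 'search', component: ResultsComponent }
];

@NgModule({
  declarations: [ContactComponent, ResultsComponent],
  imports: [RouterModule.forRoot(appRoutes)],
  exports: [RouterModule]
})
export class AppRoutingModuleSketch {}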

The time graph below shows the requests made when an anchor tag with href was clicked.

If you observe it takes more than 3 seconds to load the page.

But when you use [routerLink] as an attribute for navigation, you find the page being displayed in just a blink.

What have we done in Susper?

In Susper, in issue #167, @mariobehling noticed that some links were loading slowly. On looking at the issue and doing a test run, I found that the problem was the whole page being reloaded, and immediately checked the anchor tags, finding that an "href" attribute was used instead of the "[routerLink]" Angular attribute. I made a pull request changing href to "[routerLink]", thereby making Susper around 3x faster than before.

https://github.com/fossasia/susper.com/pull/234/files

References


Implementing a suggestion box in Susper Angular JS Front-end

In Susper, we have implemented a suggestion box for our input box. This was done using the Yacy suggest API. This is how the box looks:

In this blog, let us see how to implement such a box using HTML, CSS, Twitter Bootstrap and TypeScript. You can also check the code at the Susper repository.

The HTML code is simple and straightforward:

 

<div class="suggestion-box" *ngIf="results">
  ...
</div>

A few points to notice:

  • *ngIf=”results” ensures that the box is displayed only when it has suggestions to display and not otherwise
  • The [routerLink] and [queryParams] attributes together link every result to the search page, with the correct query.

This is the CSS code:

a {
  text-decoration: none;
}

.suggestions {
  font-family: Arial, sans-serif;
  font-size: 17px;
  margin-left: 2.4%;
}

.query-suggestions:hover {
  background: #E3E3E3;
}

.suggestion-box {
  width: 635.2px;
  max-width: 100%;
  border: 1px solid rgba(150, 150, 150, 0.3);
  background-color: white;
  margin-left: -25.7px;
  position: absolute;
  box-shadow: 0px 0.2px 0px;
}

A few points to notice again:

    • Box-shadow: This gives the drop-down a shadow effect. The first 3 parameters are for dimensions (X-offset, Y-offset, Blur). The rgba specifies color, with parameters as (red-component, green-component, blue-component, opacity).
    • Text-decoration: This attribute is used to add/remove decoration like underline for links.
    • Font-family: The font-family mentioned here is Arial, if Arial is unavailable sans-serif is used.

In the TypeScript file, there are two major tasks:

  1. Splice the results if greater than five. A neat suggestion-box should have a maximum of five results, hence we splice the results:

this.results = this.results.concat(res[0]);
if (this.results.length > 5) {
  this.results = this.results.splice(0, 5);
}

  2. Hide the suggestion-box if there are no suggestions from the API:
@Output() hidecomponent: EventEmitter<any> = new EventEmitter<any>();

this.query$.subscribe(query => {
  if (query) {
    this.autocompleteservice.getsearchresults(query).subscribe(res => {
      if (res) {
        this.results = res[1];
        if (this.results.length === 0) {
          this.hidecomponent.emit(1);
        } else {
          this.hidecomponent.emit(0);
        }
      }
    });
  }
});

If you want a more elaborate picture, you can view the entire html, css and typescript files of the auto-complete component.

This tutorial, http://4dev.tech/2016/03/tutorial-creating-an-angular2-autocomplete/ is very useful in case you want to implement auto complete suggestion feature from scratch.

In addition, this stack overflow thread has some interesting insights too: https://stackoverflow.com/questions/35881815/implementing-autocomplete-for-angular2
