Implementing Health Check Endpoint in Open Event Server

A health check endpoint was required in the Open Event Server to be used by Kubernetes to know when the web instance is ready to receive requests.

Following are the checks that were our primary focus for health checks:

  • Connection to the database.
  • Ensure the SQLAlchemy models are in line with the migrations.
  • Connection to celery workers.
  • Connection to redis instance.

The Runscope/healthcheck library seemed like the way to go. Healthcheck wraps a Flask app object and adds a way to write simple health-check functions that can be used to monitor your application. It's useful for asserting that your dependencies are up and running and that your application can respond to HTTP requests. The healthcheck functions are exposed via a user-defined Flask route, so you can use an external monitoring application (monit, nagios, Runscope, etc.) to check the status and uptime of your application.

The health check endpoint was implemented at /health-check as follows:

from flask import current_app
from healthcheck import HealthCheck

health = HealthCheck(current_app, "/health-check")

 

Following is the function for checking the connection to the database:

def health_check_db():
    """
    Check health status of db
    :return:
    """
    try:
        db.session.execute('SELECT 1')
        return True, 'database ok'
    except Exception:
        sentry.captureException()
        return False, 'Error connecting to database'

 

Check functions take no arguments and should return a tuple of (bool, str). The boolean is whether or not the check passed. The message is any string or output that should be rendered for this check. Useful for error messages/debugging.

The above function executes a query on the database to check whether it is connected properly. If the query runs successfully, it returns the tuple (True, 'database ok'). If there is an error connecting to the database, an exception will be raised; sentry.captureException() makes sure that the Sentry instance receives a proper exception event with all the information about it. The tuple returned in this case will be (False, 'Error connecting to database').

Finally, to add this check to the endpoint:

health.add_check(health_check_db)

Following is the response for a successful health check:

{
   "status": "success",
   "timestamp": 1500915121.52474,
   "hostname": "shubham",
   "results": [
       {
           "output": "database ok",
           "checker": "health_check_db",
           "expires": 1500915148.524729,
           "passed": true,
           "timestamp": 1500915121.524729
       }
   ]
}

If the database is not connected, the following error will be shown:

{
           "output": "Error connecting to database",
           "checker": "health_check_db",
           "expires": 1500965798.307425,
           "passed": false,
           "timestamp": 1500965789.307425
}
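The remaining checks from the list at the top (migrations, Celery workers, Redis) follow the same pattern of returning a (bool, str) tuple. As a rough sketch, a Redis check might look like the following; the redis_store client object is an assumption and may be exposed under a different name in the project:

def health_check_redis():
    """
    Check health status of the redis instance
    :return:
    """
    try:
        # redis_store is assumed to be a redis.StrictRedis client
        redis_store.ping()
        return True, 'redis ok'
    except Exception:
        sentry.captureException()
        return False, 'Error connecting to redis'

health.add_check(health_check_redis)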

Related:

Deploying loklak Server on Kubernetes with External Elasticsearch


Kubernetes is an awesome cloud platform, which ensures that cloud applications run reliably. It runs automated tests, flawless updates, smart roll out and rollbacks, simple scaling and a lot more.

So as a part of GSoC, I worked on taking the loklak server to Kubernetes on Google Cloud Platform. In this blog post, I will be discussing the approach followed to deploy the development branch of loklak on Kubernetes.

New Docker Image

Since Kubernetes deployments work on Docker images, we needed one for the loklak project. The existing image was not suitable for Kubernetes as it declared volumes and exposed ports. So I wrote a new Docker image which could be used in Kubernetes.

The image simply clones the loklak server, builds the project and starts the server as the CMD:

FROM alpine:latest

ENV LANG=en_US.UTF-8
ENV JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8

WORKDIR /loklak_server

RUN apk update && apk add openjdk8 git bash && \
    git clone https://github.com/loklak/loklak_server.git /loklak_server && \
    git checkout development && \
    ./gradlew build -x test -x checkstyleTest -x checkstyleMain -x jacocoTestReport && \
    # Some Configurations and Cleanups

CMD ["bin/start.sh", "-Idn"]

[SOURCE]

This image doesn't declare any volumes or exposed ports, so we are free to configure them in the configuration files (discussed in a later section).

Building and Pushing Docker Image using Travis

To automatically build and push the image on a commit to the development or master branch, the Travis build is used. In the after_success section, a call to push the Docker image is made.

Travis environment variables hold the username and password for Docker Hub and are used for logging in –

docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD

[SOURCE]

We needed checks there to ensure that we are on the right branch for the push and we are not handling a pull request –

# Build and push Kubernetes Docker image
KUBERNETES_BRANCH=loklak/loklak_server:latest-kubernetes-$TRAVIS_BRANCH
KUBERNETES_COMMIT=loklak/loklak_server:kubernetes-$TRAVIS_COMMIT
  
if [ "$TRAVIS_BRANCH" == "development" ]; then
    docker build -t loklak_server_kubernetes kubernetes/images/development
    docker tag loklak_server_kubernetes $KUBERNETES_BRANCH
    docker push $KUBERNETES_BRANCH
    docker tag $KUBERNETES_BRANCH $KUBERNETES_COMMIT
    docker push $KUBERNETES_COMMIT
elif [ "$TRAVIS_BRANCH" == "master" ]; then
    # Build and push master
else
    echo "Skipping Kubernetes image push for branch $TRAVIS_BRANCH"
fi

[SOURCE]
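The pull request check mentioned above is not part of the snippet; a common way to express it with Travis environment variables (the exact condition used in the repository may differ) is –

if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then
    echo "Skipping Docker image push for pull request builds"
    exit 0
fi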

Kubernetes Configurations for loklak

A Kubernetes cluster can be configured completely using configurations written in YAML format. The deployment of loklak uses the previously built image. Initially, the image tagged as latest-kubernetes-development is used –

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: server
  namespace: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: server
        image: loklak/loklak_server:latest-kubernetes-development
        ...

[SOURCE]

Readiness and Liveness Probes

Probes act as the top-level test for the health of a deployment in Kubernetes. They are performed periodically to ensure that things are working fine, and appropriate steps are taken if they fail.

When a new image is rolled out, the older pod still runs and serves the requests. It is replaced by the new one only when the probes are successful; otherwise, the update is rolled back.

In loklak, the /api/status.json endpoint gives information about the status of the deployment and hence is a good target for probes –

livenessProbe:
  httpGet:
    path: /api/status.json
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 3
readinessProbe:
  httpGet:
    path: /api/status.json
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 3

[SOURCE]

These probes are performed periodically, and the server is restarted if they fail (a non-success HTTP status code or a response taking more than 3 seconds).
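The probe interval is not set explicitly in the snippet above, so Kubernetes falls back to its default of one probe every 10 seconds. If needed, it can be tuned with additional fields; the values below are illustrative and not taken from the actual configuration –

livenessProbe:
  httpGet:
    path: /api/status.json
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 3
  periodSeconds: 10
  failureThreshold: 3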

Ports and Volumes

In the configurations, port 80 is exposed as this is where Jetty serves inside loklak –

ports:
- containerPort: 80
  protocol: TCP

[SOURCE]

Notice that this is the same port we used for running the probes. Since the development branch deployment holds no dumps, we didn't need to specify any explicit volumes for persistence.

Load Balancer Service

While creating the configurations, a new public IP is assigned to the deployment using Google Cloud Platform's load balancer, which starts listening on port 80. A minimal sketch of such a LoadBalancer service (the names and selector follow the Deployment shown earlier; the actual configuration is in the linked source) –

apiVersion: v1
kind: Service
metadata:
  name: server
  namespace: web
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: server

[SOURCE]

Since this service creates a new public IP, it is recommended not to replace or recreate this service, as doing so would result in the creation of a new public IP. Other components can be updated individually.

Kubernetes Configurations for Elasticsearch

To maintain a persistent index, this deployment requires an external Elasticsearch cluster. loklak can connect to an external Elasticsearch cluster by changing a few configurations.

Docker Image and Environment Variables

The image used for Elasticsearch is taken from pires/docker-elasticsearch-kubernetes. It allows easy configuration of Elasticsearch properties through environment variables in the Kubernetes configuration. There is a long list of configurable variables, but we needed just a few of them for our task –

image: quay.io/pires/docker-elasticsearch-kubernetes:2.0.0
env:
- name: KUBERNETES_CA_CERTIFICATE_FILE
  value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- name: NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: "CLUSTER_NAME"
  value: "loklakcluster"
- name: "DISCOVERY_SERVICE"
  value: "elasticsearch"
- name: NODE_MASTER
  value: "true"
- name: NODE_DATA
  value: "true"
- name: HTTP_ENABLE
  value: "true"

[SOURCE]

Persistent Index using Persistent Cloud Disk

To make the index last even after the deployment is stopped, we needed a stable place where we could store all that data. Here, Google Compute Engine’s standard persistent disk was used. The disk can be created using GCP web portal or the gcloud CLI.
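For example, a disk matching the pdName used in the configuration below could be created with a command along these lines (the size and zone here are placeholders, not the values used for loklak) –

gcloud compute disks create data-index-disk \
    --size=100GB \
    --zone=us-central1-a \
    --type=pd-standard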

Before attaching the disk, we need to declare a volume where we could mount it –

volumeMounts:
- mountPath: /data
  name: storage

[SOURCE]

Now that we have a volume, we can simply mount the persistent disk on it –

volumes:
- name: storage
  gcePersistentDisk:
    pdName: data-index-disk
    fsType: ext4

[SOURCE]

Now, whenever we deploy these configurations, we can reuse the previous index.

Exposing Elasticsearch to the Cluster

The HTTP and transport clients are enabled on ports 9200 and 9300 respectively. They can be exposed to the rest of the cluster using the following service –

apiVersion: v1
kind: Service
...
spec:
  ...
  ports:
  - name: http
    port: 9200
    protocol: TCP
  - name: transport
    port: 9300
    protocol: TCP

[SOURCE]

Once deployed, other deployments can access the cluster API from ports 9200 and 9300.

Connecting loklak to Kubernetes

To connect loklak to the external Elasticsearch cluster, the TransportClient Java API is used. In order to enable these settings, we simply need to make some changes in the configuration.

Since we enable the service named “elasticsearch” in namespace “elasticsearch”, we can access the cluster at address elasticsearch.elasticsearch:9200 (web) and elasticsearch.elasticsearch:9300 (transport).

To confine these changes only to Kubernetes deployment, we can use sed command while building the image (in Dockerfile) –

sed -i.bak 's/^\(elasticsearch_transport.enabled\).*/\1=true/' conf/config.properties && \
sed -i.bak 's/^\(elasticsearch_transport.addresses\).*/\1=elasticsearch.elasticsearch:9300/' conf/config.properties && \

[SOURCE]

Now when we create the deployments in the Kubernetes cluster, loklak automatically connects to the external Elasticsearch cluster and creates indices if needed.

Verifying persistence of the Elasticsearch Index

In order to see that the data persists, we can completely delete the deployment or even the cluster if we want. Later, when we recreate the deployment, we can see all the messages already present in the index.

[2017-07-29 09:42:51,804][INFO ][node                     ] [Hellion] initializing ...
[2017-07-29 09:42:52,024][INFO ][plugins                  ] [Hellion] loaded [cloud-kubernetes], sites []
[2017-07-29 09:42:52,055][INFO ][env                      ] [Hellion] using [1] data paths, mounts [[/data (/dev/sdb)]], net usable_space [84.9gb], net total_space [97.9gb], spins? [possibly], types [ext4]
[2017-07-29 09:42:53,543][INFO ][node                     ] [Hellion] initialized
[2017-07-29 09:42:53,543][INFO ][node                     ] [Hellion] starting ...
[2017-07-29 09:42:53,620][INFO ][transport                ] [Hellion] publish_address {10.8.1.13:9300}, bound_addresses {10.8.1.13:9300}
[2017-07-29 09:42:53,633][INFO ][discovery                ] [Hellion] loklakcluster/cJtXERHETKutq7nujluJvA
[2017-07-29 09:42:57,866][INFO ][cluster.service          ] [Hellion] new_master {Hellion}{cJtXERHETKutq7nujluJvA}{10.8.1.13}{10.8.1.13:9300}{master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2017-07-29 09:42:57,955][INFO ][http                     ] [Hellion] publish_address {10.8.1.13:9200}, bound_addresses {10.8.1.13:9200}
[2017-07-29 09:42:57,955][INFO ][node                     ] [Hellion] started
[2017-07-29 09:42:58,082][INFO ][gateway                  ] [Hellion] recovered [8] indices into cluster_state

In the last line from the logs, we can see that indices already present on the disk were recovered. Now if we head to the public IP assigned to the cluster, we can see that the message count is restored.

Conclusion

In this blog post, I discussed how we utilised the Kubernetes setup to shift loklak to Google Cloud Platform. The deployment is active and can be accessed from the link provided under the wiki section of the loklak/loklak_server repo.

I introduced these changes in pull request loklak/loklak_server#1349 with the help of @niranjan94, @uday96 and @chiragw15.

Resources

Caching Elasticsearch Aggregation Results in loklak Server

To provide aggregated data for various classifiers, loklak uses Elasticsearch aggregations. Aggregated data conveys a lot more than a few individual records can. But performing aggregations on each request can be very resource-consuming, so we needed to come up with a way to reduce this load.

In this post, I will be discussing how I came up with a caching model for the aggregated data from the Elasticsearch index.

Fields to Consider while Caching

At the classifier endpoint, aggregations can be requested based on the following fields –

  • Classifier Name
  • Classifier Classes
  • Countries
  • Start Date
  • End Date

But to cache results, we can ignore cases where we require just a few classes or countries and instead store aggregations for all of them. So the fields that define which cache entry to look for are –

  • Classifier Name
  • Start Date
  • End Date

Type of Cache

The data structure used for caching is Java's HashMap. It maps a string key to a wrapper object, both discussed in the following sections.

Key

The key is built using the fields mentioned previously –

private static String getKey(String index, String classifier, String sinceDate, String untilDate) {
    return index + "::::"
        + classifier + "::::"
        + (sinceDate == null ? "" : sinceDate) + "::::"
        + (untilDate == null ? "" : untilDate);
}

[SOURCE]

In this way, we can handle requests where a user asks for every class there is without running the expensive aggregation job every time. This is because the key for such requests will be the same, as we are not considering country and class for this purpose.
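For instance, two requests for the same classifier and date range always hit the same cache entry, no matter which classes or countries they ask for (the index and classifier names here are only illustrative) –

// Illustrative only: requests differing just in classes/countries map to the same key
getKey("messages", "emotion", "2017-07-01", "2017-07-31");
// => "messages::::emotion::::2017-07-01::::2017-07-31"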

Value

The object used as the value in the HashMap is a wrapper containing the following –

  1. json – It is a JSONObject containing the actual data.
  2. expiry – It is the expiry of the object in milliseconds.

class JSONObjectWrapper {
    private JSONObject json; 
    private long expiry;
    ... 
}

Timeout

The timeout associated with a cache entry is defined in the configuration file of the project as "classifierservlet.cache.timeout". It defaults to 5 minutes and is used to set the expiry of a cached JSONObject –

class JSONObjectWrapper {
    ...
    private static long timeout = DAO.getConfig("classifierservlet.cache.timeout", 300000);

    JSONObjectWrapper(JSONObject json) {
        this.json = json;
        this.expiry = System.currentTimeMillis() + timeout;
    }
    ...
}
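The isExpired() method used during lookups simply compares the stored expiry with the current time; a minimal sketch (the actual implementation in the project may differ slightly) –

class JSONObjectWrapper {
    ...
    boolean isExpired() {
        // Expired if the stored expiry timestamp lies in the past
        return System.currentTimeMillis() > this.expiry;
    }
}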

 

Cache Hit

For searching in the cache, the previously mentioned key string is composed from the parameters requested by the user. Checking for a cache hit can then be done in the following manner –

String key = getKey(index, classifier, sinceDate, untilDate);
if (cacheMap.containsKey(key)) {
    JSONObjectWrapper jw = cacheMap.get(key);
    if (!jw.isExpired()) {
        // Do something with jw
    }
}
// Calculate the aggregations
...

But since jw here would contain all the data, we would need to filter out the classes and countries which are not needed.

Filtering results

For filtering out the parts which do not contain the information requested by the user, we can perform a simple pass and exclude the results that are not needed.

Since the number of fields to filter out, i.e. classes and countries, would not be that high, this process is not very resource-intensive. At the same time, it saves us from running heavy aggregation tasks for every user request.

Since the data about classes is nested inside the respective country field, we need to perform two levels of filtering –

JSONObject retJson = new JSONObject(true);
for (String key : json.keySet()) {
    JSONArray value = filterInnerClasses(json.getJSONArray(key), classes);
    if ("GLOBAL".equals(key) || countries.contains(key)) {
        retJson.put(key, value);
    }
}

Cache Miss

In the case of a cache miss, the helper functions are called from ElasticsearchClient.java to get results. These results are then converted from a HashMap to a JSONObject and stored in the cache for future use.

JSONObject freshCache = getFromElasticsearch(index, classifier, sinceDate, untilDate);
cacheMap.put(key, new JSONObjectWrapper(freshCache));

The getFromElasticsearch method finds all the possible classes and makes a request to the appropriate method in ElasticsearchClient, getting data for all classifiers and all countries.

Conclusion

In this blog post, I discussed the need for caching of aggregations and the way it is achieved in the loklak server. This feature was introduced in pull request loklak/loklak_server#1333 by @singhpratyush (me).

Resources

Upload Images to OwnCloud and NextCloud in Phimpme Android

As the stack of the account manager in Phimpme Android grows, we now have two new items to add: OwnCloud and NextCloud. Both are open-source storage services and provide the complete source code of their official apps and libraries on GitHub. You can check them below:

OwnCloud: https://github.com/owncloud

NextCloud: https://github.com/nextcloud

Both require a hosting server where you can deploy them and access them through their web app and mobile apps. I added a feature in Phimpme to upload images directly to the server right from the app using their android-library.

Steps (How I did it in Phimpme)

  • Add library in Application gradle file

Firstly, we need to add the android-library they provide.

compile "com.github.nextcloud:android-library:$rootProject.nextCloudVersion"

Check for the latest version here and use it: https://github.com/nextcloud/android-library/releases

  • Login from Account Manager

As per our Phimpme app flow, the user first connects from the account manager and then shares images from the app using these credentials. I added a new login activity for both OwnCloud and NextCloud.


  • Saved credentials in Database

To use the credentials later with the android-library, I store them in the Realm database.

account.setServerUrl(data.getStringExtra(getString(R.string.server_url)));
account.setUsername(data.getStringExtra(getString(R.string.auth_username)));
account.setPassword(data.getStringExtra(getString(R.string.auth_password)));
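These setters run inside a Realm write transaction; a rough sketch of how the account object is created and persisted is shown below (the AccountDatabase model name is an assumption made for illustration) –

Realm realm = Realm.getDefaultInstance();
realm.beginTransaction();
// Hypothetical model class; the actual Phimpme model may differ
AccountDatabase account = realm.createObject(AccountDatabase.class);
account.setServerUrl(data.getStringExtra(getString(R.string.server_url)));
account.setUsername(data.getStringExtra(getString(R.string.auth_username)));
account.setPassword(data.getStringExtra(getString(R.string.auth_password)));
realm.commitTransaction();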
  • Uploading image using library

As per the official OwnCloud guide, I created an object of OwnCloudClient and set the username and password.

private OwnCloudClient mClient;
mClient = OwnCloudClientFactory.createOwnCloudClient(serverUri, this, true);
mClient.setCredentials(
       OwnCloudCredentialsFactory.newBasicCredentials(
               username,
               password
       )
);

I passed the image path, which we get in the SharingActivity, and modified it by adding the path separator.

File fileToUpload = new File(saveFilePath);
String remotePath = FileUtils.PATH_SEPARATOR + fileToUpload.getName();

I used the UploadRemoteFileOperation class, which just needs the path, mimeType and timeStamp. The library already defines functions to execute the upload operation.

UploadRemoteFileOperation uploadOperation =
       new UploadRemoteFileOperation(fileToUpload.getAbsolutePath(), remotePath, mimeType, timeStamp);
uploadOperation.execute(mClient, this, mHandler);
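The result of the upload arrives in the listener passed as the second argument to execute(). A sketch of handling it through the library's OnRemoteOperationListener interface is shown below; the string resources used for the messages are assumptions –

@Override
public void onRemoteOperationFinish(RemoteOperation operation, RemoteOperationResult result) {
    if (result.isSuccess()) {
        // Upload finished successfully
        SnackBarHandler.show(parentLayout, getString(R.string.upload_success));
    } else {
        SnackBarHandler.show(parentLayout, getString(R.string.upload_failed));
    }
}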

  • Setup Account using Docker and Digital Ocean

I already have a previous blog post on how to set up a NextCloud or OwnCloud account on a server using Digital Ocean and Docker.

Link: http://blog.fossasia.org/how-to-use-digital-ocean-and-docker-to-setup-test-cms-for-phimpme/

Resource:

  1. NextCloud Developer Manual: https://docs.nextcloud.com/server/9/developer_manual/index.html
  2. OwnCloud Library installation: https://doc.owncloud.org/server/9.0/developer_manual/android_library/library_installation.html
  3. Examples: https://doc.owncloud.org/server/9.0/developer_manual/android_library/examples.html

Common Utility classes Progress Bar and Snack Bar in Phimpme Android

As Phimpme Android is scaling very fast in its features, code sometimes gets redundant. Some of the most widely used design widgets in Android are the progress bar and the snackbar. A progress bar is shown to the user when some process is happening in the background. A snackbar is feedback to the user about a recent operation; in other words, the snackbar is the new toast in Android, with the cool extra feature of setting an action on it, so that the user can interact with the feedback received on the process.

In Phimpme, a lot of account login and logout progress happens, and upload success and failure need a snackbar to be shown to the user. So, to remove this boilerplate, I added two utility classes to the app: PhimpmeProgressBarHandler and SnackBarHandler. Below is the code and an explanation of both, one by one.

Progress Bar Handler

In the constructor, I passed Context as a parameter, created a ViewGroup object and fetched Android's root content view. The progress bar style is set using the core attribute progressBarStyleLarge, and setIndeterminate(true) makes it an indeterminate progress bar.

private ProgressBar mProgressBar;

public PhimpmeProgressBarHandler(Context context) {
   ViewGroup layout = (ViewGroup) ((Activity) context).findViewById(android.R.id.content)
           .getRootView();

   mProgressBar = new ProgressBar(context, null, android.R.attr.progressBarStyleLarge);
   mProgressBar.setIndeterminate(true);



   RelativeLayout.LayoutParams params = new
           RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT,
           RelativeLayout.LayoutParams.MATCH_PARENT);

   RelativeLayout rl = new RelativeLayout(context);

   rl.setGravity(Gravity.CENTER);
   rl.addView(mProgressBar);

   layout.addView(rl, params);

   hide();

}

Next, a RelativeLayout object is created dynamically and its width and height parameters are set to MATCH_PARENT. The gravity of the layout is set to center and the progress bar view is added to it using the addView method. So basically we have a progress bar ready, and we dynamically created a relative layout and added the progress bar view to it.

The functions used in setting up the views and the progress bar are from AOSP itself.

Now that the progress bar is set up, we need functions to show and hide it in the code. I created two functions, show() and hide().

public void show() {
   mProgressBar.setVisibility(View.VISIBLE);
}

public void hide() {
   mProgressBar.setVisibility(View.INVISIBLE);
}

These functions set the visibility of the progress bar.

Usage:

Now, in any class, we can create an object of our progress bar handler class, pass the context to it, and use the show() and hide() methods wherever we want to show or hide the progress bar. Below is a code snippet to illustrate this.

phimpmeProgressBarHandler = new PhimpmeProgressBarHandler(this);

phimpmeProgressBarHandler.show();

phimpmeProgressBarHandler.hide();

Snackbar Handler

To do this, I created a separate class, SnackBarHandler. We create a static function show(), and inside it we create a Snackbar object and apply the styles to it.

As you can see in the code snippet below, I created a static function with parameters for the View (the view instance), the String message to show, and the duration of the Snackbar. It sets up the text, text size and action on the snackbar; an "OK" action that dismisses it is predefined in the function.

public static void show(View view, String text, int duration) {
   final Snackbar snackbar = Snackbar.make(view, text, duration);
   View sbView = snackbar.getView();
   TextView textView = (TextView)sbView.findViewById(android.support.design.R.id.snackbar_text);
   textView.setTextColor(Color.WHITE);
   textView.setTextSize(12);
   snackbar.setAction("OK", new View.OnClickListener() {
       @Override
       public void onClick(View view) {
           snackbar.dismiss();
       }
   });
   snackbar.show();
}

Usage:

To use this, directly call the show method and pass the view and the String message you want to show on the snackbar. There are overloaded methods as well, in which you can omit or pass the duration. See the code below as an example.

SnackBarHandler.show(parentLayout, getString(R.string.no_account_signed_in));
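The two-argument call above implies an overload that fills in a default duration; a minimal sketch of such an overload (the default chosen in the project is an assumption) –

public static void show(View view, String text) {
    // Delegate to the full version with a default duration
    show(view, text, Snackbar.LENGTH_SHORT);
}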

Resources

Adding Color Options in loklak Media Wall

Color options in the loklak media wall give users the ability to set colors for different elements of the media wall. Taking advantage of Angular's two-way data binding and ngrx/store, we can link up the CSS properties of the elements with the state properties that store the user-selected colors. This makes color customization fast and reactive for media walls.

In this blog post, I am explaining the unidirectional workflow using ngrx for updating various colors and the working of color customization.

Flow Chart

The flowchart below explains how the color as a string is taken as an input from the user and how actions, reducers and component observables link up to change the current CSS property of the font color.

Working

Designing Models: It is important at first to design models which contain every CSS color property that can be customized. A single interface for a particular HTML element of the media wall can be added so that color customization for that element can take place at once, with faster rendering. Here we have three interfaces:

  • WallHeader
  • WallBackground
  • WallCard

These three interfaces are the models for the three core components of the media wall that can be customized.

export interface WallHeader {
  backgroundColor: string;
  fontColor: string;
}

export interface WallBackground {
  backgroundColor: string;
}

export interface WallCard {
  fontColor: string;
  backgroundColor: string;
  accentColor: string;
}

 

Creating Actions: The next step is to design actions for customization. Here we need to pass the respective interface model as a payload with updated color properties. When dispatched, these actions cause the reducer to change the respective state property and, hence, the linked CSS color property.

export class WallHeaderPropertiesChangeAction implements Action {
  type = ActionTypes.WALL_HEADER_PROPERTIES_CHANGE;

  constructor(public payload: WallHeader) { }
}

export class WallBackgroundPropertiesChangeAction implements Action {
  type = ActionTypes.WALL_BACKGROUND_PROPERTIES_CHANGE;

  constructor(public payload: WallBackground) { }
}

export class WallCardPropertiesChangeAction implements Action {
  type = ActionTypes.WALL_CARD_PROPERTIES_CHANGE;

  constructor(public payload: WallCard) { }
}

 

Creating reducers: Now, we can proceed to create reducer functions to change the current state property. Moreover, we need to define an initial state, which is the default state for an uncustomized media wall. Actions can now be linked to update the state property using this reducer when dispatched. These state properties serve two purposes:

  • Updating Query params for Direct URL.
  • Updating Media wall Colors

case mediaWallCustomAction.ActionTypes.WALL_HEADER_PROPERTIES_CHANGE: {
  const wallHeader = action.payload;
  return Object.assign({}, state, {
    wallHeader
  });
}

case mediaWallCustomAction.ActionTypes.WALL_BACKGROUND_PROPERTIES_CHANGE: {
  const wallBackground = action.payload;
  return Object.assign({}, state, {
    wallBackground
  });
}

case mediaWallCustomAction.ActionTypes.WALL_CARD_PROPERTIES_CHANGE: {
  const wallCard = action.payload;
  return Object.assign({}, state, {
    wallCard
  });
}

 

Extracting data into the component from the store: In ngrx, the central container for state is the store. The store is itself an observable and returns observables related to state properties. We have already defined various states for media wall color options, and now we can use selectors to return state observables from the store. These observables can then easily be linked to the CSS of the elements, which changes according to the customization.

private getDataFromStore(): void {
  this.wallCustomHeader$ = this.store.select(fromRoot.getMediaWallCustomHeader);
  this.wallCustomCard$ = this.store.select(fromRoot.getMediaWallCustomCard);
  this.wallCustomBackground$ = this.store.select(fromRoot.getMediaWallCustomBackground);
}

 

Linking state observables to the CSS properties: At first, it is important to remove all the CSS color properties from the elements that need to be customized. We will instead use the style directive provided by Angular in the template, which can be used to update CSS properties directly from the component variables. Since the customized colors received from the central store are observables, we need to use the async pipe to extract the color string from them.

Here, we are updating background color of the wall.

<span class="wrapper"
  [style.background-color]="(wallCustomBackground$ | async).backgroundColor">
</span>

 

For other child components, we need to use the @Input decorator to send color data as an input to them and use the style directive as above.

Here, we are interacting with the child component, i.e. the media wall card component, using the @Input decorator.

Template:

<media-wall-card
  [feedItem]="item"
  [wallCustomCard$]="wallCustomCard$"></media-wall-card>

 

Component:

export class MediaWallCardComponent implements OnInit {
  ...
  @Input() feedItem: ApiResponseResult;
  @Input() wallCustomCard$: Observable<WallCard>;
  ...
}

 

This creates a perfect binding of CSS properties in the template with the state properties of color actions. Now, we can dispatch different actions to update the state and hence, the colors of media wall.
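For example, a component method could dispatch one of these actions as sketched below; the method name and the way the colors are collected are assumptions, only the action and model come from the code above –

updateHeaderColors(backgroundColor: string, fontColor: string) {
  const wallHeader: WallHeader = { backgroundColor, fontColor };
  this.store.dispatch(new mediaWallCustomAction.WallHeaderPropertiesChangeAction(wallHeader));
}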

Reference

Supporting Dasherized Attributes and Query Params in flask-rest jsonapi for Open Event Server

In the Open Event API Server project, attributes of the API are dasherized.

What was the need for dasherizing the attributes in the API ?

All the attributes in our database models are separated by underscores, i.e. first name is stored as first_name. But most API client implementations support dasherized attributes by default. Attracting third-party client implementations in the future and making the API easy to set up for them was the primary reason behind this decision. Also, to quote the official JSON API spec recommendation on the same:

Member names SHOULD contain only the characters “a-z” (U+0061 to U+007A), “0-9” (U+0030 to U+0039), and the hyphen minus (U+002D HYPHEN-MINUS, “-“) as separator between multiple words.

Note: The dasherized version for first_name will be first-name.

flask-rest-jsonapi is the API framework used by the project. We were able to dasherize the API responses and requests by adding inflect=dasherize to each API schema, where dasherize is the following function:

def dasherize(text):
   return text.replace('_', '-')
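A schema then opts in through its Meta options; the snippet below is only an illustrative sketch, and the field names are not taken from the actual Open Event schemas –

from marshmallow_jsonapi import Schema, fields

class UserSchema(Schema):
    class Meta:
        type_ = 'user'
        inflect = dasherize

    id = fields.Str(dump_only=True)
    first_name = fields.Str()
    last_name = fields.Str()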

 

flask-rest-jsonapi also provides powerful features through query params, such as sparse fieldsets, sorting and inclusion of related objects.

But we observed that the query params were not being dasherized, which rendered the above awesome features useless 🙁 . The reason for this was that flask-rest-jsonapi took the query params as-is and searched for them in the API schema. As Python variable names cannot contain a dash, naming the attributes with a dash in the internal API schema was out of the question.

To add dasherizing support to the query params, changes in the QueryStringManager, located at querystring.py of the framework root, are required. A config variable named DASHERIZE_API was added to turn this feature on and off.

Following are the changes required for dasherizing query params:

For Sparse Fieldsets in the fields function, replace the following line:

result[key] = [value]

with

if current_app.config['DASHERIZE_API'] is True:
    result[key] = [value.replace('-', '_')]
else:
    result[key] = [value]

 

For sorting, in the sorting function, replace the following line:

field = sort_field.replace('-', '')

with

if current_app.config['DASHERIZE_API'] is True:
   field = sort_field[0].replace('-', '') + sort_field[1:].replace('-', '_')
else:
   field = sort_field[0].replace('-', '') + sort_field[1:]

 

For inclusion of related objects, in the include function, replace the following line:

return include_param.split(',') if include_param else []

with

if include_param:
   param_results = []
   for param in include_param.split(','):
       if current_app.config['DASHERIZE_API'] is True:
           param = param.replace('-', '_')
       param_results.append(param)
   return param_results
return []

Related links:

Running Dredd Hooks as a Flask App in the Open Event Server

The Open Event Server has been based on the micro-framework Flask from its initial phases. After implementing API documentation, we decided to implement Dredd testing in the Open Event API.

After isolating each request in Dredd testing, the real challenge is to bind the database engine to the Dredd hooks. We have been using the Flask-SQLAlchemy db.Model base class for building all the models, and Flask, being a micro-framework itself, came to our rescue, as we could easily bind the database engine to a Flask app. Conventionally, Dredd hooks are written in pure Python, but we will be running them as a self-contained Flask app itself.

How do we initialise this Flask app in our Dredd hooks? The Flask app can be initialised in the before_all hook easily, as shown below:

def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')

 

The database can be bound to the app as follows:

def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')
    db.init_app(app)
    Migrate(app, db)

 

The challenge now is how to bind the application context when applying the database fixtures. In a normal Flask application, this can be done as follows:

with app.app_context():
    # perform your operation

 

While for unit tests in python:

with app.test_request_context():
    # perform tests

 

But as all the hooks are separate from each other, dredd-hooks-python supports the idea of a single stash where you can store all the desired variables (neither a list nor the name "stash" is strictly necessary).

The app and db can be added to stash as shown below:

import dredd_hooks as hooks

# Module-level stash shared between hooks
stash = {}

@hooks.before_all
def before_all(transaction):
    app = Flask(__name__)
    app.config.from_object('config.TestingConfig')
    db.init_app(app)
    Migrate(app, db)
    stash['app'] = app
    stash['db'] = db

 

These variables stored in the stash can be used efficiently as below:

@hooks.before_each
def before_each(transaction):
    with stash['app'].app_context():
        db.engine.execute("drop schema if exists public cascade")
        db.engine.execute("create schema public")
        db.create_all()

 

and many other such examples.
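A matching teardown hook can be written in the same way; for example (whether the project defines exactly this hook is an assumption) –

@hooks.after_all
def after_all(transaction):
    with stash['app'].app_context():
        db.session.remove()
        db.drop_all()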

Related Links:
1. Testing Your API Documentation With Dredd: https://matthewdaly.co.uk/blog/2016/08/08/testing-your-api-documentation-with-dredd/
2. Dredd Tutorial: https://help.apiary.io/api_101/dredd-tutorial/
3. Dredd Docs: http://dredd.readthedocs.io/

Adding Unit Tests for Services in loklak search

In loklak search, it can be tricky to write tests for services, as these services are customizable and not fixed. Therefore, we need to test every query parameter of the URL. Moreover, we need to test whether the service parses the data correctly and returns only data of type ApiResponse.

In this blog post, we are going to see how to build the different components for unit testing services. We are going to test the search service in loklak search, which makes a JSONP request to get the response from the loklak search.json API; the results are displayed as feeds on loklak search. We need to test whether the service handles the response correctly and whether the request parameters are exactly according to the customization.

Service to test

The search service in loklak search is one of the most important components of loklak search. SearchService is a class with a method fetchQuery() which takes parameters and sets up the URL parameters for the search.json API of loklak. It then makes a JSONP request and maps the API response. The method fetchQuery() can be called from other components with the parameters query and lastRecord to get the response from the server based on a certain search query and the last record, to implement the pagination feature in loklak search. As the data is retrieved, a callback function is called to access the response returned by the API. The response received from the server is then parsed as JSON to extract the data from it easily.

@Injectable()
export class SearchService {
  private static readonly apiUrl: URL = new URL('http://api.loklak.org/api/search.json');
  private static maximum_records_fetch = 20;
  private static minified_results = true;
  private static source = 'all';
  private static fields = 'created_at,screen_name,mentions,hashtags';
  private static limit = 10;
  private static timezoneOffset: string = new Date().getTimezoneOffset().toString();

  constructor(
    private jsonp: Jsonp
  ) { }

  // TODO: make the searchParams as configureable model rather than this approach.
  public fetchQuery(query: string, lastRecord = 0): Observable<ApiResponse> {
    const searchParams = new URLSearchParams();
    searchParams.set('q', query);
    searchParams.set('callback', 'JSONP_CALLBACK');
    searchParams.set('minified', SearchService.minified_results.toString());
    searchParams.set('source', SearchService.source);
    searchParams.set('maximumRecords', SearchService.maximum_records_fetch.toString());
    searchParams.set('timezoneOffset', SearchService.timezoneOffset);
    searchParams.set('startRecord', (lastRecord + 1).toString());
    searchParams.set('fields', SearchService.fields);
    searchParams.set('limit', SearchService.limit.toString());
    return this.jsonp.get(SearchService.apiUrl.toString(), { search: searchParams })
      .map(this.extractData);
  }

  private extractData(res: Response): ApiResponse {
    try {
      return <ApiResponse>res.json();
    } catch (error) {
      console.error(error);
    }
  }
}

Testing the service

  • Create a mock backend to ensure that we are not making any actual JSONP request. We need to use a mock Jsonp provider for this. This provider sets up MockBackend and wires up all the dependencies to override the request options used by the JSONP request.

const mockJsonpProvider = {
  provide: Jsonp,
  deps: [MockBackend, BaseRequestOptions],
  useFactory: (backend: MockBackend, defaultOptions: BaseRequestOptions) => {
    return new Jsonp(backend, defaultOptions);
  }
};

 

  • Now, we need to configure the testing module to isolate the service from other dependencies. With this, we can instantiate services manually. We have to use TestBed for unit testing and provide all necessary imports/providers for creating and testing services in the unit test.

describe('Service: Search', () => {
  let service: SearchService = null;
  let backend: MockBackend = null;

  beforeEach(() => {
    TestBed.configureTestingModule({
      providers: [
        MockBackend,
        BaseRequestOptions,
        mockJsonpProvider,
        SearchService
      ]
    });
  });

 

  • Now, we will inject the service (to be tested) and MockBackend into the testing module. As all the dependencies are injected, we can now initiate the connections and start testing the service.

beforeEach(inject([SearchService, MockBackend], (searchService: SearchService, mockBackend: MockBackend) => {
  service = searchService;
  backend = mockBackend;
}));

 

  • We will be using the it() block to state which property/feature we are going to test in the block. All the tests will be included in this block. One of the most important parts is the done callback, which closes the connection as soon as the testing is over.

it('should call the search api and return the search results', (done) => {
  // test goes here
});

 

  • Now, we will create a connection to the MockBackend and subscribe to this connection. We need to configure ResponseOptions so that the mock response is JSONified and returned when the request is made. Now the MockBackend is set up and we can proceed to make assertions and test the service.

const result = MockResponse;
backend.connections.subscribe((connection: MockConnection) => {
  const options = new ResponseOptions({
    body: JSON.stringify(result)
  });
  connection.mockRespond(new Response(options));

 

  • We can now add tests by using the expect() block to check whether an assertion is true or false. We will now test:
    • Request method: We will be testing if the request method used by the connection created is GET.

expect(connection.request.method).toEqual(RequestMethod.Get);
    • Request URL: We will be testing if all the URL search parameters are correct and according to what we provide as parameters to the method fetchQuery().

expect(connection.request.url).toEqual(
  `http://api.loklak.org/api/search.json` +
  `?q=${query}` +
  `&callback=JSONP_CALLBACK` +
  `&minified=true&source=all` +
  `&maximumRecords=20&timezoneOffset=${timezoneOffset}` +
  `&startRecord=${lastRecord + 1}` +
  `&fields=created_at,screen_name,mentions,hashtags&limit=10`);
});

 

  • Response: Now, we need to call the service to make a request to the backend and subscribe to the response returned. Next, we will make an assertion to check that the response returned and parsed by the service equals the mock response that should be returned. At the end, we need to call the callback function done() to close the connection.

service
  .fetchQuery(query, lastRecord)
  .subscribe((res) => {
    expect(res).toEqual(result);
    done();
  });
});

Reference

Auto-Refreshing Mode in loklak Media Wall

An auto-refreshing wall means that a request for feeds is sent to the loklak server every few seconds and new feeds are added to the media wall as soon as the response is received, all within a single session. For a clean implementation, it is also necessary to check whether new feeds are still being received from the server and to close the connection as soon as no feeds are received, so as to maintain session singularity.

In this blog post, I am explaining how I implemented the auto-refreshing mode for the media wall using tools like ngrx/store and ngrx/effects.

Flow Chart

The flowchart below explains the workflow of how the actions, effects and service are linked to create a cycle of events for the auto-refreshing mode. It also shows how the response is handled as a dependency for the next request. Since effects play a major role in this behaviour, we can call it the "Game of Effects".

Working

  • Effect wallSearchAction$: Assuming the query for the media wall has changed and ACTION: WALL_SEARCH has been dispatched, we will start from this point. Looking at the flowchart, we can see that as soon as the action WALL_SEARCH is dispatched, an effect needs to be created to detect the dispatched action. This effect customizes the query and sets up various configurations for the search service and calls the service. Depending on whether the response is received or not, it dispatches either WallSearchCompleteSuccessAction or WallSearchCompleteFailAction respectively. Moreover, this effect is responsible for changing the route/location of the application.

@Effect()
wallSearchAction$: Observable<Action> = this.actions$
  .ofType(wallAction.ActionTypes.WALL_SEARCH)
  .debounceTime(400)
  .map((action: wallAction.WallSearchAction) => action.payload)
  .switchMap(query => {
    const nextSearch$ = this.actions$.ofType(wallAction.ActionTypes.WALL_SEARCH).skip(1);
    const searchServiceConfig: SearchServiceConfig = new SearchServiceConfig();

    if (query.filter.image) {
      searchServiceConfig.addFilters(['image']);
    } else {
      searchServiceConfig.removeFilters(['image']);
    }
    if (query.filter.video) {
      searchServiceConfig.addFilters(['video']);
    } else {
      searchServiceConfig.removeFilters(['video']);
    }

    return this.apiSearchService.fetchQuery(query.queryString, searchServiceConfig)
      .takeUntil(nextSearch$)
      .map(response => {
        const URIquery = encodeURIComponent(query.queryString);
        this.location.go(`/wall?query=${URIquery}`);
        return new apiAction.WallSearchCompleteSuccessAction(response);
      })
      .catch(() => of(new apiAction.WallSearchCompleteFailAction()));
  });
  • Property lastResponseLength: Looking at the flow chart, we can see that after WallSearchCompleteSuccessAction is dispatched, we need to check the number of feeds in the response. If the number of feeds in the response is more than 0, we can continue to make a new request to the server. On the other hand, if no feeds are received, we need to close the connection and stop requesting more feeds. This check is implemented using the lastResponseLength state property of the reducer, which maintains the number of entities in the last response received.

case apiAction.ActionTypes.WALL_SEARCH_COMPLETE_SUCCESS: {
  const apiResponse = action.payload;

  return Object.assign({}, state, {
    entities: apiResponse.statuses,
    lastResponseLength: apiResponse.statuses.length
  });
}

 

  • Effect nextWallSearchAction$: Now, we have all the information regarding whether we should dispatch WALL_NEXT_PAGE_ACTION depending on the last response received. We need to implement an effect that detects WALL_SEARCH_COMPLETE_SUCCESS, keeping in mind that the next request should be made 10 seconds after the previous response is received. For this behaviour, we use debounceTime(), which emits a value only after the specified time period has passed. Here, the debounce is set to 10000 ms, which is equal to 10 seconds. The effect also needs to dispatch the next action depending on the lastResponseLength state property of the reducer. It should dispatch WallNextPageAction if the entities length of the response is more than 0; otherwise, it should dispatch StopWallPaginationAction.

@Effect()
nextWallSearchAction$ = this.actions$
  .ofType(apiAction.ActionTypes.WALL_SEARCH_COMPLETE_SUCCESS)
  .debounceTime(10000)
  .withLatestFrom(this.store$)
  .map(([action, state]) => {
    if (state.mediaWallResponse.lastResponseLength > 0) {
      return new wallPaginationAction.WallNextPageAction();
    } else {
      return new wallPaginationAction.StopWallPaginationAction();
    }
  });

 

  • Effect wallPagination$: Now, we need an effect that detects WALL_NEXT_PAGE_ACTION and calls the SearchService, similar to the wallSearchAction$ effect. However, we need to keep track of the last record of the entities from the previous response received. This can be done using the lastRecord state property, which maintains the last record of the entities.

@Effect()
wallPagination$: Observable<Action> = this.actions$
  .ofType(wallPaginationAction.ActionTypes.WALL_NEXT_PAGE)
  .map((action: wallPaginationAction.WallNextPageAction) => action.payload)
  .withLatestFrom(this.store$)
  .map(([action, state]) => {
    return {
      query: state.mediaWallQuery.query,
      lastRecord: state.mediaWallResponse.entities.length
    };
  })
  .switchMap(queryObject => {
    const nextSearch$ = this.actions$.ofType(wallAction.ActionTypes.WALL_SEARCH);

    this.searchServiceConfig.startRecord = queryObject.lastRecord + 1;
    if (queryObject.query.filter.image) {
      this.searchServiceConfig.addFilters(['image']);
    } else {
      this.searchServiceConfig.removeFilters(['image']);
    }
    if (queryObject.query.filter.video) {
      this.searchServiceConfig.addFilters(['video']);
    } else {
      this.searchServiceConfig.removeFilters(['video']);
    }

    return this.apiSearchService.fetchQuery(queryObject.query.queryString, this.searchServiceConfig)
      .takeUntil(nextSearch$)
      .map(response => {
        return new wallPaginationAction.WallPaginationCompleteSuccessAction(response);
      })
      .catch(() => of(new wallPaginationAction.WallPaginationCompleteFailAction()));
  });

 

  • Effect nextWallPageAction$: Similar to the nextWallSearchAction$ effect, we need to implement an effect that detects WALL_PAGINATION_COMPLETE_SUCCESS and, depending on the lastResponseLength, dispatches either WallNextPageAction or StopWallPaginationAction after the specified debounce time.

@Effect()
nextWallPageAction$ = this.actions$
  .ofType(wallPaginationAction.ActionTypes.WALL_PAGINATION_COMPLETE_SUCCESS)
  .debounceTime(10000)
  .withLatestFrom(this.store$)
  .map(([action, state]) => {
    if (state.mediaWallResponse.lastResponseLength > 0) {
      return new wallPaginationAction.WallNextPageAction();
    } else {
      return new wallPaginationAction.StopWallPaginationAction();
    }
  });

 

Now the cycle is created, and requests will automatically be made every 10 seconds depending on the previous response. This cycle also closes the connection and stops making pagination requests for the particular query as soon as no feeds are received from the server.

Reference