Updating Settings Activity With MVP Architecture in SUSI Android

The SUSI Android app includes settings that allow users to modify app features and behaviour. For example, SUSI Android allows users to specify whether they want voice output, i.e. text to speech, irrespective of the input type.

Currently the different settings available in SUSI Android are:

  • Enter As Send: allows users to specify whether the action button of the soft keyboard acts as a carriage return or sends the message.
  • Mic Input: allows users to specify whether they want to use speech as the input type. Users can also use the keyboard to provide input.
  • Speech Always: allows users to specify whether they want speech output, i.e. text to speech, irrespective of the input type.
  • Speech Output: allows users to specify whether they want speech output in the case of speech input.
  • Select Server: allows users to specify whether they want to use the default susi_server or their own server.
  • Language: allows users to select the text to speech engine language.

Android’s standard architecture isn’t always sufficient, especially for complex applications that need to be tested regularly. So we need an architecture that can improve the app’s testability, and MVP (Model-View-Presenter) architecture is one of the best options for that.

Working with MVP architecture to show settings

MVP architecture divides the application into three layers: Model, View and Presenter. The role of each layer is described below:

  • Model: The Model holds the business logic of the application. It controls how data can be created, stored, and modified.
  • View: The View is a UI that displays data and routes user actions to the Presenter.
  • Presenter: Presenter is a mediator between View and Model. It retrieves data from the Model and shows it in the View. It also processes user actions forwarded to it by the View.

Steps followed to implement MVP architecture are:

1. I first created three interfaces: ISettingsView, ISettingModel and ISettingsPresenter. These interfaces contain all the important functions needed in the model, presenter and view. The presenter’s interaction with the model is handled by ISettingModel, and the presenter’s interaction with the view is handled by ISettingsPresenter and ISettingsView. (A sketch of these interfaces is shown after this list.)

  2. I created the model, view and presenter classes, i.e. SettingModel, SettingsPresenter and SettingsActivity, and implemented ISettingModel, ISettingsPresenter and ISettingsView in SettingModel, SettingsPresenter and SettingsActivity respectively, so that the presenter can communicate with both the model and the view.
class SettingsPresenter(fragmentActivity: FragmentActivity): ISettingsPresenter, ISettingModel.onSettingFinishListener
class SettingsActivity : AppCompatActivity(), ISettingsView
class SettingModel: ISettingModel
  3. After that, I created the ChatSettingsFragment class. It is a preference fragment that resides in SettingsActivity and contains all the settings.
class ChatSettingsFragment : PreferenceFragmentCompat()
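
For reference, here is a minimal sketch of what these contract interfaces could look like. Only sendSetting and onSuccess appear in this post; the remaining method names (onAttach, onDetach, onSettingResponse, onFailure) are illustrative assumptions rather than the project’s actual API.

import retrofit2.Response

// Sketch only: method names apart from sendSetting/onSuccess are assumptions.
// ChangeSettingResponse is the Retrofit response model used by the app.
interface ISettingsView {
    fun onSettingResponse(message: String)      // hypothetical callback to update the UI
}

interface ISettingsPresenter {
    fun onAttach(settingsView: ISettingsView)   // hypothetical: bind the view
    fun sendSetting(key: String, value: String)
    fun onDetach()                              // hypothetical: release the view
}

interface ISettingModel {
    fun sendSetting(key: String, value: String, listener: onSettingFinishListener)

    // Name kept in the post's own (lowercase) style
    interface onSettingFinishListener {
        fun onSuccess(response: Response<ChangeSettingResponse>)
        fun onFailure(throwable: Throwable)     // hypothetical error callback
    }
}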

How Presenter communicates with Model and View

As already mentioned, the presenter’s interaction with the view is handled by ISettingsView and ISettingsPresenter. So first I created an object of ISettingsPresenter in ChatSettingsFragment, which is part of the view, as shown below:

lateinit var settingsPresenter: ISettingsPresenter
settingsPresenter = SettingsPresenter(activity)

Then I used it to call methods of the presenter, SettingsPresenter:

settingsPresenter.sendSetting(Constant.ENTER_SEND, newValue.toString())

After that, I created an object of ISettingsView in SettingsPresenter so that it can communicate with the view, SettingsActivity.

The presenter’s interaction with the model is handled by ISettingModel. So first I created an object of the SettingModel class, which implements ISettingModel, and then used it to call methods of SettingModel.

var settingModel: SettingModel = SettingModel()
settingModel.sendSetting(key, value, this)

Here the third parameter of the sendSetting method is ISettingModel.onSettingFinishListener, which is part of ISettingModel. SettingModel uses it to communicate with SettingsPresenter, as shown below:

override fun sendSetting(key: String, value: String, listener: ISettingModel.onSettingFinishListener) {
    settingResponseCall.enqueue(object : Callback<ChangeSettingResponse> {

        override fun onResponse(call: Call<ChangeSettingResponse>?,
                                response: Response<ChangeSettingResponse>) {
            // Hand the server response back to the presenter
            listener.onSuccess(response)
        }

        // onFailure is omitted in this excerpt
    })
}

The onSuccess method is implemented in SettingsPresenter.


Shorten the Travis build time of Meilix

Meilix is a Lubuntu-based distribution built from a script. It uses Travis to compile the ISO and deploy it as a GitHub Release. Normally Travis takes around 17-21 minutes to build the script and deploy the ISO. Getting the ISO in such a short interval is already quite good, but it would be even better if we could decrease this time further.

The Meilix build basically consists of 2 tests and 1 deploy stage.

The idea behind reducing Meilix’s build time is to run the tests that are independent of each other in parallel. So here we run build.sh and aptRepoUpdater.sh in parallel to reduce the time to some extent.
The picture shows that both tests run in parallel and the GitHub Release stage waits for them to complete successfully before deploying the ISO.

Let’s see the code that makes this possible:

jobs:
  include:
    - script: ./build.sh
    - script: 'if [ "$TRAVIS_PULL_REQUEST" = "false" ]; then bash ./scripts/aptRepoUpdater.sh; fi'
    - stage: GitHub Release
      script: echo "Deploying to GitHub releases ..."

Here we include a jobs section in which we list the scripts that Travis has to run in parallel. This runs both scripts at the same time and helps reduce the total time.
After both scripts run successfully, the GitHub Release stage deploys the ISO.

Here we can see that we only save around 30-40 seconds, but even that matters a lot when more than one build is running at the same time.

Links to follow:
Travis guide to build stages


Firefox Customization for Meilix

Meilix, a lightweight operating system, can be easily customized. This article describes how to customize the Firefox configuration of Meilix (or of your own Linux distro) and how to copy the Firefox configuration file directly to the user’s home folder.
The Meilix script contains a pref.js file which is responsible for providing the configuration. This file contains various preferences that can be edited as needed to get the required configuration.

Let’s see an example:

user_pref("browser.startup.homepage", "http://www.google.com/cse/home?cx=partner-pub-6065445074637525:8941524350");

This line sets the browser startup page, and it can be edited according to the user’s choice so that the same page opens whenever Firefox starts.

There are several other lines which can be edited in the same way to make the required changes.
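
For example, here is a small sketch of other preferences that are commonly set this way (the values are examples; these exact lines are not necessarily part of Meilix’s pref.js):

// Sketch: commonly customized Firefox preferences (values are examples)
user_pref("browser.startup.page", 1);                    // 1 = show the home page on startup
user_pref("browser.shell.checkDefaultBrowser", false);   // skip the default-browser prompt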

How does this work?

This is the Mozilla user preference file and should be placed at /home/user_name/.mozilla/firefox/*.default/prefs.js. It controls Firefox’s preference attributes; Firefox reads the settings written here and applies them.

How to use it?

One can directly edit it according to one’s needs.

How does the Meilix script use it to change the user preferences?

Since the .mozilla folder should be under the home directory, we copy .mozilla into the skel folder, so that it gets automatically copied to the home location of every new user and can be used from there.
There is also a shell script which comes in handy to set the browser startup page; the script even copies the configuration file.

1   #!/bin/bash
2   # firefox
3   # http://askubuntu.com/questions/73474/how-to-install-firefox-addon-from-command-line-in-scripts
4   for user_name in `ls /home/`
5   do
6     preferences_file="`echo /home/$user_name/.mozilla/firefox/*.default/prefs.js`"
7     if [ -f "$preferences_file" ]
8     then
9       echo "user_pref(\"browser.startup.homepage\", \"https://google.com/\");" >> $preferences_file
10    fi
11  done

This file is taken from here. It is used in Meilix to set the default startup page in Firefox. The URL on line 9 is replaced according to the input the user gives in the Meilix Generator web app.
The *.default part of the path is resolved by the script itself, since the profile folder name is generated randomly.
After that, the script copies prefs.js to its location and the changes take effect.

Links to follow:

Firefox-preference guide
Firefox-editing-configuration


Generate Requirement File for Python App for Meilix-Generator

Meilix-Generator is based upon Flask (a Python framework), which has several dependencies that must be fulfilled before actually running the app properly. This article will guide you through the approach I used to automatically generate the requirements file for the Meilix Generator app, so that one doesn’t have to type all the requirements manually.

An app powered by Python always has several dependencies that must be fulfilled to run the app successfully. The app root directory contains a file named requirements.txt which lists the dependencies and their versions. There are several ways to generate the requirements file for an app, but the one I will demonstrate here worked best. So I used this idea to generate the requirements file for the Meilix Generator web app.

Ways to get the requirements.txt

A frequently suggested way is to run a single command that lists all the different dependencies within an app:

pip freeze > requirements.txt

However, this generates a bunch of dependencies that we don’t even require.

Why do we really require to generate a requirement file?

One may ask: we could simply write the dependencies into the requirements.txt file by hand, so why do we need a command to generate it?

Because it takes care of two important things:

  1. It ensures that all the dependencies are included; when writing them by hand one may forget to include some dependency.
  2. It also takes care of Python package version pinning, which is really important. People often pin Python requirements in the “>=” style, but it’s important to follow the “==” style: if we want to install the program a year from now, the required packages should be pinned to ensure that API changes in the installed packages do not break the program. Please read here for more info.

The approach described below provides both of these features.

How I generated it for Meilix Generator

Meilix Generator runs on Flask, which requires a requirements.txt file to fulfill the dependencies. Let’s get straight to the way to generate it for the project.

First we create a requirements.in file in which we simply list all the top-level dependencies:

Flask
gunicorn
Werkzeug

Now we use a command to install the latest versions of these packages:

pip install --upgrade -r requirements.in

#Note that if you would like to change the requirements, please edit the requirements.in file and run this command to update the dependencies

Then type this command (from the pip-tools package) to generate the requirements.txt file from requirements.in:

pip-compile --output-file requirements.txt requirements.in

# pin the versions so they definitely keep working

This will generate a file something like:

click==6.7                # via flask
Flask==0.12.2
gunicorn==19.7.1
itsdangerous==0.24        # via flask
Jinja2==2.9.6             # via flask
MarkupSafe==1.0           # via jinja2
Werkzeug==0.12.2          # via flask

Now you have generated a proper requirements.txt file with all the dependencies satisfied and proper Python package pinning.
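
Later, when the pins need refreshing, the same pip-tools workflow can update them; a short sketch (standard pip-tools commands, not specific to this repo):

pip-compile --upgrade --output-file requirements.txt requirements.in   # re-resolve all pins to the latest allowed versions
pip-sync requirements.txt                                              # make the current environment match the pinned file exactly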

The meilix-generator repo which uses this:
https://github.com/fossasia/meilix-generator


Heroku Deployment through Travis for Meilix-Generator

This article explains the way to deploy Meilix Generator on Heroku with the help of Travis. A successful deployment also serves as a test that a PR is good. Later in the article, we’ll see the one-button deployment on Heroku.

We will deploy Meilix Generator on Heroku here. The usual way to deploy a project on Heroku is to connect one’s GitHub account and deploy it from there. The problem arises when one wants to deploy the project on each and every commit, which helps to test whether the commits pass or not. Here we will see how to use Travis to deploy to Heroku on each and every commit. If the Travis build passes, it means the changes made in the commit are deployed and working.
We used the same idea to test the commits for Meilix Generator.

Idea behind it

Travis (.travis.yml) will help us achieve this. We will use it to deploy the build to Heroku on each commit. If it gets deployed successfully, that proves the commit works.

How to implement it

I will use the Meilix Generator repository to show how to implement this. It is as simple as editing the .travis.yml file. We just have to add a few lines to .travis.yml and the app will get deployed.

deploy:
  provider: heroku
  api_key:
    secure: "YOUR ENCRYPTED API KEY"     # explained below
  app: meilix-generator                    # write the name of the app
  on:
    repo: fossasia/meilix-generator                # repo name
    branch: master                        # branch name

Way to generate the API key:
This needs care: if the key is wrong, the deployment will not occur.

Steps:

  • cd into the repository which you want to deploy on Heroku.
  • Log in to the Heroku CLI and the Travis CLI in your terminal and type the following.

travis encrypt $(heroku auth:token) --add deploy.api_key

This will automatically provide the key inside the .travis.yml file.

You can also configure manually using

travis setup heroku

That’s it, you are done; test the build.

Things still left:

We are still left with testing the PRs themselves.
For this we have to create a new app.json file:

{
    "name": "Meilix-Generator",
    "description": "A webapp which generates iso for you",
    "repository": "https://github.com/fossasia/meilix-generator/",
    "logo": "https://github.com/fossasia/meilix-generator/blob/master/static/logo.png",
    "keywords": [
        "meilix-generator",
        "fossasia",
        "flask"
    ],
    "env": {
        "APP_SECRET_TOKEN": {
            "generator": "secret"
        },
        "ON_HEROKU": "true",
        "FORCE_SSL": "true",
        "INVITATION_CODE": {
            "generator": "secret"
        }
    },
    "buildpacks": {
            "url": "heroku/python"        # this is the only place of concern
        }
}

This code should be put in a file named app.json in the root of the repo.
In buildpacks, the url should be the buildpack for the language the code base uses (this is the only place you normally need to change).

This can be helpful in 2 ways:

  1. Test the commit made and deploy it on Heroku.
  2. One-click deployment button which will deploy the app on Heroku.
  • Test the deployment through the URL:

https://heroku.com/deploy?template=https://github.com/user_name/repo_name/tree/master

  • Way to add the button:

[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy)

How can this idea be helpful to a developer

A developer can use this to deploy their app on Heroku, test each commit automatically, and view the quality and status of PRs too.

Useful repositories and links which use this:

I have used the same idea in my project. Do have a look:

https://github.com/fossasia/meilix-generator
deployment on Heroku
one-click deployment
app.json file schema   


Checking Whether Migrations Are Up To Date With The Sqlalchemy Models In The Open Event Server

In the Open Event Server, pull requests that change a SQLAlchemy model sometimes miss the corresponding migrations.

The first approach to checking whether the migrations were up to date with the database was the following health check function:

from subprocess import check_output

def health_check_migrations():
    """
    Checks whether database is up to date with migrations, assumes there is a single migration head
    :return:
    """
    # head according to the migration files
    head = check_output(["python", "manage.py", "db", "heads"]).split(" ")[0]
    # latest alembic revision the database was actually upgraded to
    version_num = (db.session.execute('SELECT version_num from alembic_version').fetchone())['version_num']

    if head == version_num:
        return True, 'database up to date with migrations'
    return False, 'database out of date with migrations'

 

In the above function, we get the head according to the migration files as following:

head = check_output(["python", "manage.py", "db", "heads"]).split(" ")[0]


The table alembic_version contains the latest alembic revision to which the database was actually upgraded. We can get this revision from the following line:

version_num = (db.session.execute('SELECT version_num from alembic_version').fetchone())['version_num']

 

Then we compare both of the heads and return a proper tuple based on the comparison output. While this method was pretty fast, there was a drawback in this approach: if the user forgets to generate the migration files for the changes done in the SQLAlchemy model, this approach will fail to raise a failure status in the health check.

To overcome this drawback, all the sqlalchemy models were fetched automatically and simple sqlalchemy select queries were made to check whether the migrations were up to date.

Remember that a raw SQL query will not serve our purpose in this case, as you’d have to specify the columns explicitly in the query. In the case of a SQLAlchemy query, however, the SQL is generated from the fields defined in the db model, so if migrations are missing for the model change, a proper error will be raised.

We can accomplish this from the following function:

def health_check_migrations():
   """
   Checks whether database is up to date with migrations by performing a select query on each model
   :return:
   """
   # Get all the models in the db, all models should have a explicit __tablename__
   classes, models, table_names = [], [], []
   # noinspection PyProtectedMember
   for class_ in db.Model._decl_class_registry.values():
       try:
           table_names.append(class_.__tablename__)
           classes.append(class_)
       except:
           pass
   for table in db.metadata.tables.items():
       if table[0] in table_names:
           models.append(classes[table_names.index(table[0])])

   for model in models:
       try:
           db.session.query(model).first()
       except:
           return False, '{} model out of date with migrations'.format(model)
   return True, 'database up to date with migrations'

 

In the above code, we automatically get all the models and tables present in the database. Then for each model we try a simple SELECT query which returns the first row found. If there is any error in doing so, False, ‘{} model out of date with migrations’.format(model) is returned, so as to ensure a failure status in health checks.


Implementing Health Check Endpoint in Open Event Server

A health check endpoint was required in the Open Event Server to be used by Kubernetes to know when the web instance is ready to receive requests.

Following are the checks that were our primary focus for health checks:

  • Connection to the database.
  • Ensure the SQLAlchemy models are in line with the migrations.
  • Connection to celery workers.
  • Connection to redis instance.

Runscope/healthcheck seemed like the way to go for this. Healthcheck wraps a Flask app object and adds a way to write simple health-check functions that can be used to monitor your application. It’s useful for asserting that your dependencies are up and running and that your application can respond to HTTP requests. The healthcheck functions are exposed via a user-defined Flask route, so you can use an external monitoring application (monit, nagios, Runscope, etc.) to check the status and uptime of your application.

The health check endpoint was implemented at /health-check as follows:

from healthcheck import HealthCheck
health = HealthCheck(current_app, "/health-check")

 

Following is the function for checking the connection to the database:

def health_check_db():
   """
   Check health status of db
   :return:
   """
   try:
       db.session.execute('SELECT 1')
       return True, 'database ok'
   except:
       sentry.captureException()
       return False, 'Error connecting to database'

 

Check functions take no arguments and should return a tuple of (bool, str). The boolean is whether or not the check passed. The message is any string or output that should be rendered for this check. Useful for error messages/debugging.

The above function executes a query on the database to check whether it is connected properly. If the query runs successfully, it returns the tuple (True, 'database ok'). sentry.captureException() makes sure that the Sentry instance receives a proper exception event with all the information about the exception. If there is an error connecting to the database, an exception will be thrown and the tuple returned in this case will be (False, 'Error connecting to database').

Finally to add this to the endpoint:

health.add_check(health_check_db)

Following is the response for a successful health check:

{
   "status": "success",
   "timestamp": 1500915121.52474,
   "hostname": "shubham",
   "results": [
       {
           "output": "database ok",
           "checker": "health_check_db",
           "expires": 1500915148.524729,
           "passed": true,
           "timestamp": 1500915121.524729
       }
   ]
}

If the database is not connected the following error will be shown:

{
           "output": "Error connecting to database",
           "checker": "health_check_db",
           "expires": 1500965798.307425,
           "passed": false,
           "timestamp": 1500965789.307425
}
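
The checks for the other dependencies listed above (Celery workers, Redis) follow the same (bool, str) pattern. As an example, here is a minimal sketch of a Redis check, assuming the app has a configured redis-py client available as redis_store (that name is hypothetical):

def health_check_redis():
    """
    Check health status of the redis instance
    :return:
    """
    try:
        # redis_store is assumed to be the app's configured redis-py client
        redis_store.ping()
        return True, 'redis ok'
    except Exception:
        sentry.captureException()
        return False, 'Error connecting to redis'

health.add_check(health_check_redis)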


Displaying Multiple Markers on Map

The Connfa and Open Event Android apps provide the facility to see locations on a Google map by displaying a marker at each place where a session will take place. As there are many sessions in a conference, we need to display a marker for each location. In this blog, we learn how to display multiple markers on a map using the Google Maps API, with reference to the Connfa app.
First, you need to set up the Google Maps API in your app. Find the detailed tutorial for that here, which you can follow to get an API key; you’ll need a Gmail account for that. Then you can define a fragment like this in your XML layout where the map will appear.

<FrameLayout
   android:id="@+id/fragmentHolder"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:layout_below="@+id/layoutInfo"
   android:background="@android:color/white" />

Now you can define the following method in the fragment to place the map in this FrameLayout named “fragmentHolder”,

private void replaceMapFragment() {
   CustomMapFragment mapFragment = CustomMapFragment.newInstance(LocationFragment.this);
   LocationFragment
           .this.getChildFragmentManager()
           .beginTransaction()
           .replace(R.id.fragmentHolder, mapFragment)
           .commitAllowingStateLoss();
}

We receive the data about locations in a List containing the longitude, latitude, name and room number of each location. Find the sample result in the Open Event format. Here is how we add the markers,

for (int i = 0; i < locations.size(); i++) {
   Location location = locations.get(i);
   LatLng position = new LatLng(location.getLat(), location.getLon());
   mGoogleMap.addMarker(new MarkerOptions().position(position));

   if (i == 0) {
       CameraPosition camPos = new CameraPosition(position, ZOOM_LEVEL, TILT_LEVEL, BEARING_LEVEL);
       mGoogleMap.moveCamera(CameraUpdateFactory.newCameraPosition(camPos));
       fillTextViews(location);
   }
}
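
If we also want to show the location’s name when a marker is tapped, a title can be attached to the marker so that it appears in the default info window. A small sketch (the getName() accessor on Location is assumed here):

// Sketch: attach the location name as the marker title
mGoogleMap.addMarker(new MarkerOptions()
        .position(position)
        .title(location.getName()));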

The CameraPosition defines properties like the zoom level, which can be adjusted by changing its parameters. You can define them like this,

private static final int ZOOM_LEVEL = 15;
private static final int TILT_LEVEL = 0;
private static final int BEARING_LEVEL = 0;

Find the complete implementation here. Here is the screenshot for the final result.

References:

  • Google Map APIs Documentation – https://developers.google.com/maps/documentation/android-api/map-with-marker

Advanced functionality in SUSI Tweetbot

SUSI AI is integrated with Twitter (blog). During the initial phase, the SUSI Tweetbot had a basic UI and functionality like plain text replies only. Twitter provides many more features, like quick replies, i.e. presenting the user with some choices to choose from, or buttons that open the SUSI server repository with a single click during the chat.

All these features are provided to enhance the user experience with our chatbot on Twitter.

This blog post walks you through on adding these functionalities to the SUSI Tweetbot:

  1. Quick replies
  2. Buttons

Quick replies:

This feature provides options for the user to choose from.

The user doesn’t need to type the next query but can simply select a quick reply from the options available. This speeds up the process and makes it easy for the user. It also helps developers know all the possible queries that can come next from the user, which makes it easier to handle those queries efficiently.

In SUSI Tweetbot this feature is used to welcome a new user to SUSI A.I.’s chat window, as shown in the image above. The user can select either “Get started” or “Start chatting”. The “Get started” option gives a basic introduction of SUSI A.I. to the user, while “Start chatting”, when clicked, shows the user what queries they can try.

Let’s come to the code part: how to show these options and what happens when a user selects one of them.

To show the welcome message, we call the SUSI API with the query string “Welcome” and store the reply in a message variable. The code snippet used:

var queryUrl = 'http://api.susi.ai/susi/chat.json?q=Welcome';
var message = '';
request({
    url: queryUrl,
    json: true
}, function (err, response, data) {
    if (!err && response.statusCode === 200) {
        message = data.answers[0].actions[0].expression;
    } 
    else {
        // handle error
    }
});

To show options with the message:

var msg =  {
        "welcome_message" : {
                    "message_data": {
                        "text": message,
                        "quick_reply": {
                              "type": "options",
                              "options": [
                                {
                                  "label": "Get started",
                                  "metadata": "external_id_1"
                                },
                                {
                                  "label": "Start chatting",
                                  "metadata": "external_id_2"
                                }
                              ]
                            }
                    }
                      }
       };
T.post('direct_messages/welcome_messages/new', msg, sent);

The line T.post() makes a POST request to the Twitter API to register the welcome message with Twitter for our chatbot. The return value from this request includes a welcome message id corresponding to this welcome message.

We then set up a welcome message rule for this welcome message using its id. Setting up the rule makes this welcome message the default one shown to new users. Twitter also supports custom welcome messages; information about them can be found in the official docs.

The welcome message rule is set up by sending the welcome message id as a key in the request body:

var welcomeId = data.welcome_message.id;
var welcomeRule = {
            "welcome_message_rule": {
                "welcome_message_id": welcomeId
            }
};
T.post('direct_messages/welcome_messages/rules/new', welcomeRule, sent);

Now, we are all set to show the new users with a welcome message.

Buttons:

Let’s go a bit further. If the user clicks on the option “Get started”, we want to show a basic introduction of SUSI A.I. to the user. Along with that we provide some buttons to visit the SUSI A.I. repository or experience chatting with SUSI A.I. on the web client.

The procedure for showing buttons is almost the same as for options. This doc by Twitter is helpful for getting familiar with buttons.

As soon as a person clicks on “Get started” option, Twitter sends a message to our bot with the query as “Get started”.

For the message part, we call the SUSI API with the query string “Get started” and store the reply in a message variable. The code snippet used:

var queryUrl = 'http://api.susi.ai/susi/chat.json?q=Get+started';
var message = '';
request({
    url: queryUrl,
    json: true
}, function (err, response, data) {
    if (!err && response.statusCode === 200) {
        message = data.answers[0].actions[0].expression;
    } 
    else {
        // handle error
    }
});

Each button shown with the message should have a corresponding URL, so that after clicking the button the person is redirected to that URL in a new browser tab.

To show buttons, a simple message won’t do; we need to create an event. This event consists of our message and the buttons. The buttons are referred to as call-to-actions, i.e. CTAs, by Twitter developers. An event can contain no more than three buttons.

The code used to make an event in our case:

var msg = {
        "event": {
                "type": "message_create",
                "message_create": {
                              "target": {
                                "recipient_id": sender
                              },
                              "message_data": {
                                "text": message,
                                "ctas": [
                                  {
                                    "type": "web_url",
                                    "label": "View Repository",
                                    "url": "https://www.github.com/fossasia/susi_server"
                                  },
                                  {
                                    "type": "web_url",
                                    "label": "Chat on the web client",
                                    "url": "http://chat.susi.ai"
                                  }
                                ]
                              }
                            }
            }
        };

T.post('direct_messages/events/new', msg, sent);

The line T.post() makes a POST request to the Twitter API, to send this event to the concerned recipient guiding them on how to get started with the bot.

Resources:

  1. Speed up customer service with quick replies and welcome messages by Ian Cairns from Twitter blog.
  2. Drive discovery of bots and other customer experiences in direct messages by Travis Lull from Twitter blog.

Deploying loklak Server on Kubernetes with External Elasticsearch

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

kubernetes.io

Kubernetes is an awesome cloud platform, which ensures that cloud applications run reliably. It runs automated tests, flawless updates, smart roll out and rollbacks, simple scaling and a lot more.

So as a part of GSoC, I worked on taking the loklak server to Kubernetes on Google Cloud Platform. In this blog post, I will be discussing the approach followed to deploy development branch of loklak on Kubernetes.

New Docker Image

Since Kubernetes deployments work on Docker images, we needed one for the loklak project. The existing image would not be up to the mark for Kubernetes as it contained the declaration of volumes and exposing of ports. So I wrote a new Docker image which could be used in Kubernetes.

The image would simply clone loklak server, build the project and start the server as the CMD –

FROM alpine:latest

ENV LANG=en_US.UTF-8
ENV JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF8

WORKDIR /loklak_server

RUN apk update && apk add openjdk8 git bash && \
    git clone https://github.com/loklak/loklak_server.git /loklak_server && \
    git checkout development && \
    ./gradlew build -x test -x checkstyleTest -x checkstyleMain -x jacocoTestReport && \
    # Some Configurations and Cleanups

CMD ["bin/start.sh", "-Idn"]

[SOURCE]

This image wouldn’t have any volumes or exposed ports and we are now free to configure them in the configuration files (discussed in a later section).

Building and Pushing Docker Image using Travis

To automatically build and push the image on a commit to the master branch, the Travis build is used. In the after_success section, a call to push the Docker image is made.

Travis environment variables hold the username and password for Docker hub and are used for logging in –

docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD

[SOURCE]

We needed checks there to ensure that we are on the right branch for the push and we are not handling a pull request –

# Build and push Kubernetes Docker image
KUBERNETES_BRANCH=loklak/loklak_server:latest-kubernetes-$TRAVIS_BRANCH
KUBERNETES_COMMIT=loklak/loklak_server:kubernetes-$TRAVIS_COMMIT
  
if [ "$TRAVIS_BRANCH" == "development" ]; then
    docker build -t loklak_server_kubernetes kubernetes/images/development
    docker tag loklak_server_kubernetes $KUBERNETES_BRANCH
    docker push $KUBERNETES_BRANCH
    docker tag $KUBERNETES_BRANCH $KUBERNETES_COMMIT
    docker push $KUBERNETES_COMMIT
elif [ "$TRAVIS_BRANCH" == "master" ]; then
    # Build and push master
else
    echo "Skipping Kubernetes image push for branch $TRAVIS_BRANCH"
fi

[SOURCE]

Kubernetes Configurations for loklak

Kubernetes cluster can completely be configured using configurations written in YAML format. The deployment of loklak uses the previously built image. Initially, the image tagged as latest-kubernetes-development is used –

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: server
  namespace: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: server
        image: loklak/loklak_server:latest-kubernetes-development
        ...

[SOURCE]

Readiness and Liveness Probes

Probes act as the top level tester for the health of a deployment in Kubernetes. The probes are performed periodically to ensure that things are working fine and appropriate steps are taken if they fail.

When a new image is updated, the older pod still runs and serves the requests. It is replaced by the new one only when the probes are successful; otherwise, the update is rolled back.

In loklak, the /api/status.json endpoint gives information about status of deployment and hence is a good target for probes –

livenessProbe:
  httpGet:
    path: /api/status.json
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 3
readinessProbe:
  httpGet:
    path: /api/status.json
    port: 80
  initialDelaySeconds: 30
  timeoutSeconds: 3

[SOURCE]

These probes are performed periodically and the server is restarted if they fail (non-success HTTP status code or takes more than 3 seconds).

Ports and Volumes

In the configurations, port 80 is exposed as this is where Jetty serves inside loklak –

ports:
- containerPort: 80
  protocol: TCP

[SOURCE]

If we notice, this is the port that we used for running the probes. Since the development branch deployment holds no dumps, we didn’t need to specify any explicit volumes for persistence.

Load Balancer Service

While creating the configurations, a new public IP is assigned to the deployment using Google Cloud Platform’s load balancer. It starts listening on port 80 –

ports:
- containerPort: 80
  protocol: TCP

[SOURCE]
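
For reference, such a service might look like the following sketch (the selector and names are assumed from the deployment configuration above; the exact manifest is in the linked source) –

apiVersion: v1
kind: Service
metadata:
  name: server
  namespace: web
spec:
  type: LoadBalancer          # asks GCP for a public IP
  selector:
    app: server               # matches the deployment's pod label
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP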

Since this service creates a new public IP, it is recommended not to replace/recreate this services as this would result in the creation of new public IP. Other components can be updated individually.

Kubernetes Configurations for Elasticsearch

To maintain a persistent index, this deployment would require an external Elasticsearch cluster. loklak is able to connect itself to external Elasticsearch cluster by changing a few configurations.

Docker Image and Environment Variables

The image used for Elasticsearch is taken from pires/docker-elasticsearch-kubernetes. It allows easy configuration of Elasticsearch properties through environment variables set in the Kubernetes configurations. Here is a list of configurable variables, but we needed just a few of them for our task –

image: quay.io/pires/docker-elasticsearch-kubernetes:2.0.0
env:
- name: KUBERNETES_CA_CERTIFICATE_FILE
  value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- name: NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: "CLUSTER_NAME"
  value: "loklakcluster"
- name: "DISCOVERY_SERVICE"
  value: "elasticsearch"
- name: NODE_MASTER
  value: "true"
- name: NODE_DATA
  value: "true"
- name: HTTP_ENABLE
  value: "true"

[SOURCE]

Persistent Index using Persistent Cloud Disk

To make the index last even after the deployment is stopped, we needed a stable place where we could store all that data. Here, Google Compute Engine’s standard persistent disk was used. The disk can be created using GCP web portal or the gcloud CLI.
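
For example, the disk referenced below could be created with a gcloud command along these lines (a sketch; the size and zone are assumptions, only the disk name data-index-disk comes from the configuration) –

# Sketch: create the standard persistent disk used by the Elasticsearch pod
gcloud compute disks create data-index-disk \
    --size 100GB \
    --type pd-standard \
    --zone us-central1-a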

Before attaching the disk, we need to declare a volume where we could mount it –

volumeMounts:
- mountPath: /data
  name: storage

[SOURCE]

Now that we have a volume, we can simply mount the persistent disk on it –

volumes:
- name: storage
  gcePersistentDisk:
    pdName: data-index-disk
    fsType: ext4

[SOURCE]

Now, whenever we deploy these configurations, we can reuse the previous index.

Exposing Kubernetes to Cluster

The HTTP and transport clients are enabled on port 9200 and 9300 respectively. They can be exposed to the rest of the cluster using the following service –

apiVersion: v1
kind: Service
...
Spec:
  ...
  ports:
  - name: http
    port: 9200
    protocol: TCP
  - name: transport
    port: 9300
    protocol: TCP

[SOURCE]

Once deployed, other deployments can access the cluster API from ports 9200 and 9300.

Connecting loklak to Kubernetes

To connect loklak to external Elasticsearch cluster, TransportClient Java API is used. In order to enable these settings, we simply need to make some changes in configurations.

Since we enable the service named “elasticsearch” in namespace “elasticsearch”, we can access the cluster at address elasticsearch.elasticsearch:9200 (web) and elasticsearch.elasticsearch:9300 (transport).

To confine these changes only to Kubernetes deployment, we can use sed command while building the image (in Dockerfile) –

sed -i.bak 's/^\(elasticsearch_transport.enabled\).*/\1=true/' conf/config.properties && \
sed -i.bak 's/^\(elasticsearch_transport.addresses\).*/\1=elasticsearch.elasticsearch:9300/' conf/config.properties && \

[SOURCE]

Now when we create the deployments in Kubernetes cluster, loklak auto connects to the external elasticsearch index and creates indices if needed.

Verifying persistence of the Elasticsearch Index

In order to see that the data persists, we can completely delete the deployment or even the cluster if we want. Later, when we recreate the deployment, we can see all the messages already present in the index.

I  [2017-07-29 09:42:51,804][INFO ][node                     ] [Hellion] initializing ...
 
I  [2017-07-29 09:42:52,024][INFO ][plugins                  ] [Hellion] loaded [cloud-kubernetes], sites []
 
I  [2017-07-29 09:42:52,055][INFO ][env                      ] [Hellion] using [1] data paths, mounts [[/data (/dev/sdb)]], net usable_space [84.9gb], net total_space [97.9gb], spins? [possibly], types [ext4]
 
I  [2017-07-29 09:42:53,543][INFO ][node                     ] [Hellion] initialized
 
I  [2017-07-29 09:42:53,543][INFO ][node                     ] [Hellion] starting ...
 
I  [2017-07-29 09:42:53,620][INFO ][transport                ] [Hellion] publish_address {10.8.1.13:9300}, bound_addresses {10.8.1.13:9300}
 
I  [2017-07-29 09:42:53,633][INFO ][discovery                ] [Hellion] loklakcluster/cJtXERHETKutq7nujluJvA
 
I  [2017-07-29 09:42:57,866][INFO ][cluster.service          ] [Hellion] new_master {Hellion}{cJtXERHETKutq7nujluJvA}{10.8.1.13}{10.8.1.13:9300}{master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
 
I  [2017-07-29 09:42:57,955][INFO ][http                     ] [Hellion] publish_address {10.8.1.13:9200}, bound_addresses {10.8.1.13:9200}
 
I  [2017-07-29 09:42:57,955][INFO ][node                     ] [Hellion] started
 
I  [2017-07-29 09:42:58,082][INFO ][gateway                  ] [Hellion] recovered [8] indices into cluster_state

In the last line from the logs, we can see that indices already present on the disk were recovered. Now if we head to the public IP assigned to the cluster, we can see that the message count is restored.

Conclusion

In this blog post, I discussed how we utilised the Kubernetes setup to shift loklak to Google Cloud Platform. The deployment is active and can be accessed from the link provided under wiki section of loklak/loklak_server repo.

I introduced these changes in pull request loklak/loklak_server#1349 with the help of @niranjan94, @uday96 and @chiragw15.
