Animations in Loklak Wok Android

Imagine an Activity suddenly popping out of nowhere in front of the user. Even more irritating, the user cannot even tell whether a button was clicked. These animation implementations are very small, but they enhance the user experience to a whole new level. This blog deals with the animations in Loklak Wok Android, a peer message harvester of Loklak Server.

Activity transition animation

Activity transition is applied when we move from the current activity to a new activity, or go back to an old activity by pressing the back button.

In Loklak Wok Android, when the user navigates for search suggestions from TweetHarvestingActivity to SuggestActivity, the new activity, i.e. SuggestActivity, enters from the right side of the screen and the old one, i.e. TweetHarvestingActivity, leaves the screen through the left side. This is an example of a left-right activity transition. To implement this, two xml files which define the animations, enter.xml and exit.xml, are created, as sketched below.
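A minimal sketch of what these two files could look like, assuming simple translate animations (the duration value is an assumption):

<!-- res/anim/enter.xml -->
<set xmlns:android="http://schemas.android.com/apk/res/android">
   <translate
       android:fromXDelta="100%"
       android:toXDelta="0%"
       android:duration="300" />
</set>

<!-- res/anim/exit.xml -->
<set xmlns:android="http://schemas.android.com/apk/res/android">
   <translate
       android:fromXDelta="0%"
       android:toXDelta="-100%"
       android:duration="300" />
</set>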




NOTE: The entering activity comes from the right side, that's why the android:fromXDelta parameter is set to 100%, and as the activity finally stays at the extreme left, the android:toXDelta parameter is set to 0%.

The current activity, in this case TweetHarvestingActivity, leaves the screen towards the left. So, in exit.xml the android:fromXDelta parameter is set to 0% and the android:toXDelta parameter is set to -100%.

Now that we are done with defining the animations in xml, it's time to apply them, which is really easy. The animations are applied by invoking Activity.overridePendingTransition(enterAnim, exitAnim) just after the startActivity method. For example, in openSuggestActivity:

private void openSuggestActivity() {
   Intent intent = new Intent(getActivity(), SuggestActivity.class);
   startActivity(intent);
   getActivity().overridePendingTransition(R.anim.enter, R.anim.exit);
}


Touch Selectors

Using touch selectors, the background color of a button or any clickable can be changed on touch; this way a user can see that the clickable responded to the click. The background is usually the light accent color or a lighter shade of the icon present in the button.

There are three states involved while a clickable is touched: pressed, activated and selected, plus a default state in which the clickable is not clicked. The background color of each state is defined in an xml file like media_button_selector, which is present in the drawable directory.

<selector xmlns:android="http://schemas.android.com/apk/res/android">

   <item android:drawable="@color/media_button_touch_selector_backgroud" android:state_pressed="true"/>
   <item android:drawable="@color/media_button_touch_selector_backgroud" android:state_activated="true"/>
   <item android:drawable="@color/media_button_touch_selector_backgroud" android:state_selected="true"/>

   <item android:drawable="@android:color/transparent"/>

</selector>

The selector is applied by setting it as the background of a clickable, for example the touch selector applied on the Location image button present in fragment_tweet_posting.xml:

   android:background="@drawable/media_button_selector" />
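For context, the complete element might look something like this sketch (all attributes except the background are illustrative):

<ImageButton
   android:id="@+id/location_button"
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:src="@drawable/ic_location_on"
   android:background="@drawable/media_button_selector" />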


Notice the change in the background color of the buttons when clicked.



Open Event API Server: Implementing FAQ Types

In the Open Event Server, there was a long-standing request from users to enable event organisers to create a FAQ section.

The API of the FAQ section was implemented subsequently. The FAQ API allows the user to specify the following request schema:

 "data": {
   "type": "faq",
   "relationships": {
     "event": {
       "data": {
         "type": "event",
         "id": "1"
   "attributes": {
     "question": "Sample Question",
     "answer": "Sample Answer"


But what if the user wanted to group certain questions under a specific category? There was no solution for that in the FAQ API. So a new API, FAQ-Types, was created.

Why make a separate API for it?

Another question that arose while designing the FAQ-Types API was whether a separate API was necessary at all. Consider that a type attribute was simply added to the FAQ API itself. The client would then have to specify the type of the FAQ record every time a new record is created, which means trusting that the user will always enter the same spelling for questions falling under the same type. The user cannot be trusted on this front. A separate API makes sure that the types remain controlled and that there are no multiple entries for the same type.

Helps in handling a large number of records:

Another concern was what would happen if there were a large number of FAQ records under the same FAQ-Type. Entering the type for each of those questions would be cumbersome for the user. The FAQ-Type API overcomes this problem as well.

Following is the request schema for the FAQ-Types API:

 "data": {
   "attributes": {
     "name": "abc"
   "type": "faq-type",
   "relationships": {
     "event": {
       "data": {
         "id": "1",
         "type": "event"



  • FAQ to FAQ-type is a many to one relation.
  • A single FAQ can only belong to one Type
  • The FAQ-type relationship is optional; if the user wants different sections, he/she can add it, if not, it's the user's choice.
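For illustration, an FAQ record could then reference its type through a relationship, along these lines (the relationship key name is an assumption):

{
  "data": {
    "type": "faq",
    "attributes": {
      "question": "Sample Question",
      "answer": "Sample Answer"
    },
    "relationships": {
      "faq-type": {
        "data": {
          "type": "faq-type",
          "id": "1"
        }
      }
    }
  }
}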


Deleting Meilix Github Releases

Meilix is the repository which uses a build script to generate a community version of Lubuntu with the LXQT desktop. Meilix-Generator is the webapp which uses Meilix to generate an ISO, deploys it on Meilix GitHub Releases and then mails the link of the ISO to the user.
A growing number of ISOs increases the number of releases, which makes the Meilix repository look cluttered. So we need to delete older releases after a certain interval of time, to keep the release page clean and reclaim unwanted space.
The following script does this work for us.

#!/usr/bin/env bash
set -e
echo "This is a script to delete obsolete meilix iso builds by Abishek V Ashok"
echo "You have to add an authorization token to make it functional."

# jq is the JSON parser we will be using
sudo apt-get -y install jq

# Fetch the published date and id of every release; this is the standard
# GitHub releases endpoint, here for the fossasia/meilix repository
response=`curl https://api.github.com/repos/fossasia/meilix/releases | jq '.[] | .published_at, .id'`

index=1  # when index is odd, $i contains the published_date; when even, the id
delete=0 # Should we delete the release?
current_year=`date +%Y`  # Current year eg) 2001
current_month=`date +%m` # Current month eg) 2
current_day=`date +%d`   # Current date eg) 24

for i in $response; do
    if [ $((index % 2)) -eq 1 ]; then # We get the published_date of the release as $i's value here
        # published_at values come quoted by jq, eg) "2017-06-24T05:49:22Z"
        published_year=${i:1:4}
        published_month=${i:6:2}
        published_day=${i:9:2}

        # Mark the release for deletion if it is older than ~10 days
        # (10# forces base-10 so leading zeros in dates do not break arithmetic)
        if [ $published_year -lt $current_year ]; then
            let "delete=1"
        elif [ $published_month -lt $current_month ]; then
            let "delete=1"
        elif [ $((10#$current_day - 10#$published_day)) -gt 10 ]; then
            let "delete=1"
        fi
    else # We get the id of the release as $i's value here
        if [ $delete -eq 1 ]; then
            curl -X DELETE -H "Authorization: token $KEY" \
                "https://api.github.com/repos/fossasia/meilix/releases/$i"
            let "delete=0"
        fi
    fi
    let "index+=1"
done

This code uses the GitHub API to curl the Meilix releases. The GitHub API provides lots of information, but here we are only concerned with the release date and the id of each build. We then set up a condition, and every release which satisfies it automatically gets deleted.

To take care of authentication, a token has been uploaded to the Travis settings of the Meilix repository of FOSSASIA.

The personal token has been generated by a user with write access to the repository, with the repo scope.

This sorts out the issue of having a bulk of releases in the Meilix repository of FOSSASIA.

  • Users GitHub API by REST API v3
  • Repos GitHub API by REST API v3

Analyzing Production Build Size in Loklak Search

Loklak search being a web application, it is critical to keep the size of the application in check, to ensure that we are not transferring any non-essential bytes to the user, so that the application loads faster and we get a minimal first-paint time. This requires a mechanism to check the size of the build files which are generated and served to the user. Alongside checking sizes, it is also critically important to analyze the distribution of the modules, along with their sizes, in the various chunks. In this blog post, I discuss the analysis of the application code of loklak search and the generated build files.

Importance of Analysis

Chunk size analysis is critical to any application, as chunk sizes directly determine performance at any scale. The smaller the application, the lesser the load time, and thus the faster it becomes usable on the user side. Time to first-paint is the most important metric to keep in mind while analyzing any web application for performance; though first-paint time consists of many parts, from loading and parsing to layout and paint, the size of a chunk still determines the time it takes to render it on the screen.

Also, as we use 3rd-party libraries and components, it becomes crucially important to inspect the impact of those libraries and components on the size of the application.

Development Phase Checking

Angular CLI provides a clean mechanism to track and check the size of all the chunks at runtime; these stats simply show the size of each chunk in the application in the terminal on every successful compilation, and this gives us a broad idea about which chunks to look at and address.

Deep Analysis using Webpack Bundle Analyzer

The Angular CLI, while generating the production build, provides us with an option to generate statistics about the chunks, including the size and namespace of each module which is part of a chunk. These stats are directly generated by webpack at the time of bundling, code splitting and tree shaking, and thus let us peek into the actual chunk creation in webpack to analyze the sizes of its various components. To generate the statistics we just need to enable the --stats-json flag while building:

ng build --prod --aot --stats-json

This will generate the statistics file for the application in the /dist directory, alongside all the bundles. To have a visual and graphical analysis of these statistics, we can use a tool like webpack-bundle-analyzer. We can install webpack-bundle-analyzer via npm:

npm install --save-dev webpack-bundle-analyzer

Now we can add a script to our package.json; running this script will open up a web page containing a graphical visualization of all the chunks built in the application:

// package.json

"scripts": {
   "analyze": "webpack-bundle-analyzer dist/stats.json"
}

These block diagrams also contain information about the submodules contained in each chunk, and thus we can easily analyze and compare the size of each component we add to the application.

In the generated distribution we can see that the main.bundle is the largest among all the chunks, and that a major part of it is occupied by moment.js. This analysis provides a deeper insight into the impact of a module like moment.js on the application size. It helps us reason about which parts of the application are worth the size they consume and which can be replaced with lighter alternatives; a 3rd-party module which consumes a lot of size but is used for some insignificant feature should be replaced with a lightweight alternative.


Thus, being able to see the description of the modules in each and every chunk provides us a method to reason about and compare alternative approaches to a particular problem, in terms of their effect on the size of the application, so that we are able to make the best decision.

Resources and Links

  • Analyzing the builds blog by hackernoon
  • Bundle analysis for webpack applications blog by Nimesh

Using CSS Grid in Loklak Search

CSS Grid is the latest web standard for layouts in web applications. It allows the HTML page to be treated as 2-dimensional when laying out the elements on the page, and it is used in parts of loklak search for layout. In this blog post, I discuss the basic naming conventions of CSS Grid and its usage in Loklak Search for layout structuring and responsiveness.

CSS Grid Basics

There are some basic terminologies regarding the grid; the major ones are the following.

Grid Container

The grid container is the wrapper of all the grid items. It is declared by display: grid, which makes all the direct children of that element become grid items.

Grid Tracks

We define the rows and columns of the grid as lines, and the area between any two lines is called a grid track. Tracks can be defined using any length unit. Grid also introduces an additional length unit to help us create flexible grid tracks: the new fr unit represents a fraction of the available space in the grid container.
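For instance, a quick illustrative sketch of the fr unit (the class name is arbitrary):

.example-grid {
   display: grid;
   /* three columns: a fixed 200px track, then the remaining
      space split 1:2 between the other two tracks */
   grid-template-columns: 200px 1fr 2fr;
}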

Grid Cells

The area between two adjacent horizontal and two adjacent vertical lines is called a grid cell.

Grid Area

The area formed by the combination of two or more cells is called a grid area.

Using CSS grid in Loklak Search

Loklak search uses CSS Grid in the feeds page to align elements in a responsive way on mobile and desktop. Earlier, there was an issue that on small displays the info box of the results appeared after the feed results, and we needed to make sure that it appears on top on smaller displays. This is the outline of the structure of the feed page:

<div class="feed-wrapper">
   <div class="feed-results">
      <!-- Feed Results -->
   </div>
   <div class="feed-info-box">
      <!-- Feed Info Box -->
   </div>
</div>

Now we use CSS Grid to position the items according to the display width. First we declare "feed-wrapper" as display: grid to make it a grid container, and we associate the rows and columns accordingly.

.feed-wrapper {
   display: grid;
   grid-template-columns: 150px 632px 455px 1fr;
   grid-template-rows: auto;
}

This defines the grid as consisting of 4 columns of width 150px, 632px, 455px and one remaining unit, i.e. 1fr. The rows are set to auto.

Now we name the grid areas using the grid-area: <area> css property. This gives names to the elements in the CSS grid.

.feed-results {
   grid-area: feed-results;
}

.feed-info-box {
   grid-area: feed-info-box;
}

The last thing which remains is to specify the position of these grid elements in the grid cells according to the display width; for this we use simple media queries along with the grid area positioning property, i.e. grid-template-areas.

.feed-wrapper {
   /* Other Properties */
   @media(min-width: 1200px) {
      grid-template-areas: ". feed-results feed-info-box .";
   }

   @media(max-width: 1199px) {
      grid-template-columns: 1fr;
      /* reconstructed mobile layout: info box stacked above results */
      grid-template-areas: "feed-info-box" "feed-results";
   }
}

This positions both the boxes according to the display width, in one column for large displays, and info box on top of results on mobile displays.

This is how it looks on the large desktop displays


This is how it looks on small mobile displays




Adding additional information to store listing page of Loklak apps site

Loklak apps site now has a completely functional store listing page where users can find all relevant information about the app they want to view. The page has a left sidebar which shows various categories to switch between, a right sidebar for suggesting similar kinds of apps to users, and a middle section providing users with various important pieces of information about the app, like getting started, use of the app, promo images, preview images, test link and various other details. In this blog I will describe how the bottom section of the middle column has been created (related issue: #209).

The bottom section

The bottom section provides various pieces of information, such as updated date, version, app source, developer information, contributors, technology stack and license. All of these have to be dynamically loaded for each selected app. As I had previously mentioned here, no HTML content can be hard coded in the store listing page. So how do we show the above-mentioned information for the different apps? Well, for this we will once again use the app.json of the corresponding app, like we did for the middle section here.

At first, for a given app we need to define some extra fields in the app.json file as shown below.

"appSource": "",
  "contributors": [{"name": "djmgit", "url": ""}],
  "techStack": ["HTML", "CSS", "AngularJs", "Morris.js", "Bootstrap", "Loklak API"],
  "license": {"name": "LGPL 2.1", "url": ""},
  "version": "1.0",
  "updated": "June 10,2017",

The above code snippet shows the new fields included in app.json. The fields are as described below.

  • appSource: Stores a link to the source code of the app.
  • contributors: Stores a list of objects; each object stores the name of a contributor and a url corresponding to that contributor.
  • techStack: A list containing the names of the technologies used.
  • license: Name and link of the license.
  • version: The current version of the app.
  • updated: Date on which the app was last updated.

These fields provide the source for the information present in the bottom section of the app page.

Now we need to render this information on the store listing page. Let us take an example and see how version is rendered:

<div ng-if="appData.version !== undefined && appData.version !== ''" class="col-md-4 add-info">
                  <div class="info-type">
                    <h5 class="info-header">
                  <div class="info-body">

We first check that the version field is defined and not empty. Then we print a header (Version in this case) followed by the value. This is how updated, appSource and license are also displayed. What about the technology stack and contributors? The technology stack is basically a list and it may contain quite a number of strings (technology names). If we displayed all the values at once, the bottom section would get crowded and it might degrade the UI of the page. To avoid this, a popup dialog has been used: when the user clicks on the technology stack label, a popup dialog appears which shows the various technologies used in the app.

<div class="info-body">
                    <div class="dropdown">
                      <div class="dropdown-toggle" type="button" data-toggle="dropdown">
                        View technology stack
                      <ul class="dropdown-menu">
                        <li ng-repeat="item in appData.techStack" class="tech-item">

After displaying a header, we iterate over the techStack list and populate our popup dialog. This popup dialog is attached to the label 'View technology stack'. Whenever a user clicks on this label, the popup is shown. The same technique is applied for rendering contributors: a popup dialog is used to display all the contributors. Thus the technology stack and contributors list are shown only on demand.

For developer information, the name of the developer is shown, linked to his/her website, and there is an option to send an email or copy the email id if present. A sketch of the markup (the appData.developer.* field names are assumptions for illustration):

<div class="info-body">
                    <span ng-if=" !== undefined && !== ''">
                      <a href="{{}}"> {{}} </a>
                    <a ng-if=" !== undefined && !== ''" class="mail"
                      <span class="glyphicon glyphicon-envelope"></span>

For the email id, bootstrap's envelope glyphicon is used along with a mailto link pointing to the developer's email id. What does mailto do? It simply opens your default mail client. For example, if you are on Linux, it might open Thunderbird. If you do not have a mail client installed but your default browser is Google Chrome, it will open the Gmail mail composer. If you are viewing the site on an Android device, it will open the Gmail app directly.
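A plain mailto link looks like this (the address is illustrative):

<a href="mailto:developer@example.com">
   <span class="glyphicon glyphicon-envelope"></span>
</a>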

The bottom section can be viewed here.



Implementing Permissions for Orders API in Open Event API Server

Open Event API Server's Orders API is one of the core APIs. The permissions in the Orders API are robust and secure enough to ensure no leak in payment and ticketing. The permission manager provides the framework to implement these permissions and proper access controls based on the dev handbook.

The following table shows the permissions as specified in the developer handbook.


                    List     View     Create    Update     Delete
Event organizer     [1]      [1]      [1]       [1][2]     [1][3]
Registered user              [4]
Everyone else

  1. Only self-owned events
  2. Can only change order status
  3. A refund will also be initiated if paid ticket
  4. Only if order placed by self

Super Admins and admins are allowed to create any order with any amount, but any coupon they apply is not consumed on creating the order. They can update almost every field of the order and can provide any custom status to the order. Permissions are applied with the help of the Permission Manager, which takes care of the authorization roles: if a permission is set based on admin access, then it is automatically granted to super admins as well, i.e., to people of higher rank, as sketched below.
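For instance, a minimal sketch of such a check using the has_access helper seen throughout this post (the error message is illustrative); a super admin, whose role outranks admin, also passes this check:

if not has_access('is_admin'):
   raise ForbiddenException({'source': ''}, 'Admin Access Required')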

Self-owned events

This allows the event admins, Organizers and Co-Organizers, to manage the orders of the events they own: they can view all orders, create orders with or without a discount coupon at any custom price, and update the status of orders. Event admins can provide a specific status while others cannot:

if not has_access('is_coorganizer', event_id=data['event']):
   data['status'] = 'pending'

And listing requires Co-Organizer access:

elif not has_access('is_coorganizer', event_id=kwargs['event_id']):
   raise ForbiddenException({'source': ''}, "Co-Organizer Access Required")

Can only change order status

The organizer cannot change order fields except the status of the order. Only Server Admins and Super Admins are allowed to update any field of the order:

if not has_access('is_admin'):
   for element in data:
       if element != 'status':
           setattr(data, element, getattr(order, element))

Delete access is prohibited to event admins; thus only Server Admins can delete orders, by providing a cancelling note which will be provided to the Attendee/Buyer:

def before_delete_object(self, order, view_kwargs):
   # the event_id keyword argument is assumed, following the pattern
   # of the other has_access calls in this post
   if not has_access('is_coorganizer', event_id=order.event_id):
       raise ForbiddenException({'source': ''}, 'Access Forbidden')

Registered User

A registered user can create an order with basic details like the attendees' records and the payment method, with fields like country and city. They are not allowed to provide any custom status to the order they are creating; all orders are set by default to "pending".

Also, they are not allowed to update any field in their order; any status update is done internally, thus maintaining the security of the order system. They are, however, allowed to view their placed orders. This is done by comparing their logged-in user id with the user id of the purchaser:

if not has_access('is_coorganizer_or_user_itself', event_id=order.event_id, user_id=order.user_id):
   raise ForbiddenException({'source': ''}, 'Access Forbidden')

Event Admins

Event admins have one more restriction: an event admin cannot provide a discount coupon, and even if they do, it will be ignored:

# Apply discount only if the user is not event admin
if data.get('discount') and not has_access('is_coorganizer', event_id=data['event']):
    # ... discount validation and consumption happen here

Also, for an event admin, any amount provided on creating the order is final, and no further calculation of the amount takes place:

if not has_access('is_coorganizer', event_id=data['event']):
    # ... amount recalculation for non-admins happens here

Creating Attendees Records

Before sending a request to the Orders API, it is required to create the attendees, mapped to some ticket; for this, registered users are allowed to create attendees without adding a relationship to the order. The mapping with the order is done internally by the Orders API and its helpers. A hedged example request is sketched below.
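An illustrative attendee-creation payload (attribute names are assumptions for this sketch; note there is no order relationship):

{
  "data": {
    "type": "attendee",
    "attributes": {
      "firstname": "John",
      "lastname": "Doe",
      "email": "john@example.com"
    },
    "relationships": {
      "ticket": {
        "data": {
          "type": "ticket",
          "id": "1"
        }
      }
    }
  }
}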


  1. Dev Handbook – Niranjan R
    The Open Event Developer Handbook
  2. Flask-REST-JSONAPI Docs
    Permissions and Data layer | Flask-REST-JSONAPI
  3. A guide to use permission manager in API Server


Generating Ticket PDFs in Open Event API Server

In the ordering system of Open Event API Server, there is a requirement to send email notifications to the attendees. These emails contain the URL of the pdf of the generated ticket. On creating the order, the pdfs are first generated and stored in the preferred storage location, and then they are sent to the users through email.

Generating a PDF is a simple process: using xhtml2pdf we can generate PDFs from HTML. The generated pdf is then passed to the storage helpers to store it in the desired location, and the pdf-url is updated in the attendees record.

Sample PDF

PDF Template

The templates are written in HTML, which is then converted using the module xhtml2pdf.
To store the templates, a new directory was created at app/templates where all HTML files are stored. The template directory then needs to be registered when initializing the flask app, so that the template engine can pick the templates from there. So in the app package initialization we updated the flask initialization with:

template_dir = os.path.dirname(__file__) + "/templates"

app = Flask(__name__, static_folder=static_dir, template_folder=template_dir)

This allows the template engine to pick the template files from this template directory.

Generating PDFs

Generating the PDF is done by rendering the HTML template first. This HTML content is then parsed into the pdf:

file = open(dest, "wb")

pisa.CreatePDF(cStringIO.StringIO(pdf_data.encode('utf-8')), file)


The generated pdf is stored in a temporary location and then passed to the storage helper to upload it:

uploaded_file = UploadedFile(dest, filename)

upload_path = UPLOAD_PATHS['pdf']['ticket_attendee'].format(identifier=get_file_name())

new_file = upload(uploaded_file, upload_path)

This generated pdf path is returned here
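Putting the snippets above together, the whole helper might look roughly like this sketch (the temporary path and filename wiring are assumptions; UploadedFile, UPLOAD_PATHS, upload and get_file_name are the project helpers used above, imports omitted):

import cStringIO

from xhtml2pdf import pisa

def create_save_pdf(pdf_data):
   # assumed temporary destination for the generated file
   filename = get_file_name() + '.pdf'
   dest = '/tmp/' + filename

   # parse the rendered HTML into a pdf file
   file = open(dest, "wb")
   pisa.CreatePDF(cStringIO.StringIO(pdf_data.encode('utf-8')), file)
   file.close()

   # hand the file to the storage helper and return the uploaded pdf path
   uploaded_file = UploadedFile(dest, filename)
   upload_path = UPLOAD_PATHS['pdf']['ticket_attendee'].format(identifier=get_file_name())
   return upload(uploaded_file, upload_path)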

Rendering HTML and storing PDF

for holder in order.ticket_holders:
   # assumed condition: the purchaser gets the summary template,
   # every other attendee gets the individual ticket template
   if holder.user_id != order.user_id:
       pdf = create_save_pdf(render_template('/pdf/ticket_attendee.html', order=order, holder=holder))
   else:
       pdf = create_save_pdf(render_template('/pdf/ticket_purchaser.html', order=order))

   holder.pdf_url = pdf


The HTML is rendered using the flask template engine and passed to create_save_pdf, and the link is updated on the attendee record.

Sending PDF on email

These pdfs are sent as links in the email after creating the order. Thus a ticket is sent to each attendee, and summarized order details with attendees are sent to the purchaser:

html = MAILS[TICKET_PURCHASED_ATTENDEE]['message'].format(

  1. Readme – xhtml2pdf
  2. Using xhtml2pdf to create pdfs


Open Event Server: Getting The Identity From The Expired JWT Token In Flask-JWT

The Open Event Server uses JWT based authentication, where JWT stands for JSON Web Token. JSON Web Tokens are an open industry standard (RFC 7519) method for representing claims securely between two parties.

Flask-JWT is used for the JWT-based authentication in the project. Flask-JWT makes it easy to use JWT based authentication in flask, while at its core it still uses PyJWT.

To get the identity when a JWT token is present in the request's Authorization header, the current_identity proxy of Flask-JWT can be used as follows (the route is illustrative):

@app.route('/example')
@jwt_required()
def example():
   return '%s' % current_identity


Note that it will only be set in the context of a function decorated by jwt_required(). The problem with the current_identity proxy when using jwt_required is that the token has to be active; the identity from an expired token cannot be fetched this way.

So why not write a function of our own to do the same? A JWT consists of three parts (segments) separated by dots (.), which are:

  • Header
  • Payload
  • Signature

The first step would be to get the payload, that can be done as follows:

token_second_segment = _default_request_handler().split('.')[1]


The payload obtained above is still base64-encoded JSON; it can be decoded and converted into a dict as follows:

payload = json.loads(token_second_segment.decode('base64'))


The identity can now be found in the payload as payload['identity']. We can get the actual user from the payload as follows:

def jwt_identity(payload):
   """
   Jwt helper function
   :param payload:
   """
   return User.query.get(payload['identity'])


Our final function will now be something like:

def get_identity():
   """
   To be used only if identity for expired tokens is required,
   otherwise use current_identity from flask_jwt
   """
   token_second_segment = _default_request_handler().split('.')[1]
   missing_padding = len(token_second_segment) % 4

   # pad the base64 string to a multiple of 4 (explained below)
   if missing_padding != 0:
       token_second_segment += b'=' * (4 - missing_padding)

   payload = json.loads(token_second_segment.decode('base64'))
   user = jwt_identity(payload)
   return user


But after using this function for some time, you will notice that for certain tokens the system raises an error saying that the JWT token is missing padding. The JWT payload is base64 encoded, which requires the payload string length to be a multiple of four. If the string is not a multiple of four, the remaining places can be padded with extra = (equals) signs. And since Python 2.7's .decode doesn't do that by default, we can accomplish it as follows:

missing_padding = len(token_second_segment) % 4

# ensures the string is correctly padded to be a multiple of 4
if missing_padding != 0:
   token_second_segment += b'=' * (4 - missing_padding)



Automatic Signing and Publishing of Android Apps from Travis

As I discussed preparing the apps in Play Store for automatic deployment and Google App Signing in previous blogs, in this blog I'll talk about how to use Travis CI to automatically sign and publish the apps using fastlane, as well as how to upload sensitive information like signing keys and publishing JSON to the open source repository. This method is used to publish FOSSASIA's Android apps.

Current Project Structure

The example project I have used to set up the process has the following structure:

It's a normal Android project with a .travis.yml and some additional bash scripts in the scripts folder. One of these is the standard app build and repo push script found in FOSSASIA projects; the process used to develop it is documented in previous blogs. First, we'll see how to upload our keys to the repo after encrypting them.

Encrypting keys using Travis

Travis provides very nice documentation on encrypting files containing sensitive information, but a crucial detail is buried low in the page. You'd normally want to upload two things to the repo: the app signing key, and the API JSON file for the release manager API of Google Play for fastlane. You can't encrypt them separately using the standard file encryption command for travis, as each run overrides the previous encrypted file's secret. Instead, you need to create a tarball of all the files that need to be encrypted and encrypt that tar. Consequently, before you can use the files, you'll have to decrypt the tar in the travis build and also uncompress it.

So, first install the Travis CLI tool and log in using travis login (you should have the right access to the repo and Travis CI in order to encrypt the files for it).

Then add the signing key and fastlane json in the scripts folder. Let’s assume the names of the files are key.jks and fastlane.json

Then, go to the scripts folder and run this command to create a tar of these files:

tar cvf secrets.tar fastlane.json key.jks


secrets.tar will be created in the folder. Now, run this command to encrypt the file

travis encrypt-file secrets.tar


A new file secrets.tar.enc will be created in the folder. Now delete the original files and the secrets tar so they do not get added to the repo by mistake. The output log will show the command for decryption of the file, to be added to the .travis.yml file.

Decrypting keys using Travis

But if we add it there, the keys will be decrypted for each commit on each branch. We want it to happen only for the master branch, as we only publish from that branch. So, we'll create a bash script for the task with the following content:

#!/bin/bash
set -e

if [ "$TRAVIS_PULL_REQUEST" != "false" -o "$TRAVIS_REPO_SLUG" != "iamareebjamal/android-test-fastlane" -o "$TRAVIS_BRANCH" != "$DEPLOY_BRANCH" ]; then
    echo "We decrypt key only for pushes to the master branch and not PRs. So, skip."
    exit 0
fi

openssl aes-256-cbc -K $encrypted_4dd7_key -iv $encrypted_4dd7_iv -in ./scripts/secrets.tar.enc -out ./scripts/secrets.tar -d
tar xvf ./scripts/secrets.tar -C scripts/


Of course, you'll have to change the commands and arguments according to your needs and repo, especially the key IDs in the decryption command.

The script checks that the repo and branch are correct and that the commit is not from a PR; it then decrypts the file and extracts it into the appropriate directory.

Before signing the app, you'll need to store the keystore password, alias and key password in Travis environment variables. Once you have done that, you can proceed to sign the app. I'll assume the variable names to be $STORE_PASS, $ALIAS and $KEY_PASS respectively. One way to add them is shown below.
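The variables can be added from the repository settings page on Travis, or encrypted into .travis.yml with the Travis CLI (the values here are placeholders):

travis encrypt STORE_PASS="your-keystore-password" --add env.global
travis encrypt ALIAS="your-key-alias" --add env.global
travis encrypt KEY_PASS="your-key-password" --add env.global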

Signing App

Now, come to the part in the script where you have the unsigned release app built. Let's assume its name is app-release-unsigned.apk. Then run these commands to sign it:

cp app-release-unsigned.apk app-release-unaligned.apk
# supply your own timestamp authority URL to -tsa ($TSA_URL is a placeholder)
jarsigner -verbose -tsa "$TSA_URL" -sigalg SHA1withRSA -digestalg SHA1 -keystore ../scripts/key.jks -storepass $STORE_PASS -keypass $KEY_PASS app-release-unaligned.apk $ALIAS


Then run this command to zipalign the app

${ANDROID_HOME}/build-tools/25.0.2/zipalign -v -p 4 app-release-unaligned.apk app-release.apk


Remember that the build tools version used here should be the same as the one specified in .travis.yml, as in the sketch below.
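A hedged example of the relevant .travis.yml Android configuration (component versions are illustrative, matching the zipalign path above):

language: android
android:
  components:
    - tools
    - platform-tools
    - build-tools-25.0.2
    - android-25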

This will create an apk named app-release.apk

Publishing App

This is the easiest step. First install fastlane using this command

gem install fastlane


Then run this command to publish the app to alpha channel on Play Store

fastlane supply --apk app-release.apk --track alpha --json_key ../scripts/fastlane.json --package_name com.iamareebjamal.fastlane


You can always configure the arguments according to your needs. Also notice that you have to provide the package name so fastlane knows which app to update; this can also be stored as an environment variable.

This is all for this blog. You can read more about the Travis CLI, fastlane features and the signing process in their official documentation.