Adding Modules API on Open Event Server

The Open Event Server enables organizers to manage events from concerts to conferences and meet-ups. It offers features for events with several tracks and venues. Event managers can create invitation forms for speakers and build schedules in a drag-and-drop interface. The event information is stored in a database, and the system provides API endpoints to fetch, modify and update it.

The Open Event Server is based on the JSON API 1.0 specification and is hence built on top of Flask-REST-JSONAPI (for building REST APIs) and Marshmallow (for schemas).

In this blog post, we will talk about how to add an API for accessing the Modules on the Open Event Server. The focus is on creating the schema and its corresponding API.

Schema Creation

For the ModuleSchema, we'll define our schema as follows.
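
A minimal sketch of what this schema could look like, assuming the Marshmallow-JSONAPI Schema base class the project builds on; the Meta values (type_, self_view) are illustrative placeholders, while the three Boolean fields follow the description below:

from marshmallow_jsonapi import fields
from marshmallow_jsonapi.flask import Schema


class ModuleSchema(Schema):
    """API schema for the Modules model."""
    class Meta:
        type_ = 'module'                     # illustrative resource type
        self_view = 'v1.module_detail'       # illustrative view name
        self_view_kwargs = {'id': '<id>'}

    id = fields.Str(dump_only=True)
    ticket_include = fields.Boolean(default=False)
    payment_include = fields.Boolean(default=False)
    donation_include = fields.Boolean(default=False)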

Now, let’s try to understand this Schema.

In this feature, we are giving the admin the right to set whether tickets, payments and donations are included in the Open Event application.

  1. First of all, we provide three fields in this schema: ticket_include, payment_include and donation_include.
  2. The first attribute, ticket_include, is a Boolean, so the admin can set whether the ticketing system is included in the application; it defaults to False.
  3. The next attribute, payment_include, is a Boolean, so the admin can set whether the payment system is included in the application; it defaults to False.
  4. The last attribute, donation_include, is a Boolean, so the admin can set whether the donation system is included in the application; it defaults to False.

API Creation

For the ModuleDetail resource, we'll define our API as follows.
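
Again, a hedged sketch rather than the exact project code, assuming Flask-REST-JSONAPI's ResourceDetail; the import paths and the is_admin permission decorator are assumptions, while before_get pinning the id to the first record mirrors the behaviour explained below:

from flask_rest_jsonapi import ResourceDetail

from app.api.bootstrap import api                 # assumed helper exposing permission decorators
from app.api.schema.modules import ModuleSchema   # assumed module path
from app.models import db
from app.models.module import Module              # assumed model path


class ModuleDetail(ResourceDetail):
    """Detail resource for the Modules model (GET and PATCH)."""

    def before_get(self, args, kwargs):
        # always serve the first (and only) record of the Modules model
        kwargs['id'] = 1

    decorators = (api.has_permission('is_admin', methods='PATCH'),)  # admin-only PATCH; decorator name assumed
    methods = ['GET', 'PATCH']
    schema = ModuleSchema
    data_layer = {'session': db.session,
                  'model': Module}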

Now, let’s try to understand this API.

In this API, we are giving the admin the right to set whether tickets, payments and donations are included in the Open Event application.

  1. First of all, note that this API supports two methods: GET and PATCH.
  2. The decorators show that only an admin has permission to access the PATCH method of this API, i.e. only admins can modify the modules.
  3. The before_get method shows that this API always returns the first record of the Modules model, irrespective of the id requested by the user.
  4. The schema used here is the default one for Modules.
  5. The GET request, in contrast, is accessible to all users.

So, we saw how the Modules schema and API are created to allow all users to read the module settings and admin users to modify them.

Contributing to Open Event Android App

The Open Event Android project consists of two components. The App Generator is a web application, hosted on a server, that generates an event Android app from a zip with JSON and binary files (examples here) or through an API. The second component we are developing in the project is a generic Android app, the output of the app generator. The Android app has a standard configuration file that sets the details of the app (e.g. color scheme, logo of the event, link to the JSON app data).

The process of making a contribution to the project starts with creating your account on GitHub. Secondly, find the open source projects that interest you. In my case, I started with open-event-android. Then follow these steps:

  1. Go through the project’s README.md file and get information about the various aspects and technologies of the project.
  2. Now fork that repo in your account.
  3. Open or set up the project as per the information given in its documentation; for example, for the sample Android app you have to clone the project to your local machine and open it using Android Studio.

Now that you have your own version of the project, it's time to use it on your own.

While doing so, you have to work like a tester of the project: explore every bit of it and find any anomaly that you would like to work on. Once you have found one, you are ready to create an issue.

Following are the steps to create a new issue:

Navigate to the main repo link; you will see an Issues section.

  • Click the "New issue" button and report every detail about the issue. For example, the first issue that I worked on was Issue-1934.
  • Even if you don't find any problem in the project on your own, you can always work on issues created by others; you just have to let the maintainers know that you want to work on the issue by commenting on it, e.g. Issue 1709.
  • Now the next step is to work on that issue.
  • On your machine, don't change the code in the development branch, as that is considered bad practice. Instead, check out a new branch; for example, for the above issue I checked out a branch named 'crashfixed':
git checkout -b crashfixed
  • Make the necessary changes on that branch and test that the code compiles and the issue is fixed, followed by
git add .
    The add command brings the changes to the staging area so that they are ready to be committed. You can also add individual files with git add <filename>.
  • Next we want to save the changes that we made so far, which is done with git commit.
  • Finally we push the changes that we made to our forked repository on GitHub with git push origin <branch-name>.
  • Now navigate to the repo and you will see an option to create a Pull Request.
    Mention the issue number, a description of the changes you made, and include screenshots of the fixed app. For example, my first PR was Pull Request 1936.

Sending the pull request means asking the maintainers of the code to add your changes to the main project, where they will be visible to all contributors. But before being merged, your code may have to pass several checks; in my case these were:

-codacy/pr

-continuous-integration/travis-ci/pr

After your code passes the above checks, you require one approved review from one of the maintainers of the project to get your code merged into the main one.

If your code is perfect on the very first attempt, it will be accepted by the maintainer; otherwise you will be asked to make some changes. You can carry out the changes on your local machine, and once you are done, push them to your forked repository; the pull request will be updated automatically.

Installing Susper Search Engine and Deploying it to Heroku

Susper is a decentralized search engine that uses the peer-to-peer system YaCy and Apache Solr to crawl and index search results.

Search results are displayed using the Solr server which is embedded into YaCy. All search results must be provided by a YaCy search server, which includes a Solr server with a specialized JSON result writer. When a search request is made in one of the search templates, an HTTP request is made to YaCy. The response is JSON because it can be parsed much more easily than XML in JavaScript.
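
To make this concrete, here is a small sketch in Python (Susper itself is an Angular app, so this is only for illustration) that sends such a search request to a YaCy peer's /yacysearch.json endpoint and prints the JSON response; the host, port and query parameter reflect a default local YaCy installation and should be treated as assumptions:

import json
import requests

# hypothetical local YaCy peer; adjust host and port to your installation
YACY_URL = "http://localhost:8090/yacysearch.json"

response = requests.get(YACY_URL, params={"query": "fossasia"})
response.raise_for_status()

# the JSON body is what a search template would parse in JavaScript
print(json.dumps(response.json(), indent=2)[:1000])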

In this blog post, we will talk about how to install the Susper search engine locally and deploy it to Heroku (a cloud application platform).

How to clone the repository

Sign up / Login to GitHub and head over to the Susper repository. Then follow these steps.

  1. Go ahead and fork the repository:
https://github.com/fossasia/susper.com
  2. Get the clone of the forked version on your local machine using
git clone https://github.com/<username>/susper.com.git
  3. Add upstream to synchronize the repository using
git remote add upstream https://github.com/fossasia/susper.com.git

Getting Started

Setting up the Susper search application basically consists of the following steps:

  1. First, we will need to install angular-cli by using the following command:
npm install -g @angular/[email protected]
  2. After installing angular-cli, we need to install the required node modules by using the following command:
npm install
  3. Deploy locally by running:
ng serve

Go to localhost:4200 where the application will be running locally.

How to Deploy Susper Search Engine to Heroku:

  1. We need to install Heroku on our machine. Type the following in your Linux terminal:
wget -O- https://toolbelt.heroku.com/install-ubuntu.sh | sh

This installs the Heroku Toolbelt on your machine so that you can access Heroku from the command line.

  2. Create a Procfile inside the root directory and write
web: ng serve
  3. Next, we need to log in to our Heroku account (assuming that you have already created one). Type the following in the terminal:
heroku login

Enter your credentials and log in.

  4. Once logged in, we need to create a space on the Heroku server for our application. This is done with the following command:
heroku create
  5. Add the Node.js buildpack to the app:
heroku buildpacks:add --index 1 heroku/nodejs
  6. Then we deploy the code to Heroku:
git push heroku master
git push heroku yourbranch:master # If you are in a branch other than master

Installing the Loklak Search and Deploying it to Surge

Loklak search creates a website using the Loklak server as a data source. The goal is to get a search site that offers timeline search as well as custom media, account and geolocation search.

In order to run the service, you can use the API of http://api.loklak.org or install your own Loklak server as the data storage engine. The Loklak server is a server application which collects messages from various social media sources, including Twitter. The server contains a search index and a peer-to-peer index sharing interface. All messages are stored in an Elasticsearch index.
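
As a quick illustration of using the Loklak server as a data source outside the Angular app, here is a hedged Python sketch against the public search endpoint; the /api/search.json path and the statuses field are assumptions about the Loklak server API:

import requests

# public loklak API; you can point this at your own loklak_server instance instead
API_URL = "http://api.loklak.org/api/search.json"

response = requests.get(API_URL, params={"q": "fossasia"})
response.raise_for_status()

data = response.json()
messages = data.get("statuses", [])  # collected messages, if the field is present
print("Fetched %d messages" % len(messages))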

The site of this repo is deployed on the GitHub gh-pages branch and is automatically available here: http://loklak.org

In this blog post, we will talk about how to install Loklak search locally and deploy it to Surge (static web publishing for front-end developers).

How to clone the repository

Sign up / Login to GitHub and head over to the Loklak_Search repository. Then follow these steps.

  1. Go ahead and fork the repository:
https://github.com/fossasia/loklak_search
  2. Get the clone of the forked version on your local machine using
git clone https://github.com/<username>/loklak_search.git
  3. Add upstream to synchronize the repository using
git remote add upstream https://github.com/fossasia/loklak_search.git

Getting Started

Setting up the Loklak search application basically consists of the following steps:

  1. First, we will need to install angular-cli by using the following command:
npm install -g @angular/[email protected]
  2. After installing angular-cli, we need to install the required node modules by using the following command:
npm install
  3. Deploy locally by running:
ng serve

Go to localhost:4200 where the application will be running locally.

How to Deploy Loklak Search on Surge:

Surge is a technology which publishes and generates static web-page demo links, making it easier for developers to deploy their web apps. There are a lot of benefits to using Surge over generating a demo link using GitHub Pages.

  1. We need to install Surge on our machine. Type the following in your Linux terminal:
npm install --global surge

This installs Surge on your machine so that you can access it from the command line.

  2. In your project directory, just run
surge
  3. After this, it will ask you for three parameters, namely
Email
Password
Domain

After specifying these three parameters, the deployment link with the respective domain is generated.

Auto deployment of Pull Requests using Surge:

To implement the feature of auto-deployment of pull request using surge, one can follow up these steps:

  • Create a pr_deploy.sh file.
  • The pr_deploy.sh file will be executed only after Travis CI succeeds, i.e. when the Travis CI build passes, by using the command bash pr_deploy.sh:
#!/usr/bin/env bash
if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
  echo "Not a PR. Skipping surge deployment."
  exit 0
fi
npm i -g surge
export SURGE_LOGIN=[email protected]
# Token of a dummy account
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206
export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-fossasia-LoklakSearch.surge.sh
surge --project ./dist --domain $DEPLOY_DOMAIN;

Here, Travis CI first installs Surge locally with npm i -g surge, and then we export the environment variables SURGE_LOGIN, SURGE_TOKEN and DEPLOY_DOMAIN.

Now, execute the pr_deploy.sh file from .travis.yml by using the command bash pr_deploy.sh.

Auto Deployment of Pull Requests on Susper using Surge Technology

Susper is being improved every day. Following the organization's best practices, each pull request includes a working demo link of the fix. Currently, the demo link for Susper can be generated using GitHub Pages by running two simple commands, ng build and npm run deploy. On a slow internet connection this process can take up to 30 minutes to generate a working demo link for the fix.

Surge is a technology which publishes and generates static web-page demo links, making it easier for developers to deploy their web apps. There are a lot of benefits to using Surge over generating a demo link using GitHub Pages:

  • As soon as the pull request passes Travis CI, the deployment link is generated; it has been set up so that no extra terminal commands are required.
  • Faster loading compared to a deployment link generated using GitHub Pages.

Surge can be used to deploy only static web pages, i.e. websites with fixed content.

To implement the feature of auto-deployment of pull request using surge, one can follow up these steps:

  • Create a pr_deploy.sh file which will be executed during Travis CI testing.
  • The pr_deploy.sh file is executed after success, i.e. when the Travis CI build passes, by using the command bash pr_deploy.sh.

The pr_deploy.sh file for Susper looks like this:

#!/usr/bin/env bash
if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
  echo "Not a PR. Skipping surge deployment."
  exit 0
fi
# build the Angular app for production
ng build --prod

npm i -g surge

export SURGE_LOGIN=test@example.co.in
# Token of a dummy account
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206

export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-fossasia-susper.surge.sh
surge --project ./dist --domain $DEPLOY_DOMAIN;

 

Once the pr_deploy.sh file has been created, execute it from .travis.yml by using the command bash pr_deploy.sh.

In this way, we have integrated Surge for auto-deployment of the pull requests in Susper.

Getting Started Developing on Phimpme Android

Phimpme is an Android app for editing photos and sharing them on social media. To participate in the project, start by learning how people contribute to open source and by learning about the version control system Git and other tools like Codacy and Travis.

Firstly, sign up for GitHub. Secondly, find the open source projects that interest you. In my case, I started with Phimpme. Then follow these steps:

  1. Go through the project's README.md and read about all the technologies and tools it uses.
  2. Now fork that repo in your account.
  3. Open Android Studio (or the other applications required for the project) and import the project through Git.
  4. For Android Studio, sync all the Gradle files and other changes, and you are all done and ready for the development process.

Install the app and run it on a phone. Now explore every bit of it and use the app as a tester; think about the edge cases and boundary conditions that will make the 'ANR' (App Not Responding) dialog appear. Congratulations, you are ready to create an issue if it is verified, original and actually a bug.

Next,

  • Navigate to the main repo link; you will see an Issues section.
  • Create a new issue and report every detail about the issue (logcat, screenshots). For example, refer to Issue-1120.
  • Now the next step is to work on that issue.
  • On your machine, don't change the code in the development branch, as that is considered bad practice. Instead, check out a new branch.
    For example, for the above issue I checked out a branch named 'crashfixed':
git checkout -b "Any branch name you want to keep"
  • Make the necessary changes on that branch and test that the code compiles and the issue is fixed, followed by
git add .
git commit -m "Fix #Issue No -Description "
git push origin branch-name
  • Now navigate to the repo and you will see an option to create a Pull Request.
    Mention the issue number, a description of the changes you made, and include screenshots of the fixed app. For example, see Pull Request 1131.

You have now made your first contribution to open source while learning Git. The pull request will trigger some checks like Codacy and the Travis build, and if everything works it is reviewed and merged by co-developers.

The usual way this works is that the pull request should be reviewed by other co-developers. These co-developers do not need merge or write access to the repository; any developer can review pull requests. This also helps contributors learn about the project and makes the job of core developers easier.

Deleting SUSI Skills from Server

SUSI Skill CMS is a web application to create and edit skills. In this blog post I will cover how we built the skill deletion feature of Skill CMS on the SUSI server.
The deletion of a skill was designed so that a user can click a button to delete the skill. As soon as they click the delete button, the skill is removed from the directory of SUSI skills. But admins have the option to recover the deleted skill within 30 days of its deletion.

First we will accept all the request parameters from the GET request.

        String model_name = call.get("model", "general");
        String group_name = call.get("group", "Knowledge");
        String language_name = call.get("language", "en");
        String skill_name = call.get("skill", "wikipedia");

In this we get the model name, group (category), language name and skill name. These four parameters are used to build a file path that locates the skill in the susi_skill_data repository.

        if (!DAO.deleted_skill_dir.exists()) {
            DAO.deleted_skill_dir.mkdirs();
        }

We need to move the skill to the deleted skills directory (deleted_skill_dir). So we check whether the directory exists or not; if it does not exist, we create it.

        if (skill.exists()) {
            File file = new File(DAO.deleted_skill_dir.getPath() + path);
            file.getParentFile().mkdirs();
            if (skill.renameTo(file)) {
                boolean changed = new File(DAO.deleted_skill_dir.getPath() + path).setLastModified(System.currentTimeMillis());
            }
        }

This is the part where the real deletion happens: we take the path of the skill and rename it to a new path inside the deleted skills directory.

Here we also set the last modified time of the skill to the current time. This timestamp is later used to check whether the deleted skill is older than 30 days.

        try (Git git = DAO.getGit()) {
            DAO.pushCommit(git, "Deleted " + skill_name,
                    rights.getIdentity().isEmail() ? rights.getIdentity().getName() : "[email protected]");
            json.put("accepted", true);
            json.put("message", "Deleted " + skill_name);
        } catch (IOException | GitAPIException e) {
            // error handling omitted in this excerpt
        }

Finally, we commit the changes to Git. DAO.pushCommit pushes the commit to the susi_skill_data repository. If the user is logged in, we use their email as the commit author; otherwise we use the default username "[email protected]".

Then, in the caretaker class, there is a method deleteOldFiles that looks for all files whose last modified time is older than 30 days and quietly deletes them.

public void deleteOldFiles() {
    Collection<File> filesToDelete = FileUtils.listFiles(
            new File(DAO.deleted_skill_dir.getPath()),
            new AgeFileFilter(DateTime.now().withTimeAtStartOfDay().minusDays(30).toDate()),
            TrueFileFilter.TRUE);    // include sub dirs
    for (File file : filesToDelete) {
        boolean success = FileUtils.deleteQuietly(file);
        if (success) {
            System.out.println("Deleted skill older than 30 days.");
        }
    }
}

To test this API endpoint, we need to call http://localhost:4000/cms/deleteSkill.txt?model=general&group=Knowledge&language=en&skill=<skill_name>
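
For example, a quick hedged Python check of that call, assuming a local SUSI server on port 4000, sufficient rights, and a hypothetical skill name; the accepted and message fields are the ones put into the JSON response above:

import requests

# hypothetical skill name; the other parameters use the servlet's defaults
params = {
    "model": "general",
    "group": "Knowledge",
    "language": "en",
    "skill": "my_test_skill",
}
response = requests.get("http://localhost:4000/cms/deleteSkill.txt", params=params)

# the body carries the JSON built above ("accepted" and "message")
print(response.status_code)
print(response.text)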

Resources

JGit Documentation: https://eclipse.org/jgit/documentation/

Commons IO: https://commons.apache.org/proper/commons-io/

Age Filter: https://commons.apache.org/proper/commons-io/javadocs/api-1.4/org/apache/commons/io/filefilter/AgeFileFilter.html

JGit User Guide: http://wiki.eclipse.org/JGit/User_Guide

JGit Repository access: http://www.codeaffine.com/2014/09/22/access-git-repository-with-jgit/

Getting SUSI Skill at a Commit ID

SUSI Skill CMS is a web app to edit and create new skills. We use Git for storing different versions of SUSI skills. So what if we want to roll back to a previous version of a skill? To implement this feature in SUSI Skill CMS, we needed an API endpoint which accepts the name of the skill and a commit ID and returns the file at that commit.

In this blog post I will describe how to build an API endpoint which works similarly to git show.

First we will accept all the request parameters from the GET request.

        String model_name = call.get("model", "general");
        String group_name = call.get("group", "Knowledge");
        String language_name = call.get("language", "en");
        String skill_name = call.get("skill", "wikipedia");
        String commitID  = call.get("commitID", null);

In this we get the model name, group (category), language name, skill name and the commit ID. The first four parameters are used to build a file path that locates the skill in the susi_skill_data repository.

This servlet needs the commit ID to work; if it is not given in the request parameters, we send an error message saying that the commit ID is null and stop the servlet execution.

    Repository repository = DAO.getRepository();
    ObjectId CommitIdObject = repository.resolve(commitID);

Then we get the git repository of the skill from the DAO and initialize the repository object.

From the commitID that we got in the request parameters we create a CommitIdObject.

        try (RevWalk revWalk = new RevWalk(repository)) {
            RevCommit commit = revWalk.parseCommit(CommitIdObject);
            RevTree tree = commit.getTree();


Now, using the commit, we get its tree, which we will walk to find the path of the skill file.

On a TreeWalk over the repository we set a path filter to find the file. This searches recursively for files inside all the folders.

            try (TreeWalk treeWalk = new TreeWalk(repository)) {
                treeWalk.addTree(tree);
                treeWalk.setRecursive(true);
                treeWalk.setFilter(PathFilter.create(path));
                if (!treeWalk.next()) {
                    throw new IllegalStateException("Did not find expected file");
                }

If the TreeWalk reaches the end without finding the specified skill path, it throws an IllegalStateException with a message saying that the file was not found at that commit ID.

                ObjectId objectId = treeWalk.getObjectId(0);
                ObjectLoader loader = repository.open(objectId);
                OutputStream output = new ByteArrayOutputStream();
                loader.copyTo(output);

Then one can use the loader to read the file: from the TreeWalk we get the object ID and create an output stream to copy the file content into. After that we create the JSON response and put the output stream's content into it as a string.

       json.put("file",output);

This servlet can be seen working on api.susi.ai: http://api.susi.ai/cms/getFileAtCommitID.json?model=general&group=knowledge&language=en&skill=bitcoin&commitID=214791f55c19f24d7744364495541b685539a4ee
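
As a quick illustration, a hedged Python sketch of calling this endpoint with the same example parameters and reading the file field that the servlet puts into its JSON response:

import requests

params = {
    "model": "general",
    "group": "knowledge",
    "language": "en",
    "skill": "bitcoin",
    "commitID": "214791f55c19f24d7744364495541b685539a4ee",
}
response = requests.get("http://api.susi.ai/cms/getFileAtCommitID.json", params=params)
response.raise_for_status()

data = response.json()
# the skill file contents at that commit, as copied into the output stream above
print(data.get("file"))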

Resources

JGit Documentation: https://eclipse.org/jgit/documentation/

JGit User Guide: http://wiki.eclipse.org/JGit/User_Guide

JGit Repository access: http://www.codeaffine.com/2014/09/22/access-git-repository-with-jgit/

JGit Github: https://github.com/eclipse/jgit

sTeam GSoC 2016 Windup

sTeam (societyserver) aims to be a platform for developing collaborative applications.
sTeam server project repository: sTeam.
sTeam-REST API repository: sTeam-REST

An overview of the work done by ajinkya007 during Google Summer of Code 2016 with FOSSASIA on its project sTeam.

The community bonding period saw the creation of a Docker image and a Debian package for the sTeam server. The integration of the sTeam shell into vi, improvements in the export and import to Git scripts, user and group manipulation commands, sending mails through the command line, viewing logs and the edit script modifications were done subsequently. In the later part of GSoC, the sTeam-rest repository was restructured and unit and API endpoint tests were performed. The newly developed web interface was tested.
The code written during this period by me and Siddhant was merged and the conflicts were resolved. The merged code was tested thoroughly, as no automated test integration tool supports the Pike programming language. Documentation was generated using Doxygen and deployed on the gh-pages branch of the sTeam server repository.

A Trello board was maintained throughout the course of GSoC 2016.

Trello Board: sTeam

Accomplishments

Issues Reported and Resolved

A list of tasks covered and all the Pull requests related to each:

  • Make changes in the Makefile for installation of sTeam (Issues: Issue-25, Issue-27; PRs: PR-66, PR-67)
  • Edit script modifications (Issues: Issue-20, Issue-29, Issue-43; PRs: PR-44, PR-48)
  • Indentation of output in steam-shell (Issue: Issue-24; PR: PR-42)
  • Integrate steam-shell into vim or emacs (Issues: Issue-37, Issue-43, Issue-49; PRs: PR-41, PR-48, PR-51)
  • Improve the import and export from git scripts (Issues: Issue-9, Issue-14, Issue-16, Issue-18, Issue-19, Issue-46; PRs: PR-45, PR-54, PR-55, PR-76)
  • Create, delete and list users through the command line (Issues: Issue-58, Issue-69, Issue-72; PRs: PR-59, PR-70, PR-78)
  • Sending mails through the command line (Issue: Issue-74; PR: PR-85)
  • Generate error logs and display them in the CLI (Issue: Issue-83; PR: PR-86)
  • Create a file of any mime type from the command line (Issue: Issue-79; PR: PR-82)
  • Add more commands for group operations (Issue: Issue-80; PR: PR-84)
  • Add more utility to the steam-shell (Issues: Issue-56, Issue-71, Issue-73; PRs: PR-57, PR-75, PR-81)
  • Restructure the sTeam-rest repository (list of issues and PRs)
  • Write test cases to test the sTeam-rest API (list of issues and PRs)
  • Create a debian package and a docker image for easy deployment (Issue: Create docker image; PR: Docker Image)
  • Document the work done (Issue 149; sTeam Server Structure, sTeam Server Documentation)
  • Test the web-interface

Commits Merged

During the course of GSOC 2016, work was done on the sTeam and sTeam-rest repositories.

1. The work done on the sTeam repository.

We have combined all the work into two branches for the ease of creating a debian package. The commits made by me in each branch can be seen here.

2. The work done on the sTeam-rest repository

The pull requests sent for the issues are yet to be merged into the main repository. The list of PRs for the sTeam-rest repository:

sTeam-rest PRs

The weekly blogs

The blogs summarizing the work done during each week were published on my personal website; they can be found under Weekly Blogs.
All the blogs can also be found on the FOSSASIA blog.

Scrums

Scrum reports were posted on #steam-devel on irc.freenode.net and on the sTeam Google group. The sTeam Trello board also has the everyday scrum reports.

Further Improvements

  1. The sTeam command line lacks the functionality to read and set object access permissions. A sTeam function analogous to getfacl() is needed to read and change the sTeam server object permissions.
  2. A sTeam Debian package for easy installation of the sTeam server; the Debian package is yet to be completed.

Special Thanks

  • I would like to thank my mentors Mario Behling, Hong Phuc Dang, Martin Bahr, Trilok Tourani and my peers for being there to help me and guide me.
  • I would like to thank FOSSASIA, sTeam and Pike Community for giving me this opportunity and guiding me in this endeavour.
  • I would also like to thank Google Summer of Code for this experience.

Feel free to explore the repository. Suggestions for improvements are welcome.

Check out the FOSSASIA Ideas page for more information on projects supported by FOSSASIA.

Push your apk to your GitHub repository from Travis

In this post I'll guide you on how to directly upload the compiled .apk from Travis to your GitHub repository.

Why do we need this?

Well, assume that you need to provide an app to your testers after each commit to the repository. Instead of manually copying and emailing them the app, we can set up Travis to upload the file to our repository, where the testers can fetch it.

So, let's get to it!

Step 1 :

Link Travis to your GitHub Account.

Open up https://travis-ci.org.

Click on the green button in the top right corner that says “Sign in with GitHub”


Step 2 :

Add your existing repository to Travis

Click the “+” button next to your Travis Dashboard located on the left.


Choose the project that you want to set up Travis for from the next page.

Toggle the switch for the project that you want to integrate

Click the cog here and add an Environment Variable named GITHUB_API_KEY.
Proceed by adding your Personal Authentication Token there.
Read up here on how to get the Token.


Great, we are pretty much done here.

Let us move to the project repository that we just integrated and create a new file in the root of the repository by clicking on "Create new file" on the repo's page.

Name it .travis.yml and add the following configuration there:

language: android
jdk:
  - oraclejdk8
android:
  components:
    - tools
    - build-tools-24.0.0
    - android-24
    - extra-android-support
    - extra-google-google_play_services
    - extra-android-m2repository
    - extra-google-m2repository
    - addon-google_apis-google-24
before_install:
  - chmod +x gradlew
  - export JAVA8_HOME=/usr/lib/jvm/java-8-oracle
  - export JAVA_HOME=$JAVA8_HOME
after_success:
  - chmod +x ./upload-apk.sh
  - ./upload-apk.sh
script:
  - ./gradlew build
Next, create a bash file in the root of your repository using the same method and name it upload-apk.sh

  #!/bin/bash
  #create a new directory that will contain our generated apk
  mkdir $HOME/buildApk/
  #copy the generated apk from the build folder to the folder just created
  cp -R app/build/outputs/apk/app-debug.apk $HOME/buildApk/
  #go to home and set up git
  cd $HOME
  git config --global user.email "[email protected]"
  git config --global user.name "Your Name"
  #clone the master branch of the repository into a folder named master
  git clone --quiet --branch=master https://user-name:$GITHUB_API_KEY@github.com/user-name/repo-name master > /dev/null
  #go into the directory and copy the apk we're interested in
  cd master
  cp -Rf $HOME/buildApk/* .
  #add, commit and push the files
  git remote rm origin
  git remote add origin https://user-name:$GITHUB_API_KEY@github.com/user-name/repo-name.git
  git add -f .
  git commit -m "Travis build $TRAVIS_BUILD_NUMBER pushed [skip ci]"
  git push -fq origin master > /dev/null
  echo -e "Done\n"

Once you have done this, commit and push these files; a Travis build will be initiated in a few seconds.
You can see it ongoing in your Dashboard at https://travis-ci.org/.

After the build has completed, you will see an app-debug.apk in your repository.

IMPORTANT NOTE:

You might be wondering why I wrote [skip ci] in the commit message.

Well, the reason is that Travis starts a new build as soon as it detects a commit made on the master branch of your repository.

So once the apk is uploaded, that would trigger another build in Travis, forming an infinite loop.

We can prevent this in two ways:

First, simply write [skip ci] somewhere in the commit message and it will cause Travis to ignore the commit.

Or, push the apk to another branch which is not configured for Travis builds.

So well, that’s almost it.

I hope that you found this tutorial helpful. If you have any doubts regarding this, feel free to comment down below; I would love to help you out.

Cheers.