Auto Deployment of Badgeyay Backend by Heroku Pipeline

The Badgeyay project is now divided into two parts: a front-end in Ember JS and a back-end REST API written in Python. One of the challenges is supporting this decoupled architecture. We therefore need to integrate the Heroku-deployed API with GitHub so that every pull request made to the development branch is auto deployed, which eases the pull request review process.

In this blog, I’ll be discussing how I configured a Heroku Pipeline in my pull request to auto deploy every pull request made to the development branch of Badgeyay and so ease the review process.
First, let’s understand Heroku Pipelines and their features. Then we will move on to configuring the pipeline to auto deploy pull requests. Let’s get started and understand it step by step.

What is a Heroku Pipeline?

A pipeline is a group of Heroku apps that share the same codebase. Each app in a pipeline represents one of the following steps in a continuous delivery workflow:

  • Review
  • Development
  • Staging
  • Production

A common Heroku continuous delivery workflow has the following steps:

  • A developer creates a pull request to make a change to the codebase.
  • Heroku automatically creates a review app for the pull request, allowing developers to test the change.
  • When the change is ready, it’s merged into the codebase’s default branch.
  • The default branch is automatically deployed to staging for further testing.
  • When it’s ready, the staging app is promoted to production, where the change is available to end users of the app.

In Badgeyay, I have used the Review App and Development App steps for auto deployment of pull requests.

Prerequisites:

  • You should have admin rights to the GitHub repository.
  • You should be the owner of the Heroku-deployed app.
  • For creating a Review App, the files mentioned below need to be in the root of the project repository to trigger the Heroku build.

1. app.json

{
    "name": "BadgeYay-API",
    "description": "A fully functional REST API for badges generator using flask",
    "repository": "https://github.com/fossasia/badgeyay/backend/",
    "keywords": [
        "badgeyay",
        "fossasia",
        "flask"
    ],
    "buildpacks": [
        {
            "url": "heroku/python"
        }
    ]
}
2. Procfile

web: gunicorn --pythonpath backend/app/ main:app
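The Procfile tells Heroku to serve the WSGI application object named app, found in the main module under backend/app/, using gunicorn. For illustration, a minimal sketch of such an entrypoint is shown below; the route and structure here are assumptions for this example, not the actual Badgeyay backend/app/main.py.

# Illustrative sketch of a gunicorn-servable Flask entrypoint (not the actual Badgeyay code)
from flask import Flask, jsonify

app = Flask(__name__)  # gunicorn imports this object via "main:app"


@app.route('/health')
def health():
    # Simple endpoint to confirm the deployed app is up
    return jsonify({'status': 'ok'})


if __name__ == '__main__':
    # Local development server; on Heroku, gunicorn runs the app instead
    app.run()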

 

Now, all the prerequisites needed for integrating the GitHub repository with the Heroku-deployed Badgeyay API are fulfilled. Let’s move to the Heroku dashboard of the Badgeyay API and implement auto deployment of every pull request.

Step 1:

Open the Heroku-deployed app on the dashboard. You will see the following tabs at the top of the dashboard.

Step 2:

Click on Deploy and first create a new pipeline by giving it a name and choosing a stage for the pipeline.

Step 3:

  • Choose a deployment method. For the Badgeyay project, I have integrated GitHub for auto deployment of pull requests.
  • Select the repository and connect with it.
  • You will receive a pop-up confirming that the repository is connected to Heroku.

Step 4:

Enable automatic deploys for the GitHub repository.

Step 5:

After adding the pipeline, the present app gets nested under the pipeline. Click on the pipeline name at the top, and now we have a pipeline dashboard like this:

Step 6:

Now, for auto deployment of pull requests, enable Review Apps by filling in the required information like this:

Step 7:

Verify by creating a test PR after following all the steps mentioned above.

 

Now we are all done with setting up auto deployment of every pull request to the Badgeyay repository.

This is how I configured a Heroku Pipeline to auto deploy every pull request made to the development branch and ease the pull request review process.

About the Author:

I have been contributing to the open source organization FOSSASIA, where I’m working on a project called BadgeYaY. It is a badge generator with a simple web UI to add data and generate printable badges in PDF.

Resources:

  • Heroku Pipelines Article – Link

Unit Tests for REST-API in Python Web Application

The Badgeyay backend has now shifted to a REST API, and to test the functions used in it we need a testing framework that can exercise each and every function in the API. For our purposes, we chose unittest, the popular Python testing framework.

In this blog, I’ll be discussing how I have written unit tests to test the Badgeyay REST API.

First, let’s understand what unittest is and why we have chosen it. Then we will move on to writing API tests for Badgeyay. These tests have a generic structure, so the code I mention would work in other REST API testing scenarios, often with little to no modification.

Let’s get started and understand API testing step by step.

What is unittest?

unittest is a Python unit testing framework which supports test automation, sharing of setup and shutdown code for tests, aggregation of tests into collections, and independence of the tests from the reporting framework. The unittest module provides classes that make it easy to support these qualities for a set of tests.
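As a quick illustration (a standalone sketch, not Badgeyay code), a unittest test case subclasses unittest.TestCase and defines methods whose names start with test_; the test runner discovers and runs them automatically:

import unittest


class TestArithmetic(unittest.TestCase):
    def test_addition(self):
        # assertEqual fails the test if the two values differ
        self.assertEqual(2 + 2, 4)


if __name__ == '__main__':
    unittest.main()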

Why unittest?

We get two primary benefits from unit testing, with a majority of the value going to the first:

  • It guides your design to be loosely coupled and well fleshed out. If you are doing test-driven development, it limits the code you write to only what is needed and helps you evolve that code in small steps.
  • It provides fast automated regression testing for refactors and small changes to the code.
  • Unit testing also gives you living documentation about how small pieces of the system work.

We should always strive to write comprehensive tests that cover the working code pretty well.

Now, here is a glimpse of how I wrote unit tests for testing code in the REST API backend of Badgeyay. Using the unittest package and the requests module, we can test the REST API in our test automation.

Below is the code snippet for which I have written unit tests in one of my pull requests.

from flask import jsonify  # output() builds a Flask JSON response


def output(response_type, message, download_link):
    # Omit the download link from the response when it is empty
    if download_link == '':
        response = [
            {
                'type': response_type,
                'message': message
            }
        ]
    else:
        response = [
            {
                'type': response_type,
                'message': message,
                'download_link': download_link
            }
        ]
    return jsonify({'response': response})

 

To test this function, I basically created a mock object which could simulate the behavior of real objects in a controlled way. In this case, the mock object simulates the behavior of the output function and returns a JSON response without hitting the real REST API. The next challenge is to parse the JSON response and feed specific values from it to the Python automation script. Python reads the JSON as a dictionary object, which really simplifies the way the JSON needs to be parsed and used.
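For example, json.loads turns the raw JSON text of a response into a plain Python dictionary whose values are easy to assert on (a standalone sketch with made-up data, not Badgeyay code):

import json

raw = '{"response": [{"type": "error", "message": "Test Error"}]}'
data = json.loads(raw)
# data is now a regular dict of lists and dicts
print(data['response'][0]['message'])  # prints: Test Error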

And here’s the content of the backend/tests/test_basic.py file.

#!/usr/bin/env python3
"""Tests for Basic Functions"""
import sys
import json
import unittest

sys.path.append("../..")
from app.main import *


class TestFunctions(unittest.TestCase):
    """Test case for the client methods."""

    def setUp(self):
        app.config['TESTING'] = True
        self.app = app.test_client()

    # Test of the output function
    def test_output(self):
        with app.test_request_context():
            # Call the function under test
            out = output('error', 'Test Error', 'local_host')
            # Expected response payload
            response = [
                {
                    'type': 'error',
                    'message': 'Test Error',
                    'download_link': 'local_host'
                }
            ]
            data = json.loads(out.get_data(as_text=True))
            # Assert response
            self.assertEqual(data['response'], response)


if __name__ == '__main__':
    unittest.main()

 

And finally, we can verify that everything works by running nosetests.

This is how I wrote unit tests in the BadgeYaY repository. You can find more of my work here.

Resources:

  • The Purpose of Unit Testing – Link
  • Unit testing framework – Link

Parallelizing Builds In Travis CI

The Badgeyay project is now divided into two parts: a front-end in Ember JS and a back-end REST API written in Python. One of the challenges is supporting this decoupled architecture: the project should run tests for the front-end and the back-end, i.e. for two different languages, on isolated instances by making use of isolated parallel builds.

In this blog, I’ll be discussing how I configured Travis CI in my pull request to run the Badgeyay tests in parallel, in isolated builds.

First, let’s understand what a parallel Travis CI build is and why we need it. Then we will move on to configuring the .travis.yml file to run the tests in parallel. Let’s get started and understand it step by step.

Why Parallel Travis CI Build?

Integration test suites tend to test more complex situations across the whole stack, which incorporates the front-end and back-end, and they likewise tend to be the slowest part, requiring several minutes to run, sometimes even up to 30 minutes. To accelerate a test suite like that, we can split it up into a few sections using Travis CI’s build matrix feature. Travis will decide the build matrix based on the configuration (for example environment variables) and schedule the builds to run.

Now our objective is clear: we have to configure .travis.yml to build in parallel. Our project requires two buildpacks, Python and node_js, and running the build jobs for both of them in parallel speeds things up by a considerable amount. It is possible to run several languages in one .travis.yml file using the matrix: include feature.

Below is the code snippet of the .travis.yml file for the Badgeyay project that runs the build jobs in parallel.

sudo: required
dist: trusty

# Check different combinations of build flags, which divides the build into "jobs".
matrix:

  # Helps to run different languages in one .travis.yml file
  include:

    # First job: Python.
    - language: python3

      apt:
        packages:
          - python-dev

      python:
        - 3.5

      cache:
        directories:
          - $HOME/backend/.pip-cache/

      before_install:
        - sudo apt-get -qq update
        - sudo apt-get -y install python3-pip
        - sudo apt-get install python-virtualenv

      install:
        - virtualenv -p python3 ../flask_env
        - source ../flask_env/bin/activate
        - pip3 install -r backend/requirements/test.txt --cache-dir $HOME/backend/.pip-cache/

      before_script:
        - export DISPLAY=:99.0
        - sh -e /etc/init.d/xvfb start
        - sleep 3

      script:
        - python backend/app/main.py >> log.txt 2>&1 &
        - py.test --cov ../ ./backend/app/tests/test_api.py

      after_success:
        - bash <(curl -s https://codecov.io/bash)

    # Second job: Node.js.
    - language: node_js
      node_js:
        - "6"

      addons:
        chrome: stable

      cache:
        directories:
          - $HOME/frontend/.npm

      env:
        global:
          # See https://git.io/vdao3 for details.
          - JOBS=1

      before_install:
        - cd frontend
        - npm install
        - npm install -g ember-cli
        - npm i eslint-plugin-ember@latest --save-dev
        - npm config set spin false

      script:
        - npm run lint:js
        - npm test

 

Now we have added the .travis.yml file and pushed it to the project repo. Here is a screenshot of Travis CI passing after the parallel build jobs.

The related PR of this work is https://github.com/fossasia/badgeyay/pull/512

Resources :

Travis CI documentation – Link


Adding Static Code Analyzers in Open Event Orga Android App

This week, in the Open Event Orga App project (Github Repo), we wanted to add some static code analysers that run on each build to ensure that the app code is free of potential bugs and follows a certain style. Codacy handles a few of these things, but it is quirky and sometimes produces false positives. Furthermore, it is not a required check for builds, so errors can creep in gradually. We chose Checkstyle, PMD and FindBugs for static analysis as they are the most popular tools for Java. Their areas of analysis overlap somewhat, but together they give confidence about code quality. FindBugs actually analyses the bytecode instead of the source code to find possible JVM bugs.

Adding dependencies

The first step was to add the required dependencies. We chose the library android-check as it contains all 3 tools, is focused on Android, and is easily configurable. First, we add the classpath in the project-level build.gradle:

dependencies {
   classpath 'com.noveogroup.android:check:1.2.4'
}

 

Then, we apply the plugin in the app-level build.gradle:

apply plugin: 'com.noveogroup.android.check'

 

This much is enough to get you started, but by default, the build will not fail if any violations are found. To change this behaviour, we add this block in the app-level build.gradle:

check {
   abortOnError true
}

 

There are many configuration options available for the library. Do check out the project’s GitHub repo using the link provided above.

Configuration

The default configuration is the easy level, and will be enough for most projects, but it is of course configurable. So we took the default hard configs for the 3 analysers and disabled the properties we did not need. The config files need to be stored in a config folder in either the root project directory or the app directory, and they should be named checkstyle.xml, pmd.xml and findbugs.xml.

These are the default settings, and you can obviously configure them by following the instructions on the project repo.

Checkstyle

For checkstyle, you can find the easy and hard configuration here

The basic principle is that if you need to add a check, you include a module like this:

<module name="NewlineAtEndOfFile" />

 

If you want to modify the default value of some property, you do it like this:

<module name="RegexpSingleline">
   <property name="format" value="\s+$" />
   <property name="minimum" value="0" />
   <property name="maximum" value="0" />
   <property name="message" value="Line has trailing spaces." />
   <property name="severity" value="info" />
</module>

 

And if you want to remove a check, you can ignore it like this:

<module name="EqualsHashCode">
   <property name="severity" value="ignore" />
</module>

 

It’s pretty straightforward and easy to configure.

Findbugs

For findbugs, you can find the easy and hard configuration here

Findbugs configuration exists in the form of filters where we list resources it should skip analyzing, like:

<Match>
   <Class name="~.*\.BuildConfig" />
</Match>

 

If we want to ignore a particular pattern, we can do so like this:

<!-- No need to force hashCode for simple models -->
<Match>
   <Bug pattern="HE_EQUALS_USE_HASHCODE" />
</Match>

 

Sometimes, you’d want to ignore a pattern only for certain files or fields. FindBugs supports regex to match such items:

<!-- Don't complain about rules in tests. -->
<Match>
   <Field name="~.*mockitoRule"/>
   <Bug pattern="URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD" />
</Match>

 

You can also annotate your code to suppress a warning in a particular class, method or field rather than disabling it for the whole project. For that, you need to add the FindBugs annotations dependency in the project:

compile 'com.google.code.findbugs:findbugs-annotations:3.0.1'

 

And then use it like this:

@SuppressFBWarnings(
   value = "ICAST_IDIV_CAST_TO_DOUBLE",
   justification = "We want granularity to be integer")
public void showChart(LineChart lineChart) {
   ...
}

 

It also allows setting the justification for suppressing the rule, for clarity.

PMD

For PMD, you can find the easy and hard configuration here

Like checkstyle, you have to first add a rule set to tell PMD which checks to perform:

<rule ref="rulesets/java/android.xml" />

 

If you want to modify the default value of the rule, you can do it like this:

<rule ref="rulesets/java/codesize.xml/TooManyMethods">
   <properties>
       <property name="maxmethods" value="15" />
   </properties>
</rule>

 

Or if you want to entirely exclude a rule, you can do it like this:

<rule ref="rulesets/java/basic.xml">
   <exclude name="OverrideBothEqualsAndHashcode" />
</rule>

 

PMD also supports suppressing warnings in the code itself using annotations. You don’t require any external libraries for it as it supports the built-in java.lang.SuppressWarnings annotation. You can use it like this:

@SuppressWarnings("PMD.AvoidInstantiatingObjectsInLoops") // Entries cannot be created outside loop
private LineDataSet setData(Map<String, Long> map, String label) throws ParseException {
   ...
}

 

As you can see, we need to prepend “PMD.” to the rule name so that there are no clashes during annotation processing. Remember to comment on the reason for suppressing the warning so that your co-developers know why, and can remove it in the future if the criteria no longer apply.

There is a lot more to learn about these static analyzers, which you can read about in their official documentation.


Can solving lint bugs be interesting?

Today I am going to present how we’ve turned the monotonous job of solving lint bugs into a motivating process.

PEP

Most developers need to improve their code quality. To do that, they can use a style guide, e.g. PEP 8 for Python code. (PEP stands for Python Enhancement Proposal; the PEP index contains all Python Enhancement Proposals.)

Below you can see the logs a PEP 8 check returned on the command line.

Do you think this presentation of the logs is good enough to interest a developer? Will they really solve these thousands of bugs?

Undoubtedly, there is a lot of information about errors and warnings, so the PEP 8 check returns long logs. But a developer may not even know how to start solving the bugs. And even after finally starting, he or she needs to run that script again after each commit to check whether the number of bugs has increased or decreased. It seems endless, exhausting and very monotonous. Nobody is encouraged to do it.
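For instance, one way to run such a style check from a Python script is with the pycodestyle package (formerly named pep8); this is only an illustrative sketch with hypothetical file paths, not the exact script we used:

import pycodestyle

# Check some files against PEP 8 and print each violation to stdout
style = pycodestyle.StyleGuide()
report = style.check_files(['app/main.py', 'app/utils.py'])
print('Total violations found:', report.total_errors)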

(Screenshot: style checker log output in the terminal)

Quality monitoring

The Open Event team wants to increase our productivity and code quality, so we use a tool which allows us to check code style, security, duplication, complexity and test coverage on every commit. That tool is Codacy, and it fulfils our requirements 100%. It is very helpful because it adds comments to pull requests and lets a developer quickly find where a bug is located. It’s very comfortable, because you don’t need to dig through the awful raw logs shown above. Take a look at how it looks in Codacy.

(Screenshot: Codacy comments on a pull request in the Open Event Orga Server repository)

Isn’t it clear? Of course it is. Codacy shows the line where an issue occurs and what type of issue it is.

Awesome statistics dashboard

I’d like to give an answer to how you can engage your team in solving issues and make this process more interesting. On the main page, the Codacy tool welcomes you with great statistics about your project.

(Screenshot: Codacy dashboard for the Open Event Orga Server project)

You can see the number of issues and their categories: code complexity, code style, compatibility, documentation, error prone, performance, security and unused code. These parameters show what stage of code quality your project is at. I think every developer’s aim is to have the highest code quality and to improve these statistics. But if a project has many issues, a developer sees only small changes in the project charts.

Define Goals

Recently I’ve discovered how you can motivate yourself more: you can define a goal which you’d like to achieve. It can be a goal for a category or a goal for a file. For example, the Open Event team has defined a goal to achieve for a specific file. If you define small, separate goals, you can see the results of your work more quickly.

(Screenshot: defining goals in Codacy for the Open Event Orga Server project)

On the left sidebar you can find an item named “Goals”. In this area you can easily add your project’s goals. Everything is user friendly, so you shouldn’t have a problem creating your own goals.


Configuring Codacy: Use Your Own Conventions


All developers agree on at least one thing: writing clean code is necessary. Because, as someone anonymous said, always write code as if the developer who comes after you is a homicidal maniac who knows your address. So, yeah, writing clean code is very important. Codacy helps in code review and code quality monitoring. You can set up Codacy in any of your GitHub projects. It automatically identifies new static analysis issues, code coverage, code duplication and code complexity evolution in every commit and pull request.
