Adding Logs for Request Status in Open Event Web app

Open Event Web app handles multiple requests from clients using a task queue: every client request is put in the job queue and handled one at a time. Previously the only log shown to the client was either ‘waiting’ or ‘processing’, so we needed to show additional logs, such as the request’s waiting number, as well. The logs are shown in real time using sockets.

How to add logs?

The logs of any request are shown to the client in real time using socket emit and listener events. Whenever any data is to be displayed inside the logs, the server emits an event with the data. The socket listens to the event and appends the data received to the logs section of the view.
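In essence, the flow looks like this (a minimal sketch assuming socket.io, which the generator uses for its socket connection; logData stands in for whatever entry is being logged):

// Server side: emit an event carrying the log entry
socket.emit('buildLog', logData);

// Client side: listen for the event and append the entry to the logs section
socket.on('buildLog', function(data) {
  $('#buildLog').append('<p>' + data.smallMessage + '</p>');
});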

Creating helper for emitting data

The helper named ‘logger’ emits the event ‘buildLog’ whenever its addLog function is called, with the data passed as arguments. Every message passed to the function is also appended to an array of objects holding the log data.

'use strict';

// eslint-disable-next-line no-var
var exports = module.exports = {};
const buildLog = [];
let obj = {};
let emit, largeMsg, message;

exports.addLog = function(type, smallMessage, socket, largeMessage) {
  // Fall back to the short message when no detailed message is supplied
  if (typeof largeMessage === 'undefined') {
    largeMsg = smallMessage;
  } else {
    largeMsg = largeMessage;
  }

  buildLog.push({'type': type, 'smallMessage': smallMessage, 'largeMessage': largeMsg});
  message = largeMsg.toString();
  obj = {'type': type, 'smallMessage': smallMessage, 'largeMessage': message};
  emit = false;

  // Only emit when an actual socket connection is attached to the request
  if (socket.constructor.name === 'Socket') {
    emit = true;
  }
  if (emit) {
    socket.emit('buildLog', obj);
  }
};
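For example, the later snippets in this post call the helper like this:

logger.addLog('Info', 'Request waiting number: ' + (currJobId - activeJob[0].id), socket);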

Updating logs in real time

The helper created above emits the ‘buildLog’ event; on receiving this event, the socket listener appends the data sent by the helper to the logs division of the view.

socket.on('buildLog', function(data) {
  const spanElem = $('<span></span>'); // will contain the info about type of statement
  const spanMess = $('<span></span>'); // will contain the actual message
  const aElem = $('<button></button>'); // button to view the detailed error log
  const divElem = $('<div></div>'); // will contain the detailed error log
  const paragraph = $('<p></p>'); // will contain the whole statement

  // Code for styling the logs division
  ....
  ....

  // (elided) inside the branch that handles error-type logs:
    divElem.text(data.largeMessage);
    paragraph.append(aElem);
    paragraph.append(divElem);
    updateStatusAnimate(data.smallMessage, 200, 'red');
    $('#btnGenerate').prop('disabled', false);
    $('input[type="radio"]').attr('disabled', false);
    $('#email').prop('disabled', false);
  }
  $('#buildLog').append(paragraph);
  $('#buildLog').scrollTop($('#buildLog')[0].scrollHeight);
});

Add request waiting number

Whenever a new request is received from the client, the server emits the ‘waiting’ event if another job is currently being processed. The helper above is then used to add the request’s waiting number to the logs.

const jobs = await queue.getJobs('waiting', {start: 0, end: 25});
const activeJob = await queue.getJobs('active', {start: 0, end: 25});
const jobIds = jobs.map((currJob) => currJob.id);

if (jobIds.indexOf(currJobId) !== -1) {
 socket.emit('waiting');
 logger.addLog('Info', 'Request waiting number: ' + (currJobId - activeJob[0].id), socket);
}

Add status in logs

On receiving the ‘waiting’ event, the status is updated to ‘waiting’ in the view and shown to the client.

socket.on('waiting', function() {
 updateStatusAnimate('Request status: Waiting');
});

Update request waiting number

Whenever a job from the queue starts being processed, the waiting number of every request still in the queue is updated. The socket connection for each request is obtained from the main socket object (socketObj), which is updated whenever a new request comes in from the client.

const jobs = new Promise(function(resolve) {
 resolve(queue.getJobs('waiting', {start: 0, end: 25}));
});

generator.createDistDir(job.data, socketObj[processId], done);
jobs.then(function(waitingJobs) {
 waitingJobs.forEach(function(waitingJob) {
   logger.addLog('Info', 'Request waiting number: ' + (waitingJob.id - job.id), socketObj[waitingJob.id]);
 });
});

 


Adding Code of Conduct in Open Event Web app

Open Event Server sends JSON data as the response of its REST (Representational State Transfer) API. The main eventyay platform allows organizers to add a code of conduct to their event, so the JSON data sent by the server contains a code of conduct key-value pair. This value is extracted from the data and used to create a separate Code of Conduct page for the event.

The steps for data extraction and compilation are as follows:

Extracting code of conduct

Open Event Server has two JSON data formats, v1 and v2, and both contain the code of conduct. The key for the code of conduct in v1 is code_of_conduct and in v2 it is code-of-conduct. The data extraction for the v1 data format occurs in fold_v1.js, where the main event details are stored in an object named urls as shown below:

fold_v1.js

const urls= {
 ....
 ....
 ....

 email: event.email,
 orgname: event.organizer_name,
 location_name: event.location_name,
 featuresection: featuresection,
 sponsorsection: sponsorsection,
 codeOfConduct: event.code_of_conduct
};

 

fold_v2.js

const urls= {
 ....
 ....
 ....

 email: event.email,
 orgname: event['organizer-name'],
 location_name: event['location-name'],
 featuresection: featuresection,
 sponsorsection: sponsorsection,
 codeOfConduct: event['code-of-conduct']
};

Adding template for CoC

Now that we have extracted the data and stored the value for the code of conduct in an object, we need to render it in a template. For this we created a template named CoC.hbs, in which the code of conduct is accessed via {{{eventurls.codeOfConduct}}} as shown below.

{{>navbar}}
<div class="main-coc-container container">
 <div class="row">
   <div class="middle col-sm-12">
     <h2 class="filter-heading track-heading text-center">
       <span>Code of Conduct</span>
     </h2>
   </div>
 </div>

 <div class="row">
   <div class="col-sm-12 col-md-12">
     <div class="coc">
       {{{eventurls.codeOfConduct}}}
     </div>
   </div>
 </div>
</div>
{{>footer}}

Compiling and minifying

Now that the event details are stored in an object, we copy this object as a key into jsonData. This data is passed as an argument when compiling the code of conduct template, CoC.hbs, into an HTML file, which is then minified. The gulp module is used for minification.

if(jsonData.eventurls.codeOfConduct) {
 setPageFlag('CoC');
 fs.writeFileSync(distHelper.distPath + '/' + appFolder + '/CoC.html', minifyHtml(codeOfConductTpl(jsonData)));
}
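Here codeOfConductTpl and minifyHtml are helpers defined elsewhere in the generator. A minimal sketch of what they might look like, assuming Handlebars for template compilation and the html-minifier package (the project’s actual minification pipeline runs through gulp):

const fs = require('fs');
const handlebars = require('handlebars');
const minify = require('html-minifier').minify;

// Compile the template once; calling codeOfConductTpl(jsonData) yields the HTML page
const codeOfConductTpl = handlebars.compile(fs.readFileSync('CoC.hbs', 'utf-8'));

// Shrink the generated markup before writing it to disk
function minifyHtml(html) {
  return minify(html, {collapseWhitespace: true});
}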

Adding link to CoC page

So far, we have successfully compiled an HTML page for the code of conduct of an event. This page is linked under a heading in the footer section of every page by placing a reference to it in footer.hbs.

{{#if eventurls.codeOfConduct}}
 <li><a target="_self" href="CoC.html">Code of Conduct</a></li>
{{/if}}

Customizing the CoC container

The code of conduct page is customized by placing the container in the center and aligning the text. Styling like background-color, padding and margin are set on the container to provide a better appearance to the page.

.coc {
 margin: auto;
 text-align: justify;
 width: 60%;

 a {
   &:hover {
     color: $dark-black;
   }
 }
}

.main-coc-container {
 background-color: $main-background;
 margin-bottom: 4%;
 margin-top: 2%;
 padding-bottom: 50px;
 padding-top: 2%;
}


Implementing job queue in Open Event Web app

Open Event Web app handles multiple requests by implementing a queue system in its generator. Every request received from the client is saved and stored in a queue backed by a Redis server. The jobs are then processed one at a time using FCFS (First Come First Serve) scheduling. Processing the requests one by one prevents the app from crashing and also prevents the loss of client requests.

Initialising job queue

The job queue is initialised with a name and the connection object of redis server as the arguments.

const redisClient =  require('redis').createClient(process.env.REDIS_URL);
const Queue = require('bee-queue');
const queue = new Queue('generator-queue', {redis: redisClient});

Handling jobs in queue

When a request for event generation is received, the client emits an event named ‘live’. The server listens for this event, creates a new job for the request and enqueues it in the job queue. Every request received is saved to ensure that no request is lost. The queue is then searched for jobs in the ‘waiting’ state; if the current job’s ID is among them, the socket emits the ‘waiting’ event.

socket.on('live', function(formData) {
 const req = {body: formData};
 const job = queue.createJob(req);

 job.on('succeeded', function() {
   console.log('completed job ' + job.id);
 });

 job.save(async function(err, currentJob) {
   if (err) {
     console.log('job failed to save');
   }
   // Keep a reference to the latest socket; the queue processor passes it to the generator
   emitter = socket;
   console.log('saved job ' + currentJob.id);
   const jobs = await queue.getJobs('waiting', {start: 0, end: 25});
   const jobIds = jobs.map((currJob) => currJob.id);

   if (jobIds.indexOf(currentJob.id) !== -1) {
     socket.emit('waiting');
   }
 });

});

Updating the status of request

When the socket emits the ‘waiting’ event, it signifies that some other job is currently being processed and that the status of the current request is ‘waiting’.

socket.on('waiting', function () {
 updateStatusAnimate('Request status: Waiting');
});

Processing the jobs

When the queue is in the ready state and no job is currently being processed, it starts processing the saved jobs. A job is not considered complete until it invokes the done callback. The generator starts generating the event when the processing of the request starts.

queue.on('ready', function() {
 queue.process(function(job, done) {
   console.log('processing job ' + job.id);
   generator.createDistDir(job.data, emitter, done);
 });
 console.log('processing jobs...');
});

 

The generator invokes the callback for the current job when event generation completes, or when it is halted midway due to an error. As soon as the current job completes, the next job in the queue starts being processed.

generator.createDistDir = function(req, socket, callback) {

  .....
  .....
  .....

  mailer.uploadAndsendMail(req.body.email, eventName, socket, (obj) => {
    if (obj.mail) {
      logger.addLog('Success', 'Mail sent successfully', socket);
    } else {
      logger.addLog('Error', 'Error sending mail', socket);
    }

    if (emit) {
      socket.emit('live.ready', {
        appDir: appFolder,
        url: obj.url
      });
      callback(null);
    } else {
      callback(appFolder);
    }

    done(null, 'write'); // 'done' comes from enclosing code elided above
  });
};


Parallelizing travis build in Open Event Web app

 

Open Event Web app uses Travis CI as the platform to perform unit testing. Travis CI is a hosted, distributed continuous integration service used to build and test projects hosted on GitHub. It automatically detects when a commit has been pushed to a repository that uses it and, each time this happens, tries to build the project and run the tests. A Travis build of the project took around 24 minutes for every commit, which is a very long time. To reduce the build time, we parallelized the build so that it makes maximum use of the available resources and runs the jobs in parallel, which resulted in better resource utilization as well as a significantly shorter build time.

Open Event Web app uses the Sauce Labs integration to perform Selenium tests and Travis to perform continuous integration.

Why parallelize the build?

When unit tests are independent of each other and can be executed with a common set of dependencies, they can be run in parallel on different virtual machines, yielding maximum throughput.

Running a large number of tests on a single machine can increase the build time considerably; it can be reduced significantly by running the tests in parallel on different machines. Open Event Web app had a build time of around 24 minutes, which was cut in half by parallelizing the build.

Parallelizing your builds across virtual machines

To speed up a test suite, you can break it up into several parts using Travis CI’s build matrix feature.

Say you want to split up your unit tests and your integration tests into two different build jobs. They’ll run in parallel and fully utilize the available build capacity and the resources.

The architecture of Open Event Web app includes a test suite covering all the pages of the generated application. To parallelize the build, the test suite is divided into different files with the directory structure shown below:

   
├── test
   ├── serverTest.js
   ├── roomsAndSpeakers.js
   ├── tracks.js
   ├── generatorAndSchedule.js
   ├── sessionAndEvent.js

 

The env key in .travis.yml is modified as shown below:

env:
  - TESTFOLDER=test/serverTest.js
  - TESTFOLDER=test/roomsAndSpeakers.js
  - TESTFOLDER=test/tracks.js
  - TESTFOLDER=test/generatorAndSchedule.js
  - TESTFOLDER=test/sessionAndEvent.js

 

The test script reads the environment variable and runs the corresponding test file, as shown below:

# installing required items for build
install:
 - npm install -g istanbul mocha@3
 - npm install
 - npm install --save-dev

# testing script
script:
 - istanbul cover _mocha -- $TESTFOLDER

# notify codecov and deploy to cloud
after_success:
  - if ([ "$TESTFOLDER" == "test/serverTest.js" ]); then
      bash <(curl -s https://codecov.io/bash);
      bash gh_deploy.sh && kubernetes/travis/deploy.sh;
    fi

Results:

The build time, which was earlier 23 minutes, was reduced to 12 minutes after parallelizing the build.


Enable web app generation for multiple API formats

Open Event Server has two API (Application Programming Interface) formats, one generated by the legacy server and the other by the server side of the decoupled development structure. The Open Event Web app supported only the new API format, so an error in reading the JSON contents was thrown for the old format. To support both API formats, so that a web app can be generated from either and there is no need to convert JSON files from v1 to v2, we added an option field to the generator where the client can choose the API version.

Excerpts showing the differences between the data formats of API v1 and v2

The following excerpt shows the function getCopyrightData in both versions, v1 and v2. The key for the licence details in v1 is ‘licence_details’ and in v2 it is ‘licence-details’. Similarly, the key for the copyright details in v1 is ‘copyright’ and in v2 it is ‘event-copyright’.

So the data is extracted from the JSON files depending on the API version the client has selected.

API V1

function getCopyrightData(event) {
 if(event.licence_details) {
   return convertLicenseToCopyright(event.licence_details, event.copyright);
 } else {
   return event.copyright;
 }
}

 

API V2

function getCopyrightData(event) {
 if(event['licence-details']) {
   return convertLicenseToCopyright(event['licence-details'], event['event-copyright']);
 } else {
   event['event-copyright'].logo = event['event-copyright']['logo-url'];
   return event['event-copyright'];
 }
}

 

Another example showing the difference between the API formats of v1 and v2 is excerpted below.

The following excerpt shows a constant urls containing the event URLs and details. Version v1 uses event_url as the key for the main page URL whereas v2 uses event-url for the same. A similar structural difference exists for the rest of the fields, where underscores have been replaced by hyphens, along with slight changes in the naming of keys such as start_time and end_time.

API v1

const urls= {
 main_page_url: event.event_url,
 logo_url: event.logo,
 background_url: event.background_image,
 background_path: event.background_image,
 description: event.description,
 location: event.location_name,
 orgname: event.organizer_name,
 location_name: event.location_name,
};

 

API v2

const urls= {
 main_page_url: event['event-url'],
 logo_url: event['logo-url'],
 background_url: event['original-image-url'],
 background_path: event['original-image-url'],
 location: event['location-name'],
 orgname: event['organizer-name'],
 location_name: event['location-name'],
};

How we enabled support for both API formats?

To add support for both API formats, we added an options field on the generator’s index page where the user chooses the API format for web app generation.

<label>Choose your API version</label>
<ul style="list-style-type:none">
 <li id="version1"><input name="apiVersion" type="radio" value="api_v1">   API_v1</li>
 <li id="version2"><input name="apiVersion" type="radio" value="api_v2"> API_v2</li>
</ul>

Depending on the version of the API format, the generator chooses the file in which data extraction from the input JSON files takes place. The files are named fold_v1.js and fold_v2.js, for extracting JSON v1 data and JSON v2 data respectively.

var type = req.body.apiVersion || 'api_v2';

if(type === 'api_v1') {
 fold = require(__dirname + '/fold_v1.js');
}
else {
 fold = require(__dirname + '/fold_v2.js');
}

 

The excerpts of code showing the difference between the v1 and v2 API formats above are the contents of the fold_v1.js and fold_v2.js files respectively.


Improving the JSON file upload structure – Open Event Web app

The Open Event Web app generator also allows the user to upload a JSON file with the event data instead of an API endpoint. The generator used the socket connection ID to uniquely identify uploaded files on the server, which worked well for a single socket connection but failed for multiple connections because overlapping connection IDs crashed the web app. The problem is fixed by giving every file uploaded to the server a unique ID and creating a separate field for the uploaded file ID in the request body.

How to add listener for file upload?

A listener for the socket’s ‘file’ event is added in app.js; it is triggered when the client emits the file event. The file ID is kept unique by maintaining a counter of the number of files uploaded to the server so far and incrementing it for every new file.

ss(socket).on('file', function(stream, file) {
  generator.startZipUpload(count, socket);
  console.log(file);
  filename = path.join(__dirname, '..', 'uploads/connection-' + count.toString()) + '/upload.zip';
  count += 1;
  stream.pipe(fs.createWriteStream(filename));
});
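On the client side, the upload that triggers this listener can be performed with socket.io-stream. A minimal sketch of the sending side, assuming file is a File object picked from an upload form (not shown in the original post):

const ss = require('socket.io-stream');

// Create a stream, announce it over the 'file' event and pipe the chosen file into it
const stream = ss.createStream();
ss(socket).emit('file', stream, {name: file.name, size: file.size});
ss.createBlobReadStream(file).pipe(stream);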

 

The function startZipUpload in generator.js is executed when the zip file upload starts; it calls helper functions that create the uploads directory on the server.

exports.startZipUpload = function(id, socket) {
 console.log('========================ZIP UPLOAD START\n\n');
 distHelper.makeUploadsDir(id, socket);
 distHelper.cleanUploads(id);
};

Creating uploads directory

The uploads directory is created in the root directory using the file system interfaces; the ID passed as a parameter ensures that the file names do not overlap.

makeUploadsDir: function(id, socket) {
 fs.mkdirpSync(uploadsPath + '/connection-' + id.toString());
 socket.emit('uploadsId', id);
}

Embedding uploads ID with the data

After the successful creation of the uploads directory and the file, the socket emits the ID of the uploaded file through the uploadsId event. The ID thus received is embedded in the data object along with the other entries of the form.

socket.on('uploadsId', function(data) {
 initialValue = data;
});

function getData(initValue) {
  const data = initValue;
  const formData = $('#form').serializeArray();

  formData.forEach(function(field) {
    if (field.name === 'email') {
      data.email = field.value;
    }
    ....
    ....
    ....
    ....

    if (field.name === 'apiVersion') {
      data.apiVersion = field.value;
    }
  });

  return data; // closing lines restored; the original excerpt was cut off here
}
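The object assembled by getData is what the client then sends along with the generation request, so the uploads ID travels in the request body. A hedged sketch of that final step, reusing the ‘live’ event from the job queue post:

// initialValue already carries the uploads ID received over 'uploadsId'
socket.emit('live', getData(initialValue));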


Open Event Web App – A PWA

Introduction

Progressive Web Apps (PWAs) are web applications that are regular web pages or websites but can appear to the user like traditional desktop or native mobile applications. The application type attempts to combine features offered by most modern browsers with the benefits of the mobile experience. Open Event Web app is a web application generator which has now introduced this feature in its generated applications.

 

Why Progressive Web Apps?

The reasons why we enabled this functionality are that PWAs are –

  • Reliable – Load instantly and never show the downasaur, even in uncertain network conditions.
  • Fast – Respond quickly to user interactions with silky smooth animations and no janky scrolling.
  • Engaging – Feel like a natural app on the device, with an immersive user experience.

Thus, since Open Event Web app generated applications are informative, require only a one-time load, and rely on the browser’s local storage for functionality like bookmarks, we found Progressive Web Apps perfect for explaining and demonstrating these applications as a whole.

How PWAs work?

The components associated with a progressive web application are:

Manifest: The web app manifest is a W3C specification defining a JSON-based manifest to provide developers a centralized place to put metadata associated with a web application.

Service Workers: Service Workers provide a scriptable network proxy in the web browser to manage the web/HTTP requests programmatically. The Service Workers lie between the network and device to supply the content. They are capable of using the cache mechanisms efficiently and allow error-free behavior during offline periods.

How we turned Open Event Web app into a PWA?

Adding manifest.json

{
  "icons": [
    {
      "src": "./images/logo.png",
      "type": "image/png",
      "sizes": "96x96"
    }
  ],
  "start_url": "index.html",
  "scope": ".",
  "display": "standalone",
  "orientation": "portrait-primary",
  "background_color": "#fff",
  "theme_color": "#3f51b5",
  "description": "Open Event Web Application Generator",
  "dir": "ltr",
  "lang": "en-US"
}
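For the manifest to take effect, the generated pages also need to reference it from their HTML head, typically with a tag like <link rel="manifest" href="manifest.json"> (the exact path in the generated app may differ).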

 

Adding service workers

Service workers are initialized by listening for the ‘install’ event:

// Name of the cache used by the generated app (the value here is assumed for this excerpt)
var CACHE_NAME = 'open-event-cache-v1';

var urlsToCache = [
  './css/bootstrap.min.css',
  './offline.html',
  './images/avatar.png'
];

self.addEventListener('install', function(event) {
  event.waitUntil(
    caches.open(CACHE_NAME).then(function(cache) {
      return cache.addAll(urlsToCache);
    })
  );
});

The service worker serves data from the cache when the ‘fetch’ event listener is triggered. On a cache hit, the cached response is returned directly; otherwise the worker requests the data from the network. When the network response does not have status code 200 (or is not a basic response), it is returned without caching; otherwise the worker caches a clone of the response. If the fetch fails entirely, an offline fallback page or placeholder image is served.

self.addEventListener('fetch', function(event) {
 event.respondWith(
   caches.match(event.request).then(function(response) {
     // Cache hit - return response
     if (response) {
       return response;
     }

     var fetchRequest = event.request.clone();

     return fetch(fetchRequest)
       .then(function(response) {
         if (
           !response ||
           response.status !== 200 ||
           response.type !== 'basic'
         ) {
           return response;
         }
         var responseToCache = response.clone();
         caches.open(CACHE_NAME).then(function(cache) {
           cache.put(event.request, responseToCache);
         });
         return response;
       })
       .catch(function(err) {
         if (event.request.headers.get('Accept').indexOf('text/html') !== -1) {
           return caches.match('./offline.html');
         } else if (event.request.headers.get('Accept').indexOf('image') !== -1) {
           return caches.match('./images/avatar.png');
         } else {
           console.log(err);
         }
       });
   })
 );
});

Service workers are activated through the ‘activate’ event listener, which here also deletes any outdated caches:

self.addEventListener('activate', function(event) {
 event.waitUntil(
   caches.keys().then(function(cacheNames) {
     return Promise.all(
       cacheNames.map(function(cacheName) {
         if (cacheName !== CACHE_NAME) {
           console.log('Deleting cache ' + cacheName);
           return caches.delete(cacheName);
         }
       })
     );
   })
 );
});
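The worker only starts controlling pages once it has been registered from the generated pages. A minimal registration sketch, assuming the worker is served as sw.js from the root of the generated app:

if ('serviceWorker' in navigator) {
  window.addEventListener('load', function() {
    navigator.serviceWorker.register('./sw.js')
      .then(function(registration) {
        console.log('Service worker registered with scope: ' + registration.scope);
      })
      .catch(function(err) {
        console.log('Service worker registration failed: ' + err);
      });
  });
}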

Adding service workers and manifest to the generator

Since we need to add the service worker and manifest to every web application generated through the app generator, we copy the files ‘sw.js’ and ‘manifest.json’ into the directory structure of each generated web app using the file system module, with the help of two helper functions, ‘copyServiceWorker’ and ‘copyManifestFile’, defined in ‘distHelper.js’.

 

distHelper.copyServiceWorker(appFolder, hashObj['hash'], function (err) {
 if (err) {
   console.log(err);
   logger.addLog('Error', 'Error occurred while copying service worker file', socket, err);
   return done(err);
 }
 return done(null);
});

distHelper.copyManifestFile(appFolder, eventName, function(err) {
 if (err) {
   console.log(err);
logger.addLog('Error', 'Error occurred while copying manifest file', socket, err);
   return done(err);
 }
 return done(null);
});

 

Further Improvements

Enabling push notifications for the bookmarked tracks and sessions: the user would be notified about upcoming sessions through notifications, the way native mobile applications do. A possible starting point is sketched below.
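A hypothetical starting point for such notifications, using the Notifications API through the registered service worker (the generator does not ship this yet, and real push notifications would additionally need a push service subscription):

// Hypothetical sketch: ask for permission, then surface a reminder
// for a bookmarked session through the service worker registration.
Notification.requestPermission().then(function(permission) {
  if (permission === 'granted') {
    navigator.serviceWorker.ready.then(function(registration) {
      registration.showNotification('Bookmarked session starting soon', {
        body: 'Your bookmarked session starts in 10 minutes', // example text
        icon: './images/logo.png' // icon path reused from the manifest above
      });
    });
  }
});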


Unit Tests for REST-API in Python Web Application

Badgeyay’s backend has now shifted to a REST API, and to test the functions used in the API we need a testing technology that exercises each and every one of them. For our purposes, we chose unittest, the popular Python test suite.

In this blog, I’ll be discussing how I have written unit tests to test the Badgeyay REST API.

First, let’s understand what unittest is and why we have chosen it. Then we will move on to writing API tests for Badgeyay. These tests have a generic structure, and thus the code I mention would work in other REST API testing scenarios, often with little to no modification.

Let’s get started and understand API testing step by step.

What is unittest?

unittest is a Python unit testing framework which supports test automation, sharing of setup and shutdown code for tests, aggregation of tests into collections, and independence of the tests from the reporting framework. The unittest module provides classes that make it easy to support these qualities for a set of tests.

Why unittest?

We get two primary benefits from unit testing, with a majority of the value going to the first:

  • Guides your design to be loosely coupled and well fleshed out. If doing test driven development, it limits the code you write to only what is needed and helps you to evolve that code in small steps.
  • Provides fast automated regression for re-factors and small changes to the code.
  • Unit testing also gives you living documentation about how small pieces of the system work.

We should always strive to write comprehensive tests that cover the working code pretty well.

Now, here is a glimpse of how I wrote unit tests for testing code in the REST API backend of Badgeyay. Using the unittest package and the requests module, we can test a REST API in test automation.

Below is the code snippet for which I have written unit tests in one of my pull requests.

def output(response_type, message, download_link):
    if download_link == '':
        response = [
            {
                'type': response_type,
                'message': message
            }
        ]
    else:
        response = [
            {
                'type': response_type,
                'message': message,
                'download_link': download_link
            }
        ]
    return jsonify({'response': response})

 

To test this function, I created a mock object which simulates the behavior of real objects in a controlled way. In this case the mock object simulates the behavior of the output function and returns something like a JSON response without hitting the real REST API. The next challenge is to parse the JSON response and feed specific values from it to the Python automation script. Python reads JSON as a dictionary object, which really simplifies the way JSON is parsed and used.

And here’s the content of the backend/tests/test_basic.py file.

#!/usr/bin/env python3
"""Tests for Basic Functions"""
import sys
import json
import unittest

sys.path.append("../..")
from app.main import *


class TestFunctions(unittest.TestCase):
    """Test case for the client methods."""

    def setUp(self):
        app.app.config['TESTING'] = True
        self.app = app.app.test_client()

    # Test of the output function
    def test_output(self):
        with app.test_request_context():
            # Call the function under test
            out = output('error', 'Test Error', 'local_host')
            # Expected JSON payload
            response = [
                {
                    'type': 'error',
                    'message': 'Test Error',
                    'download_link': 'local_host'
                }
            ]
            data = json.loads(out.get_data(as_text=True))
            # Assert response
            self.assertEqual(data['response'], response)


if __name__ == '__main__':
    unittest.main()

 

And finally, we can verify that everything works by running nosetests.

This is how I wrote unit tests in the BadgeYaY repository. You can find more of my work here.

Resources:

  • The Purpose of Unit Testing – Link
  • Unit testing framework – Link

Parallelizing Builds In Travis CI

The Badgeyay project is now divided into two parts, i.e. a front-end in Ember.js and a back-end REST API programmed in Python. One challenging job is that the CI should support this decoupled architecture: it should run tests for the front-end and the back-end, i.e. two different languages, on isolated instances, by making use of isolated parallel builds.

In this blog, I’ll be discussing how I have configured Travis CI to run the tests in parallel in isolated builds for Badgeyay in my Pull Request.

First let’s understand what a parallel Travis CI build is and why we need it. Then we will move on to configuring the .travis.yml file to run tests in parallel. Let’s get started and understand it step by step.

Why Parallel Travis CI Build?

Integration test suites tend to test more complex situations through the whole stack, which incorporates the front-end and back-end; they likewise tend to be the slowest part, requiring several minutes to run, sometimes even up to 30 minutes. To accelerate a test suite like that, we can split it up into several parts using Travis’s build matrix feature. Travis will determine the build matrix based on environment variables and schedule two builds to run.

Now our objective is clear: we have to configure .travis.yml to build in parallel. Our project requires two buildpacks, Python and node_js, and running the build jobs for both of them in parallel speeds things up by a considerable amount. It is possible to run several languages in one .travis.yml file using the matrix:include feature.

Below is the code snippet of the .travis.yml file for the Badgeyay project that runs the build jobs in parallel:

sudo: required
dist: trusty

# Check different combinations of build flags, dividing the build into "jobs".
matrix:

  # Helps to run different languages in one .travis.yml file
  include:

  # First job: the Python backend.
  - language: python
    python:
      - 3.5

    addons:
      apt:
        packages:
          - python-dev

    cache:
      directories:
        - $HOME/backend/.pip-cache/

    before_install:
      - sudo apt-get -qq update
      - sudo apt-get -y install python3-pip
      - sudo apt-get install python-virtualenv

    install:
      - virtualenv -p python3 ../flask_env
      - source ../flask_env/bin/activate
      - pip3 install -r backend/requirements/test.txt --cache-dir

    before_script:
      - export DISPLAY=:99.0
      - sh -e /etc/init.d/xvfb start
      - sleep 3

    script:
      - python backend/app/main.py > /dev/null &
      - py.test --cov ../ ./backend/app/tests/test_api.py

    after_success:
      - bash <(curl -s https://codecov.io/bash)

  # Second job: the Node.js frontend.
  - language: node_js
    node_js:
      - "6"

    addons:
      chrome: stable

    cache:
      directories:
        - $HOME/frontend/.npm

    env:
      global:
        # See https://git.io/vdao3 for details.
        - JOBS=1

    before_install:
      - cd frontend
      - npm install
      - npm install -g ember-cli
      - npm i [email protected] --save-dev
      - npm config set spin false

    script:
      - npm run lint:js
      - npm test

 

Now that we have added .travis.yml and pushed it to the project repo, here is the screenshot of Travis CI passing after the parallel build jobs.

The related PR of this work is https://github.com/fossasia/badgeyay/pull/512

Resources:

Travis CI documentation – Link


Deploying BadgeYaY with Docker on Docker Cloud

We already had a Dockerfile present in the repository, but there were problems in many lines of code. I studied Docker, learned how deployment works, and will now explain how I deployed BadgeYaY on Docker Cloud.

To make deploying Badgeyay easier, we now support a Docker based installation.

Before we start to deploy, let’s have a quick brief about what Docker is and how it works.

What is Docker?

Docker is an open-source technology that allows you to create, deploy, and run applications using containers. It allows you to deploy technologies with many underlying components that must be installed and configured in a single, containerized instance. Docker makes it easier to create and deploy applications in an isolated environment.

Now, let’s start with how to deploy on Docker Cloud:

Step 1 – Installing Docker

Get the latest version of Docker. See the official site for installation info for your platform.

Step 2 – Create Dockerfile

With Docker, we can just grab a portable Python runtime as an image, no installation necessary. Then our build can include the base Python image right alongside our app code, ensuring that our app, its dependencies, and the runtime all travel together.

These portable images are defined by something called a Dockerfile.

A Dockerfile contains all the commands a user could call on the command line to assemble an image. Here is the Dockerfile of BadgeYaY:

# The FROM instruction initializes a new build stage and sets the Base Image for subsequent instructions.
FROM python:3.6

# We copy just the requirements.txt first to leverage Docker cache
COPY ./app/requirements.txt /app/


# The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
WORKDIR /app


# The RUN instruction will execute any commands in a new layer on top of the current image and commit the results.
RUN pip install -r requirements.txt


# The COPY instruction copies new files.
COPY . /app


# An ENTRYPOINT allows you to configure a container that will run as an executable.
ENTRYPOINT [ "python" ]

# The main purpose of a CMD is to provide defaults for an executing container.
CMD [ "main.py" ]

 

Step 3 – Build New Docker Image

sudo docker build -t badgeyay:latest .

 

When the command completes successfully, we can check the new image with the docker command below:

sudo docker images

 

Step 4 – Run the app

Let’s run the app in the background, in detached mode:

 sudo docker run -d -p 5000:5000 badgeyay

 

We get the long container ID for our app and are then returned to the terminal. Our container is running in the background. Now use docker container stop with the CONTAINER ID to end the process, like so:

 

docker container stop 1fa4ab2cf395

 

Step 5 – Publish the app.

Log in to the Docker public registry on your local machine.

docker login

 

Upload your tagged image to the repository:

docker push username/repository:tag

 

From now on, we can use docker run to run our app on any machine. No matter where docker run executes, it pulls the image, along with Python and all the dependencies from requirements.txt, and runs the code. Everything travels together in a neat little package, and the host machine doesn’t have to install anything but Docker to run it.

Docker Cloud

Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying (and redeploying) services created from images.

In BadgeYaY, we also have a Deploy button which deploys to Docker Cloud directly with a single click.

The related PR of this work is https://github.com/fossasia/badgeyay/pull/401 .

Resources:

  • Docker documentation: Link
  • Get Started With Docker: Link