Dependency managers are software modules that coordinate the integration of external libraries or packages into a larger application stack. Dependency managers use configuration files like composer.json, package.json, build.gradle or pom.xml to determine: which dependency to get, which particular version of the dependency, and which repository to get it from. Currently SUSPER has only NPM as a dependency manager, which is used to install all dependencies. In this blog, I will describe how we have added Facebook's Yarn as a new dependency manager in Susper.
Let's check out Yarn in detail:
Yarn is a fast and reliable alternative to NPM. One of the great advantages of Yarn is that while remaining compatible with the npm registry, it replaces the workflow of the npm client and other package managers. Yarn was created by Facebook to solve particular problems faced while using NPM: it was developed to deal with inconsistent dependency installation while scaling and to increase speed.
What are the advantages of using Yarn?
Improving network performance: Queuing up requests and avoiding request waterfalls helps to maximize network utilization.
Checks package integrity: Package integrity is checked after each install to avoid installing corrupt packages.
Caching: Yarn can install dependencies without an internet connection if they have been previously installed on the system. This is done by caching.
Lock File: Lock files are used to make sure that the node_modules directory has the exact same structure on all development environments.
Note: Ubuntu 17.04 comes with cmdtest installed by default. If you get errors while installing Yarn, remove cmdtest first with sudo apt remove cmdtest. Refer to this for more information.
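For reference, the Debian/Ubuntu installation steps from Yarn's official documentation add Yarn's apt repository and signing key, then install the package:

```shell
# Add Yarn's signing key and apt repository (official upstream instructions)
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list

# Install Yarn (this also pulls in Node.js unless it is already installed)
sudo apt-get update && sudo apt-get install yarn
```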
If you are using nvm, you can avoid the Node.js installation by running:
sudo apt-get install --no-install-recommends yarn
Test that Yarn is installed by running:
yarn --version
Now delete the node_modules folder so that all dependencies installed by npm are removed.
Now run the yarn command in the project's repository:
yarn
Wait while the dependencies are installed, and then we are done.
What is happening ?
Yarn has created a lock file, yarn.lock. The file is updated after each operation (installing, updating or removing packages) to keep track of exact package versions. If it is kept in our Git repository, the exact same node_modules result is made available on all systems.
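For illustration, a yarn.lock entry records the exact resolved version for a semver range. The excerpt below uses a made-up package purely as an example; it is not taken from SUSPER's actual lock file:

```
# yarn lockfile v1

left-pad@^1.1.0:
  version "1.3.0"
  resolved "https://registry.yarnpkg.com/left-pad/-/left-pad-1.3.0.tgz#..."
```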
Badgeyay project is divided into two parts i.e., front-end with Ember JS and back-end with REST-API programmed in Python.
Badgeyay has many features related to enhancing badge generation. It gives the choice of uploading data entries, i.e., by CSV or manually. There are options available for choosing the badge background and font specifications. But there is an important feature missing that would make the service more user-friendly for creating badges for different types of events, i.e., Badge Size.
The Badge Size feature is implemented in the backend. I need to send the data to the backend in the desired format for creating badges of different sizes.
In this blog, I will be discussing how I implemented the Badge Size feature in the Badgeyay frontend in my Pull Request.
Let’s get started and understand it step by step.
Step 1:
Create Badge Size component with Ember CLI.
$ ember g component badge-component/badge-size
Step 2:
Write the HTML required in the badge-size component:
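The component's actual markup is in the Pull Request; as a hedged sketch (the action name and the size values below are illustrative placeholders, not Badgeyay's actual code), a size selector in the badge-size component could look like:

```handlebars
{{! templates/components/badge-component/badge-size.hbs — hypothetical sketch }}
<div class="field">
  <label>Badge Size</label>
  <select class="ui dropdown" onchange={{action "selectBadgeSize" value="target.value"}}>
    <option value="A3">A3</option>
    <option value="A4">A4</option>
  </select>
</div>
```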
A vulnerability is a problem in a project's code that could be exploited to damage the confidentiality, integrity, or availability of the project or other projects that use its code. Depending on the severity level and the way your project uses the dependency, vulnerabilities can cause a range of problems for your project or the people who use it. GitHub tracks public vulnerabilities in Ruby gems and NPM packages on MITRE's Common Vulnerabilities and Exposures (CVE) List.
What were the vulnerabilities in SUSPER ?
SUSPER had a vulnerability in Gemfile.lock. Gemfile.lock makes our application a single package of both your own code and the third-party code it ran the last time you know for sure that everything worked. Specifying exact versions of the third-party code you depend on in your Gemfile would not provide the same guarantee, because gems usually declare a range of versions for their dependencies.
What were the vulnerable dependencies in Gemfile.lock ?
Two dependencies, namely Nokogiri and Yajl-Ruby, had security vulnerabilities.
Nokogiri is an HTML, XML, SAX, and Reader parser. Among Nokogiri's many features is the ability to search documents via XPath or CSS3 selectors. The Yajl-Ruby gem is a C binding to the excellent YAJL JSON parsing and generation library. Older versions of both dependencies had security vulnerabilities.
Security alerts for a vulnerable dependency in our repository include a severity level and a link to the affected file in our project. When available, the alerts also include a link to the CVE record and a suggested fix.
What was the suggested fix ?
One way to fix this problem was to update the vulnerable dependencies to latest versions.
The versions of Nokogiri and Yajl-Ruby used in SUSPER were:
Nokogiri (~>1.5)
Yajl-Ruby (1.1.0)
What are the best ways to update dependencies without breaking the project ?
The best way to update a dependency is to check where it is used in the project and what breaking changes are introduced with the newer version.
How were the vulnerable dependencies updated ?
Firstly, we updated Bundler, the tool we use to update our gems in Gemfile.lock, from version 1.13.6 to 1.16.0.
We then updated the Nokogiri dependency and other sub-dependencies using bundle update nokogiri, i.e.:
mini_portile2 (2.1.0) -> mini_portile2 (2.3.0)
nokogiri (1.6.8.1) -> nokogiri (1.8.2)
Then we checked the project for integrity, and the project was working well.
We then tried to update Yajl-Ruby, but there was a problem in updating it. We later found that Yajl-Ruby had been replaced by many other dependencies. We therefore updated the whole Gemfile.lock. The following are two simple steps to update Gemfile.lock:
bundle update
bundle install
We then checked whether the new dependencies broke the current project and found that there were no breaking changes in the updated dependencies.
Security alerts for vulnerable dependencies list the affected dependency and, in some cases, use machine learning to suggest a fix from the GitHub community. By default, we receive a weekly email summarizing security alerts for up to 10 of our repositories. We can choose to receive security alerts individually by email, in a daily digest email, in our web notifications, or in the GitHub user interface.
Badgeyay project is divided into two parts i.e., front-end with Ember JS and back-end with REST-API programmed in Python.
The Badgeyay frontend has many features like Login and Sign up, Login with OAuth and, most importantly, badge generation is also up and running. But the important thing from the user's perspective is to get notified of all the actions performed in the application, so that the user can easily proceed after performing a specific action.
In this blog, I will be discussing how I integrated ember-notify in the Badgeyay frontend, in my Pull Request, to notify the user about performed actions.
Ember-notify displays a little notification message at the bottom of our application.
Let’s get started and understand it step by step.
Step 1:
This module is an ember-cli addon, so installation is easy:
npm install ember-notify --save-dev
Step 2:
Inject the notify service in the controller of the template. Here, I will show how I added it for the Log In and Log Out messages; you can check the whole code in my Pull Request for the other controllers as well.
// controllers/login.js
import Ember from 'ember';
import Controller from '@ember/controller';
const { inject } = Ember;
export default Controller.extend({
  session : inject.service(),
  notify : inject.service('notify'),
  ..........
        _this.transitionToRoute('/');
        _this.get('notify').success('Log In Successful');
      }).catch(function(err) {
        console.log(err.message);
        _this.get('notify').error('Log In Failed ! Please try again');
      });
  ..........
// controllers/logout.js
import Ember from 'ember';
import Controller from '@ember/controller';
const { inject } = Ember;
export default Controller.extend({
  session : inject.service(),
  notify : inject.service('notify'),
  beforeModel() {
    return this.get('session').fetch().catch(function() {});
  },
  actions: {
    logOut() {
      this.get('session').close();
      this.transitionToRoute('/');
      this.get('notify').warning('Log Out Successful');
    }
  }
});
I have implemented ember-notify for the Log In and Log Out features, and in a similar way I have implemented it for the other controllers; the complete code can be seen in my Pull Request.
Step 3:
Now run the server with the following command to see the implemented changes.
$ ember serve
Navigate to localhost and perform login and logout actions to see the changes.
Successful Log In
Successful Log out
Successful CSV Upload
Now, we are done with the integration of ember-notify in the Badgeyay frontend to notify the user about actions performed in the application.
Badgeyay project is divided into two parts i.e., front-end with Ember JS and back-end with REST-API programmed in Python.
We already have the Log In feature implemented with the help of Firebase Authentication. A user can log in to Badgeyay with Google, Facebook or Twitter credentials through a single click. Now, the challenging part is to implement the Sign up with Email feature in the frontend and backend, to enable the user to sign up and log in with an email and password.
In this blog, I will be discussing how I set up the Sign up feature in the Badgeyay frontend, in my Pull Request, to send the data to the backend, besides the OAuth login features already integrated with Firebase.
The sign up form is already implemented, as I mentioned in my previous blog. So we need to send the form data to the backend to register the user, so that the user can log in using the registered credentials. We need an adapter, a signUp action, a controller, a Signup data model and a serializer for this task.
Let’s get started and understand the terminologies before implementing the feature.
What is Ember Data ?
It is a data management library for the Ember framework which helps to deal with persistent application data.
We will generate an Ember Data model using Ember CLI, in which we will define the data structure we need to provide to our application for User Signup.
We already have the signup form implemented in the frontend. Now we need to attach an action to the form for when the user enters data.
If we add the {{action}} helper to any HTML DOM element, when a user clicks the element, the named event will be sent to the template's corresponding component or controller.
We need to add signUp action in sign-up component and controller.
// Signup Controller
import Controller from '@ember/controller';
import { inject as service } from '@ember/service';
export default Controller.extend({
  routing : service('-routing'),
  actions : {
    signUp(email, username, password) {
      const _this = this;
      let user_ = this.get('store').createRecord('user-signup', {
        email,
        username,
        password
      });
      user_.save()
        .then(record => {
          _this.transitionToRoute('/');
        })
        .catch(err => {
          console.log(err);
        });
    }
  }
});
// Sign up Component
import Component from '@ember/component';
export default Component.extend({
  init() {
    this._super(...arguments);
  },
  email : '',
  password : '',
  isLoading : false,
  actions: {
    signUp(event) {
      event.preventDefault();
      let email = this.get('email');
      let password = this.get('password');
      let username = this.get('username');
      this.get('signUp')(email, username, password);
    }
  },
});
What is an Adapter ?
An adapter determines how the data is persisted to a backend data store. We can configure the backend host, URL format and headers for REST API.
Now, as we have a specific data model for User Signup that we will use to communicate with the backend, we have to create a User-Signup adapter with the help of Ember CLI.
Step 1: Generate the User Signup adapter with the following command:
$ ember generate adapter user-signup
Step 2: Extend the Adapter according to User-Signup Model.
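As a sketch of what such an extension can look like (the host and namespace values here are placeholders, not Badgeyay's actual configuration):

```javascript
// adapters/user-signup.js — hypothetical sketch
import DS from 'ember-data';

export default DS.JSONAPIAdapter.extend({
  host      : 'http://localhost:5000', // assumed backend host
  namespace : 'api'                    // assumed URL prefix for the REST API
});
```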
What is a Serializer ?
Serializers format the data sent to and received from the backend store. By default, Ember Data serializes data using the JSON API format.
Now, as we have a specific data model for User Signup that we will use to communicate with the backend, we have to create a User-Signup serializer with the help of Ember CLI.
Step 1: Generate the User Signup serializer with the following command:
$ ember generate serializer user-signup
Step 2: Extend the serializer according to User-Signup Model.
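Similarly, a sketch of the serializer extension (the customization shown is only an assumption for illustration; JSONAPISerializer already targets JSON API v1 by default):

```javascript
// serializers/user-signup.js — hypothetical sketch
import DS from 'ember-data';

export default DS.JSONAPISerializer.extend({
  // Keep attribute names as-is instead of Ember Data's default dasherizing,
  // assuming the backend expects the keys unchanged.
  keyForAttribute(key) {
    return key;
  }
});
```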
We have successfully set up User Signup in the frontend, and data is communicated to the backend in the JSON API v1 specification with the help of serializers and adapters.
This is how I set up the Sign up feature in the Badgeyay frontend, in my Pull Request, to send the data to the backend, besides the OAuth login features integrated with Firebase.
Badgeyay project is now divided into two parts i.e., front-end with Ember JS and back-end with REST-API programmed in Python.
After a discussion, we finalized going with the Semantic UI framework, which uses simple, common language for parts of interface elements and familiar patterns found in natural languages for describing elements. Semantic allows us to build beautiful websites fast, with concise HTML, intuitive JavaScript and simplified debugging, helping make front-end development a delightful experience. Semantic is responsively designed, allowing a web application to scale on multiple devices. Semantic is production ready and partnered with the Ember framework, which means we can integrate it with Ember to organize our UI layer alongside our application logic.
In this blog, I will be discussing how I added Log In and Signup Forms and their validations using Semantic UI for badgeyay frontend in my Pull Request.
Let’s get started and understand it step by step.
Step 1:
Generate the Ember components for Login and Sign up by using the following commands:
$ ember generate component forms/login-form
$ ember generate component forms/signup-form
Step 2:
Generate the Login and Sign up routes with the following commands.
$ ember generate route login
$ ember generate route signup
Step 3:
Generate the Login and Sign up controllers with the following commands.
$ ember generate controller login
$ ember generate controller signup
Step 4:
Now we have set up the components, routes, and controllers for adding the Login and Sign up forms. Let's start writing the HTML in Handlebars and implementing validations for the form components. In this blog, I will be sharing the code of the Login form and the actions related to logging in a user. You can check the whole code in my Pull Request which I have made for adding these forms.
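The full Handlebars and validation code lives in the Pull Request; as a minimal sketch (the field names and the action name are assumptions, not the exact Badgeyay markup), a Semantic UI login form looks like:

```handlebars
{{! templates/components/forms/login-form.hbs — hypothetical sketch }}
<form class="ui form" {{action "logIn" on="submit"}}>
  <div class="field">
    <label>Email</label>
    {{input type="email" value=email placeholder="Email"}}
  </div>
  <div class="field">
    <label>Password</label>
    {{input type="password" value=password placeholder="Password"}}
  </div>
  <button class="ui primary button" type="submit">Log In</button>
</form>
```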
Badgeyay project is divided into two parts i.e., front-end with Ember JS and back-end with REST-API programmed in Python.
We have integrated PostgreSQL as the object-relational database in Badgeyay, and we are using the SQLAlchemy SQL toolkit and Object Relational Mapper for working with databases from Python. As we use the Flask microframework for Python, we have Flask-SQLAlchemy, an extension for Flask that adds support for SQLAlchemy to work with the ORM.
One of the challenging jobs is to manage the changes we make to the models and propagate these changes to the database. For this purpose, I have added migrations to Flask-SQLAlchemy for handling database changes using the Flask-Migrate extension.
In this blog, I will be discussing how I added Migrations to Flask SQLAlchemy for handling Database changes using the Flask-Migrate extension in my Pull Request.
First, Let’s understand Database Models, Migrations, and Flask Migrate extension. Then we will move onto adding migrations using Flask-Migrate. Let’s get started and understand it step by step.
What are Database Models?
A database model defines the logical design and structure of a database, which includes the relationships and constraints that determine how data can be stored and accessed. Presently, we have User and File models in the project.
What are Migrations?
Database migration is a process which usually includes assessment and database schema conversion. Migrations enable us to track modifications we make to the models and propagate these adjustments to the database. For example, if we later make a change to a field in one of the models, all we need to do is create and run a migration, and the database will replicate the change.
What is Flask Migrate?
Flask-Migrate is an extension that handles SQLAlchemy database migrations for Flask applications using Alembic. The database operations are made available through the Flask command-line interface or through the Flask-Script extension.
Now let’s add support for migration in Badgeyay.
Step 1 :
pip install flask-migrate
Step 2 :
We will need to edit run.py so that it looks like this:
import os
from flask import Flask
from flask_migrate import Migrate  # Imported Flask-Migrate
from api.db import db
from api.config import config
......
db.init_app(app)
migrate = Migrate(app, db)  # It will allow us to run migrations
......
@app.before_first_request
def create_tables():
    db.create_all()

if __name__ == '__main__':
    app.run()
Step 3 :
Creation of Migration Directory.
export FLASK_APP=run.py
flask db init
This will create Migration Directory in the backend API folder.
Step 4 :
We will do our first migration with the following command.
flask db migrate
Step 5 :
We will apply the migrations by the following command.
flask db upgrade
Now we are all done with setting up Migrations to Flask SQLAlchemy for handling database changes in the badgeyay repository. We can verify the Migration by checking the database tables in the Database.
This is how I have added Migrations to Flask SQLAlchemy for handling Database changes using the Flask-Migrate extension in my Pull Request.
Badgeyay project is now divided into two parts i.e., front-end with Ember JS and back-end with REST-API programmed in Python. One of the challenging jobs is that it should support the uncoupled architecture. Now, we have to integrate the Heroku-deployed API with GitHub so that every Pull Request made to the Development branch is auto-deployed, easing the Pull Request review process.
In this blog, I’ll be discussing how I have configured Heroku Pipeline to auto deploy every Pull request made to the Development Branch and help in easing the Pull Request review process in Badgeyay in my Pull Request.
First, Let’s understand Heroku Pipeline and its features. Then we will move onto configuring the Pipeline file to run auto deploy PR.. Let’s get started and understand it step by step.
What is Heroku Pipeline ?
A pipeline is a group of Heroku apps that share the same codebase. Each app in a pipeline represents one of the following steps in a continuous delivery workflow:
Review
Development
Staging
Production
A common Heroku continuous delivery workflow has the following steps:
A developer creates a pull request to make a change to the codebase.
Heroku automatically creates a review app for the pull request, allowing developers to test the change.
When the change is ready, it's merged into the codebase's default branch.
Now, I have fulfilled all the prerequisites needed for integrating the GitHub repository with the Heroku-deployed Badgeyay API. Let's move to the Heroku dashboard of the Badgeyay API and implement auto deployment of every Pull Request.
Step 1 :
Open the Heroku-deployed app on the dashboard. You will see the following tabs at the top of the dashboard.
Step 2 :
Click on Deploy and first create a new pipeline by giving a name to it and choose a stage for the pipeline.
Step 3 :
Choose a Deployment Method. For the badgeyay project, I have integrated Github for auto deployment of PR.
Select the repository and connect with it.
You will receive a pop-up which will ensure that repository is connected to Heroku.
Step 4 : Enable automatic deploys for the Github repository.
Step 5 :
Now, after adding the pipeline, the present app gets nested under the pipeline. Click on the pipeline name at the top, and now we have a pipeline dashboard like this:
Step 6:
Now, for auto deployment of PRs, enable Review Apps by filling in the required information like this:
Step 7:
Verify by creating a test PR after following all the above-mentioned steps.
Now we are all done with setting up auto deployment of every pull request to badgeyay repository.
This is how I have configured Heroku Pipeline to auto deploy every Pull request made to the Development Branch and help in easing the Pull Request review process.
About Author :
I have been contributing to the open source organization FOSSASIA, where I'm working on a project called BadgeYaY. It is a badge generator with a simple web UI to add data and generate printable badges in PDF.
The Badgeyay backend has now shifted to a REST-API, and to test the functions used in the REST-API, we need a testing framework that will test each and every function used in the API. For our purposes, we chose the popular Python unittest framework.
In this blog, I’ll be discussing how I have written unit tests to test Badgeyay REST-API.
First, let’s understand what is unittests and why we have chosen it. Then we will move onto writing API tests for Badgeyay. These tests have a generic structure and thus the code I mention would work in other REST API testing scenarios, often with little to no modifications.
Let’s get started and understand API testing step by step.
What is unittest?
unittest is a Python unit testing framework which supports test automation, sharing of setup and shutdown code for tests, aggregation of tests into collections, and independence of the tests from the reporting framework. The unittest module provides classes that make it easy to support these qualities for a set of tests.
Why unittest?
We get two primary benefits from unit testing, with a majority of the value going to the first:
Guides your design to be loosely coupled and well fleshed out. If doing test driven development, it limits the code you write to only what is needed and helps you to evolve that code in small steps.
Provides fast automated regression for re-factors and small changes to the code.
Unit testing also gives you living documentation about how small pieces of the system work.
We should always strive to write comprehensive tests that cover the working code pretty well.
Now, here is a glimpse of how I wrote unit tests for testing code in the REST-API backend of Badgeyay. Using the unittest package and the requests module, we can test the REST API in test automation.
Below is the code snippet for which I have written unit tests in one of my pull requests.
To test this function, I basically created a mock object which can simulate the behavior of real objects in a controlled way; in this case, the mock object simulates the behavior of the output function and returns something like a JSON response without hitting the real REST API. The next challenge is to parse the JSON response and feed specific values from it to the Python automation script. Python reads the JSON as a dictionary object, which really simplifies the way the JSON needs to be parsed and used.
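The pattern above can be sketched as follows. This is a minimal, self-contained illustration: the endpoint path, the function under test and the response shape are hypothetical placeholders, not Badgeyay's actual API.

```python
import json
import unittest
from unittest import mock

def get_user_role(fetch):
    """Call the (hypothetical) API endpoint and return the 'role' field."""
    response = fetch('/api/user')   # 'fetch' stands in for a real HTTP call
    data = json.loads(response)     # Python reads the JSON as a dict
    return data['role']

class TestGetUserRole(unittest.TestCase):
    def test_role_parsed_from_mocked_response(self):
        # The mock returns a canned JSON string, so no real request is made.
        fake_fetch = mock.Mock(return_value='{"role": "admin"}')
        self.assertEqual(get_user_role(fake_fetch), 'admin')
        fake_fetch.assert_called_once_with('/api/user')
```

Run it with python -m unittest; the same mock-then-parse pattern extends to asserting on larger JSON responses field by field.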
Badgeyay project is now divided into two parts i.e., front-end with Ember JS and back-end with REST-API programmed in Python. Now, one of the challenging jobs is that it should support the uncoupled architecture. It should therefore run tests for the front-end and backend, i.e., two different languages, on isolated instances by making use of isolated parallel builds.
In this blog, I'll be discussing how I configured Travis CI, in my Pull Request, to run the tests in parallel in isolated builds in Badgeyay.
First, let's understand what a parallel Travis CI build is and why we need it. Then we will move on to configuring the travis.yml file to run tests in parallel. Let's get started and understand it step by step.
Why Parallel Travis CI Build?
Integration test suites tend to test more complex situations through the whole stack, which incorporates the front-end and back-end; they also tend to be the slowest part, requiring several minutes to run, sometimes even up to 30 minutes. To speed up a test suite like that, we can split it into a few parts using the Travis build matrix feature. Travis will determine the build matrix based on environment variables and schedule two builds to run.
Now our objective is clear: we have to configure travis.yml to build in parallel. Our project requires two buildpacks, Python and node_js, and running the build jobs for both of them in parallel speeds things up by a considerable amount. It is possible to run several languages in one .travis.yml file using the matrix: include feature.
Below is the code snippet of the travis.yml file for the Badgeyay project in order to run build jobs in a parallel fashion.
sudo: required
dist: trusty
# check different combinations of build flags which is able to divide builds into “jobs”.
matrix:
# Helps to run different languages in one .travis.yml file
include:
# First Job in Python.
- language: python
apt:
packages:
- python-dev
python:
- 3.5
cache:
directories:
- $HOME/backend/.pip-cache/
before_install:
- sudo apt-get -qq update
- sudo apt-get -y install python3-pip
- sudo apt-get install python-virtualenv
install:
- virtualenv -p python3 ../flask_env
- source ../flask_env/bin/activate
- pip3 install -r backend/requirements/test.txt --cache-dir
before_script:
- export DISPLAY=:99.0
- sh -e /etc/init.d/xvfb start
- sleep 3
script:
- python backend/app/main.py >> log.txt 2>&1 &
- py.test --cov ../ ./backend/app/tests/test_api.py
after_success:
- bash <(curl -s https://codecov.io/bash)
# Second Job in node js.
- language: node_js
node_js:
- "6"
addons:
chrome: stable
cache:
directories:
- $HOME/frontend/.npm
env:
global:
# See https://git.io/vdao3 for details.
- JOBS=1
before_install:
- cd frontend
- npm install
- npm install -g ember-cli
- npm i eslint-plugin-ember@latest --save-dev
- npm config set spin false
script:
- npm run lint:js
- npm test
Now, as we have added travis.yml and pushed it to the project repo, here is a screenshot of Travis CI passing after the parallel build jobs.