Deploying loklak search on Heroku & Using config to store and load external URLs

It is really important to have a separate configuration for all the hardcoded URLs that appear in multiple places across the project. It helps with testing, and with comparing instances of projects like loklak, when the hardcoded URLs live in a single configuration. In this blog post we will also discuss the deployment of the Angular-based loklak search on Heroku.

Creating shared URL config

The idea here is to export a const object containing all of the hardcoded URLs used inside the project. We group related URLs under the same key, e.g. all the GitHub URLs go under the key ‘github’.

export const defaultUrlConfig = {
	fossasia: {
		root: 'https://fossasia.org',
		blog: 'https://blog.fossasia.org'
	},
	loklak: {
		apiServer: 'https://api.loklak.org',
		apiBase: 'loklak.org',
		blog: 'http://blog.loklak.net',
		dev: 'https://dev.loklak.org',
		apps: 'https://apps.loklak.org'
	},
	github: {
		loklak: 'https://github.com/loklak',
		fossasia: 'https://github.com/fossasia'
	},
	phimpme: {
		root: 'https://phimp.me'
	},
	susper: {
		root: 'https://susper.com'
	},
	susiChat: {
		root: 'https://chat.susi.ai'
	},
	pslab: {
		root: 'https://pslab.fossasia.org'
	},
	eventyay: {
		root: 'https://eventyay.com'
	}
};

 

Storing URLs under keys, instead of inside a simple array, makes it easier to add a new URL in the required place and to modify existing ones. Now this configuration can easily be imported inside any component file.

Using config inside the component

Now, the work is really simple. We just need to import the configuration file inside the required component and use the respective URLs directly.

First step would be to import the configuration file as follows:

import { defaultUrlConfig } from '../shared/url-config';

 

Note: The relative path to the url-config file might differ for different components.

Now, we would need to store the defaultUrlConfig inside a variable.

public configUrl = defaultUrlConfig;

 

At this point, all the configuration URLs are available through configUrl.
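As a minimal sketch, a hypothetical component (the component name and paths below are only illustrative) that exposes the config to its template could look like this:

import { Component } from '@angular/core';
// Adjust the relative path to wherever url-config actually lives
import { defaultUrlConfig } from '../shared/url-config';

@Component({
  selector: 'app-about',
  templateUrl: './about.component.html'
})
export class AboutComponent {
  // Expose the shared URL configuration to the template
  public configUrl = defaultUrlConfig;
}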

Displaying URLs in HTML

We would use Angular’s interpolation binding syntax to display the URLs from configUrl in HTML. Let’s say we want to display FOSSASIA’s GitHub URL in HTML; then we would simply need to do:

{{ configUrl.github.fossasia }}

 

This can be used as a template to replace all the hardcoded URLs inside the project.
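For example, a link in a template could be bound to the config instead of a hardcoded string (the anchor below is only an illustration):

<!-- Bind the href to the shared config instead of hardcoding the URL -->
<a href="{{ configUrl.github.fossasia }}" target="_blank">FOSSASIA on GitHub</a>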

Deploying loklak search on Heroku

Note: I am skipping the initial steps of creating a project on Heroku. It is very easy to set up a project on Heroku; for the initial steps, please follow up here.

First step in this direction would be to add a server.js file in root directory and add the following express server code in it:

const express = require('express');
const path = require('path');
const app = express();
app.use(express.static(__dirname + '/dist/<name-of-app>'));
app.get('/*', function(req, res) {
  res.sendFile(path.join(__dirname + '/dist/<name-of-app>/index.html'));
});
app.listen(process.env.PORT || 8080);

 

Second step would be to add the following commands inside package.json at respective attributes.

"postinstall": "ng build --aot --prod"
"start": "node server.js"

 

Testing

Click on the respective URL link inside the project UI to test the configuration for hardcoded URLs. To check the deployment on heroku, please open the following URLs:

Development branch: loklak-search-dev.herokuapp.com

Master branch: loklak-search.herokuapp.com


Deploying Susper On Heroku

In Susper we currently have two branches, master and development. The master branch is deployed on susper.com, and we also needed a deployment of the development branch. For this we use Heroku, and in this blog I will describe how we deployed Susper’s development branch on Heroku at http://susper-dev.herokuapp.com

Here are the steps:

1. Get a free Heroku account:

To deploy our apps on heroku we must have a heroku account which can be created for free on https://www.heroku.com.

2. Create a new app on Heroku:

After creating an account we need to create an app with a name and a region; for Susper we used the name susper-dev and the region United States.

3. Link the app to Susper's development branch:

After successfully creating the app, we link it to Susper's development branch in the Deploy section and enable automatic deployment from the development branch once CI passes.

4. Set up the Susper project:

Now that the app is linked, we need to configure it so that it can successfully start on Heroku. Following are the steps to configure Susper.

a) Add node and npm engine in package.json:

Now we need to tell Heroku which npm and node versions to use while running our app. This can be done by defining an engines field in package.json and adding the node and npm versions as values, as shown here:

"engines": {
   "node": "8.2.1",
   "npm": "6.0.1"
 }

 

b) Adding a postinstall field under scripts field in package.json:

Now we need to tell Heroku the command that we will be using to build our app. On susper.com we use ng build --prod --build-optimizer, which builds our app for production by optimizing the build artifacts, so on Heroku we will use the same command:

"postinstall": "ng build --prod --build-optimizer"

 

c) Defining dependencies and the TypeScript version for our project: This can be done by listing @angular/cli, @angular/compiler-cli and typescript with their versions under the dependencies field in package.json. In Susper we are using the following dependency versions:

"@angular/cli": "^1.1.0",
"@angular/compiler": "4.1.3",
"typescript": "~2.3.4"

 

d) Creating a JavaScript file to set up an Express server and run our app: Locally, when we run ng serve, the Angular CLI automatically starts a development server for us, but on Heroku we have to start our own Express server for the app from a JavaScript file. Here is the code for starting the server.

//Install the server
const express = require('express');
const path = require('path');
const app = express();
// Serving the static file from dist folder
app.use(express.static(__dirname + '/dist'));
app.get('/*', function(request,response) {
response.sendFile(path.join(__dirname+'/dist/index.html'));
});
// Starting the app and listening on default heroku port.
app.listen(process.env.PORT || 8080);

 

Now we need to run this file, so we change the start command under the scripts field in package.json to run it:

"start": "node server.js"

 

Now, every time a new commit is made to the development branch, our app will be automatically deployed on Heroku at https://susper-dev.herokuapp.com.


Deploying a Postgres-Based Open Event Server to Kubernetes

In this post, I will walk you through deploying the Open Event Server on Kubernetes, hosted on Google Cloud Platform’s Compute Engine. You’ll be needing a Google account for this, so create one if you don’t have one.

First, I cd into the root of our project’s Git repository. Now I need to create a Dockerfile. I will use Docker to package our project into a nice image which can then be “pushed” to Google Cloud Platform. A Dockerfile is essentially a text document which contains the commands required to assemble an image. For more details on how to write one for your project specifically, check out the Docker docs. For Open Event Server, the Dockerfile looks like the following:

FROM python:3-slim

ENV INSTALL_PATH /open_event
RUN mkdir -p $INSTALL_PATH

WORKDIR $INSTALL_PATH

# apt-get update and update some packages
RUN apt-get update && apt-get install -y wget git ca-certificates curl && update-ca-certificates && apt-get clean -y


# install deps
RUN apt-get install -y --no-install-recommends build-essential python-dev libpq-dev libevent-dev libmagic-dev && apt-get clean -y

# copy just requirements
COPY requirements.txt requirements.txt
COPY requirements requirements

# install requirements
RUN pip install --no-cache-dir -r requirements.txt 
RUN pip install eventlet

# copy remaining files
COPY . .

CMD bash scripts/docker_run.sh

These commands simply install the dependencies and set up the environment for our project. The final CMD command is for running our project, which, in our case, is a server.
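As a rough sketch, building and publishing this image follows the standard Docker workflow; the tag below is the image name referenced later in the Kubernetes manifest, but the actual Open Event scripts may handle building and pushing differently:

# Build the image from the repository root; the tag matches the one the manifests reference
docker build -t eventyay/nextgen-api-server:latest .
# Push it to a registry the cluster can pull from (requires docker login)
docker push eventyay/nextgen-api-server:latest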

After our Dockerfile is configured, I go to Google Cloud Platform’s console and create a new project.


Once I enter the project name and other details, I enable billing in order to use Google’s cloud resources. A credit card is required to set up a billing account, but Google doesn’t charge any money for that. Also, one of the perks of being a part of FOSSASIA was that I had about $3000 in Google Cloud credits! Once billing is enabled, I then enable the Container Engine API. It is required to support Kubernetes on Google Compute Engine. The next step is to install the Google Cloud SDK. Once that is done, I run the following command to install the Kubernetes CLI tool:

gcloud components install kubectl

Then I configure the Google Cloud Project Zone via the following command:

gcloud config set compute/zone us-west1-a

Now I will create a disk (for storing our code and data) as well as a temporary instance for formatting that disk:

gcloud compute disks create pg-data-disk --size 1GB
gcloud compute instances create pg-disk-formatter
gcloud compute instances attach-disk pg-disk-formatter --disk pg-data-disk

Once the disk is attached to our instance, I SSH into it and list the available disks:

gcloud compute ssh "pg-disk-formatter"

Now, ls the available disks:

ls /dev/disk/by-id

This will list multiple disks; the one I want to format is "google-persistent-disk-1".


Now I format that disk via the following command:

sudo mkfs.ext4 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/disk/by-id/google-persistent-disk-1

Finally, after the formatting is done, I exit the SSH session and detach the disk from the instance:

gcloud compute instances detach-disk pg-disk-formatter --disk pg-data-disk

Now I create a Kubernetes cluster and get its credentials (for later use) via gcloud CLI:

gcloud container clusters create opev-cluster
gcloud container clusters get-credentials opev-cluster
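As a quick sanity check (not part of the official steps), kubectl should now be pointed at the new cluster:

# Lists the nodes of the freshly created cluster if the credentials were fetched correctly
kubectl get nodes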

I can additionally bind our deployment to a domain name! If you already have a domain name, you can use the IP reserved below as an A record of your domain’s DNS Zone. Otherwise, get a free domain at freenom.com and do the same with that domain. This static external IP address is reserved in the following manner:

gcloud compute addresses create testip --region us-west1

It’ll be useful if I note down this IP for further reference. I now add this IP (and our domain name if I used one) in Kubernetes configuration files (for Open Event Server, these were specific files for services like nginx and PostgreSQL):

#
# Deployment of the API Server and celery worker
#
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: api-server
  namespace: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api-server
    spec:
      volumes:
        - name: data-store
          emptyDir: {}
      containers:
        - name: api-server
          image: eventyay/nextgen-api-server:latest
          command: ["/bin/sh", "-c"]
          args: ["./kubernetes/run.sh"]
          livenessProbe:
            httpGet:
              path: /health-check/
              port: 8080
            initialDelaySeconds: 30
            timeoutSeconds: 3
          ports:
            - containerPort: 8080
              protocol: TCP
          envFrom:
            - configMapRef:
                name: api-server
          env:
            - name: C_FORCE_ROOT
              value: 'true'
            - name: PYTHONUNBUFFERED
              value: 'TRUE'
            - name: DEPLOYMENT
              value: 'api'
            - name: FORCE_SSL
              value: 'yes'
          volumeMounts:
            - mountPath: /opev/open_event/static/uploads/
              name: data-store
        - name: celery
          image: eventyay/nextgen-api-server:latest
          command: ["/bin/sh", "-c"]
          args: ["./kubernetes/run.sh"]
          envFrom:
            - configMapRef:
                name: api-server
          env:
            - name: C_FORCE_ROOT
              value: 'true'
            - name: DEPLOYMENT
              value: 'celery'
          volumeMounts:
            - mountPath: /opev/open_event/static/uploads/
              name: data-store
      restartPolicy: Always

These files will look similar for your specific project as well. Finally, I run the deployment script provided for Open Event Server to deploy with defined configuration:

./kubernetes/deploy.sh create all

This is the specific deployment script for this project; for your app, this will be a bunch of simple kubectl create * commands specific to your app’s context. Once this script completes, our app is deployed! After a few minutes, I can see it live at the domain I provided above!
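For reference, a rough, hypothetical sketch of what such kubectl commands could look like is below; the file names are illustrative and not the actual paths used by the Open Event deploy script:

# Create the namespace and then the individual resources from their manifest files
kubectl create namespace web
kubectl create -f kubernetes/yml/api-server.yml
kubectl create -f kubernetes/yml/nginx.yml
kubectl create -f kubernetes/yml/postgres.yml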

I can also see our project’s statistics and set many other attributes at Google Cloud Platform’s console.


And that’s it for this post! For more details on deploying Docker-based containers on Kubernetes clusters hosted by Google, visit Google Cloud Platform’s documentation.


Deploying Angular App Susper on GitHub Pages

What are the issues we had with automatic deployment of Susper?

The Susper project, which is built on Angular, is kept on the master branch of the repository.

Whenever any PR is merged, Travis CI runs the builds and checks for errors. After a successful check, it triggers the deployment script in Susper (deploy.sh).

Here is the deployment script:

#!/bin/bash

# SOURCE_BRANCH & TARGET_BRANCH stores the name of different susper.com github branches.
SOURCE_BRANCH="master"
TARGET_BRANCH="gh-pages"

# Pull requests and commits to other branches shouldn't try to deploy.
if [ "$TRAVIS_PULL_REQUEST" != "false" -o "$TRAVIS_BRANCH" != "$SOURCE_BRANCH" ]; then
    echo "Skipping deploy; The request or commit is not on master"
    exit 0
fi

# Store some useful information into variables
REPO=`git config remote.origin.url`
SSH_REPO=${REPO/https:\/\/github.com\//git@github.com:}
SHA=`git rev-parse --verify HEAD`

# Decryption of the `deploy.enc`
openssl aes-256-cbc -k "$super_secret_password" -in deploy.enc -out deploy_key -d

# give `deploy_key`. the permission to read and write
chmod 600 deploy_key
eval `ssh-agent -s`
ssh-add deploy_key

# Cloning the repository to repo/ directory,
# Creating gh-pages branch if it doesn't exists else moving to that branch
git clone $REPO repo
cd repo
git checkout $TARGET_BRANCH || git checkout --orphan $TARGET_BRANCH
cd ..

# Setting up the username and email.
git config user.name "Travis CI"
git config user.email "$COMMIT_AUTHOR_EMAIL"

# Cleaning up the old repo's gh-pages branch except CNAME file and 404.html
find repo/* ! -name "CNAME" ! -name "404.html" -maxdepth 1  -exec rm -rf {} \; 2> /dev/null
cd repo

# commit the changes, move to SOURCE_BRANCH
git add --all
git commit -m "Travis CI Clean Deploy : ${SHA}"

git checkout $SOURCE_BRANCH

# Actual building and setup of current push or PR. Move to `TARGET_BRANCH` and move the `dist` folder
npm install
ng build --prod --aot
mv susper.xml dist/
mv 404.html dist/

git checkout $TARGET_BRANCH
mv dist/* .

# Staging the new build for commit; and then committing the latest build
git add .
git commit --amend --no-edit --allow-empty

# Actual push to gh-pages branch via Travis
git push --force $SSH_REPO $TARGET_BRANCH

This script runs after the project has been checked successfully (the comments explain what each command does). The repository is cleaned, except for some files like CNAME and 404.html, and a commit is made. Then Susper’s build artifacts are generated in the dist folder, all the files from the dist folder are moved to gh-pages, and the changes are amended to the commit. The GitHub Pages engine then uses the build artifacts to generate the static site at susper.com. But we faced a weird problem: we were unable to see the changes from the latest commits on susper.com. The number of commits slowly increased, but the changes were still not reflected on susper.com.

What was the error in deployment of SUSPER?

All the changes were getting committed to the gh-pages branch properly, but the GitHub Pages engine was unable to process the branch to build the static site. Due to this, the site was not getting updated. The failing builds from the gh-pages engine are reported to the owner via email.

The failing builds from gh-pages can be seen here: https://github.com/fossasia/susper.com/commits/gh-pages (between 18 March and 21 May).

There was a weird error while building the site from the build artifacts: Failure: The tag `chartjs` in line 4 on `node_modules/chart.js/docs/charts/bar.md` is not a recognized Liquid tag. This pointed to an issue with the gh-pages engine itself, since `node_modules/chart.js/docs/charts/bar.md` had also been present earlier without causing any such error.

Due to this error, none of the commits made to the Susper repository were shown on susper.com, as the site was not getting updated.

How we solved it?

We searched a lot about the above-mentioned error but were unable to find a proper solution. While searching, we came across this question on Stack Overflow: https://stackoverflow.com/questions/24713112/my-github-page-wont-update-its-content

There was no accepted answer for this question, and every answer was different from the others. So we reproduced the bug in our own repository by making a repository similar to Susper with the same branches. Once we successfully reproduced the bug, we tried to correlate all the answers to arrive at a solution. Luckily, we found that any commit with no significant change from the previous commit removed the error and allowed the gh-pages engine to successfully build the site. So we made a dummy commit in which we created a hidden file with no content and pushed it to gh-pages. This did the trick for us, and gh-pages was then able to build the site.
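The dummy commit itself was nothing special; assuming the gh-pages branch is checked out locally, it boils down to something like this:

# On the gh-pages branch: add an empty hidden file and push it to trigger a rebuild
touch .dummy
git add .dummy
git commit -m "Dummy commit to trigger gh-pages build"
git push origin gh-pages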

The whole problem was due to a bug in the GitHub Pages engine, which we worked around with a dummy commit.

Resources

  1. Stack Overflow: https://stackoverflow.com/questions/24713112/my-github-page-wont-update-its-content
  2. Publishing to GitHub Pages: https://help.github.com/articles/configuring-a-publishing-source-for-github-pages/
  3. Commits (gh-pages): https://github.com/fossasia/susper.com/commits/gh-pages

Build Button Resolution in Meilix

Meilix Generator is a web app with a build button which, when clicked, triggers the Travis build of the Meilix repository. You can even generate an ISO from your phone. But while opening the web app on a phone, we noticed that the build button is not properly visible: the footer of the page hides it. This blog shows how we identified the build button issue and made the page responsive for all screen sizes.

Error Rectification

There are two elements that need correction:

  • Build button
  • Footer

Build Button

<input class="btn btn-info mx-auto btn-block" id="file-upload" value="Build" required="" type="submit">

 

Here the build button acts as an input submit button.

class="form-group"> class="btn btn-info mx-auto btn-block" id="file-upload" value="Build" required="" type="submit" style="height: 50px; width: 360px; border-radius: 500px;"/>

 

We first added the form-group wrapper to include the button in the form. Then we added style properties to give the button a certain look.

But the footer still overlaps the button, so we need to work on the footer as well.

Footer

<footer class="footer">

 

We removed the footer class and made a new div tag with the id "deployment". Then we set the CSS for the deployment id to customise its appearance.

     #deployment {
      position: absolute;
      text-align: center;
      width: 100%;
      padding: 10px;
      bottom: 0px;
      border-top: 1px solid #bfbfbf;
      background-color: #f9f9f9;
    }

 

This makes the footer and the build button responsive for all screen sizes.

Reference:

HTML form tag

Input type Submit tag


How to deploy SUSI AI bots on ngrok for debugging

For production purposes, bots can be deployed on cloud services such as Heroku, Google App Engine or Amazon Web Services, or on your own data center infrastructure.

However, for development purposes you can use ngrok to provide access to your bot running in your local network. Ngrok is easy to setup and use. To learn more about it, you can refer to its documentation.

Ngrok is a handy tool and service that allows you tunnel requests from the wide open Internet to your local machine when it’s behind a NAT or firewall. It’s commonly used to develop web services and webhooks.

In this blog, you’ll learn how to deploy SUSI AI bots on ngrok. We’re going to demonstrate the process for SUSI AI Kik bot. However it is the same for rest of the bots as well.

First of all, you need to configure a bot on Kik. To configure a Kik bot, follow the first few steps of this blog post till you get an API key for the bot that you created using Botsworth. Now, follow these steps:

  1. In order to run SUSI AI kik bot on ngrok, you have to make some changes to the index.js file after forking susi_kikbot repository and cloning it to your local machine.
  2. Open index.js and change the information in the 'bot' object. Put your Kik bot's username as the value of username and the API key you got for your bot as the value of apiKey, both in quotes, and leave the baseUrl blank for now. It will look like this (don't copy the API key shown below; it is invalid and only for demonstration purposes):
var bot = new Bot({
    username: 'susi.ai',
    apiKey: 'b5a5238b-b764-45fe-a4c5-629fsd1851bd',
    baseUrl: ''
});
  3. Now save the file. No need to commit it.
  4. Go to https://ngrok.com/ and sign up.
  5. Next, you'll be redirected to the setup and installation page. Follow the instructions there to set up ngrok on your local machine.
  6. Now open a terminal and use the 'cd' command to go to the susi_kikbot directory.
  7. While deploying on ngrok, we set the port for listening to HTTP requests. Hence remove "process.env.PORT" from index.js, or else it will cause an error in the next step. After removing it, the line should look like this (now save the index.js file):
http.createServer(bot.incoming()).listen(8080);

Also, comment out the setInterval function in index.js.

  8. Now type the command ngrok http 8080 in the terminal. In case the 'ngrok' command doesn't run, copy the ngrok binary that you downloaded in step 5 and paste it in the susi_kikbot directory. Then enter ./ngrok http 8080 in the terminal.
  9. When it launches, ngrok shows a status screen in the terminal that includes a 'forwarding' address.

  10. Copy the "forwarding" address (https://d36f0add.ngrok.io) and paste it as the value of 'baseUrl' in the 'bot' object in the index.js file. Add "/incoming" after it and save. Now the 'bot' object should look similar to this:
var bot = new Bot({
    username: 'susi.ai',
    apiKey: 'b5a5238b-b764-45fe-a4c5-629fsd1851bd',
    baseUrl: 'https://d36f0add.ngrok.io/incoming'
});
  11. Launch another terminal window and cd into the susi_kikbot directory on your local machine. Then run the node index.js command.

Congratulations! You’ve successfully deployed SUSI AI Kik bot on ngrok. Now try sending a message to your SUSI AI Kik bot.


Using file.io for Meilix Deployment

Meilix is developed in FOSSASIA and is deployed as a release on GitHub, but the download speed on GitHub for large files is slow. Alternatively, deployment can be done on third-party servers. transfer.sh offers a good service for a start, but it only has limited storage for heavy usage such as what Meilix requires. file.io is a good alternative: it has an API which can be used in .travis.yml to deploy Meilix. For the time being, we are deploying the generator branch to file.io.

Changes made in the .travis.yml

We need to edit the .travis.yml to deploy the artifact on file.io

deploy:
  provider: script
  script: curl -F "file=@/home/travis/$(image_name)" https://file.io
  on:
    branch: generator

We edit the deploy attribute in .travis.yml to get the deployment done on file.io. The script contains the curl command with the path of the file that needs to be uploaded to file.io. Then the branch on which the deployment should happen is provided.

Output:

Travis executes the command to deploy the application.

{"success":true,"key":"5QBEry","link":"https://file.io/5QBEry","expiry":"14 days"}

success: true indicates that the artifact was deployed successfully.

key and link: give the link from which the ISO can be downloaded.

expiry: tells the number of days after which the ISO will be deleted; by default it is set to 14 days.

We can pass the expires parameter manually to set the expiry time.

script: curl -F "file=@/home/travis/$(image_name)" https://file.io/?expires=1w

This will set the expiry time to seven days (one week). The suffix w denotes weeks, m denotes months and y denotes years.

file.io solved the most important issue of Meilix deployment, and this approach can be used in several other FOSSASIA projects for deployment purposes.


Deploying BadgeYaY with Docker on Docker Cloud

We already had a Dockerfile present in the repository, but there were problems in many lines of code. I studied Docker, learned how deployment with it works, and am now going to explain how I deployed BadgeYaY on Docker Cloud.

To make deploying BadgeYaY easier, we now support a Docker-based installation.

Before we start to deploy, let’s have a quick look at what Docker is and how it works.

What is Docker ?

Docker is an open-source technology that allows you to create, deploy, and run applications using containers. It lets you deploy technologies with many underlying components that must be installed and configured in a single, containerized instance. Docker makes it easier to create and deploy applications in an isolated environment.

Now, let’s start with how to deploy on docker cloud:

Step 1 – Installing Docker

Get the latest version of Docker. See the official site for installation info for your platform.

Step 2 – Create Dockerfile

With Docker, we can just grab a portable Python runtime as an image, no installation necessary. Then, our build can include the base Python image right alongside our app code, ensuring that our app, its dependencies, and the runtime, all travel together.

These portable images are defined by something called a Dockerfile.

A Dockerfile contains all the commands a user could call on the command line to assemble an image. Here is the Dockerfile of BadgeYaY:

# The FROM instruction initializes a new build stage and sets the Base Image for subsequent instructions.
FROM python:3.6

# We copy just the requirements.txt first to leverage Docker cache
COPY ./app/requirements.txt /app/


# The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
WORKDIR /app


# The RUN instruction will execute any commands in a new layer on top of the current image and commit the results.
RUN pip install -r requirements.txt


# The COPY instruction copies new files.
COPY . /app


# An ENTRYPOINT allows you to configure a container that will run as an executable.
ENTRYPOINT [ "python" ]

# The main purpose of a CMD is to provide defaults for an executing container.
CMD [ "main.py" ]

 

Step 3 – Build New Docker Image

sudo docker build -t badgeyay:latest .

 

When the command completed successfully, we can check the new image with the docker command below:

     sudo docker images

 

Step 4 – Run the app

Let’s run the app in the background, in detached mode:

 sudo docker run -d -p 5000:5000 badgeyay

 

We get the long container ID for our app and are then kicked back to our terminal. Our container is running in the background. Now use docker container stop to end the process, using the CONTAINER ID, like so:

 

docker container stop 1fa4ab2cf395

 

Step 5 – Publish the app.

Log in to the Docker public registry on your local machine.

docker login

 

Upload your tagged image to the repository:

docker push username/repository:tag

 

From now on, we can use docker run and run our app on any machine. No matter where docker run executes, it pulls your image, along with Python and all the dependencies from requirements.txt, and runs your code. It all travels together in a neat little package, and the host machine doesn’t have to install anything but Docker to run it.
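As a sketch, running the published image on another machine that only has Docker installed would look something like this (replace username/repository:tag with your own image name):

# Pull (implicitly) and run the published image, mapping the Flask port to the host
docker run -d -p 5000:5000 username/repository:tag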

Docker Cloud

Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying (and redeploying) services created from images.

In BadgeYaY, we also have a Deploy button which directly deploys on Docker Cloud with a single click.

The related PR of this work is https://github.com/fossasia/badgeyay/pull/401 .

Resources:

  • Docker documentation: Link
  • Get Started With Docker: Link

Installing Susper Search Engine and Deploying it to Heroku

Susper is a decentralized Search Engine that uses the peer to peer system yacy and Apache Solr to crawl and index search results.

Search results are displayed using the Solr server which is embedded into YaCy. All search results must be provided by a YaCy search server, which includes a Solr server with a specialized JSON result writer. When a search request is made in one of the search templates, an HTTP request is made to YaCy. The response is JSON, because JSON can be parsed in JavaScript much more easily than XML.

In this blog, we will talk about how to install Susper search engine locally and deploying it to Heroku (A cloud application platform).

How to clone the repository

Sign up / Login to GitHub and head over to the Susper repository. Then follow these steps.

  1. Go ahead and fork the repository
https://github.com/fossasia/susper.com

2.   Get the clone of the forked version on your local machine using

git clone https://github.com/<username>/susper.com.git

3. Add upstream to synchronize repository using

git remote add upstream https://github.com/fossasia/susper.com.git

Getting Started

The Susper search application basically consists of the following :

  1. First, we will need to install angular-cli by using the following command:
npm install -g @angular/cli@latest

2. After installing angular-cli we need to install our required node modules, so we will do that by using the following command:

npm install

3. Deploy locally by running this

ng serve

Go to localhost:4200 where the application will be running locally.

How to Deploy Susper Search Engine to Heroku:

  1. We need to install Heroku on our machine. Type the following in your Linux terminal:
wget -O- https://toolbelt.heroku.com/install-ubuntu.sh | sh

This installs the Heroku Toolbelt on your machine to access Heroku from the command line.

  2. Create a Procfile inside the root directory and write
web: ng serve
  3. Next, we need to log in to our Heroku server (assuming that you have already created an account).

Type the following in the terminal:

heroku login

Enter your credentials and login.

  4. Once logged in, we need to create a space on the Heroku server for our application. This is done with the following command:
heroku create
  5. Add the nodejs buildpack to the app:
heroku buildpacks:add --index 1 heroku/nodejs
  6. Then we deploy the code to Heroku:
git push heroku master
git push heroku yourbranch:master # If you are in a different branch other than master


Installing the Loklak Search and Deploying it to Surge

The Loklak search creates a website using the Loklak server as a data source. The goal is to get a search site, that offers timeline search as well as custom media search, account and geolocation search.

In order to run the service, you can use the API of http://api.loklak.org or install your own Loklak server data storage engine. Loklak_server is a server application which collects messages from various social media tweet sources, including Twitter. The server contains a search index and a peer-to-peer index sharing interface. All messages are stored in an elasticsearch index.

The site of this repo is deployed on the GitHub gh-pages branch and automatically deployed here: http://loklak.org

In this blog, we will talk about how to install Loklak_Search locally and deploying it to Surge (Static web publishing for Front-End Developers).

How to clone the repository

Sign up / Login to GitHub and head over to the Loklak_Search repository. Then follow these steps.

  1. Go ahead and fork the repository
https://github.com/fossasia/loklak_search
  2. Get the clone of the forked version on your local machine using
git clone https://github.com/<username>/loklak_search.git
  3. Add upstream to synchronize the repository using
git remote add upstream https://github.com/fossasia/loklak_search.git

Getting Started

The Loklak search application basically consists of the following :

  1. First, we will need to install angular-cli by using the following command:
npm install -g @angular/cli@latest

2. After installing angular-cli we need to install our required node modules, so we will do that by using the following command:

npm install

3. Deploy locally by running this

ng serve

Go to localhost:4200 where the application will be running locally.

How to Deploy Loklak Search on Surge :

Surge is a service which publishes static web pages and generates a demo link, which makes it easier for developers to deploy their web app. There are a lot of benefits to using Surge over generating a demo link with GitHub Pages.

  1. We need to install surge on our machine. Type the following in your Linux terminal:
npm install --global surge

This installs the Surge on your machine to access Surge from the command line.

  2. In your project directory, just run
surge
  3. After this, it will ask you for three parameters, namely
Email
Password
Domain

After specifying all these three parameters, the deployment link with the respective domain is generated.

Auto deployment of Pull Requests using Surge:

To implement the feature of auto-deployment of pull request using surge, one can follow up these steps:

  • Create a pr_deploy.sh file
  • The pr_deploy.sh file will be executed only after Travis CI succeeds, i.e. when the build passes, using the command bash pr_deploy.sh
#!/usr/bin/env bash
if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
    echo "Not a PR. Skipping surge deployment."
    exit 0
fi
npm i -g surge
export SURGE_LOGIN=test@example.co.in
# Token of a dummy account
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206
export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-fossasia-LoklakSearch.surge.sh
surge --project ./dist --domain $DEPLOY_DOMAIN;

Here, Travis CI first installs surge locally with npm i -g surge, and then we export the environment variables SURGE_LOGIN, SURGE_TOKEN and DEPLOY_DOMAIN.

Now, execute the pr_deploy.sh file from .travis.yml using the command bash pr_deploy.sh.
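A minimal sketch of the relevant part of .travis.yml, assuming pr_deploy.sh sits in the repository root, would be:

# Run the surge deployment script only after the Travis build has succeeded
after_success:
  - bash pr_deploy.sh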
