Auto Deployment of Pull Requests on Susper using Surge Technology

Susper is being improved every day. Following best practices in the organization, each pull request includes a working demo link of the fix. Currently, the demo link for Susper is generated using GitHub Pages by running two simple commands: ng build and npm run deploy. On slow internet connectivity this process can take up to 30 minutes to generate a working demo link of the fix.

Surge is a technology which publishes static web pages and generates a demo link, which makes it easier for developers to deploy their web apps. There are several benefits of using Surge over generating the demo link with GitHub Pages:

As soon as the pull request passes Travis CI, the deployment link is generated. It has been set up so that no extra terminal commands are required.
Faster loading compared to demo links generated using GitHub Pages.

Note that Surge can only deploy static web pages, i.e. websites that contain fixed content.

To implement auto-deployment of pull requests using Surge, one can follow these steps:

Create a pr_deploy.sh file which will be executed during Travis CI testing. It runs after success, i.e. when Travis CI passes, via the command bash pr_deploy.sh. The pr_deploy.sh file for Susper looks like this:

#!/usr/bin/env bash
if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
    echo "Not a PR. Skipping surge deployment."
    exit 0
fi
ng build --prod
npm i -g surge
export SURGE_LOGIN=test@example.co.in
# Token of a dummy account
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206
export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-fossasia-susper.surge.sh
surge --project ./dist --domain $DEPLOY_DOMAIN

Once pr_deploy.sh has been created, execute it from .travis.yml by adding the command bash pr_deploy.sh to the after_success section. In this way, we have integrated Surge for auto-deployment of pull requests in Susper.
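The two decisions the script makes — skip non-PR builds, and give every pull request its own predictable subdomain — can be tried locally with this minimal sketch (the actual surge upload is left out; TRAVIS_PULL_REQUEST defaults to a pretend PR number when run outside Travis):

```shell
#!/usr/bin/env bash
# Sketch of the PR guard and per-PR surge domain from pr_deploy.sh.
# On Travis, TRAVIS_PULL_REQUEST is "false" for branch builds,
# or the pull request number for PR builds.
TRAVIS_PULL_REQUEST="${TRAVIS_PULL_REQUEST:-42}"   # pretend PR #42 locally

if [ "$TRAVIS_PULL_REQUEST" = "false" ]; then
  echo "Not a PR. Skipping surge deployment."
  exit 0
fi

# Each PR gets its own predictable subdomain on surge.sh
DEPLOY_DOMAIN="https://pr-${TRAVIS_PULL_REQUEST}-fossasia-susper.surge.sh"
echo "$DEPLOY_DOMAIN"
```

Because the domain embeds the PR number, reviewers can reconstruct the demo URL for any pull request without digging through the CI log.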
References: Static web publishing for Front-End developers: https://surge.sh/


Deploy to Azure Button for loklak

In this blog post, I am going to tell you about yet another deployment method for loklak which is easy and quick: just one click. Deploying to Azure Websites from a Git repository just got a little easier with the Deploy to Azure Button. Simply place the button in README.md with a link to loklak, and users who click on it will be directed to a streamlined deployment process. If we want to do something more advanced and customize this behavior, we can add an ARM template called "azuredeploy.json" at the root of the repository, which will cause users to be presented with different inputs and configure the services as specified. I'm going to walk you through a workflow that I used to test these templates before checking them in to my repo, as well as describe some of the special behaviors of the "Deploy to Azure" site.

Adding a button

To add a deployment button, insert the following markdown into your README.md file:

[![Deploy to Azure](https://azuredeploy.net/deploybutton.svg)](https://deploy.azure.com/?repository=https://github.com/loklak/loklak_server)

How it works

When a user clicks on the button, a "referrer" header is sent to azuredeploy.net which contains the location of the Git repository of loklak_server to deploy from.

An Example Template

This is a blank template which shows how Azure structures its inputs:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}

Following the above template, in the case of loklak server the parameters used are name, image (i.e. the Docker image), port, number of CPUs to be utilized, and space (i.e. memory required). In the resources section we use a container; the type of the container is

"type": "Microsoft.ContainerInstance/containerGroups",

As output, we expect a public IP address to access the Azure cloud instance created by us.
Everything under the root "parameters" property will be inputs into our template. These parameter values then feed into the resources defined later in the template with the "[parameters('paramName')]" syntax.

Resources

Different Azure templates available here: https://github.com/Azure/azure-quickstart-templates
More about the Deploy to Azure button: https://www.microsoft.com/developerblog/2017/01/17/the-deploy-to-azure-button/
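To make the blank skeleton concrete: a container-group template for the parameters named above (name, image, port, cpus, memory) could look roughly like the fragment below. This is an illustrative sketch, not loklak's actual azuredeploy.json; the apiVersion and property names follow the public Microsoft.ContainerInstance quickstart samples.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "name":   { "type": "string" },
    "image":  { "type": "string", "defaultValue": "loklak/loklak_server:latest-master" },
    "port":   { "type": "int",    "defaultValue": 80 },
    "cpus":   { "type": "int",    "defaultValue": 1 },
    "memory": { "type": "string", "defaultValue": "1.5" }
  },
  "resources": [
    {
      "type": "Microsoft.ContainerInstance/containerGroups",
      "apiVersion": "2018-10-01",
      "name": "[parameters('name')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "containers": [
          {
            "name": "[parameters('name')]",
            "properties": {
              "image": "[parameters('image')]",
              "ports": [ { "port": "[parameters('port')]" } ],
              "resources": {
                "requests": {
                  "cpu": "[parameters('cpus')]",
                  "memoryInGB": "[parameters('memory')]"
                }
              }
            }
          }
        ],
        "osType": "Linux",
        "ipAddress": {
          "type": "Public",
          "ports": [ { "protocol": "TCP", "port": "[parameters('port')]" } ]
        }
      }
    }
  ],
  "outputs": {
    "containerIPAddress": {
      "type": "string",
      "value": "[reference(parameters('name')).ipAddress.ip]"
    }
  }
}
```

The "outputs" block is what surfaces the public IP address mentioned in the post once the deployment finishes.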


Working of One Click Deployment Buttons in loklak

Today's topic is deployment. It's called one-click deployment for a reason: developers are lazy. It's hard to do less than clicking one button, so that's our goal with the one-click buttons in loklak. For one-click buttons we only need a central build server, which is our loklak_server. Everything written here was originally based on Apache Ant, but later the Ant build was deprecated and loklak server moved to a Gradle build. We wanted to make the process of provisioning and setting up a complete infrastructure of your own, from server to continuous-integration tasks, as easy as possible. These buttons allow you to do all of that in one click.

How does it work?

You can see the one-click buttons on the README page of the loklak_server repository. These deployments rely on different files, such as scalingo.json for Scalingo, or docker-compose.yml and docker-cloud.yml for Docker Cloud, at the root of the repository, allowing them to define a few things like a name, description, logo and build environment (a Gradle build in the case of loklak server). Once you have clicked on any of the buttons, you will be redirected to the respective app and prompted with this information for you to review before confirming the fork. This will effectively fork the repository into your account. Once the repo is ready, you can click on it. You will then be asked to "activate" or "deploy" your branch, allowing it to provision actual servers and run tasks. At the same time, you will be asked to review and potentially modify a few variables that were defined in the predefined files (e.g. app.json for Heroku). These are usually things like the Git URL of the repo for loklak, or some of the details related to the cloud provider you want to use (e.g. DigitalOcean).
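As an illustration of such a predefined file, a scalingo.json could look like the fragment below. This is a hypothetical sketch, not loklak's actual manifest; Scalingo's scalingo.json format is modeled on Heroku's app.json, and the buildpack URL is an assumption.

```json
{
  "name": "loklak_server",
  "description": "Distributed Tweet Search Server",
  "logo": "https://raw.githubusercontent.com/loklak/loklak_server/master/html/images/loklak_anonymous.png",
  "env": {
    "BUILDPACK_URL": {
      "description": "Build environment for the app (Gradle in the case of loklak server)",
      "value": "https://github.com/heroku/heroku-buildpack-gradle.git"
    }
  }
}
```

The platform reads this manifest at fork time, which is exactly where the name, logo and environment variables you are asked to review come from.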
Once you confirm this last step, your branch (most probably the master branch of the loklak server repo) is activated, and the button will start provisioning and configuring your servers, along with the tasks which allow you to build and deploy your app. In most cases, you can go to the tasks/setup section and run the build task, which will fetch loklak server's code, build it and deploy it on your server, all configurations included, and you will get a public IP.

What's next

In loklak we are also introducing a new one-click "AZURE" button, so that users can also deploy loklak on the Azure platform.

Resources

About Bluemix one click: https://console.bluemix.net/docs/develop/deploy_button.html
About Scalingo one click: http://doc.scalingo.com/deployment/one-click-deploy.html
About Docker Cloud one click: https://docs.docker.com/docker-cloud/apps/deploy-to-cloud-btn/
About Heroku one click: https://devcenter.heroku.com/articles/heroku-button


One Click Deployment Button for loklak Using Heroku with Gradle Build

The one-click deploy button makes it easy for users of loklak to get their own cloud instance created and deployed in their Heroku account, to be used according to their needs. Heroku uses an app.json manifest in the code repo to figure out what add-ons, config and other deployment steps are required to make the code run. This is used to configure and deploy the app. Once you have provided the app name and clicked on the deploy button, Heroku will start deploying the loklak server to a new app on your account. When setup is complete, you can open the deployed app in your browser or inspect it in the Dashboard.

All these steps and requirements can be encoded in an app.json file and placed in a repo alongside a button that kicks off the setup with a single click. app.json is a manifest format for describing apps and specifying what their config requirements are. Heroku uses this file to figure out how code in a particular repo should be deployed on the platform. Here is loklak's app.json file, which uses the Gradle buildpack:

{
  "name": "Loklak Server",
  "description": "Distributed Tweet Search Server",
  "logo": "https://raw.githubusercontent.com/loklak/loklak_server/master/html/images/loklak_anonymous.png",
  "website": "http://api.loklak.org",
  "repository": "https://github.com/loklak/loklak_server.git",
  "image": "loklak/loklak_server:latest-master",
  "env": {
    "BUILDPACK_URL": "https://github.com/heroku/heroku-buildpack-gradle.git"
  }
}

If you are interested, you can try deploying a peer yourself and check out how simple it can be.

Resources:
Read more about the Heroku one-click deploy button: https://devcenter.heroku.com/articles/heroku-button
How to write an app.json file for your application: https://blog.heroku.com/introducing_the_app_json_application_manifest
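The button itself is plain markdown in the README. For loklak it would follow Heroku's standard badge pattern, where the template URL is simply heroku.com/deploy?template=&lt;repo URL&gt; (the exact badge line below is a sketch of that convention, not copied from loklak's README):

```markdown
[![Deploy](https://www.herokucdn.com/deploy/button.svg)](https://heroku.com/deploy?template=https://github.com/loklak/loklak_server)
```

When a user clicks the badge, Heroku reads the app.json from the referenced repository and walks them through the setup described above.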


Deploying Yacy with Docker on Different Cloud Platforms

To make deployment of Yacy easier, we now support a Docker-based installation. Following the steps below, one can successfully run Yacy on Docker. You can pull the image of Yacy from https://hub.docker.com/r/nikhilrayaprolu/yacygridmcp/ or build it on your own with the Dockerfile present at https://github.com/yacy/yacy_grid_mcp/blob/master/docker/Dockerfile

1) Pull the Docker image using the command:

docker pull nikhilrayaprolu/yacygridmcp

2) Once you have an image of yacygridmcp, you can run it by typing:

docker run <image_name>

You can access the yacygridmcp endpoint at localhost:8100.

Installation of Yacy on cloud servers:

Right now, installation of Yacy on cloud servers is documented at https://github.com/nikhilrayaprolu/yacy_grid_mcp/tree/documentation/docs/installation We have documentation for hosting Yacy on Google Cloud, AWS, Bluemix, DigitalOcean and Heroku.

Installing Yacy and all microservices with just one command:

One can also download, build and run Yacy and all its microservices (presently supported are yacy_grid_crawler, yacy_grid_loader, yacy_grid_ui, yacy_grid_parser, and yacy_grid_mcp). To build all these microservices with one command, run the bash script productiondeployment.sh:

`bash productiondeployment.sh build` will install all required dependencies and build the microservices by cloning them from their GitHub repositories.
`bash productiondeployment.sh run` will start all the services.

Right now all repositories are cloned into ~/yacy, and you can make customisations and your own changes to this code to build your own customised Yacy.
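The two Docker steps above can be wrapped in a small script. This sketch defaults to a dry run (it prints the commands instead of executing them, so it is safe to inspect without Docker installed); set EXECUTE=1 to actually run them. The port mapping to 8100 matches the endpoint mentioned above.

```shell
#!/usr/bin/env bash
# Sketch of the Docker-based yacy_grid_mcp setup described above.
# By default this only PRINTS the commands (dry run); set EXECUTE=1
# to really pull and start the container.
set -eu

IMAGE="nikhilrayaprolu/yacygridmcp"
PORT=8100   # yacy_grid_mcp endpoint: localhost:8100

run() {
  if [ "${EXECUTE:-0}" = "1" ]; then
    "$@"          # actually execute the command
  else
    echo "$@"     # dry run: just show it
  fi
}

run docker pull "$IMAGE"
# Publish the MCP port so the endpoint is reachable on localhost:8100
run docker run -d -p "$PORT:$PORT" "$IMAGE"
```

Mapping the container port explicitly (rather than the bare `docker run <image_name>` above) is what makes the localhost:8100 endpoint reachable from the host.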
The related PRs of this work are https://github.com/yacy/yacy_grid_mcp/pull/21, https://github.com/yacy/yacy_grid_mcp/pull/20 and https://github.com/yacy/yacy_grid_mcp/pull/13

Resources:
Docker documentation: https://docs.docker.com/
Deployment to Google Cloud: https://engineering.hexacta.com/automatic-deployment-of-multiple-docker-containers-to-google-container-engine-using-travis-e5d9e191d5ad
Writing a bash script: http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO.html


Auto Deployment of SUSI Server using Kubernetes on Google Cloud Platform

Recently, we auto-deployed SUSI Server on Google Cloud Platform using Kubernetes and Docker images after each commit in the GitHub repo, with the help of Travis continuous integration. Basically, whenever a new commit is added to the repo, the Travis build creates the Docker image of the server and then uses it to deploy the server on Google Cloud Platform. We use Kubernetes for deployment since it makes it very easy to scale the project up when traffic on the server increases, and Docker because with it we can easily build images which can then be used to update the deployment. This schematic will make the procedure more clear.

Prerequisites

You must be signed in to your Google Cloud Console, have enabled billing, and have credits left in your account.
You must have a Docker account and a repo in it. If you don't have one, make it now.
You should have enabled Travis on your repo and have a .travis.yml file in your repo.
You must already have a project in Google Cloud Console. Make a new one if you don't have one.

Pre-deployment steps

You will need to do some work on Google Cloud Platform before actually starting the auto-deployment process:

Creating a new cluster.
Adding and formatting a persistent disk.
Adding a Persistent Volume Claim (PVC).
Labeling a node as primary.

Check out this documentation on how to do that. It may help.

Implementation

(Image source: https://cloud.google.com/solutions/continuous-delivery-with-travis-ci)

1. The first step is simply to add this line to the .travis.yml file and create the empty deploy.sh file mentioned below:

after_success:
- bash kubernetes/travis/deploy.sh

Now we'll move line by line, adding commands to the empty deploy.sh file that we created in the previous step.

2. The next step is to remove obsolete Google Cloud files and install the Google Cloud SDK and the kubectl command. Use the following lines to do that.
echo ">>> Removing obsolete gcloud files"
sudo rm -f /usr/bin/git-credential-gcloud.sh
sudo rm -f /usr/bin/bq
sudo rm -f /usr/bin/gsutil
sudo rm -f /usr/bin/gcloud

echo ">>> Installing new files"
curl https://sdk.cloud.google.com | bash;
source ~/.bashrc
gcloud components install kubectl

3. In this step you will need to download a JSON file which contains your Google Cloud credentials, then copy that file to your repo and encrypt it using Travis encryption keys. Follow this video to see how to do that: https://youtu.be/7U4jjRw_AJk

4. Now that you have added your encrypted Credentials.json file to your repo, you need to use those credentials to log in to your Google Cloud account. Use the lines below to do that.

echo ">>> Decrypting credentials and authenticating gcloud account"
# Decrypt the credentials we added to the repo using the key we added with the Travis command line tool
openssl aes-256-cbc -K $encrypted_YOUR_key -iv $encrypted_YOUR_iv -in ./kubernetes/travis/Credentials.json.enc -out Credentials.json -d
gcloud auth activate-service-account --key-file Credentials.json
export GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/Credentials.json
# add gcloud project id
gcloud config set project YOUR_PROJECT_ID
gcloud container clusters get-credentials YOUR_CONTAINER…
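The script above is truncated at the cluster-authentication step. A typical continuation tags the Docker image with the commit hash, pushes it, and rolls it out with kubectl. The sketch below illustrates that pattern only; the repository name susi/susi_server and deployment name susi-server are assumptions, not taken from SUSI's actual deploy.sh, and the Docker/kubectl commands are left as comments so the sketch runs without cluster access.

```shell
#!/usr/bin/env bash
# Hypothetical continuation of deploy.sh: tag the image with the commit,
# push it, and update the Kubernetes deployment. Names are illustrative.
set -eu

# Travis exports TRAVIS_COMMIT for every build; fall back for local runs.
TRAVIS_COMMIT="${TRAVIS_COMMIT:-local-dev}"
IMAGE="susi/susi_server:${TRAVIS_COMMIT}"   # assumed Docker Hub repo

echo ">>> Image tag for this build: $IMAGE"

# The real script would then run something like (commented out so this
# sketch is runnable without Docker or GKE credentials):
#   docker build -t "$IMAGE" .
#   docker push "$IMAGE"
#   kubectl set image deployment/susi-server susi-server="$IMAGE"
```

Tagging each image with the commit hash (instead of `latest`) is what lets `kubectl set image` trigger a rolling update on every build and makes rollbacks to a known commit trivial.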
