Build Button Resolution in Meilix

Meilix Generator is a web app with a build button which, when clicked, triggers a Travis CI build of the Meilix repository. You can even generate an ISO from your phone. However, when opening the web app on a phone, we noticed that the build button was not properly visible: the footer of the page hid the build button. This blog shows how we identified the build button issue and made it responsive for all screen sizes.

Error Rectification

There are two elements that need correction:

  • Build button
  • Footer

Build Button

<input class="btn btn-info mx-auto btn-block" id="file-upload" value="Build" required="" type="submit">

 

Here the build button acts as an input submit button.

class="form-group"> class="btn btn-info mx-auto btn-block" id="file-upload" value="Build" required="" type="submit" style="height: 50px; width: 360px; border-radius: 500px;"/>

 

We first added a form group to wrap the button inside the form. Then we added style properties to the build button to give it a certain look.

But the footer still overlapped the button, so we also needed to work on the footer.

Footer

<footer class="footer">

 

We removed the footer class and made a new div tag with the id “deployment”. Then we set the CSS for the deployment id to customise its appearance.

     #deployment {
      position: absolute;
      text-align: center;
      width: 100%;
      padding: 10px;
      bottom: 0px;
      border-top: 1px solid #bfbfbf;
      background-color: #f9f9f9;
    }

 

This made the footer and build button responsive for all screen sizes.

Reference:

HTML form tag

Input type Submit tag

How to deploy SUSI AI bots on ngrok for debugging

For production purposes, bots can be deployed on cloud services such as Heroku, Google App Engine or Amazon Web Services, or on your own data center infrastructure.

However, for development purposes you can use ngrok to provide access to your bot running in your local network. Ngrok is easy to set up and use. To learn more about it, you can refer to its documentation.

Ngrok is a handy tool and service that allows you to tunnel requests from the wide-open Internet to your local machine when it’s behind a NAT or firewall. It’s commonly used to develop web services and webhooks.

In this blog, you’ll learn how to deploy SUSI AI bots on ngrok. We’re going to demonstrate the process for the SUSI AI Kik bot; however, it is the same for the rest of the bots as well.

First of all, you need to configure a bot on Kik. To configure a Kik bot, follow the first few steps of this blog post till you get an API key for the bot that you created using Botsworth. Now, follow these steps:

  1. In order to run the SUSI AI Kik bot on ngrok, you have to make some changes to the index.js file after forking the susi_kikbot repository and cloning it to your local machine.
  2. Open index.js and change the information in the ‘bot’ object. Write the username of your Kik bot against username and the API key that you got for your bot against apiKey, both in quotes, and leave baseUrl blank for now. It will look like this: (Don’t copy the API key shown below. It’s invalid and only for demonstration purposes.)
var bot = new Bot({
    username: 'susi.ai',
    apiKey: 'b5a5238b-b764-45fe-a4c5-629fsd1851bd',
    baseUrl: ''
});
  3. Now save the file. No need to commit it.
  4. Go to https://ngrok.com/ and sign up.
  5. Next, you’ll be redirected to the setup and installation page. Follow the instructions there to set up ngrok on your local machine.
  6. Now open a terminal and use the ‘cd’ command to go to the susi_kikbot directory.
  7. While deploying on ngrok, we set the port for listening to HTTP requests. Hence, remove “process.env.PORT” from index.js, or else it will cause an error in the next step. After removing it, the line should look like this (now save the index.js file):
http.createServer(bot.incoming()).listen(8080);

Also, comment out the setInterval function in index.js.

  8. Now type the command ngrok http 8080 in the terminal. In case the ‘ngrok’ command doesn’t run, copy the ngrok file that you downloaded in step 5 and paste it in the susi_kikbot directory. Then enter ./ngrok http 8080 in the terminal.
  9. When it launches, you’ll see a status screen in the terminal that lists the forwarding addresses.

  10. Copy the “forwarding” address (https://d36f0add.ngrok.io), and paste it in front of ‘baseUrl’ in the ‘bot’ object in the index.js file. Add “/incoming” after it and save the file. Now the ‘bot’ object should look similar to this:
var bot = new Bot({
    username: 'susi.ai',
    apiKey: 'b5a5238b-b764-45fe-a4c5-629fsd1851bd',
    baseUrl: 'https://d36f0add.ngrok.io/incoming'
});
  11. Launch another terminal window and cd into the susi_kikbot directory on your local machine. Then run the node index.js command.
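
Putting the last few steps together, the two terminal sessions look roughly like this (using the directory and port from the steps above):

# Terminal 1: start the tunnel on the port the bot listens on
cd susi_kikbot
./ngrok http 8080

# Terminal 2: start the bot after updating baseUrl with the forwarding address
cd susi_kikbot
node index.js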

Congratulations! You’ve successfully deployed SUSI AI Kik bot on ngrok. Now try sending a message to your SUSI AI Kik bot.

References

Using file.io for Meilix Deployment

Meilix is developed at FOSSASIA and is deployed as a release on GitHub, but the download speed on GitHub for large files is slow. Alternatively, deployment can be done on third-party servers. transfer.sh offers a good service for a start, but it only provides reduced storage for heavy usage such as what is required for Meilix. file.io is a good alternative: it has API features which can be used in .travis.yml to deploy Meilix. For the time being, we are deploying the generator branch to file.io.

Changes made in the .travis.yml

We need to edit the .travis.yml to deploy the artifact on file.io

deploy:
  provider: script
  script: curl -F "file=@/home/travis/$(image_name)" https://file.io
  on:
    branch: generator

We need to edit the deploy attribute in .travis.yml to get the deployment done on file.io. The script runs curl with the path of the file which needs to be uploaded to file.io. Then the branch on which the deployment needs to be done is provided.

Output:

Travis executes the command to deploy the application.

{"success":true,"key":"5QBEry","link":"https://file.io/5QBEry","expiry":"14 days"}

success: true means the artifact was successfully deployed.

key and link: give the link from which the ISO can be downloaded.

expiry: tells the number of days after which the ISO will be deleted; by default it is set to 14 days.

We can manually pass the expires parameter to set a different expiry time.

script: curl -F "file=@/home/travis/$(image_name)" https://file.io/?expires=1w

This will set the expiry time to seven days. A suffix of w stands for weeks, m for months, and y for years.
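
You can try the same API from your own terminal before touching Travis; a quick hand test with a hypothetical file name looks like this:

# Upload a local file with a one-week expiry; file.io replies with a JSON object like the one shown above
curl -F "file=@test.iso" "https://file.io/?expires=1w"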

file.io solved the most important issue of Meilix deployment, and this approach can be used in several other FOSSASIA projects for deployment purposes.

References:

Deploying BadgeYaY with Docker on Docker Cloud

We already had a Dockerfile present in the repository, but there were problems in many lines of code. I studied Docker, learned how it is deployed, and am now going to explain how I deployed BadgeYaY on Docker Cloud.

To make deploying BadgeYaY easier, we now support a Docker-based installation.

Before we start to deploy, let’s have a quick brief on what Docker is and how it works.

What is Docker?

Docker is an open-source technology that allows you to create, deploy, and run applications using containers. Docker allows you to deploy technologies with many underlying components that must be installed and configured in a single, containerized instance. It makes it easier to create and deploy applications in an isolated environment.

Now, let’s start with how to deploy on Docker Cloud:

Step 1 – Installing Docker

Get the latest version of Docker. See the official site for installation info for your platform.
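
On a Debian or Ubuntu machine, for instance, the Docker convenience script is one way to do this (check the official documentation for the recommended method on your platform):

# Download and run the convenience script from the official Docker site
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh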

Step 2 – Create Dockerfile

With Docker, we can just grab a portable Python runtime as an image, no installation necessary. Then, our build can include the base Python image right alongside our app code, ensuring that our app, its dependencies, and the runtime, all travel together.

These portable images are defined by something called a Dockerfile.

A Dockerfile contains all the commands a user could call on the command line to assemble an image. Here is the Dockerfile of BadgeYaY:

# The FROM instruction initializes a new build stage and sets the Base Image for subsequent instructions.
FROM python:3.6

# We copy just the requirements.txt first to leverage Docker cache
COPY ./app/requirements.txt /app/


# The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
WORKDIR /app


# The RUN instruction will execute any commands in a new layer on top of the current image and commit the results.
RUN pip install -r requirements.txt


# The COPY instruction copies new files.
COPY . /app


# An ENTRYPOINT allows you to configure a container that will run as an executable.
ENTRYPOINT [ "python" ]

# The main purpose of a CMD is to provide defaults for an executing container.
CMD [ "main.py" ]

 

Step 3 – Build New Docker Image

sudo docker build -t badgeyay:latest .

 

When the command completes successfully, we can check the new image with the docker command below:

     sudo docker images

 

Step 4 – Run the app

Let’s run the app in the background, in detached mode:

 sudo docker run -d -p 5000:5000 badgeyay

 

We get the long container ID for our app and are then kicked back to our terminal. Our container is running in the background. Now use docker container stop to end the process, using the CONTAINER ID, like so:

 

docker container stop 1fa4ab2cf395
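
The CONTAINER ID comes from docker ps, which is also a quick way to confirm the app is actually up before stopping it (the port mapping is the one used in the run command above):

sudo docker ps                 # lists running containers with their IDs and mapped ports
curl http://localhost:5000     # the BadgeYaY app should respond on the mapped port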

 

Step 5 – Publish the app.

Log in to the Docker public registry on your local machine.

docker login

 

Upload your tagged image to the repository:

docker push username/repository:tag
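
The push assumes the local image has already been tagged with your Docker Hub username; if not, a tag step like this comes first (username/repository:tag are placeholders):

docker tag badgeyay:latest username/repository:tag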

 

From now on, we can use docker run and run our app on any machine. No matter where docker run executes, it pulls your image, along with Python and all the dependencies from requirements.txt, and runs your code. It all travels together in a neat little package, and the host machine doesn’t have to install anything but Docker to run it.
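
For example, on any other machine with Docker installed (same placeholder image name as above):

docker run -d -p 5000:5000 username/repository:tag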

Docker Cloud

Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying (and redeploying) services created from images.

In BadgeYaY, we also have a Deploy button which directly deploys to Docker Cloud with a single click.

The related PR of this work is https://github.com/fossasia/badgeyay/pull/401 .

Resources:

  • Docker documentation: Link
  • Get Started With Docker: Link

Installing Susper Search Engine and Deploying it to Heroku

Susper is a decentralized search engine that uses the peer-to-peer system YaCy and Apache Solr to crawl and index search results.

Search results are displayed using the Solr server which is embedded into YaCy. All search results must be provided by a YaCy search server, which includes a Solr server with a specialized JSON result writer. When a search request is made in one of the search templates, an HTTP request is made to YaCy. The response is JSON because it can be parsed much more easily than XML in JavaScript.

In this blog, we will talk about how to install Susper search engine locally and deploying it to Heroku (A cloud application platform).

How to clone the repository

Sign up / Login to GitHub and head over to the Susper repository. Then follow these steps.

  1. Go ahead and fork the repository
https://github.com/fossasia/susper.com

2.   Get the clone of the forked version on your local machine using

git clone https://github.com/<username>/susper.com.git

3. Add upstream to synchronize repository using

git remote add upstream https://github.com/fossasia/susper.com.git
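
With upstream configured, the fork can later be synchronized with the usual fetch-and-merge flow:

git fetch upstream
git checkout master
git merge upstream/master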

Getting Started

Setting up the Susper search application locally basically consists of the following steps:

  1. First, we will need to install angular-cli by using the following command:
npm install -g @angular/cli

2. After installing angular-cli we need to install our required node modules, so we will do that by using the following command:

npm install

3. Deploy locally by running this

ng serve

Go to localhost:4200 where the application will be running locally.

How to Deploy Susper Search Engine to Heroku :

  1. We need to install Heroku on our machine. Type the following in your Linux terminal:
wget -O- https://toolbelt.heroku.com/install-ubuntu.sh | sh

This installs the Heroku Toolbelt on your machine to access Heroku from the command line.

  2. Create a Procfile inside the root directory and write
web: ng serve
  3. Next, we need to log in to our Heroku server (assuming that you have already created an account).

Type the following in the terminal:

heroku login

Enter your credentials and login.

  4. Once logged in, we need to create a space on the Heroku server for our application. This is done with the following command:
heroku create
  5. Add the Node.js buildpack to the app:
heroku buildpacks:add --index 1 heroku/nodejs
  6. Then we deploy the code to Heroku:
git push heroku master
git push heroku yourbranch:master # If you are in a different branch other than master
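
Once the push finishes, the app can be opened and inspected from the same terminal (both commands are part of the standard Heroku CLI):

heroku open          # opens the deployed app in the browser
heroku logs --tail   # streams the dyno logs if something goes wrong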

Resources

Installing the Loklak Search and Deploying it to Surge

The Loklak search creates a website using the Loklak server as a data source. The goal is to get a search site that offers timeline search as well as custom media search, account and geolocation search.

In order to run the service, you can use the API of http://api.loklak.org or install your own Loklak server data storage engine. Loklak_server is a server application which collects messages from various social media tweet sources, including Twitter. The server contains a search index and a peer-to-peer index sharing interface. All messages are stored in an elasticsearch index.

The site of this repo is deployed on the GitHub gh-pages branch and automatically deployed here: http://loklak.org

In this blog, we will talk about how to install Loklak_Search locally and deploying it to Surge (Static web publishing for Front-End Developers).

How to clone the repository

Sign up / Login to GitHub and head over to the Loklak_Search repository. Then follow these steps.

  1. Go ahead and fork the repository
https://github.com/fossasia/loklak_search
  2. Get the clone of the forked version on your local machine using
git clone https://github.com/<username>/loklak_search.git
  3. Add upstream to synchronize repository using
git remote add upstream https://github.com/fossasia/loklak_search.git

Getting Started

Setting up the Loklak search application locally basically consists of the following steps:

  1. First, we will need to install angular-cli by using the following command:
npm install -g @angular/cli

2. After installing angular-cli we need to install our required node modules, so we will do that by using the following command:

npm install

3. Deploy locally by running this

ng serve

Go to localhost:4200 where the application will be running locally.

How to Deploy Loklak Search on Surge :

Surge is the technology which publishes or generates the static web-page demo link, which makes it easier for the developer to deploy their web-app. There are a lot of benefits of using surge over generating demo link using GitHub pages.

  1. We need to install surge on our machine. Type the following in your Linux terminal:
npm install --global surge

This installs the Surge on your machine to access Surge from the command line.

  2. In your project directory just run
surge
  3. After this, it will ask you for three parameters, namely
Email
Password
Domain

After specifying all these three parameters, the deployment link with the respective domain is generated.
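
Since surge publishes static files, for an Angular app such as loklak_search the build output in dist/ is what actually gets deployed. In practice the flow looks roughly like this (the domain below is only an example):

ng build --prod
surge ./dist my-loklak-search-demo.surge.sh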

Auto deployment of Pull Requests using Surge :

To implement the feature of auto-deployment of pull requests using surge, one can follow these steps:

  • Create a pr_deploy.sh file
  • The pr_deploy.sh file will be executed only after the Travis CI build succeeds, i.e. when Travis CI passes, by using the command bash pr_deploy.sh
#!/usr/bin/env bash
if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
echo "Not a PR. Skipping surge deployment."
exit 0
fi
npm i -g surge
# SURGE_LOGIN is the email of the dummy surge account (placeholder address shown here)
export SURGE_LOGIN=dummy@example.com
# Token of a dummy account
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206
export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-fossasia-LoklakSearch.surge.sh
surge --project ./dist --domain $DEPLOY_DOMAIN;

Here, Travis CI first installs surge locally with npm i -g surge, and then we export the environment variables SURGE_LOGIN, SURGE_TOKEN and DEPLOY_DOMAIN.

Now, execute the pr_deploy.sh file from .travis.yml by using the command bash pr_deploy.sh.

Resources

Auto Deployment of Pull Requests on Susper using Surge Technology

Susper is being improved every day. Following every best practice in the organization, each pull request includes a working demo link of the fix. Currently, the demo link for Susper can be generated by using GitHub pages by running these simple commands – ng build and npm run deploy. Sometimes this process on slow-internet connectivity takes up to 30 mins to generate a working demo link of the fix.

Surge is the technology which publishes or generates the static web-page demo link, which makes it easier for the developer to deploy their web-app. There are a lot of benefits of using surge over generating demo link using GitHub pages:

  • As soon as the pull request passes Travis CI, the deployment link is generated. It has been set up such that no extra terminal commands are required.
  • Faster loading compared to a deployment link generated using GitHub pages.

Surge can be used to deploy only static web pages. Static web pages are websites that contain fixed content.

To implement the feature of auto-deployment of pull requests using surge, one can follow these steps:

  • Create a pr_deploy.sh file which will be executed during Travis CI testing.
  • The pr_deploy.sh file is executed after success, i.e. when Travis CI passes, by using the command bash pr_deploy.sh.

The pr_deploy.sh file for Susper looks like this:

#!/usr/bin/env bash
if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
echo "Not a PR. Skipping surge deployment."
exit 0
fi
ng build --prod

npm i -g surge

export SURGE_LOGIN=test@example.co.in
# Token of a dummy account
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206

export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-fossasia-susper.surge.sh
surge --project ./dist --domain $DEPLOY_DOMAIN;

 

Once the pr_deploy.sh file has been created, execute it from .travis.yml by using the command bash pr_deploy.sh.

In this way, we have integrated the surge technology for auto-deployment of the pull requests in Susper.

References:

Deploy to Azure Button for loklak

In this blog post, I am going to tell you about yet another deployment method for loklak, which is easy and quick with just one click. Deploying to Azure Websites from a Git repository just got a little easier with the Deploy to Azure Button. Simply place the button in README.md with a link to the loklak repository, and users who click on it will be directed to a streamlined deployment process. If we want to do something more advanced and customize this behavior, we can add an ARM template called “azuredeploy.json” at the root of the repository, which will present users with different inputs and configure the services as specified.

I’m going to walk you through a workflow that I used to test the templates before checking them into my repo, as well as describe some of the special behaviors of the “Deploy to Azure” site.

Adding a button

To add a deployment button, insert the following markdown to your README.md file:

[![Deploy to Azure](https://azuredeploy.net/deploybutton.svg)](https://deploy.azure.com/?repository=https://github.com/loklak/loklak_server)

How it works

When a user clicks on the button, a “referrer” header is sent to azuredeploy.net which contains the location of the Git repository of loklak_server to deploy from.

An Example Template

This is a blank template which shows how Azure structures its inputs.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
  },
  "variables": {
  },
  "resources": [
  ],
  "outputs": {
  }
}

Following the above template, in the case of the loklak server the parameters used are name, image (i.e., the Docker image), port, the number of CPUs to be utilized, and space (i.e., the memory required).

In the resources section we use a container; the type of the container will be

"type": "Microsoft.ContainerInstance/containerGroups",

 

And as output, we expect a public IP address to access the azure cloud instance created by us.

Everything under the root “parameters” property will be inputs into our template. Then these parameter values feed into the resources defined later in the template with the “[parameters(‘paramName’)]” syntax.
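
One way to test changes to azuredeploy.json before committing them is to deploy the template by hand with the Azure CLI; a rough sketch, assuming the CLI is installed (the resource group name and location are placeholders):

az group create --name loklak-test --location westeurope
az group deployment create --resource-group loklak-test --template-file azuredeploy.json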

Try the “Deploy to Azure” Button here:



Resources

Working of One Click Deployment Buttons in loklak

Today’s topic is deployment. It’s called one-click deployment for a reason: developers are lazy. It’s hard to do less than click one button, so our goal is to make use of one-click buttons in loklak.

For one-click buttons we only need a central build server, which is our loklak_server. Everything written here was originally based on Apache Ant, but the Ant build was later deprecated and the loklak server switched to a Gradle build. We wanted to make the process of provisioning and setting up a complete infrastructure of your own, from server to continuous integration tasks, as easy as possible. These buttons allow you to do all of that in one click.

How does it work?

You can see the one click buttons in the README page of loklak_server repository.

These repositories may include different files, like scalingo.json for Scalingo, or docker-compose.yml and docker-cloud.yml for Docker Cloud, at their root, allowing them to define a few things like a name, description, logo and build environment (a Gradle build in the case of the loklak server). Once you’ve clicked on any of the buttons, you will be redirected to the respective app and prompted with this information for you to review before confirming the fork.

This will effectively fork the repository in your account. Once the repo is ready, you can click on it. You will then be asked to “activate” or “deploy” your branch, allowing it to provision actual servers and run tasks. At the same time, you will be asked to review and potentially modify a few variables that were defined in the predefined files (e.g. app.json for Heroku) of the apps. These are usually things like the Git URL of the repo for loklak, or some of the details related to the cloud provider you want to use (e.g. DigitalOcean).

Once you confirm this last step, your branch (most probably the master branch of the loklak server repo) is activated, and the button will start provisioning and configuring your servers, along with the tasks which may allow you to build and deploy your app. In most cases, you can go to the tasks/setup section and run the build task that will fetch loklak server’s code, build it and deploy it on your server, all configurations included, and you will get a public IP.

What’s next

In loklak we are also introducing a new one-click “AZURE” button, so users can also start deploying loklak on the Azure platform.

Resources

One Click Deployment Button for loklak Using Heroku with Gradle Build

The one-click deploy button makes it easy for the users of loklak to get their own cloud instance created and deployed in their Heroku account, to be used as they see fit. Heroku uses an app.json manifest in the code repo to figure out what add-ons, config and other deployment steps are required to make the code run. This is used to configure and deploy the app.

Once you have provided the app name and clicked on the deploy button, Heroku will start deploying the loklak server to a new app on your account:

When setup is complete, you can open the deployed app in your browser or inspect it in Dashboard.

All these steps and requirements can now be encoded in an app.json file and placed in a repo alongside a button that kicks off the setup with a single click.

app.json is a manifest format for describing apps and specifying what their config requirements are. Heroku uses this file to figure out how code in a particular repo should be deployed on the platform. Here is loklak’s app.json file, which uses the Gradle buildpack:

{
	"name": "Loklak Server",
	"description": "Distributed Tweet Search Server",
	"logo": "https://raw.githubusercontent.com/loklak/loklak_server/master/html/images/loklak_anonymous.png",
	"website": "http://api.loklak.org",
	"repository": "https://github.com/loklak/loklak_server.git",
	"image": "loklak/loklak_server:latest-master",
	"env": {
		"BUILDPACK_URL": "https://github.com/heroku/heroku-buildpack-gradle.git"
	}
}
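
For comparison, what the button automates can also be done by hand with the Heroku CLI and the same Gradle buildpack (the app name below is a placeholder):

git clone https://github.com/loklak/loklak_server.git && cd loklak_server
heroku create my-loklak-peer --buildpack https://github.com/heroku/heroku-buildpack-gradle.git
git push heroku master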

 

If you are interested, you can try deploying the peer from here itself. Check out how simple it can be to deploy.

Deploy button:

Deploy

Resources: