Deploying Angular App Susper on GitHub Pages

What are the issues we had with automatic deployment of Susper? The Susper project, which is built on Angular, is kept on the master branch of the repository. Whenever any PR is merged, Travis CI runs the build and checks for errors. After a successful check it triggers the deployment script in Susper (deploy.sh). Here is the deployment script:

```bash
#!/bin/bash
# SOURCE_BRANCH & TARGET_BRANCH store the names of the susper.com GitHub branches.
SOURCE_BRANCH="master"
TARGET_BRANCH="gh-pages"

# Pull requests and commits to other branches shouldn't try to deploy.
if [ "$TRAVIS_PULL_REQUEST" != "false" -o "$TRAVIS_BRANCH" != "$SOURCE_BRANCH" ]; then
    echo "Skipping deploy; The request or commit is not on master"
    exit 0
fi

# Store some useful information in variables.
REPO=`git config remote.origin.url`
SSH_REPO=${REPO/https:\/\/github.com\//git@github.com:}
SHA=`git rev-parse --verify HEAD`

# Decrypt `deploy.enc`.
openssl aes-256-cbc -k "$super_secret_password" -in deploy.enc -out deploy_key -d
# Give `deploy_key` read and write permission.
chmod 600 deploy_key
eval `ssh-agent -s`
ssh-add deploy_key

# Clone the repository into the repo/ directory,
# creating the gh-pages branch if it doesn't exist, else switching to it.
git clone $REPO repo
cd repo
git checkout $TARGET_BRANCH || git checkout --orphan $TARGET_BRANCH
cd ..

# Set up the username and email.
git config user.name "Travis CI"
git config user.email "$COMMIT_AUTHOR_EMAIL"

# Clean up the old repo's gh-pages branch, except the CNAME file and 404.html.
find repo/* ! -name "CNAME" ! -name "404.html" -maxdepth 1 -exec rm -rf {} \; 2> /dev/null
cd repo

# Commit the clean-up, then move back to SOURCE_BRANCH.
git add --all
git commit -m "Travis CI Clean Deploy : ${SHA}"
git checkout $SOURCE_BRANCH

# Build the current push or PR, then move to TARGET_BRANCH and move the dist folder.
npm install
ng build --prod --aot
mv susper.xml dist/
mv 404.html dist/
git checkout $TARGET_BRANCH
mv dist/* .

# Stage the new build and amend it into the clean-up commit.
git add .
git commit --amend --no-edit --allow-empty

# Actual push to the gh-pages branch via Travis.
git push --force $SSH_REPO $TARGET_BRANCH
```

This script runs after the project has been checked successfully (the comments explain what each command does). The repository is cleaned, except for a few files like CNAME and 404.html, and a commit is made. Then Susper's build artifacts are generated in the dist folder, all the files from dist are moved to gh-pages, and the changes are amended. The GitHub Pages engine then uses the build artifacts to generate the static site at susper.com.

But we faced a weird problem: we were unable to see changes from the latest commits on susper.com. The number of commits slowly increased, but the changes still did not show up on susper.com.

What was the error in the deployment of Susper? All the changes were being committed to the gh-pages branch properly, but the GitHub Pages engine was unable to process the branch to build the static site. Because of this the site was not being updated. The owner is notified of failing builds from the gh-pages engine via email. The failing builds on gh-pages can be seen here: https://github.com/fossasia/susper.com/commits/gh-pages Between 21 May…
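The guard at the top of the script is the part that decides whether anything is deployed at all. As a quick sanity check, the skip decision can be exercised on its own with hand-set variables (the values below are made up for the dry run):

```shell
# Simulated Travis environment for a pull-request build (made-up values).
TRAVIS_PULL_REQUEST="42"
TRAVIS_BRANCH="master"
SOURCE_BRANCH="master"

# Same test as deploy.sh: anything that is a PR, or not on master, is skipped.
if [ "$TRAVIS_PULL_REQUEST" != "false" -o "$TRAVIS_BRANCH" != "$SOURCE_BRANCH" ]; then
  DECISION="skip"
else
  DECISION="deploy"
fi
echo "$DECISION"   # a PR build is skipped even though the branch is master
```

Only a direct push to master, where Travis sets TRAVIS_PULL_REQUEST to the literal string "false", reaches the deployment steps.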


Parallelizing Builds In Travis CI

The Badgeyay project is now divided into two parts: a front-end in EmberJS and a back-end with a REST API programmed in Python. One of the challenging jobs is supporting this uncoupled architecture: the tests for the front-end and the back-end, i.e. two different languages, should run on isolated instances by making use of isolated parallel builds. In this blog, I'll be discussing how I configured Travis CI to run the tests in parallel, isolated builds in Badgeyay in my Pull Request. First let's understand what a parallel Travis CI build is and why we need it. Then we will move on to configuring the travis.yml file to run tests in parallel. Let's get started and understand it step by step.

Why a parallel Travis CI build? Integration test suites tend to test more complex situations through the whole stack, which incorporates the front-end and the back-end; they likewise tend to be the slowest part, requiring several minutes to run, sometimes even up to 30 minutes. To accelerate a test suite like that, we can split it up into several parts using Travis's build matrix feature. Travis will derive the build matrix from environment variables and schedule two builds to run.

Now our objective is clear: we have to configure travis.yml to build in parallel. Our project requires two buildpacks, Python and node_js, and running the build jobs for both of them in parallel speeds things up by a considerable amount. It is possible to run several languages in one .travis.yml file using the matrix include feature. Below is the travis.yml file for the Badgeyay project that runs the build jobs in parallel:

```yaml
sudo: required
dist: trusty
# Check different combinations of build flags, which divides the build into "jobs".
matrix:
  # Helps to run different languages in one .travis.yml file
  include:
    # First job: Python.
    - language: python3
      apt:
        packages:
          - python-dev
      python:
        - 3.5
      cache:
        directories:
          - $HOME/backend/.pip-cache/
      before_install:
        - sudo apt-get -qq update
        - sudo apt-get -y install python3-pip
        - sudo apt-get install python-virtualenv
      install:
        - virtualenv -p python3 ../flask_env
        - source ../flask_env/bin/activate
        - pip3 install -r backend/requirements/test.txt --cache-dir
      before_script:
        - export DISPLAY=:99.0
        - sh -e /etc/init.d/xvfb start
        - sleep 3
      script:
        - python backend/app/main.py >> log.txt 2>&1 &
        - python backend/app/main.py > /dev/null &
        - py.test --cov ../ ./backend/app/tests/test_api.py
      after_success:
        - bash <(curl -s https://codecov.io/bash)
    # Second job: Node.js.
    - language: node_js
      node_js:
        - "6"
      addons:
        chrome: stable
      cache:
        directories:
          - $HOME/frontend/.npm
      env:
        global:
          # See https://git.io/vdao3 for details.
          - JOBS=1
      before_install:
        - cd frontend
        - npm install
        - npm install -g ember-cli
        - npm i eslint-plugin-ember@latest --save-dev
        - npm config set spin false
      script:
        - npm run lint:js
        - npm test
```

Now, we have added travis.yml and pushed it to the project repo. Here is the screenshot of Travis CI passing after the parallel build jobs. The related PR for this work is https://github.com/fossasia/badgeyay/pull/512

Resources: Travis CI documentation -…
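To see why the matrix helps, note that the two jobs are completely independent of each other. A rough single-machine sketch of that independence, with placeholder functions standing in for the real Python and Node test suites, looks like this:

```shell
# Placeholder stand-ins for the two isolated Travis jobs (not the real suites).
run_python_job() { echo "backend: tests passed"; }
run_node_job() { echo "frontend: tests passed"; }

# Travis schedules these on separate VMs; the overall build passes only
# when every job in the matrix passes.
run_python_job && run_node_job && BUILD_STATUS="passed"
echo "build: $BUILD_STATUS"
```

On Travis the two jobs run at the same time, so the wall-clock time of the build is roughly the slower of the two, not their sum.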


Setting up Codecov in Badgeyay

BadgeYaY already has Travis CI and Codacy to test code quality on every Pull Request, but there was no support for testing code coverage in the repository against every Pull Request. So I decided to set up Codecov to test the code coverage. In this blog post, I'll be discussing how I set up Codecov in BadgeYaY in my Pull Request.

First, let's understand what Codecov is and why we need it. For that we have to first understand what code coverage is; then we will move on to how to add Codecov with the help of Travis CI. Let's get started and understand it step by step.

What is code coverage? Code coverage is a measurement used to express which lines of code were executed by a test suite. We use three primary terms to describe each line executed:

- hit indicates that the source code was executed by the test suite.
- partial indicates that the source code was not fully executed by the test suite; there are remaining branches that were not executed.
- miss indicates that the source code was not executed by the test suite.

Coverage is the ratio of hit / (hit + partial + miss). A code base that has 5 lines executed by tests out of 12 total lines will receive a coverage ratio of 41%. In BadgeYaY, code coverage is 100%.

How does Codecov help with code coverage? Codecov focuses on integration and promoting healthy pull requests. It delivers, or "injects", coverage metrics directly into the modern workflow to promote more code coverage, especially in pull requests where new features and bug fixes commonly occur. I am listing the top 5 Codecov features:

- Browser Extension
- Pull Request Comments
- Commit Status
- Merging Reports
- Flags, e.g. #unittests vs #functional

We can change the configuration of how Codecov processes reports and expresses coverage information. Let's see how we configure it for BadgeYaY by integrating it with Travis CI; Codecov generally works well with Travis CI.

With the one line

```bash
bash <(curl -s https://codecov.io/bash)
```

the code coverage can now be easily reported. Add a script for testing:

```
"scripts": {
    - nosetests app/tests/test.py -v --with-coverage
}
```

Here is the relevant part of travis.yml from the BadgeYaY project repository:

```yaml
script:
  - python app/main.py >> log.txt 2>&1 &
  - nosetests app/tests/test.py -v --with-coverage
  - python3 -m pyflakes
after_success:
  - bash <(curl -s https://codecov.io/bash)
```

Let's have a look at codecov.yml to check the exact configuration that I have used for BadgeYaY:

```yaml
codecov:
  # yes: will delay sending notifications until all ci is finished
  notify:
    require_ci_to_pass: yes
coverage:
  # how many decimal places to display in the UI: 0 <= value <= 4
  precision: 2
  # how coverage is rounded: down/up/nearest
  round: down
  # custom range of coverage colors from red -> yellow -> green
  range: "70...100"
  status:
    # measuring the overall project coverage
    project: yes
    # pull requests only: this commit status will measure the entire pull request's coverage diff
```

Checking…
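The coverage ratio from the definition above is easy to reproduce by hand. Here is the 5-of-12 example from the text, using integer shell arithmetic, which truncates just like the `round: down` setting:

```shell
# hit / (hit + partial + miss) for a 12-line file with 5 lines executed.
HIT=5
PARTIAL=0
MISS=7
COVERAGE=$(( 100 * HIT / (HIT + PARTIAL + MISS) ))
echo "coverage: ${COVERAGE}%"   # 500 / 12 truncates to 41
```

With `round: nearest` instead, the same numbers would display as 42%, since 5/12 is 41.67%.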


Automatically deploy SUSI Web Chat on surge after Travis passes

We have been using surge from the very beginning of the development of the SUSI web chat and SUSI skill CMS projects. We used surge to provide preview links for pull requests. Surge is a really easy tool to use: we can deploy our static web pages really easily and quickly. But if a user had to change something in a pull request, they had to deploy to surge again and update the link. By connecting this operation with Travis CI we can minimise the rework. We can embed the deployment commands inside travis.yml and tell Travis to make a preview link (surge deployment) if the test cases pass, like below. This is the travis.yml file:

```yaml
sudo: required
dist: trusty
language: node_js
node_js:
  - 6
script:
  - npm test
after_success:
  - bash ./surge_deploy.sh
  - bash ./deploy.sh
cache:
  directories:
    - node_modules
branches:
  only:
    - master
```

The surge deployment commands are inside the surge_deploy.sh file. In it we have to check whether the build is for a pull request. We can do that like this:

```bash
if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
    echo "Not a PR. Skipping surge deployment"
    exit 0
fi
```

Then we have to install surge in the environment, install all the npm packages, and run the build:

```bash
npm i -g surge
npm install
npm run build
```

Since there is an issue with navigating to child routes, we take a copy of the index.html file and name it 404.html:

```bash
cp ./build/index.html ./build/404.html
```

Then set two environment variables for your surge email address and surge token:

```bash
export SURGE_LOGIN=fossasiasusichat@example.com
# surge token (run 'surge token' to get it)
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206
```

Now we have to build the surge deployment URL (domain). It should be unique, so we build a URL that contains the pull request number:

```bash
export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-susi-web-chat.surge.sh
surge --project ./build/ --domain $DEPLOY_DOMAIN;
```

Since all the static content generated by the build process is in the build folder, we tell surge to serve the static HTML files from there. Now make a pull request: you will find the deployment link in the Travis CI report after Travis passes. Expand the output of surge_deploy.sh and you will find the deployment link as we defined it in the surge_deploy.sh file.

References:
- Integrating with Travis CI - https://surge.sh/help/integrating-with-travis-ci
- React Routes to Deploy 404 page on gh-pages and surge - https://blog.fossasia.org/react-routes-to-deploy-404-page-on-gh-pages-and-surge/
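As a note on the domain scheme used in surge_deploy.sh: because the domain embeds the pull request number, every PR gets its own stable preview URL. A quick dry run of that interpolation (the PR number here is made up):

```shell
# Made-up PR number standing in for the value Travis exports.
TRAVIS_PULL_REQUEST=123
DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-susi-web-chat.surge.sh
echo "$DEPLOY_DOMAIN"
```

Re-running the deployment for the same PR publishes to the same domain, so the preview link in the PR conversation never goes stale.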


How to Send a Script as Variable to the Meilix ISO with Travis and Meilix Generator

We wanted to add more features to the Meilix Generator web app to be able to customize the Meilix ISO with more features. We first thought of sending every customization we want to apply as a different variable and then using the scripts from the Meilix Generator repo to generate the ISO, but that idea was bad: many variables would have to be created and maintained on both Heroku and Travis CI, and they would keep growing with every feature added to the web app. So we thought of a better idea: generating one combined script in the web app for all the features to be applied to the ISO and sending it as a single variable to Travis CI.

Now another problem was how to send a script as a variable, since JSON does not support the special characters inside the generated script. We tried escaping the special characters, and the data was successfully sent to Travis CI and shown in the config, but when that variable was set as an environment variable in Travis CI, the whole value was not taken because of the spaces in the script. To eliminate that problem we encoded the variable in the app as base64, sent it to Travis CI, and used it with the following code.

To generate the variable from the script:

```python
with open('travis_script_1.sh', 'rb') as f:
    os.environ["TRAVIS_SCRIPT"] = str(base64.b64encode(f.read()))[1:]
```

For this we have to import the base64 module, open the generated script in binary mode, and encode it using base64. Using the Travis CI API we then send the variable containing the script to Travis CI to build the ISO with the script in the chroot. We were also required to make changes in Meilix to be able to decode the script and then copy it into the chroot during the ISO build:

```bash
sudo su <<EOF
echo "$TRAVIS_SCRIPT" > edit/meilix-generator.sh
mv browser.sh edit/browser.sh
EOF
```

Using the script inside the chroot:

```bash
chmod +x meilix-generator.sh browser.sh
echo "$(<meilix-generator.sh)"  # to test the file
./browser.sh
rm browser.sh
```

Resources
- Base64 Python documentation from docs.python.org
- Base64 bash tutorial from scottlinux.com by Scott Miller
- su in a script from unix.stackexchange.com, answered by Ankit
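The round trip that makes this work, encoding on the generator side and decoding on the build side, can be checked in isolation. The script content below is a stand-in, not the real generated customization script:

```shell
# A stand-in for the generated customization script.
SCRIPT='echo "hello from meilix-generator"'

# Encode as the web app does, then decode as the build would.
ENCODED=$(printf '%s' "$SCRIPT" | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 --decode)
echo "$DECODED"
```

The encoded form contains no spaces or shell metacharacters, which is exactly why the value survives the trip through JSON and the Travis CI environment intact.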


Including a Graph Component in the Remote Access Framework for PSLab

The remote-lab software of the Pocket Science Lab enables users to access their devices remotely via the Internet. It includes an API server designed with Python Flask, and a web app designed with EmberJS that allows users to access the API and carry out various tasks such as writing and executing Python scripts. In a series of blog posts, various aspects of this framework, such as remote execution of function strings, automatic deployment on various domains, and creating and submitting Python scripts to be run on the remote server, have already been explored. This blog post deals with the inclusion of a graph component in the web app that will be invoked when the user uses the `plot` command in their scripts. The jqPlot library is being used for this purpose; it is quite lightweight and has a vast set of example code.

Task list for enabling the plotting feature:

- Add a plot method to the codeEvaluator module in the API server and allow access to it by adding it to the evalGlobals dictionary
- Create an EmberJS component for handling plots
- Create a named div in the template
- Invoke the jqPlot initializer from the JS file and pass the necessary arguments and data to the jqplot instance
- Add a conditional statement to include the jqplot component whenever a plot subsection is present in the JSON object returned by the API server after executing a script

Adding a plot method to the API server: thus far, in addition to the functions supported by the sciencelab.py instance of PSLab, users had access to the print, print_, and button functions. We shall now add a plot function:

```python
def plot(self, x, y, **kwargs):
    self.generatedApp.append({
        "type": "plot",
        "name": kwargs.get('name', 'myPlot'),
        "data": [np.array([x, y]).T.tolist()]
    })
```

The X and Y datasets provided by the user are stacked in pairs because jqplot requires [x,y] pairs, not separate datasets.

We also need to add this to evalGlobals, so we shall modify the __init__ routine slightly:

```python
self.evalGlobals['plot'] = self.plot
```

Building an Ember component for handling plots. First, we'll need to install jqplot:

```bash
bower install --save jqplot
```

This must be followed by including the following files using app.import statements in ember-cli-build.js:

- bower_components/jqplot/jquery.jqplot.min.js
- bower_components/jqplot/plugins/jqplot.cursor.js
- bower_components/jqplot/plugins/jqplot.highlighter.js
- bower_components/jqplot/plugins/jqplot.pointLabels.js
- bower_components/jqplot/jquery.jqplot.min.css

In addition to the jqplot JS and CSS files, we have also included a couple of plugins we shall use later. Now we need to set up a new component:

```bash
ember g component jqplot-graph
```

Our component will accept an object as an input argument. This object will contain the various configuration options for the plot. Add the following line in templates/components/jqplot-graph.hbs:

```handlebars
<div style="solid gray 1px;" id="{{data.name}}"></div>
```

The JS file for this template must invoke the jqplot function in order to insert a complete plot into the previously defined div after it has been created. Therefore, the initialization routine must override the didInsertElement routine of the component.

components/jqplot-graph.js:

```javascript
import Ember from 'ember';

export default Ember.Component.extend({
  didInsertElement () {
    Ember.$.jqplot(this.data.name, this.data.data, {
      title: this.title,
      axes: {
        xaxis: {
          tickInterval: 1,
          rendererOptions: {
            minorTicks: 4
          }
        },
      },
      highlighter: {
        show: true,
        showLabel: true,
        tooltipAxes: 'xy',
        sizeAdjust: 9.5,
        tooltipLocation: …
```
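The [x,y] pairing that `np.array([x,y]).T.tolist()` performs can be pictured with two plain columns of numbers; `paste` zips them the same way (the sample data and /tmp paths below are just for the demonstration):

```shell
# Two separate datasets, one value per line (made-up sample data).
printf '1\n2\n3\n' > /tmp/xs.txt
printf '10\n20\n30\n' > /tmp/ys.txt

# Zip them into the x,y pairs jqplot expects, instead of separate arrays.
PAIRS=$(paste -d, /tmp/xs.txt /tmp/ys.txt)
echo "$PAIRS"
```

Each output line is one point, which is exactly the shape the jqplot data argument wants.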


PSLab Remote Lab: Automatically deploying the EmberJS WebApp and Flask API Server to different domains

The remote-lab software of the Pocket Science Lab enables users to access their devices remotely via the Internet. Its design involves an API server built with Python Flask, and a web app built with EmberJS that allows users to access the API and carry out various tasks such as writing and executing Python scripts. For testing purposes, the repository needed to be set up to deploy both the back-end as well as the web app automatically when a build passes, and this blog post deals with how this can be achieved.

Deploying the API server: the Heroku PaaS was chosen due to its ease of use with a wide range of server software, and its support for PostgreSQL databases. It can be configured to automatically deploy branches from GitHub repositories, and conditions such as the passing of a linked CI can also be included. The following screenshot shows the Heroku configuration page of an app called pslab-test1. Most of the configuration actions can also be carried out offline via the Heroku CLI.

In the above page, pslab-test1 has been set to deploy automatically from the master branch of github.com/jithinbp/pslab-remote. The "wait for CI to pass before deploy" option has been disabled since a CI has not been set up on the repository.

Files required for Heroku to deploy automatically: once the Heroku PaaS has copied the latest commit made to the linked repository, it searches the base directory for a configuration file called runtime.txt, which contains details about the language of the app and the version of the compiler/interpreter to use, and a Procfile, which contains the command to launch the app once it is ready. Since the PSLab's API server is written in Python, we also have a requirements.txt, a list of dependencies to be installed before launching the application.

Procfile:

```
web: gunicorn app:app --log-file -
```

runtime.txt:

```
python-3.6.1
```

requirements.txt:

```
gunicorn==19.6.0
flask >= 0.10.1
psycopg2==2.6.2
flask-sqlalchemy
SQLAlchemy>=0.8.0
numpy>=1.13
flask-cors>=3.0.0
```

But wait, our app cannot run yet, because it requires a PostgreSQL database, and we did not do anything to set one up. The following steps will set up a Postgres database using the heroku-cli from your command prompt.

Point the Heroku CLI to our app:

```bash
$ heroku git:remote -a pslab-test1
```

Create a Postgres database under the hobby-dev plan available for free users:

```bash
$ heroku addons:create heroku-postgresql:hobby-dev
Creating heroku-postgresql:hobby-dev on ⬢ pslab-test1... free
Database has been created and is available
 ! This database is empty. If upgrading, you can transfer
 ! data from another database with pg:copy
Created postgresql-slippery-81404 as HEROKU_POSTGRESQL_CHARCOAL_URL
Use heroku addons:docs heroku-postgresql to view documentation
```

The previous step created a database along with an environment variable HEROKU_POSTGRESQL_CHARCOAL_URL. As a shorthand, we can also refer to it simply as CHARCOAL. In order to make it our primary database, it must be promoted:

```bash
$ heroku pg:promote HEROKU_POSTGRESQL_CHARCOAL_URL
```

The database will now be available via the environment variable DATABASE_URL. Further documentation on creating and modifying Postgres databases on Heroku can be found in the articles section. At this point, if the app is…
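Once the promotion is done, application code never needs the long CHARCOAL name; it just reads DATABASE_URL. A small sketch of pulling the host part out of such a URL in shell (the URL below is a dummy in Heroku's shape, not a real credential):

```shell
# Dummy DATABASE_URL in the shape Heroku provides (not a real credential).
DATABASE_URL="postgres://user:secret@ec2-1-2-3-4.compute.amazonaws.com:5432/dbname"

# Strip the scheme and credentials to see which host the app would connect to.
HOST_AND_DB="${DATABASE_URL#postgres://*@}"
echo "$HOST_AND_DB"
```

Keeping the connection string in an environment variable, rather than in the repository, is also what lets the same code run unchanged against different databases on different deployments.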


Auto Deploying loklak Server on Google Cloud Using Travis

This is a setup for the loklak server in which we want to check in only the source files, but have the development branch's Kubernetes deployment automatically updated with the compiled output on every push, using details from the Travis build. How do we achieve it? Unix commands and shell scripts are among the best options for automating all deployment and build activities. I explored Kubernetes on Google Cloud, which can be accessed through unix commands.

1. Checking the Travis build details before deployment: first check whether the repository is loklak_server, the build is not for a pull request, and the branch is either master or development, and then decide whether to update the docker image or not. The code for these checks is as follows:

```bash
if [ "$TRAVIS_REPO_SLUG" != "loklak/loklak_server" ]; then
    echo "Skipping image update for repo $TRAVIS_REPO_SLUG"
    exit 0
fi

if [ "$TRAVIS_PULL_REQUEST" != "false" ]; then
    echo "Skipping image update for pull request"
    exit 0
fi

if [ "$TRAVIS_BRANCH" != "master" ] && [ "$TRAVIS_BRANCH" != "development" ]; then
    echo "Skipping image update for branch $TRAVIS_BRANCH"
    exit 0
fi
```

2. Setting up the tag and decrypting the credentials: for the Kubernetes deployment, each time the Travis build succeeds, the commit details from Travis are appended to the tag used for the deployment, and the gcloud credentials are decrypted from the encrypted JSON file:

```bash
openssl aes-256-cbc -K $encrypted_48d01dc243a6_key -iv $encrypted_48d01dc243a6_iv -in kubernetes/gcloud-credentials.json.enc -out kubernetes/gcloud-credentials.json -d
```

3. Install, authenticate and configure gcloud with Kubernetes: in this step, the Google Cloud SDK is installed along with kubectl:

```bash
curl https://sdk.cloud.google.com | bash > /dev/null
source ~/google-cloud-sdk/path.bash.inc
gcloud components install kubectl
```

Then authenticate Google Cloud using the above-mentioned decrypted credentials, and finally configure Google Cloud with details like the zone, project name, cluster details, number of nodes, etc.

4. Update the Kubernetes deployment: since this issue is specific to the loklak_server development branch, the script checks whether the branch is development and then updates the deployment using the following command:

```bash
if [ $TRAVIS_BRANCH == "development" ]; then
    kubectl set image deployment/server --namespace=web server=$TAG
fi
```

Conclusion: in this post we saw how to write a script such that, after each successful push and Travis build, the deployment on Kubernetes on Google Cloud is updated.

Resources:
- Read more about Kubernetes GCloud deployment here: http://thylong.com/ci/2016/deploying-from-travis-to-gce/
- Documentation available on Kubernetes: https://kubernetes.io/docs/tutorials/
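Step 2 above mentions appending the commit details from Travis to the deployment tag. A plausible sketch of how such a TAG could be built, where the tag format and all values are assumptions for illustration rather than the exact loklak script:

```shell
# Made-up Travis values for the sketch.
TRAVIS_COMMIT="a1b2c3d4e5f6a7b8"
TRAVIS_BRANCH="development"

# Branch name plus short commit hash gives a unique, traceable image tag.
SHORT_SHA=$(echo "$TRAVIS_COMMIT" | cut -c1-7)
TAG="${TRAVIS_BRANCH}-${SHORT_SHA}"
echo "$TAG"
```

Embedding the commit hash in the tag means `kubectl set image` always sees a new value, so Kubernetes rolls out the update instead of assuming the image is unchanged.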


Using Travis CI to Generate Sample Apks for Testing in Open Event Android

In the Open Event Android app we were already using Travis to push an apk of the Android app to the apk branch for easy testing after each commit to the repo. A better way to test the dynamic nature of the app would be to use the samples of different events from the Open Event repo to generate an apk for each sample. This could help us identify bugs and inconsistencies in the generator and the Android app easily. In this blog I will be talking about how this was implemented using Travis CI.

What is a CI? Continuous Integration is a DevOps software development practice where developers regularly push their code changes into a central repository. After the merge, automated builds and tests are run on the code that has been pushed. This helps developers identify bugs in the code quite easily. There are many CI services available, such as Travis, Codecov, etc. Now that we are all caught up, let's dive into the code.

Script for replacing a line in a file (configedit.sh): the main role of this script is to replace a line in the config.json file. Why do we need this? It is used to reconfigure the Api_Link in the config.json file according to our build parameters. If we want the apk for Mozilla All Hands 2017 to be built, we would use this script to replace the Api_Link in the config.json file with the one for Mozilla All Hands 2017. This is what the config.json file for the app looks like:

```json
{
  "Email": "dev@fossasia.org",
  "App_Name": "Open Event",
  "Api_Link": "https://eventyay.com/api/v1/events/6/"
}
```

We are going to replace line 4 of this file with:

```
"Api_Link": "https://raw.githubusercontent.com/fossasia/open-event/master/sample/MozillaAllHands17"
```

```bash
VAR=0
STRING1=$1
while read line
do
    ((VAR+=1))
    if [ "$VAR" = 4 ]; then
        echo "$STRING1"
    else
        echo "$line"
    fi
done < app/src/main/assets/config.json
```

The script above reads the file line by line. If it reaches line 4, it prints out the string that was passed to the script as a parameter; otherwise it just prints out the line from the file. This script prints the required file to the terminal when called but does NOT create the required file, so we redirect the output of the script into the same file, config.json. Now let's move on to the main script, which is responsible for building the apks.

Build script (generate_apks.sh). Main components of the script:

- Build the apk for the default sample, i.e. FOSSASIA17, using the build commands ./gradlew build and ./gradlew assembleRelease.
- Store all the Api_Links and apk names for which we need apks in different arrays.
- Replace the Api_Link in the JSON file found under android/app/src/main/assets/config.json using configedit.sh.
- Run ./gradlew build and ./gradlew assembleRelease to generate the apk.
- Move the generated apk from app/build/outputs/apk/ to a folder called daily where we store all the generated apks.
- Repeat this process for the other Api_Links in the array.

As of now we are generating the apks…
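The same line-4 replacement can be verified on a throwaway copy of config.json; `sed` produces the identical result to the while-read loop in configedit.sh (the /tmp path is just for the demonstration):

```shell
# Throwaway copy of config.json for the demo.
cat > /tmp/config.json <<'EOF'
{
  "Email": "dev@fossasia.org",
  "App_Name": "Open Event",
  "Api_Link": "https://eventyay.com/api/v1/events/6/"
}
EOF

# Replace line 4, exactly what configedit.sh does with its line counter.
NEW_LINK='  "Api_Link": "https://raw.githubusercontent.com/fossasia/open-event/master/sample/MozillaAllHands17"'
sed -i "4s|.*|$NEW_LINK|" /tmp/config.json
sed -n 4p /tmp/config.json
```

Unlike the loop-plus-redirect approach, `sed -i` edits the file in place, so no temporary output file is needed.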


Flask App to Upload Wallpaper On the Server for Meilix Generator

We had the problem of getting a wallpaper from the user via Meilix Generator and using that wallpaper with the Meilix build scripts to generate the ISO. So the wallpaper has to be hosted on the server and downloaded by Travis CI during the build to include it in the ISO. The solution is to render HTML templates and access the data sent by POST using the request object from Flask. redirect and url_for are used to redirect the user once the upload is done, and send_from_directory helps us host the file the user just uploaded under /uploads, from where it will be downloaded by Travis for building the ISO.

We start by creating the HTML form marked with enctype=multipart/form-data:

```html
<form action="upload" method="post" enctype="multipart/form-data">
  <input type="file" name="file"><br /><br />
  <input type="submit" value="Upload">
</form>
```

First, we need to import the required modules. The most important is werkzeug.secure_filename():

```python
import os
from flask import Flask, render_template, request, redirect, url_for, send_from_directory
from werkzeug import secure_filename
```

Now we'll define where to upload to and the file types allowed for uploading. The path to the upload directory on the server is defined in app.config, here uploads/:

```python
app.config['UPLOAD_FOLDER'] = 'uploads/'
app.config['ALLOWED_EXTENSIONS'] = set(['png', 'jpg', 'jpeg'])
```

This function checks for a valid wallpaper extension, which in this case means png, jpg or jpeg, as defined above in app.config:

```python
def allowed_file(filename):
    return '.' in filename and \
           filename.rsplit('.', 1)[1] in app.config['ALLOWED_EXTENSIONS']
```

After getting the name of the uploaded file from the user, we check that it has an allowed file type, store the name in a variable filename, and then move the file to the upload folder to save it.

The upload function checks that the file name is safe, removing unsupported characters via secure_filename(), and then moves the file from a temporary folder to the upload folder. After moving, it renames the file to "wallpaper" so that the download link is always the same; we have used that link in the Meilix build script to download the file from the server.

```python
def upload():
    file = request.files['file']
    if file and allowed_file(file.filename):
        filename = secure_filename(file.filename)
        file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
        os.rename(UPLOAD_FOLDER + filename, UPLOAD_FOLDER + 'wallpaper')
        filename = 'wallpaper'
```

At this point we have only uploaded the wallpaper and renamed the uploaded file to "wallpaper". The file cannot be accessed from outside the server; trying to do so results in a 403 error. To make it available, the uploaded file needs to be registered and then hosted using the code snippet below. We could also register uploaded_file as a build_only rule and use the SharedDataMiddleware.

```python
@app.route('/uploads/<filename>')
def uploaded_file(filename):
    return send_from_directory(app.config['UPLOAD_FOLDER'], filename)
```

The hosted wallpaper is used by Meilix in Travis CI to generate the ISO, using the download link, which stays the same for every uploaded wallpaper.

Why should we use the secure_filename() function? Just imagine someone sends the following information as the filename to your app:

```python
filename = "../../../../home/username/.sh"
```

If the number of ../ is correct and you joined this with your UPLOAD_FOLDER, the hacker might have the ability to modify…
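The extension whitelist idea behind allowed_file() translates naturally to shell if you want to pre-check files before uploading; a minimal sketch mirroring the same png/jpg/jpeg rule:

```shell
# Mirror of the Flask allowed_file() check: accept only .png/.jpg/.jpeg names.
allowed_file() {
  case "$1" in
    *.png|*.jpg|*.jpeg) return 0 ;;
    *) return 1 ;;
  esac
}

allowed_file "wallpaper.png" && echo "wallpaper.png accepted"
allowed_file "evil.sh" || echo "evil.sh rejected"
```

Like the Python version, a name with no dot at all is rejected, since the patterns all require an extension.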
