Setting up SUSI Desktop Locally for Development and Using Webview Tag and Adding Event Listeners

SUSI Desktop is a cross-platform desktop application based on Electron which presently uses chat.susi.ai as a submodule and allows users to interact with SUSI right from their desktop.

Any Electron app essentially comprises the following components:

    • Main Process (manages windows and other interactions with the operating system)
    • Renderer Process (manages the view inside a BrowserWindow)

Steps to setup development environment

      • Clone the repo locally.
$ git clone https://github.com/fossasia/susi_desktop.git
$ cd susi_desktop
      • Install the dependencies listed in package.json file.
$ npm install
      • Start the app using the start script.
$ npm start

Structure of the project

The project was restructured to keep the working environments of the Main and Renderer processes separate, which makes the codebase easier to read and debug. This is how the project is currently structured.

The root directory of the project contains a directory ‘app’ which holds our Electron application. Alongside it is a package.json file with information about the project and the modules required to build it, plus other GitHub helper files.

Inside the app directory:

  • Main – Files for managing the main process of the app
  • Renderer – Files for managing the renderer process of the app
  • Resources – Icons for the app and the tray/media files

Webview Tag

    A webview displays external web content in an isolated frame and process. It is used to load chat.susi.ai inside a BrowserWindow as follows:

    <webview src="https://chat.susi.ai/"></webview>
    

    Adding event listeners to the app

    Various electron APIs were used to give a native feel to the application.

  • Send focus to the window's WebContents when the app window gains focus.

    win.on('focus', () => {
    	win.webContents.send('focus');
    });
    
  • Display the window only once the DOM has completely loaded.

    const page = mainWindow.webContents;
    ...
    page.on('dom-ready', () => {
    	mainWindow.show();
    });
    
  • Display the window on the ‘ready-to-show’ event.

    win.once('ready-to-show', () => {
    	win.show();
    });
    

    Resources

    1. A quick article by Cameron Nokes on Medium to understand Electron’s main and renderer processes.
    2. Official documentation about the webview tag at https://electron.atom.io/docs/api/webview-tag/
    3. Read more about electron processes at https://electronjs.org/docs/glossary#process
    4. SUSI Desktop repository at https://github.com/fossasia/susi_desktop.

    Installing Query Server Search and Adding Search Engines

    The query server can be used to search a keyword/phrase on a search engine (Google, Yahoo, Bing, Ask, DuckDuckGo and Yandex) and get the results as json or xml. The tool also stores the searched query string in a MongoDB database for analytical purposes. (The search engine scraper is based on the scraper at fossasia/searss.)

    In this blog, we will talk about how to install Query-Server and implement the search engine of your own choice as an enhancement.

    How to clone the repository

    Sign up / Login to GitHub and head over to the Query-Server repository. Then follow these steps.

    1. Go ahead and fork the repository

    https://github.com/fossasia/query-server

    2. Star the repository

    3. Get the clone of the forked version on your local machine using

    git clone https://github.com/<username>/query-server.git

    4. Add upstream to synchronize repository using

    git remote add upstream https://github.com/fossasia/query-server.git

    Getting Started

    Setting up the Query-Server application basically consists of the following steps:

    1. Installing Node.js dependencies

    npm install -g bower
    
    bower install

    2. Installing Python dependencies (Python 2.7 and 3.4+)

    pip install -r requirements.txt

    3. Setting up MongoDB server

    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
    
    echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
    
    sudo apt-get update
    
    sudo apt-get install -y mongodb-org
    
    sudo service mongod start

    4. Now, run the query server:

    python app/server.py

    Go to http://localhost:7001/

    How to contribute:

    Add a search engine of your own choice

    You can add a search engine of your choice apart from the existing ones in the application.

    • Just add or edit 4 files and you are ready to go.

    To add a search engine (say Exalead):

    1. Add exalead.py in app/scrapers directory :

    from __future__ import print_function

    from generalized import Scraper


    class Exalead(Scraper):  # Exalead class inheriting Scraper
        """Scraper class for Exalead"""

        def __init__(self):
            self.url = 'https://www.exalead.com/search/web/results/'
            self.defaultStart = 0
            self.startKey = 'start_index'

        def parseResponse(self, soup):
            """Parses the response and returns a list of urls

            Returns: urls (list)
                    [[Title1, url1], [Title2, url2], ..]
            """
            urls = []
            for a in soup.findAll('a', {'class': 'title'}):  # scrape data from the corresponding tag
                url_entry = {'title': a.getText(), 'link': a.get('href')}
                urls.append(url_entry)

            return urls

    Here, scraping the data depends on the tag / class from which we can pick up the link and the title of each result page.
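In the real scraper, BeautifulSoup's findAll does this lookup. Purely as an illustration of the same idea using only the standard library, here is a rough sketch with html.parser (the sample HTML is made up for the example):

```python
from html.parser import HTMLParser

class TitleLinkParser(HTMLParser):
    """Collect {'title', 'link'} pairs from <a class="title"> tags."""
    def __init__(self):
        super().__init__()
        self.urls = []
        self._in_title_anchor = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'a' and attrs.get('class') == 'title':
            self._in_title_anchor = True
            self.urls.append({'title': '', 'link': attrs.get('href')})

    def handle_data(self, data):
        if self._in_title_anchor:
            self.urls[-1]['title'] += data

    def handle_endtag(self, tag):
        if tag == 'a':
            self._in_title_anchor = False

parser = TitleLinkParser()
parser.feed('<a class="title" href="https://example.com">Example</a>'
            '<a class="other" href="https://skip.me">Skip</a>')
print(parser.urls)  # [{'title': 'Example', 'link': 'https://example.com'}]
```

The real parseResponse receives a BeautifulSoup object instead, but the extraction logic (match the tag and class, read its text and href) is the same.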

    2. Edit generalized.py in app/scrapers directory

    from __future__ import print_function
    import json
    import sys

    from google import Google
    from duckduckgo import Duckduckgo
    from bing import Bing
    from yahoo import Yahoo
    from ask import Ask
    from yandex import Yandex
    from exalead import Exalead  # import exalead.py


    scrapers = {
        'g': Google(),
        'b': Bing(),
        'y': Yahoo(),
        'd': Duckduckgo(),
        'a': Ask(),
        'yd': Yandex(),
        't': Exalead()  # add Exalead to scrapers with key 't'
    }

    The scrapers dictionary lists every search engine the project supports.

    3. Edit server.py in app directory

    @app.route('/api/v1/search/<search_engine>', methods=['GET'])
    def search(search_engine):
        try:
            num = request.args.get('num') or 10
            count = int(num)
            qformat = request.args.get('format') or 'json'
            if qformat not in ('json', 'xml'):
                abort(400, 'Not Found - undefined format')

            engine = search_engine
            if engine not in ('google', 'bing', 'duckduckgo', 'yahoo', 'ask', 'yandex', 'exalead'):  # add exalead to the tuple
                err = [404, 'Incorrect search engine', qformat]
                return bad_request(err)

            query = request.args.get('query')
            if not query:
                err = [400, 'Not Found - missing query', qformat]
                return bad_request(err)

    These checks verify that the requested search engine is one the project supports.
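Condensed into a framework-free sketch, the checks in search() are just membership tests on the request parameters (the error tuples below mirror the ones in server.py, but this helper itself is hypothetical):

```python
SUPPORTED_ENGINES = ('google', 'bing', 'duckduckgo', 'yahoo',
                     'ask', 'yandex', 'exalead')

def validate_search(engine, query, qformat='json'):
    """Return (ok, error) the way the checks in search() decide it."""
    if qformat not in ('json', 'xml'):
        return False, (400, 'Not Found - undefined format')
    if engine not in SUPPORTED_ENGINES:
        return False, (404, 'Incorrect search engine')
    if not query:
        return False, (400, 'Not Found - missing query')
    return True, None

print(validate_search('exalead', 'fossasia'))    # (True, None)
print(validate_search('altavista', 'fossasia'))  # (False, (404, 'Incorrect search engine'))
```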

    4.  Edit index.html in app/templates directory

         <button type="submit" value="ask" class="btn btn-lg  search btn-outline"><img src="{{ url_for('static', filename='images/ask_icon.ico') }}" width="30px" alt="Ask Icon"> Ask</button>
    
         <button type="submit" value="yandex" class="btn btn-lg  search btn-outline"><img src="{{ url_for('static', filename='images/yandex_icon.png') }}" width="30px" alt="Yandex Icon"> Yandex</button>
    
         <button type="submit" value="exalead" class="btn btn-lg  search btn-outline"><img src="{{ url_for('static', filename='images/exalead_icon.png') }}" width="30px" alt="Exalead Icon"> Exalead</button> <!-- Add button for Exalead -->
    
    In a nutshell, the data is scraped from anchor tags that carry a specific class name.

    For example, searching for fossasia using Exalead:

    https://www.exalead.com/search/web/results/?q=fossasia&start_index=1

    Here, after inspecting the element for the links, you will find that the anchor with class name title holds both the link and the title of the webpage. So we scrape the data using anchor tags with the title class.
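The scraper attributes from step 1 (url, startKey, defaultStart) combine into exactly this kind of request URL. A small sketch with urllib.parse (the build_search_url helper is made up for illustration):

```python
from urllib.parse import urlencode

base_url = 'https://www.exalead.com/search/web/results/'

def build_search_url(query, start_index=1, start_key='start_index'):
    # 'q' carries the query; start_key mirrors the scraper's startKey attribute
    return base_url + '?' + urlencode({'q': query, start_key: start_index})

print(build_search_url('fossasia'))
# https://www.exalead.com/search/web/results/?q=fossasia&start_index=1
```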

    Similarly, you can add other search engines as well.

    Resources

    Link Preview Holder on SUSI.AI Android Chat

    SUSI Android contains several view holders which bind a view based on its type, and one of them is LinkPreviewHolder. As the name suggests, it is used for previewing links in the chat window. As soon as it receives a link as input, it inflates a link preview layout. The problem was that whenever a user entered a link as input, the app crashed: the holder tried to inflate components that do not exist in the view given to the ViewHolder, which threw a NullPointerException. The fix was to inflate the layout and its components based on the type of user. Let's see how all these functionalities were implemented in the LinkPreviewHolder class.

    Components of LinkPreviewHolder

    @BindView(R.id.text)
    public TextView text;
    @BindView(R.id.background_layout)
    public LinearLayout backgroundLayout;
    @BindView(R.id.link_preview_image)
    public ImageView previewImageView;
    @BindView(R.id.link_preview_title)
    public TextView titleTextView;
    @BindView(R.id.link_preview_description)
    public TextView descriptionTextView;
    @BindView(R.id.timestamp)
    public TextView timestampTextView;
    @BindView(R.id.preview_layout)
    public LinearLayout previewLayout;
    @Nullable @BindView(R.id.received_tick)
    public ImageView receivedTick;
    @Nullable
    @BindView(R.id.thumbs_up)
    protected ImageView thumbsUp;
    @Nullable
    @BindView(R.id.thumbs_down)
    protected ImageView thumbsDown;

    Currently it binds the view components to their associated ids using the @BindView(id) annotation.

    The class is instantiated with a constructor:

    public LinkPreviewViewHolder(View itemView , ClickListener listener) {
       super(itemView, listener);
       realm = Realm.getDefaultInstance();
       ButterKnife.bind(this,itemView);
    }

    Here it binds the current class to the view passed in the constructor using ButterKnife and initializes the ClickListener.

    Now it is to set the components described above in the setView function:

    Spanned answerText;
    text.setLinksClickable(true);
    text.setMovementMethod(LinkMovementMethod.getInstance());
    if (android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
        answerText = Html.fromHtml(model.getContent(), Html.FROM_HTML_MODE_COMPACT);
    } else {
        answerText = Html.fromHtml(model.getContent());
    }

    This sets the TextView inside the view with a clickable link. A version check decides which Html.fromHtml overload to use, depending on whether the device runs Android Nougat or above.

    This ViewHolder inflates different components depending on who requested the output. If the query side inflates the LinkPreviewHolder, an extra set of components gets inflated, beyond the basic layout, which need not be inflated for the response.

    if (viewType == USER_WITHLINK) {
       if (model.getIsDelivered())
           receivedTick.setImageResource(R.drawable.ic_check);
       else
           receivedTick.setImageResource(R.drawable.ic_clock);
    }

    In the above code, the received-tick image resource is set according to whether the message sent by the user has been delivered. These components only get initialised when the user has sent a link.

    Now comes the configuration for the result obtained from the query. Every skill has a rating associated with it, so a counter is needed for rating skills positively or negatively. This code should only execute for the response, not for the query. This was the cause of the crash: the logic tried to inflate components belonging to the response while the view passed in belonged to the query, giving a NullPointerException. Hence the response logic needs to be separated from the query logic.

    if (viewType != USER_WITHLINK) {
       if(model.getSkillLocation().isEmpty()){
           thumbsUp.setVisibility(View.GONE);
           thumbsDown.setVisibility(View.GONE);
       } else {
           thumbsUp.setVisibility(View.VISIBLE);
           thumbsDown.setVisibility(View.VISIBLE);
       }
    
       if(model.isPositiveRated()){
           thumbsUp.setImageResource(R.drawable.thumbs_up_solid);
       } else {
           thumbsUp.setImageResource(R.drawable.thumbs_up_outline);
       }
    
       if(model.isNegativeRated()){
           thumbsDown.setImageResource(R.drawable.thumbs_down_solid);
       } else {
           thumbsDown.setImageResource(R.drawable.thumbs_down_outline);
       }
    
       thumbsUp.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View view) { . . . }
       });
    
    
    
       thumbsDown.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View view) { . . . }
       });
    
    }

    As you can see in the above code, it inflates the rating components (thumbsUp and thumbsDown) for the SUSI.AI response view and sets the click listeners on the rating buttons. Then, in the code below, it previews the link and commits the data to the database through the WebLink class using Realm.

    LinkPreviewCallback linkPreviewCallback = new LinkPreviewCallback() {
       @Override
       public void onPre() { . . . }
    
       @Override
       public void onPos(final SourceContent sourceContent, boolean b) { . . . }
    };

    This method calls the API and sets the rating of the skill on the server. On a successful result it changes the thumb icon, updates the rating and commits those changes to the database using Realm.

    private void rateSusiSkill(final String polarity, String locationUrl, final Context context) {..}

    References

    UI automated testing using Selenium in Badgeyay

    With all the major functionalities packed into the badgeyay web application, it was time to add automated testing to speed up the review process for known errors and to check that code contributions do not break anything. We decided to go with Selenium for our testing requirements.

    What is Selenium?

    Selenium is a portable software-testing framework for web applications. Selenium provides a playback (formerly also recording) tool for authoring tests without the need to learn a test scripting language. In other words, Selenium does browser automation: it tells a browser to click some element, populate and submit a form, navigate to a page, and perform any other form of user interaction.

    Selenium supports multiple languages including C#, Groovy, Java, Perl, PHP, Python, Ruby and Scala. Here, we are going to use Python (specifically Python 2.7).

    First things first:
    To install these packages, run the following on the CLI:

    pip install selenium==2.40
    pip install nose
    

    Don't forget to add them to the requirements.txt file.

    Web Browser:
    We also need Firefox installed on the machine.

    Writing the Test
    An automated test automates what you’d do via manual testing – but it is done by the computer. This frees up time and allows you to do other things, as well as repeat your testing. The test code is going to run a series of instructions to interact with a web browser – mimicking how an actual end user would interact with an application. The script is going to navigate the browser, click a button, enter some text input, click a radio button, select a drop down, drag and drop, etc. In short, the code tests the functionality of the web application.

    A test for the web page title:

    import unittest
    from selenium import webdriver
    
    class SampleTest(unittest.TestCase):
    
        @classmethod
        def setUpClass(cls):
            cls.driver = webdriver.Firefox()
            cls.driver.get('http://badgeyay-dev.herokuapp.com/')
    
        def test_title(self):
            self.assertEqual(self.driver.title, 'Badgeyay')
    
        @classmethod
        def tearDownClass(cls):
            cls.driver.quit()
    


    Run the test using nosetests test.py

    Clicking the element
    For our next test, we click the menu button, and check if the menu becomes visible.

    elem = self.driver.find_element_by_css_selector(".custom-menu-content")
    self.driver.find_element_by_css_selector(".glyphicon-th").click()
    self.assertTrue(elem.is_displayed())
    


    Uploading a CSV file:
    For our next test, we upload a CSV file and see if a success message pops up.

    def test_upload(self):
            Imagepath = os.path.abspath(os.path.join(os.getcwd(), 'badges/badge_1.png'))
            CSVpath = os.path.abspath(os.path.join(os.getcwd(), 'sample/vip.png.csv'))
            self.driver.find_element_by_name("file").send_keys(CSVpath)
            self.driver.find_element_by_name("image").send_keys(Imagepath)
            self.driver.find_element_by_css_selector("form .btn-primary").click()
            time.sleep(3)
            success = self.driver.find_element_by_css_selector(".flash-success")
            self.assertIn(u'Your badges has been successfully generated!', success.text)
    


    The entire code can be found on: https://github.com/fossasia/badgeyay/tree/development/app/tests

    We can also use PhantomJS along with Selenium for UI testing without opening a web browser. We use this in badgeyay to run the tests for every commit in Travis CI, which cannot open a program window.

    Resources

    Make Flask Fast and Reliable – Simple Steps

    Flask is a microframework for Python, mostly used in web backend development. Several FOSSASIA projects use Flask for development purposes, such as Open Event Server, Query Server and Badgeyay. Optimization is one of the most important steps for a successful software product, so in this post a few simple tricks are shown which will make your Flask app faster and more reliable.

    Flask-Compress

    1. Flask-Compress is a Python package which provides de-facto lossless compression for your Flask application.
    2. Enough with the theory, now let's understand the coding part:
      1. First install the module.
      2. Then add the basic setup.
    3. That's it! All it takes is just a few lines of code to optimize your Flask app. To know more, check out the flask-compress module.
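The two snippets referred to above were published without their code. Based on the Flask-Compress README, the install command and basic setup look roughly like this (a sketch, not code taken from a FOSSASIA repository):

```python
# pip install flask-compress
from flask import Flask
from flask_compress import Compress

app = Flask(__name__)
Compress(app)  # responses are now gzip-compressed transparently
```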

    Requirements Directory

    1. A common practice amongst different FOSSASIA projects is dividing the requirements.txt file into separate files for development, testing and production.
    2. When projects use Travis CI for testing or are deployed to cloud services like Heroku, some modules are not required everywhere. For example, gunicorn is only required for deployment purposes and not for development.
    3. So we keep a separate directory in which different .txt files are created for different purposes.
    4. Below is the file directory structure followed for requirements in the badgeyay project.

    As you can see, different .txt files are created for different purposes:
      1. dev.txt – for development
      2. prod.txt – for production (i.e. deployment)
      3. test.txt – for testing
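A common layout for such a requirements directory (a sketch of the idea, not the exact badgeyay files) keeps the shared pinned modules in one file and lets the others pull it in with pip's -r include directive:

```
requirements/
├── prod.txt    # flask, gunicorn, ... (everything deployment needs)
├── dev.txt     # -r prod.txt  plus debugging helpers
└── test.txt    # -r prod.txt  plus nose, selenium, ...
```

With this layout, pip install -r requirements/dev.txt installs the production dependencies as well, so nothing is duplicated across files.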

    Resources

    SUSI.AI Chrome Bot and Web Speech: Integrating Speech Synthesis and Recognition

    Susi Chrome Bot is a Chrome extension used to communicate with Susi AI. The advantage of Chrome extensions is that they are always accessible to the user, for tasks that would otherwise require moving to another tab/site.

    In this blog post, we will go through the process of integrating the Web Speech API into SUSI Chromebot.

    Web Speech API

    Web Speech API enables web apps to be able to use voice data. The Web Speech API has two components:

    Speech Recognition:  Speech recognition gives web apps the ability to recognize voice data from an audio source. Speech recognition provides the speech-to-text service.

    Speech Synthesis: Speech synthesis provides the text-to-speech services for the web apps.

    Integrating speech synthesis and speech recognition in SUSI Chromebot

    Chrome provides the webkitSpeechRecognition() interface which we will use for our speech recognition tasks.

    var recognition = new webkitSpeechRecognition();
    


    Now we have a speech recognition instance, recognition. Let us define the necessary checks for error detection and for resetting the recognizer.

    var recognizing;

    function reset() {
        recognizing = false;
    }

    recognition.onerror = function (e) {
        console.log(e.error);
    };

    recognition.onend = function () {
        reset();
    };
    


    We now define the toggleStartStop() function that will check if recognition is already being performed in which case it will stop recognition and reset the recognizer, otherwise, it will start recognition.

    function toggleStartStop() {
        if (recognizing) {
          recognition.stop();
          reset();
        } else {
          recognition.start();
          recognizing = true;
        }
    }
    


    We can then attach an event listener to a mic button which calls the toggleStartStop() function to start or stop our speech recognition.

    mic.addEventListener("click", function () {
        toggleStartStop();
    });
    


    Finally, when the speech recognizer has some results it calls the onresult event handler. We’ll use this event handler to catch the results returned.

    recognition.onresult = function (event) {
        for (var i = event.resultIndex; i < event.results.length; ++i) {
          if (event.results[i].isFinal) {
            textarea.value = event.results[i][0].transcript;
            submitForm();
          }
        }
    };
    


    The above code snippet checks the results produced by the speech recognizer; if a result is final, it sets the textarea value to the transcript and submits it to the backend.

    One problem that we might face is the extension not being able to access the microphone. This can be resolved by asking for microphone access from an external tab/window/iframe. For SUSI Chromebot this is done using an external tab: pressing the settings icon opens a new tab which asks the user for microphone access. This needs to be done only once, so it does not cause much trouble.

    setting.addEventListener("click", function () {
        chrome.tabs.create({
            url: chrome.runtime.getURL("options.html")
        });
    });

    navigator.webkitGetUserMedia({
        audio: true
    }, function (stream) {
        stream.stop();
    }, function () {
        console.log('no access');
    });
    


    In contrast to speech recognition, speech synthesis is very easy to implement.

    function speakOutput(msg){
        var voiceMsg = new SpeechSynthesisUtterance(msg);
        window.speechSynthesis.speak(voiceMsg);
    }
    


    This function takes a message as input, declares a new SpeechSynthesisUtterance instance and then calls the speak method to convert the text message to voice.

    There are many properties and attributes that come with this speech recognition and synthesis interface. This blog post only introduces the very basics.

    Resources


    Using Inkscape to create SVG Files for Background of Event Badges in Badgeyay

    Inkscape is a free and open-source vector graphics editor. I used it in the FOSSASIA Badgeyay repository, whose main purpose is to create badges for events created using Open Event. Badges were created in Scalable Vector Graphics (SVG) because of its advantages over JPEG, PNG, etc., such as scalability, Search Engine Optimization (SEO) friendliness, easy editing (as it is saved in an XML format) and resolution independence.

    My task (issue #20) was to create the background in SVG format so that it can be edited as an XML file. The aim was to create the background in such a way that we only have to find and replace the color code to change the color of the image/background. The following background was to be reproduced in SVG format using Inkscape, so that its color can be edited using a text editor.

    badge

    This was achieved using Inkscape (as suggested in the issue itself), which lets us create an SVG file. I created two layers: one for the plain background, and the other containing the triangles of a Voronoi diagram. The general steps are included in this video tutorial – AbstractBackground.

    I found it quite helpful in understanding the interface of Inkscape. After following the tutorial, I made the following changes:

    • The Layer 1 rectangle was made using a mesh, giving four different colors at the corners. I set these colors to grey with different opacity/alpha factors.
    • Then, just like in the video, I created a small circular object, converted it with 'Object to Path', made duplicates of it and scattered them over the rectangle of the first layer.
    • Used the Extensions menu to apply a 'Voronoi diagram' to the selected circles.
    • Then I removed the circles, ungrouped all the triangles formed and changed their color by picking from the background (which was grey, with different opacities). I grouped them together again and removed the lines separating the triangles by setting the stroke to none.
    • Now all one has to do is change the color of the first layer's rectangle, and the final image/background changes.

    This change of color can be done using a text editor too: find the Layer 1 rectangle in the XML tree and replace its 'fill' attribute with the required color code.


    badge background

    Now, using a text editor (here Sublime Text 3), find Layer 1 and change the 'fill' value of the rect to, say, '37C871'.

     <g
          inkscape:label="Layer 1"
          inkscape:groupmode="layer"
          id="layer1"
          style="display:inline;opacity:1">
         <rect
            id="rect4504"
            width="141.3569"
            height="200.82413"
            x="-63.25676"
            y="-14.052279"
            style="opacity:1;fill:#37C871;fill-opacity:1;stroke:url(#linearGradient2561);stroke-width:0.57644272" />
       </g>
    

    Opening the SVG file again shows the changed badge background color.
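The same find-and-replace can also be scripted; a small sketch using Python's standard library re module (the recolor helper and the trimmed-down rect string are made up for illustration):

```python
import re

# A stripped-down stand-in for the rect element shown above
svg = ('<rect id="rect4504" width="141.3569" height="200.82413" '
       'style="opacity:1;fill:#37C871;fill-opacity:1" />')

def recolor(svg_text, new_fill):
    # Replace every fill:#RRGGBB token inside style attributes
    return re.sub(r'fill:#[0-9A-Fa-f]{6}', 'fill:#' + new_fill, svg_text)

print(recolor(svg, '1A73E8'))  # the rect now carries fill:#1A73E8
```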

    The results can be seen in my pull request #152, which eventually got merged.

    Using a similar background, adding the FOSSASIA logo on top and adding editable headings like 'VIP' and 'BUSINESS PASS' was done further in PR #167.

    If you want to contribute to FOSSASIA/badgeyay, you can create an issue here.

    Resources:

    Enhancing SUSI Desktop to Display a Loading Animation and Auto-Hide Menu Bar by Default

    SUSI Desktop is a cross-platform desktop application based on Electron which presently uses chat.susi.ai as a submodule and allows users to interact with SUSI right from their desktop. The benefit of using chat.susi.ai as a submodule is that the app inherits all the features the web app offers and serves them in a nicely built native application.

    Display a loading animation during DOM load.

    Electron apps should give a native feel rather than feeling like they just render some DOM, so it is good to display a loading animation while the web content is actually loading. This is how I implemented it.
    Electron provides a nice, easy-to-use API for handling BrowserWindow and WebContents events. I read through the official docs and came up with a simple solution, depicted in the snippet below.

    onload = function () {
    	const webview = document.querySelector('webview');
    	const loading = document.querySelector('#loading');
    
    	function onStopLoad() {
    		loading.classList.add('hide');
    	}
    
    	function onStartLoad() {
    		loading.classList.remove('hide');
    	}
    
    	webview.addEventListener('did-stop-loading', onStopLoad);
    	webview.addEventListener('did-start-loading', onStartLoad);
    };
    

    Hiding menu bar as default

    Menu bars are useful but annoying, since they take up space in the main window, so I hid the menu bar by default; users can toggle its display by pressing the Alt key at any time. I used the autoHideMenuBar property of the BrowserWindow class while creating the window object to achieve this.

    const win = new BrowserWindow({
    	show: false,
    	autoHideMenuBar: true
    });
    

    Resources

    1. More information about BrowserWindow class in the official documentation at electron.atom.io.
    2. Follow a quick tutorial to kickstart creating apps with electron at https://www.youtube.com/watch?v=jKzBJAowmGg.
    3. SUSI Desktop repository at https://github.com/fossasia/susi_desktop.

    badgeYAY – An abrupt flow of code

    Badgeyay is a web application which takes a CSV file, an image file and an optional config.json file, and converts them into a PDF file consisting of a set of badges built from the data in the CSV with the image as background. In order to contribute to the badgeyay repository, a contributor is expected to have some knowledge of Python Flask, HTML and CSS. An understanding of the Git version control system is essential in open source.

    Flask – Web development in baby steps

    First things first – Having a local copy

    Sign up for GitHub and head over to the Badgeyay repository. Then follow these steps.

    1. Go ahead and Fork the repository
    2. Star the repository
    3. Get the clone of the forked version on your local machine using git clone https://github.com/<username>/badgeyay.git
    4. Add upstream using git remote add upstream https://github.com/fossasia/badgeyay.git

    How a flask application works

    A Flask application basically consists of an app.py or main.py file which is run using the command python main.py.

    The main.py file consists of:


    from flask import Flask, render_template

    app = Flask(__name__)

    @app.route('/')
    def index():
        return render_template('index.html')

    if __name__ == '__main__':
        app.run(debug=True)

    This snippet starts the Flask server at localhost:5000, and the index.html template gets rendered on visiting the root URL. All the templates reside in the templates folder while the static asset files are stored in the static folder.

    Steps:

    1. First, we imported the Flask class and the render_template function.
    2. Next, we created a new instance of the Flask class.
    3. We then mapped the URL / to the function index(). Now, when someone visits this URL, the function index() will execute.
    4. The function index() uses the Flask function render_template() to render the index.html template we just created from the templates/ folder to the browser.
    5. Finally, we use run() to run our app on a local server. We’ll set the debug flag to true, so we can view any applicable error messages if something goes wrong, and so that the local server automatically reloads after we’ve made changes to the code.

    The template consists of a base layout which is extended by the pages.

    templates/layout.html

    <!DOCTYPE html>
    <html>
    <head>
    <title>Flask App</title>
    </head>
    <body>
    <header>
    <h1 class="logo">Flask App</h1>
    </header>

    {% block content %}
    {% endblock %}

    </body>
    </html>

    templates/index.html

    {% extends "layout.html" %}
    {% block content %}
    <h2>Welcome to the Flask app</h2>
    <h3>This is the index page for the Flask app</h3>
    {% endblock %}

    With this and a little understanding of Python, you are all set to contribute to Flask repositories such as badgeyay.

    Resources

    Getting Started Developing on Phimpme Android

    Phimpme is an Android app for editing photos and sharing them on social media. To participate in the project, start by learning how people contribute in open source, the version control system Git, and other tools like Codacy and Travis.

    Firstly, sign up for GitHub. Secondly, find the open source projects that interest you. As for me, I started with Phimpme. Then follow these steps:

    1. Go through the project ReadMe.md and read all the technologies and tools they are using.
    2. Now fork that repo in your account.
    3. Open the Android Studio/Other applications that are required for that project and import the project through Git.
    4. For Android Studio sync all the Gradle files and other changes and you are all done and ready for the development process.

    Install the app and run it on a phone. Now explore every bit of it: use the app as a tester, and think about the edge cases and boundary conditions that could make the 'ANR' (App Not Responding) dialog appear. Congratulations, you are ready to create an issue, provided it is verified, original and actually a bug.

    Next,

    • Navigate to the main repo link, you will see an issues section as follows:
    • Create a new issue and report every detail about the issue (logcat, screenshots). For example, refer to Issue-1120.
    • Now the next step is to work on that issue
    • On your machine, you should not change the code in the development branch, as that is considered bad practice. Hence check out a new branch.
      For example, I checked out a branch named 'crashfixed' for the above issue:
    git checkout -b "Any branch name you want to keep"
    
    • Make the necessary changes on that branch, test that the code compiles and that the issue is fixed, and then run:
    git add .
    git commit -m "Fix #IssueNo - Description"
    git push origin branch-name
    
    • Now navigate to the repo and you will see an option to create a Pull Request.
      Mention the issue number, a description and the changes you made, and include screenshots of the fixed app. For example, Pull Request 1131.

    Hence you have made your first contribution to open source while learning Git. The pull request will trigger checks like Codacy and the Travis build, and if everything passes it is reviewed and merged by co-developers.

    The usual way how this works is, that it should be reviewed by other co-developers. These co-developers do not need merge or write access to the repository. Any developer can review pull requests. This will also help contributors to learn about the project and make the job of core developers easier.

    Resources