Setting up SUSI Desktop Locally for Development and Using Webview Tag and Adding Event Listeners

SUSI Desktop is a cross-platform desktop application based on Electron which presently uses chat.susi.ai as a submodule and allows users to interact with SUSI right from their desktop.

Any Electron app essentially comprises the following components:

    • Main process (manages windows and other interactions with the operating system)
    • Renderer process (manages the view inside the BrowserWindow)

Steps to setup development environment

      • Clone the repo locally.
$ git clone https://github.com/fossasia/susi_desktop.git
$ cd susi_desktop
      • Install the dependencies listed in package.json file.
$ npm install
      • Start the app using the start script.
$ npm start

Structure of the project

The project was restructured so that the working environments of the main and renderer processes are kept separate, which makes the codebase easier to read and debug. This is how the project is currently structured.

The root directory of the project contains a directory ‘app’ which holds our Electron application. Alongside it is a package.json file with the project metadata and the modules required for building the project, plus other GitHub helper files.

Inside the app directory:

  • Main – Files for managing the main process of the app
  • Renderer – Files for managing the renderer process of the app
  • Resources – Icons for the app and the tray/media files
    Webview Tag

    The webview tag displays external web content in an isolated frame and process; it is used here to load chat.susi.ai inside the BrowserWindow:

    <webview src="https://chat.susi.ai/"></webview>
    

    Adding event listeners to the app

    Various electron APIs were used to give a native feel to the application.

  • Send focus to the window's WebContents when the app window gains focus (a matching renderer-side listener is sketched after this list).

    win.on('focus', () => {
    	win.webContents.send('focus');
    });

  • Display the window only once the DOM has completely loaded.

    const page = mainWindow.webContents;
    ...
    page.on('dom-ready', () => {
    	mainWindow.show();
    });

  • Display the window on the 'ready-to-show' event.

    win.once('ready-to-show', () => {
    	win.show();
    });
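    On the renderer side, a listener needs to pick up the 'focus' message sent above. The snippet below is only a minimal sketch of what such a handler could look like; the selector and the exact behaviour are assumptions, not necessarily how susi_desktop implements them.

    const {ipcRenderer} = require('electron');

    // React to the 'focus' message from the main process by moving
    // keyboard focus into the embedded webview (assumed behaviour).
    ipcRenderer.on('focus', () => {
    	document.querySelector('webview').focus();
    });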
    

    Resources

    1. A quick article by Cameron Nokes on Medium explaining Electron's main and renderer processes.
    2. Official documentation about the webview tag at https://electron.atom.io/docs/api/webview-tag/
    3. Read more about electron processes at https://electronjs.org/docs/glossary#process
    4. SUSI Desktop repository at https://github.com/fossasia/susi_desktop.

    Installing Query Server Search and Adding Search Engines

    The query server can be used to search a keyword/phrase on a search engine (Google, Yahoo, Bing, Ask, DuckDuckGo and Yandex) and get the results as json or xml. The tool also stores the searched query string in a MongoDB database for analytical purposes. (The search engine scraper is based on the scraper at fossasia/searss.)

    In this blog, we will talk about how to install Query-Server and implement the search engine of your own choice as an enhancement.

    How to clone the repository

    Sign up / Login to GitHub and head over to the Query-Server repository. Then follow these steps.

    1. Go ahead and fork the repository

    https://github.com/fossasia/query-server

    2. Star the repository

    3. Get the clone of the forked version on your local machine using

    git clone https://github.com/<username>/query-server.git

    4. Add upstream to synchronize repository using

    git remote add upstream https://github.com/fossasia/query-server.git

    Getting Started

    Setting up the Query-Server application basically consists of the following steps:

    1. Installing Node.js dependencies

    npm install -g bower
    
    bower install

    2. Installing Python dependencies (Python 2.7 and 3.4+)

    pip install -r requirements.txt

    3. Setting up MongoDB server

    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
    
    echo "deb http://repo.mongodb.org/apt/ubuntu $(lsb_release -sc)/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
    
    sudo apt-get update
    
    sudo apt-get install -y mongodb-org
    
    sudo service mongod start

    4. Now, run the query server:

    python app/server.py

    Go to http://localhost:7001/
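
    Once the server is running, the search API can be exercised directly. Below is a small sketch using Python's requests library; the route and the query, format and num parameters follow the server.py snippet shown later in this post:

    import requests

    # Ask the Google scraper for at most 5 results for "fossasia", returned as JSON.
    response = requests.get(
        'http://localhost:7001/api/v1/search/google',
        params={'query': 'fossasia', 'format': 'json', 'num': 5}
    )
    print(response.json())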

    How to contribute:

    Add a search engine of your own choice

    You can add a search engine of your choice, apart from the existing ones, to the application.

    • Just add or edit 4 files and you are ready to go.

    For adding a search engine (say, Exalead):

    1. Add exalead.py in app/scrapers directory :

    from __future__ import print_function
    from generalized import Scraper


    class Exalead(Scraper):  # Exalead class inheriting Scraper

        """Scraper class for Exalead"""

        def __init__(self):
            self.url = 'https://www.exalead.com/search/web/results/'
            self.defaultStart = 0
            self.startKey = 'start_index'

        def parseResponse(self, soup):
            """Parses the response and returns the list of results

            Returns: urls (list)
                    [{'title': title1, 'link': link1}, {'title': title2, 'link': link2}, ...]
            """
            urls = []
            for a in soup.findAll('a', {'class': 'title'}):  # Scrape data from the matching anchor tags
                url_entry = {'title': a.getText(), 'link': a.get('href')}
                urls.append(url_entry)

            return urls

    Here, scraping the data depends on the tag/class from which we can extract the link and the title of each result.

    2. Edit generalized.py in app/scrapers directory

    from __future__ import print_function
    import json
    import sys

    from google import Google
    from duckduckgo import Duckduckgo
    from bing import Bing
    from yahoo import Yahoo
    from ask import Ask
    from yandex import Yandex
    from exalead import Exalead   # import exalead.py


    scrapers = {
        'g': Google(),
        'b': Bing(),
        'y': Yahoo(),
        'd': Duckduckgo(),
        'a': Ask(),
        'yd': Yandex(),
        't': Exalead()  # Add exalead to scrapers with index 't'
    }

    From the scrapers dictionary, we can see which search engines the project already supports.

    3. Edit server.py in app directory

    @app.route('/api/v1/search/<search_engine>', methods=['GET'])
    def search(search_engine):
        try:
            num = request.args.get('num') or 10
            count = int(num)
            qformat = request.args.get('format') or 'json'
            if qformat not in ('json', 'xml'):
                abort(400, 'Not Found - undefined format')

            engine = search_engine
            if engine not in ('google', 'bing', 'duckduckgo', 'yahoo', 'ask', 'yandex', 'exalead'):  # Add exalead to the tuple
                err = [404, 'Incorrect search engine', qformat]
                return bad_request(err)

            query = request.args.get('query')
            if not query:
                err = [400, 'Not Found - missing query', qformat]
                return bad_request(err)

    This checks whether the requested search engine is supported by the project.

    4.  Edit index.html in app/templates directory

     <button type="submit" value="ask" class="btn btn-lg search btn-outline"><img src="{{ url_for('static', filename='images/ask_icon.ico') }}" width="30px" alt="Ask Icon"> Ask</button>

     <button type="submit" value="yandex" class="btn btn-lg search btn-outline"><img src="{{ url_for('static', filename='images/yandex_icon.png') }}" width="30px" alt="Yandex Icon"> Yandex</button>

     <button type="submit" value="exalead" class="btn btn-lg search btn-outline"><img src="{{ url_for('static', filename='images/exalead_icon.png') }}" width="30px" alt="Exalead Icon"> Exalead</button> <!-- Add button for Exalead -->
    
    In a nutshell, scrape the data using the anchor tag that carries the specific class name.

    For example, searching for fossasia using Exalead:

    https://www.exalead.com/search/web/results/?q=fossasia&start_index=1

    Here, after inspecting the element for the links, you will find that the anchor with the class name title carries both the link and the title of the webpage. So, scrape the data using the title-classed anchor tag.

    Similarly, you can add other search engines as well.

    Resources

    Make Flask Fast and Reliable – Simple Steps

    Flask is a microframework for Python, which is mostly used in web backend development. There are projects in FOSSASIA that use Flask for development purposes, such as Open Event Server, Query Server and Badgeyay. Optimization is indeed one of the most important steps for a successful software product. So, in this post a few off-the-hook tricks will be shown which will make your Flask app faster and more reliable.

    Flask-Compress

    1. Flask-Compress is a Python package which provides de-facto lossless compression for your Flask application.
    2. Enough with the theory, now let's understand the coding part:
      1. First install the module.
      2. Then add the basic setup to your app (see the sketch after this list).
    3. That's it! All it takes is just a few lines of code to make your Flask app optimized. To know more, check out the flask-compress module.
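
    The original post showed these two steps as screenshots; a minimal sketch of the standard Flask-Compress setup looks like this:

    # installed with: pip install flask-compress
    from flask import Flask
    from flask_compress import Compress

    app = Flask(__name__)
    Compress(app)  # transparently compresses responses served by this app

    @app.route('/')
    def index():
        return 'Hello, compressed world!'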

    Requirements Directory

    1. A common practice amongst different FOSSASIA projects is dividing the requirements.txt file into separate files for development, testing and production.
    2. When projects either use Travis CI for testing or are deployed to cloud services like Heroku, some modules are not really required everywhere. For example, gunicorn is only required for deployment purposes and not for development.
    3. So we keep a separate directory in which different .txt files are created for different purposes.
    4. Below is the file directory structure followed for requirements in the badgeyay project (a sketch of it is shown after this list).

    As you can see, different .txt files are created for different purposes:
      1. dev.txt – for development
      2. prod.txt – for production (i.e. deployment)
      3. test.txt – for testing
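
    The original screenshot is not reproduced here, but the layout described above would look roughly like the sketch below, together with how one of the files would be installed (the exact contents of each file vary from project to project):

    requirements/
    ├── dev.txt    # development-only dependencies
    ├── prod.txt   # production/deployment dependencies, e.g. gunicorn
    └── test.txt   # testing dependencies

    pip install -r requirements/dev.txt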

    Resources

    SUSI.AI Chrome Bot and Web Speech: Integrating Speech Synthesis and Recognition

    SUSI Chrome Bot is a Chrome extension which is used to communicate with SUSI AI. The advantage of a Chrome extension is that it is easily accessible to the user for tasks that would otherwise require moving to another tab or site.

    In this blog post, we will be going through the process of integrating the web speech API to SUSI Chromebot.

    Web Speech API

    The Web Speech API enables web apps to use voice data. It has two components:

    Speech Recognition:  Speech recognition gives web apps the ability to recognize voice data from an audio source. Speech recognition provides the speech-to-text service.

    Speech Synthesis: Speech synthesis provides the text-to-speech services for the web apps.

    Integrating speech synthesis and speech recognition in SUSI Chromebot

    Chrome provides the webkitSpeechRecognition() interface which we will use for our speech recognition tasks.

    var recognition = new webkitSpeechRecognition();
    

     

    Now, we have a speech recognition instance recognition. Let us define necessary checks for error detection and resetting the recognizer.

    var recognizing;

    function reset() {
        recognizing = false;
    }

    recognition.onerror = function(e) {
        console.log(e.error);
    };

    recognition.onend = function() {
        reset();
    };
    

     

    We now define the toggleStartStop() function, which checks whether recognition is already being performed; if so, it stops recognition and resets the recognizer, otherwise it starts recognition.

    function toggleStartStop() {
        if (recognizing) {
          recognition.stop();
          reset();
        } else {
          recognition.start();
          recognizing = true;
        }
    }
    

     

    We can then attach an event listener to a mic button which calls the toggleStartStop() function to start or stop our speech recognition.

    mic.addEventListener("click", function () {
        toggleStartStop();
    });
    

     

    Finally, when the speech recognizer has some results it calls the onresult event handler. We’ll use this event handler to catch the results returned.

    recognition.onresult = function (event) {
        for (var i = event.resultIndex; i < event.results.length; ++i) {
          if (event.results[i].isFinal) {
            textarea.value = event.results[i][0].transcript;
            submitForm();
          }
        }
    };
    

     

    The above code snippet checks the results produced by the speech recognizer; if a result is final, it sets the textarea value to the recognized text and then submits it to the backend.

    One problem that we might face is the extension not being able to access the microphone. This can be resolved by asking for microphone access from an external tab/window/iframe. For SUSI Chromebot this is done using an external tab: pressing the settings icon opens a new tab which then asks the user for microphone access. This needs to be done only once, so it does not cause a lot of trouble.

    setting.addEventListener("click", function () {
        chrome.tabs.create({
            url: chrome.runtime.getURL("options.html")
        });
    });

    navigator.webkitGetUserMedia({
        audio: true
    }, function (stream) {
        stream.stop();
    }, function () {
        console.log('no access');
    });
    

     

    In contrast to speech recognition, speech synthesis is very easy to implement.

    function speakOutput(msg){
        var voiceMsg = new SpeechSynthesisUtterance(msg);
        window.speechSynthesis.speak(voiceMsg);
    }
    

     

    This function takes a message as input, declares a new SpeechSynthesisUtterance instance and then calls the speak method to convert the text message to voice.
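
    For example, once the extension receives SUSI's answer from the backend, the reply can be spoken out loud (the susiReply variable below is hypothetical and only illustrates the call):

    // Hypothetical usage: read SUSI's textual answer aloud.
    var susiReply = 'Hello, I am SUSI';
    speakOutput(susiReply);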

    There are many properties and attributes that come with this speech recognition and synthesis interface. This blog post only introduces the very basics.

    Resources

     

    Store User’s Personal Information with SUSI

    In this blog, I discuss how SUSI.AI stores the personal information of its users. This personal information is mostly usernames/links to different websites like LinkedIn, GitHub, Facebook, Google/Gmail etc. To store such details, we have a dedicated API. The endpoint is:

    https://api.susi.ai/aaa/storePersonalInfo.json
    

    This API/servlet covers both storing the details and getting them back. When making the API call, the user can either ask the server for the list of available store names along with their values, or request the server to store a value for a particular store name. If a store name already exists and a client makes a call with a new/updated value, the servlet will update the value for that store name.

    The reason the minimal user role is USER is quite obvious: these details correspond to a particular user. We neither want someone writing such information anonymously, nor do we want it to be visible to anonymous users unless allowed by the user.
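
    For illustration, calls to this endpoint could look roughly like the following. The storeName, value and fetchDetails parameters appear in the snippets below; passing the logged-in user's access_token is an assumption here, since the call requires the USER role, and the placeholders are hypothetical:

    # Store (or update) a value for the store name "github":
    https://api.susi.ai/aaa/storePersonalInfo.json?storeName=github&value=https://github.com/<username>&access_token=<token>

    # Fetch all stored names and values:
    https://api.susi.ai/aaa/storePersonalInfo.json?fetchDetails=true&access_token=<token>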

    In the next steps, we evaluate the API call made by the client by looking at the combination of parameters present in the request. The server first obtains an accounting object for the user. If the request is to fetch the list of available stores, the server checks whether the accounting object even has a JSONObject for “stores”. If not found, it responds with the error message “No personal information is added yet.” and error code 420; if found, the details are encoded as a parameter of the response JSONObject. Look at the code below to understand this.

    Accounting accounting = DAO.getAccounting(authorization.getIdentity());
            if(post.get("fetchDetails", false)) {
                if(accounting.getJSON().has("stores")){
                    JSONObject jsonObject = accounting.getJSON().getJSONObject("stores");
                    json.put("stores", jsonObject);
                    json.put("accepted", true);
                    json.put("message", "details fetched successfully.");
                    return new ServiceResponse(json);
                } else {
                    throw new APIException(420, "No personal information is added yet.");
                }
            }
    

    If the request was not to fetch the list of available stores, it means the client wants the server to save a new field or update the previous value of a store name. A combination of if-else checks evaluates whether the call even contains the required parameters.

    if (post.get("storeName", null) == null) {
        throw new APIException(422, "Bad store name encountered!");
    }

    String storeName = post.get("storeName", null);
    if (post.get("value", null) == null) {
        throw new APIException(422, "Bad store name value encountered!");
    }

    If the request contains all the required data, the store name and value are extracted as a key-value pair from the request.

    In the next steps, since the server is expected to store the list of store names for a particular user, the identity is first gathered from the authorization object already present in the “serviceImpl” method. If the server finds a “null” identity, it throws an error with error code 400 and the error message “Specified User Setting not found, ensure you are logged in”.

    Otherwise, the server checks whether a JSONObject with the key “stores” already exists. If not, it creates one and puts the key-value pair into the new JSONObject; if it does exist, the pair is put into the existing object.

    Since these details are for a particular account (i.e. for a particular user), they are placed in the Accounting.json file. For a better understanding, look at the code snippet below.

    if (accounting.getJSON().has("stores")) {
        accounting.getJSON().getJSONObject("stores").put(storeName, value);
    } else {
        JSONObject jsonObject = new JSONObject(true);
        jsonObject.put(storeName, value);
        accounting.getJSON().put("stores", jsonObject);
    }

    json.put("accepted", true);
    json.put("message", "You successfully updated your account information!");
    return new ServiceResponse(json);

    Additional Resources

    Enhancing SUSI Desktop to Display a Loading Animation and Auto-Hide Menu Bar by Default

    SUSI Desktop is a cross-platform desktop application based on Electron which presently uses chat.susi.ai as a submodule and allows users to interact with SUSI right from their desktop. The benefit of using chat.susi.ai as a submodule is that the app inherits all the features the web app offers and serves them in a nicely built native application.

    Display a loading animation during DOM load.

    Electron apps should give a native feel rather than feeling like they are just rendering some DOM, so it would be great to display a loading animation while the web content is actually loading.
    Electron provides a nice, easy-to-use API for handling BrowserWindow and WebContents events. I read through the official docs and came up with a simple solution, as shown in the snippet below.

    onload = function () {
    	const webview = document.querySelector('webview');
    	const loading = document.querySelector('#loading');
    
    	function onStopLoad() {
    		loading.classList.add('hide');
    	}
    
    	function onStartLoad() {
    		loading.classList.remove('hide');
    	}
    
    	webview.addEventListener('did-stop-loading', onStopLoad);
    	webview.addEventListener('did-start-loading', onStartLoad);
    };
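
    For reference, the markup and style this listener assumes could look roughly like the sketch below: a #loading element overlaid on the webview and a hide class that removes it. The real markup and CSS in susi_desktop may differ.

    <div id="loading">Loading SUSI…</div>
    <webview src="https://chat.susi.ai/"></webview>

    <style>
      #loading.hide { display: none; }
    </style>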
    

    Hiding the menu bar by default

    Menu bars are useful but annoying, since they take up space in the main window, so I hid them by default; users can toggle their display by pressing the Alt key at any time. I used the autoHideMenuBar property of the BrowserWindow class while creating the window object to achieve this.

    const win = new BrowserWindow({
    	show: false,
    	autoHideMenuBar: true
    });
    

    Resources

    1. More information about BrowserWindow class in the official documentation at electron.atom.io.
    2. Follow a quick tutorial to kickstart creating apps with electron at https://www.youtube.com/watch?v=jKzBJAowmGg.
    3. SUSI Desktop repository at https://github.com/fossasia/susi_desktop.

    Enhancing Settings Menu in SUSI Webchat using Material-UI Menu React Component

    Material-UI is a great library for React developers, since you can directly use Material Design components in your app. The SUSI.AI web chat uses Material-UI (https://github.com/callemall/material-ui). In this blog we'll see how a Menu component is implemented in the settings page of SUSI's web chat app.

    Menu & MenuItem Component

    The Menu component allows you to execute an action when an item is selected from a list. In the settings menu we want to change the state variable to which the setting details are bound (i.e. when the state variable changes, the setting details corresponding to the selected menu item change).

    A simple menu component is defined like this –

    <Menu>
            <MenuItem primaryText="Item 1" />
            <MenuItem primaryText="Item 2" />
            <MenuItem primaryText="Item 3" />
            <MenuItem primaryText="Item 4" />
    </Menu>
    
    

    The Menu and MenuItem components accept some properties too. For example, if you add disabled={true} to a MenuItem, its onClick action is disabled.

    Adding Icon

    A MenuItem can contain icons by defining the leftIcon and rightIcon properties.

    Example snippet:

    <MenuItem primaryText="Download" leftIcon={<Download />} />
    


    You can also style the leftIcon or rightIcon through the style property of the MenuItem.

    Active State of MenuItem

    You may want to assign a different style to the active MenuItem in the Menu. This can be achieved through the selectedMenuItemStyle property, which overrides the inline style of the selected MenuItem.

    To implement this we need to use the concept of a ‘controlled component’. Each MenuItem has to be assigned a value, and the Menu's value has to be bound to a state variable.

    <Menu
      selectedMenuItemStyle={{color: '#FFFFFF'}}
      value={this.state.selectedItem} >
        <MenuItem primaryText="Item 1" value='1' />
        <MenuItem primaryText="Item 2" value='2' />
    </Menu>
    

    This way the state variable will control Menu’s value and the selectedMenuItemStyle property will override the inline-style of the corresponding MenuItem.

    Implement an onClick function for each MenuItem to change the state value, as sketched below.
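
    A minimal sketch of such a handler: it uses the selectedItem state field from the earlier snippet and is defined outside render, as the note further down advises. Depending on the material-ui version in use, the item's handler prop may be onClick or onTouchTap.

    // Defined outside render(), so render stays a pure function of props and state.
    handleItemClick = (event, value) => {
      this.setState({ selectedItem: value });
    };

    // Inside render():
    <MenuItem
      primaryText="Item 1"
      value="1"
      onClick={(event) => this.handleItemClick(event, '1')}
    />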

    This way you can add style to active MenuItem.

    You can see the demo of how it was implemented in chat settings at https://chat.susi.ai/settings

    Also, you can check out the github repo https://github.com/fossasia/chat.susi.ai

    Note – Make sure you define the state-change function outside render, else you will get a warning like this:

    Warning: setState(…): Cannot update during an existing state transition (such as within `render` or another component’s constructor). Render methods should be a pure function of props and state; constructor side-effects are an anti-pattern, but can be moved to `componentWillMount`.

    This will result in abnormal behaviour at runtime, so keep that in mind while creating the function that changes the state.

    Resources

     

    Getting Started Developing on Phimpme Android

    Phimpme is an Android app for editing photos and sharing them on social media. To participate in the project, start by learning how people contribute to open source, and learn about the version control system Git and other tools like Codacy and Travis.

    Firstly, sign up for GitHub. Secondly, find open source projects that interest you. As for me, I started with Phimpme. Then follow these steps:

    1. Go through the project's README.md and read about all the technologies and tools being used.
    2. Now fork that repo into your account.
    3. Open Android Studio (or the other applications required for the project) and import the project through Git.
    4. For Android Studio, sync all the Gradle files and other changes, and you are all done and ready for the development process.

    Install the app and run it on a phone. Now explore each and every bit: use the app as a tester, and think about the edge cases and boundary conditions that will make the ‘ANR’ (App Not Responding) dialog appear. Congratulations, you are ready to create an issue, provided it is verified, original and actually a bug.

    Next,

    • Navigate to the main repo link; you will see an Issues section there.
    • Create a new issue and report every detail about the issue (logcat, screenshots). For example, refer to Issue-1120.
    • Now the next step is to work on that issue.
    • On your machine, do not change the code on the development branch, as that is considered bad practice. Instead, check out a new branch.
      For example, for the above issue I checked out a branch called ‘crashfixed’.
    git checkout -b "Any branch name you want to keep"
    
    • Make the necessary changes on that branch and test that the code compiles and the issue is fixed, followed by:
    git add .
    git commit -m "Fix #Issue No -Description "
    git push origin branch-name
    
    • Now navigate to the repo and you will see an option to create a pull request.
      Mention the issue number, a description of the changes you have made, and include screenshots of the fixed app. For example, see Pull Request 1131.

    With that you have made your first contribution to open source while learning Git. The pull request will trigger checks like Codacy and the Travis build, and if everything passes it is reviewed and merged by co-developers.

    The usual way this works is that it should be reviewed by other co-developers. These co-developers do not need merge or write access to the repository; any developer can review pull requests. This also helps contributors learn about the project and makes the job of core developers easier.

    Resources

    Announcing the FOSSASIA 2017 #CodeHeat Winners

    Drum roll please! We are very proud to announce the 2017 FOSSASIA #CodeHeat Grand Prize Winners and Finalists. 442 participants from 13 countries and 5 continents committed over 1000 pull requests to our repositories over the course of the contest. Congratulations to everyone on this fantastic achievement! The winners have now been chosen by our jury. Thank you for reviewing the contributions.

    Our three Grand Prize winners will travel to the FOSSASIA Summit in Singapore from March 17-19 and present their work to developers from around the world. The winners are (in alphabetical order):

    Mayank Tripathi (mayank408)

    Medozonuo Suohu (magdalenesuo)

    Shubham Padia (shubham-padia)

    Our other finalists will receive a voucher to support trips to Open Tech conferences in the region to connect with developers and the community.

    Achint Sharma (Achint08)

    Deepjyoti Mondal (djmgit)

    Hemant Jadon (hemantjadon)

    Pranjal Paliwal (betterclever)

    Rishi Raj (rishiraj824)

    Saurabh (gogeta95)

    Utkarsh Gupta (uttu357)

     

    Certificate of Participation

    Many developers made outstanding contributions in the last months and we all learned a lot during the contest. Thank you so much! We hope you stay on board and continue to participate in the community to develop Open Tech that improves people's lives, and to seize the opportunity to develop your code profile with FOSSASIA. To receive your certificate of participation now, please claim it here.

    Supporting Partners

    We would also like to extend a special thank you to our jury and to our supporting partners at UNESCO and the Open Tech Society.

    Thanks FOSSASIA mentors!

    And we really love the work of our mentors! Thanks for your patient guidance, helping everyone with learning about best practices, reviewing the huge number of pull requests and discussing questions on the chat channels. Many of our mentors have been students in coding programs before and we are very very proud to see how you help newcomers to join. Keep up the fantastic work!