Comparison between SUSI AI with Mycroft AI and Amazon Alexa

Now is the era of Voice User Interface (VUI) devices, and they play a very important role as personal assistants. Here we compare SUSI AI, Mycroft AI and Amazon Alexa based on the number of skills, their availability, the ease of adding and editing skills, and the scope users have to modify a skill and extend it if needed.

Issue: https://github.com/fossasia/labs.fossasia.org/issues/215

The Comparison:

  1. Starting with the number of skills: Amazon Alexa supports far more skills than both Mycroft AI and SUSI AI.
  2. Availability: Mycroft AI and SUSI AI are available everywhere and can be set up anywhere regardless of the country, whereas Alexa is available in the U.S., U.K., Germany and India, though Amazon is aggressively expanding.
  3. Adding and editing skills: Mycroft and SUSI are open source, so their skills can be added, edited and viewed by the open source community, and issues can be filed to enhance the functionality of a skill. Alexa skills are not open source, and certification and publishing of a skill is done by the Amazon team. Mycroft and SUSI skills can be customized by the user, but this fails with Alexa, as users have to create the same skill from scratch if they want to customize it.
  4. Platforms supported: Mycroft, SUSI and Alexa all support Linux. Mycroft lacks support for Windows and Mac but supports Raspberry Pi and Android; Alexa provides support for Windows, Mac and Raspberry Pi. SUSI also provides support for Android and iOS and can be integrated with speakers, vehicles, the Pi, etc.
  5. Dedicated devices: As of now SUSI AI lacks such a device. Mycroft has the Mark 1 and Alexa has the Echo. These devices are portable and are good candidates for home automation.
  6. Languages used for skill development: Mycroft mostly uses Python. Alexa uses Python, Node.js, C#, etc. for developing applications. SUSI uses its own skill language, but languages like JavaScript can be included in it. It’s easier to specify patterns using wildcards and variables in SUSI.

Because of the different languages used, Mycroft AI skills can’t be used directly in SUSI AI; Mycroft skills have to be converted to SUSI skills first.

Some suggestions for making a dedicated device for SUSI:

  1. We can use a Raspberry Pi, USB headphones and a microphone to build a basic platform.
  2. We can install Jasper to enable voice input on the Pi. Jasper is an open source application that enables us to make voice-controlled applications.
  3. We can use the SUSI server to interact with the device and with home appliances such as lights. The SUSI server can process the state of the appliance (lights in this case) and return it as JSON objects to the Raspberry Pi, which can then change the state as per the user input (a minimal sketch of this idea follows below).
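
A minimal Python sketch of the third idea is shown below. The SUSI chat endpoint, its JSON answer layout, the GPIO pin number and the assumption that a skill replies with the desired light state are all illustrative; treat this as a sketch rather than a finished implementation.

# Illustrative sketch: ask SUSI for the light state and switch a GPIO pin on the Pi.
import requests
import RPi.GPIO as GPIO

LIGHT_PIN = 17  # GPIO pin the light relay is assumed to be wired to

GPIO.setmode(GPIO.BCM)
GPIO.setup(LIGHT_PIN, GPIO.OUT)

def ask_susi(query):
    """Send the user's query to the SUSI server and return the answer text."""
    response = requests.get('https://api.susi.ai/susi/chat.json', params={'q': query})
    data = response.json()
    return data['answers'][0]['actions'][0]['expression']

answer = ask_susi('turn the lights on')
if 'off' in answer.lower():
    GPIO.output(LIGHT_PIN, GPIO.LOW)    # the reply says the lights should be off
else:
    GPIO.output(LIGHT_PIN, GPIO.HIGH)   # otherwise switch the light on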

Make a simple Hello World skill with SUSI:

  1. Visit https://github.com/fossasia/susi_skill_cms/blob/master/docs/Skill_Tutorial.md for a basic introduction to the SUSI skill syntax and how it works.
  2. Go to http://dream.susi.ai.
  3. Enter the skill name, say “hello”.
  4. You will be greeted by a welcome message (“roses are red…..”). Delete it and replace it with the following snippet.
::name <Skill_name> #<-- Enter the skill name, for example hello
::author <author_name>
::author_url <author_url> #<-- You can leave this empty as of now
::description <description> #<-- skill description
::dynamic_content No
::developer_privacy_policy <link> #<-- You can leave this as of now
::image <image_url> #<-- You can leave this as of now
::term_of_use <link>

#Intent. Comments are written with a #
hi|hello|what's up #<-- This is what the user says
Hi|I am good|Hello #<-- This is what the skill answers

  5. Now go to http://susi.ai/chat for testing.
  6. In the SUSI chat dialog box (present at the bottom of the page) enter dream <test application name>, where “test application name” is the name you entered when you first visited http://dream.susi.ai. In this case, enter “dream hello”.
  7. Enter “what's up” in the dialog box and it will give you the output you specified in the skill.

Conclusion:

SUSI has its own strong points, but it lags in some areas, such as the number and type of skills. Like Mycroft, we can start making various skills and try to build a basic prototype of a dedicated SUSI personal assistant device.

Resources

  1. Jasper
  2. Skill addition to SUSI
  3. Mycroft hello world skill

 


How to get a cost effective PCB for Production

Designing a PCB for a DIY project involves drawing up the schematic, which is then turned into a PCB layout. The components used in these PCBs will mostly be “Through Hole” ones, which are commonly available in the market. Once the PCB is printed, using either screen printing techniques or photo-resistive dry films, making alterations to the component mounting pads and connections is still somewhat possible.

When dealing with a professional PCB design, there are many properties we need to consider. DIY PCBs will simply be single sided in most cases, whereas a professional printed circuit board is most likely to have more than one layer. The PCB for the PSLab device has 4 layers. Adding more layers to a PCB design makes it easier to route connections, but on the other hand the cost increases sharply. The designer must try to optimize the design to use as few layers as possible. The following table shows the estimated printing cost for 10 PSLab devices if the device had that many layers.

One Layer    Two Layers    Four Layers    Six Layers
$4.90        $4.90         $49.90         $305.92

Once the layer count increases beyond two, the additional layers become inner layers. The effective area of the inner layers is reduced whenever the designer adds through hole components or vias, which carry a connection from one layer to another. The components used are then largely limited to surface mount components.

Surface Mount components (SMD) are expensive compared to their Through Hole (TH) counterparts, but their smaller size makes it easier to place many components in a smaller area than with Through Hole components. Soldering and assembling Through Hole components can be done manually using hand soldering techniques. SMD components need special tools and soldering equipment to assemble and solder them, and much more precision is required when they are soldered. Hence automated assembly is used in industry, where robot arms place the components and reflow soldering techniques solder the SMD parts. In short, the number of SMD components used in the PCB increases the assembly cost as well as the component cost, but it greatly reduces the size of the PCB.

SMD components come in different packages. Passive components such as resistors and capacitors come in sizes from 0.25 mm up to 7.4 mm. The PSLab device uses the 0805/2012 package, which is easy to find in the market and big enough to pick and assemble by hand. The package name refers to the component’s dimensions: 0805 reads as 0.08 inches long and 0.05 inches wide.
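
As a quick worked example of that naming scheme, the imperial package code can be decoded with a couple of lines of Python (a hypothetical helper, not part of the original post):

def imperial_package_to_inches(code):
    """Convert a 4-digit imperial package code such as '0805' to (length, width) in inches."""
    length = int(code[:2]) / 100.0   # first two digits are hundredths of an inch
    width = int(code[2:]) / 100.0    # last two digits are hundredths of an inch
    return length, width

print(imperial_package_to_inches('0805'))  # (0.08, 0.05): 0.08 in long, 0.05 in wide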

Finding the components in the market is the next challenging task. We can easily purchase components from an online store, but the price will be quite high. If the design can spare some space, it is wise to provide alternative pads for a Through Hole component alongside the SMD component, as Through Hole components are much easier to find in a local store than SMD components.

The following image is taken from SparkFun and illustrates common IC packages. Selecting the correct footprint for the SMD IC, and vice versa, is very important. It is good practice to check the stores for the availability and prices of the components before finalizing the PCB design with footprints and sending it for printing. We may find that some ICs are not available for immediate purchase because the stock has run out, while a different package of the same IC is available. The designer can then alter the footprint and use the more common package type in the design.

Considering all the factors above, a cost effective PCB can be designed and manufactured once the design is optimized to use the minimum number of layers and components with the minimum cost for both assembly and parts.



Creating Bill of Materials for PSLab using KiCAD

The PSLab device consists of hundreds of electronic components: resistors, diodes, transistors and integrated circuits, to name a few. These components are of two types: through hole and surface mounted.

Surface mount components (SMD) are smaller in size. For this reason it is hard to hand solder these components onto a printed circuit board; we use wave soldering or reflow soldering to attach them to a circuit.

Through Hole components (TH) are fairly larger than their SMD counterparts. They are made bigger to make hand soldering easy. These components can also be soldered using wave soldering.

Once a PCB design is complete, the next step is to manufacture it with the help of a PCB manufacturer. They will require the circuit design in “gerber” format along with its Bill of Materials (BoM) for assembly. Most manufacturers require the BoM as a CSV file; some require it in XML format. There are many plugins available for KiCAD which do this job.

KiCAD, when first installed, doesn’t come with a BoM generation tool configured, but there are many Python scripts for this available online free of charge. KiBoM is one of the better known plugins for the task.

Go to the “Eeschema” editor in KiCAD, where the schematic is present, and then click on the “BoM” icon in the menu bar. This opens a dialog box to select which plugin to use to generate the bill of materials.

Initially there won’t be any plugins available in the “Plugins” section. As we add plugins, they are listed so that we can select the one we need. To add a plugin, click on the “Add Plugin” button and browse to the plugin we have already downloaded. A set of plugins also ships with the KiCAD installation.

The path will most probably be (unless you have made any changes to the installation):

/usr/lib/kicad/plugins

Once a plugin is selected, click on the “Generate” button to generate the BoM file. “Plugin Info” will display where the file was created and its name.

Make sure the BoM file is compatible with the format required by the manufacturer: that is, remove all the extra content and add the necessary details, such as the manufacturer’s part numbers and references, replacing the auto-generated part numbers.
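
A rough Python sketch of that clean-up step is shown below; the file names, column names and manufacturer part numbers are purely illustrative and will differ from the plugin’s actual output.

import csv

# Illustrative lookup of manufacturer part numbers, keyed by component value.
mpn_lookup = {'10k': 'RC0805FR-0710KL', '100nF': 'CC0805KRX7R9BB104'}

with open('pslab_bom.csv') as src, open('pslab_bom_final.csv', 'w', newline='') as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames + ['MPN'])
    writer.writeheader()
    for row in reader:
        # Add the manufacturer's part number for each line item; leave it blank if unknown.
        row['MPN'] = mpn_lookup.get(row['Value'], '')
        writer.writerow(row)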



Auto Deployment of Pull Requests on Susper using Surge Technology

Susper is being improved every day. Following best practices in the organization, each pull request includes a working demo link of the fix. Currently, the demo link for Susper can be generated with GitHub Pages by running two simple commands, ng build and npm run deploy. On a slow internet connection this process can take up to 30 minutes to produce a working demo link for the fix.

Surge is a technology which publishes or generates static web page demo links, making it easier for developers to deploy their web apps. There are several benefits of using Surge over generating the demo link with GitHub Pages:

  • As soon as the pull request passes Travis CI, the deployment link is generated; it is set up so that no extra terminal commands are required.
  • Faster loading compared to a deployment link generated using GitHub Pages.

Surge can be used to deploy only static web pages, that is, websites whose contents are fixed.

To implement the feature of auto-deployment of pull request using surge, one can follow up these steps:

  • Create a pr_deploy.sh file which will be executed during Travis CI testing.
  • The pr_deploy.sh file is executed after success, i.e. when Travis CI passes, using the command bash pr_deploy.sh.

The pr_deploy.sh file for Susper looks like this:

#!/usr/bin/env bash
if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
    echo "Not a PR. Skipping surge deployment."
    exit 0
fi

# Build the Angular app for production
ng build --prod

npm i -g surge

export SURGE_LOGIN=test@example.co.in
# Token of a dummy account
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206

export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-fossasia-susper.surge.sh
surge --project ./dist --domain $DEPLOY_DOMAIN;

 

Once the pr_deploy.sh file has been created, execute it from the travis.yml, for example in the after_success section, using the command bash pr_deploy.sh.

In this way, we have integrated the surge technology for auto-deployment of the pull requests in Susper.



Automatically deploy SUSI Web Chat on surge after Travis passes

We have been using Surge from the very beginning of the SUSI web chat and SUSI skill CMS projects to provide preview links for pull requests. Surge is a really easy tool to use: we can deploy our static web pages quickly. But if the author had to change something in a pull request, they had to deploy to surge again and update the link. By connecting this operation with Travis CI we can minimise such rework: we can embed the deployment commands inside the travis.yml.

We can tell Travis to create a preview link (surge deployment) when the test cases pass by embedding the surge deployment commands inside the travis.yml like below.

This is the travis.yml file:

sudo: required
dist: trusty
language: node_js
node_js:
 - 6
script:
 - npm test
after_success:
 - bash ./surge_deploy.sh
 - bash ./deploy.sh
cache:
 directories:
   - node_modules
branches:
 only:
   - master

The surge deployment commands are inside the “surge_deploy.sh” file.
In it we first check whether the build is for a pull request at all; if it is not, we skip the deployment. We can do that like below.

if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
   echo "Not a PR. Skipping surge deployment"
   exit 0
fi

Then we have to install surge in the environment, install all the npm packages and run the build.

npm i -g surge
npm install
npm run build

Since there is an issue with navigating directly to child routes, we take a copy of the index.html file and name it 404.html.

cp ./build/index.html ./build/404.html

Then set two environment variables for your surge email address and surge token.

export SURGE_LOGIN=fossasiasusichat@example.com
# surge token (run 'surge token' to get the token)
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206

Now we have to construct the surge deployment URL (domain). It should be unique, so we build a URL that contains the pull request number.

export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-susi-web-chat.surge.sh
surge --project ./build/ --domain $DEPLOY_DOMAIN;

Since all the static content generated by the build process is in the “build” folder, we tell surge to serve the static HTML files from there.
Now make a pull request. You will find the deployment link in the Travis CI report after Travis passes.

Expand the output of surge_deploy.sh in the Travis log and you will find the deployment link as defined in the surge_deploy.sh file.

References:

  • Integrating with travis ci – https://surge.sh/help/integrating-with-travis-ci
  • React Routes to Deploy 404 page on gh-pages and surge – https://blog.fossasia.org/react-routes-to-deploy-404-page-on-gh-pages-and-surge/

Recognise new SUSI users and Welcome them

The SUSI web chat application is up and running now, and it gives good answers for most of the questions users ask. But for new users the application did not display a welcome message or an introduction to the application, which is confusing for them. So a new requirement arrived: show a welcome message to new users or give them an introduction to the application.

To show a welcome message we need to identify new users, and for that I used cookies.
I added a new dialog to show the welcome message and an introductory video, then placed the code below in the DialogSection.js file, which contains the code for every dialog box of the application.

<Dialog
  contentStyle={{ width: '610px' }}
  title="Welcome to SUSI Web Chat"
  open={this.props.tour}
>
  {/* Introductory video embedded in the welcome dialog */}
  <iframe
    width="560"
    height="315"
    src="https://www.youtube.com/embed/9T3iMoAUKYA"
    gesture="media"
    allow="encrypted-media"
  />
  <Close style={closingStyle} onTouchTap={this.props.onRequestCloseTour()} />
</Dialog>

We have already installed the ‘universal-cookie’ npm module in our application, so we can use it to read cookies.
I used the following to check whether the user is new or not.

<DialogSection
  {...this.props}
  openLogin={this.state.showLogin}
  .
  .
  .
  onRequestCloseTour={()=>this.handleCloseTour}
  tour={!cookies.get('visited')}
/>

This shows the dialog box to every user, but we don’t need to display the welcome message to returning users, so we store a cookie on the user’s computer.
I store the cookie when the user clicks on the close button of the welcome dialog box.
The function below creates the new cookie.

handleCloseTour = () => {
  this.setState({
    showLogin: false,
    showSignUp: false,
    showThemeChanger: false,
    openForgotPassword: false,
    tour: false
  });
  cookies.set('visited', true, { path: '/' });
}

 

The cookies.set('visited', true, { path: '/' }) line above sets the cookie, and { path: '/' } makes the cookie accessible on all pages.



The Road to Success in Google Summer of Code 2017

This is the best time for GCI students to get an overview of the GSoC experience, and for all aspiring participants to get involved in the different projects of FOSSASIA.

I’m a junior-year undergraduate student pursuing a B.Tech in Electrical Engineering from the Indian Institute of Technology Patna. This summer, I spent my time coding in Google Summer of Code with the FOSSASIA organization. It feels great to be an open-source enthusiast, and having Google as a sponsor is the icing on the cake. You can learn new things here and meet new people.

I came to know about GSoC through my senior colleagues who were selected for GSoC in 2016. It was around September 2016, I was in the 2nd year of college, and the results of that year’s GSoC had just been declared.

What is GSoC?

Consider GSoC as a big bowl full of small balls, where the small balls are open-source organizations. Google basically acts as a sponsor for the open-source organizations. A timeline is proposed for the participating organizations, and then students select their favorite organization and start to contribute to it. Believe me, it’s not specific to the computer science branch; anyone can take part in it, and there is no minimum CPI requirement. I consider myself an example: I come from the electrical branch with a not-so-good academic record, yet I successfully became part of GSoC 2017.

How to select an organization?

This is the most important step and it takes time. I wandered through around 100 organizations to find where my interest actually lay. Here is how to narrow this down and find your organization a little quicker. Take a pen and paper (kindly don’t use the notepad on your PC) and write down your fields of interest in computer science. Number every point in decreasing order of interest, and for each field write down its basic prerequisites. Visit the GSoC website, go to the organizations tab, and use the search filter for the organization’s field of work. Select one organization, dig through its website, and look at its previous projects and applications. If nothing fits you, repeat the same with another organization. If an organization interests you, look for one of its projects. First of all, look at the project’s application, give it a try, and make sure to give feedback to the organization. Then try to find out which languages, modules, etc. the project uses and how the project works. Don’t worry if nothing goes into your mind at first. Find the developers’ mailing list, their chat channel and their code base, and ask the developers there for help.

First Love It:

Open source is a different world that exists on Earth. All the organizations are open source and all their code is open and free to view. Find the things that interest you the most and start to love the work. If you don’t understand some code, learn by doing and asking. Most of the time we don’t get favorable responses; in such times we need to carry on and have patience for the best to happen.

My Favourite part:

GSoC has been my dream since the day I came to know about it. Through it, students get a chance to explore open-source software, and organizations get a chance to bring developers on board. It is a great initiative by Google that encourages developers to increase their use of open source. It is also one of the ways through which one can look into other developers’ code, help them out and get help in return.

GSoC is the platform through which one can implement lots of new things, meet new people, develop new software and see the world around in a different way. That’s what happened to me: just at the end of the first phase, my love for open source increased exponentially. Now I see every problem in my life as something to solve through open source. Whether it’s arranging an event or designing an invitation, I am encouraged to use open-source tools to help me out. It becomes very easy to distribute data and convey information through open source, so people can reach you much more easily.

You always see a thing from your own perspective and it seems the best, but open source gives it a view through the perspective of the world and gets the best out of it by compiling all the sources. One can give ideas and views, find something that others can’t even see, and increase one’s karma through contribution. And all these things have been made possible through Google. I became such that I could devote the rest of my life to working on open source. GSoC is responsible for making open-source contribution part of my daily life. It makes me feel really bad if my GitHub profile page has 0 contributions at the end of the day. Open source opens the door to another world.

Challenging part:

To conclude, I would say that GSoC made me love a challenge. Things that come easily to me no longer taste good at all. Specifically, GSoC’s most challenging part is getting into it, that is, getting selected. I still can’t believe that I was selected. From then onwards it’s just fun and learning. Each and every day I encountered several issues, bugs, etc., but just before going to bed at night there were things which collectively made me feel that, whether the bug had been solved or not, I had been able to break through the outermost covering of that conch shell. Such things increase the motivation and light up the enthusiasm to tackle the problem. Open source taught me to manage not only different snapshots of software but also of time. I learned to manage the different tasks of the day efficiently, with open-source contribution included as part of my daily life.

Advice to students:

The only problem new developers have is getting started. I’d advise them to close their eyes and dive into it without thinking about whether they will be able to complete the task or not. Believe me, whether the task gets completed or not, you will gradually find yourself far above the level you were at when you began.

Just learn by doing the things.

Make mistakes and list them as “things that will not work” so others may read them and avoid them.

GSoC Project link: https://summerofcode.withgoogle.com/projects/#5560333780385792
Final Code Submission: https://gist.github.com/meets2tarun/270f151d539298831ce542be5f733c82


Setting up SUSI Desktop Locally for Development and Using Webview Tag and Adding Event Listeners

SUSI Desktop is a cross platform desktop application based on Electron which presently uses chat.susi.ai as a submodule and allows users to interact with SUSI right from their desktop.

Any Electron app essentially comprises the following components:

    • Main Process (Managing windows and other interactions with the operating system)
    • Renderer Process (Manage the view inside the BrowserWindow)

Steps to setup development environment

      • Clone the repo locally.
$ git clone https://github.com/fossasia/susi_desktop.git
$ cd susi_desktop
      • Install the dependencies listed in package.json file.
$ npm install
      • Start the app using the start script.
$ npm start

Structure of the project

The project was restructured to ensure that the working environments of the Main and Renderer processes are separate, which makes the codebase easier to read and debug. This is how the current project is structured.

The root directory of the project contains another directory, ‘app’, which contains our Electron application. Then we have a package.json which holds information about the project and the modules required for building it, along with other GitHub helper files.

Inside the app directory-

  • Main – Files for managing the main process of the app
  • Renderer – Files for managing the renderer process of the app
  • Resources – Icons for the app and the tray/media files
    Webview Tag

    The webview tag displays external web content in an isolated frame and process; it is used to load chat.susi.ai in a BrowserWindow as

    <webview src="https://chat.susi.ai/"></webview>
    

    Adding event listeners to the app

    Various electron APIs were used to give a native feel to the application.

  • Send focus to the window WebContents when the app window gains focus.

    win.on('focus', () => {
        win.webContents.send('focus');
    });

  • Display the window only once the DOM has completely loaded.

    const page = mainWindow.webContents;
    ...
    page.on('dom-ready', () => {
        mainWindow.show();
    });

  • Display the window on the 'ready-to-show' event.

    win.once('ready-to-show', () => {
        win.show();
    });
    

    Resources

    1. A quick article to understand electron’s main and renderer process by Cameron Nokes at Medium link
    2. Official documentation about the webview tag at https://electron.atom.io/docs/api/webview-tag/
    3. Read more about electron processes at https://electronjs.org/docs/glossary#process
    4. SUSI Desktop repository at https://github.com/fossasia/susi_desktop.


    Installing Query Server Search and Adding Search Engines

    The query server can be used to search a keyword/phrase on a search engine (Google, Yahoo, Bing, Ask, DuckDuckGo and Yandex) and get the results as json or xml. The tool also stores the searched query string in a MongoDB database for analytical purposes. (The search engine scraper is based on the scraper at fossasia/searss.)

    In this blog, we will talk about how to install Query-Server and implement the search engine of your own choice as an enhancement.

    How to clone the repository

    Sign up / Login to GitHub and head over to the Query-Server repository. Then follow these steps.

    1. Go ahead and fork the repository

    https://github.com/fossasia/query-server

    2. Star the repository

    3. Get the clone of the forked version on your local machine using

    git clone https://github.com/<username>/query-server.git

    4. Add upstream to synchronize repository using

    git remote add upstream https://github.com/fossasia/query-server.git

    Getting Started

    The Query-Server application setup basically consists of the following:

    1. Installing Node.js dependencies

    npm install -g bower
    
    bower install

    2. Installing Python dependencies (Python 2.7 and 3.4+)

    pip install -r requirements.txt

    3. Setting up MongoDB server

    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
    
    echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release -sc)"/mongodb-org/3.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
    
    sudo apt-get update
    
    sudo apt-get install -y mongodb-org
    
    sudo service mongod start

    4. Now, run the query server:

    python app/server.py

    Go to http://localhost:7001/
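
    Once the server is running, the search endpoint described later in this post can also be exercised from Python; the snippet below assumes the server returns the list of {'title': ..., 'link': ...} entries produced by the scrapers.

    import requests

    # Ask the local query server to search Google for "fossasia" and print the first result.
    resp = requests.get('http://localhost:7001/api/v1/search/google',
                        params={'query': 'fossasia', 'format': 'json', 'num': 10})
    results = resp.json()
    print(results[0])   # expected to look like {'title': '...', 'link': '...'}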

    How to contribute:

    Add a search engine of your own choice

    You can add a search engine of your choice apart from the existing ones in application.

    • Just add or edit 4 files and you are ready to go.

    For adding a search engine (say Exalead):

    1. Add exalead.py in app/scrapers directory :

    from __future__ import print_function
    from generalized import Scraper


    class Exalead(Scraper):  # Exalead class inheriting Scraper
        """Scraper class for Exalead"""

        def __init__(self):
            self.url = 'https://www.exalead.com/search/web/results/'
            self.defaultStart = 0
            self.startKey = 'start_index'

        def parseResponse(self, soup):
            """Parses the response and returns the list of urls

            Returns: urls (list)
                    [[Title1, url1], [Title2, url2], ..]
            """
            urls = []
            for a in soup.findAll('a', {'class': 'title'}):  # Scrape data with the corresponding tag
                url_entry = {'title': a.getText(), 'link': a.get('href')}
                urls.append(url_entry)

            return urls

    Here, scraping the data depends on the tag/class from which we can find the respective link and title of the webpage.

    2. Edit generalized.py in app/scrapers directory

    from __future__ import print_function
    import json
    import sys

    from google import Google
    from duckduckgo import Duckduckgo
    from bing import Bing
    from yahoo import Yahoo
    from ask import Ask
    from yandex import Yandex
    from exalead import Exalead   # import exalead.py


    scrapers = {
        'g': Google(),
        'b': Bing(),
        'y': Yahoo(),
        'd': Duckduckgo(),
        'a': Ask(),
        'yd': Yandex(),
        't': Exalead()  # Add exalead to scrapers with index 't'
    }

    From the scrapers dictionary, we can see which search engines the project supports.

    3. Edit server.py in app directory

    @app.route('/api/v1/search/<search_engine>', methods=['GET'])
    def search(search_engine):
        try:
            num = request.args.get('num') or 10
            count = int(num)
            qformat = request.args.get('format') or 'json'
            if qformat not in ('json', 'xml'):
                abort(400, 'Not Found - undefined format')

            engine = search_engine
            if engine not in ('google', 'bing', 'duckduckgo', 'yahoo', 'ask', 'yandex', 'exalead'):  # Add exalead to the tuple
                err = [404, 'Incorrect search engine', qformat]
                return bad_request(err)

            query = request.args.get('query')
            if not query:
                err = [400, 'Not Found - missing query', qformat]
                return bad_request(err)

    This checks whether the requested search engine is supported by the project.

    4.  Edit index.html in app/templates directory

     <button type="submit" value="ask" class="btn btn-lg  search btn-outline"><img src="{{ url_for('static', filename='images/ask_icon.ico') }}" width="30px" alt="Ask Icon"> Ask</button>

     <button type="submit" value="yandex" class="btn btn-lg  search btn-outline"><img src="{{ url_for('static', filename='images/yandex_icon.png') }}" width="30px" alt="Yandex Icon"> Yandex</button>

     <button type="submit" value="exalead" class="btn btn-lg  search btn-outline"><img src="{{ url_for('static', filename='images/exalead_icon.png') }}" width="30px" alt="Exalead Icon"> Exalead</button> <!-- Add button for exalead -->
    
    In a nutshell: scrape the data using the anchor tag with the specific class name.

    For example, searching fossasia using Exalead:

    https://www.exalead.com/search/web/results/?q=fossasia&start_index=1

    Here, after inspecting the element for the links, you will find that the anchor with class name "title" holds the link and the title of the webpage. So, scrape the data using the title-classed anchor tag.

    Similarly, you can add other search engines as well.



    UI automated testing using Selenium in Badgeyay

    With all the major functionality packed into the badgeyay web application, it was time to add some automated testing to speed up the review process for known errors and to check that code contributions do not break anything. We decided to go with Selenium for our testing requirements.

    What is Selenium?

    Selenium is a portable software-testing framework for web applications. Selenium provides a playback (formerly also recording) tool for authoring tests without the need to learn a test scripting language. In other words, Selenium does browser automation: it tells a browser to click some element, populate and submit a form, navigate to a page, and perform any other form of user interaction.

    Selenium supports multiple languages including C#, Groovy, Java, Perl, PHP, Python, Ruby and Scala. Here, we are going to use Python (and specifically python 2.7).

    First things first:
    To install these packages, run the following on the CLI:

    pip install selenium==2.40
    pip install nose
    

    Don’t forget to add them in the requirements.txt file

    Web Browser:
    We also need to have Firefox installed on your machine.

    Writing the Test
    An automated test automates what you’d do via manual testing – but it is done by the computer. This frees up time and allows you to do other things, as well as repeat your testing. The test code is going to run a series of instructions to interact with a web browser – mimicking how an actual end user would interact with an application. The script is going to navigate the browser, click a button, enter some text input, click a radio button, select a drop down, drag and drop, etc. In short, the code tests the functionality of the web application.

    A test for the web page title:

    import unittest
    from selenium import webdriver
    
    class SampleTest(unittest.TestCase):
    
        @classmethod
        def setUpClass(cls):
            cls.driver = webdriver.Firefox()
            cls.driver.get('http://badgeyay-dev.herokuapp.com/')
    
        def test_title(self):
            self.assertEqual(self.driver.title, 'Badgeyay')
    
        @classmethod
        def tearDownClass(cls):
            cls.driver.quit()
    

     

    Run the test using nosetests test.py

    Clicking the element
    For our next test, we click the menu button, and check if the menu becomes visible.

    elem = self.driver.find_element_by_css_selector(".custom-menu-content")
    self.driver.find_element_by_css_selector(".glyphicon-th").click()
    self.assertTrue(elem.is_displayed())
    

     

    Uploading a CSV file:
    For our next test, we upload a CSV file and see if a success message pops up.

    def test_upload(self):
            Imagepath = os.path.abspath(os.path.join(os.getcwd(), 'badges/badge_1.png'))
            CSVpath = os.path.abspath(os.path.join(os.getcwd(), 'sample/vip.png.csv'))
            self.driver.find_element_by_name("file").send_keys(CSVpath)
            self.driver.find_element_by_name("image").send_keys(Imagepath)
            self.driver.find_element_by_css_selector("form .btn-primary").click()
            time.sleep(3)
            success = self.driver.find_element_by_css_selector(".flash-success")
            self.assertIn(u'Your badges has been successfully generated!', success.text)
    

     

    The entire code can be found on: https://github.com/fossasia/badgeyay/tree/development/app/tests

    We can also use PhantomJS along with Selenium for UI testing without opening a web browser window. We use this for badgeyay to run the tests for every commit in Travis CI, which cannot open a program window.
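
    A minimal sketch of that headless setup, assuming the phantomjs binary is installed and available on the PATH (Selenium's PhantomJS driver simply takes the place of Firefox here):

    from selenium import webdriver

    # Use PhantomJS instead of Firefox so the suite can run on Travis CI
    # without opening a browser window.
    driver = webdriver.PhantomJS()
    driver.get('http://badgeyay-dev.herokuapp.com/')
    print(driver.title)   # should print 'Badgeyay', as asserted in the tests above
    driver.quit()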

