FOSSASIA Summit 2018 Singapore – Call for Speakers

The FOSSASIA Open Tech Summit is Asia’s leading Open Technology conference for developers, companies, and IT professionals. The event takes place from Thursday, 22nd March to Sunday, 25th March 2018 at the Lifelong Learning Institute in Singapore.

Over four days, developers, technologists, scientists, and entrepreneurs convene to collaborate, share information and learn about the latest in open technologies, including Artificial Intelligence software, DevOps, Cloud Computing, Linux, Science, Hardware and more. The theme of this year’s event is “Towards the Open Conversational Web”.

For our featured event we are looking for speaker submissions about Open Source in the following areas:

  • Artificial Intelligence, Algorithms, Search Engines, Cognitive Experts
  • Open Design, Hardware, Imaging
  • Science, Tech and Education
  • Kernel and Platform
  • Database
  • Cloud, Container, DevOps
  • Internet Society and Community
  • Open Event Solutions
  • Security and Privacy
  • Open Source in Business
  • Blockchain

There will be special events celebrating the 20th anniversary of the Open Source Initiative and its impact on Open Source business. An exhibition space is available for company and project stands.

Submission Guidelines

Please propose your session as early as possible and include a description that is as complete as possible. The description is of particular importance for the selection. Once accepted, speakers will receive a code for a free speaker ticket plus two standard tickets for their partner or friends. Sessions are accepted on an ongoing basis.

Submission Link: 2018.fossasia.org/speaker-registration

Dates & Deadlines

Please send us your proposal as soon as possible via the FOSSASIA Summit speaker registration.

Deadline for submissions: December 27th, 2017

Late submissions: Later submissions are possible, but early submissions have priority

Notification of acceptance: On an ongoing basis

Schedule Announced: January 20, 2018

FOSSASIA Open Tech Summit: March 22nd – 25th, 2018

Sessions and Tracks

Talks and Workshops

Talk slots are 20 minutes long, plus 5–10 minutes for questions and answers. The idea is that participants use the sessions to get an overview of others’ work and can then follow up in more detail in break-out areas, where they discuss further and start to work together. Speakers can also sign up for either a 1-hour or a 2-hour workshop session. Longer sessions are possible in principle. Please tell us the proposed length of your session at the time of submission.

Lightning talks

You have some interesting ideas but do not want to submit a full talk? We suggest you go for a lightning talk, a 5-minute slot to present your idea or project. You are welcome to continue the discussion in breakout areas. There are tables and chairs to serve your get-togethers.

Stands and assemblies

We offer spaces in our exhibition area for companies, projects, installations, team gatherings and other fun activities. We are curious to know what you would like to make, bring or show. Please add details in the submission form.

Developer Rooms/Track Hosts

Get in touch early if you plan to organize a developer room at the event. FOSSASIA is also looking for team members who are interested in co-hosting and moderating tracks. Please sign up to become a host here.

Publication

Audio and video recordings of the lectures will be published in various formats under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. This license allows commercial use by media institutions as part of their reporting. If you do not wish for material from your lecture to be published or streamed, please let us know in your submission.

Sponsorship & Contact

If you would like to sponsor FOSSASIA or have any questions, please contact us via [email protected].

Suggested Topics

  • Artificial Intelligence (SUSI.AI, Algorithms, Cognitive Expert Systems, AI on a Chip)
  • Hardware (Architectures, Maker Culture, Small Devices)
  • 20 years Impact of Open Source in Business
  • DevOps (Continuous Delivery, Lean IT, Moving at Cloud-speed)
  • Networking (Software Defined Networking, OpenFlow, Satellite Communication)
  • Security (Coding, Configuration, Testing, Malware)
  • Cloud & Microservices (Containers – Libraries, Runtimes, Composition; Kubernetes; Docker, Distributed Services)
  • Databases (Location-aware and Mapping, Replication and Clustering, Data Warehousing, NoSQL)
  • Science and Applications (Pocket Science Lab, Neurotech, Biohacking, Science Education)
  • Business Development (Open Source Business Models, Startups, Kickstarter Campaigns)
  • Internet of Everything (Smart Home, Medical Systems, Environmental Systems)
  • Internet Society and Culture (Collaborative Development, Community, Advocacy, Government, Governance, Legal)
  • Kernel Development and Linux On The Desktop (Meilix, Light Linux systems, Custom Linux Generator)
  • Open Design and Libre Art (Open Source Design)
  • Open Event (Event Management systems, Ticketing solutions, Scheduling, Event File Formats)

Links

Speaker Registration and Proposal Submission:
2018.fossasia.org/speaker-registration

FOSSASIA Summit: 2018.fossasia.org

FOSSASIA Summit 2017: Event Wrap-Up

FOSSASIA Photos: flickr.com/photos/fossasia/

FOSSASIA Videos: Youtube FOSSASIA

FOSSASIA on Twitter: twitter.com/fossasia

Join Codeheat Coding Contest 2017

Codeheat is a coding contest for developers interested in contributing to Open Source software and hardware projects at FOSSASIA. Join the development of real-world software applications, build up your developer profile, learn new coding skills, collaborate with the community and make new friends from around the world! Sign up for #CodeHeat here now and follow Codeheat on Twitter.

The contest runs until 1st February 2018. All FOSSASIA projects on GitHub take part in Codeheat.

Grand prize winners will be invited to present their work at the FOSSASIA OpenTechSummit in Singapore from March 23rd – 25th, 2018, and will get 600 SGD in travel funding to attend, plus a free speaker ticket and beautiful swag.

Our jury will choose three winners from the top 10 contributors according to code quality and relevance of commits for the project. The jury also takes other contributions like submitted weekly scrum reports and monthly technical blog posts into account, but of course awesome code is the most important item on the list.

Other participants will have the chance to win T-shirts, swag and vouchers to attend Open Tech events in the region and will receive certificates of participation.


Team mentors and jury members from 10 different countries support participants of the contest.

Participants should take the time to read through the contest FAQ and familiarize themselves with the introductory information and Readme.md of each project before starting to work on an issue.

Developers interested in the contest can also contact mentors through project channels on the FOSSASIA gitter.

Additional Links

Website: codeheat.org

Codeheat Twitter: twitter.com/codeheat_

Codeheat Facebook: facebook.com/codeheat.org

Participating Projects: All FOSSASIA Repositories on GitHub at github.com/fossasia

Setting up SUSI Desktop Locally for Development and Using Webview Tag and Adding Event Listeners

SUSI Desktop is a cross-platform desktop application based on Electron which presently uses chat.susi.ai as a submodule and allows users to interact with SUSI right from their desktop.

Any Electron app essentially comprises the following components:

    • Main Process (manages windows and other interactions with the operating system)
    • Renderer Process (manages the view inside the BrowserWindow)

Steps to setup development environment

      • Clone the repo locally.
$ git clone https://github.com/fossasia/susi_desktop.git
$ cd susi_desktop
      • Install the dependencies listed in the package.json file.
$ npm install
      • Start the app using the start script.
$ npm start

Structure of the project

The project was restructured so that the working environments of the Main and Renderer processes are separate, which makes the codebase easier to read and debug. This is how the project is currently structured.

The root directory of the project contains another directory, ‘app’, which contains our Electron application. Alongside it is a package.json file with information about the project and the modules required for building it, as well as other GitHub helper files.

Inside the app directory-

  • Main – Files for managing the main process of the app
  • Renderer – Files for managing the renderer process of the app
  • Resources – Icons for the app and the tray/media files
  • Webview Tag

    Displays external web content in an isolated frame and process. This is used to load chat.susi.ai in the BrowserWindow as:

    <webview src="https://chat.susi.ai/"></webview>
    

    Adding event listeners to the app

    Various Electron APIs were used to give a native feel to the application.

  • Send focus to the window WebContents when the app window gains focus.

    win.on('focus', () => {
        win.webContents.send('focus');
    });

  • Display the window only once the DOM has completely loaded.

    const page = mainWindow.webContents;
    ...
    page.on('dom-ready', () => {
        mainWindow.show();
    });

  • Display the window on the 'ready-to-show' event.

    win.once('ready-to-show', () => {
        win.show();
    });
    

    Resources

    1. A quick article to understand Electron’s main and renderer processes by Cameron Nokes on Medium
    2. Official documentation about the webview tag at https://electron.atom.io/docs/api/webview-tag/
    3. Read more about electron processes at https://electronjs.org/docs/glossary#process
    4. SUSI Desktop repository at https://github.com/fossasia/susi_desktop.

    Installing Query Server Search and Adding Search Engines

    The query server can be used to search for a keyword/phrase on a search engine (Google, Yahoo, Bing, Ask, DuckDuckGo and Yandex) and get the results as JSON or XML. The tool also stores the searched query strings in a MongoDB database for analytical purposes. (The search engine scraper is based on the scraper at fossasia/searss.)

    In this blog, we will talk about how to install the Query-Server and add a search engine of your own choice as an enhancement.

    How to clone the repository

    Sign up / Login to GitHub and head over to the Query-Server repository. Then follow these steps.

    1. Go ahead and fork the repository

    https://github.com/fossasia/query-server

    2. Star the repository

    3. Get the clone of the forked version on your local machine using

    git clone https://github.com/<username>/query-server.git

    4. Add the upstream remote to synchronize your repository using

    git remote add upstream https://github.com/fossasia/query-server.git

    Getting Started

    Setting up the Query-Server application basically consists of the following steps:

    1. Installing Node.js dependencies

    npm install -g bower
    
    bower install

    2. Installing Python dependencies (Python 2.7 and 3.4+)

    pip install -r requirements.txt

    3. Setting up MongoDB server

    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
    
    echo "deb http://repo.mongodb.org/apt/ubuntu "$(lsb_release   -sc)"/mongodb-org/3.0   multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.0.list
    
    sudo apt-get update
    
    sudo apt-get install -y mongodb-org
    
    sudo service mongod start

    4. Now, run the query server:

    python app/server.py

    Go to http://localhost:7001/
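
    As a quick check that everything is wired up, you can call the search API from Python. This is a minimal sketch assuming the server is running locally on port 7001 and the requests package is installed; the route and the query/format/num parameters are the ones handled in app/server.py shown later in this post.

    import requests

    # Ask the Google scraper for "fossasia" and request JSON output.
    response = requests.get(
        'http://localhost:7001/api/v1/search/google',
        params={'query': 'fossasia', 'format': 'json', 'num': 10}
    )
    print(response.json())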

    How to contribute:

    Add a search engine of your own choice

    You can add a search engine of your choice apart from the existing ones in the application.

    • Just add or edit 4 files and you are ready to go.

    For adding a search engine (say, Exalead):

    1. Add exalead.py in the app/scrapers directory:

    from __future__ import print_function
    from generalized import Scraper


    class Exalead(Scraper):  # Exalead class inheriting Scraper
        """Scraper class for Exalead"""

        def __init__(self):
            self.url = 'https://www.exalead.com/search/web/results/'
            self.defaultStart = 0
            self.startKey = 'start_index'

        def parseResponse(self, soup):
            """Parses the response and returns the list of results

            Returns: urls (list)
                    [{'title': title1, 'link': url1}, {'title': title2, 'link': url2}, ...]
            """
            urls = []
            # Scrape data from the anchor tags carrying the corresponding class
            for a in soup.findAll('a', {'class': 'title'}):
                url_entry = {'title': a.getText(), 'link': a.get('href')}
                urls.append(url_entry)

            return urls

    Here, scraping the data depends on the tag/class from which we can find the respective link and the title of the web page.

    2. Edit generalized.py in the app/scrapers directory:

    from __future__ import print_function
    import json
    import sys

    from google import Google
    from duckduckgo import Duckduckgo
    from bing import Bing
    from yahoo import Yahoo
    from ask import Ask
    from yandex import Yandex
    from exalead import Exalead   # import exalead.py


    scrapers = {
        'g': Google(),
        'b': Bing(),
        'y': Yahoo(),
        'd': Duckduckgo(),
        'a': Ask(),
        'yd': Yandex(),
        't': Exalead()  # add Exalead to scrapers with the key 't'
    }

    From the scrapers dictionary, we can see which search engines the project supports.

    3. Edit server.py in the app directory:

    @app.route('/api/v1/search/<search_engine>', methods=['GET'])
    def search(search_engine):
        try:
            num = request.args.get('num') or 10
            count = int(num)
            qformat = request.args.get('format') or 'json'
            if qformat not in ('json', 'xml'):
                abort(400, 'Not Found - undefined format')

            engine = search_engine
            # add 'exalead' to the tuple of supported engines
            if engine not in ('google', 'bing', 'duckduckgo', 'yahoo', 'ask', 'yandex', 'exalead'):
                err = [404, 'Incorrect search engine', qformat]
                return bad_request(err)

            query = request.args.get('query')
            if not query:
                err = [400, 'Not Found - missing query', qformat]
                return bad_request(err)
    
    

    This checks whether the requested search engine is supported by the project or not.

    4. Edit index.html in the app/templates directory:

         <button type="submit" value="ask" class="btn btn-lg  search btn-outline"><img src="{{ url_for('static', filename='images/ask_icon.ico') }}" width="30px" alt="Ask Icon"> Ask</button>
    
         <button type="submit" value="yandex" class="btn btn-lg  search btn-outline"><img src="{{ url_for('static', filename='images/yandex_icon.png') }}" width="30px" alt="Yandex Icon"> Yandex</button>
    
         <button type="submit" value="exalead" class="btn btn-lg  search btn-outline"><img src="{{ url_for('static', filename='images/exalead_icon.png') }}" width="30px" alt="Exalead Icon"> Exalead</button> # Add button for exalead
    
    • In a nutshell: scrape the data using the anchor tag with the specific class name.

    For example, searching fossasia using exalead

    https://www.exalead.com/search/web/results/?q=fossasia&start_index=1

    Here, after inspecting the element for the links, you will find that the anchor with the class name title holds the link and the title of the web page. So, scrape the data using the title-classed anchor tag.

    Similarly, you can add other search engines as well.
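
    As a quick way to sanity-check a new scraper in isolation (a rough sketch assuming the scrapers use BeautifulSoup for parsing and that bs4 is installed), you can feed parseResponse a small HTML snippet directly:

    from bs4 import BeautifulSoup
    from exalead import Exalead

    # A minimal fragment shaped like an Exalead result entry.
    html = '<a class="title" href="https://fossasia.org">FOSSASIA</a>'
    soup = BeautifulSoup(html, 'html.parser')

    print(Exalead().parseResponse(soup))
    # -> [{'title': 'FOSSASIA', 'link': 'https://fossasia.org'}]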

    Resources

    KiCAD Simulation to Validate Circuitry in PSLab Device

    A circuit is a combination of passive or active electronic components which are interconnected with wires and provided with power to perform a specific task. Bringing a conceptual circuit design into an actual model involves several steps. It all starts with a problem definition such as “a power module to regulate the input voltage to an output of 5 V”. The next step is to design the schematic with the help of a design tool. Once the schematic is complete, the PCB layout can be made, which will later be printed out as the final circuit.

    Testing the schematic circuit for performance and functionality is very important, as once the circuit is printed out there is no way to modify the wiring or components. That is where SPICE simulation comes into the picture.

    The PSLab device consists of hundreds of circuit components interconnected on a four-layer printed circuit board. A fault in one sub-circuit may cause the complete device to fail. Hence each of them must be tested and simulated using proper tools to ensure functionality against a test input data set.

    KiCAD requires an external SPICE engine to be installed. Ngspice is a well-known SPICE tool used in the industry.

    This blog describes the test procedures carried out to ensure that the circuitry in the PSLab device functions correctly. Once the circuit is complete, generate the SPICE netlist. This will open up a dialog box; in the “Spice” tab, select “Prefix references ‘U’ and ‘IC’ with ‘X’”.

    The U and IC prefixes are used for chips which cannot be simulated with SPICE. Click “Generate” to build the netlist. Note that this is not the netlist we use to build the PCB, but a netlist which can be used in the SPICE simulation.

    Now browse to the project folder and copy the file with a .cki extension instead of .cir to make it compatible with the command-line SPICE tools:

    cp <filename>.cir <filename>.cki
    

    Then open the file using a text editor and modify the GND connection to have a global ground connection by replacing “GND” with “0”, which is required in SPICE simulation. Once the SPICE code is complete, run the following commands to compile the SPICE script:

    export SPICE_ASCIIRAWFILE=1
    ngspice -b -r <filename>.raw <filename>.cki
    ngnutmeg <filename>.raw
    

    This will open up a data analysis and manipulation program provided with ngspice to plot graphs and analyse SPICE simulations. Using this, we can verify whether the circuit produces the expected outputs for the inputs we provide, and make adjustments if necessary.

    Resource:

    Link Preview Holder on SUSI.AI Android Chat

    SUSI Android contains several view holders which bind a view based on its type, and one of them is LinkPreviewHolder. As the name suggests, it is used for previewing links in the chat window. As soon as it receives a link as input, it inflates a link preview layout. The problem was that whenever a user typed a link as input to the app, it crashed. It crashed because it tried to inflate a component that does not exist in the view given to the ViewHolder, so it threw a NullPointerException. The workaround for this bug was to inflate the layout and its components based on the view type. Let’s see how all the functionality is implemented in the LinkPreviewHolder class.

    Components of LinkPreviewHolder

    @BindView(R.id.text)
    public TextView text;
    @BindView(R.id.background_layout)
    public LinearLayout backgroundLayout;
    @BindView(R.id.link_preview_image)
    public ImageView previewImageView;
    @BindView(R.id.link_preview_title)
    public TextView titleTextView;
    @BindView(R.id.link_preview_description)
    public TextView descriptionTextView;
    @BindView(R.id.timestamp)
    public TextView timestampTextView;
    @BindView(R.id.preview_layout)
    public LinearLayout previewLayout;
    @Nullable @BindView(R.id.received_tick)
    public ImageView receivedTick;
    @Nullable
    @BindView(R.id.thumbs_up)
    protected ImageView thumbsUp;
    @Nullable
    @BindView(R.id.thumbs_down)
    protected ImageView thumbsDown;

    Here it binds the view components to their associated IDs using the @BindView(id) annotation.

    The class is instantiated with a constructor:

    public LinkPreviewViewHolder(View itemView , ClickListener listener) {
       super(itemView, listener);
       realm = Realm.getDefaultInstance();
       ButterKnife.bind(this,itemView);
    }

    Here it binds the current class with the view passed in the constructor using ButterKnife and initializes the ClickListener.

    Next, the components described above are set in the setView function:

    Spanned answerText;
    text.setLinksClickable(true);
    text.setMovementMethod(LinkMovementMethod.getInstance());
    if (android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
        answerText = Html.fromHtml(model.getContent(), Html.FROM_HTML_MODE_COMPACT);
    } else {
        answerText = Html.fromHtml(model.getContent());
    }

    This sets the TextView inside the view with a clickable link. A version check has also been added to determine the Android version (above Nougat or not) and call the appropriate function accordingly.

    This ViewHolder inflates different components based on who requested the output. If the query inflates the LinkPreviewHolder, an extra set of components is inflated which does not need to be inflated for the response, apart from the basic layout.

    if (viewType == USER_WITHLINK) {
       if (model.getIsDelivered())
           receivedTick.setImageResource(R.drawable.ic_check);
       else
           receivedTick.setImageResource(R.drawable.ic_clock);
    }

    In the above code, the received-tick image resource is set according to whether the message sent by the user has been delivered or not. These components are only initialised when the user has sent a link.

    Now comes the configuration for the result obtained from the query. Every skill has a rating associated with it. To mark the ratings, a counter is set for rating the skills positively or negatively. This code should only execute for the response and not for the query. This was the reason the app crashed: the logic tried to inflate components belonging to the response while the view that was passed belonged to the query, giving a NullPointerException. So the logic for the response needs to be separated from that of the query.

    if (viewType != USER_WITHLINK) {
       if(model.getSkillLocation().isEmpty()){
           thumbsUp.setVisibility(View.GONE);
           thumbsDown.setVisibility(View.GONE);
       } else {
           thumbsUp.setVisibility(View.VISIBLE);
           thumbsDown.setVisibility(View.VISIBLE);
       }
    
       if(model.isPositiveRated()){
           thumbsUp.setImageResource(R.drawable.thumbs_up_solid);
       } else {
           thumbsUp.setImageResource(R.drawable.thumbs_up_outline);
       }
    
       if(model.isNegativeRated()){
           thumbsDown.setImageResource(R.drawable.thumbs_down_solid);
       } else {
           thumbsDown.setImageResource(R.drawable.thumbs_down_outline);
       }
    
       thumbsUp.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View view) { . . . }
       });
    
    
    
       thumbsDown.setOnClickListener(new View.OnClickListener() {
           @Override
           public void onClick(View view) { . . . }
       });
    
    }

    As you can see in the above code, it inflates the rating components (thumbsUp and thumbsDown) for the SUSI.AI response view and sets the click listeners for the rating buttons. Then, in the code below, it previews the link and commits the data to the database using Realm through the WebLink class.

    LinkPreviewCallback linkPreviewCallback = new LinkPreviewCallback() {
       @Override
       public void onPre() { . . . }
    
       @Override
       public void onPos(final SourceContent sourceContent, boolean b) { . . . }
    }

    This method calls the API and sets the rating of that skill on the server. On a successful result, it changes the thumb icon, updates the rating and commits those changes to the database using Realm.

    private void rateSusiSkill(final String polarity, String locationUrl, final Context context) {..}

    References

    Creating a Notification in Open Event Android App

    It is good practice to show the user a notification for alerts and to draw their attention to important events they want to remember. The Open Event Android app shows notifications for actions like bookmarks, upcoming events, etc. In this blog we learn how to create a similar kind of alert notification.

     

    Displaying notification after bookmarking a track

    NotificationCompat is available as part of the Android Support Library, so the first step is opening your project’s module-level build.gradle file and adding the support library to the dependencies section. First we initialize the notification manager with the application context so that the user can see the notification irrespective of where they are in the app.

    NotificationManager mManager = (NotificationManager) this.getApplicationContext().getSystemService(NOTIFICATION_SERVICE);
    int id = intent.getIntExtra(ConstantStrings.SESSION, 0);
    String session_date;
    Session session = realmRepo.getSessionSync(id);

    We then get the info we want to display in the notification from the intent. While adding an action to your notification is optional, the vast majority of applications add actions to their notifications. We define a notification action using a PendingIntent. In this instance, we update our basic notification with a PendingIntent.

    Intent intent1 = new Intent(this.getApplicationContext(), SessionDetailActivity.class);
    intent1.putExtra(ConstantStrings.SESSION, session.getTitle());
    intent1.putExtra(ConstantStrings.ID, session.getId());
    intent1.putExtra(ConstantStrings.TRACK,session.getTrack().getName());
    PendingIntent pendingNotificationIntent = PendingIntent.getActivity(this.getApplicationContext(), 0, intent1, PendingIntent.FLAG_UPDATE_CURRENT);
    Bitmap largeIcon = BitmapFactory.decodeResource(getResources(), R.mipmap.ic_launcher);

    We also test the condition for the OS version to display the marker image, see image 1 for reference. The minimum requirements for a notification are:

    • An icon: create the image you want to use and then add it to your project’s ‘drawable’ folder. Here the notification shows the bookmark icon.
    • Title text: you can set a notification’s title either by referencing a string resource or by adding the text to your notification directly.
    • Detail text: this is the most important part of your notification, so this text must include everything the user needs to understand exactly what they’re being notified about.
    int smallIcon = R.drawable.ic_bookmark_white_24dp;
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.LOLLIPOP) smallIcon = R.drawable.ic_noti_bookmark;
    
    String session_timings = String.format("%s - %s",
           DateConverter.formatDateWithDefault(DateConverter.FORMAT_12H, session.getStartsAt()),
           DateConverter.formatDateWithDefault(DateConverter.FORMAT_12H, session.getEndsAt()));
    session_date = DateConverter.formatDateWithDefault(DateConverter.FORMAT_DATE_COMPLETE, session.getStartsAt());

    Finally we build the notification using the notification builder, which has various options to set the text style, small icon, large icon, etc. See the complete class here:

    NotificationCompat.Builder mBuilder = new NotificationCompat.Builder(this)
           .setSmallIcon(smallIcon)
           .setLargeIcon(largeIcon)
           .setContentTitle(session.getTitle())
           .setContentText(session_date + "\n" + session_timings)
           .setAutoCancel(true)
           .setStyle(new NotificationCompat.BigTextStyle().bigText(session_date + "\n" + session_timings))
           .setContentIntent(pendingNotificationIntent);
    intent1.addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP | Intent.FLAG_ACTIVITY_CLEAR_TOP);
    
    mBuilder.setSound(RingtoneManager.getDefaultUri(RingtoneManager.TYPE_NOTIFICATION));
    mManager.notify(session.getId(), mBuilder.build());

    References

    UI automated testing using Selenium in Badgeyay

    With all the major functionality packed into the badgeyay web application, it was time to add some automated testing to speed up the review process for known errors and to check that code contributions are not breaking anything. We decided to go with Selenium for our testing requirements.

    What is Selenium?

    Selenium is a portable software-testing framework for web applications. Selenium provides a playback (formerly also recording) tool for authoring tests without the need to learn a test scripting language. In other words, Selenium does browser automation: Selenium tells a browser to click some element, populate and submit a form, navigate to a page, and perform any other form of user interaction.

    Selenium supports multiple languages including C#, Groovy, Java, Perl, PHP, Python, Ruby and Scala. Here, we are going to use Python (specifically Python 2.7).

    First things first:
    To install these packages, run the following commands on the CLI:

    pip install selenium==2.40
    pip install nose
    

    Don’t forget to add them to the requirements.txt file.

    Web Browser:
    We also need Firefox installed on the machine.

    Writing the Test
    An automated test automates what you’d do via manual testing – but it is done by the computer. This frees up time and allows you to do other things, as well as repeat your testing. The test code is going to run a series of instructions to interact with a web browser – mimicking how an actual end user would interact with an application. The script is going to navigate the browser, click a button, enter some text input, click a radio button, select a drop down, drag and drop, etc. In short, the code tests the functionality of the web application.

    A test for the web page title:

    import unittest
    from selenium import webdriver
    
    class SampleTest(unittest.TestCase):
    
        @classmethod
        def setUpClass(cls):
            cls.driver = webdriver.Firefox()
            cls.driver.get('http://badgeyay-dev.herokuapp.com/')
    
        def test_title(self):
            self.assertEqual(self.driver.title, 'Badgeyay')
    
        @classmethod
        def tearDownClass(cls):
            cls.driver.quit()
    

     

    Run the test using nose: nosetests test.py

    Clicking the element
    For our next test, we click the menu button, and check if the menu becomes visible.

    elem = self.driver.find_element_by_css_selector(".custom-menu-content")
    self.driver.find_element_by_css_selector(".glyphicon-th").click()
    self.assertTrue(elem.is_displayed())
    

     

    Uploading a CSV file:
    For our next test, we upload a CSV file and see if a success message pops up.

    def test_upload(self):
            Imagepath = os.path.abspath(os.path.join(os.getcwd(), 'badges/badge_1.png'))
            CSVpath = os.path.abspath(os.path.join(os.getcwd(), 'sample/vip.png.csv'))
            self.driver.find_element_by_name("file").send_keys(CSVpath)
            self.driver.find_element_by_name("image").send_keys(Imagepath)
            self.driver.find_element_by_css_selector("form .btn-primary").click()
            time.sleep(3)
            success = self.driver.find_element_by_css_selector(".flash-success")
            self.assertIn(u'Your badges has been successfully generated!', success.text)
    

     

    The entire code can be found on: https://github.com/fossasia/badgeyay/tree/development/app/tests

    We can also use the PhantomJS package along with Selenium for UI testing purposes without opening a web browser window. We use this for badgeyay to run the tests for every commit on Travis CI, which cannot open a program window.
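
    As a rough sketch of how that might look (assuming PhantomJS is installed and available on the PATH; the class name here is illustrative), the same kind of test can simply swap the Firefox driver for the headless one:

    import unittest
    from selenium import webdriver

    class HeadlessSampleTest(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            # PhantomJS drives the page through the same WebDriver API
            # without opening a window, which suits CI machines with no display.
            cls.driver = webdriver.PhantomJS()
            cls.driver.get('http://badgeyay-dev.herokuapp.com/')

        def test_title(self):
            self.assertEqual(self.driver.title, 'Badgeyay')

        @classmethod
        def tearDownClass(cls):
            cls.driver.quit()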

    Resources

    Open Event Server: Creating/Rebuilding Elasticsearch Index From Existing Data In a PostgreSQL DB Using Python

    The Elasticsearch instance in the current Open Event Server deployment is used only to store events and search through them, due to limited resources.

    The project uses a PostgreSQL database. This blog will focus on setting up a job to create the events index if it does not exist. If the index exists, the job will delete all the previous data and rebuild the events index.

    Although the project uses the Flask framework, the job will be written in pure Python so that it can run properly in the background while the application continues its work. Celery is used for queueing up the aforementioned jobs. For building the job, the first step is to connect to our database:

    from config import Config
    import psycopg2
    conn = psycopg2.connect(Config.SQLALCHEMY_DATABASE_URI)
    cur = conn.cursor()
    

     

    The next step is to fetch all the events from the database. We will only be indexing certain attributes of each event which are useful in search; the rest are not stored in the index. The code given below fetches a collection of tuples containing the attributes mentioned in the code:

    cur.execute(
           "SELECT id, name, description, searchable_location_name, organizer_name, organizer_description FROM events WHERE state = 'published' and deleted_at is NULL ;")
       events = cur.fetchall()
    

     

    We will be using the bulk API, which is significantly faster than adding the events one by one via the API. Elasticsearch-py, the official Python client for Elasticsearch, provides the necessary functionality to work with the bulk API of Elasticsearch. The helpers present in the client enable us to use generator expressions to insert the data via the bulk API. The generator expression for events is as follows:

    event_data = ({'_type': 'event',
                      '_index': 'events',
                      '_id': event_[0],
                      'name': event_[1],
                      'description': event_[2] or None,
                      'searchable_location_name': event_[3] or None,
                      'organizer_name': event_[4] or None,
                      'organizer_description': event_[5] or None}
                     for event_ in events)
    

     

    We will now delete the events index if it exists and then recreate it. The generator expression obtained above is passed to the bulk API helper and the events index is repopulated. The complete code for the function is as follows:

     

    @celery.task(name='rebuild.events.elasticsearch')
    def cron_rebuild_events_elasticsearch():
       """
       Re-inserts all eligible events into elasticsearch
       :return:
       """
       conn = psycopg2.connect(Config.SQLALCHEMY_DATABASE_URI)
       cur = conn.cursor()
       cur.execute(
           "SELECT id, name, description, searchable_location_name, organizer_name, organizer_description FROM events WHERE state = 'published' and deleted_at is NULL ;")
       events = cur.fetchall()
       event_data = ({'_type': 'event',
                      '_index': 'events',
                      '_id': event_[0],
                      'name': event_[1],
                      'description': event_[2] or None,
                      'searchable_location_name': event_[3] or None,
                      'organizer_name': event_[4] or None,
                      'organizer_description': event_[5] or None}
                     for event_ in events)
       es_store.indices.delete('events')
       es_store.indices.create('events')
       abc = helpers.bulk(es_store, event_data)
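
    The snippet above assumes that es_store is an Elasticsearch client instance and that helpers comes from the official elasticsearch-py package. A minimal setup along those lines (the host URL here is a placeholder, not the actual deployment configuration) might look like:

    from elasticsearch import Elasticsearch, helpers

    # Placeholder host; in a real deployment this would come from configuration.
    es_store = Elasticsearch(['http://localhost:9200'])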
    

     

    Currently we run this job every week and also on each new deployment. Rebuilding the index is very important because some records may not get indexed while the continuous sync is taking place.
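
    As a sketch of how the weekly run could be scheduled (the schedule entry name and interval here are illustrative assumptions, not copied from the project configuration), Celery beat can periodically enqueue the task registered above:

    from datetime import timedelta

    # Illustrative beat schedule: enqueue the rebuild task once a week.
    # The task name matches the @celery.task(name=...) decorator above.
    CELERYBEAT_SCHEDULE = {
        'rebuild-events-elasticsearch-weekly': {
            'task': 'rebuild.events.elasticsearch',
            'schedule': timedelta(days=7),
        },
    }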

    To know more about it please visit https://gocardless.com/blog/syncing-postgres-to-elasticsearch-lessons-learned/

    Related links:

    Make Flask Fast and Reliable – Simple Steps

    Flask is a microframework for Python which is mostly used in web backend development. Several FOSSASIA projects use Flask, such as Open Event Server, Query Server and Badgeyay. Optimization is one of the most important steps for a successful software product, so in this post a few simple tricks are shown that will make your Flask app faster and more reliable.

    Flask-Compress

    1. Flask-Compress is a Python package which provides de facto lossless compression for your Flask application.
    2. Enough with the theory, now let’s understand the coding part:
      1. First install the module.
      2. Then add a basic setup to your app (a sketch of both steps is shown below).
    3. That’s it! All it takes is just a few lines of code to make your Flask app optimized. To know more, check out the flask-compress module.
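
    A minimal sketch of the setup, assuming the package has been installed with pip install flask-compress, could look like this:

    from flask import Flask
    from flask_compress import Compress

    app = Flask(__name__)

    # Wrap the app so that eligible responses are served gzip-compressed.
    Compress(app)


    @app.route('/')
    def index():
        return 'Hello, compressed world!'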

    Requirements Directory

    1. A common practice amongst FOSSASIA projects is dividing the requirements.txt file into separate files for development, testing and production.
    2. When projects use Travis CI for testing or are deployed to cloud services like Heroku, some modules are not really required everywhere. For example, gunicorn is only required for deployment purposes and not for development.
    3. So we keep a separate directory in which different .txt files are created for different purposes (a sketch of such a layout follows this list).
    4. Below is a sketch of the file directory structure followed for requirements in the badgeyay project.
    5. As you can see, different .txt files are created for different purposes:
      1. dev.txt – for development
      2. prod.txt – for production (i.e. deployment)
      3. test.txt – for testing
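
    As an illustrative sketch (the exact file contents are assumptions, not copied from badgeyay), the shared packages can live in one file which the others pull in with pip’s -r directive:

    requirements/
        prod.txt   # packages needed in production, e.g. gunicorn
        dev.txt    # starts with "-r prod.txt", then development-only tools
        test.txt   # starts with "-r prod.txt", then testing tools such as nose

    Installing for development is then just pip install -r requirements/dev.txt.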

    Resources

    How to Store Mobile Settings in the Server from SUSI Web Chat Settings Page

    While we are adding new features and capabilities to SUSI Web Chat application, we wanted to provide settings changing capability to SUSI users. SUSI team decided to maintain a settings page to give that capability to users.

    This is how its interface looks now.

    In this blog post I’m going to add another setting category to our settings page. This one is for saving the mobile phone number and dial code on the server.

    UI Development:

    First we need to add a new category to the settings page, and it should be invisible when the user is not logged in. Anonymous users should not see the mobile phone category in the settings page.

         let menuItems = cookies.get('loggedIn') ?
                <div>
                    <div className="settings-list">
                        <Menu
                            onItemTouchTap={this.loadSettings}
                            selectedMenuItemStyle={blueThemeColor}
                            style={{ width: '100%' }}
                            value={this.state.selectedSetting}
                        >
                           <MenuItem value='Mobile' className="setting-item" leftIcon={<MobileIcon />}>Mobile<ChevronRight className="right-chevron" /></MenuItem>
                            <hr className="break-line" />
                        </Menu>
                    </div>
                </div>
    

     

    Next we have to show the settings UI when the user clicks on the category name.

    if (this.state.selectedSetting === 'Mobile' && cookies.get('loggedIn')) {
        currentSetting = (
            <Translate text="Country/region : " />
            <DropDownMenu maxHeight={300}
                value={this.state.countryCode ? this.state.countryCode : 'US'}
                onChange={this.handleCountryChange}>
                {countries}
            </DropDownMenu>
            <Translate text="Phone number : " />
            <TextField name="selectedCountry"
                disabled={true}
                value={countryData.countries[this.state.countryCode ? this.state.countryCode : 'US'].countryCallingCodes[0]}
            />
            <TextField name="serverUrl"
                onChange={this.handleTelephoneNoChange}
                value={this.state.phoneNo}
            />
        );
    }

    ‘US’ is shown if the state does not define the country code.
    

     

    Then we need to get the list of country names and country dial codes to show in the above drop-down. We used the country-data node module for that.

    To install the country-data module use this command:

    npm install --save country-data
    

     

    We have used it in the settings page as shown below.

    import countryData from 'country-data';

    countryData.countries.all.sort(function(a, b) {
        if (a.name < b.name) { return -1; }
        if (a.name > b.name) { return 1; }
        return 0;
    });
    let countries = countryData.countries.all.map((country, i) => {
        return (<MenuItem value={countryData.countries.all[i].alpha2} key={i} primaryText={countryData.countries.all[i].name + ' ' + countryData.countries.all[i].countryCallingCodes[0]} />);
    });
    

     

    First we sort the country data list by name. After that we build a list of MenuItem components from this data.
    Then we have to check whether the user changed or added the phone number and region (dial code).
    This is handled by the functions mentioned above (onChange={this.handleCountryChange} and
    onChange={this.handleTelephoneNoChange}).

        handleCountryChange = (event, index, value) => {
            this.setState({'countryCode': value });
        }
    

     

    Then we get the phone number using the function below.

        handleTelephoneNoChange = (event, value) => {
            this.setState({'phoneNo': value});
        }
    

     

    Next we have to update the function that is triggered when the user clicks the save button.

    handleSubmit = () => {
        let newCountryCode = !this.state.countryCode ?
            this.intialSettings.countryCode : this.state.countryCode;
        let newCountryDialCode = !this.state.countryDialCode ?
            this.intialSettings.countryDialCode : this.state.countryDialCode;
        let newPhoneNo = this.state.phoneNo;
        let vals = {
            countryCode: newCountryCode,
            countryDialCode: newCountryDialCode,
            phoneNo: newPhoneNo
        }
        let settings = Object.assign({}, vals);
        cookies.set('settings', settings);
        this.implementSettings(vals);
    }
    

     

    This code snippet stores the country code, country dial code and phone number on the server.
    Now we have to update the store. Here we are going to change UserPreferencesStore.
    First we have to set up default values for the things we are going to store.

    let _defaults = {
        CountryCode: 'US',
        CountryDialCode: '+1',
        PhoneNo: ''
    }
    

     

    Finally we have to update the dispatchToken to change and retrieve this new data:

    UserPreferencesStore.dispatchToken = ChatAppDispatcher.register(action => {
        switch (action.type) {
            case ActionTypes.SETTINGS_CHANGED: {
                let settings = action.settings;
                if (settings.hasOwnProperty('theme')) {
                    _defaults.Theme = settings.theme;
                }
                if (settings.hasOwnProperty('countryDialCode')) {
                    _defaults.countryDialCode = settings.countryDialCode;
                }
                if (settings.hasOwnProperty('phoneNo')) {
                    _defaults.phoneNo = settings.phoneNo;
                }
                if (settings.hasOwnProperty('countryCode')) {
                    _defaults.countryCode = settings.countryCode;
                }
                UserPreferencesStore.emitChange();
                break;
            }
        }
    });
    

     

    Finally, the application is ready to store and update the mobile phone number and region code on the server.

    Resources:

    SUSI.AI Chrome Bot and Web Speech: Integrating Speech Synthesis and Recognition

    SUSI Chrome Bot is a Chrome extension which is used to communicate with SUSI AI. The advantage of a Chrome extension is that it is easily accessible to the user for performing certain tasks which would otherwise require the user to move to another tab/site.

    In this blog post, we will be going through the process of integrating the Web Speech API into SUSI Chromebot.

    Web Speech API

    The Web Speech API enables web apps to use voice data. It has two components:

    Speech Recognition: Speech recognition gives web apps the ability to recognize voice data from an audio source. It provides the speech-to-text service.

    Speech Synthesis: Speech synthesis provides the text-to-speech services for the web apps.

    Integrating speech synthesis and speech recognition in SUSI Chromebot

    Chrome provides the webkitSpeechRecognition() interface which we will use for our speech recognition tasks.

    var recognition = new webkitSpeechRecognition();
    

     

    Now we have a speech recognition instance, recognition. Let us define the necessary checks for error detection and for resetting the recognizer.

    var recognizing;

    function reset() {
        recognizing = false;
    }

    recognition.onerror = function(e) {
        console.log(e.error);
    };

    recognition.onend = function() {
        reset();
    };
    

     

    We now define the toggleStartStop() function, which checks if recognition is already being performed, in which case it stops recognition and resets the recognizer; otherwise, it starts recognition.

    function toggleStartStop() {
        if (recognizing) {
          recognition.stop();
          reset();
        } else {
          recognition.start();
          recognizing = true;
        }
    }
    

     

    We can then attach an event listener to a mic button which calls the toggleStartStop() function to start or stop our speech recognition.

    mic.addEventListener("click", function () {
        toggleStartStop();
    });
    

     

    Finally, when the speech recognizer has some results it calls the onresult event handler. We’ll use this event handler to catch the results returned.

    recognition.onresult = function (event) {
        for (var i = event.resultIndex; i < event.results.length; ++i) {
          if (event.results[i].isFinal) {
            textarea.value = event.results[i][0].transcript;
            submitForm();
          }
        }
    };
    

     

    The above code snippet tests the results produced by the speech recognizer; if it is the final result, it sets the textarea value to the recognized text and then submits it to the backend.

    One problem that we might face is the extension not being able to access the microphone. This can be resolved by asking for microphone access from an external tab/window/iframe. For SUSI Chromebot this is done using an external tab: pressing the settings icon opens a new tab which then asks the user for microphone access. This needs to be done only once, so it does not cause a lot of trouble.

    setting.addEventListener("click", function () {
    chrome.tabs.create({
    url: chrome.runtime.getURL("options.html")
    });
    });navigator.webkitGetUserMedia({
    audio: true
    }, function(stream) {
    stream.stop();
    }, function () {
    console.log('no access');
    });
    

     

    In contrast to speech recognition, speech synthesis is very easy to implement.

    function speakOutput(msg){
        var voiceMsg = new SpeechSynthesisUtterance(msg);
        window.speechSynthesis.speak(voiceMsg);
    }
    

     

    This function takes a message as input, declares a new SpeechSynthesisUtterance instance and then calls the speak method to convert the text message to voice.

    There are many properties and attributes that come with this speech recognition and synthesis interface. This blog post only introduces the very basics.

    Resources