Automatically Generating index for documentation in Yaydoc

Yaydoc, which uses the Sphinx documentation generator internally, needs a document named index.rst describing the overall layout of the documentation to generate a proper table of contents. Without an index.rst present, the build fails. With this week’s update that constraint has been relaxed. Now if Yaydoc detects that index.rst has not been supplied, it automatically generates a minimal index for basic use. Although it is still recommended to provide your own index, you won’t be punished for its absence. The following sections show how this was implemented and also show the feature in action.

Implementation

For generating a minimal index.rst, we perform the following steps:

  • If the repository has a README.rst or a README.md, we include it in the index
  • Several toctrees are generated as per how the documents in the repository are arranged.

The following code snippet returns a valid rst block which includes the document at dirpath/filename:

import os


def get_include(dirpath, filename):
    ext = os.path.splitext(filename)[1]
    if ext == '.md':
        directive = 'mdinclude'
    else:
        directive = 'include'
    template = '.. {directive}:: {document}'
    path = os.path.relpath(os.path.join(dirpath, filename))
    document = path.replace(os.path.sep, '/')
    return template.format(directive=directive, document=document)

The following code snippet returns a valid rst block which creates a toctree of dirpath.

def get_toctree(dirpath, filenames):
    toctree = ['.. toctree::', '   :maxdepth: 1']
    caption_template = '   :caption: {caption}'
    content_template = '   {document}'

    caption = os.path.basename(dirpath).replace('_', ' ').title()
    if caption == os.curdir:
        caption = 'Contents'
    toctree.append(caption_template.format(caption=caption))
    # Inserting a blank line
    toctree.append('')

    valid = False
    for filename in filenames:
        path, ext = os.path.splitext(os.path.join(dirpath, filename))
        if ext not in ('.md', '.rst'):
            continue
        document = path.replace(os.path.sep, '/')
        document = document.lstrip('./').rstrip('/')
        toctree.append(content_template.format(document=document))
        valid = True

    if valid:
        return '\n'.join(toctree)
    else:
        return ''

The following code snippet walks the documentation directory and returns valid content to be written to index.rst.

def get_index(root):
    index = []
    # Include README from root
    root_files = next(os.walk(root))[2]
    if 'README.rst' in root_files:
        index.append(get_include(root, 'README.rst'))
    elif 'README.md' in root_files:
        index.append(get_include(root, 'README.md'))
    # Add toctrees as per the directory structure
    for (dirpath, dirnames, filenames) in os.walk(os.curdir):
        if filenames:
            toctree = get_toctree(dirpath, filenames)
            if toctree:
                index.append(toctree)
    return '\n\n'.join(index) + '\n'

Result

Let’s assume that a sample project has the following directory tree for documentation.

+--- README.md
+--- docs/
|    +--- installation_guide/
|    |    +--- setup_heroku.md
|    |    +--- setup_docker.md
|    +--- tutorial/
|    |    +--- basic.md
|    |    +--- advanced.md

The following index.rst would be generated from the above tree

.. mdinclude:: ../README.md

.. toctree::
   :caption: Installation Guide
   :maxdepth: 1

   setup_heroku
   setup_docker

.. toctree::
   :caption: Tutorial
   :maxdepth: 1

   basic
   advanced

As you can see, this index.rst would be enough for most use cases. This update decreases the entry barrier for yaydoc. More features are on the way.


Using Root Directory as the Documentation Directory with Yaydoc

In our test builds for Yaydoc, we found that if we set the root as the documentation directory, the build would fail with a very long build log. In the build process, we create some temporary directories, such as a virtual environment and the build directory, in the root. After some inspection of the build logs, we found that when the root itself is used as the documentation directory, we were accidentally copying the build directory into itself recursively, which led to the build failure. On top of this, since the virtual environment directory was also being accidentally copied to the build directory, we were actually building the documentation of the entire Python standard library on each build.

Once the problem and its cause were known, the course of action to be taken was clear. We needed to ensure that any temporary directories we create as part of the build process were not copied to the build directory. The following changes were made to achieve that.

  • The virtual environment directory is now created in the HOME directory instead of the root.
  • Any other temporary directories except the main build directory are now deleted before copying.
  • To prevent the recursive copying, we used the --exclude parameter of rsync.
rsync --exclude=BUILD_DIR DOCS_DIR/ BUILD_DIR/

After this patch, the root can also be used as the documentation directory with Yaydoc. To do so, just set the environment variable DOCPATH to "."


Adding dynamic segments to a route in Open Event Frontend Project

When we talk about a web application, the first thing that comes up is how to decide what to display at a given time, which in most applications is decided with the help of the URL. The URL of the application can be set by loading the application, by writing the URL manually, or by clicking some link. In our Open Event Frontend project, which is written in Ember.js, an incredibly powerful JavaScript framework for creating web applications, the URL is mapped to route handlers with the help of the router to render the template for the page, to load the data model to display, to navigate within the application, or to handle any actions within the page like button clicks.

Suppose the user opens the Open Event application for the very first time. What s/he will see is a page containing the list of all the events which are going to happen in the near future, along with their details like event name, timings, place, tags etc. If the user clicks one of the events from the list, the current page will be redirected to the detailed page for that particular event. The behaviour of changing the content of the page which we observed during this process can be explained with the help of the dynamic segments concept. A dynamic segment is a section of the path for a route which changes based on the content of a page.
This post will focus on how we have added dynamic segments to a route in the Open Event Frontend project.

Let’s demonstrate the process of adding dynamic segments to a route by taking the example of the sessions routes, where we can see the list of all the accepted, pending, confirmed and rejected sessions along with their details.

To add a dynamic segment, we need a route with a path, which we add to the route definition in the app/router.js file:

this.route('sessions',  function() {
   this.route('list', { path: '/:sessions_state' });
});

Dynamic segments are made up of a : followed by an identifier. Ember follows the convention of :model-name_id for two reasons. The first reason is that routes know how to fetch the right model by default if we follow the convention. The second is that params is an object, and can only have one value associated with a key.

After defining the path in the app/router.js file, we need to add the template file app/templates/events/sessions/list.hbs, which contains the markup to display the data defined in app/routes/events/sessions/list.js under the model hook of the route, in order to display the correct content for the specified option.

Code containing the markup for the page in app/templates/events/sessions/list.hbs file

<div class="sixteen wide column">
  <table class="ui tablet stackable very basic table">
    <thead>
      <tr>
        <th>{{t 'State'}}</th>
        <th>{{t 'Title'}}</th>
        <th>{{t 'Speakers'}}</th>
        <th>{{t 'Track'}}</th>
        <th>{{t 'Short Abstract'}}</th>
        <th>{{t 'Submission Date'}}</th>
        <th>{{t 'Last Modified'}}</th>
        <th>{{t 'Email Sent'}}</th>
        <th></th>
        <th></th>
      </tr>
    </thead>
    <tbody>
      {{#each model as |session|}}
        <tr>
          <td>
            {{#if (eq session.state "confirmed")}}
              <div class="ui green label">{{t 'Confirmed'}}</div>
            {{else}}
              <div class="ui red label">{{t 'Not Confirmed'}}</div>
            {{/if}}
          </td>
          <td>
            {{session.title}}
          </td>
          <td>
            <div class="ui ordered list">
              {{#each session.speakers as |speaker|}}
                <div class="item">{{speaker.name}}</div>
              {{/each}}
            </div>
          </td>
          <td>
            {{session.track}}
          </td>
          <td>
            {{session.shortAbstract}}
          </td>
          <td>
            {{moment-format session.submittedAt 'dddd, DD MMMM YYYY'}}
          </td>
          <td>
            {{moment-format session.modifiedAt 'dddd, DD MMMM YYYY'}}
          </td>
          <td>
            {{session.emailSent}}
          </td>
          <td>
            <div class="ui vertical compact basic buttons">
              {{#ui-popup content=(t 'View') class='ui icon button' position='left center'}}
                <i class="unhide icon"></i>
              {{/ui-popup}}
              {{#ui-popup content=(t 'Edit') class='ui icon button' position='left center'}}
                <i class="edit icon"></i>
              {{/ui-popup}}
              {{#ui-popup content=(t 'Delete') class='ui icon button' position='left center'}}
                <i class="trash outline icon"></i>
              {{/ui-popup}}
              {{#ui-popup content=(t 'Browse edit history') class='ui icon button' position='left center'}}
                <i class="history icon"></i>
              {{/ui-popup}}
            </div>
          </td>
          <td>
            <div class="ui vertical compact basic buttons">
              {{#ui-dropdown class='ui icon bottom right pointing dropdown button'}}
                <i class="green checkmark icon"></i>
                <div class="menu">
                  <div class="item">{{t 'With email'}}</div>
                  <div class="item">{{t 'Without email'}}</div>
                </div>
              {{/ui-dropdown}}
              {{#ui-dropdown class='ui icon bottom right pointing dropdown button'}}
                <i class="red remove icon"></i>
                <div class="menu">
                  <div class="item">{{t 'With email'}}</div>
                  <div class="item">{{t 'Without email'}}</div>
                </div>
              {{/ui-dropdown}}
            </div>
          </td>
        </tr>
      {{/each}}
    </tbody>
  </table>
</div>

 

Code containing the model hook in app/routes/events/sessions/list.js to display the correct content. We access the dynamic portion of the URL using params.

import Ember from 'ember';

const { Route } = Ember;

export default Route.extend({
  titleToken() {
    switch (this.get('params.sessions_state')) {
      case 'pending':
        return this.l10n.t('Pending');
      case 'accepted':
        return this.l10n.t('Accepted');
      case 'confirmed':
        return this.l10n.t('Confirmed');
      case 'rejected':
        return this.l10n.t('Rejected');
    }
  },
  model(params) {
    this.set('params', params);
    return [{
      title         : 'Test Session 1',
      speakers      : [{ name: 'speaker 1', id: 1, organization: 'fossasia' }, { name: 'speaker 2', id: 1, organization: 'fossasia' }],
      track         : 'sample track',
      shortAbstract : 'Lorem Ipsum is simply dummy text of the printing and typesetting industry.',
      submittedAt   : new Date(),
      modifiedAt    : new Date(),
      emailSent     : 'No',
      state         : 'confirmed'
    },
    {
      title         : 'Test Session 2',
      speakers      : [{ name: 'speaker 3', id: 1, organization: 'fossasia' }, { name: 'speaker 4', id: 1, organization: 'fossasia' }],
      track         : 'sample track',
      shortAbstract : 'Lorem Ipsum is simply dummy text of the printing and typesetting industry.',
      submittedAt   : new Date(),
      modifiedAt    : new Date(),
      emailSent     : 'Yes',
      state         : 'confirmed'
    }];
  }
});

 

After the route is fully configured, we need to start linking it from the templates, which means we need to link it in our parent template, the app/templates/events/view/sessions.hbs file, using the {{link-to}} helper. The code for the linking looks like this:

    {{#tabbed-navigation isNonPointing=true}}
        {{#link-to 'events.view.sessions.index' class='item'}}
          {{t 'All'}}
        {{/link-to}}
        {{#link-to 'events.view.sessions.list' 'pending' class='item'}}
          {{t 'Pending'}}
        {{/link-to}}
        {{#link-to 'events.view.sessions.list' 'accepted' class='item'}}
          {{t 'Accepted'}}
        {{/link-to}}
        {{#link-to 'events.view.sessions.list' 'confirmed' class='item'}}
          {{t 'Confirmed'}}
        {{/link-to}}
        {{#link-to 'events.view.sessions.list' 'rejected' class='item'}}
          {{t 'Rejected'}}
        {{/link-to}}
      {{/tabbed-navigation}} 

 

The User Interface for the above code looks like this:

Fig.: The page containing all the accepted sessions

To conclude, we can say the task of the route is to load the model to display the data. For example, if we have the route this.route('sessions');, the route might load all of the sessions for the app, but we want only a particular type of session, so dynamic segments help to load the particular model and make it easier to load and display the data.

Reference: The link to the complete code is here. To learn more about dynamic segments, please visit this.


Using Ember.js Components in Open Event Frontend

Ember.js is a comprehensive JavaScript framework for building highly ambitious web applications. The basic tenet of Ember.js is convention over configuration, which means that it understands that a large part of the code, as well as the development process, is common to most web applications. Components are nothing but elements whose role, properties and functions remain the same wherever they are used within the project. Components allow developers to bundle up HTML elements and styles into reusable custom elements which can be called anywhere within the project.

In Ember, a component consists of two parts: some JavaScript code and an HTMLBars template. The JavaScript component file defines the behaviour and properties of the component. The behaviours of a component are typically defined using actions. The HTMLBars file defines the markup for the component’s UI. By default, the component will be rendered into a ‘div’ element, but a different element can be defined if required. A great thing about templates in Ember is that other components can be called inside a component’s template. To call a component in an Ember app, we use the {{curly-brace-syntax}}. By design, components are completely isolated, which means that they are not directly affected by any surrounding CSS or JavaScript.

Let’s demonstrate a basic Ember component, with reference to the Open Event Frontend project, for displaying text as a popup. The component will render a simple text view which displays the entire text. The component is designed for the common case where, due to unavailability of space, we’re unable to show the complete text. In such cases the component compares the available space with the space required by the whole text view to display the text. If the available space is not sufficient, the text is ellipsized, and on hovering over the text a popup appears where the complete text can be seen.

Generating the component

The component can be generated using the following command:

$ ember g component smart-overflow

Note: The component’s name needs to include a hyphen. This is an Ember convention, but it is an important one as it ensures there are no naming collisions with future HTML elements. This command will create the required .js and .hbs files needed to define the component, as well as an Ember integration test.

The Component Template

In the app/templates/components/smart-overflow.hbs file we can create some basic markup to display the text when the component is called.

<span> {{yield}} </span>

The {{yield}} is a Handlebars expression which renders the block content passed in when the component is called.

The JavaScript Code

In the app/components/smart-overflow.js file, we define how the component will work when it is called.

import Ember from 'ember';

const { Component } = Ember;

export default Component.extend({
  classNames: ['smart-overflow'],
  didInsertElement() {
    this._super(...arguments);
    var $headerSpan = this.$('span');
    var $header = this.$();
    $header.attr('data-content', $headerSpan.text());
    $header.attr('data-variation', 'tiny');
    while ($headerSpan.outerHeight() > $header.height()) {
      $headerSpan.text((index, text) => {
        return text.replace(/\W*\s(\S)*$/, '...');
      });
      $header.popup({
        position: 'top center'
      });
      this.set('$header', $header);
    }
  },
  willDestroyElement() {
    this._super(...arguments);
    if (this.get('$header')) {
      this.get('$header').popup('destroy');
    }
  }
});

 

In the above piece of code, we first take the size of the available space in the $header variable and then the size of the content in the $headerSpan variable. After that, we compare both sizes; if the content is greater than the available space, we ellipsize the content and create a popup to display the complete text, producing a good user experience.

Passing data to the component

To allow the component to display the data properly, we need to pass it in.

In the app/templates/components/event-card.hbs file we can call the component as many times as desired and pass in relevant data for each attribute.

<div class="ui card {{unless isWide 'event fluid' 'thirteen wide computer ten wide tablet sixteen wide mobile column'}}">
    {{#unless isWide}}
      <a class="image" href="{{href-to 'public' event.identifier}}">
        {{widgets/safe-image src=(if event.large event.large event.placeholderUrl)}}
      </a>
    {{/unless}}
    <a class="main content" href="{{href-to 'public' event.identifier}}">
      {{#smart-overflow class='header'}}
        {{event.name}}
      {{/smart-overflow}}
      <div class="meta">
        <span class="date">
          {{moment-format event.startTime 'ddd, MMM DD HH:mm A'}}
        </span>
      </div>
      {{#smart-overflow class='description'}}
        {{event.shortLocationName}}
      {{/smart-overflow}}
    </a>
    <div class="extra content small text">
      <span class="right floated">
        <i role="button" class="share alternate link icon" {{action shareEvent event}}></i>
      </span>
      <span>
        {{#if hasBlock}}
          {{yield}}
        {{else}}
          {{#each tags as |tag|}}
            <a>{{tag}}</a>
          {{/each}}
        {{/if}}
      </span>
    </div>
  </div>

 

Now if you view the app in the browser at localhost:4200, you will see something like this.

Fig. 1

In the end, we can say that with components the code remains much clearer and more readable, and it makes more sense to the developers who happen upon it. The best part about components is their reusability across the application, making the development process faster and much more efficient.

Reference: The complete source for the smart-overflow component can be found here.


How to make SUSI AI Line Bot

In order to integrate SUSI’s API with a LINE bot, you will need to have a LINE account first so that you can follow the procedure below. You can download the app from here.

Pre-requisites:

  • Line app
  • Github
  • Heroku

    Steps:
    1. Install Node.js from the link below on your computer if you haven’t installed it already https://nodejs.org/en/.
    2. Create a folder with any name, open a shell and change your current directory to the new folder you created.
    3. Type npm init in the command line and enter details like name, version and entry point.
    4. Create a file with the same name that you entered as the entry point in the step above, i.e. index.js, and keep it in the same folder you created.
    5. Type the following commands in the command line: npm install --save @line/bot-sdk. After the bot-sdk is installed, type npm install --save express. After express is installed, type npm install --save request. When all the modules are installed, check your package.json; the modules will be listed within the dependencies section.

      Your package.json file should look like this.

      {
      "name": "SUSI-Bot",
      "version": "1.0.0",
      "description": "SUSI AI LINE bot",
      "main": "index.js",
      "dependencies": {
         "@line/bot-sdk": "^1.0.0",
         "express": "^4.15.2",
         "request": "^2.81.0"
      },
      "scripts": {
         "start": "node index.js"
       }
      }
    6. Copy the following code into the file you created, i.e. index.js:
      'use strict';
      const line = require('@line/bot-sdk');
      const express = require('express');
      var request = require("request");
      
      // create LINE SDK config from env variables
      
      const config = {
         channelAccessToken: process.env.CHANNEL_ACCESS_TOKEN,
         channelSecret: process.env.CHANNEL_SECRET,
      };
      
      // create LINE SDK client
      
      const client = new line.Client(config);
      
      
      // create Express app
      // about Express: https://expressjs.com/
      
      const app = express();
      
      // register a webhook handler with middleware
      
      app.post('/webhook', line.middleware(config), (req, res) => {
         Promise
             .all(req.body.events.map(handleEvent))
             .then((result) => res.json(result));
      });
      
      // event handler
      
      function handleEvent(event) {
         if (event.type !== 'message' || event.message.type !== 'text') {
             // ignore non-text-message event
             return Promise.resolve(null);
         }
      
         var options = {
             method: 'GET',
             url: 'http://api.asksusi.com/susi/chat.json',
             qs: {
                 timezoneOffset: '-330',
                 q: event.message.text
             }
         };
      
         request(options, function(error, response, body) {
             if (error) throw new Error(error);
             // answer fetched from susi
             //console.log(body);
             var ans = (JSON.parse(body)).answers[0].actions[0].expression;
             // create a text message containing SUSI's answer
             const answer = {
                 type: 'text',
                 text: ans
             };
      
             // use reply API
      
             return client.replyMessage(event.replyToken, answer);
         })
      }
      
      // listen on port
      
      const port = process.env.PORT || 3000;
      app.listen(port, () => {
         console.log(`listening on ${port}`);
      });
    7. Now we have to get the channel access token and channel secret. To get them, follow the steps below.

    8. If you have a LINE account, move to the next step; else sign up for an account and make one.
    9. Create a LINE account on the LINE Business Center with the Messaging API and follow these steps:
    10. In the LINE Business Center, select Messaging API under the Service category at the top of the page.
    11. Select start using Messaging API, enter the required information and confirm it.
    12. Click the LINE@ Manager option. In settings, go to bot settings and enable the Messaging API.
    13. Now we have to configure the settings. Allow messages using webhooks by selecting allow for "Use Webhooks".
    14. Go to the Accounts option at the top of the page and open LINE Developers.
    15. To get the channel access token for accessing the API, click ISSUE for the "Channel access token" item.
    16. Click EDIT and set a webhook URL for your channel. To get the webhook URL, deploy your bot to Heroku (see the steps below).
    17. Before deploying, we have to make a GitHub repository for the chatbot. To make a GitHub repository, follow these steps:

      In the command line, change the current directory to the folder we created above and write:

      git init
      git add .
      git commit -m "initial"
      git remote add origin <URL for remote repository> 
      git remote -v
      git push -u origin master 

      You will get the URL for the remote repository by creating a repository on GitHub and copying its link.

    18. To deploy your bot to Heroku, you need an account on Heroku; after making an account, create an app.
    19. Deploy app using github deployment method.


    20. Select Automatic deployment method.


    21. After making the app, copy its link and paste it in the webhook URL field on the LINE channel console page from where we got the channel access token.

                https://<your_heroku_app_name>.herokuapp.com/webhook
    22. Your SUSI AI LINE bot is ready. Add this account as a friend and start chatting with SUSI.
      Here is the LINE API reference https://devdocs.line.me/en/

Youtube videos in the SUSI iOS Client

The iOS and Android clients already have the functionality to play videos based on user queries. In order to implement this feature of playing videos in the iOS client, we use the YouTube Data API v3. The task here was to create a UI/UX for playing videos within the app. An API call is made initially to fetch YouTube videos based on the query, and the video ID of the first object is extracted and used to play the video.

The API endpoint for youtube data API looks like:

https://www.googleapis.com/youtube/v3/search?part=snippet&q={query}&key={your_api_key}

Using this we get the following result (I am adding only the first item, which is what we require, since the response is too long):

Path: $.items[0]

{
  "kind": "youtube#searchResult",
  "etag": "\"m2yskBQFythfE4irbTIeOgYYfBU/oR-eA572vNoma1XIhrbsFTotfTY\"",
  "id": {
    "kind": "youtube#channel",
    "channelId": "UCQprMsG-raCIMlBudm20iLQ"
  },
  "snippet": {
    "publishedAt": "2015-01-01T11:06:00.000Z",
    "channelId": "UCQprMsG-raCIMlBudm20iLQ",
    "title": "FOSSASIA",
    "description": "FOSSASIA is supporting the development of Free and Open Source technologies for social change in Asia. The annual FOSSASIA Summit brings together ...",
    "thumbnails": {
      "default": {
        "url": "https://yt3.ggpht.com/-CP18cWbo34A/AAAAAAAAAAI/AAAAAAAAAAA/kEmIgO8OjCk/s88-c-k-no-mo-rj-c0xffffff/photo.jpg"
      },
      "medium": {
        "url": "https://yt3.ggpht.com/-CP18cWbo34A/AAAAAAAAAAI/AAAAAAAAAAA/kEmIgO8OjCk/s240-c-k-no-mo-rj-c0xffffff/photo.jpg"
      },
      "high": {
        "url": "https://yt3.ggpht.com/-CP18cWbo34A/AAAAAAAAAAI/AAAAAAAAAAA/kEmIgO8OjCk/s240-c-k-no-mo-rj-c0xffffff/photo.jpg"
      }
    },
    "channelTitle": "FOSSASIA",
    "liveBroadcastContent": "upcoming"
  }
}

We parse the above object to grab the videoID based on the query, using the code below:

if let itemsObject = response[Client.YoutubeResponseKeys.Items] as? [[String : AnyObject]] {
    if let items = itemsObject[0][Client.YoutubeResponseKeys.ID] as? [String : AnyObject] {
         let videoID = items[Client.YoutubeResponseKeys.VideoID] as? String
         completion(videoID, true, nil)
    }
}

This videoID is returned to the Controller where this method was called.

Now, we begin with designing the UI. First of all, we need a view on which the YouTube video will be played; this view also helps dismiss the video by clicking on it.

First, we add the blackView to the entire screen.

// declaration
let blackView = UIView()

// Add backgroundView
func addBackgroundView() {

    if let window = UIApplication.shared.keyWindow {

           self.view.addSubview(blackView) 

           // Cover the entire screen
           blackView.frame = window.frame

           blackView.backgroundColor = UIColor(white: 0, alpha: 0.5)
           blackView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleDismiss)))

   }

}

func handleDismiss() {
   UIView.animate(withDuration: 0.5, delay: 0, usingSpringWithDamping: 1, initialSpringVelocity: 1, options: .curveEaseOut, animations: {
       self.blackView.removeFromSuperview()
   }, completion: nil)
}

Next, we add the YouTubePlayerView. For this we use the pod `YoutubePlayer`. Since it had a few warnings as well as some videos not being played, I had to make fixes to the original pod and use my own customized version (available here).

// Youtube Player
lazy var youtubePlayer: YouTubePlayerView = {
    let frame = CGRect(x: 0, y: 0, width: self.view.frame.width - 16, height: self.view.frame.height * 1 / 3)
    let player = YouTubePlayerView(frame: frame)
    return player
}()

// Shows Youtube Player

func addYotubePlayer(_ videoID: String) {
    if let window = UIApplication.shared.keyWindow {

       // Add YoutubePlayer view on top of blackView
        self.blackView.addSubview(self.youtubePlayer)
        // Calculate and set frame

       let centerX = UIScreen.main.bounds.size.width / 2
        let centerY = UIScreen.main.bounds.size.height / 3
        self.youtubePlayer.center = CGPoint(x: centerX, y: centerY)

       // Load Player using the Video ID 
        self.youtubePlayer.loadVideoID(videoID)

        blackView.alpha = 0
        youtubePlayer.alpha = 0

        UIView.animate(withDuration: 0.5, delay: 0, usingSpringWithDamping: 1, initialSpringVelocity: 1, options: .curveEaseOut, animations: {
            self.blackView.alpha = 1
            self.youtubePlayer.alpha = 1
       }, completion: nil)
    }
}

We are set with the UI, and the only thing left is to actually call the API in the client; after getting the `videoID` from it, we call the above method passing this `videoID`. Before calling, we need to check whether our query contains the play action or not, and if it does, we make the API call and add the player.

if let text = inputTextField.text {
    if text.contains("play") || text.contains("Play") {
        let query = text.replacingOccurrences(of: "play", with: "").replacingOccurrences(of: "Play", with: "")
        Client.sharedInstance.searchYotubeVideos(query) { (videoID, _, _) in
            DispatchQueue.main.async {
                if let videoID = videoID {
                    self.addYotubePlayer(videoID)
                 }
             }
          }
    }

}

We are all set now! Below is the output for the YouTube player:


Websearch and Link Preview support in SUSI iOS

The SUSI.AI server responds to API calls with answers to the queries made. These answers might contain an action, for example a web search, where the client needs to make a web search request to fetch different web pages based on the query. Thus, we need to add a link preview in the iOS Client for each such page extracting and displaying the title, description and a main image of the webpage.

At first we make the API call adding the query to the query parameter and get the result from it.

API Call:

http://api.susi.ai/susi/chat.json?timezoneOffset=-330&q=amazon

And get the following result:

{
  "query": "amazon",
  "count": 1,
  "client_id": "aG9zdF8xMDguMTYyLjI0Ni43OQ==",
  "query_date": "2017-06-02T14:34:15.675Z",
  "answers": [{
    "data": [{
      "0": "amazon",
      "1": "amazon",
      "timezoneOffset": "-330"
    }],
    "metadata": {
      "count": 1
    },
    "actions": [{
      "type": "answer",
      "expression": "I don't know how to answer this. Here is a web search result:"
    },
    {
      "type": "websearch",
      "query": "amazon"
    }]
  }],
  "answer_date": "2017-06-02T14:34:15.773Z",
  "answer_time": 98,
  "language": "en",
  "session": {
    "identity": {
      "type": "host",
      "name": "108.162.246.79",
      "anonymous": true
    }
  }
}

After parsing this response, we first recognise the type of action that needs to be performed; here we get `websearch`, which means we need to make a web search for the query. For this, we use DuckDuckGo’s API to get the result.

API Call to DuckDuckGo:

http://api.duckduckgo.com/?q=amazon&format=json

I am adding just the first object of the required data since the API response is too long.

Path: $.RelatedTopics[0]

{
  "Result": "<a href=\"https://duckduckgo.com/Amazon.com\">Amazon.com</a>Amazon.com, also called Amazon, is an American electronic commerce and cloud computing company...",
  "Icon": {
    "URL": "https://duckduckgo.com/i/d404ba24.png",
    "Height": "",
    "Width": ""
  },
  "FirstURL": "https://duckduckgo.com/Amazon.com",
  "Text": "Amazon.com Amazon.com, also called Amazon, is an American electronic commerce and cloud computing company..."
}

For the link preview, we need an image logo, a URL and a description, so here we will use the `Icon.URL` and `Text` keys. We have our own class to parse this data into an object.

class WebsearchResult: NSObject {
   
   var image: String = "no-image"
   var info: String = "No data found"
   var url: String = "https://duckduckgo.com/"
   var query: String = ""
   
   init(dictionary: [String:AnyObject]) {
       
       if let relatedTopics = dictionary[Client.WebsearchKeys.RelatedTopics] as? [[String : AnyObject]] {
           
           if let icon = relatedTopics[0][Client.WebsearchKeys.Icon] as? [String : String] {
               if let image = icon[Client.WebsearchKeys.Icon] {
                   self.image = image
               }
           }
           
           if let url = relatedTopics[0][Client.WebsearchKeys.FirstURL] as? String {
               self.url = url
           }
           
           if let info = relatedTopics[0][Client.WebsearchKeys.Text] as? String {
               self.info = info
           }
           
           if let query = dictionary[Client.WebsearchKeys.Heading] as? String {
               let string = query.lowercased().replacingOccurrences(of: " ", with: "+")
               self.query = string
           }   
       }   
   }   
}

We now have the data, and the only thing left is to display it in the UI.

Within the Chat Bubble, we need to add a container view which will contain the image, and the text description.

 let websearchContentView = UIView()
    
    let searchImageView: UIImageView = {
        let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
        imageView.contentMode = .scaleAspectFit

       // Placeholder image assigned
        imageView.image = UIImage(named: "no-image")
        return imageView
    }()
    
    let websiteText: UILabel = {
        let label = UILabel()
        label.textColor = .white
        return label
    }()

   func addLinkPreview(_ frame: CGRect) {
        textBubbleView.addSubview(websearchContentView)
        websearchContentView.backgroundColor = .lightGray
        websearchContentView.frame = frame
        
        websearchContentView.addSubview(searchImageView)
        websearchContentView.addSubview(websiteText)




       // Add constraints in UI
        websearchContentView.addConstraintsWithFormat(format: "H:|-4-[v0(44)]-4-[v1]-4-|", views: searchImageView, websiteText)
        websearchContentView.addConstraintsWithFormat(format: "V:|-4-[v0]-4-|", views: searchImageView)
        websearchContentView.addConstraintsWithFormat(format: "V:|-4-[v0(44)]-4-|", views: websiteText)
    }

Next, in the Collection View, while checking the other action types, we add a check for `websearch`, then call the API there, followed by setting the frame sizes and calling the `addLinkPreview` function.

else if message.responseType == Message.ResponseTypes.websearch {
  let params = [
    Client.WebsearchKeys.Query: message.query!,
    Client.WebsearchKeys.Format: "json"
  ]

  Client.sharedInstance.websearch(params, { (results, success, error) in                      
    if success {
      cell.message?.websearchData = results
      message.websearchData = results
      self.collectionView?.reloadData()
      self.scrollToLast()
    } else {
      print(error)
    }
  })
                    
  cell.messageTextView.frame = CGRect(x: 16, y: 0, width: estimatedFrame.width + 16, height: estimatedFrame.height + 30)

  cell.textBubbleView.frame = CGRect(x: 4, y: -4, width: estimatedFrame.width + 16 + 8 + 16, height: estimatedFrame.height + 20 + 6 + 64)

  let frame = CGRect(x: 16, y: estimatedFrame.height + 20, width: estimatedFrame.width + 16 - 4, height: 60 - 8)

  cell.addLinkPreview(frame)
}

And set the collection View cell’s size.

else if message.responseType == Message.ResponseTypes.websearch {
  return CGSize(width: view.frame.width, height: estimatedFrame.height + 20 + 64)
}

And we are done 🙂

Here is the final version of how this would look on the device:

 


Use of ViewPager in Phimpme

Previously, GalleryView was used in the Phimpme Android app, but as it is now deprecated, I decided to use ViewPager instead of GalleryView.

ViewPager allows us to view data by swiping horizontally, with the help of a PagerAdapter that supplies the pages.

Steps to implement the viewPager:

  1. First, add the ViewPager in the Activity.xml file where you want to implement the ViewPager. This can be done using the code below:
<android.support.v4.view.ViewPager
    android:id="@+id/view_pager"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

</android.support.v4.view.ViewPager>
  2. To display the content of the ViewPager we use a ViewPagerAdapter. Create a new Java file, ViewPagerAdapter, and extend it from PagerAdapter.

ViewPagerAdapter.java

public class ViewPagerAdapter extends PagerAdapter {
}
  3. After extending PagerAdapter, we have to override its two basic methods.

First, implement the constructor, which helps us provide the context of the activity to the ViewPagerAdapter.

You can generate the overrides by pressing the Alt+Enter combination, clicking on “implement methods” and then selecting these two methods:

  • getCount()
  • isViewFromObject()

getCount() will return the number of items in the view pager, while isViewFromObject() tells the adapter whether a given page view belongs to the key object returned by instantiateItem(), as sketched below.
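A minimal sketch of the adapter with the constructor and these two overrides, assuming an images array (filled in a later step) as the data source; the names here are placeholders rather than the exact Phimpme code:

import android.content.Context;
import android.support.v4.view.PagerAdapter;
import android.view.View;

public class ViewPagerAdapter extends PagerAdapter {

    private Context context;
    // Data source for the pager; assigned in a later step
    private int[] images;

    public ViewPagerAdapter(Context context) {
        this.context = context;
    }

    @Override
    public int getCount() {
        // Number of pages equals the number of images
        return images.length;
    }

    @Override
    public boolean isViewFromObject(View view, Object object) {
        // instantiateItem() returns the inflated view itself as the key object
        return view == object;
    }
}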

  4. Now we override the few methods which are required to inflate and destroy views in the ViewPager.

First,

Override the instantiateItem() method; it creates the page for a given position.

@Override
public Object instantiateItem(ViewGroup container, int position) {
    return super.instantiateItem(container, position);
}

Now we will modify this method to inflate the view for viewPager.

As we want to display an ImageView in the ViewPager, we first have to inflate the ImageView and set the image according to the position in the ViewPager.

Next steps,

  • Implement the custom layout for the ImageView.
  • Provide the data for the ViewPager, i.e. an array of images.

Create a new custom_layout.xml and add an ImageView in it.

<ImageView
    android:id="@+id/image_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

And create an array for the images. If you want to show images from local memory, collect the paths of the images you want to display; here we keep them in an array named images, for example:
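A tiny sketch of such an array, using hypothetical drawable resources bundled with the app (with file paths from storage you would load bitmaps instead, e.g. with BitmapFactory.decodeFile(), rather than use resource IDs):

// Inside ViewPagerAdapter: hypothetical drawables standing in for the real images
private int[] images = {R.drawable.photo_one, R.drawable.photo_two, R.drawable.photo_three};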

Now we will use the custom_layout layout in our ViewPager's instantiateItem() method.

@Override
public Object instantiateItem(ViewGroup container, int position) {
    LayoutInflater layoutInflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
    View view = layoutInflater.inflate(R.layout.custom_layout, null);
    ImageView imageView = (ImageView) view.findViewById(R.id.image_view);
    imageView.setBackgroundResource(images[position]);
    container.addView(view, 0);
    return view;
}

The above code inflates the imageView in ViewPager.

Now we have to override the destroyItem() method. This method removes the content of the ViewPager at the given position.

The below code will remove the view which we added in instantiateItem() method.

@Override
public void destroyItem(ViewGroup container, int position, Object object) {
    container.removeView((View) object);
}

Now the PagerAdapter is ready; we can use it in our Activity.

  5. Get a reference to the ViewPager and set the ViewPagerAdapter on it.

Activity.java

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    ViewPager viewPager = (ViewPager) findViewById(R.id.view_pager);
    viewPager.setAdapter(new ViewPagerAdapter(this));
}

The above code will set the PagerAdapter on our ViewPager and display the content which we defined in the instantiateItem() method of the PagerAdapter.

 

This is how the ViewPager allows viewing images by swiping horizontally in Phimpme.

Resources:

https://developer.android.com/reference/android/support/v4/view/PagerAdapter.html

https://github.com/fossasia/phimpme-android/pull/407/files


Debugging Using Stetho in Open Event Android

The Open Event Android project helps event organizers to generate apps (in apk format) for their events/conferences by providing an API endpoint or a zip generated using the Open Event server. In this Android app, data is fetched from the internet using network calls to the Open Event server, and the data of the event is stored in a database. It is difficult to debug an app with so many network calls and so much database connectivity. A way to approach this is using Stetho, which is a very helpful tool for debugging an app which deals with network calls and a database.

Stetho is Facebook’s open source project that works as a debug bridge for Android applications, giving you the powerful Chrome Developer Tools for debugging Android applications from the Chrome desktop browser.

What can you do using stetho ?

  • Analyze network traffic
  • Inspect elements(layouts/views)  
  • View SQLite database in table format
  • Run queries on SQLite database
  • View shared preference and edit it
  • Manipulate android app from command line

Setup

1. Add Gradle dependency

To add Stetho to your app, add the 'com.facebook.stetho:stetho:1.5.0' dependency in your app module's build.gradle file. This dependency is strictly required.

dependencies{
    compile 'com.facebook.stetho:stetho:1.5.0'
}

For network inspection, add one of the following dependencies, according to which HTTP client you will be using:

'com.facebook.stetho:stetho-okhttp:1.5.0'
'com.facebook.stetho:stetho-okhttp3:1.5.0'
'com.facebook.stetho:stetho-urlconnection:1.5.0'

2. Initialize stetho

Initialize Stetho in a MyApplication class which extends the Application class, by overriding the onCreate() method. Make sure you have added MyApplication in the manifest.

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        Stetho.initializeWithDefaults(this);
    }
}

Stetho.initializeWithDefaults(this) initializes Stetho with the default configuration (without network inspection and more). With this alone it will be able to debug the database.
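If the defaults are not enough, Stetho also exposes a builder for finer control; a minimal sketch based on Stetho's documented builder API (adjust the enabled pieces to your needs):

// Inside MyApplication.onCreate(): alternative to initializeWithDefaults(),
// enabling the dumpapp command-line tool and the WebKit (Chrome DevTools) inspector.
Stetho.initialize(
        Stetho.newInitializerBuilder(this)
                .enableDumpapp(Stetho.defaultDumperPluginsProvider(this))
                .enableWebKitInspector(Stetho.defaultInspectorModulesProvider(this))
                .build());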

Manifest file

<application android:name=".MyApplication" … />

For enabling network inspection, add a StethoInterceptor to the OkHttpClient:

new OkHttpClient.Builder()
    .addNetworkInterceptor(new StethoInterceptor())
    .build();

Using Chrome Developer Tools

1. Open Inspect window

Run the Stetho-initialized app on a device or emulator, then start Google Chrome and type chrome://inspect. You will see your device with your app's package name. Click on "inspect".

2. Network Inspection

Go to the Network tab. It will show all network calls, with almost all the info you need: URL (path), method, returned status code, returned data type, size, time taken etc.

You can also see a preview of images and the preview/response of returned JSON data by clicking on the Name.

3. SQLite Database

Go to the "Resources" tab and select "Web SQL". You will see the database file (.db). By clicking on the database file you will see all tables in that database file, and by clicking on a table name you will see the data of that table in row-column format.

4. Run queries on SQLite database

Same as above, go to the "Resources" tab and select "Web SQL". You will see the database file (.db). By clicking on the database file you will see a console on the right side, where you can run queries on the SQLite database. For example,

SELECT * FROM tracks ;

5. Shared Preferences Inspection

Go to the "Resources" tab and select "Local Storage". You will see all the files that your app uses to save key-value pairs in shared preferences, and by clicking on a file you will see all its key-value pairs.

6. Element(View/Layout) Inspection

Go to the "Elements" tab. You will see the top layout/view in the view hierarchy. By clicking it you will see the child layouts/views of that layout/view. On hovering over a layout/view, the corresponding view will be highlighted on your device/emulator.

 

In this blog the most important features have been put forward, but there is still some nice stuff available, like:

  • An integration with the JavaScript console: enables JavaScript code execution that can interact with the application
  • Dumpapp: a command-line tool that goes beyond the Developer Tools UI and enables the development of custom plugins

By now, you must have realized that Stetho can significantly improve your debugging experience. To learn more about Stetho, refer to


Continuous Deployment Implementation in Loklak Search

At the current pace of web technology, quick response times and low downtime are core goals of any project. To achieve a continuous deployment scheme, the most important factor is how efficiently contributors and maintainers are able to test and deploy the code with every PR. We faced this question when we started building Loklak Search.

As Loklak Search is a data-driven client-side web app, GitHub Pages is the simplest way to set it up. At FOSSASIA, apps are developed by many developers working together on different features. This makes it more important to have a unified flow of control and simple integration with GitHub Pages as the continuous deployment pipeline.

So the broad concept of continuous deployment boils down to three basic requirements

  1. Automatic unit testing.
  2. Automatic builds of the application on a successful merge of a PR, and deployment on the gh-pages branch.
  3. Easy provision of demo links for developers to test and share the features they are working on before the PR is actually merged.

Automatic Unit Testing

At Loklak Search we use Karma unit tests. We get major help from @angular/cli, which takes care of running the unit tests. The other main part of the unit testing setup is Travis CI, which is used as the CI solution. All these things are pretty easy to set up and use.

Travis CI has a particular advantage: the ability to run custom shell scripts at different stages of the build process, and we use this capability for our continuous deployment.

Automatic Builds of PR’s and Deploy on Merge

This is the main requirement of our CD scheme, and we achieve it by setting up a shell script. This file is deploy.sh in the project repository root.

There are a few critical sections of the deploy script. The script starts with the initialisation instructions, which set up the appropriate variables and also decrypt the SSH key which Travis uses for pushing the repo to the gh-pages branch (we will set up this key later).

  • Here we also check that we run our deploy script only when the build is for the master branch, and we do this by exiting early from the script if it is not.
#!/bin/bash

SOURCE_BRANCH="master"
TARGET_BRANCH="gh-pages"

# Pull requests and commits to other branches shouldn't try to deploy.
if [ "$TRAVIS_PULL_REQUEST" != "false" -o "$TRAVIS_BRANCH" != "$SOURCE_BRANCH" ]; then
echo "Skipping deploy; The request or commit is not on master"
exit 0
fi

 

  • We also store important information regarding the deploy keys which are generated manually and are encrypted using travis.
# Save some useful information
REPO=`git config remote.origin.url`
SSH_REPO=${REPO/https:\/\/github.com\//git@github.com:}
SHA=`git rev-parse --verify HEAD`

# Decryption of the deploy_key.enc
ENCRYPTED_KEY_VAR="encrypted_${ENCRYPTION_LABEL}_key"
ENCRYPTED_IV_VAR="encrypted_${ENCRYPTION_LABEL}_iv"
ENCRYPTED_KEY=${!ENCRYPTED_KEY_VAR}
ENCRYPTED_IV=${!ENCRYPTED_IV_VAR}
openssl aes-256-cbc -K $ENCRYPTED_KEY -iv $ENCRYPTED_IV -in deploy_key.enc -out deploy_key -d

chmod 600 deploy_key
eval `ssh-agent -s`
ssh-add deploy_key

 

  • We clone our repo from GitHub and then go to the Target Branch which is gh-pages in our case.
# Cloning the repository to repo/ directory,
# Creating gh-pages branch if it doesn't exists else moving to that branch
git clone $REPO repo
cd repo
git checkout $TARGET_BRANCH || git checkout --orphan $TARGET_BRANCH
cd ..

# Setting up the username and email.
git config user.name "Travis CI"
git config user.email "$COMMIT_AUTHOR_EMAIL"

 

  • Now we do a clean-up of our directory. We do this so that a fresh build is done every time; here we protect the files which are static and are not generated by the build process. These are CNAME and 404.html.
# Cleaning up the old repo's gh-pages branch except CNAME file and 404.html
find repo/* ! -name "CNAME" ! -name "404.html" -maxdepth 1  -exec rm -rf {} \; 2> /dev/null
cd repo

git add --all
git commit -m "Travis CI Clean Deploy : ${SHA}"

 

  • After checking out our master branch we do an npm install to install all our dependencies, and then build the project. Then we move the files generated by ng build to our gh-pages branch, and then we make a commit to this branch.
git checkout $SOURCE_BRANCH
# Actual building and setup of current push or PR.
npm install
ng build --prod --aot
git checkout $TARGET_BRANCH
mv dist/* .
# Staging the new build for commit; and then committing the latest build
git add .
git commit --amend --no-edit --allow-empty

 

  • Now the final step is to push our build files to the gh-pages branch, and as we only want to put the build there if the code has actually changed, we make sure of that by adding a check.
# Deploying only if the build has changed
if [ -z `git diff --name-only HEAD HEAD~1` ]; then

echo "No Changes in the Build; exiting"
exit 0

else
# There are changes in the Build; push the changes to gh-pages
echo "There are changes in the Build; pushing the changes to gh-pages"

# Actual push to gh-pages branch via Travis
git push --force $SSH_REPO $TARGET_BRANCH
fi

 

Now these 70 lines of code handle all our heavy lifting and automate a large part of our CD. This makes sure that no incorrect builds enter the gh-pages branch, and it also enables a smoother experience for both developers and maintainers.

The important aspect of this script is the ability to make sure Travis is able to push to gh-pages. This requires the proper setup of keys, and it is definitely the trickiest part of the whole setup.

  • The first step is to generate the SSH key. This is done easily using terminal and ssh-keygen.
$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

 

  • I would recommend not using any passphrase, as it will then be required by Travis and thus will be tricky to set up.
  • Now, this generates the RSA public/private key pair.
  • We now add this public deploy key to the settings of the repository.
  • After setting up the public key on GitHub we give the private key to Travis so that Travis is able to push on GitHub.
  • For doing this we use the Travis client, which helps to encrypt the key properly and send the key and iv to Travis, which is then able to use these values to decrypt the private key.
$ travis encrypt-file deploy_key
encrypting deploy_key for domenic/travis-encrypt-file-example
storing result as deploy_key.enc
storing secure env variables for decryption

Please add the following to your build script (before_install stage in your .travis.yml, for instance):

    openssl aes-256-cbc -K $encrypted_0a6446eb3ae3_key -iv $encrypted_0a6446eb3ae3_iv -in super_secret.txt.enc -out super_secret.txt -d

Pro Tip: You can add it automatically by running with --add.

Make sure to add deploy_key.enc to the git repository.
Make sure not to add deploy_key to the git repository.
Commit all changes to your .travis.yml.

 

  • Make sure to add deploy_key.enc to git repository and not to add deploy_key to git.

And after all these steps, everything is done: our client-side web application will deploy on every push to the master branch.

These steps are required only once in the project life cycle. At Loklak Search, we haven't touched deploy.sh since it was written; it's a simple script, but it does all the work of continuous deployment we want to achieve.

Generation of Demo Links and Test Deployments

This is also an essential part of continuous agile development: developers should be able to share what they have built, and maintainers should be able to review those features and fixes. This becomes difficult in a web application, as the fixes and features are more often than not visual, and attaching screenshots with every PR becomes a hassle. If developers are able to deploy their changes on their own gh-pages and share demo links with the PR, then it's a big win for development at a faster pace.

Now, this step is highly specific to Angular projects, though there are similar approaches for React and other frameworks as well; if not, we can build the page manually and push our changes to the gh-pages branch of our fork.

We use @angular/cli for building the project, then use the angular-cli-ghpages npm package to actually push to the gh-pages branch of the fork. These commands are combined and provided as the npm command npm run deploy. And this makes our CD scheme complete.

Conclusion

Clearly, the continuous deployment scheme has a lot of advantages over other methods, especially in client-side web apps where there are a lot of PRs. It essentially eliminates all the deployment hassles in a simple way: no deployment requires any manual intervention. The developers can simply concentrate on coding the application, and the maintainers can simply review the PRs by looking at the demo links and merge when they feel the PR is in good shape; the deployment is done entirely by the shell script without requiring any commands from a developer or a maintainer.

Links

Loklak Search GitHub Repository: https://github.com/fossasia/loklak_search

Loklak Search Application: http://loklak.net/

Loklak Search TravisCI: https://travis-ci.org/fossasia/loklak_search/

Deploy Script: https://github.com/fossasia/loklak_search/blob/master/deploy.sh
