Creating Instruction Guide using Bottomsheet

The PSLab Android app consists of different instruments like the oscilloscope, multimeter, and wave generator. Each instrument has different functionality and usage, so there should be an instruction guide for every instrument, which the user can read to understand how the instrument works.

In this post we will create an instruction guide for the Wave Generator that contains information about the instrument, its functionality, and the steps for using it.

The main component used to create the instruction guide is the Bottom Sheet. The Bottom Sheet was introduced in the Android Support Library v23.2. It is a special UI widget that slides up from the bottom of the screen and can be used to reveal extra information that we cannot show on the main layout, such as bottom menus, instructions, etc.

Bottom Sheets are of two types:

  1. Modal Bottom Sheet: This Bottom Sheet has properties very similar to a normal Android dialog, such as elevation; the only difference is that it pops up from the bottom of the screen with a proper animation. It is implemented using the BottomSheetDialogFragment class.

  2. Persistent Bottom Sheet: This Bottom Sheet is included as part of the layout and can be slid up and down to reveal extra information. It is implemented using the BottomSheetBehavior class.

For my project, I used a persistent Bottom Sheet, since a modal Bottom Sheet cannot be slid up and down with a swipe of the finger, whereas a persistent Bottom Sheet can be slid up and down and can also be hidden with a swipe.

Implementing the Bottom Sheet

Step 1: Adding the Dependency

To start using the Bottom Sheet, we have to add the Design Support Library dependency. (We have also included Jake Wharton's ButterKnife library for view binding, but it is optional.)

dependencies {

   implementation fileTree(include: ['*.jar'], dir: 'libs')

   implementation "com.android.support:appcompat-v7:26.0.1"
   implementation "com.android.support:design:26.0.1"
   implementation "com.jakewharton:butterknife:8.8.1"

   annotationProcessor "com.jakewharton:butterknife-compiler:8.8.1"
}

Step 2: Creating Bottom Sheet layout file

In this step, we will create the layout of the Bottom Sheet. Since the purpose of the Bottom Sheet is to show extra information regarding the instrument, we will include an ImageView and TextViews inside the layout that will be used to show the content later.

Some attributes in the layout worth noting are:

  • app:layout_behavior: This attribute makes the layout act as a Bottom Sheet.
  • app:behavior_peekHeight: This is the height of the Bottom Sheet when it is minimized.
  • app:behavior_hideable: Defines if the Bottom Sheet can be hidden by swiping it down.

Here, we will also create one extra LinearLayout with a height equal to the peek height. This LinearLayout sits at the top of the Bottom Sheet, as shown in Figure 1, and is visible when the Bottom Sheet is in the minimized (but not hidden) state. Inside it, we put a TextView with text like “Show guide” and an arrow pointing upwards, so it is easy for the user to understand that the sheet can be viewed by sliding up.

Figure 1 LinearLayout containing textview and imageview

Here is the gist [2] that contains the code for the layout of the Bottom Sheet guide.
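As a rough inline illustration (the view ids match the ones bound in the Java code later in this post, but the exact attributes, strings, and dimensions are assumptions, not the actual PSLab code), the Bottom Sheet layout looks something like this:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:id="@+id/bottom_sheet"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:orientation="vertical"
    app:layout_behavior="android.support.design.widget.BottomSheetBehavior"
    app:behavior_peekHeight="64dp"
    app:behavior_hideable="true">

    <!-- Peek area: visible when the sheet is collapsed -->
    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="64dp"
        android:orientation="horizontal">

        <TextView
            android:id="@+id/sheet_slide_text"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Show guide" />

        <ImageView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:src="@drawable/ic_arrow_upward" />
    </LinearLayout>

    <!-- Guide content: title, schematic image, and description -->
    <TextView
        android:id="@+id/guide_title"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

    <ImageView
        android:id="@+id/custom_dialog_schematic"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />

    <TextView
        android:id="@+id/custom_dialog_desc"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />
</LinearLayout>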

After this step, we can see the layout of the Bottom Sheet in the Android layout editor, as shown in Figure 2.

Figure 2 shows the layout of the Bottom Sheet

Step 3: Creating the Container view layout containing content and Bottom Sheet

For the container view, we will create a new layout under res/layout and name it “container_view_wavegenerator.xml”.

In this layout, we will use CoordinatorLayout as the root ViewGroup, because a persistent Bottom Sheet is implemented using the BottomSheetBehavior class, which can only be applied to a child of a CoordinatorLayout.

Then we add the main layout of our instrument and the layout of the Bottom Sheet inside this layout as its children.

<android.support.design.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android"
   android:layout_width="match_parent"
   android:layout_height="match_parent">

   <include layout="@layout/activity_wave_generator" />

   <include layout="@layout/bottom_sheet_custom" />

</android.support.design.widget.CoordinatorLayout>

Step 4: Setting Up Bottom Sheet and Handling callbacks

Now we will head over to the WaveGenerator.java file (or any instrument's Java file). Here we will set up the Bottom Sheet and handle callbacks using the following classes:

BottomSheetBehavior provides callbacks and makes the Bottom Sheet work with CoordinatorLayout.

BottomSheetBehavior.BottomSheetCallback() provides the callback when the Bottom Sheet changes its state. It has two methods that need to be overridden:

  1. public void onSlide(@NonNull View bottomSheet, float slideOffset)

    This method is called when the Bottom Sheet slides up or down on the screen. It has slideOffset as a parameter, whose value varies from -1.0 to 0.0 as the Bottom Sheet goes from the hidden state to the collapsed state, and from 0.0 to 1.0 as it goes from the collapsed state to the expanded state.

  2. public void onStateChanged(@NonNull View bottomSheet, int newState)

    This method is called when the Bottom Sheet changes its state. Here, let us also understand the different states that the Bottom Sheet can attain:

    • BottomSheetBehavior.STATE_EXPANDED : the Bottom Sheet is fully expanded, showing all the content.
    • BottomSheetBehavior.STATE_HIDDEN : the Bottom Sheet is hidden at the bottom of the layout.
    • BottomSheetBehavior.STATE_COLLAPSED : the Bottom Sheet is collapsed, i.e. only the peek-height part of the layout is visible.
    • BottomSheetBehavior.STATE_DRAGGING : the Bottom Sheet is being dragged.
    • BottomSheetBehavior.STATE_SETTLING : the Bottom Sheet is settling, either at the expanded height or at the collapsed height.

We will implement these methods in our instrument class, and also set the content that goes inside the Bottom Sheet.

@BindView(R.id.bottom_sheet)
LinearLayout bottomsheet;
@BindView(R.id.guide_title)
TextView bottomSheetGuideTitle;
@BindView(R.id.custom_dialog_schematic)
ImageView bottomSheetSchematic;
@BindView(R.id.custom_dialog_desc)
TextView bottomSheetDesc;
// TextView in the peek area that shows the "Show guide"/"Hide guide" text
// (the resource id used here is assumed)
@BindView(R.id.sheet_slide_text)
TextView bottomSheetSlideText;

BottomSheetBehavior bottomSheetBehavior;

@Override
protected void onCreate(Bundle savedInstanceState) {

    //main instrument implementation code

    setUpBottomSheet();
}

private void setUpBottomSheet() {

    bottomSheetBehavior = BottomSheetBehavior.from(bottomsheet);

    bottomSheetGuideTitle.setText(R.string.wave_generator);
    bottomSheetSchematic.setImageResource(R.drawable.sin_wave_guide);
    bottomSheetDesc.setText(R.string.wave_gen_guide_desc);

    bottomSheetBehavior.setBottomSheetCallback(new BottomSheetBehavior.BottomSheetCallback() {
        @Override
        public void onStateChanged(@NonNull View bottomSheet, int newState) {
            if (newState == BottomSheetBehavior.STATE_EXPANDED) {
                bottomSheetSlideText.setText(R.string.hide_guide_text);
            } else {
                bottomSheetSlideText.setText(R.string.show_guide_text);
            }
        }

        @Override
        public void onSlide(@NonNull View bottomSheet, float slideOffset) {
            Log.w("SlideOffset", String.valueOf(slideOffset));
        }
    });
}

After following all the above steps, the Bottom Sheet will work properly in the instrument layout, as shown in Figure 3.

Figure 3 shows the Bottom Sheet in two different states

To read more about implementing a Bottom Sheet in a layout, refer to this AndroidHive article: Working with Bottom Sheet.

Resources

  1. Developer Documentation – BottomSheetBehavior
  2. Gist containing xml file for layout of Custom Bottomsheet

Creating Control Panel For Wave Generator using Constraint Layout


In the blog post Creating the onScreen Monitor Using CardView, I created the monitors to view the wave properties. In this blog post, we will create the UI of the control panel for those monitors, with multiple buttons for both analog and digital waveforms.

Which layout to choose?

In today’s world, there are millions of Android devices with different screen sizes and densities, and a major concern for an Android developer is making a layout that fits all of them. This task is really difficult to handle with a linear or relative layout with fixed dimensions.

To create a complex layout with lots of views inside the parent using LinearLayout, we have to make use of the layout_weight attribute for proper stretching and positioning. But such a complex layout requires a lot of nesting of weights, and Android discourages it by giving a warning:

Nested Weights are bad for performance

This is because the layout_weight attribute requires a widget to be measured twice [1]. When a LinearLayout with non-zero weights is nested inside another LinearLayout with non-zero weights, the number of measurements increases exponentially.

So, to overcome this issue, we will make use of a special type of layout, ConstraintLayout, which was introduced at Google I/O 2016.

Features of Constraint Layout:

  • It is similar to Relative Layout, as all views are laid out according to their relationships with their siblings, but it is more flexible than Relative Layout.
  • It helps to flatten the view hierarchy for complex layouts.
  • The layout is created with the help of a powerful editor provided by Android Studio, which has a palette on the left-hand side from which we can drag and drop different widgets like TextView, ImageView, Buttons, etc., and on the right-hand side provides options for positioning, setting margins, and other styling options like setting colors and changing text styles.
  • It automatically adjusts the layout according to the screen size and hence doesn’t require the use of the layout_weight attribute.

In the following steps, I will create the control panel for the Wave Generator, which is a complex layout with lots of buttons, with the help of ConstraintLayout.

Step 1: Add the dependency of the Constraint Layout in the Project

To use ConstraintLayout, add the following to your build.gradle file and sync the project:

dependencies {
    implementation "com.android.support.constraint:constraint-layout:1.1.0"
}

Step 2: Applying Guidelines to the layout

Guidelines [3] are anchor lines that won’t be displayed in your app; they are like lines of a grid above your layout, and your widgets can be attached or constrained to them. They are only visible in the blueprint or preview editor. They help to position and constrain the UI components on the screen easily.

For adding guidelines :

As shown in Figure 1, right-click anywhere on the layout -> select Helpers -> select a horizontal or vertical guideline according to your need.

Figure 1 shows the horizontal guideline being added to the layout.

For positioning the guideline, we have to set the value of the layout_constraintGuide_percent attribute.

Let’s say we want the guideline to be in the middle of the screen, so we’ll set:

app:layout_constraintGuide_percent="0.50"

For my layout, I have added three guidelines:

  • One horizontal guideline at 50%
  • Two vertical guidelines at 30% and 65%

Doing this divides the screen into six blocks, as shown in the figure below:

Figure 2 shows the blueprint of the constraint layout containing two vertical and one horizontal guideline, with their percentage offsets from their respective bases
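For reference, a guideline declared in XML looks roughly like this (the id here is an assumption); for example, the horizontal guideline at 50% would be:

<android.support.constraint.Guideline
    android:id="@+id/horizontal_guideline"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:orientation="horizontal"
    app:layout_constraintGuide_percent="0.50" />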

Step 3: Adding the buttons in the blocks

Until now, we have created six square blocks; now we have to put a Button view in each of the boxes.

  • First, drag and drop a Button view from the palette (shown in Figure 3) on the left side into the box.

    Figure 3 shows the layout editor palette


  • Then we set the constraints of this button by clicking on the small circle at the middle of each edge and dragging it onto the side of the block facing it.

    Figure 4 shows the button widget getting constrained to sides


  • Set the layout_width and layout_height attributes of the button to "0dp"; doing this, the button expands in all directions, occupying all the available space up to the borders it has been constrained to. A rough XML sketch of such a button is shown after this list.

    Figure 5 shows the button widget expanding to all the available space in the box
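As a rough sketch (the id, text, and guideline references are assumptions), one such button constrained between the parent edges and the guidelines would look like this in XML:

<Button
    android:id="@+id/btn_sine_wave"
    android:layout_width="0dp"
    android:layout_height="0dp"
    android:text="Sine"
    app:layout_constraintTop_toTopOf="parent"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintBottom_toTopOf="@+id/horizontal_guideline"
    app:layout_constraintEnd_toStartOf="@+id/vertical_guideline_one" />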

Similarly, after adding buttons in all the square blocks and applying the proper theme color, we have the blueprint and layout shown in Figure 6.

Figure 6 shows the waveform panel blueprint and actual layout for analog waves with six buttons

Following the same steps, I have created the other control panel layout, with buttons for digital waves, as shown in Figure 7.

Figure 7 shows other constraint layout for digital waves having seven buttons

Detailing and combining the panels to form Complete UI

After adding both of the panels we have created into the Wave Generator layout, we have the result shown in Figure 8.

Figure 8 shows the UI of the Wave Generator on an actual Android device in the PSLab app.

As we can see, on adding the panels, the buttons created inside the layout shrink to adapt to the screen, giving a neat button-like appearance.

Resources   

  1. Blog on Nested Weights are bad for performance
  2. Developer Article – Build a Responsive UI with ConstraintLayout
  3. Information about Guidelines

Adding feature to preview chatbot

You can access all features of SUSI.AI on chat.susi.ai. But this restricts the usage of SUSI.AI to this site only. To solve this issue, we can use a Chatbot which can be embedded on any website and then can be used by the people who visit that website, hence generalising the usage of SUSI.AI.

How to embed SUSI.AI Chatbot on any website?

To embed the SUSI.AI Chatbot on any website, go to skills.susi.ai, log in, and then check out the Botbuilder section. There you can find a deploy button which will give you a code that looks something like this:

<script type="text/javascript" id="susi-bot-script" data-userid="" data-group="" data-language="" data-skill="" src="https://skills.susi.ai/susi-chatbot.js" > </script>


As can be seen from the code above, it is simply a script tag with an id and a few custom data attributes that identify the bot's owner and skill. (In the preview flow described below, a custom data-token attribute carries the unique access token of the user for that session.) The source file for this script is susi-chatbot.js. This is the file responsible for displaying the Chatbot on the screen.

Now, let’s say you have configured your Chatbot through the Botbuilder process. We need a Preview button that shows a preview of the Chatbot with the chosen configuration; otherwise we would have to test it by embedding it on a website, which would be tiresome.

How does the Preview Button work?

The code for Botbuilder page is in the Botbuilder.js file.

The code for the Preview button is as follows :

  {!this.state.chatbotOpen?(<RaisedButton
        label='Preview'
        style={{ width: '148px' }}
        onClick={this.injectJS}
    />):(<RaisedButton
        label='Close Preview'
        style={{ width: '148px' }}
        onClick={this.handleChatbotClose}
  />)}


It can be seen that on clicking the ‘Preview’ button, injectJS() function is called, and on clicking the ‘Close Preview’ button, handleChatbotClose() function is called.

     injectJS = () => {
        const myscript = document.createElement('script');
        myscript.type = 'text/javascript';
        myscript.id = 'susi-bot-script';
        myscript.setAttribute('data-token',cookies.get('loggedIn'));
        myscript.src = '/susi-chatbot.js';
        myscript.async = true;
        document.body.appendChild(myscript);
        // some code
    };

     handleChatbotClose = () =>{
      // some code
      $('#susi-launcher-container').remove();
      $('#susi-frame-container').remove();
      $('#susi-launcher-close').remove();
      $('#susi-bot-script').remove();
    }


The above two functions are key to understanding how the Preview button displays the Chatbot preview on the screen.

Let us see the functioning of injectJS() function first. Firstly, the createElement() method creates an element node of type ‘script’. We store this element node in a variable myscript. We then add attribute values to the script tag by using myscript.attribute_name = attribute_value. We can add ‘type’, ‘id’, ‘src’ and ‘async’ attributes in this way because they are predefined attributes.

However, we need to also add a custom attribute to store the access_token of the User, because we would need to pass down this information to detect the owner of the Chatbot. To add a custom attribute, we use the following code :

myscript.setAttribute('data-token',cookies.get('loggedIn'));


This adds a custom attribute data-token and sets its value to the access token of the session, which will later be used to identify the user.

This script is then appended to the body of the document. As soon as the script is appended to the page, 3 new <div> tags are created and added to the page’s HTML. These 3 <div> tags contain the code for the UI and functionality of the Chatbot itself.

Next, let us look at the functioning of the handleChatbotClose() function. This function’s basic objective is to close the Chatbot and remove the three components that were rendered as a result of the injectJS() function, and also remove the script itself. As can be seen from the code of the function, we remove these four elements (the three Chatbot containers and the script tag) by their ids.

So, this is how the Preview functionality has been added to the Chatbot. Visit https://skills.susi.ai and login to check out the Botbuilder section to try it out.


How api.susi.ai responds to a query and send response

A direct way to access raw data for a query is https://api.susi.ai. It is the API of the SUSI server, which sends responses to a user's query in the form of a JSON (JavaScript Object Notation) object (more on JSON here). This JSON object is the raw form of any response to a query: a collection of key (attribute) and value pairs. This data is then sent from the server to various clients like chat.susi.ai, SUSI bots, and the Android and iOS apps of SUSI.AI.

Whenever a user is using a client, for example chat.susi.ai, the user sends a query to the client. This query is then sent by the client as a request to the SUSI server. The server processes the request and sends the answer to the query as JSON data in the response. This response is then used by the client to display the answer after applying styling to the JSON data.
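As a rough sketch of this flow (the chat endpoint path and the exact response fields are assumptions based on typical SUSI responses, not taken from this post), a client could fetch and display an answer like this:

// Minimal sketch: send a query to the SUSI server and log the first answer.
// The endpoint path and response fields are assumptions for illustration.
const query = 'tell me a joke';
fetch('https://api.susi.ai/susi/chat.json?q=' + encodeURIComponent(query))
  .then((response) => response.json())
  .then((data) => {
    // The reply text is expected inside the actions of the first answer.
    const answer = data.answers[0].actions[0].expression;
    console.log(answer);
  })
  .catch((error) => console.error(error));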


How to integrate SUSI.AI with Google Sign In API

Google Sign-In manages the OAuth 2.0 flow and token lifecycle, simplifying our integration with Google APIs. A user always has the option to revoke access to an application at any time. Users can see a list of all the apps which they have given permission to access their account details and are using the login API. In this blog post I will discuss how to integrate this API with susi server API to enhance our AAA system. More on Google API here.


How does conversation view of bot wizard work in the SUSI.AI Botbuilder?


Earlier, we could view the skills only in code view in SUSI.AI. Code view shows user queries along with bot responses in a simple form. Different types of responses are coded in different ways, and there is proper documentation for it here. While this works for a developer, it’s not user friendly. The main purpose of a chatbot web plugin is that the user should be able to customise it and add skills. Hence we need a conversation view to better show the skills of SUSI.

Fetching data from code view and its storage form:

Skill data from code view is fetched and then stored in the following form:

{
   userQueries: [],
   botResponses: [],
}

This is stored in an object called conversationData.
userQueries stores an array of all the user queries, and botResponses stores the answers to those queries.
For the query at index 0 of userQueries, the response is at index 0 of botResponses; for the query at index 1, the response is at index 1, and so on.

For example:
If the query is: “Hi|Hello” and the response is “Hi. I am SUSI.|Hello!”
Then the object conversationData will look like this:

{
   userQueries: [['Hi', 'Hello']],
   botResponses: [['Hi. I am SUSI', 'Hello!']],
}

You can see that index 0 of userQueries holds the queries and botResponses holds its responses.
Here is an example of data from the code view being stored in the conversationData object:

::name <Skill_name>
::author <author_name>
::author_url <author_url>
::description <description>
::dynamic_content <Yes/No>
::developer_privacy_policy <link>
::image <image_url>
::terms_of_use <link>

Hi|Hello
Hi. I’m a chatbot.

I need help.|Can you help me?
Sure! I’ll help you. What do you want help with?

How do I track my order?|I need to track my order.
Please tell me the order id.

This will be stored in conversationData in the following way:

{
   userQueries: [
      ['Hi', 'Hello'],
      ['I need help.', 'Can you help me?'],
      ['How do I track my order?', 'I need to track my order.']
   ],
   botResponses: [
      ["Hi. I'm a chatbot."],
      ["Sure! I'll help you. What do you want help with?"],
      ['Please tell me the order id.']
   ],
}
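As an illustration of how such a structure could be built (this is a rough sketch, not the exact Botbuilder parsing code), the code view text could be converted into conversationData roughly like this:

// Rough sketch: split the code view text into query/response pairs.
// Metadata lines starting with '::' are skipped; the remaining lines
// alternate between a user query line and a bot response line.
function parseCodeView(code) {
  const conversationData = { userQueries: [], botResponses: [] };
  const lines = code
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line !== '' && !line.startsWith('::'));
  for (let i = 0; i + 1 < lines.length; i += 2) {
    conversationData.userQueries.push(lines[i].split('|'));
    conversationData.botResponses.push(lines[i + 1].split('|'));
  }
  return conversationData;
}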

Rendering data in conversation view:

The data collected in conversationData object is passed to conversation view JS file as props. Now this data can be rendered by injecting HTML elements using jQuery but that is not a right approach. So, we use an array to render it. We pass the HTML elements into the array using the push() method.

This will be more clear from the following code snippets.

This is the code for adding user query:

for (let query of user_query) {
   if (query !== '') {
      conversation.push(
         <div className="user-text-box">{query}</div>,
      );
   }
}

This is the code for adding bot response:

for (let response of bot_response) {
   if (response !== '') {
      conversation.push(
         <div className="bot-text-box">{response}</div>,
      );
   }
}


Using Wikipedia API for knowledge graph in SUSPER

A Knowledge Graph is a way to give a brief description of a search query by connecting it to a real-world entity. This helps users get information about exactly what they want. Previously, Susper had a Knowledge Graph implemented using the DBpedia API. But since DBpedia does not provide content over HTTPS connections, the content was blocked on susper.com, and there was a need to implement the Knowledge Graph using a new API that provides content over HTTPS. In this blog, I will describe how getting a knowledge graph was made possible using the Wikipedia API.

What is Wikipedia API ?

The MediaWiki action API is a web service that provides convenient access to wiki features, data, and metadata over HTTP, via a URL usually at api.php. Clients request particular “actions” by specifying an action parameter, mainly action=query to get information.

The endpoint :

https://en.wikipedia.org/w/api.php

The format :

format=json This tells the API that we want data to be returned in JSON format.

The action :

action=query

The MediaWiki web service API implements dozens of actions and extensions implement many more; the dynamically generated API help documents all available actions on a wiki. In this case, we’re using the “query” action to get some information.

The complete API which is used in SUSPER to extract information of a query is :

https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro=&explaintext=&titles=japan

where titles=<search query>; here it is "japan".

How it is implemented in SUSPER?

To implement it, a service has been created which fetches information by setting various URL parameters. The result can be fetched by creating an instance of the service and passing the search query to the getsearchresults(searchquery) function.

export class KnowledgeapiService {
 server = 'https://en.wikipedia.org';
 searchURL = this.server + '/w/api.php?';
 homepage = 'http://susper.com';
 logo = '../images/susper.svg';
 constructor(private http: Http,
             private jsonp: Jsonp,
             private store: Store<fromRoot.State>) {
 }
 getsearchresults(searchquery) {
   let params = new URLSearchParams();
   params.set('origin', '*');
   params.set('format', 'json');
   params.set('action', 'query');
   params.set('prop', 'extracts');
   params.set('exintro', '');
   params.set('explaintext', '');
   params.set('titles', searchquery);
   let headers = new Headers({ 'Accept': 'application/json' });
   let options = new RequestOptions({ headers: headers, search: params });
   return this.http
     .get(this.searchURL, options).map(res =>
         res.json().query.pages
     ).catch(this.handleError);
}


Since the result obtained is an observable, we have to subscribe to it and then extract the information into local variables in the infobox.component.ts file.

export class InfoboxComponent implements OnInit {
 public title: string;
 public description: string;
 query$: any;
 resultsearch = '/search';
 constructor(private knowledgeservice: KnowledgeapiService,
             private route: Router,
             private activatedroute: ActivatedRoute,
             private store: Store<fromRoot.State>,
             private ref: ChangeDetectorRef) {
   this.query$ = store.select(fromRoot.getquery);
   this.query$.subscribe( query => {
     if (query) {
       this.knowledgeservice.getsearchresults(query).subscribe(res => {
         const pageId = Object.keys(res)[0];
         if (res[pageId].extract) {
           this.title = res[pageId].title;
           this.description = res[pageId].extract;
         } else {
           this.title = '';
           this.description = '';
         }
       });
     }
   });
 }

The variables title and description are used to display the results on the results page.

<div *ngIf="this.description" class="card">
  <div>
    <h2><b>{{this.title}}</b></h2>
    <p>{{this.description | slice:0:600}}<a href='https://en.wikipedia.org/wiki/{{this.title}}'>..more at Wikipedia</a></p>
  </div>
</div>

Resources

1.MediaWiki API : https://www.mediawiki.org/wiki/API:Main_page

2.Stackoverflow : https://stackoverflow.com/questions/8555320/is-there-a-clean-wikipedia-api-just-for-retrieve-content-summary

3.Angular Docs : https://angular.io/tutorial/toh-pt4


How to receive different types of messages from SUSI Line Bot

In this blog, I will explain how to receive different types of message responses from the LINE Bot. This includes text, sticker, video, audio, location, and other types of responses. Follow this tutorial to make the SUSI AI LINE Bot.

How does LINE Messaging API work?

The messaging API of LINE allows data to be passed between the server of SUSI AI and the LINE Platform. When a user sends a message to SUSI bot, a webhook is triggered and the LINE Platform sends a request to our webhook URL. SUSI Server then sends a request to the LINE Platform to respond to the user. Requests are sent over HTTPS in JSON format.

Different Message types

  1. Image Messages

To send images, we need two things in the message object: the URL of the original image and the URL of a smaller preview image. The preview image is displayed in the chat, and the full image is displayed when the user taps on the preview.
The message object for an image message has 3 properties:

  • type (String): "image"
  • originalContentUrl (String): Image URL
  • previewImageUrl (String): Preview image URL
Sample Message object:

{
   "type": "image",
   "originalContentUrl": "https://susi.ai/original.jpg",
   "previewImageUrl": "https://susi.ai/preview.jpg"
}
  2. Video Messages

To send videos, we need two things in the message object: the URL of the video file and the URL of a preview image for the video.
The message object for a video message has 3 properties:

  • type (String): "video"
  • originalContentUrl (String): Video file URL
  • previewImageUrl (String): Preview image URL

Sample Message object for video type responses:

{
   "type": "video",
   "originalContentUrl": "https://susi.ai/original.mp4",
   "previewImageUrl": "https://susi.ai/preview.jpg"
}
  3. Audio Messages

To send an audio message, you have to include the URL of the audio file and its duration in the message object.
The message object for an audio message has 3 properties:

  • type (String): "audio"
  • originalContentUrl (String): Audio file URL
  • duration (Number): Length of the audio file in milliseconds

Sample Message object for audio type responses:

{
   "type": "audio",
   "originalContentUrl": "https://susi.ai/original.m4a",
   "duration": 10000
}
  4. Location Messages

To send location information to users, you need to include the title, address, latitude, and longitude of the location.
The message object needs to include 5 properties, all of them required:

  • type (String): "location"
  • title (String): Title of the location
  • address (String): Address
  • latitude (Decimal): Latitude
  • longitude (Decimal): Longitude

Sample Message object for location type responses:

{
   "type": "location",
   "title": "singapore",
   "address": "address",
   "latitude": 1.2896698812440377,
   "longitude": 103.85006683126556
}

How to use these Message objects?

You have to send the response that you got from the SUSI server to the LINE platform as one of these message objects. The object tells the LINE platform the type of the message, so simply passing it to LINE's reply API sends the message to the user.
It looks like this:

return client.replyMessage(event.replyToken, answer);
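Putting it together, a webhook handler using the LINE Node.js SDK might look roughly like this (the SUSI endpoint, response fields, and helper names here are assumptions for illustration, not the exact SUSI bot code):

// Rough sketch using @line/bot-sdk; endpoint and response fields are assumed.
const line = require('@line/bot-sdk');
const request = require('request');

const client = new line.Client({ channelAccessToken: process.env.CHANNEL_ACCESS_TOKEN });

function handleEvent(event) {
  if (event.type !== 'message' || event.message.type !== 'text') {
    return Promise.resolve(null);
  }
  const url = 'https://api.susi.ai/susi/chat.json?q=' + encodeURIComponent(event.message.text);
  return new Promise((resolve, reject) => {
    request(url, (error, response, body) => {
      if (error) return reject(error);
      // Build a text message object from SUSI's reply; image, video, audio
      // and location messages would be built as shown above instead.
      const answer = {
        type: 'text',
        text: JSON.parse(body).answers[0].actions[0].expression,
      };
      resolve(client.replyMessage(event.replyToken, answer));
    });
  });
}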


Adding Support for Displaying Images in SUSI iOS

SUSI provides exciting features on the chat screen. The user can ask different types of queries, and SUSI responds according to the query. If the user wants news, they can just ask for news and SUSI responds with the news. Like news, SUSI can respond to many other queries. The user can even ask SUSI for an image of anything, and in response, SUSI displays the image the user asked for. Exciting? Let’s see how displaying images on the chat screen of SUSI iOS is implemented.

Getting image URL from server side –

When we ask SUSI for an image of anything, it returns the URL of the image source in a response with the answer action type. We get the URL of the image in the expression key of the actions object, as below:

"actions": [
    {
        "language": "en",
        "type": "answer",
        "expression": "https://pixabay.com/get/e835b60f2cf6033ed1584d05fb1d4790e076e7d610ac104496f1c279a0e8b0ba_640.jpg"
    }
]


Implementation in App –

In the app, we check whether the response coming from the server is an image URL or not using the following two methods.

One – Check if the expression is a valid URL:

func isValidURL() -> Bool {
    if let url = URL(string: self) {
        return UIApplication.shared.canOpenURL(url)
    }
    return false
}

The method above returns a boolean value indicating whether the expression is a valid URL or not.

Two – Check if the expression contains image source in URL:

func isImage() -> Bool {
    let imageFormats = ["jpg", "jpeg", "png", "gif"]

    if let ext = self.getExtension() {
        return imageFormats.contains(ext)
    }

    return false
}

The method above returns a boolean value indicating whether the URL string points to an image source or not, by checking its extension.
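The getExtension() helper used above is a String extension in the app; a minimal sketch of such a helper (the actual implementation in SUSI iOS may differ) could be:

// Rough sketch: return the lowercased file extension of a URL string, if any.
extension String {
    func getExtension() -> String? {
        guard let url = URL(string: self) else {
            return nil
        }
        let ext = url.pathExtension
        return ext.isEmpty ? nil : ext.lowercased()
    }
}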

If both methods return true, the expression is a valid URL that contains an image source, so we consider it an image and proceed further.

In the collectionView of the chat screen, we register an ImageCell and present that cell if the response is an image, as below.

Registering the Cell –

register(_:forCellWithReuseIdentifier:) method registers a class for use in creating new collection view cells.

collectionView?.register(ImageCell.self, forCellWithReuseIdentifier: ControllerConstants.imageCell)

Presenting the Cell using cellForItemAt method of UICollectionView

The implementation of cellForItemAt method is responsible for creating, configuring, and returning the appropriate cell for the given item. We do this by calling the dequeueReusableCell(withReuseIdentifier:for:) method of the collection view and passing the reuse identifier that corresponds to the cell type we want. That method always returns a valid cell object. Upon receiving the cell, we set any properties that correspond to the data of the corresponding item, perform any additional needed configuration, and return the cell.

if let expression = message.answerData?.expression, expression.isValidURL(), expression.isImage() {
    if let cell = collectionView.dequeueReusableCell(withReuseIdentifier: ControllerConstants.imageCell, for: indexPath) as? ImageCell {
        cell.message = message
        return cell
    }
}

In ImageCell, we present a UIImageView that displays the image. When the cell’s message var is set, it calls the downloadImage method to fetch, cache, and display the image from the server URL using Kingfisher.

In the method below, we:

  1. Get the image URL string and check that it is an image URL
  2. Convert the image string to an image URL
  3. Set the image on the imageView
func downloadImage() {
    if let imageString = message?.answerData?.expression, imageString.isImage() {
        if let url = URL(string: imageString) {
            imageView.kf.setImage(with: url, placeholder: ControllerConstants.Images.placeholder, options: nil, progressBlock: nil, completionHandler: nil)
        }
    }
}

We have added a tap gesture to the imageView so that when the user taps the image, it opens the full image in the browser by using the method below:

@objc func openImage() {
    if let imageURL = message?.answerData?.expression {
        if let url = URL(string: imageURL) {
            UIApplication.shared.open(url, options: [:], completionHandler: nil)
        }
    }
}


Final Output –

Resources –

  1. Apple’s Documentations on collectionView(_:cellForItemAt:) method
  2. Apple’s Documentations on register(_:forCellWithReuseIdentifier:) method
  3. SUSI iOS Link: https://github.com/fossasia/susi_iOS
  4. SUSI API Link: http://api.susi.ai/
  5. Kingfisher – A lightweight, pure-Swift library for downloading and caching images from the web

How does SUSI AI web bot plugin work


In this blog, we’ll learn how the SUSI AI web bot plugin works. Normally, for a bot like the Kik bot, we don’t have to worry about the chat boxes or the way the chatbot is rendered on screen, because all of that is handled by the service on which we are building our bot. So, for these bots, we simply take the text input from the user, send a GET request to the SUSI server, receive the response, and display it.

This is not the case with the SUSI AI web bot plugin. In this case, there is no bot platform.

Hence, there are a lot of other things that we need to take care of here. Let’s explore the main ones among them.

Adding bot to website:

The final product that we’re going to provide to the client is just JavaScript code. Hence, the HTML code of the bot widget needs to be added to the <body> of the website. This is done using the append() method of jQuery, which inserts the given content at the end of the selected elements.

Syntax:

$(selector).append(content)

So we store the HTML code of the bot widget in a variable and then append the content of that variable to the body tag using:

$("body").append(mybot);

Here mybot is the variable containing the HTML code of the bot widget.
The text boxes for user texts and bot texts are added to HTML elements in the same way.

Event handling:

JavaScript’s interaction with HTML is handled through events that occur when the user or the browser manipulates a page. Loading of the page, clicking, pressing a key, closing a window, resizing the window etc are all examples of events.

In the SUSI AI web bot, there are two major events.

  1. Clicking a button

This is required for handling clicks on the send button. Click handling can be done in many ways; two major ways are used in the SUSI AI web bot.

Some elements already exist on the webpage, for example the HTML code of the web bot, which is added to the body tag as soon as the webpage loads. For these elements we use click(). The click() binding is called a “direct” binding, which can be attached only to elements that already exist.

Syntax:

$(selector).click(function);

Selector – Class, ID or name of HTML element.
Function – Specifies the function to run when click event occurs.

Some elements are dynamically generated, which means they are added at a later point in time; for example, the text messages are added when the user types, after the page has loaded. For these, we have to create a “delegated” binding by using the on() method.

Syntax:

$(selector).on(event,childSelector,data,function);

Selector – Class, ID or name of HTML element
Event – (Required) Specifies one or more event(s) or namespaces to attach to the selected elements.
childSelector – (Optional) Specifies that the event handler should only be attached to the specified child elements
Data – (Optional) Specifies additional data to pass along to the function
Function – Specifies the function to run when click event occurs.
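For instance, a delegated click binding for the bot’s send button might look roughly like this (the send button id is an assumption; setUserResponse() and send() are the same helpers used in the enter-key handler below):

// Rough sketch (send button id assumed): delegated binding so the handler
// also works for elements added to the page after it has loaded.
$("body").on("click", "#susi-send-button", function(e) {
        var text = $("#txMessage").val();
        if ($.trim(text) !== "") {
                setUserResponse(text);  // show the user's message in the chat box
                send(text);             // send the query to the SUSI server
        }
        e.preventDefault();
});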

  2. Pressing the enter key

For identifying which key is pressed, we use key codes. The code for enter key is 13.
The on() method is used for handling this event. It’ll be clear from the following code snippet:

Keyup – Keyboard key is released
Keypress – Keyboard key is pressed

$('#txMessage').on('keyup keypress', function(e) {
        var keyCode = e.keyCode || e.which;
        var text = $("#txMessage").val();
        if (keyCode === 13) {
                if(text == "" ||  $.trim(text) == '') {
                        e.preventDefault();
                        return false;
                } else {
                        $("#chat-input").blur();
                        setUserResponse(text);
                        send(text);
                        e.preventDefault();
                        return false;
                }
        }
});

This is the code used to handle the “enter” key event in the SUSI AI web bot. Here txMessage is the id of the text box where the user enters text.

Making GET HTTP request to SUSI Server

We use ajax() method of jQuery to perform an asynchronous HTTP request to SUSI Server. Looking at the code will make things more clear:

$.ajax({
        type: "GET",
        url: baseUrl+text,
        contentType: "application/json",
        dataType: "json",
        success: function(data) {
                main(data);
        },
        error: function(e) {
                console.log(e);
        }
});

This is a GET-type Ajax request. The endpoint of the SUSI API is in url (the baseUrl plus the user’s text). We can see from dataType that the response received will be JSON. If the response is successfully received, the main function is called; otherwise the error is logged to the console.
