Test SUSI Web App with Facebook Jest

Jest is used by Facebook to test all JavaScript code, especially React code snippets. If you need to set up Jest in your React application, you can follow these simple steps. But if your React application was made with "create-react-app", you do not need to set up Jest manually, because it ships with Jest. You can run tests using the "react-scripts" node module. Since SUSI chat is made with "create-react-app", we do not need to install and run Jest directly. We execute our test cases using "npm test", which runs the "react-scripts test" command. It executes all ".js" files under "__tests__" folders, and all other files with ".spec.js" and ".test.js" suffixes. React apps made from "create-react-app" come with a sample test case (smoke test) that checks whether the whole application builds correctly. If it passes the smoke test, the further test cases are run.

import React from 'react';
import ReactDOM from 'react-dom';
import ChatApp from '../../components/ChatApp.react';

it('renders without crashing', () => {
  const div = document.createElement('div');
  ReactDOM.render(<ChatApp />, div);
});

This checks all components inside the "<ChatApp />" component and verifies that they integrate correctly. If we need to check only one component in an isolated environment, we can use the shallow rendering API. We have used shallow rendering to check each and every component in isolation. We have to install enzyme and the test renderer before using it:

npm install --save-dev enzyme react-test-renderer

import React from 'react';
import MessageSection from '../../components/MessageSection.react';
import { shallow } from 'enzyme';

it('render MessageListItem without crashing', () => {
  shallow(<MessageSection />);
});

This test case tests only the "MessageListItem", not its child components. After executing "npm test" you will get the number of passed and failed test cases.
If you need to see the coverage, you can see it without installing additional dependencies. You just need to run this:

npm test -- --coverage

It will show the output like this. This view shows how many lines, functions, statements and branches your program has, and how much of each is covered by your test cases. If we are going to write new test cases for SUSI chat, we have to make a separate file in the "__tests__" folder and name it after the corresponding file that we are going to test.

it('your test case description', () => {
  // test what you need
});

Normally test cases look like this. In test cases you can use "test" instead of "it". After the test case description there is a fat arrow function; inside this fat arrow function you add what you need to test. In the example below I have compared the returned value of a function with a static value:

function funcName() {
  return 1;
}

it('your test case description', () => {
  expect(funcName()).toBe(1);
});

You have to pass the function/variable that needs to be tested into "expect()" and the value you expect from that function or variable into "toBe()". Instead of "toBe()" you can use different matchers according to your need. If you have a…
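To make the matcher mechanics concrete, here is a minimal, hypothetical re-implementation of `expect()` with `toBe()` and `toEqual()` in plain JavaScript. This is only a sketch of how such matchers behave, not Jest's actual code:

```javascript
// Minimal sketch of Jest-style matchers (illustrative only, not Jest's real code).
function expect(actual) {
  return {
    // toBe: strict identity comparison, like Object.is
    toBe(expected) {
      if (!Object.is(actual, expected)) {
        throw new Error(`expected ${expected} but received ${actual}`);
      }
      return true;
    },
    // toEqual: naive deep comparison via JSON serialization
    // (good enough for this sketch, not for production)
    toEqual(expected) {
      if (JSON.stringify(actual) !== JSON.stringify(expected)) {
        throw new Error('values are not deeply equal');
      }
      return true;
    },
  };
}

function funcName() {
  return 1;
}

// The example from the post: compare the returned value with a static value.
console.log(expect(funcName()).toBe(1));          // passes
console.log(expect({ a: 1 }).toEqual({ a: 1 }));  // deep equality passes
```

The sketch also shows why `toBe()` fails for two distinct objects with the same contents while `toEqual()` passes: identity versus value comparison.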


Adding a new Servlet/API to SUSI Server for Skill Wiki

Susi skill wiki is an editor to write and edit skills easily. It follows an API-centric approach where the Susi server acts as the API server and a web front-end acts as the client for the API and provides the user interface. A skill is a set of intents. One text file represents one skill; it may contain several intents which all belong together. The schema for storing a skill is as follows. Using this, one can access any skill based on four parameters: model, group, language and skill. To achieve this on the server side, let's create an API endpoint to list all skills for a given model, group and language. To check the source for this endpoint, clone the susi_server repository from here:

git clone https://github.com/fossasia/susi_server.git

Have a look at the documentation for more information about Susi Server. The servlet Java file is placed at susi_server/ai/susi/server/api/cms/ListSkillService. To implement the endpoint we will use the HttpServlet class, which provides methods such as doGet and doPost for handling HTTP-specific services. Susi Server provides an abstract class AbstractAPIHandler that extends HttpServlet and implements the APIHandler interface. Next we will inherit our ListSkillService class from AbstractAPIHandler and implement the APIHandler interface. To implement our servlet we will override 4 methods, namely:

Minimal base user role

public BaseUserRole getMinimalBaseUserRole() {
    return BaseUserRole.ANONYMOUS;
}

This method tells the minimum user role required to access this servlet; it can also be ADMIN or USER. In our case it is ANONYMOUS: a user need not log in to access this endpoint.

Default permissions

public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
    return null;
}

This method returns the default permissions attached to the base user role. Our servlet has nothing to do with them, so we can simply return null in this case.
The API path

public String getAPIPath() {
    return "/cms/getSkillList.json";
}

This method sets the API endpoint path. It gets appended to the base path, which gives 127.0.0.1:4000/cms/getSkillList.json on the local host and http://api.susi.ai/cms/getSkillList.json on the server.

The serviceImpl method

public ServiceResponse serviceImpl(Query call, HttpServletResponse response,
        Authorization rights, final JsonObjectWithDefault permissions) {
    String model_name = call.get("model", "general");
    File model = new File(DAO.model_watch_dir, model_name);
    String group_name = call.get("group", "knowledge");
    File group = new File(model, group_name);
    String language_name = call.get("language", "en");
    File language = new File(group, language_name);
    ArrayList<String> fileList = new ArrayList<>();
    fileList = listFilesForFolder(language, fileList);
    JSONArray jsArray = new JSONArray(fileList);
    JSONObject json = new JSONObject(true)
            .put("model", model_name)
            .put("group", group_name)
            .put("language", language_name)
            .put("skills", jsArray);
    return new ServiceResponse(json);
}

ArrayList<String> listFilesForFolder(final File folder, ArrayList<String> fileList) {
    File[] filesInFolder = folder.listFiles();
    if (filesInFolder != null) {
        for (final File fileEntry : filesInFolder) {
            if (!fileEntry.isDirectory()) {
                fileList.add(fileEntry.getName());
            }
        }
    }
    return fileList;
}

To access any skill we need the parameters model, group and language. We get these through the call.get method, where the first parameter is the key for which we want the value and the second parameter is the default value. Based on the received model, group and language, we browse the files in that folder and put them in a JSON array to return the service's JSON response. That's all…
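From the client's perspective, the endpoint above is just a URL with three optional query parameters that mirror the defaults in serviceImpl. The following hypothetical JavaScript helper (the function name is my own, not part of susi_server) sketches how a client could build the request URL:

```javascript
// Hypothetical client-side helper: builds the getSkillList.json URL,
// falling back to the same defaults serviceImpl uses on the server
// ("general" / "knowledge" / "en").
function buildSkillListURL(base, { model = 'general',
                                  group = 'knowledge',
                                  language = 'en' } = {}) {
  const params = new URLSearchParams({ model, group, language });
  return `${base}/cms/getSkillList.json?${params.toString()}`;
}

// All defaults, as when the client sends no parameters:
console.log(buildSkillListURL('http://api.susi.ai'));
// Overriding a single parameter:
console.log(buildSkillListURL('http://api.susi.ai', { language: 'de' }));
```

A client would then fetch this URL and read the "skills" array out of the returned JSON object.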


How to make SUSI AI Line Bot

In order to integrate SUSI's API with a Line bot you will need a Line account first so that you can follow the procedure below. You can download the app from here.

Pre-requisites: Line app, GitHub, Heroku

Steps:
1. Install Node.js from https://nodejs.org/en/ if you haven't installed it already.
2. Create a folder with any name, open a shell and change your current directory to the new folder you created.
3. Type npm init in the command line and enter details like name, version and entry point.
4. Create a file with the same name that you wrote as the entry point in the step above, i.e. index.js, and place it in the same folder you created.
5. Type the following command in the command line: npm install --save @line/bot-sdk. After bot-sdk is installed, type npm install --save express. After express is installed, type npm install --save request.
6. When all the modules are installed, check your package.json; the modules will be included within the dependencies portion.

Your package.json file should look like this.
{
  "name": "SUSI-Bot",
  "version": "1.0.0",
  "description": "SUSI AI LINE bot",
  "main": "index.js",
  "dependencies": {
    "@line/bot-sdk": "^1.0.0",
    "express": "^4.15.2",
    "request": "^2.81.0"
  },
  "scripts": {
    "start": "node index.js"
  }
}

Copy the following code into the file you created, i.e. index.js:

'use strict';

const line = require('@line/bot-sdk');
const express = require('express');
const request = require("request");

// create LINE SDK config from env variables
const config = {
  channelAccessToken: process.env.CHANNEL_ACCESS_TOKEN,
  channelSecret: process.env.CHANNEL_SECRET,
};

// create LINE SDK client
const client = new line.Client(config);

// create Express app
// about Express: https://expressjs.com/
const app = express();

// register a webhook handler with middleware
app.post('/webhook', line.middleware(config), (req, res) => {
  Promise
    .all(req.body.events.map(handleEvent))
    .then((result) => res.json(result));
});

// event handler
function handleEvent(event) {
  if (event.type !== 'message' || event.message.type !== 'text') {
    // ignore non-text-message events
    return Promise.resolve(null);
  }
  const options = {
    method: 'GET',
    url: 'http://api.asksusi.com/susi/chat.json',
    qs: {
      timezoneOffset: '-330',
      q: event.message.text
    }
  };
  request(options, function(error, response, body) {
    if (error) throw new Error(error);
    // answer fetched from susi
    // console.log(body);
    var ans = (JSON.parse(body)).answers[0].actions[0].expression;
    // create an echoing text message
    const answer = {
      type: 'text',
      text: ans
    };
    // use reply API
    return client.replyMessage(event.replyToken, answer);
  });
}

// listen on port
const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`listening on ${port}`);
});

Now we have to get the channel access token and channel secret. To get those, follow the steps below.
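The line `(JSON.parse(body)).answers[0].actions[0].expression` in the handler above will throw if the response shape is ever different from what we expect. A slightly more defensive extraction can be sketched like this (the helper name and fallback text are my own, not part of the bot above):

```javascript
// Hypothetical helper: pull the spoken answer out of a SUSI chat.json
// response body, with a fallback when the shape is not what we expect.
function extractAnswer(body, fallback = 'Sorry, I could not answer that.') {
  try {
    const data = JSON.parse(body);
    const actions = data.answers[0].actions;
    // Prefer the first action that carries an "expression" (type "answer").
    const answerAction = actions.find((a) => typeof a.expression === 'string');
    return answerAction ? answerAction.expression : fallback;
  } catch (e) {
    // Malformed JSON or missing fields end up here.
    return fallback;
  }
}

const sample = JSON.stringify({
  answers: [{ actions: [{ type: 'answer', expression: 'Hello, I am SUSI.' }] }],
});
console.log(extractAnswer(sample)); // Hello, I am SUSI.
```

Inside the request callback, `var ans = extractAnswer(body);` would then never crash the webhook on an unexpected response.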
If you have a Line account, move to the next step; otherwise sign up and make one. Create a Line account on the Line Business Center with the Messaging API and follow these steps:
1. In the Line Business Center, select Messaging API under the Service category at the top of the page.
2. Select "Start using Messaging API", enter the required information and confirm it.
3. Click the LINE@ Manager option. In settings, go to bot settings and enable the Messaging API.
4. Now we have to configure settings: allow messages using webhook and select "Allow" for "Use Webhooks".
5. Go to the Accounts option at the top of the page and open LINE Developers. To get Channel access…


Skills for SUSI

Susi is an open source personal assistant that can do a lot of amazing things for you apart from just answering queries in text. Susi supports many action types, such as answer, table, pie chart, RSS, web search and map. Actions contain a list of objects, each with a type attribute; there can be more than one action in the object list. For example:

curl 'http://api.susi.ai/susi/chat.json?timezoneOffset=-330&q=Who+are+you'

We get a JSON response similar to:

{
  "query": "Who are you",
  "answers": [{
    "data": [],
    "metadata": {"count": 0},
    "actions": [{
      "type": "answer",
      "expression": "I was told that I am a Social Universal Super Intelligence. What do you think?"
    }]
  }]
}

The above query is an example of the action type 'answer'. For developing more skills on the answer action type, refer to the FOSSASIA blog post "How to teach Susi skills". In this blog we will see how to teach a table skill to Susi and how Susi interprets the skill. So let's add a skill to display football teams and their players using the Football-Data.org service. For writing rules, open a new etherpad with a desired name <etherpad name> at http://dream.susi.ai/. Next, let's set some queries for which we want Susi to answer.
Example queries: tell me the teams in premier league | Premier league teams

To get the answer we define the following rule:

!console: {
  "url":"http://api.football-data.org/v1/competitions/398/teams",
  "path":"$.teams",
  "actions":[{
    "type":"table",
    "columns":{"name":"Name","code":"Code","shortName":"Short Name","crestUrl":"Logo"},
    "count": -1
  }]
}
eol

Explanation: The JSON response for the above URL looks like this:

{
  "_links": {
    "self": { "href": "http://api.football-data.org/v1/competitions/398/teams" },
    "competition": { "href": "http://api.football-data.org/v1/competitions/398" }
  },
  "count": 20,
  "teams": [{
    "_links": {
      "self": { "href": "http://api.football-data.org/v1/teams/66" },
      "fixtures": { "href": "http://api.football-data.org/v1/teams/66/fixtures" },
      "players": { "href": "http://api.football-data.org/v1/teams/66/players" }
    },
    "name": "Manchester United FC",
    "code": "MUFC",
    "shortName": "ManU",
    "squadMarketValue": null,
    "crestUrl": "http://upload.wikimedia.org/wikipedia/de/d/da/Manchester_United_FC.svg"
  }, {
    "_links": {
      "self": { "href": "http://api.football-data.org/v1/teams/65" },
      "fixtures": { "href": "http://api.football-data.org/v1/teams/65/fixtures" },
      "players": { "href": "http://api.football-data.org/v1/teams/65/players" }
    },
    "name": "Manchester City FC",
    "code": "MCFC",
    "shortName": "ManCity",
    "squadMarketValue": null,
    "crestUrl": "https://upload.wikimedia.org/wikipedia/en/e/eb/Manchester_City_FC_badge.svg"
  }]
}

The 'path' attribute ($.teams) selects the list of objects for which we want to show the table. The table is defined with the action type "table" and a columns object which provides a mapping from the column value names to the descriptive names that will be rendered in the client's output. In our case there are 4 columns: the name of the team, the team code, the short name of the team, and the team logo's URL. The count attribute denotes how many rows to populate the table with.
If count = -1, it means "as many as possible", i.e. display all the results. It's easy, isn't it? We have successfully created a table skill for SUSI. Let's try it out: go to http://chat.susi.ai/ and type: dream <your dream name>. Then type your queries, like "tell me the teams in premier league" in our case, and see how SUSI presents you the results. Next, want to create some more skills? Let's teach SUSI to play the game Rock, Paper, Scissors, Lizard, Spock. SUSI can give random answers…
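On the client side, a table action boils down to the columns mapping plus the rows selected by path, trimmed by count. The following sketch (hypothetical, not actual SUSI client code) shows how a client could turn the action and the selected rows into table cells:

```javascript
// Hypothetical sketch of how a client could render a "table" action:
// pick the mapped columns from each data row, honouring count (-1 = all rows).
function renderTableAction(action, rows) {
  const limit = action.count === -1 ? rows.length : action.count;
  const keys = Object.keys(action.columns); // value names from the skill rule
  return rows.slice(0, limit).map((row) => keys.map((k) => row[k]));
}

// The action from the rule above and two rows from the API response:
const action = {
  type: 'table',
  columns: { name: 'Name', code: 'Code', shortName: 'Short Name', crestUrl: 'Logo' },
  count: -1,
};
const teams = [
  { name: 'Manchester United FC', code: 'MUFC', shortName: 'ManU',
    crestUrl: 'http://upload.wikimedia.org/wikipedia/de/d/da/Manchester_United_FC.svg' },
  { name: 'Manchester City FC', code: 'MCFC', shortName: 'ManCity',
    crestUrl: 'https://upload.wikimedia.org/wikipedia/en/e/eb/Manchester_City_FC_badge.svg' },
];

console.log(renderTableAction(action, teams).length); // 2 rows, one per team
```

With count set to 1 instead of -1, only the first team would be rendered, which is exactly the knob the count attribute gives skill authors.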


Hotword Detection with Pocketsphinx for SUSI.AI

Susi has many apps across all the major platforms. The latest addition to them is Susi Hardware, which allows you to set up Susi on a hardware device like a Raspberry Pi. Susi Hardware was able to interact at the push of a button, but it is always cooler and more convenient to call your assistant anytime rather than using a button. Hotword detection helps achieve that. Hotword detection involves running a process in the background that continuously listens for voice. On noticing an utterance, we need to check whether it contains the desired word. Hotword detection and its integration with Susi AI can be explained using the diagram below:

What is PocketSphinx? PocketSphinx is a lightweight speech recognition engine, specifically tuned for handheld and mobile devices, though it works equally well on the desktop. PocketSphinx is free and open source software. PocketSphinx has various applications, but we utilize its power to detect a keyword (say, a hotword) in a verbally spoken phrase. Official GitHub repository: https://github.com/cmusphinx/pocketsphinx

Installing PocketSphinx: We shall be using PocketSphinx with Python. The latest version of it can be installed with:

pip install pocketsphinx

If you are using a Raspberry Pi or another ARM-based board, or Python 3.6, the above step will install it from source, since the author doesn't provide a Python wheel. For that, you may need to install swig additionally:

sudo apt install swig

How to detect a hotword with PocketSphinx? PocketSphinx can be used from various languages. For Susi Hardware, I am using Python 3.

Steps:

Import PyAudio and PocketSphinx:

from pocketsphinx import *
import pyaudio

Create a decoder with a certain model; we are using the en-us model and the default English US dictionary.
Specify a keyphrase for your application; for Susi AI we are using "susi" as the hotword.

pocketsphinx_dir = os.path.dirname(pocketsphinx.__file__)
model_dir = os.path.join(pocketsphinx_dir, 'model')

config = pocketsphinx.Decoder.default_config()
config.set_string('-hmm', os.path.join(model_dir, 'en-us'))
config.set_string('-keyphrase', 'susi')
config.set_string('-dict', os.path.join(model_dir, dict_name))
config.set_float('-kws_threshold', self.threshold)
decoder = pocketsphinx.Decoder(config)
decoder.start_utt()

Start a PyAudio stream from the microphone input:

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=16000,
                input=True, frames_per_buffer=20480)
stream.start_stream()

In a forever-running loop, read from the stream and process the buffer in chunks of 1024 frames if it is not empty:

buf = stream.read(1024)
if buf:
    decoder.process_raw(buf, False, False)
else:
    break

Check for the hotword and start speech recognition if the hotword is detected. After returning from the method, start detection again:

if decoder.hyp() is not None:
    print("Hotword Detected")
    decoder.end_utt()
    start_speech_recognition()
    decoder.start_utt()

Run the app; if detection doesn't seem to work well, adjust kws_threshold in step 2 to give optimal results. In this way, hotword detection can be added to your Python project. You may also develop some cool hacks with our AI-powered assistant Susi by voice control. Check the repository for more info: https://github.com/fossasia/susi_hardware


Displaying Web Search and Map Preview using Glide in SUSI Android App

SUSI.AI has many skills. Two of them are displaying a web search for a certain query and displaying a map of a certain position. This post will cover how these things are implemented in the Susi Android app and how a preview is displayed using Bumptech's free and open source Glide library. So, what is Glide? According to the Glide docs, Glide is a fast and efficient open source media management and image loading framework for Android that wraps media decoding, memory and disk caching, and resource pooling into a simple and easy to use interface. Now, let's describe how this framework is used in the Susi app, covering its two uses one by one:

Map Preview: Whenever a user queries something like "Where is Singapore" or "Where am I", the response from the server is a map with latitude, longitude and zoom level. See the "type", which is map. It indicates to the Android app that it needs to display a map with the provided values.

"actions": [
  {
    "type": "answer",
    "expression": "Singapore is a place with a population of 3547809. Here is a map: https://www.openstreetmap.org/#map=13/1.2896698812440377/103.85006683126556"
  },
  {
    "type": "anchor",
    "link": "https://www.openstreetmap.org/#map=13/1.2896698812440377/103.85006683126556",
    "text": "Link to Openstreetmap: Singapore"
  },
  {
    "type": "map",
    "latitude": "1.2896698812440377",
    "longitude": "103.85006683126556",
    "zoom": "13"
  }
]

These values are then plugged into the URL below, where length x width is the size of the image to be retrieved. This URL links to an image of the map with the center and size as defined.

http://staticmap.openstreetmap.de/staticmap.php?center=latitude,longitude&zoom=zoom&size=lengthxwidth

Now we have a link to the image to be displayed; we just need something to get the image from that link and display it in the app. That's where Glide comes to the rescue: it loads the image from the link into an ImageView.
Glide.with(currContext).load(mapHelper.getMapURL())
    .listener(new RequestListener<String, GlideDrawable>() {
        @Override
        public boolean onException(Exception e, String model,
                Target<GlideDrawable> target, boolean isFirstResource) {
            return false;
        }

        @Override
        public boolean onResourceReady(GlideDrawable resource, String model,
                Target<GlideDrawable> target, boolean isFromMemoryCache,
                boolean isFirstResource) {
            mapViewHolder.pointer.setVisibility(View.VISIBLE);
            return false;
        }
    }).into(mapViewHolder.mapImage);

What the above code does is display the image from the URL mapHelper.getMapURL() in mapViewHolder.mapImage. This way, we are able to display a preview of the map in the chat itself. The user can then click on the image to load the complete map.

Web Search Preview: When a user enters queries like "search for dog", the response from the server is a websearch with the query to be searched, something like this:

"actions": [
  {
    "type": "answer",
    "expression": "Here is a web search result:"
  },
  {
    "type": "websearch",
    "query": "dog"
  }
]

Now that we know the query to be searched, we can use any search API to get results about that query and display them in the app. In the Susi Android app, we have used the DuckDuckGo open source API for that task. We call the URL https://api.duckduckgo.com/?format=json&pretty=1&q=query&ia=meanings which gives a JSON response with results. We then use the JSON results, which contain the link to the result, the image to be displayed and a short…
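Putting the map action together with the static-map URL pattern, the substitution can be sketched as a small helper. This is a hypothetical example written in JavaScript for brevity, not the app's actual Java code:

```javascript
// Hypothetical sketch: build the staticmap URL from a SUSI "map" action,
// substituting latitude, longitude, zoom and the requested image size.
function buildStaticMapURL(mapAction, width, height) {
  const { latitude, longitude, zoom } = mapAction;
  return 'http://staticmap.openstreetmap.de/staticmap.php' +
    `?center=${latitude},${longitude}&zoom=${zoom}&size=${width}x${height}`;
}

// The map action from the Singapore response above:
const mapAction = {
  type: 'map',
  latitude: '1.2896698812440377',
  longitude: '103.85006683126556',
  zoom: '13',
};

console.log(buildStaticMapURL(mapAction, 300, 150));
```

The resulting URL is exactly what gets handed to Glide (via mapHelper.getMapURL() in the app) to load the preview image.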


Conversion of CSS styles into React styles in SUSI Web Chat App

Earlier this week we had an issue where the text in the search box of the SUSI web app was not white even after having all the required styles. After careful inspection it was found that there is an attribute named -webkit-text-fill-color which was set to black. Adding such an attribute to our ReactJS code directly causes lint errors, so after careful searching on Stack Overflow, I found a way to add this CSS attribute to our React code by using a different casing. I decided to write a blog post on this for future reference, and it might come in handy for other developers as well. If you want to write CSS in JavaScript, you have to turn dashed-key-words into camelCaseKeys. For example:

background-color => backgroundColor
border-radius => borderRadius

But a vendor prefix starts with a capital letter (except ms):

-webkit-box-shadow => WebkitBoxShadow (capital W)
-ms-transition => msTransition ('ms' is the only lowercase vendor prefix)

const containerStyle = {
  WebkitBoxShadow: '0 0 0 1000px white inset'
};

So in our case, -webkit-text-fill-color became WebkitTextFillColor. The final code of the styles looked like:

const searchstyle = {
  WebkitTextFillColor: 'white',
  color: 'white'
};

Now, because inline styles get attached to tags directly instead of using selectors, we have to put this style on the <input> tag itself, not the container. See the React docs' #inline-styles section for more details.
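The renaming rules above are mechanical, so they can be expressed as a small converter. Here is a hypothetical helper (not part of React; names are my own) that turns a dashed CSS property name into its React style key, including the vendor-prefix special cases:

```javascript
// Hypothetical converter: dashed CSS property -> React inline-style key.
// Rules from the post: camelCase the dashes; a leading vendor prefix is
// capitalised (WebkitBoxShadow), except -ms-, which stays lowercase.
function cssToReactKey(prop) {
  // "-webkit-box-shadow".split('-') -> ["", "webkit", "box", "shadow"]
  const parts = prop.split('-');
  const hasVendorPrefix = parts[0] === ''; // property started with "-"
  if (hasVendorPrefix) parts.shift();
  return parts
    .map((part, i) => {
      if (i === 0) {
        // 'ms' is the only vendor prefix that stays lowercase
        if (hasVendorPrefix && part !== 'ms') {
          return part.charAt(0).toUpperCase() + part.slice(1);
        }
        return part;
      }
      return part.charAt(0).toUpperCase() + part.slice(1);
    })
    .join('');
}

console.log(cssToReactKey('background-color'));        // backgroundColor
console.log(cssToReactKey('-webkit-text-fill-color')); // WebkitTextFillColor
console.log(cssToReactKey('-ms-transition'));          // msTransition
```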


How to add a new attribute in an action type for SUSI.AI

In this blog, I'll be telling how to add more attributes to already implemented action types so that it becomes easy to filter the JSON results that we get by calling various APIs.

What are actions? Actions are a set of defined functionality describing how Susi handles a specific JSON return type from some API endpoint. These include table, pie chart, RSS, web search etc. These action types are implemented and defined at the backend (on the server). The definition contains several attributes and their data types.

Current action types and their attributes:

table : {columns, length}
piechart : {total, key, value, unit}
rss : {title, description, link}
websearch : {query}
map : {latitude, longitude, zoom}

* Please note that these are as of the date of publishing of this blog post and are subject to change. Also keep in mind that these are optional attributes.

Need for new attributes: There are millions of open APIs out there, and their JSON response structures vary. Some return a JSON array and some return a single JSON object. Some might return a JSON array inside a JSON object. They all vary, so to give proper filters, we may need new attributes. In this blog we'll take up the example of how the "length" attribute was added. Similarly, you can add more attributes to any of the action types. Some APIs return as few as 5 elements in a JSON array and some give out hundreds of elements, so the power of limiting the rows in a particular skill is given to Susi skill developers with the addition of the length parameter.

First things first: identify the action type to which you want to add parameters. Browse to the SusiMind.java and SusiAction.java files. These are the main files where Susi looks for attributes while evaluating a dream or a skill. In the SusiAction.java file, find the corresponding public action method which returns a JSONObject.
Currently the method looks like this:

public static JSONObject tableAction(JSONObject cols) {
    JSONObject json = new JSONObject(true)
        .put("type", RenderType.table.name())
        .put("columns", cols);
    return json;
}

Add a parameter of the corresponding data type (of the attribute you are adding) to the argument list. Following this, put that variable into the JSONObject variable (that is, json), which is returned at the end. After the changes, the method will look like this:

public static JSONObject tableAction(JSONObject cols, int length) {
    JSONObject json = new JSONObject(true)
        .put("type", RenderType.table.name())
        .put("columns", cols)
        .put("length", length);
    return json;
}

In the SusiMind.java file, find the if condition where the other attributes of the corresponding action type are checked. (Sticking to the length attribute in the table action type, the following is what you have to do.)

if (type.equals(SusiAction.RenderType.table.toString()) && boa.has("columns")) {
    actions.put(SusiAction.tableAction(boa.getJSONObject("columns")));
}

Now, depending on the need, add the conditions in the code and make the changes. The demand of this action type is: if the columns attribute is present, then the user may or may not put the length parameter, but if columns are absent, length will have no value. So now the code gets modified and…
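The effect the new length attribute has on a skill's result set can be illustrated with a small sketch. This is hypothetical JavaScript rather than the server's Java, purely to show the semantics a client or renderer would apply:

```javascript
// Hypothetical sketch of what the "length" attribute buys a skill author:
// cap the number of rows taken from an API's JSON array before rendering.
function applyLengthAttribute(rows, length) {
  // In this sketch, a missing or -1 length means "keep everything".
  if (length === undefined || length === -1) return rows;
  return rows.slice(0, length);
}

// An API that returns 100 elements, trimmed to the 5 the skill asked for:
const apiRows = Array.from({ length: 100 }, (_, i) => ({ id: i }));
console.log(applyLengthAttribute(apiRows, 5).length);  // 5
console.log(applyLengthAttribute(apiRows, -1).length); // 100
```

This is exactly the limiting power the length parameter hands to Susi skill developers: the skill declares the cap once, and every response is trimmed accordingly.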


Youtube videos in the SUSI iOS Client

The iOS and Android clients already have the functionality to play videos based on queries. In order to implement this feature of playing videos in the iOS client, we use the YouTube Data API v3. The task here was to create a UI/UX for playing videos within the app. An API call is made initially to fetch the YouTube videos based on the query, and the video ID of the first object is extracted and used to play the video. The API endpoint for the YouTube Data API looks like:

https://www.googleapis.com/youtube/v3/search?part=snippet&q={query}&key={your_api_key}

Using this we get the following result (I am adding only the first item, which is required, since the response is too long):

Path: $.items[0]

{
  "kind": "youtube#searchResult",
  "etag": "\"m2yskBQFythfE4irbTIeOgYYfBU/oR-eA572vNoma1XIhrbsFTotfTY\"",
  "id": {
    "kind": "youtube#channel",
    "channelId": "UCQprMsG-raCIMlBudm20iLQ"
  },
  "snippet": {
    "publishedAt": "2015-01-01T11:06:00.000Z",
    "channelId": "UCQprMsG-raCIMlBudm20iLQ",
    "title": "FOSSASIA",
    "description": "FOSSASIA is supporting the development of Free and Open Source technologies for social change in Asia. The annual FOSSASIA Summit brings together ...",
    "thumbnails": {
      "default": { "url": "https://yt3.ggpht.com/-CP18cWbo34A/AAAAAAAAAAI/AAAAAAAAAAA/kEmIgO8OjCk/s88-c-k-no-mo-rj-c0xffffff/photo.jpg" },
      "medium": { "url": "https://yt3.ggpht.com/-CP18cWbo34A/AAAAAAAAAAI/AAAAAAAAAAA/kEmIgO8OjCk/s240-c-k-no-mo-rj-c0xffffff/photo.jpg" },
      "high": { "url": "https://yt3.ggpht.com/-CP18cWbo34A/AAAAAAAAAAI/AAAAAAAAAAA/kEmIgO8OjCk/s240-c-k-no-mo-rj-c0xffffff/photo.jpg" }
    },
    "channelTitle": "FOSSASIA",
    "liveBroadcastContent": "upcoming"
  }
}

We parse the above object to grab the videoID based on the query, using the code below:

if let itemsObject = response[Client.YoutubeResponseKeys.Items] as? [[String : AnyObject]] {
    if let items = itemsObject[0][Client.YoutubeResponseKeys.ID] as? [String : AnyObject] {
        let videoID = items[Client.YoutubeResponseKeys.VideoID] as? String
        completion(videoID, true, nil)
    }
}

This videoID is returned to the controller where this method was called. Now we begin with designing the UI for the same. First of all, we need a view on which the YouTube video will be played and which would help dismiss the video by clicking on it. First, we add the blackView to the entire screen:

// declaration
let blackView = UIView()

// Add backgroundView
func addBackgroundView() {
    if let window = UIApplication.shared.keyWindow {
        self.view.addSubview(blackView)
        // Cover the entire screen
        blackView.frame = window.frame
        blackView.backgroundColor = UIColor(white: 0, alpha: 0.5)
        blackView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleDismiss)))
    }
}

func handleDismiss() {
    UIView.animate(withDuration: 0.5, delay: 0, usingSpringWithDamping: 1, initialSpringVelocity: 1, options: .curveEaseOut, animations: {
        self.blackView.removeFromSuperview()
    }, completion: nil)
}

Next, we add the YouTubePlayerView. For this we use the pod `YoutubePlayer`. Since it had a few warnings as well as some videos not being played, I had to make fixes to the original pod and use my own customized version (available here).
// Youtube Player
lazy var youtubePlayer: YouTubePlayerView = {
    let frame = CGRect(x: 0, y: 0, width: self.view.frame.width - 16, height: self.view.frame.height * 1 / 3)
    let player = YouTubePlayerView(frame: frame)
    return player
}()

// Shows Youtube Player
func addYotubePlayer(_ videoID: String) {
    if let window = UIApplication.shared.keyWindow {
        // Add YoutubePlayer view on top of blackView
        self.blackView.addSubview(self.youtubePlayer)
        // Calculate and set frame
        let centerX = UIScreen.main.bounds.size.width / 2
        let centerY = UIScreen.main.bounds.size.height / 3
        self.youtubePlayer.center = CGPoint(x: centerX, y: centerY)
        // Load Player using the Video ID
        self.youtubePlayer.loadVideoID(videoID)
        blackView.alpha = 0
        youtubePlayer.alpha = 0
        UIView.animate(withDuration: 0.5, delay: 0, usingSpringWithDamping:…
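Going back to the response parsing at the start of this section: the same videoID extraction can be sketched as a plain JavaScript function (hypothetical, for illustration; the app itself does this in Swift, and the sample ID below is made up):

```javascript
// Hypothetical sketch (the app does this in Swift): pull the first item's
// videoId out of a YouTube Data API v3 search response.
function extractVideoID(response) {
  const items = response.items;
  if (!Array.isArray(items) || items.length === 0) return null;
  const id = items[0].id;
  // Search results can also be channels or playlists (as in the sample
  // response above, whose first item is a channel); only videos carry videoId.
  return id && id.kind === 'youtube#video' ? id.videoId : null;
}

const sample = {
  items: [{ id: { kind: 'youtube#video', videoId: 'abc123xyz' } }],
};
console.log(extractVideoID(sample)); // abc123xyz
```

Note the guard on `id.kind`: exactly like the Swift optional-binding chain, it keeps a channel result (such as the FOSSASIA item shown earlier) from being mistaken for a playable video.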


Websearch and Link Preview support in SUSI iOS

The SUSI.AI server responds to API calls with answers to the queries made. These answers might contain an action, for example a web search, where the client needs to make a web search request to fetch different web pages based on the query. Thus, we need to add a link preview in the iOS client for each such page, extracting and displaying the title, description and a main image of the webpage. At first we make the API call, adding the query to the query parameter, and get the result from it.

API call: http://api.susi.ai/susi/chat.json?timezoneOffset=-330&q=amazon

And get the following result:

{
  "query": "amazon",
  "count": 1,
  "client_id": "aG9zdF8xMDguMTYyLjI0Ni43OQ==",
  "query_date": "2017-06-02T14:34:15.675Z",
  "answers": [{
    "data": [{
      "0": "amazon",
      "1": "amazon",
      "timezoneOffset": "-330"
    }],
    "metadata": { "count": 1 },
    "actions": [{
      "type": "answer",
      "expression": "I don't know how to answer this. Here is a web search result:"
    }, {
      "type": "websearch",
      "query": "amazon"
    }]
  }],
  "answer_date": "2017-06-02T14:34:15.773Z",
  "answer_time": 98,
  "language": "en",
  "session": {
    "identity": {
      "type": "host",
      "name": "108.162.246.79",
      "anonymous": true
    }
  }
}

After parsing this response, we first recognise the type of action that needs to be performed; here we get `websearch`, which means we need to make a web search for the query. We use `DuckDuckGo's` API to get the result.

API call to DuckDuckGo: http://api.duckduckgo.com/?q=amazon&format=json

I am adding just the first object of the required data since the API response is too long.
Path: $.RelatedTopics[0]

{
  "Result": "<a href=\"https://duckduckgo.com/Amazon.com\">Amazon.com</a>Amazon.com, also called Amazon, is an American electronic commerce and cloud computing company...",
  "Icon": {
    "URL": "https://duckduckgo.com/i/d404ba24.png",
    "Height": "",
    "Width": ""
  },
  "FirstURL": "https://duckduckgo.com/Amazon.com",
  "Text": "Amazon.com Amazon.com, also called Amazon, is an American electronic commerce and cloud computing company..."
}

For the link preview, we need an image logo, a URL and a description, so here we will use the `Icon.URL` and `Text` keys. We have our own class to parse this data into an object:

class WebsearchResult: NSObject {

    var image: String = "no-image"
    var info: String = "No data found"
    var url: String = "https://duckduckgo.com/"
    var query: String = ""

    init(dictionary: [String:AnyObject]) {
        if let relatedTopics = dictionary[Client.WebsearchKeys.RelatedTopics] as? [[String : AnyObject]] {
            if let icon = relatedTopics[0][Client.WebsearchKeys.Icon] as? [String : String] {
                if let image = icon[Client.WebsearchKeys.Icon] {
                    self.image = image
                }
            }
            if let url = relatedTopics[0][Client.WebsearchKeys.FirstURL] as? String {
                self.url = url
            }
            if let info = relatedTopics[0][Client.WebsearchKeys.Text] as? String {
                self.info = info
            }
            if let query = dictionary[Client.WebsearchKeys.Heading] as? String {
                let string = query.lowercased().replacingOccurrences(of: " ", with: "+")
                self.query = string
            }
        }
    }
}

We now have the data; the only thing left is to display it in the UI. Within the chat bubble, we need to add a container view which will contain the image and the text description.
let websearchContentView = UIView()

let searchImageView: UIImageView = {
    let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
    imageView.contentMode = .scaleAspectFit
    // Placeholder image assigned…
