Setup Lint Testing in SUSI Android

As developers tend to make mistakes while writing code, even small issues can negatively impact the overall functionality and speed of an app. It is therefore important to understand Lint testing in Android. Android Lint is a tool built into Android Studio that scans the code and reports different types of bugs; it also finds typos and security issues in the app. Each issue is reported along with a severity level, allowing the developer to fix it based on its priority and the damage it could cause. Lint is easy to use and can significantly improve the quality of your code.

Effect of Lint testing on the speed of an Android app

Lint testing can significantly improve the speed of an app in the following ways:

1. The Lint test helps remove declaration redundancy in the code, so Gradle need not bind the same object again and again, which improves speed.
2. It helps find bugs related to the class structure in different Activities of the application, which is necessary to avoid memory leaks.
3. It also tells the developer about the size of the resources used, for example drawable resources, which sometimes take up a large amount of memory in the application. Cleaning up these resources or replacing them with lightweight drawables helps speed up the app.
4. Overall, Lint testing helps remove typos, unused import statements and redundant strings, refactoring the whole code and increasing stability and speed.

Setup

We can use Gradle to invoke the Lint test with the following commands in the root directory of the project.
To set this up, we add the following to our build.gradle file:

```groovy
lintOptions {
    lintConfig file("lint.xml")
}
```

The lint.xml generated will look something like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<lint>
    <!-- Change the severity of these to "error" to get a warning-free build -->
    <issue id="UnusedResources" severity="error"/>
</lint>
```

To explicitly run the test we can use the following commands. On Windows:

gradlew lint

On Mac/Linux:

./gradlew lint

We can also run Lint on specific build variants of the app, using commands such as gradlew lintDebug or gradlew lintRelease. The XML file generated contains the issues along with their severity levels:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<lint>
    <issue id="InvalidPackage" severity="ignore" />
    <!-- All below are issues that have been brought to informational (so they are visible, but don't break the build) -->
    <issue id="GradleDependency" severity="informational" />
    <issue id="OldTargetApi" severity="informational" />
</lint>
```

Testing on SUSI Android: after running Lint on SUSI Android we find the following errors. As we can see, there are two errors and a lot of warnings. Though the warnings are not that severe, we can definitely improve on them. Making a habit of testing your code with Lint will improve the performance of your app and make it faster. The test provides a complete…
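Beyond pointing Lint at lint.xml, the lintOptions block accepts further switches. A sketch of a fuller configuration, assuming the standard options of the Android Gradle plugin (the disabled IDs here are just the informational ones mentioned above; adjust to taste):

```groovy
android {
    lintOptions {
        lintConfig file("lint.xml")
        // Do not fail the build on lint errors while cleaning up legacy issues.
        abortOnError false
        // Write an HTML report for easy browsing of findings.
        htmlReport true
        htmlOutput file("$buildDir/reports/lint-results.html")
        // Silence checks that are known noise for this project.
        disable 'GradleDependency', 'OldTargetApi'
    }
}
```

abortOnError, htmlReport/htmlOutput and disable are standard lintOptions properties; once the existing warnings are triaged, abortOnError can be flipped back to true so new issues break the build.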


How to add a new Servlet/API to SUSI Server

You have a new feature to enhance SUSI.AI (in the web/Android/iOS application) but cannot find an API that would assist you in making calls to the server (since the principle of all SUSI.AI clients is to contact the SUSI server for any feature). Writing servlets for SUSI is quite different from writing a normal Java servlet. Though the working logic remains the same, there are classes which allow you to focus directly on one thing: maintaining the flow of your feature. To find already implemented servlets, first clone the susi_server repository from here:

git clone https://github.com/fossasia/susi_server.git

cd to the susi_server directory or open your terminal in the susi_server directory. (This blog focuses on servlet development for SUSI only, and hence it is assumed that you have a version of Java 8 installed properly.) If you have not gone through how to run susi_server manually, follow the steps below to start the server:

./gradlew build   // some set of files and dependencies will be downloaded
bin/start.sh      // command to start the server

This will start your SUSI server and it will listen on port 4000. The first step is to analyze which class of API your servlet is going to be added to. Let us take a small example and proceed step by step, looking at the development of the ListSettingsService servlet. (To find the code of this servlet, browse to the following location: susi_server -> src -> ai -> susi -> server -> api -> aaa.) Once you have decided the classification of your servlet, create a .java file in it (like we created the ListSettingsService.java file in the aaa folder). Extend the AbstractAPIHandler class in your class and implement APIHandler. If you are using an IDE like IntelliJ IDEA or Eclipse, it will show an error message and, when you click on it, it will ask you to override some methods.
Select the option, or if you are using a simple text editor, override the following methods in the given way:

```java
@Override
public String getAPIPath() {
    return null;
}

@Override
public BaseUserRole getMinimalBaseUserRole() {
    return null;
}

@Override
public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
    return null;
}

@Override
public ServiceResponse serviceImpl(Query post, HttpServletResponse response,
        Authorization rights, JsonObjectWithDefault permissions) throws APIException {
    return null;
}
```

What are all these methods for and why do we need them? These are the four methods that make our work much easier. On compilation, getAPIPath() is called first to evaluate the endpoint. Whenever this endpoint is called properly, it responds with whatever is defined in serviceImpl(). In our case we have given the endpoint "/aaa/listSettings.json". Ensure that you do not have two servlets with the same endpoint. Next in line is the getMinimalBaseUserRole() method. While developing certain features, a special privilege (like admin login) might be required. If you are implementing a feature for admins only (like we are doing in this servlet), return BaseUserRole.ADMIN. If you want to give access to anyone (registered or not) then return…
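The relationship between getAPIPath(), getMinimalBaseUserRole() and serviceImpl() can be pictured as a simple endpoint registry with a privilege gate. A hypothetical JavaScript sketch of that flow (the real server is Java; the names and response shape here are illustrative, not susi_server's actual code):

```javascript
// Hypothetical sketch: how an endpoint string maps to its handler,
// and how a minimal user role gates access before serviceImpl runs.
const handlers = new Map();

function register(servlet) {
  // getAPIPath() decides the endpoint; serviceImpl() produces the response.
  handlers.set(servlet.getAPIPath(), servlet);
}

const listSettingsService = {
  getAPIPath: () => "/aaa/listSettings.json",
  minimalBaseUserRole: "ADMIN",               // stand-in for getMinimalBaseUserRole()
  serviceImpl: (query) => ({ settings: [] })  // placeholder response body
};

register(listSettingsService);

function handle(path, userRole, query) {
  const servlet = handlers.get(path);
  if (!servlet) return { error: 404 };           // no servlet owns this endpoint
  if (servlet.minimalBaseUserRole === "ADMIN" && userRole !== "ADMIN") {
    return { error: 401 };                       // insufficient privilege
  }
  return servlet.serviceImpl(query);
}
```

This is why two servlets must not share one endpoint: the registry keeps a single handler per path, so a duplicate would silently shadow the first.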


Making SUSI’s login experience easy

Every app should provide a smooth and user-friendly login experience, as it is the first point of contact between the app and the user. To provide easy login in SUSI, auto-suggestion of the email address is used. With this feature, the user can select his email from an autocomplete dropdown just by typing the first few letters, provided he has successfully logged in to the app before. Thus one need not write the whole email address every time. Let's see how to implement it.

AutoCompleteTextView is a subclass of the EditText class which displays a list of suggestions in a drop-down menu, from which the user can select one suggestion or value. To use AutoCompleteTextView, add the dependency for the latest version of the design library to the Gradle build file:

```groovy
dependencies {
    compile "com.android.support:design:$support_lib_version"
}
```

Next, in susi_android/app/src/main/res/layout/activity_login.xml, the AutoCompleteTextView is wrapped inside a TextInputLayout to provide an input field for the user's email:

```xml
<android.support.design.widget.TextInputLayout
    android:id="@+id/email"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:errorEnabled="true">

    <AutoCompleteTextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:hint="@string/email"
        android:id="@+id/email_input"
        android:textColor="@color/edit_text_login_screen"
        android:inputType="textEmailAddress"
        android:textColorHint="@color/edit_text_login_screen" />

</android.support.design.widget.TextInputLayout>
```

In susi_android/app/src/main/java/org/fossasia/susi/ai/activities/LoginActivity.java the following import statements are added to import the collections.
```java
import android.widget.ArrayAdapter;
import android.widget.AutoCompleteTextView;

import java.util.ArrayList;
import java.util.HashSet;
import java.util.Set;
```

The following code binds the AutoCompleteTextView using ButterKnife:

```java
@BindView(R.id.email_input)
AutoCompleteTextView autoCompleteEmail;
```

To store every successfully logged-in email id, use the preference manager in the login response:

```java
...
if (response.isSuccessful() && response.body() != null) {
    Toast.makeText(LoginActivity.this, response.body().getMessage(), Toast.LENGTH_SHORT).show();
    // Save email for autocompletion
    savedEmails.add(email.getEditText().getText().toString());
    PrefManager.putStringSet(Constant.SAVED_EMAIL, savedEmails);
}
...
```

Here Constant.SAVED_EMAIL is a string defined in Constants.java as:

```java
public static final String SAVED_EMAIL = "saved_email";
```

Next, to specify the list of suggested emails to be displayed, the ArrayAdapter class is used. The setAdapter method sets the adapter of the AutoCompleteTextView:

```java
private Set<String> savedEmails = new HashSet<>();

if (PrefManager.getStringSet(Constant.SAVED_EMAIL) != null) {
    savedEmails.addAll(PrefManager.getStringSet(Constant.SAVED_EMAIL));
    autoCompleteEmail.setAdapter(new ArrayAdapter<>(this,
            android.R.layout.simple_list_item_1, new ArrayList<>(savedEmails)));
}
```

Then just test it: log in once, and from then on, every time you log in, type the first few letters and see the email suggestions. So, next time you make an app with a login interface, do include AutoCompleteTextView for hassle-free login.
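Under the hood, what the adapter does for us is essentially prefix matching over the saved set. A small language-agnostic sketch in JavaScript of that logic (the helper names are made up; ArrayAdapter's real filter also matches word starts):

```javascript
// Hypothetical sketch of the suggestion logic the adapter performs:
// remember previously used emails and filter them by the typed prefix.
const savedEmails = new Set();

function rememberEmail(email) {
  savedEmails.add(email.trim().toLowerCase());
}

function suggest(prefix) {
  const p = prefix.trim().toLowerCase();
  if (!p) return [];
  return [...savedEmails].filter(e => e.startsWith(p)).sort();
}

rememberEmail("alice@example.com");
rememberEmail("bob@example.com");
console.log(suggest("al")); // ["alice@example.com"]
```

Normalizing to lower case on both sides mirrors the case-insensitive matching a user expects when retyping an email address.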


Using Speech To Text Engine in Susi Android

SUSI is an intelligent chatbot which supports speech-to-text input. The user can talk to SUSI just as if talking to another person, and for speech input the output of SUSI is text-to-speech, giving a seamless conversational experience. To achieve speech-to-text input in SUSI Android, or any other Android application, we have the following options:

1. Using Android's inbuilt speech-to-text function.
2. Using the Google Cloud Speech API.

We will talk about each of these.

Using Android's inbuilt speech-to-text function

Android provides an inbuilt method to convert speech into text; it is the easiest way to do this conversion. This method uses the android.speech package and, specifically, the android.speech.RecognizerIntent class. Basically we trigger an intent (android.speech.RecognizerIntent) which shows a dialog box to recognize the speech input. This activity then converts the speech into text and sends the result back to our calling activity. When we invoke the android.speech.RecognizerIntent intent, we must use startActivityForResult() as we must listen for the result text. The code snippet for this is:

```java
private void promptSpeechInput() {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
    intent.putExtra(RecognizerIntent.EXTRA_PROMPT, getString(R.string.speech_prompt));
    try {
        startActivityForResult(intent, REQ_CODE_SPEECH_INPUT);
    } catch (ActivityNotFoundException a) {
        Toast.makeText(getApplicationContext(),
                getString(R.string.speech_not_supported),
                Toast.LENGTH_SHORT).show();
    }
}
```

In the code above we are putting some extra information into the intent. This information is used by the speech-to-text engine to determine the language of the user.
Thus while invoking RecognizerIntent we must provide the extra RecognizerIntent.EXTRA_LANGUAGE_MODEL; here it is set to the free-form model, and the language (e.g. en-US) is taken from the default locale. Once the recognizer finishes, we receive a callback in onActivityResult(int requestCode, int resultCode, Intent data), an overridden method that handles the result. The RecognizerIntent converts the speech input to text and sends back the result as an ArrayList with the key RecognizerIntent.EXTRA_RESULTS. Generally this list is ordered in descending order of speech recognizer confidence, and it is only present when RESULT_OK is returned in the activity result. We just set the text we got in the result on the text view txtText using txtText.setText(). The screenshot of the implementation is:

Using the Google Cloud Speech API

The Google Cloud API enables developers to convert speech to text in real time. It is used in Google Allo and Google Assistant. It is backed by powerful neural network and machine learning algorithms which make it very efficient and fast at the same time. The API is capable of recognizing more than 80 languages. To find more detail about the Google Cloud Speech API, one can refer to the official documentation at this link. The speech-to-text API is not free and is billed based on usage. To use it, the developer has to sign up at the Google console to generate an API key. On enabling the Speech API a JSON will be created.

protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); switch (requestCode) { case…
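Since EXTRA_RESULTS comes back ordered by confidence, handling the result boils down to taking the head of the list. A hypothetical JavaScript sketch of that selection step (the names are illustrative, not the Android API):

```javascript
// The recognizer returns candidate transcriptions ordered by confidence
// (best first). Picking the text to display is taking the first
// non-empty entry; an empty string signals "nothing usable recognized".
function pickTranscription(results) {
  if (!Array.isArray(results)) return "";
  const first = results.find(r => typeof r === "string" && r.trim() !== "");
  return first ? first.trim() : "";
}

console.log(pickTranscription(["hello susi", "hello sushi"])); // "hello susi"
```

In the Android handler this corresponds to reading index 0 of the EXTRA_RESULTS ArrayList before calling txtText.setText().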


How to keep crash records of SUSI.AI Android with Crashlytics

At this stage of the development of SUSI.AI Android there are many changes, and at times this results in inconsistencies and crashes of the app. One important question we face is how to keep a record of crashes so that we can improve our app. Using Crashlytics is one way to keep records of crashes. The easiest way to add Crashlytics to an app is to integrate the Fabric plugin in Android Studio.

First create an account at Fabric. When you create the account it will send you a confirmation mail. After clicking the confirmation mail it will redirect you to the Fabric page, which shows you different platform options; select Android as the platform. For Windows/Linux users, select Settings from the File menu; Mac users select Preferences from the menu. Select Plugins, click the Browse repositories button and search for "Fabric for Android". Click the Install plugin button to download and install the plugin. You will then see a Fabric option on the right side. Click on it and enter your credentials to sign in. Select the susi_android project and click Next. Fabric will list all the organizations you registered, so select the organization you want to associate the app with and click Next; in my case the organization is susi. Fabric will then list all of its kits. Select Crashlytics, click Next, then click the Install button, which adds Crashlytics to the project. Fabric wants to make changes in the MainApplication and AndroidManifest.xml files, so click the Apply button for the changes to happen.

Build and run your application to make sure that everything is configured properly. If your app was successfully configured, you will get an email sent instantly to the address you used to sign up with Fabric. Now you can track crashes of your app on the dashboard of your Fabric account. It will give you details such as:

1. How many users are affected and how many times the app crashed, with dates.
2. Details of the devices on which the app crashed.
3. The cause of the errors.

For more information use these links:
https://fabric.io/home
https://fabric.io/kits/android/crashlytics


How to teach SUSI.AI skills using external APIs

A powerful feature of SUSI is that it can use external APIs to learn new skills. The generic syntax used here is:

```
Question string !console:constant answer string + answer variable
{
  "url" : "API to be called",
  "path" : "path from where answer will be fetched"
}
eol
```

I will try to explain this syntax with the help of some useful examples. Let's start with a very basic one: I want SUSI to be able to answer questions like "What is the date today?". Let's tackle this step by step. As we can infer from the syntax above, to teach SUSI a skill involving an external API call, we need to be clear about five things:

1. The question string, i.e. "What is the date today?" (in this case).
2. The constant answer string, i.e. "The date today is ".
3. The API to be called, i.e. "http://calapi.inadiutorium.cz/api/v0/en/calendars/default/today".
4. The path which contains our answer. When we visit this API url, we get the following result:

```json
{
  "date":"2017-05-16",
  "season":"easter",
  "season_week":5,
  "celebrations":[
    {
      "title":"",
      "colour":"white",
      "rank":"ferial",
      "rank_num":3.13
    }
  ],
  "weekday":"tuesday"
}
```

The whole JSON object is represented by the '$' sign. As date is a property of this object, it can be accessed with "$.date"; this string is referred to as the path.

5. The answer variable. The result of the API url contains many "key:value" pairs. The answer variable is the value of the last key (i.e. date) referred to in the path string. This value is stored in a variable named $object$, so our answer variable turns out to be $object$.

Now, as we have all five things ready, we can write our SUSI skill:

```
What is the date today? !console:$object$
{
  "url" : "http://calapi.inadiutorium.cz/api/v0/en/calendars/default/today",
  "path" : "$.date"
}
eol
```

Kudos! But where do we feed this skill, and how do we check that the SUSI chat bot can answer "What is the date today?" appropriately?
To test the working of a skill: open dream.asksusi.com, write whatever name you like for the pad and then click OK. Replace the data written on your pad with the skill code you created. You don't need to save it; it is saved automatically. Now your page should look something like this:

To check if this skill is working properly, visit the SUSI chat bot. In the textbox below, write dream followed by the name of your pad and press the Enter key. SUSI will reply with "dreaming enabled for YOUR-PAD-NAME". Now write the question string, i.e. What is the date today?, and you should be shown today's date! For more clarity, refer to this image:

Great that you made it! You can now contribute skills by making a PR to this repository and see those skills live on SUSI without enabling any dream. Just ask your question and get your own skilled answers. Let's learn more about skills by going through some variations of this question:…
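The five-part recipe above can be mimicked in a few lines. A hypothetical JavaScript sketch of how a client might evaluate such a skill against an already-fetched API response (the HTTP fetch itself is omitted, and the helper names are made up, not SUSI's actual implementation):

```javascript
// Hypothetical sketch: resolve a skill's "path" (e.g. "$.date") against
// a parsed API response, then substitute the result for $object$.
function resolvePath(json, path) {
  // "$" stands for the whole object; each ".key" descends one level.
  return path
    .replace(/^\$\.?/, "")
    .split(".")
    .filter(Boolean)
    .reduce((node, key) => (node == null ? undefined : node[key]), json);
}

function answer(template, json, path) {
  return template.replace("$object$", resolvePath(json, path));
}

const apiResponse = { date: "2017-05-16", weekday: "tuesday" };
console.log(answer("The date today is $object$", apiResponse, "$.date"));
// "The date today is 2017-05-16"
```

The same resolver handles the constant answer string and the answer variable together: everything outside $object$ passes through unchanged, and the path's final key supplies the substituted value.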


How to add the Google Books API to SUSI AI

SUSI.AI is an open source personal assistant, and you can easily add new skills to SUSI. In this blog post I'm going to add Google's Books API to SUSI as a skill. A complete tutorial on SUSI.AI skills is in the repository; check out Tutorial Level 11: Call an external API there and you will understand how we can integrate an external API with SUSI.AI.

To start adding book skills to SUSI.AI, first go to http://dream.susi.ai, give a name in the text field and press OK. Copy and paste the code below into the newly opened etherpad. Then go to http://chat.susi.ai to test the new skill: type "dream blogpost" in the chat and press enter. Now we can use the skills we add to the etherpad.

To understand Google's Books API, use this url. Your request url should look like this:

https://www.googleapis.com/books/v1/volumes?q=BOOKNAME&key=yourAPIKey

You should replace yourAPIKey with your own API key. To get started you first need to get an API key: go to this url, click the GET A KEY button at the top right and select "Create a new project". Add a name for the project and click the "CREATE AND ENABLE API" button. Copy your API key and replace the API key part of the request URL. Paste the request url into your browser's address bar, replace the BOOKNAME part with "flower" and go to the URL. It will return a JSON document. We need to get the full name of the books, which is in the items array; for that we go through this hierarchy: items array > first item > volumeInfo > title.

Go to the etherpad we made before and paste the following code:

```
is there any book called * ? !console:did you mean "$title$" ? Here is a link to read more: $infoLink$
{
  "url":"https://www.googleapis.com/books/v1/volumes?q=$1$&key=AIzaSyCt3Wop5gN3S5H0r1CKZlXIgaM908oVDls",
  "path":"$.items[0].volumeInfo"
}
eol
```

The first line of the code, "is there any book called * ?", is the question the user asks; * is the variant part of the question.
That part can be referred to in the code via $1$; if there are more variants we can add multiple asterisks and refer to them by the corresponding numbers, e.g. $1$, $2$, $3$. In this code, "path" : "$.items[0].volumeInfo" works as follows: $ represents the full JSON result, items[0] gets the first element, and .volumeInfo refers to the volumeInfo object. The line

!console:did you mean "$title$" ? Here is a link to read more: $infoLink$

produces the output: $title$ refers to the "title" part of the data that comes from "path", and $infoLink$ gives a link to more details. Now go to the chat UI and type "dream blogpost" again. After it shows "dreaming enabled", type in "is there any book called world war?". It will result in the following. This is a simple way to add any service to SUSI as a skill.
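Two mechanics distinguish this skill from the date example: the path mixes object keys with an array index, and the "*" wildcard is spliced into the url as $1$. A hypothetical JavaScript sketch of both (simplified, not the real SUSI parser):

```javascript
// Hypothetical sketch: substitute the "*" capture into the url template,
// and walk a path like "$.items[0].volumeInfo" (keys plus array indices).
function fillWildcard(urlTemplate, captured) {
  return urlTemplate.replace("$1$", encodeURIComponent(captured));
}

function walkPath(json, path) {
  // Turn "$.items[0].volumeInfo" into ["items", "0", "volumeInfo"].
  const steps = path.replace(/^\$\.?/, "").split(/[.\[\]]+/).filter(Boolean);
  return steps.reduce((node, step) => (node == null ? undefined : node[step]), json);
}

const template = "https://www.googleapis.com/books/v1/volumes?q=$1$&key=KEY";
console.log(fillWildcard(template, "world war"));
// ...volumes?q=world%20war&key=KEY

const response = { items: [{ volumeInfo: { title: "World War Z" } }] };
console.log(walkPath(response, "$.items[0].volumeInfo").title); // "World War Z"
```

URL-encoding the capture matters because multi-word titles like "world war" contain spaces that must not appear raw in the query string.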


Adding Send Button in SUSI.AI webchat

Our SUSI.AI web chat app is improving day by day. At one point it looked like this: it replied to queries and had all the basic functionality, but something was missing. When viewed on mobile, we realised that it should have a send button. Send buttons make chat apps look cool and give them their complete look.

A method was defined in the MessageComposer component of the React app which takes the target value of the textarea and passes it as props:

```javascript
_onClickButton() {
    let text = this.state.text.trim();
    if (text) {
        Actions.createMessage(text, this.props.threadID);
    }
    this.setState({text: ''});
}
```

This method is called in the onClick action of our send button, which is included in the div rendered by the MessageComposer component. The method is also called on a tap of the ENTER key; the implementation of this can be seen here.

Why wrap the textarea and button in a div and not render them as two independent items? In React you can only render a single component, so wrapping them in a div is our only option. Now that we had the functionality running, it was time for styling. Our team chose http://www.material-ui.com/ and its components for styling, and we picked FloatingActionButton as the send button. To use Material-UI components in our component, several imports were needed.
But to enable these features we needed to change our render-to-DOM code to:

```javascript
import MuiThemeProvider from 'material-ui/styles/MuiThemeProvider';

const App = () => (
  <MuiThemeProvider>
    <ChatApp />
  </MuiThemeProvider>
);

ReactDOM.render(
  <App />,
  document.getElementById('root')
);
```

The imports in our MessageComposer looked like this:

```javascript
import Send from 'material-ui/svg-icons/content/send';
import FloatingActionButton from 'material-ui/FloatingActionButton';
import injectTapEventPlugin from 'react-tap-event-plugin';
injectTapEventPlugin();
```

The injectTapEventPlugin call is very important: in order to have event handlers on our send button, we need to call this method, and the handler for the tap event is known as onTouchTap. The JSX code to be rendered looked like this:

```jsx
<div className="message-composer">
  <textarea
    name="message"
    value={this.state.text}
    onChange={this._onChange.bind(this)}
    onKeyDown={this._onKeyDown.bind(this)}
    ref={(textarea) => { this.nameInput = textarea; }}
    placeholder="Type a message..."
  />
  <FloatingActionButton
    backgroundColor='#607D8B'
    onTouchTap={this._onClickButton.bind(this)}
    style={style}>
    <Send />
  </FloatingActionButton>
</div>
```

The styling for the button was done separately and looked like this:

```javascript
const style = {
    mini: true,
    top: '1px',
    right: '5px',
    position: 'absolute',
};
```

Ultimately, after successfully implementing all of this, our SUSI.AI web chat had a good-looking FloatingActionButton for sending messages. This can be tested here.
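The send handler's contract (trim the input, dispatch only when non-empty, always clear the box) can be checked in isolation. A small JavaScript sketch with a stubbed dispatcher (the stub and factory are illustrative; the real component uses Actions.createMessage and setState):

```javascript
// Hypothetical sketch of the send-button logic with a stubbed dispatcher.
function makeComposer(dispatch) {
  return {
    state: { text: "" },
    onClickButton() {
      const text = this.state.text.trim();
      if (text) {
        dispatch(text);        // stand-in for Actions.createMessage(text, threadID)
      }
      this.state.text = "";    // stand-in for this.setState({text: ''})
    }
  };
}

const sent = [];
const composer = makeComposer(msg => sent.push(msg));

composer.state.text = "  hello susi  ";
composer.onClickButton();       // dispatches the trimmed text

composer.state.text = "   ";    // whitespace only: nothing should be sent
composer.onClickButton();

console.log(sent); // ["hello susi"]
```

Clearing the state unconditionally is deliberate: even a whitespace-only draft should vanish from the textarea after a send attempt.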


Map Support for SUSI Webchat

The SUSI chat client now supports map tiles for queries related to location. SUSI responds with an interactive internal map tile with the location pointed to by a marker. It also provides a link to OpenStreetMap, where you can get a full view of the location using the zooming options provided, and it gives the population count for that location. Let's visit SUSI WebChat and try it out.

Query: Where is london

Response:
City of London is a place with a population of 7556900. Here is a map: https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228
Link to Openstreetmap: City of London (hyperlinked with https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228)
<Map Tile>

Implementation

How do we know that a map tile is to be rendered? The actions in the API response tell the client what to render. The client loops through the actions array and renders the response for each action accordingly:

```json
"actions": [
  {
    "type": "answer",
    "expression": "City of London is a place with a population of 7556900. Here is a map: https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228"
  },
  {
    "type": "anchor",
    "link": "https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228",
    "text": "Link to Openstreetmap: City of London"
  },
  {
    "type": "map",
    "latitude": "51.51279067225417",
    "longitude": "-0.09184009399817228",
    "zoom": "13"
  }
]
```

Note: the API response has been trimmed to show only the relevant content. The first action element is of type answer, so the client renders the text response, 'City of London is a place with a population of 7556900.
Here is a map: https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228'.

The second action element is of type anchor, with the display text and the hyperlink specified by the text and link attributes, so the client renders the text 'Link to Openstreetmap: City of London' hyperlinked to "https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228". Finally, the third action element is of type map. Latitude, longitude and zoom level information are specified using the latitude, longitude and zoom attributes, and the client renders a map from them.

I used the 'react-leaflet' module to render the interactive map tiles. To integrate it into our project and set the required style for the map tiles, we need to load Leaflet's CSS style sheet, and we also need to set a height and width for the map component:

```html
<link rel="stylesheet" href="http://cdn.leafletjs.com/leaflet/v0.7.7/leaflet.css" />
```

```css
.leaflet-container {
  height: 150px;
  width: 80%;
  margin: 0 auto;
}
```

```jsx
case 'map': {
  let lat = parseFloat(data.answers[0].actions[index].latitude);
  let lng = parseFloat(data.answers[0].actions[index].longitude);
  let zoom = parseFloat(data.answers[0].actions[index].zoom);
  let mymap = drawMap(lat, lng, zoom);
  listItems.push(
    <li className='message-list-item' key={action+index}>
      <section className={messageContainerClasses}>
        {mymap}
        <p className='message-time'>
          {message.date.toLocaleTimeString()}
        </p>
      </section>
    </li>
  );
  break;
}
```

```jsx
import { divIcon } from 'leaflet';
import { Map, Marker, Popup, TileLayer } from 'react-leaflet';

// Draw a map
function drawMap(lat, lng, zoom) {
  let position = [lat, lng];
  const icon = divIcon({
    className: 'map-marker-icon',
    iconSize: [35, 35]
  });
  const map = (
    <Map center={position} zoom={zoom}>
      <TileLayer
        attribution=''
        url='http://{s}.tile.osm.org/{z}/{x}/{y}.png'
      />
      <ExtendedMarker position={position} icon={icon}>
        <Popup>
          <span><strong>Hello!</strong> <br/> I am here.</span>
        </Popup>
      </ExtendedMarker>
    </Map>
  );
  return map;
}
```

Here, I used a custom marker icon because the default icon provided by Leaflet had an issue and was not being rendered. I used divIcon from Leaflet to create a custom map marker icon. When the…
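Notice that the anchor link SUSI emits follows OpenStreetMap's `#map=zoom/lat/lng` fragment convention, so it can be derived from the same map action the tile uses. A small JavaScript sketch (the helper name is made up):

```javascript
// Hypothetical sketch: derive the OpenStreetMap link from a "map" action.
// OSM encodes the viewport in the URL fragment as #map=zoom/lat/lng.
function osmLink(action) {
  const lat = parseFloat(action.latitude);
  const lng = parseFloat(action.longitude);
  const zoom = parseFloat(action.zoom);
  return `https://www.openstreetmap.org/#map=${zoom}/${lat}/${lng}`;
}

const mapAction = {
  type: "map",
  latitude: "51.51279067225417",
  longitude: "-0.09184009399817228",
  zoom: "13"
};
console.log(osmLink(mapAction));
// https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228
```

Parsing the attributes with parseFloat mirrors the 'map' case in the render loop above, where the same string fields are converted before being handed to drawMap.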


Hyperlinking Support for SUSI Webchat

SUSI responses can contain links or email ids. Whenever we want to access those links or send a mail to those email ids, it is very inconvenient for the user to manually copy the link and check out its contents, which is bad UX. I used a module called 'react-linkify' to address this issue. React-linkify is a React component that parses links (urls, emails, etc.) in text into clickable links.

Usage:

```jsx
<Linkify>{text to linkify}</Linkify>
```

Any link that appears inside the Linkify component is hyperlinked and made clickable. It uses regular expressions and pattern matching to detect URLs and mail ids: clicking a URL opens the link in a new window, and clicking a mail id opens a "mailto:" link.

Code:

```javascript
export const parseAndReplace = (text) => {
  return <Linkify properties={{target: "_blank"}}>{text}</Linkify>;
};
```

Let's visit SUSI WebChat and try it out.

Query: search internet

Response:
Internet
The global system of interconnected computer networks that use the Internet protocol suite to...
https://duckduckgo.com/Internet

The link has been parsed from the response text and successfully hyperlinked. Clicking the link opens the respective URL in a new window.

Resources
Linkify Library
Examples Using React-Linkify
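The pattern matching react-linkify performs can be approximated with a single regex pass. A deliberately simplified JavaScript sketch (the real library's URL grammar is far more thorough, covering bare domains, emails and trailing punctuation):

```javascript
// Simplified sketch of linkification: wrap http(s) URLs in anchor tags
// that open in a new window, leaving the surrounding text untouched.
function linkify(text) {
  return text.replace(
    /(https?:\/\/[^\s]+)/g,
    url => `<a href="${url}" target="_blank">${url}</a>`
  );
}

console.log(linkify("Read more at https://duckduckgo.com/Internet today"));
// Read more at <a href="https://duckduckgo.com/Internet" target="_blank">https://duckduckgo.com/Internet</a> today
```

target="_blank" here plays the same role as the properties={{target:"_blank"}} prop passed to the Linkify component above.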
