Implementing Version Control System for SUSI Skill CMS

SUSI Skill CMS now has a version control system where users can browse through all the previous revisions of a skill and roll back to a selected version. Users can modify existing skills and push changes, so a skill may have been edited many times by the same or different users and can therefore have many revisions. The version control functionality helps users to:

  • Browse through all the revisions of a selected skill
  • View the content of a selected revision
  • Compare any two selected revisions highlighting the changes
  • Edit and roll back to a selected revision

Let us visit SUSI Skill CMS and try it out.

  1. Select a skill
  2. Click on the versions button
  3. A table populated with previous revisions is displayed
  4. Clicking on a single revision opens the content of that version
  5. Selecting 2 versions and clicking on compare selected versions loads the content of the 2 selected revisions and shows the differences between the two
  6. Clicking on Undo loads the selected revision alongside the latest version of that skill, highlighting the differences, and opens an editor preloaded with the code of the selected revision so that changes can be made and saved to roll back

How was this implemented?

Firstly, to get the previous revisions of a selected skill, we need the skill's metadata, including model, group, language and skill name, which is used to make an ajax call to the server using the endpoint:

http://api.susi.ai/cms/getSkillHistory.json?model=MODEL&group=GROUP&language=LANGUAGE&skill=SKILL_NAME

We create a new component, SkillVersion, and pass the skill metadata in the pathname while accessing that component. The path where the SkillVersion component is loaded is /:category/:skill/versions/:lang. We parse this data from the path and set our state with the skill metadata. In componentDidMount we use this data to make the ajax call to the server to get all previous version data and update our state. A sample response from the getSkillHistory endpoint looks like:

{
  "session": {
    "identity": {
      "type": "",
      "name": "",
      "anonymous": TRUE/FALSE
    }
  },
  "commits": [
    {
      "commitRev": "",
      "author_mail": "AUTHOR_MAIL_ID",
      "author": "AUTHOR_NAME",
      "commitID": "COMMIT_ID",
      "commit_message": "COMMIT_MESSAGE",
      "commitName": "COMMIT_NAME",
      "commitDate": "COMMIT_DATE"
    }
  ],
  "accepted": TRUE/FALSE
}
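
A minimal sketch of that componentDidMount call, assuming the jsonp-style ajax used later in this post (the model value is not part of the route, so it is assumed here to be 'general'):

componentDidMount() {
  let meta = this.state.skillMeta;
  let url = 'http://api.susi.ai/cms/getSkillHistory.json' +
            '?model=general' +            // model assumed; not in the route
            '&group=' + meta.groupValue +
            '&language=' + meta.languageValue +
            '&skill=' + meta.skillName;
  let self = this;
  $.ajax({
    url: url,
    dataType: 'jsonp',
    crossDomain: true,
    success: function (data) {
      // keep the commit list in state to populate the versions table
      self.setState({ commits: data.commits });
    }
  });
}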

We now populate the table with the obtained revision history, using the Material UI Table for tabulating the data. The first 2 columns of the table have radio buttons to select any 2 revisions: the left-side radio buttons select the older version and the right-side radio buttons select the more recent version. We keep track of the selected versions through the onCheck function of the radio buttons, updating state accordingly.

if(side === 'right'){
  // the right selection must stay on one side of the left selection
  // (index < currLeft), otherwise the check is ignored
  if(!(index >= currLeft)){
    rightChecks.fill(false);
    rightChecks[index] = true;
    currRight = index;
  }
}
else if(side === 'left'){
  // the left selection must stay on the other side of the right selection
  // (index > currRight), otherwise the check is ignored
  if(!(index <= currRight)){
    leftChecks.fill(false);
    leftChecks[index] = true;
    currLeft = index;
  }
}
this.setState({
  currLeftChecked: currLeft,
  currRightChecked: currRight,
  leftChecks: leftChecks,
  rightChecks: rightChecks,
});

Once 2 versions are selected and the compare selected versions button is clicked, we retrieve the currently selected versions through the getCheckedCommits function and are redirected to /:category/:skill/compare/:lang/:oldid/:recentid, passing the commitIDs of the 2 selected revisions in the URL.

{(this.state.commitsChecked.length === 2) &&
<Link to={{
  pathname: '/'+this.state.skillMeta.groupValue+
            '/'+this.state.skillMeta.skillName+
            '/compare/'+this.state.skillMeta.languageValue+
            '/'+checkedCommits[0].commitID+
            '/'+checkedCommits[1].commitID,
}}>
  <RaisedButton
    label='Compare Selected Versions'
    backgroundColor='#4285f4'
    labelColor='#fff'
    style={compareBtnStyle}
  />
</Link>
}

The SkillHistory component is now loaded and the commitIDs of the 2 selected revisions are parsed from the URL pathname. Once we have the commitIDs, we make ajax calls to the server to get the code for each commit. The skill metadata is also parsed from the URL path, which is required to make the server call to getFileAtCommitID.

http://api.susi.ai/cms/getFileAtCommitID.json?model=MODEL&group=GROUP&language=LANGUAGE&skill=SKILL_NAME&commitID=COMMIT_ID

We make the ajax calls in componentDidMount and update the state with the received data. A sample response from getFileAtCommitID looks like:

{
  "accepted": TRUE/FALSE,
  "file": "CONTENT",
  "session": {
    "identity": {
      "type": "",
      "name": "",
      "anonymous": TRUE/FALSE
    }
  }
}

We populate the code of each revision in an editor. We used react-ace as our editor component, using the value prop to populate the content and displaying it in read-only mode.

<AceEditor
  mode='java'
  readOnly={true}
  theme={this.state.editorTheme}
  width='100%'
  fontSize={this.state.fontSizeCode}
  height='400px'
  value={this.state.commitData[0].code}
  showPrintMargin={false}
  name='skill_code_editor'
  editorProps={{$blockScrolling: true}}
/>

We then show the differences between the contents of the 2 selected versions. To compare and highlight the differences, we used the react-diff package, which takes the content of the two commits as the inputA and inputB props; with type set to chars, the comparison is character by character. Here input A is compared with input B. The component returns the highlighted element, which we display in a scrollable div to prevent overflow.

{/* latest code should be inputB */}
<Diff
  inputA={this.state.commitData[0].code}
  inputB={this.state.commitData[1].code}
  type='chars'
/>

Clicking on Undo then redirects to /:category/:skill/edit/:lang/:latestid/:revertid, where latestid is the commitID of the latest revision and revertid is the commitID of the older of the 2 commits selected initially. This loads the SkillRollBack component, where we again parse the skill metadata and the commitIDs from the URL pathname and call getFileAtCommitID to get the content of the latest and the reverting commits. We populate the content in editors using react-ace and show the differences using react-diff. Finally, the modify skill component is loaded, with an editor preloaded with the content of the reverting commit and an interface similar to modify skill, where the user can edit the content of the reverting commit and push the changes.

let baseUrl = this.getSkillAtCommitIDUrl();
let self = this;
let url1 = baseUrl + self.state.latestCommit;
$.ajax({
  url: url1,
  jsonpCallback: 'pc',
  dataType: 'jsonp',
  jsonp: 'callback',
  crossDomain: true,
  success: function (data1) {
    let url2 = baseUrl + self.state.revertingCommit;
    $.ajax({
      url: url2,
      jsonpCallback: 'pd',
      dataType: 'jsonp',
      jsonp: 'callback',
      crossDomain: true,
      success: function (data2) {
        self.updateData([{
          code: data1.file,
          commitID: self.state.latestCommit,
        }, {
          code: data2.file,
          commitID: self.state.revertingCommit,
        }]);
      }
    });
  }
});

Here, we make nested ajax calls to keep the two requests synchronized and update state only after we have received data from both calls. If we instead fired the ajax calls in a loop, the second call would not wait for the first one to finish, and the combined update would most likely fail.
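
As an aside, the same synchronization can also be expressed with jQuery's promise interface, firing both requests in parallel and waiting for both to resolve; a sketch, with url1 and url2 computed as above:

// Alternative sketch: run both requests in parallel with $.when
$.when(
  $.ajax({ url: url1, dataType: 'jsonp', jsonp: 'callback', crossDomain: true }),
  $.ajax({ url: url2, dataType: 'jsonp', jsonp: 'callback', crossDomain: true })
).done(function (res1, res2) {
  // for multiple deferreds, each result is [data, statusText, jqXHR]
  self.updateData([
    { code: res1[0].file, commitID: self.state.latestCommit },
    { code: res2[0].file, commitID: self.state.revertingCommit }
  ]);
});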

This is how the skill version system was implemented in SUSI Skill CMS. You can find the complete code at SUSI Skill CMS Repository. Feel free to contribute.


Fetching Images for RSS Responses in SUSI Web Chat

Initially, SUSI Web Chat rendered RSS action type responses like this:

The response from the server initially only contained:

  • Title
  • Description
  • Link

We needed to improve the web search & RSS results display and also add images to the results.

The web search & RSS results are now rendered as:

How was this implemented?

SUSI AI uses Yacy to fetch RSS feeds. Firstly, the server's console process that returns the RSS feeds from Yacy needs to be configured to return images too.

"yacy":{
  "example":"http://127.0.0.1:4000/susi/console.json?q=%22SELECT%20title,%20link%20FROM%20yacy%20WHERE%20query=%27java%27;%22",
  "url":"http://yacy.searchlab.eu/solr/select?wt=yjson&q=",
  "test":"java",
  "parser":"json",
  "path":"$.channels[0].items",
  "license":""
}

In a console process, we provide the URL to fetch data from, the query parameter to be passed to the URL, and the path at which to look for the answer in the API response.

  • url = <url>   – the URL to the remote JSON service which will be used to retrieve information. It must contain a $query$ string.
  • test = <parameter> – the parameter that will replace the $query$ string inside the given URL. It is required to test the service.

Here the URL used is :

http://yacy.searchlab.eu/solr/select?wt=yjson&q=QUERY

To include images in RSS action responses, we need to parse the images from the Yacy response as well. For this, we add `image` to the selection rule in the console process query:

"process":[
  {
    "type":"console",
    "expression":"SELECT title,description,link FROM yacy WHERE query='$1$';"
  }
]

Now the response from the server for the RSS action type will also include `image` along with title, description, and link. An example response for the query `Google`:

{
  "title": "Terms of Service | Google Analytics \u2013 Google",
  "description": "Read Google Analytics terms of service.",
  "link": "http://www.google.com/analytics/terms/",
  "image": "https://www.google.com/images/branding/googlelogo/1x/googlelogo_color_116x41dp.png"
}

However, the results at times do not contain images because none are stored in the index. This may happen if the result comes from p2p transmission within Yacy, where no images are transmitted. So in cases where images are not returned by the server, we use the link preview service to preview the link and fetch the image.

The endpoint for previewing the link is:

BASE_URL+'/susi/linkPreview.json?url=URL'

On the client side, we first search the response for data objects with images in the API actions. Then, amongst the remaining data objects in answers[0].data, we preview the link to fetch an image, keeping a check on the count. This needs to be performed when processing the history cognitions too. To preview the remaining links, we cannot make ajax calls directly in a loop. To handle this, nested ajax calls are made using the function previewURLForImage(), where we loop through the remaining links: on success we decrement the count and call previewURLForImage() on the next link, and on error we try previewURLForImage() on the next link without decrementing the count.

success: function (rssResponse) {
  if(rssResponse.accepted){
    respData.image = rssResponse.image;
    respData.descriptionShort = rssResponse.descriptionShort;
    receivedMessage.rssResults.push(respData);
  }
  if(receivedMessage.rssResults.length === count ||
     j === remainingDataIndices.length - 1){
    let message = ChatMessageUtils.getSUSIMessageData(
      receivedMessage, currentThreadID);
    ChatAppDispatcher.dispatch({
      type: ActionTypes.CREATE_SUSI_MESSAGE,
      message
    });
  }
  else{
    j += 1;
    previewURLForImage(receivedMessage, currentThreadID,
      BASE_URL, data, count, remainingDataIndices, j);
  }
},
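
Pieced together, the recursive helper might look roughly like this (the signature is taken from the snippet above; the success handler is the one just shown, and the error path simply moves on to the next link):

function previewURLForImage(receivedMessage, currentThreadID,
  BASE_URL, data, count, remainingDataIndices, j) {
  let respData = data.answers[0].data[remainingDataIndices[j]];
  $.ajax({
    url: BASE_URL + '/susi/linkPreview.json?url=' +
         encodeURIComponent(respData.link),
    dataType: 'jsonp',
    crossDomain: true,
    success: function (rssResponse) {
      // ... success handler as shown above ...
    },
    error: function () {
      // try the next link without decrementing the count
      if (j < remainingDataIndices.length - 1) {
        previewURLForImage(receivedMessage, currentThreadID,
          BASE_URL, data, count, remainingDataIndices, j + 1);
      }
    }
  });
}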

We store the results as rssResults, which are used in MessageListItems to fetch the data and render it. The nested calling of previewURLForImage() ends when we have the required count of results or when we have finished trying all the links for previewing images. We then dispatch the message to the message store. We now improve the UI: I used Material UI Cards to display the results and react-slick for the carousel-like display.

<Card className={cardClass} key={i} onClick={() => {
  window.open(tile.link,'_blank')
}}>
  {tile.image &&
    (
      <CardMedia>
        <img src={tile.image} alt="" className='card-img'/>
      </CardMedia>
    )
  }
  <CardTitle title={tile.title} titleStyle={titleStyle}/>
  <CardText>
    <div className='card-text'>{cardText}</div>
    <div className='card-url'>{urlDomain(tile.link)}</div>
  </CardText>
</Card>

We used the full width of the message section to display the results by not wrapping the result in the message-list-item class. The entire card is hyperlinked to the result's link. Along with title and description, the URL info is also shown at the bottom right. To get the domain name from the link, the urlDomain() function is used, which makes use of an HTML anchor tag to extract the domain info.

function urlDomain(data) {
  var a = document.createElement('a');
  a.href = data;
  return a.hostname;
}
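
For example:

urlDomain('http://www.google.com/analytics/terms/'); // 'www.google.com'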

To prevent stretching of images, we use `object-fit: contain;` to make the images fit the image container and align them to the middle.

We finally have our RSS results with images and an improved UI. The complete code can be found at the SUSI WebChat Repo. Feel free to contribute.


Implementing Text To Speech Settings in SUSI WebChat

SUSI Web Chat has a Text to Speech (TTS) feature, where it gives voice replies to user queries. The Text to Speech functionality was added using the Speech Synthesis feature of the Web Speech API. The Text to Speech Settings were added to customise the speech output by controlling features like:

  1. Language
  2. Rate
  3. Pitch

Let us visit SUSI Web Chat and try it out.

First, ensure that the settings have SpeechOutput or SpeechOutputAlways enabled. Then click on the Mic button and ask a query. SUSI responds to your query with a voice reply.

To control the Speech Output, visit Text To Speech Settings in the /settings route.

First, let us look at the language settings. The drop down list for Language is populated when the app is initialised: the speechSynthesis.onvoiceschanged function is triggered when the app first loads, and there we call speechSynthesis.getVoices() to get the list of voices for all the languages currently supported by that particular browser. We store this in the MessageStore using the ActionTypes.INIT_TTS_VOICES action type.

window.speechSynthesis.onvoiceschanged = function () {
  if (!MessageStore.getTTSInitStatus()) {
    var speechSynthesisVoices = speechSynthesis.getVoices();
    Actions.getTTSLangText(speechSynthesisVoices);
    Actions.initialiseTTSVoices(speechSynthesisVoices);
  }
};

We also get the translated text for every language present in the voice list for the text `This is an example of speech synthesis` using the Google Translate API. This is called initially for all the languages and is stored as the translatedText attribute in the voice list for each element. It is used later when the user wants to listen to an example of speech output for the selected language, rate and pitch.

https://translate.googleapis.com/translate_a/single?client=gtx&sl=en-US&tl=TARGET_LANGUAGE_CODE&dt=t&q=TEXT_TO_BE_TRANSLATED
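
A sketch of how one such translation request could be made (the helper name is hypothetical; the translate endpoint returns a nested array whose first element holds the translated string):

// Hypothetical helper: fetch the translated example sentence for a voice
function fetchTranslatedText(langCode, onSuccess) {
  let url = 'https://translate.googleapis.com/translate_a/single?client=gtx' +
            '&sl=en-US&tl=' + langCode + '&dt=t&q=' +
            encodeURIComponent('This is an example of speech synthesis');
  $.get(url, function (data) {
    onSuccess(data[0][0][0]); // the translated text
  });
}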

When the user visits the Text To Speech Settings, the voice list stored in the MessageStore is retrieved and the drop down menu for Language is populated. The default language is fetched from UserPreferencesStore and highlighted accordingly in the dropdown. The list is parsed and populated as a drop down using the populateVoiceList() function.

let voiceMenu = voices.map((voice,index) => {
  if(voice.translatedText === null){
    voice.translatedText = this.speechSynthesisExample;
  }
  langCodes.push(voice.lang);
  return(
    <MenuItem value={voice.lang}
              key={index}
              primaryText={voice.name+' ('+voice.lang+')'} />
  );
});

The language selected using this dropdown is only used as the language for the speech output when the server doesn’t specify the language in its response and the browser language is undefined. We then create sliders using Material UI for adjusting speech rate and pitch.

<h4 style={{'marginBottom':'0px'}}><Translate text="Speech Rate"/></h4>
<Slider
  min={0.5}
  max={2}
  value={this.state.rate}
  onChange={this.handleRate} />

The range for the sliders is:

  • Rate : 0.5 – 2
  • Pitch : 0 – 2

The default value for both rate and pitch is 1. We create a controlled slider, saving the values in state and using the onChange function to record changes in values. The Reset buttons can be used to reset the rate and pitch values to their defaults. Once the language, rate and pitch values have been selected, we can click on `Play a short demonstration of speech synthesis` to listen to a voice reply with the chosen settings.

{ this.state.playExample &&
  (
    <VoicePlayer
      play={this.state.play}
      text={voiceOutput.voiceText}
      rate={this.state.rate}
      pitch={this.state.pitch}
      lang={this.state.ttsLanguage}
      onStart={this.onStart}
      onEnd={this.onEnd}
    />
  )
}

We use the VoicePlayer by passing the required props to get the speech output. onStart and onEnd functions are triggered at the beginning and ending of the speech synthesis and are used to control the state from the parent component. Chosen language, rate, pitch and translated text are passed as props to VoicePlayer which creates a new SpeechSynthesisUtterance() with the passed props and plays the speech output.
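
Internally, this boils down to the Speech Synthesis API; a minimal sketch of what VoicePlayer might do with the passed props (the component's internals are assumed here, not taken from its source):

// Build and play an utterance from the props
const utterance = new SpeechSynthesisUtterance(this.props.text);
utterance.lang = this.props.lang;
utterance.rate = this.props.rate;
utterance.pitch = this.props.pitch;
utterance.onstart = this.props.onStart;
utterance.onend = this.props.onEnd;
window.speechSynthesis.speak(utterance);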

On saving these settings and then using the Mic button to get voice replies we see that the voice output is controlled according to the selected settings.

Finally, we have to store the selected settings on the server and ensure that they are pulled when the app is initialised. The format in which these settings are stored on the server is:

Speech Rate

- Used to control rate of speech output.
- SETTING_NAME :  `speechRate`
- SETTING_VALUE : `0.5 - 2`
- DEFAULT_VALUE : `1`
 
Speech Pitch

- Used to control pitch of speech output.
- SETTING_NAME :  `speechPitch`
- SETTING_VALUE : `0 - 2`
- DEFAULT_VALUE : `1`
 
TTS Language

- Used to set the language for Text-To-Speech, used when the response from the server doesn't specify a language and the browser language is also undefined.
- SETTING_NAME :  `ttsLanguage`
- SETTING_VALUE : `Language Code (string)`
- DEFAULT_VALUE : `en-US`

This is how the Text To Speech Settings were implemented in SUSI Web Chat. The complete code can be found at SUSI Web Chat Repository.

PS: To test whether your browser supports Text To Speech, open your browser console and try the following:

  • var msg = new SpeechSynthesisUtterance('Hello World');
  • window.speechSynthesis.speak(msg);

If you get a speech output, then Speech Synthesis from the Web Speech API is supported by your browser and the Text To Speech features of SUSI Web Chat will work. The Web Speech API is supported in all recent Chrome browsers, as mentioned in the Web Speech API Mozilla docs. However, there are a few bugs with some Chromium versions; please check out more on how to fix them locally in this link.


Generating Map Action Responses in SUSI AI

SUSI AI responds to location related user queries with a Map action response. The different types of responses are referred to as actions which tell the client how to render the answer. One such action type is the Map action type. The map action contains latitude, longitude and zoom values telling the client to correspondingly render a map with the given location.

Let us visit SUSI Web Chat and try it out.

Query: Where is London

Response: (API Response)

The API response actions contain text describing the specified location, an anchor with the text `Here is a map` linked to openstreetmap, and a map with the location coordinates.

Let us look at how this is implemented on the server.

For location related queries, the key `where` is used as an identifier. Once the query is matched with this key, a regular expression `where is (?:(?:a )*)(.*)` is used to parse the location name.

"keys"   : ["where"],
"phrases": [
  {"type":"regex", "expression":"where is (?:(?:a )*)(.*)"},
]

The parsed location name is stored in $1$ and is used to make API calls to fetch information about the place and its location. For example, for the query `where is london`, $1$ holds `london`. A console process is used to fetch the required data from an API.

"process": [
  {
    "type":"console",
    "expression":"SELECT location[0] AS lon, location[1] AS lat FROM locations WHERE query='$1$';"},
  {
    "type":"console",
    "expression":"SELECT object AS locationInfo FROM location-info WHERE query='$1$';"}
],

Here, we need to make two API calls:

  • For getting information about the place
  • For getting the location coordinates

First let us look at how a console process works. In a console process, we provide the URL to fetch data from, the query parameter to be passed to the URL, and the path at which to look for the answer in the API response.

  • url = <url> – the URL to the remote JSON service which will be used to retrieve information. It must contain a $query$ string.
  • test = <parameter> – the parameter that will replace the $query$ string inside the given URL. It is required to test the service.

For getting the information about the place, we use the Wikipedia API. We name this console process location-info and add the required attributes to run it and fetch data from the API.

"location-info": {
  "example":"http://127.0.0.1:4000/susi/console.json?q=%22SELECT%20*%20FROM%20location-info%20WHERE%20query=%27london%27;%22",
  "url":"https://en.wikipedia.org/w/api.php?action=opensearch&limit=1&format=json&search=",
  "test":"london",
  "parser":"json",
  "path":"$.[2]",
  "license":"Copyright by Wikipedia, https://wikimediafoundation.org/wiki/Terms_of_Use/en"
}

The attributes used are :

  • url : The MediaWiki API endpoint
  • test : The location name which will be appended to the URL before making the API call
  • parser : Specifies the response type for parsing the answer
  • path : Points to the location in the response where the required answer is present

The API endpoint called is of the following format:

https://en.wikipedia.org/w/api.php?action=opensearch&limit=1&format=json&search=LOCATION_NAME

For the query `where is london`, the API call made returns:

[
  "london",
  ["London"],
  ["London  is the capital and most populous city of England and the United Kingdom."],
  ["https://en.wikipedia.org/wiki/London"]
]

The path $.[2] points to the third element of the array, i.e. "London is the capital and most populous city of England and the United Kingdom.", which is stored in $locationInfo$.

Similarly, to get the location coordinates, another API call is made to the loklak API.

"locations": {
  "example":"http://127.0.0.1:4000/susi/console.json?q=%22SELECT%20*%20FROM%20locations%20WHERE%20query=%27rome%27;%22",
  "url":"http://api.loklak.org/api/console.json?q=SELECT%20*%20FROM%20locations%20WHERE%20location='$query$';",
  "test":"rome",
  "parser":"json",
  "path":"$.data",
  "license":"Copyright by GeoNames"
},

The location coordinates are found in $.data.location in the API response. The location coordinates are stored as latitude and longitude in $lat$ and $lon$ respectively.

Finally, we have the description of the location and its coordinates, so we create the actions to be put in the server response.

The first action is of type answer, and the text to be displayed is given by $locationInfo$, where the data from the Wikipedia API response is stored.

{
  "type":"answer",
  "select":"random",
  "phrases":["$locationInfo$"]
},

The second action is of type anchor. The text to be displayed is `Here is a map` and it must be hyperlinked to openstreetmap with the obtained $lat$ and $lon$.

{
  "type":"anchor",
  "link":"https://www.openstreetmap.org/#map=13/$lat$/$lon$",
  "text":"Here is a map"
},

The last action is of type map, populated with latitude and longitude using $lat$ and $lon$ respectively, and the zoom value is specified to be 13.

{
  "type":"map",
  "latitude":"$lat$",
  "longitude":"$lon$",
  "zoom":"13"
}

The final output from the server will now contain the three actions with the required data obtained from the respective API calls. For the sample query `where is london`, the actions will look like:

"actions": [
  {
    "type": "answer",
    "language": "en",
    "expression": "London  is the capital and most populous city of England and the United Kingdom."
  },
  {
    "type": "anchor",
    "link":   "https://www.openstreetmap.org/#map=13/51.51279067225417/-0.09184009399817228",
    "text": "Here is a map",
    "language": "en"
  },
  {
    "type": "map",
    "latitude": "51.51279067225417",
    "longitude": "-0.09184009399817228",
    "zoom": "13",
    "language": "en"
  }
],

This is how the map action responses are generated for location related queries. The complete code can be found at the SUSI AI Server Repository.


Implementing the Feedback Functionality in SUSI Web Chat

SUSI AI now has a feedback feature, where it collects the user's feedback for every response in order to learn and improve itself. The first step towards guided learning is building a dataset through a feedback mechanism, which can then be used to learn from and improve the skill selection mechanism responsible for answering user queries.

The flow behind the feedback mechanism is:

  1. For every SUSI response show thumbs up and thumbs down buttons.
  2. For the older messages, the feedback thumbs are disabled and only display the feedback already given. The user cannot change the feedback already given.
  3. For the latest SUSI response the user can change his feedback by clicking on thumbs up if he likes the response, else on thumbs down, until he gives a new query.
  4. When the new query is given by the user, the feedback recorded for the previous response is sent to the server.

Let’s visit SUSI Web Chat and try this out.

We can find the feedback thumbs for the response messages. The user cannot change the feedback he has already given for previous messages. For the latest message the user can toggle feedback until he sends the next query.

How is this implemented?

We first design the UI for the feedback thumbs using Material UI SVG Icons. We need a separate component for the feedback UI because we have to store the feedback state as positive or negative: the user is allowed to change his feedback for the latest response until a new query is sent. Whenever the user clicks on a thumb, we update the state of the component as positive or negative accordingly.

import ThumbUp from 'material-ui/svg-icons/action/thumb-up';
import ThumbDown from 'material-ui/svg-icons/action/thumb-down';

feedbackButtons = (
  <span className='feedback' style={feedbackStyle}>
    <ThumbUp
      onClick={this.rateSkill.bind(this,'positive')}
      style={feedbackIndicator}
      color={positiveFeedbackColor}/>
    <ThumbDown
      onClick={this.rateSkill.bind(this,'negative')}
      style={feedbackIndicator}
      color={negativeFeedbackColor}/>
  </span>
);

The next step is to store the feedback in the Message Store using the saveFeedback Action. This will later help us send the feedback to the server by querying it from the Message Store. The Action calls the Dispatcher with the FEEDBACK_RECEIVED ActionType, which is collected in the MessageStore, where the feedback is updated.

let feedback = this.state.skill;

if(!(Object.keys(feedback).length === 0 &&
     feedback.constructor === Object)){
  feedback.rating = rating;
  this.props.message.feedback.rating = rating;
  Actions.saveFeedback(feedback);
}

case ActionTypes.FEEDBACK_RECEIVED: {
  _feedback = action.feedback;
  MessageStore.emitChange();
  break;
}

The final step is to send the feedback to the server. The server endpoint to store feedback for a skill requires other parameters apart from the feedback itself to identify the skill. The server response contains an attribute `skills`, which gives the path of the skill used to answer that query. From that path we need to parse:

  • Model : Highest level of abstraction for categorising skills
  • Group : Different groups under a model
  • Language : Language of the skill
  • Skill : Name of the skill

For example, for the query `what is the capital of germany`, the skills object is:

"skills": ["/susi_skill_data/models/general/smalltalk/en/English-Standalone-aiml2susi.txt"]

So, for this skill,

    • Model : general
    • Group : smalltalk
    • Language : en
    • Skill : English-Standalone-aiml2susi
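
A sketch of how this path can be split into the four parts (assuming the `/susi_skill_data/models/...` layout shown above):

// Parse model, group, language and skill from the skill path
function parseSkillPath(skillPath) {
  // e.g. '/susi_skill_data/models/general/smalltalk/en/English-Standalone-aiml2susi.txt'
  let parts = skillPath.split('/');
  let modelIndex = parts.indexOf('models');
  return {
    model: parts[modelIndex + 1],
    group: parts[modelIndex + 2],
    language: parts[modelIndex + 3],
    skill: parts[modelIndex + 4].split('.')[0] // strip the .txt extension
  };
}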

The server endpoint to store feedback for a particular skill is:

BASE_URL+'/cms/rateSkill.json?model=MODEL&group=GROUP&language=LANGUAGE&skill=SKILL&rating=RATING'

where Model, Group, Language and Skill are parsed from the skill attribute of the server response as discussed above, and the Rating is either positive or negative, collected from the user when he clicks on the feedback thumbs.

When a new query is sent, the sendFeedback Action is triggered with the required attributes to make the server call to store the feedback on the server. The client then makes an ajax call to the rateSkill endpoint to send the feedback to the server.

let url = BASE_URL+'/cms/rateSkill.json?'+
          'model='+feedback.model+
          '&group='+feedback.group+
          '&language='+feedback.language+
          '&skill='+feedback.skill+
          '&rating='+feedback.rating;

$.ajax({
  url: url,
  dataType: 'jsonp',
  crossDomain: true,
  timeout: 3000,
  async: false,
  success: function (response) {
    console.log(response);
  },
  error: function(errorThrown){
    console.log(errorThrown);
  }
});

This is how the feedback mechanism works in SUSI Web Chat. The entire code can be found at the SUSI Web Chat Repository.


Adding a Scroll To Bottom button in SUSI WebChat

SUSI Web Chat now has a scroll-to-bottom button, which automatically scrolls the app to the bottom of the scroll area on button click. When the chat history is lengthy and the user has to scroll down manually, it results in a bad UX. So the basic requirements of this scroll-to-bottom button are:

  1. The button must only be displayed when the user has scrolled up the message section
  2. On clicking the scroll-to-bottom button, the scroll area must be automatically scrolled to the bottom.

Let’s visit SUSI Web Chat and try this out.

The button is not visible until there are enough messages to enable scrolling and the user has scrolled up. On clicking the button, the app automatically scrolls to the bottom pointing to the most recent message.

How was this implemented?

We first design our scroll-to-bottom button using the Material UI Floating Action Button and SVG Icons.

import FloatingActionButton from 'material-ui/FloatingActionButton';
import NavigateDown from 'material-ui/svg-icons/navigation/expand-more';

The button needs to be styled to be displayed at a fixed position in the bottom right corner of the message section. It is positioned on top of the MessageSection, above the MessageComposer, and aligned with respect to the edges.

const scrollBottomStyle = {
  button: {
    float: 'right',
    marginRight: '5px',
    marginBottom: '10px',
    boxShadow: 'none',
  },
  backgroundColor: '#fcfcfc',
  icon: {
    fill: UserPreferencesStore.getTheme()==='light' ? '#90a4ae' : '#7eaaaf'
  }
}

The button must only be displayed when the user has scrolled up. To implement this, we need a state variable showScrollBottom, which must be set to true or false based on the scroll offset.

{this.state.showScrollBottom &&
  <div className='scrollBottom'>
    <FloatingActionButton mini={true}
      style={scrollBottomStyle.button}
      backgroundColor={scrollBottomStyle.backgroundColor}
      iconStyle={scrollBottomStyle.icon}
      onTouchTap={this.forcedScrollToBottom}>
      <NavigateDown />
    </FloatingActionButton>
  </div>
}

Now we have to set our state variable showScrollBottom according to the scroll offset. It must be set to true if the user has scrolled up and false if the scrollbar is already at the bottom. To implement this, we need to listen to scrolling events. We used react-custom-scrollbars for the scroll area wrapping the message section, so we can listen to scrolling events using the onScroll prop. We also tag the scroll area using refs to access it, instead of using findDOMNode as it is being deprecated.

import { Scrollbars } from 'react-custom-scrollbars';

<Scrollbars
  ref={(ref) => { this.scrollarea = ref; }}
  onScroll={this.onScroll}
>
  {messageListItems}
</Scrollbars>

Now, whenever a scroll action is performed, the onScroll() function is triggered. We then have to know whether the scroll bar is at the bottom or not, and we make use of the scroll area's props to get the scroll offsets. The getValues() function returns an object containing different scroll offsets and scroll area dimensions. We are interested in values.top, which tracks the scroll-top's progress from 0 to 1: when the scroll bar is at the topmost point values.top is 0, and when it is at the bottommost point values.top is 1. So whenever values.top is 1, showScrollBottom is false, else true.

onScroll = () => {
  let scrollarea = this.scrollarea;
  if(scrollarea){
    let scrollValues = scrollarea.getValues();
    if(scrollValues.top === 1){
      this.setState({
        showScrollBottom: false,
      });
    }
    else if(!this.state.showScrollBottom){
      this.setState({
        showScrollBottom: true,
      });
    }
  }
}

Finally, we need to scroll the chat app to the bottom on button click. Whenever showScrollBottom is updated, the state changes, so componentDidUpdate is triggered, which calls the _scrollToBottom() function. We must guard this so that the app does not scroll to the bottom merely because showScrollBottom was updated while the user intended to stay scrolled up. We use the function forcedScrollToBottom, triggered on clicking the scroll-to-bottom button, which resets the scrollTop value to the height of the scroll area, thus pointing the scrollbar to the bottom.

forcedScrollToBottom = () => {
  let ul = this.scrollarea;
  if (ul) {
    ul.scrollTop(ul.getScrollHeight());
  }
}
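
For the componentDidUpdate guard mentioned above, a minimal sketch could be (assuming the component's `_scrollToBottom` helper):

componentDidUpdate(prevProps, prevState) {
  // skip the automatic scroll when only the button's visibility toggled,
  // i.e. the user is deliberately scrolling up
  if (prevState.showScrollBottom === this.state.showScrollBottom) {
    this._scrollToBottom();
  }
}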

We don't have to worry about resetting showScrollBottom on a forced scroll to the bottom, as the scrolling will trigger the onScroll function, where the showScrollBottom state is handled accordingly.

This is how the scroll to bottom button has been implemented in SUSI Web Chat. The entire code can be found at SUSI Web Chat Repository.


Implementing the Message Response Status Indicators In SUSI WebChat

SUSI Web Chat now has indicators reflecting the message response status. When a user sends a message, he must be notified that the message has been received and delivered to the server. SUSI Web Chat implements this by tagging messages with tick or waiting clock icons and a loading gif to indicate the delivery and response status of messages, ensuring good UX.

This is implemented as:

  • When the user sends a message, the message is tagged with a `clock` icon indicating that the message has been received and delivered to server and is awaiting response from the server
  • When the user is waiting for a response from the server, we display a loading gif
  • Once the response from the server is received, the loading gif is replaced by the server response bubble and the clock icon tagged to the user message is replaced by a tick icon.

Let's visit SUSI WebChat and try it out.

Query : Hey

When the message is sent by the user, we see that the displayed message is tagged with a clock icon and the left side response bubble has a loading gif, indicating that the message has been delivered to the server and we are awaiting the response.

When the response from the server is delivered, the loading gif disappears and the user message is tagged with a tick icon.

 

How was this implemented?

The first step is to have a boolean flag indicating the message delivery and response status.

let _showLoading = false;

getLoadStatus(){
  return _showLoading;
},

The `_showLoading` boolean flag is set to true when the user has just sent a message and is waiting for the server response. When the user sends a message, the CREATE_MESSAGE action is triggered. The Message Store listens to this action and, along with creating the user message, also sets the showLoading flag to true.

case ActionTypes.CREATE_MESSAGE: {

  let message = action.message;
  _messages[message.id] = message;
  _showLoading = true;
  MessageStore.emitChange();
  
  break;
}

The showLoading flag is used in MessageSection to display the loading gif. We use a saved gif to display the loading symbol, shown at the end, after all the messages in the message store. Since this loading component must be displayed for every user message, we don't save it in the MessageStore as a loading message, as that would lead to repeatedly looping through the messages in the message store to add and delete the loading component.

import loadingGIF from '../../images/loading.gif';

function getLoadingGIF() {

  let messageContainerClasses = 'message-container SUSI';

  const LoadingComponent = (
    <li className='message-list-item'>
      <section className={messageContainerClasses}>
        <img src={loadingGIF}
          style={{ height: '10px', width: 'auto' }}
          alt='please wait..' />
      </section>
    </li>
  );
  return LoadingComponent;
}

We then use this flag in the MessageListItem class to tag the user messages with the clock icons. We used Material UI SVG Icons to display the clock and tick icons, shown beside the time in the messages.

import ClockIcon from 'material-ui/svg-icons/action/schedule';

statusIndicator = (
  <li className='message-time' style={footerStyle}>
    <ClockIcon style={indicatorStyle}
      color={UserPreferencesStore.getTheme()==='light' ? '#90a4ae' : '#7eaaaf'}/>
  </li>
);
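
Once the server response arrives, a tick replaces the clock; the corresponding indicator is analogous (the Done icon path is assumed to follow the same Material UI SVG icon convention):

import DoneIcon from 'material-ui/svg-icons/action/done';

statusIndicator = (
  <li className='message-time' style={footerStyle}>
    <DoneIcon style={indicatorStyle}
      color={UserPreferencesStore.getTheme()==='light' ? '#90a4ae' : '#7eaaaf'}/>
  </li>
);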

When the response from the server is received, the CREATE_SUSI_MESSAGE action is triggered to render the server response. This action is again collected in the MessageStore, where the `_showLoading` boolean flag is reset to false. This also triggers a state change in MessageSection, where we are listening to the showLoading value from the MessageStore, and accordingly in MessageListItem, where showLoading is passed as props. The loading gif component is removed, the server response is displayed, and the clock icon on the user message is replaced with a tick icon.

case ActionTypes.CREATE_SUSI_MESSAGE: {
  
  let message = action.message;
  MessageStore.resetVoiceForThread(message.threadID);
  _messages[message.id] = message;
  _showLoading = false;
  MessageStore.emitChange();
  
  break;
}

This is how the status indicators were implemented for messages. The complete code can be found at the SUSI WebChat Repo.


How SUSI WebChat Implements RSS Action Type

SUSI.AI now has a new action type called RSS. As the name suggests, SUSI is now capable of searching the internet to answer user queries. This web search can be performed either on the client side or the server side. When the web search is to be performed on the client side, it is denoted by the websearch action type; when it is performed by the server itself, it is denoted by the rss action type. The server searches the internet and, using RSS feeds, returns an array of objects containing:

  • Title
  • Description
  • Link
  • Count

Each object is displayed as a result tile and all the results are rendered as swipeable tiles.

Let's visit SUSI WebChat and try it out.

Query : Google
Response: API response

SUSI WebChat uses the same code abstraction to render websearch and rss results, as both are results of a web search; the only difference is where the search is performed, i.e. client side or server side.

How does the client know that it is an rss action type response?

"actions": [
  {
    "type": "answer",
    "expression": "I found this on the web:"
  },
  {
    "type": "rss",
    "title": "title",
    "description": "description",
    "link": "link",
    "count": 3
  }
],

The actions attribute in the JSON API response has information about the action type and the keys to be parsed for title, link and description.

  • The type attribute tells that the action type is rss.
  • The title attribute tells that the title for each result is found under the key title in each object of answers[0].data.
  • Similarly, the keys to be parsed for description and link are description and link respectively.
  • The count attribute tells the client how many results to display.

We then loop through the objects in answers[0].data and from each object we extract the title, description and link.

let rssKeys = Object.assign({}, data.answers[0].actions[index]);

delete rssKeys.type;

let count = -1;

if(rssKeys.hasOwnProperty('count')){
  count = rssKeys.count;
  delete rssKeys.count;
}

let rssTiles = getRSSTiles(rssKeys,data.answers[0].data,count);
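
For the sample actions object shown earlier, after deleting `type` and `count`, `rssKeys` is left with just the keys to parse:

// rssKeys for the sample rss action above
{
  title: 'title',
  description: 'description',
  link: 'link'
}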

We use the count attribute and the length of answers[0].data to fix the number of results to be displayed.

// Fetch RSS data

export function getRSSTiles(rssKeys, rssData, count){

  let parseKeys = Object.keys(rssKeys);
  let rssTiles = [];
  let tilesLimit = rssData.length;

  if(count > -1){
    tilesLimit = Math.min(count, rssData.length);
  }

  for(var i = 0; i < tilesLimit; i++){
    let respData = rssData[i];
    let tileData = {};

    parseKeys.forEach((rssKey, j) => {
      tileData[rssKey] = respData[rssKeys[rssKey]];
    });

    rssTiles.push(tileData);
  }

  return rssTiles;
}

We now have our list of objects with the information parsed from the response. We then pass this list to our renderTiles function, where each object in the rssTiles array returned from getRSSTiles is converted into a Paper tile with the title and description, and the entire tile is hyperlinked to the given link using the Material UI Paper component and a few CSS attributes.

// Draw Tiles for Websearch RSS data

export function drawTiles(tilesData){

  let resultTiles = tilesData.map((tile, i) => {
    return(
      <div key={i}>
        <MuiThemeProvider>
          <Paper zDepth={0} className='tile'>
            <a rel='noopener noreferrer'
              href={tile.link} target='_blank'
              className='tile-anchor'>
              {tile.icon &&
                (<div className='tile-img-container'>
                  <img src={tile.icon}
                    className='tile-img' alt=''/>
                </div>)
              }
              <div className='tile-text'>
                <p className='tile-title'>
                  <strong>
                    {processText(tile.title,'websearch-rss')}
                  </strong>
                </p>
                {processText(tile.description,'websearch-rss')}
              </div>
            </a>
          </Paper>
        </MuiThemeProvider>
      </div>
    );
  });

  return resultTiles;
}

The tile title and description are processed for HTML special entities and emojis too, using the processText function.

case 'websearch-rss': {
  let htmlText = entities.decode(text);
  processedText = <Emojify>{htmlText}</Emojify>;
  break;
}

We now display our result tiles as a carousel-like swipeable display using react-slick. We initialise our slider with a few default options specifying the swipe speed and the slider UI.

import Slider from 'react-slick';

// Render Websearch RSS tiles

export function renderTiles(tiles){

  if(tiles.length === 0){
    let noResultFound = 'NO Results Found';
    return(<center>{noResultFound}</center>);
  }

  let resultTiles = drawTiles(tiles);
  
  var settings = {
    speed: 500,
    slidesToShow: 3,
    slidesToScroll: 1,
    swipeToSlide:true,
    swipe:true,
    arrows:false
  };

  return(
    <Slider {...settings}>
      {resultTiles}
    </Slider>
  );
}

We finally add CSS attributes to style our result tiles and add overflow handling for the text, maintaining a standard width for all tiles. We also add some CSS for our carousel display to show multiple tiles instead of one by default. This is done by adding some margin to the child components in the slider.

.slick-slide{
  margin: 0 10px;
}

.slick-list{
  max-height: 100px;
}

We finally have our swipeable display of rss data tiles, each tile hyperlinked to the source of the data. When the user clicks on a tile, he is redirected to the link in a new window, i.e. the entire tile is hyperlinked. And when there are no results to display, we show a `NO Results Found` message.

The complete code can be found at the SUSI WebChat Repo. Feel free to contribute.


Implementing Search Feature In SUSI Web Chat

SUSI WebChat now has a search feature, giving users an option to filter or find messages. The user can enter a keyword or phrase in the search field; all the matched messages are highlighted with the given keyword and the user can then navigate through the results.

Let's visit SUSI WebChat and try it out.

  1. Clicking on the search icon on the top right corner of the chat app screen, we’ll see a search field expand to the left from the search icon.
  2. Type any word or phrase and you see that all the matches are highlighted in yellow and the currently focused message is highlighted in orange.
  3. We can use the up and down arrows to navigate between previous and recent messages containing the search string.
  4. We can also choose to search case sensitively using the drop down provided by clicking on the vertical dots icon to the right of the search component.
  5. Click on the `X` icon or the search icon to exit from the search mode. We again see that the search field contracts to the right, back to its initial state as a search icon.

How does the search feature work?

We first make our search component with a search field, navigation arrow icon buttons and an exit icon button. We then listen to input changes in our search field using the onChange function. On input change, we collect the search string and iterate through all the existing messages, checking whether each message contains the search string; if present, we mark that message before passing it to MessageListItem to render.

let match = msgText.indexOf(matchString);

if (match !== -1) {
  msgCopy.mark = {
    matchText: matchString,
    isCaseSensitive: isCaseSensitive
  };
}

We also need to pass the message ID of the currently focused message to MessageListItem, as we need to identify that message to highlight it in orange instead of yellow, differentiating the current match from all the other matches.

function getMessageListItem(messages, markID) {
  if(markID){
    return messages.map((message) => {
      return (
        <MessageListItem
          key={message.id}
          message={message}
          markID={markID}
        />
      );
    });
  }
  // (the branch rendering messages without a markID is omitted here)
}

We also store the indices of the messages marked in the MessageSection Component state which is later used to iterate through the highlighted results.

searchTextChanged = (event) => {

  let matchString = event.target.value;
  let messages = this.state.messages;
  let markingData = searchMsgs(messages, matchString,
    this.state.searchState.caseSensitive);

  if(matchString){

    let searchState = {
      markedMsgs: markingData.allmsgs,
      markedIDs: markingData.markedIDs,
      markedIndices: markingData.markedIndices,
      scrollLimit: markingData.markedIDs.length,
      scrollIndex: 0,
      scrollID: markingData.markedIDs[0],
      caseSensitive: this.state.searchState.caseSensitive,
      open: false,
      searchText: matchString
    };

    this.setState({
      searchState: searchState
    });

  }
}

After marking the matched messages with the search string, we pass the messages array into the MessageListItem component, where the messages are processed and rendered. Here, we check whether the message received from MessageSection is marked, and if so, we highlight it. To highlight all occurrences of the search string in the message text, I used a module called react-text-highlight.

import TextHighlight from 'react-text-highlight';

if(this.props.message.id === markMsgID){
  markedText.push(
    <TextHighlight
      key={key}
      highlight={matchString}
      text={part}
      markTag='em'
      caseSensitive={isCaseSensitive}
    />
  );
}
else{
  markedText.push(
    <TextHighlight
      key={key}
      highlight={matchString}
      text={part}
      caseSensitive={isCaseSensitive}/>
  );
}

Here, we are using the message ID of the currently focused message, sent as props to MessageListItem, to identify that message and highlight it in orange instead of the default yellow used for all other matches.

I used the 'em' tag to emphasise the currently highlighted message and colored it orange using CSS.

em{
  background-color: orange;
}

We next need to add functionality to navigate through the matched results using the arrow buttons. We store all the marked messages in the MessageSection state as `markedIDs` and their corresponding indices as `markedIndices`. Using the length of this array, we get the `scrollLimit`, i.e. we know the bounds to apply while navigating through the search results.

On clicking the up or down arrows, we update the currently highlighted message through `scrollID` and `scrollIndex`, and also check the bounds using `scrollLimit` in the searchState. Once these are updated, the chat app must automatically scroll to the newly highlighted message. Since findDOMNode is being deprecated, I used the custom scrollbar to find the node of the currently highlighted message without using findDOMNode. The custom scrollbar was implemented using the module react-custom-scrollbars. Once the node is found, we use the inbuilt HTML DOM method scrollIntoView() to automatically scroll to that message.

if(this.state.search){
  if (this.state.searchState.scrollIndex === -1
      || this.state.searchState.scrollIndex === null) {
      this._scrollToBottom();
  }
  else {
    let markedIDs = this.state.searchState.markedIDs;
    let markedIndices = this.state.searchState.markedIndices;
    let limit = this.state.searchState.scrollLimit;
    let ul = this.messageList;

    if (markedIDs && ul && limit > 0) {
      let currentID = markedIndices[this.state.searchState.scrollIndex];
      this.scrollarea.view.childNodes[currentID].scrollIntoView();
    }
  }
}

Let us now see how the search field was animated. I used a CSS transition on the width property, which animates the search field whenever its width changes. The width is fixed at zero when the search mode is not activated, so only the search icon is displayed. When the search mode is activated, i.e. the user clicks on the search icon, the width is set to 125px. Since the width has changed, the increase is displayed as an expanding animation due to the CSS transition property.

const animationStyle = {
  transition: 'width 0.75s cubic-bezier(0.000, 0.795, 0.000, 1.000)'
};

const baseStyles = {
  open: { width: 125 },
  closed: { width: 0 },
}
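
A sketch of how these styles might be applied to the search field (the exact markup is assumed, reusing the `search` state flag and the searchTextChanged handler from above):

// Merge the transition with the width for the current mode
<input
  type='text'
  placeholder='Search...'
  style={{
    ...animationStyle,
    ...(this.state.search ? baseStyles.open : baseStyles.closed)
  }}
  onChange={this.searchTextChanged}
/>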

We also have a case sensitive search option, displayed on clicking the rightmost button, i.e. the three vertical dots button. The case sensitive option's value is stored in the MessageSection searchState and is passed along with the messages to MessageListItem, where it is used by react-text-highlight to highlight text accordingly and render the highlighted messages.

This is how the search feature was implemented in SUSI WebChat. You can find the complete code at the SUSI WebChat repository.
