Implementing Internationalization with Weblate Integration on SUSI Web Chat

SUSI Web Chat supports multiple languages on the Chat UI. The text and the date/time formats used to render the interface are translated into the preferred language selected in the Language Settings.

To test it out on SUSI Web Chat, 

  1. Head over to http://chat.susi.ai
  2. Go to settings from the right dropdown.
  3. Set your preferred language inside Language Settings.
  4. Save and see the SUSI Chat render in the preferred language.

To achieve Internationalization, a number of important steps are to be followed –

  1. The best approach is to use po/pot files and fetch the translated strings from them. For JavaScript projects, an equivalent JSON structure can be used, formatted as follows. (File: de.json)
{
   "About": "About",
   "Chat": "Chat",
   "Skills": "Skills",
   "Settings": "Settings",
   "Login": "Login",
   "Logout": "Logout",
   "Themes": "Themes"
}

2. After creating valid translation files in the right format, we create a component which translates our text into the selected language by importing the corresponding string from that file. To make this easier in JavaScript, we use the JSON files created above.

3. Our Translate.react.js component is a special component which returns only a <span> element. It reads the user's preferred language from the store, imports the corresponding translation file, matches the key against the text passed to it, and returns the translated text. The following code snippet explains this more precisely.

import de from './de.json'; // German translation file (path assumed)

changeLanguage = (text) => {
    this.setState({
        text: text
    });
}

// 'de' is the JSON translation file imported into this component
componentDidMount() {
    let defaultPrefLanguage = this.state.defaultPrefLanguage;
    let arrDe = Object.keys(de);
    let text = this.state.text;
    if (defaultPrefLanguage !== 'en-US') {
        for (let key = 0; key < arrDe.length; key++) {
            if (arrDe[key] === text) {
                this.changeLanguage(de[arrDe[key]]);
            }
        }
    }
}

render() {
    return <span>{this.state.text}</span>;
}

4. The next step is to bind all the text throughout our components to this <Translate text="" /> component, which sends back the translated content. Any string in any component can then be replaced with the following.

<Translate text="About" />

Here the text "About" is sent to the Translate.react.js component, which fetches the German translation of the string "About" from the file de.json, as in the sketch below.
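For instance, a menu label that was previously hard-coded can be wrapped like this (the material-ui MenuItem here is an illustrative example, not the exact project code):

// before: a hard-coded label
<MenuItem primaryText="About" />

// after: the label now renders in the user's preferred language
<MenuItem primaryText={<Translate text="About" />} />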

5. We then render the translated content in our Chat UI. (File: Translate.react.js)

About Weblate

Weblate is a web-based translation tool with Git integration supporting a wide range of file formats, making it easy for translators to contribute. The translations should be kept within the same repository as the source code, and the translation process should closely follow development. To know more about Weblate, go to this link.

Integrating SUSI Web Chat with Weblate

  1. First, we deploy Weblate on our localhost using the installation guide given in these docs. I used the pip installation guide for Weblate as mentioned in this link. After doing that, we copy weblate/settings_example.py to weblate/settings.py. Then we configure settings.py and use the following command to migrate the settings.
./manage.py migrate
  2. The next step is to create an admin user using the following command.
./manage.py createadmin
  3. We then add a project from our Admin dashboard by filling in the details as shown in the image.
  4. Once the project is added, we add the component to link our translation files, as shown in the image.
  5. Once the files are linked, we can see our project overview page and its information, as in the image below. The screenshot shows a 100% translation, which means all of our strings are translated correctly for German.
  6. To change any translation, we make the changes and push them to the repository where our SSH key generated from Weblate is added. A full guide to do that is mentioned in this link.
  7. We can also push changes to the repository by making changes locally. This will generate a commit from the Weblate Admin in our repository, as seen in the following screenshot.

Resources

  1. React Internationalization Library  – react-intl
  2. Official Docs about Weblate – Weblate docs.
  3. Format for po/pot files, JSON files etc. – https://docs.weblate.org/en/latest/formats.html#json-and-nested-structure-json-files
  4. Weblate – https://weblate.org

Auto Deployment of SUSI Web Chat on gh-pages with Travis-CI

SUSI Web Chat uses Travis CI with a custom build script to deploy itself on gh-pages after every pull request is merged into the project. The build system automatically updates the latest changes hosted on chat.susi.ai. In this blog, we will see how to automatically deploy the repository on the gh-pages branch.

To proceed with auto deploy on gh-pages branch,

  1. We first need to set up Travis for the project.
  2. Register on https://travis-ci.org/ and turn on Travis for this repository.

Next, we add .travis.yml in the root directory of the project.

# Set system config
sudo: required
dist: trusty
language: node_js

# Specifying node version
node_js:
  - 6

# Running the test script for the project
script:
  - npm test

# Running the deploy script by specifying the location of the script, here 'deploy.sh'

deploy:
  provider: script
  script: "./deploy.sh"


# We proceed with the cache if there are no changes in the node_modules
cache:
  directories:
    - node_modules

branches:
  only:
    - master

To find the code go to https://github.com/fossasia/chat.susi.ai/blob/master/.travis.yml

The Travis configuration file ensures that the project builds for every change made, using the npm test command; in our case, it only considers changes made on the master branch.

If one wants to watch other branches, one can add the respective branch names in the Travis configuration, as in the sketch below. After checking that the build passes, we need to automatically push the changes made, for which we use a bash script.
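For example, a development branch could be watched alongside master (the extra branch name is illustrative):

branches:
  only:
    - master
    - development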

#!/bin/bash

SOURCE_BRANCH="master"
TARGET_BRANCH="gh-pages"

# Pull requests and commits to other branches shouldn't try to deploy.
if [ "$TRAVIS_PULL_REQUEST" != "false" -o "$TRAVIS_BRANCH" != "$SOURCE_BRANCH" ]; then
    echo "Skipping deploy; The request or commit is not on master"
    exit 0
fi

# Save some useful information
REPO=`git config remote.origin.url`
SSH_REPO=${REPO/https:\/\/github.com\//git@github.com:}
SHA=`git rev-parse --verify HEAD`

ENCRYPTED_KEY_VAR="encrypted_${ENCRYPTION_LABEL}_key"
ENCRYPTED_IV_VAR="encrypted_${ENCRYPTION_LABEL}_iv"
ENCRYPTED_KEY=${!ENCRYPTED_KEY_VAR}
ENCRYPTED_IV=${!ENCRYPTED_IV_VAR}
openssl aes-256-cbc -K $ENCRYPTED_KEY -iv $ENCRYPTED_IV -in deploy_key.enc -out ../deploy_key -d

chmod 600 ../deploy_key
eval `ssh-agent -s`
ssh-add ../deploy_key

# Cloning the repository to repo/ directory,
# Creating the gh-pages branch if it doesn't exist, else switching to that branch
git clone $REPO repo
cd repo
git checkout $TARGET_BRANCH || git checkout --orphan $TARGET_BRANCH
cd ..

# Setting up the username and email.
git config user.name "Travis CI"
git config user.email "$COMMIT_AUTHOR_EMAIL"

# Cleaning up the old repo's gh-pages branch except CNAME file and 404.html
find repo/* -maxdepth 1 ! -name "CNAME" ! -name "404.html" -exec rm -rf {} \; 2> /dev/null
cd repo

git add --all
git commit -m "Travis CI Clean Deploy : ${SHA}"

git checkout $SOURCE_BRANCH

# Actual building and setup of current push or PR.
npm install
npm run build
mv build ../build/

git checkout $TARGET_BRANCH
rm -rf node_modules/
mv ../build/* .
cp index.html 404.html

# Staging the new build for commit; and then committing the latest build
git add -A
git commit --amend --no-edit --allow-empty

# Deploying only if the build has changed
if [ -z "$(git diff --name-only HEAD HEAD~1)" ]; then

  echo "No Changes in the Build; exiting"
  exit 0

else
  # There are changes in the Build; push the changes to gh-pages
  echo "There are changes in the Build; pushing the changes to gh-pages"

  # Actual push to gh-pages branch via Travis
  git push --force $SSH_REPO $TARGET_BRANCH
fi

This bash script will enable the Travis CI user to push changes to gh-pages. For this, we need to store the credentials of the repository in encrypted form.

1. To generate the public/private RSA key pair, we use the following command.

ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

2. It will generate the keys in the .ssh/id_rsa file in your home directory.

3. Make sure you do not enter any passphrase while generating the credentials, otherwise Travis will get stuck at the time of decrypting the keys.
4. Copy the public key and add it as a deploy key to the repository (on GitHub, under Settings → Deploy keys).

5. We also need to set the environment variable ENCRYPTION_LABEL in Travis, which the deploy script uses to look up the encrypted key and IV. Here's a screenshot of where to set it in the Travis repository dashboard.

6. Next, install the Travis CLI for encrypting the keys.

sudo apt install ruby ruby-dev
sudo gem install travis

7. Make sure you are logged in to Travis; to log in, use the following command.

travis login

8. Make sure you have copied the private SSH key to a file named deploy_key, then encrypt it and add it to the root of your repository, using the command –

travis encrypt-file deploy_key

9. After successful encryption, you will see a message

Please add the following to your build script (before_install stage in your .travis.yml, for instance):

openssl aes-256-cbc -K $encrypted_3dac6bf6c973_key -iv $encrypted_3dac6bf6c973_iv -in deploy_key.enc -out ../deploy_key -d
10. Add the above-generated deploy_key.enc to the repository and push the changes to your master branch. Do not push the deploy_key, only the encrypted file, i.e., deploy_key.enc.
11. Finally, push the changes, create a pull request, and merge it to test the deployment. Visit the Travis logs for more details and debugging.


Modifying SUSI Skills using SUSI Skill CMS

SUSI Skill CMS is a complete solution, right from creating a skill to modifying it. The skills in SUSI are well synced with the remote repository and can be completely modified using the Edit Skill feature of SUSI Skill CMS. Here's how to modify a skill.

  1. Sign up/log in to the website using your credentials at skills.susi.ai
  2. Choose the skill which you want to edit and click on the pencil icon.
  3. The following screen allows editing the skill. One can change the group, language, skill name, image, and the content as well.
  4. After making the changes, a commit message can be added to save the changes.

To achieve the above steps we require the following API Endpoints of the SUSI Server.

  1. http://api.susi.ai/cms/getSkillMetadata.json – This gives us the metadata which populates the skill content, image, author, etc.
  2. http://api.susi.ai/cms/getAllLanguages.json – This gives us all the languages of a skill group.
  3. http://api.susi.ai/cms/getGroups.json – This gives us the list of all skill groups, such as Knowledge, Entertainment, Smalltalk, etc.; a sample request is sketched below.
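For instance, a hedged sketch of fetching the skill groups (the jQuery call and the response property name are assumptions):

$.getJSON('http://api.susi.ai/cms/getGroups.json', function (data) {
    console.log(data.groups); // the list of skill groups
});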

Now since we have all the APIs in place, we make the following AJAX calls to update the skill.

  1. Since we are detecting changes in all the fields (group value, skill name, language value, image value, commit message, content changes, and the format of the content), the AJAX call should only be sent when there is an actual change and there are no null or undefined values among them. For that, we make various form validations. They are as follows.
    1. We first detect whether the user is in a logged-in state.
if (!cookies.get('loggedIn')) {
    notification.open({
        message: 'Not logged In',
        description: 'Please login and then try to create/edit a skill',
        icon: <Icon type="close-circle" style={{ color: '#f44336' }} />,
    });
}
    2. We check whether the uploaded image matches the format of the skill image to be stored, which is ::image images/imageName.png
if (!new RegExp(/images\/\w+\.\w+/g).test(this.state.imageUrl)) {
    notification.open({
        message: 'Error Processing your Request',
        description: 'image must be in format of images/imageName.jpg',
        icon: <Icon type="close-circle" style={{ color: '#f44336' }} />,
    });
}
    3. We check that the commit message is not null and notify the user if they forgot to add a message.
if (this.state.commitMessage === null) {
    notification.open({
        message: 'Please make some changes to save the Skill',
        icon: <Icon type="close-circle" style={{ color: '#f44336' }} />,
    });
}
    4. We also check whether the old values of the skill are completely identical to the new ones, in which case we do not send the request.
if (oldValues === newValues) {
    notification.open({
        message: 'Please make some changes to save the Skill',
        icon: <Icon type="close-circle" style={{ color: '#f44336' }} />,
    });
}

To check out the complete code, go to this link.

  2. Next, if the above validations are successful, we send a POST request to the server and show a notification to the user accordingly, indicating whether the changes to the skill data have been saved. Here's the AJAX call snippet.
// create a form object
let form = new FormData();       
/* Append the following fields from the Skill Component:- OldModel, OldLanguage, OldSkill, NewModel, NewGroup, NewLanguage, NewSkill, changelog, content, imageChanged, old_image_name, new_image_name, image_name_changed, access_token */  
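// A hedged sketch of those appends (the variable names are illustrative):
form.append('OldModel', oldModel);
form.append('NewModel', newModel);
form.append('OldGroup', oldGroup);
form.append('NewGroup', newGroup);
form.append('OldLanguage', oldLanguage);
form.append('NewLanguage', newLanguage);
form.append('OldSkill', oldSkillName);
form.append('NewSkill', newSkillName);
form.append('changelog', commitMessage);
form.append('content', skillContent);
form.append('access_token', accessToken);
// ...plus the image-related fields listed above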
if (image_name_changed) {
    file = this.state.file;
    // append the file to the image field
}

let settings = {
    "async": true,
    "crossDomain": true,
    "url": "http://api.susi.ai/cms/modifySkill.json",
    "method": "POST",
    "processData": false,
    "contentType": false,
    "mimeType": "multipart/form-data",
    "data": form
};

$.ajax(settings)
    .done(function (response) {
        // show success
    })
    .fail(function (response) {
        // show failure
    });
  3. To verify all this, we head to the commits section of the susi_skill_data repository and see the changes we made. The changes can be seen at https://github.com/fossasia/susi_skill_data/commits/master

Resources

  1. AJAX POST Request – https://api.jquery.com/jquery.post/ 
  2. Material UI – http://material-ui.com 
  3. Notifications – http://www.material-ui.com/#/components/notification 

Enhanced Skill Tiles in SUSI Skill CMS

The SUSI Skill Wiki is a management system for all the SUSI skills, and the skill display screen ought to look attractive. The earlier version of the skill display was just a set of cards populated with the skill name, as shown in the image.


So as we progressed to add more metadata to the SUSI skills, we faced the challenge of showing all the details of a skill.

An example of a skill metadata format-

"cricketTest": {
      "image": "images/images.jpg",
      "author_url": "skill.susi.ai",
      "examples": ["Testing Works"],
      "developer_privacy_policy": "na",
      "author": "cms",
      "skill_name": "cricket",
      "dynamic_content": true,
      "terms_of_use": "na",
      "descriptions": "testing",
      "skill_rating": null
    }

To embed the Skill metadata in the Tiles the following steps are to be followed –

  1. We first use the endpoint at the SUSI Server, http://api.susi.ai/cms/getSkillList.json, with the following attributes; a sketch of the request follows this list –
    1. model – whether the skill follows the general model or a tutorial model
    2. group – the category or group of the skill
    3. language – the language of the skill output
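A hedged sketch of such a request (the jQuery call and parameter values are illustrative):

$.ajax({
    url: 'http://api.susi.ai/cms/getSkillList.json',
    data: { model: 'general', group: 'Knowledge', language: 'en' },
    dataType: 'jsonp',
    success: function (data) {
        // data.skills holds the metadata used to build each tile
    }
});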

2. An AJAX request to this endpoint gives us the following response.

{
  "accepted": true,
  "model": "general",
  "group": "Knowledge",
  "language": "en",
  "skills": {
    "capital": {
      "image": "images/capital.png",
      "author_url": "https://github.com/chashmeetsingh",
      "examples": ["capital of Bangladesh"],
      "developer_privacy_policy": null,
      "author": "chashmeet singh",
      "skill_name": "capital",
      "dynamic_content": true,
      "terms_of_use": null,
      "descriptions": "a skill to tell user about capital of any country.",
      "skill_rating": {
        "negative": "0",
        "positive": "1"
      }
    }
  }
}

We use the descriptions, skill_name, examples, image from the skill metadata to create our Skill Tile.

  3. The styles of the cards follow a CSS flexbox structure. A sample mock-up of the Skill Card looks as follows.

We first handle all the base cases and show "No name available" or "No description available" for data which does not exist or is found to be null; a hedged sketch of these fallbacks follows. We then create the card mock-up in ReactJS, which looks somewhat like the subsequent code snippet in the file BrowseSkill.js
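A sketch of those fallback checks (the skill object and variable names are assumptions):

let skill_name = skill.skill_name ? skill.skill_name : 'No name available';
let description = skill.descriptions ? skill.descriptions : 'No description available';
let image = skill.image ? skill.image : '';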

                            <Card style={styles.row} key={el}>
                                <div style={styles.right} key={el}>
                                    {image ? <div style={styles.imageContainer}>
                                        <img alt={skill_name} src={image} style={styles.image}/>
                                    </div>:
                                    <CircleImage name={el} size="48"/>}
                                    <div style={styles.titleStyle}>"{examples}"</div>
                                </div>
                                <div style={styles.details}>
                                    <h3 style={styles.name}>{skill_name}</h3>
                                    <p style={styles.description}>{description}</p>
                                </div>
                            </Card>

  4. We then add the following styles to the Card and its contents, which complete the look of the view.
imageContainer: {
    position: 'relative',
    height: '80px',
    width: '80px',
    verticalAlign: 'top'
},
name: {
    textAlign: 'left',
    fontSize: '15px',
    color: '#4285f4',
    margin: '4px 0'
},
details: {
    paddingLeft: '10px'
},
image: {
    maxWidth: '100%',
    border: 0
},
description: {
    textAlign: 'left',
    fontSize: '14px'
},
row: {
    width: 280,
    minHeight: '200px',
    margin: '10px',
    overflow: 'hidden',
    justifyContent: 'center',
    fontSize: '10px',
    textAlign: 'center',
    display: 'inline-block',
    background: '#f2f2f2',
    borderRadius: '5px',
    backgroundColor: '#f4f6f6',
    border: '1px solid #eaeded',
    padding: '4px',
    alignSelf: 'flex-start'
},
titleStyle: {
    textAlign: 'left',
    fontStyle: 'italic',
    fontSize: '16px',
    textOverflow: 'ellipsis',
    overflow: 'hidden',
    width: '138px',
    marginLeft: '15px',
    verticalAlign: 'middle',
    display: 'block'
}

To see the SUSI Skills or to contribute to it, please visit http://skills.susi.ai


Storing SUSI Settings for Anonymous Users

SUSI Web Chat is equipped with a number of settings by which it can be configured. A logged-in user can change their settings and save them by sending them to the server. For an anonymous user, however, these preferences would be lost on every new visit unless we store the settings somewhere. To overcome this, we store the settings in the browser's cookies, which we do by following the steps below.

The current User Settings available are :-

  1. Theme – One can change the theme to either ‘dark’ or ‘light’.
  2. To have the option to send a message on enter.
  3. Enable mic to give voice input.
  4. Enable speech output only for speech input.
  5. To have speech output always ON.
  6. To select the default language.
  7. To adjust the speech output rate.
  8. To adjust the speech output pitch.

The first step is to have a default state for all the settings, which acts as the default for all users opening the chat for the first time. We get these settings from the user preferences store available in the file UserPreferencesStore.js

let _defaults = {
    Theme: 'light',
    Server: 'http://api.susi.ai',
    StandardServer: 'http://api.susi.ai',
    EnterAsSend: true,
    MicInput: true,
    SpeechOutput: true,
    SpeechOutputAlways: false,
    SpeechRate: 1,
    SpeechPitch: 1,
};

We assign these default settings as the default constructor state of our component so that we populate them beforehand.

2. On changing the values of the various settings, we update the state through various function handlers, which are as follows (a sketch of one such handler is shown after the list):

  1. handleSelectChange – handles all the theme changes
  2. handleEnterAsSend – toggles whether pressing Enter sends the message
  3. handleMicInput – handles enabling the mic input
  4. handleSpeechOutput – handles whether speech output should be enabled or not
  5. handleSpeechOutputAlways – handles if the user wants to listen to speech after every input
  6. handleLanguage – handles the language settings related to speech
  7. handleTextToSpeech – handles the text-to-speech settings, including speech rate and pitch
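For example, a hedged sketch of one such handler wired to a material-ui Toggle (the exact handler bodies in the project may differ):

handleEnterAsSend = (event, isInputChecked) => {
    this.setState({
        enterAsSend: isInputChecked
    });
}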

  3. Once the values are changed, we make use of the universal-cookie package and save the settings in the user's browser. This is done in the function handleSubmit in the file Settings.react.js; a note on cookie options follows the snippet.

import Cookies from 'universal-cookie';

const cookies = new Cookies(); // create the cookie object

// set all values
let vals = {
    theme: newTheme,
    server: newDefaultServer,
    enterAsSend: newEnterAsSend,
    micInput: newMicInput,
    speechOutput: newSpeechOutput,
    speechOutputAlways: newSpeechOutputAlways,
    rate: newSpeechRate,
    pitch: newSpeechPitch,
};

let settings = Object.assign({}, vals);
settings.LocalStorage = true;
// store in cookies for the anonymous user
cookies.set('settings', settings);
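universal-cookie's set() also accepts an options object; a hedged example pinning the cookie to the site root and giving it a one-year lifetime (the values are illustrative):

cookies.set('settings', settings, { path: '/', maxAge: 60 * 60 * 24 * 365 });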
  4. Once the values are set in the browser, we load the new settings every time the user opens SUSI Web Chat by having the following checks in the file API.actions.js. We get the values from the cookies using the get function if the person is not logged in, and initialise the new settings for that user.
if (cookies.get('loggedIn') === null ||
    cookies.get('loggedIn') === undefined) {
    // not logged in, so get the value from the settings cookie
    let settings = cookies.get('settings');
    if (settings !== undefined) {
        // the settings are set in the cookie, initialise them
        SettingsActions.initialiseSettings(settings);
    }
}
else {
    // call the AJAX for getting settings for logged-in users
}


Handling Offline Message Responses in SUSI Web Chat

Previously, SUSI Web Chat stopped working when there was no Internet connectivity. The application's overall state was disturbed: one would see the loading message gif, and users were left to wonder how to proceed. To handle this situation, we needed to notify the user in offline mode, with a message, that there is no Internet connectivity.

The following image demonstrates the previous state, where the application hung.

This image shows how the state is handled currently. One can test this out on http://chat.susi.ai by disconnecting, sending a message to SUSI, and then connecting back to the Internet.

To achieve this, one needed to handle the offline and online events of the browser. The following steps can be followed to achieve this.

  1. We make use of the following eventListener functions to know whether the user is connected to the Internet.
// handles the Offlines event
window.addEventListener('offline', handleOffline.bind(this));  

// handles the Online event
window.addEventListener('online', handleOnline.bind(this));
  2. We then set a global offline message which is modified when the connection switches between online and offline states. These changes are handled by the following functions.
let offlineMessage = null;

function handleOffline() {
  offlineMessage = 'Sorry, cannot answer that now. I have no net connectivity';
}
function handleOnline() {
  offlineMessage = null;
}
  3. We then handle the action createSUSIMessage() in API.actions.js and send the AJAX request according to the offline/online state. This enables us to send the correct message response to the user even in the offline state, without letting the application state crash.
// offlineMessage stays null while we are online, so make the usual AJAX call
if (!offlineMessage) {
    // handle the AJAX request to the SUSI server
}
else {
    // offline: create a message saying there is no Internet connectivity
}
  4. On refreshing, the messages restore back to the original state, as the offline ones are not stored on the server. Hence the user sees the correct history, i.e., only those messages which were sent to the server and successfully responded to by SUSI.


Implementing Speech to Text for Chrome in SUSI Web Chat

SUSI Web Chat now replies to voice inputs. To achieve this, I made use of the Web Speech API. Voice input saves one the pain of typing, and it's a much-needed feature for the Web Chat to maintain similarity with the SUSI Android and SUSI iOS clients.

To test the feature out in SUSI Web Chat, click on the microphone icon beside the text area on chat.susi.ai.

Say the message once the dialog appears, and you will see the message being sent to the Chat List rendered in text.

Let’s achieve the same result following the steps below.

  1. First, initialize the class VoiceRecognition with defaults for the Speech Recognition; for that, we create a file VoiceRecognition.js
    1. We first initialize the Speech Recognition API with the window object.
    2. We warn the user with a console message if the Speech Recognition API is not available.
    3. If it is available, we call the createRecognition function using the following line:

this.recognition = this.createRecognition(SpeechRecognition)

// Initialise the Speech Recognition API
const SpeechRecognition = window.SpeechRecognition
    || window.webkitSpeechRecognition
    || window.mozSpeechRecognition
    || window.msSpeechRecognition
    || window.oSpeechRecognition

// Warn the user if not available, otherwise call the createRecognition function
if (SpeechRecognition != null) {
    this.recognition = this.createRecognition(SpeechRecognition)
} else {
    console.warn('The current browser does not support the SpeechRecognition API.');
}
  2. Then we write the createRecognition function.
    1. We set our defaults first as continuous – true, interimResults – false, and lang – 'en-US'.
    2. We pass these options to the recognition object that we created in the above step and finally return the recognition object.
createRecognition = (SpeechRecognition) => {
    const defaults = {
      continuous: true,
      interimResults: false,
      lang: 'en-US'
    }

    const options = Object.assign({}, defaults, this.props)

    let recognition = new SpeechRecognition()

    recognition.continuous = options.continuous
    recognition.interimResults = options.interimResults
    recognition.lang = options.lang

    return recognition
  }
  3. Initialize all the helper functions to be passed as props.
    1. start – This method starts the recognition and invokes the mic of the browser. It also checks if the browser has access to the user's mic.
    2. stop – The stop method closes the mic and returns the audio captured so far.
    3. abort – The abort method stops the SpeechRecognition service.
    4. onspeechend – This method is called if there is inactivity and no voice input, and hence stops the recognition service.
    5. componentWillReceiveProps – This method waits for the stop prop and calls the stop method when it has received the stop object.
    6. componentWillUnmount – This method is invoked just before the component is about to unmount, and therefore its function is to abort the Speech Recognition service.
    7. render – We return null, as there is nothing to render in this component; all the converted text of the captured speech is sent to the parent element.
start = () => {
    this.recognition.start()
  }

  stop = () => {
    this.recognition.stop()
  }

  abort = () => {
    this.recognition.abort()
  }
  onspeechend = () => {
    console.log('no sound detected');
    this.recognition.stop()
  }

  componentWillReceiveProps ({ stop }) {
    if (stop) {
      this.stop()
    }
  }

  componentWillUnmount () {
    this.abort()
  }

  render () {
    return null
  }
  4. Add event listeners to the start and stop functions inside componentDidMount() to ensure every action that we want to perform from the parent element happens after the component has successfully mounted itself.
    1. start – The start method is set with an action start so that we can pass the required action name to the VoiceRecognition component that we created.
    2. end – The end method is similarly set with an action end.
    3. After setting up the actions, we finally call the bindResult function with the result that we received.
componentDidMount () {
    const events = [
      { name: 'start', action: this.props.onStart },
      { name: 'end', action: this.props.onEnd },
      { name: 'onspeechend', action: this.props.onspeechend }
    ]

    events.forEach(event => {
      this.recognition.addEventListener(event.name, event.action)
    })

    this.recognition.addEventListener('result', this.bindResult)

    this.start()
  }
  5. Bind the result and send it as props to the parent element.
    1. Combine all interim results of the recognition and send them to the onResult function as finalTranscript.
    2. The bindResult function does all the binding of the interim results that we received and outputs a final result as finalTranscript.
    3. Lastly, we add the prop validations to ensure the correct props are being passed to our VoiceRecognition component.
// bindResult function
 bindResult = (event) => {
    let interimTranscript = ''
    let finalTranscript = ''
   // Bind all the results to finalTranscript
    for (let i = event.resultIndex; i < event.results.length; ++i) {
      if (event.results[i].isFinal) {
        finalTranscript += event.results[i][0].transcript
      } else {
        interimTranscript += event.results[i][0].transcript
      }
    }

    this.props.onResult({ interimTranscript, finalTranscript })
  }
// Add prop validations
VoiceRecognition.propTypes = {
  onStart: PropTypes.func,
  onEnd: PropTypes.func,
  onResult: PropTypes.func,
  onspeechend: PropTypes.func,
  continuous: PropTypes.bool,
  lang: PropTypes.string,
  stop: PropTypes.bool
};
// Finally, export the VoiceRecognition component
export default VoiceRecognition
  6. Lastly, call the VoiceRecognition component and pass the props from the MessageComposer section to it in the following way.
    1. Initialize the default state in the constructor inside this.state.
this.state = {
      text: '',
      start: false, // Starting the VoiceRecognition
      stop: false, // Stop the VoiceRecognition
      open: false, // Maintain the modal state
      result:'' // Maintain the result state
    };
    2. onStart function to call the VoiceRecognition component only when the mic button is pressed.
    3. onEnd to end the Speech Recognition service.
    4. onResult to send the message through the Actions.createMessage() function.
onResult = ({interimTranscript,finalTranscript }) => {
    let result = interimTranscript;
    let voiceResponse = false;
    this.setState({result:result});
    if(finalTranscript) {
      result = finalTranscript;
      this.setState({
      start: false,
      result:result,
      stop: false,
      open:false,
      animate:false
      });
      if(this.props.speechOutputAlways || this.props.speechOutput){
        voiceResponse = true;
      }
      Actions.createMessage(result, this.props.threadID, voiceResponse);
      setTimeout(()=>this.setState({result: ''}),400);
      this.Button = <Mic />
    }
  }
  7. Fire the component based on the value of the start variable and pass the requisite props as given below in the code.
// Only when start is 'true', call the VoiceRecognition component

    {this.state.start && (
          <VoiceRecognition
            onStart={this.onStart}
            onEnd={this.onEnd}
            onResult={this.onResult}
            continuous={true}
            lang="en-US"
            stop={this.state.stop}
          />
        )}

 

  8. Update the text in the "Speak Now" dialog to show the user the speech-to-text conversion.
    1. Update the text in the modal when it is converted from speech to text, i.e. when we set the state of the result variable.
{this.state.result !=='' ? this.state.result :
          'Speak Now...'}

To get access to the full code, go to the repository https://github.com/fossasia/chat.susi.ai


Implementing Text to Speech on SUSI Web Chat

SUSI Web Chat now gives voice replies while chatting with it, similar to the SUSI Android and SUSI iOS clients. To test the voice output on Chrome,

  1. Visit chat.susi.ai
  2. Click on the Mic input button.
  3. Say something using the Mic when the Speak Now View appears

The simplest way to add text-to-speech to your website is by using the speech synthesis interface of the Web Speech API, currently well supported in the Chrome browser. The following steps help achieve it in ReactJS.

  1. Initialize the state defaults in a component named VoicePlayer.
const defaults = {
      text: '',
      volume: 1,
      rate: 1,
      pitch: 1,
      lang: 'en-US'
    } 

There are two states which need to be maintained throughout the component and passed as such in our props. Thus our state is simply

this.state = {
      started: false,
      playing: false
    }
  2. Our next step is to make use of the functions of the Web Speech API to carry out the relevant processes; the speech object used here is created as in the sketch after this list.
    1. speak() – window.speechSynthesis.speak(this.speech) – calls the speak method of the Speech API
    2. cancel() – window.speechSynthesis.cancel() – calls the cancel method of the Speech API
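Here this.speech is a SpeechSynthesisUtterance instance. A minimal sketch of creating it from the defaults above (the helper name and its placement are assumptions):

createSpeech = (text) => {
    // merge the state defaults with any props passed in
    const options = Object.assign({}, defaults, this.props)
    let speech = new SpeechSynthesisUtterance(text)
    speech.volume = options.volume
    speech.rate = options.rate
    speech.pitch = options.pitch
    speech.lang = options.lang
    return speech
}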
  3. We then use our component helper functions to assign event listeners and listen to any changes occurring in the background. For this purpose we make use of the functions componentWillReceiveProps(), shouldComponentUpdate(), componentDidMount(), componentWillUnmount(), and render().
    1. componentWillReceiveProps() – receives the object parameter {pause} to listen to any paused action
    2. shouldComponentUpdate() – simply returns false if no updates are to be made to the speech output
    3. componentDidMount() – the master function which listens to the action start, adds the event listeners start and end, and calls the speak() function if the prop play is true
    4. componentWillUnmount() – destroys the speech object and ends the speech; a sketch of it follows the componentDidMount() snippet below

Here's a code snippet for the function componentDidMount():

componentDidMount () {
  
    const events = [
      { name: 'start', action: this.props.onStart }
    ]
    // Adding event listeners
    events.forEach(e => {
      this.speech.addEventListener(e.name, e.action)
    })

    this.speech.addEventListener('end', () => {
      this.setState({ started: false })
      this.props.onEnd()
    })

    if (this.props.play) {
      this.speak()
    }
  }
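And a minimal sketch of componentWillUnmount(), assuming it only needs to stop any ongoing speech and release the utterance:

componentWillUnmount () {
    // stop any speech still playing and drop the reference
    window.speechSynthesis.cancel()
    this.speech = null
}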
  4. We then add proper props validation in the following way to our VoicePlayer component.
VoicePlayer.propTypes = {
  play: PropTypes.bool,
  text: PropTypes.string,
  onStart: PropTypes.func,
  onEnd: PropTypes.func
};
  5. The next step is to pass the props from a listener view to the VoicePlayer component. The listener here is the component MessageListItem.js, from where the voice player is initialized.
    1. The first step is to initialise the state.
this.state = {
    play: false,
}

onStart = () => {
    this.setState({ play: true });
}

onEnd = () => {
    this.setState({ play: false });
}
    2. Next, we set play to true when we want to pass the props, along with the text to be spoken, and append the VoicePlayer to the message list items which have voice set as `true`.
 { this.props.message.voice &&
      (<VoicePlayer
             play
             text={voiceOutput}
             onStart={this.onStart}
             onEnd={this.onEnd}
 />)}

Finally, the messages with voice set to true will be heard on the speaker, as they were originally spoken into the microphone. To get access to the full code, go to the repository https://github.com/fossasia/chat.susi.ai or visit our chat channel at gitter.im/fossasia/susi_webchat

Resources

  1. Speak-easy-synthesis repository http://mdn.github.io/web-speech-api/speak-easy-synthesis
  2. Web-speech-api repository https://github.com/mdn/web-speech-api/

Showing Offline and Online Status in SUSI Web Chat

A lot of times while chatting on SUSI Web Chat, one does not receive responses; this could be due either to no Internet connection or to a down server.

For a better user experience, the user needs to be notified whether they are connected to SUSI Chat. If one ever loses the Internet connection, SUSI Web Chat will notify them of the status through a snackbar.

Here are the steps to achieve the above:-

  1. The first step is to initialize the state of the Snackbar and the message.
this.state = {
    snackopen: false,
    snackMessage: 'It seems you are offline!'
}
  2. Next, we add event listeners in the component which bind to the browser's window object. While these listeners can be added to any component in the app, we need them in the MessageSection particularly, to show the snackbar.

The window object listens to any online/offline activity of the browser.

  3. In the file MessageSection.react.js, inside the function componentDidMount():
    1. We initialize the window listeners in our constructor section and bind them to the functions handleOnline and handleOffline, which set the state to open or close the snackbar.
// handles the Offlines event
window.addEventListener('offline', this.handleOffline.bind(this));  

// handles the Online event
window.addEventListener('online', this.handleOnline.bind(this));
  4. We then create the handleOnline and handleOffline functions, which set our state to open and close the snackbar respectively; a sketch follows.
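A minimal sketch of these two handlers, assuming they only toggle the snackopen state initialized in step 1:

handleOnline() {
    // back online: hide the snackbar
    this.setState({ snackopen: false });
}

handleOffline() {
    // connection lost: show the snackbar with the offline message
    this.setState({
        snackopen: true,
        snackMessage: 'It seems you are offline!'
    });
}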
  5. Next, we create a Snackbar component using the material-ui Snackbar component.

The snackbar becomes visible as soon as the snackopen state is set to true, with the message which is passed, whether offline or online.

<Snackbar
       autoHideDuration={4000}
       open={this.state.snackopen}
       message={this.state.snackMessage}
 />

To get access to the full code, go to the repository https://github.com/fossasia/chat.susi.ai


Using react-slick for Populating RSS Feeds in SUSI Chat

To populate the SUSI RSS feed generated while chatting on SUSI Web Chat, I needed a horizontal, swipeable tile slider. For this purpose, I made use of the package react-slick. The information obtained from the SUSI Server to populate the RSS feed was

  • Title
  • Description
  • Link

Hence, to show all of this information as a horizontally scrollable feed, tiles rendered by react-slick solve the purpose. To achieve this, let's follow the steps below.

  1. The first step is to install the react-slick package into our project folder; for that we use
npm install react-slick --save
  2. Next, we import the Slider component from the react-slick package into the file where we want the slider, here MessageListItem.react.js
import Slider from 'react-slick'
  3. Add the Slider with settings as given in the docs. This is totally customisable; for more customisation options go to https://github.com/akiran/react-slick
var settings = {
    speed: 500,
    slidesToShow: 3,
    slidesToScroll: 1,
    swipeToSlide: true,
    swipe: true,
    arrows: false
};

speed – the speed of the slide transition, in milliseconds

slidesToShow – The number of slides to populate in one visible screen

swipeToSlide, swipe – Enable swiping on touch screen devices.

arrows – set to false to disable the navigation arrows. A responsive variant of these settings is sketched below.
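react-slick can also adapt the number of visible tiles to the screen width through its responsive setting; a hedged example (the breakpoints and slide counts are illustrative):

var settings = {
    speed: 500,
    slidesToShow: 3,
    slidesToScroll: 1,
    swipeToSlide: true,
    swipe: true,
    arrows: false,
    // narrower screens show fewer tiles
    responsive: [
        { breakpoint: 768, settings: { slidesToShow: 2 } },
        { breakpoint: 480, settings: { slidesToShow: 1 } }
    ]
};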

  4. The next step is to initialize the Slider component inside the render function and populate it with the tiles. The full code snippet is available at MessageListItem.react.js
<Slider {...settings}> {/* apply the settings created above */}
    {yourListToProps} {/* add the list tiles you want to render */}
</Slider>
  5. Add a little bit of styling; the full code is available in ChatApp.css
.slick-slide {
    margin: 0 10px;
}
.slick-list {
    max-height: 100px;
}
  6. This is the output you would get on your screen.

  • Note – To prevent errors like the following while testing with Jest, you will have to add the following lines to the code.

Error log, which one may encounter while using react-slick –

 matchMedia not present, legacy browsers require a polyfill

  at Object.MediaQueryDispatch (node_modules/enquire.js/dist/enquire.js:226:19)
  at node_modules/enquire.js/dist/enquire.js:291:9
  at i (node_modules/enquire.js/dist/enquire.js:11:20)
  at Object.<anonymous> (node_modules/enquire.js/dist/enquire.js:21:2)
  at Object.<anonymous> (node_modules/react-responsive-mixin/index.js:2:28)
  at Object.<anonymous> (node_modules/react-slick/lib/slider.js:19:29)
  at Object.<anonymous> (node_modules/react-slick/lib/index.js:3:18)
  at Object.<anonymous> (src/components/Testimonials.jsx:3:45)
  at Object.<anonymous> (src/pages/Index.jsx:7:47)
  at Object.<anonymous> (src/App.jsx:8:40)
  at Object.<anonymous> (src/App.test.jsx:3:38)
  at process._tickCallback (internal/process/next_tick.js:103:7)

In package.json, add the following lines-

"peerDependencies": {
      "react": "^0.14.0 || ^15.0.1",
      "react-dom": "^0.14.0 || ^15.0.1"
    },
   "jest": {
      "setupFiles": ["./src/setupTests.js", "./src/node_modules/react-scripts/config/polyfills.js"]
   },

In src/setupTests.js, add the following lines.

window.matchMedia = window.matchMedia || (() => {
    return {
        matches: false,
        addListener: () => {},
        removeListener: () => {},
    };
});

These lines help resolve the errors that may occur while testing with Jest or ESLint.

To have a look at the full project, visit https://github.com/fossasia/chat.susi.ai and feel free to contribute. To test the project visit http://chat.susi.ai
