Password Recovery Link Generation in SUSI with JSON

In this blog, I will discuss how the SUSI server processes a password recovery request using JSON. The post also covers parts of the client-side implementation for better insight.
All the clients function in the same way. When you tap the forgot password button, the client asks whether you want to recover the password for your own custom server or for the standard one. This choice determines the base URL to which the forgot password request is made. If you select the custom server radio button, you are asked to enter the URL of your server; otherwise the standard base URL is used. The API endpoint used is

/aaa/recoverpassword.json

The client makes a POST request with the “forgotemail” parameter containing the email id of the user making the request. The server generates a client identity for that email id if and only if it is registered. If the email id is not found in the database, the server throws an exception with error code 422.

// read the email id sent by the client
String usermail = call.get("forgotemail", null);
ClientCredential credential = new ClientCredential(ClientCredential.Type.passwd_login, usermail);
ClientIdentity identity = new ClientIdentity(ClientIdentity.Type.email, credential.getName());
// reject the request if the email id is not registered
if (!DAO.hasAuthentication(credential)) {
	throw new APIException(422, "email does not exist");
}
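
For example, a recovery request against the standard server could be made like this (the host shown is an assumption; a custom server would substitute its own base URL):

$ curl -X POST "https://api.susi.ai/aaa/recoverpassword.json" \
       --data "forgotemail=example@example.com"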

If the email id is registered against a valid user in the database, the server calls a method that returns a random string of the length passed in as a parameter. This random string acts as a token.
Below is the implementation of the createRandomString(int length) method.

public static String createRandomString(Integer length) {
    char[] chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789".toCharArray();
    StringBuilder sb = new StringBuilder();
    Random random = new Random();
    for (int i = 0; i < length; i++) {
        char c = chars[random.nextInt(chars.length)];
        sb.append(c);
    }
    return sb.toString();
}

This method is defined in the AbstractAPIHandler class. It initialises a character array with the alphabet (both upper and lower case) and the digits 0 to 9, declares a StringBuilder, and appends one randomly chosen character per iteration of the for loop.
Next, the generated token is keyed to the user’s email identity. Since the token has length 30 and each position can hold any of 62 characters, there are 62^30 possible tokens, so the chance of two tokens being the same is vanishingly small (practically zero). The token’s validity is set to 7 days (one week); after that it expires and a new token is needed to reset the password.

String token = createRandomString(30);
ClientCredential tokenkey = new ClientCredential(ClientCredential.Type.resetpass_token, token);
Authentication resetauth = new Authentication(tokenkey, DAO.passwordreset);
resetauth.setIdentity(identity);
resetauth.setExpireTime(7 * 24 * 60 * 60); // validity: one week, in seconds
resetauth.put("one_time", true);           // the token may be used only once

Everything is set by now; the only thing left is to send a mail to the user. For that we call the sendEmail() method of the EmailHandler class, which takes three parameters: the user’s email id, the subject of the email, and the body of the email. The body contains a verification link. To get this link, getVerificationMailContent(String token) is called with the token generated in the previous step.

String verificationLink = DAO.getConfig("host.url", "http://127.0.0.1:9000") + "/apps/resetpass/index.html?token=" + token;

The above statement gets the base URL of the server and appends the path of the reset password app along with the token it received in the method call. The rest of the body is stored as a template in the /conf/templates/reset-mail.txt file. Finally, if no exception was caught during the process, the message “Recovery email sent to your email ID. Please check” together with accepted = true is encoded as JSON and sent to the client. If an exception was encountered, the exception message and accepted = false are sent instead.
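
Based on that description, the success response the client receives looks essentially like this (the exact envelope may carry additional fields):

{
    "accepted": true,
    "message": "Recovery email sent to your email ID. Please check"
}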
Now the client processes the JSON object and notifies the user appropriately.


How to Implement Memory like Servlets in SUSI.AI

In this blog, I’ll discuss how the server fetches previous messages from the log files in the SUSI server. The SUSI AI clients (Android, iOS and web chat) follow a very simple rule: whenever a user logs in to the app, the app makes an HTTP GET call to the server in the background and, in response, the server returns the chat history.

Link to the API endpoint -> http://api.susi.ai/susi/memory.json

But parsing a lot of data takes time that depends on the connection speed; on a poor or slow connection, fetching the whole history would cost the user time. To prevent this, the server by default returns only the last 10 pairs of messages, and it is up to the client how many of those it wants to render. If the client wants a different number, say the last 2 messages, it passes the cognitions parameter in the GET request. The modified endpoint then becomes:

http://api.susi.ai/susi/memory.json?cognitions=2

But how does the server process it? Let us see.
Browse to the susi_server/src/ai/susi/server/api/susi/UserService.java file. This is the main working servlet. If you are new and wondering how servlets for SUSI are implemented, please go through this first: how-to-add-a-new-servletapi-to-susi-server
This is what the serviceImpl() method looks like:

@Override
    public ServiceResponse serviceImpl(Query post, HttpServletResponse response, Authorization user, final JsonObjectWithDefault permissions) throws APIException {

        int cognitionsCount = Math.min(10, post.get("cognitions", 10));
        String client = user.getIdentity().getClient();
        List<SusiCognition> cognitions = DAO.susi.getMemories().getCognitions(client);
        JSONArray coga = new JSONArray();
        for (SusiCognition cognition: cognitions) {
            coga.put(cognition.getJSON());
            if (--cognitionsCount <= 0) break;
        }
        JSONObject json = new JSONObject(true);
        json.put("cognitions", coga);
        return new ServiceResponse(json);
    }

In the first step, we take the minimum of the default value (i.e. 10) and the value of cognitions received as a GET parameter, so a client can never request more than 10 messages. That many messages are encoded into a JSONArray and sent to the client.
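
Going by the servlet above, the response is a JSON object whose "cognitions" field holds the most recent cognition objects. A sketch of its shape (contents abbreviated; a full cognition example appears below):

{
    "cognitions": [
        { "query": "flip a coin", "answers": [ ... ], "answer_date": "..." }
    ]
}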

Whenever the server receives a valid signup request, it creates a directory with the name “email_emailid”. In this directory, a log.txt file is maintained which stores all the queries along with the other details associated with them. For example, if a user has signed up with the email id example@example.com, then the path of this directory will be /data/susi/email_example@example.com. If that user queries “http://api.susi.ai/susi/chat.json?timezoneOffset=-330&q=flip+a+coin”, then the following cognition is produced:

{
	"query": "flip a coin",
	"count": 1,
	"client_id": "",
	"query_date": "2017-06-30T12:22:05.918Z",
	"answers": [{
		"data": [{
			"0": "flip a coin",
			"token_original": "coin",
			"token_canonical": "coin",
			"token_categorized": "coin",
			"timezoneOffset": "-330",
			"answer": "tails",
			"skill": "/susi_skill_data/models/general/entertainment/en/flip_coin.txt",
			"_etherpad_dream": "cricket"
		}],
		"metadata": {
			"count": 1
		},
		"actions": [{
			"type": "answer",
			"expression": "tails"
		}],
		"skills": ["/susi_skill_data/models/general/entertainment/en/flip_coin.txt"]
	}],
	"answer_date": "2017-06-30T12:22:05.928Z",
	"answer_time": 10,
	"language": "en"
}

The server has the user’s identity, so it uses that identity to locate the respective log file and appends the cognition to it.

The remaining steps for retrieving messages are straightforward: get the identity of the current user session, use it to populate the JSONArray named coga, and encode that array in a JSONObject along with other basic details. The clients then render the data received and show the messages in an appropriate way.


Implementation of Responsive SUSI Web Chat Search Bar

When we built the first phase of the SUSI Web Chat application we did not treat responsiveness as a main goal; the main thing we needed was a working application. This changed at a later stage. In this post I am going to describe how we implemented the responsive design and the problems we met while developing it.

When we moved from static HTML/CSS to Material-UI we were able to make most of the parts responsive, for example the App-bar of the application. We made a separate component for the App-bar, and it includes the “searchfield” element. The Material-UI App-bar handles responsiveness to some extent, but we have to handle the responsiveness of its other sub-parts manually.

In “TopBar.react.js” I returned the Material-UI <Toolbar> element like this:

<Toolbar>
    <ToolbarGroup >
    </ToolbarGroup>
    {/* the other components of the top bar go inside this element */}
    <ToolbarGroup lastChild={true}>

    </ToolbarGroup>
</Toolbar>

We have to add the search button inside the second ToolbarGroup, and that is where we added the search component.

This field has the ability to expand and collapse, and it looks good on desktop. On mobile screens, however, it appears differently, so we wanted to hide the SUSI logo on small screens. For that we wrote media queries like this:

@media only screen and (max-width: 860px){
 .app-bar-search{
   background-image: none;
 }
 .search{
   width: 100px !important;
 }
}

Even then the bar was too wide on smaller screens, so we reduced the width of the search bar at different screen sizes using further media queries:

@media only screen and (max-width: 480px){
 .search{
   width: 100px !important;
 }
}
@media only screen and (max-width: 360px){
 .search{
   width: 65px !important;
 }
}

But on even smaller screens it still appeared the same way, and we cannot show the search bar there at all: the screen is simply not wide enough for it. So we wrote another media query to hide every element of the search component on small screens except the close button. When we collapse into search mode, all the search components and the messagecomposer are hidden, so the close button must remain available to take the user back to chat mode.

@media only screen and (max-width: 300px){
 .displayNone{
   display: none !important;
 }
 .displayCloseNone{
     position: relative;
     top:6px  !important;
 }
}

We then apply these two classes in the “SearchField.react.js” file.

<IconButton className='displayNone'>
    <SearchIcon />
</IconButton>
<TextField name='search'
    className='search displayNone'
    placeholder="Search..."/>
<IconButton className='displayNone'>
    <UpIcon />
</IconButton>
<IconButton className='displayNone'>
    <DownIcon />
</IconButton>
<IconButton className='displayCloseNone'>
    <ExitIcon />
</IconButton>

Since we have Codacy integrated into our GitHub repository, we had to change the Codacy rules, because we used “!important” in the media queries to override the inline styles that come from Material-UI.
To change Codacy rules, simply log in to Codacy and select the “code pattern” button from the column on the left side.

It shows the list of rules that Codacy checks, and you can see the “!important” rule under the CSS Lint category.

Just uncheck it. Codacy will then no longer flag “!important” declarations in your source code.

Resources
Configuring Codacy: Use Your Own Conventions: https://blog.codacy.com/configuring-codacy-use-your-own-conventions-9272bee5dcdb
Media queries: https://www.w3schools.com/css/css_rwd_mediaqueries.asp


Hotword Detection on SUSI MagicMirror with Snowboy

The Magic Mirror in the story “Snow White and the Seven Dwarfs” had one cool feature: the Queen could call the Mirror just by saying “Mirror” and then ask it questions. The MagicMirror project helps you develop a mirror quite close to the one in the fable, but how cool would it be to have that same feature? Hotword detection on the SUSI MagicMirror Module helps us achieve exactly that.

The hotword detection on the SUSI MagicMirror Module was accomplished with the help of the Snowboy Hotword Detection Library. Snowboy is a cross-platform hotword detection library; we use the same library for Android, iOS as well as in the MagicMirror Module (Node.js).

Snowboy can be added to a Javascript/Typescript project with Node Package Manager (npm) by:

$ npm install --save snowboy

For detecting the hotword, we need to record audio continuously from the microphone. To accomplish the task of recording, we have another npm package, node-record-lpcm16. It uses the SoX binary to record audio, so first we need to install SoX using:

Linux (Debian based distributions)

$ sudo apt-get install sox libsox-fmt-all

Then, you can install node-record-lpcm16 package using npm using

$ npm install node-record-lpcm16

Then, we need to import it in the needed file using

import * as record from "node-record-lpcm16";

You may then create a new microphone stream using,

const mic = record.start({
   threshold: 0,
   sampleRate: 16000,
   verbose: true,
});

The mic constant here is a NodeJS Readable Stream. So, we can read the incoming data from the Microphone and process it.

We can now process this stream using the Detector class of Snowboy. We declare a child class extending the Snowboy Detector to suit our needs.

import { Detector, Models } from "snowboy";

export class HotwordDetector extends Detector {

    constructor(models: Models) {
        super({
            resource: `${process.env.CWD}/resources/common.res`,
            models: models,
            audioGain: 2.0,
        });
        this.setUp();
    }

    // other methods
}

First, we create a Snowboy Detector by calling the parent constructor with the resource file common.res and a Snowboy model as arguments. A Snowboy model is a file which tells the detector which hotword to listen for. Currently, the module supports the hotword “Susi”, but it can be extended to support other hotwords like “Mirror” too. You can train the SUSI hotword for your own voice and get the latest model file at https://snowboy.kitt.ai/hotword/7915. You may then replace the susi.pmdl file in the resources folder with your own susi.pmdl file for a better experience.

Now, we need to handle the callback methods of the Detector class so we know the detector’s current state and can take action based on it. This is done in the setUp() method.

private setUp(): void {
   this.on("silence", () => {
      // handle silent state
   });

   this.on("sound", () => {
      // handle sound detected state
   });

   this.on("error", (error) => {
      // handle error
   });

   this.on("hotword", (index, hotword) => {
      // hotword detected 
   });
}

If you look into the implementation of Snowboy’s Detector class, it extends NodeJS.WritableStream. So we can pipe our microphone input stream into the Detector, and it handles all the states. This can be done using:

mic.pipe(detector as any);
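
Putting the pieces together, here is a minimal sketch of the wiring; the model file path and the sensitivity value are assumptions rather than something fixed by the module:

import { Models } from "snowboy";
import * as record from "node-record-lpcm16";
import { HotwordDetector } from "./HotwordDetector"; // the class defined above

// Register the hotword model; file location and sensitivity are assumed values
const models = new Models();
models.add({
    file: `${process.env.CWD}/resources/susi.pmdl`,
    sensitivity: "0.5",
    hotwords: "susi",
});

// Create the detector and a 16 kHz microphone stream, then connect them
const detector = new HotwordDetector(models);
const mic = record.start({
    threshold: 0,
    sampleRate: 16000,
    verbose: true,
});
mic.pipe(detector as any);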

So now all the input from the microphone is processed by the Snowboy detector class, and we know when the user has spoken the word “Susi”. We can then start speech recognition and make other changes in the user interface based on the different states.

After this, we can simply say “Susi” followed by our query to ask SUSI on the MagicMirror. A video implementation of the same can be seen here: 


Implementing Speech to Text for Chrome in SUSI Web Chat

SUSI Web Chat now replies to voice inputs. To achieve this, I made use of the Web Speech API. Voice input saves one from the pain of typing, and it is a much needed feature for the Web Chat to maintain parity with the SUSI Android and SUSI iOS clients.

To test the feature out in SUSI Web Chat, click on the microphone icon beside the text area on chat.susi.ai.

Say the message once the dialog appears, and you will see the message being sent to the Chat List rendered in text.

Let’s achieve the same result following the steps below.

  1. First, initialize the VoiceRecognition class with defaults for the speech recognition; for that, we create a file VoiceRecognition.js.
    1. We first initialize the Speech Recognition API with the window object.
    2. We warn the user with a console message if the Speech Recognition API is not available.
    3. If it is available, we call the recognition function using the following line:

this.recognition = this.createRecognition(SpeechRecognition)

// Initialise the Speech Recognition API
const SpeechRecognition = window.SpeechRecognition
      || window.webkitSpeechRecognition
      || window.mozSpeechRecognition
      || window.msSpeechRecognition
      || window.oSpeechRecognition
// Warn the user if not available, otherwise call the createRecognition function
if (SpeechRecognition != null) {
  this.recognition = this.createRecognition(SpeechRecognition)
} else {
  console.warn('The current browser does not support the SpeechRecognition API.');
}
  2. Then we write the createRecognition function.
    1. We set our defaults first: continuous – true, interimResults – false, and language – ‘en-US’.
    2. We pass these options to the recognition object that we created in the above step and finally return the recognition object.
createRecognition = (SpeechRecognition) => {
    const defaults = {
      continuous: true,
      interimResults: false,
      lang: 'en-US'
    }

    const options = Object.assign({}, defaults, this.props)

    let recognition = new SpeechRecognition()

    recognition.continuous = options.continuous
    recognition.interimResults = options.interimResults
    recognition.lang = options.lang

    return recognition
  }
  3. Initialize all the helper functions to be passed as props.
    1. start – starts the recognition and invokes the browser’s mic, after checking that the browser has access to it.
    2. stop – closes the mic and returns the audio captured so far.
    3. abort – stops the SpeechRecognition service.
    4. onspeechend – called when there is inactivity and no voice input; it stops the recognition service.
    5. componentWillReceiveProps – waits for the stop prop and calls stop() once it receives it.
    6. componentWillUnmount – invoked just before the component unmounts, so it aborts the Speech Recognition service.
    7. render – returns null, as there is nothing to render in this component; all the converted text of the captured speech is sent to the parent element.
start = () => {
    this.recognition.start()
  }

  stop = () => {
    this.recognition.stop()
  }

  abort = () => {
    this.recognition.abort()
  }
  onspeechend = () => {
    console.log('no sound detected');
    this.recognition.stop()
  }

  componentWillReceiveProps ({ stop }) {
    if (stop) {
      this.stop()
    }
  }

  componentWillUnmount () {
    this.abort()
  }

  render () {
    return null
  }
  4. Add event listeners for the start and end events inside componentDidMount() to ensure that every action we trigger from the parent element happens only after the component has successfully mounted.
    1. start – bound to the action this.props.onStart, so the parent can react when recognition starts.
    2. end – similarly bound to the action this.props.onEnd.
    3. After setting up the actions, we register bindResult to handle each result we receive.
componentDidMount () {
    const events = [
      { name: 'start', action: this.props.onStart },
      { name: 'end', action: this.props.onEnd },
      { name: 'onspeechend', action: this.props.onspeechend }
    ]

    events.forEach(event => {
      this.recognition.addEventListener(event.name, event.action)
    })

    this.recognition.addEventListener('result', this.bindResult)

    this.start()
  }
  5. Bind the result and send it as props to the parent element.
    1. The bindResult function combines the interim and final results of the recognition and passes the final result to onResult as finalTranscript.
    2. Lastly, we add prop validations to ensure the correct props are being passed to our VoiceRecognition component.
// bindResult function
 bindResult = (event) => {
    let interimTranscript = ''
    let finalTranscript = ''
   // Bind all the results to finalTranscript
    for (let i = event.resultIndex; i < event.results.length; ++i) {
      if (event.results[i].isFinal) {
        finalTranscript += event.results[i][0].transcript
      } else {
        interimTranscript += event.results[i][0].transcript
      }
    }

    this.props.onResult({ interimTranscript, finalTranscript })
  }
// Add Prop Validations
VoiceRecognition.propTypes = {
  onStart: PropTypes.func,
  onEnd : PropTypes.func,
  onResult: PropTypes.func,
  onspeechend: PropTypes.func,
  continuous: PropTypes.bool,
  lang: PropTypes.string,
  stop: PropTypes.bool
};
// Finally export the VoiceRecognition Component
export default VoiceRecognition
  6. Lastly, call the VoiceRecognition component and pass the props to it from the MessageComposer section in the following way.
    1. Initialize the default state in the constructor inside this.state:
this.state = {
      text: '',
      start: false, // Starting the VoiceRecognition
      stop: false, // Stop the VoiceRecognition
      open: false, // Maintain the modal state
      result:'' // Maintain the result state
    };
    2. The onStart function fires the VoiceRecognition component only when the mic button is pressed.
    3. onEnd ends the Speech Recognition service.
    4. onResult sends the message through the Actions.createMessage() function:
onResult = ({interimTranscript,finalTranscript }) => {
    let result = interimTranscript;
    let voiceResponse = false;
    this.setState({result:result});
    if(finalTranscript) {
      result = finalTranscript;
      this.setState({
      start: false,
      result:result,
      stop: false,
      open:false,
      animate:false
      });
      if(this.props.speechOutputAlways || this.props.speechOutput){
        voiceResponse = true;
      }
      Actions.createMessage(result, this.props.threadID, voiceResponse);
      setTimeout(()=>this.setState({result: ''}),400);
      this.Button = <Mic />
    }
  }
  7. Fire the component based on the value of the start variable and pass the requisite props as given below.
// Only when start is 'true', call the VoiceRecognition component

    {this.state.start && (
          <VoiceRecognition
            onStart={this.onStart}
            onEnd={this.onEnd}
            onResult={this.onResult}
            continuous={true}
            lang="en-US"
            stop={this.state.stop}
          />
        )}


  8. Update the text in the “Speak Now” dialog to show the user the speech-to-text conversion.
    1. Update the text in the modal when it is converted from speech to text, i.e. when we set the state of the result variable:
{this.state.result !=='' ? this.state.result :
          'Speak Now...'}

To get access to the full code, go to the repository https://github.com/fossasia/chat.susi.ai


Implementing Text to Speech on SUSI Web Chat

SUSI Web Chat now gives voice replies while chatting, similar to the SUSI Android and SUSI iOS clients. To test the voice output on Chrome:

  1. Visit chat.susi.ai
  2. Click on the Mic input button.
  3. Say something using the Mic when the Speak Now View appears

The simplest way to add text-to-speech to your website is by using the speech synthesis interface of the Web Speech API, currently available in the Chrome browser. The following steps achieve it in ReactJS.

  1. Initialize state defaults in a component named VoicePlayer.
const defaults = {
      text: '',
      volume: 1,
      rate: 1,
      pitch: 1,
      lang: 'en-US'
    } 

There are two pieces of state which need to be maintained throughout the component, and they are driven by the props passed in. Thus our state is simply:

this.state = {
      started: false,
      playing: false
    }
  2. Our next step is to make use of the functions of the Web Speech API to carry out the relevant processes:
    1. speak() – window.speechSynthesis.speak(this.speech) – calls the speak method of the Speech API.
    2. cancel() – window.speechSynthesis.cancel() – calls the cancel method of the Speech API.
  3. We then use the component lifecycle functions to assign event listeners and listen to any changes occurring in the background. For this purpose we make use of componentWillReceiveProps(), shouldComponentUpdate(), componentDidMount(), componentWillUnmount(), and render().
    1. componentWillReceiveProps() – receives the object parameter {pause} to listen for any pause action.
    2. shouldComponentUpdate() – simply returns false if no updates are to be made to the speech output.
    3. componentDidMount() – the master function which listens for the start action, adds the start and end event listeners, and calls the speak() function if the play prop is true.
    4. componentWillUnmount() – destroys the speech object and ends the speech.

Here’s a code snippet for Function componentDidMount()

componentDidMount () {
  
    const events = [
      { name: 'start', action: this.props.onStart }
    ]
    // Adding event listeners
    events.forEach(e => {
      this.speech.addEventListener(e.name, e.action)
    })

    this.speech.addEventListener('end', () => {
      this.setState({ started: false })
      this.props.onEnd()
    })

    if (this.props.play) {
      this.speak()
    }
  }
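
One thing the snippets above assume is the this.speech object itself. A minimal sketch of constructing it from the defaults in step 1, using the standard SpeechSynthesisUtterance constructor (createSpeech is a hypothetical helper name, not from the original code), could look like this:

createSpeech = (text) => {
    // SpeechSynthesisUtterance is the standard Web Speech API utterance class
    const speech = new SpeechSynthesisUtterance()
    speech.text = text
    speech.volume = 1   // range 0 to 1
    speech.rate = 1     // range 0.1 to 10
    speech.pitch = 1    // range 0 to 2
    speech.lang = 'en-US'
    return speech
}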
  4. We then add proper props validation in the following way to our VoicePlayer component:
VoicePlayer.propTypes = {
  play: PropTypes.bool,
  text: PropTypes.string,
  onStart: PropTypes.func,
  onEnd: PropTypes.func
};
  5. The next step is to pass the props from a listener view to the VoicePlayer component. The listener here is the component MessageListItem.js, from where the voice player is initialized.
    1. The first step is to initialise the state:
this.state = {
      play: false,
    }
  onStart = () => {
    this.setState({ play: true });
  }
  onEnd = () => {
    this.setState({ play: false });
  }
    2. Next, we set play to true when we want to pass the props along with the text to be spoken, and append the player to the message list items which have voice set as `true`:
 { this.props.message.voice &&
      (<VoicePlayer
             play
             text={voiceOutput}
             onStart={this.onStart}
             onEnd={this.onEnd}
 />)}

Finally, our messages with voice set to true will be heard on the speaker just as they were spoken into the microphone. To get access to the full code, go to the repository https://github.com/fossasia/chat.susi.ai or join our chat channel at gitter.im/fossasia/susi_webchat.

Resources

  1. Speak-easy-synthesis repository http://mdn.github.io/web-speech-api/speak-easy-synthesis
  2. Web-speech-api repository https://github.com/mdn/web-speech-api/

Implementing Wallpapers in React JS SUSI Web Chat Application

The different SUSI AI clients need to match in their feature set. One feature that was missing in the React JS SUSI Web Chat application was the ability for users to change the application wallpaper or background. This is how we implemented it in SUSI Web Chat.
Firstly, we added a text field after the circle picker that changes the color of the application body, to give users a place to enter the wallpaper image URL. We added that text field as follows.

const components = componentsList.map((component) => {
    return <div key={component.id} className='circleChoose'>
        Change color of {component.name}:
        <CirclePicker
            color={component} width={'100%'}
            onChangeComplete={ this.handleChangeComplete.bind(this,
            component.component) }
            onChange={this.handleColorChange.bind(this, component.id)}>
        </CirclePicker>

CirclePicker shows the circular color picker to choose the colors for each component of the application.

        
         name="backgroundImg"
         style={{display:component.component==='body'?'block':'none'}}
         onChange={
           (e,value)=>
           this.handleChangeBackgroundImage(value) }
         value={this.state.bodyBackgroundImage}
          floatingLabelText="Body Background Image URL" />
       

})

In the ‘TextField’ I check the condition below to decide whether to display the text field, so it only appears after the ‘body’ color picker.

style={{display:component.component==='body'?'block':'none'}}

To apply changes as soon as the user enters the image URL, we take the value of the ‘TextField’ and pass it into the ‘handleChangeBackgroundImage()’ function as ‘value’ on change, like this:

         onChange={
           (e,value)=>
           this.handleChangeBackgroundImage(value) }

In the ‘handleChangeBackgroundImage()’ function we update the state of the application and the background of the application like this:

  handleChangeBackgroundImage(backImage){
   document.body.style.setProperty('background-image', 'url('+ backImage+')');
   document.body.style.setProperty('background-repeat', 'no-repeat');
   document.body.style.setProperty('background-size', 'cover');
   this.setState({bodyBackgroundImage:backImage});
 }

Here ‘document.body.style.setProperty’ changes the style of the application’s <body> tag. This is how wallpapers are changed in the SUSI Web Chat application.

Resources:

React Refs: https://facebook.github.io/react/docs/refs-and-the-dom.html

Refer Value from text field: https://stackoverflow.com/questions/43960183/material-ui-component-reference-does-not-work


Showing Offline and Online Status in SUSI Web Chat

A lot of times while chatting on SUSI Web Chat, one does not receive responses; this could be due either to no Internet connection or to a down server.

For a better user experience, users need to be notified whether they are connected to SUSI Chat. If you ever lose your Internet connection, SUSI Web Chat will notify you of the status through a snackbar.

Here are the steps to achieve this:

  1. The first step is to initialize the state of the Snackbar and the message.
this.state = {
    snackopen: false,
    snackMessage: 'It seems you are offline!'
}
  2. Next, we add event listeners in the component, which bind to the browser’s window object. While these listeners could be added to any component in the app, we need them in the MessageSection in particular to show the snackbar.

The window object listens for any online/offline activity of the browser.

  3. In the file MessageSection.react.js, inside the function componentDidMount(), we initialize the window listeners and bind them to the handleOffline and handleOnline functions, which set the state that opens the snackbar.
// handles the Offlines event
window.addEventListener('offline', this.handleOffline.bind(this));  

// handles the Online event
window.addEventListener('online', this.handleOnline.bind(this));
  4. We then create the handleOnline and handleOffline functions, which set our state to open and close the snackbar respectively.
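A minimal sketch of these handlers, fitting the state from step 1 (the online message wording is an assumption):

// Open the snackbar with the offline message
handleOffline() {
    this.setState({
        snackopen: true,
        snackMessage: 'It seems you are offline!'
    });
}

// Reopen the snackbar to tell the user the connection is back
handleOnline() {
    this.setState({
        snackopen: true,
        snackMessage: 'Welcome back!' // assumed wording
    });
}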
  5. Next, we create a Snackbar component using the Material-UI Snackbar component.

The Snackbar is visible as soon as the snackopen state is set to true, with the message which is passed, whether offline or online.

<Snackbar
       autoHideDuration={4000}
       open={this.state.snackopen}
       message={this.state.snackMessage}
 />

To get access to the full code, go to the repository https://github.com/fossasia/chat.susi.ai


Hotword Detection in SUSI Android App using Snowboy

Hotword detection is as cool as it sounds. What exactly is it? Hotword detection is a feature in which a device gets activated when it hears a specific word or phrase. You must have said “OK Google”, “Hey Cortana”, “Siri” or “Alexa” at least once in your lifetime. These are all hotwords which trigger the specific action attached to them, and that action can be anything.

Implementing hotword detection from scratch in SUSI Android is not an easy task: you have to define a language model, train the model, and go through various other processes before implementing it in Android. In short, it is not feasible to do all that inside the code of our Android app. There are many open source projects on hotword detection and speech recognition that have already done what we need, and we can make use of them. One such project is Snowboy. According to the Snowboy GitHub repo, “Snowboy is a DNN based hotword and wake word detection toolkit.”

Img src: https://snowboy.kitt.ai/

In the SUSI Android app, we have used Snowboy for hotword detection with the hotword “susi” (pronounced ‘suzi’). In this blog, I will tell you how hotword detection is implemented in the SUSI Android app, so you can follow the steps to implement it in your own application; and if you want to contribute to the SUSI Android app, it may help you know the codebase a little better.

Pre Processing before Implementation

1. Generating Hotword Model

The implementation of hotword detection begins with creating a hotword model on the Snowboy website https://snowboy.kitt.ai/dashboard. Just log in, search for “susi”, train it by saying “susi” thrice, and download the susi.pmdl file.

There are two types of models:

  1. .pmdl : Personal Model
  2. .umdl : Universal Model

The personal model is trained specifically for your voice and is instantly available for download once you train the hotword. The universal model, on the other hand, is trained on recordings from at least 500 people and only becomes available once that training is complete. So we are going to use the personal model for now, since training of the universal model is not yet completed.

Img src: https://snowboy.kitt.ai/

2. Adding some predefined native binary files in your app.

Once you have downloaded the susi.pmdl file, you need to copy some already written native binary files into your app.

    1. In your assets folder, make a directory named snowboy and add your downloaded susi.pmdl file along with this file in it.
    2. Copy this folder and add it in your /app/src/main/java folder as it is. These are autogenerated SWIG files, so don’t change them unless you know what you are doing.
    3. Also, create a new folder in your /app/src/main folder called jniLibs and add these files to it.

Implementation in SUSI Android App

Check out the implementation of Hotword detection in SUSI Android App here

You now have everything ready; you just need some code in your MainActivity. Initiate hotword detection in the app using the code snippet below: call the initHotword() method from your onCreate method in MainActivity. Make sure the RECORD_AUDIO and WRITE_EXTERNAL_STORAGE permissions have been granted.

private void initHotword() {
   if (ActivityCompat.checkSelfPermission(this,
           Manifest.permission.WRITE_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED &&
       ActivityCompat.checkSelfPermission(this,
           Manifest.permission.RECORD_AUDIO) == PackageManager.PERMISSION_GRANTED) {
       AppResCopy.copyResFromAssetsToSD(this);

       recordingThread = new RecordingThread(new Handler() {
           @Override
           public void handleMessage(Message msg) {
               MsgEnum message = MsgEnum.getMsgEnum(msg.what);
               switch(message) {
                   case MSG_ACTIVE:
                       //HOTWORD DETECTED. NOW DO WHATEVER YOU WANT TO DO HERE
                       break;
                   case MSG_INFO:
                       break;
                   case MSG_VAD_SPEECH:
                       break;
                   case MSG_VAD_NOSPEECH:
                       break;
                   case MSG_ERROR:
                       break;
                   default:
                       super.handleMessage(msg);
                       break;
               }
           }
       }, new AudioDataSaver());
   }
}

In the above code, under the MSG_ACTIVE case, add the code which should be executed once the hotword is detected. Hotword detection is now initiated, and you just have to start and stop recording whenever you want.

To start recording : (can be written in onStart() method)

if(recordingThread !=null && !isDetectionOn) {
    recordingThread.startRecording();
    isDetectionOn = true;
}

To stop recording : (can be written in onStop() method)

if(recordingThread !=null && isDetectionOn){
   recordingThread.stopRecording();
   isDetectionOn = false;
}
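
Wiring those two snippets into the activity lifecycle, as suggested above, could look like this sketch:

@Override
protected void onStart() {
    super.onStart();
    // Start listening for the hotword when the activity becomes visible
    if (recordingThread != null && !isDetectionOn) {
        recordingThread.startRecording();
        isDetectionOn = true;
    }
}

@Override
protected void onStop() {
    super.onStop();
    // Stop listening when the activity is no longer visible
    if (recordingThread != null && isDetectionOn) {
        recordingThread.stopRecording();
        isDetectionOn = false;
    }
}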

So, this is the simple process of adding a hotword detection feature to your app.

Though this is great and will work wonderfully for you, it has a little problem, and a solution, which are discussed below.

Problems for now and future steps

As mentioned above, the susi.pmdl model is a personal model and is only trained for your voice. So it will work wonderfully for you and for people with a voice similar to yours, but not for everyone. There are two ways to solve this problem:

  1. Replace susi.pmdl with susi.umdl: This is the easy way to fix the problem, but to get the susi.umdl file at least 500 people have to train the susi hotword on the Snowboy website. Once that target is achieved, you can download the susi.umdl file, replace susi.pmdl with it, and you are good to go.
  2. Train and obtain a susi.pmdl file for each user: Right now you trained and shipped your own personal model for everyone, but there is a way to use each user’s own personal model in the app: the hotword is trained by the user himself, inside the app. Snowboy provides a training API http://docs.kitt.ai/snowboy/#api-v1-train for training the hotword and obtaining a personal model. Quoting the Snowboy docs: “You can define truly customized hotword for each of your end customer. Just ask them to say the hotword 3 times and a model will be trained on the fly!”

Resources

  1. Official docs of Snowboy hotword detection http://docs.kitt.ai/snowboy/#introduction
  2. Code for snowboy hotword detection. It contains readme files which give very nice description about the project. https://github.com/kitt-ai/snowboy
  3. This readme file of snowboy which guides on how to set up snowboy in an android app https://github.com/Kitt-AI/snowboy/blob/master/examples/Android/README.md