Hotword Detection on SUSI MagicMirror with Snowboy

The Magic Mirror in the story “Snow White and the Seven Dwarfs” had one cool feature: the Queen could call the Mirror just by saying “Mirror” and then ask it questions. The MagicMirror project helps you build a mirror quite close to the one in the fable, so how cool would it be to have the same feature? Hotword detection on the SUSI MagicMirror Module helps us achieve exactly that.

Hotword detection on the SUSI MagicMirror Module was accomplished with the help of the Snowboy Hotword Detection Library. Snowboy is a cross-platform hotword detection library; we use the same library for Android and iOS as well as in the MagicMirror Module (Node.js).

Snowboy can be added to a JavaScript/TypeScript project with the Node Package Manager (npm):

$ npm install --save snowboy

To detect the hotword, we need to record audio continuously from the microphone. For the recording task we use another npm package, node-record-lpcm16, which uses the SoX binary to record audio. First we need to install SoX:

Linux (Debian based distributions)

$ sudo apt-get install sox libsox-fmt-all

Then, you can install the node-record-lpcm16 package using npm:

$ npm install node-record-lpcm16

Then, we need to import it in the file where it is needed:

import * as record from "node-record-lpcm16";

You may then create a new microphone stream using,

const mic = record.start({
   threshold: 0,
   sampleRate: 16000,
   verbose: true,
});

The mic constant here is a Node.js Readable stream, so we can read the incoming data from the microphone and process it.

We can now process this stream using the Detector class of Snowboy. We declare a child class extending the Snowboy Detector to suit our needs.

import { Detector, Models } from "snowboy";

export class HotwordDetector extends Detector {
  
   constructor(models: Models) {
       super({
           resource: `${process.env.CWD}/resources/common.res`,
           models: models,
           audioGain: 2.0,
       });
       this.setUp();
   }

   // other methods
}

First, we create a Snowboy Detector by calling the parent constructor with the resource file common.res and a Snowboy model as arguments. A Snowboy model is a file which tells the detector which hotword to listen for. Currently the module supports the hotword “Susi”, but it can be extended to support other hotwords like “Mirror” too. You can train the hotword for SUSI with your own voice and get the latest model file at https://snowboy.kitt.ai/hotword/7915. You may then replace the susi.pmdl file in the resources folder with your own susi.pmdl file for a better experience.
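The models argument above is a Snowboy Models instance. A minimal sketch of how it can be built (the model path and sensitivity value are assumptions for illustration, not the module's exact configuration):

import { Models } from "snowboy";

const models = new Models();
models.add({
   file: `${process.env.CWD}/resources/susi.pmdl`, // assumed location of the trained hotword model
   sensitivity: "0.5",                             // illustrative sensitivity value
   hotwords: "susi",
});

const detector = new HotwordDetector(models);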

Now, we need to delegate the callback methods of Detector class to know about the current state of detector and take an action on its basis. This is done in the setUp() method.

private setUp(): void {
   this.on("silence", () => {
      // handle silent state
   });

   this.on("sound", () => {
      // handle sound detected state
   });

   this.on("error", (error) => {
      // handle error
   });

   this.on("hotword", (index, hotword) => {
      // hotword detected 
   });
}

If you look into the implementation of Snowboy's Detector class, it extends NodeJS.WritableStream. So, we can pipe our microphone input stream to the Detector class and it handles all the states. This can be done using

mic.pipe(detector as any);

Now all the input from the microphone will be processed by the Snowboy Detector class and we know when the user has spoken the word “SUSI”. We can then start speech recognition and make other changes in the user interface based on the different states.

After this, we can simply say “Susi” followed by our query to ask SUSI on the MagicMirror. A video implementation of the same can be seen here: 

Resources:


Using API Blueprint with Yaydoc

As part of extending the capability of Yaydoc to document APIs, this week we integrated API Blueprint with Yaydoc. Now we can parse apib files and add the parsed content to the generated documentation. From the official Homepage of API Blueprint,

API Blueprint is simple and accessible to everybody involved in the API lifecycle. Its syntax is concise yet expressive. With API Blueprint you can quickly design and prototype APIs to be created or document and test already deployed mission-critical APIs. It is a documentation-oriented web API description language. The API Blueprint is essentially a set of semantic assumptions laid on top of the Markdown syntax used to describe a web API.

To integrate API Blueprint with Yaydoc, we used a Sphinx extension named sphinxcontrib-apiblueprint. This extension can directly translate text in API Blueprint format into docutils nodes. The advantage of this approach compared to using tools like aglio is that the generated HTML fits in nicely with the existing theme, though we may in future provide the ability to generate HTML using tools like aglio if the user prefers. Adding an extension to Sphinx is very easy. In the conf.py template, we added the extension to the already enabled list of extensions.

extensions += ['sphinxcontrib.apiblueprint']

The above extension provides a directive, apiblueprint, which can then be used to include apib files. The directive is very similar to the built-in include directive; the difference is just that it should only be used to include files in API Blueprint format. You can see an example of how to use this directive below.

.. apiblueprint:: <path to apib file>

Although this is enough for projects which use the reST markup format, it cannot be used with projects using Markdown as the primary markup format, since Markdown doesn't support the concept of directives. To solve this, we used the eval_rst block provided by recommonmark in Yaydoc. It allows users to embed valid reST within Markdown, and recommonmark will properly parse the embedded text as reST. A user can now use this to include directives within Markdown. You can see an example below.

```eval_rst
.. apiblueprint:: <path to apib file>
```

In order to implement this, we used the AutoStructify class provided by recommonmark. Here's a snippet from our conf.py template. Note that this has far-reaching effects: users can now use it to add constructs like toctree in Markdown, which wasn't possible before.

from recommonmark.transform import AutoStructify

def setup(app):
    app.add_config_value('recommonmark_config', {
        'enable_eval_rst': True,
    }, True)
    app.add_transform(AutoStructify)
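
For example, a Markdown page can now embed a toctree through the same mechanism (the document names below are only illustrative):

```eval_rst
.. toctree::
   :maxdepth: 2

   installation
   usage
```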

Let’s see all of this in action. Here’s a preview of a generated documentation with API Blueprint using Yaydoc.

Resources


Implementing Speech to Text for Chrome in SUSI Web Chat

SUSI Web Chat now responds to voice input. To achieve this, I made use of the Web Speech API. Voice input saves the user from the pain of typing, and it's a much-needed feature for the Web Chat to maintain parity with the SUSI Android and SUSI iOS clients.

To test the feature out in SUSI Web Chat, click on the microphone icon beside the text area on chat.susi.ai.

Say the message once the dialog appears, and you will see the message being sent to the Chat List rendered in text.

Let’s achieve the same result following the steps below.

  1. First, initialize the class Voice Recognition with defaults for the Speech Recognition, for that we create a file VoiceRecognition.js
  1. We first initialize the Speech Recognition API with the window object.
  2. We warn the User with a console message if there is no Speech Recognition API available.
  3. If it’s available call the recognition function using the following line

this.recognition = this.createRecognition(SpeechRecognition)

// Initialise the Speech recognition API
const SpeechRecognition = window.SpeechRecognition
      || window.webkitSpeechRecognition
      || window.mozSpeechRecognition
      || window.msSpeechRecognition
      || window.oSpeechRecognition
    // Warn the user if not available otherwise call the createRecognition function
    if (SpeechRecognition != null) {
      this.recognition = this.createRecognition(SpeechRecognition)
    } else {
      console.warn('The current browser does not support the SpeechRecognition API.');
    }
  }
  1. Then we write the createRecognition function
  1. We set our defaults first as “continuous  – true, interimResults – false, and language – ‘en-US’ ”
  2. We pass these options to the recognition object that we created in the above step and finally return the recognition object.
createRecognition = (SpeechRecognition) => {
    const defaults = {
      continuous: true,
      interimResults: false,
      lang: 'en-US'
    }

    const options = Object.assign({}, defaults, this.props)

    let recognition = new SpeechRecognition()

    recognition.continuous = options.continuous
    recognition.interimResults = options.interimResults
    recognition.lang = options.lang

    return recognition
  }
  1. Initialize all the helper functions to be passed as props.
  1. start – This method starts the recognition and invokes the Mic of the browser. It also checks if the browser has the access to the user’s Mic.
  2. stop – Stop method closes the Mic and returns the audio captured so far.
  3. abort – Abort method stops the SpeechRecognition service.
  4. onspeechend – This method is called if there is any inactivity and there is no voice input. Hence, stops the recognition service.
  5. componentWillReceiveProps – This method waits for the stop method and calls it when it has received the stop object.
  6. componentWillUnmount – This method is invoked just before the component unmounts, so its job is to abort the Speech Recognition service.
  7. render –  We return null as there is nothing to return in this component and all the converted text of the captured Speech will be sent to the parent element.
start = () => {
    this.recognition.start()
  }

  stop = () => {
    this.recognition.stop()
  }

  abort = () => {
    this.recognition.abort()
  }
  onspeechend = () => {
    console.log('no sound detected');
    this.recognition.stop()
  }

  componentWillReceiveProps ({ stop }) {
    if (stop) {
      this.stop()
    }
  }

  componentWillUnmount () {
    this.abort()
  }

  render () {
    return null
  }
  1. Add event listeners to start and stop functions inside componentDidMount() to ensure every action that we want to perform from the parent element is after the component has successfully mounted itself.
  1. start – The start method is set with an action start so that we can pass the required action name to the VoiceRecognition component that we created
  2. end – The end method similarly is set with an action end
  3. After setting up the actions we finally call the bindResult function with the result that we received.
componentDidMount () {
    const events = [
      { name: 'start', action: this.props.onStart },
      { name: 'end', action: this.props.onEnd },
      { name: 'onspeechend', action: this.props.onspeechend }
    ]

    events.forEach(event => {
      this.recognition.addEventListener(event.name, event.action)
    })

    this.recognition.addEventListener('result', this.bindResult)

    this.start()
  }
  1. Bind the result and send it as the props to the parent element.
  2. Combine all interim results of the recognition and send it to the onResult function as finalTranscript
  3. The function bindResult does all the binding of the interim results that we received and outputs the final result as finalTranscript.
  4. Lastly, we add the prop validations to ensure the correct props are being passed to our VoiceRecognition component.
// bindResult function
 bindResult = (event) => {
    let interimTranscript = ''
    let finalTranscript = ''
   // Bind all the results to finalTranscript
    for (let i = event.resultIndex; i < event.results.length; ++i) {
      if (event.results[i].isFinal) {
        finalTranscript += event.results[i][0].transcript
      } else {
        interimTranscript += event.results[i][0].transcript
      }
    }

    this.props.onResult({ interimTranscript, finalTranscript })
  }
// Add Prop Validations
VoiceRecognition.propTypes = {
  onStart: PropTypes.func,
  onEnd: PropTypes.func,
  onResult: PropTypes.func,
  onspeechend: PropTypes.func,
  continuous: PropTypes.bool,
  lang: PropTypes.string,
  stop: PropTypes.bool
};
// Finally export the VoiceRecognition Component
export default VoiceRecognition
  1. Lastly, call the VoiceRecognition component and pass the props to it from the MessageComposer section in the following way.
  1. Initialize the default state in the constructor inside this.state
this.state = {
      text: '',
      start: false, // Starting the VoiceRecognition
      stop: false, // Stop the VoiceRecognition
      open: false, // Maintain the modal state
      result:'' // Maintain the result state
    };
  1. onStart function to call the VoiceRecognition component only when the Mic Button is pressed.
  2. onEnd to end the Speech Recognition service.
  3. onResult to send the message through the Actions.createMessage() function
onResult = ({interimTranscript,finalTranscript }) => {
    let result = interimTranscript;
    let voiceResponse = false;
    this.setState({result:result});
    if(finalTranscript) {
      result = finalTranscript;
      this.setState({
      start: false,
      result:result,
      stop: false,
      open:false,
      animate:false
      });
      if(this.props.speechOutputAlways || this.props.speechOutput){
        voiceResponse = true;
      }
      Actions.createMessage(result, this.props.threadID, voiceResponse);
      setTimeout(()=>this.setState({result: ''}),400);
      this.Button = <Mic />
    }
  }
  1. Fire the component based on the value of start variable and pass the requisite props as given below in the code.
// Only when the start is ‘true’ call the VoiceRecognition component

    {this.state.start && (
          <VoiceRecognition
            onStart={this.onStart}
            onEnd={this.onEnd}
            onResult={this.onResult}
            continuous={true}
            lang="en-US"
            stop={this.state.stop}
          />
        )}

 

  1. Update the text in the “Speak Now” Dialog to show the user the Speech to Text conversion
  1. Update the text in the Modal when it is converted from Speech to Text, i.e. when we set the state of the result variable.
{this.state.result !=='' ? this.state.result :
          'Speak Now...'}

To get access to the full code, go to the repository https://github.com/fossasia/chat.susi.ai

Resources


Implementing Text to Speech on SUSI Web Chat

SUSI Web Chat now gives voice replies while chatting with it, similar to the SUSI Android and SUSI iOS clients. To test the voice output on Chrome:

  1. Visit chat.susi.ai
  2. Click on the Mic input button.
  3. Say something using the Mic when the Speak Now View appears

The simplest way to add text-to-speech to your website is by using the speech synthesis part of the Web Speech API, currently available in the Chrome browser. The following steps help achieve it in ReactJS.

  1. Initialize state defaults in a component named, VoicePlayer.
const defaults = {
      text: '',
      volume: 1,
      rate: 1,
      pitch: 1,
      lang: 'en-US'
    } 
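
These defaults are eventually turned into a SpeechSynthesisUtterance, the object the browser actually speaks. A minimal sketch of how such an utterance could be created (the helper name here is hypothetical, not the component's actual method):

createSpeech = (text, options) => {
    // Build the utterance consumed by window.speechSynthesis.speak()
    const speech = new SpeechSynthesisUtterance(text)
    speech.volume = options.volume
    speech.rate = options.rate
    speech.pitch = options.pitch
    speech.lang = options.lang
    return speech
  }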

There are two pieces of state which need to be maintained throughout the component: whether speech has started and whether it is currently playing. Thus our state is simply

this.state = {
      started: false,
      playing: false
    }
  1. Our next step is to make use of functions of the Web Speech API to carry out the relevant processes.
  1. speak() – window.speechSynthesis.speak(this.speech) – Calls the speak method of the Speech API
  2. cancel() – window.speechSynthesis.cancel() – Calls the cancel method of the Speech API
  1. We then use our component helper functions to assign eventListeners and listen to any changes occurring in the background. For this purpose we make use of the functions componentWillReceiveProps(), shouldComponentUpdate(), componentDidMount(), componentWillUnmount(), and render().
  1. componentWillReceiveProps() – receives the object parameter {pause} to listen to any paused action
  2. shouldComponentUpdate() simply returns false if no updates are to be made in the speech outputs.
  3. componentDidMount() is the master function which listens to the action start, adds the eventListener start and end, and calls the speak() function if the prop play is true.
  4. componentWillUnmount() destroys the speech object and ends the speech.

Here’s a code snippet for Function componentDidMount()

componentDidMount () {
  
    const events = [
      { name: 'start', action: this.props.onStart }
    ]
    // Adding event listeners
    events.forEach(e => {
      this.speech.addEventListener(e.name, e.action)
    })

    this.speech.addEventListener('end', () => {
      this.setState({ started: false })
      this.props.onEnd()
    })

    if (this.props.play) {
      this.speak()
    }
  }
  1. We then add proper props validation in the following way to our VoicePlayer component.
VoicePlayer.propTypes = {
  play: PropTypes.bool,
  text: PropTypes.string,
  onStart: PropTypes.func,
  onEnd: PropTypes.func
};
  1. The next step is to pass the props from a listener view to the VoicePlayer component. Hence the listener here is the component MessageListItem.js from where the voice player is initialized.
  1. First step is to initialise the state.
this.state = {
      play: false,
    }
  onStart = () => {
    this.setState({ play: true });
  }
  onEnd = () => {
    this.setState({ play: false });
  }
  1. Next, we set play to true when we want to pass the props, along with the text which is to be spoken, and append the VoicePlayer to the message list items which have voice set as `true`
 { this.props.message.voice &&
      (<VoicePlayer
             play
             text={voiceOutput}
             onStart={this.onStart}
             onEnd={this.onEnd}
 />)}

Finally, messages with voice set to true will be heard on the speaker, just as they were spoken into the microphone. To get access to the full code, go to the repository https://github.com/fossasia/chat.susi.ai or reach us on our chat channel at gitter.im/fossasia/susi_webchat

Resources

  1. Speak-easy-synthesis repository http://mdn.github.io/web-speech-api/speak-easy-synthesis
  2. Web-speech-api repository https://github.com/mdn/web-speech-api/

Exporting Functions in PSLab Firmware

Computer programs consist of hundreds of lines of code. Smaller programs contain a few hundred, while larger programs like the PSLab firmware span over 2000 lines. The major drawback of code with that many lines is the difficulty of debugging it and the hardship a new developer faces in understanding it. As a solution, modularization techniques can be used to break the long code down into small sets of files. In C programming this can be achieved using different .c files and relevant .h (header) files.

The task is tricky, as the main code contains a lot of interrelated functions and variables. Many errors were encountered when the large code in pslab-firmware was modularized into application-specific files. In a scenario like this, global variables or static variables are of much help.

Global Variables in C

A global variable in C is a variable whose scope spans the entire program. Updates to the variable are reflected everywhere it is used.

extern TYPE VARIABLE;    /* declaration, usually in a header */
TYPE VARIABLE = VALUE;   /* definition, in exactly one .c file */

Static Variables in C

This type of variable preserves its content regardless of the scope it is in.

static TYPE VARIABLE = VALUE;

Both kinds of variables preserve their values, but the memory usage differs depending on the implementation. This can be explained using a simple example.

Suppose a variable is required in different C files and it is defined in one of the header files as an ordinary variable. The header file is then included in several other C files. When the program is compiled, the compiler creates several copies of the same variable, which throws an error because the variable is already defined. In the PSLab firmware, a variable or a method from one library has a high probability of being used in one or many other libraries.

This issue can be addressed in two different ways: by using static variables or by using global variables. Depending on the selection, the implementation differs.

The first implementation uses static variables. At compile time, this type of variable creates a separate copy of itself in each C file. Because the C language treats every C file as a separate translation unit, if we include a header file with static variables in two C files, it creates two copies of the same variable. This avoids the redeclaration error messages. Even though this fixes the issue in the firmware, the memory allocation is wasteful. This was the first implementation used in the PSLab firmware; memory usage was very high because duplicate variables took more memory than they should. This led to the second implementation.

first_header.h

#ifndef FIRST_HEADER_H 
#define FIRST_HEADER_H 

static int var_1;

#endif /* FIRST_HEADER_H */

second_header.h

#ifndef SECOND_HEADER_H
#define SECOND_HEADER_H

static int var_1;

#endif /* SECOND_HEADER_H */

first_header.c

#include <stdio.h>
#include "first_header.h"
#include "second_header.h"

int main() {
    var_1 = 10;
    printf("%d", var_1);
}

The next implementation uses global variables. This type of variable needs to be declared in only one header file and can be reused by including that header file in other C files. The global variable must be declared in a header file with the keyword extern and defined in the relevant C file exactly once. Then it is available throughout the application and no variable redeclaration errors occur while compiling. This became the final implementation for the PSLab firmware to fix the compilation issues while modularizing application-specific C and header files.
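
A minimal sketch of this approach (file and variable names are illustrative, not taken from the firmware):

globals.h

#ifndef GLOBALS_H
#define GLOBALS_H

extern int var_1;   /* declaration only; no storage is allocated here */

#endif /* GLOBALS_H */

globals.c

#include "globals.h"

int var_1 = 10;     /* the single definition */

main.c

#include <stdio.h>
#include "globals.h"

int main() {
    printf("%d", var_1);   /* every file sees the same variable */
    return 0;
}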

Resources:


Drawing Lines on Images in Phimpme

In the editing section of the Phimpme app, we want a feature which lets the user write something on the image in their own handwriting. The user can also select from the different colour palettes available inside the app. We placed this feature in our editor section, as the Phimpme Image app lets a user combine our custom camera with a large number of editing options (enhancement, transform, applying stickers, applying filters and writing on images). In this post, I explain how I implemented the draw feature in the Phimpme Android app.

Let’s get started

Step 1: Create a bitmap of the image and a canvas to draw on

The first step is to create a bitmap of the image on which you want to draw lines. We need a canvas to draw on, and a canvas requires a bitmap to work. So in this step, we create a bitmap and a new canvas to draw the lines.

BitmapFactory.Options bmOptions = new BitmapFactory.Options();
// The bitmap must be mutable so that Canvas can draw on it
bmOptions.inMutable = true;
Bitmap bitmap = BitmapFactory.decodeFile(ImagePath, bmOptions);

Canvas canvas = new Canvas(bitmap);

Once the canvas is initialized, we use the Paint class to draw on it.

Step 2: Create a Paint object to draw on the canvas

In this step, we create a new Paint object with properties like colour and stroke width, along with the coordinates where we want to draw a line. It can be done using the following code.

Paint paint = new Paint();
paint.setColor(Color.rgb(255, 153, 51));
paint.setStrokeWidth(10);

int startx = 50;
int starty = 90;
int endx = 150;
int endy = 360;

canvas.drawLine(startx, starty, endx, endy, paint);

The above code will result in a straight line on the canvas like the below screenshot.

Step 3: Add support to draw on touch dynamically

In step 2 we added the ability to draw a straight line, but what if the user wants to draw lines on the canvas with touch events? To achieve this we have to handle touch events and apply the coordinates dynamically, which can be done using the following code.

@Override
public boolean onTouchEvent(MotionEvent event) {
    boolean ret = super.onTouchEvent(event);
    float x = event.getX();
    float y = event.getY();

    switch (event.getAction()) {
        case MotionEvent.ACTION_DOWN:
            ret = true;
            last_x = x;
            last_y = y;
            break;
        case MotionEvent.ACTION_MOVE:
            ret = true;
            // Draw a segment from the previous touch point to the current one
            canvas.drawLine(last_x, last_y, x, y, paint);
            last_x = x;
            last_y = y;
            this.postInvalidate();
            break;
        case MotionEvent.ACTION_CANCEL:
        case MotionEvent.ACTION_UP:
            ret = false;
            break;
    }

    return ret;
}

The above code lets you draw lines dynamically. But how? In the touch event I fetch the coordinates of the touch and apply them dynamically to the canvas. Now our feature is ready to draw lines on the canvas with a finger touch.

Drawing line in Phimpme Editor

Step 4: Adding eraser functionality

Till now we have added the draw-on-canvas functionality, but what if the user wants to erase a particular area? Now we have to implement the eraser functionality. It is very simple: we draw again, but we set the paint colour to a transparent colour. Drawing the transparent colour over the existing line restores the previous picture, so it looks like we are erasing. It can be done by adding one line.

paint.setColor(eraser ? Color.TRANSPARENT : color);

That completes drawing lines on an image and the erasing functionality.

Resources:


Building Preference Screen in SUSI Android

SUSI provides various preferences to the user in the settings to customize the app. This allows the user to configure the application according to their own choice. Different preferences are available, such as selecting the theme or the language for text to speech. A preference settings activity is an important part of an Android application. Here we will see how we can implement it in an Android app, taking SUSI Android (https://github.com/fossasia/susi_android) as the example.

Firstly, we add the Gradle dependency for the settings preferences:

compile 'com.takisoft.fix:preference-v7:25.4.0.3'

Then to create the custom style for our setting preference screen we can set

@style/PreferenceFixTheme

as the base theme and can apply various other modifications and color over this. By default it has the usual Day and Night theme with NoActionBar extension.

Now to make different preferences we can use different classes as shown below:

SwitchPreferenceCompat: This gives us the Switch Preference which we can use to toggle between two different modes in the setting.

EditTextPreference: This preference allows the user to enter a number or string of their own choice in the settings, which can be used for different actions.

For more details on this you can refer to this link.
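
For instance, an EditTextPreference entry in the preferences XML could look roughly like this (the key, title and default value are illustrative, not SUSI Android's actual settings):

<com.takisoft.fix.support.v7.preference.EditTextPreference
    android:key="server_url"
    android:title="Custom server"
    android:summary="Enter the SUSI server URL"
    android:defaultValue="https://api.susi.ai" />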

Implementation in SUSI Android

In SUSI Android we have created an activity whose layout, activity_settings, holds the preference fragment for the settings.

<?xml version="1.0" encoding="utf-8"?>
<fragment
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/chat_pref"
    android:name="org.fossasia.susi.ai.activities.SettingsActivity$ChatSettingsFragment"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="org.fossasia.susi.ai.activities.SettingsActivity"/>

The preference settings fragment contains different preference categories that give the user various customization options while using the app. The pref_settings.xml is as follows:

<?xml version="1.0" encoding="utf-8"?>
<PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:title="@string/settings_title">

    <PreferenceCategory
        android:title="@string/server_settings_title">
        <PreferenceScreen
            android:title="@string/server_pref"
            android:key="Server_Select"
            android:summary="@string/server_select_summary">
        </PreferenceScreen>
    </PreferenceCategory>

    <PreferenceCategory
        android:title="@string/settings_title">
        <com.takisoft.fix.support.v7.preference.SwitchPreferenceCompat
            android:id="@+id/enter_key_pref"
            android:defaultValue="true"
            android:key="@string/settings_enterPreference_key"
            android:summary="@string/settings_enterPreference_summary"
            android:title="@string/settings_enterPreference_label" />
    </PreferenceCategory>

</PreferenceScreen>

All the logic related to the preferences and their actions is written in the SettingsActivity Java class. It listens for any change in the preference options and takes action accordingly, in the following way.

public class SettingsActivity extends AppCompatActivity {

    private static final String TAG = "SettingsActivity";
    private static SharedPreferences prefs;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        prefs = getSharedPreferences(Constant.THEME, MODE_PRIVATE);
        Log.d(TAG, "onCreate: " + (prefs.getString(Constant.THEME, DARK)));
        if (prefs.getString(Constant.THEME, "Light").equals("Dark")) {
            setTheme(R.style.PreferencesThemeDark);
        } else {
            setTheme(R.style.PreferencesThemeLight);
        }
        setContentView(R.layout.activity_settings);
    }

The class contains a ChatSettingFragment which extends the PreferenceFragmentCompat to give access to override functions such as onPreferenceClick. The code below shows the implementation of it.

public boolean onPreferenceClick(Preference preference) {
            Intent intent = new Intent();
            intent.setComponent(new ComponentName("com.android.settings",
                    "com.android.settings.Settings$TextToSpeechSettingsActivity"));
            startActivity(intent);
            return true;
        }
    });

    rate = (Preference) getPreferenceManager().findPreference(Constant.RATE);
    rate.setOnPreferenceClickListener(new Preference.OnPreferenceClickListener() {
        @Override
        public boolean onPreferenceClick(Preference preference) {
            startActivity(new Intent(Intent.ACTION_VIEW,
                    Uri.parse("http://play.google.com/store/apps/details?id=" + getContext().getPackageName())));
            return true;
        }
    });
}

To dive deeper into the code, refer to the GitHub repo of SUSI Android (https://github.com/fossasia/susi_android).

Resources


Upload Image to Imgur Anonymously Using Volley in Phimpme Android app

Phimpme Android is an image app that lets you share your images on multiple platforms such as Twitter, Facebook, Pinterest and Imgur without installing those apps. Imgur is the best place to share and enjoy the most awesome images on the Internet. Imgur provides APIs to upload an image from your account as well as anonymously. In this blog I am going to explain how I added the Imgur upload feature to the Phimpme Android app.

I have implemented anonymous upload to Imgur. Here is a step-by-step guide to implementing it.

Step 1: Register your application on Imgur.

To register your application, click here. For more detail, read the documentation on the Imgur API site.

Once you have registered your application, you will be provided with two keys, a Client ID and a Client Secret. Save them; we will need them later to upload an image and for authentication purposes.

Step 2: Add a networking library to your Gradle file; I have used Volley.

Volley is a good networking library that works well for both API requests and file uploads. To add Volley to the project, add the following dependency:

'com.android.volley:volley:1.0.0'

Step 3: Convert your image into the desired format to upload.

Imgur accepts three kinds of image input: a binary file, base64 data, or the URL of an image (up to 10 MB).

In Phimpme Android, we upload a base64 image string.

So, we first have to convert our image to a bitmap and then convert the bitmap to base64. I recommend using an image library to decode the bitmap; otherwise, there is a chance of an OutOfMemory exception being thrown for large image files.

To convert bitmap to base64 string

public static String get64BaseImage (Bitmap bmp) {
       ByteArrayOutputStream baos = new ByteArrayOutputStream();
       bmp.compress(Bitmap.CompressFormat.JPEG, 100, baos);
       byte[] imageBytes = baos.toByteArray();
       return Base64.encodeToString(imageBytes, Base64.DEFAULT);
   }

It is best practice to add such methods to a utility class. You can also apply image compression above if you want to reduce the image size; the number 100 means the image is kept at 100% quality.
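
A sketch of how this might be called while preparing an upload (the downsampling factor and the Utils holder class are illustrative assumptions):

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = 2;  // decode at half resolution to reduce memory pressure (illustrative value)
Bitmap bmp = BitmapFactory.decodeFile(imagePath, opts);
String imageString = Utils.get64BaseImage(bmp);  // Utils is a hypothetical utility class holding the method above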

Step 4: Now we need two things: add the image string to the body and add the authentication key to the headers of the request.

We have added two methods to do above mentioned tasks.

  1. To upload the image data to Imgur.
  2. To add a header in our network request.

The header is required for Imgur to authenticate the client uploading the file, and your authentication key is your CLIENT-ID, which we generated in step 1.

@Override
protected Map<String, String> getParams() throws AuthFailureError {
    Map<String, String> parameters = new HashMap<String, String>();
    parameters.put("image", imageString);
    if (caption != null && !caption.isEmpty())
        parameters.put("title", caption);
    return parameters;
}


The above code builds the request body with the key "image" and, as its value, the imageString data which is the result of the get64BaseImage() method. Now add the headers to authenticate the client.

@Override
public Map<String, String> getHeaders() throws AuthFailureError {
    Map<String, String> headers = new HashMap<String, String>();
    // clientId is the Client ID obtained when registering the app in step 1
    headers.put("Authorization", "Client-ID " + clientId);
    return headers;
}

Override the getHeaders method of the Volley request and return a map which has a key named "Authorization" whose value is the Imgur client ID.
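
These two overrides live inside a Volley request. A minimal sketch of how the pieces could be assembled (the endpoint is Imgur's v3 anonymous upload URL; variable names are illustrative):

String url = "https://api.imgur.com/3/image";  // Imgur API v3 image upload endpoint

StringRequest request = new StringRequest(Request.Method.POST, url,
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // response is JSON containing the link of the uploaded image
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // handle upload failure
            }
        }) {
    // getParams() and getHeaders() overridden as shown above
};

Volley.newRequestQueue(context).add(request);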

Now we are ready to upload an image on Imgur through Phimpme Android app.

The problem I faced:

Whenever I tried to upload large images, I got a Volley timeout exception; the default connection timeout was not sufficient to upload large files. I resolved this error by adding the line below to the request's retry policy.

           request.setRetryPolicy(new DefaultRetryPolicy( 50000, 5, DefaultRetryPolicy.DEFAULT_BACKOFF_MULT));

Now it works seamlessly with large files even on a slow internet connection, and we receive the URL of the image in the response.

Resources :


Persistence Layer in Open Event Organizer Android App

Open Event Organizer is an event managing Android app whose core features are attendee check-in by QR code scan and data sync with the Open Event API Server. As an event can be large, the app deals with a large amount of data. Hence, to avoid repetitive network requests for fetching the data, the app maintains a local database containing all the required data, and this database is synced with the server. Android provides the android.database.sqlite package, which contains the API needed to use databases on Android, but it is really not good practice to use raw SQLite queries everywhere in the app. That is where a persistence layer comes in: it sits between the database and the business logic. Open Event Organizer uses Raizlabs's DbFlow, an ORM-based Android database library, for this purpose. I will talk about its implementation in the app in this blog.

First of all, you declare the base class of the database, which Android uses to create the database for the app. You declare all the base constants here. The class looks like:

@Database(
   name = OrgaDatabase.NAME,
   version = OrgaDatabase.VERSION,
   ...
)
public class OrgaDatabase {
   public static final String NAME = "orga_database";
   public static final int VERSION = 2;
   ...
}

OrgaDatabase.java
app/src/main/java/org/fossasia/openevent/app/data/db/configuration/OrgaDatabase.java

Initialise the database in the Application class using the FlowManager provided by the library. Doing this in the Application class ensures that the library finds the code generated by DbFlow.

FlowManager.init(
   new FlowConfig.Builder(context)
       .addDatabaseConfig(
           new DatabaseConfig.Builder(OrgaDatabase.class)
           ...
           .build()
       )
       .build());

OrgaApplication.java
app/src/main/java/org/fossasia/openevent/app/OrgaApplication.java

The database is created now. For table creation, DbFlow uses model classes, which must be annotated using the annotations provided by the library. The basic annotations are @Table, @PrimaryKey, @Column, @ForeignKey, etc.

For example, the Attendee class in the app looks like:

@Table(database = OrgaDatabase.class)
public class Attendee ... {

   @PrimaryKey
   public long id;

   @Column
   public boolean checkedIn;
   ...
   ...
   @ForeignKey(
       onDelete = ForeignKeyAction.CASCADE,
       onUpdate = ForeignKeyAction.CASCADE)
   public Order order;
   ...
}

Attendee.java
app/src/main/java/org/fossasia/openevent/app/data/models/Attendee.java

This creates a table named attendee with the annotated columns and relationships. Now comes the part of accessing data from the database. The Open Event app uses RxJava support for the DbFlow library, which enables asynchronous data access. The getItems method from DatabaseRepository looks like:

public <T> Observable<T> getItems(Class<T> typeClass, SQLOperator... conditions) {
   return RXSQLite.rx(SQLite.select()
       .from(typeClass)
       .where(conditions))
       .queryList()
       .flattenAsObservable(items -> items);
}
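
A hypothetical usage of this method (the model class and condition are only illustrative; DbFlow generates the Attendee_Table helper from the annotated model):

databaseRepository.getItems(Attendee.class, Attendee_Table.event_id.eq(eventId))
    .toList()
    .subscribe(attendees -> {
        // update the attendee list in the view
    });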

 

The method returns an observable emitting the items from the result. For data saving, the method looks like:

DatabaseDefinition database = FlowManager.getDatabase(OrgaDatabase.class);
FastStoreModelTransaction<T> transaction = FastStoreModelTransaction
   .insertBuilder(FlowManager.getModelAdapter(itemClass))
   .addAll(items)
   .build();
database.executeTransaction(transaction);

 

And for updating data, the method looks like:

ModelAdapter<T> modelAdapter = FlowManager.getModelAdapter(classType);
modelAdapter.update(item);

DatabaseRepository.java
app/src/main/java/org/fossasia/openevent/app/data/db/DatabaseRepository.java

DbFlow provides DirectModelNotifier, which is used to get notified of database changes anywhere in the app. The Open Event app uses PublishSubjects to send notifications on database change events. The implementation of the DatabaseChangeListener in the app looks like:

public class DatabaseChangeListener<T> ... {
   private PublishSubject<ModelChange<T>> publishSubject = PublishSubject.create();
   private DirectModelNotifier.ModelChangedListener<T> modelModelChangedListener;
   ...
   public void startListening() {
       modelModelChangedListener = new DirectModelNotifier.ModelChangedListener<T>() {
           @Override
           public void onTableChanged(@Nullable Class<?> aClass, @NonNull BaseModel.Action action) {
               // No action to be taken
           }
           @Override
           public void onModelChanged(@NonNull T model, @NonNull BaseModel.Action action) {
               publishSubject.onNext(new ModelChange<>(model, action));
           }
       };
       DirectModelNotifier.get().registerForModelChanges(classType, modelModelChangedListener);
   }
   ...
}

DatabaseChangeListener.java
app/src/main/java/org/fossasia/openevent/app/data/db/DatabaseChangeListener.java

The class is used in the app to get notified of data changes and to update the required local data fields using the item emitted by the class's publishSubject. This is used wherever the same data is accessed in more than one place. For example, there are two fragments, AttendeesFragment and AttendeeCheckInFragment, from which an attendee's check-in status can be toggled. When the status is toggled from AttendeeCheckInFragment, the change must be reflected in AttendeesFragment's attendee list. This is carried out using a DatabaseChangeListener in the AttendeesPresenter, which provides the attendee list to the AttendeesFragment. On a change in an attendee's check-in status, the AttendeesPresenter's attendeeListener picks up the change and updates the attendee in the list accordingly.
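
A rough sketch of how a presenter could consume such a listener (this assumes the listener exposes its PublishSubject through a getter, which is not shown above, and that ModelChange offers simple accessors; all names here are illustrative):

attendeeListener.startListening();
attendeeListener.getNotifier()                     // hypothetical getter for the PublishSubject
    .filter(change -> change.getAction().equals(BaseModel.Action.UPDATE))
    .subscribe(change -> {
        Attendee changed = change.getModel();      // hypothetical accessor on ModelChange
        // replace the matching attendee in the presenter's list and refresh the view
    });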

Links:
1. Raizlabs’s DbFlow , an ORM Android Database Library Github Repo Link
2. DbFlow documentation
3. Android database managing API android.database.sqlite


Image Uploading in Open Event API Server

The Open Event API Server manages image uploading in a very simple way. Many APIs in the server, such as the Event API, provide a data pointer in the request body to send an image URL. Since you can only send URLs there, if you want to upload an image you can use our Image Uploading API. This uploading API gives you a temporary URL for your uploaded file. It is not permanent storage, but the good thing is that developers do not have to do anything else: just send this temporary URL to the different APIs, like the event one, and the rest of the work is done by the APIs.
API endpoints which receive image URLs follow a simple mechanism:

  • Create a copy of an uploaded image
  • Create different sizes of the uploaded image
  • Save all images to preferred storage. The Super Admin can set this storage in admin preferences

To better understand this, consider this sample request object to create an event

{
  "data": {
    "attributes": {
      "name": "New Event",
      "starts-at": "2002-05-30T09:30:10+05:30",
      "ends-at": "2022-05-30T09:30:10+05:30",
      "email": "example@example.com",
      "timezone": "Asia/Kolkata",
      "original-image-url": "https://cdn.pixabay.com/photo/2013/11/23/16/25/birds-216412_1280.jpg"
    },
    "type": "event"
  }
}

I have provided one attribute, "original-image-url"; the server will open the image and create images of different sizes as shown below:

      "is-map-shown": false,
      "original-image-url": "http://example.com/media/events/3/original/eUpxSmdCMj/43c6d4d2-db2b-460b-b891-1ceeba792cab.jpg",
      "onsite-details": null,
      "organizer-name": null,
      "can-pay-by-stripe": false,
      "large-image-url": "http://example.com/media/events/3/large/WEV4YUJCeF/f819f1d2-29bf-4acc-9af5-8052b6ab65b3.jpg",
      "timezone": "Asia/Kolkata",
      "can-pay-onsite": false,
      "deleted-at": null,
      "ticket-url": null,
      "can-pay-by-paypal": false,
      "location-name": null,
      "is-sponsors-enabled": false,
      "is-sessions-speakers-enabled": false,
      "privacy": "public",
      "has-organizer-info": false,
      "state": "Draft",
      "latitude": null,
      "starts-at": "2002-05-30T04:00:10+00:00",
      "searchable-location-name": null,
      "is-ticketing-enabled": true,
      "can-pay-by-cheque": false,
      "description": "",
      "pentabarf-url": null,
      "xcal-url": null,
      "logo-url": null,
      "can-pay-by-bank": false,
      "is-tax-enabled": false,
      "ical-url": null,
      "name": "New Event",
      "icon-image-url": "http://example.com/media/events/3/icon/N01BcTRUN2/65f25497-a079-4515-8359-ce5212e9669f.jpg",
      "thumbnail-image-url": "http://example.com/media/events/3/thumbnail/U2ZpSU1IK2/4fa07a9a-ef72-45f8-993b-037b0ad6dd6e.jpg",

We can clearly see that the server generates three other images in permanent storage as well as copying the original-image-url into permanent storage.
Since we already have our Storage class, all we needed to do was make small changes in it due to the decoupling of Open Event. I also had to work on the points below:

  • Fix the upload module; provide support to generate the URL of a locally uploaded file based on the static_domain defined in settings
  • Using PIL, create a method to generate a new image by first converting it to JPEG (smaller than PNG) and resizing it according to the aspect ratio
  • Create a helper method to create the different sizes
  • Store all images in the preferred storage
  • Update the APIs to incorporate this feature; drop any URLs in image pointers except original_image_url

Support for generating locally uploaded file’s URL
Here I added support to check whether a static_domain has been set by the user, using request.url as the fallback.

if get_settings()['static_domain']:
    return get_settings()['static_domain'] + \
        file_relative_path.replace('/static', '')
url = urlparse(request.url)
return url.scheme + '://' + url.host + file_relative_path

Using PIL, create a method to create the image

This method creates an image based on any size passed to it as a parameter. Its important role is to convert the image into JPEG (which is smaller than PNG) and then resize it on the basis of the size and aspect ratio provided.
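
A minimal sketch of such a helper using Pillow (the function name and default width are illustrative, not the server's actual code):

from PIL import Image

def create_resized_image(source_path, target_path, basewidth=300):
    # Convert to JPEG and resize while keeping the aspect ratio
    img = Image.open(source_path).convert('RGB')          # JPEG has no alpha channel
    ratio = basewidth / float(img.size[0])
    height = int(float(img.size[1]) * ratio)
    img = img.resize((basewidth, height), Image.ANTIALIAS)
    img.save(target_path, 'JPEG', quality=90)             # illustrative quality setting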
Earlier, in the Orga Server, we directly used the "open" method to open image files, but since the files no longer need to be on the local server, a user can provide a direct link to any image. To add this support, all we needed was to use StringIO to turn the downloaded string into a file-like object:

image_file = cStringIO.StringIO(urllib.urlopen(image_file).read())

Next, I have to work on clearing from the cloud the temporary images which were created using the temporary APIs. I believe that will be a cakewalk for locally stored images, since I already have this support in the method:

if remove_after_upload:
        os.remove(image_file)

Update APIs to incorporate this feature
Below is an example of how this works in an API.

if data.get('original_image_url') and data['original_image_url'] != event.original_image_url:
    uploaded_images = create_save_image_sizes(data['original_image_url'], 'event', event.id)
    data['original_image_url'] = uploaded_images['original_image_url']
    data['large_image_url'] = uploaded_images['large_image_url']
    data['thumbnail_image_url'] = uploaded_images['thumbnail_image_url']
    data['icon_image_url'] = uploaded_images['icon_image_url']
else:
    if data.get('large_image_url'):
        del data['large_image_url']
    if data.get('thumbnail_image_url'):
        del data['thumbnail_image_url']
    if data.get('icon_image_url'):
        del data['icon_image_url']

Here the method "create_save_image_sizes" provides the URLs of the different images of different sizes, and we clearly drop any other image-size URLs supplied by the user.

General Suggestion
Sometimes when we work on such issues there are things to take care of. For example, in the first snippet I tried to ensure that you always get a URL even though static_domain should never be blank: even if the user (admin) doesn't fill in that field, it is filled with the server hostname.
A similar situation is when there is no record in the Image Sizes table, perhaps because the server admin didn't add one. In that case, the standard sizes stored in the codebase are used to create the images of different sizes.

Resources:
