Adding Map and RSS Action Type Support to SUSI MagicMirror Module with React

SUSI, being an interactive personal assistant, answers questions in a variety of formats, including maps, RSS feeds, tables, and pie charts. The SUSI MagicMirror Module earlier supported only the Answer Action Type, so if you asked about a location, it could not show you a map for it. Support for more of these formats was added to the SUSI Module for MagicMirror so that users can benefit from the rich responses of SUSI.AI.

One problem faced while adding UI components is that, in the MagicMirror module structure, each module supplies its DOM by overriding the getDom() method, so all of the UI has to be managed programmatically. Managing the UI programmatically in JavaScript is cumbersome: you need to create DOM nodes, manually apply styling to them, and append them to the parent DOM object that getDom() returns. The UI for each element has to be written like below:

getDom: function () {
        ....
        ....
        // container node returned by the module
        const moduleDiv = document.createElement("div");

        // canvas for the audio visualizer
        const visualizerCanvas = document.createElement("canvas");
        moduleDiv.appendChild(visualizerCanvas);

        // div that hosts the map
        const mapDiv = document.createElement("div");
        loadMap(mapDiv, lat, long);
        moduleDiv.appendChild(mapDiv);
        ...
        ...
}

As you can see, manually managing the DOM is neither easy nor a recommended practice. It can be done more efficiently with React, an open-source UI library by Facebook. React works on the concept of a Virtual DOM: the whole DOM tree is created in memory, and only the components that change are reflected in the document.
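
For comparison, here is a rough sketch of how the same fragment could look as a declarative React component written in TSX. The names used here (ModuleView, the "./map-view" import path) are illustrative only and not the actual module code; MapView is the map component defined later in this post.

import * as React from "react";
import {MapView} from "./map-view"; // assumed path for the MapView component shown later

// Illustrative sketch only: the imperative getDom() fragment above expressed as
// a declarative component instead of manual DOM calls.
const ModuleView = (props: { latitude: number, longitude: number }) => (
    <div>
        <canvas/>
        <MapView latitude={props.latitude} longitude={props.longitude} zoom={13}/>
    </div>
);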

Since the SUSI MagicMirror Module is primarily written in TypeScript (a typed superset of JavaScript), we also need to write our React code in TypeScript. To add React to a TypeScript project, we need a few dependencies. They can be added using:

$ yarn add react react-dom @types/react @types/react-dom

Now, we need to change our webpack config to build .tsx files for React. TSX, like JSX, can contain HTML-like syntax as syntactic sugar for describing DOM objects. This is done by changing the resolve extensions and the loaders config so that awesome-typescript-loader also compiles .tsx files. The config needs to be modified as below:

resolve: {
   extensions: [".js", ".ts", ".tsx", ".jsx"],
},

module: {
   loaders: [{
       test: /\.tsx?$/,
       loaders: ["awesome-typescript-loader"],
   },
       {
           test: /\.json$/,
           loaders: ["json-loader"],
       }],
},

This allows webpack to build and load both .tsx and .ts files. Now that the project is set up properly, we can add the UI for the Map and RSS Action Types.

The UI for the Map is added with the help of the React-Leaflet library, a module built on top of the Leaflet map library for loading maps in the browser. We add React-Leaflet using:

$ yarn add react-leaflet

Now, we declare a MapView component in React and render the map in it using React-Leaflet. Custom styling can be applied to it. The MapView component is defined as follows:

import * as React from "react";
import {Map, Marker, Popup, TileLayer} from "react-leaflet";
interface IMapProps {
   latitude: number;
   longitude: number;
   zoom: number;
}

export class MapView extends React.Component<IMapProps, any> {

   public constructor(props: IMapProps) {
       super(props);
   }

   public render(): JSX.Element {
       const center = [this.props.latitude, this.props.longitude];
       console.log(center);
       return <Map center={center} zoom={this.props.zoom} style={{height: "300px"}}>
           <TileLayer url="http://{s}.tile.osm.org/{z}/{x}/{y}.png"/>
           <Marker position={center}>
               <Popup>
                   <span> Here </span>
               </Popup>
           </Marker>
       </Map>;
   }
}

To build the UI for the RSS Action Type, we define an RSS Card component; an RSS feed is then composed of several of these cards. An RSS Card is defined as follows:

import * as React from "react";

export interface IRssProps {
   title: string;
   description: string;
   link: string;
}

export class RSSCard extends React.Component <IRssProps, any> {

   constructor(props: IRssProps) {
       super(props);
   }

   public render(): JSX.Element {
       return <div className="card">
           <div className="card-title">{this.props.title}</div>
           <div className="card-description">{this.props.description}</div>
       </div>;
   }
}

Now, we define an RSS Feed component, which is composed of several RSS information cards. Since screen space is limited and the user has no way to scroll, we limit the number of cards displayed to 5 with a slice operation on the feeds array.

import * as React from "react";
import {IRssProps, RSSCard} from "./rss-card";

export interface IRSSFeedProps {
   feeds: Array<IRssProps>;
}

export class RSSFeed extends React.Component <IRSSFeedProps, any> {

   public constructor(props: IRSSFeedProps) {
       super(props);
   }

   public render(): JSX.Element {
       return <div className="rss-div">
           {this.props.feeds.slice(0, 5).map((feed: IRssProps) =>
               <RSSCard key={feed.title} title={feed.title} description={feed.description} link={feed.link}/>)}
       </div>;
   }
}

Now, we can easily add these components to the UI and render them with ReactDOM like:

ReactDOM.render(<TableView data={tableData} columns={action.columns}/>, tableDiv);
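
In the same way, the MapView and RSSFeed components defined above can be mounted into the DOM nodes created inside getDom(). A minimal sketch follows; the variables mapDiv, rssDiv, and feedItems as well as the import paths are assumptions for illustration, not the exact module code.

import * as React from "react";
import * as ReactDOM from "react-dom";
import {MapView} from "./map-view";   // assumed import paths
import {RSSFeed} from "./rss-feed";

// mapDiv and rssDiv are <div> elements created inside getDom(); lat and long
// come from the map action of the SUSI response, feedItems from the RSS action.
ReactDOM.render(<MapView latitude={lat} longitude={long} zoom={13}/>, mapDiv);
ReactDOM.render(<RSSFeed feeds={feedItems}/>, rssDiv);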

Below is an example screenshot of RSS and Map View in SUSI MagicMirror.

Adding Face Recognition based Authentication to SUSI MagicMirror Module

SUSI MagicMirror Module is a module designed for MagicMirror that brings SUSI intelligence right onto your mirror. You may then ask it questions the way the Queen in the tale “Snow White and the Seven Dwarfs” did. One key feature that was missing was recognizing the user and answering their queries in a personalized manner. This can be achieved if SUSI uses the account dedicated to that person when answering their queries. Thus, we need authentication support.

Authentication on MagicMirror is not as trivial as on the Web, Android, and iOS client apps for SUSI. The key difference is that a user standing in front of the MagicMirror does not have access to a keyboard and mouse, so we cannot simply ask them to type an email and password. Furthermore, a MagicMirror installed in your home may be used by several members of your family, so we also need a mechanism to tell the users apart.

This was done with the help of the MMM-Facial-Recognition module, which brings face recognition support to MagicMirror.

The MMM-Facial-Recognition module uses OpenCV to recognize multiple faces and to show or hide modules on the mirror screen depending on the user facing the mirror. Other modules can also learn who is in front of the mirror through the module notifications sent by MMM-Facial-Recognition.

To add face-based authentication support to SUSI with MMM-Facial-Recognition, we first need to add the latter to MagicMirror. This is done by cloning the repository into the modules directory of MagicMirror:

$ git clone https://github.com/paviro/MMM-Facial-Recognition

Go inside the directory and install the dependencies:

$ npm install

Now, we need to train a model for the users who are going to use the MagicMirror. This can be done with MMM-Facial-Recognition-Tools, which captures photos from the camera and trains a face recognition model. The guide for the tool is well written on its GitHub page, so I am not repeating it here. After training on the users’ faces, you get a training.xml file containing the facial features of every person, which lets the module tell users apart. Copy this file into the module directory for MMM-Facial-Recognition, i.e. MagicMirror/modules/MMM-Facial-Recognition.

After this, we can add the module to MagicMirror by modifying the config file (config.js). Add the following lines, and copy and paste the username array from the training script into the indicated position.

{
    module: 'MMM-Facial-Recognition',
    config: {
        // 1=LBPH | 2=Fisher | 3=Eigen
        recognitionAlgorithm: 1,
        lbphThreshold: 50,
        fisherThreshold: 250,
        eigenThreshold: 3000,
        useUSBCam: true,
        trainingFile: 'modules/MMM-Facial-Recognition/training.xml',
        interval: 2,
        logoutDelay: 15,
        // Array with usernames (copy and paste from training script)
        users: [],
        defaultClass: "default",
        everyoneClass: "everyone",
        welcomeMessage: true
    }
}

You may configure the show-and-hide behavior of modules based on the person; more information is available in the official guide in the repository. Once set up, the module recognizes each user and shows them a welcome message.

Now, we need to integrate this module with SUSI for authentication. To do this, we first extend the config for the SUSI MagicMirror Module so that each user’s authentication details are listed along with the name registered in the Facial Recognition module. This is done in the SUSI MagicMirror module entry of the config file (config.js) like below:

{
       module: "MMM-SUSI-AI",
       position: "top_center",
       config: {
            hotword: "Susi",
            users: [{
                face_recognition_username: "Pranjal Paliwal",
                email: "paliwal.pranjal83@gmail.com",
                password: "PASSWORD_HERE"
            }, {
                face_recognition_username: "Chashmeet Singh",
                email: "chashmeetsingh@gmail.com",
                password: "PASSWORD_HERE"
            }],
        },
        classes: 'default everyone'
},

Now, we need to know which user is facing the mirror at a given time. MMM-Facial-Recognition sends a module notification when a user is detected. The format of the notification is:

sender: MMM-Facial-Recognition
type: CURRENT_USER
payload: Name of the User / None

If a user is recognized, we get their name as the payload; if no face could be identified, the payload is None.
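
Since CURRENT_USER is a module notification (not a socket notification), the MMM-SUSI-AI registration has to forward it to the component that handles it. The snippet below is only a hedged sketch of what such forwarding might look like, using the standard notificationReceived hook of the MagicMirror module API; susiComponent is a placeholder name for whichever object implements the receivedNotification method shown further below, and the actual wiring in the module may differ.

Module.register("MMM-SUSI-AI", {
    // ...
    // notificationReceived is the standard MagicMirror hook for notifications
    // sent by other modules via sendNotification()
    notificationReceived: function (notification, payload, sender) {
        if (sender && sender.name === "MMM-Facial-Recognition" && notification === "CURRENT_USER") {
            susiComponent.receivedNotification(notification, payload);
        }
    },
    // ...
});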

We need to look the user up by the name registered in the facial recognition module. We already have that parameter in each user object of the users array in the config for the SUSI MagicMirror Module (MMM-SUSI-AI), so on receiving the notification we can iterate over the array to find the user facing the mirror. In the SUSI Chat API, users are identified with the help of an access token, so once a user is identified, we log in through the SignInService to obtain a token for them. The implementation of this can be seen in the following snippet:

public receivedNotification(type: NotificationType, payload: any): void {
   if (type === "CURRENT_USER") {
       console.log("Current User", payload);
       if (payload === "None") {
           this.configService.Config.accessToken = null;
       } else {
           console.log(this.config.users);
           for (const user of this.config.users) {
               if (user.face_recognition_username === payload) {
                   if (isUndefined(this.signInService)) {
                       this.signInService = new SignInService(user);
                   }
                   this.signInService.updateUser(user).then((token) => {
                       console.log("updating token for " + user);
                       this.configService.Config.accessToken = token;
                   });
                   return;
               }
           }
           this.configService.Config.accessToken = null;
       }
   }
}

Explanation: In the receivedNotification method of the main component of the SUSI MagicMirror module, we check whether the notification is of type CURRENT_USER. If the payload is None, we set the access token to null. If a user is identified, we check whether they are present in the users array; if so, we sign in to the SUSI Server for that user and store the obtained access token in the Config.
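
The SignInService itself is not shown in this post. Below is a minimal sketch of what it might look like; the login endpoint, its query parameters, and the access_token response field are assumptions based on the SUSI.AI accounts API and may differ from the actual implementation in MMM-SUSI-AI.

import * as WebRequest from "web-request";

// Hypothetical sketch of a SignInService: logs a user in against the SUSI
// accounts API and resolves with an access token for that user.
export class SignInService {

    private user: any;

    constructor(user: any) {
        this.user = user;
    }

    public async updateUser(user: any): Promise<string> {
        this.user = user;
        const url = `http://api.susi.ai/aaa/login.json?type=access-token` +
            `&login=${encodeURIComponent(user.email)}` +
            `&password=${encodeURIComponent(user.password)}`;
        const response = await WebRequest.get(url);
        return JSON.parse(response.content).access_token;
    }
}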

Now, every time a user is recognized by the Facial Recognition module, the access token in the config is updated. We then use the accessToken field of the Config when sending the message to the SUSI Chat API. Its implementation is shown below:

public async askSusi(query: string): Promise<any> {

   const accessToken = this.configService.Config.accessToken;

   // encode the query so that spaces and special characters survive in the URL
   const requestString: string = (!isUndefined(accessToken) && accessToken != null) ?
       `http://api.susi.ai/susi/chat.json?q=${encodeURIComponent(query)}&access_token=${accessToken}` :
       `http://api.susi.ai/susi/chat.json?q=${encodeURIComponent(query)}`;

   const response = await WebRequest.get(requestString);
   return JSON.parse(response.content);
}

With this approach, the requests sent to the SUSI Server are identified according to the person facing the mirror, so SUSI can answer according to the user. In this way, authentication with face recognition is performed in the SUSI MagicMirror Module.

Sending Data between components of SUSI MagicMirror Module

SUSI MagicMirror Module is a module that adds the SUSI assistant right onto your MagicMirror. The MagicMirror software consists of an Electron app to which modules can be added easily. Since there are many modules, some functionality requires different modules to interact by exchanging information. MagicMirror also provides a node_helper script that lets a module perform background tasks, so a mechanism to transfer information between the node_helper and the other components of a module is needed as well.

MagicMirror provides an inbuilt module notification system to send notifications across modules, and a socket notification system to send information between the node_helper and the other components of a module.
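
For reference, these two mechanisms correspond to the following hooks of the MagicMirror module API. The snippet is a bare sketch with placeholder handler bodies; MMM-Example is not a real module.

Module.register("MMM-Example", {
    // module-to-module notifications, sent with this.sendNotification(notification, payload)
    notificationReceived: function (notification, payload, sender) {
        // react to notifications broadcast by other modules or by the core system
    },
    // module <-> node_helper notifications, sent with this.sendSocketNotification(notification, payload)
    socketNotificationReceived: function (notification, payload) {
        // react to notifications sent by this module's node_helper
    },
});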

Our codebase for SUSI MagicMirror is divided mainly into two parts: a Main module that handles hotword detection, speech recognition, calling the SUSI API, and saving the audio produced by text to speech, and a Renderer module that manages the display of content on the mirror screen and plays back the file obtained from speech synthesis. Put plainly, the Main module handles the backend logic of the application and the Renderer handles the frontend. The Main and Renderer modules work on different layers of the application, so we need a mechanism to facilitate communication between them. The flow that needs to be maintained is shown in the schematic accompanying this post.

As the schematic shows, we need to transfer a lot of information between the components. We display animation and text based on the current recognition state of the module, so this information has to be transferred frequently. This is accomplished using the inbuilt socket notification system of MagicMirror: for every event, such as the system entering the listening, busy, or recognized-speech state, we need to pass a message to the Renderer. To achieve this, we made a rendererSend function for sending notifications to the Renderer:

const rendererSend = (event: NotificationType, payload: any) => {
    this.sendSocketNotification(event, payload);
};

This function takes an event and a payload as arguments: the event tells which event occurred, and the payload is any data we wish to send. It in turn calls the sendSocketNotification method provided by the MagicMirror module API to send socket notifications within the module.

When certain events occur, such as the system entering the busy or listening state, we call rendererSend to send a socket notification to the module. The rendererSend method is supplied in the state machine components available to every state. Sending notifications then looks as follows:

// system enters busy state
this.components.rendererSend("busy", {});
// send speech recognition hypothesis text to renderer
this.components.rendererSend("recognized", {text: recognizedText});
// send susi api output json to renderer to display interactive results while Speech Output is performed
this.components.rendererSend("speak", {data: susiResponse});

The socket notification sent via the above method is received in the SUSI module via a callback called socketNotificationReceived. We need to provide an implementation of this callback while registering the module with MagicMirror. So, we register the MMM-SUSI-AI module and add a definition for the socketNotificationReceived method:

Module.register("MMM-SUSI-AI", {
//other function definitions
***
   // define socketNotificationReceived function
   socketNotificationReceived: function (notification, payload) {
       susiMirror.receivedNotification(notification, payload);
   },
***
});

In this way, every notification received is forwarded to the susiMirror object in the Renderer module by calling its receivedNotification method.

We can now receive all the notifications in susiMirror and update the UI. To handle them, we define the receivedNotification method as follows:

public receivedNotification(type: NotificationType, payload: any): void {

    this.visualizer.setMode(type);
    switch (type) {
        case "idle":
            // handle idle state
            break;
        case "listening":
            // handle listening state
            break;
        case "busy":
            // handle busy state
            break;
        case "recognized":
            // handle recognized state. This notification also contains a payload with the hypothesis text
            break;
        case "speak":
            // handle speaking state. We need to play back the audio file and display text on screen for the SUSI output. The notification payload contains the SUSI response
            break;
    }
}

In this way, we utilize the socket notification system provided by the MagicMirror Electron application to send data across the components of the MagicMirror module for SUSI.AI.
