Showing Pull Request Build logs in Yaydoc

In Yaydoc, I added a feature to show the build status of a pull request. But there was no way for the user to see the reason for a build failure, so I decided to show the build log in the pull request, similar to Travis CI. For this, I had to save the build log to the database, then use the GitHub status API to attach a build log URL to the pull request, which redirects to the Yaydoc website where we render the build log.

// Store the build log, then report the PR status to GitHub with a link
// to the stored log.
StatusLog.storeLog(name, repositoryName, metadata, `temp/admin@fossasia.org/generate_${uniqueId}.txt`, function(error, data) {
  if (error) {
    status = "failure";
  } else {
    // The stored log's MongoDB id becomes part of the status target URL
    targetURL = `https://${process.env.HOSTNAME}/prstatus/${data._id}`;
  }
  github.createStatus(commitId, req.body.repository.full_name, status, description, targetURL, repositoryData.accessToken, function(error, data) {
    if (error) {
      console.log(error);
    } else {
      console.log(data);
    }
  });
});

In the above snippet, I store the build log generated by the build script in MongoDB, and I append the MongoDB unique ID to the `prstatus` URL so that we can use that ID to retrieve the build log from the database.
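
For context, the `/prstatus/:id` route on the Yaydoc website then looks up the stored log by that ID and renders it. The actual route code isn't shown in this post; a minimal sketch of such a handler could look like the following, where `getLogById`, the `prstatus` view, and the Buffer-typed log content are assumed names for illustration:

router.get('/prstatus/:id', function(req, res) {
  // Fetch the stored build log by the MongoDB id embedded in the URL
  StatusLog.getLogById(req.params.id, function(error, log) {
    if (error || log === null) {
      res.status(404).send('Build log not found');
    } else {
      // Render the log content on the status page (assumes a Buffer field)
      res.render('prstatus', {log: log.data.toString()});
    }
  });
});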

exports.createStatus = function(commitId, name, state, description, targetURL, accessToken, callback) {
  request.post({
    url: `https://api.github.com/repos/${name}/statuses/${commitId}`,
    headers: {
      'User-Agent': 'Yaydoc',
      'Authorization': 'token ' + crypter.decrypt(accessToken),
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      state: state,
      target_url: targetURL,
      description: description,
      context: "Yaydoc CI"
    })
  }, function(error, response, body) {
    if (error !== null) {
      return callback({description: 'Unable to create status'}, null);
    }
    callback(null, JSON.parse(body));
  });
};

After saving the build log, I send a request to GitHub to show the build status along with the build log URL, so the user can click the "Details" link and see the build log.


Showing Pull Request Build Status in Yaydoc

Yaydoc is integrated into various open source projects in FOSSASIA. We have to make sure that a contributor's PR does not break the build. So, I decided to check whether a PR breaks the build or not, and then notify the status of the build using the GitHub status API.

exports.registerHook = function (data, accessToken) {
  return new Promise(function(resolve, reject) {
    var hookurl = 'http://' + process.env.HOSTNAME + '/ci/webhook';
    if (data.sub === true) {
      hookurl += `?sub=true`;
    }
    request({
      url: `https://api.github.com/repos/${data.name}/hooks`,
      headers: {
        'User-Agent': 'Yaydoc',
        'Authorization': 'token ' + crypter.decrypt(accessToken)
      },
      method: 'POST',
      json: {
        name: "web",
        active: true,
        events: [
          "push",
          "pull_request"
        ],
        config: {
          url: hookurl,
          content_type: "json"
        }
      }
    }, function(error, response, body) {
      if (response.statusCode !== 201) {
        console.log(response.statusCode + ': ' + response.statusMessage);
        resolve({status: false, body:body});
      } else {
        resolve({status: true, body: body});
      }
    });
  });
};

I register the webhook when the user registers the repository to Yaydoc, for the push and pull_request events. The push event is used for building the documentation and hosting it on GitHub Pages. The pull_request event is used for checking the build of the pull request, as sketched below.
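
For context, GitHub identifies the event type of each webhook delivery in the `X-GitHub-Event` header, so the endpoint can dispatch the two events to their code paths. The route below is only an illustrative sketch, not Yaydoc's actual handler:

router.post('/ci/webhook', function(req, res) {
  var event = req.get('X-GitHub-Event');
  if (event === 'push') {
    // generate documentation and deploy it to GitHub Pages
  } else if (event === 'pull_request') {
    // run the pull request build check shown below
  }
  res.status(202).send('Accepted');
});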

github.createStatus(commitId, req.body.repository.full_name, "pending", "Yaydoc is checking your build", repositoryData.accessToken, function(error, data) {
  if (!error) {
    var user = req.body.pull_request.head.label.split(":")[0];
    var targetBranch = req.body.pull_request.head.label.split(":")[1];
    var gitURL = `https://github.com/${user}/${req.body.repository.name}.git`;
    var data = {
      email: "admin@fossasia.org",
      gitUrl: gitURL,
      docTheme: "",
      debug: true,
      docPath: "",
      buildStatus: true,
      targetBranch: targetBranch
    };
    generator.executeScript({}, data, function(error, generatedData) {
      var status, description;
      if (error) {
        status = "failure";
        description = error.message;
      } else {
        status = "success";
        description = generatedData.message;
      }
      github.createStatus(commitId, req.body.repository.full_name, status, description, repositoryData.accessToken, function(error, data) {
        if (error) {
          console.log(error);
        } else {
          console.log(data);
        }
      });
    });
  }
});

When anyone opens a new PR, GitHub sends a request to the Yaydoc webhook. Then I send a status to GitHub saying "Yaydoc is checking your build" with state `pending`. After that, the documentation is generated and I check the exit code: if the exit code is zero, I send the status `success`; otherwise I send the status `failure`.

Store Log History for Repositories Registered to Yaydoc

Yaydoc, our automatic documentation generation and deployment project, generates and deploys documentation for each of its registered repositories. For every commit made to a registered repository, there is a corresponding build process running at Yaydoc. These build processes have their own logs, which are stored as text files. However, until now, these logs were never visible to the user. So, if a build process failed, the user would never know the reason, leaving them unable to rectify the error.

Hence, there was a need to make these logs available to users. The initial thought was to store only the latest log, overriding all previous logs for the repository. However, the developers unanimously decided to store a history of logs for each repository. The main motive behind this was to enable users to compare logs between different commits.

The content from the log files created is stored in a MongoDB collection. Following is the schema defined for the build logs.

const BuildLog = mongoose.model('BuildLog', mongoose.Schema({
  repository: String,  // `full_name` of the repository
  buildNumber: {       // Incrementing number for each build
    type: Number,
    default: 0,
  },
  generate: {
    data: Buffer,      // Generate logs content
    datetime: Date,    // Date time of generate log creation
  },
  ghpages: {
    data: Buffer,      // GitHub Pages logs content
    datetime: Date,    // Date time of GitHub Pages log creation
  },
}));

The repository collection is also updated with a builds key storing the number of times the build process has been triggered for a given repository. This key is incremented on every new build, and the new value is stored along with the log as buildNumber. A sketch of such an increment helper follows.
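
The increment helper itself isn't shown in this post; one plausible implementation on the Repository model, using Mongoose's $inc operator, could be:

/**
 * Increment the build counter of a repository and return the updated document
 * @param name: `full_name` of the repository
 * @param callback
 */
module.exports.incrementBuildNumber = function (name, callback) {
  Repository.findOneAndUpdate(
    {name: name},
    {$inc: {builds: 1}},
    {new: true},  // return the document after the increment
    function (error, repository) {
      callback(error, repository);
    }
  );
};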

The build process involves a documentation generation script and a documentation deployment script. The build number in the repository is incremented when we store the documentation generation logs. After that, the GitHub Pages logs are stored when the documentation deployment process completes.

Since the logs are stored in a text file at the location temp/<email>/<filename>.txt, we had to read the file using the NodeJS File System module. The file is read synchronously using the fs.readFileSync(filepath) function and then stored in the MongoDB collection.

/**
 * Store logs created while generating docs for a given repository
 * @param name: `full_name` of the repository
 * @param filepath: file path of the generate logs
 * @param callback
 */
module.exports.storeGenerateLogs = function (name, filepath, callback) {
  Repository.incrementBuildNumber(name, function (error, repository) {
    if (error) {
      return callback(error);
    }
    var buildlog = new BuildLog({
      repository: name,
      buildNumber: repository.builds,
      generate: {
        data: fs.readFileSync(filepath),
        datetime: new Date()
      }
    });
    buildlog.save(function (error, buildLog) {
      callback(error, buildLog);
    });
  });
};

/**
 * Store logs created while deploying docs for a given repository
 * @param name: `full_name` of the repository
 * @param filepath: file of the ghpages deploy logs
 * @param callback
 */
module.exports.storeGithubPagesLogs = function (name, filepath, callback) {
  Repository.getRepositoryByName(name, function (error, repository) {
    if (error) {
      return callback(error);
    }
    BuildLog.getParticularBuildLog(repository.name, repository.builds,
      function (error, buildLog) {
        if (error) {
          return callback(error);
        }
        buildLog.ghpages.data = fs.readFileSync(filepath);
        buildLog.ghpages.datetime = new Date();
        buildLog.save(function (error, buildLog) {
          callback(error, buildLog);
        });
      });
  });
};

The stored logs can then be retrieved at two different routes, with /:owner/:name/logs showing a list of logs from at most 10 builds and /:owner/:name showing the latest log. Similar to logs generated by Travis, accessing these routes doesn't require the user to log in to Yaydoc.

/**
 * Get a single repository with a log history of 10
 * @param name: `full_name` of the repository
 * @param callback
 */
module.exports.getRepositoryWithLogs = function (name, callback) {
  Repository.aggregate([
    { $match: {name: name} },
    {
      $lookup: {
        from: 'buildLogs',
        localField: 'name',
        foreignField: 'repository',
        as: 'logs'
      }
    },
    { $unwind: '$logs' },
    { $sort: { 'logs.buildNumber': -1 } },
    { $limit: 10 }
  ]).exec(function (error, results) {
    callback(error, results);
  });
};

In order to retrieve a repository along with its logs, we perform an aggregation in MongoDB which is similar to a LEFT JOIN in SQL. This is the $lookup aggregation: it performs a left outer join to an unsharded collection in the same database, filtering in documents from the "joined" collection for processing. A similar method is used to retrieve the latest log by setting the $limit stage to 1, as sketched below.
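
The latest-log variant isn't shown here; it differs only in the $limit stage. A sketch, with the function name assumed for illustration:

module.exports.getRepositoryWithLatestLog = function (name, callback) {
  Repository.aggregate([
    { $match: {name: name} },
    {
      $lookup: {
        from: 'buildLogs',
        localField: 'name',
        foreignField: 'repository',
        as: 'logs'
      }
    },
    { $unwind: '$logs' },
    { $sort: { 'logs.buildNumber': -1 } },
    { $limit: 1 }  // keep only the most recent build log
  ]).exec(function (error, results) {
    callback(error, results);
  });
};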

Resources:

  1. MongoDB Aggregation Lookup: https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/
  2. Mongoose Aggregate Constructor: http://mongoosejs.com/docs/api.html#index_Mongoose-Aggregate
  3. NodeJS File System: https://nodejs.org/api/fs.html#fs_fs_readfilesync_path_options

Deploying preview using surge in Yaydoc

In Yaydoc, we save the preview of the documentation on our local server and then show the preview using Express's static serve method. The problem is that Heroku doesn't provide a persistent filesystem, so our preview link expires within a few minutes. To solve this, I planned to deploy the preview to Surge so that it doesn't expire. For that, I wrote a shell script which deploys the preview to Surge, and I invoke the shell script using child_process.

#!/bin/bash

while getopts l:t:e:u: option
do
 case "${option}"
 in
 l) LOGIN=${OPTARG};;
 t) TOKEN=${OPTARG};;
 e) EMAIL=${OPTARG};;
 u) UNIQUEID=${OPTARG};;
 esac
done

export SURGE_LOGIN=${LOGIN}
export SURGE_TOKEN=${TOKEN}

./node_modules/.bin/surge --project temp/${EMAIL}/${UNIQUEID}_preview --domain ${UNIQUEID}.surge.sh

In the above snippet, I set the SURGE_LOGIN and SURGE_TOKEN environment variables so that Surge deploys the preview without prompting for credentials. Then I execute surge, specifying the preview path and the preview domain name.

const spawn = require("child_process").spawn;

exports.deploySurge = function(data, surgeLogin, surgeToken, callback) {
  var args = [
    "-l", surgeLogin,
    "-t", surgeToken,
    "-e", data.email,
    "-u", data.uniqueId
  ];

  var spawnedProcess = spawn('./surge_deploy.sh', args);
  spawnedProcess.on('exit', function(code) {
    if (code === 0) {
      callback(null, {description: 'Deployed successfully'});
    } else {
      callback({description: 'Unable to deploy'}, null);
    }
  });
};

Whenever the user generates documentation, I invoke the shell script using child_process, and if it exits with code 0, I pass the preview URL to the frontend via sockets so that the user can access it, roughly like the sketch below.
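
This last step isn't shown in the post; a minimal sketch, assuming a connected socket.io socket named `socket` and an illustrative 'preview-url' event name:

var surge = require('./surge_deploy');  // module exporting deploySurge (path assumed)

surge.deploySurge(data, process.env.SURGE_LOGIN, process.env.SURGE_TOKEN,
  function (error, result) {
    if (error === null) {
      // The deploy script publishes to <uniqueId>.surge.sh, so the
      // frontend can open the preview at this URL.
      socket.emit('preview-url', `http://${data.uniqueId}.surge.sh`);
    }
  });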


Avoiding Nested Callbacks using RxJS in Loklak Scraper JS

Loklak Scraper JS, as the name suggests, is a set of scrapers for social media websites written in NodeJS. A common requirement while scraping is that a parent webpage provides links to related child webpages, and the data that needs to be scraped is present in both the parent webpage and the child webpages. For example, let's say we want to scrape Quora user profiles matching the search query "Siddhant". The matching-profiles webpage for this example is https://www.quora.com/search?q=Siddhant&type=profile, which is the parent webpage, and the child webpages are the links of each matched profile.

Now, a simplistic approach is to first obtain the HTML of the parent webpage and then synchronously fetch the HTML of the child webpages and parse them to get the desired data. The problem with this approach is that it is slow, because it is synchronous.

A different approach is to use request-promise-native to implement the logic asynchronously. But there are limitations with this approach. The HTML of the child webpages can only be fetched after the HTML of the parent webpage is obtained, and the number of child webpages is dynamic. So, there is a request dependency between parent and child, i.e. only once we have the data from the parent webpage can we extract data from the child webpages. The code would look like this:

request(parent_url)
   .then(data => {
       ...
       request(child_url)
           .then(data => {
               // again nesting of child urls
           })
           .catch(error => {

           });
   })
   .catch(error => {

   });

 

Firstly, with this approach there is callback hell. Horrible, isn't it? And we don't know how many nested callbacks to use, as the number of child webpages is dynamic.

The saviour: RxJS

The solution to our problem is Reactive Extensions for JavaScript. Using RxJS, we can obtain the required data asynchronously and without callback hell!

The promise-request object of the parent webpage is obtained. From this promise-request object, an observable is generated using Rx.Observable.fromPromise. The flatMap operator is used to parse the HTML of the parent webpage and obtain the links of the child webpages. Then the map method is used to transform the links into promise-request objects, which are again transformed into observables. The HTML returned from the resulting observables is parsed and accumulated using the zip operator. Finally, the accumulated data is subscribed to. This is implemented in the getScrapedData method of the Quora JS scraper.

getScrapedData(query, callback) {
   // observable from parent webpage
   Rx.Observable.fromPromise(this.getSearchQueryPromise(query))
     .flatMap((t, i) => { // t is html of parent webpage
       // request-promise object of child webpages
       let profileLinkPromises = this.getProfileLinkPromises(t);
       // request-promise object to observable transformation
       let obs = profileLinkPromises.map(elem => Rx.Observable.fromPromise(elem));

       // each Quora profile is parsed
       return Rx.Observable.zip( // accumulation of data from child webpages
         ...obs,
         (...profileLinkObservables) => {
           let scrapedProfiles = [];
           for (let i = 0; i < profileLinkObservables.length; i++) {
             let $ = cheerio.load(profileLinkObservables[i]);
             scrapedProfiles.push(this.scrape($));
           }
           return scrapedProfiles; // accumulated data returned
         }
       )
     })
     .subscribe( // desired data is subscribed
       scrapedData => callback({profiles: scrapedData}),
       error => callback(error)
     );
 }

 


Scheduling Jobs to Check Expired Access Token in Yaydoc

In Yaydoc, we use the user's access token to do various tasks like pushing documentation, registering webhooks and checking their status. The user access token is very important to us, so I decided to add a cron job which checks whether the user token has expired. But one problem was that if we have a large number of users, our cron job would send thousands of requests at a time, which could break the app. So, I thought of queuing the process, and used the `async` library for queuing the jobs.
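
The scheduling itself can be done with a package such as node-cron; the snippet below is only a sketch (the schedule and module path are assumptions, not Yaydoc's actual configuration). The checkExpiredToken function it invokes is shown next.

var cron = require('node-cron');
var user = require('./backend/user');  // module exporting checkExpiredToken

// Run the expired-token check once a day at midnight
cron.schedule('0 0 * * *', function () {
  user.checkExpiredToken();
});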

const github = require("./github");
const queue = require("./queue");

User = require("../model/user");

exports.checkExpiredToken = function () {
  User.count(function (error, count) {
    if (error) {
      console.log(error);
    } else {
      // 10 users per page, so round up to cover a partially-filled last page
      var pages = Math.ceil(count / 10);
      for (var i = 0; i <= pages; i++) {
        User.paginateUsers(i, 10, function (error, users) {
          if (error) {
            console.log(error);
          } else {
            users.forEach(function (user) {
              queue.addTokenRevokedJob(user);
            });
          }
        });
      }
    }
  });
};

In the above code, I paginate the list of users in the database and add each user to the queue.

var tokenRevokedQueue = async.queue(function (user, done) {
  github.retriveUser(user.token, function (error, userData) {
    if (error) {
      if (user.expired === false) {
        mailer.sendMailOnTokenFailure(user.email);
        User.updateUserById(user.id, {
          expired: true
        }, function(error, data) {
          if (error) {
            console.log(error);
          }
        });
      }
      done();
    } else {
      done();
    }
  })
}, 2);

I made this queue with the help of async's queue method. In the first parameter, I pass the worker logic, and in the second parameter, how many jobs can be executed concurrently. I check whether the user has revoked the token by sending an API request to GitHub's user API. If it returns a 200 response, the token is valid; otherwise it is invalid. If the user token is invalid, I send an email to the user saying "Your access token is revoked, so sign in once again to continue the service." Jobs are added to the queue as sketched below.
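
Since async's queue exposes a push method, the helper that adds jobs can be as simple as the following sketch:

exports.addTokenRevokedJob = function (user) {
  // Each pushed user is processed by the worker function defined above
  tokenRevokedQueue.push(user);
};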


Using NodeJS modules of Loklak Scraper in Android

Loklak Scraper JS implements scrapers for social media websites so that they can be used on other platforms, like Android, or in a native Java project. This way there is only a single source for each scraper; as a result, it is easier to update the scrapers in response to changes in the websites. This blog explains how Loklak Wok Android, a peer for Loklak Server on the Android platform, uses the Twitter JS scraper to scrape tweets.

LiquidCore is a library available for Android that can be used to run standard NodeJS modules. But the Twitter scraper can't be used directly, due to the following problems:

  • 3rd party NodeJS libraries, like cheerio and request-promise-native, are used to implement the scraper, and LiquidCore doesn't support 3rd party libraries.
  • The scrapers are written in ES6, and as of now LiquidCore uses NodeJS 6.10.2, which doesn't support ES6 completely.

So, if the 3rd party NodeJS libraries can be included in our scraper code and the ES6 can be converted to ES5, LiquidCore can easily execute the Twitter scraper.

3rd party NodeJS libraries can be bundled into Twitter scraper using Webpack and ES6 can be transpiled to ES5 using Babel.

The required dependencies can be installed using:

$ npm install --save-dev webpack
$ npm install --save-dev babel-core babel-loader babel-preset-es2015

Bundling and Transpiling

Webpack does the bundling based on the configuration provided in webpack.config.js, present in the root directory of the project.

var fs = require('fs');

function listScrapers() {
   var src = "./scrapers/"
   var files = {};
   fs.readdirSync(src).forEach(function(data) {
       var entryName = data.substr(0, data.indexOf("."));
       files[entryName] = src+data;
   });
   return files;
}

module.exports = {
 entry: listScrapers(),
 target: "node",
 module: {
     loaders: [
         {
             loader: "babel-loader",
             test: /\.js?$/,
             query: {
                 presets: ["es2015"],
             }
         },
     ]
 },
 output: {
   path: __dirname + '/build',
   filename: '[name].js',
   libraryTarget: 'var',
   library: '[name]',
 }
};

 

Now let's break down the config file. The function listScrapers returns an object with the scraper name as key and the scraper's relative location as value, e.g.:

{
   twitter: "./scrapers/twitter.js",
   github: "./scrapers/github.js"
   // same goes for other scrapers
}

The parameters in module.exports, as described in the webpack documentation for multiple entry points and for using the generated output externally, are:

  • entry: Since a bundle file is required for each scraper, we provide the object returned by the listScrapers function. The multiple entry points provided generate multiple bundled files.
  • target: As the bundled files are to be used on the NodeJS platform, "node" is set here.
  • module: Using webpack, the code can be transpiled directly while bundling, so end users don't need to run separate commands for transpiling. module contains the babel configuration for transpiling.
  • output: options here customize the compilation of webpack
    • path: Location where bundled files are kept after compilation; "__dirname" means the current directory, i.e. the root directory of the project.
    • filename: Name of the bundled file; "[name]" here refers to the key of the object provided in entry, i.e. the key returned from listScrapers. For example, for the Twitter scraper, the filename of the bundled file will be "twitter.js".
    • libraryTarget: By default, the functions or methods inside bundled files can't be used externally – they can't be imported. By providing "var", the functions in the bundled module can be accessed.
    • library: The name of the library.

Now, time to do the compilation work:

$ ./node_modules/.bin/webpack

The bundled files can be found in the build directory. But the generated bundled files are large – around 77,000 lines. Large files are not encouraged for production purposes. So, the "-p" flag is used to generate bundled files for production – around 400 lines.

$ ./node_modules/.bin/webpack -p

Using LiquidCore to execute bundled files

The generated bundled files can be copied to the raw directory in res (the resources directory in Android). Now, events are emitted from the Activity/Fragment, and in response to those events the scraping function is invoked in the bundled JS file present in the raw directory; the vice-versa is also possible.

So, we handle some events in our JS file and send some events to the Android Activity/Fragment. The event handling and event emitting code in the JS file:

var query = "";
LiquidCore.on("queryEvent", function(msg) {
  query = msg.query;
});

LiquidCore.on("fetchTweets", function() {
  var twitterScraper = new twitter();
  twitterScraper.getTweets(query, function(data) {
    LiquidCore.emit("getTweets", {"query": query, "statuses": data});
  });
});

LiquidCore.emit('start');

 

First, a "start" event is emitted from the JS file, which is consumed in TweetHarvestingFragment by the getScrapedTweet method using startEventListener.

EventListener startEventListener = (service, event, payload) -> {
   JSONObject jsonObject = new JSONObject();
   try {
       jsonObject.put("query", query);
       service.emit(LC_QUERY_EVENT, jsonObject); // value of LC_QUERY_EVENT is "queryEvent"
   } catch (JSONException e) {
       Log.e(LOG_TAG, e.toString());
   }
   service.emit(LC_FETCH_TWEETS_EVENT); //value of  LC_FETCH_TWEETS_EVENT is  "fetchTweets"
};

 

The startEventListener then emits “queryEvent” with a JSONObject that contains the query to search tweets for scraping. This event is consumed in JS file by:

var query = "";
LiquidCore.on("queryEvent", function(msg) {
  query = msg.query;
});

 

After “queryEvent”, “fetchTweets” event is emitted from fragment, which is handled in JS file by:

LiquidCore.on("fetchTweets", function() {
  var twitterScraper = new twitter(); // scraping object is created
  twitterScraper.getTweets(query, function(data) { // function that scrapes twitter
    LiquidCore.emit("getTweets", {"query": query, "statuses": data});
  });
});

 

Once the scraped data is obtained, it is sent back to the fragment by emitting a "getTweets" event from the JS file; {"query": query, "statuses": data} contains the scraped data. This event is consumed in Android by getTweetsEventListener.

EventListener getTweetsEventListener = (service, event, payload) -> { // payload contains scraped data
   Push push = mGson.fromJson(payload.toString(), Push.class);
   emitter.onNext(push);
};

 

LiquidCore creates a NodeJS instance to execute the bundled JS file. The NodeJS instance is called a MicroService in LiquidCore terminology. For all this event handling to work, the NodeJS instance is created with a ServiceStartListener, where all the EventListeners are added.

MicroService.ServiceStartListener serviceStartListener = (service -> {
   service.addEventListener(LC_START_EVENT, startEventListener);
   service.addEventListener(LC_GET_TWEETS_EVENT, getTweetsEventListener);
});
URI uri = URI.create("android.resource://org.loklak.android.wok/raw/twitter"); // Note .js is not used
MicroService microService = new MicroService(getActivity(), uri, serviceStartListener);
microService.start();


Sending Data between components of SUSI MagicMirror Module

SUSI MagicMirror module is a module to add the SUSI assistant right on your MagicMirror. The MagicMirror software constitutes an Electron app to which modules can be added easily. Since there are many modules, some functionality requires interaction between various modules by transfer of information. MagicMirror also provides a node_helper script that facilitates a module performing background tasks. Therefore, a mechanism to transfer information from node_helper to the various components of a module is also needed.

MagicMirror provides an inbuilt module notification system that can be used to send notifications across modules, and a socket notification system to send information between node_helper and the various components of the module.

Our codebase for SUSI MagicMirror is divided mainly into two parts: a Main module that handles the whole process of hotword detection, speech recognition, calling the SUSI API and saving audio after Text to Speech, and a Renderer module which manages the display of content on the mirror screen and plays back the file obtained by speech synthesis. Plainly put, the Main module handles the backend logic of the application and the Renderer handles the frontend. The Main and Renderer modules work on different layers of the application, so we need a mechanism to facilitate communication between them. A schematic of the required flow is:

As you can see in the above diagram, we need to transfer a lot of information between the components. We display animation and text based on the current state of recognition in the module, so we need to transfer this information frequently. This task is accomplished by utilizing the inbuilt socket notification system in MagicMirror. For every event, such as when the system enters the listening, busy or recognized-speech state, we need to pass a message to the renderer. To achieve this, we made a rendererSend function to send notifications to the renderer.

const rendererSend =  (event: NotificationType , payload: any) => {
   this.sendSocketNotification(event, payload);
}

This function takes an event and a payload as arguments. The event tells which event occurred, and the payload is any data that we wish to send. This method in turn calls the sendSocketNotification method provided by the MagicMirror module system to send socket notifications within the module.

When certain events occur, such as when the system enters the busy or listening state, we trigger a rendererSend call to send a socket notification to the renderer. The rendererSend method is supplied in the State Machine components available to every state. The task of sending notifications can be done using code snippets such as:

// system enters busy state
this.components.rendererSend("busy", {});
// send speech recognition hypothesis text to renderer
this.components.rendererSend("recognized", {text: recognizedText});
// send susi api output json to renderer to display interactive results while Speech Output is performed
this.components.rendererSend("speak", {data: susiResponse});

The socket notification sent via the above method is received in the SUSI module via a callback called socketNotificationReceived. We need to define this callback while registering the module with MagicMirror. So, we register the MMM-SUSI-AI module by adding a definition for the socketNotificationReceived method.

Module.register("MMM-SUSI-AI", {
//other function definitions
***
   // define socketNotificationReceived function
   socketNotificationReceived: function (notification, payload) {
       susiMirror.receivedNotification(notification, payload);
   },
***
});

In this way, we send all received notifications to the susiMirror object in the renderer module by calling its receivedNotification method.

We can now receive all the notifications in susiMirror and update the UI. To handle notifications, we define the receivedNotification method as follows:

public receivedNotification(type: NotificationType, payload: any): void {

   this.visualizer.setMode(type);
   switch (type) {
       case "idle":
            // handle idle state
           break;
       case "listening":
           // handle listening state
           break;
       case "busy":
           // handle busy state
         break;
       case "recognized":
           // handle recognized state. This notification also contains a payload about the hypothesis text           
           break;
       case "speak":
           // handle speaking state. We need to play back audio file and display text on screen for SUSI Output. Notification Payload contains SUSI Response
           break;
   }
}

In this way, we utilize the Socket Notification System provided by the MagicMirror Electron Application to send data across the components of Magic Mirror module for SUSI AI.


Managing States in SUSI MagicMirror Module

SUSI MagicMirror Module is a module for the MagicMirror project by which you can use SUSI directly on your MagicMirror. While developing the module, a problem I faced was managing the flow between the various stages of processing the user's voice input and displaying SUSI's output. This was solved by managing transitions between the various states of the SUSI MagicMirror Module, namely:

  • Idle State: SUSI MagicMirror Module is actively listening for the hotword.
  • Listening State: In this state, the user's speech input from the microphone is recorded to a file.
  • Busy State: The user has finished speaking or timed out. Now, we need to transcribe the audio spoken by the user, send the query to the SUSI server and speak out the SUSI response.

The flow between these states can be explained by the following diagram:

As is clear from the above diagram, a state cannot transition into every other state; only some transitions are allowed. Thus, we need a mechanism that guarantees only allowed transitions and ensures they trigger at the right time.

For achieving this, we first implement an abstract class State with the common properties of a state. We store whether a state can transition into some other state in a map, allowedStateTransitions, which maps the state names "idle", "listening" and "busy" to their corresponding states. The transition method for moving from one state to another is implemented in the following way.

protected transition(state: State): void {
   if (!this.canTransition(state)) {
       console.error(`Invalid transition to state: ${state.name}`);
       return;
   }

   this.onExit();
   state.onEnter();
}

private canTransition(state: State): boolean {
   return this.allowedStateTransitions.has(state.name);
}

Here we first check if a transition is valid; then we exit the current state and enter the supplied state. A plausible skeleton of the full class follows.
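
For reference, a skeleton of the abstract State class, reconstructed from the methods above (the constructor signature and property names are assumptions, not the exact SUSI MagicMirror code):

export abstract class State {
    protected allowedStateTransitions: Map<StateName, State>;

    constructor(public name: StateName,
                protected components: IStateMachineComponents) {
    }

    public set AllowedStateTransitions(transitions: Map<StateName, State>) {
        this.allowedStateTransitions = transitions;
    }

    // Each concrete state implements its own entry and exit behavior
    public abstract onEnter(): void;
    public abstract onExit(): void;

    protected transition(state: State): void {
        if (!this.canTransition(state)) {
            console.error(`Invalid transition to state: ${state.name}`);
            return;
        }
        this.onExit();
        state.onEnter();
    }

    private canTransition(state: State): boolean {
        return this.allowedStateTransitions.has(state.name);
    }
}

We also define a state machine that initializes the default state of the Mirror and defines the valid transitions for each state. Here is the constructor for the state machine.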

constructor(components: IStateMachineComponents) {
        this.idleState = new IdleState(components);
        this.listeningState = new ListeningState(components);
        this.busyState = new BusyState(components);

        this.idleState.AllowedStateTransitions = new Map<StateName, State>([["listening", this.listeningState]]);
        this.listeningState.AllowedStateTransitions = new Map<StateName, State>([["busy", this.busyState], ["idle", this.idleState]]);
        this.busyState.AllowedStateTransitions = new Map<StateName, State>([["idle", this.idleState]]);

        this.currentState = this.idleState;
        this.currentState.onEnter();
}

Now, the question arises: how do we detect when to transition from one state to another? For that, we subscribe to the Snowboy detector observable. We use the Snowboy library for hotword detection. Snowboy detects whether an audio stream is silent, has some sound, or contains the hotword. We bind all this information to an observable using the ReactiveX Observable pattern. This gives us a stream of events to which we can subscribe and get the results. It can be understood in the following code snippet.

detector.on("silence", () => {
   this.subject.next(DETECTOR.Silence);
});

detector.on("sound", () => {});

detector.on("error", (error) => {
   console.error(error);
});

detector.on("hotword", (index, hotword) => {
   this.subject.next(DETECTOR.Hotword);
});
public get Observable(): Observable<DETECTOR> {
   return this.subject.asObservable();
}

Now, in the idle state, we subscribe to the values emitted by the observable of the detector to know when a hotword is detected to transition to the listening state. Here is the code snippet for the same.

this.detectorSubscription = this.components.detector.Observable.subscribe(
   (value) => {
   switch (value) {
       case DETECTOR.Hotword:
           this.transition(this.allowedStateTransitions.get("listening"));
           break;
   }
});

In the listening state, we subscribe to the states emitted by the detector observable to find when silence is detected so that we can stop recording the audio stream for processing and move to busy state.

this.detectorSubscription = this.components.detector.Observable.subscribe(
   (value) => {
   switch (value) {
       case DETECTOR.Silence:
           record.stop();
           this.transition(this.allowedStateTransitions.get("busy"));
           break;
   }
});

The task of speaking the audio and displaying results on the screen is done by the renderer. Communication with the renderer is done via a RendererCommunicator object using a notification system. We also bind its events to an observable so that we know when SUSI has finished speaking the result. To transition from the busy state to the idle state, we subscribe to the renderer observable in the following manner.

this.rendererSubscription = this.components.rendererCommunicator.Observable.subscribe((type) => {
   if (type === "finishedSpeaking") {
       this.transition(this.allowedStateTransitions.get("idle"));
   }
});

In this way, we transition between various states of MagicMirror Module for SUSI in an efficient manner.


Hotword Detection on SUSI MagicMirror with Snowboy

The Magic Mirror in the story "Snow White and the Seven Dwarfs" had one cool feature: the Queen could call the mirror just by saying "Mirror" and then ask it questions. The MagicMirror project helps you develop a mirror quite close to the one in the fable, but how cool would it be to have the same feature? Hotword detection on the SUSI MagicMirror Module helps us achieve exactly that.

The hotword detection on the SUSI MagicMirror Module was accomplished with the help of the Snowboy hotword detection library. Snowboy is a cross-platform hotword detection library; we use the same library for Android, iOS as well as in the MagicMirror module (NodeJS).

Snowboy can be added to a Javascript/Typescript project with Node Package Manager (npm) by:

$ npm install --save snowboy

For detecting the hotword, we need to record audio continuously from the microphone. To accomplish the task of recording, we have another npm package, node-record-lpcm16. It uses the SoX binary to record audio. First we need to install SoX:

Linux (Debian based distributions)

$ sudo apt-get install sox libsox-fmt-all

Then, you can install node-record-lpcm16 package using npm using

$ npm install node-record-lpcm16

Then, we need to import it in the needed file using

import * as record from "node-record-lpcm16";

You may then create a new microphone stream using,

const mic = record.start({
   threshold: 0,
   sampleRate: 16000,
   verbose: true,
});

The mic constant here is a NodeJS Readable Stream. So, we can read the incoming data from the Microphone and process it.

We can now process this stream using the Detector class of Snowboy. We declare a child class extending the Snowboy Detector to suit our needs.

import { Detector, Models } from "snowboy";

export class HotwordDetector extends Detector {

   constructor(models: Models) {
       super({
           resource: `${process.env.CWD}/resources/common.res`,
           models: models,
           audioGain: 2.0,
       });
       this.setUp();
   }

   // other methods
}

First, we create a Snowboy Detector by calling the parent constructor with the resource file common.res and a Snowboy model as arguments. A Snowboy model is a file which tells the detector which hotword to listen for. Currently, the module supports the hotword "Susi", but it can be extended to support other hotwords like "Mirror" too. You can train the hotword for SUSI with your own voice and get the latest model file at https://snowboy.kitt.ai/hotword/7915. You may then replace the susi.pmdl file in the resources folder with your own for a better experience.

Now, we need to delegate the callback methods of Detector class to know about the current state of detector and take an action on its basis. This is done in the setUp() method.

private setUp(): void {
   this.on("silence", () => {
      // handle silent state
   });

   this.on("sound", () => {
      // handle sound detected state
   });

   this.on("error", (error) => {
      // handle error
   });

   this.on("hotword", (index, hotword) => {
      // hotword detected 
   });
}

If you look into the implementation of the Detector class of Snowboy, it extends NodeJS.WritableStream. So, we can pipe our microphone input stream to the Detector class and it handles all the states. This can be done using:

mic.pipe(detector as any);

So now, all the input from the microphone will be processed by the Snowboy detector class, and we can know when the user has spoken the word "Susi". We can then start speech recognition and make other changes in the user interface based on the different states.

After this, we can simply say "Susi" followed by our query to ask SUSI on the MagicMirror. A video demonstration can be seen here:

https://youtu.be/JoZ5HBcM5xo
