Deploying preview using surge in Yaydoc

In Yaydoc, we save the generated documentation preview on our local server and serve it using Express's static serve method. The problem is that Heroku does not offer persistent storage, so our preview link expires within a few minutes. To solve this, I decided to deploy the preview to Surge so that it does not expire. For that I wrote a shell script which deploys the preview to Surge, and I invoke the shell script from Node using child_process.

#!/bin/bash

# Parse the flags passed in from Node: login, token, email and unique id
while getopts l:t:e:u: option
do
 case "${option}"
 in
 l) LOGIN=${OPTARG};;
 t) TOKEN=${OPTARG};;
 e) EMAIL=${OPTARG};;
 u) UNIQUEID=${OPTARG};;
 esac
done

# Surge reads these environment variables, so no interactive login is required
export SURGE_LOGIN=${LOGIN}
export SURGE_TOKEN=${TOKEN}

./node_modules/.bin/surge --project temp/${EMAIL}/${UNIQUEID}_preview --domain ${UNIQUEID}.surge.sh

In the above snippet, I initialize the SURGE_LOGIN and SURGE_TOKEN environment variables so that Surge can deploy the preview without prompting for credentials. Then I run Surge, specifying the preview path as the project and the preview domain name.

const spawn = require('child_process').spawn;

exports.deploySurge = function(data, surgeLogin, surgeToken, callback) {
  var args = [
    "-l", surgeLogin,
    "-t", surgeToken,
    "-e", data.email,
    "-u", data.uniqueId
  ];

  var spawnedProcess = spawn('./surge_deploy.sh', args);
  spawnedProcess.on('exit', function(code) {
    if (code === 0) {
      callback(null, {description: 'Deployed successfully'});
    } else {
      callback({description: 'Unable to deploy'}, null);
    }
  });
};

Whenever a user generates documentation, I invoke the shell script through child_process. If it exits with code 0, I pass the preview URL to the frontend via sockets, and the user can then access the URL.
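To show how this fits together, here is a minimal sketch of that wiring. deploySurge is the function shown above; the socket event names and the way the credentials are read from the environment are illustrative assumptions rather than Yaydoc's actual code.

var deploy = require('./deploy');  // module exporting deploySurge (shown above)

// Hypothetical hook that runs once documentation generation succeeds.
function onDocumentationGenerated(socket, data) {
  deploy.deploySurge(data, process.env.SURGE_LOGIN, process.env.SURGE_TOKEN,
    function (error, result) {
      if (error) {
        socket.emit('deploy-failure', error);  // event names are illustrative
      } else {
        // The domain matches the one used in surge_deploy.sh
        socket.emit('preview-ready', {url: 'https://' + data.uniqueId + '.surge.sh'});
      }
    });
}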


Using Firebase Test Lab for Testing test cases of Phimpme Android

We have now started writing test cases for Phimpme Android. While running my instrumentation tests, I noticed a Cloud Testing tab in Android Studio, which is for Firebase Test Lab. Firebase Test Lab provides cloud-based infrastructure for testing Android apps. Not everyone has devices running every Android version, yet testing on all of them is equally important.

How I used test lab in Phimpme

  • Run your first test on Firebase

Select Test Lab in your project in the left navigation of the Firebase console, then click Run a Robo test. A Robo test automatically explores your app on a wide array of devices to find defects and report any crashes that occur. It does not require you to write test cases; all you need is the app's APK.

Upload your application's APK (app-debug-unaligned.apk) on the next screen and click Continue.

Configure the device selection; a wide range of devices and API levels is available, and you can save the selection as a template for future use.

Click Start test to begin testing. The tests start running and real-time progress is shown as well.

  • Using Firebase Test Lab from Android Studio

This requires Android Studio 2.0+. You need to edit the configuration of the Android Instrumentation test.

Select Firebase Test Lab Device Matrix under Target. You can configure the matrix, which defines the virtual and physical devices on which your tests will run. See the screenshot below for details.

Note: You need to enable Firebase in your project.

So, using Test Lab on Firebase, we can easily run our test cases on multiple devices and make our app more robust.


Scheduling Jobs to Check Expired Access Token in Yaydoc

In Yaydoc, we use the user's access token for various tasks such as pushing documentation, registering webhooks and checking their status. The access token is very important to us, so I decided to add a cron job that checks whether a user's token has expired or been revoked. One problem was that, with a large number of users, the cron job would send thousands of requests at once and could break the app. So I decided to queue the work, using the `async` library for the job queue.

const github = require("./github")
const queue = require("./queue")

const User = require("../model/user");

exports.checkExpiredToken = function () {
  User.count(function (error, count) {
    if (error) {
      console.log(error);
    } else {
      // 10 users per page; pages are assumed to be zero-indexed
      var pages = Math.ceil(count / 10);
      for (var i = 0; i < pages; i++) {
        User.paginateUsers(i, 10,
        function (error, users) {
          if (error) {
            console.log(error);
          } else {
            users.forEach(function(user) {
              queue.addTokenRevokedJob(user);
            })
          }
        })
      }
    }
  })
}

In the above code I’m paginating the list of users in the database and then I’m adding each user to the queue.

const async = require("async");
const github = require("./github");
const mailer = require("./mailer");   // path assumed; module that sends the notification mails
const User = require("../model/user");

var tokenRevokedQueue = async.queue(function (user, done) {
  github.retriveUser(user.token, function (error, userData) {
    if (error) {
      if (user.expired === false) {
        mailer.sendMailOnTokenFailure(user.email);
        User.updateUserById(user.id, {
          expired: true
        }, function(error, data) {
          if (error) {
            console.log(error);
          }
        });
      }
      done();
    } else {
      done();
    }
  })
}, 2);

I created this queue with async's queue method. The first parameter is the worker function and the second is the number of jobs that can be processed concurrently. I check whether the user has revoked the token by sending a request to GitHub's user API with it. If GitHub responds with a 200 status the token is still valid; otherwise it is invalid. If the token is invalid, I send the user an email saying "Your access token is revoked, so sign in once again to continue the service".
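The post does not show how the queue is exposed to the rest of the code or how the check is scheduled. Below is a minimal sketch under the assumption that the queue module exports a small wrapper around the async queue above and that a daily cron schedule, here via the node-cron package, triggers checkExpiredToken; the module paths and the schedule itself are illustrative.

// In the queue module: expose a helper that pushes a user onto tokenRevokedQueue
exports.addTokenRevokedJob = function (user) {
  tokenRevokedQueue.push(user);
};

// Elsewhere, schedule the check; node-cron and the midnight schedule are assumptions
const cron = require('node-cron');
const tokenChecker = require('./checkToken');  // module exporting checkExpiredToken

cron.schedule('0 0 * * *', function () {
  tokenChecker.checkExpiredToken();
});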


Adding Github buttons to Generated Documentation with Yaydoc

Many times repository owners want to link to their GitHub source code, issue tracker, etc. from the documentation. This can also help direct some users towards becoming potential contributors to the repository. As a step towards this, we added the ability to automatically add GitHub buttons to the top of docs generated with Yaydoc.

To do so, we created a custom Sphinx extension which makes use of http://buttons.github.io/, an excellent service for embedding GitHub buttons in any website. The extension takes multiple config values and uses them to generate the `html`, which it adds to the top of the internal docutils tree using a raw node.

GITHUB_BUTTON_SPEC = {
    'watch': ('eye', 'https://github.com/{user}/{repo}/subscription'),
    'star': ('star', 'https://github.com/{user}/{repo}'),
    'fork': ('repo-forked', 'https://github.com/{user}/{repo}/fork'),
    'follow': ('', 'https://github.com/{user}'),
    'issues': ('issue-opened', 'https://github.com/{user}/{repo}/issues'),
}

def get_button_tag(user, repo, btn_type, show_count, size):
    spec = GITHUB_BUTTON_SPEC[btn_type]
    icon, href = spec[0], spec[1].format(user=user, repo=repo)
    tag_fmt = '<a class="github-button" href="{href}" data-size="{size}"'
    if icon:
        tag_fmt += ' data-icon="octicon-{icon}"'
    tag_fmt += ' data-show-count="{show_count}">{text}</a>'
    return tag_fmt.format(href=href,
                          icon=icon,
                          size=size,
                          show_count=show_count,
                          text=btn_type.title())

The above snippet shows how it takes various parameters such as the user name, the name of the repository, the button type (one of fork, issues, watch, follow and star), whether to display counts beside the buttons, and whether a large button should be used. Another method, get_button_tags, reads the various configs and calls the above method with the appropriate parameters to generate each button.

The extension makes use of the doctree-resolved event emitted by sphinx to hook into the internal doctree. The following snippet shows how it is done.

def on_doctree_resolved(app, doctree, docname):
    if not app.config.github_user_name or not app.config.github_repo:
        return
    buttons = nodes.raw('', get_button_tags(app.config), format='html')
    doctree.insert(0, buttons)

Finally we add the custom javascript using the add_javascript method.

app.add_javascript('https://buttons.github.io/buttons.js')

To use this with Yaydoc, users just need to add the following to their .yaydoc.yml file.

build:
  github_button:
    buttons:
      watch: true
      star: true
      issues: true
      fork: true
      follow: true
    show_count: true
    large: true

Resources

  1. Homepage of GitHub:buttons – http://buttons.github.io/
  2. Sphinx extension tutorial – http://www.sphinx-doc.org/en/stable/extdev/tutorial.html

Implementing Search Bar Using GitHub API In Yaydoc CI

In Yaydoc, documentation is generated by entering the URL of a git repository into an input box, so a user can generate documentation for any public repository, see the preview and, if they have access, push the documentation to GitHub Pages in one click. In Yaydoc CI, however, a user can register a repository only if they have access to it, so I decided to show a list of repositories the user can select from. The problem is that GitHub does not return all of a user's repositories in a single API call, so I built a search bar in which the user can search for a repository and register it with Yaydoc CI.

var search = function () {
  var username = $("#orgs").val().split(":")[1];
  const searchBarInput = $("#search_bar");
  const searchResultDiv = $("#search_result");

  searchResultDiv.empty();
  if (searchBarInput.val() === "") {
    searchResultDiv.append('<p class="text-center">Please enter the repository name<p>');
    return;
  }

  searchResultDiv.append('<p class="text-center">Fetching data<p>');

  $.get(`https://api.github.com/search/repositories?q=user:${username}+fork:true+${searchBarInput.val()}`, function (result) {
    searchResultDiv.empty();
    if (result.total_count === 0) {
      searchResultDiv.append(`<p class="text-center">No results found<p>`);
      styles.disableButton("btnRegister");
    } else {
      styles.disableButton("btnRegister");
      var select = '<label class="control-label" for="repositories">Repositories:</label>';
      select += '<select class="form-control" id="repositories" name="repository" required>';
      select += `<option value="">Please select</option>`;
      result.items.forEach(function (x){
        select += `<option value="${x.full_name}">${x.full_name}</option>`;
      });
      select += '</select>';
      searchResultDiv.append(select);
    }
  });
};

$(function() {
  $("#search").click(function () {
    search();
  });
});

In the above snippet, I define a search function which is executed when the user clicks the search button. The function reads the search query from the input box; if the query is empty it shows the message "Please enter the repository name", otherwise it hits the GitHub API to fetch the user's repositories. If GitHub returns an empty result it shows "No results found", and "Fetching data" is shown while the search is in progress.

$('#search_bar').on('keyup keypress', function(e) {
  var keyCode = e.keyCode || e.which;
  if (keyCode === 13) {
    search();
  }
});

$('#ci_register').on('keyup keypress', function(e) {
  var keyCode = e.keyCode || e.which;
  if (keyCode === 13) {
    e.preventDefault();
    return false;
  }
});

We still faced a problem: pressing the Enter key submitted the form automatically. So I registered an event listener that checks whether the key code is 13, which represents the Enter key; if it is, the form submission is prevented. You can see the search bar working in Yaydoc CI.


Using API Blueprint with Yaydoc

As part of extending the capability of Yaydoc to document APIs, this week we integrated API Blueprint with Yaydoc. Now we can parse apib files and add the parsed content to the generated documentation. From the official Homepage of API Blueprint,

API Blueprint is simple and accessible to everybody involved in the API lifecycle. Its syntax is concise yet expressive. With API Blueprint you can quickly design and prototype APIs to be created or document and test already deployed mission-critical APIs. It is a documentation-oriented web API description language. The API Blueprint is essentially a set of semantic assumptions laid on top of the Markdown syntax used to describe a web API.

To integrate API Blueprint with Yaydoc, we used a Sphinx extension named sphinxcontrib-apiblueprint. This extension can directly translate text in API Blueprint format into docutils nodes. The advantage of this approach, compared to tools like aglio, is that the generated HTML fits in nicely with the existing theme, though in the future we may provide the ability to generate HTML using tools like aglio if the user prefers. Adding an extension to Sphinx is very easy; in the conf.py template, we added the extension to the already enabled list of extensions.

extensions += ['sphinxcontrib.apiblueprint']

The above extension provides an apiblueprint directive which can then be used to include apib files. The directive is very similar to the built-in include directive; the difference is just that it should only be used to include files in API Blueprint format. You can see an example of how to use this directive below.

.. apiblueprint:: <path to apib file>

Although this is enough for projects that use the reST markup format, it cannot be used with projects using Markdown as the primary markup format, since Markdown does not support the concept of directives. To solve this, we used the eval_rst block provided by recommonmark in Yaydoc. It allows users to embed valid reST within Markdown, and recommonmark will properly parse the embedded text as reST. A user can now use this to include directives within Markdown. You can see an example below.

```eval_rst
.. apiblueprint:: <path to apib file>
```

To implement this, we used the AutoStructify class provided by recommonmark. Here's a snippet from our conf.py template. Note that this has far-reaching effects: users can now use constructs like toctree in Markdown, which wasn't possible before.

from recommonmark.transform import AutoStructify

def setup(app):
    app.add_config_value('recommonmark_config', {
        'enable_eval_rst': True,
    }, True)
    app.add_transform(AutoStructify)

Let's see all of this in action. Here's a preview of documentation generated with API Blueprint using Yaydoc.


Continuous Integration in Yaydoc using GitHub webhook API

In Yaydoc, Travis is used to push the documentation for each and every commit. But this makes us rely on a third party to push the documentation and, in the long run, would prevent us from implementing new features, so we decided to handle continuous documentation pushing ourselves. In order to build the documentation for every commit, we have to know when the user pushes code. This can be achieved using the GitHub webhook API: we register our API endpoint with a specific GitHub repository, and GitHub then sends a POST request to our API on every commit.

The "auth/ci" handler is used to get access on behalf of the user. Here we ask the user to grant Yaydoc access to their public repositories, permission to read organization details, and permission to write a webhook to the repository. I also maintain state by setting a ci flag in the session, so that I know whether the OAuth callback is for a gh-pages deploy or a CI deploy.

On the callback, I keep the necessary information such as the username, access token, GitHub ID and email in the session. Then, based on the ci flag in the session, I redirect to the appropriate handler; in this case I redirect to "ci/register".
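The code for these two routes is not included in the post; the following is a minimal sketch of what they could look like with a plain OAuth redirect flow. The scopes follow the description above, while the route paths, parameter names, session keys and the exchangeCodeForToken helper are illustrative assumptions.

// Sketch of the OAuth entry point and callback described above (names are illustrative).
router.get('/auth/ci', function (req, res) {
  req.session.ci = true;  // remember that this OAuth flow is for CI registration
  var params = [
    'client_id=' + process.env.GITHUB_CLIENT_ID,
    'scope=' + encodeURIComponent('public_repo read:org admin:repo_hook')
  ].join('&');
  res.redirect('https://github.com/login/oauth/authorize?' + params);
});

router.get('/callback', function (req, res) {
  // exchangeCodeForToken is a hypothetical helper that trades the OAuth code
  // for an access token and fetches the user's profile from the GitHub API.
  exchangeCodeForToken(req.query.code, function (error, token, profile) {
    req.session.token = token;
    req.session.username = profile.login;
    req.session.githubId = profile.id;
    req.session.email = profile.email;
    // '/deploy' stands in for the gh-pages deploy flow
    res.redirect(req.session.ci ? '/ci/register' : '/deploy');
  });
});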

After redirecting to "ci/register", I fetch all of the user's public repositories using the GitHub API and ask the user to choose the repository they want to integrate with Yaydoc CI.

router.post('/register', function (req, res, next) {
  // req.body.repository holds the repository's full name (e.g. "user/repo") as submitted by the form
  var repositoryName = req.body.repository.split("/")[1];
  request({
    url: `https://api.github.com/repos/${req.session.username}/${repositoryName}/hooks?access_token=${req.session.token}`,
    method: 'POST',
    json: {
      name: "web",
      active: true,
      events: [
        "push"
      ],
      config: {
        url: process.env.HOSTNAME + '/ci/webhook',
        content_type: "json"
      }
    }
  }, function(error, response, body) {
    repositoryModel.newRepository(req.body.repository,
      req.session.username,
      req.session.githubId,
      crypter.encrypt(req.session.token),
      req.session.email)
      .then(function(result) {
        res.render("index", {
          showMessage: true,
          messages: `Thanks for registering with Yaydoc. Hereafter documentation will be pushed to GitHub Pages on each commit.`
        })
      })
  })
})

After the user chooses the repository, a POST request is sent to "ci/register"; then I register the webhook with the repository and save the repository and user details in the database, so that they can be used when GitHub sends a request asking us to push the documentation to GitHub Pages.

router.post('/webhook', function(req, res, next) {
  var event = req.get('X-GitHub-Event')
  if (event == 'push') {
      repositoryModel.findOneRepository(
        {
          githubId: req.body.repository.owner.id,
          name: req.body.repository.name
        }
      ).then(function(result) {
        var data = {
          email: result.email,
          gitUrl: req.body.repository.clone_url,
          docTheme: "",
        }
        generator.executeScript({}, data, function(err, generatedData) {
            deploy.deployPages({}, {
              email: result.email,
              gitURL: req.body.repository.clone_url,
              username: result.username,
              uniqueId: generatedData.uniqueId,
              encryptedToken: result.accessToken
            })
        })
      })
      res.json({
        status: true
      })
   }
})

Once the webhook is registered, GitHub will send a request to the URL we registered with the repository; in our case that is "https://yaydoc.herokuapp.com/ci/webhook". The type of event can be determined by reading the 'X-GitHub-Event' header. Right now I only register for the push event, so that is the only event we will receive. GitHub also includes the repository details in the request body.

When the user pushes a commit to the repository, GitHub sends a POST request to Yaydoc's server. We then get the repository name and the owner's GitHub ID from the request body and use them to retrieve the access token that was stored in the database when the user registered the repository with the CI. The documentation is generated using the generate script and pushed to GitHub Pages using the deploy script.

Now Yaydoc generates documentation on every push to the repository, and handling this ourselves will let us integrate new features in our own custom environment. We also plan to build a fully featured CI platform.


Rendering Open Event Server’s API-Blueprint document

After manually writing the API Blueprint document for FOSSASIA's Open Event Server project, we wanted to know how to render the document, how to check it in an HTML client-friendly format, and how to change its look as we go. We found two ways of rendering it.

They are:

1) The apiary editor:

This editor helps us render API Blueprints and present them in a readable, documented API format. When we create an API Blueprint manually, we always follow the standard pattern for writing one, i.e. the name and metadata, followed by resource groups and actions, which was already discussed in the last blog. To use the apiary editor, we start by creating our first project. On our first use of the editor, we get a default "polls and vote" example API project, which serves as a template and guide. The poll/vote API looks something like this in editor mode:

 

Apiary lets us test an API, document it, inspect it or simply edit it. We start by creating a project called "open-event-api". Next, in the editor mode of apiary, we add the contents of our API Blueprint documents.
Here is an example of how the USERS API is rendered. If our request and response are correct, clicking List All Users gives a 200 response in the editor:

However, if we go off-format with the API Blueprint, we get an invalid error:

The final rendering, with each API's request and response, can be seen in document mode.
The document-mode request and response look like this:

This rendered doc can be viewed publicly using the link obtained in document mode. Similarly, we test the rest of the API in the editor. This is a simple way to render your API Blueprint.

2)  The aglio renderer:

Since an API Blueprint is written in the .apib format, the downside is that it is not easily viewable by readers. Even though we can use apiary to view the rendered docs and get a shareable link, we would also like the docs for our API server to be hosted on our own server. So we use Aglio to do exactly that.

Aglio is an API Blueprint renderer which supports multiple themes. It converts the apib file into user-readable formats such as PDF, HTML, etc. Since we want to host it as a web page, we render it as HTML. It outputs static HTML that can be served by any web host. Because API Blueprint is a Markdown-based document format, this lets us write API descriptions and documentation in a simple and straightforward way.
An example of how an aglio-rendered document looks in the three-column format:

The best thing about Aglio is that not only does it support many themes and templates, it also allows you to provide your own custom theme and template to render the HTML file from the API Blueprint.

How to use aglio renderer:

  • First, install aglio:
npm install -g aglio
  • After installation, go to the folder where the .apib file is stored and generate the HTML. There are five built-in themes available with two-column and three-column layouts. They are:
# Default theme
aglio -i input.apib -o output.html

-> This command takes the input.apib file as the API Blueprint input and creates a rendered output file named output.html.

 

# Use three-column layout
aglio -i input.apib --theme-template triple -o output.html

-> This command also takes input.apib as input and creates output.html, but it uses the --theme-template flag, which defines whether the layout of the rendered HTML is two-column or three-column. Here it is set to triple, which means three columns.

# Built-in color scheme
aglio --theme-variables slate -i input.apib -o output.html

-> Aglio provides different color schemes that you can use while rendering the HTML docs. Some of them are Olio, Streak, Slate, etc.

# Customize a built-in style
aglio --theme-style default --theme-style ./my-style.less -i input.apib -o output.html

-> Suppose you want to provide your own style sheet (for example written in SASS or LESS) to define your own styling. You can do that as shown above: my-style.less is a user-defined stylesheet, which is then used to style the rendered output file.

# Custom layout template
aglio --theme-template /path/to/template.jade -i input.apib -o output.html

-> You can write your own custom layout template in a template.jade file and use that to generate output.html instead of the two or three-column layouts.

We run the built-in color scheme command, aglio --theme-variables slate -i api_blueprint.apib -o output.html, to generate our Open Event Server API document, which looks something like this:

You can visit the live version of FOSSASIA‘s Open Event Server API Document right here: https://api.eventyay.com/

Generating responsive email using mjml in Yaydoc

In Yaydoc, an email with download, preview and deploy links is sent to the user after the documentation is generated. Initially, Yaydoc was sending plain-text emails without any styling, so I decided to make an attractive HTML email template for it. The problem with HTML email is adding custom CSS and making it responsive, because emails are viewed on various devices such as mobiles, tablets and desktops. While going through GitHub's trending list, I came across mjml and was stunned by its capabilities. MJML is a responsive email generation framework built using React (the popular front-end framework maintained by Facebook).

Install mjml using npm:

npm init -y && npm install mjml

Then add mjml to your PATH:

export PATH="$PATH:./node_modules/.bin"

MJML has a lot of pre-built components for creating responsive email, for example mj-text, mj-image, mj-section, etc.

Here I’m sharing the snippet used for generating email in Yaydoc.

<mjml>
  <mj-head>
    <mj-attributes>
      <mj-all padding="0" />
      <mj-class name="preheader" color="#CB202D" font-size="11px" font-family="Ubuntu, Helvetica, Arial, sans-serif" padding="0" />
    </mj-attributes>
    <mj-style inline="inline">
      a { text-decoration: none; color: inherit; }
 
    </mj-style>
  </mj-head>
  <mj-body>
    <mj-container background-color="#ffffff">
 
      <mj-section background-color="#CB202D" padding="10px 0">
        <mj-column>
          <mj-text align="center" color="#ffffff" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" padding="18px 0px">Hey! Your documentation generated successfully<i class="fa fa-address-book-o" aria-hidden="true"></i>
 
          </mj-text>
        </mj-column>
      </mj-section>
      <mj-section background-color="#ffffff" padding="20px 0">
        <mj-column>
          <mj-image src="http://res.cloudinary.com/template-gdg/image/upload/v1498552339/play_cuqe89.png" width="85px" padding="0 25px">
</mj-image>
 
          <mj-text align="center" color="#EC652D" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" vertical-align="top" padding="20px 25px">
            <strong><a>Preview it</a></strong>
            <br />
          </mj-text>
        </mj-column>
        <mj-column>
          <mj-image src="http://res.cloudinary.com/template-gdg/image/upload/v1498552331/download_ktlqee.png" width="100px" padding="0 25px" >
        </mj-image>
          <mj-text align="center" color="#EC652D" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" vertical-align="top" padding="20px 25px">
            <strong><a>Download it</a></strong>
            <br />
          </mj-text>
        </mj-column>
        <mj-column>
          <mj-image src="http://res.cloudinary.com/template-gdg/image/upload/v1498552325/deploy_yy3oqw.png" width="100px" padding="0px 25px" >
        </mj-image>
          <mj-text align="center" color="#EC652D" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" vertical-align="top" padding="20px 25px">
 
            <strong><a>Deploy it</a></strong>
            <br />
          </mj-text>
        </mj-column>
      </mj-section>
      <mj-section background-color="#333333" padding="10px">
        <mj-column>
        <mj-text align="center" color="#ffffff" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" padding="18px 0px">Thanks for using Yaydoc<i class="fa fa-address-book-o" aria-hidden="true"></i>
        </mj-column>
        </mj-text>
      </mj-section>
    </mj-container>
  </mj-body>
</mjml>

The main goal of this example is to produce a responsive email that looks like the image given below. In the mj-head tag, I import the necessary fonts using the mj-class tag and write my custom CSS in mj-style. I then make a container with one row and one column using the mj-container, mj-section and mj-column tags and set the container's background color to #CB202D with the background-color attribute. Inside that container I write a heading that says "Hey! Your documentation generated successfully" with the mj-text tag, which produces the red top bar with the success message. In the second part, I make a container with three columns and add one image to each column using the mj-image tag, specifying the image URL in the src attribute, and add the corresponding text below each image using the mj-text tag. Finally, I make one more container like the first one with a different message, "Thanks for using Yaydoc", and a background color of #333333.

Finally, transpile your MJML code to HTML by executing the following command.

mjml -r index.mjml -o index.html

Rendered Email

Testing child process using Mocha in Yaydoc

Mocha is a JavaScript testing framework that runs both in Node.js and in the browser, and it is one of the most popular testing frameworks out there. Mocha is widely used for Behavior Driven Development (BDD). In Yaydoc, we use Mocha to test our web UI. One of the main tasks in Yaydoc is documentation generation, for which we built a bash script. We run the bash script using Node's child_process module, but in order to run the test you have to execute the child process before the tests run. This can be achieved with Mocha's before hook. Install Mocha on your system:

npm install -g mocha

Here is the test case I wrote in the Yaydoc test file.

const assert = require('assert')
const spawn = require('child_process').spawn
const uuidV4 = require("uuid/v4")
describe('WebUi Generator', () => {
  let uniqueId = uuidV4()
  let email = 'user@example.com'  // placeholder email address used for the test
  let args = [
    "-g", "https://github.com/fossasia/yaydoc.git",
    "-t", "alabaster",
    "-m", email,
    "-u", uniqueId,
    "-w", "true"
  ]
  let exitCode

  before((done) => {
    let process = spawn('./generate.sh', args)
    process.on('exit', (code) => {
      exitCode = code
      done()
    })
  })
  it('exit code should be zero', () => {
    assert.equal(exitCode, 0)
  })
 })

The describe() function is used to describe our test case; in our scenario we are testing the generate script, so we label it "WebUi Generator". As mentioned above, we have to run our child process in the before hook. The it() function is where we write our test case; if the test fails, an error is thrown. We use Node's built-in assert module for assertions. You can see our assertion in the it() block, checking whether the exit code is zero or not.

mocha test.js --timeout 1500000

Since documentation generation takes time, we have to specify a timeout while running Mocha. If your test case passes, you will get output similar to this:

WebUi Generator
    ✓ exit code should be zero
