Testing Deploy Functions Using Sinon.JS in Yaydoc

In Yaydoc, we deploy the generated documentation to GitHub Pages as well as Heroku. It is one of the most important functions in the source code, and I did not want an unnoticed change to break the build in the future, so I decided to write a test case for the deploy function. However, the deploy function has many dependencies such as child processes, sockets, etc. It is also not a pure function, so there is no return value to assert against. I therefore decided to stub the child process to check whether the correct script and arguments were passed. To write the stub I chose the Sinon.JS framework, because it can be used for writing stubs, mocks and spies, and one of its advantages is that it works with any testing framework.

const sinon = require("sinon");
const childProcess = require("child_process");

// Replace child_process.spawn with a fake that validates the deploy script and its arguments
sinon.stub(childProcess, "spawn").callsFake(function (fileName, args) {
  if (fileName !== "./ghpages_deploy.sh") {
    throw new Error(`invalid ${fileName} invoked`);
  }

  let ghArgs = ["-e", "-i", "-n", "-o", "-r"];
  ghArgs.forEach(function (x) {
    if (args.indexOf(x) < 0) {
      throw new Error(`${x} argument is not passed`);
    }
  });

  // Return a minimal fake process object exposing only the `on` method used by deployPages
  let fakeProcess = {
    on: function (listenerId, callback) {
      if (listenerId !== "exit") {
        throw new Error("listener id is not exit");
      }
    }
  };
  return fakeProcess;
});

In Sinon you create a stub by passing the object as the first parameter and the method name as the second parameter to Sinon’s stub method. It returns a stub object, on which you pass the replacement function to the “callsFake” method.

In the code above, I wrote a simple stub which overrides the spawn method of Node.js’s child_process module. So I passed the “child_process” module as the first parameter and the “spawn” method name as the second parameter. We must check whether the correct deploy script and the correct parameters are being passed, so I wrote a function which checks those conditions and passed it to the callsFake method.

describe('deploy script', function() {
  it("gh-pages deploy", function() {
    deploy.deployPages(fakeSocket, {
      gitURL: "https://github.com/sch00lb0y/yaydoc.git",
      encryptedToken: crypter.encrypt("dummykey"),
      email: "admin@fossasia.org",
      uniqueId: "ajshdahsdh",
      username: "fossasia"
    });
  });
});

Finally, test the deploy function by calling it. I use Mocha as the testing framework; I have already written a blog post on Mocha, so if you are interested, please check it out.
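
Since the stub replaces child_process’s spawn globally, it is good practice to restore the real method once the tests finish so that other test files are not affected. A minimal sketch, assuming the stub created above is kept in a variable:

// Keep a reference when creating the stub (same call as above, assigned to a variable)
const spawnStub = sinon.stub(require("child_process"), "spawn").callsFake(function (fileName, args) {
  // ... fake implementation as shown above ...
});

after(function () {
  // Put the real spawn back once this test file is done
  spawnStub.restore();
});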


Advanced configurations in Yaydoc’s Web UI

Yaydoc’s User Interface consists of a form with three required fields: the user’s email address, the git repository’s URL, and a theme for the generated website. Specific values of these fields are the minimum requirement to generate documentation for a project. There are certain other configuration variables for which we assumed default values. Among these, we assumed the `docs/` directory, or the directory specified in the `yaydoc.yml` configuration file, as the default path for the documentation. Also, the repository’s default branch is assumed to be the branch from which the documentation website is generated. However, this cannot guarantee the generation of docs for every project, since these configurations can have different values from one project to another.

Thus, there was a need to include certain input fields for advanced configuration. The addition of these configurations to the UI does not compel the user to specify them. In our attempt to improve the user’s experience, we show the default values to the user while they are specifying custom values for these configurations.

If the user doesn’t specify a value for the repository’s branch, a default value is retrieved from GitHub’s Repository API, using the repository URL from the required input.

/**
 * Set the branch name using the `default_branch` attribute from
 * GitHub's Repository API
 * @param gitUrl: URL of the github repository
 */
setDefaultBranchName: function (gitUrl) {
  var owner = gitUrl.split("/")[3] || '';
  var repository = (gitUrl.split("/")[4] || '').split('.')[0] || '';
  $.get('https://api.github.com/repos/' + owner + '/' + repository, {
    headers: {"User-Agent": "Yaydoc"}
  }).complete(function (data) {
    $("#target_branch").val(data.responseJSON.default_branch);
  });
}

There were certain cases in which the design of the web user interface could have been confusing. Since we display all the advanced configurations at once, it could have appeared to users that they were submitting empty values for the fields they left untouched. To handle this, the inputs are enabled only when the checkbox beside them is checked. This was achieved by making the following changes in the front end of the code.

/**
 * Toggle editing of Branch Name input
 */
$("#btnEditBranch").click(function () {
  styles.toggleEditing("target_branch");
  ....
  ....
});

/**
 * Toggle Enabling/Disabling an input tag
 * @param id: `id` attribute of input tag
 */
toggleEditing: function (id) {
  const input = $('#' + id);
  if (input.attr('disabled')) {
    input.removeAttr('disabled');
    $('#checkbox_' + id).removeClass('glyphicon-unchecked').addClass('glyphicon-check');
  } else {
    input.attr('disabled', 'disabled');
    $('#checkbox_' + id).removeClass('glyphicon-check').addClass('glyphicon-unchecked');
  }
}

Introducing advanced configurations to the user interface has opened the possibility for even more projects to generate and deploy docs with far fewer constraints. One of our main aims for this project is to have a fairly simple UI and UX, and we hope to bring further updates to achieve that.

Resources:

  1. GitHub’s Repository API: https://developer.github.com/v3/repos/
  2. jQuery’s AJAX Requests: https://api.jquery.com/jquery.get

Continuous Integration in Yaydoc using GitHub webhook API

In Yaydoc, Travis is used for pushing the documentation on every commit. But this makes us rely on a third party to push the documentation, and in the long run it would not allow us to implement new features, so we decided to do the continuous documentation pushing on our own. In order to build the documentation for every commit, we have to know when the user pushes code. This can be achieved using the GitHub webhook API. Basically, we have to register our API with a specific GitHub repository, and then GitHub will send a POST request to our API on every commit.

The “auth/ci” handler is used to get access on behalf of the user. Here we request the user to grant Yaydoc access to their public repositories, permission to read organization details, and write permission to add a webhook to the repository. I also maintain state by setting the ci flag in the session to true, so that in the callback I know whether it is a gh-pages deploy or a CI deploy.
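
The exact handler is not shown here, but a rough sketch of what it can look like, based on the description above (assuming the router is mounted under /auth; the session flag name and the webhook scope are illustrative):

router.get("/ci", function (req, res, next) {
  // Mark this OAuth flow as a CI registration so that the callback
  // knows to redirect to "ci/register" instead of the gh-pages deploy flow
  req.session.ci = true;
  next();
}, passport.authenticate('github', {
  scope: [
    'public_repo',
    'read:org',
    'write:repo_hook'
  ]
}));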

In the callback, I keep the necessary information such as the username, access token, id and email in the session. Then, based on the ci session state, I redirect to the appropriate handler; in this case I redirect to “ci/register”. There, I fetch all the public repositories using the GitHub API and ask the user to choose the repository on which they want to integrate Yaydoc CI.

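A hedged sketch of how that repository listing could be done with the request module (the handler and the rendered view are illustrative; the endpoint, User-Agent header and token usage follow GitHub's REST API):

router.get('/register', function (req, res, next) {
  request({
    url: `https://api.github.com/users/${req.session.username}/repos?type=public`,
    headers: {
      'User-Agent': 'Yaydoc',
      'Authorization': `token ${req.session.token}`
    },
    json: true
  }, function (error, response, repositories) {
    // Let the user pick which repository should be registered with Yaydoc CI
    res.render('ci', { repositories: repositories });
  });
});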

router.post('/register', function (req, res, next) {
  // Repository chosen by the user on the previous page
  const repositoryName = req.body.repository;
  request({
    url: `https://api.github.com/repos/${req.session.username}/${repositoryName}/hooks?access_token=${req.session.token}`,
    method: 'POST',
    json: {
      name: "web",
      active: true,
      events: [
        "push"
      ],
      config: {
        url: process.env.HOSTNAME + '/ci/webhook',
        content_type: "json"
      }
    }
  }, function (error, response, body) {
    repositoryModel.newRepository(req.body.repository,
      req.session.username,
      req.session.githubId,
      crypter.encrypt(req.session.token),
      req.session.email)
      .then(function (result) {
        res.render("index", {
          showMessage: true,
          messages: `Thanks for registering with Yaydoc. Hereafter documentation will be pushed to GitHub Pages on each commit.`
        })
      })
  })
})

After the user chooses the repository, a POST request is sent to “ci/register”. There I register the webhook with the repository and save the repository and user details in the database, so that they can be used later when GitHub sends a request and the documentation has to be pushed to GitHub Pages.

router.post('/webhook', function (req, res, next) {
  // GitHub sends the event type in the X-GitHub-Event header
  var event = req.get('X-GitHub-Event')
  if (event === 'push') {
    repositoryModel.findOneRepository({
      githubId: req.body.repository.owner.id,
      name: req.body.repository.name
    }).then(function (result) {
      var data = {
        email: result.email,
        gitUrl: req.body.repository.clone_url,
        docTheme: "",
      }
      generator.executeScript({}, data, function (err, generatedData) {
        deploy.deployPages({}, {
          email: result.email,
          gitURL: req.body.repository.clone_url,
          username: result.username,
          uniqueId: generatedData.uniqueId,
          encryptedToken: result.accessToken
        })
      })
    })
    res.json({
      status: true
    })
  }
})

After the webhook is registered, GitHub will send a request to the URL which we registered for the repository; in our case that is “https://yaydoc.herokuapp.com/ci/webhook”. The type of the event can be known by reading the ‘X-GitHub-Event’ header. Right now I register only for the push event, so we will only be getting push events. GitHub also gives us the repository details in the request body.

When the user pushes a commit to the repository, GitHub sends a POST request to Yaydoc’s server. We then get the repository name and the GitHub user ID from the request body. Using these, I retrieve the access token from the database, which was stored when the user registered the repository with the CI. The documentation is then generated using the generate script and pushed to GitHub Pages using the deploy script.

Now Yaydoc generates documentation on every push to the repository, and this also enables us to integrate new features in our own custom environment. We also plan to build a full-featured CI platform.


Generating responsive email using mjml in Yaydoc

In Yaydoc, an email with download, preview and deploy links is sent to the user after the documentation is generated. Initially, Yaydoc was sending this email as plain text without any styling, so I decided to make an attractive HTML email template for it. The problem with HTML email is adding custom CSS and making it responsive, because emails are viewed on various devices like mobiles, tablets and desktops. While going through the GitHub trending list, I came across mjml and was totally stunned by its capabilities. Mjml is a responsive email generation framework built using React (a popular front-end framework maintained by Facebook).

Install mjml to your system using npm.

npm init -y && npm install mjml

Then add mjml to your path

export PATH="$PATH:./node_modules/.bin"

Mjml has a lot of pre-built React components for creating responsive emails, for example mj-text, mj-image, mj-section, etc.

Here I’m sharing the snippet used for generating email in Yaydoc.

<mjml>
  <mj-head>
    <mj-attributes>
      <mj-all padding="0" />
      <mj-class name="preheader" color="#CB202D" font-size="11px" font-family="Ubuntu, Helvetica, Arial, sans-serif" padding="0" />
    </mj-attributes>
    <mj-style inline="inline">
      a { text-decoration: none; color: inherit; }
 
    </mj-style>
  </mj-head>
  <mj-body>
    <mj-container background-color="#ffffff">
 
      <mj-section background-color="#CB202D" padding="10px 0">
        <mj-column>
          <mj-text align="center" color="#ffffff" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" padding="18px 0px">Hey! Your documentation generated successfully<i class="fa fa-address-book-o" aria-hidden="true"></i>
 
          </mj-text>
        </mj-column>
      </mj-section>
      <mj-section background-color="#ffffff" padding="20px 0">
        <mj-column>
          <mj-image src="http://res.cloudinary.com/template-gdg/image/upload/v1498552339/play_cuqe89.png" width="85px" padding="0 25px">
</mj-image>
 
          <mj-text align="center" color="#EC652D" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" vertical-align="top" padding="20px 25px">
            <strong><a>Preview it</a></strong>
            <br />
          </mj-text>
        </mj-column>
        <mj-column>
          <mj-image src="http://res.cloudinary.com/template-gdg/image/upload/v1498552331/download_ktlqee.png" width="100px" padding="0 25px" >
        </mj-image>
          <mj-text align="center" color="#EC652D" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" vertical-align="top" padding="20px 25px">
            <strong><a>Download it</a></strong>
            <br />
          </mj-text>
        </mj-column>
        <mj-column>
          <mj-image src="http://res.cloudinary.com/template-gdg/image/upload/v1498552325/deploy_yy3oqw.png" width="100px" padding="0px 25px" >
        </mj-image>
          <mj-text align="center" color="#EC652D" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" vertical-align="top" padding="20px 25px">
 
            <strong><a>Deploy it</a></strong>
            <br />
          </mj-text>
        </mj-column>
      </mj-section>
      <mj-section background-color="#333333" padding="10px">
        <mj-column>
        <mj-text align="center" color="#ffffff" font-size="20px" font-family="Lato, Helvetica, Arial, sans-serif" padding="18px 0px">Thanks for using Yaydoc<i class="fa fa-address-book-o" aria-hidden="true"></i>
        </mj-column>
        </mj-text>
      </mj-section>
    </mj-container>
  </mj-body>
</mjml>

The main goal of this example is to make a responsive email which looks like the image given below. In the mj-head tag, I imported all the necessary fonts using the mj-class tag and wrote my custom CSS in mj-style. Then I made a container with one row and one column using the mj-container, mj-section and mj-column tags and changed the container background color to #CB202D with the background-color attribute. In that container I wrote a heading which says `Hey! Your documentation generated successfully` using the mj-text tag, which gives the red top bar with the success message. Moving on to the second part, I made a container with three columns and added one image to each column using the mj-image tag, specifying the image URL in the src attribute, and added the corresponding text below each mj-image tag using the mj-text tag. Finally, I made one more container like the first one, with a different message saying `Thanks for using Yaydoc` and background color #333333.

At last, transpile your mjml code to HTML by executing the following command.

mjml -r index.mjml -o index.html

Rendered Email
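
Besides the command line tool, the mjml package can also be used programmatically from Node, which is convenient when the email has to be rendered inside the server itself. A minimal sketch, assuming mjml is installed locally (the exact import shape may differ between mjml versions):

const fs = require("fs");
const mjml2html = require("mjml"); // some versions export it as require("mjml").mjml2html

const mjmlSource = fs.readFileSync("index.mjml", "utf8");
const output = mjml2html(mjmlSource);

// output.html holds the compiled responsive HTML; output.errors lists any validation issues
fs.writeFileSync("index.html", output.html);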

Testing child process using Mocha in Yaydoc

Mocha is a JavaScript testing framework. It can be used both in Node.js and in the browser, and it is one of the most popular testing frameworks out there. Mocha is widely used for Behavior Driven Development (BDD). In Yaydoc, we are using Mocha to test our web UI. One of the main tasks in Yaydoc is documentation generation, for which we built a bash script. We run the bash script using Node’s child_process module, but in order to run the test you have to execute the child process before the test execution. This can be achieved with Mocha’s before hook. Install Mocha on your system:

npm install -g mocha

Here is the test case which I wrote in the Yaydoc test file.

const assert = require('assert')
const spawn = require('child_process').spawn
const uuidV4 = require("uuid/v4")
describe('WebUi Generator', () => {
  let uniqueId = uuidV4()
  let email = 'fossasia@gmail.com'
  let args = [
    "-g", "https://github.com/fossasia/yaydoc.git",
    "-t", "alabaster",
    "-m", email,
    "-u", uniqueId,
    "-w", "true"
  ]
  let exitCode

  before((done) => {
    let process = spawn('./generate.sh', args)
    process.on('exit', (code) => {
      exitCode = code
      done()
    })
  })
  it('exit code should be zero', () => {
    assert.equal(exitCode, 0)
  })
 })

The describe() function is used to describe our test suite; in our scenario we are testing the generate script, so we name it ‘WebUi Generator’. As mentioned above, we have to run our child process in the before hook. The it() function is where we write our test case; if the test case fails, an error will be thrown. We use Node’s built-in assert module to do the assertion. You can see our assertion in the it() block, checking whether the exit code is zero.

mocha test.js --timeout 1500000

Since documentation generation takes time, we have to specify a timeout while running Mocha. If your test case passes successfully, you will get output similar to this:

WebUi Generator
    ✓ exit code should be zero

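The timeout can also be set from inside the test file instead of on the command line. A small sketch (note that this.timeout() requires a regular function, not an arrow function):

describe('WebUi Generator', function () {
  // Raise Mocha's default 2000 ms timeout, since documentation generation is slow
  this.timeout(1500000)

  // ... before hook and test cases as shown above ...
})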

Documenting APIs with Yaydoc

API documentation is a quick and concise way to tell a user how to use a library or work with a program. It details classes, functions, parameters, return types and more. Courtesy of Sphinx, Yaydoc has had built-in support for documenting APIs of Python-based projects right from its inception. Sphinx has a built-in extension, autodoc, which provides directives such as autoclass, automodule, etc. that automatically extract docstrings from the specified Python packages and modules and use them to generate API documentation. As a user of Yaydoc you could add ReST source files with the appropriate autodoc directives and we would handle the rest. As part of enhancing this feature we wanted to do three things:

  • Enhance support for Python
  • Extend API documentation to other languages apart from Python
  • Automate the process of generating ReST source files

To enhance support for Python projects, we implemented a few things.

Since autodoc imports the modules it needs to document, there could be import errors if a dependency was not met. To fix this issue, a user can now specify certain modules to be mocked. This comes in handy with projects depending on packages with third-party C extensions such as numpy, scipy, etc.

{% if mock_modules %}
mock_modules = [name.strip() for name in '{{ mock_modules }}'.split(',')]
sys.modules.update((mod_name, mock.Mock()) for mod_name in mock_modules)
{% endif %}

Apart from this, if we detect a setup.py in the repository or a requirements.txt, we automatically try to install from it to meet dependencies.

# autodoc imports the module while building source files. To avoid
# ImportError, install any packages in requirements.txt of the project
# if available
if [ -f $ROOT_DIR/setup.py ]; then
  pip install $ROOT_DIR/
elif [ -f $ROOT_DIR/requirements.txt ]; then
  pip install -q -r $ROOT_DIR/requirements.txt
fi

We also crawl the repository to detect any packages and add them to sys.path. With these changes, a user can expect generated API docs without having to extend conf.py.

{% if autoapi_python == 'true' %}
for (dirpath, dirnames, filenames) in os.walk('{{ root_dir }}'):
    # Directory contains __init__.py. It should be a python package
    if '__init__.py' in filenames:
        # appending instead of inserting at front so that user
        # cannot overwrite some of our own modules.
        sys.path.append(os.path.abspath(os.path.dirname(dirpath)))
{% endif %}

The second goal is a no-brainer. We would like to support as many languages as we can. With this week’s update, Java has been added to the officially supported list of languages for which Yaydoc can generate full API documentation without any manual intervention. To extract API documentation from Java source files, we used a Sphinx extension named javasphinx. From the official javasphinx docs,

javasphinx is a Sphinx extension that provides a Sphinx domain for documenting Java projects and a javasphinx-apidoc command line tool for automatically generating API documentation from existing Java source code and Javadoc documentation.

javasphinx-apidoc -o source/ $ROOT_DIR/$AUTOAPI_JAVA_PATH/
sphinx-apidoc -o source/ $ROOT_DIR/$AUTOAPI_PYTHON_PATH/

For the third goal, we use the tools sphinx-apidoc and javasphinx-apidoc to generate source files.


Improving Custom PyPI Theme Support In Yaydoc

Yaydoc has supported custom themes from nearly its inception. For themes it could not find locally, it would automatically try to install them via pip and set up the appropriate metadata about the themes in the generated conf.py. It was one of the first major enhancements we provided compared to using bare Sphinx to generate documentation. Since then, a large number of features have been added to ease the process of documentation generation, but the core theming aspects had remained unchanged.

To use a theme, Sphinx needs the exact name of the theme and the absolute path to it. To obtain this metadata, the existing implementation accessed the __file__ attribute of the imported package to get the absolute path to its __init__.py file, a necessary element of all Python packages. From there we searched for a file named theme.conf; the directory containing that file was our required theme.

There were a few mistakes in our earlier implementation. For starters, we assumed that the distribution name of the theme on PyPI and the package name which should be imported would be the same. This is generally true but not necessarily so. One such theme from PyPI is Flask-Sphinx-Themes. While you need to install it using

pip install Flask-Sphinx-Themes

yet to import it in a module one needs to

import flask_sphinx_themes

This led to build errors when specific themes like this were used. To solve it, we used the pkg_resources package, which allows us to get various metadata about a package in an abstract way without needing to specifically handle whether the package is zipped or not.

try:
    dist = pkg_resources.get_distribution('{{ html_theme }}')
    top_level = list(dist._get_metadata('top_level.txt'))[0]
    dist_path = os.path.join(dist.location, top_level)
except (pkg_resources.DistributionNotFound, IndexError):
    print("\nError with distribution {0}".format('{{ html_theme }}'))
    html_theme = 'fossasia_theme'
    html_theme_path = ['_themes']

The idea here is that instead of searching for __init__.py, we read the name of the top_level directory using the first entry of the top_level.txt, a file created by setuptools when installing the package. We build the path by joining the location attribute of the Distribution object and the name of the top_level directory. The advantage with this approach is that we don’t need to import anything and thus no longer need to know the exact package name.

With this update, support for custom themes has been greatly improved.


gh-pages Publishing in Yaydoc’s Web UI

A few weeks back we rolled out the web interface for Yaydoc. The web UI enables users to generate documentation with one click and download the generated documentation as a zip file. We have now added the additional feature of deploying the generated documentation to GitHub Pages. In order to push the generated documentation, we have to get the user’s access token, so I used Passport’s GitHub strategy to obtain it.

passport.use(new githubStrategy({
  clientID: process.env.CLIENTID,
  clientSecret: process.env.CLIENTSECRET,
  callbackURL: process.env.CALLBACKURL
}, function (accessToken, refreshToken, profile, cb) {
  profile.token = accessToken;
  cb(null, profile)
}))

passport.serializeUser(function(user, cb) {
  cb(null, user);
});

passport.deserializeUser(function(obj, cb) {
  cb(null, obj);
}); 

After setting the necessary environment variables, we have to pass the strategy to the express handler.

router.get("/github", function (req, res, next) {
  req.session.uniqueId = req.query.uniqueId;
  req.session.email = req.query.email
  req.session.gitURL = req.query.gitURL
  next()
}, passport.authenticate('github', {
  scope: [
    'public_repo',
    'read:org'
  ]
}))

For maintaining state, I keep the necessary information in the session so that in the callback URL we know which repository has to be pushed.

 

router.get("/callback", passport.authenticate('github'), function (req, res, next) {
  req.session.username = req.user.username;
  req.session.token = req.user.token
  res.redirect("/deploy")
})

router.get("/deploy", function (req, res, next) {
  res.render("deploy", {
    email: req.session.email,
    gitURL: req.session.gitURL,
    uniqueId: req.session.uniqueId,
    token: crypter.encrypt(req.session.token),
    username: req.session.username
  })
})

GitHub will send the access token to our callback. After this, I pass the necessary information to the Jade deploy template, which invokes the deploy function via sockets. Then we stream all the bash output logs to the website.

io.on('connection', function(socket){
  socket.on('execute', function (formData) {
    generator.executeScript(socket, formData);
  });
  socket.on('deploy', function (data) {
    ghdeploy.deployPages(socket, data);
  });
});

exports.deployPages = function (socket, data) {
  var donePercent = 0;
  var repoName = data.gitURL.split("/")[4].split(".")[0];
  var webUI = "true";
  var username = data.username
  var oauthToken = crypter.decrypt(data.encryptedToken)
  const args = [
    "-e", data.email,
    "-i", data.uniqueId,
    "-w", webUI,
    "-n", username,
    "-o", oauthToken,
    "-r", repoName
  ];
  var process = spawn("./publish_docs.sh", args);

  process.stdout.on('data', function (data) {
    console.log(data.toString());
    socket.emit('deploy-logs', {donePercent: donePercent, data: data.toString()});
    donePercent += 18;
  });

  process.stderr.on('data', function (data) {
    console.log(data.toString());
    socket.emit('err-logs', data.toString());
  });

  process.on('exit', function (code) {
    console.log('child process exited with code ' + code);
    if (code === 0) {
      socket.emit('deploy-success', {pagesURL: "https://" + data.username + ".github.io/" + repoName});
    }
  });
}

Once the documentation is pushed to gh-pages, the documentation URL is appended to the web UI.
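
On the client side, this can be done by listening for the 'deploy-success' event emitted above and appending the returned pages URL to the page. A rough sketch using the socket.io client and jQuery (element ids and markup are illustrative):

var socket = io();

socket.on('deploy-logs', function (log) {
  // Stream the bash output to the page as it arrives
  $('#deploy-logs').append('<p>' + log.data + '</p>');
});

socket.on('deploy-success', function (data) {
  // data.pagesURL is emitted by deployPages() after a successful push
  $('#deploy-url').html('<a href="' + data.pagesURL + '">' + data.pagesURL + '</a>');
});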


Using Express to show previews in Yaydoc

In the Yaydoc web UI, documentation is generated using Sphinx and can be downloaded as a zip file. If users want to see a preview of the documentation, they have to unzip the file and check the generated website for each and every build. So, we decided to implement a preview feature to show the generated documentation, so that the user gets an idea of how the website would look. Since the web UI is built with Express, we implemented the preview feature using Express’s static function. The static function is mostly used for serving static assets, but we use it to serve our generated sites, because all the generated sites are static. Each piece of generated documentation has a unique id, generated as per the uuidv4 spec, and the generated site is moved into a folder named after that unique id.

mv $BUILD_DIR/_build/html $ROOT_DIR/../${UNIQUEID}_preview && cd $_/../

var express = require("express");
var path = require("path");
var favicon = require("serve-favicon");
var logger = require("morgan");
var cookieParser = require("cookie-parser");
var bodyParser = require("body-parser");
var uuidV4 = require("uuid/v4");

var app = express();
app.use(logger("dev"));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, "public")));
app.use("/preview", express.static(path.join(__dirname, "temp")));

app.listen(3000);

The above snippet is just a basic Express server in which we register a /preview route with the Express static handler. Pass the path of your generated websites as the argument, and your sites are served under the /preview route.
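
With the generated site moved into a folder named after its unique id, a preview link can then be built on the client roughly as follows (the exact folder layout under the temp directory is an assumption based on the snippets in this post):

// e.g. /preview/<uniqueId>_preview/index.html
var previewUrl = "/preview/" + uniqueId + "_preview/index.html";
$("#preview-link").attr("href", previewUrl);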


Deploying documentations generated by Yaydoc to Heroku

There are many web applications available online that generate static websites. Among them are two projects developed here at FOSSASIA: the Open Event WebApp Generator and Yaydoc (an automatic documentation generation and deployment project). Since Yaydoc already supports the deployment of the generated documentation to GitHub Pages, it was just a matter of time before deployment to Heroku was supported as well.

Heroku is an excellent cloud-based platform used as a web application deployment service. Heroku provides most of its services free of cost to users and is a good fit for hosting static websites, provided a little bit of tweaking is done.

For this implementation, we use the `Platform API` provided by Heroku. Quoting its description from the documentation:

The platform API empowers developers to automate, extend and combine Heroku with other services. You can use the platform API to programmatically create apps, provision add-ons and perform other tasks that could previously only be accomplished with Heroku toolbelt or dashboard.

In order to deploy the static websites to Heroku, we need to first prepare a bundle of source code that has been compiled and is ready for execution on the Heroku runtime. This bundle is known as a Slug.

cd temp/$EMAIL/${UNIQUE_ID}_preview
mkdir -p app
cd app

# Download the Node.js runtime so the slug can run `node web.js` on Heroku
curl https://nodejs.org/dist/v6.11.0/node-v6.11.0-linux-x64.tar.gz | tar xzv > /dev/null

# Copy the static server and the generated site files into the app directory
cp $BASE/web.js .
rsync -av --progress ../ . --exclude app

cd ..
tar czfv slug.tgz ./app > /dev/null

We use the files generated for the preview and bundle them into a slug. We also download the Node.js runtime files, since we are deploying a static website to Heroku. Along with the static files, we need to bundle a Node.js server file (web.js) that will be used to serve the static files in the application.
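
The web.js file itself is not shown in this post; a minimal sketch of what such a static file server could look like, assuming the generated site files sit next to it inside the slug (the real web.js used by Yaydoc may differ):

// web.js - minimal static file server for the Heroku slug (illustrative sketch)
var http = require("http");
var fs = require("fs");
var path = require("path");

// Heroku assigns the port through the PORT environment variable
var port = process.env.PORT || 5000;

http.createServer(function (req, res) {
  // Map "/" to index.html, everything else to the matching file in the slug
  var filePath = path.join(__dirname, req.url === "/" ? "index.html" : req.url);
  fs.readFile(filePath, function (err, content) {
    if (err) {
      res.writeHead(404);
      res.end("Not found");
      return;
    }
    res.writeHead(200);
    res.end(content);
  });
}).listen(port);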

After preparing the slug, we publish the static web application to Heroku. For this, we start by creating a Heroku app using the command `heroku create <app-name>`. The app name is decided by the user when they fill in the form in the Yaydoc web app. Following that, we request Heroku to allocate a new slug for the app, and then we upload the slug tar file to the platform.

# Create the Heroku app
heroku create $APP_NAME

# Allocate a new slug
Arr=($(curl -u ":$API_KEY" -X POST \
-H 'Content-Type: application/json' \
-H 'Accept: application/vnd.heroku+json; version=3' \
-d '{"process_types":{"web":"node-v6.11.0-linux-x64/bin/node web.js"}}' \
-n https://api.heroku.com/apps/${APP_NAME}/slugs | \
python -c "import sys,json; obj=json.load(sys.stdin);
print(obj['blob']['url'] + '\n' + obj['id'])"))

# Upload the slug tar file
curl -X PUT \
-H "Content-Type:" \
--data-binary @slug.tgz \
"${Arr[0]}"

After uploading the slug to Heroku, we need to release the app. This is done using the following command.

curl -u ":$API_KEY" -X POST \
-H "Accept: application/vnd.heroku+json; version=3" \
-H "Content-Type: application/json" \
-d '{"slug":"'${Arr[1]}'"}' \
-n https://api.heroku.com/apps/$APP_NAME/releases

Releasing the application completes the process of deployment, making the documentation generated by Yaydoc available at the following URL: https://<app-name>.herokuapp.com/

 
