FOSSASIA Internship Program 2018

Are you interested in participating in the development of Open Source projects in a summer internship? Build up your developer profile with FOSSASIA and spend your summer coding on an open source project. Contribute to SUSI.AI, Open Event, Badgeyay, Yaydoc, Meilix or PSLab and join us at a workshop week and Jugaadfest in India. Please find the details below and submit your application through our form. Be sure to check out FOSSASIA’s program guidelines.

1. Program Details

  • Sign up on our dedicated form at fossasia.org/internship (Interns need to become members of the org and sign up on its social channels)
  • Internships run for 3 months with monthly evaluations, plus an onboarding and preparation period after acceptance
  • Contributors above 18 years of age are eligible, including students, professionals, university staff, etc. Preference is given to contributors who have participated in the community previously.
  • Benefits of the program include shirts, swag and certificates. All participants who pass the final evaluation will be eligible to participate in a workshop week and Jugaadfest in September 2018 in Hyderabad. Travel grants and accommodation will be provided.
  • The program is intended as a full-time program. However, contributors who have a day job can still join and pass the program if they fulfill all program requirements. All contributors who pass the program will be able to receive funding for workshop and Jugaadfest participation.

2. Timeline

  • Application period ongoing until May 12
  • Acceptance ongoing until May 12
  • Start of pre-period:  May
  • Start of Internship: 1st June
  • Evaluation 1: July
  • Evaluation 2: August
  • Evaluation 3: September
  • End of Internship: September 2018
  • Issuing of Certificates: September 2018
  • FOSSASIA Workshop Week / Jugaadfest: September/October

3. Deliverables

  • Daily scrum email to project mailing list answering three questions: What did I do yesterday? What is my plan for today? Is there anything preventing me from achieving my goals, e.g. blockers?
  • Work according to pull requests and issues (submit code on Github and match it with issues)
  • Daily code submissions (software, hardware)
  • Documentation: Text, YouTube videos
  • 1 technical blog post a month with details on solving a problem in a FOSSASIA project (due by Monday of the second week of each month)
  • Design items (in open formats, e.g. XCF, SVG, EPS)

4. Participating Projects

5. Best Practices

Please follow best practices as defined here: https://blog.fossasia.org/open-source-developer-guide-and-best-practices-at-fossasia/

6. Participant Benefits/Support

Participants will receive Swag, certificates and travel support to the FOSSASIA Workshop week and Jugaadfest.

  • Evaluation 1: July 2018: Successful participants receive a FOSSASIA T-shirt (sent out together with the bag in evaluation 2)
  • Evaluation 2: August: Successful participants receive a beautiful FOSSASIA bag
  • Evaluation 3: September: Successful participants receive the following support to participate in the FOSSASIA India Workshop Week and Jugaadfest:
    • 100 SGD travel support from within India and 200 SGD support if coming from outside India
    • One week accommodation in Hyderabad (organized by FOSSASIA)
    • Catering during workshops

Loklak Timeline Using Sphinx Extension In Yaydoc

In Yaydoc, I decided to add an option to show a Twitter timeline displaying the latest Twitter feed. However, I wanted to implement it using Loklak instead of the Twitter embedded plugin. I searched for an existing embed plugin for Loklak, but there was none, so I built my own plugin. You can see the source code here.

Now that I have the plugin, the next phase is to add it to the documentation. Appending the plugin code directly to the generated HTML is not viable. Therefore I decided to create a directive for Sphinx which adds a timeline based on the query parameter the user provides.

In order to make a directive, I had to write a Sphinx extension which registers a timeline directive. The directive is used like this:

.. timeline:: fossasia

from docutils import nodes
from docutils.parsers import rst


class timeline(nodes.General, nodes.Element):
    pass


def visit(self, node):
    # The HTML embed markup from the original post was lost during extraction;
    # the tag below is a placeholder for the markup expected by the
    # loklak-timeline plugin, parameterized with the query the user provided.
    tag = u'<div class="loklak-timeline" data-query="{}"></div>'.format(node.display_name)
    self.body.append(tag)
    self.visit_admonition(node)


def depart(self, node):
    self.depart_admonition(node)


class TimelineDirective(rst.Directive):
    name = 'timeline'
    node_class = timeline
    has_content = True
    required_arguments = 1
    optional_arguments = 0
    final_argument_whitespace = False
    option_spec = {}

    def run(self):
        node = self.node_class()
        node.display_name = self.arguments[0]  # the query given as the directive argument, e.g. fossasia
        return [node]


def setup(app):
    app.add_javascript("https://cdn.rawgit.com/fossasia/loklak-timeline-plugin/master/plugin.js")
    app.add_node(timeline, html=(visit, depart))
    app.add_directive('timeline', TimelineDirective)

We have to create an empty node class that inherits from `nodes.General` and `nodes.Element`. This class is used for storing the value passed to the directive.

I wrote a `visit` function which executes when Sphinx visits the `timeline` node. It appends the HTML code needed to render the Twitter timeline. Then I created the TimelineDirective class, which inherits from rst.Directive. In that class, I defined a run method which reads the argument from the directive and passes it to the node. Finally, I defined a setup function which adds the loklak-timeline-plugin JavaScript to the rendered HTML and registers the node and the directive with Sphinx. The setup function has to be defined for Sphinx to detect the module as an extension.
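
To enable the extension, it has to be added to the extensions list in the project’s conf.py. A minimal sketch, assuming the module above is saved as timeline_extension.py somewhere on the Python path (the module name is an assumption, not taken from the original post):

# conf.py
extensions = [
    'timeline_extension',  # assumed module name of the extension defined above
]

After this, any page in the project can embed a timeline with the directive shown earlier.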

Resources:


Extending Markdown Support in Yaydoc

Yaydoc, our automatic documentation generator, builds static websites from a set of markup documents in markdown or reStructuredText format. Yaydoc uses the sphinx documentation generator internally, hence reStructuredText support comes out of the box. To support markdown we use multiple techniques depending on the context. Most of the markdown support is provided by recommonmark, a docutils bridge for sphinx which basically converts markdown documents into a proper docutils abstract syntax tree which is then converted to HTML by sphinx. While it works pretty well for most use cases, it does fall short in some instances. They are discussed in the following paragraphs.

The first problem was inclusion of other markdown files in the starting page. This was due to the fact that markdown does not support any include mechanism. And if we used the reStructuredText include directive, the included text was parsed as reStructuredText. This problem was solved earlier using pandoc – an excellent tool to convert between various markup formats. What we did was create another directive, mdinclude, which converts the markdown to reStructuredText before inclusion. Although this was solved a while ago, the reason I’m discussing it here is that it was the inspiration behind the solution to our recent problem.

The problem we encountered was that recommonmark follows the CommonMark spec, an ongoing effort towards standardizing markdown, which has been somewhat lacking until now. Since the standardization process is still in progress, the recommonmark library does not yet support extensions for features of different markdown flavours that are not in the core CommonMark spec. We could have settled for supporting only the markdown features in the core spec, but tables not being part of it was problematic. We had to support tables because they are widely used in docs hosted in GitHub repositories, as GFM (GitHub Flavoured Markdown) renders ASCII tables nicely.

The solution was to use a combination of recommonmark and pandoc. recommonmark provides an eval_rst code block which can be used to embed non-section reStructuredText within markdown. I created a new MarkdownParser class which inherits the CommonMarkParser class from recommonmark. Within it, using regular expressions, I convert any text within `<!-- markdown+ -->` and `<!-- endmarkdown+ -->` into reStructuredText and enclose it within an eval_rst code block. The result is that tables enclosed within those trigger HTML comments are converted to reST tables and then enclosed within an eval_rst block, which recommonmark then renders properly. Below is a snippet which shows how this was implemented.

import re
from recommonmark.parser import CommonMarkParser
from md2rst import md2rst


MARKDOWN_PLUS_REGEX = re.compile('<!--\s+markdown\+\s+-->(.*?)<!--\s+endmarkdown\+\s+-->', re.DOTALL)
EVAL_RST_TEMPLATE = "```eval_rst\n{content}\n```"


def preprocess_markdown(inputstring):
    def callback(match_object):
        text = match_object.group(1)
        return EVAL_RST_TEMPLATE.format(content=md2rst(text))

    return re.sub(MARKDOWN_PLUS_REGEX, callback, inputstring)


class MarkdownParser(CommonMarkParser):
    def parse(self, inputstring, document):
        content = preprocess_markdown(inputstring)
        CommonMarkParser.parse(self, content, document)
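
As an illustration of the input format described above, a documentation page could wrap a GFM table in the trigger comments like this (a made-up example, not taken from the original post):

<!-- markdown+ -->

| Option | Description             |
|--------|-------------------------|
| theme  | Theme used for the docs |

<!-- endmarkdown+ -->

Everything between the two comments is converted to reStructuredText by md2rst and wrapped in an eval_rst block before recommonmark parses the document.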

Resources


Showing Pull Request Build logs in Yaydoc

In Yaydoc, I added a feature to show the build status of a pull request. However, there was no way for the user to see the reason for a build failure, so I decided to show the build log in the pull request, similar to Travis CI. For this, I had to save the build log to the database, then use the GitHub status API to add a build log URL to the pull request, which redirects to the Yaydoc website where we render the build log.

StatusLog.storeLog(name, repositoryName, metadata, `temp/admin@fossasia.org/generate_${uniqueId}.txt`, function(error, data) {
  if (error) {
    status = "failure";
  } else {
    targetBranch = `https://${process.env.HOSTNAME}/prstatus/${data._id}`;
  }
  github.createStatus(commitId, req.body.repository.full_name, status, description, targetBranch, repositoryData.accessToken, function(error, data) {
    if (error) {
      console.log(error);
    } else {
      console.log(data);
    }
  });
});

In the above snippet, I’m storing the build log generated by the build script in MongoDB, and appending the MongoDB document’s unique ID to the `prstatus` URL so that we can use that ID to retrieve the build log from the database.

exports.createStatus = function(commitId, name, state, description, targetURL, accessToken, callback) {
  request.post({
    url: `https://api.github.com/repos/${name}/statuses/${commitId}`,
    headers: {
      'User-Agent': 'Yaydoc',
      'Authorization': 'token ' + crypter.decrypt(accessToken),
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      state: state,
      target_url: targetURL,
      description: description,
      context: "Yaydoc CI"
    })
  }, function(error, response, body) {
    if (error !== null) {
      return callback({description: 'Unable to create status'}, null);
    }
    callback(null, JSON.parse(body));
  });
};

After saving the build log, I send a request to GitHub to set the status of the build along with the build log URL, so the user can click the “Details” link and see the build log.

Resources:


Configurable Settings for Repositories Registered to Yaydoc

Yaydoc, our automatic documentation generation and deployment project, generates and deploys documentation for each of its registered repositories. These repositories registered to Yaydoc have various configurable settings which can be edited to change the behavior of the build process and other processes surrounding it. These settings include

  • Report build status via email
  • Get build status for pull requests to the repository
  • Specify project branches
  • Add or remove sub-projects to a registered master project
  • Enable or disable the build process
  • Delete the repository

Report Build Status via Email

At Yaydoc, the user has the option to receive an email:

  • On the first build after the repository is registered to Yaydoc, irrespective of the status
  • On every failed build
  • On the change of build status (Success to Failed or vice versa)
  • To the user who registered the repository to Yaydoc

It is possible that the user registers the repository from one email address but wishes to receive the build status at another. Furthermore, the user may never wish to receive any email from the repository. Keeping the user’s experience a priority, we also made it configurable for the user to disable the mail service.

This feature is implemented by adding a mailService attribute to the Repository schema with status: Boolean and email: String as its sub-attributes. The feature is toggled by setting the value of 'mailService.status' to true or false.

const Repository = mongoose.model('Repository', {
  ....
  ....
  mailService: {
    status: Boolean,
    email: String,
  },
  ....
  ....
});

Pull request implementing this feature: https://github.com/fossasia/yaydoc/pull/361/files

Get build status for pull requests to the repository

We added generation of the documentation website for each pull request made to a registered repository. Since we do not want to increase the load on our server, since testing documentation generation during a pull request would be rare, and since the user may not want to integrate Yaydoc with pull requests, we made this feature configurable with 'disabled' as the default configuration.

A detailed description of the process can be found at: Showing Pull Request Build Status in Yaydoc | FOSSASIA blog

Specify Project Branches

The documentation for a registered repository is generated on every commit made to the repository. This happens for commits made to any of the repository’s branches, except branches that do not have a `.yaydoc.yml` file and the `gh-pages` branch. In many open source projects hosted on GitHub, the development flow follows an approach in which new features and patches are made in a separate branch of the same repository before a PR is sent to merge it into the master branch. In such a case, generating documentation from all the branches is overkill and would not be expected. Hence, we let the user specify the branches from which the documentation should be generated and deployed.

The existing active branches of the repository along with the registered branches are retrieved and are used to display a Bootstrap select picker to specify branches.

router.get('/:owner/:repository/branches', function (req, res, next) {
  var name = req.params.owner + '/' + req.params.repository;
  async.parallel({
    branches: function (callback) {
      github.getRepositoryBranches(name, function (error, branches) {
        callback(error, branches);
      });
    },
    registeredBranches: function (callback) {
      // `registeredBranches` is assumed to hold the branch documents stored
      // for this repository (retrieved elsewhere; not shown in this snippet).
      var branches = [];
      for (var branch of registeredBranches) {
        branches.push(branch.name);
      }
      callback(null, branches);
    }
  }, function (error, results) {
    res.json({
      branches: results.branches,
      registeredBranches: results.registeredBranches
    });
  });
});

Once a user specifies one or more of the repository’s existing branches, documentation builds occur only from those specified branches, provided they contain the `.yaydoc.yml` file.

Pull request implementing this feature: https://github.com/fossasia/yaydoc/pull/424

Add or remove sub-projects to a registered master project

Apart from generating documentation for a single GitHub repository, we also allow the user to register sub-projects for a master project. The documentation generated from these sub-projects is kept at a sub-route of the website, with auto-generated links on the start page. The addition and deletion of these sub-projects is also configurable from the repository settings.

Pull request implementing this feature: https://github.com/fossasia/yaydoc/pull/318

Resources:

  1. Async utilities for node and browser – https://caolan.github.io/async/
  2. Mongoose Schema Documentation: http://mongoosejs.com/docs/guide.html

Handling Errors While Parsing the yaml File in Yaydoc

Yaydoc, our automatic documentation generator, uses a yaml file to read a user’s configuration. The internal configuration parser basically converts the yaml file to a Python dictionary. Then, it serializes the values of that dictionary using a custom serialization format. From there, it associates those values with environment variables which are then passed to bash scripts for various tasks such as deployment, generation, etc. Some of those environment variables are again passed to another Python layer which interacts with Sphinx, where they are deserialized before use. This whole system works pretty well for our use cases.

Now let’s assume a user adds a yaml file where they have a malformed section in the file. For example, to specify a theme, one needs to add the following to the yaml file.

build:
  theme:
    name: sphinx_fossasia_theme

But our user has the following in their yaml file.

build:
  theme: sphinx_fossasia_theme

Now this will raise an error, as we expect a dictionary as the value for the key 'theme' but we got a string. How do we handle such cases without ignoring the entire file, since that would be too harsh a penalty for such a small mistake? One approach would have been to wrap each call to connect in a bunch of try-except blocks, but that would render the code unreadable, as the initial motivation for implementing the connect method was to abstract the internal implementation so that other contributors who may not be well versed with Python can easily add config options without needing to learn a bunch of Python constructs.

So, what we did was that, while merging the dictionary containing default options and the dictionary containing the user preferences, we check whether the default has the same data type as that of the incoming value. If it does, it is deemed safe to merge. There are certain relaxations though: if the current type is a list, then the incoming value can be of any type, as it can always be converted to a list of a single element. This is required to support the following syntax.

key:
  - value
key: value

Due to the above-mentioned relaxation, the two blocks above are treated as equivalent although their types differ.
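
A minimal sketch of this type-checked merge, assuming both the defaults and the user configuration are plain dictionaries (the function and variable names are illustrative, not the actual Yaydoc code):

def merge_config(defaults, user_config):
    """Merge user values over defaults, keeping only type-compatible values."""
    merged = dict(defaults)
    for key, value in user_config.items():
        if key not in defaults:
            merged[key] = value
        elif isinstance(defaults[key], list) and not isinstance(value, list):
            # Relaxation: a scalar is accepted where a list is expected
            # by wrapping it into a single-element list.
            merged[key] = [value]
        elif isinstance(value, type(defaults[key])):
            merged[key] = value
        # Otherwise the malformed value is ignored, the default is kept,
        # and an informative warning is reported to the user.
    return merged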

Now, after this pre-validation step, we can ensure that if the assumed type for a key is, say, a dictionary, then it actually is a dictionary. Hence no type errors are raised, such as trying to access a dict method on another object, say a string, which happened with the earlier implementation. After this, an extra parameter was added to the connect method through which we can pass a validation function; if it returns false, those values are ignored. This feature is currently used in a small way: we validate the links to subprojects and include them only if they look like a valid GitHub repository. Note that their existence is not checked; only a regex-based validation is performed.
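
Such a validation function could look like the following sketch; the exact regular expression and function name are assumptions, not the actual Yaydoc implementation:

import re

GITHUB_REPO_RE = re.compile(r'^https://github\.com/[\w.-]+/[\w.-]+/?$')

def looks_like_github_repo(url):
    # Only checks that the URL is shaped like a GitHub repository link;
    # it does not verify that the repository actually exists.
    return bool(GITHUB_REPO_RE.match(url))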

It was also important to notify the user when we detect that a specific section is invalid, and to provide informative and helpful error messages without failing the build. Hence proper error messages were added so that the user knows exactly which section is to blame. This is similar to compilers, where the error message is crucial for debugging a certain piece of code.

Resources


Showing Pull Request Build Status in Yaydoc

Yaydoc is integrated with various open source projects in FOSSASIA. We have to make sure that a contributor’s PR does not break the build. So, I decided to check whether the PR breaks the build and then report the status of the build using the GitHub status API.

exports.registerHook = function (data, accessToken) {
  return new Promise(function(resolve, reject) {
    var hookurl = 'http://' + process.env.HOSTNAME + '/ci/webhook';
    if (data.sub === true) {
      hookurl += `?sub=true`;
    }
    request({
      url: `https://api.github.com/repos/${data.name}/hooks`,
      headers: {
        'User-Agent': 'Yaydoc',
        'Authorization': 'token ' + crypter.decrypt(accessToken)
      },
      method: 'POST',
      json: {
        name: "web",
        active: true,
        events: [
          "push",
          "pull_request"
        ],
        config: {
          url: hookurl,
          content_type: "json"
        }
      }
    }, function(error, response, body) {
      if (response.statusCode !== 201) {
        console.log(response.statusCode + ': ' + response.statusMessage);
        resolve({status: false, body:body});
      } else {
        resolve({status: true, body: body});
      }
    });
  });
};

I register the webhook for push and pull_request events when the user registers the repository with Yaydoc. The push event is used for building the documentation and hosting it on GitHub Pages. The pull_request event is used for checking the build of the pull request.

github.createStatus(commitId, req.body.repository.full_name, "pending", "Yaydoc is checking your build", repositoryData.accessToken, function(error, data) {
  if (!error) {
    var user = req.body.pull_request.head.label.split(":")[0];
    var targetBranch = req.body.pull_request.head.label.split(":")[1];
    var gitURL = `https://github.com/${user}/${req.body.repository.name}.git`;
    var data = {
      email: "admin@fossasia.org",
      gitUrl: gitURL,
      docTheme: "",
      debug: true,
      docPath: "",
      buildStatus: true,
      targetBranch: targetBranch
    };
    generator.executeScript({}, data, function(error, generatedData) {
      var status, description;
      if (error) {
        status = "failure";
        description = error.message;
      } else {
        status = "success";
        description = generatedData.message;
      }
      github.createStatus(commitId, req.body.repository.full_name, status, description, repositoryData.accessToken, function(error, data) {
        if (error) {
          console.log(error);
        } else {
          console.log(data);
        }
      });
    });
  }
});

When anyone opens a new PR, GitHub sends a request to the Yaydoc webhook. Then, I send a status to GitHub saying “Yaydoc is checking your build” with the state `pending`. After that, the documentation is generated and I check the exit code. If the exit code is zero, I send the status `success`; otherwise I send a `failure` status.
Resources:


Store Log History for Repositories Registered to Yaydoc

Yaydoc, our automatic documentation generation and deployment project, generates and deploys documentation for each of its registered repositories. For every commit made to a registered repository, there is a corresponding build process running at Yaydoc. These build processes have their own logs, which are stored as text files. However, until now, these logs were never visible to the user. So, if there was a failed build process, the user would never know the reason behind it, leaving them unable to rectify the error.

Hence, there was a need to make these logs available to users. The initial thought was to store only the latest log, overwriting all the previous logs for the repository. However, it was unanimously decided by the developers to store a history of logs for each repository. The main motive behind this was to enable users to compare logs between different commits.

The content from the log files created is stored in a MongoDB collection. Following is the schema defined for the build logs.

const BuildLog = mongoose.model('BuildLog', mongoose.Schema({
  repository: String,  // `full_name` of the repository
  buildNumber: {       // Incrementing number for each build
    type: Number,
    default: 0,
  },
  generate: {
    data: Buffer,      // Generate Logs content
    datetime: Date,    // Date time of generate log creation
  },
  ghpages: {
    data: Buffer,      // Github Pages Logs content
    datetime: Date,    // Date time of github pages log creation
  },
}));

The repository collection is also updated, adding a builds key storing the number of times the build process was triggered for a given repository. This key is incremented on every new build, and the new value is stored along with the build logs as buildNumber.

The build process involves a documentation generation and a documentation deployment script. The process of incrementing the build number in the repository occurs when we store the documentation generation logs. After that, Github pages logs are stored when the documentation deployment process is completed.

Since the logs are stored in a text file at the location temp/<email>/<filename>.txt, we had to read the file using NodeJS File system. The file is read synchronously using the fs.readFileSync(filename) function and then stored in the MongoDB collection.

/**
 * Store logs created while generating docs for a given repository
 * @param name: `full_name` of the repository
 * @param filepath: file path of the generate logs
 * @param callback
 */
module.exports.storeGenerateLogs = function (name, filepath, callback) {
  Repository.incrementBuildNumber(name, function (error, repository) {
    var buildlog = new BuildLog({
      repository: name,
      buildNumber: repository.builds,
      generate: {
        data: fs.readFileSync(filepath),
        datetime: new Date()
      }
    });
    buildlog.save(function (error, repository) {
      callback(error, repository);
    });
  });
};

/**
 * Store logs created while deploying docs for a given repository
 * @param name: `full_name` of the repository
 * @param filepath: file of the ghpages deploy logs
 * @param callback
 */
module.exports.storeGithubPagesLogs = function (name, filepath, callback) {
  Repository.getRepositoryByName(name, function (error, repository) {
    if (error) {
      callback(error);
    } else {
      BuildLog.getParticularBuildLog(repository.name, repository.builds, 
      function (error, buildLog) {
        buildLog.ghpages.data = fs.readFileSync(filepath);
        buildLog.ghpages.datetime = new Date();
        buildLog.save(function (error, buildLog) {
          callback(error, buildLog);
        });
      });
    }
  });
};

The stored logs can then be retrieved at two different routes, with /:owner/:name/logs showing a list of logs from at most 10 builds and /:owner/:name showing the latest log. Similar to logs generated by Travis, accessing these routes does not require the user to log in to Yaydoc.

/**
 * Get a single repository with a log history of 10
 * @param name: `full_name` of the repository
 * @param callback
 */
module.exports.getRepositoryWithLogs = function (name, callback) {
  Repository.aggregate([
    { $match: {name: name}},
    {
      $lookup: {
        from: 'buildLogs',
        localField: 'name',
        foreignField: 'repository',
        as: 'logs'
      }
    },
    { $unwind: '$logs' },
    { $sort: { 'logs.buildNumber': -1 } },
    { $limit: 10 }
  ]).exec(function (error, results){
    callback(error, results);
  });
};

In order to retrieve a repository along with its logs, we perform an aggregation in MongoDB which is similar to a Left Join in SQL. This is the $lookup aggregation and it performs a left outer join to an unsharded collection in the same database to filter in documents from the “joined” collection for processing. A similar method is used to retrieve the latest log by setting the limit aggregation to 1.

Resources:

  1. MongoDB Aggregation Lookup: https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/
  2. Mongoose Aggregate Constructor: http://mongoosejs.com/docs/api.html#index_Mongoose-Aggregate
  3. NodeJS File System: https://nodejs.org/api/fs.html#fs_fs_readfilesync_path_options

Implementing a Custom Serializer for Yaydoc

At the crux of it, Yaydoc comprises a number of specialized bash scripts which perform various tasks such as generating documentation and publishing it to GitHub Pages, Heroku, etc. These bash scripts also serve as the central communication portal for the various technologies used in Yaydoc. The core generator is composed of several Python modules extending the Sphinx documentation generator. The web interface has been built using Node, Express, etc. Yaydoc also contains a Python package dedicated to reading configuration options from a yaml file.

Till now the options were read and then converted to strings irrespective of the actual data type, based on some simple rules.

  • Lists were converted to comma-separated strings (nested lists were not handled).
  • Boolean values were converted to the strings true or false respectively.
  • None was converted to an empty string.

While these simple rules were enough at the time, it was certain that a better solution would be required as the project grew in size. It was also getting tough to maintain, because a lot of hard-coding was required when we wanted to convert those strings back to Python objects. To handle these cases, I decided to create a custom serialization format which would be simple for our use cases and easily parseable from a bash script, yet could handle all edge cases. The format is mostly similar to its earlier form apart from lists, where it takes heavy inspiration from the Python language itself.

With the new implementation, lists are converted to comma-separated strings enclosed in square brackets. This allows us to encode the type of the object in the string so that it can later be decoded. It handles the cases of an empty list and of a list with a single element well. The implementation also handles nested lists.

Two functions were created, namely serialize and deserialize, which detect the type of the corresponding object using several heuristics and apply the proper serialization or deserialization rule.

def serialize(value):
    """
    Serializes a python object to a string.
    None is serialized to an empty string.
    bool values are converted to strings True False.
    list or tuples are recursively handled and are comma separated.
    """
    if value is None:
        return ''
    if isinstance(value, str):
        return value
    if isinstance(value, bool):
        return "true" if value else "false"
    if isinstance(value, (list, tuple)):
        return '[' + ','.join(serialize(_) for _ in value) + ']'
    return str(value)

To deserialize we also had to handle the case of nested lists. The following snippet does that properly.

def deserialize(value, numeric=True):
    """
    Deserializes a string to a python object.
    Strings True False are converted to bools.
    `numeric` controls whether strings should be converted to
    ints or floats if possible. List strings are handled recursively.
    """
    if value.lower() in ("true", "false"):
        return value.lower() == "true"
    if numeric and _is_numeric(value):
        return _to_numeric(value)
    if value.startswith('[') and value.endswith(']'):
        split = []
        element = ''
        level = 0
        for c in value:
            if c == '[':
                level += 1
                if level != 1:
                    element += c
            elif c == ']':
                if level != 1:
                    element += c
                level -= 1
            elif c == ',' and level == 1:
                split.append(element)
                element = ''
            else:
                element += c
        if split or element:
            split.append(element)
        return [deserialize(_, numeric) for _ in split]
    return value
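
A quick round trip shows how the two functions behave together (the values are made up for illustration):

config = ['sphinx_fossasia_theme', ['en', 'de'], True, None]
encoded = serialize(config)
# encoded == '[sphinx_fossasia_theme,[en,de],true,]'
decoded = deserialize(encoded)
# decoded == ['sphinx_fossasia_theme', ['en', 'de'], True, '']
# Note that None does not survive the round trip; it comes back as ''.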

With this new approach, we are able to handle many more cases compared to the previous implementation, and it is much more robust. It does, however, still lack certain features such as serializing dictionaries. That may be implemented in the future if the need arises.

Resources


Deploying preview using surge in Yaydoc

In Yaydoc, we save the preview of the documentation on our server and then serve it using Express’s static serve method. But the problem is that Heroku dynos do not provide persistent storage, so our preview link expires within a few minutes. In order to solve the problem, I planned to deploy the preview to Surge so that it doesn’t expire. For that, I made a shell script which deploys the preview to Surge, and then I invoke the shell script using child_process.

#!/bin/bash

while getopts l:t:e:u: option
do
 case "${option}"
 in
 l) LOGIN=${OPTARG};;
 t) TOKEN=${OPTARG};;
 e) EMAIL=${OPTARG};;
 u) UNIQUEID=${OPTARG};;
 esac
done

export SURGE_LOGIN=${LOGIN}
export SURGE_TOKEN=${TOKEN}

./node_modules/.bin/surge --project temp/${EMAIL}/${UNIQUEID}_preview --domain ${UNIQUEID}.surge.sh

In the above snippet, I’m setting the SURGE_LOGIN and SURGE_TOKEN environment variables so that Surge deploys the preview without asking for any credentials. Then I’m executing surge, specifying the preview path and the preview domain name.

// child_process is needed to spawn the shell script
var spawn = require('child_process').spawn;

exports.deploySurge = function(data, surgeLogin, surgeToken, callback) {
  var args = [
    "-l", surgeLogin,
    "-t", surgeToken,
    "-e", data.email,
    "-u", data.uniqueId
  ];

  var spawnedProcess = spawn('./surge_deploy.sh', args);
  spawnedProcess.on('exit', function(code) {
    if (code === 0) {
      callback(null, {description: 'Deployed successfully'});
    } else {
      callback({description: 'Unable to deploy'}, null);
    }
  });
};

Whenever the user generates documentation, I invoke the shell script using child_process; if it exits with code 0, I pass the preview URL to the frontend via sockets so the user can access it.

Resource:
