Getting Started Developing on Phimpme Android

Phimpme is an Android app for editing photos and sharing them on social media. To participate in the project, start by learning how people contribute to open source and by familiarizing yourself with the version control system Git and tools like Codacy and Travis.

First, sign up for GitHub. Second, find open source projects that interest you; I started with Phimpme. Then follow these steps:

  1. Go through the project's README.md and read about the technologies and tools it uses.
  2. Fork the repo to your account.
  3. Open Android Studio (or whatever application the project requires) and import the project through Git.
  4. For Android Studio, sync all the Gradle files and other changes, and you are ready for development.

Install the app and run it on a phone. Explore every bit of it as a tester: think about the edge cases and boundary conditions that could make the ‘ANR’ (Application Not Responding) dialog appear. Once you have verified that the behavior is a genuine, previously unreported bug, you are ready to create an issue.

Next,

  • Navigate to the main repo, where you will find an Issues section.
  • Create a new issue and report every detail about it (logcat output, screenshots). For example, refer to Issue-1120.
  • The next step is to work on that issue.
  • On your machine, do not change the code on the development branch, as that is considered bad practice. Instead, check out a new branch.
    For example, for the above issue I checked out a branch named ‘crashfixed’:
git checkout -b crashfixed
  • Make the necessary changes on that branch, and test that the code compiles and the issue is fixed. Then run:
git add .
git commit -m "Fix #<issue-number>: <short description>"
git push origin <branch-name>
  • Now navigate to the repo and you will see an option to create a pull request.
    Mention the issue number and a description of the changes you made, and include screenshots of the fixed app. For example, see Pull Request 1131.

With that you have made your first contribution to open source while learning Git. The pull request triggers checks such as Codacy analysis and the Travis build; if everything passes, it is reviewed and merged by co-developers.

The usual workflow is that a pull request is reviewed by other co-developers. These co-developers do not need merge or write access to the repository; any developer can review pull requests. This also helps contributors learn about the project and makes the job of the core developers easier.

Deleting Meilix Github Releases

Meilix is the repository whose build script generates a community version of Lubuntu with the LXQt desktop. Meilix-Generator is the webapp that uses Meilix to generate an ISO, deploys it as a Meilix GitHub release, and then emails the link to the ISO to the user.
A growing number of ISOs increases the number of releases, which leaves the Meilix repository looking cluttered. So we need to delete older releases after a certain interval of time to keep the release page tidy and to free unwanted space.
The releases_maintainer.sh script below does this work for us.

#!/usr/bin/env bash
set -e
echo "This is a script to delete obsolete meilix iso builds by Abishek V Ashok"
echo "You have to add an authorization token to make it functional."

# jq is the JSON parser we will be using
sudo apt-get -y install jq

# Storing the response to a variable for future usage
response=`curl https://api.github.com/repos/fossasia/meilix/releases | jq '.[] | .id, .published_at'`

index=1  # when index is odd, $i contains id and when it is even $i contains published_date
delete=0 # Should we delete the release?
current_year=`date +%Y`  # Current year, e.g. 2017
current_month=`date +%m` # Current month, e.g. 07
current_day=`date +%d`   # Current day of the month, e.g. 24

for i in $response; do
    if [ $((index % 2)) -eq 0 ]; then # We get the published_date of the release as $i's value here
        published_year=${i:1:4}
        published_month=${i:6:2}
        published_day=${i:9:2}

        if [ $published_year -lt $current_year ]; then
             let "delete=1"
        else
            if [ $published_month -lt $current_month ]; then
                let "delete=1"
            else
                if [ $((current_day-$published_day)) -gt 10 ]; then
                    let "delete=1"
                fi
            fi
        fi
    else # We get the id of the release as $i's value here
        if [ $delete -eq 1 ]; then
            curl -X DELETE -H "Authorization: token $KEY" https://api.github.com/repos/fossasia/meilix/releases/$i
            let "delete=0"
        fi
    fi
    let "index+=1"
done
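
For reference, the jq filter flattens the release list into alternating id and published_at values, one per line. The response variable therefore holds something like the following (illustrative values; note that the ids are bare numbers while the dates keep their surrounding quotes, which is why the script slices the date fields starting at position 1):

8213911
"2017-10-01T12:24:07Z"
8113264
"2017-09-24T05:31:42Z"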

This code uses the GitHub API to curl the Meilix releases. The GitHub API provides a lot of information, but here we are only concerned with each release's id and publication date.
We then set up a condition: a release published in an earlier year or month than the current one, or more than 10 days earlier within the current month, is automatically deleted.

To take care of authentication, a token has been added to the Travis settings of FOSSASIA's Meilix repository.

The personal token was generated by a user with write access to the repository, using the repo scope.

This sorts out the issue of having a bulk of releases in FOSSASIA's Meilix repository.

References:
  1. Users GitHub API – GitHub REST API v3
  2. Repos GitHub API – GitHub REST API v3

Auto Deployment of SUSI Web Chat on gh-pages with Travis-CI

SUSI Web Chat uses Travis CI with a custom build script to deploy itself on gh-pages after every pull request is merged into the project. The build system automatically publishes the latest changes hosted on chat.susi.ai. In this post, we will see how to automatically deploy the repository on gh-pages.

To proceed with auto deployment on the gh-pages branch:

  1. We first need to set up Travis for the project.
  2. Register on https://travis-ci.org/ and turn on Travis for this repository.

Next, we add .travis.yml in the root directory of the project.

# Set system config
sudo: required
dist: trusty
language: node_js

# Specifying node version
node_js:
  - 6

# Running the test script for the project
script:
  - npm test

# Running the deploy script by specifying its location, here 'deploy.sh'

deploy:
  provider: script
  script: "./deploy.sh"


# Cache node_modules so unchanged dependencies are not reinstalled
cache:
  directories:
    - node_modules

branches:
  only:
    - master

To find the code go to https://github.com/fossasia/chat.susi.ai/blob/master/.travis.yml

The Travis configuration ensures that the project builds for every change made, using the npm test command; in our case, it only considers changes made on the master branch.

If one wants to watch other branches, one can add the respective branch names to the Travis configuration, for example:
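
A minimal example (the develop branch here is a hypothetical addition):

branches:
  only:
    - master
    - develop

After the build passes, we need to automatically push the changes made, for which we use the following bash script.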

#!/bin/bash

SOURCE_BRANCH="master"
TARGET_BRANCH="gh-pages"

# Pull requests and commits to other branches shouldn't try to deploy.
if [ "$TRAVIS_PULL_REQUEST" != "false" -o "$TRAVIS_BRANCH" != "$SOURCE_BRANCH" ]; then
    echo "Skipping deploy; The request or commit is not on master"
    exit 0
fi

# Save some useful information
REPO=`git config remote.origin.url`
SSH_REPO=${REPO/https:\/\/github.com\//git@github.com:}
SHA=`git rev-parse --verify HEAD`

ENCRYPTED_KEY_VAR="encrypted_${ENCRYPTION_LABEL}_key"
ENCRYPTED_IV_VAR="encrypted_${ENCRYPTION_LABEL}_iv"
ENCRYPTED_KEY=${!ENCRYPTED_KEY_VAR}
ENCRYPTED_IV=${!ENCRYPTED_IV_VAR}
openssl aes-256-cbc -K $ENCRYPTED_KEY -iv $ENCRYPTED_IV -in deploy_key.enc -out ../deploy_key -d

chmod 600 ../deploy_key
eval `ssh-agent -s`
ssh-add ../deploy_key

# Cloning the repository to repo/ directory,
# Creating the gh-pages branch if it doesn't exist, else switching to that branch
git clone $REPO repo
cd repo
git checkout $TARGET_BRANCH || git checkout --orphan $TARGET_BRANCH
cd ..

# Setting up the username and email.
git config user.name "Travis CI"
git config user.email "$COMMIT_AUTHOR_EMAIL"

# Cleaning up the old repo's gh-pages branch except CNAME file and 404.html
find repo/* -maxdepth 1 ! -name "CNAME" ! -name "404.html" -exec rm -rf {} \; 2> /dev/null
cd repo

git add --all
git commit -m "Travis CI Clean Deploy : ${SHA}"

git checkout $SOURCE_BRANCH

# Actual building and setup of current push or PR.
npm install
npm run build
mv build ../build/

git checkout $TARGET_BRANCH
rm -rf node_modules/
mv ../build/* .
cp index.html 404.html

# Staging the new build for commit; and then committing the latest build
git add -A
git commit --amend --no-edit --allow-empty

# Deploying only if the build has changed
if [ -z "$(git diff --name-only HEAD HEAD~1)" ]; then

  echo "No Changes in the Build; exiting"
  exit 0

else
  # There are changes in the Build; push the changes to gh-pages
  echo "There are changes in the Build; pushing the changes to gh-pages"

  # Actual push to gh-pages branch via Travis
  git push --force $SSH_REPO $TARGET_BRANCH
fi

This bash script enables the Travis CI user to push changes to gh-pages. For this, we need to store the repository credentials in encrypted form.

1. To generate the public/private RSA key pair, we use the following command:

ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

2. It will generate the keys in the .ssh/id_rsa folder in your home directory.

  3. Make sure you do not enter any passphrase while generating the credentials, otherwise Travis will get stuck at the time of decrypting the keys.
  4. Copy the public key and add it as a deploy key in the repository's settings on GitHub.

5. We also need to set the environment variable ENCRYPTED_KEY in the Travis settings dashboard for the repository.

6. Next, install the Travis CLI, which is needed for encrypting the keys.

sudo apt install ruby ruby-dev
sudo gem install travis

7. Make sure you are logged in to Travis; to log in, use the following command.

travis login

8. Make sure you have copied the private SSH key to a file named deploy_key, then encrypt it and add the encrypted file to the root of your repository with the command:

travis encrypt-file deploy_key

9. After successful encryption, you will see a message:

Please add the following to your build script (before_install stage in your .travis.yml, for instance):

openssl aes-256-cbc -K $encrypted_3dac6bf6c973_key -iv $encrypted_3dac6bf6c973_iv -in deploy_key.enc -out ../deploy_key -d
  10. Add the above-generated openssl line to your build script and push the changes on your master branch. Do not push the deploy_key itself, only the encrypted file, i.e., deploy_key.enc.
  11. Finally, push the changes, create a pull request, and merge it to test the deployment. Visit the Travis logs for more details and debugging.

Showing Pull Request Build logs in Yaydoc

In Yaydoc, I added the feature to show the build status of a pull request. But there was no way for the user to see the reason for a build failure, so I decided to show the build log in the pull request, similar to Travis CI. For this, I had to save the build log to the database, then use the GitHub status API to attach the build log URL to the pull request; the URL redirects to the Yaydoc website, where we render the build log.

StatusLog.storeLog(name, repositoryName, metadata, `temp/[email protected]/generate_${uniqueId}.txt`, function(error, data) {
  if (error) {
    status = "failure";
  } else {
    targetBranch = `https://${process.env.HOSTNAME}/prstatus/${data._id}`;
  }
  github.createStatus(commitId, req.body.repository.full_name, status, description, targetBranch, repositoryData.accessToken, function(error, data) {
    if (error) {
      console.log(error);
    } else {
      console.log(data);
    }
  });
});

In the above snippet, I store the build log generated by the build script in MongoDB and append the MongoDB unique ID to the `prstatus` URL, so that we can use that ID to retrieve the build log from the database.
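
A minimal sketch of the corresponding prstatus route (the handler, model method, and view names here are assumptions, not the exact Yaydoc implementation):

// Hypothetical sketch: look up a stored build log by its MongoDB id
// and render it on the Yaydoc website.
router.get('/prstatus/:id', function(req, res, next) {
  StatusLog.findById(req.params.id, function(error, log) {
    if (error || !log) {
      return res.status(404).send('Build log not found');
    }
    // 'prstatus' view and 'content' field are assumed names
    res.render('prstatus', { buildLog: log.content });
  });
});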

exports.createStatus = function(commitId, name, state, description, targetURL, accessToken, callback) {
  request.post({
    url: `https://api.github.com/repos/${name}/statuses/${commitId}`,
    headers: {
      'User-Agent': 'Yaydoc',
      'Authorization': 'token ' + crypter.decrypt(accessToken),
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      state: state,
      target_url: targetURL,
      description: description,
      context: "Yaydoc CI"
    })
  }, function(error, response, body) {
    if (error !== null) {
      return callback({description: 'Unable to create status'}, null);
    }
    callback(null, JSON.parse(body));
  });
};

After saving the build log, I send a request to GitHub to show the status of the build along with the build log URL, where the user can click the Details link and see the build log.

Showing Pull Request Build Status in Yaydoc

Yaydoc is integrated into various open source projects in FOSSASIA. We have to make sure that a contributor's PR does not break the build. So, I decided to check whether a PR breaks the build and to notify the status of the build using the GitHub status API.

exports.registerHook = function (data, accessToken) {
  return new Promise(function(resolve, reject) {
    var hookurl = 'http://' + process.env.HOSTNAME + '/ci/webhook';
    if (data.sub === true) {
      hookurl += `?sub=true`;
    }
    request({
      url: `https://api.github.com/repos/${data.name}/hooks`,
      headers: {
        'User-Agent': 'Yaydoc',
        'Authorization': 'token ' + crypter.decrypt(accessToken)
      },
      method: 'POST',
      json: {
        name: "web",
        active: true,
        events: [
          "push",
          "pull_request"
        ],
        config: {
          url: hookurl,
          content_type: "json"
        }
      }
    }, function(error, response, body) {
      if (response.statusCode !== 201) {
        console.log(response.statusCode + ': ' + response.statusMessage);
        resolve({status: false, body:body});
      } else {
        resolve({status: true, body: body});
      }
    });
  });
};

I register the webhook when the user registers the repository to Yaydoc, for both the push and pull_request events. The push event is used for building the documentation and hosting it on GitHub Pages; the pull_request event is used for checking the build of the pull request.
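
A hypothetical invocation of the registerHook function above (the repository name and token are placeholders):

// Register push and pull_request webhooks for a repository;
// 'sub: false' skips appending the ?sub=true flag to the hook URL.
registerHook({ name: 'fossasia/yaydoc', sub: false }, accessToken)
  .then(function(result) {
    if (result.status) {
      console.log('Webhook registered successfully');
    } else {
      console.log('Webhook registration failed: ' + JSON.stringify(result.body));
    }
  });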

github.createStatus(commitId, req.body.repository.full_name, "pending", "Yaydoc is checking your build", repositoryData.accessToken, function(error, data) {
  if (!error) {
    var user = req.body.pull_request.head.label.split(":")[0];
    var targetBranch = req.body.pull_request.head.label.split(":")[1];
    var gitURL = `https://github.com/${user}/${req.body.repository.name}.git`;
    var data = {
      email: "[email protected]",
      gitUrl: gitURL,
      docTheme: "",
      debug: true,
      docPath: "",
      buildStatus: true,
      targetBranch: targetBranch
    };
    generator.executeScript({}, data, function(error, generatedData) {
      var status, description;
      if (error) {
        status = "failure";
        description = error.message;
      } else {
        status = "success";
        description = generatedData.message;
      }
      github.createStatus(commitId, req.body.repository.full_name, status, description, repositoryData.accessToken, function(error, data) {
        if (error) {
          console.log(error);
        } else {
          console.log(data);
        }
      });
    });
  }
});

When anyone opens a new PR, GitHub sends a request to the Yaydoc webhook. Then I send a status to GitHub saying "Yaydoc is checking your build" with the state `pending`. After that, the documentation is generated and the exit code is checked: if the exit code is zero, I send the `success` status, otherwise I send the `failure` status.

Adding Github buttons to Generated Documentation with Yaydoc

Many times repository owners want to link to their GitHub source code, issue tracker, etc. from the documentation. This can also help direct some users toward becoming potential contributors to the repository. As a step towards this feature, we added the ability to add automatically generated GitHub buttons to the top of docs built with Yaydoc.

To do so, we created a custom Sphinx extension which makes use of http://buttons.github.io/, an excellent service for embedding GitHub buttons in any website. The extension takes multiple config values and uses them to generate the HTML, which it adds to the top of the internal docutils tree using a raw node.

GITHUB_BUTTON_SPEC = {
    'watch': ('eye', 'https://github.com/{user}/{repo}/subscription'),
    'star': ('star', 'https://github.com/{user}/{repo}'),
    'fork': ('repo-forked', 'https://github.com/{user}/{repo}/fork'),
    'follow': ('', 'https://github.com/{user}'),
    'issues': ('issue-opened', 'https://github.com/{user}/{repo}/issues'),
}

def get_button_tag(user, repo, btn_type, show_count, size):
    spec = GITHUB_BUTTON_SPEC[btn_type]
    icon, href = spec[0], spec[1].format(user=user, repo=repo)
    tag_fmt = '<a class="github-button" href="{href}" data-size="{size}"'
    if icon:
        tag_fmt += ' data-icon="octicon-{icon}"'
    tag_fmt += ' data-show-count="{show_count}">{text}</a>'
    return tag_fmt.format(href=href,
                          icon=icon,
                          size=size,
                          show_count=show_count,
                          text=btn_type.title())

The above snippet shows how the method takes various parameters: the user name, the name of the repository, the button type (one of fork, issues, watch, follow and star), whether to display counts beside the buttons, and whether a large button should be used. Another method named get_button_tags reads the various config values and calls the above method with appropriate parameters to generate each button, as sketched below.
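
A minimal sketch of get_button_tags (the github_button_* config names are inferred from the .yaydoc.yml keys shown later, not confirmed):

def get_button_tags(config):
    """Read the extension config and generate HTML for each enabled button."""
    show_count = 'true' if config.github_button_show_count else 'false'
    # buttons.github.io treats data-size="large" as a large button;
    # any other value falls back to the default size.
    size = 'large' if config.github_button_large else 'none'
    tags = []
    for btn_type, enabled in config.github_button_buttons.items():
        if enabled and btn_type in GITHUB_BUTTON_SPEC:
            tags.append(get_button_tag(config.github_user_name,
                                       config.github_repo,
                                       btn_type, show_count, size))
    return '\n'.join(tags)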

The extension makes use of the doctree-resolved event emitted by Sphinx to hook into the internal doctree. The following snippet shows how it is done.

def on_doctree_resolved(app, doctree, docname):
    if not app.config.github_user_name or not app.config.github_repo:
        return
    buttons = nodes.raw('', get_button_tags(app.config), format='html')
    doctree.insert(0, buttons)

Finally, we add the custom JavaScript using the add_javascript method.

app.add_javascript('https://buttons.github.io/buttons.js')
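
Putting it all together, the extension's setup function might look like the following sketch (the config defaults are assumptions):

def setup(app):
    # Register the config values the extension reads (defaults assumed).
    app.add_config_value('github_user_name', '', 'html')
    app.add_config_value('github_repo', '', 'html')
    app.add_config_value('github_button_buttons', {}, 'html')
    app.add_config_value('github_button_show_count', True, 'html')
    app.add_config_value('github_button_large', False, 'html')
    # buttons.js turns the generated anchor tags into rendered buttons.
    app.add_javascript('https://buttons.github.io/buttons.js')
    # Insert the buttons once each doctree has been resolved.
    app.connect('doctree-resolved', on_doctree_resolved)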

To use this with Yaydoc, users just need to add the following to their .yaydoc.yml file.

build:
  github_button:
    buttons:
      watch: true
      star: true
      issues: true
      fork: true
      follow: true
    show_count: true
    large: true

Resources

  1. Homepage of GitHub:buttons – http://buttons.github.io/
  2. Sphinx extension tutorial – http://www.sphinx-doc.org/en/stable/extdev/tutorial.html

Scraping Concurrently with Loklak Server

At present, SearchScraper in Loklak Server uses numerous threads to scrape the Twitter website. The fetched data is cleaned, and more data is extracted from it. But scraping just Twitter leaves this capacity underused.

Concurrent scraping of other websites like Quora, YouTube, GitHub, etc. can be added to diversify the application. In this way, a single endpoint, search.json, can serve multiple services.

As this feature is still being refined, we will discuss only the basic structure of the system with the new changes. I tried to implement a more abstract way of scraping by:

1) Fetching the input data in SearchServlet

Instead of selecting individual GET parameters and referencing them, the complete Map object is now referenced, which makes it possible to add more functionality based on the input GET parameters. The dataArray object (a JSONArray) is fetched from the DAO.scrapeLoklak method and is embedded in the output under the key results.

    // start a scraper
    inputMap.put("query", query);
    DAO.log(request.getServletPath() + " scraping with query: "
           + query + " scraper: " + scraper);
    dataArray = DAO.scrapeLoklak(inputMap, true, true);


2) Running the selected scrapers concurrently

In DAO.java, the relevant GET parameters of inputMap are fetched and cleaned. They are used to choose the scrapers to run, via the getScraperObjects() method.

Timeline2.Order order = getOrder(inputMap.get("order"));
Timeline2 dataSet = new Timeline2(order);
List<String> scraperList = Arrays.asList(inputMap.get("scraper").trim().split("\\s*,\\s*"));


Threads are created to fetch data from the different scrapers, with the thread pool sized according to the list of scraper objects fetched. The input map is passed as an argument to the scrapers, which read further GET parameters from it and shape their output accordingly.

List<BaseScraper> scraperObjList = getScraperObjects(scraperList, inputMap);
ExecutorService scraperRunner = Executors.newFixedThreadPool(scraperObjList.size());

try {
    for (BaseScraper scraper : scraperObjList) {
        scraperRunner.execute(() -> {
            dataSet.mergePost(scraper.getData());
        });
    }
} finally {
    scraperRunner.shutdown();

    try {
        scraperRunner.awaitTermination(24L, TimeUnit.HOURS);
    } catch (InterruptedException e) { }
}


3) Fetching the selected Scraper Objects in DAO.java

Here a variable of the abstract class BaseScraper (the superclass of all search scrapers) is used to create the list of scrapers to run. Each scraper's constructor is fed the input map so that it can be scraped accordingly.

List<BaseScraper> scraperObjList = new ArrayList<BaseScraper>();
BaseScraper scraperObj = null;

if (scraperList.contains("github") || scraperList.contains("all")) {
    scraperObj = new GithubProfileScraper(inputMap);
    scraperObjList.add(scraperObj);
}
.
.
.


Registering Organizations’ Repositories for Continuous Integration with Yaydoc

Among the various features implemented in Yaydoc was the introduction of a modal in the web interface used for continuous deployment. The modal was used to register a user's repositories to Yaydoc. All registered repositories then had their documentation updated continuously at each commit made to the repository. This functionality is achieved using GitHub webhooks.

The implementation performed the continuous deployment successfully. However, there was a limitation: only the public repositories owned by a user could be registered. Repositories owned by organizations, which the user either owned or had admin access to, couldn't be registered to Yaydoc.

In order to implement this enhancement, a select tag was added which contains all the organizations the user has authorized Yaydoc to access. These organizations are retrieved from GitHub's Organization API using the user's access token.

/**
 * Retrieve a list of organization the user has access to
 * @param accessToken: Access Token of the user
 * @param callback: Returning the list of organizations
 */
exports.retrieveOrgs = function (accessToken, callback) {
  request({
    url: 'https://api.github.com/user/orgs',
    headers: {
      'User-Agent': 'request',
      'Authorization': 'token ' + accessToken
    }
  }, function (error, response, body) {
    var organizations = [];
    var bodyJSON = JSON.parse(body);
    bodyJSON.forEach(function (organization) {
      organizations.push(organization.login);
    });
    return callback(organizations);
  });
};

On selecting a particular organization from the select tag, the list of repositories is updated. The user then enters a query in a search input which, on submitting, shows a list of repositories matching the query. An AJAX GET request is sent to GitHub's Search API to retrieve all the repositories matching the keyword.

$(function () {
  ....
$.get(`https://api.github.com/search/repositories?q=user:${username}+fork:true+${searchBarInput.val()}`, function (result) {
    ....
    result.items.forEach(function (repository) {
      options += '<option>' + repository.full_name + '</option>';
    });
    ....
  });
  ....
});

The selected repository is then submitted to the backend, where the repository is registered in Yaydoc's database and a hook is set up to Yaydoc's CI, just as happens with the user's own repositories. After a successful registration, every commit on the user's or organization's repository sends a webhook, on receiving which Yaydoc performs the documentation generation and deployment process.

Resources:

  1. GitHub's Organization API: https://developer.github.com/v3/orgs/
  2. GitHub's Search API: https://developer.github.com/v3/search/
  3. Simplified HTTP Request Client: https://github.com/request/request

Continuous Integration in Yaydoc using GitHub webhook API

In Yaydoc, Travis is used to push the documentation for every commit. But this makes us rely on a third party to push the documentation, and in the long run it won't allow us to implement new features, so we decided to do the continuous documentation pushing ourselves. In order to build the documentation for every commit, we have to know when the user pushes code. This can be achieved using the GitHub webhook API. Basically, we have to register our API with a specific GitHub repository, and GitHub will then send a POST request to our API on every commit.

The "auth/ci" handler is used to get access from the user. Here we request the user to grant Yaydoc access to their public repositories, permission to read organization details, and write permission to add a webhook to the repository. I also maintain state by setting the ci session flag to true, so that I can tell whether the callback is for a gh-pages deploy or a CI deploy.

On callback, I keep the necessary information such as username, access_token, id and email in the session. Then, based on the ci session state, I redirect to the appropriate handler, in this case "ci/register".

After redirecting to "ci/register", I fetch all the user's public repositories using the GitHub API and ask the user to choose the repository on which they want to integrate Yaydoc CI.

router.post('/register', function (req, res, next) {
  request({
    url: `https://api.github.com/repos/${req.session.username}/${repositoryName}/hooks?access_token=${req.session.token}`,
    method: 'POST',
    json: {
      name: "web",
      active: true,
      events: [
        "push"
      ],
      config: {
        url: process.env.HOSTNAME + '/ci/webhook',
        content_type: "json"
      }
    }
  }, function(error, response, body) {
    repositoryModel.newRepository(req.body.repository,
      req.session.username,
      req.session.githubId,
      crypter.encrypt(req.session.token),
      req.session.email)
      .then(function(result) {
        res.render("index", {
          showMessage: true,
          messages: `Thanks for registering with Yaydoc. Hereafter documentation will be pushed to the GitHub pages on each commit.`
        });
      });
  });
});

After the user chooses the repository, a POST request is sent to "ci/register". I then register the webhook on the repository and save the repository and user details in the database, so they can be used when GitHub sends a request to push the documentation to GitHub Pages.

router.post('/webhook', function(req, res, next) {
  var event = req.get('X-GitHub-Event');
  // GitHub sends the event name in lowercase, e.g. 'push'
  if (event === 'push') {
    repositoryModel.findOneRepository({
      githubId: req.body.repository.owner.id,
      name: req.body.repository.name
    }).then(function(result) {
      var data = {
        email: result.email,
        gitUrl: req.body.repository.clone_url,
        docTheme: "",
      };
      generator.executeScript({}, data, function(err, generatedData) {
        deploy.deployPages({}, {
          email: result.email,
          gitURL: req.body.repository.clone_url,
          username: result.username,
          uniqueId: generatedData.uniqueId,
          encryptedToken: result.accessToken
        });
      });
    });
    res.json({
      status: true
    });
  }
});

After the webhook is registered, GitHub will send a request to the URL we registered for the repository; in our case, "https://yaydoc.herokuapp.com/ci/webhook" is that URL. The type of the event can be determined by reading the 'X-GitHub-Event' header. Right now I register only for the push event, so we will only receive push events. GitHub also gives us the repository details in the request body.

When the user makes a commit to the repository, GitHub sends a POST request to Yaydoc's server. From the request body we get the repository name and the GitHub user ID, which I use to retrieve the access token from the database that we stored when the user registered the repository with the CI. The documentation is then generated using the generate script and pushed to GitHub Pages using the deploy script.

Now Yaydoc generates documentation on every push the user makes to the repository, and this enables us to integrate new features in our own custom environment. We also plan to build a full-featured CI platform.

sTeam Server Object permissions and Doxygen Documentation

societyserver aims to be a platform for developing collaborative applications.
sTeam server project repository: sTeam.
sTeam-REST API repository: sTeam-REST

sTeam Server object permissions

The sTeam command line lacks the functionality to read and set object access permissions. The permission bits are: read, write, execute, move, insert, annotate, and sanction. The permission function was designed analogous to the getfacl() command in Linux. It should display permissions as rwxmias, corresponding to the permissions granted on the object.

The key functions are get_sanction, which returns a list of objects and their permissions, and sanction_object, which adds a new object and its set of permissions. The permissions are stored as an integer, and the function should break out the individual bits, like getfacl().

The permission bits for sTeam objects are declared in access.h:

// access.h: The permission bits

#define FAIL           -1 
#define ACCESS_DENIED   0
#define ACCESS_GRANTED  1
#define ACCESS_BLOCKED  2

#define SANCTION_READ          1
#define SANCTION_EXECUTE       2
#define SANCTION_MOVE          4
#define SANCTION_WRITE         8
#define SANCTION_INSERT       16
#define SANCTION_ANNOTATE     32

The get_sanction method defined in access.pike returns a mapping which holds the ACL (Access Control List) of all the objects in the sTeam server.


// Returns the sanction mapping of this object, if the caller is privileged
// the pointer will be returned, otherwise a copy.
final mapping
get_sanction()
{
    if ( _SECURITY->trust(CALLER) )
	return mSanction;
    return copy_value(mSanction);
}

The function gets the permission values which are set for every object on the server.

The sanction_object method defined in object.pike sets the permissions for new objects.


// Set new permissions for an object in the ACL. Old permissions are overwritten.
int sanction_object(object grp, int permission)
{
    ASSERTINFO(_SECURITY->valid_proxy(grp), "Sanction on non-proxy!");
    if ( query_sanction(grp) == permission )
      return permission; // if permissions are already fine

    try_event(EVENT_SANCTION, CALLER, grp, permission);
    set_sanction(grp, permission);

    run_event(EVENT_SANCTION, CALLER, grp, permission);
    return permission;
} 

This method makes use of set_sanction, which sets the permission on the object. The task ahead is to make use of the above functions and write a sTeam-shell command that lets the user easily view and change the permissions of objects; the sketch below illustrates the bit decoding involved.
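
A minimal sketch of how such a command could decode the permission integer into a getfacl()-style string, using the constants from access.h above (a hypothetical helper; the sanction bit is omitted because its constant is not shown in the excerpt):

// Hypothetical helper: decode a sanction integer into an "rwxmia"-style string.
string sanction_to_string(int permission)
{
    string result = "";
    result += (permission & SANCTION_READ)     ? "r" : "-";
    result += (permission & SANCTION_WRITE)    ? "w" : "-";
    result += (permission & SANCTION_EXECUTE)  ? "x" : "-";
    result += (permission & SANCTION_MOVE)     ? "m" : "-";
    result += (permission & SANCTION_INSERT)   ? "i" : "-";
    result += (permission & SANCTION_ANNOTATE) ? "a" : "-";
    return result;
}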

Merging into the Source

The work done during GSoC 2016 by Siddhant and Ajinkya on the sTeam server was merged into the gsoc2016-societyserver-devel and gsoc2016-source branches in the societyserver repository.
The merged code can be found at:

https://github.com/societyserver/sTeam/tree/gsoc2016-source
https://github.com/societyserver/sTeam/tree/gsoc2016-societyserver-devel

The merged code needs to be tested before the Debian package for the sTeam server is prepared. The testing has already resulted in the resolution of minor bugs.

Doxygen Documentation

The documentation for sTeam is generated using Doxygen. The doxygen.pike script was written and is used to produce the documentation for the sTeam server. The Doxyfile, which contains the configuration for generating the sTeam documentation, was modified and input files were added. The generated documentation is deployed on gh-pages in the societyserver/sTeam repository.
The documentation can be found at:


http://societyserver.github.io/sTeam/files.html

The header files and the constants defined are also included in the sTeam documentation.

sTeam documentation: SocietyserverDoc
sTeam defined constants: SocietyServerConstants
sTeam macro definitions: SocietyServerMacroDefnitions

Feel free to explore the repository. Suggestions for improvements are welcome.

Check out the FOSSASIA Ideas page for more information on projects supported by FOSSASIA.