Using Cloud storage for event exports

The Open Event orga server gives the organizer the ability to create a complete export of the event they created. Currently, when an organizer triggers an export in the orga server, a Celery job is scheduled to complete the export task, so the job finishes asynchronously. The download button is enabled for the organizer once the export is ready.

Until now, the main issue was the storage of those export zip files. All exported zip files were stored directly in local storage, and not even through the storage module created under the orga server.

local storage path

To solve this, I followed three simple steps (a minimal sketch of the flow follows the list):

  1. Wait for shutil.make_archive to finish creating the archive and store it in local storage.
  2. Copy the created archive to the storage specified by the user.
  3. Delete the local archive that was created.
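
A minimal sketch of these three steps, reusing the orga server's UploadedFile and upload() helpers shown below (the wrapper function export_and_upload is illustrative, not the project's actual code):

import os
import shutil

def export_and_upload(dir_path, storage_path):
    # Step 1: wait for shutil.make_archive to finish building the zip locally
    zip_path = shutil.make_archive(dir_path, 'zip', dir_path)
    # Step 2: copy the archive to the storage chosen by the user (s3, gs or local)
    uploaded_file = UploadedFile(zip_path, zip_path.rsplit('/', 1)[1])
    storage_url = upload(uploaded_file, storage_path)
    # Step 3: delete the local archive
    os.remove(zip_path)
    return storage_url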

The easiest part here was uploading these files to a different storage backend (s3, gs, local), as we already have a storage helper:

def upload(uploaded_file, key, **kwargs):
    """Upload handler"""

The most important logic for this issue resides in this code snippet:

dir_path = dir_path + ".zip"
storage_path = UPLOAD_PATHS['exports']['zip'].format(
    event_id=event_id
)
uploaded_file = UploadedFile(dir_path, dir_path.rsplit('/', 1)[1])
storage_url = upload(uploaded_file, storage_path)
if get_settings()['storage_place'] != "s3" and get_settings()['storage_place'] != 'gs':
    storage_url = app.config['BASE_DIR'] + storage_url.replace("/serve_", "/")
return storage_url

From the above snippet, it is clear that we are extending the process of creating the zip: once the zip is created, we build the storage path for cloud storage and upload the file there. The only part that takes a moment to understand is the conditional near the end of the snippet:

if get_settings()['storage_place'] != "s3" and get_settings()['storage_place'] != 'gs':
    storage_url = app.config['BASE_DIR'] + storage_url.replace("/serve_", "/")

Initially, the plan was simply to serve the files through “serve_static”, but the test cases were expecting the file at its actual location, so for local storage I had to remove the “serve_” part; with that, the three steps work fine.

The next thing about this storage process that needs to be discussed is a feature to delete old exports. One reason to keep them, though, is that an old backup of your event will then always be available in our cloud storage.

Generating a documentation site from markup documents with Sphinx and Pandoc

Generating a fully fledged website from a set of markup documents is no easy feat, but the wonderful tool Sphinx certainly makes the task easier. Sphinx does the heavy lifting of generating a website with built-in JavaScript-based search. But sometimes it’s not enough.

This week we were faced with two issues related to documentation generation on loklak_server and susi_server. First let me give you some context. Sphinx requires an index.rst file within /docs/ which it uses to generate the first page of the site. A very obvious way to fill it, which helps us avoid unnecessary duplication, is to use the include directive of reStructuredText to include the README file from the root of the repository.
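
For context, a minimal index.rst pulling in the README via the include directive might look like this (the relative path is an assumption about the repository layout):

.. include:: ../README.md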

This leads to the following two problems:

  • The include directive can only properly include a reStructuredText document, not a markdown one. Given a markdown document, it tries to parse the markdown as reStructuredText, which leads to errors.
  • Any relative links in the README break when it is included from another folder.

To fix the first issue, I used pypandoc, a thin wrapper around Pandoc. Pandoc is a wonderful command line tool which allows us to convert documents from one markup format to another. From the official Pandoc website itself,

If you need to convert files from one markup format into another, pandoc is your swiss-army knife.

pypandoc requires a working installation of Pandoc, which can be downloaded and installed automatically using a single line of code.
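
A minimal sketch of that one-liner (plus its import), using the same pandoc_download module that is referenced below:

from pypandoc.pandoc_download import download_pandoc

download_pandoc()  # fetches and installs pandoc for the current platform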


This gives us a cross-platform way to download Pandoc without worrying about the current platform. Now, pypandoc leaves the installer in the current working directory after download, which is fine locally but creates a problem when run on remote systems like Travis: the installer could accidentally get committed to the repository. To solve this, I had to take a look at the source code of pypandoc and call an internal method which pypandoc uses to set the name of the installer. I use that method to find out the name of the file and then delete it after the installation is over. This is one of the many benefits of open-source projects; had pypandoc not been open source, I would not have been able to do that.

url = pypandoc.pandoc_download._get_pandoc_urls()[0][pf]
filename = url.split('/')[-1]

Here pf is the current platform which can be one of ‘win32’, ‘linux’, or ‘darwin’.

Now let’s take a look at our second issue. To solve it, I used regular expressions to capture any relative links. Capturing links was easy, since all links in reStructuredText follow the same format:

`Title <url>`__

Similarly, links in markdown are in the following format:

[Title](url)

Regular expressions were the perfect candidate for this. To detect which links were relative and needed to be fixed, I checked which links start with the /docs/ directory, and then all I had to do was remove the /docs prefix from those links.
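
A minimal sketch of that idea, assuming the README has already been converted to reStructuredText (the regular expression and function name are illustrative, not the project's exact code):

import re

def fix_relative_links(rst_text):
    # Matches reStructuredText links of the form `Title </docs/path>`__
    # and strips the leading /docs so they resolve from within /docs/
    pattern = re.compile(r'`([^`<]+)<\s*/docs(/[^>]*)>`__')
    return pattern.sub(r'`\1<\2>`__', rst_text)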

A note about the loklak and SUSI server projects

Loklak is a server application which is able to collect messages from various sources, including Twitter.

SUSI AI is an intelligent open source personal assistant. It is capable of chat and voice interaction, and by using APIs it can perform actions such as music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real-time information.

Using NodeBuilder to instantiate a node-based Elasticsearch client and visualising data

As its official website mentions, Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. But in many setups, it is not possible to manually install an Elasticsearch node on a machine. To handle these kinds of scenarios, Elasticsearch provides the NodeBuilder module, which can be used to spawn an Elasticsearch node programmatically. Let’s see how.

Getting Dependencies

In order to get the ES Java API, we need to add the following line to our Gradle dependencies:

compile group: 'org.elasticsearch', name: 'securesm', version: '1.0'

The required packages will be fetched the next time we run gradle build.

Configuring Settings

In the Elasticsearch Java API, Settings are used to configure the node(s). To create a node, we first need to define its properties.

Settings.Builder settings = new Settings.Builder();

settings.put("cluster.name", "cluster_name");  // The name of the cluster

// Configuring HTTP details
settings.put("http.enabled", "true");
settings.put("http.cors.enabled", "true");
settings.put("http.cors.allow-origin", "https?:\/\/localhost(:[0-9]+)?/");  // Allow requests from localhost
settings.put("http.port", "9200");

// Configuring TCP and host
settings.put("transport.tcp.port", "9300");
settings.put("network.host", "localhost");

// Configuring node details
settings.put("node.data", "true");
settings.put("node.master", "true");

// Configuring index
settings.put("index.number_of_shards", "8");
settings.put("index.number_of_replicas", "2");
settings.put("index.refresh_interval", "10s");
settings.put("index.max_result_window", "10000");

// Defining paths
settings.put("path.conf", "/path/to/conf/");
settings.put("path.data", "/path/to/data/");
settings.put("path.home", "/path/to/home/");

There are many more settings that can be tuned in order to get desired node configuration.

Building the Node and Getting Clients

The Java API makes it very simple to launch an Elasticsearch node. This example will make use of the settings that we just built.

Node elasticsearchNode = NodeBuilder.nodeBuilder().local(false).settings(settings).node();

A piece of cake. Isn’t it? Let’s get a client now, on which we can execute our queries.

Client elasticsearchClient = elasticsearchNode.client();

Shutting Down the Node
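
Once we are done with it, the node (and its client) should be shut down. A minimal sketch, assuming the node built above (the 2.x Java API exposes close() on the node):

// Release resources and shut the node down once we are done
elasticsearchNode.close();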


A nice implementation of this module can be seen in the loklak project. It uses the settings from a configuration file and builds the node using them.

Visualisation using elasticsearch-head

So by now, we have an Elasticsearch client which is capable of doing all sorts of operations on the node. But how do we visualise the data that is being stored? Writing code and running it every time to check results is tedious and significantly slows down the development/debugging cycle.

To overcome this, we have a web frontend called elasticsearch-head which lets us execute Elasticsearch queries and monitor the cluster.

To run elasticsearch-head, we first need to have grunt-cli installed –

$ sudo npm install -g grunt-cli

Next, we will clone the repository using git and install dependencies –

$ git clone git://
$ cd elasticsearch-head
$ npm install

Next, we simply need to run the server and go to the indicated address in a web browser –

$ grunt server

At the top, enter the location at which elasticsearch-head can reach the cluster and click Connect.

Upon connecting, the dashboard appears, telling us about the status of the cluster –

The dashboard shown above is from the loklak project (more about it later).

There are 5 major sections in the UI –
1. Overview: The above screenshot, which gives details about the indices and shards of the cluster.
2. Index: Gives an overview of all the indices. Also allows adding new indices from the UI.
3. Browser: Gives a browser window for all the documents in the cluster. It looks something like this –

The left pane allows us to set the filter (index, type and field). The table listed is sortable. But we don’t always get what we are looking for manually. So, we have the following two sections.
4. Structured Query: Gives a dead simple UI that can be used to make a well structured request to Elasticsearch. This is what we need to search for to get Tweets from @gsoc that are indexed –

5. Any Request: Gives an advanced console that allows executing any query allowed by the Elasticsearch API.

A little about the loklak project and Elasticsearch

loklak is a server application which is able to collect messages from various sources, including Twitter. The server contains a search index and a peer-to-peer index sharing interface. All messages are stored in an Elasticsearch index.

Source: github/loklak/loklak_server

The project uses Elasticsearch to index all the data that it collects. It uses NodeBuilder to create an Elasticsearch node and work with the index. It is flexible enough to join an existing cluster instead of creating a new one, just by changing the configuration file.


This blog post tries to explain how NodeBuilder can be used to create Elasticsearch nodes and how they can be configured using Elasticsearch Settings.

It also demonstrates the installation and basic usage of elasticsearch-head, which is a great library to visualise and check queries against an Elasticsearch cluster.

The official Elasticsearch documentation is a good source of reference for its Java API and all other aspects.

This API or that Library – which one?

Last week, I was playing with a scraper program in the Loklak Server project when I came across a library called Boilerpipe. There were some issues in the program related to its implementation. It worked well: I implemented it and opened a pull request, but it was rejected due to the library’s maintenance issues. This wasn’t the first time an API (or a library) had let me down, but it added one more point to my ‘linear selection algorithm’ for choosing one.

Libraries once revolutionized software projects, and now APIs are taking abstraction to an even greater level. One can find many APIs and libraries on GitHub or on their respective websites, but they may be buggy, which can waste one’s time and work. I am not blogging to suggest which of the two to choose, but what to check before putting either to use in development.

So let us select a bunch of these and give a score of +1 if it satisfies a point, 0 for a don’t-care condition, and -1 for a BIG NO.

Now initialize the variable score to zero and let’s begin.

1. First things first: is it easy to understand?

Does this library’s code belong to your knowledge domain? Can you use it without any issue? Also consider your project’s platform compatibility with the library. If you are developing a prototype or a small piece of software (for an event like a hackathon), you should give an easy-to-read tutorial higher priority and score++. But if you are working on a longer-term project, you shouldn’t shy away from going the extra mile, and should leave the score unchanged.

2. Does it have any documentation or examples of implementation?

The documentation should be well written and well maintained. If there isn’t any, I am OK with examples. Choose according to your comfort. If there is neither, the code should at least be easy to understand.

3. Does it fulfill all my needs?

Test and try to implement all the methods/API calls needed for the project. Sometimes it may not have all the methods you need for your application, or maybe some methods are buggy. Take care with this point; a faulty library can ruin all your hard work.

4. Efficiency and performance (BONUS POINT for this one)

Really important for projects with strict capacity or performance requirements.

5. Look at the apps where it is already implemented

If you are in a hackathon or a dev sprint, checking for applications that already use this API should be enough. Just skip the rest of the steps (except the first).

6. Can you find blogs, Stack Overflow questions and tutorials?

If yes, this is a score++.

7. An Active Community, a Super GO!

Yaay! An extra plus with the previous point.

8. Don’t tell me it isn’t maintained

This is important: if the library isn’t maintained, you are prone to bugs that may pop up in the future and never get solved, and its performance can never be improved. If there is no other option, it is better to use its parts directly in your code so that you can work on them yourself if needed.

Now calculate the scores, choose the fittest one and get to work.

So with the deserving library in your hand, my first blog post here ends.

Adding swap space to your DigitalOcean droplet, if you run out of RAM

The Open Event Android App generator runs on a DigitalOcean droplet. The deployment runs on a USD 10 box that has 1 GB of RAM, but for testing I often use a USD 5 box that has only 512 MB of RAM.

When trying to build an Android app using Gradle and Java 8, you can run out of RAM (especially with only 512 MB).

What we can do to remedy this problem is create a swapfile. On an SSD-based system, swap space works almost as fast as RAM, because SSDs have very high read/write speeds.

Check hard disk space availability using

df -h

There should be an output like this

Filesystem      Size  Used Avail Use% Mounted on
udev            238M     0  238M   0% /dev
tmpfs            49M  624K   49M   2% /run
/dev/vda1        20G  1.1G   18G   6% /
tmpfs           245M     0  245M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           245M     0  245M   0% /sys/fs/cgroup
tmpfs            49M     0   49M   0% /run/user/1001

The steps to create a swap file and enable it as swap are:

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

We can verify using

sudo swapon --show
/swapfile file 1024M   0B   -1

And now if we check RAM usage using free -h, we’ll see

              total        used        free      shared  buff/cache   available
Mem:           488M         37M         96M        652K        354M        425M
Swap:          1.0G          0B        1.0G

Do not use this as a permanent measure on any SSD-based filesystem: heavy swap usage can wear out your SSD if used for long. We use this only for short periods of time to help us build Android APKs on low-RAM systems.

Doing a table join in Android without using rawQuery

The Open Event Android App downloads data from the API (about events, sessions, speakers etc.) and saves it locally in an SQLite database, so that the app can work even without an internet connection.

Since there are multiple entities like Sessions, Speakers and Events, and each Session holds the ids of its speakers, the id of its venue and so on, we often need to use JOIN queries to combine data from two tables.


Android has some really nice SQLite helper classes and methods. The ones I like the most are SQLiteDatabase.query, SQLiteDatabase.update and SQLiteDatabase.insert, because they take away quite a bit of the pain of typing out SQL commands by hand.

But unfortunately, if you have to use a JOIN, then usually you have to go and use the SQLiteDatabase.rawQuery method and end up having to type your commands by hand.

But but but, if the two tables you are joining do not have any common column names (it is actually good design to ensure this, for example by prefixing all column names with tablename_), then you can hack the usual SQLiteDatabase.query() method to get a JOINed query.

Now ideally, to get the Session where speaker_id was 1, a nice looking SQL query should be like this –

SELECT * FROM speaker INNER JOIN session
ON speaker_id = session_speaker_id
WHERE speaker_id = 1

Which, in android, can be done like this –

String rawQuery = "SELECT * FROM " + SpeakerTable.TABLE_NAME + " INNER JOIN " + SessionTable.TABLE_NAME
        + " ON " + SessionTable.EXP_ID + " = " + SpeakerTable.ID
        + " WHERE " + SessionTable.ID + " = " +  id;
Cursor c = db.rawQuery(rawQuery, null);

But of course, because of SQLite’s backward compatible support of the primitive way of querying, we turn that command into

SELECT * FROM session, speaker
WHERE speaker_id = session_speaker_id AND speaker_id = 1

Now this we can write by hacking the terminology used by the #query() method –

Cursor c = db.query(
        SessionTable.TABLE_NAME + " , " + SpeakerTable.TABLE_NAME,
        Utils.concat(SessionTable.PROJECTION, SpeakerTable.PROJECTION),
        SessionTable.EXP_ID + " = " + SpeakerTable.ID + " AND " + SpeakerTable.ID + " = " +  id,
        null, null, null, null);  // selectionArgs, groupBy, having and orderBy are not needed here
To explain a bit: the first argument, String tableName, can safely take “table1 , table2” as well. The second argument takes a String array of column names; I concatenated the projections of the two classes. And finally, I put my WHERE clause into the String selection argument.

You can see the code for all database operations in the Android app here.

Getting code coverage in a Nodejs project using Travis and CodeCov

We had set up unit tests on the webapp generator using mocha and chai, as I had blogged before.

But we also need to get coverage reports for each code commit and the overall state of the repo.

Since the repo is hosted on GitHub, Travis comes to our rescue. As you can see from our .travis.yml file, we already had Travis running to check builds and deploy to Heroku.

Now to enable Codecov, simply go to their site and enable your repository (you have to log in with GitHub to see your GitHub repos).

Once you do, your dashboard should look like this

We use istanbul to get code coverage. To try it out, just run

istanbul cover _mocha

in the root of your project (where the /test/ folder is). That should generate a folder called coverage containing the lcov report. Codecov can read lcov reports; they provide a bash script which can be run to automatically upload coverage reports. You can run it like this –

bash <(curl -s

Now go back to your codecov dashboard, and your coverage report should show up.


If all is well, we can integrate this with Travis so that it happens on every code push. Add this to your .travis.yml file:

- istanbul cover _mocha
- bash <(curl -s

This will ensure that on each push, we run coverage first. And if it is successful, we push the result to codecov.

We can see coverage file by file like this


And we can see coverage line by line in a file like this



GSoC 2016 Summary of work done – Improving sTeam

To understand my project you first need to understand what sTeam is. For that you can refer to the blog I wrote


I started off small by fixing already existing bugs. There were multiple bugs in the edit command of the sTeam command line interface. I extended the edit command to allow opening multiple files as tabs in a vim editor. To give users more options and to make working with the sTeam client easier, I added the ability to open new files in sTeam directly from inside vim; I wrote a vim script for this and communicated with the sTeam server through it. My first major task was to implement a TLS connection between the sTeam command line client and the server. For this I had to understand COAL, which is a home-grown protocol for sTeam. I improved the tooling for sTeam by adding commands to work with groups from the steam-shell and by allowing re-login from debug.pike. After this I did some cleanup work by removing repeated code: the code for login was being repeated in different tools, so I moved the common code into a separate file and imported it in all the tools. I wrote an extensive help command describing every steam-shell command and giving its syntax. There was also a conceptual error in the steam-shell: rooms and gates are the same, but the gothrough command allowed users to enter a gate but not a room. This was changed to an enter command supporting both gates and rooms, and I also changed the output of the look command so that it no longer shows gates and rooms as separate entities.

The next two tasks were entirely new additions to the project. First, I wrote a test suite to test the calls to COAL functions. Pike does not have any kind of testing framework, so I had to design my own and write test cases to test the COAL function calls. This will help further development of sTeam, as testing new code becomes easier. The next addition was to write a Linux command for sTeam. The sTeam tools were accessible only from the tools folder, which got copied to a particular location on installation. Now, after installation, users can use the steam command from anywhere to access all the tools.

1. We have combined all the work into two branches.

The commits made by me in each branch can be seen here.

2. I wrote weekly blogs summarizing the work done during the week.

All the blogs can be found at

The list in reverse chronological order is as follows.

3. A list of tasks covered and all the Pull requests related to each can be seen here




Fix the edit script.



Extend edit command for multiple files. Each file opens in its own tab



Implementing TLS for COAL to make it COALS.



Add the functionality to open files from inside vim



Write a plugin to make closing of files easier by closing the logs automatically.



Add the command to create groups to steam-shell.

Issue-68 Issue-97

PR-77 PR-98

Add login command to allow relogin in debug.pike



Remove repeated code used for login



Add a detailed help command to make sTeam easier to use for new users.



Change gothrough to enter and allow them to enter rooms as well.



Change the output of look command and show rooms and gates under the same section



Make steam tools accessible from everywhere

Issue-126 Issue-128 Issue-130 Issue-134

PR-127 PR-129 PR-131 PR-135

Write test cases to keep the software error free.

Issue-104 Issue-107 Issue-109 Issue-110 Issue-111 Issue-113 Issue-116 Issue-118 Issue-122 Issue-124

PR-105 PR-108 PR-112 PR-114 PR-115 PR-117 PR-119 PR-123 PR-125




4. Scrum Reports

Daily scrum reports have been posted and discussed on #steam-devel on and a backup can be found on the mailing list

5. Further Enhancements

  • The testing framework needs to be improved
  • More test cases need to be added


6. Conclusion

In the end, I would like to thank Google and FOSSASIA for providing me with this wonderful opportunity to learn and collaborate. I would like to thank my mentors Martin and Trilok for guiding me through all the difficult times and helping me solve bugs whenever I got stuck. I will continue contributing to open source and will try joining more projects under FOSSASIA to improve my skill set and gain new experience. I will also be taking an active part in Google Code-in and would love to be a mentor.

Building Android preference screen

Some days ago, I started building a settings screen for my Android app. Everything was fine until I opened it on an older Android version. The overview screen had no material design, which I would have accepted if it weren’t for the completely broken dialogs: Android’s internal preferences use Android’s internal app.AlertDialogs. Those dialogs, in combination with the AppCompat dialog theme I had applied to them, resulted in a dialog with two frames on older devices (one system default dialog and a material frame around it).
So I decided to switch to the v7 preference library, only to face a lot more issues.

Including the Library

In order to use the new preferences, we need to import a library. To do so, we add this line to our Gradle dependencies (you should change the version number to the latest):

compile ''

Building The Preference Screen

Creating the Preferences

First, we need to create our preference structure: we create a new XML Android resource file at xml/app_preferences.xml and add our preference structure to it. Make sure to add a unique android:key attribute for each preference. More information: How to build the XML

<PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android">
    <PreferenceCategory
        android:title="Category 1">
        <SwitchPreferenceCompat
            android:key="key_switch"
            android:title="Switch Preference"
            android:summary="Switch Summary"
            android:defaultValue="true" />
        <EditTextPreference
            android:key="key_edittext"
            android:title="EditText Preference"
            android:summary="EditText Summary"
            android:dialogMessage="Dialog Message"
            android:defaultValue="Default value" />
        <CheckBoxPreference
            android:key="key_checkbox"
            android:title="CheckBox Preference"
            android:summary="CheckBox Summary" />
    </PreferenceCategory>
</PreferenceScreen>

The v7.preference library provides some preferences we can use: CheckBoxPreference, SwitchPreferenceCompat, EditTextPreference and a ListPreference (and a basic Preference). If we need more than these predefined preferences, we have to build them on our own.

Creating the Preference Fragment

Now we need to create our preference fragment, where we can show the preferences from our XML file. We do this by creating a new class called SettingsFragment, which extends PreferenceFragmentCompat. Since onCreatePreferences is declared abstract in the source code of the library, we are forced to provide our own implementation telling the fragment to load our just-created app_preferences.xml.

public class SettingsFragment extends PreferenceFragmentCompat {
    @Override
    public void onCreatePreferences(Bundle bundle, String s) {
        // Load the preferences from the XML file
        setPreferencesFromResource(R.xml.app_preferences, s);
    }
}

We can add this SettingsFragment, like any other Fragment (e.g. with a FragmentTransaction), to an Activity, as in the sketch below.
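
A minimal sketch of that, assuming the Activity extends AppCompatActivity and its layout contains a container view (the id settings_container is illustrative):

// Inside the Activity's onCreate(), after setContentView(...)
getSupportFragmentManager()
        .beginTransaction()
        .replace(R.id.settings_container, new SettingsFragment())
        .commit();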

Applying the Preference Theme

Finally we need to specify a preferenceTheme in our Activity’s theme. If we don’t do so, the app will crash with an IllegalStateException.
The v7.preference library provides only one theme: PreferenceThemeOverlay (you may have a look at its source code). We add it with the following line in our Activity’s theme:

<item name="preferenceTheme">@style/PreferenceThemeOverlay</item>

After we have done this, our result should now look like this.
(The Activity’s parent theme is Theme.AppCompat and the background is set with android:windowBackground)

Settings Screen with PreferenceThemeOverlay

As you can see, it has an oversized font and a horizontal line below the category title. This is definitely not material design.

It looks more like a mixture of material design for the CheckBox and Switch widgets on the right and an old design for everything else.

This leads us to the next point: Applying the material theme to our settings.

Applying the Material Design Theme

Since there is no material theme in our current preference library, we need to import the v14.preference library. It is strange that Google split these two libraries up, because the v14 version is obviously only an addition to the v7.preference library. For us, however, this means we have to add one more line to our Gradle dependencies (you should change the version number to the latest):

compile ''

Now we have access to two more themes: PreferenceThemeOverlay.v14 and PreferenceThemeOverlay.v14.Material (You may have a look at their source code). To use the material theme, we simply change the preferenceTheme in our Activity’s theme.

<item name="preferenceTheme">@style/PreferenceThemeOverlay.v14.Material</item>

A side effect of including the v14.preference library is that we can use a new preference called MultiSelectListPreference (it requires the v7.preference library to work).

Settings Screen with PreferenceThemeOverlay.v14.Material

Our result should look like this. And this time it is full material design.

The font is not oversized anymore and also the horizontal line below the category title has disappeared.

We can change the color of the CheckBox, the Switch and the PreferenceCategory title by changing the colorAccent in our Activity’s theme.
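
For example, adding an accent color like this to the Activity's theme changes those widgets (the color value here is just a placeholder):

<item name="colorAccent">#2196F3</item>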

This was only the first step to create a material design Settings Screen. As soon as you open the Alert Dialog of the EditText preference, you will find more design issues.

Using Partials in Handlebars and Reusing Code

Open Event Webapp uses Handlebars partials to avoid duplicating code: we can reuse a template via a Handlebars partial.

How to use Handlebars partials?

To use a Handlebars partial, we have to follow some easy steps:

Step 1: Register your partial by using the Handlebars.registerPartial function:

Handlebars.registerPartial('myPartial', '{{name}}')

Step 2: Call the partial in a template:

{{> myPartial }}

In Open-Event Webapp we have made partials for common templates like navbar and footer.

1. // Navbar template (navbar.hbs)

 <!-- Fixed navbar -->
 <nav class="navbar navbar-default navbar-fixed-top">
  <div class="container">
   <div class="navbar-header navbar-left pull-left">
    <a class="navbar-brand" href="{{ eventurls.main_page_url }}">
    {{#if eventurls.logo_url}}
    <img alt="{{}}" class="logo logo-dark" src="{{  eventurls.logo_url }}">
    {{/if}}
 <div class="navbar-header navbar-right pull-right">
   <ul style="margin-left:20px" class="nav navbar-nav pull-left">
   {{#if show}}
    <li class="pull-left"><a href="{{link}}" style="padding-right:0; padding-left:0;margin-left:15px"><i class="fa fa-lg fa-{{icon}}" aria-hidden="true" title="{{{icon}}}"></i></a></li>
 <button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse" style="margin-left:1em;margin-top:1em;">
   <span class="sr-only">Toggle navigation</span>
   <span class="icon-bar"></span>
   <span class="icon-bar"></span>
   <span class="icon-bar"></span>

 <div class="hidden-lg hidden-md hidden-sm clearfix"></div>
   <div class="collapse navbar-collapse">
    <ul class="nav navbar-nav navbar-right">
     <li class="navlink"><a id="homelink" href="index.html">Home</a>
     {{#if timeList}}
     <li class="navlink">
     <a id="schedulelink" href="schedule.html">
     {{#if tracks}}
     <li class="navlink">
      <a id="trackslink" href="tracks.html">Tracks</a>
     {{#if roomsinfo}}
     <li class="navlink">
      <a id="roomslink" href="rooms.html">Rooms</a>   
    {{#if speakerslist}}
    <li class="navlink">
      <a id="speakerslink" href="speakers.html">Speakers</a>
2. // Compiling the template by providing its path

const navbar = handlebars.compile(fs.readFileSync(__dirname + '/templates/partials/navbar.hbs').toString('utf-8'));

3. // Registering the partial

handlebars.registerPartial('navbar', navbar);
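
Once registered, the navbar partial can be pulled into any page template exactly like the generic example above (the template name used here is illustrative):

<!-- e.g. at the top of a page template such as schedule.hbs -->
{{> navbar}}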