Quick-filter for filtering of events

With a myriad of events listed, it can be difficult for users to find a specific event. Giving users a tool to search with whatever clue they have makes this much easier. A search tool based on location, date (or day), the event's name, or any combination of these answers most of the problems around this issue.

To create such a tool we implement a quick-filter component.

An Ember Component is a view that is completely isolated. Properties accessed in its templates go to the view object and actions are targeted at the view object. There is no access to the surrounding context or outer controller; all contextual information must be passed in.

Now, let’s create a component ‘quick-filter’
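The component can be generated with the Ember CLI (a standard generator invocation; the exact form may vary slightly with your CLI version):

ember generate component quick-filter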

This command will create three files:

  1. quick-filter.js

A JS file is used mainly to run client-side JavaScript code on a webpage. The quick-filter component encapsulates snippets of Handlebars templates that can be reused in our code. If we need to customize the behaviour of our component, we define a subclass of Ember.Component in the app/components directory.

import Ember from 'ember';
const { Component } = Ember;
export default Component.extend({
  tagName   : 'footer',
  classNames: ['ui', 'action', 'input', 'fluid']
});

 

  2. quick-filter-test.js

This is where we check whether our component is compatible with the other components of the system. Here, for now, we are just making sure that our component renders, by checking for the presence of the 'Search' text.

import { test } from 'ember-qunit';
import moduleForComponent from 'open-event-frontend/tests/helpers/component-helper';
import hbs from 'htmlbars-inline-precompile';

moduleForComponent('quick-filter', 'Integration | Component | quick-filter');

test('it renders', function(assert) {
  this.render(hbs`{{quick-filter}}`);
  assert.ok(this.$().html().trim().includes('Search'));
});

 

  3. quick-filter.hbs

Here we design our component. We have used Semantic UI elements for the design; specifically, we have used:

  • ui-action-input
  • ui-dropdown
  • ui-blue-button-large

We have also used Semantic UI's fluid class to make the component take up the full width of its container.

{{input type='text' placeholder=(t 'Search for events')}}
{{#unless device.isMobile}}
  {{input type='text' placeholder=(t 'Location')}}
  {{#ui-dropdown class='search selection' selected=filterDate forceSelection=false}}
    {{input type='hidden' id='filter_date' value=filterDate}}
    <i class="dropdown icon"></i>
    <div class="default text">{{t 'All Dates'}}</div>
    <div class="menu">
      <div class="item" data-value="all-dates">{{t 'All Dates'}}</div>
      <div class="item" data-value="today">{{t 'Today'}}</div>
      <div class="item" data-value="tomorrow">{{t 'Tomorrow'}}</div>
      <div class="item" data-value="this-week">{{t 'This Week'}}</div>
      <div class="item" data-value="this-weekend">{{t 'This Weekend'}}</div>
      <div class="item" data-value="next-week">{{t 'Next Week'}}</div>
      <div class="item" data-value="this-month">{{t 'This Month'}}</div>
    </div>
  {{/ui-dropdown}}
{{/unless}}
<button class="ui blue button large" type="button">{{t 'Search'}}</button>

Now, our component is ready, and the only part remaining is to place it in our application. We place it in app/templates/index.hbs

<div class="ui container">
  {{quick-filter}}
  <h2>{{t 'Upcoming Events'}}</h2>
  <div class="ui stackable three column grid">
    {{#each model as |event|}}
      {{event-card event=event shareEvent=(action 'shareEvent')}}
    {{/each}}
  </div>
</div>

Now our filter component is up and running.


Deploying SUSI.AI with Docker

Docker is much more efficient than a VM in allocating shared resources between various containers. To deploy SUSI we need to create a Docker container. There are two ways to build it. The first way is to fork the SUSI project on GitHub, sign up on Docker Hub and create an auto-build Docker container. The second way is to build it manually from the command prompt of your computer. The following instructions need to be executed in a cloud shell or on a Linux machine.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get -y install docker.io
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo docker build -t susi https://github.com/fossasia/susi_server.git


The first three commands install Docker on the machine. The next four lines create a 4 GB swap file, give it the required permissions and enable it, which helps the build on low-memory machines. The final command builds the Docker image from the GitHub repository. Thus, we have successfully made the Docker image. We can deploy it on the cloud by using the following command.

sudo docker run -d -p 80:80 -p 443:443 susi


Deploying SUSI on the cloud with Kubernetes

We will use the Google Cloud Platform for this demonstration. Create your Google Cloud account, go to the dashboard and click on Compute Engine.

Enable billing and create a new project named XYZ. Open the terminal by clicking the Google Cloud Shell button at the top.

Set the compute zone to your nearest zone by running the command below.

gcloud config set compute/zone us-central1-a


Now we need to create a cluster on which we will deploy the SUSI app. We do that with this command.

gcloud container clusters create hello-cluster --num-nodes=3


We need to pull the Docker image from Docker Hub and push it to our project's container registry. We do that with these commands.

sudo docker pull jyothiraditya/susi_server
sudo docker tag <image-id> gcr.io/<project-id>/<name>
gcloud docker -- push gcr.io/<project-id>/<name>

We run the Docker image on the cluster with the following commands.

kubectl run susi-server --image=gcr.io/<project-id>/<name> --port=80
kubectl get pods

We expose the deployment to external traffic with the help of a load balancer, using the following command.

kubectl expose deployment susi-server --type="LoadBalancer" --port=80


To get the external IP address at which SUSI can be accessed from the browser, enter:

kubectl get service susi-server


You can now view the app by going to “EXTERNAL-IP:80”.

References: Docker build, Kubernetes deployment, Google cloud deployment

 


Using LokLak to Scrape Profiles from Quora, GitHub, Weibo and Instagram

Most of us are quite curious about other people's social presence, and taking this as a key point, LokLak provides many profile scrapers. The profile scrapers now available in LokLak help us learn about the posts, followers and more of a given profile. A few of the profile scrapers available in LokLak are the Quora, GitHub, Weibo and Instagram profile scrapers.

How do the scrapers work?

In LokLak we use Java to build the JSON objects of the profiles scraped from the websites mentioned above. Here is a simple explanation of how one of the scrapers works. In this post I am going to give you a gist of how the GitHub profile scraper API works.

With the GitHub profile scraper one can look up a profile without logging in and retrieve details such as that profile's followers, repositories, gists and much more.

The simple queries which can be used are:

To scrape individual profiles:

https://loklak.org/api/githubprofilescraper.json?profile=kavithaenair

To scrape organization profiles:

https://loklak.org/api/githubprofilescraper.json?profile=fossasia

Jsoup is a Java library that gives developers one of the easiest ways to scrape the web. It is used for extracting and manipulating data using DOM traversal and CSS-like selector methods. Here, Jsoup fetches the HTML document for us, and with the help of the tags used in the extracted HTML we pick out the relevant data that is needed.
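To illustrate the general pattern, here is a minimal, self-contained Jsoup sketch (the URL and selector are only examples; the actual selectors used by the scraper appear below):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class JsoupSketch {
    public static void main(String[] args) throws Exception {
        // Fetch and parse the profile page
        Document html = Jsoup.connect("https://github.com/fossasia").get();
        // Select elements whose attribute value contains a given string and read their text
        String name = html.getElementsByAttributeValueContaining("itemprop", "name").text();
        System.out.println(name);
    }
}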

How do we get the matching elements?

Here we use special methods like getElementsByAttributeValueContaining() of the org.jsoup.nodes.Element class to get the data. For instance, to get the email from the extracted data the code is written as:

String email = html.getElementsByAttributeValueContaining("itemprop", "email").text();
if (!email.contains("@"))
    email = "";
githubProfile.put("email", email);

Code:

Here is the Java code which fetches and extracts the data.

Fetching the HTML document:

html = Jsoup.connect("https://github.com/" + profile).get();

Extracting the data for an individual user:

/* If individual */
if (html.getElementsByAttributeValueContaining("class", "user-profile-nav").size() != 0) {
    scrapeGithubUser(githubProfile, terms, profile, html);
}
if (terms.contains("gists") || terms.contains("all")) {
    String gistsUrl = GITHUB_API_BASE + profile + GISTS_ENDPOINT;
    JSONArray gists = getDataFromApi(gistsUrl);
    githubProfile.put("gists", gists);
}
if (terms.contains("subscriptions") || terms.contains("all")) {
    String subscriptionsUrl = GITHUB_API_BASE + profile + SUBSCRIPTIONS_ENDPOINT;
    JSONArray subscriptions = getDataFromApi(subscriptionsUrl);
    githubProfile.put("subscriptions", subscriptions);
}
if (terms.contains("repos") || terms.contains("all")) {
    String reposUrl = GITHUB_API_BASE + profile + REPOS_ENDPOINT;
    JSONArray repos = getDataFromApi(reposUrl);
    githubProfile.put("repos", repos);
}
if (terms.contains("events") || terms.contains("all")) {
    String eventsUrl = GITHUB_API_BASE + profile + EVENTS_ENDPOINT;
    JSONArray events = getDataFromApi(eventsUrl);
    githubProfile.put("events", events);
}
if (terms.contains("received_events") || terms.contains("all")) {
    String receivedEventsUrl = GITHUB_API_BASE + profile + RECEIVED_EVENTS_ENDPOINT;
    JSONArray receivedEvents = getDataFromApi(receivedEventsUrl);
    githubProfile.put("received_events", receivedEvents);
}

Extracting the data for an organization:

/*If organization*/
if (html.getElementsByAttributeValue("class", "orgnav").size() != 0) {
    scrapeGithubOrg(profile, githubProfile, html);
}

And this is the sample output:

For query: https://loklak.org/api/githubprofilescraper.json?profile=kavithaenair
 
{
  "data": [{
    "joining_date": "2016-04-12",
    "gists_url": "https://api.github.com/users/kavithaenair/gists",
    "repos_url": "https://api.github.com/users/kavithaenair/repos",
    "user_name": "kavithaenair",
    "bio": "GSoC'17 @loklak @fossasia ; Developer @fossasia ; Intern @amazon",
    "subscriptions_url": "https://api.github.com/users/kavithaenair/subscriptions",
    "received_events_url": "https://api.github.com/users/kavithaenair/received_events",
    "full_name": "Kavitha E Nair",
    "avatar_url": "https://avatars0.githubusercontent.com/u/18421291",
    "user_id": "18421291",
    "events_url": "https://api.github.com/users/kavithaenair/events",
    "organizations": [
      {
        "img_link": "https://avatars1.githubusercontent.com/u/6295529?v=3&s=70",
        "link": "https://github.com/fossasia",
        "label": "fossasia",
        "img_Alt": "@fossasia"
      },
      {
        "img_link": "https://avatars0.githubusercontent.com/u/10620750?v=3&s=70",
        "link": "https://github.com/coala",
        "label": "coala",
        "img_Alt": "@coala"
      },
      {
        "img_link": "https://avatars1.githubusercontent.com/u/11370631?v=3&s=70",
        "link": "https://github.com/loklak",
        "label": "loklak",
        "img_Alt": "@loklak"
      },
      {
        "img_link": "https://avatars2.githubusercontent.com/u/24720168?v=3&s=70",
        "link": "https://github.com/bvrit-wise-django-team",
        "label": "bvrit-wise-django-team",
        "img_Alt": "@bvrit-wise-django-team"
      }
    ],
    "home_location": "\n    Hyderabad, India\n",
    "works_for": "",
    "special_link": "https://www.overleaf.com/read/ftnvcphnwzhp",
    "email": "",
    "atom_feed_link": "https://github.com/kavithaenair.atom"
  }],
  "metadata": {"count": 1},
  "session": {"identity": {
    "type": "host",
    "name": "162.158.46.18",
    "anonymous": true
  }}
}

Control flow of SUSI AI on Android and database management using Realm

While developing a chat-based Android application, one of the most important things is keeping track of the user's messages. Since the user might want to access them in the absence of Internet connectivity as well, storing them locally is also important.

In SUSI we are using Realm to keep things organized in a systematic manner, constructing a model (with appropriate attributes) for every new data type which the application needs. Right now we have three main models, namely ChatMessage, WebLink and WebSearchModel. These three Java classes define the structure of each possible message. ChatMessage evaluates and classifies an incoming response from the server as an image, map, pie chart, web search URL or another valid type of response. The WebSearchModel and WebLink models are there to manage those results which contain links to various web searches.
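For illustration, a trimmed-down version of such a model might look roughly like this (the getters and setters follow the names used later in this post; any other field is an assumption):

import io.realm.RealmObject;
import io.realm.annotations.PrimaryKey;

public class ChatMessage extends RealmObject {

    @PrimaryKey
    private long id;             // unique id of the message
    private String content;      // message text shown in the chat
    private boolean isDelivered; // true once the message has been queried on the server
    private boolean isImportant; // set when the user stars the message

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }

    public String getContent() { return content; }
    public void setContent(String content) { this.content = content; }

    public boolean getIsDelivered() { return isDelivered; }
    public void setIsDelivered(boolean isDelivered) { this.isDelivered = isDelivered; }

    public boolean getIsImportant() { return isImportant; }
    public void setIsImportant(boolean isImportant) { this.isImportant = isImportant; }
}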

Various result-based lists are maintained for the smooth flow of the application. Messages sent in the absence of Internet are stored in a list – nonDelivered. All messages have an attribute isDelivered which is set to true if and only if they have been queried; otherwise the attribute is set to false, which puts the message in the nonDelivered list. Once the phone is connected back to the Internet and the app is active in the foreground, these messages are sent to the server and queried, and we get the responses back in the app's database, where the attributes are assigned accordingly.
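A sketch of how such a list of undelivered messages could be fetched from Realm (assuming the isDelivered boolean field from the model sketch above; the exact query in the app may differ):

RealmResults<ChatMessage> nonDelivered = realm.where(ChatMessage.class)
        .equalTo("isDelivered", false)
        .findAll()
        .sort("id");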

 

I will explain one piece of functionality below to give a clearer view of our coding practices and workflow.

When a user long-taps a message, a few options are listed (these actions are defined in recycleradapters -> ChatFeedRecyclerAdapter.java) from which you may select one. In the code, this triggers the method onActionsItemClicked(). In this overridden method we handle what happens when the user clicks one of the options in the item menu. In this post I'll cover only the star/important message option.

case R.id.menu_item_important:
    nSelected = getSelectedItems().size();
    if (nSelected > 0) {
        for (int i = nSelected - 1; i >= 0; i--) {
            markImportant(getSelectedItems().get(i));
        }
        if (nSelected == 1) {
            Toast.makeText(context, nSelected + " message marked important",
                    Toast.LENGTH_SHORT).show();
        } else {
            Toast.makeText(context, nSelected + " messages marked important",
                    Toast.LENGTH_SHORT).show();
        }
        important = realm.where(ChatMessage.class)
                .equalTo("isImportant", true)
                .findAll().sort("id");
        for (int i = 0; i < important.size(); ++i)
            Log.i("message ", "" + important.get(i).getContent());
        Log.i("total ", "" + important.size());
        actionMode.finish();
    }
    return true;

We have the count of messages that were selected. Each selected message is looped through and the "isImportant" attribute of each message object is modified accordingly. To modify this field, we call the method markImportant() and pass the position of the message which has to be updated.

public void markImportant(final int position) {
    realm.executeTransaction(new Realm.Transaction() {
        @Override
        public void execute(Realm realm) {
            ChatMessage chatMessage = getItem(position);
            chatMessage.setIsImportant(true);
            realm.copyToRealmOrUpdate(chatMessage);
        }
    });
}

This method takes the message at the position it has received, updates its "isImportant" attribute and finally updates the message instance in the database via copyToRealmOrUpdate().

Given below is the code for the ImportantMessages activity, which should help you understand how such lists are used to query the database.

public class ImportantMessages extends AppCompatActivity {

    private Realm realm;
    private RecyclerView rvChatImportant;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        realm = Realm.getDefaultInstance();
        rvChatImportant = (RecyclerView) findViewById(R.id.rv_chat_important);
        actionBar.setDisplayHomeAsUpEnabled(true);
        setChatBackground();
        setupAdapter();

        // call to other methods
    }

    private void setupAdapter() {
        rvChatImportant = (RecyclerView) findViewById(R.id.rv_chat_important);
        LinearLayoutManager linearLayoutManager = new LinearLayoutManager(this);
        linearLayoutManager.setStackFromEnd(true);
        rvChatImportant.setLayoutManager(linearLayoutManager);
        rvChatImportant.setHasFixedSize(false);
        RealmResults<ChatMessage> importantMessages = realm.where(ChatMessage.class)
                .equalTo("isImportant", true)
                .findAll().sort("id");
        TextView tv_msg = (TextView) findViewById(R.id.tv_empty_list);

        if (importantMessages.size() != 0)
            tv_msg.setVisibility(View.INVISIBLE);
        else
            tv_msg.setVisibility(View.VISIBLE);

        ChatFeedRecyclerAdapter recyclerAdapter = new ChatFeedRecyclerAdapter(
                Glide.with(this), this, importantMessages, true);
        rvChatImportant.setAdapter(recyclerAdapter);
        rvChatImportant.addOnLayoutChangeListener(new View.OnLayoutChangeListener() {
            @Override
            public void onLayoutChange(View view, int left, int top, int right, int bottom,
                                       int oldLeft, int oldTop, int oldRight, int oldBottom) {
                if (bottom < oldBottom) {
                    rvChatImportant.postDelayed(new Runnable() {
                        @Override
                        public void run() {
                            int scrollTo = rvChatImportant.getAdapter().getItemCount() - 1;
                            scrollTo = scrollTo >= 0 ? scrollTo : 0;
                            rvChatImportant.scrollToPosition(scrollTo);
                        }
                    }, 10);
                }
            }
        });
    }
}

Generating the Google IO Open Event Android App

The main aim of the FOSSASIA Open Event Android App is to give an event organiser the ability to generate the app with a single click by providing the necessary JSON and binary files. Recently the Android application was tested on the Google IO 2017 event. The sample files can be seen here. The data for the event was taken from this site (https://events.google.com/io/). What is remarkable about this application is the simplicity with which we can make an event-specific application by giving it the vital assets required (JSON and binary files).

What was needed for generating the Google IO 2017 app?

For generating the app we had to provide the following files:

  1. images folder containing the necessary images of speaker, the logo of the event etc.
  2. event json file which has all the event specific information like the name of the event, the schedule of the event, the description of the event etc.
  3. forms json file having session and speaker form data.
  4. meta json file having the root url of the event.
  5. microlocations json file having all the locations where the events are going to happen.
  6. session_types json file consisting of data on all the types of session which will occur in the event.
  7. sessions json file consisting session specific data like the title of the session, start time and end time of session, which track that session belongs to etc.
  8. speakers json file consisting of speaker specific data like the name of the speaker, image of the speaker, social links of the speaker etc.
  9. sponsors json file consisting of the list of all sponsors of the event.
  10. tracks json file consisting of tracks specific data.
  11. config.json file which consists of the api url, app name.

After providing the required information we go to this site (http://droidgen.eventyay.com/), and the first thing this site asks for is an email id. Then we upload the required files mentioned above as a zip folder, and we get an APK which we can test out on our Android phone.

How did the Google IO sample app look like?

The files for the sample event can be found over here:

Folder Link:

https://github.com/fossasia/open-event/tree/master/sample/GoogleIO17

Zip File Link:

https://github.com/fossasia/open-event/blob/master/sample/GoogleIO17.zip

What were the issues found in the sample app?

There were certain issues which we observed on testing the app with the Google IO event:

  1. The theme of the app remains the same no matter which event it is. It is important to give the event organiser the ability to customise the theme of the app.
  2. The support for local speaker images needs to be provided as we want to give the event organiser an option to include the images locally or not.
  3. The background of the logo needs to be changed because in certain logos, the dark background causes visibility problems.
  4. Certain information in the app like the event information is hard-coded and needs to be taken from the assets folder instead of strings.xml.



Linking Codecov to your Angular2 Project

As a developer, the importance of testing code is well known. Testing source code helps to prevent bugs and syntax errors by cross-checking it with an expected output.
As stated on their official website: “Code coverage provides a visual measurement of what source code is being executed by a test suite”. This information indicates to the software developer where they should write new tests in order to achieve higher coverage, and consequently a lower chance of the code being buggy. Hence many public repositories on GitHub use Codecov, a tool to measure the coverage of their source code.

In this blog, we shall see how to link Codecov with a public repository on GitHub when the code has been written in Angular2 (TypeScript). We shall assume that the repository uses Travis CI for continuous integration.

STEP 1:
Go to https://codecov.io/gh/ and login to your Github account.

It will now give you the option to choose a repository to add for coverage. Select your repository.

STEP 2:
Navigate to the Settings tab; you should see something like this:

Follow the above-mentioned instructions.

STEP 3:
We now come to one of the most important parts of Codecov integration: writing the files in our repo that enable it.
We will need three main files:
  • .travis.yml – ensures continuous integration services on your Git-hosted project.
  • codecov.yml – personalises your settings and overrides the default settings in Codecov. ”The Codecov Yaml file is the single point of configuration, providing the developers with a transparent and version controlled file to adjust all Codecov settings.” as mentioned on the official website.
  • package.json – informs npm of the new dependencies related to Codecov, in addition to providing all the metadata to the user.

In .travis.yml, add the following lines:
after_success:

 - bash <(curl -s https://codecov.io/bash)

In codecov.yml, add the following:

Source: https://github.com/codecov/support/wiki/Codecov-Yaml#
 codecov:
 url: "string" # [enterprise] your self-hosted Codecov endpoint
 # ex. https://codecov.company.com
 slug: "owner/repo" # [enterprise] the project's name when using the global upload tokens
 branch: master # the branch to show by default, inherited from your git repository settings
 # ex. master, stable or release
 # default: the default branch in git/mercurial
 bot: username # the username that will consume any oauth requests
 # must have previously logged into Codecov
 ci: # [advanced] a list of custom CI domains
 - "ci.custom.com"
 notify: # [advanced] usage only
 after_n_builds: 5 # how many build to wait for before submitting notifications
 # therefore skipping status checks
 countdown: 50 # number of seconds to wait before checking CI status
 delay: 100 # number of seconds between each CI status check

coverage:
 precision: 2 # how many decimal places to display in the UI: 0 <= value <= 4
 round: down # how coverage is rounded: down/up/nearest
 range: 50...100 # custom range of coverage colors from red -> yellow -> green

notify:
 irc:
 default: # -> see "sections" below
 server: "chat.freenode.net" #*S the domain of the irc server
 branches: null # -> see "branch patterns" below
 threshold: null # -> see "threshold" below
 message: "template string" # [advanced] -> see "customized message" below

gitter:
 default: # -> see "sections" below
 url: "https://webhooks.gitter.im/..." #*S unique Gitter notifications url
 branches: null # -> see "branch patterns" below
 threshold: null # -> see "threshold" below
 message: "template string" # [advanced] -> see "customized message" below

status:
 project: # measuring the overall project coverage
 default: # context, you can create multiple ones with custom titles
 enabled: yes # must be yes|true to enable this status
 target: auto # specify the target coverage for each commit status
 # option: "auto" (must increase from parent commit or pull request base)
 # option: "X%" a static target percentage to hit
 branches: # -> see "branch patterns" below
 threshold: null # allowed to drop X% and still result in a "success" commit status
 if_no_uploads: error # will post commit status of "error" if no coverage reports we uploaded
 # options: success, error, failure
 if_not_found: success # if parent is not found report status as success, error, or failure
 if_ci_failed: error # if ci fails report status as success, error, or failure


patch: # pull requests only: this commit status will measure the
 # entire pull requests Coverage Diff. Checking if the lines
 # adjusted are covered at least X%.
 default:
 enabled: yes # must be yes|true to enable this status
 target: 80% # specify the target "X%" coverage to hit
 branches: null # -> see "branch patterns" below
 threshold: null # allowed to drop X% and still result in a "success" commit status
 if_no_uploads: error # will post commit status of "error" if no coverage reports we uploaded
 # options: success, error, failure
 if_not_found: success
 if_ci_failed: error

changes: # if there are any unexpected changes in coverage
 default:
 enabled: yes # must be yes|true to enable this status
 branches: null # -> see "branch patterns" below
 if_no_uploads: error
 if_not_found: success
 if_ci_failed: error

ignore: # files and folders that will be removed during processing
 - "tests/*"
 - "demo/*.rb"

fixes: # [advanced] in rare cases the report tree is invalid, specify adjustments here
 - "old_path::new_path"

# comment: false # to disable comments
 comment:
 layout: "header, diff, changes, sunburst, suggestions, tree"
 branches: null # -> see "branch patterns" below
 behavior: default # option: "default" posts once then update, posts new if delete
 # option: "once" post once then updates, if deleted do not post new
 # option: "new" delete old, post new
 # option: "spammy" post new

Your package.json should look like this:

{
 "name": "example-typescript",
 "version": "1.0.0",
 "description": "Codecov Example Typescript",
 "main": "index.js",
 "devDependencies": {
 "chai": "^3.5.0",
 "codecov": "^1.0.1",
 "mocha": "^2.5.3",
 "nyc": "^6.4.4",
 "tsd": "^0.6.5",
 "typescript": "^1.8.10"
 },
 "scripts": {
 "postinstall": "tsd install",
 "pretest": "tsc test/*.ts --module commonjs --sourcemap",
 "test": "nyc mocha",
 "posttest": "nyc report --reporter=json && codecov -f coverage/*.json"
 },
 "repository": {
 "type": "git",
 "url": "git+https://github.com/Myname/example-typescript.git"
 },
 /*Optional*/
 "author": "Myname",
 "license": "Lic.name",
 "bugs": {
 "url": "https://github.com/example-typescript-path"
 },
 "homepage": "https://github.com/Myname/example-typescript#readme"
 }

Most of the code in package.json is metadata.
Two major parts of the code above are the devDependencies and the scripts.
In devDependencies, make sure to include the latest versions of all the packages your repository is using.
In scripts:

  • Postinstall – indicates the actions to be performed, once installation is complete.
  • Pretest – is for just before running ng test.
  • Test – indicates what is used while testing.
  • Posttest – is what is run just after testing is complete.

Check this repository for the sample files to generate the reports to be uploaded for Codecov: https://github.com/codecov/example-typescript

Check https://docs.codecov.io/v4.3.6/docs/codecov-yaml for detailed step by step instructions on writing codecov.yaml and https://docs.codecov.io/v4.3.6/docs for any general information


Advanced customization of the Yaydoc Build Process

Although Yaydoc exposes many environment variables which can be used to configure various aspects of the build process, there may be cases where a user needs much finer control over the build. Yaydoc uses Sphinx under the hood, which uses a file named conf.py to allow users to customize the build. As part of the build process, Yaydoc generates a conf.py file from a custom-made Jinja2 template. With this week's update, a user can now extend the generated conf.py by providing their own conf.py, whose contents will be appended to the generated conf.py.

Why append you may ask. Why not just overwrite? This is because the generated conf.py has a lot of boilerplate code which when overwritten will need to be rewritten by the user. That is why the contents are appended so that the user will only need to specify any extra configuration options they may wish to add or override. This approach has the following advantages:

  • Ability to override or add any configuration option during build.
  • Since the conf.py file is execfile'd by sphinx during the build, the user has the ability to execute arbitrary code to customize any part of the build process.

The following block of code implements this feature.

if [ -f $DOCPATH/conf.py ]; then
  echo >> $BUILD_DIR/conf.py
  cat $DOCPATH/conf.py >> $BUILD_DIR/conf.py
  rsync -a $DOCPATH/. $BUILD_DIR/ --exclude=conf.py
else
  cp -a $DOCPATH/. $BUILD_DIR/
fi

Here we check whether the user has provided a conf.py; if so, we append it to the generated conf.py. To append we use the >> shell redirection feature. It redirects stdout to a file similarly to >, but instead of overwriting the file it appends to it.
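As an illustration, a user-supplied conf.py placed alongside the docs could look like this (the values are hypothetical; anything valid in a Sphinx conf.py works here, since it is simply appended to the generated file):

# Enable an extra Sphinx extension (assumes 'extensions' is defined by the generated conf.py)
extensions.append('sphinx.ext.todo')

# Override a value set by the generated configuration
html_show_sourcelink = False

# Arbitrary code also runs at build time
import os
version = os.environ.get('DOC_VERSION', '1.0')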

This brings us to parity with Sphinx as far as customization goes. We may expose some more configuration variables for easier setup in the future, but you can now modify any aspect of the build process even if it is not exposed via a variable. This should be enough for most use cases. More changes are on the way. Stay tuned for more updates.


Adding support for Markdown in Yaydoc

Yaydoc, being based on Sphinx, natively supports reStructuredText. From the official docs:

reStructuredText is an easy-to-read, what-you-see-is-what-you-get plaintext markup syntax and parser system. It is useful for quickly creating simple web pages, and for standalone documents. reStructuredText is designed for extensibility for specific application domains.

Although it is superior to Markdown in terms of features, Markdown is still the most heavily used markup language out there. This week we added support for Markdown to Yaydoc. Now you can use Markdown to document your project and Yaydoc will create a site with no changes required on your end. To achieve this, we used recommonmark, which enables Sphinx to parse CommonMark, a strongly defined, highly compatible specification of Markdown. It solved most of the problem with 3 lines of code in our customized conf.py.

from recommonmark.parser import CommonMarkParser

source_parsers = {
    '.md': CommonMarkParser,
}

source_suffix = ['.rst', '.md']

With this addition, sphinx can now use recommonmark to convert markdown to html. But not everything has been solved. Here is an excerpt from a previous blogpost which explains a problem yet to be solved.

Now sphinx requires an index.rst file within the docs directory which it uses to generate the first page of the site. A very obvious way to fill it, which helps us avoid unnecessary duplication, is to use the include directive of reStructuredText to include the README file from the root of the repository. But the include directive can only properly include a reStructuredText document, not a Markdown document. Given a Markdown document, it tries to parse the Markdown as reStructuredText, which leads to errors.

To solve this problem, a custom directive mdinclude was created. Directives are the primary extension mechanism of reStructuredText. Most of its implementation is a copy of the built-in Include directive from the docutils package. Before including the content in the doctree, mdinclude converts the document from Markdown to reStructuredText using pypandoc. The implementation is similar to the one also discussed in a previous blogpost.
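The md2rst helper used in the directive below is not part of this excerpt; a minimal sketch of it, assuming a recent pypandoc (older versions expose pypandoc.convert instead) and an installed pandoc, might be:

import pypandoc

def md2rst(text):
    # Convert a Markdown string to reStructuredText via pandoc
    return pypandoc.convert_text(text, 'rst', format='md')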

class MdInclude(rst.Directive):

    required_arguments = 1
    optional_arguments = 0

    def run(self):
        if not self.state.document.settings.file_insertion_enabled:
            raise self.warning('"%s" directive disabled.' % self.name)
        source = self.state_machine.input_lines.source(
            self.lineno - self.state_machine.input_offset - 1)
        source_dir = os.path.dirname(os.path.abspath(source))
        path = rst.directives.path(self.arguments[0])
        path = os.path.normpath(os.path.join(source_dir, path))
        path = utils.relative_path(None, path)
        path = nodes.reprunicode(path)

        encoding = self.options.get(
            'encoding', self.state.document.settings.input_encoding)
        e_handler = self.state.document.settings.input_encoding_error_handler
        tab_width = self.options.get(
            'tab-width', self.state.document.settings.tab_width)

        try:
            self.state.document.settings.record_dependencies.add(path)
            include_file = io.FileInput(source_path=path,
                                        encoding=encoding,
                                        error_handler=e_handler)
        except UnicodeEncodeError as error:
            raise self.severe('Problems with "%s" directive path:\n'
                              'Cannot encode input file path "%s" '
                              '(wrong locale?).' %
                              (self.name, SafeString(path)))
        except IOError as error:
            raise self.severe('Problems with "%s" directive path:\n%s.' %
                              (self.name, ErrorString(error)))

        try:
            rawtext = include_file.read()
        except UnicodeError as error:
            raise self.severe('Problem with "%s" directive:\n%s' %
                              (self.name, ErrorString(error)))

        output = md2rst(rawtext)
        include_lines = statemachine.string2lines(output,
                                                  tab_width,
                                                  convert_whitespace=True)
        self.state_machine.insert_input(include_lines, path)
        return []

With this, Yaydoc can now be used on projects that exclusively use markdown. There are some more hurdles which we need to cross in the following weeks. Stay tuned for more updates.


Automatic Imports of Events to Open Event from online event sites with Query Server and Event Collect

One goal for the next version of the Open Event project is to allow an automatic import of events from various event listing sites. We will implement this using Open Event Import APIs and two additional modules: Query Server and Event Collect. The idea is to run the modules as micro-services or as stand-alone solutions.

Query Server
The query server is, as the name suggests, a query processor. As we are moving towards an API-centric approach for the server, query-server also has API endpoints (v1). Using this API you can get the data from the server in the mentioned format. The API itself is quite intuitive.

API to get data from query-server

GET /api/v1/search/<search-engine>/query=query&format=format

Sample Response Header

 Cache-Control: no-cache
 Connection: keep-alive
 Content-Length: 1395
 Content-Type: application/xml; charset=utf-8
 Date: Wed, 24 May 2017 08:33:42 GMT
 Server: Werkzeug/0.12.1 Python/2.7.13
 Via: 1.1 vegur

The server is built in Flask. The GitHub repository of the server contains a simple Bootstrap front-end, which is used as a testing ground for results. The query string is passed to the search engine result scraper scraper.py, which is based on the scraper at searss. This scraper takes a search engine (presently Google, Bing, DuckDuckGo or Yahoo) as additional input and searches on that search engine. The output from the scraper, which can be in XML or in JSON depending on the API parameters, is returned, while the search query is stored in a MongoDB database with the query string indexed. This is done keeping in mind the capabilities to be added later in order to use Kibana analysis tools.

The frontend prettifies results with the help of PrismJS. The query-server will be used for initial listing of events from different search engines. This will be accessed through the following API.

The query server app can be accessed on heroku.

➢ api/list: to provide an initial list of events (titles and links) to be displayed in Open Event search results.

When an event is searched for on Open Event, the query is passed on to query-server, where a search is made by calling scraper.py with some details appended for better event hunting. Recent developments with Google include their event search feature. In the Google search app, event searches take over when Google detects that a user is looking for an event.

The feed from the scraper is parsed for events inside query-server to generate a list containing event titles and links. Each event in this list is then searched for in the database to check whether it already exists. We will be using Elasticsearch to achieve fuzzy searching for events in the Open Event database, as Elasticsearch is planned to be used for the API.

One example of what we wish to achieve by implementing this type of search in the database follows. The user may search for

-Google Cloud Event Delhi
-Google Event, Delhi
-Google Cloud, Delhi
-google cloud delhi
-Google Cloud Onboard Delhi
-Google Delhi Cloud event

All these searches should match “Google Cloud Onboard Event, Delhi” with good accuracy. After duplicates and events which already exist in the database have been removed from this list, each event is rendered on the search frontend of Open Event as a separate event. The user can click on any of these events, which will make a call to Event Collect.

Event Collect

The event collect project is developed as a separate module which has two parts

● Site-specific scrapers
In its present state, Event Collect has scrapers for eventbrite and ticket-leap which, given a query, scrape the eventbrite (and ticket-leap, respectively) search results and download JSON files of each event using Loklak's API.
The scrapers can be developed in any form, and any number of scrapers/scraping tools can be added as long as they are in alignment with the Open Event Import API's data format. Writing tests for these against the current API formats will take care of this. This part will be covered by using a JSON validator to check against a pre-generated schema (see the sketch after this list).

● REST APIs
The scrapers are exposed through a set of APIs, which will include, but are not limited to:
➢ api/fetch-event: to scrape any event given the link and compose the data in a predefined JSON format which will be generated based on the Open Event Import API. When this function is called on an event link, scrapers are invoked which collect event data such as event, meta, forms etc. This data will be validated against the generated JSON schema. The scraped JSON and directory structure for media files:
➢ api/export: to export all the JSON data containing event information into the Open Event Server. As and when the scraping is complete, the data will be added into Open Event's database as a new event.
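A minimal sketch of the schema check mentioned above, using Python's jsonschema package (the file names here are placeholders):

import json

from jsonschema import ValidationError, validate

# Load the pre-generated schema and a scraped event document (placeholder paths)
with open('schemas/event.schema.json') as f:
    schema = json.load(f)
with open('scraped/event.json') as f:
    event = json.load(f)

try:
    validate(instance=event, schema=schema)
    print('Event JSON matches the Open Event Import format')
except ValidationError as err:
    print('Validation failed:', err.message)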

How the Import works

The following graphic shows how the import works.




Let’s dive into the workflow. As the diagram illustrates, the ‘search’ functionality makes a call to the api/list endpoint provided by query-server, which returns the events’ ‘Title’ and ‘Event Link’ from the parsed XML/JSON feed. This list is displayed as Open Event’s search results. Once the results have been displayed, the user can click on any of the events. When the user clicks on an event, the event is searched for in Open Event’s database. Two things happen now:

  • The event page loads if the event is found.
  • If the event does not already exist in the database, clicking on any event will

➢ Insert this event’s title and link in the database and get the event_id

➢ Make a call to api/fetch-event in event-collect which then invokes a site-specific scraper to fetch data about the event the user has chosen

➢ When the data is scraped, it is imported into the Open Event database using the previously generated event_id. The page will be loaded using jQuery AJAX as and when the scraping is done. When the imports are done, the search page refreshes with the new results. The Open Event Orga Server exposes a well-documented REST API that can be used by external services to access the data.


Sorting language-translation in Open Event Server project using Jinja 2 dictsort.

While working on the Open Event Server project, an issue about arranging the language-translation listing in alphabetical order came up. To solve this issue of language listing arrangement, i.e. #2817, I found the 'do_dictsort' function in Jinja2 for sorting dictionaries. It is defined in jinja2.filters. Python dicts are unsorted, and in our web application we may at times want to order them by either their key or their value, so this function comes in handy.

This is what the function looks like:

do_dictsort(value, case_sensitive=False, by='key')

We can use it in three ways:

{% for record in my_dictionary|dictsort %}
    case insensitive and sort the dict by key

{% for record in my_dictionary|dictsort(true) %}
    case sensitive and sort the dict by key

{% for record in my_dictionary|dictsort(false, 'value') %}
    sort the dict by value, normally sorted and case insensitive
  1. The first way is easily understood: the dict is sorted by key, not taking case into consideration. It is just the same as writing dictsort(false).
  2. The second way is basically the first, but case sensitive. dictsort(true) here tells us that case is sensitive.
  3. The third way is dictsort(false, 'value'). The first parameter defines that sorting is case insensitive, while the second parameter defines that the dict is sorted by 'value'.
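To see the filter in action outside the app, here is a minimal standalone sketch (the sample data is made up for illustration):

from jinja2 import Template

languages = {'fr': 'French', 'en': 'English', 'de': 'German', 'hi': 'Hindi'}

# dictsort(false, 'value') yields (key, value) tuples sorted by value, case insensitive
template = Template(
    "{% for code in languages|dictsort(false, 'value') %}"
    "{{ code[0] }} -> {{ code[1] }}\n"
    "{% endfor %}"
)
print(template.render(languages=languages))
# Prints: en -> English, fr -> French, de -> German, hi -> Hindi (alphabetical by name)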

The issue was to sort the translation selector for the page in alphabetical order. The languages were stored in a dictionary whose order needed to be changed, and I found this function very easy and useful for that.


This is how the function was used in the code for the sort:

<ul class="dropdown-menu lang-list">
  {% for code in all_languages|dictsort(false, 'value') %}
    <li><a href="#" style="color: #969191" class="translate" id="{{ code[0] }}">{{ all_languages[code[0]] }}</a></li>
  {% endfor %}
</ul>


Here:
{{ all_languages }} is the dictionary which contains the languages, like French, English, etc., keyed by their global language codes. code (the loop variable over all_languages|dictsort) is a tuple of ('global_language_code', 'language'); an example would be ('fr', 'French'), so code[0] gives me the language code.

Finally, the result:

This is one of the simple ways to sort your dictionaries.
