Hyperlinking Support for SUSI Webchat

SUSI responses can contain links or email addresses. Forcing the user to manually copy a link to visit it, or an email id to write to it, is inconvenient and makes for a poor user experience.

I used a module called ‘react-linkify’ to address this issue.
React-linkify is a React component that parses links (URLs, email addresses, etc.) in text and turns them into clickable links.

Usage:

<Linkify>{text to linkify}</Linkify>

Any link that appears inside the Linkify component is detected and made clickable. The component uses regular expressions and pattern matching to find URLs and email ids; clicking a URL opens the link in a new window, and clicking an email id opens the default mail client via a “mailto:” link.

Code:

export const parseAndReplace = (text) => {
  return <Linkify properties={{target: "_blank"}}>{text}</Linkify>;
};
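For illustration, here is a minimal sketch of how such a helper could be used when rendering a message. The component name, prop and structure below are hypothetical, not taken from the SUSI WebChat code:

import React from 'react';
import Linkify from 'react-linkify';

// helper mirroring the parseAndReplace shown above
export const parseAndReplace = (text) => {
  return <Linkify properties={{target: '_blank'}}>{text}</Linkify>;
};

// hypothetical message component: any URL or email id inside `text`
// is rendered as a clickable link
const MessageBubble = ({text}) => (
  <div className="message-bubble">
    {parseAndReplace(text)}
  </div>
);

export default MessageBubble;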

Let’s visit SUSI WebChat and try it out.

Query: search internet

Response: Internet The global system of interconnected computer networks that use the Internet protocol suite to… https://duckduckgo.com/Internet

The link has been parsed from the response text and has been successfully hyperlinked. Clicking the links opens the respective URL in a new window.


Continuous Deployment Implementation in Loklak Search

At the current pace of web technology, quick response times and low downtime are core goals of any project. The most important factor in achieving a continuous deployment scheme is how efficiently contributors and maintainers can test and deploy the code with every PR. We faced this question when we started building Loklak Search.

As Loklak Search is a data-driven, client-side web app, GitHub Pages is the simplest way to set it up. At FOSSASIA, apps are developed by many developers working together on different features. This makes it all the more important to have a unified flow of control and a simple integration with GitHub Pages as the continuous deployment pipeline.

So the broad concept of continuous deployment boils down to three basic requirements:

  1. Automatic unit testing.
  2. Automatic build of the application on a successful merge of a PR, and deployment to the gh-pages branch.
  3. Easy provision of demo links for developers to test and share the features they are working on before the PR is merged.

Automatic Unit Testing

At Loklak Search we use Karma unit tests. @angular/cli does most of the work of running the unit tests, and Travis CI is used as the CI solution. All these things are pretty easy to set up and use.

Travis CI has a particular advantage: the ability to run custom shell scripts at different stages of the build process. We use this capability for our continuous deployment, as sketched below.
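For illustration, the relevant hook in .travis.yml could look something like this. This is a sketch only; the actual stages, Node version and commands in loklak_search’s .travis.yml may differ:

# .travis.yml (sketch): run the unit tests, then the deploy script
language: node_js
node_js:
  - "6"
script:
  - npm test
after_success:
  - bash ./deploy.sh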

Automatic Builds of PRs and Deploy on Merge

This is the main requirement of our CD scheme, and we meet it by setting up a shell script, deploy.sh, in the project repository root.

There are a few critical sections of the deploy script. The script starts with initialisation instructions, which set up the appropriate variables and decrypt the SSH key that Travis uses for pushing to the gh-pages branch (we will set up this key later).

  • Here we also check that the deploy script runs only for builds of the master branch, exiting early from the script if it is not so.
#!/bin/bash

SOURCE_BRANCH="master"
TARGET_BRANCH="gh-pages"

# Pull requests and commits to other branches shouldn't try to deploy.
if [ "$TRAVIS_PULL_REQUEST" != "false" -o "$TRAVIS_BRANCH" != "$SOURCE_BRANCH" ]; then
    echo "Skipping deploy; the request or commit is not on master"
    exit 0
fi

  • We also store some useful repository information and decrypt the deploy key, which was generated manually and encrypted using Travis.
# Save some useful information
REPO=`git config remote.origin.url`
SSH_REPO=${REPO/https:\/\/github.com\//git@github.com:}
SHA=`git rev-parse --verify HEAD`

# Decryption of the deploy_key.enc
ENCRYPTED_KEY_VAR="encrypted_${ENCRYPTION_LABEL}_key"
ENCRYPTED_IV_VAR="encrypted_${ENCRYPTION_LABEL}_iv"
ENCRYPTED_KEY=${!ENCRYPTED_KEY_VAR}
ENCRYPTED_IV=${!ENCRYPTED_IV_VAR}
openssl aes-256-cbc -K $ENCRYPTED_KEY -iv $ENCRYPTED_IV -in deploy_key.enc -out deploy_key -d

chmod 600 deploy_key
eval `ssh-agent -s`
ssh-add deploy_key

  • We clone our repo from GitHub and then check out the target branch, which is gh-pages in our case.
# Cloning the repository to repo/ directory,
# Creating gh-pages branch if it doesn't exists else moving to that branch
git clone $REPO repo
cd repo
git checkout $TARGET_BRANCH || git checkout --orphan $TARGET_BRANCH
cd ..

# Setting up the username and email.
git config user.name "Travis CI"
git config user.email "$COMMIT_AUTHOR_EMAIL"

  • Now we clean up our directory so that a fresh build is done every time; here we protect the files which are static and not generated by the build process: CNAME and 404.html.
# Cleaning up the old repo's gh-pages branch except CNAME file and 404.html
find repo/* -maxdepth 1 ! -name "CNAME" ! -name "404.html" -exec rm -rf {} \; 2> /dev/null
cd repo

git add --all
git commit -m "Travis CI Clean Deploy : ${SHA}"

  • After checking out the master branch, we run npm install to install all our dependencies and then build the project. The files generated by ng build are moved to the gh-pages branch, where we amend the latest commit.
git checkout $SOURCE_BRANCH
# Actual building and setup of current push or PR.
npm install
ng build --prod --aot
git checkout $TARGET_BRANCH
mv dist/* .
# Staging the new build for commit; and then committing the latest build
git add .
git commit --amend --no-edit --allow-empty

  • The final step is to push the build files to the gh-pages branch. As we only want to push the build if the code has actually changed, we add a check for that.
# Deploying only if the build has changed
if [ -z "$(git diff --name-only HEAD HEAD~1)" ]; then
    echo "No changes in the build; exiting"
    exit 0
else
    # There are changes in the build; push the changes to gh-pages
    echo "There are changes in the build; pushing the changes to gh-pages"

    # Actual push to gh-pages branch via Travis
    git push --force $SSH_REPO $TARGET_BRANCH
fi

These roughly 70 lines of code handle all our heavy lifting and automate a large part of our CD. They make sure that no incorrect builds enter the gh-pages branch, and enable a smoother experience for both developers and maintainers.

The important aspect of this script is making sure that Travis is able to push to gh-pages. This requires a proper setup of keys, and it is definitely the trickiest part of the whole setup.

  • The first step is to generate the SSH key. This is done easily using terminal and ssh-keygen.
$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

  • I would recommend not using a passphrase, as Travis would then need it as well, which makes the setup trickier.
  • This generates an RSA public/private key pair.
  • We add the public deploy key to the settings of the repository on GitHub.
  • After setting up the public key on GitHub, we give the private key to Travis so that Travis is able to push to GitHub.
  • To do this we use the Travis client, which encrypts the key properly and stores the resulting key and iv as secure environment variables on Travis; using these values, Travis is able to decrypt the private key.
$ travis encrypt-file deploy_key
encrypting deploy_key for domenic/travis-encrypt-file-example
storing result as deploy_key.enc
storing secure env variables for decryption

Please add the following to your build script (before_install stage in your .travis.yml, for instance):

    openssl aes-256-cbc -K $encrypted_0a6446eb3ae3_key -iv $encrypted_0a6446eb3ae3_iv -in super_secret.txt.enc -out super_secret.txt -d

Pro Tip: You can add it automatically by running with --add.

Make sure to add deploy_key.enc to the git repository.
Make sure not to add deploy_key to the git repository.
Commit all changes to your .travis.yml.

After all these steps are done, our client-side web application will deploy on every push to the master branch.

These steps are required only once in a project’s life cycle. At Loklak Search, we haven’t touched deploy.sh since it was written; it’s a simple script, but it does all the continuous deployment work we want.

Generation of Demo Links and Test Deployments

It is an essential part of continuous agile development that developers are able to share what they have built and maintainers are able to review those features and fixes. This becomes difficult in a web application, as fixes and features are more often than not visual, and attaching screenshots with every PR becomes a hassle. If developers can deploy their changes on their own gh-pages and share demo links with the PR, it is a big win for development at a faster pace.

Now, this step is highly specific to Angular projects, though there are similar approaches for React and other frameworks as well; if not, we can build the page manually and push the changes to the gh-pages branch of our fork.

We use @angular/cli for building the project and then the angular-cli-ghpages npm package to push to the gh-pages branch of the fork. These commands are combined and provided as the npm command npm run deploy, as sketched below. This makes our CD scheme complete.
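For illustration, such a script entry in package.json could look like the following. This is a sketch; the exact script name and flags in loklak_search’s package.json may differ:

"scripts": {
  "deploy": "ng build --prod && angular-cli-ghpages --dir=dist"
}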

Conclusion

Clearly, the continuous deployment scheme has a lot of advantages over other methods, especially for client-side web apps where there are a lot of PRs. It eliminates the deployment hassles in a simple way: no deployment requires manual intervention. Developers can concentrate on coding the application, and maintainers can review PRs by looking at the demo links and merge when the PR is in good shape; the deployment is done entirely by the shell script, without requiring any commands from a developer or a maintainer.

Links

Loklak Search GitHub Repository: https://github.com/fossasia/loklak_search

Loklak Search Application: http://loklak.net/

Loklak Search TravisCI: https://travis-ci.org/fossasia/loklak_search/

Deploy Script: https://github.com/fossasia/loklak_search/blob/master/deploy.sh


Implementing a chatbot using the SUSI.AI API

SUSI AI is an intelligent open-source personal assistant. It is a server application which is able to interact with humans as a personal assistant. The first step in implementing a bot using SUSI AI is to specify the pathway for query and response with the SUSI AI server.

The steps below provide a guide to establishing communication with the SUSI AI server:

    1. Given below is the HTML code that demonstrates how to connect with the SUSI API through an AJAX call. To put this file on a Node.js server, see Step 3. To view the response of this call, follow Steps 4 and 5.

      <!DOCTYPE html>
      <html>
      <body>
      <h1>My Header</h1>
      <p>My paragraph.</p>
      <!-- Script with source here -->
      <!-- Script to be written here -->
      </body>
      </html>
      

      In the above code, add the two scripts given below where the placeholder comments are. In the second script we call the SUSI API with a ‘hello’ query and log the data received from the call to the console.

    2. <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
       <script>
       $(function () {
           $.ajax({
               dataType: 'jsonp',
               type: 'GET',
               url: 'http://api.susi.ai/susi/chat.json?timezoneOffset=-300&q=hello',
               success: function (data) {
                   console.log('success', data);
               }
           });
       });
       </script>
    3. Below is Node.js code to set up a server at localhost for the above created HTML file.

      var http = require('http');
      var fs = require('fs');

      http.createServer(function (req, res) {
        fs.readFile('YOURFILENAME.html', function (err, data) {
          res.writeHead(200, {'Content-Type': 'text/html'});
          res.write(data);
          res.end();
        });
      }).listen(9000);
    4. To run this code, install Node.js and write “node filename.js” in the command line. We will get the response by opening http://localhost:9000/ in the browser.
    5. To inspect the underlying API call, right-click on the page and select Inspect. Go to the Network tab and select the relevant API call from the left section of the Inspect window.

We have successfully got a response from the SUSI API, and we can now use this response to build bots that reply to the user.


Diving into the codebase of the Open Event Front-end Project

This post aims to help any new contributor to get acquainted with the code base of the Open Event Front-end project.

The Open Event Front-end is primarily the front-end for the Open Event API server. The project provides the functionality of signing up and adding, updating and viewing event details, along with many other functions for the organisers, speakers and attendees of events, which can be concerts, conferences, summits or meetups. The project is built on Ember.js, a JavaScript web application framework. Ember uses the Ember Data library for managing data in the application and for communicating with the server API via endpoints. Ember is a batteries-included framework, which means that it creates all the boilerplate code required to set up the project and create a working web application which can be modified according to our needs.


For example, when I created a new project using the command:

$ ember new open-event-frontend

It created a new project open-event-frontend with all the boilerplate code required to set up the project which can be seen below.

create .editorconfig
create .ember-cli
create .eslintrc.js
create .travis.yml
create .watchmanconfig
create README.md
create app/app.js
create app/components/.gitkeep
Installing app
create app/controllers/.gitkeep
create app/helpers/.gitkeep
create app/index.html
create app/models/.gitkeep
create app/resolver.js
create app/router.js
create app/routes/.gitkeep
create app/styles/app.css
create app/templates/application.hbs
create app/templates/components/.gitkeep
create config/environment.js
create config/targets.js
create ember-cli-build.js
create .gitignore
create package.json
create public/crossdomain.xml
create public/robots.txt
create testem.js
create tests/.eslintrc.js
create tests/helpers/destroy-app.js
create tests/helpers/module-for-acceptance.js
create tests/helpers/resolver.js
create tests/helpers/start-app.js
create tests/index.html
create tests/integration/.gitkeep
create tests/test-helper.js
create tests/unit/.gitkeep
create vendor/.gitkeep
NPM: Installed dependencies
Successfully initialized git.

Now if we go inside the project directory, we can see the following files and folders generated by Ember CLI, which is a toolkit to create, develop and build Ember applications. Ember has a runtime resolver which automatically resolves code placed at the conventional locations Ember knows about.

➜ ~ cd open-event-frontend
➜ open-event-frontend git:(master) ls
app  config  ember-cli-build.js  node_modules  package.json  public  README.md  testem.js  tests  vendor

What do these files and folders contain, and what is their role in the Open Event Front-end project?


Fig 1: Directory structure of the Open Event Front-end project

Let’s take a look at the folders and files Ember CLI generates.

App: This is the heart of the project. It is responsible for deciding how our application will look and work. This is where the folders and files for models, components, routes, templates and styles are stored, and where the majority of our application’s code is written.

  • The adapters directory contains the application.js file, which is responsible for authorising all outgoing API requests.
  • Since ours is an event application, it has various screens which display the list of events, sessions, schedules and speakers, and functionality to add, delete and modify event details. Most screens have features in common which can be reused everywhere inside the application; these are kept in the components directory.
  • The controllers directory contains the files responsible for front-end behaviour that is not related to the models of our application, for example actions like sharing the event details: the data of the event is fetched by the API, but when the share functionality has to work is decided by the controller.
  • The models directory maps the format of the data, i.e. in which format the data is fetched from the API (a minimal sketch of a model follows after the figure below).
  • As we interact with the application, it moves through many different states/screens using different URLs. The routes directory decides which template will be rendered and which model will be loaded to display the data in the current template.
  • The services directory contains the services which keep running in the application, like the authentication service to check whether the current user has logged in or not, and the translation service to translate details into languages other than English that are supported by the app.
  • The styles directory contains .scss files for the styling of layouts.
  • The templates directory contains the layouts for the user interface of the application in the form of Handlebars templates, which contain the HTML code that is rendered on the screen when the template is called via routes.
  • The utils directory contains all the utility files, like the format of date and time, demographic details, event types, types of topics etc.


Fig 2: Directory structure of the app folder
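To illustrate the models directory, here is a minimal sketch of what an Ember Data model for an event could look like. The file name and attributes are hypothetical and not copied from the actual project:

// app/models/event.js (sketch)
import DS from 'ember-data';

export default DS.Model.extend({
  name: DS.attr('string'),
  startTime: DS.attr('date'),
  endTime: DS.attr('date'),
  locationName: DS.attr('string')
});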

Config: It contains files where we can configure the settings of our application. The environment.js file contains details like the application name, API endpoints and other details required to configure the application. The deploy.js file contains the configuration details for deployment.


Fig 3: Directory structure of config folder

Mirage: It contains the files required to mock the API server. Since we need to fetch data from the server, whenever we make a request to get or post data, Mirage can respond with mock data. Here, mocking means creating objects that simulate the behaviour of real objects. Mirage also contains the file which helps in seeding fake data for the application: default.js handles the seeding of fake data, but currently we don’t require that, hence it’s empty. The config.js file has the method which sets the API host and namespace of the application.


Fig 4: Directory structure of mirage folder

Node_modules: Ember is built with Node and uses various Node.js modules for functioning, so this directory contains all the packages installed by the npm package manager, which we run at the start while installing and setting up the project on the local device. The package.json file maintains the list of npm dependencies for our app.

Public: This directory contains all the assets, such as images and fonts, related to the application. In the case of our application, any images apart from the ones fetched from the API have to be placed here.


Fig 5: Directory structure of public folder

Translations: Since our application provides support for various languages, this directory contains the files to translate the messages.


Fig 6: Directory structure of translations folder

Tests: This is one of the best features Ember provides. We don’t have to write test boilerplate; it is automatically generated by Ember CLI and can be modified according to our requirements. Ember generates three types of tests: acceptance, integration and unit tests. Other than that, Ember offers testing for helpers, controllers, routes and models. Ember CLI’s test runner, Testem, is configured in testem.js. All three types of tests have been used in our application; a minimal sketch of a unit test follows after the figure below.


Fig 7: Directory structure of test folder
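For illustration, a unit test of the kind Ember CLI generates could look like this minimal sketch (the route name is hypothetical):

// tests/unit/routes/index-test.js (sketch)
import { moduleFor, test } from 'ember-qunit';

moduleFor('route:index', 'Unit | Route | index');

test('it exists', function(assert) {
  let route = this.subject();
  assert.ok(route);
});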

ember-cli-build.js: This file describes how Ember CLI should build our app. In the case of our project, it contains all the metadata, such as app imports and other details, needed to build the app.

Vendor: This directory usually contains the front-end dependencies (such as JavaScript or CSS) which are not managed by Bower (a package manager for components that contain HTML, CSS, JavaScript and fonts). In the case of our project, we have only kept .gitkeep inside it, since we have not included any dependency that Bower doesn’t maintain.

To summarise, this post explains the content and role of the different files and folders in the directory structure of the Open Event Front-end project, which may help a new contributor get started by gaining a basic understanding of the project.


Set spacing in RecyclerView items by custom Item Decorator in Phimpme Android App

We decided to shift our image gallery code from GridView to a GridLayoutManager-based RecyclerView in the Phimpme Android application. RecyclerView has many advantages compared to GridView/ListView:

  • A choice of built-in layout managers: list, grid or staggered grid.
  • Many built-in animations.
  • Item decorators for customizing items.
  • Recycling of items using the View Holder pattern.

Recycler View documentation

Adding the RecyclerView in XML

<android.support.v7.widget.RecyclerView
    android:id="@+id/rv"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

Setting layout manager

mLayoutManager = new GridLayoutManager(this, 3);

recyclerView.setLayoutManager(mLayoutManager);

In Phimpme each item is an ImageView shown in the grid. Set up the grid using the layout manager as above.

The gallery of images is now set, but there is no spacing between the grid items, and padding will not help in this case.

A way to set the offset is to create a custom item decoration class, with a constructor that takes a dimension resource as a parameter.

import android.content.Context;
import android.graphics.Rect;
import android.support.annotation.DimenRes;
import android.support.annotation.NonNull;
import android.support.v7.widget.RecyclerView;
import android.view.View;

public class ItemOffsetDecoration extends RecyclerView.ItemDecoration {

    private int mItemOffset;

    public ItemOffsetDecoration(int itemOffset) {
        mItemOffset = itemOffset;
    }

    public ItemOffsetDecoration(@NonNull Context context, @DimenRes int itemOffsetId) {
        this(context.getResources().getDimensionPixelSize(itemOffsetId));
    }

    @Override
    public void getItemOffsets(Rect outRect, View view, RecyclerView parent,
            RecyclerView.State state) {
        super.getItemOffsets(outRect, view, parent, state);
        // apply the same offset on all four sides of every item
        outRect.set(mItemOffset, mItemOffset, mItemOffset, mItemOffset);
    }
}

Author: gist.github.com/yqritc/ccca77dc42f2364777e1

Usage:

ItemOffsetDecoration itemDecoration = new ItemOffsetDecoration(context, R.dimen.item_offset);
mRecyclerView.addItemDecoration(itemDecoration);

Pass the item_offset dimension resource to the constructor, as sketched below. Go through the material design guidelines for a clear understanding of dimensions in item offsets.
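For illustration, the dimension resource could be defined as follows. This is a sketch; the actual resource name and value in Phimpme may differ:

<!-- res/values/dimens.xml (sketch) -->
<resources>
    <dimen name="item_offset">2dp</dimen>
</resources>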


Automatic Imports of Events to Open Event from online event sites with Query Server and Event Collect

One goal for the next version of the Open Event project is to allow an automatic import of events from various event listing sites. We will implement this using Open Event Import APIs and two additional modules: Query Server and Event Collect. The idea is to run the modules as micro-services or as stand-alone solutions.

Query Server
The query server is, as the name suggests, a query processor. As we are moving towards an API-centric approach for the server, query-server also has API endpoints (v1). Using this API you can get the data from the server in the requested format. The API itself is quite intuitive.

API to get data from query-server

GET /api/v1/search/<search-engine>?query=query&format=format

Sample Response Header

 Cache-Control: no-cache
 Connection: keep-alive
 Content-Length: 1395
 Content-Type: application/xml; charset=utf-8
 Date: Wed, 24 May 2017 08:33:42 GMT
 Server: Werkzeug/0.12.1 Python/2.7.13
 Via: 1.1 vegur
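For illustration, a request to this endpoint could look like the following. The host is a placeholder and the parameter values are only examples:

curl "http://<query-server-host>/api/v1/search/google?query=fossasia&format=json"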

The server is built with Flask. The GitHub repository of the server contains a simple Bootstrap front-end, which is used as a testing ground for results. The query string is passed to the search engine result scraper scraper.py, which is based on the scraper at searss. This scraper takes a search engine (presently Google, Bing, DuckDuckGo or Yahoo) as additional input and searches on that engine. The output from the scraper, which can be XML or JSON depending on the API parameters, is returned, while the search query is stored in a MongoDB database indexed by the query string. This is done keeping in mind the capability, to be added later, of using Kibana analysis tools.

The front-end prettifies results with the help of PrismJS. The query-server will be used for the initial listing of events from different search engines, accessed through the API below. The query server app itself can be accessed on Heroku.

➢ api/list: to provide an initial list of events (titles and links) to be displayed on Open Event search results.

When an event is searched for on Open Event, the query is passed on to query-server, where a search is made by calling scraper.py with some details appended for better event hunting. Recent developments at Google include their event search feature: in the Google search app, event searches take over when Google detects that a user is looking for an event.

The feed from the scraper is parsed inside the query server to generate a list containing event titles and links. Each event in this list is then searched for in the database to check whether it already exists. We will be using Elasticsearch to achieve fuzzy searching for events in the Open Event database, as Elasticsearch is planned for the API.

One example of what we wish to achieve by implementing this type of search in the database follows. The user may search for:

-Google Cloud Event Delhi
-Google Event, Delhi
-Google Cloud, Delhi
-google cloud delhi
-Google Cloud Onboard Delhi
-Google Delhi Cloud event

All these searches should match “Google Cloud Onboard Event, Delhi” with good accuracy. After duplicates and events which already exist in the database have been removed from this list, each remaining event is rendered on the search front-end of Open Event as a separate event. The user can click on any of these events, which will make a call to Event Collect.

Event Collect

The Event Collect project is developed as a separate module which has two parts:

● Site-specific scrapers
In its present state, Event Collect has scrapers for Eventbrite and Ticketleap which, given a query, scrape the respective search results and download JSON files of each event using Loklak‘s API.
The scrapers can be developed in any form, and any number of scrapers/scraping tools can be added, as long as they are in alignment with the Open Event Import API’s data format. Writing tests for these against the current API formats will take care of this. This part will be covered by using a JSON validator to check against a pre-generated schema.

● REST APIs
The scrapers are exposed through a set of APIs, which will include, but are not limited to, the following (sketched below):
➢ api/fetch-event: to scrape any event, given its link, and compose the data in a predefined JSON format generated based on the Open Event Import API. When this endpoint is called on an event link, scrapers are invoked which collect event data such as event, meta, forms etc. This data is validated against the generated JSON schema, and the scraped JSON and the directory structure for media files are prepared.
➢ api/export: to export all the JSON data containing event information into the Open Event Server. As and when the scraping is complete, the data will be added into Open Event’s database as a new event.
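For illustration, calls to these endpoints could look like the following sketch. The host and parameter names are hypothetical; only the endpoint paths come from the description above:

# fetch and compose data for a single event page (hypothetical host and parameter)
curl "http://<event-collect-host>/api/fetch-event?url=<link-to-event-page>"

# export the scraped JSON into the Open Event Server (hypothetical host)
curl -X POST "http://<event-collect-host>/api/export"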

How the Import works

The following graphic shows how the import works.




Let’s dive into the workflow. As the diagram illustrates, the ‘search’ functionality makes a call to the api/list endpoint provided by query-server, which returns the events’ titles and links from the parsed XML/JSON feed. This list is displayed as Open Event’s search results. Once the results are displayed, the user can click on any of the events. When the user clicks on an event, the event is searched for in Open Event’s database. Two things can happen now:

  • The event page loads if the event is found.
  • If the event does not already exist in the database, clicking on it will:

➢ Insert this event’s title and link in the database and get the event_id

➢ Make a call to api/fetch-event in Event Collect, which then invokes a site-specific scraper to fetch data about the event the user has chosen

➢ When the data is scraped, it is imported into the Open Event database using the previously generated event_id. The page is loaded using jQuery AJAX as and when the scraping is done. When the imports are done, the search page refreshes with the new results. The Open Event Orga Server exposes a well documented REST API that can be used by external services to access the data.


ember.js – the right choice for the Open Event Front-end

With the development of the API server for the Open Event project, we needed to decide which framework to choose for the new Open Event Front-end. With the plethora of JavaScript frameworks available, it got really difficult to decide which one is actually the right choice. Every month a new framework arrives, and the existing ones keep actively updating themselves. We decided to go with Ember.js. This article covers the Ember.js framework, highlights its advantages over others and demonstrates its usefulness.

EmberJS is an open-source JavaScript front-end framework for creating web applications which uses the Model-View-Controller (MVC) approach. The framework provides universal data binding, and its focus lies on scalability.

Why is Ember JS great?

Convention over configuration – It does all the heavy lifting.

Ember JS mandates best practices, enforces naming conventions and generates the boilerplate code for the various components and routes itself. This has advantages beyond uniformity: it is easier for other developers to join the project and start working right away, instead of spending hours on the existing codebase to understand it, as the core structure of all Ember apps is similar. To get an Ember app started with the basic route, the user doesn’t have to do much; Ember does all the heavy lifting.

ember new my-app
ember server

After installing Ember CLI, this is all it takes to create your app.

Ember CLI

Similar to Ruby on Rails, Ember has a powerful CLI. It can be used to generate boilerplate code for components, routes, tests and much more. Testing is possible via the CLI as well.

ember generate component my-component
ember generate route my-route
ember test

These are some of the examples which show how easy it is to manage the code via the ember CLI.

Tests.Tests.Tests.

Ember JS makes it incredibly easy to use a test-first approach. Integration tests, acceptance tests and unit tests are built into the framework and can be generated from the CLI itself; the documentation on them is well written, and it’s really easy to customise them.

ember generate acceptance-test my-test

This is all it takes to set up the entire boilerplate for the test, which you can then customise.

Excellent documentation and guides

Ember JS has one of the best documentations available for a framework. The guides are a breeze to follow. If starting out on Ember, it is highly recommended to build the demo app from the official Ember Guides. That should be enough to get familiar with Ember.

Ember Guides is all you need to get started.

Ember Data

Ember sports one of the best-implemented API data-fetching capabilities; fetching and using data in your app is a breeze. Ember comes with an inbuilt data management library, Ember Data.

To generate a data model via the Ember CLI, all you have to do is:

ember generate model my-model

Where is it being used?

Ember has a huge community and is being used all around. This article focuses on its salient features via the example of the Open Event Orga Server project of FOSSASIA. The organizer server is primarily based on Flask, with Jinja2 used for rendering templates. At a small scale it was efficient to have both the front end and back end of the server together, but as the project grew larger with more refined features, it became tough to keep track of all the minor edits and customizations of the front end, and the code started to become complex in nature. That gave birth to the new project, Open Event Front-end, which is based on Ember JS and will be covered next week.

With the Orga Server being converted into a fully functional API, the back end and the front end will be decoupled, thereby making the code much cleaner and easier to understand for other developers who may wish to contribute in the future. Also, since the new front end is designed with Ember JS, its UI will have a lot of enhanced features, and enforcing uniformity across the design will be much easier with the help of components in Ember. For instance, instead of making multiple copies of the same code, components are used to avoid repetition and ensure uniformity (a change in one place will reflect everywhere).

<div class="{{if isWide 'event wide ui grid row'}}">
  {{#if isWide}}
    {{#unless device.isMobile}}
      <div class="ui card three wide computer six wide tablet column">
        <a class="image" href="{{href-to 'public' event.identifier}}">
          {{widgets/safe-image src=(if event.large event.large event.placeholderUrl)}}
        </a>
      </div>
    {{/unless}}
  {{/if}}
  <div class="ui card {{unless isWide 'event fluid' 'thirteen wide computer ten wide tablet sixteen wide mobile column'}}">
    {{#unless isWide}}
      <a class="image" href="{{href-to 'public' event.identifier}}">
        {{widgets/safe-image src=(if event.large event.large event.placeholderUrl)}}
      </a>
    {{/unless}}
    <div class="main content">
      <a class="header" href="{{href-to 'public' event.identifier}}">
        <span>{{event.name}}</span>
      </a>
      <div class="meta">
        <span class="date">
          {{moment-format event.startTime 'ddd, MMM DD HH:mm A'}}
        </span>
      </div>
      <div class="description">
        {{event.shortLocationName}}
      </div>
    </div>
    <div class="extra content small text">
      <span class="right floated">
        <i role="button" class="share alternate link icon" {{action shareEvent event}}></i>
      </span>
      <span>
        {{#if isYield}}
          {{yield}}
        {{else}}
          {{#each tags as |tag|}}
            <a>{{tag}}</a>
          {{/each}}
        {{/if}}
      </span>
    </div>
  </div>
</div>

This is a perfect example of the power of components in Ember: a component for displaying event information in a card format which, in addition to being rendered differently for various screen sizes, can act differently based on the passed parameters, thereby removing the need to write separate components for the same purpose.

Ember is a step forward towards the future of the web. With the help of Babel.js it is possible to write ES6/ES2015 syntax and not worry about its compatibility with browsers; Ember takes care of it.

The following is perfectly valid and will be compatible with the majority of supported browsers.

actions: {
  submit() {
    this.onValid(()=> {
    });
  }
}


Some references used for the blog article:

  1. https://www.codeschool.com/blog/2015/10/26/7-reasons-to-use-ember-js/
  2. https://www.quora.com/What-are-the-advantages-of-using-Ember-js
  3. Official Ember Guides: https://guides.emberjs.com

This page/product/etc. is unaffiliated with the Ember project. Ember is a trademark of Tilde Inc.


DetachedInstanceError: Dealing with Celery, Flask’s app context and SQLAlchemy in the Open Event Server

In the Open Event Server project, we had chosen to go with Celery for async background tasks. From the official website:

What is celery?

Celery is an asynchronous task queue/job queue based on distributed message passing.

What are tasks?

The execution units, called tasks, are executed concurrently on a single or more worker servers using multiprocessing.

After the tasks had been set up, an error constantly came up whenever a task was called. The error was:

DetachedInstanceError: Instance <User at 0x7f358a4e9550> is not bound to a Session; attribute refresh operation cannot proceed

The above error usually occurs when you try to access the session object after it has been closed. It may have been closed by an explicit session.close() call or after committing the session with session.commit().

The celery tasks in question were performing some database operations, so the first thought was that maybe these operations might be causing the error. To test this theory, the celery task was changed to:

@celery.task(name='lorem.ipsum')
def lorem_ipsum():
    pass

But sadly, the error still remained. This showed that the celery task itself was fine and that the session was being closed whenever the celery task was called. The method in which the celery task was being called was of the following form:

def restore_session(session_id):
    session = DataGetter.get_session(session_id)
    session.deleted_at = None
    lorem_ipsum.delay()
    save_to_db(session, "Session restored from Trash")
    update_version(session.event_id, False, 'sessions_ver')


In our app, the app context was not being passed whenever a celery task was initiated. Thus the celery task, whenever called, closed the previous app context, eventually closing the session along with it. The solution to this error is to follow the pattern suggested at http://flask.pocoo.org/docs/0.12/patterns/celery/.

from celery import Celery
from flask import current_app

def make_celery(app):
    celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    task_base = celery.Task

    class ContextTask(task_base):
        abstract = True

        def __call__(self, *args, **kwargs):
            if current_app.config['TESTING']:
                with app.test_request_context():
                    return task_base.__call__(self, *args, **kwargs)
            with app.app_context():
                return task_base.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

celery = make_celery(current_app)


The __call__ method ensures that the celery task is provided with a proper app context to work with.
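With this in place, tasks can safely touch the database again. Below is a sketch that reuses DataGetter and save_to_db from the snippets above; the task name and body are illustrative only, not the project’s actual code:

@celery.task(name='session.restore')
def restore_session_task(session_id):
    # __call__ wraps this body in app.app_context(), so the SQLAlchemy
    # session stays bound and no DetachedInstanceError is raised
    session = DataGetter.get_session(session_id)
    session.deleted_at = None
    save_to_db(session, "Session restored from Trash")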


Event-driven programming in Flask with Blinker signals

Setting up blinker:

The Open Event Project offers event managers a platform to organize all kinds of events, including concerts, conferences, summits and regular meetups. In the server part of the project, the issue at hand was to perform multiple tasks in the background (we use Celery for this) whenever some changes occurred within the event, or within the speakers/sessions associated with the event.

The usual approach to this would be a function call after any relevant changes are made. But the statements making these changes were distributed all over the project in multiple places. It would be cumbersome to add 3-4 function calls (which are irrelevant to the functions they are executed in) in so many places. Moreover, the code would get unstructured and really hard to maintain over time.

That’s when signals came to our rescue. Since Flask 0.6, there is integrated support for signalling in Flask; refer to http://flask.pocoo.org/docs/latest/signals/. The Blinker library is used here to implement signals. If you’re coming from some other language, signals are analogous to events.

Given below is the code to create named signals in a custom namespace:


from blinker import Namespace

event_signals = Namespace()
speakers_modified = event_signals.signal('event_json_modified')

If you want to emit a signal, you can do so by calling the send() method:


speakers_modified.send(current_app._get_current_object(), event_id=event.id, speaker_id=speaker.id)

From the user guide itself:

“ Try to always pick a good sender. If you have a class that is emitting a signal, pass self as sender. If you are emitting a signal from a random function, you can pass current_app._get_current_object() as sender. “

To subscribe to a signal, blinker provides neat decorator based signal subscriptions.


@speakers_modified.connect
def name_of_signal_handler(app, **kwargs):
    # handle the signal here
    pass

Some Design Decisions:

When sending the signal, it may carry lots of information which your subscriber may or may not want, e.g. when you have multiple subscribers listening to the same signal, some of the information sent by the signal may not be of use to your specific function. Thus we decided to enforce the pattern below to ensure flexibility throughout the project.


@speakers_modified.connect
def new_handler(app, **kwargs):
    # do whatever you want to do with kwargs['event_id']
    pass

In this case, the function new_handler needs to perform some task based solely on the event_id. If the function were of the form def new_handler(app, event_id), an error would be raised by the app. A big plus of this approach is that if you later want to send more info with the signal, for example speaker_name, this pattern ensures that no error is raised by any of the subscribers defined before the change was made.

When to use signals and when not?

The call to send a signal will, of course, be located inside another function. The signal and the function should be independent of each other. If the task done by any of the signal subscribers even remotely affects your current function, a signal shouldn’t be used; use a function call instead.

How to turn off signals while testing?

When in testing mode, signals may slow down your testing, as unnecessary signal subscribers which are completely independent of the function being tested will be executed numerous times. To turn off executing the signal subscribers, you have to make a small change in the send function of the blinker library.

Below is what we have done. The approach to turning it off may differ from project to project, as the method of testing differs. Refer to https://github.com/jek/blinker/blob/master/blinker/base.py#L241 for the original function.


from blinker import Signal

def new_send(self, *sender, **kwargs):
    if len(sender) == 0:
        sender = None
    elif len(sender) > 1:
        raise TypeError('send() accepts only one positional argument, '
                        '%s given' % len(sender))
    else:
        sender = sender[0]
    # only this line was changed; `app` here is the Flask app object
    if not self.receivers or app.config['TESTING']:
        return []
    else:
        return [(receiver, receiver(sender, **kwargs))
                for receiver in self.receivers_for(sender)]

Signal.send = new_send

event_signals = Namespace()
# and so on ....

That’s all for now. Have some fun signaling 😉 .


Set proper content type when uploading files on s3 with python-magic

In the open-event-orga-server project, we have been using Amazon S3 storage for a long time now. After some time we encountered an issue: no matter what the file type was, the Content-Type when retrieving these files from the storage solution was application/octet-stream.

An example response when retrieving an image from S3 was as follows:


Accept-Ranges →bytes
Content-Disposition →attachment; filename=HansBakker_111.jpg
Content-Length →56060
Content-Type →application/octet-stream
Date →Fri, 09 Sep 2016 10:51:06 GMT
ETag →"964b1d839a9261fb0b159e960ceb4cf9"
Last-Modified →Tue, 06 Sep 2016 05:06:23 GMT
Server →AmazonS3
x-amz-id-2 →1GnO0Ta1e+qUE96Qgjm5ZyfyuhMetjc7vfX8UWEsE4fkZRBAuGx9gQwozidTroDVO/SU3BusCZs=
x-amz-request-id →ACF274542E950116


As seen above, instead of providing image/jpeg as the Content-Type, it provides application/octet-stream. While uploading the files, we were not providing the content type explicitly, which seemed to be the root of the problem.

It was decided that we would provide the content type explicitly, so it was time to choose an efficient library to determine the file type based on the content of the file rather than the file extension. After researching the available libraries, python-magic seemed to be the obvious choice. python-magic is a Python interface to the libmagic file type identification library; libmagic identifies file types by checking their headers according to a predefined list of file types.

Here is an example straight from python-magic‘s readme on its usage:


>>> import magic
>>> magic.from_file("testdata/test.pdf")
'PDF document, version 1.2'
>>> magic.from_buffer(open("testdata/test.pdf").read(1024))
'PDF document, version 1.2'
>>> magic.from_file("testdata/test.pdf", mime=True)
'application/pdf'


Given below is a code snippet for the s3 upload function in the project:


import magic

file_data = file.read()
file_mime = magic.from_buffer(file_data, mime=True)
size = len(file_data)
# k is defined as k = Key(bucket) in previous code
sent = k.set_contents_from_string(
    file_data,
    headers={
        'Content-Disposition': 'attachment; filename=%s' % filename,
        'Content-Type': '%s' % file_mime
    }
)


One thing to note: as python-magic uses libmagic as a dependency and many distros do not come with libmagic-dev pre-installed, make sure you install libmagic-dev explicitly. (Installation instructions may vary per distro.)


sudo apt-get install libmagic-dev

Voila! Now when retrieving each and every file, you’ll get the proper content type.
