Hyperlinking Support for SUSI Webchat

SUSI responses can contain links or email ids. It is inconvenient for the user to manually copy a link to check out its contents, or to copy an email id to send a mail, which makes for bad UX. I used a module called 'react-linkify' to address this issue. React-linkify is a React component that parses links (URLs, emails, etc.) in text into clickable links.

Usage: <Linkify>{text to linkify}</Linkify>

Any link that appears inside the Linkify component is hyperlinked and made clickable. It uses regular expressions and pattern matching to detect URLs and email ids; clicking a URL opens the link in a new window, and clicking an email id opens a "mailto:" link.

Code:

export const parseAndReplace = (text) => {
  return <Linkify properties={{target:"_blank"}}>{text}</Linkify>;
}

Let's visit SUSI WebChat and try it out.

Query: search internet
Response: Internet The global system of interconnected computer networks that use the Internet protocol suite to... https://duckduckgo.com/Internet

The link has been parsed from the response text and successfully hyperlinked. Clicking the links opens the respective URL in a new window.

Resources: Linkify Library, Examples Using React-Linkify
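Under the hood, linkifying boils down to regex matching. Below is a minimal Python sketch of the kind of pattern matching a linkify component performs; the patterns and the sample address team@susi.ai are illustrative only and far simpler than react-linkify's real matcher.

```python
import re

# Simplified patterns -- a real linkifier handles many more URL and address forms.
URL_RE = re.compile(r'https?://\S+')
EMAIL_RE = re.compile(r'[\w.+-]+@[\w-]+\.[\w.]+')

def linkify(text):
    """Wrap URLs and email ids in anchor tags, mimicking the component's output."""
    text = URL_RE.sub(
        lambda m: '<a href="{0}" target="_blank">{0}</a>'.format(m.group(0)), text)
    text = EMAIL_RE.sub(
        lambda m: '<a href="mailto:{0}">{0}</a>'.format(m.group(0)), text)
    return text

print(linkify("See https://duckduckgo.com/Internet or write to team@susi.ai"))
```

Text without links passes through unchanged, so the function can be applied to every response safely.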

Continue Reading: Hyperlinking Support for SUSI Webchat

Continuous Deployment Implementation in Loklak Search

At the current pace of web technology, quick response times and low downtime are core goals of any project. For a continuous deployment scheme, the most important factor is how efficiently contributors and maintainers are able to test and deploy the code with every PR. We faced this question when we started building Loklak Search. As Loklak Search is a data-driven, client-side web app, GitHub Pages is the simplest way to set it up. At FOSSASIA, apps are developed by many developers working together on different features. This makes it all the more important to have a unified flow of control and a simple integration with GitHub Pages as the continuous deployment pipeline. So the broad concept of continuous deployment boils down to three basic requirements:

1. Automatic unit testing.
2. Automatic build of the application on every successful merge of a PR, with deployment on the gh-pages branch.
3. Easy provision of demo links for developers to test and share the features they are working on before the PR is actually merged.

Automatic Unit Testing

At Loklak Search we use Karma unit tests. We get major help from angular/cli, which handles running the unit tests. The central piece of the setup is Travis CI, which we use as the CI solution. All of this is quite easy to set up and use. Travis CI has one particular advantage: the ability to run custom shell scripts at different stages of the build process, and we use this capability for our continuous deployment.

Automatic Builds of PRs and Deploy on Merge

This is the main requirement of our CD scheme, and we satisfy it with a shell script, deploy.sh, in the project repository root. There are a few critical sections of the deploy script. The script starts with initialisation instructions which set up the appropriate variables and decrypt the SSH key that Travis uses for pushing to the gh-pages branch (we will set up this key later).
Here we also ensure that the deploy script runs only for builds of the master branch, by exiting early from the script otherwise.

#!/bin/bash
SOURCE_BRANCH="master"
TARGET_BRANCH="gh-pages"

# Pull requests and commits to other branches shouldn't try to deploy.
if [ "$TRAVIS_PULL_REQUEST" != "false" -o "$TRAVIS_BRANCH" != "$SOURCE_BRANCH" ]; then
    echo "Skipping deploy; The request or commit is not on master"
    exit 0
fi

We also store some useful information and decrypt the deploy keys, which are generated manually and encrypted using Travis.

# Save some useful information
REPO=`git config remote.origin.url`
SSH_REPO=${REPO/https:\/\/github.com\//git@github.com:}
SHA=`git rev-parse --verify HEAD`

# Decryption of the deploy_key.enc
ENCRYPTED_KEY_VAR="encrypted_${ENCRYPTION_LABEL}_key"
ENCRYPTED_IV_VAR="encrypted_${ENCRYPTION_LABEL}_iv"
ENCRYPTED_KEY=${!ENCRYPTED_KEY_VAR}
ENCRYPTED_IV=${!ENCRYPTED_IV_VAR}
openssl aes-256-cbc -K $ENCRYPTED_KEY -iv $ENCRYPTED_IV -in deploy_key.enc -out deploy_key -d
chmod 600 deploy_key
eval `ssh-agent -s`
ssh-add deploy_key

We clone our repo from GitHub and then check out the target branch, which is gh-pages in our case.

# Cloning the repository to repo/ directory, #…
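A plain-Python restatement of the branch guard may clarify its semantics: on Travis, TRAVIS_PULL_REQUEST holds the literal string "false" for push builds and the PR number for pull-request builds, so the script deploys only on direct pushes to master.

```python
def should_deploy(env, source_branch="master"):
    """Mirror deploy.sh's early exit: deploy only push builds of the source branch."""
    return (env.get("TRAVIS_PULL_REQUEST") == "false"
            and env.get("TRAVIS_BRANCH") == source_branch)

# Push build on master -> deploy
print(should_deploy({"TRAVIS_PULL_REQUEST": "false", "TRAVIS_BRANCH": "master"}))  # True
# PR build (the variable holds a hypothetical PR number) -> skip
print(should_deploy({"TRAVIS_PULL_REQUEST": "1842", "TRAVIS_BRANCH": "master"}))   # False
```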

Continue Reading: Continuous Deployment Implementation in Loklak Search

Implementing a chatbot using the SUSI.AI API

SUSI AI is an intelligent open source personal assistant. It is a server application which is able to interact with humans as a personal assistant. The first step in implementing a bot using SUSI AI is to specify the pathway for query response from the SUSI AI server. The steps below provide a step-by-step guide to establishing communication with the SUSI AI server.

Given below is HTML code that demonstrates how to connect with the SUSI API through an AJAX call. To put this file on a Node.js server, see Step 2. To view the response of this call, follow Step 4.

<!DOCTYPE html>
<body>
<h1>My Header</h1>
<p>My paragraph.</p>
//Script with source here
//Script to be written here
</body>
</html>

In the above code, add the scripts given below, ending each one with the closing tag </script>. In the second script we call the SUSI API with a "hello" query and log the data received from the call to the console.

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
<script>
$(function (){
  $.ajax({
    dataType: 'jsonp',
    type: 'GET',
    url: 'http://api.susi.ai/susi/chat.json?timezoneOffset=-300&q=hello',
    success: function(data){
      console.log('success', data);
    }
  });
});
</script>

Below is Node.js code to set up a server at localhost for the above created HTML file.

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  fs.readFile('YOURFILENAME.html', function(err, data) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.write(data);
    res.end();
  });
}).listen(9000);

To run this code, install Node.js and run "node filename.js" on the command line, then check the result at http://localhost:9000/. You can open the developer window by right-clicking on the page and selecting Inspect. Go to the Network tab and select the relevant API call from the left section of the Inspect window.
We have successfully received a response from the SUSI API, and we can now use this response to build bots that reply to the user.
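Once the JSON arrives, the bot only needs the reply text. Below is a Python sketch of extracting it; the sample payload is trimmed and hypothetical, assuming the reply lives under answers → actions → expression as in SUSI responses, while the real chat.json response carries much more metadata.

```python
import json

# A trimmed, hypothetical SUSI response for illustration.
sample = json.loads(
    '{"answers": [{"actions": [{"type": "answer", "expression": "Hello!"}]}]}'
)

def extract_reply(payload):
    """Pull the bot's text reply out of a SUSI-style chat.json response."""
    for action in payload["answers"][0]["actions"]:
        if action.get("type") == "answer":
            return action["expression"]
    return None

print(extract_reply(sample))  # Hello!
```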

Continue Reading: Implementing a chatbot using the SUSI.AI API

Diving into the codebase of the Open Event Front-end Project

This post aims to help any new contributor get acquainted with the code base of the Open Event Front-end project. The Open Event Front-end is primarily the front-end for the Open Event API server. The project provides the functionality of signing up, adding, updating and viewing event details, and many other functions, to the organisers, speakers and attendees of an event, which can be a concert, conference, summit or meetup. The project is built on Ember.js, a JavaScript web application framework. Ember uses the Ember Data library for managing data in the application and for communicating with the server API via endpoints. Ember is a batteries-included framework, which means that it creates all the boilerplate code required to set up the project and produce a working web application that can be modified according to our needs. For example, when I created a new project using the command:

$ ember new open-event-frontend

it created a new project open-event-frontend with all the boilerplate code required to set up the project, as can be seen below.
create .editorconfig
create .ember-cli
create .eslintrc.js
create .travis.yml
create .watchmanconfig
create README.md
create app/app.js
create app/components/.gitkeep
Installing app
create app/controllers/.gitkeep
create app/helpers/.gitkeep
create app/index.html
create app/models/.gitkeep
create app/resolver.js
create app/router.js
create app/routes/.gitkeep
create app/styles/app.css
create app/templates/application.hbs
create app/templates/components/.gitkeep
create config/environment.js
create config/targets.js
create ember-cli-build.js
create .gitignore
create package.json
create public/crossdomain.xml
create public/robots.txt
create testem.js
create tests/.eslintrc.js
create tests/helpers/destroy-app.js
create tests/helpers/module-for-acceptance.js
create tests/helpers/resolver.js
create tests/helpers/start-app.js
create tests/index.html
create tests/integration/.gitkeep
create tests/test-helper.js
create tests/unit/.gitkeep
create vendor/.gitkeep
NPM: Installed dependencies
Successfully initialized git.

Now if we go inside the project directory, we can see the following files and folders generated by Ember CLI, which is a toolkit to create, develop and build Ember applications. Ember has a runtime resolver which automatically resolves code placed at the conventional locations Ember knows about.

➜ ~ cd open-event-frontend
➜ open-event-frontend git:(master) ls
app config ember-cli-build.js node_modules package.json public README.md testem.js tests vendor

What do these files and folders contain, and what is their role in the Open Event Front-end project?

Fig 1: Directory structure of the Open Event Front-end project

Let's take a look at the folders and files Ember CLI generates.

App: This is the heart of the…

Continue Reading: Diving into the codebase of the Open Event Front-end Project
Spacing in RecyclerView GridLayoutManager

Set spacing in RecyclerView items by custom Item Decorator in Phimpme Android App

We have decided to shift our image gallery code in the Phimpme Android application from GridView to a GridLayoutManager in RecyclerView. RecyclerView has many advantages compared to Grid/List view when using a layout manager (List, Grid or Staggered):

- We can use many built-in animations.
- Item decorators allow customizing the items.
- Items are recycled using the View Holder pattern.

See the RecyclerView documentation for details.

Adding the RecyclerView in XML:

<android.support.v7.widget.RecyclerView
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:id="@+id/rv"
    />

Setting the layout manager:

mLayoutManager = new GridLayoutManager(this, 3);
recyclerView.setLayoutManager(mLayoutManager);

In Phimpme each item is an ImageView to show in the grid. With the grid set up using the layout manager as above, the gallery of images is displayed, but there is no spacing between grid items, and padding will not help in this case. A way to set the offset is to create a custom item decoration class, with a constructor that takes a dimension resource as parameter.

public class ItemOffsetDecoration extends RecyclerView.ItemDecoration {

   private int mItemOffset;

   public ItemOffsetDecoration(int itemOffset) {
       mItemOffset = itemOffset;
   }

   public ItemOffsetDecoration(@NonNull Context context, @DimenRes int itemOffsetId) {
       this(context.getResources().getDimensionPixelSize(itemOffsetId));
   }

   @Override
   public void getItemOffsets(Rect outRect, View view, RecyclerView parent,
           RecyclerView.State state) {
       super.getItemOffsets(outRect, view, parent, state);
       outRect.set(mItemOffset, mItemOffset, mItemOffset, mItemOffset);
   }
}

Author: gist.github.com/yqritc/ccca77dc42f2364777e1

Usage:

ItemOffsetDecoration itemDecoration = new ItemOffsetDecoration(context, R.dimen.item_offset);
mRecyclerView.addItemDecoration(itemDecoration);

Pass the item offset value (R.dimen.item_offset here) to the constructor.
Go through the material design guidelines for a clear understanding of dimensions in item offset.

Continue Reading: Set spacing in RecyclerView items by custom Item Decorator in Phimpme Android App

Automatic Imports of Events to Open Event from online event sites with Query Server and Event Collect

One goal for the next version of the Open Event project is to allow automatic import of events from various event listing sites. We will implement this using the Open Event Import APIs and two additional modules: Query Server and Event Collect. The idea is to run the modules as micro-services or as stand-alone solutions.

Query Server

The query server is, as the name suggests, a query processor. As we are moving towards an API-centric approach for the server, the query server also has API endpoints (v1). Using this API you can get the data from the server in the requested format. The API itself is quite intuitive.

API to get data from the query server:

GET /api/v1/search/<search-engine>/query=query&format=format

Sample response header:

Cache-Control: no-cache
Connection: keep-alive
Content-Length: 1395
Content-Type: application/xml; charset=utf-8
Date: Wed, 24 May 2017 08:33:42 GMT
Server: Werkzeug/0.12.1 Python/2.7.13
Via: 1.1 vegur

The server is built in Flask. The GitHub repository of the server contains a simple Bootstrap front-end, which is used as a testing ground for results. The query string calls the search engine result scraper scraper.py, which is based on the scraper at searss. The scraper takes a search engine, presently Google, Bing, DuckDuckGo or Yahoo, as additional input and searches on that engine. The output from the scraper, which can be in XML or JSON depending on the API parameters, is returned, while the search query is stored in a MongoDB database indexed by the query string. This is done keeping in mind the capability, to be added later, of using Kibana analysis tools. The front-end prettifies results with the help of PrismJS. The query server will be used for the initial listing of events from different search engines, accessed through the following API. The query server app can be accessed on Heroku.

➢ api/list​: Provides an initial list of events (titles and links) to be displayed in Open Event search results.
When an event is searched for on Open Event, the query is passed on to the query server, where a search is made by calling scraper.py, appending some details for better event hunting. Recent developments at Google include their event search feature: in the Google search app, event search takes over when Google detects that a user is looking for an event. The feed from the scraper is parsed inside the query server to generate a list containing event titles and links. Each event in this list is then searched for in the database to check whether it already exists. We will use Elasticsearch to achieve fuzzy searching for events in the Open Event database, as Elasticsearch is planned for the API. One example of what we wish to achieve by implementing this type of search follows. The user may search for:

- Google Cloud Event Delhi
- Google Event, Delhi
- Google Cloud, Delhi
- google cloud delhi
- Google Cloud Onboard Delhi
- Google Delhi Cloud event

All these searches should match "Google Cloud Onboard Event, Delhi" with good accuracy.…
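Until Elasticsearch is wired in, the matching goal can be sketched with Python's standard-library difflib; this is a much weaker stand-in for Elasticsearch's fuzzy queries, shown only to illustrate the intent.

```python
from difflib import SequenceMatcher

def similarity(query, title):
    """Case-insensitive similarity ratio in [0, 1]."""
    return SequenceMatcher(None, query.lower(), title.lower()).ratio()

title = "Google Cloud Onboard Event, Delhi"
for query in ["Google Cloud Event Delhi", "google cloud delhi", "Google Delhi Cloud event"]:
    # each user variant should score well above an unrelated query
    print("%-26s -> %.2f" % (query, similarity(query, title)))
```

In a real deployment the threshold and the matching strategy (token order, punctuation, stop words) would come from Elasticsearch's analyzers rather than a raw character ratio.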

Continue Reading: Automatic Imports of Events to Open Event from online event sites with Query Server and Event Collect

ember.js – the right choice for the Open Event Front-end

With the development of the API server for the Open Event project, we needed to decide which framework to choose for the new Open Event front-end. With the plethora of JavaScript frameworks available, it got really difficult to decide which one is actually the right choice: every month a new framework arrives, and the existing ones keep updating themselves often. We decided to go with Ember.js. This article covers the Ember.js framework, highlights its advantages over others and demonstrates its usefulness.

Ember.js is an open-source JavaScript front-end framework for creating web applications that uses the Model-View-Controller (MVC) approach. The framework provides universal data binding, and its focus is on scalability.

Why is Ember.js great?

Convention over configuration: it does all the heavy lifting. Ember.js mandates best practices, enforces naming conventions and generates the boilerplate code for the various components and routes itself. This has advantages beyond uniformity: it is easier for other developers to join the project and start working right away, instead of spending hours on the existing codebase to understand it, as the core structure of all Ember apps is similar. To get an Ember app started with a basic route, the user doesn't have to do much; Ember does the heavy lifting:

ember new my-app
ember server

After installation, this is all it takes to create your app.

Ember CLI

Similar to Ruby on Rails, Ember has a powerful CLI. It can be used to generate boilerplate code for components, routes, tests and much more. Testing is possible via the CLI as well:

ember generate component my-component
ember generate route my-route
ember test

These examples show how easy it is to manage the code via the Ember CLI.

Tests. Tests. Tests.

Ember.js makes it incredibly easy to use a test-first approach. Integration tests, acceptance tests and unit tests are built into the framework.
They can be generated from the CLI itself; the documentation on them is well written, and it's really easy to customise them.

ember generate acceptance-test my-test

This is all it takes to set up the entire boilerplate for the test, which you can then customise.

Excellent documentation and guides

Ember.js has some of the best documentation available for a framework, and the guides are a breeze to follow. If you are starting out on Ember, it is highly recommended to build the demo app from the official Ember Guides; that should be enough to get familiar with Ember. The Ember Guides are all you need to get started.

Ember Data

Ember sports one of the best implemented API data-fetching capabilities: fetching and using data in your app is a breeze. Ember comes with an inbuilt data management library, Ember Data. To generate a data model via the Ember CLI, all you have to do is:

ember generate model my-model

Where is it being used?

Ember has a huge community and is used all around. This article focuses on its salient features via the example of…

Continue Reading: ember.js – the right choice for the Open Event Front-end

DetachedInstanceError: Dealing with Celery, Flask’s app context and SQLAlchemy in the Open Event Server

In the Open Event server project, we had chosen to go with Celery for async background tasks.

What is Celery? From the official website: Celery is an asynchronous task queue/job queue based on distributed message passing. What are tasks? The execution units, called tasks, are executed concurrently on one or more worker servers using multiprocessing.

After the tasks had been set up, an error constantly came up whenever a task was called:

DetachedInstanceError: Instance <User at 0x7f358a4e9550> is not bound to a Session; attribute refresh operation cannot proceed

The above error usually occurs when you try to access the session object after it has been closed. It may have been closed by an explicit session.close() call or after committing the session with session.commit(). The Celery tasks in question were performing some database operations, so the first thought was that these operations might be causing the error. To test this theory, the Celery task was changed to:

@celery.task(name='lorem.ipsum')
def lorem_ipsum():
    pass

But sadly, the error remained. This proved that the Celery task itself was fine and that the session was being closed whenever a Celery task was called. The method in which the Celery task was being called was of the following form:

def restore_session(session_id):
    session = DataGetter.get_session(session_id)
    session.deleted_at = None
    lorem_ipsum.delay()
    save_to_db(session, "Session restored from Trash")
    update_version(session.event_id, False, 'sessions_ver')

In our app, the app context was not being passed whenever a Celery task was initiated. Thus the Celery task, whenever called, closed the previous app context, eventually closing the session along with it. The solution to this error is to follow the pattern suggested at http://flask.pocoo.org/docs/0.12/patterns/celery/.
def make_celery(app):
    celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    task_base = celery.Task

    class ContextTask(task_base):
        abstract = True

        def __call__(self, *args, **kwargs):
            if current_app.config['TESTING']:
                with app.test_request_context():
                    return task_base.__call__(self, *args, **kwargs)
            with app.app_context():
                return task_base.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

celery = make_celery(current_app)

The __call__ method ensures that the Celery task is provided with the proper app context to work with.

Continue Reading: DetachedInstanceError: Dealing with Celery, Flask’s app context and SQLAlchemy in the Open Event Server

Event-driven programming in Flask with Blinker signals

Setting up Blinker: The Open Event project offers event managers a platform to organize all kinds of events, including concerts, conferences, summits and regular meetups. In the server part of the project, the issue at hand was to perform multiple tasks in the background (we use Celery for this) whenever some changes occurred within the event or the speakers/sessions associated with it. The usual approach would be a function call after any relevant change is made. But the statements making these changes were distributed all over the project in multiple places. It would be cumbersome to add 3-4 function calls (irrelevant to the function they are executed in) in so many places; moreover, the code would become unstructured and really hard to maintain over time. That's where signals came to our rescue. Since Flask 0.6 there is integrated support for signalling in Flask; refer to http://flask.pocoo.org/docs/latest/signals/. The Blinker library is used here to implement signals. If you're coming from some other language, signals are analogous to events.

Given below is the code to create named signals in a custom namespace:

from blinker import Namespace

event_signals = Namespace()
speakers_modified = event_signals.signal('event_json_modified')

If you want to emit a signal, you can do so by calling the send() method:

speakers_modified.send(current_app._get_current_object(), event_id=event.id, speaker_id=speaker.id)

From the user guide itself: “Try to always pick a good sender. If you have a class that is emitting a signal, pass self as sender. If you are emitting a signal from a random function, you can pass current_app._get_current_object() as sender.”

To subscribe to a signal, Blinker provides neat decorator-based signal subscriptions.
@speakers_modified.connect
def name_of_signal_handler(app, **kwargs):
    ...

Some Design Decisions:

When a signal is sent, it may carry lots of information which a given subscriber may or may not want, e.g. when multiple subscribers listen to the same signal, some of the information sent by the signal may not be of use to your specific function. Thus we decided to enforce the pattern below to ensure flexibility throughout the project.

@speakers_modified.connect
def new_handler(app, **kwargs):
    # do whatever you want to do with kwargs['event_id']

In this case, the function new_handler needs to perform some task solely based on the event_id. If the function were of the form def new_handler(app, event_id), an error would be raised by the app as soon as the signal sent any additional keyword arguments. A big plus of this approach: if you later want to send more information with the signal, say a speaker_name along with it, this pattern ensures that no error is raised by any of the subscribers defined before the change was made.

When to use signals and when not?

The call to send a signal will of course lie inside another function itself. The signal and the function should be independent of each other. If the task done by any of the signal subscribers even remotely affects your current function, a signal shouldn't be…
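A signal is, at bottom, a registry of subscriber callables invoked with whatever keyword arguments the sender passes. The **kwargs convention above can be illustrated with a toy plain-Python signal; this is a sketch of the idea only, not Blinker's actual implementation.

```python
class Signal:
    """A toy named signal: subscribers connect, senders pass keyword arguments."""

    def __init__(self, name):
        self.name = name
        self.receivers = []

    def connect(self, func):
        # returning func makes this usable as a decorator, like blinker's .connect
        self.receivers.append(func)
        return func

    def send(self, sender, **kwargs):
        for receiver in self.receivers:
            receiver(sender, **kwargs)

speakers_modified = Signal('event_json_modified')
handled = []

@speakers_modified.connect
def new_handler(sender, **kwargs):
    # this subscriber cares only about event_id; extra kwargs are silently ignored
    handled.append(kwargs['event_id'])

# sending an extra speaker_id later breaks nothing for existing subscribers
speakers_modified.send('app', event_id=42, speaker_id=7)
print(handled)  # [42]
```

Had new_handler been declared as def new_handler(sender, event_id), the later addition of speaker_id would raise a TypeError, which is exactly why the **kwargs pattern is enforced.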

Continue Reading: Event-driven programming in Flask with Blinker signals

Set proper content type when uploading files on s3 with python-magic

In the open-event-orga-server project, we had been using Amazon S3 storage for a long time. After some time we encountered an issue: no matter what the file type was, the Content-Type when retrieving these files from the storage was application/octet-stream. An example response when retrieving an image from S3 was as follows:

Accept-Ranges → bytes
Content-Disposition → attachment; filename=HansBakker_111.jpg
Content-Length → 56060
Content-Type → application/octet-stream
Date → Fri, 09 Sep 2016 10:51:06 GMT
ETag → "964b1d839a9261fb0b159e960ceb4cf9"
Last-Modified → Tue, 06 Sep 2016 05:06:23 GMT
Server → AmazonS3
x-amz-id-2 → 1GnO0Ta1e+qUE96Qgjm5ZyfyuhMetjc7vfX8UWEsE4fkZRBAuGx9gQwozidTroDVO/SU3BusCZs=
x-amz-request-id → ACF274542E950116

As seen above, instead of image/jpeg, the Content-Type is application/octet-stream. While uploading the files, we were not providing the content type explicitly, which turned out to be the root of the problem. We decided to provide the content type explicitly, so it was time to choose an efficient library to determine the file type based on the content of the file rather than the file extension. After researching the available libraries, python-magic seemed the obvious choice. python-magic is a Python interface to the libmagic file type identification library. libmagic identifies file types by checking their headers against a predefined list of file types.
Here is an example straight from python-magic's README on its usage:

>>> import magic
>>> magic.from_file("testdata/test.pdf")
'PDF document, version 1.2'
>>> magic.from_buffer(open("testdata/test.pdf").read(1024))
'PDF document, version 1.2'
>>> magic.from_file("testdata/test.pdf", mime=True)
'application/pdf'

Given below is a code snippet from the S3 upload function in the project:

file_data = file.read()
file_mime = magic.from_buffer(file_data, mime=True)
size = len(file_data)
# k is defined as k = Key(bucket) in previous code
sent = k.set_contents_from_string(
    file_data,
    headers={
        'Content-Disposition': 'attachment; filename=%s' % filename,
        'Content-Type': '%s' % file_mime
    }
)

One thing to note: python-magic depends on libmagic, and many distros do not come with libmagic-dev pre-installed, so make sure you install libmagic-dev explicitly (installation instructions may vary per distro):

sudo apt-get install libmagic-dev

Voila!! Now when retrieving each and every file you'll get the proper content type.
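The header-based identification libmagic performs can be illustrated in miniature with pure Python. This toy hard-codes four well-known magic numbers and should not replace python-magic in practice, since libmagic's database covers thousands of formats.

```python
# A few well-known magic numbers (leading bytes) and their MIME types.
SIGNATURES = [
    (b'\xff\xd8\xff', 'image/jpeg'),
    (b'\x89PNG\r\n\x1a\n', 'image/png'),
    (b'%PDF', 'application/pdf'),
    (b'GIF8', 'image/gif'),
]

def sniff_mime(data, default='application/octet-stream'):
    """Guess a Content-Type from the file's leading bytes, not its extension."""
    for signature, mime in SIGNATURES:
        if data.startswith(signature):
            return mime
    return default

print(sniff_mime(b'\xff\xd8\xff\xe0' + b'\x00' * 16))  # image/jpeg
print(sniff_mime(b'%PDF-1.4 rest of the file'))        # application/pdf
```

Because the content itself is inspected, a JPEG uploaded as photo.dat would still get image/jpeg, which is exactly the behaviour we wanted from the S3 uploads.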

Continue Reading: Set proper content type when uploading files on s3 with python-magic