Implementing PNG Export of Schedule in Open Event Webapp

Recently, we implemented a cool feature in the Open Event Webapp: we provided the facility to download a PNG image of the schedule of an event at the click of a button. The user can download the schedule of an event in both the list and the calendar view. There are situations when the speaker or the organizer of an event wants a hard copy of the schedule, for example to paste it on notice boards for the convenience of the visitors. With the help of the PNG export functionality, they can download the PNG image of the schedule and print it easily. The whole code can be seen here.

On the schedule page, the user can view the sessions in two modes: the list mode and the calendar mode. Capturing a portion of the document and then converting it to an image is no easy task, because we can't directly convert an HTML element to an image. We first have to somehow render that HTML element on a canvas and then convert the canvas to an image. It is a two-step process, and rendering the HTML element on the canvas is the most important and challenging part. The better we render the element on the canvas, the better the output quality of the image will be. Fortunately for us, we don't have to implement it from scratch (which would have been extremely difficult and time-consuming). Enter the html2canvas library. It renders an element onto the canvas, after which we can convert the canvas into an image.

I will now explain how we implemented PNG export in the calendar mode. You can view the whole schedule template file here. Here is a screenshot of the calendar (or grid) view of the schedule. The currently selected date is Saturday, 18th March. The PNG Export button is in the top-right corner, beside the 'Calendar View' button.

Here is a little excerpt of the basic structure of the calendar mode of the sessions. I have given an overview of it in the comments.

<div class="{{slug}} calendar">
  <!-- slug represents the currently selected date -->
  <!-- This div contains all the sessions scheduled on the selected date -->
  <div class="col-md-12 paddingzero">
    <!-- Contains content related to the current date and time -->
  </div>
  <div class="calendar-content">
    <div class="times">
      <!-- This div contains the list of all the session times on the current day -->
      <!-- It is the left-most column of the grid view which contains all the times -->
      <div class="time">
        <!-- This div contains information about a particular time -->
      </div>
    </div>
    <div class="rooms">
      <!-- This div contains all the rooms of an event -->
      <!-- Each particular room has a set of sessions associated with it on that particular date -->
      <div class="room">
        <!-- This div contains the list of sessions happening in a particular room -->
        <!-- Session Details -->
      </div>
    </div>
  </div>
</div>

Now, let us see how we actually capture an image of the HTML element shown above. Here is the code related to it:

$(".export-png").click(function() {…
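
Since the excerpt above is cut off, here is a minimal sketch of what such a click handler can look like, assuming the promise-based API of newer html2canvas versions. The ".calendar" selector comes from the markup shown above; everything else is illustrative rather than the webapp's exact code.

// Illustrative sketch only, not the webapp's actual implementation.
$(".export-png").click(function () {
  // Render the currently visible calendar element onto a canvas.
  var calendar = document.querySelector(".calendar");
  html2canvas(calendar).then(function (canvas) {
    // Convert the canvas to a PNG data URL and trigger a download of it.
    var link = document.createElement("a");
    link.href = canvas.toDataURL("image/png");
    link.download = "schedule.png";
    link.click();
  });
});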

Continue Reading: Implementing PNG Export of Schedule in Open Event Webapp

Global Search in Open Event Android

In the Open Event Android app, we only had a single data source for searching on each page: the content of the page itself. But it turned out that users want to search data across an event, and therefore across the different screens in the app. Global search solves this problem. We have recently implemented global search in Open Event Android, which enables the user to search data from the different pages, i.e. Tracks, Speakers, Locations etc., all on a single page. This helps the user obtain the desired result in less time. In this blog I am describing how we implemented the feature in the app using Java and XML.

Implementing the Search

The first step of the work is to add the search icon on the home screen. We have done this with an id R.id.action_search_home.

@Override
public void onCreateOptionsMenu(Menu menu, MenuInflater inflater) {
    super.onCreateOptionsMenu(menu, inflater);
    inflater.inflate(R.menu.menu_home, menu);
    // Get the SearchView and set the searchable configuration
    SearchManager searchManager = (SearchManager) getContext().getSystemService(Context.SEARCH_SERVICE);
    searchView = (SearchView) menu.findItem(R.id.action_search_home).getActionView();
    // Assumes current activity is the searchable activity
    searchView.setSearchableInfo(searchManager.getSearchableInfo(getActivity().getComponentName()));
    searchView.setIconifiedByDefault(true);
}

What is being done here is that the search icon on the top right of the home screen is designated as a searchable component, which is responsible for setting up the search widget on the Toolbar of the app.

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.menu_home, menu);
    SearchManager searchManager = (SearchManager) getSystemService(Context.SEARCH_SERVICE);
    searchView = (SearchView) menu.findItem(R.id.action_search_home).getActionView();
    searchView.setSearchableInfo(searchManager.getSearchableInfo(getComponentName()));
    searchView.setOnQueryTextListener(this);
    if (searchText != null) {
        searchView.setQuery(searchText, true);
    }
    return true;
}

We can see that a queryTextListener has been set up in this function, which is responsible for triggering a method whenever the query in the SearchView changes.

Example of a Searchable Component

<?xml version="1.0" encoding="utf-8"?>
<searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:hint="@string/global_search_hint"
    android:label="@string/app_name" />

For more info: https://developer.android.com/guide/topics/search/searchable-config.html

If this searchable configuration is referenced in the manifest inside the body of the destination activity, that activity becomes the searchable activity. An intent filter must also be declared in this activity to mark it as searchable.
Manifest Code for SearchActivity

<activity
    android:name=".activities.SearchActivity"
    android:launchMode="singleTop"
    android:label="Search App"
    android:parentActivityName=".activities.MainActivity">
    <intent-filter>
        <action android:name="android.intent.action.SEARCH" />
    </intent-filter>
    <meta-data
        android:name="android.app.searchable"
        android:resource="@xml/searchable" />
</activity>

The attribute android:launchMode="singleTop" is very important: without it, every search would push a new instance of SearchActivity onto the back stack, which is not needed and would also eat up a lot of memory.

Handling the Intent to the SearchActivity

We basically need to do a standard if check to see whether the intent is of type ACTION_SEARCH.

if (Intent.ACTION_SEARCH.equals(getIntent().getAction())) {
    handleIntent(getIntent());
}

@Override
protected void onNewIntent(Intent intent) {
    super.onNewIntent(intent);
    handleIntent(intent);
}

public void handleIntent(Intent intent) {
    final String query = intent.getStringExtra(SearchManager.QUERY);
    searchQuery(query);
}

The function searchQuery is called within handleIntent in order to search for the text that we received…
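
The excerpt ends before showing searchQuery itself, so here is a minimal sketch of what such a method might do, assuming the global search page is backed by Filterable adapters for tracks, speakers and locations. The adapter and field names below are illustrative assumptions, not the app's actual code.

// Illustrative sketch only; the adapters and fields here are assumptions.
public void searchQuery(String query) {
    if (query == null || query.trim().isEmpty()) {
        return; // nothing to search for
    }
    searchText = query;
    // Filter every data set shown on the global search page against the query.
    trackAdapter.getFilter().filter(query);
    speakerAdapter.getFilter().filter(query);
    locationAdapter.getFilter().filter(query);
}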

Continue Reading: Global Search in Open Event Android

Integrating Selenium Testing in the Open Event Webapp

Open Event Webapp generates static websites of events from JSON data fed to it in the form of a zip file or an API endpoint. Over the course of time, the features included in the generated sites have grown. We have the bookmark option, the search bar, the calendar view and many other facilities. Every once in a while, it happens that when we fix an issue, another issue which was solved in the past resurfaces. As a developer, it is extremely frustrating to solve the old bugs again. In software engineering, we call these bugs regressions. Detecting them is challenging, as the reviewer has to manually go through all the pages of the site and check each and every function. It is not uncommon for a part to be left out during the review process, introducing regressions in the app.

We already have proper testing for the generator part of the project with the help of libraries like mocha and chai. But, up until now, we didn't have client-side testing, aka frontend testing, in the project. We had to introduce a way to test the functionality of the site: to check whether a link is dead, whether the search bar is working, whether the bookmark function works properly, and so on.

Enter Selenium. Selenium is a suite of tools to automate web browsers across many platforms. Using Selenium, we can control the browser and instruct it to perform an 'action' programmatically. We can then check whether that action had the appropriate reaction and build our test cases on this concept. There are implementations of Selenium available in many different languages: Java, Ruby, Javascript, Python etc. As the main language used in the project is Javascript, we decided to use it.

https://www.npmjs.com/package/selenium-webdriver
https://seleniumhq.github.io/selenium/docs/api/javascript/index.html

After deciding on the framework, we had to find a way to integrate it into the project. We wanted to run the tests on every PR made to the repo. If the tests failed, the build would be stopped and it would be shown to the user. Now, the problem was that Travis doesn't natively support running Selenium on its virtual machines. Fortunately, there is a company called Sauce Labs which provides automated testing for web and mobile applications. And the best part: it is totally free for open source projects. And Travis supports Sauce Labs.

The details of how to connect to Sauce Labs are described in detail on this page: https://docs.travis-ci.com/user/gui-and-headless-browsers/. Basically, we have to create an account on Sauce Labs and get a sauce_username and sauce_access_key which will be used to connect to the Sauce cloud. Travis provides a sauce_connect addon which creates a tunnel that allows the Sauce browsers to easily access our application. Once the tunnel is established, the browser in the Sauce cloud can use it to access the localhost where we serve the pages of the generated sites. A little code would make it more clear at this stage. Here is a short excerpt from the travis.yml file:

addons:
  sauce_connect:…
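
To give an idea of what such a frontend test can look like, here is a minimal sketch written with the selenium-webdriver package and mocha. The URL, port and CSS selectors below are assumptions made for illustration; they are not the webapp's actual markup or test code.

// Illustrative sketch; selectors and URL are assumptions, not the real test suite.
const { Builder, By, until } = require('selenium-webdriver');
const assert = require('assert');

describe('Generated event site', function () {
  this.timeout(60000);
  let driver;

  before(async function () {
    driver = await new Builder().forBrowser('chrome').build();
  });

  after(async function () {
    await driver.quit();
  });

  it('shows matching sessions when the search bar is used', async function () {
    await driver.get('http://localhost:5000/live/preview/tracks.html');
    const searchBar = await driver.wait(until.elementLocated(By.css('#search-input')), 10000);
    await searchBar.sendKeys('keynote');
    const results = await driver.findElements(By.css('.session'));
    assert(results.length > 0, 'expected at least one session to be listed');
  });
});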

Continue Reading: Integrating Selenium Testing in the Open Event Webapp

Generating the Google IO Open Event Android App

The main aim of the FOSSASIA Open Event Android App is to give an event organiser the ability to generate the app through a single click by providing the necessary JSON and binary files. Recently, the Android application was tested on the Google IO 2017 event. The sample files can be seen here. The data for the event was taken from this site (https://events.google.com/io/). What was astonishing about this application is the simplicity with which we can make an event-specific application by providing the vital assets required (JSON and binary files).

What was needed for generating the Google IO 2017 app?

For generating the app we had to provide the following files:

- an images folder containing the necessary images, such as the speaker images, the logo of the event etc.
- an event JSON file which has all the event-specific information like the name of the event, the schedule of the event, the description of the event etc.
- a forms JSON file having session and speaker form data.
- a meta JSON file having the root URL of the event.
- a microlocations JSON file having all the locations where the sessions are going to happen.
- a session_types JSON file containing data on all the types of sessions which will occur in the event.
- a sessions JSON file containing session-specific data like the title of the session, the start time and end time of the session, which track the session belongs to etc.
- a speakers JSON file containing speaker-specific data like the name of the speaker, the image of the speaker, the social links of the speaker etc.
- a sponsors JSON file containing the list of all sponsors of the event.
- a tracks JSON file containing track-specific data.
- a config.json file which consists of the API URL and the app name (see the illustrative sketch after this section).

After providing the required information, we go to this site (http://droidgen.eventyay.com/). The first thing this site asks us for is the email id. Then we upload the required files mentioned above in a zip folder, and we get an APK which we can test on our Android phone.

What did the Google IO sample app look like?

The files for the sample event can be found over here:
Folder Link: https://github.com/fossasia/open-event/tree/master/sample/GoogleIO17
Zip File Link: https://github.com/fossasia/open-event/blob/master/sample/GoogleIO17.zip

What were the issues found in the sample app?

There were certain issues which we observed on testing the app with the Google IO event:

- The theme of the app remains the same no matter which event it is. It is important to give the event organiser the ability to customise the theme of the app.
- Support for local speaker images needs to be provided, as we want to give the event organiser the option of including the images locally or not.
- The background of the logo needs to be changed, because with certain logos the dark background causes visibility problems.
- Certain information in the app, like the event information, is hard-coded and needs to be taken from the assets folder instead of strings.xml.

Resources

This tool makes work a lot easier by generating JSON files…
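
For illustration, the config.json file mentioned in the list above could look roughly like the sketch below. The exact keys are defined by the app generator, so the names and values here are assumptions rather than the real schema.

{
  "app_name": "Google IO 17",
  "api_link": "https://events.google.com/io/"
}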

Continue Reading: Generating the Google IO Open Event Android App

Adding swap space to your DigitalOcean droplet, if you run out of RAM

The Open Event Android App generator runs on a DigitalOcean droplet. The deployment runs on a USD 10 box that has 1 GB of RAM, but for testing I often use a USD 5 box that has only 512 MB of RAM. When trying to build an Android app using gradle and Java 8, you can run out of RAM (especially if you only have 512 MB). What we can do to remedy this problem is to create a swap file. On an SSD-based system, swap space is reasonably fast because SSDs have high read/write speeds.

Check hard disk space availability using df -h. There should be an output like this:

Filesystem      Size  Used  Avail  Use%  Mounted on
udev            238M     0   238M    0%  /dev
tmpfs            49M  624K    49M    2%  /run
/dev/vda1        20G  1.1G    18G    6%  /
tmpfs           245M     0   245M    0%  /dev/shm
tmpfs           5.0M     0   5.0M    0%  /run/lock
tmpfs           245M     0   245M    0%  /sys/fs/cgroup
tmpfs            49M     0    49M    0%  /run/user/1001

The steps to create a swap file and enable it as swap are:

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

We can verify using sudo swapon --show

NAME       TYPE   SIZE  USED  PRIO
/swapfile  file  1024M    0B    -1

And now if we check RAM usage using free -h, we'll see:

       total  used  free  shared  buff/cache  available
Mem:    488M   37M   96M    652K        354M       425M
Swap:   1.0G    0B  1.0G

Do not use this as a permanent measure on any SSD-based filesystem. It can wear out your SSD if used as swap for long. We use this only for short periods of time to help us build Android APKs on low-RAM systems.
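
Since the swap file is meant to be temporary, it can be removed again once the build is done. A small sketch, assuming the same /swapfile path as above:

# Disable and delete the temporary swap file after the build finishes.
sudo swapoff /swapfile
sudo rm /swapfile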

Continue Reading: Adding swap space to your DigitalOcean droplet, if you run out of RAM

Doing a table join in Android without using rawQuery

The Open Event Android App downloads data from the API (about events, sessions, speakers etc.) and saves it locally in an SQLite database, so that the app can work even without an internet connection. Since there are multiple entities like Sessions, Speakers, Events etc., and each Session has the ids of its speakers, the id of its venue etc., we often need to use JOIN queries to combine data from two tables.

Android has some really nice SQLite helper classes and methods. The ones I like the most are SQLiteDatabase.query, SQLiteDatabase.update and SQLiteDatabase.insert, because they take away quite a bit of the pain of typing out SQL commands by hand. But unfortunately, if you have to use a JOIN, then usually you have to fall back to the SQLiteDatabase.rawQuery method and end up typing your commands by hand.

But but but, if the two tables you are joining do not have any common column names (it is actually good design to avoid common column names, for example by prefixing every column name with tablename_), then you can hack the usual SQLiteDatabase.query() method to get a JOINed query.

Now ideally, to get the Session where speaker_id is 1, a nice looking SQL query would be:

SELECT * FROM speaker INNER JOIN session ON speaker_id = session_speaker_id WHERE speaker_id = 1

which, in Android, can be done like this:

String rawQuery = "SELECT * FROM " + SpeakerTable.TABLE_NAME + " INNER JOIN " + SessionTable.TABLE_NAME +
        " ON " + SessionTable.EXP_ID + " = " + SpeakerTable.ID +
        " WHERE " + SpeakerTable.ID + " = " + id;
Cursor c = db.rawQuery(rawQuery, null);

But because SQLite is backward compatible with the older, primitive way of querying, we can turn that command into:

SELECT * FROM session, speaker WHERE speaker_id = session_speaker_id AND speaker_id = 1

Now this we can write by bending the terminology used by the #query() method:

Cursor c = db.query(
        SessionTable.TABLE_NAME + " , " + SpeakerTable.TABLE_NAME,
        Utils.concat(SessionTable.PROJECTION, SpeakerTable.PROJECTION),
        SessionTable.EXP_ID + " = " + SpeakerTable.ID + " AND " + SpeakerTable.ID + " = " + id,
        null, null, null, null);

To explain a bit: the first argument, String tableName, can safely take "table1, table2" as well; the second argument takes a String array of column names, so I concatenated the two projections of the two tables; and finally, I put the WHERE clause into the String selection argument.

You can see the code for all database operations in the Android app here: https://github.com/fossasia/open-event-android/blob/master/android/app/src/main/java/org/fossasia/openevent/dbutils/DatabaseOperations.java
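
For context, here is a rough sketch of what the prefixed projections and the Utils.concat() helper used above could look like. The actual classes in the app may be defined differently, so treat this as illustrative.

// Illustrative sketch of prefixed column names and a projection-concatenation helper.
class SpeakerTable {
    static final String TABLE_NAME = "speaker";
    static final String ID = "speaker_id";        // prefixed with the table name
    static final String NAME = "speaker_name";
    static final String[] PROJECTION = { ID, NAME };
}

class Utils {
    // Joins two projection arrays so they can be passed as the "columns"
    // argument of SQLiteDatabase.query().
    static String[] concat(String[] first, String[] second) {
        String[] result = new String[first.length + second.length];
        System.arraycopy(first, 0, result, 0, first.length);
        System.arraycopy(second, 0, result, first.length, second.length);
        return result;
    }
}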

Continue Reading: Doing a table join in Android without using rawQuery

Getting code coverage in a Nodejs project using Travis and CodeCov

We had set up unit tests on the webapp generator using mocha and chai, as I had blogged before. But we also need coverage reports for each code commit and for the overall state of the repo. Since the project is hosted on GitHub, Travis comes to our rescue. As you can see from our .travis.yml file, we already had Travis running to check builds and to deploy to Heroku.

Now, to enable Codecov, simply go to http://codecov.io and enable your repository (you have to log in with GitHub to see your GitHub repos). Once you do, your dashboard should be visible like this: https://codecov.io/github/fossasia/open-event-webapp

We use istanbul to get code coverage. To try it out, just run

istanbul cover _mocha

in the root of your project (where the /test/ folder is). That should generate a folder called coverage or lcov. Codecov can read lcov reports. They have provided a bash file which can be run to automatically upload coverage reports. You can run it like this:

bash <(curl -s https://codecov.io/bash)

Now go back to your Codecov dashboard, and your coverage report should show up. If all is well, we can integrate this with Travis so that it happens on every code push. Add this to your .travis.yml file:

script:
  - istanbul cover _mocha
after_success:
  - bash <(curl -s https://codecov.io/bash)

This will ensure that on each push, we run coverage first, and if the build is successful, we push the result to Codecov. We can then see coverage file by file, and line by line within each file.

Continue Reading: Getting code coverage in a Nodejs project using Travis and CodeCov

Motion in android

Earlier this year I attended a talk where the speaker introduced us to meaningful motion in Android apps and convinced us to use it in our apps as well. Motion came in with Material Design; actually it did not really arrive with it, but it became popular with Material Design, and since Google has added the same kind of motion to their own apps, developers have started using it. I love motion: not only does it boost engagement, it's instantly noticeable. Think of the apps you use that feature motion design and how pleasing, satisfying, fluent and natural they feel to experience, e.g. Zomato, Play Music etc. Now think of some apps that don't use any kind of motion and you'll realise they look a bit boring, and as a user you will always prefer apps with some kind of motion.

Touch

Firstly, let's look at feedback on touch. It helps communicate to the user, in a visual form, that some interaction has been made. But keep in mind that this animation should be just enough for them to gain clarity and encourage further exploration, without distracting them. For adding backgrounds you can use the following:

?android:attr/selectableItemBackground — shows a ripple effect within the bounds of the view.
?android:attr/selectableItemBackgroundBorderless — shows a ripple effect extending beyond the bounds of the view.

View Property Animator

Introduced in API 12, this allows us to perform animated operations (in parallel) on a number of view properties using a single Animator instance. Some of the methods that can be chained are as follows:

alpha() — set the alpha value to be animated to
scaleX() & scaleY() — scale the view on its X and/or Y axis
translationZ() — translate the view on its Z axis
setDuration() — set the duration of the animation
setStartDelay() — set the delay on the animation
setInterpolator() — set the animation interpolator
setListener() — set a listener for when the animation starts, ends, repeats or is cancelled

Now let's write some code on how to do this on a button, for example:

mButton.animate().alpha(1f)
        .scaleX(1f)
        .scaleY(1f)
        .translationZ(10f)
        .setInterpolator(new FastOutSlowInInterpolator())
        .setStartDelay(200)
        .setListener(new Animator.AnimatorListener() {
            @Override
            public void onAnimationStart(Animator animation) { }

            @Override
            public void onAnimationEnd(Animator animation) { }

            @Override
            public void onAnimationCancel(Animator animation) { }

            @Override
            public void onAnimationRepeat(Animator animation) { }
        })
        .start();

Note: use the ViewCompat class to implement the ViewPropertyAnimator from Android API version 4 and up.

Object Animator

Similar to the ViewPropertyAnimator, the ObjectAnimator allows us to perform animations on various properties of the target view (both in code and in XML resource files). However, there are a couple of differences: the ObjectAnimator only allows animations on a single property per instance, e.g. scale X followed by scale Y. However, it allows animations on a custom Property, e.g. a view's foreground colour. Here we need to set the evaluator, set the delay and call start():

private void animateForegroundColor(@ColorInt final int targetColor) {
    ObjectAnimator animator = ObjectAnimator.ofInt(YOUR_VIEW, FOREGROUND_COLOR, Color.TRANSPARENT, targetColor);
    animator.setEvaluator(new ArgbEvaluator());
    animator.setStartDelay(DELAY_COLOR_CHANGE);
    animator.start();
}

Interpolators

An Interpolator can be used to define the rate of change for an animation, meaning the speed,…
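
The ObjectAnimator example above uses a FOREGROUND_COLOR custom property that the excerpt does not define. One possible way to define such a property is sketched below, assuming YOUR_VIEW is a FrameLayout whose foreground drawable has previously been set to a ColorDrawable; this is an illustrative sketch, not the original talk's or app's code.

// Illustrative: a custom android.util.Property animating a FrameLayout's foreground colour.
// Assumes the foreground was set to a ColorDrawable beforehand.
static final Property<FrameLayout, Integer> FOREGROUND_COLOR =
        new Property<FrameLayout, Integer>(Integer.class, "foregroundColor") {
            @Override
            public Integer get(FrameLayout view) {
                return ((ColorDrawable) view.getForeground()).getColor();
            }

            @Override
            public void set(FrameLayout view, Integer color) {
                view.setForeground(new ColorDrawable(color));
            }
        };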

Continue Reading: Motion in android

File upload progress in a Node app using Socket.io

If you look at the webapp generator, you'll see that there is an option to upload a zip file containing the event data. We wanted to give the user a visual cue during the upload, showing how much of the file has been uploaded. Here, we upload the file and give the "start generating" command via socket.io events instead of POST requests.

To observe file upload progress on a socket (when sending a file as a Buffer), there is an awesome node module available called socketio-file-upload. In our webapp you can see we implemented it on the frontend here in form.js and on the backend here in app.js.

Basically, on the backend you should add the socketio-file-upload module as a middleware to express:

var siofu = require("socketio-file-upload");
var app = express()
    .use(siofu.router)
    .listen(8000);

After a socket is opened, set up the upload directory and start listening for uploads:

io.on("connection", function(socket) {
    var uploader = new siofu();
    uploader.dir = "/path/to/save/uploads";
    uploader.listen(socket);
});

On the frontend, we'll listen for an input change on a file input element whose id is siofu_input:

var socket = io.connect();
var uploader = new SocketIOFileUpload(socket);
uploader.listenOnInput(document.getElementById("siofu_input"));

One thing to note here is that if you observe the percentage of the upload on the frontend, it'll give you misleading values. The correct values of how much data has actually been transferred can be found on the backend. So observe the progress on the backend, and send the percentage to the frontend using the same socket:

uploader.on('progress', function(event) {
    console.log(event.file.bytesLoaded / event.file.size);
    socket.emit('upload.progress', {
        percentage: (event.file.bytesLoaded / event.file.size) * 100
    });
});
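
The excerpt shows the backend emitting the 'upload.progress' event but not the client side receiving it. A minimal sketch of the matching frontend handler could look like this; the progress bar element id is an assumption made for illustration.

// Illustrative sketch of the client-side handler for the backend's 'upload.progress' event.
socket.on('upload.progress', function (data) {
    // Update a progress bar element with the percentage sent by the backend.
    var bar = document.getElementById('upload-progress-bar');
    bar.style.width = data.percentage + '%';
    bar.textContent = Math.round(data.percentage) + '%';
});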

Continue Reading: File upload progress in a Node app using Socket.io

Lambda expressions in Android

What are Lambda expressions?

Lambda expressions are one of the most important features added to Java 8. Prior to lambda expressions, implementing functional interfaces, i.e. interfaces with only one abstract method, was done using syntax with a lot of boilerplate code in it. In cases like this, what we are trying to do is pass a piece of functionality as an argument to a method, such as what happens when a button is clicked. Lambda expressions enable you to do just that, in a way that is much more compact and clear.

Syntax of Lambda Expressions

A lambda expression consists of the following:

A comma-separated list of formal parameters enclosed in parentheses. The data types of the parameters in a lambda expression can be omitted. Also, the parentheses can be omitted if there is only one parameter. For example:

TextView tView = (TextView) findViewById(R.id.tView);
tView.setOnLongClickListener(v -> System.out.println("Testing Long Click"));

The arrow token ->

A body which contains a single expression or a statement block. If a single expression is specified, the Java runtime evaluates the expression and then returns its value. To specify a statement block, enclose the statements in curly braces "{}".

Lambda Expressions in Android

To use lambda expressions and other Java 8 features in Android, you need to use the Jack toolchain. Open your module-level build.gradle file and add the following:

android {
    ...
    defaultConfig {
        ...
        jackOptions {
            enabled true
        }
    }
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

Sync your build.gradle file, and if you are having any issue with build tools, you may need to update buildToolsVersion in your build.gradle file to "24rc4" or just download the latest Android SDK Build-tools from the SDK Manager, under Tools (Preview channel).

Example

Adding a click listener to a button without a lambda expression:

Button button = (Button) findViewById(R.id.button);
button.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        Toast.makeText(v.getContext(), "Button clicked", Toast.LENGTH_LONG).show();
    }
});

With a lambda expression it is as simple as:

Button button = (Button) findViewById(R.id.button);
button.setOnClickListener(v -> Toast.makeText(v.getContext(), "Button clicked", Toast.LENGTH_LONG).show());

As we can see above, using lambda expressions makes implementing a functional interface clearer and more compact. Standard functional interfaces can be found in the java.util.function package (included in Java 8). These interfaces can be used as target types for lambda expressions and method references.

Credits: https://mayojava.github.io/android/java/using-java8-lambda-expressions-in-android/

Another way to have Java 8 features in your Android app is to use the RetroLambda plugin.
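
To illustrate the statement-block form described above, here is a small additional example (not from the original post): a Comparator implemented with a two-parameter lambda whose body is a block of statements.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class LambdaBlockExample {
    public static void main(String[] args) {
        List<String> tracks = new ArrayList<>(Arrays.asList("Android", "Web", "Hardware"));
        // Two parameters in parentheses, body as a statement block in curly braces.
        Collections.sort(tracks, (a, b) -> {
            int byLength = Integer.compare(a.length(), b.length());
            return byLength != 0 ? byLength : a.compareTo(b);
        });
        System.out.println(tracks); // prints [Web, Android, Hardware]
    }
}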

Continue Reading: Lambda expressions in Android