Designing and optimising new invoice PDFs

The Open Event project has proven to be an excellent event management application with a growing user base. With the recent workflow refactors in the order process in open-event-frontend and the introduction of event invoices (to be rolled out this month as a work product), the open-event-server's invoices required a makeover. A ticket buyer is now required to give their billing information if the order comprises paid tickets; to accommodate this, as well as long billing addresses, optimisation was required.

Restructuring order invoices

The new order invoices use nested tables instead of the two-cell tables used previously. The advantage of this new design is that it accommodates long addresses and the corresponding changes in the billing information display.

    {% if order.is_billing_enabled %}
      <td style="text-align:center;">
        <table>
          <tr>
            <td>
              <strong>Company :</strong>
            </td>
            <td>
              <strong>{{ order.company }}</strong>
            </td>
          </tr>
          <tr>
            <td valign="top">
              <strong>Tax Info :</strong>
            </td>
            <td>
              <strong>{{ order.tax_business_info }}</strong>
            </td>
          </tr>
          <tr>
            <td valign="top">
              <strong>Address :</strong>
              …

Continue Reading: Designing and optimising new invoice PDFs

Migrating Event Ratings of Open Event with Stored Procedures

Many developers know about procedural languages and have used them in some form or another, but despite their power they remain an unpopular tool. These languages have many advantages (and few disadvantages), which we will learn about soon. Keeping the right amount of stored-procedure code in the database, with the help of these languages, can really enhance the speed and responsiveness of an application. This article will teach us how procedural languages can be utilized in database management and how they were used recently for a bug fix in Open Event Server.

PostgreSQL, like any other powerful relational database management system (RDBMS), provides the functionality to create and use stored procedures. Essentially, a stored procedure is database logic code which is saved on the database server. This code can be executed directly in the database, and can (and often is!) used to shift business logic from the application layer of a software system to the database layer. This simple shift often has many advantages, including faster execution (as code executes at a lower stack level) and better security.

When firing database queries from the application layer (i.e., the code that programmers write for storing programmable objects, performing business logic and so on), it often happens that parameters from the programming language itself are passed in to SQL, which then generates a complete SQL query. For example, here is what a novice query might look like:

    import psycopg2

    conn = psycopg2.connect(dbname="oevent", user="john", password="start")
    cur = conn.cursor()
    name = "Sam"
    cur.execute("SELECT * FROM users WHERE name='%s'" % name)  # DANGEROUS!

This code is extremely "exposed" and can be exploited for malicious access with a technique called SQL injection. This technique essentially "injects" malicious code via the passed parameters, like the variable name in the code above. With stored procedures handling the business logic, there is no room for SQL injection: they solve the problem by writing the query beforehand and treating the parameterized data as a separate entity (a runnable sketch of this style follows at the end of this excerpt). The pre-processed query within the corresponding stored procedure now looks like

    SELECT * FROM users WHERE name=?

The database driver sends the name of this stored procedure (or, in standard parameterised queries, just the query text itself) and a list of parameters as distinct, separate entities in the protocol. More details on how stored procedures enhance security can be found here.

After learning so much about the advantages of stored procedures (which are enabled by procedural languages), let's write one! Postgres supports multiple languages for writing stored procedures; here we will use PL/pgSQL, the most popular choice for Postgres. This procedural language, inspired (heavily) by Oracle's PL/SQL language, looks very similar to SQL. To use this procedural language, we have to first install it. In Postgres, procedural languages are installed per-database, not server-wide. We can use the popular Postgres client psql for this purpose, or simply the createlang command on the command line:

    $ createlang plpgsql yourdb

Now let's create a simple procedure that prints the corresponding grades…
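For contrast with the vulnerable snippet above, here is a minimal sketch of the same lookup using psycopg2's standard parameter binding, where the driver sends the query text and the values separately (the table and credentials are the ones from the excerpt; this is illustrative, not the post's actual fix):

    import psycopg2

    conn = psycopg2.connect(dbname="oevent", user="john", password="start")
    cur = conn.cursor()

    name = "Sam"
    # The %s placeholder is filled in by the driver, not by Python string
    # formatting, so the value of `name` can never be interpreted as SQL.
    cur.execute("SELECT * FROM users WHERE name = %s", (name,))
    rows = cur.fetchall()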

Continue Reading: Migrating Event Ratings of Open Event with Stored Procedures

Make Flask Fast and Reliable – Simple Steps

Flask is a microframework for Python, mostly used in web backend development. Several projects in FOSSASIA use Flask, such as Open Event Server, Query Server, and Badgeyay. Optimization is indeed one of the most important steps for a successful software product, so in this post a few simple tricks are shown which will make your Flask app faster and more reliable.

Flask-Compress

Flask-Compress is a Python package which provides transparent, lossless compression of your Flask application's responses. Enough with the theory, now let's understand the coding part: first install the module, then add the basic setup, and that's it! All it takes is just a few lines of code to make your Flask app optimized (a minimal sketch follows at the end of this excerpt). To know more about the module, check out the flask-compress module.

Requirements Directory

A common practice amongst FOSSASIA projects involves dividing requirements.txt files for development, testing, and production. When projects either use Travis CI for testing or are deployed to cloud services like Heroku, there are some modules which are not really required everywhere. For example: gunicorn is only required for deployment purposes and not for development. So how about a separate directory wherein different .txt files are created for different purposes? Below is the file directory structure followed for requirements in the badgeyay project. As you can see, different .txt files are created for different purposes:

dev.txt - for development
prod.txt - for production (i.e. deployment)
test.txt - for testing

Resources

Flask documentation: http://flask.pocoo.org/docs/
Optimizing Flask App: https://damyanon.net/flask-series-optimizations/
Badgeyay project: https://github.com/fossasia/badgeyay
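The setup snippets from the original post were lost in this excerpt, so here is a minimal sketch of the two steps described above, based on flask-compress's documented usage (install with pip install flask-compress; the route and app name are placeholders):

    from flask import Flask
    from flask_compress import Compress

    app = Flask(__name__)

    # Wrap the app so eligible responses are gzip-compressed transparently.
    Compress(app)

    @app.route('/')
    def index():
        # A large, repetitive text response like this benefits most from compression.
        return 'hello ' * 1000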

Continue Reading: Make Flask Fast and Reliable – Simple Steps

Service Workers in Loklak Search

Loklak search is a web application built on the latest web technologies, aiming to be a progressive web application. A PWA provides a rich, reliable, fast, and engaging web experience, and the web API which enables these qualities is the Service Worker. This blog post describes the basics of service workers and their usage in the Loklak Search application as a network proxy and a programmatic cache controller for static resources.

What are Service Workers?

In a formal definition, Matt Gaunt describes a service worker as a script that the browser runs in the background, which helps us enable modern web features. Most of these features involve intercepting network requests, caching, and responding from the cache in a more programmatic way, independent of native browser-based caching. Registering a service worker in an application is a really simple task; there is just one thing to keep in mind: service workers need an HTTPS connection to work, as the web standard is built around the secure protocol. To register a service worker:

    if ('serviceWorker' in navigator) {
      window.addEventListener('load', function() {
        navigator.serviceWorker.register('/sw.js').then(function(registration) {
          // Registration was successful
          console.log('ServiceWorker registration successful with scope: ', registration.scope);
        }, function(err) {
          // Registration failed :(
          console.log('ServiceWorker registration failed: ', err);
        });
      });
    }

This piece of JavaScript, if the browser supports it, registers the service worker defined by sw.js. The service worker then goes through its lifecycle, gets installed, and takes control of the page it is registered with.

What do service workers solve in Loklak Search?

In loklak search, service workers currently act as a network proxy, providing a caching mechanism for static resources. These static resources include all the bundled JS files and images. The bundled chunks are cached in the service worker's cache and are served from it when requested. The chunking of assets is an advantage for this caching strategy, as cache misses only happen for the chunks which were modified, and the unmodified parts of the application are served from the cache, so fewer assets need to be downloaded.

Service workers and Angular

As loklak search is an Angular application, we have used the @angular/service-worker library to implement service workers. The library is simple to integrate and works with the CLI; there are two steps to enable it. The first is to download the service worker package:

    npm install --save @angular/service-worker

And the second is to enable the service worker flag in .angular-cli.json:

    "apps": [
      {
        // Other configurations
        "serviceWorker": true
      }
    ]

Now when we generate the production build from the CLI, along with all the application chunks we get three files related to the service worker as well:

sw-register.bundle.js : This is a simple register script which is included in the index page to register the service worker.
worker-basic.js : This is…

Continue Reading: Service Workers in Loklak Search

Analyzing Production Build Size in Loklak Search

Loklak search being a web application, it is critical to keep the size of the application in check, to ensure that we are not transferring any non-essential bytes to the user, so that the application loads faster and we get a minimal first-paint time. This requires a mechanism for checking the size of the build files which are generated and served to the user. Alongside checking sizes, it is also critically important to analyze the distribution of the modules, along with their sizes, across the various chunks. In this blog post, I discuss the analysis of the application code of loklak search and the generated build files.

Importance of Analysis

Chunk size analysis is critical, as chunk size directly determines the performance of an application at any scale. The smaller the application, the shorter the load time, and the faster it becomes usable on the user's side. Time to first paint is the most important metric to keep in mind while analyzing any web application for performance; although first-paint time consists of many critical parts, from loading and parsing to layout and paint, the size of each chunk affects the time every one of these stages takes. Also, as we use third-party libraries and components, it becomes crucially important to inspect the impact their inclusion has on the size of the application.

Development Phase Checking

Angular CLI provides a clean mechanism to track and check the size of all the chunks at any time: these stats show the size of each chunk in the application in the terminal on every successful compilation, giving us a broad idea about which chunks to look at and address.

Deep Analysis using Webpack Bundle Analyzer

While generating the production build, the Angular CLI provides us with an option to generate statistics about the chunks, including the size and namespace of each module which is part of a chunk. These stats are generated directly by webpack at the time of bundling, code splitting, and tree shaking, and thus let us peek into the deeper levels of chunk creation in webpack to analyze the sizes of its various components. To generate the statistics we just need to enable the --stats-json flag while building:

    ng build --prod --aot --stats-json

This will generate the statistics file for the application in the /dist directory, alongside all the bundles. Now, for a visual and graphical analysis of these statistics, we can use a tool like webpack-bundle-analyzer. We can install webpack-bundle-analyzer via npm:

    npm install --save-dev webpack-bundle-analyzer

Now, to our package.json we can add a script; running this script will open up a web page which contains a graphical visualization of all the chunks built in the application:

    // package.json
    {
      …
      "scripts": {
        …
        "analyze":…

Continue Reading: Analyzing Production Build Size in Loklak Search

Route Based Chunking in Loklak Search

The loklak search application running at loklak.org grows in size as features are added, and this growth is linear. A traditional SPA tends to ship all the code required to run the application in one pass, as a single monolithic JavaScript file, along with the index.html. This approach is suitable for applications with few pages which are frequently used, where context switching between those logical pages happens at a high rate, almost immediately as the application loads. But generally, only a fraction of the code is accessed most frequently by most users, so as the application grows it does not make sense to include all the code for the entire application in the first request; there is always some code in the application, some views, which are rarely accessed. The loading of such parts of the application can be delayed until they are accessed. The Angular router provides an easy way to set up such a system, and it is used in the latest version of loklak search. The technique used here is to load content according to the route. This makes sure only the route which is viewed is loaded on the initial load, and subsequent loading happens at runtime, as and when required.

Old setup for baseline

Here are the compiled file sizes of the project without chunking the application. As we can see, the file sizes are huge, especially the vendor bundle, which is 5.4M, and the main bundle, which is about 0.5M. These are the files loaded on the first load, and due to their huge sizes the first paint of the application suffers to a great extent. These numbers will act as a baseline against which we will measure the impact of route-based chunking.

Setup for route based chunking

The setup for route-based chunking is fairly simple. This is our routing configuration: the modules which we want to lazy load are passed as the loadChildren attribute of the route. This attribute is a string; the part before the hash symbol is the path of the feature module file, and the part after it is the actual class name of the module in that file. This setup enables the router to load that module lazily when accessed by the user.

    const routes: Routes = [
      {
        path: '',
        pathMatch: 'full',
        loadChildren: './home/home.module#HomeModule',
        data: { preload: true }
      },
      {
        path: 'about',
        loadChildren: './about/about.module#AboutModule'
      },
      {
        path: 'contact',
        loadChildren: './contact/contact.module#ContactModule'
      },
      {
        path: 'search',
        loadChildren: './feed/feed.module#FeedModule',
        data: { preload: true }
      },
      {
        path: 'terms',
        loadChildren: './terms/terms.module#TermsModule'
      },
      {
        path: 'wall',
        loadChildren: './media-wall/media-wall.module#MediaWallModule'
      }
    ];

Preloading of some routes

As we can see, two of the configurations above have a data attribute on which a preload: true attribute is specified. Sometimes we need to preload some part of the application which we know we will access soon enough. So Angular…

Continue Reading: Route Based Chunking in Loklak Search

Lazy Loading Images in Loklak Search

In the last blog post, I discussed the basic Web APIs which help us create the lazy image loader component, and the structure used in the application to load images lazily. The core idea is to wrap the <img> element in a wrapper <app-lazy-img> element. This enables us to detect when the element is in the viewport and to load the image only if it is. In this blog post, I discuss the implementation details of how this is achieved in Loklak search in an optimized manner. The logic for lazy loading images is divided into a component and a corresponding service. The reason for splitting the logic this way will become clear as we discuss the core parts of the code for this feature.

Detecting the Intersection with Viewport

The lazy image service is a service for the lazy image component, registered by the modules which intend to use the app-lazy-img component. The task of this service is to register elements with the intersection observer and then emit an event when an element comes into the viewport, which the element can react to, using the other methods of the service to actually fetch the image.

    @Injectable()
    export class LazyImgService {
      private intersectionObserver: IntersectionObserver =
        new IntersectionObserver(this.observerCallback.bind(this), { rootMargin: '50% 50%' });

      private elementSubscriberMap: Map<Element, Subscriber<boolean>> =
        new Map<Element, Subscriber<boolean>>();
    }

The service has two member attributes: one is an IntersectionObserver, and the other is a Map which stores the references to the subscribers of this intersection observer. These references are later used to emit an event when the element comes into the viewport. The rootMargin of the intersection observer is set to 50%, which makes sure the callback fires while the element is still within 50% of the viewport's size away from it.

The observe public method of the service takes an element, passes it to the intersection observer to observe, and also puts the element in the subscriber map.

    public observe(element: Element): Observable<boolean> {
      const observable: Observable<boolean> = new Observable<boolean>(subscriber => {
        this.elementSubscriberMap.set(element, subscriber);
      });
      this.intersectionObserver.observe(element);
      return observable;
    }

Then there is the observer callback. This method receives as an argument all the objects intersecting the root of the observer; when the callback fires, we find all the intersecting elements and emit the intersection event, indicating that the element is near the viewport and that this is the time to load it.

    private observerCallback(entries: IntersectionObserverEntry[], observer: IntersectionObserver) {
      entries.forEach(entry => {
        if (this.elementSubscriberMap.has(entry.target)) {
          if (entry.intersectionRatio > 0) {
            const subscriber = this.elementSubscriberMap.get(entry.target);
            subscriber.next(true);
            this.elementSubscriberMap.delete(entry.target);
          }
        }
      });
    }

Now, our LazyImgComponent uses this service to register its element with the intersection observer and react later, when the event is emitted. The following method sets up the intersection observer to load the image: it subscribes to the event emitted by the service and eventually calls the loadImage method when the element intersects with the viewport.

    private setupIntersectionObserver() {
      this.lazyImgService
        .observe(this.elementRef.nativeElement)
        .subscribe(value =>…

Continue Reading: Lazy Loading Images in Loklak Search

Lazy loading images in Loklak Search

Loklak Search delivers media-rich content to its users. Most of the media delivered is in the form of images. In earlier versions of loklak search, these images were delivered to the user imperatively, irrespective of need: whether or not the user required an image, it was delivered, consuming bandwidth and slowing down the initial load of the app, as a large amount of data had to be fetched before the page was ready. Also, 404 errors were not being handled, giving the feel of a broken UI. So we required a mechanism to control this loading process and tap into its various aspects to handle the edge cases. This, on the whole, required a few new web standard APIs to enable the smooth working of this feature. These APIs are:

IntersectionObserver API
Fetch API

As the details of this feature are involved and build on new API standards, I have divided this into two posts: one with the basics of the above-mentioned APIs, the outline approach to the module and its subcomponents, and the difficulties we faced; the second post will mostly cover the details of the code which went into making this feature and how we tackled the corner cases along the path.

Overview

Our goal here was to create a component which can lazily load images and provide UI feedback to the user in case of a load or parse error. As mentioned above, the development of this feature depends on relatively new web standard APIs, so it is important to understand how these APIs function before seeing how they become an intrinsic part of our LazyImgComponent.

Intersection Observer

Historically, detecting the intersection of two elements on the web in a performant way has been next to impossible, because it required constantly polling the DOM for the ClientRect of the element we want to check for intersection; as these operations run on the main thread, this kind of polling has always been a source of bottlenecks in application performance. The Intersection Observer API is a web standard to detect when two DOM elements intersect with each other. It helps us configure a callback that runs whenever an element, called the target, intersects with another element (the root) or the viewport. Creating an intersection observer is a simple task; we just have to create a new instance of the observer.

    var observer = new IntersectionObserver(callback, options);

Here the callback is the function to run whenever some observed element intersects with the element supplied to the observer. This element is configured in the options object passed to the intersection observer:

    var options = {
      root: document.querySelector('#root'), // Defaults to viewport if null
      rootMargin: '0px', // The margin around root within which the callback is triggered
      threshold: 1.0
    };

The target element whose intersection is to be tested with the main element can be setup using…

Continue Reading: Lazy loading images in Loklak Search

PIL to convert type and quality of image

Image upload is an important part of the server. Images can arrive in different formats, and after certain JavaScript modifications on the client they can be converted to other formats. For example, when an image is uploaded after cropping in the Open Event organizer server, it is saved in PNG format. But a PNG can be more than 5 times larger than the equivalent JPEG. So when we upload a 150 KB image, the image finally reaching the server is around 1 MB, which is huge. So we need to decide on the server which image format to select in different cases, and how to convert between them.
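As an illustration of the kind of conversion the post describes, here is a minimal sketch using PIL (Pillow); the file names and quality value are placeholders, not taken from the post:

    from PIL import Image

    # Open the uploaded PNG and drop the alpha channel,
    # since JPEG does not support transparency.
    img = Image.open("uploaded.png").convert("RGB")

    # Re-encode as JPEG; quality=85 is a common size/fidelity trade-off.
    img.save("uploaded.jpg", "JPEG", quality=85)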


Continue Reading: PIL to convert type and quality of image

Optimizing page load time

The average size of a web page has been growing at an accelerating rate over the last few years. Last week, when the open-event webapp was tested, it was very slow to load because the page loaded 67.2 MB of data (which contained audio and image files). I have taken the following steps, which are good to read about to make any web application load faster.

1. Stop preloading of audio files

The HTML5 audio player loads all audio files on the page by default. To stop this, we have to change the code as follows:

    <audio controls preload="none">
      <source src="{{audio}}" type="audio/mpeg">
    </audio>

Setting the preload option to none helps us load audio only when it is clicked. Hence it decreases the number of HTTP calls and the page loading time. Previously, the network tab with audio files showed a 68.0 MB load and a 13.6-minute loading time. Now, with the preload="none" option, the network tab shows a 1.1 MB load and a 1-minute loading time. This was a huge improvement, but more optimizations can be done.

2. Using Compression Middleware

The Express compression middleware works like a charm by compressing the responses sent to the web page. It is simple to add:

    $ npm install compression

    var compression = require('compression')
    var express = require('express')

    var app = express()

    // compress all requests
    app.use(compression())

    // add all routes

This compresses the responses and decreases the page load time.

3. Reduce the number of HTTP requests

Another great way of speeding up your web pages is to simply reduce the number of files that need to be loaded.

Combine files

Combining multiple stylesheets into one file is a really useful way of eliminating extra HTTP requests. This strategy can also be adopted for your JavaScript files. A single larger CSS or JavaScript file will often load quicker because more time can be spent downloading the data rather than establishing multiple connections to a server.

After these optimizations, the webapp now loads in 8-15 seconds, with the DOM loaded in 2 seconds. The results of these optimizations are awesome. We can test page speed using various tools like PageSpeed, Speed Tracer, etc.

Continue Reading: Optimizing page load time