Adding Service Workers In Generated Event Websites In Open Event Webapp

Open Event Webapp Generator takes in event data, in the form of a JSON zip or an API endpoint, as input and outputs an event website. Since the generated event websites are static, we can cache their static assets to improve page-load time significantly. This has been made possible by the introduction of service workers in browsers. Service workers are event-driven scripts (written in JavaScript) that have access to domain-wide events, including network fetches. With the help of service workers, we can cache all static resources, which drastically reduces network requests and improves performance considerably.

A service worker acts like a proxy that sits between the browser and the network and intercepts requests. We listen for fetch events on the worker, and whenever a request for a resource is made, we intercept and process it first. Since our generated event sites are static, it makes no sense to fetch the same assets again and again. When a user first loads a page, the script gets activated and tries to cache all the static assets it can. On subsequent reloads, whenever the browser requests an already stored asset, the worker serves it directly from the local store instead of fetching it from the network. In the meantime, it also caches any new static assets so they won't have to be fetched from the network later. Thus the performance of the app improves with every reload, and the site becomes fully functional offline after a few page loads. The issue for this feature is here and the whole work can be seen here. To know more about the basic functioning and lifecycle of service workers, check out this excellent Google Developers article.

This blog mainly focuses on how we added service workers to the generated event websites. We create a new fallback HTML page, offline.html, to return to an offline user who requests a page that has not been cached yet.
Similarly, for images, we have a fallback image named avatar.png which is returned when the requested image is not present in the local store and the network is down. Since these assets are integral to the functioning of the service worker, we cache them in the installation step itself. The whole service worker file can be seen here.

```javascript
var urlsToCache = [
  './css/bootstrap.min.css',
  './offline.html',
  './images/avatar.png'
];

self.addEventListener('install', function(event) {
  event.waitUntil(
    caches.open(CACHE_NAME).then(function(cache) {
      return cache.addAll(urlsToCache);
    })
  );
});
```

All the other assets are cached lazily: only when they are requested do we fetch them from the network and store them in the local store. This way, we avoid caching a large number of assets at the install step. Caching many files in the install step is not recommended, because if any of the listed files fails to download and cache, the service worker won't be installed!

```javascript
self.addEventListener('fetch', function(event) {
  event.respondWith(caches.match(event.request).then(function(response) {
    // Cache hit - return response
    if (response) {
      return response;
    }
    // Fetch resource from internet and put…
```
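The cache-first flow described above can be sketched outside a browser. In this sketch, a plain `Map` stands in for the browser's Cache API and `fakeFetch` is a stub for a real network fetch; the function and variable names are illustrative, not from the actual service worker file.

```javascript
// A Map stands in for the Cache API so the logic can run outside a browser.
const cache = new Map();

// Counter lets us observe how many "network" requests were actually made.
let networkRequests = 0;

// Stand-in for a real network fetch.
async function fakeFetch(url) {
  networkRequests += 1;
  return `response for ${url}`;
}

async function cacheFirst(url) {
  // Cache hit: serve the stored response without touching the network.
  if (cache.has(url)) {
    return cache.get(url);
  }
  // Cache miss: fetch from the "network" and store a copy for next time.
  const response = await fakeFetch(url);
  cache.set(url, response);
  return response;
}
```

With this strategy, the second request for the same asset never reaches the network, which is exactly why the generated sites keep getting faster with every reload.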


Caching Elasticsearch Aggregation Results in loklak Server

To provide aggregated data for various classifiers, loklak uses Elasticsearch aggregations. Aggregated data says a lot more than a few instances from it can. But performing aggregations on each request can be very resource-consuming, so we needed to come up with a way to reduce this load. In this post, I will discuss how I came up with a caching model for the aggregated data from the Elasticsearch index.

Fields to Consider while Caching

At the classifier endpoint, aggregations can be requested based on the following fields:
- Classifier Name
- Classifier Classes
- Countries
- Start Date
- End Date

But to cache results, we can ignore cases where we just require a few classes or countries and store aggregations for all of them instead. So the fields that define which cache to look for are:
- Classifier Name
- Start Date
- End Date

Type of Cache

The data structure used for caching is Java's HashMap. It maps a string key, built as described below, to a wrapper object discussed in a later section.

Key

The key is built using the fields mentioned previously:

```java
private static String getKey(String index, String classifier, String sinceDate, String untilDate) {
    return index + "::::"
        + classifier + "::::"
        + (sinceDate == null ? "" : sinceDate) + "::::"
        + (untilDate == null ? "" : untilDate);
}
```

[SOURCE]

In this way, we can handle requests where a user asks for every class there is without running the expensive aggregation job every time, because the key for such requests will be the same, as we are not considering country and class for this purpose.

Value

The object used as the value in the HashMap is a wrapper containing the following:
- json - a JSONObject containing the actual data.
- expiry - the expiry time of the object in milliseconds.

```java
class JSONObjectWrapper {
    private JSONObject json;
    private long expiry;
    ...
}
```

Timeout

The timeout associated with a cache entry is defined in the configuration file of the project as "classifierservlet.cache.timeout".
It defaults to 5 minutes and is used to set the expiry of a cached JSONObject:

```java
class JSONObjectWrapper {
    ...
    private static long timeout = DAO.getConfig("classifierservlet.cache.timeout", 300000);

    JSONObjectWrapper(JSONObject json) {
        this.json = json;
        this.expiry = System.currentTimeMillis() + timeout;
    }
    ...
}
```

Cache Hit

To search in the cache, the previously mentioned string key is composed from the parameters requested by the user. Checking for a cache hit can then be done in the following manner:

```java
String key = getKey(index, classifier, sinceDate, untilDate);
if (cacheMap.keySet().contains(key)) {
    JSONObjectWrapper jw = cacheMap.get(key);
    if (!jw.isExpired()) {
        // Do something with jw
    }
}
// Calculate the aggregations
...
```

But since jw here would contain all the data, we would need to filter out the classes and countries which are not needed.

Filtering results

To filter out the parts which do not contain the information requested by the user, we can perform a simple pass and exclude the results that are not needed. Since…
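The `isExpired()` check used above is not shown in the excerpt. A minimal sketch of such an expiring cache follows, with a plain `String` payload standing in for the JSONObject and the names `ExpiringEntry`, `CacheSketch`, and `lookup` being hypothetical, not loklak's actual identifiers.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an expiring cache entry, modeled on the wrapper described above.
class ExpiringEntry {
    final String payload;
    final long expiry; // absolute expiry time in milliseconds

    ExpiringEntry(String payload, long timeoutMillis) {
        this.payload = payload;
        this.expiry = System.currentTimeMillis() + timeoutMillis;
    }

    boolean isExpired() {
        return System.currentTimeMillis() > this.expiry;
    }
}

public class CacheSketch {
    static final Map<String, ExpiringEntry> cacheMap = new HashMap<>();

    // Key composition mirrors getKey() from the post.
    static String getKey(String index, String classifier,
                         String sinceDate, String untilDate) {
        return index + "::::" + classifier + "::::"
                + (sinceDate == null ? "" : sinceDate) + "::::"
                + (untilDate == null ? "" : untilDate);
    }

    // Treat expired entries the same as misses, so stale data is never served.
    static String lookup(String key) {
        ExpiringEntry entry = cacheMap.get(key);
        if (entry == null || entry.isExpired()) {
            return null;
        }
        return entry.payload;
    }
}
```

A miss (or an expired hit) would then fall through to the aggregation job, whose result gets put back into the map under the same key.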


Custom UI Implementation for Web Search and RSS actions in SUSI iOS Using Kingfisher for Image Caching

The SUSI Server is an AI-powered server capable of responding with intelligent answers to user queries. The responses from the server are composed of actions; two of these action types are websearch and RSS. These actions, as the names suggest, respond to queries with search results from the web, which are rendered in the clients. In order to use these action types and display them in the SUSI iOS client, we first need to parse the response looking for these action types and then create a custom UI to display them.

To start with, we send the query to the server to receive an intelligent response. This response is parsed into the different action types supported by the server and saved into the relevant objects. Here, we check the action types by looping through the answers array containing the actions and, based on that, save the data for that action.

```swift
if type == ActionType.rss.rawValue {
    message.actionType = ActionType.rss.rawValue
    message.rssData = RSSAction(data: data, actionObject: action)
} else if type == ActionType.websearch.rawValue {
    message.actionType = ActionType.websearch.rawValue
    message.message = action[Client.ChatKeys.Query] as? String ?? ""
}
```

Here, we parsed the response from the server, looked for the rss and websearch action types, and saved the data we received for each action type in its own object. Next, when a message object is created, we insert it into the dataSource by appending it and use the `insertItems(at: [IndexPath])` method of the collection view to insert it into the views at a particular index. Before adding them, we need to create a custom UI for them. This UI will consist of a collection view, scrollable in the horizontal direction, inside a collection view cell.
To start with this, we create a new class called `WebsearchCollectionView`, which will be a `UIView` containing a `UICollectionView`. We add the collection view to the UIView by overriding the `init` method: declare a collection view using a flow layout with the scroll direction set to `horizontal`, hide the scroll indicators, and assign the delegate and dataSource to `self`.

Now, to populate this collection view, we need to specify the number of items it will show. For this, we make use of the `message` variable declared earlier: we use `rssData` in case of the RSS action and `websearchData` in case of the websearch action. To specify the number of cells, we use the method below, which returns the number of rss or websearch action objects and defaults to 0.

```swift
func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
    if let rssData = message?.rssData {
        return rssData.count
    } else if let webData = message?.websearchData {
        return webData.count
    }
    return 0
}
```

We display the title, description, and image for each object, for which we need to create a UI for the cells. Let’s start by creating…
