Implementing MVC on Engelsystem

Engelsystem is an MVC-based PHP application, but a 4th tier, Pages, had been introduced alongside the traditional MVC pattern. It seems to have everything an event manager could want.

The Model-View-Controller (MVC) pattern, originally formulated in the late 1970s, is a software architecture pattern built on the basis of keeping the presentation of data separate from the methods that interact with the data. In theory, a well-developed MVC system should allow a front-end developer and a back-end developer to work on the same system without interfering with, sharing, or editing files the other party is working on.

Like everything else in software engineering, it seems, the concept of Model-View-Controller was originally invented by Smalltalk programmers. More specifically, it was invented by one Smalltalk programmer, Trygve Reenskaug. Trygve maintains a page that explains the history of MVC in his own words.

Block Diagram for MVC

The components of an MVC pattern are explained as follows:

  1. MODEL: The Model is the name given to the permanent storage of the data used in the overall design.
    Models represent knowledge. A model could be a single object (rather uninteresting), or it could be some structure of objects.
  2. VIEW: The View is where data, requested from the Model, is viewed and its final output is determined. A view is a (visual) representation of its model. It would ordinarily highlight certain attributes of the model and suppress others. It is thus acting as a presentation filter.
    Traditionally in web apps built using MVC, the View is the part of the system where the HTML is generated and displayed. The View also ignites reactions from the user, who then goes on to interact with the Controller.
  3. CONTROLLER: The final component of the triad is the Controller. A controller is the link between a user and the system. It provides the user with input by arranging for relevant views to present themselves in appropriate places on the screen.
    Its job is to handle data that the user inputs or submits, and to update the Model accordingly. The Controller's lifeblood is the user; without user interactions, the Controller has no purpose. (See the sketch after this list.)
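
A minimal, framework-free sketch of the triad, with illustrative names (not taken from Engelsystem, which implements this in PHP):

class Model:
    """Permanent storage of the data (here just an in-memory list)."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def all(self):
        return list(self._items)


class View:
    """Presentation filter: decides how the model's data is shown."""
    def render(self, items):
        return "\n".join("- " + item for item in items)


class Controller:
    """Link between the user and the system: handles input, updates the Model."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def handle_input(self, user_input):
        self.model.add(user_input)                  # update the Model with user data
        return self.view.render(self.model.all())  # the View presents the result


controller = Controller(Model(), View())
print(controller.handle_input("register angel for a shift"))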

 

Even though MVC was originally designed for personal computing, it has been adapted and is widely being used by web developers due to its emphasis on separation of concerns, and thus indirectly, reusable code. The pattern encourages the development of modular systems, allowing developers to quickly update, add, or even remove functionality.


Initially, the files in Engelsystem were distributed across 4 tiers, Model, View, Controller, and Pages, which are listed as follows:

Pages:

admin_active.php
admin_arrive.php
admin_export.php
admin_free.php
admin_groups.php
admin_import.php
admin_log.php
admin_news.php
admin_questions.php
admin_rooms.php
admin_settings.php
admin_shifts.php
admin_user.php
guest_credits.php
guest_login.php
guest_start.php
guest_stats.php
user_atom.php
user_ical.php
user_messages.php
user_myshifts.php
user_news.php
user_questions.php
user_settings.php
user_shifts.php

Model:

AngelType_model.php
LogEntries_model.php
Message_model.php
NeededAngelTypes_model.php
Room_model.php
Settings_model.php
ShiftEntry_model.php
ShiftTypes_model.php
Shifts_model.php
UserAngelTypes_model.php
UserDriverLicenses_model.php
User_model.php

Controller:

angeltype_controller.php
rooms_controller.php
shifts_controller.php
shifttypes_controller.php
user_angeltypes_controller.php
user_driver_licenses_controller.php
user_controller.php

View:

AngelTypes_view.php
Questions_view.php
Rooms_view.php
ShiftEntry_view.php
ShiftTypes_view.php
Shifts_view.php
UserAngelTypes_view.php
UserDriverLicenses_view.php
User_view.php

 

There were 26 Pages files, which contained SQL queries along with the controller code. All these files were refactored into separate Controller and Model (which contains the SQL queries) files to implement a proper MVC pattern in Engelsystem.

 

We are developing new features for Engelsystem and will be applying this WordPress-like update system to Engelsystem in the upcoming weeks. Developers who are interested in contributing can work with us.

Development: https://github.com/fossasia/engelsystem
Issues/Bugs: https://github.com/fossasia/engelsystem/issues


Unit testing angular applications with jasmine


In the previous article I wrote about writing tests with Frisby. Frisby is helpful in writing tests for Angular applications that actively communicate with RESTful APIs. So, more or less, Frisby is helpful as an API testing tool.

The sTeam web interface actively talks to a REST API written in Pike, along with controllers that make the entire application a complete collaboration platform. So, in order to test the working ability of individual units of code, it was necessary to have unit tests written for the application.

What is unit testing really?
Unit testing is a mechanism for validating different segments or isolated parts of a code base: individual pieces of code are treated as units, and each unit is tested to ensure that it works properly on its own.
NOTE: It is always a good practice to write unit tests for your code base, in order to ensure that the individual segments as well as the entire code base work without flaws.

What is jasmine and how is it related?
Jasmine is a unit testing framework which is helpful in writing unit tests for applications; it is combined with a test runner called Karma in order to run/execute the tests so written. To put it simply: Jasmine is the test framework we are going to use, but to run the tests we must have a test runner that executes them and gives us their outcome.
NOTE: Jasmine has its own syntax and code conventions.

Is it really going to help?
Yes. Let me give an example to illustrate the effect of writing unit tests. Suppose we have an Angular application involving controllers, directives and so on. To confirm that communication is happening between all the components and that the process as a whole is not broken, we must ensure that earlier functionality in the chain of communication does not affect the components connected further along. And to validate that the code is not breaking, there has to be some proof supporting that claim. Writing unit tests gives us exactly that proof.

Syntax and style :


'use strict';

describe('controllers', function(){

  beforeEach(module('sTeam.controllers'));

  it('should ....', inject(function() {
    // here the test specs go
  }));

  it('should ....', inject(function() {
    // here the test specs go
  }));
});

Let’s go ahead and test a real directive


'use strict';

describe('directives', function() {

  beforeEach(module('sTeam.directives'));

  describe('fileModel', function() {
    it('must flag whether the file upload directive is working or not', function() {
      module(function($provide) {
        $provide.value('model', 'attrs.fileModel');
      });
      inject(function($compile, $rootScope) {
        // compile the directive's HTML against the root scope
        var element = $compile("the html goes here")($rootScope);
        $rootScope.$digest();
        // jqLite's text() reads the rendered output of the compiled element
        expect(element.text()).toEqual('something');
      });
    });
  });
});

Basically, in the above example we are testing a unit of the sTeam web interface: a file uploader. It is a directive written in order to upload files to the web interface.

The unit test uses a function called inject, which is responsible for injecting the dependencies ($compile and $rootScope) needed to compile the DOM code we wish to test, and then compiles the HTML. After compiling the HTML, we have a function called expect which checks whether the desired output is obtained in the DOM or not.

Source : fileModel.js

That's it folks,
Happy Hacking!!


Unit testing JSON files in assets folder of Android App

So here is the scenario: your Android app has a lot of JSON files in the assets folder that are used to load some data when it first runs.
You are writing some unit tests and want to make sure the integrity of the data in assets/*.json is preserved.

You'd assume that reading JSON files should not involve the Android runtime in any way, and that we should be able to read JSON files on the local JVM as well. But you'd be wrong. The JSONObject and JSONArray classes of Android are part of android.jar, and hence

 
JSONObject myJson = new JSONObject(someString);

The above code will not work when running unit tests on the local JVM.

Fortunately, our codebase was already using Google's Gson library to parse JSON, and that works on the local JVM too (because Gson is a core Java library, not specifically an Android library).

Now the second problem is that when running unit tests on the local JVM we do not have the getResources() or getAssets() functions.
So how do we retrieve a file from the assets folder?

What I found out (after a bit of trial and error and poking around with various dir paths) is that the tests are run from the app folder (app being the Android application module – it is named app by default by Android Studio, though you might have named it differently).

So in the tests file you can define at the beginning

    public static final String  ASSET_BASE_PATH = "../app/src/main/assets/";

And also create the following helper function

    public String readJsonFile (String filename) throws IOException {
        BufferedReader br = new BufferedReader(new InputStreamReader(new FileInputStream(ASSET_BASE_PATH + filename)));
        StringBuilder sb = new StringBuilder();
        String line = br.readLine();
        while (line != null) {
            sb.append(line);
            line = br.readLine();
        }
        br.close(); // close the reader so the file handle is not leaked

        return sb.toString();
    }

Now wherever you need this JSON data you can just do the following

        Gson gson = new GsonBuilder().create();
        events = gson.fromJson(readJsonFile("events.json"),
                Event.EventList.class);
        eventDatesList = gson.fromJson(readJsonFile("eventDates.json"), EventDates.EventDatesList.class);

Autocomplete Address Form using Google Map API

Google Maps is one of the most widely used Google APIs, as most websites use Google Maps for showing an address's location. For a static address it's pretty simple: all you need to do is mention the address, and the map will show the nearest location. The problem arrives when the address is entered dynamically by the user. Suppose, for an event on the event organizer server, one enters the location. The main component used while taking the location input is Google Autocomplete. But we went a step further and parsed the entire address into city, state, country, etc., and allowed the user to input those details as well, which gave them the nearest location marked on the map if autocomplete couldn't find the address.

Autocomplete Location


As we can see, in the above input box we get suggestions from Google Maps on entering the first few letters of our address. For this we need the API https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&libraries=places&callback=initMap. You can find example code of how to do this here.

After this is done, what we wanted was not just to include this address, but to provide a form for filling in the entire address in case some parts were missing from it. The event that the autocomplete listens for is "place_changed", so once we click on one of the options, this event is triggered. Once the event is triggered, we use autocomplete.getPlace() to get the complete address JSON. The JSON looks something like this:


Now what we do is create a form with input fields having IDs matching the address components we require (e.g., country, administrative_area_level_1, locality, etc.). After that we select the long_name or the short_name from this JSON and put the value in the corresponding input field with the matching ID. The code for the process after getting the JSON is elaborated here.

Editing Address Form

After doing this it looks something like this:

However, now the important part is to show the map according to these fields. Also, every time we update a field, the map should be updated. For this we use a hack: instead of removing the initial location field completely, we hide it but keep its binding to autocomplete intact. As a result, the map is shown when we select a particular address.

Now when we update the fields in the address form, we append the value of each field to the value in the initial location field. Though the field is hidden, it is still bound to autocomplete. As a result, as soon as we append something to the string contained in the field, the map gets updated, and the updated value gets stored to the DB. Thus, with every update of a field, the pointer is moved to the nearest location formed by appending all the data from the form.

After saving the location data to the DB, if we wish to edit it, we can get back the JSON by making the same request with the location value. We then get back the same address form, and the map is displayed accordingly.

Finally it looks something like this:



Testing Hero – II

I continued last week's work on improving the testing framework and adding more test cases. While writing the test cases for create, I had to write a separate test case for each kind of object. This caused a lot of repetition of code. Thus the first aim for the week was to design a mechanism for writing generalized test cases, so that we can have an array of objects, loop through them, and pass each object to the same test case.

https://github.com/societyserver/sTeam/issues/113

https://github.com/societyserver/sTeam/pull/114

Right now the structure has a central script called test.pike which imports various other scripts containing the test cases. Let us take one of these scripts, say move.pike. I wanted to write a generalized test case which performs the same action on various objects, so I created one more file containing this generalized test case and imported it into one of the test cases in move.pike. This test case in move.pike is responsible for enumerating the various kinds of objects, sending them to the generalized test case, collecting the output, and then sending the result for the entire test to the central test.pike. I then went ahead and implemented this model for moving various objects to a non-existent location and for creating various kinds of objects, and the model seemed to work fairly well.

The journey was not so smooth; I had a few troubles on the way. In all the test cases I was deleting any objects that were created and used in the test. To delete an object I need a reference to it. This reference keeps getting dropped for some reason, and I get an error for calling the delete function on NULL, as the reference no longer exists. I tried to find the cause of this and solve the bug; however, I couldn't, and instead worked around the errors by using if statements to check that the object references are not null before calling functions on them. I continued my work on generalizing the test cases and wrote the general tests for all the test cases in the move and create test suites.

In the later part of the week I started working on some merging with my teammate Ajinkya Wavare. I designed more test cases for checking the creation of groups and users. Groups could be created using the generalized test case, but for users I had to add a special test case, as the process of creating a user is different from creating other objects. I ended my week by writing the test case for a long-standing error, i.e., the call to the get_environment function.

Output of the tests

Import/Export feature of Open Event – Challenges

We have developed a nice import/export feature as a part of our GSoC project Open Event. It allows a user to export an event and then import it back later.

An event contains data like tracks, sessions, microlocations etc. When I was developing the basic part of this feature, the challenge was how to export and then import back the same data. I needed a format that completely stores the data and is recognized by the current system. This is when I decided to use the APIs.

API documentation of Open Event project is at http://open-event.herokuapp.com/api/v2. We have a considerably rich API covering most aspects of the system. For the export, I adopted this very simple technique.

  1. Call the corresponding GET APIs (tracks, sessions etc) for a database model internally.
  2. Save the data in separate json files.
  3. Zip them all and done.
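
In code, this flow might look roughly like the following sketch, where api_get is a hypothetical stand-in for the internal GET calls and the endpoint list is an assumption, not the actual Open Event code:

import json
import zipfile

ENDPOINTS = ['tracks', 'sessions', 'speakers', 'sponsors', 'microlocations']

def api_get(path):
    # Hypothetical stand-in for the internal GET API call;
    # the real implementation returns the endpoint's JSON data.
    return {}

def export_event(event_id):
    with zipfile.ZipFile('event_%d.zip' % event_id, 'w') as archive:
        for endpoint in ENDPOINTS:
            # 1. call the corresponding GET API for the database model
            data = api_get('/api/v2/events/%d/%s' % (event_id, endpoint))
            # 2. save the data in a separate json file
            filename = '%s.json' % endpoint
            with open(filename, 'w') as f:
                json.dump(data, f)
            # 3. zip them all
            archive.write(filename)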

This was very simple and convenient. Then came the real challenge of importing the event from the exported data. As the exported data was nothing but JSON, we could have re-created the event by sending the data back as POST requests. But this was not so easy, because the data formats are not exactly the same for GET and POST requests.

Example –

Sessions GET –

{
	"speakers": [
		{
			"id": 1,
			"name": "Jay Sean"
		}
	],
	"track": {
		"id": 1,
		"name": "Warmups"
	}
}

Sessions POST –

{
	"speaker_ids": [1],
	"track_id": 1
}

So the exported data can only be imported once it has been converted to the POST form. Luckily, the only difference between the POST and GET APIs was in the related attributes, where a dictionary in GET is replaced with just the ID in POST/PUT. So when importing, I had to convert the dicts to their POST counterparts. For this, all I had to do was list all dict-type keys and extract the id key from them. I defined a global variable listing all dict keys, as follows, and then wrote a function to extract the IDs and convert the keys.

RELATED_FIELDS = {
    'sessions': [
        ('track', 'track_id', 'tracks'),
        ('speakers', 'speaker_ids', 'speakers'),
    ]
}
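
The conversion function itself could look something like this sketch (the real helper in Open Event may differ in its details):

def convert_to_post_form(data, resource):
    """Replace GET-style related dicts with the bare IDs POST expects."""
    for dict_key, id_key, _ in RELATED_FIELDS.get(resource, []):
        value = data.pop(dict_key, None)
        if isinstance(value, list):
            # e.g. "speakers": [{"id": 1, ...}] -> "speaker_ids": [1]
            data[id_key] = [item['id'] for item in value]
        elif isinstance(value, dict):
            # e.g. "track": {"id": 1, ...} -> "track_id": 1
            data[id_key] = value['id']
    return data

Calling convert_to_post_form(session, 'sessions') on the GET response shown earlier would produce exactly the POST form shown below it.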

Second challenge

Then I realized that there was an even tougher problem, and that was how to re-create the relations. In the above JSON, you must have noticed that a session can be related to speaker(s) and a track. These relations are managed using the IDs of the items. When an event is imported, the IDs are bound to change, so the old IDs will become outdated; i.e., a track which was at ID 62 when exported can be at ID 92 when it is imported. This will cause the relationships to break. To counter this problem, I did the following –

  1. Import items in a specific order, independent first
  2. Store a map of old IDs v/s new IDs.
  3. When dependent items are to be created, get new ID from the map and relate with it.

Let me explain the above –

The first step was to import/re-create the independent items first. Here the independent items are tracks and speakers, and the dependent item is the session. While creating the independent items, store their new IDs after creation: create a map of old IDs v/s new IDs and store it. This map holds the clue to what became what after the items were recreated from the JSON. The key final step is that when dependent items are to be created, find the related independent keys in their JSON using the RELATED_FIELDS listing defined above. Once they are found, extract their IDs and look up the new ID corresponding to each old ID. Link the new ID with the dependent item, and that is all.
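
A sketch of the idea, assuming exported holds the parsed JSON files and api_post is a hypothetical helper that creates an item via the POST API and returns its new ID:

id_map = {}  # ('tracks', old_id) -> new_id

# 1. re-create independent items first and remember their new IDs
for track in exported['tracks']:
    old_id = track.pop('id')
    id_map[('tracks', old_id)] = api_post('tracks', track)

# 2./3. for dependent items, rewrite old IDs to new ones before creating
for session in exported['sessions']:
    session.pop('id')
    session = convert_to_post_form(session, 'sessions')
    session['track_id'] = id_map[('tracks', session['track_id'])]
    # (speaker_ids would be remapped the same way)
    api_post('sessions', session)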

This post covers the main challenges I faced when developing the import/export feature and how I overcame them. I hope it will provide some help when you are dealing with similar problems.

 

{{ Repost from my personal blog http://aviaryan.in/blog/gsoc/open-event-import-export-algo.html }}


Unit Testing

There are many stories about unit testing. Developers sometimes say that they don't write tests because they write good quality code. Does that make sense, if no one is infallible?

At university, only a few teachers talk about unit testing, and they only show basic examples. They require us to write a few tests to finish a final project, but nobody really teaches us the importance of unit testing.

I have also always wondered what benefits it can bring. As time is a really important factor in our work, it often happens that we simply drop this part of the development process to get "more time", rather than spending time writing "stupid" tests. But now I know that it is a vicious circle.

Customers' requirements do not help us. They put high pressure on seeing visible results, not a few statistics about coverage status. None of them cares about some strange numbers. So, as I mentioned above, we usually focus on building new features and get rid of tests. It may seem to save time, but it doesn't.

In reality, tests save us a lot of time because we can identify and fix bugs very quickly. If a bug occurs because of someone's change, we don't have to spend long hours trying to figure out what is going on. That's why we need tests.

This is especially visible in huge open source projects. The FOSSASIA organization has about 200 contributors. In the Open Event project we have about 20 active developers who generate many lines of code every single day. Many of these lines change over and over again, and they interfere with each other.

Let me provide you with a simple example. In our team we have about 7 pull requests per day. As I mentioned above, we want to keep our code high quality and free of bugs, but without testing, identifying whether a pull request causes a bug is a very difficult task. Fortunately, Travis CI does this boring job for us. It is a great tool which takes our tests and runs them on every PR to check if bugs occur. It helps us notice bugs quickly and maintain our project very well.

What is unit testing?

Unit testing is a software development method in which the smallest testable parts of an application are tested individually.

Why do we need to write unit tests?

Let me list the arguments why unit testing is really important while developing a project.

  • To prove that our code works properly

If a developer adds another condition, the test checks whether the method returns correct results. You simply don't need to wonder if something is wrong with your code.

  • To reduce amount of bugs

It lets you know what input parameters a function should get and what results it should return. You simply don't write unused code.

  • To save development time

Developers don't waste time checking after every change whether the code still works correctly.

  • Unit tests help to understand software design
  • To provide quick feedback about the method you are testing
  • To help document the code

How to write a unit test in Python

In my work I write tests in Python. I am going to share my sample code with you now:

  • Import module unittest
  • Choose function to test
  • Write unit test
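
Applied to a trivial stand-in function, these steps could look like this minimal, generic example (add is just a placeholder, not Open Event code):

import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):

    def test_add_returns_sum(self):
        # the test proves that this unit behaves as expected
        self.assertEqual(add(2, 3), 5)

if __name__ == '__main__':
    unittest.main()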

Example OpenEvent test in Python

class TestPagesUrls(OpenEventTestCase):

    def setUp(self):
        self.app = Setup.create_app()

    def test_if_urls_exist(self):
        """Test all urls via GET method"""
        with app.test_request_context():
            for rule in app.url_map.iter_rules():
                if excluded_paths(rule):
                    status_code = self.app.get(request.url[:-1] + str(rule).replace('//', '/'),
                                               follow_redirects=True).status_code
                    self.assertTrue(status_code in [200, 302, 401])

I wanted to check that all views exist, but doing so one by one would require a lot of time. That's why I wondered how to avoid writing many similar tests. Finally, based on our list of routes, I was able to write one test which checks the response status code on every page.

If any response returns a status_code different from 200, 302 or 401, the test fails. This result means that something is wrong. Simple, isn't it? Try to test it manually… This one short test covers about 40 use cases…

This example shows the incredible value of unit tests! If a developer introduces a bug in a response, he receives an error that something is wrong with a view. Travis CI allows us to reject all faulty pull requests and merge only those which fulfill our quality requirements.

Fixing an error is one part, but finding a bug is an even harder task. The ability to detect bugs at an early stage of the development process reduces the cost of software.

 


Features and Controls of Pocket Science Lab

Prerequisite reading:

PSLab is equipped with an array of useful control and measurement tools. This tiny but powerful Pocket Science Lab enables you to perform various experiments and study a wide range of phenomena.

Some of the important applications of PSLab include a 4-channel oscilloscope, sine/triangle/square waveform generators, a frequency counter, a logic analyser and also several programmable current and voltage sources.

Add-on boards, both wired as well as wireless (NRF+MCU), enable measurement of physical parameters ranging from acceleration and angular velocity to luminous intensity and passive infrared. (Work in progress…)

As a reference for the digital instruments a 12-MHz crystal is chosen, and a 3.3V voltage regulator for the analogue instruments. The device is then calibrated against professional instruments in order to squeeze out maximum performance.

A Python-based communication library and experiment-specific PyQt4-based GUIs make PSLab a must-have tool for programmers, hobbyists, science and engineering teachers, and also students.

PSLab is interfaced with and powered by the USB port of the computer. For connecting external signals it has several input/output terminals, as shown in the figure.

New panel design for PSLab

Feature list for the acquisition and control :

  • The most important feature of PSLab is a 4-channel oscilloscope which can monitor analog inputs at a maximum of 2 million samples per second. It includes the usual controls such as triggering and gain selection, and uses Python-Scipy for curve fitting.
PSLab Oscilloscope

 

 

Waveform Generators

  • W1: 5Hz – 5KHz arbitrary waveform generator. Manual amplitude control up to +/-3Volts.
  • W2: 5Hz – 5KHz arbitrary waveform generator. Amplitude of +/-3Volts, attenuable via software.
  • PWM: There are four phase-correlated PWM outputs with a maximum frequency of 32MHz, 15-nanosecond duty cycle resolution, and phase difference control.

Measurement Functions

  • Frequency counter tested up to 16 MHz.
  • Capacitance measurement, pF to uF range.
  • PSLab has several 12-bit analog inputs (functioning as voltmeters) with programmable gains and maximum ranges varying from +/-5mV to +/-16V.

Voltage and Current Sources

  • 12-bit constant current source. Maximum current 3.3mA [subject to load resistance].
  • PSLab has three 12-bit programmable voltage sources: +/-3.3V, +/-5V, 0-3V (PV1, PV2, PV3).
Main Control Panel

Other useful tools

  • 4MHz, 4-channel logic analyzer with 15nS resolution.
  • SPI, I2C and UART outputs that can be configured and controlled entirely through Python functions. (Work in progress…)
  • On-board 2.4GHz transceiver for wireless data acquisition. (Work in progress…)
  • Graphical interfaces for the oscilloscope, logic analyzer, streaming data, wireless acquisition, and several experiments, developed using a common framework which drastically reduces the code required to incorporate control and plotting widgets.
  • PSLab also has space for an ESP-12 module for WiFi access with access point / station mode.

Screen-shots of GUI apps.

Advanced Controls with Oscilloscope
Wireless Sensors (Work in progress…)
Logic Analyzer

With all these features PSLab is taking good shape, and I see it as a potential tool that can change the way we teach and learn science. 🙂 🙂

 


The new AYABInterface module

One can create knitwork with knitting machines and the AYAB shield. To do so, the computer communicates with the machine. In the future, this communication shall be done with this new library, the AYABInterface.

Here are some design decisions:

Complete vs. Incomplete

The idea is to keep the AYAB separated from the knittingpattern format. The knittingpattern format is an incomplete format that can be extended for any use case. In contrast, the AYAB machine has a complete instruction set. The knittingpattern format is a means to transform such formats into different complete instruction sets. They should be convertible but not mixed.

Descriptive vs. Imperative

The idea is to be able to pass the format to the AYABInterface as a description. As much knowledge about the behavior as possible is encapsulated in the AYABInterface module. With this approach, we are less prone to intermixing concerns across the applications.

Responsibility Driven Design

I see these separated responsibilities:

  • A communication part focusing on the protocol to talk and the messages sent across the wire. It is an interpreter of the protocol, transforming it from bytes to objects.
  • A configuration that is passed to the interface
  • Different Machines types supported.
  • Actions the user shall perform.

Different Representations

I see these representations:

  • Commands are transferred across the wire. (PySerial)
  • For each movement of a carriage, the needles are used and put into a new position, B or D. (communication)
  • We would like to knit a list of rows with different colors. (interface)
    • Holes can be described by a list of orders in which meshes are moved to other locations, i.e. on needle 1 we can find mesh 1, on needle 2 we find mesh 2 first and then mesh 3, so mesh 2 and mesh 3 are knit together in the following step
  • The knitting pattern format.

Actions and Information for the User

The user should be informed about actions to take. These actions should not be in the form of text but rather in the form of an object that represents the action, e.g. ["move", "this carriage", "from right to left"]. This way, they can be adequately represented in the UI and translated somewhere central in the UI. A sketch of such an action object follows.
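
As an illustration only (the names are hypothetical, not the final AYABInterface API):

class MoveCarriage:
    """Represents the action ["move", <carriage>, <from side> to <to side>]."""

    def __init__(self, carriage, from_side, to_side):
        self.carriage = carriage
        self.from_side = from_side
        self.to_side = to_side

    def message(self, translate):
        # The UI passes in its own translation function, so text is
        # produced centrally in the UI, not inside this library.
        return translate("Move the {carriage} from {from_side} to {to_side}").format(
            carriage=self.carriage, from_side=self.from_side, to_side=self.to_side)

The UI could then render MoveCarriage("K carriage", "right", "left") as a widget or as translated text, as it sees fit.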

Summary

The new design separates concerns and allows testing. The bridge between the machine and the knittingpattern format are primitive, descriptive objects such as lists and integers.


ETag based caching for GET APIs

Many client applications require caching of data to work with low-bandwidth connections; many do it to provide faster loading times to the user. The webapp and the Android app had similar requirements. Previously they provided caching using a versions API that would keep track of any modifications made to events or services. The response of the API would be something like this:

[{
  "event_id": 6,
  "event_ver": 1,
  "id": 27,
  "microlocations_ver": 0,
  "session_ver": 4,
  "speakers_ver": 3,
  "sponsors_ver": 2,
  "tracks_ver": 3
}]

The number corresponding to "*_ver" tells the number of modifications made to that resource list. For instance, "tracks_ver": 3 means there were three revisions of the tracks inside the event (/events/:event_id/tracks). So when the client user starts the app, the app makes a request to the versions API, checks whether it corresponds to the local cache, and updates accordingly. This approach had some shortcomings, such as not being able to check for modifications of an individual resource: if a particular service (microlocation, track, etc.) resource list inside an event needed to be checked for updates, a call to the versions API was always needed.

ETag based caching for GET APIs

The concept of ETag (Entity Tag) based caching is simple. When a client requests (GET) a resource or a resource list, a hash of the resource/resource list is calculated at the server. This hash, called the ETag is sent with the response to the client, preferably as a header. The client then caches the response data and the ETag alongside the resource. Next time when the client makes a request at the same endpoint to fetch the resource, he sets an If-None-Match header in the request. This header contains the value of ETag the client saved before. The server grabs the resource requested by the client, calculates its hash and checks if it is equal to the value set for If-None-Match. If the value of the hash is same, then it means the resource has not changed, so a response with resource data is not needed. If it is different, then the server returns the response with resource data and a new ETag associated with that resource.
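
On the client side, the round trip could be used roughly as in this sketch (using the Python requests library; the in-memory cache layout is just for illustration):

import requests

cache = {}  # url -> (etag, data)

def fetch(url):
    headers = {}
    if url in cache:
        # send the saved ETag so the server can answer 304 if unchanged
        headers['If-None-Match'] = cache[url][0]
    resp = requests.get(url, headers=headers)
    if resp.status_code == 304:
        return cache[url][1]  # resource unchanged, reuse the cached copy
    # resource changed (or first fetch): store the new ETag and data
    cache[url] = (resp.headers.get('ETag'), resp.json())
    return cache[url][1]

data = fetch('http://127.0.0.1:8001/api/v2/events/1/tracks/1')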

Little modifications were needed to deal with ETags for GET requests. Flask-Restplus includes a Resource class that defines a resource. It is a pluggable view. Pluggable views need to define a dispatch_request method that returns the response.

import json
from hashlib import md5

from flask import request
from flask.ext.restplus import Resource as RestplusResource

# Custom Resource Class
class Resource(RestplusResource):
    def dispatch_request(self, *args, **kwargs):
        resp = super(Resource, self).dispatch_request(*args, **kwargs)

        # ETag checking.
        # Check only for GET requests, for now.
        if request.method == 'GET':
            old_etag = request.headers.get('If-None-Match', '')
            # Generate hash
            data = json.dumps(resp)
            new_etag = md5(data).hexdigest()

            if new_etag == old_etag:
                # Resource has not changed
                return '', 304
            else:
                # Resource has changed, send new ETag value
                return resp, 200, {'ETag': new_etag}

        return resp

To add support for ETags, I sub-classed the Resource class to extend the dispatch_request method. First, I grab the response for the arguments provided to RestplusResource's dispatch_request method. old_etag contains the value of the ETag set in the If-None-Match header. Then the hash of the resp response is calculated. If both ETags are equal, an empty response is returned with the 304 HTTP status (Not Modified). If they are not equal, a normal response is sent with the new value of the ETag.

[smg:~] $ curl -i http://127.0.0.1:8001/api/v2/events/1/tracks/1 
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 1061
ETag: ada4d057f76c54ce027aaf95a3dd436b
Server: Werkzeug/0.11.9 Python/2.7.6
Date: Thu, 21 Jul 2016 09:01:01 GMT

{"description": "string", "sessions": [{"id": 1, "title": "Fantastische Hardware Bauen & L\u00f6ten Lernen mit Mitch (TV-B-Gone) // Learn to Solder with Cool Kits!"}, {"id": 2, "title": "Postapokalyptischer Schmuck / Postapocalyptic Jewellery"}, {"id": 3, "title": "Query Service Wikidata "}, {"id": 4, "title": "Unabh\u00e4ngige eigene Internet-Suche in wenigen Schritten auf dem PC installieren"}, {"id": 5, "title": "Nitrokey Sicherheits-USB-Stick"}, {"id": 6, "title": "Heart Of Code - a hackspace for women* in Berlin"}, {"id": 7, "title": "Free Software Foundation Europe e.V."}, {"id": 8, "title": "TinyBoy Project - a 3D printer for education"}, {"id": 9, "title": "LED Matrix Display"}, {"id": 11, "title": "Schnittmuster am PC erstellen mit Valentina / Valentina Digital Pattern Design"}, {"id": 12, "title": "PC mit Gedanken steuern - Brain-Computer Interfaces"}, {"id": 14, "title": "Functional package management with GNU Guix"}], "color": "GREEN", "track_image_url": "http://website.com/item.ext", "location": "string", "id": 1, "name": "string"}

[smg:~] $ curl -i --header 'If-None-Match: ada4d057f76c54ce027aaf95a3dd436b' http://127.0.0.1:8001/api/v2/events/1/tracks/1 
HTTP/1.0 304 NOT MODIFIED
Connection: close
Server: Werkzeug/0.11.9 Python/2.7.6
Date: Thu, 21 Jul 2016 09:01:27 GMT

ETag based caching has a drawback. Since the hash is calculated for every GET request, it increases the load on the server. So if four clients request the same resource, the server calculates the hash four times. This can be solved by calculating and saving the ETag during creation and modification of resources, and then fetching and sending this stored ETag directly.
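
A sketch of that optimization; save_etag, get_etag and load_resource are hypothetical storage helpers, not part of the actual codebase:

import json
from hashlib import md5

def on_resource_saved(key, resource_data):
    # compute the hash once, when the resource is created or modified
    # (.encode() makes this work on Python 3 as well)
    etag = md5(json.dumps(resource_data).encode()).hexdigest()
    save_etag(key, etag)  # hypothetical: persist alongside the resource

def handle_get(key, request_headers):
    etag = get_etag(key)  # hypothetical: no hashing on the read path
    if request_headers.get('If-None-Match', '') == etag:
        return '', 304
    return load_resource(key), 200, {'ETag': etag}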
