AJAX Image upload at Wizard

I recently had a requirement to implement image upload through AJAX. The Event Background at the Wizard was one of the fields. One big advantage of this is immediate upload: the user doesn’t have to submit the entire form just to save the image. Plus, less data has to be sent when the form itself is finally submitted.

Client Side

On the client side, the image was first cropped to a specific resolution and then uploaded with the form. We were using Croppie for cropping images. The requirement wasn’t uploading the file through AJAX, but uploading the output of Croppie.

The element bound to Croppie was inside a bootstrap modal.


<!-- Bootstrap Modal -->
    <div class="modal-body">
        <div id="upload-cropper">

        </div>
    </div>
    <div class="modal-footer">
        <div class="btn-group">
            <button type="button" class="btn btn-default" data-dismiss="modal">Cancel</button>
            <button type="button" id="save-crop" class="btn btn-success">Done</button>
        </div>
    </div>

$uploadCropper = $('#upload-cropper').croppie({
    viewport: {
        width: 490,
        height: 245,
        type: 'square'
    },
    boundary: {
        width: 508,
        height: 350
    }
});

The resulting image from Croppie can be taken after the user clicks the #save-crop button. The result can be saved in an input field and a preview of the cropped image shown to the user. To save the image, the user then submits the form; the #background_url field contains the base64-encoded data that is processed at the server.

$("#save-crop").click(function () {
    $uploadCropper.croppie('result', {
        type: 'canvas',
        size: 'original'
    }).then(function (resp) {
        $('#cropper-modal').modal('hide')
        $("#background_url").val(resp);
        $("#image-view-group").show().find("img").attr("src", resp);
        $("#image-upload-group").hide();
    });
});

To upload the image through AJAX I had to make a call to the proper endpoint (created below; see Back-end) with the response data. So when the user clicks the #save-crop button, a request is made to the server, and only if the upload succeeds is the preview of the image shown.

Back-end: Upload endpoint

Let’s assume /files/bgimage is the endpoint where the background image data needs to be uploaded.


@expose('/files/bgimage', methods=('POST','DELETE'))
def bgimage_upload(self, event_id):
    if request.method == 'POST':
        background_image = request.form['bgimage']
        if background_image:
            background_file = uploaded_file(file_content=background_image)
            background_url = upload(
                background_file,
                UPLOAD_PATHS['event']['background_url'].format(
                    event_id=event_id
                ))
            event = DataGetter.get_event(event_id)
            event.background_url = background_url
            save_to_db(event)
            return jsonify({'status': 'ok', 'background_url': background_url})
        else:
            return jsonify({'status': 'no bgimage'})
    elif request.method == 'DELETE':
        event = DataGetter.get_event(event_id)
        event.background_url = ''
        save_to_db(event)
        return jsonify({'status': 'ok'})

The endpoint is defined for POST and DELETE methods. A request with the DELETE method removes the background_url for the Event. An AJAX request with the data should be simple in jQuery:


/* `then` method of Croppie */
.then(function (resp) {
    $('#cropper-modal').modal('hide')
    $("#event-image-upload-label").html(loadingImage);
    $.ajax({
        type: 'POST',
        url: "/files/bgimage",
        data: {bgimage: resp},
        dataType: 'json'
    }).done(function(data) {
        console.log(data);
        $("#image-view-group").show().find("img").attr("src", resp);
        $("#image-upload-group").hide();
    }).fail(function(data) {
        alert("Something went wrong. Please try again.");
    });
});

The AJAX request is sent after Croppie has cropped the image (then method).
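The resp that reaches the server is the base64-encoded image data, i.e. a data URI such as data:image/png;base64,.... As a rough sketch of how such a string can be decoded into file bytes on the server (the helper name below is made up for illustration; the project's own uploaded_file helper may work differently):

import base64

def decode_image_data_uri(data_uri):
    """Split a 'data:image/png;base64,...' string into raw bytes and an extension."""
    header, encoded = data_uri.split(',', 1)
    extension = header.split(';')[0].split('/')[-1]  # e.g. 'png'
    return base64.b64decode(encoded), extension

# Usage inside the endpoint:
# image_bytes, ext = decode_image_data_uri(request.form['bgimage'])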


Multiple Tickets: Back-end

In my previous post I talked about the approach for the Multiple Tickets feature’s user interface [Link]. In this post I’ll discuss the Flask back-end used for saving multiple tickets.

HTML Fields Naming

Since the number of Tickets a user creates is unknown to the server, ticket details needed to be sent as arrays of values. The server then accepts the lists of values and iterates over them. To send data as an array, the field naming has to include brackets. Below are some input fields used in tickets:

<tr>
    <td>
        <input type="hidden" name="tickets[type]">
        <input type="text" name="tickets[name]" class="form-control" placeholder="Ticket Name" required="required" data-uniqueticket="true">
        <div class="help-block with-errors"></div>
    </td>
    <td>
        <input type="number" min="0" name="tickets[price]" class="form-control"  placeholder="$" value="">
    </td>
    <td>
        <input type="number" min="0" name="tickets[quantity]" class="form-control" placeholder="100" value="{{ quantity }}">
    </td>
    <!-- Other fields -->
</tr>

At the server

When the POST request reaches the server, any of the above fields (say tickets[name]) is available as a list. The Flask Request object includes a form dictionary that contains all the POST parameters sent with the request. This dictionary is an ImmutableMultiDict object, which has a getlist method to get an array of elements.

For instance in our case, we can get tickets[name] using:

@expose('/create', methods=('POST', 'GET'))
def create_view(self):
    if request.method == 'POST':
        ticket_names = request.form.getlist('tickets[name]')

    # other stuff

The ticket_names variable would contain the list of all the Ticket names sent with the request. So for example if the user created three tickets at the client-side, the form would possibly look like:

<form method="post">
  <!-- Ticket One -->
  <input type="text" name="tickets[name]" class="form-control" value="Ticket Name One">
  <!-- Ticket Two -->
  <input type="text" name="tickets[name]" class="form-control" value="Ticket Name Two">
  <!-- Ticket Three -->
  <input type="text" name="tickets[name]" class="form-control" value="Ticket Name Three">

</form>

After a successful POST request to the server, ticket_names should contain ['Ticket Name One', 'Ticket Name Two', 'Ticket Name Three'].

Other fields, like tickets[type], tickets[price], etc. can all be extracted from the Request object.
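Since the lists come back in the same order as the rows in the form, they can be zipped together to rebuild one dictionary per ticket. A minimal sketch, placed inside the POST branch of the view above and assuming every row renders all four inputs as in the markup shown earlier:

ticket_names = request.form.getlist('tickets[name]')
ticket_types = request.form.getlist('tickets[type]')
ticket_prices = request.form.getlist('tickets[price]')
ticket_quantities = request.form.getlist('tickets[quantity]')

tickets = []
for name, type_, price, quantity in zip(
        ticket_names, ticket_types, ticket_prices, ticket_quantities):
    tickets.append({
        'name': name,
        'type': type_,
        'price': float(price) if price else 0.0,
        'quantity': int(quantity) if quantity else 0,
    })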

Checkbox Fields

A problem arose when a checkbox field was needed for every ticket. In my case, a “Hide Ticket” option was needed to let the user decide if he wants the ticket to be shown at the public Events page.

[Screenshot: the "Hide Ticket" checkbox option for a ticket]

The problem with checkboxes is that if a checkbox is not selected, the POST parameters sent by the client will not contain that field at all. So if I define a checkbox input with the following naming convention and make a POST request to the server, the server will receive the blah[] parameter only if the checkbox had been checked.

<input type="checkbox" name="blah[]" >

This creates a problem for the “Hide ticket” checkboxes. For instance, if the user creates three tickets at the client-side with the checkboxes of the first and last tickets selected, the server gets an array of only two elements.

<form>
  <!-- Ticket One -->
  <input type="checkbox" name="tickets[hide]" checked>
  <!-- Ticket Two -->
  <input type="checkbox" name="tickets[hide]">
  <!-- Ticket Three -->
  <input type="checkbox" name="tickets[hide]" checked>

</form>
ticket_hide_opts = request.form.getlist('tickets[hide]')

ticket_hide_opts would be an array of length two, and there is no way to tell which ticket had its “Hide ticket” option checked. So for the hide checkbox field I had to define input elements with unique names to extract them at the server, as sketched below.
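A minimal sketch of that approach, using a hypothetical naming scheme (tickets[hide_<index>]; the project's actual field names may differ). Each checkbox carries its ticket index in its name, so the server only has to check whether the key is present:

ticket_names = request.form.getlist('tickets[name]')

hide_flags = []
for index in range(len(ticket_names)):
    # The key exists in the form data only if that checkbox was checked
    hide_flags.append('tickets[hide_{}]'.format(index) in request.form)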

There is also a hack to overcome the unchecked-checkbox problem. It is by using a hidden field with the same name as the checkbox. You can read about it here: http://www.alexandrejoseph.com/blog/2015-03-03-flask-unchecked-checkbox-value.html.


Multiple Tickets: User Interface

An Event can have multiple tickets for different purposes. For instance, an Arts Exhibition can have multiple Galleries. The Organizer might be interested in assigning a ticket (let’s assume paid) to each Gallery. The user can then buy tickets for the Galleries he wishes to attend. What Multiple Tickets really provide is exclusiveness. Let’s say Gallery1 has a smaller area than the others. Obviously the Organizer would want fewer people to be present there than at the other Galleries. To do this, he can create a separate ticket for Gallery1 and specify a shorter sales period. He can also reduce the maximum number of orders that a user can make (max_order). With a single ticket per event, this wouldn’t have been possible.

Tickets at Wizard

To handle multiple tickets at the wizard, proper naming of input tags was required. Since the number of tickets that can be created by the user was unknown to the server we had to send ticket field values as lists. Also at the client-side a way was required to let users create multiple tickets.

User Interface

A ticket can be of three types: Free, Paid and Donation. Out of these, only the Paid tickets need a Price. The Tickets holder could be a simple table, with every ticket being a table row. This became more complex afterwards, when more details about the ticket needed to be displayed. A ticket would then be two table rows with one of them (details) hidden.

Ticket holder can be a simple bootstrap table:

<table class="table tickets-table">
    <thead>
        <tr>
            <th>Ticket Name</th>
            <th>Price</th>
            <th>Quantity</th>
            <th>Options</th>
        </tr>
    </thead>
    <tbody>
      <!-- Ticket -->
      <tr>
        <!-- Main info -->
      </tr>
      <tr>
        <!-- More details (initially hidden) -->
      </tr>
      <!-- /Ticket -->
    </tbody>
</table>

To make ticket creation interactive, three buttons were needed to create the three types of tickets. The type name doesn’t necessarily have to be shown to the user; it can be conveyed through the Price column. For a Paid ticket, the Price input element is a number field. For Free and Donation tickets, a Price input element isn’t required; we can instead display an element showing one of the two types: Free or Donation.

Here’s the holder table with a Free Ticket and a Donation Ticket:

[Screenshot: the tickets table with a Free Ticket and a Donation Ticket]

Since only the Price field is changing in the three types of tickets, I decided to create a template ticket outside of the form and create a JavaScript function to create one of the tickets by cloning the template.

A Free Ticket with its edit options opened up. You can see other details about the ticket in the second table row.

[Screenshot: a Free Ticket with its edit options opened]

This is a simplified version of the template. I’ve removed common bootstrap elements (the grid system) as well as some other fields.

<div id="ticket-template">
<tr>
    <td>
        <input type="hidden" name="tickets[type]">
        <input type="text" name="tickets[name]" class="form-control" placeholder="Ticket Name" required="required" data-uniqueticket="true">
        <div class="help-block with-errors"></div>
    </td>
    <td>
        <!-- Ticket Price -->
    </td>
    <td>
        <input type="number" min="0" name="tickets[quantity]" class="form-control" placeholder="100" value="{{ quantity }}">
    </td>
    <td>
        <div class="btn-group">
            <a class="btn btn-info edit-ticket-button" data-toggle="tooltip" title="Settings">
                <i class="glyphicon glyphicon-cog"></i>
            </a>
            <a class="btn btn-info remove-ticket-button" data-toggle="tooltip" title="Remove">
                <i class="glyphicon glyphicon-trash"></i>
            </a>
        </div>
    </td>
</tr>
<tr>
    <td colspan="4">
        <div class="row" style="display: none;">
            <!-- Other fields including Description, Sales Start and End time,
              Min and Max Orders, etc.
            -->
        </div>
    </td>
</tr>
</div>

Like I said, the Price element of a ticket makes the type obvious to the user, so a type field does not need to be displayed. But the type field is required by the server; you can see it specified as a hidden input in the template.

The function to create a Ticket according to the type:

I’ve commented snippets to make it easy to understand.

function createTicket(type) {
    /* Clone ticket from template */
    var $tmpl = $("#ticket-template").children().clone();

    var $ticket = $($tmpl[0]).attr("id", "ticket_" + String(ticketsCount));
    var $ticketMore = $($tmpl[1]).attr("id", "ticket-more_" + String(ticketsCount));

    /* Bind datepicker and timepicker to dates and times */
    $ticketMore.find("input.date").datepicker();
    $ticketMore.find("input.time").timepicker({
        'showDuration': true,
        'timeFormat': 'H:i',
        'scrollDefault': 'now'
    });

    /* Bind iCheck to checkboxes */
    $ticketMore.find("input.checkbox.flat").iCheck({
        checkboxClass: 'icheckbox_flat-green',
        radioClass: 'iradio_flat-green'
    });

    /* Bind events to Edit (settings) and Remove buttons */
    var $ticketEdit = $ticket.find(".edit-ticket-button");
    $ticketEdit.tooltip();
    $ticketEdit.on("click", function () {
        $ticketMore.toggle("slow");
        $ticketMore.children().children(".row").slideToggle("slow");
    });

    var $ticketRemove = $ticket.find(".remove-ticket-button");
    $ticketRemove.tooltip();
    $ticketRemove.on("click", function () {
        var confirmRemove = confirm("Are you sure you want to remove the Ticket?");
        if (confirmRemove) {
            $ticket.remove();
            $ticketMore.remove();
        }
    });

    /* Set Ticket Type field */
    $ticket.children("td:nth-child(1)").children().first().val(type);

    /* Set Ticket Price field.
       The markup strings here are illustrative: a number input (named
       tickets[price]) for Paid tickets, and a plain label for Free/Donation. */
    var html = null;
    if (type === "free") {
        html = '<span class="ticket-price-type">Free</span>';
    } else if (type === "paid") {
        html = '<input type="number" min="0" name="tickets[price]" class="form-control" placeholder="$">';
    } else if (type === "donation") {
        html = '<span class="ticket-price-type">Donation</span>';
    }
    $ticket.children("td:nth-child(2)").html(html);

    /* Append ticket to table */
    $ticket.hide();
    $ticketMore.children().children(".row").hide();
    $ticketsTable.append($ticket, $ticketMore);
    $ticket.show("slow");

    ticketsCount += 1;
  }

The flow is simple. Clone the template, bind events to various elements, specify type and price fields and then append to the ticket holder table.

[Screenshot: the tickets table after adding tickets]

We use the Datepicker and Timepicker JavaScript libraries for the date and time elements, so the cloned fields using these widgets need the corresponding methods called on them. We also use iCheck for checkboxes and radio buttons. Apart from these, the Edit-Ticket and Remove-Ticket buttons also need event handlers. The Edit-Ticket button toggles the Ticket Details segment (the second tr of a ticket); Remove-Ticket deletes the ticket. After the Price and Type fields are set, the ticket is appended to the holder table with a slow animation.


Flask-SocketIO Notifications

In the previous post I explained configuring Flask-SocketIO, Nginx and Gunicorn. This post covers integrating the Flask-SocketIO library to display notifications to users in real time.

Flask Config

For development we use the default web server that ships with Flask. With it, Flask-SocketIO falls back to long-polling as its transport mechanism instead of WebSockets. So to properly test SocketIO I wanted to work directly with Gunicorn (hence the previous post about configuring the development environment). Also, not every developer needs to be bothered with the changes required to run it.

class DevelopmentConfig(Config):
    DEVELOPMENT = True
    DEBUG = True

    # If Env Var `INTEGRATE_SOCKETIO` is set to 'true', then integrate SocketIO
    socketio_integration = os.environ.get('INTEGRATE_SOCKETIO')
    if socketio_integration == 'true':
        INTEGRATE_SOCKETIO = True
    else:
        INTEGRATE_SOCKETIO = False

    # Other stuff

SocketIO is integrated (in the development env) if the developer has set the INTEGRATE_SOCKETIO environment variable to “true”. In production, our application runs on Gunicorn and SocketIO integration is always enabled.
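For context, here is a minimal sketch (module layout and names are assumptions, not the project's actual code) of how the flag can gate the integration when the app starts:

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
app.config.from_object('config.DevelopmentConfig')  # assumed config path

# Wrap the app with SocketIO only when the flag is enabled
socketio = SocketIO(app) if app.config['INTEGRATE_SOCKETIO'] else None

if __name__ == '__main__':
    if socketio:
        socketio.run(app, port=5000)
    else:
        app.run(port=5000)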

Flow

To send a message to a particular connection (or a set of connections), Flask-SocketIO provides rooms. Connections join a room, and messages are emitted to the room. So to send a message to a particular user, we need him to join a room and then send the message in that room. The room name needs to be unique and tied to just one user; the user database IDs can be used for this. I decided to keep user_{id} as the room name for a user with id {id}. This information (the user id) is needed when making the user join a room, so I stored it in the session for every user that logged in.

@expose('/login/', methods=('GET', 'POST'))
    def login_view(self):
        if request.method == 'GET':
            # Render template
        if request.method == 'POST':
            # Take email and password from form and check if 
            # user exists. If he does, log him in.
            login.login_user(user)

            # Store user_id in session for socketio use
            session['user_id'] = login.current_user.id

            # Redirect

After the user logs in, a connection request from the client is sent to the server. With this connection request, the connection handler at the server makes the user join a room (based on the user_id stored previously).

@socketio.on('connect', namespace='/notifs')
def connect_handler():
    if current_user.is_authenticated():
        user_room = 'user_{}'.format(session['user_id'])
        join_room(user_room)
        emit('response', {'meta': 'WS connected'})

The client side is somewhat similar to this:

<script src="{{ url_for('static', filename='path/to/socket.io-client/socket.io.js') }}"></script>
<script type="text/javascript">
$(document).ready(function() {
    var namespace = '/notifs';

    var socket = io.connect(location.protocol + "//" + location.host + namespace, {reconnection: false});

    socket.on('response', function(msg) {
        console.log(msg.meta);
        // If `msg` is a notification, display it to the user.
    });
});
</script>

Namespaces help when making multiple connections over the same socket.

So now that the user has joined a room we can send him notifications. The notification data sent to the client should be standard, so the message always has the same format. I defined a get_unread_notifs method for the User class that fetches unread notifications.

class User(db.Model):
    # Other stuff

    def get_unread_notifs(self, reverse=False):
        """Get unread notifications with titles, humanized receiving time
        and Mark-as-read links.
        """
        notifs = []
        unread_notifs = Notification.query.filter_by(user=self, has_read=False)
        for notif in unread_notifs:
            notifs.append({
                'title': notif.title,
                'received_at': humanize.naturaltime(datetime.now() - notif.received_at),
                'mark_read': url_for('profile.mark_notification_as_read', notification_id=notif.id)
            })

        if reverse:
            return list(reversed(notifs))
        else:
            return notifs

This class method is used when a notification is added in the database and has to be pushed into the user SocketIO room.

def create_user_notification(user, action, title, message):
    """
    Create a User Notification
    :param user: User object to send the notification to
    :param action: Action being performed
    :param title: The message title
    :param message: Message
    """
    notification = Notification(user=user,
                                action=action,
                                title=title,
                                message=message,
                                received_at=datetime.now())
    saved = save_to_db(notification, 'User notification saved')

    if saved:
        push_user_notification(user)

def push_user_notification(user):
    """
    Push user notification to user socket connection.
    """
    user_room = 'user_{}'.format(user.id)
    emit('response',
         {'meta': 'New notifications',
          'notif_count': user.get_unread_notif_count(),
          'notifs': user.get_unread_notifs()},
         room=user_room,
         namespace='/notifs')

Setting up Nginx, Gunicorn and Flask-SocketIO

One of my previous posts was on User Notifications (another blog). There I discussed a possible enhancement to notifications using the WebSocket API. This week I worked on that using the Flask-SocketIO library. Its development required setting up the back-end; this post is about that setup.

Flask-SocketIO and Gunicorn

From the Flask-SocketIO page itself:

Flask-SocketIO gives Flask applications access to low latency bi-directional communications between the clients and the server.

On the client-side the developer is free to use any library that speaks the Socket.IO protocol. Flask-SocketIO needs an asynchronous service to work with and gives a choice of three: Eventlet (eventlet.net), Gevent (gevent.org) and the Flask development server. I used it with Eventlet.

pip install flask-socketio
pip install eventlet

We were already using Gunicorn as our webserver, so integrating Eventlet only required specifying the worker class for Gunicorn.

gunicorn app:app --worker-class eventlet -w 1 --bind 0.0.0.0:5000 --reload

This command starts the Gunicorn web server, loads the Flask app and binds it to port 5000. The worker class has to be specified as eventlet, using only one worker (-w 1). The --reload option helps during development; it restarts the server when the Python code changes.

The problem with Gunicorn occurs when working with static files. Gunicorn is not made to serve static assets like CSS stylesheets, JS scripts, etc.; it should only be used to serve requests that require the Python application. We were serving static assets with Gunicorn, and during development you could see stale static files in the browser even after the server was restarted. The correct way to handle this is to use Nginx as a proxy server that serves static files itself and passes other requests to the Flask application (running on Gunicorn).

Nginx

We use Vagrant for development. To test our application, a port on the host machine has to be forwarded to a port on the guest machine. We forward host port 8001 to guest port 5000.

config.vm.network "forwarded_port", guest: 5000, host: 8001

To serve requests with Nginx we need it listening on port 5000 in our VirtualBox. It should serve the static files itself and pass other requests to the Gunicorn server running the Python application. The Gunicorn server should be running on another port, let’s assume 5001. The following Nginx configuration does this:

server {
    listen       5000;

    location /static {
        alias /vagrant/app/static;
        autoindex on;
    }

    location / {
        proxy_pass http://127.0.0.1:5001;

        proxy_redirect http://127.0.0.1:5001/ http://127.0.0.1:8001/;
    }
}

You can see the static files (which are served at /static in our application) are served directly. /vagrant/app/static is the directory where our static assets reside inside Vagrant. autoindex on lets you browse static file directories in the browser.

For other locations (URIs) the request is passed on to port 5001, where our Gunicorn server is running. Many responses from the Gunicorn server contain URLs in the headers, like the Location header. Such a URL will have the domain and port of the Gunicorn server, since Flask is running there. The response is grabbed by Nginx before being sent to the user, and Nginx converts such URLs to the domain and port it is itself running on. So if a response from Gunicorn has the header Location: http://127.0.0.1:5001/blah, it would be converted to Location: http://127.0.0.1:5000/blah. Since we were running our application inside a VirtualBox, on the outside (host) the application needed to be available at port 8001, so the response header Location: http://127.0.0.1:5000/blah needed to become Location: http://127.0.0.1:8001/blah. proxy_redirect does this job.

To add support for WebSockets at Nginx some other config parameters need to be defined. You can get them at https://flask-socketio.readthedocs.io/en/latest/.


ETag based caching for GET APIs

Many client applications require caching of data to work with low bandwidth connections. Many of them do it to provide faster loading time to the client user. The Webapp and Android app had similar requirements. Previously they provided caching using a versions API that would keep track of any modifications made to Events or Services. The response of the API would be something like this:

[{
  "event_id": 6,
  "event_ver": 1,
  "id": 27,
  "microlocations_ver": 0,
  "session_ver": 4,
  "speakers_ver": 3,
  "sponsors_ver": 2,
  "tracks_ver": 3
}]

The number corresponding to "*_ver" tells the number of modifications made to that resource list. For instance, "tracks_ver": 3 means there were three revisions of the tracks inside the event (/events/:event_id/tracks). So when the client user starts the app, it makes a request to the versions API, checks the response against the local cache and updates accordingly. This had some shortcomings, like not being able to check modifications for individual resources. And whenever a particular service resource list (microlocations, tracks, etc.) inside an event needed to be checked for updates, a call to the versions API was required.

ETag based caching for GET APIs

The concept of ETag (Entity Tag) based caching is simple. When a client requests (GET) a resource or a resource list, a hash of the resource/resource list is calculated at the server. This hash, called the ETag, is sent with the response to the client, typically in the ETag header. The client then caches the response data and the ETag alongside the resource. The next time the client makes a request to the same endpoint, it sets an If-None-Match header on the request containing the ETag value it saved earlier. The server grabs the requested resource, calculates its hash and checks whether it equals the value of If-None-Match. If the hashes are the same, the resource has not changed, so a response with the resource data is not needed. If they differ, the server returns the response with the resource data and a new ETag for that resource.
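The exchange can be exercised from Python with the requests library. This is just a sketch of the flow; the curl transcript further below shows the same thing against the actual API:

import requests

url = 'http://127.0.0.1:8001/api/v2/events/1/tracks/1'

first = requests.get(url)
etag = first.headers.get('ETag')

# Send the cached ETag back; an unchanged resource yields 304 with an empty body
second = requests.get(url, headers={'If-None-Match': etag})
print(second.status_code)  # 200 if the resource changed, 304 if not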

Only small modifications were needed to deal with ETags for GET requests. Flask-Restplus includes a Resource class that defines a resource. It is a pluggable view, and pluggable views need to define a dispatch_request method that returns the response.

import json
from hashlib import md5

from flask import request
from flask.ext.restplus import Resource as RestplusResource

# Custom Resource Class
class Resource(RestplusResource):
    def dispatch_request(self, *args, **kwargs):
        resp = super(Resource, self).dispatch_request(*args, **kwargs)

        # ETag checking.
        # Check only for GET requests, for now.
        if request.method == 'GET':
            old_etag = request.headers.get('If-None-Match', '')
            # Generate hash
            data = json.dumps(resp)
            new_etag = md5(data).hexdigest()

            if new_etag == old_etag:
                # Resource has not changed
                return '', 304
            else:
                # Resource has changed, send new ETag value
                return resp, 200, {'ETag': new_etag}

        return resp

To add support for ETags, I sub-classed the Resource class to extend the dispatch_request method. First, I grabbed the response for the arguments provided to RestplusResource‘s dispatch_request method. old_etag contains the value of ETag set in the If-None-Match header. Then hash for the resp response is calculated. If both ETags are equal then an empty response is returned with 304 HTTP status (Not Modified). If they are not equal, then a normal response is sent with the new value of ETag.

[smg:~] $ curl -i http://127.0.0.1:8001/api/v2/events/1/tracks/1 
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 1061
ETag: ada4d057f76c54ce027aaf95a3dd436b
Server: Werkzeug/0.11.9 Python/2.7.6
Date: Thu, 21 Jul 2016 09:01:01 GMT

{"description": "string", "sessions": [{"id": 1, "title": "Fantastische Hardware Bauen & L\u00f6ten Lernen mit Mitch (TV-B-Gone) // Learn to Solder with Cool Kits!"}, {"id": 2, "title": "Postapokalyptischer Schmuck / Postapocalyptic Jewellery"}, {"id": 3, "title": "Query Service Wikidata "}, {"id": 4, "title": "Unabh\u00e4ngige eigene Internet-Suche in wenigen Schritten auf dem PC installieren"}, {"id": 5, "title": "Nitrokey Sicherheits-USB-Stick"}, {"id": 6, "title": "Heart Of Code - a hackspace for women* in Berlin"}, {"id": 7, "title": "Free Software Foundation Europe e.V."}, {"id": 8, "title": "TinyBoy Project - a 3D printer for education"}, {"id": 9, "title": "LED Matrix Display"}, {"id": 11, "title": "Schnittmuster am PC erstellen mit Valentina / Valentina Digital Pattern Design"}, {"id": 12, "title": "PC mit Gedanken steuern - Brain-Computer Interfaces"}, {"id": 14, "title": "Functional package management with GNU Guix"}], "color": "GREEN", "track_image_url": "http://website.com/item.ext", "location": "string", "id": 1, "name": "string"}

[smg:~] $ curl -i --header 'If-None-Match: ada4d057f76c54ce027aaf95a3dd436b' http://127.0.0.1:8001/api/v2/events/1/tracks/1 
HTTP/1.0 304 NOT MODIFIED
Connection: close
Server: Werkzeug/0.11.9 Python/2.7.6
Date: Thu, 21 Jul 2016 09:01:27 GMT

ETag based caching has a drawback: since the hash is calculated for every GET request, it increases the load on the servers. If four clients request the same resource, the server calculates the hash four times. This can be solved by calculating and saving the ETag when a resource is created or modified, and then fetching and sending this stored ETag directly.
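A rough sketch of that idea, using an in-memory dict as a stand-in for an etag column on the resource table (the names here are illustrative, not the project's):

import json
from hashlib import md5

etag_store = {}  # stand-in for an `etag` column on the resource

def store_etag(resource_key, resource_dict):
    """Call from the create/update code paths after the resource is saved."""
    data = json.dumps(resource_dict, sort_keys=True)
    etag_store[resource_key] = md5(data.encode('utf-8')).hexdigest()
    return etag_store[resource_key]

def etag_for(resource_key):
    """GET handlers can return this directly instead of re-serializing and hashing."""
    return etag_store.get(resource_key)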


User Notifications

The requirement for a notification area came up when I was implementing the Event-Role invites feature. For users not registered in our system, an email with a modified sign-up link is sent, so that just after the user signs up he is accepted for that particular role. For users already registered on our platform, a dedicated area was needed to let them know they have been invited to take up a role at an event. Similar areas were needed for Session invites, Call for Papers, etc. To take care of these, we thought of implementing a separate notifications area where such messages could be sent to registered users.

Base Model

I kept the base db model for a user notification very basic. It has a user field that is a foreign key to a User object. title and message contain the actual data the user reads. message can contain HTML tags, so if someone wants to display the notification with markup, they can store it in the message. The user might also want to know when a notification was received; the received_at field stores a datetime object for that purpose.

There is also a has_read field that was added later. It stores a boolean value that tells whether the user has marked the notification as read.

class Notification(db.Model):
    """
    Model for storing user notifications.
    """

    id = db.Column(db.Integer, primary_key=True)

    user_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    user = db.relationship('User', backref='notifications')

    title = db.Column(db.String)
    message = db.Column(db.Text)
    action = db.Column(db.String)
    received_at = db.Column(db.DateTime)
    has_read = db.Column(db.Boolean)

    def __init__(self,
                 user,
                 title,
                 message,
                 action,
                 received_at,
                 has_read=False):
        self.user = user
        self.title = title
        self.message = message
        self.action = action
        self.received_at = received_at
        self.has_read = has_read

The action field helps the Admin identify the notification, e.g. whether it is a message about a Session Schedule change or an Event-Role invite. When a notification is logged, the administrator can tell exactly what the message is for.

Unread Notification Count

The user must be informed if he has received a notification. This info must be available on every page so he doesn’t have to switch over to the notification area to check for new ones. A notification icon in the navbar, perhaps.

[Screenshot: notification count icon in the navbar]

The data about this notification count had to be available at the navbar template at every page. I decided to define it as a method in the User class. This way it could be displayed using the User object. So if the user was authenticated, the icon with the notification count could be displayed.

class User(db.Model):
    """User model class
    """
    # other stuff
    
    def get_unread_notif_count(self):
        return len(Notification.query.filter_by(user=self,
                                                has_read=False).all())
{% if current_user.is_authenticated %}
    <!-- other stuff -->

    <li>
             <a class="info-number" href="{{ url_for('profile.notifications_view') }}">
                  <i class="fa fa-envelope-o"></i>
                  <span class="badge bg-green">{{ current_user.get_unread_notif_count() | default('', true) }}</span>
             </a>
    </li>

    <!-- other stuff -->

{% endif %}

If the count is zero, no number is displayed.
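Marking a notification as read just flips its has_read flag. For completeness, here is a minimal, hedged sketch of such a view (the route path and body are assumptions, and the usual redirect/url_for imports are omitted as in the snippets above; the project's actual profile.mark_notification_as_read handler may differ):

@expose('/notifications/<int:notification_id>/read')
def mark_notification_as_read(self, notification_id):
    # Assumed handler: flip the flag and go back to the notifications page
    notification = Notification.query.get_or_404(notification_id)
    notification.has_read = True
    save_to_db(notification, 'Notification marked as read')
    return redirect(url_for('profile.notifications_view'))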

Possible Enhancement

The notification count comes with the HTML generated by the template at the server. So to check for new notifications the user must either refresh the page or navigate to another page. To show newly received notifications without refreshing the page, the WebSocket API can be used. It’s on my list and I’ll implement it soon.


Permission Decorators

A follow-up to one of my previous posts: Organizer Server Permissions System.

I recently had a requirement to create permission decorators for use in our REST APIs. There had to be separate decorators for Events and for Services.

Event Permission Decorators

Understanding Event permissions is simple: Any user can create an event. But access to an event is restricted to users that have Event specific Roles (e.g. Organizer, Co-organizer, etc) for that event. The creator of an event is its Organizer, so he immediately gets access to that event. You can read about these roles in the aforementioned post.

So for Events, the create operation does not require any permissions, but the read/update/delete operations needed a decorator. This decorator restricts access to users with event roles.

def can_access(func):
    """Check if User can Read/Update/Delete an Event.
    This is done by checking if the User has a Role in an Event.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        user = UserModel.query.get(login.current_user.id)
        event_id = kwargs.get('event_id')
        if not event_id:
            raise ServerError()
        # Check if event exists
        get_object_or_404(EventModel, event_id)
        if user.has_role(event_id):
            return func(*args, **kwargs)
        else:
            raise PermissionDeniedError()
    return wrapper

The has_role(event_id) method of the User class determines if the user has a Role in an event.

# User Model class

    def has_role(self, event_id):
        """Checks if user has any of the Roles at an Event.
        """
        uer = UsersEventsRoles.query.filter_by(user=self, event_id=event_id).first()
        if uer is None:
            return False
        else:
            return True

Reading one particular event (/events/:id [GET]) can be restricted to users, but a GET request to fetch all the events (/events [GET]) should only be available to staff (Admin and Super Admin). So a separate decorator to restrict access to Staff members was needed.

def staff_only(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        user = UserModel.query.get(login.current_user.id)
        if user.is_staff:
            return func(*args, **kwargs)
        else:
            raise PermissionDeniedError()
    return wrapper

Service Permission Decorators

Service Permissions for a user are defined using Event Roles. What Role a user has in an Event determines what Services he has access to in that Event. Access here means permission to Create, Read, Update and Delete services. The User model class has four methods to determine the permissions for a Service in an event.

user.can_create(service, event_id)
user.can_read(service, event_id)
user.can_update(service, event_id)
user.can_delete(service, event_id)

So four decorators were needed to sit alongside the POST, GET, PUT and DELETE method handlers. I’ve pasted the snippet for the can_update decorator; the rest are similar, but with their respective permission methods on the User object.

def can_update(DAO):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            user = UserModel.query.get(login.current_user.id)
            event_id = kwargs.get('event_id')
            if not event_id:
                raise ServerError()
            # Check if event exists
            get_object_or_404(EventModel, event_id)
            service_class = DAO.model
            if user.can_update(service_class, event_id):
                return func(*args, **kwargs)
            else:
                raise PermissionDeniedError()
        return wrapper
    return decorator

This decorator is a little different from the can_access event decorator in that it takes an argument, a DAO (Data Access Object). A DAO includes a database model and methods to create, read, update and delete objects of that model. The db model of the DAO is the Service class for the object. You can see that the model class is taken from the DAO and used as the service class.

The can_create, can_read and can_delete decorators look exactly the same except they use their (obvious) permission methods on the User class object.
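A usage sketch of how these decorators can sit on a Flask-Restplus resource (the route, the TrackDAO name and the handler bodies are made up for illustration, and the usual flask/restplus imports are assumed):

@api.route('/events/<int:event_id>/tracks/<int:track_id>')
class TrackResource(Resource):
    @can_read(TrackDAO)
    def get(self, event_id, track_id):
        return TrackDAO.get(event_id, track_id)

    @can_update(TrackDAO)
    def put(self, event_id, track_id):
        return TrackDAO.update(event_id, track_id, request.json)

    @can_delete(TrackDAO)
    def delete(self, event_id, track_id):
        return TrackDAO.delete(event_id, track_id)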


Shell hacks

Custom Shell

Working with database models needs a lot of use of the flask shell. You can access it with:

python manage.py shell

The default shell is quite unintuitive. It doesn’t pretty-print the outputs and has no support for auto-completion. I’ve installed IPython, which takes care of that. But still, working with models means writing a lot of import statements. Plus, if there was some change in the app code, the shell had to be restarted for the changes to be loaded, which meant writing the import statements all over again.

We were using Flask-Script and I wanted to run a custom shell that imports all the required modules, models and helper functions, so I don’t have to write them over and over. Some of them were as long as:

from open_event.models.users_events_roles import UsersEventsRoles

So I created a custom shell command with a different context that overrides the default shell provided by the Flask-Script Manager. It was pretty easy with Flask-Script. One thing I had to keep in mind is that it needed to live in a different file than manage.py. Since manage.py was committed to the source repo, changes to it would be tracked, so I needed a separate file that could be excluded from the repo. I created an smg.py that imports the Manager from the open_event module and overrides the shell command.

from flask_script import Shell

from open_event import manager
from open_event.models.user import User
from open_event.models.event import Event
from open_event.helpers.data import save_to_db, delete_from_db

def _make_context():
    return dict(
        uq=User.query,
        eq=Event.query,
        su=User.query.get(1),
        User=User,
        Event=Event,
        savetodb=save_to_db,
        deletefromdb=delete_from_db
    )


if __name__ == "__main__":
    manager.add_command('shell', Shell(make_context=_make_context))
    manager.run()

Place this smg.py file in the same directory as manage.py, so you can access it with python smg.py shell.

The code is pretty simple to understand. We import the Shell class from flask_script, create its object with our context and then add it to the manager as a command. _make_context contains what I usually like to have in my shell. It must always return a dictionary; the keys of this dictionary become available as variables inside the shell, bound to the values specified here.

This helps a lot. Most of the time I would be working with the super_admin user, and I would need its User object from time to time. The super_admin user is always going to be the first user (User object with id 1). So instead of from open_event.models.user import User; su = User.query.get(1) I could just use the su variable. Models like User and Event are also readily available (so are their base queries). This is the default context that I always keep, but many times you need more models than the ones specified here. Like when I was working with the permissions system.

from open_event.models.users_events_roles import UsersEventsRoles
from open_event.models.service import Service
from open_event.models.role import Role

def _make_context():
    return dict(
        # usual stuff
        UER=UsersEventsRoles,
        Service=Service,
        Role=Role
    )

You can even write a script that fetches all the database models (instances of the SQLAlchemy Model class) and adds them to the _make_context dictionary, as sketched below. I like to keep the context minimal and the names distinct, so there are no class-name conflicts when I try to auto-complete.
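A sketch of such a script, assuming the project's SQLAlchemy db object is importable as shown. It relies on _decl_class_registry, an internal attribute of the declarative base in the SQLAlchemy versions of that time:

from open_event import db  # assumption: wherever the project defines `db`

def all_models_context():
    """Collect every declarative model class into a dict for the shell context."""
    context = {}
    for name, cls in db.Model._decl_class_registry.items():
        # The registry also holds non-class bookkeeping entries; skip them
        if isinstance(cls, type) and issubclass(cls, db.Model):
            context[name] = cls
    return context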

One more thing, you need to exclude the smg.py file so that git doesn’t track it. You can simply add it in the .git/info/exclude file.

I also wrote some useful one-liner bash commands.

Revision History

Every time someone updates the database models, he needs to migrate and provide the migration file to others by committing it to the source. These files help us upgrade our already defined database. We work with Alembic. Regarding Alembic revisions (migration files) you can keep two things in mind. One is the head, which is the latest revision, and the other is current, which is the revision your database tables are currently based on. If your current revision is not a head, your database tables are not up-to-date and you need to upgrade (python manage.py db upgrade). There can be multiple heads in the revision history; python manage.py db heads displays all of them. The current revision can be fetched with python manage.py db current. I wanted something that automatically checks whether current is at a head.

python manage.py db history | grep --color -E "$(python manage.py db current 2> /dev/null)|$(python manage.py db heads)|$"

You can specify it as an alias in .bashrc. This command displays the revision history with the head and current revision being colored.

[Screenshot: revision history with the head and current revisions highlighted]

You can identify the head as it is always appended by “(head)”. The other colored revision would be the current.

You can also reverse the output by printing it bottom to top. This is helpful when the revision history gets long and you have to scroll up to see the head.

alias rev='python manage.py db history | tac | grep --color -E "$(python manage.py db current 2> /dev/null)|$(python manage.py db heads)|$"'

[Screenshot: reversed revision history output]

Current Revision changer

The current revision is maintained in the database in the one-column alembic_version table. If for some reason the migration file for the current revision no longer exists (like when switching to a git branch that doesn’t have the file), Alembic raises an error. So to change the revision stored in the database I wrote a bash function.

change_db_current () {
    echo "Current Revision: "
    read current_rev
    echo "New Revision: "
    read new_rev

    echo '\c test \\ update alembic_version set version_num='"'"$new_rev"'"' where version_num='"'"$current_rev"'"';' | sudo su - postgres -c psql
}

Enter the current revision hash (displayed in the Alembic error) and the revision you need to point current to, and it will update the table with the new revision. Sick, right? It’s pretty hacky actually; I wrote it before I knew how to stamp a revision (python manage.py db stamp would be the proper way). Anyway, it was useful at the time.


Organizer Server Permissions System

This post discusses the Organizer Server Permissions System and how it has been implemented using database models.

The Organizer Server Permissions System includes Roles, and Services that these roles can access. Roles can broadly be classified into Event-specific Roles and System-wide Roles.

System-Wide Roles

Users with system-wide roles can be considered part of the staff maintaining the platform. We define two such roles: Admin and Super Admin. The Super Admin is the highest-level user, with permissions to access system logs, manage event-specific roles, and so on.

System-wide roles are the easiest to implement. Since they’re directly related to a user, we can add them as columns on the User model.

class User(db.Model):
    # other stuff

    is_admin = db.Column(db.Boolean, default=False)
    is_super_admin = db.Column(db.Boolean, default=False)

    @property
    def is_staff(self):
        return self.is_admin or self.is_super_admin

Staff groups Admin and Super Admin roles. So to check if a user is a part of the staff (an admin or a super admin), the is_staff property can directly be used.

user.is_staff

Event-Specific Roles

An Event itself can contain many entities like Tracks, Sponsors, Sessions, etc. Our goal was to define permissions for Roles to access these entities. We grouped these entities under “Services”. Services are nothing but database models associated with an event that need to have access restricted by Role.

We define the following Services for an Event:

  • Track
  • Session
  • Speaker
  • Sponsor
  • Microlocation

Each of these services can either be created, read, updated or deleted. And depending on the Role a user has been assigned for a particular event, he or she can perform such operations.

There are four Event-Specific Roles:

  • Organizer
  • Co-organizer
  • Track Organizer
  • Moderator

As soon as the user creates an event, he is assigned the role of an Organizer, giving him access to all the services for that event. An Organizer can perform any operation on any of the services. A Co-organizer also has access to all the services but can only update them. A Track Organizer can just read and update already created Tracks. The Moderator can only read Tracks.

Although the initial distribution of permissions is kept as above, the Super Admin can (has permissions to) edit them later.

[Screenshot: the role-permission matrix in the admin panel]

To implement permissions for Event specific roles, three new database models were required: Role, Service and Permission.

Role and Service would contain the above mentioned Roles and Services respectively. Permission would contain a Role column, a Service column and four other columns specifying what operation (create/read/update/delete) that Role is allowed to perform on the Service.
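A hedged sketch of what that Permission model can look like (the column names follow the operations used below; the actual definitions in the project may differ):

class Permission(db.Model):
    id = db.Column(db.Integer, primary_key=True)

    role_id = db.Column(db.Integer, db.ForeignKey('role.id'))
    role = db.relationship('Role')

    service_id = db.Column(db.Integer, db.ForeignKey('service.id'))
    service = db.relationship('Service')

    can_create = db.Column(db.Boolean, default=False)
    can_read = db.Column(db.Boolean, default=False)
    can_update = db.Column(db.Boolean, default=False)
    can_delete = db.Column(db.Boolean, default=False)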

The final objective was to define these methods for the User class:

user.can_create(service, event_id)
user.can_read(service, event_id)
user.can_update(service, event_id)
user.can_delete(service, event_id)

Before this we needed a table specifying what Event Roles have been created for an event, and which users have been assigned these roles. The `UsersEventsRoles` model maintained this relationship.

We also needed to check if a user has been assigned a particular role for an event. For this I created a method for each of the roles.

# Event-specific Roles
ORGANIZER = 'organizer'
COORGANIZER = 'coorganizer'
TRACK_ORGANIZER = 'track_organizer'
MODERATOR = 'moderator'

class User(db.Model):
    # other stuff

    def _is_role(self, role_name, event_id):
        role = Role.query.filter_by(name=role_name).first()
        uer = UsersEventsRoles.query.filter_by(user=self,
                                               event_id=event_id,
                                               role=role).first()
        if not uer:
            return False
        else:
            return True

    def is_organizer(self, event_id):
        return self._is_role(ORGANIZER, event_id)

    def is_coorganizer(self, event_id):
        return self._is_role(COORGANIZER, event_id)

    def is_track_organizer(self, event_id):
        return self._is_role(TRACK_ORGANIZER, event_id)

    def is_moderator(self, event_id):
        return self._is_role(MODERATOR, event_id)

Here _is_role helps reduce code redundancy.

Like I said, our final objective was to create methods that determine whether a user has permission to perform a particular operation on a service, based on his role in the event. I defined the following methods:

    # ...`User` class

    def _has_perm(self, operation, service_class, event_id):
        # Operation names and their corresponding permission in `Permissions`
        operations = {
            'create': 'can_create',
            'read': 'can_read',
            'update': 'can_update',
            'delete': 'can_delete',
        }
        if operation not in operations.keys():
            raise ValueError('No such operation defined')

        try:
            service_name = service_class.get_service_name()
        except AttributeError:
            # If `service_class` does not have `get_service_name()`
            return False

        service = Service.query.filter_by(name=service_name).first()

        uer_querylist = UsersEventsRoles.query.filter_by(user=self,
                                                         event_id=event_id)
        for uer in uer_querylist:
            role = uer.role
            perm = Permission.query.filter_by(role=role,
                                              service=service).first()
            if getattr(perm, operations[operation]):
                return True

        return False

    def can_create(self, service_class, event_id):
        return self._has_perm('create', service_class, event_id)

    def can_read(self, service_class, event_id):
        return self._has_perm('read', service_class, event_id)

    def can_update(self, service_class, event_id):
        return self._has_perm('update', service_class, event_id)

    def can_delete(self, service_class, event_id):
        return self._has_perm('delete', service_class, event_id)

The can_create, can_read, etc. defined in operations are the four columns in the Permission db model. Like _is_role, the _has_perm method helps implement the DRY philosophy.
