Push your apk to your GitHub repository from Travis

In this post I’ll guide you through directly uploading the compiled .apk from Travis to your GitHub repository.

Why do we need this?

Well, assume that you need to provide an app to your testers after each commit on the repository. Instead of manually building and emailing them the app, we can set up Travis to upload the file to our repository, where the testers can fetch it from.

So, let’s get to it!

Step 1:

Link Travis to your GitHub Account.

Open up https://travis-ci.org.

Click on the green button in the top right corner that says “Sign in with GitHub”.


Step 2:

Add your existing repository to Travis

Click the “+” button next to your Travis Dashboard located on the left.


Choose the project for which you want to set up Travis on the next page.

Toggle the switch for the project that you want to integrate.

Click the cog (settings) icon for the repository and add an Environment Variable named GITHUB_API_KEY.
Proceed by adding your Personal Access Token as its value.
Read up here on how to generate the token.


Great, we are pretty much done here.

Let us move to the project repository that we just integrated and create a new file in the root of the repository by clicking “Create new file” on the repo’s page.

Name it .travis.yml and add the following configuration there:

language: android
jdk:
  - oraclejdk8
android:
  components:
    - tools
    - build-tools-24.0.0
    - android-24
    - extra-android-support
    - extra-google-google_play_services
    - extra-android-m2repository
    - extra-google-m2repository
    - addon-google_apis-google-24
before_install:
  - chmod +x gradlew
  - export JAVA8_HOME=/usr/lib/jvm/java-8-oracle
  - export JAVA_HOME=$JAVA8_HOME
after_success:
  - chmod +x ./upload-apk.sh
  - ./upload-apk.sh
script:
  - ./gradlew build

Next, create a bash file in the root of your repository using the same method and name it upload-apk.sh

  #!/bin/bash
  #create a new directory that will contain our generated apk
  mkdir -p $HOME/android/
  #copy the generated apk from the build folder to the folder just created
  cp -R app/build/outputs/apk/app-debug.apk $HOME/android/
  #go to home and set up git
  cd $HOME
  git config --global user.email "useremail@domain.com"
  git config --global user.name "Your Name"
  #clone the repository into a directory named master
  git clone --quiet --branch=master https://user-name:$GITHUB_API_KEY@github.com/user-name/repo-name master > /dev/null
  #go into the directory and copy the data we're interested in
  cd master
  cp -Rf $HOME/android/* .
  #add, commit and push the files
  #[skip ci] in the commit message stops Travis from building this commit (see the note below)
  git add -f .
  git remote rm origin
  git remote add origin https://user-name:$GITHUB_API_KEY@github.com/user-name/repo-name.git
  git commit -m "Travis build $TRAVIS_BUILD_NUMBER pushed [skip ci]"
  git push -fq origin master > /dev/null
  echo -e "Done\n"

Once you have done this, commit and push these files; a Travis build will be initiated within a few seconds.
You can see it ongoing in your Dashboard at https://travis-ci.org/.

After the build has completed, you will see an app-debug.apk in your repository.

IMPORTANT NOTE :

You might be wondering why I wrote [skip ci] in the commit message.

Well, the reason is that Travis starts a new build as soon as it detects a commit made on the master branch of your repository.

So once the apk is uploaded, that commit would trigger another build, which would upload another apk, and so on, forming an infinite loop.

We can prevent this in two ways:

First, simply write [skip ci] somewhere in the commit message and it will cause Travis to ignore the commit.

Or, push the apk to another branch that is not configured for Travis builds.

So well, that’s almost it.

I hope that you found this tutorial helpful, and if you have any doubts regarding this, feel free to comment down below; I would love to help you out.

Cheers.


Setting up Celery with Flask

In this article, I will explain how to use Celery with a Flask application. Celery requires a broker to run. The most famous of the brokers is Redis. So to start using Celery with Flask, first we will have to set up the Redis broker.

Redis can be downloaded from their site http://redis.io. I wrote a script that simplifies downloading, building and running the redis server.

#!/bin/bash
# This script downloads and runs redis-server.
# If redis has been already downloaded, it just runs it
if [ ! -d redis-3.2.1/src ]; then
    wget http://download.redis.io/releases/redis-3.2.1.tar.gz
    tar xzf redis-3.2.1.tar.gz
    rm redis-3.2.1.tar.gz
    cd redis-3.2.1
    make
else
    cd redis-3.2.1
fi
src/redis-server

When the above script is run for the first time, the redis folder doesn’t exist, so the script downloads the source, builds it and then runs the server. In subsequent runs, it skips the downloading and building part and just runs the server.

Now that the redis server is running, we will have to install its Python counterpart.

pip install redis
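To confirm the server and the Python client can talk to each other, here is a quick sanity check (a sketch, assuming the default localhost setup from the script above):

# quick sanity check that the Redis server is reachable
import redis

r = redis.Redis(host='localhost', port=6379, db=0)
print(r.ping())  # prints True if the server is up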

After the Redis broker is set up, it’s time to set up the Celery extension. First install Celery with pip install celery. Then we need to configure Celery in the Flask app definition.

# in app.py
from os import environ

from celery import Celery
from flask import current_app

def make_celery(app):
    # set redis url vars
    app.config['CELERY_BROKER_URL'] = environ.get('REDIS_URL', 'redis://localhost:6379/0')
    app.config['CELERY_RESULT_BACKEND'] = app.config['CELERY_BROKER_URL']
    # create context tasks in celery
    celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    TaskBase = celery.Task

    class ContextTask(TaskBase):
        abstract = True

        def __call__(self, *args, **kwargs):
            with app.app_context():
                return TaskBase.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

celery = make_celery(current_app)

Now that Celery is set up on our project, let’s define a sample task.

@app.route('/task')
def view():
    # queue the task; pass whatever arguments your task needs
    background_task.delay(*args, **kwargs)
    return 'OK'

@celery.task
def background_task(*args, **kwargs):
    # code
    # more code
    pass
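To make this more concrete, here is a minimal sketch with a real signature (the add task and the /add route are illustrative, not part of any project):

@celery.task
def add(x, y):
    # runs in a worker process, inside the app context
    return x + y

@app.route('/add')
def add_view():
    # queue the task and respond immediately with its id
    result = add.delay(2, 3)
    return result.id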

Now to run the celery workers, execute

celery worker -A app.celery

That should be all. Now to run our little project, we can execute the following script.

bash run_redis.sh &  # to run redis
celery worker -A app.celery &  # to run celery workers
python app.py

If you are wondering how to run the same on Heroku, just use the free heroku-redis add-on. It will start the Redis server on Heroku. Then, to run the workers and the app, set the Procfile as –

web: sh heroku.sh

Then set the heroku.sh as –

#!/bin/bash
celery worker -A app.celery &
gunicorn app:app

That’s a basic guide on how to run a Flask app with Celery and Redis. If you want more information on this topic, please see my post Ideas on Using Celery in Flask for background tasks.


Ideas on using Celery with Flask for background tasks

Simply put, Celery is a background task runner. It can run time-intensive tasks in the background so that your application can focus on the stuff that matters the most. In the context of a Flask application, that is listening to HTTP requests and returning responses.

By default, Flask runs on a single thread. Now if a request that takes several seconds to run is executed, it will block all other incoming requests. That is a very bad experience for the user of the product. So here we can use Celery to move the time-hogging part of that request to the background.

I would like to let you know that by “background”, Celery means another process. Celery starts worker processes for the running application and these workers receive work from the main application. Celery requires a broker to be used. The broker is nothing but a database that stores the results of Celery tasks and provides a shared interface between the main process and the worker processes. The output of the work done by the workers is stored in the broker, and the main application can then access these results from it.

Using Celery to set background tasks in your application is as simple as follows –

@celery.task
def background_task(*args, **kwargs):
    # do stuff
    # more stuff
    pass

Now the function background_task can be run as a background task. To execute it in the background, run –

task = background_task.delay(*args, **kwargs)
print(task.state)  # task's current state (PENDING, SUCCESS, FAILURE)
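If you need the return value, the object returned by delay() can also fetch it. Note that this blocks until the task finishes, so use it carefully (a sketch):

task = background_task.delay(*args, **kwargs)
result = task.get(timeout=10)  # waits for the worker; raises on timeout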

Till now this may look nice and easy, but it can cause lots of problems. This is because the background tasks run in different processes than the main application, so the state of the worker application differs from that of the real application.

One common problem resulting from this is the lack of request context. Since a Celery task runs in a different process, the request context is not available. Therefore the request headers, cookies and everything else are not available when the task actually runs. I too faced this problem, and solved it using an excellent snippet I found on the Internet.

"""
Celery task wrapper to set request context vars and global
vars when a task is executed
Based on http://xion.io/post/code/celery-include-flask-request-context.html
"""
from celery import Task
from flask import has_request_context, make_response, request, g

from app import app  # the flask app


__all__ = ['RequestContextTask']


class RequestContextTask(Task):
    """Base class for tasks that originate from Flask request handlers
    and carry over most of the request context data.
    This has an advantage of being able to access all the usual information
    that the HTTP request has and use them within the task. Potential
    use cases include e.g. formatting URLs for external use in emails sent
    by tasks.
    """
    abstract = True

    #: Name of the additional parameter passed to tasks
    #: that contains information about the original Flask request context.
    CONTEXT_ARG_NAME = '_flask_request_context'
    GLOBALS_ARG_NAME = '_flask_global_proxy'
    GLOBAL_KEYS = ['user']

    def __call__(self, *args, **kwargs):
        """Execute task code with given arguments."""
        call = lambda: super(RequestContextTask, self).__call__(*args, **kwargs)

        # set context
        context = kwargs.pop(self.CONTEXT_ARG_NAME, None)
        gl = kwargs.pop(self.GLOBALS_ARG_NAME, {})

        if context is None or has_request_context():
            return call()

        with app.test_request_context(**context):
            # set globals
            for i in gl:
                setattr(g, i, gl[i])
            # call
            result = call()
            # process a fake "Response" so that
            # ``@after_request`` hooks are executed
            # app.process_response(make_response(result or ''))

        return result

    def apply_async(self, args=None, kwargs=None, **rest):
        self._include_request_context(kwargs)
        self._include_global(kwargs)
        return super(RequestContextTask, self).apply_async(args, kwargs, **rest)

    def apply(self, args=None, kwargs=None, **rest):
        self._include_request_context(kwargs)
        self._include_global(kwargs)
        return super(RequestContextTask, self).apply(args, kwargs, **rest)

    def retry(self, args=None, kwargs=None, **rest):
        self._include_request_context(kwargs)
        self._include_global(kwargs)
        return super(RequestContextTask, self).retry(args, kwargs, **rest)

    def _include_request_context(self, kwargs):
        """Includes all the information about current Flask request context
        as an additional argument to the task.
        """
        if not has_request_context():
            return

        # keys correspond to arguments of :meth:`Flask.test_request_context`
        context = {
            'path': request.path,
            'base_url': request.url_root,
            'method': request.method,
            'headers': dict(request.headers),
        }
        if '?' in request.url:
            context['query_string'] = request.url[(request.url.find('?') + 1):]

        kwargs[self.CONTEXT_ARG_NAME] = context

    def _include_global(self, kwargs):
        d = {}
        for z in self.GLOBAL_KEYS:
            if hasattr(g, z):
                d[z] = getattr(g, z)
        kwargs[self.GLOBALS_ARG_NAME] = d

To run a task in Request context mode, do –

@celery.task(base=RequestContextTask, bind=True)
def background_task(self, *args, **kwargs):
    # do stuff
    # more stuff
    pass

If you are wondering what the RequestContextTask class does, it simply stores all request context vars and global vars when a background task is called (task.delay()) and then unpacks those values to their proper places when the task is about to be run. The above snippet can be easily extended to store any value.

Another challenge that some people may face is the occasional parsing/serialization error. This happens when the data being sent to or returned from a background-executed function is too complex.

Serialization is the process of converting complex data structures and objects into a plain string. Serialization of data is necessary because the background tasks and the main application run in different processes. Now think about how the main process tells the Celery worker to do some task; this is done by serializing the concerned data. So to avoid serialization errors, it is recommended that you write background tasks that require only simple arguments and return only simple data.

So basically keeping small and simple tasks is recommended when using Celery. Follow this golden rule and you will not run into any problems.
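For example, a task that follows this rule takes a plain id instead of a model object (User here is a hypothetical model, shown only to illustrate the pattern):

@celery.task
def send_welcome_email(user_id):
    # pass the id (a plain int), not the User object itself
    user = User.query.get(user_id)  # re-fetch inside the worker
    # ... send the email to user.email ...
    return {'sent': True, 'user_id': user_id}  # a simple, serializable result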

 

{{ Repost from my personal blog | http://aviaryan.in/blog/gsoc/celery-flask-good-ideas.html }}


Using compression middleware in Node/Express

If you’re using ExpressJS to host your server, there’s a small performance tweak you can apply (it works universally on any kind of server – from HTML templates to JSON REST APIs to image servers), which is to enable GZIP compression.

The change required is very simple. First install the compression npm package:

 npm install --save compression

In your app.js (or wherever you start your express server), modify the code to include the compression middleware like this:

var express = require('express');
var app = express();

var compression = require('compression');

app.use(compression());

// Rest of your app code
// app.get (endpoint, callback) 
// app.listen(port, callback)

This should easily reduce page load times by around 15-20%.
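To verify that compression is actually being applied, check the Content-Encoding header of a response. Here is a small sketch using Python’s requests library (assuming the server runs on localhost:3000):

import requests

r = requests.get('http://localhost:3000', headers={'Accept-Encoding': 'gzip'})
print(r.headers.get('Content-Encoding'))  # 'gzip' for responses above the compression threshold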

(Screenshot: page load time before the compression middleware was added.)

 

(Screenshot: page load time after compression is added.)


Adding extensive help for sTeam

This task was something I came up with as an enhancement because of the problems I faced while using sTeam for the first time. During my first week of using sTeam I had a tough time getting used to the commands, and that is when I opened an issue to improve the help. Help for commands consisted of one-liners that were not very helpful, so I took up the task of improving it, so that new users don’t have to face the difficulties that I faced.

Issue: https://github.com/societyserver/sTeam/issues/30

Solution: https://github.com/societyserver/sTeam/pull/95

Not a lot of technical detail was involved in this task, but it was time consuming. I wrote a few lines explaining what each command does and also added a syntax for the commands. While doing this I also realized more improvements that could be made and added them to my task list. My mentor had explained to me how rooms and gates were the same. I discovered that the gothrough command was violating this, as it allowed users to go through gates but not rooms. I discussed this on the IRC and we came up with a solution: change the command to enter and allow it to operate on both rooms and gates.

This enhancement became my next task and I worked on changing the command. The function gothrough was changed to enter and the conditions required for it to work on rooms were added. This paved the way for my next change: the look command showed rooms and gates under different sections. Now that there was no difference between rooms and gates, I combined these two sections, changing the output of the look command.

(Screenshot: output of look before the changes.)
(Screenshot: output of look after the changes.)

 

Issue: https://github.com/societyserver/sTeam/issues/100

Solution: https://github.com/societyserver/sTeam/pull/101

By the end of the week I had started on my next task, which was a major one. Writing testing framework and test cases for coal command calls. I will be discussing more about this in my next blog post.


Permission Decorators

A follow-up to one of my previous posts: Organizer Server Permissions System.

I recently had a requirement to create permission decorators for use in our REST APIs. There had to be separate decorators for Event and Services.

Event Permission Decorators

Understanding Event permissions is simple: any user can create an event. But access to an event is restricted to users that have Event-specific Roles (e.g. Organizer, Co-organizer, etc.) for that event. The creator of an event is its Organizer, so he immediately gets access to that event. You can read about these roles in the aforementioned post.

So for Events, the create operation does not require any permissions, but the read/update/delete operations needed a decorator. This decorator restricts access to users with event roles.

from functools import wraps
# UserModel, EventModel, get_object_or_404, ServerError, PermissionDeniedError
# and the `login` manager come from the project's own modules

def can_access(func):
    """Check if User can Read/Update/Delete an Event.
    This is done by checking if the User has a Role in an Event.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        user = UserModel.query.get(login.current_user.id)
        event_id = kwargs.get('event_id')
        if not event_id:
            raise ServerError()
        # Check if event exists
        get_object_or_404(EventModel, event_id)
        if user.has_role(event_id):
            return func(*args, **kwargs)
        else:
            raise PermissionDeniedError()
    return wrapper

The has_role(event_id) method of the User class determines if the user has a Role in an event.

# User Model class

    def has_role(self, event_id):
        """Checks if user has any of the Roles at an Event.
        """
        uer = UsersEventsRoles.query.filter_by(user=self, event_id=event_id).first()
        if uer is None:
            return False
        else:
            return True

Reading one particular event (/events/:id [GET]) can be restricted to users with roles, but a GET request to fetch all the events (/events [GET]) should only be available to staff (Admin and Super Admin). So a separate decorator to restrict access to staff members was needed.

def staff_only(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        user = UserModel.query.get(login.current_user.id)
        if user.is_staff:
            return func(*args, **kwargs)
        else:
            raise PermissionDeniedError()
    return wrapper

Service Permission Decorators

Service Permissions for a user are defined using Event Roles. What Role a user has in an Event determines what Services he has access to in that Event. Access here means permission to Create, Read, Update and Delete services. The User model class has four methods to determine the permissions for a Service in an event.

user.can_create(service, event_id)
user.can_read(service, event_id)
user.can_update(service, event_id)
user.can_delete(service, event_id)

So four decorators were needed to put alongside the POST, GET, PUT and DELETE method handlers. I’ve pasted the snippet for the can_update decorator. The rest are similar, but with their respective permission methods for the User class object.

def can_update(DAO):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            user = UserModel.query.get(login.current_user.id)
            event_id = kwargs.get('event_id')
            if not event_id:
                raise ServerError()
            # Check if event exists
            get_object_or_404(EventModel, event_id)
            service_class = DAO.model
            if user.can_update(service_class, event_id):
                return func(*args, **kwargs)
            else:
                raise PermissionDeniedError()
        return wrapper
    return decorator

This decorator is a little different from the can_access event decorator in that it takes an argument, DAO. DAO stands for Data Access Object. A DAO includes a database Model and methods to create, read, update and delete objects of that model. The db model for a DAO would be the Service class for the object. You can see that the model class is taken from the DAO and used as the service class.

The can_create, can_read and can_delete decorators look exactly the same except they use their (obvious) permission methods on the User class object.
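To give an idea of how these decorators end up on the API, here is a hedged sketch of method handlers (TrackResource and TrackDAO are illustrative names, not the project’s actual ones):

# illustrative usage on REST method handlers
class TrackResource(Resource):
    @can_read(TrackDAO)
    def get(self, event_id, track_id):
        ...  # fetch and return the track

    @can_update(TrackDAO)
    def put(self, event_id, track_id):
        ...  # update and return the track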


Building interactive elements with HTML and javascript: Interact.js + drag-drop

{ Repost from my personal blog @ https://blog.codezero.xyz/building-interactive-elements-with-html-and-javascript-interact-js-drag-drop }

In a few of the past blog posts, we saw how to implement drag-drop and resizing with HTML and javascript. The functionality was pretty basic, with simple drag-and-drop and resizing. That is where a javascript library called interact.js comes in.

interact.js is a lightweight, standalone JavaScript module for handling single-pointer and multi-touch drags and gestures with powerful features including inertia and snapping.

With Interact.js, building interactive elements is like eating a piece of your favorite cake – that easy!

Getting started with Interact.js

You have multiple options to include the library in your project.

  • You can use bower to install (bower install interact) (or)
  • npm (npm install interact.js) (or)
  • You could directly include the library from a CDN (https://cdnjs.cloudflare.com/ajax/libs/interact.js/1.2.6/interact.min.js).

Implementing a simple draggable

Let’s start with some basic markup. We’ll be using the draggable class to enable interact.js on this element.

<div id="box-one" class="draggable">  
  <p> I am the first Box </p>
</div>  
<div id="box-two" class="draggable">  
    <p> I am the second Box </p>
</div>

The first step in using interact.js is to create an interact instance, which you can create using interact('<the selector>'). Once the instance is created, you’ll have to call the draggable method on it to enable drag. Draggable accepts a javascript object with some configuration options and some pretty useful callbacks.

// target elements with the "draggable" class
interact('.draggable')  
  .draggable({
    // enable inertial throwing
    inertia: true,
    // keep the element within the area of its parent
    restrict: {
      restriction: "parent",
      endOnly: true,
      elementRect: { top: 0, left: 0, bottom: 1, right: 1 }
    },
    // enable autoScroll
    autoScroll: true,
    // call this function on every dragmove event
    onmove: dragMoveListener,
  });

  function dragMoveListener (event) {
    var target = event.target,
        // keep the dragged position in the data-x/data-y attributes
        x = (parseFloat(target.getAttribute('data-x')) || 0) + event.dx,
        y = (parseFloat(target.getAttribute('data-y')) || 0) + event.dy;

    // translate the element
    target.style.webkitTransform =
    target.style.transform =
      'translate(' + x + 'px, ' + y + 'px)';

    // update the position attributes
    target.setAttribute('data-x', x);
    target.setAttribute('data-y', y);
  }

Here we use the onmove event to move the box according to the dx and dy provided by interact when the element is dragged.

Implementing a simple drag-drop

Now to the above draggable, we’ll add a drop zone into which the two draggable boxes can be dropped.

<div id="dropzone" class="dropzone">You can drop the boxes here</div>

Similar to a draggable, we first create an interact instance. Then we call the dropzone method on it to tell interact that the div is to be considered a dropzone. The dropzone method accepts a javascript object with configuration options and callbacks.

// enable draggables to be dropped into this
interact('.dropzone').dropzone({  
  // Require a 50% element overlap for a drop to be possible
  overlap: 0.50,

  // listen for drop related events:

  ondropactivate: function (event) {
    // add active dropzone feedback
    event.target.classList.add('drop-active');
  },
  ondragenter: function (event) {
    var draggableElement = event.relatedTarget,
        dropzoneElement = event.target;

    // feedback the possibility of a drop
    dropzoneElement.classList.add('drop-target');
  },
  ondragleave: function (event) {
    // remove the drop feedback style
    event.target.classList.remove('drop-target');
  },
  ondrop: function (event) {
    event.relatedTarget.textContent = 'Dropped';
  },
  ondropdeactivate: function (event) {
    // remove active dropzone feedback
    event.target.classList.remove('drop-active');
    event.target.classList.remove('drop-target');
  }
});

Each event provides two important properties: relatedTarget, which gives us the DOM object of the draggable that is being dragged, and target, which gives us the DOM object of the dropzone. We can use these to provide visual feedback to the user while he/she is dragging.

Demo:

https://jsfiddle.net/niranjan94/yqwc4hqz/2/


What is sTeam

Whenever I tell someone that I am working for Fossasia under Google Summer of Code, I am asked to explain what my project is all about. Here I give a detailed introduction to sTeam and explain what it actually is.


Well, some of you must have heard of MUDs (Multi User Dungeons); sTeam is based on this concept. For those who don’t know what MUDs are, I will provide a brief introduction. MUDs are text-based games that create a kind of real-time virtual world. They are based on role playing, and the user acts as one of the organisms of the virtual world. He can do whatever that organism could have done in the real world. For example, if I represent a man in a virtual world, I can do whatever a man can do: walk, talk, eat and so on. This way the user interacts with his virtual environment.

This whole concept is used to create a virtual office space. Just like MUDs, sTeam creates a virtual office. Now how can we imagine an office? Well, it will have rooms; each room will have containers with different kinds of files; and there will be people moving around, working or having meetings. All of this is possible virtually in sTeam. The user also carries a rucksack and can copy objects into it as he moves around. This is what sTeam is.

sTeam provides two interfaces: web and command line. The web interface can be accessed at http://societyserver.org/ and the code is put up at https://github.com/societyserver/sTeam.


More similar projects are available under Fossasia. Check this site for more details: http://labs.fossasia.org/ideas.html.


Shell hacks

Custom Shell

Working with database models needs a lot of use of the flask shell. You can access it with:

python manage.py shell

The default shell is quite unintuitive. It doesn’t pretty-print the output and has no support for auto-completion. I installed IPython, which takes care of that. But working with models still means writing a lot of import statements. Plus, if there was some change in the app’s code, the shell had to be restarted for the changes to be loaded, which meant writing the import statements all over again.

We were using Flask-Script and I wanted to run a custom shell that imports all the required modules, models and helper functions, so I don’t have to write them over and over. Some of them were as long as:

from open_event.models.users_events_roles import UsersEventsRoles

So I created a custom shell command with a different context that overrides the default shell provided by the Flask-Script Manager. It was pretty easy with Flask-Script. One thing I had to keep in mind is that it needed to be in a different file than manage.py. Since manage.py was committed to the source repo, changes to it would be tracked, so I needed a different file that could be excluded from the repo. I created an smg.py that imports the Manager from the open_event module and overrides the shell command.

from flask_script import Shell

from open_event import manager
from open_event.models.user import User
from open_event.models.event import Event
from open_event.helpers.data import save_to_db, delete_from_db

def _make_context():
    return dict(
        uq=User.query,
        eq=Event.query,
        su=User.query.get(1),
        User=User,
        Event=Event,
        savetodb=save_to_db,
        deletefromdb=delete_from_db
    )


if __name__ == "__main__":
    manager.add_command('shell', Shell(make_context=_make_context))
    manager.run()

Place this smg.py file in the same directory as manage.py, so you can access it with python smg.py shell.

The code is pretty simple to understand. We import the Shell class from flask_script, create its object with our context and then add it to the manager as a command. _make_context contains what I usually like to have in my shell. It must always return a dictionary. The keys of this dictionary would be available as statements inside the shell with their values specified here.

This helps a lot. Most of the time I would be working with the super_admin user, and I would need its User object from time to time. The super_admin user is always going to be the first user (the User object with id 1). So instead of from open_event.models.user import User; su = User.query.get(1) I can just use the su variable. Models like User and Event are also readily available (as are their base queries). This is the default context that I always keep, but many times you need more models than the ones specified here, like when I was working with the permissions system.

from open_event.models.users_events_roles import UsersEventsRoles
from open_event.models.service import Service
from open_event.models.role import Role

def _make_context():
    return dict(
        # usual stuff
        UER=UsersEventsRoles,
        Service=Service,
        Role=Role
    )

You can even write a script that fetches all the database models (instances of the SQLAlchemy Model class) and then adds them to the _make_context dictionary, as sketched below. I like to keep it minimal and differently named, so there are no conflicts of classes when I try to auto-complete.
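A minimal sketch of that idea, assuming the declarative base is available as db.Model from the project:

def all_models(base):
    # recursively collect every subclass of the declarative base
    models = {}
    for cls in base.__subclasses__():
        models[cls.__name__] = cls
        models.update(all_models(cls))
    return models

def _make_context():
    context = dict()  # plus the usual stuff
    context.update(all_models(db.Model))
    return context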

One more thing: you need to exclude the smg.py file so that git doesn’t track it. You can simply add it to the .git/info/exclude file.
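For example:

echo "smg.py" >> .git/info/exclude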

I also wrote some useful one-liner bash commands.

Revision History

Every time someone updates the database models, he needs to migrate and provide the migrations file to others by committing it to the source. These files help us upgrade our already defined database. We work with Alembic. Regarding alembic revisions (migration files), keep two things in mind. One is the Head, which tracks the latest revision; the other is Current, which specifies the revision your database tables are based on. If your current revision is not a head, your database tables are not up-to-date and you need to upgrade (python manage.py db upgrade). There can be multiple heads in the revision history. python manage.py db heads displays all of them. The current revision can be fetched with python manage.py db current. I wanted something that automatically checks if current is at a head.

python manage.py db history | grep -E --color "$(python manage.py db current 2> /dev/null)|$(python manage.py db heads)|$"

You can specify it as an alias in .bashrc. This command displays the revision history with the head and current revision being colored.


You can identify the head as it is always appended by “(head)”. The other colored revision would be the current.

You can also reverse the output by printing it bottom to top. This is helpful when the revision history gets long and you have to scroll up to see the head.

alias rev='python manage.py db history | tac | grep -E --color "$(python manage.py db current 2> /dev/null)|$(python manage.py db heads)|$"'


Current Revision changer

The current revision is maintained in the database in the one-column alembic_version table. If for some reason the migration file for the current revision no longer exists (like when changing to a git branch that doesn’t have the file), alembic raises an error. So to change the revision in the database I wrote a bash function.

change_db_current () {
    echo "Current Revision: "
    read current_rev
    echo "New Revision: "
    read new_rev

    # "test" is the database name; \\ ends the \c meta-command so the SQL can follow
    echo '\c test \\ update alembic_version set version_num='"'"$new_rev"'"' where version_num='"'"$current_rev"'"';' | sudo su - postgres -c psql
}

Enter the current revision hash (displayed in the alembic error) and the revision you need to point the current to, and it will update the table with the new revision. Sick, right? It’s pretty stupid actually; it was from before I knew how to stamp in alembic (python manage.py db stamp head does this properly). Anyways, it was useful some time ago.


Deploying a Kivy Application with PyInstaller for Mac OSX with Travis CI to Github

In this sprint for the kniteditor library we focused on automatic deployment for Windows and Mac. The idea: whenever a tag is pushed to Github, a new Travis build is triggered. The newly built app is then uploaded to Github as a “.dmg” file.

Travis

Travis is configured with the “.travis.yml” file which you can see here:

language: python

# see https://docs.travis-ci.com/user/multi-os/
matrix:
  include:
    - os: linux
      python: 3.4
    - os: osx
      language: generic
  allow_failures:
    - os: osx

install:
  - if [ "$TRAVIS_OS_NAME" == "osx"   ] ; then   mac-build/install.sh ; fi

script:
  - if [ "$TRAVIS_OS_NAME" == "osx"   ] ; then   mac-build/test.sh ; fi

before_deploy:
  - if [ "$TRAVIS_OS_NAME" == "osx"   ] ; then cp mac-build/dist/KnitEditor.dmg /Users/travis/KnitEditor.dmg ; fi

deploy:
  # see https://docs.travis-ci.com/user/deployment/releases/
  - provider: releases
    api_key:
      secure: v18ZcrXkIMgyb7mIrKWJYCXMpBmIGYWXhKul4/PL/TVpxtg2f/zfg08qHju7mWnAZYApjTV/EjOwWCtqn/hm2CfPFo=
    file: /Users/travis/KnitEditor.dmg
    on:
      tags: true
      condition:  "\"$TRAVIS_OS_NAME\" == \"osx\""
      repo: AllYarnsAreBeautiful/kniteditor

Note that it builds both Linux and OSX. Thus, for each step one must distinguish. Here, only the OSX parts are shown. These steps are executed:

  1. Installation. The app and dmg files are built.
  2. Testing. The tests are shipped with the app in our case. This allows us to execute them at many more locations – where the user is.
  3. Before Deploy. Somehow Travis did not manage to upload from the original location; maybe it was a bug. Thus, an absolute path was created for use in (4).
  4. Deployment to github. In this case we use an API key. One could also use a password.

Installation:

#!/bin/bash
#
# execute with --user to pip install in the user home
#
set -e

HERE="`dirname \"$0\"`"
USER="$1"
cd "$HERE"

brew update

echo "# install python3"
brew install python3
echo -n "Python version: "
python3 --version
python3 -m pip install --upgrade pip

echo "# install pygame"
python3 -m pip uninstall -y pygame || true
# locally compiled pygame version
# see https://bitbucket.org/pygame/pygame/issues/82/homebrew-on-leopard-fails-to-install#comment-636765
brew install sdl sdl_image sdl_mixer sdl_ttf portmidi
brew install mercurial || true
python3 -m pip install $USER hg+http://bitbucket.org/pygame/pygame

echo "# install kivy dependencies"
brew install sdl2 sdl2_image sdl2_ttf sdl2_mixer gstreamer

echo "# install requirements"
python3 -m pip install $USER -I Cython==0.23 \
                       --install-option="--no-cython-compile"
USE_OSX_FRAMEWORKS=0 python3 -m pip install $USER kivy
python3 -m pip uninstall -y Cython==0.23
python3 -m pip install $USER -r ../requirements.txt
python3 -m pip install $USER -r ../test-requirements.txt
python3 -m pip install $USER PyInstaller

./build.sh $USER

The first step is to update brew. It cost me 4 hours to find this bug, and 2 hours to work around it before that. If brew is not updated, Python 3.4 is installed instead of Python 3.5.

Then Python, Pygame (as the window provider for Kivy) and the other requirements are installed. It goes on with the build step. While installation is executed once on a personal Mac, the build step is executed several times, whenever the source code changes.

#!/bin/bash
#
# execute with --user to make pip install in the user home
#
set -e

HERE="`dirname \"$0\"`"
USER="$1"
cd "$HERE"

(
  cd ..

  echo "# removing old installation of kniteditor"
  python3 -m pip uninstall -y kniteditor || true

  echo "# build the distribution"
  python3 -m pip install $USER wheel
  python3 setup.py sdist --formats=zip
  python3 setup.py bdist_wheel
  python3 -m pip uninstall -y wheel

  echo "# install the current version from the build"
  python3 -m pip install $USER --upgrade dist/kniteditor-`linux-build/package_version`.zip

  echo "# install test requirements"
  python3 -m pip install $USER --upgrade -r test-requirements.txt
)

echo "# build the app"
# see https://pythonhosted.org/PyInstaller/usage.html
python3 -m PyInstaller -d -y KnitEditor.spec

echo "# create the .dmg file"
# see http://stackoverflow.com/a/367826/1320237
KNITEDITOR_DMG="`pwd`/dist/KnitEditor.dmg"
rm -f "$KNITEDITOR_DMG"
hdiutil create -srcfolder dist/KnitEditor.app "$KNITEDITOR_DMG"

echo "The installer can be found in \"$KNITEDITOR_DMG\"."

In the first steps we install the kniteditor from the built “sdist” zip file. This way we can uninstall it with pip. Also, the dependencies are installed. Then PyInstaller is invoked with the spec, and the .dmg file is created.

The spec looks like this:

# -*- mode: python -*-

import sys
site_packages = [path for path in sys.path if path.rstrip("/\\").endswith('site-packages')]
print("site_packages:", site_packages)

from kivy.tools.packaging.pyinstaller_hooks import get_deps_all, \
    hookspath, runtime_hooks

block_cipher = None

added_files = [(site_packages_, ".") for site_packages_ in site_packages]

kwargs = get_deps_all()
kwargs["datas"] = added_files
kwargs["hiddenimports"] += ['queue', 'unittest', 'unittest.mock']


a = Analysis(['_KnitEditor.py'],
             pathex=[],
             binaries=None,
             win_no_prefer_redirects=False,
             win_private_assemblies=False,
             cipher=block_cipher,
             hookspath=hookspath(),
             runtime_hooks=runtime_hooks(),
             **kwargs)
pyz = PYZ(a.pure, a.zipped_data,
             cipher=block_cipher)
exe = EXE(pyz,
          a.scripts,
          exclude_binaries=True,
          name='KnitEditorX',
          debug=False,
          strip=False,
          upx=True,
          console=True )
coll = COLLECT(exe,
               a.binaries,
               a.zipfiles,
               a.datas,
               strip=False,
               upx=True,
               name='KnitEditor')
app = BUNDLE(coll,
             name='KnitEditor.app',
             icon=None,
             bundle_identifier="com.ayab-knitting.KnitEditor")

Note that all files in all site-packages are included. This way, we do not need to cope with missing modules. Also, there are three different names for

  • the entry script “_KnitEditor.py”
  • the executable “KnitEditorX”
  • the library “kniteditor”

While OSX is case-sensitive on the command line, it is not case-sensitive on the file system. Thus, if any two of the names differ only in case, we can get errors during the PyInstaller build.

Lessons learned

Do “brew update” on travis.

Use absolute paths for deployment on Mac OSX travis.

Never use the same names in PyInstaller for the main script, a library and the executable. Otherwise you get a “not a directory” or “not a file” error.

Travis OSX builds max out from time to time. It is much faster to have a Mac computer there to create the scripts.
