Using Flask-REST-JSONAPI’s Resource Manager In Open Event API Server

For the nextgen Open Event API Server, we are using flask-rest-jsonapi to write all the API endpoints. flask-rest-jsonapi is based on the JSON API 1.0 specification for JSON object responses. In this blog post, I describe how I wrote the API schema and endpoints for an already existing database model in the Open Event API Server. Following this blog post, you can learn how to write similar classes for your own database models.

Let's dive into how the API schema is defined for any resource in the Open Event API Server. A resource, here, is an object based on a database model. It provides a link between the data layer and your logical data abstraction. The ResourceManager has three classes:

- Resource List
- Resource Detail
- Resource Relationship

(We'll take a look at the Speakers API.) First, we see the already implemented Speaker model:

```python
class Speaker(db.Model):
    """Speaker model class"""
    __tablename__ = 'speaker'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False)
    photo = db.Column(db.String)
    website = db.Column(db.String)
    organisation = db.Column(db.String)
    is_featured = db.Column(db.Boolean, default=False)
    sponsorship_required = db.Column(db.Text)

    def __init__(self, name=None, photo_url=None, website=None,
                 organisation=None, is_featured=False, sponsorship_required=None):
        self.name = name
        self.photo = photo_url
        self.website = website
        self.organisation = organisation
        self.is_featured = is_featured
        self.sponsorship_required = sponsorship_required
```

Here's the Speaker API schema:

```python
class SpeakerSchema(Schema):
    class Meta:
        type_ = 'speaker'
        self_view = 'v1.speaker_detail'
        self_view_kwargs = {'id': '<id>'}

    id = fields.Str(dump_only=True)
    name = fields.Str(required=True)
    photo_url = fields.Url(allow_none=True)
    website = fields.Url(allow_none=True)
    organisation = fields.Str(allow_none=True)
    is_featured = fields.Boolean(default=False)
    sponsorship_required = fields.Str(allow_none=True)


class SpeakerList(ResourceList):
    schema = SpeakerSchema
    data_layer = {'session': db.session, 'model': Speaker}


class SpeakerDetail(ResourceDetail):
    schema = SpeakerSchema
    data_layer = {'session': db.session, 'model': Speaker}


class SpeakerRelationship(ResourceRelationship):
    schema = SpeakerSchema
    data_layer = {'session': db.session, 'model': Speaker}
```

The last piece of code lists the actual endpoints in the `__init__` file for flask-rest-jsonapi:

```python
api.route(SpeakerList, 'speaker_list',
          '/events/<int:event_id>/speakers',
          '/sessions/<int:session_id>/speakers',
          '/users/<int:user_id>/speakers')
api.route(SpeakerDetail, 'speaker_detail', '/speakers/<int:id>')
api.route(SpeakerRelationship, 'speaker_event', '/speakers/<int:id>/relationships/event')
api.route(SpeakerRelationship, 'speaker_user', '/speakers/<int:id>/relationships/user')
api.route(SpeakerRelationship, 'speaker_session', '/speakers/<int:id>/relationships/sessions')
```

How to write an API schema from a database model? Each column of the database model is a field in the API schema. These are marshmallow fields and can be of several data types: String, Integer, Float, DateTime, Url. Three class definitions follow the Schema class.

List: The SpeakerList class is the basis of these endpoints:

```python
api.route(SpeakerList, 'speaker_list',
          '/events/<int:event_id>/speakers',
          '/sessions/<int:session_id>/speakers',
          '/users/<int:user_id>/speakers')
```

This class contains methods that generate a list of speakers based on the id passed in view_kwargs. Let's say that '/sessions/<int:session_id>/speakers' is requested. As the view_kwargs here contains session_id, the query methods in the SpeakerList class will fetch a list of speaker profiles related to the session identified by session_id. flask-rest-jsonapi allows GET and POST methods for ResourceList. When using these endpoints for POST, the before_create_object and before_post methods can be written. These methods are overridden from the base ResourceList class in flask-rest-jsonapi/resource.py when they are defined in Speaker's class.
Detail: The SpeakerDetail class provides this endpoint:

```python
api.route(SpeakerDetail, 'speaker_detail', '/speakers/<int:id>')
```

Resource Detail provides methods that facilitate GET, PATCH and DELETE requests for this endpoint. Methods like before_get_object, before_update_object and after_update_object are derived from the ResourceDetail class. The endpoints return an object of the resource based on the view_kwargs…
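To make the view_kwargs mechanism concrete, here is a plain-Python sketch of the kind of filtering a list resource's query performs when session_id is present in the URL. The data and the helper name are hypothetical stand-ins, not code from the Open Event Server:

```python
# Hypothetical in-memory stand-in for the Speaker table, used only to
# illustrate how view_kwargs captured from the URL drive the query.
SPEAKERS = [
    {"id": 1, "name": "Alice", "session_id": 10},
    {"id": 2, "name": "Bob", "session_id": 20},
    {"id": 3, "name": "Carol", "session_id": 10},
]

def speaker_list_query(view_kwargs):
    """Mimic a ResourceList query method: narrow the speaker list
    based on whichever id was captured from the URL."""
    if "session_id" in view_kwargs:
        return [s for s in SPEAKERS if s["session_id"] == view_kwargs["session_id"]]
    return list(SPEAKERS)

# A request to '/sessions/10/speakers' resolves with view_kwargs={'session_id': 10}
print([s["name"] for s in speaker_list_query({"session_id": 10})])
```

In the real SpeakerList class the same narrowing is done with SQLAlchemy filters against the data layer's session.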


Documenting Open Event API Using API-Blueprint

FOSSASIA's Open Event Server API documentation is written using API Blueprint. API Blueprint is a language for describing an API in a blueprint file (or a set of files). To work with the blueprint, the Apiary editor is used. This editor renders the API Blueprint and prints the result as human-readable API documentation. We create the API blueprint manually.

Using API Blueprint: We create the API blueprint by first adding the name and metadata for the API we aim to design. This step looks like this:

```apib
FORMAT: V1
HOST: https://api.eventyay.com

# Open Event API Server

The Open Event API Server

# Group Authentication

The API uses JWT Authentication to authenticate users to the server. For authentication, you need to be a registered user. Once you have registered yourself as a user, you can send a request to get the access_token. You then need to use this access_token in the Authorization header while sending a request, in the following manner:

`Authorization: JWT <access_token>`
```

An API blueprint starts with the metadata; here FORMAT and HOST are the defined metadata. The FORMAT keyword specifies the version of API Blueprint. HOST defines the host for the API. A heading starts with # and the first heading is regarded as the name of the API. NOTE: all headings start with one or more # symbols. The number of symbols indicates the level of the heading: one # serves as the top level, ## as the second level, and so on. This is in compliance with normal markdown format. Following the heading section comes the description of the API. Further headings are used to break up the description section.

Resource Groups:
-----------------------------
By using the Group keyword at the start of a heading, we create a group of related resources.
Just like in the below screenshot, we have created a Group Users:

```apib
# Group Users

For using the API you need (mostly) to register as a user. Registering gives you access to all non-admin API endpoints. After registration, you need to create your JWT access token to send requests to the API endpoints.

| Parameter  | Description          | Type   | Required |
|:-----------|----------------------|--------|----------|
| `name`     | Name of the user     | string | -        |
| `password` | Password of the user | string | **yes**  |
| `email`    | Email of the user    | string | **yes**  |
```

Resources:
------------------
In the Group Users we have created a resource, Users Collection. The heading specifies the URI used to access the resource inside square brackets after the heading. We have used parameters for the resource URI here, which takes us into how to add parameters to the URI. The below code shows us how to add parameters to the resource URI. ## Users Collection…
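To give a concrete picture of how URI parameters are declared, a resource with a parameter section might look like the following sketch. The resource name and parameter here are illustrative, following the API Blueprint specification's `+ Parameters` syntax rather than copied from the Open Event blueprint:

```apib
## Users Details [/v1/users/{user_id}]

+ Parameters
    + user_id: 1 (integer, required) - ID of the user in the form of an integer

### Get Details [GET]

+ Response 200 (application/vnd.api+json)
```

Each placeholder in the URI (here `{user_id}`) gets a matching entry under `+ Parameters`, with its example value, type, requiredness and description.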


Open Event Server: Working with Migration Files

FOSSASIA's Open Event Server uses alembic migration files to handle all database operations and updates. From creating tables to updating tables and the database, everything works with the help of the migration files. However, we often miss out on the fact that automatically generated migration files mainly drop and add columns rather than just changing them. One example of this would be:

```python
def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.add_column('session', sa.Column('submission_date', sa.DateTime(), nullable=True))
    op.drop_column('session', 'date_of_submission')
```

Here, the idea was to rename date_of_submission to submission_date, which resulted in the whole column being dropped and a new column being created. We realize that, on doing so, all the data under date_of_submission is lost. How to solve that? Here are two ways we can follow up:

op.alter_column:
----------------------------------
When the update is as simple as changing column names, we can use this. As discussed above, if we migrate directly after changing a column in our model, the automatic migration created would drop the old column and create a new column with the changes. But doing this in production would cause a huge loss of data, which we don't want. Suppose we want to just change the name of the column start_time to starts_at. We don't want the entire column to be dropped. So an alternative is using op.alter_column. The two necessary parameters of op.alter_column are the table name and the column you are willing to alter. The other parameters hold the new changes. Some of the commonly used parameters are:

- nullable - Optional: specify True or False to alter the column's nullability.
- new_column_name - Optional: specify a string name here to indicate the new name within a column rename operation.
- type_ - Optional: a TypeEngine type object to specify a change to the column's type. For SQLAlchemy types that also indicate a constraint (i.e. Boolean, Enum), the constraint is also generated.
- autoincrement - Optional: set the AUTO_INCREMENT flag of the column; currently understood by the MySQL dialect.
- existing_type - Optional: a TypeEngine type object to specify the previous type. This is required for all column alter operations that don't otherwise specify a new type, as well as for when nullability is being changed on a column.

So, for example, if you want to change a column name from "start_time" to "starts_at" in the events table, you would write:

```python
op.alter_column('events', 'start_time', new_column_name='starts_at')
```

```python
def upgrade():
    ### commands auto generated by Alembic - please adjust! ###
    op.alter_column('sessions_version', 'end_time', new_column_name='ends_at')
    op.alter_column('sessions_version', 'start_time', new_column_name='starts_at')
    op.alter_column('events_version', 'end_time', new_column_name='ends_at')
    op.alter_column('events_version', 'start_time', new_column_name='starts_at')
```

Here, sessions_version and events_version are the tables whose columns start_time and end_time are altered to starts_at and ends_at with the op.alter_column parameter new_column_name.

op.execute:
--------------------
Now with alter_column, most alterations of column names, constraints or types are achievable. But there can be a separate scenario for changing the column properties. Suppose I change a table with a column "aspect_ratio", which was a string column and had values "on" and…
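When a column's values need translating as part of such a migration (say, legacy string flags into a boolean), the mapping itself is simple. Here it is sketched in plain Python; the accepted values are an assumption based on the scenario above, and inside a real migration this would typically be applied with op.execute() as a raw UPDATE statement rather than row by row in Python:

```python
def to_boolean(value):
    """Map legacy string flags (e.g. an 'aspect_ratio'-style column
    holding 'on'/'off') to the boolean the new column expects."""
    return value in ("on", "yes", "true", "1")

print(to_boolean("on"), to_boolean("off"))
```

Keeping the mapping explicit like this makes it easy to review which legacy values end up as True before the destructive part of the migration runs.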


Adding Global Search and Extending Bookmark Views in Open Event Android

When we design an application, it is essential that the design and feature set enable the user to find all the relevant information she or he is looking for. In the first versions of the Open Event Android App it was difficult to find the Sessions and Speakers related to a certain Track; it was only possible to search for them individually. The user also could not view bookmarks on the Main Page but had to go to a separate tab to view them. These were some capabilities I wanted to add to the app. In this post I will outline the concepts and advantages of a Global Search and a Home Screen in the app. I took inspiration from the Google I/O 2017 App, which already had these features, and I will demonstrate how I added a Home Screen that also enables users to view their bookmarks on the Home Screen itself.

Global Search vs. Local Search

If we observe the above images closely, we can see a stark difference in the capabilities of each search. Note how in the Local Search we are only able to search within the Tracks section and nothing else. This is fixed in the Global Search page, which exists along with the new home screen. As all the results that a user might need are obtained from a single search, it improves the overall user experience of the app. Another noticeable feature missing in the previous iteration of the application was that a user had to go to a separate tab to view his/her bookmarks. It is better for the app to have a home page detailing all the event's/conference's details as well as displaying user bookmarks on the homepage.

New Home

The above posted images/gifs show the functioning and the UI/UX of the new home screen within the app. Currently I am working to further improve the way the bookmarks are displayed. The new home screen provides the user with the event details, i.e. FOSSASIA 2017 in this case.
This would be different for each conference/event, and the data is fetched from the open-event-orga server (the first part of the project) if it doesn't already exist in the JSON files provided in the assets folder of the application. All the event information is populated from the JSON files provided in the assets folder in the app directory structure:

- config.json
- sponsors.json
- microlocations.json
- event.json (this stores the information that we see on the home screen)
- sessions.json
- speakers.json
- track.json

All the file names are descriptive enough to denote what each of them stores. I hope that I have put forward why the addition of a new Home with bookmarks, along with the Global Search feature, was a neat addition to the app.

Link to PR for this feature: https://github.com/fossasia/open-event-android/pull/1565

Resources

- https://guides.codepath.com/android/Heterogenous-Layouts-inside-RecyclerView
- https://developer.android.com/training/search/index.html


Open Event Server: No (no-wrap) Ellipsis using jquery!

Yes, the title says it all, i.e., enabling multiple-line ellipsis. This was used to solve an issue to keep the Session abstract view within 200 characters (#3059) on FOSSASIA's Open Event Server project. There is one way to ellipsize a paragraph in HTML/CSS, and that is by using the text-overflow property:

```css
.div_class {
    white-space: nowrap;
    overflow: hidden;
    text-overflow: ellipsis;
}
```

But the downside of this is the one-line ellipsis. Eg: My name is Medozonuo. I am..... And here you might pretty much want to ellipsize after a few characters in multiple lines, given that your div space is small and you do want to wrap your paragraph. Or maybe not. So jQuery to the rescue. There are two ways you can easily do this multiple-line ellipsis:

1) Height-Ellipsis (using the do-while loop):

```javascript
//script:
if ($('.div_class').height() > 100) {
    var words = $('.div_class').html().split(/\s+/);
    words.push('...');
    do {
        words.splice(-2, 1);
        $('.div_class').html(words.join(' '));
    } while ($('.div_class').height() > 100);
}
```

Here, you check the div content's height, split the paragraph after that certain height and add a "...", the do-while making sure that the paragraphs are in multiple lines and not in one single line. But watch out for that infinite loop.

2) Length-Ellipsis (using the substring function):

```javascript
//script:
$.each($('.div_class'), function() {
    if ($(this).html().length > 100) {
        var cropped_words = $(this).html();
        cropped_words = cropped_words.substring(0, 200) + "...";
        $(this).html(cropped_words);
    }
});
```

Here, you check the length/characters rather than the height, take the substring of the content from the 0th character to the 200th character and then add in an extra "...". This is exactly how I used it in the code.
```javascript
$.each($('.short_abstract'), function() {
    if ($(this).html().length > 200) {
        var words = $(this).html();
        words = words.substring(0, 200) + "...";
        $(this).html(words);
    }
});
```

So paragraphs can be ellipsized over both heights and lengths using jQuery, likewise.
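The same length-based truncation logic, stripped of the DOM handling, can be sketched in a few lines of Python for clarity (the 200-character cutoff mirrors the issue's requirement; the function name is mine):

```python
def ellipsize(text, limit=200):
    """Truncate text to `limit` characters and append '...' —
    the analogue of the jQuery substring approach above."""
    if len(text) > limit:
        return text[:limit] + "..."
    return text

print(ellipsize("My name is Medozonuo. I am a developer.", limit=25))
```

Note that, like the jQuery version, this can cut a word in half at the boundary; the height-based approach avoids that by splitting on whitespace first.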


DetachedInstanceError: Dealing with Celery, Flask’s app context and SQLAlchemy in the Open Event Server

In the Open Event Server project, we had chosen to go with Celery for async background tasks. From the official website:

What is Celery? Celery is an asynchronous task queue/job queue based on distributed message passing.

What are tasks? The execution units, called tasks, are executed concurrently on one or more worker servers using multiprocessing.

After the tasks had been set up, an error constantly came up whenever a task was called. The error was:

```
DetachedInstanceError: Instance <User at 0x7f358a4e9550> is not bound to a Session; attribute refresh operation cannot proceed
```

This error usually occurs when you try to access the session object after it has been closed. It may have been closed by an explicit session.close() call or after committing the session with session.commit(). The celery tasks in question were performing some database operations, so the first thought was that maybe these operations might be causing the error. To test this theory, the celery task was changed to:

```python
@celery.task(name='lorem.ipsum')
def lorem_ipsum():
    pass
```

But sadly, the error still remained. This proved that the celery task was just fine and the session was being closed whenever the celery task was called. The method in which the celery task was being called was of the following form:

```python
def restore_session(session_id):
    session = DataGetter.get_session(session_id)
    session.deleted_at = None
    lorem_ipsum.delay()
    save_to_db(session, "Session restored from Trash")
    update_version(session.event_id, False, 'sessions_ver')
```

In our app, the app_context was not being passed whenever a celery task was initiated. Thus the celery task, whenever called, closed the previous app_context, eventually closing the session along with it. The solution to this error is to follow the pattern suggested at http://flask.pocoo.org/docs/0.12/patterns/celery/.
```python
def make_celery(app):
    celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    task_base = celery.Task

    class ContextTask(task_base):
        abstract = True

        def __call__(self, *args, **kwargs):
            if current_app.config['TESTING']:
                with app.test_request_context():
                    return task_base.__call__(self, *args, **kwargs)
            with app.app_context():
                return task_base.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

celery = make_celery(current_app)
```

The __call__ method ensures that the celery task is provided with a proper app context to work with.
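The mechanics of that override can be seen in a minimal, framework-free sketch: a stand-in Task base class and a context manager play the roles of celery.Task and app.app_context(). All names here are illustrative, not Celery or Flask APIs:

```python
from contextlib import contextmanager

events = []

@contextmanager
def fake_app_context():
    # Stand-in for Flask's app.app_context(): push on entry, pop on exit
    events.append("context pushed")
    try:
        yield
    finally:
        events.append("context popped")

class Task:
    # Stand-in for celery.Task: calling the task runs its body
    def __call__(self, *args, **kwargs):
        return self.run(*args, **kwargs)

class ContextTask(Task):
    # Wrap every task invocation in an app context, as in the pattern above
    def __call__(self, *args, **kwargs):
        with fake_app_context():
            return super().__call__(*args, **kwargs)

class RestoreSession(ContextTask):
    def run(self, session_id):
        events.append("task ran for %d" % session_id)
        return session_id

result = RestoreSession()(42)
print(events)
```

The task body always executes between the push and the pop, so any session objects it touches are bound to a live context instead of a closed one.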


Building the Scheduler UI

{ Repost from my personal blog @ https://blog.codezero.xyz/building-the-scheduler-ui }

If you hadn't already noticed, Open Event has got a shiny new feature: a graphical and interactive scheduler to organize sessions into their respective rooms and timings. As you can see in the above screenshot, we have a timeline on the left and a lot of session boxes to its right. All the boxes are re-sizable and drag-drop-able. The columns represent the different rooms (a.k.a. micro-locations). The sessions can be dropped into their respective rooms.

Above the timeline is a toolbar that controls the date. The timeline can be changed for each date by clicking on the respective date button. The Clear overlaps button automatically checks the timeline and removes any sessions that are overlapping each other; the removed sessions are moved to the unscheduled sessions pane at the left. The Add new micro-location button can be used to instantly add a new room: a modal dialog opens and the micro-location is instantly added to the timeline once saved. The Export as iCal button allows the organizer to export all the sessions of that event in the popular iCalendar format, which can then be imported into various calendar applications. The Export as PNG button saves the entire timeline as a PNG image file, which can then be printed by the organizers or circulated via other means if necessary.

Core Dependencies
The scheduler makes use of some javascript libraries for the implementation of most of the core functionality:

- Interact.js - for drag-and-drop and resizing
- Lodash - for array/object manipulations and object cloning
- jQuery - for DOM manipulation
- Moment.js - for date time parsing and calculation
- Swagger JS - for communicating with our API, which is documented according to the swagger specs

Retrieving data via the API
The swagger js client is used to obtain the sessions data using the API. The client is asynchronously initialized on page load.
The client can be accessed from anywhere using the javascript function initializeSwaggerClient. The initialization function accepts a callback, which is called once the client has been initialized; if the client is already initialized, the callback is called right away.

```javascript
var swaggerConfigUrl = window.location.protocol + "//" + window.location.host + "/api/v2/swagger.json";
window.swagger_loaded = false;

function initializeSwaggerClient(callback) {
    if (!window.swagger_loaded) {
        window.api = new SwaggerClient({
            url: swaggerConfigUrl,
            success: function () {
                window.swagger_loaded = true;
                if (callback) {
                    callback();
                }
            }
        });
    } else {
        if (callback) {
            callback();
        }
    }
}
```

For getting all the sessions of an event, we can do:

```javascript
initializeSwaggerClient(function () {
    api.sessions.get_session_list({event_id: eventId}, function (sessionData) {
        var sessions = sessionData.obj;
        // Here we have an array of session objects
    });
});
```

In a similar fashion, all the micro-locations of an event can also be loaded.

Processing the sessions and micro-locations
Each session object is looped through, its start time and end time are parsed into moment objects, its duration is calculated, and its distance from the top in the timeline is calculated in pixels. The new object, with the additional information, is stored…
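The position math reduces to minutes-since-timeline-start times a pixels-per-minute scale. Here is that calculation sketched in Python (the real scheduler does this with moment.js, and the scale factor here is an assumption for illustration, not the app's actual value):

```python
from datetime import datetime

PIXELS_PER_MINUTE = 2  # assumed scale; the real value depends on the UI

def session_box(start, end, day_start):
    """Compute a session's duration, its rendered height, and its
    offset from the top of the timeline, all in pixels."""
    duration_min = int((end - start).total_seconds() // 60)
    top_px = int((start - day_start).total_seconds() // 60) * PIXELS_PER_MINUTE
    return {"duration_min": duration_min,
            "height_px": duration_min * PIXELS_PER_MINUTE,
            "top_px": top_px}

day = datetime(2017, 3, 17, 9, 0)   # timeline starts at 09:00
box = session_box(datetime(2017, 3, 17, 10, 0),
                  datetime(2017, 3, 17, 10, 45), day)
print(box)
```

A 45-minute session starting an hour into the day thus gets a 90 px tall box placed 120 px from the top, which is exactly the information the drag-and-drop layer needs.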


Unit Testing

There are many stories about unit testing. Developers sometimes say that they don't write tests because they write good quality code. Does that make sense, if no one is infallible? In university, only a few teachers talk about unit testing, and they only show basic examples. They require us to write a few tests to finish a final project, but nobody really teaches us the importance of unit testing. I have also always wondered what benefits it can bring. As time is a really important factor in our work, it often happens that we simply resign from this part of the development process to get "more time", rather than spend it writing "stupid" tests. But now I know that it is a vicious circle. Customers' requirements do not help us, either. They put high pressure on seeing visible results, not a few statistics about coverage status; none of them cares about some strange numbers. So, as I mentioned above, we usually focus on building new features and get rid of tests. It may seem to save time, but it doesn't. In reality, tests save us a lot of time, because we can identify and fix bugs very quickly. If a bug occurs because of someone's change, we don't have to spend long hours trying to figure out what is going on. That's why we need tests.

It is especially visible in huge open source projects. The FOSSASIA organization has about 200 contributors. In the Open Event project we have about 20 active developers, who generate many lines of code every single day. Many of those lines change over and over again, as well as interfere with each other. Let me provide you with a simple example. In our team we have about 7 pull requests per day. As I mentioned above, we want to make our code high quality and free of bugs, but without testing, identifying whether a pull request causes a bug is a very difficult task. Fortunately, Travis CI does this boring job for us. It is a great tool which takes our tests and runs them on every PR to check if bugs occur.
It helps us to quickly notice bugs and maintain our project very well.

What is unit testing? Unit testing is a software development method in which the smallest testable parts of an application are tested.

Why do we need to write unit tests? Let me list the arguments why unit testing is really important while developing a project.

- To prove that our code works properly. If a developer adds another condition, a test checks whether the method still returns correct results. You simply don't need to wonder if something is wrong with your code.
- To reduce the amount of bugs. Tests let you know what input parameters a function should get and what results it should return. You simply don't write unused code.
- To save development time. Developers don't waste time checking after every change whether the code still works correctly.
- Unit tests help to understand software design.
- To provide quick feedback…
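A minimal example of what such a test looks like with Python's unittest module; the function under test is made up for illustration:

```python
import unittest

def ticket_price(base, is_member):
    """Toy function under test: members get a 20% discount."""
    return base * 0.8 if is_member else base

class TicketPriceTest(unittest.TestCase):
    def test_member_discount(self):
        # Members pay 80% of the base price
        self.assertEqual(ticket_price(100, True), 80.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(ticket_price(100, False), 100)
```

Saved in a file named, say, test_tickets.py (a hypothetical name), the tests run with `python -m unittest test_tickets`; a CI service like Travis runs the same command on every pull request.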


How can you get an access to Instagram API?

First of all, you need to know that the Instagram API uses the OAuth 2.0 protocol. OAuth 2.0 provides a specific authorization flow for web apps, desktop apps and mobile apps. Instagram requires authentication before getting information from their API, but don't be afraid, it's very simple.

Pre-requirements:

- A created account on Instagram
- A registered client (you can create your own client here)

Requirements:

- CLIENT_ID - 79e1a142dbeabd57a3308c52ad43e31d
- CLIENT_SECRET - 34a6834081c44c20bd11e0a112a6adg1
- REDIRECT_URI - http://127.0.0.1:8001/iCallback

You can get the above information from https://www.instagram.com/developer/clients/manage/

CODE - You need to open the page https://api.instagram.com/oauth/authorize/?client_id=CLIENT-ID&redirect_uri=REDIRECT-URI&response_type=code — in my case: https://api.instagram.com/oauth/authorize/?client_id=79e1a142dbeabd57a3308c52ad43e31d&redirect_uri=http://127.0.0.1:8001/iCallback&response_type=code

You will be redirected to http://your-redirect-uri?code=CODE. In my case it looks like this: http://127.0.0.1:8001/iCallback?code=2e122f3d76e8125b8b4982f16ed623c2

Now we have all the information to get an access token!

```shell
curl -F 'client_id=CLIENT_ID' \
     -F 'client_secret=CLIENT_SECRET' \
     -F 'grant_type=authorization_code' \
     -F 'redirect_uri=REDIRECT_URI' \
     -F 'code=CODE' \
     https://api.instagram.com/oauth/access_token
```

If everything is OK, you should receive:

```json
{
    "access_token": "fb2e77d.47a0479900504cb3ab4a1f626d174d2d",
    "user": {
        "id": "1574083",
        "username": "rafal_kowalski",
        "full_name": "Rafal Kowalski",
        "profile_picture": "..."
    }
}
```

In Open Event, we used it to get all media from Instagram, for example to use as a background in an event details page:

```shell
curl 'https://api.instagram.com/v1/users/self/media/recent/?access_token=ACCESS_TOKEN'
```
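The first step, assembling the authorization URL the user must visit, can be scripted with the Python standard library alone. The client values below are the placeholders from above, not real credentials:

```python
from urllib.parse import urlencode

CLIENT_ID = "79e1a142dbeabd57a3308c52ad43e31d"   # placeholder from above
REDIRECT_URI = "http://127.0.0.1:8001/iCallback"

def authorize_url(client_id, redirect_uri):
    """Assemble the OAuth 2.0 authorization URL; urlencode takes care
    of percent-encoding the redirect URI."""
    params = urlencode({
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
    })
    return "https://api.instagram.com/oauth/authorize/?" + params

print(authorize_url(CLIENT_ID, REDIRECT_URI))
```

Opening the printed URL in a browser and approving the client is what produces the CODE query parameter used in the curl request above.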


Adding revisioning to SQLAlchemy Models

{ Repost from my personal blog @ https://blog.codezero.xyz/adding-revisioning-to-sqlalchemy-models }

In an application like Open Event, where a single piece of information can be edited by multiple users, it's always good to know who changed what. One should also be able to revert to a previous version if needed. That is where revisioning comes into the picture. We use SQLAlchemy as the database toolkit and ORM, so we wanted a versioning tool that would work well with our existing setup. That's when I came across SQLAlchemy-Continuum, a versioning extension for SQLAlchemy.

Installation
The installation of the module is just like any other python library. (Don't forget to add it to your requirements.txt file, if you have one.)

```shell
pip install SQLAlchemy-Continuum
```

Setup
Now, it's time to enable SQLAlchemy-Continuum for the required models. Let's consider an Event model:

```python
import sqlalchemy as sa

class Event(Base):
    __tablename__ = 'events'
    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    name = sa.Column(sa.String)
    start_time = sa.Column(sa.DateTime, nullable=False)
    end_time = sa.Column(sa.DateTime, nullable=False)
    description = sa.Column(sa.Text)
    schedule_published_on = sa.Column(sa.DateTime)
```

We need to do three things to enable SQLAlchemy-Continuum:

1. Call make_versioned() before the model(s) is/are defined.
2. Add __versioned__ = {} to all the models that we want to be versioned.
3. Call configure_mappers from SQLAlchemy after declaring all the models.
```python
import sqlalchemy as sa
from sqlalchemy_continuum import make_versioned

# Must be called before defining all the models
make_versioned()

class Event(Base):
    __tablename__ = 'events'
    __versioned__ = {}  # Must be added to all models that are to be versioned
    id = sa.Column(sa.Integer, primary_key=True, autoincrement=True)
    name = sa.Column(sa.String)
    start_time = sa.Column(sa.DateTime, nullable=False)
    end_time = sa.Column(sa.DateTime, nullable=False)
    description = sa.Column(sa.Text)
    schedule_published_on = sa.Column(sa.DateTime)

# Must be called after defining all the models
sa.orm.configure_mappers()
```

SQLAlchemy-Continuum creates two tables:

- events_version, which stores the version history for the Event model, linked to the transaction table via a foreign key
- transaction, which stores information about each transaction (like the user who performed the transaction, the transaction datetime, etc.)

SQLAlchemy-Continuum also adds listeners to Event to record all create, update and delete actions.

Usage
All the CRUD (create, read, update, delete) operations can be done as usual. SQLAlchemy-Continuum takes care of creating a version record for each CUD operation. The versions can be easily accessed using the versions property that is now available on the Event model.

```python
event = Event(name="FOSSASIA 2017", description="Open source conference in asia")
session.add(event)  # Event added to transaction
session.commit()    # Transaction committed and event record created

# This would have created the first version record, which can be accessed
event.versions[0].name == "FOSSASIA 2017"

# Let's make some changes to the record
event.name = "FOSSASIA 2016"
session.add(event)
session.commit()

# This would have created the second version record, which can be accessed
event.versions[1].name == "FOSSASIA 2016"

# The first version record still remains and can be accessed
event.versions[0].name == "FOSSASIA 2017"
```

So, that's how basic versioning can be implemented in SQLAlchemy using SQLAlchemy-Continuum.
