Event-driven programming in Flask with Blinker signals

Setting up blinker:

The Open Event Project offers event managers a platform to organize all kinds of events, including concerts, conferences, summits and regular meetups. In the server part of the project, the issue at hand was to perform multiple tasks in the background (we use Celery for this) whenever changes occurred within an event, or within the speakers/sessions associated with it.

The usual approach would be to add a function call after every relevant change. But the statements making these changes were scattered across the project in multiple places. It would be cumbersome to add 3-4 function calls (which are irrelevant to the function they are being executed in) in so many places. Moreover, the code would become unstructured and really hard to maintain over time.

That’s when signals came to our rescue. Flask has had integrated support for signalling since Flask 0.6; refer to http://flask.pocoo.org/docs/latest/signals/ . The Blinker library is used to implement signals. If you’re coming from some other language, signals are analogous to events.

Given below is the code to create named signals in a custom namespace:


from blinker import Namespace

event_signals = Namespace()
speakers_modified = event_signals.signal('event_json_modified')

If you want to emit a signal, you can do so by calling the send() method:


speakers_modified.send(current_app._get_current_object(), event_id=event.id, speaker_id=speaker.id)

From the user guide itself:

“ Try to always pick a good sender. If you have a class that is emitting a signal, pass self as sender. If you are emitting a signal from a random function, you can pass current_app._get_current_object() as sender. “

To subscribe to a signal, Blinker provides a neat decorator based subscription mechanism:


@speakers_modified.connect
def name_of_signal_handler(app, **kwargs):
    # the handler receives the sender plus any keyword arguments
    pass

Some Design Decisions:

When a signal is sent, it may carry a lot of information which a given subscriber may or may not need, e.g. when multiple subscribers listen to the same signal, some of the information sent may be of no use to your specific function. Thus we decided to enforce the pattern below to ensure flexibility throughout the project.


@speakers_modified.connect
def new_handler(app, **kwargs):
    # do whatever you want to do with kwargs['event_id']
    pass

In this case, the function new_handler needs to perform some task based solely on the event_id. If the function had the form def new_handler(app, event_id), an error would be raised as soon as the signal carried any other keyword argument. A big plus of this approach: if you later want to send more information with the signal, say a speaker_name as well, this pattern ensures that no error is raised by any of the subscribers defined before that change was made.
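
To see why, here is a quick sketch (the speaker_name argument is hypothetical) of a sender that later starts passing extra data past an older subscriber:

# the sender now passes an extra, hypothetical speaker_name field ...
speakers_modified.send(current_app._get_current_object(),
                       event_id=event.id,
                       speaker_id=speaker.id,
                       speaker_name=speaker.name)

# ... yet this subscriber, written before speaker_name existed, keeps
# working, because the extra argument simply lands in **kwargs
@speakers_modified.connect
def older_handler(app, **kwargs):
    print(kwargs['event_id'])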

When to use signals and when not?

The call to send a signal will, of course, live inside another function. The signal and that function should be independent of each other. If the task done by any of the signal’s subscribers even remotely affects your current function, don’t use a signal; use a direct function call instead.

How to turn off signals while testing?

In testing mode, signals may slow down your tests, as signal subscribers which are completely independent from the function under test will be executed numerous times. To stop the signal subscribers from executing, you have to make a small change to the send function of the Blinker library.

Below is what we have done. The approach to turning it off may differ from project to project as the method of testing differs. Refer to https://github.com/jek/blinker/blob/master/blinker/base.py#L241 for the original function.


# `Signal` comes from the blinker library; `app` is the Flask application
# object (import both in your project before applying this patch)
from blinker import Signal

def new_send(self, *sender, **kwargs):
    if len(sender) == 0:
        sender = None
    elif len(sender) > 1:
        raise TypeError('send() accepts only one positional argument, '
                        '%s given' % len(sender))
    else:
        sender = sender[0]
    # only this line was changed: skip all receivers while testing
    if not self.receivers or app.config['TESTING']:
        return []
    else:
        return [(receiver, receiver(sender, **kwargs))
                for receiver in self.receivers_for(sender)]

# monkey-patch Blinker so that every signal uses new_send
Signal.send = new_send

event_signals = Namespace()
# and so on ....

That’s all for now. Have some fun signaling 😉 .

 

Continue Reading: Event-driven programming in Flask with Blinker signals

Set proper content type when uploading files on s3 with python-magic

In the open-event-orga-server project, we had been using Amazon S3 storage for a long time. After some time we encountered an issue: no matter what the file type was, the Content-Type when retrieving these files from the storage solution was application/octet-stream.

An example response when retrieving an image from s3 was as follows:


Accept-Ranges →bytes
Content-Disposition →attachment; filename=HansBakker_111.jpg
Content-Length →56060
Content-Type →application/octet-stream
Date →Fri, 09 Sep 2016 10:51:06 GMT
ETag →"964b1d839a9261fb0b159e960ceb4cf9"
Last-Modified →Tue, 06 Sep 2016 05:06:23 GMT
Server →AmazonS3
x-amz-id-2 →1GnO0Ta1e+qUE96Qgjm5ZyfyuhMetjc7vfX8UWEsE4fkZRBAuGx9gQwozidTroDVO/SU3BusCZs=
x-amz-request-id →ACF274542E950116

 

As seen above, instead of providing image/jpeg as the Content-Type, it provides application/octet-stream. While uploading the files, we were not providing the content type explicitly, which turned out to be the root of the problem.

It was decided that we would provide the content type explicitly, so it was time to choose an efficient library to determine the file type based on the content of the file and not the file extension. After researching the available libraries, python-magic seemed to be the obvious choice. python-magic is a Python interface to the libmagic file type identification library. libmagic identifies file types by checking their headers against a predefined list of file types.

Here is an example straight from python-magic's readme on its usage:


>>> import magic
>>> magic.from_file("testdata/test.pdf")
'PDF document, version 1.2'
>>> magic.from_buffer(open("testdata/test.pdf").read(1024))
'PDF document, version 1.2'
>>> magic.from_file("testdata/test.pdf", mime=True)
'application/pdf'

 

Given below is a code snippet for the s3 upload function in the project:


file_data = file.read()
file_mime = magic.from_buffer(file_data, mime=True)
size = len(file_data)
# k is defined as k = Key(bucket) in previous code
sent = k.set_contents_from_string(
    file_data,
    headers={
        'Content-Disposition': 'attachment; filename=%s' % filename,
        'Content-Type': '%s' % file_mime
    }
)

 

One thing to note: python-magic depends on libmagic, and many distros do not come with libmagic-dev pre-installed, so make sure you install libmagic-dev explicitly. (Installation instructions may vary per distro.)


sudo apt-get install libmagic-dev

Voila! Now when retrieving each and every file, you’ll get the proper content type.

 

Continue Reading: Set proper content type when uploading files on s3 with python-magic

Generating a documentation site from markup documents with Sphinx and Pandoc

Generating a fully fledged website from a set of markup documents is no easy feat. But the wonderful tool Sphinx certainly makes the task easier. Sphinx does the heavy lifting of generating a website with built-in JavaScript based search. But sometimes it’s not enough.

This week we were faced with two issues related to documentation generation on loklak_server and susi_server. First let me give you some context. Sphinx requires an index.rst file within /docs/ which it uses to generate the first page of the site. A very obvious way to fill it, which helps us avoid unnecessary duplication, is to use the include directive of reStructuredText to include the README file from the root of the repository.

This leads to the following two problems:

  • The include directive can only properly include reStructuredText, not a markdown document. Given a markdown document, it tries to parse the markdown as reStructuredText, which leads to errors.
  • Any relative links in README break when it is included in another folder.

To fix the first issue, I used pypandoc, a thin wrapper around Pandoc. Pandoc is a wonderful command line tool which allows us to convert documents from one markup format to another. From the official Pandoc website itself:

If you need to convert files from one markup format into another, pandoc is your swiss-army knife.
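
For instance, a minimal sketch of such a conversion with pypandoc (the file paths are assumed for illustration; older pypandoc versions expose convert instead of convert_file):

import pypandoc

# convert the markdown README into reStructuredText so it can be
# included from docs/index.rst without parse errors
rst_text = pypandoc.convert_file('README.md', 'rst')
with open('docs/README.rst', 'w') as f:
    f.write(rst_text)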

pypandoc requires a working installation of Pandoc, which can be downloaded and installed automatically using a single line of code.

pypandoc.download_pandoc()

This gives us a cross-platform way to download pandoc without worrying about the current platform. Now, pypandoc leaves the installer in the current working directory after download, which is fine locally, but creates a problem when run on remote systems like Travis: the installer could get committed accidentally to the repository. To solve this, I had to take a look at the source code for pypandoc and call an internal method, which pypandoc basically uses to work out the name of the installer. I use that method to find out the name of the file and then delete it after installation is over. This is one of the many benefits of open-source projects. Had pypandoc not been open source, I would not have been able to do that.

import os
import pypandoc

url = pypandoc.pandoc_download._get_pandoc_urls()[0][pf]
filename = url.split('/')[-1]
os.remove(filename)

Here pf is the current platform which can be one of ‘win32’, ‘linux’, or ‘darwin’.

Now let’s take a look at our second issue. To solve it, I used regular expressions to capture any relative links. Capturing links was easy. All links in reStructuredText follow the same format:

`Title <url>`__

Similarly links in markdown are in the following format

[Title](url)

Regular expressions were the perfect candidate to solve this. To detect which links were relative and needed fixing, I checked which links started with the /docs/ directory, and then all I had to do was remove the /docs prefix from those links; see the sketch below.
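
Here is a minimal sketch of the idea for the reStructuredText case (the pattern and variable names are illustrative, not the project’s exact code):

import re

# matches reStructuredText links of the form `Title <url>`__
RST_LINK = re.compile(r'`([^<`]+) <([^>]+)>`__')

def fix_link(match):
    title, url = match.group(1), match.group(2)
    if url.startswith('/docs/'):
        url = url[len('/docs'):]  # strip the /docs prefix
    return '`%s <%s>`__' % (title, url)

text = RST_LINK.sub(fix_link, text)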

A note about the loklak and susi server projects

Loklak is a server application which is able to collect messages from various sources, including Twitter.

SUSI AI is an intelligent open source personal assistant. It is capable of chat and voice interaction, and by using APIs it can perform actions such as music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic and other real time information.

Continue Reading: Generating a documentation site from markup documents with Sphinx and Pandoc

Ticket Ordering or Positioning (back-end)

One of the many feature requests that we got for our Open Event organizer server, or the eventyay website, was ticket ordering. Event organizers wanted to show tickets in a particular order on the website and wanted to control that ordering. This was a common request by many and also an important enhancement. There were two main things to deal with where ticket ordering was concerned. Firstly, how do we store the position of a ticket in the set of tickets? Secondly, we needed to provide a UI in the event creation/edit wizard to control the order or position of a ticket. In this blog, I will talk about how we store the position of the tickets in the backend and use it to show them on the public page of the event.

Continue Reading: Ticket Ordering or Positioning (back-end)

Generating xCal calendar in python

{ Repost from my personal blog @ https://blog.codezero.xyz/generate-xcal-calendar-in-python }

“xCal” is an XML format for iCalendar data.

The iCalendar data format (RFC5545) is a widely deployed interchange format for calendaring and scheduling data.

A Sample xCal document

<?xml version="1.0" encoding="utf-8"?>  
<iCalendar xmlns:xCal="urn:ietf:params:xml:ns:xcal">  
    <vcalendar>
        <version>2.0</version>
        <prodid>-//Pentabarf//Schedule 1.0//EN</prodid>
        <x-wr-caldesc>FOSDEM 2016</x-wr-caldesc>
        <x-wr-calname>Schedule for events at FOSDEM 2016</x-wr-calname>
        <vevent>
            <method>PUBLISH</method>
            <uid>123e4567-e89b-12d3-a456-426655440000</uid>
            <dtstart>20160131T090000</dtstart>
            <dtend>20160131T091000</dtend>
            <duration>00:10:00:00</duration>
            <summary>Introduction to the SDR Track- Speakers, Topics, Algorithm</summary>
            <description>&lt;p&gt;The opening talk for the SDR devroom at FOSDEM 2016.&lt;/p&gt;</description>
            <class>PUBLIC</class>
            <status>CONFIRMED</status>
            <categories>Software Defined Radio</categories>
            <url>https://fosdem.org/2016/schedule/event/sdrintro/</url>
            <location>AW1.125</location>
            <attendee>Martin Braun</attendee>
        </vevent>
    </vcalendar>
</iCalendar>

Each event/session will be in a separate vevent block. Each speaker/attendee of an event/session will be in an attendee block inside the vevent block.

Some important elements are:

  1. version – The version of the iCalendar data
  2. prodid – The name of the application/generator that produced this document
  3. x-wr-caldesc – A description of the calendar
  4. x-wr-calname – A name for the calendar

The structure and keywords used in xCal are the same as those used in the iCal format. To generate the XML document, we’ll be using Python’s ElementTree XML API, which is part of the Python standard library.

We’ll be using two main classes of the ElementTree API:

  1. Element – used to create a standard node. (Used for the root node)
  2. SubElement – used to create a sub element and attach the new node to a parent

Let’s start with the root iCalendar node and set the required attributes.

from xml.etree.ElementTree import Element, SubElement, tostring

i_calendar_node = Element('iCalendar')  
i_calendar_node.set('xmlns:xCal', 'urn:ietf:params:xml:ns:xcal')

Now, to add the vcalendar node to the iCalendar node.

v_calendar_node = SubElement(i_calendar_node, 'vcalendar')

Let’s add the other aspects of the calendar to the vcalendar node as separate sub nodes.

version_node = SubElement(v_calendar_node, 'version')  
version_node.text = '2.0'

prod_id_node = SubElement(v_calendar_node, 'prodid')  
prod_id_node.text = '-//fossasia//open-event//EN'

cal_desc_node = SubElement(v_calendar_node, 'x-wr-caldesc')  
cal_desc_node.text = "Calendar"

cal_name_node = SubElement(v_calendar_node, 'x-wr-calname')  
cal_name_node.text = "Schedule for sessions"

We have now added the calendar’s metadata. Next, we add the actual events. Each event is a vevent node, a child of the vcalendar node. We can loop through all our available events/sessions and add them to the calendar.

for session in sessions:  
    v_event_node = SubElement(v_calendar_node, 'vevent')

    uid_node = SubElement(v_event_node, 'uid')
    uid_node.text = str(session.id)

    # tz is assumed to be a pytz timezone object for the event's timezone;
    # both timestamps are localized so they carry a UTC offset (see note below)
    dtstart_node = SubElement(v_event_node, 'dtstart')
    dtstart_node.text = tz.localize(session.start_time).isoformat()

    dtend_node = SubElement(v_event_node, 'dtend')
    dtend_node.text = tz.localize(session.end_time).isoformat()

    duration_node = SubElement(v_event_node, 'duration')
    duration_node.text =  "00:30"

    summary_node = SubElement(v_event_node, 'summary')
    summary_node.text = session.title

    description_node = SubElement(v_event_node, 'description')
    description_node.text = session.short_abstract

    class_node = SubElement(v_event_node, 'class')
    class_node.text = 'PUBLIC'

    status_node = SubElement(v_event_node, 'status')
    status_node.text = 'CONFIRMED'

    categories_node = SubElement(v_event_node, 'categories')
    categories_node.text = session.session_type.name

    url_node = SubElement(v_event_node, 'url')
    url_node.text = "https://some.conf/event/" + str(session.id)

    location_node = SubElement(v_event_node, 'location')
    location_node.text = session.microlocation.name

    for speaker in session.speakers:
        attendee_node = SubElement(v_event_node, 'attendee')
        attendee_node.text = speaker.name

Please note that all timings in the XML document must comply with ISO 8601 and must include date + time + timezone, for example 2007-04-05T12:30-02:00.
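
For instance, assuming the tz object used above comes from pytz, localizing a naive datetime produces a compliant timestamp:

import pytz
from datetime import datetime

tz = pytz.timezone('Europe/Brussels')
print(tz.localize(datetime(2016, 1, 31, 9, 0)).isoformat())
# 2016-01-31T09:00:00+01:00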

We’re not done yet. We now have the XML document as an Element object, but we’ll need it as a string to either store it somewhere or display it.

The document can be converted to a string by using the ElementTree API’s tostring helper method and passing the root node.

xml_as_string = tostring(i_calendar_node)
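
One small caveat: on Python 3, tostring() returns bytes by default; pass encoding='unicode' if you want a str instead:

xml_as_string = tostring(i_calendar_node, encoding='unicode')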

And that’s it. You now have a proper XML document representing your events.

Continue Reading: Generating xCal calendar in python

Integrating Travis CI and Codacy in PSLab Repositories

Continuous integration testing and automated code review tools are really useful for developing better software, improving code and the overall quality of a project. Continuous integration can help catch bugs by running tests automatically and lets you merge your code with confidence.

While working on my GSoC 2016 project, my mentors guided and helped me to integrate Travis CI and Codacy into the PSLab GitHub repositories. This blog post is all about integrating these tools into my GitHub repos, the problems faced, the errors that occurred, and the test results.

Travis CI is a hosted continuous integration and deployment system. It is used to build and test software projects hosted on GitHub. There are two versions of it: travis-ci.com for private repositories, and travis-ci.org for public repositories.

Read: Getting started with Travis CI

Travis is configured with the “.travis.yml” file in your repository to tell Travis CI what to build. Following is the code from the ‘.travis.yml‘ file in our PSLab repository. This repo contains the Python communication library for PSLab.

language: python
python:
  - "2.6"
  - "2.7"
  - "3.2"
  - "3.3"
  - "3.4"
# - "3.5"
# command to install dependencies
# install: "pip install -r requirements.txt"
# command to run tests
script: nosetests

With this code everything worked out of the box (except a few initial builds, which errored because of a missing ‘requirements.txt‘ file) and the build passed successfully 🙂 🙂

Later Mario Behling added integration to FOSSASIA Slack Channel.

Slack notifications

Travis CI supports notifying Slack channels about build results. On Slack, set up a new Travis CI integration. Select a channel, and you’ll find the details to paste into your ‘.travis.yml’. Just copy and paste the settings, which already include the proper token, and you’re done.

The simplest configuration requires your account name and the token.

# the generic form
notifications:
  slack: '<account>:<token>'

# our configuration
notifications:
  slack: fossasia:***tokenishidden****

Import errors in Travis builds of PSLab-apps Repository

The PSLab-apps repository contains PyQt based apps for various experiments. The ‘.travis.yml‘ file mentioned above gave several module import errors.

$ python --version
Python 3.2.5
$ pip --version
pip 6.0.7 from /home/travis/virtualenv/python3.2.5/lib/python3.2/site-packages (python 3.2)
Could not locate requirements.txt. Override the install: key in your .travis.yml to install dependencies.
0.33s$ nosetests
E
======================================================================
ERROR: Failure: ImportError (No module named sip)

The repo is installable and PSLab was working fine on popular Linux distributions without any errors, so I could not find the reason for the build errors. Even after adding a proper ‘requirements.txt‘ file, the Travis builds errored.

On exploring the documentation I could figure out the problem.

The Travis CI environment uses separate virtualenv instances for each Python version. System Python is not used and should not be relied on. If you need to install Python packages, do it via pip and not apt. If you decide to use apt anyway, note that Python system packages only include Python 2.7 libraries (the default Python version). This means that packages installed from the repositories are not available in other virtualenvs even if you use the --system-site-packages option. Therefore I was getting module import errors.

This problem was solved by making the following changes in the ‘.travis.yml‘ file:

language: python

python:
  #- "2.6"
  - "2.7"
  #- "2.7_with_system_site_packages"
  - "3.2"
  #- "3.2_with_system_site_packages"
  - "3.3"
  - "3.4"
before_install:
    - sudo mkdir -p /downloads
    - sudo chmod a+rw /downloads
    - curl -L http://sourceforge.net/projects/pyqt/files/sip/sip-4.16.5/sip-4.16.5.tar.gz -o /downloads/sip.tar.gz 
    - curl -L http://sourceforge.net/projects/pyqt/files/PyQt4/PyQt-4.11.3/PyQt-x11-gpl-4.11.3.tar.gz -o /downloads/pyqt4.tar.gz
    # Builds
    - sudo mkdir -p /builds
    - sudo chmod a+rw /builds

install:
    - export DISPLAY=:99.0
    - sh -e /etc/init.d/xvfb start
    - sudo apt-get install -y libqt4-dev
    - sudo apt-get install -y mesa-common-dev libgl1-mesa-dev libglu1-mesa-dev
#    - sudo apt-get install -y python3-sip python3-sip-dev python3-pyqt4 cmake
    # Qt4
    - pushd /builds
    # SIP
    - tar xzf /downloads/sip.tar.gz --keep-newer-files
    - pushd sip-4.16.5
    - python configure.py
    - make
    - sudo make install
    - popd
    # PyQt4
    - tar xzf /downloads/pyqt4.tar.gz --keep-newer-files
    - pushd PyQt-x11-gpl-4.11.3
    - python configure.py -c --confirm-license --no-designer-plugin -e QtCore -e QtGui -e QtTest
    - make
    - sudo make install
    - popd
 # - "3.5"
# command to install dependencies
#install: "pip install -r requirements.txt"
# command to run tests
script: nosetests

notifications:
  slack: fossasia:*****tokenishidden*******



Codacy is an automated code analysis and review tool that helps developers ship better software, faster. With Codacy integration one can get static analysis, code complexity, code duplication and code coverage changes in every commit and pull request.

Read: Integrating Codacy in GitHub

Codacy integration has really helped me to understand and enforce code quality standards. Codacy shows you the impact of every pull request in terms of quality and errors directly in GitHub.


Codacy also grades your project in different categories, like code complexity, compatibility, security, code style, error proneness etc., to help you better understand the overall project quality and which areas you should improve.

Here is a screen-shot of Codacy review for PSLab-apps repository.

[Screenshot: Codacy review report for PSLab-apps]

I am extremely happy to share that my learning adventure has received project certification at ‘A’ grade. The project quality analysis shows that more than 90% of the work is at A grade 🙂 🙂

Travis CI and Codacy Badges for my GSoC Repositories:

PSLab : Python Library for Communication with PSLab

Travis CI Badge         Codacy Badge

PSLab-apps : Qt based GUI applications for PSLab

Travis CI Badge         Codacy Badge

Pocket Science Lab : ExpEYES Programs, Sensor Plugins

Travis CI Badge         Codacy Badge

That’s all for now. Have a happy coding, testing and learning 🙂 🙂

Continue Reading: Integrating Travis CI and Codacy in PSLab Repositories

Python code examples

I’ve come across many examples of weird behaviour in the Python language while working on the Open Event project. Today I’d like to share some of those examples with you. I think this knowledge is necessary if you’d like to deepen your knowledge of Python a bit.

Simply adding one element to a Python list:

def foo(value, x=[]):
  x.append(value)
  return x

>>> print(foo(1))
>>> print(foo(2))
>>> print(foo(3, []))
>>> print(foo(4))

OUTPUT

[1] 
[1, 2] 
[3]
[1, 2, 4]

The first output is obvious, but the second is not. It happens because the x (empty list) default argument is evaluated only once, when the function is defined. So every call to foo() that relies on the default modifies that same list, appending a value to it. Finally we end up with the [1, 2, 4] output. I recommend avoiding mutable default parameters.
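
A common way to avoid this pitfall is to default to None and create the list inside the function; a quick sketch:

def foo(value, x=None):
    if x is None:
        x = []  # a fresh list is created on every call
    x.append(value)
    return x

With this version, foo(1) followed by foo(2) returns [1] and then [2], as most readers would expect.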

Another example:

Do you know which types these are?

>>> print(type([ el for el in range(10)]))
>>> print(type({ el for el in range(10)}))
>>> print(type(( el for el in range(10))))

Again, the first and second types are obvious: <class ‘list’> and <class ‘set’>. You may think that the last one should return the type tuple, but it returns a generator: <class ‘generator’>.
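
To illustrate, a short sketch; if you actually want a tuple, pass the generator expression to the tuple constructor:

>>> gen = (el for el in range(10))
>>> next(gen)
0
>>> tuple(el for el in range(10))
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)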

Example:

Do you think that the code below raises an exception?

>>> lst = [1, 2, 3, 4, 5]  # avoid naming it `list`, which shadows the builtin
>>> print(lst[8:])

If you think the above expression raises an IndexError, you’re wrong. It returns an empty list: [].

Example: funny boolean operators

>>> 'c' == ('c' or 'b')
True
>>> 'd' == ('a' or 'd')
False
>>> 'c' == ('c' and 'b')
False 
>>> 'd' == ('a' and 'd')
True

You might think that the OR and AND operators are broken.

You have to know how the Python interpreter behaves when it evaluates OR and AND operators.

An OR expression evaluates the first statement and checks whether it is truthy. If the first statement is truthy, Python returns its value without evaluating the second one. If the first statement is falsy, the interpreter evaluates the second value and returns that value.

The AND operator checks the first statement: if it is falsy, the whole expression has to be falsy, so it returns the first value; if the first statement is truthy, it evaluates the second statement and returns the second value.

Below I will show you how it works:

>>> 'c' == ('c' or 'b')
>>> 'c' == 'c'
True
>>> 'd' == ('a' or 'd')
>>> 'd' == 'a'
False
>>> 'c' == ('c' and 'b')
>>> 'c' == 'b'
False 
>>> 'd' == ('a' and 'd')
>>> 'd' == 'd'
True

I hope I have explained how the Python interpreter evaluates the OR and AND operators, so the examples above should now be more understandable.

Continue Reading: Python code examples

Resizing Uploaded Image (Python)

While building websites where we need to upload images, such as in the event organizing server, the image for an event needs to be shown in various sizes on different pages. But an image with high resolution might be overkill in a place where we just need a thumbnail. So what most CMS websites do is resize the uploaded image and store a smaller image as a thumbnail. So how do we do that? Let’s find out.

Continue Reading: Resizing Uploaded Image (Python)

Accepting Stripe payments on behalf of a third-party

{ Repost from my personal blog @ https://blog.codezero.xyz/accepting-stripe-payments-on-behalf-of-a-third-party }

In Open Event, we allow the organizer of each event to link their Stripe account, so that all ticket payments go directly into their account. To make it simpler for the organizer to set up the link, we have a Connect with Stripe button on the event creation form.

Clicking on the button, the organizer is greeted with a signup flow similar to Login with Facebook or any other social login. Through this process, we’re able to securely and easily obtain the credentials required to accept payments on behalf of the organizer.

For this very purpose, Stripe provides us with an OAuth interface called Stripe Connect. Stripe Connect allows us to connect to and interact with other Stripe accounts through an API.

We’ll be using Python’s requests library for making all the HTTP Requests to the API.
You will be needing a stripe account for this.

Registering your platform
The OAuth Flow

The OAuth flow is similar to most platforms.

  • The user is redirected to an authorization page where they login to their stripe account and authorize your app to access their account
  • The user is then redirected back to a callback URL with an Authorization code
  • The server makes a request to the Token API with the Authorization code to retrieve the access_token, refresh_token and other credentials.

Implementing the flow

Redirect the user to the Authorization URL.
https://connect.stripe.com/oauth/authorize?response_type=code&client_id=ca_8x1ebxrl8eOwOSqRTVLUJkWtcfP92YJE&scope=read_write&redirect_uri=http://localhost/stripe/callback  

The authorization URL accepts the following parameters:

  1. client_id – The client ID acquired when registering your platform. Required.
  2. response_type – The response type. The value is always code. Required.
  3. redirect_uri – The URL to redirect the customer to after authorization.
  4. scope – Can be read_write or read_only. The default is read_only. For analytics purposes, read_only is appropriate; to perform charges on behalf of the connected user, we need to request the read_write scope instead.

The user will be taken to the Stripe authorization page, where they can log in to an existing account or create a new account without breaking the flow. Once the user has authorized the application, they are taken back to the callback URL with the result. A minimal sketch of issuing this redirect from a Flask route is shown below.
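
(The route and client ID here are placeholders, not the project’s exact code.)

from flask import Flask, redirect

app = Flask(__name__)

CLIENT_ID = 'ca_...'  # your platform's client ID (placeholder)

@app.route('/stripe/connect')
def stripe_connect():
    # send the organizer to Stripe's authorization page
    return redirect(
        'https://connect.stripe.com/oauth/authorize'
        '?response_type=code'
        '&client_id=' + CLIENT_ID +
        '&scope=read_write'
        '&redirect_uri=http://localhost/stripe/callback'
    )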

Requesting the access token with the authorization code

The user is redirected back to the callback URL.

If the authorization failed, the callback URL has a query string parameter error with the error name and a parameter error_description with the description of the error.

If the authorization was a success, the callback URL has the authorization code in the code query string parameter.

import requests

data = {  
    'client_secret': 'CLIENT_SECRET',
    'code': 'AUTHORIZATION_CODE',
    'grant_type': 'authorization_code'
}

response = requests.post('https://connect.stripe.com/oauth/token', data=data)

The client_secret is also obtained when registering your platform. The code parameter is the authorization code.

On making this request, a JSON response will be returned.

If the request was a success, the following response will be obtained.

{
  "token_type": "bearer",
  "stripe_publishable_key": PUBLISHABLE_KEY,
  "scope": "read_write",
  "livemode": false,
  "stripe_user_id": USER_ID,
  "refresh_token": REFRESH_TOKEN,
  "access_token": ACCESS_TOKEN
}

If the request failed for some reason, an error will be returned.

{
  "error": "invalid_grant",
  "error_description": "Authorization code does not exist: AUTHORIZATION_CODE"
}

The access_token obtained can be used as the secret key to accept payments, as discussed in Integrating Stripe in the Flask web framework.
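
For illustration, a hedged sketch of creating a charge on the connected account with the stripe Python library (the amount, card token and description are placeholders, not the project’s exact code):

import stripe

stripe.api_key = ACCESS_TOKEN  # the connected account's access token from above

charge = stripe.Charge.create(
    amount=1000,          # amount in the smallest currency unit, e.g. cents
    currency='usd',
    source=card_token,    # a card token obtained via Stripe.js (placeholder)
    description='Event ticket purchase',
)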

Continue Reading: Accepting Stripe payments on behalf of a third-party

Twitter Oauth


What is OAuth?

It’s an open protocol that allows secure authorization in a simple and standard way from web, mobile and desktop applications. Facebook, Google, Twitter, GitHub and more web services use this protocol to authenticate users. Using OAuth is very convenient, because it delegates user authentication to the service which hosts the user account. It allows us to get resources from another web service without being given any login or password. If you have a service and want to offer authentication via Twitter, the best solution is to use OAuth. Recently the Open Event team met a problem on the user profile page: we’d like to automatically fill in information about the user. Of course, we solved it with the OAuth protocol, authenticating with Twitter. After a three-step authentication we can get the name and profile picture. If you need other information from a Twitter profile, like recent tweets or a followers list, visit the Twitter API site to see more examples of resources you can get.

How does the Open Event team implement communication between the Orga-server and Twitter?

All services have a very similar flow. Below I will show you how it looks in our case.

Before starting, you need to create your own Twitter app. You can create an app on the Twitter apps site https://apps.twitter.com/. Once you create an app you will see a CONSUMER KEY and a CONSUMER SECRET KEY; these must stay secret, so remember not to share them.

The example below shows how to get basic information about a Twitter profile.

We use the oauth2 Python library https://github.com/joestump/python-oauth2

import oauth2

consumer = oauth2.Consumer(key=TwitterOAuth.get_client_id(),
                           secret=TwitterOAuth.get_client_secret())
client = oauth2.Client(consumer)

TwitterOAuth.get_client_id() – the CONSUMER KEY

TwitterOAuth.get_client_secret() – the CONSUMER SECRET KEY

Then we send a GET request to the request_token endpoint to get an oauth_token:

client.request('https://api.twitter.com/oauth/request_token', "GET")
Response: oauth_token=Z6eEdO8MOmk394WozF5oKyuAv855l4Mlqo7hhlSLik&
oauth_token_secret=Kd75W4OQfb2oJTV0vzGzeXftVAwgMnEK9MumzYcM&
oauth_callback_confirmed=true

The next step is to redirect the user to the Twitter authentication site.


You can see a redirect_uri in the URL below. After signing in, the client will get a callback from Twitter with the oauth_verifier and oauth_token params:

https://api.twitter.com/oauth/authenticate?oauth_token=RcYGqAAAAAAAwdbbAAABVoM1UMo&oauth_token_secret=wZfpPpCugAmdF3AuohEnvBTRxdCmllxu&oauth_callback_confirmed=true&redirect_uri=http://open-event-dev.herokuapp.com/tCallback

The last step is to get an access token. With an oauth_verifier and an oauth_token it’s pretty easy:

def get_access_token(self, oauth_verifier, oauth_token):
    consumer = self.get_consumer()
    client = oauth2.Client(consumer)
    # note the '?' separating the URL from the query string
    return client.request(
        self.TW_ACCESS_TOKEN_URI + '?oauth_verifier=' + oauth_verifier +
        '&oauth_token=' + oauth_token, "POST")

Where TW_ACCESS_TOKEN_URI is https://api.twitter.com/oauth/access_token
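
The access token response body is itself a query string, so it has to be parsed into a dict before use; a small sketch (assuming content holds the response from get_access_token above):

import urllib.parse

# content looks like: oauth_token=...&oauth_token_secret=...&user_id=...&screen_name=...
access_token = dict(urllib.parse.parse_qsl(content.decode('utf-8')))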

The final step is to get our user information:

import json

resp, content = client.request(
    "https://api.twitter.com/1.1/users/show.json?screen_name=" +
    access_token["screen_name"] + "&user_id=" + access_token["user_id"], "GET")

user_info = json.loads(content)

From the user_info variable you can get the profile picture or the profile name.

Summarizing, the OAuth protocol is very secure and easy for developers to use. At first the OAuth flow can seem a little hard to understand, but if you spend some time trying to understand it, everything becomes easier. And it’s secure, because you don’t need to store a login or a password, and an access token has an expiry time. This is the main feature of the OAuth protocol.

Continue Reading: Twitter Oauth