Using custom themes with Yaydoc to build documentation

What is Yaydoc?

Yaydoc aims to be a one stop solution for all your documentation needs. It is continuously integrated with your repository and builds the site on each commit. One of its primary aims is to minimize user configuration. It is currently in active development.

Why Themes?

Themes give the user the ability to generate visually different sites from the same markup documents without any configuration. It is one of the many features Yaydoc inherits from Sphinx.

Now Sphinx comes with 10 built-in themes, but there are many more custom themes available on PyPI, the official Python package repository. To use these custom themes, Sphinx requires some setup. Yaydoc, being an automated system, needs to perform those tasks automatically.

To use a custom theme which has been installed, Sphinx needs to know the name of the theme and where to find it. We do that by setting two variables in the Sphinx configuration file: html_theme and html_theme_path. Custom themes usually provide a method that can be called to get the html_theme_path of the theme, typically named get_html_theme_path. But that is not always the case, and we have no way to find the appropriate method automatically.

So how do we get the path of an installed theme just from its name, and how do we add it to the generated configuration file?

The configuration file is generated by the sphinx-quickstart command which Yaydoc uses to initialize the documentation directory. We can override the default generated files by providing our own project templates. The templates are based on the Jinja2 template engine.
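For illustration, here is a minimal sketch of how such an invocation could look; the project name, theme, and paths are made up, and the exact flags Yaydoc passes may differ:

import subprocess

# Invoke sphinx-quickstart in quiet mode, point it at our own template
# directory, and pass html_theme in as a template variable via -d.
subprocess.check_call([
    'sphinx-quickstart', '--quiet',
    '-p', 'example-docs',   # project name (made up)
    '-a', 'Yaydoc',         # author
    '-v', '0.1',            # version
    '-t', 'templates/',     # directory containing the overridden templates
    '-d', 'html_theme=sphinx_rtd_theme',
    'docs/',
])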

Firstly, I replaced

html_theme = 'alabaster'

With

html_theme = '{{ html_theme }}'

This gives us the ability to pass the name of the theme as a parameter to sphinx-quickstart. Now the user has an option to choose between the 10 built-in themes. For custom themes, however, the story is different. I had to solve two major issues.

  • The name of the package and the theme may differ.
  • We also need the absolute path to the theme.

The following code snippet solves the above mentioned problems.

{% if html_theme in ['alabaster', 'classic', 'sphinxdoc', 'scrolls',
                     'agogo', 'traditional', 'nature', 'haiku',
                     'pyramid', 'bizstyle'] %}
# Theme is builtin. Just set the name
html_theme = '{{ html_theme }}'
{% else %}
# Theme is a custom python package. Let's install it.
import os
import sys

import pip

exitcode = pip.main(['install', '{{ html_theme }}'])
if exitcode:
    # Non-zero exit code
    print("""{0} is not available on pypi. Please ensure the theme can be installed using 'pip install {0}'.""".format('{{ html_theme }}'), file=sys.stderr)
else:
    import {{ html_theme }}
    def get_path_to_theme():
        # Walk the installed package looking for theme.conf,
        # which every sphinx theme must contain.
        package_path = os.path.dirname({{ html_theme }}.__file__)
        for root, dirs, files in os.walk(package_path):
            if 'theme.conf' in files:
                return root
    path_to_theme = get_path_to_theme()
    if path_to_theme is None:
        print("\n{0} does not appear to be a sphinx theme.".format('{{ html_theme }}'), file=sys.stderr)
        html_theme = 'alabaster'
    else:
        html_theme = os.path.basename(path_to_theme)
        html_theme_path = [os.path.abspath(os.path.join(path_to_theme, os.pardir))]
{% endif %}

It performs the following tasks in order:

  • It first checks if the provided theme is one of the built-in themes. If that is indeed the case, we just set the html_theme config value to the name of the theme.
  • Otherwise, it installs the package using pip.
  • __file__ has a special meaning in Python: it holds the path of the file from which a module was loaded. We use it to get the path of the installed package.
  • Each Sphinx theme must have a file named `theme.conf` which defines several properties of the theme. We do a recursive search for that file.
  • We set html_theme to the name of the directory which contains that file, and html_theme_path to its parent directory.

Now let’s see everything in action. Here are four pages created by Yaydoc from a single markup document with no user configuration.

Now you can choose between the many themes available on PyPI. You can even create your own theme. Follow this blog to get more insights and the latest news about Yaydoc.

How to teach SUSI skills calling an External API

SUSI is an intelligent personal assistant. SUSI can learn skills to understand and respond to user queries better. A skill is taught using rules. Writing rules is an easy task and one doesn’t need any programming background either. Anyone can start contributing. Check out these tutorials and do watch this video to get started and start teaching SUSI.

SUSI can be taught to call external APIs to answer user queries.

While writing skills, we first mention string patterns to match the user’s query and then tell SUSI what to do with the matched pattern. The pattern matching is similar to regular expressions, and we can retrieve the matched parameters using the $<parameter number>$ notation.

Example :

My name is *
Hi $1$!

When the user inputs “My name is Uday”, it is matched with “My name is *” and “Uday” is stored in $1$. So the output given is “Hi Uday!”.

SUSI can call an external API to reply to a user query. An API endpoint or url, when called, must return a JSON or JSONP response for SUSI to be able to parse it and retrieve the answer.

Rule Format for a skill calling an external API

The rule format for calling an external API is :

<regular expression for pattern matching>
!console: <return answer using $object$ or $required_key$>
{
  "url":"<API endpoint or url>",
  "path":"$.<key in the API response to find the answer>"
}
eol
  • url is the API endpoint to be called, which must return a JSON or JSONP response.
    Parameters, if any, can be added to the url using the $<parameter number>$ notation.
  • path tells SUSI where to look for the answer in the returned response.
    If the path points to a root element, then the answer is stored in $object$; otherwise we can query $key$ to get the answer, where key is a key under the path (see the sketch below the list).
  • eol or end of line indicates the end of the rule.
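To make the path semantics concrete, here is a small illustrative sketch in Python of how a path like $.Key1[0].Key11 could be resolved against a JSON response. This is only a model of the behaviour, not SUSI’s actual resolver:

import json
import re

def resolve_path(response_text, path):
    # Walk a path like '$.Key1[0].Key11' through a parsed JSON response.
    node = json.loads(response_text)
    for part in path.lstrip('$.').split('.'):
        match = re.match(r'(\w+)(?:\[(\d+)\])?$', part)
        key, index = match.group(1), match.group(2)
        node = node[key]             # descend into the dict by key
        if index is not None:
            node = node[int(index)]  # then index into the array, if any
    return node

print(resolve_path('{"Key1": [{"Key11": "Value11"}]}', '$.Key1[0].Key11'))
# prints: Value11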

Understanding the Path Attribute

Let us understand the Path attribute better through some test cases.

In each of the test cases, we discuss what the path should be and how to retrieve the required answer from the JSON response of an API.

  1. API response in JSON:

    {
      "Key1" : "Value1"
    }

Required answer : Value1
Path : $.Key1  =>  Retrieve Answer: $object$

  2. API response in JSON:
{
  "Key1" : [{"Key11" : "Value11"}]
}

Required answer : Value11
Path : $.Key1[0]   =>  Retrieve Answer: $Key11$
Path : $.Key1[0].Key11   => Retrieve Answer: $object$


  3. API response in JSON:
{
  "Key1" : {"Key11" : "Value11"}
}


Required answer : Value11
Path : $.Key1  => Retrieve Answer:  $Key11$
Path : $.Key1.Key11  => Retrieve Answer: $object$


  4. API response in JSON:
{
  "Key1" : {
           "Key11" : "Value11",
           "Key12" : "Value12"
         }
}

Required answer : Value11 , Value12
Path : $.Key1  => Retrieve Answer:  $Key11$ , $Key12$

Where to write these rules?

Now that we know how to write rules, let’s see where to write them.

We use etherpads to write and test rules, and once we finish testing a rule we can push it to the repo.

Steps to open, write and test rules:

  1. Open a new etherpad with a desired name <etherpad name> at http://dream.susi.ai/
  2. Write your skill code in the etherpad following the code format explained above.
  3. Now, to test your skill, let’s chat with SUSI. Start a conversation with SUSI at http://susi.ai/chat to test your skills.
  4. Load your skill by typing dream <etherpad name> and wait for a response saying dreaming enabled for <etherpad name>
  5. Test your skill, and repeat step 4 every time you make changes to the code in your etherpad.
  6. After you are done testing, type stop dreaming, and if you are satisfied with your skill, do send a PR to help SUSI learn.

Examples

Let us try an example to understand this better.

1. Plot of a TV Show

TVmaze is an open TV API that provides information about TV shows. Let us write a rule to get the plot of a TV show. We can find many such APIs; check out this link listing a few of them.

  1. Open an etherpad at http://dream.susi.ai/ named tvshowplot.
  2. Enter the code to query the plot of a TV show in the etherpad at http://dream.susi.ai/p/tvshowplot
* plot of *|* summary of *
!console:$object$
{
  "url":"http://api.tvmaze.com/singlesearch/shows?q=$2$",
  "path":"$.summary"
}
eol
  3. Now let’s test our skill by starting a conversation with SUSI at http://susi.ai/chat.
  • User Query: dream tvshowplot
    Response: dreaming enabled for tvshowplot
  • User Query: what is the plot of legion
    Response: Legion introduces the story of David Haller: Since he was a teenager, David has struggled with mental illness. Diagnosed as schizophrenic, David has been in and out of psychiatric hospitals for years. But after a strange encounter with a fellow patient, he’s confronted with the possibility that the voices he hears and the visions he sees might be real. He’s based on the Marvel comics character Legion, the son of X-Men founder Charles Xavier (played by Patrick Stewart and James McAvoy in the films), first introduced in 1985.

Intermediate Processing:

Pattern Matching : $1$ = “what is the” ; $2$ = “legion”

Url : http://api.tvmaze.com/singlesearch/shows?q=legion

API response:

{
  "id": 6393,
  "url": "http:\/\/www.tvmaze.com\/shows\/6393\/legion",
  "name": "Legion",
  "type": "Scripted",
  "language": "English",
  "genres": [
             "Drama",
             "Action",
             "Science-Fiction"
            ],
  "summary": "<p><strong>Legion<\/strong> introduces the story of David Haller: Since he was a teenager, David has struggled with mental illness. Diagnosed as schizophrenic, David has been in and out of psychiatric hospitals for years. But after a strange encounter with a fellow patient, he's confronted with the possibility that the voices he hears and the visions he sees might be real. He's based on the Marvel comics character Legion, the son of X-Men founder Charles Xavier (played by Patrick Stewart and James McAvoy in the films), first introduced in 1985.<\/p>",
  "updated": 1491955072,
}

Note: The API response has been trimmed to show only the relevant content.

Path : $.summary

Retrieving the answer: our required answer in the API response is under the key summary, and since the path $.summary points to a root element, the answer is retrieved using $object$.
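We can verify the rule outside SUSI by calling the same endpoint ourselves and applying the path by hand; a quick sketch using Python’s requests library:

import requests

# Call the endpoint the skill uses and apply the path $.summary by hand.
resp = requests.get('http://api.tvmaze.com/singlesearch/shows', params={'q': 'legion'})
print(resp.json()['summary'])  # the value SUSI stores in $object$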


Screenshots:

2. Cooking Recipes

Let us try it out with another API.
Recipe Puppy is a cooking recipe API which users can query for various recipes.

  1. Open an etherpad at http://dream.susi.ai/ named recipe.
  2. Enter the code to query a recipe in the etherpad at http://dream.susi.ai/p/recipe
#Gives recipes and links to cook a dish
* cook *
!console:<p>To cook  <strong>$title$</strong> : <br>The ingredients required are: $ingredients$. <br> For instructions to prepare the dish $href$ </p>
{
  "url":"http://www.recipepuppy.com/api/?q=$2$",
  "path":"$.results"
}
eol
  3. Now let’s test our skill by starting a conversation with SUSI at http://susi.ai/chat.
  • User Query: dream recipe
    Response: dreaming enabled for recipe
  • User Query: how to cook chicken biryani
    Response: To cook Chicken Biryani Recipe :
    The ingredients required are: chicken, seeds, chicken broth, rice, butter, peas, garlic, red onions, cardamom, curry paste, olive oil, tomato, coriander, cumin, brown sugar, tumeric.
    For instructions to prepare the dish Click Here!

Intermediate Processing:

Pattern Matching : $1$ = “how to” ; $2$ = “chicken biryani”

Url : http://www.recipepuppy.com/api/?q=chicken biryani

API response:

{
  "title": "Recipe Puppy",
  "version": 0.1,
  "href": "http:\/\/www.recipepuppy.com\/",
  "results": [
              {
                "title": "Chicken Biryani Recipe",
                "href": "http:\/\/www.grouprecipes.com\/53040\/chicken-biryani.html",
                "ingredients": "chicken, seeds, chicken broth, rice, butter, peas, garlic, red onions, cardamom, curry paste, olive oil, tomato, coriander, cumin, brown sugar, tumeric",
                "thumbnail": "http:\/\/img.recipepuppy.com\/413822.jpg"
             }
           ]
}

Note: The API response has been trimmed to show only the relevant content.

Path : $.results[0]

Retrieving the answer: our required answer in the API response is under the key results. Since it’s an array, we use the first element of the array, and since that element is a dictionary, we use its keys correspondingly in the answer. The $href$ is rendered as “Click Here” hyperlinked to the actual url.
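Again, the rule can be checked by hand outside SUSI; a small sketch with Python’s requests library:

import requests

# path $.results[0] selects the first recipe dict; its keys map to
# $title$, $ingredients$ and $href$ in the skill's answer.
resp = requests.get('http://www.recipepuppy.com/api/', params={'q': 'chicken biryani'})
first = resp.json()['results'][0]
print(first['title'], first['ingredients'], first['href'])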


Screenshots:


We have successfully taught SUSI a skill which tells users the plot of a TV show and a skill to query recipes.
Cheers!
Following a similar procedure, we can make use of other APIs and teach SUSI several new skills.


Generating a documentation site from markup documents with Sphinx and Pandoc

Generating a fully fledged website from a set of markup documents is no easy feat. But thanks to the wonderful tool Sphinx, the task certainly becomes easier. Sphinx does the heavy lifting of generating a website with built-in JavaScript-based search. But sometimes it’s not enough.

This week we were faced with two issues related to documentation generation on loklak_server and susi_server. First let me give you some context. Sphinx requires an index.rst file within /docs/ which it uses to generate the first page of the site. A very obvious way to fill it, which helps us avoid unnecessary duplication, is to use the include directive of reStructuredText to include the README file from the root of the repository.

This leads to the following two problems:

  • The include directive can only properly include a reStructuredText document, not a markdown one. Given a markdown document, it tries to parse the markdown as reStructuredText, which leads to errors.
  • Any relative links in the README break when it is included in another folder.

To fix the first issue, I used pypandoc, a thin wrapper around Pandoc. Pandoc is a wonderful command line tool which allows us to convert documents from one markup format to another. From the official Pandoc website itself,

If you need to convert files from one markup format into another, pandoc is your swiss-army knife.

pypandoc requires a working installation of Pandoc, which can be downloaded and installed automatically using a single line of code.

pypandoc.download_pandoc()

This gives us a cross-platform way to download pandoc without worrying about the current platform. Now, pypandoc leaves the installer in the current working directory after download, which is fine locally, but creates a problem when run on remote systems like Travis. The installer could get committed accidentally to the repository. To solve this, I had to take a look at the source code of pypandoc and call an internal method which pypandoc uses to set the name of the installer. I use that method to find out the name of the file and then delete it after the installation is over. This is one of the many benefits of open-source projects. Had pypandoc not been open source, I would not have been able to do that.

import os

import pypandoc

url = pypandoc.pandoc_download._get_pandoc_urls()[0][pf]
filename = url.split('/')[-1]
os.remove(filename)

Here pf is the current platform which can be one of ‘win32’, ‘linux’, or ‘darwin’.
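With Pandoc in place, converting the README itself is a one-liner; a minimal sketch (the file paths here are illustrative):

import pypandoc

# Convert the markdown README to reStructuredText so index.rst can include it.
rst_text = pypandoc.convert_file('README.md', 'rst')
with open('docs/README.rst', 'w') as f:
    f.write(rst_text)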

Now let’s take a look at our second issue. To solve it, I used regular expressions to capture any relative links. Capturing links was easy: all links in reStructuredText follow the same format.

`Title <url>`__

Similarly, links in markdown follow the format

[Title](url)

Regular expressions were the perfect candidate to solve this. To detect which links were relative and needed to be fixed, I checked which links start with the /docs/ directory, and then all I had to do was remove the /docs prefix from those links.
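A simplified sketch of that idea for the reStructuredText link format (the real implementation also handles the markdown format shown above):

import re

def fix_relative_links(rst_text):
    # Strip the leading /docs/ from relative links so they remain valid
    # once the README is included from inside the docs folder.
    pattern = re.compile(r'`([^<`]+)</docs/([^>]+)>`__')
    return pattern.sub(r'`\1<\2>`__', rst_text)

print(fix_relative_links('`API docs </docs/api.rst>`__'))
# prints: `API docs <api.rst>`__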

A note about loklak and susi server project

Loklak is a server application which is able to collect messages from various sources, including Twitter.

SUSI AI is an intelligent Open Source personal assistant. It is capable of chat and voice interaction, and by using APIs it can perform actions such as music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real time information.


Using NodeBuilder to instantiate node based Elasticsearch client and Visualising data

As elastic.co mentions, Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. But in many setups, it is not possible to manually install an Elasticsearch node on a machine. To handle these types of scenarios, Elasticsearch provides the NodeBuilder module, which can be used to spawn an Elasticsearch node programmatically. Let’s see how.

Getting Dependencies

In order to get the ES Java API, we need to add the following line to our dependencies (a 2.x version, since NodeBuilder is part of the 2.x series).

compile group: 'org.elasticsearch', name: 'elasticsearch', version: '2.4.6'

The required packages will be fetched the next time we run gradle build.

Configuring Settings

In the Elasticsearch Java API, Settings are used to configure the node(s). To create a node, we first need to define its properties.

Settings.Builder settings = new Settings.Builder();

settings.put("cluster.name", "cluster_name");  // The name of the cluster

// Configuring HTTP details
settings.put("http.enabled", "true");
settings.put("http.cors.enabled", "true");
settings.put("http.cors.allow-origin", "https?:\/\/localhost(:[0-9]+)?/");  // Allow requests from localhost
settings.put("http.port", "9200");

// Configuring TCP and host
settings.put("transport.tcp.port", "9300");
settings.put("network.host", "localhost");

// Configuring node details
settings.put("node.data", "true");
settings.put("node.master", "true");

// Configuring index
settings.put("index.number_of_shards", "8");
settings.put("index.number_of_replicas", "2");
settings.put("index.refresh_interval", "10s");
settings.put("index.max_result_window", "10000");

// Defining paths
settings.put("path.conf", "/path/to/conf/");
settings.put("path.data", "/path/to/data/");
settings.put("path.home", "/path/to/data/");

settings.build();  // Build with the assigned configurations

There are many more settings that can be tuned in order to get desired node configuration.

Building the Node and Getting Clients

The Java API makes it very simple to launch an Elasticsearch node. This example makes use of the settings that we just built.

Node elasticsearchNode = NodeBuilder.nodeBuilder().local(false).settings(settings).node();

A piece of cake, isn’t it? Let’s get a client now, on which we can execute our queries.

Client elasticsearchClient = elasticsearchNode.client();
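Since HTTP was enabled in the settings above, the node can also be sanity-checked from outside the JVM; here is a quick sketch using Python’s requests library (any HTTP client would do):

import requests

# The node serves its REST API on the http.port configured above (9200).
health = requests.get('http://localhost:9200/_cluster/health').json()
print(health['cluster_name'], health['status'])  # e.g. cluster_name, green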

Shutting Down the Node

elasticsearchNode.close();

A nice example of using the module can be seen at ElasticsearchClient.java in the loklak project. It reads the settings from a configuration file and builds the node using them.


Visualisation using elasticsearch-head

So by now, we have an Elasticsearch client which is capable of doing all sorts of operations on the node. But how do we visualise the data that is being stored? Writing code and running it every time to check results is tedious and significantly slows down the development/debugging cycle.

To overcome this, we have a web frontend called elasticsearch-head which lets us execute Elasticsearch queries and monitor the cluster.

To run elasticsearch-head, we first need to have grunt-cli installed –

$ sudo npm install -g grunt-cli

Next, we will clone the repository using git and install dependencies –

$ git clone git://github.com/mobz/elasticsearch-head.git
$ cd elasticsearch-head
$ npm install

Next, we simply need to run the server and go to the indicated address in a web browser –

$ grunt server

At the top, enter the location at which elasticsearch-head can interact with the cluster and click Connect.

Upon connecting, the dashboard appears, showing the status of the cluster –

The dashboard shown above is from the loklak project (more on it below).

There are 5 major sections in the UI –
1. Overview: The above screenshot gives details about the indices and shards of the cluster.
2. Index: Gives an overview of all the indices. It also allows adding new indices from the UI.
3. Browser: Gives a browser window for all the documents in the cluster. It looks something like this –

The left pane allows us to set the filter (index, type and field). The table listed is sortable. But we don’t always get what we are looking for manually. So, we have the following two sections.
4. Structured Query: Gives a dead simple UI that can be used to make a well-structured request to Elasticsearch. This is what we need to search for to get tweets from @gsoc that are indexed –

5. Any Request: Gives an advanced console that allows executing any query allowable by the Elasticsearch API.

A little about the loklak project and Elasticsearch

loklak is a server application which is able to collect messages from various sources, including twitter. The server contains a search index and a peer-to-peer index sharing interface. All messages are stored in an elasticsearch index.

Source: github/loklak/loklak_server

The project uses Elasticsearch to index all the data that it collects. It uses NodeBuilder to create the Elasticsearch node and work with the index. It is flexible enough to join an existing cluster instead of creating a new one, just by changing the configuration file.

Conclusion

This blog post tries to explain how NodeBuilder can be used to create Elasticsearch nodes and how they can be configured using Elasticsearch Settings.

It also demonstrates the installation and basic usage of elasticsearch-head, which is a great library to visualise and check queries against an Elasticsearch cluster.

The official Elasticsearch documentation is a good source of reference for its Java API and all other aspects.


Building the Scheduler UI

{ Repost from my personal blog @ https://blog.codezero.xyz/building-the-scheduler-ui }

If you hadn’t already noticed, Open Event has got a shiny new feature: a graphical and interactive scheduler to organize sessions into their respective rooms and timings.

As you can see in the above screenshot, we have a timeline on the left and a lot of session boxes to its right. All the boxes are re-sizable and drag-drop-able. The columns represent the different rooms (a.k.a micro-locations). The sessions can be dropped into their respective rooms. Above the timeline is a toolbar that controls the date. The timeline can be changed for each date by clicking on the respective date button.

The Clear overlaps button automatically checks the timeline and removes any sessions that are overlapping each other. The removed sessions are moved to the unscheduled sessions pane at the left.

The Add new micro-location button can be used to instantly add a new room. A modal dialog opens, and the micro-location is instantly added to the timeline once saved.

The Export as iCal allows the organizer to export all the sessions of that event in the popular iCalendar format which can then be imported into various calendar applications.

The Export as PNG saves the entire timeline as a PNG image file, which can then be printed by the organizers or circulated via other means if necessary.

Core Dependencies

The scheduler makes use of some javascript libraries for the implementation of most of the core functionality:

  • Interact.js – For drag-and-drop and resizing
  • Lodash – For array/object manipulations and object cloning
  • jQuery – For DOM Manipulation
  • Moment.js – For date time parsing and calculation
  • Swagger JS – For communicating with our API that is documented according to the swagger specs.

Retrieving data via the API

The swagger js client is used to obtain the sessions data using the API. The client is asynchronously initialized on page load via the javascript function initializeSwaggerClient and is then available globally as window.api.

The initialization function accepts a callback which is invoked once the client is ready. If the client is already initialized, the callback is invoked immediately.

var swaggerConfigUrl = window.location.protocol + "//" + window.location.host + "/api/v2/swagger.json";  
window.swagger_loaded = false;  
function initializeSwaggerClient(callback) {  
    if (!window.swagger_loaded) {
        window.api = new SwaggerClient({
            url: swaggerConfigUrl,
            success: function () {
                window.swagger_loaded = true;
                if (callback) {
                    callback();
                }

            }
        });
    } else {
        if (callback) {
            callback();
        }
    }
}

For getting all the sessions of an event, we can do,

initializeSwaggerClient(function () {  
    api.sessions.get_session_list({event_id: eventId}, function (sessionData) {
        var sessions = sessionData.obj;  // Here we have an array of session objects
    });
});

In a similar fashion, all the micro-locations of an event can also be loaded.

Processing the sessions and micro-locations

Each session object is looped through: its start time and end time are parsed into moment objects, its duration is calculated, and its distance from the top of the timeline is calculated in pixels. The new object, with this additional information, is stored in an in-memory data store, with the date of the event as the key, for use in the timeline.

The time configuration is specified in a separate time object.

var time = {  
    start: {
        hours: 0,
        minutes: 0
    },
    end: {
        hours: 23,
        minutes: 59
    },
    unit: {
        minutes: 15,
        pixels: 48,
        count: 0
    },
    format: "YYYY-MM-DD HH:mm:ss"
};

The smallest unit of measurement is 15 minutes and 48px === 15 minutes in the timeline.

Each day of the event is stored in a separate array, keyed by the date in the form Do MMMM YYYY (e.g. 2nd May 2013).

The array of micro-location objects is sorted alphabetically by the room name.

Displaying sessions and micro-locations on the timeline

According to the day selected, the sessions for that day are displayed on the timeline. Based on its time, each session div’s distance from the top of the timeline is calculated in pixels and the session box is positioned absolutely. The height of the session in pixels is calculated from its duration and set.

For pixels-minutes conversion, the following are used.

/**
 * Convert minutes to pixels based on the time unit configuration
 * @param {number} minutes The minutes that need to be converted to pixels
 * @returns {number} The pixels
 */
function minutesToPixels(minutes) {  
    minutes = Math.abs(minutes);
    return (minutes / time.unit.minutes) * time.unit.pixels;
}

/**
 * Convert pixels to minutes based on the time unit configuration
 * @param {number} pixels The pixels that need to be converted to minutes
 * @returns {number} The minutes
 */
function pixelsToMinutes(pixels) {  
    pixels = Math.abs(pixels);
    return (pixels / time.unit.pixels) * time.unit.minutes;
}

Adding interactivity to the session elements

Interact.js is used to provide interactive capabilities such as drag-drop and resizing.

To know how to use Interact.js, you can check out some previous blog posts on the same, Interact.js + drag-drop and Interact.js + resizing.

Updating the session information in database on every change

We have to update the session information in the database whenever a session is moved or resized. Every time that happens, a jQuery event is triggered on $(document) with the session object as the payload.

We listen to this event, and make an API request with the new session object to update the session information in the database.


The scheduler UI is more complex than described in this blog post. To know more about it, you can check out the scheduler’s javascript code at app/static/js/admin/event/scheduler.js.


Python code examples

I’ve come across many examples of weird behaviour in the Python language while working on the Open Event project. Today I’d like to share some of them with you. I think this knowledge is necessary if you’d like to deepen your Python knowledge a bit.

Simply adding one element to a Python list:

def foo(value, x=[]):
  x.append(value)
  return x

>>> print(foo(1))
>>> print(foo(2))
>>> print(foo(3, []))
>>> print(foo(4))

OUTPUT

[1] 
[1, 2] 
[3]
[1, 2, 4]

The first output is obvious, but the second is not. Let me explain: it happens because the default argument x=[] is evaluated only once, when the function is defined. So on every call to foo() that uses the default, we modify that same list, appending a value to it. Finally we get the [1, 2, 4] output. I recommend avoiding mutable default parameters.
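The usual fix is a None sentinel, so that a fresh list is created on every call that does not pass one explicitly:

def foo(value, x=None):
    if x is None:  # create a fresh list on each call
        x = []
    x.append(value)
    return x

>>> print(foo(1))
>>> print(foo(2))

OUTPUT

[1]
[2]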

Another example:

Do you know which type it is?

>>> print(type([ el for el in range(10)]))
>>> print(type({ el for el in range(10)}))
>>> print(type(( el for el in range(10))))

Again, the first and second types are obvious: <class 'list'> and <class 'set'>. You may think that the last one should return the type tuple, but it returns a generator: <class 'generator'>.

Example:

Do you think that the code below raises an exception?

>>> lst = [1, 2, 3, 4, 5]
>>> print(lst[8:])

If you think that the above expression raises an IndexError, you’re wrong. It returns an empty list: [].

Example: funny boolean operators

>>> 'c' == ('c' or 'b')
True
>>> 'd' == ('a' or 'd')
False
>>> 'c' == ('c' and 'b')
False 
>>> 'd' == ('a' and 'd')
True

You might think that the or and and operators are broken.

You have to know how the Python interpreter evaluates or and and expressions.

An or expression evaluates the first operand and checks if it is truthy. If it is, Python returns its value without evaluating the second operand. If the first operand is falsy, the interpreter evaluates the second operand and returns its value.

An and expression works the other way around: if the first operand is falsy, the whole expression has to be false, so Python returns the first value. If the first operand is truthy, it evaluates the second operand and returns the second value.

Below I will show you how it works.

>>> 'c' == ('c' or 'b')
>>> 'c' == 'c'
True
>>> 'd' == ('a' or 'd')
>>> 'd' == 'a'
False
>>> 'c' == ('c' and 'b')
>>> 'c' == 'b'
False 
>>> 'd' == ('a' and 'd')
>>> 'd' == 'd'
True

I hope that I have explained how the Python interpreter evaluates the or and and operators. Now the above examples should be more understandable.


Testing, Documentation and Merging

As the GSoC period comes to an end, the pressure, excitement and anxiety rise. I am working on the finishing steps of my project. I was successfully able to implement all the tasks I took up. A list of all the tasks and their implementation can be found here.

https://github.com/societyserver/sTeam/wiki/GSOC-2016-Work-Distribution#roadmap-and-work-distribution-on-steam-for-gsoc-2016

At the start of this week I had about 26 pull requests. Each pull request had an independent piece of code for a new task from the list. I had to merge all the pull requests and resolve the conflicts. My earlier tasks involved working on the same code, so there were a lot of conflicts. I spent hours looking through the code and resolving them. I also had to test each feature after merging any two branches. Finally we were able to combine all our code and come up with a branch that contains all the code implemented by me and Ajinkya Wavare.

https://github.com/societyserver/sTeam/tree/gsoc2016-societyserver-devel

https://github.com/societyserver/sTeam/tree/gsoc2016-source

These two are the branches we combined all our code in. I finished my work on the linux command for sTeam by adding support for the last two tools, export and import from git. I also worked on including help to get new users to understand the use of the command.

https://github.com/societyserver/sTeam/pull/135

I also worked on documentation. I started with the testing suite implemented by me. I wrote comments to explain the work and also improved the code by removing unnecessary lines. After this I added the documentation for the new command in steam-shell that I had implemented: the command to work with groups from steam-shell. One of the issues with the testing suite still stands unresolved. I have been breaking my head on it for a week now, but to no avail. I will attempt to solve it in the coming week.

https://github.com/societyserver/sTeam/issues/109

This error occurs for various objects in the first few runs, and then the test suite runs normally, error free.


PayPal Express Checkout in Python

As per the PayPal documentation …

Express Checkout is a fast, easy way for buyers to pay with PayPal. Express Checkout eliminates one of the major causes of checkout abandonment by giving buyers all the transaction details at once, including order details, shipping options, insurance choices, and tax totals.

The basic steps for using express checkout to receive one-time payments are:

  1. Getting the PayPal API credentials.
  2. Making a request to the API with the transaction details to get a token
  3. Using the token to send the users to the PayPal payment page
  4. Capturing the payment and charging the user after the user completes the payment at PayPal.

We will be using PayPal’s Classic NVP (Name-value pair) API for implementing this.

Getting PayPal API Credentials

To begin with, we’ll need API Credentials.
We’ll be using the Signature API credentials, which consist of

  • API Username
  • API Password
  • Signature

To obtain these, you can follow the steps at Creating and managing NVP/SOAP API credentials – PayPal Developer.

You’ll be getting two sets of credentials. Sandbox and Live. We’ll just stick to the Sandbox for now.

Now, we need sandbox test accounts for making and receiving payments. Head over to Creating Sandbox Test Accounts – PayPal Developer and create two sandbox test accounts. One would be the facilitator and one would be the buyer.

PayPal NVP Servers

All the API actions will take place by making a request to the PayPal server. PayPal has 4 different NVP servers for 4 different purposes.

  1. https://api-3t.sandbox.paypal.com/nvp – Sandbox “testing” server for use with API signature credentials.
  2. https://api-3t.paypal.com/nvp – PayPal “live” production server for use with API signature credentials.
  3. https://api.sandbox.paypal.com/nvp – Sandbox “testing” server for use with API certificate credentials.
  4. https://api.paypal.com/nvp – PayPal “live” production server for use with API certificate credentials.

We’ll be using the Sandbox “testing” server for use with API signature credentials.

Creating a transaction and obtaining the token

To create a transaction, we’ll need to make a request with all the transaction details. We can use the Python requests library to easily make the requests. All requests are POST.

We’ll be calling the SetExpressCheckout method of the NVP API to obtain the token.

import requests  
import urlparse

data = {  
    'USER': credentials['USER'],
    'PWD': credentials['PWD'],
    'SIGNATURE': credentials['SIGNATURE'],
    'SUBJECT': credentials['FACILITATOR_EMAIL'],
    'METHOD': 'SetExpressCheckout',
    'VERSION': 93,
    'PAYMENTREQUEST_0_PAYMENTACTION': 'SALE',
    'PAYMENTREQUEST_0_AMT': 100,
    'PAYMENTREQUEST_0_CURRENCYCODE': 'USD',
    'RETURNURL': 'http://localhost:5000/paypal/return/',
    'CANCELURL': 'http://localhost:5000/paypal/cancel/'
}
response = requests.post('https://api-3t.sandbox.paypal.com/nvp', data=data)  
token = dict(urlparse.parse_qsl(response.text))['TOKEN']

Here,

  • USER represents your Sandbox API Username.
  • PWD represents your Sandbox API Password.
  • SIGNATURE represents your Sandbox Signature.
  • SUBJECT represents the facilitator’s email ID.
  • PAYMENTREQUEST_0_AMT is the total transaction amount.
  • PAYMENTREQUEST_0_CURRENCYCODE is the 3-letter ISO 4217 currency code.
  • RETURNURL is where the user will be sent to after the transaction
  • CANCELURL is where the user will be sent to if he/she cancels the transaction.

A URL-encoded, name-value pair response would be obtained. We can decode that into a dict by using Python’s urlparse module.

From the response, we’re extracting the TOKEN which we will use to generate the payment URL for the user.

This token has to be retained since we’ll be using it in further steps of the process.

Redirecting the user to PayPal for Approval

With the token we obtained, we can form the payment URL.

https://www.sandbox.paypal.com/cgi-bin/webscr?cmd=_express-checkout&token=<TOKEN>

We’ll have to send the user to that URL. Once the user completes the transaction at PayPal, he/she will be returned to the RETURNURL where we’ll further process the transaction.
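In a Flask app (the localhost:5000 return URLs above suggest one), the redirect is a one-liner; a hypothetical sketch:

from flask import Flask, redirect

app = Flask(__name__)

@app.route('/paypal/checkout/')
def paypal_checkout():
    # get_express_checkout_token() is a hypothetical helper wrapping the
    # SetExpressCheckout request shown earlier; it returns the TOKEN.
    token = get_express_checkout_token()
    url = ('https://www.sandbox.paypal.com/cgi-bin/webscr'
           '?cmd=_express-checkout&token=' + token)
    return redirect(url)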

Obtaining approved payment details and capturing the payment

Once the user completes the transaction and gets redirected back to RETURNURL, we’ll have to obtain the confirmed payment details from PayPal. For that we can again use the token ID that we obtained before.

We’ll now be making a request to the GetExpressCheckoutDetails method of the API.

import requests  
import urlparse

data = {  
    'USER': credentials['USER'],
    'PWD': credentials['PWD'],
    'SIGNATURE': credentials['SIGNATURE'],
    'SUBJECT': credentials['FACILITATOR_EMAIL'],
    'METHOD': 'GetExpressCheckoutDetails',
    'VERSION': 93,
    'TOKEN': TOKEN
}

response = requests.post('https://api-3t.sandbox.paypal.com/nvp', data=data)  
result = dict(urlparse.parse_qsl(response.text))  
payerID = result['PAYERID']

A URL-encoded, name-value pair response would be obtained. We can decode that into a dict by using Python’s urlparse module.

This will provide us with information about the transaction such as transaction time, transaction amount, charges, transaction mode, etc.

But we’re more interested in the PAYERID, which we’ll need to capture/collect the payment. The money is not transferred to the facilitator’s account until it is captured/collected. So, be sure to collect it.

To collect it, we’ll be making another request to the DoExpressCheckoutPayment method of the API using the token and the PAYERID.

import requests  
import urlparse

data = {  
    'USER': credentials['USER'],
    'PWD': credentials['PWD'],
    'SIGNATURE': credentials['SIGNATURE'],
    'SUBJECT': credentials['FACILITATOR_EMAIL'],
    'METHOD': 'DoExpressCheckoutPayment',
    'VERSION': 93,
    'TOKEN': TOKEN,
    'PAYERID': payerID,
    'PAYMENTREQUEST_0_PAYMENTACTION': 'SALE',
    'PAYMENTREQUEST_0_AMT': 100,
    'PAYMENTREQUEST_0_CURRENCYCODE': 'USD',
}

response = requests.post('https://api-3t.sandbox.paypal.com/nvp', data=data)  
result = dict(urlparse.parse_qsl(response.text))  
status = result['ACK']

All the details have to be the same as the ones provided while obtaining the token. Once we make the request, we’ll again get a URL-encoded, name-value pair response. We can decode that into a dict by using Python’s urlparse module.

From the response, ACK (Acknowledgement status) will provide us with the status of the payment (a minimal check on it is sketched after the list below).

  • Success — A successful operation.
  • SuccessWithWarning — A successful operation; however, there are messages returned in the response that you should examine.
  • Failure — The operation failed; the response also contains one or more error messages explaining the failure.
  • FailureWithWarning — The operation failed and there are messages returned in the response that you should examine.
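As a minimal sketch, we can act on the acknowledgement status obtained above like this (L_LONGMESSAGE0 is one of the standard NVP error fields):

# Guard the capture result on the acknowledgement status.
if status in ('Success', 'SuccessWithWarning'):
    print('Payment captured successfully.')
else:
    print('Payment failed:', result.get('L_LONGMESSAGE0', 'unknown error'))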

And with that, we have completed the PayPal transaction flow for Express Checkout. These are just the basics and might miss a few things. I suggest you go through the following links too for a better understanding of everything:

For Reference:
  1. PayPal Name-Value Pair API Basics – PayPal Developer
  2. How to Create One-Time Payments Using Express Checkout – PayPal Developer

Testing Hero – II

I continued last week’s work on improving the testing framework and adding more test cases. While writing the test cases for create, I had to write a separate test case for each kind of object, which caused a lot of code repetition. Thus the first aim for the week was to design a mechanism for writing generalized test cases, so that we can have an array of objects, loop through them, and pass each object to the same test case.

https://github.com/societyserver/sTeam/issues/113

https://github.com/societyserver/sTeam/pull/114

Right now the structure has a central script called test.pike which imports various other scripts containing the test cases. Let us take one of these scripts, say move.pike. I wanted to write a generalized test case which performs the same action on various objects, so I created one more file containing this generalized test case and imported it into one of the test cases in move.pike. This test case in move.pike is responsible for enumerating the various kinds of objects, sending them to the generalized test case, collecting the output, and then sending the result for the entire test to the central test.pike. I went ahead and implemented this model for moving various objects to a non-existent location and for creating various kinds of objects, and the model seemed to work fairly well.

The journey was not so smooth; I had a few troubles on the way. In all the test cases I was deleting any objects that were created and used in the test. To delete an object I need a reference to it. This reference keeps getting dropped for some reason, and I get an error for calling the delete function on NULL, as the reference no longer exists. I tried to find the cause and fix this bug; however, I couldn’t, and instead worked around the errors by using if statements to check that the object references are not null before calling functions on them. I continued my work on generalizing the test cases and wrote the general tests for all the test cases in the move and create test suites.

In the later part of the week I started working on some merging with my teammate Ajinkya Wavare. I designed more test cases for checking the creation of groups and users. Groups could be created using the generalized test case, but for users I had to add a special test case, as the process of creating a user is different from creating other objects. I ended my week by writing the test case for a long-standing error, i.e., the call to the get_environment function.

Screenshot: output of the test

Adding extensive help for sTeam

This task was something I came up with as an enhancement because of the problems I faced while using sTeam for the first time. During my first week of using sTeam I had a tough time getting used to the commands, and that is when I opened the issue to improve the help. Help for commands consisted of one-liners and was not very helpful, so I took up the task of improving it, so that new users don’t have to face the difficulties that I faced.

Issue: https://github.com/societyserver/sTeam/issues/30

Solution: https://github.com/societyserver/sTeam/pull/95

Not a lot of technical detail was involved in this task, but it was time consuming. I wrote down a few lines explaining what each command does and also added a syntax for each command. While doing this I also realized more improvements that could be made and added them to my task list. My mentor had explained to me how rooms and gates are essentially the same. I discovered that the gothrough command was violating this, as it allowed users to go through gates but not rooms. I discussed this on the IRC and we came up with the solution of changing this command to enter and allowing it to operate on both rooms and gates.

This enhancement became my next task and I worked on changing this command. The function gothrough was changed to enter and the conditions required for it to work on rooms were added. This paved the way for my next task: the look command showed rooms and gates under different sections. Now that there was no difference between rooms and gates, I combined these two sections, changing the output of the look command.

Screenshot: output of look before the changes

Screenshot: output of look after the changes


Issue: https://github.com/societyserver/sTeam/issues/100

Solution: https://github.com/societyserver/sTeam/pull/101

By the end of the week I had started on my next task, which is a major one: writing a testing framework and test cases for coal command calls. I will be discussing more about this in my next blog post.

Continue ReadingAdding extensive help for sTeam