Using custom themes with Yaydoc to build documentation

What is Yaydoc?

Yaydoc aims to be a one-stop solution for all your documentation needs. It is continuously integrated with your repository and builds the site on each commit. One of its primary aims is to minimize user configuration. It is currently in active development.

Why Themes?

Themes give the user the ability to generate visually different sites from the same markup documents without any configuration. It is one of the many features Yaydoc inherits from Sphinx. Now Sphinx comes with 10 built-in themes, but many more custom themes are available on PyPI, the official Python package repository. To use these custom themes, Sphinx requires some setup. But Yaydoc, being an automated system, needs to perform those tasks automatically.

To use a custom theme which has been installed, Sphinx needs to know the name of the theme and where to find it. We do that by specifying two variables in the Sphinx configuration file: html_theme and html_theme_path respectively. Custom themes provide a method that can be called to get the html_theme_path of the theme. Usually that method is named get_html_theme_path, but that is not always the case. We have no way to find the appropriate method automatically. So how do we get the path of an installed theme just from its name, and how do we add it to the generated configuration file?

The configuration file is generated by the sphinx-quickstart command, which Yaydoc uses to initialize the documentation directory. We can override the default generated files by providing our own project templates. The templates are based on the Jinja2 template engine. Firstly, I replaced

```
html_theme = 'alabaster'
```

with

```
html_theme = '{{ html_theme }}'
```

This gives us the ability to pass the name of the theme as a parameter to sphinx-quickstart. Now the user has an option to choose between the 10 built-in themes. For custom themes, however, it is a different story. I had to solve two major issues. The name of the package and the theme may differ.
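The template substitution described above, replacing the hard-coded theme with a Jinja2 placeholder, can be sketched in a few lines. This is only an illustration of the idea, not Yaydoc's actual template code, and the theme value is a made-up example:

```python
from jinja2 import Template

# A hypothetical one-line fragment of a conf.py template;
# the real template is passed to sphinx-quickstart.
conf_template = Template("html_theme = '{{ html_theme }}'")

# Rendering with a user-chosen theme name produces valid Python config.
print(conf_template.render(html_theme='classic'))
# → html_theme = 'classic'
```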
We also need the absolute path to the theme. The following code snippet solves the above-mentioned problems.

```
{% if html_theme in ['alabaster', 'classic', 'sphinxdoc', 'scrolls', 'agogo', 'traditional', 'nature', 'haiku', 'pyramid', 'bizstyle'] %}
# Theme is builtin. Just set the name
html_theme = '{{ html_theme }}'
{% else %}
# Theme is a custom python package. Lets install it.
import pip
exitcode = pip.main(['install', '{{ html_theme }}'])
if exitcode:
    # Non-zero exit code
    print("""{0} is not available on pypi.
Please ensure the theme can be installed
using 'pip install {0}'.""".format('{{ html_theme }}'),
          file=sys.stderr)
else:
    import {{ html_theme }}

    def get_path_to_theme():
        package_path = os.path.dirname({{ html_theme }}.__file__)
        for root, dirs, files in os.walk(package_path):
            if 'theme.conf' in files:
                return root

    path_to_theme = get_path_to_theme()

    if path_to_theme is None:
        print("\n{0} does not appear to be a sphinx theme.".format('{{ html_theme }}'),
              file=sys.stderr)
        html_theme = 'alabaster'
    else:
        html_theme = os.path.basename(path_to_theme)
        html_theme_path = [os.path.abspath(os.path.join(path_to_theme, os.pardir))]
{% endif %}
```

It performs the following tasks in order: It first checks if the provided theme is one of the built…


How to teach SUSI skills calling an External API

SUSI is an intelligent personal assistant. SUSI can learn skills to understand and respond to user queries better. A skill is taught using rules. Writing rules is an easy task, and one doesn't need any programming background either. Anyone can start contributing. Check out these tutorials and watch this video to get started and start teaching SUSI.

SUSI can be taught to call external APIs to answer user queries. While writing skills, we first mention string patterns to match the user's query and then tell SUSI what to do with the matched pattern. The pattern matching is similar to regular expressions, and we can also retrieve the matched parameters using the $<parameter number>$ notation.

Example:

```
My name is *
Hi $1$!
```

When the user inputs "My name is Uday", it is matched with "My name is *" and "Uday" is stored in $1$. So the output given is "Hi Uday!".

SUSI can call an external API to reply to a user query. An API endpoint or url, when called, must return a JSON or JSONP response for SUSI to be able to parse the response and retrieve the answer.

Rule Format for a skill calling an external API

The rule format for calling an external API is:

```
<regular expression for pattern matching>
!console:<return answer using $object$ or $required_key$>
{
  "url" : "<API endpoint or url>",
  "path" : "$.<key in the API response to find the answer>"
}
eol
```

The url is the API endpoint to be called, which returns a JSON or JSONP response. Parameters, if any, can be added to the url using the $$ notation. The path is used to help SUSI know where to look for the answer in the returned response. If the path points to a root element, then the answer is stored in $object$; otherwise we can query $key$ to get the answer, which is the value of that key under the path. eol, or end of line, indicates the end of the rule.

Understanding the Path Attribute

Let us understand the path attribute better through some test cases.
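The wildcard matching described above behaves much like a regular-expression capture. A rough Python sketch of the idea (an illustration only, not SUSI's actual implementation; the helper name is made up):

```python
import re

def match_rule(pattern, query):
    """Translate a SUSI-style '*' pattern into a regex and capture the groups."""
    regex = '^' + re.escape(pattern).replace(r'\*', '(.*)') + '$'
    m = re.match(regex, query)
    return m.groups() if m else None

groups = match_rule('My name is *', 'My name is Uday')
print(groups[0])  # → Uday

# Substitute the captured parameter into the answer template.
answer = 'Hi $1$!'.replace('$1$', groups[0])
print(answer)  # → Hi Uday!
```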
In each of the test cases we discuss what the path should be, and how to retrieve the answer for a given required answer from the JSON response of an API.

API response in JSON:

```
{ "Key1" : "Value1" }
```

Required answer: Value1
Path: "$.Key1"  =>  Retrieve answer: $object$

API response in JSON:

```
{ "Key1" : [{"Key11" : "Value11"}] }
```

Required answer: Value11
Path: "$.Key1[0]"  =>  Retrieve answer: $Key11$
Path: "$.Key1[0].Key11"  =>  Retrieve answer: $object$

API response in JSON:

```
{ "Key1" : {"Key11" : "Value11"} }
```

Required answer: Value11
Path: "$.Key1"  =>  Retrieve answer: $Key11$
Path: "$.Key1.Key11"  =>  Retrieve answer: $object$

API response in JSON:

```
{ "Key1" : { "Key11" : "Value11", "Key12" : "Value12" } }
```

Required answer: Value11, Value12
Path: "$.Key1"  =>  Retrieve answer: $Key11$, $Key12$

Where to write these rules?

Now, since we know…
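To make the path semantics concrete, here is a rough Python sketch of how such a path could be resolved against a parsed JSON response. This is an illustration of the traversal, not SUSI's parser, and the function name is made up:

```python
import json
import re

def resolve_path(path, response):
    """Walk a '$.Key1[0].Key11'-style path through a JSON response."""
    node = json.loads(response)
    for part in path.lstrip('$.').split('.'):
        # Each segment is a key, optionally followed by a list index.
        m = re.match(r'(\w+)(?:\[(\d+)\])?$', part)
        key, index = m.group(1), m.group(2)
        node = node[key]
        if index is not None:
            node = node[int(index)]
    return node

resp = '{"Key1": [{"Key11": "Value11"}]}'
print(resolve_path('$.Key1[0].Key11', resp))  # → Value11
print(resolve_path('$.Key1[0]', resp))        # → {'Key11': 'Value11'}
```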


Generating a documentation site from markup documents with Sphinx and Pandoc

Generating a fully fledged website from a set of markup documents is no easy feat. But the wonderful tool Sphinx certainly makes the task easier. Sphinx does the heavy lifting of generating a website with built-in JavaScript-based search. But sometimes it's not enough.

This week we were faced with two issues related to documentation generation on loklak_server and susi_server. First, let me give you some context. Sphinx requires an index.rst file within /docs/ which it uses to generate the first page of the site. A very obvious way to fill it, which helps us avoid unnecessary duplication, is to use the include directive of reStructuredText to include the README file from the root of the repository. This leads to the following two problems:

1. The include directive can only properly include reStructuredText, not a markdown document. Given a markdown document, it tries to parse the markdown as reStructuredText, which leads to errors.
2. Any relative links in the README break when it is included in another folder.

To fix the first issue, I used pypandoc, a thin wrapper around Pandoc. Pandoc is a wonderful command-line tool which allows us to convert documents from one markup format to another. From the official Pandoc website itself: "If you need to convert files from one markup format into another, pandoc is your swiss-army knife." pypandoc requires a working installation of Pandoc, which can be downloaded and installed automatically using a single line of code:

```
pypandoc.download_pandoc()
```

This gives us a cross-platform way to download pandoc without worrying about the current platform. Now, pypandoc leaves the installer in the current working directory after download, which is fine locally, but creates a problem when run on remote systems like Travis. The installer could get committed accidentally to the repository.
To solve this, I had to take a look at the source code for pypandoc and call an internal method which pypandoc uses to set the name of the installer. I use that method to find out the name of the file and then delete it after the installation is over. This is one of the many benefits of open-source projects. Had pypandoc not been open source, I would not have been able to do that.

```
url = pypandoc.pandoc_download._get_pandoc_urls()[0][pf]
filename = url.split('/')[-1]
os.remove(filename)
```

Here pf is the current platform, which can be one of 'win32', 'linux', or 'darwin'.

Now let's take a look at our second issue. To solve that, I used regular expressions to capture any relative links. Capturing links was easy. All links in reStructuredText are in the same format:

```
`Title <url>`__
```

Similarly, links in markdown are in the following format:

```
[Title](url)
```

Regular expressions were the perfect candidate for this. To detect which links were relative and needed to be fixed, I checked which links start with the /docs/ directory, and then all I had to do was remove the /docs prefix from those links.

A note about the loklak and susi server projects: Loklak is a server application which is able…
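The second fix can be sketched in a few lines of Python: capture markdown links with a regular expression and strip the /docs prefix from relative ones. The regex and helper name here are illustrative, not the exact Yaydoc code:

```python
import re

def fix_relative_links(markdown):
    """Rewrite [Title](url) links that point inside the docs directory."""
    def repl(match):
        title, url = match.group(1), match.group(2)
        if url.startswith('/docs/'):
            # Drop the leading /docs so the link works after inclusion.
            url = url[len('/docs'):]
        return '[{0}]({1})'.format(title, url)
    return re.sub(r'\[([^\]]+)\]\(([^)]+)\)', repl, markdown)

print(fix_relative_links('See the [install guide](/docs/install.md).'))
# → See the [install guide](/install.md).
```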


Using NodeBuilder to instantiate node based Elasticsearch client and Visualising data

As elastic.io mentions, Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. But in many setups, it is not possible to manually install an Elasticsearch node on a machine. To handle these types of scenarios, Elasticsearch provides the NodeBuilder module, which can be used to spawn an Elasticsearch node programmatically. Let's see how.

Getting Dependencies

In order to get the ES Java API, we need to add the following line to our dependencies.

```
compile group: 'org.elasticsearch', name: 'securesm', version: '1.0'
```

The required packages will be fetched the next time we run gradle build.

Configuring Settings

In the Elasticsearch Java API, Settings are used to configure the node(s). To create a node, we first need to define its properties.

```
Settings.Builder settings = new Settings.Builder();

settings.put("cluster.name", "cluster_name");  // The name of the cluster

// Configuring HTTP details
settings.put("http.enabled", "true");
settings.put("http.cors.enabled", "true");
settings.put("http.cors.allow-origin", "https?:\/\/localhost(:[0-9]+)?/");  // Allow requests from localhost
settings.put("http.port", "9200");

// Configuring TCP and host
settings.put("transport.tcp.port", "9300");
settings.put("network.host", "localhost");

// Configuring node details
settings.put("node.data", "true");
settings.put("node.master", "true");

// Configuring index
settings.put("index.number_of_shards", "8");
settings.put("index.number_of_replicas", "2");
settings.put("index.refresh_interval", "10s");
settings.put("index.max_result_window", "10000");

// Defining paths
settings.put("path.conf", "/path/to/conf/");
settings.put("path.data", "/path/to/data/");
settings.put("path.home", "/path/to/data/");

settings.build();  // Build with the assigned configurations
```

There are many more settings that can be tuned in order to get the desired node configuration.
Building the Node and Getting Clients

The Java API makes it very simple to launch an Elasticsearch node. This example will make use of the settings that we just built.

```
Node elasticsearchNode = NodeBuilder.nodeBuilder().local(false).settings(settings).node();
```

A piece of cake, isn't it? Let's get a client now, on which we can execute our queries.

```
Client elasticsearchClient = elasticsearchNode.client();
```

Shutting Down the Node

```
elasticsearchNode.close();
```

A nice implementation of using the module can be seen at ElasticsearchClient.java in the loklak project. It uses the settings from a configuration file and builds the node using them.

Visualisation using elasticsearch-head

So by now, we have an Elasticsearch client which is capable of doing all sorts of operations on the node. But how do we visualise the data that is being stored? Writing code and running it every time to check results is a lengthy thing to do and significantly slows down the development/debugging cycle. To overcome this, we have a web frontend called elasticsearch-head, which lets us execute Elasticsearch queries and monitor the cluster.

To run elasticsearch-head, we first need to have grunt-cli installed:

```
$ sudo npm install -g grunt-cli
```

Next, we will clone the repository using git and install its dependencies:

```
$ git clone git://github.com/mobz/elasticsearch-head.git
$ cd elasticsearch-head
$ npm install
```

Next, we simply need to run the server and go to the indicated address in a web browser:

```
$ grunt server
```

At the top, enter the location at which elasticsearch-head can interact with the cluster and hit Connect. Upon connecting, the dashboard appears, telling us about the status of the cluster. The dashboard shown above is from the loklak project (will talk more about it). There are 5 major sections in the UI:

1. Overview: The above screenshot gives details about the indices and shards of the cluster.…


Building the Scheduler UI

{ Repost from my personal blog @ https://blog.codezero.xyz/building-the-scheduler-ui }

If you hadn't already noticed, Open Event has got a shiny new feature: a graphical and interactive scheduler to organize sessions into their respective rooms and timings. As you can see in the above screenshot, we have a timeline on the left and a lot of session boxes to its right. All the boxes are re-sizable and drag-and-drop-able. The columns represent the different rooms (a.k.a. micro-locations). The sessions can be dropped into their respective rooms. Above the timeline is a toolbar that controls the date. The timeline can be changed for each date by clicking on the respective date button.

The Clear overlaps button automatically checks the timeline and removes any sessions that are overlapping each other. The removed sessions are moved to the unscheduled sessions pane at the left. The Add new micro-location button can be used to instantly add a new room. A modal dialog opens, and the micro-location is instantly added to the timeline once saved. The Export as iCal button allows the organizer to export all the sessions of that event in the popular iCalendar format, which can then be imported into various calendar applications. The Export as PNG button saves the entire timeline as a PNG image file, which can then be printed by the organizers or circulated via other means if necessary.

Core Dependencies

The scheduler makes use of some javascript libraries for the implementation of most of the core functionality:

- Interact.js - for drag-and-drop and resizing
- Lodash - for array/object manipulations and object cloning
- jQuery - for DOM manipulation
- Moment.js - for date-time parsing and calculation
- Swagger JS - for communicating with our API, which is documented according to the swagger specs

Retrieving data via the API

The swagger js client is used to obtain the sessions data using the API. The client is asynchronously initialized on page load.
The client can be accessed from anywhere using the javascript function initializeSwaggerClient. The swagger initialization function accepts a callback, which is called once the client has been initialized. If the client is already initialized, the callback is called immediately.

```
var swaggerConfigUrl = window.location.protocol + "//" + window.location.host + "/api/v2/swagger.json";
window.swagger_loaded = false;

function initializeSwaggerClient(callback) {
    if (!window.swagger_loaded) {
        window.api = new SwaggerClient({
            url: swaggerConfigUrl,
            success: function () {
                window.swagger_loaded = true;
                if (callback) {
                    callback();
                }
            }
        });
    } else {
        if (callback) {
            callback();
        }
    }
}
```

For getting all the sessions of an event, we can do:

```
initializeSwaggerClient(function () {
    api.sessions.get_session_list({event_id: eventId}, function (sessionData) {
        var sessions = sessionData.obj;  // Here we have an array of session objects
    });
});
```

In a similar fashion, all the micro-locations of an event can also be loaded.

Processing the sessions and micro-locations

Each session object is looped through; its start time and end time are parsed into moment objects, its duration is calculated, and its distance from the top of the timeline is calculated in pixels. The new object, with the additional information, is stored…


Python code examples

I've met many weird examples of behaviour in the python language while working on the Open Event project. Today I'd like to share some examples with you. I think this knowledge is necessary if you'd like to increase your knowledge of python a bit.

Simple adding of one element to a python list:

```
def foo(value, x=[]):
    x.append(value)
    return x

>>> print(foo(1))
>>> print(foo(2))
>>> print(foo(3, []))
>>> print(foo(4))
```

OUTPUT

```
[1]
[1, 2]
[3]
[1, 2, 4]
```

The first output is obvious, but the second is not. Let me explain: it happens because the x argument (an empty list) is only evaluated once, so on every call to foo() we modify that same list, appending a value to it. Finally we get the [1, 2, 4] output. I recommend avoiding mutable params as defaults.

Another example: do you know which type each of these is?

```
>>> print(type([ el for el in range(10)]))
>>> print(type({ el for el in range(10)}))
>>> print(type(( el for el in range(10))))
```

Again, the first and second types are obvious: <class 'list'> and <class 'set'>. You may think that the last one should return type tuple, but it returns a generator: <class 'generator'>.

Example: do you think that the below code raises an exception?

```
list = [1, 2, 3, 4, 5]
>>> print(list[8:])
```

If you think that the above expression raises an index error, you're wrong. It returns an empty list [].

Example: funny boolean operators

```
>>> 'c' == ('c' or 'b')
True
>>> 'd' == ('a' or 'd')
False
>>> 'c' == ('c' and 'b')
False
>>> 'd' == ('a' and 'd')
True
```

You might think that the OR and AND operators are broken, but you have to know how the python interpreter evaluates OR and AND expressions. An OR expression takes the first operand and checks if it is true. If the first operand is true, then Python returns that object's value without checking the second value. If the first operand is false, the interpreter checks the second value and returns that value. The AND operator checks whether the first operand is false; if it is, the whole expression has to be false.
So it returns the first value; but if the first operand is true, it checks the second operand and returns the second value. Below I will show you how it works:

```
>>> 'c' == ('c' or 'b')   # ('c' or 'b') evaluates to 'c'
>>> 'c' == 'c'
True
>>> 'd' == ('a' or 'd')   # ('a' or 'd') evaluates to 'a'
>>> 'd' == 'a'
False
>>> 'c' == ('c' and 'b')  # ('c' and 'b') evaluates to 'b'
>>> 'c' == 'b'
False
>>> 'd' == ('a' and 'd')  # ('a' and 'd') evaluates to 'd'
>>> 'd' == 'd'
True
```

I hope I have explained how the python interpreter evaluates the OR and AND operators. Now the above examples should be more understandable.
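The mutable-default pitfall from the first example is usually avoided with the None-default idiom. A minimal sketch (the function name is just for illustration):

```python
def foo(value, x=None):
    # A fresh list is created on every call where x is not supplied,
    # so state no longer leaks between calls.
    if x is None:
        x = []
    x.append(value)
    return x

print(foo(1))      # → [1]
print(foo(2))      # → [2]  (not [1, 2] as with the mutable default)
print(foo(3, []))  # → [3]
```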


Testing, Documentation and Merging

As the GSoC period comes to an end, the pressure, excitement and anxiety rise. I am working on the finishing steps of my project. I was successfully able to implement all the tasks I took up. A list of all the tasks and their implementation can be found here:

https://github.com/societyserver/sTeam/wiki/GSOC-2016-Work-Distribution#roadmap-and-work-distribution-on-steam-for-gsoc-2016

At the start of this week I had about 26 pull requests. Each pull request had an independent piece of code for a new task from the list. I had to merge all the pull requests and resolve the conflicts. My earlier tasks involved working on the same code, so there were a lot of conflicts. I spent hours looking through the code and resolving conflicts. I also had to test each feature after merging any two of the branches. Finally we were able to combine all our code and come up with a branch that contains all the code implemented by me and Ajinkya Wavare.

https://github.com/societyserver/sTeam/tree/gsoc2016-societyserver-devel
https://github.com/societyserver/sTeam/tree/gsoc2016-source

These are the two branches we combined all our code into. I finished my work on the linux command for sTeam by adding support for the last two tools, which are export to and import from git. I also worked on including a help section to get new users to understand the use of the command.

https://github.com/societyserver/sTeam/pull/135

I also worked on documentation. I started with the testing suite, which was implemented by me. I wrote comments to explain the work and also improved the code by removing unnecessary lines. After this I added the documentation for the new command in steam-shell that I had implemented: the command to work with groups from the steam-shell. One of the issues with the testing suite still stands unresolved. I have been breaking my head on it for a week now, but to no result. I will attempt to solve it in the coming week.
https://github.com/societyserver/sTeam/issues/109

This error occurs for various objects in the first few runs, and then the test suite runs normally, error-free.


PayPal Express Checkout in Python

As per the PayPal documentation...

Express Checkout is a fast, easy way for buyers to pay with PayPal. Express Checkout eliminates one of the major causes of checkout abandonment by giving buyers all the transaction details at once, including order details, shipping options, insurance choices, and tax totals.

The basic steps for using Express Checkout to receive one-time payments are:

1. Getting the PayPal API credentials.
2. Making a request to the API with the transaction details to get a token.
3. Using the token to send the user to the PayPal payment page.
4. Capturing the payment and charging the user after the user completes the payment at PayPal.

We will be using PayPal's Classic NVP (Name-Value Pair) API for implementing this.

Getting PayPal API Credentials

To begin with, we'll need API credentials. We'll be using the Signature API credentials, which consist of:

- API Username
- API Password
- Signature

To obtain these, you can follow the steps at Creating and managing NVP/SOAP API credentials - PayPal Developer. You'll be getting two sets of credentials: Sandbox and Live. We'll just stick to the Sandbox for now. Next, we need sandbox test accounts for making and receiving payments. Head over to Creating Sandbox Test Accounts - PayPal Developer and create two sandbox test accounts. One would be the facilitator and one would be the buyer.

PayPal NVP Servers

All the API actions will take place by making a request to a PayPal server. PayPal has 4 different NVP servers for 4 different purposes:

- https://api-3t.sandbox.paypal.com/nvp - Sandbox "testing" server for use with API signature credentials.
- https://api-3t.paypal.com/nvp - PayPal "live" production server for use with API signature credentials.
- https://api.sandbox.paypal.com/nvp - Sandbox "testing" server for use with API certificate credentials.
- https://api.paypal.com/nvp - PayPal "live" production server for use with API certificate credentials.
We'll be using the Sandbox "testing" server for use with API signature credentials.

Creating a transaction and obtaining the token

To create a transaction, we'll need to make a request with all the transaction details. We can use the Python requests library to easily make the requests. All requests are POST. We'll be calling the SetExpressCheckout method of the NVP API to obtain the token.

```
import requests
import urlparse

data = {
    'USER': credentials['USER'],
    'PWD': credentials['PWD'],
    'SIGNATURE': credentials['SIGNATURE'],
    'SUBJECT': credentials['FACILITATOR_EMAIL'],
    'METHOD': 'SetExpressCheckout',
    'VERSION': 93,
    'PAYMENTREQUEST_0_PAYMENTACTION': 'SALE',
    'PAYMENTREQUEST_0_AMT': 100,
    'PAYMENTREQUEST_0_CURRENCYCODE': 'USD',
    'RETURNURL': 'http://localhost:5000/paypal/return/',
    'CANCELURL': 'http://localhost:5000/paypal/cancel/'
}

response = requests.post('https://api-3t.sandbox.paypal.com/nvp', data=data)
token = dict(urlparse.parse_qsl(response.text))['TOKEN']
```

Here,

- USER represents your Sandbox API Username.
- PWD represents your Sandbox API Password.
- SIGNATURE represents your Sandbox Signature.
- SUBJECT represents the facilitator's email ID.
- PAYMENTREQUEST_0_AMT is the total transaction amount.
- PAYMENTREQUEST_0_CURRENCYCODE is the 3-letter ISO 4217 currency code.
- RETURNURL is where the user will be sent after the transaction.
- CANCELURL is where the user will be sent if he/she cancels the transaction.

A URL-encoded, name-value pair response would be obtained. We can decode that into a dict by using Python's urlparse module. From the response, we're extracting the TOKEN, which we will use to generate the payment URL for the user. This token…
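The NVP decoding and redirect step can be sketched as follows. This is a Python 3 sketch (the post itself uses Python 2's urlparse), the token value is a made-up placeholder, and the sandbox redirect URL shown is the standard Express Checkout payment page as I understand it:

```python
from urllib.parse import parse_qsl, urlencode

# A URL-encoded NVP response as returned by SetExpressCheckout
# (the token value below is a made-up placeholder).
nvp_response = "TOKEN=EC-8AX123456789&ACK=Success&VERSION=93"
token = dict(parse_qsl(nvp_response))['TOKEN']

# Build the sandbox payment page URL the buyer is redirected to.
redirect_url = ('https://www.sandbox.paypal.com/cgi-bin/webscr?' +
                urlencode({'cmd': '_express-checkout', 'token': token}))
print(redirect_url)
```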


Testing Hero – II

I continued last week's work on improving the testing framework and adding more test cases. While writing the test cases for create, I had to write a separate test case for each kind of object. This caused a lot of repetition of code. Thus the first aim for the week was to design a mechanism for writing generalized test cases, so that we can have an array of objects, loop through them, and pass each object to the same test case.

https://github.com/societyserver/sTeam/issues/113
https://github.com/societyserver/sTeam/pull/114

Right now the structure has a central script called test.pike which imports various other scripts containing the test cases. Let us take one of these scripts, say move.pike. I wanted to write a generalized test case which performs the same action on various objects, so I created one more file containing this generalized test case and imported it into one of the test cases in move.pike. This test case in move.pike is responsible for enumerating the various kinds of objects, sending them to the generalized test case, collecting the output, and then sending the result for the entire test to the central test.pike. Then I went ahead and implemented this model for moving various objects to a non-existent location and for creating various kinds of objects, and the model seemed to work fairly well.

The journey was not so smooth; I had a few troubles on the way. In all the test cases I was deleting any objects that were created and used in the test. To delete any object I need to get a reference to it. This reference keeps getting dropped for some reason, and I get an error for calling the delete function on NULL, as the reference no longer exists. I tried finding the cause of this bug and solving it; however, I couldn't, and instead worked around the errors by using if statements to check that the object references are not null before calling functions on them.
I continued my work on generalizing the test cases and wrote the general tests for all the test cases in the move and create test suites. In the later part of the week I started working on some merging with my teammate Ajinkya Wavare. I designed more test cases for checking the creation of groups and users. Groups could be created using the generalized test case; for users, however, I had to add a special test case, as the process of creating a user is different from creating other objects. I ended my week by writing the test case for a long-standing error, i.e., the call to the get_environment function.


Adding extensive help for sTeam

This task was something I came up with as an enhancement because of the problems I faced while using sTeam for the first time. During my first week of using sTeam I had a tough time getting used to the commands, and that is when I opened the issue to improve the help. Help texts for commands were one-liners and not very helpful, so I took up the task of improving them, so that new users don't have to face the difficulties that I faced.

Issue: https://github.com/societyserver/sTeam/issues/30
Solution: https://github.com/societyserver/sTeam/pull/95

Not a lot of technical detail was involved in this task, but it was time consuming. I wrote down a few lines explaining what each command does and also added a syntax for each command. While doing this I also realized more improvements that could be made and added them to my task list. My mentor had explained to me how rooms and gates were the same. I discovered that the gothrough command was violating this, as it allowed users to go through gates but not rooms. I discussed this on the IRC, and we came up with a solution: change this command to enter and allow it to operate on both rooms and gates. This enhancement became my next task, and I worked on changing this command. The function gothrough was changed to enter, and the conditions required for it to work on rooms were added. This paved the way for my next task: the look command showed rooms and gates under different sections, and now that there was no difference between rooms and gates, I combined these two sections to change the output of the look command.

Issue: https://github.com/societyserver/sTeam/issues/100
Solution: https://github.com/societyserver/sTeam/pull/101

By the end of the week I had started on my next task, which was a major one: writing a testing framework and test cases for COAL command calls. I will be discussing more about this in my next blog post.
