Why should you use Heroku?

Last week I dedicated most of my time to implementing new Heroku features in Open Event. Since these were not easy tasks, I'd like to share my experience.

What is Heroku?

Heroku is a cloud Platform-as-a-Service (PaaS) supporting several programming languages like Java, Node.js, Scala, Clojure, Python, PHP, and Go.

Easy to deploy

What you need to do:

  1. Create an account on Heroku.
  2. Download the Heroku Toolbelt and install it in your environment.
  3. Go to your project directory (/open-event-orga-server).
  4. Sign in to Heroku on the command line using your credentials:
    $ heroku login
  5. Create an app with a name:
    $ heroku apps:create your_app_name

    After executing the above command you will receive a link to your application:

    http://your_app_name.herokuapp.com/
  6. Push the latest changes to Heroku:
    $ git push heroku master
  7. If everything is OK, you can check the result at http://your_app_name.herokuapp.com/ (sometimes it does not work the way we want 🙁).

Easy to configure

To list/set/get config variables use:

$ heroku config
$ heroku config:set YOUR_VARIABLE=token123
$ heroku config:get YOUR_VARIABLE

or you can go to your application dashboard and perform the above operations there.

How can you access these variables from Python?

$ python
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> your_variable = os.environ.get('YOUR_VARIABLE', None)
>>> print your_variable
token123

I've used this to display the current release in my Python application (you need to generate a special API token and add it to the config variables):

os.popen('curl -n https://api.heroku.com/apps/open-event/releases'
         ' -H "Authorization: Bearer ' + token + '"'
         ' -H "Accept: application/vnd.heroku+json; version=3"').read()

Easy to monitor

If something is wrong with your app, you need to use this command:

$ heroku logs

It shows all the logs.
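If you want to follow new log lines as they arrive, the logs command can also tail them:

$ heroku logs --tail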

To see the 10 latest releases use:

$ heroku releases

How can you set up Open Event to deploy to Heroku?

  1. Clone https://github.com/fossasia/open-event-orga-server
  2. Go to the directory of open event orga server (/open-event-orga-server).
  3. Add the Heroku git remote:
     $ heroku git:remote -a open-event
  4. You can check that open-event has been added as a git remote:
    $ git remote -v
    heroku https://git.heroku.com/open-event.git (fetch)
    heroku https://git.heroku.com/open-event.git (push)
    origin https://github.com/fossasia/open-event-orga-server.git (fetch)
    origin https://github.com/fossasia/open-event-orga-server.git (push)
  5. Now you can deploy changes to the open-event application (you need permissions 🙂).

Why should you use Heroku?

It's great for deploying apps because you can share your work in a short time, which is what I've done. Besides, it's very well documented, so you can find answers to most of your questions there. Finally, most things can be configured from the Heroku dashboard, which is one of the biggest advantages of this tool.


Better fields and validation in Flask Restplus

We at the Open Event Server project are using flask-restplus for the API. Apart from auto-generating the Swagger specification, another great plus point of restplus is how easily we can set input and output models, and the same is automatically shown in the Swagger UI. We can also auto-validate the input in POST/PUT requests to make sure that we get what we want.

@api.expect(EVENT_POST, validate=True)
def put(self, id):
    """Modify object at id"""
    pass

As can be seen above, the validate param of the namespace.expect decorator allows us to auto-validate the input payloads. This used to work well until one day I realized there were a few problems.

  1. When a field was defined as, say, fields.Integer, it would accept only integer values, not even null.
  2. If a string field had its required param set to True, it was still possible to set an empty string as its value, and the in-built validator wouldn't catch it.
  3. Even if I somehow managed to hack my way into supporting null in a field, it would then accept null even when required=True.
  4. We had no control over what error message was returned.

Here is a typical model for reference:

EVENT = api.model('Event', {
    'id': fields.Integer,
    'name': fields.String(required=True)
})

Especially problem #1 was a huge one, as it questioned the whole foundation of the API. So we realized it would be better if we didn't use namespace.expect and used a custom validator instead. For the custom validator, we first had to create custom fields that the validator could benefit from. Luckily flask-restplus comes with a great API for creating custom fields. So we quickly created custom fields for all common fields (Integer, String) and more specific fields like Email, Uri and Color. Creating these specific fields was a huge advantage, as now we can show a proper example for each field type in the Swagger UI.

class Email(fields.String):
    """
    Email field
    """
    __schema_type__ = 'string'
    __schema_format__ = 'email'
    __schema_example__ = '[email protected]'

Consider the above code; now when we use Email as a field for a value, the example shown for it in Swagger UI will be '[email protected]'. Quite cool, right?
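The other specific fields follow the same pattern. For instance, a Color field could look like this (a sketch; the schema format and example value here are illustrative):

class Color(fields.String):
    """
    Color field
    """
    __schema_type__ = 'string'
    __schema_format__ = 'color'
    __schema_example__ = '#ff0000'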

Now we needed a way to validate these fields. For that, what we did was to create a validate method in each of the field classes. This validate method gets the value and checks if it is valid. Consider the following code –

import re
from flask_restplus import fields

EMAIL_REGEX = re.compile(r'\S+@\S+\.\S+')

class Email(fields.String):
    def validate(self, value):
        if not value:
            return False if self.required else True
        if not EMAIL_REGEX.match(value):
            return False
        return True

Once each of the fields had its validate method, we created a validate_payload() function that takes the API model and compares it with the payload. It first checks if all required keys are present in the payload. When that is true, it finally validates each field's value using the field class's validate method.

from flask import abort
from flask_restplus import fields
from custom_fields import CustomField

def validate_payload(payload, api_model):
    # check if any required fields are missing in payload
    for key in api_model:
        if api_model[key].required and key not in payload:
            abort(400, 'Required field \'%s\' missing' % key)
    # check payload
    for key in payload:
        field = api_model[key]
        if isinstance(field, fields.List):
            field = field.container
            data = payload[key]
        else:
            data = [payload[key]]
        if isinstance(field, CustomField) and hasattr(field, 'validate'):
            for i in data:
                if not field.validate(i):
                    abort(400, 'Validation of \'%s\' field failed' % key)

CustomField is the base class that each of the custom fields mentioned above inherits. So checking if a field is an instance of CustomField is enough to know whether it is a custom field or not. Another thing that may look weird in the above code is the use of fields.List. If you look closely, I have added this to support custom fields inside lists. So if you have used a custom field in a list, it will work too. But obviously, this only supports single-level lists for now. The thing is we didn't need more than that, so I let it go. 😜
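For completeness, here is a minimal sketch of how this could be wired into a resource in place of @api.expect(..., validate=True); the route and model names are illustrative:

from flask import request
from flask_restplus import Resource

@api.route('/events/<int:id>')
class Event(Resource):
    def put(self, id):
        """Modify object at id"""
        payload = request.get_json()
        # run our custom validation before touching the database
        validate_payload(payload, EVENT)
        # ... update the object and return it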

This basically sums up how we are validating input payloads at Open Event. Of course, this is very basic, but we will keep improving it as the project progresses. Stay tuned to the Open Event blog if you want to follow the progress of the project.

Links to full code at the time of writing this post are –

  1. Custom Fields
  2. Validate Payload

I hope you found this post useful. Thanks for reading.

 

{{ Repost from my personal blog http://aviaryan.in/blog/gsoc/restplus-validation-custom-fields.html }}


A look into the knitting pattern format

We are currently working on a format that allows exchanging knitting instructions independent of how they are going to be knit: by a machine like a Brother or Pfaff, or by hand.

For this to be possible we need the format to be as general as possible and have no ties to a specific form of knitting. At some point the transition to a format specifically designed for a certain machine is necessary. However, we believe that it is possible to have a format so general that instructions for all types of machines and for hand knitting could be generated from it.

We have decided to use JSON as the language to describe the format, because it is machine readable and human readable at the same time.
The structure of our format follows the rows in knitting.

Our first thought was about creating a format that would describe how meshes are connected and how the thread travels. This would allow for great flexibility, and it should be possible to represent everything like this. However, we have decided against it, because we think this format would be quite complicated (different orientations and twists of meshes are possible, different threads for multiple colors…) and would get quite big very quickly, because of all the different properties for each mesh and because of the sheer number of meshes. Furthermore, and this point is probably more important, knitters do not think this way. If we had a format like that, it would not be easy for human beings to understand what was happening.

Whenever I knit by hand, I never think about how all the meshes are connected by this single thread I am using. I always think about which operations I am performing in each row. Instructions for creating patterns in knitting are also written this way: they give the knitter a set of knitting operations to perform and possibly repeat. We have concluded that most knitters think in knitting operations performed rather than connections between meshes.

[Image: Knitting instructions from Garnstudio's Café.]

Therefore we have decided to base our format on knitting instructions/operations. The most common instructions probably are: knit, purl, cast on, bind off, knit two together, yarn over. Of course, for increases and decreases there are many different operations which work in a similar way but have slight differences (e.g. skp, k2tog).
Since many things are possible in knitting and it is unlikely that we will ever manage to create a complete list of all the possible operations, we have decided to have a very open format that allows the definition of new instructions.

Instructions are also defined in JSON format. Here is an example, the “k2tog” instruction:

[
    {
        "type" : "k2tog",
        "title" : {
            "en-en" : "Knit 2 Together"
        },
        "number of consumed meshes" : 2,
        "description" : {
            "wikipedia" : {
                "en-en" : "https://en.wikipedia.org/wiki/Knitting_abbreviations#Types_of_knitting_abbreviations"
            },
            "text" : {
                "en-en" : "Knit two stitches together, as if they were one stitch."
            }
        }
    }
]

 

Knitting patterns consist of multiple rows, which consist of multiple instructions. Furthermore, we want to define the connections between rows. This is important so we can express gaps or slits which are multiple rows long. For example, when knitting pants the two legs will be separate: they will be knit separately, and their combined width will be increased in comparison to the width of the hip.
Here is an example of a pattern which specifies a cast on in the first row, then a row where all stitches are knit; the last row is bound off.

{
  "type" : "knitting pattern",
  "version" : "0.1",
  "comment" : {
    "content" : "cast on and bind off",
    "type" : "markdown"
    },
  "patterns" : [
    {
      "id" : "knit",
      "name" : "cobo",
      "rows" : [
        {
          "id" : 1,
          "instructions" : [
            {"id": "1.0", "type": "co"},
            {"id": "1.1", "type": "co"},
            {"id": "1.2", "type": "co"},
            {"id": "1.3", "type": "co"}
          ]
        },
        {
          "id" : 2,
          "instructions" : [
            {"id": "2.0"},
            {"id": "2.1"},
            {"id": "2.2"},
            {"id": "2.3"}
          ]
        },
        {
          "id" : 3,
          "instructions" : [
            {"id": "3.0", "type": "bo"},
            {"id": "3.1", "type": "bo"},
            {"id": "3.2", "type": "bo"},
            {"id": "3.3", "type": "bo"}
          ]
        }
      ],
      "connections" : [
        {
          "from" : {
            "id" : 1
          }, 
          "to" : {
            "id" : 2
          }
        },
        {
          "from" : {
            "id" : 2
          }, 
          "to" : {
            "id" : 3
          }
        }
      ]
    }
  ]
}

Connections are defined "from" one row "to" another. The ids identify the rows. The optional attribute "start" defines the mesh where the connection starts. If "start" is not defined, the first mesh of the row is assumed. When indexing the list of meshes in a row, the first index is 1. The optional attribute "meshes" describes how many meshes will be connected, starting from the mesh defined in "start".
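As an illustration, a connection using the optional attributes could look like this (a sketch; the ids refer to rows as in the pattern above):

{
  "from" : {
    "id" : 1,
    "start" : 2,
    "meshes" : 2
  },
  "to" : {
    "id" : 2
  }
}

This would connect two meshes of row 1, beginning at its second mesh, to row 2.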

The resulting parsed Python object structure looks like this:

[Image: The Python object structure for working with the parsed knitting pattern.]

Each row has a list of instructions. Each instruction produces a number of meshes and consumes a number of meshes. These meshes are also the meshes that are consumed/produced by the rows.
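To make the produce/consume idea concrete, here is a rough sketch of how such a structure could be traversed; the attribute names are illustrative assumptions, not the actual API:

def print_mesh_counts(pattern):
    # walk every row and tally the meshes its instructions
    # produce and consume (attribute names are assumptions)
    for row in pattern.rows:
        produced = sum(ins.number_of_produced_meshes for ins in row.instructions)
        consumed = sum(ins.number_of_consumed_meshes for ins in row.instructions)
        print(row.id, produced, consumed)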

 


Improvements in sTeam shell and export script

sTeam (societyserver) aims to be a platform for developing collaborative applications.
sTeam server project repository: sTeam.

Logs

A breakthrough in sTeam development took place: Siddhant Gupta was able to successfully implement the TLS protocol in Pike.

The logs were displayed erroneously. Also, the user was not able to scroll down the log to the latest message. The golden-ratio script used in the program was updated to its latest version.

Also, the editor is opened using the sudo command so as to access the vim scripts in the /usr/local/lib/steam/tools directory.
The files are opened using the vim command:
vim -S script -c edit filename1|sp filename2

Only one buffer is accessible at a time. In order to switch to the log buffer, the command is CTRL+W w. Enter this command directly, without entering vim's command line using :.
The log buffer is then accessible; it can be edited and scrolled down to the latest log message.

Issues fixed (GitHub issue → PR):
  - Access the log window till the end: Issue-20, PR-48
  - Open the appropriate log window when a sTeam command is executed: Issue-49, PR-51


The logs were displayed erroneously. Ideally, the log should be displayed based on the buffer where the sTeam function is called; accordingly, the relevant log buffer should be opened to display the output. This error was fixed.


The log is displayed in a file named after the file which is opened, concatenated with the suffix “-disp”. In Vi, the % register holds the name of the current file. Consequently, this value was concatenated with “-disp” to open the corresponding display buffer. The Vi script can be seen below.
[Image: ViFunc – the Vi script]

Export-to-git script

The export-to-git script was tested rigorously. All the known issues with the script were replicated on the system and solutions were found for them.

The export-to-git script is now capable of exporting multiple sources at a time. The last argument is always the target repository; any number of preceding arguments can be sources.

Issues fixed (GitHub issue → PR):
  - Support multiple source arguments: Issue-14, Issue-19, PR-54
  - Include source name in branch name and add a branch description: Issue-9, PR-55

Example command:
./export-to-git.pike /home/sTeam/file1 /home/coder/file2 /home/sTeam/container3 ~/gitfolder

The export-to-git script also includes the source name in the branch name. To help distinguish between the branches, we need more descriptive names:

./export-to-git.pike /sources/ /tmp/export-test/

This should create the branch sources-cur_time. Also when a file is specifically exported:

./export-to-git.pike /home/coder/demo1.txt /tmp/export-test/

would create a branch named home/coder/demo1.txt-cur_time. This is done to avoid ambiguity between files with the same name existing in different locations.
A description is also added to the branch using the git command:
git config branch.<branch name>.description "describe branch"
To view this description, go to the folder where the branch is exported and enter the git command:
git config branch.<branch name>.description

[Image: Export-to-git script executing with multiple source arguments, and the modified branch name.]
[Image: Branch description.]

Check out the FOSSASIA Ideas page for more information on projects supported by FOSSASIA.


Export Timeline as iCal

iCal, or iCalendar, is the Internet standard for sharing event schedules or session timelines. The filename extension for iCal is .ics. It is supported by a number of applications such as Google Calendar, Apple Calendar, Yahoo! Calendar, the Lightning extension for Mozilla Thunderbird and SeaMonkey, and partially by Microsoft Outlook. In the Open Event Organizer's Server we have added a feature to export the schedule of a particular event in iCal format.
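As a rough sketch of how such an export can be built with the icalendar package (the session fields below are illustrative, not the server's actual schema):

from icalendar import Calendar, Event

def export_sessions_to_ical(sessions):
    # build an iCal document holding one VEVENT per session
    cal = Calendar()
    cal.add('prodid', '-//Open Event//Schedule//EN')
    cal.add('version', '2.0')
    for session in sessions:
        event = Event()
        event.add('summary', session['title'])
        event.add('dtstart', session['start_time'])  # datetime objects
        event.add('dtend', session['end_time'])
        cal.add_component(event)
    return cal.to_ical()  # bytes, ready to serve as a .ics file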


Building interactive elements with HTML and javascript: Drag and Drop

{ Repost from my personal blog @ https://blog.codezero.xyz/building-interactive-elements-with-html-and-javascript-drag-and-drop }

Traditionally, all interactions in a website have been mostly via form inputs or clicking on links/buttons. The introduction of native drag-and-drop as a part of the HTML5 spec opened developers to a new way of graphical input. So, how could this be implemented?

Making any HTML element draggable is as simple as adding draggable="true" as an attribute.

<div id="a-draggable-div" draggable=true>  
    <h4>Drag me</h4>
</div>  

This will allow the user to drag div#a-draggable-div. Next, we need to designate a dropzone, into which the user can drop the div.

<div id="dropzone" ondragover="onDragOver(event)">  
</div>  
function onDragOver(e) {  
    // This function is called everytime 
    // an element is dragged over div#dropzone
    var dropzone = ev.target;

}

Now, the user will be able to drag the element. But nothing will happen when the user drops it into the dropzone. We'll need to define and handle that event. HTML5 provides the ondrop attribute to bind to the drop event.

When the user drops the div into the drop zone, we'll have to move the div from its original position into the drop zone. This has to be done in the drop event handler.

<div id="dropzone"  
ondrop="onDrop(event)"  
ondragover="onDragOver(event)"> </div>  
function onDrop(e) {  
    e.preventDefault();
    var draggableDiv = document.getElementById("a-draggable-div");
    draggableDiv.setAttribute("draggable", "false");
    e.target.appendChild(draggableDiv);
}

So, when the user drops the div into the drop zone, we’re disabling the draggable property of the div and appending it into the drop zone.
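A slightly more flexible variant (a sketch using the standard dataTransfer API) passes the dragged element's id through the drag events instead of hardcoding it in the drop handler:

<div id="a-draggable-div" draggable="true" ondragstart="onDragStart(event)">
    <h4>Drag me</h4>
</div>

function onDragStart(e) {
    // remember which element is being dragged
    e.dataTransfer.setData("text/plain", e.target.id);
}

function onDrop(e) {
    e.preventDefault();
    // look up the dragged element by the id stored at dragstart
    var id = e.dataTransfer.getData("text/plain");
    e.target.appendChild(document.getElementById(id));
}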

These are still very basic drag and drop implementations. They get the job done. But HTML5 provides us with more events to make the user's experience even better.

Event – Description
drag – Fired when an element or text selection is being dragged.
dragend – Fired when a drag operation is being ended (for example, by releasing a mouse button or hitting the escape key).
dragenter – Fired when a dragged element or text selection enters a valid drop target.
dragexit – Fired when an element is no longer the drag operation's immediate selection target.
dragleave – Fired when a dragged element or text selection leaves a valid drop target.
dragover – Fired when an element or text selection is being dragged over a valid drop target.
dragstart – Fired when the user starts dragging an element or text selection.
drop – Fired when an element or text selection is dropped on a valid drop target.

With these events and a little bit of CSS magic, a more user-friendly experience can be created, like highlighting the drop zones when the user starts to drag an element, or changing the element's text based on its state.
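For instance, a sketch of how the drop zone could be highlighted while an element is dragged over it (the highlight class is illustrative):

var dropzone = document.getElementById("dropzone");

dropzone.addEventListener("dragenter", function (e) {
    // highlight the drop target while a drag is over it
    dropzone.classList.add("highlight");
});

dropzone.addEventListener("dragleave", function (e) {
    // remove the highlight when the drag leaves
    dropzone.classList.remove("highlight");
});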

Demo:

https://jsfiddle.net/niranjan94/tkbcv3md/16/embedded/


SASS for the theme based concept

Until yesterday, I was exploring all around the internet to find the best possible way to implement the theme concept in the web app. The theme concept allows an organizer to choose the theme for the web app from a set of provided themes.

After some time, I realised that this awesomeness can be added to the web app by using Syntactically Awesome Stylesheets (SASS).

How does SASS support the theme concept?

In the web app, a folder named _scss is added, which has the directory structure shown below.

[Image: _scss folder directory structure]

There is a file _config.scss inside the _base folder that contains the SASS variables which are used in all the files that import it.

Each of the SASS variables has the !default flag at the end, which means it can be overwritten in any other file. This property of SASS enables the theme concept.

// _config.scss

@charset "UTF-8";
 
// Colors
$black: #000;
$white: #fff;
$red: #e2061c;
$gray-light: #c9c8c8;
$gray: #838282;
$gray-dark: #333333;
$blue: #253652;
 
// Corp-Colors
$corp-color: $white !default;
$corp-color-dark: darken($corp-color, 15%) !default;
$corp-color-second: $red !default;
$corp-color-second-dark: darken($corp-color-second, 15%) !default;
 
// Fontbasic
$base-font-size: 1.8 !default;
$base-font-family: Helvetica, Arial, Sans-Serif !default;
$base-font-color: $gray-dark !default;
 
// Border
$base-border-radius: 2px !default;
$rounded-border-radius: 50% !default;
$base-border-color: $gray !default;

The main file that includes all the required styles is application.scss. Here, $corp-color takes its default value from _config.scss. It can be overwritten by the themes.

//application.scss

@charset 'UTF-8';

// 1.Base
@import '_base/_config.scss';

/* Offsets for fixed navbar */
body {
 margin-top: 50px;
 background-color:$corp-color !important;
}

Making a light theme

The light theme will overwrite the $corp-color value with $gray-light, which is defined in _config.scss. This will change the background-color defined in application.scss to #c9c8c8. In this way a light theme is generated. A similar approach can be followed to generate the dark theme.

//_light.scss

@charset 'UTF-8';
@import '../../_base/config';
// 1.Overwrite stuff
$corp-color: $gray-light;

@import '../../application';

Making a dark theme

//_dark.scss

@charset 'UTF-8';
@import '../../_base/config';
// 1.Overwrite stuff
$corp-color: $gray;

@import '../../application';

How to compile the CSS from SASS files?

We can easily compile these SASS files using the command:

    sass /path/application.scss /path/schedule.css

For generating the light theme and the dark theme:

    sass /path/_light.scss /path/schedule.css
    sass /path/_dark.scss /path/schedule.css

Optimization of Code

SASS is very powerful for optimizing code. It provides concepts such as nesting and mixins which reduce the lines of code. I have rewritten schedule.css into application.scss to make it more optimized.

/* Adjustments for smaller screens */

@media (min-width: 450px) {
  body {
    font-size: 14px;
  }
  .session {
    &-list {
      .label {
        font-size: 90%;
        padding: .2em .6em .3em;
      }
    }
    &-title {
      float: left;
      width: 75%;
    }
    &-location {
      float: right;
      width: 25%;
      text-align: right;
    }
  }
}
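For reference, the nested block above compiles to plain CSS roughly like this; the & placeholder expands into the parent selector, producing the .session-* class names:

@media (min-width: 450px) {
  body {
    font-size: 14px;
  }
  .session-list .label {
    font-size: 90%;
    padding: .2em .6em .3em;
  }
  .session-title {
    float: left;
    width: 75%;
  }
  .session-location {
    float: right;
    width: 25%;
    text-align: right;
  }
}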

Further Exploration

This is one of the ways to deal with the SASS structure, but there are other resources that can be helpful in dealing with SASS projects.

Also, check out Jeremy Hexon's article here.


steam-shell: Two processes in one terminal

The community bonding period turned out to be quite fruitful: I got to know my community really well, and I also solved quite a number of issues, which helped me understand the code base. Daily scrum meetings played a very important role in making us work professionally and cover some substantial work. The official coding period began on 23rd May, and I was all set for the challenges and the sleepless nights to come. Here I will be discussing the tasks I covered in my first week.

As suggested by my mentors, I changed my plans a bit by moving the work on the edit command before implementing the TLS layer on COAL. I started small by fixing the edit command, which opens the specified file in vi/vim/emacs. In vi and vim the editor was misbehaving and not letting us work on the file. I took this up as my first task for Google Summer of Code 2016. It helped me understand steam-shell and applauncher, which is used to load the editor, in detail. Vi and vim have the advantage of letting the user edit the file in the same terminal window.

Looking at the issue itself, it was not possible to do any kind of backtracking. The vi editor was just throwing rubbish on the screen when the user attempted to type anything.

[Image: vi editor showing the garbage and the steam-shell commands]

At first I was under the impression that it was a problem with the editor itself. I even tried approaching vi.stackexchange.com, where the vi developers could help me. However, all this was in vain. After a lot more forensics and re-reading the code multiple times, I realized two processes were active and sharing the same terminal space. How did I come to this conclusion? Well, it was a very minute detail that I noticed. While in the vi editor window, with the document open and the editor throwing garbage at you, pressing the up arrow makes the editor clear some area and show the commands executed on the steam-shell. This can be seen in the image above.

This simply meant that both the steam-shell process and the vi editor were running and sharing the same terminal space. The solution was quite simple: just call editor->wait() to suspend the calling process till the called process is over.


Swagger

Swagger is a specification for describing REST APIs. The main aim of Swagger is to provide a REST API definition format that is readable by both machines and humans.

You can think of two entities in a REST API: one, the provider of the API, and another, the client using the API. Swagger essentially covers the gap between them by providing a format that is easy for the client to use and easy for the provider to define.

If not using Swagger, one would most certainly create the API first and write the (human-readable) documentation with it. The client developer would read the documentation and use the APIs as required. With Swagger, the specification can be considered the documentation itself, helping both the client developer and the provider.

Here’s an example spec I wrote for our GET APIs at Organizer Server: https://gist.github.com/shivamMg/dacada0b45585bcd9cd0fbe4a722eddf

The format, although readable, doesn't really look like what a client developer would be asking for.

Remember that the format is machine readable? What does it mean exactly?

Since the Swagger spec is a defined format, the provider can document it and people can write programs that understand the Swagger specification. Swagger itself comes with a set of tools (http://swagger.io/tools/) that use Swagger definitions created by the API provider to create SDKs for the clients to use.
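To give an idea of the format, here is a minimal sketch of a Swagger 2.0 definition for a single endpoint (the title, path and descriptions are illustrative):

{
  "swagger": "2.0",
  "info": {
    "title": "Example API",
    "version": "2.0"
  },
  "paths": {
    "/events": {
      "get": {
        "summary": "List all events",
        "responses": {
          "200": {
            "description": "An array of events"
          }
        }
      }
    }
  }
}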

Swagger-UI

One of our most used tools at our server is the Swagger UI (http://swagger.io/swagger-ui/).

It reads an API spec written in Swagger to generate a corresponding UI that people can use to explore the APIs. Every API endpoint can have responses and parameters associated with it, for instance our Event endpoint at the Server (“/events/:event_id”).

You can see how the UI displays the Model Schema, required parameters and possible response messages.

[Screenshot: Swagger UI for the Event endpoint]

Apart from documentation, Swagger UI provides the “Try it out!” tool that lets you make requests to the server for the corresponding API. This feature is incredibly useful for POST requests: no need for long curl commands in the terminal.

Here’s the Swagger config from our demo application: https://open-event.herokuapp.com/api/v2/swagger.json

The Swagger UI for this config can be found at https://open-event.herokuapp.com/api/v2

The UI for the example spec (gist) I linked before can be browsed here.

Swagger-js

https://github.com/swagger-api/swagger-js

Swagger-js is a JS library that reads an API spec written in Swagger and provides an interface for the client developer to interact with the API. We will be using Swagger-js for the Open Event Webapp.

Here’s an example to show you how it works. Let’s say the following endpoint returns (json) a list of events.

/events

The client developer can create a GET request and render the list with HTML.

$.getJSON("http://example.com/api/v2/events", function(data) {

  var events = "";
  $.each(data, function(i, event) {
    events += "<li id='event_" + i + "'>" + event.name + "</li>";
  });

  $("ul#events").html(events);
});

What if the provider moves the endpoint to “/event/all”? The client developer would need to change every instance of the URL to http://example.com/event/all.

Let’s now take the case of Swagger-js. With Swagger-js the client developer would essentially be writing programs to interact with the API Swagger spec defined by the provider, instead of directly consuming the API.

window.client = new SwaggerClient({
  url: "http://example.com/api/v2/swagger.json",
  success: function() {
    client.event.getEvents({
      responseContentType: 'application/json'
    }, function(data) {

      /* Create `events` string same as before */

      $("ul#events").html(events);
    });
  }
});

 

The APIs at the Organizer server are getting more complex. Swagger helps in keeping the spec well defined.

That’s all for now. Hope you enjoyed the read.
