Using Cron Scheduling to automatically run background jobs

Cron scheduling is nothing new. It is a way of telling the system to run a piece of code automatically at fixed times or intervals, every few seconds, minutes, or hours. Many large applications need this kind of automation, and I had to use it for two purposes in our Open Event application.

  • To automatically delete the items in the trash after a period of 30 days.
  • To automatically send after-event mails to the speakers and organizers when an event gets completed.

1. Delete items in Trash system

Deleted items get stored in the admin's trash. However, if no further action is performed on them, they should be removed permanently once they have been in the trash for more than 30 days. I have used the framework APScheduler (Advanced Python Scheduler) for this. The code looks like this:

from datetime import datetime, timedelta

from apscheduler.schedulers.background import BackgroundScheduler
from pytz import utc

# app, Event, User, Session, DataManager, delete_from_db and transaction_class
# come from the application code.

def empty_trash():
    with app.app_context():
        events = Event.query.filter_by(in_trash=True)
        users = User.query.filter_by(in_trash=True)
        sessions = Session.query.filter_by(in_trash=True)
        for event in events:
            if datetime.now() - event.trash_date >= timedelta(days=30):
                DataManager.delete_event(event.id)

        for user in users:
            if datetime.now() - user.trash_date >= timedelta(days=30):
                transaction = transaction_class(Event)
                transaction.query.filter_by(user_id=user.id).delete()
                delete_from_db(user, "User deleted permanently")

        for session in sessions:
            if datetime.now() - session.trash_date >= timedelta(days=30):
                delete_from_db(session, "Session deleted permanently")


trash_sched = BackgroundScheduler(timezone=utc)
trash_sched.add_job(empty_trash, 'cron', day_of_week='mon-fri', hour=5,
                    minute=30)
trash_sched.start()

First, trash_sched is initialized as a BackgroundScheduler(), which means it keeps running in the background for as long as the application server is running.

The add_job line defines the trigger given to the scheduler, i.e. at what times we want the scheduler to execute the function.

trash_sched.add_job(empty_trash, 'cron', day_of_week='mon-fri', hour=5, 
                    minute=30)

Here the scheduler adds the function to the job list. The function is empty_trash and the trigger given here is ‘cron’. The following line:

day_of_week='mon-fri', hour=5, minute=30

sets the time at which the scheduler executes the job. The above line means that the job will be executed every day from Monday to Friday at 5:30 AM (UTC, since the scheduler was created with timezone=utc).
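APScheduler supports other trigger settings as well. As a quick illustration (a sketch, not code from the project), the same job could be scheduled with different cron fields or with an interval trigger:

# sketch of alternative schedules with APScheduler (not from the project code)
trash_sched.add_job(empty_trash, 'cron', hour=0, minute=0)   # every day at midnight
trash_sched.add_job(empty_trash, 'cron', day=1, hour=5)      # the 1st of every month at 5:00 AM
trash_sched.add_job(empty_trash, 'interval', hours=6)        # every 6 hours, regardless of clock time

Now coming to the function which is to be executed: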

def empty_trash():
    with app.app_context():
        events = Event.query.filter_by(in_trash=True)
        users = User.query.filter_by(in_trash=True)
        sessions = Session.query.filter_by(in_trash=True)
        for event in events:
            if datetime.now() - event.trash_date >= timedelta(days=30):
                DataManager.delete_event(event.id)

        for user in users:
            if datetime.now() - user.trash_date >= timedelta(days=30):
                transaction = transaction_class(Event)
                transaction.query.filter_by(user_id=user.id).delete()
                delete_from_db(user, "User deleted permanently")

        for session in sessions:
            if datetime.now() - session.trash_date >= timedelta(days=30):
                delete_from_db(session, "Session deleted permanently")

There are three kinds of items in the trash: events, users and sessions. We fetch the trashed items of each kind with a filter_by(in_trash=True) query. Each model (Event, User and Session) has a column

trash_date = db.Column(db.DateTime)

The date on which the item is moved to the trash is stored in the trash_date column. And the following line:

 if datetime.now() - event.trash_date >= timedelta(days=30)

checks if the item has been in the trash for 30 days or more. If so, it is deleted automatically. The function is executed periodically by APScheduler according to the schedule we configured, so we never need to empty the trash manually.
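For the 30-day check to work, trash_date has to be set at the moment an item is moved to the trash. A hypothetical sketch of such a helper (the names are illustrative, not the actual Open Event code) could look like this:

from datetime import datetime

def move_event_to_trash(event_id):
    # illustrative helper: mark the event as trashed and remember when
    event = Event.query.get(event_id)
    event.in_trash = True
    event.trash_date = datetime.now()   # timestamp used by the 30-day check
    db.session.commit()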

2. Send After Event Mails

This is similar to the trash emptying function.

from apscheduler.schedulers.background import BackgroundScheduler

# app, Event, DataGetter and send_after_event come from the application code.

def send_after_event_mail():
    with app.app_context():
        events = Event.query.all()
        for event in events:
            upcoming_events = DataGetter.get_upcoming_events(event.id)
            organizers = DataGetter.get_user_event_roles_by_role_name(event.id, 'organizer')
            speakers = DataGetter.get_user_event_roles_by_role_name(event.id, 'speaker')
            if datetime.now() > event.end_time:
                for speaker in speakers:
                    send_after_event(speaker.user.email, event.id, upcoming_events)
                for organizer in organizers:
                    send_after_event(organizer.user.email, event.id, upcoming_events)


sched = BackgroundScheduler(timezone=utc)
sched.add_job(send_after_event_mail, 'cron', day_of_week='mon-fri', hour=5, minute=30)
sched.start()

The scheduler settings are the same as for the trash scheduler. The function fetches all the events and checks whether each event's end_time has already passed. This check against the current time is performed by the following line:

if datetime.now() > event.end_time

If it has, the speakers and organizers for that event are fetched and after-event mails are sent to them automatically.

In this way the whole system is automated. 🙂


Unit Testing

There are many stories about unit testing. Developers sometimes say that they don't write tests because they write good quality code. Does that make sense, when no one is infallible?

At university only a few teachers talk about unit testing, and they only show basic examples. They require a few tests to be written for the final project, but nobody really teaches us the importance of unit testing.

I have also always wondered what benefits it can bring. As time is a really important factor in our work, it often happens that we simply give up this part of the development process to get "more time", rather than spend it on writing "stupid" tests. But now I know that it is a vicious circle.

Customers' requirements do not help us either. They put high pressure on us to show visible results, not a few statistics about coverage status; none of them cares about some strange numbers. So, as I mentioned above, we usually focus on building new features and get rid of tests. It may seem to save time, but it doesn't.

In reality tests save us a lot of time, because we can identify and fix bugs very quickly. If a bug occurs because of someone's change, we don't have to spend long hours trying to figure out what is going on. That's why we need tests.

This is especially visible in huge open source projects. The FOSSASIA organization has about 200 contributors. In the Open Event project we have about 20 active developers, who generate many lines of code every single day. Many of those lines change over and over again and interfere with each other.

Let me give you a simple example. In our team we have about 7 pull requests per day. As I mentioned above, we want to keep our code high quality and free of bugs, but without tests identifying whether a pull request introduces a bug is a very difficult task. Fortunately, Travis CI does this boring job for us. It is a great tool which runs our tests on every PR to check whether bugs occur. It helps us notice bugs quickly and maintain our project very well.

What is unit testing?

Unit testing is a software development method in which the smallest testable parts of an application are tested.

Why do we need writing unit tests?

Let me list the arguments why unit testing is really important while developing a project.

  • To prove that our code works properly

If a developer adds another condition, a test checks whether the method still returns correct results. You simply don't need to wonder whether something is wrong with your code.

  • To reduce amount of bugs

Tests let you know what input parameters a function should get and what results it should return. You simply don't write unused code.

  • To save development time

Developers don't waste time checking after every change whether the code still works correctly.

  • Unit tests help to understand software design
  • To provide quick feedback about method which you are testing
  • To help document a code

How to write unit test in Python

In my work I write tests in Python. I am going to share my approach with you now; a minimal example of these steps is shown right after the list.

  • Import module unittest
  • Choose function to test
  • Write unit test
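Here is a minimal, self-contained sketch of those three steps. The function and the numbers are made up purely for illustration; it is not code from the Open Event project.

import unittest

def add(a, b):
    # the function under test, chosen only for the example
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == '__main__':
    unittest.main()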

Example OpenEvent test in Python

class TestPagesUrls(OpenEventTestCase):

    def setUp(self):
        self.app = Setup.create_app()

    def test_if_urls_exist(self):
        """Test all urls via GET method"""
        with app.test_request_context():
            for rule in app.url_map.iter_rules():
                if excluded_paths(rule):
                    status_code = self.app.get(request.url[:-1] + str(rule).replace('//', '/'),
                                               follow_redirects=True).status_code
                    self.assertTrue(status_code in [200, 302, 401])

 

I wanted to check that all views exist, but writing one test per view would take a lot of time. That's why I wondered how to avoid writing many similar tests. Finally, based on our list of routes, I was able to write one test which checks the status code of every page.

If any of the responses returns a status code other than 200, 302 or 401, the test fails. Such a result means that something is wrong. Simple, isn't it? Try to test this manually... this one short test covers about 40 use cases.

This example shows the incredible value of unit tests! If a developer introduces a bug in a response, he receives an error saying that something is wrong with a view. Travis CI allows us to reject all faulty pull requests and merge only those which fulfil our quality requirements.

Fixing an error is one part, but finding the bug is an even harder task. The ability to detect a bug at an early stage of development reduces the cost of the software.

 


sTeam API Endpoint Testing

(ˢᵒᶜⁱᵉᵗʸserver) aims to be a platform for developing collaborative applications.
sTeam server project repository: sTeam.
sTeam-REST API repository: sTeam-REST

sTeam API Endpoint Testing using Frisby

sTeam API endpoint testing is done using Frisby.  Frisby is a REST API testing framework built on node.js and Jasmine that makes testing API endpoints very easy, speedy and joyous.

Issue                              GitHub Issue   GitHub PR
sTeam-REST Frisby Test for login   Issue-38       PR-40
sTeam-REST Frisby Tests            Issue-41       PR-42

Write Tests

Frisby tests start with frisby.create with a description of the test followed by one of get, post, put, delete, or head, and ending with toss to generate the resulting jasmine spec test. Frisby has many built-in test helpers like expectStatus to easily test HTTP status codes, expectJSON to test expected JSON keys/values, and expectJSONTypes to test JSON value types, among many others.

// Registration Tests
var frisby = require('frisby');

frisby.create('Testing Registration API calls')
  .post('http://steam.realss.com/scripts/rest.pike?request=register', {
    email: "ajinkya007.in@gmail.com",
    fullname: "Ajinkya Wavare",
    group: "realss",
    password: "ajinkya",
    userid: "aj007"
  }, {json: true})
  .expectStatus(200)
  .expectJSON({
    "request-method": "POST",
    "request": "register",
    "me": restTest.testMe,
    "__version": testRegistrationVersion,
    "__date": testRegistrationDate
  })
  .toss();
testMe, testRegistrationVersion and testRegistrationDate are functions defined in rest_spec.js.

The frisby API endpoint tests have been written for testing the user login, accessing the user home directory, user workarea, user container, user document, user created image,  groups and subgroups.

The REST API URLs used for testing are described below. The payload consists of the user id and password.

Check if the user can login.

http://steam.realss.com/scripts/rest.pike?request=aj007
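As a concrete illustration, a login test against this URL might look roughly like the sketch below. The payload field names and the expected JSON keys are assumptions modelled on the registration test above, not the actual rest_spec.js code.

// hypothetical login test; field names follow the registration example above
var frisby = require('frisby');

frisby.create('Testing Login API call')
  .post('http://steam.realss.com/scripts/rest.pike?request=aj007', {
    userid: "aj007",
    password: "ajinkya"
  }, {json: true})
  .expectStatus(200)
  .expectJSON({
    "request-method": "POST"
  })
  .toss();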

Test whether a user workarea exists or not. Here aj workarea has been created by the user.

http://steam.realss.com/scripts/rest.pike?request=aj007/aj

Test whether a user created container exists or not.

http://steam.realss.com/scripts/rest.pike?request=aj007/container

Test whether a user created document exists or not.

http://steam.realss.com/scripts/rest.pike?request=aj007/abc.pike

Test whether a user created image(object of any mime-type) inside a container exists or not.

http://steam.realss.com/scripts/rest.pike?request=aj007/container/Image.jpeg

Test whether a group exists or not. The group name and the subgroups can be queried,
e.g. GroupName: groups, Subgroup: test.
The subgroup should be appended to the group name using ".".

http://steam.realss.com/scripts/rest.pike?request=groups.test

Here “groups” is a Groupname and “gsoc” is a subgroup of it.

http://ngtg.techgrind.asia/scripts/rest.pike?request=groups.gsoc


Unit Testing the sTeam REST API

The unit testing of the sTeam REST API is done using the karma and the jasmine test runner. The karma and the jasmine test runner are set up in the project repository.

The karma test runner : The main goal for Karma is to bring a productive testing environment to developers. The environment being one where they don’t have to set up loads of configurations, but rather a place where developers can just write the code and get instant feedback from their tests. Because getting quick feedback is what makes you productive and creative.

The jasmine test runner: Jasmine is a behavior-driven development framework for testing JavaScript code. It does not depend on any other JavaScript frameworks. It does not require a DOM. And it has a clean, obvious syntax so that you can easily write tests.
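To give a feel for that syntax, here is a minimal Jasmine spec. It is only a generic illustration of describe/it/expect, not one of the project's actual specs.

// minimal Jasmine spec, purely illustrative
describe('a sanity check', function() {
  it('adds numbers correctly', function() {
    var sum = 1 + 2;
    expect(sum).toBe(3);
  });
});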

The Karma and Jasmine test runners were configured for the project and basic tests were run. The angular.js and angular-mocks versions in the local development repository were different, which introduced a new error into the project repo:

Angular Unit-Testing: TypeError ‘angular.element.cleanData is not a function’

This error occurs while running the tests when angular and angular-mocks are not of the same version. If the versions of the two JavaScript libraries don't match, the test run keeps failing with this error.

The jasmine test runner can be accessed from the browser. The karma tests can be performed from the command line. These shall be enhanced further during the course of work.

Feel free to explore the repository. Suggestions for improvements are welcomed.

Check out the FOSSASIA Ideas page for more information on projects supported by FOSSASIA.


Testing Hero

Our sTeam code base had no tests written, and therefore we were facing a lot of issues and were not able to merge our code easily. My next task dealt with this problem: I had to write test cases for the function calls to COAL commands. On some basic research I found out that the only testing framework available for Pike is the one used for testing the Pike interpreter itself. It consists of a set of scripts; mktestsuite is one of them and is responsible for generating the tests. The problem with this is that the tests must follow a particular syntax, and since the suite is meant to test the interpreter it assumes each line of Pike code is a test. This prevented us from writing multi-line tests and also from setting up the client.

Issue:

https://github.com/societyserver/sTeam/issues/104 (Establish testing framework),

https://github.com/societyserver/sTeam/issues/107 (Add tests for create)

My first approach to the problem was to try using the available scripts to write the tests. However, this didn't turn out very well: the tests written were very confusing and out of context, the lines written for setting up the client were also counted as tests, and there was no continuity, i.e. variables defined on one line were not accessible on the next. I realized that this was not going to work out and decided to write my own testing framework. I started by writing a simple testing structure.

The framework has a central script called test.pike, which is used to run all the test cases. It uses the scripts move.pike and create.pike, which contain the actual test cases. These scripts contain various functions, each of which is a test case that returns 1 on passing and 0 on failing. test.pike, the central node, is responsible for looping through these functions, calling each of them and recording the result. The framework then outputs the total number of cases passed or failed. Implementing this was simple, as we can import Pike scripts as objects and also extract the functions from them.

(Screenshot: running tests for move)
(Screenshot: running tests for create, an earlier version of the framework)

test.pike involves the initialization of various variables. It establishes a connection to the server and initializes _Server and me, which are then passed to all the test cases. move.pike has various test cases involving moving things around: moving a user into a room, into a container or to a non-existent location, and moving a room inside a container. I also got my first failing test case, which is moving a user into a non-existent location. This shouldn't be allowed, but it does not throw any error and incorrectly writes the trail.

create.pike contains cases creating various kinds of objects and also attempting to create objects of classes that do not exist. As I went along writing these cases I also kept improving the central test.pike. I added code that creates a special room for testing during initialization and clears this room before destroying the object and exiting.

Solution:

 https://github.com/societyserver/sTeam/pull/105 (Establish testing framework),

https://github.com/societyserver/sTeam/pull/108 (Add tests for create)


Getting fired up with Firebase Database

As you might’ve noticed, in my Open Event Android Project, we are asking the user to enter his/her details and then using these details at the backend for generating the app according to his/her needs.

One thing to wonder about is how we transmit the details from the webpage to the server.

Well, this is where Firebase comes to the rescue one more time!

If you’ve read my previous post on Firebase Storage, you might have started to appreciate what an awesometastic service Firebase is.

So without any further ado, let's get started with this.

Step 1 :

Add your project to Firebase from the console.

(Screenshot: click on the blue button to create a new project.)

Step 2 :

Add Firebase to your webapp

Open the project you've just created and click on the bright red button that says "Add Firebase to your web app".


Copy the contents from here and paste it after your HTML code.

Step 3 :

Next up, navigate to the Database section in your console and move to the Rules tab.


For now, let us edit the rules to allow anyone to read and write to the database.

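The screenshot in the original post shows the rules editor; the wide-open rules described above look like the snippet below. Note that this lets anyone read and write your data, so it is only suitable while testing.

{
  "rules": {
    ".read": true,
    ".write": true
  }
}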

Almost all set up now.

Step 4 :

Modify the HTML to allow entering data by the user

The form looks something like this:

<form name="htmlform" id="form" enctype="multipart/form-data">
<p align="center"><b><big>FOSSASIA's App Generator</big></b></p>
<table align="center"
width = "900px"
height="200px">
<tr>
<td valign="top">
<label for="Email">Email</label>
</td>
<td valign="top">
<input id="email" type="email" name="Email" size="30">
</td>
<td>
<td valign="top">
<label for="name">App's Name</label>
</td>&nbsp;
<td valign="top">
<input id="appName" type="text" name="App_Name" maxlength="50" size="30">
</td>&nbsp;
</tr>
<tr>
<td valign="top">
<label for="link">Api Link</label>
</td>
<td valign="top">
<input id="apiLink" type="url" name="Api_Link" maxlength="90" size="30">
</td>
</tr>
<tr>
<td valign="top">
<label for="sessions">Zip containing .json files</label>
</td>
<td valign="top">
<input accept=".zip" type="file" id="uploadZip" name="sessions">
</td>
</tr>
<tr>
<td colspan="5" style="text-align:center">
<button type="submit">Generate and Download app</button>
</td>
</tr>
</table>
</form>

Now let us set up our JavaScript to extract this data and store it in the Firebase database.

<script src="https://www.gstatic.com/firebasejs/live/3.0/firebase.js"></script>
<script src="https://code.jquery.com/jquery-1.10.2.js"></script>
<script src="https://code.jquery.com/ui/1.11.2/jquery-ui.js"></script>
<script>
var $ = jQuery;
var timestamp = Number(new Date()); //this will server as a unique ID for each user
var form = document.querySelector("form");
var config = {
apiKey: "API_KEY",
authDomain: "app-id.firebaseapp.com",
databaseURL: "https://app-id.firebaseio.com",
storageBucket: "app-id.appspot.com",
};
firebase.initializeApp(config);
var database = firebase.database();
form.addEventListener("submit", function(event) {
event.preventDefault();
var ary = $(form).serializeArray();
var obj = {};
for (var a = 0; a < ary.length; a++) obj[ary[a].name] = ary[a].value;
console.log("JSON",obj);
var file_data = $('#uploadZip').prop('files')[0];
var storageRef = firebase.storage().ref(timestamp.toString());
storageRef.put(file_data);
var form_data = new FormData();
form_data.append('file', file_data);
firebase.database().ref('users/' + timestamp).set(obj);
database.ref('users/' + timestamp).once('value').then(function(snapshot) {
console.log("Received value",snapshot.val());
)};
});
</script>

We are almost finished with uploading the data to the database.

Enter data inside the fields and press submit.

If everything went well, you will be able to see the newly entered data inside your database.


Now on to retrieving this data on the server.

Our backend runs on a python script, so we have a library known as python-firebase which helps us easily fetch the data stored in the Firebase database.

The code for it goes like this

import json

from firebase import firebase

firebase = firebase.FirebaseApplication('https://app-id.firebaseio.com', None)
# 'arg' holds the key of the record to read (the timestamp used when saving)
result = firebase.get('/users', str(arg))
jsonData = json.dumps(result)
email = json.dumps(result['Email'])
email = email.replace('"', '')
app_name = json.dumps(result['App_Name'])
app_name = app_name.replace('"', '')
print app_name
print email

The data will be returned in JSON format, so you can manipulate and store it as you wish.

Well, that’s it!

You now know how to store and retrieve data to and from Firebase.
It makes the work a lot simpler, as there is no database schema or tables that need to be defined; Firebase handles this on its own.

I hope that you found this tutorial helpful, and if you have any doubts regarding this feel free to comment down below, I would love to help you out.

Cheers.


Uploading json assets and icons to your app via the app generator

If you have tried out our app generator webpage, you should’ve noticed an option that allows you to upload a zip file which will contain the json for the event.

Why do we need this?

Well, this is needed because not every event organizer can maintain a server and API endpoints with the details of their event. Instead they can simply generate the json for the event by exporting it through the options provided on Google Spreadsheets, and then upload it on the server so that the files can be packaged into the Android and web apps.

Implementation :

The implementation is pretty straightforward and consists of 3 parts :

  1. Making changes to the html file to allow user to upload the zip
<tr>
  <td valign="top">
    <label for="sessions">Zip containing .json files</label>
  </td>
  <td valign="top">
    <input accept=".zip" type="file" id="uploadZip" name="sessions">
  </td>
</tr>

2. Retrieve this file in the javascript and then make an AJAX call to the server

var file_data = $('#uploadZip').prop('files')[0];
var form_data = new FormData();
form_data.append('file', file_data);
$.ajax({
    url: '/upload.php', // point to server-side PHP script
    cache: false,
    contentType: false,
    processData: false,
    data: form_data,
    type: 'post',
    success: function(php_script_response){
        // do something
    }
});

So here, the form_data contains the details about the file to be uploaded.

In the AJAX call, upload.php reads the form_data variable and then initiates the upload to the server.

3. Set up a PHP script on the server to respond to the above AJAX call

<?php
    if ( 0 < $_FILES['file']['error'] ) {
        echo 'Error: ' . $_FILES['file']['error'] . '<br>';
    }
    else {
        move_uploaded_file($_FILES['file']['tmp_name'], "/var/www/html/uploads/upload.zip"); 
    }
?>

Here in the PHP script, the uploaded file is read from the temporary directory the server placed it in.

We then manually copy it to a location and name of our choice.

In case multiple users access the website and upload their assets at the same time, we need to pass a timestamp variable along with the AJAX call and later use it while renaming the uploaded file.

This is to ensure that the file uploaded by one user is not overwritten by another user.
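One possible way to do this is sketched below: the client appends the timestamp to the FormData (form_data.append('timestamp', timestamp)) and the PHP script uses it in the target file name. This is an illustrative sketch, not the exact script used by the generator.

<?php
    // sketch: give every upload a unique name based on the client-supplied timestamp
    if ( 0 < $_FILES['file']['error'] ) {
        echo 'Error: ' . $_FILES['file']['error'] . '<br>';
    }
    else {
        $timestamp = basename($_POST['timestamp']); // basename() strips any path characters
        move_uploaded_file($_FILES['file']['tmp_name'],
                           "/var/www/html/uploads/upload_" . $timestamp . ".zip");
    }
?>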

How do we use this data during app compilation

The uploaded zip is then uncompressed and its contents are moved to the assets folder of the android app’s directory.

import os
import zipfile
from shutil import copyfile

zip_ref = zipfile.ZipFile(path_to_zip_file, 'r')
zip_ref.extractall(directory)
zip_ref.close()
#TODO: Change path here
for f in os.listdir(directory + "/zip"):
    if f.endswith('.json'):
        copyfile(f, directory + "open-event-android/android/app/src/main/assets/" + f)
    elif f.endswith('.png'):
        copyfile(f, directory + "open-event-android/android/app/src/main/res/drawable" + f)
        replace(directory + "/open-event-android/android/app/src/main/res/values/strings.xml",
                'mipmap/ic_launcher', 'drawable/' + f)

Here replace is a function that searches the source file for the phrase supplied as its argument and replaces it with the new phrase.

Now when the app is compiled and run on a user's device, it will first search for a json file in the assets directory and, if it exists, use that for fetching the data instead of making a network call. For this I have used Gson to parse the offline files; otherwise Retrofit makes a request to the API and fetches the data from there.

But if there is no json in the assets folder, a normal network call using retrofit will be made and the data will be fetched from the API defined by the user.


Adding Client Side validation to Login and Registration Forms

It's very important to have client-side validation in addition to server-side validation. Server-side validation only helps the developers, not the clients. Thus it was necessary to add client-side validation to the Login page and its other components so that users are comfortable using them.

I had never done this before and was looking at how to achieve it using jQuery or JavaScript when I came to know that we were already using an amazing validation tool: Bootstrap Validator.

It just involves wrapping the form in the html with the validator plugin and all the checks are automatically carried out by it.

 

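The original post shows the form as a screenshot. A minimal sketch of such a form, reconstructed from the attributes discussed below (field names are illustrative), would look like this:

<!-- reconstructed sketch of the Create New Password form, not the actual template -->
<form data-toggle="validator" role="form" method="post">
  <input type="password" class="form-control" id="new_password"
         name="new_password" placeholder="New password" required>
  <input type="password" class="form-control" id="confirm_password"
         data-match="#new_password" data-error="Passwords do not match"
         placeholder="Enter password again" required>
  <div class="help-block with-errors"></div>
  <button type="submit" class="btn btn-primary">Submit</button>
</form>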

As you can see, we have just added a data-toggle="validator" attribute to the form, which automatically wires the form up with the plugin. The form above is the Create New Password form, which checks whether the new password and the password entered again match. The line

data-match="#new_password"

checks it with the password in the new_password field and gives an error on the page to the client defined by the following,

data-error="Passwords do not match"

Thus adding such checks to the form becomes very simple rather than using jQuery for it. Similarly it was important to add validation to the Register page.

 <input type="email" name="email" class="form-control" id="email" 
 placeholder="Email" data-remote="{{ url_for('admin.check_duplicate_email') }}"
 data-remote-error="Email Address already exists" required="">

This is the validation for the email field. Apart from checking whether the text entered by the user is an email id or not it was also important to check whether the email entered by the user exists in the database or not.

data-remote="{{ url_for('admin.check_duplicate_email') }}"

This line calls a view function to check whether the email entered by the user is a duplicate or unique.

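The original post shows the view as a screenshot. Based on the description below, a rough sketch of such a view (assuming a Flask blueprint named admin; the names are illustrative, not the actual Open Event code) might look like this:

from flask import request

@admin.route('/check_duplicate_email')
def check_duplicate_email():
    # illustrative sketch: look for an existing user with this email
    email = request.args['email']
    user = User.query.filter_by(email=email).first()
    if user is None:
        return "200 OK"                     # field is valid, no duplicate found
    return "Email already exists", 404      # 404 triggers data-remote-error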

The function takes the email value from request.args and performs a simple check in the db for a duplicate email. If no such user exists, the validator receives a simple "200 OK". If the response status is 404, it shows the error defined by

data-remote-error="Email Address already exists"

However, this error message is only for the data-remote check. Any other validation error is handled by

class="help-block with-errors"

Thus, without the hassle of writing our own Ajax and jQuery, we can easily add validation to the forms using Bootstrap Validator.


How to parse json assets with gson

So most of us have json assets in our app which we parse at runtime to get the data and use it accordingly. What I have seen is that most people create a JSONObject or JSONArray after reading the json into an input stream, but handling it then becomes difficult, since we have to manually extract every entity in each array, which makes the code prone to errors. A better approach is to use Gson: an open source library by Google to serialise and deserialise Java objects to (and from) json. It's pretty easy to use and makes the development process easy. For those of you still not convinced about using Gson, I'd like to demonstrate both the code we had to write without Gson and the code using Gson.

So to start with, let's see the json file we'll be using. It's the events.json file from the Open Event project.

{
  "events": [
    {
      "color": "#fdfdfd",
      "email": "dev@fossasia.org",
      "end_time": "2015-07-14T00:00:00",
      "id": 4,
      "latitude": 37.783839,
      "location_name": "Moscone centre",
      "logo": "http://mysecureshell.readthedocs.org/en/latest/_images/logo_redhat.png",
      "longitude": -122.400546,
      "name": "FOSSASIA",
      "slogan": "Fossasia",
      "start_time": "2015-05-28T13:00:00",
      "url": "www.google.com"
    }
  ]
}

As you can see, it has an object containing an array of event objects. So first we'll get the whole json as a string by opening an input stream and directing it to a buffer, then we convert the buffer array to a String.

String json = null;
try {
    InputStream inputStream = getAssets().open("events.json");
    int size = inputStream.available();
    byte[] buffer = new byte[size];
    inputStream.read(buffer);
    inputStream.close();
    json = new String(buffer, "UTF-8");

} catch (IOException e) {
    e.printStackTrace();
}

Now that we have the json as a string, we can parse it using a combination of JSONObject and JSONArray. First we'll access the data in the outer json object, i.e. "events". That is done like this:

JSONObject jsonObject = new JSONObject(json);
JSONArray events = jsonObject.getJSONArray("events");

Now that we have the array, we can traverse it to get the objects inside the events array:

for (int j = 0; j < events.length(); j++){
    JSONObject event = events.getJSONObject(j);
    String color = event.getString("color");
    String email = event.getString("email");
    String endTime = event.getString("end_time");
    String id = event.getString("id");
    String latitude = event.getString("latitude");
    String locationName = event.getString("location_name");
    String logo = event.getString("logo");
    String longitude = event.getString("longitude");
    String name = event.getString("name");
    String slogan = event.getString("slogan");
    String startTime = event.getString("start_time");
    String url = event.getString("url");
}

This is how we go about it. Now for the exciting part.

We already have an Event data class which has the constructor, getters and setters etc.

public class Event {

    int id;

    String name;

    String email;

    String color;

    String logo;

    @SerializedName("start_time")
    String start;

    @SerializedName("end_time")
    String end;

    float latitude;

    float longitude;

    @SerializedName("location_name")
    String locationName;

    String url;

    String slogan;

    public Event(int id, String name, String email, String color, String logo, String start,
                 String end, float latitude, float longitude, String locationName, String url, String slogan) {
        this.id = id;
        this.name = name;
        this.email = email;
        this.color = color;
        this.logo = logo;
        this.start = start;
        this.end = end;
        this.latitude = latitude;
        this.longitude = longitude;
        this.locationName = locationName;
        this.url = url;
        this.slogan = slogan;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public String getColor() {
        return color;
    }

    public void setColor(String color) {
        this.color = color;
    }

    public String getUrl() {
        return url;
    }

    public void setUrl(String url) {
        this.url = url;
    }

    public String getSlogan() {
        return slogan;
    }

    public void setSlogan(String slogan) {
        this.slogan = slogan;
    }

    public int getId() {

        return id;

    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getLogo() {
        return logo;
    }

    public void setLogo(String logo) {
        this.logo = logo;
    }

    public String getStart() {
        return start;
    }

    public void setStart(String start) {
        this.start = start;
    }

    public String getEnd() {
        return end;
    }

    public void setEnd(String end) {
        this.end = end;
    }

    public float getLatitude() {
        return latitude;
    }

    public void setLatitude(float latitude) {
        this.latitude = latitude;
    }

    public float getLongitude() {
        return longitude;
    }

    public void setLongitude(float longitude) {
        this.longitude = longitude;
    }

    public String getLocationName() {
        return locationName;
    }

    public void setLocationName(String locationName) {
        this.locationName = locationName;
    }

}

Here we either name the fields exactly the same as the keys in the json, or annotate them with @SerializedName("key_name"). Then we get to the code for actually retrieving the data from the json file using this data class. We start by making a class that will hold the array of events for us:

public class EventResponseList {
    @SerializedName("events")
    public List<Event> event;
}

Now all we do is

Gson gson = new Gson();
EventResponseList eventResponseList = gson.fromJson(json, EventResponseList.class);

And eventResponseList.event now holds the list of events that were in the json array.
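For instance (a quick illustrative usage, not code from the app), we can now work with plain Event objects:

// iterate over the parsed events and print their names and locations
for (Event event : eventResponseList.event) {
    System.out.println(event.getName() + " at " + event.getLocationName());
}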

Voila! That’s it. It’s so easy to get a list of all the events from the JSONArray and since the library is available for gradle, it’s even better. You can just add

compile 'com.google.code.gson:gson:2.7'

to your build.gradle dependencies and you’re good to go. Cheers!


Push your apk to your GitHub repository from Travis

In this post I’ll guide you on how to directly upload the compiled .apk from travis to your GitHub repository.

Why do we need this?

Well, assume that you need to provide an app to your testers after each commit on the repository. Instead of manually copying and emailing them the app, we can set up Travis to upload the file to our repository, where the testers can fetch it from.

So, lets get to it!

Step 1 :

Link Travis to your GitHub Account.

Open up https://travis-ci.org.

Click on the green button in the top right corner that says “Sign in with GitHub”


Step 2 :

Add your existing repository to Travis

Click the “+” button next to your Travis Dashboard located on the left.


Choose the project that you want to set up Travis for from the next page.

(Screenshot: toggle the switch for the project that you want to integrate.)

Click the cog here and add an Environment Variable named GITHUB_API_KEY.
Proceed by adding your Personal Authentication Token there.
Read up here on how to get the Token.


Great, we are pretty much done here.

Let us move to the project repository that we just integrated and create a new file in the root of the repository by clicking on "Create new file" on the repo's page.

Name it .travis.yml and add the following commands over there

language: android
jdk:
  - oraclejdk8
android:
  components:
    - tools
    - build-tools-24.0.0
    - android-24
    - extra-android-support
    - extra-google-google_play_services
    - extra-android-m2repository
    - extra-google-m2repository
    - addon-google_apis-google-24
before_install:
  - chmod +x gradlew
  - export JAVA8_HOME=/usr/lib/jvm/java-8-oracle
  - export JAVA_HOME=$JAVA8_HOME
after_success:
  - chmod +x ./upload-apk.sh
  - ./upload-apk.sh
script:
  - ./gradlew build

Next, create a bash file in the root of your repository using the same method and name it upload-apk.sh

  #!/bin/bash
  #create a new directory that will contain our generated apk
  mkdir -p $HOME/android/
  #copy the generated apk from the build folder to the folder just created
  cp -R app/build/outputs/apk/app-debug.apk $HOME/android/
  #go to home and set up git
  cd $HOME
  git config --global user.email "useremail@domain.com"
  git config --global user.name "Your Name"
  #clone the master branch into a directory called master
  git clone --quiet --branch=master https://user-name:$GITHUB_API_KEY@github.com/user-name/repo-name master > /dev/null
  #go into the directory and copy the data we're interested in
  cd master
  cp -Rf $HOME/android/* .
  #add, commit and push the files
  git add -f .
  git remote rm origin
  git remote add origin https://user-name:$GITHUB_API_KEY@github.com/user-name/repo-name.git
  git add -f .
  git commit -m "Travis build $TRAVIS_BUILD_NUMBER pushed [skip ci]"
  git push -fq origin master > /dev/null
  echo -e "Done\n"

Once you have done this, commit and push these files; a Travis build will be initiated within a few seconds.
You can watch it in progress on your dashboard at https://travis-ci.org/.

After the build has completed, you will see an app-debug.apk in your repository.

IMPORTANT NOTE :

You might be wondering why I wrote [skip ci] in the commit message.

Well, the reason is that Travis starts a new build as soon as it detects a commit made on the master branch of your repository.

So once the apk is uploaded, that commit would trigger another build in Travis, and hence form an infinite loop.

We can prevent this in 2 ways :

First, simply write [skip ci] somewhere in the commit message and it will cause Travis to ignore the commit.

Or, push the apk to any other branch which is not configured for Travis build.

So well, that’s almost it.

I hope that you found this tutorial helpful, and if you have any doubts regarding this feel free to comment down below, I would love to help you out.

Cheers.


Adding extensive help for sTeam

This task was something I came up with as an enhancement, because of the problems I faced while using sTeam for the first time. During my first week of using sTeam I had a tough time getting used to the commands, and that is when I opened the issue to improve the help. The help for commands consisted of one-liners and was not very helpful, so I took up the task of improving it, so that new users don't have to face the difficulties that I faced.

Issue: https://github.com/societyserver/sTeam/issues/30

Solution: https://github.com/societyserver/sTeam/pull/95

Not a lot of technical detail was involved in this task, but it was time consuming. I wrote down a few lines explaining what each command does and also added a syntax line for each command. While doing this I realized more improvements that could be made and added them to my task list. My mentor had explained to me how rooms and gates are the same. I discovered that the gothrough command was violating this, as it allowed users to go through gates but not rooms. I discussed this on the IRC and we came up with the solution that the command should be renamed to enter and allowed to operate on both rooms and gates.

This enhancement became my next task and I worked on changing this command. The function gothrough was renamed to enter and the conditions required for it to work on rooms were added. This paved the way for my next task: the look command showed rooms and gates under different sections, and now that there is no difference between rooms and gates, I combined these two sections and changed the output of the look command.

(Screenshot: output of look before the changes)
(Screenshot: output of look after the changes)

 

Issue: https://github.com/societyserver/sTeam/issues/100

Solution: https://github.com/societyserver/sTeam/pull/101

By the end of the week I had started on my next task, which was a major one: writing a testing framework and test cases for COAL command calls. I will discuss this in more detail in my next blog post.
