Testing Presenter of MVP in Loklak Wok Android

Imagine joining a large codebase as a new developer. You are not sure whether the existing code works properly, and you are surrounded by questions like: are all these methods invoked correctly, and are they invoked the right number of times? Being new to a codebase and manually checking already written code is a pain. For cases like these, unit-tests are written. Unit-tests check whether the implemented code works as expected. This blog post explains the implementation of unit-tests for the Presenter of the Model-View-Presenter (MVP) architecture in Loklak Wok Android.

Adding Dependencies to project

In app/build.gradle file

defaultConfig {
   ...
   testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}

dependencies {
   ...
   androidTestCompile 'org.mockito:mockito-android:2.8.47'
   androidTestCompile 'com.android.support:support-annotations:25.3.1'
   androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.2'
}

Setup for Unit-Tests

The presenter needs a Realm database and an implementation of the LoklakAPI interface. Along with that, a mock of the View is required, so that we can check whether the methods of the View are called or not.

The LoklakAPI interface can be mocked easily using Mockito, but the Realm database can't be mocked. For this reason an in-memory Realm database is created, which is destroyed once all unit-tests have executed. As the presenter is required in each unit-test method, we instantiate the in-memory database before all the tests start, i.e. in a public static method annotated with @BeforeClass, e.g. the setDb method.

@BeforeClass
public static void setDb() {
   Realm.init(InstrumentationRegistry.getContext());
   RealmConfiguration testConfig = new RealmConfiguration.Builder()
           .inMemory()
           .name("test-db")
           .build();
   mDb = Realm.getInstance(testConfig);
}

 

NOTE: The in-memory database should be closed once all unit-tests have executed. So, for closing the database we create a public static method annotated with @AfterClass, e.g. the closeDb method.

@AfterClass
public static void closeDb() {
   mDb.close();
}

 

Now, before each unit-test is executed we need to do some setup work, like instantiating the presenter, creating a mock instance of the API interface using the mock static method, and pushing some sample data into the database. Our presenter uses RxJava and RxAndroid, which depend on the IO scheduler and the MainThread scheduler to perform tasks asynchronously, and these schedulers are not present in the testing environment. So, we override RxJava and RxAndroid to use the trampoline scheduler in place of IO and MainThread, so that our tests don't encounter a NullPointerException. All this is done in a public method annotated with @Before, e.g. setUp.

@Before
public void setUp() throws Exception {
   // mocking view and api
   mMockView = mock(SuggestContract.View.class);
   mApi = mock(LoklakAPI.class);

   mPresenter = new SuggestPresenter(mApi, mDb);
   mPresenter.attachView(mMockView);

   queries = getFakeQueries();
   // overriding rxjava and rxandroid
   RxJavaPlugins.setIoSchedulerHandler(scheduler -> Schedulers.trampoline());
   RxAndroidPlugins.setMainThreadSchedulerHandler(scheduler -> Schedulers.trampoline());

   mDb.beginTransaction();
   mDb.copyToRealm(queries);
   mDb.commitTransaction();
}

 

Some fake suggestion queries are created which will be returned as an Observable when the API interface is mocked. For this, two Query objects are simply created and added to a List after their query parameter is set. This is implemented in the getFakeQueries method.

private List<Query> getFakeQueries() {
   List<Query> queryList = new ArrayList<>();

   Query linux = new Query();
   linux.setQuery("linux");
   queryList.add(linux);

   Query india = new Query();
   india.setQuery("india");
   queryList.add(india);

   return queryList;
}

 

After that, a method is created which provides the fake data wrapped inside an Observable, as implemented in the getFakeSuggestions method.

private Observable<SuggestData> getFakeSuggestions() {
   SuggestData suggestData = new SuggestData();
   suggestData.setQueries(queries);
   return Observable.just(suggestData);
}

 

Lastly, the mocking part is implemented using Mockito. This is really simple: the when and thenReturn static methods of Mockito are used. The method which would provide the fake data is invoked inside when, and the fake data is passed as a parameter to thenReturn. For example, the stubSuggestionsFromApi method:

private void stubSuggestionsFromApi(Observable observable) {
   when(mApi.getSuggestions(anyString())).thenReturn(observable);
}

Finally, Unit-Tests

All the test methods must be annotated with @Test.

Firstly, we test for a successful API request, i.e. we get some suggestions from the Loklak server. For this, the getSuggestions method of LoklakAPI is mocked using the stubSuggestionsFromApi method, and the Observable to be returned is obtained using the getFakeSuggestions method. Then loadSuggestionsFromAPI, the method we want to test, is called. Once loadSuggestionsFromAPI has been invoked, we check whether the methods of the View are invoked inside it; this is done using the verify static method. The unit-test is implemented in the testLoadSuggestionsFromApi method.

@Test
public void testLoadSuggestionsFromApi() {
   stubSuggestionsFromApi(getFakeSuggestions());

   mPresenter.loadSuggestionsFromAPI("", true);

   verify(mMockView).showProgressBar(true);
   verify(mMockView).onSuggestionFetchSuccessful(queries);
   verify(mMockView).showProgressBar(false);
}

 

Similarly, a failed network request for obtaining suggestions is tested using the testLoadSuggestionsFromApiFail method. Here, we pass an IOException throwable – wrapped inside an Observable – as the parameter to stubSuggestionsFromApi.

@Test
public void testLoadSuggestionsFromApiFail() {
   Throwable throwable = new IOException();
   stubSuggestionsFromApi(Observable.error(throwable));

   mPresenter.loadSuggestionsFromAPI("", true);
   verify(mMockView).showProgressBar(true);
   verify(mMockView).showProgressBar(false);
   verify(mMockView).onSuggestionFetchError(throwable);
}

 

Lastly, we test whether our suggestions are saved in the database by counting the number of saved suggestions and asserting on that count, in the testSaveSuggestions method.

@Test
public void testSaveSuggestions() {
   mPresenter.saveSuggestions(queries);
   int count = mDb.where(Query.class).findAll().size();
  // queries is the List that contains the fake suggestions
   assertEquals(queries.size(), count);
}


Adding Unit Test For Local JSON Parsing in Open Event Android App

The Open Event project uses the JSON format for transferring event information like tracks, sessions, microlocations and others. An event exported in zip format from the Open Event server also contains the data in JSON format. The Open Event Android application uses this JSON data. Before we use this data in the app, we have to parse it to get Java objects that can be used for populating views. There is a chance that the models or the JSON format change in the future. It is necessary that the models are able to parse the JSON data and that a change in the model or JSON format doesn't break the parsing. In this post I explain how to unit test local JSON parsing so that we can ensure that the models are able to parse the local JSON sample data successfully.

Firstly we need to access assets from the main source set in the unit test. There is no way to access assets from the main source set directly; we first need to add the assets to the test/resources directory, because assets present there can be read using a ClassLoader in the unit test. But we can't just copy assets from the main source set to the resources directory: if the sample JSON changes, we would need to maintain both copies, which could make the sample inconsistent. We need to make the assets shared.

Add the following code in the app level build.gradle file.

android {
    ...
    sourceSets.test.resources.srcDirs += ["src/main/assets"]
}

It adds src/main/assets as a source directory for the test/resources directory, so after building the project the tests will have access to the assets.

Create readFile() method

Now create a method readFile(String name) which takes a filename as a parameter and returns data of the file as a string.

private String readFile(String name) throws IOException {
        String json = "";
        try {
            InputStream inputStream = this.getClass().getClassLoader().getResourceAsStream(name);
            int size = inputStream.available();
            byte[] buffer = new byte[size];
            inputStream.read(buffer);
            inputStream.close();
            json = new String(buffer, "UTF-8");
        } catch (IOException e) {
            e.printStackTrace();
        }
        return json;
}

Here the getResourceAsStream() method is used to open the file as an InputStream. Then we create a byte array with the same size as the InputStream data. Using the read method we store the data of the file in the byte array. After this we create a String object from the byte array.

Create ObjectMapper object

Create and initialize an ObjectMapper object in the Test class.

private ObjectMapper objectMapper;

    @Before
    public void setUp() {
        objectMapper = OpenEventApp.getObjectMapper();
        objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, true);
}

Here setting DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES to true is very important. It will fail the test for any unrecognized fields in the sample.

Create doModelDeserialization() method

In the Open Event Android App we are using Jackson for JSON serialization and deserialization. Converting JSON data to Java objects is called deserialization or parsing.

Now create doModelDeserialization() method which takes three parameters,

  • Class<T> type: Model Class type of the data
  • String name: Name of the JSON data file
  • boolean isList: true if the JSON string contains a list of objects, false otherwise

This method returns true if parsing is successful and false if there is any error in parsing.

private <T> boolean doModelDeserialization(Class<T> type, String name, boolean isList) throws IOException {
        if (isList) {
            List<T> items = objectMapper.readValue(readFile(name), objectMapper.getTypeFactory().constructCollectionType(List.class, type));
            if (items == null)
                return false;
        } else {
            T item = objectMapper.readValue(readFile(name), type);
            if (item == null)
                return false;
        }
        return true;
}

Here ObjectMapper does the main work of parsing the data and returns the parsed object via the readValue() method.

Add Test

Now that all the setup is done, we just need to assert the value returned by the doModelDeserialization() method, passing the appropriate parameters.

@Test
public void testLocalJsonDeserialization() throws IOException {
        assertTrue(doModelDeserialization(Event.class, "event", false));
        assertTrue(doModelDeserialization(Microlocation.class, "microlocations", true));
        assertTrue(doModelDeserialization(Sponsor.class, "sponsors", true));
        assertTrue(doModelDeserialization(Track.class, "tracks", true));
        assertTrue(doModelDeserialization(SessionType.class, "session_types", true));
        assertTrue(doModelDeserialization(Session.class, "sessions", true));
        assertTrue(doModelDeserialization(Speaker.class, "speakers", true));
}

Here only the event JSON file doesn't contain a list of objects, so we pass false as the isList parameter for it; for the others we pass true because their data contains a list of objects.

Conclusion:

Running unit tests after every build helps you to quickly catch and fix software regressions introduced by code changes to your app.


Best Practices when writing Tests for loklak Server

Why do we write unit-tests? We write them to ensure that a developer's implementation doesn't change the behaviour of parts of the project. If there is a change in behaviour, unit-tests throw errors. This keeps developers at ease during integration of the software and lowers the chances of unexpected bugs.

After setting up the tests in loklak server, we were able to check whether a test passed or failed, but the test failures didn't mention the error or the exact test case at which they failed. It was YoutubeScraperTest that brought some of the best practices into the project, and we modified the other tests accordingly.

The following are five best practices that we shall follow while writing unit tests:

  1. Assert the assertions

There are many assert methods which we can use, like assertNull, assertEquals etc. But we should use the one which describes the error best (i.e. is the most descriptive), so that the developer's effort while debugging is reduced.

Following these preferences between assertions helps in getting to the exact error when a test fails, thus making it easier to debug the code.

Some examples can be:-

  • Using assertThat() over assertTrue

assertThat() gives more descriptive errors than assertTrue(). For example:

When assertTrue() is used:

java.lang.AssertionError: Expected: is <true> but: was <false> at org.loklak.harvester.TwitterScraperTest.testSimpleSearch(TwitterScraperTest.java:142) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at org.hamcr.......... 

 

When assertThat() is used:

java.lang.AssertionError:
Expected: is <true>
     but: was <false>
at org.loklak.harvester.TwitterScraperTest.testSimpleSearch(TwitterScraperTest.java:142)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at org.hamcr...........

 

NOTE: In many cases assertThat() is preferred over the other assert methods, but in some cases other methods give a more descriptive output (like in the next example).

  • Using assertEquals() over assertThat()

For assertThat()

java.lang.AssertionError:
Expected: is "ar photo #test #car https://pic.twitter.com/vd1itvy8Mx"
     but: was "car photo #test #car https://pic.twitter.com/vd1itvy8Mx"
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Ass........

 

For assertEquals()

org.junit.ComparisonFailure: expected:<[c]ar photo #test #car ...> but was:<[]ar photo #test #car ...>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at org.loklak.harvester.Twitter.........

 

We can clearly see that the second example gives a better error description than the first one.

  2. One Test per Behaviour

Each test shall be independent of the others, with no mutual dependencies. It shall test only one specific behaviour of the module under test.

Have a look at this snippet. This test checks the method that creates the Twitter URL by comparing the URL returned by the method with the expected output URL.

@Test
public void testPrepareSearchURL() {
    String url;
    String[] query = {
        "fossasia", "from:loklak_test",
        "spacex since:2017-04-03 until:2017-04-05"
    };
    String[] filter = {"video", "image", "video,image", "abc,video"};
    String[] out_url = {
        "https://twitter.com/search?f=tweets&vertical=default&q=fossasia&src=typd",
        "https://twitter.com/search?f=tweets&vertical=default&q=from%3Aloklak_test&src=typd",
        // remaining expected urls omitted here for brevity
    };

    // checking simple urls
    for (int i = 0; i < query.length; i++) {
        url = TwitterScraper.prepareSearchURL(query[i], "");

        //compare urls with urls created
        assertThat(out_url[i], is(url));
    }
}

 

This unit-test checks whether the method under test is able to create the Twitter link according to the query.

  3. Selecting test cases for the test

We shall remember that testing is a very costly task in terms of processing and it takes time to execute. That is why we need to keep the test cases precise and limited. In loklak server, most of the tests depend on a connection to the respective websites, and this step is particularly costly. That is why, in the implementation, we must use the smallest number of test cases that still covers all possible corner cases.

  4. Test names

Descriptive test names that are short but hint at their task are very helpful. A comment describing what the test does is a plus. The following example is from YoutubeScraperTest; I added this point to my 'best practices queue' after reviewing the code (when this module was in the review process).

/**
 * When try parse video from input stream should check that video parsed.
 * @throws IOException if some problem with open stream for reading data.
 */
@Test
public void whenTryParseVideoFromInputStreamShouldCheckThatJSONObjectGood() throws IOException {
    //Some tests related to method
}

 

  5. And the last one: accessing methods

This point shall be kept in mind. In loklak server, there are some tests that use the Reflection API to access private and protected methods.

In general, changing access specifiers just for the sake of tests is not allowed, so we shall resolve this issue with the help of:

  •  Setters and getters (if available, use them; otherwise create them)
  •  Otherwise, use Reflection

If getter methods are not available, using the Reflection API is the last resort to access the private and protected members of the class. Hereunder is a simple example of how a private method can be accessed using Reflection:

import java.lang.reflect.Method;

void getPrivateMethod() throws Exception {
    A ret = new A();  // A is the class whose private method we want to call
    Class<?> clazz = ret.getClass();
    // look up the private method changeValue(int) and make it accessible
    Method method = clazz.getDeclaredMethod("changeValue", Integer.TYPE);
    method.setAccessible(true);
    // invoke it on the instance and print the result
    System.out.println(method.invoke(ret, 2));
    // pass null instead of ret if the method is static
}

 

I should end here. Try applying these practices, go through the links and get in sync with these 'best practices' 🙂


Event-driven programming in Flask with Blinker signals

Setting up blinker:

The Open Event Project offers event managers a platform to organize all kinds of events including concerts, conferences, summits and regular meetups. In the server part of the project, the issue at hand was to perform multiple tasks in background (we use celery for this) whenever some changes occurred within the event, or the speakers/sessions associated with the event.

The usual approach to this would be adding a function call after any relevant changes are made. But the statements making these changes were distributed all over the project in multiple places. It would be cumbersome to add 3-4 function calls (which are irrelevant to the function in which they are being executed) in so many places. Moreover, the code would get unstructured and it would be really hard to maintain over time.

That's when signals came to our rescue. From Flask 0.6, there is integrated support for signalling in Flask; refer to http://flask.pocoo.org/docs/latest/signals/ . The Blinker library is used here to implement signals. If you're coming from some other language, signals are analogous to events.

Given below is the code to create named signals in a custom namespace:


from blinker import Namespace

event_signals = Namespace()
speakers_modified = event_signals.signal('event_json_modified')

If you want to emit a signal, you can do so by calling the send() method:


speakers_modified.send(current_app._get_current_object(), event_id=event.id, speaker_id=speaker.id)

From the user guide itself:

“ Try to always pick a good sender. If you have a class that is emitting a signal, pass self as sender. If you are emitting a signal from a random function, you can pass current_app._get_current_object() as sender. “

To subscribe to a signal, blinker provides neat decorator based signal subscriptions.


@speakers_modified.connect
def name_of_signal_handler(app, **kwargs):
    # handle the signal here
    pass

 

Some Design Decisions:

When a signal is sent, it may carry lots of information which your handler may or may not want, e.g. when you have multiple subscribers listening to the same signal, some of the information sent with the signal may not be of use to your specific function. Thus we decided to enforce the pattern below to ensure flexibility throughout the project.


@speakers_modified.connect
def new_handler(app, **kwargs):
    # do whatever you want to do with kwargs['event_id']
    pass

In this case, the function new_handler needs to perform some task based solely on the event_id. If the function were of the form def new_handler(app, event_id), an error would be raised by the app as soon as the signal sent any additional argument. A big plus of this approach: if you later want to send some more info with the signal, for example a speaker_name, this pattern ensures that no error is raised by any of the subscribers that were defined before the change was made.
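
As an illustration, here is a minimal, self-contained sketch of this pattern (the handler names and printed messages are made up for the example and are not taken from the Open Event code):

from blinker import Namespace

event_signals = Namespace()
speakers_modified = event_signals.signal('event_json_modified')


@speakers_modified.connect
def update_event_json(app, **kwargs):
    # this subscriber only cares about event_id; extra kwargs are simply ignored
    print('rebuilding JSON for event', kwargs['event_id'])


@speakers_modified.connect
def log_speaker_change(app, **kwargs):
    # a later subscriber can pick up the newly added speaker_name
    print('speaker changed:', kwargs.get('speaker_name'))


# adding speaker_name to the send() call does not break update_event_json
speakers_modified.send(None, event_id=1, speaker_name='John Doe')

Because every subscriber accepts **kwargs, the send() call can grow over time without touching the existing handlers.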

When to use signals and when not ?

The call to send a signal will, of course, live inside some function itself. The signal and that function should be independent of each other. If the task done by any of the signal subscribers even remotely affects your current function, a signal shouldn't be used; use a function call instead.

How to turn off signals while testing?

When in testing mode, signals may slow down your tests, as signal subscribers which are completely independent from the function being tested will be executed numerous times. To turn off executing the signal subscribers, you have to make a small change to the send function of the blinker library.

Below is what we have done. The approach to turn it off may differ from project to project as the method of testing differs. Refer https://github.com/jek/blinker/blob/master/blinker/base.py#L241 for the original function.


from blinker import Namespace, Signal

def new_send(self, *sender, **kwargs):
    if len(sender) == 0:
        sender = None
    elif len(sender) > 1:
        raise TypeError('send() accepts only one positional argument, '
                        '%s given' % len(sender))
    else:
        sender = sender[0]
    # only this line was changed
    if not self.receivers or app.config['TESTING']:
        return []
    else:
        return [(receiver, receiver(sender, **kwargs))
                for receiver in self.receivers_for(sender)]
                
Signal.send = new_send

event_signals = Namespace()
# and so on ....

That’s all for now. Have some fun signaling 😉 .

 


Writing Simple Unit-Tests with JUnit

In the loklak server project, we use a number of automation tools like the build testing tool 'TravisCI', the automated code review tool 'Codacy', and 'Gemnasium'. We also use JUnit, a Java-based unit-testing framework, for writing automated unit-tests. It can be used to test methods and check their behaviour whenever there is any change in implementation. These unit-tests are handy and are coded specifically for the project; in the loklak server project they are used to test the web-scrapers. Generally JUnit is used to check that the behaviour of methods has not changed, but in this project it also helps to keep a check on whether the website code has been modified in a way that affects the data that is scraped.

Let's start with the basics: first setting up, then writing a simple unit-test and finally a test runner. We will refer to how unit tests have been implemented in loklak server to get familiar with the JUnit framework.

Setting Up

Setting up JUnit with Gradle is easy; you have to do just two things:

1) Add JUnit dependency in build.gradle

dependencies {
    . . .
    . . .<other compile groups>. . .
    compile group: 'com.twitter', name: 'jsr166e', version: '1.1.0'
    compile group: 'com.vividsolutions', name: 'jts', version: '1.13'
    compile group: 'junit', name: 'junit', version: '4.12'
    compile group: 'org.apache.logging.log4j', name: 'log4j-1.2-api', version: '2.6.2'
    compile group: 'org.apache.logging.log4j', name: 'log4j-api', version: '2.6.2'
    . . .
    . . .
}

 

2) Add a source directory for the 'test' task from which the tests are built (like here).

Save all tests in the test directory and keep its internal directory structure identical to the src directory structure. Now set the path in build.gradle so that the tests can be compiled.

sourceSets.test.java.srcDirs = ['test']

 

Writing Unit-Tests

In the JUnit framework a unit-test is a method that tests a particular behaviour of a section of code. Test methods are identified by the annotation @Test.

A unit-test invokes the methods of the source files to test their behaviour. This is done by fetching the output and comparing it with the expected output.

The following test checks whether the Twitter URL that is created (and later scraped) is valid.

/**
 * This unit-test tests twitter url creation
 */
@Test
public void testPrepareSearchURL() {
    String url;
    String[] query = {"fossasia", "from:loklak_test",
            "spacex since:2017-04-03 until:2017-04-05"};
    String[] filter = {"video", "image", "video,image", "abc,video"};
    String[] out_url = {
        "https://twitter.com/search?f=tweets&vertical=default&q=fossasia&src=typd",
        "https://twitter.com/search?f=tweets&vertical=default&q=from%3Aloklak_test&src=typd",
        "and other output url strings to be matched…....."
    };

    // checking simple urls
    for (int i = 0; i < query.length; i++) {
        url = TwitterScraper.prepareSearchURL(query[i], "");

        //compare urls with urls created
        assertThat(out_url[i], is(url));
    }

    // checking urls having filters
    for (int i = 0; i < filter.length; i++) {
        url = TwitterScraper.prepareSearchURL(query[0], filter[i]);

        //compare urls with urls created
        assertThat(out_url[i+3], is(url));
    }
}

 

Testing the implementation of the code is useless, as it will either make the code more difficult to change or make the tests useless. So be cautious while writing tests and keep the difference between implementation and behaviour in mind.

This is a good example of a simple unit-test. As we can see, there are some points worth observing:

1) There is an annotation @Test.

2) An input array query which is fed to the method TwitterScraper.prepareSearchURL().

3) An array of URLs, out_url[], which holds the expected output URLs.

4) assertThat() to compare the expected URL (in the array out_url[]) with the output URL (in the variable url).

NOTE: assertEquals() could also be used here, but we prefer to use the assert method that gives a readable error message (we will discuss this some other time).

And the TestRunner

When we are working on a project, it is not always feasible to run tests using Gradle, as the tests are first built (or at least verified to be build-ready) and then executed. gradle test shall be used only for building and testing the tests. For testing the project, one shall set up a TestRunner, which allows running the specific set of tests one wants to run.

TestRunners are built once using Gradle (together with the other tests) and can be run whenever you want. It is also easy to stack up the test classes you want to run in @Suite.SuiteClasses and use @RunWith to run them with the TestRunner.

In loklak server, the TestRunner runs the web-scraper tests. It is used by developers to test the changes they have made.

This is a sample TestRunner; the code is linked here.

package org.loklak;

// Library classes imported
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Source files to be tested
import org.loklak.harvester.TwitterScraperTest;
import org.loklak.harvester.YoutubeScraperTest;

/*
 * TestRunner for harvesters
 */
@RunWith(Suite.class)
@Suite.SuiteClasses({
    TwitterScraperTest.class,
    YoutubeScraperTest.class
})
public class TestRunner {
}

 

You can also add TestRunners for different sections of the project. Here, for example, it is initialized only to test the harvesters.

To run the TestRunner

Add the classpath of the project's jar file and the compiled test classes, and run JUnitCore with the TestRunner to get the output on the terminal.

java -classpath .:build/libs/<yourProject>.jar:build/classes/test org.junit.runner.JUnitCore org.loklak.TestRunner

In the project we have set up a shell script to run the tests.

A few points

1) Build the project and tests separately. Build tests only when changed as they take time to be built and executed.

2) Whenever you are done with the coding part, run the tests using TestRunner.

3) Write unit-tests whenever you add a new feature to the project to keep it up-to-date.

Now let's end here.

So for now: code it, test it and repeat.


Unit Testing

There are many stories about unit testing. Developers sometimes say that they don't write tests because they write good quality code. Does that make sense, if no one is infallible?

At university only a few teachers talk about unit testing, and they only show basic examples. They require us to write a few tests to finish the final project, but nobody really teaches us the importance of unit testing.

I have also always wondered what benefits it can bring. As time is a really important factor in our work, it often happens that we simply give up this part of the development process to get "more time", rather than spend time on writing "stupid" tests. But now I know that it is a vicious circle.

Customers' requirements do not help us. They put high pressure on seeing visible results, not a few statistics about coverage status; none of them cares about some strange numbers. So, as I mentioned above, we usually focus on building new features and get rid of tests. It may seem to save time, but it doesn't.

In reality, tests save us a lot of time because we can identify and fix bugs very quickly. If a bug occurs because of someone's change, we don't have to spend long hours trying to figure out what is going on. That's why we need tests.

This is especially visible in huge open source projects. The FOSSASIA organization has about 200 contributors. In the OpenEvent project we have about 20 active developers, who generate many lines of code every single day. Many of these lines change over and over again, and they also interfere with each other.

Let me provide you with a simple example. In our team we have about 7 pull requests per day. As I mentioned above, we want to make our code high quality and free of bugs, but without testing, identifying whether a pull request causes a bug is a very difficult task. Fortunately, Travis CI does this boring job for us. It is a great tool which takes our tests and runs them on every PR to check whether bugs occur. It helps us to quickly notice bugs and maintain our project very well.

What is unit testing?

Unit testing is a software development method in which the smallest testable parts of an application are tested.

Why do we need writing unit tests?

Let me point out the arguments why unit testing is really important while developing a project.

  • To prove that our code works properly

If a developer adds another condition, the test checks whether the method still returns correct results. You simply don't need to wonder if something is wrong with your code.

  • To reduce amount of bugs

It lets you know what input parameters a function should get and what results should be returned. You simply don't write unused code.

  • To save development time

Developers don't waste time checking, after every code change, whether the code still works correctly.

  • Unit tests help to understand software design
  • To provide quick feedback about method which you are testing
  • To help document a code

How to write unit test in Python

In my work I write tests in Python. I am going to share my sample code with you now: a minimal sketch of the steps below is followed by a real OpenEvent test.

  • Import module unittest
  • Choose function to test
  • Write unit test
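
A minimal sketch of these three steps could look like the following; the add() function is just a hypothetical function under test:

import unittest


def add(a, b):
    # hypothetical function under test
    return a + b


class AddTest(unittest.TestCase):

    def test_add_returns_sum(self):
        self.assertEqual(add(2, 3), 5)


if __name__ == '__main__':
    unittest.main()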

Example OpenEvent test in Python

class TestPagesUrls(OpenEventTestCase):

    def setUp(self):
        self.app = Setup.create_app()

    def test_if_urls_exist(self):
        """Test all urls via GET method"""
        with app.test_request_context():
            for rule in app.url_map.iter_rules():
                if excluded_paths(rule):
                    status_code = self.app.get(request.url[:-1] + str(rule).replace('//', '/'),
                                               follow_redirects=True).status_code
                    self.assertTrue(status_code in [200, 302, 401])

 

I wanted to check that all views exist, but doing that one by one would require a lot of time. That's why I wondered how to avoid writing many similar tests. Finally, based on our list of routes, I was able to write a test which checks the status code of every page.

If any response returns a status_code different from 200, 302 or 401, the test fails. This result means that something is wrong. Simple, isn't it? Try to test it manually… This one short test covers about 40 use cases…

This example shows the incredible value of unit tests! If a developer introduces a bug, they receive an error that something is wrong with a view. Travis CI allows us to reject all broken pull requests and merge only those which fulfill our quality requirements.

Fixing an error is one part, but finding the bug is an even harder task. The ability to detect bugs at an early stage of the development process reduces the cost of the software.

 


Code Quality in the knittingpattern Python Library

In our Google Summer of Code project, part of our work is to bring knitting into the digital age. We are Kirstin Heidler and Nicco Kunzmann. Our knittingpattern library aims at being the exchange and conversion format between different types of knit work representations: hand knitting instructions, machine commands for different machines, and SVG schemata.

Cafe instructions – the generated schema from the knittingpattern library.
Cafe – the original pattern schema.

The image above was generated by this Python code:

import knittingpattern, webbrowser
example = knittingpattern.load_from().example("Cafe.json")
webbrowser.open(example.to_svg(25).temporary_path(".svg"))

So much for the context. Now about the quality tools we use:


Continuous integration

We use Travis CI [FOSSASIA] to upload packages for a specific git tag automatically. The Travis build runs under Python 3.3 to 3.5. It first builds the package and then installs it with its dependencies. To upload tags automatically, one can configure Travis, preferably with the command line interface, to save the username and password for the Python Package Index (PyPI) [TravisDocs]. Our process for releasing a new version is the following:

  1. Increase the version in the knitting pattern library and create a new pull request for it.
  2. Merge the pull request after the tests passed.
  3. Pull and create a new release with a git tag using
    setup.py tag_and_deploy

Travis then builds the new tag and uploads it to Pypi.

With this we have basic quality assurance. Pull requests need to pass all tests before they can be merged. Travis can be configured to automatically reject a request with errors.

Documentation Driven Development

As mentioned in a blog post, documentation-driven development was something worth checking out. In our case that means writing the documentation first, then the tests and then the code.

Writing the documentation first means thinking in the space of the mental model you have for the code. It defines the interfaces you would be happy to use. A lot of edge cases can be thought of at this point.

When writing the tests, they are often split up and no longer represent the flow of thought you had when thinking about your wishes. Tests can be seen as the glue between the code and the documentation. As with writing code to pass the tests, in the conversation between the tests and the documentation I find out some things I have forgotten.

When writing the code in a test-driven way, another conversation starts. I call implementing the tests a conversation because the tests tell the code that it should be different, and the code tells the tests about their inconsistencies, like misspellings and bloated interfaces.

With writing documentation first, we have the chance to have two conversations about our code, in spoken language and in code. I like it when the code hears my wishes, so I prefer to talk a bit more.

Testing the Documentation

Our documentation is hosted on Read the Docs. It should have these properties:

  1. Every module is documented.
  2. Everything that is public is documented.
  3. The documentation is syntactically correct.

These are qualities that can be tested, so they are tested. The code cannot be deployed if it does not meet these standards. We use Sphinx for building the docs. That makes it possible to test these properties in the following way (a sketch of the first check is shown after the list):

  1. For every module there exists a .rst file which automatically documents the module with autodoc.
  2. A Sphinx build outputs a list of objects that should be covered by documentation but are not.
  3. Sphinx outputs warnings throughout the build.
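
For example, the first property could be checked with a test roughly like the sketch below. The directory names are assumptions made for illustration, not the actual project layout:

import os

PACKAGE_DIR = "knittingpattern"                # assumed package directory
DOCS_DIR = os.path.join("docs", "reference")   # assumed location of the .rst files


def test_every_module_has_an_rst_file():
    # every public module should have a matching .rst file for autodoc
    for name in os.listdir(PACKAGE_DIR):
        if name.endswith(".py") and not name.startswith("_"):
            rst_file = os.path.join(DOCS_DIR, name[:-3] + ".rst")
            assert os.path.isfile(rst_file), "missing documentation for " + name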

Testing our documentation allows us to keep it at a higher quality. Many more tests could be imagined, but the basic ones already help.

Code Coverage

It is possible to test the code coverage and see how well we do using Codeclimate.com. It shows us the files we need to work on when we want to improve the quality of the package.

Landscape

Landscape is also free for open source projects. It can give hints about where to improve next. Also it is possible to fail pull requests if the quality decreases. It shows code duplication and can run pylint. Currently, most of the style problems arise from undocumented tests.

Summary

When starting with the stricter quality assurance, the question arose whether it would only slow us down. Now, we have learned to write properly styled PEP 8 code and have begun to do automatically what pylint demands. High test coverage allows us to change the underlying functionality without changing the interface and without fear that we may break something irrecoverably. I feel like a burden has been taken from me by all those free tools for open-source software, which spare me the time of setting up quality assurance myself.

Future Work

In the future we would also like to create a user interface. It is sometimes hard to test these. So, we plan not to put it into the package but to build it on top of the package.


Importance of the test cases for the KnitLib

Having test cases is very important, especially for a library like KnitLib, because using test cases we can clearly test particular parts of it. In KnitLib, the test cases show how the library should be checked. Test cases also help new contributors to understand KnitLib.

There are several test cases for the current KnitLib implementation, such as tests on AYAB communication, the AYAB image, the command line interface, the KnitPat module and the knitting plugin.

For example, in AYAB communication several important functions have been tested: closing the serial port communication, opening the serial port with a baud rate of 115200 (which AYAB uses), sending the start message to the controller, sending a line of data via the serial port, and reading a line from the serial communication. Most of these tests have been done using mocks. Mock is a Python library for testing in Python. Using mocks we can replace parts of our system with mock objects and make assertions about how they have been used, and we can easily represent complex objects without having to manually set up stubs during a test.
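
As a small illustration of what such a mock test can look like (this is a generic sketch using Python's unittest.mock, not the actual KnitLib test code):

from unittest import TestCase
from unittest.mock import MagicMock


class SerialCommunicationTest(TestCase):

    def test_sending_a_start_message(self):
        # the mock stands in for a real serial port object
        serial_port = MagicMock()
        serial_port.readline.return_value = b'ok\n'

        # the code under test would normally live in a communication class;
        # it is inlined here to keep the sketch short
        serial_port.write(b'start\n')
        reply = serial_port.readline()

        # assert how the mock has been used
        serial_port.write.assert_called_once_with(b'start\n')
        self.assertEqual(reply, b'ok\n')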

It is very important to further improve the test cases for KnitLib, because with the help of good test cases we can ensure that KnitLib's features and functionality keep working well.

Regards,

Shiluka.
