Testing Presenter of MVP in Loklak Wok Android

Imagine working on a large codebase as a new developer: you are not sure whether the existing source code works properly, and you are surrounded by questions like "Are all these methods invoked correctly, and the number of times they need to be invoked?" Being new to a codebase and manually checking already-written code is a pain. For cases like these, unit-tests are written. Unit-tests check whether the implemented code works as expected. This blog post explains the implementation of unit-tests for the Presenter in the Model-View-Presenter (MVP) architecture of Loklak Wok Android.

Adding Dependencies to the project

In the app/build.gradle file:

```gradle
defaultConfig {
    ...
    testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}

dependencies {
    ...
    androidTestCompile 'org.mockito:mockito-android:2.8.47'
    androidTestCompile 'com.android.support:support-annotations:25.3.1'
    androidTestCompile 'com.android.support.test.espresso:espresso-core:2.2.2'
}
```

Setup for Unit-Tests

The presenter needs a Realm database and an implementation of the LoklakAPI interface. Along with that, a mock of the View is required, so as to check whether the methods of the View are called or not. The LoklakAPI interface can be mocked easily using Mockito, but the Realm database can't be mocked. For this reason an in-memory Realm database is created, which will be destroyed once all unit-tests are executed. As the presenter is required for each unit-test method, we instantiate the in-memory database before all the tests start, i.e. in a public static method annotated with @BeforeClass, e.g. the setDb method.

```java
@BeforeClass
public static void setDb() {
    Realm.init(InstrumentationRegistry.getContext());
    RealmConfiguration testConfig = new RealmConfiguration.Builder()
            .inMemory()
            .name("test-db")
            .build();
    mDb = Realm.getInstance(testConfig);
}
```

NOTE: The in-memory database should be closed once all unit-tests are executed. So, for closing the database we create a public static method annotated with @AfterClass, e.g. the closeDb method.

```java
@AfterClass
public static void closeDb() {
    mDb.close();
}
```

Now, before each unit-test is executed we need to do some setup work, like instantiating the presenter, creating a mock instance of the API interface using the mock static method, and pushing some sample data into the database. Our presenter uses RxJava and RxAndroid, which depend on the IO scheduler and the MainThread scheduler to perform tasks asynchronously, and these schedulers are not present in the testing environment. So, we override RxJava and RxAndroid to use the trampoline scheduler in place of IO and MainThread so that our tests don't encounter a NullPointerException. All this is done in a public method annotated with @Before, e.g. setUp.

```java
@Before
public void setUp() throws Exception {
    // mocking view and api
    mMockView = mock(SuggestContract.View.class);
    mApi = mock(LoklakAPI.class);
    mPresenter = new SuggestPresenter(mApi, mDb);
    mPresenter.attachView(mMockView);
    queries = getFakeQueries();
    // overriding rxjava and rxandroid
    RxJavaPlugins.setIoSchedulerHandler(scheduler -> Schedulers.trampoline());
    RxAndroidPlugins.setMainThreadSchedulerHandler(scheduler -> Schedulers.trampoline());
    mDb.beginTransaction();
    mDb.copyToRealm(queries);
    mDb.commitTransaction();
}
```

Some fake suggestion queries are created, which will be returned as an observable when the API interface is mocked. For this, two Query objects are simply created and added to a List after their query parameter is set. This is implemented in the getFakeQueries method.
```java
private List<Query> getFakeQueries() {
    List<Query> queryList = new ArrayList<>();

    Query linux = new Query();
    linux.setQuery("linux");
    queryList.add(linux);

    Query india = new Query();
    india.setQuery("india");
    queryList.add(india);

    return queryList;
}
```

After that, a method is…
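The post is cut off here, but as a hedged sketch of where this setup leads: a test can stub the mocked API to return the fake queries and then verify the View callbacks through the mocked View. Method names like getSuggestions(), loadSuggestions() and onSuggestionFetchSuccessful() are assumptions for illustration, not necessarily the project's real names.

```java
// A minimal sketch (assumed names, not the post's actual continuation):
// stub the mocked API, drive the presenter, verify the mocked View.
@Test
public void testSuggestionsFetchedSuccessfully() {
    // assumption: the API exposes getSuggestions() returning an Observable
    when(mApi.getSuggestions("linux")).thenReturn(Observable.just(queries));

    mPresenter.loadSuggestions("linux");

    // the mocked View lets us assert that the presenter updated the UI
    verify(mMockView).onSuggestionFetchSuccessful(queries);
}
```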


Adding Unit Test For Local JSON Parsing in Open Event Android App

The Open Event project uses the JSON format for transferring event information like tracks, sessions, microlocations and others. An event exported in zip format from the Open Event server also contains its data in JSON format. The Open Event Android application uses this JSON data. Before we use this data in the app, we have to parse it to get Java objects that can be used for populating views. There is a chance that the model or the JSON format changes in the future. It is necessary that the models are able to parse the JSON data and that a change in the model or JSON format doesn't break the parsing. In this post I explain how to unit test local JSON parsing so that we can ensure the models are able to parse the local JSON sample data successfully.

Firstly, we need to access assets from the main source set in the unit test. There is no way to directly access assets from the main source set, so we need to add the assets to the test/resources directory; if assets are present there, we can read them using a ClassLoader in the unit test. But we can't just copy assets from the main source set to the resources directory: if there is any change in the sample JSON, we would need to maintain both copies, which may make the samples inconsistent. We need to make the assets shared. Add the following code in the app-level build.gradle file:

```gradle
android {
    ...
    sourceSets.test.resources.srcDirs += ["src/main/assets"]
}
```

It adds src/main/assets as a source directory for the test/resources directory, so after building the project the tests have access to the assets.

Create readFile() method

Now create a method readFile(String name) which takes a filename as a parameter and returns the content of the file as a string.

```java
private String readFile(String name) throws IOException {
    String json = "";
    try {
        InputStream inputStream = this.getClass().getClassLoader().getResourceAsStream(name);
        int size = inputStream.available();
        byte[] buffer = new byte[size];
        inputStream.read(buffer);
        inputStream.close();
        json = new String(buffer, "UTF-8");
    } catch (IOException e) {
        e.printStackTrace();
    }
    return json;
}
```

Here the getResourceAsStream() method opens the file as an InputStream. Then we create a byte array of the same size as the stream's available data, read the file's data into the byte array using the read function, and finally create a String from the byte array.

Create ObjectMapper object

Create and initialize an ObjectMapper object in the test class.

```java
private ObjectMapper objectMapper;

@Before
public void setUp() {
    objectMapper = OpenEventApp.getObjectMapper();
    objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, true);
}
```

Here setting DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES to true is very important. It makes the test fail for any unrecognized field in the sample.

Create doModelDeserialization() method

In the Open Event Android app we use Jackson for JSON serialization and deserialization. Converting JSON data to Java objects is called deserialization or parsing. Now create a doModelDeserialization() method which takes three parameters:

Class<T> type: Model class type of the data
String name: Name of the JSON data file
boolean isList: true if the JSON string contains the…
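The post is truncated here, but a hedged sketch of doModelDeserialization() can be reconstructed from the three parameters described above. This is an assumption rather than the app's actual code; the Track model and the "tracks" file name in the usage example are also illustrative.

```java
// A hedged sketch: parse the named sample file into the given model type,
// either as a single object or as a List, and assert parsing succeeded.
private <T> void doModelDeserialization(Class<T> type, String name, boolean isList) throws IOException {
    String json = readFile(name);
    if (isList) {
        // build a List<T> JavaType so Jackson can parse a JSON array sample
        List<T> items = objectMapper.readValue(json,
                objectMapper.getTypeFactory().constructCollectionType(List.class, type));
        assertNotNull(items);
    } else {
        T item = objectMapper.readValue(json, type);
        assertNotNull(item);
    }
}

// Hypothetical usage in a test (model class and file name are illustrative):
@Test
public void testTracksDeserialization() throws IOException {
    doModelDeserialization(Track.class, "tracks", true);
}
```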


Best Practices when writing Tests for loklak Server

Why do we write unit-tests? We write them to ensure that a developer's implementation doesn't change the behaviour of parts of the project. If there is a change in behaviour, unit-tests throw errors. This keeps developers at ease during integration of the software and lowers the chances of unexpected bugs.

After setting up the tests in Loklak Server, we were only able to check whether there was an error or not. Test failures didn't mention the error or the exact test case at which they failed. It was YoutubeScraperTest that brought some of the best practices into the project, and we modified the other tests accordingly. The following are, in five points, some of the best practices we shall follow while writing unit tests:

Assert the assertions

There are many assert methods which we can use, like assertNull, assertEquals etc. But we should use the one which describes the error well (being more descriptive), so that the developer's effort is reduced while debugging. Using the right assertion helps in getting to the exact error on a test failure, thus making the code easier to debug. Some examples:

Using assertThat() over assertTrue()

assertThat() gives more descriptive errors than assertTrue(). When assertTrue() is used:

```
java.lang.AssertionError
	at org.loklak.harvester.TwitterScraperTest.testSimpleSearch(TwitterScraperTest.java:142)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at org.hamcr..........
```

When assertThat() is used:

```
java.lang.AssertionError:
Expected: is <true>
     but: was <false>
	at org.loklak.harvester.TwitterScraperTest.testSimpleSearch(TwitterScraperTest.java:142)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at org.hamcr...........
```

NOTE: In many cases assertThat() is preferred over other assert methods (read this), but in some cases other methods give more descriptive output (like in the next example).

Using assertEquals() over assertThat()

For assertThat():

```
java.lang.AssertionError:
Expected: is "ar photo #test #car https://pic.twitter.com/vd1itvy8Mx"
     but: was "car photo #test #car https://pic.twitter.com/vd1itvy8Mx"
	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
	at org.junit.Assert.assertThat(Ass........
```

For assertEquals():

```
org.junit.ComparisonFailure: expected:<[c]ar photo #test #car ...> but was:<[]ar photo #test #car ...>
	at org.junit.Assert.assertEquals(Assert.java:115)
	at org.junit.Assert.assertEquals(Assert.java:144)
	at org.loklak.harvester.Twitter.........
```

We can clearly see that the second example gives a better error description than the first one (an SO link).

One Test per Behaviour

Each test shall be independent of the others, with no mutual dependencies, and shall test only a specific behaviour of the module under test. Have a look at this snippet. This test checks the method that creates the Twitter URL, by comparing the method's output URL with the expected one.
```java
@Test
public void testPrepareSearchURL() {
    String url;
    String[] query = {
        "fossasia", "from:loklak_test",
        "spacex since:2017-04-03 until:2017-04-05"
    };
    String[] filter = {"video", "image", "video,image", "abc,video"};
    String[] out_url = {
        "https://twitter.com/search?f=tweets&vertical=default&q=fossasia&src=typd",
        "https://twitter.com/search?f=tweets&vertical=default&q=from%3Aloklak_test&src=typd",
        // … remaining expected URLs elided in the original post
    };

    // checking simple urls
    for (int i = 0; i < query.length; i++) {
        url = TwitterScraper.prepareSearchURL(query[i], "");
        // compare created urls with the expected ones
        assertThat(out_url[i], is(url));
    }
}
```

This unit-test checks whether the method under test is able to create a Twitter link according to the query.

Selecting test cases for the test

We shall remember that testing is a very costly task in terms of processing: it takes time to execute. That is why we need to keep the…
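For reference, here is a minimal sketch of the three assertion styles compared above, assuming JUnit 4 with Hamcrest's is() matcher statically imported; the expected URL is reused from the test:

```java
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThat;
import static org.junit.Assert.assertTrue;

public class AssertionStyles {
    static final String EXPECTED =
            "https://twitter.com/search?f=tweets&vertical=default&q=fossasia&src=typd";

    public static void compare(String actualUrl) {
        // bare assertTrue: fails with an undescriptive AssertionError
        assertTrue(EXPECTED.equals(actualUrl));
        // assertThat: the failure message shows expected vs actual
        assertThat(actualUrl, is(EXPECTED));
        // assertEquals: the failure message highlights differing characters
        assertEquals(EXPECTED, actualUrl);
    }
}
```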


Event-driven programming in Flask with Blinker signals

Setting up blinker:

The Open Event Project offers event managers a platform to organize all kinds of events, including concerts, conferences, summits and regular meetups. In the server part of the project, the issue at hand was to perform multiple tasks in the background (we use Celery for this) whenever some changes occurred within the event, or the speakers/sessions associated with the event. The usual approach to this would be a function call after any relevant change. But the statements making these changes were distributed all over the project, in multiple places. It would be cumbersome to add 3-4 function calls (which are irrelevant to the function they are being executed in) in so many places. Moreover, the code would get unstructured, and it would be really hard to maintain over time. That's when signals came to our rescue.

Since Flask 0.6, there is integrated support for signalling in Flask; refer to http://flask.pocoo.org/docs/latest/signals/ . The Blinker library is used here to implement signals. If you're coming from some other language, signals are analogous to events.

Given below is the code to create named signals in a custom namespace:

```python
from blinker import Namespace

event_signals = Namespace()
speakers_modified = event_signals.signal('event_json_modified')
```

If you want to emit a signal, you can do so by calling the send() method:

```python
speakers_modified.send(current_app._get_current_object(),
                       event_id=event.id, speaker_id=speaker.id)
```

From the user guide itself: "Try to always pick a good sender. If you have a class that is emitting a signal, pass self as sender. If you are emitting a signal from a random function, you can pass current_app._get_current_object() as sender."

To subscribe to a signal, blinker provides neat decorator-based signal subscriptions:

```python
@speakers_modified.connect
def name_of_signal_handler(app, **kwargs):
    ...
```

Some Design Decisions:

When sending the signal, the signal may carry lots of information which your handler may or may not want, e.g. when you have multiple subscribers listening to the same signal, some of the information sent by the signal may not be of use to your specific function. Thus we decided to enforce the pattern below to ensure flexibility throughout the project:

```python
@speakers_modified.connect
def new_handler(app, **kwargs):
    # do whatever you want to do with kwargs['event_id']
```

In this case, the function new_handler needs to perform some task solely based on the event_id. If the function were of the form def new_handler(app, event_id), an error would be raised by the app. A big plus of this approach: if you later want to send some more info with the signal, for example speaker_name, this pattern ensures that no error is raised by any of the subscribers defined before this change was made.

When to use signals and when not?

The call to send a signal will of course be lying in another function itself. The signal and the function should be independent of each other. If the task done by any of the signal subscribers even remotely affects your current function, a signal shouldn't be…
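To see the whole round trip in one place, here is a minimal, self-contained sketch of the pattern above (plain blinker outside a Flask app; the sender string and ID values are made up for illustration):

```python
# A minimal sketch: one signal, two subscribers, one send() fan-out.
from blinker import Namespace

event_signals = Namespace()
speakers_modified = event_signals.signal('event_json_modified')

@speakers_modified.connect
def rebuild_event_json(sender, **kwargs):
    # picks only the keyword argument it cares about
    print('rebuilding JSON for event', kwargs['event_id'])

@speakers_modified.connect
def notify_organizer(sender, **kwargs):
    # a second subscriber; extra kwargs like speaker_id are simply ignored
    print('notifying organizer of event', kwargs['event_id'])

# emitting the signal delivers it to every connected handler
speakers_modified.send('example-sender', event_id=42, speaker_id=7)
```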


Writing Simple Unit-Tests with JUnit

In the Loklak Server project, we use a number of automation tools, like the build testing tool 'TravisCI', the automated code review tool 'Codacy', and the dependency monitoring tool 'Gemnasium'. We are also using JUnit, a Java-based unit-testing framework, for writing automated unit-tests. It can be used to test methods to check their behaviour whenever there is any change in implementation. These unit-tests are handy and are coded specifically for the project. In the Loklak Server project it is used to test the web scrapers. Generally JUnit is used to check that there is no change in the behaviour of methods, but in this project it also helps to detect whether the website code has been modified in a way that affects the data that is scraped.

Let's start with the basics: first setting up, then writing a simple unit-test, and then test runners. Here we will refer to how unit tests have been implemented in Loklak Server to get familiar with the JUnit framework.

Setting Up

Setting up JUnit with Gradle is easy; you have to do just two things:

1) Add the JUnit dependency in build.gradle:

```gradle
dependencies {
    . . . .<other compile groups>. . .
    compile group: 'com.twitter', name: 'jsr166e', version: '1.1.0'
    compile group: 'com.vividsolutions', name: 'jts', version: '1.13'
    compile group: 'junit', name: 'junit', version: '4.12'
    compile group: 'org.apache.logging.log4j', name: 'log4j-1.2-api', version: '2.6.2'
    compile group: 'org.apache.logging.log4j', name: 'log4j-api', version: '2.6.2'
    . . .
}
```

2) Add the source for the 'test' task from where tests are built (like here). Save all tests in the test directory and keep its internal directory structure identical to the src directory structure. Now set the path in build.gradle so that the tests can be compiled:

```gradle
sourceSets.test.java.srcDirs = ['test']
```

Writing Unit-Tests

In the JUnit framework, a unit-test is a method that tests a particular behaviour of a section of code. Test methods are identified by the annotation @Test. A unit-test invokes methods of the source files to test their behaviour, by fetching the output and comparing it with the expected output. The following test checks whether the Twitter URL created for scraping is valid:

```java
/**
 * This unit-test tests twitter url creation
 */
@Test
public void testPrepareSearchURL() {
    String url;
    String[] query = {
        "fossasia", "from:loklak_test",
        "spacex since:2017-04-03 until:2017-04-05"
    };
    String[] filter = {"video", "image", "video,image", "abc,video"};
    String[] out_url = {
        "https://twitter.com/search?f=tweets&vertical=default&q=fossasia&src=typd",
        "https://twitter.com/search?f=tweets&vertical=default&q=from%3Aloklak_test&src=typd",
        // "and other output url strings to be matched….." (elided in the original post)
    };

    // checking simple urls
    for (int i = 0; i < query.length; i++) {
        url = TwitterScraper.prepareSearchURL(query[i], "");
        // compare urls with urls created
        assertThat(out_url[i], is(url));
    }

    // checking urls having filters
    for (int i = 0; i < filter.length; i++) {
        url = TwitterScraper.prepareSearchURL(query[0], filter[i]);
        // compare urls with urls created
        assertThat(out_url[i + 3], is(url));
    }
}
```

Testing the implementation details of code is useless, as it either makes the code more difficult to change or renders the tests useless. So be cautious while writing tests, and keep the difference between implementation and behaviour in mind. This is a good example of a simple unit-test. As we can see, there are some points,…
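The post breaks off before its test-runner part, but a minimal JUnit 4 runner can be sketched as follows. JUnitCore is JUnit's standard programmatic entry point; the class name TestRunner is an assumption for illustration.

```java
// A minimal sketch of a programmatic JUnit 4 test runner.
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class TestRunner {
    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(TwitterScraperTest.class);
        for (Failure failure : result.getFailures()) {
            // print which assertion failed and where
            System.out.println(failure.toString());
        }
        System.out.println("All passed: " + result.wasSuccessful());
    }
}
```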


Unit Testing

There are many stories about unit testing. Developers sometimes say that they don't write tests because they write good quality code. Does that make sense, if no one is infallible? In college, only a few teachers talk about unit testing, and they only show basic examples. They require us to write a few tests to finish the final project, but nobody really teaches us the importance of unit testing. I have also always wondered what benefits it can bring. As time is a really important factor in our work, it often happens that we simply give up this part of the development process to get "more time", rather than spend time on writing stupid tests. But now I know that it is a vicious circle. Customers' requirements do not help us either: they put high pressure on seeing visible results, not a few statistics about coverage status. None of them cares about some strange numbers. So, as I mentioned above, we usually focus on building new features and get rid of tests. It may seem to save time, but it doesn't. In reality, tests save us a lot of time because we can identify and fix bugs very quickly. If a bug occurs because of someone's change, we don't have to spend long hours trying to figure out what is going on. That's why we need tests.

It is especially visible in huge open source projects. The FOSSASIA organization has about 200 contributors. In the Open Event project we have about 20 active developers who generate many lines of code every single day. Many of these lines change over and over again, as well as interfere with each other. Let me provide you with a simple example. In our team we have about 7 pull requests per day. As I mentioned above, we want to make our code high quality and free of bugs, but without testing, identifying whether a pull request causes a bug is a very difficult task. Fortunately, this boring job is done for us by Travis CI. It is a great tool which runs our tests on every PR to check if bugs occur. It helps us to quickly notice bugs and maintain our project very well.

What is unit testing?

Unit testing is a software development method in which the smallest testable parts of an application are tested.

Why do we need to write unit tests?

Let me point out the arguments why unit testing is really important while developing a project (a minimal sketch of such a test follows this list):

- To prove that our code works properly. If a developer adds another condition, the tests check whether the method returns correct results. You simply don't need to wonder if something is wrong with your code.
- To reduce the amount of bugs. Tests let you know what input parameters a function should get and what results it should return. You simply don't write unused code.
- To save development time. Developers don't waste time checking after every change whether their code still works correctly.
- Unit tests help to understand software design.
- To provide quick feedback…
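As a generic sketch of what testing "the smallest testable parts" looks like (illustrative code, not from the Open Event project):

```python
# A generic sketch: the smallest testable unit is a single function,
# and the test pins down its expected behaviour and its error handling.
import unittest

def apply_discount(price, percent):
    """Return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200, 25), 150)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200, 150)

if __name__ == '__main__':
    unittest.main()
```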


Code Quality in the knittingpattern Python Library

In our Google Summer of Code project, a part of our work is to bring knitting to the digital age. We are Kirstin Heidler and Nicco Kunzmann. Our knittingpattern library aims at being the exchange and conversion format between different types of knit-work representations: hand knitting instructions, machine commands for different machines, and SVG schemata.

[Image: an example knitting pattern rendered as SVG]

The image above was generated by this Python code:

```python
import knittingpattern, webbrowser
example = knittingpattern.load_from().example("Cafe.json")
webbrowser.open(example.to_svg(25).temporary_path(".svg"))
```

So far about the context. Now about the quality tools we use:

Continuous integration

We use Travis CI [FOSSASIA] to upload packages of a specific git tag automatically. The Travis build runs under Python 3.3 to 3.5. It first builds the package and then installs it with its dependencies. To upload tags automatically, one can configure Travis, preferably with the command line interface, to save the username and password for the Python Package Index (PyPI). [TravisDocs] Our process of releasing a new version is the following:

1. Increase the version in the knittingpattern library and create a new pull request for it.
2. Merge the pull request after the tests pass.
3. Pull and create a new release with a git tag using setup.py tag_and_deploy.

Travis then builds the new tag and uploads it to PyPI. With this we have a basic quality assurance. Pull requests need to pass all tests before they can be merged, and Travis can be configured to automatically reject a request with errors.

Documentation Driven Development

As mentioned in a blog post, documentation-driven development was something worth checking out. In our case that means writing the documentation first, then the tests and then the code. Writing the documentation first means thinking in the space of the mental model you have for the code. It defines the interfaces you would be happy to use. A lot of edge cases can be thought of at this point. When writing the tests, they are often split up and no longer represent the flow of thought you had when thinking about your wishes. Tests can be seen as the glue between the code and the documentation. As it is with writing code to pass the tests, in the conversation between the tests and the documentation I find out some things I have forgotten. When writing the code in a test-driven way, another conversation starts. I call implementing the tests a conversation because the tests talk to the code, telling it that it should be different, and the code tells the tests about their inconsistencies, like misspellings and bloated interfaces. With writing documentation first, we have the chance to have two conversations about our code: in spoken language and in code. I like it when the code hears my wishes, so I prefer to talk a bit more.

Testing the Documentation

Our documentation is hosted on Read the Docs. It should have these properties:

- Every module is documented.
- Everything that is public is documented.
- The documentation is syntactically correct.

These are qualities that can be tested, so they…
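As a hedged sketch (not the library's actual test suite) of how the first two properties can be checked, a test can walk a module's public attributes and assert each carries a docstring; the MODULES list is an assumption for illustration:

```python
# A sketch of a documentation-coverage test in pytest style.
import importlib
import inspect

MODULES = ["knittingpattern"]  # assumption: top-level package to check

def test_public_objects_have_docstrings():
    for name in MODULES:
        module = importlib.import_module(name)
        assert module.__doc__, "module {} lacks a docstring".format(name)
        for attr_name, attr in vars(module).items():
            if attr_name.startswith("_"):
                continue  # private names are exempt
            if inspect.isfunction(attr) or inspect.isclass(attr):
                assert inspect.getdoc(attr), \
                    "{}.{} lacks a docstring".format(name, attr_name)
```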


Importance of the test cases for the KnitLib

Having test cases is very important, especially for a library like KnitLib, because with test cases we can clearly verify particular features. In KnitLib, test cases document how the library should be checked, and they also help new contributors understand the KnitLib. There are several test cases for the current KnitLib implementation, such as tests on ayab communication, tests on the ayab image, tests on the command line interface, tests on the KnitPat module and tests on the knitting plugin.

For example, in ayab communication several important functions have been tested: closing the serial port communication, opening the serial port with a baud rate of 115200 (which ayab fits), sending the start message to the controller, sending a line of data via the serial port, and reading a line from the serial communication. Most of these tests have been done using mock tests. Mock is a Python library for testing in Python. Using mocks, we can replace parts of our system with mock objects and make assertions about how they have been used. We can easily represent complex objects without having to manually set up stubs during a test. A sketch of this mocking approach is given below.

It is very important to improve the test cases on KnitLib further, because with the help of good test cases we can guarantee that KnitLib's features and functionalities keep working.
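Here is a hedged sketch of the mocking approach described above (not KnitLib's actual test code; the class AyabCommunication, the method req_start and the 0x01 message byte are assumptions for illustration):

```python
# A sketch: replace the serial port with a Mock and assert how it was used.
from unittest import mock

class AyabCommunication(object):
    """Tiny stand-in for a serial-backed communication class."""
    def __init__(self, serial):
        self.__ser = serial

    def req_start(self, start_line, end_line):
        # assumed message format: one id byte followed by the line range
        self.__ser.write(bytes([0x01, start_line, end_line]))

def test_req_start_writes_start_message():
    fake_serial = mock.Mock()
    comm = AyabCommunication(fake_serial)
    comm.req_start(0, 10)
    # the mock records every call, so we can assert on the exact bytes sent
    fake_serial.write.assert_called_once_with(bytes([0x01, 0, 10]))
```

Regards, Shiluka.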
