Add Unit Test in SUSI.AI Android App

Unit testing is an integral part of software development. Hence, this blog focuses on adding unit tests to the SUSI.AI Android app. To keep things simple, take a very basic example: the anonymized feedback section. In this section the email of the user is truncated after the ‘@’ symbol in order to maintain the anonymity of the user. Here is the function that takes ‘email’ as a parameter and returns the truncated email that has to be displayed in the feedback section:

fun truncateEmailAtEnd(email: String?): String? {
    if (!email.isNullOrEmpty()) {
        val truncateAt = email?.indexOf('@')
        if (truncateAt is Int && truncateAt != -1) {
            return email.substring(0, truncateAt.plus(1)) + " ..."
        }
    }
    return null
}

The unit test has to be written for the above function.

Step - 1: Add the following dependencies to your build.gradle file.

// unit test
testImplementation "junit:junit:4.12"
testImplementation "org.mockito:mockito-core:1.10.19"

Step - 2: Add a file in the correct package (the same as that of the file to be tested) in the test package. The function above is present in the Utils.kt file. Thus create a file called UtilsTest.kt in the test folder, in the package ‘org.fossasia.susi.ai.helper’.

Step - 3: Add a method called testTruncateEmailAtEnd() to UtilsTest.kt and add the ‘@Test’ annotation before this method.

Step - 4: Now add tests for various cases, including all possible corner cases that might occur. This can be done using assertEquals(), which takes two parameters - the expected value and the actual value. For example, consider the email ‘testuser@example.com’. This email is passed as a parameter to the truncateEmailAtEnd() method. The expected returned string would be ‘testuser@ ...’. So, add a test for this case using assertEquals() as:

assertEquals("testuser@ ...", Utils.truncateEmailAtEnd("testuser@example.com"))

Similarly, add other cases, like an empty email string, a null string, an email with numbers and symbols, and so on. Here is how the UtilsTest.kt class looks:

package org.fossasia.susi.ai.helper

import junit.framework.Assert.assertEquals
import org.junit.Test

class UtilsTest {

    @Test
    fun testTruncateEmailAtEnd() {
        assertEquals("testuser@ ...", Utils.truncateEmailAtEnd("testuser@example.com"))
        assertEquals(null, Utils.truncateEmailAtEnd("testuser"))
        assertEquals(null, Utils.truncateEmailAtEnd(""))
        assertEquals(null, Utils.truncateEmailAtEnd(" "))
        assertEquals(null, Utils.truncateEmailAtEnd(null))
        assertEquals("test.user@ ...", Utils.truncateEmailAtEnd("test.user@example.com"))
        assertEquals("test_user@ ...", Utils.truncateEmailAtEnd("test_user@example.com"))
        assertEquals("test123@ ...", Utils.truncateEmailAtEnd("test123@example.com"))
        assertEquals(null, Utils.truncateEmailAtEnd("test user@example.com"))
    }
}

Note: You can add more tests to check for other general and corner cases.

Step - 5: Run the tests in UtilsTest.kt. If a test fails, try to figure out the cause of the failure and add or modify the code in Utils.kt accordingly. This approach helps recognize flaws in the existing code, thereby reducing the risk of bugs and failures.

Resources

Build effective unit tests | Android Developers: https://developer.android.com/training/testing/unit-testing/
Read about JUnit: https://junit.org/junit5/
Read about Mockito: https://site.mockito.org


Handling Android Runtime permissions in UI Tests in SUSI.AI Android

With the introduction of Marshmallow (API level 23), in SUSI.AI it became necessary to ensure that:

- We verified whether we had the permission that was needed, when required.
- The user was requested to grant the permission when it was deemed appropriate.
- The outcome of being granted or denied the required permission (empty states or data feedback) was correctly handled within the UI.

You might have written UI tests. But what about instances where the app needs the user’s permission, such as allowing the app to access contacts on the device, for the tests to run? Would the tests pass when run on Android 6.0+ devices? And can Espresso be used to achieve this? Unfortunately, Espresso does not have the ability to access components from outside of the application package. So, how do we handle this? There are two approaches:

1) Using UI Automator
2) Using the GrantPermissionRule

Let us have a look at both of these approaches in detail.

Using UI Automator to handle runtime permissions in UI tests:

UI Automator is a UI testing framework suitable for cross-app functional UI testing across system and installed apps. This framework requires Android 4.3 (API level 18) or higher. The UI Automator testing framework provides a set of APIs to build UI tests that perform interactions on user apps and system apps. The UI Automator APIs allow you to perform operations such as opening the Settings menu or the app launcher on a test device. This testing framework is well suited to writing black-box-style automated tests, where the test code does not rely on internal implementation details of the target app. Its key features include:

- A viewer to inspect the layout hierarchy. For more information, see UI Automator Viewer.
- An API to retrieve state information and perform operations on the target device. For more information, see Accessing device state.
- APIs that support cross-app UI testing. For more information, see UI Automator APIs.

Unlike Espresso, UI Automator can interact with system applications, which means it can interact with the permissions dialog if needed. So, how do we do this? If you want to grant a permission in a UI test, you need to find the corresponding UiObject that you wish to click on. In our case, the permissions dialog box is the UiObject. This object is a representation of a view - it is not bound to the view but contains information to locate the matching view at runtime, based on the properties of the UiSelector instance within its constructor. A UiSelector instance is an object that declares elements to be targeted by the UI test within the layout. You can set various properties, such as a text value, class name or content-description, for this UiSelector instance. So, once you have your UiObject (the permissions dialog), you can determine which option you want to select and then use the click() method to grant or deny permission access.…
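The excerpt stops here, but a minimal sketch of the UI Automator approach described above could look like the following. The "ALLOW" button label is an assumption - it varies across Android versions and locales - and the support-library package names reflect the era of this post (androidx uses different packages):

import android.support.test.InstrumentationRegistry;
import android.support.test.uiautomator.UiDevice;
import android.support.test.uiautomator.UiObject;
import android.support.test.uiautomator.UiObjectNotFoundException;
import android.support.test.uiautomator.UiSelector;

public class PermissionHelper {

    // If the runtime permission dialog is showing, click its "ALLOW" button.
    public static void allowPermissionIfVisible() throws UiObjectNotFoundException {
        UiDevice device = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation());
        UiObject allowButton = device.findObject(new UiSelector().text("ALLOW"));
        if (allowButton.exists()) {
            allowButton.click();
        }
    }
}

For the second approach, GrantPermissionRule (from the Android test support rules) grants the permission before the test starts, so the system dialog never appears at all; a sketch of its usage:

// grant READ_CONTACTS up front so no system dialog blocks the test
@Rule
public GrantPermissionRule permissionRule =
        GrantPermissionRule.grant(android.Manifest.permission.READ_CONTACTS);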


Implement Tests for Feedback List in Open Event Orga App

In the Open Event Orga App, tests have been written for all the presenter and view model classes to ensure that the implemented functionalities work well. In the following blog post I discuss one particular test which I implemented: the FeedbackList presenter test.

Implementation

1) Instantiation of the variables:

@Rule
public MockitoRule mockitoRule = MockitoJUnit.rule();

@Mock
public FeedbackListView feedbackListView;

@Mock
public FeedbackRepository feedbackRepository;

We should first know the meaning of the annotations being used:

@Rule: It tells Mockito to create the mocks based on the @Mock annotations. This annotation always needs to be used.
@Mock: It tells Mockito to mock the FeedbackListView interface and the FeedbackRepository class.

So two dependencies are mocked here - FeedbackListView and FeedbackRepository - while the MockitoRule initializes those mocks. Before moving forward we first need to understand the meaning of a mock. A mock object is a dummy implementation of an interface or a class in which you define the output of certain method calls. Mock objects are configured to perform a certain behavior during a test. They typically record the interaction with the system, and tests can validate that.

private static final List<Feedback> FEEDBACKS = Arrays.asList(
    Feedback.builder().id(2L).comment("Amazing!").build(),
    Feedback.builder().id(3L).comment("Awesome!").build(),
    Feedback.builder().id(4L).comment("Poor!").build()
);

The list of feedbacks is populated with demo values which can be used for testing purposes later.

2) The setUp() method, annotated with @Before, is executed before each test runs. A feedbackListPresenter object is created and the required parameters are passed. The RxJavaPlugins hooks setIoSchedulerHandler and setComputationSchedulerHandler, and the RxAndroidPlugins hook setInitMainThreadSchedulerHandler, all use Schedulers.trampoline(), which lets the internal Observable calls finish before the result is asserted.

setIoSchedulerHandler() -> replaces the scheduler that handles the input and output work in the RxJava code.
setComputationSchedulerHandler() -> replaces the scheduler that handles the computations carried out during calls to RxJava methods.
setInitMainThreadSchedulerHandler() -> replaces the Android main-thread scheduler, so that operations which would normally run on the main thread run synchronously in the test.

@Before
public void setUp() {
    feedbackListPresenter = new FeedbackListPresenter(feedbackRepository);
    feedbackListPresenter.attach(ID, feedbackListView);

    RxJavaPlugins.setIoSchedulerHandler(scheduler -> Schedulers.trampoline());
    RxJavaPlugins.setComputationSchedulerHandler(scheduler -> Schedulers.trampoline());
    RxAndroidPlugins.setInitMainThreadSchedulerHandler(schedulerCallable -> Schedulers.trampoline());
}

Some of the tests are discussed below.

→ The following test ensures that the feedback list is loaded automatically when the presenter starts:

@Test
public void shouldLoadFeedbackListAutomatically() {
    when(feedbackRepository.getFeedbacks(anyLong(), anyBoolean())).thenReturn(Observable.fromIterable(FEEDBACKS));
    feedbackListPresenter.start();
    verify(feedbackRepository).getFeedbacks(ID, false);
}

As can be seen above, I have used the when ... thenReturn functionality of Mockito, which stubs a method call: when getFeedbacks() is called with the given parameters, the mock returns the value specified in thenReturn().
verify() ensures that getFeedbacks() is called on the feedbackRepository mock.

→ The following test ensures that an error message is shown when loading the data after a swipe refresh fails. First, the list of feedbacks is fetched from the feedbackRepository with the help of getFeedbacks(), where the parameters event id and the boolean variable true are…
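The excerpt ends mid-description; a minimal sketch of what such a swipe-refresh error test could look like follows. The loadFeedbacks() and showError() names are hypothetical stand-ins for the presenter and view methods, not necessarily the app's actual API:

@Test
public void shouldShowErrorOnRefreshFailure() {
    // hypothetical sketch: stub the forced refresh (true) to fail ...
    when(feedbackRepository.getFeedbacks(ID, true))
        .thenReturn(Observable.error(new Throwable("Error")));

    // ... trigger a swipe refresh on the presenter ...
    feedbackListPresenter.loadFeedbacks(true);

    // ... and verify the view was asked to show an error message
    verify(feedbackListView).showError(anyString());
}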


Adding New tests for Knowledge Service

Testing verifies the behaviour of our application by creating instances of the corresponding classes, executing their functions, and checking the actual behaviour of the app against the expected result. The tools and frameworks used in Angular are Jasmine and Karma. In this blog, I will describe how I implemented tests for the newly added Knowledge API service, which helped us to increase the overall code coverage by 1.05% in Susper.

Adding tests for the Knowledge API:

We need to check whether the API is functioning or not, and this can be done by using a mocked (hard-coded) response for a query and then comparing it with the response received from the API. This is a common practice in Angular, and to achieve it we will be using some dependencies provided by Angular, such as MockBackend, MockConnection and BaseRequestOptions.

import { MockBackend, MockConnection } from '@angular/http/testing';
import { Http, Jsonp, BaseRequestOptions, RequestMethod, Response, ResponseOptions, HttpModule, JsonpModule } from '@angular/http';

We will also need to define a mock response; here I have used a hard-coded response for the query India:

export const MockKnowledgeApi = {
  results: [
    { "batchcomplete": "",
      "query": {
        "normalized": [{ "from": "india", "to": "India" }],
        "pages": { "14533": { "pageid": 14533, "ns": 0, "title": "India",
          "extract": `India (IAST: Bh\u0101rat), also called the Republic of India (IAST: Bh\u0101rat Ga\u1e47ar\u0101jya), is a country in South Asia….` } } } }
  ],
  MaxHits: 5
};

Now we will use a mock constant, mockHttp_provider, for Http, and inject instances of MockBackend and BaseRequestOptions:

const mockHttp_provider = {
  provide: Http,
  deps: [MockBackend, BaseRequestOptions],
  useFactory: (backend: MockBackend, options: BaseRequestOptions) => {
    return new Http(backend, options);
  }
};

Now we need to add all the services and dependencies which we will be using to the providers, and inject the instances of KnowledgeapiService and MockBackend in the beforeEach function:

beforeEach(inject([KnowledgeapiService, MockBackend],
  (knowledgeService: KnowledgeapiService, mockBackend: MockBackend) => {
    service = knowledgeService;
    backend = mockBackend;
}));

Now we will use the same query for which we created the mocked response, and check the response from our API. The test subscribes to the mock backend’s connections, verifies the outgoing request, responds with the mocked data, and asserts the service output:

const searchquery = 'india';

backend.connections.subscribe((connection: MockConnection) => {
  expect(connection.request.method).toEqual(RequestMethod.Get);
  expect(connection.request.url).toBe(
    `https://en.wikipedia.org/w/api.php?&origin=*&format=json&action=query&prop=extracts&exintro=&explaintext=&` +
    `titles=${searchquery}`);
  const options = new ResponseOptions({ body: MockKnowledgeApi });
  connection.mockRespond(new Response(options));
});

service.getSearchResults(searchquery).subscribe((res) => {
  expect(res).toEqual(MockKnowledgeApi);
});

This checks the working of our API: if it is working, our test case will pass. In this way we implemented the tests for the KnowledgeapiService, which helped us to test our API and increase the overall code coverage significantly.

Resources

Testing in Angular: https://angular.io/guide/testing
Testing by MockBackend: https://angular.io/api/http/testing/MockBackend
Using MockBackend to simulate a response: https://codecraft.tv/courses/angular/unit-testing/http-and-jsonp/#_using_the_code_mockbackend_code_to_simulate_a_response


Setting up the Circle.CI config for SUSI Android

The SUSI.AI Android app uses CircleCI to run checks whenever a new PR is made. This is done to ensure that the app stays consistent with the new changes and that there is no problem in adding that particular code change. CircleCI has launched a v2 version of the .yml file, so to continue using CircleCI it was time to upgrade to the v2 version of the script.

Circle.CI config version 1

The config file tells the CI which commands to run to test the build - the tests, lints, etc. that determine whether the build succeeds or fails - which commands to run if the tests pass, and which environment to run the code in (Python, Java, Android, etc.). CircleCI announced that August 31, 2018 is the last date up to which config version 1.0 will be supported, so the configuration was updated to version 2.0. Version 2.0 has an updated syntax, and the previous script was modified so that it could provide the configuration for the CI builds. The updated script is shown below:

version: 2
jobs:
  build:
    working_directory: ~/code
    docker:
      - image: circleci/android:api-27-alpha
    environment:
      JVM_OPTS: -Xmx3200m
    steps:
      - checkout
      - restore_cache:
          key: jars-{{ checksum "build.gradle" }}-{{ checksum "app/build.gradle" }}
      - run:
          name: Download Dependencies
          command: ./gradlew androidDependencies
      - save_cache:
          paths:
            - ~/.gradle
          key: jars-{{ checksum "build.gradle" }}-{{ checksum "app/build.gradle" }}
      - run:
          name: lint check
          command: ./gradlew lint
      - run:
          name: Run Tests
          command: ./gradlew build
      - run:
          command: |
            bash exec/prep-key.sh
            bash exec/apk-gen.sh
      - store_artifacts:
          path: app/build/reports
          destination: reports
      - store_artifacts:
          path: app/build/outputs
          destination: outputs
      - store_test_results:
          path: app/build/test-results

A few checks, such as connectedCheck, which were required for UI testing, are not included in this script; instead, an approach of increasing the number of unit tests is followed. The reason is that implementing UI tests is hard for apps that have no constant design and constantly changing specifications, and this problem with UI tests would only grow in the future. Since the architecture used is MVP, moving all logic into the presenter components removes the need for most UI tests. Moving in the direction of more unit tests is therefore better: they are small, easy and quick to write and run, and the degree of dependence on flaky UI tests decreases. If the view implements only small sections of display logic, just verifying that the presenter calls those sections is enough to guarantee that the UI is going to work.

The ./gradlew connectedCheck command from the previous config was removed, and ./gradlew build was added to start the unit tests instead. Due to the updated Gradle dependencies, changes were also made to the APK uploading commands and to the apk-gen.sh file:

bash exec/prep-key.sh
chmod +x exec/apk-gen.sh
./exec/apk-gen.sh

In the above code, the lines concerned with apk-gen.sh can be combined, and the resultant command becomes:

bash exec/apk-gen.sh

The apk-gen.sh script was configured and the latest…


Adding new test cases for increasing test coverage of Loklak Server

It is a good practice to have test cases covering a major portion of the actual code base. The idea here was the same: add new test cases to Loklak Server to increase its test coverage. The results were quite impressive, with a significant increase of about 3% in the total test coverage of the overall project, and an increase of about 80-100% in the test coverage of the individual files for which tests have been written.

Integration Process

For integration, a total of 6 new test cases have been written:

1. ASCIITest
2. GeoLocationTest
3. CommonPatternTest
4. InstagramProfileScraperTest
5. EventBriteCrawlerServiceTest
6. LocationWiseTimeServiceTest

Javadocs have also been written for these files.

Implementation

The basic procedure for adding a new test case is always the same. Here is an example of the EventBriteCrawlerServiceTest implementation, which can be used as a reference for adding a new test case to the project.

Prerequisite: If the test file being written tests an additional external service (e.g. EventBriteCrawlerServiceTest tests an event created on the EventBrite platform), then a corresponding new test service or test event should be created beforehand on the given platform. For EventBriteCrawlerServiceTest, a test event has been created.

Assuming the prerequisite step has been done, the following steps are used for creating a new test case (EventBriteCrawlerServiceTest):

1. Generate a new Java file in test/org/loklak/api/search/ as EventBriteCrawlerServiceTest.java.

2. Define the package for the test file and import the EventBriteCrawlerService.java file which has to be tested, along with the necessary packages and methods.

package org.loklak.api.search;

import org.loklak.api.search.EventBriteCrawlerService;
import org.loklak.susi.SusiThought;
import org.junit.Test;
import org.json.JSONArray;
import org.json.JSONObject;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.assertNotNull;

3. Define the class and the methods that need to be tested. It is a good idea to break the testing into several test methods rather than testing many features in a single method.

public class EventBriteCrawlerServiceTest {

    @Test
    public void apiPathTest() { }

    @Test
    public void eventBriteEventTest() { }

    @Test
    public void eventBriteOrganizerTest() { }

    @Test
    public void eventExtraDetailsTest() { }
}

4. Create an object of EventBriteCrawlerService in each test method.

EventBriteCrawlerService eventBriteCrawlerService = new EventBriteCrawlerService();

5. Call the specific method of EventBriteCrawlerService that needs to be tested in each of the defined methods of the test file, and assert that the actual result equals the expected result.

@Test
public void apiPathTest() {
    EventBriteCrawlerService eventBriteCrawlerService = new EventBriteCrawlerService();
    assertEquals("/api/eventbritecrawler.json", eventBriteCrawlerService.getAPIPath());
}

6. For methods fetching an actual result from an event page (integration tests), define and initialise an expected set of results, and assert the expected result against the actual result by parsing the JSON result.
@Test
public void eventBriteOrganizerTest() {
    EventBriteCrawlerService eventBriteCrawlerService = new EventBriteCrawlerService();
    String eventTestUrl = "https://www.eventbrite.com/e/testevent-tickets-46017636991";
    SusiThought resultPage = eventBriteCrawlerService.crawlEventBrite(eventTestUrl);
    JSONArray jsonArray = resultPage.getData();
    JSONObject organizer_details = jsonArray.getJSONObject(1);
    String organizer_contact_info = organizer_details.getString("organizer_contact_info");
    String organizer_link = organizer_details.getString("organizer_link");
    String organizer_profile_link = organizer_details.getString("organizer_profile_link");
    String organizer_name = organizer_details.getString("organizer_name");

    assertEquals("https://www.eventbrite.com/e/testevent-tickets-46017636991#lightbox_contact", organizer_contact_info);
    assertEquals("https://www.eventbrite.com/e/testevent-tickets-46017636991#listing-organizer", organizer_link);
    assertEquals("", organizer_profile_link);
    assertEquals("aurabh Srivastava", organizer_name);
}

7. If the test file tests the harvester, then import and add the test class to the TestRunner.java file, e.g.:

import org.loklak.harvester.TwitterScraperTest;

@RunWith(Suite.class)
@Suite.SuiteClasses({
    TwitterScraperTest.class
})

Testing…


Deployment terms in Open Event Frontend

In Open Event Frontend, once a pull request is opened, we see some checks running for that pull request, like ‘Codacy’, ‘Codecov’, ‘Travis’, etc. New contributors often get confused about what these checks mean. So this blog is a walkthrough of these terms and what they say about the PR.

Travis: Every time you make a pull request, you will see this check running and, after some time, giving the output of whether the build passed or failed. Travis is the continuous integration service we use to verify that the changes proposed in the pull request do not break anything else. A failed build indicates that your changes break something that was not intended. By looking at the Travis logs, you can see where the changes proposed in the pull request break things; you can then go ahead, correct the code and push again to re-run the Travis build until it passes.

Codacy: Codacy is used to check code style, duplication, complexity, coverage, etc. When you create or update a pull request, this check runs and reports whether the code follows the style guide, whether there is duplication in the code, and so on. For instance, if your code has an HTML page in which a tag has an attribute that is left undefined, Codacy will throw an error and fail the check. You then need to look at the logs and correct the bug in the code.

Codecov: Codecov is a code coverage check which indicates how much of the code changed in the pull request is actually executed. If, out of the 100 lines of code that you wrote, only 80 lines are actually executed and the rest are not, the code coverage decreases. The Codecov report shows which files are affected and by what percentage.

Surge: The surge link is the deployment link for the changes in your pull request. By checking the link manually, we can test the behavior of the app in terms of UI/UX and the other features that the pull request adds.

References:

Travis CI: https://travis-ci.org/
Codacy: https://www.codacy.com/
Codecov: https://codecov.io/


UI automated testing using Selenium in Badgeyay

With all the major functionalities packed into the badgeyay web application, it was time to add some automated testing to automate the review process in case of known errors and to check that code contributed by contributors does not break anything. We decided to go with Selenium for our testing requirements.

What is Selenium?

Selenium is a portable software-testing framework for web applications. Selenium provides a playback (formerly also recording) tool for authoring tests without the need to learn a test scripting language. In other words, Selenium does browser automation: Selenium tells a browser to click some element, populate and submit a form, navigate to a page, and perform any other form of user interaction. Selenium supports multiple languages, including C#, Groovy, Java, Perl, PHP, Python, Ruby and Scala. Here, we are going to use Python (specifically Python 2.7).

First things first. To install the packages, run the following on the CLI:

pip install selenium==2.40
pip install nose

Don’t forget to add them to the requirements.txt file.

Web browser: We also need to have Firefox installed on the machine.

Writing the Test

An automated test automates what you’d do via manual testing - but it is done by the computer. This frees up time and allows you to do other things, as well as repeat your testing. The test code runs a series of instructions to interact with a web browser, mimicking how an actual end user would interact with the application. The script navigates the browser, clicks a button, enters some text input, clicks a radio button, selects a drop-down, drags and drops, etc. In short, the code tests the functionality of the web application.

A test for the web page title:

import unittest
from selenium import webdriver

class SampleTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()
        cls.driver.get('http://badgeyay-dev.herokuapp.com/')

    def test_title(self):
        self.assertEqual(self.driver.title, 'Badgeyay')

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

Run the test using nose:

nosetests test.py

Clicking an element

For our next test, we click the menu button and check whether the menu becomes visible:

elem = self.driver.find_element_by_css_selector(".custom-menu-content")
self.driver.find_element_by_css_selector(".glyphicon-th").click()
self.assertTrue(elem.is_displayed())

Uploading a CSV file

For our next test, we upload a CSV file and check that a success message pops up:

def test_upload(self):
    # note: os and time must be imported at the top of the test module
    Imagepath = os.path.abspath(os.path.join(os.getcwd(), 'badges/badge_1.png'))
    CSVpath = os.path.abspath(os.path.join(os.getcwd(), 'sample/vip.png.csv'))
    self.driver.find_element_by_name("file").send_keys(CSVpath)
    self.driver.find_element_by_name("image").send_keys(Imagepath)
    self.driver.find_element_by_css_selector("form .btn-primary").click()
    time.sleep(3)
    success = self.driver.find_element_by_css_selector(".flash-success")
    self.assertIn(u'Your badges has been successfully generated!', success.text)

The entire code can be found at: https://github.com/fossasia/badgeyay/tree/development/app/tests

We can also use PhantomJS along with Selenium for UI testing without opening a web browser window. We use this in badgeyay to run the tests for every commit in Travis CI, which cannot open a program window.
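A minimal sketch of the title test running headlessly with PhantomJS could look as follows. This assumes the phantomjs binary is installed and on the PATH; newer Selenium releases have since deprecated the PhantomJS driver, but it works with the selenium==2.40 pin used here:

# headless variant of the title test, using PhantomJS instead of Firefox
from selenium import webdriver

driver = webdriver.PhantomJS()
driver.get('http://badgeyay-dev.herokuapp.com/')
assert driver.title == 'Badgeyay'
driver.quit()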
Resources

Selenium with Python by Baiju Muthukadan: http://selenium-python.readthedocs.io
Getting started with UI automated tests using Selenium + Python by Daniel Anggrianto: https://engineering.aweber.com/getting-started-with-ui-automated-tests-using-selenium-python/
Selenium Webdriver Python Tutorial For Web Automation by Meenakshi Agarwal: http://www.techbeamers.com/selenium-webdriver-python-tutorial/
How to Use Selenium with Python by Guru99: https://www.guru99.com/selenium-python.html


Improving Loklak apps site

In this blog I will describe some of the recent improvements made to the Loklak apps site:

- A new utility script has been added to automatically update the Loklak app wall after a new app has been made.
- An invalid app query in the app details page is now handled gracefully: a proper message is shown when a user enters an invalid app name in the URL of the details page.
- Tests have been added for the details page.

Developing the updatewall script

This is a small utility script to update the Loklak app wall in order to expose a newly created app or update an existing app. Before moving into the working of this script, let us discuss how the Loklak apps site tracks all the apps and their details. In the root of the project there is a file named apps.json. This file contains an aggregation of all the app.json files present in the individual apps. When the site is loaded, index.html loads the JavaScript code present in app_list.js. This app_list.js file makes an ajax call to the root apps.json file, loads all the app details into a list, and attaches this list to an AngularJS scope variable. After this, the app wall consisting of the various app details is rendered as HTML.

So whenever a new app is created, in order to expose the app on the wall, the developer needs to copy the contents of the application’s app.json and paste them into the root apps.json file. This is quite tedious on the developer’s part: to publish a new app, they first have to know how the site works, which is not directly related to their development work. And whenever they update their app’s app.json, they need to update the apps.json file again with the new data. The newly added updatewall script automates this entire process. After creating a new app, all that the developer needs to do is run this script from within the app directory, and the app wall is updated automatically.

Now, let us move into the working of this script. The basic workflow of the updatewall script is as follows: the script loads the JSON data present in the app.json file of the app under consideration, and then loads the JSON data present in the root apps.json file.

if __name__ == '__main__':
    # open file containing json object
    json_list_file = open(PATH_TO_ROOT_JSON, 'r')

    # load json object
    json_list = json.load(json_list_file, object_pairs_hook=OrderedDict)
    json_list_file.close()

    app_json_file = open(PATH_TO_APP_JSON, 'r')
    app_json = json.load(app_json_file, object_pairs_hook=OrderedDict)
    app_json_file.close()

    # method to update Loklak app wall
    expose_app(json_list, app_json)

When we load the JSON data, we use object_pairs_hook in order to load the data into an OrderedDict rather than a normal Python dictionary. We do this so that the order of the dictionary items is maintained. Once the data is loaded, we invoke the expose_app method.

def expose_app(json_list, app_json):
    # if app is already present in list then fetch that app
    app = getAppIfPesent(json_list, app_json)

    # if app is not present then add…
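The excerpt ends inside expose_app, and the getAppIfPesent helper it calls is not shown. Purely as an illustration - the list structure and the 'name' key are assumptions, not the project's actual code - such a helper might look like:

def getAppIfPesent(json_list, app_json):
    # hypothetical sketch: return the already-listed entry whose
    # 'name' matches the new app's 'name', or None if it is absent
    for app in json_list:
        if app.get('name') == app_json.get('name'):
            return app
    return None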


Using Protractor for UI Tests in Angular JS for Loklak Apps Site

The Loklak apps site’s home page and app details page have sections where data is dynamically loaded from external JavaScript and JSON files. Data is fetched from the JSON files using AngularJS, processed, and then rendered to the corresponding views by controllers. Any erroneous modification to the controller functions might cause discrepancies in the frontend. Since Loklak apps is a frontend project, any bug in the home page or details page will lead to poor UI/UX. How do we deal with this? One way is to write unit tests for the various controller functions and check their behaviour. But most of the controller functions render something on the view, so another thing we can do is simulate the various browser actions and test the site against known, accepted behaviours with Protractor.

What is Protractor?

Protractor is an end-to-end test framework for Angular and AngularJS apps. It runs tests against our app in a real browser, as if a real user were interacting with it, using browser-specific drivers to interact with our web application as any user would.

Using Protractor to write tests for the Loklak apps site

First we need to install Protractor and its dependencies. Let us begin by creating an empty JSON file in the project directory using the following command:

echo {} > package.json

Next we have to install Protractor:

npm install protractor --save

The above command installs protractor and webdriver-manager. After this we need to get the necessary binaries to set up our Selenium server. This can be done using the following:

./node_modules/protractor/bin/webdriver-manager update
./node_modules/protractor/bin/webdriver-manager start

Let us tidy things up a bit. We will include these commands in the package.json file under the scripts section so that we can shorten our commands. Given below is the current state of package.json:

{
  "scripts": {
    "start": "./node_modules/http-server/bin/http-server",
    "update-driver": "./node_modules/protractor/bin/webdriver-manager update",
    "start-driver": "./node_modules/protractor/bin/webdriver-manager start",
    "test": "./node_modules/protractor/bin/protractor conf.js"
  },
  "dependencies": {
    "http-server": "^0.10.0",
    "protractor": "^5.1.2"
  }
}

The package.json file currently holds our dependencies and scripts: commands for starting the development server, updating and starting the webdriver (mentioned just before this), and running the tests.

Next we need to include a configuration file for Protractor. The configuration file should contain the test framework to be used, the address at which Selenium is running, and the path to the spec files.

// conf.js
exports.config = {
  framework: "jasmine",
  seleniumAddress: "http://localhost:4444/wd/hub",
  specs: ["tests/home-spec.js"]
};

We have set the framework to jasmine and the Selenium address to http://localhost:4444/wd/hub. Next we need to define our actual spec file. But before writing tests, we need to find out what we actually need to test: we will mostly be testing the dynamic content loaded by the JavaScript files. Let us define a spec - a spec is a collection of tests. We will start by testing the category name; initially, when the page loads, it should be equal to All apps. Next we test the top right-hand-side menu, which is loaded by JavaScript using the topmenu.json file.

it("should have a category name", function()…
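The excerpt breaks off inside the first test; a minimal sketch of how such a spec file could look follows (the CSS selector and local URL are illustrative assumptions, not the project's actual code):

// tests/home-spec.js - an illustrative sketch
describe("loklak apps site home page", function() {

  beforeEach(function() {
    // served locally, e.g. by the http-server script from package.json
    browser.get("http://localhost:8080/");
  });

  it("should have a category name", function() {
    // hypothetical selector for the category heading
    var categoryName = element(by.css(".category-name"));
    expect(categoryName.getText()).toEqual("All apps");
  });
});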
