Adding unit tests for effects in Loklak Search

Loklak Search uses @ngrx/effects to listen to actions dispatched by the user and to send API requests to the Loklak server. Loklak Search currently has seven effects, such as Search Effects and Suggest Effects, which run to make the application reactive. It is important to test these effects to ensure that they make API calls at the right time and map the response before sending it back to the reducer.

Here I will explain how I added unit tests for the effects. The test coverage increased from 43% to 55% after adding these tests.

Effects to test

We are going to test the effect for user search. This effect listens for the event of type USER_SEARCH and makes a call to the user-search service with the query as a parameter. After a response is received, it maps the response and passes it on to the UserSearchCompleteSuccessAction, which performs the later operations. If the service fails to get a response, it dispatches the UserSearchCompleteFailAction.

Code

ApiUserSearchEffects is the effect which detects when the USER_SEARCH action is dispatched from some component of the application; it then calls the UserSearchService and handles the JSON response received from the server. The effect dispatches a new UserSearchCompleteSuccessAction if a response is received from the server, or a new UserSearchCompleteFailAction if no response is received. The debounce time is set to 400 so that the response can be flushed if a new USER_SEARCH is dispatched within the next 400ms.

For this effect, we need to test whether the effect actually runs when the USER_SEARCH action is dispatched. Further, we need to test whether the correct parameters are supplied to the service and whether the response is handled properly. We also need to check that the response is really flushed out within the debounce time limit.

@Injectable()
export class ApiUserSearchEffects {

  @Effect()
  search$: Observable<Action>
    = this.actions$
      .ofType(userApiAction.ActionTypes.USER_SEARCH)
      .debounceTime(400)
      .map((action: userApiAction.UserSearchAction) => action.payload)
      .switchMap(query => {
        const nextSearch$ = this.actions$.ofType(userApiAction.ActionTypes.USER_SEARCH).skip(1);
        const follow_count = 10;

        return this.apiUserService.fetchQuery(query.screen_name, follow_count)
          .takeUntil(nextSearch$)
          .map(response => new userApiAction.UserSearchCompleteSuccessAction(response))
          .catch(() => of(new userApiAction.UserSearchCompleteFailAction()));
      });

  constructor(
    private actions$: Actions,
    private apiUserService: UserService
  ) { }

}

Unit test for effects

  • Configure the TestBed class before starting the unit tests and add all the necessary imports (the most important being the EffectsTestingModule) and providers. This step helps to isolate the effects completely from all other components so that they can be tested independently. We also need to create a spy object which spies on the method userService.fetchQuery, with the provider being UserService.

beforeEach(() => TestBed.configureTestingModule({
  imports: [
    EffectsTestingModule,
    RouterTestingModule
  ],
  providers: [
    ApiUserSearchEffects,
    {
      provide: UserService,
      useValue: jasmine.createSpyObj('userService', ['fetchQuery'])
    }
  ]
}));
  • Now we need a setup function which takes params, the data to be returned by the mock UserService. With it we can configure the response to be returned by the service. Moreover, this function initializes the EffectsRunner and returns ApiUserSearchEffects so that it can be used for unit testing.

function setup(params?: {userApiReturnValue: any}) {
  const userService = TestBed.get(UserService);

  if (params) {
    userService.fetchQuery.and.returnValue(params.userApiReturnValue);
  }

  return {
    runner: TestBed.get(EffectsRunner),
    apiUserSearchEffects: TestBed.get(ApiUserSearchEffects)
  };
}

 

  • Now we will add unit tests for the effects. In these tests, we check whether the effect recognises the action and returns a new action based on the response we want, and whether it maps the response only after a certain debounce time. We have used fakeAsync(), which gives us access to the tick() function. Next, we call the setup function and pass in the mock response so that whenever the UserService is called it returns the mock response and never actually runs the service. We then queue the UserSearchAction in the runner and subscribe to the value returned by the effects class. We can now test the returned value using expect() and verify, using tick(), that the value is returned only after the debounce time.

it('should return a new userApiAction.UserSearchCompleteSuccessAction, ' +
  'with the response, on success, after the de-bounce', fakeAsync(() => {
  const response = MockUserResponse;
  const {runner, apiUserSearchEffects} = setup({userApiReturnValue: Observable.of(response)});

  const expectedResult = new userApiAction.UserSearchCompleteSuccessAction(response);

  runner.queue(new userApiAction.UserSearchAction(MockUserQuery));

  let result = null;
  apiUserSearchEffects.search$.subscribe(_result => result = _result);
  tick(399); // test debounce
  expect(result).toBe(null);
  tick(401);
  expect(result).toEqual(expectedResult);
}));

it('should return a new userApiAction.UserSearchCompleteFailAction, ' +
  'if the SearchService throws', fakeAsync(() => {
const { runner, apiUserSearchEffects } = setup({ userApiReturnValue: Observable.throw(new Error()) });

const expectedResult = new userApiAction.UserSearchCompleteFailAction();

runner.queue(new userApiAction.UserSearchAction(MockUserQuery));

let result = null;
apiUserSearchEffects.search$.subscribe(_result => result = _result );

tick(399); // Test debounce
expect(result).toBe(null);
tick(401);
expect(result).toEqual(expectedResult);
}));
});


Documenting Open Event API Using API-Blueprint

FOSSASIA's Open Event Server API documentation is written using API Blueprint. API Blueprint is a language used to describe an API in a blueprint file (or a set of files). To work with the blueprint, the Apiary editor is used; it renders the API Blueprint and presents the result as user-readable API documentation. We create the API blueprint manually.

Using API Blueprint:
We create the API blueprint by first adding the name and metadata for the API we aim to design. This step looks like this:

FORMAT: V1
HOST: https://api.eventyay.com

# Open Event API Server

The Open Event API Server

# Group Authentication

The API uses JWT Authentication to authenticate users to the server. For authentication, you need to be a registered user. Once you have registered yourself as a user, you can send a request to get the access_token. You then need to use this access_token in the Authorization header while sending a request, in the following manner: `Authorization: JWT <access_token>`


An API blueprint starts with the metadata; here FORMAT and HOST are the defined metadata. The FORMAT keyword specifies the version of API Blueprint. HOST defines the host for the API.

The heading starts with # and the first heading is regarded as the name of the API.

NOTE – All headings start with one or more # symbols. The number of symbols indicates the level of the heading: one # is the top level, ## is the second level, and so on. This is in compliance with normal markdown format.
        Following the heading section comes the description of the API. Further headings are used to break up the description section.

Resource Groups:
—————————–
    By using the Group keyword at the start of a heading, we create a group of related resources. In the snippet below we have created a Group Users.

# Group Users

For using the API you (mostly) need to register as a user. Registering gives you access to all non-admin API endpoints. After registration, you need to create your JWT access token to send requests to the API endpoints.


| Parameter | Description | Type | Required |
|:----------|-------------|------|----------|
| `name`  | Name of the user | string | - |
| `password` | Password of the user | string | **yes** |
| `email` | Email of the user | string | **yes** |

 

Resources:
——————
    In the Group Users we have created a resource, Users Collection. The URI used to access the resource is specified inside the square brackets after the heading. We have used parameters in the resource URI here, which brings us to how to add parameters to the URI. The code below shows how to add parameters to the resource URI.

## Users Collection [/v1/users{?page%5bsize%5d,page%5bnumber%5d,sort,filter}]
+ Parameters
    + page%5bsize%5d (optional, integer, `10`) - Maximum number of resources in a single paginated response.
    + page%5bnumber%5d (optional, integer, `2`) - Page number to be fetched for the paginated response.
    + sort (optional, string, `email`) - Sort the resources according to the given attribute in ascending order. Append '-' to sort in descending order.
    + filter (optional, string, ``) - Filter according to the flask-rest-jsonapi filtering system. Please refer: http://flask-rest-jsonapi.readthedocs.io/en/latest/filtering.html for more.

 

Actions:
————–
    An action is specified with a sub-heading inside a resource as the name of the action, followed by the HTTP method inside square brackets.
    Before we go further, let us discuss what a payload is. A payload is an HTTP transaction message including its discussion and any additional assets, such as an entity-body validation schema.

There are two payloads inside an Action:

  1. Request: It is a payload containing one specific HTTP Request, with Headers and an optional body.
  2. Response: It is a payload containing one HTTP Response.

A payload may have an identifier: a string for a request payload or an HTTP status code for a response payload.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)


Types of HTTP methods for Actions:

  • GET – In this action, we simply send header data like Accept and Authorization and no body. Along with it we can send some GET parameters like page[size]. There are two cases for GET: List and Detail. For example, if we consider users, a GET for List retrieves information about all users in the response, while Detail retrieves information about a particular user.

The API Blueprint implementations of both the GET list and detail requests and responses are as follows.

### List All Users [GET]
Get a list of Users.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
            "meta": {
                "count": 2
            },
            "data": [
                {
                    "attributes": {
                        "is-admin": true,
                        "last-name": null,
                        "instagram-url": null,

 

### Get Details [GET]
Get a single user.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
            "data": {
                "attributes": {
                    "is-admin": false,
                    "last-name": "Doe",
                    "instagram-url": "http://instagram.com/instagram",

 

  • POST – In this action, apart from the header information, we also need to send data. The data must conform to the jsonapi specification. The POST body data for a users API would look something like this:
### Create User [POST]
Create a new user using an email, password and an optional name.

+ Request (application/vnd.api+json)

    + Headers

            Authorization: JWT <Auth Key>

    + Body

            {
              "data":
              {
                "attributes":
                {
                  "email": "example@example.com",
                  "password": "password",


A POST request with this data would create a new entry in the table and then return, in jsonapi format, the entry that was made along with the id assigned to it, along the lines of the sketch below.
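A minimal sketch of such a response (the id and the exact set of attributes echoed back are illustrative, not actual server output):

{
  "data": {
    "type": "user",
    "id": "3",
    "attributes": {
      "email": "example@example.com"
    }
  },
  "jsonapi": {
    "version": "1.0"
  }
}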

  • PATCH – In this action, we change or update an already existing entry in the database. It has header data like all other requests and body data which is almost the same as for POST, except that it also needs to mention the id of the entry that we are trying to modify.
### Update User [PATCH]
+ `id` (integer) - ID of the record to update **(required)**

Update a single user by setting the email, password and/or name.

Authorized user should be same as user in request body or must be admin.

+ Request (application/vnd.api+json)

    + Headers

            Authorization: JWT <Auth Key>

    + Body

            {
              "data": {
                "attributes": {
                  "password": "password1",
                  "avatar_url": "http://example1.com/example1.png",
                  "first-name": "Jane",
                  "last-name": "Dough",
                  "details": "example1",
                  "contact": "example1",
                  "facebook-url": "http://facebook.com/facebook1",
                  "twitter-url": "http://twitter.com/twitter1",
                  "instagram-url": "http://instagram.com/instagram1",
                  "google-plus-url": "http://plus.google.com/plus.google1",
                  "thumbnail-image-url": "http://example1.com/example1.png",
                  "small-image-url": "http://example1.com/example1.png",
                  "icon-image-url": "http://example1.com/example1.png"
                },
                "type": "user",
                "id": "2"
              }
            }

Just like with POST, after we have updated our entry, we get back the newly updated entry in the database as the response.

  • DELETE – In this action, we delete an entry from the database. The entry in our case is soft-deleted by default, which means that instead of deleting it from the database, we set the deleted_at column to the time of deletion. For deleting, we just need to send the header data in a DELETE request to the proper endpoint. If the entry is deleted successfully, we get the response "Object successfully deleted".
### Delete User [DELETE]
Delete a single user.

+ Request

    + Headers

            Accept: application/vnd.api+json

            Authorization: JWT <Auth Key>

+ Response 200 (application/vnd.api+json)

        {
          "meta": {
            "message": "Object successfully deleted"
          },
          "jsonapi": {
            "version": "1.0"
          }
        }


How do we check the result after manually entering all this? We can use the Apiary website to render it, or simply use a different renderer such as aglio. Check out my next blog on Apiary and aglio for details.
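For a quick local preview with aglio, the commands look roughly like this (assuming the blueprint is saved as api_blueprint.apib):

npm install -g aglio
aglio -i api_blueprint.apib -o output.html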

Learn more about api blueprint here: https://apiblueprint.org/


Testing Asynchronous Code in Open Event Orga App using RxJava

In the last blog post, we saw how to test complex interactions through our apps using stubbed behaviors with Mockito. In this post, I'll be talking about how to test RxJava components such as Observables. This one will focus on testing complex situations using RxJava, as the library itself provides methods to unit test your reactive streams, so that you don't have to go out of your way to set up contraptions like callback captors or implement your own interfaces as stubs of the original ones. The test facilities provided by RxJava also allow you to test the fate of your stream, like confirming that it got subscribed or that an error was thrown, or to test an individual emitted item, like its value or with a predicate logic of your own.

We have used this heavily in the Open Event Orga App (Github Repo) to detect whether the app correctly loads and refreshes resources from the correct source. We also capture certain triggers, like the saving of data on reloading, so that the database remains in a consistent state. Here, we'll look at some basic examples and move to some complex ones later. So, let's start.

public class AttendeeRepositoryTest {

    private AttendeeRepository attendeeRepository;

    @Before
    public void setUp() {
        attendeeRepository = new AttendeeRepository();
    }

    @Test
    public void shouldReturnAttendeeByName() {
        // TODO: Implement test
    }

}

 

This is our basic test class setup with general JUnit settings. We'll start by writing our tests, the first of which will be to test that we can get an attendee by name. The attendee class is a model class with firstName and lastName, and we will be checking that we get a valid attendee by passing a full name. Note that although we will be talking about the implementation of the code which we are writing tests for, we will do so only in an abstract manner, meaning we won't be dealing with production code, just the tests.

As we know, Observables provide a stream of data. But here, we are only concerned with one attendee. Technically, we should be using Single, but for generality, we'll stick with Observables.
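For comparison, the two possible repository signatures might look like this (the Single variant and its name are hypothetical; the post sticks with the Observable form):

Observable<Attendee> getByAttendeeName(String name);   // stream form used in this post
Single<Attendee> getAttendeeByName(String name);       // single-value form: exactly one result or an error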

So, a person coming from a JUnit background would be tempted to write the code below.

Attendee attendee = attendeeRepository.getByAttendeeName("John Wick")
    .blockingFirst();

assertEquals("John Wick", attendee.getFirstName() + " " + attendee.getLastName());

 

So, what this code is doing is blocking the thread till the first attendee is provided in the stream and then checking that the attendee is actually John Wick.

While this code works, it is not reactive. With the reactive way of testing, not only can you test more complex logic than this with less verbosity, but it also naturally provides ways to test other behaviors of reactive streams such as subscriptions, errors, completions, etc. We'll only be covering a few. So, let's see the reactive version of the above test.

attendeeRepository.getByAttendeeName("John Wick")
    .firstElement()
    .test()
    .assertNoErrors()
    .assertValue(attendee -> "John Wick".equals(
        attendee.getFirstName() + " " + attendee.getLastName()
    ));

 

So clean and complete. Just by calling test() on the returned observable, we got this whole suite of testing methods with which, not only did we test the name but also that there are no errors while getting the attendee.

Testing for Network Error on loading of Attendees

OK, so let’s move towards a more realistic test. Suppose that you call this method on AttendeeRepository, and that you can fetch attendees from the network. So first, you want to handle the simplest case, that there should be an error if there is no connection. So, if you have (hopefully) set up your project using abstractions for the model using MVP, then it’ll be a piece of cake to test this. Let’s suppose we have a networkUtil object with a method isConnected.

The NetworkUtil class is a dependency of AttendeeRepository and we have set it up as a mock in our test using Mockito. If this is sounding somewhat unfamiliar, please read my previous article “The Joy of Testing with MVP”.

So, our test will look like this

@Test
public void shouldStreamErrorOnNetworkDown() {
    when(networkUtils.isConnected()).thenReturn(false);
    
    attendeeRepository.getAttendees()
        .test()
        .assertErrorMessage("No Network");
}

 

Note that, if you don’t define the mock object’s behavior like I have here, attendeeRepository will likely throw an NPE as it will be calling isConnected() on an undefined object.

With RxJava, you get a whole lot of methods for each use case. Even for checking errors, you can assert a particular Throwable, or a predicate defining an operation on the Throwable, or the error message, as I have shown in this case.
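For instance, under the same mocked conditions, the error could also be asserted by type or with a predicate on the Throwable (a sketch using the same stream as above):

// Assert the error by type
attendeeRepository.getAttendees()
    .test()
    .assertError(Throwable.class);

// Or assert it with a predicate on the Throwable
attendeeRepository.getAttendees()
    .test()
    .assertError(error -> "No Network".equals(error.getMessage()));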

Now, if you run this code, it'll probably fail. That's because if you are offloading the networking task to a different thread by using the subscribeOn and observeOn methods, the test body may be detached from the main thread while the requests complete. Furthermore, if you are testing an application made for Android, you would have used AndroidSchedulers.mainThread(), but as it is an Android dependency, the test will fail. Well actually, crash. There were some workarounds that created abstractions even for the RxJava schedulers, but RxJava2 provides a very convenient way to override the default schedulers in the form of RxJavaPlugins. Similarly, RxAndroidPlugins is present in the rx-android package. Let's suppose you plan to use Schedulers.io() for asynchronous work and want to receive the stream on Android's main thread, meaning you use AndroidSchedulers.mainThread() in the observeOn method. To override these schedulers with Schedulers.trampoline(), which queues your tasks and performs them one by one like the main thread, your setUp will include this:

RxJavaPlugins.setIoSchedulerHandler(scheduler ->  Schedulers.trampoline());
RxAndroidPlugins.setInitMainThreadSchedulerHandler(scheduler -> Schedulers.trampoline());

 

And if you are not using isolated tests and need to resume the default scheduler behavior after each test, then you’ll need to add this in your tearDown method

RxJavaPlugins.reset();
RxAndroidPlugins.reset();

Testing for Correct loading of Attendees

Now that we have tested that our Repository is correctly throwing an error when the network is down, let’s test that it correctly loads attendees when the network is connected. For this, we’ll need to mock our EventService to return attendees when queried, since we don’t want our unit tests to actually hit the servers.

So, we’ll need to keep these things in mind:

  • Mock the network until it shows that it is connected to the Internet
  • Mock the EventService to return attendees when queried
  • Call the getter on the attendeeRepository and test that it indeed returned a list of attendees

For these conditions, our test will look like this:

@Test
public void shouldLoadAttendeesSuccessfully() {
    List<Attendee> attendees = Arrays.asList(
        new Attendee(),
        new Attendee(),
        new Attendee()
    );

    when(networkUtils.isConnected()).thenReturn(true);
    when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

    attendeeRepository.getAttendees()
        .test()
        .assertValues(attendees.toArray(new Attendee[attendees.size()]));
}

 

The assertValues function asserts that these values were emitted by the observable. And if you want to be terser, you can even verify that in fact EventService’s getAttendees function was called by

verify(eventService).getAttendees();

 

But the problem with this approach is that the getAttendees function returns an observable, and just calling it does not necessarily mean that it was subscribed to and emitting results; hence we need to ensure that it was indeed subscribed. If we call the normal test() function on the observable, it is already subscribed, making a subscription assertion on it always true. In order to test that correctly, let's look at our final use case.

Testing for saving of Attendees

In the Open Event Orga App, we have strived to create self-sufficient and intelligent classes, thus, our repository is also built this way. It detects that new attendees are loaded from the server and saves them in the database. Now we’d want to test this functionality.

In this test, there is an added dependency of DatabaseRepository for saving the attendees, which we will mock. The conditions for this test will be:

  • Network is connected
  • EventService returns attendees
  • DatabaseRepository mocks the saving of attendees

For DatabaseRepository’s save method, we’ll be returning a Completable, which will notify when the saving of data is completed. The primary purpose of this test will be to assert that this completable is indeed subscribed when the attendee loading is triggered. This will not only ensure that the correct function to save the attendees is called, but also that it is indeed triggered and not just left hanging after the call. So, our test will look like this.

@Test
public void shouldSaveAttendeesInDatabase() {
    List<Attendee> attendees = Arrays.asList(
        new Attendee(),
        new Attendee(),
        new Attendee()
    );

    TestObserver testObserver = TestObserver.create();
    Completable completable = Completable.complete()
        .doOnSubscribe(testObserver::onSubscribe);

    when(networkUtils.isConnected()).thenReturn(true);
    when(databaseRepository.save(attendees)).thenReturn(completable);
    when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

    attendeeRepository.getAttendees()
        .test()
        .assertNoErrors();

    testObserver.assertSubscribed();
}

 

Here, we have created a separate test observer and set it to be subscribed when the Completable is subscribed, and we have returned that Completable when the save method is called. At the end, we have asserted that the test observer is indeed subscribed.

You can create more complex use cases and assert subscriptions, errors, the emptiness of a stream and much more by using the built-in test functionality of RxJava2. So, that's all for this blog; you can visit these links for more details on unit testing RxJava:

http://fedepaol.github.io/blog/2015/09/13/testing-rxjava-observables-subscriptions/

https://www.infoq.com/articles/Testing-RxJava


The Joy of Testing with MVP in Open Event Orga App

Testing applications is hard, and testing Android applications is harder. The natural way an Android developer codes is completely untestable. We are born and molded into creating God classes – Activities – and performing every bit of logic inside them. Some thought that the introduction of Fragments would bring a little modularity, but we proved otherwise by shifting to God Fragments. When the natural urge of an Android developer to

  • apply logic,
  • load UI,
  • handle concurrency (hopefully, not using AsyncTask),
  • load data – from the network; disk; and cache,
  • and manage the state of the Activity/Fragment

finally meets with a new form of revelation that he/she should test his/her application, all of the concepts acquired are shattered in a moment. The person goes to the Android docs to see why he started coding that way and realizes that even the system is flawed – the Android documentation is full of examples promoting God Activities, which they have used to show the use of an API for only one reason – brevity. It's not uncommon for us to steal some code from StackOverflow or the Android docs, but we don't realize that it is presented there without an application environment or structure, with all the required components just glued together to get it functionally complete, so that the reader does not have to configure anything anymore. Why would an answer from StackOverflow load a barcode scanner using a builder? Or build a ContentProvider to show how to query sorted data from SQLite? Or use a singleton for your provider classes? The simple answer is, they won't. But this doesn't mean you shouldn't either.

The first thought that enters a developer's mind when he gets his hands on a piece of code is to paste it in the correct position and get it to work. There is always this moment of hesitation where conscience rings a bell saying, "What are you doing? Is it the right way to do it? How many lines till you stop overloading this Activity? It's 2000 lines already". But it dies as soon as we see the feature working. And why test something which we have confirmed to work, right? I just wrote this code to show a progress bar while the data loads and hide it when it is done. I can see it working; what will I achieve by painfully writing a test for it, mocking the loading conditions and all? Wrong! Unit tests which test your trivial utils like date modification, string parsing, etc. are good and needed, but those utils are so trivial that they are hard to get wrong, and if they do go wrong, they are easy to fix as they have a single usage throughout the app, making bugs easy to spot and fix.

The real problem is testing your app under dynamic conditions, where you have to emulate those conditions so you can see if your app is making the right decisions. The progress bar example may work for now, but what if it breaks during refactoring, or you write the same code elsewhere and forget to hide the progress bar? Simply copying a well-written test and changing 2-3 class names will fail the build and tell you what's wrong. A well-contracted app can even contain tests to check that there will be no memory leaks. But none of this is possible if everything is jumbled into a single Activity with callbacks, loaders, UI handling, business logic, etc. You can't even think where to begin. In this post, I will briefly discuss how to design and test applications using the MVP pattern.

MVP to the Rescue

Before moving ahead, I must put up a disclaimer saying that MVP is a design pattern with very opinionated implementations. The extent to which you want to refactor your app and the number of abstractions you are willing to introduce are in your hands. There aren't any golden rules that decide when your implementation stops counting as MVP. Remember, the client doesn't care about architecture; it's for you, so whatever makes your life and testing easier works. Also, I'll use an axiom related to testing here: "Test until fear turns into boredom". You can apply the same to your MVP implementation. Testing and design patterns are here to eradicate your fear of failure, not to bore you.

So, in this guide, I'll be using the use case of a QR Code Scanner Activity built using the MVP pattern, which we have employed in the Open Event Orga Application (Github Repo). Because we are focusing on the test pattern and how to make it easy to test our app logic, I have omitted the model part from the MVP equation. The reason is that the possible model in this application would have been the camera loader or barcode initializer, and the caveat is that both of these modules rely heavily on Android-specific view classes, namely SurfaceView, and on lifecycle methods. You could always work your way around it to include them in a separate model, but it won't help us in writing unit tests for them because, firstly, they aren't our logic to test, and secondly, they can't be tested in a unit test (those models would probably just have implemented certain setters and getters).

So, the main purpose of our QR Code Scanner class is to scan a QR code and match it with a list of identifiers, and if there is a match, return it successfully to the caller. In this specific example, the identifiers will be the ticket IDs of event attendees and the caller will be an event organizer scanning QR codes to check attendees in. The use case sounds simple, but it has several mini use cases and dependencies of its own, which we have to take into account while designing the View and Presenter classes. Let's discuss them one by one:

App State – Start: Activity starts, loads camera

  • Permission Granted: detects that the app already has Camera Permission,
    • starts scanning
  • Permission Absent: detects that the app doesn’t have Camera Permission,
    • Denied: asks for it, denied, shows error
    • Accepted: asks for it, granted, starts scanning

App State – QR Code Detected: Starts parsing them

  • Attendees are not present: attendees aren't present for some reason, either due to an internal error or because they have not loaded yet
    • stops parsing
  • Attendees are present:
    • QR Code does not match with any attendee : Do nothing
    • QR Code matches with one of the attendee : Send attendee to caller

App State: Camera or containing View is getting destroyed

  • Release Camera

There can be many more internal data flows, but this much is sufficient for our example. So, let's start defining our contracts using the above knowledge. First, we will design our View. So what should our strategy be? Always think of presenters and views as mapped in a 1:1 relation. They can have conversations with each other; the difference is that the view is only allowed to talk to the presenter, while the presenter may talk to models too. So, the view is going to get all of its information from the presenter and can only take action when the presenter tells it to, essentially meaning that the view is dumb and passive. The presenter can talk to the view and model(s), meaning the information the presenter collects does not have to come from the view only. In fact, the less dependent the presenter is on the view, the better. The main motive of MVP is to make our logic less dependent on views.

Presenter Contract

In our example, the presenter relies on the view to tell it when certain events happen; these can be lifecycle callbacks, permission grants/denials, camera load/destroy, or anything purely related to the Android implementation. We generally know from the start what kind of information the presenter needs from a view, so let's design our presenter first.

public interface IScanQRPresenter {

    void attach(long eventId, IScanQRView scanQRView);

    void start();

    void detach();

    void cameraPermissionGranted(boolean granted);

    void onBarcodeDetected(Barcode barcode);

    void onScanStarted();

    void onCameraLoaded();

    void onCameraDestroyed();

}

 

The methods are self-explanatory. Don't worry if you didn't get why we defined certain hooks like onScanStarted() or why we didn't define onScanStopped(). These kinds of details will reveal themselves as you develop your components. You can skip to the next section if you don't want to know why we did it.

Basically, you should define a callback for events which cannot reliably be treated as synchronous. What? Let me explain. Let's say you got your camera permission and requested the view to start scanning, but you have to wait till the camera is loaded, as it is not a synchronous call (and it shouldn't be, or your main thread will block). So, instead of that, our presenter will request the camera to load, go into an idle state, wait for the onCameraLoaded() call, and then request the scan to start; since that is also not synchronous work, it will go into an idle state again, wait for onScanStarted(), and then continue its work. Whenever the camera is destroyed, the onCameraDestroyed() callback will be called and we will stop the scanning, and since there is nothing to be done after that, we won't wait till scanning has stopped, thus dropping the need for an onScanStopped() callback.

View Contract

The view contract will come naturally to you once you have understood the data flow. It will contain the commands the presenter will issue on the view and also the requests for data that the view holds. So, this will be our view:

public interface IScanQRView {

    boolean hasCameraPermission();

    void requestCameraPermission();

    void showPermissionError(String error);

    void onScannedAttendee(Attendee attendee);

    void showBarcodePanel(boolean show);

    void showBarcodeData(@NonNull String data);

    void showProgressBar(boolean show);

    void loadCamera();

    void startScan();

    void stopScan();

}

 

Almost all commands are straightforward but let me quickly explain showBarcodePanel(boolean) and showBarcodeData(String). They are used to display the currently visible barcode data to the user.

So, with our contracts set and the data flow in place, let's start writing tests for the feature. Yes, we'll write tests without the implementation, and then you'll see how easy it will be to write your views and presenters with only one goal in mind: to make the tests pass. If your tests are written correctly and cover everything, you should feel confident that your app will work without even seeing the actual implementation, because that is what tests are for. And by making passive and dumb views, imagine how light your instrumentation tests will be. Just check that the individual methods in the view implementation are working as expected and you are done! There is no need to test logic or complex interactions, etc., because you have got that covered in the unit tests themselves. This is not only a benefit of data flow tests but also a best practice: you always want to follow DRY, Don't Repeat Yourself, even while testing.

Tests

So, we will start writing our tests now and you’ll realize how easy it is and all the hard work of abstraction and designing will pay off.

Attach Tests

So, firstly, we will test if the presenter calls appropriate methods when it is attached

@Test
public void shouldLoadAttendeesAutomatically() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));

    scanQRPresenter.start();

    verify(eventRepository).getAttendees(eventId, false);
}

@Test
public void shouldLoadCameraAutomatically() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));

    scanQRPresenter.start();

    verify(scanQRView).loadCamera();
}

 

Here, we are using Mockito to mock our EventDataRepository to return locally defined attendees instead of doing an actual network call. We then call start on the presenter in each test, and we verify in the first test that the presenter calls getAttendees on the EventDataRepository, and in the second test that it requests the view to load the camera.

Note that in the implementation, both the loading of the camera and the loading of attendees take place on start, but it is best practice to test them separately so that when a test fails, we know why it did.

Detach Tests

@Test
public void shouldDetachViewOnStop() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));

    scanQRPresenter.start();

    assertNotNull(scanQRPresenter.getView());

    scanQRPresenter.detach();

    assertNull(scanQRPresenter.getView());
}

@Test
public void shouldNotAccessViewAfterDetach() {
    scanQRPresenter.detach();

    scanQRPresenter.start();
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.cameraPermissionGranted(false);
    scanQRPresenter.cameraPermissionGranted(true);
    scanQRPresenter.onScanStarted();
    scanQRPresenter.onBarcodeDetected(barcode2);
    scanQRPresenter.onCameraDestroyed();

    verifyZeroInteractions(scanQRView);
}

 

In the detach tests, we verify that after attaching, when the view is not null, calling detach makes the presenter release the reference to the view, making it null. In the second test, we perform all possible interactions after detaching and confirm that no call whatsoever was made on the view. Tests like these help avoid memory leaks and check for any NullPointerExceptions that may happen after the view has been made null.

Note: This does not mean that memory leaks will not happen if this test passes. You can cause memory leaks by giving the view reference to any long-living object, not just the presenter. This just ensures that the presenter will not hold the view reference after detach and won't reference the null view in the future.

Permission Tests

@Test
public void shouldStartScanOnCameraLoadedIfPermissionPresent() {
    when(scanQRView.hasCameraPermission()).thenReturn(true);

    scanQRPresenter.onCameraLoaded();

    verify(scanQRView).startScan();
}

@Test
public void shouldAskPermissionOnCameraLoadedIfPermissionsAbsent() {
    when(scanQRView.hasCameraPermission()).thenReturn(false);

    scanQRPresenter.onCameraLoaded();

    verify(scanQRView).requestCameraPermission();
}

@Test
public void shouldStartScanningOnPermissionGranted() {
    scanQRPresenter.cameraPermissionGranted(true);

    verify(scanQRView).startScan();
}

@Test
public void shouldShowErrorOnPermissionDenied() {
    scanQRPresenter.cameraPermissionGranted(false);

    verify(scanQRView).showPermissionError(matches("(.*permission.*denied.*)|(.*denied.*permission.*)"));
}

The permission tests are straightforward:

Implicit Permission Handling

  1. If the view already has the camera permission and camera has loaded, start scanning
  2. If the view does not have camera permission and the camera has loaded, request the permission

Request Handling

  1. If the request was granted, start scanning
  2. If the permission was denied, show the permission error. Here, I have used a regex to check that the error message contains the words "permission" and "denied"; you can use anyString() from Mockito for more flexibility or a specific message for tighter testing

Camera Destroy Test

@Test
public void shouldStopScanOnCameraDestroyed() {
    scanQRPresenter.onCameraDestroyed();

    verify(scanQRView).stopScan();
}

Pretty simple: stop the scan on destruction of the camera.

Flow Tests

You can also test that the callback flow happens in order so that not only the unit tests work but also the implementation logic is correct.

/**
 * Checks that the flow of commands happen in order
 */
@Test
public void shouldFollowFlowOnImplicitPermissionGrant() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(true);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).loadCamera();
    scanQRPresenter.onCameraLoaded();
    inOrder.verify(scanQRView).startScan();
}

@Test
public void shouldShowProgressInBetweenImplicitPermissionGrant() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(true);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).showProgressBar(true);
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.onScanStarted();
    inOrder.verify(scanQRView).showProgressBar(false);
}

Here, we are verifying two things. First, if the permission is granted implicitly, we load the camera and start the scan in order. This test isn't that useful, as we already tested that the camera is loaded on start and the scan is started when the camera is loaded; in fact, this is an example of breaking the DRY rule. Even though it doesn't hurt to include it, it also doesn't help, as it does not cover anything that hasn't already been tested.

The second test is important and checks that the progress bar is correctly shown and hidden after certain communications have taken place and the scan has started. Similarly, we can also test the progress bar behavior over all of the possible combinations of cases that can happen. The code snippets below show the tests:

@Test
public void shouldShowProgressInBetweenImplicitPermissionDenyRequestGrant() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(false);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).showProgressBar(true);
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.cameraPermissionGranted(true);
    scanQRPresenter.onScanStarted();
    inOrder.verify(scanQRView).showProgressBar(false);
}

@Test
public void shouldShowProgressInBetweenImplicitPermissionDenyRequestDeny() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(false);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).showProgressBar(true);
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.cameraPermissionGranted(false);
    inOrder.verify(scanQRView).showProgressBar(false);
}

QR Code Detection Tests

@Test
public void shouldNotSendAnyBarcodeIfAttendeesAreNull() {
    sendNullInterleaved();

    verify(scanQRView, never()).onScannedAttendee(any(Attendee.class));
}

@Test
public void shouldNotSendAttendeeOnWrongBarcodeDetection() {
    scanQRPresenter.setAttendees(attendees);
    sendNullInterleaved();

    verify(scanQRView, never()).onScannedAttendee(any(Attendee.class));
}

@Test
public void shouldSendAttendeeOnCorrectBarcodeDetection() {
    // Somehow the setting in setUp is not working, a workaround till fix is found
    RxJavaPlugins.setComputationSchedulerHandler(scheduler -> Schedulers.trampoline());

    scanQRPresenter.setAttendees(attendees);

    barcode1.displayValue = "test4-91";
    scanQRPresenter.onBarcodeDetected(barcode1);

    verify(scanQRView).onScannedAttendee(attendees.get(3));
}

 

Lastly, the core tests of our presenter. To explain the tests, let me show you the sendNullInterleaved() method

private void sendNullInterleaved() {
    sendBarcodeBurst(barcode1);
    sendBarcodeBurst(null);
    sendBarcodeBurst(barcode1);
    sendBarcodeBurst(null);
    sendBarcodeBurst(barcode2);
    sendBarcodeBurst(barcode2);
    sendBarcodeBurst(null);
    sendBarcodeBurst(barcode1);
    sendBarcodeBurst(null);
}

 

So what it basically does is send some barcodes interleaved with null (no barcode detected) to the presenter using the onBarcodeDetected method, emulating the real camera, which sends barcode values and sometimes null whenever no barcode is in view.

The first test simply checks that no attendee is sent if the attendee list is null. Seems pretty obvious. The second one checks that if a barcode does not match any attendee's identifier, no attendee is sent either. It does this by setting an arbitrary list of attendees with different identifiers and sending non-matching barcodes to the presenter. Lastly, there is the successful test, where a single matching barcode is sent to the presenter and it should send the particular attendee with the matching identifier. The helper sketched below shows what sendBarcodeBurst could look like.
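sendBarcodeBurst itself is not shown in the snippets above; a plausible sketch of it (an assumption, since the real helper may differ) simply fires the same detection several times to mimic successive camera frames:

// Hypothetical helper: emulates the camera reporting the same frame several times
private void sendBarcodeBurst(Barcode barcode) {
    for (int i = 0; i < 4; i++)
        scanQRPresenter.onBarcodeDetected(barcode);
}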

Phew! Quite a lot of tests, and we are done. The tests were not very large and were mostly self-explanatory. Now, you just have to implement the view and presenter methods till all the tests light up green and you are done! You have created a feature using the MVP design and implemented it using test-driven development.

So start testing and get a seal of reliability and confidence about your code and the satisfaction of seeing the green bar fill up.


Test SUSI Web App with Facebook Jest

Jest is used by Facebook to test all JavaScript code, especially React code snippets. If you need to set up Jest on your React application, you can follow a few simple steps. But if your React application is made with "create-react-app", you do not need to set up Jest manually, because it comes with Jest. You can run tests using the "react-scripts" node module.

Since SUSI Chat is made with "create-react-app", we do not need to install and run Jest directly. We executed our test cases using "npm test", which runs the "react-scripts test" command. It executes all ".js" files under "__tests__" folders, and all other files with ".spec.js" and ".test.js" suffixes.
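For reference, the test script wired up by create-react-app at the time looked roughly like this in package.json (shown as an assumption about the generated defaults, not copied from the SUSI repository):

"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build",
  "test": "react-scripts test --env=jsdom"
}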

React apps that are made with "create-react-app" come with a sample test case (a smoke test) that checks whether the whole application builds correctly or not. If the app passes the smoke test, Jest goes on to run the further test cases.

import React from 'react';
import ReactDOM from 'react-dom';
import ChatApp from '../../components/ChatApp.react';

it('renders without crashing', () => {
  const div = document.createElement('div');
  ReactDOM.render(<ChatApp />, div);
});

This renders all components which are inside the "<ChatApp />" component and checks whether all these components integrate correctly or not.

If we need to check only one component in an isolated environment, we can use the shallow rendering API. We have used the shallow rendering API to check each and every component in an isolated environment.

We have to install enzyme and the test renderer before using them.

npm install --save-dev enzyme react-test-renderer

import React from 'react';
import MessageSection from '../../components/MessageSection.react';
import { shallow } from 'enzyme';

it('renders MessageSection without crashing', () => {
  shallow(<MessageSection />);
});

This test case tests only the "MessageSection" component, not its child components.

After executing "npm test" you will get the number of passed and failed test cases.

If you need to see the coverage, you can do so without installing additional dependencies.

You just need to run this.

npm test -- --coverage

It will print a coverage summary table, which shows how many lines, functions, statements, and branches your program has and how many of those are covered by the test cases you have.

If we are going to write new test cases for SUSI Chat, we have to make a separate file in the "__tests__" folder and name it after the corresponding file that we are going to test.

it('your test case description', () => {
  // test what you need
});

Normally, test cases look like this. In test cases you can use "test" instead of "it". After the test case description, there is a fat arrow function. Inside this fat arrow function you can add what you need to test.

In the example below, I have compared the returned value of the function with a static value.

function funcName() {
  return 1;
}

it('your test case description', () => {
  expect(funcName()).toBe(1);
});

You have to add the function/variable that needs to be tested into "expect()" and the value you expect from that function or variable into "toBe()". Instead of "toBe()" you can use different matchers according to your need, for example:
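A few other common matchers, shown here with made-up functions and values purely for illustration:

expect(getUserName()).toEqual({ first: 'John', last: 'Doe' }); // deep equality for objects
expect(isLoggedIn()).toBeTruthy();                             // any truthy value
expect(getMessages()).toContain('hello');                      // array or string containment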

If you have a long list of test cases, you can group them (using describe()).

describe('my beverage', () => {
  test('is delicious', () => {
    expect(myBeverage.delicious).toBeTruthy();
  });

  test('is not sour', () => {
    expect(myBeverage.sour).toBeFalsy();
  });
});

This post covers only a few aspects of testing. You can learn more about Jest testing from the official documentation here.


DetachedInstanceError: Dealing with Celery, Flask’s app context and SQLAlchemy in the Open Event Server

In the Open Event Server project, we chose to go with Celery for async background tasks. From the official website:

What is celery?

Celery is an asynchronous task queue/job queue based on distributed message passing.

What are tasks?

The execution units, called tasks, are executed concurrently on a single or more worker servers using multiprocessing.

After the tasks had been set up, an error constantly came up whenever a task was called

The error was:

DetachedInstanceError: Instance <User at 0x7f358a4e9550> is not bound to a Session; attribute refresh operation cannot proceed

The above error usually occurs when you try to access an instance's attributes after the session it was loaded in has been closed. The session may have been closed by an explicit session.close() call or after committing it with session.commit().
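A minimal sketch that reproduces the situation (assuming a mapped User model and a configured Session factory; this is not the actual server code):

session = Session()
user = session.query(User).first()
session.commit()   # with the default expire_on_commit=True, loaded attributes are expired
session.close()    # the instance is now detached from any session
print(user.name)   # the attribute refresh needs a session -> DetachedInstanceError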

The celery tasks in question were performing some database operations, so the first thought was that maybe these operations might be causing the error. To test this theory, the celery task was changed to:

@celery.task(name='lorem.ipsum')
def lorem_ipsum():
    pass

But sadly, the error still remained. This proves that the celery task was just fine and the session was being closed whenever the celery task was called. The method in which the celery task was being called was of the following form:

def restore_session(session_id):
    session = DataGetter.get_session(session_id)
    session.deleted_at = None
    lorem_ipsum.delay()
    save_to_db(session, "Session restored from Trash")
    update_version(session.event_id, False, 'sessions_ver')


In our app, the app_context was not being passed whenever a celery task was initiated. Thus, the celery task, whenever called, closed the previous app_context eventually closing the session along with it. The solution to this error would be to follow the pattern as suggested on http://flask.pocoo.org/docs/0.12/patterns/celery/.

def make_celery(app):
    celery = Celery(app.import_name, broker=app.config['CELERY_BROKER_URL'])
    celery.conf.update(app.config)
    task_base = celery.Task

    class ContextTask(task_base):
        abstract = True

        def __call__(self, *args, **kwargs):
            if current_app.config['TESTING']:
                with app.test_request_context():
                    return task_base.__call__(self, *args, **kwargs)
            with app.app_context():
                return task_base.__call__(self, *args, **kwargs)

    celery.Task = ContextTask
    return celery

celery = make_celery(current_app)


The __call__ method ensures that the celery task is provided with a proper app context to work with.
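For instance, a task defined against this context-aware celery instance can safely touch the database; the task below is a hypothetical illustration reusing the helpers from the earlier snippet, not actual server code:

@celery.task(name='session.restore')
def restore_session_task(session_id):
    # __call__ wraps this body in app.app_context(), so the SQLAlchemy session machinery works here
    session = DataGetter.get_session(session_id)
    session.deleted_at = None
    save_to_db(session, "Session restored from Trash")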

 


Event-driven programming in Flask with Blinker signals

Setting up blinker:

The Open Event Project offers event managers a platform to organize all kinds of events including concerts, conferences, summits and regular meetups. In the server part of the project, the issue at hand was to perform multiple tasks in the background (we use Celery for this) whenever some changes occurred within the event or the speakers/sessions associated with it.

The usual approach to this would be adding a function call after any relevant changes are made. But the statements making these changes were distributed all over the project in multiple places. It would be cumbersome to add 3-4 function calls (which are irrelevant to the function in which they are executed) in so many places. Moreover, the code would get unstructured and it would be really hard to maintain over time.

That's when signals came to our rescue. Since Flask 0.6, there has been integrated support for signalling in Flask; refer to http://flask.pocoo.org/docs/latest/signals/. The Blinker library is used here to implement signals. If you're coming from some other language, signals are analogous to events.

Given below is the code to create named signals in a custom namespace:


from blinker import Namespace

event_signals = Namespace()
speakers_modified = event_signals.signal('event_json_modified')

If you want to emit a signal, you can do so by calling the send() method:


speakers_modified.send(current_app._get_current_object(), event_id=event.id, speaker_id=speaker.id)

From the user guide itself:

“ Try to always pick a good sender. If you have a class that is emitting a signal, pass self as sender. If you are emitting a signal from a random function, you can pass current_app._get_current_object() as sender. “

To subscribe to a signal, Blinker provides neat decorator-based signal subscriptions.


@speakers_modified.connect
def name_of_signal_handler(app, **kwargs):

 

Some Design Decisions:

When a signal is sent, it may carry lots of information which your handler may or may not want, e.g. when you have multiple subscribers listening to the same signal, some of the information sent by the signal may not be of use to your specific function. Thus we decided to enforce the pattern below to ensure flexibility throughout the project.


@speakers_modified.connect
def new_handler(app, **kwargs):
# do whatever you want to do with kwargs['event_id']

In this case, the function new_handler needs to perform some task based solely on the event_id. If the function were of the form def new_handler(app, event_id), an error would be raised, since the signal also sends other keyword arguments. A big plus of this approach is that if you want to send some more info with the signal, for example if you also want to send speaker_name along with the signal, this pattern ensures that no error is raised by any of the subscribers defined before this change was made.
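A short sketch of that scenario (the handler bodies and helper names like handle_event_change are hypothetical):

@speakers_modified.connect
def old_handler(app, **kwargs):
    # written before speaker_name existed; keeps working and simply ignores it
    handle_event_change(kwargs['event_id'])

@speakers_modified.connect
def newer_handler(app, **kwargs):
    # can opt in to the new information
    handle_speaker_change(kwargs['event_id'], kwargs.get('speaker_name'))

# The sender later starts including an extra keyword argument
speakers_modified.send(current_app._get_current_object(),
                       event_id=event.id,
                       speaker_id=speaker.id,
                       speaker_name=speaker.name)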

When to use signals and when not ?

The call to send a signal will, of course, lie inside another function. The signal and that function should be independent of each other. If the task done by any of the signal subscribers even remotely affects your current function, a signal shouldn't be used; use a function call instead.

How to turn off signals while testing?

When in testing mode, signals may slow down your testing, as unnecessary signal subscribers which are completely independent of the function being tested will be executed numerous times. To turn off the execution of signal subscribers, you have to make a small change to the send function of the Blinker library.

Below is what we have done. The approach to turn it off may differ from project to project as the method of testing differs. Refer https://github.com/jek/blinker/blob/master/blinker/base.py#L241 for the original function.


# Signal comes from blinker; `app` below is the project's Flask application object
from blinker import Namespace, Signal

def new_send(self, *sender, **kwargs):
    if len(sender) == 0:
        sender = None
    elif len(sender) > 1:
        raise TypeError('send() accepts only one positional argument, '
                        '%s given' % len(sender))
    else:
        sender = sender[0]
    # only this line was changed: skip all receivers while the app is in testing mode
    if not self.receivers or app.config['TESTING']:
        return []
    else:
        return [(receiver, receiver(sender, **kwargs))
                for receiver in self.receivers_for(sender)]

# monkey-patch blinker's Signal.send with the testing-aware version
Signal.send = new_send

event_signals = Namespace()
# and so on ....
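
With this patch in place, enabling Flask's testing flag in the test setup short-circuits every send(). A minimal sketch, assuming the same app object referenced in new_send and the speakers_modified signal defined earlier:

app.config['TESTING'] = True
result = speakers_modified.send(app, event_id=1)
assert result == []  # no subscriber was executed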

That's all for now. Have some fun signalling 😉.

 

Continue Reading: Event-driven programming in Flask with Blinker signals

Keep your node server running using PM2

The Open Event Webapp Generator is a Node.js project (using an Express server), and a copy of it runs all the time on my personal DigitalOcean box (in addition to our Heroku instance).

On a service like Heroku, the platform manages the task of bringing your server process up. But on, say, a plain Linux distro on the DigitalOcean box, you have to manually run

 npm run server

to be able to run the server.

While that is all good, it is a foreground shell process, which means you will lose the node server when you log out (or when your SSH connection breaks).
So we need a way to keep the process running in the background.

The way we do it in bash on Unix is to use either of the following:

 npm run server&

The “&” at the end runs the task as a background job. If you have already started the server without it, you can also do the following.

npm run server # starts in foreground
# press Ctrl + Z: this pauses the task and frees the shell
bg 1 # resumes job number 1 in the background

Again, both of these are hacky methods that work only on Unix-like OSs, and they are not really recommended for production.
For production we need a process manager; for Node.js, the go-to choice is pm2, a purpose-built process manager for Node.

Install pm2 first

sudo npm install -g pm2

Using pm2, we can start any script that can be run with node. We can start the app.js script like this

pm2 start src/app.js

pm2 can also run npm tasks, like

pm2 start npm -- start

pm2 displays a neat status table of all managed processes, and we can start, stop, restart, kill and/or delete any process. For example:
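
A few commonly used pm2 commands (the process name app is just an illustration; use the name or id shown by pm2 list):

pm2 list # show the status table of all managed processes
pm2 stop app # stop the process named "app"
pm2 restart app # restart it
pm2 logs app # stream its logs
pm2 delete app # remove it from pm2's process list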

 

[Screenshot: pm2 process status table]


sTeam GSoC 2016 Windup

(ˢᵒᶜⁱᵉᵗʸserver) aims to be a platform for developing collaborative applications.
sTeam server project repository: sTeam.
sTeam-REST API repository: sTeam-REST

An overview of the work done by ajinkya007 during Google Summer of Code 2016 with FOSSASIA on its project sTeam.

The community bonding period saw the creation of a Docker image and a Debian package for the sTeam server. The integration of the sTeam shell into vi, improvements in the export-to-git and import-from-git scripts, user and group manipulation commands, sending mails through the command line, viewing logs, and modifications to the edit script were done subsequently. In the later part of GSoC, the sTeam-rest repository was restructured and unit and API end-point tests were performed; the newly developed web interface was also tested.
The code written during this period by me and siddhant was merged and the conflicts were resolved. The merged code was tested thoroughly by hand, since no automated test-integration tool supports the Pike programming language. Documentation was generated using Doxygen and deployed to the gh-pages branch of the sTeam server repository.

A Trello board was maintained throughout the course of GSoC 2016.

Trello Board: sTeam

Accomplishments

Issues Reported and Resolved

A list of tasks covered and all the Pull requests related to each:

Tasks | Issues | PRs
Make changes in the Makefile for installation of sTeam | Issue-25, Issue-27 | PR-66, PR-67
Edit script modifications | Issue-20, Issue-29, Issue-43 | PR-44, PR-48
Indentation of output in steam-shell | Issue-24 | PR-42
Integrate steam-shell into vim or emacs | Issue-37, Issue-43, Issue-49 | PR-41, PR-48, PR-51
Improve the import and export from git scripts | Issue-9, Issue-14, Issue-16, Issue-18, Issue-19, Issue-46 | PR-45, PR-54, PR-55, PR-76
Create, delete and list users through the command line | Issue-58, Issue-69, Issue-72 | PR-59, PR-70, PR-78
Sending mails through the command line | Issue-74 | PR-85
Generate error logs and display them in the CLI | Issue-83 | PR-86
Create a file of any MIME type from the command line | Issue-79 | PR-82
Add more commands for group operations | Issue-80 | PR-84
Add more utility to the steam-shell | Issue-56, Issue-71, Issue-73 | PR-57, PR-75, PR-81
Restructure the sTeam-rest repository | List of issues | List of PRs
Write test cases to test the sTeam-rest API | List of issues | List of PRs
Create a Debian package and a Docker image for easy deployment | Create docker image | Docker Image
Document the work done | Issue-149 | sTeam Server Structure, sTeam Server Documentation
Test the web interface | |

Commits Merged

During the course of GSoC 2016, work was done on the sTeam and sTeam-rest repositories.

1. The work done on the sTeam repository.

We have combined all the work into two branches for ease of creating a Debian package. The commits made by me in each branch can be seen here.

2. The work done on the sTeam-rest repository

The pull requests sent for the issues are yet to be merged into the main repository. The list of PRs for the sTeam-rest repository:

sTeam-rest PRs

The weekly blogs

The blogs summarizing the work done each week were published on my personal website; these can be found under Weekly Blogs.
All the blogs can also be found on the FOSSASIA blog.

Scrums

Scrum reports were posted in the #steam-devel channel on irc.freenode.net and on the sTeam Google group. The sTeam Trello board also has everyday scrum reports.

Further Improvements

  1. The sTeam command line lacks the functionality to read and set object access permissions. A function analogous to getfacl() is needed to inspect and change the permissions of sTeam server objects.
  2. A Debian package for easy installation of the sTeam server; the package is yet to be fully finished.

Special Thanks

  • I would like to thank my mentors Mario Behling, Hong Phuc Dang, Martin Bahr, Trilok Tourani and my peers for being there to help me and guide me.
  • I would like to thank FOSSASIA, sTeam and Pike Community for giving me this opportunity and guiding me in this endeavour.
  • I would also like to thank Google Summer of Code for this experience.

Feel free to explore the repository. Suggestions for improvements are welcome.

Check out the FOSSASIA Ideas page for more information on projects supported by FOSSASIA.


Software Testing

In this post I would like to explain what software testing is and the different methods and levels of software testing.

What is software testing?

Software Testing is the process of evaluating a system or its components with the intent to find whether it satisfies the specified requirements or not.

Testing means executing a system in order to identify any gaps, errors, or missing requirements with respect to the actual requirements.

What are the Levels of software testing?

Software products are tested at four levels:
  • Unit testing
  • Integration testing
  • System testing
  • Acceptance testing

Unit testing

  • During unit testing, modules are tested in isolation. If all modules were tested together, it would not be easy to determine which module has the error.
  • Unit testing reduces debugging effort severalfold. Programmers carry out unit testing immediately after they complete the coding of a module; a minimal example is sketched right after this list.
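
As a quick illustration (a generic Python sketch, not code from Engelsystem), a unit test exercises one small unit in isolation and checks its behaviour against the expected result:

import unittest

def apply_discount(price, percent):
    """The unit under test: reduce a price by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError('percent must be between 0 and 100')
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == '__main__':
    unittest.main()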

Role of Unit Testing

  • Assure minimum quality of units before integration into system
  • Focus attention on relatively small units
  • Testing forces us to read our own code; we spend more time reading than writing
  • Automated tests support maintainability and extendibility

Integration testing

After different modules of a system have been coded and unit tested:
  • the modules are integrated in steps according to an integration plan
  • the partially integrated system is tested at each integration step.

Role of Integration Testing

  • Gain confidence in the integrity of overall system design
  • Ensure proper interaction of components

Integration Testing Strategies

  • Big-bang
  • Top-down
  • Bottom-up
  • Critical-first
  • Function-at-a-time
  • As-delivered
  • Sandwich

System Testing

  • Gain confidence in the integrity of the system as a whole
  • Ensure compliance with functional requirements
  • Ensure compliance with performance requirements

Acceptance Testing

  • Testing performed by the customer or the end users themselves
  • to determine whether the system should be accepted or rejected

Testing is a very important step of software development. We have used unit testing for our project Engelsystem, where we are developing new features. Interested developers can work with us.

Development: https://github.com/fossasia/engelsystem

Issues/Bugs: https://github.com/fossasia/engelsystem/issues
