Creating inter-component actions in Open Event Front-end

The Open Event Front-end project is built using Ember JS, which lets us create modular components. While implementing the sessions route, we faced the challenge of sending actions between components. To solve this problem we used the Ember {{action}} helper, which bubbles the action up to the controller, from where it is passed on to the desired component. Here is how we did it.

Handling actions in ember

In Ember we can handle actions using {{action 'function'}}, where function is executed every time the bound event on the element is triggered. This can be used to handle actions for the component. You can attach actions to elements like this:

<a href="#" class="item {{if (eq selectedTrackId track.id) 'active'}}" {{action 'filter' track.id}}>
  {{track.name}}
</a>

All the actions attached using the {{action}} helper are defined inside the actions hash of the component. Here the filter action is bound to the click event of the anchor tag, and the helper passes the id of the track (track.id) as a parameter to the filter function defined in the component.

Whenever the element is clicked, the filter function defined in the component is triggered. This approach works for handling actions within a component; however, it does not help when we need to trigger actions across components.

Sending actions from component to controller

We send an action from the component to the controller of the route in which the component is rendered, using a computed property defined inside the controller that watches selectedTrackId.

{{public/session-filter selectedTrackId=selectedTrackId tracks=tracks}}

Whenever the anchor is clicked, it passes the id of the selected track to the filter function of the component. Inside the filter function we set the selectedTrackId property that is bound to the component in the route template, as shown above.

actions: {
  filter(trackId = null) {
    this.set('selectedTrackId', trackId);
  }
}

selectedTrackId is observed by a computed property defined in the controller, which filters the session list based on the id passed by the session-filter component.

Handling action in the controller

Inside the controller we have a computed property called sessionsByTracks, which observes the sessions array and the selectedTrackId passed to the controller by the session-filter component.

sessionsByTracks: computed('model.sessions.[]', 'selectedTrackId', function() {
  if (this.get('selectedTrackId')) {
    return chain(this.get('model.sessions'))
      .filter(['track.id', this.get('selectedTrackId')])
      .orderBy(['track.name', 'startAt'])
      .value();
  } else {
    return chain(this.get('model.sessions'))
      .orderBy(['track.name', 'startAt'])
      .groupBy('track.name')
      .value();
  }
}),
tracks: computed('model.sessions.[]', function() {
  return chain(this.get('model.sessions'))
    .map('track')
    .uniqBy('id')
    .orderBy('name')
    .value();
})

sessionsByTracks is filtered using lodash functions based on the id of the track passed to it. On the first render all the sessions are displayed, as selectedTrackId is set to null.
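For this to work on the first render, the controller presumably initialises the property to null. A minimal sketch of how such a declaration could look in the controller (an assumption, not the project's exact code):

selectedTrackId: null // no track selected initially, so all sessions are rendered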

{{public/session-list sessions=sessionsByTracks}}

This property is passed to the session-list component, which renders the list of sessions filtered by the track selected in the session-filter component. Check the source code for the example here.

Resources


Testing Asynchronous Code in Open Event Orga App using RxJava

In the last blog post, we saw how to test complex interactions through our apps using stubbed behaviors with Mockito. In this post, I'll be talking about how to test RxJava components such as Observables. This one will focus on testing complex situations using RxJava, as the library itself provides methods to unit test your reactive streams, so that you don't have to go out of your way to set up contraptions like callback captors or implement your own interfaces as stubs of the originals. The test suite (kind of) provided by RxJava also allows you to test the fate of your stream, like confirming that it got subscribed or that an error was thrown, or to test an individual emitted item, like its value or against predicate logic of your own. We have used this heavily in the Open Event Orga App (Github Repo) to check that the app correctly loads and refreshes resources from the correct source. We also verify certain triggers on events, like the saving of data on reload, so that the database remains in a consistent state. Here, we'll look at some basic examples and move to some complex ones later. So, let's start.

public class AttendeeRepositoryTest {

    private AttendeeRepository attendeeRepository;

    @Before
    public void setUp() {
        // construct the class under test; in the real project the mocked dependencies
        // used in the later examples would be supplied here, e.g. via its constructor
        attendeeRepository = new AttendeeRepository();
    }

    @Test
    public void shouldReturnAttendeeByName() {
        // TODO: Implement test
    }

}

 

This is our basic test class setup with general JUnit settings. We'll start by writing our tests, the first of which will check that we can get an attendee by name. The Attendee class is a model class with firstName and lastName fields, and we will be checking that we get a valid attendee when passing a full name. Note that although we will talk about the implementation of the code we are writing tests for, we will do so only in an abstract manner; we won't be dealing with production code, just the tests.

As we know, Observables provide a stream of data. But here we are only concerned with one attendee. Technically we should be using Single, but for generality we'll stick with Observables.

So, a person from the background of JUnit would be tempted to write this code below.

Attendee attendee = attendeeRepository.getByAttendeeName("John Wick")
    .blockingFirst();

assertEquals("John Wick", attendee.getFirstName() + " " + attendee.getLastName());

 

So, what this code is doing is blocking the thread till the first attendee is provided in the stream and then checking that the attendee is actually John Wick.

While this code works, it is not reactive. With the reactive way of testing, not only can you test more complex logic with less verbosity, but it naturally provides ways to test other behaviors of reactive streams such as subscriptions, errors, completions, etc. We'll only be covering a few. So, let's see the reactive version of the above test.

attendeeRepository.getByAttendeeName("John Wick")
    .firstElement()
    .test()
    .assertNoErrors()
    .assertValue(attendee -> "John Wick".equals(
        attendee.getFirstName() + " " + attendee.getLastName()
    ));

 

So clean and complete. Just by calling test() on the returned observable, we got this whole suite of testing methods with which, not only did we test the name but also that there are no errors while getting the attendee.

Testing for Network Error on loading of Attendees

OK, so let's move towards a more realistic test. Suppose you call getAttendees() on the AttendeeRepository and it fetches attendees from the network. First, you want to handle the simplest case: there should be an error if there is no connection. If you have (hopefully) set up your project using abstractions for the model layer, as in MVP, then it will be a piece of cake to test this. Let's suppose we have a networkUtils object with an isConnected() method.

The NetworkUtils class is a dependency of AttendeeRepository, and we have set it up as a mock in our test using Mockito. If this sounds somewhat unfamiliar, please read my previous article “The Joy of Testing with MVP”.

So, our test will look like this

@Test
public void shouldStreamErrorOnNetworkDown() {
    when(networkUtils.isConnected()).thenReturn(false);
    
    attendeeRepository.getAttendees()
        .test()
        .assertErrorMessage("No Network");
}

 

Note that, if you don’t define the mock object’s behavior like I have here, attendeeRepository will likely throw an NPE as it will be calling isConnected() on an undefined object.

With RxJava, you get a whole lot of methods for each use case. Even for checking errors, you get to assert a particular Throwable, or a predicate defining an operation on the Throwable, or error message as I have shown in this case.

Now, if you run this code, it'll probably fail. That's because if you are offloading the networking task to a different thread using the subscribeOn and observeOn methods, the test body may be detached from the main thread while the requests complete. Furthermore, if you are testing an application made for Android, you would have used AndroidSchedulers.mainThread(), but as it is an Android dependency, the test will fail. Well actually, crash. There were some workarounds that created abstractions even for the RxJava schedulers, but RxJava 2 provides a very convenient way to override the default schedulers in the form of RxJavaPlugins. Similarly, RxAndroidPlugins is present in the rx-android package. Let's suppose you plan to use Schedulers.io() for asynchronous work and want to receive the stream on Android's main thread, meaning you use AndroidSchedulers.mainThread() in the observeOn method. To override these schedulers with Schedulers.trampoline(), which queues your tasks and performs them one by one like the main thread, your setUp will include this:

RxJavaPlugins.setIoSchedulerHandler(scheduler ->  Schedulers.trampoline());
RxAndroidPlugins.setInitMainThreadSchedulerHandler(scheduler -> Schedulers.trampoline());

 

And if you are not using isolated tests and need to resume the default scheduler behavior after each test, then you’ll need to add this in your tearDown method

RxJavaPlugins.reset();
RxAndroidPlugins.reset();

Testing for Correct loading of Attendees

Now that we have tested that our Repository is correctly throwing an error when the network is down, let’s test that it correctly loads attendees when the network is connected. For this, we’ll need to mock our EventService to return attendees when queried, since we don’t want our unit tests to actually hit the servers.

So, we’ll need to keep these things in mind:

  • Mock the network utils so that they report that the device is connected to the Internet
  • Mock the EventService to return attendees when queried
  • Call the getter on the attendeeRepository and test that it indeed returned a list of attendees

For these conditions, our test will look like this:

@Test
public void shouldLoadAttendeesSuccessfully() {
    List<Attendee> attendees = Arrays.asList(
        new Attendee(),
        new Attendee(),
        new Attendee()
    );

    when(networkUtils.isConnected()).thenReturn(true);
    when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

    attendeeRepository.getAttendees()
        .test()
        .assertValues(attendees.toArray(new Attendee[attendees.size()]));
}

 

The assertValues function asserts that these values were emitted by the observable. And if you want to be terser, you can even just verify that EventService's getAttendees function was in fact called:

verify(eventService).getAttendees();

 

But the problem with this approach is that the getAttendees function returns an observable, and just calling it does not necessarily mean that it was subscribed and emitting results; hence we need a test that ensures it was indeed subscribed. If we call the normal test() function on the observable, it is already subscribed, making the result of testSubscribed always true. In order to test that correctly, let's look at our final use case.

Testing for saving of Attendees

In the Open Event Orga App, we have strived to create self-sufficient and intelligent classes, thus, our repository is also built this way. It detects that new attendees are loaded from the server and saves them in the database. Now we’d want to test this functionality.

In this test, there is an added dependency of DatabaseRepository for saving the attendees, which we will mock. The conditions for this test will be:

  • Network is connected
  • EventService returns attendees
  • DatabaseRepository mocks the saving of attendees

For DatabaseRepository’s save method, we’ll be returning a Completable, which will notify when the saving of data is completed. The primary purpose of this test will be to assert that this completable is indeed subscribed when the attendee loading is triggered. This will not only ensure that the correct function to save the attendees is called, but also that it is indeed triggered and not just left hanging after the call. So, our test will look like this.

@Test
public void shouldSaveAttendeesInDatabase() {
    List<Attendee> attendees = Arrays.asList(
        new Attendee(),
        new Attendee(),
        new Attendee()
    );

    TestObserver testObserver = TestObserver.create();
    Completable completable = Completable.complete()
        .doOnSubscribe(testObserver::onSubscribe);

    when(networkUtils.isConnected()).thenReturn(true);
    when(databaseRepository.save(attendees)).thenReturn(completable);
    when(eventService.getAttendees()).thenReturn(Observable.just(attendees));

    attendeeRepository.getAttendees()
        .test()
        .assertNoErrors();

    testObserver.assertSubscribed();
}

 

Here, we have created a separate test observer, set it to be subscribed when the Completable is subscribed, and returned that Completable when the save method is called. Finally, we have asserted that the test observer is indeed subscribed.

You can create more complex use cases and assert subscriptions, errors, the emptiness of a stream and much more, by using the built-in test functionalities of RxJava2. So, that’s all for this blog, you can visit these links for more details on unit testing RxJava

http://fedepaol.github.io/blog/2015/09/13/testing-rxjava-observables-subscriptions/

https://www.infoq.com/articles/Testing-RxJava


Deploying documentations generated by Yaydoc to Heroku

There are many web applications available online that generate static websites. Among them are two projects developed here at FOSSASIA: the Open Event WebApp Generator and Yaydoc (an automatic documentation generation and deployment project). Since Yaydoc already supports deploying the generated documentation to GitHub Pages, it was just a matter of time before deployment to Heroku was supported as well.

Heroku is an excellent cloud-based platform for deploying web applications. Heroku provides most of its services free of cost and is well suited to hosting static websites, provided a little bit of tweaking is done.

For this implementation, we use the `Platform API` provided by Heroku. Quoting its description from the documentation:

The platform API empowers developers to automate, extend and combine Heroku with other services. You can use the platform API to programmatically create apps, provision add-ons and perform other tasks that could previously only be accomplished with Heroku toolbelt or dashboard.

In order to deploy the static websites to Heroku, we need to first prepare a bundle of source code that has been compiled and is ready for execution on the Heroku runtime. This bundle is known as a Slug.

cd temp/$EMAIL/${UNIQUE_ID}_preview
mkdir -p app
cd app

curl https://nodejs.org/dist/v6.11.0/node-v6.11.0-linux-x64.tar.gz | tar xzv > /dev/null

cp $BASE/web.js .
rsync -av --progress ../ . --exclude app

cd ..
tar czfv slug.tgz ./app > /dev/null

We bundle the files generated for the preview into the slug. We also download the Node.js runtime, since we are deploying a static website to Heroku. Along with the static files, we need to bundle a Node.js server file (web.js) that serves the static files in the application.
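The web.js file itself is not shown above; a minimal sketch of what such a static file server could look like is given below. This is an assumption for illustration, not necessarily the server file that Yaydoc actually bundles; the port is read from the PORT environment variable that Heroku sets.

var http = require('http');
var fs = require('fs');
var path = require('path');

// very small mapping of file extensions to MIME types
var types = {
  '.html': 'text/html',
  '.css': 'text/css',
  '.js': 'application/javascript',
  '.png': 'image/png',
  '.svg': 'image/svg+xml'
};

http.createServer(function (req, res) {
  // strip any query string and serve index.html for the root
  var urlPath = req.url.split('?')[0];
  var file = urlPath === '/' ? '/index.html' : urlPath;
  fs.readFile(path.join(__dirname, file), function (err, data) {
    if (err) {
      res.writeHead(404);
      res.end('Not found');
      return;
    }
    res.writeHead(200, {'Content-Type': types[path.extname(file)] || 'text/plain'});
    res.end(data);
  });
}).listen(process.env.PORT || 5000);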

After preparing the slug, we publish the static web application to Heroku. For this, we start by creating a Heroku app using the command `heroku create <app-name>`. The app name is decided by the user while filling out the form in the Yaydoc web app. Following that, we request Heroku to allocate a new slug for the app, and then we upload the slug tar file to the platform.

# Create Heroku app
heroku create $APP_NAME

# Allocate a new slug
Arr=($(curl -u ":$API_KEY" -X POST \
-H 'Content-Type: application/json' \
-H 'Accept: application/vnd.heroku+json; version=3' \
-d '{"process_types":{"web":"node-v6.11.0-linux-x64/bin/node web.js"}}' \
-n https://api.heroku.com/apps/${APP_NAME}/slugs | \
python -c "import sys,json; obj=json.load(sys.stdin);
print(obj['blob']['url'] + '\n' + obj['id'])"))

# Upload the slug tar file
curl -X PUT \
-H "Content-Type:" \
--data-binary @slug.tgz \
"${Arr[0]}"

After uploading the slug to Heroku, we need to release the app. This is done using the following command.

curl -u ":$API_KEY" -X POST \
-H "Accept: application/vnd.heroku+json; version=3" \
-H "Content-Type: application/json" \
-d '{"slug":"'${Arr[1]}'"}' \
-n https://api.heroku.com/apps/$APP_NAME/releases

Releasing the application completes the deployment process, making the documentation generated by Yaydoc available at the following URL: https://<app-name>.herokuapp.com/

 


Using HTTMock to mock Third Party APIs for Development of Open Event API server

While implementing connected social media in the Open Event API server, we needed to mock third-party API services like Google OAuth and the Facebook Graph API. In mocking, we run our tests against a simulation of the API instead of the original API, so that it responds with dummy data similar to that of the original API.

To implement this, we first need library support in our Orga Server to mock APIs; the library used for this purpose is httmock for Python. One of my mentors, @hongquan, helped me understand the approach we would follow. According to this approach, when we make an HTTP request to any API from the tests, our httmock implementation:

  • stands in the middle of the request,
  • stops the request from going to the original API,
  • and returns a dummy response as if it came from the original API.

The content of this response is written by us in the test case. We have to make sure that it is the same type of object as we receive from the original API.

Steps to follow (mocking the Google OAuth API)

  1. Look at the response objects of the two requests (OAuth token and profile details).
  2. Create the dummy responses using the sample response objects.
  3. Create the mock endpoints using the httmock library.
  4. During the test run, wrap the relevant calls with HTTMock.

A sample OAuth response object from Google is:

{
"access_token":"2YotnFZFEjr1zCsicMWpAA",
"token_type":"Bearer",
"expires_in":3600,
"refresh_token":"tGzv3JOkF0XG5Qx2TlKWIA",
"example_parameter":"example_value"
}

and from the sample Google Profile API object we only needed the profile link for our API server:

{'link':'http://google.com/some_id'}

 

Creating the dummy response

Creating the dummy responses was easy. All I had to do was provide the proper headers and content in the response and use the @urlmatch decorator:

# response for getting userinfo from google

@urlmatch(netloc=r'www\.googleapis\.com$', path='/userinfo/v2/me')
def google_profile_mock(url, request):
    headers = {'content-type': 'application/json'}
    content = {'link': 'http://google.com/some_id'}
    return response(200, content, headers, None, 5, request)

@urlmatch(netloc=r'(.*\.)?google\.com$')
def google_auth_mock(url, request):
    headers = {'content-type': 'application/json'}
    content = {
        "access_token": "2YotnFZFEjr1zCsicMWpAA",
        "token_type": "Bearer",
        "expires_in": 3600,
        "refresh_token": "tGzv3JOkF0XG5Qx2TlKWIA",
        "example_parameter": "example_value"
    }
    return response(200, content, headers, None, 5, request)

 

So now we have the endpoints to mock the responses. All we need to do is use HTTMock inside the test case.

To use this setup all we need to do is:

with HTTMock(google_auth_mock, google_profile_mock):
    self.assertTrue('Open Event' in self.app.get('/gCallback/?state=dummy_state&code=dummy_code',
                                                 follow_redirects=True).data)
    self.assertEqual(self.app.get('/gCallback/?state=dummy_state&code=dummy_code').status_code, 302)
    self.assertEqual(self.app.get('/gCallback/?state=dummy_state&code=dummy_code').status_code, 302)

And with that, we were able to mock the Google APIs in our test case. The complete implementation in the FOSSASIA API server can be seen here.

 


Using wrapper div around HTML buttons to add extra functionality in Open Event Server

Open Event server had a bug wherein clicking on the notification of an invitation caused a server error. When invitations for a role in an event were sent, they showed up in the notifications header. Clicking on a notification there took the user to the notifications page, where there were options to Accept or Decline the invitation. The bug was that when the user clicked on either of the Accept/Decline buttons, the notification was not being marked read, though semantically it should have been. Since the invite link expires after acceptance/decline, and the invitation persisted on the notifications page, clicking Accept/Decline again ran into a 404 error.

The Accept/Decline buttons already had an href attached to each of them, which triggered functions of the invitation manager class. The aim here was to make one more thing happen when either of these buttons was clicked. The bug was resolved by adding a wrapper around these buttons and giving it the same functionality as the 'Mark as Read' button.

Adding a class to both the buttons

<a href='{accept_link}' class='btn btn-success btn-sm invite'>Accept</a>
<a href='{decline_link}' class='btn btn-danger btn-sm invite'>Decline</a>


Adding JavaScript to the invite button

if ($(e.target).is('.invite')) {
    var read_button = $(e.target).parents('.notification').find('a.read-btn');
    $.getJSON(read_button.attr('href'), function (data) {
        read_button.parents('.notification').removeClass('info'); // show notification as read
        read_button.remove(); // delete mark as read button
    });
}

Using parseInt() with Radix

Another error [comment] in the same issue was that sometimes the notification count went negative. This was resolved by adding a simple clause to check whether the notification count is greater than 0.

notif_count = ((notif_count - 1) > 0 ) ? (notif_count - 1) : 0;

 

To set the count as the innerHTML of a div, which in this case is the notification count bubble, one uses parseInt():

div.innerHTML = parseInt(notif_count);

This might work, but Codacy gives an error here because a radix is not being passed to the parseInt() function.

What is a radix?
Radix simply denotes the base of the numeral system: the number of unique digits, including zero, that are used to represent numbers in it.

For example, numbers written in binary notation have radix 2 while those written in octal notation have radix 8.

Passing radix to the parseInt() function specifies the number system in which the input is to be parsed. Though the radix can be hinted at by other means too, it is always a good practice to pass the radix explicitly.

// leading 0 => radix 8 (octal)
var number = parseInt('0101');
// leading '0x' => radix 16 (hexadecimal)
var number = parseInt('0x0101');
// numbers starting with anything else assume a radix of 10
var number = parseInt('101');
// specifying the radix explicitly, here radix of 2 => (binary)
var number = parseInt('0101', 2);


If you ignore this argument, parseInt() will try to choose the most proper numeral system, but this can back-fire due to browser inconsistencies. For example:

parseInt("023");  // 23 in one browser (radix = 10)
parseInt("023");  // 19 in other browser (radix = 8)


Providing the radix is vital if you are to guarantee accuracy with variable input (basic range, binary, etc). This can be ensured by using a JavaScript linter for your code, which will throw an error for unintended results.
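Applied to the notification counter above, the warning goes away once the base is passed explicitly (assuming the count is a plain decimal number):

div.innerHTML = parseInt(notif_count, 10); // radix 10: always parse the count as decimal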

Issues :
Error on clicking on notification

Pull Request :
Fix error on clicking on notification

Additional Resources:


CSS Trick: How to Object-fit an Image inside its Container Element in the Open Event Front-end

I came across this piece of CSS when the Nextgen Conference logo on the eventyay home page did not maintain its aspect ratio. As you can see in the picture, the image is stretched to fill the parent container.

The CSS behind this image was:

.event-holder img {
    width: 100%;
    height: 165px;
    border: none;
}

Let’s see how object fit helped me to fix this problem.

What is object-fit ?

The object-fit property of an element describes how it is fitted or placed inside its container element. This container box has its boundaries defined by the max-height and max-width attribute of the object in question.
In the CSS for the above logo we had:

img {
    width: 100%;
    height: 165px; 
}

The object in this example is an image ( img ) which is to be fitted inside a box of height 165px with 100% width.

The object-fit property can refer to any element like video or embedded item in the page, but it’s mostly applied to images.

object-fit provides fine-grained control over how the object resizes to fit inside its container div. Essentially, object-fit lets the image (in this context, but it can be applied to any object) fill the box while maintaining its aspect ratio and/or filling the entire area established by the height and width.

Here’s a short example for different values of this attribute:
(The image used here is a 4096px × 2660px image, placed inside a div of height 100px and width 300px.)

object-fit: fill
<img src="download.png" class="fill"/>
object-fit: contain
<img src="download.png" class="contain"/>
object-fit: cover
<img src="download.png" class="cover"/>
object-fit: none
<img src="download.png" class="none"/>
object-fit: scale-down
<img src="download.png" class="scale-down"/>

img {
  width: 300px;
  height: 100px;
  border: 1px solid yellow;
  background: blue;
}
.fill {
  object-fit: fill;
}
.contain {
  object-fit: contain;
}
.cover {
  object-fit: cover;
}
.none {
  object-fit: none;
}
.scale-down {
  object-fit: scale-down;
}

From the above illustration, it is evident that what I needed to fix the aspect ratio on the home page was object-fit: cover. We got this result by adding just one line of code. Here's the final CSS:

.event-holder img {
    width: 100%;
    height: 165px;
    object-fit: cover;
    border: none;
}

 

And the final image, which is pleasing and aesthetic:


Quick cheat-sheet for object-fit values

fill

  • stretches the image to fit the content box
  • aspect-ratio disregarded

contain

  • increases or decreases the size of the image to fill the box
  • aspect-ratio preserved

cover

  • fill the height and width of box
  • aspect ratio preserved
  • often the image gets cropped

none

  • height and width of the container box ignored
  • image retains its original size

scale-down

  • the image is rendered as if none or contain were specified, whichever results in the smaller concrete object size

Additional Resources


Creating a Responsive Menu in Open Event Frontend

Open Event Frontend uses Semantic UI for creating responsive HTML components; however, some components, like buttons and menus, are not responsive in nature. Therefore we need to convert tabbed menus into a one-column dropdown menu in mobile views. In this post I describe how we make menus responsive by creating a custom Ember component styled with Semantic UI.

In Open Event we are using the tabbed menus for navigation to a particular route as shown below.

Menu (Desktop)

As you can see there is an issue when viewing the menu on mobile screens.

Menu (Mobile)

Creating custom component for menu

To make the menu responsive we created a custom component called tabbed-navigation, which converts the horizontal menu into a vertical dropdown menu on smaller screens. We use Semantic UI styling classes in the component to implement the vertical dropdown for the mobile view.

tabbed-navigation.js

currentRoute: computed('session.currentRouteName', 'item', function() {
  var path = this.get('session.currentRouteName');
  var item = this.get('item');
  if (path && item) {
    this.set('item', this.$('a.active'));
    this.$('a').addClass('vertical-item');
    return this.$('a.active').text().trim();
  }
}),
didInsertElement() {
  var isMobile = this.get('device.isMobile');
  if (isMobile) {
    this.$('a').addClass('vertical-item');
  }
  this.set('item', this.$('a.active'));
},
actions: {
  toggleMenu() {
    var menu = this.$('div.menu');
    menu.toggleClass('hidden');
  }
}

In the component we check if the device is mobile and change the classes accordingly. For mobile devices we add the vertical-item class to all the items in the menu. We also set a property called item in the component, which stores the currently selected item of the menu.

We add a computed property called currentRoute, which observes the current route and the selected item, sets item to the link of the currently active route, and returns the name of the current route.

We add an action toggleMenu which is used to toggle the display of the vertical menu for mobile devices.

tabbed-navigation.hbs

For mobile devices we dynamically add a toggle button showing the name of the currently selected item, which is stored in the currentRoute property. We also switch between the horizontal and vertical menu based on the screen size.

{{#if device.isMobile}}
  <div role="button" class="ui segment center aligned" {{action 'toggleMenu'}}>
    {{currentRoute}}
  </div>
{{/if}}
<div role="button" class="mobile hidden ui fluid stackable {{unless isNonPointing (unless device.isMobile 'pointing')}} {{unless device.isMobile (if isTabbed 'tabular' (if isVertical 'vertical' 'secondary'))}} menu" {{action 'toggleMenu'}}>
  {{yield}}
</div>

tabbed-navigation.scss

.tabbed-navigation {
  .vertical-item {
    display: block !important;
    text-align: center;
  }
}

Our custom component must look like an item of the menu; to ensure this we use the display: block property, which places the dropdown below the toggle button. We also center the menu items so that it looks more like a vertical dropdown.

{{#tabbed-navigation}}
  {{#link-to 'events.view.index' class='item'}}
    {{t 'Overview'}}
  {{/link-to}}
  {{#link-to 'events.view.tickets' class='item'}}
    {{t 'Tickets'}}
  {{/link-to}}
  <a href="#" class='item'>{{t 'Scheduler'}}</a>
  {{#link-to 'events.view.sessions' class='item'}}
    {{t 'Sessions'}}
  {{/link-to}}
  {{#link-to 'events.view.speakers' class='item'}}
    {{t 'Speakers'}}
  {{/link-to}}
  {{#link-to 'events.view.export' class='item'}}
    {{t 'Export'}}
  {{/link-to}}
{{/tabbed-navigation}}

To use this component all we need to do is wrap our menu inside the tabbed-navigation component and it will convert the horizontal menu to the vertical menu for mobile devices.

The outcome of this change on the Open Event Front-end now looks like this:

Thank you for reading the blog, you can check the source code for the example here.

Resources


Handling soft and hard deletes in the Open Event server API

Really, handling soft and hard deletes can be a mess, if you think of it.

Earlier in the Open Event server project, we had a Boolean field called is_trashed which was set to true if a record was soft-deleted. That worked just fine, until there came a requirement to get the time at which the record was deleted. So duh… we added another column called deleted_at which would store the time at which the record was soft-deleted. And it all started working fine again.

But shortly we realised it was bad design to have a redundant Boolean field is_trashed. So it was decided to remove the is_trashed field and keep only the deleted_at column everywhere. If the deleted_at field contains a date, it means that the record was soft-deleted at that point of time; if the field is still NULL, the record has not been soft-deleted. That wraps up the database aspect of implementing soft deletes. Let's move on to the API part then.

We are currently in the process of decoupling our front-end and back-end, and the API server for this is in active development. We've been using flask-rest-jsonapi for this purpose. So, the first thing that popped up in our minds when we got around to handling soft deletes was the following.

Should the API framework implement soft-deletes for each API by itself, or should the individual API logic take care of it?

After some discussion, it was decided to let the framework handle it for each API, so that the implementation remains uniform and there is obviously a little less headache for the developers. In our custom copy of flask-rest-jsonapi, we also added an option to turn off soft deletes across the whole API. Turning it off for each resource is also on our roadmap and will be implemented in the future.

Now talking about the API itself: for GET endpoints, soft-deleted records should not be retrieved by default. Retrieving all the records irrespective of whether they are soft-deleted and letting the client figure out which records are deleted is a sign of bad design. If the client wants to retrieve the deleted records as well, it can do so by passing the query parameter with_trashed set to true.

The following URL patterns are used for this; for the sake of the example, assume that the event with id 1 is soft-deleted:

GET /events?with_trashed=true   # get all events including the soft-deleted events
GET /events/1                       # send a 404 exception
GET /events/1?with_trashed=true # retrieve relevant data 

For DELETE requests:

DELETE /events/1                 # soft-delete the event
DELETE /events/1?permanent=true   # hard-delete the event
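From a client's perspective, switching between the two is then just a matter of the query parameter. A hypothetical call from a browser front-end (authentication headers omitted) could look like this:

// soft-delete event 1 (only sets its deleted_at timestamp)
fetch('/events/1', { method: 'DELETE' });

// hard-delete event 1 (removes the record permanently)
fetch('/events/1?permanent=true', { method: 'DELETE' });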

Relevant links:


Integrating Selenium Testing in the Open Event Webapp

Enter Selenium. Selenium is a suite of tools to automate web browsers across many platforms. Using Selenium, we can control the browser and instruct it to perform an 'action' programmatically. We can then check whether that action had the appropriate reaction and build our test cases on this concept. There are various implementations of Selenium available in many different languages: Java, Ruby, JavaScript, Python etc. As the main language used in the project is JavaScript, we decided to use it.

https://www.npmjs.com/package/selenium-webdriver
https://seleniumhq.github.io/selenium/docs/api/javascript/index.html

After deciding on the framework, we had to find a way to integrate it into the project. We wanted to run the tests on every PR made to the repo; if they failed, the build would be stopped and that would be shown to the user. Now, the problem was that Travis doesn't natively support running Selenium on its virtual machines. Fortunately, there is a company called Sauce Labs which provides automated testing for web and mobile applications, and the best part is that it is totally free for open source projects. And Travis supports Sauce Labs. How to connect to Sauce Labs is described in detail on this page:

https://docs.travis-ci.com/user/gui-and-headless-browsers/

Basically, we have to create an account on Sauce Labs and get a sauce_username and sauce_access_key, which will be used to connect to the Sauce cloud. Travis provides a sauce_connect addon which creates a tunnel that allows the Sauce browsers to easily access our application. Once the tunnel is established, the browser in the Sauce cloud can use it to access the localhost where we serve the pages of the generated sites. A little code will make this clearer:

Here is a short excerpt from the .travis.yml file:

addons:
 sauce_connect:
   username: princu7
 jwt:
   secure: FslueGK2gtPHkRANMpUlGyCGsr1jTVuaKpP+SvYUxBYh5zbz73GMq+VsqlE29IZ1ER1+xMfWuCCvg3VA7HePyN6hzoZ/t0LADureYVPur6R5ZJgqgQpBinjpytIjo2BhN3NqaNWaIJZTLDSAT76R7HuNm01=

As we can see from the code, we have installed the sauce_connect addon and then added the sauce_username and sauce_access_key which we got when we registered on the cloud. Now, what is this gibberish we are seeing? Well, that is actually the sauce_access_key, just in its encrypted form. Generally, it is not good practice to show access keys in the source code; anyone else could then use them and harm the resources allocated to us. You can read all about encrypting environment variables and JWT (JSON Web Tokens) here:

https://docs.travis-ci.com/user/environment-variables/
https://docs.travis-ci.com/user/jwt

So, this sets up our tunnel to the Sauce cloud. Here is one of the screenshots showing that our tunnel is open and tests can be run through it.


After this, our next step is to make our test scripts run in the Sauce cloud through the tunnel. We already use the Mocha testing framework in the project, and we can easily use Mocha to run our client-side tests too. Here is a link to study it in a little more detail:

http://samsaccone.com/posts/testing-with-travis-and-sauce-labs.html

This is a short excerpt of the code from the test script

describe("Running Selenium tests on Chrome Driver", function() {
 this.timeout(600000);
 var driver;
 before(function() {
   if (process.env.SAUCE_USERNAME !== undefined) {
     driver = new webdriver.Builder()
       .usingServer('http://' + process.env.SAUCE_USERNAME + ':' + process.env.SAUCE_ACCESS_KEY + '@ondemand.saucelabs.com:80/wd/hub')
       .withCapabilities({
         'tunnel-identifier': process.env.TRAVIS_JOB_NUMBER,
         build: process.env.TRAVIS_BUILD_NUMBER,
         username: process.env.SAUCE_USERNAME,
         accessKey: process.env.SAUCE_ACCESS_KEY,
         browserName: "chrome"
       }).build();
   } else {
     driver = new webdriver.Builder()
       .withCapabilities({
         browserName: "chrome"
       }).build();
   }
 });

 after(function() {
   return driver.quit();
 });

 describe('Testing event page', function() {

   before(function() {
     eventPage.init(driver);
      eventPage.visit('http://localhost:5000/live/preview/a@a.com/FOSSASIASummit');
   });

   it('Checking the title of the page', function(done) {
     eventPage.getEventName().then(function(eventName) {
       assert.equal(eventName, "FOSSASIA Summit");
       done();
     });
   });
 });
});

Without going too much into the detail, I would like to offer a brief overview of what is going on. At a high level, before starting any tests, we check whether the test is being run in a Travis environment. If so, we set up the webdriver appropriately so that it runs on the Sauce cloud through the tunnel which we opened previously. We also specify the browser on which we would like to run the tests, which in this case is Chrome.

After this preliminary setup is done, we move on to the actual tests. Currently, we only have a basic test to check whether the title of the generated event site is correct or not. We had generated FOSSASIA Summit in the earlier part of the test script, so we just open the site and check its title, which should obviously be 'FOSSASIA Summit'. If due to some error that is not the case, an error will be thrown and the Travis build will fail. Here is the screenshot of a successful passing test:


More tests will be added over the upcoming weeks.
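For reference, the eventPage object used in the test above is a small page-object helper that wraps the WebDriver calls. A rough sketch of what it could look like is shown below; the method names are taken from the test, but the implementation details are assumptions rather than the project's actual code.

// eventPage.js -- hypothetical page object for the generated event site
var eventPage = {
  init: function(driver) {
    // keep a reference to the WebDriver instance created in the test
    this.driver = driver;
  },
  visit: function(url) {
    // navigate the remote browser to the generated site
    return this.driver.get(url);
  },
  getEventName: function() {
    // here we simply read the page title; it could equally read a heading element's text
    return this.driver.getTitle();
  }
};

module.exports = eventPage;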

Resources:


Using AutoCompleteTextView for interactive search in Open Event Android App

Providing a search option is essential in the Open Event Android app to make it easy for the user to see only the desired results. But it can be difficult to implement this with good performance if the data set is large, so simply providing a list to scroll through may not be enough or efficient. AutoCompleteTextView provides a way to search data by offering suggestions after the user types in some initial letters of the search query.

How does it work? We feed the data to an adapter which is attached to the view, so when a user starts typing the query, suggestions with similar names start appearing in the form of a list.

For example, see above: typing "Hall" suggests the entries which have the word "Hall" in them, making it easier for the user to search.

Let's see how to implement it. As the first step, declare the view in the XML layout like this. Our view goes by the id "map_toolbar", with a white colour for the text that will appear in it. The input type signifies that autocomplete and autocorrect are enabled.

<AutoCompleteTextView
       android:id="@+id/map_toolbar"
       android:layout_width="match_parent"
       android:layout_height="wrap_content"
       android:ems="12"
       android:hint="@string/search"
       android:shadowColor="@color/white"
       android:imeOptions="actionSearch"
       android:inputType="textAutoComplete|textAutoCorrect"
       android:textColorHint="@color/white"
       android:textColor="@color/white"
/>

Now initialise the adapter in the fragment/activity with the list searchItems containing the information about the locations. This code lives in a fragment, so modify things accordingly; textView is the AutoCompleteTextView that we initialised. To explain it further: when a user clicks on any item from the suggestions, the soft keyboard hides. You can define the desired operation there.

Setting up AutoCompleteTextView with the locations

ArrayAdapter<String> adapter = new ArrayAdapter<>(getActivity(), android.R.layout.simple_dropdown_item_1line, searchItems);
textView.setAdapter(adapter);

textView.setOnItemClickListener((parent, view, position, id) -> {

    // Things you want to do on clicking the item

    View mapView = getActivity().getCurrentFocus();

    if (mapView != null) {
        InputMethodManager imm = (InputMethodManager) getActivity().getSystemService(Context.INPUT_METHOD_SERVICE);
        imm.hideSoftInputFromWindow(mapView.getWindowToken(), 0);
    }
});

See the complete code here to find the implementation of AutoCompleteTextView in the map fragment of Open Event Android app.
