Implementing QR Code Detector in Open Event Organizer App

One of the main features of the Open Event Organizer App is to scan a QR code from an attendee's ticket to validate their entry to an event. The app uses Google's Vision API library, com.google.android.gms.vision.barcode, for QR code detection. In this blog, I talk about how to use this library to implement QR code detection with dynamic frame support in an Android app. The library uses the term barcode for all the supported formats, including QR code, so I use the term barcode for the QR code format throughout this post.

We use Google's Dagger for dependency injection in the app, so all the barcode related dependencies are injected into the activity using Dagger. There are two main classes involved: BarcodeDetector and CameraSource. The basic workflow is to create a BarcodeDetector object, which handles QR code detection, add a SurfaceView in the layout which the CameraSource uses to show the camera preview to the user, and pass both of these to the CameraSource. Enough talk, let's look into the code from here on. If you are not familiar with Dagger, I strongly suggest you have a look at a tutorial introducing Dagger dependency injection first.

So we have a barcode module class which takes care of creating the BarcodeDetector and the CameraSource.

@Provides
BarcodeDetector providesBarCodeDetector(Context context) {
   BarcodeDetector barcodeDetector = new BarcodeDetector.Builder(context)
       .setBarcodeFormats(Barcode.QR_CODE)
       .build();
   return barcodeDetector;
}

@Provides
CameraSource providesCameraSource(Context context, BarcodeDetector barcodeDetector) {
   return new CameraSource
       .Builder(context, barcodeDetector)
       ...
       .build();
}

 

You can see in the code that the BarcodeDetector is passed to the CameraSource builder. Now comes the preview part. The user of the app should be able to see what is actually being detected. Google has provided samples showing how to do that, with some classes that you can add to your project as is. The classes are BarcodeGraphic, CameraSourcePreview, GraphicOverlay and BarcodeGraphicTracker.

CameraSourcePreview is the custom view used in the QR detecting layout for the preview; it handles all the SurfaceView related work. In addition, BarcodeGraphic, which extends GraphicOverlay, is used to draw dynamic info based on the QR code detected; we use it to draw a frame around the detected QR code. BarcodeGraphicTracker is used to receive newly detected items, add a graphical representation to an overlay, update the graphics as the item changes, and remove the graphics when the item goes away.

Override the draw method of BarcodeGraphic according to how you want to show results on the screen once a barcode is detected. The method in the Organizer app looks like:

@Override
public void draw(Canvas canvas) {
   if (barcode == null) {
       return;
   }
   // Draws the bounding box around the barcode.
   RectF rect = new RectF(barcode.getBoundingBox());
   ...
   int width = (int) ((rect.right - rect.left)/3);
   int height = (int) ((rect.bottom - rect.top)/3);

   canvas.drawBitmap(Bitmap.createScaledBitmap(frameBottomLeft, width, height, false), rect.left, rect.top, null);
   ...
   canvas.drawRect(rect, rectPaint);
}

 

The class has a Barcode field which gets updated on barcode detection. In the above method, the rect field gets the dimensions of the bounding box of the detected QR code, and frames are drawn at the vertices of the rect accordingly. Include CameraSourcePreview enclosing GraphicOverlay in the activity's layout.

<...CameraSourcePreview
   android:id="@+id/preview"
   android:layout_width="match_parent"
   android:layout_height="match_parent">

   <...GraphicOverlay />

</...CameraSourcePreview>

 

CameraSourcePreview and GraphicOverlay are obtained from the layout in the activity. Pass the CameraSource and the GraphicOverlay to the CameraSourcePreview using its start method. The last part left is setting a processor on the BarcodeDetector to connect it to the GraphicOverlay. Use BarcodeGraphicTracker, which connects the GraphicOverlay to the BarcodeDetector. This is done by passing a BarcodeTrackerFactory, which has a create method for BarcodeGraphicTracker, to a MultiProcessor. The code looks like:

barcodeDetector.setProcessor(
   new MultiProcessor.Builder<>(
       new BarcodeTrackerFactory(graphicOverlay)).build());

 

Now the BarcodeDetector is connected to the layout. This will update the preview on the layout, as overridden in the draw method of BarcodeGraphic, on each barcode detection.
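For reference, starting the preview itself (the step mentioned above) can look roughly like this. This is a minimal sketch assuming the CameraSourcePreview and GraphicOverlay classes copied from Google's samples; the field names are illustrative.

// cameraSourcePreview and graphicOverlay are obtained from the layout,
// cameraSource is the CameraSource built above
try {
   cameraSourcePreview.start(cameraSource, graphicOverlay);
} catch (IOException | SecurityException e) {
   // The camera could not be started, release the source so the camera is freed
   cameraSource.release();
}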

Links:
Google’s Vision API – link
Google Dagger github repo link – https://github.com/google/dagger


Adding API endpoint to SUSI.AI for Skill Historization

SUSI Skill CMS is an editor to write and edit skills easily. It follows an API centric approach where the SUSI server acts as the API server. Using the Skill CMS we can browse the history of a skill, where we get the commit ID, the commit message and the name of the author who made the changes to that skill. In this blog post we will see how to fetch the complete commit history of a skill in the SUSI skill repository. A skill is a set of intents. One text file represents one skill; it may contain several intents which all belong together. SUSI skills are stored in the susi_skill_data repository. We can access any skill based on a four-tuple of parameters: model, group, language and skill. For managing version control in the skill data repository, the following dependency is added to build.gradle. JGit is a library which implements Git functionality in Java.

dependencies {
 compile 'org.eclipse.jgit:org.eclipse.jgit:4.6.1.201703071140-r'
}

To implement our servlet we need it to extend AbstractAPIHandler. In the SUSI server, an abstract class AbstractAPIHandler, extending HttpServlet and implementing the APIHandler interface, is provided.

public class HistorySkillService extends AbstractAPIHandler implements APIHandler {}

The AbstractAPIHandler checks the permissions of the user by comparing the user's roles with the minimal base role each servlet requires. Thus, to specify the user permission for a servlet we need to override the getMinimalBaseUserRole method.

@Override
public BaseUserRole getMinimalBaseUserRole() {
    return BaseUserRole.ANONYMOUS;
}

User roles can be Admin, Privilege, User or Anonymous. In our case it is Anonymous: a user need not log in to access this endpoint.

@Override
public String getAPIPath() {
    return "/cms/getSkillHistory.json";
}

This method sets the API endpoint path. One needs to send requests to http://api.susi.ai/cms/getSkillHistory.json to get the modification history of a skill. Next we implement the serviceImpl method, where we process the user request and return the service response.

@Override
public ServiceResponse serviceImpl(Query call, HttpServletResponse response, Authorization rights, final JsonObjectWithDefault permissions) {

    String model_name = call.get("model", "general");
    File model = new File(DAO.model_watch_dir, model_name);
    String group_name = call.get("group", "knowledge");
    File group = new File(model, group_name);
    String language_name = call.get("language", "en");
    File language = new File(group, language_name);
    String skill_name = call.get("skill", "wikipedia");
    File skill = new File(language, skill_name + ".txt");

    JSONArray commitsArray = new JSONArray();
    boolean success = false;
    String path = skill.getPath().replace(DAO.model_watch_dir.toString(), "models");

    // Open the skill data repository and walk the commit log of the skill file
    FileRepositoryBuilder builder = new FileRepositoryBuilder();
    try {
        Repository repository = builder.setGitDir((DAO.susi_skill_repo))
                .readEnvironment() // scan environment GIT_* variables
                .findGitDir()      // scan up the file system tree
                .build();
        try (Git git = new Git(repository)) {
            Iterable<RevCommit> logs = git.log().addPath(path).call();
            int i = 0;
            for (RevCommit rev : logs) {
                JSONObject commit = new JSONObject();
                commit.put("commitRev", rev);
                commit.put("commitName", rev.getName());
                commit.put("commitID", rev.getId().getName());
                commit.put("commit_message", rev.getShortMessage());
                commit.put("author", rev.getAuthorIdent().getName());
                commitsArray.put(i, commit);
                i++;
            }
            success = true;
        } catch (GitAPIException e) {
            e.printStackTrace();
            success = false;
        }
    } catch (IOException e) {
        e.printStackTrace();
        success = false;
    }
    if (commitsArray.length() == 0) {
        success = false;
    }

    JSONObject result = new JSONObject();
    result.put("commits", commitsArray);
    result.put("success", success);
    return new ServiceResponse(result);
}

To access any skill we need the parameters model, group and language. We get these through the call.get method, where the first parameter is the key whose value we want and the second parameter is the default value. Based on the received model, group and language we build the path of the skill file inside the susi_skill_data repository, read the git environment variables and scan up the file system tree using FileRepositoryBuilder's build() method. Next we fetch all the commit logs of the skill file, store them in the commits JSON array and finally return them in the service response along with a success flag. In case of an exception, the response is returned with the success flag set to false.

We have successfully implemented the servlet. Check the working of the endpoint by sending a request like http://api.susi.ai/cms/getSkillHistory.json?model=general&group=knowledge&language=en&skill=bitcoin and checking the response.

SUSI Skill CMS uses this endpoint to fetch the skill history; try it out at http://skills.susi.ai/browseHistory


Writing Selenium Tests for Checking Bookmark Feature and Search functionality in Open Event Webapp

We integrated Selenium testing in the Open Event Webapp and are in full swing writing tests to check the major features of the webapp. Tests help us catch issues/bugs which have been solved earlier but keep resurfacing when new changes are incorporated in the repo. In this post, I describe the major features that we are testing.

Bookmark Feature
The first major feature that we want to test is the bookmark feature. It allows users to mark the sessions they are interested in and view them all at once with a single click on the starred button. We want to ensure that the feature works on all the pages.

Let us discuss the design of the test. First, we start with the tracks page. We select a few sessions (2 here) for the test and note down their session ids. Finding an element by its id is simple in Selenium and can be done easily. After we find the session element, we find the bookmark button inside it (with the help of its class name) and click on it to mark the session. After that, we click on the starred button to display only the marked sessions and proceed to count the number of visible session elements on the page. If the number of visible session elements comes out to be 2 (the ones that we marked), it means that the feature is working. If the number deviates, it indicates that something is wrong and the test fails.


Here is a part of the code implementing the above logic. The whole code can be seen here

// Returns the number of visible session elements on the tracks page
TrackPage.getNoOfVisibleSessionElems = function() {
 return this.findAll(By.className('room-filter')).then(this.getElemsDisplayStatus).then(function(displayArr) {
   return displayArr.reduce(function(counter, value) { return value == 1 ? counter + 1 : counter; }, 0);
 });
};
// Bookmark the sessions, scrolls down the page and then count the number of visible session elements
TrackPage.checkIsolatedBookmark = function() {
 // Sample sessions having ids of 3014 and 3015 being checked for the bookmark feature
 var sessionIdsArr = ['3014', '3015'];
 var self = this;
 return  self.toggleSessionBookmark(sessionIdsArr).then(self.toggleStarredButton.bind(self)).then(function() {
   return self.driver.executeScript('window.scrollTo(0, 400)').then(self.getNoOfVisibleSessionElems.bind(self));
 });
};
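The toggleSessionBookmark helper used above is not part of this excerpt; conceptually it clicks the bookmark button inside each chosen session element, roughly like the following sketch (the 'bookmark' class name is an assumption for illustration, not necessarily the webapp's actual class name):

// Sketch: click the bookmark button inside each session element, one after another
TrackPage.toggleSessionBookmark = function(sessionIdsArr) {
 var self = this;
 return sessionIdsArr.reduce(function(promise, sessionId) {
   return promise.then(function() {
     return self.find(By.id(sessionId)).findElement(By.className('bookmark')).click();
   });
 }, Promise.resolve());
};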

Here is the excerpt of code which matches the actual number of visible session elements to the expected number. You can view the whole test script here

//Test for checking the bookmark feature on the tracks page
it('Checking the bookmark toggle', function(done) {
 trackPage.checkIsolatedBookmark().then(function(num) {
   assert.equal(num, 2);
   done();
 }).catch(function(err) {
   done(err);
 });
});

Now, we want to test this feature on the other pages: the schedule and rooms pages. We could simply follow the same approach as on the tracks page, but it is expensive time-wise: checking the visibility of all the session elements present on the page takes quite some time due to the large number of sessions. We need to think of a different approach. We had already marked two elements on the tracks page. We then go to the schedule page and click on the starred mode. We calculate the current height of the page, unmark a session, and then recalculate the height of the page. If the bookmark feature is working, the height should decrease; this determines the correctness of the test. We follow the same approach on the rooms page too. While this is not absolutely rigorous, it is a good way to check the feature. We have already employed the exact method on the tracks page, so there was no need to apply it on the schedule and rooms pages as well, since that would have increased the testing time by quite a large margin.

Here is an excerpt of the code. The whole work can be viewed here

RoomPage.checkIsolatedBookmark = function() {
 // We go into starred mode and unmark sessions having id 3015 which was marked previously on tracks pages. If the bookmark feature works, then length of the web page would decrease. Return true if that happens. False otherwise
 var getPageHeight = 'return document.body.scrollHeight';
 var sessionIdsArr = ['3015'];
 var self = this;
 var oldHeight, newHeight;
 return self.toggleStarredButton().then(function() {
   return self.driver.executeScript(getPageHeight).then(function(height) {
     oldHeight = height;
     return self.toggleSessionBookmark(sessionIdsArr).then(function() {
       return self.driver.executeScript(getPageHeight).then(function(height) {
         newHeight = height;
         return oldHeight > newHeight;
       });
     });
   });
 });
};

Search Feature
Now, let us move to testing the search feature in the webapp. The main object of focus is the search bar. It is present on all the pages: the tracks, rooms, schedule, and speakers pages, and it allows the user to search for a particular session or speaker, fetching results instantly as they type.

We want to ensure that this feature works across all the pages. The tracks, rooms and schedule pages are similar in that they all display all the sessions of the event, albeit in different manners. Any query made on any one of these pages should fetch the same number of session elements on the other pages too. The speakers page contains mostly information about the speakers only. So, we write a single common test for the former three pages and a slightly different test for the latter.

Designing a test for this feature is interesting. We want it to be fast and accurate. A simple way to approach it is to think of the components involved. One is the query text which will be entered in the search input bar. The other is the list of sessions which match the entered text and will be visible on the page after it has been entered. We decide upon a text string and a list of session ids. This list contains the ids of the sessions that should be visible for the above query and also a few ids of sessions which do not match the entered text. During the actual test, we enter the chosen text string and check the visibility of the sessions present in the chosen list. If the result matches the expected output, it means the feature is working well and the test passes. Otherwise, it means that there is some problem with the implementation and the test fails.

For example, we decide upon the search text 'Mario' and then note the ids of the sessions which should be visible in that search.


Suppose the list of ids comes out to be

['3017', '3029', '3013', '3031']

We then add a few more session ids which should not be visible for that search text: two extra false ids, 3014 and 3015. The modified list would be something like this

['3017', '3029', '3013', '3031', '3014', '3015']

Now we run the test and determine the visibility of the sessions present in the above list, compare it to the expected output and accordingly determine the fate of the test.

Expected: [true, true, true, true, false, false]
Actual Output: [true, true, true, true, true, true]

Then the test would fail since the last two sessions were not expected to be visible.

Here is some code related to it. The whole work can be seen here

function commonSearchTest(text, idList) {
 var self = this;
 var searchText = text || 'Mario';
 // First 4 session ids should show up on default search text and the last two not. If no idList provided for testing, use the idList for the default search text
 var arrId = idList || ['3017', '3029', '3013', '3031', '3014', '3015'];
 var promise = new Promise(function(resolve) {
   self.search(searchText).then(function() {
     var promiseArr = arrId.map(function(curElem) {
       return self.find(By.id(curElem)).isDisplayed();
     });

     self.resetSearchBar().then(function() {
       resolve(Promise.all(promiseArr));
     });
   });
 });
 return promise;
}

Here is the code for comparing the expected and the actual output. You can view the whole file here

it('Checking search functionality', function(done) {
 schedulePage.commonSearchTest().then(function(boolArr) {
   assert.deepEqual(boolArr, [true, true, true, true, false, false]);
   done();
 }).catch(function(err) {
   done(err);
 });
});

The search functionality test for the speakers page is done in the same style; instead of session ids, we work with speaker ids there. Everything else is done in a similar manner.


Implementing Tickets API on Open Event Frontend to Display Tickets

This blog article will illustrate how tickets are displayed on the public event page in Open Event Frontend, using the tickets API. It will also illustrate the use of the addon ember-data-has-many-query and the role it plays in fetching data from various APIs. Our discussion will primarily involve the public/index route. The primary endpoint of the Open Event API we are concerned with for fetching the tickets of an event is

GET /v1/events/{event_identifier}/tickets

Since there are multiple routes under public, including public/index, and they share some common event data, it is more efficient to make the call for the Event on the public route rather than repeating it for each sub-route. So the model for the public route is:

model(params) {
  return this.store.findRecord('event', params.event_id, { include: 'social-links' });
}

This model takes care of fetching all the event data but, as we can see, the tickets are not included in the include parameter. The primary reason is the fact that the tickets data is not required on every public route; it is required only on the index route. However, tickets have a has-many relationship to events, and it is not possible to make a call for them without fetching the entire event data again. This is where a really useful addon, ember-data-has-many-query, comes in.

To quote the official documentation,

Ember Data's DS.Store supports querying top-level records using the query function. However, DS.hasMany and DS.belongsTo cannot be queried in the same way. This addon provides a way to query has-many and belongs-to relationships.

So we can now proceed with the model for the public/index route.

model() {
  const eventDetails = this._super(...arguments);
  return RSVP.hash({
    event   : eventDetails,
    tickets : eventDetails.query('tickets', {
      filter: [
        {
          and: [
            {
              name : 'sales-starts-at',
              op   : 'le',
              val  : moment().toISOString()
            },
            {
              name : 'sales-ends-at',
              op   : 'ge',
              val  : moment().toISOString()
            }
          ]
        }
      ]
    })
  });
}

We make use of this._super(…arguments) to reuse the event data fetched in the model of the public route, eliminating the need for a separate API call. Next, the ember-data-has-many-query addon allows us to query the tickets of the event, and we apply filters restricting the tickets to only those whose sale is currently live.
After the tickets are fetched, they are passed on to the ticket-list component for display. We also need to take care of the case where there might be no tickets because the event organiser is using an external ticketing URL; this can be easily handled via the is-ticketing-enabled property of events. In case ticketing is not enabled, we don't render the ticket-list component; instead, a button linked to the external ticket URL is rendered. In case ticketing is enabled, the various properties which need to be computed, such as the total price of the tickets based on user input, are handled by the ticket-list component itself.

{{#if model.event.isTicketingEnabled}}
  {{public/ticket-list tickets=model.tickets}}
{{else}}
<div class="ui grid">
  <div class="ui row">
      <a href="{{ticketUrl}}" class="ui right labeled blue icon button">
        <i class="ticket icon"></i>
        {{t 'Order tickets'}}
      </a>
  </div>
  <div class="ui row muted text">
      {{t 'You will be taken to '}} {{ticketUrl}} {{t ' to complete the purchase of tickets'}}
  </div>
</div>
{{/if}}

This is the most efficient way to fetch tickets and ensures that only the relevant data is passed to the concerned ticket-list component, without making any extra API calls. It is made possible by the ember-data-has-many-query addon, with very minor changes required in the adapter and the event model: all that is required is to make the adapter and the event model extend the RESTAdapterMixin and ModelMixin provided by the addon, respectively.
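A minimal sketch of those changes (assuming the mixins exported by ember-data-has-many-query; the base adapter class and file layout may differ slightly in the actual app):

// app/adapters/application.js
import DS from 'ember-data';
import HasManyQuery from 'ember-data-has-many-query';

export default DS.JSONAPIAdapter.extend(HasManyQuery.RESTAdapterMixin, {
  // existing adapter configuration stays unchanged
});

// app/models/event.js
import DS from 'ember-data';
import HasManyQuery from 'ember-data-has-many-query';

export default DS.Model.extend(HasManyQuery.ModelMixin, {
  tickets: DS.hasMany('ticket')
  // other event attributes and relationships stay unchanged
});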


Continuous Integration in Yaydoc using GitHub webhook API

In Yaydoc, Travis is used for pushing the documentation for each and every commit. But this makes us rely on a third party to push the documentation and, in the long run, won't allow us to implement new features, so we decided to do the continuous documentation pushing on our own. In order to build the documentation for each and every commit we have to know when the user pushes code. This can be achieved by using the GitHub webhook API. Basically, we have to register our API endpoint on a specific GitHub repository, and then GitHub will send a POST request to our API on each and every commit.

The “auth/ci” handler is used to get access on behalf of the user. Here we ask the user to grant Yaydoc access such as reading the public repositories, reading organization details and write permission to create a webhook on the repository. I also maintain state by setting the ci session flag to true, so that I can tell whether the callback is for a gh-pages deploy or a CI deploy.
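Conceptually, the handler redirects the user to GitHub's OAuth authorization page with the required scopes and remembers that this round trip is for CI. A rough sketch follows; the scope names, environment variable and session handling here are assumptions for illustration, not Yaydoc's exact code.

router.get('/auth/ci', function (req, res) {
  // remember that the upcoming OAuth callback belongs to CI registration, not a gh-pages deploy
  req.session.ci = true;

  var params = [
    'client_id=' + process.env.GITHUB_CLIENT_ID,
    // public repositories, organization read access and webhook write access
    'scope=' + encodeURIComponent('public_repo read:org admin:repo_hook')
  ].join('&');

  res.redirect('https://github.com/login/oauth/authorize?' + params);
});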

On callback, I keep the necessary information like username, access_token, id and email in the session. Then, based on the ci session state, I redirect to the appropriate handler; in this case I redirect to “ci/register”.

After redirecting to “ci/register”, I fetch all the public repositories using the GitHub API and ask the user to choose the repository on which they want to integrate Yaydoc CI.

router.post('/register', function (req, res, next) {
  var repositoryName = req.body.repository;
  request({
    url: `https://api.github.com/repos/${req.session.username}/${repositoryName}/hooks?access_token=${req.session.token}`,
    method: 'POST',
    json: {
      name: "web",
      active: true,
      events: [
        "push"
      ],
      config: {
        url: process.env.HOSTNAME + '/ci/webhook',
        content_type: "json"
      }
    }
  }, function(error, response, body) {
    repositoryModel.newRepository(repositoryName,
      req.session.username,
      req.session.githubId,
      crypter.encrypt(req.session.token),
      req.session.email)
      .then(function(result) {
        res.render("index", {
          showMessage: true,
          messages: `Thanks for registering with Yaydoc. Hereafter documentation will be pushed to GitHub pages on each commit.`
        })
      })
  })
})

After the user chooses the repository, the browser sends a POST request to “ci/register”. I then register the webhook on the repository and save the repository and user details in the database, so that they can be used when GitHub sends a request and the documentation has to be pushed to GitHub Pages.

router.post('/webhook', function(req, res, next) {
  var event = req.get('X-GitHub-Event')
   if (event == 'push') {
      repositoryModel.findOneRepository(
        {
          githubId: req.body.repository.owner.id,
          name: req.body.repository.name
        }
      ).
      then(function(result) {
        var data = {
          email: result.email,
          gitUrl: req.body.repository.clone_url,
          docTheme: "",
        }
        generator.executeScript({}, data, function(err, generatedData) {
            deploy.deployPages({}, {
              email: result.email,
              gitURL: req.body.repository.clone_url,
              username: result.username,
              uniqueId: generatedData.uniqueId,
              encryptedToken: result.accessToken
            })
        })
      })
      res.json({
        status: true
      })
   }
})

Once the webhook is registered, GitHub will send a request to the URL which we registered on the repository, in our case "https://yaydoc.herokuapp.com/ci/webhook". The type of the event can be known by reading the 'X-GitHub-Event' header. Right now I'm registering only for the push event, so we'll only be getting push events. GitHub also gives us the repository details in the request body.

When the user makes a commit to the repository, GitHub sends a POST request to Yaydoc's server. We then get the repository name and the GitHub user ID from the request body. Using these, I retrieve the access token from the database which we stored when the user registered the repository for CI. The documentation is generated using the generate script and pushed to GitHub pages using the deploy script.

Now Yaydoc generates documentation on every push the user makes to the repository, and doing this ourselves will enable us to integrate new features into our own custom environment. We also plan to build a full-featured CI platform.


Implement Marker Clustering in the Open Event Android App

Markers are an integral part of any map based service. In the Open Event Android App, for samples like Mozilla All Hands 2017, there are a lot of microlocations that the organizers want to integrate into the app's map fragment. Due to the presence of a large number of markers, the map fragment gets cluttered, harming the user experience. As an example, imagine yourself as the user and seeing the map as in the image given below!

Therefore, to tackle a problem like this, the markers are grouped into clusters. On clicking a cluster, the markers get de-clustered and fall into their respective locations with the map zoomed in.

Implementation

First and foremost, define the libraries to be used by the utilities in the build.gradle of your app module. Make sure to import the latest versions.

// Googleplay Variant
googleplayCompile 'com.google.android.gms:play-services-maps:10.2.6'
googleplayCompile 'com.google.android.gms:play-services-location:10.2.6'
googleplayCompile 'com.google.maps.android:android-maps-utils:0.4'

 

Implement the ClusterItem interface in your location POJO, which will house a marker's location. The POJO therefore overrides the getPosition() method of the ClusterItem interface, returning the LatLng.

public class MicrolocationClusterWrapper implements ClusterItem {

@Override
public LatLng getPosition() {
   return latLng;
}

}

 

Create a custom cluster renderer class that extends the default cluster renderer with your location POJO as the type parameter. Implement the ClusterManager's OnClusterItemClickListener to listen to marker clicks and add custom colors to them. Set the custom marker properties before the marker items are rendered, using the markerOptions inside onBeforeClusterItemRendered().

@Override
   protected void onBeforeClusterItemRendered(MicrolocationClusterWrapper item, MarkerOptions markerOptions) {
       super.onBeforeClusterItemRendered(item, markerOptions);

       markerOptions.title(item.getMicrolocation().getName());
       if (microlocationClusterWrapper != null && item.equals(microlocationClusterWrapper)) {
           markerOptions.icon(ImageUtils.vectorToBitmap(context, R.drawable.map_marker, R.color.color_primary));
       } else {
           markerOptions.icon(ImageUtils.vectorToBitmap(context, R.drawable.map_marker, R.color.dark_grey));
       }
   }

   @Override
   protected void onClusterItemRendered(final MicrolocationClusterWrapper clusterItem, Marker marker) {
       super.onClusterItemRendered(clusterItem, marker);
       clusterItem.setMarker(marker);
  }

   @Override
   public boolean onClusterItemClick(MicrolocationClusterWrapper item) {
       if (microlocationClusterWrapper != null) {
           getMarker(microlocationClusterWrapper).setIcon(ImageUtils.vectorToBitmap(context, R.drawable.map_marker, R.color.dark_grey));
       }
       microlocationClusterWrapper = item;
       getMarker(item).setIcon(ImageUtils.vectorToBitmap(context, R.drawable.map_marker, R.color.color_primary));
       return false;
   }
}

 

Finally, in your map fragment, initialize your map, the cluster manager class and the custom cluster renderer you just created. Implement the OnMapReadyCallback so that the GoogleMap object is not null. Remember to pass the cluster renderer as the listener for the cluster manager's cluster item clicks. Use setOnClusterClickListener to zoom the map on the click of a cluster.
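A rough sketch of that initialization inside onMapReady() could look like the following; the renderer class name, the microlocations list and the wrapper constructor are illustrative assumptions, not the app's exact code.

@Override
public void onMapReady(GoogleMap googleMap) {
   mMap = googleMap;

   // The cluster manager groups nearby markers into clusters
   clusterManager = new ClusterManager<>(getContext(), mMap);
   clusterRenderer = new MicrolocationClusterRenderer(getContext(), mMap, clusterManager);
   clusterManager.setRenderer(clusterRenderer);

   // Let the cluster manager drive clustering on camera changes and marker taps
   // (listener wiring may differ slightly depending on the utils library version)
   mMap.setOnCameraIdleListener(clusterManager);
   mMap.setOnMarkerClickListener(clusterManager);

   // One cluster item per microlocation
   for (Microlocation microlocation : microlocations) {
       clusterManager.addItem(new MicrolocationClusterWrapper(microlocation));
   }

   handleClusterEvents();
}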

private void handleClusterEvents() {
   clusterManager.setOnClusterItemClickListener(clusterRenderer);

   clusterManager.setOnClusterClickListener(cluster -> {
               mMap.animateCamera(CameraUpdateFactory.newLatLngZoom(
                       cluster.getPosition(), (float) Math.floor(mMap
                               .getCameraPosition().zoom + 2)), 300,
                       null);

               return true;
           });

   mMap.setOnMapClickListener(clusterRenderer);
}

 

Conclusion

Maps are an integral part of any event based app, and marker clustering undoubtedly enhances the user experience in them.

Resources

  • Marker Clustering Android documentation

https://developers.google.com/maps/documentation/android-api/utility/marker-clustering

  • Complete Code Reference

https://github.com/fossasia/open-event-android/pull/1777

  • Marker Customization in the case of Clustering

https://github.com/googlemaps/google-maps-ios-utils/issues/21


Resource Injection Using ButterKnife in Loklak Wok Android

Loklak Wok Android, being a sophisticated Android app, uses a lot of views, and most of them are manipulated at runtime. In Android, to play with a View or ViewGroup defined in XML at runtime, developers have to add the following line:

(TypeOfView) parentView.findViewById(R.id.id_of_view);

 

This leads to lengthy code. And very often, more than one view responds to a particular event, for example hiding views if a network request fails and showing a "Try Again!" message to the user. Let's say you have to hide 4 views; are you going to do the following?

view1.setVisibility(View.GONE);
view2.setVisibility(View.GONE);
view3.setVisibility(View.GONE);
view4.setVisibility(View.GONE);
textView.setVisibility(View.VISIBLE); // has "Try Again!" message.

// more 5 lines of code when hiding textView and displaying 4 other Views

 

Surely not! And the old fashioned way to get a string value defined as a resource in strings.xml:

String appName = getActivity().getResources().getString(R.string.app_name);

 

Surely, all this works, but being a developer working on a sophisticated app, you would like to focus on the logic of the app rather than scratching your head debugging whether you did findViewById properly, whether you typecast it to the proper view, or where you missed changing the visibility of a view in response to an event.

Well, all of this can be easily handled by using a library which provides you the dependency, here the resources. All you need to do is declare your resources, and that's it; the library provides the resources to you, without you needing to initialize them using findViewById. So let's dive in and see how ButterKnife is used in Loklak Wok Android to handle these issues.

Adding ButterKnife to Android Project

In the app/build.gradle:

dependencies {
    compile 'com.jakewharton:butterknife:8.6.0'
    annotationProcessor 'com.jakewharton:butterknife-compiler:8.6.0'
    ...
}

Dealing with Views in Fragments

When views are declared, the BindView annotation is used with the ID of the view as its parameter, for example the views in TweetHarvestingFragment:

@BindView(R.id.toolbar)
Toolbar toolbar;
@BindView(R.id.harvested_tweets_count)
TextView harvestedTweetsCountTextView;
@BindView(R.id.harvested_tweets_container)
RecyclerView recyclerView;
@BindView(R.id.network_error)
TextView networkErrorTextView;

NOTE: Views declared can’t be private.

Once the views are declared, they need to be injected. This is done using ButterKnife.bind(Object target, View source). Here, in TweetHarvestingFragment, the target will be the fragment itself and the source, i.e. the parent view, will be rootView (obtained by inflating the layout file of the fragment). All this needs to be done in the onCreateView method:

View rootView = inflater.inflate(R.layout.fragment_tweet_harvesting, container, false);
ButterKnife.bind(this, rootView);

That’s it, we are done!

The same paradigm can be used to bind views to a ViewHolder of a RecyclerView, as implemented in HarvestedTweetViewHolder:

@BindView(R.id.user_fullname)
TextView userFullname;
@BindView(R.id.username)
TextView username;
@BindView(R.id.tweet_date)
TextView tweetDate;
@BindView(R.id.harvested_tweet_text)
TextView harvestedTweetTextView;

public HarvestedTweetViewHolder(View itemView) {
   super(itemView);
   ButterKnife.bind(this, itemView);
}

 

Injecting resources like strings, dimensions, colors, drawables etc. is even easier; only the related annotation and ID need to be provided. For example, the string app_name is used in TweetHarvestingFragment to display the app name, i.e. "Loklak Wok", in the toolbar:

@BindString(R.string.app_name)
String appName;

// directly used inside onCreateView to set the title in toolbar
toolbar.setTitle(appName);

 

Using ButterKnife, onClickListeners can be implemented in separate methods, a clean way to define click events rather than polluting onCreate (in an Activity) or onCreateView (in a Fragment), as implemented in SuggestFragment:

@OnClick(R.id.clear_image_button)
public void onClickedClearImageButton(View view) {
   tweetSearchEditText.setText("");
}

 

Multiple Views responding to a single Event

Using the @BindViews annotation, a list of multiple views can be created, and then a common ButterKnife.Action can be defined to act on the list of views. In TweetHarvestingFragment, the visibility of some views is changed when a network request fails or succeeds using this feature of ButterKnife:

// declaring List of Views
@BindViews({
       R.id.harvested_tweets_count,
       R.id.harvested__tweet_count_message,
       R.id.harvested_tweets_container})
List<View> networkViews;

// defining action for views
private final ButterKnife.Action<View> VISIBLE = (view, index) -> view.setVisibility(View.VISIBLE);
private final ButterKnife.Action<View> GONE = (view, index) -> view.setVisibility(View.GONE);

// applying action on the views
ButterKnife.apply(networkViews, VISIBLE);
ButterKnife.apply(networkViews, GONE);


Implement Caching in the Live Feed of Open Event Android App

In the Open Event Android App, a live feed from the event's Facebook page was recently implemented. Since it is a live feed, it was decided that it was futile to store it in the Realm database of the app. As the data of the live feed didn't persist anywhere, the feed used to be empty when the app ran without an internet connection.

To solve the problem of data persistence, it was decided to store the feed in the cache. Now, there were two paths before us – use retrofit/okhttp cache management or use volley. Since retrofit is already used to make the API requests in the app, we went with the former. To implement caching with retrofit, the API response should include the cache control header. Since the response is not generated by our own server, interceptors were needed to force-rewrite the requests and responses.

Interceptors

Interceptors are a powerful mechanism that can monitor, rewrite, and retry calls. The solution was to use interceptors to rewrite the calls to force the use of the cache. Two interceptors were added: an application interceptor for the request and a network interceptor for the response.

Implementation

Create a cache file to store the response.

private static Cache provideCache() {
   Cache cache = null;
   try {
       cache = new Cache(new File(OpenEventApp.getAppContext().getCacheDir(), "facebook-feed-cache"),
               10 * 1024 * 1024); // 10 MB
   } catch (Exception e) {
       Timber.e(e, "Could not create Cache!");
   }
   return cache;
}

 

Create a network interceptor by chaining the response with the cache control header and removing the pragma header to force use of cache.

private static Interceptor provideCacheInterceptor() {
   return chain -> {
       Response response = chain.proceed(chain.request());

       // re-write response header to force use of cache
       CacheControl cacheControl = new CacheControl.Builder()
               .maxAge(2, TimeUnit.MINUTES)
               .build();

       return response.newBuilder()
               .removeHeader("Pragma")
               .header(CACHE_CONTROL, cacheControl.toString())
               .build();
   };
}

 

Create an application interceptor by chaining the request with the cache control header for stale responses and removing the pragma header to make the feed available for offline usage.

private static Interceptor provideOfflineCacheInterceptor() {
   return chain -> {
       Request request = chain.request();

       if (!NetworkUtils.haveNetworkConnection(OpenEventApp.getAppContext())) {
           CacheControl cacheControl = new CacheControl.Builder()
                   .maxStale(7, TimeUnit.DAYS)
                   .build();

           request = request.newBuilder()
                   .removeHeader("Pragma")
                   .cacheControl(cacheControl)
                   .build();
       }

       return chain.proceed(request);
   };
}

 

Finally add the cache and the two interceptors while building the okhttp client.

OkHttpClient okHttpClient = okHttpClientBuilder.addInterceptor(new HttpLoggingInterceptor()
       .setLevel(HttpLoggingInterceptor.Level.BASIC))
       .addInterceptor(provideOfflineCacheInterceptor())
       .addNetworkInterceptor(provideCacheInterceptor())
       .cache(provideCache())
       .build();
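The client built above is then handed to the Retrofit instance used for the feed requests. A minimal sketch (the base URL and converter here are placeholders, not necessarily the app's actual values):

Retrofit retrofit = new Retrofit.Builder()
       .baseUrl("https://graph.facebook.com/")           // placeholder base URL
       .addConverterFactory(GsonConverterFactory.create())
       .client(okHttpClient)                             // cache and interceptors applied here
       .build();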

 

Conclusion

Apps working without an internet connection build up a strong case of corner cases while testing. It is therefore critical to persist data, however small, to avoid crashes and a bad user experience.


Configurable Services in Loklak Search

Loklak search, being an Angular application, has a concept of wiring up code in special classes called services. These services have important characteristics which make them a powerful feature of Angular.

  • Services are shared common objects wired together by dependency injection.
  • Services are lazily instantiated at runtime.

 

The DI and the instantiation of a service are handled by Angular itself, so we don't have to bother about them. The part of a service we are always concerned about is its logic. As services are shared code, at the time of writing a service we have to be 100% sure that this is the part of the code which we want to share with our components; otherwise this can lead to a bad architecture which makes the application harder to debug.

Now, the next question which arises is: how are services different from something like redux state? Well, the difference lies in the word itself; services don't have a persistent state of their own. They are just a set of methods to separate a common piece of code from all the components into one class. These services have functions which take an input, process it and emit an output.

Services in Loklak Search

So in loklak search, the main services are the ones which, on request, fetch data from the backend API and return it to the requester. All the services in loklak search have a fixed, well-defined task, i.e. to use the API and get the data. This is how all services should be designed, with a specific set of goals. A service should never try to do what is not necessary; in other words, each service should have one and only one aim, and it should do it nicely.

In loklak search, the services are classified by the API endpoints they hit to retrieve data. These services receive the query to be searched from the requester, send the AJAX request to the correct API endpoint and return the fetched data. This is the common structure of all the loklak services: they all have a fetchQuery() method which takes a string argument query, requests the API for that query and, after completion, either returns the correct response from the API or throws an error if something goes wrong.

@Injectable()
export class SearchService {
  public fetchQuery( query: string ) {  }
  private extractData( response ) {  }
  private handleError( error ) {  }
}

Problems faced in this structure

This simple structure was good enough for the application at a basic level, but as the number of features in the application increased, our simple service became less and less flexible. The fetchQuery() method takes only a query string as an argument and requests the API for that query, along with some query parameters. These query parameters are the additional information given to the server to process and respond to our request in a particular way, like the number of results to be fetched, the aggregations to be carried out, and much more. In the earlier implementation, setting up these parameters was done solely by the service itself, so the parameters were fixed inside the service and there was no easy way to modify them. This reduced the flexibility of the service, as all requesters were bound to a fixed set of parameters, limiting the usability of the service in other places of the application.

 

Solution – Service Configs

The solution to this problem of service customizability is the service config classes. The objects of these classes contain the information about the query parameters, which various requesters can configure according to their specific needs, and our services simply configure the query params accordingly. This idea of having a shared structure for service configuration plays very nicely in our scenario, where we want extra control over the parameters which our service configures.

@Injectable()
export class SearchService {
  public fetchQuery( query: string, config: SearchServiceConfig ) {  }
  private extractData( response ) {  }
  private handleError( error ) {  }
}

This small modification to our service structure enables us to have the amount of control which we required. The config class is a fairly simple one.

export class SearchServiceConfig {
private count: number;
private source: Source;
private fields: Set<AggregationFields>;
private aggregationLimit: number;
private maximumRecords: number;
private startRecord: number;
private timezoneOffset: string;
private filters: Set<Filter>;

// Other methods to get/set these attributes
}

Now any requester can instantiate a new object of this class, set the attributes according to its needs, and pass this object to the fetchQuery() method of our service, which builds the request to be sent accordingly.
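As a usage sketch, a requester could configure and call the service like the following; the setter names and the Observable return type of fetchQuery are assumptions based on the class outline above, not the exact implementation.

// Configure only what this requester cares about; everything else keeps its default
const config: SearchServiceConfig = new SearchServiceConfig();
config.setMaximumRecords(20);       // assumed setter on the config class
config.setTimezoneOffset('0');

this.searchService
  .fetchQuery('fossasia', config)   // assumed to return an RxJS Observable
  .subscribe(
    response => { /* hand the results over to the store/state */ },
    error => { /* handle or report the error */ }
  );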

Conclusion

In conclusion, I would like to mention how these attributes were chosen to be part of the config and not just another query string. Our API endpoints accept the query string along with some attributes which filter out the results or run aggregations on various fields. So we keep all these attributes in our config, as all of these properties may vary according to the requester's needs. Therefore, this idea of configurable services not only lets us better reuse the existing models and services in multiple situations but also makes us write better, more predictable code.


Add Autocomplete SearchView in Open Event Android App

The Open Event Android App has a map showing all the locations of sessions. Every location has a marker on the map. It is difficult to find a particular location on the map because, to know the name of a location, the user has to click on its marker. Adding an autocomplete SearchView improves the user experience by providing the ability to search for a location by name and by suggesting names according to the search query. In this post I explain how to add an autocomplete SearchView in a fragment or activity.

Add search icon in actionbar

The first step is to create a menu xml file and add a search menu item in it, then inflate this menu xml file in the Fragment's onCreateOptionsMenu() method.

1. Create menu.xml file

In this file add a menu element and, inside it, the search menu item. Define the id, title, and icon of the search menu item. Add "android.support.v7.widget.SearchView" as actionViewClass, which will be used as the action view when the user clicks on the icon.

<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools">
    
   <item
        android:id="@+id/action_search"
        android:icon="@drawable/ic_search_white_24dp"
        android:title="@string/search"
        app:actionViewClass="android.support.v7.widget.SearchView"
        app:showAsAction="ifRoom | collapseActionView"/>
</menu>

2. Inflate menu.xml file in Fragment

In the fragment's onCreateOptionsMenu() method, inflate the menu.xml file using the MenuInflater's inflate() method. Then find the search menu item using the menu's findItem() method, passing the id of the search menu item as the parameter.

public void onCreateOptionsMenu(Menu menu, MenuInflater inflater) {
        super.onCreateOptionsMenu(menu, inflater);
        inflater.inflate(R.menu.menu_map, menu);
        MenuItem item = menu.findItem(R.id.action_search);
}
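Note that a fragment's onCreateOptionsMenu() is only called if the fragment reports that it participates in the options menu, so remember to call setHasOptionsMenu(true), typically in onCreate():

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Tell the system this fragment contributes items to the options menu
    setHasOptionsMenu(true);
}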

Add and initialize SearchView  

Now, after adding the search icon, we need to add SearchView and SearchAutoComplete fields in the fragment.

private SearchView searchView;
private SearchView.SearchAutoComplete mSearchAutoComplete;

Initialize the SearchView in the onCreateOptionsMenu() method by passing the search menu item to the getActionView() method of MenuItemCompat.

Here, SearchAutoComplete is a child view of the SearchView, so initialize it using the SearchView's findViewById() method, passing the id as the parameter.

searchView = (SearchView) MenuItemCompat.getActionView(item);
mSearchAutoComplete = (SearchView.SearchAutoComplete) searchView.findViewById(android.support.v7.appcompat.R.id.search_src_text);

Define properties of SearchAutoComplete

By default, the background of the drop-down menu in SearchAutoComplete is black. You can change the background using the setDropDownBackgroundResource() method. Here I'm making it white by providing a white drawable resource.

mSearchAutoComplete.setDropDownBackgroundResource(R.drawable.background_white);
mSearchAutoComplete.setDropDownAnchor(R.id.action_search);
mSearchAutoComplete.setThreshold(0);

The setDropDownAnchor() method sets the view to which the auto-complete drop down list should anchor. The setThreshold() method specifies the minimum number of characters the user has to type in the edit box before the drop down list is shown.

Create array adapter

Now it's time to create the ArrayAdapter object which will provide the data set (strings) used to run the search queries.

ArrayAdapter<String> adapter = new ArrayAdapter<>(getActivity(), android.R.layout.simple_list_item_1, searchItems);

Here searchItems is a List of strings. Now set this adapter on the mSearchAutoComplete object using the setAdapter() method.

mSearchAutoComplete.setAdapter(adapter);

Now we are all set to run the app on a device or emulator and see the search suggestions in action.

Conclusion

A SearchView with the ability to give suggestions provides a great user experience in the application.
