How to Geolocate Tweets for Emoji Heatmapper App using LokLak API

Geolocating tweets, i.e., getting the location where tweets were posted, is one of the major tasks in the Emoji Heatmapper app, as I have to plot a marker on the map according to the query searched and the location tracked from the tweet containing that query. This is easily done in the app using the LokLak Search API.
Social media such as Twitter is filled with millions of micro messages by people around the world. My geolocation task is to get the geographic location associated with a Twitter message (one containing an emoji, for Emoji Heatmapper) based on the information we have about the user and the message. Twitter provides various data APIs, and tweets are obtained as JSON objects that include the tweet text along with metadata, such as location coordinates (latitude and longitude).

Solutions

So there are a few ways to geolocate tweets:

  • Place object from tweets: Tweets delivered by the Twitter API usually include a JSON "Place" object which has a location associated with the tweet. It can have fields such as the country and city related to the place, as well as lat and long coordinates. Twitter users have the option to tag their statuses, i.e., tweets, with a place; the tagging can also be done automatically based on the user's current GPS position, if the user allows this (i.e., if GPS is switched on). For tweets containing Place objects, the geolocation has already been done by Twitter.
  • Coordinates from tweets: Some tweets are tagged with the coordinates (lat and long) of the user at the time the message was written, based on the user's current GPS position. Reverse geocoding, for example using the Google Maps APIs, can then provide detailed information about the place.
  • Location from user profile: Many users provide a location in their profile, a field with values such as "IND". These locations are almost always static, corresponding to the user's primary location rather than the location at the time the message was posted. We assume that users are not avid travellers.
  • Content-based geolocation: Geolocation can also be done on a message or set of messages based on their textual content. A user's primary location can be detected based on their dialect or the mention of regional issues such as sports teams or landmarks, for example.

Implementation

So, here is how the LokLak Search API is used in the Emoji Heatmapper app:
I have used the LokLak Search API to get the tweets which contain the query (i.e., the emoji) being searched for, along with the location of those tweets. In LokLak we perform geolocation using location information from tweet metadata and user profiles.
Try this simple query:

http://loklak.org/api/search.json?q=😄

This shows the data related to the search query, including geolocation information. Below is the output of the query: a single Twitter status that contains the search query is displayed, and the place_* and location_* fields carry the information about the location:

 {
      "timestamp": "2017-06-01T17:46:41.874Z",
      "created_at": "2017-06-01T17:46:25.000Z",
      "screen_name": "CerdaEnterprise",
      "text": "\"These boots were made for wearing... & that's just what we'll do.\" 🎶😄 Coral + guest pins = 💟 http://Www.TfjByCei.com https://pic.twitter.com/469uVWumSN",
      "link": "https://twitter.com/CerdaEnterprise/status/870335741561151488",
      "id_str": "870335741561151488",
      "canonical_id": "",
      "parent": "",
      "source_type": "TWITTER",
      "provider_type": "SCRAPED",
      "retweet_count": 0,
      "favourites_count": 0,
      "place_name": "Made",
      "place_id": "",
      "text_length": 155,
      "place_context": "ABOUT",
      "place_country": "Netherlands",
      "place_country_code": "NL",
      "place_country_center": [
        -3.5756949584817903,
        26.74012558382941
      ],
      "location_point": [
        4.7930598188159195,
        51.67666995510302
      ],
      "location_radius": 0,
      "location_mark": [
        4.793226608645116,
        51.676940788684036
      ],
      "location_source": "ANNOTATION",
      "hosts": [
        "www.tfjbycei.com",
        "pic.twitter.com"
      ],
      "hosts_count": 2,
      "links": [
        "http://Www.TfjByCei.com",
        "https://pic.twitter.com/469uVWumSN"
      ],
      "links_count": 2,
      "unshorten": {},
      "images": [
        "https://pbs.twimg.com/media/DBQNqkDUQAAkntY.jpg",
        "https://pic.twitter.com/469uVWumSN"
      ],
      "images_count": 2,
      "audio": [],
      "audio_count": 0,
      "videos": [],
      "videos_count": 0,
      "mentions": [],
      "mentions_count": 0,
      "hashtags": [],
      "hashtags_count": 0,
      "classifier_language": "english",
      "classifier_language_probability": 3.246518732845919E-16,
      "without_l_len": 96,
      "without_lu_len": 96,
      "without_luh_len": 96,
      "user": {
        "appearance_first": "2017-06-01T17:46:41.874Z",
        "profile_image_url_https": "https://pbs.twimg.com/profile_images/769877281497944064/PgsRK9C7_bigger.jpg",
        "screen_name": "CerdaEnterprise",
        "user_id": "529052548",
        "name": "The Fashion Junkies",
        "appearance_latest": "2017-06-01T17:46:41.874Z"
      }
    }

As you can see in the above JSON output from LokLak's Search API, the location we get in the output, i.e., location_mark/location_point, is used to plot the point on the map. If the location from which the tweet was posted is missing, the location from the user profile is used.
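
The app itself is written in JavaScript, but the extraction step is easy to sketch; here it is in Java with org.json, using the field names from the response above (an illustration, not the app's actual code):

import org.json.JSONArray;
import org.json.JSONObject;

public class LocationExtractor {
    // Returns {latitude, longitude} for a status, or null if no location is present.
    // LokLak stores points as [longitude, latitude].
    static double[] extractLocation(JSONObject status) {
        String field = status.has("location_mark") ? "location_mark"
                : status.has("location_point") ? "location_point" : null;
        if (field == null) {
            return null; // nothing to plot for this status
        }
        JSONArray point = status.getJSONArray(field);
        return new double[]{point.getDouble(1), point.getDouble(0)};
    }
}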

The location is then pointed out with a marker on the map.

You can find more information about the Emoji Heatmapper app, and try it out, here:

http://apps.loklak.org/details.html?q=emojiHeatmapper


Phimpme: Merging several Android Projects into One Project

To speed up our development of the version 2 of the Phimpme Android app, we decided to use some existing Open Source libraries and projects such as Open Camera app for Camera features and Leafpic for the photo gallery.

Integrating several big projects into one is a crucial step, and it is not difficult. Here are some points to ponder.

  1. Clone each project separately. Build the project and note down the features which you want to integrate. Explore the manifest file to find out the launcher activity, services and the permissions the app is taking.

I explored the Leafpic app's manifest file, found out its launcher class and replaced it with our code, and looked at the permissions the app requires, such as camera, access fine location and write external storage.

  2. Follow a bottom-up approach while integrating; it makes life easier. By a bottom-up approach I mean create a new branch and then start deleting the code which is not required. Always work in a branch so that no code is lost or messed up.

Not everything is required at the moment, so I first integrated the whole Leafpic app and then removed the splash screen. I also noted down the features that needed to be removed, such as the drawer, the search bar, etc.

  3. Remove and commit. In a big project things are interlinked, so removing one feature can actually require removing a lot of related code. My strategy is to focus on one feature and commit the code once it is removed. This lets you do a git reset --hard to a known good state at any time.

I used reset --hard on my code a lot of times because it got messed up.

  4. Licensing: the most important thing about using open source code is its license. Always follow what is written in the license; this is the way we can give credit for the authors' hard work. There are various types of licenses; three famous ones are the Apache License, the MIT License and the GNU General Public License. All have their pros and cons.

Apart from the license there can be other conditions as well. For example, Leafpic, which we are using in Phimpme, asks that its developers be informed about the project in which it is used.

  5. Be aware of file duplication. Sometimes files in the two projects have the same name, which results in clashes, so refactor them early in the code.

I refactored the MainActivity in Leafpic to LFMainActivity so that it does not clash with the existing one. Resolving the package name is also one of the tedious tasks. Android Studio already does a lot of the refactoring, but some parts are left over.

  • Make sure your manifest file contains the correct names.
  • The best way to refactor your package name in XML files is Ctrl+Shift+F in Ubuntu, or Edit → Find → Find in Path in Android Studio. Then search for the old package name and replace it with the new one. You can use Alt+J for multi-cursor editing as well.
  • Clean and rebuild the project.

Run the app at each step so that you can catch errors as soon as they appear. Also use the debugger for debugging.


Using AutoCompleteTextView for interactive search in Open Event Android App

Providing a search option is essential in the Open Event Android app to make it easy for the user to see only the desired results. But it becomes difficult to implement this with good performance if the data set is large, so simply providing a list to scroll through may not be efficient enough. AutoCompleteTextView provides a way to search data by offering suggestions after a user types in some initial letters of the search query.

How does it work? We feed the data to an adapter which is attached to the view. So, when a user starts typing the query, suggestions with similar names start appearing in the form of a list.

For example, typing "Hall" gives the user suggestions to pick an entry which has the word "Hall" in it, making it easier to search.

Let's see how to implement it. For the first step, declare the view in the XML layout like this, where our view goes by the id "map_toolbar" and uses white colour for the text that will appear in it. The input type signifies that autocomplete and autocorrect are enabled.

<AutoCompleteTextView
       android:id="@+id/map_toolbar"
       android:layout_width="match_parent"
       android:layout_height="wrap_content"
       android:ems="12"
       android:hint="@string/search"
       android:shadowColor="@color/white"
       android:imeOptions="actionSearch"
       android:inputType="textAutoComplete|textAutoCorrect"
       android:textColorHint="@color/white"
       android:textColor="@color/white"
/>

Now initialise the adapter in the fragment/activity with the list "searchItems" containing the information about the locations. This function is in a fragment, so modify things accordingly. "textView" is the AutoCompleteTextView that we initialised. To explain this function further: when a user clicks on any item from the suggestions, the soft keyboard hides. You can define the desired operation here.

Setting up AutoCompleteTextView with the locations

ArrayAdapter<String> adapter = new ArrayAdapter<>(getActivity(), android.R.layout.simple_dropdown_item_1line, searchItems);
textView.setAdapter(adapter);

textView.setOnItemClickListener((parent, view, position, id) -> {

  // Things you want to do on clicking the item go here

  View mapView = getActivity().getCurrentFocus();

  if (mapView != null) {
    InputMethodManager imm = (InputMethodManager) getActivity().getSystemService(Context.INPUT_METHOD_SERVICE);
    imm.hideSoftInputFromWindow(mapView.getWindowToken(), 0);
  }
});

See the complete code here to find the implementation of AutoCompleteTextView in the map fragment of Open Event Android app.
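
For context, here is a hedged sketch of how the searchItems list might be built and how the selected item could be used, for example to move the map camera. Microlocation, locationMap and googleMap are illustrative names and not necessarily the app's actual fields:

// Illustrative only: build the suggestion list from microlocation names and
// remember each name's coordinates so the map can be moved on selection.
List<String> searchItems = new ArrayList<>();
Map<String, LatLng> locationMap = new HashMap<>();
for (Microlocation microlocation : microlocations) {
    searchItems.add(microlocation.getName());
    locationMap.put(microlocation.getName(),
            new LatLng(microlocation.getLatitude(), microlocation.getLongitude()));
}

textView.setOnItemClickListener((parent, view, position, id) -> {
    String selected = (String) parent.getItemAtPosition(position);
    LatLng target = locationMap.get(selected);
    if (target != null) {
        // Move the camera to the selected location (assumes a GoogleMap instance)
        googleMap.animateCamera(CameraUpdateFactory.newLatLngZoom(target, 17));
    }
    // ... then hide the soft keyboard as shown above
});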


Tray Icon for SUSI Desktop Client

SUSI is an AI which replies smartly to anything that you ask. Popular assistants like Siri and Cortana have their own tray icons that run on top of all apps, so I thought of adding a tray icon to SUSI AI as well.

A tray is an icon in an OS’s notification area usually attached with a context menu.

Platform limitations:

  • On Linux, the app indicator is used if it is supported; otherwise GtkStatusIcon is used.
  • The app indicator is only shown when it has a context menu.
  • When the app indicator is used, all click events are ignored.

Starting Up:

You should have Node and ElectronJS installed

To create a tray I made two methods: one for creating the tray, and the other for creating the window that is toggled when the tray icon is clicked.

app.on('ready', () => {
 createTray()
 createWindow()
 })

app.on('ready', () => {}) is called when Electron is ready to run. It's just like the main function.

Then we create the Tray.

const createTray = () => {
 tray = new Tray(path.join(__dirname, 'static/icons/png/tray.png'))
 tray.on('right-click', toggleWindow)
 tray.on('double-click', toggleWindow)
 tray.on('click', function (event) {
   toggleWindow()

   // Show devtools when command clicked, which is handy for a developer working on SUSI
   if (window.isVisible() && process.defaultApp && event.metaKey) {
     window.openDevTools({mode: 'detach'})
   }
 })
}

new Tray(...) creates a new tray with the icon specified, here static/icons/png/tray.png.

Since there are two themes for the notification bar in macOS, I used a blue icon that looks good on both themes as the tray icon for SUSI.

The next lines make right-click and double-click toggle the window. We can also change these behaviours; for example, on right-click we could open some settings or give the user an option to close the app.

const createWindow = () => {
  window = new BrowserWindow({
    width: 300,
    height: 450,
    show: false,
    frame: false,
    fullscreenable: false,
    resizable: false,
    transparent: true,
    webPreferences: {
      // Prevents renderer process code from not running when window is
      // hidden
      backgroundThrottling: false
    }
  })
  window.loadURL(`file://${path.join(__dirname, '/static/index.html')}`)
  // show inspect element in the window
  // window.openDevTools();
  // set window to always be on the top of other windows
  window.setAlwaysOnTop(true);
  // Hide the window when it loses focus
  window.on('blur', () => {
    if (!window.webContents.isDevToolsOpened()) {
      window.hide()
    }
  })
}

 

When we call createWindow, it creates a window with the HTML file specified in loadURL:

window.loadURL(`file://${path.join(__dirname, '/static/index.html')}`)

Here we created a SUSI chat client in HTML, named it index.html, and passed that HTML file to be loaded as the application window.

Sometimes an app is running in full screen, especially on macOS, and then all other apps are launched behind the fullscreen app. To keep our tray window usable we used another call that sets the window to always be on top of all other windows.

 window.setAlwaysOnTop(true);

So if someone is reading a news article and wants to find the meaning of a word, they need not open SUSI again, because SUSI is running on top of all the applications.

toggleWindow is called when we click the tray icon. Its work is fairly simple: on click of the tray icon, it hides or shows the SUSI application window.

const toggleWindow = () => {
  if (window.isVisible()) {
    window.hide()
  } else {
    showWindow()
  }
}

showWindow is the function that positions the SUSI window, shows it, and brings it into focus.

const showWindow = () => {
 const position = getWindowPosition()
 window.setPosition(position.x, position.y, false)
 window.show()
 window.focus()
}

Running

Navigate to the project directory and run:

electron . 

SUSI icon will be visible in your notification tray.

Resources:

SUSI Desktop App: https://github.com/fossasia/susi_desktop/ 

Electron Documentation: https://electron.atom.io/

Electron Packager: https://github.com/electron-userland/electron-packager


SUSI Desktop App in Electron

 

Electron allows you to get a cross-platform desktop application up and running very quickly. Electron is built on the Node.js runtime and is brought to us by GitHub. Apps built with Electron are just websites which are opened in an embedded Chromium web browser.

Setting Up.

First things first, we need NodeJS installed. So go to https://nodejs.org/en/download/ and follow the instructions as per your Operating System.

Now we need Electron. We can install it globally and use it as a CLI, or install it locally in our application's path. I recommend installing it globally, so that it does not have to be installed separately for every app.

Open terminal and write:

$ npm install -g electron-prebuilt
$ npm install -g bower


Bower is used for managing front end components like HTML, CSS, JS etc.

To test the installation, type electron -h and it should show the version of Electron CLI.

Getting the project ready.

The main idea behind making desktop apps with JS is that you build one codebase and package it for each operating system separately. This abstracts away the knowledge needed to make native desktop apps and makes maintenance a lot easier.

First clone the SUSI Desktop app repository on your computer.

$ git clone https://github.com/fossasia/susi_desktop.git

Next step is to change directory to susi_desktop.

$ cd susi_desktop

Now we need to install dependencies which are used by our app.

$ npm install
$ bower install

Now we have the code and the dependencies set up. To run the application, write:

$ npm start

Or

$ electron .

Either of the two commands can be used to run an Electron application.


Folder Structure.

In the project folder, or the root of the app, we keep the package.json file, which has all the dependencies needed to run the application. The main.js file and any other application-wide files we need are also kept in the root of the app.

The “app” folder will house our HTML files of various types within folders like css, js and img.

Judging by the file structure, you can never guess this is a desktop app and not just a simple website. This project follows the AirBNB Style guide to code.

Purely starting the main process doesn't give the users of your application any UI windows; those are created from main.js using Electron's BrowserWindow, as shown in the previous post.

Conclusion

Overall Electron is an exciting way to build desktop web applications using web technologies.

SUSI Desktop app is built on it. A single codebase can be used to make native apps for Windows, Linux and macOS.


Pipelining Bash Script’s output to Webapp using Socket.io

Yaydoc, our automatic documentation generator, consists of, among other components, a web user interface. This UI has a form that takes as its input certain information about a user's project and generates documentation from this information in the backend with the help of a Bash script. The caveat of executing such a Bash script is that a user will have to wait for the processing to complete in order to get any output on the web app. This creates a problem, as the user may not know whether the process is executing properly. Furthermore, servers that are used to deploy such web applications have a limited time span within which they must send a response to a received GET or POST request. Since executing scripts may take some time, the process may lead to a request timeout.

We faced a similar problem with Yaydoc while deploying it to Heroku. Since Heroku has a timeout of 30 seconds, executing the documentation generation script led to a request timeout, as the execution takes more than 30 seconds. After doing a bit of research, we were introduced to Socket.IO, one of the most powerful JavaScript libraries for real-time, bidirectional, event-based communication.

At the client side, we define an “execute” event which emits the form data when the “Generate Docs” button is clicked. At the server side, we handle the event by executing a generator.executeScript(...) function with the socket and formData as its arguments.

/**
 * Client-side Event Handling
 */

$(function () {
 var socket = io();
 $("#btnGenerate").click(function () {
   var formData = getData();
   socket.emit("execute", formData);
 });
 ...
 ...
 ...
});

/**
 * Server-side Event Handling
 */
io.on("connection", function (socket) {
 socket.on("execute", function (formData) {
   generator.executeScript(socket, formData);
 });
});

 

Bash scripts are executed in NodeJS by creating child processes using the `child_process` module. This module provides four different methods for executing external applications. They are:

  1. execFile
  2. exec
  3. spawn
  4. fork

Out of these, the exec() and execFile() methods return buffered data when the script finishes executing successfully. We cannot use them as a solution because we need to continuously receive responses from the server as the script executes its commands. Thus, we opt for spawn(), which returns a stream-based object every time the script produces some data. The spawn method is called in the executeScript method.

exports.executeScript = function (socket, formData) {
 ...
 ...
 var process = spawn("./generate.sh", args);
 process.stdout.on("data", function (data) {
   socket.emit("logs", {data: data});
 });
 ...
 ...
};

The emitted logs are then received at the client-side for display in the web application.

/**
 * Client-side Event Handling
 */
$(function () {
 ...
 ...
 socket.on("logs", function (data) {
   $("#messages").append($("<li>").text(data.data));
 });
 ...
 ...
});

A minimal sample of this application can be found at: https://github.com/imujjwal96/socket-bashing


Automatically Generating index for documentation in Yaydoc

Yaydoc, which internally uses the Sphinx documentation generator, needs a document named index.rst describing the overall layout of the documentation in order to generate a proper table of contents. Without an index.rst present, the build fails. With this week's update, that constraint has been relaxed: if Yaydoc detects that index.rst has not been supplied, it automatically generates a minimal index for basic use. It is still recommended to provide your own index, but you won't be punished for its absence. The following sections show how this was implemented and show this feature in action.

Implementation

For generating a minimal index.rst, we perform the following steps:

  • If the repository has a README.rst or a README.md, we include it in the index
  • Several toctrees are generated as per how the documents in the repository are arranged.

The following code snippet returns a valid rst block which includes the document dirpath/filename

def get_include(dirpath, filename):
    ext = os.path.splitext(filename)[1]
    if ext == '.md':
        directive = 'mdinclude'
    else:
        directive = 'include'
    template = '.. {directive}:: {document}'
    path = os.path.relpath(os.path.join(dirpath, filename))
    document = path.replace(os.path.sep, '/')
    return template.format(directive=directive, document=document)

The following code snippet returns a valid rst block which creates a toctree of dirpath.

def get_toctree(dirpath, filenames):
    toctree = ['.. toctree::', '   :maxdepth: 1']
    caption_template = '   :caption: {caption}'
    content_template = '   {document}'

    caption = os.path.basename(dirpath).replace('_', ' ').title()
    if caption == os.curdir:
        caption = 'Contents'
    toctree.append(caption_template.format(caption=caption))
    # Inserting a blank line
    toctree.append('')

    valid = False
    for filename in filenames:
        path, ext = os.path.splitext(os.path.join(dirpath, filename))
        if ext not in ('.md', '.rst'):
            continue
        document = path.replace(os.path.sep, '/')
        document = document.lstrip('./').rstrip('/')
        toctree.append(content_template.format(document=document))
        valid = True

    if valid:
        return '\n'.join(toctree)
    else:
        return ''

The following code snippet walks the documentation directory and returns a valid content to be written to index.rst.

def get_index(root):
    index = []
    # Include README from root
    root_files = next(os.walk(root))[2]
    if 'README.rst' in root_files:
        index.append(get_include(root, 'README.rst'))
    elif 'README.md' in root_files:
        index.append(get_include(root, 'README.md'))
    # Add toctrees as per the directory structure
    for (dirpath, dirnames, filenames) in os.walk(os.curdir):
        if filenames:
            toctree = get_toctree(dirpath, filenames)
            if toctree:
                index.append(toctree)
    return '\n\n'.join(index) + '\n'

Result

Let’s assume that a sample project has the following directory tree for documentation.

+--- README.md
+--- docs/
|    +--- installation_guide/
|    |    +--- setup_heroku.md
|    |    +--- setup_docker.md
|    +--- tutorial/
|    |    +--- basic.md
|    |    +--- advanced.md

The following index.rst would be generated from the above tree

.. mdinclude:: ../README.md

.. toctree::
   :caption: Installation Guide
   :maxdepth: 1

   setup_heroku
   setup_docker

.. toctree::
   :caption: Tutorial
   :maxdepth: 1

   basic
   advanced

As you can see, this index.rst would be enough for most use cases. This update decreases the entry barrier for yaydoc. More features are on the way.


The Joy of Testing with MVP in Open Event Orga App

Testing applications is hard, and testing Android Applications is harder. The natural way an Android Developer codes is completely untestable. We are born and molded into creating God classes – Activities and perform every bit of logic inside them. Some thought that introduction to Fragments will introduce a little bit of modularity, but we proved otherwise by shifting to God Fragments. When the natural urge of an Android Developer to

  • apply logic,
  • load UI,
  • handle concurrency (hopefully, not using AsyncTask),
  • load data – from the network; disk; and cache,
  • and manage the state of the Activity/Fragment

finally, meets with a new form of revelation that he/she should test his/her application, all of the concepts acquired are shattered in a moment. The person goes to Android Docs to see why he started coding that way and realizes that even the system is flawed – Android Documentation is full of examples promoting God Activities, which they have used to show the use of API for only one reason – brevity. It’s not uncommon for us to steal some code from StackOverflow or Android Docs, but we don’t realize that they are presented there without an application environment or structure, all the required component just glued together to get it functionally complete, so that reader does not have to configure it anymore. Why would an answer from StackOverflow load a barcode scanner using a builder? Or build a ContentProvider to show how to query sorted data from SQLite? Or use a singleton for your provider classes? The simple answer is, they won’t. But this doesn’t mean you shouldn’t too.

The first thought that enters developer’s mind when he gets his hand on a piece of code is to paste it in the correct position and get it to work. There is always this moment of hesitation where conscience rings a bell saying, “What are you doing? Is it the right way to do it? How many lines till you stop overloading this Activity? It’s 2000 lines already”. But it dies as soon as we see the feature is working. And why test something which we have confirmed to work, right? I just wrote this code to show a progress bar while the data loads and hide it when it is done. I can see this working, what will I achieve in painfully writing a test for it, mocking the loading conditions and all. Wrong! Unit tests which test your trivial utils like Date Modification, String Parsing, etc are good and needed but are so trivial that they are hard to go wrong, and if they are, they are easy to fix as they have single usage throughout the app, and it is easy to spot bugs and fix them.

The real problem is testing of your app over dynamic conditions, where you have to emulate them so you can see if your app is making the right decisions. The progress bar working example may work for now, but what if it breaks over refactoring, or you wrote the same code elsewhere and forget to hide the progress bar? Simply copying a well-written test and changing 2-3 class names will fail the build and tell you what’s wrong. A well-contracted app can even contain tests to check that there’ll be no memory leaks. But none of this is possible if everything is jumbled into single Activity with callbacks, loaders, UI handling, business logic, etc. You can’t even think that where to begin. In this post, I will briefly discuss, how to design and test applications using MVP pattern.

MVP to the Rescue

Before moving ahead, I must put a disclaimer saying that MVP is a design pattern which follows a very opinionated implementation. The extent to which you want to refactor your app and the number of abstractions you are willing to do are in your hand. There aren’t any golden rules where your implementation will fail to be called as MVP. Remember, the client doesn’t care about architecture, it’s for you, so whatever makes your life and testing easier, works. Also, I’ll use an axiom related to testing here, “Test until fear turns into boredom”. You can apply the same to your MVP implementation. Testing and design patterns are here to eradicate your fear of failure, not to bore you.

So, in this guide, I’ll be using a use case of a QR Code Scanner Activity built using MVP pattern which we have employed in Open Event Orga Application (Github Repo). Because we are focusing on the test pattern and how to make it easy for us to test our app logic, I have omitted the model part from the MVP equation. The reason being that the possible model in the application would have been the camera loader or barcode initializer and the caveats associated with following this is that both these modules rely heavily on the Android specific view classes, namely, SurfaceView and other lifecycle methods. You could always create your way around it to include them in a separate model, but it won’t help us in writing unit tests for them, because firstly, they aren’t our logic to test, and secondly, they can’t be tested in a unit test (those models would have probably just implemented certain setters and getters).

So, the main purpose of our QR Code Scanner class is to scan a QR code and match it with a list of identifiers, and if there is a match, return it successfully to the caller. In this specific example, the identifiers will the ticket IDs of event attendees and the caller will be event organizer scanning QR codes to check the attendee in. The use case sounds simple, but has several mini use cases and dependencies of itself, which we have to take into our account while designing the View and Presenter class. Let’s discuss them one by one:

App State – Start: Activity starts, loads camera

  • Permission Granted: detects that the app already has Camera Permission,
    • starts scanning
  • Permission Absent: detects that the app doesn’t have Camera Permission,
    • Denied: asks for it, denied, shows error
    • Accepted: asks for it, granted, starts scanning

App State – QR Code Detected: Starts parsing them

  • Attendees are not present: Attendees aren’t present because of some reason, either due to internal error or have not loaded yet
    • stops parsing
  • Attendees are present:
    • QR Code does not match with any attendee : Do nothing
    • QR Code matches with one of the attendee : Send attendee to caller

App State: Camera or containing View is getting destroyed

  • Release Camera

There can be much more internal data flows, but this much is sufficient for our example. So, let’s start defining our contracts using the above knowledge. First, we will design our View. So what should be our strategy? Always think of presenters and views to be mapped in a 1:1 relation. They can have conversations with each other, the difference is that the view is only allowed to talk to the presenter, but presenter may talk to models too. So, the view is going to get all of its information from the presenter and can only take action when presenter tells it to, essentially saying that the view is dumb and passive. The presenter can talk to view and model(s), meaning the collection of information presenter has does not have to come from the view only. In fact, the less dependent presenter has to be on view, the better. The main motive of MVP is to make our logic less dependent on views.

Presenter Contract

In our example, presenter relies on the view to tell it when a certain event happens, they can be lifecycle callbacks, permission grants/denies, camera load/destroy or anything purely related to the Android implementation. So, we generally know from the start what kind of information presenter needs from a View, so let’s design our presenter first.

public interface IScanQRPresenter {

    void attach(long eventId, IScanQRView scanQRView);

    void start();

    void detach();

    void cameraPermissionGranted(boolean granted);

    void onBarcodeDetected(Barcode barcode);

    void onScanStarted();

    void onCameraLoaded();

    void onCameraDestroyed();

}

 

The methods are self-explanatory. Don’t worry if you didn’t get why we defined certain hooks like onScanStarted() or why we didn’t define onScanStopped(). These kinds of details will reveal themselves as you develop your components. You can skip to next section if you don’t want to know why we did it.

Basically, you should define a callback for events which are not reliable to be synchronized. What? Let me explain. Let’s say you got your camera permission and requested the view to start scanning, but you have to wait till the camera is loaded as it is not a synchronous call (and it shouldn’t be or your main thread will block). So, instead of that, our presenter will request the camera to load, go in an idle state, wait for the onCameraLoaded() call, and then request the scan to start, and since it is also not a synchronous work, it will go into idle state again and wait for onScanStarted() and then further its work. Whenever the camera is destroyed, the onCameraDestroyed() callback will be called and we will stop the scanning, and since there is nothing to be done after that, we won’t wait till scanning has stopped, thus dropping the need of onScanStopped() callback.

View Contract

View contract will come naturally to you once you have understood the data flow. It will contain the commands presenter will issue on the view and also, the requests for data that view holds. So, this will be our view.

public interface IScanQRView {

    boolean hasCameraPermission();

    void requestCameraPermission();

    void showPermissionError(String error);

    void onScannedAttendee(Attendee attendee);

    void showBarcodePanel(boolean show);

    void showBarcodeData(@NonNull String data);

    void showProgressBar(boolean show);

    void loadCamera();

    void startScan();

    void stopScan();

}

 

Almost all commands are straightforward but let me quickly explain showBarcodePanel(boolean) and showBarcodeData(String). They are used to display the currently visible barcode data to the user.

So, with our implementations set and data flow in place, let’s start writing tests for the feature. Yes, we’ll write tests without implementation and then you’ll see how easy it will be to write your views and presenters with only one goal to mind, to make the tests pass. If your tests are written correctly and cover everything, you should feel confident that your app will work without even seeing the actual implementation, because that is what tests are for. And by making passive and dumb views, imagine how light your instrumentation tests will be. Just check that the individual methods in view implementation are working as expected and you are done! No need to test logic or complex interactions, etc because you have got it covered in the unit tests themselves. This is not only a benefit of data flow tests but also a best practice. You always want to follow DRY, Don’t Repeat Yourself, even while testing.

Tests

So, we will start writing our tests now and you’ll realize how easy it is and all the hard work of abstraction and designing will pay off.

Attach Tests

So, firstly, we will test if the presenter calls appropriate methods when it is attached

@Test
public void shouldLoadAttendeesAutomatically() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));

    scanQRPresenter.start();

    verify(eventRepository).getAttendees(eventId, false);
}

@Test
public void shouldLoadCameraAutomatically() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));

    scanQRPresenter.start();

    verify(scanQRView).loadCamera();
}

 

Here, we are using Mockito to mock our EventDataRepository so that it returns locally defined attendees instead of making an actual network call. We then start the presenter in each test (the view is attached to the presenter in the test set-up) and verify, in the first test, that the presenter calls getAttendees on the EventDataRepository, and in the second test, that it requests the view to load the camera.

Note that in the implementation of the presenter's start method, both the camera loading and the attendee loading take place, but it is best practice to test them separately, so that when a test fails we know why it did.

Detach Tests

@Test
public void shouldDetachViewOnStop() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));

    scanQRPresenter.start();

    assertNotNull(scanQRPresenter.getView());

    scanQRPresenter.detach();

    assertNull(scanQRPresenter.getView());
}

@Test
public void shouldNotAccessViewAfterDetach() {
    scanQRPresenter.detach();

    scanQRPresenter.start();
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.cameraPermissionGranted(false);
    scanQRPresenter.cameraPermissionGranted(true);
    scanQRPresenter.onScanStarted();
    scanQRPresenter.onBarcodeDetected(barcode2);
    scanQRPresenter.onCameraDestroyed();

    verifyZeroInteractions(scanQRView);
}

 

In the detach tests, we verify that the view reference held by the presenter, which is not null after attaching, becomes null once detach is called. In the second test, we perform all possible interactions after detaching and confirm that no call whatsoever was made on the view. Tests like these enforce the avoidance of memory leaks and check for any NullPointerExceptions that may happen after the view was made null.

Note: This does not mean that memory leaks will not happen if this test passes. You can cause memory leaks by giving the view reference to any long living object, not just the presenter. This just ensures that presenter will not hold the view reference after detach and won’t reference to the null view in future.
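
The attach/detach part of the presenter that these tests exercise can be as small as the sketch below (an assumed structure, not the actual Orga app base presenter). The second test additionally requires every other presenter method to guard against a null view, or to talk to a no-op null-object view, after detach.

private long eventId;
private IScanQRView scanQRView;

@Override
public void attach(long eventId, IScanQRView scanQRView) {
    this.eventId = eventId;
    this.scanQRView = scanQRView;
}

@Override
public void detach() {
    // Drop the reference so the view can be garbage collected
    this.scanQRView = null;
}

public IScanQRView getView() {
    return scanQRView;
}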

Permission Tests

@Test
public void shouldStartScanOnCameraLoadedIfPermissionPresent() {
    when(scanQRView.hasCameraPermission()).thenReturn(true);

    scanQRPresenter.onCameraLoaded();

    verify(scanQRView).startScan();
}

@Test
public void shouldAskPermissionOnCameraLoadedIfPermissionsAbsent() {
    when(scanQRView.hasCameraPermission()).thenReturn(false);

    scanQRPresenter.onCameraLoaded();

    verify(scanQRView).requestCameraPermission();
}

@Test
public void shouldStartScanningOnPermissionGranted() {
    scanQRPresenter.cameraPermissionGranted(true);

    verify(scanQRView).startScan();
}

@Test
public void shouldShowErrorOnPermissionDenied() {
    scanQRPresenter.cameraPermissionGranted(false);

    verify(scanQRView).showPermissionError(matches("(.*permission.*denied.*)|(.*denied.*permission.*)"));
}

The permission tests are straightforward:

Implicit Permission Handling

  1. If the view already has the camera permission and camera has loaded, start scanning
  2. If the view does not have camera permission and the camera has loaded, request the permission

Request Handling

  1. If the request was granted, start scanning
  2. If the permission was denied, show the permission error. Here, I have used a regex to check that the error message contains the words permission and denied; you can use anyString() from Mockito for more flexibility, or a specific message for tighter testing. A presenter sketch that would satisfy these four tests is shown below.
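
To make these four tests pass, the presenter only has to route the permission state to the right view calls. A minimal sketch follows (an illustration, not the actual Orga app code; the error string merely needs to match the regex used in the last test):

@Override
public void onCameraLoaded() {
    if (scanQRView.hasCameraPermission()) {
        scanQRView.startScan();
    } else {
        scanQRView.requestCameraPermission();
    }
}

@Override
public void cameraPermissionGranted(boolean granted) {
    if (granted) {
        scanQRView.startScan();
    } else {
        scanQRView.showPermissionError("Camera permission was denied by the user");
    }
}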

Camera Destroy Test

@Test
public void shouldStopScanOnCameraDestroyed() {
    scanQRPresenter.onCameraDestroyed();

    verify(scanQRView).stopScan();
}

Pretty simple, stop the scan on destruction of camera

Flow Tests

You can also test that the callback flow happens in order so that not only the unit tests work but also the implementation logic is correct.

/**
 * Checks that the flow of commands happen in order
 */
@Test
public void shouldFollowFlowOnImplicitPermissionGrant() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(true);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).loadCamera();
    scanQRPresenter.onCameraLoaded();
    inOrder.verify(scanQRView).startScan();
}

@Test
public void shouldShowProgressInBetweenImplicitPermissionGrant() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(true);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).showProgressBar(true);
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.onScanStarted();
    inOrder.verify(scanQRView).showProgressBar(false);
}

Here, we are verifying two things, first that if we have the request granted implicitly, we load the camera and start the scan in order. This test isn’t that useful as we already tested that loading of the camera is done on attach and scan is started when the camera is loaded. In fact, this is an example of breaking the DRY rule. Even though it doesn’t hurt to include this, it also doesn’t help as it does not cover anything that hasn’t already been tested.

The second test is important and tests that progress bar is correctly shown and hidden after certain communications have taken place and the scan has started. Similarly, we can also test the progress bar behavior over all of the possible combinations of cases that can happen. The code snippets below show the tests:

@Test
public void shouldShowProgressInBetweenImplicitPermissionDenyRequestGrant() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(false);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).showProgressBar(true);
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.cameraPermissionGranted(true);
    scanQRPresenter.onScanStarted();
    inOrder.verify(scanQRView).showProgressBar(false);
}

@Test
public void shouldShowProgressInBetweenImplicitPermissionDenyRequestDeny() {
    when(eventRepository.getAttendees(eventId, false))
        .thenReturn(Observable.fromIterable(attendees));
    when(scanQRView.hasCameraPermission()).thenReturn(false);

    scanQRPresenter.start();

    InOrder inOrder = inOrder(scanQRView);
    inOrder.verify(scanQRView).showProgressBar(true);
    scanQRPresenter.onCameraLoaded();
    scanQRPresenter.cameraPermissionGranted(false);
    inOrder.verify(scanQRView).showProgressBar(false);
}

QR Code Detection Tests

@Test
public void shouldNotSendAnyBarcodeIfAttendeesAreNull() {
    sendNullInterleaved();

    verify(scanQRView, never()).onScannedAttendee(any(Attendee.class));
}

@Test
public void shouldNotSendAttendeeOnWrongBarcodeDetection() {
    scanQRPresenter.setAttendees(attendees);
    sendNullInterleaved();

    verify(scanQRView, never()).onScannedAttendee(any(Attendee.class));
}

@Test
public void shouldSendAttendeeOnCorrectBarcodeDetection() {
    // Somehow the setting in setUp is not working, a workaround till fix is found
    RxJavaPlugins.setComputationSchedulerHandler(scheduler -> Schedulers.trampoline());

    scanQRPresenter.setAttendees(attendees);

    barcode1.displayValue = "test4-91";
    scanQRPresenter.onBarcodeDetected(barcode1);

    verify(scanQRView).onScannedAttendee(attendees.get(3));
}

 

Lastly, the core tests of our presenter. To explain the tests, let me show you the sendNullInterleaved() method

private void sendNullInterleaved() {
    sendBarcodeBurst(barcode1);
    sendBarcodeBurst(null);
    sendBarcodeBurst(barcode1);
    sendBarcodeBurst(null);
    sendBarcodeBurst(barcode2);
    sendBarcodeBurst(barcode2);
    sendBarcodeBurst(null);
    sendBarcodeBurst(barcode1);
    sendBarcodeBurst(null);
}

 

So what it basically does is send some barcodes interleaved with null (no barcode detected) to the presenter using the onBarcodeDetected method to emulate the real camera sending barcode values with sometimes sending null whenever no barcode is in view.

The first test simply checks that no attendee is sent if the attendee list is null. Seems pretty obvious. The second one checks that if barcode does not match with any attendee’s identifier, it should not send any attendee as well. It does this by setting an arbitrary list of attendees with different identifiers and sending non-matching barcodes to the presenter. Lastly, the successful test, where a single matching barcode is sent to the presenter and it should send that particular attendee with matching identifier.
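
For comparison, here is a sketch of an onBarcodeDetected implementation that would satisfy these three tests. It is only an illustration: the actual Orga app does this reactively with RxJava (hence the scheduler set-up in the last test), and getIdentifier() here stands in for whatever accessor exposes the matching value on Attendee.

@Override
public void onBarcodeDetected(Barcode barcode) {
    if (barcode == null || attendees == null) {
        // No barcode in view (or nothing to match against): hide the panel and bail out
        scanQRView.showBarcodePanel(false);
        return;
    }
    scanQRView.showBarcodeData(barcode.displayValue);
    for (Attendee attendee : attendees) {
        if (barcode.displayValue.equals(attendee.getIdentifier())) {
            scanQRView.onScannedAttendee(attendee);
            break;
        }
    }
}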

Phew! Quite a lot of tests and we are done. The tests were not very large and mostly self-explanatory. Now, you just have to implement the view and presenter methods till all the tests light up to be green and you are done! You have created a feature using MVP design and implemented it using test driven development.

So start testing and get a seal of reliability and confidence about your code and the satisfaction of seeing the green bar fill up.


Implementing Responsive Designs in Open Event Front-end

Screen size differs from user to user, from device to device and thus we must provide responsive user interface so that our design doesn’t break on any of the devices. At Open-Event-Frontend we have used Semantic-UI elements to implement responsive designs. Semantic-UI has classes like grid, stackable, container etc to maintain the responsiveness of the designs. Semantic also gives us functions to check the size of the screen we are currently in and thus depending upon the screen size, we can design our user-interface.

Let's take /events/<event_identifier> as an example.

Two components have been added to this page:

  1. The following four buttons that we are using are 'ui buttons' and have icons:
    1. Preview – This button allows us to check how our event would look before it is published.
    2. Publish – When we are ready, we can make our event available to the world.
    3. Copy – Using this we can copy all the details of our event and create a new event from it.
    4. Delete – If something went wrong, we can delete our event at any time.

From the above mentioned buttons we have grouped Publish and Copy together.

This is done to add responsive behaviour. On mobile, our buttons only have an icon and no text, and the buttons are no longer right aligned. For this we have used device.isMobile to check whether we are on a mobile device or not, to ensure the responsive behaviour is carried over to mobile too.

<div class="twelve wide column {{unless device.isMobile 'right aligned'}}">
       {{#if device.isMobile}}
         <div class="ui icon fluid buttons">
           <button class="ui button"><i class="unhide icon"></i></button>
           <button class="ui button"><i class="check icon"></i></button>
           <button class="ui button"><i class="copy icon"></i></button>
           <button class="ui red button" {{action 'openDeleteEventModal'}}><i class="trash icon"></i></button>
         </div>
       {{else}}
           <button class="ui button labeled icon small">
             <i class="unhide icon"></i>
             {{t 'Preview'}}
           </button>
           <div class="ui small labeled icon buttons">
             <button class="ui button">
               <i class="check icon"></i>
               {{t 'Publish'}}
             </button>
             <button class="ui button">
               <i class="copy icon"></i>
               {{t 'Copy'}}
             </button>
           </div>
           <button class="ui red button labeled icon small" {{action 'openDeleteEventModal'}}>
             <i class="trash icon"></i>
             {{t 'Delete'}}
           </button>
       {{/if}}
     </div>

Here class `right aligned` is added dynamically depending upon whether we are viewing on mobile or not.

  2. We have added tabs for all types of event elements, like tickets, speakers and the scheduler. We have also made these tabs responsive: on mobile, these tabs convert to a stackable menu.


Here also classes are added depending on some condition.

<div class="ui fluid pointing stackable menu {{unless device.isMobile 'secondary'}}">

Conditional addition of classes is a very useful and powerful feature of handlebars.

<div class="row" style="padding-top: 15px">
   <div class="sixteen wide column">
     <div class="ui fluid pointing stackable menu {{unless device.isMobile 'secondary'}}">
       {{#link-to 'events.view.index' class='item'}}
         {{t 'Overview'}}
       {{/link-to}}
       {{#link-to 'events.view.tickets' class='item'}}
         {{t 'Tickets'}}
       {{/link-to}}
       <a href="#" class='item'>{{t 'Scheduler'}}</a>
       {{#link-to 'events.view.sessions' class='item'}}
         {{t 'Sessions'}}
       {{/link-to}}
       {{#link-to 'events.view.speakers' class='item'}}
         {{t 'Speakers'}}
       {{/link-to}}
       {{#link-to 'events.view.export' class='item'}}
         {{t 'Export'}}
       {{/link-to}}
     </div>
   </div>
 </div>

The class fluid has been used so that our menu items take the whole width of the screen.

Now we have added all the responsive elements to the page, and it is good to go on all devices.


Implementing Loklak APIs in Java using Reflections

The Loklak server provides a large API to play with the data scraped by it. Methods can be implemented in Java to use these API endpoints. A common approach to implementing methods for API endpoints is to create the request URL by taking the values passed to the method, and then sending a GET/POST request. Creating the request URL in every method can be tiresome, and in the long run maintaining a library implemented this way will require a lot of effort. For example, assume a method is to be implemented for the suggest API endpoint, which has many parameters; for creating the request URL a lot of conditionals need to be written, checking whether each parameter is provided or not.

Well, the methods to call API endpoints can be implemented with lesser and easy to maintain code using Reflection in Java. The post ahead elaborates the problem, the approach to solve the problem and finally solution which is implemented in loklak_jlib_api.

Let’s say, the status API endpoint needs to be implemented, a simple approach can be:

public class LoklakAPI {
    
    public static JSONObject status(String baseUrl) {
        String requestUrl = baseUrl + "/api/status.json";
        
        // GET request using requestUrl 
    }

    public static void main(String[] argv) {
        JSONObject result = status("https://api.loklak.org");
    }
}

This one is easy, isn't it, as the status API endpoint requires no parameters. But imagine a method that implements an API endpoint with a lot of parameters, most of them optional. As a developer, you would like to provide methods that cover all the parameters of the API endpoint. For example, how would a method look if it implemented the suggest API endpoint? The old SuggestClient implementation in loklak_jlib_api does exactly that:

public static ResultList<QueryEntry> suggest(
           final String hostServerUrl,
           final String query,
           final String source,
           final int count,
           final String order,
           final String orderBy,
           final int timezoneOffset,
           final String since,
           final String until,
           final String selectBy,
           final int random) throws JSONException, IOException {
       ResultList<QueryEntry>  resultList = new ResultList<>();
       String suggestApiUrl = hostServerUrl
               + SUGGEST_API
               + URLEncoder.encode(query.replace(' ', '+'), ENCODING)
               + PARAM_TIMEZONE_OFFSET + timezoneOffset
               + PARAM_COUNT + count
               + PARAM_SOURCE + (source == null ? PARAM_SOURCE_VALUE : source)
               + (order == null ? "" : (PARAM_ORDER + order))
               + (orderBy == null ? "" : (PARAM_ORDER_BY + orderBy))
               + (since == null ? "" : (PARAM_SINCE + since))
               + (until == null ? "" : (PARAM_UNTIL + until))
               + (selectBy == null ? "" : (PARAM_SELECT_BY + selectBy))
               + (random < 0 ? "" : (PARAM_RANDOM + random))
               + PARAM_MINIFIED + PARAM_MINIFIED_VALUE;
 
                // GET request using suggestApiUrl
   }
}

A lot of conditionals!!! The targeted users may also get irritated if they need to provide all the parameters every time, even when they don't need them. The obvious solution to that is overloading the methods. But then again, for each overloaded method the same repetitive conditionals need to be written, a form of code duplication!! And what if you have to implement some 30 API endpoints and maintain them in the future? A single thought of it is scary, and a nightmare for the developers.

Approach to the problem

To reduce the code size, one obvious thought that comes to mind is to implement static methods that send GET/POST requests given a request URL and the required data. These methods are implemented in loklak_jlib_api by JsonIO and NetworkIO.

You might have noticed the method name i.e. status, suggest is also the part of the request URL. So, the common pattern that we get is:

base_url + "/api/" + methodName + ".json" + "?" + firstParameterName + "=" + firstParameterValue + "&" + secondParameterName + "=" + secondParameterValue ...

Reflection API to rescue

Using the Reflection API, inspection of classes, interfaces, methods, and variables can be done. Yes, you guessed it right: if we can inspect, we can get the method and parameter names. It is also possible to implement interfaces and create objects at runtime.

So, an interface is created and API endpoint methods are defined in it. Using reflection API the interface is implemented lazily at runtime. LoklakAPI interface defines all the API endpoint methods. Some of the defined methods are:

public interface LoklakAPI {
 
    @Target(ElementType.METHOD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface GET {}
 
    @Target(ElementType.METHOD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface POST {}
 
    @GET
    JSONObject search(String q);
 
    @GET
    JSONObject search(String q, int count);
 
    @GET
    JSONObject search(String q, int timezoneOffset, String since, String until);
 
    @GET
    JSONObject search(String q, int timezoneOffset, String since, String until, int count);
 
    @GET
    JSONObject peers();
 
    @GET
    JSONObject hello();
 
    @GET
    JSONObject status();
 
    @POST
    JSONObject push(JSONObject data);
}

Here the GET and POST annotations are used to mark the methods which use GET and POST requests respectively, so that the appropriate static method can be used: JsonIO's loadJson for GET requests and pushJson for POST requests.

A private static inner class ApiInvocationHandler is created in APIGenerator, which implements InvocationHandler – used for implementing interfaces at runtime.

private static class ApiInvocationHandler implements InvocationHandler {
 
   private String mBaseUrl;
 
   public ApiInvocationHandler(String baseUrl) {
       this.mBaseUrl = baseUrl;
   }
 
   @Override
   public Object invoke(Object o, Method method, Object[] values) throws Throwable {
       Parameter[] params = method.getParameters();
       Object[] paramValues = values;
 
       /*
       format of annotation name:
       @packageWhereAnnotationIsDeclared$AnnotationName()
       Example: @org.loklak.client.LoklakAPI$GET()
       */
       Annotation annotation = method.getAnnotations()[0];
       String annotationName = annotation.toString().toLowerCase();
 
       String apiUrl = createGetRequestUrl(mBaseUrl, method.getName(),
               params, paramValues);
       if (annotationName.contains("get")) { // GET REQUEST
           return loadJson(apiUrl);
       } else { // POST REQUEST
           JSONObject jsonObjectToPush = (JSONObject) paramValues[0];
           String postRequestUrl = createPostRequestUrl(mBaseUrl, method.getName());
           return pushJson(postRequestUrl, params[0].getName(), jsonObjectToPush);
       }
   }
}

The base URL is provided while creating the object, in the constructor. The invoke method is called whenever a defined method in the interface is called in actual code. This way an interface is implemented at runtime. The parameters of invoke method:

  • Object o – the object on which to call the method.
  • Method method – method which is called, parameters and annotations of the method are obtained using getParameters and getAnnotations methods respectively which are required to create our request url.
  • Object[] values – an array of parameter values which are passed to the method when calling it.

createGetRequestUrl is used to create the request URL for sending GET requests.

private static String createGetRequestUrl(
       String baseUrl, String methodName, Parameter[] params, Object[] paramValues) {
   String apiEndpointUrl = baseUrl.replaceAll("/+$", "") + "/api/" + methodName + ".json";
   StringBuilder url = new StringBuilder(apiEndpointUrl);
   if (params.length > 0) {
       String queryParamAndVal = "?" + params[0].getName() + "=" + paramValues[0];
       url.append(queryParamAndVal);
       for (int i = 1; i < params.length; i++) {
           String paramAndVal = "&" + params[i].getName()
                   + "=" + String.valueOf(paramValues[i]);
           url.append(paramAndVal);
       }
   }
   return url.toString();
}

Similarly, createPostRequestUrl is used to create request URL for sending POST requests.

private static String createPostRequestUrl(String baseUrl, String methodName) {
   baseUrl = baseUrl.replaceAll("/+$", "");
   return baseUrl + "/api/" + methodName + ".json";
}

Finally, Proxy.newProxyInstance is used to implement the interface at runtime. An instance of ApiInvocationHandler class is passed to the newProxyInstance method. This is all done by static method createApiMethods.

public static <T> T createApiMethods(Class<T> service, final String baseUrl) {
   ApiInvocationHandler apiInvocationHandler = new ApiInvocationHandler(baseUrl);
   return (T) Proxy.newProxyInstance(
           service.getClassLoader(), new Class<?>[]{service}, apiInvocationHandler);
}

Where:

  • Class<T> service – is the interface where API endpoint methods are defined, here LoklakAPI.class.
  • String baseUrl – web address of the server where Loklak server is hosted.

NOTE: For all this to work, the library must be built using the "-parameters" compiler flag, otherwise the parameter names can't be obtained. Since loklak_jlib_api uses Maven, "-parameters" is provided in the pom.xml file.
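
For reference, passing that flag through the maven-compiler-plugin typically looks like the snippet below; the actual pom.xml of loklak_jlib_api may differ slightly.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerArgs>
      <!-- keep method parameter names in the compiled class files -->
      <arg>-parameters</arg>
    </compilerArgs>
  </configuration>
</plugin>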

A small example:

String baseUrl = "https://api.loklak.org";
LoklakAPI loklakAPI = APIGenerator.createApiMethods(LoklakAPI.class, baseUrl);
 
JSONObject searchResult = loklakAPI.search("FOSSAsia");