Data Access Layer in Open Event Organizer Android App

Open Event Organizer is an Android App for organizers and entry managers. Its core feature is scanning a QR code to validate attendee check-in. Other features of the App are displaying an overview of sales and managing tickets. The App maintains a local database and syncs it with the Open Event API server. The data access layer in the App is designed such that data is fetched from the server or taken from the local database according to the user's need. For example, simply showing the event sales overview to the user fetches the data from the locally saved database. But when the user wants to see the latest data, the App needs to fetch it from the server, show it to the user and also update the locally saved data for future reference. In this blog, I will talk about the data access layer in the Open Event Organizer App.

The App uses RxJava to perform all the background tasks. All the data access methods in the App return Observables, which are then subscribed in the presenter to get the data items. Depending on the data request, the App has to create an Observable which will either load the data from the locally saved database or fetch it from the API server. For this, the App has the AbstractObservableBuilder class, which decides which Observable to return for a data request.

Relevant Code:

final class AbstractObservableBuilder<T> {
   ...
   ...
   @NonNull
   private Callable<Observable<T>> getReloadCallable() {
       return () -> {
           if (reload)
               return Observable.empty();
           else
               return diskObservable
                   .doOnNext(item -> Timber.d("Loaded %s From Disk on Thread %s",
                       item.getClass(), Thread.currentThread().getName()));
       };
   }

   @NonNull
   private Observable<T> getConnectionObservable() {
       if (utilModel.isConnected())
           return networkObservable
               .doOnNext(item -> Timber.d("Loaded %s From Network on Thread %s",
                   item.getClass(), Thread.currentThread().getName()));
       else
           return Observable.error(new Throwable(Constants.NO_NETWORK));
   }

   @NonNull
   private <V> ObservableTransformer<V, V> applySchedulers() {
       return observable -> observable
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread());
   }

   @NonNull
   public Observable<T> build() {
       if (diskObservable == null || networkObservable == null)
           throw new IllegalStateException("Network or Disk observable not provided");

       return Observable
               .defer(getReloadCallable())
               .switchIfEmpty(getConnectionObservable())
               .compose(applySchedulers());
   }
}

 

The class is used to build an abstract Observable which wraps both kinds of Observables: the one making the data request to the API server and the one reading the locally saved database. Take a look at the build method. The getReloadCallable method provides the default observable to be subscribed, and it checks the reload parameter to decide which one that is. If reload is false, the data can be taken from the locally saved database, so the method returns the disk observable and data is fetched locally. If reload is true, the data request must be made to the API server, so the method returns an empty observable.

The method getConnectionObservable returns a network observable which makes the data request to the API server. In the build method, the switchIfEmpty operator is applied on the default observable (which is empty when reload is true) and the network observable is passed to it. So when reload is true, the network observable is subscribed, and when it is false, the disk observable is subscribed. An example of using this class to make an events data request:

public Observable<Event> getEvents(boolean reload) {
   Observable<Event> diskObservable = Observable.defer(() ->
       databaseRepository.getAllItems(Event.class)
   );

   Observable<Event> networkObservable = Observable.defer(() ->
       eventService.getEvents(JWTUtils.getIdentity(getAuthorization()))
           ...
           ...

   return new AbstractObservableBuilder<Event>(utilModel)
       .reload(reload)
       .withDiskObservable(diskObservable)
       .withNetworkObservable(networkObservable)
       .build();
}

 

So according to the boolean parameter reload, the correct observable is subscribed to complete the data request.
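
As a rough illustration, a presenter could consume getEvents as in the sketch below; the EventsView interface and the exact class and field names are assumptions made for this example, not the App's actual classes.

public class EventsPresenter {

    private final EventRepository eventRepository; // exposes getEvents(boolean reload)
    private final EventsView view;                 // hypothetical view interface

    public EventsPresenter(EventRepository eventRepository, EventsView view) {
        this.eventRepository = eventRepository;
        this.view = view;
    }

    // reload = true forces the network observable, false prefers the disk observable
    public void loadEvents(boolean reload) {
        eventRepository.getEvents(reload)
            .toList()
            .subscribe(
                view::showEvents,
                throwable -> view.showError(throwable.getMessage()));
    }
}

Because build() already composes applySchedulers(), the work runs on the IO scheduler and the results are delivered on the main thread, so the presenter does not have to deal with threading itself.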

Links:
1. Documentation about the Operators in ReactiveX
2. Information about the Data Access Layer on Wikipedia

Using Data Access Object to Store Information

We often need to store the information received from the network so that we can retrieve it later. Although we can store and read data directly, using a data access object enables us to perform data operations without exposing the details of the database. Using data access objects is also a best practice in software engineering. Recently I modified the Connfa app to store the data received in the Open Event format. In this blog, I describe how to use a data access object.

The goal is to abstract and encapsulate all access to the data and provide an interface. This is called the Data Access Object pattern. In a nutshell, the DAO "knows" which data source (that could be a database, a flat file or even a web service) to connect to and is specific to this data source. It makes no difference to the application whether it accesses a relational database or parses XML files (when using a DAO). The DAO is usually able to create an instance of a data object ("to read data") and also to persist data ("to save data") to the data source.
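
As a quick, generic illustration of that contract (not Connfa's actual classes), a DAO interface might look like this:

import java.util.List;

// A minimal, generic DAO contract: callers work with plain data objects and
// never see whether the data lives in SQLite, a flat file or a web service.
public interface Dao<T, K> {
    T getById(K key);     // read a single data object
    List<T> getAll();     // read all stored objects
    void save(T item);    // persist (create or update) a data object
    void delete(K key);   // remove a data object
}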

Consider the example from the Connfa app in which we get the tracks from the API and store them in an SQL database. We use a DAO to create a layer between the model and the database. AbstractEntityDAO is an abstract class which has the functions to perform CRUD operations; we extend it to implement them in our DAO model. Here is the TrackDao structure:

public class TrackDao extends AbstractEntityDAO<Track, Long> {

    public static final String TABLE_NAME = "table_track";

    @Override
    protected String getSearchCondition() {
        return "_id=?";
    }
    
    ...
}

Find the complete class, with the detailed methods that implement search conditions, get key columns, create instances etc., here.

Here is a general method to get the data from the database. getFacade() returns the facade object for the given layer element, which is then used to open and close the underlying database connection.

public List<ClassToSave> getAllSafe() {
    ILAPIDBFacade facade = getFacade();
    try {
        // Open the database connection, read all rows, and always close the
        // connection again, even if the query throws.
        facade.open();
        return getAll();
    } finally {
        facade.close();
    }
}

Now we can create an instance and use these methods instead of using SQL operations directly. This function gets the tracks and sorts them by their order field:

private TrackDao mTrackDao;

public List<Track> getTracks() {
    List<Track> tracks = mTrackDao.getAllSafe();
    Collections.sort(tracks, new Comparator<Track>() {
        @Override
        public int compare(Track track, Track track2) {
            return Double.compare(track.getOrder(), track2.getOrder());
        }
    });
    return tracks;
}


Download SUSI.AI Setting Files from Data Folder

In this blog, I will discuss how the DownloadDataSettings servlet hosted on the SUSI server functions. This post also covers a step-by-step demonstration of how to use this feature if you have hosted your own custom SUSI server and have admin rights to it. Given below is the endpoint where the request to download a particular file has to be made.

/data/settings

For systematic functionality and workflow, users with admin login are given special access. This allows them to download the settings files and go through them easily when needed. There are various files which store the email ids of registered users (accounting.json), the user roles associated with them (authorization.json), the groups they are a part of (groups.json), etc. To list all the files in the folder, use the endpoint given below:

/aaa/listSettings.json

How does the above servlet work? Before that, let us see how to get admin rights on your custom SUSI.AI server.
For admin login, it is required that you have access to the files and folders on the server. Sign up with an account and browse to

/data/settings/authorization.json

Find the email id with which you signed up for admin login and change its userRole to "admin". For example, the entry initially looks like this:

{
	"email:test@test.com": {
		"permissions": {},
		"userRole": "user"
	}
}

If you have signed up with an email id “test@test.com” and want to give admin access to it, modify the userRole to “admin”. See below.

{
	"email:test@test.com": {
		"permissions": {},
		"userRole": "admin"
	}
}

Until now, the server did not have any email id with admin login or a user role equal to admin; hence, this exercise is required only for the first admin. Later admins can use the changeUserRole application to give, change or modify user roles for any registered user. By now you must have an admin login session. Let's now see how the download and file listing servlets work.
First, the server creates a path by locally referencing the settings folder with the help of DAO.data_dir.getPath(). This gives a string path to the data directory containing all the data/settings files. The server then builds a JSONArray by passing a String array, containing the names of all the data/settings files, to the JSONArray constructor. If the process is not successful, then "accepted" = false is sent as an error to the user. The base user role to access the servlet is ADMIN, as only admins are allowed to download data/settings files.
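
A rough sketch of what that listing logic could look like is given below; DAO.data_dir comes from the SUSI server code base, while the file filter and the JSON key names used here are illustrative assumptions, not the exact SUSI implementation.

// Sketch: collect the names of the JSON files in data/settings into a JSONArray
// and report failure through the "accepted" flag mentioned above.
File settingsDir = new File(DAO.data_dir.getPath() + "/settings");
String[] fileNames = settingsDir.list((dir, name) -> name.endsWith(".json"));

JSONObject result = new JSONObject();
if (fileNames != null) {
    result.put("accepted", true);
    result.put("files", new JSONArray(Arrays.asList(fileNames)));
} else {
    result.put("accepted", false); // folder missing or unreadable
}
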
The name of the file to download has to be sent in the HTTP request as a GET parameter. For example, if an admin has to download accounting.json to get the list of all the registered users, the request is made in the following way:

BASE_URL+/data/settings?file=file_name

*BASE_URL is the URL where the server is hosted. For the standard server, use BASE_URL = http://api.susi.ai.

In the initial steps, the server generates a path to the data/settings folder and finds the file whose name it receives in the request. If no filename is specified in the API call, the server sends the accounting.json file by default.

// Path to the data/settings folder on the server
File settings = new File(DAO.data_dir.getPath() + "/settings");
String filePath = settings.getPath();
// "accounting" is used as the default file name when none is given
String fileName = post.get("file", "accounting");
filePath += "/" + fileName + ".json";

Next, the server extracts the file and, using the ServletOutputStream class, generates properties for it and sets the appropriate context, which in turn fetches the mime type for the file. If the mime type is returned as null, the mime type of the file is set to application/octet-stream by default. For more information on mime types, please look at the following link. A complete list of mime types is compiled and documented here.
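
A minimal sketch of that lookup, using the standard servlet API (the variable names are just for illustration), could be:

// Ask the servlet context for the mime type of the file being served and
// fall back to a generic binary type when nothing is registered for it.
String mimetype = getServletContext().getMimeType(filePath);
if (mimetype == null) {
    mimetype = "application/octet-stream";
}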

response.setContentType(mimetype);
response.setContentLength((int)file.length());

In the above code snippet, the mime type and the length of the file being downloaded are set. Next, we set the headers for the download response, using the file name for that.

response.setHeader("Content-Disposition", "attachment; filename=" + fileName +".json");

All the manual work is done by now. The only thing left is to open a buffered stream, the size of which has been defined as a class variable.
Here we use a byte array of 4096 elements and write the file to the client's default download location.

private static final int BUFSIZE = 4096;

// Read the file in 4 KB chunks and write each chunk to the servlet response stream
byte[] byteBuffer = new byte[BUFSIZE];
int length;
ServletOutputStream outStream = response.getOutputStream();
DataInputStream in = new DataInputStream(new FileInputStream(file));
while ((length = in.read(byteBuffer)) != -1) {
    outStream.write(byteBuffer, 0, length);
}

in.close();
outStream.close();

All the above-mentioned steps are enclosed in a try-catch block, which catches an exception, if any, and logs it in the log file. The message is also sent to the client for appropriate user information, along with a success or failure indication through a boolean flag. Do not forget to close the input and output streams: leaving them open may lead to memory leaks, and someone with good knowledge of networks and buffer streams could use them to steal essential or secured data.
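
One way to make that guarantee explicit is Java's try-with-resources, which closes both streams even if the copy loop throws. A minimal sketch, assuming the same file and response objects as above:

// Copy the file to the response; both streams are closed automatically,
// even if an IOException is thrown while reading or writing.
try (DataInputStream in = new DataInputStream(new FileInputStream(file));
     ServletOutputStream outStream = response.getOutputStream()) {
    byte[] byteBuffer = new byte[BUFSIZE];
    int length;
    while ((length = in.read(byteBuffer)) != -1) {
        outStream.write(byteBuffer, 0, length);
    }
}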
