Implementing Email Notifications in Open Event Frontend

In Open Event Frontend, we have a ‘settings/email-preferences’ route where users can manage the email notifications they want to receive for a specific event. For now, there are three notifications per event, each of which the user can toggle on and off. To achieve this, we did the following:

First, we create a model for email-notifications to define the shape of the data that we are going to receive from the JSON API.

import attr from 'ember-data/attr';
import { belongsTo } from 'ember-data/relationships';
import ModelBase from 'open-event-frontend/models/base';

export default ModelBase.extend({

  /**
   * Attributes
   */

  nextEvent           : attr('boolean'),
  newPaper            : attr('boolean'),
  sessionAcceptReject : attr('boolean'),
  sessionSchedule     : attr('boolean'),
  afterTicketPurchase : attr('boolean'),

  /**
   * Relationships
   */
  user  : belongsTo('user'),
  event : belongsTo('event')
});

The above code shows the attributes we are going to receive via our JSON API, and we render the data accordingly on the page. We have established relationships between email-notifications and both user and event so that, wherever needed in the future, we can query records from either side. The client side shows the data to the user with checkboxes, in the following format:

<div class="row">
  <div class="column eight wide">
    {{t 'New Paper is Submitted to your Event'}}
  </div>
  <div class="ui column eight wide right aligned">
    {{ui-checkbox class='toggle' checked=preference.newPaper onChange=(pipe-action (action (mut preference.newPaper)) (action 'savePreference' preference))}}
  </div>
</div>
<div class="row">
  <div class="column eight wide">
    {{t 'Change in Schedule of Sessions in your Event'}}
  </div>
  <div class="ui column eight wide right aligned">
    {{ui-checkbox class='toggle' checked=preference.sessionSchedule onChange=(pipe-action (action (mut preference.sessionSchedule)) (action 'savePreference' preference))}}
  </div>
</div>

The states of the checkboxes are determined by the data that we receive from the API. For example, if for a record, the API returns:

nextEvent           : true,
newPaper            : false,
sessionAcceptReject : true,
sessionSchedule     : false,
afterTicketPurchase : false,

Then the checkboxes will reflect those states, and the user can toggle them to change the email preferences as they want.

To get the data sent by the server to the client, we return it as the route’s model:

model() {
    return this.get('authManager.currentUser').query('emailNotifications', { include: 'event' });
}

As mentioned earlier, we kept the relationships so that we can query the email-notifications specific to a particular user or a particular event. Here we are showing a user’s email-notifications, hence we query them through the user relationship: the authManager loads the currentUser, and we query the email-notifications for that particular user. We also want the event details when showing the email preferences, hence we include the event model in the query as well.

We also let the user change the preferences of the email notifications so that they can customise them and keep only the ones they want to receive. We implemented updating the email preferences via the API as follows:

Whenever a user toggles a checkbox, an action called ‘savePreference’ is triggered, which handles updating the preferences.

{{ui-checkbox class='toggle' checked=preference.newPaper onChange=(pipe-action (action (mut preference.newPaper)) (action 'savePreference' preference))}}

savePreference(emailPreference) {
  emailPreference.save()
    .then(() => {
      this.get('notify').success(this.l10n.t('Email notifications updated successfully'));
    })
    .catch(() => {
      emailPreference.rollbackAttributes();
      this.get('notify').error(this.l10n.t('An unexpected error occurred.'));
    });
}

We pass the whole preference object as a parameter to the action and then call its ‘save’ method, which sends a PATCH request to the server to update the data.

In this way, the user can change their email notification preferences in Open Event Frontend.

Resources:
Ember data Official guide
Blog on Models and Ember data by Daniel Lavigne: Learning Ember.js Part 4: Models

Source code: https://github.com/fossasia/open-event-frontend/pull/537/files


Implementing the UI of the Scheduler in Open Event Frontend with Sub Rooms and External Events

This blog article will illustrate how the UI of the scheduler is implemented in Open Event Frontend using the fullcalendar library. Our discussion will primarily involve the events/view/scheduler route.

FullCalendar is an open source JavaScript calendar with an optional scheduler functionality. To use it with Ember.js, we make use of its Ember wrapper (ember-fullcalendar). We begin by installing it via the CLI:

npm install ember-fullcalendar

Next, to initialise it, we need to specify some properties in the config/environment.js file.

Note: This is mentioned wrongly in the official documentation (see the linked issue), hence it is specified in this blog.

emberFullCalendar: {
  includeScheduler: true
}

This enables the scheduler functionality of the calendar. Next, we need to specify the model, which will include the details of the rooms and the events to be displayed. The fullcalendar scheduler requires them in a specific format, hence the model hook will be:

model() {
  return RSVP.hash({
    events: [{
      title      : 'Session 1',
      start      : '2017-07-26T07:08:08',
      end        : '2017-07-26T09:08:08',
      resourceId : 'a'
    }],
    rooms: [
      { id: 'a', title: 'Auditorium A' },
      { id: 'b', title: 'Auditorium B', eventColor: 'green' },
      { id: 'c', title: 'Auditorium C', eventColor: 'orange' },
      { id: 'd', title: 'Auditorium D', children: [
        { id: 'd1', title: 'Room D1' },
        { id: 'd2', title: 'Room D2' }
      ] },
      { id: 'e', title: 'Auditorium E' },
      { id: 'f', title: 'Auditorium F', eventColor: 'red' }
    ]
  });
}

Now we begin with the basic structure we will require for the scheduler in the template files. Since we need the option of dragging and dropping external events, we will split the page into two columns and make a separate component for storing the list of external events. The component is aptly named external-event-list. Hence scheduler.hbs will have the following grid column structure:

<div class="ui grid">
  <div class="row">
    <div class="three wide column">
      {{scheduler/external-event-list}}
    </div>
    <div class="thirteen wide column">
      {{full-calendar events=model.events
        editable=true
        resources=model.rooms
        header=header
        views=views
        viewName='timelineDay'
        drop=(action 'drop')
        eventReceive=(action 'eventReceive')
        ondragover=(action 'eventDrop')
      }}
    </div>
  </div>
</div>

It is important to note here that fullcalendar has various callbacks which are triggered when events are dragged and dropped. The editable property allows the user to change an event’s venue and timeline, and is enabled only for the organiser of the event. The resources refer to the rooms/locations where the sessions/events will be held. Since Ember executes functionality via functions, each standard callback of fullcalendar has been translated into an action; please see the official docs for what each callback does. The only thing which remains is the list of external events. It is necessary for each actual event to be wrapped in an fc-event span, as these are the default classes for fullcalendar, and the calendar is able to fetch the event or session name from these spans.

<div id='external-events'>
  <h4 class="ui header">{{t 'Events'}}</h4>
  <span class='fc-event' draggable="true">My Event 1</span>
  <span class='fc-event' draggable="true">My Event 2</span>
  <span class='fc-event' draggable="true">My Event 3</span>
</div>

Whenever events are dragged and dropped onto the scheduler, they trigger the drop action inside it, which fetches the data from them and creates an event object for the scheduler.
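
For illustration, a minimal sketch of such a drop action could look like the following. The action signature mirrors fullcalendar’s drop callback, but how the title is read off the dragged element is an assumption here, not the exact Open Event Frontend implementation.

import Ember from 'ember';

export default Ember.Controller.extend({
  actions: {
    // A hedged sketch: fullcalendar's drop callback passes the drop
    // position as a moment object and, with the scheduler, the id of
    // the room the event was dropped on.
    drop(date, jsEvent, ui, resourceId) {
      // Assumption: the dragged fc-event span is the event target,
      // so its text content is the session name.
      this.get('model.events').pushObject({
        title      : jsEvent.target.textContent.trim(),
        start      : date.format(),
        end        : date.clone().add(1, 'hours').format(),
        resourceId
      });
    }
  }
});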


Enhancing SUSI Line Bot UI by Showing Carousels and Location Map

A good UI primarily focuses on attracting large numbers of users, and any good UI designer aims to achieve user satisfaction with a user-friendly design. In this blog, we will learn how to show carousels and a location map in the SUSI Line bot to make the UI better and easier to use. We can show web search results from SUSI as plain text responses in the Line bot, but this doesn’t follow good design rules, as it is not a good approach to show a lot of text in a constricted space; users deem such a layout less attractive. In SUSI webchat, a location is returned with a map, which looks more promising to users, as illustrated below:

Web search results are returned as carousels and are viewable as:

We can implement web search in Line by sending an array of text responses, as in the code below:

var msg = [];
for (var i = 0; i < 5; i++) {
   msg[i] = {
       type: 'text',
       text: 'text to be sent here'
   };
}
return client.replyMessage(event.replyToken, msg);

If we send web search results as plain text responses it looks messy; users will not like it much, as it is difficult to read and understand a lot of text in one place. You can see this in the screenshot below:

To make it better looking we will implement carousels for web search in SUSI line bot. Carousels are tiles that can be scrolled horizontally to show rich content. We can implement carousels using this code:

var carousel = [];
for (var i = 1; i < 4; i++) {
   var title = 'title of carousel';
   var query = title;
   var msg = 'description of carousel here';
   var link = 'link to be opened here';
   if (title.length >= 40) {
       title = title.substring(0, 36);
       title = title + "...";
   }
   if (msg.length >= 60) {
       msg = msg.substring(0, 56);
       msg = msg + "...";
   }
   carousel[i] = {
       "title": title,
       "text": msg,
       "actions": [{
               "type": "uri",
               "label": "View detail",
               "uri": link
           },
           {
               "type": "message",
               "label": "Ask SUSI again",
               "text": query
           }
       ]
   };
}

In the above code, we check the length of the title and description because the Line API limits the title to 40 characters and the description to 60 characters. If the limit is exceeded, we trim the title or description accordingly before using it. Next, we assign the title, description, and the link to be opened to the carousel’s actions so that we can use them in the code below.

const answer = [{
       type: 'text',
       text: ans
   },
   {
       "type": "template",
       "altText": "this is a carousel template",
       "template": {
           "type": "carousel",
           "columns": [
               carousel[1],
               carousel[2],
               carousel[3]
           ]
       }
   }
]
return client.replyMessage(event.replyToken, answer);

The above code shows an array named answer to which we have added the carousels created in the previous snippet. After implementing this, web search will look like this:

This UI looks neat and easy to read, and users will like it. “Ask SUSI again” sends the title of the carousel back to SUSI. To show a location map in the SUSI Line bot, just like it is shown in SUSI webchat, we use this code:

const answer = [{
       type: 'text',
       text: ans
   },
   {
       "type": "location",
       "title": placeName, // name of the place here
       "address": address,
       "latitude": latitude, // latitude of the location here
       "longitude": longitude // longitude of the location here
   }
];
return client.replyMessage(event.replyToken, answer);

We send a location-type message which includes the latitude and longitude of the location so that the Line bot can show the location on a map. After implementing the location-type response it will look like this:

Clicking on this location opens a map inside the Line app, where we see a pointer at the location we provided.

If you want to learn more about the Line messaging API, refer to https://devdocs.line.me/en/#messaging-api

Resources
https://devdocs.line.me/en/#messaging-api


Handling Requests for hasMany Relationships in Open Event Front-end

In Open Event Front-end we use Ember Data and the JSON API spec to integrate the application with the server. Ember Data provides an easy way to handle API requests; however, it does not support a direct POST for saving bulk data, which was the problem we faced while implementing event creation using the API.

In this blog we will take a look at how we implemented POST requests for saving hasMany relationships, using the example of the sessions-speakers route to see how we saved the tracks, microlocations & session-types. Let’s see how we did it.

Fetching the data from the server

Ember by default does not support hasMany requests for getting related model data. However, we can use an external add-on which enables hasMany GET requests; we use ember-data-has-many-query, which is a great add-on for querying the hasMany relations of a model.

let data = this.modelFor('events.view.edit');
data.tracks = data.event.get('tracks');
data.microlocations = data.event.get('microlocations');
data.sessionTypes = data.event.get('sessionTypes');
return RSVP.hash(data);

In the above example we are querying the tracks, microlocations & sessionTypes, which are hasMany relationships related to the events model. We can simply do a GET request for a related model:

data.event.get('tracks');

In the above example we are retrieving all the tracks of the event.

Sending a POST request for a hasMany relationship

Ember currently does not support bulk POST requests for hasMany relations. We solved this by making a POST request for each individual record of the hasMany array.

We start by creating a `promises` array which will contain all the individual requests. We then iterate over all the hasMany relations & push them to the `promises` array, so that each request is an individual promise.

let promises = [];

promises.push(...this.get('model.event.tracks').toArray().map(track => track.save()));
promises.push(...this.get('model.event.sessionTypes').toArray().map(type => type.save()));
promises.push(...this.get('model.event.microlocations').toArray().map(location => location.save()));

Once we have all the promises, we use RSVP to make the POST requests. We make use of the all() method, which takes an array of promises as a parameter and resolves all of them. If the promises are not resolved successfully we simply notify the user using the notify service, else we redirect to the home page.

RSVP.Promise.all(promises)
  .then(() => {
    this.transitionToRoute('index');
  }, () => {
    this.get('notify').error(this.l10n.t('Data did not save. Please try again'));
  });

As a result of this, we can now retrieve & create new tracks, microlocations & sessionTypes on the sessions-speakers route.

Thank you for reading the blog, you can check the source code for the example here.


Getting Image location in the Phimpme Android’s Camera

The Phimpme Android app, along with a decent gallery and accounts section, comes with a camera section stuffed with all the features a user requires for day-to-day usage. It comes with an auto mode for the best experience, and also with a manual mode for users who like to tweak the camera to their own liking. Along with all this, it also has an option to record the exact coordinates where the image was clicked. When we enable location from the settings, it extracts the latitude and longitude of the image as it is being clicked and displays the visible region of the map at the top of the image info section, as depicted in the screenshot below.

In this tutorial, I will be discussing how we have implemented the location functionality to fetch the location of the image in the Phimpme app.

Step 1

For getting the location from the device, the first step is to add the permission to access the GPS and location services in the AndroidManifest.xml file. This can be done using the following line of code:

<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>

After this, we need to download and install the Google Play services SDK to access the Google location APIs. Follow the official Google developers’ guide on how to install Google Play services into the project from the resources section below.

Step 2

To get the last known location of the device at the time of clicking the picture, we make use of the FusedLocationProviderClient class: we create an object of this class and initialise it in the onCreate method of the camera activity. This can be done using the following lines of code:

private FusedLocationProviderClient mFusedLocationClient;
mFusedLocationClient = LocationServices.getFusedLocationProviderClient(this);

After we have created and initialised the object mFusedLocationClient, we call the getLastLocation method on it as soon as the user clicks the take picture button in the camera. We also attach an OnSuccessListener, whose callback receives the Location object when the present or last known location of the device is successfully extracted. This can be done using the following lines of code:

mFusedLocationClient.getLastLocation()
       .addOnSuccessListener(this, new OnSuccessListener<Location>() {
           @Override
           public void onSuccess(Location location) {
               if (location != null) {
                   // Get the latitude and longitude here
               }
           }
       });
After this, we can extract the latitude and longitude of the device in the onSuccess method of the code snippet above and store them in the shared preferences, so that a different activity of the application can later show the map view of the coordinates when the user asks for the info of an image.
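
A minimal sketch of saving the coordinates inside onSuccess could look like this; the preference key names are illustrative assumptions, not necessarily the ones used in Phimpme.

// Hedged sketch, inside onSuccess(Location location) of the camera activity.
// The key names below are illustrative assumptions.
SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(getApplicationContext());
prefs.edit()
       .putFloat("image_latitude", (float) location.getLatitude())
       .putFloat("image_longitude", (float) location.getLongitude())
       .apply();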

Step 3

After getting the latitude and longitude, we need to display the image view of the visible region of the map. We can make use of the Glide library to fetch the visible map area from a URL which contains our location values and set it on the image view.

The URL of the visible map can be generated using the following line of code:

String.format(Locale.US, getUrl(value), location.getLatitude(), location.getLongitude());
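For illustration, the URL template and the Glide call could look like the sketch below. The static-map URL format and view name are assumptions for illustration: the actual template returned by getUrl(value) may differ, and an API key parameter may also be required.

// Hedged sketch: an illustrative static-map URL template and Glide call.
String mapUrl = String.format(Locale.US,
       "https://maps.googleapis.com/maps/api/staticmap?center=%f,%f&zoom=15&size=600x300",
       location.getLatitude(), location.getLongitude());

// Load the visible map region into the ImageView of the info section
Glide.with(this)
       .load(mapUrl)
       .into(mapImageView); // assumption: the ImageView showing the map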

This is how we added the functionality to fetch the coordinates of the device at the time of clicking an image and to display the map in the Phimpme Android application. To get the full source code, please refer to the Phimpme Android GitHub repository.

Resources

  1. Google Developers: Location services guide – https://developer.android.com/training/location/retrieve-current.html
  2. Google Developers: Google Play services SDK guide – https://developer.android.com/studio/intro/update.html#channels
  3. GitHub: Open Camera source code – https://github.com/almalence/OpenCamera
  4. GitHub: Phimpme Android – https://github.com/fossasia/phimpme-android/
  5. GitHub: Glide library – https://github.com/bumptech/glide

 


Implement Wave Generation Functionality in The PSLab Android App

The PSLab Android app works as an oscilloscope using the audio jack of an Android device. The implementation of the scope using the built-in mic is discussed in the post Using the Audio Jack to make an Oscilloscope in the PSLab Android App. Another application which can be implemented by hacking the audio jack is wave generation: we can generate different types of signals on the wires connected to the audio jack using the Android APIs that control the audio hardware. In this post, I will discuss how we can generate waves using the Android APIs for controlling the audio hardware.

Configuration of Audio Jack for Wave Generation

Simply cut open the wire of a cheap pair of earphones to gain access to its terminals and attach alligator pins by soldering or any other hack (jugaad) that you can think of. After you are done with the tinkering, the earphone jack should look something like the image below.

Source: edn.com

If your earphones have a mic, there will be an extra wire for the mic input. In any general pair of earphones, the wire configuration is almost the same as shown in the image below.

Source: flickr

Android APIs for Controlling Audio Hardware

AudioRecord and AudioTrack are the two classes in Android that manage recording and playback respectively. For the wave generation application we only need the AudioTrack class.

Creating an AudioTrack object: we need the following parameters to initialise an AudioTrack object.

STREAM TYPE: the type of stream, like STREAM_SYSTEM, STREAM_MUSIC, STREAM_RING, etc. For wave generation we are using the music stream. Every stream has its own maximum and minimum volume level.

SAMPLING RATE: the rate at which the source samples the audio signal.

BUFFER SIZE IN BYTES: the total size in bytes of the internal buffer from which the audio data is read for playback.

MODES: There are two modes

  • MODE_STATIC: Audio data is transferred from Java to native layer only once before the audio starts playing.
  • MODE_STREAM: Audio data is streamed from Java to native layer as audio is being played.

getMinBufferSize() returns the estimated minimum buffer size required to create an AudioTrack object in MODE_STREAM mode.

private int minTrackBufferSize;
private static final int SAMPLING_RATE = 44100;
minTrackBufferSize = AudioTrack.getMinBufferSize(SAMPLING_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

audioTrack = new AudioTrack(
       AudioManager.STREAM_MUSIC,
       SAMPLING_RATE,
       AudioFormat.CHANNEL_OUT_MONO,
       AudioFormat.ENCODING_PCM_16BIT,
       minTrackBufferSize,
       AudioTrack.MODE_STREAM);

The function createBuffer() creates the audio buffer that is played using the AudioTrack object, i.e. the AudioTrack object writes this buffer to the playback stream. The function below fills the buffer with random values, due to which a random signal is generated. If we want to generate a specific wave like a square wave, sine wave or triangular wave, we have to fill the buffer accordingly, as sketched after the snippet below.

private Random random = new Random();

public short[] createBuffer(int frequency) {
   // generating a random buffer for now
   short[] buffer = new short[minTrackBufferSize];
   for (int i = 0; i < minTrackBufferSize; i++) {
       // random value across the full 16-bit PCM range [-32768, 32767]
       buffer[i] = (short) (random.nextInt(65536) - 32768);
   }
   return buffer;
}
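
For instance, a sine wave of a given frequency could be generated by filling the buffer with sampled sine values, as in this minimal sketch (a simple illustration, not the final PSLab implementation):

// Hedged sketch: fill the buffer with a sine wave of `frequency` Hz.
public short[] createSineBuffer(int frequency) {
   short[] buffer = new short[minTrackBufferSize];
   for (int i = 0; i < buffer.length; i++) {
       double angle = 2.0 * Math.PI * frequency * i / SAMPLING_RATE;
       // scale the [-1, 1] sine value to the 16-bit PCM range
       buffer[i] = (short) (Math.sin(angle) * Short.MAX_VALUE);
   }
   return buffer;
}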

We created a write() method and passed the audio buffer created in the above step as an argument to it. This method writes the audio buffer into the audio stream for playback.

public void write(short[] buffer) {
   /* write buffer to audioTrack */
   audioTrack.write(buffer, 0, buffer.length);
}

The amplitude of the signal can be controlled by changing the volume level of the stream on which the buffer is being played. As we are playing the audio on the music stream, STREAM_MUSIC is passed as a parameter to the setStreamVolume() method.

value: the amplitude level of the stream. Every stream has its own amplitude levels; the getStreamMaxVolume(STREAM_TYPE) method is used to find the maximum valid amplitude level of any stream.
flag: this stackoverflow post explains all the flags of the AudioManager class.

AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
audioManager.setStreamVolume(AudioManager.STREAM_MUSIC, value, flag);
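
For example, setting the output to roughly half of the maximum amplitude could look like this sketch (flag 0 simply requests no volume UI feedback):

// Hedged sketch: set the music stream to ~50% of its maximum volume.
int maxVolume = audioManager.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
audioManager.setStreamVolume(AudioManager.STREAM_MUSIC, maxVolume / 2, 0);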

Roadmap

We are working on implementing methods to fill the audio buffer with specific values such that waves like a sinusoidal wave, square wave or sawtooth wave can be generated during the playback of the buffer using the AudioTrack object.


Data Access Layer in Open Event Organizer Android App

Open Event Organizer is an Android app for organizers and entry managers. Its core feature is scanning a QR code to validate an attendee’s check-in; other features of the app are an overview of sales and ticket management. The app maintains a local database and syncs it with the Open Event API server. The data access layer in the app is designed such that data is fetched from the server or taken from the local database according to the user’s need. For example, simply showing the event sales overview to the user fetches the data from the locally saved database; but when the user wants to see the latest data, the app needs to fetch it from the server, show it to the user, and also update the locally saved data for future reference. I will be talking about the data access layer in the Open Event Organizer app in this blog.

The app uses RxJava to perform all the background tasks. All the data access methods in the app return Observables, which are then subscribed to in the presenter to get the data items. So, according to the data request, the app has to create an Observable which will either load the data from the locally saved database or fetch it from the API server. For this, the app has the AbstractObservableBuilder class, which gets to decide which Observable to return for a data request.

Relevant Code:

final class AbstractObservableBuilder<T> {
   ...
   ...
   @NonNull
   private Callable<Observable<T>> getReloadCallable() {
       return () -> {
           if (reload)
               return Observable.empty();
           else
               return diskObservable
                   .doOnNext(item -> Timber.d("Loaded %s From Disk on Thread %s",
                       item.getClass(), Thread.currentThread().getName()));
       };
   }

   @NonNull
   private Observable<T> getConnectionObservable() {
       if (utilModel.isConnected())
           return networkObservable
               .doOnNext(item -> Timber.d("Loaded %s From Network on Thread %s",
                   item.getClass(), Thread.currentThread().getName()));
       else
           return Observable.error(new Throwable(Constants.NO_NETWORK));
   }

   @NonNull
   private <V> ObservableTransformer<V, V> applySchedulers() {
       return observable -> observable
           .subscribeOn(Schedulers.io())
           .observeOn(AndroidSchedulers.mainThread());
   }

   @NonNull
   public Observable<T> build() {
       if (diskObservable == null || networkObservable == null)
           throw new IllegalStateException("Network or Disk observable not provided");

       return Observable
               .defer(getReloadCallable())
               .switchIfEmpty(getConnectionObservable())
               .compose(applySchedulers());
   }
}

 

The class is used to build an abstract Observable which contains both types of Observables: one making the data request to the API server and one reading the locally saved database. Take a look at the build method. The method getReloadCallable provides the default observable to be subscribed. It checks the parameter reload, which if true means the data request must be made to the API server, and if false means the data can be fetched from the locally saved database. If reload is false, getReloadCallable returns the disk observable and the data is fetched from the locally saved database; if reload is true, the method returns an empty observable.

The method getConnectionObservable returns a network observable which makes the data request to the API server. In the build method, the switchIfEmpty operator is applied to the default observable, which is empty if reload is true, and the network observable is passed to it. So when reload is true the network observable is subscribed, and when it is false the disk observable is subscribed. An example of using this class to make an events data request is:

public Observable<Event> getEvents(boolean reload) {
   Observable<Event> diskObservable = Observable.defer(() ->
       databaseRepository.getAllItems(Event.class)
   );

   Observable<Event> networkObservable = Observable.defer(() ->
       eventService.getEvents(JWTUtils.getIdentity(getAuthorization()))
           ...
           ...

   return new AbstractObservableBuilder<Event>(utilModel)
       .reload(reload)
       .withDiskObservable(diskObservable)
       .withNetworkObservable(networkObservable)
       .build();
}

 

So, according to the boolean parameter reload, the correct observable is subscribed to complete the data request.
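
In a presenter, the returned observable could then be consumed roughly as in this sketch; the repository, view and method names here are illustrative assumptions.

// Hedged sketch: subscribing to the events observable in a presenter.
eventRepository.getEvents(true)   // true forces a reload from the API server
   .toList()
   .subscribe(
       events -> eventsView.showEvents(events),
       throwable -> eventsView.showError(throwable.getMessage()));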

Links:
1. Documentation about the Operators in ReactiveX
2. Information about the Data Access Layer on Wikipedia


Using Data Access Object to Store Information

We often need to store information received from the network to retrieve later. Although we can store and read data directly, using a data access object to store information enables us to perform data operations without exposing the details of the database. Using data access objects is also a best practice in software engineering. Recently I modified the Connfa app to store the data received in the Open Event format. In this blog, I describe how to use a data access object.

The goal is to abstract and encapsulate all access to the data and provide an interface. This is called the Data Access Object pattern. In a nutshell, the DAO “knows” which data source (that could be a database, a flat file or even a web service) to connect to and is specific to this data source. It makes no difference to the application whether it accesses a relational database or parses XML files (when using a DAO). The DAO is usually able to create an instance of a data object (“to read data”) and also to persist data (“to save data”) to the data source.

Consider the example from the Connfa app in which we get the tracks from the API and store them in an SQL database. We use a DAO to create a layer between the model and the database, where AbstractEntityDAO is an abstract class which has the functions to perform CRUD operations. We extend it to implement them in our DAO model. Here is the TrackDao structure:

public class TrackDao extends AbstractEntityDAO<Track, Long> {

    public static final String TABLE_NAME = "table_track";

    @Override
    protected String getSearchCondition() {
        return "_id=?";
    }
    
    ...
}

Find the complete class here to see the detailed methods to implement search conditions, get key columns, create an instance, etc.

Here is a general method to get the data from the database, where getFacade() returns the facade object that represents the underlying data source for the given layer element.

public List<ClassToSave> getAllSafe() {
   ILAPIDBFacade facade = getFacade();
   try {
       facade.open();
       return getAll();

   } finally {
       facade.close();
   }
}

Now we can create an instance and use these methods instead of performing SQL operations directly. This function gets the data and sorts it accordingly:

private TrackDao mTrackDao;
 public List<Track> getTracks() {
   List<Track> tracks = mTrackDao.getAllSafe();
   Collections.sort(tracks, new Comparator<Track>() {
       @Override
       public int compare(Track track, Track track2) {
           return Double.compare(track.getOrder(), track2.getOrder());
       }
   });
   return tracks;
}


Receiving Data From the Network Asynchronously

We often need to receive information from networks in Android apps. Although it is a simple process, a problem arises when it is done on the main user interaction thread. To understand this problem better, consider using an app which shows some text downloaded from an online database. When the user clicks a button to display the text, it may take some time to get the HTTP response. So what does the activity do in that time? It creates a lag in the user interface and makes the app stop responding. Recently I implemented this in the Connfa app to download data in the Open Event format.

To solve this problem we use a concurrent thread to download the data and send the result back to be processed, separating it from the main user interaction thread. AsyncTask allows you to perform background operations and publish results on the UI thread without having to manipulate threads and/or handlers yourself.

AsyncTask is designed to be a helper class around Thread and Handler and does not constitute a generic threading framework. AsyncTasks should ideally be used for short operations (a few seconds at the most). If you need to keep threads running for long periods of time, it is highly recommended you use the various APIs provided by the java.util.concurrent package such as Executor, ThreadPoolExecutor and FutureTask.

An asynchronous task is defined by a computation that runs on a background thread and whose result is published on the UI thread. An asynchronous task is defined by 3 generic types, called Params, Progress and Result, and 4 steps, called onPreExecute, doInBackground, onProgressUpdate and onPostExecute.

In this blog, I describe how to download data in an Android app with AsyncTask using the OkHttp library.

First, you need to add the dependency for the OkHttp library:

compile 'com.squareup.okhttp:okhttp:2.4.0' 


Extend the AsyncTask class and override its functions to modify them accordingly. Here I declare a class named AsyncDownloader extending AsyncTask with a String parameter (our URL), Integer progress, and a String result, which will be our JSON data. Instead of using AsyncTask’s get() function to retrieve the result, we implement an interface to receive the data, so the main user interaction thread does not have to wait for the result.

import android.os.AsyncTask;

import com.squareup.okhttp.Call;
import com.squareup.okhttp.OkHttpClient;
import com.squareup.okhttp.Request;
import com.squareup.okhttp.Response;

import java.io.IOException;

public class AsyncDownloader extends AsyncTask<String, Integer, String> {
    private String jsonData = null;

    public interface JsonDataSetter {
        void setJsonData(String str);
    }


    JsonDataSetter jsonDataSetterListener;

    public AsyncDownloader(JsonDataSetter context) {

        this.jsonDataSetterListener = context;
    }

    @Override
    protected void onPreExecute() {
        super.onPreExecute();
    }

    @Override
    protected String doInBackground(String... params) {

        String url = params[0];

        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder()
                .url(url)
                .build();
        Call call = client.newCall(request);

        Response response = null;

        try {
            response = call.execute();

            if (response.isSuccessful()) {
                jsonData = response.body().string();

            } else {
                jsonData = null;
            }

        } catch (IOException e) {
            e.printStackTrace();
        }


        return jsonData;
    }

    @Override
    protected void onPostExecute(String jsonData) {
        super.onPostExecute(jsonData);
        jsonDataSetterListener.setJsonData(jsonData);
    }
}


We download the data in doInBackground, making the request using the OkHttp library, and send the result back in the onPostExecute method using our interface method. See above for the complete class implementation.

You can create an instance of AsyncDownloader in an activity and implement the interface to receive the data, as in the sketch below.
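
A hedged sketch of such an activity is given below; the activity name and URL are placeholders.

import android.app.Activity;

// Hedged sketch: an activity consuming the downloaded JSON.
public class SessionsActivity extends Activity implements AsyncDownloader.JsonDataSetter {

    private void fetchSessions() {
        // The URL here is a placeholder
        new AsyncDownloader(this).execute("https://example.com/api/sessions");
    }

    @Override
    public void setJsonData(String jsonData) {
        // Called on the UI thread from onPostExecute with the downloaded JSON
        if (jsonData != null) {
            // parse and display the data here
        }
    }
}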

Tip: always cancel an AsyncTask after use to avoid having many concurrent tasks running at one time.

Find the complete code here.


Storing SUSI Settings for Anonymous Users

SUSI Web Chat is equipped with a number of settings by which it can be configured. A logged-in user can change their settings and save them by sending them to the server. But for an anonymous user, this feature is not available when opening the chat each time, unless the settings are stored somewhere. To overcome this problem we store the settings in the browser’s cookies, which we do by following these steps.

The current User Settings available are:

  1. Theme – One can change the theme to either ‘dark’ or ‘light’.
  2. To have the option to send a message on enter.
  3. Enable mic to give voice input.
  4. Enable speech output only for speech input.
  5. To have speech output always ON.
  6. To select the default language.
  7. To adjust the speech output rate.
  8. To adjust the speech output pitch.

The first step is to have a default state for all the settings, which acts as the default for all users opening the chat for the first time. We get these settings from the user preferences store, available in the file UserPreferencesStore.js:

let _defaults = {
    Theme: 'light',
    Server: 'http://api.susi.ai',
    StandardServer: 'http://api.susi.ai',
    EnterAsSend: true,
    MicInput: true,
    SpeechOutput: true,
    SpeechOutputAlways: false,
    SpeechRate: 1,
    SpeechPitch: 1,
};

We assign these default settings as the initial state of our component in its constructor so that we populate them beforehand.
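
A sketch of how that could look in the component, assuming the store exposes a getter for these defaults (the getter name is an assumption for illustration):

// Hedged sketch: seeding component state with the defaults shown above.
constructor(props) {
    super(props);
    let defaults = UserPreferencesStore.getPreferences(); // assumed getter
    this.state = {
        theme: defaults.Theme,
        enterAsSend: defaults.EnterAsSend,
        micInput: defaults.MicInput,
        speechOutput: defaults.SpeechOutput,
        speechOutputAlways: defaults.SpeechOutputAlways,
        speechRate: defaults.SpeechRate,
        speechPitch: defaults.SpeechPitch
    };
}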

2. On changing the values of the various settings we update the state through various function handlers, which are as follows:

  • handleSelectChange – handles all the theme changes,
  • handleEnterAsSend – handles the option to send a message on enter,
  • handleMicInput – handles enabling mic input,
  • handleSpeechOutput – handles whether speech output should be enabled or not,
  • handleSpeechOutputAlways – handles if the user wants to listen to speech after every input,
  • handleLanguage – handles the language settings related to speech,
  • handleTextToSpeech – handles the text-to-speech settings, including speech rate and pitch.
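
For illustration, one such handler could look like this sketch (the exact signatures in Settings.react.js may differ; the (event, index, value) shape assumes a material-ui SelectField):

// Hedged sketch: updating the theme in the component state.
handleSelectChange = (event, index, value) => {
    this.setState({
        theme: value // 'light' or 'dark'
    });
};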

  3. Once the values are changed, we make use of the universal-cookie package and save the settings in the user’s browser. This is done in the function handleSubmit in the Settings.react.js file.

import Cookies from 'universal-cookie';

const cookies = new Cookies(); // Create cookie object.
// set all values
let vals = {
    theme: newTheme,
    server: newDefaultServer,
    enterAsSend: newEnterAsSend,
    micInput: newMicInput,
    speechOutput: newSpeechOutput,
    speechOutputAlways: newSpeechOutputAlways,
    rate: newSpeechRate,
    pitch: newSpeechPitch,
};

let settings = Object.assign({}, vals);
settings.LocalStorage = true;
// Store in cookies for anonymous user
cookies.set('settings', settings);
  4. Once the values are set in the browser, we load the new settings every time the user opens SUSI Web Chat by having the following checks in the file API.actions.js. If the person is not logged in, we get the values from the cookies using the get function and initialise the settings for that user.

if (cookies.get('loggedIn') === null ||
    cookies.get('loggedIn') === undefined) {
    // check if not logged in and get the value from the set cookie
    let settings = cookies.get('settings');
    if (settings !== undefined) {
        // Check if the settings are set in the cookie
        SettingsActions.initialiseSettings(settings);
    }
}
else {
    // call the AJAX for getting Settings for loggedIn users
}
