Adding Voice Recognition in Description Dialog Box in Phimpme project

In this blog, I will explain how I added voice recognition to the dialog box used to describe an image in the Phimpme Android application. In the Phimpme Android application we have an option to add a description to an image. Sometimes the description can be long. Adding voice recognition (speech-to-text) input makes it easier for the user to add a description to the image.

Adding appropriate Dialog Box

In order to let the user trigger the voice recognition function, I have added an image button to the description dialog box. Since the description dialog box only contains an EditText and a button, I have used material design to make it look better and added a caption on top of it.

 

<LinearLayout
   android:id="@+id/rl_description_dialog"
   android:layout_width="match_parent"
   android:layout_height="wrap_content"
   android:orientation="horizontal"
   android:padding="15dp">
   <EditText
       android:id="@+id/description_edittxt"
       android:layout_weight="1"
       android:layout_width="fill_parent"
       android:layout_height="wrap_content"
       android:padding="15dp"
       android:hint="@string/description_hint"
       android:textColorHint="@color/grey"
       android:layout_margin="20dp"
       android:gravity="top|left"
       android:inputType="text" />
   <ImageButton
       android:layout_width="@dimen/mic_image"
       android:layout_height="@dimen/mic_image"
       android:layout_alignRight="@+id/description_edittxt"
       app2:srcCompat="@drawable/ic_mic_black"
       android:layout_gravity="center"
       android:background="@color/transparent"
       android:paddingEnd="10dp"
       android:paddingTop="12dp"
       android:id="@+id/voice_input"/>
</LinearLayout>

Function to prompt dialog box

We have added a function to prompt the dialog box from anywhere in the application. The getDescriptionDialog() function is used to prompt the description dialog box. getDescriptionDialog() returns the EditText, which can further be used to manipulate the text in the EditText. Follow the steps below to inflate the description dialog box in an activity.

First,

In the getDescriptionDialog() function we inflate the layout using the getLayoutInflater() function, passing the layout id as an argument.

public EditText getDescriptionDialog(final ThemedActivity activity, AlertDialog.Builder descriptionDialog){
final View DescriptiondDialogLayout = activity.getLayoutInflater().inflate(R.layout.dialog_description, null);

Second,

Get the TextView in the description dialog box.

final TextView DescriptionDialogTitle = (TextView) DescriptiondDialogLayout.findViewById(R.id.description_dialog_title);

Third,

Present the dialog using a CardView to make use of material design. Then take an instance of the EditText; this EditText can later be used to take input from the user, either typed or via voice recognition.

final CardView DescriptionDialogCard = (CardView) DescriptiondDialogLayout.findViewById(R.id.description_dialog_card);
EditText editxtDescription = (EditText) DescriptiondDialogLayout.findViewById(R.id.description_edittxt);

Fourth,

Set an onClickListener for when the user clicks the mic image icon. This onClickListener will prompt the voice recognition in the activity. We need to specify the language for the speech-to-text input; in the case of Phimpme it is English, so "en-US". We have set the maximum number of results to 15.

ImageButton VoiceRecognition = (ImageButton) DescriptiondDialogLayout.findViewById(R.id.voice_input);
VoiceRecognition.setOnClickListener(new View.OnClickListener() {
   @Override
   public void onClick(View v) {
        // This is the intent needed to start the voice recognizer
        Intent i = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        i.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, "en-US");
        i.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 15); // maximum number of results
        i.putExtra(RecognizerIntent.EXTRA_PROMPT, activity.getString(R.string.caption_speak));
       startActivityForResult(i, REQ_CODE_SPEECH_INPUT);

   }
});
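
The post does not show how getDescriptionDialog() finishes; presumably the inflated layout is attached to the dialog builder and the EditText is handed back to the caller, roughly like the sketch below (the exact wiring in Phimpme may differ).

// Attach the inflated layout to the dialog builder and return the EditText,
// so the caller can show the dialog and read the entered description (sketch).
descriptionDialog.setView(DescriptiondDialogLayout);
return editxtDescription;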

Putting Text in the EditText

After the voice recognition prompt ends, the onActivityResult() function checks whether any data was received.

if (requestCode == REQ_CODE_SPEECH_INPUT && data!=null) {

We get the spoken text from the intent using getStringArrayListExtra() and store it in an ArrayList. To store the received data in a string, we take the first element of the ArrayList.

ArrayList<String> result = data
       .getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
voiceInput = result.get(0);

Setting the received data in the EditText:

editTextDescription.setText(voiceInput);
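
Putting these fragments together, the result handler might look like the sketch below; REQ_CODE_SPEECH_INPUT and editTextDescription are assumed to be members of the activity, as in the snippets above.

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    // Only handle the result of our speech recognition request
    if (requestCode == REQ_CODE_SPEECH_INPUT && data != null) {
        ArrayList<String> result = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
        if (result != null && !result.isEmpty()) {
            String voiceInput = result.get(0);        // the best match is the first entry
            editTextDescription.setText(voiceInput);  // show it in the description field
        }
    }
}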

Conclusion

Using voice recognition is a quick and simple way to add a long description to an image. Its speech-to-text feature works without many mistakes and is useful in our Phimpme Android application.

Github

https://github.com/fossasia/phimpme-android

Resources

Tutorial for speech to Text: https://www.androidhive.info/2014/07/android-speech-to-text-tutorial/

To add description dialog box: https://developer.android.com/guide/topics/ui/dialogs.html

Developing LoklakWordCloud app for Loklak apps site

LoklakWordCloud is an app to visualise data returned by Loklak in the form of a word cloud.

The app is presently hosted on the Loklak apps site.

Word clouds provide a very simple, easy, yet interesting and effective way to analyse and visualise data. This app allows users to create a word cloud out of Twitter data via the Loklak API.

Presently the app is at a very early stage of development and more work is left to be done. The app consists of an input field where the user can enter a query word; on pressing the search button, a word cloud is generated from the words related to the query word entered.

Loklak API is used to fetch all the tweets which contain the query word entered by the user.

These tweets are processed to generate the word cloud.

Related issue: https://github.com/fossasia/apps.loklak.org/pull/279

Live app: http://apps.loklak.org/LoklakWordCloud/

Developing the app

The main challenge in developing this app is implementing its prime feature, that is, generating the word cloud. How do we get a dynamic word cloud which can be easily generated from the word the user has entered? This is where Jqcloud comes in: an awesome lightweight jQuery plugin for generating word clouds. All we need to do is provide a list of words along with their weights.

Let us see step by step how this app (first version) works. First we require all the tweets which contain the entered word. For this we use the Loklak search service. Once we get all the tweets, we can parse the tweet bodies to create a list of words along with their frequencies.

var url = "http://35.184.151.104/api/search.json?callback=JSON_CALLBACK&count=100&q=" + query;
        $http.jsonp(url)
            .then(function (response) {
                $scope.createWordCloudData(response.data.statuses);
                $scope.tweet = null;
            });

Once we have all the tweets, we need to extract the tweet texts and create a list of valid words. What are valid words? Words like 'the', 'is', 'a', 'for', 'of', 'then' do not provide us with any important information and will not help us in doing any kind of analysis, so there is no use including them in our word cloud. Such words are called stop words and we need to get rid of them. For this we are using a list of commonly used stop words; such lists can be very easily found over the internet. Here is the list which we are using. Once we are able to extract the text from the tweets, we need to filter out stop words and insert the valid words into a list.

tweet = data[i];
tweetWords = tweet.text.replace(", ", " ").split(" ");

for (var j = 0; j < tweetWords.length; j++) {
    word = tweetWords[j];
    word = word.trim();
    // strip a leading quote or bracket
    if (word.startsWith("'") || word.startsWith('"') || word.startsWith("(") || word.startsWith("[")) {
        word = word.substring(1);
    }
    // strip a trailing quote, bracket or punctuation mark
    if (word.endsWith("'") || word.endsWith('"') || word.endsWith(")") || word.endsWith("]") ||
        word.endsWith("?") || word.endsWith(".")) {
        word = word.substring(0, word.length - 1);
    }
    // discard stop words
    if (stopwords.indexOf(word.toLowerCase()) !== -1) {
        continue;
    }
    // hashtags and mentions are handled separately
    if (word.startsWith("#") || word.startsWith("@")) {
        continue;
    }
    // discard links
    if (word.startsWith("http") || word.startsWith("https")) {
        continue;
    }
    $scope.filteredWords.push(word);
}

What are we actually doing in the above snippet? We are iterating over each of the statuses returned by the Loklak API. For each tweet, we first split the text into words and then iterate over those words. For a given word we do a number of checks. First we check if the word begins or ends with a special character, for example quotation marks or brackets; if so, we remove those characters, as they would cause trouble when calculating frequencies. Next we check if the word begins with ‘#’ or ‘@’; if it does, we discard it, as hashtags and mentions are handled separately. Finally we check whether the word is a stop word, and if it is, we discard it. If a word passes all the checks, we add it to our list of valid words.

Once we are done with the tweet bodies, next we need to handle hashtags and mentions.

tweet.hashtags.forEach(function (hashtag) {
                $scope.filteredWords.push("#" + hashtag);
            });

            tweet.mentions.forEach(function (mention) {
                $scope.filteredWords.push("@" + mention);
            });

The above code simply iterates over the hashtags and mentions and inserts them into the filteredWords list. We have handled hashtags and mentions separately so that we can apply filters in future.

Once we are done generating the list of valid words, we need to calculate a weight for each word. Here the weight is nothing but the number of times a particular word is present in the list. We calculate this using a JavaScript object: we iterate over the list of valid words, and if a word is not present in the object (or dictionary, as you may wish to call it), we create a new key with the name of that word and set its value to one; if the word is already present as a key, we simply increment its value by one.

for (var word in $scope.wordFreq) {
            $scope.wordCloudData.push({
                text: word,
                weight: $scope.wordFreq[word]
            });
        }

The above code snippet then converts the frequency object built by the process described above into the list of {text, weight} objects that Jqcloud expects.

Now we are all set to generate our word cloud. We simply use Jqcloud’s interface to configure it with the words and their respective frequencies, provide a list of color codes for a color gradient, and set autoResize to true so that our word cloud resizes itself when the screen size changes.

$scope.generateWordCloud = function() {
        if ($scope.wordCloud === null) {
            $scope.wordCloud = $('.wordcloud').jQCloud($scope.wordCloudData, {
                colors: ["#D50000", "#FF5722", "#FF9800", "#4CAF50", "#8BC34A", "#4DB6AC", "#7986CB", "#5C6BC0", "#64B5F6"],
                fontSize: {
                    from: 0.06,
                    to: 0.01
                },
                autoResize: true
            });
        } else {
            $scope.wordCloud = $(".wordcloud").jQCloud('update', $scope.wordCloudData);
        }
    }

Whenever the user searches for a new word, we simply update the existing word cloud with the cloud of the new word.

Future roadmap

  • Make the words in the cloud clickable. On clicking a word, the cloud should get replaced by the selected word’s cloud.
  • Add filters for hashtags, mentions, date.
  • Add an option for exporting the cloud to an image, so that users can also use this app as a tool to generate word clouds as images and save them.
  • Add a loader and error notification for invalid or empty input.

Important resources

  • View the app source code here.
  • Learn more about Loklak API here.
  • Learn more about Jqcloud here.
  • Learn more about AngularJS here.

Enhancing SUSI Line Bot UI by Showing Carousels and Location Map

A good UI primarily focuses on attracting a large number of users, and any good UI designer aims to achieve user satisfaction with a user-friendly design. In this blog, we will learn how to show carousels and a location map in the SUSI Line bot to make the UI better and easier to use. We can show SUSI's web search results as plain text responses in the Line bot, but that does not follow good design practice, as showing a lot of text in a constricted space is not a good approach and users find such a layout less attractive. In SUSI webchat, a location is returned with a map, which looks more promising to users, as illustrated below:

Web search is returned as carousels and is viewable as:

We can implement web search in Line by sending an array of text responses. We can do this with the code below:

for (var i = 0; i < 5; i++) {
   msg[i] = "";
   msg[i] = {
       type: 'text',
       text: 'text to be sent here'
   }
}
return client.replyMessage(event.replyToken, msg);

If we send the web search results as plain text responses, it looks messy; a user will not like it much, as it is difficult to read and understand a lot of text in one place. You can see it in the screenshot below:

To make it better looking we will implement carousels for web search in SUSI line bot. Carousels are tiles that can be scrolled horizontally to show rich content. We can implement carousels using this code:

for (var i = 1; i < 4; i++) {
   title = 'title of carousel';
   query = title;
   msg = 'description of carousel here';
   link = 'link to be opened here';
   if (title.length >= 40) {
       title = title.substring(0, 36);
       title = title + "...";
   }
   if (msg.length >= 60) {
       msg = msg.substring(0, 56);
       msg = msg + "...";
   }
   carousel[i] = {
       "title": title,
       "text": msg,
       "actions": [{
               "type": "uri",
               "label": "View detail",
               "uri": link
           },
           {
               "type": "message",
               "label": "Ask SUSI again",
               "text": query
           }
       ]
   };
}

In the above code, we check the lengths of the title and description because the Line API limits the title to 40 characters and the description to 60 characters. If a limit is exceeded, we trim the title or description accordingly before using it. Next, we assign the title, description, and the link to be opened in the action to the carousel so that we can use it in the code below.

const answer = [{
       type: 'text',
       text: ans
   },
   {
       "type": "template",
       "altText": "this is a carousel template",
       "template": {
           "type": "carousel",
           "columns": [
               carousel[1],
               carousel[2],
               carousel[3]
           ]
       }
   }
]
return client.replyMessage(event.replyToken, answer);

The above code shows an array named answer to which we have added the carousels created in the previous snippet. After implementing this, the web search will look like this:

This UI looks neat and is easy to read, and users will like it. "Ask SUSI again" sends the title of the carousel back to SUSI. To show a location map on the SUSI Line bot, just like it is shown on SUSI webchat, we use the following code:

const answer = [{
       type: 'text',
       text: ans
   },
   {
       "type": "location",
       "title": "name of the place here",
       "address": address,
       "latitude": latitude of location here,
       "longitude": longitude of location here
   }
]
return client.replyMessage(event.replyToken, answer);

We send a location-type message which includes the latitude and longitude of the location so that the Line bot can show the location on a map. After implementing the location-type response, it will look like this:

Clicking on this location opens a map inside the Line app with a pointer at the location that we provided.

If you want to learn more about the Line Messaging API, refer to https://devdocs.line.me/en/#messaging-api

Resources
https://devdocs.line.me/en/#messaging-api

Getting Image location in the Phimpme Android’s Camera

The Phimpme Android app, along with a decent gallery and accounts section, comes with a nice camera section stuffed with all the features a user requires for day-to-day usage. It comes with an Auto mode for the best experience and with a manual mode for users who like to tweak the camera to their own liking. Along with all this, it also has an option to record the coordinates where the image was taken. When location is enabled in the settings, the app extracts the latitude and longitude at the time the image is taken and displays the visible region of the map at the top of the image info section, as depicted in the screenshot below.

In this tutorial, I will be discussing how we have implemented the location functionality to fetch the location of the image in the Phimpme app.

Step 1

To get the location from the device, the first step is to add the permission to access the GPS and location services in the AndroidManifest.xml file. This can be done using the line below.

<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION"/>

After this, we need to download and install the Google Play services SDK to access the Google location APIs. Follow the official Google developers' guide on installing Google Play services into the project, linked in the resources section below.

Step 2

To get the last known location of the device at the time the picture is taken, we make use of the FusedLocationProviderClient class: we create an object of this class and initialise it in the onCreate method of the camera activity. This can be done using the following lines of code:

private FusedLocationProviderClient mFusedLocationClient;
mFusedLocationClient = LocationServices.getFusedLocationProviderClient(this);

After we have created and initialised the mFusedLocationClient object, we call the getLastLocation() method on it as soon as the user presses the take picture button in the camera. We also attach an OnSuccessListener, which returns the Location object when the present or last known location of the device is successfully retrieved. This can be done using the following lines of code:

mFusedLocationClient.getLastLocation()
        .addOnSuccessListener(this, new OnSuccessListener<Location>() {
            @Override
            public void onSuccess(Location location) {
                if (location != null) {
                    // Get the latitude and longitude here
                }
            }
        });

After this, we can extract the latitude and the longitude of the device in the onSuccess method of the code snippet provided above and store them in shared preferences, so that the map view of the coordinates can be shown later from a different activity of the application when the user views the image info.
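
A minimal sketch of how the coordinates might be saved to shared preferences inside onSuccess() is shown below; the preference file name and keys here are hypothetical placeholders, not necessarily the ones used in Phimpme.

SharedPreferences prefs = getSharedPreferences("camera_prefs", Context.MODE_PRIVATE); // hypothetical file name
prefs.edit()
        .putFloat("last_latitude", (float) location.getLatitude())    // hypothetical key
        .putFloat("last_longitude", (float) location.getLongitude())  // hypothetical key
        .apply();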

Step 3

After getting the latitude and longitude, we need to display the visible region of the map. We can make use of the Glide library to fetch the visible map area from a URL which contains our location values and to set it on the ImageView.

The URL of the visible map can be generated using the following line of code.

String.format(Locale.US, getUrl(value), location.getLatitude(), location.getLongitude());
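
A minimal sketch of loading that static map URL into an ImageView with Glide follows; the mapPreview ImageView and its id are assumptions made for illustration.

String mapUrl = String.format(Locale.US, getUrl(value), location.getLatitude(), location.getLongitude());
ImageView mapPreview = (ImageView) findViewById(R.id.map_preview); // hypothetical id
Glide.with(this)
        .load(mapUrl)      // fetch the static map image for the coordinates
        .into(mapPreview); // show it at the top of the image info section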

This is how we have added the functionality to fetch the coordinates of the device at the time of clicking the image and to display the map in the Phimpme Android application. To get the full source code, please refer to the Phimpme Android GitHub repository.

Resources

  1. Google Developer’s : Location services guide – https://developer.android.com/training/location/retrieve-current.html
  2. Google Developer’s : Google play services SDK guide – https://developer.android.com/studio/intro/update.html#channels
  3. GitHub : Open camera Source Code –  https://github.com/almalence/OpenCamera
  4. GitHub : Phimpme Android – https://github.com/fossasia/phimpme-android/
  5. GitHub : Glide library – https://github.com/bumptech/glide

 

Implement Wave Generation Functionality in The PSLab Android App

The PSLab Android App works as an oscilloscope using the audio jack of an Android device. The implementation of the scope using the built-in mic is discussed in the post Using the Audio Jack to make an Oscilloscope in the PSLab Android App. Another application which can be implemented by hacking the audio jack is wave generation. We can generate different types of signals on the wires connected to the audio jack using the Android APIs that control the audio hardware. In this post, I discuss how we can generate waves using the Android APIs for controlling the audio hardware.

Configuration of Audio Jack for Wave Generation

Simply cut open the wire of a cheap pair of earphones to gain control of its terminals and attach alligator pins by soldering, or use any other hack (jugaad) that you can think of. After you are done with the tinkering of the earphone jack, it should look something like the image below.

Source: edn.com

If your earphones have a mic, there will be an extra wire for the mic input. In any general pair of earphones the wire configuration is almost the same as shown in the image below.

Source: flickr

Android APIs for Controlling Audio Hardware

AudioRecord and AudioTrack are the two classes in Android that manage recording and playback respectively. For the wave generation application we only need the AudioTrack class.

Creating an AudioTrack object: We need the following parameters to initialise an AudioTrack object.

STREAM TYPE: the type of stream, like STREAM_SYSTEM, STREAM_MUSIC, STREAM_RING, etc. For wave generation we use the music stream. Every stream has its own maximum and minimum volume level.

SAMPLING RATE: the rate at which the source samples the audio signal.

BUFFER SIZE IN BYTES: the total size in bytes of the internal buffer from which the audio data is read for playback.

MODES: There are two modes

  • MODE_STATIC: Audio data is transferred from Java to native layer only once before the audio starts playing.
  • MODE_STREAM: Audio data is streamed from Java to native layer as audio is being played.

getMinBufferSize() returns the estimated minimum buffer size required for an AudioTrack object to be created in the MODE_STREAM mode.

private int minTrackBufferSize;
private static final int SAMPLING_RATE = 44100;
minTrackBufferSize = AudioTrack.getMinBufferSize(SAMPLING_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

audioTrack = new AudioTrack(
       AudioManager.STREAM_MUSIC,
       SAMPLING_RATE,
       AudioFormat.CHANNEL_OUT_MONO,
       AudioFormat.ENCODING_PCM_16BIT,
       minTrackBufferSize,
       AudioTrack.MODE_STREAM);

The createBuffer() function creates the audio buffer that is played using the AudioTrack object, i.e. the AudioTrack object writes this buffer to the playback stream. The function below fills the buffer with random values, which generates a random (noise) signal. If we want to generate a specific wave such as a square wave, sine wave or triangular wave, we have to fill the buffer accordingly; a sine-wave example is sketched after the snippet below.

public short[] createBuffer(int frequency) {
    // generating a random buffer for now
    short[] buffer = new short[minTrackBufferSize];
    for (int i = 0; i < minTrackBufferSize; i++) {
        // pick a random 16-bit PCM sample in the range [-32768, 32767]
        buffer[i] = (short) (random.nextInt(65536) - 32768);
    }
    return buffer;
}
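
As a rough illustration of filling the buffer for a specific waveform, a sine wave could be generated along these lines; this is a sketch rather than the final PSLab implementation, reusing the frequency parameter, minTrackBufferSize and SAMPLING_RATE from the snippets above.

public short[] createSineBuffer(int frequency) {
    short[] buffer = new short[minTrackBufferSize];
    for (int i = 0; i < buffer.length; i++) {
        // phase of sample i for the requested frequency at the given sampling rate
        double phase = 2 * Math.PI * frequency * i / SAMPLING_RATE;
        // scale the sine value from [-1, 1] to the 16-bit PCM range
        buffer[i] = (short) (Math.sin(phase) * Short.MAX_VALUE);
    }
    return buffer;
}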

We created a write() method and passed the audio buffer created in the above step as an argument to it. This method writes the audio buffer to the audio stream for playback.

public void write(short[] buffer) {
   /* write buffer to audioTrack */
   audioTrack.write(buffer, 0, buffer.length);
}
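
For completeness, a rough sketch of how these pieces could be used together during playback is shown below; the isPlaying flag is a hypothetical control variable, and whether PSLab loops exactly like this is an assumption.

audioTrack.play();                          // start playback on the stream
short[] buffer = createBuffer(frequency);   // or a shaped buffer such as the sine sketch above
while (isPlaying) {                         // hypothetical flag toggled from the UI
    write(buffer);                          // keep feeding the buffer to the audio stream
}
audioTrack.stop();
audioTrack.release();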

The amplitude of the signal can be controlled by changing the volume level of the stream on which the buffer is being played. As we are playing the audio on the music stream, STREAM_MUSIC is passed as a parameter to the setStreamVolume() method.

value: the volume (amplitude) level for the stream. Every stream has its own range of levels; the getStreamMaxVolume(STREAM_TYPE) method returns the maximum valid level for a stream.
flag: this Stack Overflow post explains all the flags of the AudioManager class.

AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
audioManager.setStreamVolume(AudioManager.STREAM_MUSIC, value, flag);

Roadmap

We are working on implementing methods to fill the audio buffer with specific values so that waves like sinusoidal, square and sawtooth waves can be generated during playback of the buffer using the AudioTrack object.

Resources

Using Data Access Object to Store Information

We often need to store information received from the network so that we can retrieve it later. Although we can store and read data directly, using a data access object to store information enables us to perform data operations without exposing the details of the database. Using a data access object is also a best practice in software engineering. Recently I modified the Connfa app to store the data received in the Open Event format. In this blog, I describe how to use a data access object.

The goal is to abstract and encapsulate all access to the data and provide an interface. This is called the Data Access Object pattern. In a nutshell, the DAO "knows" which data source (which could be a database, a flat file or even a web service) to connect to and is specific to this data source. It makes no difference to the application whether it accesses a relational database or parses XML files (through a DAO). The DAO is usually able to create an instance of a data object ("to read data") and also to persist data ("to save data") to the data source.

Consider the example from the Connfa app, in which we get the tracks from the API and store them in an SQL database. We use a DAO to create a layer between the model and the database. AbstractEntityDAO is an abstract class which has the functions to perform CRUD operations; we extend it and implement those functions in our DAO model. Here is the TrackDao structure:

public class TrackDao extends AbstractEntityDAO<Track, Long> {

    public static final String TABLE_NAME = "table_track";

    @Override
    protected String getSearchCondition() {
        return "_id=?";
    }
    
    ...
}

Find the complete class here to see the detailed methods that implement the search conditions, get the key columns, create instances, etc.

Here is a general method to get the data safely from the database. getFacade() returns the facade object for the underlying data source, which is opened before reading and closed afterwards.

public List<ClassToSave> getAllSafe() {
   ILAPIDBFacade facade = getFacade();
   try {
       facade.open();
       return getAll();

   } finally {
       facade.close();
   }
}

Now we can create an instance and use these methods instead of using SQL operations directly. The following function gets the tracks and sorts them by their order.

private TrackDao mTrackDao;
 public List<Track> getTracks() {
   List<Track> tracks = mTrackDao.getAllSafe();
   Collections.sort(tracks, new Comparator<Track>() {
       @Override
       public int compare(Track track, Track track2) {
           return Double.compare(track.getOrder(), track2.getOrder());
       }
   });
   return tracks;
}

References:

 

Receiving Data From the Network Asynchronously

We often need to receive information from the network in Android apps. Although it is a simple process, a problem arises when it is done on the main user interaction thread. To understand this problem better, consider an app which shows some text downloaded from an online database. When the user clicks a button to display the text, it may take some time to get the HTTP response. So what does the activity do in that time? It creates a lag in the user interface and makes the app stop responding. Recently I implemented this in the Connfa App to download data in the Open Event format.

To solve that problem we use a concurrent thread to download the data and send the result back to be processed, separating it from the main user interaction thread. AsyncTask allows you to perform background operations and publish results on the UI thread without having to manipulate threads and/or handlers.

AsyncTask is designed to be a helper class around Thread and Handler and does not constitute a generic threading framework. AsyncTasks should ideally be used for short operations (a few seconds at the most). If you need to keep threads running for long periods of time, it is highly recommended you use the various APIs provided by the java.util.concurrent package such as Executor, ThreadPoolExecutor and FutureTask.

An asynchronous task is defined by a computation that runs on a background thread and whose result is published on the UI thread. An asynchronous task is defined by 3 generic types, called Params, Progress and Result, and 4 steps, called onPreExecute, doInBackground, onProgressUpdate and onPostExecute.

In this blog, I describe how to download data in an android app with AsyncTask using OkHttp library.

First, you need to add the dependency for the OkHttp library:

compile 'com.squareup.okhttp:okhttp:2.4.0'


Extend the AsyncTask class and override its functions to modify them as needed. Here I declare a class named AsyncDownloader extending AsyncTask. We use a String parameter (the URL), an Integer progress type, and a String result type, which will be our JSON data. Instead of using AsyncTask's get() function, we implement an interface to receive the data so that the main user interaction thread does not have to wait for the result.

public class AsyncDownloader extends AsyncTask<String, Integer, String> {
    private String jsonData = null;

    public interface JsonDataSetter {
        void setJsonData(String str);
    }


    JsonDataSetter jsonDataSetterListener;

    public AsyncDownloader(JsonDataSetter context) {

        this.jsonDataSetterListener = context;
    }

    @Override
    protected void onPreExecute() {
        super.onPreExecute();
    }

    @Override
    protected String doInBackground(String... params) {

        String url = params[0];

        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder()
                .url(url)
                .build();
        Call call = client.newCall(request);

        Response response = null;

        try {
            response = call.execute();

            if (response.isSuccessful()) {
                jsonData = response.body().string();

            } else {
                jsonData = null;
            }

        } catch (IOException e) {
            e.printStackTrace();
        }


        return jsonData;
    }

    @Override
    protected void onPostExecute(String jsonData) {
        super.onPostExecute(jsonData);
        jsonDataSetterListener.setJsonData(jsonData);
    }
}


We download the data in doInBackground() by making a request using the OkHttp library and send the result back in the onPostExecute() method using our interface method. See above for the complete class implementation.

You can create an instance of AsyncDownloader in an activity and implement the interface to receive the data, as sketched below.
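
A minimal sketch of such an activity follows; the activity name, the URL constant, and what is done with the JSON are placeholders for illustration.

public class ScheduleActivity extends AppCompatActivity       // hypothetical activity
        implements AsyncDownloader.JsonDataSetter {

    private static final String DATA_URL = "https://example.org/event.json"; // placeholder URL

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Start the download; the result arrives in setJsonData() on the UI thread.
        new AsyncDownloader(this).execute(DATA_URL);
    }

    @Override
    public void setJsonData(String jsonData) {
        if (jsonData != null) {
            // Parse and display the JSON here.
        }
    }
}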

Tip: Always cancel an AsyncTask when it is no longer needed, to avoid having many concurrent tasks running at one time.

Find the complete code here.

References:

Storing SUSI Settings for Anonymous Users

SUSI Web Chat is equipped with a number of settings by which it can be configured. A user who is logged in can change the settings and save them by sending them to the server. For an anonymous user, however, the settings would be lost every time the chat is opened, unless they are stored somewhere. To overcome this problem we store the settings in the browser's cookies, which we do with the following steps.

The current user settings available are:

  1. Theme – One can change the theme to either ‘dark’ or ‘light’.
  2. To have the option to send a message on enter.
  3. Enable mic to give voice input.
  4. Enable speech output only for speech input.
  5. To have speech output always ON.
  6. To select the default language.
  7. To adjust the speech output rate.
  8. To adjust the speech output pitch.

The first step is to have a default state for all the settings which act as default settings for all users when opening the chat for the first time. We get these settings from the User preferences store available at the file UserPreferencesStore.js

let _defaults = {
    Theme: 'light',
    Server: 'http://api.susi.ai',
    StandardServer: 'http://api.susi.ai',
    EnterAsSend: true,
    MicInput: true,
    SpeechOutput: true,
    SpeechOutputAlways: false,
    SpeechRate: 1,
    SpeechPitch: 1,
};

We assign these default settings as the default constructor state of our component so that we populate them beforehand.

2. On changing the values of the various settings we update the state through various function handlers, which are as follows:

  • handleSelectChange – handles theme changes
  • handleEnterAsSend – handles the option to send a message on enter
  • handleMicInput – handles enabling mic input
  • handleSpeechOutput – handles whether speech output should be enabled or not
  • handleSpeechOutputAlways – handles whether the user wants to listen to speech after every input
  • handleLanguage – handles the language settings related to speech
  • handleTextToSpeech – handles the text-to-speech settings, including speech rate and pitch

  3. Once the values are changed, we make use of the universal-cookie package and save the settings in the user's browser. This is done in the handleSubmit function in the Settings.react.js file.

 import Cookies from 'universal-cookie';
     const cookies = new Cookies(); // Create cookie object.
     // set all values
     let vals = {
            theme: newTheme,
            server: newDefaultServer,
            enterAsSend: newEnterAsSend,
            micInput: newMicInput,
            speechOutput: newSpeechOutput,
            speechOutputAlways: newSpeechOutputAlways,
            rate: newSpeechRate,
            pitch: newSpeechPitch,
      }

    let settings = Object.assign({}, vals);
    settings.LocalStorage = true;
    // Store in cookies for anonymous user
    cookies.set('settings',settings);   

  4. Once the values are set in the browser, we load the saved settings every time the user opens SUSI Web Chat by adding the following checks in the file API.actions.js. If the person is not logged in, we get the values from the cookies using the get function and initialise the settings for that user.
if(cookies.get('loggedIn')===null||
    cookies.get('loggedIn')===undefined){
    // check if not logged in and get the value from the set cookie
    let settings = cookies.get('settings');
    if(settings!==undefined){
      // Check if the settings are set in the cookie
      SettingsActions.initialiseSettings(settings);
    }
  }
else{
// call the AJAX for getting Settings for loggedIn users
}

Resources

Advantage of Open Event Format over xCal, pentabarf/frab XML and iCal

Event apps like Giggity and Giraffe use event formats like xCal, pentabarf/frab XML, iCal, etc. In this blog, I present some of the advantages of using FOSSASIA's Open Event data format over these other formats. I added support for the Open Event format in these two apps, so I describe the advantages and improvements that were made with respect to them.

  • The main problem faced in the Giggity app is that data like social links, microlocations, the link to the logo file, etc. cannot be fetched from a single file, so a separate raw file has to be added to provide this data. The Open Event format provides all this information from a single URL received from the server, so there is no need for any separate file.
  • Here is the pentabarf format data for the FOSSASIA 2016 conference, excluding sessions. Although it provides the basic information, it leaves out the logo URL, the latitude and longitude of microlocations (rooms), and links to social media and the website, while the Open Event format provides all these missing details along with some extra information like language, comments, etc. See the FOSSASIA 2016 Open Event format sample.
<conference>
<title>FOSSASIA 2016</title>
<subtitle/>
<venue>Science Centre Road</venue>
<city>Singapore</city>
<start>2016-03-18</start>
<end>2016-03-20</end>
<days>3</days>
<day_change>09:00:00</day_change>
<timeslot_duration>00:00:00</timeslot_duration>
</conference>
  • Parsing the received file gets very complicated in the case of iCal, xCal, etc., as tags need to be matched to get the data. However, there are various libraries available for parsing JSON data, so we can simply create an array list from the received data and send it to the adapter (see the parsing sketch after this list). See this example for working code; you can also look at the parser for iCal to compare the complexity of the code.
  • The other, more common problem is the structure of the received formats: it sometimes becomes complicated to define the sub-parts of a single element. For example, for the location we define latitude and longitude separately, while in the iCal format they are just separated by a comma, as the comparison below shows.

iCal

GEO:1.333194;103.736132

JSON

{
           "id": 1,
           "name": "Stage 1",
           "floor": 0,
           "latitude": 37.425420,
           "longitude": -122.080291,
           "room": null
}

And the information provided is more detailed.

  • Open Event format is well documented and it makes it easier for other developers to work on it. Find the documentation here.
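
As a rough illustration of the parsing point above, a microlocation entry from the Open Event JSON can be turned into a model object with org.json in a few lines; jsonString stands for one entry of the microlocations array, and the Microlocation class with its setters is a hypothetical placeholder.

// Sketch: parse one microlocation object from the Open Event JSON (org.json).
JSONObject room = new JSONObject(jsonString);        // one entry of the microlocations array

Microlocation location = new Microlocation();        // hypothetical model class
location.setId(room.getInt("id"));
location.setName(room.getString("name"));
location.setLatitude(room.getDouble("latitude"));    // latitude and longitude arrive as
location.setLongitude(room.getDouble("longitude"));  // separate, clearly named fields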

References:

 

Scheduling Jobs to Check Expired Access Token in Yaydoc

In Yaydoc, we use the user access token for various tasks like pushing documentation, registering webhooks and checking their status. The user access token is very important to us, so I decided to add a cron job which checks whether the user token has expired or not. One problem was that, if we have a large number of users, the cron job would send thousands of requests at a time and could break the app. So I thought of queueing the process, and used the `async` library for queueing the jobs.

const github = require("./github")
const queue = require("./queue")

User = require("../model/user");

exports.checkExpiredToken = function () {
  User.count(function (error, count) {
    if (error) {
      console.log(error);
    } else {
      var page = 0;
      if (count < 10) {
        page = 1;
      } else {
        page = count / 10;
        if (page * 10 < count) {
          page = (count + 10) /10;
        }
      }
      for (var i = 0; i <= page; i++) {
        User.paginateUsers(i, 10,
        function (error, users) {
          if (error) {
            console.log(error);
          } else {
            users.forEach(function(user) {
              queue.addTokenRevokedJob(user);
            })
          }
        })
      }
    }
  })
}

In the above code I’m paginating the list of users in the database and then I’m adding each user to the queue.

var tokenRevokedQueue = async.queue(function (user, done) {
  github.retriveUser(user.token, function (error, userData) {
    if (error) {
      if (user.expired === false) {
        mailer.sendMailOnTokenFailure(user.email);
        User.updateUserById(user.id, {
          expired: true
        }, function(error, data) {
          if (error) {
            console.log(error);
          }
        });
      }
      done();
    } else {
      done();
    }
  })
}, 2);

I made this queue with the help of the async queue method. In the first parameter I pass the worker logic, and in the second parameter I pass the number of jobs that can be executed concurrently. I check whether the user has revoked the token by sending a request to GitHub's user API: if it returns a '200' response the token is valid, otherwise it is invalid. If the user token is invalid, I send an email to the user saying "Your access token is revoked, so sign in once again to continue the service".

Resources: