How to Make Phimpme Android App Crash Free

The Phimpme Android app is now almost ready, with lots of social sharing options. A user can upload images to multiple platforms like Tumblr, Flickr, Imgur, OwnCloud (open source), Nextcloud, Dropbox, Pinterest, etc. Apart from sharing, the Phimpme app also allows the user to click images from its own custom camera with different filters and various editing options. As everything is now almost ready, it is also important to make the app stable and crash free. To make the app stable and compatible with all types of devices, we can write instrumentation test cases. So in this post I will explain how I made the Phimpme Android app crash free by integrating crash reporting into Phimpme using the Firebase Crash Reporting service and Crashlytics.

Using Firebase Crash Reporting service:

Firebase is free of cost and provides various features along with crash reporting. To integrate the Firebase crash service, follow this step-by-step guide.

Step 1:

The first step is to register your app on the Firebase developer console. To register your Android app on Firebase, click here. Add your app name and select your country.

 

Step 2:

Next, click on the Add Firebase to your Android app button and fill in your Android application’s package name and the SHA-1 key. You can generate this key very easily with the help of Android Studio. Type this command in your terminal to generate the SHA-1:

keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android

On successful completion of the command, the SHA-1 key will be displayed on the terminal.

Step 3:

Now add the SHA-1 key and package name in the Firebase console. After that, download the google-services.json file and place it in the app folder of your project.

Step 4:

Add the following dependency and plugin to your Android project's Gradle files. The classpath entry goes into the buildscript dependencies of the project-level build.gradle, and the plugin is applied at the bottom of the app-level build.gradle:

// project-level build.gradle
buildscript {
  dependencies {
    classpath 'com.google.gms:google-services:3.1.0'
  }
}

// bottom of app/build.gradle
apply plugin: 'com.google.gms.google-services'


Step 5:

Once you have completed the above four steps, your app will be visible in the Firebase console and you can add the crash reporting service. Crashes will then show up in your Firebase console.

After this, add the following dependency to your app-level build.gradle. This is very important.

compile 'com.google.firebase:firebase-crash:9.4.0'
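Besides automatic crash reports, non-fatal exceptions can also be reported manually with the FirebaseCrash API that ships with this dependency. The sketch below is a hypothetical example; uploadImage() is a made-up method standing in for any risky operation in the app:

import com.google.firebase.crash.FirebaseCrash;

public class SafeUploader {

    // Wraps a risky call so failures show up as non-fatal issues in the console
    public void uploadSafely() {
        try {
            uploadImage();
        } catch (Exception e) {
            // Attach a log message to the report shown in the Firebase console
            FirebaseCrash.log("Image upload failed");
            // Report the caught exception as a non-fatal issue
            FirebaseCrash.report(e);
        }
    }

    // Hypothetical stand-in for the actual upload logic
    private void uploadImage() throws Exception {
    }
}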

Resources:


Auto Deployment of SUSI Server using Kubernetes on Google Cloud Platform

Recently, we auto deployed SUSI Server on Google Cloud Platform using Kubernetes and Docker images after each commit in the GitHub repo, with the help of Travis continuous integration. Basically, whenever a new commit is added to the repo, during the Travis build we build the docker image of the server and then use it to deploy the server on Google Cloud Platform. We use Kubernetes for deployment since it makes it very easy to scale up the project when traffic on the server increases, and Docker because docker images are easy to build and can then be used to update the deployment. The schematic below makes the procedure clearer.

Prerequisites

  1. You must be signed in to your Google Cloud Console, have enabled billing and have credits left in your account.
  2. You must have a Docker account and a repo in it. If you don’t have one, make it now.
  3. You should have enabled Travis on your repo and have a .travis.yml file in your repo.
  4. You must already have a project in Google Cloud Console. Make a new one if you don’t have one.

Pre Deployment Steps

You will need to do some work on Google Cloud Platform before actually starting the auto deployment process:

  1. Creating a new cluster.
  2. Adding and formatting a persistent disk.
  3. Adding a Persistent Volume Claim (PVC).
  4. Labeling a node as primary.

Check out this documentation on how to do that. It may help.

Implementation

Img src: https://cloud.google.com/solutions/continuous-delivery-with-travis-ci

1. The first step is simply to add the lines below to the .travis.yml file and create an empty deploy.sh file at the path mentioned.

after_success:
- bash kubernetes/travis/deploy.sh

Now we’ll be moving line by line and adding commands in the empty deploy.sh file that we created in the previous step.

2. The next step is to remove obsolete Google Cloud files and install the Google Cloud SDK and the kubectl command. Use the following lines to do that.

echo ">>> Removing obsolete gcoud files"
sudo rm -f /usr/bin/git-credential-gcloud.sh
sudo rm -f /usr/bin/bq
sudo rm -f /usr/bin/gsutil
sudo rm -f /usr/bin/gcloud

echo ">>> Installing new files"
curl https://sdk.cloud.google.com | bash;
source ~/.bashrc
gcloud components install kubectl

3. In this step you will need to download a JSON file which contains your Google Cloud credentials, then copy that file to your repo and encrypt it using Travis encryption keys. Follow this video to see how to do that: https://youtu.be/7U4jjRw_AJk

4. Now that you have added your encrypted credentials file to your repo, you need to use those credentials to log into your Google Cloud account. Use the lines below to do that.

echo ">>> Decrypting credentials and authenticating gcloud account"
# Decrypt the credentials we added to the repo using the key we added with the Travis command line tool
openssl aes-256-cbc -K $encrypted_YOUR_key -iv $encrypted_YOUR_iv -in ./kubernetes/travis/Credentials.json.enc -out Credentials.json -d
gcloud auth activate-service-account --key-file Credentials.json
export GOOGLE_APPLICATION_CREDENTIALS=$(pwd)/Credentials.json
#add gcloud project id
gcloud config set project YOUR_PROJECT_ID
gcloud container clusters get-credentials YOUR_CONTAINER

The above lines of code first decrypt your credentials, then log into your account and set the project you created earlier.

5. Now that we have logged into Google Cloud, we need to build a docker image from a Dockerfile. Follow the official Docker docs to see how to write a Dockerfile. Here is an example of a Dockerfile. You will need to add “$DOCKER_USERNAME” and “$DOCKER_PASSWORD” as environment variables in the Travis settings of your repo.

echo ">>> Building Docker image"
cd kubernetes/images

docker build --no-cache -t YOUR_DOCKER_USERNAME/YOUR_DOCKER_REPO:$TRAVIS_COMMIT .
docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"
docker tag YOUR_DOCKER_USERNAME/YOUR_DOCKER_REPO:$TRAVIS_COMMIT YOUR_DOCKER_USERNAME/YOUR_DOCKER_REPO:latest

6. Now, just push the docker image created in the previous step and update the deployment.

echo ">>> Pushing docker image"
docker push YOUR_DOCKER_USERNAME/YOUR_DOCKER_REPO

echo ">>> Updating deployment"
kubectl set image deployment/YOUR_CONTAINER_NAME --namespace=default YOUR_CONTAINER_NAME=YOUR_DOCKER_USERNAME/YOUR_DOCKER_REPO:$TRAVIS_COMMIT

Summary

This blog was about how we configured the Travis build and auto deployed SUSI Server on Google Cloud Platform using Kubernetes and Docker. You can do the same with your server too, or if you are looking to contribute to SUSI Server, this may help you a little in understanding the code of the repo.

Resources

  1. The documentation for setting up your project on Google Cloud Console before starting auto deployment https://github.com/fossasia/susi_server/blob/afb00cd9c421876f5d640ce87941e502aa52e004/docs/installation/installation_kubernetes_gcloud.md
  2. The documentation for encrypting your google cloud credentials and adding them to your repo https://cloud.google.com/solutions/continuous-delivery-with-travis-ci
  3. Docs for Docker to get you started with Docker https://docs.docker.com/
  4. Travis Documentation on how to secure your credentials https://docs.travis-ci.com/user/encryption-keys/
  5. Travis Documentation on how to add environment variables in your repo settings https://docs.travis-ci.com/user/environment-variables/

SPI Communication in PSLab

PSLab supports communication using the Serial Peripheral Interface (SPI) protocol. The Desktop App as well as the Android App have the framework set-up to use this feature. SPI protocol is mainly used by a few sensors which can be connected to PSLab. For supporting SPI communication, the PSLab Communication library has a dedicated class defined for SPI. A brief overview of how SPI communication works and its advantages & limitations can be found here.

The class dedicated to SPI communication has numerous methods defined in it. The methods required for a particular SPI sensor may differ slightly; however, in general most sensors utilise a certain common set of methods. The commonly used methods are listed below with their functions.

The setParameters method sets the SPI parameters like Clock Polarity (CKP/CPOL), Clock Edge (CKE/CPHA), the sample phase (SMP) and other parameters like the primary and secondary prescalers, which are specific to the device used.

Primary Prescaler (0,1,2,3) for 64MHz clock->(64:1,16:1,4:1,1:1)

Secondary prescaler (0,1,..7)->(8:1,7:1,..1:1)

The values of CKP/CPOL and CKE/CPHA need to be set using the following convention, according to our requirements.

  • At CPOL=0 the base value of the clock is zero, i.e. the idle state is 0 and active state is 1.
    • For CPHA=0, data is captured on the clock’s rising edge (low→high transition) and data is changed at the falling edge (high→low transition).
    • For CPHA=1, data is captured on the clock’s falling edge (high→low transition) and data is changed at the rising edge (low→high transition).
  • At CPOL=1 the base value of the clock is one (inversion of CPOL=0), i.e. the idle state is 1 and active state is 0.
    • For CPHA=0, data is captured on the clock’s falling edge (high→low transition) and data is changed at the rising edge (low→high transition).
    • For CPHA=1, data is captured on the clock’s rising edge (low→high transition) and data is changed at the falling edge (high→low transition).

public void setParameters(int primaryPreScalar, int secondaryPreScalar, Integer CKE, Integer CKP, Integer SMP) throws IOException {
        if (CKE != null) this.CKE = CKE;
        if (CKP != null) this.CKP = CKP;
        if (SMP != null) this.SMP = SMP;

        packetHandler.sendByte(commandsProto.SPI_HEADER);
        packetHandler.sendByte(commandsProto.SET_SPI_PARAMETERS);
        packetHandler.sendByte(secondaryPreScalar | (primaryPreScalar << 3) | (this.CKE << 5) | (this.CKP << 6) | (this.SMP << 7));
        packetHandler.getAcknowledgement();
    }
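For example, to configure SPI mode 0 (CKP = 0, CKE = 1) with a 16:1 primary prescaler and a 1:1 secondary prescaler, a call could look like the one below. The values are chosen only for illustration, and spi stands for an instance of this SPI class:

// primary prescaler 1 -> 16:1, secondary prescaler 7 -> 1:1, CKE = 1, CKP = 0, SMP = 1
spi.setParameters(1, 7, 1, 0, 1);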

 

The start method is responsible for sending the instruction to initiate the SPI communication and it takes the channel which will be used for communication as input.

public void start(int channel) throws IOException {
        packetHandler.sendByte(commandsProto.SPI_HEADER);
        packetHandler.sendByte(commandsProto.START_SPI);
        packetHandler.sendByte(channel);
    }

 

The setCS method is responsible for selecting the slave with which the SPI communication has to be done. This feature of SPI communication is known as Chip Select (CS) or Slave Select (SS). A master can use multiple Chip/Slave Select pins for communication whereas a slave utilises just one pin as SPI is based on single master multiple slaves principle. The capacity of PSLab is limited to two slave devices at a time.

public void setCS(String channel, int state) throws IOException {
        String[] chipSelect = new String[]{"CS1", "CS2"};
        channel = channel.toUpperCase();
        if (Arrays.asList(chipSelect).contains(channel)) {
            int csNum = Arrays.asList(chipSelect).indexOf(channel) + 9;
            packetHandler.sendByte(commandsProto.SPI_HEADER);
            if (state == 1)
                packetHandler.sendByte(commandsProto.STOP_SPI);
            else
                packetHandler.sendByte(commandsProto.START_SPI);
            packetHandler.sendByte(csNum);
        } else {
            Log.d(TAG, "Channel does not exist");
        }
    }

 

The stop method is responsible for sending the instruction to stop the communication with the slave.

public void stop(int channel) throws IOException {
        packetHandler.sendByte(commandsProto.SPI_HEADER);
        packetHandler.sendByte(commandsProto.STOP_SPI);
        packetHandler.sendByte(channel);
    }

 

PSLab SPI class has methods defined for sending either 8-bit or 16-bit data over SPI which are further classified on whether they request the acknowledgement byte (it helps to know whether the communication was successful or unsuccessful) or not.

The methods are named send8, send16, send8_burst and send16_burst. The burst methods do not request any acknowledgement value and as a result work faster than the normal methods.

public int send16(int value) throws IOException {
        packetHandler.sendByte(commandsProto.SPI_HEADER);
        packetHandler.sendByte(commandsProto.SEND_SPI16);
        packetHandler.sendInt(value);
        int retValue = packetHandler.getInt();
        packetHandler.getAcknowledgement();
        return retValue;
    }
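For comparison, the 8-bit variant follows the same pattern but transfers a single byte instead of a 16-bit integer. The following is only a sketch: the constant SEND_SPI8 and the getByte helper on packetHandler are assumed to mirror their 16-bit counterparts shown above.

public int send8(int value) throws IOException {
        packetHandler.sendByte(commandsProto.SPI_HEADER);
        // SEND_SPI8 is assumed to be the 8-bit counterpart of SEND_SPI16
        packetHandler.sendByte(commandsProto.SEND_SPI8);
        // a single byte is written instead of a 16-bit integer
        packetHandler.sendByte(value);
        int retValue = packetHandler.getByte();
        packetHandler.getAcknowledgement();
        return retValue;
    }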

 

Resources:


Implementing Search Bar Using GitHub API In Yaydoc CI

In Yaydoc, documentation is generated by typing the URL of a git repository into the input box, from where a user can generate documentation for any public repository; they can see a preview and, if they have access, push the documentation to GitHub Pages in one click. In Yaydoc CI, however, a user can register a repository only if they have access to that specific repository, so I decided to show a list of repositories the user can select from. But this has a problem: GitHub won’t give us all of a user’s repository data in one API hit. So I made a search bar in which the user can search for a repository and register it with Yaydoc CI.

var search = function () {
  var username = $("#orgs").val().split(":")[1];
  const searchBarInput = $("#search_bar");
  const searchResultDiv = $("#search_result");

  searchResultDiv.empty();
  if (searchBarInput.val() === "") {
    searchResultDiv.append('<p class="text-center">Please enter the repository name<p>');
    return;
  }

  searchResultDiv.append('<p class="text-center">Fetching data<p>');

  $.get(`https://api.github.com/search/repositories?q=user:${username}+fork:true+${searchBarInput.val()}`, function (result) {
    searchResultDiv.empty();
    if (result.total_count === 0) {
      searchResultDiv.append(`<p class="text-center">No results found<p>`);
      styles.disableButton("btnRegister");
    } else {
      styles.disableButton("btnRegister");
      var select = '<label class="control-label" for="repositories">Repositories:</label>';
      select += '<select class="form-control" id="repositories" name="repository" required>';
      select += `<option value="">Please select</option>`;
      result.items.forEach(function (x){
        select += `<option value="${x.full_name}">${x.full_name}</option>`;
      });
      select += '</select>';
      searchResultDiv.append(select);
    }
  });
};

$(function() {
  $("#search").click(function () {
    search();
  });
});

In the above snippet I have defined the search function, which is executed when the user clicks the search button. The search function gets the search query from the input box; if the query is empty it shows the message “Please enter the repository name”, and if it is not empty it hits the GitHub API to fetch the user’s repositories. If GitHub returns an empty array it shows “No results found”. While the data is being fetched, “Fetching data” is shown.

$('#search_bar').on('keyup keypress', function(e) {
    var keyCode = e.keyCode || e.which;
    if (keyCode === 13) {
      search();
    }
  });

  $('#ci_register').on('keyup keypress', function(e) {
    var keyCode = e.keyCode || e.which;
    if (keyCode === 13) {
      e.preventDefault();
      return false;
    }
  });

We still faced a problem: on pressing the enter key, the form was being submitted automatically. So I registered event listeners in which I check whether the key code is 13 or not. Key code 13 represents the enter key, so on the search bar the enter key triggers the search, while on the registration form the enter key press is prevented from submitting the form. You can see the working of the search bar in Yaydoc CI.

Resources:


Binding Images Dynamically in Open Event Orga App

In Open Event Orga App (Github Repo), we used Picasso to load images from URLs and display in ImageViews. Picasso is easy to use, lightweight, and extremely configurable but there has been no new release of the library since 2015. We were using Picasso in binding adapters in order to dynamically load images using POJO properties in the layout XML itself using Android Data Binding. But this configuration was a little buggy.

The first time the app was opened, Picasso fetched the image but it was not applied to the ImageView. When the device was rotated or the activity was resumed, it loaded just fine. This was a critical issue and we tried many things to fix it, but none of them quite fit our needs. We considered moving to other image loading libraries like Glide, but they were too heavy in size and functionality for our needs. The last resort was to update the library to the develop version using Sonatype’s snapshots repository. The Picasso v2.6.0-SNAPSHOT is very stable but not released to the Maven Central repository, and a newer develop version v3.0.0-SNAPSHOT has been launched too. So we figured we should use that. This blog will outline the steps to include the develop version of Picasso, configuring it for our needs and making it work with Android Data Binding.

Setting up Dependencies

Firstly, we need to include the sonatype repository in the repositories block of our app/build.gradle

repositories {
   ...
   maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' }
}

 

Then we need to replace the Picasso dependency entry to this:

compile 'com.squareup.picasso:picasso:3.0.0-SNAPSHOT'

 

Note that if you used Jake Wharton’s OkHttp3 Downloader for Picasso, you won’t need it now, so remove it from the dependency block.

And you need to use this to import the downloader

import com.squareup.picasso.OkHttp3Downloader;

 

Next, we set up our Picasso DI this way

Picasso providesPicasso(Context context, OkHttpClient client) {
   Picasso picasso = new Picasso.Builder(context)
       .downloader(new OkHttp3Downloader(client))
       .build();
   picasso.setLoggingEnabled(true);
   return picasso;
}

 

Set the singleton instance in our application:

Picasso.setSingletonInstance(picasso);

 

And we are ready to use it.

Creating Adapters

Circular Image Adapter

We show event logos as circular images, so we needed to create a binding adapter for that:

@BindingAdapter("circleImageUrl")
public static void bindCircularImage(ImageView imageView, String url) {
   if(TextUtils.isEmpty(url)) {
       imageView.setImageResource(R.drawable.ic_photo_shutter);
       return;
   }

   Picasso.with()
       .load(Uri.parse(url))
       .error(R.drawable.ic_photo_shutter)
       .placeholder(R.drawable.ic_photo_shutter)
       .transform(new CircleTransform())
       .tag(MainActivity.class)
       .into(imageView);
}

 

If the URL is empty, we just show the default photo; otherwise we load the image into the view using a standard CircleTransform.
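For reference, a CircleTransform is typically a Picasso Transformation that clips the source bitmap into a circle. The sketch below is a common minimal implementation and not necessarily the exact class used in the app:

import android.graphics.Bitmap;
import android.graphics.BitmapShader;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Shader;

import com.squareup.picasso.Transformation;

public class CircleTransform implements Transformation {

    @Override
    public Bitmap transform(Bitmap source) {
        int size = Math.min(source.getWidth(), source.getHeight());

        // Crop the source to a centered square
        int x = (source.getWidth() - size) / 2;
        int y = (source.getHeight() - size) / 2;
        Bitmap squared = Bitmap.createBitmap(source, x, y, size, size);
        Bitmap.Config config = source.getConfig() != null ? source.getConfig() : Bitmap.Config.ARGB_8888;
        if (squared != source) {
            source.recycle();
        }

        // Draw the squared bitmap through a circular shader
        Bitmap result = Bitmap.createBitmap(size, size, config);
        Canvas canvas = new Canvas(result);
        Paint paint = new Paint();
        paint.setShader(new BitmapShader(squared, Shader.TileMode.CLAMP, Shader.TileMode.CLAMP));
        paint.setAntiAlias(true);
        float radius = size / 2f;
        canvas.drawCircle(radius, radius, radius, paint);

        squared.recycle();
        return result;
    }

    @Override
    public String key() {
        return "circle";
    }
}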

Note that there is no context argument in the with method. This was implemented in Picasso recently where they removed the need for context for loading images. Now, they use a Dummy ContentProvider to get application context, which is inspired by how Firebase does it.

Now, we can just normally use this binding in layout to load the event thumbnail like this

<ImageView
   android:layout_width="@dimen/image_small"
   android:layout_height="@dimen/image_small"
   android:contentDescription="@string/event_thumbnail"
   app:circleImageUrl="@{event.thumbnailImageUrl}" />

 

This gives us a layout like this:

Next we need to load the header image with a default image.

Default Image Adapter

For this, we write a very simple adapter without CircleTransform

@BindingAdapter(value = { "imageUrl", "placeholder" }, requireAll = false)
public static void bindDefaultImage(ImageView imageView, String url, Drawable drawable) {
   if(TextUtils.isEmpty(url)) {
       if (drawable != null)
           imageView.setImageDrawable(drawable);
       return;
   }

   RequestCreator requestCreator = Picasso.with().load(Uri.parse(url));

   if (drawable != null) {
       requestCreator
           .placeholder(drawable)
           .error(drawable);
   }

   requestCreator
       .tag(MainActivity.class)
       .into(imageView);
}

 

As imageUrl or placeholder can be null, we check for both and set the correct images if they are not. We use this in our header layout with both the URL and the default image we need to show:

<ImageView
   android:scaleType="centerCrop"
   app:imageUrl="@{ event.largeImageUrl }"
   app:placeholder="@{ @drawable/header }"
   android:contentDescription="@string/event_background" />

 

And this gives us a nice dynamic header like this:

This wraps up the blog on Picasso’s latest develop version and Binding Adapters. If you want to know more about Picasso and Android Data Binding, check these links:


Implementing Tree View in PSLab Android App

When a task expands over sub tasks, it can be easily represented by a stem and leaf diagram. In the context of Android it can be implemented using an expandable list view. But in a scenario where the sub tasks have mini tasks appended to them, it is hard to implement this using the general two-level expandable list views. The PSLab Android application supports many experiments which can be performed using the PSLab device. These experiments are divided into major sections and each experiment is listed under them.

The best way to implement this functionality in the Android application is a multi-layer tree view implementation. In this context, three layers are enough.


This was implemented with the help of a library called AndroidTreeView. This blog will outline how to modify and implement it in the PSLab Android application.

Basic Idea

Tree view implementation simply follows the data structure “Tree” used in algorithms. Every tree has a root where it starts and from the root there will be branches which are connected using edges. Every edge will have a parent and child. To reach a child, one has to traverse through only one route.

Setting Up Dependencies

Implementing tree view begins with setting up dependencies in the gradle file in the project.

compile 'com.github.bmelnychuk:atv:1.2.+'

Creating UI for tree view

The speciality of this implementation is that it can be loaded into any kind of layout such as a LinearLayout, RelativeLayout, FrameLayout etc.

final TreeNode Root = TreeNode.root();
Root.addChildren(
       // Add child nodes here
);
// Set up the tree view
AndroidTreeView experimentsListTree = new AndroidTreeView(getActivity(), Root);
experimentsListTree.setDefaultAnimation(true);
[LinearLayout/RelativeLayout].addView(experimentsListTree.getView());

Creating a node holder

Trees are made of a collection of tree nodes. A holder for a tree node can be created using an object which extends the BaseNodeViewHolder class provided by the library. BaseNodeViewHolder requires a holder class, which is generally static so that it can be accessed without creating an instance, and which nests the TextViews, ImageViews and Buttons of the node layout.

Once the holder extends the BaseNodeViewHolder, it should override two methods as follows;

@Override
public View createNodeView(final TreeNode node, ClassContainingNodeData header) {

}

@Override
public void toggle(boolean active) {

}

createNodeView() inflates the view, and the toggle() method can be used to react to expand/collapse clicks on the tree node in the UI.

The following code snippet shows how to create an object which extends the above mentioned class with the overridden methods.

public class ExperimentHeaderHolder extends TreeNode.BaseNodeViewHolder<ExperimentHeaderHolder.ExperimentHeader> {

    private ImageView arrow;

    public ExperimentHeaderHolder(Context context) {
            super(context);
    }

    @Override
    public View createNodeView(final TreeNode node, ExperimentHeader header) {

            final LayoutInflater inflater = LayoutInflater.from(context);
            final View view = inflater.inflate(R.layout.header_holder, null, false);

            TextView title = (TextView) view.findViewById(R.id.title);
            title.setText(header.title);

            arrow = (ImageView) view.findViewById(R.id.experiment_arrow);
        
            return view;
    }

    @Override
    public void toggle(boolean active) {
            arrow.setImageResource(active ? arrow_drop_up : arrow_drop_down);
    }

    public static class ExperimentHeader {

            public String title;

            public ExperimentHeader(String title) {
               this.title = title;
            }
    }
}

Creating a TreeNode

Once the holder is complete, we can move on to creating an actual tree node. The TreeNode constructor takes the data object (here ExperimentHeader), and setViewHolder() attaches an object which extends the BaseNodeViewHolder class, as mentioned earlier, which is used to inflate the view in the tree layout. The view holder can be a different class for different node types; the importance of this separation can be explained as follows;

TreeNode treeNode = new TreeNode(new ExperimentHeaderHolder.ExperimentHeader(“Title”))
       .setViewHolder(new ExperimentHeaderHolder(context));

In the Saved Experiments section of the PSLab Android application, not all three levels should implement the toggle behavior: when a user clicks on an experiment (a last-level item), they don’t expect the icon to change the way the header arrows point up and down on click. In this case we can reuse the data class which holds the title attribute, while creating a holder which does not override the toggle visuals, so that icon toggling is ignored at the last level of the tree view. This is illustrated in the code snippet below, and a sketch of such a holder follows it;

new TreeNode(new ExperimentHeaderHolder.ExperimentHeader(“Title”))
       .setViewHolder(new IndividualExperimentHolder(context));
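A minimal sketch of such a last-level holder is given below; the layout file and view ids are assumed for illustration, and toggle() is intentionally left empty so that no icon changes on click:

public class IndividualExperimentHolder extends TreeNode.BaseNodeViewHolder<ExperimentHeaderHolder.ExperimentHeader> {

    public IndividualExperimentHolder(Context context) {
            super(context);
    }

    @Override
    public View createNodeView(final TreeNode node, ExperimentHeaderHolder.ExperimentHeader header) {
            final LayoutInflater inflater = LayoutInflater.from(context);
            // R.layout.experiment_holder is an assumed layout with a single title TextView
            final View view = inflater.inflate(R.layout.experiment_holder, null, false);

            TextView title = (TextView) view.findViewById(R.id.title);
            title.setText(header.title);

            return view;
    }

    @Override
    public void toggle(boolean active) {
            // Intentionally empty: last-level items do not toggle any icon
    }
}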

Creating parent nodes and finally the Root node

The final part of the implementation is to create parent nodes to group similar experiments together. The TreeNode object supports the methods addChild() and addChildren(). The addChild() method allows adding one tree node to the specific tree node and the addChildren() method allows adding many tree nodes at the same time. The following code snippet illustrates how to add many tree nodes to a node and make it a parent node.

treeDiodeExperiments.addChildren(treeZener, treeDiode, treeDiodeClamp, treeDiodeClip, treeHalfRectifier, treeFullWave);

Setting a click listener

The click listener is a very important part of the implementation. Each tree node can be attached with a click listener using the interface provided by the library as follows;

treeNode.setClickListener(new TreeNode.TreeNodeClickListener() {
   @Override
   public void onClick(TreeNode node, Object value) {

   }
});

The value object is the data object attached to the holder and its attributes can be retrieved by casting it to the specific class:

String title = ((ExperimentHeaderHolder.ExperimentHeader) value).title;

Resources:


Implementation of Colour Picker in Phimpme Android Application

In the Phimpme Android project, we have used a color picker palette which has various uses in the project, such as changing the theme color of the application and choosing the color of the text while editing an image.

Making XML Layout to choose the specific color from the color palette

The main aim was to provide a line of colors that the user can simply slide through. A single line should contain all the main colors. I took the help of an open source project (the LineColorPicker library) to arrange the layout of the colors. We will display the color palette in a CardView.

<RelativeLayout
   android:id="@+id/container_edit_text"
   android:layout_width="match_parent"
   android:layout_height="match_parent"
   android:orientation="vertical"
   android:padding="@dimen/big_spacing">
   <uz.shift.colorpicker.LineColorPicker
       xmlns:app="http://schemas.android.com/apk/res-auto"
       android:id="@+id/color_picker_accent"
       android:layout_width="match_parent"
       android:layout_height="60dp"
       app:orientation="horizontal"
       app:selectedColorIndex="0" />
</RelativeLayout>

Inflating the list with the colors

We will inflate the list with all the basic colors such as Red, Purple, Blue, Light Blue, Green, Yellow, Orange, Deep Orange, Brown and Blue Gray. The getAccentColors() function returns these colors, which are defined as hex codes in the colors.xml resource file. A hex color code is a six-character combination of numbers and letters; for example, the color code for black is #000000 and for white #FFFFFF.

public static int[] getAccentColors(Context context){
   return new int[]{
           ContextCompat.getColor(context, R.color.md_red_500),
           ContextCompat.getColor(context, R.color.md_purple_500),
           ContextCompat.getColor(context, R.color.md_deep_purple_500),
           ContextCompat.getColor(context, R.color.md_blue_500),
           ContextCompat.getColor(context, R.color.md_light_blue_500),
           ContextCompat.getColor(context, R.color.md_cyan_500),
           ContextCompat.getColor(context, R.color.md_teal_500),
           ContextCompat.getColor(context, R.color.md_green_500),
           ContextCompat.getColor(context, R.color.md_yellow_500),
           ContextCompat.getColor(context, R.color.md_orange_500),
           ContextCompat.getColor(context, R.color.md_deep_orange_500),
           ContextCompat.getColor(context, R.color.md_brown_500),
           ContextCompat.getColor(context, R.color.md_blue_grey_500),
   };
}

Implementation of ColorPicker to change the Text color in the Edit Image Activity

The color palette is implemented in the following steps:

  1. First, a dialog box is made. This dialog will help to display the color palette.
  2. Second, we inflate the dialog box with the color_picker_accent XML file we created in the resource folder.
  3. Third, we set the dialog box up with the line of colors.
  4. Fourth, we get the selected color code. To get the selected color code we use the LineColorPicker class; the color_picker_accent view returns the code of the selected color.

final AlertDialog.Builder dialogBuilder = new AlertDialog.Builder(getActivity());
final View dialogLayout = getActivity().getLayoutInflater().inflate(R.layout.color_piker_accent, null);
final LineColorPicker colorPicker = (LineColorPicker) dialogLayout.findViewById(R.id.color_picker_accent);
final TextView dialogTitle = (TextView) dialogLayout.findViewById(R.id.cp_accent_title);
dialogTitle.setText(R.string.text_color_title);
colorPicker.setColors(ColorPalette.getAccentColors(activity.getApplicationContext()));
changeTextColor(WHITE);

When a user selects a particular color, they can either choose OK or CANCEL.

When the user selects OK

We implemented an onClick event to change the color of the text using the function changeTextColor(), which accepts a color code as its parameter.

dialogBuilder.setPositiveButton(getString(R.string.ok_action).toUpperCase(), new DialogInterface.OnClickListener() {
   public void onClick(DialogInterface dialog, int which) {
       changeTextColor(colorPicker.getColor());
   }
});

When the user selects Cancel

When the user selects Cancel, we set the text color back to the default, which is white.

dialogBuilder.setNeutralButton(getString(R.string.cancel).toUpperCase(), new DialogInterface.OnClickListener() {
   @Override
   public void onClick(DialogInterface dialog, int which) {
       changeTextColor(WHITE);
       dialog.cancel();
   }
});
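Finally, the inflated layout has to be attached to the dialog and the dialog displayed. A short sketch of the remaining calls, assuming the dialogBuilder and dialogLayout created above:

// attach the color picker layout and display the dialog
dialogBuilder.setView(dialogLayout);
dialogBuilder.show();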

changeTextColor function

It accepts the color code and uses it to change the text color with the built-in functions setBackgroundColor() and setTextColor().

private void changeTextColor(int newColor) {
   this.mTextColor = newColor;
   mTextColorSelector.setBackgroundColor(mTextColor);
   mTextStickerView.setTextColor(mTextColor);
}

Conclusion

In this way, the color picker provides a rich user experience. It adds vibrant colors to the application and gives a quick way to choose a color from a variety of options.

Github

https://github.com/fossasia/phimpme-android

Resources


Caching Elasticsearch Aggregation Results in loklak Server

To provide aggregated data for various classifiers, loklak uses Elasticsearch aggregations. Aggregated data speaks a lot more than a few instances from it can say. But performing aggregations on each request can be very resource consuming. So we needed to come up with a way to reduce this load.

In this post, I will be discussing how I came up with a caching model for the aggregated data from the Elasticsearch index.

Fields to Consider while Caching

At the classifier endpoint, aggregations can be requested based on the following fields –

  • Classifier Name
  • Classifier Classes
  • Countries
  • Start Date
  • End Date

But to cache results, we can ignore cases where we just require a few classes or countries and store aggregations for all of them instead. So the fields that will define the cache to look for will be –

  • Classifier Name
  • Start Date
  • End Date

Type of Cache

The data structure used for caching was Java’s HashMap. It would be used to map a special string key to a special object discussed in a later section.

Key

The key is built using the fields mentioned previously –

private static String getKey(String index, String classifier, String sinceDate, String untilDate) {
    return index + "::::"
        + classifier + "::::"
        + (sinceDate == null ? "" : sinceDate) + "::::"
        + (untilDate == null ? "" : untilDate);
}

[SOURCE]

In this way, we can handle requests where a user asks for every class there is without running the expensive aggregation job every time. This is because the key for such requests will be the same, as we are not considering country and class for this purpose.

Value

The object used as the value in the HashMap is a wrapper containing the following –

  1. json – It is a JSONObject containing the actual data.
  2. expiry – It is the expiry of the object in milliseconds.

class JSONObjectWrapper {
    private JSONObject json; 
    private long expiry;
    ... 
}

Timeout

The timeout associated with a cache is defined in the configuration file of the project as “classifierservlet.cache.timeout”. It defaults to 5 minutes and is used to set the expiry of a cached JSONObject –

class JSONObjectWrapper {
    ...
    private static long timeout = DAO.getConfig("classifierservlet.cache.timeout", 300000);

    JSONObjectWrapper(JSONObject json) {
        this.json = json;
        this.expiry = System.currentTimeMillis() + timeout;
    }
    ...
}
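The cache-hit check shown later relies on an isExpired method on the wrapper. It simply compares the stored expiry with the current time; a sketch of how such a method can look:

class JSONObjectWrapper {
    ...
    boolean isExpired() {
        // The cached object is stale once the current time passes its expiry
        return System.currentTimeMillis() > expiry;
    }
}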

 

Cache Hit

For searching in the cache, the previously mentioned string is composed from the parameters requested by the user. Checking for a cache hit can be done in the following manner –

String key = getKey(index, classifier, sinceDate, untilDate);
if (cacheMap.keySet().contains(key)) {
    JSONObjectWrapper jw = cacheMap.get(key);
    if (!jw.isExpired()) {
        // Do something with jw
    }
}
// Calculate the aggregations
...

But since jw here would contain all the data, we would need to filter out the classes and countries which are not needed.

Filtering results

For filtering out the parts which do not contain the information requested by the user, we can perform a simple pass and exclude the results that are not needed.

Since the number of fields to filter out, i.e. classes and countries, would not be that high, this process would not be that resource intensive. And at the same time, would save us from requesting heavy aggregation tasks from the user.

Since the data about classes is nested inside the respective country field, we need to perform two level of filtering –

JSONObject retJson = new JSONObject(true);
for (String key : json.keySet()) {
    JSONArray value = filterInnerClasses(json.getJSONArray(key), classes);
    if ("GLOBAL".equals(key) || countries.contains(key)) {
        retJson.put(key, value);
    }
}

Cache Miss

In the case of a cache miss, the helper functions are called from ElasticsearchClient.java to get results. These results are then parsed from HashMap to JSONObject and stored in the cache for future usages.

JSONObject freshCache = getFromElasticsearch(index, classifier, sinceDate, untilDate);
cacheMap.put(key, new JSONObjectWrapper(freshCache));

The getFromElasticsearch method finds all the possible classes and makes a request to the appropriate method in ElasticsearchClient, getting data for all classifiers and all countries.

Conclusion

In this blog post, I discussed the need for caching of aggregations and the way it is achieved in the loklak server. This feature was introduced in pull request loklak/loklak_server#1333 by @singhpratyush (me).

Resources


Implementing Tweet Search feature in Loklak Wok Android

Loklak Wok Android is a peer harvester that posts collected tweets to the Loklak Server. Along with that tweets can be searched using the app. This post describes how search API endpoint and TabLayout is used to implement the tweet searching feature.

Adding Dependencies to the project

This feature uses Retrofit2, Reactive extensions(RxJava2, RxAndroid and Retrofit RxJava adapter) and RetroLambda (for Java lambda support in Android).

In app/build.gradle:

apply plugin: 'com.android.application'
apply plugin: 'me.tatarka.retrolambda'

android {
   ...
   packagingOptions {
       exclude 'META-INF/rxjava.properties'
   }
}

dependencies {
   ...
   compile 'com.google.code.gson:gson:2.8.1'

   compile 'com.squareup.retrofit2:retrofit:2.3.0'
   compile 'com.squareup.retrofit2:converter-gson:2.3.0'
   compile 'com.squareup.retrofit2:adapter-rxjava2:2.3.0'

   compile 'io.reactivex.rxjava2:rxjava:2.0.5'
   compile 'io.reactivex.rxjava2:rxandroid:2.0.1'
}

 

In build.gradle project level:

dependencies {
   classpath 'com.android.tools.build:gradle:2.3.3'
   classpath 'me.tatarka:gradle-retrolambda:3.2.0'
}

 

Implementation

The search API endpoint is defined in LoklakApi interface which would provide the tweet search result.

public interface LoklakApi {

   @GET("api/search.json")
   Observable<Search> getSearchedTweets(
           @Query("q") String query,
           @Query("filter") String filter,
           @Query("count") int count);
}

 

The POJOs (Plain Old Java Objects) for the result of the search API endpoint are generated using jsonschema2pojo; Gson uses these POJOs to convert JSON to Java objects and vice versa.
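For instance, a trimmed-down POJO for the search response could look roughly like the snippet below. The field shown is based on the “statuses” array used later in this post; the real classes are generated from the API’s JSON and contain more fields:

import com.google.gson.annotations.SerializedName;

import java.util.List;

public class Search {

    // Maps the "statuses" array of the search response; other fields omitted
    @SerializedName("statuses")
    private List<Status> statuses;

    public List<Status> getStatuses() {
        return statuses;
    }
}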

The REST client is created by Retrofit2 and is implemented in the RestClient class. The Gson converter and the RxJava adapter for Retrofit are added in the Retrofit builder. The create method is called to generate the API methods (Retrofit implements the LoklakApi interface).

public class RestClient {

   private RestClient() {
   }

   private static void createRestClient() {
       sRetrofit = new Retrofit.Builder()
               .baseUrl(BASE_URL)
               // gson converter
               .addConverterFactory(GsonConverterFactory.create(gson))
               // retrofit adapter for rxjava
               .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
               .build();
   }

   private static Retrofit getRetrofitInstance() {
       if (sRetrofit == null) {
           createRestClient();
       }
       return sRetrofit;
   }

   public static <T> T createApi(Class<T> apiInterface) {
       // create method to generate API methods
       return getRetrofitInstance().create(apiInterface);
   }

}

 

The search API endpoint provides a filter parameter which can be used to filter out tweets containing images and videos. So, the tweets are displayed in three categories, i.e. latest, images and videos.

The tweets of different categories are displayed using a ViewPager. The fragments in the ViewPager are inflated by a class that extends FragmentPagerAdapter. SearchFragmentPagerAdapter extends FragmentPagerAdapter; at least two methods, getItem and getCount, need to be overridden. Going by the names of the methods, getItem provides the ith fragment to the ViewPager, and based on the value returned by getCount the number of tabs is inflated in the TabLayout, a ViewGroup to display the fragments of a ViewPager in an elegant way. For a better UI, the names (here the categories of tweets) are displayed, for which we override the getPageTitle method.

public class SearchFragmentPagerAdapter extends FragmentPagerAdapter {

   private List<Fragment>  mFragmentList = new ArrayList<>();
   private List<String> mFragmentNameList = new ArrayList<>();

   public SearchFragmentPagerAdapter(FragmentManager fm) {
       super(fm);
   }

   @Override
   public Fragment getItem(int position) {
       return mFragmentList.get(position);
   }

   @Override
   public int getCount() {
       return mFragmentList.size();
   }

   @Override
   public CharSequence getPageTitle(int position) {
       return mFragmentNameList.get(position);
   }

   public void addFragment(Fragment fragment, String pageTitle) {
       mFragmentList.add(fragment);
       mFragmentNameList.add(pageTitle);
   }
}

 

For easy understanding an analogy with RecyclerView can be made. The TabLayout here functions as a RecyclerView, ViewPager does the work of LayoutManager and FragmentPagerAdapter is analogous to RecyclerView.Adapter.

Now, the fragments which contain the categorical tweets are inflated in the parent fragment. Firstly, the TabLayout is attached to the ViewPager using setupWithViewPager. Then the fragments and their names are added to the FragmentPagerAdapter using the addFragment method implemented in the SearchFragmentPagerAdapter class above, and finally the created adapter is set as the adapter of the ViewPager in the setupViewPager method.

@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
                        Bundle savedInstanceState) {
   // Inflate the layout for this fragment
   View rootView = inflater.inflate(R.layout.fragment_search, container, false);
   ButterKnife.bind(this, rootView);
   ...
   tabLayout.setupWithViewPager(viewPager);
   setupViewPager(viewPager);

   return rootView;
}

private void setupViewPager(ViewPager viewPager) {
   FragmentManager fragmentManager = getActivity().getSupportFragmentManager();
   SearchFragmentPagerAdapter pagerAdapter = new SearchFragmentPagerAdapter(fragmentManager);
   pagerAdapter.addFragment(SearchCategoryFragment.newInstance("", mQuery), "LATEST");
   pagerAdapter.addFragment(SearchCategoryFragment.newInstance("image", mQuery), "PHOTOS");
   pagerAdapter.addFragment(SearchCategoryFragment.newInstance("video", mQuery), "VIDEOS");
   viewPager.setAdapter(pagerAdapter);
}

 

SearchCategoryFragment instances are the child fragments displayed as tabs in the TabLayout. These child fragments are created using the newInstance method, which takes two parameters: the category of tweets and the tweet search query. The reason a constructor with these parameters is not used is that during an orientation change only the default constructor, i.e. the one with no parameters, is restored by the Android system. So these parameters are stored in a data structure called a Bundle; once the fragment object is created using the default constructor, the arguments present in the bundle are passed to the fragment using the setArguments method. These parameters are retrieved in the onCreate lifecycle callback method of the fragment and are used to fetch the search results.

public static SearchCategoryFragment newInstance(String category, String query) {
   Bundle args = new Bundle();
   // query and category stored in bundle
   args.putString(Constants.TWEET_SEARCH_SUGGESTION_QUERY_KEY, query);
   args.putString(TWEET_SEARCH_CATEGORY_KEY, category);
   // fragment with default constructor created
   SearchCategoryFragment fragment = new SearchCategoryFragment();
   // arguments in bundle are passed to fragment
   fragment.setArguments(args);
   return fragment;
}

@Override
public void onCreate(@Nullable Bundle savedInstanceState) {
   super.onCreate(savedInstanceState);
   Bundle bundle = getArguments();
   if (bundle != null) {
       // arguments retrieved
       mTweetSearchCategory = bundle.getString(TWEET_SEARCH_CATEGORY_KEY);
       mSearchQuery = bundle.getString(Constants.TWEET_SEARCH_SUGGESTION_QUERY_KEY);
   }
}

 

As we have the search query and category, we can now obtain the search result and pass the obtained result – a List of Status objects – to the adapter of the RecyclerView, which shows the tweets beautifully inside a CardView. The adapter and LayoutManager of the RecyclerView are instantiated and set in the onCreateView lifecycle callback method. Finally, the network request is sent by calling the fetchSearchedTweets method to obtain the search results.

@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
                        Bundle savedInstanceState) {
   // Inflate the layout for this fragment
   View view = inflater.inflate(R.layout.fragment_search_category, container, false);
   ButterKnife.bind(this, view);

   mSearchCategoryAdapter = new SearchCategoryAdapter(getActivity(), new ArrayList<>());
   recyclerView.setLayoutManager(new LinearLayoutManager(getActivity()));
   recyclerView.setAdapter(mSearchCategoryAdapter);

   // request sent to obtain search result.
   fetchSearchedTweets();

   return view;
}

 

An implementation of the LoklakApi interface is obtained using the created REST client and then the getSearchedTweets method is invoked, which takes the search query, the category of tweets and the maximum number of results, in that order. If the network request is successful, setSearchResultView is invoked, else setNetworkErrorView.

private void fetchSearchedTweets() {
   LoklakApi loklakApi = RestClient.createApi(LoklakApi.class);
   loklakApi.getSearchedTweets(mSearchQuery, mTweetSearchCategory, 30)
           .subscribeOn(Schedulers.io()) // network request sent in a background thread
           .observeOn(AndroidSchedulers.mainThread())
           .subscribe(this::setSearchResultView, this::setNetworkErrorView);
}

 

setSearchResultView displays the obtained results, if any, in the RecyclerView; otherwise it shows a message that there is no result for the search query.

private void setSearchResultView(Search search) {
   List<Status> statusList = search.getStatuses();
   networkErrorTextView.setVisibility(View.GONE);
   if (statusList.size() == 0) { // request successful but no results
       recyclerView.setVisibility(View.GONE);

       Resources res = getResources();
       String noSearchResultMessage = res.getString(R.string.no_search_match, mSearchQuery); // no result matched message
       // no result message displayed
       noSearchResultFoundTextView.setVisibility(View.VISIBLE);
       noSearchResultFoundTextView.setText(noSearchResultMessage);
   } else { // there are some results, so display them in RecyclerView
       recyclerView.setVisibility(View.VISIBLE);
       mSearchCategoryAdapter.setStatuses(statusList);
   }
}

 

In case of a failed network request, a TextView is displayed asking the user to check the network connections and click on it to retry.

private void setNetworkErrorView(Throwable throwable) {
   Log.e(LOG_TAG, throwable.toString());
   recyclerView.setVisibility(View.GONE);
   networkErrorTextView.setVisibility(View.VISIBLE);
}

 

When the TextView, networkErrorTextView, is clicked, a network request is sent again and the visibility of networkErrorTextView is changed to GONE, as implemented in setOnClickNetworkErrorTextViewListener.

@OnClick(R.id.network_error)
public void setOnClickNetworkErrorTextViewListener() {
   networkErrorTextView.setVisibility(View.GONE);
   fetchSearchedTweets();
}

 


Resources


Adding Description to the Susi AI Skills

Susi skill CMS is an editor to write and edit skills easily. It follows an API-centric approach where the Susi server acts as the API server and a web front-end acts as the client for the API and provides the user interface. A skill is a set of intents. One text file represents one skill; it may contain several intents which all belong together. All the skills are stored in the Susi Skill Data repository and the schema is as follows.

Using this, one can access any skill based on a four-tuple of parameters: model, group, language and skill. To know what a skill is about, we needed to add a !description operator which identifies the text following it as a description of the skill. Let’s check out how to achieve it.

The Susi Skill class provides parser methods for the set of intents, given as text files.
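For illustration, an intent in a skill file could carry a description line like the one below. The query and answer shown here are made up; only the !description line matters for this post:

hi susi
!description:A simple greeting skill
Hello, nice to meet you!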

public static JSONObject readEzDSkill(BufferedReader br) throws JSONException {
    ...
    if (line.startsWith("!") && (thenpos = line.indexOf(':')) > 0) {
        String head = line.substring(1, thenpos).trim().toLowerCase();
        String tail = line.substring(thenpos + 1).trim();
        if (head.equals("description")) {
            description = tail;
        }
    }
    ...
    if (description.length() > 0) intent.put("description", description);
    ...
}

The method readEzDSkill parses the skill txt file: it checks whether a line starts with ‘!description’ (the bang operator followed by description) and, if so, stores the text after the colon in the string variable description. If a description is found in a skill, it is recorded and put into the JSON array of intents.

private final Map<String, Set<String>> skillDescriptions; 
 if (intent.getDescription() !=null) {
  Set<String> descriptions = this.skillDescriptions.get(intent.getSkill());
  if (descriptions == null) {
     descriptions = new LinkedHashSet<>();
     this.skillDescriptions.put(intent.getSkill(), descriptions);
   }
   descriptions.add(intent.getDescription());
}

The SusiMind class processes this JSON and stores the descriptions in a map from skill path to description. This map is used by the DescriptionSkillService to list descriptions for all the skills given a model, group and language. For adding the description servlet we need to inherit the service class from AbstractAPIHandler and implement the APIHandler interface. In Susi Server, an abstract class AbstractAPIHandler, extending HttpServlet and implementing the APIHandler interface, is provided.

 @Override
    public BaseUserRole getMinimalBaseUserRole() { return BaseUserRole.ANONYMOUS; }

    @Override
    public JSONObject getDefaultPermissions(BaseUserRole baseUserRole) {
        return null;
    }

    @Override
    public String getAPIPath() {
        return "/cms/getDescriptionSkill.json";
    }

The getAPIPath() method sets the API endpoint path; it gets appended to the base path, which is http://127.0.0.1:4000/cms/getDescriptionSkill.json on localhost. The getMinimalBaseUserRole method tells the minimum user role required to access this servlet; it can also be ADMIN or USER. In our case it is ANONYMOUS, so a user need not log in to access this endpoint.
Next, we implement the serviceImpl method which gives us the desired response in JSON format.

@Override
    public ServiceResponse serviceImpl(Query call, HttpServletResponse response, Authorization rights, final JsonObjectWithDefault permissions) {
        String model = call.get("model", "");
        String group = call.get("group", "");
        String language = call.get("language", "");
        JSONObject descriptions = new JSONObject(true);
        for (Map.Entry<String, Set<String>> entry : DAO.susi.getSkillDescriptions().entrySet()) {
            String path = entry.getKey();
            if ((model.length() == 0 || path.indexOf("/" + model + "/") > 0)
                    && (group.length() == 0 || path.indexOf("/" + group + "/") > 0)
                    && (language.length() == 0 || path.indexOf("/" + language + "/") > 0)) {
                descriptions.put(path, entry.getValue());
            }
        }
            JSONObject json = new JSONObject(true)
                    .put("model", model)
                    .put("group", group)
                    .put("language", language)
                    .put("descriptions", descriptions);
        return new ServiceResponse(json);
    }

We can get the required parameters through the call.get() method, where the first parameter is the key for which we want the value and the second parameter is the default value. If a skill path matches the requested model, group and language, its descriptions are included in the response. To check the response go to http://api.susi.ai/cms/getDescriptionSkill.json?model=general&group=knowledge&language=en or http://127.0.0.1:4000/cms/getDescriptionSkill.json.

This is how the getDescriptionSkill service works. To add a description to a skill, visit susi_skill_data, the storage place for Susi skills. For more information and the complete code, take a look at the Susi server repository and join the Gitter chat channel for discussions.

Resources
